Wieczorek, Michael
2014-01-01
This digital data release consists of seven data files of soil attributes for the United States and the District of Columbia. The files are derived from the Natural Resources Conservation Service's (NRCS) Soil Survey Geographic database (SSURGO). The data files can be linked to the raster datasets of soil map unit identifiers (MUKEY) available through the NRCS's Gridded Soil Survey Geographic (gSSURGO) database (http://www.nrcs.usda.gov/wps/portal/nrcs/detail/soils/survey/geo/?cid=nrcs142p2_053628). The associated files, named DRAINAGECLASS, HYDRATING, HYDGRP, HYDRICCONDITION, LAYER, TEXT, and WTDEP, contain area- and depth-weighted average values for selected soil characteristics from the SSURGO database for the conterminous United States and the District of Columbia. The SSURGO tables were acquired from the NRCS on March 5, 2014. The soil characteristic in the DRAINAGECLASS table is drainage class (DRNCLASS), which identifies the natural drainage conditions of the soil and refers to the frequency and duration of wet periods. The soil characteristic in the HYDRATING table is hydric rating (HYDRATE), a yes/no field that indicates whether or not a map unit component is classified as a "hydric soil". The soil characteristics in the HYDGRP table are the percentages for each hydrologic group per MUKEY. The soil characteristic in the HYDRICCONDITION table is hydric condition (HYDCON), which describes the natural condition of the soil component. The soil characteristics in the LAYER table are available water capacity (AVG_AWC), bulk density (AVG_BD), saturated hydraulic conductivity (AVG_KSAT), vertical saturated hydraulic conductivity (AVG_KV), soil erodibility factor (AVG_KFACT), porosity (AVG_POR), field capacity (AVG_FC), the soil fraction passing a number 4 sieve (AVG_NO4), the soil fraction passing a number 10 sieve (AVG_NO10), the soil fraction passing a number 200 sieve (AVG_NO200), and organic matter (AVG_OM). The soil characteristics in the TEXT table are
Average Weighted Receiving Time of Weighted Tetrahedron Koch Networks
NASA Astrophysics Data System (ADS)
Dai, Meifeng; Zhang, Danping; Ye, Dandan; Zhang, Cheng; Li, Lei
2015-07-01
We introduce weighted tetrahedron Koch networks with infinite weight factors, which are a generalization of the finite ones. The notion of weighted time is first defined in this paper. The mean weighted first-passage time (MWFPT) and the average weighted receiving time (AWRT) are then defined in terms of weighted time. We study the AWRT under a weight-dependent walk. Results show that the AWRT for a nontrivial weight factor sequence grows sublinearly with the network order. To investigate the reason for this sublinearity, the average receiving time (ART) is discussed for four cases.
Asymmetric network connectivity using weighted harmonic averages
NASA Astrophysics Data System (ADS)
Morrison, Greg; Mahadevan, L.
2011-02-01
We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph, using a simple weighted harmonic average of connectivity; that is, a real-valued Generalized Erdös Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and we use our measure to study some simple analytically tractable networks. We show how this might be used to look at asymmetry in authorship networks, such as those that inspired the integer Erdös numbers in mathematical coauthorship. We also show the utility of our approach by devising a ratings scheme that we apply to data from the Netflix Prize, and we find a significant improvement using our method over a baseline.
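The building block named above, a weighted harmonic average, can be sketched in a few lines. This is a minimal illustration only, not the authors' exact GEN recursion; the function name and sample numbers are hypothetical.

```python
def weighted_harmonic_mean(values, weights):
    """Weighted harmonic mean: sum(w) / sum(w / v), for positive values.
    Strong connections (small "distances") dominate less than in an
    arithmetic average, which is why harmonic averaging suits closeness."""
    assert len(values) == len(weights) and all(v > 0 for v in values)
    return sum(weights) / sum(w / v for w, v in zip(weights, values))

# two links of strength 1 and 4, weighted equally
print(weighted_harmonic_mean([1.0, 4.0], [1.0, 1.0]))  # 1.6
```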
Average molecular weight of surfactants in aerosols
NASA Astrophysics Data System (ADS)
Latif, M. T.; Brimblecombe, P.
2007-09-01
Surfactants in atmospheric aerosols were determined as methylene blue active substances (MBAS) and ethyl violet active substances (EVAS). The MBAS and EVAS concentrations can be correlated with surface tension as determined by pendant drop analysis. The effect on surface tension was more clearly indicated in fine-mode aerosol extracts. The concentrations of MBAS and EVAS were determined before and after ultrafiltration using AMICON centrifuge tubes that define a 5000 Da (5 kDa) nominal molecular weight fraction. Overall, MBAS and, to a greater extent, EVAS predominate in the fraction with molecular weight below 5 kDa. In the case of aerosols collected in Malaysia, the higher molecular weight fractions tended to be more predominant. MBAS and EVAS are correlated with yellow to brown colours in aerosol extracts. Further experiments showed that possible sources of surfactants (e.g. petrol soot, diesel soot) in atmospheric aerosols yield material with molecular size below 5 kDa, except for humic acid. The concentration of surfactants from these sources increased after ozone exposure, and for humic acids the material then generally also included smaller molecular weight surfactants.
Estimating a weighted average of stratum-specific parameters.
Brumback, Babette A; Winner, Larry H; Casella, George; Ghosh, Malay; Hall, Allyson; Zhang, Jianyi; Chorba, Lorna; Duncan, Paul
2008-10-30
This article investigates estimators of a weighted average of stratum-specific univariate parameters and compares them in terms of a design-based estimate of mean-squared error (MSE). The research is motivated by a stratified survey sample of Florida Medicaid beneficiaries, in which the parameters are population stratum means and the weights are known and determined by the population sampling frame. Assuming heterogeneous parameters, it is common to estimate the weighted average with the weighted sum of sample stratum means; under homogeneity, one ignores the known weights in favor of precision weighting. Adaptive estimators arise from random effects models for the parameters. We propose adaptive estimators motivated from these random effects models, but we compare their design-based performance. We further propose selecting the tuning parameter to minimize a design-based estimate of mean-squared error. This differs from the model-based approach of selecting the tuning parameter to accurately represent the heterogeneity of stratum means. Our design-based approach effectively downweights strata with small weights in the assessment of homogeneity, which can lead to a smaller MSE. We compare the standard random effects model with identically distributed parameters to a novel alternative, which models the variances of the parameters as inversely proportional to the known weights. We also present theoretical and computational details for estimators based on a general class of random effects models. The methods are applied to estimate average satisfaction with health plan and care among Florida beneficiaries just prior to Medicaid reform.
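The two baseline estimators contrasted above can be written out directly. This is a sketch with hypothetical numbers; it shows only the two non-adaptive endpoints (known frame weights vs. precision weighting), not the article's adaptive estimators.

```python
def weighted_stratum_mean(stratum_means, frame_weights):
    """Design-based estimate under heterogeneity: weighted sum of sample
    stratum means using the known population weights (assumed to sum to 1)."""
    return sum(w * m for w, m in zip(frame_weights, stratum_means))

def precision_weighted_mean(stratum_means, variances):
    """Homogeneity alternative: ignore the frame weights and weight each
    stratum mean by its precision (inverse variance of the stratum mean)."""
    w = [1.0 / v for v in variances]
    return sum(wi * m for wi, m in zip(w, stratum_means)) / sum(w)

means = [3.2, 4.0, 2.5]       # hypothetical sample stratum means
weights = [0.5, 0.3, 0.2]     # known sampling-frame weights
print(weighted_stratum_mean(means, weights))
```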
Formulation of Maximized Weighted Averages in URTURIP Technique
2001-10-25
Migeon, Bruno; Deforge, Philippe; Marché, Pierre
Laboratoire Vision et Robotique, 63, avenue de Lattre de Tassigny, 18020 Bourges Cedex, France
Evaluation of spline and weighted average interpolation algorithms
NASA Astrophysics Data System (ADS)
Eckstein, Barbara Ann
Bivariate polynomial and weighted average interpolations were tested on two data sets. One data set consisted of irregularly spaced Bouguer gravity values. Maps derived from automated interpolation were compared to a manually created map to determine the best computer-generated diagram. For this data set, bivariate polynomial interpolation was inadequate, showing many spurious circular anomalies with extrema greatly exceeding the input values. The greatest distortion occurred near roughly collinear observations and steep field gradients. The computerized map from weighted average interpolation matched the manual map when the number of grid points was roughly nine times the number of input points. Groundwater recharge and discharge rates were used for the second example. The discharge zones are two narrow irrigation ditches, and measurements were along linear traverses. Again, polynomial interpolation produced unreasonably large interpolated values near high field gradients. The weighted average method required a higher ratio of grid points to input data (about 64 to 1) because of the long narrow shape of the discharge zones. The weighted average interpolation method was more reliable than the polynomial method because it was less sensitive to the nature of the data distribution and to the field gradients.
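A common form of the weighted average interpolation evaluated above is inverse-distance weighting; a minimal sketch under that assumption (the abstract does not specify the exact weighting function used):

```python
import math

def idw_interpolate(points, values, query, power=2.0):
    """Weighted-average (inverse-distance) interpolation at a query point:
    each observation contributes with weight d**-power."""
    num = den = 0.0
    for (x, y), v in zip(points, values):
        d = math.hypot(query[0] - x, query[1] - y)
        if d == 0.0:
            return v              # exact hit: honor the observation
        w = d ** -power
        num += w * v
        den += w
    return num / den

# midway between two equidistant observations -> their plain average
print(idw_interpolate([(0, 0), (2, 0)], [10.0, 20.0], (1, 0)))  # 15.0
```

Because every interpolated value is a convex combination of the data, the result can never exceed the input extrema, which is exactly the robustness near steep gradients that the abstract reports.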
Weighted Average Consensus-Based Unscented Kalman Filtering.
Li, Wangyan; Wei, Guoliang; Han, Fei; Liu, Yurong
2016-02-01
In this paper, we investigate consensus-based distributed state estimation problems for a class of sensor networks within the unscented Kalman filter (UKF) framework. The communication status among sensors is represented by a connected undirected graph. A weighted average consensus-based UKF algorithm is developed to estimate the true state of interest, and its estimation error is proven to be bounded in mean square. Finally, the effectiveness of the proposed consensus-based UKF algorithm is validated through a simulation example.
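The weighted-average consensus step underlying such filters can be sketched as follows. This is a generic consensus iteration, not the paper's full UKF algorithm; the weight matrix is a hypothetical example.

```python
def consensus_step(states, weight_matrix):
    """One weighted-average consensus iteration: node i replaces its state by
    a convex combination of the states it can see (row i of the weight matrix
    is nonnegative and sums to 1). A doubly stochastic matrix preserves the
    network-wide average while shrinking disagreement."""
    n = len(states)
    return [sum(weight_matrix[i][j] * states[j] for j in range(n))
            for i in range(n)]

# two sensors with differing local estimates agree after one step
x = [0.0, 6.0]
W = [[0.5, 0.5], [0.5, 0.5]]
print(consensus_step(x, W))  # [3.0, 3.0]
```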
77 FR 74452 - Bus Testing: Calculation of Average Passenger Weight and Test Vehicle Weight
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-14
... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF TRANSPORTATION Federal Transit Administration 49 CFR Part 665 RIN 2132-AB01 Bus Testing: Calculation of Average Passenger Weight and Test Vehicle Weight AGENCY: Federal Transit Administration (FTA), DOT. ACTION: Notice...
Average weighted trapping time of the node- and edge- weighted fractal networks
NASA Astrophysics Data System (ADS)
Dai, Meifeng; Ye, Dandan; Hou, Jie; Xi, Lifeng; Su, Weiyi
2016-10-01
In this paper, we study the trapping problem in node- and edge-weighted fractal networks with the underlying geometries, focusing on a particular case with a perfect trap located at the central node. We derive exact analytic formulas for the average weighted trapping time (AWTT), the average of the node-to-trap mean weighted first-passage time over the whole network, in terms of the network size Ng, the number of copies s, the node-weight factor w, and the edge-weight factor r. The result shows that in a large network the AWTT grows as a power-law function of the network size Ng with exponent θ(s, r, w) = log_s(s r w^2) when s r w^2 ≠ 1. In the special case s r w^2 = 1, the AWTT grows with the network size Ng as log Ng. This means that the efficiency of the trapping process depends on three main parameters: the number of copies s > 1, the node-weight factor 0 < w ≤ 1, and the edge-weight factor 0 < r ≤ 1. The smaller the value of s r w^2, the more efficient the trapping process.
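The scaling exponent quoted above can be evaluated numerically; a small sketch (the function name is mine, the formula is the abstract's):

```python
import math

def awtt_exponent(s, r, w):
    """theta(s, r, w) = log_s(s * r * w**2): the power-law exponent of the
    AWTT in the network size, valid when s * r * w**2 != 1."""
    return math.log(s * r * w * w) / math.log(s)

print(awtt_exponent(4, 1.0, 1.0))   # unweighted case: exponent 1, linear growth
print(awtt_exponent(4, 0.5, 1.0))   # s*r*w^2 = 2 < s: sublinear growth
```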
Modified box dimension and average weighted receiving time on the weighted fractal networks
Dai, Meifeng; Sun, Yanqiu; Shao, Shuxiang; Xi, Lifeng; Su, Weiyi
2015-01-01
In this paper a family of weighted fractal networks, in which the edge weights are assigned different values at a certain scale, is studied. For weighted fractal networks the definition of the modified box dimension is introduced, and a rigorous proof of its existence is given. The modified box dimension, which depends on the weight factor and the number of copies, is then deduced. We assume that at each step the walker, starting from its current node, moves uniformly to any of its nearest neighbors; the weighted time for two adjacent nodes is the weight of the edge connecting them. The average weighted receiving time (AWRT) is then defined accordingly. The remarkable result is that in a large network, when the weight factor is larger than the number of copies, the AWRT grows as a power-law function of the network order with exponent equal to the reciprocal of the modified box dimension. This result shows that the efficiency of the trapping process depends on the modified box dimension: the larger the modified box dimension, the more efficient the trapping process. PMID:26666355
Average receiving scaling of the weighted polygon Koch networks with the weight-dependent walk
NASA Astrophysics Data System (ADS)
Ye, Dandan; Dai, Meifeng; Sun, Yanqiu; Shao, Shuxiang; Xie, Qi
2016-09-01
Based on the weighted Koch networks and the self-similarity of fractals, we present a family of weighted polygon Koch networks with a weight factor r (0 < r ≤ 1). We study the average receiving time (ART) of the weight-dependent walk (i.e., the walker moves to any of its neighbors with probability proportional to the weight of the edge linking them), whose key step is to calculate the sum of mean first-passage times (MFPTs) for all nodes absorbed at a hub node. We use a recursive division method to divide the weighted polygon Koch networks in order to calculate the ART scaling more conveniently. We show that the ART scaling exhibits a sublinear or linear dependence on the network order. Thus, the weighted polygon Koch networks are more efficient than extended Koch networks in receiving information. Finally, compared with previous results (i.e., for Koch networks and weighted Koch networks), we find that our models are more general.
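The weight-dependent walk defined in the parenthesis above reduces to a simple normalization of edge weights; a minimal sketch with hypothetical neighbors:

```python
def transition_probs(edge_weights):
    """Weight-dependent walk: from the current node, move to neighbor j with
    probability proportional to the weight of the connecting edge."""
    total = sum(edge_weights.values())
    return {j: w / total for j, w in edge_weights.items()}

# a node with two neighbors, edge weights 1 and 3
print(transition_probs({'a': 1.0, 'b': 3.0}))  # {'a': 0.25, 'b': 0.75}
```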
26 CFR 1.989(b)-1 - Definition of weighted average exchange rate.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 26 Internal Revenue 10 2013-04-01 2013-04-01 false Definition of weighted average exchange rate. 1... of weighted average exchange rate. For purposes of section 989(b)(3) and (4), the term “weighted average exchange rate” means the simple average of the daily exchange rates (determined by reference to...
26 CFR 1.989(b)-1 - Definition of weighted average exchange rate.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 26 Internal Revenue 10 2011-04-01 2011-04-01 false Definition of weighted average exchange rate. 1... of weighted average exchange rate. For purposes of section 989(b)(3) and (4), the term “weighted average exchange rate” means the simple average of the daily exchange rates (determined by reference to...
26 CFR 1.989(b)-1 - Definition of weighted average exchange rate.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 26 Internal Revenue 10 2012-04-01 2012-04-01 false Definition of weighted average exchange rate. 1... of weighted average exchange rate. For purposes of section 989(b)(3) and (4), the term “weighted average exchange rate” means the simple average of the daily exchange rates (determined by reference to...
26 CFR 1.989(b)-1 - Definition of weighted average exchange rate.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 26 Internal Revenue 10 2014-04-01 2013-04-01 true Definition of weighted average exchange rate. 1... of weighted average exchange rate. For purposes of section 989(b)(3) and (4), the term “weighted average exchange rate” means the simple average of the daily exchange rates (determined by reference to...
12 CFR 702.105 - Weighted-average life of investments.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 12 Banks and Banking 6 2011-01-01 2011-01-01 false Weighted-average life of investments. 702.105... PROMPT CORRECTIVE ACTION Net Worth Classification § 702.105 Weighted-average life of investments. Except as provided below (Table 3), the weighted-average life of an investment for purposes of §§...
12 CFR 702.105 - Weighted-average life of investments.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 6 2010-01-01 2010-01-01 false Weighted-average life of investments. 702.105... PROMPT CORRECTIVE ACTION Net Worth Classification § 702.105 Weighted-average life of investments. Except as provided below (Table 3), the weighted-average life of an investment for purposes of §§...
12 CFR 702.105 - Weighted-average life of investments.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 12 Banks and Banking 7 2013-01-01 2013-01-01 false Weighted-average life of investments. 702.105... PROMPT CORRECTIVE ACTION Net Worth Classification § 702.105 Weighted-average life of investments. Except as provided below (Table 3), the weighted-average life of an investment for purposes of §§...
12 CFR 702.105 - Weighted-average life of investments.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 12 Banks and Banking 7 2012-01-01 2012-01-01 false Weighted-average life of investments. 702.105... PROMPT CORRECTIVE ACTION Net Worth Classification § 702.105 Weighted-average life of investments. Except as provided below (Table 3), the weighted-average life of an investment for purposes of §§...
12 CFR 702.105 - Weighted-average life of investments.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 12 Banks and Banking 7 2014-01-01 2014-01-01 false Weighted-average life of investments. 702.105... PROMPT CORRECTIVE ACTION Net Worth Classification § 702.105 Weighted-average life of investments. Except as provided below (Table 3), the weighted-average life of an investment for purposes of §§...
26 CFR 1.989(b)-1 - Definition of weighted average exchange rate.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 26 Internal Revenue 10 2010-04-01 2010-04-01 false Definition of weighted average exchange rate. 1... average exchange rate. For purposes of section 989(b)(3) and (4), the term “weighted average exchange rate” means the simple average of the daily exchange rates (determined by reference to a qualified source...
The Conservation of Area Integrals in Averaging Transformations
NASA Astrophysics Data System (ADS)
Kuznetsov, E. D.
2010-06-01
It is shown for the two-planetary version of the weakly perturbed two-body problem that, in a system defined by a finite part of a Poisson expansion of the averaged Hamiltonian, only one of the three components of the area vector is conserved: the component corresponding to the plane from which the longitudes are measured. The variability of the other two components is demonstrated in two ways. The first is based on calculating the Poisson bracket of the averaged Hamiltonian and the components of the area vector written in closed form. In the second, an echeloned Poisson series processor (EPSP) is used to calculate the Poisson bracket. The averaged Hamiltonian is taken to second order in the small parameter of the problem, and the components of the area vector are expanded in a Poisson series.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-01
... International Trade Administration 19 CFR Part 351 RIN 0625-AA87 Antidumping Proceedings: Calculation of the... regarding the calculation of the weighted average dumping margin and antidumping duty assessment rate in... regarding the calculation of the weighted average dumping margin and antidumping duty assessment rate...
Latent-variable approaches to the Jamesian model of importance-weighted averages.
Scalas, L Francesca; Marsh, Herbert W; Nagengast, Benjamin; Morin, Alexandre J S
2013-01-01
The individually importance-weighted average (IIWA) model posits that the contribution of specific areas of self-concept to global self-esteem varies systematically with the individual importance placed on each specific component. Although intuitively appealing, this model has weak empirical support; thus, within the framework of a substantive-methodological synergy, we propose a multiple-item latent approach to the IIWA model as applied to a range of self-concept domains (physical, academic, and spiritual self-concepts) and subdomains (appearance, math, and verbal self-concepts) in young adolescents from two countries. Tests that simultaneously considered the effects of the self-concept domains on trait self-esteem did not support the IIWA model. On the contrary, support was found for a normative group importance model, in which importance varied as a function of domains but not of individuals. Individuals do differentially weight the various components of self-concept; however, the weights are largely determined by normative processes, so that little additional information is gained from individual weightings.
Cohen's Linearly Weighted Kappa Is a Weighted Average of 2 x 2 Kappas
ERIC Educational Resources Information Center
Warrens, Matthijs J.
2011-01-01
An agreement table with n ≥ 3 ordered categories can be collapsed into n − 1 distinct 2 x 2 tables by combining adjacent categories. Vanbelle and Albert ("Stat. Methodol." 6:157-163, 2009c) showed that the components of Cohen's weighted kappa with linear weights can be obtained from these n − 1…
Raoult's law-based method for determination of coal tar average molecular weight
Brown, D.G.; Gupta, L.; Horace, H.K.; Coleman, A.J.
2005-08-01
A Raoult's law-based method for determining the number average molecular weight of coal tars is presented. The method requires data from two-phase coal tar/water equilibrium experiments, which are readily performed in environmental laboratories. An advantage of this method for environmental samples is that it is not impacted by the small amount of inert debris often present in coal tar samples obtained from contaminated sites. Results are presented for 10 coal tars from nine former manufactured gas plants located in the eastern United States. Vapor pressure osmometry (VPO) analysis provided average molecular weights similar to those determined with the Raoult's law-based method, except for one highly viscous coal tar sample. Use of the VPO-based average molecular weight for this coal tar resulted in underprediction of the aqueous concentrations of the coal tar constituents. Additionally, one other coal tar was not completely soluble in the solvents used for VPO analysis. The results indicate that the Raoult's law-based method is able to provide an average molecular weight that is consistent with the intended application of the data (e.g., modeling the dissolution of coal tar constituents into surrounding waters), and this method can be applied to coal tars that may be incompatible with other commonly used methods for determining average molecular weight, such as vapor pressure osmometry.
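One common single-constituent form of such a Raoult's law calculation can be sketched as below. This is my reconstruction under stated assumptions, not necessarily the authors' exact procedure: Raoult's law gives the constituent's mole fraction in the tar as its measured aqueous concentration divided by its pure-compound aqueous solubility, and the number average molecular weight follows from the constituent's known mass fraction. All numbers are hypothetical.

```python
def tar_number_avg_mw(mass_fraction, mw, aqueous_conc, solubility):
    """Raoult's-law estimate from one measured constituent:
    x = C_aq / S          (mole fraction of constituent in the tar)
    moles of constituent per gram of tar = mass_fraction / mw
    total moles per gram of tar = (mass_fraction / mw) / x
    number-average MW = 1 / total moles per gram = x * mw / mass_fraction."""
    x = aqueous_conc / solubility
    return x * mw / mass_fraction

# hypothetical naphthalene-like constituent (mg/L units cancel in the ratio)
print(tar_number_avg_mw(mass_fraction=0.05, mw=128.2,
                        aqueous_conc=3.1, solubility=31.0))
```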
NASA Technical Reports Server (NTRS)
Scruggs, T.; Moraguez, M.; Patankar, K.; Fitz-Coy, N.; Liou, J.-C.; Sorge, M.; Huynh, T.
2016-01-01
Debris fragments from the hypervelocity impact testing of DebriSat are being collected and characterized for use in updating existing satellite breakup models. One of the key parameters utilized in these models is the ballistic coefficient of the fragment, which is directly related to its area-to-mass ratio. However, since the attitude of fragments varies during their orbital lifetime, it is customary to use the average cross-sectional area in the calculation of the area-to-mass ratio. The average cross-sectional area is defined as the average of the projected surface areas perpendicular to the direction of motion and has been shown to be equal to one-fourth of the total surface area of a convex object. Unfortunately, numerous fragments obtained from the DebriSat experiment show significant concavity (i.e., shadowing), and thus we have explored alternate methods for computing the average cross-sectional area of the fragments. An imaging system based on the volumetric reconstruction of a 3D object from multiple 2D photographs of the object was developed for use in determining the size characteristic (i.e., characteristic length) of the DebriSat fragments. For each fragment, the imaging system generates N images from varied azimuth and elevation angles and processes them using a space-carving algorithm to construct a 3D point cloud of the fragment. This paper describes two approaches for calculating the average cross-sectional area of debris fragments based on the 3D imager. Approach A utilizes the constructed 3D object to generate equally distributed cross-sectional area projections and then averages them to determine the average cross-sectional area. Approach B utilizes a weighted average of the area of the 2D photographs to directly compute the average cross-sectional area. A comparison of the accuracy and computational needs of each approach is described as well as preliminary results of an analysis to determine the "optimal" number of images needed for
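The one-fourth-of-surface-area identity for convex bodies cited above can be checked by Monte Carlo, using the standard projection formula for a convex polyhedron. This sketch is my illustration of the identity, not the DebriSat imaging pipeline.

```python
import math, random

def avg_projected_area_convex(face_areas, face_normals,
                              n_samples=100000, seed=1):
    """Monte Carlo average cross-sectional area of a convex polyhedron: the
    projection along a unit direction u has area (1/2) * sum_f A_f * |n_f . u|;
    average that over uniformly random directions on the sphere."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        # uniform random direction on the unit sphere
        z = rng.uniform(-1.0, 1.0)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        s = math.sqrt(1.0 - z * z)
        u = (s * math.cos(phi), s * math.sin(phi), z)
        total += 0.5 * sum(a * abs(n[0]*u[0] + n[1]*u[1] + n[2]*u[2])
                           for a, n in zip(face_areas, face_normals))
    return total / n_samples

# Unit cube: total surface area 6, so the average projection should be 6/4 = 1.5
cube_normals = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                (0, -1, 0), (0, 0, 1), (0, 0, -1)]
print(avg_projected_area_convex([1.0] * 6, cube_normals))
```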
Steplewski, Z; Buczyńska, G; Rogoszewski, M; Kuban, T; Steplewska-Mazur, K; Jaskólecki, H; Kasperczyk, J
1998-02-01
Low birth weight is still an important health problem in many countries. Low birth weight increases mortality, impairs central nervous system and somatic development, and interferes with intellectual and emotional development. It occurs frequently in Poland, in 7-9% of live births. There are many risk factors, among them behavioural and environmental ones. In Poland, attention has been focused on chemical and physical environmental factors, while behavioural factors (stress) have been disregarded. The present paper examines the relationship between stress during pregnancy (as assessed by the pregnant women), child birth weight, and the frequency of low birth weight. The research was carried out with a questionnaire using a case-control design. It involved 450 mothers of newborn children (the case group: premature delivery or birth weight below 2500 g) and 450 mothers of newborn children (the control group: physiological delivery). Mothers were asked about their attitude to the pregnancy, and professional and personal stress during pregnancy was assessed. The results were analysed by computing the risk ratio (RR) and the correlation coefficient. The study showed no relation between acceptance of pregnancy or stress and the frequency of low birth weight or the average birth weight. It did not demonstrate an unfavourable influence of stress reactions caused by professional and personal stressors on intrauterine foetal development.
Simulating thermal boundary conditions of spin-lattice models with weighted averages
NASA Astrophysics Data System (ADS)
Wang, Wenlong
2016-07-01
Thermal boundary conditions have played an increasingly important role in revealing the nature of short-range spin glasses and are likely to be relevant to other disordered systems as well. A diffusion method that initializes each replica with a random boundary condition at infinite temperature, using population annealing, has been used in recent large-scale simulations. However, the efficiency of this method can be greatly suppressed by temperature chaos. For example, most samples have some boundary conditions that are completely eliminated from the population in the process of annealing at low temperatures. In this work, I study a weighted average method that solves this problem by simulating each boundary condition separately and collecting data using weighted averages. The efficiency of the two methods is studied using both population annealing and parallel tempering, showing that the weighted average method is more efficient and accurate.
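The combination step can be sketched as below. The weighting rule here, weights proportional to exp(-beta*F_b) per boundary condition, is an assumption on my part (the abstract only says "weighted averages"); the log-sum-exp shift is just for numerical stability.

```python
import math

def weighted_bc_average(observables, minus_beta_free_energies):
    """Combine per-boundary-condition observable estimates with weights
    proportional to exp(-beta * F_b). Inputs are -beta*F_b per boundary
    condition; a max-shift avoids overflow in the exponentials."""
    m = max(minus_beta_free_energies)
    w = [math.exp(x - m) for x in minus_beta_free_energies]
    return sum(wi * o for wi, o in zip(w, observables)) / sum(w)

# equal free energies -> plain average of the two estimates
print(weighted_bc_average([1.0, 3.0], [0.0, 0.0]))  # 2.0
```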
Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.
2013-01-01
When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives an overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, C_E, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, C_Ek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown C_Ek from the residuals during model calibration. The inferred C_Ek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using C_Ek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using C_Ek
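The standard information-criterion-to-weight conversion discussed above can be sketched as follows. This is the generic exp(-ΔIC/2) rule, not the paper's iterative two-stage method; it illustrates how a modest IC gap already concentrates nearly all the weight on the best model.

```python
import math

def ic_model_weights(ic_values):
    """Averaging weights from information-criterion values (AIC/BIC/KIC-style):
    w_k proportional to exp(-(IC_k - IC_min) / 2), normalized to sum to 1."""
    ic_min = min(ic_values)
    raw = [math.exp(-(ic - ic_min) / 2.0) for ic in ic_values]
    total = sum(raw)
    return [r / total for r in raw]

# hypothetical criterion values for three alternative models
print(ic_model_weights([100.0, 102.0, 110.0]))
```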
Web-enabled spatial decision analysis using Ordered Weighted Averaging (OWA)
NASA Astrophysics Data System (ADS)
Rinner, Claus; Malczewski, Jacek
This paper presents a spatial decision support tool that implements the Ordered Weighted Averaging (OWA) method. OWA is a family of multicriteria evaluation operators characterised by two sets of weights: criterion importance weights and order weights. We propose a highly interactive way of choosing, modifying, and fine-tuning the decision strategy defined by the order weights. This exploratory approach to OWA is supported by a graphical representation of the operator's behaviour in terms of decision risk and tradeoff/dispersion between criteria. Our prototype implementation is based on the CommonGIS software, and thus, Web-enabled and working with vector data. We successfully demonstrate online, exploratory support of spatial decision strategies using a data set of skiing resorts in Wallis, Switzerland.
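The order-weight mechanism described above is Yager's OWA operator; a minimal sketch (the example scores are hypothetical, and the importance-weighting step of the full GIS formulation is omitted):

```python
def owa(values, order_weights):
    """Yager's OWA: apply the order weights to the values sorted descending.
    order_weights = [1, 0, ...] gives the max (optimistic strategy),
    [..., 0, 1] gives the min (pessimistic), and equal weights give the
    plain average -- the decision-risk dial the paper visualizes."""
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(order_weights, ordered))

scores = [0.2, 0.9, 0.5]
print(owa(scores, [1.0, 0.0, 0.0]))        # 0.9 (max)
print(owa(scores, [1/3, 1/3, 1/3]))        # plain average
```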
NASA Astrophysics Data System (ADS)
Lin, Bangjiang; Li, Yiwei; Zhang, Shihao; Tang, Xuan
2015-10-01
Weighted interframe averaging (WIFA)-based channel estimation (CE) is presented for orthogonal frequency division multiplexing passive optical networks (OFDM-PON), in which the CE results of adjacent frames are directly averaged to increase the estimation accuracy. The effectiveness of WIFA combined with conventional least squares, intrasymbol frequency-domain averaging, and minimum mean square error estimation, respectively, is demonstrated through 26.7-km standard single-mode fiber transmission. The experimental results show that the WIFA method, with low complexity, can significantly enhance the transmission performance of OFDM-PON.
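The averaging step itself can be sketched as below. The uniform default weights and the sample channel gains are hypothetical; the paper does not specify its weight choice in this abstract.

```python
def wifa_estimate(frame_estimates, weights=None):
    """Weighted interframe averaging of per-frame channel estimates: combine
    each subcarrier's estimate across adjacent frames to suppress estimation
    noise (the channel is assumed quasi-static over those frames)."""
    n = len(frame_estimates)
    if weights is None:
        weights = [1.0 / n] * n          # plain interframe average
    n_sub = len(frame_estimates[0])
    return [sum(w * frame[k] for w, frame in zip(weights, frame_estimates))
            for k in range(n_sub)]

# three adjacent frames, two subcarriers (complex channel gains)
frames = [[1.0 + 0.1j, 0.5], [1.2 + 0.0j, 0.4], [0.8 - 0.1j, 0.6]]
print(wifa_estimate(frames))
```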
47 CFR 65.305 - Calculation of the weighted average cost of capital.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 3 2010-10-01 2010-10-01 false Calculation of the weighted average cost of capital. 65.305 Section 65.305 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) INTERSTATE RATE OF RETURN PRESCRIPTION PROCEDURES AND METHODOLOGIES...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-28
... International Trade Administration 19 CFR Part 351 RIN 0625-AA87 Antidumping Proceedings: Calculation of the... comments regarding the calculation of the weighted average dumping margin and antidumping duty assessment... providing offsets for non-dumped comparisons. Antidumping Proceedings: Calculation of the...
Modeling daily average stream temperature from air temperature and watershed area
NASA Astrophysics Data System (ADS)
Butler, N. L.; Hunt, J. R.
2012-12-01
Habitat restoration efforts within watersheds require spatial and temporal estimates of water temperature for aquatic species, especially species that migrate within watersheds at different life stages. Monitoring programs are not able to fully sample all aquatic environments within watersheds under the extreme conditions that determine long-term habitat viability. Under these circumstances a combination of selective monitoring and modeling is required for predicting future geospatial and temporal conditions. This study describes a model that is broadly applicable to different watersheds while using readily available regional air temperature data. Daily water temperature data from thirty-eight gauges with drainage areas from 2 km2 to 2000 km2 in the Sonoma Valley, Napa Valley, and Russian River Valley in California were used to develop, calibrate, and test a stream temperature model. Air temperature data from seven NOAA gauges provided the daily maximum and minimum air temperatures. The model was developed and calibrated using five years of data from the Sonoma Valley at ten water temperature gauges and a NOAA air temperature gauge. The daily average stream temperatures within this watershed were bounded by the preceding maximum and minimum air temperatures, with smaller upstream watersheds being more dependent on the minimum air temperature than the maximum air temperature. The model assumed a linear dependence on maximum and minimum air temperature with a weighting factor dependent on upstream area determined by error minimization using observed data. Fitted minimum air temperature weighting factors were consistent over all five years of data for each gauge, and they ranged from 0.75 for upstream drainage areas less than 2 km2 to 0.45 for upstream drainage areas greater than 100 km2. For the calibration data sets within the Sonoma Valley, the average error between the model estimated daily water temperature and the observed water temperature data ranged from 0.7
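The model's core is a linear blend of the preceding daily air-temperature extremes, with the minimum-temperature weight decreasing as upstream drainage area grows. A sketch under stated assumptions: the two endpoint weights (0.75 below 2 km2, 0.45 above 100 km2) come from the fitted range reported above, but the log-linear interpolation between them is illustrative, since the paper fits the weight per gauge by error minimization:

```python
import math

def stream_temp(t_max, t_min, area_km2):
    """Daily average stream temperature as a weighted average of the
    preceding daily maximum and minimum air temperatures; the
    minimum-temperature weight falls with upstream drainage area."""
    if area_km2 <= 2:
        w_min = 0.75
    elif area_km2 >= 100:
        w_min = 0.45
    else:
        # assumed log-linear ramp between the two fitted endpoints
        frac = math.log(area_km2 / 2) / math.log(100 / 2)
        w_min = 0.75 - 0.30 * frac
    return w_min * t_min + (1 - w_min) * t_max
```

Small headwater basins therefore track the cooler nighttime minimum, while larger downstream reaches track the daytime maximum more closely.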
Code of Federal Regulations, 2010 CFR
2010-04-01
... weighted-average dumping margins disregarded. 351.106 Section 351.106 Customs Duties INTERNATIONAL TRADE... minimis net countervailable subsidies and weighted-average dumping margins disregarded. (a) Introduction... practice of disregarding net countervailable subsidies or weighted-average dumping margins that were...
Estimation of the global average temperature with optimally weighted point gauges
NASA Technical Reports Server (NTRS)
Hardin, James W.; Upson, Robert B.
1993-01-01
This paper considers the minimum mean squared error (MSE) incurred in estimating an idealized Earth's global average temperature with a finite network of point gauges located over the globe. We follow the spectral MSE formalism given by North et al. (1992) and derive the optimal weights for N gauges in the problem of estimating the Earth's global average temperature. Our results suggest that for commonly used configurations the variance of the estimate due to sampling error can be reduced by as much as 50%.
Lee, Cue Hyunkyu; Cook, Seungho; Lee, Ji Sung
2016-01-01
Meta-analysis has become a widely used tool for many applications in bioinformatics, including genome-wide association studies. A commonly used approach for meta-analysis is the fixed effects model approach, for which there are two popular methods: the inverse variance-weighted average method and the weighted sum of z-scores method. Although previous studies have shown that the two methods perform similarly, their characteristics and their relationship have not been thoroughly investigated. In this paper, we investigate the optimal characteristics of the two methods and show the connection between the two methods. We demonstrate that each method is optimized for a unique goal, which gives us insight into the optimal weights for the weighted sum of z-scores method. We examine the connection between the two methods both analytically and empirically and show that their resulting statistics become equivalent under certain assumptions. Finally, we apply both methods to the Wellcome Trust Case Control Consortium data and demonstrate that the two methods can give distinct results in certain study designs. PMID:28154508
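Both fixed-effects statistics fit in a few lines. With per-study z-scores z_i = effect_i / SE_i and z-weights w_i = 1 / SE_i, the two pooled statistics coincide, which is the kind of equivalence-under-assumptions the paper examines (a sketch, not the paper's code):

```python
import math

def inverse_variance_meta(effects, variances):
    """Fixed-effects pooling: weight each study's effect estimate by
    the inverse of its variance; returns (pooled effect, pooled var)."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    return pooled, 1.0 / sum(weights)

def weighted_z_meta(z_scores, weights):
    """Weighted sum of z-scores (Stouffer-style): the combined
    statistic is again standard normal under the null."""
    num = sum(w * z for w, z in zip(weights, z_scores))
    return num / math.sqrt(sum(w * w for w in weights))
```

For example, with effects [0.2, 0.4] and variances [0.04, 0.01], the pooled z from the inverse-variance method equals the weighted z computed from z-scores [1.0, 4.0] and weights [5, 10] (the inverse standard errors).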
Wetland Boundary Determination in the Great Dismal Swamp Using Weighted Averages
Carter, Virginia; Garrett, Mary Keith; Gammon, Patricia T.
1988-01-01
A weighted average method was used to analyze transition zone vegetation in the Great Dismal Swamp to assess whether a more uniform determination of wetland boundaries can be made nationwide. The method was applied to vegetation data collected on four transects and three vertical layers across the wetland-to-upland transition zone of the swamp. Ecological index values based on water tolerance were either taken from the literature or derived from local species tolerances. Wetland index values were calculated for 25-m increments using species cover and rankings based on the ecological indices. Wetland index values were used to designate increments as either wetland, transitional, or upland, and to examine the usefulness of a provisional wetland-upland breakpoint. The weighted average method did not provide for an objective placement of an absolute wetland boundary, but did serve to focus attention on the transitional boundary zone where supplementary information is necessary to select a wetland-upland breakpoint.
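The wetland index for an increment is a cover-weighted mean of the per-species ecological index values, so abundant species dominate the score. A minimal sketch (function name and example values are illustrative):

```python
def wetland_index(covers, eco_indices):
    """Cover-weighted average of species water-tolerance index values
    for one 25-m transect increment: sum(cover * index) / sum(cover)."""
    total = sum(covers)
    return sum(c * e for c, e in zip(covers, eco_indices)) / total

# Three species with percent covers 50, 30, 20 and ecological index
# values 1, 3, 5 yield an increment index between the extremes.
increment_score = wetland_index([50.0, 30.0, 20.0], [1.0, 3.0, 5.0])
```

Comparing each increment's score against a provisional breakpoint is then what designates it wetland, transitional, or upland.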
Model Averaging Methods for Weight Trimming in Generalized Linear Regression Models.
Elliott, Michael R
2009-03-01
In sample surveys where units have unequal probabilities of inclusion, associations between the inclusion probability and the statistic of interest can induce bias in unweighted estimates. This is true even in regression models, where the estimates of the population slope may be biased if the underlying mean model is misspecified or the sampling is nonignorable. Weights equal to the inverse of the probability of inclusion are often used to counteract this bias. Highly disproportional sample designs have highly variable weights; weight trimming reduces large weights to a maximum value, reducing variability but introducing bias. Most standard approaches are ad hoc in that they do not use the data to optimize bias-variance trade-offs. This article uses Bayesian model averaging to create "data driven" weight trimming estimators. We extend previous results for linear regression models (Elliott 2008) to generalized linear regression models, developing robust models that approximate fully-weighted estimators when bias correction is of greatest importance, and approximate unweighted estimators when variance reduction is critical.
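Weight trimming as described, reducing weights above a cutoff to that cutoff, is a one-liner; the Bayesian model-averaging estimators in the article generalize this hard cutoff. A sketch (the choice of cap is exactly the ad hoc step the article seeks to replace with data-driven estimators):

```python
def trim_weights(weights, cap):
    """Reduce any sampling weight above `cap` to `cap`: the variance of
    the weighted estimator drops, at the price of some bias."""
    return [min(w, cap) for w in weights]

def weighted_mean(y, weights):
    """Inverse-probability-weighted mean, the estimator the (possibly
    trimmed) weights feed into."""
    return sum(w * v for w, v in zip(weights, y)) / sum(weights)
```

With highly disproportional designs, a few huge weights dominate `weighted_mean`; capping them stabilizes the estimate, and the bias-variance trade-off depends on how far the cap departs from the fully weighted case.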
Girshick, Ahna R.; Banks, Martin S.
2010-01-01
Depth perception involves combining multiple, possibly conflicting, sensory measurements to estimate the 3D structure of the viewed scene. Previous work has shown that the perceptual system combines measurements using a statistically optimal weighted average. However, the system should only combine measurements when they come from the same source. We asked whether the brain avoids combining measurements when they differ from one another: that is, whether the system is robust to outliers. To do this, we investigated how two slant cues—binocular disparity and texture gradients—influence perceived slant as a function of the size of the conflict between the cues. When the conflict was small, we observed weighted averaging. When the conflict was large, we observed robust behavior: perceived slant was dictated solely by one cue, the other being rejected. Interestingly, the rejected cue was either disparity or texture, and was not necessarily the more variable cue. We modeled the data in a probabilistic framework, and showed that weighted averaging and robustness are predicted if the underlying likelihoods have heavier tails than Gaussians. We also asked whether observers had conscious access to the single-cue estimates when they exhibited robustness and found they did not, i.e. they completely fused despite the robust percepts. PMID:19761341
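The statistically optimal weighted average weights each cue by its inverse variance, so the fused estimate is more reliable than either cue alone; robustness means abandoning this rule when the conflict is large. A crude sketch of the two regimes (the threshold is illustrative, and unlike this fallback, the paper's observers did not always keep the more reliable cue):

```python
def fuse_cues(estimates, sigmas, conflict_threshold=3.0):
    """Inverse-variance weighted cue combination with a simple robust
    fallback: if the two cues disagree by more than the threshold
    (in pooled standard deviations), keep one cue and reject the other."""
    (e1, e2), (s1, s2) = estimates, sigmas
    pooled_sd = (s1 ** 2 + s2 ** 2) ** 0.5
    if abs(e1 - e2) > conflict_threshold * pooled_sd:
        return e1 if s1 < s2 else e2      # veto the outlier cue
    w1, w2 = 1.0 / s1 ** 2, 1.0 / s2 ** 2
    return (w1 * e1 + w2 * e2) / (w1 + w2)
```

The paper's modeling result is that both regimes fall out of a single probabilistic rule once the cue likelihoods have heavier-than-Gaussian tails, rather than requiring an explicit switch like the one above.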
Real Diffusion-Weighted MRI Enabling True Signal Averaging and Increased Diffusion Contrast
Eichner, Cornelius; Cauley, Stephen F; Cohen-Adad, Julien; Möller, Harald E; Turner, Robert; Setsompop, Kawin; Wald, Lawrence L
2015-01-01
This project aims to characterize the impact of underlying noise distributions on diffusion-weighted imaging. The noise floor is a well-known problem for traditional magnitude-based diffusion-weighted MRI (dMRI) data, leading to biased diffusion model fits and inaccurate signal averaging. Here, we introduce a total-variation-based algorithm to eliminate shot-to-shot phase variations of complex-valued diffusion data with the intention to extract real-valued dMRI datasets. The obtained real-valued diffusion data are no longer superimposed by a noise floor but instead by a zero-mean Gaussian noise distribution, yielding dMRI data without signal bias. We acquired high-resolution dMRI data with strong diffusion weighting and, thus, low signal-to-noise ratio. Both the extracted real-valued and traditional magnitude data were compared regarding signal averaging, diffusion model fitting and accuracy in resolving crossing fibers. Our results clearly indicate that real-valued diffusion data enables idealized conditions for signal averaging. Furthermore, the proposed method enables unbiased use of widely employed linear least squares estimators for model fitting and demonstrates an increased sensitivity to detect secondary fiber directions with reduced angular error. The use of phase-corrected, real-valued data for dMRI will therefore help to clear the way for more detailed and accurate studies of white matter microstructure and structural connectivity on a fine scale. PMID:26241680
Analysis of litter size and average litter weight in pigs using a recursive model.
Varona, Luis; Sorensen, Daniel; Thompson, Robin
2007-11-01
An analysis of litter size and average piglet weight at birth in Landrace and Yorkshire using a standard two-trait mixed model (SMM) and a recursive mixed model (RMM) is presented. The RMM establishes a one-way link from litter size to average piglet weight. It is shown that there is a one-to-one correspondence between the parameters of SMM and RMM and that they generate equivalent likelihoods. As parameterized in this work, the RMM tests for the presence of a recursive relationship between additive genetic values, permanent environmental effects, and specific environmental effects of litter size, on average piglet weight. The equivalent standard mixed model tests whether or not the covariance matrices of the random effects have a diagonal structure. In Landrace, posterior predictive model checking supports a model without any form of recursion or, alternatively, a SMM with diagonal covariance matrices of the three random effects. In Yorkshire, the same criterion favors a model with recursion at the level of specific environmental effects only, or, in terms of the SMM, the association between traits is shown to be exclusively due to an environmental (negative) correlation. It is argued that the choice between a SMM or a RMM should be guided by the availability of software, by ease of interpretation, or by the need to test a particular theory or hypothesis that may best be formulated under one parameterization and not the other.
Exponentially Weighted Moving Average Change Detection Around the Country (and the World)
NASA Astrophysics Data System (ADS)
Brooks, E.; Wynne, R. H.; Thomas, V. A.; Blinn, C. E.; Coulston, J.
2014-12-01
With continuous, freely available moderate-resolution imagery of the Earth's surface, and with the promise of more imagery to come, change detection based on continuous process models continues to be a major area of research. One such method, exponentially weighted moving average change detection (EWMACD), is based on a mixture of harmonic regression (HR) and statistical quality control, a branch of statistics commonly used to detect aberrations in industrial and medical processes. By using HR to approximate per-pixel seasonal curves, the resulting residuals characterize information about the pixels which stands outside of the periodic structure imposed by HR. For stable pixels, these residuals behave as might be expected, but in the presence of changes (growth, stress, removal), the residuals clearly show these changes when they are used as inputs into an EWMA chart. In prior work in Alabama, USA, EWMACD yielded an overall accuracy of 85% on a random sample of known thinned stands, in some cases detecting thinnings as sparse as 25% removal. It was also shown to correctly identify the timing of the thinning activity, typically within a single image date of the change. The net result of the algorithm was to produce date-by-date maps of afforestation and deforestation on a variable scale of severity. In other research, EWMACD has also been applied to detect land use and land cover changes in central Java, Indonesia, despite the heavy incidence of clouds and a monsoonal climate. Preliminary results show that EWMACD accurately identifies land use conversion (agricultural to residential, for example) and also identifies neighborhoods where the building density has increased, removing neighborhood vegetation. In both cases, initial results indicate the potential utility of EWMACD to detect both gross and subtle ecosystem disturbance, but further testing across a range of ecosystems and disturbances is clearly warranted.
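The control-chart half of EWMACD is standard statistical quality control: smooth the harmonic-regression residuals with an exponentially weighted moving average and flag dates where it crosses its control limits. A minimal sketch (the λ, L, and σ values are illustrative defaults, not the algorithm's tuned settings):

```python
import math

def ewma_flags(residuals, lam=0.3, L=3.0, sigma=1.0):
    """EWMA control chart over per-date regression residuals: returns
    one True/False disturbance flag per date, using the asymptotic
    control limit L * sigma * sqrt(lam / (2 - lam))."""
    limit = L * sigma * math.sqrt(lam / (2 - lam))
    z, flags = 0.0, []
    for r in residuals:
        z = lam * r + (1 - lam) * z   # exponentially weighted smoothing
        flags.append(abs(z) > limit)
    return flags
```

A persistent shift in the residuals (e.g. canopy removal) pushes the EWMA across the limit within a few dates, while a single moderate outlier is damped by the smoothing, which is why the chart localizes change timing so tightly.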
Scaling of the Average Receiving Time on a Family of Weighted Hierarchical Networks
NASA Astrophysics Data System (ADS)
Sun, Yu; Dai, Meifeng; Sun, Yanqiu; Shao, Shuxiang
2016-08-01
In this paper, based on the un-weighted hierarchical networks, a family of weighted hierarchical networks is introduced; the weight factor is denoted by r. The weighted hierarchical networks depend on the numbers of nodes in a complete bipartite graph, denoted by n1 and n2, with n = n1 + n2. Assume that the walker, at each step, starting from its current node, moves to any of its neighbors with probability proportional to the weight of the edge linking them. We deduce the analytical expression of the average receiving time (ART). The obtained results display two regimes. In the large-network limit, when nr > n1n2, the ART grows as a power-law function of the network size |V(Gk)| with exponent θ = log_n(nr/(n1n2)), 0 < θ < 1. This means that the smaller the value of θ, the more efficient the process of receiving information. When nr ≤ n1n2, the ART grows with increasing order |V(Gk)| as log_n|V(Gk)| or (log_n|V(Gk)|)^2.
Tree-average distances on certain phylogenetic networks have their weights uniquely determined.
Willson, Stephen J
2012-01-01
A phylogenetic network N has vertices corresponding to species and arcs corresponding to direct genetic inheritance from the species at the tail to the species at the head. Measurements of DNA are often made on species in the leaf set, and one seeks to infer properties of the network, possibly including the graph itself. In the case of phylogenetic trees, distances between extant species are frequently used to infer the phylogenetic trees by methods such as neighbor-joining. This paper proposes a tree-average distance for networks more general than trees. The notion requires a weight on each arc measuring the genetic change along the arc. For each displayed tree the distance between two leaves is the sum of the weights along the path joining them. At a hybrid vertex, each character is inherited from one of its parents. We will assume that for each hybrid there is a probability that the inheritance of a character is from a specified parent. Assume that the inheritance events at different hybrids are independent. Then for each displayed tree there will be a probability that the inheritance of a given character follows the tree; this probability may be interpreted as the probability of the tree. The tree-average distance between the leaves is defined to be the expected value of their distance in the displayed trees. For a class of rooted networks that includes rooted trees, it is shown that the weights and the probabilities at each hybrid vertex can be calculated given the network and the tree-average distances between the leaves. Hence these weights and probabilities are uniquely determined. The hypotheses on the networks include that hybrid vertices have indegree exactly 2 and that vertices that are not leaves have a tree-child.
Rong, Y; Sillick, M; Gregson, C M
2009-01-01
Dextrose equivalent (DE) value is the most common parameter used to characterize the molecular weight of maltodextrins. Its theoretical value is inversely proportional to number average molecular weight (M(n)), providing a theoretical basis for correlations with physical properties important to food manufacturing, such as: hygroscopicity, the glass transition temperature, and colligative properties. The use of freezing point osmometry to measure DE and M(n) was assessed. Measurements were made on a homologous series of malto-oligomers as well as a variety of commercially available maltodextrin products with DE values ranging from 5 to 18. Results on malto-oligomer samples confirmed that freezing point osmometry provided a linear response with number average molecular weight. However, noncarbohydrate species in some commercial maltodextrin products were found to be in high enough concentration to interfere appreciably with DE measurement. Energy dispersive spectroscopy showed that sodium and chloride were the major ions present in most commercial samples. Osmolality was successfully corrected using conductivity measurements to estimate ion concentrations. The conductivity correction factor appeared to be dependent on the concentration of maltodextrin. Equations were developed to calculate corrected values of DE and M(n) based on measurements of osmolality, conductivity, and maltodextrin concentration. This study builds upon previously reported results through the identification of the major interfering ions and provides an osmolality correction factor that successfully accounts for the influence of maltodextrin concentration on the conductivity measurement. The resulting technique was found to be rapid, robust, and required no reagents.
Dikaios, Nikolaos; Punwani, Shonit; Hamy, Valentin; Purpura, Pierpaolo; Rice, Scott; Forster, Martin; Mendes, Ruheena; Taylor, Stuart; Atkinson, David
2014-01-01
Purpose: Multiexponential decay parameters are estimated from diffusion-weighted imaging, which generally has inherently low signal-to-noise ratio and non-normal noise distributions, especially at high b-values. Conventional nonlinear regression algorithms assume normally distributed noise, introducing bias into the calculated decay parameters and potentially affecting their ability to classify tumors. This study aims to accurately estimate noise of averaged diffusion-weighted imaging, to correct the noise-induced bias, and to assess the effect upon cancer classification. Methods: A new adaptation of the median-absolute-deviation technique in the wavelet domain, using a closed-form approximation of convolved probability-distribution-functions, is proposed to estimate noise. Nonlinear regression algorithms that account for the underlying noise (maximum probability) fit the biexponential/stretched exponential decay models to the diffusion-weighted signal. A logistic-regression model was built from the decay parameters to discriminate benign from metastatic neck lymph nodes in 40 patients. Results: The adapted median-absolute-deviation method accurately predicted the noise of simulated (R2 = 0.96) and neck diffusion-weighted imaging (averaged once or four times). Maximum probability recovers the true apparent-diffusion-coefficient of the simulated data better than nonlinear regression (up to 40%), whereas no apparent differences were found for the other decay parameters. Conclusions: Perfusion-related parameters were best at cancer classification. Noise-corrected decay parameters did not significantly improve classification for the clinical data set, though simulations show benefit for lower signal-to-noise ratio acquisitions. PMID:23913479
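The median-absolute-deviation technique the paper adapts is, in its classic wavelet-domain form, a one-liner: the noise standard deviation is read off the finest-scale detail coefficients. The sketch below is the standard estimator, not the paper's convolved-PDF adaptation for averaged, non-Gaussian dMRI noise:

```python
import statistics

def mad_sigma(detail_coeffs):
    """Median-absolute-deviation noise estimate from finest-scale
    wavelet detail coefficients: sigma ~= MAD / 0.6745, the constant
    that makes MAD consistent for Gaussian noise."""
    med = statistics.median(detail_coeffs)
    mad = statistics.median(abs(c - med) for c in detail_coeffs)
    return mad / 0.6745
```

Because the median is insensitive to the sparse large coefficients produced by true signal edges, the estimate reflects the noise level rather than the anatomy.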
Calculation of weighted averages approach for the estimation of ping tolerance values
Silalom, S.; Carter, J.L.; Chantaramongkol, P.
2010-01-01
A biotic index was created and proposed as a tool to assess water quality in the Upper Mae Ping sub-watersheds. The Ping biotic index was calculated by utilizing Ping tolerance values. This paper presents the calculation of Ping tolerance values of the collected macroinvertebrates. Ping tolerance values were estimated by a weighted averages approach based on the abundance of macroinvertebrates and six chemical constituents: conductivity, dissolved oxygen, biochemical oxygen demand, ammonia nitrogen, nitrate nitrogen, and orthophosphate. Ping tolerance values range from 0 to 10. Macroinvertebrates assigned a 0 are very sensitive to organic pollution, while those assigned a 10 are highly tolerant of pollution.
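A weighted-averages approach of this kind computes, for each taxon, an abundance-weighted mean of a chemical gradient across the sites where it occurs, then maps the resulting optima onto the 0-10 tolerance scale. A sketch (the linear rescaling step is an assumption, and the paper combines six constituents rather than one):

```python
def weighted_average_optimum(abundances, gradient_values):
    """A taxon's optimum along one environmental gradient: its
    abundance-weighted mean of the gradient values at each site."""
    total = sum(abundances)
    return sum(a * g for a, g in zip(abundances, gradient_values)) / total

def rescale_0_10(optima):
    """Map taxon optima linearly onto the 0-10 tolerance scale
    (assumed final step; the paper's exact scaling may differ)."""
    lo, hi = min(optima), max(optima)
    return [10.0 * (o - lo) / (hi - lo) for o in optima]
```

Taxa whose abundance peaks at clean, high-oxygen sites end up near 0, while taxa thriving at organically enriched sites end up near 10.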
NASA Astrophysics Data System (ADS)
Imai, Takashi; Ota, Kaiichiro; Aoyagi, Toshio
2017-02-01
Phase reduction has been extensively used to study rhythmic phenomena. As a result of phase reduction, the rhythm dynamics of a given system can be described using the phase response curve. Measuring this characteristic curve is an important step toward understanding a system's behavior. Recently, a basic idea for a new measurement method (called the multicycle weighted spike-triggered average method) was proposed. This paper confirms the validity of this method by providing an analytical proof and demonstrates its effectiveness in actual experimental systems by applying the method to an oscillating electric circuit. Some practical tips to use the method are also presented.
Fuzzy weighted average based on left and right scores in Malaysia tourism industry
NASA Astrophysics Data System (ADS)
Kamis, Nor Hanimah; Abdullah, Kamilah; Zulkifli, Muhammad Hazim; Sahlan, Shahrazali; Mohd Yunus, Syaizzal
2013-04-01
Tourism is known as an important sector of the Malaysian economy, acting as an economic generator and creating businesses and jobs. It is reported to bring in almost RM30 billion of national income, thanks to intense worldwide promotion by Tourism Malaysia. One of the well-known attractions in Malaysia is its beautiful islands. The islands continue to be developed into tourist spots, attracting a continuous stream of tourists. Chalets, luxury bungalows, and resorts quickly develop along the coastlines of popular islands like Tioman, Redang, Pangkor, Perhentian, Sibu, and many others. In this study, we applied the Fuzzy Weighted Average (FWA) method based on left and right scores in order to determine the criteria weights and to select the best island in Malaysia. Cost, safety, attractive activities, accommodation, and scenery are the five main criteria considered, and five selected islands in Malaysia are taken into account as alternatives. The most important criteria considered by tourists are identified from the ranking of the criteria weights, and the best island in Malaysia is then determined in terms of FWA values. This pilot study can be used as a reference for evaluating performance or solving selection problems where more criteria, alternatives, and decision makers will be considered in the future.
Equating of Subscores and Weighted Averages under the NEAT Design. Research Report. ETS RR-11-01
ERIC Educational Resources Information Center
Sinharay, Sandip; Haberman, Shelby
2011-01-01
Recently, the literature has seen increasing interest in subscores for their potential diagnostic values; for example, one study suggested the report of weighted averages of a subscore and the total score, whereas others showed, for various operational and simulated data sets, that weighted averages, as compared to subscores, lead to more accurate…
Weighted averages of magnetization from magnetic field measurements: A fast interpretation tool
NASA Astrophysics Data System (ADS)
Fedi, Maurizio
2003-08-01
Magnetic anomalies may be interpreted in terms of weighted averages of magnetization (WAM) by a simple transformation. The WAM transformation consists of dividing at each measurement point the experimental magnetic field by a normalizing field, computed from a source volume with a homogeneous unit-magnetization. The transformation yields a straightforward link among source and field position vectors. A main WAM outcome is that sources at different depths appear well discriminated. Due to the symmetry of the problem, the higher the considered field altitude, the deeper the sources outlined by the transformation. This is shown for single and multi-source synthetic cases as well as for real data. We analyze the real case of Mt. Vulture volcano (Southern Italy), where the related anomaly strongly interferes with that from deep intrusive sources. The volcanic edifice is well identified. The deep source is estimated at about 9 km depth, in agreement with other results.
A Weight-Averaged Interpolation Method for Coupling Time-Accurate Rarefied and Continuum Flows
NASA Astrophysics Data System (ADS)
Diaz, Steven William
A novel approach to coupling rarefied and continuum flow regimes as a single, hybrid model is introduced. The method borrows from techniques used in the simulation of spray flows to interpolate Lagrangian point-particles onto an Eulerian grid in a weight-averaged sense. A brief overview of traditional methods for modeling both rarefied and continuum domains is given, and a review of the literature regarding rarefied/continuum flow coupling is presented. Details of the theoretical development of the method of weighted interpolation are then described. The method evaluates macroscopic properties at the nodes of a CFD grid via the weighted interpolation of all simulated molecules in a set surrounding the node. The weight factor applied to each simulated molecule is the inverse of the linear distance between it and the given node. During development, the method was applied to several preliminary cases, including supersonic flow over an airfoil, subsonic flow over tandem airfoils, and supersonic flow over a backward facing step, all at low Knudsen numbers. The main thrust of the research centered on the time-accurate expansion of a rocket plume into a near-vacuum. The method proves flexible enough to be used with various flow solvers, demonstrated by the use of Fluent as the continuum solver for the preliminary cases and a NASA-developed Large Eddy Simulation research code, WRLES, for the full lunar model. The method is applicable to a wide range of Mach numbers and is completely grid independent, allowing the rarefied and continuum solvers to be optimized for their respective domains without consideration of the other. The work presented demonstrates the validity and flexibility of the method of weighted interpolation as a novel concept in the field of hybrid flow coupling. The method marks a significant divergence from current practices in the coupling of rarefied and continuum flow domains and offers a kernel on which to base an ongoing field of research. It has the
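The weight-averaged interpolation step itself is inverse-distance weighting: each grid node takes a weighted mean over the simulated molecules in its surrounding set, with weights equal to the inverse of the linear distance, exactly as the text describes. A simplified sketch (the particle set selection and any cutoff handling are omitted):

```python
import math

def node_property(node_xyz, particles):
    """Weight-averaged interpolation of Lagrangian point-particles onto
    one Eulerian grid node. `particles` is a list of ((x, y, z), value)
    pairs; each contributes with weight 1 / distance-to-node."""
    num = den = 0.0
    for (x, y, z), value in particles:
        d = math.dist(node_xyz, (x, y, z))
        if d == 0.0:              # particle coincides with the node
            return value
        w = 1.0 / d
        num += w * value
        den += w
    return num / den
```

Because the weighting uses only geometry, the same routine works regardless of which rarefied or continuum solver produced the particles, which is the grid-independence property the abstract emphasizes.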
Chan, Kwun Chuen Gary; Yam, Sheung Chi Phillip; Zhang, Zheng
2016-06-01
The estimation of average treatment effects based on observational data is extremely important in practice and has been studied by generations of statisticians under different frameworks. Existing globally efficient estimators require non-parametric estimation of a propensity score function, an outcome regression function, or both, but their performance can be poor in practical sample sizes. Without explicitly estimating either function, we consider a wide class of calibration weights constructed to attain an exact three-way balance of the moments of observed covariates among the treated, the control, and the combined group. The wide class includes exponential tilting, empirical likelihood, and generalized regression as important special cases, and extends survey calibration estimators to different statistical problems and with important distinctions. Global semiparametric efficiency for the estimation of average treatment effects is established for this general class of calibration estimators. The results show that efficiency can be achieved by solely balancing the covariate distributions, without resorting to direct estimation of the propensity score or outcome regression function. We also propose a consistent estimator for the efficient asymptotic variance, which does not involve additional functional estimation of either the propensity score or the outcome regression functions. The proposed variance estimator outperforms existing estimators that require a direct approximation of the efficient influence function.
Robust HLLC Riemann solver with weighted average flux scheme for strong shock
NASA Astrophysics Data System (ADS)
Kim, Sung Don; Lee, Bok Jik; Lee, Hyoung Jin; Jeung, In-Seuck
2009-11-01
Many researchers have reported failures of the approximate Riemann solvers in the presence of strong shock. This is believed to be due to perturbation transfer in the transverse direction of shock waves. We propose a simple and clear method to prevent such problems for the Harten-Lax-van Leer contact (HLLC) scheme. By defining a sensing function in the transverse direction of strong shock, the HLLC flux is switched to the Harten-Lax-van Leer (HLL) flux in that direction locally, and the magnitude of the additional dissipation is automatically determined using the HLL scheme. We combine the HLLC and HLL schemes in a single framework using a switching function. High-order accuracy is achieved using a weighted average flux (WAF) scheme, and a method for v-shear treatment is presented. The modified HLLC scheme is named HLLC-HLL. It is tested against a steady normal shock instability problem and Quirk's test problems, and spurious solutions in the strong shock regions are successfully controlled.
Ouyang, Gangfeng; Zhao, Wennan; Bragg, Leslie; Qin, Zhipei; Alaee, Mehran; Pawliszyn, Janusz
2007-06-01
In this study, three types of solid-phase microextraction (SPME) passive samplers, including a fiber-retracted device, a polydimethylsiloxane (PDMS) rod and a PDMS membrane, were evaluated to determine the time weighted average (TWA) concentrations of polycyclic aromatic hydrocarbons (PAHs) in Hamilton Harbor (the western tip of Lake Ontario, ON, Canada). Field trials demonstrated that these types of SPME samplers are suitable for the long-term monitoring of organic pollutants in water. These samplers possess all of the advantages of SPME: they are solvent-free; sampling, extraction and concentration are combined into one step; and they can be injected directly into a gas chromatograph (GC) for analysis without further treatment. They also address the additional needs of a passive sampling technique: they are economical, easy to deploy, and the TWA concentrations of target analytes can be obtained with a single sampler. Moreover, the mass uptake of these samplers is independent of face velocity (or the effect can be calibrated), which is desirable for long-term field sampling, especially when the convection conditions of the sampling environment are difficult to measure and calibrate. Among the three types of SPME samplers tested, the PDMS membrane possesses the highest surface-to-volume ratio, which results in the highest sensitivity and mass uptake and the lowest detection level.
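A TWA concentration is recovered from a passive sampler by dividing the analyte mass collected over the deployment by the product of the sampler's calibrated sampling rate and the exposure time. A minimal sketch, with a hypothetical function name and units:

```python
def twa_concentration(mass_ng, sampling_rate_cm3_min, minutes):
    """TWA concentration (ng/cm^3) over a deployment: total analyte
    mass taken up, divided by (sampling rate x exposure time)."""
    return mass_ng / (sampling_rate_cm3_min * minutes)
```

For example, 100 ng collected at 0.5 cm^3/min over 200 min corresponds to a TWA concentration of 1 ng/cm^3.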
Fuzzy Petri nets Using Intuitionistic Fuzzy Sets and Ordered Weighted Averaging Operators.
Liu, Hu-Chen; You, Jian-Xin; You, Xiao-Yue; Su, Qiang
2016-08-01
Fuzzy Petri nets (FPNs) are an important modeling tool for knowledge representation and reasoning and have been used extensively in many fields. However, the conventional FPN models have been criticized in the literature for many shortcomings. Many different models have been suggested to enhance the performance of FPNs, but deficiencies remain. First, the various types of uncertain knowledge provided by domain experts are difficult to model with the existing FPN models. Second, traditional FPNs determine the results of knowledge reasoning using the min, max, and product operators, which may not work well in many practical applications. In this paper, we propose a new type of FPN model based on intuitionistic fuzzy sets and ordered weighted averaging (OWA) operators to address these problems and improve the effectiveness of conventional FPNs. Moreover, a max-algebra-based reasoning algorithm is developed to implement the intuitionistic fuzzy reasoning formally and automatically. Finally, a case study concerning fault diagnosis of an aircraft generator is presented to demonstrate the proposed intuitionistic FPN model. Numerical experiments show that the new FPN model is feasible and quite effective for knowledge representation and reasoning in intuitionistic fuzzy expert systems.
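For reference, a plain OWA operator applies its weights to the ranked (descending) values rather than to particular arguments, so the same weight vector can reproduce the max, the min, or the arithmetic mean. A minimal sketch of the standard operator (the function name is ours; the paper's intuitionistic extension is more involved):

```python
def owa(values, weights):
    """Ordered weighted averaging: the weights apply to the values
    ranked in descending order, not to particular arguments."""
    if abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("OWA weights must sum to 1")
    ranked = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ranked))
```

Weight vector [1, 0, 0] recovers the max, [0, 0, 1] the min, and equal weights the arithmetic mean, which is why OWA can interpolate smoothly between min- and max-style aggregation.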
Iterative weighted average diffusion as a novel external force in the active contour model
NASA Astrophysics Data System (ADS)
Mirov, Ilya S.; Nakhmani, Arie
2016-03-01
The active contour model performs well in boundary extraction for medical images; in particular, the Gradient Vector Flow (GVF) active contour model shows good concavity convergence and insensitivity to initialization, yet it is susceptible to edge leaking and deep, narrow concavities, and has some difficulty handling noisy images. This paper proposes a novel external force, called Iterative Weighted Average Diffusion (IWAD), which, used in tandem with parametric active contours, provides superior performance on images with pronounced concavities. The image gradient is first converted into an edge image, smoothed, and modified with enhanced corner detection; the IWAD algorithm then diffuses the force at a given pixel based on its 3x3 pixel neighborhood. A forgetting factor, φ, is employed so that forces spreading away from the boundary attenuate. The experimental results show better behavior in high-curvature regions, faster convergence, and less edge leaking than GVF when both are compared to expert manual segmentation of the images.
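The neighborhood-averaging step described above can be sketched as a plain 3x3 mean with a forgetting factor. This is an assumption-laden stand-in, not the actual IWAD update, whose weighting and boundary handling the abstract does not specify:

```python
import numpy as np

def iwad_step(force, phi=0.9):
    """One hypothetical IWAD iteration: replace each pixel's force
    value by the mean of its 3x3 neighborhood (edge-padded),
    attenuated by the forgetting factor phi so diffused forces decay."""
    padded = np.pad(force, 1, mode="edge")
    out = np.empty_like(force, dtype=float)
    rows, cols = force.shape
    for i in range(rows):
        for j in range(cols):
            out[i, j] = phi * padded[i:i + 3, j:j + 3].mean()
    return out
```

Iterating this step spreads the edge force into concavities while φ < 1 keeps the diffused field from overwhelming the original boundary forces.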
Time weighted average concentration monitoring based on thin film solid phase microextraction.
Ahmadi, Fardin; Sparham, Chris; Boyaci, Ezel; Pawliszyn, Janusz
2017-03-02
Time weighted average (TWA) passive sampling with thin film solid phase microextraction (TF-SPME) and liquid chromatography tandem mass spectrometry (LC-MS/MS) was used for the collection, identification, and quantification of benzophenone-3, benzophenone-4, 2-phenylbenzimidazole-5-sulphonic acid, octocrylene, and triclosan in the aquatic environment. Two types of TF-SPME passive samplers, a retracted thin film device using a hydrophilic lipophilic balance (HLB) coating and an open-bed configuration with an octadecyl silica-based (C18) coating, were evaluated in an aqueous standard generation (ASG) system. Laboratory calibration results indicated that the retracted thin film device with the HLB coating is suitable for determining TWA concentrations of polar analytes in water, with an uptake that was linear for up to 70 days. In the open-bed form, a one-calibrant kinetic calibration technique was implemented by loading benzophenone-3-d5 as the calibrant on the C18 coating to quantify all non-polar compounds. The experimental results showed that the one-calibrant kinetic calibration technique can be used to determine classes of compounds in cases where deuterated counterparts are unavailable or expensive. The developed passive samplers were deployed in wastewater-dominated reaches of the Grand River (Kitchener, ON) to verify their feasibility for determining TWA concentrations in on-site applications. Field trial results indicated that these devices are suitable for long-term and short-term monitoring of compounds of varying polarity, such as UV blockers and biocides in water, and the data were in good agreement with literature values.
A thickness-weighted average perspective of force balance in an idealized circumpolar current
Ringler, Todd Darwin; Saenz, Juan Antonio; Wolfram, Jr., Phillip Justin; Roekel, Luke Van
2016-11-22
The exact, three-dimensional thickness-weighted averaged (TWA) Boussinesq equations are used to diagnose eddy-mean flow interaction in an idealized circumpolar current (ICC). The force exerted by mesoscale eddies on the TWA velocity is expressed as the divergence of the Eliassen-Palm flux tensor. Consistent with previous findings, the analysis indicates that the dynamically relevant definition of the ocean surface layer comprises the set of buoyancy coordinates that ever reside at the ocean surface at a given horizontal position. The surface layer is found to be a physically distinct object with a diabatic and force balance that is largely isolated from the underlying adiabatic region in the interior. Within the ICC surface layer, the TWA meridional velocity is southward/northward in the top/bottom half, and has a value near zero at the bottom. In the top half of the surface layer, the zonal forces due to wind stress and meridional advection of potential vorticity act to accelerate the TWA zonal velocity; equilibrium is obtained by eddies decelerating the zonal flow via a downward flux of eastward momentum that increases with depth. In the bottom half of the surface layer, the accelerating force of the wind stress is balanced by the eddy force and meridional advection of potential vorticity. The bottom of the surface layer coincides with the location where the zonal eddy force, meridional advection of potential vorticity and zonal wind stress force are all zero. The net meridional transport, S_{f}, within the surface layer is a small residual of its southward and northward TWA meridional flows. Furthermore, the mean meridional gradient of surface-layer buoyancy is advected by S_{f} to balance the surface buoyancy fluxes.
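The thickness-weighted average itself is simple: a field u is averaged with the layer thickness h as the weight, hat(u) = <h u> / <h>, so thick layers contribute more than thin ones. A minimal numerical sketch (function name ours):

```python
import numpy as np

def thickness_weighted_average(h, u):
    """Thickness-weighted average of a field u with layer thickness h:
    hat(u) = <h u> / <h>."""
    h = np.asarray(h, dtype=float)
    u = np.asarray(u, dtype=float)
    return (h * u).mean() / h.mean()
```

When all thicknesses are equal the TWA reduces to the plain mean; otherwise it shifts toward the values held in the thicker layers.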
Uncertainty and variability in historical time-weighted average exposure data.
Davis, Adam J; Strom, Daniel J
2008-02-01
Beginning around 1940, private companies began processing uranium and thorium ore, compounds, and metals for the Manhattan Engineer District and later the U.S. Atomic Energy Commission (AEC). Personnel from the AEC's Health and Safety Laboratory (HASL) visited many of the plants to assess worker exposures to radiation and radioactive materials. They developed a time-and-task approach to estimating "daily weighted average" (DWA) concentrations of airborne uranium, thorium, radon, and radon decay products. While short-term exposures greater than 10^5 dpm m^-3 of uranium and greater than 10^5 pCi L^-1 of radon were observed, DWA concentrations were much lower. The HASL-reported DWA values may be used as inputs for dose reconstruction in support of compensation decisions, but they have no numerical uncertainties associated with them. In this work, Monte Carlo methods are used retrospectively to assess the uncertainty and variability in the DWA values for 63 job titles from five different facilities that processed U, U ore, Th, or 226Ra-222Rn between 1948 and 1955. Most groups of repeated air samples are well described by lognormal distributions. Combining samples associated with different tasks often results in a reduction of the geometric standard deviation (GSD) of the DWA to less than the GSD values typical of individual tasks. Results support the assumption of a GSD value of 5 when information on uncertainty in DWA exposures is unavailable. Blunders involving arithmetic, transposition, and transcription are found in many of the HASL reports. In 5 out of the 63 cases, these mistakes result in overestimates of DWA values by a factor of 2 to 2.5, and in 2 cases DWA values are underestimated by factors of 3 to 10.
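The time-and-task DWA is the exposure-time-weighted mean of task concentrations, and the Monte Carlo step draws task concentrations from fitted lognormals parameterized by geometric mean and GSD. A minimal sketch (function names and parameters are illustrative, not HASL's actual procedure):

```python
import math
import random

def daily_weighted_average(hours, concentrations):
    """Time-and-task DWA: exposure-time-weighted mean concentration."""
    return sum(t * c for t, c in zip(hours, concentrations)) / sum(hours)

def sample_lognormal(gm, gsd, rng):
    """Draw one concentration from a lognormal distribution specified
    by its geometric mean (gm) and geometric standard deviation (gsd)."""
    return gm * math.exp(rng.gauss(0.0, math.log(gsd)))
```

Repeating the draw for each task and recomputing the DWA many times yields an empirical uncertainty distribution for the DWA itself.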
Wingard, G.L.; Hudley, J.W.
2012-01-01
A molluscan analogue dataset is presented in conjunction with a weighted-averaging technique as a tool for estimating past salinity patterns in south Florida’s estuaries and for developing restoration targets based on these reconstructions. The method, here referred to as cumulative weighted percent (CWP), was tested using modern surficial samples collected in Florida Bay from sites located near fixed water monitoring stations that record salinity. The results were calibrated using species weighting factors derived from species occurrence patterns. A comparison of the resulting calibrated species-weighted CWP (SW-CWP) to the observed salinity at the water monitoring stations, averaged over a 3-year period, indicates that, on average, the SW-CWP estimate comes within two salinity units of the observed salinity. SW-CWP reconstructions were then conducted on a core from near the mouth of Taylor Slough to illustrate the application of the method.
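The weighted-averaging idea behind CWP-type reconstructions can be sketched as an abundance-weighted (optionally species-weighted) mean of taxon salinity optima. The function below is a generic transfer-function sketch under those assumptions, not the authors' exact CWP formula:

```python
def weighted_average_salinity(abundances, optima, weights=None):
    """Weighted-averaging reconstruction: the salinity estimate is the
    abundance-weighted (and optionally species-weighted) mean of the
    taxa's salinity optima."""
    if weights is None:
        weights = [1.0] * len(abundances)
    num = sum(a * w * s for a, w, s in zip(abundances, weights, optima))
    den = sum(a * w for a, w in zip(abundances, weights))
    return num / den
```

The species weighting factors shift the estimate toward taxa whose occurrence patterns make them more reliable salinity indicators.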
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-19
... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF TRANSPORTATION Federal Transit Administration 49 CFR Part 665 RIN 2132-AB01 Bus Testing: Calculation of Average Passenger... call to address issues concerning its notice of proposed rulemaking (NPRM) regarding the calculation...
Area-averaged profiles over the mock urban setting test array
Nelson, M. A.; Brown, M. J.; Pardyjak, E. R.; Klewicki, J. C.
2004-01-01
Urban areas have a large effect on the local climate and meteorology. Efforts have been made to incorporate the bulk dynamic and thermodynamic effects of urban areas into mesoscale models (e.g., Chin et al., 2000; Holt et al., 2002; Lacser and Otte, 2002). At this scale buildings cannot be resolved individually, but parameterizations have been developed to capture their aggregate effect. These urban canopy parameterizations have been designed to account for the area-averaged drag, turbulent kinetic energy (TKE) production, and surface energy balance modifications due to buildings (e.g., Sorbjan and Uliasz, 1982; Ca, 1999; Brown, 2000; Martilli et al., 2002). These models compute an area-averaged mean profile that is representative of the bulk flow characteristics over the entire mesoscale grid cell. One difficulty has been testing these parameterizations, owing to a lack of area-averaged data. In this paper, area-averaged velocity and turbulent kinetic energy profiles are derived from data collected at the Mock Urban Setting Test (MUST). The MUST experiment was designed as a near full-scale model of an idealized urban area embedded in the Atmospheric Surface Layer (ASL). Its purpose was to study airflow and plume transport in urban areas and to provide a test case for model validation. A large number of velocity measurements were taken at the test site, making it possible to derive area-averaged velocity and TKE profiles.
Bacillus subtilis 168 levansucrase (SacB) activity affects average levan molecular weight.
Porras-Domínguez, Jaime R; Ávila-Fernández, Ángela; Miranda-Molina, Afonso; Rodríguez-Alegría, María Elena; Munguía, Agustín López
2015-11-05
Levan is a fructan polymer with a variety of applications in the chemical, health, cosmetic and food industries. Most levan applications depend on levan molecular weight, which in turn depends on the source of the synthesizing enzyme and/or on reaction conditions. Here we demonstrate that, in the particular case of levansucrase from Bacillus subtilis 168, enzyme concentration is also a factor defining the levan molecular weight distribution. While a bimodal distribution has been reported at the usual enzyme concentrations (1 U/ml, equivalent to 0.1 μM levansucrase), we found that a low-molecular-weight normal distribution is obtained only at high enzyme concentrations (>5 U/ml, equivalent to 0.5 μM levansucrase), while a high-molecular-weight normal distribution is synthesized at low enzyme doses (0.1 U/ml, equivalent to 0.01 μM levansucrase).
López-Soria, S; Sibila, M; Nofrarías, M; Calsamiglia, M; Manzanilla, E G; Ramírez-Mendoza, H; Mínguez, A; Serrano, J M; Marín, O; Joisel, F; Charreyre, C; Segalés, J
2014-12-05
Porcine circovirus type 2 (PCV2) is a ubiquitous virus that mainly affects nursery and fattening pigs, causing systemic disease (PCV2-SD) or subclinical infection. A characteristic sign in both presentations is a reduction of average daily weight gain (ADWG). The present study aimed to assess the relationship between PCV2 load in serum and ADWG from 3 (weaning) to 21 weeks of age (slaughter) (ADWG 3-21). Three different boar lines were used to inseminate sows from two PCV2-SD affected farms. One or two pigs per sow were selected (60, 61 and 51 piglets from Pietrain, Pietrain×Large White and Duroc×Large White boar lines, respectively). Pigs were bled at 3, 9, 15 and 21 weeks of age and weighed at 3 and 21 weeks. The area under the curve of the viral load at all sampling times (AUCqPCR 3-21) was calculated for each animal from standard real-time quantitative PCR results; this variable was categorized as "negative or low" (<10^4.3 PCV2 genome copies/ml of serum), "medium" (≥10^4.3 to ≤10^5.3) and "high" (>10^5.3). Data regarding sex, PCV2 antibody titre at weaning and sow parity were also collected. A generalized linear model showed that paternal genetic line and AUCqPCR 3-21 were related to ADWG 3-21. ADWG 3-21 (mean ± standard error) for the "negative or low", "medium" and "high" AUCqPCR 3-21 categories was 672±9, 650±12 and 603±16 g/day, respectively, with significant differences among them. This study describes different ADWG performances in three pig populations that suffered from different degrees of PCV2 viraemia.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 19 Customs Duties 3 2011-04-01 2011-04-01 false De minimis net countervailable subsidies and... ADMINISTRATION, DEPARTMENT OF COMMERCE ANTIDUMPING AND COUNTERVAILING DUTIES Scope and Definitions § 351.106 De... practice of disregarding net countervailable subsidies or weighted-average dumping margins that were...
Code of Federal Regulations, 2013 CFR
2013-04-01
... 19 Customs Duties 3 2013-04-01 2013-04-01 false De minimis net countervailable subsidies and... ADMINISTRATION, DEPARTMENT OF COMMERCE ANTIDUMPING AND COUNTERVAILING DUTIES Scope and Definitions § 351.106 De... practice of disregarding net countervailable subsidies or weighted-average dumping margins that were...
Code of Federal Regulations, 2012 CFR
2012-04-01
... 19 Customs Duties 3 2012-04-01 2012-04-01 false De minimis net countervailable subsidies and... ADMINISTRATION, DEPARTMENT OF COMMERCE ANTIDUMPING AND COUNTERVAILING DUTIES Scope and Definitions § 351.106 De... practice of disregarding net countervailable subsidies or weighted-average dumping margins that were...
Code of Federal Regulations, 2014 CFR
2014-04-01
... 19 Customs Duties 3 2014-04-01 2014-04-01 false De minimis net countervailable subsidies and... ADMINISTRATION, DEPARTMENT OF COMMERCE ANTIDUMPING AND COUNTERVAILING DUTIES Scope and Definitions § 351.106 De... practice of disregarding net countervailable subsidies or weighted-average dumping margins that were...
47 CFR 36.622 - National and study area average unseparated loop costs.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 2 2011-10-01 2011-10-01 false National and study area average unseparated... Universal Service Fund Calculation of Loop Costs for Expense Adjustment § 36.622 National and study area... provided in paragraph (c) of this section, this is equal to the sum of the Loop Costs for each study...
47 CFR 36.622 - National and study area average unseparated loop costs.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 47 Telecommunication 2 2012-10-01 2012-10-01 false National and study area average unseparated...-Cost Loop Support Calculation of Loop Costs for Expense Adjustment § 36.622 National and study area... provided in paragraph (c) of this section, this is equal to the sum of the Loop Costs for each study...
47 CFR 36.622 - National and study area average unseparated loop costs.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 47 Telecommunication 2 2013-10-01 2013-10-01 false National and study area average unseparated...-Cost Loop Support Calculation of Loop Costs for Expense Adjustment § 36.622 National and study area... provided in paragraph (c) of this section, this is equal to the sum of the Loop Costs for each study...
Full-custom design of split-set data weighted averaging with output register for jitter suppression
NASA Astrophysics Data System (ADS)
Jubay, M. C.; Gerasta, O. J.
2015-06-01
A full-custom design of an element selection algorithm, named Split-set Data Weighted Averaging (SDWA), is implemented in a 90 nm CMOS technology Synopsys library. SDWA is applied to seven unit elements (3-bit) using a thermometer-coded input. Split-set DWA is an improved DWA algorithm that caters to the requirement for randomization along with long-term equal element usage. Randomization and equal element usage improve the spectral response of the unit elements, yielding a higher spurious-free dynamic range (SFDR) without significantly degrading the signal-to-noise ratio (SNR). Being a full-custom design, it is carried to the transistor level and a custom chip layout is provided, with a total area of 0.3 mm^2 and a power consumption of 0.566 mW, simulated at a 50 MHz clock frequency. In this implementation, SDWA is further improved by introducing a register at the output that suppresses the jitter introduced at the final stage by switching loops and successive delays.
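For context, conventional DWA element selection rotates a pointer through the unit elements so that each element is used equally often over time, first-order shaping the element mismatch; split-set DWA modifies this scheme. A minimal sketch of the conventional rotation only (not the SDWA variant itself):

```python
def dwa_select(code, pointer, n_elements=7):
    """Conventional DWA element selection for a thermometer-coded
    input: enable `code` unit elements starting at `pointer`
    (wrapping around), then advance the pointer so that all elements
    are used equally often over time."""
    selected = [(pointer + k) % n_elements for k in range(code)]
    return selected, (pointer + code) % n_elements
```

Successive codes pick up where the previous selection left off, so no element is favored in the long run.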
Kim, Tad; Rivara, Frederick P; Mozingo, David W; Lottenberg, Lawrence; Harris, Zachary B; Casella, George; Liu, Huazhi; Moldawer, Lyle L; Efron, Philip A; Ang, Darwin N
2015-01-01
Objective The state of Florida has some of the most dangerous highways in the USA. In 2006, Florida averaged 1.65 fatalities per 100 million vehicle miles travelled (VMT) compared with the national average of 1.42. A study was undertaken to find a method of identifying counties that contributed to the most driver fatalities after a motor vehicle collision (MVC). By regionalising interventions unique to this subset of counties, the use of resources would have the greatest potential of improving statewide driver death. Methods The Florida Highway Safety Motor Vehicle database 2000–2006 was used to calculate driver VMT-weighted deaths by county. A total of 3 468 326 motor vehicle crashes were evaluated. Counties that had driver death rates higher than the state average were sorted by a weighted averages method. Multivariate regression was used to calculate the likelihood of death for various risk factors. Results VMT-weighted death rates identified 12 out of 67 counties that contributed up to 50% of overall driver fatalities. These counties were primarily clustered in central and south Florida. The strongest independent risk factors for driver death attributable to MVC in these high-risk counties were alcohol/drug use, rural roads, speed limit ≥45 mph, adverse weather conditions, divided highways, vehicle type, vehicle defects and roadway location. Conclusions Using the weighted averages method, a small subset of counties contributing to the majority of statewide driver fatalities was identified. Regionalised interventions on specific risk factors in these counties may have the greatest impact on reducing driver-related MVC fatalities. PMID:21685144
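The VMT normalization used above is a simple rate calculation: deaths divided by vehicle miles travelled, scaled to 100 million miles. A sketch with an illustrative function name:

```python
def fatalities_per_100m_vmt(deaths, vehicle_miles):
    """Driver fatality rate normalized per 100 million vehicle miles
    travelled, the standard exposure-adjusted comparison unit."""
    return deaths / vehicle_miles * 1e8
```

Normalizing by VMT lets counties with very different traffic volumes be ranked on comparable footing before the weighted-averages sort.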
On the theory relating changes in area-average and pan evaporation (Invited)
NASA Astrophysics Data System (ADS)
Shuttleworth, W.; Serrat-Capdevila, A.; Roderick, M. L.; Scott, R.
2009-12-01
Theory relating changes in area-average evaporation with changes in the evaporation from pans or open water is developed. Such changes can arise by Type (a) processes related to large-scale changes in atmospheric concentrations and circulation that modify surface evaporation rates in the same direction, and Type (b) processes related to coupling between the surface and atmospheric boundary layer (ABL) at the landscape scale that usually modify area-average evaporation and pan evaporation in different directions. The interrelationship between evaporation rates in response to Type (a) changes is derived. They have the same sign and broadly similar magnitude but the change in area-average evaporation is modified by surface resistance. As an alternative to assuming the complementary evaporation hypothesis, the results of previous modeling studies that investigated surface-atmosphere coupling are parameterized and used to develop a theoretical description of Type (b) coupling via vapor pressure deficit (VPD) in the ABL. The interrelationship between appropriately normalized pan and area-average evaporation rates is shown to vary with temperature and wind speed but, on average, the Type (b) changes are approximately equal and opposite. Long-term Australian pan evaporation data are analyzed to demonstrate the simultaneous presence of Type (a) and (b) processes, and observations from three field sites in southwestern USA show support for the theory describing Type (b) coupling via VPD. England's victory over Australia in 2009 Ashes cricket test match series will not be mentioned.
Vauchel, Peggy; Arhaliass, Abdellah; Legrand, Jack; Kaas, Raymond; Baron, Régis
2008-04-01
Alginates are natural polysaccharides that are extracted from brown seaweeds and widely used for their rheological properties. The central step in the extraction protocol used in the alginate industry is the alkaline extraction, which requires several hours. In this study, a significant decrease in alginate dynamic viscosity was observed after 2 h of alkaline treatment. Intrinsic viscosity and average molecular weight of alginates from alkaline extractions 1-4 h in duration were determined, indicating depolymerization of alginates: average molecular weight decreased significantly during the extraction, falling by a factor of 5 between 1 and 4 h of extraction. These results suggested that reducing extraction time could enable preserving the rheological properties of the extracted alginates.
Perrakis, A; Sixma, T K; Wilson, K S; Lamzin, V S
1997-07-01
wARP is a procedure that substantially improves crystallographic phases (and subsequently electron-density maps) as an additional step after density-modification methods such as solvent flattening and averaging. The initial phase set is used to create a number of dummy-atom models, which are subjected to least-squares or maximum-likelihood refinement and iterative model updating in an automated refinement procedure (ARP). Averaging the phase sets calculated from the refined output models, and weighting structure factors by their similarity to an average vector, yields a phase set that substantially improves and extends the initial phases. An important requirement is that the native data extend to a maximum resolution beyond approximately 2.4 Å. The wARP procedure shortens the time-consuming step of model building in crystallographic structure determination and helps to prevent the introduction of errors.
Average coherence image derived observations over an urban area: the case of Athens city
NASA Astrophysics Data System (ADS)
Parcharidis, I.; Foumelis, M.; Kourkouli, P.
2007-10-01
In the present study, coherence observations in relation to land-cover type are presented, obtained using 20 C-band ERS SAR Single Look Complex (SLC) VV-polarization images acquired in descending mode over the metropolitan area of Athens during the period 1992-1999. A straightforward approach was adopted in which a single master SAR image is used, onto which the other images are mapped, ensuring perfect registration of the interferometric results. After generating single coherence images, with temporal separations varying between 138 and 1335 days, an averaging procedure produced the average coherence image. To identify and statistically interpret the properties of selected land-cover types in terms of average degree of coherence, very high resolution QuickBird imagery was obtained from the Google Earth environment. The final geocoding of the average coherence image was improved using features common to the coherence image and the very high resolution QuickBird image. Overlaying the coherence product on the QuickBird image allows the level of coherence to be correlated with characteristics and properties of the urban fabric. Because urban areas generally exhibit high coherence, observations of this type permit investigation and evaluation of their phase stability in detail.
Shih, H C; Tsai, S W; Kuo, C H
2012-01-01
A solid-phase microextraction (SPME) device was used as a diffusive sampler for airborne propylene glycol ethers (PGEs), including propylene glycol monomethyl ether (PGME), propylene glycol monomethyl ether acetate (PGMEA), and dipropylene glycol monomethyl ether (DPGME). A Carboxen-polydimethylsiloxane (CAR/PDMS) SPME fiber was selected for this study. Polytetrafluoroethylene (PTFE) tubing was used as the holder, and the SPME fiber assembly was inserted into the tubing as a diffusive sampler. The diffusion path length and area of the sampler were 0.3 cm and 0.00086 cm^2, respectively. The theoretical sampling constants at 30°C and 1 atm for PGME, PGMEA, and DPGME were 1.50 × 10^-2, 1.23 × 10^-2 and 1.14 × 10^-2 cm^3 min^-1, respectively. For the evaluations, known concentrations of PGEs around the threshold limit value/time-weighted average, at specific relative humidities (10% and 80%), were generated both by the air-bag method and by a dynamic generation system, and 15, 30, 60, 120, and 240 min were selected as the time periods for vapor exposures. Comparisons of the SPME diffusive sampling method to Occupational Safety and Health Administration (OSHA) organic Method 99 were performed side by side in an exposure chamber at 30°C for PGME. A gas chromatography/flame ionization detector (GC/FID) was used for sample analysis. The experimental sampling constants of the sampler at 30°C were (6.93 ± 0.12) × 10^-1, (4.72 ± 0.03) × 10^-1, and (3.29 ± 0.20) × 10^-1 cm^3 min^-1 for PGME, PGMEA, and DPGME, respectively. Adsorption of the chemicals on the stainless steel needle of the SPME fiber was suspected to be one of the reasons for the significant differences observed between theoretical and experimental sampling rates. Correlations between the results for PGME from the SPME device and OSHA organic Method 99 were linear (r = 0.9984) and consistent (slope = 0.97 ± 0.03). Face velocity (0-0.18 m/s) also proved to have no effects on the sampler
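The theoretical sampling constant of a diffusive sampler follows Fick's first law, SR = D·A/L, and the TWA concentration is the collected mass divided by SR times the exposure time. A sketch with illustrative numbers (the diffusion coefficient below is not one of the paper's measured values):

```python
def sampling_constant(diff_coeff_cm2_min, area_cm2, path_cm):
    """Theoretical diffusive sampling constant SR = D * A / L
    (cm^3/min), from the diffusion coefficient, cross-sectional
    area, and diffusion path length."""
    return diff_coeff_cm2_min * area_cm2 / path_cm

def twa_from_uptake(mass_ng, sr_cm3_min, minutes):
    """TWA concentration (ng/cm^3) from the mass collected."""
    return mass_ng / (sr_cm3_min * minutes)
```

A small area-to-path-length ratio, as in the 0.00086 cm^2 / 0.3 cm geometry above, is what keeps the sampling constant on the order of 10^-2 cm^3/min.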
Estimation of the Area of a Reverberant Plate Using Average Reverberation Properties
NASA Astrophysics Data System (ADS)
Achdjian, Hossep; Moulin, Emmanuel; Benmeddour, Farouk; Assaad, Jamal
This paper presents an original method for estimating the area of thin plates of arbitrary geometrical shape. The method relies on the acquisition and ensemble processing of reverberated elastic signals from a few sensors. The acoustical Green's function in a reverberant solid medium is modeled by a nonstationary random process based on the image-sources method. In that way, mathematical expectations of the signal envelopes can be analytically related to reverberation properties and structural parameters such as plate area, group velocity, or source-receiver distance. A simple curve fit applied to an ensemble average over N realizations of the late envelopes then allows estimation of a global term involving the values of the structural parameters. From simple statistical modal arguments, it is shown that the obtained relation depends on the plate area and not on the plate shape. Finally, by considering an additional relation obtained from the early characteristics (treated in a deterministic way) of the reverberation signals, it is possible to deduce the area value. This estimation is performed without geometrical measurements and requires access to only a small portion of the plate. Furthermore, the method requires neither time measurement nor trigger synchronization between the input channels of the instrumentation (between measured signals), implying low hardware constraints. Experimental results obtained on metallic plates with free boundary conditions and on embedded window glasses are presented. Areas of up to several square meters are correctly estimated with a relative error of a few percent.
Paten, A M; Pain, S J; Peterson, S W; Lopez-Villalobos, N; Kenyon, P R; Blair, H T
2016-11-21
The foetal mammary gland is sensitive to maternal weight and nutrition during gestation, which could affect offspring milk production. It has previously been shown that ewes born to dams offered maintenance nutrition during pregnancy (day 21 to 140 of gestation) produced greater milk, lactose and CP yields in their first lactation when compared with ewes born to dams offered ad libitum nutrition. In addition, ewes born to heavier dams produced greater milk and lactose yields when compared with ewes born to lighter dams. The objective of this study was to analyse and compare the 5-year lactation performance of the previously mentioned ewes, born to heavy or light dams that were offered maintenance or ad libitum pregnancy nutrition. Ewes were milked once per week, for the first 6 weeks of their lactation, for 5 years. Using milk yield and composition data, accumulated yields were calculated over a 42-day period for each year for milk, milk fat, CP, true protein, casein and lactose using a Legendre orthogonal polynomial model. Over the 5-year period, ewes born to heavy dams produced greater average milk (P=0.04), lactose (P=0.01) and CP (P=0.04) yields than offspring born to light dams. In contrast, over the 5-year period dam nutrition during pregnancy did not affect average (P>0.05) offspring milk yields or composition, but did increase milk and lactose accumulated yields (P=0.03 and 0.01, respectively) in the first lactation. These results indicate that maternal gestational nutrition appears to affect only the first lactational performance of ewe offspring. Neither dam nutrition nor size affected grand-offspring live-weight gain to weaning or live weight at weaning (P>0.05). Combined, these data indicate that under the conditions of the present study, manipulating dam weight or nutrition in pregnancy can have some effects on offspring lactational performance; however, these effects are not large enough to alter grand-offspring growth to weaning. Therefore, such manipulations
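The accumulated-yield calculation can be sketched as fitting a Legendre orthogonal polynomial to the weekly test-day yields and integrating the fitted curve over the 42-day period. The weekly yields below are hypothetical, and the polynomial order is an assumption, not the study's fitted model:

```python
import numpy as np
from numpy.polynomial import legendre as leg

# Hypothetical once-weekly milk yields (litres/day) at days 7..42 of lactation
days = np.array([7.0, 14.0, 21.0, 28.0, 35.0, 42.0])
milk = np.array([2.1, 2.4, 2.3, 2.0, 1.8, 1.5])

# Fit a low-order Legendre orthogonal polynomial to the test-day yields
# (the Legendre class maps [7, 42] onto the standard domain internally).
curve = leg.Legendre.fit(days, milk, deg=2)

# Accumulated 42-day yield = integral of the fitted daily-yield curve
antideriv = curve.integ()
accumulated = antideriv(42.0) - antideriv(0.0)   # litres over days 0..42
```

Integrating the fitted curve rather than summing raw test-day records smooths over the sparse weekly sampling, which is the usual rationale for this model.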
Time scales and variability of area-averaged tropical oceanic rainfall
NASA Technical Reports Server (NTRS)
Shin, Kyung-Sup; North, Gerald R.; Ahn, Yoo-Shin; Arkin, Phillip A.
1990-01-01
A statistical analysis of time series of area-averaged rainfall over the oceans has been conducted around the diurnal time scale. The results of this analysis can be applied directly to the problem of establishing the magnitude of expected errors to be incurred in the estimation of monthly area-averaged rain rate from low orbiting satellites. Such statistics as the mean, standard deviation, integral time scale of background red noise, and spectral analyses were performed on time series of the GOES precipitation index taken at 3-hour intervals during the period spanning December 19, 1987 to March 31, 1988 over the central and eastern tropical Pacific. The analyses have been conducted on 2.5 x 2.5 deg and 5 x 5 deg grid boxes, separately. The study shows that rainfall measurements by a sun-synchronous satellite visiting a spot twice per day will include a bias due to the existence of the semidiurnal cycle in the SPCZ ranging from 5 to 10 percentage points. The bias in the ITCZ may be of the order of 5 percentage points.
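One of the quoted statistics, the integral time scale of the background red noise, can be estimated from the lag-1 autocorrelation of the 3-hourly series. A sketch on a synthetic AR(1) ("red noise") stand-in for an area-averaged rain-rate series; the autocorrelation value is assumed, not taken from the GOES data:

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 3.0    # sampling interval, hours (GPI taken at 3-h intervals)
phi = 0.7   # lag-1 autocorrelation of the red-noise model (assumed)

# Synthetic AR(1) red-noise series
n = 5000
x = np.zeros(n)
for i in range(1, n):
    x[i] = phi * x[i - 1] + rng.normal()

xa = x - x.mean()
r1 = np.dot(xa[:-1], xa[1:]) / np.dot(xa, xa)  # estimated lag-1 autocorrelation

# Integral time scale of an AR(1) process: T = dt * (1 + r1) / (1 - r1)
T = dt * (1.0 + r1) / (1.0 - r1)
```

The integral time scale sets how many effectively independent samples a month of 3-hourly data contains, which is what governs the sampling error of a monthly area-averaged rain rate.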
High surface area, low weight composite nickel fiber electrodes
NASA Technical Reports Server (NTRS)
Johnson, Bradley A.; Ferro, Richard E.; Swain, Greg M.; Tatarchuk, Bruce J.
1993-01-01
The energy density and power density of light weight aerospace batteries utilizing the nickel oxide electrode are often limited by the microstructures of both the collector and the resulting active deposit in/on the collector. Heretofore, these two microstructures were intimately linked to one another by the materials used to prepare the collector grid as well as the methods and conditions used to deposit the active material. Significant weight and performance advantages were demonstrated by Britton and Reid at NASA-LeRC using FIBREX nickel mats of ca. 28-32 microns diameter. Work in our laboratory investigated the potential performance advantages offered by nickel fiber composite electrodes containing a mixture of fibers as small as 2 microns diameter (Available from Memtec America Corporation). These electrode collectors possess in excess of an order of magnitude more surface area per gram of collector than FIBREX nickel. The increase in surface area of the collector roughly translates into an order of magnitude thinner layer of active material. Performance data and advantages of these thin layer structures are presented. Attributes and limitations of their electrode microstructure to independently control void volume, pore structure of the Ni(OH)2 deposition, and resulting electrical properties are discussed.
NASA Astrophysics Data System (ADS)
Gruber, Matthew; Fochesatto, Gilberto J.
2013-07-01
Scintillometer measurements of the turbulence inner-scale length l_o and refractive index structure function C_n^2 allow for the retrieval of large-scale area-averaged turbulent fluxes in the atmospheric surface layer. This retrieval involves the solution of the non-linear set of equations defined by the Monin-Obukhov similarity hypothesis. A new method that uses an analytic solution to the set of equations is presented, which leads to a stable and efficient numerical method of computation that has the potential of eliminating computational error. Mathematical expressions are derived that map out the sensitivity of the turbulent flux measurements to uncertainties in source measurements such as l_o. These sensitivity functions differ from results in the previous literature; the reasons for the differences are explored.
NASA Astrophysics Data System (ADS)
Davies, G. R.; Chaplin, W. J.; Elsworth, Y.; Hale, S. J.
2014-07-01
The Birmingham Solar Oscillations Network (BiSON) has provided high-quality high-cadence observations from as far back in time as 1978. These data must be calibrated from the raw observations into radial velocity and the quality of the calibration has a large impact on the signal-to-noise ratio of the final time series. The aim of this work is to maximize the potential science that can be performed with the BiSON data set by optimizing the calibration procedure. To achieve better levels of signal-to-noise ratio, we perform two key steps in the calibration process: we attempt a correction for terrestrial atmospheric differential extinction; and the resulting improvement in the calibration allows us to perform weighted averaging of contemporaneous data from different BiSON stations. The improvements listed produce significant improvement in the signal-to-noise ratio of the BiSON frequency-power spectrum across all frequency ranges. The reduction of noise in the power spectrum will allow future work to provide greater constraint on changes in the oscillation spectrum with solar activity. In addition, the analysis of the low-frequency region suggests that we have achieved a noise level that may allow us to improve estimates of the upper limit of g-mode amplitudes.
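Weighted averaging of contemporaneous data from different stations is typically done with inverse-variance weights, so that quieter stations dominate. A minimal sketch with hypothetical radial-velocity samples and noise estimates (not BiSON's actual calibration pipeline):

```python
import numpy as np

# Simultaneous radial-velocity samples from three stations, m/s (hypothetical)
v = np.array([412.3, 411.8, 412.9])
sigma = np.array([0.8, 1.5, 1.1])   # per-station noise estimates, m/s (hypothetical)

# Inverse-variance weighted mean and its formal uncertainty
w = 1.0 / sigma**2
v_avg = np.sum(w * v) / np.sum(w)
sigma_avg = np.sqrt(1.0 / np.sum(w))   # always below the best single station
```

The noise of the weighted mean is lower than that of any individual station, which is the mechanism behind the signal-to-noise improvement described above.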
Woolcock, Patrick J; Koziel, Jacek A; Cai, Lingshuang; Johnston, Patrick A; Brown, Robert C
2013-03-15
Time-weighted average (TWA) passive sampling using solid-phase microextraction (SPME) and gas chromatography was investigated as a new method of collecting, identifying and quantifying contaminants in process gas streams. Unlike previous TWA-SPME techniques using the retracted fiber configuration (fiber within needle) to monitor ambient conditions or relatively stagnant gases, this method was developed for fast-moving process gas streams at temperatures approaching 300 °C. The goal was to develop a consistent and reliable method of analyzing low concentrations of contaminants in hot gas streams without performing time-consuming exhaustive extraction with a slipstream. This work in particular aims to quantify trace tar compounds found in a syngas stream generated from biomass gasification. This paper evaluates the concept of retracted SPME at high temperatures by testing the three essential requirements for TWA passive sampling: (1) zero-sink assumption, (2) consistent and reliable response by the sampling device to changing concentrations, and (3) equal concentrations in the bulk gas stream relative to the face of the fiber syringe opening. Results indicated the method can accurately predict gas stream concentrations at elevated temperatures. Evidence was also discovered to validate the existence of a second boundary layer within the fiber during the adsorption/absorption process. This limits the technique to operating within reasonable mass loadings and loading rates, established by appropriate sampling depths and times for concentrations of interest. A limit of quantification for the benzene model tar system was estimated at 0.02 g m⁻³ (8 ppm) with a limit of detection of 0.5 mg m⁻³ (200 ppb). Using the appropriate conditions, the technique was applied to a pilot-scale fluidized-bed gasifier to verify its feasibility. Results from this test were in good agreement with literature and prior pilot plant operation, indicating the new method can measure low
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Cheng, Yang; Crassidis, John L.; Oshman, Yaakov
2007-01-01
Many applications require an algorithm that averages quaternions in an optimal manner. For example, when combining the quaternion outputs of multiple star trackers having this output capability, it is desirable to properly average the quaternions without recomputing the attitude from the raw star tracker data. Other applications requiring some sort of optimal quaternion averaging include particle filtering and multiple-model adaptive estimation, where weighted quaternions are used to determine the quaternion estimate. For spacecraft attitude estimation applications, prior work derives an optimal averaging scheme to compute the average of a set of weighted attitude matrices using the singular value decomposition method. Another approach, focusing on a 4-dimensional quaternion Gaussian distribution on the unit hypersphere, computes the average quaternion by minimizing a quaternion cost function that is equivalent to the attitude matrix cost function. Motivated by and extending these results, this Note derives an algorithm that determines an optimal average quaternion from a set of scalar- or matrix-weighted quaternions. Furthermore, a sufficient condition for the uniqueness of the average quaternion, and the equivalence of the minimization problem, stated herein, to maximum likelihood estimation, are shown.
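For the scalar-weighted case, the optimal average is commonly computed as the dominant eigenvector of the weighted outer-product matrix M = Σᵢ wᵢ qᵢqᵢᵀ. A sketch of that construction (presented as the standard technique, not necessarily the Note's exact algorithm); it is insensitive to the sign ambiguity q ↔ -q:

```python
import numpy as np

def average_quaternion(quats, weights):
    """Scalar-weighted quaternion average: dominant eigenvector of
    M = sum_i w_i * q_i q_i^T (sign-insensitive, since q and -q give the
    same outer product)."""
    q = np.asarray(quats, dtype=float)
    w = np.asarray(weights, dtype=float)
    M = (w[:, None, None] * q[:, :, None] * q[:, None, :]).sum(axis=0)
    vals, vecs = np.linalg.eigh(M)          # symmetric matrix -> eigh
    q_avg = vecs[:, -1]                     # eigenvector of largest eigenvalue
    return q_avg if q_avg[0] >= 0 else -q_avg   # fix the overall sign

# Two samples of the same rotation, one reported with flipped sign
q1 = np.array([0.9999, 0.01, 0.0, 0.0]); q1 /= np.linalg.norm(q1)
q2 = -q1
q_avg = average_quaternion([q1, q2], [1.0, 1.0])
```

A naive component-wise mean of q1 and q2 would give the zero vector; the eigenvector formulation recovers the common rotation.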
Harding, Stephen E; Gillis, Richard B; Adams, Gary G
2016-01-01
Molecular weights (molar masses), molecular weight distributions, dissociation constants and other interaction parameters are fundamental characteristics of proteins, nucleic acids, polysaccharides and glycoconjugates in solution. Sedimentation equilibrium analytical ultracentrifugation provides a powerful method with no supplementary immobilization, columns or membranes required. It is a particularly powerful tool when used in conjunction with its sister technique, namely sedimentation velocity. Here, we describe key approaches now available and their application to the characterization of antibodies, polysaccharides and glycoconjugates. We indicate how major complications, such as thermodynamic non-ideality, can now be routinely dealt with, thanks to a great extent to the extensive contribution of Professor Don Winzor over several decades of research.
Kassianov, Evgueni I.; Barnard, James C.; Flynn, Connor J.; Riihimaki, Laura D.; Marinovici, Maria C.
2015-10-15
Areal-averaged albedos are particularly difficult to measure in coastal regions, because the surface is not homogenous, consisting of a sharp demarcation between land and water. With this difficulty in mind, we evaluate a simple retrieval of areal-averaged surface albedo using ground-based measurements of atmospheric transmission alone under fully overcast conditions. To illustrate the performance of our retrieval, we find the areal-averaged albedo using measurements from the Multi-Filter Rotating Shadowband Radiometer (MFRSR) at five wavelengths (415, 500, 615, 673, and 870 nm). These MFRSR data are collected at a coastal site in Graciosa Island, Azores supported by the U.S. Department of Energy’s (DOE’s) Atmospheric Radiation Measurement (ARM) Program. The areal-averaged albedos obtained from the MFRSR are compared with collocated and coincident Moderate Resolution Imaging Spectroradiometer (MODIS) white-sky albedo at four nominal wavelengths (470, 560, 670 and 860 nm). These comparisons are made during a 19-month period (June 2009 - December 2010). We also calculate composite-based spectral values of surface albedo by a weighted-average approach using estimated fractions of major surface types observed in an area surrounding this coastal site. Taken as a whole, these three methods of finding albedo show spectral and temporal similarities, and suggest that our simple, transmission-based technique holds promise, but with estimated errors of about ±0.03. Additional work is needed to reduce this uncertainty in areas with inhomogeneous surfaces.
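The composite-based spectral albedo described above is a weighted average of per-surface-type albedos by their estimated area fractions. A minimal sketch at one wavelength; the surface types, fractions, and albedo values are hypothetical:

```python
import numpy as np

# Estimated area fractions of major surface types around the coastal site
fractions = np.array([0.55, 0.30, 0.15])   # e.g. water, vegetation, bare rock
albedos   = np.array([0.06, 0.15, 0.25])   # spectral albedo of each type

# Fractions must partition the area
assert abs(fractions.sum() - 1.0) < 1e-12

# Composite areal-averaged albedo at this wavelength
albedo_area = float(np.dot(fractions, albedos))
```

Because water dominates the area, the composite albedo sits much closer to the water value than to a simple unweighted mean of the three types.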
NASA Astrophysics Data System (ADS)
Suzuki, Kouki; Kato, Takeyoshi; Suzuoki, Yasuo
A photovoltaic power generation system (PVS) is one of the promising measures for developing a low-carbon society. Because of its unstable power output characteristics, a robust forecast method must be employed to realize high penetration of PVS into an electric power system. Considering the difference in power output patterns among PVSs dispersed across the service area of an electric power system, the forecast error would vary among locations, reducing the forecast error of the ensemble-average power output under high PVS penetration. In this paper, using multi-point insolation data observed in the Chubu area during four months, we evaluate the forecast error of the ensemble-average insolation of 11 districts and compare it with the forecast error of each individual district. As a result, the number of periods with a forecast error larger than the average insolation during the four months is reduced by 16 hours for the ensemble-average insolation compared with the average of the individual forecasts. The largest forecast error during the four months is also reduced, from 0.68 kWh/m² on average over the 11 districts to 0.45 kWh/m² for the ensemble-average insolation.
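The error reduction from ensemble averaging can be demonstrated with synthetic forecast errors: each district's error is a shared regional component plus an independent local component, and averaging over districts suppresses only the local part. All values below are synthetic, not the Chubu observations:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic hourly insolation forecast errors (kWh/m^2) at 11 districts:
# a shared regional component plus independent local components.
hours, n_dist = 1000, 11
regional = rng.normal(0.0, 0.05, size=(hours, 1))     # common to all districts
local = rng.normal(0.0, 0.15, size=(hours, n_dist))   # independent per district
errors = regional + local

# Average of the per-district RMSEs vs RMSE of the ensemble-average forecast
rmse_individual = np.sqrt((errors**2).mean(axis=0)).mean()
rmse_ensemble = np.sqrt((errors.mean(axis=1)**2).mean())
```

The ensemble RMSE approaches the regional floor (here 0.05) rather than zero: averaging cannot remove the error component common to all districts, which is why the observed improvement, while real, is bounded.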
1976-10-13
Filter, maximum entropy, weighted error matrices. A method for multivariate... maximum entropy, and autoregressive techniques has attracted much attention lately, especially for short data segments; see, for example, the biblio... individually predicted waveform values. The correlation values in (52) are called the maximum entropy correlation extrapolations. The recursion is stable
Andreasen, M; Mousing, J; Thomsen, L K
2001-04-13
The association between the average daily weight gain (from approximately 4 to 20 weeks of age) and the serological responses to respiratory infections was examined in a longitudinal study including 825 pigs from eight chronically infected herds. Pigs were bled every 4th week (starting from approximately 4 weeks of age), and sera were analyzed for antibodies to Mycoplasma hyopneumoniae and Actinobacillus pleuropneumoniae serotypes 2, 5-7 and 12. A mixed analysis of covariance was used to model the relationship between the average daily weight gain and a categorical variable defining seroconversion as none, early or late relative to the median time (estimated across herds) of seroconversion for the particular pathogen. The variables "gender", "weight at an approximate age of 4 weeks" and "time" (defining the exact length of the follow-up period) were included as explanatory variables, and "litter" and "herd" were included as explanatory random variables. The individual pig was the unit of concern. The variable defining time at seroconversion was not significantly associated with the average daily weight gain when evaluating models across all eight herds. The apparent lack of effect could be because most pigs included in the study were subclinically infected, or because a temporary negative influence of the infections is hidden by increased growth in the period following infection. In conclusion, at least in these eight herds, seroresponses to M. hyopneumoniae and A. pleuropneumoniae could not be used to predict the effect of the pathogens on the daily weight gain.
On the averaging area for incident power density for human exposure limits at frequencies over 6 GHz
NASA Astrophysics Data System (ADS)
Hashimoto, Yota; Hirata, Akimasa; Morimoto, Ryota; Aonuma, Shinta; Laakso, Ilkka; Jokela, Kari; Foster, Kenneth R.
2017-04-01
Incident power density is used as the dosimetric quantity to specify the restrictions on human exposure to electromagnetic fields at frequencies above 3 or 10 GHz in order to prevent excessive temperature elevation at the body surface. However, international standards and guidelines have different definitions for the size of the area over which the power density should be averaged. This study reports computational evaluation of the relationship between the size of the area over which incident power density is averaged and the local peak temperature elevation in a multi-layer model simulating a human body. Three wave sources are considered in the frequency range from 3 to 300 GHz: an ideal beam, a half-wave dipole antenna, and an antenna array. 1D analysis shows that averaging area of 20 mm × 20 mm is a good measure to correlate with the local peak temperature elevation when the field distribution is nearly uniform in that area. The averaging area is different from recommendations in the current international standards/guidelines, and not dependent on the frequency. For a non-uniform field distribution, such as a beam with small diameter, the incident power density should be compensated by multiplying a factor that can be derived from the ratio of the effective beam area to the averaging area. The findings in the present study suggest that the relationship obtained using the 1D approximation is applicable for deriving the relationship between the incident power density and the local temperature elevation.
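The compensation factor for a non-uniform field can be illustrated numerically: average a narrow Gaussian-beam power density over a 20 mm × 20 mm window and compare it with the peak density. The beam width and grid are illustrative values, not the paper's sources:

```python
import numpy as np

# Average a Gaussian-beam incident power density over a square averaging area
side = 0.02   # averaging window side, m (20 mm x 20 mm, as in the abstract)
w0 = 0.005    # 1/e^2 beam radius, m (a "small diameter" beam, assumed)
n = 801
x = np.linspace(-side / 2, side / 2, n)
X, Y = np.meshgrid(x, x)
S = np.exp(-2 * (X**2 + Y**2) / w0**2)   # normalized power density, peak = 1

S_avg = S.mean()          # spatially averaged power density over the window
ratio = S_avg / S.max()   # effective beam area / averaging area, << 1
```

For this beam the averaged density is only about a tenth of the peak, so a limit expressed as an area average would strongly under-represent the local exposure; this is the ratio of effective beam area to averaging area used as the compensation factor.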
Juarez-Galan, Juan M; Valor, Ignacio
2009-04-10
A new cryogenic integrative air sampler (patent application number 08/00669), able to overcome many of the limitations in current volatile organic compound and odour sampling methodologies, is presented. The sample is spontaneously collected in a universal way at 15 mL/min, selectively dried (reaching up to 95% moisture removal) and stored under cryogenic conditions. The sampler performance was tested under time weighted average (TWA) conditions, sampling 100 L of air over 5 days for determination of NH₃, H₂S, and benzene, toluene, ethylbenzene and xylenes (BTEX) in the ppmv range. Recovery was 100% (statistically) for all compounds, with a concentration factor of 5.5. Furthermore, an in-field evaluation was done by monitoring the TWA immission levels of BTEX and dimethylethylamine (ppbv range) in an urban area with the developed technology and comparing the results with those monitored with a commercial graphitised charcoal diffusive sampler. The results obtained showed a good statistical agreement between the two techniques.
ERIC Educational Resources Information Center
Sadler, Philip M.; Tai, Robert H.
2007-01-01
Honors and advanced placement (AP) courses are commonly viewed as more demanding than standard high school offerings. Schools employ a range of methods to account for such differences when calculating grade point average and the associated rank in class for graduating students. In turn, these statistics have a sizeable impact on college admission…
ERIC Educational Resources Information Center
Warne, Russell T.; Nagaishi, Chanel; Slade, Michael K.; Hermesmeyer, Paul; Peck, Elizabeth Kimberli
2014-01-01
While research has shown the statistical significance of high school grade point averages (HSGPAs) in predicting future academic outcomes, the systems with which HSGPAs are calculated vary drastically across schools. Some schools employ unweighted grades that carry the same point value regardless of the course in which they are earned; other…
Dibben, Chris; Sigala, Maria; Macfarlane, Alison
2006-01-01
Objective To explore the relationship between low and very low birth weights, mother's age, individual socioeconomic status and area deprivation. Design Analysis of the incidence of low and very low birth weights by area deprivation, maternal age, social class of household and estimated income. Setting England 1996–2000. Subjects 2 894 440 singleton live births and the 10% sample of these births for which parents' individual‐level socioeconomic measures were coded. Results Social class, estimated household income, lone‐parenthood and mother's age were all associated with the risk of low and very low birth weight. Even when controlling for these individual level factors, area income deprivation was significantly associated with low and very low birth weight (p<0.00). For low birth weight there was a significant interaction between area income deprivation and mother's age. For very young mothers, the area effect was non‐significant (p<0.37). For older mothers, particularly those aged 30–34 years, it was stronger (p<0.00). As a result, mothers aged <18 years, although at relatively high risk of low birth weight irrespective of area income deprivation, were actually at slightly lower risk than mothers aged >40 years in the most deprived areas. Conclusions For all but very young mothers, there seems to be a negative effect on birth weight from living in areas of income deprivation, whatever their individual circumstances. PMID:17108301
Numerous urban canopy schemes have recently been developed for mesoscale models in order to approximate the drag and turbulent production effects of a city on the air flow. However, little data exists by which to evaluate the efficacy of the schemes since "area-averaged"...
Bulat, Felipe A; Toro-Labbé, Alejandro; Brinck, Tore; Murray, Jane S; Politzer, Peter
2010-11-01
We describe a procedure for performing quantitative analyses of fields f(r) on molecular surfaces, including statistical quantities and locating and evaluating their local extrema. Our approach avoids the need for explicit mathematical representation of the surface and can be implemented easily in existing graphical software, as it is based on the very popular representation of a surface as collection of polygons. We discuss applications involving the volumes, surface areas and molecular surface electrostatic potentials, and local ionization energies of a group of 11 molecules.
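With a surface represented as a collection of polygons, statistical quantities of a field f(r) reduce to area-weighted sums over the facets. A minimal sketch for a triangle mesh with a field sampled at the vertices (the toy surface and field values are hypothetical):

```python
import numpy as np

def triangle_areas(verts, tris):
    # Area of each triangle from the cross product of two edge vectors
    a = verts[tris[:, 1]] - verts[tris[:, 0]]
    b = verts[tris[:, 2]] - verts[tris[:, 0]]
    return 0.5 * np.linalg.norm(np.cross(a, b), axis=1)

def surface_stats(verts, tris, f_vertex):
    """Area-weighted mean and variance of a field f sampled at vertices,
    over a surface given as a collection of triangles."""
    A = triangle_areas(verts, tris)
    f_tri = f_vertex[tris].mean(axis=1)    # field value per triangle
    mean = np.sum(A * f_tri) / A.sum()
    var = np.sum(A * (f_tri - mean) ** 2) / A.sum()
    return A.sum(), mean, var

# Toy surface: two triangles forming a unit square in the z = 0 plane
verts = np.array([[0.0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]])
tris = np.array([[0, 1, 2], [0, 2, 3]])
f = np.array([1.0, 2.0, 3.0, 2.0])         # e.g. an electrostatic potential
area, f_mean, f_var = surface_stats(verts, tris, f)
```

Weighting by facet area rather than counting vertices keeps the statistics independent of how finely different regions of the surface are triangulated.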
Christman, Mary C.; Roberts, Pamela D.
2017-01-01
Fungal growth inhibition on solid media has been historically measured and calculated based on the average of perpendicular diameter measurements of growth on fungicide amended media. We investigated the sensitivity of the calculated area (DA) and the measured area (MA) for assessing fungicide growth inhibition of the ascomycete, Phyllosticta citricarpa on solid media. Both the calculated, DA and the actual measured area, MA were adequate for distinguishing significant treatment effects of fungicide on fungal growth, however MA was more sensitive at identifying significant differences between the controls and fungicide concentrations below 5 ppm. PMID:28125679
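The gap between a diameter-based calculated area and a true measured area can be shown with a worked example. One common convention for DA is assumed here (circle on the mean of two perpendicular diameters); the colony dimensions are hypothetical:

```python
import math

# Hypothetical colony with perpendicular diameters d1, d2 (mm)
d1, d2 = 30.0, 20.0

# "Calculated" area DA: treat the colony as a circle whose diameter is the
# mean of the two perpendicular measurements (one common convention).
DA = math.pi * ((d1 + d2) / 4.0) ** 2

# If the colony were a true ellipse with axes d1 and d2, its actual area is:
MA_ellipse = math.pi * d1 * d2 / 4.0

# DA over-estimates the elliptical area whenever d1 != d2 (AM-GM inequality),
# one reason a directly measured area can resolve smaller treatment effects.
```

Here DA ≈ 491 mm² versus 471 mm² for the ellipse, a ~4% systematic difference before any measurement noise, consistent with MA being the more sensitive quantity at low fungicide concentrations.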
Area-Averaged Fluxes from Field to Kilometer Scale with Optical and Microwave Scintillometers
NASA Astrophysics Data System (ADS)
Hartogensis, O. K.; de Bruin, H. A.; Meijninger, W. M.; Kohsiek, W.; Beyrich, F.; Moene, A. F.
2007-12-01
Scintillometry has proven to be a suitable method for obtaining surface fluxes over heterogeneous areas at spatial scales of up to 10 km. We will present two of the many field studies conducted by the Meteorology and Air Quality Group of Wageningen University to illustrate this point. Different scintillometer types have been tested. Optical scintillometers yield the structure parameter of temperature, C_T^2, for long-path Large Aperture Scintillometers (LAS), and both C_T^2 and the dissipation rate of turbulent kinetic energy, ε, for short-path laser scintillometers. C_T^2 and ε are related to the surface fluxes of heat, H, and momentum, τ, by virtue of Monin-Obukhov similarity theory. For the LAS - which provides C_T^2 only - τ is obtained from additional wind speed measurements and an estimate of the roughness length. An optical scintillometer in combination with a millimeter-wave scintillometer (MWS) yields both C_T^2 and C_q^2, the structure parameter of humidity, from which the sensible and latent heat fluxes can be determined. The following two scintillometer field experiments will be discussed. EVAGRIPS, Lindenberg, Germany, 2003: a LAS and a MWS (94 GHz) installed over a path length of 5 km at 45 m height above heterogeneous terrain consisting of a mix of lakes, forest and agricultural fields over undulating terrain. The concept of an effective scintillometer height will be introduced, which needs to be applied when the scintillometer height is not constant over the path. RAPID, Idaho, USA, 1999: the estimation of evapotranspiration using a LAS and a laser scintillometer installed at field scale (~500 m) over irrigated alfalfa in an area affected by advection of warm and dry desert air. In these conditions the sensible heat flux becomes negative and the water vapor deficit is increased, both enhancing evapotranspiration.
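As a simplified illustration of the C_T^2-to-flux step, the free-convection approximation gives H directly from C_T^2 without the full iterative Monin-Obukhov solution: H = ρc_p · b · z · (g/T)^(1/2) · (C_T^2)^(3/4). This shortcut replaces the full similarity-theory retrieval used in these campaigns; b ≈ 0.48 is an empirical constant and the C_T^2 value is hypothetical:

```python
# Free-convection estimate of sensible heat flux from a LAS-measured C_T^2.
rho_cp = 1231.0   # air density x specific heat, J m^-3 K^-1 (approximate)
g, T = 9.81, 293.0   # gravity (m s^-2) and air temperature (K)
z = 45.0          # effective scintillometer height, m (as in the EVAGRIPS setup)
CT2 = 0.02        # structure parameter of temperature, K^2 m^(-2/3) (hypothetical)
b = 0.48          # empirical free-convection coefficient (assumed value)

H = rho_cp * b * z * (g / T) ** 0.5 * CT2 ** 0.75   # sensible heat flux, W m^-2
```

This yields a daytime-scale flux of a few hundred W m⁻²; the full retrieval additionally needs wind speed and roughness information to handle non-free-convective stability, as noted above for τ.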
The daily computed weighted averaging basic reproduction number R0,k,ω^n for MERS-CoV in South Korea
NASA Astrophysics Data System (ADS)
Jeong, Darae; Lee, Chang Hyeong; Choi, Yongho; Kim, Junseok
2016-06-01
In this paper, we propose the daily computed weighted averaging basic reproduction number R0,k,ω^n for the Middle East respiratory syndrome coronavirus (MERS-CoV) outbreak in South Korea, May to July 2015. We use an SIR model with piecewise constant parameters β (contact rate) and γ (removal rate). We use the explicit Euler method for the solution of the SIR model and a nonlinear least-squares fitting procedure for finding the best parameters. In R0,k,ω^n, the parameters n, k, and ω denote the number of days from a reference date, the number of days in averaging, and a weighting factor, respectively. We perform a series of numerical experiments and compare the results with the real-world data. In particular, using the predicted reproduction number based on the previous two consecutive reproduction numbers, we can predict the future behavior of the reproduction number.
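The two ingredients above, an explicit-Euler SIR integrator and a weighted average of recent daily reproduction numbers, can be sketched as follows. Parameter values are illustrative, not the fitted MERS-CoV values, and the weighting scheme (newer days weighted more heavily) is one plausible choice for ω:

```python
import numpy as np

def euler_sir(s0, i0, r0, beta, gamma, N, days, dt=0.1):
    """Explicit Euler integration of the SIR model with constant beta, gamma
    (piecewise-constant parameters amount to calling this per interval)."""
    S, I, R = float(s0), float(i0), float(r0)
    for _ in range(int(days / dt)):
        dS = -beta * S * I / N
        dI = beta * S * I / N - gamma * I
        dR = gamma * I
        S, I, R = S + dt * dS, I + dt * dI, R + dt * dR
    return S, I, R

def weighted_avg_R(daily_R, k, omega):
    """Weighted average of the last k daily reproduction numbers; weights
    omega^(k-1-j) give the most recent day weight 1 (an assumed scheme)."""
    r = np.asarray(daily_R[-k:], dtype=float)
    w = omega ** np.arange(len(r) - 1, -1, -1.0)
    return float(np.sum(w * r) / np.sum(w))

S, I, R = euler_sir(s0=9999, i0=1, r0=0, beta=0.5, gamma=0.2, N=10000, days=30)
Rn = weighted_avg_R([2.5, 2.1, 1.6, 1.2, 0.9], k=3, omega=0.5)
```

With β/γ = 2.5 the epidemic grows, and the Euler step conserves S + I + R exactly up to rounding, which is a quick sanity check on the integrator.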
NASA Astrophysics Data System (ADS)
Elmore, A. J.; Guinn, S. M.
2009-12-01
Land surface phenology (LSP) is the seasonal pattern of vegetation dynamics that occur each spring and fall. Multiple drivers of spatial variation in LSP and its variation over time have been analyzed using satellite remote sensing. Until recently, these observations have been restricted to moderate- and low-resolution data, as it is only at these spatial resolutions that temporally continuous data are available. However, understanding small-scale variation in LSP over space and time may be key to linking pattern to process, and in particular, could be used to understand how ecological processes at the stand level scale to landscapes and continents. Through utilization of the large, and now free, Landsat record, recent research has led to the development of robust methods for calculating average phenological patterns at 30-m resolution by stacking two decades' worth of data by acquisition day of year (DOY). Here we have extended these techniques to calculate the deviation from the average LSP for any given acquisition DOY-year combination. We model the average LSP as two sigmoid functions, one increasing in spring and a second decreasing in fall, connected by a sloped line representing gradual summer leaf area changes (see Figure). Deviation from the average LSP is considered here to take two forms: (1) residual vegetation cover in mid- to late-summer represents locations in which disturbance, drought, or (alternatively) better than average growing conditions have resulted in a separation (either negative or positive) from the average vegetation cover for that DOY, and (2) climate conditions that result in an earlier or later onset of greenness, exhibited as a separation from the average spring onset-of-greenness curve in the DOY direction (either early or late). Our study system for this work is the deciduous forests of the mid-Atlantic, USA, where we show that late summer vegetation cover is tied to edaphic properties governing the site specific soil moisture
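The double-sigmoid average-LSP curve and the cover-residual deviation can be sketched as follows. This is one plausible parameterization of "two sigmoids joined by a sloped summer line" (here multiplicative), with entirely hypothetical parameter values:

```python
import numpy as np

def lsp_model(doy, v_min, v_amp, d_spring, k_spring, d_fall, k_fall, slope):
    """Average land-surface phenology: an increasing spring sigmoid and a
    decreasing fall sigmoid, modulated by a sloped summer line."""
    doy = np.asarray(doy, dtype=float)
    spring = 1.0 / (1.0 + np.exp(-k_spring * (doy - d_spring)))  # green-up
    fall = 1.0 / (1.0 + np.exp(k_fall * (doy - d_fall)))         # senescence
    summer = 1.0 + slope * (doy - d_spring)   # gradual summer leaf-area change
    return v_min + v_amp * spring * fall * summer

doy = np.arange(1, 366)
cover = lsp_model(doy, v_min=0.15, v_amp=0.65, d_spring=120, k_spring=0.15,
                  d_fall=290, k_fall=0.12, slope=-0.0005)

# Deviation type (1): residual of one observation from the modeled average
obs_doy, obs_cover = 200, 0.62   # hypothetical mid-summer observation
residual = obs_cover - lsp_model(obs_doy, 0.15, 0.65, 120, 0.15, 290, 0.12,
                                 -0.0005)
```

A negative mid-summer residual like this one would flag possible disturbance or drought at that pixel; deviation type (2) would instead shift the fitted d_spring parameter along the DOY axis.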
Tsodikov, Oleg V; Record, M Thomas; Sergeev, Yuri V
2002-04-30
New computer programs, SurfRace and FastSurf, perform fast calculations of the solvent accessible and molecular (solvent excluded) surface areas of macromolecules. Program SurfRace also calculates the areas of cavities inaccessible from the outside. We introduce a definition of the average curvature of a molecular surface and calculate average molecular surface curvatures for each atom in a structure. All surface area and curvature calculations are analytic and therefore yield exact values of these quantities. The high calculation speed of this software is achieved primarily by avoiding computationally expensive mathematical procedures wherever possible and by efficient handling of surface data structures. The programs are initially written in C for PCs running Windows 2000/98/NT, but their code is portable to other platforms with only minor changes in input-output procedures. The algorithm is robust and ignores neither multiplicity nor degeneracy of atomic overlaps. Fast, memory-efficient and robust execution makes this software attractive for applications both in computationally expensive energy minimization algorithms, such as docking or molecular dynamics simulations, and in stand-alone surface area and curvature calculations.
Wilson, Stephen; Van Brussel, Leen; Saunders, Gillian; Taylor, Lucas; Zimmermann, Lisa; Heinritzi, Karl; Ritzmann, Mathias; Banholzer, Elisabeth; Eddicks, Matthias
2012-12-14
The field efficacy and safety of a single-dose inactivated Mycoplasma hyopneumoniae vaccine, Suvaxyn MH-One, were evaluated in 4-5-day-old piglets on a commercial farm with a history of Mycoplasma disease in Southern Germany. The piglets were injected intramuscularly with the vaccine or saline (control group) and raised under commercial conditions to slaughter weight. The efficacy of the vaccine was determined by comparing the lung lesions associated with infection by M. hyopneumoniae in control and vaccinated pigs post mortem. In this analysis the vaccinated pigs had a lower mean percentage of lung lesions, at 5% compared to 9% in controls. Of the vaccinated pigs, 52.3% had low levels of lung lesions, between 0% and 5%, and no more than 5.4% had levels above 20%. In contrast, 36.5% of the pigs administered saline fell in the lower category (0-5%), while 18.3% showed lesions greater than 20%. There were significant differences in the mean body weight of pigs at the final two weight measurements, at approximately 21 weeks and 26 weeks of age, with those receiving Suvaxyn MH-One being on average 5 kg heavier at each time point. There was also a significant increase in average daily gain in the vaccinated animals compared to the control group, particularly in the periods from vaccination to the final two body weight measurements on days 138 and 166, from weaning at day 28 to the final two body weight measurements, and from mid-way through finishing at day 84 to the final two body weight measurements. Vaccination had no adverse impact on appetite, although small numbers of vaccinated and control pigs did show mild signs of coughing, sneezing, respiratory distress or depression. There was no adverse impact on rectal temperatures and no signs of injection site reactions during the course of the study. We can conclude that vaccination of pigs at less than 1 week of age with Suvaxyn MH-One is effective in reducing lung lesions resulting from M. hyopneumoniae and
Larsen, Inge; Hjulsager, Charlotte Kristiane; Holm, Anders; Olsen, John Elmerdahl; Nielsen, Søren Saxmose; Nielsen, Jens Peter
2016-01-01
Oral treatment with antimicrobials is widely used in pig production for the control of gastrointestinal infections. Lawsonia intracellularis (LI) causes enteritis in pigs older than six weeks of age and is commonly treated with antimicrobials. The objective of this study was to evaluate the efficacy of three oral dosage regimens (5, 10 and 20 mg/kg body weight) of oxytetracycline (OTC) in drinking water over a five-day period on diarrhoea, faecal shedding of LI and average daily weight gain (ADG). A randomised clinical trial was carried out in four Danish pig herds. In total, 539 animals from 37 batches of nursery pigs were included in the study. The dosage regimens were randomly allocated to each batch and initiated upon the appearance of presumed LI-related diarrhoea. In general, all OTC doses used for the treatment of LI infection resulted in reduced diarrhoea and LI shedding after treatment. Treatment with the low dose of 5 mg OTC per kg body weight, however, tended to cause more watery faeces and resulted in higher odds of pigs shedding LI above the detection level when compared to the medium and high doses (with odds ratios of 5.5 and 8.4, respectively). No association was found between the dose of OTC and the ADG. In conclusion, a dose of 5 mg OTC per kg body weight was adequate for reducing the high-level LI shedding associated with enteropathy, but a dose of 10 mg OTC per kg body weight was necessary to obtain a maximum reduction in LI shedding.
Residence in coal-mining areas and low-birth-weight outcomes.
Ahern, Melissa; Mullett, Martha; Mackay, Katherine; Hamilton, Candice
2011-10-01
The objective of this study was to estimate the association between residence in coal mining environments and low birth weight. We conducted a cross-sectional, retrospective analysis of the association between low birth weight and mother's residence in coal mining areas in West Virginia. Birth data were obtained from the West Virginia Birthscore Dataset, 2005-2007 (n = 42,770). Data on coal mining were from the US Department of Energy. Covariates regarding mothers' demographics, behaviors, and insurance coverage were included. We used nested logistic regression (SUDAAN Proc Multilog) to conduct the study. Mothers who were older, unmarried, or less educated, who smoked, did not receive prenatal care, were on Medicaid, or had recorded medical risks had a greater risk of low birth weight. After controlling for covariates, residence in coal mining areas of West Virginia posed an independent risk of low birth weight. Odds ratios for both unadjusted and adjusted findings suggest a dose-response effect. Adjusted findings show that living in areas with high levels of coal mining elevates the odds of a low-birth-weight infant by 16%, and living in areas with lower mining levels by 14%, relative to counties with no coal mining. After covariate adjustment, the persistence of a mining effect on low-birth-weight outcomes suggests an environmental effect resulting from pollution from mining activities. Air and water quality assessments have been largely missing from mining communities, but the need for them is indicated by these findings.
Meyer, Philip D.; Ye, Ming; Neuman, Shlomo P.; Rockhold, Mark L.
2008-06-01
A methodology to systematically and quantitatively assess model predictive uncertainty was applied to saturated zone uranium transport at the 300 Area of the U.S. Department of Energy Hanford Site in Washington State, USA. The methodology extends Maximum Likelihood Bayesian Model Averaging (MLBMA) to account jointly for uncertainties due to the conceptual-mathematical basis of models, model parameters, and the scenarios to which the models are applied. Conceptual uncertainty was represented by postulating four alternative models of hydrogeology and uranium adsorption. Parameter uncertainties were represented by estimation covariances resulting from the joint calibration of each model to observed heads and uranium concentration. Posterior model probability was dominated by one model. Results demonstrated the role of model complexity and fidelity to observed system behavior in determining model probabilities, as well as the impact of prior information. Two scenarios representing alternative future behavior of the Columbia River adjacent to the site were considered. Predictive simulations carried out with the calibrated models illustrated the computation of model- and scenario-averaged predictions and how results can be displayed to clearly indicate the individual contributions to predictive uncertainty of the model, parameter, and scenario uncertainties. The application demonstrated the practicability of applying a comprehensive uncertainty assessment to large-scale, detailed groundwater flow and transport modelling.
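The model-averaging step at the heart of MLBMA can be illustrated with a small sketch. The exponential weighting of information-criterion values below is the standard approximation for posterior model probabilities; the numeric IC values, priors, and predictions are invented for illustration:

```python
import math

def model_probabilities(ic_values, priors=None):
    """Posterior model probabilities from information-criterion values
    (e.g. KIC), as in maximum likelihood Bayesian model averaging:
    p_k is proportional to prior_k * exp(-delta_IC_k / 2)."""
    n = len(ic_values)
    priors = priors or [1.0 / n] * n
    ic_min = min(ic_values)
    raw = [p * math.exp(-(ic - ic_min) / 2.0)
           for p, ic in zip(priors, ic_values)]
    total = sum(raw)
    return [r / total for r in raw]

def averaged_prediction(predictions, probs):
    """Model-averaged prediction: probability-weighted combination."""
    return sum(p * y for p, y in zip(probs, predictions))

# Four alternative conceptual models; the best-scoring one dominates,
# as found in the study (IC values are made up).
probs = model_probabilities([210.0, 218.0, 225.0, 230.0])
avg = averaged_prediction([1.2, 1.5, 1.1, 1.4], probs)  # e.g. a predicted concentration
```

Scenario averaging works the same way one level up: scenario-conditional model-averaged predictions are combined with scenario weights.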
Noncontact analysis of the fiber weight per unit area in prepreg by near-infrared spectroscopy.
Jiang, B; Huang, Y D
2008-05-26
The fiber weight per unit area in prepreg is an important factor in ensuring the quality of composite products. Near-infrared spectroscopy (NIRS) together with a noncontact reflectance source has been applied for quality analysis of the fiber weight per unit area. The range of the unit area fiber weight was 13.39-14.14 mg cm(-2). Regression models were built using partial least squares (PLS) and principal component regression (PCR). The calibration model was developed from 55 samples to determine the fiber weight per unit area in prepreg. The determination coefficient (R(2)), root mean square error of calibration (RMSEC) and root mean square error of prediction (RMSEP) were 0.82, 0.092 and 0.099, respectively. The values of the fiber weight per unit area in prepreg predicted by NIRS were comparable to the values obtained by the reference method. With this technology, the noncontact reflectance source is focused directly on the sample, with neither prior treatment nor manipulation. The results of the paired t-test revealed that there was no significant difference between the NIR method and the reference method. Moreover, a prepreg sample could be analyzed within 20 s without destruction.
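The reported calibration statistics (R², RMSEC/RMSEP) are computed in the standard way. A sketch on synthetic data mimicking the reported fiber-weight range; the data and the noise level are assumptions, not the paper's measurements:

```python
import numpy as np

def calibration_stats(y_true, y_pred):
    """Determination coefficient R^2 and root-mean-square error,
    as used to judge a NIRS calibration model."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot, rmse

# Synthetic fiber weights in mg/cm^2, spanning the reported range,
# with model predictions carrying roughly the reported ~0.09 error.
rng = np.random.default_rng(0)
y_cal = rng.uniform(13.39, 14.14, 55)
y_fit = y_cal + rng.normal(0.0, 0.09, 55)
r2, rmsec = calibration_stats(y_cal, y_fit)
```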
NASA Astrophysics Data System (ADS)
Kato, Takeyoshi; Inoue, Takato; Honda, Nobuyuki; Koaizawa, Kazumasa; Nishino, Shinichi; Suzuoki, Yasuo
For a detailed assessment of the impact of total power output fluctuation from high-penetration photovoltaic power generation systems on load-frequency control, this study evaluated the relationship between the standard deviation (STD), including only cycles shorter than 32 minutes, and the maximum fluctuation width (MFW) calculated with various window widths, using two data sets of insolation observed at multiple points. The main results are as follows. The R2 of the regression line in the STD-MFW correlation diagram is larger than 0.85 in all seasons, while the slope of the regression line varies slightly with season. The slope of the regression line is almost the same for various area sizes within the same season, although the variation ranges of both STD and MFW shrink with larger window width due to the so-called smoothing effect. The results suggest that if the STD of geographically averaged insolation can be calculated by a stochastic method, the MFW can be obtained as a linear function of the STD, because the STD-MFW correlation is good regardless of season and area size.
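The MFW statistic is straightforward to compute. A minimal sketch on a synthetic insolation series; the moving-average subtraction standing in for the "cycles shorter than 32 minutes" filtering is a crude high-pass and an assumption on my part, as are all numeric values:

```python
import numpy as np

def max_fluctuation_width(x, window):
    """Maximum fluctuation width (MFW): the largest max-minus-min swing
    found in any sliding window of the given width."""
    n = len(x)
    return max(x[i:i + window].max() - x[i:i + window].min()
               for i in range(n - window + 1))

# Synthetic "aggregate insolation": a slow diurnal-like trend plus
# short-cycle fluctuation (per-step units are arbitrary here).
rng = np.random.default_rng(1)
t = np.arange(600)
x = 0.5 + 0.3 * np.sin(2 * np.pi * t / 600) + rng.normal(0, 0.05, 600)

# Crude high-pass: subtract a 32-step moving average to keep short cycles.
short = x - np.convolve(x, np.ones(32) / 32, mode="same")
std_short = short.std()                       # STD of short-cycle component
mfw = max_fluctuation_width(x, window=60)     # MFW for one window width
```

Repeating this for several window widths and regressing MFW on STD would reproduce the correlation-diagram analysis described above.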
The use of sampling weights in Bayesian hierarchical models for small area estimation
Chen, Cici; Wakefield, Jon; Lumley, Thomas
2015-01-01
Hierarchical modeling has been used extensively for small area estimation. However, design weights that are required to reflect complex surveys are rarely considered in these models. We develop computationally efficient, Bayesian spatial smoothing models that acknowledge the design weights. Computation is carried out using the integrated nested Laplace approximation, which is fast. A simulation study is presented that considers the effects of non-response and non-random selection of individuals. We examine the impact of ignoring the design weights and the benefits of spatial smoothing. The results show that, when compared with standard approaches, mean squared error can be greatly reduced with the proposed models. Bias reduction occurs through the inclusion of the design weights, with variance reduction being achieved through hierarchical smoothing. We analyze data from the Washington State 2006 Behavioral Risk Factor Surveillance System. The models are easily and quickly fitted within the R environment, using existing packages. PMID:25457595
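The role of the design weights can be shown with a toy Hájek-type weighted mean; all responses and weights below are invented:

```python
def weighted_mean(values, design_weights):
    """Design-weighted (Hajek-type) estimator of a small-area mean: each
    response is weighted by its survey design weight, i.e. the inverse of
    its selection probability."""
    return sum(w * y for w, y in zip(design_weights, values)) / sum(design_weights)

# An over-sampled stratum (weight 1) and an under-sampled one (weight 4).
values = [1, 1, 1, 0, 0, 0]      # binary risk-factor responses
weights = [1, 1, 1, 1, 4, 4]
unweighted = sum(values) / len(values)      # ignores the design: 0.5
weighted = weighted_mean(values, weights)   # acknowledges it: 0.25
```

Ignoring the weights here doubles the estimated prevalence, which is the bias the proposed models remove; the hierarchical smoothing then reduces the variance of such area-level estimates.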
Inducing Conservation of Number, Weight, Volume, Area, and Mass in Pre-School Children.
ERIC Educational Resources Information Center
Young, Beverly S.
The major question this study attempted to answer was, "Can conservation of number, area, weight, mass, and volume be induced and retained by 3- and 4-year-old children by structured instruction with a multivariate approach?" Three nursery schools in Iowa City supplied subjects for this study. The Institute of Child Behavior and Development…
NASA Astrophysics Data System (ADS)
Yu, C.; Zinniker, D. A.; Moldowan, J.
2010-12-01
Urban air pollution is an ongoing and complicated problem for both residents and policy makers. This study aims to provide a better understanding of the geographic source and fate of organic pollutants in a dynamic urban environment. Natural and artificial hydrophobic substrates were employed for the passive monitoring and mapping of ground-level organic pollutants in the San Francisco Bay area. We focused specifically on volatile and semi-volatile polycyclic aromatic hydrocarbons (PAHs). These compounds are proxies for a broad range of combustion related air pollutants derived from local, regional, and global combustion sources. PAHs include several well-studied carcinogens and can be measured easily and accurately across a broad range of concentrations. Estimates of time-integrated vapor phase and particle deposition were made from measuring accumulated PAHs in the leaves of several widely distributed tree species (including Quercus agrifolia and Sequoia sempervirens) and an artificial wax film. Samples were designed to represent pollutant exposure over a period of one to several months. The selective sampling and analysis of hydrophobic substrates provides insight into the average geographic distribution of ground-level air pollutants in a simple and inexpensive way. However, accumulated organics do not directly correlate with human exposure and the source signature of PAHs may be obscured by transport, deposition, and flux processes. We attempted to address some of these complications by studying 1) PAH accumulation rates within substrates in a controlled microcosm, 2) differences in PAH abundance in different substrate types at the same locality, and 3) samples near long-term high volume air sampling stations. We also set out to create a map of PAH concentrations based on our measurements. This map can be directly compared with interpolated data from high-volume sampling stations and used to address questions concerning atmospheric heterogeneity of these
NASA Astrophysics Data System (ADS)
Yang, Qingjie; Mao, Weijian
2017-01-01
The poroelastodynamic equations are used to describe the dynamic solid-fluid interaction in the reservoir. To obtain the intrinsic properties of reservoir rocks from geophysical data measured both in the laboratory and in the field, we need an accurate solution of wave propagation in porous media. At present, the poroelastic wave equations are mostly solved in the time domain, which involves a difficult and complicated time convolution. To avoid the issues caused by the time convolution, we propose a frequency-space domain method. The poroelastic wave equations form a linear system in the frequency domain, which easily takes into account the effects of all frequencies on the dispersion and attenuation of seismic waves. A 25-point weighted-averaging finite difference scheme is proposed to discretize the equations. For the finite model, the perfectly matched layer technique is applied at the model boundaries. We validated the proposed algorithm on three numerical examples of poroelastic models, which are homogeneous, two-layered and heterogeneous with different fluids, respectively. The testing results are encouraging in terms of both computational accuracy and efficiency.
Feber, Janusz; Al-Matrafi, Jamila; Farhadi, Elham; Vaillancourt, Régis; Wolfish, Norman
2009-05-01
The current guidelines recommend a dosage of prednisone of 60 mg/m(2) body surface area per day (BSA PRED) for the initial therapy of nephrotic syndrome (NS). Alternatively, a dosage of 2 mg/kg body weight per day (W PRED) can be used. We hypothesized that the BSA PRED and W PRED are not equivalent and analyzed the differences between BSA PRED calculated with various formulas for body surface area (BSA), W PRED and the dose of prednisone prescribed for our patients. We performed a retrospective chart review of the patients at their initial presentation of NS. Thirty-three children were included, of median age 3.34 years at presentation. The W PRED was significantly lower than BSA PRED (P < 0.05), with a median W PRED:BSA PRED ratio of 0.85 [interquartile range (IQR) 0.8 to 0.9]. The difference between W PRED and BSA PRED decreased proportionally to patients' weights up to 30 kg. No differences were noted between the various BSA formulas using both weight and height for the calculation of BSA. The Bland-Altman analysis showed a proportional error between W PRED and BSA PRED up to the average daily dose of 60 mg, with a mean bias of 0.86 (95% limits of agreement = 0.68 to 1.05). Ten out of the 33 patients (30%) were given a lower than recommended BSA PRED dose by more than 5 mg/day. In conclusion, the dosage of prednisone at 2 mg/kg per day versus 60 mg/m(2) per day is not equivalent for patients with weights <30 kg and/or dose <60 mg/day.
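The non-equivalence of the two dosing rules is easy to reproduce. The sketch below uses the Mosteller BSA formula, one common choice among the formulas the study compared, with an illustrative (not actual-patient) weight and height:

```python
import math

def bsa_mosteller(weight_kg, height_cm):
    """Mosteller formula for body surface area in m^2:
    sqrt(weight_kg * height_cm / 3600)."""
    return math.sqrt(weight_kg * height_cm / 3600.0)

def prednisone_doses(weight_kg, height_cm):
    """Compare the two dosing rules from the abstract:
    2 mg/kg/day (weight-based) vs 60 mg/m^2/day (BSA-based)."""
    w_dose = 2.0 * weight_kg
    bsa_dose = 60.0 * bsa_mosteller(weight_kg, height_cm)
    return w_dose, bsa_dose

# A typical small child (values illustrative): 14 kg, 95 cm.
w_dose, bsa_dose = prednisone_doses(14.0, 95.0)
ratio = w_dose / bsa_dose   # below 1 for small patients, as the study reports
```

For this example the weight-based rule gives 28 mg/day versus about 36.5 mg/day from the BSA rule, i.e. the under-dosing pattern described for patients below 30 kg.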
Qualls-Creekmore, Emily; Rezai-Zadeh, Kavon; Jiang, Yanyan; Berthoud, Hans-Rudolf; Morrison, Christopher D.; Derbenev, Andrei V.; Zsombok, Andrea
2016-01-01
The preoptic area (POA) regulates body temperature, but is not considered a site for body weight control. A subpopulation of POA neurons express leptin receptors (LepRbPOA neurons) and modulate reproductive function. However, LepRbPOA neurons project to sympathetic premotor neurons that control brown adipose tissue (BAT) thermogenesis, suggesting an additional role in energy homeostasis and body weight regulation. We determined the role of LepRbPOA neurons in energy homeostasis using cre-dependent viral vectors to selectively activate these neurons and analyzed functional outcomes in mice. We show that LepRbPOA neurons mediate homeostatic adaptations to ambient temperature changes, and their pharmacogenetic activation drives robust suppression of energy expenditure and food intake, which lowers body temperature and body weight. Surprisingly, our data show that hypothermia-inducing LepRbPOA neurons are glutamatergic, while GABAergic POA neurons, originally thought to mediate warm-induced inhibition of sympathetic premotor neurons, have no effect on energy expenditure. Our data suggest a new view into the neurochemical and functional properties of BAT-related POA circuits and highlight their additional role in modulating food intake and body weight. SIGNIFICANCE STATEMENT Brown adipose tissue (BAT)-induced thermogenesis is a promising therapeutic target to treat obesity and metabolic diseases. The preoptic area (POA) controls body temperature by modulating BAT activity, but its role in body weight homeostasis has not been addressed. LepRbPOA neurons are BAT-related neurons and we show that they are sufficient to inhibit energy expenditure. We further show that LepRbPOA neurons modulate food intake and body weight, which is mediated by temperature-dependent homeostatic responses. We further found that LepRbPOA neurons are stimulatory glutamatergic neurons, contrary to prevalent models, providing a new view on thermoregulatory neural circuits. In summary, our study
Mapping Human Cortical Areas in vivo Based on Myelin Content as Revealed by T1- and T2-weighted MRI
Glasser, Matthew F.; Van Essen, David C.
2011-01-01
Non-invasively mapping the layout of cortical areas in humans is a continuing challenge for neuroscience. We present a new method of mapping cortical areas based on myelin content as revealed by T1-weighted (T1w) and T2-weighted (T2w) MRI. The method is generalizable across different 3T scanners and pulse sequences. We use the ratio of T1w/T2w image intensities to eliminate the MR-related image intensity bias and enhance the contrast-to-noise ratio for myelin. Data from each subject were mapped to the cortical surface and aligned across individuals using surface-based registration. The spatial gradient of the group average myelin map provides an observer-independent measure of sharp transitions in myelin content across the surface—i.e. putative cortical areal borders. We found excellent agreement between the gradients of the myelin maps and the gradients of published probabilistic cytoarchitectonically defined cortical areas that were registered to the same surface-based atlas. For other cortical regions, we used published anatomical and functional information to make putative identifications of dozens of cortical areas or candidate areas. In general, primary and early unimodal association cortices are heavily myelinated and higher, multi-modal, association cortices are more lightly myelinated, but there are notable exceptions in the literature that are confirmed by our results. The overall pattern in the myelin maps also has important correlations with the developmental onset of subcortical white matter myelination, evolutionary cortical areal expansion in humans compared to macaques, postnatal cortical expansion in humans, and maps of neuronal density in non-human primates. PMID:21832190
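The core bias-cancellation idea behind the T1w/T2w ratio can be sketched on synthetic arrays. A shared multiplicative intensity bias divides out of the ratio while the myelin contrast, bright on T1w and dark on T2w, is enhanced; all arrays and values here are synthetic, not the published pipeline:

```python
import numpy as np

def myelin_map(t1w, t2w, eps=1e-6):
    """T1w/T2w intensity ratio. A multiplicative bias field common to both
    images cancels; myelin contrast is approximately squared."""
    return t1w / (t2w + eps)

rng = np.random.default_rng(3)
bias = 1.0 + 0.3 * rng.random((4, 4))    # shared MR intensity bias field
m = rng.uniform(0.8, 1.6, (4, 4))        # "true" myelin content per vertex
t1w = bias * m                           # myelin appears bright on T1w
t2w = bias / m                           # myelin appears dark on T2w
ratio = myelin_map(t1w, t2w)             # ~ m**2: bias cancelled
border = np.abs(np.diff(ratio, axis=1))  # spatial gradient ~ putative areal borders
```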
Baimatova, Nassiba; Koziel, Jacek A; Kenessov, Bulat
2015-05-11
A new and simple method for benzene, toluene, ethylbenzene and o-xylene (BTEX) quantification in vehicle exhaust was developed based on diffusion-controlled extraction onto a retracted solid-phase microextraction (SPME) fiber coating. The rationale was to develop a method based on existing and proven SPME technology that is feasible for field adaptation in developing countries. Passive sampling with the SPME fiber retracted into the needle extracted nearly two orders of magnitude less mass (n) compared with the exposed fiber (outside the needle), and sampling was in a time-weighted averaging (TWA) mode. Both the sampling time (t) and fiber retraction depth (Z) were adjusted to quantify a wider range of Cgas. Extraction and quantification are conducted in a non-equilibrium mode. The effects of Cgas, t, Z and T were tested. In addition, the contribution of n extracted by the metallic surfaces of the needle assembly without SPME coating was studied, as were the effects of sample storage time on the loss of n. Retracted TWA-SPME extractions followed the theoretical model. The extracted n of BTEX was proportional to Cgas, t, Dg and T, and inversely proportional to Z. Method detection limits were 1.8, 2.7, 2.1 and 5.2 mg m(-3) (0.51, 0.83, 0.66 and 1.62 ppm) for BTEX, respectively. The contribution of extraction onto metallic surfaces was reproducible and influenced by Cgas and t, and less so by T and Z. The new method was applied to measure BTEX in the exhaust gas of a 1995 Ford Crown Victoria and compared with a whole gas and direct injection method.
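The theoretical model behind retracted-SPME TWA sampling is Fick's first law of diffusion: the extracted mass is proportional to Cgas, t, and Dg, and inversely proportional to Z, exactly as reported. A sketch with illustrative numbers (the diffusion coefficient and needle-opening area are ballpark assumptions, not the paper's values):

```python
def twa_spme_mass(c_gas, t_s, z_cm, d_g, area_cm2):
    """Mass extracted by a retracted SPME fiber in TWA mode, from Fick's
    first law: n = Dg * A * Cgas * t / Z. Units must be kept consistent
    (cm, s, g/cm^3 here); all numbers used below are illustrative."""
    return d_g * area_cm2 * c_gas * t_s / z_cm

# Benzene-like Dg ~ 0.09 cm^2/s in air; needle opening ~ 8e-4 cm^2.
n1 = twa_spme_mass(c_gas=5e-6, t_s=600, z_cm=1.0, d_g=0.09, area_cm2=8e-4)
n2 = twa_spme_mass(c_gas=5e-6, t_s=600, z_cm=2.0, d_g=0.09, area_cm2=8e-4)
```

Doubling the retraction depth Z halves the uptake, which is how the authors tuned the method to a wider Cgas range.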
White, R R; Capper, J L
2013-12-01
The objective of this study was to assess environmental impact, economic viability, and social acceptability of 3 beef production systems with differing levels of efficiency. A deterministic model of U.S. beef production was used to predict the number of animals required to produce 1 × 10(9) kg HCW beef. Three production treatments were compared, 1 representing average U.S. production (control), 1 with a 15% increase in ADG, and 1 with a 15% increase in finishing weight (FW). For each treatment, various socioeconomic scenarios were compared to account for uncertainty in producer and consumer behavior. Environmental impact metrics included feed consumption, land use, water use, greenhouse gas emissions (GHGe), and N and P excretion. Feed cost, animal purchase cost, animal sales revenue, and income over costs (IOVC) were used as metrics of economic viability. Willingness to pay (WTP) was used to identify improvements or reductions in social acceptability. When ADG improved, feedstuff consumption, land use, and water use decreased by 6.4%, 3.2%, and 12.3%, respectively, compared with the control. Carbon footprint decreased 11.7% and N and P excretion were reduced by 4% and 13.8%, respectively. When FW improved, decreases were seen in feedstuff consumption (12.1%), water use (9.2%), and land use (15.5%); total GHGe decreased 14.7%; and N and P excretion decreased by 10.1% and 17.2%, compared with the control. Changes in IOVC were dependent on socioeconomic scenario. When the ADG scenario was compared with the control, changes in sector profitability ranged from 51 to 117% (cow-calf), -38 to 157% (stocker), and 37 to 134% (feedlot). When improved FW was compared, changes in cow-calf profit ranged from 67% to 143%, stocker profit ranged from -41% to 155%, and feedlot profit ranged from 37% to 136%. When WTP was based on marketing beef being more efficiently produced, WTP improved by 10%; thus, social acceptability increased. When marketing was based on production
Krpálková, L; Cabrera, V E; Kvapilík, J; Burdych, J; Crump, P
2014-10-01
The objective of this study was to evaluate, on 33 commercial dairy herds comprising 23,008 cows and 18,139 heifers, the associations of variable intensity in rearing dairy heifers, characterized by age at first calving (AFC) and average daily weight gain (ADG), and of milk yield (MY) level with reproduction traits and profitability. Milk yield during the production period was analyzed relative to reproduction and economic parameters. Data were collected during a 1-yr period (2011). The farms were located in 12 regions of the Czech Republic. The results show that herds with more intensive rearing periods had lower conception rates among heifers at first and overall services. The differences in those conception rates between the group with the greatest ADG (≥0.800 kg/d) and the group with the least ADG (≤0.699 kg/d) were approximately 10 percentage points in favor of the least ADG. All the evaluated reproduction traits differed between AFC groups. Conception at first and overall services (cows) was greatest in herds with AFC ≥800 d. The shortest days open (105 d) and calving interval (396 d) were found in the middle AFC group (799 to 750 d). The highest number of completed lactations (2.67) was observed in the group with the latest AFC (≥800 d). The earliest AFC group (≤749 d) was characterized by the highest depreciation costs per cow, at 8,275 Czech crowns (US$414), and the highest culling rate for cows, at 41%. The most profitable rearing approach was reflected in the middle AFC (799 to 750 d) and middle ADG (0.799 to 0.700 kg) groups. The highest MY (≥8,500 kg) occurred with the earliest AFC of 780 d. Higher MY led to lower conception rates in cows, but the highest MY group also had the shortest days open (106 d) and a calving interval of 386 d. The same MY group had the highest cow depreciation costs, net profit, and profitability without subsidies of 2.67%. We conclude that achieving low AFC will not always be the most profitable approach, which will depend upon farm
Shmool, Jessie L C; Bobb, Jennifer F; Ito, Kazuhiko; Elston, Beth; Savitz, David A; Ross, Zev; Matte, Thomas D; Johnson, Sarah; Dominici, Francesca; Clougherty, Jane E
2015-10-01
Numerous studies have linked air pollution with adverse birth outcomes, but relatively few have examined differential associations across the socioeconomic gradient. To evaluate interaction effects of gestational nitrogen dioxide (NO2) and area-level socioeconomic deprivation on fetal growth, we used: (1) highly spatially-resolved air pollution data from the New York City Community Air Survey (NYCCAS); and (2) spatially-stratified principal component analysis of census variables previously associated with birth outcomes to define area-level deprivation. New York City (NYC) hospital birth records for years 2008-2010 were restricted to full-term, singleton births to non-smoking mothers (n=243,853). We used generalized additive mixed models to examine the potentially non-linear interaction of nitrogen dioxide (NO2) and deprivation categories on birth weight (and estimated linear associations, for comparison), adjusting for individual-level socio-demographic characteristics and testing sensitivity to adjustment for co-pollutant exposures. Estimated NO2 exposures were highest, and most varying, among mothers residing in the most-affluent census tracts, and lowest among mothers residing in mid-range deprivation tracts. In non-linear models, we found an inverse association between NO2 and birth weight in the least-deprived and most-deprived areas (p-values<0.001 and 0.05, respectively) but no association in the mid-range of deprivation (p=0.8). Likewise, in linear models, a 10 ppb increase in NO2 was associated with a decrease in birth weight among mothers in the least-deprived and most-deprived areas of -16.2g (95% CI: -21.9 to -10.5) and -11.0 g (95% CI: -22.8 to 0.9), respectively, and a non-significant change in the mid-range areas [β=0.5 g (95% CI: -7.7 to 8.7)]. Linear slopes in the most- and least-deprived quartiles differed from the mid-range (reference group) (p-values<0.001 and 0.09, respectively). The complex patterning in air pollution exposure and deprivation
Ho, Kai-Yu; Epstein, Ryan; Garcia, Ron; Riley, Nicole; Lee, Szu-Ping
2017-02-01
Study Design Controlled laboratory study. Background Although it has been theorized that patellofemoral joint (PFJ) taping can correct patellar malalignment, the effects of PFJ taping techniques on patellar alignment and contact area have not yet been studied during weight bearing. Objective To examine the effects of 2 taping approaches (Kinesio and McConnell) on PFJ alignment and contact area. Methods Fourteen female subjects with patellofemoral pain and PFJ malalignment participated. Each subject underwent a pretaping magnetic resonance imaging (MRI) scan session and 2 MRI scan sessions after the application of the 2 taping techniques, which aimed to correct lateral patellar displacement. Subjects were asked to report their pain level prior to each scan session. During MRI assessment, subjects were loaded with 25% of body weight on their involved/more symptomatic leg at 0°, 20°, and 40° of knee flexion. The outcome measures included patellar lateral displacement (bisect-offset [BSO] index), mediolateral patellar tilt angle, patellar height (Insall-Salvati ratio), contact area, and pain. Patellofemoral joint alignment and contact area were compared among the 3 conditions (no tape, Kinesio, and McConnell) at 3 knee angles using a 2-factor, repeated-measures analysis of variance. Pain was compared among the 3 conditions using the Friedman test and post hoc Wilcoxon signed-rank tests. Results Our data did not reveal any significant effects of either McConnell or Kinesio taping on the BSO index, patellar tilt angle, Insall-Salvati ratio, or contact area across the 3 knee angles, whereas knee angle had a significant effect on the BSO index and contact area. A reduction in pain was observed after the application of the Kinesio taping technique. Conclusion In a weight-bearing condition, this preliminary study did not support the use of PFJ taping as a medial correction technique to alter the PFJ contact area or alignment of the patella. J Orthop Sports Phys Ther 2017
NASA Technical Reports Server (NTRS)
Huff, Edward M.; Mosher, Marianne; Barszcz, Eric
2002-01-01
Recent research using NASA Ames AH-1 and OH-58C helicopters, and NASA Glenn test rigs, has shown that in-flight vibration data are typically non-stationary [1-4]. The nature and extent of this non-stationarity is most likely produced by several factors operating simultaneously. The aerodynamic flight environment and pilot commands provide continuously changing inputs, with a complex dynamic response that includes automatic feedback control from the engine regulator. It would appear that the combined effects operate primarily through an induced torque profile, which causes concomitant stress modulation at the individual internal gear meshes in the transmission. This notion is supported by several analyses, which show that upwards of 93% of the vibration signal's variance can be explained by knowledge of torque alone. That this relationship is stronger in an AH-1 than an OH-58, where measured non-stationarity is greater, suggests that the overall mass of the vehicle is an important consideration. In the lighter aircraft, the unsteady aerodynamic influences transmit relatively greater unsteady dynamic forces on the mechanical components, quite possibly contributing to its greater non-stationarity. In a recent paper using OH-58C pinion data [5], the authors have shown that in computing a time synchronous average (TSA) for various single-value metric computations, an effective trade-off can be obtained between sample size and measured stationarity by using data from only a single mesh cycle. A mesh cycle, which is defined as the number of rotations required for the gear teeth to return to their original mating position, has the property of representing all of the discrete phase angles of the opposing gears exactly once in the average. Measured stationarity is probably maximized because a single mesh cycle of the pinion gear occurs over a very short span of time, during which time-dependent non-stationary effects are kept to a minimum. Clearly, the advantage of local
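A time synchronous average can be sketched as follows. The signal is synthetic and the sketch assumes the record has already been resampled to an integer number of samples per revolution, as TSA implementations require:

```python
import numpy as np

def time_synchronous_average(signal, samples_per_rev):
    """Time synchronous average (TSA): slice the vibration record into
    one-revolution segments and average them sample-by-sample, so that
    shaft-synchronous components survive while asynchronous noise
    cancels roughly as 1/sqrt(number of revolutions)."""
    n_rev = len(signal) // samples_per_rev
    segs = signal[:n_rev * samples_per_rev].reshape(n_rev, samples_per_rev)
    return segs.mean(axis=0)

rng = np.random.default_rng(2)
spr = 128                                  # samples per revolution (assumed)
phase = 2 * np.pi * np.arange(spr) / spr
mesh_tone = np.sin(8 * phase)              # a shaft-synchronous gear-mesh harmonic
signal = np.tile(mesh_tone, 200) + rng.normal(0, 1.0, 200 * spr)
tsa = time_synchronous_average(signal, spr)
```

Averaging over a single mesh cycle, as in the paper, simply restricts `n_rev` to the number of revolutions in one mesh cycle instead of the whole record.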
Code of Federal Regulations, 2010 CFR
2010-07-01
... Extraction for Vegetable Oil Production Compliance Requirements § 63.2854 How do I determine the weighted... received for use in your vegetable oil production process. By the end of each calendar month following an... the solvent in each delivery of solvent, including solvent recovered from off-site oil. To...
Code of Federal Regulations, 2011 CFR
2011-07-01
... Extraction for Vegetable Oil Production Compliance Requirements § 63.2854 How do I determine the weighted... received for use in your vegetable oil production process. By the end of each calendar month following an... the solvent in each delivery of solvent, including solvent recovered from off-site oil. To...
Roberts, Graham J; McDonald, Fraser; Neil, Monica; Lucas, Victoria S
2014-08-01
The mathematical principle of weighting averages to determine the most appropriate numerical outcome is well established in economic and social studies. It has seen little application in forensic dentistry. This study re-evaluated the data from a previous study of age assessment at the 10 year threshold. A semiautomatic process of weighting averages by n-td, x-tds, sd-tds, se-tds, 1/sd-tds, 1/se-tds was prepared in an Excel worksheet and the different weighted mean values reported. In addition, the fixed-effects and random-effects models for meta-analysis were applied to the same data sets. In conclusion, the most accurate age estimation method was found to be the random-effects model.
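The inverse-standard-error weighting mentioned above is the core of the fixed-effects pooled estimate. A minimal sketch with hypothetical age estimates and standard errors, not the study's data:

```python
def fixed_effect_mean(estimates, std_errors):
    """Inverse-variance (1/se^2) weighted mean: the fixed-effects
    meta-analysis pooled estimate."""
    weights = [1.0 / se**2 for se in std_errors]
    return sum(w * x for w, x in zip(weights, estimates)) / sum(weights)

# Hypothetical per-method age estimates (years) and standard errors.
pooled = fixed_effect_mean([10.2, 9.8, 10.5], [0.4, 0.2, 0.5])
print(round(pooled, 3))
```

More precise methods (smaller standard errors) dominate the pooled value; a random-effects model would additionally add a between-study variance term to each 1/se^2 weight.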
Twitch tension, muscle weight, and fiber area of exercised reinnervating rat skeletal muscle.
Hie, H B; van Nie, C J; Vermeulen-van der Zee, E
1982-12-01
The purpose of this study was to evaluate the effect of dynamic exercise on weight and isometric twitch tension of the reinnervating rat gastrocnemius-plantaris muscle complex as well as on histology of the reinnervating plantaris muscle. Two groups of 6-week-old female Wistar rats, 1 control (n = 17) and 1 experimental (n = 17), were denervated unilaterally by cutting and resecting the sciatic nerve. To effect reinnervation a skin grafting operation was carried out on the nerve so that the gap caused by resection was bridged. The experimental group began exercising on a motor-driven treadmill 18 days following the graft. A progressive training program of 18 weeks of treadmill running, 5 days/week, was carried out by the animals. Training intensity was gradually increased until during the final 3 weeks they were running up a 25% grade at a speed of 720m/hour for 2 hours a day. Exercise did not damage the reinnervating muscle. Absolute wet weight and maximum isometric twitch tension of the reinnervating gastrocnemius-plantaris muscle complex were increased significantly, by 15 1/2% and 30% respectively, after exercise. Training resulted in a significant increase in fiber and muscle cross-sectional areas of the reinnervating plantaris, by 28% and 23% respectively. Exercise brought about no change in total relative amount of connective tissue in the reinnervating plantaris. This study indicates that dynamic exercise has a significant positive effect on the weight, twitch tension and histologic appearance of the reinnervating gastrocnemius-plantaris muscle and thus may enhance their functional recovery. It is likely that this type of training is also effective in the treatment of patients recovering from peripheral nerve injuries.
NASA Astrophysics Data System (ADS)
Mordenti, Delphine; Brodzki, Dominique; Djéga-Mariadassou, Gérald
1998-11-01
A molybdenum carbide supported on active carbon for catalytic hydrotreating was prepared by temperature-programmed reaction (TPR) in flowing H2 of an active carbon impregnated with a heptamolybdate. TPR led at 973 K to the formation of supported Mo2C. This new method of preparation avoids the use of methane as the carburizing reactant and allows in situ preparation of supported molybdenum carbide without any contact of this pyrophoric material with air between preparation and catalytic run. The various steps of the carburization process were studied by trapping the solid intermediates at different temperatures during TPR. Two successive reactions were identified: the partial reduction by H2 of the initial molybdenum precursor to MoO2, and its subsequent carburization to Mo2C. This last step is mainly due to the reduction of MoO2 and carburization with native methane evolved from the reaction of the carbon support with dihydrogen. Solid materials were characterized by elemental analysis, X-ray diffraction, transmission electron microscopy, and specific surface area measurements.
Ruiz, J M; Busnel, J P; Benoît, J P
1990-09-01
The phase separation of fractionated poly(DL-lactic acid-co-glycolic acid) copolymers 50/50 was determined by silicone oil addition. Polymer fractionation by preparative size exclusion chromatography afforded five different microsphere batches. Average molecular weight determined the existence, width, and displacement of the "stability window" inside the phase diagrams, and also microsphere characteristics such as core loading and amount released over 6 hr. Further, the gyration and hydrodynamic radii were measured by light scattering. It is concluded that the polymer-solvent affinity is largely modified by the variation of average molecular weights owing to different levels of solubility. The lower the average molecular weight is, the better methylene chloride serves as a solvent for the coating material. However, a paradoxical effect due to an increase in free carboxyl and hydroxyl groups is noticed for polymers of 18,130 and 31,030 SEC (size exclusion chromatography) Mw. For microencapsulation, polymers having an intermediate molecular weight (47,250) were the most appropriate in terms of core loading and release purposes.
Lee, Tzu-Hsien; Tseng, Chia-Yun
2014-01-01
This study recruited 16 industrial workers to examine the effects of material, weight, and base area of container on reduction of grip force (ΔGF) and heart rate for a 100-m manual carrying task. This study examined 2 carrying materials (iron and water), 4 carrying weights (4.4, 8.9, 13.3, 17.8 kg), and 2 base areas of container (24 × 24 cm, 35 × 24 cm). This study showed that carrying water significantly increased ΔGF and heart rate as compared with carrying iron. Also, ΔGF and heart rate significantly increased with carrying weight and base area of container. The effects of base area of container on ΔGF and heart rate were greater in carrying water condition than in carrying iron condition. The maximum dynamic effect of water on ΔGF and heart rate occurred when water occupied ~60%-80% of full volume of the container.
NASA Astrophysics Data System (ADS)
Jadhav, Nitin A.; Singh, Pramod K.; Rhee, Hee Woo; Bhattacharya, Bhaskar
2014-10-01
Mesoporous ZnO nanoparticles have been synthesized with a large increase in specific surface area, up to 578 m2/g, compared with the 5.54 m2/g of previous reports (J. Phys. Chem. C 113:14676-14680, 2009). Mesoporous ZnO nanoparticles with average pore sizes ranging from 7.22 to 13.43 nm and specific surface areas ranging from 50.41 to 578 m2/g were prepared through the sol-gel method via a simple evaporation-induced self-assembly process. The hydrolysis rate of zinc acetate was varied using different concentrations of sodium hydroxide. Morphology, crystallinity, porosity, and J-V characteristics of the materials have been studied using transmission electron microscopy (TEM), X-ray diffraction (XRD), BET nitrogen adsorption/desorption, and Keithley instruments.
Huang, Yuanyuan; Varsier, Nadège; Niksic, Stevan; Kocan, Enis; Pejanovic-Djurisic, Milica; Popovic, Milica; Koprivica, Mladen; Neskovic, Aleksandar; Milinkovic, Jelena; Gati, Azeddine; Person, Christian; Wiart, Joe
2016-09-01
This article is the first thorough study of average population exposure to third generation network (3G)-induced electromagnetic fields (EMFs), from both uplink and downlink radio emissions in different countries, geographical areas, and for different wireless device usages. Indeed, previous publications in the framework of exposure to EMFs generally focused on individual exposure coming from either personal devices or base stations. Results, derived from device usage statistics collected in France and Serbia, show a strong heterogeneity of exposure, both in time, that is, the traffic distribution over 24 h was found highly variable, and space, that is, the exposure to 3G networks in France was found to be roughly two times higher than in Serbia. Such heterogeneity is further explained based on real data and network architecture. Among those results, authors show that, contrary to popular belief, exposure to 3G EMFs is dominated by uplink radio emissions, resulting from voice and data traffic, and average population EMF exposure differs from one geographical area to another, as well as from one country to another, due to the different cellular network architectures and variability of mobile usage. Bioelectromagnetics. 37:382-390, 2016. © 2016 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Zaman, Muhammad; Kim, Guinyun; Naik, Haladhara; Kim, Kwangsoo; Cho, Young-Sik; Lee, Young-Ok; Shin, Sung-Gyun; Cho, Moo-Hyun; Kang, Yeong-Rok; Lee, Man-Woo
2017-04-01
The flux-weighted average cross-sections of (γ, xn) reactions on natZn induced by bremsstrahlung end-point energies of 50, 55, 60, and 65 MeV have been determined by activation and off-line γ-ray spectrometric techniques, using the 100 MeV electron linac at the Pohang Accelerator Laboratory (PAL), Pohang, Korea. The theoretical photon-induced reaction cross-sections of natZn as a function of photon energy were taken from the TENDL-2014 nuclear data library, based on the TALYS 1.6 program. The flux-weighted average cross-sections were obtained from the literature data and from the mono-energetic-photon theoretical values of TENDL-2014. The flux-weighted reaction cross-sections from the present work and the literature data at different bremsstrahlung end-point energies are in good agreement with the theoretical values. It was found that the individual natZn(γ, xn) reaction cross-sections increase sharply from the reaction threshold to a certain value at which the next reaction channel opens. Thereafter the cross-section remains roughly constant for a while as the next reaction channel grows, and then decreases slowly with increasing bremsstrahlung end-point energy due to the opening of other reaction channels.
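The flux-weighted average cross-section in these two records is the spectrum-averaged quantity sigma_avg = integral(sigma(E) * phi(E) dE) / integral(phi(E) dE). A minimal numerical sketch with an assumed flux shape and cross-section (illustrative values only, not TENDL or measured data):

```python
import numpy as np

def _trapezoid(y, x):
    """Trapezoidal integration of samples y over grid x."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def flux_weighted_cross_section(energies, sigma, flux):
    """sigma_avg = integral(sigma * flux) / integral(flux)."""
    return _trapezoid(sigma * flux, energies) / _trapezoid(flux, energies)

# Illustrative shapes only: a 1/E-like bremsstrahlung flux and a
# Gaussian, giant-resonance-like cross-section (arbitrary units).
E = np.linspace(10.0, 50.0, 200)            # photon energy, MeV
phi = 1.0 / E                                # assumed flux shape
sigma = np.exp(-(((E - 17.0) / 4.0) ** 2))   # assumed cross-section
print(flux_weighted_cross_section(E, sigma, phi))
```

Because the low-energy end of the bremsstrahlung spectrum carries most of the flux, the weighted average is dominated by the cross-section near threshold, consistent with the trends described in the abstract.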
NASA Astrophysics Data System (ADS)
Shakilur Rahman, Md.; Kim, Kwangsoo; Kim, Guinyun; Naik, Haladhara; Nadeem, Muhammad; Thi Hien, Nguyen; Shahid, Muhammad; Yang, Sung-Chul; Cho, Young-Sik; Lee, Young-Ouk; Shin, Sung-Gyun; Cho, Moo-Hyun; Woo Lee, Man; Kang, Yeong-Rok; Yang, Gwang-Mo; Ro, Tae-Ik
2016-07-01
We measured the flux-weighted average cross-sections and the isomeric yield ratios of 99m,g, 100m,g, 101m,g, 102m,gRh in the 103Rh(γ, xn) reactions with bremsstrahlung end-point energies of 55 and 60 MeV by the activation and off-line γ-ray spectrometric technique, using the 100 MeV electron linac at the Pohang Accelerator Laboratory (PAL), Korea. The flux-weighted average cross-sections were calculated using the computer code TALYS 1.6 based on mono-energetic photons and compared with the present experimental data. The flux-weighted average cross-sections of the 103Rh(γ, xn) reactions at intermediate bremsstrahlung energies are measured here for the first time and are found to increase from their threshold value to a particular value, where the other reaction channels open up. Thereafter, they decrease with bremsstrahlung energy due to partition among different reaction channels. The isomeric yield ratios (IR) of 99m,g, 100m,g, 101m,g, 102m,gRh in the 103Rh(γ, xn) reactions from the present work were compared with the literature data for the 103Rh(d, x), 102-99Ru(p, x), 103Rh(α, αn), 103Rh(α, 2p3n), 102Ru(3He, x), and 103Rh(γ, xn) reactions. It was found that the IR values of 102, 101, 100, 99Rh in all these reactions increase with the projectile energy, which indicates the role of excitation energy. At the same excitation energy, the IR values of 102, 101, 100, 99Rh are higher in the charged-particle-induced reactions than in the photon-induced reaction, which indicates the role of input angular momentum.
NASA Astrophysics Data System (ADS)
Zhang, Gui-Qing; Wang, Lin; Chen, Tian-Lun
2009-05-01
In most networks, connection weights change with attachment and inner affinity. By introducing a mixed mechanism of weight-driven attachment and inner selection, the model exhibits wide-range power-law distributions of node strength and edge weight, whose exponents can be adjusted not only by the parameter δ but also by the probability q. Furthermore, we investigate the weighted average shortest distance, the clustering coefficient, and the correlations of our network. In addition, the weighted assortativity coefficient, which characterizes important information about weighted topological networks, is discussed; the variation of the coefficients is much smaller than in previous studies.
Rudbeck, M; Mølbak, K; Uldum, S
2008-02-01
The incidence of Legionnaires' disease has an uneven geographical distribution in Denmark, ranging from 3 to 70 notified cases per million inhabitants per year in different towns. We investigated the prevalence of antibodies to Legionella in the one town with a consistently high incidence (Randers, Aarhus County) and compared it with that of an area of average incidence (Vejle, Vejle County). Blood samples were collected from healthy blood donors in Randers (n=308) and in Vejle (n=400), and analysed for antibodies to Legionella by indirect immunofluorescence antibody test with L. pneumophila, L. micdadei, and L. bozemanii as antigens. Overall 22.9% of the donors had antibody titres of > or = 1:128; indicating that antibodies to Legionella are common in healthy individuals, and reflecting that the bacteria may be widely distributed in the environment. Surprisingly, the study did not reveal a higher prevalence in the hyperendemic area. Thus, the high incidence of notified cases in this particular town may not be attributed to an overall increased exposure of the general population.
The Average of Rates and the Average Rate.
ERIC Educational Resources Information Center
Lindstrom, Peter
1988-01-01
Defines arithmetic, harmonic, and weighted harmonic means, and discusses their properties. Describes the application of these properties in problems involving fuel economy estimates and average rates of motion. Gives example problems and solutions. (CW)
Speidel, S E; Peel, R K; Crews, D H; Enns, R M
2016-02-01
Genetic evaluation research designed to reduce the required days to a specified end point has received very little attention in the pertinent scientific literature, given that its economic importance was first discussed in 1957. There are many production scenarios in today's beef industry, making a prediction for the required number of days to a single end point a suboptimal option. Random regression is an attractive alternative to calculate days to weight (DTW), days to ultrasound back fat (DTUBF), and days to ultrasound rib eye area (DTUREA) genetic predictions that could overcome weaknesses of a single end point prediction. The objective of this study was to develop random regression approaches for the prediction of the DTW, DTUREA, and DTUBF. Data were obtained from the Agriculture and Agri-Food Canada Research Centre, Lethbridge, AB, Canada. Data consisted of records on 1,324 feedlot cattle spanning 1999 to 2007. Individual animals averaged 5.77 observations, with weights, ultrasound rib eye area (UREA), ultrasound back fat depth (UBF), and ages ranging from 293 to 863 kg, 73.39 to 129.54 cm2, 1.53 to 30.47 mm, and 276 to 519 d, respectively. Random regression models using Legendre polynomials were used to regress age of the individual on weight, UREA, and UBF. Fixed effects in the model included an overall fixed regression of age on end point (weight, UREA, and UBF) nested within breed to account for the mean relationship between age and weight, as well as a contemporary group effect consisting of breed of the animal (Angus, Charolais, and Charolais sired), feedlot pen, and year of measure. Likelihood ratio tests were used to determine the appropriate random polynomial order. Use of the quadratic polynomial did not account for any additional genetic variation in days for DTW (P > 0.11), DTUREA (P > 0.18), or DTUBF (P > 0.20) when compared with the linear random polynomial. Heritability estimates from the linear random regression for DTW ranged from 0.54 to 0
Code of Federal Regulations, 2010 CFR
2010-10-01
... 50 Wildlife and Fisheries 9 2010-10-01 2010-10-01 false 2009, Specifications of ABCs, OYs, and HGs, by Management Area (weights in metric tons) 1a Table 1a to Part 660, Subpart C Wildlife and Fisheries..., Subpart C—2009, Specifications of ABCs, OYs, and HGs, by Management Area (weights in metric tons)...
Code of Federal Regulations, 2010 CFR
2010-10-01
... 50 Wildlife and Fisheries 9 2010-10-01 2010-10-01 false 2009, Specifications of ABCs, OYs, and HGs, by Management Area (weights in metric tons) 1a Table 1a to Part 660, Subpart G Wildlife and Fisheries..., Subpart G—2009, Specifications of ABCs, OYs, and HGs, by Management Area (weights in metric tons) Link...
Code of Federal Regulations, 2010 CFR
2010-10-01
... 50 Wildlife and Fisheries 9 2010-10-01 2010-10-01 false 2010, Specifications of ABCs, OYs, and HGs, by Management Area (weights in metric tons) 2a Table 2a to Part 660, Subpart C Wildlife and Fisheries..., Subpart C—2010, Specifications of ABCs, OYs, and HGs, by Management Area (weights in metric tons)...
Code of Federal Regulations, 2010 CFR
2010-10-01
... 50 Wildlife and Fisheries 9 2010-10-01 2010-10-01 false 2010, Specifications of ABCs, OYs, and HGs, by Management Area (weights in metric tons) 2a Table 2a to Part 660, Subpart G Wildlife and Fisheries..., Subpart G—2010, Specifications of ABCs, OYs, and HGs, by Management Area (weights in metric tons) Link...
Lopes, Thomas J.; Evetts, David M.
2004-01-01
Nevada's reliance on ground-water resources has increased because of increased development and surface-water resources being fully appropriated. The need to accurately quantify Nevada's water resources and water use is more critical than ever to meet future demands. Estimated ground-water pumpage, artificial and natural recharge, and interbasin flow can be used to help evaluate stresses on aquifer systems. In this report, estimates of ground-water pumpage and artificial recharge during calendar year 2000 were made using data from a variety of sources, such as reported estimates and estimates made using Landsat satellite imagery. Average annual natural recharge and interbasin flow were compiled from published reports. An estimated 1,427,100 acre-feet of ground water was pumped in Nevada during calendar year 2000. This total was calculated by summing six categories of ground-water pumpage, based on water use. Total artificial recharge during 2000 was about 145,970 acre-feet. At least one estimate of natural recharge was available for 209 of the 232 hydrographic areas (HAs). Natural recharge for the 209 HAs ranges from 1,793,420 to 2,583,150 acre-feet. Estimates of interbasin flow were available for 151 HAs. The categories and their percentage of the total ground-water pumpage are irrigation and stock watering (47 percent), mining (26 percent), water systems (14 percent), geothermal production (8 percent), self-supplied domestic (4 percent), and miscellaneous (less than 1 percent). Pumpage in the top 10 HAs accounted for about 49 percent of the total ground-water pumpage. The most ground-water pumpage in an HA was due to mining in Pumpernickel Valley (HA 65), Boulder Flat (HA 61), and Lower Reese River Valley (HA 59). Pumpage by water systems in Las Vegas Valley (HA 212) and Truckee Meadows (HA 87) were the fourth and fifth highest pumpage in 2000, respectively. Irrigation and stock watering pumpage accounted for most ground-water withdrawals in the HAs with the sixth
Sugiura, Mayuko; Satoh, Masayuki; Tabei, Ken-ichi; Saito, Tomoki; Mori, Mutsuki; Abe, Makiko; Kida, Hirotaka; Maeda, Masayuki; Sakuma, Hajime; Tomimoto, Hidekazu
2016-01-01
Background Little research has been conducted regarding the role of pulvinar nuclei in the pathogenesis of visual hallucinations due to the difficulty of assessing abnormalities in this region using conventional magnetic resonance imaging (MRI). The present study aimed to retrospectively investigate the relative abilities of diffusion-weighted imaging (DWI), fluid-attenuated inversion recovery (FLAIR), and susceptibility-weighted imaging (SWI) to visualize the pulvinar and to ascertain the relationship between pulvinar visualization and visual hallucinations. Methods A retrospective analysis of 3T MRIs from 73 patients (31 males, 42 females; mean age 73.5 ± 12.7 years) of the Memory Clinic of Mie University Hospital was conducted. Correlations between pulvinar visualization and the following were analyzed: age, sex, education, hypertension, hyperlipidemia, diabetes mellitus, Mini-Mental State Examination score, Evans index, and visual hallucinations. Results DWI detected low-signal pulvinar areas in approximately half of the patients (52.1%). Participants with pulvinar visualization were significantly older, and the pulvinar was more frequently visualized in patients who had experienced visual hallucinations compared to those who had not. No significant association was observed between whole brain atrophy and pulvinar visualization. Conclusions The results of the present study indicate that diffusion-weighted 3T MRI is the most suitable method for the detection of pulvinar nuclei in patients with dementia experiencing visual hallucinations. PMID:27790244
Nassel, Ariann F.; Thomas, Deborah
2016-01-01
Obesity rates are higher for ethnic minority, low-income, and rural communities. Programs are needed to support these communities with weight management. We determined the reach of a low-cost, nationally-available weight loss program in Health Resources and Services Administration medically underserved areas (MUAs) and described the demographics of the communities with program locations. This is a cross-sectional analysis of Take Off Pounds Sensibly (TOPS) chapter locations. Geographic information systems technology was used to combine information about TOPS chapter locations, the geographic boundaries of MUAs, and socioeconomic data from the Decennial 2010 Census. TOPS is available in 30 % of MUAs. The typical TOPS chapter is in a Census Tract that is predominantly white, urban, with a median annual income between $25,000 and $50,000. However, there are TOPS chapters in Census Tracts that can be classified as predominantly black or predominantly Hispanic; predominantly rural; and as low or high income. TOPS provides weight management services in MUAs and across many types of communities. TOPS can help treat obesity in the medically underserved. Future research should determine the differential effectiveness among chapters in different types of communities. PMID:26072259
Meyer-Marcotty, M V; Sutmoeller, K; Kopp, J; Vogt, P M
2012-04-01
Functional results regarding shoe modifications, gait analysis and long-term durability of the reconstructed foot have not been reported using insole paedobarography. This article presents insole-paedobarographic gait analysis and discusses the various pressure distribution patterns following the reconstruction of the foot. This retrospective study reports on the clinical and functional results in 23 out of 39 patients who received flap coverage of their feet in our department in the period from 2001 to 2010. Mean follow-up time amounted to 46.6 months. Patients were separated into two groups, those with flap coverage to the sole of the foot (group 1) and those with flap coverage to non-weight-bearing areas of the foot (group 2). Gait analysis was accomplished by using insole paedobarography. The results of the gait analysis have shown that in both patient groups, when comparing affected feet with sound feet, the affected feet were exposed to significantly less support time (group 1; affected vs. sound feet: 0.44 ± 0.07 s vs. 0.55 ± 0.11 s, p = 0.047), (group 2; affected vs. sound feet: 0.47 ± 0.07 s vs. 0.54 ± 0.07 s, p = 0.029). In addition, in both patient groups, the analysis of peak-pressure distributions revealed greater pressures on the affected feet compared to the sound feet (group 1; affected vs. sound feet: 47.9 ± 10.13 N cm(-2) vs. 36.3 ± 7.5 N cm(-2), p = 0.008), (group 2; affected vs. sound feet: 38.08 ± 13.98 N cm(-2) vs. 32.92 ± 14.77 N cm(-2), p = 0.061). The insole paedobarography can contribute to a more precise gait analysis following a soft-tissue reconstruction not only of the sole but also of other foot regions as well. It can help to identify and correct movement sequences and peak-pressure distributions which are damaging to the flaps. The resulting potential minimisation of the ulceration rate can lead to a further optimisation in the rate of completely rehabilitated patients and a reduction in the revision rate.
Fritzsche, Klaus H.; Thieke, Christian; Klein, Jan; Parzer, Peter; Weber, Marc-André; Stieltjes, Bram
2012-01-01
The apparent diffusion coefficient (ADC) derived from diffusion-weighted imaging (DWI) correlates inversely with tumor proliferation rates. High-grade gliomas are typically heterogeneous and the delineation of areas of high and low proliferation is impeded by partial volume effects and blurred borders. Commonly used manual delineation is further impeded by potential overlap with cerebrospinal fluid and necrosis. Here we present an algorithm to reproducibly delineate and probabilistically quantify the ADC in areas of high and low proliferation in heterogeneous gliomas, resulting in a reproducible quantification in regions of tissue inhomogeneity. We used an expectation maximization (EM) clustering algorithm, applied on a Gaussian mixture model, consisting of pure superpositions of Gaussian distributions. Soundness and reproducibility of this approach were evaluated in 10 patients with glioma. High- and low-proliferating areas found using the clustering correspond well with conservative regions of interest drawn using all available imaging data. Systematic placement of model initialization seeds shows good reproducibility of the method. Moreover, we illustrate an automatic initialization approach that completely removes user-induced variability. In conclusion, we present a rapid, reproducible and automatic method to separate and quantify heterogeneous regions in gliomas. PMID:22487677
Duncan, M J; Zuker, R M; Manktelow, R T
1985-01-01
Five cases of chronic ulceration following skin graft resurfacing of the weight bearing surface of the heel are presented. All were managed with debridement and coverage with a free innervated dorsalis pedis tissue transfer. The technical refinements that have contributed to the reliability of the flap include careful distal identification of the first dorsal metatarsal artery (FDMA) and division of the dorsalis pedis artery (DPA) under direct vision below the takeoff of the FDMA. Donor site morbidity has been minimized by taking care to preserve the extensor paratenon as a bed for the subsequent skin graft and by immobilization of the donor foot with plaster and bed rest for 10 days. Four of the patients were followed for 2, 4, 4, and 6 years; one was lost to follow-up. All were active with protective sensation in their flaps. No instances of flap breakdown and no significant donor site morbidity were noted. The dorsalis pedis innervated free tissue transfer is recommended as a reliable procedure for resurfacing weight bearing areas of the foot when simpler methods have failed.
NASA Technical Reports Server (NTRS)
Kovich, G.; Moore, R. D.; Urasek, D. C.
1973-01-01
The overall and blade-element performance are presented for an air compressor stage designed to study the effect of weight flow per unit annulus area on efficiency and flow range. At the design speed of 424.8 m/sec the peak efficiency of 0.81 occurred at the design weight flow and a total pressure ratio of 1.56. Design pressure ratio and weight flow were 1.57 and 29.5 kg/sec (65.0 lb/sec), respectively. Stall margin at design speed was 19 percent based on the weight flow and pressure ratio at peak efficiency and at stall.
Misut, Paul E.; Monti, Jack
2016-10-05
To assist resource managers and planners in developing informed strategies to address nitrogen loading to coastal water bodies of Long Island, New York, the U.S. Geological Survey and the New York State Department of Environmental Conservation initiated a program to delineate a comprehensive dataset of groundwater recharge areas (or areas contributing groundwater), travel times, and outflows to streams and saline embayments on Long Island. A four-layer regional three-dimensional finite-difference groundwater-flow model of hydrologic conditions from 1968 to 1983 was used to provide delineations of 48 groundwater watersheds on Long Island. Sixteen particle starting points were evenly spaced within each of the 4,000- by 4,000-foot model cells that receive water-table recharge and tracked using forward particle-tracking analysis modeling software to outflow zones. For each particle, simulated travel times were grouped by age as follows: less than or equal to 10 years, greater than 10 years and less than or equal to 100 years, greater than 100 years and less than or equal to 1,000 years, and greater than 1,000 years; and simulated ending zones were grouped into 48 receiving water bodies, based on the New York State Department of Environmental Conservation Waterbody Inventory/Priority Waterbodies List. Areal delineation of travel time zones and groundwater contributing areas were generated and a table was prepared presenting the sum of groundwater outflow for each area.
NASA Astrophysics Data System (ADS)
Shi, Y.; Long, Y.; Wi, X. L.
2014-04-01
When tourists visit multiple scenic spots, the actual travel route usually follows the most effective road network, which may differ from the planned route. In navigation systems, a proposed travel route is normally generated automatically by a path-planning algorithm that considers the scenic spots' positions and the road network. But when a scenic spot covers a certain area and has multiple entrances or exits, the traditional single-point-coordinate representation cannot reflect these structural features. To solve this problem, this paper focuses on how such structural features of scenic spots affect path planning, and then proposes a double-weighted graph model in which the weights of both the vertices and the edges can be selected dynamically. It then discusses the model-building method and an optimal path-planning algorithm based on the Dijkstra and Prim algorithms. Experimental results show that the optimal travel route derived from the proposed model and algorithm is more reasonable, and the visiting order and travel distance are further optimized.
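A double-weighted graph (weights on both vertices and edges) fits naturally into Dijkstra's algorithm by adding a vertex's weight whenever a path enters that vertex. A minimal sketch under that assumption, with a toy graph rather than the paper's road-network data:

```python
import heapq

def dijkstra_double_weighted(edges, vertex_weight, start, goal):
    """Shortest path on a graph where both edges and vertices carry
    weights; entering a vertex adds its weight to the path cost.
    edges: {u: [(v, w), ...]} adjacency; vertex_weight: {v: cost}."""
    dist = {start: vertex_weight.get(start, 0)}
    heap = [(dist[start], start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            return d
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in edges.get(u, []):
            nd = d + w + vertex_weight.get(v, 0)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")

# Toy network: spot "B" costs 3 to traverse (e.g., internal walking
# between its entrance and exit), so the direct A->C road wins.
edges = {"A": [("B", 2), ("C", 5)], "B": [("C", 1)], "C": []}
vw = {"A": 0, "B": 3, "C": 0}
print(dijkstra_double_weighted(edges, vw, "A", "C"))  # -> 5
```

Making the vertex weights selectable at query time (for example, entrance-dependent traversal costs) is one way to model the dynamic weight selection the abstract describes.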
Denzin, Nicolai; Schliephake, Annette; Ewert, Benno
2005-01-01
Between 1998 and 2004, 1,341 red foxes from 611 locations were examined parasitologically for Echinococcus multilocularis at the State Office of Consumer Protection Saxony-Anhalt. Examination was carried out in parallel to rabies monitoring. A period prevalence of infestation of 9.2% was found. Employing a scan statistic, a large area in the southwest of the federal state and two smaller areas of increased risk of infestation with Echinococcus multilocularis were identified. The hypothesis of a negative association between the probability of infestation and the average annual maximum temperature of the location where the foxes were shot was supported by logistic regression analysis. A decreased probability of inactivation of Echinococcus multilocularis oncospheres through heat and desiccation in areas of lower average annual maximum temperature seems likely. Thus, the infection pressure increases with reduced temperatures.
NASA Technical Reports Server (NTRS)
Urasek, D. C.; Kovich, G.; Moore, R. D.
1973-01-01
Performance was obtained for a 50-cm-diameter compressor designed for a high weight flow per unit annulus area of 208 (kg/sec)/sq m. Peak efficiency values of 0.83 and 0.79 were obtained for the rotor and stage, respectively. The stall margin for the stage was 23 percent, based on equivalent weight flow and total-pressure ratio at peak efficiency and stall.
Calculating areal average thickness of rigid gas-permeable contact lenses.
Weissman, B A
1986-11-01
A method to calculate areal average thickness of rigid contact lenses is shown. The method involves division of lens volume, which is determined from lens design specifications or derived from measured lens weight, by the area of the lens back surface. Areal average thickness may then be used with known oxygen permeability to generate oxygen transmissibility values.
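The described calculation divides lens volume by back-surface area. A minimal sketch, assuming the back surface is modeled as a spherical cap defined by the base-curve radius and lens diameter (the function names and the cap model are illustrative assumptions, not the paper's exact procedure):

```python
import math

def areal_average_thickness(volume_mm3, back_radius_mm, diameter_mm):
    """Areal average thickness = lens volume / back-surface area.
    Back surface modeled as a spherical cap of radius r (base curve)
    with chord diameter d."""
    r, d = back_radius_mm, diameter_mm
    sag = r - math.sqrt(r * r - (d / 2.0) ** 2)  # sagittal depth of the cap
    cap_area = 2.0 * math.pi * r * sag           # spherical-cap area: 2*pi*r*h
    return volume_mm3 / cap_area

def volume_from_weight(weight_mg, density_mg_per_mm3):
    """Lens volume derived from measured lens weight, as the abstract allows."""
    return weight_mg / density_mg_per_mm3
```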
Yonekura, Yuki; Sasaki, Ryohei; Yokoyama, Yukari; Tanno, Kozo; Sakata, Kiyomi; Ogawa, Akira; Kobayashi, Seichiro; Yamamoto, Taro
2016-01-01
Introduction Survivors who lost their homes in the Great East Japan Earthquake and Tsunami were forced to live in difficult conditions in temporary housing several months after the disaster. Body weights of survivors living in temporary housing for a long period might increase due to changes in their lifestyle and psychosocial state during the medium-term and long-term recovery phases. The aim of this study was to determine whether there were differences between body weight changes of people living in temporary housing and those not living in temporary housing in a tsunami-stricken area during the medium-term and long-term recovery phases. Materials and methods Health check-ups were performed about 7 months after the disaster (in 2011) and about 18 months after the disaster (in 2012) for people living in a tsunami-stricken area (n = 6,601, mean age = 62.3 y). We compared the changes in body weight in people living in temporary housing (TH group, n = 2,002) and those not living in temporary housing (NTH group, n = 4,599) using a multiple linear regression model. Results While there was no significant difference between body weights in the TH and NTH groups in the 2011 survey, there was a significant difference between the mean changes in body weight in both sexes. We found that the changes in body weight were significantly greater in the TH group than in the NTH group in both sexes. The partial regression coefficients of mean change in body weight were +0.52 kg (P-value < 0.001) in males in the TH group and +0.56 kg (P-value < 0.001) in females in the TH group (reference: NTH group). Conclusion Analysis after adjustment for lifestyle, psychosocial factors and cardiovascular risk factors found that people living in temporary housing in the tsunami-stricken area had a significant increase in body weight. PMID:27907015
Hou, Tingjun; Zhang, Wei; Huang, Qin; Xu, Xiaojie
2005-02-01
A new method is proposed for calculating aqueous solvation free energy based on atom-weighted solvent accessible surface areas. The method, SAWSA v2.0, gives the aqueous solvation free energy by summing the contributions of component atoms and a correction factor. We applied two different sets of atom typing rules and fitting processes for small organic molecules and proteins, respectively. For small organic molecules, the model classifies the atoms into 65 basic types; additionally, we proposed a correction factor of "hydrophobic carbon" to account for the aggregation of hydrocarbons and compounds with long hydrophobic aliphatic chains. The contributions for each atom type and correction factor were derived by multivariate regression analysis of 379 neutral molecules and 39 ions with known experimental aqueous solvation free energies. Based on the new atom typing rules, the correlation coefficient (r) for fitting the whole set of neutral organic molecules is 0.984, and the absolute mean error is 0.40 kcal mol(-1), which is much better than those of the model proposed by Wang et al. and the SAWSA model previously proposed by us. Furthermore, the SAWSA v2.0 model was compared with the simple atom-additive model based on the number of atom types (NA). The calculated results show that for small organic molecules, the predictions from the SAWSA v2.0 model are slightly better than those from the atom-additive model based on NA. However, for macromolecules such as proteins, due to the connection between their molecular conformation and their molecular surface area, the atom-additive model based on the number of atom types has little predictive power. In order to investigate the predictive power of our model, a systematic comparison was performed on seven solvation models including SAWSA v2.0, GB/SA_1, GB/SA_2, PB/SA_1, PB/SA_2, AM1/SM5.2R and SM5.0R. The results showed that for organic molecules the SAWSA v2.0 model is better
ERIC Educational Resources Information Center
Young, Beverly S.
The present study was designed to determine whether conservation of number, weight, volume, area, and mass could be learned and retained by disadvantaged preschool children when taught by an inexperienced classroom teacher. An instructional sequence of 10-minute lessons was presented on alternate days over a 3 1/2 week period by preservice…
Chrien, R.E.
1986-10-01
The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs.
Areal Average Albedo (AREALAVEALB)
Riihimaki, Laura; Marinovici, Cristina; Kassianov, Evgueni
2008-01-01
The Areal Averaged Albedo VAP yields areal averaged surface spectral albedo estimates from MFRSR measurements collected under fully overcast conditions via a simple one-line equation (Barnard et al., 2008), which links cloud optical depth, normalized cloud transmittance, asymmetry parameter, and areal averaged surface albedo under fully overcast conditions.
Haines, Aaron M.; Leu, Matthias; Svancara, Leona K.; Wilson, Gina; Scott, J. Michael
2010-01-01
Identification of biodiversity hotspots (hereafter, hotspots) has become a common strategy to delineate important areas for wildlife conservation. However, the use of hotspots has not often incorporated important habitat types, ecosystem services, anthropogenic activity, or consistency in identifying important conservation areas. The purpose of this study was to identify hotspots to improve avian conservation efforts for Species of Greatest Conservation Need (SGCN) in the state of Idaho, United States. We evaluated multiple approaches to define hotspots and used a unique approach based on weighting species by their distribution size and conservation status to identify hotspot areas. All hotspot approaches identified bodies of water (Bear Lake, Grays Lake, and American Falls Reservoir) as important hotspots for Idaho avian SGCN, but we found that the weighted approach produced more congruent hotspot areas when compared to other hotspot approaches. To incorporate anthropogenic activity into hotspot analysis, we grouped species based on their sensitivity to specific human threats (i.e., urban development, agriculture, fire suppression, grazing, roads, and logging) and identified ecological sections within Idaho that may require specific conservation actions to address these human threats using the weighted approach. The Snake River Basalts and Overthrust Mountains ecological sections were important areas for potential implementation of conservation actions to conserve biodiversity. Our approach to identifying hotspots may be useful as part of a larger conservation strategy to aid land managers or local governments in applying conservation actions on the ground.
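The weighted approach described, in which species are weighted by distribution size and conservation status, can be sketched as a weighted richness sum per map cell. The data structures and the exact weight formula (status weight divided by range size, so narrow-ranged, imperiled species count more) are illustrative assumptions, not the authors' published formulation:

```python
def hotspot_scores(occurrence, range_size, status_weight):
    """Weighted richness per cell: each species contributes
    status_weight / range_size to every cell it occupies, so
    narrow-ranged, high-concern species dominate the hotspot map."""
    scores = {}
    for species, cells in occurrence.items():
        w = status_weight[species] / range_size[species]
        for c in cells:
            scores[c] = scores.get(c, 0.0) + w
    return scores
```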
2012-01-01
Background The study conducts statistical and spatial analyses to investigate amounts and types of permitted surface water pollution discharges in relation to population mortality rates for cancer and non-cancer causes nationwide and by urban-rural setting. Data from the Environmental Protection Agency's (EPA) Discharge Monitoring Report (DMR) were used to measure the location, type, and quantity of a selected set of 38 discharge chemicals for 10,395 facilities across the contiguous US. Exposures were refined by weighting amounts of chemical discharges by their estimated toxicity to human health, and by estimating the discharges that occur not only in a local county, but area-weighted discharges occurring upstream in the same watershed. Centers for Disease Control and Prevention (CDC) mortality files were used to measure age-adjusted population mortality rates for cancer, kidney disease, and total non-cancer causes. Analysis included multiple linear regressions to adjust for population health risk covariates. Spatial analyses were conducted by applying geographically weighted regression to examine the geographic relationships between releases and mortality. Results Greater non-carcinogenic chemical discharge quantities were associated with significantly higher non-cancer mortality rates, regardless of toxicity weighting or upstream discharge weighting. Cancer mortality was higher in association with carcinogenic discharges only after applying toxicity weights. Kidney disease mortality was related to higher non-carcinogenic discharges only when both applying toxicity weights and including upstream discharges. Effects for kidney mortality and total non-cancer mortality were stronger in rural areas than urban areas. Spatial results show correlations between non-carcinogenic discharges and cancer mortality for much of the contiguous United States, suggesting that chemicals not currently recognized as carcinogens may contribute to cancer mortality risk. The
ERIC Educational Resources Information Center
Gutiérrez-Zornoza, Myriam; Sánchez-López, Mairena; García-Hermoso, Antonio; González-García, Alberto; Chillón, Palma; Martínez-Vizcaíno, Vicente
2015-01-01
Purpose: The aim of this study was to examine (a) whether distance from home to school is a determinant of active commuting to school (ACS), (b) the relationship between distance from home to heavily used facilities (school, green spaces, and sports facilities) and the weight status and cardiometabolic risk categories, and (c) whether ACS has a…
Bloom, Benjamin; Mehta, Ambereen K.; Clark, Jeanne M.; Gudzune, Kimberly A.
2015-01-01
Objective To determine the reliability of Internet-based information on community-based weight-loss programs and grade their degree of concordance with 2013 American Heart Association, American College of Cardiology, and The Obesity Society weight management guidelines. Methods We conducted an online search for weight-loss programs in the Maryland-Washington, DC-Virginia corridor. We performed content analysis to abstract program components from their websites, and then randomly selected 80 programs for a telephone survey to verify this information. We determined reliability of Internet information in comparison with telephone interview responses. Results Of the 191 programs, we graded 1% as high, 8% as moderate, and 91% as low with respect to guideline concordance based on website content. Fifty-two programs participated in the telephone survey (65% response rate). Program intensity, diet, physical activity, and use of behavioral strategies were underreported on websites as compared to description of these activities during phone interview. Within our subsample, we graded 6% of programs as high based on website information, whereas we graded 19% as high after telephone interview. Conclusions Most weight-loss programs in an urban, mid-Atlantic region do not currently offer guideline-concordant practices and fail to disclose key information online, which may make clinician referrals challenging. PMID:26861769
Matheny, M; Strehler, K Y E; King, M; Tümer, N; Scarpace, P J
2014-07-01
The present investigation examined whether leptin stimulation of ventral tegmental area (VTA) or nucleus of the solitary tract (NTS) has a role in body weight homeostasis independent of the medial basal hypothalamus (MBH). To this end, recombinant adeno-associated viral techniques were employed to target leptin overexpression or overexpression of a dominant negative leptin mutant (leptin antagonist). Leptin antagonist overexpression in MBH or VTA increased food intake and body weight to similar extents over 14 days in rats. Simultaneous overexpression of leptin in VTA with antagonist in MBH resulted in food intake and body weight gain that were less than with control treatment but greater than with leptin alone in VTA. Notably, leptin overexpression in VTA increased P-STAT3 in MBH along with VTA, and leptin antagonist overexpression in the VTA partially attenuated P-STAT3 levels in MBH. Interestingly, leptin antagonist overexpression elevated body weight gain, but leptin overexpression in the NTS failed to modulate either food intake or body weight despite increased P-STAT3. These data suggest that leptin function in the VTA participates in the chronic regulation of food consumption and body weight in response to stimulation or blockade of VTA leptin receptors. Moreover, one component of VTA-leptin action appears to be independent of the MBH, and another component appears to be related to leptin receptor-mediated P-STAT3 activation in the MBH. Finally, leptin receptors in the NTS are necessary for normal energy homeostasis, but mostly they appear to have a permissive role. Direct leptin activation of NTS slightly increases UCP1 levels, but has little effect on food consumption or body weight.
States' Average College Tuition.
ERIC Educational Resources Information Center
Eglin, Joseph J., Jr.; And Others
This report presents statistical data on trends in tuition costs from 1980-81 through 1995-96. The average tuition for in-state undergraduate students of 4-year public colleges and universities for academic year 1995-96 was approximately 8.9 percent of median household income. This figure was obtained by dividing the students' average annual…
Zhang, Yan-Lin; Lee, Xin-Qing; Huang, Dai-Kuan; Huang, Rong-Sheng; Jiang, Wei
2009-03-15
Forty rainwater samples were collected at Anshun from June 2007 to October 2007 and analysed for pH, electrical conductivity, major inorganic anions, and soluble low-molecular-weight carboxylic acids. The pH of individual precipitation events ranged from 3.57 to 7.09, and the volume-weighted mean pH was 4.57. The most abundant carboxylic acids were acetic (volume-weighted mean concentration 6.75 micromol x L(-1)) and formic (4.61 micromol x L(-1)), followed by oxalic (2.05 micromol x L(-1)). Concentrations of these three species were comparatively high during summer, especially June and July, implying that organic acids in Anshun may come primarily from emissions from growing vegetation or from products of the photochemical reactions of unsaturated hydrocarbons. Carboxylic acids were estimated to account for 32.2% of the free acidity in precipitation. This contribution was higher than in Guiyang rainwater, indicating that industrial contamination was greater in Guiyang than in Anshun. The significant correlation (p = 0.01) between formic acid and acetic acid suggests that they have similar sources, or different sources of similar intensity. The significant correlation (p = 0.01) between formic acid and oxalic acid shows that the precursors of oxalic acid and formic acid had similar sources. During this period, the overall wet deposition of carboxylic acids was 2.10 mmol/m2, occurring mainly in the summer, when both the concentrations and the contribution to free acidity were relatively high. Consequently, controlling emissions of organic acids in summer would be necessary to reduce the frequency of acid rain in Anshun.
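A volume-weighted mean pH, as reported above, weights hydrogen-ion concentrations (not the pH values themselves) by rainfall amount and converts back to pH at the end. A minimal sketch with made-up numbers:

```python
import math

def volume_weighted_mean_ph(ph_values, volumes_mm):
    """Volume-weighted mean pH: average [H+] = 10**(-pH) weighted by
    rainfall volume, then convert the mean concentration back to pH."""
    h_total = sum(10.0 ** (-ph) * v for ph, v in zip(ph_values, volumes_mm))
    v_total = sum(volumes_mm)
    return -math.log10(h_total / v_total)
```

Because [H+] is averaged rather than pH, acidic high-volume events pull the result below the simple arithmetic mean of the pH values.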
ERIC Educational Resources Information Center
Siegel, Irving H.
The arithmetic processes of aggregation and averaging are basic to quantitative investigations of employment, unemployment, and related concepts. In explaining these concepts, this report stresses need for accuracy and consistency in measurements, and describes tools for analyzing alternative measures. (BH)
Faustino, Laura I; Bulfe, Nardia M L; Pinazo, Martín A; Monteoliva, Silvia E; Graciano, Corina
2013-03-01
Plants of Pinus taeda L. from each of four families were fertilized with nitrogen (N), phosphorus (P) or N + P at planting. The H family had the highest growth in dry mass while the L family had the lowest growth. Measurements of plant hydraulic architecture traits were performed during the first year after planting. Stomatal conductance (gs), water potential at predawn (Ψpredawn) and at midday (Ψmidday), branch hydraulic conductivity (ks and kl) and shoot hydraulic conductance (K) were measured. One year after planting, dry weight partitioning of all aboveground organs was performed. Phosphorus fertilization increased growth in all four families, while N fertilization had a negative effect on growth. L family plants were more negatively affected than H family plants. This negative effect was not due to limitations in N or P uptake because plants from all the families and treatments had the same N and P concentration in the needles. Phosphorus fertilization changed some hydraulic parameters, but those changes did not affect growth. However, the negative effect of N can be explained by changes in hydraulic traits. L family plants had a high leaf dry weight per branch, which was increased by N fertilization. This change occurred together with a decrease in shoot conductance. Therefore, the reduction in gs was not enough to avoid the drop in Ψmidday. Consequently, stomatal closure and the deficient water status of the needles resulted in a reduction in growth. In H family plants, the increase in the number of needles per branch due to N fertilization was counteracted by a reduction in gs and also by a reduction in tracheid lumen size and length. Because of these two changes, Ψmidday did not drop and water availability in the needles was adequate for sustained growth. In conclusion, fertilization affects the hydraulic architecture of plants, and different families develop different strategies. Some of the hydraulic changes can explain the negative effect of N
Threaded average temperature thermocouple
NASA Technical Reports Server (NTRS)
Ward, Stanley W. (Inventor)
1990-01-01
A threaded average temperature thermocouple 11 is provided to measure the average temperature of a test situs of a test material 30. A ceramic insulator rod 15 with two parallel holes 17 and 18 through the length thereof is securely fitted in a cylinder 16, which is bored along the longitudinal axis of symmetry of threaded bolt 12. Threaded bolt 12 is composed of material having thermal properties similar to those of test material 30. Leads of a thermocouple wire 20 leading from a remotely situated temperature sensing device 35 are each fed through one of the holes 17 or 18, secured at head end 13 of ceramic insulator rod 15, and exit at tip end 14. Each lead of thermocouple wire 20 is bent into and secured in an opposite radial groove 25 in tip end 14 of threaded bolt 12. Resulting threaded average temperature thermocouple 11 is ready to be inserted into cylindrical receptacle 32. The tip end 14 of the threaded average temperature thermocouple 11 is in intimate contact with receptacle 32. A jam nut 36 secures the threaded average temperature thermocouple 11 to test material 30.
NASA Astrophysics Data System (ADS)
Karnitz, M. A.; Kornegay, F. C.; McLain, H. A.; Murphy, B. D.; Raridon, R. J.; Shlatter, E. C.
1981-03-01
Annual average SO2 concentrations in air at ground level were determined for a base year (1976) and for a future year (1987) with and without a 2600-MW(t) district heating system. Without district heating, the SO2 concentrations in the area are predicted to increase with time because of anticipated increased substitution of oil for curtailed natural gas. Implementation of the district heating/cogeneration system is predicted to mitigate this increase of SO2 concentrations significantly. Although the total emissions will be slightly higher with district heating/cogeneration because of the substitution of coal for natural gas and oil, use of tall stacks at the cogeneration plants will permit greater dispersion of the SO2 emissions. Considerable overall energy savings, particularly in the form of natural gas and oil, will be realized with a district heating/cogeneration system.
Reznik, Ed; Chaudhary, Osman; Segrè, Daniel
2013-01-01
The Michaelis-Menten equation for an irreversible enzymatic reaction depends linearly on the enzyme concentration. Even if the enzyme concentration changes in time, this linearity implies that the amount of substrate depleted during a given time interval depends only on the average enzyme concentration. Here, we use a time re-scaling approach to generalize this result to a broad category of multi-reaction systems, whose constituent enzymes have the same dependence on time, e.g. they belong to the same regulon. This “average enzyme principle” provides a natural methodology for jointly studying metabolism and its regulation. PMID:23892076
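The "average enzyme principle" can be checked numerically: for irreversible Michaelis-Menten kinetics, integrating dS/dt = -kcat E(t) S / (Km + S) gives Km ln(S0/S) + (S0 - S) = kcat * integral of E(t) dt, so substrate depletion over an interval depends only on the time-averaged enzyme level. The sketch below uses illustrative parameters and a simple Euler integrator, not the authors' time-rescaling machinery:

```python
import math

def integrate_mm(substrate0, enzyme_of_t, kcat, km, t_end, dt=1e-4):
    """Euler integration of irreversible Michaelis-Menten kinetics
    dS/dt = -kcat * E(t) * S / (Km + S)."""
    s, t = substrate0, 0.0
    while t < t_end:
        e = enzyme_of_t(t)
        s -= dt * kcat * e * s / (km + s)
        t += dt
    return s

# An oscillating enzyme profile with the same time average (over whole
# periods) as a constant profile should deplete the same substrate.
e_mean = 1.0
oscillating = lambda t: e_mean * (1.0 + 0.5 * math.sin(2.0 * math.pi * t))
constant = lambda t: e_mean

s_osc = integrate_mm(10.0, oscillating, kcat=2.0, km=1.0, t_end=2.0)
s_const = integrate_mm(10.0, constant, kcat=2.0, km=1.0, t_end=2.0)
```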
Haas, C N; Heller, B
1988-01-01
When plate count methods are used for microbial enumeration, if too-numerous-to-count results occur, they are commonly discarded. In this paper, a method for consideration of such results in computation of an average microbial density is developed, and its use is illustrated by example. PMID:3178211
How to Address Measurement Noise in Bayesian Model Averaging
NASA Astrophysics Data System (ADS)
Schöniger, A.; Wöhling, T.; Nowak, W.
2014-12-01
When confronted with the challenge of selecting one out of several competing conceptual models for a specific modeling task, Bayesian model averaging is a rigorous choice. It ranks the plausibility of models based on Bayes' theorem, which yields an optimal trade-off between performance and complexity. With the resulting posterior model probabilities, their individual predictions are combined into a robust weighted average and the overall predictive uncertainty (including conceptual uncertainty) can be quantified. This rigorous framework does, however, not yet explicitly consider statistical significance of measurement noise in the calibration data set. This is a major drawback, because model weights might be unstable due to the uncertainty in noisy data, which may compromise the reliability of model ranking. We present a new extension to the Bayesian model averaging framework that explicitly accounts for measurement noise as a source of uncertainty for the weights. This enables modelers to assess the reliability of model ranking for a specific application and a given calibration data set. Also, the impact of measurement noise on the overall prediction uncertainty can be determined. Technically, our extension is built within a Monte Carlo framework. We repeatedly perturb the observed data with random realizations of measurement error. Then, we determine the robustness of the resulting model weights against measurement noise. We quantify the variability of posterior model weights as weighting variance. We add this new variance term to the overall prediction uncertainty analysis within the Bayesian model averaging framework to make uncertainty quantification more realistic and "complete". We illustrate the importance of our suggested extension with an application to soil-plant model selection, based on studies by Wöhling et al. (2013, 2014). Results confirm that noise in leaf area index or evaporation rate observations produces a significant amount of weighting
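The proposed extension can be sketched as a small Monte Carlo loop: perturb the calibration data with random measurement error, recompute the posterior model weights each time, and report the variance of the weights across realizations. Gaussian likelihoods, equal priors, and all numbers below are illustrative assumptions, not the authors' setup:

```python
import math
import random

def bma_weights(predictions, data, sigma):
    """Posterior model weights from Gaussian likelihoods, equal priors.
    Log-likelihoods are shifted by their maximum for numerical stability."""
    logl = [-sum((p - d) ** 2 for p, d in zip(pred, data)) / (2.0 * sigma ** 2)
            for pred in predictions]
    m = max(logl)
    w = [math.exp(l - m) for l in logl]
    s = sum(w)
    return [x / s for x in w]

def weighting_variance(predictions, data, sigma, n_mc=2000, seed=1):
    """Monte Carlo estimate of weight variability: repeatedly perturb the
    calibration data with measurement noise and recompute the weights."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_mc):
        noisy = [d + rng.gauss(0.0, sigma) for d in data]
        samples.append(bma_weights(predictions, noisy, sigma))
    n_models = len(predictions)
    means = [sum(s[i] for s in samples) / n_mc for i in range(n_models)]
    variances = [sum((s[i] - means[i]) ** 2 for s in samples) / n_mc
                 for i in range(n_models)]
    return means, variances
```

A large weighting variance signals that the model ranking is not robust against the noise level assumed for the data.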
Hammoud, Ahmad O; Carrell, Douglas T; Gibson, Mark; Matthew Peterson, C; Wayne Meikle, A
2012-01-01
Obesity has a negative effect on male reproductive function. It is associated with low testosterone levels and alteration in gonadotropin secretion. Male obesity has been linked to reduced male fertility. Data regarding the relation of obesity to sperm parameters are conflicting in terms of the nature and magnitude of the effect. New areas of interest are emerging that can help explain the variation in study results, such as genetic polymorphism and sleep apnea. Sleep disorders have been linked to altered testosterone production and hypogonadism in men. It was also correlated to erectile dysfunction. The relation of sleep disorders to male fertility and sperm parameters remains to be investigated. Men with hypogonadism and infertility should be screened for sleep apnea. Treatment of obesity and sleep apnea improves testosterone levels and erectile function. PMID:22138900
Peakedness of Weighted Averages of Jointly Distributed Random Variables.
1985-10-01
NASA Astrophysics Data System (ADS)
Zavala, M.; Herndon, S. C.; Slott, R. S.; Dunlea, E. J.; Marr, L. C.; Shorter, J. H.; Zahniser, M.; Knighton, W. B.; Rogers, T. M.; Kolb, C. E.; Molina, L. T.; Molina, M. J.
2006-11-01
A mobile laboratory was used to measure on-road vehicle emission ratios during the MCMA-2003 field campaign held during the spring of 2003 in the Mexico City Metropolitan Area (MCMA). The measured emission ratios represent a sample of emissions of in-use vehicles under real-world driving conditions for the MCMA. From the relative amounts of NOx and selected VOCs sampled, the results indicate that the technique is capable of differentiating among vehicle categories and fuel types in real-world driving conditions. Emission ratios for NOx, NOy, NH3, H2CO, CH3CHO, and other selected volatile organic compounds (VOCs) are presented for chase-sampled vehicles in the form of frequency distributions, as well as estimates of the fleet-averaged emissions. Our measurements of emission ratios for both CNG- and gasoline-powered "colectivos" (public transportation buses that are intensively used in the MCMA) indicate that, on a mole-per-mole basis, these vehicles have significantly larger NOx and aldehyde emission ratios than other sampled vehicles in the MCMA. Similarly, ratios of selected VOCs to NOy showed a strong dependence on traffic mode. These results are compared with the vehicle emissions inventory for the MCMA, other vehicle emissions measurements in the MCMA, and measurements of on-road emissions in U.S. cities. We estimate NOx emissions as 100,600 ± 29,200 metric tons per year for light-duty gasoline vehicles in the MCMA for 2003. According to these results, annual NOx emissions estimated in the emissions inventory for this category are within the range of our estimated annual NOx emissions. Our estimates for motor vehicle emissions of benzene, toluene, formaldehyde, and acetaldehyde in the MCMA indicate these species are present in concentrations higher than previously reported. The high motor vehicle aldehyde emissions may have an impact on the photochemistry of urban areas.
Pearlman, David A; Rao, B Govinda; Charifson, Paul
2008-05-15
We demonstrate a new approach to the development of scoring functions through the formulation and parameterization of a new function, which can be used both for rapidly ranking the binding of ligands to proteins and for estimating relative aqueous molecular solubilities. The intent of this work is to introduce a new paradigm for creation of scoring functions, wherein we impose the following criteria upon the function: (1) simple; (2) intuitive; (3) requires no postparameterization tweaking; (4) can be applied (without reparameterization) to multiple target systems; and (5) can be rapidly evaluated for any potential ligand. Following these criteria, a new function, FURSMASA (function for rapid scoring using an MD-averaged grid and the accessible surface area), has been developed. Three novel features of the function include: (1) use of an MD-averaged potential energy grid for ligand-protein interactions, rather than a simple static grid; (2) inclusion of a term that depends on changes in the solvent-accessible surface area on an atomic (not molecular) basis; and (3) use of the recently derived predictive index (PI) target when optimizing the function, which focuses the function on its intended purpose of relative ranking. A genetic algorithm is used to optimize the function against test data sets that include ligands for the following proteins: IMPDH, p38, gyrase B, HIV-1, and TACE, as well as the Syracuse Research solubility database. We find that the function is predictive, and can simultaneously fit all the test data sets with cross-validated predictive indices ranging from 0.68 to 0.82. As a test of the ability of this function to predict binding for systems not in the training set, the resulting fitted FURSMASA function is then applied to 23 ligands of the COX-2 enzyme. Comparing the results for COX-2 against those obtained using a variety of well-known rapid scoring functions demonstrates that FURSMASA outperforms all of them in terms of the PI and
Americans' Average Radiation Exposure
NA
2000-08-11
We live with radiation every day. We receive radiation exposures from cosmic rays, from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and from our own bodies. We also are exposed to man-made sources of radiation, including medical and dental treatments, television sets and emission from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body.
Temperature averaging thermal probe
NASA Technical Reports Server (NTRS)
Kalil, L. F.; Reinhardt, V. (Inventor)
1985-01-01
A thermal probe to average temperature fluctuations over a prolonged period was formed with a temperature sensor embedded inside a solid object of a thermally conducting material. The solid object is held in a position equidistantly spaced apart from the interior surfaces of a closed housing by a mount made of a thermally insulating material. The housing is sealed to trap a vacuum or mass of air inside and thereby prevent transfer of heat directly between the environment outside of the housing and the solid object. Electrical leads couple the temperature sensor with a connector on the outside of the housing. Other solid objects of different sizes and materials may be substituted for the cylindrically-shaped object to vary the time constant of the probe.
Yu, Xianyu; Wang, Yi; Niu, Ruiqing; Hu, Youjian
2016-01-01
In this study, a novel coupling model for landslide susceptibility mapping is presented. In practice, environmental factors may have different impacts at a local scale in study areas. To provide better predictions, a geographically weighted regression (GWR) technique is firstly used in our method to segment study areas into a series of prediction regions with appropriate sizes. Meanwhile, a support vector machine (SVM) classifier is exploited in each prediction region for landslide susceptibility mapping. To further improve the prediction performance, the particle swarm optimization (PSO) algorithm is used in the prediction regions to obtain optimal parameters for the SVM classifier. To evaluate the prediction performance of our model, several SVM-based prediction models are utilized for comparison on a study area of the Wanzhou district in the Three Gorges Reservoir. Experimental results, based on three objective quantitative measures and visual qualitative evaluation, indicate that our model can achieve better prediction accuracies and is more effective for landslide susceptibility mapping. For instance, our model can achieve an overall prediction accuracy of 91.10%, which is 7.8%–19.1% higher than the traditional SVM-based models. In addition, the obtained landslide susceptibility map by our model can demonstrate an intensive correlation between the classified very high-susceptibility zone and the previously investigated landslides. PMID:27187430
Hakkinen, P.J.; Kelling, C.K.; Callender, J.C.
1991-02-01
A thorough understanding of the routes and magnitudes of chemical exposures that consumers experience during the use of a household product is needed as part of a well-founded risk assessment for that product and its components. This review describes some sources of generic consumer data (e.g., relevant body weight or total body surface area for a given human age) and of exposure-related data (e.g., task frequency and duration) for specific product types needed for exposure assessments. The review also contains a discussion of the importance of statistical characterization of the consumer data (e.g., does its range follow a normal, log-normal, or other type of distribution?). The importance of examining these data for correlative interactions is emphasized.
2013-01-01
Background The Taiwan area comprises the main island of Taiwan and several small islands located off the coast of southern China. The eastern two-thirds of Taiwan are characterized by rugged mountains covered with tropical and subtropical vegetation. The western region of Taiwan is characterized by flat or gently rolling plains. Geographically, the Taiwan area is diverse in ecology and environment, and scrub typhus threatens local human populations. In this study, we investigate the effects of seasonal and meteorological factors on the incidence of scrub typhus infection among 10 local climate regions. The correlation between the spatial distribution of scrub typhus and cultivated forests in Taiwan, as well as the relationship between scrub typhus incidence and the population density of farm workers, is examined. Methods We applied Pearson’s product moment correlation to calculate the correlation between the incidence of scrub typhus and meteorological factors among 10 local climate regions. We used the geographically weighted regression (GWR) method, a type of spatial regression that generates parameters disaggregated by the spatial units of analysis, to detail and map each regression point for the response variable of the district-level standardized incidence ratio (SIR) of scrub typhus. We also applied the GWR to examine the explanatory variables of types of forest-land use and farm worker density in Taiwan in 2005. Results In the Taiwan area, scrub typhus endemic areas are located in the southeastern regions and mountainous townships of Taiwan, as well as the Pescadores, Kinmen, and Matsu Islands. Among these islands and low-incidence areas in the central western and southwestern regions of Taiwan, we observed a significant correlation between scrub typhus incidence and surface temperature. No similar significant correlation was found in the endemic areas (e.g., the southeastern region and the mountainous area of Taiwan). Precipitation correlates positively
Mleczek, Mirosław; Magdziak, Zuzanna; Gąsecka, Monika; Niedzielski, Przemysław; Kalač, Pavel; Siwulski, Marek; Rzymski, Piotr; Zalicka, Sylwia; Sobieralski, Krzysztof
2016-10-01
The aim of the study was (i) to investigate the potential of the edible mushroom Boletus badius (Fr.) Fr. to accumulate 53 elements from unpolluted acidic sandy soil and polluted alkaline flotation tailing sites in Poland, (ii) to estimate the low-molecular-weight organic acid (LMWOA) profile and contents in fruit bodies, and finally (iii) to explore the possible relationship between element and LMWOA contents in mushrooms. The content of most elements in fruiting bodies collected from the flotation tailings was significantly higher than in mushrooms from the unpolluted soils. The contents of the elements determined in fruiting bodies of B. badius varied widely (from 0.01 mg kg(-1) for Eu, Lu, and Te up to 18,932 mg kg(-1) for K). The results established the high importance of element contents in the substrate. Of ten organic acids, nine were found, over a wide range: from below 0.01 mg kg(-1) for fumaric acid to 14.8 mg g(-1) for lactic acid. Lactic and succinic acids were dominant in both areas, and citric acid content was also high in the polluted area. The correlation between element contents and the individual and total contents of LMWOAs was confirmed.
Dissociating Averageness and Attractiveness: Attractive Faces Are Not Always Average
ERIC Educational Resources Information Center
DeBruine, Lisa M.; Jones, Benedict C.; Unger, Layla; Little, Anthony C.; Feinberg, David R.
2007-01-01
Although the averageness hypothesis of facial attractiveness proposes that the attractiveness of faces is mostly a consequence of their averageness, 1 study has shown that caricaturing highly attractive faces makes them mathematically less average but more attractive. Here the authors systematically test the averageness hypothesis in 5 experiments…
Gong, Lunli; Zhou, Xiao; Wu, Yaohao; Zhang, Yun; Wang, Chen; Zhou, Heng; Guo, Fangfang
2014-01-01
The present study was designed to investigate the possibility of repairing full-thickness defects in the porcine articular cartilage (AC) weight-bearing area using chondrogenically differentiated autologous adipose-derived stem cells (ASCs), with a follow-up of 3 and 6 months, successive to our previous study on the nonweight-bearing area. The isolated ASCs were seeded onto phosphoglycerate/polylactic acid (PGA/PLA) scaffolds with chondrogenic induction in vitro for 2 weeks as the experimental group prior to implantation in porcine AC defects (8 mm in diameter, deep to subchondral bone), with PGA/PLA only as control. At both the 3- and 6-month follow-ups, the neo-cartilage integrated well with the neighboring normal cartilage and subchondral bone histologically in the experimental group, whereas only fibrous tissue formed in the control group. Immunohistochemical and toluidine blue staining confirmed a distribution of COL II and glycosaminoglycan in the regenerated cartilage similar to that of the native one. A vivid remodeling process with repair time was also witnessed in the neo-cartilage, as the compressive modulus significantly increased from 70% of the normal cartilage at 3 months to nearly 90% at 6 months, similar to our former research. Nevertheless, differences between the regenerated cartilage and the native one could still be detected. Meanwhile, the exact mechanism involved in chondrogenic differentiation of ASCs seeded on PGA/PLA is still unknown. Therefore, proteomic analysis was employed, identifying 43 differentially expressed proteins from 20 chosen two-dimensional spots, which helps us further our research on certain committed factors. In conclusion, the proteomic comparison provided a thorough understanding of the mechanisms implicated in ASC differentiation toward chondrocytes, which is further substantiated by the present study as a complement to the former one in the nonweight-bearing area. PMID:24044689
Combining remotely sensed and other measurements for hydrologic areal averages
NASA Technical Reports Server (NTRS)
Johnson, E. R.; Peck, E. L.; Keefer, T. N.
1982-01-01
A method is described for combining measurements of hydrologic variables of various sampling geometries and measurement accuracies to produce an estimated mean areal value over a watershed and a measure of the accuracy of the mean areal value. The method provides a means to integrate measurements from conventional hydrological networks and remote sensing. The resulting areal averages can be used to enhance a wide variety of hydrological applications including basin modeling. The correlation area method assigns weights to each available measurement (point, line, or areal) based on the area of the basin most accurately represented by the measurement. The statistical characteristics of the accuracy of the various measurement technologies and of the random fields of the hydrologic variables used in the study (water equivalent of the snow cover and soil moisture) required to implement the method are discussed.
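The area-weighting idea behind the correlation area method can be illustrated with a minimal sketch. Note that the actual method assigns weights statistically from the accuracy characteristics of each measurement technology; the simple area-fraction weighting, the values, and the areas below are hypothetical placeholders.

```python
# Hedged sketch: combine point, line, and areal measurements of a hydrologic
# variable into a mean areal estimate by weighting each measurement with the
# basin area it best represents (all numbers are illustrative, not from the paper).

def mean_areal_value(measurements):
    """measurements: list of (value, represented_area_km2) pairs."""
    total_area = sum(area for _, area in measurements)
    return sum(value * area for value, area in measurements) / total_area

# Snow water equivalent (mm) from a gauge, a snow course, and an airborne swath.
obs = [(120.0, 50.0), (135.0, 200.0), (128.0, 750.0)]
swe = mean_areal_value(obs)  # area-weighted mean over the basin
```

The airborne swath dominates the estimate here simply because it represents the largest share of the basin area.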
Chen, Xue-shuang; Jiang, Tao; Lu, Song; Wei, Shi-qiang; Wang, Ding-yong; Yan, Jin-long
2016-03-15
The study of the molecular weight (MW) fractions of dissolved organic matter (DOM) in the aquatic environment is of interest because size plays an important role in determining the biogeochemical characteristics of DOM. Thus, using an ultrafiltration (UF) technique combined with three-dimensional fluorescence spectroscopy, DOM samples from four sampling sites in typical water-level fluctuation zones of the Three Gorges Reservoir area were selected to investigate the differences in properties and sources of different DOM MW fractions. The results showed that in these areas the distribution of MW fractions was highly dispersive, but approximately equal contributions to the total DOC concentration were found from the colloidal (Mr 1 x 10³-0.22 µm) and truly dissolved (Mr < 1 x 10³) fractions. Four fluorescence signals (humic-like A and C; protein-like B and T) were observed in all MW fractions, including bulk DOM, and showed the same distribution trend: truly dissolved > low MW (Mr 1 x 10³-10 x 10³) > medium MW (Mr 10 x 10³-30 x 10³) > high MW (Mr 30 x 10³-0.22 µm). Additionally, with decreasing MW fraction, the fluorescence index (FI) and freshness index (BIX) increased, suggesting enhanced signals of autochthonous inputs, whereas the humification index (HIX) decreased, indicating a lower humification degree. This strongly suggested that terrestrial input mainly affected the composition and properties of the higher MW fractions of DOM, whereas the lower MW and truly dissolved fractions were controlled by autochthonous sources such as microbial and algal activities instead of allochthonous sources. Meanwhile, the different riparian land-use types also clearly affected the characteristics of DOM. Therefore, higher diversity of land-use types, and thus higher complexity of ecosystems and landscapes, induced higher heterogeneity of fluorescence components in the different MW fractions of DOM.
Córdova-Palomera, Aldo; Fatjó-Vilas, Mar; Falcón, Carles; Bargalló, Nuria; Alemany, Silvia; Crespo-Facorro, Benedicto; Nenadic, Igor; Fañanás, Lourdes
2015-01-01
Background Previous research suggests that low birth weight (BW) induces reduced brain cortical surface area (SA) which would persist until at least early adulthood. Moreover, low BW has been linked to psychiatric disorders such as depression and psychological distress, and to altered neurocognitive profiles. Aims We present novel findings obtained by analysing high-resolution structural MRI scans of 48 twins; specifically, we aimed: i) to test the BW-SA association in a middle-aged adult sample; and ii) to assess whether either depression/anxiety disorders or intellectual quotient (IQ) influence the BW-SA link, using a monozygotic (MZ) twin design to separate environmental and genetic effects. Results Both lower BW and decreased IQ were associated with smaller total and regional cortical SA in adulthood. Within a twin pair, lower BW was related to smaller total cortical and regional SA. In contrast, MZ twin differences in SA were not related to differences in either IQ or depression/anxiety disorders. Conclusion The present study supports findings indicating that i) BW has a long-lasting effect on cortical SA, where some familial and environmental influences alter both foetal growth and brain morphology; ii) uniquely environmental factors affecting BW also alter SA; iii) higher IQ correlates with larger SA; and iv) these effects are not modified by internalizing psychopathology. PMID:26086820
Wang, Tingting; Li, Wenhua; Wu, Xiangru; Yin, Bing; Chu, Caiting; Ding, Ming; Cui, Yanfen
2016-01-01
Objective To assess the added value of diffusion-weighted magnetic resonance imaging (DWI) with apparent diffusion coefficient (ADC) values compared to MRI, for characterizing the tubo-ovarian abscesses (TOA) mimicking ovarian malignancy. Materials and Methods Patients with TOA (or ovarian abscess alone; n = 34) or ovarian malignancy (n = 35) who underwent DWI and MRI were retrospectively reviewed. The signal intensity of cystic and solid component of TOAs and ovarian malignant tumors on DWI and the corresponding ADC values were evaluated, as well as clinical characteristics, morphological features, MRI findings were comparatively analyzed. Receiver operating characteristic (ROC) curve analysis based on logistic regression was applied to identify different imaging characteristics between the two patient groups and assess the predictive value of combination diagnosis with area under the curve (AUC) analysis. Results The mean ADC value of the cystic component in TOA was significantly lower than in malignant tumors (1.04 ± 0.41 × 10−3 mm2/s vs. 2.42 ± 0.38 × 10−3 mm2/s; p < 0.001). The mean ADC value of the enhanced solid component in 26 TOAs was 1.43 ± 0.16 × 10−3 mm2/s, and 46.2% (12 TOAs; pseudotumor areas) showed significantly higher signal intensity on DW-MRI than in ovarian malignancy (mean ADC value 1.44 ± 0.20 × 10−3 mm2/s vs. 1.18 ± 0.36 × 10−3 mm2/s; p = 0.043). The combination diagnosis of ADC value and dilated tubal structure achieved the best AUC of 0.996. Sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy of MRI vs. DWI with ADC values for predicting TOA were 47.1%, 91.4%, 84.2%, 64%, and 69.6% vs. 100%, 97.1%, 97.1%, 100%, and 98.6%, respectively. Conclusions DW-MRI is superior to MRI in the assessment of TOA mimicking ovarian malignancy, and the ADC values aid in discriminating the pseudotumor area of TOA from the solid portion of ovarian malignancy. PMID:26894926
Cosmological ensemble and directional averages of observables
Bonvin, Camille; Clarkson, Chris; Durrer, Ruth; Maartens, Roy; Umeh, Obinna
2015-07-01
We show that at second order, ensemble averages of observables and directional averages do not commute due to gravitational lensing—observing the same thing in many directions over the sky is not the same as taking an ensemble average. In principle this non-commutativity is significant for a variety of quantities that we often use as observables and can lead to a bias in parameter estimation. We derive the relation between the ensemble average and the directional average of an observable, at second order in perturbation theory. We discuss the relevance of these two types of averages for making predictions of cosmological observables, focusing on observables related to distances and magnitudes. In particular, we show that the ensemble average of the distance in a given observed direction is increased by gravitational lensing, whereas the directional average of the distance is decreased. For a generic observable, there exists a particular function of the observable that is not affected by second-order lensing perturbations. We also show that standard areas have an advantage over standard rulers, and we discuss the subtleties involved in averaging in the case of supernova observations.
Spatial limitations in averaging social cues
Florey, Joseph; Clifford, Colin W. G.; Dakin, Steven; Mareschal, Isabelle
2016-01-01
The direction of social attention from groups provides stronger cueing than from an individual. It has previously been shown that both basic visual features such as size or orientation and more complex features such as face emotion and identity can be averaged across multiple elements. Here we used an equivalent noise procedure to compare observers’ ability to average social cues with their averaging of a non-social cue. Estimates of observers’ internal noise (uncertainty associated with processing any individual) and sample-size (the effective number of gaze-directions pooled) were derived by fitting equivalent noise functions to discrimination thresholds. We also used reverse correlation analysis to estimate the spatial distribution of samples used by participants. Averaging of head-rotation and cone-rotation was less noisy and more efficient than averaging of gaze direction, though presenting only the eye region of faces at a larger size improved gaze averaging performance. The reverse correlation analysis revealed greater sampling areas for head rotation compared to gaze. We attribute these differences in averaging between gaze and head cues to poorer visual processing of faces in the periphery. The similarity between head and cone averaging are examined within the framework of a general mechanism for averaging of object rotation. PMID:27573589
NASA Astrophysics Data System (ADS)
Hermance, J. F.; Jacob, R. W.; Bradley, B. A.; Mustard, J. F.
2005-12-01
In studying vegetation patterns remotely, the objective is to draw inferences on the development of specific or general land surface phenology (LSP) as a function of space and time by determining the behavior of a parameter (in our case NDVI), when the parameter estimate may be biased by noise, data dropouts and obfuscations from atmospheric and other effects. We describe the underpinning concepts of a procedure for a robust interpolation of NDVI data that does not have the limitations of other mathematical approaches which require orthonormal basis functions (e.g. Fourier analysis). In this approach, data need not be uniformly sampled in time, nor do we expect noise to be Gaussian-distributed. Our approach is intuitive and straightforward, and is applied here to the refined modeling of LSP using 7 years of weekly and biweekly AVHRR NDVI data for a 150 x 150 km study area in central Nevada. This site is a microcosm of a broad range of vegetation classes, from irrigated agriculture with annual NDVI values of up to 0.7 to playas and alkali salt flats with annual NDVI values of only 0.07. Our procedure involves a form of parameter estimation employing Bayesian statistics. In utilitarian terms, the latter procedure is a method of statistical analysis (in our case, robustified, weighted least-squares recursive curve-fitting) that incorporates a variety of prior knowledge when forming current estimates of a particular process or parameter. In addition to the standard Bayesian approach, we account for outliers due to data dropouts or obfuscations because of clouds and snow cover. An initial "starting model" for the average annual cycle and long term (7 year) trend is determined by jointly fitting a common set of complex annual harmonics and a low order polynomial to an entire multi-year time series in one step. This is not a formal Fourier series in the conventional sense, but rather a set of 4 cosine and 4 sine coefficients with fundamental periods of 12, 6, 3 and 1
Model averaging and muddled multimodel inferences
Cade, Brian S.
2015-01-01
Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales, therefore averaging them makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effects size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. The standardized estimates or equivalently the
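The Akaike weights underlying the practices critiqued above follow a standard formula: w_i = exp(-d_i/2) / sum_j exp(-d_j/2), where d_i is the AIC difference from the best model. A minimal sketch with toy AIC values (the values are illustrative, not from the example analyzed in the paper):

```python
import math

def akaike_weights(aics):
    """Akaike weights w_i = exp(-d_i/2) / sum_j exp(-d_j/2), d_i = AIC_i - min(AIC)."""
    d = [a - min(aics) for a in aics]
    raw = [math.exp(-di / 2.0) for di in d]
    total = sum(raw)
    return [r / total for r in raw]

weights = akaike_weights([100.0, 102.0, 110.0])
# The best model (AIC = 100) receives the largest weight.
```

As the abstract argues, summing such weights over models containing a given predictor measures the relative importance of models, not of the predictor itself.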
Hou, T J; Xu, X J
2003-01-01
A novel method for the calculations of 1-octanol/water partition coefficient (log P) of organic molecules has been presented here. The method, SLOGP v1.0, estimates the log P values by summing the contribution of atom-weighted solvent accessible surface areas (SASA) and correction factors. Altogether 100 atom/group types were used to classify atoms with different chemical environments, and two correlation factors were used to consider the intermolecular hydrophobic interactions and intramolecular hydrogen bonds. Coefficient values for 100 atom/group and two correction factors have been derived from a training set of 1850 compounds. The parametrization procedure for different kinds of atoms was performed as follows: first, the atoms in a molecule were defined to different atom/group types based on SMARTS language, and the correction factors were determined by substructure searching; then, SASA for each atom/group type was calculated and added; finally, multivariate linear regression analysis was applied to optimize the hydrophobic parameters for different atom/group types and correction factors in order to reproduce the experimental log P. The correlation based on the training set gives a model with the correlation coefficient (r) of 0.988, the standard deviation (SD) of 0.368 log units, and the absolute unsigned mean error of 0.261. Comparison of various procedures of log P calculations for the external test set of 138 organic compounds demonstrates that our method bears very good accuracy and is comparable or even better than the fragment-based approaches. Moreover, the atom-additive approach based on SASA was compared with the simple atom-additive approach based on the number of atoms. The calculated results show that the atom-additive approach based on SASA gives better predictions than the simple atom-additive one. Due to the connection between the molecular conformation and the molecular surface areas, the atom-additive model based on SASA may be a more
76 FR 13580 - Bus Testing; Calculation of Average Passenger Weight and Test Vehicle Weight
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-14
... impact on a substantial number of small entities. ``Small entities'' include small businesses, not-for... Federal Register (74 FR 51083) that incorporated brake performance and emissions tests into FTA's bus... the actual performance of these buses in real-life service, particularly during rush hour...
NASA Technical Reports Server (NTRS)
Chen, Fei; Yates, David; LeMone, Margaret
2001-01-01
To understand the effects of land-surface heterogeneity and the interactions between the land-surface and the planetary boundary layer at different scales, we develop a multiscale data set. This data set, based on the Cooperative Atmosphere-Surface Exchange Study (CASES97) observations, includes atmospheric, surface, and sub-surface observations obtained from a dense observation network covering a large region on the order of 100 km. We use this data set to drive three land-surface models (LSMs) to generate multi-scale (with three resolutions of 1, 5, and 10 kilometers) gridded surface heat flux maps for the CASES area. Upon validating these flux maps with measurements from surface station and aircraft, we utilize them to investigate several approaches for estimating the area-integrated surface heat flux for the CASES97 domain of 71x74 square kilometers, which is crucial for land surface model development/validation and area water and energy budget studies. This research is aimed at understanding the relative contribution of random turbulence versus organized mesoscale circulations to the area-integrated surface flux at the scale of 100 kilometers, and identifying the most important effective parameters for characterizing the subgrid-scale variability for large-scale atmosphere-hydrology models.
Averaging Models: Parameters Estimation with the R-Average Procedure
ERIC Educational Resources Information Center
Vidotto, G.; Massidda, D.; Noventa, S.
2010-01-01
The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto &…
NASA Astrophysics Data System (ADS)
Du, Wen-Bo; Cao, Xian-Bin; Zhao, Lin; Zhou, Hong
2009-05-01
We investigate the evolutionary prisoner's dilemma game (PDG) on weighted Newman-Watts (NW) networks. In weighted NW networks, the link weight wij is assigned to the link between nodes i and j as wij = (κi · κj)^β, where κi (κj) is the degree of node i (j) and β represents the strength of the correlations. Thus, the link weights can be tuned by a single parameter β. We focus on the cooperative behavior and wealth distribution in the system. Simulation results show that the cooperator frequency is promoted over a large range of β and that there is a minimal cooperation frequency around β = -1. Moreover, we also employ the Gini coefficient to study the wealth distribution in the population. Numerical results show that the Gini coefficient reaches its minimum when β ≈ -1. Our work may be helpful in understanding the emergence of cooperation and unequal wealth distribution in society.
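The link-weight rule wij = (κi · κj)^β and the Gini coefficient used to summarize wealth can be sketched in isolation (toy degrees and a toy wealth vector; this is not a full PDG simulation):

```python
def link_weight(k_i, k_j, beta):
    """w_ij = (k_i * k_j) ** beta; beta tunes the degree-correlation strength."""
    return (k_i * k_j) ** beta

def gini(wealth):
    """Gini coefficient via the mean-absolute-difference formula."""
    n = len(wealth)
    mean = sum(wealth) / n
    mad = sum(abs(x - y) for x in wealth for y in wealth) / (n * n)
    return mad / (2.0 * mean)

w = link_weight(4, 6, -1.0)          # hub-hub links are down-weighted when beta < 0
g_equal = gini([1.0, 1.0, 1.0, 1.0])  # 0.0 for a perfectly equal population
```

With beta negative, links between high-degree nodes get small weights, which is how a single parameter reshapes both cooperation and the resulting wealth inequality in such models.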
How Young Is Standard Average European?
ERIC Educational Resources Information Center
Haspelmath, Martin
1998-01-01
An analysis of Standard Average European, a European linguistic area, looks at 11 of its features (definite, indefinite articles, have-perfect, participial passive, antiaccusative prominence, nominative experiencers, dative external possessors, negation/negative pronouns, particle comparatives, A-and-B conjunction, relative clauses, verb fronting…
Weighted Automata and Weighted Logics
NASA Astrophysics Data System (ADS)
Droste, Manfred; Gastin, Paul
In automata theory, a fundamental result of Büchi and Elgot states that the recognizable languages are precisely the ones definable by sentences of monadic second order logic. We will present a generalization of this result to the context of weighted automata. We develop syntax and semantics of a quantitative logic; like the behaviors of weighted automata, the semantics of sentences of our logic are formal power series describing ‘how often’ the sentence is true for a given word. Our main result shows that if the weights are taken in an arbitrary semiring, then the behaviors of weighted automata are precisely the series definable by sentences of our quantitative logic. We achieve a similar characterization for weighted Büchi automata acting on infinite words, if the underlying semiring satisfies suitable completeness assumptions. Moreover, if the semiring is additively locally finite or locally finite, then natural extensions of our weighted logic still have the same expressive power as weighted automata.
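The semantics described above (a formal power series assigning each word the sum, over all paths, of the products of its transition weights) can be illustrated over the semiring (R, +, ×). The two-state automaton below, which counts occurrences of the symbol 'a', is a standard example rather than code from this chapter:

```python
# Hedged sketch: behavior of a weighted automaton over the real semiring (+, *).
# States are 0..n-1; trans[(p, a, q)] is the weight of reading symbol a from p to q.

def behavior(word, n_states, init, final, trans):
    """Sum over all paths of init weight * product of transition weights * final weight."""
    vec = list(init)                      # current weight of reaching each state
    for a in word:
        nxt = [0.0] * n_states
        for p in range(n_states):
            if vec[p]:
                for q in range(n_states):
                    nxt[q] += vec[p] * trans.get((p, a, q), 0.0)
        vec = nxt
    return sum(vec[q] * final[q] for q in range(n_states))

# Two-state automaton counting 'a's: stay in state 0, nondeterministically mark
# one 'a' by moving to state 1, then stay in state 1; every path has weight 1.
trans = {}
for s in "ab":
    trans[(0, s, 0)] = 1.0
    trans[(1, s, 1)] = 1.0
trans[(0, "a", 1)] = 1.0
count = behavior("abaa", 2, [1.0, 0.0], [0.0, 1.0], trans)  # number of 'a's
```

Each accepting path marks exactly one occurrence of 'a', so summing path weights yields the count, matching the 'how often' reading of the semantics.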
Interpreting Bivariate Regression Coefficients: Going beyond the Average
ERIC Educational Resources Information Center
Halcoussis, Dennis; Phillips, G. Michael
2010-01-01
Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…
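The note's central observation, that each average is the fitted coefficient in a regression on a constant (applied to a suitably transformed response), can be sketched as follows; the data values are made up:

```python
import numpy as np

def fit_constant(y, w=None):
    """Weighted least squares of y on a constant; the coefficient is the weighted mean."""
    y = np.asarray(y, float)
    w = np.ones_like(y) if w is None else np.asarray(w, float)
    X = np.ones((len(y), 1))
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta[0]

y = np.array([2.0, 8.0])
arith = fit_constant(y)                    # arithmetic mean: 5.0
geom = np.exp(fit_constant(np.log(y)))     # geometric mean: 4.0
harm = 1.0 / fit_constant(1.0 / y)         # harmonic mean: 3.2
wavg = fit_constant(y, w=[3.0, 1.0])       # weighted mean (3:1 weights): 3.5
```

Regressing the log or the reciprocal of the data on a constant and back-transforming recovers the geometric and harmonic means, respectively, which is the unifying framework the note describes.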
42 CFR 423.279 - National average monthly bid amount.
Code of Federal Regulations, 2011 CFR
2011-10-01
... bid amounts for each prescription drug plan (not including fallbacks) and for each MA-PD plan...(h) of the Act. (b) Calculation of weighted average. (1) The national average monthly bid amount is a....258(c)(1) of this chapter) and the denominator equal to the total number of Part D...
NASA Technical Reports Server (NTRS)
Moore, R. D.; Urasek, D. C.; Kovich, G.
1973-01-01
The overall and blade-element performances are presented over the stable flow operating range from 50 to 100 percent of design speed. Stage peak efficiency of 0.834 was obtained at a weight flow of 26.4 kg/sec (58.3 lb/sec) and a pressure ratio of 1.581. The stall margin for the stage was 7.5 percent based on weight flow and pressure ratio at stall and peak efficiency conditions. The rotor minimum losses were approximately equal to design except in the blade vibration damper region. Stator minimum losses were less than design except in the tip and damper regions.
High average power pockels cell
Daly, Thomas P.
1991-01-01
A high average power Pockels cell is disclosed which reduces the effect of thermally induced strains in high average power laser technology. The Pockels cell includes an elongated, substantially rectangular crystalline structure formed from a KDP-type material to eliminate shear strains. The X- and Y-axes are oriented substantially perpendicular to the edges of the crystal cross-section and to the C-axis direction of propagation to eliminate shear strains.
Homelessness prevention in New York City: On average, it works.
Goodman, Sarena; Messeri, Peter; O'Flaherty, Brendan
2016-03-01
This study evaluates the community impact of the first four years of Homebase, a homelessness prevention program in New York City. Family shelter entries decreased on average in the neighborhoods in which Homebase was operating. Homebase effects appear to be heterogeneous, and so different kinds of averages imply different-sized effects. The (geometric) average decrease in shelter entries was about 5% when census tracts are weighted equally, and 11% when community districts (which are much larger) are weighted equally. This study also examines the effect of foreclosures. Foreclosures are associated with more shelter entries in neighborhoods that usually do not send large numbers of families to the shelter system.
Homelessness prevention in New York City: On average, it works
Goodman, Sarena; Messeri, Peter; O'Flaherty, Brendan
2016-01-01
This study evaluates the community impact of the first four years of Homebase, a homelessness prevention program in New York City. Family shelter entries decreased on average in the neighborhoods in which Homebase was operating. Homebase effects appear to be heterogeneous, and so different kinds of averages imply different-sized effects. The (geometric) average decrease in shelter entries was about 5% when census tracts are weighted equally, and 11% when community districts (which are much larger) are weighted equally. This study also examines the effect of foreclosures. Foreclosures are associated with more shelter entries in neighborhoods that usually do not send large numbers of families to the shelter system. PMID:26941543
Determining GPS average performance metrics
NASA Technical Reports Server (NTRS)
Moore, G. V.
1995-01-01
Analytic and semi-analytic methods are used to show that users of the GPS constellation can expect performance variations based on their location. Specifically, performance is shown to be a function of both altitude and latitude. These results stem from the fact that the GPS constellation is itself non-uniform. For example, GPS satellites are over four times as likely to be directly over Tierra del Fuego as over Hawaii or Singapore. Inevitable performance variations due to user location occur for ground, sea, air and space GPS users. These performance variations can be studied in an average relative sense. A semi-analytic tool which symmetrically allocates GPS satellite latitude belt dwell times among longitude points is used to compute average performance metrics. These metrics include average number of GPS vehicles visible, relative average accuracies in the radial, intrack and crosstrack (or radial, north/south, east/west) directions, and relative average PDOP or GDOP. The tool can be quickly changed to incorporate various user antenna obscuration models and various GPS constellation designs. Among other applications, tool results can be used in studies to: predict locations and geometries of best/worst case performance, design GPS constellations, determine optimal user antenna location and understand performance trends among various users.
Sun, Shu-Wei; Mei, Jennifer; Tuel, Keelan
2013-11-01
Diffusion tensor imaging (DTI) is achieved by collecting a series of diffusion-weighted images (DWIs). Signal averaging of multiple repetitions can be performed in the k-space (k-avg) or in the image space (m-avg) to improve the image quality. Alternatively, one can treat each acquisition as an independent image and use all of the data to reconstruct the DTI without doing any signal averaging (no-avg). To compare these three approaches, in this study, in vivo DTI data were collected from five normal mice. Noisy data with signal-to-noise ratios (SNR) that varied between five and 30 (before averaging) were then simulated. The DTI indices, including relative anisotropy (RA), trace of diffusion tensor (TR), axial diffusivity (λ║), and radial diffusivity (λ⊥), derived from the k-avg, m-avg, and no-avg, were then compared in the corpus callosum white matter, cortex gray matter, and the ventricles. We found that k-avg and m-avg enhanced the SNR of DWI with no significant differences. However, k-avg produced lower RA in the white matter and higher RA in the gray matter, compared to the m-avg and no-avg, regardless of SNR. The latter two produced similar DTI quantifications. We concluded that k-avg is less preferred for DTI brain imaging.
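A single-voxel toy contrast between the two averaging orders compared above (the magnitude and phases below are hypothetical) shows why complex, k-space averaging is phase-sensitive while magnitude-image averaging is not:

```python
import numpy as np

true_mag = 100.0

# Two repetitions of the same voxel whose phase differs between acquisitions
# (e.g. due to motion); the underlying magnitudes are identical.
reps = [true_mag * np.exp(1j * ph) for ph in (0.0, np.pi / 2)]

# k-avg: average the complex data first, then take the magnitude.
k_avg = np.abs(np.mean(reps))                 # partial phase cancellation shrinks it

# m-avg: take the magnitude of each repetition first, then average.
m_avg = np.mean([np.abs(r) for r in reps])    # phase-insensitive: 100.0
```

Here m-avg recovers the true magnitude while k-avg loses signal to phase cancellation, a one-voxel caricature of why the study finds k-avg biasing anisotropy estimates.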
Evaluations of average level spacings
Liou, H.I.
1980-01-01
The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, detecting a complete sequence of levels without mixing in other parities is extremely difficult, if not totally impossible. Most methods derive the average level spacings by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both the distributions of level widths and positions is discussed extensively with an example of ¹⁶⁸Er data. 19 figures, 2 tables.
Vibrational averages along thermal lines
NASA Astrophysics Data System (ADS)
Monserrat, Bartomeu
2016-01-01
A method is proposed for the calculation of vibrational quantum and thermal expectation values of physical properties from first principles. Thermal lines are introduced: these are lines in configuration space parametrized by temperature, such that the value of any physical property along them is approximately equal to the vibrational average of that property. The number of sampling points needed to explore the vibrational phase space is reduced by up to an order of magnitude when the full vibrational density is replaced by thermal lines. Calculations of the vibrational averages of several properties and systems are reported, namely, the internal energy and the electronic band gap of diamond and silicon, and the chemical shielding tensor of L-alanine. Thermal lines pave the way for complex calculations of vibrational averages, including large systems and methods beyond semilocal density functional theory.
NASA Technical Reports Server (NTRS)
Howard, W. H.; Young, D. R.
1972-01-01
Device applies compressive force to bone to minimize loss of bone calcium during weightlessness or bedrest. Force is applied through weights, or hydraulic, pneumatic or electrically actuated devices. Device is lightweight and easy to maintain and operate.
Weight and weddings. Engaged men's body weight ideals and wedding weight management behaviors.
Klos, Lori A; Sobal, Jeffery
2013-01-01
Most adults marry at some point in life, and many invest substantial resources in a wedding ceremony. Previous research reports that brides often strive towards culturally-bound appearance norms and engage in weight management behaviors in preparation for their wedding. However, little is known about wedding weight ideals and behaviors among engaged men. A cross-sectional survey of 163 engaged men asked them to complete a questionnaire about their current height and weight, ideal wedding body weight, wedding weight importance, weight management behaviors, formality of their upcoming wedding ceremony, and demographics. Results indicated that the discrepancy between men's current weight and reported ideal wedding weight averaged 9.61 lb. Most men considered being at a certain weight at their wedding to be somewhat important. About 39% were attempting to lose weight for their wedding, and 37% were not trying to change their weight. Attempting weight loss was more frequent among men with higher BMIs, those planning more formal weddings, and those who considered being the right weight at their wedding as important. Overall, these findings suggest that weight-related appearance norms and weight loss behaviors are evident among engaged men.
The Molecular Weight Distribution of Polymer Samples
ERIC Educational Resources Information Center
Horta, Arturo; Pastoriza, M. Alejandra
2007-01-01
Various methods for the determination of the molecular weight distribution (MWD) of different polymer samples are presented. The study shows that the molecular weight averages and distribution of a polymerization completely depend on the characteristics of the reaction itself.
Polyhedral Painting with Group Averaging
ERIC Educational Resources Information Center
Farris, Frank A.; Tsao, Ryan
2016-01-01
The technique of "group-averaging" produces colorings of a sphere that have the symmetries of various polyhedra. The concepts are accessible at the undergraduate level, without being well-known in typical courses on algebra or geometry. The material makes an excellent discovery project, especially for students with some background in…
NASA Technical Reports Server (NTRS)
Ustino, Eugene A.
2006-01-01
This slide presentation reviews the observable radiances as functions of atmospheric parameters and of surface parameters; the mathematics of atmospheric weighting functions (WFs) and surface partial derivatives (PDs) is presented, along with the equation of the forward radiative transfer (RT) problem. For non-scattering atmospheres this can be done analytically, and all WFs and PDs can be computed analytically using the direct linearization approach. For scattering atmospheres, in the general case, the solution of the forward RT problem can be obtained only numerically, but we need only two numerical solutions, one of the forward RT problem and one of the adjoint RT problem, to compute all WFs and PDs we can think of. In this presentation we discuss applications of both the linearization and adjoint approaches.
Ensemble average theory of gravity
NASA Astrophysics Data System (ADS)
Khosravi, Nima
2016-12-01
We put forward the idea that all the theoretically consistent models of gravity have contributions to the observed gravity interaction. In this formulation, each model comes with its own Euclidean path-integral weight where general relativity (GR) has automatically the maximum weight in high-curvature regions. We employ this idea in the framework of Lovelock models and show that in four dimensions the result is a specific form of the f(R, G) model. This specific f(R, G) satisfies the stability conditions and possesses self-accelerating solutions. Our model is consistent with the local tests of gravity since its behavior is the same as in GR for the high-curvature regime. In the low-curvature regime the gravitational force is weaker than in GR, which can be interpreted as the existence of a repulsive fifth force for very large scales. Interestingly, there is an intermediate-curvature regime where the gravitational force is stronger in our model compared to GR. The different behavior of our model in comparison with GR in both low- and intermediate-curvature regimes makes it observationally distinguishable from ΛCDM.
Averaging Robertson-Walker cosmologies
Brown, Iain A.; Robbers, Georg; Behrend, Juliane E-mail: G.Robbers@thphys.uni-heidelberg.de
2009-04-15
The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω⁰_eff ≈ 4 × 10⁻⁶, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10⁻⁸ and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < -1/3 can be found for strongly phantom models.
Average neutronic properties of prompt fission products
Foster, D.G. Jr.; Arthur, E.D.
1982-02-01
Calculations of the average neutronic properties of the ensemble of fission products produced by fast-neutron fission of ²³⁵U and ²³⁹Pu, where the properties are determined before the first beta decay of any of the fragments, are described. For each case we approximate the ensemble by a weighted average over 10 selected nuclides, whose properties we calculate using nuclear-model parameters deduced from the systematic properties of other isotopes of the same elements as the fission fragments. The calculations were performed primarily with the COMNUC and GNASH statistical-model codes. The results, available in ENDF/B format, include cross sections, angular distributions of neutrons, and spectra of neutrons and photons, for incident-neutron energies between 10⁻⁵ eV and 20 MeV. Over most of this energy range, we find that the capture cross section of ²³⁹Pu fission fragments is systematically a factor of two to five greater than that for ²³⁵U fission fragments.
Effect of high-speed jet on flow behavior, retrogradation, and molecular weight of rice starch.
Fu, Zhen; Luo, Shun-Jing; BeMiller, James N; Liu, Wei; Liu, Cheng-Mei
2015-11-20
Effects of high-speed jet (HSJ) treatment on flow behavior, retrogradation, and degradation of the molecular structure of indica rice starch were investigated. Decreasing with the number of HSJ treatment passes were the turbidity of pastes (degree of retrogradation), the enthalpy of melting of retrograded rice starch, weight-average molecular weights and weight-average root-mean square radii of gyration of the starch polysaccharides, and the amylopectin peak areas of SEC profiles. The areas of lower-molecular-weight polymers increased. The chain-length distribution was not significantly changed. Pastes of all starch samples exhibited pseudoplastic, shear-thinning behavior. HSJ treatment increased the flow behavior index and decreased the consistency coefficient and viscosity. The data suggested that degradation of amylopectin was mainly involved and that breakdown preferentially occurred in chains between clusters.
Achronal averaged null energy condition
Graham, Noah; Olum, Ken D.
2007-09-15
The averaged null energy condition (ANEC) requires that the integral over a complete null geodesic of the stress-energy tensor projected onto the geodesic tangent vector is never negative. This condition is sufficient to prove many important theorems in general relativity, but it is violated by quantum fields in curved spacetime. However there is a weaker condition, which is free of known violations, requiring only that there is no self-consistent spacetime in semiclassical gravity in which ANEC is violated on a complete, achronal null geodesic. We indicate why such a condition might be expected to hold and show that it is sufficient to rule out closed timelike curves and wormholes connecting different asymptotically flat regions.
Flexible time domain averaging technique
NASA Astrophysics Data System (ADS)
Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng
2013-09-01
Time domain averaging (TDA) is essentially a comb filter; it cannot extract specified harmonics that may be caused by certain faults, such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to some extent. Several improved TDA methods have been proposed; however, they cannot completely eliminate the waveform reconstruction error caused by PCE. To overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of the FTDA is first constructed by frequency-domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which improves the computational efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of the FTDA in signal de-noising, interpolation, and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by the FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively; moreover, it improves the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of the impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for fault symptom extraction in rotating machinery.
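Conventional TDA, the baseline the FTDA improves on, reduces to reshaping the signal into whole periods and averaging them; a minimal sketch, assuming an exactly integer sample period (i.e. the case free of PCE) and a synthetic signal:

```python
import numpy as np

def tda(signal, period):
    """Time domain (synchronous) average over all complete periods."""
    n = len(signal) // period
    return signal[:n * period].reshape(n, period).mean(axis=0)

rng = np.random.default_rng(1)
period, cycles = 64, 200
t = np.arange(period * cycles)
clean = np.sin(2 * np.pi * t / period)        # periodic component of interest
noisy = clean + rng.normal(0.0, 1.0, t.size)  # heavily contaminated observation

avg = tda(noisy, period)                      # noise std shrinks ~ 1/sqrt(200)
err = np.abs(avg - clean[:period]).max()
```

When the true period is not an integer number of samples, this reshape-and-average step accumulates exactly the period cutting error the abstract describes.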
NASA Technical Reports Server (NTRS)
1995-01-01
The Attitude Adjuster is a system for weight repositioning corresponding to a SCUBA diver's changing positions. Compact tubes on the diver's air tank permit controlled movement of lead balls within the Adjuster, automatically repositioning when the diver changes position. Manufactured by Think Tank Technologies, the system is light and small, reducing drag and energy requirements and contributing to lower air consumption. The Mid-Continent Technology Transfer Center helped the company with both technical and business information and arranged for the testing at Marshall Space Flight Center's Weightlessness Environmental Training Facility for astronauts.
40 CFR 63.652 - Emissions averaging provisions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... annual credits and debits in the Periodic Reports as specified in § 63.655(g)(8). Every fourth Periodic... reported in the next Periodic Report. (iii) The following procedures and equations shall be used to..., dimensionless (see table 33 of subpart G). P=Weighted average rack partial pressure of organic HAP's...
Combining forecast weights: Why and how?
NASA Astrophysics Data System (ADS)
Yin, Yip Chee; Kok-Haur, Ng; Hock-Eam, Lim
2012-09-01
This paper proposes a procedure called forecast weight averaging, a specific combination of forecast weights obtained from different methods of constructing forecast weights, for the purpose of improving the accuracy of pseudo out-of-sample forecasting. It is found that under certain specified conditions, forecast weight averaging can lower the mean squared forecast error obtained from model averaging. In addition, we show that in a linear and homoskedastic environment, this superior predictive ability of forecast weight averaging holds true irrespective of whether the coefficients are tested by the t statistic or the z statistic, provided the significance level is within the 10% range. By theoretical proofs and simulation study, we show that model averaging methods such as variance model averaging, simple model averaging, and standard error model averaging each produce a mean squared forecast error larger than that of forecast weight averaging. Finally, this result also holds true marginally when applied to business and economic empirical data sets: Gross Domestic Product (GDP) growth rate, Consumer Price Index (CPI), and Average Lending Rate (ALR) of Malaysia.
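The mechanics of averaging forecast-weight vectors can be sketched as below; the forecasts and the two weighting schemes are hypothetical stand-ins, not the paper's specific constructions:

```python
import numpy as np

def msfe(forecast, actual):
    """Mean squared forecast error."""
    return float(np.mean((forecast - actual) ** 2))

rng = np.random.default_rng(2)
actual = rng.normal(0.0, 1.0, 500)
f1 = actual + rng.normal(0.0, 0.5, 500)   # two competing forecasts of `actual`
f2 = actual + rng.normal(0.0, 0.8, 500)

# Two hypothetical schemes for weighting the forecasts ...
w_a = np.array([0.9, 0.1])
w_b = np.array([0.5, 0.5])
# ... and their average: the combined "forecast weight" vector.
w_avg = (w_a + w_b) / 2.0

errors = {name: msfe(w[0] * f1 + w[1] * f2, actual)
          for name, w in [("A", w_a), ("B", w_b), ("averaged", w_avg)]}
```

The averaged weight vector still sums to one, so the combined forecast remains a proper convex combination of the individual forecasts.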
Modeling operating weight and axle weight distributions for highway vehicles
Greene, D.L.; Liang, J.C.
1988-07-01
The estimation of highway cost responsibility requires detailed information on vehicle operating weights and axle weights by type of vehicle. Typically, 10–20 vehicle types must be cross-classified by 10–20 registered weight classes and again by 20 or more operating weight categories, resulting in 100–400 relative frequencies to be determined for each vehicle type. For each of these, gross operating weight must be distributed to each axle or axle unit. Given the rarity of many of the heaviest vehicle types, direct estimation of these frequencies and axle weights from traffic classification count statistics and truck weight data may exceed the reliability of even the largest (e.g., 250,000 record) data sources. An alternative is to estimate statistical models of operating weight distributions as functions of registered weight, and models of axle weight shares as functions of operating weight. This paper describes the estimation of such functions using the multinomial logit model (a log-linear model) and the implementation of the modeling framework as a PC-based FORTRAN program. Areas for further research include the addition of highway class and region as explanatory variables in operating weight distribution models, and the development of theory for including registration costs and costs of operating overweight in the modeling framework. 14 refs., 45 figs., 5 tabs.
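A multinomial logit maps a vector of class "utilities" to relative frequencies via a softmax; a minimal sketch with hypothetical utilities for four operating-weight bins of one vehicle type (in the paper these would be fitted functions of registered weight):

```python
import numpy as np

def logit_shares(utilities):
    """Multinomial logit: class shares as a softmax of the class utilities."""
    z = np.exp(utilities - np.max(utilities))   # subtract max for numerical stability
    return z / z.sum()

# Hypothetical utilities for four operating-weight bins.
shares = logit_shares(np.array([1.0, 2.0, 0.5, -1.0]))
```

The shares are guaranteed to be positive and to sum to one, which is what makes the logit form convenient for modeling the hundreds of relative frequencies per vehicle type described above.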
Using Bayes Model Averaging for Wind Power Forecasts
NASA Astrophysics Data System (ADS)
Preede Revheim, Pål; Beyer, Hans Georg
2014-05-01
For operational purposes, predictions of the lumped output of groups of wind farms spread over larger geographic areas will often be of interest. A naive approach is to make forecasts for each individual site and sum them up to get the group forecast. It is, however, well documented that a better choice is to use a model that also takes advantage of spatial smoothing effects. It might, however, be the case that some sites tend to reflect the total output of the region more accurately, either in general or for certain wind directions. It will then be of interest to give these a greater influence over the group forecast. Bayesian model averaging (BMA) is a statistical post-processing method for producing probabilistic forecasts from ensembles. Raftery et al. [1] show how BMA can be used for statistical post-processing of forecast ensembles, producing PDFs of future weather quantities. The BMA predictive PDF of a future weather quantity is a weighted average of the ensemble members' PDFs, where the weights can be interpreted as posterior probabilities and reflect the ensemble members' contribution to overall forecasting skill over a training period. In Revheim and Beyer [2] the BMA procedure used in Sloughter, Gneiting and Raftery [3] was found to produce fairly accurate PDFs for the future mean wind speed of a group of sites from the single-site wind speeds. However, when the procedure was applied to wind power it resulted in either problems with the estimation of the parameters (mainly caused by longer consecutive periods of no power production) or severe underestimation (mainly caused by problems with reflecting the power curve). In this paper the problems that arose when applying BMA to wind power forecasting are met through two strategies. First, the BMA procedure is run with a combination of single-site wind speeds and single-site wind power production as input. This solves the problem with longer consecutive periods where the input data
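The core BMA object described above, a predictive PDF formed as a weighted average of member PDFs, can be sketched with Gaussian members; the means, spreads, and posterior weights below are hypothetical:

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def bma_pdf(x, members, weights):
    """BMA predictive density: weighted average of the member predictive PDFs."""
    return sum(w * normal_pdf(x, mu, s) for w, (mu, s) in zip(weights, members))

# Hypothetical 3-member wind speed ensemble: (mean, spread) per member,
# plus posterior weights summing to one.
members = [(8.0, 1.0), (9.0, 1.5), (7.5, 0.8)]
weights = [0.5, 0.3, 0.2]

x = np.linspace(0.0, 16.0, 1601)
pdf = bma_pdf(x, members, weights)
mass = pdf.sum() * (x[1] - x[0])   # the mixture still integrates to ~1
```

Because the weights sum to one, the mixture is itself a proper density; the wind power application adds the complications (zero-production runs, the power curve) that the paper addresses.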
Oh, Hee Kyung
2016-01-01
In order to attain heavier live weight without impairing pork or sensory quality characteristics, carcass performance, muscle fiber, pork quality, and sensory quality characteristics were compared among the heavy weight (HW, average live weight of 130.5 kg), medium weight (MW, average weight of 111.1 kg), and light weight (LW, average weight of 96.3 kg) pigs at time of slaughter. The loin eye area was 1.47 times greater in the HW group compared to the LW group (64.0 and 43.5 cm², p<0.001), while carcass percent was similar between the HW and MW groups (p>0.05). This greater performance by the HW group compared to the LW group can be explained by a greater total number (1,436 vs. 1,188, ×10³, p<0.001) and larger area (4,452 vs. 3,716 μm², p<0.001) of muscle fibers. No significant differences were observed in muscle pH45 min, lightness, drip loss, and shear force among the groups (p>0.05), and higher live weights did not influence sensory quality attributes, including tenderness, juiciness, and flavor. Therefore, these findings indicate that increased live weights in this study did not influence the technological and sensory quality characteristics. Moreover, muscles with a higher number of medium or large size fibers tend to exhibit good carcass performance without impairing meat and sensory quality characteristics. PMID:27433110
On the average uncertainty for systems with nonlinear coupling
NASA Astrophysics Data System (ADS)
Nelson, Kenric P.; Umarov, Sabir R.; Kon, Mark A.
2017-02-01
The increased uncertainty and complexity of nonlinear systems have motivated investigators to consider generalized approaches to defining an entropy function. New insights are achieved by defining the average uncertainty in the probability domain as a transformation of entropy functions. The Shannon entropy when transformed to the probability domain is the weighted geometric mean of the probabilities. For the exponential and Gaussian distributions, we show that the weighted geometric mean of the distribution is equal to the density of the distribution at the location plus the scale (i.e. at the width of the distribution). The average uncertainty is generalized via the weighted generalized mean, in which the moment is a function of the nonlinear source. Both the Rényi and Tsallis entropies transform to this definition of the generalized average uncertainty in the probability domain. For the generalized Pareto and Student's t-distributions, which are the maximum entropy distributions for these generalized entropies, the appropriate weighted generalized mean also equals the density of the distribution at the location plus scale. A coupled entropy function is proposed, which is equal to the normalized Tsallis entropy divided by one plus the coupling.
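The identity underlying the abstract's first claim, that the Shannon entropy transformed to the probability domain equals the weighted geometric mean of the probabilities (weighted by the probabilities themselves), checks out numerically:

```python
import numpy as np

p = np.array([0.5, 0.25, 0.25])           # an example probability distribution

shannon = -np.sum(p * np.log(p))          # Shannon entropy, in nats
geo_mean = np.prod(p ** p)                # weighted geometric mean of the p_i,
                                          # weighted by the p_i themselves

# Transforming the entropy to the probability domain recovers the
# weighted geometric mean: exp(-H) == prod p_i ** p_i.
transformed = np.exp(-shannon)
```

For this distribution both sides equal 2^(-3/2) ≈ 0.3536; the Rényi and Tsallis cases replace this geometric mean with a weighted generalized mean, as the abstract states.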
Model Averaging for Improving Inference from Causal Diagrams.
Hamra, Ghassan B; Kaufman, Jay S; Vahratian, Anjel
2015-08-11
Model selection is an integral, yet contentious, component of epidemiologic research. Unfortunately, there remains no consensus on how to identify a single, best model among multiple candidate models. Researchers may be prone to selecting the model that best supports their a priori, preferred result; a phenomenon referred to as "wish bias". Directed acyclic graphs (DAGs), based on background causal and substantive knowledge, are a useful tool for specifying a subset of adjustment variables to obtain a causal effect estimate. In many cases, however, a DAG will support multiple, sufficient or minimally-sufficient adjustment sets. Even though all of these may theoretically produce unbiased effect estimates they may, in practice, yield somewhat distinct values, and the need to select between these models once again makes the research enterprise vulnerable to wish bias. In this work, we suggest combining adjustment sets with model averaging techniques to obtain causal estimates based on multiple, theoretically-unbiased models. We use three techniques for averaging the results among multiple candidate models: information criteria weighting, inverse variance weighting, and bootstrapping. We illustrate these approaches with an example from the Pregnancy, Infection, and Nutrition (PIN) study. We show that each averaging technique returns similar, model averaged causal estimates. An a priori strategy of model averaging provides a means of integrating uncertainty in selection among candidate, causal models, while also avoiding the temptation to report the most attractive estimate from a suite of equally valid alternatives.
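Of the three averaging techniques named, inverse-variance weighting is the most self-contained to sketch; the estimates and standard errors below are hypothetical stand-ins for effects obtained under different DAG-supported adjustment sets:

```python
import numpy as np

def inverse_variance_average(estimates, std_errors):
    """Combine per-model effect estimates using inverse-variance weights."""
    est = np.asarray(estimates, float)
    var = np.asarray(std_errors, float) ** 2
    w = (1.0 / var) / np.sum(1.0 / var)        # normalized inverse-variance weights
    pooled = np.sum(w * est)                   # precision-weighted average
    pooled_se = np.sqrt(1.0 / np.sum(1.0 / var))
    return pooled, pooled_se

# Hypothetical effect estimates from three theoretically unbiased models.
pooled, pooled_se = inverse_variance_average([1.2, 1.0, 1.4], [0.2, 0.4, 0.2])
```

More precise models receive larger weights, and the pooled standard error is never larger than the smallest individual one, which is the usual appeal of this scheme.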
Programmable noise bandwidth reduction by means of digital averaging
NASA Technical Reports Server (NTRS)
Poklemba, John J. (Inventor)
1993-01-01
Predetection noise bandwidth reduction is effected by a pre-averager capable of digitally averaging the samples of an input data signal over two or more symbols, the averaging interval being defined by the input sampling rate divided by the output sampling rate. As the averaged sample is clocked to a suitable detector at a much slower rate than the input signal sampling rate the noise bandwidth at the input to the detector is reduced, the input to the detector having an improved signal to noise ratio as a result of the averaging process, and the rate at which such subsequent processing must operate is correspondingly reduced. The pre-averager forms a data filter having an output sampling rate of one sample per symbol of received data. More specifically, selected ones of a plurality of samples accumulated over two or more symbol intervals are output in response to clock signals at a rate of one sample per symbol interval. The pre-averager includes circuitry for weighting digitized signal samples using stored finite impulse response (FIR) filter coefficients. A method according to the present invention is also disclosed.
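The pre-averager's bandwidth reduction can be sketched as plain block averaging at N samples per symbol; this is a simplification, since the patented device applies stored FIR coefficients rather than a uniform average, and the signal below is a made-up BPSK-like stream:

```python
import numpy as np

def pre_average(samples, sps):
    """Average sps input samples per symbol; output one sample per symbol."""
    n = len(samples) // sps
    return samples[:n * sps].reshape(n, sps).mean(axis=1)

rng = np.random.default_rng(3)
n_sym, sps = 2000, 16
symbols = rng.choice([-1.0, 1.0], n_sym)                 # BPSK-like data
rx = np.repeat(symbols, sps) + rng.normal(0.0, 2.0, n_sym * sps)

out = pre_average(rx, sps)         # noise variance reduced ~16x before detection
ber = float(np.mean(np.sign(out) != symbols))
```

Averaging 16 samples per symbol cuts the noise variance by roughly 16 before the detector sees the data, so hard decisions on the averaged output are mostly correct even though the raw samples are noise-dominated, and the detector runs at one sixteenth of the input rate.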
ERIC Educational Resources Information Center
Molfese, Dennis L.; Key, Alexandra Fonaryova; Kelly, Spencer; Cunningham, Natalie; Terrell, Shona; Ferguson, Melissa; Molfese, Victoria J.; Bonebright, Terri
2006-01-01
Event-related potentials (ERPs) were recorded from 27 children (14 girls, 13 boys) who varied in their reading skill levels. Both behavior performance measures recorded during the ERP word classification task and the ERP responses themselves discriminated between children with above-average, average, and below-average reading skills. ERP…
Dowdall, A; Murphy, P; Pollard, D; Fenton, D
2017-04-01
In 2002, a National Radon Survey (NRS) in Ireland established that the geographically weighted national average indoor radon concentration was 89 Bq m(-3). Since then a number of developments have taken place which are likely to have impacted on the national average radon level. Key among these was the introduction of amending Building Regulations in 1998 requiring radon preventive measures in new buildings in High Radon Areas (HRAs). In 2014, the Irish Government adopted the National Radon Control Strategy (NRCS) for Ireland. A knowledge gap identified in the NRCS was the need to update the national average for Ireland given the developments since 2002. The updated national average would also be used as a baseline metric to assess the effectiveness of the NRCS over time. A new national survey protocol was required that would measure radon in a sample of homes representative of radon risk and geographical location. The design of the survey protocol took into account that it was not feasible to repeat the 11,319 measurements carried out for the 2002 NRS due to time and resource constraints. However, the existence of that comprehensive survey allowed a new protocol to be developed, involving measurements carried out in unbiased, randomly selected volunteer homes. This paper sets out the development and application of that survey protocol. The results of the 2015 survey showed that the current national average indoor radon concentration for homes in Ireland is 77 Bq m(-3), a decrease from the 89 Bq m(-3) reported in the 2002 NRS. Analysis of the results by build date demonstrates that the introduction of the amending Building Regulations in 1998 has led to a reduction in the average indoor radon level in Ireland.
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2013 CFR
2013-07-01
...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2014 CFR
2014-07-01
...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...
Do Diurnal Aerosol Changes Affect Daily Average Radiative Forcing?
Kassianov, Evgueni I.; Barnard, James C.; Pekour, Mikhail S.; Berg, Larry K.; Michalsky, Joseph J.; Lantz, K.; Hodges, G. B.
2013-06-17
Strong diurnal variability of aerosol has been observed frequently for many urban/industrial regions. How this variability may alter the direct aerosol radiative forcing (DARF), however, is largely unknown. To quantify changes in the time-averaged DARF, we perform an assessment of 29 days of high temporal resolution ground-based data collected during the Two-Column Aerosol Project (TCAP) on Cape Cod, which is downwind of metropolitan areas. We demonstrate that strong diurnal changes of aerosol loading (about 20% on average) have a negligible impact on the 24-h average DARF, when daily averaged optical properties are used to find this quantity. However, when there is a sparse temporal sampling of aerosol properties, which may preclude the calculation of daily averaged optical properties, large errors (up to 100%) in the computed DARF may occur. We describe a simple way of reducing these errors, which suggests the minimal temporal sampling needed to accurately find the forcing.
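The sampling effect described, a diurnally varying aerosol quantity whose daily mean is well recovered by dense sampling but misestimated by a single sample, can be illustrated with a synthetic diurnal cycle (values are hypothetical, not TCAP observations):

```python
import math

# Synthetic diurnal cycle of aerosol optical depth (AOD) with ~20% amplitude,
# sampled every 15 minutes; purely illustrative.
hours = [h / 4 for h in range(96)]
aod = [0.15 * (1 + 0.2 * math.sin(2 * math.pi * (h - 9) / 24)) for h in hours]

daily_mean = sum(aod) / len(aod)        # dense temporal sampling
single_sample = aod[hours.index(13.0)]  # one early-afternoon sample only
relative_error = abs(single_sample - daily_mean) / daily_mean
# A lone afternoon sample overestimates the daily mean by the full
# diurnal amplitude at that hour (~17% here), the kind of error sparse
# sampling introduces into a 24-h average forcing calculation.
```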
Birth weight reduction associated with residence near a hazardous waste landfill.
Berry, M; Bove, F
1997-01-01
We examined the relationship between birth weight and mother's residence near a hazardous waste landfill. Twenty-five years of birth certificates (1961-1985) were collected for four towns. Births were grouped into five 5-year periods corresponding to hypothesized exposure periods (1971-1975 having the greatest potential for exposure). From 1971 to 1975, term births (37-44 weeks gestation) to parents living closest to the landfill (Area 1A) had a statistically significant lower average birth weight (192 g) and a statistically significant higher proportion of low birth weight [odds ratio (OR) = 5.1; 95% confidence interval (CI), 2.1-12.3] than the control population. Average term birth weights in Area 1A rebounded by about 332 g after 1975. Parallel results were found for all births (gestational age > 27 weeks) in Area 1A during 1971-1975. Area 1A infants had twice the risk of prematurity (OR = 2.1; 95% CI, 1.0-4.4) during 1971-1975 compared to the control group. The results indicate a significant impact to infants born to residents living near the landfill during the period postulated as having the greatest potential for exposure. The magnitude of the effect is in the range of birth weight reduction due to cigarette smoking during pregnancy. PMID:9347901
Averaging and Adding in Children's Worth Judgements
ERIC Educational Resources Information Center
Schlottmann, Anne; Harman, Rachel M.; Paine, Julie
2012-01-01
Under the normative Expected Value (EV) model, multiple outcomes are additive, but in everyday worth judgement intuitive averaging prevails. Young children also use averaging in EV judgements, leading to a disordinal, crossover violation of utility when children average the part worths of simple gambles involving independent events (Schlottmann,…
Code of Federal Regulations, 2010 CFR
2010-07-01
... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging...
Code of Federal Regulations, 2012 CFR
2012-07-01
... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging...
Code of Federal Regulations, 2011 CFR
2011-07-01
... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging...
Code of Federal Regulations, 2013 CFR
2013-07-01
... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging...
Code of Federal Regulations, 2014 CFR
2014-07-01
... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging...
Designing Digital Control Systems With Averaged Measurements
NASA Technical Reports Server (NTRS)
Polites, Michael E.; Beale, Guy O.
1990-01-01
Rational criteria represent improvement over "cut-and-try" approach. Recent development in theory of control systems yields improvements in mathematical modeling and design of digital feedback controllers using time-averaged measurements. By using one of new formulations for systems with time-averaged measurements, designer takes averaging effect into account when modeling plant, eliminating need to iterate design and simulation phases.
Pryke, Rachel; Docherty, Andrea
2008-02-01
In view of the limited success rates of all weight-loss strategies to date, this article hypothesises that in situations where previous dieting attempts have failed, better outcomes and health improvements will arise from advocating weight-stability goals. This means the promotion of weight maintenance (to ensure any reduction in weight is maintained) and weight constancy (where steps are taken to maintain existing weight without attempting weight loss), rather than advocating existing 5-10% weight-loss targets for these patients. The majority of approaches to obesity focus on weight reduction despite poor evidence of effectiveness. Primary care remains reluctant to engage in ineffective approaches, yet is well placed to give advice, and would undoubtedly adopt effective obesity-management approaches if they were developed. Despite guidance for overweight or obese people to aim for a 5-10% weight reduction, current trends demonstrate escalation of average weights and obesity. A literature review found little information about evaluation of weight-stability approaches (either weight maintenance or weight constancy), despite theoretical support for them. Yet taking steps to protect weight reduction where it is achieved, and to promote weight constancy (without weight loss) where further dieting is predicted to fail, would have a beneficial effect on preventing further growth of obesity-related morbidity in the population. Some evidence exists to support simple behavioural approaches to improve weight stability, but these measures do not feature in current advice and hence are not widely advocated.
Demonstration of a Model Averaging Capability in FRAMES
NASA Astrophysics Data System (ADS)
Meyer, P. D.; Castleton, K. J.
2009-12-01
Uncertainty in model structure can be incorporated in risk assessment using multiple alternative models and model averaging. To facilitate application of this approach to regulatory applications based on risk or dose assessment, a model averaging capability was integrated with the Framework for Risk Analysis in Multimedia Environmental Systems (FRAMES) version 2 software. FRAMES is a software platform that allows the non-parochial communication between disparate models, databases, and other frameworks. Users have the ability to implement and select environmental models for specific risk assessment and management problems. Standards are implemented so that models produce information that is readable by other downstream models and accept information from upstream models. Models can be linked across multiple media and from source terms to quantitative risk/dose estimates. Parameter sensitivity and uncertainty analysis tools are integrated. A model averaging module was implemented to accept output from multiple models and produce average results. These results can be deterministic quantities or probability distributions obtained from an analysis of parameter uncertainty. Output from alternative models is averaged using weights determined from user input and/or model calibration results. A model calibration module based on the PEST code was implemented to provide FRAMES with a general calibration capability. An application illustrates the implementation, user interfaces, execution, and results of the FRAMES model averaging capabilities.
Analysis of the lifted weight including height and frequency factors for workers in Colombia.
Saavedra-Robinson, Luisa; Quintana, Leonardo A J; Fortunato Leal, Luis Díaz; Niño, María
2012-01-01
Factors related to the height of the load and the frequency of handling have become a way to predict the acceptable standard weight lifted for workers whose main task is the manual lifting of materials, and measuring these conditions is important for determining a maximum lifted weight. This study was conducted with twenty (20) workers between eighteen (18) and forty (40) years old, each with a minimum of six months' experience, belonging to the warehouse and packaging area of a dairy products company. Consideration was given to three different heights (knuckle, shoulder, and total height) as well as frequencies of 2, 4 and 6 lifts per minute. Average values for lifted weight were 17.9306 ± 2.37 kg. The conclusions and recommendations included a review of legislation on the Colombian maximum acceptable lifting weight, because the current law does not match the acceptable weights handled in this research.
Bayesian Model Averaging for Propensity Score Analysis.
Kaplan, David; Chen, Jianshen
2014-01-01
This article considers Bayesian model averaging as a means of addressing uncertainty in the selection of variables in the propensity score equation. We investigate an approximate Bayesian model averaging approach based on the model-averaged propensity score estimates produced by the R package BMA but that ignores uncertainty in the propensity score. We also provide a fully Bayesian model averaging approach via Markov chain Monte Carlo sampling (MCMC) to account for uncertainty in both parameters and models. A detailed study of our approach examines the differences in the causal estimate when incorporating noninformative versus informative priors in the model averaging stage. We examine these approaches under common methods of propensity score implementation. In addition, we evaluate the impact of changing the size of Occam's window used to narrow down the range of possible models. We also assess the predictive performance of both Bayesian model averaging propensity score approaches and compare it with the case without Bayesian model averaging. Overall, results show that both Bayesian model averaging propensity score approaches recover the treatment effect estimates well and generally provide larger uncertainty estimates, as expected. Both Bayesian model averaging approaches offer slightly better prediction of the propensity score compared with the Bayesian approach with a single propensity score equation. Covariate balance checks for the case study show that both Bayesian model averaging approaches offer good balance. The fully Bayesian model averaging approach also provides posterior probability intervals of the balance indices.
College Freshman Stress and Weight Change: Differences by Gender
ERIC Educational Resources Information Center
Economos, Christina D.; Hildebrandt, M. Lise; Hyatt, Raymond R.
2008-01-01
Objectives: To examine how stress and health-related behaviors affect freshman weight change by gender. Methods: Three hundred ninety-six freshmen completed a 40-item health behavior survey and height and weight were collected at baseline and follow-up. Results: Average weight change was 5.04 lbs for males, 5.49 lbs for females. Weight gain was…
Data Point Averaging for Computational Fluid Dynamics Data
NASA Technical Reports Server (NTRS)
Norman, David, Jr. (Inventor)
2014-01-01
A system and method for generating fluid flow parameter data for use in aerodynamic heating analysis. Computational fluid dynamics data is generated for a number of points in an area on a surface to be analyzed. Sub-areas corresponding to areas of the surface for which an aerodynamic heating analysis is to be performed are identified. A computer system automatically determines a sub-set of the number of points corresponding to each of the number of sub-areas and determines a value for each of the number of sub-areas using the data for the sub-set of points corresponding to each of the number of sub-areas. The value is determined as an average of the data for the sub-set of points corresponding to each of the number of sub-areas. The resulting parameter values then may be used to perform an aerodynamic heating analysis.
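A minimal sketch of the sub-area averaging step the patent describes, grouping surface points into sub-areas and averaging the parameter within each, using hypothetical point data and rectangular sub-areas:

```python
# Hypothetical CFD surface samples: (x, y, parameter value), e.g. a heating rate.
points = [(0.1, 0.2, 5.0), (0.3, 0.4, 7.0), (1.2, 0.1, 9.0), (1.8, 0.9, 11.0)]

# Sub-areas as axis-aligned rectangles: (xmin, xmax, ymin, ymax).
sub_areas = {"panel_A": (0.0, 1.0, 0.0, 1.0),
             "panel_B": (1.0, 2.0, 0.0, 1.0)}

def sub_area_averages(points, sub_areas):
    """For each sub-area, average the parameter over the points inside it."""
    result = {}
    for name, (x0, x1, y0, y1) in sub_areas.items():
        inside = [v for (x, y, v) in points if x0 <= x < x1 and y0 <= y < y1]
        result[name] = sum(inside) / len(inside) if inside else None
    return result
```

Here `sub_area_averages` returns 6.0 for the first panel and 10.0 for the second; these per-sub-area values are what would feed a downstream aerodynamic heating analysis.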
Data Point Averaging for Computational Fluid Dynamics Data
NASA Technical Reports Server (NTRS)
Norman, Jr., David (Inventor)
2016-01-01
A system and method for generating fluid flow parameter data for use in aerodynamic heating analysis. Computational fluid dynamics data is generated for a number of points in an area on a surface to be analyzed. Sub-areas corresponding to areas of the surface for which an aerodynamic heating analysis is to be performed are identified. A computer system automatically determines a sub-set of the number of points corresponding to each of the number of sub-areas and determines a value for each of the number of sub-areas using the data for the sub-set of points corresponding to each of the number of sub-areas. The value is determined as an average of the data for the sub-set of points corresponding to each of the number of sub-areas. The resulting parameter values then may be used to perform an aerodynamic heating analysis.
NASA Astrophysics Data System (ADS)
Ding, F.; Theobald, M.; Vollmer, B.; Savtchenko, A. K.; Hearty, T. J.; Esfandiari, A. E.
2012-12-01
Producing timely and accurate water forecasts and information is the mission of the National Weather Service River Forecast Centers (NWS RFCs) of the National Oceanic and Atmospheric Administration (NOAA). The river forecast system in RFCs requires average surface temperature in the fixed 6-hour periods 0000-0600, 0600-1200, 1200-1800, and 1800-0000 UTC. The current logic of RFC temperature forecasting relies on the ingest of point values of daytime maximum and nighttime minimum temperature, while the mean temperature for each 6-hour period is estimated from a weighted average of the daytime maximum and nighttime minimum. The Atmospheric Infrared Sounder (AIRS) is the first high-spectral-resolution infrared sounder on board the Aqua satellite, which was launched in May 2002 and follows a Sun-synchronous polar orbit; it is designed to produce high-resolution atmospheric profiles and surface atmospheric parameters. As Aqua crosses the equator at about 1330 and 0130 local time, the AIRS-retrieved surface temperature may represent the daytime maximum and nighttime minimum values. Compared to point observations from surface weather stations, which are often sparse over less-populated areas and unevenly distributed, a satellite may obtain a better area-averaged observation. This study assesses the potential of using AIRS-retrieved surface temperature to forecast 6-hour average temperature for NWS RFCs. The California Nevada RFC was selected due to the poor coverage of surface observations in its mountainous region and spring snowmelt. The study focuses on the March to May spring season, when water from snowpack melting often plays an important role in flooding. AIRS-retrieved temperatures and a surface weather station data set will be used to derive statistical weighting coefficients for the 6-hour average temperature forecast. The resulting forecast biases and errors will be the main indicators of the potential usage. All study results will be presented in the meeting.
Effect of clothing weight on body weight
Technology Transfer Automated Retrieval System (TEKTRAN)
Background: In clinical settings, it is common to measure weight of clothed patients and estimate a correction for the weight of clothing, but we can find no papers in the medical literature regarding the variability in clothing weight with weather, season, and gender. Methods: Fifty adults (35 wom...
Average-cost based robust structural control
NASA Technical Reports Server (NTRS)
Hagood, Nesbitt W.
1993-01-01
A method is presented for the synthesis of robust controllers for linear time invariant structural systems with parameterized uncertainty. The method involves minimizing quantities related to the quadratic cost (H2-norm) averaged over a set of systems described by real parameters such as natural frequencies and modal residues. Bounded average cost is shown to imply stability over the set of systems. Approximations for the exact average are derived and proposed as cost functionals. The properties of these approximate average cost functionals are established. The exact average and approximate average cost functionals are used to derive dynamic controllers which can provide stability robustness. The robustness properties of these controllers are demonstrated in illustrative numerical examples and tested in a simple SISO experiment on the MIT multi-point alignment testbed.
Statistics of time averaged atmospheric scintillation
Stroud, P.
1994-02-01
A formulation has been constructed to recover the statistics of the moving average of the scintillation Strehl from a discrete set of measurements. A program of airborne atmospheric propagation measurements was analyzed to find the correlation function of the relative intensity over displaced propagation paths. The variance in continuous moving averages of the relative intensity was then found in terms of the correlation functions. An empirical formulation of the variance of the continuous moving average of the scintillation Strehl has been constructed. The resulting characterization of the variance of the finite time averaged Strehl ratios is being used to assess the performance of an airborne laser system.
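The key relation, the variance of a moving average expressed through the correlation (autocovariance) function, has a simple discrete analogue. The sketch below uses that standard formula, not the paper's empirical airborne formulation:

```python
def moving_average_variance(corr, T):
    """Variance of a T-sample moving average of a stationary process, given its
    autocovariance sequence corr[tau] for tau = 0..T-1:
        Var = (1/T) * (C(0) + 2 * sum_{tau=1}^{T-1} (1 - tau/T) * C(tau))
    """
    var = corr[0]
    for tau in range(1, T):
        var += 2 * (1 - tau / T) * corr[tau]
    return var / T

# White noise (C(0)=1, C(tau)=0): recovers the familiar 1/T variance reduction.
white = [1.0] + [0.0] * 15
# Positively correlated intensity fluctuations average down more slowly.
correlated = [0.5 ** tau for tau in range(16)]
```

The comparison between the two cases is the point of the formulation: correlation over displaced propagation paths determines how much a finite-time-averaged Strehl ratio still fluctuates.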
Cell averaging Chebyshev methods for hyperbolic problems
NASA Technical Reports Server (NTRS)
Wei, Cai; Gottlieb, David; Harten, Ami
1990-01-01
A cell averaging method for the Chebyshev approximations of first order hyperbolic equations in conservation form is described. Formulas are presented for transforming between pointwise data at the collocation points and cell averaged quantities, and vice-versa. This step, trivial for the finite difference and Fourier methods, is nontrivial for the global polynomials used in spectral methods. The cell averaging methods presented are proven stable for linear scalar hyperbolic equations and present numerical simulations of shock-density wave interaction using the new cell averaging Chebyshev methods.
Somatotypes of weight lifters.
Orvanová, E
1990-01-01
The present paper reviews published studies on the body shape of weight lifters. The differences between the somatotype ratings of weight lifters studied using the Sheldon and the Heath-Carter methods, and the differences between performance levels and age groups of weight lifters are discussed. The differences in mean somatoplots among the weight lifters studied as a whole group, weight lifters divided into two, three or four groups according to body weight, and weight lifters considered according to the official weight classes, are assessed. Weight lifters in the lighter weight classes are found to be ectomorphic or balanced mesomorphs, while those in the heavier weight classes tend to be endomorphic mesomorphs. Ectomorphy decreases, whereas mesomorphy and endomorphy increase with weight class. When three age groups of weight lifters were compared within each weight class, the same pattern of differences between ages occurs. The younger lifters in each weight class have higher endomorphy and lower mesomorphy than the senior lifters. Ectomorphy is higher in the younger lifters below the weight class of 82.5 kg. Since significant differences in all three somatotype components between 10 weight classes of weight lifters and also within three age groups were noted, it will be necessary in future studies to consider the somatotypes of weight lifters according to the official weight classes.
Informed Test Component Weighting.
ERIC Educational Resources Information Center
Rudner, Lawrence M.
2001-01-01
Identifies and evaluates alternative methods for weighting tests. Presents formulas for composite reliability and validity as a function of component weights and suggests a rational process that identifies and considers trade-offs in determining weights. Discusses drawbacks to implicit weighting and explicit weighting and the difficulty of…
The average distances in random graphs with given expected degrees
NASA Astrophysics Data System (ADS)
Chung, Fan; Lu, Linyuan
2002-12-01
Random graph theory is used to examine the "small-world phenomenon"; any two strangers are connected through a short chain of mutual acquaintances. We will show that for certain families of random graphs with given expected degrees the average distance is almost surely of order log n/log d̃, where d̃ is the weighted average of the sum of squares of the expected degrees. Of particular interest are power law random graphs in which the number of vertices of degree k is proportional to 1/k^β for some fixed exponent β. For the case of β > 3, we prove that the average distance of the power law graphs is almost surely of order log n/log d̃. However, power law random graphs with exponent 2 < β < 3 have average distance almost surely of order log log n, but have diameter of order log n (provided some mild constraints on the average distance and maximum degree hold). In particular, these graphs contain a dense subgraph, which we call the core, having n^(c/log log n) vertices. Almost all vertices are within distance log log n of the core although there are vertices at distance log n from the core.
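An illustrative computation of the quantities in the theorem, the second-order average degree d̃ = Σdᵢ²/Σdᵢ and the resulting log n/log d̃ distance estimate, with hypothetical degrees:

```python
import math

def second_order_average_degree(degrees):
    """The weighted average the theorem refers to: d~ = sum(d_i^2) / sum(d_i)."""
    return sum(d * d for d in degrees) / sum(degrees)

def predicted_average_distance(degrees):
    """Order-of-magnitude estimate log n / log d~ for the average distance."""
    n = len(degrees)
    return math.log(n) / math.log(second_order_average_degree(degrees))

# Hypothetical example: 10,000 vertices, each with expected degree 10,
# gives log(10^4) / log(10) = 4 as the order of the average distance.
degrees = [10] * 10_000
```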
Effects of spatial variability and scale on areal -average evapotranspiration
NASA Technical Reports Server (NTRS)
Famiglietti, J. S.; Wood, Eric F.
1993-01-01
This paper explores the effect of spatial variability and scale on areally-averaged evapotranspiration. A spatially-distributed water and energy balance model is employed to determine the effect of explicit patterns of model parameters and atmospheric forcing on modeled areally-averaged evapotranspiration over a range of increasing spatial scales. The analysis is performed from the local scale to the catchment scale. The study area is King's Creek catchment, an 11.7 sq km watershed located on the native tallgrass prairie of Kansas. The dominant controls on the scaling behavior of catchment-average evapotranspiration are investigated by simulation, as is the existence of a threshold scale for evapotranspiration modeling, with implications for explicit versus statistical representation of important process controls. It appears that some of our findings are fairly general, and will therefore provide a framework for understanding the scaling behavior of areally-averaged evapotranspiration at the catchment and larger scales.
5 CFR 591.210 - What are weights?
Code of Federal Regulations, 2011 CFR
2011-01-01
... What are weights? (a) A weight is the relative importance or share of a subpart of a group compared... compared with the whole pie. (b) OPM uses two kinds of weights: Consumer expenditure weights and employment.... The employment weight is the relative employment population of the survey area compared with...
40 CFR 80.67 - Compliance on average.
Code of Federal Regulations, 2011 CFR
2011-07-01
... gasoline produced or imported during the period January 1, 2006, through May 5, 2006 or the volume and...) Compliance survey required in order to meet standards on average. (1) Any refiner or importer that complies... petition to include: (1) The identification of the refiner and refinery, or importer, the covered area,...
40 CFR 80.67 - Compliance on average.
Code of Federal Regulations, 2012 CFR
2012-07-01
... gasoline produced or imported during the period January 1, 2006, through May 5, 2006 or the volume and...) Compliance survey required in order to meet standards on average. (1) Any refiner or importer that complies... petition to include: (1) The identification of the refiner and refinery, or importer, the covered area,...
40 CFR 80.67 - Compliance on average.
Code of Federal Regulations, 2013 CFR
2013-07-01
... gasoline produced or imported during the period January 1, 2006, through May 5, 2006 or the volume and...) Compliance survey required in order to meet standards on average. (1) Any refiner or importer that complies... petition to include: (1) The identification of the refiner and refinery, or importer, the covered area,...
40 CFR 80.67 - Compliance on average.
Code of Federal Regulations, 2014 CFR
2014-07-01
... gasoline produced or imported during the period January 1, 2006, through May 5, 2006 or the volume and...) Compliance survey required in order to meet standards on average. (1) Any refiner or importer that complies... petition to include: (1) The identification of the refiner and refinery, or importer, the covered area,...
40 CFR 80.67 - Compliance on average.
Code of Federal Regulations, 2010 CFR
2010-07-01
... gasoline produced or imported during the period January 1, 2006, through May 5, 2006 or the volume and...) Compliance survey required in order to meet standards on average. (1) Any refiner or importer that complies... petition to include: (1) The identification of the refiner and refinery, or importer, the covered area,...
Human-experienced temperature changes exceed global average climate changes for all income groups
NASA Astrophysics Data System (ADS)
Hsiang, S. M.; Parshall, L.
2009-12-01
Global climate change alters local climates everywhere. Many climate change impacts, such as those affecting health, agriculture and labor productivity, depend on these local climatic changes, not global mean change. Traditional, spatially averaged climate change estimates are strongly influenced by the response of icecaps and oceans, providing limited information on human-experienced climatic changes. If used improperly by decision-makers, these estimates distort estimated costs of climate change. We overlay the IPCC’s 20 GCM simulations on the global population distribution to estimate local climatic changes experienced by the world population in the 21st century. The A1B scenario leads to a well-known rise in global average surface temperature of +2.0°C between the periods 2011-2030 and 2080-2099. Projected on the global population distribution in 2000, the median human will experience an annual average rise of +2.3°C (4.1°F) and the average human will experience a rise of +2.4°C (4.3°F). Less than 1% of the population will experience changes smaller than +1.0°C (1.8°F), while 25% and 10% of the population will experience changes greater than +2.9°C (5.2°F) and +3.5°C (6.2°F) respectively. 67% of the world population experiences temperature changes greater than the area-weighted average change of +2.0°C (3.6°F). Using two approaches to characterize the spatial distribution of income, we show that the wealthiest, middle and poorest thirds of the global population experience similar changes, with no group dominating the global average. Calculations for precipitation indicate that there is little change in average precipitation, but redistributions of precipitation occur in all income groups. These results suggest that economists and policy-makers using spatially averaged estimates of climate change to approximate local changes will systematically and significantly underestimate the impacts of climate change on the 21st century population.
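The contrast between area-weighted and population-weighted average warming can be sketched with a toy grid (all weights and ΔT values are hypothetical, chosen only to illustrate the weighting effect):

```python
# Toy grid cells: (area weight, population, local warming in deg C).
cells = [(0.5, 100, 1.2),    # large, sparsely populated (ocean / high latitude)
         (0.3, 4000, 2.4),   # densely populated mid-latitude land
         (0.2, 2000, 2.8)]   # densely populated tropical land

area_avg = sum(a * dt for a, _, dt in cells) / sum(a for a, _, _ in cells)
pop_avg = sum(p * dt for _, p, dt in cells) / sum(p for _, p, _ in cells)
# pop_avg > area_avg whenever population clusters where local warming is larger,
# which is the mechanism behind the human-experienced changes reported above.
```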
Dynamic Multiscale Averaging (DMA) of Turbulent Flow
Richard W. Johnson
2012-09-01
A new approach called dynamic multiscale averaging (DMA) for computing the effects of turbulent flow is described. The new method encompasses multiple applications of temporal and spatial averaging, that is, multiscale operations. Initially, a direct numerical simulation (DNS) is performed for a relatively short time; it is envisioned that this short time should be long enough to capture several fluctuating time periods of the smallest scales. The flow field variables are subject to running time averaging during the DNS. After the relatively short time, the time-averaged variables are volume averaged onto a coarser grid. Both time and volume averaging of the describing equations generate correlations in the averaged equations. These correlations are computed from the flow field and added as source terms to the computation on the next coarser mesh. They represent coupling between the two adjacent scales. Since they are computed directly from first principles, there is no modeling involved. However, there is approximation involved in the coupling correlations as the flow field has been computed for only a relatively short time. After the time and spatial averaging operations are applied at a given stage, new computations are performed on the next coarser mesh using a larger time step. The process continues until the coarsest scale needed is reached. New correlations are created for each averaging procedure. The number of averaging operations needed is expected to be problem dependent. The new DMA approach is applied to a relatively low Reynolds number flow in a square duct segment. Time-averaged stream-wise velocity and vorticity contours from the DMA approach appear to be very similar to a full DNS for a similar flow reported in the literature. Expected symmetry for the final results is produced for the DMA method. The results obtained indicate that DMA holds significant potential in being able to accurately compute turbulent flow without modeling for practical
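Two building blocks of the approach, a running time average updated during the simulation and a volume average onto a coarser grid, can be sketched in one dimension. This is a simplified illustration under stated assumptions, not the full DMA method with its coupling correlations:

```python
def running_time_average(old_avg, new_sample, n_prior):
    """Update a running time average that already incorporates n_prior samples."""
    return old_avg + (new_sample - old_avg) / (n_prior + 1)

def volume_average_to_coarse(fine, factor):
    """Average a 1-D fine-grid field onto a grid coarser by `factor`."""
    return [sum(fine[i:i + factor]) / factor
            for i in range(0, len(fine), factor)]

# A fluctuating fine-grid field (0,1,2,3 repeating) volume-averaged 4x coarser:
fine = [float(i % 4) for i in range(16)]
coarse = volume_average_to_coarse(fine, 4)   # every coarse cell averages 0..3
```

In the full method, the correlations generated by these averaging operations would be computed from the fine-scale field and carried to the coarser computation as source terms.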
40 CFR 63.846 - Emission averaging.
Code of Federal Regulations, 2011 CFR
2011-07-01
... operator may average TF emissions from potlines and demonstrate compliance with the limits in Table 1 of... operator also may average POM emissions from potlines and demonstrate compliance with the limits in Table 2... limit in Table 1 of this subpart (for TF emissions) and/or Table 2 of this subpart (for POM...
40 CFR 63.846 - Emission averaging.
Code of Federal Regulations, 2013 CFR
2013-07-01
... operator may average TF emissions from potlines and demonstrate compliance with the limits in Table 1 of... operator also may average POM emissions from potlines and demonstrate compliance with the limits in Table 2... limit in Table 1 of this subpart (for TF emissions) and/or Table 2 of this subpart (for POM...
40 CFR 63.846 - Emission averaging.
Code of Federal Regulations, 2010 CFR
2010-07-01
... operator may average TF emissions from potlines and demonstrate compliance with the limits in Table 1 of... operator also may average POM emissions from potlines and demonstrate compliance with the limits in Table 2... limit in Table 1 of this subpart (for TF emissions) and/or Table 2 of this subpart (for POM...
Whatever Happened to the Average Student?
ERIC Educational Resources Information Center
Krause, Tom
2005-01-01
Mandated state testing, college entrance exams and their perceived need for higher and higher grade point averages have raised the anxiety levels felt by many of the average students. Too much focus is placed on state test scores and college entrance standards with not enough focus on the true level of the students. The author contends that…
40 CFR 86.449 - Averaging provisions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... class or subclass: Credit = (Average Standard − Emission Level) × (Total Annual Production) × (Useful Life) Deficit = (Emission Level − Average Standard) × (Total Annual Production) × (Useful Life) (l....000 Where: FELi = The FEL to which the engine family is certified. ULi = The useful life of the...
Determinants of College Grade Point Averages
ERIC Educational Resources Information Center
Bailey, Paul Dean
2012-01-01
Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade point averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even from continued enrollment at a university. However, GPAs are determined not only by student ability but also by the…
Average Transmission Probability of a Random Stack
ERIC Educational Resources Information Center
Lu, Yin; Miniatura, Christian; Englert, Berthold-Georg
2010-01-01
The transmission through a stack of identical slabs that are separated by gaps with random widths is usually treated by calculating the average of the logarithm of the transmission probability. We show how to calculate the average of the transmission probability itself with the aid of a recurrence relation and derive analytical upper and lower…
Analogue Divider by Averaging a Triangular Wave
NASA Astrophysics Data System (ADS)
Selvam, Krishnagiri Chinnathambi
2017-03-01
A new analogue divider circuit that works by averaging a triangular wave using operational amplifiers is explained in this paper. The reference triangular waveform is shifted from the zero-voltage level up towards the positive power supply voltage level. Its positive portion is obtained by a positive rectifier and its average value by a low-pass filter. The same triangular waveform is shifted from the zero-voltage level down towards the negative power supply voltage level. Its negative portion is obtained by a negative rectifier and its average value by another low-pass filter. Both averaged voltages are combined in a summing amplifier, and the summed voltage is applied to the negative input of an op-amp. This op-amp is configured in a closed loop with negative feedback, and its output is the divider output.
Sivarasu, Sudesh; Mathew, Lazar
2009-01-01
Artificial knees have been used in total knee arthroplasty for more than 6 decades. The major drawback of the medical implant is its weight, with the average weight of an artificial knee implant made of stainless steel and ultra-high-molecular-weight polyethylene being approximately 450 g. The weight of the natural knee removed during arthroplasty is < 70 g. Thus, the increase in weight is approximately 600 percent, which causes muscle fatigue and decreased knee functionality. Our research aimed to develop an artificial knee implant in which the design is modified and corrected to make the implant weigh less. The implant weight was reduced by drilling holes in thicker areas of the implant. The radius of the drill holes and their length inside the implant were controlled by conducting simulation studies using finite element modelling (FEM) techniques. The drilled holes reduced the implant weight to approximately 25 g. Performance was validated by loading the implants to 2000 N, approximately 15 times the average body weight, and showed satisfactory results in weight reduction and performance of the new implant models.
Effect of molecular weight on polyphenylquinoxaline properties
NASA Technical Reports Server (NTRS)
Jensen, Brian J.
1991-01-01
A series of polyphenyl quinoxalines with different molecular weight and end-groups were prepared by varying monomer stoichiometry. Thus, 4,4'-oxydibenzil and 3,3'-diaminobenzidine were reacted in a 50/50 mixture of m-cresol and xylenes. Reaction concentration, temperature, and stir rate were studied and found to have an effect on polymer properties. Number and weight average molecular weights were determined and correlated well with viscosity data. Glass transition temperatures were determined and found to vary with molecular weight and end-groups. Mechanical properties of films from polymers with different molecular weights were essentially identical at room temperature but showed significant differences at 232 C. Diamine terminated polymers were found to be much less thermooxidatively stable than benzil terminated polymers when aged at 316 C even though dynamic thermogravimetric analysis revealed only slight differences. Lower molecular weight polymers exhibited better processability than higher molecular weight polymers.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-14
... captured. To illustrate this point, some draw on the ``speeding ticket'' analogy, whereby a driver caught exceeding the speed limit could nevertheless avoid the fine by submitting evidence that he or she...
Environmental effects on fruit ripening and average fruit weight for three peach cultivars
Technology Transfer Automated Retrieval System (TEKTRAN)
Three peach cultivars, ‘Crimson Lady’ (early), ‘Redhaven’ (mid-season) and ‘Cresthaven’ (late), were planted at twelve locations within the USA in 2009. All trees were grafted on ‘Lovell’ rootstock and came from the same nursery. Five trees of each cultivar were planted at a spacing of 6m by 5m at e...
Cernicchiaro, N; Renter, D G; Xiang, S; White, B J; Bello, N M
2013-06-01
Variability in average daily gain (ADG) of feedlot cattle can affect profits, thus making overall returns more unstable. Hence, knowledge of the factors that contribute to heterogeneity of variances in animal performance can help feedlot managers evaluate risks and minimize profit volatility when making managerial and economic decisions in commercial feedlots. The objectives of the present study were to evaluate heteroskedasticity, defined as heterogeneity of variances, in ADG of cohorts of commercial feedlot cattle, and to identify cattle demographic factors at feedlot arrival as potential sources of variance heterogeneity, accounting for cohort- and feedlot-level information in the data structure. An operational dataset compiled from 24,050 cohorts from 25 U.S. commercial feedlots in 2005 and 2006 was used for this study. Inference was based on a hierarchical Bayesian model implemented with Markov chain Monte Carlo, whereby cohorts were modeled at the residual level and feedlot-year clusters were modeled as random effects. Forward model selection based on the deviance information criterion was used to screen potentially important explanatory variables for heteroskedasticity at cohort- and feedlot-year levels. The Bayesian modeling framework was preferred as it naturally accommodates the inherently hierarchical structure of feedlot data, whereby cohorts are nested within feedlot-year clusters. Evidence for heterogeneity of variance components of ADG was substantial and primarily concentrated at the cohort level. Feedlot-year specific effects were, by far, the greatest contributors to ADG heteroskedasticity among cohorts, with an estimated ∼12-fold change in dispersion between the most and least extreme feedlot-year clusters. In addition, identifiable demographic factors associated with greater heterogeneity of cohort-level variance included smaller cohort sizes, fewer days on feed, and greater arrival body weight (BW), as well as feedlot arrival during summer months.
These results support that heterogeneity of variances in ADG is prevalent in feedlot performance and indicate potential sources of heteroskedasticity. Further investigation of factors associated with heteroskedasticity in feedlot performance is warranted to increase consistency and uniformity in commercial beef cattle production and subsequent profitability.
ERIC Educational Resources Information Center
Jenny, Mirjam A.; Rieskamp, Jörg; Nilsson, Håkan
2014-01-01
Judging whether multiple events will co-occur is an important aspect of everyday decision making. The underlying probabilities of occurrence are usually unknown and have to be inferred from experience. Using a rigorous, quantitative model comparison, we investigate how people judge the conjunctive probabilities of multiple events to co-occur. In 2…
Enhancing Trust in the Smart Grid by Applying a Modified Exponentially Weighted Averages Algorithm
2012-06-01
...and Transportation; 4. Banking and Finance; 5. Transportation; 6. Water Supply Systems; 7. Emergency Services; 8. Continuity of Government. E.O. 13010...broadened the list of critical infrastructure sectors to include the electrical power system by name, as identified in Table 2.1. In 1998, President Clinton...economy and government. They include, but are not limited to, telecommunications, energy, banking and finance, transportation, water systems and...
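The report's modified exponentially weighted averages algorithm is not reproduced in this excerpt. For reference, a standard EWMA update, s_t = αx_t + (1 − α)s_{t−1}, which such trust-scoring schemes typically build on, looks like this (a minimal sketch, not the thesis's modified version):

```python
def ewma(values, alpha):
    """Exponentially weighted moving average.

    Each new value is blended with the running average:
    s_t = alpha * x_t + (1 - alpha) * s_{t-1}, seeded with the first value.
    """
    s = values[0]
    out = [s]
    for x in values[1:]:
        s = alpha * x + (1 - alpha) * s
        out.append(s)
    return out
```

A larger α weights recent observations more heavily, making the trust score react faster to new behavior at the cost of more noise.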
An income-weighted international average for comparative analysis of health expenditures.
Getzen, T E; Poullier, J P
1991-01-01
Data from 17 countries across 28 years are used to estimate an international health expenditure function based on real per capita GNP. Actual and expected spending levels are compared for 24 countries. Between 1960 and 1987, it has been rare for health expenditure in any country to be more than +/- 20 per cent from the projected value. The norm is for spending to rise at 1.5 times the growth rate of GDP. Two countries appear to display significant anomalies. Spending in the United Kingdom is consistently 15-25 per cent below normal for all years, and Danish expenditure has declined from 7 to 6 per cent of GDP since 1975.
Factors influencing weight gain after renal transplantation.
Johnson, C P; Gallagher-Lepak, S; Zhu, Y R; Porth, C; Kelber, S; Roza, A M; Adams, M B
1993-10-01
Weight gain following renal transplantation occurs frequently but has not been investigated quantitatively. A retrospective chart review of 115 adult renal transplant recipients was used to describe patterns of weight gain during the first 5 years after transplantation. Only 23 subjects (21%) were overweight before their transplant. Sixty-six subjects (57%) experienced a weight gain of greater than or equal to 10%, and 49 subjects (43%) were overweight according to Metropolitan relative weight criteria at 1 year after transplantation. There was an inverse correlation between advancing age and weight gain, with the youngest patients (18-29 years) having a 13.3% weight gain and the oldest patients (age greater than 50 years) having the lowest gain of 8.3% at 1 year (P = 0.047). Black recipients experienced a greater weight gain than whites during the first posttransplant year (14.6% vs. 9.0%; P = 0.043), and maintained or increased this difference over the 5-year period. Men and women experienced comparable weight gain during the first year (9.5% vs. 12.1%), but women continued to gain weight throughout the 5-year study (21.0% total weight gain). The men remained stable after the first year (10.8% total weight gain). Recipients who experienced at least a 10% weight gain also increased their serum cholesterol (mean 261 vs. 219) and triglyceride (mean 277 vs. 159) levels significantly, whereas those without weight gain did not. Weight gain did not correlate with cumulative steroid dose, donor source (living-related versus cadaver), rejection history, pre-existing obesity, the number of months on dialysis before transplantation, or posttransplant renal function. Posttransplant weight gain is related mainly to demographic factors, not to treatment factors associated with the transplant. The average weight gain during the first year after renal transplantation is approximately 10%. This increased weight, coupled with changes in lipid metabolism, may be significant in
Ginsberg, Howard; Lee, Chong; Volson, Barry; Dyer, Megan C.; LeBrun, Roger A.
2017-01-01
The relationship between engorgement weight of female Ixodes scapularis Say and characteristics of offspring was studied using field-collected females fed on rabbits in the laboratory. The number of eggs laid was positively related to maternal engorgement weight in one trial, and larval size (estimated by scutal area) was positively related to maternal engorgement weight in the other. These results suggest a trade-off in number of eggs produced versus average size of offspring, possibly determined during late engorgement. The adults for the two trials were collected from different sites in southern Rhode Island and in different seasons (the fall adults were newly emerged, while the spring adults had presumably lived through the winter), so it is not clear whether these results reflect genetic differences or subtle environmental differences between trials. Percent egg hatch and average fat content of larvae were not related to female engorgement weight. We present a modified method to measure lipid content of pooled larval ticks.
Light propagation in the averaged universe
Bagheri, Samae; Schwarz, Dominik J. E-mail: dschwarz@physik.uni-bielefeld.de
2014-10-01
Cosmic structures determine how light propagates through the Universe and consequently must be taken into account in the interpretation of observations. In the standard cosmological model at the largest scales, such structures are either ignored or treated as small perturbations to an isotropic and homogeneous Universe. This isotropic and homogeneous model is commonly assumed to emerge from some averaging process at the largest scales. We assume that there exists an averaging procedure that preserves the causal structure of space-time. Based on that assumption, we study the effects of averaging the geometry of space-time and derive an averaged version of the null geodesic equation of motion. For the averaged geometry we then assume a flat Friedmann-Lemaître (FL) model and find that light propagation in this averaged FL model is not given by null geodesics of that model, but rather by a modified light propagation equation that contains an effective Hubble expansion rate, which differs from the Hubble rate of the averaged space-time.
NASA Astrophysics Data System (ADS)
Arsenault, Richard; Gatien, Philippe; Renaud, Benoit; Brissette, François; Martel, Jean-Luc
2015-10-01
This study aims to test whether a weighted combination of several hydrological models can simulate flows more accurately than the models taken individually. In addition, the project attempts to identify the most efficient model averaging method and the optimal number of models to include in the weighting scheme. To address the first objective, streamflow was simulated using four lumped hydrological models (HSAMI, HMETS, MOHYSE and GR4J-6), each of which was calibrated with three different objective functions on 429 watersheds. The resulting 12 hydrographs (4 models × 3 metrics) were weighted and combined with the help of 9 averaging methods: the simple arithmetic mean (SAM), Akaike information criterion averaging (AICA), Bates-Granger averaging (BGA), Bayes information criterion averaging (BICA), Bayesian model averaging (BMA), Granger-Ramanathan averaging variants A, B and C (GRA, GRB and GRC), and averaging by SCE-UA optimization (SCA). The same weights were then applied to the hydrographs in validation mode, and the Nash-Sutcliffe efficiency metric was measured between the averaged and observed hydrographs. Statistical analyses were performed to compare the accuracy of the weighted methods to that of the individual models. A Kruskal-Wallis test and a multi-objective optimization algorithm were then used to identify the most efficient averaging method and the optimal number of models to integrate. Results suggest that the GRA, GRB, GRC and SCA methods perform better than the individual members. Model averages from these four methods were superior to the best of the individual members in 76% of the cases. Optimal combinations on all watersheds included at least one of each of the four hydrological models. None of the optimal combinations included all 12 members of the ensemble. Granger-Ramanathan averaging variant C (GRC) is recommended as the best compromise between accuracy, speed of execution, and simplicity.
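Granger-Ramanathan variant A is commonly described as an unconstrained least-squares fit of the observed hydrograph on the member simulations. The sketch below assumes that formulation (the study's exact variants may differ in intercept and constraints) and includes the Nash-Sutcliffe efficiency used for evaluation; the toy data and function names are ours:

```python
import numpy as np

def gra_weights(simulations, observed):
    """Granger-Ramanathan variant A: unconstrained least-squares weights.

    simulations: (n_timesteps, n_models) matrix of member hydrographs
    observed:    (n_timesteps,) observed flows
    """
    w, *_ = np.linalg.lstsq(simulations, observed, rcond=None)
    return w

def nse(sim, obs):
    """Nash-Sutcliffe efficiency of a simulated hydrograph (1 is perfect)."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Toy check: the observed flow is an exact linear blend of two member models,
# so the recovered weights should be (0.7, 0.3).
t = np.linspace(0, 10, 200)
members = np.column_stack([np.sin(t) + 1.5, np.cos(t) + 1.5])
obs = 0.7 * members[:, 0] + 0.3 * members[:, 1]
w = gra_weights(members, obs)
blend = members @ w
```

In practice the weights are fitted on a calibration period and then held fixed for validation, as the study describes.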
Average shape of transport-limited aggregates.
Davidovitch, Benny; Choi, Jaehyuk; Bazant, Martin Z
2005-08-12
We study the relation between stochastic and continuous transport-limited growth models. We derive a nonlinear integro-differential equation for the average shape of stochastic aggregates, whose mean-field approximation is the corresponding continuous equation. Focusing on the advection-diffusion-limited aggregation (ADLA) model, we show that the average shape of the stochastic growth is similar, but not identical, to the corresponding continuous dynamics. Similar results should apply to DLA, thus explaining the known discrepancies between average DLA shapes and viscous fingers in a channel geometry.
Cosmic inhomogeneities and averaged cosmological dynamics.
Paranjape, Aseem; Singh, T P
2008-10-31
If general relativity (GR) describes the expansion of the Universe, the observed cosmic acceleration implies the existence of a "dark energy." However, while the Universe is on average homogeneous on large scales, it is inhomogeneous on smaller scales. While GR governs the dynamics of the inhomogeneous Universe, the averaged homogeneous Universe obeys modified Einstein equations. Can such modifications alone explain the acceleration? For a simple generic model with realistic initial conditions, we show the answer to be "no." Averaging effects negligibly influence the cosmological dynamics.
Average-passage flow model development
NASA Technical Reports Server (NTRS)
Adamczyk, John J.; Celestina, Mark L.; Beach, Tim A.; Kirtley, Kevin; Barnett, Mark
1989-01-01
A 3-D model was developed for simulating multistage turbomachinery flows using supercomputers. This average-passage flow model described the time-averaged flow field within a typical passage of a bladed wheel within a multistage configuration. To date, a number of inviscid simulations have been executed to assess the resolution capabilities of the model. Recently, the viscous terms associated with the average-passage model were incorporated into the inviscid computer code along with an algebraic turbulence model. A simulation of a stage-and-one-half, low-speed turbine was executed. The results of this simulation, including a comparison with experimental data, are discussed.
Relationship of childhood weight status to morbidity in adults.
Abraham, Sidney; Collins, Gretchen; Nordsieck, Marie
2016-08-01
A cohort of white males who had attended elementary schools in Hagerstown, Md., between 1923 and 1928, and whose height-weight records for those years were available, was examined during 1961-63. A study of their childhood relative weight at ages 9-13, and of their adult relative weight 35-40 years later, was made in relation to selected physiological variables and diagnosed morbidity. Essential findings were as follows: Childhood relative weight at ages 9-13 had no significant relationship to adult levels of fasting blood sugar, serum cholesterol, beta-lipoprotein, or blood pressure, or to cardiovascular renal disease. Childhood relative weight at ages 9-13 was significantly related to hypertensive vascular disease. The below average weight group experienced a higher prevalence than observed in either average or moderately overweight childhood groups. Approximately 30 percent of the below average weight children became average weight adults and 21 percent became overweight adults. Of the average weight children, approximately 40 percent became overweight adults. Overweight children tended to remain overweight as adults. Adult relative weight of the same cohort, viewed 35-40 years later, was significantly associated with fasting blood sugar, beta-lipoprotein, and systolic and diastolic blood pressure. Elevated levels of each of these variables occurred with greater frequency in the overweight child. Adult relative weight was significantly associated with hypertensive vascular disease and cardiovascular renal disease; the higher prevalence occurred in the overweight adults. The highest risk for hypertensive vascular and cardiovascular renal disease was associated with the persons who acquired their overweight status as adults. The higher prevalence of these diseases among the overweight adults was largely attributable to the adults who moved from a below average childhood weight category to an overweight adult group. The moderately or markedly overweight adults who was
Body weight relationships in early marriage. Weight relevance, weight comparisons, and weight talk.
Bove, Caron F; Sobal, Jeffery
2011-12-01
This investigation uncovered processes underlying the dynamics of body weight and body image among individuals involved in nascent heterosexual marital relationships in Upstate New York. In-depth, semi-structured qualitative interviews conducted with 34 informants, 20 women and 14 men, just prior to marriage and again one year later were used to explore continuity and change in cognitive, affective, and behavioral factors relating to body weight and body image at the time of marriage, an important transition in the life course. Three major conceptual themes operated in the process of developing and enacting informants' body weight relationships with their partner: weight relevance, weight comparisons, and weight talk. Weight relevance encompassed the changing significance of weight during early marriage and included attracting and capturing a mate, relaxing about weight, living healthily, and concentrating on weight. Weight comparisons between partners involved weight relativism, weight competition, weight envy, and weight role models. Weight talk employed pragmatic talk, active and passive reassurance, and complaining and critiquing criticism. Concepts emerging from this investigation may be useful in designing future studies of and approaches to managing body weight in adulthood.
Removing Cardiac Artefacts in Magnetoencephalography with Resampled Moving Average Subtraction
Ahlfors, Seppo P.; Hinrichs, Hermann
2016-01-01
Magnetoencephalography (MEG) signals are commonly contaminated by cardiac artefacts (CAs). Principal component analysis and independent component analysis have been widely used for removing CAs, but they typically require a complex procedure for the identification of CA-related components. We propose a simple and efficient method, resampled moving average subtraction (RMAS), to remove CAs from MEG data. Based on an electrocardiogram (ECG) channel, a template for each cardiac cycle was estimated by a weighted average of epochs of MEG data over consecutive cardiac cycles, combined with a resampling technique for accurate alignment of the time waveforms. The template was subtracted from the corresponding epoch of the MEG data. The resampling reduced distortions due to asynchrony between the cardiac cycle and the MEG sampling times. The RMAS method successfully suppressed CAs while preserving both event-related responses and high-frequency (>45 Hz) components in the MEG data. PMID:27503196
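Ignoring the resampling refinement, the template-averaging-and-subtraction core of the method can be sketched on a single synthetic channel. This is our own toy illustration with hypothetical function names; the published RMAS method additionally resamples epochs for precise alignment before averaging:

```python
import numpy as np

def subtract_cardiac_template(meg, r_peaks, half_width):
    """Average MEG epochs around ECG R-peaks into an artefact template,
    then subtract that template from each epoch (resampling step omitted)."""
    cleaned = meg.copy()
    valid = [p for p in r_peaks
             if p - half_width >= 0 and p + half_width <= len(meg)]
    epochs = np.array([meg[p - half_width:p + half_width] for p in valid])
    template = epochs.mean(axis=0)          # estimated cardiac artefact
    for p in valid:
        cleaned[p - half_width:p + half_width] -= template
    return cleaned

# Toy check: an identical artefact injected at every R-peak is removed exactly.
meg = np.zeros(120)
artefact = np.hanning(10)
peaks = [20, 60, 100]
for p in peaks:
    meg[p - 5:p + 5] += artefact
cleaned = subtract_cardiac_template(meg, peaks, 5)
```

With real data the artefact varies slightly from beat to beat, which is why the published method averages over many cycles and resamples for sub-sample alignment.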
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2011 CFR
2011-07-01
...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... compliance with the Acid Rain emission limitation for NOX under the plan only if the following...
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2010 CFR
2010-07-01
...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... compliance with the Acid Rain emission limitation for NOX under the plan only if the following...
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2012 CFR
2012-07-01
...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... compliance with the Acid Rain emission limitation for NOX under the plan only if the following...
Spacetime Average Density (SAD) cosmological measures
Page, Don N.
2014-11-01
The measure problem of cosmology is how to obtain normalized probabilities of observations from the quantum state of the universe. This is particularly a problem when eternal inflation leads to a universe of unbounded size so that there are apparently infinitely many realizations or occurrences of observations of each of many different kinds or types, making the ratios ambiguous. There is also the danger of domination by Boltzmann Brains. Here two new Spacetime Average Density (SAD) measures are proposed, Maximal Average Density (MAD) and Biased Average Density (BAD), for getting a finite number of observation occurrences by using properties of the Spacetime Average Density (SAD) of observation occurrences to restrict to finite regions of spacetimes that have a preferred beginning or bounce hypersurface. These measures avoid Boltzmann brain domination and appear to give results consistent with other observations that are problematic for other widely used measures, such as the observation of a positive cosmological constant.
Bimetal sensor averages temperature of nonuniform profile
NASA Technical Reports Server (NTRS)
Dittrich, R. T.
1968-01-01
Instrument that measures an average temperature across a nonuniform temperature profile under steady-state conditions has been developed. The principle of operation is an application of the expansion of a solid material caused by a change in temperature.
Rotational averaging of multiphoton absorption cross sections
NASA Astrophysics Data System (ADS)
Friese, Daniel H.; Beerepoot, Maarten T. P.; Ruud, Kenneth
2014-11-01
Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.
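For the simplest even-rank case, a rank-2 tensor, the rotational average can be checked numerically: averaging R A Rᵀ over random rotations converges to the isotropic tensor (tr A / 3) I. The sketch below is a brute-force illustration of that limiting case only, not the paper's analytical scheme for higher-rank tensors; the function names are ours:

```python
import numpy as np

def random_rotation(rng):
    """Draw a random 3x3 rotation matrix (Haar measure via QR decomposition)."""
    q, r = np.linalg.qr(rng.standard_normal((3, 3)))
    q *= np.sign(np.diag(r))          # sign fix for a uniform distribution
    if np.linalg.det(q) < 0:          # ensure a proper rotation (det = +1)
        q[:, 0] *= -1
    return q

def rotational_average_rank2(tensor, n_samples=5000, seed=1):
    """Monte Carlo rotational average of a rank-2 tensor; the analytic
    result is the isotropic tensor (tr A / 3) * I."""
    rng = np.random.default_rng(seed)
    acc = np.zeros((3, 3))
    for _ in range(n_samples):
        rot = random_rotation(rng)
        acc += rot @ tensor @ rot.T
    return acc / n_samples

# Toy check: diag(1, 2, 3) should average to (trace / 3) * I = 2 * I.
avg = rotational_average_rank2(np.diag([1.0, 2.0, 3.0]))
```

The paper's contribution is to replace such sampling with closed-form averages for tensors of arbitrary even rank, requiring only the number of photons as input.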
Polyline averaging using distance surfaces: A spatial hurricane climatology
NASA Astrophysics Data System (ADS)
Scheitlin, Kelsey N.; Mesev, Victor; Elsner, James B.
2013-03-01
The US Gulf states are frequently hit by hurricanes, causing widespread damage resulting in economic loss and occasional human fatalities. Current hurricane climatologies and predictive models frequently omit information on the spatial characteristics of hurricane movement—their linear tracks. We investigate the construction of a spatial hurricane climatology that condenses linear tracks to one-dimensional polylines. With the aid of distance surfaces, an average hurricane track is calculated by summing polylines as part of a grid-based algorithm. We demonstrate the procedure on a particularly vulnerable coastline around the city of Galveston in Texas, where the tracks of the closest storms to Galveston are also weighted by an inverse distance function. Track averaging is also applied as a means of interpolating possible paths of historical storms where records are sporadic observations, and sometimes anecdotal. We offer the average track as a convenient regional summary of expected hurricane movement. The average track, together with other hurricane attributes, also provides a means to assess the expected local vulnerability of property and environmental damage.
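A minimal version of the distance-surface idea: rasterise each track, build its distance surface, sum the surfaces, and read off the per-column minimisers as the average track. This sketch (our own, with hypothetical names) assumes tracks stored as one row index per grid column and omits the paper's inverse-distance weighting of storms near Galveston:

```python
import numpy as np

def track_distance_surface(track_rows, shape):
    """Distance surface for a track stored as one row index per grid column:
    each cell holds the Euclidean distance to the nearest track point."""
    rows, cols = np.indices(shape)
    surface = np.full(shape, np.inf)
    for col, row in enumerate(track_rows):
        surface = np.minimum(surface, np.hypot(rows - row, cols - col))
    return surface

def average_track(tracks, shape):
    """Per column, average the row indices minimising the summed distance
    surface -- a simplified stand-in for the paper's grid-based algorithm."""
    total = sum(track_distance_surface(t, shape) for t in tracks)
    return [np.flatnonzero(total[:, c] == total[:, c].min()).mean()
            for c in range(shape[1])]

# Toy check: two parallel straight tracks (rows 2 and 6) should average
# to the midline (row 4) in every column.
avg = average_track([[2] * 5, [6] * 5], (9, 5))
```

Weighting each storm's surface by inverse distance to a point of interest, as the paper does for Galveston, would simply scale each term in the sum.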
Monthly average polar sea-ice concentration
Schweitzer, Peter N.
1995-01-01
The data contained in this CD-ROM depict monthly averages of sea-ice concentration in the modern polar oceans. These averages were derived from the Scanning Multichannel Microwave Radiometer (SMMR) and Special Sensor Microwave/Imager (SSM/I) instruments aboard satellites of the U.S. Air Force Defense Meteorological Satellite Program from 1978 through 1992. The data are provided as 8-bit images using the Hierarchical Data Format (HDF) developed by the National Center for Supercomputing Applications.
Radial averages of astigmatic TEM images.
Fernando, K Vince
2008-10-01
The Contrast Transfer Function (CTF) of an image, which modulates images taken from a Transmission Electron Microscope (TEM), is usually determined from the radial average of the power spectrum of the image (Frank, J., Three-dimensional Electron Microscopy of Macromolecular Assemblies, Oxford University Press, Oxford, 2006). The CTF is primarily defined by the defocus. If the defocus estimate is accurate enough then it is possible to demodulate the image, which is popularly known as the CTF correction. However, it is known that the radial average is somewhat attenuated if the image is astigmatic (see Fernando, K.V., Fuller, S.D., 2007. Determination of astigmatism in TEM images. Journal of Structural Biology 157, 189-200) but this distortion due to astigmatism has not been fully studied or understood up to now. We have discovered the exact mathematical relationship between the radial averages of TEM images with and without astigmatism. This relationship is determined by a zeroth order Bessel function of the first kind and hence we can exactly quantify this distortion in the radial averages of signal and power spectra of astigmatic images. The argument to this Bessel function is similar to an aberration function (without the spherical aberration term) except that the defocus parameter is replaced by the differences of the defoci in the major and minor axes of astigmatism. The ill effects due to this Bessel function are twofold. Since the zeroth order Bessel function is a decaying oscillatory function, it introduces additional zeros to the radial average and it also attenuates the CTF signal in the radial averages. Using our analysis, it is possible to simulate the effects of astigmatism in radial averages by imposing Bessel functions on idealized radial averages of images which are not astigmatic. We validate our theory using astigmatic TEM images.
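The two quantities at the heart of the abstract above, the radial average of a 2D image and the zeroth-order Bessel function J0 that modulates it under astigmatism, can be sketched as follows. The integer-radius binning and the truncated power series are illustrative choices, not the paper's implementation.

```python
import math

def radial_average(image):
    """Radial average of a square 2D array about its centre: pixels are
    binned by (rounded) distance from the centre and averaged per bin."""
    n = len(image)
    c = (n - 1) / 2.0
    sums, counts = {}, {}
    for i, row in enumerate(image):
        for j, v in enumerate(row):
            r = int(round(math.hypot(i - c, j - c)))
            sums[r] = sums.get(r, 0.0) + v
            counts[r] = counts.get(r, 0) + 1
    return [sums[r] / counts[r] for r in sorted(sums)]

def bessel_j0(x, terms=30):
    """Zeroth-order Bessel function of the first kind via its power series
    J0(x) = sum_k (-1)^k (x^2/4)^k / (k!)^2; adequate for modest |x|.
    Models the astigmatism-induced attenuation of the radial average."""
    total, term = 1.0, 1.0
    for k in range(1, terms):
        term *= -(x * x) / (4.0 * k * k)
        total += term
    return total
```

Since J0 decays and oscillates, multiplying an idealized radial average by it both attenuates the CTF signal and introduces extra zeros, as the abstract describes.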
Instrument to average 100 data sets
NASA Technical Reports Server (NTRS)
Tuma, G. B.; Birchenough, A. G.; Rice, W. J.
1977-01-01
An instrumentation system is currently under development which will measure many of the important parameters associated with the operation of an internal combustion engine. Some of these parameters include mass-fraction burn rate, ignition energy, and the indicated mean effective pressure. One of the characteristics of an internal combustion engine is the cycle-to-cycle variation of these parameters. A curve-averaging instrument has been produced which will generate the average curve, over 100 cycles, of any engine parameter. The average curve is described by 2048 discrete points which are displayed on an oscilloscope screen to facilitate recording and is available in real time. Input can be any parameter which is expressed as a ±10-volt signal. Operation of the curve-averaging instrument is defined between 100 and 6000 rpm. Provisions have also been made for averaging as many as four parameters simultaneously, with a subsequent decrease in resolution. This provides the means to correlate and perhaps interrelate the phenomena occurring in an internal combustion engine. This instrument has been used successfully on a 1975 Chevrolet V8 engine, and on a Continental 6-cylinder aircraft engine. While this instrument was designed for use on an internal combustion engine, with some modification it can be used to average any cyclically varying waveform.
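A software analogue of the operation the instrument performs, a point-by-point mean of many cycles each resampled to 2048 discrete points, might look like this; the function name and error handling are assumptions for illustration, not the hardware's actual design.

```python
def cycle_average(cycles, points=2048):
    """Average a cyclic waveform over many cycles.

    Each cycle is a sequence already resampled to `points` samples (2048 in
    the instrument described above); the result is the point-by-point mean
    curve across all cycles.
    """
    if any(len(c) != points for c in cycles):
        raise ValueError("every cycle must have exactly %d samples" % points)
    n = len(cycles)
    return [sum(c[i] for c in cycles) / n for i in range(points)]
```

Averaging over 100 cycles suppresses the cycle-to-cycle variation while preserving the common shape of the parameter's curve, which is exactly what makes the averaged curve useful for engine analysis.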
Gestational weight gain among Hispanic women.
Sangi-Haghpeykar, Haleh; Lam, Kim; Raine, Susan P
2014-01-01
To describe gestational weight gain among Hispanic women and to examine psychological, social, and cultural contexts affecting weight gain. A total of 282 Hispanic women were surveyed post-partum before leaving the hospital. Women were queried about their prepregnancy weight and weight gained during pregnancy. Adequacy of gestational weight gain was based on guidelines set by the Institute of Medicine in 2009. Independent risk factors for excessive or insufficient weight gain were examined by logistic regression. Most women were unmarried (59 %), with a mean age of 28.4 ± 6.6 years and an average weight gain of 27.9 ± 13.3 lbs. Approximately 45 % of women had gained too much, 32 % too little, and only 24 % had an adequate amount of weight gain. The mean birth weight was 7.3, 7.9, and 6.8 lbs among the adequate, excessive, and insufficient weight gain groups. Among women who exercised before pregnancy, two-thirds continued to do so during pregnancy; the mean gestational weight gain of those who continued was lower than those who stopped (26.8 vs. 31.4 lbs, p = 0.04). Independent risk factors for excessive weight gain were being unmarried, U.S. born, higher prepregnancy body mass index, and having indifferent or negative views about weight gain. Independent risk factors for insufficient weight gain were low levels of support and late initiation of prenatal care. Depression, stress, and a woman's or her partner's happiness regarding pregnancy were unrelated to weight gain. The results of this study can be used by prenatal programs to identify Hispanic women at risk for excessive or insufficient gestational weight gain.
The generic modeling fallacy: Average biomechanical models often produce non-average results!
Cook, Douglas D; Robertson, Daniel J
2016-11-07
Computational biomechanics models constructed using nominal or average input parameters are often assumed to produce average results that are representative of a target population of interest. To investigate this assumption a stochastic Monte Carlo analysis of two common biomechanical models was conducted. Consistent discrepancies were found between the behavior of average models and the average behavior of the population from which the average models' input parameters were derived. More interestingly, broadly distributed sets of non-average input parameters were found to produce average or near average model behaviors. In other words, average models did not produce average results, and models that did produce average results possessed non-average input parameters. These findings have implications on the prevalent practice of employing average input parameters in computational models. To facilitate further discussion on the topic, the authors have termed this phenomenon the "Generic Modeling Fallacy". The mathematical explanation of the Generic Modeling Fallacy is presented and suggestions for avoiding it are provided. Analytical and empirical examples of the Generic Modeling Fallacy are also given.
Averaged controllability of parameter dependent conservative semigroups
NASA Astrophysics Data System (ADS)
Lohéac, Jérôme; Zuazua, Enrique
2017-02-01
We consider the problem of averaged controllability for parameter depending (either in a discrete or continuous fashion) control systems, the aim being to find a control, independent of the unknown parameters, so that the average of the states is controlled. We do it in the context of conservative models, both in an abstract setting and also analysing the specific examples of the wave and Schrödinger equations. Our first result is of perturbative nature. Assuming the averaging probability measure to be a small parameter-dependent perturbation (in a sense that we make precise) of an atomic measure given by a Dirac mass corresponding to a specific realisation of the system, we show that the averaged controllability property is achieved whenever the system corresponding to the support of the Dirac is controllable. Similar tools can be employed to obtain averaged versions of the so-called Ingham inequalities. Particular attention is devoted to the 1d wave equation in which the time-periodicity of solutions can be exploited to obtain more precise results, provided the parameters involved satisfy Diophantine conditions ensuring the lack of resonances.
Books Average Previous Decade of Economic Misery
Bentley, R. Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios
2014-01-01
For the 20th century since the Depression, we find a strong correlation between a ‘literary misery index’ derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159
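The "moving average of the previous decade" used in the study can be sketched as a trailing window mean; the 11-year default mirrors the reported best-fit window, but the function itself is an illustrative reconstruction, not the authors' code.

```python
def trailing_moving_average(series, window=11):
    """Trailing moving average: the value for index t is the mean of the
    `window` preceding entries, series[t-window:t]. Returns None where a
    full window is not yet available. With annual data this gives, for each
    year, the average of the previous `window` years, as in the misery-index
    analysis above."""
    out = []
    for t in range(len(series)):
        if t < window:
            out.append(None)
        else:
            out.append(sum(series[t - window:t]) / window)
    return out
```

Correlating the literary misery index against this trailing average, rather than the contemporaneous value, is what expresses the paper's finding that books reflect the previous decade's shared economic experience.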
Attractors and Time Averages for Random Maps
NASA Astrophysics Data System (ADS)
Araujo, Vitor
2006-07-01
Considering random noise in finite dimensional parameterized families of diffeomorphisms of a compact finite dimensional boundaryless manifold M, we show the existence of time averages for almost every orbit of each point of M, imposing mild conditions on the families. Moreover these averages are given by a finite number of physical absolutely continuous stationary probability measures. We use this result to deduce that situations with infinitely many sinks and Henon-like attractors are not stable under random perturbations, e.g., Newhouse's and Colli's phenomena in the generic unfolding of a quadratic homoclinic tangency by a one-parameter family of diffeomorphisms.
Polarized electron beams at milliampere average current
Poelker, Matthew
2013-11-01
This contribution describes some of the challenges associated with developing a polarized electron source capable of uninterrupted days-long operation at milliampere average beam current with polarization greater than 80%. Challenges will be presented in the context of assessing the required level of extrapolation beyond the performance of today's CEBAF polarized source operating at ~200 uA average current. Estimates of performance at higher current will be based on hours-long demonstrations at 1 and 4 mA. Particular attention will be paid to beam-related lifetime-limiting mechanisms, and strategies to construct a photogun that operates reliably at bias voltage > 350 kV.
Average power meter for laser radiation
NASA Astrophysics Data System (ADS)
Shevnina, Elena I.; Maraev, Anton A.; Ishanin, Gennady G.
2016-04-01
Advanced metrology equipment, in particular an average power meter for laser radiation, is necessary for the effective use of laser technology. In the paper we propose a measurement scheme with periodic scanning of a laser beam. The scheme is implemented in a pass-through average power meter that can perform continuous monitoring while the laser operates in pulsed mode or in continuous-wave mode, without interrupting the operation. The detector used in the device is based on the thermoelastic effect in crystalline quartz, as it has fast response, long-term stability of sensitivity, and an almost uniform sensitivity dependence on wavelength.
An improved moving average technical trading rule
NASA Astrophysics Data System (ADS)
Papailias, Fotis; Thomakos, Dimitrios D.
2015-06-01
This paper proposes a modified version of the widely used price and moving average cross-over trading strategies. The suggested approach (presented in its 'long only' version) is a combination of cross-over 'buy' signals and a dynamic threshold value which acts as a dynamic trailing stop. The trading behaviour and performance from this modified strategy are different from the standard approach with results showing that, on average, the proposed modification increases the cumulative return and the Sharpe ratio of the investor while exhibiting smaller maximum drawdown and smaller drawdown duration than the standard strategy.
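A plausible sketch of a cross-over 'buy' signal combined with a dynamic trailing stop follows. The exit rule used here (a fixed fraction below the running maximum price) is an assumption standing in for the paper's dynamic threshold, whose exact specification differs; the function name and parameters are illustrative.

```python
def crossover_with_trailing_stop(prices, window=20, stop_frac=0.05):
    """Long-only strategy: buy when price crosses above its `window`-period
    moving average; while long, track the running maximum price and exit
    when price falls `stop_frac` below that maximum (the trailing stop).
    Returns a list of (buy_index, sell_index) round trips."""
    trades, in_pos, entry, peak = [], False, None, None
    for t in range(window, len(prices)):
        p = prices[t]
        if not in_pos:
            ma = sum(prices[t - window:t]) / window  # trailing moving average
            if p > ma:
                in_pos, entry, peak = True, t, p
        else:
            peak = max(peak, p)
            if p < peak * (1 - stop_frac):  # dynamic trailing stop hit
                trades.append((entry, t))
                in_pos = False
    return trades
```

Because the stop ratchets up with the running maximum, the rule locks in gains during trends and exits on reversals, which is the mechanism the authors credit for the smaller drawdowns relative to the plain cross-over strategy.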
Average: the juxtaposition of procedure and context
NASA Astrophysics Data System (ADS)
Watson, Jane; Chick, Helen; Callingham, Rosemary
2014-09-01
This paper presents recent data on the performance of 247 middle school students on questions concerning average in three contexts. Analysis includes considering levels of understanding linking definition and context, performance across contexts, the relative difficulty of tasks, and difference in performance for male and female students. The outcomes lead to a discussion of the expectations of the curriculum and its implementation, as well as assessment, in relation to students' skills in carrying out procedures and their understanding about the meaning of average in context.
Average length of stay in hospitals.
Egawa, H
1984-03-01
The average length of stay is an important and appropriate index for hospital bed administration. However, on the view that it is not necessarily an appropriate index in Japan, an analysis is made of the differences between the health care facility systems of the United States and Japan. Concerning the length of stay in Japanese hospitals, the median appeared to better represent the situation. It is emphasized that in order for the average length of stay to become an appropriate index, there is a need to promote regional health care, especially facility planning.
Zhao, Kaiguang; Valle, Denis; Popescu, Sorin; Zhang, Xuesong; Malick, Bani
2013-05-15
Model specification remains challenging in spectroscopy of plant biochemistry, as exemplified by the availability of various spectral indices or band combinations for estimating the same biochemical. This lack of consensus in model choice across applications argues for a paradigm shift in hyperspectral methods to address model uncertainty and misspecification. We demonstrated one such method using Bayesian model averaging (BMA), which performs variable/band selection and quantifies the relative merits of many candidate models to synthesize a weighted average model with improved predictive performances. The utility of BMA was examined using a portfolio of 27 foliage spectral–chemical datasets representing over 80 species across the globe to estimate multiple biochemical properties, including nitrogen, hydrogen, carbon, cellulose, lignin, chlorophyll (a or b), carotenoid, polar and nonpolar extractives, leaf mass per area, and equivalent water thickness. We also compared BMA with partial least squares (PLS) and stepwise multiple regression (SMR). Results showed that all the biochemicals except carotenoid were accurately estimated from hyperspectral data with R2 values > 0.80.
Ensemble Bayesian model averaging using Markov chain Monte Carlo sampling
Vrugt, Jasper A; Diks, Cees G H; Clark, Martyn P
2008-01-01
Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper (Raftery et al., Mon Weather Rev 133:1155-1174, 2005), the authors recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov Chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
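The BMA predictive distribution discussed above is a weighted mixture of per-member densities. The sketch below assumes Gaussian member densities with given weights and standard deviations (in practice these come from EM or MCMC training, as the abstract explains); the function names and defaults are illustrative, not the DREAM or EM code itself.

```python
import random

def bma_mean(member_forecasts, weights):
    """Mean of the BMA mixture: the weight-averaged member forecast."""
    wsum = sum(weights)
    return sum(w * f for w, f in zip(weights, member_forecasts)) / wsum

def bma_predict(member_forecasts, weights, sigmas, n_draws=10000, seed=0):
    """Sample from a BMA predictive distribution modeled as a weighted
    mixture of Gaussians, one centred on each ensemble member's forecast.
    Each draw first picks a member with probability proportional to its
    BMA weight, then samples from that member's Gaussian."""
    rng = random.Random(seed)
    members = list(range(len(member_forecasts)))
    draws = []
    for _ in range(n_draws):
        k = rng.choices(members, weights=weights)[0]  # pick a member
        draws.append(rng.gauss(member_forecasts[k], sigmas[k]))
    return draws
```

Sampling the full mixture, rather than reporting only the weighted mean, is what lets BMA convey forecast uncertainty in addition to a point prediction.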
Unbiased Average Age-Appropriate Atlases for Pediatric Studies
Fonov, Vladimir; Evans, Alan C.; Botteron, Kelly; Almli, C. Robert; McKinstry, Robert C.; Collins, D. Louis
2010-01-01
Spatial normalization, registration, and segmentation techniques for Magnetic Resonance Imaging (MRI) often use a target or template volume to facilitate processing, take advantage of prior information, and define a common coordinate system for analysis. In the neuroimaging literature, the MNI305 Talairach-like coordinate system is often used as a standard template. However, when studying pediatric populations, variation from the adult brain makes the MNI305 suboptimal for processing brain images of children. Morphological changes occurring during development render the use of age-appropriate templates desirable to reduce potential errors and minimize bias during processing of pediatric data. This paper presents the methods used to create unbiased, age-appropriate MRI atlas templates for pediatric studies that represent the average anatomy for the age range of 4.5–18.5 years, while maintaining a high level of anatomical detail and contrast. The creation of anatomical T1-weighted, T2-weighted, and proton density-weighted templates for specific developmentally important age-ranges used data derived from the largest epidemiological, representative (healthy and normal) sample of the U.S. population, where each subject was carefully screened for medical and psychiatric factors and characterized using established neuropsychological and behavioral assessments. Use of these age-specific templates was evaluated by computing average tissue maps for gray matter, white matter, and cerebrospinal fluid for each specific age range, and by conducting an exemplar voxel-wise deformation-based morphometry study using 66 young (4.5–6.9 years) participants to demonstrate the benefits of using the age-appropriate templates. The public availability of these atlases/templates will facilitate analysis of pediatric MRI data and enable comparison of results between studies in a common standardized space specific to pediatric research. PMID:20656036
Self-averaging in complex brain neuron signals
NASA Astrophysics Data System (ADS)
Bershadskii, A.; Dremencov, E.; Fukayama, D.; Yadid, G.
2002-12-01
Nonlinear statistical properties of the Ventral Tegmental Area (VTA) of the limbic brain are studied in vivo. The VTA plays a key role in the generation of pleasure and in the development of psychological drug addiction. It is shown that spiking time series of the VTA dopaminergic neurons exhibit long-range correlations with self-averaging behavior. This specific VTA phenomenon has no relation to the VTA rewarding function. This last result reveals the complex role of the VTA in the limbic brain.
76 FR 19275 - Passenger Weight and Inspected Vessel Stability Requirements
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-07
... SECURITY Coast Guard 46 CFR Parts 115, 170, 176, and 178 RIN 1625-AB20 Passenger Weight and Inspected.... SUMMARY: On December 14, 2010, the Coast Guard amended its regulations governing the maximum weight and..., including increasing the Assumed Average Weight per Person (AAWPP) to 185 lb. The amendment triggered...
A Stochastic Model of Space-Time Variability of Mesoscale Rainfall: Statistics of Spatial Averages
NASA Technical Reports Server (NTRS)
Kundu, Prasun K.; Bell, Thomas L.
2003-01-01
A characteristic feature of rainfall statistics is that they depend on the space and time scales over which rain data are averaged. A previously developed spectral model of rain statistics, designed to capture this property, predicts power-law scaling behavior for the second-moment statistics of area-averaged rain rate on the averaging length scale L as L → 0. In the present work a more efficient method of estimating the model parameters is presented, and used to fit the model to the statistics of area-averaged rain rate derived from gridded radar precipitation data from TOGA COARE. Statistical properties of the data and the model predictions are compared over a wide range of averaging scales. An extension of the spectral model scaling relations to describe the dependence of the average fraction of grid boxes within an area containing nonzero rain (the "rainy area fraction") on the grid scale L is also explored.
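The dependence of second-moment statistics on the averaging length L can be illustrated by block-averaging a gridded rain field at increasing block sizes, in the spirit of the analysis above; this toy sketch is not the spectral model itself, and the function names are illustrative.

```python
def block_average(field, b):
    """Average a square 2D field over non-overlapping b-by-b blocks
    (the grid size must be divisible by b). Each output cell is the
    area-averaged value over one block, i.e. one averaging scale L."""
    n = len(field)
    out = []
    for i in range(0, n, b):
        row = []
        for j in range(0, n, b):
            s = sum(field[i + di][j + dj] for di in range(b) for dj in range(b))
            row.append(s / (b * b))
        out.append(row)
    return out

def second_moment(field):
    """Mean square of all grid values: the second-moment statistic whose
    dependence on the averaging scale the spectral model describes."""
    vals = [v for row in field for v in row]
    return sum(v * v for v in vals) / len(vals)
```

Computing `second_moment(block_average(field, b))` for increasing `b` shows the second moment shrinking as small-scale variability is averaged out, the qualitative behavior the spectral model quantifies as a power law in L.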
40 CFR 86.449 - Averaging provisions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... certification: (1) A statement that, to the best of your belief, you will not have a negative credit balance for... calculations of projected emission credits (zero, positive, or negative) based on production projections. If..., rounding to the nearest tenth of a gram: Deficit = (Emission Level − Average Standard) × (Total...
40 CFR 86.449 - Averaging provisions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... certification: (1) A statement that, to the best of your belief, you will not have a negative credit balance for... calculations of projected emission credits (zero, positive, or negative) based on production projections. If..., rounding to the nearest tenth of a gram: Deficit = (Emission Level − Average Standard) × (Total...
40 CFR 86.449 - Averaging provisions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... certification: (1) A statement that, to the best of your belief, you will not have a negative credit balance for... calculations of projected emission credits (zero, positive, or negative) based on production projections. If..., rounding to the nearest tenth of a gram: Deficit = (Emission Level − Average Standard) × (Total...
Measuring Time-Averaged Blood Pressure
NASA Technical Reports Server (NTRS)
Rothman, Neil S.
1988-01-01
Device measures time-averaged component of absolute blood pressure in artery. Includes compliant cuff around artery and external monitoring unit. Ceramic construction in monitoring unit suppresses ebb and flow of pressure-transmitting fluid in sensor chamber. Transducer measures only static component of blood pressure.
Why Johnny Can Be Average Today.
ERIC Educational Resources Information Center
Sturrock, Alan
1997-01-01
During a (hypothetical) phone interview with a university researcher, an elementary principal reminisced about a lifetime of reading groups with unmemorable names, medium-paced math problems, patchworked social studies/science lessons, and totally "average" IQ and batting scores. The researcher hung up at the mention of bell-curved assembly lines…
Bayesian Model Averaging for Propensity Score Analysis
ERIC Educational Resources Information Center
Kaplan, David; Chen, Jianshen
2013-01-01
The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…
40 CFR 63.846 - Emission averaging.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 11 2012-07-01 2012-07-01 false Emission averaging. 63.846 Section 63...) NATIONAL EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) National Emission Standards for Hazardous Air Pollutants for Primary Aluminum Reduction Plants § 63.846...
40 CFR 63.846 - Emission averaging.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 11 2014-07-01 2014-07-01 false Emission averaging. 63.846 Section 63...) NATIONAL EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) National Emission Standards for Hazardous Air Pollutants for Primary Aluminum Reduction Plants § 63.846...
Initial Conditions in the Averaging Cognitive Model
ERIC Educational Resources Information Center
Noventa, S.; Massidda, D.; Vidotto, G.
2010-01-01
The initial state parameters s[subscript 0] and w[subscript 0] are intricate issues of the averaging cognitive models in Information Integration Theory. Usually they are defined as a measure of prior information (Anderson, 1981; 1982) but there are no general rules to deal with them. In fact, there is no agreement as to their treatment except in…
Average thermal characteristics of solar wind electrons
NASA Technical Reports Server (NTRS)
Montgomery, M. D.
1972-01-01
Average solar wind electron properties based on a 1-year Vela 4 data sample from May 1967 to May 1968 are presented. Frequency distributions of electron-to-ion temperature ratio, electron thermal anisotropy, and thermal energy flux are presented. The resulting evidence concerning heat transport in the solar wind is discussed.
Glenzinski, D. (Fermilab)
2008-01-01
This paper summarizes a talk given at the Top2008 Workshop at La Biodola, Isola d'Elba, Italy. The status of the world-average top-quark mass is discussed. Some comments about the challenges facing the experiments in order to further improve the precision are offered.
Averaging on Earth-Crossing Orbits
NASA Astrophysics Data System (ADS)
Gronchi, G. F.; Milani, A.
The orbits of planet-crossing asteroids (and comets) can undergo close approaches and collisions with some major planet. This introduces a singularity in the N-body Hamiltonian, and the averaging of the equations of motion, traditionally used to compute secular perturbations, is undefined. We show that it is possible to define in a rigorous way some generalised averaged equations of motion, in such a way that the generalised solutions are unique and piecewise smooth. This is obtained, both in the planar and in the three-dimensional case, by means of the method of extraction of the singularities by Kantorovich. The modified distance used to approximate the singularity is the one used by Wetherill in his method to compute probability of collision. Some examples of averaged dynamics have been computed; a systematic exploration of the averaged phase space to locate the secular resonances should be the next step. "Alice sighed wearily. 'I think you might do something better with the time,' she said, 'than waste it asking riddles with no answers.'" (Alice in Wonderland, L. Carroll)
High average power optical FEL amplifiers.
Ben-Zvi, Ilan; Dayran, D.; Litvinenko, V.
2005-08-21
Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is for very high average power generation, for instance FELs with average power of 100 kW or more. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERL) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL-based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier is simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier being designed to operate on the 0.5 ampere Energy Recovery Linac under construction at Brookhaven National Laboratory's Collider-Accelerator Department.
A Functional Measurement Study on Averaging Numerosity
ERIC Educational Resources Information Center
Tira, Michael D.; Tagliabue, Mariaelena; Vidotto, Giulio
2014-01-01
In two experiments, participants judged the average numerosity between two sequentially presented dot patterns to perform an approximate arithmetic task. In Experiment 1, the response was given on a 0-20 numerical scale (categorical scaling), and in Experiment 2, the response was given by the production of a dot pattern of the desired numerosity…
Antidepressants and Weight Gain
Can antidepressants cause weight gain? Answers from Daniel K. Hall-Flavin, M.D. Weight gain is a possible side effect of nearly all antidepressants. ...
Diet in the management of weight loss
Strychar, Irene
2006-01-01
Obesity is an established risk factor for numerous chronic diseases, and successful treatment will have an important impact on medical resources utilization, health care costs, and patient quality of life. With over 60% of our population being overweight, physicians face a major challenge in assisting patients in the process of weight loss and weight-loss maintenance. Low-calorie diets can lower total body weight by an average of 8% in the short term. These diets are well-tolerated and characterize successful strategies in maintaining significant weight loss over a 5-year period. Very-low-calorie diets produce a more rapid weight loss but should only be used for fewer than 16 weeks because of clinical adverse effects. Diets that are severely restricted in carbohydrates (3%–10% of total energy intake) and do not emphasize a reduction of energy intake may be effective in reducing weight in the short term, but there is no evidence that they are sustainable or innocuous in the long term because their high saturated-fat content may be atherogenic. Fat restriction in a weight-loss regimen is beneficial, but the optimal percentage has yet to be determined. Longitudinal trials are needed to resolve these issues. In this article I discuss the evidence for and pitfalls of various types of weight-loss diets and identify issues that physicians need to address in weight loss and weight-loss maintenance. PMID:16389240
ERIC Educational Resources Information Center
Ryan, Kevin Michael
2011-01-01
Research on syllable weight in generative phonology has focused almost exclusively on systems in which weight is treated as an ordinal hierarchy of clearly delineated categories (e.g. light and heavy). As I discuss, canonical weight-sensitive phenomena in phonology, including quantitative meter and quantity-sensitive stress, can also treat weight…
... to learn more? Preventing Weight Gain Choosing a lifestyle that includes good eating habits and daily physical activity can help you maintain a healthy weight and prevent weight gain. The Possible Health Effects from Having Obesity Having obesity can increase your chances of developing ...
Averaging processes in granular flows driven by gravity
NASA Astrophysics Data System (ADS)
Rossi, Giulia; Armanini, Aronne
2016-04-01
One of the more promising theoretical frames for analysing two-phase granular flows is offered by the similarity of their rheology with the kinetic theory of gases [1]. Granular flows can be considered a macroscopic equivalent of the molecular case: the collisions among grains at a macroscopic scale are compared to the collisions among molecules [2,3]. However, there are important statistical differences between the two applications. In two-phase fluid mechanics there are two main types of average: the phasic average and the mass-weighted average [4]. The kinetic theories assume that atoms are so small that the number of molecules in a control volume is effectively infinite. Under this assumption the concentration (number of particles n) does not change during the averaging process, and the two definitions of average coincide. This hypothesis no longer holds in granular flows: contrary to gases, the size of a single particle becomes comparable to that of the control volume. For this reason, in a single realization the number of grains is constant and the two averages coincide; for more than one realization, however, n is no longer constant and the two types of average lead to different results. Therefore, the ensemble average used in the standard kinetic theory (usually the phasic average) is suitable for a single realization, but not for several realizations, as already pointed out in [5,6]. In the literature, three main length scales have been identified [7]: the smallest is the particle size; the intermediate corresponds to local averaging (needed to describe instability phenomena or secondary circulation); and the largest arises from phenomena such as large eddies in turbulence. Our aim is to resolve the intermediate scale by applying the mass-weighted average when dealing with more than one realization. This statistical approach leads to additional diffusive terms in the continuity equation: starting from experimental
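The distinction between the two averages can be sketched numerically (a minimal illustration with invented numbers, not the authors' formulation): with constant concentration the phasic and mass-weighted averages coincide, while over an ensemble of realizations with varying concentration they differ.

```python
import numpy as np

# c[k]: solid concentration, u[k]: grain velocity in realization k of the
# same control volume (invented values for illustration).

def phasic_average(c, u):
    # Straight ensemble mean of the velocity (c is unused by construction).
    return np.mean(u)

def mass_weighted_average(c, u):
    # Favre-type average: concentration-weighted ensemble mean.
    return np.sum(c * u) / np.sum(c)

u = np.array([1.0, 2.0, 3.0])
c_const = np.array([0.3, 0.3, 0.3])  # gas-like limit: n constant
c_var = np.array([0.1, 0.3, 0.6])    # granular case: n varies per realization
```

With `c_const` both averages give 2.0; with `c_var` the mass-weighted average shifts toward the high-concentration realization.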
Li, Chunli; Choi, Phillip
2006-04-06
Surface tensions (γ) of normal alkanes and methyl methacrylate (MMA) oligomers at various molecular weights in the low-molecular-weight range were computed using a newly proposed molecular dynamics (MD) simulation strategy developed from the definition γ = (∂U/∂σ)_{n,V,S}. The MD simulations, even with the use of a generic force field, reproduced the experimentally observed molecular weight dependence of γ (i.e., γ ∝ Mn^(-2/3), where Mn is the number-average molecular weight) for both series of oligomers. Analysis of the data reveals that the solvent-accessible surface area, one of the key input variables used for the calculation of γ, exhibits an Mn^(2/3) (rather than Mn^(1)) dependence. The reason for this dependence is that the solvent-accessible surface area formed by chainlike small molecules depends, to a larger extent, on their orientations rather than their size. This is not the case for high-molecular-weight molecules, whose solvent-accessible surface area is determined by the orientations of their segments, which are in turn determined by the conformations of the molecules. This may explain why the surface tension of polymers experimentally exhibits an Mn^(-1) dependence. It is inferred that the corresponding molecular weight dependence of the entropy changes associated with molecules in the low and high molecular weight ranges would be different.
Polarized electron beams at milliampere average current
Poelker, M.
2013-11-07
This contribution describes some of the challenges associated with developing a polarized electron source capable of uninterrupted days-long operation at milliampere average beam current with polarization greater than 80%. Challenges will be presented in the context of assessing the required level of extrapolation beyond the performance of today's CEBAF polarized source operating at ∼200 μA average current. Estimates of performance at higher current will be based on hours-long demonstrations at 1 and 4 mA. Particular attention will be paid to beam-related lifetime-limiting mechanisms, and to strategies for constructing a photogun that operates reliably at a bias voltage > 350 kV.
Rigid shape matching by segmentation averaging.
Wang, Hongzhi; Oliensis, John
2010-04-01
We use segmentations to match images by shape. The new matching technique does not require point-to-point edge correspondence and is robust to small shape variations and spatial shifts. To address the unreliability of segmentations computed bottom-up, we give a closed form approximation to an average over all segmentations. Our method has many extensions, yielding new algorithms for tracking, object detection, segmentation, and edge-preserving smoothing. For segmentation, instead of a maximum a posteriori approach, we compute the "central" segmentation minimizing the average distance to all segmentations of an image. For smoothing, instead of smoothing images based on local structures, we smooth based on the global optimal image structures. Our methods for segmentation, smoothing, and object detection perform competitively, and we also show promising results in shape-based tracking.
Factor weighting in DRASTIC modeling.
Pacheco, F A L; Pires, L M G R; Santos, R M B; Sanches Fernandes, L F
2015-02-01
Evaluation of aquifer vulnerability involves the integration of very diverse data, including soil characteristics (texture), hydrologic settings (recharge), aquifer properties (hydraulic conductivity), environmental parameters (relief), and ground water quality (nitrate contamination). It is therefore a multi-geosphere problem to be handled by a multidisciplinary team. The DRASTIC model remains the most popular technique in use for aquifer vulnerability assessments. The algorithm calculates an intrinsic vulnerability index based on a weighted addition of seven factors. In many studies, the method is subject to adjustments, especially in the factor weights, to meet the particularities of the studied regions. However, adjustments made by different techniques may lead to markedly different vulnerabilities and hence to uncertainty in the selection of an appropriate technique. This paper reports a comparison of five weighting techniques, an enterprise not attempted before. The studied area comprises 26 aquifer systems located in Portugal. The tested approaches include the Delphi consensus (original DRASTIC, used as reference), Sensitivity Analysis, Spearman correlations, Logistic Regression, and Correspondence Analysis (used as adjustment techniques). In all cases but Sensitivity Analysis, the adjustment techniques privileged the factors representing soil characteristics, hydrologic settings, aquifer properties, and environmental parameters, leveling their weights to ≈4.4, and subordinated the factors describing the aquifer media, downgrading their weights to ≈1.5. Logistic Regression predicts the highest and Sensitivity Analysis the lowest vulnerabilities. Overall, the vulnerability indices may be separated by a maximum of 51 points. This represents an uncertainty of 2.5 vulnerability classes, because the classes are 20 points wide. Given this ambiguity, the selection of a weighting technique to integrate a vulnerability index may require additional
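The weighted addition at the core of DRASTIC can be sketched as follows (the weights are the original Delphi-consensus values; the ratings are invented illustrative values, not data from this study):

```python
# DRASTIC intrinsic-vulnerability index: a weighted sum of seven factor
# ratings. Ratings are site-specific (1-10); weights below are the
# original Delphi-consensus weights.
DRASTIC_WEIGHTS = {
    "D": 5,  # Depth to water
    "R": 4,  # net Recharge
    "A": 3,  # Aquifer media
    "S": 2,  # Soil media
    "T": 1,  # Topography
    "I": 5,  # Impact of the vadose zone
    "C": 3,  # hydraulic Conductivity
}

def drastic_index(ratings, weights=DRASTIC_WEIGHTS):
    # Weighted addition of the seven factors.
    return sum(weights[f] * ratings[f] for f in weights)

# Hypothetical site ratings, for illustration only.
ratings = {"D": 7, "R": 6, "A": 5, "S": 4, "T": 9, "I": 6, "C": 4}
```

Adjustment techniques such as those compared in the paper modify the weight values while keeping this same weighted-sum structure.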
Weighted Watson-Crick automata
NASA Astrophysics Data System (ADS)
Tamrin, Mohd Izzuddin Mohd; Turaev, Sherzod; Sembok, Tengku Mohd Tengku
2014-07-01
There is a tremendous body of work in biotechnology, especially in the area of DNA molecules. The computing community is attempting to develop smaller computing devices through computational models based on operations performed on DNA molecules. A Watson-Crick automaton, a theoretical model for DNA-based computation, has two reading heads and works on double-stranded sequences of the input related by a complementarity relation similar to the Watson-Crick complementarity of DNA nucleotides. Over time, several variants of Watson-Crick automata have been introduced and investigated. However, they cannot be used as suitable DNA-based computational models for molecular stochastic processes and fuzzy processes that are related to important practical problems such as molecular parsing, gene disease detection, and food authentication. In this paper we define new variants of Watson-Crick automata, called weighted Watson-Crick automata, developing theoretical models for molecular stochastic and fuzzy processes. We define weighted Watson-Crick automata by adapting weight restriction mechanisms associated with formal grammars and automata. We also study the generative capacities of weighted Watson-Crick automata, including probabilistic and fuzzy variants. We show that weighted variants of Watson-Crick automata increase their generative power.
Average Annual Rainfall over the Globe
ERIC Educational Resources Information Center
Agrawal, D. C.
2013-01-01
The atmospheric recycling of water is a very important phenomenon on the globe because it not only refreshes the water but it also redistributes it over land and oceans/rivers/lakes throughout the globe. This is made possible by the solar energy intercepted by the Earth. The half of the globe facing the Sun, on the average, intercepts 1.74 ×…
Stochastic Games with Average Payoff Criterion
Ghosh, M. K.; Bagchi, A.
1998-11-15
We study two-person stochastic games with a Polish state space and compact action spaces, under the average payoff criterion and a certain ergodicity condition. For the zero-sum game we establish the existence of a value and stationary optimal strategies for both players. For the nonzero-sum case the existence of Nash equilibrium in stationary strategies is established under certain separability conditions.
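For reference, the average payoff criterion is usually formalized as follows (the notation is assumed for illustration, not quoted from the paper): for initial state $x$, strategy pair $(\pi^1,\pi^2)$, and one-stage payoff $r$,

```latex
J(x,\pi^1,\pi^2) \;=\; \liminf_{T\to\infty} \frac{1}{T}\,
  \mathbb{E}_x^{\pi^1,\pi^2}\!\left[\sum_{t=0}^{T-1} r(X_t, A_t, B_t)\right],
```

where $X_t$ is the state and $A_t$, $B_t$ are the players' actions; an ergodicity condition of the kind assumed in the paper typically makes this long-run average independent of the initial state.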
The Average Velocity in a Queue
ERIC Educational Resources Information Center
Frette, Vidar
2009-01-01
A number of cars drive along a narrow road that does not allow overtaking. Each driver has a certain maximum speed at which he or she will drive if alone on the road. As a result of slower cars ahead, many cars are forced to drive at speeds lower than their maximum ones. The average velocity in the queue offers a non-trivial example of a mean…
Disk-Averaged Synthetic Spectra of Mars
NASA Astrophysics Data System (ADS)
Tinetti, Giovanna; Meadows, Victoria S.; Crisp, David; Fong, William; Velusamy, Thangasamy; Snively, Heather
2005-08-01
The principal goal of the NASA Terrestrial Planet Finder (TPF) and European Space Agency's Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of a Mars-like planet to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra. We explore the detectability as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model. This model uses observational data as input to generate a database of spatially resolved synthetic spectra for a range of illumination conditions and viewing geometries. The model was validated against spectra recorded by the Mars Global Surveyor-Thermal Emission Spectrometer and the Mariner 9-Infrared Interferometer Spectrometer. Results presented here include disk-averaged synthetic spectra, light curves, and the spectral variability at visible and mid-infrared wavelengths for Mars as a function of viewing angle, illumination, and season. We also considered the differences in the spectral appearance of an increasingly ice-covered Mars, as a function of spectral resolution, signal-to-noise and integration time for both TPF-C and TPF-I/Darwin.
Digital Averaging Phasemeter for Heterodyne Interferometry
NASA Technical Reports Server (NTRS)
Johnson, Donald; Spero, Robert; Shaklan, Stuart; Halverson, Peter; Kuhnert, Andreas
2004-01-01
A digital averaging phasemeter has been built for measuring the difference between the phases of the unknown and reference heterodyne signals in a heterodyne laser interferometer. This phasemeter performs well enough to enable interferometric measurements of distance with accuracy of the order of 100 pm and with the ability to track distance as it changes at a speed of as much as 50 cm/s. This phasemeter is unique in that it is a single, integral system capable of performing three major functions that, heretofore, have been performed by separate systems: (1) measurement of the fractional-cycle phase difference, (2) counting of multiple cycles of phase change, and (3) averaging of phase measurements over multiple cycles for improved resolution. This phasemeter also offers the advantage of making repeated measurements at a high rate: the phase is measured on every heterodyne cycle. Thus, for example, in measuring the relative phase of two signals having a heterodyne frequency of 10 kHz, the phasemeter would accumulate 10,000 measurements per second. At this high measurement rate, an accurate average phase determination can be made more quickly than is possible at a lower rate.
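The resolution gain from averaging over multiple cycles can be sketched numerically (an illustration with invented noise figures, not the instrument's firmware):

```python
import numpy as np

# At a 10 kHz heterodyne frequency the phasemeter measures the phase on
# every cycle, accumulating 10,000 raw measurements per second. Averaging
# M of them reduces the random error roughly as 1/sqrt(M), which is how
# the high measurement rate is traded for resolution.
rng = np.random.default_rng(0)
true_phase = 0.25                                    # cycles (invented)
raw = true_phase + rng.normal(0.0, 0.01, 10_000)     # one second of samples

single_cycle_error = abs(raw[0] - true_phase)        # one raw measurement
averaged_error = abs(raw.mean() - true_phase)        # after 1 s of averaging
```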
On the ensemble averaging of PIC simulations
NASA Astrophysics Data System (ADS)
Codur, R. J. B.; Tsung, F. S.; Mori, W. B.
2016-10-01
Particle-in-cell simulations are used ubiquitously in plasma physics to study a variety of phenomena. They can be an efficient tool for modeling the Vlasov or Vlasov Fokker Planck equations in multi-dimensions. However, the PIC method actually models the Klimontovich equation for finite size particles. The Vlasov Fokker Planck equation can be derived as the ensemble average of the Klimontovich equation. We present results of studying Landau damping and Stimulated Raman Scattering using PIC simulations where we use identical ``drivers'' but change the random number generator seeds. We show that, even in cases where a plasma wave is excited below the noise in a single simulation, the plasma wave can clearly be seen and studied if an ensemble average over O(10) simulations is made. A comparison between the results from an ensemble average and from the subtraction technique is also presented. In the subtraction technique two simulations, one with and one without the ``driver'', are conducted with the same random number generator seed and the results are subtracted. This work is supported by DOE, NSF, and ENSC (France).
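Both techniques can be sketched on a toy signal (invented numbers, not a PIC code): a weak coherent "plasma wave" buried in seed-dependent noise emerges after ensemble averaging, while the subtraction technique cancels the noise exactly by reusing the seed.

```python
import numpy as np

# Toy model: driven wave of amplitude 0.1 buried in unit-amplitude noise.
t = np.linspace(0.0, 2.0 * np.pi, 200)
signal = 0.1 * np.sin(5.0 * t)

def run(seed, driver=True):
    # One "simulation": seed-dependent noise, optionally with the driver on.
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, 1.0, t.size)
    return (signal if driver else 0.0) + noise

# Ensemble average: noise shrinks ~ 1/sqrt(N_runs), wave becomes visible.
ensemble = np.mean([run(s) for s in range(100)], axis=0)

# Subtraction technique: same seed with and without the driver,
# so the noise cancels exactly and only the driven response remains.
subtracted = run(7, driver=True) - run(7, driver=False)
```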
Modern average global sea-surface temperature
Schweitzer, Peter N.
1993-01-01
The data contained in this data set are derived from the NOAA Advanced Very High Resolution Radiometer Multichannel Sea Surface Temperature data (AVHRR MCSST), which are obtainable from the Distributed Active Archive Center at the Jet Propulsion Laboratory (JPL) in Pasadena, Calif. The JPL tapes contain weekly images of SST from October 1981 through December 1990 in nine regions of the world ocean: North Atlantic, Eastern North Atlantic, South Atlantic, Agulhas, Indian, Southeast Pacific, Southwest Pacific, Northeast Pacific, and Northwest Pacific. This data set represents the results of calculations carried out on the NOAA data and also contains the source code of the programs that made the calculations. The objective was to derive the average sea-surface temperature of each month and week throughout the whole 10-year series, meaning, for example, that data from January of each year would be averaged together. The result is 12 monthly and 52 weekly images for each of the oceanic regions. Averaging the images in this way tends to reduce the number of grid cells that lack valid data and to suppress interannual variability.
A simple algorithm for averaging spike trains.
Julienne, Hannah; Houghton, Conor
2013-02-25
Although spike trains are the principal channel of communication between neurons, a single stimulus will elicit different spike trains from trial to trial. This variability, in both spike timings and spike number, can obscure the temporal structure of spike trains and often means that computations need to be run on numerous spike trains in order to extract features common across all the responses to a particular stimulus. This can increase the computational burden and obscure analytical results. As a consequence, it is useful to consider how to calculate a central spike train that summarizes a set of trials. Indeed, averaging responses over trials is routine for other signal types. Here, a simple method for finding a central spike train is described. The spike trains are first mapped to functions, these functions are averaged, and a greedy algorithm is then used to map the average function back to a spike train. The central spike trains are tested on a large data set. Their performance on a classification-based test is considerably better than that of the medoid spike trains.
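The three steps can be sketched as follows (a simplified illustration: the Gaussian kernel, time grid, and greedy rule are assumptions, not the authors' exact choices):

```python
import numpy as np

# 1) map each spike train to a function by summing Gaussian bumps,
# 2) average the functions across trials,
# 3) greedily place spikes so the candidate train's function approaches
#    the average.

def to_function(spikes, t, width=0.02):
    f = np.zeros_like(t)
    for s in spikes:
        f += np.exp(-0.5 * ((t - s) / width) ** 2)
    return f

def central_spike_train(trains, t, n_spikes, width=0.02):
    target = np.mean([to_function(tr, t, width) for tr in trains], axis=0)
    chosen, current = [], np.zeros_like(t)
    for _ in range(n_spikes):
        # Greedy step: add the spike time that most reduces the L2 distance
        # between the candidate's function and the average function.
        errs = [np.sum((current + to_function([c], t, width) - target) ** 2)
                for c in t]
        best = t[int(np.argmin(errs))]
        chosen.append(best)
        current += to_function([best], t, width)
    return sorted(chosen)
```

On jittered trials with spikes near 0.2 s and 0.6 s, the greedy pass recovers spikes close to those centers.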
Effects of wildfire disaster exposure on male birth weight in an Australian population
O’Donnell, M. H.; Behie, A. M.
2015-01-01
Background and objectives: Maternal stress can depress birth weight and gestational age, with potential health effects. A growing number of studies examine the effect of maternal stress caused by environmental disasters on birth outcomes. These changes may indicate an adaptive response. In this study, we examine the effects of maternal exposure to wildfire on birth weight and gestational age, hypothesising that maternal stress will negatively influence these measures. Methodology: Using data from the Australian Capital Territory, we employed Analysis of Variance to examine the influence of the 2003 Canberra wildfires on the weight of babies born to mothers resident in fire-affected regions, while considering the role of other factors. Results: We found that male infants born in the most severely fire-affected area had significantly higher average birth weights than their less exposed peers and were also heavier than males born in the same areas in non-fire years. Higher average weights were attributable to an increase in the number of macrosomic infants. There was no significant effect on the weight of female infants or on gestational age for either sex. Conclusions and implications: Our findings indicate heightened environmental responsivity in the male cohort. We find that elevated maternal stress acted to accelerate the growth of male fetuses, potentially through an elevation of maternal blood glucose levels. Like previous studies, our work finds effects of disaster exposure and suggests that fetal growth patterns respond to maternal signals. However, the direction of the change in birth weight is opposite to that of many earlier studies. PMID:26574560
The Effect of Sunspot Weighting
NASA Astrophysics Data System (ADS)
Svalgaard, Leif; Cagnotti, Marco; Cortesi, Sergio
2017-02-01
Although W. Brunner began to weight sunspot counts (from 1926), using a method in which larger spots were counted more than once, he compensated for the weighting by not counting enough smaller spots, so as to maintain the same reduction factor (0.6) used by his predecessor A. Wolfer to reduce the count to R. Wolf's original scale; the weighting therefore had no effect on the scale of the sunspot number. In 1947, M. Waldmeier formalized the weighting (on a scale from 1 to 5) of the sunspot count made at Zurich and its auxiliary station Locarno. This explicit counting method, when followed, inflates the relative sunspot number over that corresponding to the scale set by Wolfer (and matched by Brunner). Recounting some 60,000 sunspots on drawings from the reference station Locarno shows that the number of sunspots reported was over-counted by ≈44 % on average, leading to an inflation (measured by an effective weight factor) in excess of 1.2 for high solar activity. In a double-blind parallel counting with the Locarno observer M. Cagnotti, we determined that Svalgaard's count closely matches that of Cagnotti, allowing us to determine from direct observation the daily weight factor for spots since 2003 (and sporadically before). The effective total inflation turns out to have two sources: a major one (15 - 18 %) caused by weighting of spots, and a minor one (4 - 5 %) caused by the introduction of the Zürich classification of sunspot groups, which increases the group count by 7 - 8 % and the relative sunspot number by about half that. We find that a simple empirical equation (depending on the activity level) fits the observed factors well, and use that fit to estimate the weighting inflation factor for each month back to the introduction of effective inflation in 1947, and thus to correct for the over-counts and reduce sunspot counting to the Wolfer method in use from 1894 onwards.
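How spot weighting inflates the relative sunspot number can be sketched with the standard Wolf formula R = k(10g + s) (the spot list and weights below are invented for illustration):

```python
# Relative sunspot number: R = k * (10 * groups + spot_count), where the
# spot count is the sum of per-spot weights. Wolfer counting weights every
# spot as 1; Waldmeier-style counting weights larger spots up to 5.
def wolf_number(groups, spot_weights, k=1.0):
    s = sum(spot_weights)              # (possibly weighted) spot count
    return k * (10 * groups + s)

# Five spots in two groups; the two large spots get weight 3 under
# Waldmeier-style counting (hypothetical example).
wolfer = wolf_number(groups=2, spot_weights=[1, 1, 1, 1, 1])     # R = 25
waldmeier = wolf_number(groups=2, spot_weights=[3, 3, 1, 1, 1])  # R = 29
inflation = waldmeier / wolfer       # effective inflation factor
```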
Jacques, Paul F; Wang, Huifen
2014-05-01
A large body of observational studies and randomized controlled trials (RCTs) has examined the role of dairy products in weight loss and maintenance of healthy weight. Yogurt is a dairy product that is generally very similar to milk, but it also has some unique properties that may enhance its possible role in weight maintenance. This review summarizes the human RCT and prospective observational evidence on the relation of yogurt consumption to the management and maintenance of body weight and composition. The RCT evidence is limited to 2 small, short-term, energy-restricted trials. They both showed greater weight losses with yogurt interventions, but the difference between the yogurt intervention and the control diet was only significant in one of these trials. There are 5 prospective observational studies that have examined the association between yogurt and weight gain. The results of these studies are equivocal. Two of these studies reported that individuals with higher yogurt consumption gained less weight over time. One of these same studies also considered changes in waist circumference (WC) and showed that higher yogurt consumption was associated with smaller increases in WC. A third study was inconclusive because of low statistical power. A fourth study observed no association between changes in yogurt intake and weight gain, but the results suggested that those with the largest increases in yogurt intake during the study also had the highest increase in WC. The final study examined weight and WC change separately by sex and baseline weight status and showed benefits for both weight and WC changes for higher yogurt consumption in overweight men, but it also found that higher yogurt consumption in normal-weight women was associated with a greater increase in weight over follow-up. Potential underlying mechanisms for the action of yogurt on weight are briefly discussed.
Moran, Kieran; Antony, Joseph; Richter, Chris; Marshall, Brendan; Coyle, Joe; Falvey, Eanna; Franklyn-Miller, Andrew
2015-01-01
Background Low back pain is one of the most prevalent musculoskeletal conditions in the world. Many exercise treatment options exist but few interventions have utilised free-weight resistance training. The aim of this study was to investigate the effects of a free-weight-based resistance training intervention on pain and lumbar fat infiltration in those with chronic low back pain. Methods Thirty participants entered the study: 11 females (age=39.6±12.4 years, height=164±5.3 cm, body mass=70.9±8.2 kg) and 19 males (age=39.7±9.7 years, height=179±5.9 cm, body mass=86.6±15.9 kg). A 16-week, progressive, free-weight-based resistance training intervention was used. Participants completed three training sessions per week. Participants completed a Visual Analogue Pain Scale, Oswestry Disability Index and Euro-Qol V2 quality of life measure at baseline and every 4 weeks throughout the study. Three-dimensional kinematic and kinetic measures were used for biomechanical analysis of a bodyweight squat movement. Maximum strength was measured using an isometric mid-thigh pull, and lumbar paraspinal endurance was measured using a Biering-Sorensen test. Lumbar paraspinal fat infiltration was measured preintervention and postintervention using MRI. Results Postintervention pain, disability and quality of life were all significantly improved. In addition, there was a significant reduction in fat infiltration at the L3/L4 and L4/L5 levels and an 18% increase in lumbar extension time to exhaustion. Conclusions A free-weight-based resistance training intervention can be successfully utilised to improve pain, disability and quality of life in those with low back pain. PMID:27900136
Average System Cost Methodology : Administrator's Record of Decision.
United States. Bonneville Power Administration.
1984-06-01
Significant features of the average system cost (ASC) methodology adopted are: retention of the jurisdictional approach, in which retail rate orders of regulatory agencies provide the primary data for computing the ASC for utilities participating in the residential exchange; inclusion of transmission costs; exclusion of construction work in progress; use of a utility's weighted cost of debt securities; exclusion of income taxes; simplification of procedures for separating subsidized generation and transmission accounts from other accounts; clarification of ASC methodology rules; a more generous review timetable for individual filings; phase-in of the reformed methodology; and a requirement that each exchanging utility file under the new methodology within 20 days of implementation by the Federal Energy Regulatory Commission. Of the ten major participating utilities, the revised ASC will substantially affect only three. (PSB)
Tailoring dietary approaches for weight loss.
Gardner, C D
2012-07-01
Although the 'Low-Fat' diet was the predominant public health recommendation for weight loss and weight control for the past several decades, the obesity epidemic continued to grow during this time period. An alternative 'low-carbohydrate' (Low-Carb) approach, although originally dismissed and even vilified, was comparatively tested in a series of studies over the past decade, and has been found in general to be as effective as, if not more effective than, the Low-Fat approach for weight loss and for several related metabolic health measures. From a glass-half-full perspective, this suggests that there is more than one choice for a dietary approach to lose weight, and that Low-Fat and Low-Carb diets may be equally effective. From a glass-half-empty perspective, the average amount of weight lost on either of these two dietary approaches under the conditions studied, particularly when followed beyond 1 year, has been modest at best and negligible at worst, suggesting that the two approaches may be equally ineffective. One could resign oneself at this point to focusing on calories and energy intake restriction, regardless of macronutrient distributions. However, before throwing out the half-glass of water, it is worthwhile to consider that focusing on average results may mask important subgroup successes and failures. In all weight-loss studies, without exception, the range of individual differences in weight change within any particular diet group is orders of magnitude greater than the average group differences between diet groups. Several studies have now reported that adults with greater insulin resistance are more successful with weight loss on a lower-carbohydrate diet compared with a lower-fat diet, whereas adults with greater insulin sensitivity are equally or more successful with weight loss on a lower-fat diet compared with a lower-carbohydrate diet. Other preliminary findings suggest that there may be some promise with matching individuals with certain genotypes to
Bounded Self-Weights Estimation Method for Non-Local Means Image Denoising Using Minimax Estimators.
Nguyen, Minh Phuong; Chun, Se Young
2017-04-01
A non-local means (NLM) filter is a weighted average of a large number of non-local pixels with various image intensity values. By averaging many noisy pixels with appropriately chosen weights, NLM filters achieve powerful denoising performance and excellent detail preservation. The NLM weights between two different pixels are determined from the similarity of the two patches that surround these pixels and a smoothing parameter. Another important factor that influences the denoising performance is the self-weight value for the same pixel. The recently introduced local James-Stein type center pixel weight estimation method (LJS) outperforms other existing methods when determining the contribution of the center pixels in the NLM filter. However, the LJS method may result in excessively large self-weight estimates, since no upper bound is assumed, and it uses a relatively large local area for estimating the self-weights, which may lead to a strong bias. In this paper, we investigate these issues in the LJS method and then propose novel local self-weight estimation methods using direct bounds (LMM-DB) and reparametrization (LMM-RP), based on Baranchik's minimax estimator. Both the LMM-DB and LMM-RP methods were evaluated using a wide range of natural images and a clinical MRI image together with various levels of additive Gaussian noise. Our proposed parameter selection methods yielded an improved bias-variance trade-off, a higher peak signal-to-noise ratio (PSNR), and fewer visual artifacts compared with the classical NLM and LJS methods. Our proposed methods also provide a heuristic way to select a suitable global smoothing parameter that yields PSNR values close to the optimal ones.
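As an illustration of the weighting scheme this abstract describes, here is a minimal 1-D NLM sketch. The self-weight cap is only a crude stand-in for the bounded self-weight idea, not the LJS or LMM-DB/LMM-RP estimators; all parameter values are illustrative.

```python
import numpy as np

def nlm_1d(y, patch=2, h=0.5, max_self_weight=None):
    """Minimal 1-D non-local means: each pixel becomes a weighted average
    of all pixels, with weights set by the similarity of surrounding
    patches and a smoothing parameter h. max_self_weight optionally caps
    the weight a pixel gives to itself (a toy 'bounded self-weight')."""
    n = len(y)
    pad = np.pad(y, patch, mode="reflect")
    patches = np.stack([pad[i:i + 2 * patch + 1] for i in range(n)])
    out = np.empty(n)
    for i in range(n):
        d2 = ((patches - patches[i]) ** 2).mean(axis=1)  # patch distances
        w = np.exp(-d2 / h ** 2)                         # similarity weights
        if max_self_weight is not None:
            w[i] = min(w[i], max_self_weight)            # bound the self-weight
        out[i] = w @ y / w.sum()
    return out
```

On a flat signal with additive Gaussian noise, the weighted averaging reduces the mean squared error substantially.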
A Green's function quantum average atom model
Starrett, Charles Edward
2015-05-21
A quantum average atom model is reformulated using Green's functions. This allows integrals along the real energy axis to be deformed into the complex plane. The advantage is that sharp features such as resonances and bound states are broadened by a Lorentzian with a half-width chosen for numerical convenience. An implementation of this method therefore avoids numerically challenging resonance tracking and the search for weakly bound states, without changing the physical content or results of the model. A straightforward implementation results in up to a factor-of-5 speed-up relative to an optimized orbital-based code.
Estimates of Random Error in Satellite Rainfall Averages
NASA Technical Reports Server (NTRS)
Bell, Thomas L.; Kundu, Prasun K.
2003-01-01
Satellite rain estimates are most accurate when obtained with microwave instruments on low earth-orbiting satellites. Estimation of daily or monthly total areal rainfall, typically of interest to hydrologists and climate researchers, is made difficult, however, by the relatively poor coverage generally available from such satellites. Intermittent coverage by the satellites leads to random "sampling error" in the satellite products. The inexact information about hydrometeors inferred from microwave data also leads to random "retrieval errors" in the rain estimates. In this talk we will review approaches to quantitative estimation of the sampling error in area/time averages of satellite rain retrievals using ground-based observations, and methods of estimating rms random error, both sampling and retrieval, in averages using satellite measurements themselves.
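The sampling error described here can be illustrated with a toy Monte Carlo: estimate a time-mean rain rate from a few randomly timed snapshots of an intermittent series and measure the rms error. The rain model (occasional exponential bursts) and all parameters are illustrative, not the authors' method.

```python
import random

def rms_sampling_error(n_obs, n_total=240, trials=2000, seed=1):
    """rms error in the time-mean rain rate estimated from n_obs
    randomly timed snapshots out of n_total time steps, averaged
    over many synthetic intermittent-rain series."""
    rnd = random.Random(seed)
    sq = 0.0
    for _ in range(trials):
        # toy intermittent rain: mostly dry, occasional exponential bursts
        series = [rnd.expovariate(1.0) if rnd.random() < 0.1 else 0.0
                  for _ in range(n_total)]
        truth = sum(series) / n_total
        est = sum(rnd.sample(series, n_obs)) / n_obs  # sparse sampling
        sq += (est - truth) ** 2
    return (sq / trials) ** 0.5
```

Sparser coverage (fewer snapshots) gives a larger sampling error, mimicking the intermittent coverage of low earth-orbiting satellites.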
Laser Diode Cooling For High Average Power Applications
NASA Astrophysics Data System (ADS)
Mundinger, David C.; Beach, Raymond J.; Benett, William J.; Solarz, Richard W.; Sperry, Verry
1989-06-01
Many applications for semiconductor lasers that require high average power are limited by the inability to remove the waste heat generated by the diode lasers. In order to reduce the cost and complexity of these applications, a heat sink package has been developed which is based on water-cooled silicon microstructures. Thermal resistivities of less than 0.025 °C/(W/cm²) have been measured, which should be adequate for up to CW operation of diode laser arrays. This concept can easily be scaled to large areas and is ideal for high-average-power solid state laser pumping. Several packages which illustrate the essential features of this design have been fabricated and tested. The theory of operation will be briefly covered, and several conceptual designs will be described. The fabrication and assembly procedures and measured levels of performance will also be discussed.
Local average height distribution of fluctuating interfaces
NASA Astrophysics Data System (ADS)
Smith, Naftali R.; Meerson, Baruch; Sasorov, Pavel V.
2017-01-01
Height fluctuations of growing surfaces can be characterized by the probability distribution of height in a spatial point at a finite time. Recently there has been spectacular progress in the studies of this quantity for the Kardar-Parisi-Zhang (KPZ) equation in 1+1 dimensions. Here we notice that, at or above a critical dimension, the finite-time one-point height distribution is ill defined in a broad class of linear surface growth models unless the model is regularized at small scales. The regularization via a system-dependent small-scale cutoff leads to a partial loss of universality. As a possible alternative, we introduce a local average height. For the linear models, the probability density of this quantity is well defined in any dimension. The weak-noise theory for these models yields the "optimal path" of the interface conditioned on a nonequilibrium fluctuation of the local average height. As an illustration, we consider the conserved Edwards-Wilkinson (EW) equation, where, without regularization, the finite-time one-point height distribution is ill defined in all physical dimensions. We also determine the optimal path of the interface in a closely related problem of the finite-time height-difference distribution for the nonconserved EW equation in 1+1 dimension. Finally, we discuss a UV catastrophe in the finite-time one-point distribution of height in the (nonregularized) KPZ equation in 2+1 dimensions.
Global atmospheric circulation statistics: Four year averages
NASA Technical Reports Server (NTRS)
Wu, M. F.; Geller, M. A.; Nash, E. R.; Gelman, M. E.
1987-01-01
Four year averages of the monthly mean global structure of the general circulation of the atmosphere are presented in the form of latitude-altitude, time-altitude, and time-latitude cross sections. The numerical values are given in tables. Basic parameters utilized include daily global maps of temperature and geopotential height for 18 pressure levels between 1000 and 0.4 mb for the period December 1, 1978 through November 30, 1982 supplied by NOAA/NMC. Geopotential heights and geostrophic winds are constructed using hydrostatic and geostrophic formulae. Meridional and vertical velocities are calculated using thermodynamic and continuity equations. Fields presented in this report are zonally averaged temperature, zonal, meridional, and vertical winds, and amplitude of the planetary waves in geopotential height with zonal wave numbers 1-3. The northward fluxes of sensible heat and eastward momentum by the standing and transient eddies along with their wavenumber decomposition and Eliassen-Palm flux propagation vectors and divergences by the standing and transient eddies along with their wavenumber decomposition are also given. Large interhemispheric differences and year-to-year variations are found to originate in the changes in the planetary wave activity.
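The geostrophic winds mentioned above follow from the standard geostrophic relation, u_g = -(g/f) ∂Z/∂y and v_g = (g/f) ∂Z/∂x. A minimal finite-difference sketch on an idealized linear height field (all grid spacings and values illustrative, not the report's data):

```python
import numpy as np

g = 9.81      # gravitational acceleration, m s^-2
f = 1.0e-4    # mid-latitude Coriolis parameter, s^-1

# Idealized height field on a 1-km grid: Z rises northward by 1 m per 100 km
dy = dx = 1.0e3
y, x = np.meshgrid(np.arange(50) * dy, np.arange(50) * dx, indexing="ij")
Z = 5500.0 + 1.0e-5 * y          # geopotential height, m

dZdy, dZdx = np.gradient(Z, dy, dx)
u_g = -(g / f) * dZdy            # zonal geostrophic wind, m s^-1
v_g = (g / f) * dZdx             # meridional geostrophic wind, m s^-1
```

For this linear field the finite differences are exact, giving a uniform easterly u_g of about -0.98 m/s.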
Increase in average testis size of Canadian beef bulls.
García Guerra, Alvaro; Hendrick, Steve; Barth, Albert D
2013-05-01
Selection for adequate testis size in beef bulls is an important part of bull breeding soundness evaluation. Scrotal circumference (SC) is highly correlated with paired testis weight and is a practical method for estimating testis weight in the live animal. Most bulls presented for sale in Canada have SC included in the presale information. Scrotal circumference varies by age and breed, and may change over time due to selection for larger testis size. Therefore, it is important to periodically review the mean SC of various cattle breeds to provide valid bull selection criteria. Scrotal circumference data were obtained from bulls sold in western Canada from 2008 to 2011 and in Quebec from 2006 to 2010. Average scrotal circumferences for the most common beef breeds in Canada have increased significantly in the last 25 years. Differences between breeds have remained unchanged and Simmental bulls still have the largest SC at 1 year of age. Data provided here could aid in the establishment of new suggested minimum SC measurements for beef bulls.
Hydraulic Conductivity Estimation using Bayesian Model Averaging and Generalized Parameterization
NASA Astrophysics Data System (ADS)
Tsai, F. T.; Li, X.
2006-12-01
Non-uniqueness in the parameterization scheme is an inherent problem in groundwater inverse modeling due to limited data. To cope with this non-uniqueness, we introduce a Bayesian Model Averaging (BMA) method to integrate a set of selected parameterization methods. The estimation uncertainty in BMA includes the uncertainty in individual parameterization methods, as the within-parameterization variance, and the uncertainty from using different parameterization methods, as the between-parameterization variance. Moreover, the generalized parameterization (GP) method is considered in the geostatistical framework in this study. The GP method aims at increasing the flexibility of parameterization through the combination of a zonation structure and an interpolation method. The use of BMA with GP avoids over-confidence in a single parameterization method. A normalized least-squares estimation (NLSE) is adopted to calculate the posterior probability for each GP. We employ the adjoint state method for the sensitivity analysis of the weighting coefficients in the GP method; the adjoint state method is also applied to the NLSE problem. The proposed methodology is applied to the Alamitos Barrier Project (ABP) in California, where the spatially distributed hydraulic conductivity is estimated. The optimal weighting coefficients embedded in GP are identified through maximum likelihood estimation (MLE), in which the misfits between the observed and calculated groundwater heads are minimized. The conditional mean and conditional variance of the estimated hydraulic conductivity distribution using BMA are obtained to assess the estimation uncertainty.
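The within/between variance decomposition described here is the generic BMA combination rule and can be sketched as follows. The NLSE-style weights from model misfits are a simplified stand-in, and all input numbers are hypothetical:

```python
import math

def bma_combine(means, variances, sse, sigma2=1.0):
    """Combine per-parameterization estimates by Bayesian Model Averaging.
    Posterior weights come from a simplified NLSE-style likelihood of each
    model's misfit (sse); the total variance splits into within- and
    between-parameterization parts."""
    raw = [math.exp(-0.5 * s / sigma2) for s in sse]
    w = [r / sum(raw) for r in raw]                       # posterior weights
    mean = sum(wi * m for wi, m in zip(w, means))         # BMA mean
    within = sum(wi * v for wi, v in zip(w, variances))   # within-model variance
    between = sum(wi * (m - mean) ** 2 for wi, m in zip(w, means))
    return mean, within + between, w
```

For two equally misfitting models with means 1 and 3 and unit variances, the BMA mean is 2 and the total variance is 2 (within 1 plus between 1), showing how model disagreement inflates the uncertainty.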
Internal disinhibition predicts 5‐year weight regain in the National Weight Control Registry (NWCR)
Thomas, J. G.; Niemeier, H.; Wing, R. R.
2016-01-01
Summary Background Maintenance of weight loss remains elusive for most individuals. One potential innovative target is internal disinhibition (ID) or the tendency to eat in response to negative thoughts, feelings or physical sensations. Individuals high on ID do worse on average in standard behavioural treatment programmes, and recent studies suggest that disinhibition could play a significant role in weight regain. Purpose The purpose of the current study was to examine whether ID was associated with weight change over 5 years of follow‐up in the National Weight Control Registry, a registry of individuals who have successfully lost weight and maintained it. Methods From the National Weight Control Registry, 5,320 participants were examined across 5 years. Weight data were gathered annually. The disinhibition subscale of the Eating Inventory was used to calculate internal disinhibition (ID) and external disinhibition (ED) and was collected at baseline, year 1, year 3 and year 5. Linear mixed models were used to estimate the weight loss maintained across follow‐up years 1 to 5 using ID and ED as baseline and prospective predictors. Results Internal disinhibition predicted weight regain in all analyses. ED interacted with ID, such that individuals who were high on ID showed greater weight regain if they were also higher on ED. Conclusions The ID scale could be a useful screening measure for risk of weight regain, given its brevity. Improved psychological coping could be a useful target for maintenance or booster interventions. PMID:27812382
Lokemoen, John T.; Johnson, Douglas H.; Sharp, David E.
1990-01-01
During 1976-81 we weighed several thousands of wild Mallard, Gadwall, and Blue-winged Teal in central North Dakota to examine duckling growth patterns, adult weights, and the factors influencing them. One-day-old Mallard and Gadwall averaged 32.4 and 30.4 g, respectively, a reduction of 34% and 29% from fresh egg weights. In all three species, the logistic growth curve provided a good fit for duckling growth patterns. Except for the asymptote, there was no difference in growth curves between males and females of a species. Mallard and Gadwall ducklings were heavier in years when wetland area was extensive or had increased from the previous year. Weights of after-second-year females were greater than yearlings for Mallard but not for Gadwall or Blue-winged Teal. Adult Mallard females lost weight continuously from late March to early July. Gadwall and Blue-winged Teal females, which nest later than Mallard, gained weight after spring arrival, lost weight from the onset of nesting until early July, and then regained some weight. Females of all species captured on nests were lighter than those captured off nests at the same time. Male Mallard weights decreased from spring arrival until late May. Male Gadwall and Blue-winged Teal weights increased after spring arrival, then declined until early June. Males of all three species then gained weight until the end of June. Among adults, female Gadwall and male Mallard and Blue-winged Teal were heavier in years when wetland area had increased from the previous year; female Blue-winged Teal were heavier in years with more wetland area.
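The logistic growth curve used for duckling growth above can be sketched with a minimal fit. The parameter values and the crude grid search below are purely illustrative (not the paper's estimates), showing only the shape of the model: weight rises toward an asymptote A with growth rate k and inflection at age t0.

```python
import math

def logistic(t, A, k, t0):
    """Logistic growth curve: weight approaches asymptote A,
    with growth rate k and inflection point at age t0 (days)."""
    return A / (1.0 + math.exp(-k * (t - t0)))

# Synthetic 'duckling weights' generated from known, hypothetical parameters
A_true, k_true, t0_true = 900.0, 0.15, 30.0
days = range(1, 61)
obs = [logistic(t, A_true, k_true, t0_true) for t in days]

# Crude grid search for the growth rate k, with A and t0 held fixed
best_k = min(
    (sum((logistic(t, A_true, k, t0_true) - w) ** 2
         for t, w in zip(days, obs)), k)
    for k in [i / 100 for i in range(5, 31)]
)[1]
```

On noise-free synthetic data the grid search recovers the generating growth rate exactly, since the true k lies on the grid.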
High-average-power diode-pumped Yb: YAG lasers
Avizonis, P V; Beach, R; Bibeau, C M; Emanuel, M A; Harris, D G; Honea, E C; Monroe, R S; Payne, S A; Skidmore, J A; Sutton, S B
1999-10-01
A scaleable diode end-pumping technology for high-average-power slab and rod lasers has been under development for the past several years at Lawrence Livermore National Laboratory (LLNL). This technology has particular application to high average power Yb:YAG lasers that utilize a rod configured gain element. Previously, this rod configured approach has achieved average output powers in a single 5 cm long by 2 mm diameter Yb:YAG rod of 430 W cw and 280 W q-switched. High beam quality (M² = 2.4) q-switched operation has also been demonstrated at over 180 W of average output power. More recently, using a dual rod configuration consisting of two, 5 cm long by 2 mm diameter laser rods with birefringence compensation, we have achieved 1080 W of cw output with an M² value of 13.5 at an optical-to-optical conversion efficiency of 27.5%. With the same dual rod laser operated in a q-switched mode, we have also demonstrated 532 W of average power with an M² < 2.5 at 17% optical-to-optical conversion efficiency. These q-switched results were obtained at a 10 kHz repetition rate and resulted in 77 nsec pulse durations. These improved levels of operational performance have been achieved as a result of technology advancements made in several areas that will be covered in this manuscript. These enhancements to our architecture include: (1) hollow lens ducts that enable the use of advanced cavity architectures permitting birefringence compensation and the ability to run in large aperture-filling near-diffraction-limited modes; (2) compound laser rods with flanged-nonabsorbing-endcaps fabricated by diffusion bonding; (3) techniques for suppressing amplified spontaneous emission (ASE) and parasitics in the polished barrel rods.
Ensemble Averaged Probability Density Function (APDF) for Compressible Turbulent Reacting Flows
NASA Technical Reports Server (NTRS)
Shih, Tsan-Hsing; Liu, Nan-Suey
2012-01-01
In this paper, we present a concept of the averaged probability density function (APDF) for studying compressible turbulent reacting flows. The APDF is defined as an ensemble average of the fine grained probability density function (FG-PDF) with a mass density weighting. It can be used to exactly deduce the mass density weighted, ensemble averaged turbulent mean variables. The transport equation for APDF can be derived in two ways. One is the traditional way that starts from the transport equation of FG-PDF, in which the compressible Navier-Stokes equations are embedded. The resulting transport equation of APDF is then in a traditional form that contains conditional means of all terms from the right hand side of the Navier-Stokes equations except for the chemical reaction term. These conditional means are new unknown quantities that need to be modeled. Another way of deriving the transport equation of APDF is to start directly from the ensemble averaged Navier-Stokes equations. The resulting transport equation of APDF derived from this approach appears in a closed form without any need for additional modeling. The methodology of ensemble averaging presented in this paper can be extended to other averaging procedures: for example, the Reynolds time averaging for statistically steady flow and the Reynolds spatial averaging for statistically homogeneous flow. It can also be extended to a time or spatial filtering procedure to construct the filtered density function (FDF) for the large eddy simulation (LES) of compressible turbulent reacting flows.
Theory of optimal weighting of data to detect climatic change
NASA Technical Reports Server (NTRS)
Bell, T. L.
1986-01-01
A search for climatic change predicted by climate models can easily yield unconvincing results because of 'climatic noise,' the inherent, unpredictable variability of time-average atmospheric data. A weighted average of data that maximizes the probability of detecting predicted climatic change is presented. To obtain the optimal weights, an estimate of the covariance matrix of the data from a prior data set is needed. This introduces additional sampling error into the method, which is taken into account here. A form of the weighted average is found whose probability distribution is independent of the true (but unknown) covariance statistics of the data and of the climate model prediction.
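The optimal weighting the abstract alludes to is the standard generalized-least-squares (matched-filter) result: weights proportional to C⁻¹s minimize the noise variance subject to unit response to the predicted pattern s. A toy two-point example with a made-up covariance matrix (all numbers illustrative):

```python
import numpy as np

# Predicted climate-change pattern s and noise covariance C (illustrative)
s = np.array([1.0, 1.0])
C = np.array([[1.0, 0.5],
              [0.5, 4.0]])

w = np.linalg.solve(C, s)   # optimal weights are proportional to C^-1 s
w /= s @ w                  # normalize: weighted pattern has unit amplitude

var_opt = w @ C @ w         # = 1 / (s^T C^-1 s), the minimum noise variance
naive = np.full(2, 0.5)     # plain unweighted average, for comparison
var_naive = naive @ C @ naive
```

The optimal weights down-weight the noisier data point, so the detection statistic has lower noise variance (0.9375 here) than the plain average (1.5).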
Predictive RANS simulations via Bayesian Model-Scenario Averaging
NASA Astrophysics Data System (ADS)
Edeling, W. N.; Cinnella, P.; Dwight, R. P.
2014-10-01
The turbulence closure model is the dominant source of error in most Reynolds-Averaged Navier-Stokes simulations, yet no reliable estimators for this error component currently exist. Here we develop a stochastic, a posteriori error estimate, calibrated to specific classes of flow. It is based on variability in model closure coefficients across multiple flow scenarios, for multiple closure models. The variability is estimated using Bayesian calibration against experimental data for each scenario, and Bayesian Model-Scenario Averaging (BMSA) is used to collate the resulting posteriors, to obtain a stochastic estimate of a Quantity of Interest (QoI) in an unmeasured (prediction) scenario. The scenario probabilities in BMSA are chosen using a sensor which automatically weights those scenarios in the calibration set which are similar to the prediction scenario. The methodology is applied to the class of turbulent boundary-layers subject to various pressure gradients. For all considered prediction scenarios the standard-deviation of the stochastic estimate is consistent with the measurement ground truth. Furthermore, the mean of the estimate is more consistently accurate than the individual model predictions.
The Influence of Sleep Disordered Breathing on Weight Loss in a National Weight Management Program
Janney, Carol A.; Kilbourne, Amy M.; Germain, Anne; Lai, Zongshan; Hoerster, Katherine D.; Goodrich, David E.; Klingaman, Elizabeth A.; Verchinina, Lilia; Richardson, Caroline R.
2016-01-01
Study Objective: To investigate the influence of sleep disordered breathing (SDB) on weight loss in overweight/obese veterans enrolled in MOVE!, a nationally implemented behavioral weight management program delivered by the National Veterans Health Administration health system. Methods: This observational study evaluated weight loss by SDB status in overweight/obese veterans enrolled in MOVE! from May 2008–February 2012 who had at least two MOVE! visits, baseline weight, and at least one follow-up weight (n = 84,770). SDB was defined by International Classification of Diseases, Ninth Revision, Clinical Modification codes. Primary outcome was weight change (lb) from MOVE! enrollment to 6- and 12-mo assessments. Weight change over time was modeled with repeated-measures analyses. Results: SDB was diagnosed in one-third of the cohort (n = 28,269). At baseline, veterans with SDB weighed 29 [48] lb more than those without SDB (P < 0.001). On average, veterans attended eight MOVE! visits. Weight loss patterns over time were statistically different between veterans with and without SDB (P < 0.001); veterans with SDB lost less weight (−2.5 [0.1] lb) compared to those without SDB (−3.3 [0.1] lb; P = 0.001) at 6 months. At 12 mo, veterans with SDB continued to lose weight whereas veterans without SDB started to regain weight. Conclusions: Veterans with sleep disordered breathing (SDB) had significantly less weight loss over time than veterans without SDB. SDB should be considered in the development and implementation of weight loss programs due to its high prevalence and negative effect on health. Citation: Janney CA, Kilbourne AM, Germain A, Lai Z, Hoerster KD, Goodrich DE, Klingaman EA, Verchinina L, Richardson CR. The influence of sleep disordered breathing on weight loss in a national weight management program. SLEEP 2016;39(1):59–65. PMID:26350475
Lagrangian averaging, nonlinear waves, and shock regularization
NASA Astrophysics Data System (ADS)
Bhat, Harish S.
In this thesis, we explore various models for the flow of a compressible fluid as well as model equations for shock formation, one of the main features of compressible fluid flows. We begin by reviewing the variational structure of compressible fluid mechanics. We derive the barotropic compressible Euler equations from a variational principle in both material and spatial frames. Writing the resulting equations of motion requires certain Lie-algebraic calculations that we carry out in detail for expository purposes. Next, we extend the derivation of the Lagrangian averaged Euler (LAE-α) equations to the case of barotropic compressible flows. The derivation in this thesis involves averaging over a tube of trajectories η^ε centered around a given Lagrangian flow η. With this tube framework, the LAE-α equations are derived by following a simple procedure: start with a given action, expand via Taylor series in terms of small-scale fluid fluctuations ξ, truncate, average, and then model those terms that are nonlinear functions of ξ. We then analyze a one-dimensional subcase of the general models derived above. We prove the existence of a large family of traveling wave solutions. Computing the dispersion relation for this model, we find it is nonlinear, implying that the equation is dispersive. We carry out numerical experiments that show that the model possesses smooth, bounded solutions that display interesting pattern formation. Finally, we examine a Hamiltonian partial differential equation (PDE) that regularizes the inviscid Burgers equation without the addition of standard viscosity. Here α is a small parameter that controls a nonlinear smoothing term that we have added to the inviscid Burgers equation. We show the existence of a large family of traveling front solutions. We analyze the initial-value problem and prove well-posedness for a certain class of initial data. We prove that in the zero-α limit, without any standard viscosity
Quetelet, the average man and medical knowledge.
Caponi, Sandra
2013-01-01
Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du systeme social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine.
Average deployments versus missile and defender parameters
Canavan, G.H.
1991-03-01
This report evaluates the average number of reentry vehicles (RVs) that could be deployed successfully as a function of missile burn time, RV deployment times, and the number of space-based interceptors (SBIs) in defensive constellations. Leakage estimates of boost-phase kinetic-energy defenses as functions of launch parameters and defensive constellation size agree with integral predictions of near-exact calculations for constellation sizing. The calculations discussed here test more detailed aspects of the interaction. They indicate that SBIs can efficiently remove about 50% of the RVs from a heavy missile attack. The next 30% can be removed with two-fold less effectiveness; removing the next 10% could double constellation sizes. 5 refs., 7 figs.
Comprehensive time average digital holographic vibrometry
NASA Astrophysics Data System (ADS)
Psota, Pavel; Lédl, Vít; Doleček, Roman; Mokrý, Pavel; Vojtíšek, Petr; Václavík, Jan
2016-12-01
This paper presents a method that simultaneously deals with drawbacks of time-average digital holography: limited measurement range, limited spatial resolution, and quantitative analysis of the measured Bessel fringe patterns. When the frequency of the reference wave is shifted by an integer multiple of frequency at which the object oscillates, the measurement range of the method can be shifted either to smaller or to larger vibration amplitudes. In addition, phase modulation of the reference wave is used to obtain a sequence of phase-modulated fringe patterns. Such fringe patterns can be combined by means of phase-shifting algorithms, and amplitudes of vibrations can be straightforwardly computed. This approach independently calculates the amplitude values in every single pixel. The frequency shift and phase modulation are realized by proper control of Bragg cells and therefore no additional hardware is required.
High average power linear induction accelerator development
Bayless, J.R.; Adler, R.J.
1987-07-01
There is increasing interest in linear induction accelerators (LIAs) for applications including free electron lasers, high power microwave generators and other types of radiation sources. Lawrence Livermore National Laboratory has developed LIA technology in combination with magnetic pulse compression techniques to achieve very impressive performance levels. In this paper we will briefly discuss the LIA concept and describe our development program. Our goals are to improve the reliability and reduce the cost of LIA systems. An accelerator is presently under construction to demonstrate these improvements at an energy of 1.6 MeV in 2 kA, 65 ns beam pulses at an average beam power of approximately 30 kW. The unique features of this system are a low cost accelerator design and an SCR-switched, magnetically compressed, pulse power system. 4 refs., 7 figs.
Angle-averaged Compton cross sections
Nickel, G.H.
1983-01-01
The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m₀c²; α_s = scattered photon energy in units of m₀c²; β = initial electron velocity in units of c; φ = angle between photon direction and electron direction in the laboratory frame (LF); θ = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and τ = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over φ, θ, and τ. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV.
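As a sanity-check special case of such angle averaging (electron at rest, β = 0), the standard Klein-Nishina total cross section already integrates over the scattering angles, and it reduces to the Thomson cross section for α → 0. A small sketch (this is the textbook β = 0 formula, not the paper's six-variable average):

```python
import math

R_E = 2.8179403262e-13                  # classical electron radius, cm
SIGMA_T = 8 * math.pi / 3 * R_E ** 2    # Thomson cross section, cm^2

def klein_nishina_total(a):
    """Total Klein-Nishina cross section (cm^2) for a photon of energy
    a = E / (m0 c^2) scattering off an electron at rest (beta = 0)."""
    t = math.log(1 + 2 * a)
    return 2 * math.pi * R_E ** 2 * (
        (1 + a) / a ** 2 * (2 * (1 + a) / (1 + 2 * a) - t / a)
        + t / (2 * a)
        - (1 + 3 * a) / (1 + 2 * a) ** 2
    )
```

At low energy the cross section approaches the Thomson value, and at α = 1 (511 keV photons) it has dropped well below it, as expected.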
The Average-Value Correspondence Principle
NASA Astrophysics Data System (ADS)
Goyal, Philip
2007-12-01
In previous work [1], we have presented an attempt to derive the finite-dimensional abstract quantum formalism from a set of physically comprehensible assumptions. In this paper, we continue the derivation of the quantum formalism by formulating a correspondence principle, the Average-Value Correspondence Principle, that allows relations between measurement outcomes which are known to hold in a classical model of a system to be systematically taken over into the quantum model of the system, and by using this principle to derive many of the correspondence rules (such as operator rules, commutation relations, and Dirac's Poisson bracket rule) that are needed to apply the abstract quantum formalism to model particular physical systems.
Average prime-pair counting formula
NASA Astrophysics Data System (ADS)
Korevaar, Jaap; Riele, Herman Te
2010-04-01
Taking r > 0, let π_{2r}(x) denote the number of prime pairs (p, p+2r) with p ≤ x. The prime-pair conjecture of Hardy and Littlewood (1923) asserts that π_{2r}(x) ∼ 2C_{2r} li_2(x) with an explicit constant C_{2r} > 0. There seems to be no good conjecture for the remainders ω_{2r}(x) = π_{2r}(x) − 2C_{2r} li_2(x) that corresponds to Riemann's formula for π(x) − li(x). However, there is a heuristic approximate formula for averages of the remainders ω_{2r}(x) which is supported by numerical results.
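The counting function π_{2r}(x) itself is straightforward to compute for small x with a sieve, which is how numerical checks of the conjecture begin. A minimal sketch (the constant C_{2r} and the li_2 comparison are omitted):

```python
def prime_pairs(x, r=1):
    """Count prime pairs (p, p + 2r) with p <= x, via the sieve of
    Eratosthenes run up to x + 2r so that p + 2r can also be tested."""
    limit = x + 2 * r
    is_prime = bytearray([1]) * (limit + 1)
    is_prime[0] = is_prime[1] = 0
    for i in range(2, int(limit ** 0.5) + 1):
        if is_prime[i]:
            # Mark all multiples of i starting at i*i as composite.
            is_prime[i * i :: i] = bytes(len(range(i * i, limit + 1, i)))
    return sum(1 for p in range(2, x + 1) if is_prime[p] and is_prime[p + 2 * r])
```

With r = 1 this counts twin primes: π_2(100) = 8, corresponding to the pairs (3,5), (5,7), (11,13), (17,19), (29,31), (41,43), (59,61), (71,73).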
Average Strength Parameters of Reactivated Mudstone Landslide for Countermeasure Works
NASA Astrophysics Data System (ADS)
Nakamura, Shinya; Kimura, Sho; Buddhi Vithana, Shriwantha
2015-04-01
Among the many approaches to landslide stability analysis, several landslide-related studies have used shear strength parameters obtained from laboratory shear tests with the limit equilibrium method. Most of them concluded that the average strength parameters, i.e. average cohesion (c'avg) and average angle of shearing resistance (φ'avg), calculated from back analysis were in agreement with the residual shear strength parameters measured by torsional ring-shear tests on undisturbed and remolded samples. However, disagreement with this contention can be found elsewhere: the residual shear strength measured using a torsional ring-shear apparatus was found to be lower than the average strength calculated by back analysis. One reason why applying residual shear strength alone in stability analysis underestimates the safety factor is that the condition of the slip surface of a landslide can be heterogeneous: it may consist of portions that have already reached residual conditions along with other portions that have not. To accommodate such possible differences in slip surface conditions, it is worth first developing an appropriate picture of the heterogeneous nature of the actual slip surface, so as to ensure a more suitable selection of measured shear strength values for the stability calculation of landslides. For the present study, the determination procedure for the average strength parameters acting along the slip surface is presented through stability calculations of reactivated landslides in the Shimajiri-mudstone area of Okinawa, Japan. The average strength parameters along the slip surfaces of landslides were estimated using the results of laboratory shear tests of the slip surface/zone soils, together with a rational way of assessing the actual, heterogeneous slip surface conditions. The results tend to show that the shear strength acting along the
... trying to do so can have many causes. Metabolism slows down as you age . This can cause weight gain if you eat too much, eat the wrong foods, or do not get enough exercise. Drugs that can cause weight gain include: Birth ...
ERIC Educational Resources Information Center
Iona, Mario
1975-01-01
Presents a summary and comparison of various views on the concepts of mass and weight. Includes a consideration of gravitational force in an inertial system and apparent gravitational force on a rotating earth. Discusses the units and methods for measuring mass and weight. (GS)
ERIC Educational Resources Information Center
Lakdawalla, Darius; Philipson, Tomas
2007-01-01
We use panel data from the National Longitudinal Survey of Youth to investigate on-the-job exercise and weight. For male workers, job-related exercise has causal effects on weight, but for female workers, the effects seem primarily selective. A man who spends 18 years in the most physical fitness-demanding occupation is about 25 pounds (14…
... proved to be the most useful by the end of the 2 ½-year study. Researchers say overall the effects of the counseling and support were modest, and most people in the study did regain some weight. But they note that even modest weight loss can have health ...
ERIC Educational Resources Information Center
Katch, Victor L.
This paper describes a number of factors which go into determining weight. The paper describes what calories are, how caloric expenditure is measured, and why caloric expenditure is different for different people. The paper then outlines the way the body tends to adjust food intake and exercise to maintain a constant body weight. It is speculated…
... Profiles Multimedia Pregnancy & Healthy Weight Skip sharing on social media links Share this: Page Content New research shows that maintaining a healthy weight before and during pregnancy can reduce the likelihood of negative effects for mothers and babies We’ve heard the ...
The Weighted Oblimin Rotation.
ERIC Educational Resources Information Center
Lorenzo-Seva, Urbano
2000-01-01
Demonstrates that the weighting procedure proposed by E. Cureton and S. Mulaik (1975) can be applied to the Direct Oblimin approach of D. Clarkson and R. Jennrich (1988) to provide good results. The rotation method obtained is called Weighted Oblimin. Compared this method to other rotation methods with favorable results. (SLD)
... to lose that we’ve been talking about weight-loss surgery. Is that something we should consider?” Although the ... have the operation should not be made hastily. Weight-loss surgery is only advisable for extremely overweight adolescents for ...
Weight discrimination and bullying.
Puhl, Rebecca M; King, Kelly M
2013-04-01
Despite significant attention to the medical impacts of obesity, often ignored are the negative outcomes that obese children and adults experience as a result of stigma, bias, and discrimination. Obese individuals are frequently stigmatized because of their weight in many domains of daily life. Research spanning several decades has documented consistent weight bias and stigmatization in employment, health care, schools, the media, and interpersonal relationships. For overweight and obese youth, weight stigmatization translates into pervasive victimization, teasing, and bullying. Multiple adverse outcomes are associated with exposure to weight stigmatization, including depression, anxiety, low self-esteem, body dissatisfaction, suicidal ideation, poor academic performance, lower physical activity, maladaptive eating behaviors, and avoidance of health care. This review summarizes the nature and extent of weight stigmatization against overweight and obese individuals, as well as the resulting consequences that these experiences create for social, psychological, and physical health for children and adults who are targeted.
Lepere, A. J.; Slack-Smith, L. M.
2002-01-01
Intravenous sedation has been used in dentistry for many years because of its perceived advantages over general anesthesia, including shorter recovery times. However, there is limited literature available on recovery from intravenous dental sedation, particularly in the private general practice setting. The aim of this study was to describe the recovery times when sedation was conducted in private dental practice and to consider this in relation to age, weight, procedure type, and procedure time. The data were extracted from the intravenous sedation records available with 1 general anesthesia-trained dental practitioner who provides ambulatory sedation services to a number of private general dental practices in the Perth, Western Australia Metropolitan Area. Standardized intravenous sedation techniques as well as clear standardized discharge criteria were utilized. The sedatives used were fentanyl, midazolam, and propofol. Results from 85 patients produced an average recovery time of 19 minutes. Recovery time was not associated with the type or length of dental procedures performed. PMID:15384295
High Average Power, High Energy Short Pulse Fiber Laser System
Messerly, M J
2007-11-13
Recently, continuous wave fiber laser systems with output powers in excess of 500 W with good beam quality have been demonstrated [1]. High energy, ultrafast, chirped pulse fiber laser systems have achieved record output energies of 1 mJ [2]. However, these high-energy systems have not been scaled beyond a few watts of average output power. Fiber laser systems are attractive for many applications because they offer the promise of efficient, compact, robust, turnkey systems. Applications such as cutting, drilling, and materials processing, front-end systems for high energy pulsed lasers (such as petawatt systems), and laser-based sources of high spatial coherence, high flux x-rays all require high energy short pulses, and two of these three applications also require high average power. The challenge in creating a high energy chirped pulse fiber laser system is to find a way to scale the output energy while avoiding nonlinear effects and maintaining good beam quality in the amplifier fiber. To this end, our 3-year LDRD program sought to demonstrate a high energy, high average power fiber laser system. This work included exploring designs of large mode area optical fiber amplifiers for high energy systems, as well as understanding the issues associated with chirped pulse amplification in optical fiber amplifier systems.
The weight loss blogosphere: an online survey of weight loss bloggers.
Evans, Martinus; Faghri, Pouran D; Pagoto, Sherry L; Schneider, Kristin L; Waring, Molly E; Whited, Matthew C; Appelhans, Bradley M; Busch, Andrew; Coleman, Ailton S
2016-09-01
Blogging is a form of online journaling that has increasingly been used to document weight loss attempts. Despite the prevalence of weight loss bloggers, few studies have examined this population. We examined characteristics of weight loss bloggers and their blogs, including blogging habits, reasons for blogging, likes and dislikes of blogging, and associations between blogging activity and weight loss. Participants (N = 194, 92.3% female, mean age = 35) were recruited from Twitter and Facebook to complete an online survey. Participants reported an average weight loss of 42.3 pounds since starting to blog about their weight loss attempt. Blogging duration significantly predicted greater weight loss during blogging (β = -3.65, t(185) = -2.97, p = .003). Findings suggest that bloggers are generally successful with their weight loss attempts. Future research should explore what determines weight loss success or failure in bloggers and whether individuals desiring to lose weight would benefit from blogging.
Calculating Free Energies Using Average Force
NASA Technical Reports Server (NTRS)
Darve, Eric; Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)
2001-01-01
A new, general formula that connects the derivatives of the free energy along the selected, generalized coordinates of the system with the instantaneous force acting on these coordinates is derived. The instantaneous force is defined as the force acting on the coordinate of interest so that when it is subtracted from the equations of motion the acceleration along this coordinate is zero. The formula applies to simulations in which the selected coordinates are either unconstrained or constrained to fixed values. It is shown that in the latter case the formula reduces to the expression previously derived by den Otter and Briels. If simulations are carried out without constraining the coordinates of interest, the formula leads to a new method for calculating the free energy changes along these coordinates. This method is tested in two examples - rotation around the C-C bond of 1,2-dichloroethane immersed in water and transfer of fluoromethane across the water-hexane interface. The calculated free energies are compared with those obtained by two commonly used methods. One of them relies on determining the probability density function of finding the system at different values of the selected coordinate and the other requires calculating the average force at discrete locations along this coordinate in a series of constrained simulations. The free energies calculated by these three methods are in excellent agreement. The relative advantages of each method are discussed.
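One of the two comparison methods mentioned above, determining the probability density of the system along the selected coordinate, can be sketched directly: the free energy profile follows from A(ξ) = −kT ln P(ξ). The sketch below is a minimal histogram-based version of that standard approach, not the paper's instantaneous-force method.

```python
import math

def free_energy_profile(samples, nbins=40, kT=1.0):
    """Estimate A(xi) = -kT * ln P(xi) from sampled values of a
    coordinate xi, using a simple histogram estimate of P.
    Returns bin centers and free energies shifted so min(A) = 0."""
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / nbins
    counts = [0] * nbins
    for s in samples:
        counts[min(int((s - lo) / width), nbins - 1)] += 1
    centers, energies = [], []
    for i, c in enumerate(counts):
        if c:  # skip empty bins, where ln P is undefined
            centers.append(lo + (i + 0.5) * width)
            energies.append(-kT * math.log(c / (len(samples) * width)))
    a0 = min(energies)
    return centers, [a - a0 for a in energies]
```

For samples drawn from a flat density the profile is flat; where the density is three times higher, the free energy is lower by kT ln 3.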
Average oxidation state of carbon in proteins.
Dick, Jeffrey M
2014-11-06
The formal oxidation state of carbon atoms in organic molecules depends on the covalent structure. In proteins, the average oxidation state of carbon (Z(C)) can be calculated as an elemental ratio from the chemical formula. To investigate oxidation-reduction (redox) patterns, groups of proteins from different subcellular locations and phylogenetic groups were selected for comparison. Extracellular proteins of yeast have a relatively high oxidation state of carbon, corresponding with oxidizing conditions outside of the cell. However, an inverse relationship between Z(C) and redox potential occurs between the endoplasmic reticulum and cytoplasm. This trend provides support for the hypothesis that protein transport and turnover are ultimately coupled to the maintenance of different glutathione redox potentials in subcellular compartments. There are broad changes in Z(C) in whole-genome protein compositions in microbes from different environments, and in Rubisco homologues, lower Z(C) tends to occur in organisms with higher optimal growth temperature. Energetic costs calculated from thermodynamic models are consistent with the notion that thermophilic organisms exhibit molecular adaptation to not only high temperature but also the reducing nature of many hydrothermal fluids. Further characterization of the material requirements of protein metabolism in terms of the chemical conditions of cells and environments may help to reveal other linkages among biochemical processes with implications for changes on evolutionary time scales.
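The elemental-ratio calculation described above can be sketched as follows. The formula used here, Z_C = (−n_H + 3n_N + 2n_O + 2n_S) / n_C for a neutral molecule, is our reconstruction from the standard definition of carbon oxidation state (the charge term for ionized species is omitted); it is an assumption, not quoted from the paper.

```python
import re

def carbon_oxidation_state(formula):
    """Average oxidation state of carbon, Z_C, from a formula like
    'C2H5NO2' (glycine).  Assumes a neutral molecule of C, H, N, O, S:
    Z_C = (-n_H + 3*n_N + 2*n_O + 2*n_S) / n_C."""
    counts = {"C": 0, "H": 0, "N": 0, "O": 0, "S": 0}
    for elem, num in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        if elem:
            counts[elem] = counts.get(elem, 0) + (int(num) if num else 1)
    return (-counts["H"] + 3 * counts["N"] + 2 * counts["O"]
            + 2 * counts["S"]) / counts["C"]
```

For glycine (C2H5NO2) this gives Z_C = (−5 + 3 + 4)/2 = 1.0, matching the per-atom count (carboxyl carbon +3, alpha carbon −1).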
Microstructural effects on the average properties in porous battery electrodes
NASA Astrophysics Data System (ADS)
García-García, Ramiro; García, R. Edwin
2016-03-01
A theoretical framework is formulated to analytically quantify the effects of the microstructure on the average properties of porous electrodes, including reactive area density and the through-thickness tortuosity as observed in experimentally determined tomographic sections. The proposed formulation includes the microstructural non-idealities but also captures the well-known perfectly spherical limit. Results demonstrate that in the absence of any particle alignment, the through-thickness Bruggeman exponent α reaches an asymptotic value of α ∼ 2/3 as the shape of the particles becomes increasingly prolate (needle- or fiber-like). In contrast, the Bruggeman exponent diverges as the shape of the particles becomes increasingly oblate, regardless of the degree of particle alignment. For aligned particles, tortuosity can be dramatically suppressed, e.g., α → 1/10 for r_a → 1/10 and MRD ∼ 40. Particle size polydispersity impacts the porosity-tortuosity relation when the average particle size is comparable to the thickness of the electrode layers. Electrode reactivity density can be arbitrarily increased as the particles become increasingly oblate, but asymptotically reaches a minimum value as the particles become increasingly prolate. In the limit of a porous electrode comprised of fiber-like particles, the area density decreases by 24% with respect to a distribution of perfectly spherical particles.
Impact of Field of Study, College and Year on Calculation of Cumulative Grade Point Average
ERIC Educational Resources Information Center
Trail, Carla; Reiter, Harold I.; Bridge, Michelle; Stefanowska, Patricia; Schmuck, Marylou; Norman, Geoff
2008-01-01
A consistent finding from many reviews is that undergraduate Grade Point Average (uGPA) is a key predictor of academic success in medical school. Curiously, while uGPA has established predictive validity, little is known about its reliability. For a variety of reasons, medical schools use different weighting schemas to combine years of study.…
Epidemic spreading on weighted complex networks
NASA Astrophysics Data System (ADS)
Sun, Ye; Liu, Chuang; Zhang, Chu-Xu; Zhang, Zi-Ke
2014-01-01
Nowadays, the emergence of online services provides various multi-relation information to support the comprehensive understanding of the epidemic spreading process. In this Letter, we consider the edge weights to represent such multi-role relations. In addition, we perform detailed analysis of two representative metrics, outbreak threshold and epidemic prevalence, on SIS and SIR models. Both theoretical and simulation results find good agreements with each other. Furthermore, experiments show that, on fully mixed networks, the weight distribution on edges would not affect the epidemic results once the average weight of whole network is fixed. This work may shed some light on the in-depth understanding of epidemic spreading on multi-relation and weighted networks.
Weitzen, Rony; Tichler, Thomas; Kaufman, Bella; Catane, Raphael; Shpatz, Yael
2006-11-01
Numerous studies have examined the association between body weight, nutritional factors, physical activity and the risk for primary breast cancer. Relatively few studies, however, have examined the associations between these issues and the recurrence of the disease and cure of the primary tumor. Today, three areas of focus are actively being researched for breast cancer survivors: body weight, diet composition and physical activity with specific emphasis on the risk for recurrence, survival and quality of life. Increased body weight or BMI (Body Mass Index) at diagnosis was found to be a significant risk factor for recurrent disease, decreased survival, or both. Overall obesity has been shown to adversely affect prognosis. Appropriate weight control may be particularly beneficial for breast cancer survivors. Breast cancer survivors should be encouraged to achieve and maintain a healthy weight. Limiting fat intake can reduce the risk of breast cancer recurrence. Increasing consumption of vegetables and fruits seems to have possible beneficial effects during and after treatments. To date physical activity after breast cancer diagnosis has been found to reduce the risk of death. The greatest benefit occurred in women who performed the equivalent of walking 3-5 hours per week at an average pace. Safe weight loss via increased physical activity and healthful food choices should be encouraged for normal, overweight or obese breast cancer survivors in order to improve survival and life quality.
Blanc, Ann K.; Wardlaw, Tessa
2005-01-01
OBJECTIVE: To critically examine the data used to produce estimates of the proportion of infants with low birth weight in developing countries and to describe biases in these data. To assess the effect of adjustment procedures on the estimates and propose a modified estimation procedure for international reporting purposes. METHODS: Mothers' reports about their recent births in 62 nationally representative Demographic and Health Surveys (DHS) conducted between 1990 and 2000 were analysed. The proportion of infants weighed at birth, characteristics of those weighed, extent of misreporting, and mothers' subjective assessments of their children's size at birth were examined. FINDINGS: In many developing countries the majority of infants were not weighed at birth. Those who were weighed were more likely to have mothers who live in urban areas and are educated, and to be born in a medical facility with assistance from medically trained personnel. Birth weights reported by mothers are "heaped" on multiples of 500 grams. CONCLUSION: Current survey-based estimates of the prevalence of low birth weight are biased substantially downwards. Two adjustments to reported data are recommended: a weighting procedure that combines reported birth weights with mothers' assessment of the child's size at birth, and categorization of one-quarter of the infants reported to have a birth weight of exactly 2500 grams as having low birth weight. Averaged over all surveys, these procedures increased the proportion classified as having low birth weight by 25%. We also recommend that the proportion of infants not weighed at birth be routinely reported. Efforts are needed to increase the weighing of newborns and the recording of their weights. PMID:15798841
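The second recommended adjustment above, reclassifying one-quarter of births reported at exactly 2500 grams as low birth weight to correct for heaping, is simple enough to sketch. The first adjustment (weighting by mothers' size assessments) is omitted here, so this is only a partial illustration of the paper's procedure.

```python
def adjusted_lbw_proportion(birth_weights_g):
    """Proportion of low birth weight (< 2500 g), with one-quarter of
    the births reported at exactly 2500 g reclassified as low birth
    weight, correcting for heaping on that round value."""
    n = len(birth_weights_g)
    below = sum(1 for w in birth_weights_g if w < 2500)
    at_2500 = sum(1 for w in birth_weights_g if w == 2500)
    return (below + 0.25 * at_2500) / n
```

For example, with one birth below 2500 g and four reported at exactly 2500 g in a sample of ten, the adjusted proportion is (1 + 0.25·4)/10 = 0.20, versus 0.10 unadjusted.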
Steinke, Hanno; Rabi, Suganthy; Saito, Toshiyuki; Sawutti, Alimjan; Miyaki, Takayoshi; Itoh, Masahiro; Spanel-Borowski, Katharina
2008-11-20
Plastination is an excellent technique which helps to keep anatomical specimens in a dry, odourless state. Since the invention of the plastination technique by von Hagens, research has been done to improve the quality of plastinated specimens. In this paper, we describe a method of producing light-weight plastinated specimens using xylene along with silicone and, in the final step, substituting the xylene with air. The finished plastinated specimens were light-weight, dry, odourless and robust. This method requires less resin, making the plastination technique more cost-effective. The light-weight specimens are easy to carry and can easily be used for teaching.
Metamemory, Memory Performance, and Causal Attributions in Gifted and Average Children.
ERIC Educational Resources Information Center
Kurtz, Beth E.; Weinert, Franz E.
1989-01-01
Tested high- and average-achieving German fifth- and seventh-grade students' metacognitive knowledge, attributional beliefs, and performance on a sort recall test. Found ability-related differences in all three areas. Gifted children tended to attribute academic success to high ability while average children attributed success to effort. (SAK)
Global Average Brightness Temperature for April 2003
NASA Technical Reports Server (NTRS)
2003-01-01
[Figure 1 removed for brevity; see original site]
This image shows average temperatures in April, 2003, observed by AIRS at an infrared wavelength that senses either the Earth's surface or any intervening cloud. Similar to a photograph of the planet taken with the camera shutter held open for a month, stationary features are captured while those obscured by moving clouds are blurred. Many continental features stand out boldly, such as our planet's vast deserts, and India, now at the end of its long, clear dry season. Also obvious are the high, cold Tibetan plateau to the north of India, and the mountains of North America. The band of yellow encircling the planet's equator is the Intertropical Convergence Zone (ITCZ), a region of persistent thunderstorms and associated high, cold clouds. The ITCZ merges with the monsoon systems of Africa and South America. Higher latitudes are increasingly obscured by clouds, though some features like the Great Lakes, the British Isles and Korea are apparent. The highest latitudes of Europe and Eurasia are completely obscured by clouds, while Antarctica stands out cold and clear at the bottom of the image.
The Atmospheric Infrared Sounder Experiment, with its visible, infrared, and microwave detectors, provides a three-dimensional look at Earth's weather. Working in tandem, the three instruments can make simultaneous observations all the way down to the Earth's surface, even in the presence of heavy clouds. With more than 2,000 channels sensing different regions of the atmosphere, the system creates a global, 3-D map of atmospheric temperature and humidity and provides information on clouds, greenhouse gases, and many other atmospheric phenomena. The AIRS Infrared Sounder Experiment flies onboard NASA's Aqua spacecraft and is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., under contract to NASA. JPL is a division of the California Institute of Technology in Pasadena.
A theoretical account of cue averaging in the rodent head direction system.
Page, Hector J I; Walters, Daniel M; Knight, Rebecca; Piette, Caitlin E; Jeffery, Kathryn J; Stringer, Simon M
2014-02-05
Head direction (HD) cell responses are thought to be derived from a combination of internal (or idiothetic) and external (or allothetic) sources of information. Recent work from the Jeffery laboratory shows that the relative influence of visual versus vestibular inputs upon the HD cell response depends on the disparity between these sources. In this paper, we present simulation results from a model designed to explain these observations. The model accurately replicates the Knight et al. data. We suggest that cue conflict resolution is critically dependent on plastic remapping of visual information onto the HD cell layer. This remap results in a shift in preferred directions of a subset of HD cells, which is then inherited by the rest of the cells during path integration. Thus, we demonstrate how, over a period of several minutes, a visual landmark may gain cue control. Furthermore, simulation results show that weaker visual landmarks fail to gain cue control as readily. We therefore suggest a second longer term plasticity in visual projections onto HD cell areas, through which landmarks with an inconsistent relationship to idiothetic information are made less salient, significantly hindering their ability to gain cue control. Our results provide a mechanism for reliability-weighted cue averaging that may pertain to other neural systems in addition to the HD system.
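The reliability-weighted cue averaging mentioned in the closing sentence can be illustrated, outside of any neural model, as a weighted circular mean: each directional cue contributes a unit vector scaled by its reliability, and the combined heading is the direction of the resultant. This sketch is our illustration of the general principle, not the paper's attractor-network implementation.

```python
import math

def combine_headings(angles, weights):
    """Reliability-weighted average of directional cues (radians).
    Summing weighted unit vectors and taking the resultant direction
    handles circular wrap-around correctly (e.g. 350 deg and 10 deg
    average to 0 deg, not 180 deg)."""
    x = sum(w * math.cos(a) for a, w in zip(angles, weights))
    y = sum(w * math.sin(a) for a, w in zip(angles, weights))
    return math.atan2(y, x)
```

With equal weights, cues at 0 and π/2 combine to π/4; tripling the weight of the first cue pulls the estimate toward it, to atan2(1, 3) ≈ 0.32 rad.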
Bryan, Craig J; Bryan, AnnaBelle O; Hinkson, Kent; Bichrest, Michael; Ahern, D Aaron
2014-01-01
The current study examined relationships among self-reported depression severity, posttraumatic stress disorder (PTSD) symptom severity, and grade point average (GPA) among student servicemembers and veterans. We asked 422 student servicemembers and veterans (72% male, 86% Caucasian, mean age = 36.29 yr) to complete an anonymous online survey that assessed self-reported GPA, depression severity, PTSD severity, and frequency of academic problems (late assignments, low grades, failed exams, and skipped classes). Female respondents reported a slightly higher GPA than males (3.56 vs 3.41, respectively, p = 0.01). Depression symptoms (beta weight = -0.174, p = 0.03), male sex (beta weight = 0.160, p = 0.01), and younger age (beta weight = 0.155, p = 0.01) were associated with lower GPA but not PTSD symptoms (beta weight = -0.040, p = 0.62), although the interaction of depression and PTSD symptoms showed a nonsignificant inverse relationship with GPA (beta weight = -0.378, p = 0.08). More severe depression was associated with turning in assignments late (beta weight = 0.171, p = 0.03), failed exams (beta weight = 0.188, p = 0.02), and skipped classes (beta weight = 0.254, p = 0.01). The relationship of depression with self-reported GPA was mediated by frequency of failed exams. Results suggest that student servicemembers and veterans with greater emotional distress also report worse academic performance.
Zimmerman, R.E.; Shem, L.M.; Gowdy, M.J.; Van Dyke, G.D.; Hackney, C.T.
1992-07-01
Prevalence index values (PIVs; FICWD, 1989) and average wetland values (AWVs) for all species present were compared for three wetland gas pipeline rights-of-way (ROWs) and adjacent natural areas. The similarities in results using these two indicator values suggest that an average wetland value may offer a simpler, less time-consuming method of evaluating the vegetation of a study site as an indication of wetness. Both PIVs and AWVs are presented for the ROWs and the adjacent natural area at each site.
... body composition gradually shifts: the proportion of muscle decreases and the proportion of fat increases. This shift slows their metabolism, making it easier to gain weight. In addition, some people become less physically ...
... spurts in height and weight gain in both boys and girls. Once these changes start, they continue for several ... or obese . Different BMI charts are used for boys and girls under the age of 20 because the amount ...
Kenshole, Anne B.
1972-01-01
Diabetes is being increasingly detected among the overweight. The author discusses the links between diabetes and obesity, and outlines methods by which satisfactory weight reduction may be achieved. PMID:20468726
Englberger, L.
1999-01-01
A programme of weight loss competitions and associated activities in Tonga, intended to combat obesity and the noncommunicable diseases linked to it, has popular support and the potential to effect significant improvements in health. PMID:10063662
Interpreting Sky-Averaged 21-cm Measurements
NASA Astrophysics Data System (ADS)
Mirocha, Jordan
2015-01-01
Within the first ~1 billion years after the Big Bang, the intergalactic medium (IGM) underwent a remarkable transformation, from a uniform sea of cold neutral hydrogen gas to a fully ionized, metal-enriched plasma. Three milestones during this epoch of reionization -- the emergence of the first stars, black holes (BHs), and full-fledged galaxies -- are expected to manifest themselves as extrema in sky-averaged ("global") measurements of the redshifted 21-cm background. However, interpreting these measurements will be complicated by the presence of strong foregrounds and non-trivialities in the radiative transfer (RT) modeling required to make robust predictions. I have developed numerical models that efficiently solve the frequency-dependent radiative transfer equation, which has led to two advances in studies of the global 21-cm signal. First, frequency-dependent solutions facilitate studies of how the global 21-cm signal may be used to constrain the detailed spectral properties of the first stars, BHs, and galaxies, rather than just the timing of their formation. And second, the speed of these calculations allows one to search vast expanses of a currently unconstrained parameter space, while simultaneously characterizing the degeneracies between parameters of interest. I find principally that (1) physical properties of the IGM, such as its temperature and ionization state, can be constrained robustly from observations of the global 21-cm signal without invoking models for the astrophysical sources themselves, (2) translating IGM properties to galaxy properties is challenging, in large part due to frequency-dependent effects. For instance, evolution in the characteristic spectrum of accreting BHs can modify the 21-cm absorption signal at levels accessible to first generation instruments, but could easily be confused with evolution in the X-ray luminosity star-formation rate relation. Finally, (3) the independent constraints most likely to aid in the interpretation
Single authentication: exposing weighted splining artifacts
NASA Astrophysics Data System (ADS)
Ciptasari, Rimba W.
2016-05-01
A common form of manipulation is to combine parts of one image fragment with another, different image, either to remove or to blend the objects. Inspired by this situation, we propose a single authentication technique for detecting traces of the weighted average splining technique. In this paper, we assume that an image composite could be created by joining two images so that the edge between them is imperceptible. The weighted average technique is constructed from overlapped images so that it is possible to compute the gray level value of points within a transition zone. This approach works on the assumption that although the splining process leaves the transition zone smooth, it may nevertheless alter the underlying statistics of an image. In other words, it introduces specific correlations into the image. The proposed approach to identifying these correlations is to generate original models of both weighting functions, the left and right functions, as references for their synthetic models. The overall process of the authentication is divided into two main stages: pixel predictive coding and weighting function estimation. In the former stage, the set of intensity pairs {Il,Ir} is computed by exploiting a pixel extrapolation technique. The least-squares estimation method is then employed to yield the weighted coefficients. We show the efficacy of the proposed scheme in revealing the splining artifacts. We believe that this is the first work that exposes the image splining artifact as evidence of digital tampering.
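As an illustration of the least-squares step described above, the sketch below recovers a constant blending weight w from intensity pairs, assuming the splined pixel obeys the simple model Ic = w*Il + (1 - w)*Ir. This is a hypothetical simplification of the paper's per-position weighting functions; the function name and synthetic data are invented for illustration.

```python
# Hypothetical sketch: estimate a constant blending weight w from
# intensity pairs (Il, Ir) and observed composite values Ic, assuming
# the model Ic = w*Il + (1 - w)*Ir.  Least squares then gives
# w = sum((Il - Ir)*(Ic - Ir)) / sum((Il - Ir)**2).
def estimate_blend_weight(left, right, composite):
    num = sum((l - r) * (c - r) for l, r, c in zip(left, right, composite))
    den = sum((l - r) ** 2 for l, r in zip(left, right))
    return num / den

# Synthetic check: blend two "scanlines" with w = 0.7 and recover it.
Il = [120.0, 130.0, 140.0, 150.0]
Ir = [80.0, 90.0, 100.0, 110.0]
w_true = 0.7
Ic = [w_true * l + (1 - w_true) * r for l, r in zip(Il, Ir)]
print(round(estimate_blend_weight(Il, Ir, Ic), 3))  # 0.7
```

In the paper's setting the weighting function varies across the transition zone, so the fit would be repeated per position rather than once globally.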
Overweight or about right? A norm comparison explanation of perceived weight status
Kersbergen, I.
2017-01-01
Summary Objectives Body‐weight norms may explain why personal evaluations of weight status are often inaccurate. Here, we tested a ‘norm comparison’ explanation of weight status perceptions, whereby personal evaluations of weight status are biased by perceived body‐weight norms. Methods Study 1 examined whether perceptions of how one's own body weight compares to an average person predict personal evaluations of weight status. Study 2 examined whether manipulating perceptions of how one's own body weight compares to an average person influences whether or not a person identifies their own weight status as being overweight. Results In Study 1, if participants rated their body weight as being similar to the body weight of an average person, they were less likely to identify their weight status as being overweight. In Study 2, participants who were led to believe that their body weight was heavier than the average person's were more likely to perceive their own weight status as being overweight. Conclusions Personal perceptions of weight status are likely to be shaped by a ‘norm comparison’ process. As overweight becomes more normal, underestimation of weight status amongst individuals with overweight and obesity will be more common.
High average power, high current pulsed accelerator technology
Neau, E.L.
1995-05-01
High current pulsed accelerator technology was developed during the late 60's through the late 80's to satisfy the needs of various military related applications such as effects simulators, particle beam devices, free electron lasers, and as drivers for Inertial Confinement Fusion devices. The emphasis in these devices is to achieve very high peak power levels, with pulse lengths on the order of a few 10's of nanoseconds, peak currents of up to 10's of MA, and accelerating potentials of up to 10's of MV. New high average power systems, incorporating thermal management techniques, are enabling the potential use of high peak power technology in a number of diverse industrial application areas such as materials processing, food processing, stack gas cleanup, and the destruction of organic contaminants. These systems employ semiconductor and saturable magnetic switches to achieve short pulse durations that can then be added to efficiently give MV accelerating potentials while delivering average power levels of a few 100's of kilowatts to perhaps many megawatts. The Repetitive High Energy Pulsed Power project is developing short-pulse, high current accelerator technology capable of generating beams with kJ's of energy per pulse delivered to areas of 1000 cm{sup 2} or more using ions, electrons, or x-rays. Modular technology is employed to meet the needs of a variety of applications requiring from 100's of kV to MV's and from 10's to 100's of kA. Modest repetition rates, up to a few 100's of pulses per second (PPS), allow these machines to deliver average currents on the order of a few 100's of mA. The design and operation of the second generation 300 kW RHEPP-II machine, now being brought on-line to operate at 2.5 MV, 25 kA, and 100 PPS, will be described in detail as one example of the new high average power, high current pulsed accelerator technology.
White, Della B; Bursac, Zoran; Dilillo, Vicki; West, Delia S
2011-11-01
African-American women with type 2 diabetes experience limited weight loss in behavioral weight control programs. Some research suggests that overly ambitious weight loss expectations may negatively affect weight losses achieved but it is unknown whether they affect weight loss among African-American women. The current study examined personal weight loss goals and expected satisfaction with a reasonable weight loss among African-American women with type 2 diabetes starting a behavioral obesity treatment. We also explored associations among these factors and weight loss treatment outcomes. Self-identified African-American women (N = 84) in a 24-session group program were assessed at baseline and 6-month follow-up. At baseline, women indicated weight loss goals of 14.1 ± 6.6 kg (14% of initial weight). They also reported relatively high expected satisfaction with a reasonable weight loss (7-10%). On average, participants lost 3.0 ± 3.9 kg (3% of initial weight) and attended 73 ± 21% of group sessions. Neither weight loss goals nor expected satisfaction with a reasonable weight loss was correlated with either actual weight loss outcome or attendance. Having higher personal weight loss goals was associated with lower expectations of satisfaction with a reasonable weight loss. This suggests that African-American women with type 2 diabetes enter treatment hoping to lose far more weight than they are likely to achieve. It is important to understand the psychosocial sequelae of failing to reach these goals on subsequent weight maintenance and future weight loss attempts within this population.
Evaluation of a Viscosity-Molecular Weight Relationship.
ERIC Educational Resources Information Center
Mathias, Lon J.
1983-01-01
Background information, procedures, and results are provided for a series of graduate/undergraduate polymer experiments. These include synthesis of poly(methylmethacrylate), a viscosity experiment (indicating the large effect even small amounts of a polymer may have on solution properties), and measurement of weight-average molecular weight by light…
Zaks, Michael A; Goldobin, Denis S
2010-01-01
A recent paper claims that mean characteristics of chaotic orbits differ from the corresponding values averaged over the set of unstable periodic orbits, embedded in the chaotic attractor. We demonstrate that the alleged discrepancy is an artifact of the improper averaging. Since the natural measure is nonuniformly distributed over the attractor, different periodic orbits make different contributions into the time averages. As soon as the corresponding weights are accounted for, the discrepancy disappears.
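The point about improper averaging can be made concrete with a toy calculation (not the paper's actual computation): when the natural measure is nonuniform, a plain average over periodic orbits disagrees with the measure-weighted average that a time average over the chaotic trajectory would produce. The orbit values and weights below are invented for illustration.

```python
# Illustrative sketch: averaging an observable over "periodic orbits"
# with and without natural-measure weights.  Orbits visited more often
# by the chaotic trajectory must contribute proportionally more.
orbits = [
    {"value": 1.0, "weight": 0.7},  # orbit carrying most of the natural measure
    {"value": 3.0, "weight": 0.2},
    {"value": 5.0, "weight": 0.1},
]

unweighted = sum(o["value"] for o in orbits) / len(orbits)
weighted = sum(o["value"] * o["weight"] for o in orbits)  # weights sum to 1

print(unweighted)  # 3.0 -- the "improper" plain average over orbits
print(weighted)    # 1.8 -- the measure-weighted (time-average-consistent) value
```

The apparent "discrepancy" between 3.0 and 1.8 disappears once the weights are accounted for, which is the paper's argument in miniature.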
Cleaning Physical Education Areas.
ERIC Educational Resources Information Center
Griffin, William R.
1999-01-01
Discusses techniques to help create clean and inviting school locker rooms. Daily, weekly or monthly, biannual, and annual cleaning strategies for locker room showers are highlighted as are the specialized maintenance needs for aerobic and dance areas, running tracks, and weight training areas. (GR)
Weight Loss Nutritional Supplements
NASA Astrophysics Data System (ADS)
Eckerson, Joan M.
Obesity has reached what may be considered epidemic proportions in the United States, not only for adults but for children. Because of the medical implications and health care costs associated with obesity, as well as the negative social and psychological impacts, many individuals turn to nonprescription nutritional weight loss supplements hoping for a quick fix, and the weight loss industry has responded by offering a variety of products that generate billions of dollars each year in sales. Most nutritional weight loss supplements are purported to work by increasing energy expenditure, modulating carbohydrate or fat metabolism, increasing satiety, inducing diuresis, or blocking fat absorption. To review the literally hundreds of nutritional weight loss supplements available on the market today is well beyond the scope of this chapter. Therefore, several of the most commonly used supplements were selected for critical review, and practical recommendations are provided based on the findings of well controlled, randomized clinical trials that examined their efficacy. In most cases, the nutritional supplements reviewed either elicited no meaningful effect or resulted in changes in body weight and composition that are similar to what occurs through a restricted diet and exercise program. Although there is some evidence to suggest that herbal forms of ephedrine, such as ma huang, combined with caffeine or with caffeine and aspirin (i.e., ECA stack) are effective for inducing moderate weight loss in overweight adults, because of the recent ban on ephedra, manufacturers must now use ephedra-free ingredients, such as bitter orange, which do not appear to be as effective. The dietary fiber glucomannan also appears to hold some promise as a possible treatment for weight loss, but other related forms of dietary fiber, including guar gum and psyllium, are ineffective.
Code of Federal Regulations, 2011 CFR
2011-07-01
... average and corporate pool average sulfur level determined? 80.205 Section 80.205 Protection of... ADDITIVES Gasoline Sulfur Gasoline Sulfur Standards § 80.205 How is the annual refinery or importer average and corporate pool average sulfur level determined? (a) The annual refinery or importer average...
Code of Federal Regulations, 2013 CFR
2013-07-01
... average and corporate pool average sulfur level determined? 80.205 Section 80.205 Protection of... ADDITIVES Gasoline Sulfur Gasoline Sulfur Standards § 80.205 How is the annual refinery or importer average and corporate pool average sulfur level determined? (a) The annual refinery or importer average...
Code of Federal Regulations, 2010 CFR
2010-07-01
... average and corporate pool average sulfur level determined? 80.205 Section 80.205 Protection of... ADDITIVES Gasoline Sulfur Gasoline Sulfur Standards § 80.205 How is the annual refinery or importer average and corporate pool average sulfur level determined? (a) The annual refinery or importer average...
Code of Federal Regulations, 2014 CFR
2014-07-01
... average and corporate pool average sulfur level determined? 80.205 Section 80.205 Protection of... ADDITIVES Gasoline Sulfur Gasoline Sulfur Standards § 80.205 How is the annual refinery or importer average and corporate pool average sulfur level determined? (a) The annual refinery or importer average...
Popular weight reduction diets.
Volpe, Stella Lucia
2006-01-01
The percentage of people who are overweight and obese has increased tremendously over the last 30 years. It has become a worldwide epidemic. This is evident in the number of children being diagnosed with a body mass index >85th percentile, and the number of children being diagnosed with type 2 diabetes mellitus, a disease previously reserved for adults. The weight loss industry has also gained from this epidemic; it is a billion dollar industry. People spend large sums of money on diet pills, remedies, and books, with the hope of losing weight permanently. Despite these efforts, the number of individuals who are overweight or obese continues to increase. Obesity is a complex, multifactorial disorder. It would be impossible to address all aspects of diet, exercise, and weight loss in this review. Therefore, this article will review popular weight loss diets, with particular attention given to comparing low fat diets with low carbohydrate diets. In addition, the role that the environment plays in both diet and exercise and how they impact obesity will be addressed. Finally, the National Weight Control Registry will be discussed.
Reciprocity of weighted networks
Squartini, Tiziano; Picciolo, Francesco; Ruzzenenti, Franco; Garlaschelli, Diego
2013-01-01
In directed networks, reciprocal links have dramatic effects on dynamical processes, network growth, and higher-order structures such as motifs and communities. While the reciprocity of binary networks has been extensively studied, that of weighted networks is still poorly understood, implying an ever-increasing gap between the availability of weighted network data and our understanding of their dyadic properties. Here we introduce a general approach to the reciprocity of weighted networks, and define quantities and null models that consistently capture empirical reciprocity patterns at different structural levels. We show that, counter-intuitively, previous reciprocity measures based on the similarity of mutual weights are uninformative. By contrast, our measures make it possible to consistently classify different weighted networks according to their reciprocity, track the evolution of a network's reciprocity over time, identify patterns at the level of dyads and vertices, and distinguish the effects of flux (im)balances or other (a)symmetries from a true tendency towards (anti-)reciprocation. PMID:24056721
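A basic quantity from this line of work can be sketched directly: the reciprocated portion of each directed weight is min(w_ij, w_ji), and the weighted reciprocity is the reciprocated weight divided by the total weight. This is a minimal sketch of that single ratio; the paper's null models and vertex-level measures are not reproduced, and the toy edge list is invented.

```python
# Hedged sketch of the basic weighted-reciprocity ratio
# r = W_reciprocated / W_total, where the reciprocated part of
# each directed link i->j is min(w_ij, w_ji).
def weighted_reciprocity(w):
    """w: dict mapping (i, j) -> positive weight of the directed edge i->j."""
    total = sum(w.values())
    recip = sum(min(wij, w.get((j, i), 0.0)) for (i, j), wij in w.items())
    return recip / total

# Toy network: a<->b partially reciprocated, a->c fully unreciprocated.
edges = {("a", "b"): 3.0, ("b", "a"): 1.0, ("a", "c"): 2.0}
print(weighted_reciprocity(edges))  # 2/6 ≈ 0.333
```

A binary reciprocity measure would count the a-b dyad as fully reciprocated; the weighted ratio registers that only 1 unit of the 3 flowing a->b is returned.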
Precipitation interpolation in mountainous areas
NASA Astrophysics Data System (ADS)
Kolberg, Sjur
2015-04-01
Different precipitation interpolation techniques as well as external drift covariates are tested and compared in a 26000 km2 mountainous area in Norway, using daily data from 60 stations. The main method of assessment is cross-validation. Annual precipitation in the area varies from below 500 mm to more than 2000 mm. The data were corrected for wind-driven undercatch according to operational standards. While temporal evaluation produces seemingly acceptable at-station correlation values (on average around 0.6), the average daily spatial correlation is less than 0.1. When bias is also penalised, Nash-Sutcliffe R2 values are negative for spatial correspondence, and around 0.15 for temporal. Despite largely violated assumptions, plain Kriging produces better results than simple inverse distance weighting. More surprisingly, the presumed 'worst-case' benchmark of no interpolation at all, simply averaging all 60 stations for each day, actually outperformed the standard interpolation techniques. For logistic reasons, high altitudes are under-represented in the gauge network. The possible effect of this was investigated by a) fitting a precipitation lapse rate as an external drift, and b) applying a linear model of orographic enhancement (Smith and Barstad, 2004). These techniques improved the results only marginally. The gauge density in the region is one for each 433 km2, higher than the overall density of the Norwegian national network. Admittedly the cross-validation technique reduces the effective gauge density; still, the results suggest that we are far from able to provide hydrological models with adequate data for the main driving force.
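The inverse distance weighting baseline mentioned above is simple enough to sketch. This is a generic IDW implementation under standard assumptions (power-2 weights, exact match at a gauge), not the study's operational code; the gauge coordinates and values are made up.

```python
# Minimal inverse-distance-weighting (IDW) sketch of the kind used as a
# baseline in such comparisons.  Estimates at a point are weighted
# averages of gauge values, with weights 1/d^power.
def idw(stations, x, y, power=2.0):
    """stations: list of (sx, sy, value); returns the IDW estimate at (x, y)."""
    num = den = 0.0
    for sx, sy, v in stations:
        d2 = (sx - x) ** 2 + (sy - y) ** 2
        if d2 == 0.0:
            return v  # exactly on a gauge: return its value
        w = 1.0 / d2 ** (power / 2.0)
        num += w * v
        den += w
    return num / den

gauges = [(0.0, 0.0, 10.0), (1.0, 0.0, 20.0)]
print(idw(gauges, 0.5, 0.0))  # midpoint: equal weights -> 15.0
```

The study's "no interpolation" benchmark corresponds to replacing the distance weights with a uniform weight over all 60 stations, which is what made its strong performance so surprising.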
Instantaneous, phase-averaged, and time-averaged pressure from particle image velocimetry
NASA Astrophysics Data System (ADS)
de Kat, Roeland
2015-11-01
Recent work on pressure determination using velocity data from particle image velocimetry (PIV) resulted in approaches that allow for instantaneous and volumetric pressure determination. However, applying these approaches is not always feasible (e.g. due to resolution, access, or other constraints) or desired. In those cases pressure determination approaches using phase-averaged or time-averaged velocity provide an alternative. To assess the performance of these different pressure determination approaches against one another, they are applied to a single data set and their results are compared with each other and with surface pressure measurements. For this assessment, the data set of a flow around a square cylinder (de Kat & van Oudheusden, 2012, Exp. Fluids 52:1089-1106) is used. RdK is supported by a Leverhulme Trust Early Career Fellowship.
Determining average path length and average trapping time on generalized dual dendrimer
NASA Astrophysics Data System (ADS)
Li, Ling; Guan, Jihong
2015-03-01
Dendrimers have a wide range of important applications in various fields. In some cases during transport or diffusion processes, a dendrimer transforms into its dual structure, named the Husimi cactus. In this paper, we study the structural properties and the trapping problem on a family of generalized dual dendrimers with arbitrary coordination numbers. We first calculate exactly the average path length (APL) of the networks. The APL increases logarithmically with the network size, indicating that the networks exhibit a small-world effect. Then we determine the average trapping time (ATT) of the trapping process in two cases, i.e., the trap placed on a central node, and the trap uniformly distributed over all the nodes of the network. In both cases, we obtain explicit solutions for the ATT and show how they vary with the network size. Besides, we also discuss the influence of the coordination number on trapping efficiency.
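The quantity being computed exactly in the paper, the average path length, can be illustrated numerically for any small graph by breadth-first search. This generic sketch does not reproduce the dendrimer or Husimi-cactus structure itself, just the APL definition (mean shortest-path distance over all ordered node pairs).

```python
from collections import deque

# Generic sketch: average path length (APL) of an unweighted graph
# via breadth-first search from every node.
def average_path_length(adj):
    """adj: dict node -> list of neighbours; returns mean shortest-path length."""
    nodes = list(adj)
    total = pairs = 0
    for s in nodes:
        dist = {s: 0}
        q = deque([s])
        while q:               # standard BFS from source s
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for t in nodes:
            if t != s:
                total += dist[t]
                pairs += 1
    return total / pairs

# Star graph: hub 0 with three leaves; leaves are 2 apart, hub-leaf is 1.
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
print(average_path_length(star))  # 1.5
```

For the generalized dual dendrimers, the paper derives this quantity in closed form rather than by enumeration, which is how the logarithmic (small-world) scaling with network size is established.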
Light weight phosphate cements
Wagh, Arun S.; Natarajan, Ramkumar; Kahn, David
2010-03-09
A sealant having a specific gravity in the range of from about 0.7 to about 1.6 for heavy oil and/or coal bed methane fields is disclosed. The sealant has a binder including an oxide or hydroxide of Al or of Fe and a phosphoric acid solution. The binder may have MgO or an oxide of Fe and/or an acid phosphate. The binder is present from about 20 to about 50% by weight of the sealant with a lightweight additive present in the range of from about 1 to about 10% by weight of said sealant, a filler, and water sufficient to provide chemically bound water present in the range of from about 9 to about 36% by weight of the sealant when set. A porous ceramic is also disclosed.
Generalized constructive tree weights
Rivasseau, Vincent; Tanasa, Adrian E-mail: adrian.tanasa@ens-lyon.org
2014-04-15
The Loop Vertex Expansion (LVE) is a quantum field theory (QFT) method which explicitly computes the Borel sum of Feynman perturbation series. This LVE relies in a crucial way on symmetric tree weights which define a measure on the set of spanning trees of any connected graph. In this paper we generalize this method by defining new tree weights. They depend on the choice of a partition of a set of vertices of the graph, and when the partition is non-trivial, they are no longer symmetric under permutation of vertices. Nevertheless we prove they have the required positivity property to lead to a convergent LVE; in fact we formulate this positivity property precisely for the first time. Our generalized tree weights are inspired by the Brydges-Battle-Federbush work on cluster expansions and could be particularly suited to the computation of connected functions in QFT. Several concrete examples are explicitly given.
ERIC Educational Resources Information Center
Pape, K. E.; And Others
1978-01-01
For availability see EC 103 548. Among findings of a 2-year followup study of 43 infants of birth weight less than 1000 grams were the following: average height at age 2 years was between the tenth and twenty-fifth percentiles; average weight was between the third and tenth percentiles; 15 Ss developed lower respiratory tract infections during the…
Weighted Uncertainty Relations
Xiao, Yunlong; Jing, Naihuan; Li-Jost, Xianqing; Fei, Shao-Ming
2016-01-01
Recently, Maccone and Pati have given two stronger uncertainty relations based on the sum of variances and one of them is nontrivial when the quantum state is not an eigenstate of the sum of the observables. We derive a family of weighted uncertainty relations to provide an optimal lower bound for all situations and remove the restriction on the quantum state. Generalization to multi-observable cases is also given and an optimal lower bound for the weighted sum of the variances is obtained in general quantum situation. PMID:26984295
Behavioral transitions and weight change patterns within the PREMIER trial.
Bartfield, Jessica K; Stevens, Victor J; Jerome, Gerald J; Batch, Bryan C; Kennedy, Betty M; Vollmer, William M; Harsha, David; Appel, Lawrence J; Desmond, Renee; Ard, Jamy D
2011-08-01
Little is known about the transition in behaviors from short-term weight loss to maintenance of weight loss. We wanted to determine how short-term and long-term weight loss and patterns of weight change were associated with intervention behavioral targets. This analysis includes overweight/obese participants in active treatment (n = 507) from the previously published PREMIER trial, an 18-month, multicomponent lifestyle intervention for blood pressure reduction, including 33 intervention sessions and recommendations to self-monitor food intake and physical activity daily. Associations between behaviors (attendance, recorded days/week of physical activity, food records/week) and weight loss of ≥5% at 6 and 18 months were examined using logistic regression. We characterized the sample using 5 weight change categories (weight gained, weight stable, weight loss then relapse, late weight loss, and weight loss then maintenance) and analyzed adherence to the behaviors for each category, comparing means with ANOVA. Participants lost an average of 5.3 ± 5.6 kg at 6 months and 4.0 ± 6.7 kg (4.96% of body weight) by 18 months. Higher levels of attendance, food record completion, and recorded days/week of physical activity were associated with increasing odds of achieving 5% weight loss. All weight change groups had declines in the behaviors over time; however, compared to the other four groups, the weight loss/maintenance group (n = 154) had statistically less significant decline in number of food records/week (48%), recorded days/week of physical activity (41.7%), and intervention sessions attended (12.8%) through 18 months. Behaviors associated with short-term weight loss continue to be associated with long-term weight loss, albeit at lower frequencies. Minimizing the decline in these behaviors may be important in achieving long-term weight loss.
Paper area density measurement from forward transmitted scattered light
Koo, Jackson C.
2001-01-01
A method whereby the average paper fiber area density (weight per unit area) can be directly calculated from the intensity of transmitted, scattered light at two different wavelengths, one being a non-absorbed wavelength. The method also makes it possible to derive the water percentage per fiber area density from a two-wavelength measurement. In this optical measuring technique, the transmitted intensity at, for example, the 2.1 micron cellulose absorption line is measured and compared with another scattered, transmitted intensity reference in a nearby spectral region, such as 1.68 microns, where there is no absorption. From the ratio of these two intensities, one can calculate the scattering absorption coefficient at 2.1 microns. The absorption coefficient at this wavelength is then experimentally correlated to the paper fiber area density. The water percentage per fiber area density can be derived from this two-wavelength measurement approach.
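The ratio step described above can be sketched under a deliberately simplified assumption: if the absorbing-wavelength transmission falls off roughly exponentially with fiber area density while the reference wavelength does not, the density can be read off the intensity ratio. The exponential model and the calibration constant k are illustrative assumptions, not the patent's calibration.

```python
import math

# Illustrative sketch only: area density from a two-wavelength intensity
# ratio, assuming I_abs / I_ref = exp(-k * density).  The calibration
# constant k is invented for illustration; in practice it would be fit
# experimentally, as the text describes.
def area_density(i_absorbed, i_reference, k=0.05):
    """Return fiber area density (arbitrary units) from the two intensities."""
    ratio = i_absorbed / i_reference
    return -math.log(ratio) / k

print(round(area_density(0.6, 1.0), 3))  # 10.217
```

Using a ratio of two nearby wavelengths is what cancels the common scattering losses, so only the cellulose absorption at 2.1 microns remains in the measurement.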
20 CFR 226.62 - Computing average monthly compensation.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true Computing average monthly compensation. 226.62... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Years of Service and Average Monthly Compensation § 226.62 Computing average monthly compensation. The employee's average monthly compensation is...
40 CFR 1033.710 - Averaging emission credits.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Averaging emission credits. 1033.710... Averaging emission credits. (a) Averaging is the exchange of emission credits among your engine families. You may average emission credits only as allowed by § 1033.740. (b) You may certify one or more...
20 CFR 226.62 - Computing average monthly compensation.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Computing average monthly compensation. 226... RETIREMENT ACT COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Years of Service and Average Monthly Compensation § 226.62 Computing average monthly compensation. The employee's average monthly compensation...
... diet; VLCD; Low-calorie diet; LCD; Very low energy diet; Weight loss - rapid weight loss; Overweight - rapid ... AM, Aveyard P. Clinical effectiveness of very-low-energy diets in the management of weight loss: a ...
Weight and Diabetes (For Parents)
... your child lose weight to control diabetes, a weight management plan may be created. Even if your child's ... overweight, talk to your doctor about beginning a weight management program so you can set a good example. ...
Andersson, Neil; Mitchell, Steven
2006-01-01
Evaluation of mine risk education in Afghanistan used population weighted raster maps as an evaluation tool to assess mine education performance, coverage and costs. A stratified last-stage random cluster sample produced representative data on mine risk and exposure to education. Clusters were weighted by the population they represented, rather than the land area. A "friction surface" hooked the population weight into interpolation of cluster-specific indicators. The resulting population weighted raster contours offer a model of the population effects of landmine risks and risk education. Five indicator levels ordered the evidence from simple description of the population-weighted indicators (level 0), through risk analysis (levels 1–3) to modelling programme investment and local variations (level 4). Using graphic overlay techniques, it was possible to metamorphose the map, portraying the prediction of what might happen over time, based on the causality models developed in the epidemiological analysis. Based on a lattice of local site-specific predictions, each cluster being a small universe, the "average" prediction was immediately interpretable without losing the spatial complexity. PMID:16390549
Brief report: Weight dissatisfaction, weight status, and weight loss in Mexican-American children
Technology Transfer Automated Retrieval System (TEKTRAN)
The study objectives were to assess the association between weight dissatisfaction, weight status, and weight loss in Mexican-American children participating in a weight management program. Participants included 265 Mexican American children recruited for a school-based weight management program. Al...
Avakian, Harut; Gamberg, Leonard; Rossi, Patrizia; Prokudin, Alexei
2016-05-01
We review the concept of Bessel weighted asymmetries for semi-inclusive deep inelastic scattering and focus on the cross section in Fourier space, conjugate to the outgoing hadron’s transverse momentum, where convolutions of transverse momentum dependent parton distribution functions and fragmentation functions become simple products. Individual asymmetric terms in the cross section can be projected out by means of a generalized set of weights involving Bessel functions. The procedure is applied to studies of the double longitudinal spin asymmetry in semi-inclusive deep inelastic scattering using a new dedicated Monte Carlo generator which includes quark intrinsic transverse momentum within the generalized parton model. We observe a few percent systematic offset of the Bessel-weighted asymmetry obtained from Monte Carlo extraction compared to input model calculations, which is due to the limitations imposed by the energy and momentum conservation at the given energy and hard scale Q2. We find that the Bessel weighting technique provides a powerful and reliable tool to study the Fourier transform of TMDs with controlled systematics due to experimental acceptances and resolutions with different TMD model inputs.
ERIC Educational Resources Information Center
Sherman, Rachel M.
1997-01-01
Examines ways of giving an existing weight-training room new life without spending a lot of time and money. Tips include adding rubber floor coverings; using indirect lighting; adding windows, art work, or mirrors to open up the room; using more aesthetically pleasing ceiling tiles; upgrading ventilation; repadding or painting the equipment; and…
Wong, Christopher J
2014-05-01
Involuntary weight loss remains an important and challenging clinical problem, with a high degree of morbidity and mortality. Because of the frequency of finding a serious underlying diagnosis, clinicians must be thorough in assessment, keeping in mind a broad range of possible causes. Although prediction scores exist, they have not been broadly validated; therefore, clinical judgment remains ever essential.
ERIC Educational Resources Information Center
Nutter, June
1995-01-01
Secondary level physical education teachers can have their students use math concepts while working out on the weight-room equipment. The article explains how students can reinforce math skills while weightlifting by estimating their strength, estimating their power, or calculating other formulas. (SM)
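One of the calculations students might do in such a lesson can be shown directly: mechanical power for a lift is work over time, P = m * g * h / t. The numbers below are a made-up example, not from the article.

```python
# Toy example of weight-room arithmetic: mechanical power for a lift,
# P = (mass * g * height) / time, in watts.
def lift_power(mass_kg, height_m, time_s, g=9.81):
    """Average power output for raising mass_kg through height_m in time_s."""
    return mass_kg * g * height_m / time_s

# A hypothetical 50 kg bench press moved 0.5 m in 1 second:
print(round(lift_power(50.0, 0.5, 1.0), 2))  # 245.25 (watts)
```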
... term use. All other drugs are approved for short-term use of no more than a few weeks. Be sure you understand the side effects of weight-loss medicines. Side effects can include: Increase in blood pressure Problems sleeping, headache, nervousness, and palpitations Nausea, constipation, and dry ...
Dynamic Weighted Data Structures.
1982-06-01
Samuel W. Bent. This thesis discusses...and Bonnie Hampton, who taught me much more than how to play the cello. Finally, for hours of artistic satisfaction, I thank Johannes Brahms, Ludwig van Beethoven, Igor Stravinsky, Gian-Carlo Menotti, and Johann Sebastian Bach.
Menichetti, Giulia; Remondini, Daniel; Panzarasa, Pietro; Mondragón, Raúl J.; Bianconi, Ginestra
2014-01-01
One of the most important challenges in network science is to quantify the information encoded in complex network structures. Disentangling randomness from organizational principles is even more demanding when networks have a multiplex nature. Multiplex networks are multilayer systems of nodes that can be linked in multiple interacting and co-evolving layers. In these networks, relevant information might not be captured if the single layers were analyzed separately. Here we demonstrate that such partial analysis of layers fails to capture significant correlations between weights and topology of complex multiplex networks. To this end, we study two weighted multiplex co-authorship and citation networks involving the authors included in the American Physical Society. We show that in these networks weights are strongly correlated with multiplex structure, and provide empirical evidence in favor of the advantage of studying weighted measures of multiplex networks, such as multistrength and the inverse multiparticipation ratio. Finally, we introduce a theoretical framework based on the entropy of multiplex ensembles to quantify the information stored in multiplex networks that would remain undetected if the single layers were analyzed in isolation. PMID:24906003
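The two weighted measures named above can be sketched for a single node: the (multi)strength is the sum of the node's link weights in a layer, and the inverse participation ratio Y_i = sum_j (w_ij / s_i)^2 is large when that strength is concentrated on few neighbours. This is a minimal single-node sketch under that standard definition; the toy layer data are invented.

```python
# Hedged sketch of per-layer strength and the inverse participation
# ratio Y_i = sum_j (w_ij / s_i)^2 for one node i of a multiplex network.
def strength(weights):
    return sum(weights)

def inverse_participation_ratio(weights):
    s = strength(weights)
    return sum((w / s) ** 2 for w in weights)

# Node i's link weights in two layers (e.g. co-authorship vs citation):
layer_weights = {"coauthor": [4.0, 1.0, 1.0], "citation": [2.0, 2.0, 2.0]}
for layer, w in sorted(layer_weights.items()):
    print(layer, strength(w), round(inverse_participation_ratio(w), 3))
# citation 6.0 0.333  -- same strength, evenly spread
# coauthor 6.0 0.5    -- same strength, concentrated on one neighbour
```

Both layers give the same strength, so only the participation ratio distinguishes the two weight patterns, which is the kind of layer-dependent information the paper argues is lost when layers are analyzed in isolation.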
Executive functions predict weight loss in a medically supervised weight loss programme
Bond, D.; Gunstad, J.; Pera, V.; Rathier, L.; Tremont, G.
2016-01-01
Summary Background Deficits in executive functions are related to poorer weight loss after bariatric surgery; however, less is known about the role that these deficits may play during participation in nonsurgical weight loss programmes. This study examined associations between objectively measured executive functions and weight loss during participation in a medically supervised weight loss programme. Methods Twenty‐three adult patients (age 50.4 ± 15.1, BMI 44.2 ± 8.8, 68% female, 92% White) enrolled in a medically supervised weight loss programme, involving prescription of a very low calorie diet and strategies to change eating and activity behaviours, underwent comprehensive computerized testing of executive functions at baseline. Weight was obtained at baseline and 8 weeks. Demographic and clinical information was obtained through medical chart review. Results Participants lost an average of 9.8 ± 3.4% of their initial body weight at 8 weeks. Fewer correct responses on a set‐shifting task and faster reaction time on a response inhibition task were associated with a lower weight loss percentage at 8 weeks after adjusting for age, education and depressive symptoms. There were no associations between performance on tests of working memory or planning and weight loss. Conclusions This study shows that worse performance on a set‐shifting task (indicative of poorer cognitive flexibility) and faster reaction times on a response inhibition test (indicative of higher impulsivity) are associated with lower weight loss among participants in a medically supervised weight loss programme. Pre‐treatment assessment of executive functions may be useful in identifying individuals who may be at risk for suboptimal treatment outcomes. Future research is needed to replicate these findings in larger samples and identify underlying mechanisms. PMID:28090338
Weight change among people randomized to minimal intervention control groups in weight loss trials
Johns, David J.; Hartmann‐Boyce, Jamie; Jebb, Susan A.; Aveyard, Paul
2016-01-01
Objective Evidence on the effectiveness of behavioral weight management programs often comes from uncontrolled program evaluations. These frequently make the assumption that, without intervention, people will gain weight. The aim of this study was to use data from minimal intervention control groups in randomized controlled trials to examine the evidence for this assumption and the effect of frequency of weighing on weight change. Methods Data were extracted from minimal intervention control arms in a systematic review of multicomponent behavioral weight management programs. Two reviewers classified control arms into three categories based on intensity of minimal intervention and calculated 12‐month mean weight change using baseline observation carried forward. Meta‐regression was conducted in STATA v12. Results Thirty studies met the inclusion criteria, twenty‐nine of which had usable data, representing 5,963 participants allocated to control arms. Control arms were categorized according to intensity, as offering leaflets only, a single session of advice, or more than one session of advice from someone without specialist skills in supporting weight loss. Mean weight change at 12 months across all categories was −0.8 kg (95% CI −1.1 to −0.4). In an unadjusted model, increasing intensity by moving up a category was associated with an additional weight loss of −0.53 kg (95% CI −0.96 to −0.09). Also in an unadjusted model, each additional weigh‐in was associated with a weight change of −0.42 kg (95% CI −0.81 to −0.03). However, when both variables were placed in the same model, neither intervention category nor number of weigh‐ins was associated with weight change. Conclusions Uncontrolled evaluations of weight loss programs should assume that, in the absence of intervention, their population would, on average, weigh up to a kilogram less than baseline at the end of the first year of follow‐up. PMID:27028279
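The imputation rule named in the Methods, baseline observation carried forward, can be sketched as follows (hypothetical data; under BOCF a dropout contributes a weight change of zero):

```python
# Baseline observation carried forward (BOCF): a participant missing the
# 12-month weight is assigned their baseline weight, i.e. a change of zero.

def bocf_mean_change(baseline, month12):
    """Mean 12-month weight change in kg; None in month12 marks a dropout."""
    changes = [(m12 - b0) if m12 is not None else 0.0
               for b0, m12 in zip(baseline, month12)]
    return sum(changes) / len(changes)

baseline = [90.0, 85.0, 100.0, 78.0]
month12 = [88.5, None, 99.0, 77.0]   # second participant dropped out
print(round(bocf_mean_change(baseline, month12), 3))  # -0.875
```

BOCF is conservative for weight-loss outcomes: the more dropouts, the more the mean change is pulled toward zero.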
Westphal, M; Frazier, E; Miller, M C
1979-01-01
A five-year review of accounting data at a university hospital shows that immediately following institution of concurrent PSRO admission and length of stay review of Medicare-Medicaid patients, there was a significant decrease in length of stay and a fall in average charges generated per patient against the inflationary trend. Similar changes did not occur for the non-Medicare-Medicaid patients who were not reviewed. The observed changes occurred even though the review procedure rarely resulted in the denial of services to patients, suggesting an indirect effect of review.
Averaging techniques for steady and unsteady calculations of a transonic fan stage
NASA Technical Reports Server (NTRS)
Wyss, M. L.; Chima, R. V.; Tweedt, D. L.
1993-01-01
It is often desirable to characterize a turbomachinery flow field with a few lumped parameters such as total pressure ratio or stage efficiency. Various averaging schemes may be used to compute these parameters. Here three averaging schemes, the momentum, energy, and area averaging schemes, are described and compared for two computed solutions of the midspan section of a transonic fan stage: a steady averaging-plane solution in which average rotor outflow conditions were used as stator inflow conditions and an unsteady rotor-stator interaction solution. The unsteady solution is described, some unsteady flow phenomena are discussed and the steady pressure distributions are compared. Despite large unsteady pressure fluctuations on the stator surface, the steady pressure distribution matched the average unsteady distribution almost exactly. Stator wake profiles, stator loss coefficient, and stage efficiency were computed for the two solutions with the three averaging schemes and are compared. In general the energy averaging scheme gave good agreement between the averaging-plane solution and the time-averaged unsteady solution, even though certain phenomena due to unsteady wake migration were neglected.
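Two of the averaging schemes compared can be sketched for a discretized flow station. This is a simplified illustration with hypothetical values: an area average and a mass-flux-weighted average of total pressure; the paper's momentum and energy averages involve additional flux terms omitted here:

```python
# Lumped-parameter averages across a flow station of cells with total
# pressure p_t, density rho, axial velocity u, and cell area dA.

def area_average(p_t, dA):
    """Area-weighted average of total pressure."""
    return sum(p * a for p, a in zip(p_t, dA)) / sum(dA)

def mass_average(p_t, rho, u, dA):
    """Mass-flux-weighted average: each cell weighted by rho*u*dA."""
    mdot = [r * v * a for r, v, a in zip(rho, u, dA)]
    return sum(p * m for p, m in zip(p_t, mdot)) / sum(mdot)

# hypothetical three-cell midspan station
p_t = [101_000.0, 103_000.0, 102_000.0]  # Pa
dA = [1.0, 1.0, 2.0]                     # m^2
rho = [1.2, 1.2, 1.2]                    # kg/m^3
u = [100.0, 150.0, 120.0]                # m/s
print(area_average(p_t, dA))             # 102000.0
print(round(mass_average(p_t, rho, u, dA), 1))
```

The two schemes differ whenever the flow is nonuniform, which is why the choice of average matters when reducing an unsteady rotor-stator solution to a single loss coefficient or efficiency.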
Choose to change maternity weight management pilot.
Lilley, Suzanne; Anderson, Kate; Benbow, Elizabeth
2014-07-01
Obesity during pregnancy is associated with an increased risk of adverse health outcomes. There is limited research available regarding effective interventions during pregnancy for obese women, and this is combined with inadequate local service provision to support obese mothers in Greater Manchester (GM). Choose to Change (CTC) aims to develop, deliver and evaluate a community-based weight management programme to limit excessive gestational weight gain. Participants (n=73) referred from January to December 2013 by Community Midwifery Teams (>18 years) with a BMI >30 attended a healthy lifestyle intervention (1:1 or group) covering nutrition, physical activity and behaviour change over 12 weeks. Baseline measures were weight, body mass index (BMI), blood pressure, physical activity, dietary habits and psychological questionnaires measuring anxiety, self-esteem and disordered eating. 28 clients were assigned to intervention (group, n=15; 1:1, n=13). Mean age was 29 (SD=5.78) and mean BMI at referral was 38.96 (SD=4.87). Descriptive statistics suggest an average weight gain for clients (excluding dropouts, n=12) of 0.94 kg (SD=6.65). For those who completed the programme (n=8), average weight gain was 1.03 kg (SD=7.71). Results vary according to intervention type: 1:1, 0.04 kg (SD=8.82 kg); group, 1.52 kg (SD=3.17 kg). The drop-out rate from referral to assessment was 62%, from assessment to intervention 32%, and during intervention 26%. Overall, the results of the present pilot study indicate that the CTC healthy lifestyle intervention can limit excessive gestational weight gain. CTC is looking at future directions for development, including changing the assessment procedure to improve DORs, further analysis of various mediating factors including BMI and intervention type, and exploration of post-measurements to show further improved health outcomes as the programme is rolled out across GM.
Potential of high-average-power solid state lasers
Emmett, J.L.; Krupke, W.F.; Sooy, W.R.
1984-09-25
We discuss the possibility of extending solid state laser technology to high average power and of improving the efficiency of such lasers sufficiently to make them reasonable candidates for a number of demanding applications. A variety of new design concepts, materials, and techniques have emerged over the past decade that, collectively, suggest that the traditional technical limitations on power (a few hundred watts or less) and efficiency (less than 1%) can be removed. The core idea is configuring the laser medium in relatively thin, large-area plates, rather than using the traditional low-aspect-ratio rods or blocks. This presents a large surface area for cooling, and assures that deposited heat is relatively close to a cooled surface. It also minimizes the laser volume distorted by edge effects. The feasibility of such configurations is supported by recent developments in materials, fabrication processes, and optical pumps. Two types of lasers can, in principle, utilize this sheet-like gain configuration in such a way that phase and gain profiles are uniformly sampled and, to first order, yield high-quality (undistorted) beams. The zig-zag laser does this with a single plate, and should be capable of power levels up to several kilowatts. The disk laser is designed around a large number of plates, and should be capable of scaling to arbitrarily high power levels.
NASA Technical Reports Server (NTRS)
Nack, M. L.; Curran, R. J.
1978-01-01
The dependence of the albedo at the top of a realistic atmosphere upon the surface albedo, solar zenith angle, and cloud optical thickness is examined for the cases of clear sky, total cloud cover, and fractional cloud cover. The radiative transfer calculations of Dave and Braslau (1975) for particular values of surface albedo and solar zenith angle, and a single value of cloud optical thickness are used as the basis of a parametric albedo model. The question of spectral and temporal averages of albedos and reflected irradiances is addressed, and unique weighting functions for the spectral and temporal albedo averages are developed.
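The spectral averaging of albedos described above amounts to an irradiance-weighted mean: the averaged albedo is the reflected irradiance divided by the incident irradiance. A minimal sketch with hypothetical band values:

```python
# Spectrally averaged albedo as an incident-irradiance-weighted average
# of the per-band albedo A(lambda). Band values below are hypothetical.

def spectral_mean_albedo(albedo, irradiance):
    """Weight each band's albedo by its incident solar irradiance."""
    reflected = sum(a * f for a, f in zip(albedo, irradiance))
    return reflected / sum(irradiance)

albedo = [0.5, 0.3, 0.2]            # per-band albedo
irradiance = [200.0, 500.0, 300.0]  # incident W/m^2 per band
print(spectral_mean_albedo(albedo, irradiance))  # 0.31
```

A temporal average works the same way, with the instantaneous incident irradiance as the weight, which is why an unweighted mean of albedos generally differs from the ratio of total reflected to total incident energy.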
Insulin sensitivity as a predictor of weight regain.
Wing, R R
1997-01-01
A recent study found that increases in insulin sensitivity following weight loss and stabilization were strongly related to subsequent weight regain. The present paper analyzed this relationship in two behavioral weight-loss programs. In the first study, 125 nondiabetic subjects were followed over 30 months; weight losses averaged 10 kg at six months, and subjects had regained 8 kg of their weight loss by their 30-month follow-up. Neither fasting insulin levels at six months nor changes in fasting insulin from zero to six months were related to subsequent weight regain. Similarly, insulin levels measured two hours after a 75 g glucose load were unrelated to subsequent weight regain. The second study followed 33 individuals with Type II diabetes, treated with behavior modification, and either a low calorie diet or a very low calorie diet. Weight losses averaged 18 kg at six months, and subjects had regained 10 kg by their 24-month follow-up. The Bergman minimal model was used to assess insulin sensitivity at 6-month intervals. Initial analyses suggested that changes in insulin sensitivity from zero to six months were related to subsequent weight regain, but this effect was strongly influenced by an outlier. After removing this individual, there were no significant relationships between the changes in insulin sensitivity that accompanied weight loss and future weight regain. Likewise, insulin sensitivity at 12 months did not predict weight regain from 12 to 24 months. These data do not support the hypothesis that increases in insulin sensitivity with weight loss are associated with subsequent weight regain.
Code of Federal Regulations, 2014 CFR
2014-07-01
... and average carbon-related exhaust emissions. 600.510-12 Section 600.510-12 Protection of Environment... Carbon-Related Exhaust Emissions § 600.510-12 Calculation of average fuel economy and average carbon.... (iv) (2) Average carbon-related exhaust emissions will be calculated to the nearest one gram per...
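In simplified form, the averaging this section of the CFR prescribes is a production-weighted harmonic mean for fuel economy and a production-weighted arithmetic mean, rounded to the nearest gram per mile, for carbon-related exhaust emissions. A sketch with hypothetical figures; the full rule includes vehicle-category and credit provisions omitted here:

```python
# Fleet averages in the style of 40 CFR 600.510-12 (simplified sketch).

def average_fuel_economy(volumes, mpg):
    """Production-weighted harmonic mean of model fuel economies."""
    return sum(volumes) / sum(v / m for v, m in zip(volumes, mpg))

def average_crec(volumes, grams_per_mile):
    """Production-weighted mean carbon-related exhaust emissions,
    to the nearest one gram per mile."""
    avg = sum(v * g for v, g in zip(volumes, grams_per_mile)) / sum(volumes)
    return round(avg)

volumes = [10_000, 30_000]   # hypothetical production volumes
mpg = [25.0, 40.0]
print(round(average_fuel_economy(volumes, mpg), 2))  # 34.78
print(average_crec(volumes, [355, 222]))             # 255
```

The harmonic mean is essential here: averaging 25 and 40 mpg arithmetically would overstate the fleet figure, because fuel consumed per mile, not miles per gallon, is what adds across vehicles.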
Effect of Parent Weight on Weight Loss in Obese Children.
ERIC Educational Resources Information Center
Epstein, Leonard H.; And Others
1986-01-01
Assessed effect of parent weight and parent control versus child self-control on weight loss in obese preadolescent children over three-year period. Children of nonobese parents had significantly greater decrease in relative weight after one year than children of obese parents. At three years, there was no effect of parent weight. Locus of control…
Bounds for weighted Lebesgue functions for exponential weights
NASA Astrophysics Data System (ADS)
Kubayi, D. G.
2001-08-01
The weighted Lebesgue functions for even weights W = exp(-Q) on the real line have been intensively studied in recent years. In this paper, we discuss the corresponding results for a class of weights that includes non-even weights.
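In the notation common in this literature (a sketch of the standard definition, not necessarily the paper's exact normalization), the weighted Lebesgue function for Lagrange interpolation at points x_1, ..., x_n is

```latex
\lambda_n(W,x) \;=\; W(x)\sum_{k=1}^{n}\frac{|\ell_k(x)|}{W(x_k)},
\qquad W(x)=e^{-Q(x)},
```

where the \ell_k are the fundamental Lagrange polynomials; bounds on \lambda_n govern the stability of weighted Lagrange interpolation.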
Are early first trimester weights valid proxies for preconception weight?
Technology Transfer Automated Retrieval System (TEKTRAN)
An accurate estimate of preconception weight is necessary for providing a gestational weight gain range based on the Institute of Medicine’s guidelines; however, an accurate and proximal preconception weight is not available for most women. We examined the validity of first trimester weights for est...
Particle sizing by weighted measurements of scattered light
NASA Technical Reports Server (NTRS)
Buchele, Donald R.
1988-01-01
A description is given of a measurement method, applicable to a polydispersion of particles, in which the intensity of scattered light at any angle is weighted by a factor proportional to that angle. Determination is then made of four angles at which the weighted intensity is four fractions of the maximum intensity. These yield four characteristic diameters: the volume/area mean (D_32, the Sauter mean), the volume/diameter mean (D_31), and the diameters at cumulative volume fractions of 0.5 (D_v0.5, the volume median) and 0.75 (D_v0.75). They also yield the volume dispersion of diameters. Mie scattering computations show that an average diameter less than three micrometers cannot be accurately measured. The results are relatively insensitive to extraneous background light and to the nature of the diameter distribution. Also described is an experimental method of verifying the conclusions by using two microscope slides coated with polystyrene microspheres to simulate the particles and the background.
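The mean diameters D_32 and D_31 follow the standard moment-ratio convention D_pq = (Σ n d^p / Σ n d^q)^(1/(p−q)); a sketch for a discrete size distribution with hypothetical counts (the volume-median diameters D_v0.5 and D_v0.75 come from the cumulative volume curve and are omitted here):

```python
# Moment-ratio mean diameters for a discrete particle size distribution
# n(d): D_pq = (sum n*d^p / sum n*d^q)^(1/(p-q)). For p=3, q=2 this is
# the Sauter mean D_32; for p=3, q=1 it is the volume/diameter mean D_31.

def d_pq(counts, diams, p, q):
    num = sum(n * d ** p for n, d in zip(counts, diams))
    den = sum(n * d ** q for n, d in zip(counts, diams))
    return (num / den) ** (1.0 / (p - q))

counts = [100, 50, 10]     # hypothetical number of particles per bin
diams = [1.0, 2.0, 4.0]    # bin diameters, micrometers
print(round(d_pq(counts, diams, 3, 2), 3))  # Sauter mean D_32
print(round(d_pq(counts, diams, 3, 1), 3))  # D_31
```

Because D_32 ratios total volume to total surface area, it is the natural average for scattering and spray applications, which is why the method targets it directly.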
Models for predicting objective function weights in prostate cancer IMRT
Boutilier, Justin J.; Lee, Taewoo; Craig, Tim; Sharpe, Michael B.; Chan, Timothy C. Y.
2015-04-15
Purpose: To develop and evaluate the clinical applicability of advanced machine learning models that simultaneously predict multiple optimization objective function weights from patient geometry for intensity-modulated radiation therapy of prostate cancer. Methods: A previously developed inverse optimization method was applied retrospectively to determine optimal objective function weights for 315 treated patients. The authors used an overlap volume ratio (OV) of bladder and rectum for different PTV expansions and overlap volume histogram slopes (OVSR and OVSB for the rectum and bladder, respectively) as explanatory variables that quantify patient geometry. Using the optimal weights as ground truth, the authors trained and applied three prediction models: logistic regression (LR), multinomial logistic regression (MLR), and weighted K-nearest neighbor (KNN). The population average of the optimal objective function weights was also calculated. Results: The OV at 0.4 cm and OVSR at 0.1 cm features were found to be the most predictive of the weights. The authors observed comparable performance (i.e., no statistically significant difference) between LR, MLR, and KNN methodologies, with LR appearing to perform the best. All three machine learning models outperformed the population average by a statistically significant amount over a range of clinical metrics including bladder/rectum V53Gy, bladder/rectum V70Gy, and dose to the bladder, rectum, CTV, and PTV. When comparing the weights directly, the LR model predicted bladder and rectum weights that had, on average, a 73% and 74% relative improvement over the population average weights, respectively. The treatment plans resulting from the LR weights had, on average, a rectum V70Gy that was 35% closer to the clinical plan and a bladder V70Gy that was 29% closer, compared to the population average weights. Similar results were observed for all other clinical metrics. Conclusions: The authors demonstrated that the KNN and MLR
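A distance-weighted KNN regressor of the kind compared here can be sketched as follows. The feature names (OV at 0.4 cm, OVSR at 0.1 cm) come from the abstract; the training data, the choice k=2, and the inverse-distance weighting are illustrative assumptions, not the authors' exact model:

```python
# Weighted K-nearest-neighbor prediction of objective-function weights
# from patient geometry features (hypothetical data).

def knn_predict(train_x, train_y, query, k=3):
    """Predict a weight vector as an inverse-distance-weighted average
    of the k nearest training patients."""
    dists = [(sum((a - b) ** 2 for a, b in zip(x, query)) ** 0.5, y)
             for x, y in zip(train_x, train_y)]
    nearest = sorted(dists)[:k]
    wts = [1.0 / (d + 1e-9) for d, _ in nearest]  # avoid divide-by-zero
    return [sum(w * y[j] for w, (_, y) in zip(wts, nearest)) / sum(wts)
            for j in range(len(train_y[0]))]

# features: (OV at 0.4 cm, OVSR at 0.1 cm); targets: (bladder wt, rectum wt)
train_x = [(0.2, 1.1), (0.5, 0.9), (0.8, 0.4), (0.3, 1.0)]
train_y = [(0.6, 0.4), (0.5, 0.5), (0.3, 0.7), (0.55, 0.45)]
print(knn_predict(train_x, train_y, (0.25, 1.05), k=2))
```

The population-average baseline the paper benchmarks against corresponds to the degenerate case k = len(train_x) with uniform weights, which ignores patient geometry entirely.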
Heterogeneous edge weights promote epidemic diffusion in weighted evolving networks
NASA Astrophysics Data System (ADS)
Duan, Wei; Song, Zhichao; Qiu, Xiaogang
2016-08-01
The impact that the heterogeneities of links’ weights have on epidemic diffusion in weighted networks has received much attention. Investigating how heterogeneous edge weights affect epidemic spread is helpful for disease control. In this paper, we study a Reed-Frost epidemic model in weighted evolving networks. Our results indicate that a higher heterogeneity of edge weights leads to a higher epidemic prevalence and epidemic incidence at the early stage of epidemic diffusion in weighted evolving networks. In addition, weighted evolving scale-free networks come with a higher epidemic prevalence and epidemic incidence than unweighted scale-free networks.
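One generation of a Reed-Frost chain-binomial process on a weighted contact network can be sketched as below. This is a minimal illustration: the per-contact transmission probability p and the way edge weight w scales it, p_edge = 1 − (1 − p)^w, are assumptions of this sketch, not necessarily the paper's exact formulation:

```python
# One Reed-Frost generation on a weighted network: each susceptible must
# independently escape infection from every infectious neighbour, where
# an edge of weight w contributes an escape factor (1 - p)**w.
import random

def reed_frost_step(adj, infected, susceptible, p, rng):
    """Return the set of newly infected nodes after one generation."""
    new_inf = set()
    for s in susceptible:
        escape = 1.0
        for i in infected:
            w = adj.get((s, i)) or adj.get((i, s)) or 0.0
            escape *= (1.0 - p) ** w
        if rng.random() < 1.0 - escape:
            new_inf.add(s)
    return new_inf

adj = {(0, 1): 3.0, (1, 2): 1.0, (0, 2): 0.5}  # hypothetical weighted edges
rng = random.Random(42)
gen1 = reed_frost_step(adj, infected={0}, susceptible={1, 2}, p=0.3, rng=rng)
print(gen1)
```

Under this scaling, a heavy edge such as (0, 1) transmits with probability 1 − 0.7³ ≈ 0.66 versus 0.3 for a unit-weight edge, which is the mechanism by which weight heterogeneity can accelerate early spread.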
High School Weight Training: A Comprehensive Program.
ERIC Educational Resources Information Center
Viscounte, Roger; Long, Ken
1989-01-01
Describes a weight training program, suitable for the general student population and the student-athlete, which is designed to produce improvement in specific, measurable areas including bench press (upper body), leg press (lower body), vertical jump (explosiveness); and 40-yard dash (speed). Two detailed charts are included, with notes on their…
Aubuchon, Mira; Liu, Ying; Petroski, Gregory F.; Thomas, Tom R.; Polotsky, Alex J.
2017-01-01
What is the impact of intentional weight loss and regain on serum androgens in women? We conducted an ancillary analysis of prospectively collected samples from a randomized controlled trial. The trial involved supervised 10% weight loss (8.5 kg on average) with diet and exercise over 4–6 months, followed by supervised intentional regain of 50% of the lost weight (4.6 kg on average) over 4–6 months. Participants were randomized prior to the partial weight regain component to either continuation or cessation of endurance exercise. The analytic sample included 30 obese premenopausal women (mean age 40 ± 5.9 years, mean baseline body mass index (BMI) 32.9 ± 4.2 kg/m2) with metabolic syndrome. We evaluated sex hormone binding globulin (SHBG), total testosterone (T), free androgen index (FAI), and high molecular weight adiponectin (HMWAdp). Insulin, homeostasis model assessment (HOMA), quantitative insulin sensitivity check index (QUICKI), and visceral adipose tissue (VAT) measured in the original trial were reanalyzed for the current analytic sample. Insulin, HOMA, and QUICKI improved with weight loss and were maintained despite weight regain. Log-transformed SHBG significantly increased from baseline to weight loss, and then significantly decreased with weight regain. LogFAI and logVAT showed the opposite pattern, decreasing with weight loss and then increasing with weight regain. No changes were found in logT and logHMWAdp. There was no significant difference in any tested parameter between the exercise groups. SHBG showed prominent sensitivity to body mass fluctuations, as its reduction with controlled intentional weight regain showed an inverse relationship to VAT and occurred despite stable HMWAdp and sustained improvements in insulin resistance. FAI showed changes opposite to SHBG, while T did not change significantly with weight. Continued exercise during weight regain did not appear to affect these findings. PMID:27192090
Basso, Olga
2008-03-01
Birth weight is associated not just with infant morbidity and mortality, but with outcomes occurring much later in life, including adult mortality, as reported by a paper by Baker and colleagues in this issue of Epidemiology. While these associations are tantalizing per se, the truly interesting question concerns the mechanisms that underlie these links. The prevailing hypothesis suggests a "fetal origin" of diseases resulting from alterations in fetal nutrition that permanently program organ function. The most commonly proposed alternative is that factors, mainly genetic, that affect both fetal growth and disease risk are responsible for the observed associations. Although both mechanisms are intellectually attractive-and may well coexist-we should be cautious to not focus excessively on fetal growth. Doing this may lead us in the wrong direction, as has likely happened in the case of birth weight in relation to infant survival.
Cheney, M.C.
1997-12-31
The cost of energy for renewables has gained greater significance in recent years due to the drop in price of some competing energy sources, particularly natural gas. In pursuit of lower manufacturing costs for wind turbine systems, work was conducted to explore an innovative rotor designed to reduce weight and cost relative to conventional rotor systems. Trade-off studies were conducted to measure the influence of the number of blades, stiffness, and manufacturing method on COE. The study showed that increasing the number of blades at constant solidity significantly reduced rotor weight and that manufacturing the blades using pultrusion technology produced the lowest cost per pound. Under contracts with the National Renewable Energy Laboratory and the California Energy Commission, a 400 kW (33 m diameter) turbine was designed employing this technology. The project included tests of an 80 kW (15.5 m diameter) dynamically scaled rotor, which demonstrated the viability of the design.
Identification and estimation of survivor average causal effects
Tchetgen, Eric J Tchetgen
2014-01-01
In longitudinal studies, outcomes ascertained at follow-up are typically undefined for individuals who die prior to the follow-up visit. In such settings, outcomes are said to be truncated by death, and inference about the effects of a point treatment or exposure, restricted to individuals alive at the follow-up visit, could be biased even if, as in experimental studies, treatment assignment were randomized. To account for truncation by death, the survivor average causal effect (SACE) defines the effect of treatment on the outcome for the subset of individuals who would have survived regardless of exposure status. In this paper, the author nonparametrically identifies SACE by leveraging post-exposure longitudinal correlates of survival and outcome that may also mediate the exposure effects on survival and outcome. Nonparametric identification is achieved by supposing that the longitudinal data arise from a certain nonparametric structural equations model and by making the monotonicity assumption that the effect of exposure on survival agrees in its direction across individuals. A novel weighted analysis involving a consistent estimate of the survival process is shown to produce consistent estimates of SACE. A data illustration is given, and the methods are extended to the context of time-varying exposures. We discuss a sensitivity analysis framework that relaxes assumptions about independent errors in the nonparametric structural equations model and may be used to assess the extent to which inference may be altered by a violation of key identifying assumptions. © 2014 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd. PMID:24889022
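A generic inverse-probability-of-survival weighted mean, in the spirit of the weighted analysis described, can be sketched as below. This is a heavily simplified sketch: survival probabilities are taken as known here, whereas the paper estimates the survival process and leverages post-exposure correlates; variable names are hypothetical:

```python
# Inverse-probability-of-survival weighted mean: each survivor's outcome
# is upweighted by 1 / P(survive), so survivors stand in for comparable
# individuals who died (outcomes undefined for decedents).

def ipw_mean(outcomes, survived, surv_prob):
    """Weighted mean of outcomes among survivors."""
    num = den = 0.0
    for y, s, pi in zip(outcomes, survived, surv_prob):
        if s:
            num += y / pi
            den += 1.0 / pi
    return num / den

y = [1.0, 2.0, None, 3.0]        # outcome undefined for the decedent
alive = [True, True, False, True]
pi = [0.9, 0.8, 0.5, 0.9]        # assumed-known survival probabilities
print(round(ipw_mean(y, alive, pi), 3))  # 2.0
```

Normalizing by the sum of the weights (a Hájek-style estimator) keeps the estimate stable when some survival probabilities are small.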
Code of Federal Regulations, 2011 CFR
2011-01-01
... DC area and for COLA areas with multiple survey areas? 591.216 Section 591.216 Administrative... combine survey data for the DC area and for COLA areas with multiple survey areas? (a) Washington, DC... DC survey areas identified in § 591.215(a) and then averages these average prices together...
Code of Federal Regulations, 2010 CFR
2010-01-01
... DC area and for COLA areas with multiple survey areas? 591.216 Section 591.216 Administrative... combine survey data for the DC area and for COLA areas with multiple survey areas? (a) Washington, DC... DC survey areas identified in § 591.215(a) and then averages these average prices together...
Code of Federal Regulations, 2013 CFR
2013-01-01
... DC area and for COLA areas with multiple survey areas? 591.216 Section 591.216 Administrative... combine survey data for the DC area and for COLA areas with multiple survey areas? (a) Washington, DC... DC survey areas identified in § 591.215(a) and then averages these average prices together...