Wieczorek, Michael E.
2014-01-01
This digital data release consists of seven data files of soil attributes for the United States and the District of Columbia. The files are derived from the Natural Resources Conservation Service's (NRCS) Soil Survey Geographic database (SSURGO). The data files can be linked to the raster datasets of soil mapping unit identifiers (MUKEY) available through the NRCS's Gridded Soil Survey Geographic (gSSURGO) database (http://www.nrcs.usda.gov/wps/portal/nrcs/detail/soils/survey/geo/?cid=nrcs142p2_053628). The associated files, named DRAINAGECLASS, HYDRATING, HYDGRP, HYDRICCONDITION, LAYER, TEXT, and WTDEP, contain area- and depth-weighted average values for selected soil characteristics from the SSURGO database for the conterminous United States and the District of Columbia. The SSURGO tables were acquired from the NRCS on March 5, 2014. The soil characteristics in the DRAINAGECLASS table are drainage class (DRNCLASS), which identifies the natural drainage conditions of the soil and refers to the frequency and duration of wet periods. The soil characteristics in the HYDRATING table are hydric rating (HYDRATE), a yes/no field that indicates whether or not a map unit component is classified as a "hydric soil". The soil characteristics in the HYDGRP table are the percentages for each hydrologic group per MUKEY. The soil characteristics in the HYDRICCONDITION table are hydric condition (HYDCON), which describes the natural condition of the soil component. The soil characteristics in the LAYER table are available water capacity (AVG_AWC), bulk density (AVG_BD), saturated hydraulic conductivity (AVG_KSAT), vertical saturated hydraulic conductivity (AVG_KV), soil erodibility factor (AVG_KFACT), porosity (AVG_POR), field capacity (AVG_FC), the soil fraction passing a number 4 sieve (AVG_NO4), the soil fraction passing a number 10 sieve (AVG_NO10), the soil fraction passing a number 200 sieve (AVG_NO200), and organic matter (AVG_OM).
The soil characteristics in the TEXT table are percent sand, silt, and clay (AVG_SAND, AVG_SILT, and AVG_CLAY). The soil characteristics in the WTDEP table are the annual minimum water table depth (WTDEP_MIN), available water storage in the 0-25 cm soil horizon (AWS025), the minimum water table depth for the months April, May and June (WTDEPAMJ), the available water storage in the first 25 centimeters of the soil horizon (AWS25), the dominant drainage class (DRCLSD), the wettest drainage class (DRCLSWET), and the hydric classification (HYDCLASS), which is an indication of the proportion of the map unit, expressed as a class, that is "hydric", based on the hydric classification of a given MUKEY (see Entity_Description for more detail). The tables were created with a set of Arc Macro Language (AML) and awk scripts (awk was created at Bell Labs in the 1970s and its name is derived from the first letters of the last names of its authors: Alfred Aho, Peter Weinberger, and Brian Kernighan). Send an email to mewieczo@usgs.gov to obtain copies of the computer code (see Process_Description). The methods used are outlined in NRCS's "SSURGO Data Packaging and Use" (NRCS, 2011). The tables can be related or joined to the gSSURGO rasters of MUKEYs by the item 'MUKEY.' Joining or relating the tables to a MUKEY grid allows the creation of grids of area- and depth-weighted soil characteristics. A 90-meter raster of MUKEYs is provided which can be used to produce rasters of soil attributes. More detailed resolution rasters are available through NRCS via the link above.
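The area- and depth-weighted averaging used to build these tables can be sketched in a few lines. The sketch below is a hypothetical illustration of the general aggregation scheme (horizon attributes depth-weighted within a component, then components area-weighted by component percentage within a map unit), not the authors' AML/awk code; all field names and values are invented.

```python
# Hypothetical sketch of SSURGO-style aggregation up to the map-unit
# (MUKEY) level. Horizon tuples are (top depth, bottom depth, value);
# component tuples are (component percent, component value).

def depth_weighted(horizons):
    """Depth-weighted average of an attribute over a component's horizons."""
    total = sum(bot - top for top, bot, _ in horizons)
    if total == 0:
        return 0.0
    return sum((bot - top) * val for top, bot, val in horizons) / total

def area_weighted(components):
    """Area-weighted (component-percentage-weighted) average per map unit."""
    total = sum(pct for pct, _ in components)
    return sum(pct * val for pct, val in components) / total

# One map unit with two components; each component has two horizons of
# (top depth cm, bottom depth cm, available water capacity).
comp_a = depth_weighted([(0, 25, 0.15), (25, 100, 0.10)])   # 75% of area
comp_b = depth_weighted([(0, 50, 0.20), (50, 100, 0.12)])   # 25% of area
mukey_awc = area_weighted([(75, comp_a), (25, comp_b)])
print(round(mukey_awc, 4))
```

Joining such per-MUKEY values back to the 90-meter MUKEY raster then yields a raster of the weighted attribute.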
Asymmetric network connectivity using weighted harmonic averages
NASA Astrophysics Data System (ADS)
Morrison, Greg; Mahadevan, L.
2011-02-01
We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph using a simple weighted harmonic average of connectivity, that is, a real-valued Generalized Erdös Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and use our measure to study some simple analytically tractable networks. We show how this might be used to look at asymmetry in authorship networks such as those that inspired the integer Erdös numbers in mathematical coauthorships. We also show the utility of our approach to devise a ratings scheme that we apply to the data from the NetFlix prize, and find a significant improvement using our method over a baseline.
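As a toy illustration of the basic ingredient of the measure (not the paper's full recursive GEN definition), a weighted harmonic average emphasizes strong, close connections far more than an arithmetic mean does; the weights and values below are invented.

```python
# Weighted harmonic average: sum(w) / sum(w / v). Compared with the
# arithmetic mean, it is dominated by the small (close) values, which
# is the behavior a "closeness" measure on a network wants.

def weighted_harmonic_average(weights, values):
    assert len(weights) == len(values)
    return sum(weights) / sum(w / v for w, v in zip(weights, values))

# A node sees neighbors at "distances" 1 and 4 with collaboration
# weights 3 and 1; the harmonic average is pulled toward the closer,
# more strongly connected neighbor.
harmonic = weighted_harmonic_average([3, 1], [1.0, 4.0])
arithmetic = (3 * 1.0 + 1 * 4.0) / 4
print(harmonic, arithmetic)
```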
Average molecular weight of surfactants in aerosols
NASA Astrophysics Data System (ADS)
Latif, M. T.; Brimblecombe, P.
2007-09-01
Surfactants in atmospheric aerosols were determined as methylene blue active substances (MBAS) and ethyl violet active substances (EVAS). The MBAS and EVAS concentrations can be correlated with surface tension as determined by pendant drop analysis; the effect on surface tension was indicated most clearly in fine-mode aerosol extracts. The concentrations of MBAS and EVAS were determined before and after ultrafiltration using AMICON centrifuge tubes that define a 5000 Da (5 kDa) nominal molecular weight fraction. Overall, MBAS and, to a greater extent, EVAS predominate in the fraction with molecular weight below 5 kDa. For aerosols collected in Malaysia, the higher molecular weight fractions tended to be more predominant. MBAS and EVAS are correlated with yellow to brown colours in aerosol extracts. Further experiments showed that possible sources of surfactants in atmospheric aerosols (e.g. petrol soot, diesel soot) yield material with molecular size below 5 kDa, except for humic acid. The concentration of surfactants from these sources increased after ozone exposure, and for humic acids the exposed material also generally included smaller molecular weight surfactants.
Describing Average- and Longtime-Behavior by Weighted MSO Logics
NASA Astrophysics Data System (ADS)
Droste, Manfred; Meinecke, Ingmar
Weighted automata model quantitative aspects of systems like memory or power consumption. Recently, Chatterjee, Doyen, and Henzinger introduced a new kind of weighted automata which compute objectives like the average cost or the longtime peak power consumption. In these automata, operations like average, limit superior, limit inferior, limit average, or discounting are used to assign values to finite or infinite words. In general, these weighted automata are not semiring weighted anymore. Here, we establish a connection between such new kinds of weighted automata and weighted logics. We show that suitable weighted MSO logics and these new weighted automata are expressively equivalent, both for finite and infinite words. The constructions employed are effective, leading to decidability results for the weighted logic formulas considered.
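The "average" and "limit average" value functions can be illustrated on weight sequences directly, leaving the automaton machinery aside. The sketch below approximates the limit average of an eventually periodic run by a long finite prefix; it illustrates the value functions only, not the paper's constructions.

```python
# Average of a finite weight word, and an approximation of the limit
# average (mean payoff) of an eventually periodic infinite word
# prefix . period^omega. For such words the limit average equals the
# mean of one period; transient prefix costs wash out.

def average(weights):
    return sum(weights) / len(weights)

def limit_average(prefix, period, repetitions=10000):
    """Approximate the limit average by unrolling many periods."""
    seq = prefix + period * repetitions
    return average(seq)

# A run that pays 5 for two transient steps, then alternates 1, 3
# forever: the limit average tends to the period mean 2.
la = limit_average([5, 5], [1, 3])
print(round(la, 3))
```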
Weighted Kullback-Leibler average-based distributed filtering algorithm
NASA Astrophysics Data System (ADS)
Lu, Kelin; Chang, Kuo-Chu; Zhou, Rui
2015-05-01
This paper considers a distributed filtering problem over a multi-sensor network in which the correlation of local estimation errors is unknown. Recently, this problem was studied by G. Battistelli [1], who developed a data fusion rule that calculates the weighted Kullback-Leibler average of local estimates with consensus algorithms for distributed averaging, where the weighted Kullback-Leibler average is defined as the averaged probability density function that minimizes the sum of weighted Kullback-Leibler divergences from the original probability density functions. In this paper, we extend those earlier results by relaxing the prior assumption that all sensors share the same degree of confidence. Furthermore, a novel consensus-based distributed weighting coefficient selection scheme is developed to improve the fusion accuracy, in which the weight associated with each sensor is adjusted based on the local estimation error covariance and those received from neighboring sensors, so that larger weight values are assigned to sensors with a higher degree of confidence. Finally, a Monte-Carlo simulation with a 2D tracking system validates the effectiveness of the proposed distributed filtering algorithm.
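For Gaussian densities the weighted Kullback-Leibler average has a well-known closed form: the fused precision is the weighted sum of the local precisions, and the fused mean is the corresponding precision-weighted mean. A minimal scalar sketch with invented estimates follows; the multivariate case replaces 1/p with the inverse covariance matrix.

```python
# Weighted KL average of scalar Gaussians N(m_i, p_i) with weights w_i
# summing to 1: fused precision = sum(w_i / p_i), fused mean is the
# precision-weighted mean. Unequal weights encode unequal confidence.

def gaussian_kl_average(means, variances, weights):
    assert abs(sum(weights) - 1.0) < 1e-12
    fused_precision = sum(w / p for w, p in zip(weights, variances))
    fused_mean = sum(w * m / p for w, m, p in
                     zip(weights, means, variances)) / fused_precision
    return fused_mean, 1.0 / fused_precision

# A confident sensor (variance 1) and a poor one (variance 4).
mean, var = gaussian_kl_average([0.0, 2.0], [1.0, 4.0], [0.7, 0.3])
print(round(mean, 4), round(var, 4))
```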
Global Rotation Estimation Using Weighted Iterative Lie Algebraic Averaging
NASA Astrophysics Data System (ADS)
Reich, M.; Heipke, C.
2015-08-01
In this paper we present a weighted rotation averaging approach that estimates absolute rotations from the pairwise relative rotations of a set of multiple overlapping images. The solution does not depend on initial values for the unknown parameters and is robust against outliers. Our approach is one part of a solution for global image orientation. Because relative rotations are often not free from outliers, we exploit the redundancy in the available pairwise relative rotations and present a novel graph-based algorithm to detect and eliminate inconsistent rotations. The remaining relative rotations are input to a weighted least squares adjustment performed in the Lie algebra of the rotation manifold SO(3) to obtain absolute orientation parameters for each image. Weights are determined using prior information derived from the estimation of the relative rotations. Because the averaging is performed in the Lie algebra of SO(3), no subsequent adaptation of the results is needed other than the lossless projection back to the manifold. We evaluate our approach on synthetic and real data. It is often able to detect and eliminate all outliers from the relative rotations, even at very high outlier rates, and we show that introducing individual weights for the relative rotations based on various indicators improves the quality of the estimated absolute rotations. In comparison with the state of the art in recent publications on global image orientation, we achieve the best results on the examined datasets.
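The core averaging step, iterative weighted averaging in the tangent space of SO(3), can be sketched generically. The code below is a standard tangent-space mean using Rodrigues' exponential and logarithm maps, not the authors' full pipeline with graph-based outlier removal; the test rotations and weights are invented.

```python
import numpy as np

def exp_so3(w):
    """Rodrigues' formula: axis-angle vector -> rotation matrix."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    k = w / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def log_so3(R):
    """Rotation matrix -> axis-angle vector (principal branch)."""
    theta = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    if theta < 1e-12:
        return np.zeros(3)
    return theta / (2 * np.sin(theta)) * np.array(
        [R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])

def weighted_rotation_average(rotations, weights, iters=20):
    """Iterative weighted mean on SO(3): average in the tangent space
    at the current estimate, then project back with the exponential map."""
    R = rotations[0]
    for _ in range(iters):
        delta = sum(w * log_so3(R.T @ Ri)
                    for w, Ri in zip(weights, rotations)) / sum(weights)
        R = R @ exp_so3(delta)
    return R

# Three rotations about the z-axis by 0.1, 0.2, 0.3 rad; the weighted
# mean angle should be (1*0.1 + 2*0.2 + 1*0.3) / 4 = 0.2.
Rs = [exp_so3(np.array([0.0, 0.0, a])) for a in (0.1, 0.2, 0.3)]
R_avg = weighted_rotation_average(Rs, [1.0, 2.0, 1.0])
print(round(log_so3(R_avg)[2], 4))
```

The final exponential map is the "lossless projection to the manifold" the abstract refers to: the averaged tangent vector is mapped back to a valid rotation matrix without any ad hoc re-orthogonalization.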
Weighted Average Consensus-Based Unscented Kalman Filtering.
Li, Wangyan; Wei, Guoliang; Han, Fei; Liu, Yurong
2016-02-01
In this paper, we investigate the consensus-based distributed state estimation problem for a class of sensor networks within the unscented Kalman filter (UKF) framework. The communication status among sensors is represented by a connected undirected graph. A weighted average consensus-based UKF algorithm is developed for the purpose of estimating the true state of interest, and its estimation error is proven to be bounded in mean square. Finally, the effectiveness of the proposed consensus-based UKF algorithm is validated through a simulation example. PMID:26168453
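The weighted average consensus step that underlies such algorithms can be sketched on scalar values. This is only the consensus iteration on a small invented graph, not the full UKF of the paper.

```python
# Average consensus on a connected undirected graph: each node
# repeatedly moves toward its neighbors' values. With symmetric
# updates and a small enough step size epsilon (< 1/max degree),
# every node converges to the average of the initial values.

def consensus_step(values, neighbors, epsilon=0.2):
    new = list(values)
    for i, v in enumerate(values):
        new[i] = v + epsilon * sum(values[j] - v for j in neighbors[i])
    return new

# A 4-node path graph; initial local estimates of a common state.
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
values = [1.0, 3.0, 5.0, 7.0]
for _ in range(200):
    values = consensus_step(values, neighbors)
print([round(v, 3) for v in values])
```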
Technique of weight factor shifting for aperture averaging.
Yang, Changqi
2016-02-01
A single-aperture receiver is usually used in a free-space optical communication system. To increase the received optical flux, a larger-diameter receiver is preferred, but this approach causes a sharp increase in cost. It also limits the large-scale civil use of free-space optical communications. This paper proposes a technique called aperture-averaging weight factor shifting and designs a new type of receiver that can greatly reduce the cost of a free-space optical communication system. This paper offers a comparative analysis of two types of receivers for optical scintillation and proves theoretically that using the new type of receiver does not degrade performance. PMID:26836065
Modified box dimension and average weighted receiving time on the weighted fractal networks
Dai, Meifeng; Sun, Yanqiu; Shao, Shuxiang; Xi, Lifeng; Su, Weiyi
2015-01-01
In this paper a family of weighted fractal networks, in which the edge weights are assigned different values according to a certain scale factor, is studied. For these weighted fractal networks the modified box dimension is defined and a rigorous proof of its existence is given; the modified box dimension is then deduced as a function of the weight factor and the number of copies. Assuming that at each step the walker moves from its current node uniformly to any of its nearest neighbors, and that the weighted time between two adjacent nodes is the weight of the edge connecting them, the average weighted receiving time (AWRT) is defined accordingly. The remarkable result is that in a large network, when the weight factor is larger than the number of copies, the AWRT grows as a power law function of the network order with exponent equal to the reciprocal of the modified box dimension. This result shows that the efficiency of the trapping process depends on the modified box dimension: the larger the value of the modified box dimension, the more efficient the trapping process is. PMID:26666355
Modified box dimension and average weighted receiving time on the weighted fractal networks
NASA Astrophysics Data System (ADS)
Dai, Meifeng; Sun, Yanqiu; Shao, Shuxiang; Xi, Lifeng; Su, Weiyi
2015-12-01
In this paper a family of weighted fractal networks, in which the edge weights are assigned different values according to a certain scale factor, is studied. For these weighted fractal networks the modified box dimension is defined and a rigorous proof of its existence is given; the modified box dimension is then deduced as a function of the weight factor and the number of copies. Assuming that at each step the walker moves from its current node uniformly to any of its nearest neighbors, and that the weighted time between two adjacent nodes is the weight of the edge connecting them, the average weighted receiving time (AWRT) is defined accordingly. The remarkable result is that in a large network, when the weight factor is larger than the number of copies, the AWRT grows as a power law function of the network order with exponent equal to the reciprocal of the modified box dimension. This result shows that the efficiency of the trapping process depends on the modified box dimension: the larger the value of the modified box dimension, the more efficient the trapping process is.
12 CFR 702.105 - Weighted-average life of investments.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 12 Banks and Banking 6 2011-01-01 2011-01-01 false Weighted-average life of investments. 702.105 Section 702.105 Banks and Banking NATIONAL CREDIT UNION ADMINISTRATION REGULATIONS AFFECTING CREDIT UNIONS PROMPT CORRECTIVE ACTION Net Worth Classification § 702.105 Weighted-average life of investments. Except as provided below (Table 3),...
47 CFR 65.305 - Calculation of the weighted average cost of capital.
Code of Federal Regulations, 2010 CFR
2010-10-01
..., each weighted by its proportion in the capital structure of the telephone companies. (b) Unless the... capital. 65.305 Section 65.305 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON... Carriers § 65.305 Calculation of the weighted average cost of capital. (a) The composite weighted...
47 CFR 65.305 - Calculation of the weighted average cost of capital.
Code of Federal Regulations, 2011 CFR
2011-10-01
..., each weighted by its proportion in the capital structure of the telephone companies. (b) Unless the... capital. 65.305 Section 65.305 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON... Carriers § 65.305 Calculation of the weighted average cost of capital. (a) The composite weighted...
Latent-variable approaches to the Jamesian model of importance-weighted averages.
Scalas, L Francesca; Marsh, Herbert W; Nagengast, Benjamin; Morin, Alexandre J S
2013-01-01
The individually importance-weighted average (IIWA) model posits that the contribution of specific areas of self-concept to global self-esteem varies systematically with the individual importance placed on each specific component. Although intuitively appealing, this model has weak empirical support; thus, within the framework of a substantive-methodological synergy, we propose a multiple-item latent approach to the IIWA model as applied to a range of self-concept domains (physical, academic, spiritual self-concepts) and subdomains (appearance, math, verbal self-concepts) in young adolescents from two countries. Tests considering simultaneously the effects of self-concept domains on trait self-esteem did not support the IIWA model. On the contrary, support for a normative group importance model was found, in which importance varied as a function of domains but not individuals. Individuals differentially weight the various components of self-concept; however, the weights are largely determined by normative processes, so that little additional information is gained from individual weightings. PMID:23150198
Cohen's Linearly Weighted Kappa Is a Weighted Average of 2 x 2 Kappas
ERIC Educational Resources Information Center
Warrens, Matthijs J.
2011-01-01
An agreement table with n ∈ ℕ, n ≥ 3, ordered categories can be collapsed into n - 1 distinct 2 x 2 tables by combining adjacent categories. Vanbelle and Albert ("Stat. Methodol." 6:157-163, 2009c) showed that the components of Cohen's weighted kappa with linear weights can be obtained from these n - 1…
76 FR 13580 - Bus Testing; Calculation of Average Passenger Weight and Test Vehicle Weight
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-14
... Federal Register (74 FR 51083) that incorporated brake performance and emissions tests into FTA's bus... Weight Per Person (See, ``Passenger Weight and Inspected Vessel Stability Requirements: Final Rule, 75 FR... Transportation (44 FR 11032). Executive Order 12866 requires agencies to regulate in the ``most...
Sensitivity Analysis of Ordered Weighted Averaging Operator in Earthquake Vulnerability Assessment
NASA Astrophysics Data System (ADS)
Moradi, M.; Delavar, M. R.; Moshiri, B.
2013-09-01
The main objective of this research is to find the extent to which the minimal variability Ordered Weighted Averaging (OWA) model of seismic vulnerability assessment is sensitive to variation of the optimism degree. A variety of models have been proposed for seismic vulnerability assessment, and their efficiency can be examined by analysing the stability of their results. Seismic vulnerability assessment is done to estimate the probable losses in a future earthquake. Multi-Criteria Decision Making (MCDM) methods have been applied by a number of researchers to estimate the human, physical and financial losses in urban areas. The study area of this research is the Tehran Metropolitan Area (TMA), which has more than eight million inhabitants. In addition, this paper assumes that the North Tehran Fault (NTF) is activated and causes an earthquake in TMA. 1996 census data are used to extract the attribute values for six effective criteria in seismic vulnerability assessment. The results demonstrate that the minimal variability OWA model of Seismic Loss Estimation (SLE) is more stable where the aggregated seismic vulnerability degree has a lower value. Moreover, minimal variability OWA is very sensitive to the optimism degree in northern areas of Tehran. A number of statistical units in southern areas of the city also indicate considerable sensitivity to the optimism degree due to numerous non-standard buildings. In addition, the change of seismic vulnerability degree caused by variation of the optimism degree does not exceed 25% of the original value, which means that the overall accuracy of the model is acceptable.
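The OWA operator itself is simple: criterion values are reordered (here descending) before the weights are applied, so the weight vector encodes the decision maker's degree of optimism rather than a per-criterion importance. A sketch with invented numbers follows; how "or-like" weighting maps onto optimism depends on the orientation of the criteria.

```python
# Ordered Weighted Averaging: sort the values, then apply the weights
# to the ordered positions. Weights concentrated at the top of the
# ordering behave "or-like" (optimistic); weights concentrated at the
# bottom behave "and-like" (pessimistic).

def owa(values, weights):
    assert abs(sum(weights) - 1.0) < 1e-12
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

criteria = [0.2, 0.9, 0.5]                      # normalized criterion scores
or_like = owa(criteria, [0.6, 0.3, 0.1])        # weight on high values
and_like = owa(criteria, [0.1, 0.3, 0.6])       # weight on low values
print(round(or_like, 3), round(and_like, 3))
```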
77 FR 74452 - Bus Testing: Calculation of Average Passenger Weight and Test Vehicle Weight
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-14
... (GVWR) (74 FR 51083, October 5, 2009). The testing procedure simulated a 150 lb. weight for each seated... square feet (76 FR 13850, March 14, 2011). Subsequent to the NPRM, on July 6, 2012, Congress passed the..., Executive Order 13563, the Regulatory Flexibility Act, or the DOT Regulatory Policies and Procedures (44...
SIMPLE AND WEIGHTED AVERAGING APPROACHES TO SCALING: WHEN CAN SPATIAL CONTEXT BE IGNORED?
Technology Transfer Automated Retrieval System (TEKTRAN)
Scaling from plots to landscapes, landscapes to regions, and regions to the globe based on simple or weighted averaging techniques can be accurate when applied to the appropriate problems. Simple averaging approaches work well when conditions are homogeneous spatially and temporally. For example, ...
Raoult's law-based method for determination of coal tar average molecular weight
Brown, D.G.; Gupta, L.; Horace, H.K.; Coleman, A.J.
2005-08-01
A Raoult's law-based method for determining the number average molecular weight of coal tars is presented. The method requires data from two-phase coal tar/water equilibrium experiments, which are readily performed in environmental laboratories. An advantage of this method for environmental samples is that it is not impacted by the small amount of inert debris often present in coal tar samples obtained from contaminated sites. Results are presented for 10 coal tars from nine former manufactured gas plants located in the eastern United States. Vapor pressure osmometry (VPO) analysis provided average molecular weights similar to those determined with the Raoult's law-based method, except for one highly viscous coal tar sample. Use of the VPO-based average molecular weight for this coal tar resulted in underprediction of the aqueous concentrations of the coal tar constituents. Additionally, one other coal tar was not completely soluble in the solvents used for VPO analysis. The results indicate that the Raoult's law-based method is able to provide an average molecular weight that is consistent with the intended application of the data (e.g., modeling the dissolution of coal tar constituents into surrounding waters), and this method can be applied to coal tars that may be incompatible with other commonly used methods for determining average molecular weight, such as vapor pressure osmometry.
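The arithmetic behind the method can be sketched from one equilibrium measurement. Under Raoult's law the marker compound's mole fraction in the tar is x = C_water / S (measured aqueous concentration over the pure/subcooled liquid solubility); per gram of tar there are f/MW moles of the compound (mass fraction f) and 1/MW_avg total moles, so MW_avg = x · MW / f. The numbers below are invented for illustration, and the paper's actual procedure may combine several compounds.

```python
# Hedged sketch of a Raoult's-law estimate of number-average molecular
# weight from a single tar/water equilibrium measurement. All inputs
# (solubility, concentration, mass fraction) are hypothetical.

def mw_avg_raoult(c_water, s_liquid, mass_fraction, mw_compound):
    x = c_water / s_liquid            # mole fraction in the tar phase
    return x * mw_compound / mass_fraction

# Naphthalene-like marker: MW 128 g/mol, 4% of tar by mass,
# hypothetical solubility 30 mg/L, measured equilibrium conc. 3 mg/L.
mw_avg = mw_avg_raoult(3.0, 30.0, 0.04, 128.0)
print(round(mw_avg, 1))
```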
Predicting annual average particulate concentration in urban areas.
Progiou, Athena G; Ziomas, Ioannis C
2015-11-01
Particulate matter concentrations are a major environmental problem in most cities. This is also the case in Greece where, despite the various measures taken in the past, the problem persists. In this context, a cost-efficient, comprehensive method was developed to help decision makers take the most appropriate measures toward particulate pollution abatement. The method is based on source apportionment estimated from the application of 3D meteorological and dispersion modeling and is validated with the use of 10 years (2002-2012) of PM10 monitoring data in Athens, Greece, as well as PM10 emission data for the same area and time period. It appears that the methodology can estimate yearly average PM10 concentrations in a quite realistic manner, thus giving decision makers the possibility to evaluate ex ante the effectiveness of specific abatement measures. PMID:26081738
NASA Technical Reports Server (NTRS)
Scruggs, T.; Moraguez, M.; Patankar, K.; Fitz-Coy, N.; Liou, J.-C.; Sorge, M.; Huynh, T.
2016-01-01
Debris fragments from the hypervelocity impact testing of DebriSat are being collected and characterized for use in updating existing satellite breakup models. One of the key parameters utilized in these models is the ballistic coefficient of the fragment, which is directly related to its area-to-mass ratio. However, since the attitude of fragments varies during their orbital lifetime, it is customary to use the average cross-sectional area in the calculation of the area-to-mass ratio. The average cross-sectional area is defined as the average of the projected surface areas perpendicular to the direction of motion and has been shown to be equal to one-fourth of the total surface area of a convex object. Unfortunately, numerous fragments obtained from the DebriSat experiment show significant concavity (i.e., shadowing) and thus we have explored alternate methods for computing the average cross-sectional area of the fragments. An imaging system based on the volumetric reconstruction of a 3D object from multiple 2D photographs of the object was developed for use in determining the size characteristic (i.e., characteristic length) of the DebriSat fragments. For each fragment, the imaging system generates N images from varied azimuth and elevation angles and processes them using a space-carving algorithm to construct a 3D point cloud of the fragment. This paper describes two approaches for calculating the average cross-sectional area of debris fragments based on the 3D imager. Approach A utilizes the constructed 3D object to generate equally distributed cross-sectional area projections and then averages them to determine the average cross-sectional area. Approach B utilizes a weighted average of the areas of the 2D photographs to directly compute the average cross-sectional area.
A comparison of the accuracy and computational needs of each approach is described, as well as preliminary results of an analysis to determine the "optimal" number of images needed for the 3D imager to accurately measure the average cross-sectional area of objects with known dimensions.
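The convex-body fact quoted above (average projected area equals one-fourth of the total surface area, Cauchy's formula) is easy to verify numerically. The Monte Carlo check below uses a unit cube, whose projected area along a unit direction n is |nx| + |ny| + |nz|.

```python
import math, random

# Sample uniformly distributed view directions on the sphere, compute
# the cube's projected area for each, and compare the mean with
# surface area / 4 = 6 / 4 = 1.5.

def random_direction(rng):
    z = rng.uniform(-1.0, 1.0)            # uniform on the sphere
    phi = rng.uniform(0.0, 2.0 * math.pi)
    r = math.sqrt(1.0 - z * z)
    return (r * math.cos(phi), r * math.sin(phi), z)

rng = random.Random(42)
n_samples = 200000
total = 0.0
for _ in range(n_samples):
    nx, ny, nz = random_direction(rng)
    total += abs(nx) + abs(ny) + abs(nz)  # projected area of unit cube
avg_area = total / n_samples
print(round(avg_area, 3), 6 / 4)          # the two should agree closely
```

For concave fragments this identity fails, which is exactly why the paper needs the projection-averaging approaches it describes.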
Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.
2013-01-01
When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives an overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models.
Using Cek obtained from the iterative two-stage method also improved the predictive performance of the individual models and of model averaging in both the synthetic and experimental studies.
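The information-criterion averaging weights at issue take the familiar form w_i ∝ exp(-Δ_i/2), where Δ_i is model i's criterion value minus the minimum over models. The sketch below (invented criterion values) shows how modest criterion differences already concentrate nearly all weight on one model, which is the "best model gets ~100%" problem the paper addresses.

```python
import math

# Model-averaging weights from information criterion values
# (AIC, BIC, KIC, ...): w_i = exp(-Delta_i / 2) / sum_j exp(-Delta_j / 2).

def averaging_weights(criterion_values):
    c_min = min(criterion_values)
    raw = [math.exp(-(c - c_min) / 2.0) for c in criterion_values]
    total = sum(raw)
    return [r / total for r in raw]

close = averaging_weights([100.0, 101.0, 102.0])   # similar models
far = averaging_weights([100.0, 110.0, 120.0])     # 10-unit gaps
print([round(w, 3) for w in close])
print([round(w, 3) for w in far])
```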
Effect of Temporal Residual Correlation on Estimation of Model Averaging Weights
NASA Astrophysics Data System (ADS)
Ye, M.; Lu, D.; Curtis, G. P.; Meyer, P. D.; Yabusaki, S.
2010-12-01
When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are typically calculated using model selection criteria such as AIC, AICc, BIC, and KIC. However, this method sometimes leads to an unrealistic situation in which one model receives an overwhelmingly high averaging weight (even 100%), which cannot be justified by available data and knowledge. It is found in this study that the unrealistic situation is due, at least in part, to neglect of residual correlation when estimating the negative log-likelihood function common to all the model selection criteria. In the context of maximum-likelihood or least-squares inverse modeling, residual correlation is accounted for in the full covariance matrix; when the full covariance matrix is replaced by its diagonal counterpart, data independence is assumed and the correlation is ignored. Treating the correlated residuals as independent distorts the distance between observations and the simulations of alternative models, and may therefore lead to incorrect estimation of model selection criteria and model averaging weights. This is illustrated for a set of surface complexation models developed to simulate uranium transport based on a series of column experiments. The residuals are correlated in time, and the time correlation is addressed using a second-order autoregressive model. The modeling results reveal the importance of considering residual correlation in the estimation of model averaging weights.
NASA Astrophysics Data System (ADS)
Lin, Bangjiang; Li, Yiwei; Zhang, Shihao; Tang, Xuan
2015-10-01
Weighted interframe averaging (WIFA)-based channel estimation (CE) is presented for orthogonal frequency division multiplexing passive optical network (OFDM-PON), in which the CE results of the adjacent frames are directly averaged to increase the estimation accuracy. The effectiveness of WIFA combined with conventional least square, intrasymbol frequency-domain averaging, and minimum mean square error, respectively, is demonstrated through 26.7-km standard single-mode fiber transmission. The experimental results show that the WIFA method with low complexity can significantly enhance transmission performance of OFDM-PON.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-28
... Dumping Margin During an Antidumping Investigation; Final Modification, 71 FR 77,722 (December 27, 2006... Measures Concerning Certain Softwood Lumber Products From Canada, 70 FR 22,636 (May 2, 2005). The above... Weighted- Average Dumping Margin During an Antidumping Investigation; Final Modification, 71 FR...
[Weighted-averaging multi-planar reconstruction method for multi-detector row computed tomography].
Aizawa, Mitsuhiro; Nishikawa, Keiichi; Sasaki, Keita; Kobayashi, Norio; Yama, Mitsuru; Sano, Tsukasa; Murakami, Shin-ichi
2012-01-01
Development of multi-detector row computed tomography (MDCT) has enabled three-dimensional (3D) scanning with minute voxels. Minute voxels improve the spatial resolution of CT images; at the same time, however, they increase image noise. Multi-planar reconstruction (MPR) is one of the effective 3D image processing techniques. The conventional MPR technique can adjust the slice thickness of MPR images. When a thick slice is used, the image noise is decreased; in this case, however, spatial resolution deteriorates. In order to deal with this trade-off, we have developed the weighted-averaging multi-planar reconstruction (W-MPR) technique to control the balance between spatial resolution and noise. The weighted average is determined by a Gaussian-type weighting function. In this study, we compared the performance of W-MPR with that of conventional simple-addition-averaging MPR. As a result, we confirmed that W-MPR can decrease the image noise without significant deterioration of spatial resolution. W-MPR can freely adjust the weight for each slice by changing the shape of the weighting function. Therefore, W-MPR allows us to select a proper balance of spatial resolution and noise and at the same time produce MPR images suitable for observation of targeted anatomical structures. PMID:22277813
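The W-MPR idea can be sketched with a Gaussian weighting function across neighboring slices. The code below uses 1-D lists in place of 2-D CT slices and an invented sigma; it illustrates the weighting scheme only, not the authors' implementation.

```python
import math

# Gaussian-weighted averaging across neighboring slices: a narrow sigma
# preserves spatial resolution, a wide sigma suppresses noise. The
# weighting function's shape is the tunable part of the method.

def gaussian_weights(half_width, sigma):
    w = [math.exp(-(k * k) / (2.0 * sigma * sigma))
         for k in range(-half_width, half_width + 1)]
    total = sum(w)
    return [x / total for x in w]

def weighted_mpr(slices, center, weights):
    half = len(weights) // 2
    n = len(slices[0])
    out = [0.0] * n
    for k, w in zip(range(-half, half + 1), weights):
        for i in range(n):
            out[i] += w * slices[center + k][i]
    return out

# Five toy "slices"; average the middle one with its two neighbors.
slices = [[0, 0, 0], [1, 10, 1], [2, 20, 2], [1, 10, 1], [0, 0, 0]]
w = gaussian_weights(1, 0.8)
result = weighted_mpr(slices, 2, w)
print([round(v, 2) for v in result])
```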
Binary weighted averaging of an ensemble of coherently collected image frames.
MacDonald, Adam; Cain, Stephen; Oxley, Mark
2007-04-01
Recent interest in the collection of remote laser radar imagery has motivated novel systems that process temporally contiguous frames of collected imagery to produce an average image that reduces laser speckle, increases image SNR, decreases the deleterious effects of atmospheric distortion, and enhances image detail. This research seeks an algorithm based on Bayesian estimation theory to select those frames from an ensemble that increases spatial resolution compared to simple unweighted averaging of all frames. The resulting binary weighted motion-compensated frame average is compared to the unweighted average using simulated and experimental data collected from a fielded laser vision system. Image resolution is significantly enhanced as quantified by the estimation of the atmospheric seeing parameter through which the average image was formed. PMID:17405439
NASA Astrophysics Data System (ADS)
Bell, Paul William
2008-12-01
The aim of this paper is to simulate profit expectations as an emergent property using an agent-based model. The paper builds upon adaptive expectations, interactive expectations and small-world networks, combining them into a single adaptive interactive profit expectations (AIE) model. Understanding the diffusion of interactive expectations is aided by using a network to simulate the flow of information between firms. The AIE model is tested against a profit expectations survey. The paper introduces "runtime weighted model averaging" and the "pressure to change profit expectations index" (px). Runtime weighted model averaging combines the Bayesian information criterion and Kolmogorov complexity to enhance the prediction performance of models with varying complexity but a fixed number of parameters. The px is a subjective measure representing decision making in the face of uncertainty. The paper benchmarks the AIE model against the rational expectations hypothesis, finding that firms may have adequate memory, although the interactive component of the AIE model needs improvement. Additionally, the paper investigates the efficacy of a tuneable network and equilibrium averaging. The tuneable network produces widely spaced multiple equilibria, and runtime weighted model averaging improves prediction, but there are issues with calibration.
Estimation of the global average temperature with optimally weighted point gauges
NASA Technical Reports Server (NTRS)
Hardin, James W.; Upson, Robert B.
1993-01-01
This paper considers the minimum mean squared error (MSE) incurred in estimating an idealized Earth's global average temperature with a finite network of point gauges located over the globe. We follow the spectral MSE formalism given by North et al. (1992) and derive the optimal weights for N gauges in the problem of estimating the Earth's global average temperature. Our results suggest that for commonly used configurations the variance of the estimate due to sampling error can be reduced by as much as 50%.
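North et al.'s spectral formalism is beyond a short sketch, but the core idea of choosing gauge weights that minimize the MSE of a weighted average can be illustrated with textbook inverse-variance weighting for independent gauges. This is a simplification: the paper's treatment also accounts for spatial correlation between gauges, which is ignored here.

```python
def inverse_variance_weights(variances):
    """Weights minimizing the variance of a weighted average of independent
    gauges (w_i proportional to 1/sigma_i^2). Correlated gauges, as in the
    spectral MSE formalism, require the full covariance matrix instead."""
    inv = [1.0 / v for v in variances]
    s = sum(inv)
    return [x / s for x in inv]
```

A noisier gauge thus receives proportionally less weight than a quieter one, which is the mechanism by which optimal weighting reduces sampling error relative to a plain average.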
12 CFR 702.105 - Weighted-average life of investments.
Code of Federal Regulations, 2010 CFR
2010-01-01
...-in capital and membership capital in corporate credit unions, as defined in 12 CFR 704.2, the..., as defined in 17 CFR 270.2a-7, and collective investment funds operated in accordance with short-term investment fund rules set forth in 12 CFR 9.18(b)(4)(ii)(B)(1)-(3), the weighted-average life is defined...
Real diffusion-weighted MRI enabling true signal averaging and increased diffusion contrast.
Eichner, Cornelius; Cauley, Stephen F; Cohen-Adad, Julien; Möller, Harald E; Turner, Robert; Setsompop, Kawin; Wald, Lawrence L
2015-11-15
This project aims to characterize the impact of underlying noise distributions on diffusion-weighted imaging. The noise floor is a well-known problem for traditional magnitude-based diffusion-weighted MRI (dMRI) data, leading to biased diffusion model fits and inaccurate signal averaging. Here, we introduce a total-variation-based algorithm to eliminate shot-to-shot phase variations of complex-valued diffusion data with the intention to extract real-valued dMRI datasets. The obtained real-valued diffusion data are no longer superimposed by a noise floor but instead by a zero-mean Gaussian noise distribution, yielding dMRI data without signal bias. We acquired high-resolution dMRI data with strong diffusion weighting and, thus, low signal-to-noise ratio. Both the extracted real-valued and traditional magnitude data were compared regarding signal averaging, diffusion model fitting and accuracy in resolving crossing fibers. Our results clearly indicate that real-valued diffusion data enables idealized conditions for signal averaging. Furthermore, the proposed method enables unbiased use of widely employed linear least squares estimators for model fitting and demonstrates an increased sensitivity to detect secondary fiber directions with reduced angular error. The use of phase-corrected, real-valued data for dMRI will therefore help to clear the way for more detailed and accurate studies of white matter microstructure and structural connectivity on a fine scale. PMID:26241680
Bérard, J; Pardo, C E; Béthaz, S; Kreuzer, M; Bee, G
2010-10-01
High prolificacy of sows and increased fetal survival lead to greater incidence of intrauterine crowding (IUC), which may then affect pre- and postnatal development of the progeny. The aim of the study was to assess the impact of IUC, using unilaterally hysterectomized-ovariectomized gilts (UHO), on organ and muscle development of their progeny at birth. In the study, 7 UHO and 7 intact control (Con) Swiss Large White gilts were used. At farrowing, if available, 3 male and 3 female progeny with a low (>0.8 and <1.2 kg), medium (>1.2 and <1.4 kg), and high (>1.6 kg) birth weight (BtW) were killed. Internal organs and brain were weighed, and semitendinosus (STN), psoas major (PM), and rhomboideus (RH) muscles were collected. Histological analyses were performed in PM, RH, and STN (dark and light portion) using myofibrillar ATPase staining after preincubation at pH 10.3. Myosin heavy chain (MyHC) polymorphism was determined in the PM using SDS-PAGE gel electrophoresis. Although only one-half of the uterine space was available, litter size was only 35% smaller (P < 0.01) in UHO compared with Con gilts. However, UHO progeny tended (P = 0.06) to be lighter than Con progeny. The average BtW of the selected piglets did not differ (P = 0.17) between the 2 sow groups, whereas PM and kidneys tended to be lighter (P < 0.07) in UHO than in Con progeny. Compared with Con progeny, the PM and the STN(dark) of UHO progeny had fewer (P ≤ 0.05) secondary and total myofibers as well as fewer (P = 0.10) primary myofibers in the PM. In the RH, the secondary-to-primary myofiber ratio was smaller (P < 0.01) in UHO than in Con progeny, whereas the total number of myofibers did not (P = 0.96) differ. The relative abundance of fetal MyHC was less (P = 0.02) and that of type I MyHC tended (P = 0.09) to be greater in UHO than in Con offspring. With increasing BtW, organ and brain weights increased (P < 0.01). 
Muscle cross-sectional area and total number of myofibers in the light portion of the STN were greater (P < 0.05) in high and medium than in low piglets. In conclusion, IUC reduced hyperplasia of secondary and total myofibers in the STN(dark) and PM. These effects were independent of the BtW and sex. PMID:20562364
Exponentially Weighted Moving Average Change Detection Around the Country (and the World)
NASA Astrophysics Data System (ADS)
Brooks, E.; Wynne, R. H.; Thomas, V. A.; Blinn, C. E.; Coulston, J.
2014-12-01
With continuous, freely available moderate-resolution imagery of the Earth's surface, and with the promise of more imagery to come, change detection based on continuous process models continues to be a major area of research. One such method, exponentially weighted moving average change detection (EWMACD), is based on a mixture of harmonic regression (HR) and statistical quality control, a branch of statistics commonly used to detect aberrations in industrial and medical processes. By using HR to approximate per-pixel seasonal curves, the resulting residuals characterize information about the pixels that stands outside of the periodic structure imposed by HR. For stable pixels, these residuals behave as might be expected, but in the presence of changes (growth, stress, removal), the residuals clearly show these changes when they are used as inputs into an EWMA chart. In prior work in Alabama, USA, EWMACD yielded an overall accuracy of 85% on a random sample of known thinned stands, in some cases detecting thinnings as sparse as 25% removal. It was also shown to correctly identify the timing of the thinning activity, typically within a single image date of the change. The net result of the algorithm was to produce date-by-date maps of afforestation and deforestation on a variable scale of severity. In other research, EWMACD has also been applied to detect land use and land cover changes in central Java, Indonesia, despite the heavy incidence of clouds and a monsoonal climate. Preliminary results show that EWMACD accurately identifies land use conversion (agricultural to residential, for example) and also identifies neighborhoods where the building density has increased, removing neighborhood vegetation. In both cases, initial results indicate the potential utility of EWMACD to detect both gross and subtle ecosystem disturbance, but further testing across a range of ecosystems and disturbances is clearly warranted.
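The EWMA chart at the heart of EWMACD can be sketched as follows. This is the generic statistical-quality-control chart applied to regression residuals, not the full published algorithm: the harmonic regression fit, training-period logic, and edge handling are omitted, and the parameter defaults are assumptions.

```python
import statistics

def ewma_flags(residuals, lam=0.3, L=3.0):
    """Flag observations where the exponentially weighted moving average of
    the residuals exits the +/- L sigma control limits. lam is the EWMA
    smoothing weight; L sets the limit width in sigma units."""
    sigma = statistics.stdev(residuals)
    z, flags = 0.0, []
    for t, x in enumerate(residuals, start=1):
        z = lam * x + (1.0 - lam) * z  # EWMA update
        # time-varying control limit for the EWMA statistic
        limit = L * sigma * (lam / (2.0 - lam) * (1.0 - (1.0 - lam) ** (2 * t))) ** 0.5
        flags.append(abs(z) > limit)
    return flags
```

Because the EWMA accumulates evidence over time, a small but persistent shift in the residuals (e.g. a thinning) eventually crosses the limit even when no single residual is extreme.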
Tree-average distances on certain phylogenetic networks have their weights uniquely determined
2012-01-01
A phylogenetic network N has vertices corresponding to species and arcs corresponding to direct genetic inheritance from the species at the tail to the species at the head. Measurements of DNA are often made on species in the leaf set, and one seeks to infer properties of the network, possibly including the graph itself. In the case of phylogenetic trees, distances between extant species are frequently used to infer the phylogenetic trees by methods such as neighbor-joining. This paper proposes a tree-average distance for networks more general than trees. The notion requires a weight on each arc measuring the genetic change along the arc. For each displayed tree the distance between two leaves is the sum of the weights along the path joining them. At a hybrid vertex, each character is inherited from one of its parents. We will assume that for each hybrid there is a probability that the inheritance of a character is from a specified parent. Assume that the inheritance events at different hybrids are independent. Then for each displayed tree there will be a probability that the inheritance of a given character follows the tree; this probability may be interpreted as the probability of the tree. The tree-average distance between the leaves is defined to be the expected value of their distance in the displayed trees. For a class of rooted networks that includes rooted trees, it is shown that the weights and the probabilities at each hybrid vertex can be calculated given the network and the tree-average distances between the leaves. Hence these weights and probabilities are uniquely determined. The hypotheses on the networks include that hybrid vertices have indegree exactly 2 and that vertices that are not leaves have a tree-child. PMID:22587565
Rong, Y; Sillick, M; Gregson, C M
2009-01-01
Dextrose equivalent (DE) value is the most common parameter used to characterize the molecular weight of maltodextrins. Its theoretical value is inversely proportional to number average molecular weight (M(n)), providing a theoretical basis for correlations with physical properties important to food manufacturing, such as: hygroscopicity, the glass transition temperature, and colligative properties. The use of freezing point osmometry to measure DE and M(n) was assessed. Measurements were made on a homologous series of malto-oligomers as well as a variety of commercially available maltodextrin products with DE values ranging from 5 to 18. Results on malto-oligomer samples confirmed that freezing point osmometry provided a linear response with number average molecular weight. However, noncarbohydrate species in some commercial maltodextrin products were found to be in high enough concentration to interfere appreciably with DE measurement. Energy dispersive spectroscopy showed that sodium and chloride were the major ions present in most commercial samples. Osmolality was successfully corrected using conductivity measurements to estimate ion concentrations. The conductivity correction factor appeared to be dependent on the concentration of maltodextrin. Equations were developed to calculate corrected values of DE and M(n) based on measurements of osmolality, conductivity, and maltodextrin concentration. This study builds upon previously reported results through the identification of the major interfering ions and provides an osmolality correction factor that successfully accounts for the influence of maltodextrin concentration on the conductivity measurement. The resulting technique was found to be rapid, robust, and required no reagents. PMID:19200083
A new state reconstructor for digital controls systems using weighted-average measurements
NASA Technical Reports Server (NTRS)
Polites, Michael E.
1989-01-01
A state reconstructor is presented for a linear continuous-time plant driven by a zero-order-hold. It takes a continuous-time output vector from the plant and convolutes it with a weighting-function matrix whose elements are time dependent. This result is integrated over T second intervals to generate weighted-averaged measurements, every T seconds, that are used in the state reconstruction process. If the plant is noise-free and can be modeled precisely, the output of this state reconstructor exactly equals the true state of the plant and accomplishes this without knowledge of the plant's initial state. If noise or modeling errors are a problem, it can be catenated with a state observer or a Kalman filter for a synergistic effect.
Calculation of weighted averages approach for the estimation of ping tolerance values
Silalom, S.; Carter, J.L.; Chantaramongkol, P.
2010-01-01
A biotic index was created and proposed as a tool to assess water quality in the Upper Mae Ping sub-watersheds. The Ping biotic index was calculated by utilizing Ping tolerance values. This paper presents the calculation of Ping tolerance values of the collected macroinvertebrates. Ping tolerance values were estimated by a weighted averages approach based on the abundance of macroinvertebrates and six chemical constituents that include conductivity, dissolved oxygen, biochemical oxygen demand, ammonia nitrogen, nitrate nitrogen and orthophosphate. Ping tolerance values range from 0 to 10. Macroinvertebrates assigned a 0 are very sensitive to organic pollution while macroinvertebrates assigned 10 are highly tolerant to pollution.
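The weighted averages approach described above reduces, for a single taxon and a single chemical constituent, to an abundance-weighted mean. A minimal sketch (the rescaling to the 0-10 Ping tolerance scale is dataset specific and omitted):

```python
def weighted_average_tolerance(abundances, gradient):
    """Tolerance value of one taxon: the abundance-weighted average of an
    environmental gradient (e.g. BOD or conductivity) across the sites
    where the taxon was collected."""
    total = sum(abundances)
    return sum(a * g for a, g in zip(abundances, gradient)) / total
```

A taxon found mostly at low-pollution sites thus ends up with a low weighted-average value (sensitive), while one abundant at degraded sites ends up high (tolerant).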
Correlation between weighted spectral distribution and average path length in evolving networks.
Jiao, Bo; Shi, Jianmai; Wu, Xiaoqun; Nie, Yuanping; Huang, Chengdong; Du, Jing; Zhou, Ying; Guo, Ronghua; Tao, Yerong
2016-02-01
The weighted spectral distribution (WSD) is a metric defined on the normalized Laplacian spectrum. In this study, synchronic random graphs are first used to rigorously analyze the metric's scaling feature, which indicates that the metric grows sublinearly as the network size increases; this scaling feature is demonstrated to be common in networks with Gaussian, exponential, and power-law degree distributions. Furthermore, a deterministic model of diachronic graphs is developed to illustrate the correlation between the slope coefficient of the metric's asymptotic line and the average path length, and the similarities and differences between synchronic and diachronic random graphs are investigated to better understand the correlation. Finally, numerical analysis is presented based on simulated and real-world data of evolving networks, which shows that the ratio of the WSD to the network size is a good indicator of the average path length. PMID:26931591
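A minimal sketch of the metric, assuming the common unbinned form WSD = Σ(1 − λ)^N over the normalized Laplacian eigenvalues; the paper works with a binned spectral distribution, which this sketch omits.

```python
import numpy as np

def weighted_spectral_distribution(adj, N=4):
    """WSD of an undirected graph given its adjacency matrix: sum of
    (1 - lambda)^N over the normalized Laplacian spectrum."""
    adj = np.asarray(adj, dtype=float)
    deg = adj.sum(axis=1)
    d = 1.0 / np.sqrt(np.where(deg > 0.0, deg, 1.0))  # D^{-1/2}, guarding isolated nodes
    lap = np.eye(adj.shape[0]) - np.outer(d, d) * adj  # I - D^{-1/2} A D^{-1/2}
    eigvals = np.linalg.eigvalsh(lap)
    return float(np.sum((1.0 - eigvals) ** N))
```

For the complete graph K3 the normalized Laplacian eigenvalues are 0, 1.5, 1.5, giving 1 + 2(−0.5)^4 = 1.125.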
Association between lean meat percentage and average daily weight gain in Danish slaughter pigs.
Stege, H; Jensen, T B; Bagger, J; Keller, F; Nielsen, J P; Ersbøll, A K
2011-08-01
Danish pigs that are within optimal weight limits and have a high lean meat percentage (LMP) obtain the best prices at slaughter. Another reason to consider the variation in LMP is the assumed association between LMP and average daily weight gain (ADG) at the individual level. The aim of this study was to test whether high ADG was associated with low LMP and vice versa. A cohort of 99 pigs from a conventional Danish herd was followed from 30 kg to slaughter. The data included days in the herd, start and end weights, calculated ADG, and LMP reported from the abattoir. The study also included existing data from 13,057 boars from a Danish boar test station. The results of the study demonstrated a significant negative association between LMP and ADG: Pearson's correlation coefficient (r)=-0.42 (95% CI: -0.57; -0.24) (p<0.0001) for the cohort and r=-0.42 (95% CI: -0.48; -0.36) (p<0.0001) for the boars. PMID:21195493
Fuzzy weighted average based on left and right scores in Malaysia tourism industry
NASA Astrophysics Data System (ADS)
Kamis, Nor Hanimah; Abdullah, Kamilah; Zulkifli, Muhammad Hazim; Sahlan, Shahrazali; Mohd Yunus, Syaizzal
2013-04-01
Tourism is an important sector of the Malaysian economy, generating income and creating businesses and jobs. It is reported to bring in almost RM30 billion of the national income, thanks to intense worldwide promotion by Tourism Malaysia. Among the best-known attractions in Malaysia are its islands, which continue to be developed into tourist spots and attract a steady stream of visitors. Chalets, luxury bungalows and resorts quickly develop along the coastlines of popular islands like Tioman, Redang, Pangkor, Perhentian, Sibu and many others. In this study, we applied the Fuzzy Weighted Average (FWA) method based on left and right scores to determine the criteria weights and to select the best island in Malaysia. Cost, safety, attractive activities, accommodation and scenery are the five main criteria considered, and five selected islands in Malaysia are taken into account as alternatives. The criteria most important to tourists are identified from the ranking of criteria weights, and the best island in Malaysia is then determined in terms of FWA values. This pilot study can serve as a reference for evaluating performance or solving other selection problems, where more criteria, alternatives and decision makers can be considered in the future.
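A heavily simplified sketch of a fuzzy weighted average over triangular fuzzy numbers, defuzzified by a centroid. The left/right-score ranking used in the paper is a more refined procedure than this component-wise interval approximation, and the exact FWA bounds in general require optimization over alpha-cuts.

```python
def fuzzy_weighted_average(ratings, weights):
    """Approximate fuzzy weighted average of triangular fuzzy numbers
    (l, m, u): conservative interval bounds for the lower/upper ends,
    then centroid defuzzification to a single crisp score."""
    lo = sum(r[0] * w[0] for r, w in zip(ratings, weights)) / sum(w[2] for w in weights)
    mid = sum(r[1] * w[1] for r, w in zip(ratings, weights)) / sum(w[1] for w in weights)
    hi = sum(r[2] * w[2] for r, w in zip(ratings, weights)) / sum(w[0] for w in weights)
    return (lo + mid + hi) / 3.0  # centroid of the resulting triangle
```

With crisp (degenerate) fuzzy numbers the function reduces to an ordinary weighted average, which is a useful sanity check.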
Equating of Subscores and Weighted Averages under the NEAT Design. Research Report. ETS RR-11-01
ERIC Educational Resources Information Center
Sinharay, Sandip; Haberman, Shelby
2011-01-01
Recently, the literature has seen increasing interest in subscores for their potential diagnostic values; for example, one study suggested the report of weighted averages of a subscore and the total score, whereas others showed, for various operational and simulated data sets, that weighted averages, as compared to subscores, lead to more accurate…
Marin, Lucas; Valls, Aida; Isern, David; Moreno, Antonio; Merigó, José M
2014-01-01
Linguistic variables are very useful to evaluate alternatives in decision making problems because they provide a vocabulary in natural language rather than numbers. Some aggregation operators for linguistic variables force the use of a symmetric and uniformly distributed set of terms. The need to relax these conditions has recently been posited. This paper presents the induced unbalanced linguistic ordered weighted average (IULOWA) operator. This operator can deal with a set of unbalanced linguistic terms that are represented using fuzzy sets. We propose a new order-inducing criterion based on the specificity and fuzziness of the linguistic terms. Different relevancies are given to the fuzzy values according to their uncertainty degree. To illustrate the behaviour of the precision-based IULOWA operator, we present an environmental assessment case study in which a multiperson multicriteria decision making model is applied. PMID:25136677
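The plain numeric OWA operator that IULOWA generalizes can be sketched directly; the induced ordering, unbalanced linguistic terms, and fuzzy representation of the paper are not modeled here.

```python
def owa(values, weights):
    """Ordered weighted average: the weights apply to the values sorted in
    descending order, not to particular arguments, so the operator spans
    the range from max to min depending on the weight vector."""
    assert abs(sum(weights) - 1.0) < 1e-9, "OWA weights must sum to 1"
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))
```

Weight vector (1, 0, ..., 0) recovers the maximum, (0, ..., 0, 1) the minimum, and uniform weights the arithmetic mean, which is how OWA operators tune between "and-like" and "or-like" aggregation.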
Effects of average molecular weight and concentration of polymer additive on friction and wear
Yoshida, Kazuo
1990-04-01
Tribological behavior with oils containing polymethacrylates (PMAs) differing in average molecular weight, Mw, is examined in sliding concentrated contacts. At low loads, low PMA Mw or higher concentrations of PMA have a beneficial effect on wear, but at high loads, PMAs are detrimental. The beneficial effect is attributed to elastohydrodynamic film formation. The most important parameter is the number of PMA molecules per unit oil volume. The prowear action can be explained by the fact that PMA molecules may accumulate in the inlet region of the contact. The polymer accumulation may block the base oil entering the contact leading to oil starvation which in turn leads to severe contacts and increases in wear. This anomalous behavior may result from the competition between the prowear action and oil film formation. 13 refs.
A Weight-Averaged Interpolation Method for Coupling Time-Accurate Rarefied and Continuum Flows
NASA Astrophysics Data System (ADS)
Diaz, Steven William
A novel approach to coupling rarefied and continuum flow regimes as a single, hybrid model is introduced. The method borrows from techniques used in the simulation of spray flows to interpolate Lagrangian point-particles onto an Eulerian grid in a weight-averaged sense. A brief overview of traditional methods for modeling both rarefied and continuum domains is given, and a review of the literature regarding rarefied/continuum flow coupling is presented. Details of the theoretical development of the method of weighted interpolation are then described. The method evaluates macroscopic properties at the nodes of a CFD grid via the weighted interpolation of all simulated molecules in a set surrounding the node. The weight factor applied to each simulated molecule is the inverse of the linear distance between it and the given node. During development, the method was applied to several preliminary cases, including supersonic flow over an airfoil, subsonic flow over tandem airfoils, and supersonic flow over a backward-facing step, all at low Knudsen numbers. The main thrust of the research centered on the time-accurate expansion of a rocket plume into a near-vacuum. The method proves flexible enough to be used with various flow solvers, demonstrated by the use of Fluent as the continuum solver for the preliminary cases and a NASA-developed Large Eddy Simulation research code, WRLES, for the full lunar model. The method is applicable to a wide range of Mach numbers and is completely grid independent, allowing the rarefied and continuum solvers to be optimized for their respective domains without consideration of the other. The work presented demonstrates the validity and flexibility of the method of weighted interpolation as a novel concept in the field of hybrid flow coupling. The method marks a significant divergence from current practices in the coupling of rarefied and continuum flow domains and offers a kernel on which to base an ongoing field of research. 
It has the potential to significantly increase the flexibility of hybrid rarefied/continuum flow analyses.
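The inverse-distance weighting at the core of the method can be sketched for a single grid node. The function name, 2D geometry, and cutoff `radius` are illustrative assumptions; the abstract specifies only that each molecule's weight is the inverse of its linear distance to the node.

```python
import math

def node_property(node, particles, values, radius):
    """Estimate a macroscopic property at a grid node from simulated
    molecules within `radius`, each weighted by the inverse of its
    distance to the node."""
    wsum = vsum = 0.0
    for (x, y), v in zip(particles, values):
        d = math.hypot(x - node[0], y - node[1])
        if d <= radius:
            w = 1.0 / max(d, 1e-12)  # guard against division by zero at the node
            wsum += w
            vsum += w * v
    return vsum / wsum if wsum else float("nan")
```

Because weights diverge as distance shrinks, molecules near the node dominate the estimate, which is what makes the interpolation local despite the set-based formulation.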
NASA Astrophysics Data System (ADS)
Nadi, S.; Delavar, M. R.
2011-06-01
This paper presents a generic model for using different decision strategies in multi-criteria, personalized route planning. Some researchers have considered user preferences in navigation systems. However, these prior studies typically employed a high-tradeoff decision strategy, which used a weighted linear aggregation rule, and neglected other decision strategies. The proposed model integrates a pairwise comparison method and quantifier-guided ordered weighted averaging (OWA) aggregation operators to form a personalized route planning method that incorporates different decision strategies. The model can be used to calculate the impedance of each link regarding user preferences in terms of the route criteria, criteria importance and the selected decision strategy. Regarding the decision strategy, the calculated impedance lies between aggregations that use a logical "and" (which requires all the criteria to be satisfied) and a logical "or" (which requires at least one criterion to be satisfied); it also includes taking the average of the criteria scores. The model results in multiple alternative routes, which apply different decision strategies and provide users with the flexibility to select one of them en route based on the real-world situation. The model also identifies the route that is robust across different decision strategies. The influence of different decision strategies on the results is investigated in an illustrative example. The model is implemented in a web-based geographical information system (GIS) for Isfahan, Iran, and verified in a tourist routing scenario. The results demonstrated the validity of the model's route planning in real-world situations.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-19
..., 2011 (76 FR 13580). Furthermore, due to the complexity of the issues proposed in the NPRM, FTA is..., FTA published an NPRM in the Federal Register (76 FR 13850) proposing to amend its bus testing... Weight and Test Vehicle Weight, and Public Meeting and Extension of Comment Period AGENCY:...
Time-weighted average SPME analysis for in planta determination of cVOCs.
Sheehan, Emily M; Limmer, Matt A; Mayer, Philipp; Karlson, Ulrich Gosewinkel; Burken, Joel G
2012-03-20
The potential of phytoscreening for plume delineation at contaminated sites has promoted interest in innovative, sensitive contaminant sampling techniques. Solid-phase microextraction (SPME) methods have been developed, offering quick, undemanding, noninvasive sampling without the use of solvents. In this study, time-weighted average SPME (TWA-SPME) sampling was evaluated for in planta quantification of chlorinated solvents. TWA-SPME was found to have increased sensitivity over headspace and equilibrium SPME sampling. Using a variety of chlorinated solvents and a polydimethylsiloxane/carboxen (PDMS/CAR) SPME fiber, most compounds exhibited near linear or linear uptake over the sampling period. Smaller, less hydrophobic compounds exhibited more nonlinearity than larger, more hydrophobic molecules. Using a specifically designed in planta sampler, field sampling was conducted at a site contaminated with chlorinated solvents. Sampling with TWA-SPME produced instrument responses ranging from 5 to over 200 times higher than headspace tree core sampling. This work demonstrates that TWA-SPME can be used for in planta detection of a broad range of chlorinated solvents and methods can likely be applied to other volatile and semivolatile organic compounds. PMID:22332592
Wingard, G.L.; Hudley, J.W.
2012-01-01
A molluscan analogue dataset is presented in conjunction with a weighted-averaging technique as a tool for estimating past salinity patterns in south Florida’s estuaries and developing targets for restoration based on these reconstructions. The method, here referred to as cumulative weighted percent (CWP), was tested using modern surficial samples collected in Florida Bay from sites located near fixed water monitoring stations that record salinity. The results were calibrated using species weighting factors derived from examining species occurrence patterns. A comparison of the resulting calibrated species-weighted CWP (SW-CWP) to the observed salinity at the water monitoring stations averaged over a 3-year time period indicates, on average, the SW-CWP comes within less than two salinity units of estimating the observed salinity. The SW-CWP reconstructions were conducted on a core from near the mouth of Taylor Slough to illustrate the application of the method.
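The weighted-averaging step behind a CWP-style reconstruction can be sketched as an abundance-weighted mean of per-species salinity optima. The species weighting factors and the calibration against monitoring stations described in the paper are specific to their dataset; here they are an optional, hypothetical input.

```python
def cwp_salinity(abundance_pct, salinity_optima, weight_factors=None):
    """Weighted-averaging salinity estimate for one assemblage: each
    species' salinity optimum weighted by its percent abundance and an
    optional per-species weighting factor (as in the calibrated SW-CWP)."""
    if weight_factors is None:
        weight_factors = [1.0] * len(abundance_pct)
    num = sum(a * s * w for a, s, w in zip(abundance_pct, salinity_optima, weight_factors))
    den = sum(a * w for a, w in zip(abundance_pct, weight_factors))
    return num / den
```

Applied downcore, sample by sample, this yields the kind of salinity time series used to set restoration targets.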
Pardo, C E; Kreuzer, M; Bee, G
2013-11-01
Offspring born into normal-size litters (10 to 15 piglets) but classified as having lower than average birth weight (average of the sow herd used: 1.46 ± 0.2 kg; mean ± s.d.) exhibit at birth negative phenotypic traits normally associated with intrauterine growth restriction, such as brain-sparing and impaired myofiber hyperplasia. The objective of the study was to assess long-term effects of intrauterine crowding by comparing postnatal performance, carcass characteristics and pork quality of offspring born from litters with higher (>1.7 kg) or lower (<1.3 kg) than average litter birth weight. From a population of multiparous Swiss Large White sows (parity 2 to 6), 16 litters with high (H = 1.75 kg) or low (L = 1.26 kg) average litter birth weight were selected. At farrowing, two female pigs and two castrated pigs were chosen from each litter: from the H-litters those with the intermediate (HI = 1.79 kg) and lowest (HL = 1.40 kg) birth weight, and from L-litters those with the highest (LH = 1.49 kg) and intermediate (LI = 1.26 kg) birth weight. Average birth weight of the selected HI and LI piglets differed (P < 0.05), whereas birth weight of the HL- and LH-piglets was similar (P > 0.05). These pigs were fattened in group pens and slaughtered at 165 days of age. Pre-weaning performance of the litters and growth performance, carcass and meat quality traits of the selected pigs were assessed. Number of stillborn and pig mortality were greater (P < 0.05) in L- than in H-litters. Consequently, fewer (P < 0.05) piglets were weaned and average litter weaning weight decreased by 38% (P < 0.05). The selected pigs of the L-litters displayed catch-up growth during the starter and grower-finisher periods, leading to similar (P > 0.05) slaughter weight at 165 days of age. However, HL-gilts were more feed efficient and had leaner carcasses than HI-, LH- and LI-pigs (birth weight class × gender interaction P < 0.05). Meat quality traits were mostly similar between groups. 
The marked between-litter birth weight variation observed in normal size litters had therefore no evident negative impact on growth potential and quality of pigs from the lower birth weight group. PMID:23896082
Bacillus subtilis 168 levansucrase (SacB) activity affects average levan molecular weight.
Porras-Domínguez, Jaime R; Ávila-Fernández, Ángela; Miranda-Molina, Afonso; Rodríguez-Alegría, María Elena; Munguía, Agustín López
2015-11-01
Levan is a fructan polymer with a variety of applications in the chemical, health, cosmetic and food industries. Most levan applications depend on levan molecular weight, which in turn depends on the source of the synthesizing enzyme and/or on reaction conditions. Here we demonstrate that, in the particular case of levansucrase from Bacillus subtilis 168, enzyme concentration is also a factor defining the levan molecular weight distribution. While a bimodal distribution has been reported at the usual enzyme concentrations (1 U/ml, equivalent to 0.1 μM levansucrase), we found that a normal distribution of low molecular weight levan is obtained only at high enzyme concentrations (>5 U/ml, equivalent to 0.5 μM levansucrase), while a normal distribution of high molecular weight levan is synthesized at low enzyme doses (0.1 U/ml, equivalent to 0.01 μM levansucrase). PMID:26256357
The effect of capsule-filling machine vibrations on average fill weight.
Llusa, Marcos; Faulhammer, Eva; Biserni, Stefano; Calzolari, Vittorio; Lawrence, Simon; Bresciani, Massimo; Khinast, Johannes
2013-09-15
The aim of this paper is to study the effect of capsule-filling speed and the inherent machine vibrations on fill weight for a dosator-nozzle machine. The results show that increasing the capsule-filling speed amplifies the vibration intensity of the machine frame (as measured by laser Doppler vibrometer), which leads to powder densification. The mass of powder (fill weight) collected via the nozzle is significantly larger at higher capsule-filling speeds. There is therefore a correlation between powder densification under more intense vibrations and larger fill weights. Quality-by-Design of powder-based products should evaluate the effect of environmental vibrations on material attributes, which in turn may affect product quality. PMID:23872302
Full-custom design of split-set data weighted averaging with output register for jitter suppression
NASA Astrophysics Data System (ADS)
Jubay, M. C.; Gerasta, O. J.
2015-06-01
A full-custom design of an element selection algorithm, named Split-set Data Weighted Averaging (SDWA), is implemented in a 90 nm CMOS technology Synopsys library. SDWA is applied to seven unit elements (3-bit) using a thermometer-coded input. Split-set DWA is an improved DWA algorithm that meets the requirement for randomization together with long-term equal element usage. Randomization and equal element usage improve the spectral response of the unit elements, yielding a higher spurious-free dynamic range (SFDR) without significantly degrading the signal-to-noise ratio (SNR). Being a full-custom design, it is carried down to the transistor level, and a custom chip layout is also provided, with a total area of 0.3 mm2 and a power consumption of 0.566 mW, simulated at a 50 MHz clock frequency. In this implementation, SDWA is successfully derived and improved by introducing a register at the output that suppresses the jitter introduced at the final stage by switching loops and successive delays.
47 CFR 36.622 - National and study area average unseparated loop costs.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 2 2010-10-01 2010-10-01 false National and study area average unseparated... Universal Service Fund Calculation of Loop Costs for Expense Adjustment § 36.622 National and study area... provided in paragraph (c) of this section, this is equal to the sum of the Loop Costs for each study...
47 CFR 36.622 - National and study area average unseparated loop costs.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 2 2011-10-01 2011-10-01 false National and study area average unseparated... Universal Service Fund Calculation of Loop Costs for Expense Adjustment § 36.622 National and study area... provided in paragraph (c) of this section, this is equal to the sum of the Loop Costs for each study...
The Effect of Area Averaging on the Approximated Profile of the H α Spectral Line
NASA Astrophysics Data System (ADS)
Bodnárová, M.; Utz, D.; Rybák, J.
2016-04-01
The Hα line is widely used as a diagnostic of the chromosphere. Often one needs to average the line profile over some area to increase the signal-to-noise ratio. Thus it is important to understand how derived parameters vary with changing approximations. In this study we investigate the effect of spatial averaging of a selected area on the temporal variations of the width, the intensity and the Doppler shift of the Hα spectral line profile. The approximated profile was deduced from co-temporal observations in five points throughout the Hα line profile obtained by the tunable Lyot filter installed on the Dutch Open Telescope. We found variations of the intensity and the Doppler velocities which were independent of the size of the area used for the computation of the area-averaged Hα spectral line profile.
Prediction of oil palm production using the weighted average of fuzzy sets concept approach
NASA Astrophysics Data System (ADS)
Nugraha, R. F.; Setiyowati, Susi; Mukhaiyar, Utriweni; Yuliawati, Apriliani
2015-12-01
Proper planning is crucial for decision making in a company. For oil palm producer companies, predicting future production realizations is useful and is considered when forming company strategies. This means that accurate prediction is essential. Until now, to predict the next month's oil palm production, the company has used the simple mean of the latest five years of observations. Recently, imprecision in these estimates of oil palm production (overestimation) has become a problem and a focus of attention in the company. Here we propose a weighted mean approach based on fuzzy set concepts for estimation and prediction. We find that the fuzzy-concept prediction almost always gives lower estimates of the realizations than the simple mean.
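The fuzzy weighted-mean idea in the abstract above can be illustrated with a membership-weighted mean over historical observations; the membership function and all numbers below are illustrative assumptions, not the authors' scheme.

```python
def fuzzy_weighted_mean(observations, memberships):
    """Membership-weighted mean.

    Instead of the simple mean (equal weights), each historical
    observation is weighted by its fuzzy membership degree in a set
    such as "representative of current conditions".
    """
    total = sum(memberships)
    return sum(x * m for x, m in zip(observations, memberships)) / total

# Hypothetical monthly production figures, oldest to newest (tons):
production = [120.0, 118.0, 125.0, 110.0, 105.0]
# Illustrative memberships: recent observations count more.
memberships = [0.2, 0.4, 0.6, 0.8, 1.0]

print(fuzzy_weighted_mean(production, memberships))  # ≈113.07
print(sum(production) / len(production))             # 115.6 (simple mean)
```

With these illustrative numbers the simple mean weights the older, higher-production years equally and so sits above the membership-weighted estimate, mirroring the overestimation problem the abstract describes.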
Appiani, Elena; Page, Sarah E; McNeill, Kristopher
2014-10-21
Dissolved organic matter (DOM) is involved in numerous environmental processes, and its molecular size is important in many of these, such as DOM bioavailability, DOM sorptive capacity, and the formation of disinfection byproducts during water treatment. The size and size distribution of the molecules composing DOM remain an open question. In this contribution, an indirect method to assess the average size of DOM is described, based on the quenching of hydroxyl radical (HO(•)) by DOM. HO(•) is often assumed to be relatively unselective, reacting with nearly all organic molecules with similar rate constants. Literature values for HO(•) reactions with organic molecules were surveyed to assess this lack of selectivity and to determine a representative quenching rate constant (k(rep) = 5.6 × 10(9) M(-1) s(-1)). This value was used to assess the average molecular weight of various humic and fulvic acid isolates as model DOM, using literature HO(•) quenching constants, kC,DOM. The results obtained by this method were compared with previous estimates of average molecular weight. The average molecular weight (Mn) values obtained with this approach are lower than the Mn measured by other techniques such as size exclusion chromatography (SEC), vapor pressure osmometry (VPO), and flow field-flow fractionation (FFF). This suggests that DOM is an especially good quencher of HO(•), reacting at rates close to the diffusion-controlled limit. It was further observed that humic acids generally react faster than fulvic acids. The high reactivity of humic acids toward HO(•) is in line with the antioxidant properties of DOM. The benefit of this method is that it provides a firm upper bound on the average molecular weight of DOM, based on the kinetic limits of the HO(•) reaction. The results indicate low average molecular weight values, which is most consistent with the recent understanding of DOM. 
A possible DOM size distribution is discussed to reconcile the small nature of DOM with the large-molecule behavior observed in other studies. PMID:25222517
NASA Astrophysics Data System (ADS)
Ishihara, Takemi
2015-12-01
The author has developed a new leveling method for use with magnetic survey data, which consists of adjusting each measurement using the weighted spatial average of its neighboring data and subsequent temporal filtering. There are two key parameters in the method: the `weight distance' represents the characteristic distance of the weight function and the `filtering width' represents the full width of the Gaussian filtering function on the time series. This new method was applied to three examples of actual marine survey data. Leveling using optimum values of these two parameters for each example was found to significantly reduce the standard deviations of crossover differences by one third to one fifth of the values before leveling. The obtained time series of correction values for each example had a good correlation with the magnetic observatory data obtained relatively close to the survey areas, thus validating this new leveling method.
Code of Federal Regulations, 2012 CFR
2012-07-01
... average volume fraction of HAP in the actual solvent loss? 63.2854 Section 63.2854 Protection of... How do I determine the weighted average volume fraction of HAP in the actual solvent loss? (a) This section describes the information and procedures you must use to determine the weighted average...
NASA Astrophysics Data System (ADS)
Malczewski, Jacek
2006-12-01
The objective of this paper is to incorporate the concept of fuzzy (linguistic) quantifiers into the GIS-based land suitability analysis via ordered weighted averaging (OWA). OWA is a multicriteria evaluation procedure (or combination operator). The nature of the OWA procedure depends on some parameters, which can be specified by means of fuzzy (linguistic) quantifiers. By changing the parameters, OWA can generate a wide range of decision strategies or scenarios. The quantifier-guided OWA procedure is illustrated using land-use suitability analysis in a region of Mexico.
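The OWA operator itself has a compact standard definition: the criterion values are reordered in descending order and combined with position-dependent weights, so the weight vector alone moves the operator between OR-like (max), AND-like (min) and averaging behaviour. A minimal sketch, with illustrative scores and weight vectors that are not from the paper:

```python
import numpy as np

def owa(values, weights):
    """Ordered weighted averaging (OWA) operator.

    Criterion values are sorted in descending order, then combined
    with the position weights, which must sum to 1.
    """
    values = np.asarray(values, dtype=float)
    weights = np.asarray(weights, dtype=float)
    assert values.shape == weights.shape
    assert np.isclose(weights.sum(), 1.0)
    ordered = np.sort(values)[::-1]          # descending: b_1 >= b_2 >= ...
    return float(ordered @ weights)

# Three classic decision strategies on the same suitability scores:
scores = [0.9, 0.4, 0.7]
print(owa(scores, [1.0, 0.0, 0.0]))      # "OR"  (max): 0.9
print(owa(scores, [0.0, 0.0, 1.0]))      # "AND" (min): 0.4
print(owa(scores, [1/3, 1/3, 1/3]))      # neutral (plain mean): ≈0.667
```

Intermediate weight vectors, e.g. generated from fuzzy linguistic quantifiers such as "most" or "a few", produce the range of decision strategies between these extremes.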
Area-averaged surface fluxes and their time-space variability over the FIFE experimental domain
NASA Technical Reports Server (NTRS)
Smith, E. A.; Hsu, A. Y.; Crosson, W. L.; Field, R. T.; Fritschen, L. J.; Gurney, R. J.; Kanemasu, E. T.; Kustas, W. P.; Nie, D.; Shuttleworth, W. J.
1992-01-01
The underlying mean and variance properties of surface net radiation, sensible-latent heat fluxes and soil heat flux are studied over the densely instrumented grassland region encompassing FIFE. Flux variability is discussed together with the problem of scaling up to area-averaged fluxes. Results are compared and contrasted for cloudy and clear situations and examined for the influence of surface-induced biophysical controls (burn and grazing treatments) and topographic controls (aspect ratios and slope factors).
Area-Averaged Surface Fluxes Over the Litfass Region Based on Eddy-Covariance Measurements
NASA Astrophysics Data System (ADS)
Beyrich, Frank; Leps, Jens-Peter; Mauder, Matthias; Bange, Jens; Foken, Thomas; Huneke, Sven; Lohse, Horst; Lüdi, Andreas; Meijninger, Wouter M. L.; Mironov, Dmitrii; Weisensee, Ulrich; Zittel, Peter
2006-10-01
Micrometeorological measurements (including eddy-covariance measurements of the surface fluxes of sensible and latent heat) were performed during the LITFASS-2003 experiment at 13 field sites over different types of land use (forest, lake, grassland, various agricultural crops) in a 20 × 20 km2 area around the Meteorological Observatory Lindenberg (MOL) of the German Meteorological Service (Deutscher Wetterdienst, DWD). Significant differences in the energy fluxes could be found between the major land surface types (forest, farmland, water), but also between the different agricultural crops (cereals, rape, maize). Flux ratios between the different surfaces changed during the course of the experiment as a result of increased water temperature of the lake, changing soil moisture, and of the vegetation development at the farmland sites. The measurements over grass performed at the boundary-layer field site Falkenberg of the MOL were shown to be quite representative for the farmland part of the area. Measurements from the 13 sites were composed into a time series of the area-averaged surface flux by taking into account the data quality of the single flux values from the different sites and the relative occurrence of each surface type in the area. Such composite fluxes could be determined for about 80% of the whole measurement time during the LITFASS-2003 experiment. Comparison of these aggregated surface fluxes with area-averaged fluxes from long-range scintillometer measurements and from airborne measurements showed good agreement.
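The compositing step described above, weighting each site's flux by the areal fraction of its surface type and by a data-quality factor, can be sketched as follows; the exact LITFASS weighting scheme is not given in the abstract, so the formula and all numbers here are illustrative assumptions.

```python
def composite_flux(fluxes, fractions, quality):
    """Composite area-averaged flux from multiple site measurements.

    Each site's flux is weighted by the areal fraction of its surface
    type and by a 0-1 data-quality factor. Sites with quality 0 drop
    out, and the weights are renormalized so the result remains a
    proper average.
    """
    weights = [f * q for f, q in zip(fractions, quality)]
    total = sum(weights)
    if total == 0.0:
        return None                  # no usable data for this interval
    return sum(w * F for w, F in zip(weights, fluxes)) / total

# Hypothetical forest, farmland, water fluxes (W/m2), with one
# low-quality farmland record down-weighted to 0.5:
H = composite_flux(fluxes=[250.0, 180.0, 40.0],
                   fractions=[0.45, 0.48, 0.07],
                   quality=[1.0, 0.5, 1.0])
```

Down-weighting the farmland record shifts the composite toward the fully trusted forest and water sites, which is the qualitative behaviour the abstract describes for intervals with partial data coverage.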
High surface area, low weight composite nickel fiber electrodes
NASA Technical Reports Server (NTRS)
Johnson, Bradley A.; Ferro, Richard E.; Swain, Greg M.; Tatarchuk, Bruce J.
1993-01-01
The energy density and power density of light weight aerospace batteries utilizing the nickel oxide electrode are often limited by the microstructures of both the collector and the resulting active deposit in/on the collector. Heretofore, these two microstructures were intimately linked to one another by the materials used to prepare the collector grid as well as the methods and conditions used to deposit the active material. Significant weight and performance advantages were demonstrated by Britton and Reid at NASA-LeRC using FIBREX nickel mats of ca. 28-32 microns diameter. Work in our laboratory investigated the potential performance advantages offered by nickel fiber composite electrodes containing a mixture of fibers as small as 2 microns diameter (Available from Memtec America Corporation). These electrode collectors possess in excess of an order of magnitude more surface area per gram of collector than FIBREX nickel. The increase in surface area of the collector roughly translates into an order of magnitude thinner layer of active material. Performance data and advantages of these thin layer structures are presented. Attributes and limitations of their electrode microstructure to independently control void volume, pore structure of the Ni(OH)2 deposition, and resulting electrical properties are discussed.
Estimation of the Area of a Reverberant Plate Using Average Reverberation Properties
NASA Astrophysics Data System (ADS)
Achdjian, Hossep; Moulin, Emmanuel; Benmeddour, Farouk; Assaad, Jamal
This paper presents an original method for estimating the area of thin plates of arbitrary geometrical shape. The method relies on the acquisition and ensemble processing of reverberated elastic signals on a few sensors. The acoustical Green's function in a reverberant solid medium is modeled by a nonstationary random process based on the image-sources method. In that way, mathematical expectations of the signal envelopes can be analytically related to reverberation properties and structural parameters such as plate area, group velocity, or source-receiver distance. Then, a simple curve fit applied to an ensemble average over N realizations of the late envelopes allows estimation of a global term involving the values of the structural parameters. From simple statistical modal arguments, it is shown that the obtained relation depends on the plate area and not on the plate shape. Finally, by considering an additional relation obtained from the early characteristics (treated in a deterministic way) of the reverberation signals, it is possible to deduce the area value. This estimation is performed without geometrical measurements and requires access to only a small portion of the plate. Furthermore, the method does not require any time measurement or trigger synchronization between the input channels of the instrumentation (between measured signals), thus implying low hardware constraints. Experimental results obtained on metallic plates with free boundary conditions and on embedded window glasses are presented. Areas of up to several square metres are correctly estimated with a relative error of a few percent.
NASA Astrophysics Data System (ADS)
Mo, Jiangtao; Liu, Chunyan; Yan, Shicui
2007-12-01
In this paper we propose a nonmonotone trust region method. Unlike traditional nonmonotone trust region methods, the nonmonotone technique applied to our method is based on the nonmonotone line search technique proposed by Zhang and Hager [A nonmonotone line search technique and its application to unconstrained optimization, SIAM J. Optim. 14(4) (2004) 1043-1056] instead of that presented by Grippo et al. [A nonmonotone line search technique for Newton's method, SIAM J. Numer. Anal. 23(4) (1986) 707-716]. The method thus only requires a special weighted average of the successive function values to be nonincreasing. Global and superlinear convergence of the method are proved under suitable conditions. Preliminary numerical results show that the method is efficient for unconstrained optimization problems.
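The weighted average of successive function values from Zhang and Hager has a simple recursive form: C_{k+1} = (eta_k Q_k C_k + f(x_{k+1})) / Q_{k+1} with Q_{k+1} = eta_k Q_k + 1, Q_0 = 1 and C_0 = f(x_0). A minimal sketch of the update (variable names are ours):

```python
def update_weighted_average(C_k, Q_k, f_next, eta_k):
    """One step of the Zhang-Hager weighted average of function values.

    C_k is the current weighted average, Q_k its accumulated weight,
    f_next the newest function value, and eta_k in [0, 1] controls the
    degree of nonmonotonicity (eta_k = 0 recovers monotone descent).
    """
    Q_next = eta_k * Q_k + 1.0
    C_next = (eta_k * Q_k * C_k + f_next) / Q_next
    return C_next, Q_next

# With eta_k = 0 the average is just the latest value (monotone case):
C, Q = update_weighted_average(5.0, 1.0, 3.0, 0.0)
assert (C, Q) == (3.0, 1.0)

# With eta_k = 1 it becomes the plain running mean of all values seen:
C, Q = 5.0, 1.0
for f in (3.0, 1.0):
    C, Q = update_weighted_average(C, Q, f, 1.0)
assert Q == 3.0 and abs(C - 3.0) < 1e-12   # mean of 5, 3, 1
```

Requiring only C_k (rather than f(x_k) itself) to be nonincreasing is what permits occasional increases in the raw function values while still guaranteeing convergence.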
Quantum black hole wave packet: Average area entropy and temperature dependent width
NASA Astrophysics Data System (ADS)
Davidson, Aharon; Yellin, Ben
2014-09-01
A quantum Schwarzschild black hole is described, at the mini super spacetime level, by a non-singular wave packet composed of plane wave eigenstates of the momentum Dirac-conjugate to the mass operator. The entropy of the mass spectrum acquires then independent contributions from the average mass and the width. Hence, Bekenstein's area entropy is formulated using the
Coombes, Brandon; Basu, Saonli; Guha, Sharmistha; Schork, Nicholas
2015-01-01
Multi-locus effect modeling is a powerful approach for detection of genes influencing a complex disease. Especially for rare variants, we need to analyze multiple variants together to achieve adequate power for detection. In this paper, we propose several parsimonious branching model techniques to assess the joint effect of a group of rare variants in a case-control study. These models implement a data reduction strategy within a likelihood framework and use a weighted score test to assess the statistical significance of the effect of the group of variants on the disease. The primary advantage of the proposed approach is that it performs model-averaging over a substantially smaller set of models supported by the data and thus gains power to detect multi-locus effects. We illustrate these proposed approaches on simulated and real data and study their performance compared to several existing rare variant detection approaches. The primary goal of this paper is to assess whether there is any gain in power to detect association by averaging over a number of models instead of selecting the best model. Extensive simulations and real data application demonstrate the advantage of the proposed approach in the presence of causal variants with opposite directional effects along with a moderate number of null variants in linkage disequilibrium. PMID:26436424
Chi, Chang-Feng; Cao, Zi-Hao; Wang, Bin; Hu, Fa-Yuan; Li, Zhong-Rui; Zhang, Bin
2014-01-01
In the current study, the relationships between functional properties and average molecular weight (AMW) of collagen hydrolysates from Spanish mackerel (Scomberomorous niphonius) skin were investigated. Seven hydrolysate fractions (5.04 ≤ AMW ≤ 47.82 kDa) from collagen of Spanish mackerel skin were obtained through acid extraction, proteolysis, and fractionation using gel filtration chromatography. The physicochemical properties of the collagen hydrolysate fractions were studied by sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE), gel filtration chromatography, scanning electron microscopy (SEM) and Fourier transform infrared spectroscopy (FTIR). The results indicated an inverse relationship between the antioxidant activities and the logarithm of the AMW of the hydrolysate fractions in the tested AMW range. However, while the reduction of AMW significantly enhanced the solubility of the hydrolysate fractions, the same decrease in AMW negatively affected the emulsifying and foaming capacities. This appeared as a positive correlation between the logarithm of AMW and the emulsion stability index, emulsifying activity index, foam stability, and foam capacity. Therefore, collagen hydrolysates with excellent antioxidant activities or good functionalities as emulsifiers could be obtained by controlling the effect of the digestion process on the AMW of the resultant hydrolysates. PMID:25090114
NASA Astrophysics Data System (ADS)
Davies, G. R.; Chaplin, W. J.; Elsworth, Y.; Hale, S. J.
2014-07-01
The Birmingham Solar Oscillations Network (BiSON) has provided high-quality high-cadence observations from as far back in time as 1978. These data must be calibrated from the raw observations into radial velocity and the quality of the calibration has a large impact on the signal-to-noise ratio of the final time series. The aim of this work is to maximize the potential science that can be performed with the BiSON data set by optimizing the calibration procedure. To achieve better levels of signal-to-noise ratio, we perform two key steps in the calibration process: we attempt a correction for terrestrial atmospheric differential extinction; and the resulting improvement in the calibration allows us to perform weighted averaging of contemporaneous data from different BiSON stations. The improvements listed produce significant improvement in the signal-to-noise ratio of the BiSON frequency-power spectrum across all frequency ranges. The reduction of noise in the power spectrum will allow future work to provide greater constraint on changes in the oscillation spectrum with solar activity. In addition, the analysis of the low-frequency region suggests that we have achieved a noise level that may allow us to improve estimates of the upper limit of g-mode amplitudes.
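The abstract does not spell out the station-weighting scheme, so the sketch below uses standard inverse-variance weighting purely as an illustration of how contemporaneous measurements from several stations can be combined into a single lower-noise value:

```python
import numpy as np

def inverse_variance_average(values, sigmas):
    """Combine simultaneous measurements from several stations.

    Standard inverse-variance weighting (an assumption here; the
    abstract does not state BiSON's exact scheme): each value is
    weighted by 1/sigma^2, and the combined uncertainty shrinks
    below that of the best single station.
    """
    values = np.asarray(values, dtype=float)
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    mean = np.sum(w * values) / np.sum(w)
    sigma = np.sqrt(1.0 / np.sum(w))
    return mean, sigma

# Two hypothetical stations, one twice as noisy as the other:
mean, sigma = inverse_variance_average([10.0, 14.0], [1.0, 2.0])
# weights 1 and 0.25 -> mean = (10 + 3.5) / 1.25 = 10.8
assert abs(mean - 10.8) < 1e-12
assert sigma < 1.0   # combined noise below the best single station
```

Whatever the exact weights, the benefit is the same as in this sketch: overlapping station coverage lowers the noise floor of the combined frequency-power spectrum.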
NASA Astrophysics Data System (ADS)
Gasser, Guy; Pankratov, Irena; Elhanany, Sara; Glazman, Hillel; Lev, Ovadia
2014-05-01
A methodology used to estimate the percentage of wastewater effluent in an otherwise pristine water site is proposed on the basis of the weighted mean of the level of a consortium of indicator pollutants. This method considers the levels of uncertainty in the evaluation of each of the indicators in the site, potential effluent sources, and uncontaminated surroundings. A detailed demonstrative study was conducted on a site that is potentially subject to wastewater leakage. The research concentrated on several perched springs that are influenced to an unknown extent by agricultural communities. A comparison was made to a heavily contaminated site receiving wastewater effluent and surface water runoff. We investigated six springs in two nearby ridges where fecal contamination was detected in the past; the major sources of pollution in the area have since been diverted to a wastewater treatment system. We used chloride, acesulfame, and carbamazepine as domestic pollution tracers. Good correlation (R2 > 0.86) was observed between the mixing ratio predictions based on the two organic tracers (the slope of the linear regression was 1.05), whereas the chloride predictions differed considerably. This methodology is potentially useful, particularly for cases in which detailed hydrological modeling is unavailable but in which quantification of wastewater penetration is required. We demonstrate that the use of more than one tracer for estimation of the mixing ratio reduces the combined uncertainty level associated with the estimate and can also help to disqualify biased tracers.
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Cheng, Yang; Crassidis, John L.; Oshman, Yaakov
2007-01-01
Many applications require an algorithm that averages quaternions in an optimal manner. For example, when combining the quaternion outputs of multiple star trackers having this output capability, it is desirable to properly average the quaternions without recomputing the attitude from the raw star tracker data. Other applications requiring some sort of optimal quaternion averaging include particle filtering and multiple-model adaptive estimation, where weighted quaternions are used to determine the quaternion estimate. For spacecraft attitude estimation applications, prior work derives an optimal averaging scheme to compute the average of a set of weighted attitude matrices using the singular value decomposition method. Focusing on a four-dimensional quaternion Gaussian distribution on the unit hypersphere, other prior work provides an approach to computing the average quaternion by minimizing a quaternion cost function that is equivalent to the attitude matrix cost function. Motivated by and extending these results, this Note derives an algorithm that determines an optimal average quaternion from a set of scalar- or matrix-weighted quaternions. Furthermore, a sufficient condition for the uniqueness of the average quaternion, and the equivalence of the minimization problem stated herein to maximum likelihood estimation, are shown.
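For scalar weights, the optimal average quaternion of this kind is known to be the dominant eigenvector of the weighted outer-product matrix M = Σ_i w_i q_i q_iᵀ. A minimal numerical sketch of that formulation (not the authors' code):

```python
import numpy as np

def average_quaternion(quats, weights):
    """Weighted average of unit quaternions via the eigenvector method.

    Builds M = sum_i w_i q_i q_i^T and returns the eigenvector of M
    with the largest eigenvalue. The sign ambiguity (q and -q encode
    the same attitude) is handled automatically, since
    q q^T = (-q)(-q)^T.
    """
    M = np.zeros((4, 4))
    for q, w in zip(quats, weights):
        q = np.asarray(q, dtype=float)
        q = q / np.linalg.norm(q)
        M += w * np.outer(q, q)
    eigvals, eigvecs = np.linalg.eigh(M)
    return eigvecs[:, -1]          # eigh sorts eigenvalues ascending

# Averaging q and -q (the same physical attitude) recovers that attitude:
q = np.array([0.0, 0.0, 0.0, 1.0])
avg = average_quaternion([q, -q], [0.5, 0.5])
assert abs(abs(avg @ q) - 1.0) < 1e-12
```

A naive component-wise mean of q and -q would give the zero vector, which is exactly the failure the eigenvector formulation avoids.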
NASA Astrophysics Data System (ADS)
Zhang, Leiming; Brook, Jeffrey R.
A method for deriving site-specific and subgrid-area wind speed and friction velocity from regional model output and detailed land-type information is developed. The "subgrid velocity scale" is introduced to account for the generation of turbulent fluxes by subgrid motions. The grid vector-averaged wind speed is adjusted by adding the subgrid velocity scale. This accounts for the fact that the spatial average of the local wind speed is usually larger than the magnitude of the vector-averaged velocity (|V|), especially when there are different land or surface types within the spatial averaging area and when |V| is small. The assumption of u/u* = constant is then applied within a model grid area to obtain wind speed and friction velocity for specific sites and subgrid areas. Using this method, the site-specific and subgrid-area wind speed and friction velocity can be estimated from grid-averaged model output. In addition, more realistic air pollutant dry deposition velocities for specific locations and subgrid areas can be calculated. Grid-averaged deposition velocity values calculated using this approach tend to be about 30% different (either larger or smaller) for HNO3 and sulphate and about 10% different for SO2 and O3 compared to values calculated by assuming a constant wind speed over the whole model grid area. These differences are found to be even larger at specific sites or over some subgrid areas. This method can be applied to determine a more realistic wind speed, friction velocity and pollutant dry deposition velocity at specific locations using gridded meteorological data.
NASA Astrophysics Data System (ADS)
Kassianov, Evgueni; Barnard, James; Flynn, Connor; Riihimaki, Laura; Marinovici, Cristina
2015-10-01
Areal-averaged albedos are particularly difficult to measure in coastal regions, because the surface is not homogenous, consisting of a sharp demarcation between land and water. With this difficulty in mind, we evaluate a simple retrieval of areal-averaged surface albedo using ground-based measurements of atmospheric transmission alone under fully overcast conditions. To illustrate the performance of our retrieval, we find the areal-averaged albedo using measurements from the Multi-Filter Rotating Shadowband Radiometer (MFRSR) at five wavelengths (415, 500, 615, 673, and 870 nm). These MFRSR data are collected at a coastal site in Graciosa Island, Azores supported by the U.S. Department of Energy's (DOE's) Atmospheric Radiation Measurement (ARM) Program. The areal-averaged albedos obtained from the MFRSR are compared with collocated and coincident Moderate Resolution Imaging Spectroradiometer (MODIS) white-sky albedo at four nominal wavelengths (470, 560, 670 and 860 nm). These comparisons are made during a 19-month period (June 2009 - December 2010). We also calculate composite-based spectral values of surface albedo by a weighted-average approach using estimated fractions of major surface types observed in an area surrounding this coastal site. Taken as a whole, these three methods of finding albedo show spectral and temporal similarities, and suggest that our simple, transmission-based technique holds promise, but with estimated errors of about ±0.03. Additional work is needed to reduce this uncertainty in areas with inhomogeneous surfaces.
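The composite-based weighted-average approach mentioned above reduces to an area-fraction-weighted sum of per-surface-type albedos. In this sketch the surface types, fractions and albedo values are hypothetical, not values from the study:

```python
def composite_albedo(fractions, albedos):
    """Area-weighted composite of per-surface-type spectral albedos.

    fractions: areal fraction of each surface type (must sum to 1)
    albedos:   albedo of each surface type at one wavelength
    """
    assert abs(sum(fractions) - 1.0) < 1e-9
    return sum(f * a for f, a in zip(fractions, albedos))

# Hypothetical coastal footprint at one wavelength:
# 40% water, 35% grass, 25% bare rock
alb = composite_albedo([0.40, 0.35, 0.25], [0.06, 0.15, 0.20])
assert abs(alb - 0.1265) < 1e-9
```

Repeating this at each MFRSR wavelength, with per-type spectral albedos, yields the composite spectral albedo curve that the abstract compares against the retrieval and MODIS values.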
Juarez-Galan, Juan M; Valor, Ignacio
2009-04-10
A new cryogenic integrative air sampler (patent application number 08/00669), able to overcome many of the limitations of current volatile organic compound and odour sampling methodologies, is presented. The sample is spontaneously collected in a universal way at 15 mL/min, selectively dried (reaching up to 95% moisture removal) and stored under cryogenic conditions. The sampler performance was tested under time weighted average (TWA) conditions, sampling 100 L of air over 5 days for determination of NH(3), H(2)S, and benzene, toluene, ethylbenzene and xylenes (BTEX) in the ppm(v) range. Recovery was 100% (statistically) for all compounds, with a concentration factor of 5.5. Furthermore, an in-field evaluation was done by monitoring the TWA immission levels of BTEX and dimethylethylamine (ppb(v) range) in an urban area with the developed technology and comparing the results with those monitored with a commercial graphitised charcoal diffusive sampler. The results obtained showed a good statistical agreement between the two techniques. PMID:19230895
NASA Astrophysics Data System (ADS)
Suzuki, Kouki; Kato, Takeyoshi; Suzuoki, Yasuo
A photovoltaic power generation system (PVS) is one of the promising measures for developing a low-carbon society. Because of its unstable power output characteristics, a robust forecast method must be employed to realize high penetration of PVS into an electric power system. Given the differences in power output patterns among PVSs dispersed across the service area of an electric power system, the forecast error varies among locations, so the forecast error of the ensemble average power output of high-penetration PVS is reduced. In this paper, using multi-point insolation data observed in the Chubu area over four months, we evaluated the forecast error of the ensemble average insolation of 11 districts and compared it with the forecast error for individual districts. As a result, the number of periods with a forecast error larger than the average insolation during the four months is reduced by 16 hours for the ensemble average insolation compared with the average of the individual forecasts. The largest forecast error during the four months is also reduced, from 0.68 kWh/m2 on average over the 11 districts to 0.45 kWh/m2 for the ensemble average insolation.
ERIC Educational Resources Information Center
Warne, Russell T.; Nagaishi, Chanel; Slade, Michael K.; Hermesmeyer, Paul; Peck, Elizabeth Kimberli
2014-01-01
While research has shown the statistical significance of high school grade point averages (HSGPAs) in predicting future academic outcomes, the systems with which HSGPAs are calculated vary drastically across schools. Some schools employ unweighted grades that carry the same point value regardless of the course in which they are earned; other…
Thompson, Amanda L; Adair, Linda; Bentley, Margaret E
2014-01-01
Biomedical researchers have raised concerns that mothers’ inability to recognize infant and toddler overweight poses a barrier to stemming increasing rates of overweight and obesity, particularly among low-income or minority mothers. Little anthropological research has examined the sociocultural, economic or structural factors shaping maternal perceptions of infant and toddler size or addressed biomedical depictions of maternal misperception as a “socio-cultural problem.” We use qualitative and quantitative data from 237 low-income, African-American mothers to explore how they define ‘normal’ infant growth and infant overweight. Our quantitative results document that mothers’ perceptions of infant size change with infant age, are sensitive to the size of other infants in the community, and are associated with concerns over health and appetite. Qualitative analysis documents that mothers are concerned with their children’s weight status and assess size in relation to their infants’ cues, local and societal norms of appropriate size, interactions with biomedicine, and concerns about infant health and sufficiency. These findings suggest that mothers use multiple models to interpret and respond to child weight. An anthropological focus on the complex social and structural factors shaping what is considered ‘normal’ and ‘abnormal’ infant weight is critical for shaping appropriate and successful interventions. PMID:25684782
Sether, Bradley A.; Berkas, Wayne R.; Vecchia, Aldo V.
2004-01-01
Data were collected at 11 water-quality sampling sites in the upper Red River of the North (Red River) Basin from May 1997 through September 1999 to describe the water-quality characteristics of the upper Red River and to estimate constituent loads and flow-weighted average concentrations for major tributaries of the Red River upstream from the bridge crossing the Red River at Perley, Minn. Samples collected from the sites were analyzed for 5-day biochemical oxygen demand, bacteria, dissolved solids, nutrients, and suspended sediment. Concentration data indicated the median concentrations for most constituents and sampling sites during the study period were less than existing North Dakota and Minnesota standards or guidelines. However, more than 25 percent of the samples for the Red River at Perley, Minn., site had fecal coliform concentrations that were greater than 200 colonies per 100 milliliters, indicating an abundance of pathogens in the upper Red River Basin. Although total nitrite plus nitrate concentrations generally increased in a downstream direction, the median concentrations for all sites were less than the North Dakota suggested guideline of 1.0 milligram per liter. Total and dissolved phosphorus concentrations also generally increased in a downstream direction, but, for those constituents, the median concentrations for most sampling sites exceeded the North Dakota suggested guideline of 0.1 milligram per liter. For dissolved solids, nutrients, and suspended sediments, a relation between constituent concentration and streamflow was determined using the data collected during the study period. The relation was determined by a multiple regression model in which concentration was the dependent variable and streamflow was the primary explanatory variable. 
The regression model was used to compute unbiased estimates of annual loads for each constituent and for each of eight primary water-quality sampling sites and to compute the degree of uncertainty associated with each estimated annual load. The estimated annual loads for the eight primary sites then were used to estimate annual loads for five intervening reaches in the study area. Results were used as a screening tool to identify which subbasins contributed a disproportionate amount of pollutants to the Red River. To compare the relative water quality of the different subbasins, an estimated flow-weighted average (FWA) concentration was computed from the estimated average annual load and the average annual streamflow for each subbasin. The 5-day biochemical oxygen demands in the upper Red River Basin were fairly small, and medians ranged from 1 to 3 milligrams per liter. The largest estimated FWA concentration for dissolved solids (about 630 milligrams per liter) was for the Bois de Sioux River near Doran, Minn., site. The Otter Tail River above Breckenridge, Minn., site had the smallest estimated FWA concentration (about 240 milligrams per liter). The estimated FWA concentrations for dissolved solids for the main-stem sites ranged from about 300 to 500 milligrams per liter and generally increased in a downstream direction. The estimated FWA concentrations for total nitrite plus nitrate for the main-stem sites increased from about 0.2 milligram per liter for the Red River below Wahpeton, N. Dak., site to about 0.9 milligram per liter for the Red River at Perley, Minn., site. Much of the increase probably resulted from flows from the tributary sites and intervening reaches, excluding the Otter Tail River above Breckenridge, Minn., site. However, uncertainty in the estimated concentrations prevented any reliable conclusions regarding which sites or reaches contributed most to the increase. 
The estimated FWA concentrations for total ammonia for the main-stem sites increased from about 0.05 milligram per liter for the Red River above Fargo, N. Dak., site to about 0.15 milligram per liter for the Red River near Harwood, N. Dak., site. …
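The flow-weighted average (FWA) concentration used throughout this report is the estimated average annual load divided by the average annual streamflow. A minimal sketch with hypothetical values (not the report's estimates), including the unit conversion to mg/L:

```python
# Sketch of a flow-weighted average (FWA) concentration, used to compare
# subbasins: FWA = average annual load / average annual streamflow.
# The load and flow values below are hypothetical.

def fwa_concentration(annual_load_kg, annual_flow_m3):
    """FWA concentration in mg/L: (kg * 1e6 mg/kg) / (m^3 * 1e3 L/m^3)."""
    return (annual_load_kg * 1e6) / (annual_flow_m3 * 1e3)

# Hypothetical subbasin: 5.0e6 kg/yr of dissolved solids in 1.0e7 m^3/yr flow.
c = fwa_concentration(5.0e6, 1.0e7)  # 500 mg/L
```

Because the load estimates carry regression uncertainty, the derived FWA concentrations inherit that uncertainty, which is why the report treats them as a screening tool rather than precise values.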
Numerous urban canopy schemes have recently been developed for mesoscale models in order to approximate the drag and turbulent production effects of a city on the air flow. However, little data exists by which to evaluate the efficacy of the schemes since "area-averaged" …
Area-to-point parameter estimation with geographically weighted regression
NASA Astrophysics Data System (ADS)
Murakami, Daisuke; Tsutsumi, Morito
2015-07-01
The modifiable areal unit problem (MAUP) is a problem whereby the way data are aggregated into areal units influences the results of spatial data analysis. Standard geographically weighted regression (GWR), which ignores aggregation mechanisms, cannot be considered an efficient countermeasure to the MAUP. Accordingly, this study proposes a type of GWR with aggregation mechanisms, termed area-to-point (ATP) GWR herein. ATP GWR, which is closely related to geostatistical approaches, estimates the disaggregate-level local trend parameters by using aggregated variables. We examine the effectiveness of ATP GWR for mitigating the MAUP through a simulation study and an empirical study. The simulation study indicates that the proposed method is robust to the MAUP when the spatial scales of aggregation are not too coarse compared with the scale of the underlying spatial variations. The empirical studies demonstrate that the method provides intuitively consistent estimates.
The daily computed weighted averaging basic reproduction number R^n_{0,k,ω} for MERS-CoV in South Korea
NASA Astrophysics Data System (ADS)
Jeong, Darae; Lee, Chang Hyeong; Choi, Yongho; Kim, Junseok
2016-06-01
In this paper, we propose the daily computed weighted averaging basic reproduction number R^n_{0,k,ω} for the Middle East respiratory syndrome coronavirus (MERS-CoV) outbreak in South Korea, May to July 2015. We use an SIR model with piecewise-constant parameters β (contact rate) and γ (removal rate). We use the explicit Euler method to solve the SIR model and a nonlinear least-squares fitting procedure to find the best parameters. In R^n_{0,k,ω}, the parameters n, k, and ω denote the number of days from a reference date, the number of days in the average, and a weighting factor, respectively. We perform a series of numerical experiments and compare the results with real-world data. In particular, using the predicted reproduction number based on the previous two consecutive reproduction numbers, we can predict the future behavior of the reproduction number.
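The ingredients named in this abstract can be sketched in a few lines: an SIR model advanced with the explicit Euler method under piecewise-constant β and γ, and a k-day weighted average of the daily ratio β/γ. The parameter values and the geometric weighting form below are illustrative assumptions, not the paper's fitted values.

```python
# Minimal sketch (illustrative parameters, not MERS-CoV fits): explicit Euler
# step for the SIR model, plus a k-day weighted average of daily beta/gamma.

def sir_euler_step(s, i, r, beta, gamma, n_pop, dt=1.0):
    """One explicit Euler step of the SIR equations."""
    new_inf = beta * s * i / n_pop   # new infections per unit time
    rem = gamma * i                  # removals per unit time
    return s - dt * new_inf, i + dt * (new_inf - rem), r + dt * rem

def weighted_avg_r(betas, gammas, k, w):
    """Average the last k daily ratios beta/gamma with weights w**j
    (an assumed geometric weighting; w=1 gives a plain k-day average)."""
    daily = [b / g for b, g in zip(betas[-k:], gammas[-k:])]
    weights = [w ** j for j in range(len(daily))]
    return sum(d * wt for d, wt in zip(daily, weights)) / sum(weights)

betas = [0.8, 0.6, 0.5, 0.4]      # hypothetical daily contact rates
gammas = [0.2, 0.2, 0.25, 0.25]   # hypothetical daily removal rates
r_avg = weighted_avg_r(betas, gammas, k=2, w=1.0)  # plain 2-day average
```

With these numbers the last two daily ratios are 2.0 and 1.6, so the 2-day average is 1.8; a declining averaged reproduction number of this kind is what the paper tracks day by day.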
Collins, Alison M; Barchia, Idris M
2014-01-31
Serology indicates that Lawsonia intracellularis infection is widespread in many countries, with most pigs seroconverting before 22 weeks of age. However, the majority of animals appear to be sub-clinically affected, demonstrated by the low reported prevalence of diarrhoea. Production losses caused by sub-clinical proliferative enteropathy (PE) are more difficult to diagnose, indicating the need for a quantitative L. intracellularis assay that correlates well with disease severity. In previous studies, increasing numbers of L. intracellularis in pig faeces, quantified with a real-time polymerase chain reaction (qPCR), showed a strong negative correlation with average daily gain (ADG). In this study, the association between faecal L. intracellularis numbers and PE severity was examined in two L. intracellularis experimental challenge trials (n1=32 and n2=95). The number of L. intracellularis shed in individual faeces was determined by qPCR on days 0, 7, 14, 17 and 21 post challenge, and average daily gain was recorded over the same period. The severity of histopathological lesions of PE was scored at 21 days post challenge. L. intracellularis numbers correlated well with histopathology severity and faecal consistency scores (r=0.72 and 0.68, respectively), and negatively with ADG (r=-0.44). Large reductions in ADG (131 g/day) occurred when the number of L. intracellularis shed by experimentally challenged pigs increased from 10^7 to 10^8 L. intracellularis, although smaller ADG reductions were also observed (15 g/day) when the number of L. intracellularis increased from 10^6 to 10^7 L. intracellularis. PMID:24388631
NASA Astrophysics Data System (ADS)
Pelgrum, H.; Bastiaanssen, W. G. M.
1996-04-01
A knowledge of the area-averaged latent heat flux <λE> is necessary to validate large-scale model predictions of heat fluxes over heterogeneous land surfaces. This paper describes different procedures to obtain <λE> as a weighted average of ground-based observations. The weighting coefficients are obtained from remote sensing measurements. The remote sensing data used in this study consist of a Landsat thematic mapper image of the European Field Experiment in a Desertification-Threatened Area (EFEDA) grid box in central Spain, acquired on June 12, 1991. A newly developed remote sensing algorithm, the surface energy balance for land algorithm (SEBAL), solves the energy budget on a pixel-by-pixel basis. From the resulting frequency distribution of the latent heat flux, the area-averaged latent heat flux was calculated as <λE> = 164 W m⁻². This method was validated with field measurements of latent heat flux, sensible heat flux, and soil moisture. In general, the SEBAL-derived output compared well with field measurements. Two other methods for retrieval of weighting coefficients were tested against SEBAL. The second method combines satellite images of surface temperature, surface albedo, and normalized difference vegetation index (NDVI) into an index on a pixel-by-pixel basis. After inclusion of ground-based measurements of the latent heat flux, a linear relationship between the index and the latent heat flux was established. This relationship was used to map the latent heat flux on a pixel-by-pixel basis, resulting in <λE> = 194 W m⁻². The third method makes use of a supervised classification of the thematic mapper image into eight land use classes. An average latent heat flux was assigned to each class by using field measurements of the latent heat flux. According to the percentage of occurrence of each class in the image, <λE> was calculated as 110 W m⁻². A weighting scheme was produced to make an estimation of <λE> possible from in situ observations. 
The weighting scheme contained a multiplication factor for each measurement site in order to compensate for the relative contribution of that site to <λE>. It was shown that <λE> derived as the arithmetic mean of 13 individual in situ observations leads to a difference of 34% (<λE> = 104 W m⁻²), which emphasizes the need for improved weighting procedures.
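The central point of this abstract, that an arithmetic mean of station fluxes differs from an area-weighted average when sites are unequally representative, can be sketched with hypothetical values (these are not the EFEDA measurements):

```python
# Sketch: area-averaged latent heat flux <lambda E> as a weighted average of
# in situ observations, versus a plain arithmetic mean. The fluxes and area
# fractions below are hypothetical, not the EFEDA field values.

def weighted_le(fluxes, weights):
    """Weighted average flux; weights need not be pre-normalized."""
    total_w = sum(weights)
    return sum(f * w for f, w in zip(fluxes, weights)) / total_w

site_le = [250.0, 120.0, 60.0]    # latent heat flux at three sites (W/m^2)
area_fraction = [0.2, 0.3, 0.5]   # fraction of the grid box each represents

arithmetic = sum(site_le) / len(site_le)       # ignores representativeness
area_weighted = weighted_le(site_le, area_fraction)
```

Here the arithmetic mean (about 143 W/m²) overstates the area-weighted value (116 W/m²) because the wettest, highest-flux site covers only a fifth of the area, the same kind of bias the study quantified at 34%.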
NASA Astrophysics Data System (ADS)
Elmore, A. J.; Guinn, S. M.
2009-12-01
Land surface phenology (LSP) is the seasonal pattern of vegetation dynamics that occur each spring and fall. Multiple drivers of spatial variation in LSP and its variation over time have been analyzed using satellite remote sensing. Until recently, these observations have been restricted to moderate- and low-resolution data, as it is only at these spatial resolutions for which temporally continuous data is available. However, understanding small-scale variation in LSP over space and time may be key to linking pattern to process, and in particular, could be used to understand how ecological processes at the stand level scale to landscapes and continents. Through utilization of the large, and now free, Landsat record, recent research has led to the development of robust methods for calculating average phenological patterns at 30-m resolution by stacking two decades worth of data by acquisition day of year (DOY). Here we have extended these techniques to calculate the deviation from the average LSP for any given acquisition DOY-year combination. We model the average LSP as two sigmoid functions, one increasing in spring and a second decreasing in fall, connected by a sloped line representing gradual summer leaf area changes (see Figure). Deviation from the average LSP is considered here to take two forms: (1) residual vegetation cover in mid- to late-summer represents locations in which disturbance, drought, or (alternatively) better than average growing conditions have resulted in a separation (either negative or positive) from the average vegetation cover for that DOY, and (2) climate conditions that result in an earlier or later onset of greenness, exhibited as a separation from the average spring onset of greenness curve in the DOY direction (either early or late).
Our study system for this work is the deciduous forests of the mid-Atlantic, USA, where we show that late summer vegetation cover is tied to edaphic properties governing the site-specific soil moisture balance. Additionally, we show that climatic factors (mostly related to topography) strongly influence the average start of spring. Annual deviations in the start of spring do not always scale linearly, suggesting a spatially complex relationship between climate and the onset of spring. (Figure caption: Model fit for a single pixel of mid-Atlantic deciduous forest. Shades of gray represent the weight each datum has on the model fit (increasing, white to black). Data weights account for variable atmospheric conditions between acquisitions.)
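The average-phenology model described above, two sigmoid curves joined by a sloped summer segment, can be sketched as a simple function of day of year. All parameter values here are invented for illustration, not fitted Landsat values.

```python
# Sketch of a double-sigmoid land surface phenology (LSP) curve: a logistic
# rise in spring, a logistic decline in fall, and a gentle sloped summer
# segment. All parameters are illustrative, not fitted values.
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def lsp(doy, v_min=0.1, v_max=0.9,
        spring_mid=120, spring_rate=0.15,
        fall_mid=290, fall_rate=0.12,
        summer_slope=-0.0002):
    """Modeled vegetation cover as a function of day of year (DOY)."""
    up = logistic(spring_rate * (doy - spring_mid))     # spring green-up
    down = logistic(fall_rate * (fall_mid - doy))       # fall senescence
    core = v_min + (v_max - v_min) * up * down
    if spring_mid < doy < fall_mid:                     # summer leaf decline
        core += summer_slope * (doy - spring_mid)
    return core

winter = lsp(20)    # dormant season, near v_min
spring = lsp(120)   # mid green-up, roughly halfway between min and max
summer = lsp(200)   # near-peak cover
```

A deviation analysis like the paper's would then compare an observed value for a given DOY-year against `lsp(doy)`: vertical residuals flag disturbance or drought, while horizontal shifts of the spring curve flag early or late onset of greenness.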
London, M L; Bernard, J K; Froetschel, M A; Bertrand, J K; Graves, W M
2012-02-01
Three studies were conducted to determine the relationship between dairy heifer growth and placing in the show ring. In the first study, 1,744 commercial dairy heifers (all breeds and crossbred animals) were evaluated to determine effects of growth on placing within Georgia Commercial Dairy Heifer Shows from 2007 to 2010. Birth weights were determined using breed birth weight averages, with crossbreeds being the average of the 2 parent breeds. Average daily gains (ADG) were calculated and heifers were given rankings based on placing in show and for age and weight. Data were analyzed using Spearman correlation calculations in the SAS software (SAS Institute Inc., Cary, NC). Age and ADG were inversely correlated (r=-0.89). Mean ADG for all heifers was determined to be 0.65 kg, below National Research Council recommendations of 0.7 to 0.8 kg. No strong relationship (r=-0.07) was observed between ADG and placing. Heavier heifers within a class showed a small positive relationship (r=0.10) with placing. For study 2, 238 heifers shown at the 2010 Georgia Junior National Livestock Show (Perry, GA) were measured and evaluated for ADG, placing, body weight, age, withers height, hip height, hip width, and jaw width. Height at withers had a moderate relationship (r=0.42) with placing, followed by hip height (r=0.32). A positive relationship (r=0.65) was observed between withers height and hip height. The correlation between weight and placing was determined (r=0.11). Age and ADG had a strong inverse relationship (r=-0.87). Study 3 evaluated 1,489 Holstein heifers shown from 2007 to 2010. Data were analyzed using the Penn State Growth Monitor Spreadsheet Curves. In total, 63.75% did not meet Penn State recommendations for body weight gain. Performance and physical features associated with age indicate that commercial dairy heifers are underfed. The effects of heat stress and high feed costs also play a role. 
This has economic implications because these animals will likely require more time before they enter the milk herd. The Commercial Dairy Heifer Program is vital for youth development in Georgia. However, those involved need to be encouraged to improve nutritional management practices. PMID:22281362
NASA Astrophysics Data System (ADS)
Potempski, S.; Galmarini, S.; Riccio, A.; Giunta, G.
2010-11-01
In this paper, we investigate the applicability of Bayesian model averaging (BMA) to an atmospheric dispersion multimodel ensemble system in the context of emergency response applications. The BMA method can be used both to evaluate model predictions and to combine model results using BMA weighting factors. We analyze the time evolution of BMA weights and include a detailed quantitative comparison of different combinations of model results by means of statistical indicators. The analysis allows us to identify similarities and differences among the combined models. Finally, we question the portability of BMA weights among various cases. From the analysis it follows that BMA can be applied to the problems considered; however, the median of the model results also performs well and produces more conservative results.
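The combination step described above can be sketched in a few lines: a BMA-style combined forecast is a weight-averaged model forecast, while the simple alternative the authors note is the ensemble median. The weights and predictions below are hypothetical, not the study's values.

```python
# Sketch of combining multimodel ensemble predictions with BMA-style weights,
# alongside the median of the models as a robust alternative. All numbers
# are hypothetical.

def bma_combine(predictions, weights):
    """Weighted average of model predictions (weights are normalized here)."""
    total = sum(weights)
    norm = [w / total for w in weights]
    return sum(p * w for p, w in zip(predictions, norm))

def median(xs):
    s = sorted(xs)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else 0.5 * (s[mid - 1] + s[mid])

model_preds = [1.2, 0.9, 3.5, 1.0]   # e.g. a concentration from four models
weights = [0.4, 0.3, 0.1, 0.2]       # BMA weights from past performance

combined = bma_combine(model_preds, weights)
robust = median(model_preds)
```

Note how the outlier model (3.5) pulls the weighted combination up slightly but barely moves the median, which is one reason the median can be the more conservative combiner.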
Gorsevski, Pece V; Donevska, Katerina R; Mitrovski, Cvetko D; Frizado, Joseph P
2012-02-01
This paper presents a GIS-based multi-criteria decision analysis approach for evaluating the suitability for landfill site selection in the Polog Region, Macedonia. The multi-criteria decision framework considers environmental and economic factors which are standardized by fuzzy membership functions and combined by integration of analytical hierarchy process (AHP) and ordered weighted average (OWA) techniques. The AHP is used for the elicitation of attribute weights while the OWA operator function is used to generate a wide range of decision alternatives for addressing uncertainty associated with interaction between multiple criteria. The usefulness of the approach is illustrated by different OWA scenarios that report landfill suitability on a scale between 0 and 1. The OWA scenarios are intended to quantify the level of risk taking (i.e., optimistic, pessimistic, and neutral) and to facilitate a better understanding of patterns that emerge from decision alternatives involved in the decision making process. PMID:22030279
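The ordered weighted average (OWA) operator used in this study sorts the criterion scores before applying position weights, so different weight vectors span the optimistic-to-pessimistic range of scenarios. A minimal sketch with invented scores:

```python
# Sketch of the ordered weighted average (OWA) operator: scores are sorted
# in descending order, then combined with position weights. The scores and
# weight vectors below are illustrative.

def owa(scores, weights):
    ordered = sorted(scores, reverse=True)
    total = sum(weights)
    return sum(s * w for s, w in zip(ordered, [x / total for x in weights]))

criteria = [0.9, 0.4, 0.7]   # standardized suitability scores for one site

optimistic = owa(criteria, [1.0, 0.0, 0.0])   # best score dominates (OR-like)
neutral = owa(criteria, [1.0, 1.0, 1.0])      # plain average
pessimistic = owa(criteria, [0.0, 0.0, 1.0])  # worst score dominates (AND-like)
```

Sweeping the weight vector between these extremes is what generates the range of decision alternatives, and hence risk attitudes, that the landfill-suitability scenarios report.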
Residence in coal-mining areas and low-birth-weight outcomes.
Ahern, Melissa; Mullett, Martha; Mackay, Katherine; Hamilton, Candice
2011-10-01
The objective of this study was to estimate the association between residence in coal mining environments and low birth weight. We conducted a cross-sectional, retrospective analysis of the association between low birth weight and mother's residence in coal mining areas in West Virginia. Birth data were obtained from the West Virginia Birthscore Dataset, 2005-2007 (n = 42,770). Data on coal mining were from the US Department of Energy. Covariates regarding mothers' demographics, behaviors, and insurance coverage were included. We used nested logistic regression (SUDAAN Proc Multilog) to conduct the study. Mothers who were older, unmarried, less educated, smoked, did not receive prenatal care, were on Medicaid, and had recorded medical risks had a greater risk of low birth weight. After controlling for covariates, residence in coal mining areas of West Virginia posed an independent risk of low birth weight. Odds ratios for both unadjusted and adjusted findings suggest a dose-response effect. Adjusted findings show that living in areas with high levels of coal mining elevates the odds of a low-birth-weight infant by 16%, and by 14% in areas with lower mining levels, relative to counties with no coal mining. After covariate adjustment, the persistence of a mining effect on low-birth-weight outcomes suggests an environmental effect resulting from pollution from mining activities. Air and water quality assessments have been largely missing from mining communities, but the need for them is indicated by these findings. PMID:20091110
Mean skin temperature weighted by skin area, heat transfer coefficients and thermal sensitivity
NASA Astrophysics Data System (ADS)
Mochida, T.
1983-07-01
Formulas for calculating the mean skin temperature share a general form: the sum over body regions of the product of the regional skin temperature and a weighting factor for that region. The weighting factors in these formulas were classified into five groups according to their content, and the concrete values were compared. Based on the heat equilibrium between man and his environment, a mean skin temperature formula weighted by skin areas and heat transfer coefficients was derived. With reference to the thermal sensitivity coefficients given, a new formula weighted by three important factors, the skin area, the heat transfer coefficients and the thermal sensitivity, was proposed. Comparison with previously reported formulas shows that the weighting factors of the skin area-heat transfer coefficient formula are similar to those of the Hardy-DuBois formula, and the weighting factors of the formula weighted by skin area, heat transfer coefficients and thermal sensitivity are similar to those of the formula by Nadel et al.
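The general form described above, a sum of regional temperatures times regional weighting factors, can be sketched directly. The regions and weights below are invented for illustration; they are not any of the published formulas compared in the paper.

```python
# Sketch of the general mean-skin-temperature form: sum over regions of
# (regional temperature * weighting factor). Regions and weights below are
# hypothetical, not a published weighting scheme.

def mean_skin_temp(regional_temps, weights):
    assert abs(sum(weights) - 1.0) < 1e-9   # weights must sum to one
    return sum(t * w for t, w in zip(regional_temps, weights))

temps = [34.5, 33.0, 31.5, 30.0]          # e.g. trunk, arms, legs, hands (C)
area_weights = [0.50, 0.20, 0.25, 0.05]   # hypothetical skin-area fractions

t_mean = mean_skin_temp(temps, area_weights)
```

The paper's contribution amounts to choosing the weight vector differently: by skin area alone, by area times heat transfer coefficient, or by area, heat transfer coefficient, and thermal sensitivity together.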
Boosting with Averaged Weight Vectors
NASA Technical Reports Server (NTRS)
Oza, Nikunj C.; Clancy, Daniel (Technical Monitor)
2002-01-01
AdaBoost is a well-known ensemble learning algorithm that constructs its constituent or base models in sequence. A key step in AdaBoost is constructing a distribution over the training examples to create each base model. This distribution, represented as a vector, is constructed to be orthogonal to the vector of mistakes made by the previous base model in the sequence. The idea is to make the next base model's errors uncorrelated with those of the previous model. Some researchers have pointed out the intuition that it is probably better to construct a distribution that is orthogonal to the mistake vectors of all the previous base models, but that this is not always possible. We present an algorithm that attempts to come as close as possible to this goal in an efficient manner. We present experimental results demonstrating significant improvement over AdaBoost and the Totally Corrective boosting algorithm, which also attempts to satisfy this goal.
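The key AdaBoost step the abstract refers to, building the next distribution so it is "orthogonal" to the previous model's mistake vector, can be sketched with the standard update: after reweighting, the previous model's weighted error equals exactly 1/2. This sketch shows plain AdaBoost, not the paper's averaged-weight-vector variant.

```python
# Sketch of the standard AdaBoost distribution update: after the update, the
# previous base model's weighted error is exactly 1/2, i.e. the new
# distribution is orthogonal to the previous mistake vector.
import math

def update_distribution(dist, mistakes):
    """dist: current example weights; mistakes: 1 if misclassified, else 0."""
    eps = sum(d for d, m in zip(dist, mistakes) if m)    # weighted error
    alpha = 0.5 * math.log((1 - eps) / eps)              # model weight
    new = [d * math.exp(alpha if m else -alpha)
           for d, m in zip(dist, mistakes)]
    z = sum(new)                                          # normalizer
    return [d / z for d in new]

dist = [0.25, 0.25, 0.25, 0.25]
mistakes = [1, 0, 0, 0]              # previous model erred on example 0
new_dist = update_distribution(dist, mistakes)

# Weighted error of the previous mistake vector under the new distribution:
new_err = sum(d for d, m in zip(new_dist, mistakes) if m)   # equals 0.5
```

The paper's observation is that this guarantee holds only for the most recent model; earlier models' mistake vectors drift away from error 1/2, which motivates constructing a distribution close to orthogonal to all of them.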
Larsen, Inge; Hjulsager, Charlotte Kristiane; Holm, Anders; Olsen, John Elmerdahl; Nielsen, Søren Saxmose; Nielsen, Jens Peter
2016-01-01
Oral treatment with antimicrobials is widely used in pig production for the control of gastrointestinal infections. Lawsonia intracellularis (LI) causes enteritis in pigs older than six weeks of age and is commonly treated with antimicrobials. The objective of this study was to evaluate the efficacy of three oral dosage regimens (5, 10 and 20 mg/kg body weight) of oxytetracycline (OTC) in drinking water over a five-day period on diarrhoea, faecal shedding of LI and average daily weight gain (ADG). A randomised clinical trial was carried out in four Danish pig herds. In total, 539 animals from 37 batches of nursery pigs were included in the study. The dosage regimens were randomly allocated to each batch and initiated upon presence of assumed LI-related diarrhoea. In general, all OTC doses used for the treatment of LI infection resulted in reduced diarrhoea and LI shedding after treatment. Treatment with a low dose of 5 mg OTC per kg body weight, however, tended to cause more watery faeces and resulted in higher odds of pigs shedding LI above detection level when compared to medium and high doses (with odds ratios of 5.5 and 8.4, respectively). No association was found between the dose of OTC and the ADG. In conclusion, a dose of 5 mg OTC per kg body weight was adequate for reducing the high-level LI shedding associated with enteropathy, but a dose of 10 mg OTC per kg body weight was necessary to obtain a maximum reduction in LI shedding. PMID:26718056
NASA Astrophysics Data System (ADS)
Ghiglieri, Giorgio; Carletti, Alberto; Pittalis, Daniele
2014-11-01
Runoff estimation and water budget in ungauged basins is a challenge for hydrological researchers and planners. The principal aim of this study was the application and validation of the Kennessey method, which is a physiography-based indirect process for determining the average annual runoff coefficient and the basin-scale water balance. The coefficient can be calculated using specific physiographic characteristics (slope, permeability and vegetation cover) and a parameter that defines climatic conditions and does not require instrumental data. One of the main purposes of this study was to compare the average annual runoff coefficient obtained using the Kennessey method with the coefficients calculated using data from 30 instrumented drainage basins in Sardinia (Italy) over 71 years (from 1922 to 1992). These measurements represent an important and complete historical dataset from the study area. Using the runoff coefficient map, the method was also applied to assess the effective annual recharge rate of the aquifers of the Calich hydrogeological basin in the Nurra Plain (Alghero, NW Sardinia-Italy). The groundwater recharge rate was compared with rates calculated using the standard water balance method. The implementation of the method at the regional and basin scales was supported by GIS analyses. The results of the method are promising but show some discrepancies with other methodologies due to the higher weights given to the physiographic parameters than to the meteorological parameters. However, even though the weights assigned to the parameters require improvements, the Kennessey method is a useful tool for evaluating hydrologic processes, particularly for water management in areas where instrumental data are not available.
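The Kennessey approach sketched below builds the annual runoff coefficient as the sum of three physiographic component coefficients (slope, permeability, vegetation cover), each normally chosen from class tables conditioned on an aridity/climate parameter. The component values here are invented for illustration; they are not the tabulated Kennessey classes or the Sardinian results.

```python
# Sketch of the Kennessey idea: annual runoff coefficient = sum of three
# physiographic component coefficients (slope, permeability, vegetation),
# with each component's value depending on a climatic/aridity parameter.
# All numeric values below are hypothetical.

def kennessey_coefficient(slope_c, permeability_c, vegetation_c):
    """Annual runoff coefficient Ca as the sum of three components."""
    return slope_c + permeability_c + vegetation_c

# Hypothetical component coefficients for a semi-arid basin.
ca = kennessey_coefficient(slope_c=0.15, permeability_c=0.10,
                           vegetation_c=0.08)

def annual_runoff_mm(precip_mm, coeff):
    """Runoff depth (mm/yr) from annual precipitation and the coefficient."""
    return precip_mm * coeff

runoff = annual_runoff_mm(600.0, ca)
```

In a GIS implementation like the study's, each raster cell gets its own three component values from slope, permeability, and land cover layers, and the resulting coefficient map feeds the basin-scale water balance and recharge estimates.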
Meyer, Philip D.; Ye, Ming; Neuman, Shlomo P.; Rockhold, Mark L.
2008-06-01
A methodology to systematically and quantitatively assess model predictive uncertainty was applied to saturated zone uranium transport at the 300 Area of the U.S. Department of Energy Hanford Site in Washington State, USA. The methodology extends Maximum Likelihood Bayesian Model Averaging (MLBMA) to account jointly for uncertainties due to the conceptual-mathematical basis of models, model parameters, and the scenarios to which the models are applied. Conceptual uncertainty was represented by postulating four alternative models of hydrogeology and uranium adsorption. Parameter uncertainties were represented by estimation covariances resulting from the joint calibration of each model to observed heads and uranium concentration. Posterior model probability was dominated by one model. Results demonstrated the role of model complexity and fidelity to observed system behavior in determining model probabilities, as well as the impact of prior information. Two scenarios representing alternative future behavior of the Columbia River adjacent to the site were considered. Predictive simulations carried out with the calibrated models illustrated the computation of model- and scenario-averaged predictions and how results can be displayed to clearly indicate the individual contributions to predictive uncertainty of the model, parameter, and scenario uncertainties. The application demonstrated the practicability of applying a comprehensive uncertainty assessment to large-scale, detailed groundwater flow and transport modelling.
Inducing Conservation of Number, Weight, Volume, Area, and Mass in Pre-School Children.
ERIC Educational Resources Information Center
Young, Beverly S.
The major question this study attempted to answer was, "Can conservation of number, area, weight, mass, and volume be induced and retained by 3- and 4-year-old children through structured instruction with a multivariate approach?" Three nursery schools in Iowa City supplied subjects for this study. The Institute of Child Behavior and Development…
NASA Astrophysics Data System (ADS)
Fonseca, Julio; Del-Castillo-Negrete, Diego; Caldas, Ibere
2014-10-01
Area preserving maps have been extensively used to model 2-dimensional chaotic transport in plasmas and fluids. Here we focus on three types of area preserving maps describing E×B chaotic transport in magnetized plasmas with zonal flows perturbed by electrostatic drift waves. We include finite Larmor radius (FLR) effects by gyro-averaging the corresponding Hamiltonians of the maps. The Hamiltonians have frequencies with monotonic and non-monotonic profiles. In the limit of zero Larmor radius, the monotonic frequency map reduces to the standard Chirikov-Taylor map, and, in the case of non-monotonic frequency, the map reduces to the standard nontwist map. We show that FLR leads to chaos suppression, modifies the stability of fixed points, and changes the robustness of transport barriers. FLR effects also modify the phase space topology and give rise to bifurcations of the zonal flow E×B velocity profile. Dynamical systems methods based on recurrence time statistics are used to quantify the dependence on the Larmor radius of the threshold for the destruction of transport barriers.
NASA Astrophysics Data System (ADS)
Meyer, P. D.; Ye, M.; Neuman, S. P.; Rockhold, M. L.
2006-12-01
Applications of groundwater flow and transport models to regulatory and design problems have illustrated the potential importance of accounting for uncertainties in model conceptualization and structure as well as model parameters. One approach to this issue is to characterize model uncertainty using a discrete set of alternatives and assess the prediction uncertainty arising from the joint impact of model and parameter uncertainty. We demonstrate the application of this approach to the modeling of groundwater flow and uranium transport at the 300 Area of the Dept. of Energy Hanford Site in Washington State using the recently developed Maximum Likelihood Bayesian Model Averaging (MLBMA) method. Model uncertainty was included using alternative representations of the hydrogeologic units at the 300 Area and alternative representations of uranium adsorption. Parameter uncertainties for each model were based on the estimated parameter covariances resulting from the joint calibration of each model alternative to observations of hydraulic head and uranium concentration. The relative plausibility of each calibrated model was expressed in terms of a posterior model probability computed on the basis of Kashyap's information criterion KIC. Results of the application show that model uncertainty may dominate parameter uncertainty for the set of alternative models considered. We discuss the sensitivity of model probabilities to differences in KIC values and examine the effect of particular calibration data on model probabilities. In addition, we discuss the advantages of KIC over other model discrimination criteria for estimating model probabilities.
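The posterior model probabilities referred to in both MLBMA abstracts are commonly computed from the information criterion values: each model's probability is proportional to its prior times exp(-KIC/2), evaluated relative to the minimum KIC for numerical stability. The KIC values below are made up for illustration.

```python
# Sketch of converting information-criterion values (e.g. KIC) into posterior
# model probabilities, as in maximum likelihood Bayesian model averaging:
# p_k is proportional to prior_k * exp(-0.5 * (KIC_k - KIC_min)).
# The KIC values below are hypothetical.
import math

def model_probabilities(kic_values, priors=None):
    n = len(kic_values)
    priors = priors or [1.0 / n] * n          # uniform prior by default
    kmin = min(kic_values)
    raw = [p * math.exp(-0.5 * (k - kmin))
           for k, p in zip(kic_values, priors)]
    z = sum(raw)
    return [r / z for r in raw]

probs = model_probabilities([100.0, 104.0, 112.0, 130.0])
```

Because the probabilities depend exponentially on KIC differences, even modest gaps (a few units) concentrate nearly all posterior weight on one model, which matches the sensitivity to KIC differences the abstract discusses.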
Mapping Human Cortical Areas in vivo Based on Myelin Content as Revealed by T1- and T2-weighted MRI
Glasser, Matthew F.; Van Essen, David C.
2011-01-01
Non-invasively mapping the layout of cortical areas in humans is a continuing challenge for neuroscience. We present a new method of mapping cortical areas based on myelin content as revealed by T1-weighted (T1w) and T2-weighted (T2w) MRI. The method is generalizable across different 3T scanners and pulse sequences. We use the ratio of T1w/T2w image intensities to eliminate the MR-related image intensity bias and enhance the contrast-to-noise ratio for myelin. Data from each subject were mapped to the cortical surface and aligned across individuals using surface-based registration. The spatial gradient of the group average myelin map provides an observer-independent measure of sharp transitions in myelin content across the surface, i.e., putative cortical areal borders. We found excellent agreement between the gradients of the myelin maps and the gradients of published probabilistic cytoarchitectonically defined cortical areas that were registered to the same surface-based atlas. For other cortical regions, we used published anatomical and functional information to make putative identifications of dozens of cortical areas or candidate areas. In general, primary and early unimodal association cortices are heavily myelinated and higher, multi-modal association cortices are more lightly myelinated, but there are notable exceptions in the literature that are confirmed by our results. The overall pattern in the myelin maps also has important correlations with the developmental onset of subcortical white matter myelination, evolutionary cortical areal expansion in humans compared to macaques, postnatal cortical expansion in humans, and maps of neuronal density in non-human primates. PMID:21832190
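The core arithmetic of the method, forming the voxel-wise T1w/T2w ratio so that a multiplicative bias field common to both images cancels, can be sketched with toy 2x2 "images" (not real MRI data):

```python
def myelin_ratio(t1w, t2w, eps=1e-6):
    """Voxel-wise T1w/T2w ratio for 2-D images given as nested lists.
    A multiplicative bias field shared by both images divides out, while
    myelin contrast (bright in T1w, dark in T2w) is enhanced."""
    return [[a / max(b, eps) for a, b in zip(r1, r2)]
            for r1, r2 in zip(t1w, t2w)]

# Toy images: the same bias field b multiplies both acquisitions,
# so it cancels in the ratio.
b = [[1.0, 1.2], [0.8, 1.1]]
true_t1 = [[2.0, 3.0], [1.5, 2.5]]
true_t2 = [[1.0, 1.0], [0.5, 1.0]]
t1 = [[b[i][j] * true_t1[i][j] for j in range(2)] for i in range(2)]
t2 = [[b[i][j] * true_t2[i][j] for j in range(2)] for i in range(2)]
ratio = myelin_ratio(t1, t2)
```

The resulting ratio equals the bias-free ratio true_t1/true_t2 at every voxel, which is the sense in which the ratio "eliminates the MR-related image intensity bias".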
Mazzella, Nicolas; Debenest, Timothée; Delmas, François
2008-09-01
Polar organic chemical integrative samplers (POCIS) were exposed for 9 days in two different microcosms that contained river waters spiked with deethylterbuthylazine, terbuthylazine and isoproturon. The experiment was performed with natural light and strong turbulence (flow velocities of about 15-50 cm s^-1) to reproduce natural conditions. The concentrations were kept relatively constant in the first microcosm (2.6-3.6 µg l^-1) and were variable in the second microcosm (peak concentrations ranged from 15 to 24 µg l^-1 during the 3-day pulse phase). The time-weighted average (TWA) concentrations were determined with both POCIS and repetitive grab sampling followed by solid-phase extraction. The results showed a systematic and significant overestimation of the TWA concentrations with the POCIS, most probably due to the use of sampling rates derived under a low-flow scenario. The results also showed that peak concentrations of pollutants are fully integrated by this passive sampler. Even if the POCIS cannot provide very accurate concentration estimates without the application of adequate sampling rate values or the use of performance reference compounds, it can be a very useful tool for detecting episodic or short-term pollution events (e.g. increased herbicide concentrations during a flood), which may be missed with classical, low-frequency grab sampling. PMID:18649919
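The TWA concentration recovered from an integrative passive sampler follows C_TWA = M / (Rs * t). The numbers below are illustrative, not from the study; the overestimation reported above corresponds to applying a sampling rate Rs calibrated at low flow that is smaller than the true in-situ rate:

```python
def twa_concentration(mass_ng, rs_l_per_day, days):
    """Time-weighted average concentration (ng/L) from a passive sampler:
    C_TWA = M / (Rs * t), valid while uptake remains in the linear regime."""
    return mass_ng / (rs_l_per_day * days)

# Illustrative: 243 ng accumulated over a 9-day deployment with Rs = 0.2 L/day.
c_with_insitu_rs = twa_concentration(243.0, 0.2, 9.0)
# Using a too-small, low-flow-calibrated Rs inflates the estimate:
c_with_lowflow_rs = twa_concentration(243.0, 0.1, 9.0)
```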
NASA Astrophysics Data System (ADS)
Naik, Haladhara; Kim, Guinyun; Kim, Kwangsoo; Zaman, Muhammad; Goswami, Ashok; Lee, Man Woo; Yang, Sung-Chul; Lee, Young-Ouk; Shin, Sung-Gyun; Cho, Moo-Hyun
2016-04-01
Photo-neutron cross sections of 197Au were experimentally determined for the bremsstrahlung end-point energies of 50, 60, and 70 MeV, by utilizing an activation and off-line γ-ray spectrometric technique, using the 100 MeV electron linac at the Pohang Accelerator Laboratory (PAL), Pohang, Korea. The 197Au(γ, xn; x = 1-6) reaction cross sections were calculated as a function of the bombarding photon energy by using the TALYS 1.6 computer code with default parameters. The flux-weighted average cross sections were obtained from the literature data and the theoretical values of TALYS 1.6 and TENDL-2014, for mono-energetic photons, and are found to be in good agreement with the present data. Isomeric yield ratios of 196m2,gAu from the 197Au(γ, n) reaction were also determined for the bremsstrahlung end-point energies of 50, 60, and 70 MeV, from the reaction cross sections of the m2- and g-states, based on the present experimental data, and are found to be in good agreement with the theoretical values based on TALYS 1.6 and TENDL-2014.
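A flux-weighted average cross section is simply the mean of the energy-dependent cross section weighted by the photon flux. A minimal sketch on a discrete energy grid, with hypothetical numbers rather than the measured 197Au values:

```python
def flux_weighted_xs(sigma, flux):
    """Flux-weighted average cross section on a common energy grid:
    <sigma> = sum(sigma_i * phi_i) / sum(phi_i)."""
    if len(sigma) != len(flux):
        raise ValueError("sigma and flux must be on the same energy grid")
    return sum(s * f for s, f in zip(sigma, flux)) / sum(flux)

# Hypothetical cross sections (mb) paired with a bremsstrahlung-like flux
# that falls off toward the end-point energy:
sigma = [520.0, 310.0, 120.0, 40.0]
flux = [1.0, 0.5, 0.25, 0.125]
avg = flux_weighted_xs(sigma, flux)
```

Because bremsstrahlung flux is largest at low photon energies, the weighted average sits well above the plain arithmetic mean of the cross sections.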
Anisotropic Step, Surface Contact, and Area Weighted Directed Walks on the Triangular Lattice
NASA Astrophysics Data System (ADS)
Oppenheim, A. C.; Brak, R.; Owczarek, A. L.
We present results for the generating functions of single fully-directed walks on the triangular lattice, enumerated according to each type of step and weighted proportional to the area between the walk and the surface of a half-plane (wall), and the number of contacts made with the wall. We also give explicit formulae for total area generating functions, that is, when the area is summed over all configurations with a given perimeter, and the generating function of the moments of heights above the wall (the first of which is the total area). These results generalise and summarise nearly all known results on the square lattice: all the square lattice results can be obtained by setting one of the step weights to zero. Our results also contain as special cases those that already exist for the triangular lattice. In deriving some of the new results we utilise the Enumerating Combinatorial Objects (ECO) and marked area methods of combinatorics for obtaining functional equations in the most general cases. In several cases we give our results both in terms of ratios of infinite q-series and as continued fractions.
Baimatova, Nassiba; Koziel, Jacek A; Kenessov, Bulat
2015-05-11
A new and simple method for benzene, toluene, ethylbenzene and o-xylene (BTEX) quantification in vehicle exhaust was developed based on diffusion-controlled extraction onto a retracted solid-phase microextraction (SPME) fiber coating. The rationale was to develop a method based on existing and proven SPME technology that is feasible for field adaptation in developing countries. Passive sampling with the SPME fiber retracted into the needle extracted nearly two orders of magnitude less mass (n) compared with the exposed fiber (outside the needle), and sampling was in a time-weighted averaging (TWA) mode. Both the sampling time (t) and fiber retraction depth (Z) were adjusted to quantify a wider range of Cgas. Extraction and quantification are conducted in a non-equilibrium mode. Effects of Cgas, t, Z and T were tested. In addition, the contribution of n extracted by the metallic surfaces of the needle assembly without the SPME coating was studied. Effects of sample storage time on n loss were studied. Retracted TWA-SPME extractions followed the theoretical model. Extracted n of BTEX was proportional to Cgas, t, Dg and T, and inversely proportional to Z. Method detection limits were 1.8, 2.7, 2.1 and 5.2 mg m^-3 (0.51, 0.83, 0.66 and 1.62 ppm) for BTEX, respectively. The contribution of extraction onto metallic surfaces was reproducible and influenced by Cgas and t, and less so by T and Z. The new method was applied to measure BTEX in the exhaust gas of a 1995 Ford Crown Victoria and compared with a whole gas and direct injection method. PMID:25911428
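The theoretical model for a retracted-fiber TWA sampler is Fick's first law of diffusion along the needle gap: n = Dg * A * Cgas * t / Z, matching the proportionalities listed above. A sketch with hypothetical parameter values (units noted in comments, not taken from the paper):

```python
def twa_mass_uptake(c_gas, t, d_g, area, z):
    """Mass extracted by a retracted SPME fiber (Fick's first law):
    n = Dg * A * Cgas * t / Z. Mass grows with Cgas, t and Dg (hence
    with temperature) and shrinks as the retraction depth Z grows."""
    return d_g * area * c_gas * t / z

# Hypothetical values: Dg in cm2/s, A in cm2, Cgas in ng/cm3, t in s, Z in cm.
n = twa_mass_uptake(c_gas=5.0, t=600.0, d_g=0.088, area=0.0008, z=0.3)
```

Adjusting t and Z, as described above, rescales n so that a wider range of Cgas stays within the quantifiable (linear, non-equilibrium) regime.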
Kristensen, Charlotte Sonne; Baadsgaard, Niels Peter; Toft, Nils
2011-03-01
The aim of this investigation was, through a meta-analysis, to review the published literature concerning the effect of PCV2 vaccination on the average daily weight gain (ADG) and on the mortality rate in pigs from weaning to slaughter. The review was restricted to studies investigating the effect of vaccines against PCV2 published from 2006 to 2008, identified using computerised literature databases. Only studies that met the following criteria were included: commercial vaccines were used, pigs or pens were assigned randomly to vaccination versus control groups in herds naturally infected with PCV2, and vaccinated and non-vaccinated pigs were housed together. Furthermore, it was a requirement that sample size, age at vaccination, and production period were stated. The levels of ADG and mortality rate had to be comparable to those seen in modern intensive swine production. In total, 107 studies were identified; 70 were excluded because they did not fulfil the inclusion criteria and 13 were identical to results published elsewhere. A significant effect of PCV2 vaccination on ADG was found for pigs in all production phases. The largest increase in ADG was found for finishing pigs (41.5g) and nursery-finishing pigs (33.6g) with only 10.6g increase in the nursery pigs. Mortality rate was significantly reduced for finishing pigs (4.4%) and nursery-finishing pigs (5.4%), but not for nursery pigs (0.25%). Herds negative for PRRS had a significantly larger increase in ADG compared to herds positive for PRRS. The PRRS status had no effect on mortality rate. PMID:21239076
Shmool, Jessie L C; Bobb, Jennifer F; Ito, Kazuhiko; Elston, Beth; Savitz, David A; Ross, Zev; Matte, Thomas D; Johnson, Sarah; Dominici, Francesca; Clougherty, Jane E
2015-10-01
Numerous studies have linked air pollution with adverse birth outcomes, but relatively few have examined differential associations across the socioeconomic gradient. To evaluate interaction effects of gestational nitrogen dioxide (NO2) and area-level socioeconomic deprivation on fetal growth, we used: (1) highly spatially-resolved air pollution data from the New York City Community Air Survey (NYCCAS); and (2) spatially-stratified principal component analysis of census variables previously associated with birth outcomes to define area-level deprivation. New York City (NYC) hospital birth records for years 2008-2010 were restricted to full-term, singleton births to non-smoking mothers (n=243,853). We used generalized additive mixed models to examine the potentially non-linear interaction of NO2 and deprivation categories on birth weight (and estimated linear associations, for comparison), adjusting for individual-level socio-demographic characteristics, with sensitivity tests adjusting for co-pollutant exposures. Estimated NO2 exposures were highest, and most varying, among mothers residing in the most-affluent census tracts, and lowest among mothers residing in mid-range deprivation tracts. In non-linear models, we found an inverse association between NO2 and birth weight in the least-deprived and most-deprived areas (p-values<0.001 and 0.05, respectively) but no association in the mid-range of deprivation (p=0.8). Likewise, in linear models, a 10 ppb increase in NO2 was associated with a decrease in birth weight among mothers in the least-deprived and most-deprived areas of -16.2 g (95% CI: -21.9 to -10.5) and -11.0 g (95% CI: -22.8 to 0.9), respectively, and a non-significant change in the mid-range areas [β=0.5 g (95% CI: -7.7 to 8.7)]. Linear slopes in the most- and least-deprived quartiles differed from the mid-range (reference group) (p-values<0.001 and 0.09, respectively). 
The complex patterning in air pollution exposure and deprivation in NYC, however, precludes simple interpretation of interactive effects on birth weight, and highlights the importance of considering differential distributions of air pollution concentrations, and potential differences in susceptibility, across deprivation levels. PMID:26318257
Costs Associated with Low Birth Weight in a Rural Area of Southern Mozambique
Sicuri, Elisa; Bardají, Azucena; Sigauque, Betuel; Maixenchs, Maria; Nhacolo, Ariel; Nhalungo, Delino; Macete, Eusebio; Alonso, Pedro L.; Menéndez, Clara
2011-01-01
Background: Low Birth Weight (LBW) is prevalent in low-income countries. Even though the economic evaluation of interventions to reduce this burden is essential to guide health policies, data on costs associated with LBW are scarce. This study aims to estimate the costs to the health system and to the household and the Disability Adjusted Life Years (DALYs) arising from infant deaths associated with LBW in Southern Mozambique. Methods and Findings: Costs incurred by the households were collected through exit surveys. Health system costs were gathered from data obtained onsite and from published information. DALYs due to death of LBW babies were based on local estimates of prevalence of LBW (12%), very low birth weight (VLBW) (1%) and of case fatality rates compared to non-LBW babies [for LBW (12%) and VLBW (80%)]. Costs associated with LBW excess morbidity were calculated from the incremental number of hospital admissions in LBW babies compared to non-LBW babies. Direct and indirect household costs for routine health care were 24.12 US$ (CI 95% 21.51; 26.26). An increase in birth weight of 100 grams would lead to a 53% decrease in these costs. Direct and indirect household costs for hospital admissions were 8.50 US$ (CI 95% 6.33; 10.72). Of the 3,322 live births that occurred in one year in the study area, health system costs associated with LBW (routine health care and excess morbidity) and DALYs were 169,957.61 US$ (CI 95% 144,900.00; 195,500.00) and 2,746.06, respectively. Conclusions: This first cost evaluation of LBW in a low-income country shows that reducing the prevalence of LBW would translate into important cost savings to the health system and the household. These results are of relevance for similar settings and should serve to promote interventions aimed at improving maternal care. PMID:22174885
MPWide: a light-weight library for efficient message passing over wide area networks
NASA Astrophysics Data System (ADS)
Groen, D.; Rieder, S.; Portegies Zwart, S.
2013-12-01
We present MPWide, a lightweight communication library which allows efficient message passing over a distributed network. MPWide has been designed to connect applications running on distributed (super)computing resources, and to maximize communication performance on wide area networks for those without administrative privileges. It can be used to provide message passing between applications, move files, and make very fast connections in client-server environments. MPWide has already been applied to enable distributed cosmological simulations across up to four supercomputers on two continents, and to couple two different blood flow simulations to form a multiscale simulation.
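MPWide's own API is not shown in this abstract, so as a generic illustration of the underlying idea only, here is a minimal length-prefixed message-passing sketch over a local socket pair standing in for a wide-area link (not MPWide code):

```python
import socket
import struct

def send_msg(sock, payload):
    """Send one message: a 4-byte big-endian length header, then the payload."""
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def _recv_exact(sock, n):
    """Read exactly n bytes, looping until the full amount arrives."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection")
        buf += chunk
    return buf

def recv_msg(sock):
    """Receive one length-prefixed message (blocking)."""
    (size,) = struct.unpack("!I", _recv_exact(sock, 4))
    return _recv_exact(sock, size)

# Demonstrate with a connected pair of local sockets:
a, b = socket.socketpair()
send_msg(a, b"hello over the wide area")
received = recv_msg(b)
a.close()
b.close()
```

Length-prefixed framing over a stream socket is a common building block for this kind of library; real WAN-oriented libraries additionally tune buffer sizes and use parallel streams to fill long fat pipes.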
Krpálková, L; Cabrera, V E; Kvapilík, J; Burdych, J; Crump, P
2014-10-01
The objective of this study was to evaluate, on 33 commercial dairy herds including 23,008 cows and 18,139 heifers, the associations of heifer-rearing intensity (age at first calving, AFC; average daily weight gain, ADG; and milk yield, MY, level) with reproduction traits and profitability. Milk yield during the production period was analyzed relative to reproduction and economic parameters. Data were collected during a 1-yr period (2011). The farms were located in 12 regions in the Czech Republic. The results show that those herds with more intensive rearing periods had lower conception rates among heifers at first and overall services. The differences in those conception rates between the group with the greatest ADG (≥0.800 kg/d) and the group with the least ADG (≤0.699 kg/d) were approximately 10 percentage points in favor of the least ADG. All the evaluated reproduction traits differed between AFC groups. Conception at first and overall services (cows) was greatest in herds with AFC ≥800 d. The shortest days open (105 d) and calving interval (396 d) were found in the middle AFC group (799 to 750 d). The highest number of completed lactations (2.67) was observed in the group with latest AFC (≥800 d). The earliest AFC group (≤749 d) was characterized by the highest depreciation costs per cow at 8,275 Czech crowns (US$414), and the highest culling rate for cows of 41%. The most profitable rearing approach was reflected in the middle AFC (799 to 750 d) and middle ADG (0.799 to 0.700 kg) groups. The highest MY (≥8,500 kg) occurred with the earliest AFC of 780 d. Higher MY led to lower conception rates in cows, but the highest MY group also had the shortest days open (106 d) and a calving interval of 386 d. The same MY group had the highest cow depreciation costs, net profit, and profitability without subsidies of 2.67%. 
We conclude that achieving low AFC will not always be the most profitable approach, which will depend upon farm-specific herd management. The MY is a very important factor for dairy farm profitability. The group of farms having the highest MY achieved the highest net profit despite having greater fertility problems. PMID:25064657
White, R R; Capper, J L
2013-12-01
The objective of this study was to assess environmental impact, economic viability, and social acceptability of 3 beef production systems with differing levels of efficiency. A deterministic model of U.S. beef production was used to predict the number of animals required to produce 1 × 10(9) kg HCW beef. Three production treatments were compared: 1 representing average U.S. production (control), 1 with a 15% increase in ADG, and 1 with a 15% increase in finishing weight (FW). For each treatment, various socioeconomic scenarios were compared to account for uncertainty in producer and consumer behavior. Environmental impact metrics included feed consumption, land use, water use, greenhouse gas emissions (GHGe), and N and P excretion. Feed cost, animal purchase cost, animal sales revenue, and income over costs (IOVC) were used as metrics of economic viability. Willingness to pay (WTP) was used to identify improvements or reductions in social acceptability. When ADG improved, feedstuff consumption, land use, and water use decreased by 6.4%, 3.2%, and 12.3%, respectively, compared with the control. Carbon footprint decreased 11.7% and N and P excretion were reduced by 4% and 13.8%, respectively. When FW improved, decreases were seen in feedstuff consumption (12.1%), water use (9.2%), and land use (15.5%); total GHGe decreased 14.7%; and N and P excretion decreased by 10.1% and 17.2%, compared with the control. Changes in IOVC were dependent on socioeconomic scenario. When the ADG scenario was compared with the control, changes in sector profitability ranged from 51 to 117% (cow-calf), -38 to 157% (stocker), and 37 to 134% (feedlot). When improved FW was compared, changes in cow-calf profit ranged from 67% to 143%, stocker profit ranged from -41% to 155%, and feedlot profit ranged from 37% to 136%. When WTP was based on marketing beef being more efficiently produced, WTP improved by 10%; thus, social acceptability increased. 
When marketing was based on production efficiency and consumer knowledge of growth-enhancing technology use, WTP decreased by 12%, leading to a decrease in social acceptability. Results demonstrated that improved efficiency also improved environmental impact, but impacts on economic viability and social acceptability are highly dependent on consumer and producer behavioral responses to efficiency improvements. PMID:24146151
NASA Astrophysics Data System (ADS)
Hu, X.; Waller, L.; Liu, Y.
2010-12-01
Using remote sensing data to study the characteristics of PM2.5 (particles smaller than 2.5 µm in size), especially in areas not covered by ground monitoring networks, has attracted much interest due to the multiple health outcomes related to PM2.5 exposure. To accurately predict PM2.5 exposure, successfully modeling the relationship between PM2.5 concentration and aerosol optical thickness (AOT), as well as other environmental parameters, is crucial. Most currently reported models are global methods that do not consider local variations, which might introduce significant errors into prediction results. In this paper, a geographically weighted regression (GWR) model was developed to model the relationship among PM2.5, AOT, and meteorological parameters such as mixing height, surface air temperature, relative humidity, and surface wind speed. GWR estimates local rather than global parameters as a function of geographical location, and all coefficients vary geographically to indicate the spatial variation. The study area is centered on the Atlanta metropolitan area, and data from 2001 to 2007 were collected from various sources. After developing the model, cross-validation techniques were implemented to assess the accuracy of our model. The results indicated that GWR, due to its ability to explain local variations, has the potential to generate a better fit and can provide a promising alternative for PM2.5 exposure estimation.
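The local estimation step of GWR amounts to kernel-weighted least squares at each target location. The sketch below uses a single predictor and hypothetical data in which PM2.5 depends exactly linearly on AOT, so the local fit recovers that line whatever the spatial weights are:

```python
import math

def gwr_fit(x, y, coords, target, bandwidth):
    """Local (intercept, slope) at `target` from Gaussian-kernel weighted
    least squares: w_i = exp(-d_i^2 / (2 * bandwidth^2)), so observations
    near the target location dominate the local fit."""
    tx, ty = target
    w = [math.exp(-0.5 * math.hypot(cx - tx, cy - ty) ** 2 / bandwidth ** 2)
         for cx, cy in coords]
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    sxx = sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x))
    sxy = sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y))
    slope = sxy / sxx
    return ybar - slope * xbar, slope

# Hypothetical monitoring sites where pm25 = 2 + 3 * aot holds exactly:
aot = [0.1, 0.2, 0.3, 0.4]
pm25 = [2.0 + 3.0 * a for a in aot]
coords = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
intercept, slope = gwr_fit(aot, pm25, coords, target=(0.5, 0.5), bandwidth=1.0)
```

In a full GWR model, this fit is repeated at every prediction location (with additional meteorological predictors), which is what lets the coefficients vary geographically.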
NASA Astrophysics Data System (ADS)
Kumazawa, Shinsuke; Kato, Takeyoshi; Honda, Nobuyuki; Koaizawa, Masakazu; Nishino, Shinichi; Suzuoki, Yasuo
Based on past studies of insolation fluctuation, the smoothing effect of insolation among different locations is not sufficient for fluctuation cycles longer than a few tens of minutes. This study evaluated the maximum fluctuation width (MFW), within windows of at most 120 min, of the ensemble-average insolation of 40 points, of its clearness index, and of the ensemble-average insolation excluding the sun-position-dependent component. As a result, when the weather condition worsened after noon over almost the whole area, the ensemble-average insolation dropped significantly, resulting in an MFW of 540 W/m2 within 120 min. As another example, when the weather recovered during the morning in many areas, the MFW was also large. Using data observed over 6 months, this study calculated the cumulative frequency distribution of the MFW of the ensemble-average insolation, of its clearness index, and of the ensemble-average insolation excluding the sun-position-dependent component. The absolute value of the MFW of ensemble-average insolation calculated with a 120 min window ranges mainly between 200 and 300 W/m2. The absolute value of the MFW of insolation excluding the sun-position-dependent component evaluated with a 120 min window is smaller than 200 W/m2 on most days, and is not very different from the MFW evaluated with a 60 min window. Finally, this study discussed the practical usability of insolation forecasts.
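A maximum fluctuation width of this kind can be computed as the largest max-minus-min over a sliding window. The sample values below are illustrative 10-min averages (so 12 samples span 120 min), not observed data:

```python
def max_fluctuation_width(series, window):
    """Largest (max - min) found inside any contiguous window of the
    given length: the maximum fluctuation width (MFW)."""
    if not 1 <= window <= len(series):
        raise ValueError("window must be between 1 and len(series)")
    return max(max(series[i:i + window]) - min(series[i:i + window])
               for i in range(len(series) - window + 1))

# Illustrative ensemble-average insolation samples in W/m2 (10-min averages):
insolation = [800, 780, 760, 500, 450, 300, 280, 260, 500, 700, 750, 760]
mfw_120 = max_fluctuation_width(insolation, window=12)  # 120 min window
mfw_60 = max_fluctuation_width(insolation, window=6)    # 60 min window
```

A longer window can only keep or enlarge the width, so MFW is non-decreasing in the window length, consistent with the 60 min and 120 min comparison above.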
NASA Astrophysics Data System (ADS)
Scherf, A.; Roth, R.
1996-12-01
During the field campaign of EFEDA II, several aircraft measurements were performed in order to evaluate area-mean values of turbulent energy fluxes over relatively flat terrain in a desertification-threatened area in Spain. Since earlier field experiments indicated differences between airborne measurements and surface observations, we tried to close the gap by carefully analysing the turbulence measurements. In order to evaluate the influence of the temporal variation of the convective boundary layer, the rise of the inversion, derived from simultaneously performed radiosonde ascents, was taken into account. By estimating the linearly approximated fields of the meteorological parameters, it was possible to calculate the mean values of these quantities as well as the temporal and spatial derivatives, which are necessary for the evaluation of the advective terms of the energy budget. In this way it is possible to examine the terms of the conservation equations in a supplementary way.
Roberts, Graham J; McDonald, Fraser; Neil, Monica; Lucas, Victoria S
2014-08-01
The mathematical principle of weighting averages to determine the most appropriate numerical outcome is well established in economic and social studies. It has seen little application in forensic dentistry. This study re-evaluated the data from a previous study of age assessment at the 10 year threshold. A semiautomatic process of weighting averages by n-td, x-tds, sd-tds, se-tds, 1/sd-tds, 1/se-tds was prepared in an Excel worksheet and the different weighted mean values reported. In addition the Fixed Effects and Random Effects models for Meta-Analysis were used and applied to the same data sets. In conclusion it has been shown that the most accurate age estimation method is to use the Random Effects Model for the mathematical procedures. PMID:25066175
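As an illustration of weighting averages, a fixed-effect pooled estimate weights each study estimate by the inverse of its squared standard error; a random-effects model additionally adds a between-study variance to each se^2 before inverting (not shown). The age estimates below are hypothetical, not the study's data:

```python
def weighted_mean(values, weights):
    """Weighted average: sum(w_i * x_i) / sum(w_i)."""
    return sum(w * x for w, x in zip(weights, values)) / sum(weights)

def inverse_variance_mean(values, std_errors):
    """Pooled estimate with weights 1/se^2, so precise estimates dominate."""
    return weighted_mean(values, [1.0 / se ** 2 for se in std_errors])

# Hypothetical dental age estimates (years) with their standard errors:
pooled = inverse_variance_mean([9.8, 10.2, 10.1], [0.2, 0.4, 0.1])
```

With equal standard errors this reduces to the ordinary arithmetic mean, which is why the choice of weighting scheme (n, sd, se, 1/sd, 1/se, ...) matters only when the component estimates differ in precision.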
Code of Federal Regulations, 2010 CFR
2010-07-01
... Extraction for Vegetable Oil Production Compliance Requirements § 63.2854 How do I determine the weighted... received for use in your vegetable oil production process. By the end of each calendar month following an... the solvent in each delivery of solvent, including solvent recovered from off-site oil. To...
Code of Federal Regulations, 2011 CFR
2011-07-01
... Extraction for Vegetable Oil Production Compliance Requirements § 63.2854 How do I determine the weighted... received for use in your vegetable oil production process. By the end of each calendar month following an... the solvent in each delivery of solvent, including solvent recovered from off-site oil. To...
NASA Technical Reports Server (NTRS)
Huff, Edward M.; Mosher, Marianne; Barszcz, Eric
2002-01-01
Recent research using NASA Ames AH-1 and OH-58C helicopters, and NASA Glenn test rigs, has shown that in-flight vibration data are typically non-stationary [1-4]. The nature and extent of this non-stationarity are most likely produced by several factors operating simultaneously. The aerodynamic flight environment and pilot commands provide continuously changing inputs, with a complex dynamic response that includes automatic feedback control from the engine regulator. It would appear that the combined effects operate primarily through an induced torque profile, which causes concomitant stress modulation at the individual internal gear meshes in the transmission. This notion is supported by several analyses, which show that upwards of 93% of the vibration signal's variance can be explained by knowledge of torque alone. That this relationship is stronger in an AH-1 than an OH-58, where measured non-stationarity is greater, suggests that the overall mass of the vehicle is an important consideration. In the lighter aircraft, the unsteady aerodynamic influences transmit relatively greater unsteady dynamic forces to the mechanical components, quite possibly contributing to its greater non-stationarity. In a recent paper using OH-58C pinion data [5], the authors have shown that in computing a time synchronous average (TSA) for various single-value metric computations, an effective trade-off can be obtained between sample size and measured stationarity by using data from only a single mesh cycle. A mesh cycle, which is defined as the number of rotations required for the gear teeth to return to their original mating position, has the property of representing all of the discrete phase angles of the opposing gears exactly once in the average. Measured stationarity is probably maximized because a single mesh cycle of the pinion gear occurs over a very short span of time, during which time-dependent non-stationary effects are kept to a minimum. 
Clearly, the advantage of local stationarity diminishes as the temporal duration of the cycle increases. This is most evident for a planetary mesh cycle, which can take several minutes to complete.
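A time synchronous average can be sketched as follows: samples are grouped by their phase within the (angle-resampled) rotation cycle, so rotation-synchronous components reinforce while asynchronous content averages out. The toy signal below is illustrative, not helicopter data:

```python
def time_synchronous_average(signal, samples_per_cycle):
    """Average a signal over complete rotation cycles, sample-by-sample in
    phase; returns one averaged cycle of length samples_per_cycle."""
    n_cycles = len(signal) // samples_per_cycle
    if n_cycles == 0:
        raise ValueError("need at least one complete cycle")
    return [sum(signal[c * samples_per_cycle + i] for c in range(n_cycles)) / n_cycles
            for i in range(samples_per_cycle)]

# Toy: a repeating 4-sample pattern plus cycle-to-cycle noise that cancels.
sig = [1.0, 2.0, 3.0, 4.0,
       1.5, 1.5, 3.5, 3.5,
       0.5, 2.5, 2.5, 4.5]
tsa = time_synchronous_average(sig, 4)
```

Averaging over more cycles suppresses more asynchronous noise but spans more time, which is exactly the trade-off between sample size and measured stationarity discussed above.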
Ruiz, J M; Busnel, J P; Benoît, J P
1990-09-01
The phase separation of fractionated poly(DL-lactic acid-co-glycolic acid) copolymers 50/50 was determined by silicone oil addition. Polymer fractionation by preparative size exclusion chromatography afforded five different microsphere batches. Average molecular weight determined the existence, width, and displacement of the "stability window" inside the phase diagrams, and also microsphere characteristics such as core loading and amount released over 6 hr. Further, the gyration and hydrodynamic radii were measured by light scattering. It is concluded that the polymer-solvent affinity is largely modified by the variation of average molecular weights owing to different levels of solubility. The lower the average molecular weight is, the better methylene chloride serves as a solvent for the coating material. However, a paradoxical effect due to an increase in free carboxyl and hydroxyl groups is noticed for polymers of 18,130 and 31,030 SEC (size exclusion chromatography) Mw. For microencapsulation, polymers having an intermediate molecular weight (47,250) were the most appropriate in terms of core loading and release purposes. PMID:2235892
Ito, Tadashi; Sakai, Yoshihito; Nakamura, Eishi; Yamazaki, Kazunori; Yamada, Ayaka; Sato, Noritaka; Morita, Yoshifumi
2015-01-01
[Purpose] The purpose of this study was to examine the relationship between the paraspinal muscle cross-sectional area and the relative proprioceptive weighting ratio during local vibratory stimulation of older persons with lumbar spondylosis in an upright position. [Subjects] In all, 74 older persons hospitalized for lumbar spondylosis were included. [Methods] We measured the relative proprioceptive weighting ratio of postural sway using a Wii board while vibratory stimulations of 30, 60, or 240 Hz were applied to the subjects’ paraspinal or gastrocnemius muscles. Back strength, abdominal muscle strength, and erector spinae muscle (L1/L2, L4/L5) and lumbar multifidus (L1/L2, L4/L5) cross-sectional areas were evaluated. [Results] The erector spinae muscle (L1/L2) cross-sectional area was associated with the relative proprioceptive weighting ratio during 60 Hz stimulation. [Conclusion] These findings show that the relative proprioceptive weighting ratio compared to the erector spinae muscle (L1/L2) cross-sectional area under 60 Hz proprioceptive stimulation might be a good indicator of trunk proprioceptive sensitivity. PMID:26311962
NASA Astrophysics Data System (ADS)
Jadhav, Nitin A.; Singh, Pramod K.; Rhee, Hee Woo; Bhattacharya, Bhaskar
2014-10-01
Mesoporous ZnO nanoparticles have been synthesized with a tremendous increase in specific surface area, up to 578 m2/g, which was 5.54 m2/g in previous reports (J. Phys. Chem. C 113:14676-14680, 2009). Different mesoporous ZnO nanoparticles with average pore sizes ranging from 7.22 to 13.43 nm and specific surface areas ranging from 50.41 to 578 m2/g were prepared through the sol-gel method via a simple evaporation-induced self-assembly process. The hydrolysis rate of zinc acetate was varied using different concentrations of sodium hydroxide. Morphology, crystallinity, porosity, and J-V characteristics of the materials have been studied using transmission electron microscopy (TEM), X-ray diffraction (XRD), BET nitrogen adsorption/desorption, and Keithley instruments.
Watanabe, Soko; Sawada, Mizuki; Ishizaki, Sumiko; Kobayashi, Ken; Tanaka, Masaru
2014-01-01
Background: Because body weight-bearing produces a shift in the horny layer, acral melanocytic nevus on the body weight-bearing area of the sole showed a regular fibrillar pattern (FP) due to slanting of the melanin columns in the horny layer. On the other hand, acral lentiginous melanoma (ALM) on the body weight-bearing area of the sole tended to show an irregular fibrillar pattern with rather structureless pigmentation instead of a parallel ridge pattern, which is due to the shift of the horny layer. Objective: To elucidate the subtle difference between the regular FP of nevus and the irregular FP of ALM. Methods: In this study, the dermatoscopic features of five cases of ALM and five cases of acral melanocytic nevus on the weight-bearing area of the sole were compared. Results: All the cases with nevi showed a regular FP with regular distribution of fibrils, whereas all the melanomas showed irregular distribution of fibrils and colors. Fibrils in nevi tended to be clear at the furrows and dim at the ridges. White fibrils corresponding to the eccrine ducts in the horny layer were more often present on the ridges in ALM, which showed a negative FP. Conclusion: Differentiating between the regular and irregular FP, including negative FP, might be helpful for the discrimination of melanoma from nevus. PMID:25396085
NASA Astrophysics Data System (ADS)
Obata, Kenta; Huete, Alfredo R.
2014-01-01
This study investigated the mechanisms underlying the scaling effects that apply to fractional vegetation cover (FVC) estimates derived using two-band spectral vegetation index (VI) isoline-based linear mixture models (VI isoline-based LMMs). The VIs included the normalized difference vegetation index, a soil-adjusted vegetation index, and a two-band enhanced vegetation index (EVI2). This study focused in part on the monotonicity of an area-averaged FVC estimate as a function of spatial resolution. The proof of monotonicity yielded measures of the intrinsic area-averaged FVC uncertainties due to scaling effects. The derived results demonstrate that a factor ξ, defined as a function of the "true" and "estimated" endmember spectra of the vegetated and nonvegetated surfaces, is responsible for conveying monotonicity or nonmonotonicity. The monotonic FVC values displayed a uniform increasing or decreasing trend that was independent of the choice of the two-band VI. Conditions under which scaling effects are eliminated from the FVC were identified. The monotonicity results and the practical utility of the scaling theory were then verified through numerical experiments applied to Landsat 7 Enhanced Thematic Mapper Plus (ETM+) data. The findings contribute to the development of scale-invariant FVC estimation algorithms for multisensor use and data continuity.
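The linear-mixture idea underlying these FVC estimators can be sketched minimally as a two-endmember inversion. This is an illustration of the standard mixture formula only, not the paper's full VI-isoline formulation; the endmember VI values are assumed inputs:

```python
def fvc_linear_mixture(vi, vi_soil, vi_veg):
    """Two-endmember linear mixture estimate of fractional vegetation cover:
    FVC = (VI - VI_soil) / (VI_veg - VI_soil), clipped to the physical range [0, 1].
    vi_soil and vi_veg are the VI values of the pure soil and pure vegetation
    endmembers (assumed known here)."""
    f = (vi - vi_soil) / (vi_veg - vi_soil)
    return min(max(f, 0.0), 1.0)
```

A pixel whose VI lies exactly halfway between the endmembers yields an FVC of 0.5 under this model.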
Lee, Tzu-Hsien; Tseng, Chia-Yun
2014-01-01
This study recruited 16 industrial workers to examine the effects of material, weight, and base area of container on reduction of grip force (ΔGF) and heart rate for a 100-m manual carrying task. The study examined 2 carrying materials (iron and water), 4 carrying weights (4.4, 8.9, 13.3, 17.8 kg), and 2 base areas of container (24 × 24 cm, 35 × 24 cm). The results showed that carrying water significantly increased ΔGF and heart rate as compared with carrying iron. Also, ΔGF and heart rate significantly increased with carrying weight and base area of container. The effects of base area of container on ΔGF and heart rate were greater in the water-carrying condition than in the iron-carrying condition. The maximum dynamic effect of water on ΔGF and heart rate occurred when water occupied ~60%-80% of the full volume of the container. PMID:25189743
NASA Astrophysics Data System (ADS)
Nagaoka, Tomoaki; Watanabe, Soichi; Sakurai, Kiyoko; Kunieda, Etsuo; Watanabe, Satoshi; Taki, Masao; Yamanaka, Yukio
2004-01-01
With advances in computer performance, the use of high-resolution voxel models of the entire human body has become more frequent in numerical dosimetry of electromagnetic waves. Using magnetic resonance imaging, we have developed realistic high-resolution whole-body voxel models for Japanese adult males and females of average height and weight. The developed models consist of cubic voxels of 2 mm on each side and are segmented into 51 anatomic regions. The adult female model is the first of its kind in the world, and both are the first Asian voxel models (representing average Japanese adults) that enable numerical evaluation of electromagnetic dosimetry at high frequencies of up to 3 GHz. We also describe the basic SAR characteristics of the developed models for the VHF/UHF bands, calculated using the finite-difference time-domain method.
Arshad, F; Nor, I M; Ali, R M; Hamzah, F
1996-06-01
Diet is one of the major factors contributing to the development of obesity, apart from heredity and energy balance. The objective of this cross-sectional study was to assess energy, carbohydrate, protein, and fat intakes in relation to bodyweight status among government office workers in Kuala Lumpur. A total of 185 Malay men and 196 Malay women aged 18 and above were randomly selected as the study sample. Height and weight were taken to determine body mass index (BMI). The dietary profile was obtained using 24-hour dietary recalls and food frequency methods and was analysed to determine average nutrient intake per day. Other information was ascertained from tested and coded questionnaires. The subjects were categorised into three bodyweight-status groups, namely underweight (BMI < 20 kg/m2), normal weight (BMI 20-25 kg/m2), and obese (BMI > 25 kg/m2). The prevalence of obesity was 37.8%. The mean energy intake of the respondents was 1709 ± 637 kcal/day, comprising 55.7 ± 7.6% carbohydrates, 29.7 ± 21.7% fat, and 15.6 ± 3.8% protein. There was no significant difference in diet composition among the three groups. The findings indicate that normal-weight and obese individuals had a lower intake of calories and carbohydrates than the underweight individuals (p<0.05). However, there were no significant differences in fat intakes. PMID:24394516
NASA Astrophysics Data System (ADS)
Liu, X.; Zhang, J. X.; Zhao, Z.; Ma, A. D.
2015-06-01
Synthetic aperture radar (SAR) is applied ever more widely in remote sensing because of its all-time, all-weather operation, and feature extraction from high-resolution SAR images has become a research topic of great interest. In particular, with the continuous improvement of airborne SAR image resolution, image texture information has become more abundant, which is of great significance for classification and extraction. In this paper, a novel method for built-up area extraction using both statistical and structural features is proposed, based on the texture characteristics of built-up areas. First, statistical and structural texture features are extracted with the classical gray-level co-occurrence matrix and the variogram function, respectively, taking direction information into account. Next, feature weights are calculated according to the Bhattacharyya distance, and all features are fused by weighting. Finally, the fused image is classified with the K-means method and built-up areas are extracted in a post-classification step. The proposed method was tested on domestic airborne P-band polarimetric SAR images, and two comparison experiments based on statistical texture alone and structural texture alone were carried out. In addition to qualitative analysis, quantitative analysis against manually delineated built-up areas shows a detection rate above 90% in the relatively simple experimental area, and a detection rate higher than those of the other two methods in the relatively complex experimental area. The results show that this method can effectively and accurately extract built-up areas from high-resolution airborne SAR imagery.
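The Bhattacharyya-distance weighting step can be sketched as follows, assuming each texture feature is summarized by a univariate Gaussian for built-up and background training samples. The function names and the Gaussian assumption are illustrative, not the authors' code:

```python
import numpy as np

def bhattacharyya_gauss(mu1, var1, mu2, var2):
    """Bhattacharyya distance between two univariate Gaussians."""
    return (0.25 * np.log(0.25 * (var1 / var2 + var2 / var1 + 2.0))
            + 0.25 * (mu1 - mu2) ** 2 / (var1 + var2))

def feature_weights(built_up, background):
    """Per-feature fusion weights proportional to the Bhattacharyya distance
    between built-up and background training samples.
    Both inputs are arrays with samples in rows and texture features in columns."""
    b = np.array([bhattacharyya_gauss(built_up[:, j].mean(), built_up[:, j].var(),
                                      background[:, j].mean(), background[:, j].var())
                  for j in range(built_up.shape[1])])
    return b / b.sum()  # normalize so the weights sum to 1
```

A feature whose class distributions separate strongly receives a correspondingly larger weight in the fusion.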
Ghosh, Debarchana; Manson, Steven M.
2013-01-01
In this paper, we present a hybrid approach, robust principal component geographically weighted regression (RPCGWR), in examining urbanization as a function of both extant urban land use and the effect of social and environmental factors in the Twin Cities Metropolitan Area (TCMA) of Minnesota. We used remotely sensed data to treat urbanization via the proxy of impervious surface. We then integrated two different methods, robust principal component analysis (RPCA) and geographically weighted regression (GWR) to create an innovative approach to model urbanization. The RPCGWR results show significant spatial heterogeneity in the relationships between proportion of impervious surface and the explanatory factors in the TCMA. We link this heterogeneity to the sprawling nature of urban land use that has moved outward from the core Twin Cities through to their suburbs and exurbs. PMID:23814454
Mitchell, Nia S; Nassel, Ariann F; Thomas, Deborah
2015-12-01
Obesity rates are higher for ethnic minority, low-income, and rural communities. Programs are needed to support these communities with weight management. We determined the reach of a low-cost, nationally-available weight loss program in Health Resources and Services Administration medically underserved areas (MUAs) and described the demographics of the communities with program locations. This is a cross-sectional analysis of Take Off Pounds Sensibly (TOPS) chapter locations. Geographic information systems technology was used to combine information about TOPS chapter locations, the geographic boundaries of MUAs, and socioeconomic data from the Decennial 2010 Census. TOPS is available in 30 % of MUAs. The typical TOPS chapter is in a Census Tract that is predominantly white, urban, with a median annual income between $25,000 and $50,000. However, there are TOPS chapters in Census Tracts that can be classified as predominantly black or predominantly Hispanic; predominantly rural; and as low or high income. TOPS provides weight management services in MUAs and across many types of communities. TOPS can help treat obesity in the medically underserved. Future research should determine the differential effectiveness among chapters in different types of communities. PMID:26072259
Kimbro, Rachel Tolbert; Brooks-Gunn, Jeanne; McLanahan, Sara
2011-03-01
Although research consistently demonstrates a link between residential context and physical activity for adults and adolescents, less is known about young children's physical activity. Using data from the U.S. Fragile Families and Child Wellbeing Study (N=1822, 51% male), we explored whether outdoor play and television watching were associated with children's body mass indexes (BMIs) at age five using OLS regression models, controlling for a wide array of potential confounders, including maternal BMI. We also tested whether subjective and objective neighborhood measures - socioeconomic status (from U.S. Census tract data), type of dwelling, perceived collective efficacy, and interviewer-assessed physical disorder of the immediate environment outside the home - were associated with children's activities, using negative binomial regression models. Overall, 19% of the sample were overweight (between the 85th and 95th percentiles), and 16% were obese (≥ 95th percentile). Hours of outdoor play were negatively associated with BMI, and hours of television were positively associated with BMI. Moreover, a ratio of outdoor play to television time was a significant predictor of BMI. Higher maternal perceptions of neighborhood collective efficacy were associated with more hours of outdoor play, fewer hours of television viewing, and more trips to a park or playground. In addition, we found that neighborhood physical disorder was associated with both more outdoor play and more television watching. Finally, contrary to expectations, we found that children living in public housing had significantly more hours of outdoor play and watched more television, than other children. We hypothesize that poorer children may have more unstructured time, which they fill with television time but also with outdoor play time; and that children in public housing may be likely to have access to play areas on the grounds of their housing facilities. PMID:21324574
Lawrence, T E; Farrow, R L; Zollinger, B L; Spivey, K S
2008-06-01
With the adoption of visual instrument grading, the calculated yield grade can be used for payment to cattle producers selling on grid pricing systems. The USDA beef carcass grading standards include a relationship between required LM area (LMA) and HCW that is an important component of the final yield grade. As noted on a USDA yield grade LMA grid, a 272-kg (600-lb) carcass requires a 71-cm(2) (11.0-in.(2)) LMA and a 454-kg (1,000-lb) carcass requires a 102-cm(2) (15.8-in.(2)) LMA. This is a linear relationship, where required LMA = 0.171(HCW) + 24.526. If a beef carcass has a larger LMA than required, the calculated yield grade is lowered, whereas a smaller LMA than required increases the calculated yield grade. The objective of this investigation was to evaluate the LMA to HCW relationship against data on 434,381 beef carcasses in the West Texas A&M University (WTAMU) Beef Carcass Research Center database. In contrast to the USDA relationship, our data indicate a quadratic relationship [WTAMU LMA = 33.585 + 0.17729(HCW) -0.0000863(HCW(2))] between LMA and HCW whereby, on average, a 272-kg carcass has a 75-cm(2) (11.6-in.(2)) LMA and a 454-kg carcass has a 96-cm(2) (14.9-in.(2)) LMA, indicating a different slope and different intercept than those in the USDA grading standards. These data indicate that the USDA calculated yield grade equation favors carcasses lighter than 363 kg (800 lb) for having above average muscling and penalizes carcasses heavier than 363 kg (800 lb) for having below average muscling. If carcass weights continue to increase, we are likely to observe greater proportions of yield grade 4 and 5 carcasses because of the measurement bias that currently exists in the USDA yield grade equation. PMID:18310492
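The two LMA-HCW relationships quoted above can be written directly as code; the checkpoint values at 272 and 454 kg follow from the published coefficients:

```python
def usda_required_lma(hcw_kg):
    """USDA linear relationship: required LM area (cm^2) as a function of
    hot carcass weight (kg); required LMA = 0.171(HCW) + 24.526."""
    return 0.171 * hcw_kg + 24.526

def wtamu_observed_lma(hcw_kg):
    """WTAMU quadratic fit to 434,381 carcasses:
    LMA = 33.585 + 0.17729(HCW) - 0.0000863(HCW^2)."""
    return 33.585 + 0.17729 * hcw_kg - 0.0000863 * hcw_kg ** 2
```

Evaluating both near 363 kg (800 lb) shows the curves crossing there, which is why the linear USDA standard favors lighter carcasses and penalizes heavier ones relative to the observed quadratic relationship.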
Estimating Average Domain Scores.
ERIC Educational Resources Information Center
Pommerich, Mary; Nicewander, W. Alan
A simulation study was performed to determine whether a group's average percent correct in a content domain could be accurately estimated for groups taking a single test form and not the entire domain of items. Six Item Response Theory (IRT) -based domain score estimation methods were evaluated, under conditions of few items per content area per…
Lopes, Thomas J.; Evetts, David M.
2004-01-01
Nevada's reliance on ground-water resources has increased because of increased development and because surface-water resources are fully appropriated. The need to accurately quantify Nevada's water resources and water use is more critical than ever to meet future demands. Estimated ground-water pumpage, artificial and natural recharge, and interbasin flow can be used to help evaluate stresses on aquifer systems. In this report, estimates of ground-water pumpage and artificial recharge during calendar year 2000 were made using data from a variety of sources, such as reported estimates and estimates made using Landsat satellite imagery. Average annual natural recharge and interbasin flow were compiled from published reports. An estimated 1,427,100 acre-feet of ground water was pumped in Nevada during calendar year 2000. This total was calculated by summing six categories of ground-water pumpage, based on water use. Total artificial recharge during 2000 was about 145,970 acre-feet. At least one estimate of natural recharge was available for 209 of the 232 hydrographic areas (HAs). Natural recharge for the 209 HAs ranges from 1,793,420 to 2,583,150 acre-feet. Estimates of interbasin flow were available for 151 HAs. The categories and their percentages of the total ground-water pumpage are irrigation and stock watering (47 percent), mining (26 percent), water systems (14 percent), geothermal production (8 percent), self-supplied domestic (4 percent), and miscellaneous (less than 1 percent). Pumpage in the top 10 HAs accounted for about 49 percent of the total ground-water pumpage. The most ground-water pumpage in an HA was due to mining, in Pumpernickel Valley (HA 65), Boulder Flat (HA 61), and Lower Reese River Valley (HA 59). Pumpages by water systems in Las Vegas Valley (HA 212) and Truckee Meadows (HA 87) were the fourth and fifth highest in 2000, respectively.
Irrigation and stock watering pumpage accounted for most ground-water withdrawals in the HAs with the sixth through ninth highest pumpage. Geothermal production accounted for most pumpage in the Carson Desert (HA 101). Reinjection of ground water pumped for geothermal energy production accounted for about 64 percent (93,310 acre-feet) of the total artificial recharge. The only artificial recharge by water systems was in Las Vegas Valley, where 29,790 acre-feet of water from the Colorado River was injected into the aquifer system. Artificial recharge by mining totaled 22,870 acre-feet. Net ground-water flow was estimated only for the 143 HAs with available estimates of both natural recharge and interbasin flow. Of the 143 estimates, 58 have negative net ground-water flow, indicating that ground-water storage could be depleted if pumpage continues at the same rate. The State has designated HAs where permitted ground-water rights approach or exceed the estimated average annual recharge. Ten HAs were identified that are not designated and have a net ground-water flow between -1,000 and -35,000 acre-feet. Because of uncertainties in recharge, the water budgets for these HAs may need refining to determine whether ground-water storage is being depleted.
NASA Technical Reports Server (NTRS)
Kovich, G.; Moore, R. D.; Urasek, D. C.
1973-01-01
The overall and blade-element performance are presented for an air compressor stage designed to study the effect of weight flow per unit annulus area on efficiency and flow range. At the design speed of 424.8 m/sec the peak efficiency of 0.81 occurred at the design weight flow and a total pressure ratio of 1.56. Design pressure ratio and weight flow were 1.57 and 29.5 kg/sec (65.0 lb/sec), respectively. Stall margin at design speed was 19 percent based on the weight flow and pressure ratio at peak efficiency and at stall.
NASA Astrophysics Data System (ADS)
Franz, Trenton E.; Zreda, M.; Ferre, T. P. A.; Rosolem, R.
2013-10-01
The cosmic-ray neutron probe measures soil moisture over tens of hectares, thus averaging spatially variable soil moisture fields. A previous paper described how variable soil moisture profiles affect the integrated cosmic-ray neutron signal from which depth-average soil moisture is computed. Here, we investigate the effect of horizontal heterogeneity on the relationship between neutron counts and average soil moisture. Observations from a distributed sensor network at a site in southern Arizona indicate that the horizontal component of the total variance of the soil moisture field is less variable in time than the vertical component. Using results from neutron particle transport simulations, we show that 1-D binary distributions of soil moisture may affect both the mean and variance of the neutron counts of a cosmic-ray neutron detector placed arbitrarily in a soil moisture field, potentially giving rise to an underestimate of the footprint-average soil moisture. Similar simulations using 1-D and 2-D Gaussian soil moisture fields indicate consistent means and variances for a randomly placed detector if the correlation length scales are short (less than 30 m) and/or the variance of the soil moisture field is small (<0.032 m6 m-6). Taken together, these soil moisture observations and neutron transport simulations show that horizontal heterogeneity likely has a small effect on the relationship between mean neutron counts and average soil moisture for soils under natural conditions.
Hossain, Ahmed; Beyene, Joseph
2013-12-01
MicroRNAs (miRNAs) are short non-coding RNAs that play critical roles in numerous cellular processes through post-transcriptional functions. The aberrant role of miRNAs has been reported in a number of diseases. A robust computational method is vital to discover novel miRNAs where level of noise varies dramatically across the different miRNAs. In this paper, we propose a flexible rank-based procedure for estimating a weighted log partial area under the receiver operating characteristic (ROC) curve statistic for selecting differentially expressed miRNAs. The statistic combines results taking partial area under the curve (pAUC) and their corresponding variance. The proposed method does not involve complicated formulas and does not require advanced programming skills. Two real datasets are analyzed to illustrate the method and a simulation study is carried out to assess the performance of different miRNA ranking statistics. We conclude that the proposed method offers robust results with large samples for miRNA expression data, and the method can be used as an alternative analytical tool for identifying a list of target miRNAs for further biological and clinical investigation. PMID:24246291
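An unweighted partial AUC from empirical ROC points can be sketched as follows, assuming higher scores indicate differential expression. The paper's statistic additionally log-transforms and variance-weights the pAUC; this minimal version shows only the underlying pAUC computation:

```python
import numpy as np

def partial_auc(scores_pos, scores_neg, fpr_max=0.2):
    """Partial area under the ROC curve restricted to FPR in [0, fpr_max],
    computed by trapezoidal integration over the empirical ROC points."""
    scores = np.concatenate([scores_pos, scores_neg])
    labels = np.concatenate([np.ones(len(scores_pos)), np.zeros(len(scores_neg))])
    order = np.argsort(-scores)          # sweep the threshold from high to low
    labels = labels[order]
    tpr = np.cumsum(labels) / len(scores_pos)
    fpr = np.cumsum(1 - labels) / len(scores_neg)
    tpr = np.concatenate([[0.0], tpr])
    fpr = np.concatenate([[0.0], fpr])
    mask = fpr <= fpr_max
    # interpolate TPR at fpr_max so the integral covers exactly [0, fpr_max]
    tpr_at_max = np.interp(fpr_max, fpr, tpr)
    f = np.concatenate([fpr[mask], [fpr_max]])
    t = np.concatenate([tpr[mask], [tpr_at_max]])
    return np.trapz(t, f)
```

For a perfectly separating miRNA, the pAUC over [0, fpr_max] equals fpr_max itself, the maximum attainable value.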
NASA Astrophysics Data System (ADS)
Rung-Arunwan, Tawat; Siripunvaraporn, Weerachai; Utada, Hisashi
2016-04-01
Through a large number of magnetotelluric (MT) observations conducted in a study area, one can obtain regional one-dimensional (1-D) features of the subsurface electrical conductivity structure simply by taking the geometric average of the determinant invariants of the observed impedances. This method was proposed by Berdichevsky and coworkers and is based on the expectation that distortion effects due to near-surface electrical heterogeneities will be statistically smoothed out. A good estimate of a regional mean 1-D model is useful, especially in recent years, as an a priori (or starting) model in 3-D inversion. However, the original theory was derived before the establishment of present knowledge on galvanic distortion. This paper therefore reexamines the meaning of the Berdichevsky average using the conventional formulation of galvanic distortion. A simple derivation shows that the determinant invariant of a distorted impedance, and hence its Berdichevsky average, is always biased downward by the distortion parameters of shear and splitting. This means that the regional mean 1-D model obtained from the Berdichevsky average tends to be more conductive. As an alternative rotational invariant, the sum of the squared elements (ssq) invariant is found to be less affected by bias from the distortion parameters; we therefore conclude that its geometric average is more suitable for estimating the regional structure. We find that the combination of the determinant and ssq invariants provides parameters useful in dealing with a set of distorted MT impedances.
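The two rotational invariants and their geometric (Berdichevsky-style) average over sites can be sketched with the conventional definitions. This is a minimal illustration of the standard formulas, not the authors' code:

```python
import numpy as np

def det_invariant(Z):
    """Determinant invariant of a 2x2 complex MT impedance tensor:
    Zdet = sqrt(Zxx*Zyy - Zxy*Zyx)."""
    return np.sqrt(Z[0, 0] * Z[1, 1] - Z[0, 1] * Z[1, 0])

def ssq_invariant(Z):
    """Sum-of-squared-elements (ssq) invariant:
    Zssq = sqrt((Zxx^2 + Zxy^2 + Zyx^2 + Zyy^2) / 2)."""
    return np.sqrt((Z[0, 0] ** 2 + Z[0, 1] ** 2 + Z[1, 0] ** 2 + Z[1, 1] ** 2) / 2)

def geometric_average(values):
    """Geometric average of invariants across sites."""
    return np.exp(np.mean(np.log(np.asarray(values))))
```

For an ideal 1-D impedance, Z = [[0, Z0], [-Z0, 0]], both invariants reduce to Z0, so any difference between their averages over real data reflects distortion and noise.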
van der Pals, Jesper; Hammer-Hansen, Sophia; Nielles-Vallespin, Sonia; Kellman, Peter; Taylor, Joni; Kozlov, Shawn; Hsu, Li-Yueh; Chen, Marcus Y.; Arai, Andrew E.
2015-01-01
Aims: Cardiovascular magnetic resonance (CMR) imaging can measure the myocardial area at risk (AAR), but the technique has received criticism for inadequate validation. CMR commonly depicts an AAR that is wider than the infarct, which in turn would require a lateral perfusion gradient within the AAR. We investigated the presence of a lateral perfusion gradient within the AAR and validated CMR measures of AAR against three independent reference standards of high quality. Methods and results: Computed tomography (CT) perfusion imaging, microsphere blood flow analysis, T1-weighted 3T CMR, and fluorescent microparticle pathology were used to investigate the AAR in a canine model (n = 10) of ischaemia and reperfusion. AAR size by CMR correlated well with CT (R2 = 0.80), microsphere blood flow (R2 = 0.80), and pathology (R2 = 0.74), with good limits of agreement [-0.79 ± 4.02% of the left ventricular mass (LVM) vs. CT; -1.49 ± 4.04% LVM vs. blood flow; and -1.01 ± 4.18% LVM vs. pathology]. The lateral portion of the AAR had higher perfusion than the core of the AAR by CT perfusion imaging (40.7 ± 11.8 vs. 25.2 ± 17.7 Hounsfield units, P = 0.0008) and microsphere blood flow (0.11 ± 0.04 vs. 0.05 ± 0.02 mL/g/min, lateral vs. core, P = 0.001). The transmural extent of MI was lower in the lateral portion of the AAR than in the core (28.2 ± 10.2 vs. 17.4 ± 8.4% of the wall, P = 0.001). Conclusion: T1-weighted CMR accurately quantifies the size of the AAR, with excellent agreement with three independent reference standards. A lateral perfusion gradient results in a lower transmural extent of infarction at the edges of the AAR compared with the core. PMID:25881901
NASA Astrophysics Data System (ADS)
Shi, Y.; Long, Y.; Wi, X. L.
2014-04-01
When tourists visit multiple scenic spots, the actual travel line is usually the most efficient route through the road network, and it may differ from the planned travel line. In navigation applications, a proposed travel line is normally generated automatically by a path-planning algorithm from the scenic spots' positions and the road network. But when a scenic spot covers a certain area and has multiple entrances or exits, the traditional representation by a single point coordinate cannot reflect these structural features. To solve this problem, this paper focuses on how scenic spots' structural features, such as multiple entrances or exits, influence the path-planning process, and proposes a double-weighted graph model in which the weights of both vertices and edges can be selected dynamically. It then discusses the model-building method and an optimal path-planning algorithm based on the Dijkstra and Prim algorithms. Experimental results show that the optimal travel line derived from the proposed model and algorithm is more reasonable, and that the visiting order and travel distance are further optimized.
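A shortest-path search on a graph that weights both vertices and edges can be sketched as below, in the spirit of the proposed double-weighted model. The names and the cost convention (entering a vertex adds its weight, which could represent, say, the cost of passing through a particular entrance) are assumptions for illustration:

```python
import heapq

def dijkstra_double_weighted(vertex_w, edges, source, target):
    """Dijkstra's algorithm on a double-weighted graph: the cost of reaching
    vertex v over edge (u, v) is dist(u) + edge_weight + vertex_w[v].
    `edges` maps each vertex to a list of (neighbor, edge_weight) pairs."""
    dist = {source: vertex_w.get(source, 0)}
    prev = {}
    heap = [(dist[source], source)]
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        if u == target:
            break
        for v, w in edges.get(u, []):
            nd = d + w + vertex_w.get(v, 0)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    # walk the predecessor chain back from target to recover the path
    path, node = [target], target
    while node != source:
        node = prev[node]
        path.append(node)
    return dist[target], path[::-1]
```

With a heavy vertex weight on an intermediate node (e.g. a congested entrance), the search routes around it even when the edge distances alone would prefer it.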
NASA Technical Reports Server (NTRS)
Urasek, D. C.; Kovich, G.; Moore, R. D.
1973-01-01
Performance was obtained for a 50-cm-diameter compressor designed for a high weight flow per unit annulus area of 208 (kg/sec)/sq m. Peak efficiency values of 0.83 and 0.79 were obtained for the rotor and stage, respectively. The stall margin for the stage was 23 percent, based on equivalent weight flow and total-pressure ratio at peak efficiency and stall.
NASA Astrophysics Data System (ADS)
Kuchment, L.; Romanov, P.; Gelfan, A.; Demidov, V.; Tarpley, D.
2007-12-01
Improvement of long-range forecasts of snowmelt flood volume is one of the key hydrological problems in northern Russia. Accurate quantitative characterization of the snow cover properties required in snowmelt runoff models is challenging in this region because the existing network of hydrometeorological stations is sparse. Application of satellite data for snow monitoring is hampered by large areas of coniferous forest masking the snowpack and by persistent cloudiness in the fall and winter seasons. In order to enhance quantitative characterization of snowpack properties, we have developed a new technique in which satellite data are coupled with a snow cover model. The physically based snowpack model uses interpolated data from ground-based meteorological stations and incorporates a number of products derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) onboard the Terra and Aqua satellites. The input satellite data include albedo, land surface temperature, leaf area index, and canopy coverage. The outputs of the model are the snow depth, snow density, ice and liquid water content of the snow, and the snow grain size. The model was tested over a region of ~240,000 km2 (56°N to 60°N, and 48°E to 54°E) located within the NEESPI area. This region includes the Vyatka River basin, with a catchment area of about 120,000 km2. Snowpack simulations were conducted for 1 x 1 km grid cells for the spring seasons of 2002 and 2003. Spatial correlation between the modeled snow extent and the MODIS-derived snow cover distribution over the study area ranged from 0.9-1.0 at the beginning and end of the melt season to 0.5-0.6 during the period of intensive snowmelt. Analysis of the MODIS snow retrievals over the study area demonstrated their good agreement with surface observations.
Satellite information on snow cover was not used in the current version of the model; however, the high accuracy of the satellite snow retrievals makes their incorporation into the next version of the model very attractive. In the presentation we will discuss ways to incorporate satellite snow retrievals into the snowpack model, and the advantages of using improved estimates of SWE in runoff hydrograph calculations.
Wu, Jihuai; Xiao, Yaoming; Tang, Qunwei; Yue, Gentian; Lin, Jianming; Huang, Miaoliang; Huang, Yunfang; Fan, Leqing; Lan, Zhang; Yin, Shu; Sato, Tsugio
2012-04-10
Light-weight PEDOT-Pt/Ti mesh and Ti/TiO(2) foil electrodes are prepared. Owing to the PEDOT-Pt/Ti photocathode's high transparency, good electrocatalytic activity, and low resistance, and to the Ti/TiO(2) anode's large specific area and high conductivity, a light-weight backside-illuminated large-area (100 cm(2)) dye-sensitized solar cell achieves an energy conversion efficiency of 6.69% under outdoor sunlight irradiation of 55 mW cm(-2). PMID:22407518
NASA Astrophysics Data System (ADS)
Wang, Gongwen; Chen, Jianping; Li, Qing; Ding, Huoping
2007-06-01
This paper aims to monitor the evolution of desertification over different stages and assess its driving factors using remote sensing (RS) data and a cellular automata (CA)-geographical information system (GIS) approach, with an adaptive analytic hierarchy process (AHP) used to derive the weights of the desertification factors. The study area (114°E to 117°E and 39.5°N to 42.2°N) is an important agro-pastoral transitional zone located in Beijing and its neighboring, marginally desertified areas in North China. Desertification information, including NDVI and desertified area, was derived from Landsat TM images of 1987 and 1996 (28.5 m resolution) and a 2006 CBERS image (19.5 m resolution) of the study area. Ancillary data on meteorology, geology, a 30-m DEM, and hydrography were statistically analyzed with GIS technology. A CA model based on the desertification factors with AHP-derived weights was built with an AML program in an ArcGIS workstation to assess the evolution of desertification in different stages (from 1987 to 1996, and from 1996 to 2006). The results show that the desertified area increased by 3.28% per year from 1987 to 1996 and by 0.51% per year from 1996 to 2006. Although the weights of the desertification factors changed somewhat between stages, the main factors, including climate, NDVI, and terrain, remained the same, although their values in the study area changed.
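The AHP step of deriving factor weights from a pairwise-comparison matrix can be sketched with the standard principal-eigenvector method. This is a minimal generic illustration, not the authors' AML code:

```python
import numpy as np

def ahp_weights(pairwise):
    """Derive factor weights from an AHP pairwise-comparison matrix as the
    normalized principal eigenvector (the standard AHP prioritization method).
    Entry (i, j) holds the judged importance of factor i relative to factor j."""
    A = np.asarray(pairwise, dtype=float)
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)            # principal eigenvalue
    w = np.abs(vecs[:, k].real)         # eigenvector sign is arbitrary
    return w / w.sum()                  # normalize the weights to sum to 1
```

For a perfectly consistent matrix (entries equal to ratios of true weights), the method recovers those weights exactly; for real expert judgments, AHP additionally checks a consistency ratio before accepting the result.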
NASA Astrophysics Data System (ADS)
Ferraris, Stefano; Agnese, Carmelo; Baiamonte, Giorgio; Canone, Davide; Previati, Maurizio; Cat Berro, Daniele; Mercalli, Luca
2015-04-01
Modeling the statistical structure of rainfall is an important research area in hydrology, meteorology, atmospheric physics, and climatology because of its several theoretical and practical implications. The statistical inference of the alternation of wet periods (WP) and dry periods (DP) in daily rainfall records can be achieved through modeling of the inter-arrival time series (IT), defined as the succession of times elapsed between a rainy day and the one immediately preceding it. It has been shown previously that the statistical structure of IT can be well described by the 3-parameter Lerch distribution (Lch). In this work, Lch was successfully applied to IT data from a sub-alpine area (Piemonte and Valle d'Aosta, NW Italy); furthermore, the same statistical procedure was applied to the daily rainfall depths associated with the ITs. The analysis was carried out for 26 long daily rainfall series (≈ 90 yr of observations). The main objective of this work was to detect temporal trends in some features describing the statistical structure of both the inter-arrival time series (IT) and the associated rainfall depths (H). Each time series was divided into five-year subsets, and the Lch parameters were estimated for each subset so as to extend the trend analysis to some high quantiles.
Code of Federal Regulations, 2010 CFR
2010-10-01
..., by Management Area (weights in metric tons) 2a Table 2a to Part 660, Subpart G Wildlife and Fisheries... COMMERCE (CONTINUED) FISHERIES OFF WEST COAST STATES Pt. 660, Subpt. G, Table 2a Table 2a to Part 660... Date Note: At 75 FR 60995, Oct. 1, 2010, subpart G was amended by removing Tables 1a through 2c...
Code of Federal Regulations, 2010 CFR
2010-10-01
... 50 Wildlife and Fisheries 9 2010-10-01 2010-10-01 false 2009, Specifications of ABCs, OYs, and HGs, by Management Area (weights in metric tons) 1a Table 1a to Part 660, Subpart C Wildlife and Fisheries... COMMERCE (CONTINUED) FISHERIES OFF WEST COAST STATES Pt. 660, Subpt. C, Table 1a Table 1a to Part...
Code of Federal Regulations, 2010 CFR
2010-10-01
..., by Management Area (weights in metric tons) 1a Table 1a to Part 660, Subpart G Wildlife and Fisheries... COMMERCE (CONTINUED) FISHERIES OFF WEST COAST STATES Pt. 660, Subpt. G, Table 1a Table 1a to Part 660... Date Note: At 75 FR 60995, Oct. 1, 2010, subpart G was amended by removing Tables 1a through 2c...
Code of Federal Regulations, 2010 CFR
2010-10-01
... 50 Wildlife and Fisheries 9 2010-10-01 2010-10-01 false 2010, Specifications of ABCs, OYs, and HGs, by Management Area (weights in metric tons) 2a Table 2a to Part 660, Subpart C Wildlife and Fisheries... COMMERCE (CONTINUED) FISHERIES OFF WEST COAST STATES Pt. 660, Subpt. C, Table 2a Table 2a to Part...
ERIC Educational Resources Information Center
Young, Beverly S.
The present study was designed to determine whether conservation of number, weight, volume, area, and mass could be learned and retained by disadvantaged preschool children when taught by an inexperienced classroom teacher. An instructional sequence of 10-minute lessons was presented on alternate days over a 3 1/2 week period by preservice…
Xaverius, Pamela; Alman, Cameron; Holtz, Lori; Yarber, Laura
2016-03-01
Objectives This study examined risk and protective factors associated with very low birth weight (VLBW) for babies born to women receiving adequate or inadequate prenatal care. Methods Birth records from St. Louis City and County from 2000 to 2009 were used (n = 152,590). Data were categorized across risk factors and stratified by adequacy of prenatal care (PNC). Multivariate logistic regression and population attributable risk (PAR) were used to explore risk factors for VLBW infants. Results Women receiving inadequate prenatal care had a higher prevalence of delivering a VLBW infant than those receiving adequate PNC (4.11 vs. 1.44 %, p < .0001). The distribution of risk factors differed between adequate and inadequate PNC regarding Black race (36.4 vs. 79.0 %, p < .0001), age under 20 (13.0 vs. 33.6 %, p < .0001), <13 years of education (35.9 vs. 77.9 %, p < .0001), Medicaid status (35.7 vs. 74.9 %, p < .0001), primiparity (41.6 vs. 31.4 %, p < .0001), smoking (9.7 vs. 24.5 %, p < .0001), and diabetes (4.0 vs. 2.4 %, p < .0001), respectively. Black race, advanced maternal age, primiparity and gestational hypertension were significant predictors of VLBW, regardless of adequate or inadequate PNC. Among women with inadequate PNC, Medicaid was protective against VLBW (aOR 0.671, 95 % CI 0.563-0.803; PAR -32.6 %) and smoking was a risk factor for it (aOR 1.23, 95 % CI 1.01-1.49; PAR 40.1 %). When prematurity was added to the adjusted models, the largest PAR shifted to education (44.3 %) among women with inadequate PNC. Conclusions Community actions around broader issues of racism and social determinants of health are needed to prevent VLBW in a large urban area. PMID:26537389
Peng, Xiang; Mielke, Michael; Booth, Timothy
2011-01-17
We demonstrate high average power, high energy 1.55 μm ultra-short pulse (<1 ps) laser delivery using helium-filled and argon-filled large mode area hollow core photonic band-gap fibers and compare relevant performance parameters. The ultra-short pulse laser beam (with pulse energy higher than 7 μJ and pulse train average power larger than 0.7 W) is output from a 2 m long hollow core fiber with diffraction limited beam quality. We introduce a pulse tuning mechanism of argon-filled hollow core photonic band-gap fiber. We assess the damage threshold of the hollow core photonic band-gap fiber and propose methods to further increase pulse energy and average power handling. PMID:21263632
Haines, Aaron M.; Leu, Matthias; Svancara, Leona K.; Wilson, Gina; Scott, J. Michael
2010-01-01
Identification of biodiversity hotspots (hereafter, hotspots) has become a common strategy to delineate important areas for wildlife conservation. However, the use of hotspots has not often incorporated important habitat types, ecosystem services, anthropogenic activity, or consistency in identifying important conservation areas. The purpose of this study was to identify hotspots to improve avian conservation efforts for Species of Greatest Conservation Need (SGCN) in the state of Idaho, United States. We evaluated multiple approaches to define hotspots and used a unique approach based on weighting species by their distribution size and conservation status to identify hotspot areas. All hotspot approaches identified bodies of water (Bear Lake, Grays Lake, and American Falls Reservoir) as important hotspots for Idaho avian SGCN, but we found that the weighted approach produced more congruent hotspot areas when compared to other hotspot approaches. To incorporate anthropogenic activity into hotspot analysis, we grouped species based on their sensitivity to specific human threats (i.e., urban development, agriculture, fire suppression, grazing, roads, and logging) and identified ecological sections within Idaho that may require specific conservation actions to address these human threats using the weighted approach. The Snake River Basalts and Overthrust Mountains ecological sections were important areas for potential implementation of conservation actions to conserve biodiversity. Our approach to identifying hotspots may be useful as part of a larger conservation strategy to aid land managers or local governments in applying conservation actions on the ground.
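The weighting idea (scoring each map cell by summing species weights that favor narrow-ranged, imperiled species) can be illustrated with a toy grid. The presence matrix and status multipliers below are hypothetical, not the study's data, and the inverse-range weighting is one simple reading of "weighting species by their distribution size and conservation status".

```python
import numpy as np

# Hypothetical presence grid: rows = species, cols = map cells (1 = present).
presence = np.array([
    [1, 1, 1, 1, 0],   # widespread, low-status species
    [0, 0, 1, 1, 0],
    [0, 0, 0, 1, 1],   # narrow-range, high-status species
])
# Hypothetical conservation-status multipliers (higher = more imperiled).
status = np.array([1.0, 2.0, 3.0])

# Weight each species inversely by its range size, scaled by status,
# then sum the weights of the species present in each cell.
range_size = presence.sum(axis=1)
weights = status / range_size
hotspot = (weights[:, None] * presence).sum(axis=0)
```

Cells shared by several narrow-ranged, high-status species accumulate the largest scores, which is the behavior the weighted approach is designed to produce.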
2012-01-01
Background The study conducts statistical and spatial analyses to investigate amounts and types of permitted surface water pollution discharges in relation to population mortality rates for cancer and non-cancer causes nationwide and by urban-rural setting. Data from the Environmental Protection Agency's (EPA) Discharge Monitoring Report (DMR) were used to measure the location, type, and quantity of a selected set of 38 discharge chemicals for 10,395 facilities across the contiguous US. Exposures were refined by weighting amounts of chemical discharges by their estimated toxicity to human health, and by estimating the discharges that occur not only in a local county, but area-weighted discharges occurring upstream in the same watershed. Centers for Disease Control and Prevention (CDC) mortality files were used to measure age-adjusted population mortality rates for cancer, kidney disease, and total non-cancer causes. Analysis included multiple linear regressions to adjust for population health risk covariates. Spatial analyses were conducted by applying geographically weighted regression to examine the geographic relationships between releases and mortality. Results Greater non-carcinogenic chemical discharge quantities were associated with significantly higher non-cancer mortality rates, regardless of toxicity weighting or upstream discharge weighting. Cancer mortality was higher in association with carcinogenic discharges only after applying toxicity weights. Kidney disease mortality was related to higher non-carcinogenic discharges only when both applying toxicity weights and including upstream discharges. Effects for kidney mortality and total non-cancer mortality were stronger in rural areas than urban areas. Spatial results show correlations between non-carcinogenic discharges and cancer mortality for much of the contiguous United States, suggesting that chemicals not currently recognized as carcinogens may contribute to cancer mortality risk. 
The geographically weighted regression results suggest spatial variability in effects, and also indicate that some rural communities may be impacted by upstream urban discharges. Conclusions There is evidence that permitted surface water chemical discharges are related to population mortality. Toxicity weights and upstream discharges are important for understanding some mortality effects. Chemicals not currently recognized as carcinogens may nevertheless play a role in contributing to cancer mortality risk. Spatial models allow for the examination of geographic variability not captured through the regression models. PMID:22471926
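A minimal sketch of the geographically weighted regression technique used in the spatial analysis, assuming a Gaussian distance kernel; the synthetic coordinates, covariate, bandwidth, and response are illustrative, not the study's discharge and mortality data.

```python
import numpy as np

def gwr_coefficients(coords, x, y, bandwidth=1.5):
    """Geographically weighted regression: at each location, fit a
    weighted least-squares line using a Gaussian distance kernel."""
    Xd = np.column_stack([np.ones(len(x)), x])      # intercept + covariate
    betas = []
    for c in coords:
        d2 = np.sum((coords - c) ** 2, axis=1)
        w = np.exp(-d2 / (2 * bandwidth ** 2))       # Gaussian kernel weights
        W = np.diag(w)
        beta = np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ y)
        betas.append(beta)
    return np.array(betas)               # one (intercept, slope) per location

rng = np.random.default_rng(1)
coords = rng.random((200, 2)) * 10
x = rng.random(200)
# Synthetic response whose local slope drifts eastward across the map.
y = (1.0 + 0.5 * coords[:, 0]) * x + rng.normal(0, 0.01, 200)
local = gwr_coefficients(coords, x, y, bandwidth=1.5)
```

The local coefficient surface (here, the slope column of `local`) is what reveals the spatial variability in effects that a single global regression would average away.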
ERIC Educational Resources Information Center
Gutiérrez-Zornoza, Myriam; Sánchez-López, Mairena; García-Hermoso, Antonio; González-García, Alberto; Chillón, Palma; Martínez-Vizcaíno, Vicente
2015-01-01
Purpose: The aim of this study was to examine (a) whether distance from home to school is a determinant of active commuting to school (ACS), (b) the relationship between distance from home to heavily used facilities (school, green spaces, and sports facilities) and the weight status and cardiometabolic risk categories, and (c) whether ACS has a…
On generalized averaged Gaussian formulas
NASA Astrophysics Data System (ADS)
Spalevic, Miodrag M.
2007-09-01
We present a simple numerical method for constructing the optimal (generalized) averaged Gaussian quadrature formulas which are the optimal stratified extensions of Gauss quadrature formulas. These extensions exist in many cases in which real positive Kronrod formulas do not exist. For the Jacobi weight functions w(x) ≡ w^{(α,β)}(x) = (1-x)^α (1+x)^β (α, β > -1) we give a necessary and sufficient condition on the parameters α and β such that the optimal averaged Gaussian quadrature formulas are internal.
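For concreteness, here is a sketch of Laurie's (non-generalized) averaged rule for the Legendre case α = β = 0: the mean of the n-point Gauss rule and the (n+1)-point anti-Gauss rule, the latter obtained by doubling the last recurrence coefficient β_n in the Jacobi matrix. The generalized averaged formulas of the paper extend this construction; the code below is only the base case.

```python
import numpy as np

def jacobi_matrix(n):
    """Symmetric Jacobi matrix for the Legendre weight on [-1, 1]."""
    k = np.arange(1, n)
    b = k / np.sqrt(4 * k**2 - 1)        # off-diagonals sqrt(beta_k)
    return np.diag(b, 1) + np.diag(b, -1)

def rule_from_jacobi(J, mu0=2.0):
    """Golub-Welsch: nodes = eigenvalues, weights = mu0 * (first
    eigenvector components)^2."""
    nodes, V = np.linalg.eigh(J)
    return nodes, mu0 * V[0, :] ** 2

def averaged_rule(n):
    """Laurie's averaged rule: mean of the n-point Gauss rule and the
    (n+1)-point anti-Gauss rule (last beta doubled)."""
    xg, wg = rule_from_jacobi(jacobi_matrix(n))
    Ja = jacobi_matrix(n + 1)
    Ja[n - 1, n] *= np.sqrt(2.0)         # beta_n -> 2 * beta_n
    Ja[n, n - 1] *= np.sqrt(2.0)
    xa, wa = rule_from_jacobi(Ja)
    return np.concatenate([xg, xa]), np.concatenate([wg / 2, wa / 2])

x, w = averaged_rule(3)
approx = np.sum(w * x**4)                # integral of x^4 over [-1, 1] = 2/5
```

Because the anti-Gauss error cancels the Gauss error up to degree 2n+1, the averaged rule reproduces the exact value 2/5 here.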
Nelson, Jennifer M.; Vos, Miriam B.; Walsh, Stephanie M.; O'Brien, Lauren A.
2015-01-01
Abstract Background: Childhood obesity in Georgia exceeds the national rate. The state's pediatric primary care providers (PCPs) are well positioned to support behavior change, but little is known about provider perceptions and practices regarding this role. Purpose: The aim of this study was to assess and compare weight-management–related counseling perceptions and practices among Georgia's PCPs. Methods: In 2012–2013, 656 PCPs (265 pediatricians, 143 family practice physicians [FPs], and 248 nurse practitioners/physician assistants [NP/PAs]) completed a survey regarding weight-management–related practices at well-child visits before their voluntary participation in a free training on patient-centered counseling and child weight management. Data were analyzed in 2014. Likert scales were used to quantify responses from 1 (strongly disagree or never) to 5 (strongly agree or always). Responses of 4 and 5 were combined to denote agreement or usual practice. Chi-squared analyses tested for independent associations between pediatricians and others. Statistical significance was determined using two-sided tests and p value <0.05. Results: The majority of PCPs assessed fruit and vegetable intake (83%) and physical activity (78%), but pediatricians were more likely than FPs and NP/PAs to assess beverage intake (96% vs. 82–87%; p≤0.002) and screen time (86% vs. 74–75%; p≤0.003). Pediatricians were also more likely to counsel patients on lifestyle changes (88% vs. 71%; p<0.001) and to track progress (50% vs. 35–39%; p<0.05). Though all PCPs agreed that goal setting is an effective motivator (88%) and that behavior change increases with provider encouragement (85%), fewer were confident in their ability to counsel (72%).
Conclusions: Our results show that many PCPs in Georgia, particularly pediatricians, have incorporated weight management counseling into their practice; however, important opportunities remain to strengthen these efforts by targeting known high-risk behaviors. PMID:25585234
NASA Astrophysics Data System (ADS)
Gnanvo, Kondo; Bai, Xinzhan; Gu, Chao; Liyanage, Nilanga; Nelyubin, Vladimir; Zhao, Yuxiang
2016-02-01
A large-area and light-weight gas electron multiplier (GEM) detector was built at the University of Virginia as a prototype for the detector R&D program of the future Electron Ion Collider. The prototype has a trapezoidal geometry designed as a generic sector module in a disk layer configuration of a forward tracker in collider detectors. It is based on light-weight material and narrow support frames in order to minimize multiple scattering and dead-to-sensitive area ratio. The chamber has a novel type of two dimensional (2D) stereo-angle readout board with U-V strips that provides (r,φ) position information in the cylindrical coordinate system of a collider environment. The prototype was tested at the Fermilab Test Beam Facility in October 2013 and the analysis of the test beam data demonstrates an excellent response uniformity of the large area chamber with an efficiency higher than 95%. An angular resolution of 60 μrad in the azimuthal direction and a position resolution better than 550 μm in the radial direction were achieved with the U-V strip readout board. The results are discussed in this paper.
States' Average College Tuition.
ERIC Educational Resources Information Center
Eglin, Joseph J., Jr.; And Others
This report presents statistical data on trends in tuition costs from 1980-81 through 1995-96. The average tuition for in-state undergraduate students of 4-year public colleges and universities for academic year 1995-96 was approximately 8.9 percent of median household income. This figure was obtained by dividing the students' average annual…
Hmelnitsky, I; Nettheim, N
1987-06-01
Functional anatomy and physiology have naturally attended mainly to those functions which occur most commonly in everyday life. Piano playing is a more specialized area, where functions arise which have so far been neglected in medical science. These functions are here described by a pianist (IH) in the hope that medical researchers will respond to fill the gaps. The importance of this lies not only in the understanding of skilled manipulative activity but also in the avoidance of overuse syndrome (OUS) or repetitive strain injury (RSI). PMID:3614013
How to Address Measurement Noise in Bayesian Model Averaging
NASA Astrophysics Data System (ADS)
Schöniger, A.; Wöhling, T.; Nowak, W.
2014-12-01
When confronted with the challenge of selecting one out of several competing conceptual models for a specific modeling task, Bayesian model averaging is a rigorous choice. It ranks the plausibility of models based on Bayes' theorem, which yields an optimal trade-off between performance and complexity. With the resulting posterior model probabilities, their individual predictions are combined into a robust weighted average and the overall predictive uncertainty (including conceptual uncertainty) can be quantified. This rigorous framework does not, however, yet explicitly consider measurement noise in the calibration data set. This is a major drawback, because model weights may be unstable due to the uncertainty in noisy data, which can compromise the reliability of model ranking. We present a new extension to the Bayesian model averaging framework that explicitly accounts for measurement noise as a source of uncertainty for the weights. This enables modelers to assess the reliability of model ranking for a specific application and a given calibration data set. Also, the impact of measurement noise on the overall prediction uncertainty can be determined. Technically, our extension is built within a Monte Carlo framework. We repeatedly perturb the observed data with random realizations of measurement error. Then, we determine the robustness of the resulting model weights against measurement noise. We quantify the variability of posterior model weights as weighting variance. We add this new variance term to the overall prediction uncertainty analysis within the Bayesian model averaging framework to make uncertainty quantification more realistic and "complete". We illustrate the importance of our suggested extension with an application to soil-plant model selection, based on studies by Wöhling et al. (2013, 2014).
Results confirm that noise in leaf area index or evaporation rate observations produces a significant amount of weighting uncertainty and compromises the reliability of model ranking. Without our suggested extension, this additional contribution to prediction uncertainty could not be detected and model ranking results would be misinterpreted. We therefore advise modelers to include our suggested upgrade in the Bayesian model averaging routine.
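The Monte Carlo perturbation procedure can be sketched as follows. This toy version weighs a linear against a constant model under a known Gaussian noise level; the models, data, and noise level are illustrative stand-ins for the soil-plant models and observations of the study.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 30)
y_obs = 1.0 + 2.0 * x + rng.normal(0, 0.1, x.size)   # "observed" data
sigma = 0.1                                          # assumed noise level

def model_weights(y):
    """Posterior model probabilities (equal priors) for a linear vs. a
    constant model, from Gaussian likelihoods of least-squares fits."""
    log_liks = []
    for deg in (1, 0):                   # two competing conceptual models
        resid = y - np.polyval(np.polyfit(x, y, deg), x)
        log_liks.append(-0.5 * np.sum(resid**2) / sigma**2)
    log_liks = np.array(log_liks)
    w = np.exp(log_liks - log_liks.max())
    return w / w.sum()

# Repeatedly perturb the observations with noise realizations and record
# the spread of the resulting model weights (the "weighting variance").
weights = np.array([model_weights(y_obs + rng.normal(0, sigma, y_obs.size))
                    for _ in range(200)])
weight_var = weights.var(axis=0)
```

A large `weight_var` for a given calibration data set would signal that the model ranking is not robust against measurement noise, which is the diagnostic the extension provides.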
NASA Astrophysics Data System (ADS)
Zavala, M.; Herndon, S. C.; Slott, R. S.; Dunlea, E. J.; Marr, L. C.; Shorter, J. H.; Zahniser, M.; Knighton, W. B.; Rogers, T. M.; Kolb, C. E.; Molina, L. T.; Molina, M. J.
2006-06-01
A mobile laboratory was used to measure on-road vehicle emission ratios during the MCMA-2003 field campaign held during the spring of 2003 in the Mexico City Metropolitan Area (MCMA). The measured emission ratios represent a sample of emissions of in-use vehicles under real world driving conditions for the MCMA. From the relative amounts of NOx and selected VOC's sampled, the results indicate that the technique is capable of differentiating among vehicle categories and fuel type in real world driving conditions. Emission ratios for NOx, NOy, NH3, H2CO, CH3CHO, and other selected volatile organic compounds (VOCs) are presented for chase sampled vehicles and fleet averaged emissions. Results indicate that colectivos, particularly CNG-powered colectivos, are potentially significant contributors of NOx and aldehydes in the MCMA. Similarly, ratios of selected VOCs and NOy showed a strong dependence on traffic mode. These results are compared with the vehicle emissions inventory for the MCMA, other vehicle emissions measurements in the MCMA, and measurements of on-road emissions in US cities. Our estimates for motor vehicle emissions of benzene, toluene, formaldehyde, and acetaldehyde in the MCMA indicate these species are present in concentrations higher than previously reported. The high motor vehicle aldehyde emissions may have an impact on the photochemistry of urban areas.
Kim, Sun Hye; Hwang, Ji-Yun; Kim, Mi Kyung; Chung, Hye Won; Nguyet, Tran Thi Phuc
2010-01-01
The objectives of this study were to examine the association between dietary factors and underweight and overweight among adult Vietnamese living in rural areas of Vietnam. A cross-sectional study of 497 Vietnamese aged 19 to 60 years (204 males, 293 females) was conducted in rural areas of Haiphong, Vietnam. The subjects were classified as underweight, normal weight, and overweight based on BMI. General characteristics, anthropometric parameters, blood profiles, and eating habits were obtained and dietary intake was assessed using 24-hour recalls for 2 consecutive days. A high prevalence of both underweight (BMI < 18.5 kg/m2) and overweight (BMI ≥ 23 kg/m2) individuals was observed (14.2% and 21.6% for males and 18.9% and 20.6% for females, respectively). For both genders, the overweight group was older than the under- and normal weight groups (P = 0.0118 for males and P = 0.0002 for females). In female subjects, the overweight group consumed significantly less cereals (P = 0.0033), energy (P = 0.0046), protein (P = 0.0222), and carbohydrate (P = 0.0017) and more fruits (P = 0.0026) than the underweight group; however, no such differences existed in males. The overweight subjects overate more frequently (P = 0.0295) and consumed fish (P = 0.0096) and fruits (P = 0.0083) more often. The prevalence of both underweight and overweight individuals poses a serious public health problem in rural areas of Vietnam, and overweight was related to overeating and high fish and fruit consumption. These findings may provide basic data for policymakers and dieticians in order to develop future nutrition and health programs for rural populations in Vietnam. PMID:20607070
42 CFR 423.279 - National average monthly bid amount.
Code of Federal Regulations, 2010 CFR
2010-10-01
... average bid a weight based on prior enrollment (new MA-PD plans are assigned zero weight). (c) Geographic... among PDP regions are negligible. (3) CMS applies any geographic adjustment in a budget neutral...
Yu, Xianyu; Wang, Yi; Niu, Ruiqing; Hu, Youjian
2016-01-01
In this study, a novel coupling model for landslide susceptibility mapping is presented. In practice, environmental factors may have different impacts at a local scale in study areas. To provide better predictions, a geographically weighted regression (GWR) technique is firstly used in our method to segment study areas into a series of prediction regions with appropriate sizes. Meanwhile, a support vector machine (SVM) classifier is exploited in each prediction region for landslide susceptibility mapping. To further improve the prediction performance, the particle swarm optimization (PSO) algorithm is used in the prediction regions to obtain optimal parameters for the SVM classifier. To evaluate the prediction performance of our model, several SVM-based prediction models are utilized for comparison on a study area of the Wanzhou district in the Three Gorges Reservoir. Experimental results, based on three objective quantitative measures and visual qualitative evaluation, indicate that our model can achieve better prediction accuracies and is more effective for landslide susceptibility mapping. For instance, our model can achieve an overall prediction accuracy of 91.10%, which is 7.8%-19.1% higher than the traditional SVM-based models. In addition, the obtained landslide susceptibility map by our model can demonstrate an intensive correlation between the classified very high-susceptibility zone and the previously investigated landslides. PMID:27187430
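The PSO step (searching a parameter space for values that minimize an objective such as cross-validated SVM error) can be sketched as follows. The quadratic objective below is a toy stand-in for the actual SVM tuning criterion, and the inertia/acceleration constants are common defaults, not values from the study.

```python
import numpy as np

def pso(objective, bounds, n_particles=20, iters=100, seed=0):
    """Minimal particle swarm optimization: particles move under inertia
    plus attraction to their personal best and the global best."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pos = rng.uniform(lo, hi, (n_particles, len(lo)))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, len(lo)))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Toy stand-in for an SVM cross-validation error surface; minimum at (3, -2).
f = lambda p: (p[0] - 3.0) ** 2 + (p[1] + 2.0) ** 2
best, val = pso(f, (np.array([-5.0, -5.0]), np.array([5.0, 5.0])))
```

In the coupling model, `objective` would evaluate an SVM classifier's validation error for candidate hyperparameters within each prediction region.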
NASA Technical Reports Server (NTRS)
Rogers, Thomas W.
1988-01-01
Digital electronic filtering system produces series of moving-average samples of fluctuating signal in manner resulting in removal of undesired periodic signal component of known frequency. Filter designed to pass steady or slowly varying components of fluctuating pressure, flow, pump speed, and pump torque in slurry-pumping system. Concept useful for monitoring or control in variety of applications including machinery, power supplies, and scientific instrumentation.
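The central idea (a moving average whose window spans exactly one period of the known interfering frequency nulls that component and its harmonics while passing slow variations) can be sketched as follows. The sample rate, interfering frequency, and signal shapes are illustrative assumptions, not parameters of the slurry-pumping system.

```python
import numpy as np

def period_average(signal, samples_per_period):
    """Moving average whose window spans exactly one period of the
    unwanted component, cancelling that component (and its harmonics)
    while passing slowly varying trends."""
    kernel = np.ones(samples_per_period) / samples_per_period
    return np.convolve(signal, kernel, mode='valid')

fs, f0 = 1000, 50                   # sample rate (Hz), interfering frequency (Hz)
t = np.arange(0, 1, 1 / fs)
slow = 2.0 + 0.1 * t                # slowly varying component of interest
ripple = 0.8 * np.sin(2 * np.pi * f0 * t)
filtered = period_average(slow + ripple, fs // f0)
```

Because the 20-sample window covers one full 50 Hz cycle, the sinusoid sums to zero inside every window and only the slow trend survives.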
NASA Astrophysics Data System (ADS)
Zavala, M.; Herndon, S. C.; Slott, R. S.; Dunlea, E. J.; Marr, L. C.; Shorter, J. H.; Zahniser, M.; Knighton, W. B.; Rogers, T. M.; Kolb, C. E.; Molina, L. T.; Molina, M. J.
2006-11-01
A mobile laboratory was used to measure on-road vehicle emission ratios during the MCMA-2003 field campaign held during the spring of 2003 in the Mexico City Metropolitan Area (MCMA). The measured emission ratios represent a sample of emissions of in-use vehicles under real world driving conditions for the MCMA. From the relative amounts of NOx and selected VOC's sampled, the results indicate that the technique is capable of differentiating among vehicle categories and fuel type in real world driving conditions. Emission ratios for NOx, NOy, NH3, H2CO, CH3CHO, and other selected volatile organic compounds (VOCs) are presented for chase sampled vehicles in the form of frequency distributions as well as estimates for the fleet averaged emissions. Our measurements of emission ratios for both CNG and gasoline powered "colectivos" (public transportation buses that are intensively used in the MCMA) indicate that, on a mole-per-mole basis, they have significantly larger NOx and aldehyde emission ratios than other sampled vehicles in the MCMA. Similarly, ratios of selected VOCs and NOy showed a strong dependence on traffic mode. These results are compared with the vehicle emissions inventory for the MCMA, other vehicle emissions measurements in the MCMA, and measurements of on-road emissions in U.S. cities. We estimate NOx emissions as 100 600±29 200 metric tons per year for light duty gasoline vehicles in the MCMA for 2003. According to these results, annual NOx emissions estimated in the emissions inventory for this category are within the range of our estimated NOx annual emissions. Our estimates for motor vehicle emissions of benzene, toluene, formaldehyde, and acetaldehyde in the MCMA indicate these species are present in concentrations higher than previously reported. The high motor vehicle aldehyde emissions may have an impact on the photochemistry of urban areas.
A wire weight is lowered to the water surface to measure stage at a site. Levels are run from known benchmarks to the wire weight's elevation to ensure correct readings. This wire weight is located along the Missouri River in Bismarck, ND.
Srivatsav, Siddhart; Webster, Jacquelyn; Webster, Michael
2015-01-01
The average color in a scene is a potentially important cue to the illuminant and thus for color constancy, but it remains unknown how well and in what ways observers can estimate the mean chromaticity. We examined this by measuring the variability in "achromatic" settings for stimuli composed of different distributions of colors. The displays consisted of a 15 by 15 palette of colors shown on a gray background on a monitor, with each chip subtending 0.5 deg. Individual colors were randomly sampled from varying contrast ranges along the luminance, S and LM cardinal axes. Observers were instructed to adjust the chromaticity of the palette so that the mean was gray, with variability estimated from 20 or more repeated settings. This variability increased progressively with increasing contrast in the distributions, with large increases for chromatic contrast but also weak effects for added luminance contrast. Signals along the cardinal axes are relatively independent in many detection and discrimination tasks, but showed strong interference in the white estimates. Specifically, adding S contrast increased variability in the white settings along both the S and LM axes, and vice versa. This "cross-masking" and the effects of chromatic variance in general may occur because observers cannot explicitly perceive or represent the mean of a set of qualitatively different hues (e.g. that red and green hues average to gray), and thus may infer the mean only indirectly (e.g. from the relative saturation of different hues). Meeting abstract presented at VSS 2015. PMID:26326088
Model averaging in linkage analysis.
Matthysse, Steven
2006-06-01
Methods for genetic linkage analysis are traditionally divided into "model-dependent" and "model-independent," but there may be a useful place for an intermediate class, in which a broad range of possible models is considered as a parametric family. It is possible to average over model space with an empirical Bayes prior that weights models according to their goodness of fit to epidemiologic data, such as the frequency of the disease in the population and in first-degree relatives (and correlations with other traits in the pleiotropic case). For averaging over high-dimensional spaces, Markov chain Monte Carlo (MCMC) has great appeal, but it has a near-fatal flaw: it is not possible, in most cases, to provide rigorous sufficient conditions to permit the user safely to conclude that the chain has converged. A way of overcoming the convergence problem, if not of solving it, rests on a simple application of the principle of detailed balance. If the starting point of the chain has the equilibrium distribution, so will every subsequent point. The first point is chosen according to the target distribution by rejection sampling, and subsequent points by an MCMC process that has the target distribution as its equilibrium distribution. Model averaging with an empirical Bayes prior requires rapid estimation of likelihoods at many points in parameter space. Symbolic polynomials are constructed before the random walk over parameter space begins, to make the actual likelihood computations at each step of the random walk very fast. Power analysis in an illustrative case is described. (c) 2006 Wiley-Liss, Inc. PMID:16652369
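The equilibrium-start idea can be sketched with a toy target: draw the first state exactly from the target distribution by rejection sampling, then run a Metropolis chain whose equilibrium distribution is that target; by detailed balance, every subsequent state is also target-distributed. The Beta(2,2) target, uniform envelope, and step size below are illustrative, not the likelihood surfaces of the linkage application.

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy target density: Beta(2, 2) on [0, 1], zero elsewhere.
target = lambda x: 6.0 * x * (1.0 - x) if 0.0 <= x <= 1.0 else 0.0

def rejection_sample():
    """Draw the chain's starting point exactly from the target
    (envelope: M * Uniform(0, 1) with M = max density = 1.5)."""
    M = 1.5
    while True:
        x, u = rng.random(), rng.random()
        if u * M <= target(x):
            return x

def metropolis_chain(n, step=0.2):
    """Random-walk Metropolis with the target as equilibrium; since the
    start already has the target distribution, so does every state."""
    x = rejection_sample()
    chain = [x]
    for _ in range(n - 1):
        prop = x + rng.normal(0, step)
        if rng.random() < target(prop) / target(x):   # accept/reject
            x = prop
        chain.append(x)
    return np.array(chain)

chain = metropolis_chain(20000)
```

The chain's moments should match the target's (mean 0.5, variance 0.05) from the very first iteration, with no burn-in required.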
Gong, Lunli; Zhou, Xiao; Wu, Yaohao; Zhang, Yun; Wang, Chen; Zhou, Heng; Guo, Fangfang; Cui, Lei
2014-02-01
The present study was designed to investigate the possibility of repairing full-thickness defects in the weight-bearing area of porcine articular cartilage (AC) using chondrogenically differentiated autologous adipose-derived stem cells (ASCs), with follow-up at 3 and 6 months, extending our previous study on the non-weight-bearing area. The isolated ASCs were seeded onto polyglycolic acid/polylactic acid (PGA/PLA) scaffolds with chondrogenic induction in vitro for 2 weeks as the experimental group prior to implantation in porcine AC defects (8 mm in diameter, deep to subchondral bone), with PGA/PLA only as control. At both 3 and 6 months after implantation, the neo-cartilage in the experimental group integrated well histologically with the neighboring normal cartilage and subchondral bone, whereas only fibrous tissue formed in the control group. Immunohistochemical and toluidine blue staining confirmed a distribution of COL II and glycosaminoglycan in the regenerated cartilage similar to that of the native tissue. A vivid remodeling process over the repair period was also witnessed in the neo-cartilage, as the compressive modulus increased significantly from 70% of that of normal cartilage at 3 months to nearly 90% at 6 months, similar to our former research. Nevertheless, differences between the regenerated cartilage and the native tissue could still be detected. Meanwhile, the exact mechanism involved in chondrogenic differentiation of ASCs seeded on PGA/PLA is still unknown. Therefore, proteome analysis was employed, identifying 43 differentially expressed proteins from 20 chosen two-dimensional spots, which help further our research on some committed factors. In conclusion, the proteomic comparison provided a thorough understanding of the mechanisms implicated in ASC differentiation toward chondrocytes, and the present study serves as a complement to the former one on the non-weight-bearing area. PMID:24044689
Americans' Average Radiation Exposure
NA
2000-08-11
We live with radiation every day. We receive radiation exposures from cosmic rays from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and from our own bodies. We also are exposed to man-made sources of radiation, including medical and dental treatments, television sets and emissions from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is the high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body.
Temperature averaging thermal probe
NASA Technical Reports Server (NTRS)
Kalil, L. F.; Reinhardt, V. (Inventor)
1985-01-01
A thermal probe to average temperature fluctuations over a prolonged period was formed with a temperature sensor embedded inside a solid object of a thermally conducting material. The solid object is held in a position equidistantly spaced apart from the interior surfaces of a closed housing by a mount made of a thermally insulating material. The housing is sealed to trap a vacuum or mass of air inside and thereby prevent transfer of heat directly between the environment outside of the housing and the solid object. Electrical leads couple the temperature sensor with a connector on the outside of the housing. Other solid objects of different sizes and materials may be substituted for the cylindrically-shaped object to vary the time constant of the probe.
Assessing Impact of Weight on Quality of Life.
Kolotkin, R L; Head, S; Hamilton, M; Tse, C K
1995-01-01
This paper is a preliminary report on the development of a new instrument, the Impact of Weight on Quality of Life (IWQOL) questionnaire, that assesses the effects of weight on various areas of life. We conducted two studies utilizing subjects in treatment for obesity at the Duke University Diet and Fitness Center. The first study describes item development, assesses reliability, and compares pre- and post-treatment scores on the IWQOL. In the second study we examined the effects of body mass index (BMI), gender, and age on subjects' perceptions of the impact of weight on quality of life. Results indicate adequate psychometric properties, with test-retest reliabilities averaging .75 for single items and .89 for scales. Scale internal consistency averaged .87. Post-treatment scores differed significantly from pre-treatment scores on all scales, indicating that treatment produced positive changes in the impact of weight on quality of life. The results of the second study indicate that the impact of weight generally worsened as patients' size increased. However, for women there was no association between BMI and the impact of weight on Self-Esteem and Sexual Life; even at the lowest BMI tertile studied, women reported that weight had a substantial impact in these areas. There were also significant gender differences, with women showing greater impact of weight on Self-Esteem and Sexual Life compared with men. The impact of age was somewhat surprising, with some areas showing positive changes and others showing no change. PMID:7712359
Córdova-Palomera, Aldo; Fatjó-Vilas, Mar; Falcón, Carles; Bargalló, Nuria; Alemany, Silvia; Crespo-Facorro, Benedicto; Nenadic, Igor; Fañanás, Lourdes
2015-01-01
Background: Previous research suggests that low birth weight (BW) induces reduced brain cortical surface area (SA), which persists until at least early adulthood. Moreover, low BW has been linked to psychiatric disorders such as depression and psychological distress, and to altered neurocognitive profiles. Aims: We present novel findings obtained by analysing high-resolution structural MRI scans of 48 twins; specifically, we aimed: i) to test the BW-SA association in a middle-aged adult sample; and ii) to assess whether either depression/anxiety disorders or intelligence quotient (IQ) influence the BW-SA link, using a monozygotic (MZ) twin design to separate environmental and genetic effects. Results: Both lower BW and decreased IQ were associated with smaller total and regional cortical SA in adulthood. Within a twin pair, lower BW was related to smaller total cortical and regional SA. In contrast, MZ twin differences in SA were not related to differences in either IQ or depression/anxiety disorders. Conclusion: The present study supports findings indicating that i) BW has a long-lasting effect on cortical SA, where some familial and environmental influences alter both foetal growth and brain morphology; ii) uniquely environmental factors affecting BW also alter SA; iii) higher IQ correlates with larger SA; and iv) these effects are not modified by internalizing psychopathology. PMID:26086820
42 CFR 423.279 - National average monthly bid amount.
Code of Federal Regulations, 2013 CFR
2013-10-01
... each MA-PD plan described in section 1851(a)(2)(A)(i) of the Act. The calculation does not include bids... equal weighting to PDP sponsors (other than fallback entities) and assigns MA-PD plans included in the national average bid a weight based on prior enrollment (new MA-PD plans are assigned zero weight)....
42 CFR 423.279 - National average monthly bid amount.
Code of Federal Regulations, 2012 CFR
2012-10-01
... each MA-PD plan described in section 1851(a)(2)(A)(i) of the Act. The calculation does not include bids... equal weighting to PDP sponsors (other than fallback entities) and assigns MA-PD plans included in the national average bid a weight based on prior enrollment (new MA-PD plans are assigned zero weight)....
42 CFR 423.279 - National average monthly bid amount.
Code of Federal Regulations, 2014 CFR
2014-10-01
... each MA-PD plan described in section 1851(a)(2)(A)(i) of the Act. The calculation does not include bids... equal weighting to PDP sponsors (other than fallback entities) and assigns MA-PD plans included in the national average bid a weight based on prior enrollment (new MA-PD plans are assigned zero weight)....
Wang, Tingting; Li, Wenhua; Wu, Xiangru; Yin, Bing; Chu, Caiting; Ding, Ming; Cui, Yanfen
2016-01-01
Objective: To assess the added value of diffusion-weighted magnetic resonance imaging (DWI) with apparent diffusion coefficient (ADC) values, compared to conventional MRI, for characterizing tubo-ovarian abscesses (TOA) mimicking ovarian malignancy. Materials and Methods: Patients with TOA (or ovarian abscess alone; n = 34) or ovarian malignancy (n = 35) who underwent DWI and MRI were retrospectively reviewed. The signal intensity of the cystic and solid components of TOAs and ovarian malignant tumors on DWI and the corresponding ADC values were evaluated, and clinical characteristics, morphological features, and MRI findings were comparatively analyzed. Receiver operating characteristic (ROC) curve analysis based on logistic regression was applied to identify imaging characteristics that differed between the two patient groups and to assess the predictive value of combined diagnosis with area under the curve (AUC) analysis. Results: The mean ADC value of the cystic component in TOA was significantly lower than in malignant tumors (1.04 ± 0.41 × 10−3 mm2/s vs. 2.42 ± 0.38 × 10−3 mm2/s; p < 0.001). The mean ADC value of the enhanced solid component in 26 TOAs was 1.43 ± 0.16 × 10−3 mm2/s, and 46.2% (12 TOAs; pseudotumor areas) showed significantly higher signal intensity on DW-MRI than ovarian malignancy (mean ADC value 1.44 ± 0.20 × 10−3 mm2/s vs. 1.18 ± 0.36 × 10−3 mm2/s; p = 0.043). The combined diagnosis using ADC value and dilated tubal structure achieved the best AUC of 0.996. Sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy of MRI vs. DWI with ADC values for predicting TOA were 47.1%, 91.4%, 84.2%, 64%, and 69.6% vs. 100%, 97.1%, 97.1%, 100%, and 98.6%, respectively. Conclusions: DW-MRI is superior to conventional MRI in the assessment of TOA mimicking ovarian malignancy, and ADC values aid in discriminating the pseudotumor area of TOA from the solid portion of ovarian malignancy. PMID:26894926
NASA Astrophysics Data System (ADS)
Hermance, J. F.; Jacob, R. W.; Bradley, B. A.; Mustard, J. F.
2005-12-01
In studying vegetation patterns remotely, the objective is to draw inferences on the development of specific or general land surface phenology (LSP) as a function of space and time by determining the behavior of a parameter (in our case NDVI), when the parameter estimate may be biased by noise, data dropouts and obfuscations from atmospheric and other effects. We describe the underpinning concepts of a procedure for a robust interpolation of NDVI data that does not have the limitations of other mathematical approaches which require orthonormal basis functions (e.g. Fourier analysis). In this approach, data need not be uniformly sampled in time, nor do we expect noise to be Gaussian-distributed. Our approach is intuitive and straightforward, and is applied here to the refined modeling of LSP using 7 years of weekly and biweekly AVHRR NDVI data for a 150 x 150 km study area in central Nevada. This site is a microcosm of a broad range of vegetation classes, from irrigated agriculture with annual NDVI values of up to 0.7 to playas and alkali salt flats with annual NDVI values of only 0.07. Our procedure involves a form of parameter estimation employing Bayesian statistics. In utilitarian terms, the latter procedure is a method of statistical analysis (in our case, robustified, weighted least-squares recursive curve-fitting) that incorporates a variety of prior knowledge when forming current estimates of a particular process or parameter. In addition to the standard Bayesian approach, we account for outliers due to data dropouts or obfuscations because of clouds and snow cover. An initial "starting model" for the average annual cycle and long term (7 year) trend is determined by jointly fitting a common set of complex annual harmonics and a low order polynomial to an entire multi-year time series in one step. 
This is not a formal Fourier series in the conventional sense, but rather a set of 4 cosine and 4 sine coefficients with fundamental periods of 12, 6, 3 and 1.5 months. Instabilities during large time gaps in the data are suppressed by introducing an expectation of minimum roughness on the fitted time series. Our next significant computational step involves a constrained least squares fit to the observed NDVI data. Residuals between the observed NDVI value and the predicted starting model are computed, and the inverse of these residuals provide the weights for a weighted least squares analysis whereby a set of annual eighth-order splines are fit to the 7 years of NDVI data. Although a series of independent eighth-order annual functionals over a period of 7 years is intrinsically unstable when there are significant data gaps, the splined versions for this specific application are quite stable due to explicit continuity conditions on the values and derivatives of the functionals across contiguous years, as well as a priori constraints on the predicted values vis-a-vis the assumed initial model. Our procedure allows us to robustly interpolate original unequally-spaced NDVI data with a new time series having the most appropriate, user-defined time base. We apply this approach to the temporal behavior of vegetation in our 150 x 150 km study area. Such a small area, being so rich in vegetation diversity, is particularly useful to view in map form and by animated annual and multi-year time sequences, since the interrelation between phenology, topography and specific usage patterns becomes clear.
Dissociating Averageness and Attractiveness: Attractive Faces Are Not Always Average
ERIC Educational Resources Information Center
DeBruine, Lisa M.; Jones, Benedict C.; Unger, Layla; Little, Anthony C.; Feinberg, David R.
2007-01-01
Although the averageness hypothesis of facial attractiveness proposes that the attractiveness of faces is mostly a consequence of their averageness, 1 study has shown that caricaturing highly attractive faces makes them mathematically less average but more attractive. Here the authors systematically test the averageness hypothesis in 5 experiments…
Combining remotely sensed and other measurements for hydrologic areal averages
NASA Technical Reports Server (NTRS)
Johnson, E. R.; Peck, E. L.; Keefer, T. N.
1982-01-01
A method is described for combining measurements of hydrologic variables of various sampling geometries and measurement accuracies to produce an estimated mean areal value over a watershed and a measure of the accuracy of the mean areal value. The method provides a means to integrate measurements from conventional hydrological networks and remote sensing. The resulting areal averages can be used to enhance a wide variety of hydrological applications including basin modeling. The correlation area method assigns weights to each available measurement (point, line, or areal) based on the area of the basin most accurately represented by the measurement. The statistical characteristics of the accuracy of the various measurement technologies and of the random fields of the hydrologic variables used in the study (water equivalent of the snow cover and soil moisture) required to implement the method are discussed.
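The correlation area method's formulas are not given in this summary, but its core idea, weighting each measurement (point, line, or areal) by the basin area it best represents and by its accuracy, can be sketched as an area- and inverse-variance-weighted mean. The function name and numbers below are purely illustrative, not the paper's actual method:

```python
import numpy as np

def weighted_areal_average(values, variances, areas):
    """Combine measurements of differing geometry and accuracy into one
    basin-mean estimate plus a variance for that estimate.

    A measurement's weight grows with the basin area it represents and
    shrinks with its error variance (less accurate -> less weight).
    """
    values = np.asarray(values, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w = np.asarray(areas, dtype=float) / variances
    w = w / w.sum()                          # normalize weights to sum to 1
    mean = float(np.dot(w, values))          # area-/accuracy-weighted mean
    var = float(np.dot(w**2, variances))     # variance, assuming independent errors
    return mean, var

# three snow-water-equivalent measurements (cm) over one basin (made-up numbers)
m, v = weighted_areal_average(values=[12.0, 15.0, 10.0],
                              variances=[1.0, 4.0, 1.0],
                              areas=[30.0, 50.0, 20.0])
```

The returned variance drops as more independent measurements are added, which is the sense in which combining conventional and remotely sensed data improves the areal estimate.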
... obese. Achieving a healthy weight can help you control your cholesterol, blood pressure and blood sugar. It ... use more calories than you eat. A weight-control strategy might include Choosing low-fat, low-calorie ...
Flexibility of spatial averaging in visual perception
Lombrozo, Tania; Judson, Jeff; MacLeod, Donald I. A.
2005-01-01
The classical receptive field (RF) concept—the idea that a visual neuron responds to fixed parts and properties of a stimulus—has been challenged by a series of recent physiological results. Here, we extend these findings to human vision, demonstrating that the extent of spatial averaging in contrast perception is also flexible, depending strongly on stimulus contrast and uniformity. At low contrast, spatial averaging is greatest (about 11 min of arc) within uniform regions such as edges, as expected if the relevant neurons have orientation-selective RFs. At high contrast, spatial averaging is minimal. These results can be understood if the visual system is balancing a trade-off between noise reduction, which favours large areas of averaging, and detail preservation, which favours minimal averaging. Two distinct populations of neurons with hard-wired RFs could account for our results, as could the more intriguing possibility of dynamic, contrast-dependent RFs. PMID:15870034
Model Averaging Method for Supersaturated Experimental Design
NASA Astrophysics Data System (ADS)
Salaki, Deiby T.; Kurnia, Anang; Sartono, Bagus
2016-01-01
In this paper, a new modified model averaging method is proposed. Candidate models were constructed by distinguishing the covariates into focus variables and auxiliary variables, and the weights were selected using the Mallows criterion. The illustration shows that this model averaging method can be considered a new alternative for supersaturated experimental designs, a typical form of high-dimensional data. A supersaturated factorial design is an experimental series in which the number of factors exceeds the number of runs, so its size is not sufficient to estimate all the main effects. By using the model averaging method, the estimation or prediction power is significantly enhanced. In our illustration, the main factors are treated as focus variables, to give them more attention, whereas the lesser factors are treated as auxiliary variables, in line with the hierarchical ordering principle in experimental research. A limited empirical study shows that this method produces good predictions.
... is more than 8.8 pounds. A low birth weight baby can be born too small, too early (premature), or both. This can happen for many different reasons. They include health problems in the mother, genetic factors, problems ... by the mother. Some low birth weight babies may be more at risk for ...
NASA Technical Reports Server (NTRS)
Moore, R. D.; Urasek, D. C.; Kovich, G.
1973-01-01
The overall and blade-element performances are presented over the stable flow operating range from 50 to 100 percent of design speed. Stage peak efficiency of 0.834 was obtained at a weight flow of 26.4 kg/sec (58.3 lb/sec) and a pressure ratio of 1.581. The stall margin for the stage was 7.5 percent based on weight flow and pressure ratio at stall and peak efficiency conditions. The rotor minimum losses were approximately equal to design except in the blade vibration damper region. Stator minimum losses were less than design except in the tip and damper regions.
NASA Technical Reports Server (NTRS)
Chen, Fei; Yates, David; LeMone, Margaret
2001-01-01
To understand the effects of land-surface heterogeneity and the interactions between the land-surface and the planetary boundary layer at different scales, we develop a multiscale data set. This data set, based on the Cooperative Atmosphere-Surface Exchange Study (CASES97) observations, includes atmospheric, surface, and sub-surface observations obtained from a dense observation network covering a large region on the order of 100 km. We use this data set to drive three land-surface models (LSMs) to generate multi-scale (with three resolutions of 1, 5, and 10 kilometers) gridded surface heat flux maps for the CASES area. Upon validating these flux maps with measurements from surface station and aircraft, we utilize them to investigate several approaches for estimating the area-integrated surface heat flux for the CASES97 domain of 71x74 square kilometers, which is crucial for land surface model development/validation and area water and energy budget studies. This research is aimed at understanding the relative contribution of random turbulence versus organized mesoscale circulations to the area-integrated surface flux at the scale of 100 kilometers, and identifying the most important effective parameters for characterizing the subgrid-scale variability for large-scale atmosphere-hydrology models.
Chen, Chieh-Li; Ishikawa, Hiroshi; Wollstein, Gadi; Bilonick, Richard A.; Kagemann, Larry; Schuman, Joel S.
2016-01-01
Purpose: To develop a novel image enhancement method so that nonframe-averaged optical coherence tomography (OCT) images become comparable to active eye-tracking frame-averaged OCT images. Methods: Twenty-one eyes of 21 healthy volunteers were scanned with a noneye-tracking nonframe-averaged OCT device and an active eye-tracking frame-averaged OCT device. Virtual averaging was applied to the nonframe-averaged images with voxel resampling and added amplitude deviation over 15 repetitions. Signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and the distance between the end of the visible nasal retinal nerve fiber layer (RNFL) and the foveola were assessed to evaluate the image enhancement effect and retinal layer visibility. Retinal thicknesses before and after processing were also measured. Results: All virtually averaged nonframe-averaged images showed notable improvement and a clear resemblance to active eye-tracking frame-averaged images. SNR and CNR were significantly improved (SNR: 30.5 vs. 47.6 dB, CNR: 4.4 vs. 6.4 dB, original versus processed, P < 0.0001, paired t-test). The distance between the end of the visible nasal RNFL and the foveola was significantly different before (681.4 vs. 446.5 μm, Cirrus versus Spectralis, P < 0.0001) but not after processing (442.9 vs. 446.5 μm, P = 0.76). Sectoral macular total retinal and circumpapillary RNFL thicknesses showed systematic differences between Cirrus and Spectralis that became nonsignificant after processing. Conclusion: The virtual averaging method successfully improved nontracking nonframe-averaged OCT image quality and made the images comparable to active eye-tracking frame-averaged OCT images. Translational Relevance: Virtual averaging may enable detailed retinal structure studies on images acquired using a mixture of nonframe-averaged and frame-averaged OCT devices, without concern about systematic differences in either qualitative or quantitative aspects. PMID:26835180
Weight and weddings. Engaged men's body weight ideals and wedding weight management behaviors.
Klos, Lori A; Sobal, Jeffery
2013-01-01
Most adults marry at some point in life, and many invest substantial resources in a wedding ceremony. Previous research reports that brides often strive towards culturally-bound appearance norms and engage in weight management behaviors in preparation for their wedding. However, little is known about wedding weight ideals and behaviors among engaged men. A cross-sectional survey of 163 engaged men asked them to complete a questionnaire about their current height and weight, ideal wedding body weight, wedding weight importance, weight management behaviors, formality of their upcoming wedding ceremony, and demographics. Results indicated that the discrepancy between men's current weight and reported ideal wedding weight averaged 9.61 lb. Most men considered being at a certain weight at their wedding to be somewhat important. About 39% were attempting to lose weight for their wedding, and 37% were not trying to change their weight. Attempting weight loss was more frequent among men with higher BMIs, those planning more formal weddings, and those who considered being the right weight at their wedding important. Overall, these findings suggest that weight-related appearance norms and weight loss behaviors are evident among engaged men. PMID:23063607
NASA Technical Reports Server (NTRS)
Howard, W. H.; Young, D. R.
1972-01-01
Device applies compressive force to bone to minimize loss of bone calcium during weightlessness or bedrest. Force is applied through weights, or hydraulic, pneumatic or electrically actuated devices. Device is lightweight and easy to maintain and operate.
Spacetime averaged null energy condition
Urban, Douglas; Olum, Ken D.
2010-06-15
The averaged null energy condition has known violations for quantum fields in curved space, even when one considers only achronal geodesics. Many such examples involve rapid variation in the stress-energy tensor in the vicinity of the geodesic under consideration, giving rise to the possibility that averaging in additional dimensions would yield a principle universally obeyed by quantum fields. However, after discussing various procedures for additional averaging, including integrating over all dimensions of the manifold, we give here a class of examples that violate any such averaged condition.
The Molecular Weight Distribution of Polymer Samples
ERIC Educational Resources Information Center
Horta, Arturo; Pastoriza, M. Alejandra
2007-01-01
Various methods for the determination of the molecular weight distribution (MWD) of different polymer samples are presented. The study shows that the molecular weight averages and distribution of a polymerization completely depend on the characteristics of the reaction itself.
Interpreting Bivariate Regression Coefficients: Going beyond the Average
ERIC Educational Resources Information Center
Halcoussis, Dennis; Phillips, G. Michael
2010-01-01
Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…
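The note's own computations are truncated in this record, but its central point, that each of these averages falls out of a basic regression framework, can be illustrated: an intercept-only OLS fit returns the arithmetic mean, the same fit on log-transformed data yields the geometric mean after exponentiating, and weighted least squares on a constant reduces to the weighted mean. The data below are made up for the sketch:

```python
import numpy as np

x = np.array([2.0, 8.0, 4.0])
w = np.array([1.0, 3.0, 1.0])                 # example observation weights

# Arithmetic mean: OLS regression of x on a constant (intercept-only model)
X = np.ones((len(x), 1))
beta, *_ = np.linalg.lstsq(X, x, rcond=None)
arith = float(beta[0])                        # equals x.mean()

# Weighted mean: weighted least squares on a constant reduces to this ratio
wmean = float(np.sum(w * x) / np.sum(w))

# Geometric mean: intercept-only regression on log(x), then exponentiate
geo = float(np.exp(np.log(x).mean()))

# Harmonic mean: reciprocal of the arithmetic mean of reciprocals
harm = float(1.0 / np.mean(1.0 / x))
```

The intercept of a least-squares fit on a constant is exactly the mean that minimizes squared deviations, which is why the regression framing recovers each average once the data (or weights) are transformed appropriately.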
42 CFR 423.279 - National average monthly bid amount.
Code of Federal Regulations, 2011 CFR
2011-10-01
... bid amounts for each prescription drug plan (not including fallbacks) and for each MA-PD plan...(h) of the Act. (b) Calculation of weighted average. (1) The national average monthly bid amount is a....258(c)(1) of this chapter) and the denominator equal to the total number of Part D...
40 CFR 63.503 - Emissions averaging provisions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Method 18 or Method 25A of 40 CFR part 60, appendix A. Mj=Molecular weight of organic HAP j, gram per... demonstrate compliance, the number of emission points allowed to be included in the emission average is... demonstrate compliance, the number of emission points allowed in the emissions average for those...
A visibility graph averaging aggregation operator
NASA Astrophysics Data System (ADS)
Chen, Shiyu; Hu, Yong; Mahadevan, Sankaran; Deng, Yong
2014-06-01
The problem of aggregation is of considerable importance in many disciplines. In this paper, a new type of operator, the visibility graph averaging (VGA) aggregation operator, is proposed. The operator is based on the visibility graph, which converts a time series into a graph; the weights are obtained according to the importance of each data point within the visibility graph. The VGA operator is then applied to the analysis of the TAIEX database to illustrate its practicality. Compared with classic aggregation operators, its advantage is that it not only aggregates the data but also preserves the time information, and the determination of the weights is more reasonable.
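A minimal sketch of the operator's two steps follows, assuming the standard natural-visibility criterion for building the graph; the abstract does not spell out the exact weight definition, so using node degree as the importance measure is an assumption here:

```python
import numpy as np

def visibility_graph(series):
    """Natural visibility graph: nodes i and j are linked if the straight
    line between (i, y_i) and (j, y_j) clears every intermediate bar."""
    n = len(series)
    adj = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(i + 1, n):
            visible = all(
                series[k] < series[j] + (series[i] - series[j]) * (j - k) / (j - i)
                for k in range(i + 1, j)
            )
            if visible:
                adj[i, j] = adj[j, i] = True
    return adj

def vga_aggregate(series):
    """Degree-weighted average: points 'visible' to more of the series
    (higher graph degree) receive larger aggregation weights."""
    adj = visibility_graph(series)
    deg = adj.sum(axis=1).astype(float)
    w = deg / deg.sum()
    return float(np.dot(w, series))
```

Because the weights come from the graph built on the ordered series, the aggregate depends on the temporal arrangement of the values, not just their multiset, which is the sense in which time information is conserved.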
Prediction of shelled shrimp weight by machine vision
Pan, Peng-min; Li, Jian-ping; Lv, Gu-lai; Yang, Hui; Zhu, Song-ming; Lou, Jian-zhong
2009-01-01
The weight of shelled shrimp is an important parameter for the grading process. Weight prediction of shelled shrimp from contour area alone is not accurate enough because it ignores shrimp thickness. In this paper, a multivariate prediction model containing area, perimeter, length, and width was established. A new calibration algorithm for extracting the length of shelled shrimp was proposed, comprising binary image thinning, branch recognition and elimination, and length reconstruction; the width was calculated during length extraction. The model was further validated with another set of images from 30 shelled shrimps. For comparison, an artificial neural network (ANN) was used for shrimp weight prediction. The ANN model gave better prediction accuracy (average relative error 2.67%) but took roughly ten times longer to compute than the weight-area-perimeter (WAP) model (average relative error 3.02%). We thus conclude that the WAP model is a better method for predicting the weight of shelled red shrimp. PMID:19650197
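The paper's actual coefficients and image-calibration pipeline are not reproduced here; structurally, though, a WAP-style model is a multivariate linear least-squares fit of weight on area, perimeter, length, and width, sketched below with entirely made-up morphometric data:

```python
import numpy as np

# Hypothetical per-shrimp features: area (mm^2), perimeter (mm), length (mm), width (mm)
X = np.array([
    [420.0,  95.0, 48.0, 11.0],
    [510.0, 104.0, 52.0, 12.5],
    [350.0,  88.0, 44.0, 10.0],
    [600.0, 112.0, 57.0, 13.5],
    [470.0,  99.0, 50.0, 11.8],
    [390.0,  92.0, 46.0, 10.6],
])
y = np.array([4.8, 5.7, 4.0, 6.7, 5.3, 4.4])   # measured weights (g), made up

# WAP-style multivariate model: weight ~ b0 + b1*area + b2*perim + b3*len + b4*width
A = np.column_stack([np.ones(len(X)), X])       # prepend intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

pred = A @ coef                                 # in-sample predictions
rel_err = np.abs(pred - y) / y                  # per-shrimp relative error
```

In practice the model would be fit on one image set and validated on a held-out set, as the paper does with its 30 validation shrimps; the relative error computed here mirrors the paper's reported accuracy metric.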
Model averaging, optimal inference, and habit formation
FitzGerald, Thomas H. B.; Dolan, Raymond J.; Friston, Karl J.
2014-01-01
Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function, the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge: that of determining which model or models of their environment are best for guiding behavior. Bayesian model averaging, which says that an agent should weight the predictions of different models according to their evidence, provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent's behavior should show an equivalent balance. We hypothesize that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realizable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behavior. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focusing particularly upon the relationship between goal-directed and habitual behavior. PMID:25018724
High average power pockels cell
Daly, Thomas P.
1991-01-01
A high average power pockels cell is disclosed which reduces the effect of thermally induced strains in high average power laser technology. The pockels cell includes an elongated, substantially rectangular crystalline structure formed from a KDP-type material to eliminate shear strains. The X- and Y-axes are oriented substantially perpendicular to the edges of the crystal cross-section and to the C-axis direction of propagation to eliminate shear strains.
NASA Technical Reports Server (NTRS)
Ustino, Eugene A.
2006-01-01
This slide presentation reviews the observable radiances as functions of atmospheric and surface parameters; the mathematics of atmospheric weighting functions (WFs) and surface partial derivatives (PDs) is presented, along with the equation of the forward radiative transfer (RT) problem. For non-scattering atmospheres this can be done analytically, and all WFs and PDs can be computed analytically using the direct linearization approach. For scattering atmospheres, in the general case, the solution of the forward RT problem can be obtained only numerically, but only two numerical solutions are needed, one of the forward RT problem and one of the adjoint RT problem, to compute all the WFs and PDs of interest. In this presentation we discuss applications of both the linearization and adjoint approaches.
Application Bayesian Model Averaging method for ensemble system for Poland
NASA Astrophysics Data System (ADS)
Guzikowski, Jakub; Czerwinska, Agnieszka
2014-05-01
The aim of the project is to evaluate methods for generating numerical ensemble weather predictions using meteorological data from the Weather Research & Forecasting (WRF) Model and calibrating these data by means of a Bayesian Model Averaging (WRF BMA) approach. We construct high-resolution short-range ensemble forecasts using meteorological data (temperature) generated by nine WRF model configurations. The WRF models have 35 vertical levels and 2.5 km x 2.5 km horizontal resolution. The main emphasis is that the ensemble members use different parameterizations of the physical phenomena occurring in the boundary layer. To calibrate the ensemble forecast we use the Bayesian Model Averaging (BMA) approach. The BMA predictive probability density function (PDF) is a weighted average of the predictive PDFs associated with the individual ensemble members, with weights that reflect each member's relative skill. As a test case we chose a period with a heat wave and convective weather conditions over Poland, from 23 July to 1 August 2013. From 23 July to 29 July 2013, temperatures oscillated around 30 degrees Celsius at many meteorological stations and new temperature records were set. During this time, a rise in patients hospitalized with cardiovascular problems was registered. On 29 July 2013, an advection of moist tropical air masses over Poland caused a strong convection event with a mesoscale convective system (MCS). The MCS caused local flooding, damaged transport infrastructure, destroyed buildings and trees, and led to injuries and a direct threat to life. The meteorological data from the ensemble system are compared with data recorded at 74 weather stations in Poland. We prepare a set of model-observation pairs; the data from the single ensemble members and the median from the WRF BMA system are then evaluated using the deterministic error statistics root mean square error (RMSE) and mean absolute error (MAE).
To evaluate the probabilistic data, the Brier Score (BS) and Continuous Ranked Probability Score (CRPS) were used. Finally, a comparison between the BMA-calibrated data and the data from the individual ensemble members is presented.
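The BMA predictive PDF described in the abstract, a weighted mixture of kernels centered on the individual members' forecasts, can be sketched as follows. Gaussian kernels with a common spread are assumed here, and the member weights, which in practice are fitted (typically by EM) against training observations, are simply taken as given:

```python
import math

def bma_predictive_pdf(x, forecasts, weights, sigma):
    """BMA predictive density at x: a weighted mixture of normal kernels,
    each centered on one ensemble member's forecast."""
    return sum(
        w * math.exp(-0.5 * ((x - f) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
        for f, w in zip(forecasts, weights)
    )

def bma_mean(forecasts, weights):
    """Deterministic BMA forecast: the weight-averaged member forecast."""
    return sum(w * f for f, w in zip(forecasts, weights))

# three hypothetical members forecasting temperature (deg C), with skill weights
temp = bma_mean([29.0, 31.0, 33.0], [0.5, 0.3, 0.2])
```

A skillful member gets a larger weight and so pulls both the deterministic BMA forecast and the mixture density toward its forecast, which is exactly the calibration behavior the WRF BMA system exploits.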
Vocal attractiveness increases by averaging.
Bruckert, Laetitia; Bestelmeyer, Patricia; Latinus, Marianne; Rouger, Julien; Charest, Ian; Rousselet, Guillaume A; Kawahara, Hideki; Belin, Pascal
2010-01-26
Vocal attractiveness has a profound influence on listeners, a bias known as the "what sounds beautiful is good" vocal attractiveness stereotype [1], with tangible impact on a voice owner's success at mating, job applications, and/or elections. The prevailing view holds that attractive voices are those that signal desirable attributes in a potential mate [2-4], e.g., lower pitch in male voices. However, this account does not explain our preferences in more general social contexts in which voices of both genders are evaluated. Here we show that averaging voices via auditory morphing [5] results in more attractive voices, irrespective of the speaker's or listener's gender. Moreover, we show that this phenomenon is largely explained by two independent by-products of averaging: a smoother voice texture (reduced aperiodicities) and a greater similarity in pitch and timbre with the average of all voices (reduced "distance to mean"). These results provide the first evidence for a phenomenon of vocal attractiveness increasing by averaging, analogous to the well-established effect of facial averaging [6, 7]. They highlight prototype-based coding [8] as a central feature of voice perception, emphasizing the similarity in the mechanisms of face and voice perception. PMID:20129047
Determining GPS average performance metrics
NASA Technical Reports Server (NTRS)
Moore, G. V.
1995-01-01
Analytic and semi-analytic methods are used to show that users of the GPS constellation can expect performance variations based on their location. Specifically, performance is shown to be a function of both altitude and latitude. These results stem from the fact that the GPS constellation is itself non-uniform. For example, GPS satellites are over four times as likely to be directly over Tierra del Fuego than over Hawaii or Singapore. Inevitable performance variations due to user location occur for ground, sea, air and space GPS users. These performance variations can be studied in an average relative sense. A semi-analytic tool which symmetrically allocates GPS satellite latitude belt dwell times among longitude points is used to compute average performance metrics. These metrics include average number of GPS vehicles visible, relative average accuracies in the radial, intrack and crosstrack (or radial, north/south, east/west) directions, and relative average PDOP or GDOP. The tool can be quickly changed to incorporate various user antenna obscuration models and various GPS constellation designs. Among other applications, tool results can be used in studies to: predict locations and geometries of best/worst case performance, design GPS constellations, determine optimal user antenna location and understand performance trends among various users.
Vibrational averages along thermal lines
NASA Astrophysics Data System (ADS)
Monserrat, Bartomeu
2016-01-01
A method is proposed for the calculation of vibrational quantum and thermal expectation values of physical properties from first principles. Thermal lines are introduced: these are lines in configuration space parametrized by temperature, such that the value of any physical property along them is approximately equal to the vibrational average of that property. The number of sampling points needed to explore the vibrational phase space is reduced by up to an order of magnitude when the full vibrational density is replaced by thermal lines. Calculations of the vibrational averages of several properties and systems are reported, namely, the internal energy and the electronic band gap of diamond and silicon, and the chemical shielding tensor of L-alanine. Thermal lines pave the way for complex calculations of vibrational averages, including large systems and methods beyond semilocal density functional theory.
Lorcaserin for weight management.
Taylor, James R; Dietrich, Eric; Powell, Jason
2013-01-01
Type 2 diabetes and obesity commonly occur together. Obesity contributes to insulin resistance, a main cause of type 2 diabetes. Modest weight loss reduces glucose, lipids, blood pressure, need for medications, and cardiovascular risk. A number of approaches can be used to achieve weight loss, including lifestyle modification, surgery, and medication. Lorcaserin, a novel antiobesity agent, affects central serotonin subtype 2A receptors, resulting in decreased food intake and increased satiety. It has been studied in obese patients with type 2 diabetes and results in an approximately 5.5 kg weight loss, on average, when used for one year. Headache, back pain, nasopharyngitis, and nausea were the most common adverse effects noted with lorcaserin. Hypoglycemia was more common in the lorcaserin groups in the clinical trials, but none of the episodes were categorized as severe. Based on the results of these studies, lorcaserin was approved at a dose of 10 mg twice daily in patients with a body mass index ≥30 kg/m(2) or ≥27 kg/m(2) with at least one weight-related comorbidity, such as hypertension, type 2 diabetes mellitus, or dyslipidemia, in addition to a reduced calorie diet and increased physical activity. Lorcaserin is effective for weight loss in obese patients with and without type 2 diabetes, although its specific role in the management of obesity is unclear at this time. This paper reviews the clinical trials of lorcaserin, its use from the patient perspective, and its potential role in the treatment of obesity. PMID:23788837
Effect of high-speed jet on flow behavior, retrogradation, and molecular weight of rice starch.
Fu, Zhen; Luo, Shun-Jing; BeMiller, James N; Liu, Wei; Liu, Cheng-Mei
2015-11-20
Effects of high-speed jet (HSJ) treatment on flow behavior, retrogradation, and degradation of the molecular structure of indica rice starch were investigated. The turbidity of pastes (degree of retrogradation), the enthalpy of melting of retrograded rice starch, the weight-average molecular weights and weight-average root-mean-square radii of gyration of the starch polysaccharides, and the amylopectin peak areas of SEC profiles all decreased with the number of HSJ treatment passes. The areas of lower-molecular-weight polymers increased. The chain-length distribution was not significantly changed. Pastes of all starch samples exhibited pseudoplastic, shear-thinning behavior. HSJ treatment increased the flow behavior index and decreased the consistency coefficient and viscosity. The data suggested that degradation of amylopectin was mainly involved and that breakdown preferentially occurred in chains between clusters. PMID:26344255
Averaging facial expression over time
Haberman, Jason; Harp, Tom; Whitney, David
2010-01-01
The visual system groups similar features, objects, and motion (e.g., Gestalt grouping). Recent work suggests that the computation underlying perceptual grouping may be one of summary statistical representation. Summary representation occurs for low-level features, such as size, motion, and position, and even for high level stimuli, including faces; for example, observers accurately perceive the average expression in a group of faces (J. Haberman & D. Whitney, 2007, 2009). The purpose of the present experiments was to characterize the time-course of this facial integration mechanism. In a series of three experiments, we measured observers’ abilities to recognize the average expression of a temporal sequence of distinct faces. Faces were presented in sets of 4, 12, or 20, at temporal frequencies ranging from 1.6 to 21.3 Hz. The results revealed that observers perceived the average expression in a temporal sequence of different faces as precisely as they perceived a single face presented repeatedly. The facial averaging was independent of temporal frequency or set size, but depended on the total duration of exposed faces, with a time constant of ~800 ms. These experiments provide evidence that the visual system is sensitive to the ensemble characteristics of complex objects presented over time. PMID:20053064
Exact averaging of laminar dispersion
NASA Astrophysics Data System (ADS)
Ratnakar, Ram R.; Balakotaiah, Vemuri
2011-02-01
We use the Liapunov-Schmidt (LS) technique of bifurcation theory to derive a low-dimensional model for laminar dispersion of a nonreactive solute in a tube. The LS formalism leads to an exact averaged model, consisting of the governing equation for the cross-section averaged concentration, along with the initial and inlet conditions, to all orders in the transverse diffusion time. We use the averaged model to analyze the temporal evolution of the spatial moments of the solute and show that they do not have the centroid displacement or variance deficit predicted by the coarse-grained models derived by other methods. We also present a detailed analysis of the first three spatial moments for short and long times as a function of the radial Peclet number and identify three clearly defined time intervals for the evolution of the solute concentration profile. By examining the skewness in some detail, we show that the skewness increases initially, attains a maximum for time scales of the order of transverse diffusion time, and the solute concentration profile never attains the Gaussian shape at any finite time. Finally, we reason that there is a fundamental physical inconsistency in representing laminar (Taylor) dispersion phenomena using truncated averaged models in terms of a single cross-section averaged concentration and its large scale gradient. Our approach evaluates the dispersion flux using a local gradient between the dominant diffusive and convective modes. We present and analyze a truncated regularized hyperbolic model in terms of the cup-mixing concentration for the classical Taylor-Aris dispersion that has a larger domain of validity compared to the traditional parabolic model. By analyzing the temporal moments, we show that the hyperbolic model has no physical inconsistencies that are associated with the parabolic model and can describe the dispersion process to first order accuracy in the transverse diffusion time.
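The spatial moments analyzed above (total mass, centroid, variance, skewness of the solute profile) can be evaluated numerically from any concentration profile; a generic sketch on a uniform grid, not the authors' LS-averaged model:

```python
import numpy as np

def spatial_moments(x, c):
    """Zeroth to third central moments of a 1-D concentration profile
    c(x) on a uniform grid, by a simple Riemann sum; returns
    (mass, mean, variance, skewness).  Skewness = mu3 / sigma**3."""
    x, c = np.asarray(x, float), np.asarray(c, float)
    dx = x[1] - x[0]                          # uniform spacing assumed
    m0 = np.sum(c) * dx                       # total solute mass
    mean = np.sum(x * c) * dx / m0            # centroid position
    var = np.sum((x - mean) ** 2 * c) * dx / m0
    mu3 = np.sum((x - mean) ** 3 * c) * dx / m0
    return m0, mean, var, mu3 / var ** 1.5
```

A Gaussian profile has zero skewness; the abstract's point is that the true laminar-dispersion profile never quite reaches that state at finite time.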
NASA Technical Reports Server (NTRS)
1995-01-01
The Attitude Adjuster is a system for weight repositioning corresponding to a SCUBA diver's changing positions. Compact tubes on the diver's air tank permit controlled movement of lead balls within the Adjuster, automatically repositioning when the diver changes position. Manufactured by Think Tank Technologies, the system is light and small, reducing drag and energy requirements and contributing to lower air consumption. The Mid-Continent Technology Transfer Center helped the company with both technical and business information and arranged for the testing at Marshall Space Flight Center's Weightlessness Environmental Training Facility for astronauts.
Combining forecast weights: Why and how?
NASA Astrophysics Data System (ADS)
Yin, Yip Chee; Kok-Haur, Ng; Hock-Eam, Lim
2012-09-01
This paper proposes a procedure called forecast weight averaging, a specific combination of forecast weights obtained from different methods of constructing forecast weights, for the purpose of improving the accuracy of pseudo out-of-sample forecasting. It is found that under certain specified conditions, forecast weight averaging can lower the mean squared forecast error obtained from model averaging. In addition, we show that in a linear and homoskedastic environment, this superior predictive ability of forecast weight averaging holds true irrespective of whether the coefficients are tested by the t statistic or the z statistic, provided the significance level is within the 10% range. By theoretical proofs and simulation study, we have shown that model averaging methods such as variance model averaging, simple model averaging, and standard error model averaging each produce a mean squared forecast error larger than that of forecast weight averaging. Finally, this result also holds, marginally, when applied to empirical business and economic data sets: gross domestic product (GDP) growth rate, Consumer Price Index (CPI), and average lending rate (ALR) of Malaysia.
Annual Average Changes in Adult Obesity as a Risk Factor for Papillary Thyroid Cancer
Hwang, Yunji; Lee, Kyu Eun; Park, Young Joo; Kim, Su-Jin; Kwon, Hyungju; Park, Do Joon; Cho, Belong; Choi, Ho-Chun; Kang, Daehee; Park, Sue K.
2016-01-01
Abstract We evaluated the association between weight change in middle-aged adults and papillary thyroid cancer (PTC) based on a large-scale case-control study. Our study included data from 1551 PTC patients (19.3% men and 80.7% women) who underwent thyroidectomy at 3 general hospitals in Korea and 15,510 individually matched control subjects. The subjects’ weight history, epidemiologic information, and tumor characteristics confirmed after thyroidectomy were analyzed. Odds ratios (ORs) and 95% confidence intervals (95% CIs) were determined for the annual average changes in weight and obesity indicators (body mass index (BMI), body surface area, and body fat percentage (BF%)) in subjects since the age of 35 years. Subjects with a total weight gain ≥10 kg after age 35 years were more likely to have PTC (men, OR, 5.39, 95% CI, 3.88–7.49; women, OR, 3.36, 95% CI, 2.87–3.93) compared with subjects with a stable weight (loss or gain <5 kg). A marked increase in BMI since age 35 years (annual average change of BMI ≥0.3 kg/m2/yr) was related to an elevated PTC risk, and the association was more pronounced for large-sized PTC risks (<1 cm, OR, 2.34, 95% CI, 1.92–2.85; ≥1 cm, OR, 4.00, 95% CI, 2.91–5.49, P heterogeneity = 0.005) compared with low PTC risks. Weight gain and annual increases in obesity indicators in middle-aged adults may increase the risk of developing PTC. PMID:26945379
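The odds ratios and 95% confidence intervals reported above follow the standard 2x2-table form; a textbook Wald-interval sketch (the counts `a` through `d` are hypothetical, and this is not the study's individually matched analysis):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls.
    OR = (a*d)/(b*c); CI = exp(ln OR +/- z * SE),
    SE = sqrt(1/a + 1/b + 1/c + 1/d)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

An OR whose interval excludes 1 (as in the weight-gain results above) indicates a statistically significant association.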
Holiday weight gain: fact or fiction?
Roberts, S B; Mayer, J
2000-12-01
The prevalence of obesity continues to rise and controversy remains regarding the underlying specific causes of this trend. Recently, the magnitude of holiday weight gain and its contribution to annual weight gain were examined in a convenience sample of 195 adults. On average, weight gain during the 6-week winter period from Thanksgiving through New Year averaged only 0.37 kg. However, weight gain was greater among individuals who were overweight or obese, and 14% gained >2.3 kg (5 lb). In addition, among the entire population, weight gain during the 6-week holiday season explained 51% of annual weight gain. These results suggest that holiday weight gain may be an important contributor to the rising prevalence of obesity, even though absolute values for weight gain in this study were less than anticipated. Further studies using representative populations are needed to confirm these findings. PMID:11206847
Ultrahigh molecular weight aromatic siloxane polymers
NASA Technical Reports Server (NTRS)
Ludwick, L. M.
1982-01-01
The condensation of a diol with a silane in toluene yields a silphenylene-siloxane polymer. The reaction of stoichiometric amounts of the diol and silane produced products with molecular weights in the range 2.0-6.0 × 10⁵. The molecular weight of the product was greatly increased by a multistep technique. The methodology for synthesis of high molecular weight polymers using a two-step procedure was refined. Polymers with weight-average molecular weights in excess of 1.0 × 10⁶ were produced by this method. Two more reactive silanes, bis(pyrrolidinyl)dimethylsilane and bis(gamma-butyrolactam)dimethylsilane, are compared with dimethylaminodimethylsilane in their ability to advance the molecular weight of the prepolymer. The polymers produced are characterized by intrinsic viscosity in tetrahydrofuran. Weight- and number-average molecular weights and polydispersity are determined by gel permeation chromatography.
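The weight- and number-average molecular weights and polydispersity mentioned in the last sentence are simple moments of the chain-mass distribution; a minimal sketch (illustrative counts and masses, not GPC data):

```python
def molecular_weight_averages(counts, masses):
    """Number-average (Mn), weight-average (Mw) molecular weight and
    polydispersity (Mw/Mn) for a discrete distribution with counts[i]
    chains of molar mass masses[i]:
      Mn = sum(N_i * M_i) / sum(N_i)
      Mw = sum(N_i * M_i**2) / sum(N_i * M_i)"""
    sn = sum(counts)
    s1 = sum(n * m for n, m in zip(counts, masses))
    s2 = sum(n * m * m for n, m in zip(counts, masses))
    mn = s1 / sn
    mw = s2 / s1
    return mn, mw, mw / mn
```

Since Mw weights heavier chains more strongly, Mw >= Mn always, with equality (polydispersity 1) only for a monodisperse sample.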
Averaging Robertson-Walker cosmologies
Brown, Iain A.; Robbers, Georg; Behrend, Juliane
2009-04-15
The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω_eff⁰ ≈ 4 × 10⁻⁶, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10⁻⁸ and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < -1/3 can be found for strongly phantom models.
Ensemble averaging of acoustic data
NASA Technical Reports Server (NTRS)
Stefanski, P. K.
1982-01-01
A computer program called Ensemble Averaging of Acoustic Data is documented. The program samples analog data, analyzes the data, and displays them in the time and frequency domains. Hard copies of the displays are the program's output. The documentation includes a description of the program and detailed user instructions for the program. This software was developed for use on the Ames 40- by 80-Foot Wind Tunnel's Dynamic Analysis System, consisting of a PDP-11/45 computer, two RK05 disk drives, a Tektronix 611 keyboard/display terminal, an FPE-4 Fourier Processing Element, and an analog-to-digital converter.
Dietary Supplements for Weight Loss
... on Caffeine ) Coleus forskohlii Coleus forskohlii is a plant that grows in India, Thailand, and other subtropical areas. Forskolin, made from the plant’s roots, is claimed to help you lose weight ...
Cosmological measures without volume weighting
Page, Don N
2008-10-15
Many cosmologists (myself included) have advocated volume weighting for the cosmological measure problem, weighting spatial hypersurfaces by their volume. However, this often leads to the Boltzmann brain problem, that almost all observations would be by momentary Boltzmann brains that arise very briefly as quantum fluctuations in the late universe when it has expanded to a huge size, so that our observations (too ordered for Boltzmann brains) would be highly atypical and unlikely. Here it is suggested that volume weighting may be a mistake. Volume averaging is advocated as an alternative. One consequence may be a loss of the argument that eternal inflation gives a nonzero probability that our universe now has infinite volume.
... loss-rapid weight loss; Overweight-rapid weight loss; Obesity-rapid weight loss; Diet-rapid weight loss ... for people who have health problems because of obesity. For these people, losing a lot of weight ...
Attention shifts and memory averaging.
Kerzel, Dirk
2002-04-01
When observers are asked to localize the final position of a moving stimulus, judgements may be influenced by additional elements that are presented in the visual scene. Typically, judgements are biased toward a salient non-target element. It has been assumed that the non-target element acts as a landmark and attracts the remembered final target position. The present study investigated the effects of briefly flashed non-target elements on localization performance. Similar to landmark attraction, localization was biased toward these elements. However, an influence was only noted if the distractor was presented at the time of target disappearance or briefly thereafter. It is suggested that memory traces of distracting elements are only averaged with the final target position if they are highly activated at the time the target vanishes. PMID:12047052
Achronal averaged null energy condition
Graham, Noah; Olum, Ken D.
2007-09-15
The averaged null energy condition (ANEC) requires that the integral over a complete null geodesic of the stress-energy tensor projected onto the geodesic tangent vector is never negative. This condition is sufficient to prove many important theorems in general relativity, but it is violated by quantum fields in curved spacetime. However there is a weaker condition, which is free of known violations, requiring only that there is no self-consistent spacetime in semiclassical gravity in which ANEC is violated on a complete, achronal null geodesic. We indicate why such a condition might be expected to hold and show that it is sufficient to rule out closed timelike curves and wormholes connecting different asymptotically flat regions.
Flexible time domain averaging technique
NASA Astrophysics Data System (ADS)
Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng
2013-09-01
Time domain averaging (TDA) is essentially a comb filter; it cannot extract specified harmonics, which may be caused by some faults such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to a varying extent. Several improved TDA methods have been proposed; however, they cannot completely eliminate the waveform reconstruction error caused by PCE. In order to overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of FTDA is first constructed by frequency-domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which improves the computational efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of FTDA in signal de-noising, interpolation, and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it improves the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for fault symptom extraction in rotating machinery.
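For contrast with the FTDA, conventional time domain averaging (the comb filter the paper improves on) can be sketched as segment-and-average, assuming an integer period in samples; real TDA must cope with the period cutting error the paper addresses:

```python
import numpy as np

def time_domain_average(signal, period):
    """Conventional TDA: split the signal into consecutive segments of
    integer length `period` (samples) and average them, attenuating
    components not synchronous with the period.  A baseline sketch of
    the classical method, not the FTDA itself."""
    x = np.asarray(signal, float)
    n = (len(x) // period) * period          # drop the ragged tail
    return x[:n].reshape(-1, period).mean(axis=0)
```

Averaging N synchronous segments leaves the periodic component intact while suppressing uncorrelated noise; non-integer periods (the PCE problem) are exactly what this naive version cannot handle.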
Using Bayes Model Averaging for Wind Power Forecasts
NASA Astrophysics Data System (ADS)
Preede Revheim, Pål; Beyer, Hans Georg
2014-05-01
For operational purposes, predictions of the lumped output of groups of wind farms spread over larger geographic areas will often be of interest. A naive approach is to make forecasts for each individual site and sum them up to get the group forecast. It is, however, well documented that a better choice is to use a model that also takes advantage of spatial smoothing effects. It might, however, be the case that some sites tend to reflect the total output of the region more accurately, either in general or for certain wind directions. It will then be of interest to give these sites a greater influence over the group forecast. Bayesian model averaging (BMA) is a statistical post-processing method for producing probabilistic forecasts from ensembles. Raftery et al. [1] show how BMA can be used for statistical post-processing of forecast ensembles, producing PDFs of future weather quantities. The BMA predictive PDF of a future weather quantity is a weighted average of the ensemble members' PDFs, where the weights can be interpreted as posterior probabilities and reflect the ensemble members' contributions to overall forecasting skill over a training period. In Revheim and Beyer [2], the BMA procedure used in Sloughter, Gneiting and Raftery [3] was found to produce fairly accurate PDFs for the future mean wind speed of a group of sites from the single-site wind speeds. However, when the procedure was applied to wind power it resulted either in problems with the estimation of the parameters (mainly caused by longer consecutive periods of no power production) or in severe underestimation (mainly caused by problems with reflecting the power curve). In this paper, the problems that arose when applying BMA to wind power forecasting are addressed through two strategies. First, the BMA procedure is run with a combination of single-site wind speeds and single-site wind power production as input.
This solves the problem with longer consecutive periods where the input data does not contain information, but it has the disadvantage of nearly doubling the number of model parameters to be estimated. Second, the BMA procedure is run with group mean wind power as the response variable instead of group mean wind speed. This also solves the problem with longer consecutive periods without information in the input data, but it leaves the power curve to also be estimated from the data. [1] Raftery, A. E., et al. (2005). Using Bayesian Model Averaging to Calibrate Forecast Ensembles. Monthly Weather Review, 133, 1155-1174. [2] Revheim, P. P. and H. G. Beyer (2013). Using Bayesian Model Averaging for wind farm group forecasts. EWEA Wind Power Forecasting Technology Workshop, Rotterdam, 4-5 December 2013. [3] Sloughter, J. M., T. Gneiting and A. E. Raftery (2010). Probabilistic Wind Speed Forecasting Using Ensembles and Bayesian Model Averaging. Journal of the American Statistical Association, Vol. 105, No. 489, 25-35.
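The BMA predictive PDF described above, a weighted mixture of member PDFs, can be sketched with Gaussian kernels (a schematic with a common spread `sigma`; real applications fit the weights and spreads by EM over a training period, and wind quantities may call for non-Gaussian kernels):

```python
import numpy as np

def bma_pdf(x, member_means, weights, sigma):
    """Schematic BMA predictive density: a mixture of Gaussian kernels
    centred on the (bias-corrected) member forecasts, mixed with the
    posterior weights.  `sigma`, the means and the weights are
    illustrative inputs, not fitted values."""
    x = np.asarray(x, float)
    dens = np.zeros_like(x)
    for m, w in zip(member_means, weights):
        dens += w * np.exp(-0.5 * ((x - m) / sigma) ** 2) / (
            sigma * np.sqrt(2.0 * np.pi))
    return dens
```

If the weights sum to one, the mixture integrates to one; members with larger weights (better training-period skill) dominate the predictive PDF.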
A pure bending exact nodal-averaged shear strain method for finite element plate analysis
NASA Astrophysics Data System (ADS)
Wu, C. T.; Guo, Y.; Wang, D.
2014-05-01
An averaged shear strain method, based on a nodal integration approach, is presented for the finite element analysis of Reissner-Mindlin plates. In this work, we combine the shear interpolation method from the MITC4 plate element with an area-weighted averaging technique for the nodal integration of shear energy to relieve shear locking in the thin plate analysis as well as to pass the pure bending patch test. In order to resolve the numerical instability caused by the direct nodal integration, the bending strain field is computed by a sub-domain nodal integration approach based on the Sub-domain Stabilized Conforming Integration and a modified curvature smoothing scheme. The resulting nodally integrated smoothed strain formulation is shown to contain only the primitive variables and thus can be easily implemented in the existing displacement-based finite element plate formulation. Several numerical examples are presented to demonstrate the accuracy of the present method.
Weight loss attempts in adults: goals, duration, and rate of weight loss.
Williamson, D F; Serdula, M K; Anda, R F; Levy, A; Byers, T
1992-01-01
OBJECTIVES: Although attempted weight loss is common, little is known about the goals and durations of weight loss attempts and the rates of achieved weight loss in the general population. METHODS: Data were collected by telephone in 1989 from adults aged 18 years and older in 39 states and the District of Columbia. Analyses were carried out separately for the 6758 men and 14,915 women who reported currently trying to lose weight. RESULTS: Approximately 25% of the men respondents and 40% of the women respondents reported that they were currently trying to lose weight. Among men, a higher percentage of Hispanics (31%) than of Whites (25%) or Blacks (23%) reported trying to lose weight. Among women, however, there were no ethnic differences in prevalence. The average man wanted to lose 30 pounds and to weigh 178 pounds; the average woman wanted to lose 31 pounds and to weigh 133 pounds. Black women wanted to lose an average of 8 pounds more than did White women, but Black women's goal weight was 10 pounds heavier. The average rate of achieved weight loss was 1.4 pounds per week for men and 1.1 pounds per week for women; these averages, however, may reflect only the experience of those most successful at losing weight. CONCLUSIONS: Attempted weight loss is a common behavior, regardless of age, gender, or ethnicity, and weight loss goals are substantial; however, obesity remains a major public health problem in the United States. PMID:1503167
Weight-ing: the experience of waiting on weight loss.
Glenn, Nicole M
2013-03-01
Perhaps we want to be perfect, strive for health, beauty, and the admiring gaze of others. Maybe we desire the body of our youth, the "healthy" body, the body that has just the right fit. Regardless of the motivation, we might find ourselves striving, wanting, and waiting on weight loss. What is it to wait on weight loss? I explore the meaning of this experience-as-lived using van Manen's guide to phenomenological reflection and writing. Weight has become an increasing focus of contemporary culture, demonstrated, for example, by a growing weight-loss industry and global obesity "epidemic." Weight has become synonymous with health status, and weight loss with "healthier." I examine the weight wait through experiences of the common and uncommon, considering relations to time, body, space, and the other with the aim of evoking a felt, embodied, emotive understanding of the meaning of waiting on weight loss. I also discuss the implications of the findings. PMID:23202478
Weight loss surgery helps people with extreme obesity to lose weight. It may be an option if you cannot lose weight ... obesity. There are different types of weight loss surgery. They often limit the amount of food you ...
Pollutant roses for daily averaged ambient air pollutant concentrations
NASA Astrophysics Data System (ADS)
Cosemans, Guido; Kretzschmar, Jan; Mensink, Clemens
Pollutant roses are indispensable tools to identify unknown (fugitive) sources of heavy metals at industrial sites whose current impact exceeds the target values imposed for the year 2012 by the European Air Quality Daughter Directive 2004/107/EC. As most of the measured concentrations of heavy metals in ambient air are daily averaged values, a method to obtain high-quality pollutant roses from such data is of practical interest for cost-effective air quality management. A computational scheme is presented to obtain, from daily averaged concentrations, 10° angular resolution pollutant roses, called PRP roses, that are in many aspects comparable to pollutant roses made with half-hourly concentrations. The computational scheme is a ridge regression based on three building blocks: ordinary least squares regression; outlier handling by weighting based on expected values of the higher percentiles in a lognormal distribution; and weighted averages whereby observed values, raised to a power m, and daily wind rose frequencies are used as weights. Distance measures are used to find the optimal value for m. The performance of the computational scheme is illustrated by comparing the pollutant roses constructed with measured half-hourly SO2 data for 10 monitoring sites in the Antwerp harbour with the PRP roses made with the corresponding daily averaged SO2 concentrations. A miniature dataset, made up of 7 daily concentrations and of half-hourly wind directions assigned to 4 wind sectors, is used to illustrate the formulas and their results.
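The weighted-average building block described above can be sketched as follows. This is a hedged reading of the scheme, in which day d contributes to sector s with weight c_d**m times that day's frequency of winds from sector s; the full PRP method adds the regression and outlier-handling steps:

```python
import numpy as np

def sector_rose(daily_conc, daily_freq, m=1.0):
    """Weighted-average concentration per wind sector from daily data.
    daily_conc: (days,) daily averaged concentrations c_d.
    daily_freq: (days, sectors) fraction of each day's half-hourly
    winds falling in each sector, f_{d,s}.
    Day d contributes to sector s with weight c_d**m * f_{d,s};
    m is the tuning exponent the paper selects by distance measures."""
    c = np.asarray(daily_conc, float)
    f = np.asarray(daily_freq, float)
    w = (c[:, None] ** m) * f                    # per-day, per-sector weights
    return (w * c[:, None]).sum(axis=0) / np.maximum(w.sum(axis=0), 1e-12)
```

Raising the concentrations to a power m > 1 lets high-concentration days (the fugitive-source signature) dominate their wind sectors, sharpening the rose.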
Birth weight reduction associated with residence near a hazardous waste landfill.
Berry, M; Bove, F
1997-01-01
We examined the relationship between birth weight and mother's residence near a hazardous waste landfill. Twenty-five years of birth certificates (1961-1985) were collected for four towns. Births were grouped into five 5-year periods corresponding to hypothesized exposure periods (1971-1975 having the greatest potential for exposure). From 1971 to 1975, term births (37-44 weeks gestation) to parents living closest to the landfill (Area 1A) had a statistically significant lower average birth weight (192 g) and a statistically significant higher proportion of low birth weight [odds ratio (OR) = 5.1; 95% confidence interval (CI), 2.1-12.3] than the control population. Average term birth weights in Area 1A rebounded by about 332 g after 1975. Parallel results were found for all births (gestational age > 27 weeks) in Area 1A during 1971-1975. Area 1A infants had twice the risk of prematurity (OR = 2.1; 95% CI, 1.0-4.4) during 1971-1975 compared to the control group. The results indicate a significant impact on infants born to residents living near the landfill during the period postulated as having the greatest potential for exposure. The magnitude of the effect is in the range of the birth weight reduction due to cigarette smoking during pregnancy. PMID:9347901
40 CFR 63.5710 - How do I demonstrate compliance using emissions averaging?
Code of Federal Regulations, 2011 CFR
2011-07-01
... Molding Resin and Gel Coat Operations § 63.5710 How do I demonstrate compliance using emissions averaging... section to compute the weighted-average MACT model point value for each open molding resin and gel coat... open molding operation (PVR, PVPG, PVCG, PVTR, and PVTG) included in the average, kilograms of HAP...
College Freshman Stress and Weight Change: Differences by Gender
ERIC Educational Resources Information Center
Economos, Christina D.; Hildebrandt, M. Lise; Hyatt, Raymond R.
2008-01-01
Objectives: To examine how stress and health-related behaviors affect freshman weight change by gender. Methods: Three hundred ninety-six freshmen completed a 40-item health behavior survey and height and weight were collected at baseline and follow-up. Results: Average weight change was 5.04 lbs for males, 5.49 lbs for females. Weight gain was…
Programmable noise bandwidth reduction by means of digital averaging
NASA Technical Reports Server (NTRS)
Poklemba, John J. (Inventor)
1993-01-01
Predetection noise bandwidth reduction is effected by a pre-averager capable of digitally averaging the samples of an input data signal over two or more symbols, the averaging interval being defined by the input sampling rate divided by the output sampling rate. Because the averaged sample is clocked to a suitable detector at a much slower rate than the input signal sampling rate, the noise bandwidth at the input to the detector is reduced, the input to the detector has an improved signal-to-noise ratio as a result of the averaging process, and the rate at which subsequent processing must operate is correspondingly reduced. The pre-averager forms a data filter having an output sampling rate of one sample per symbol of received data. More specifically, selected ones of a plurality of samples accumulated over two or more symbol intervals are output in response to clock signals at a rate of one sample per symbol interval. The pre-averager includes circuitry for weighting digitized signal samples using stored finite impulse response (FIR) filter coefficients. A method according to the present invention is also disclosed.
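The core of the pre-averaging idea can be sketched as a block average that decimates an oversampled signal to one output per symbol. This simple rectangular average is a stand-in for the patented FIR-weighted design; all names and numbers are illustrative.

```python
def pre_average(samples, samples_per_symbol):
    """Decimating block average: one output sample per symbol.

    Averaging n input samples reduces the variance of white noise
    by a factor of n and lowers the rate at which the downstream
    detector must run.
    """
    n = samples_per_symbol
    return [sum(samples[i:i + n]) / n
            for i in range(0, len(samples) - n + 1, n)]

# Four samples per symbol in, one averaged sample per symbol out
out = pre_average([1, 1, 1, 1, -1, -1, -1, -1], samples_per_symbol=4)
```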
Judging body weight from faces: the height-weight illusion.
Schneider, Tobias M; Hecht, Heiko; Carbon, Claus-Christian
2012-01-01
Being able to exploit features of the human face to predict health and fitness can serve as an evolutionary advantage. Surface features such as facial symmetry, averageness, and skin colour are known to influence attractiveness. We sought to determine whether observers are able to extract more complex features, namely body weight. If possible, it could be used as a predictor for health and fitness. For instance, facial adiposity could be taken to indicate a cardiovascular challenge or proneness to infections. Observers seem to be able to glean body weight information from frontal views of a face. Is weight estimation robust across different viewing angles? We showed that participants strongly overestimated body weight for faces photographed from a lower vantage point while underestimating it for faces photographed from a higher vantage point. The perspective distortions of simple facial measures (e.g., width-to-height ratio) that accompany changes in vantage point do not suffice to predict body weight. Instead, more complex patterns must be involved in the height-weight illusion. PMID:22611670
Effect of clothing weight on body weight
Technology Transfer Automated Retrieval System (TEKTRAN)
Background: In clinical settings, it is common to measure weight of clothed patients and estimate a correction for the weight of clothing, but we can find no papers in the medical literature regarding the variability in clothing weight with weather, season, and gender. Methods: Fifty adults (35 wom...
Informed Test Component Weighting.
ERIC Educational Resources Information Center
Rudner, Lawrence M.
2001-01-01
Identifies and evaluates alternative methods for weighting tests. Presents formulas for composite reliability and validity as a function of component weights and suggests a rational process that identifies and considers trade-offs in determining weights. Discusses drawbacks to implicit weighting and explicit weighting and the difficulty of…
Jupiter's Radio Rotation Period: A 50-year Average
NASA Astrophysics Data System (ADS)
Higgins, C. A.; Reyes, F.; Solus, D.
2011-12-01
Using 50 years of continuous seasonal observations of Jupiter's decametric radio emissions from 18-22 MHz collected at the University of Florida Radio Observatory (UFRO), we calculate a new radio rotation period of Jupiter. The new period is the weighted mean of more than 20 independent measurements. Each measurement is found by determining the drift of the histograms of probability of occurrence versus the System III (1965) central meridian longitude (CML) over intervals of approximately 12, 24, 36, and 48 years. This multiple 12-year average technique is employed to reduce the uncertainty in the longitudes of the radio sources caused by Jupiter's 11.86 year orbit. Our weighted mean is 9 h 55 m 29.689 s, with a standard deviation of the weighted mean of 0.004 s. Our calculations show remarkably stable radio sources. An upper limit of any radio rotation period drift is discussed.
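A weighted mean of independent measurements, together with the standard deviation of the weighted mean, can be computed as below. The period values and uncertainties here are hypothetical stand-ins, not the UFRO measurements.

```python
import math

def weighted_mean(values, sigmas):
    """Inverse-variance weighted mean and its standard deviation.

    With weights w_i = 1/sigma_i^2, the standard deviation of the
    weighted mean is sqrt(1 / sum(w_i)).
    """
    weights = [1.0 / s ** 2 for s in sigmas]
    wsum = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / wsum
    return mean, math.sqrt(1.0 / wsum)

# Hypothetical rotation-period measurements (seconds past 9 h 55 m)
periods = [29.685, 29.691, 29.690, 29.688]
sigmas = [0.010, 0.008, 0.012, 0.009]
mean, sigma = weighted_mean(periods, sigmas)
```

Combining many independent interval measurements this way is what drives the quoted uncertainty of the 50-year mean well below that of any single measurement.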
5 CFR 591.210 - What are weights?
Code of Federal Regulations, 2011 CFR
2011-01-01
... What are weights? (a) A weight is the relative importance or share of a subpart of a group compared... compared with the whole pie. (b) OPM uses two kinds of weights: Consumer expenditure weights and employment.... The employment weight is the relative employment population of the survey area compared with...
Dietary restraint and gestational weight gain
Mumford, Sunni L.; Siega-Riz, Anna Maria; Herring, Amy; Evenson, Kelly R.
2008-01-01
Objective To determine whether a history of preconceptional dieting and restrained eating was related to higher weight gains in pregnancy. Design Dieting practices were assessed among a prospective cohort of pregnant women using the Revised Restraint Scale. Women were classified on three separate subscales as restrained eaters, dieters, and weight cyclers. Subjects Participants included 1,223 women in the Pregnancy, Infection and Nutrition Study. Main outcome measures Total gestational weight gain and adequacy of weight gain (ratio of observed/expected weight gain based on Institute of Medicine (IOM) recommendations). Statistical analyses performed Multiple linear regression was used to model the two weight gain outcomes, while controlling for potential confounders including physical activity and weight gain attitudes. Results There was a positive association between each subscale and total weight gain, as well as adequacy of weight gain. Women classified as cyclers gained an average of 2 kg more than non-cyclers, and showed higher observed/expected ratios by 0.2 units. Among restrained eaters and dieters, there was a differential effect by BMI. With the exception of underweight women, all other weight status women with a history of dieting or restrained eating gained more weight during pregnancy and had higher adequacy of weight gain ratios. In contrast, underweight women with a history of restrained eating behaviors gained less weight compared to underweight women without those behaviors. Conclusions Restrained eating behaviors were associated with weight gains above the IOM recommendations for normal, overweight, and obese women, and weight gains below the recommendations for underweight women. Excessive gestational weight gain is of concern given its association with postpartum weight retention. 
The dietary restraint tool is useful for identifying women who would benefit from nutritional counseling prior to or during pregnancy with regard to achieving targeted weight gain recommendations. PMID:18926129
ERIC Educational Resources Information Center
Molfese, Dennis L.; Key, Alexandra Fonaryova; Kelly, Spencer; Cunningham, Natalie; Terrell, Shona; Ferguson, Melissa; Molfese, Victoria J.; Bonebright, Terri
2006-01-01
Event-related potentials (ERPs) were recorded from 27 children (14 girls, 13 boys) who varied in their reading skill levels. Both behavior performance measures recorded during the ERP word classification task and the ERP responses themselves discriminated between children with above-average, average, and below-average reading skills. ERP
Do diurnal aerosol changes affect daily average radiative forcing?
NASA Astrophysics Data System (ADS)
Kassianov, Evgueni; Barnard, James; Pekour, Mikhail; Berg, Larry K.; Michalsky, Joseph; Lantz, Kathy; Hodges, Gary
2013-06-01
The diurnal variability of aerosol has been observed frequently for many urban/industrial regions. How this variability may alter the direct aerosol radiative forcing (DARF), however, is largely unknown. To quantify changes in the time-averaged DARF, we perform an assessment of 29 days of high temporal resolution ground-based data collected during the Two-Column Aerosol Project on Cape Cod, which is downwind of metropolitan areas. We demonstrate that strong diurnal changes of aerosol loading (about 20% on average) have a negligible impact on the 24-h average DARF when daily averaged optical properties are used to find this quantity. However, when there is a sparse temporal sampling of aerosol properties, which may preclude the calculation of daily averaged optical properties, large errors (up to 100%) in the computed DARF may occur. We describe a simple way of reducing these errors, which suggests the minimal temporal sampling needed to accurately find the forcing.
... individual's health status and risks. How to Measure Height and Weight for BMI Height and weight must be measured in order to calculate BMI. It is most accurate to measure height in meters and weight in kilograms. However, the ...
Weight Management and Calories
... Food Safety Newsroom Dietary Guidelines Communicator’s Guide Weight Management You are here Home / ADULTS Weight Management Print Share Why is weight management important? In addition to helping you feel and ...
Effect of molecular weight on polyphenylquinoxaline properties
NASA Technical Reports Server (NTRS)
Jensen, Brian J.
1991-01-01
A series of polyphenyl quinoxalines with different molecular weight and end-groups were prepared by varying monomer stoichiometry. Thus, 4,4'-oxydibenzil and 3,3'-diaminobenzidine were reacted in a 50/50 mixture of m-cresol and xylenes. Reaction concentration, temperature, and stir rate were studied and found to have an effect on polymer properties. Number and weight average molecular weights were determined and correlated well with viscosity data. Glass transition temperatures were determined and found to vary with molecular weight and end-groups. Mechanical properties of films from polymers with different molecular weights were essentially identical at room temperature but showed significant differences at 232 C. Diamine terminated polymers were found to be much less thermooxidatively stable than benzil terminated polymers when aged at 316 C even though dynamic thermogravimetric analysis revealed only slight differences. Lower molecular weight polymers exhibited better processability than higher molecular weight polymers.
On the Choice of Average Solar Zenith Angle
NASA Astrophysics Data System (ADS)
Cronin, T.
2014-12-01
Studies with idealized climate models often make simplifying decisions to average solar radiation over space and time. But clear-sky and cloud albedo are increasing functions of the solar zenith angle, so the choice of average solar zenith angle is important and can lead to significant climate biases. Here, I use radiative transfer calculations for a pure scattering atmosphere and with a more detailed radiative transfer model to argue that one should in general choose the insolation-weighted zenith angle, rather than the simpler daytime-average zenith angle. The insolation-weighted zenith angle is especially superior if clouds are responsible for much of the shortwave reflection. Use of the daytime-average zenith angle may lead to a high bias in planetary albedo of ~3%, equivalent to a deficit in shortwave absorption of 10 W m-2 in the global energy budget (comparable to the radiative forcing of a roughly sixfold change in CO2 concentration). Other studies that have used general circulation models with spatially constant insolation have underestimated the global-mean zenith angle, with a consequent low bias in planetary albedo of ~2-6%, or a surplus in shortwave absorption of ~7-20 W m-2 in the global energy budget. I also discuss how a simple time-varying solar zenith angle could be used to minimize zenith angle-related biases in albedo for models of global climate that choose to spatially homogenize insolation.
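The distinction between the daytime-average and insolation-weighted zenith angles can be illustrated with a small numerical sketch (hourly sampling of a simple diurnal cycle; this is an assumption for illustration, not the paper's radiative transfer setup). The insolation-weighted mean cosine weights each moment by the insolation itself, i.e. by cos(zenith).

```python
import math

def mean_cosines(cos_zeniths):
    """Daytime-average and insolation-weighted mean cosines of the
    solar zenith angle from sampled values; entries <= 0 are night."""
    day = [mu for mu in cos_zeniths if mu > 0]
    mu_day = sum(day) / len(day)                    # daytime average
    mu_ins = sum(mu * mu for mu in day) / sum(day)  # insolation-weighted
    return mu_day, mu_ins

# Equinox at the equator: cos(zenith) = cos(hour angle), hourly samples
samples = [math.cos(math.pi * (t / 12.0 - 1.0)) for t in range(24)]
mu_day, mu_ins = mean_cosines(samples)
# Continuous-case values are 2/pi (daytime average) and pi/4
# (insolation-weighted): insolation weighting emphasizes noon, so
# the insolation-weighted mean cosine is always the larger of the two,
# i.e. the insolation-weighted zenith angle is the smaller.
```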
Demonstration of a Model Averaging Capability in FRAMES
NASA Astrophysics Data System (ADS)
Meyer, P. D.; Castleton, K. J.
2009-12-01
Uncertainty in model structure can be incorporated in risk assessment using multiple alternative models and model averaging. To facilitate application of this approach to regulatory applications based on risk or dose assessment, a model averaging capability was integrated with the Framework for Risk Analysis in Multimedia Environmental Systems (FRAMES) version 2 software. FRAMES is a software platform that allows the non-parochial communication between disparate models, databases, and other frameworks. Users have the ability to implement and select environmental models for specific risk assessment and management problems. Standards are implemented so that models produce information that is readable by other downstream models and accept information from upstream models. Models can be linked across multiple media and from source terms to quantitative risk/dose estimates. Parameter sensitivity and uncertainty analysis tools are integrated. A model averaging module was implemented to accept output from multiple models and produce average results. These results can be deterministic quantities or probability distributions obtained from an analysis of parameter uncertainty. Output from alternative models is averaged using weights determined from user input and/or model calibration results. A model calibration module based on the PEST code was implemented to provide FRAMES with a general calibration capability. An application illustrates the implementation, user interfaces, execution, and results of the FRAMES model averaging capabilities.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Averaging. (a) A manufacturer may use averaging across engine families to demonstrate a zero or positive... credits obtained through trading. (b) Beginning in model year 2004, credits used to demonstrate a zero...
Code of Federal Regulations, 2013 CFR
2013-07-01
... Averaging. (a) A manufacturer may use averaging across engine families to demonstrate a zero or positive... credits obtained through trading. (b) Beginning in model year 2004, credits used to demonstrate a zero...
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2013 CFR
2013-07-01
...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2014 CFR
2014-07-01
...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...
RHIC BPM system average orbit calculations
Michnoff, R.; Cerniglia, P.; Degen, C.; Hulsart, R.; et al.
2009-05-04
The RHIC beam position monitor (BPM) system average orbit was originally calculated by averaging the positions of 10,000 consecutive turns for a single selected bunch. Known perturbations in RHIC particle trajectories, with multiple frequencies around 10 Hz, contribute to observed average orbit fluctuations. In 2006, the number of turns used for average orbit calculations was made programmable; this was used to explore averaging over single periods near 10 Hz. Although this provided an improvement in average orbit signal quality, an average over many periods would further improve the accuracy of the measured closed orbit. A new continuous average orbit calculation was developed just prior to the 2009 RHIC run and was made operational in March 2009. This paper discusses the new algorithm and its performance with beam.
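The benefit of averaging over whole periods of the ~10 Hz perturbation can be sketched numerically. The revolution frequency and unit perturbation amplitude below are illustrative assumptions, not RHIC operational parameters.

```python
import math

def average_orbit(positions, n_turns):
    """Average beam position over the first n_turns turns (sketch)."""
    return sum(positions[:n_turns]) / n_turns

# Illustrative numbers: ~78 kHz revolution frequency, 10 Hz perturbation
f_rev, f_pert = 78000.0, 10.0
turns_per_period = int(f_rev / f_pert)  # turns in one perturbation period
positions = [math.sin(2 * math.pi * f_pert * t / f_rev)
             for t in range(2 * turns_per_period)]

# Averaging over a whole period cancels the oscillation almost exactly;
# averaging over a fraction of a period leaves a large residual.
resid_whole = abs(average_orbit(positions, turns_per_period))
resid_half = abs(average_orbit(positions, turns_per_period // 2))
```

This is why making the turn count programmable, and ultimately averaging continuously over many periods, improves the measured closed orbit: any averaging window that is not an integer number of perturbation periods retains part of the 10 Hz motion.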
Designing Digital Control Systems With Averaged Measurements
NASA Technical Reports Server (NTRS)
Polites, Michael E.; Beale, Guy O.
1990-01-01
Rational criteria represent an improvement over the "cut-and-try" approach. A recent development in the theory of control systems yields improvements in the mathematical modeling and design of digital feedback controllers using time-averaged measurements. By using one of the new formulations for systems with time-averaged measurements, the designer takes the averaging effect into account when modeling the plant, eliminating the need to iterate the design and simulation phases.
Averaging and Adding in Children's Worth Judgements
ERIC Educational Resources Information Center
Schlottmann, Anne; Harman, Rachel M.; Paine, Julie
2012-01-01
Under the normative Expected Value (EV) model, multiple outcomes are additive, but in everyday worth judgement intuitive averaging prevails. Young children also use averaging in EV judgements, leading to a disordinal, crossover violation of utility when children average the part worths of simple gambles involving independent events (Schlottmann,…
Body Weight Relationships in Early Marriage: Weight Relevance, Weight Comparisons, and Weight Talk
Bove, Caron F.; Sobal, Jeffery
2011-01-01
This investigation uncovered processes underlying the dynamics of body weight and body image among individuals involved in nascent heterosexual marital relationships in Upstate New York. In-depth, semi-structured qualitative interviews conducted with 34 informants, 20 women and 14 men, just prior to marriage and again one year later were used to explore continuity and change in cognitive, affective, and behavioral factors relating to body weight and body image at the time of marriage, an important transition in the life course. Three major conceptual themes operated in the process of developing and enacting informants' body weight relationships with their partner: weight relevance, weight comparisons, and weight talk. Weight relevance encompassed the changing significance of weight during early marriage and included attracting and capturing a mate, relaxing about weight, living healthily, and concentrating on weight. Weight comparisons between partners involved weight relativism, weight competition, weight envy, and weight role models. Weight talk employed pragmatic talk, active and passive reassurance, and complaining and critiquing. Concepts emerging from this investigation may be useful in designing future studies of and approaches to managing body weight in adulthood. PMID:21864601
Mechanisms of Weight Regain following Weight Loss
Blomain, Erik Scott; Dirhan, Dara Anne; Valentino, Michael Anthony; Kim, Gilbert Won; Waldman, Scott Arthur
2013-01-01
Obesity is a worldwide pandemic and its incidence is on the rise along with associated comorbidities. Currently, there are few effective therapies to combat obesity. The use of lifestyle modification therapy, namely, improvements in diet and exercise, is preferable over bariatric surgery or pharmacotherapy due to surgical risks and issues with drug efficacy and safety. Although they are initially successful in producing weight loss, such lifestyle intervention strategies are generally unsuccessful in achieving long-term weight maintenance, with the vast majority of obese patients regaining their lost weight during follow-up. Recently, various compensatory mechanisms have been elucidated by which the body may oppose new weight loss, and this compensation may result in weight regain back to the obese baseline. The present review summarizes the available evidence on these compensatory mechanisms, with a focus on weight loss-induced changes in energy expenditure, neuroendocrine pathways, nutrient metabolism, and gut physiology. These findings have added a major focus to the field of antiobesity research. In addition to investigating pathways that induce weight loss, the present work also focuses on pathways that may instead prevent weight regain. Such strategies will be necessary for improving long-term weight loss maintenance and outcomes for patients who struggle with obesity. PMID:24533218
Determinants of Low Birth Weight in Malawi: Bayesian Geo-Additive Modelling
Ngwira, Alfred; Stanley, Christopher C.
2015-01-01
Studies on factors of low birth weight in Malawi have neglected the flexible approach of using smooth functions for some covariates in models. Such a flexible approach reveals the detailed relationship of covariates with the response. The study aimed at investigating risk factors of low birth weight in Malawi by assuming a flexible approach for continuous covariates and a geographical random effect. A Bayesian geo-additive model for birth weight in kilograms and size of the child at birth (less than average, or average and higher), with district as a spatial effect, using the 2010 Malawi demographic and health survey data, was adopted. A Gaussian model for birth weight in kilograms and a binary logistic model for the binary outcome (size of child at birth) were fitted. Continuous covariates were modelled by penalized (p) splines and spatial effects were smoothed by the two-dimensional p-spline. The study found that child birth order and the mother's weight and height are significant predictors of birth weight. The mother's secondary education, birth order categories 2-3 and 4-5, a richer-family wealth index, and the mother's height were significant predictors of child size at birth. The area associated with low birth weight was Chitipa, and the areas with increased risk of less-than-average size at birth were Chitipa and Mchinji. The study found support for the flexible modelling of some covariates that clearly have nonlinear influences. Nevertheless, there is no strong support for inclusion of geographical spatial analysis. The spatial patterns, though, point to the influence of omitted variables with some spatial structure, or possibly epidemiological processes that account for this spatial structure, and the maps generated could be used for targeting development efforts at a glance. PMID:26114866
NASA Astrophysics Data System (ADS)
Colorado, G.; Salinas, J. A.; Cavazos, T.; de Grau, P.
2013-05-01
Precipitation simulations from 15 CMIP5 GCMs were combined in a weighted ensemble using the Reliability Ensemble Averaging (REA) method, which yields a weight for each model. This was done for a historical period (1961-2000) and for future emissions based on low (RCP4.5) and high (RCP8.5) radiative forcing for the period 2075-2099. The annual cycles of the simple ensemble mean of the historical GCM simulations, the historical REA average, and the Climate Research Unit (CRU TS3.1) database were compared in four zones of México. For precipitation, the REA method brings a clear improvement, especially in the two northern zones of México, where the REA average is closer to the observations (CRU) than the simple average. In the southern zones there is also an improvement, but not as large as in the north; in the southeast in particular, although the REA average reproduces the annual cycle qualitatively well, the mid-summer drought is greatly underestimated. The main reason is that precipitation is underestimated by all the models, and the mid-summer drought does not even exist in some of them. In the REA average of the future scenarios, as expected, the most drastic decrease in precipitation is simulated under RCP8.5, especially in the monsoon area and in the south of Mexico in summer and winter. In the center and south of Mexico, however, the same scenario simulates an increase of precipitation in autumn.
Gestational weight gain among Hispanic women.
Sangi-Haghpeykar, Haleh; Lam, Kim; Raine, Susan P
2014-01-01
To describe gestational weight gain among Hispanic women and to examine psychological, social, and cultural contexts affecting weight gain. A total of 282 Hispanic women were surveyed post-partum before leaving the hospital. Women were queried about their prepregnancy weight and weight gained during pregnancy. Adequacy of gestational weight gain was based on guidelines set by the Institute of Medicine in 2009. Independent risk factors for excessive or insufficient weight gain were examined by logistic regression. Most women were unmarried (59 %), with a mean age of 28.4 ± 6.6 years and an average weight gain of 27.9 ± 13.3 lbs. Approximately 45 % of women had gained too much, 32 % too little, and only 24 % had an adequate amount of weight gain. The mean birth weight was 7.3, 7.9, and 6.8 lbs among the adequate, excessive, and insufficient weight gain groups. Among women who exercised before pregnancy, two-thirds continued to do so during pregnancy; the mean gestational weight gain of those who continued was lower than those who stopped (26.8 vs. 31.4 lbs, p = 0.04). Independent risk factors for excessive weight gain were being unmarried, U.S. born, higher prepregnancy body mass index, and having indifferent or negative views about weight gain. Independent risk factors for insufficient weight gain were low levels of support and late initiation of prenatal care. Depression, stress, and a woman's or her partner's happiness regarding pregnancy were unrelated to weight gain. The results of this study can be used by prenatal programs to identify Hispanic women at risk for excessive or insufficient gestational weight gain. PMID:23456347
... lose the weight on your own. Many people gain and lose weight. Unintentional weight loss is loss of 10 pounds ... health care provider may suggest changes in your diet and an exercise program depending on the cause of your weight loss.
Data Point Averaging for Computational Fluid Dynamics Data
NASA Technical Reports Server (NTRS)
Norman, David, Jr. (Inventor)
2014-01-01
A system and method for generating fluid flow parameter data for use in aerodynamic heating analysis. Computational fluid dynamics data is generated for a number of points in an area on a surface to be analyzed. Sub-areas corresponding to areas of the surface for which an aerodynamic heating analysis is to be performed are identified. A computer system automatically determines a sub-set of the number of points corresponding to each of the number of sub-areas and determines a value for each of the number of sub-areas using the data for the sub-set of points corresponding to each of the number of sub-areas. The value is determined as an average of the data for the sub-set of points corresponding to each of the number of sub-areas. The resulting parameter values then may be used to perform an aerodynamic heating analysis.
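The grouping-and-averaging step can be sketched as follows. The data layout and the membership test are hypothetical illustrations of the general technique, not the patented system's interfaces.

```python
def subarea_averages(points, subareas, in_subarea):
    """Average the CFD parameter value over the subset of points
    that falls inside each sub-area (sketch).

    points:     sequence of (coordinates, parameter value) pairs
    subareas:   sequence of sub-area descriptions
    in_subarea: predicate deciding whether coordinates lie in a sub-area
    """
    result = []
    for sa in subareas:
        vals = [value for coords, value in points if in_subarea(coords, sa)]
        result.append(sum(vals) / len(vals) if vals else None)
    return result

# Points as ((x, y), parameter value); sub-areas as x-intervals
points = [((0.1, 0.0), 10.0), ((0.2, 0.0), 20.0), ((0.8, 0.0), 30.0)]
subareas = [(0.0, 0.5), (0.5, 1.0)]
inside = lambda coords, sa: sa[0] <= coords[0] < sa[1]
averages = subarea_averages(points, subareas, inside)
```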
PREVENTING WEIGHT REGAIN AFTER WEIGHT LOSS
Technology Transfer Automated Retrieval System (TEKTRAN)
For most dieters, a regaining of lost weight is an all too common experience. Indeed, virtually all interventions for weight loss show limited or even poor long-term effectiveness. This sobering reality was reflected in a comprehensive review of nonsurgical treatments of obesity conducted by the Ins...
The average distances in random graphs with given expected degrees
Chung, Fan; Lu, Linyuan
2002-01-01
Random graph theory is used to examine the “small-world phenomenon”; any two strangers are connected through a short chain of mutual acquaintances. We will show that for certain families of random graphs with given expected degrees the average distance is almost surely of order log n/log d̃, where d̃ is the weighted average of the sum of squares of the expected degrees. Of particular interest are power law random graphs in which the number of vertices of degree k is proportional to 1/k^β for some fixed exponent β. For the case of β > 3, we prove that the average distance of the power law graphs is almost surely of order log n/log d̃. However, many Internet, social, and citation networks are power law graphs with exponents in the range 2 < β < 3 for which the power law random graphs have average distance almost surely of order log log n, but have diameter of order log n (provided having some mild constraints for the average distance and maximum degree). In particular, these graphs contain a dense subgraph, which we call the core, having n^(c/log log n) vertices. Almost all vertices are within distance log log n of the core although there are vertices at distance log n from the core. PMID:12466502
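The quantity d̃ and the resulting distance estimate can be computed directly. A minimal sketch under the abstract's definition (d̃ is the sum of squares of the expected degrees divided by their sum, i.e. the degree-weighted average degree):

```python
import math

def predicted_average_distance(expected_degrees):
    """log n / log d_tilde, where d_tilde is the weighted average
    of the expected degrees, weighted by the degrees themselves."""
    n = len(expected_degrees)
    d_tilde = sum(d * d for d in expected_degrees) / sum(expected_degrees)
    return math.log(n) / math.log(d_tilde)

# A regular expected-degree sequence recovers log n / log d
dist = predicted_average_distance([4.0] * 1000)
```

For heterogeneous degree sequences d̃ exceeds the plain mean degree, so high-degree vertices shorten the predicted distances, which is the mechanism behind the ultra-small distances in the 2 < β < 3 regime.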
Average-cost based robust structural control
NASA Technical Reports Server (NTRS)
Hagood, Nesbitt W.
1993-01-01
A method is presented for the synthesis of robust controllers for linear time invariant structural systems with parameterized uncertainty. The method involves minimizing quantities related to the quadratic cost (H2-norm) averaged over a set of systems described by real parameters such as natural frequencies and modal residues. Bounded average cost is shown to imply stability over the set of systems. Approximations for the exact average are derived and proposed as cost functionals. The properties of these approximate average cost functionals are established. The exact average and approximate average cost functionals are used to derive dynamic controllers which can provide stability robustness. The robustness properties of these controllers are demonstrated in illustrative numerical examples and tested in a simple SISO experiment on the MIT multi-point alignment testbed.
Calculating ensemble averaged descriptions of protein rigidity without sampling.
González, Luis C; Wang, Hui; Livesay, Dennis R; Jacobs, Donald J
2012-01-01
Previous works have demonstrated that protein rigidity is related to thermodynamic stability, especially under conditions that favor formation of native structure. Mechanical network rigidity properties of a single conformation are efficiently calculated using the integer body-bar Pebble Game (PG) algorithm. However, thermodynamic properties require averaging over many samples from the ensemble of accessible conformations to accurately account for fluctuations in network topology. We have developed a mean field Virtual Pebble Game (VPG) that represents the ensemble of networks by a single effective network. That is, the number of distance constraints (or bars) that can form between a pair of rigid bodies is replaced by its ensemble average. The resulting effective network is viewed as having weighted edges, where the weight of an edge quantifies its capacity to absorb degrees of freedom. The VPG is interpreted as a flow problem on this effective network, which eliminates the need to sample. Across a nonredundant dataset of 272 protein structures, we apply the VPG to proteins for the first time. Our results show numerically and visually that the rigidity characterizations of the VPG accurately reflect the ensemble averaged [Formula: see text] properties. This result positions the VPG as an efficient alternative for understanding the mechanical role that chemical interactions play in maintaining protein stability. PMID:22383947
Spectral and parametric averaging for integrable systems
NASA Astrophysics Data System (ADS)
Ma, Tao; Serota, R. A.
2015-05-01
We analyze two theoretical approaches to ensemble averaging for integrable systems in quantum chaos: spectral averaging (SA) and parametric averaging (PA). For SA, we introduce a new procedure, namely rescaled spectral averaging (RSA). Unlike traditional SA, it can describe the correlation function of the spectral staircase (CFSS) and produce persistent oscillations of the interval level number variance (IV). PA, while not as accurate as RSA for the CFSS and IV, can also produce persistent oscillations of the global level number variance (GV) and better describes the saturation level rigidity as a function of the running energy. Overall, it is the most reliable method for a wide range of statistics.
General periodic average structures of decagonal quasicrystals.
Cervellino, Antonio; Steurer, Walter
2002-03-01
The concept of periodic average structure is borrowed from the theory of incommensurately modulated structures. For quasicrystals, this concept (up to now explored only in a few cases) is becoming increasingly useful for understanding their properties and interpreting some important structural features. The peculiar property of quasicrystals is that they admit not one but infinitely many possible different average structures; few of them, however, are meaningful. Here we give a simple method (based on reciprocal space) for generating all the possible periodic average structures of decagonal quasicrystals, together with some new ideas about their meaning. By this method, the most significant average structures can be recognized from the diffraction pattern. PMID:11832588
Cell averaging Chebyshev methods for hyperbolic problems
NASA Technical Reports Server (NTRS)
Wei, Cai; Gottlieb, David; Harten, Ami
1990-01-01
A cell averaging method for the Chebyshev approximations of first order hyperbolic equations in conservation form is described. Formulas are presented for transforming between pointwise data at the collocation points and cell averaged quantities, and vice versa. This step, trivial for the finite difference and Fourier methods, is nontrivial for the global polynomials used in spectral methods. The cell averaging methods presented are proven stable for linear scalar hyperbolic equations, and numerical simulations of shock-density wave interaction using the new cell averaging Chebyshev methods are presented.
Statistics of time averaged atmospheric scintillation
Stroud, P.
1994-02-01
A formulation has been constructed to recover the statistics of the moving average of the scintillation Strehl from a discrete set of measurements. A program of airborne atmospheric propagation measurements was analyzed to find the correlation function of the relative intensity over displaced propagation paths. The variance in continuous moving averages of the relative intensity was then found in terms of the correlation functions. An empirical formulation of the variance of the continuous moving average of the scintillation Strehl has been constructed. The resulting characterization of the variance of the finite time averaged Strehl ratios is being used to assess the performance of an airborne laser system.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-14
... Dumping Margin and Assessment Rate in Certain Antidumping Duty Proceedings, 75 FR 81533 (December 28, 2010... Duty Proceedings, 76 FR 5518 (Feb. 1, 2011). In September, 2011, pursuant to section 123(g)(1)(D) of..., 71 FR 77722 (Dec. 27, 2006) (``Final Modification for Investigations''). The Department has...
ERIC Educational Resources Information Center
Jenny, Mirjam A.; Rieskamp, Jörg; Nilsson, Håkan
2014-01-01
Judging whether multiple events will co-occur is an important aspect of everyday decision making. The underlying probabilities of occurrence are usually unknown and have to be inferred from experience. Using a rigorous, quantitative model comparison, we investigate how people judge the conjunctive probabilities of multiple events to co-occur. In 2…
Implications of the method of capital cost payment on the weighted average cost of capital.
Boles, K E
1986-01-01
The author develops a theoretical and mathematical model, based on published financial management literature, to describe the cost of capital structure for health care delivery entities. This model is then used to generate the implications of changing the capital cost reimbursement mechanism from a cost basis to a prospective basis. The implications are that the cost of capital is increased substantially, the use of debt must be restricted, interest rates for borrowed funds will increase, and, initially, firms utilizing debt efficiently under cost-basis reimbursement will be restricted to the generation of funds from equity only under a prospective system. PMID:3525468
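The weighted average cost of capital underlying this model follows the standard after-tax form WACC = (E/V)·r_e + (D/V)·r_d·(1 - T). A minimal sketch (the rates, tax rate, and capital mix below are hypothetical illustrative figures, not values from the paper):

```python
def wacc(equity_value, debt_value, cost_of_equity, cost_of_debt, tax_rate):
    """Standard after-tax weighted average cost of capital."""
    v = equity_value + debt_value
    return (equity_value / v) * cost_of_equity + \
           (debt_value / v) * cost_of_debt * (1.0 - tax_rate)

# Hypothetical provider: 60% equity at 14%, 40% debt at 8%, 30% tax rate.
rate = wacc(60.0, 40.0, 0.14, 0.08, 0.30)        # -> 0.1064

# If debt use is restricted (the paper's prospective-payment implication),
# the blended rate rises toward the pure cost of equity.
all_equity = wacc(100.0, 0.0, 0.14, 0.08, 0.30)  # -> 0.14
```

The comparison illustrates the abstract's point: forcing a shift away from cheap after-tax debt toward equity-only financing raises the overall cost of capital.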
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-01
... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF COMMERCE... Administration, International Trade Administration, Department of Commerce. ACTION: Proposed rule; proposed modification; extension of comment period. SUMMARY: On December 28, 2010, the Department of Commerce...
Cernicchiaro, N; Renter, D G; Xiang, S; White, B J; Bello, N M
2013-06-01
Variability in ADG of feedlot cattle can affect profits, thus making overall returns more unstable. Hence, knowledge of the factors that contribute to heterogeneity of variances in animal performance can help feedlot managers evaluate risks and minimize profit volatility when making managerial and economic decisions in commercial feedlots. The objectives of the present study were to evaluate heteroskedasticity, defined as heterogeneity of variances, in ADG of cohorts of commercial feedlot cattle, and to identify cattle demographic factors at feedlot arrival as potential sources of variance heterogeneity, accounting for cohort- and feedlot-level information in the data structure. An operational dataset compiled from 24,050 cohorts from 25 U. S. commercial feedlots in 2005 and 2006 was used for this study. Inference was based on a hierarchical Bayesian model implemented with Markov chain Monte Carlo, whereby cohorts were modeled at the residual level and feedlot-year clusters were modeled as random effects. Forward model selection based on deviance information criteria was used to screen potentially important explanatory variables for heteroskedasticity at cohort- and feedlot-year levels. The Bayesian modeling framework was preferred as it naturally accommodates the inherently hierarchical structure of feedlot data whereby cohorts are nested within feedlot-year clusters. Evidence for heterogeneity of variance components of ADG was substantial and primarily concentrated at the cohort level. Feedlot-year specific effects were, by far, the greatest contributors to ADG heteroskedasticity among cohorts, with an estimated ∼12-fold change in dispersion between most and least extreme feedlot-year clusters. In addition, identifiable demographic factors associated with greater heterogeneity of cohort-level variance included smaller cohort sizes, fewer days on feed, and greater arrival BW, as well as feedlot arrival during summer months. 
These results support that heterogeneity of variances in ADG is prevalent in feedlot performance and indicate potential sources of heteroskedasticity. Further investigation of factors associated with heteroskedasticity in feedlot performance is warranted to increase consistency and uniformity in commercial beef cattle production and subsequent profitability. PMID:23482583
40 CFR 63.10009 - May I use emissions averaging to comply with this subpart?
Code of Federal Regulations, 2012 CFR
2012-07-01
...) You may choose to have your EGU emissions averaging group meet either the heat input basis (MMBtu or... equations. ER16FE12.003 Where: WAERm = Weighted average emissions rate maximum in terms of lb/heat input or... sorbent trap monitoring for hour i, Rmmi = Maximum rated heat input or gross electrical output of unit...
40 CFR 63.5710 - How do I demonstrate compliance using emissions averaging?
Code of Federal Regulations, 2013 CFR
2013-07-01
... Standards for Open Molding Resin and Gel Coat Operations § 63.5710 How do I demonstrate compliance using... open molding resin and gel coat operation included in the average. ER22AU01.013 Where: PVOP=weighted-average MACT model point value for each open molding operation (PVR, PVPG, PVCG, PVTR, and PVTG)...
40 CFR 63.5710 - How do I demonstrate compliance using emissions averaging?
Code of Federal Regulations, 2014 CFR
2014-07-01
... Standards for Open Molding Resin and Gel Coat Operations § 63.5710 How do I demonstrate compliance using... open molding resin and gel coat operation included in the average. ER22AU01.013 Where: PVOP = weighted-average MACT model point value for each open molding operation (PVR, PVPG, PVCG, PVTR, and PVTG)...
40 CFR 63.5710 - How do I demonstrate compliance using emissions averaging?
Code of Federal Regulations, 2012 CFR
2012-07-01
... Standards for Open Molding Resin and Gel Coat Operations § 63.5710 How do I demonstrate compliance using... open molding resin and gel coat operation included in the average. ER22AU01.013 Where: PVOP=weighted-average MACT model point value for each open molding operation (PVR, PVPG, PVCG, PVTR, and PVTG)...
NASA Astrophysics Data System (ADS)
Arsenault, Richard; Gatien, Philippe; Renaud, Benoit; Brissette, François; Martel, Jean-Luc
2015-10-01
This study aims to test whether a weighted combination of several hydrological models can simulate flows more accurately than the models taken individually. In addition, the project attempts to identify the most efficient model averaging method and the optimal number of models to include in the weighting scheme. In order to address the first objective, streamflow was simulated using four lumped hydrological models (HSAMI, HMETS, MOHYSE and GR4J-6), each of which was calibrated with three different objective functions on 429 watersheds. The resulting 12 hydrographs (4 models × 3 metrics) were weighted and combined with the help of nine averaging methods: the simple arithmetic mean (SAM), Akaike information criterion (AICA), Bates-Granger (BGA), Bayes information criterion (BICA), Bayesian model averaging (BMA), Granger-Ramanathan average variants A, B and C (GRA, GRB and GRC), and the average by SCE-UA optimization (SCA). The same weights were then applied to the hydrographs in validation mode, and the Nash-Sutcliffe Efficiency metric was measured between the averaged and observed hydrographs. Statistical analyses were performed to compare the accuracy of weighted methods to that of individual models. A Kruskal-Wallis test and a multi-objective optimization algorithm were then used to identify the most efficient weighted method and the optimal number of models to integrate. Results suggest that the GRA, GRB, GRC and SCA weighted methods perform better than the individual members. Model averaging with these four methods was superior to the best of the individual members in 76% of the cases. Optimal combinations on all watersheds included at least one of each of the four hydrological models. None of the optimal combinations included all members of the ensemble of 12 hydrographs. The Granger-Ramanathan average variant C (GRC) is recommended as the best compromise between accuracy, speed of execution, and simplicity.
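Granger-Ramanathan averaging (in its unconstrained variant A) amounts to least-squares weights of the observed hydrograph on the member simulations. A minimal sketch with synthetic data (the gamma-distributed "flows" and noise levels are illustrative assumptions, not the study's data):

```python
import numpy as np

def granger_ramanathan_weights(simulated, observed):
    """Variant-A Granger-Ramanathan weights: unconstrained least squares
    of the observed flows on the member simulations (columns)."""
    w, *_ = np.linalg.lstsq(simulated, observed, rcond=None)
    return w

def nse(sim, obs):
    """Nash-Sutcliffe Efficiency: 1 - SSE / variance of the observations."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 10.0, size=500)           # synthetic "observed" flows
members = np.column_stack([                    # three imperfect members
    0.8 * obs + rng.normal(0, 3, 500),
    1.1 * obs + rng.normal(0, 5, 500),
    obs + rng.normal(0, 8, 500),
])
w = granger_ramanathan_weights(members, obs)
combined = members @ w
best_single = max(nse(members[:, k], obs) for k in range(3))
```

In calibration (in-sample), the least-squares combination can never score a lower NSE than any single member, which is why these variants tend to outperform individual models; validation performance, as the study measures, is the harder test.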
Dynamic Multiscale Averaging (DMA) of Turbulent Flow
Richard W. Johnson
2012-09-01
A new approach called dynamic multiscale averaging (DMA) for computing the effects of turbulent flow is described. The new method encompasses multiple applications of temporal and spatial averaging, that is, multiscale operations. Initially, a direct numerical simulation (DNS) is performed for a relatively short time; it is envisioned that this short time should be long enough to capture several fluctuating time periods of the smallest scales. The flow field variables are subject to running time averaging during the DNS. After the relatively short time, the time-averaged variables are volume averaged onto a coarser grid. Both time and volume averaging of the describing equations generate correlations in the averaged equations. These correlations are computed from the flow field and added as source terms to the computation on the next coarser mesh. They represent coupling between the two adjacent scales. Since they are computed directly from first principles, there is no modeling involved. However, there is approximation involved in the coupling correlations as the flow field has been computed for only a relatively short time. After the time and spatial averaging operations are applied at a given stage, new computations are performed on the next coarser mesh using a larger time step. The process continues until the coarsest scale needed is reached. New correlations are created for each averaging procedure. The number of averaging operations needed is expected to be problem dependent. The new DMA approach is applied to a relatively low Reynolds number flow in a square duct segment. Time-averaged stream-wise velocity and vorticity contours from the DMA approach appear to be very similar to a full DNS for a similar flow reported in the literature. Expected symmetry for the final results is produced for the DMA method. 
The results obtained indicate that DMA holds significant potential in being able to accurately compute turbulent flow without modeling for practical engineering applications.
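The two averaging operations and the correlation term they generate can be illustrated on a toy 1D scalar field. This is only a schematic of the bookkeeping (running time average, 2x volume average, correlation computed from the fine field), not the DMA method itself:

```python
import numpy as np

rng = np.random.default_rng(1)
nx, nsteps = 64, 200
u = np.sin(np.linspace(0, 2 * np.pi, nx, endpoint=False))

# 1) Running time averages accumulated during the fine-scale ("DNS") stage.
u_sum = np.zeros(nx)
uu_sum = np.zeros(nx)
for _ in range(nsteps):
    u_fluct = u + 0.1 * rng.standard_normal(nx)  # stand-in for resolved turbulence
    u_sum += u_fluct
    uu_sum += u_fluct ** 2
u_bar = u_sum / nsteps
uu_bar = uu_sum / nsteps

# 2) Volume-average the time-averaged field onto a grid coarsened by 2.
u_coarse = u_bar.reshape(-1, 2).mean(axis=1)

# 3) The averaging generates a correlation (here <u'u'> = <uu> - <u><u>)
#    computed directly from the fine field, with no turbulence model;
#    it would enter the coarse-grid equations as a source term.
corr_fine = uu_bar - u_bar ** 2
corr_coarse = corr_fine.reshape(-1, 2).mean(axis=1)
```

With fluctuation standard deviation 0.1, the computed correlation term sits near 0.01 everywhere, as expected for the sample variance of the imposed noise.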
Mood and Weight Loss in a Behavioral Treatment Program.
ERIC Educational Resources Information Center
Wing, Rena R.; And Others
1983-01-01
Evaluated the relationship between mood and weight loss for 76 patients participating in two consecutive behavioral treatment programs. Weight losses averaged 12.2 pounds (5.55 kg) during the 10-week program. Positive changes in mood were reported during this interval, and these changes appeared to be related to changes in weight. (Author/RC)
Determinants of College Grade Point Averages
ERIC Educational Resources Information Center
Bailey, Paul Dean
2012-01-01
Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even from continued enrollment at a university. However, GPAs are determined not only by student ability but also by…
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 21 2012-07-01 2012-07-01 false Averaging. 90.204 Section 90.204 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NONROAD SPARK-IGNITION ENGINES AT OR BELOW 19 KILOWATTS Certification Averaging, Banking,...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Averaging. 90.204 Section 90.204 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NONROAD SPARK-IGNITION ENGINES AT OR BELOW 19 KILOWATTS Certification Averaging, Banking,...
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 21 2013-07-01 2013-07-01 false Averaging. 90.204 Section 90.204 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NONROAD SPARK-IGNITION ENGINES AT OR BELOW 19 KILOWATTS Certification Averaging, Banking,...
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 20 2011-07-01 2011-07-01 false Averaging. 90.204 Section 90.204 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NONROAD SPARK-IGNITION ENGINES AT OR BELOW 19 KILOWATTS Certification Averaging, Banking,...
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 20 2014-07-01 2013-07-01 true Averaging. 90.204 Section 90.204 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NONROAD SPARK-IGNITION ENGINES AT OR BELOW 19 KILOWATTS Certification Averaging, Banking,...
Average Transmission Probability of a Random Stack
ERIC Educational Resources Information Center
Lu, Yin; Miniatura, Christian; Englert, Berthold-Georg
2010-01-01
The transmission through a stack of identical slabs that are separated by gaps with random widths is usually treated by calculating the average of the logarithm of the transmission probability. We show how to calculate the average of the transmission probability itself with the aid of a recurrence relation and derive analytical upper and lower…
A note on generalized averaged Gaussian formulas
NASA Astrophysics Data System (ADS)
Spalevic, Miodrag
2007-11-01
We have recently proposed a very simple numerical method for constructing the averaged Gaussian quadrature formulas. These formulas exist in many more cases than the real positive Gauss-Kronrod formulas. In this note we try to answer whether the averaged Gaussian formulas are an adequate alternative to the corresponding Gauss-Kronrod quadrature formulas, to estimate the remainder term of a Gaussian rule.
40 CFR 63.846 - Emission averaging.
Code of Federal Regulations, 2010 CFR
2010-07-01
... operator may average TF emissions from potlines and demonstrate compliance with the limits in Table 1 of... operator also may average POM emissions from potlines and demonstrate compliance with the limits in Table 2... limit in Table 1 of this subpart (for TF emissions) and/or Table 2 of this subpart (for POM...
40 CFR 63.846 - Emission averaging.
Code of Federal Regulations, 2013 CFR
2013-07-01
... operator may average TF emissions from potlines and demonstrate compliance with the limits in Table 1 of... operator also may average POM emissions from potlines and demonstrate compliance with the limits in Table 2... limit in Table 1 of this subpart (for TF emissions) and/or Table 2 of this subpart (for POM...
40 CFR 63.846 - Emission averaging.
Code of Federal Regulations, 2011 CFR
2011-07-01
... operator may average TF emissions from potlines and demonstrate compliance with the limits in Table 1 of... operator also may average POM emissions from potlines and demonstrate compliance with the limits in Table 2... limit in Table 1 of this subpart (for TF emissions) and/or Table 2 of this subpart (for POM...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Averaging. 91.204 Section 91.204 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM MARINE SPARK-IGNITION ENGINES Averaging, Banking, and Trading Provisions § 91.204...
The Hubble rate in averaged cosmology
Umeh, Obinna; Larena, Julien; Clarkson, Chris E-mail: julien.larena@gmail.com
2011-03-01
The calculation of the averaged Hubble expansion rate in an averaged perturbed Friedmann-Lemaître-Robertson-Walker cosmology leads to small corrections to the background value of the expansion rate, which could be important for measuring the Hubble constant from local observations. It also predicts an intrinsic variance associated with the finite scale of any measurement of H{sub 0}, the Hubble rate today. Both the mean Hubble rate and its variance depend on both the definition of the Hubble rate and the spatial surface on which the average is performed. We quantitatively study different definitions of the averaged Hubble rate encountered in the literature by consistently calculating the backreaction effect at second order in perturbation theory, and compare the results. We employ for the first time a recently developed gauge-invariant definition of an averaged scalar. We also discuss the variance of the Hubble rate for the different definitions.
Evaluating template bias when synthesizing population averages
NASA Astrophysics Data System (ADS)
Carlson, Blake L.; Christensen, Gary E.; Johnson, Hans J.; Vannier, Michael W.
2001-07-01
Establishing the average shape and spatial variability for a set of similar anatomical objects is important for detecting and discriminating morphological differences between populations. This may be done using deformable templates to synthesize a 3D CT/MRI image of the average anatomy from a set of CT/MRI images collected from a population of similar anatomical objects. This paper investigates the error associated with the choice of template selected from the population used to synthesize the average population shape. Population averages were synthesized for a population of five infant skulls with sagittal synostosis and a population of six normal adult brains using a consistent linear-elastic image registration algorithm. Each data set from the populations was used as the template to synthesize a population average. This resulted in five different population averages for the skull population and six different population averages for the brain population. The displacement variance distance from a skull within the population to the other skulls in the population ranged from 5.5 to 9.9 mm² while the displacement variance distance from the synthesized average skulls to the population ranged from 2.2 to 2.7 mm². The displacement variance distance from a brain within the population to the other brains in the population ranged from 9.3 to 14.2 mm² while the displacement variance distance from the synthesized average brains to the population ranged from 3.2 to 3.6 mm². These results suggest that there was no significant difference between the choice of template with respect to the shape of the synthesized average data set for these two populations.
Short-Term Auditory Memory of Above-Average and Below-Average Grade Three Readers.
ERIC Educational Resources Information Center
Caruk, Joan Marie
To determine if performance on short term auditory memory tasks is influenced by reading ability or sex differences, 62 third grade reading students (16 above average boys, 16 above average girls, 16 below average boys, and 14 below average girls) were administered four memory tests--memory for consonant names, memory for words, memory for…
Practical applications of averages and differences of Friedel opposites.
Flack, H D; Sadki, M; Thompson, A L; Watkin, D J
2011-01-01
The practical use of the average and difference intensities of Friedel opposites at different stages of structure analysis has been investigated. It is shown how these values may be properly and practically used at the stage of space-group determination. At the stage of least-squares refinement, it is shown that increasing the weight of the difference intensities does not improve their fit to the model. The correct form of the coefficients for a difference electron-density calculation is given. In the process of structure validation, it is further shown that plots of the observed and model difference intensities provide an objective method to evaluate the fit of the data to the model and to reveal insufficiencies in the intensity measurements. As a further tool for the validation of structure determinations, the use of the Patterson functions of the average and difference intensities has been investigated and their clear advantage demonstrated. PMID:21173470
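The average and difference intensities of a Friedel pair are simple arithmetic on the two measured intensities. A minimal sketch using the conventional definitions A = (I(hkl) + I(-h-k-l))/2 and D = I(hkl) - I(-h-k-l) (the numeric intensities below are hypothetical):

```python
def friedel_average_difference(i_plus, i_minus):
    """Average and difference intensities of a Friedel pair:
    A = (I(hkl) + I(-h-k-l)) / 2,  D = I(hkl) - I(-h-k-l)."""
    a = 0.5 * (i_plus + i_minus)
    d = i_plus - i_minus
    return a, d

# Hypothetical Friedel pair of measured intensities.
a, d = friedel_average_difference(104.0, 96.0)   # a = 100.0, d = 8.0
```

As the abstract notes, the small D values carry the resonant-scattering signal, which is why plots of observed versus model D provide a sensitive validation tool.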
Several weight-loss medicines are available. Ask your health care provider if any are right for you. About 5 ... by taking these medicines. But not everyone loses weight while taking them. Most people also regain the ...
NASA Astrophysics Data System (ADS)
Farkas, Illés; Ábel, Dániel; Palla, Gergely; Vicsek, Tamás
2007-06-01
The inclusion of link weights into the analysis of network properties allows a deeper insight into the (often overlapping) modular structure of real-world webs. We introduce a clustering algorithm, the clique percolation method with weights (CPMw), for weighted networks, based on the concept of percolating k-cliques with high enough intensity. The algorithm allows overlaps between the modules. First, we give detailed analytical and numerical results about the critical point of weighted k-clique percolation on (weighted) Erdős-Rényi graphs. Then, for a scientist collaboration web and a stock correlation graph we compute three-link weight correlations and, with the CPMw, the weighted modules. After reshuffling link weights in both networks and computing the same quantities for the randomized control graphs as well, we show that groups of three or more strong links prefer to cluster together in both original graphs.
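The "high enough intensity" criterion can be sketched concretely. A minimal illustration, assuming the Onnela-style subgraph intensity (the geometric mean of a clique's link weights); the exact CPMw criterion in the paper may differ in detail:

```python
import math
from itertools import combinations

def clique_intensity(weights):
    """Geometric mean of a clique's link weights (Onnela-style subgraph
    intensity -- an assumption about the exact CPMw criterion)."""
    return math.prod(weights) ** (1.0 / len(weights))

def intense_k_cliques(nodes, w, k, threshold):
    """Enumerate k-cliques of a weighted graph given as a dict w[(i, j)]
    whose intensity exceeds the threshold (brute force; small graphs only)."""
    kept = []
    for clique in combinations(nodes, k):
        edge_w = [w.get((a, b), w.get((b, a), 0.0))
                  for a, b in combinations(clique, 2)]
        if all(x > 0 for x in edge_w) and clique_intensity(edge_w) > threshold:
            kept.append(clique)
    return kept

# Toy graph: a strong triangle {0,1,2} plus weak links to node 3.
w = {(0, 1): 1.0, (0, 2): 1.0, (1, 2): 1.0,
     (0, 3): 0.1, (1, 3): 0.1, (2, 3): 0.1}
kept = intense_k_cliques(range(4), w, 3, 0.5)
```

Only the strong triangle survives the threshold; modules are then built from chains of such intense k-cliques sharing k-1 nodes, exactly the percolation step the abstract describes.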
... porphyrias because reducing the intakes of carbohydrate and energy in an effort to lose weight can worsen ... attempted to lose weight rapidly with very low energy diets. Patients with acute Porphyria should avoid very ...
... help with weight loss? Studies show that anti-obesity medicines can help people lose more weight when ... html • Hormone Health Network information about hormones and obesity: www.hormone.org/Other/upload/hormones-and- obesity- ...
Light propagation in the averaged universe
Bagheri, Samae; Schwarz, Dominik J. E-mail: dschwarz@physik.uni-bielefeld.de
2014-10-01
Cosmic structures determine how light propagates through the Universe and consequently must be taken into account in the interpretation of observations. In the standard cosmological model at the largest scales, such structures are either ignored or treated as small perturbations to an isotropic and homogeneous Universe. This isotropic and homogeneous model is commonly assumed to emerge from some averaging process at the largest scales. We assume that there exists an averaging procedure that preserves the causal structure of space-time. Based on that assumption, we study the effects of averaging the geometry of space-time and derive an averaged version of the null geodesic equation of motion. For the averaged geometry we then assume a flat Friedmann-Lemaître (FL) model and find that light propagation in this averaged FL model is not given by null geodesics of that model, but rather by a modified light propagation equation that contains an effective Hubble expansion rate, which differs from the Hubble rate of the averaged space-time.
ERIC Educational Resources Information Center
Ryan, Kevin Michael
2011-01-01
Research on syllable weight in generative phonology has focused almost exclusively on systems in which weight is treated as an ordinal hierarchy of clearly delineated categories (e.g. light and heavy). As I discuss, canonical weight-sensitive phenomena in phonology, including quantitative meter and quantity-sensitive stress, can also treat weight…
NASA Technical Reports Server (NTRS)
Dyer, A., Jr.; Ferrara, P. W.; Luke, H. P.
1969-01-01
Weight Control System, a set of linked computer programs which provides weight and balance reports from magnetic tape files, provides weight control and reporting on launch vehicle programs. With minor format modifications the program is applicable to aerospace, marine, automotive and other land transportation industries.
Cosmic inhomogeneities and averaged cosmological dynamics.
Paranjape, Aseem; Singh, T P
2008-10-31
If general relativity (GR) describes the expansion of the Universe, the observed cosmic acceleration implies the existence of a "dark energy." However, while the Universe is on average homogeneous on large scales, it is inhomogeneous on smaller scales. While GR governs the dynamics of the inhomogeneous Universe, the averaged homogeneous Universe obeys modified Einstein equations. Can such modifications alone explain the acceleration? For a simple generic model with realistic initial conditions, we show the answer to be "no." Averaging effects negligibly influence the cosmological dynamics. PMID:18999811
Factor weighting in DRASTIC modeling.
Pacheco, F A L; Pires, L M G R; Santos, R M B; Sanches Fernandes, L F
2015-02-01
Evaluation of aquifer vulnerability comprehends the integration of very diverse data, including soil characteristics (texture), hydrologic settings (recharge), aquifer properties (hydraulic conductivity), environmental parameters (relief), and ground water quality (nitrate contamination). It is therefore a multi-geosphere problem to be handled by a multidisciplinary team. The DRASTIC model remains the most popular technique in use for aquifer vulnerability assessments. The algorithm calculates an intrinsic vulnerability index based on a weighted addition of seven factors. In many studies, the method is subject to adjustments, especially in the factor weights, to meet the particularities of the studied regions. However, adjustments made by different techniques may lead to markedly different vulnerabilities and hence to insecurity in the selection of an appropriate technique. This paper reports the comparison of 5 weighting techniques, an enterprise not attempted before. The studied area comprises 26 aquifer systems located in Portugal. The tested approaches include: the Delphi consensus (original DRASTIC, used as reference), Sensitivity Analysis, Spearman correlations, Logistic Regression and Correspondence Analysis (used as adjustment techniques). In all cases but Sensitivity Analysis, adjustment techniques have privileged the factors representing soil characteristics, hydrologic settings, aquifer properties and environmental parameters, by leveling their weights to ≈4.4, and have subordinated the factors describing the aquifer media by downgrading their weights to ≈1.5. Logistic Regression predicts the highest and Sensitivity Analysis the lowest vulnerabilities. Overall, the vulnerability indices may be separated by a maximum value of 51 points. This represents an uncertainty of 2.5 vulnerability classes, because they are 20 points wide. 
Given this ambiguity, the selection of a weighting technique to integrate a vulnerability index may require additional expertise to be set up satisfactorily. Following a general criterion that weights must be proportional to the range of the ratings, Correspondence Analysis may be recommended as the best adjustment technique. PMID:25461049
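Whatever weighting technique is chosen, the DRASTIC index itself is a weighted sum over the seven factor ratings. A minimal sketch with the original Delphi-consensus weights (the site ratings below are hypothetical):

```python
# Original Delphi-consensus DRASTIC factor weights: Depth to water,
# net Recharge, Aquifer media, Soil media, Topography, Impact of the
# vadose zone, hydraulic Conductivity.
WEIGHTS = {"D": 5, "R": 4, "A": 3, "S": 2, "T": 1, "I": 5, "C": 3}

def drastic_index(ratings, weights=WEIGHTS):
    """Intrinsic vulnerability index: weighted sum of the seven factor
    ratings (each rating conventionally on a 1-10 scale)."""
    return sum(weights[f] * ratings[f] for f in weights)

# Hypothetical site ratings (illustrative only).
site = {"D": 3, "R": 8, "A": 6, "S": 5, "T": 9, "I": 4, "C": 2}
index = drastic_index(site)   # -> 110
```

With ratings on a 1-10 scale the index spans 23-230, so the 51-point spread between weighting techniques reported above indeed corresponds to about 2.5 of the 20-point-wide vulnerability classes.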
Weighted Watson-Crick automata
Tamrin, Mohd Izzuddin Mohd; Turaev, Sherzod; Sembok, Tengku Mohd Tengku
2014-07-10
There is a tremendous body of work in biotechnology, especially in the area of DNA molecules. The computing community is attempting to develop smaller computing devices through computational models based on operations performed on DNA molecules. A Watson-Crick automaton, a theoretical model for DNA-based computation, has two reading heads and works on double-stranded sequences of the input related by a complementarity relation similar to the Watson-Crick complementarity of DNA nucleotides. Over time, several variants of Watson-Crick automata have been introduced and investigated. However, they cannot be used as suitable DNA-based computational models for molecular stochastic processes and fuzzy processes that are related to important practical problems such as molecular parsing, gene disease detection, and food authentication. In this paper we define new variants of Watson-Crick automata, called weighted Watson-Crick automata, developing theoretical models for molecular stochastic and fuzzy processes. We define weighted Watson-Crick automata by adapting weight restriction mechanisms associated with formal grammars and automata. We also study the generative capacities of weighted Watson-Crick automata, including probabilistic and fuzzy variants, and show that the weighted variants increase their generative power.
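A minimal sketch of the weighting idea: a single-state automaton whose two heads scan the upper and lower strands in lockstep, check the complementarity relation, and multiply a per-step weight (a probability in the stochastic variant). The weights and cutoff here are illustrative assumptions, not the paper's constructions.

```python
# Minimal weighted Watson-Crick automaton sketch: accept a double strand iff
# the strands are Watson-Crick complementary AND the product of step weights
# meets a cutoff. All weights are illustrative.

COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

# Illustrative per-base step weights (e.g. confidence of a correct pairing).
STEP_WEIGHT = {"A": 0.9, "T": 0.9, "C": 0.95, "G": 0.95}

def run_weighted_wk(upper, lower, cutoff=0.5):
    """Return (accepted, weight) for a double strand."""
    if len(upper) != len(lower):
        return False, 0.0
    weight = 1.0
    for u, l in zip(upper, lower):
        if COMPLEMENT.get(u) != l:      # complementarity relation violated
            return False, 0.0
        weight *= STEP_WEIGHT[u]        # weight restriction mechanism
    return weight >= cutoff, weight

print(run_weighted_wk("ACGT", "TGCA"))  # complementary, weight ~0.73
print(run_weighted_wk("ACGT", "ACGT"))  # not complementary
```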
Modeling Plants With Moving-Average Outputs
NASA Technical Reports Server (NTRS)
Polites, Michael E.
1989-01-01
Three discrete-state-variable representations derived. Paper discusses mathematical modeling of digital control systems for plants in which outputs include combinations of instantaneous and moving-average-prefiltered measurements.
Averaging of Fourier-Haar coefficients
Montgomery-Smith, S; Semenov, E M
1999-10-31
An operator defined by averaging of the Fourier-Haar coefficients of a function is studied. A criterion for the boundedness of such an operator acting in a pair of rearrangement-invariant spaces is derived.
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2010 CFR
2010-07-01
...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... compliance with the Acid Rain emission limitation for NOX under the plan only if the following...
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2011 CFR
2011-07-01
...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... compliance with the Acid Rain emission limitation for NOX under the plan only if the following...
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2012 CFR
2012-07-01
...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... compliance with the Acid Rain emission limitation for NOX under the plan only if the following...
Rotational averaging of multiphoton absorption cross sections
Friese, Daniel H.; Beerepoot, Maarten T. P.; Ruud, Kenneth
2014-11-28
Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.
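The simplest (rank-2) case of rotational averaging can be checked numerically: conjugating a tensor by many random rotations and averaging approaches (tr T / 3) I. The Monte Carlo sketch below only illustrates this rank-2 case; it does not reproduce the paper's analytic scheme for higher even-rank tensors.

```python
# Numerical check of rotational (isotropic) averaging for a rank-2 tensor:
# the average of R T R^T over random rotations R tends to (tr T / 3) * I.
import numpy as np

def random_rotation(rng):
    """Roughly uniform random 3x3 rotation via QR of a Gaussian matrix."""
    q, r = np.linalg.qr(rng.standard_normal((3, 3)))
    q *= np.sign(np.diag(r))        # sign fix for a proper distribution
    if np.linalg.det(q) < 0:        # ensure det = +1 (rotation, not reflection)
        q[:, 0] *= -1
    return q

def rotational_average(T, n=20000, seed=0):
    rng = np.random.default_rng(seed)
    acc = np.zeros((3, 3))
    for _ in range(n):
        R = random_rotation(rng)
        acc += R @ T @ R.T
    return acc / n

T = np.diag([1.0, 2.0, 3.0])
avg = rotational_average(T)
# Expect approximately (tr T / 3) * I = 2 * I
print(np.round(avg, 2))
```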
Orbit-averaged implicit particle codes
NASA Astrophysics Data System (ADS)
Cohen, B. I.; Freis, R. P.; Thomas, V.
1982-03-01
The merging of orbit-averaged particle code techniques with recently developed implicit methods to perform numerically stable and accurate particle simulations is reported. Implicitness and orbit averaging can extend the applicability of particle codes to the simulation of long time-scale plasma physics phenomena by relaxing time-step and statistical constraints. Difference equations for an electrostatic model are presented, and analyses of the numerical stability of each scheme are given. Simulation examples are presented for a one-dimensional electrostatic model. Schemes are constructed that are stable at large time step, require fewer particles, and hence reduce input-output and memory requirements. Orbit averaging, however, in the unmagnetized electrostatic models tested so far is not as successful as in cases where there is a magnetic field. Methods are suggested in which orbit averaging should achieve more significant improvements in code efficiency.
Spacetime Average Density (SAD) cosmological measures
Page, Don N.
2014-11-01
The measure problem of cosmology is how to obtain normalized probabilities of observations from the quantum state of the universe. This is particularly a problem when eternal inflation leads to a universe of unbounded size so that there are apparently infinitely many realizations or occurrences of observations of each of many different kinds or types, making the ratios ambiguous. There is also the danger of domination by Boltzmann Brains. Here two new Spacetime Average Density (SAD) measures are proposed, Maximal Average Density (MAD) and Biased Average Density (BAD), for getting a finite number of observation occurrences by using properties of the Spacetime Average Density (SAD) of observation occurrences to restrict to finite regions of spacetimes that have a preferred beginning or bounce hypersurface. These measures avoid Boltzmann brain domination and appear to give results consistent with other observations that are problematic for other widely used measures, such as the observation of a positive cosmological constant.
Effects of wildfire disaster exposure on male birth weight in an Australian population
O’Donnell, M. H.; Behie, A. M.
2015-01-01
Background and objectives: Maternal stress can depress birth weight and gestational age, with potential health effects. A growing number of studies examine the effect of maternal stress caused by environmental disasters on birth outcomes. These changes may indicate an adaptive response. In this study, we examine the effects of maternal exposure to wildfire on birth weight and gestational age, hypothesising that maternal stress will negatively influence these measures. Methodology: Using data from the Australian Capital Territory, we employed Analysis of Variance to examine the influence of the 2003 Canberra wildfires on the weight of babies born to mothers resident in fire-affected regions, while considering the role of other factors. Results: We found that male infants born in the most severely fire-affected area had significantly higher average birth weights than their less exposed peers and were also heavier than males born in the same areas in non-fire years. Higher average weights were attributable to an increase in the number of macrosomic infants. There was no significant effect on the weight of female infants or on gestational age for either sex. Conclusions and implications: Our findings indicate heightened environmental responsivity in the male cohort. We find that elevated maternal stress acted to accelerate the growth of male fetuses, potentially through an elevation of maternal blood glucose levels. Like previous studies, our work finds effects of disaster exposure and suggests that fetal growth patterns respond to maternal signals. However, the direction of the change in birth weight is opposite to that of many earlier studies. PMID:26574560
Monthly average polar sea-ice concentration
Schweitzer, Peter N.
1995-01-01
The data contained in this CD-ROM depict monthly averages of sea-ice concentration in the modern polar oceans. These averages were derived from the Scanning Multichannel Microwave Radiometer (SMMR) and Special Sensor Microwave/Imager (SSM/I) instruments aboard satellites of the U.S. Air Force Defense Meteorological Satellite Program from 1978 through 1992. The data are provided as 8-bit images using the Hierarchical Data Format (HDF) developed by the National Center for Supercomputing Applications.
Instrument to average 100 data sets
NASA Technical Reports Server (NTRS)
Tuma, G. B.; Birchenough, A. G.; Rice, W. J.
1977-01-01
An instrumentation system is currently under development which will measure many of the important parameters associated with the operation of an internal combustion engine. Some of these parameters include mass-fraction burn rate, ignition energy, and the indicated mean effective pressure. One of the characteristics of an internal combustion engine is the cycle-to-cycle variation of these parameters. A curve-averaging instrument has been produced which will generate the average curve, over 100 cycles, of any engine parameter. The average curve is described by 2048 discrete points which are displayed on an oscilloscope screen to facilitate recording and is available in real time. Input can be any parameter which is expressed as a ±10-volt signal. Operation of the curve-averaging instrument is defined between 100 and 6000 rpm. Provisions have also been made for averaging as many as four parameters simultaneously, with a subsequent decrease in resolution. This provides the means to correlate and perhaps interrelate the phenomena occurring in an internal combustion engine. This instrument has been used successfully on a 1975 Chevrolet V8 engine, and on a Continental 6-cylinder aircraft engine. While this instrument was designed for use on an internal combustion engine, with some modification it can be used to average any cyclically varying waveform.
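The point-by-point averaging the instrument performs can be sketched in software: resample each cycle onto the 2048-point grid, then average across 100 cycles. The waveform below is synthetic, not engine data.

```python
# Sketch of curve averaging: resample each cycle to 2048 discrete points,
# then average point-by-point over 100 cycles to suppress cycle-to-cycle noise.
import numpy as np

N_POINTS, N_CYCLES = 2048, 100

def average_curve(cycles):
    """cycles: list of 1-D arrays, one per cycle (lengths may differ).
    Each is resampled to N_POINTS, then averaged point-by-point."""
    grid = np.linspace(0.0, 1.0, N_POINTS)
    resampled = [np.interp(grid, np.linspace(0.0, 1.0, len(c)), c)
                 for c in cycles]
    return np.mean(resampled, axis=0)

# Synthetic cycles: a common pressure-like pulse plus per-cycle noise.
rng = np.random.default_rng(1)
base = np.sin(np.linspace(0, np.pi, 1500)) ** 2
cycles = [base + 0.1 * rng.standard_normal(1500) for _ in range(N_CYCLES)]
avg = average_curve(cycles)
print(len(avg))   # 2048 points; noise std reduced ~10x by averaging 100 cycles
```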
Self-averaging characteristics of spectral fluctuations
NASA Astrophysics Data System (ADS)
Braun, Petr; Haake, Fritz
2015-04-01
The spectral form factor as well as the two-point correlator of the density of (quasi-)energy levels of individual quantum dynamics are not self-averaging. Only suitable smoothing turns them into useful characteristics of spectra. We present numerical data for a fully chaotic kicked top, employing two types of smoothing: one involves primitives of the spectral correlator, the second, a small imaginary part of the quasi-energy. Self-averaging universal (like the circular unitary ensemble (CUE) average) behavior is found for the smoothed correlator, apart from noise which shrinks like 1/√N as the dimension N of the quantum Hilbert space grows. There are periodically repeated quasi-energy windows of correlation decay and revival wherein the smoothed correlation remains finite as N → ∞ such that the noise is negligible. In between those windows (where the CUE-averaged correlator takes on values of the order 1/N²) the noise becomes dominant and self-averaging is lost. We conclude that the noise forbids distinction of CUE- and GUE-type behavior. Surprisingly, the underlying smoothed generating function does not enjoy any self-averaging outside the range of its variables relevant for determining the two-point correlator (and certain higher-order ones). We corroborate our numerical findings for the noise by analytically determining the CUE variance of the smoothed single-matrix correlator.
Selective Model Averaging with Bayesian Rule Learning for Predictive Biomedicine
Balasubramanian, Jeya B.; Visweswaran, Shyam; Cooper, Gregory F.; Gopalakrishnan, Vanathi
2014-01-01
Accurate disease classification and biomarker discovery remain challenging tasks in biomedicine. In this paper, we develop and test a practical approach to combining evidence from multiple models when making predictions using selective Bayesian model averaging of probabilistic rules. This method is implemented within a Bayesian Rule Learning system and compared to model selection when applied to twelve biomedical datasets using the area under the ROC curve measure of performance. Cross-validation results indicate that selective Bayesian model averaging statistically significantly outperforms model selection on average in these experiments, suggesting that combining predictions from multiple models may lead to more accurate quantification of classifier uncertainty. This approach would directly impact the generation of robust predictions on unseen test data, while also increasing knowledge for biomarker discovery and mechanisms that underlie disease. PMID:25717394
Computation of vertically averaged velocities in irregular sections of straight channels
NASA Astrophysics Data System (ADS)
Spada, E.; Tucciarelli, T.; Sinagra, M.; Sammartano, V.; Corato, G.
2015-09-01
Two new methods for vertically averaged velocity computation are presented, validated and compared with other available formulas. The first method derives from the well-known Huthoff algorithm, which is first shown to be dependent on the way the river cross section is discretized into several subsections. The second method assumes the vertically averaged longitudinal velocity to be a function only of the friction factor and of the so-called "local hydraulic radius", computed as the ratio between the integral of the elementary areas around a given vertical and the integral of the elementary solid boundaries around the same vertical. Both integrals are weighted with a linear shape function equal to zero at a distance from the integration variable which is proportional to the water depth according to an empirical coefficient β. Both formulas are validated against (1) laboratory experimental data, (2) discharge hydrographs measured in a real site, where the friction factor is estimated from an unsteady-state analysis of water levels recorded in two different river cross sections, and (3) the 3-D solution obtained using the commercial ANSYS CFX code, computing the steady-state uniform flow in a cross section of the Alzette River.
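The "local hydraulic radius" construction lends itself to a simplified discrete sketch: around each vertical, elementary areas and solid-boundary lengths are integrated with a linear weight that vanishes at a distance β times the local depth, and velocity then follows a Manning-type law v = (1/n) R_local^(2/3) S^(1/2). The discretization, the β value, the closure, and the cross-section data below are illustrative assumptions, not the paper's exact scheme.

```python
# Simplified discrete sketch of the local-hydraulic-radius method.
import math

def local_velocity(depths, dx, n_manning=0.03, slope=1e-3, beta=2.0):
    """depths: water depths at equally spaced verticals (spacing dx)."""
    velocities = []
    for i, hi in enumerate(depths):
        if hi <= 0.0:
            velocities.append(0.0)
            continue
        reach = beta * hi                      # half-width of the weighting
        area = perim = 0.0
        for j, hj in enumerate(depths):
            dist = abs(i - j) * dx
            w = max(0.0, 1.0 - dist / reach)   # linear shape function
            area += w * hj * dx                # weighted elementary areas
            perim += w * dx                    # weighted solid boundary (bed)
        r_local = area / perim                 # local hydraulic radius
        velocities.append(r_local ** (2 / 3) * math.sqrt(slope) / n_manning)
    return velocities

# Symmetric section: velocity should peak at the deepest vertical.
depths = [0.2, 0.6, 1.0, 1.2, 1.0, 0.6, 0.2]
v = local_velocity(depths, dx=1.0)
print([round(x, 2) for x in v])
```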
Zhao, Kaiguang; Valle, Denis; Popescu, Sorin; Zhang, Xuesong; Mallick, Bani
2013-05-15
Model specification remains challenging in spectroscopy of plant biochemistry, as exemplified by the availability of various spectral indices or band combinations for estimating the same biochemical. This lack of consensus in model choice across applications argues for a paradigm shift in hyperspectral methods to address model uncertainty and misspecification. We demonstrated one such method using Bayesian model averaging (BMA), which performs variable/band selection and quantifies the relative merits of many candidate models to synthesize a weighted average model with improved predictive performances. The utility of BMA was examined using a portfolio of 27 foliage spectral–chemical datasets representing over 80 species across the globe to estimate multiple biochemical properties, including nitrogen, hydrogen, carbon, cellulose, lignin, chlorophyll (a or b), carotenoid, polar and nonpolar extractives, leaf mass per area, and equivalent water thickness. We also compared BMA with partial least squares (PLS) and stepwise multiple regression (SMR). Results showed that all the biochemicals except carotenoid were accurately estimated from hyperspectral data with R² values > 0.80.
Tailoring dietary approaches for weight loss
Gardner, C D
2012-01-01
Although the ‘Low-Fat' diet was the predominant public health recommendation for weight loss and weight control for the past several decades, the obesity epidemic continued to grow during this time period. An alternative ‘low-carbohydrate' (Low-Carb) approach, although originally dismissed and even vilified, was comparatively tested in a series of studies over the past decade, and has been found in general to be as effective as, if not more effective than, the Low-Fat approach for weight loss and for several related metabolic health measures. From a glass half full perspective, this suggests that there is more than one choice for a dietary approach to lose weight, and that Low-Fat and Low-Carb diets may be equally effective. From a glass half empty perspective, the average amount of weight lost on either of these two dietary approaches under the conditions studied, particularly when followed beyond 1 year, has been modest at best and negligible at worst, suggesting that the two approaches may be equally ineffective. One could resign oneself at this point to focusing on calories and energy intake restriction, regardless of macronutrient distributions. However, before throwing out the half-glass of water, it is worthwhile to consider that focusing on average results may mask important subgroup successes and failures. In all weight-loss studies, without exception, the range of individual differences in weight change within any particular diet group is orders of magnitude greater than the average group differences between diet groups. Several studies have now reported that adults with greater insulin resistance are more successful with weight loss on a lower-carbohydrate diet compared with a lower-fat diet, whereas adults with greater insulin sensitivity are equally or more successful with weight loss on a lower-fat diet compared with a lower-carbohydrate diet. 
Other preliminary findings suggest that there may be some promise with matching individuals with certain genotypes to one type of diet over another for increasing weight-loss success. Future research to address the macronutrient intake component of the obesity epidemic should build on these recent insights and be directed toward effectively classifying individuals who can be differentially matched to alternate types of weight-loss diets that maximize weight-loss and weight-control success. PMID:25089189
An evaluation of prior influence on the predictive ability of Bayesian model averaging.
St-Louis, Véronique; Clayton, Murray K; Pidgeon, Anna M; Radeloff, Volker C
2012-03-01
Model averaging is gaining popularity among ecologists for making inference and predictions. Methods for combining models include Bayesian model averaging (BMA) and Akaike's Information Criterion (AIC) model averaging. BMA can be implemented with different prior model weights, including the Kullback-Leibler prior associated with AIC model averaging, but it is unclear how the prior model weight affects model results in a predictive context. Here, we implemented BMA using the Bayesian Information Criterion (BIC) approximation to Bayes factors for building predictive models of bird abundance and occurrence in the Chihuahuan Desert of New Mexico. We examined how model predictive ability differed across four prior model weights, and how averaged coefficient estimates, standard errors and coefficients' posterior probabilities varied for 16 bird species. We also compared the predictive ability of BMA models to a best single-model approach. Overall, Occam's prior of parsimony provided the best predictive models. The Kullback-Leibler prior, in contrast, generally favored complex models of lower predictive ability. BMA performed better than a best single-model approach independently of the prior model weight for 6 out of 16 species. For 6 other species, the choice of the prior model weight affected whether BMA was better than the best single-model approach. Our results demonstrate that parsimonious priors may be favorable over priors that favor complexity for making predictions. The approach we present has direct applications in ecology for better predicting patterns of species' abundance and occurrence. PMID:21947451
Ensemble bayesian model averaging using markov chain Monte Carlo sampling
Vrugt, Jasper A; Diks, Cees G H; Clark, Martyn P
2008-01-01
Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper, Raftery et al. (Mon Weather Rev 133:1155-1174, 2005) recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov Chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
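The EM training step benchmarked here can be sketched for a two-member ensemble, in the spirit of Raftery et al.: the observation is modeled as a mixture of normals centered on the member forecasts with a shared spread, and EM alternates responsibilities (E-step) with weight/variance updates (M-step). The data are synthetic and no bias correction is applied.

```python
# Sketch of EM estimation of BMA weights and a common spread for a
# forecast-ensemble mixture of normals (synthetic data, no bias correction).
import numpy as np

def bma_em(forecasts, obs, iters=200):
    """forecasts: (K, T) member forecasts; obs: (T,) verifying observations.
    Returns (weights, sigma) of the BMA mixture."""
    K, T = forecasts.shape
    w = np.full(K, 1.0 / K)
    sigma = np.std(obs - forecasts.mean(axis=0)) + 1e-6
    for _ in range(iters):
        # E-step: responsibility of member k for observation t
        dens = np.exp(-0.5 * ((obs - forecasts) / sigma) ** 2) / sigma
        z = w[:, None] * dens
        z /= z.sum(axis=0, keepdims=True)
        # M-step: update weights and the shared spread
        w = z.mean(axis=1)
        sigma = np.sqrt((z * (obs - forecasts) ** 2).sum() / T)
    return w, sigma

rng = np.random.default_rng(42)
truth = rng.normal(15.0, 3.0, size=500)
good = truth + rng.normal(0.0, 1.0, size=500)   # accurate member
bad = truth + rng.normal(0.0, 4.0, size=500)    # noisy member
w, sigma = bma_em(np.stack([good, bad]), truth)
print(np.round(w, 2), round(float(sigma), 2))   # accurate member dominates
```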
Unbiased Average Age-Appropriate Atlases for Pediatric Studies
Fonov, Vladimir; Evans, Alan C.; Botteron, Kelly; Almli, C. Robert; McKinstry, Robert C.; Collins, D. Louis
2010-01-01
Spatial normalization, registration, and segmentation techniques for Magnetic Resonance Imaging (MRI) often use a target or template volume to facilitate processing, take advantage of prior information, and define a common coordinate system for analysis. In the neuroimaging literature, the MNI305 Talairach-like coordinate system is often used as a standard template. However, when studying pediatric populations, variation from the adult brain makes the MNI305 suboptimal for processing brain images of children. Morphological changes occurring during development render the use of age-appropriate templates desirable to reduce potential errors and minimize bias during processing of pediatric data. This paper presents the methods used to create unbiased, age-appropriate MRI atlas templates for pediatric studies that represent the average anatomy for the age range of 4.5–18.5 years, while maintaining a high level of anatomical detail and contrast. The creation of anatomical T1-weighted, T2-weighted, and proton density-weighted templates for specific developmentally important age-ranges, used data derived from the largest epidemiological, representative (healthy and normal) sample of the U.S. population, where each subject was carefully screened for medical and psychiatric factors and characterized using established neuropsychological and behavioral assessments. Use of these age-specific templates was evaluated by computing average tissue maps for gray matter, white matter, and cerebrospinal fluid for each specific age range, and by conducting an exemplar voxel-wise deformation-based morphometry study using 66 young (4.5–6.9 years) participants to demonstrate the benefits of using the age-appropriate templates. The public availability of these atlases/templates will facilitate analysis of pediatric MRI data and enable comparison of results between studies in a common standardized space specific to pediatric research. PMID:20656036
Average luminosity distance in inhomogeneous universes
Kostov, Valentin
2010-04-01
Using numerical ray tracing, the paper studies how the average distance modulus in an inhomogeneous universe differs from its homogeneous counterpart. The averaging is over all directions from a fixed observer, not over all possible observers (cosmic), and thus is more directly applicable to our observations. In contrast to previous studies, the averaging is exact, non-perturbative, and includes all non-linear effects. The inhomogeneous universes are represented by Swiss-cheese models containing random and simple cubic lattices of mass-compensated voids. The Earth observer is in the homogeneous cheese which has an Einstein-de Sitter metric. For the first time, the averaging is widened to include the supernovas inside the voids by assuming the probability for supernova emission from any comoving volume is proportional to the rest mass in it. Voids aligned along a certain direction give rise to a distance modulus correction which increases with redshift and is caused by cumulative gravitational lensing. That correction is present even for small voids and depends on their density contrast, not on their radius. Averaging over all directions destroys the cumulative lensing correction even in a non-randomized simple cubic lattice of voids. At low redshifts, the average distance modulus correction does not vanish due to the peculiar velocities, despite the photon flux conservation argument. A formula for the maximal possible average correction as a function of redshift is derived and shown to be in excellent agreement with the numerical results. The formula applies to voids of any size that: (a) have approximately constant densities in their interior and walls; and (b) are not in a deep nonlinear regime. The average correction calculated in random and simple cubic void lattices is severely damped below the predicted maximal one after a single void diameter. That is traced to cancellations between the corrections from the fronts and backs of different voids. 
The results obtained allow one to readily predict the redshift above which the direction-averaged fluctuation in the Hubble diagram falls below a required precision and suggest a method to extract the background Hubble constant from low redshift data without the need to correct for peculiar velocities.
Conditional simulation of geologically averaged block permeabilities
NASA Astrophysics Data System (ADS)
Journel, A. G.
1996-08-01
Currently available hardware and software for flow simulation can handle up to hundreds of thousands of blocks, or more comfortably tens of thousands of blocks. This limits the discretization of the reservoir model to an extremely coarse grid, say 200 × 200 × 25 for 10^6 blocks. Such a coarse grid cannot represent the structural and petrophysical variability at the resolution provided to geologists by well logs and outcrops. Thus there is no alternative to averaging the impact of all small-scale, within-block, heterogeneities into block 'pseudos' or average values. The flow simulator will account for geological description only through those pseudos, hence detailed modelling of geological heterogeneity should not go beyond the information that block pseudos can carry, at least for flow simulation purposes. It is suggested that the present drive in outcrop sampling be clearly redirected at evaluating 'geopseudos', i.e. at evaluating how small-scale variability (both structural and petrophysical) of typical depositional units averages out into large blocks' effective transmissivities and relative permeabilities. Outcrop data would allow the building of generic, high-resolution, numerical models of the geo-variability within a typical depositional unit: this is where geology intervenes. Then, this numerical model would be input into a generic flow simulator, single or multiphase, yielding generic block averages, for blocks of various sizes and geometries: this is where the reservoir engineer intervenes. Next, the spatial statistics of these block averages (histograms, variograms, …) would be inferred: this is where the geostatistician intervenes. Last comes the problem of filling-in the actual reservoir volume with simulated block averages specific to each depositional unit. Because each reservoir is unique, random drawing of block average values from the previously inferred generic distributions would not be enough. 
The placement of block average values in the specific reservoir volume must be made conditional on local data, whether well log, seismic or production-derived. This non-trivial task of 'conditional simulation' of block averages is the challenge of both the reservoir geologist and geostatistician. This paper proposes an avenue of approach that draws from the pioneering works of Steve Begg at BP-Alaska (1992, 1994) and Jaime Gomez-Hernandez at the University of Valencia (1990, 1991).
When Is the Local Average Treatment Close to the Average? Evidence from Fertility and Labor Supply
ERIC Educational Resources Information Center
Ebenstein, Avraham
2009-01-01
The local average treatment effect (LATE) may differ from the average treatment effect (ATE) when those influenced by the instrument are not representative of the overall population. Heterogeneity in treatment effects may imply that parameter estimates from 2SLS are uninformative regarding the average treatment effect, motivating a search for…
Lokemoen, J.T.; Johnson, D.H.; Sharp, D.E.
1990-01-01
During 1976-81 we weighed several thousand wild Mallard, Gadwall, and Blue-winged Teal in central North Dakota to examine duckling growth patterns, adult weights, and the factors influencing them. One-day-old Mallard and Gadwall averaged 32.4 and 30.4 g, respectively, a reduction of 34% and 29% from fresh egg weights. In all three species, the logistic growth curve provided a good fit for duckling growth patterns. Except for the asymptote, there was no difference in growth curves between males and females of a species. Mallard and Gadwall ducklings were heavier in years when wetland area was extensive or had increased from the previous year. Weights of after-second-year females were greater than yearlings for Mallard but not for Gadwall or Blue-winged Teal. Adult Mallard females lost weight continuously from late March to early July. Gadwall and Blue-winged Teal females, which nest later than Mallard, gained weight after spring arrival, lost weight from the onset of nesting until early July, and then regained some weight. Females of all species captured on nests were lighter than those captured off nests at the same time. Male Mallard weights decreased from spring arrival until late May. Male Gadwall and Blue-winged Teal weights increased after spring arrival, then declined until early June. Males of all three species then gained weight until the end of June. Among adults, female Gadwall and male Mallard and Blue-winged Teal were heavier in years when wetland area had increased from the previous year; female Blue-winged Teal were heavier in years with more wetland area.
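The logistic growth curve fitted above has a simple closed form, W(t) = A / (1 + exp(-k(t - t0))), with asymptote A, growth rate k, and inflection age t0. The sketch below evaluates it with made-up parameter values, not the values fitted by the authors:

```python
import math

def logistic_weight(t, A, k, t0):
    """Logistic growth curve W(t) = A / (1 + exp(-k*(t - t0))).

    A: asymptotic weight (g), k: growth rate (1/day),
    t0: age at the inflection point (days). Values below are
    illustrative assumptions, not the study's fitted parameters.
    """
    return A / (1.0 + math.exp(-k * (t - t0)))

# Hypothetical duckling growth parameters
A, k, t0 = 1100.0, 0.12, 30.0
w_day1 = logistic_weight(1, A, k, t0)    # weight shortly after hatch
w_day60 = logistic_weight(60, A, k, t0)  # weight approaching the asymptote
```

At t = t0 the curve passes through exactly half the asymptote, which is why only the asymptote needs to differ between sexes if their relative growth timing is the same.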
A Stochastic Model of Space-Time Variability of Mesoscale Rainfall: Statistics of Spatial Averages
NASA Technical Reports Server (NTRS)
Kundu, Prasun K.; Bell, Thomas L.
2003-01-01
A characteristic feature of rainfall statistics is that they depend on the space and time scales over which rain data are averaged. A previously developed spectral model of rain statistics, designed to capture this property, predicts power-law scaling behavior for the second-moment statistics of area-averaged rain rate on the averaging length scale L as L → 0. In the present work a more efficient method of estimating the model parameters is presented and used to fit the model to the statistics of area-averaged rain rate derived from gridded radar precipitation data from TOGA COARE. Statistical properties of the data and the model predictions are compared over a wide range of averaging scales. An extension of the spectral model scaling relations to describe the dependence of the average fraction of grid boxes within an area containing nonzero rain (the "rainy area fraction") on the grid scale L is also explored.
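The dependence of second-moment statistics on the averaging scale L can be illustrated by block-averaging a synthetic gridded rain field and observing how the variance of the area averages shrinks as L grows. The field below is a random toy stand-in, not TOGA COARE data, and its independent cells give a simple 1/L² decay rather than the correlated power law the spectral model describes:

```python
import random
import statistics

def block_averages(field, L):
    """Average a square gridded rain field over non-overlapping LxL blocks."""
    n = len(field)
    out = []
    for i in range(0, n, L):
        for j in range(0, n, L):
            block = [field[a][b]
                     for a in range(i, i + L)
                     for b in range(j, j + L)]
            out.append(sum(block) / len(block))
    return out

random.seed(0)
n = 64
# Synthetic rain field: mostly dry cells with exponentially distributed rain
# in about 20% of cells (illustrative, not radar data)
field = [[random.expovariate(1.0) if random.random() < 0.2 else 0.0
          for _ in range(n)] for _ in range(n)]

# Variance of area-averaged rain rate at several averaging scales L
variances = {L: statistics.pvariance(block_averages(field, L))
             for L in (1, 2, 4, 8)}
```

Note that block averaging preserves the overall mean rain rate exactly; only the higher moments depend on L, which is the property the spectral model parameterizes.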
Self-averaging in complex brain neuron signals
NASA Astrophysics Data System (ADS)
Bershadskii, A.; Dremencov, E.; Fukayama, D.; Yadid, G.
2002-12-01
Nonlinear statistical properties of the Ventral Tegmental Area (VTA) of the limbic brain are studied in vivo. The VTA plays a key role in the generation of pleasure and in the development of psychological drug addiction. It is shown that spiking time series of the VTA dopaminergic neurons exhibit long-range correlations with self-averaging behavior. This specific VTA phenomenon has no relation to the VTA rewarding function. This last result reveals the complex role of the VTA in the limbic brain.
A re-averaged WENO reconstruction and a third order CWENO scheme for hyperbolic conservation laws
NASA Astrophysics Data System (ADS)
Huang, Chieh-Sen; Arbogast, Todd; Hung, Chen-Hui
2014-04-01
A WENO re-averaging (or re-mapping) technique is developed that converts function averages on one grid to another grid to high order. Nonlinear weighting gives the essentially non-oscillatory property to the re-averaged function values. The new reconstruction grid is used to obtain a standard high order WENO reconstruction of the function averages at a select point. By choosing the reconstruction grid to include the point of interest, a high order function value can be reconstructed using only positive linear weights. The re-averaging technique is applied to define two variants of a classic CWENO3 scheme that combines two linear polynomials to obtain formal third order accuracy. Such a scheme cannot otherwise be defined, due to the nonexistence of linear weights for third order reconstruction at the center of a grid element. The new scheme uses a compact stencil of three solution averages, and only positive linear weights are used. The scheme extends easily to problems in higher space dimensions, essentially as a tensor product of the one-dimensional scheme. The scheme maintains formal third order accuracy in higher dimensions. Numerical results show that this CWENO3 scheme is third order accurate for smooth problems and gives good results for non-smooth problems, including those with shocks.
Books Average Previous Decade of Economic Misery
Bentley, R. Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios
2014-01-01
For the 20th century since the Depression, we find a strong correlation between a ‘literary misery index’ derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159
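The two ingredients of the reported correlation, the economic misery index and its trailing moving average, are elementary to compute. A minimal sketch with made-up annual rates (not actual U.S. data):

```python
def misery_index(inflation, unemployment):
    """Economic misery index: inflation rate plus unemployment rate, per year."""
    return [i + u for i, u in zip(inflation, unemployment)]

def trailing_average(series, window):
    """Trailing moving average over the previous `window` values (inclusive)."""
    return [sum(series[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(series))]

# Illustrative annual rates in percent (hypothetical, not actual U.S. data)
inflation =    [3.0, 4.5, 6.1, 11.0, 9.1, 5.8, 6.5, 7.6, 11.3, 13.5, 10.3]
unemployment = [4.9, 5.9, 5.6, 4.9, 5.6, 8.5, 7.7, 7.1, 6.1, 5.8, 7.1]

misery = misery_index(inflation, unemployment)
decade_avg = trailing_average(misery, 11)  # the paper's best-fit window: 11 years
```

With exactly 11 years of data the 11-year trailing average yields a single value; on a longer series it produces the smoothed "previous decade of misery" curve that the literary misery index is correlated against.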
Comparison of averages of flows and maps.
Kaufmann, Z; Lustfeld, H
2001-11-01
It is shown that in transient chaos there is no direct relation between averages in a continuous time dynamical system (flow) and averages using the analogous discrete system defined by the corresponding Poincaré map. In contrast to permanent chaos, results obtained from the Poincaré map can even be qualitatively incorrect. The reason is that the return time between intersections on the Poincaré surface becomes relevant. However, after introducing a true-time Poincaré map, quantities known from the usual Poincaré map, such as conditionally invariant measure and natural measure, can be generalized to this case. Escape rates and averages, e.g., Liapunov exponents and drifts, can be determined correctly using these measures. Significant differences become evident when we compare with results obtained from the usual Poincaré map. PMID:11736004
Theory of optimal weighting of data to detect climatic change
NASA Technical Reports Server (NTRS)
Bell, T. L.
1986-01-01
A search for climatic change predicted by climate models can easily yield unconvincing results because of 'climatic noise,' the inherent, unpredictable variability of time-averaged atmospheric data. A weighted average of data that maximizes the probability of detecting predicted climatic change is presented. To obtain the optimal weights, an estimate of the covariance matrix of the data from a prior data set is needed; this introduces additional sampling error into the method, which is taken into account here. A form of the weighted average is found whose probability distribution is independent of the true (but unknown) covariance statistics of the data and of the climate model prediction.
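Optimal weights in detection problems of this kind take the standard matched-filter form w ∝ C⁻¹s, where C is the noise covariance and s the model-predicted change pattern. A two-variable sketch with illustrative numbers follows; the paper's refinement, accounting for sampling error in the estimated covariance, is not reproduced here:

```python
def optimal_weights(cov, signal):
    """Matched-filter weights w proportional to C^{-1} s, 2-variable sketch.

    cov: 2x2 noise covariance matrix C; signal: model-predicted change s.
    Weights are normalized so that the weighted average of the predicted
    signal equals 1 (i.e. w . s = 1).
    """
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]  # explicit 2x2 inverse
    w = [inv[0][0] * signal[0] + inv[0][1] * signal[1],
         inv[1][0] * signal[0] + inv[1][1] * signal[1]]
    norm = sum(wi * si for wi, si in zip(w, signal))
    return [wi / norm for wi in w]

# Two regions with unequal climatic noise (hypothetical numbers):
cov = [[4.0, 0.0], [0.0, 1.0]]  # region 1 has 4x the noise variance of region 2
signal = [1.0, 1.0]             # model predicts equal change in both regions
w = optimal_weights(cov, signal)
```

As expected, the noisier region is down-weighted: for the numbers above, w = [0.2, 0.8] rather than the naive equal weighting [0.5, 0.5].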
The modulated average structure of mullite.
Birkenstock, Johannes; Petříček, Václav; Pedersen, Bjoern; Schneider, Hartmut; Fischer, Reinhard X
2015-06-01
Homogeneous and inclusion-free single crystals of 2:1 mullite (Al(4.8)Si(1.2)O(9.6)) grown by the Czochralski technique were examined by X-ray and neutron diffraction methods. The observed diffuse scattering together with the pattern of satellite reflections confirm previously published data and are thus inherent features of the mullite structure. The ideal composition was closely met as confirmed by microprobe analysis (Al(4.82 (3))Si(1.18 (1))O(9.59 (5))) and by average structure refinements. 8 (5) to 20 (13)% of the available Si was found in the T* position of the tetrahedra triclusters. The strong tendency for disorder in mullite may be understood from considerations of hypothetical superstructures which would have to be n-fivefold with respect to the three-dimensional average unit cell of 2:1 mullite and n-fourfold in case of 3:2 mullite. In any of these the possible arrangements of the vacancies and of the tetrahedral units would inevitably be unfavorable. Three directions of incommensurate modulations were determined: q1 = [0.3137 (2) 0 ½], q2 = [0 0.4021 (5) 0.1834 (2)] and q3 = [0 0.4009 (5) -0.1834 (2)]. The one-dimensional incommensurately modulated crystal structure associated with q1 was refined for the first time using the superspace approach. The modulation is dominated by harmonic occupational modulations of the atoms in the di- and the triclusters of the tetrahedral units in mullite. The modulation amplitudes are small and the harmonic character implies that the modulated structure still represents an average structure in the overall disordered arrangement of the vacancies and of the tetrahedral structural units. In other words, when projecting the local assemblies at the scale of a few tens of average mullite cells into cells determined by either one of the modulation vectors q1, q2 or q3 a weak average modulation results with slightly varying average occupation factors for the tetrahedral units.
As a result, the real structure of mullite is locally ordered (as previously known), but over the long range its average is not completely disordered; the modulated structure of mullite may therefore be denoted the true 'average structure of mullite'. PMID:26027012
Average: the juxtaposition of procedure and context
NASA Astrophysics Data System (ADS)
Watson, Jane; Chick, Helen; Callingham, Rosemary
2014-09-01
This paper presents recent data on the performance of 247 middle school students on questions concerning average in three contexts. Analysis includes considering levels of understanding linking definition and context, performance across contexts, the relative difficulty of tasks, and difference in performance for male and female students. The outcomes lead to a discussion of the expectations of the curriculum and its implementation, as well as assessment, in relation to students' skills in carrying out procedures and their understanding about the meaning of average in context.
An improved moving average technical trading rule
NASA Astrophysics Data System (ADS)
Papailias, Fotis; Thomakos, Dimitrios D.
2015-06-01
This paper proposes a modified version of the widely used price and moving average cross-over trading strategies. The suggested approach (presented in its 'long only' version) is a combination of cross-over 'buy' signals and a dynamic threshold value which acts as a dynamic trailing stop. The trading behaviour and performance from this modified strategy are different from the standard approach with results showing that, on average, the proposed modification increases the cumulative return and the Sharpe ratio of the investor while exhibiting smaller maximum drawdown and smaller drawdown duration than the standard strategy.
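A minimal 'long only' sketch of the idea: enter on a price/moving-average cross-over 'buy' signal and exit via a trailing stop that plays the role of the dynamic threshold. The window length, stop fraction, and price series below are illustrative assumptions, not the authors' specification:

```python
def sma(prices, n):
    """Simple moving average; None until n observations are available."""
    return [sum(prices[i - n + 1:i + 1]) / n if i >= n - 1 else None
            for i in range(len(prices))]

def crossover_with_trailing_stop(prices, n=3, stop_frac=0.05):
    """'Long only' sketch: buy when price crosses above its n-period SMA,
    exit when price falls below a trailing stop set stop_frac below the
    highest price seen since entry. Returns the list of trade returns.
    """
    ma = sma(prices, n)
    in_pos, entry, peak, trades = False, None, 0.0, []
    for i in range(1, len(prices)):
        if ma[i] is None or ma[i - 1] is None:
            continue
        if not in_pos and prices[i - 1] <= ma[i - 1] and prices[i] > ma[i]:
            in_pos, entry, peak = True, prices[i], prices[i]  # cross-over 'buy'
        elif in_pos:
            peak = max(peak, prices[i])          # ratchet the trailing stop up
            if prices[i] < peak * (1 - stop_frac):
                trades.append(prices[i] / entry - 1)  # stopped out
                in_pos = False
    return trades

# Hypothetical price path: a dip, a rally, then a pullback that hits the stop
prices = [10.0, 9.8, 9.6, 9.5, 10.2, 10.8, 11.5, 12.0, 11.2, 11.0]
trades = crossover_with_trailing_stop(prices, n=3, stop_frac=0.05)
```

Because the stop ratchets up with the price, it locks in part of the rally; in the hypothetical path above the single trade exits with a gain even though the final price is below the peak.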
Average model for the galactic absorption
Milne, D.K.; Aller, L.H.
1980-01-01
It is found from a comparison of optical and radio emission that planetary nebulae south of the galactic plane exhibit lower average extinction at Hβ than those to the north. Suggested explanations are that either (i) the absorbing material is distributed with a scale height that is higher in the north than in the south, or (ii) a large absorbing cloud immediately above the Sun has increased the average extinction in the north. In either case the z-distribution of absorbing material is close to an exponential model, with extinction of 0.7 bel kpc⁻¹ along the galactic plane and a scale height of approximately 100 pc.
40 CFR 86.449 - Averaging provisions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... the new FEL. Manufacturers must test the motorcycles according to 40 CFR part 1051, subpart D...) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission Regulations for 1978 and Later New Motorcycles, General Provisions § 86.449 Averaging provisions. (a) This section describes...
40 CFR 86.449 - Averaging provisions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 18 2011-07-01 2011-07-01 false Averaging provisions. 86.449 Section 86.449 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission Regulations for 1978 and Later New Motorcycles, General Provisions...
Why Johnny Can Be Average Today.
ERIC Educational Resources Information Center
Sturrock, Alan
1997-01-01
During a (hypothetical) phone interview with a university researcher, an elementary principal reminisced about a lifetime of reading groups with unmemorable names, medium-paced math problems, patchworked social studies/science lessons, and totally "average" IQ and batting scores. The researcher hung up at the mention of bell-curved assembly lines…
Averaging models for linear piezostructural systems
NASA Astrophysics Data System (ADS)
Kim, W.; Kurdila, A. J.; Stepanyan, V.; Inman, D. J.; Vignola, J.
2009-03-01
In this paper, we consider a linear piezoelectric structure which employs a fast-switched, capacitively shunted subsystem to yield a tunable vibration absorber or energy harvester. The dynamics of the system is modeled as a hybrid system, where the switching law is considered as a control input and the ambient vibration is regarded as an external disturbance. It is shown that under mild assumptions of existence and uniqueness of the solution of this hybrid system, averaging theory can be applied, provided that the original system dynamics is periodic. The resulting averaged system is controlled by the duty cycle of a driven pulse-width modulated signal. The response of the averaged system approximates the performance of the original fast-switched linear piezoelectric system. It is analytically shown that the averaging approximation can be used to predict the electromechanically coupled system modal response as a function of the duty cycle of the input switching signal. This prediction is experimentally validated for the system consisting of a piezoelectric bimorph connected to an electromagnetic exciter. Experimental results show that the analytical predictions are observed in practice over a fixed "effective range" of switching frequencies. The same experiments show that the response of the switched system is insensitive to an increase in switching frequency above the effective frequency range.
Ultrahigh-average-power solid state laser
NASA Astrophysics Data System (ADS)
Vetrovec, John
2002-09-01
This work presents an improved disk laser concept, where a diode- pumped disk is hydrostatically clamped to a rigid substrate and continuously cooled by a microchannel heat exchanger. Effective reduction of thermo-optical distortions makes this laser suitable for continuous operation at ultrahigh-average power.
Average Annual Rainfall over the Globe
ERIC Educational Resources Information Center
Agrawal, D. C.
2013-01-01
The atmospheric recycling of water is a very important phenomenon on the globe because it not only refreshes the water but also redistributes it over land and oceans/rivers/lakes throughout the globe. This is made possible by the solar energy intercepted by the Earth. The half of the globe facing the Sun, on the average, intercepts 1.74 …
The periodic average structure of particular quasicrystals.
Steurer; Haibach
1999-01-01
The non-crystallographic symmetry of d-dimensional (dD) quasiperiodic structures is incompatible with lattice periodicity in dD physical space. However, dD quasiperiodic structures can be described as irrational sections of nD (n > d) periodic hypercrystal structures. By appropriate oblique projection of particular hypercrystal structures onto physical space, discrete periodic average structures can be obtained. The boundaries of the projected atomic surfaces give the maximum distance of each atom in a quasiperiodic structure from the vertices of the reference lattice of its average structure. These maximum distances turn out to be smaller than even the shortest atomic bond lengths. The metrics of the average structure of a 3D Ammann tiling, for instance, with edge lengths of the unit tiles equal to the bond lengths in elemental aluminium, correspond almost exactly to the metrics of face-centred-cubic aluminium. This is remarkable since most stable quasicrystals contain aluminium as the main constituent. The study of the average structure of quasicrystals can be a valuable aid to the elucidation of the geometry of quasicrystal-to-crystal transformations. It can also contribute to the derivation of the physically most relevant Brillouin (Jones) zone. PMID:10927229
HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS.
Ben-Zvi, Ilan; Dayran, D.; Litvinenko, V.
2005-08-21
Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is for very high average power generation, for instance FELs with an average power of 100 kW or more. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERLs) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL-based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier are simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier being designed to operate on the 0.5 ampere Energy Recovery Linac under construction at Brookhaven National Laboratory's Collider-Accelerator Department.
A Functional Measurement Study on Averaging Numerosity
ERIC Educational Resources Information Center
Tira, Michael D.; Tagliabue, Mariaelena; Vidotto, Giulio
2014-01-01
In two experiments, participants judged the average numerosity between two sequentially presented dot patterns to perform an approximate arithmetic task. In Experiment 1, the response was given on a 0-20 numerical scale (categorical scaling), and in Experiment 2, the response was given by the production of a dot pattern of the desired numerosity…
Averaging on Earth-Crossing Orbits
NASA Astrophysics Data System (ADS)
Gronchi, G. F.; Milani, A.
The orbits of planet-crossing asteroids (and comets) can undergo close approaches and collisions with some major planet. This introduces a singularity in the N-body Hamiltonian, and the averaging of the equations of motion, traditionally used to compute secular perturbations, is undefined. We show that it is possible to define in a rigorous way some generalised averaged equations of motion, in such a way that the generalised solutions are unique and piecewise smooth. This is obtained, both in the planar and in the three-dimensional case, by means of the method of extraction of the singularities by Kantorovich. The modified distance used to approximate the singularity is the one used by Wetherill in his method to compute probability of collision. Some examples of averaged dynamics have been computed; a systematic exploration of the averaged phase space to locate the secular resonances should be the next step. 'Alice sighed wearily. "I think you might do something better with the time," she said, "than waste it asking riddles with no answers"' (Alice in Wonderland, L. Carroll)
Bayesian Model Averaging for Propensity Score Analysis
ERIC Educational Resources Information Center
Kaplan, David; Chen, Jianshen
2013-01-01
The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…
Measuring Time-Averaged Blood Pressure
NASA Technical Reports Server (NTRS)
Rothman, Neil S.
1988-01-01
Device measures time-averaged component of absolute blood pressure in artery. Includes compliant cuff around artery and external monitoring unit. Ceramic construction in monitoring unit suppresses ebb and flow of pressure-transmitting fluid in sensor chamber. Transducer measures only static component of blood pressure.
Reformulation of Ensemble Averages via Coordinate Mapping.
Schultz, Andrew J; Moustafa, Sabry G; Lin, Weisong; Weinstein, Steven J; Kofke, David A
2016-04-12
A general framework is established for reformulation of the ensemble averages commonly encountered in statistical mechanics. This "mapped-averaging" scheme allows approximate theoretical results that have been derived from statistical mechanics to be reintroduced into the underlying formalism, yielding new ensemble averages that represent exactly the error in the theory. The result represents a distinct alternative to perturbation theory for methodically employing tractable systems as a starting point for describing complex systems. Molecular simulation is shown to provide one appealing route to exploit this advance. Calculation of the reformulated averages by molecular simulation can proceed without contamination by noise produced by behavior that has already been captured by the approximate theory. Consequently, accurate and precise values of properties can be obtained while using less computational effort, in favorable cases, many orders of magnitude less. The treatment is demonstrated using three examples: (1) calculation of the heat capacity of an embedded-atom model of iron, (2) calculation of the dielectric constant of the Stockmayer model of dipolar molecules, and (3) calculation of the pressure of a Lennard-Jones fluid. It is observed that improvement in computational efficiency is related to the appropriateness of the underlying theory for the condition being simulated; the accuracy of the result is however not impacted by this. The framework opens many avenues for further development, both as a means to improve simulation methodology and as a new basis to develop theories for thermophysical properties. PMID:26950263
Initial Conditions in the Averaging Cognitive Model
ERIC Educational Resources Information Center
Noventa, S.; Massidda, D.; Vidotto, G.
2010-01-01
The initial state parameters s[subscript 0] and w[subscript 0] are intricate issues of the averaging cognitive models in Information Integration Theory. Usually they are defined as a measure of prior information (Anderson, 1981; 1982) but there are no general rules to deal with them. In fact, there is no agreement as to their treatment except in…
Glenzinski, D. (Fermilab)
2008-01-01
This paper summarizes a talk given at the Top2008 Workshop at La Biodola, Isola d'Elba, Italy. The status of the world-average top-quark mass is discussed. Some comments about the challenges facing the experiments in order to further improve the precision are offered.
Overt weight stigma, psychological distress and weight loss treatment outcomes.
Wott, Carissa B; Carels, Robert A
2010-05-01
Weight stigma is pervasive and is associated with psychosocial distress. Little research has examined the association between weight stigma and weight loss treatment outcomes. The current investigation examined overt weight stigma, depression, binge eating, and weight loss treatment outcomes in a sample of 55 overweight and obese adults. Overt weight stigma was significantly associated with greater depression and binge eating and poorer weight loss treatment outcomes in a 14-week behavioral weight loss program, suggesting that overt weight stigma may be detrimental to overweight and obese individuals' ability to lose weight and engage in behaviors consistent with weight loss. PMID:20460417
Orbit Averaging in Perturbed Planetary Rings
NASA Astrophysics Data System (ADS)
Stewart, Glen R.
2015-11-01
The orbital period is typically much shorter than the time scale for dynamical evolution of large-scale structures in planetary rings. This large separation in time scales motivates the derivation of reduced models by averaging the equations of motion over the local orbit period (Borderies et al. 1985, Shu et al. 1985). A more systematic procedure for carrying out the orbit averaging is to use Lie transform perturbation theory to remove the dependence on the fast angle variable from the problem order-by-order in epsilon, where the small parameter epsilon is proportional to the fractional radial distance from exact resonance. This powerful technique has been developed and refined over the past thirty years in the context of gyrokinetic theory in plasma physics (Brizard and Hahm, Rev. Mod. Phys. 79, 2007). When the Lie transform method is applied to resonantly forced rings near a mean motion resonance with a satellite, the resulting orbit-averaged equations contain the nonlinear terms found previously, but also contain additional nonlinear self-gravity terms of the same order that were missed by Borderies et al. and by Shu et al. The additional terms result from the fact that the self-consistent gravitational potential of the perturbed rings modifies the orbit-averaging transformation at nonlinear order. These additional terms are the gravitational analog of electrostatic ponderomotive forces caused by large amplitude waves in plasma physics. The revised orbit-averaged equations are shown to modify the behavior of nonlinear density waves in planetary rings compared to the previously published theory. This research was supported by NASA's Outer Planets Research program.
ERIC Educational Resources Information Center
Katch, Victor L.
This paper describes a number of factors which go into determining weight. The paper describes what calories are, how caloric expenditure is measured, and why caloric expenditure is different for different people. The paper then outlines the way the body tends to adjust food intake and exercise to maintain a constant body weight. It is speculated…
Sansone, Randy A; Sansone, Lori A
2014-07-01
Acute marijuana use is classically associated with snacking behavior (colloquially referred to as "the munchies"). In support of these acute appetite-enhancing effects, several authorities report that marijuana may increase body mass index in patients suffering from human immunodeficiency virus and cancer. However, for these medical conditions, while appetite may be stimulated, some studies indicate that weight gain is not always clinically meaningful. In addition, in a study of cancer patients in which weight gain did occur, it was less than the comparator drug (megestrol). However, data generally suggest that acute marijuana use stimulates appetite, and that marijuana use may stimulate appetite in low-weight individuals. As for large epidemiological studies in the general population, findings consistently indicate that users of marijuana tend to have lower body mass indices than nonusers. While paradoxical and somewhat perplexing, these findings may be explained by various study confounds, such as potential differences between acute versus chronic marijuana use; the tendency for marijuana use to be associated with other types of drug use; and/or the possible competition between food and drugs for the same reward sites in the brain. Likewise, perhaps the effects of marijuana are a function of initial weight status; i.e., maybe marijuana is a metabolic regulatory substance that increases body weight in low-weight individuals but not in normal-weight or overweight individuals. Only further research will clarify the complex relationships between marijuana and body weight. PMID:25337447
A wire weight is lowered to the water surface to measure stage at a site. Levels are run to the wire weight's elevation from known benchmarks to ensure correct readings. In the background is housing protected by dikes along the Missouri River in Mandan, ND…
Technology Transfer Automated Retrieval System (TEKTRAN)
This review evaluated the available scientific literature relative to anthocyanins and weight loss and/or obesity with mention of other effects of anthocyanins on pathologies that are closely related to obesity. Although there is considerable popular press concerning anthocyanins and weight loss, th...
ERIC Educational Resources Information Center
Lakdawalla, Darius; Philipson, Tomas
2007-01-01
We use panel data from the National Longitudinal Survey of Youth to investigate on-the-job exercise and weight. For male workers, job-related exercise has causal effects on weight, but for female workers, the effects seem primarily selective. A man who spends 18 years in the most physical fitness-demanding occupation is about 25 pounds (14…
Weight discrimination and bullying.
Puhl, Rebecca M; King, Kelly M
2013-04-01
Despite significant attention to the medical impacts of obesity, often ignored are the negative outcomes that obese children and adults experience as a result of stigma, bias, and discrimination. Obese individuals are frequently stigmatized because of their weight in many domains of daily life. Research spanning several decades has documented consistent weight bias and stigmatization in employment, health care, schools, the media, and interpersonal relationships. For overweight and obese youth, weight stigmatization translates into pervasive victimization, teasing, and bullying. Multiple adverse outcomes are associated with exposure to weight stigmatization, including depression, anxiety, low self-esteem, body dissatisfaction, suicidal ideation, poor academic performance, lower physical activity, maladaptive eating behaviors, and avoidance of health care. This review summarizes the nature and extent of weight stigmatization against overweight and obese individuals, as well as the resulting consequences that these experiences create for social, psychological, and physical health for children and adults who are targeted. PMID:23731874
Hwang, Yunji; Lee, Kyu Eun; Park, Young Joo; Kim, Su-Jin; Kwon, Hyungju; Park, Do Joon; Cho, Belong; Choi, Ho-Chun; Kang, Daehee; Park, Sue K
2016-03-01
We evaluated the association between weight change in middle-aged adults and papillary thyroid cancer (PTC) based on a large-scale case-control study. Our study included data from 1551 PTC patients (19.3% men and 80.7% women) who underwent thyroidectomy at 3 general hospitals in Korea and 15,510 individually matched control subjects. The subjects' weight history, epidemiologic information, and tumor characteristics confirmed after thyroidectomy were analyzed. Odds ratios (ORs) and 95% confidence intervals (95% CIs) were determined for the annual average changes in weight and obesity indicators (body mass index (BMI), body surface area, and body fat percentage (BF%)) in subjects since the age of 35 years. Subjects with a total weight gain ≥10 kg after age 35 years were more likely to have PTC (men: OR, 5.39, 95% CI, 3.88-7.49; women: OR, 3.36, 95% CI, 2.87-3.93) compared with subjects with a stable weight (loss or gain <5 kg). A marked increase in BMI since age 35 years (annual average change of BMI ≥0.3 kg/m²/yr) was related to an elevated PTC risk, and the association was more pronounced for large-sized PTC (≥1 cm: OR, 4.00, 95% CI, 2.91-5.49) than for small-sized PTC (<1 cm: OR, 2.34, 95% CI, 1.92-2.85; P for heterogeneity = 0.005). Weight gain and annual increases in obesity indicators in middle-aged adults may increase the risk of developing PTC. PMID:26945379
NASA Astrophysics Data System (ADS)
Fijani, E.; Chitsazan, N.; Nadiri, A.; Tsai, F. T.; Asghari Moghaddam, A.
2012-12-01
Artificial Neural Networks (ANNs) have been widely used to estimate concentrations of chemicals in groundwater systems. However, estimation uncertainty is rarely discussed in the literature. Uncertainty in ANN output stems from three sources: ANN inputs, ANN parameters (weights and biases), and ANN structures. Uncertainty in ANN inputs may come from input data selection and/or input data error. ANN parameters are naturally uncertain because they are maximum-likelihood estimated. ANN structure is also uncertain because there is no unique ANN model for a given case. Therefore, multiple plausible ANN models generally result for a study. One might ask why good models have to be ignored in favor of the best model in traditional estimation. What is the ANN estimation variance? How do the variances from different ANN models accumulate to the total estimation variance? To answer these questions we propose a Hierarchical Bayesian Model Averaging (HBMA) framework. Instead of choosing one ANN model (the best ANN model) for estimation, HBMA averages the outputs of all plausible ANN models. The model weights are based on the evidence of the data. Therefore, HBMA avoids overconfidence in the single best ANN model. In addition, HBMA is able to analyze uncertainty propagation through aggregation of ANN models in a hierarchical framework. This method is applied to the estimation of fluoride concentration in the Poldasht and Bazargan plains in Iran, where unusually high fluoride concentrations have had negative effects on public health. Management of this anomaly requires estimation of the fluoride concentration distribution in the area. The results show that HBMA provides a knowledge-decision-based framework that facilitates analyzing and quantifying ANN estimation uncertainties from different sources.
In addition, HBMA allows comparative evaluation of the realizations for each source of uncertainty by segregating the uncertainty sources in a hierarchical framework. Fluoride concentration estimates obtained with the HBMA method show better agreement with the observation data in the test step because they are not based on a single model whose weight does not dominate the others.
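The core averaging step that Bayesian model averaging performs can be illustrated with a minimal sketch; the model weights, means, and variances below are invented for illustration and are not values from the study:

```python
import numpy as np

# Hypothetical per-model predictions at one location (e.g., fluoride in mg/L).
means = np.array([1.2, 1.5, 1.1])         # each plausible ANN's predicted mean
variances = np.array([0.04, 0.09, 0.05])  # each ANN's predictive variance
weights = np.array([0.5, 0.3, 0.2])       # posterior model weights (sum to 1)

avg = np.sum(weights * means)                    # model-averaged estimate
within = np.sum(weights * variances)             # averaged within-model variance
between = np.sum(weights * (means - avg) ** 2)   # between-model variance
total_var = within + between                     # total estimation variance
```

Averaging over all plausible models in this way is what lets a hierarchical scheme attribute the total variance to its separate sources rather than trusting the single best model.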
Rigid shape matching by segmentation averaging.
Wang, Hongzhi; Oliensis, John
2010-04-01
We use segmentations to match images by shape. The new matching technique does not require point-to-point edge correspondence and is robust to small shape variations and spatial shifts. To address the unreliability of segmentations computed bottom-up, we give a closed form approximation to an average over all segmentations. Our method has many extensions, yielding new algorithms for tracking, object detection, segmentation, and edge-preserving smoothing. For segmentation, instead of a maximum a posteriori approach, we compute the "central" segmentation minimizing the average distance to all segmentations of an image. For smoothing, instead of smoothing images based on local structures, we smooth based on the global optimal image structures. Our methods for segmentation, smoothing, and object detection perform competitively, and we also show promising results in shape-based tracking. PMID:20224119
Models of space averaged energetics of plates
NASA Technical Reports Server (NTRS)
Bouthier, O. M.; Bernhard, R. J.
1990-01-01
The analysis of high frequency vibrations in plates is of particular interest in the study of structure-borne noise in aircraft. The current methods of analysis are either too expensive (finite element method) or may have a confidence band wider than desirable (Statistical Energy Analysis). An alternative technique to model the space- and time-averaged response of structural acoustics problems with enough detail to include all significant mechanisms of energy generation, transmission, and absorption is highly desirable. The focus of this paper is the development of a set of equations which govern the space- and time-averaged energy density in plates. To solve these equations, a new type of boundary value problem must be treated in terms of energy density variables using energy and intensity boundary conditions. A computer simulation verification study of the energy governing equations is performed. A finite element formulation of the new equations is also implemented, and several test cases are analyzed and compared to analytical solutions.
Apparent and average accelerations of the Universe
Bolejko, Krzysztof; Andersson, Lars E-mail: larsa@math.miami.edu
2008-10-15
In this paper we consider the relation between the volume deceleration parameter obtained within the Buchert averaging scheme and the deceleration parameter derived from supernova observations. This work was motivated by recent findings that showed that there are models which, despite having Λ = 0, have volume deceleration parameter q_vol < 0. This opens the possibility that back-reaction and averaging effects may be used as an interesting alternative explanation of the dark energy phenomenon. We have calculated q_vol in some Lemaître-Tolman models. For those models which are chosen to be realistic and which fit the supernova data, we find that q_vol > 0, while those models which we have been able to find which exhibit q_vol < 0 turn out to be unrealistic. This indicates that care must be exercised in relating the deceleration parameter to observations.
Average gluon and quark jet multiplicities
NASA Astrophysics Data System (ADS)
Kotikov, A. V.
2016-01-01
We present the results of [1, 2] for computing the QCD contributions to the scale evolution of average gluon and quark jet multiplicities. The new results were made possible by recent progress in timelike small-x resummation obtained in the MS-bar factorization scheme. They depend on two nonperturbative parameters with clear and simple physical interpretations. A global fit of these two quantities to all available experimental data sets demonstrates by its goodness how our results solve a long-standing problem of QCD. Including all the available theoretical input within our approach, α_s^(5)(M_Z) = 0.1199 ± 0.0026 has been obtained in the MS-bar scheme in an approximation equivalent to next-to-next-to-leading order, enhanced by the resummations of ln x terms through the NNLL level and of ln Q^2 terms by the renormalization group. This result is in excellent agreement with the present world average.
A weighted and directed interareal connectivity matrix for macaque cerebral cortex.
Markov, N T; Ercsey-Ravasz, M M; Ribeiro Gomes, A R; Lamy, C; Magrou, L; Vezoli, J; Misery, P; Falchier, A; Quilodran, R; Gariel, M A; Sallet, J; Gamanut, R; Huissoud, C; Clavagnier, S; Giroud, P; Sappey-Marinier, D; Barone, P; Dehay, C; Toroczkai, Z; Knoblauch, K; Van Essen, D C; Kennedy, H
2014-01-01
Retrograde tracer injections in 29 of the 91 areas of the macaque cerebral cortex revealed 1,615 interareal pathways, a third of which have not previously been reported. A weight index (extrinsic fraction of labeled neurons [FLNe]) was determined for each area-to-area pathway. Newly found projections were weaker on average compared with the known projections; nevertheless, the 2 sets of pathways had extensively overlapping weight distributions. Repeat injections across individuals revealed modest FLNe variability given the range of FLNe values (standard deviation <1 log unit, range 5 log units). The connectivity profile for each area conformed to a lognormal distribution, where a majority of projections are moderate or weak in strength. In the G29 × 29 interareal subgraph, two-thirds of the connections that can exist do exist. Analysis of the smallest set of areas that collects links from all 91 nodes of the G29 × 91 subgraph (dominating set analysis) confirms the dense (66%) structure of the cortical matrix. The G29 × 29 subgraph suggests an unexpectedly high incidence of unidirectional links. The directed and weighted G29 × 91 connectivity matrix for the macaque will be valuable for comparison with connectivity analyses in other species, including humans. It will also inform future modeling studies that explore the regularities of cortical networks. PMID:23010748
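The FLNe weight index used in this study is a normalized count; a toy sketch with hypothetical neuron counts (not data from the paper):

```python
# Hypothetical labeled-neuron counts for one injection. The FLNe of a source
# area is its count divided by the total count of labeled neurons extrinsic
# to the injected area, so FLNe values across source areas sum to 1.
counts = {"V2": 5000, "V4": 1200, "TEO": 300}
total_extrinsic = sum(counts.values())
flne = {area: n / total_extrinsic for area, n in counts.items()}
# Across real pathways FLNe spans several orders of magnitude, consistent
# with the lognormal connectivity profile described in the abstract.
```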
Psychological effects of weight retained after pregnancy.
Jenkin, W; Tiggemann, M
1997-01-01
This study is a prospective investigation of the effect of weight retained after pregnancy on weight satisfaction, self-esteem and depressive affect, utilising the framework provided by expectancy-value theory. Self-report data were obtained from 115 women who were in the last month of their first pregnancy, and then again a month following the birth. On average women were heavier four weeks after having their baby than they were prior to becoming pregnant, and were less satisfied with their post-natal weight and shape. They were also slightly heavier than they had anticipated, particularly in the case of the younger women. Actual post-natal weight proved the most important predictor of psychological well-being following birth. PMID:9253140
Clinical and genetic predictors of weight gain in patients diagnosed with breast cancer
Reddy, S M; Sadim, M; Li, J; Yi, N; Agarwal, S; Mantzoros, C S; Kaklamani, V G
2013-01-01
Background: Post-diagnosis weight gain in breast cancer patients has been associated with increased cancer recurrence and mortality. Our study was designed to identify risk factors for this weight gain and create a predictive model to identify a high-risk population for targeted interventions. Methods: Chart review was conducted on 459 breast cancer patients from the Northwestern Robert H. Lurie Cancer Centre to obtain weights and body mass indices (BMIs) over an 18-month period from diagnosis. We also recorded tumour characteristics, demographics, clinical factors, and treatment regimens. Blood samples were genotyped for 14 single-nucleotide polymorphisms (SNPs) in fat mass and obesity-associated protein (FTO) and adiponectin pathway genes (ADIPOQ and ADIPOR1). Results: In all, 56% of patients had a >0.5 kg/m² increase in BMI from diagnosis to 18 months, with average BMI and weight gain of 1.9 kg/m² and 5.1 kg, respectively. Our best predictive model was a primarily SNP-based model incorporating all 14 FTO and adiponectin pathway SNPs studied, their epistatic interactions, and age and BMI at diagnosis, with an area under the receiver operating characteristic curve of 0.85 for 18-month weight gain. Conclusion: We created a powerful risk prediction model that can identify breast cancer patients at high risk for weight gain. PMID:23922112
The Effect of a Mindful Restaurant Eating Intervention on Weight Management in Women
Timmerman, Gayle M.; Brown, Adama
2011-01-01
Objective: To evaluate the effect of a Mindful Restaurant Eating intervention on weight management. Design: Randomized controlled trial. Setting: Greater metropolitan area of Austin, Texas. Participants: Women (n = 35), 40-59 years old, who eat out at least 3 times per week. Intervention: The intervention, using 6 weekly 2-hour small group sessions, focused on reducing calorie and fat intake when eating out through education, behavior change strategies, and mindful eating meditations. Main Outcome Measures: Weight, waist circumference, self-reported daily calorie and fat intake, self-reported calories and fat consumed when eating out, emotional eating, diet-related self-efficacy, and barriers to weight management when eating out. Analysis: General linear models examined change from baseline to final endpoint to determine differences in outcomes between the intervention and control groups. Results: Participants in the intervention group lost significantly more weight (P = .03), had lower average daily caloric (P = .002) and fat intake (P = .001), had increased diet-related self-efficacy (P = .02), and had fewer barriers to weight management when eating out (P = .001). Conclusions and Implications: The Mindful Restaurant Eating intervention was effective in promoting weight management in perimenopausal women. PMID:22243980
Average Annual Rainfall over the Globe
ERIC Educational Resources Information Center
Agrawal, D. C.
2013-01-01
The atmospheric recycling of water is a very important phenomenon on the globe because it not only refreshes the water but it also redistributes it over land and oceans/rivers/lakes throughout the globe. This is made possible by the solar energy intercepted by the Earth. The half of the globe facing the Sun, on the average, intercepts 1.74 ×…
The Average Velocity in a Queue
ERIC Educational Resources Information Center
Frette, Vidar
2009-01-01
A number of cars drive along a narrow road that does not allow overtaking. Each driver has a certain maximum speed at which he or she will drive if alone on the road. As a result of slower cars ahead, many cars are forced to drive at speeds lower than their maximum ones. The average velocity in the queue offers a non-trivial example of a mean…
Average chemical composition of the lunar surface
NASA Technical Reports Server (NTRS)
Turkevich, A. L.
1973-01-01
The available data on the chemical composition of the lunar surface at eleven sites (3 Surveyor, 5 Apollo and 3 Luna) are used to estimate the amounts of principal chemical elements (those present in more than about 0.5% by atom) in average lunar surface material. The terrae of the moon differ from the maria in having much less iron and titanium and appreciably more aluminum and calcium.
Stochastic Games with Average Payoff Criterion
Ghosh, M. K.; Bagchi, A.
1998-11-15
We study two-person stochastic games with a Polish state space and compact action spaces, under the average payoff criterion and a certain ergodicity condition. For the zero-sum game we establish the existence of a value and of stationary optimal strategies for both players. For the nonzero-sum case, the existence of a Nash equilibrium in stationary strategies is established under certain separability conditions.
Modern average global sea-surface temperature
Schweitzer, Peter N.
1993-01-01
The data contained in this data set are derived from the NOAA Advanced Very High Resolution Radiometer Multichannel Sea Surface Temperature data (AVHRR MCSST), which are obtainable from the Distributed Active Archive Center at the Jet Propulsion Laboratory (JPL) in Pasadena, Calif. The JPL tapes contain weekly images of SST from October 1981 through December 1990 in nine regions of the world ocean: North Atlantic, Eastern North Atlantic, South Atlantic, Agulhas, Indian, Southeast Pacific, Southwest Pacific, Northeast Pacific, and Northwest Pacific. This data set represents the results of calculations carried out on the NOAA data and also contains the source code of the programs that made the calculations. The objective was to derive the average sea-surface temperature of each month and week throughout the whole 10-year series, meaning, for example, that data from January of each year would be averaged together. The result is 12 monthly and 52 weekly images for each of the oceanic regions. Averaging the images in this way tends to reduce the number of grid cells that lack valid data and to suppress interannual variability.
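The per-month averaging across years described above can be sketched as follows; the array shapes and fill value are assumptions for illustration, since the release's own source code defines the real conventions:

```python
import numpy as np

FILL = -999.0  # assumed missing-data flag
jan_1982 = np.array([[10.0, FILL], [12.0, 11.0]])  # toy 2x2 SST grids (deg C)
jan_1983 = np.array([[11.0, 13.0], [FILL, 12.0]])

# Stack the same calendar month from every year and average per grid cell,
# ignoring cells flagged as missing; a cell with valid data in any year
# still receives a value in the climatology.
stack = np.ma.masked_equal(np.stack([jan_1982, jan_1983]), FILL)
january_mean = stack.mean(axis=0)
```

Averaging over years in this way reduces the number of grid cells lacking valid data and suppresses interannual variability, as the abstract notes.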
Disk-averaged synthetic spectra of Mars
NASA Technical Reports Server (NTRS)
Tinetti, Giovanna; Meadows, Victoria S.; Crisp, David; Fong, William; Velusamy, Thangasamy; Snively, Heather
2005-01-01
The principal goal of the NASA Terrestrial Planet Finder (TPF) and European Space Agency's Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of a Mars-like planet to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra. We explore the detectability as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model. This model uses observational data as input to generate a database of spatially resolved synthetic spectra for a range of illumination conditions and viewing geometries. The model was validated against spectra recorded by the Mars Global Surveyor Thermal Emission Spectrometer and the Mariner 9 Infrared Interferometer Spectrometer. Results presented here include disk-averaged synthetic spectra, light curves, and the spectral variability at visible and mid-infrared wavelengths for Mars as a function of viewing angle, illumination, and season. We also considered the differences in the spectral appearance of an increasingly ice-covered Mars, as a function of spectral resolution, signal-to-noise, and integration time for both TPF-C and TPF-I/Darwin.
Digital Averaging Phasemeter for Heterodyne Interferometry
NASA Technical Reports Server (NTRS)
Johnson, Donald; Spero, Robert; Shaklan, Stuart; Halverson, Peter; Kuhnert, Andreas
2004-01-01
A digital averaging phasemeter has been built for measuring the difference between the phases of the unknown and reference heterodyne signals in a heterodyne laser interferometer. This phasemeter performs well enough to enable interferometric measurements of distance with accuracy of the order of 100 pm and with the ability to track distance as it changes at a speed of as much as 50 cm/s. This phasemeter is unique in that it is a single, integral system capable of performing three major functions that, heretofore, have been performed by separate systems: (1) measurement of the fractional-cycle phase difference, (2) counting of multiple cycles of phase change, and (3) averaging of phase measurements over multiple cycles for improved resolution. This phasemeter also offers the advantage of making repeated measurements at a high rate: the phase is measured on every heterodyne cycle. Thus, for example, in measuring the relative phase of two signals having a heterodyne frequency of 10 kHz, the phasemeter would accumulate 10,000 measurements per second. At this high measurement rate, an accurate average phase determination can be made more quickly than is possible at a lower rate.
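The resolution benefit of averaging many per-cycle phase readings can be sketched numerically; the noise level and signal values below are invented for illustration, not instrument specifications:

```python
import numpy as np

rng = np.random.default_rng(0)
true_phase = 0.125       # fractional-cycle phase difference (in cycles)
n = 10_000               # one reading per heterodyne cycle at 10 kHz, for 1 s

# Each per-cycle reading is noisy; averaging n independent readings shrinks
# the rms error roughly as 1/sqrt(n).
readings = true_phase + rng.normal(0.0, 0.01, size=n)
estimate = readings.mean()   # standard error ~ 0.01/sqrt(n) = 1e-4 cycles
```

This is why measuring on every heterodyne cycle, as the phasemeter does, lets an accurate average phase be determined more quickly than at a lower measurement rate.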
Blanc, Ann K.; Wardlaw, Tessa
2005-01-01
OBJECTIVE: To critically examine the data used to produce estimates of the proportion of infants with low birth weight in developing countries and to describe biases in these data. To assess the effect of adjustment procedures on the estimates and propose a modified estimation procedure for international reporting purposes. METHODS: Mothers' reports about their recent births in 62 nationally representative Demographic and Health Surveys (DHS) conducted between 1990 and 2000 were analysed. The proportion of infants weighed at birth, characteristics of those weighed, extent of misreporting, and mothers' subjective assessments of their children's size at birth were examined. FINDINGS: In many developing countries the majority of infants were not weighed at birth. Those who were weighed were more likely to have mothers who live in urban areas and are educated, and to be born in a medical facility with assistance from medically trained personnel. Birth weights reported by mothers are "heaped" on multiples of 500 grams. CONCLUSION: Current survey-based estimates of the prevalence of low birth weight are biased substantially downwards. Two adjustments to reported data are recommended: a weighting procedure that combines reported birth weights with mothers' assessment of the child's size at birth, and categorization of one-quarter of the infants reported to have a birth weight of exactly 2500 grams as having low birth weight. Averaged over all surveys, these procedures increased the proportion classified as having low birth weight by 25%. We also recommend that the proportion of infants not weighed at birth be routinely reported. Efforts are needed to increase the weighing of newborns and the recording of their weights. PMID:15798841
Average System Cost Methodology : Administrator's Record of Decision.
United States. Bonneville Power Administration.
1984-06-01
Significant features of the average system cost (ASC) methodology adopted are: retention of the jurisdictional approach, whereby retail rate orders of regulatory agencies provide the primary data for computing the ASC for utilities participating in the residential exchange; inclusion of transmission costs; exclusion of construction work in progress; use of a utility's weighted cost of debt securities; exclusion of income taxes; simplification of procedures for separating subsidized generation and transmission accounts from other accounts; clarification of ASC methodology rules; a more generous review timetable for individual filings; phase-in of the reformed methodology; and a requirement that each exchanging utility file under the new methodology within 20 days of its implementation by the Federal Energy Regulatory Commission. Of the ten major participating utilities, the revised ASC will substantially affect only three. (PSB)
Ensemble Averaged Probability Density Function (APDF) for Compressible Turbulent Reacting Flows
NASA Technical Reports Server (NTRS)
Shih, Tsan-Hsing; Liu, Nan-Suey
2012-01-01
In this paper, we present a concept of the averaged probability density function (APDF) for studying compressible turbulent reacting flows. The APDF is defined as an ensemble average of the fine grained probability density function (FG-PDF) with a mass density weighting. It can be used to exactly deduce the mass density weighted, ensemble averaged turbulent mean variables. The transport equation for APDF can be derived in two ways. One is the traditional way that starts from the transport equation of FG-PDF, in which the compressible Navier-Stokes equations are embedded. The resulting transport equation of APDF is then in a traditional form that contains conditional means of all terms from the right hand side of the Navier-Stokes equations except for the chemical reaction term. These conditional means are new unknown quantities that need to be modeled. Another way of deriving the transport equation of APDF is to start directly from the ensemble averaged Navier-Stokes equations. The resulting transport equation of APDF derived from this approach appears in a closed form without any need for additional modeling. The methodology of ensemble averaging presented in this paper can be extended to other averaging procedures: for example, the Reynolds time averaging for statistically steady flow and the Reynolds spatial averaging for statistically homogeneous flow. It can also be extended to a time or spatial filtering procedure to construct the filtered density function (FDF) for the large eddy simulation (LES) of compressible turbulent reacting flows.
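In standard turbulence notation (not quoted from the paper), the mass-density-weighted ensemble average that the APDF reproduces exactly is the Favre average:

```latex
% Favre (density-weighted) average of a flow variable \phi,
% where the overbar denotes the ensemble average:
\widetilde{\phi} \;=\; \frac{\overline{\rho\,\phi}}{\overline{\rho}}
```

Working with Favre-averaged variables is the conventional way to avoid explicit density-fluctuation correlations in compressible-flow mean equations.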
Weight Distribution for Non-binary Cluster LDPC Code Ensemble
NASA Astrophysics Data System (ADS)
Nozaki, Takayuki; Maehara, Masaki; Kasai, Kenta; Sakaniwa, Kohichi
In this paper, we derive the average weight distributions for irregular non-binary cluster low-density parity-check (LDPC) code ensembles. Moreover, we give the exponential growth rate of the average weight distribution in the limit of large code length. We show that there exist (2, d_c)-regular non-binary cluster LDPC code ensembles whose normalized typical minimum distances are strictly positive.
Parents' Reactions to Finding Out That Their Children Have Average or above Average IQ Scores.
ERIC Educational Resources Information Center
Dirks, Jean; And Others
1983-01-01
Parents of 41 children who had been given an individually-administered intelligence test were contacted 19 months after testing. Parents of average IQ children were less accurate in their memory of test results. Children with above average IQ experienced extremely low frequencies of sibling rivalry, conceit or pressure. (Author/HLM)
Estimates of Random Error in Satellite Rainfall Averages
NASA Technical Reports Server (NTRS)
Bell, Thomas L.; Kundu, Prasun K.
2003-01-01
Satellite rain estimates are most accurate when obtained with microwave instruments on low earth-orbiting satellites. Estimation of daily or monthly total areal rainfall, typically of interest to hydrologists and climate researchers, is made difficult, however, by the relatively poor coverage generally available from such satellites. Intermittent coverage by the satellites leads to random "sampling error" in the satellite products. The inexact information about hydrometeors inferred from microwave data also leads to random "retrieval errors" in the rain estimates. In this talk we will review approaches to quantitative estimation of the sampling error in area/time averages of satellite rain retrievals using ground-based observations, and methods of estimating rms random error, both sampling and retrieval, in averages using satellite measurements themselves.
... spurts in height and weight gain in both boys and girls. Once these changes start, they continue for several ... or obese . Different BMI charts are used for boys and girls under the age of 20 because the amount ...
... probably gain weight. That’s because metabolism (how you burn the calories you eat) can slow down with ... you consume. Energy out means the calories you burn for basic body functions and during physical activity. ...
Englberger, L.
1999-01-01
A programme of weight loss competitions and associated activities in Tonga, intended to combat obesity and the noncommunicable diseases linked to it, has popular support and the potential to effect significant improvements in health. PMID:10063662
... Differences in BMRs are associated with changes in energy balance. Energy balance reflects the difference between the amount of ... such as amphetamines, animals often have a negative energy balance which leads to weight loss. Based on ...
... can boost your efforts by cutting back on alcoholic drinks. Alcohol can cause weight gain in a ... Here is a quick comparison of some common alcoholic drinks: Regular beer, about 150 calories for a ...
... control pills Corticosteroids Some drugs used to treat bipolar disorder, schizophrenia, and depression Some drugs used to treat diabetes Hormone changes or medical problems can also cause unintentional weight gain. This may be due to: ...
... blood sugar levels under control. continue Weight and Type 2 Diabetes Most people are overweight when they're diagnosed with type 2 diabetes . Being overweight or obese increases a person's risk ...
Losing excess weight by eating a healthy diet is one of the best ways of helping to prevent disease. Obesity increases the risk of illness and death due to diabetes, stroke, coronary artery disease, and kidney and gallbladder disorders. The ...
The Economic Impact of Weight Regain
Sheppard, Caroline E.; Lester, Erica L. W.; Chuck, Anderson W.; Birch, Daniel W.; Karmali, Shahzeer; de Gara, Christopher J.
2013-01-01
Background. Obesity is well known for being associated with significant economic repercussions. Bariatric surgery is the only evidence-based solution to this problem as well as a cost-effective method of addressing the concern. Numerous authors have calculated the cost effectiveness and cost savings of bariatric surgery; however, to date the economic impact of weight regain as a component of overall cost has not been addressed. Methods. A literature search was conducted to elucidate the direct costs of obesity and primary bariatric surgery, the rate of weight recidivism and surgical revision, and any costs therein. Results. The quoted cost of obesity in Canada was $2.0 billion–$6.7 billion in 2013 CAD. The median percentage of bariatric procedures that fail due to weight gain or insufficient weight loss is 20% (average: 21.1% ± 10.1%, range: 5.2–39, n = 10). Revision of primary surgeries on average ranges from 2.5% to 18.4%, and depending on the procedure accounts for an additional cost between $14,000 and $50,000 USD per patient. Discussion. There was a significant deficit in the literature pertaining to the cost of revision surgery as compared with primary bariatric surgery. As such, the cycle of weight recidivism and bariatric revisions has not yet been introduced into any previous cost analysis of bariatric surgery. PMID:24454339
A Green's function quantum average atom model
Starrett, Charles Edward
2015-05-21
A quantum average atom model is reformulated using Green's functions. This allows integrals along the real energy axis to be deformed into the complex plane. The advantage is that sharp features such as resonances and bound states are broadened by a Lorentzian with a half-width chosen for numerical convenience. An implementation of this method therefore avoids numerically challenging resonance tracking and the search for weakly bound states, without changing the physical content or results of the model. A straightforward implementation results in up to a factor-of-5 speed-up relative to an optimized orbital-based code.
Auto-exploratory average reward reinforcement learning
Ok, DoKyeong; Tadepalli, P.
1996-12-31
We introduce a model-based average reward Reinforcement Learning method called H-learning and compare it with its discounted counterpart, Adaptive Real-Time Dynamic Programming, in a simulated robot scheduling task. We also introduce an extension to H-learning, which automatically explores the unexplored parts of the state space, while always choosing greedy actions with respect to the current value function. We show that this "Auto-exploratory H-learning" performs better than the original H-learning under previously studied exploration methods such as random, recency-based, or counter-based exploration.
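H-learning itself is model-based and is not reproduced from this record; as background, the average-reward criterion it optimizes can be illustrated with relative value iteration on a toy MDP. The MDP, rewards, and update below are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

# Tiny deterministic MDP: in state 0, "stay" (action 0) earns 1.0 and
# "switch" (action 1) earns 0.5; in state 1, "stay" earns 0.0 and
# "switch" back to state 0 earns 0.5.
rewards = np.array([[1.0, 0.5], [0.0, 0.5]])
nxt = np.array([[0, 1], [1, 0]])   # next state for each (state, action)

# Relative value iteration for the average-reward criterion: iterate a
# one-step lookahead and subtract a reference-state value so the
# relative values h stay bounded; the subtracted amount converges to
# the optimal gain (average reward per step) rho.
h = np.zeros(2)
for _ in range(100):
    q = rewards + h[nxt]     # Q-values under current relative values
    h_new = q.max(axis=1)    # greedy backup
    rho = h_new[0]           # gain estimate at reference state 0
    h = h_new - rho

print(rho)   # 1.0: the optimal policy stays in state 0 forever
```

At convergence the greedy policy maximizes reward per step rather than a discounted sum, which is the distinction the abstract draws against discounted methods like ARTDP.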
5 CFR 591.210 - What are weights?
Code of Federal Regulations, 2010 CFR
2010-01-01
... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false What are weights? 591.210 Section 591.210 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS ALLOWANCES AND DIFFERENTIALS Cost-of-Living Allowance and Post Differential-Nonforeign Areas Cost-Of-Living Allowances § 591.210 What are weights? (a) A weight is...
Importance Ratings and Weighting: Old Concerns and New Perspectives
ERIC Educational Resources Information Center
Russell, Lara B.; Hubley, Anita M.
2005-01-01
This article describes key concepts, reviews empirical findings, and discusses important issues related to the use of subjective importance ratings and importance weighting. The review of empirical findings focuses on weighting achieved via the multiplicative model and on 3 areas in which weighting is commonly used: quality of life, self-esteem,…
Averaging sensors technique for active vibration control applications
NASA Astrophysics Data System (ADS)
Cinquemani, S.; Cazzulani, G.; Braghin, F.; Resta, F.
2013-04-01
Fiber Bragg Grating (FBG) sensors have great potential in active vibration control of smart structures thanks to their small transversal size and the possibility of arranging many sensors in an array. The paper deals with the opportunity to reduce vibration in structures by using distributed sensors embedded in carbon fiber structures through the so-called sensor-averaging technique. This method provides a properly weighted average of the outputs of a distributed array of sensors, generating spatial filters on a broad range of undesired resonance modes without adversely affecting phase and amplitude. This approach combines the advantages of decentralized control techniques, as the control forces applied to the system are independent of one another, with those of centralized control, since it can exploit the information from all the sensors. The ability to easily manage this information allows the synthesis of an efficient modal controller. Furthermore, it enables evaluation of the stability of the control, the effects of spillover, and the consequent effectiveness in reducing vibration. Theoretical aspects are supported by experimental applications on a large flexible system composed of a thin cantilever beam with 30 longitudinal FBG sensors and 6 piezoelectric actuators (PZT).
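The spatial-filtering idea behind a weighted sensor average can be sketched as choosing sensor weights so that the weighted average passes a desired mode shape and nulls unwanted ones. The mode-shape values below are invented for illustration, and the paper's actual synthesis procedure may differ:

```python
import numpy as np

# Mode shapes sampled at 6 sensor locations (one row per mode). These
# numbers are made up; in practice they come from a modal model of the
# structure or from identification experiments.
phi = np.array([
    [1.0, 0.8, 0.5, 0.2, -0.1, -0.3],   # mode to keep
    [1.0, 0.2, -0.6, -0.9, -0.4, 0.5],  # modes to reject
    [0.9, -0.5, -0.7, 0.4, 0.8, -0.2],
])

# Solve phi @ w = [1, 0, 0] for the sensor weights w: the weighted
# average of the 6 sensor outputs then passes the first mode with unit
# gain and spatially filters out the other two.
target = np.array([1.0, 0.0, 0.0])
w, *_ = np.linalg.lstsq(phi, target, rcond=None)

print(np.allclose(phi @ w, target))   # True
```

With more sensors than constrained modes the system is underdetermined, so `lstsq` returns the minimum-norm weight vector; the remaining degrees of freedom could be spent rejecting additional modes.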
Predictive RANS simulations via Bayesian Model-Scenario Averaging
Edeling, W.N.; Cinnella, P.; Dwight, R.P.
2014-10-15
The turbulence closure model is the dominant source of error in most Reynolds-Averaged Navier–Stokes simulations, yet no reliable estimators for this error component currently exist. Here we develop a stochastic, a posteriori error estimate, calibrated to specific classes of flow. It is based on variability in model closure coefficients across multiple flow scenarios, for multiple closure models. The variability is estimated using Bayesian calibration against experimental data for each scenario, and Bayesian Model-Scenario Averaging (BMSA) is used to collate the resulting posteriors, to obtain a stochastic estimate of a Quantity of Interest (QoI) in an unmeasured (prediction) scenario. The scenario probabilities in BMSA are chosen using a sensor which automatically weights those scenarios in the calibration set which are similar to the prediction scenario. The methodology is applied to the class of turbulent boundary-layers subject to various pressure gradients. For all considered prediction scenarios the standard-deviation of the stochastic estimate is consistent with the measurement ground truth. Furthermore, the mean of the estimate is more consistently accurate than the individual model predictions.
Evaluation of a Viscosity-Molecular Weight Relationship.
ERIC Educational Resources Information Center
Mathias, Lon J.
1983-01-01
Background information, procedures, and results are provided for a series of graduate/undergraduate polymer experiments. These include synthesis of poly(methylmethacrylate), viscosity experiment (indicating large effect even small amounts of a polymer may have on solution properties), and measurement of weight-average molecular weight by light…
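The viscosity-molecular weight relationship evaluated in such polymer experiments is presumably the Mark-Houwink equation (an assumption on our part; the record above is truncated):

$$[\eta] = K\,M_v^{a},$$

where $[\eta]$ is the intrinsic viscosity, $M_v$ the viscosity-average molecular weight, and $K$ and $a$ are empirical constants for a given polymer-solvent-temperature system.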
Analysis of averaged multichannel delay times
NASA Astrophysics Data System (ADS)
Kelkar, N. G.; Nowakowski, M.
2008-07-01
The physical significances and the pros and cons involved in the usage of different time-delay formalisms are discussed. The delay-time matrix introduced by Eisenbud, where only s waves participate in a reaction, is in general related to the definition of an angular time delay which is shown not to be equivalent to the so-called phase time delay of Eisenbud and Wigner even for single channel scattering. Whereas the expression due to Smith which is derived from a time-delayed radial wave packet is consistent with a lifetime matrix which is Hermitian, this is not true for any Eisenbud-type lifetime matrix which violates time-reversal invariance. Extending the angular time delay of Nussenzveig to multiple channels, we show that if one performs an average over the directions and subtracts the forward angle contribution containing an interference of the incident and scattered waves, the multichannel angle-dependent average time delay reduces to the one given by Smith. The present work also rectifies a recently misinterpreted misnomer of the relation due to Smith.
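For reference, the phase time delay of Eisenbud and Wigner and Smith's lifetime matrix discussed above are conventionally written (standard textbook forms, not quoted from this paper):

$$\tau_{\mathrm{ph}}(E) = 2\hbar\,\frac{d\delta}{dE}, \qquad \mathbf{Q}(E) = i\hbar\, S\,\frac{dS^{\dagger}}{dE},$$

which agree in the single-channel case $S = e^{2i\delta}$, where $\mathbf{Q}$ reduces to $2\hbar\, d\delta/dE$.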
Adaptive common average filtering for myocontrol applications.
Rehbaum, Hubertus; Farina, Dario
2015-02-01
The use of electromyography (EMG) for the control of upper-limb prostheses has received great interest in neurorehabilitation engineering for decades. Important advances have been made in the development of machine learning algorithms for myocontrol. This paper describes a novel adaptive filter for EMG preprocessing, applied as a conditioning stage for optimal subsequent information extraction. The aim of this filter is to improve both the quality (signal-to-noise ratio) and the selectivity of the EMG recordings. The filter is based on the classic common average reference (CAR), often used in EEG processing. However, while CAR is stationary, the proposed filter, referred to as the adaptive common average reference (ACAR), is signal-dependent and its spatial transfer function is adapted over time. The ACAR filter is evaluated in this study for noise reduction and selectivity. Furthermore, it is shown that its application improves the performance of both pattern recognition and regression methods for myoelectric control. It is concluded that the proposed novel filter for EMG conditioning is a useful preprocessing tool in myocontrol applications. PMID:25388778
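The classic common average reference that the ACAR builds on is simple to state: subtract the across-channel mean from every channel. A minimal sketch follows; the adaptive variant here is purely illustrative (inverse-power channel weighting is our assumption, not the paper's ACAR weighting rule):

```python
import numpy as np

def common_average_reference(emg):
    """Classic CAR: subtract the across-channel mean from every channel.

    emg: array of shape (channels, samples).
    """
    return emg - emg.mean(axis=0, keepdims=True)

def adaptive_car(emg, eps=1e-12):
    """Naive signal-dependent variant (illustrative only): weight each
    channel's contribution to the reference inversely to its power, so
    noisy channels influence the common reference less."""
    power = (emg ** 2).mean(axis=1)             # per-channel power
    w = 1.0 / (power + eps)
    w /= w.sum()                                # weights sum to 1
    reference = (w[:, None] * emg).sum(axis=0)  # weighted spatial average
    return emg - reference

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 1000)) + 5.0        # common-mode offset on all channels
y = common_average_reference(x)
print(np.allclose(y.mean(axis=0), 0.0))          # True: common mode removed
```

Because CAR uses fixed, equal weights it is stationary; the point of the ACAR is that the weighting adapts to the signal over time.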
Average observational quantities in the timescape cosmology
Wiltshire, David L.
2009-12-15
We examine the properties of a recently proposed observationally viable alternative to homogeneous cosmology with smooth dark energy, the timescape cosmology. In the timescape model cosmic acceleration is realized as an apparent effect related to the calibration of clocks and rods of observers in bound systems relative to volume-average observers in an inhomogeneous geometry in ordinary general relativity. The model is based on an exact solution to a Buchert average of the Einstein equations with backreaction. The present paper examines a number of observational tests which will enable the timescape model to be distinguished from homogeneous cosmologies with a cosmological constant or other smooth dark energy, in current and future generations of dark energy experiments. Predictions are presented for comoving distance measures; H(z); the equivalent of the dark energy equation of state, w(z); the Om(z) measure of Sahni, Shafieloo, and Starobinsky; the Alcock-Paczynski test; the baryon acoustic oscillation measure, D_V; the inhomogeneity test of Clarkson, Bassett, and Lu; and the time drift of cosmological redshifts. Where possible, the predictions are compared to recent independent studies of similar measures in homogeneous cosmologies with dark energy. Three separate tests with indications of results in possible tension with the ΛCDM model are found to be consistent with the expectations of the timescape cosmology.
Average Strength Parameters of Reactivated Mudstone Landslide for Countermeasure Works
NASA Astrophysics Data System (ADS)
Nakamura, Shinya; Kimura, Sho; Buddhi Vithana, Shriwantha
2015-04-01
Among the many approaches to landslide stability analysis, several landslide-related studies have used shear strength parameters obtained from laboratory shear tests with the limit equilibrium method. Most of them concluded that the average strength parameters, i.e. average cohesion (c'avg) and average angle of shearing resistance (φ'avg), calculated from back analysis were in agreement with the residual shear strength parameters measured by torsional ring-shear tests on undisturbed and remolded samples. However, disagreement with this contention can be found elsewhere: the residual shear strength measured using a torsional ring-shear apparatus was found to be lower than the average strength calculated by back analysis. One reason why the singular application of residual shear strength in stability analysis causes an underestimation of the safety factor is that the condition of the slip surface of a landslide can be heterogeneous: it may consist of portions that have already reached residual conditions along with other portions that have not. With a view to accommodating such possible differences in slip-surface conditions, it is worthwhile first to grasp the heterogeneous nature of the actual slip surface, to ensure a more suitable selection of measured shear strength values for the stability calculation of landslides. For the present study, the determination procedure for the average strength parameters acting along the slip surface is presented through stability calculations of reactivated landslides in the Shimajiri-mudstone area, Okinawa, Japan. The average strength parameters along slip surfaces have been estimated using the results of laboratory shear tests of the slip surface/zone soils, together with a rational way of assessing the actual, heterogeneous slip-surface conditions.
The results tend to show that the shear strength acting along the slip surface of imperfectly reactivated landslides cannot always be considered equal to the laboratory-measured residual strength. Engineers should recognize that it is reasonable to apply different strength parameters in the stability analysis depending on the actual conditions of the slip surface visible in the boring core samples. In that context, we suggest that it is more appropriate to consider average strength parameters for imperfectly reactivated landslides, for which purpose the use of residual shear strength in combination with other categories of shear strength is recommended. This way, the outcome of the stability analysis will be more inclusive and representative of the non-slickensided portions of a slip surface as well.
Weight Loss Nutritional Supplements
NASA Astrophysics Data System (ADS)
Eckerson, Joan M.
Obesity has reached what may be considered epidemic proportions in the United States, not only for adults but for children. Because of the medical implications and health care costs associated with obesity, as well as the negative social and psychological impacts, many individuals turn to nonprescription nutritional weight loss supplements hoping for a quick fix, and the weight loss industry has responded by offering a variety of products that generate billions of dollars each year in sales. Most nutritional weight loss supplements are purported to work by increasing energy expenditure, modulating carbohydrate or fat metabolism, increasing satiety, inducing diuresis, or blocking fat absorption. To review the literally hundreds of nutritional weight loss supplements available on the market today is well beyond the scope of this chapter. Therefore, several of the most commonly used supplements were selected for critical review, and practical recommendations are provided based on the findings of well-controlled, randomized clinical trials that examined their efficacy. In most cases, the nutritional supplements reviewed either elicited no meaningful effect or resulted in changes in body weight and composition that are similar to what occurs through a restricted diet and exercise program. Although there is some evidence to suggest that herbal forms of ephedrine, such as ma huang, combined with caffeine or with caffeine and aspirin (i.e., the ECA stack) are effective for inducing moderate weight loss in overweight adults, because of the recent ban on ephedra, manufacturers must now use ephedra-free ingredients, such as bitter orange, which do not appear to be as effective. The dietary fiber glucomannan also appears to hold some promise as a possible treatment for weight loss, but other related forms of dietary fiber, including guar gum and psyllium, are ineffective.
Women's work. Maintaining a healthy body weight.
Welch, Nicky; Hunter, Wendy; Butera, Karina; Willis, Karen; Cleland, Verity; Crawford, David; Ball, Kylie
2009-08-01
This study describes women's perceptions of the supports and barriers to maintaining a healthy weight among currently healthy weight women from urban and rural socio-economically disadvantaged areas. Using focus groups and interviews, we asked women about their experiences of maintaining a healthy weight. Overwhelmingly, women described their healthy weight practices in terms of concepts related to work and management. The theme of 'managing health' comprised issues of managing multiple responsibilities, time, and emotions associated with healthy practices. Rural women faced particular difficulties in accessing supports at a practical level (for example, lack of childcare) and due to the gendered roles they enacted in caring for others. Family background (in particular, mothers' attitudes to food and weight) also appeared to influence perceptions about healthy weight maintenance. In the context of global increases in the prevalence of obesity, the value of initiatives aimed at supporting healthy weight women to maintain their weight should not be under-estimated. Such initiatives need to work within the social and personal constraints that women face in maintaining good health. PMID:19446587
Impact of Field of Study, College and Year on Calculation of Cumulative Grade Point Average
ERIC Educational Resources Information Center
Trail, Carla; Reiter, Harold I.; Bridge, Michelle; Stefanowska, Patricia; Schmuck, Marylou; Norman, Geoff
2008-01-01
A consistent finding from many reviews is that undergraduate Grade Point Average (uGPA) is a key predictor of academic success in medical school. Curiously, while uGPA has established predictive validity, little is known about its reliability. For a variety of reasons, medical schools use different weighting schemas to combine years of study.…
Orthopedic stretcher with average-sized person can pass through 18-inch opening
NASA Technical Reports Server (NTRS)
Lothschuetz, F. X.
1966-01-01
Modified Robinson stretcher for vertical lifting and carrying, will pass through an opening 18 inches in diameter, while containing a person of average height and weight. A subject 6 feet tall and weighing 200 pounds was lowered and raised out of an 18 inch diameter opening in a tank to test the stretcher.
Model averaging methods to merge statistical and dynamic seasonal streamflow forecasts in Australia
NASA Astrophysics Data System (ADS)
Schepen, A.; Wang, Q. J.
2014-12-01
The Australian Bureau of Meteorology operates a statistical seasonal streamflow forecasting service. It has also developed a dynamic seasonal streamflow forecasting approach. The two approaches produce similarly reliable forecasts in terms of ensemble spread but can differ in forecast skill depending on catchment and season. Therefore, it may be possible to augment the skill of the existing service by objectively weighting and merging the forecasts. Bayesian model averaging (BMA) is first applied to merge statistical and dynamic forecasts for 12 locations using leave-five-years-out cross-validation. It is seen that the BMA merged forecasts can sometimes be too uncertain, as shown by ensemble spreads that are unrealistically wide and even bi-modal. The BMA method applies averaging to forecast probability densities (and thus cumulative probabilities) for a given forecast variable value. An alternative approach is quantile model averaging (QMA), whereby forecast variable values (quantiles) are averaged for a given cumulative probability (quantile fraction). For the 12 locations, QMA is compared to BMA. BMA and QMA perform similarly in terms of forecast accuracy skill scores and reliability in terms of ensemble spread. Both methods improve forecast skill across catchments and seasons by combining the different strengths of the statistical and dynamic approaches. A major advantage of QMA over BMA is that it always produces reasonably well defined forecast distributions, even in the special cases where BMA does not. Optimally estimated QMA weights and BMA weights are similar; however, BMA weights are more efficiently estimated.
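The BMA/QMA distinction the abstract draws can be illustrated numerically: BMA averages forecast densities, which yields a mixture and can become bi-modal when the component forecasts disagree, while QMA averages forecast values (quantiles) at matched quantile fractions and stays well defined. The Gaussian ensembles and equal weights below are invented for illustration; in practice the weights would be estimated, as the paper describes:

```python
import numpy as np

def qma_merge(ens_a, ens_b, w_a=0.5):
    """Quantile model averaging: average the two forecasts' quantiles at
    matched quantile fractions (approximated by sorted ensemble members)."""
    qa = np.sort(np.asarray(ens_a, dtype=float))
    qb = np.sort(np.asarray(ens_b, dtype=float))
    return w_a * qa + (1.0 - w_a) * qb

def bma_merge(ens_a, ens_b, w_a=0.5, n=10000, seed=0):
    """BMA viewed as a mixture of forecast densities: draw each sample
    from model A with probability w_a, otherwise from model B."""
    rng = np.random.default_rng(seed)
    pick_a = rng.random(n) < w_a
    draws_a = rng.choice(ens_a, size=n)
    draws_b = rng.choice(ens_b, size=n)
    return np.where(pick_a, draws_a, draws_b)

# Two disagreeing unimodal forecasts:
a = np.random.default_rng(1).normal(10.0, 1.0, 500)
b = np.random.default_rng(2).normal(20.0, 1.0, 500)
merged_q = qma_merge(a, b)   # stays unimodal, centred near 15
merged_b = bma_merge(a, b)   # mixture: bimodal, much wider spread
```

The contrast in ensemble spread between `merged_q` and `merged_b` is exactly the pathology the abstract reports for BMA in the special cases where the component forecasts differ strongly.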
Average deployments versus missile and defender parameters
Canavan, G.H.
1991-03-01
This report evaluates the average number of reentry vehicles (RVs) that could be deployed successfully as a function of missile burn time, RV deployment times, and the number of space-based interceptors (SBIs) in defensive constellations. Leakage estimates of boost-phase kinetic-energy defenses as functions of launch parameters and defensive constellation size agree with integral predictions of near-exact calculations for constellation sizing. The calculations discussed here test more detailed aspects of the interaction. They indicate that SBIs can efficiently remove about 50% of the RVs from a heavy missile attack. The next 30% can be removed with twofold less effectiveness; removing the next 10% could double constellation sizes. 5 refs., 7 figs.
Quetelet, the average man and medical knowledge.
Caponi, Sandra
2013-01-01
Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du système social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine. PMID:23970171
Average Gait Differential Image Based Human Recognition
Chen, Jinyan; Liu, Jiansheng
2014-01-01
The difference between adjacent frames of human walking contains useful information for human gait identification. Based on this idea, a silhouette-difference-based human gait recognition method, named the average gait differential image (AGDI), is proposed in this paper. The AGDI is generated by the accumulation of the silhouette differences between adjacent frames. The advantage of this method lies in the fact that, as a feature image, it can preserve both the kinetic and static information of walking. Compared to the gait energy image (GEI), the AGDI is better suited to representing the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that the AGDI has better identification and verification performance than the GEI. Compared to PCA, 2DPCA is a more efficient feature extraction method with lower memory consumption in gait-based recognition. PMID:24895648
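The accumulation step can be sketched as averaging the absolute frame-to-frame silhouette differences over a sequence. This is our reading of the definition; the paper's exact normalization and silhouette preprocessing may differ:

```python
import numpy as np

def average_gait_differential_image(silhouettes):
    """AGDI sketch: average the absolute frame-to-frame differences of a
    sequence of binary silhouette masks of shape (T, H, W)."""
    frames = np.asarray(silhouettes, dtype=float)
    diffs = np.abs(np.diff(frames, axis=0))   # (T-1, H, W) differences
    return diffs.mean(axis=0)                 # (H, W) feature image

# Toy example: a 1-pixel-wide vertical "limb" sweeping across a 4x4 frame.
T, H, W = 5, 4, 4
seq = np.zeros((T, H, W))
for t in range(T):
    seq[t, :, t % W] = 1.0

agdi = average_gait_differential_image(seq)
print(agdi.shape)   # (4, 4)
```

Pixels that change often between frames (moving limbs) get large AGDI values, while static pixels get small ones, which is how the feature image keeps both kinetic and static information.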
NASA Astrophysics Data System (ADS)
Davis, C. V.; Hill, T. M.; Moffitt, S. E.
2013-12-01
Foraminiferal shell weight can be impacted by environmental factors both during initial shell formation and as the result of post mortem preservation. An improved understanding of what determines this relationship can lead to both an understanding of foraminiferal calcite production in modern oceans and proxy development for past environmental conditions. Significantly, foraminiferal shell weight has been linked to carbonate ion concentration in both laboratory culture (of both planktic and benthic species) and in the modern and fossil record (in planktic foraminifera). This study explores the relationship between shell weight and changes in oxygenation and carbonate saturation in fossil benthic foraminifera from a high-resolution sedimentary record (MV0811-15JC; 34°36.930' N, 119°12.920' W; 418 m water depth; 16.1-3.4 ka; sedimentation rate 44-100 cm kyr-1) from Santa Barbara Basin, CA (SBB). Ongoing work in SBB has described rapid biotic reorganization through the recent deglaciation in response to changes in dissolved oxygen concentrations, which are used here to create a semi-quantitative oxygenation history for site MV0811-15JC. In modern Oxygen Minimum Zones, decreases in oxygen closely covary with increases in Total Carbon (with a corresponding decrease in the carbonate saturation state). We interpret records from SBB of the average size-normalized test weight of Uvigerinid and Bolivinid foraminifera as showing that shell weight responds to these changes in oxygenation and saturation state. Multiple metrics of 'size normalization', including by length, geometric estimation of surface area and volume, and tracing of individual silhouettes, are tested. Regardless of the method utilized, the size-normalized shell weight of all species fluctuates with abrupt changes in oxygenation and saturation state.
Although all species respond to large-scale environmental changes, the weight records of Bolivinids and Uvigerinids reveal distinct differences, indicating that processes governing shell weight may vary across taxonomic groups.
Smoking Cessation and Weight Gain.
ERIC Educational Resources Information Center
Hall, Sharon M.; And Others
1986-01-01
Investigated determinants of weight gain after quitting smoking in two smoking treatment outcome studies. Results indicated abstinence resulted in weight gain, and postquitting weight gain was predicted by pretreatment tobacco use, a history of weight problems, and eating patterns. Relapse to smoking did not follow weight gain. (Author/BL)
Cleaning Physical Education Areas.
ERIC Educational Resources Information Center
Griffin, William R.
1999-01-01
Discusses techniques to help create clean and inviting school locker rooms. Daily, weekly or monthly, biannual, and annual cleaning strategies for locker room showers are highlighted as are the specialized maintenance needs for aerobic and dance areas, running tracks, and weight training areas. (GR)
Generalized constructive tree weights
Rivasseau, Vincent; Tanasa, Adrian E-mail: adrian.tanasa@ens-lyon.org
2014-04-15
The Loop Vertex Expansion (LVE) is a quantum field theory (QFT) method which explicitly computes the Borel sum of Feynman perturbation series. This LVE relies in a crucial way on symmetric tree weights which define a measure on the set of spanning trees of any connected graph. In this paper we generalize this method by defining new tree weights. They depend on the choice of a partition of a set of vertices of the graph, and when the partition is non-trivial, they are no longer symmetric under permutation of vertices. Nevertheless we prove they have the required positivity property to lead to a convergent LVE; in fact we formulate this positivity property precisely for the first time. Our generalized tree weights are inspired by the Brydges-Battle-Federbush work on cluster expansions and could be particularly suited to the computation of connected functions in QFT. Several concrete examples are explicitly given.
Light weight phosphate cements
Wagh, Arun S.; Natarajan, Ramkumar,; Kahn, David
2010-03-09
A sealant having a specific gravity in the range of from about 0.7 to about 1.6 for heavy oil and/or coal bed methane fields is disclosed. The sealant has a binder including an oxide or hydroxide of Al or of Fe and a phosphoric acid solution. The binder may have MgO or an oxide of Fe and/or an acid phosphate. The binder is present from about 20 to about 50% by weight of the sealant with a lightweight additive present in the range of from about 1 to about 10% by weight of said sealant, a filler, and water sufficient to provide chemically bound water present in the range of from about 9 to about 36% by weight of the sealant when set. A porous ceramic is also disclosed.
Sethi, Bipin Kumar; Nagesh, V Sri
2015-05-01
Ramadan fasting is associated with significant weight loss in both men and women. Reductions in blood pressure, lipids, blood glucose, body mass index, and waist and hip circumference may also occur. However, benefits accrued during this month often reverse within a few weeks of cessation of fasting, with most people returning to their pre-Ramadan body weights and body composition. To ensure maintenance of this fasting-induced weight loss, health care professionals should encourage continuation of healthy dietary habits, moderate physical activity and behaviour modification, even after the conclusion of fasting. It should be realized that Ramadan is an ideal platform from which to target year-long lifestyle modification, to ensure that whatever health benefits have been gained during this month are perpetuated. PMID:26013789
Li, Yongfu; Shen, Hongwei; Lyons, John W; Sammler, Robert L; Brackhagen, Meinolf; Meunier, David M
2016-03-15
Size-exclusion chromatography (SEC) coupled with multi-angle laser light scattering (MALLS) and differential refractive index (DRI) detectors was employed for determination of the molecular weight distributions (MWD) of methylcellulose ethers (MC) and hydroxypropyl methylcellulose ethers (HPMC) having weight-average molecular weights (Mw) ranging from 20 to more than 1,000 kg/mol. In comparison to previous work involving right-angle light scattering (RALS) and a viscometer for MWD characterization of MC and HPMC, MALLS yields more reliable molecular weights for materials having weight-average molecular weights (Mw) exceeding about 300 kg/mol. A non-ideal SEC separation was observed for cellulose ethers with Mw > 800 kg/mol, manifested by upward divergence of log M vs. elution volume (EV) at larger elution volumes at typical SEC flow rates such as 1.0 mL/min. As such, the number-average molecular weight (Mn) determined for the sample was erroneously large and the polydispersity (Mw/Mn) erroneously small. This non-ideality, resulting in the late elution of high-molecular-weight chains, could be due to the elongation of polymer chains when experimental conditions yield Deborah numbers (De) exceeding 0.5. Non-idealities were eliminated when sufficiently low flow rates were used. Thus, using carefully selected experimental conditions, SEC coupled with MALLS and DRI can provide reliable MWD characterization of MC and HPMC covering the entire ranges of compositions and molecular weights of commercial interest. PMID:26794765
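The direction of the bias the authors report follows from the standard definitions Mn = ΣNiMi/ΣNi and Mw = ΣNiMi²/ΣNiMi: mis-assigning high-molecular-weight chains inflates Mn much more than Mw, shrinking the apparent polydispersity. A minimal sketch with made-up values:

```python
def molecular_weight_averages(masses, counts):
    """Number- and weight-average molecular weights of a discrete MWD.

    Mn = sum(N_i * M_i)    / sum(N_i)
    Mw = sum(N_i * M_i**2) / sum(N_i * M_i)
    """
    n_total = sum(counts)
    first = sum(n * m for n, m in zip(counts, masses))       # sum N_i M_i
    second = sum(n * m * m for n, m in zip(counts, masses))  # sum N_i M_i^2
    mn = first / n_total
    mw = second / first
    return mn, mw, mw / mn   # (Mn, Mw, polydispersity)

# Equal numbers of 100 kg/mol and 300 kg/mol chains:
mn, mw, pdi = molecular_weight_averages([100.0, 300.0], [1.0, 1.0])
# Mn = 200, Mw = (100**2 + 300**2) / 400 = 250, PDI = 1.25
```

These are the same averages MALLS (Mw) and the full chromatogram (Mn) report; the example values are illustrative only.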
High Average Power, High Energy Short Pulse Fiber Laser System
Messerly, M J
2007-11-13
Recently, continuous wave fiber laser systems with output powers in excess of 500 W with good beam quality have been demonstrated [1]. High energy, ultrafast, chirped pulse fiber laser systems have achieved record output energies of 1 mJ [2]. However, these high-energy systems have not been scaled beyond a few watts of average output power. Fiber laser systems are attractive for many applications because they offer the promise of efficient, compact, robust systems that are turnkey. Applications such as cutting, drilling and materials processing, front-end systems for high energy pulsed lasers (such as petawatts), and laser-based sources of high spatial coherence, high-flux x-rays all require high energy short pulses, and two of these three applications also require high average power. The challenge in creating a high energy chirped pulse fiber laser system is to find a way to scale the output energy while avoiding nonlinear effects and maintaining good beam quality in the amplifier fiber. To this end, our 3-year LDRD program sought to demonstrate a high energy, high average power fiber laser system. This work included exploring designs of large mode area optical fiber amplifiers for high energy systems as well as understanding the issues associated with chirped pulse amplification in optical fiber amplifier systems.
Weighted Uncertainty Relations
Xiao, Yunlong; Jing, Naihuan; Li-Jost, Xianqing; Fei, Shao-Ming
2016-01-01
Recently, Maccone and Pati have given two stronger uncertainty relations based on the sum of variances and one of them is nontrivial when the quantum state is not an eigenstate of the sum of the observables. We derive a family of weighted uncertainty relations to provide an optimal lower bound for all situations and remove the restriction on the quantum state. Generalization to multi-observable cases is also given and an optimal lower bound for the weighted sum of the variances is obtained in general quantum situation. PMID:26984295
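For context, the first of the Maccone-Pati sum-of-variances relations referred to here can be sketched as follows (a paraphrase from the literature, not a result derived in this paper):

```latex
\Delta A^2 + \Delta B^2 \;\ge\; \pm\, i\langle [A,B]\rangle
  + \left|\langle \psi^{\perp} | (A \pm i B) | \psi \rangle\right|^2 ,
```

where $|\psi^{\perp}\rangle$ is any state orthogonal to the system state $|\psi\rangle$. The weighted relations of this paper generalize such bounds so that the lower bound remains nontrivial for all states, including eigenstates of $A+B$.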
A theoretical account of cue averaging in the rodent head direction system
Page, Hector J. I.; Walters, Daniel M.; Knight, Rebecca; Piette, Caitlin E.; Jeffery, Kathryn J.; Stringer, Simon M.
2014-01-01
Head direction (HD) cell responses are thought to be derived from a combination of internal (or idiothetic) and external (or allothetic) sources of information. Recent work from the Jeffery laboratory shows that the relative influence of visual versus vestibular inputs upon the HD cell response depends on the disparity between these sources. In this paper, we present simulation results from a model designed to explain these observations. The model accurately replicates the Knight et al. data. We suggest that cue conflict resolution is critically dependent on plastic remapping of visual information onto the HD cell layer. This remap results in a shift in preferred directions of a subset of HD cells, which is then inherited by the rest of the cells during path integration. Thus, we demonstrate how, over a period of several minutes, a visual landmark may gain cue control. Furthermore, simulation results show that weaker visual landmarks fail to gain cue control as readily. We therefore suggest a second longer term plasticity in visual projections onto HD cell areas, through which landmarks with an inconsistent relationship to idiothetic information are made less salient, significantly hindering their ability to gain cue control. Our results provide a mechanism for reliability-weighted cue averaging that may pertain to other neural systems in addition to the HD system. PMID:24366143
Microstructural effects on the average properties in porous battery electrodes
NASA Astrophysics Data System (ADS)
García-García, Ramiro; García, R. Edwin
2016-03-01
A theoretical framework is formulated to analytically quantify the effects of the microstructure on the average properties of porous electrodes, including reactive area density and through-thickness tortuosity as observed in experimentally determined tomographic sections. The proposed formulation includes microstructural non-idealities but also captures the well-known perfectly spherical limit. Results demonstrate that in the absence of any particle alignment, the through-thickness Bruggeman exponent α reaches an asymptotic value of α ∼ 2/3 as the shape of the particles becomes increasingly prolate (needle- or fiber-like). In contrast, the Bruggeman exponent diverges as the shape of the particles becomes increasingly oblate, regardless of the degree of particle alignment. For aligned particles, tortuosity can be dramatically suppressed, e.g., α → 1/10 for ra → 1/10 and MRD ∼ 40. Particle size polydispersity impacts the porosity-tortuosity relation when the average particle size is comparable to the thickness of the electrode layers. Electrode reactive area density can be arbitrarily increased as the particles become increasingly oblate, but asymptotically reaches a minimum value as the particles become increasingly prolate. In the limit of a porous electrode comprised of fiber-like particles, the area density decreases by 24% with respect to a distribution of perfectly spherical particles.
Metamemory, Memory Performance, and Causal Attributions in Gifted and Average Children.
ERIC Educational Resources Information Center
Kurtz, Beth E.; Weinert, Franz E.
1989-01-01
Tested high- and average-achieving German fifth- and seventh-grade students' metacognitive knowledge, attributional beliefs, and performance on a sort recall test. Found ability-related differences in all three areas. Gifted children tended to attribute academic success to high ability while average children attributed success to effort. (SAK)
40 CFR 1051.720 - How do I calculate my average emission level or emission credits?
Code of Federal Regulations, 2010 CFR
2010-07-01
... AGENCY (CONTINUED) AIR POLLUTION CONTROLS CONTROL OF EMISSIONS FROM RECREATIONAL ENGINES AND VEHICLES... vehicles in the engine family times the average internal surface area of the vehicles' fuel tanks. (4... 40 Protection of Environment 32 2010-07-01 2010-07-01 false How do I calculate my average...
40 CFR 1051.720 - How do I calculate my average emission level or emission credits?
Code of Federal Regulations, 2011 CFR
2011-07-01
... AGENCY (CONTINUED) AIR POLLUTION CONTROLS CONTROL OF EMISSIONS FROM RECREATIONAL ENGINES AND VEHICLES... vehicles in the engine family times the average internal surface area of the vehicles' fuel tanks. (4... 40 Protection of Environment 33 2011-07-01 2011-07-01 false How do I calculate my average...
40 CFR 1051.720 - How do I calculate my average emission level or emission credits?
Code of Federal Regulations, 2013 CFR
2013-07-01
... AGENCY (CONTINUED) AIR POLLUTION CONTROLS CONTROL OF EMISSIONS FROM RECREATIONAL ENGINES AND VEHICLES... vehicles in the engine family times the average internal surface area of the vehicles' fuel tanks. (4... 40 Protection of Environment 34 2013-07-01 2013-07-01 false How do I calculate my average...
40 CFR 1051.720 - How do I calculate my average emission level or emission credits?
Code of Federal Regulations, 2012 CFR
2012-07-01
... AGENCY (CONTINUED) AIR POLLUTION CONTROLS CONTROL OF EMISSIONS FROM RECREATIONAL ENGINES AND VEHICLES... vehicles in the engine family times the average internal surface area of the vehicles' fuel tanks. (4... 40 Protection of Environment 34 2012-07-01 2012-07-01 false How do I calculate my average...
40 CFR 1051.720 - How do I calculate my average emission level or emission credits?
Code of Federal Regulations, 2014 CFR
2014-07-01
... AGENCY (CONTINUED) AIR POLLUTION CONTROLS CONTROL OF EMISSIONS FROM RECREATIONAL ENGINES AND VEHICLES... vehicles in the engine family times the average internal surface area of the vehicles' fuel tanks. (4... 40 Protection of Environment 33 2014-07-01 2014-07-01 false How do I calculate my average...
Andersson, Neil; Mitchell, Steven
2006-01-01
Evaluation of mine risk education in Afghanistan used population weighted raster maps as an evaluation tool to assess mine education performance, coverage and costs. A stratified last-stage random cluster sample produced representative data on mine risk and exposure to education. Clusters were weighted by the population they represented, rather than the land area. A "friction surface" hooked the population weight into interpolation of cluster-specific indicators. The resulting population weighted raster contours offer a model of the population effects of landmine risks and risk education. Five indicator levels ordered the evidence from simple description of the population-weighted indicators (level 0), through risk analysis (levels 1-3) to modelling programme investment and local variations (level 4). Using graphic overlay techniques, it was possible to metamorphose the map, portraying the prediction of what might happen over time, based on the causality models developed in the epidemiological analysis. Based on a lattice of local site-specific predictions, each cluster being a small universe, the "average" prediction was immediately interpretable without losing the spatial complexity. PMID:16390549
Perceived weight in youths and risk of overweight or obesity six years later
Duong, Hao T.; Roberts, Robert E.
2014-01-01
Objective To examine the association between perceived overweight in adolescents and the development of overweight or obesity later in life. Methods This paper uses data from a prospective, two-wave cohort study. Participants are 2445 adolescents 11-17 years of age who reported perceived weight at baseline and also had height and weight measured at baseline and at follow-up six years later sampled from managed care groups in a large metropolitan area. Results Youths who perceived themselves as overweight at baseline were approximately 2.5 times as likely to be overweight or obese six years later compared to youths who perceived themselves as average weight (OR= 2.45, 95% CI=1.77-3.39), after adjusting for weight status at baseline, demographic characteristics, major depression, physical activity and dieting behaviors. Those who perceived themselves as skinny were less likely to be overweight or obese later (OR=0.36, 95% CI=0.27-0.49). Conclusions Perceived overweight was associated with overweight or obesity later in life. This relationship was not fully explained by extreme weight control behaviors or major depression. Further research is needed to explore the mechanism involved. PMID:24360137
Weight and age at calving and weight change related to first lactation milk yield.
Fisher, L J; Hall, J W; Jones, S E
1983-10-01
Daily milk yields from 400 first lactations collected from one herd over 16 yr were utilized to ascertain relations of weight and age at calving and change of body weight during first lactation on milk yield and calving interval. These relationships were evaluated for the complete lactation and for each of five 60-day segments of the lactation. The influence of sire on the interrelationship between body weight, age at calving, and milk yield also were measured in data from sires with 10 or more daughters. Average milk yield (300 day) and gain of body weight during first lactation for all records were 5544 and 56.2 kg. Both year and season of calving influenced weight at calving, milk yield, and the relationship between the two. Milk yield was the greatest and body weight gain the least for heifers calving in the fall. Analysis of all records revealed that calving weight but not calving age accounted for a significant portion of variation of milk yield during the first four 60-day periods. Both calving weight and age accounted for a significant amount of the variation of total milk yield. There was a significant effect of sire on calving weight and milk yield but not on total weight gain, age at calving, number of services, or calving interval. There was an increase of number of services and a trend toward a longer calving interval with increasing milk yield. Although age and weight at calving were nearly equal for explaining variation of total yield of milk of first lactation, age at calving was of little value in explaining variation of milk yield of the 60-day intervals. The relationship of these observations to the use of age correction factors for extended first lactation records is discussed. PMID:6643810
Weight change among people randomized to minimal intervention control groups in weight loss trials
Johns, David J.; Hartmann‐Boyce, Jamie; Jebb, Susan A.; Aveyard, Paul
2016-01-01
Objective Evidence on the effectiveness of behavioral weight management programs often comes from uncontrolled program evaluations. These frequently make the assumption that, without intervention, people will gain weight. The aim of this study was to use data from minimal intervention control groups in randomized controlled trials to examine the evidence for this assumption and the effect of frequency of weighing on weight change. Methods Data were extracted from minimal intervention control arms in a systematic review of multicomponent behavioral weight management programs. Two reviewers classified control arms into three categories based on intensity of minimal intervention and calculated 12‐month mean weight change using baseline observation carried forward. Meta‐regression was conducted in STATA v12. Results Thirty studies met the inclusion criteria, twenty‐nine of which had usable data, representing 5,963 participants allocated to control arms. Control arms were categorized according to intensity, as offering leaflets only, a single session of advice, or more than one session of advice from someone without specialist skills in supporting weight loss. Mean weight change at 12 months across all categories was −0.8 kg (95% CI −1.1 to −0.4). In an unadjusted model, increasing intensity by moving up a category was associated with an additional weight loss of −0.53 kg (95% CI −0.96 to −0.09). Also in an unadjusted model, each additional weigh‐in was associated with a weight change of −0.42 kg (95% CI −0.81 to −0.03). However, when both variables were placed in the same model, neither intervention category nor number of weigh‐ins was associated with weight change. Conclusions Uncontrolled evaluations of weight loss programs should assume that, in the absence of intervention, their population would weigh, on average, up to a kilogram less than at baseline by the end of the first year of follow‐up. PMID:27028279
Brief report: Weight dissatisfaction, weight status, and weight loss in Mexican-American children
Technology Transfer Automated Retrieval System (TEKTRAN)
The study objectives were to assess the association between weight dissatisfaction, weight status, and weight loss in Mexican-American children participating in a weight management program. Participants included 265 Mexican American children recruited for a school-based weight management program. Al...
Ensemble Weight Enumerators for Protograph LDPC Codes
NASA Technical Reports Server (NTRS)
Divsalar, Dariush
2006-01-01
Recently, LDPC codes with projected graph, or protograph, structures have been proposed. In this paper, finite length ensemble weight enumerators for LDPC codes with protograph structures are obtained. Asymptotic results are derived as the block size goes to infinity. In particular, we are interested in obtaining ensemble average weight enumerators for protograph LDPC codes whose minimum distance grows linearly with block size. As with irregular ensembles, the linear minimum distance property is sensitive to the proportion of degree-2 variable nodes. The derived results on ensemble weight enumerators show that the linear minimum distance condition on the degree distribution of unstructured irregular LDPC codes is a sufficient but not a necessary condition for protograph LDPC codes.
Body Weight Perception and Weight Control Practices among Teenagers
Jeewon, Rajesh
2013-01-01
Background. Weight-loss behaviours are highly prevalent among adolescents, and body weight perception motivates weight control practices. However, little is known about the association of body weight perception and weight control practices among teenagers in Mauritius. The aim of this study is to investigate the relationships between actual body weight, body weight perception, and weight control practices among teenagers. Methods. A questionnaire-based survey was used to collect data on anthropometric measurements, weight perception and weight control practices from a sample of 180 male and female students (90 boys and 90 girls) aged 13 to 18 years. Results. Based on BMI, 11.7% of students were overweight. Overall, 43.3% of respondents reported trying to lose weight (61.1% girls and 25.6% boys). Weight-loss behaviours were more prevalent among girls. Among the weight-loss teens, 88.5% of students perceived themselves as overweight even though only 19.2% were overweight. Reducing fat intake (84.6%), exercising (80.8%), increasing intake of fruits and vegetables (73.1%), and decreasing intake of sugar (66.7%) were the most commonly reported methods to lose weight. Conclusion. Body weight perception was poorly associated with actual weight status. A gender difference was observed in body weight perception. PMID:24967256
Implicit Bias about Weight and Weight Loss Treatment Outcomes
Carels, Robert A; Hinman, Nova G; Hoffmann, Debra A; Burmeister, Jacob M; Borushok, Jessica E.; Marx, Jenna M; Ashrafioun, Lisham
2014-01-01
Objectives The goal of the current study was to examine the impact of a weight loss intervention on implicit bias toward weight, as well as the relationship among implicit bias, weight loss behaviors, and weight loss outcomes. Additionally, of interest was the relationship among these variables when implicit weight bias was measured with a novel assessment that portrays individuals who are thin and obese engaged in both stereotypical and nonstereotypical health-related behaviors. Methods Implicit weight bias (stereotype consistent and stereotype inconsistent), binge eating, self-monitoring, and body weight were assessed at baseline and post-treatment among participants (N=44) in two weight loss programs. Results Stereotype consistent bias significantly decreased from baseline to post-treatment. Greater baseline stereotype consistent bias was associated with lower binge eating and greater self-monitoring. Greater post-treatment stereotype consistent bias was associated with greater percent weight loss. Stereotype inconsistent bias did not change from baseline to post-treatment and was generally unrelated to outcomes. Conclusion Weight loss treatment may reduce implicit bias toward overweight individuals among weight loss participants. Higher post-treatment stereotype consistent bias was associated with a higher percent weight loss, possibly suggesting that losing weight may serve to maintain implicit weight bias. Alternatively, greater implicit weight bias may identify individuals motivated to make the changes necessary for weight loss. PMID:25261809
Optimizing Average Precision Using Weakly Supervised Data.
Behl, Aseem; Mohapatra, Pritish; Jawahar, C V; Kumar, M Pawan
2015-12-01
Many tasks in computer vision, such as action classification and object detection, require us to rank a set of samples according to their relevance to a particular visual category. The performance of such tasks is often measured in terms of the average precision (ap). Yet it is common practice to employ the support vector machine (svm) classifier, which optimizes a surrogate 0-1 loss. The popularity of svm can be attributed to its empirical performance. Specifically, in fully supervised settings, svm tends to provide similar accuracy to ap-svm, which directly optimizes an ap-based loss. However, we hypothesize that in the significantly more challenging and practically useful setting of weakly supervised learning, it becomes crucial to optimize the right accuracy measure. In order to test this hypothesis, we propose a novel latent ap-svm that minimizes a carefully designed upper bound on the ap-based loss function over weakly supervised samples. Using publicly available datasets, we demonstrate the advantage of our approach over standard loss-based learning frameworks on three challenging problems: action classification, character recognition and object detection. PMID:26539857
Dynamic speckle texture processing using averaged dimensions
NASA Astrophysics Data System (ADS)
Rabal, Héctor; Arizaga, Ricardo; Cap, Nelly; Trivi, Marcelo; Mavilio Nuñez, Adriana; Fernandez Limia, Margarita
2006-08-01
Dynamic speckle or biospeckle is a phenomenon generated by laser light scattering in biological tissues. It is also present in some industrial processes where the surfaces exhibit some kind of activity. There are several methods to characterize the dynamic speckle pattern activity. For quantitative measurements, the Inertia Moment of the co-occurrence matrix of the temporal history of the speckle pattern (THSP) is usually used. In this work we propose the use of average dimensions (AD) for quantitative classification of textures of THSP images corresponding to different stages of the sample. The AD method was tested in an experiment on the drying of paint, a non-biological phenomenon that we usually use as an initial test for dynamic speckle. We chose this phenomenon because its activity can be followed in a relatively simple way by gravimetric measures and because its behaviour is rather predictable. The AD method was also applied to numerically simulated THSP images and its performance compared with another quantitative method. Experiments with biological samples are currently under development.
Average oxidation state of carbon in proteins
Dick, Jeffrey M.
2014-01-01
The formal oxidation state of carbon atoms in organic molecules depends on the covalent structure. In proteins, the average oxidation state of carbon (ZC) can be calculated as an elemental ratio from the chemical formula. To investigate oxidation–reduction (redox) patterns, groups of proteins from different subcellular locations and phylogenetic groups were selected for comparison. Extracellular proteins of yeast have a relatively high oxidation state of carbon, corresponding with oxidizing conditions outside of the cell. However, an inverse relationship between ZC and redox potential occurs between the endoplasmic reticulum and cytoplasm. This trend provides support for the hypothesis that protein transport and turnover are ultimately coupled to the maintenance of different glutathione redox potentials in subcellular compartments. There are broad changes in ZC in whole-genome protein compositions in microbes from different environments, and in Rubisco homologues, lower ZC tends to occur in organisms with higher optimal growth temperature. Energetic costs calculated from thermodynamic models are consistent with the notion that thermophilic organisms exhibit molecular adaptation to not only high temperature but also the reducing nature of many hydrothermal fluids. Further characterization of the material requirements of protein metabolism in terms of the chemical conditions of cells and environments may help to reveal other linkages among biochemical processes with implications for changes on evolutionary time scales. PMID:25165594
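The elemental-ratio calculation of ZC described above can be sketched as follows. The sketch assumes the conventional bookkeeping in which H is assigned +1 and N, O, S are assigned -3, -2 and -2 respectively, with carbon carrying the remainder of the net charge; treat it as an illustrative reading of the abstract rather than the paper's exact code.

```python
def zc(c, h, n=0, o=0, s=0, charge=0):
    # Average oxidation state of carbon for a formula C_c H_h N_n O_o S_s
    # with net charge `charge`, assigning H = +1, N = -3, O = -2, S = -2.
    return (charge - h + 3 * n + 2 * o + 2 * s) / c

zc(1, 4)            # methane CH4        -> -4.0
zc(2, 4, o=2)       # acetic acid C2H4O2 ->  0.0
zc(2, 5, n=1, o=2)  # glycine C2H5NO2    ->  1.0
```

For a protein the same ratio is taken over the summed elemental composition of its amino acid residues, which is what makes ZC computable directly from a sequence or chemical formula.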
Calculating Free Energies Using Average Force
NASA Technical Reports Server (NTRS)
Darve, Eric; Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)
2001-01-01
A new, general formula that connects the derivatives of the free energy along the selected, generalized coordinates of the system with the instantaneous force acting on these coordinates is derived. The instantaneous force is defined as the force acting on the coordinate of interest so that when it is subtracted from the equations of motion the acceleration along this coordinate is zero. The formula applies to simulations in which the selected coordinates are either unconstrained or constrained to fixed values. It is shown that in the latter case the formula reduces to the expression previously derived by den Otter and Briels. If simulations are carried out without constraining the coordinates of interest, the formula leads to a new method for calculating the free energy changes along these coordinates. This method is tested in two examples - rotation around the C-C bond of 1,2-dichloroethane immersed in water and transfer of fluoromethane across the water-hexane interface. The calculated free energies are compared with those obtained by two commonly used methods. One of them relies on determining the probability density function of finding the system at different values of the selected coordinate and the other requires calculating the average force at discrete locations along this coordinate in a series of constrained simulations. The free energies calculated by these three methods are in excellent agreement. The relative advantages of each method are discussed.
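The unconstrained variant described above amounts to thermodynamic integration of the average force: the free energy profile along a coordinate ξ is F(ξ) = -∫⟨f⟩ dξ. A minimal numerical sketch follows; the harmonic test data are illustrative, not taken from the paper.

```python
import numpy as np

def free_energy_profile(xi, mean_force):
    # Trapezoid-rule integration of the negative average force along xi:
    # F(xi) = -∫ <f> dxi, with F fixed to zero at the first grid point.
    increments = 0.5 * (mean_force[1:] + mean_force[:-1]) * np.diff(xi)
    return -np.concatenate(([0.0], np.cumsum(increments)))

# Harmonic check: for U(xi) = xi**2 the mean force is -2*xi,
# so the recovered profile should be xi**2.
xi = np.linspace(0.0, 1.0, 101)
profile = free_energy_profile(xi, -2.0 * xi)
```

In practice `mean_force` would be the instantaneous force on the coordinate averaged over the simulation at each grid point, whether those points come from unconstrained sampling or from a series of constrained runs.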
Menichetti, Giulia; Remondini, Daniel; Panzarasa, Pietro; Mondragón, Raúl J.; Bianconi, Ginestra
2014-01-01
One of the most important challenges in network science is to quantify the information encoded in complex network structures. Disentangling randomness from organizational principles is even more demanding when networks have a multiplex nature. Multiplex networks are multilayer systems of nodes that can be linked in multiple interacting and co-evolving layers. In these networks, relevant information might not be captured if the single layers were analyzed separately. Here we demonstrate that such partial analysis of layers fails to capture significant correlations between weights and topology of complex multiplex networks. To this end, we study two weighted multiplex co-authorship and citation networks involving the authors included in the American Physical Society. We show that in these networks weights are strongly correlated with multiplex structure, and provide empirical evidence in favor of the advantage of studying weighted measures of multiplex networks, such as multistrength and the inverse multiparticipation ratio. Finally, we introduce a theoretical framework based on the entropy of multiplex ensembles to quantify the information stored in multiplex networks that would remain undetected if the single layers were analyzed in isolation. PMID:24906003
ERIC Educational Resources Information Center
Nutter, June
1995-01-01
Secondary level physical education teachers can have their students use math concepts while working out on the weight-room equipment. The article explains how students can reinforce math skills while weightlifting by estimating their strength, estimating their power, or calculating other formulas. (SM)
ERIC Educational Resources Information Center
Sherman, Rachel M.
1997-01-01
Examines ways of giving an existing weight-training room new life without spending a lot of time and money. Tips include adding rubber floor coverings; using indirect lighting; adding windows, art work, or mirrors to open up the room; using more aesthetically pleasing ceiling tiles; upgrading ventilation; repadding or painting the equipment; and
Association between Dietary Carbohydrates and Body Weight
Ma, Yunsheng; Olendzki, Barbara; Chiriboga, David; Hebert, James R.; Li, Youfu; Li, Wenjun; Campbell, MaryJane; Gendreau, Katherine; Ockene, Ira S.
2005-01-01
The role of dietary carbohydrates in weight loss has received considerable attention in light of the current obesity epidemic. The authors investigated the association of body mass index (weight (kg)/height (m)2) with dietary intake of carbohydrates and with measures of the induced glycemic response, using data from an observational study of 572 healthy adults in central Massachusetts. Anthropometric measurements, 7-day dietary recalls, and physical activity recalls were collected quarterly from each subject throughout a 1-year study period. Data were collected between 1994 and 1998. Longitudinal analyses were conducted, and results were adjusted for other factors related to body habitus. Average body mass index was 27.4 kg/m2 (standard deviation, 5.5), while the average percentage of calories from carbohydrates was 44.9 (standard deviation, 9.6). Mean daily dietary glycemic index was 81.7 (standard deviation, 5.5), and glycemic load was 197.8 (standard deviation, 105.2). Body mass index was found to be positively associated with glycemic index, a measure of the glycemic response associated with ingesting different types of carbohydrates, but not with daily carbohydrate intake, percentage of calories from carbohydrates, or glycemic load. Results suggest that the type of carbohydrate may be related to body weight. However, further research is required to elucidate this association and its implications for weight management. PMID:15692080
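The glycemic load figures quoted above follow the standard definition GL = GI × available carbohydrate (g) / 100, summed over all foods eaten in a day. A sketch with hypothetical food values (the choice of GI reference scale affects the numbers but not the arithmetic):

```python
def glycemic_load(gi, carb_g):
    # Glycemic load of one serving: glycemic index times grams of
    # available carbohydrate, divided by 100.
    return gi * carb_g / 100.0

# Hypothetical daily intake as (GI, grams of carbohydrate) pairs.
foods = [(72, 50), (55, 40), (38, 30)]
daily_gl = sum(glycemic_load(gi, g) for gi, g in foods)
```

The study's distinction matters here: two diets with identical total carbohydrate can have very different daily GL if the GIs of the foods differ, which is the sense in which the type of carbohydrate, rather than the amount, tracked body mass index.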
Paper area density measurement from forward transmitted scattered light
Koo, Jackson C.
2001-01-01
A method whereby the average paper fiber area density (weight per unit area) can be directly calculated from the intensity of transmitted, scattered light at two different wavelengths, one being a non-absorbed wavelength. The method also makes it possible to derive the water percentage per fiber area density from a two-wavelength measurement. In this optical measuring technique, the transmitted intensity at, for example, the 2.1-micron cellulose absorption line is measured and compared with a reference transmitted intensity in a nearby spectral region, such as 1.68 microns, where there is no absorption. From the ratio of these two intensities, one can calculate the absorption coefficient at 2.1 microns. This absorption coefficient is then experimentally correlated to the paper fiber area density. The water percentage per fiber area density can be derived from the same two-wavelength measurement approach.
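The two-wavelength ratio described above can be sketched with a simple exponential (Beer-Lambert-style) absorption model: if scattering losses are common to both wavelengths, they cancel in the ratio, which then decays as exp(-k·w) with w the fiber area density. The calibration constant `k_cal` is hypothetical and would come from the experimental correlation the method describes.

```python
import math

def fiber_area_density(i_2p1um, i_1p68um, k_cal):
    # Invert I(2.1um) / I(1.68um) = exp(-k_cal * w) for area density w;
    # common scattering losses cancel in the intensity ratio.
    return -math.log(i_2p1um / i_1p68um) / k_cal
```

With k_cal = 2.0 (arbitrary units), a measured intensity ratio of exp(-1) would correspond to an area density of 0.5 in the same units.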
Raboisson, Didier; Dervillé, Marie; Herman, Nicolas; Cahuzac, Eric; Sans, Pierre; Allaire, Gilles
2012-08-01
Mastitis is a multifactorial disease and the most costly dairy production issue. In spite of extensive literature on udder-health risk factors, effects of metabolic diseases, farmers' competencies and livestock farming system on somatic cells count (SCC) are sparsely described. Herd-level or territorial-level factors affecting monthly composite milk weighted mean cow SCC (CMSCC) were analysed with a linear mixed effect model. The average CMSCC was 266,000 cells/ml. Half of the herds had CMSCC >300,000 cells/ml for 2-6 months a year, and 15% of herds for more than 7 months a year. CMSCC was positively associated with the number of cows, having a beef or fattening herd in addition to the dairy herd, the monthly average days in milk, the yearly age at first calving, the yearly proportion of purchased cows and the yearly culling rate. Moreover, a positive association is reported between CMSCC and the monthly proportion of cows probably with subacute ruminal acidosis (fat percentage minus protein percentage ≤0.30%, for Holstein) and negative energy balance (protein to fat ratio ≤0.66, for Holstein), the yearly average calving interval, having at least one dead cow and the mean monthly temperature. The association was negative for a predominant breed other than Holstein, the monthly milk production, the yearly dry-off period length, the monthly first calving cow proportion, having an autumn calving peak, being a Good Breeding Practices member, the monthly number of days with rain, the altitude and the territorial cattle density. CMSCC varied widely among the 11 dairy production areas. In conclusion, this study showed the average CMSCC for the French dairy cows, compared with international results. Moreover, it quantified the contribution of several factors to CMSCC, in particular metabolic diseases and the farm environment. PMID:22687283
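The two milk-recording proxies used above for Holstein cows translate directly into threshold checks. This sketch only restates the abstract's cutoffs as code and is not the study's statistical model:

```python
def metabolic_flags(fat_pct, protein_pct):
    # Proxies from the abstract, for Holstein cows:
    #  - probable subacute ruminal acidosis: fat% - protein% <= 0.30
    #  - probable negative energy balance:   protein/fat     <= 0.66
    return {
        "subacute_ruminal_acidosis": (fat_pct - protein_pct) <= 0.30,
        "negative_energy_balance": (protein_pct / fat_pct) <= 0.66,
    }
```

A cow testing at 3.5% fat and 3.4% protein would be flagged for probable subacute ruminal acidosis but not negative energy balance; one at 5.0% fat and 3.0% protein would show the reverse pattern.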
Future research in weight bias: What next?
Alberga, Angela S; Russell-Mayhew, Shelly; von Ranson, Kristin M; McLaren, Lindsay; Ramos Salas, Ximena; Sharma, Arya M
2016-06-01
The 2015 Canadian Weight Bias Summit disseminated the newest research advances and brought together 40 experts, stakeholders, and policy makers in various disciplines in health, education, and public policy to identify future research directions in weight bias. In this paper we aim to share the results of the Summit as well as encourage international and interdisciplinary research collaborations in weight bias reduction. Consensus emerged on six research areas that warrant further investigation in weight bias: costs, causes, measurement, qualitative research and lived experience, interventions, and learning from other models of discrimination. These discussions highlighted three key lessons that were informed by the Summit, namely: language matters, the voices of people living with obesity should be incorporated, and interdisciplinary stakeholders should be included. PMID:27129601
Global Average Brightness Temperature for April 2003
NASA Technical Reports Server (NTRS)
2003-01-01
This image shows average temperatures in April, 2003, observed by AIRS at an infrared wavelength that senses either the Earth's surface or any intervening cloud. Similar to a photograph of the planet taken with the camera shutter held open for a month, stationary features are captured while those obscured by moving clouds are blurred. Many continental features stand out boldly, such as our planet's vast deserts, and India, now at the end of its long, clear dry season. Also obvious are the high, cold Tibetan plateau to the north of India, and the mountains of North America. The band of yellow encircling the planet's equator is the Intertropical Convergence Zone (ITCZ), a region of persistent thunderstorms and associated high, cold clouds. The ITCZ merges with the monsoon systems of Africa and South America. Higher latitudes are increasingly obscured by clouds, though some features like the Great Lakes, the British Isles and Korea are apparent. The highest latitudes of Europe and Eurasia are completely obscured by clouds, while Antarctica stands out cold and clear at the bottom of the image.
The Atmospheric Infrared Sounder Experiment, with its visible, infrared, and microwave detectors, provides a three-dimensional look at Earth's weather. Working in tandem, the three instruments can make simultaneous observations all the way down to the Earth's surface, even in the presence of heavy clouds. With more than 2,000 channels sensing different regions of the atmosphere, the system creates a global, 3-D map of atmospheric temperature and humidity and provides information on clouds, greenhouse gases, and many other atmospheric phenomena. AIRS flies onboard NASA's Aqua spacecraft and is managed by the Jet Propulsion Laboratory, Pasadena, Calif., under contract to NASA. JPL is a division of the California Institute of Technology in Pasadena.
Ideal Weight and Weight Satisfaction: Association With Health Practices
Ardern, Chris I.; Church, Timothy S.; Hebert, James R.; Sui, Xuemei; Blair, Steven N.
2009-01-01
Evidence suggests that individuals have become more tolerant of higher body weights over time. To investigate this issue further, the authors examined cross-sectional associations among ideal weight, examination year, and obesity as well as the association of ideal weight and body weight satisfaction with health practices among 15,221 men and 4,126 women in the United States. Participants in 1987 reported higher ideal weights than participants in 2001, an effect particularly pronounced from 1987 to 2001 for younger and obese men (85.5 kg to 94.9 kg) and women (62.2 kg to 70.5 kg). For a given body mass index, higher ideal body weights were associated with greater weight satisfaction but lower intentions to lose weight. Body weight satisfaction was subsequently associated with greater walking/jogging, better diet, and lower lifetime weight loss but with less intention to change physical activity and diet or lose weight (P < 0.01). Conversely, body mass index was negatively associated with weight satisfaction (P < 0.01) and was associated with less walking/jogging, poorer diet, and greater lifetime weight loss but with greater intention to change physical activity and diet or lose weight. Although the health implications of these findings are somewhat unclear, increased weight satisfaction, in conjunction with increases in societal overweight/obesity, may result in decreased motivation to lose weight and/or adopt healthier lifestyle behaviors. PMID:19546153
Models for predicting objective function weights in prostate cancer IMRT
Boutilier, Justin J.; Lee, Taewoo; Craig, Tim; Sharpe, Michael B.; Chan, Timothy C. Y.
2015-04-15
Purpose: To develop and evaluate the clinical applicability of advanced machine learning models that simultaneously predict multiple optimization objective function weights from patient geometry for intensity-modulated radiation therapy of prostate cancer. Methods: A previously developed inverse optimization method was applied retrospectively to determine optimal objective function weights for 315 treated patients. The authors used an overlap volume ratio (OV) of bladder and rectum for different PTV expansions and overlap volume histogram slopes (OVSR and OVSB for the rectum and bladder, respectively) as explanatory variables that quantify patient geometry. Using the optimal weights as ground truth, the authors trained and applied three prediction models: logistic regression (LR), multinomial logistic regression (MLR), and weighted K-nearest neighbor (KNN). The population average of the optimal objective function weights was also calculated. Results: The OV at 0.4 cm and OVSR at 0.1 cm features were found to be the most predictive of the weights. The authors observed comparable performance (i.e., no statistically significant difference) between LR, MLR, and KNN methodologies, with LR appearing to perform the best. All three machine learning models outperformed the population average by a statistically significant amount over a range of clinical metrics including bladder/rectum V53Gy, bladder/rectum V70Gy, and dose to the bladder, rectum, CTV, and PTV. When comparing the weights directly, the LR model predicted bladder and rectum weights that had, on average, a 73% and 74% relative improvement over the population average weights, respectively. The treatment plans resulting from the LR weights had, on average, a rectum V70Gy that was 35% closer to the clinical plan and a bladder V70Gy that was 29% closer, compared to the population average weights. Similar results were observed for all other clinical metrics. 
Conclusions: The authors demonstrated that the KNN and MLR weight prediction methodologies perform comparably to the LR model and can produce clinical quality treatment plans by simultaneously predicting multiple weights that capture trade-offs associated with sparing multiple OARs.
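The weighted KNN model named above can be sketched as an inverse-distance-weighted average of the k nearest training patients in feature space. The feature values, label vectors, k, and the 1/d weighting below are illustrative assumptions; the study's actual explanatory variables are the overlap volume ratio and overlap volume histogram slopes:

```python
import math

def weighted_knn_predict(train_x, train_y, query, k=3):
    # Sort training patients by Euclidean distance in feature space
    # (e.g. overlap-volume features) and keep the k nearest.
    nearest = sorted(
        (math.dist(query, xi), yi) for xi, yi in zip(train_x, train_y)
    )[:k]
    # Inverse-distance weights; epsilon guards against zero distance.
    eps = 1e-9
    wsum = sum(1.0 / (d + eps) for d, _ in nearest)
    dim = len(train_y[0])
    return [
        sum(yi[j] / (d + eps) for d, yi in nearest) / wsum
        for j in range(dim)
    ]

# Hypothetical geometry features -> (bladder weight, rectum weight) pairs
train_x = [(0.10, 0.30), (0.40, 0.50), (0.90, 0.10), (0.60, 0.70)]
train_y = [(0.2, 0.8), (0.4, 0.6), (0.7, 0.3), (0.5, 0.5)]
pred = weighted_knn_predict(train_x, train_y, (0.40, 0.50), k=3)
```

A query that coincides with a training patient is dominated by that patient's weight vector, which is the behavior one wants from the inverse-distance weighting.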
Morasiewicz, Piotr; Dragan, Szymon
2013-01-01
Distortion of the axis and shortening of the limbs result in multiple musculoskeletal pathologies. Rotation disorders should also be included among the disorders of the axis of the lower limb. In the case of rotational distortion, only derotation osteotomy can effectively correct torsion-associated deformations. Rotational distortion correction is accompanied by translational displacement and torsion, which results in more complex biomechanics. Using the pedobarographic platform, it is possible to evaluate static and dynamic posture and gait, the percentage distribution of body weight on the lower limbs, and balance. Physiological gait and distribution of weight on the lower extremities are symmetrical. Balance is one of the determinants of proper biomechanics of the musculoskeletal system. An important aspect of treatment evaluation is pedobarographic assessment of balance and of the distribution of body weight on the lower extremities. The aim of this work was the pedobarographic assessment of body weight distribution on the lower limbs and balance in patients after derotation corticotomies using the Ilizarov method. The study examined a group of 56 patients who underwent derotation corticotomy using the Ilizarov method between 1996 and 2012 at the Clinic of Orthopaedics and Traumatology of the Musculoskeletal System in Wrocław. The control group consisted of 54 patients who were treated with derotation-free corrective corticotomy using the Ilizarov method. Distribution of body weight on the lower limbs and balance were assessed with the pedobarographic platform. Following derotation corticotomy, the amount of body weight placed on the operated limb by subjects from the study group averaged 47.81%, compared with 52.19% for the healthy limb. These differences were not statistically significant.
The difference between the average percentage of body weight placed on the diseased and healthy limb in the study group and the controls were not found to be statistically significant. There were no statistical differences in the average length of the gravity line or in the average surface area of the center of gravity position between the study and control groups. Balanced distribution of body weight on the lower limbs was achieved following derotation corticotomies using the Ilizarov method. Derotation corticotomies performed with the Ilizarov method allow for achieving normalization of body weight distribution on the lower limbs and balance, with values similar to those resulting from Ilizarov method derotation-free osteotomy. PMID:23952018
Particle sizing by weighted measurements of scattered light
NASA Technical Reports Server (NTRS)
Buchele, Donald R.
1988-01-01
A description is given of a measurement method, applicable to a polydispersion of particles, in which the intensity of scattered light at any angle is weighted by a factor proportional to that angle. Four angles are then determined at which the weighted intensity falls to four given fractions of the maximum intensity. These yield four characteristic diameters: the volume/area mean diameter D32 (the Sauter mean), the volume/diameter mean D31, and the diameters at cumulative volume fractions of 0.5 (Dv0.5, the volume median) and 0.75 (Dv0.75). They also yield the volume dispersion of diameters. Mie scattering computations show that an average diameter less than three micrometers cannot be accurately measured. The results are relatively insensitive to extraneous background light and to the nature of the diameter distribution. Also described is an experimental method of verifying the conclusions by using two microscope slides coated with polystyrene microspheres to simulate the particles and the background.
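The weighting-and-thresholding step can be sketched as follows. The snippet weights each sampled intensity by its angle and interpolates the angles past the peak at which the weighted intensity falls to given fractions of the maximum. The sample values are invented, and the mapping from these angles to the characteristic diameters D32, D31, Dv0.5, and Dv0.75 (the paper's calibration) is not reproduced here:

```python
def angles_at_fractions(thetas, intensities, fractions=(0.5, 0.75)):
    # Weight scattered intensity by angle: w(theta) = I(theta) * theta.
    w = [i * t for i, t in zip(intensities, thetas)]
    peak = max(w)
    ipk = w.index(peak)
    out = {}
    for f in fractions:
        target = f * peak
        # Walk outward from the peak until w crosses the target level,
        # then linearly interpolate the crossing angle.
        for j in range(ipk, len(w) - 1):
            if w[j] >= target >= w[j + 1]:
                frac = (w[j] - target) / (w[j] - w[j + 1])
                out[f] = thetas[j] + frac * (thetas[j + 1] - thetas[j])
                break
    return out

thetas = [1.0, 2.0, 3.0, 4.0, 5.0]   # scattering angles, arbitrary units
intens = [10.0, 6.0, 3.0, 1.5, 0.6]  # invented scattered intensities
res = angles_at_fractions(thetas, intens)
```

Each returned angle would then be converted to one of the characteristic diameters via the relations the paper establishes from Mie computations.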
Maternal weight gain during pregnancy and neonatal birth weight: a review of the literature
Monte, Santo; Valenti, Oriana; Giorgio, Elsa; Renda, Eliana; Hyseni, Entela; Faraci, Marianna; De Domenico, Roberta; Di Prima, Fosca A. F.
2011-01-01
Obesity has become a serious global public health issue and has consequences for nearly all areas of medicine. Within obstetrics, obesity not only has direct implications for the health of a pregnancy but also impacts the weight of the child in infancy and beyond. As such, maternal weight may influence the prevalence and severity of obesity in future generations. Pregnancy has been identified as a key time to target a weight control or weight loss strategy to help curb the rapidly growing obesity epidemic. This study reviews the current evidence for interventions to promote weight control or weight loss in women around the time of pregnancy. Studies have shown positive correlations of both maternal pre-pregnancy weight and gestational weight gain with the birth weight of the infant and associated health risks, so interventions have been put to clinical trials at both time points. Because many women are concerned about the health of their babies during pregnancy and are in frequent contact with their healthcare providers, pregnancy may be an especially powerful “teachable moment” for the promotion of healthy eating and physical activity behaviors among women. PMID:22439072
Steinberg, Dori M; Bennett, Gary G; Askew, Sandy; Tate, Deborah F
2015-01-01
Background Daily weighing is emerging as the recommended self-weighing frequency for weight loss. This is likely because it improves adoption of weight control behaviors. Objective Examine whether weighing every day is associated with greater adoption of weight control behaviors compared to less frequent weighing. Design Longitudinal analysis of a previously conducted 6-month randomized controlled trial. Participants/setting Overweight men and women in Chapel Hill, NC in the intervention arm (N=47). Intervention The intervention focused on daily weighing for weight loss using an e-scale that transmitted weights to a study website, along with weekly e-mailed lessons and tailored feedback on daily weighing adherence and weight loss progress. Main outcome measures We gathered objective data on self-weighing frequency from e-scales. At baseline and 6 months, weight change was measured in clinic, and weight control behaviors (total items = 37), dietary strategies, and caloric expenditure from physical activity were assessed via questionnaires. Caloric intake was assessed using an online 24-hour recall tool. Statistical analyses We used chi-square tests to examine variation in discrete weight control behaviors and linear regression models to examine differences in weight, dietary strategies, and caloric intake and expenditure by self-weighing frequency. Results Fifty-one percent of participants weighed every day (n=24) over 6 months. The average self-weighing frequency among those weighing less than daily (n=23) was 5.4±1.2 days per week. Daily weighers lost significantly more weight compared to those weighing less than daily (mean difference, −6.1 kg; 95% CI −10.2, −2.1; p=.004). The total number of weight control behaviors adopted was greater among daily weighers (17.6 ± 7.6 vs. 11.2 ± 6.4; p=.004). There were no differences by self-weighing frequency in dietary strategies, caloric intake, or caloric expenditure.
Conclusions Weighing every day led to greater adoption of weight control behaviors and produced greater weight loss compared to weighing most days of the week. This further supports daily weighing as an effective weight loss tool. PMID:25683820
High average power, high current pulsed accelerator technology
Neau, E.L.
1995-05-01
High current pulsed accelerator technology was developed from the late 1960s through the late 1980s to satisfy the needs of various military-related applications such as effects simulators, particle beam devices, free electron lasers, and drivers for Inertial Confinement Fusion devices. The emphasis in these devices is to achieve very high peak power levels, with pulse lengths on the order of a few tens of nanoseconds, peak currents of up to tens of MA, and accelerating potentials of up to tens of MV. New high average power systems, incorporating thermal management techniques, are enabling the potential use of high peak power technology in a number of diverse industrial application areas such as materials processing, food processing, stack gas cleanup, and the destruction of organic contaminants. These systems employ semiconductor and saturable magnetic switches to achieve short pulse durations that can then be added efficiently to give MV accelerating potentials while delivering average power levels of a few hundred kilowatts to perhaps many megawatts. The Repetitive High Energy Pulsed Power project is developing short-pulse, high current accelerator technology capable of generating beams with kilojoules of energy per pulse delivered to areas of 1000 cm² or more using ions, electrons, or x-rays. Modular technology is employed to meet the needs of a variety of applications requiring from hundreds of kV to several MV and from tens to hundreds of kA. Modest repetition rates, up to a few hundred pulses per second (PPS), allow these machines to deliver average currents on the order of a few hundred mA. The design and operation of the second-generation 300 kW RHEPP-II machine, now being brought on-line to operate at 2.5 MV, 25 kA, and 100 PPS, will be described in detail as one example of the new high average power, high current pulsed accelerator technology.
Optimal estimation of the diffusion coefficient from non-averaged and averaged noisy magnitude data
NASA Astrophysics Data System (ADS)
Kristoffersen, Anders
2007-08-01
The magnitude operation changes the signal distribution in MRI images from Gaussian to Rician. This introduces a bias that must be taken into account when estimating the apparent diffusion coefficient. Several estimators are known in the literature. In the present paper, two novel schemes are proposed. Both are based on simple least squares fitting of the measured signal, either to the median (MD) or to the maximum probability (MP) value of the Probability Density Function (PDF). Fitting to the mean (MN) or a high signal-to-noise ratio approximation to the mean (HS) is also possible. Special attention is paid to the case of averaged magnitude images. The PDF, which cannot be expressed in closed form, is analyzed numerically. A scheme for performing maximum likelihood (ML) estimation from averaged magnitude images is proposed. The performance of several estimators is evaluated by Monte Carlo (MC) simulations. We focus on typical clinical situations, where the number of acquisitions is limited. For non-averaged data the optimal choice is found to be MP or HS, whereas uncorrected schemes and the power image (PI) method should be avoided. For averaged data MD and ML perform equally well, whereas uncorrected schemes and HS are inadequate. MD provides easier implementation and higher computational efficiency than ML. Unbiased estimation of the diffusion coefficient allows high resolution diffusion tensor imaging (DTI) and may therefore help solving the problem of crossing fibers encountered in white matter tractography.
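The Rician bias that motivates these estimators is easy to demonstrate. The simulation below (signal and noise levels chosen purely for illustration) forms magnitude data from a signal plus complex Gaussian noise and shows that at low SNR the sample mean sits well above the true signal, which is why uncorrected least-squares fits of magnitude data bias the apparent diffusion coefficient:

```python
import math
import random

random.seed(0)

def magnitude_samples(signal, sigma, n=200_000):
    # Magnitude (Rician) data: |signal + complex Gaussian noise|, with
    # independent real and imaginary noise components of std dev sigma.
    return [
        math.hypot(signal + random.gauss(0.0, sigma), random.gauss(0.0, sigma))
        for _ in range(n)
    ]

true_signal, sigma = 1.0, 1.0  # SNR = 1, i.e. a very noisy acquisition
samples = magnitude_samples(true_signal, sigma)
mean_mag = sum(samples) / len(samples)
# mean_mag comes out near 1.55, well above the true signal of 1.0:
# an uncorrected fit of log(mean) against b-value would therefore
# misestimate the diffusion coefficient.
```

The schemes discussed in the abstract (fitting to the median, maximum-probability value, or mean of the Rician PDF) all correct for exactly this discrepancy between the magnitude statistics and the underlying signal.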
A Microgenetic Analysis of Strategic Variability in Gifted and Average-Ability Children
ERIC Educational Resources Information Center
Steiner, Hillary Hettinger
2006-01-01
Many researchers have described cognitive differences between gifted and average-performing children. Regarding strategy use, the gifted advantage is often associated with differences such as greater knowledge of strategies, quicker problem solving, and the ability to use strategies more appropriately. The current study used microgenetic methods…
Saturn kilometric radiation: Average and statistical properties
NASA Astrophysics Data System (ADS)
Lamy, L.; Zarka, P.; Cecconi, B.; Prangé, R.; Kurth, W. S.; Gurnett, D. A.
2008-07-01
Since Cassini entered Saturn's magnetosphere in July 2004, the auroral Saturn kilometric radiation (SKR), which dominates the kronian radio spectrum, has been observed quasi-continuously. Consecutive orbits of the spacecraft covered distances to Saturn down to 1.3 Saturn radii, all local times and, since December 2006, latitudes as high as 60°. On the basis of carefully calibrated and cleaned long-term time series and dynamic spectra, we analyze the average properties and characteristics of the SKR over 2.75 years starting at Cassini's Saturn orbit insertion. This study confirms and expands previous results from Voyager 1 and 2 studies in the 1980s: the SKR spectrum is found to extend from a few kHz to 1200 kHz; extraordinary mode emission dominates, i.e., left-handed (LH) from the southern kronian hemisphere and right-handed (RH) from the northern one, for which we measure directly a degree of circular polarization up to 100%; the variable visibility of SKR along Cassini's orbit is consistent with sources at or close to the local electron cyclotron frequency fce, in the Local Time (LT) sector 09 h-12 h, and at latitudes ≥70°, with emission beamed along hollow cones centered on the local magnetic field vector; this anisotropic beaming results in the existence of an equatorial radio shadow zone, whose extent is quantified as a function of frequency; it also causes the systematic disappearance of emission at high latitudes above 200 kHz and below 30 kHz.
In addition, we obtain new results on SKR: LH and RH intensity variations are found to match together at all timescales ≥30 min; moreover their spectra are found to be conjugated as a function of the latitude of the observer; we use this conjugacy to merge LH and RH spectra and derive pronounced systematic dependences of the SKR spectrum as a function of the spacecraft latitude and LT (that will be the input of a subsequent modeling study); we identify for the first time ordinary mode SKR emission; finally, in addition to the SKR and n-SMR components, we discuss the narrowband kilometric component (named here n-SKR) which extends mainly between 10 and 40 kHz, preferentially observed from high latitudes.
Interpreting Sky-Averaged 21-cm Measurements
NASA Astrophysics Data System (ADS)
Mirocha, Jordan
2015-01-01
Within the first ~billion years after the Big Bang, the intergalactic medium (IGM) underwent a remarkable transformation, from a uniform sea of cold neutral hydrogen gas to a fully ionized, metal-enriched plasma. Three milestones during this epoch of reionization -- the emergence of the first stars, black holes (BHs), and full-fledged galaxies -- are expected to manifest themselves as extrema in sky-averaged ("global") measurements of the redshifted 21-cm background. However, interpreting these measurements will be complicated by the presence of strong foregrounds and non-trivialities in the radiative transfer (RT) modeling required to make robust predictions. I have developed numerical models that efficiently solve the frequency-dependent radiative transfer equation, which has led to two advances in studies of the global 21-cm signal. First, frequency-dependent solutions facilitate studies of how the global 21-cm signal may be used to constrain the detailed spectral properties of the first stars, BHs, and galaxies, rather than just the timing of their formation. And second, the speed of these calculations allows one to search vast expanses of a currently unconstrained parameter space, while simultaneously characterizing the degeneracies between parameters of interest. I find principally that (1) physical properties of the IGM, such as its temperature and ionization state, can be constrained robustly from observations of the global 21-cm signal without invoking models for the astrophysical sources themselves, (2) translating IGM properties to galaxy properties is challenging, in large part due to frequency-dependent effects. For instance, evolution in the characteristic spectrum of accreting BHs can modify the 21-cm absorption signal at levels accessible to first generation instruments, but could easily be confused with evolution in the X-ray luminosity star-formation rate relation.
Finally, (3) the independent constraints most likely to aid in the interpretation of global 21-cm signal measurements are detections of Lyman Alpha Emitters at high redshifts and constraints on the midpoint of reionization, both of which are among the primary science objectives of ongoing or near-future experiments.
High School Weight Training: A Comprehensive Program.
ERIC Educational Resources Information Center
Viscounte, Roger; Long, Ken
1989-01-01
Describes a weight training program, suitable for the general student population and the student-athlete, which is designed to produce improvement in specific, measurable areas including bench press (upper body), leg press (lower body), vertical jump (explosiveness); and 40-yard dash (speed). Two detailed charts are included, with notes on their…
NASA Astrophysics Data System (ADS)
Bostan, P. A.; Heuvelink, G. B. M.; Akyurek, S. Z.
2012-10-01
Accurate mapping of the spatial distribution of annual precipitation is important for many applications in hydrology, climatology, agronomy, ecology and other environmental sciences. In this study, we compared five different statistical methods to predict spatially the average annual precipitation of Turkey using point observations of annual precipitation at meteorological stations and spatially exhaustive covariate data (i.e. elevation, aspect, surface roughness, distance to coast, land use and eco-region). The methods compared were multiple linear regression (MLR), ordinary kriging (OK), regression kriging (RK), universal kriging (UK), and geographically weighted regression (GWR). Average annual precipitation of Turkey from 1970 to 2006 was measured at 225 meteorological stations that are fairly uniformly distributed across the country, with a somewhat higher spatial density along the coastline. The observed annual precipitation varied between 255 mm and 2209 mm with an average of 628 mm. The annual precipitation was highest along the southern and northern coasts and low in the centre of the country, except for the area near the Van Lake, Keban and Ataturk Dams. To compare the performance of the interpolation techniques the total dataset was first randomly split in ten equally sized test datasets. Next, for each test data set the remaining 90% of the data comprised the training dataset. Each training dataset was then used to calibrate and apply the spatial prediction model. Predictions at the test dataset locations were compared with the observed test data. Validation was done by calculating the Root Mean Squared Error (RMSE), R-square and Standardized MSE (SMSE) values. According to these criteria, universal kriging is the most accurate with an RMSE of 178 mm, an R-square of 0.61 and an SMSE of 1.06, whilst multiple linear regression performed worst (RMSE of 222 mm, R-square of 0.39, and SMSE of 1.44). 
Ordinary kriging, UK using only elevation, and geographically weighted regression are intermediate, with RMSE values of 201 mm, 212 mm and 211 mm, and R-square values of 0.50, 0.44 and 0.45, respectively. The RK results are close to those of UK, with an RMSE of 186 mm and R-square of 0.57. The spatial extrapolation performance of each method was also evaluated. This was done by predicting the annual precipitation in the eastern part of Turkey using observations from the western part. Results showed that MLR, GWR and RK performed best, with little difference between these methods. The large prediction error variances confirmed that extrapolation is more difficult than interpolation. Whilst spatial extrapolation benefits most from covariate information, as shown by an RMSE reduction of about 60 mm, in this study covariate information was also valuable for spatial interpolation because it reduced the RMSE by, on average, 30 mm.
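The three validation criteria used above can be written compactly. In this sketch (variable names are illustrative), the standardized MSE divides each squared error by the interpolator's own prediction variance, so a well-calibrated model scores near 1:

```python
import math

def validation_metrics(observed, predicted, pred_var=None):
    # Cross-validation criteria used to compare interpolators:
    # RMSE, R-square, and (if prediction variances are supplied)
    # the standardized MSE, whose ideal value is 1.
    n = len(observed)
    errors = [o - p for o, p in zip(observed, predicted)]
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    mean_obs = sum(observed) / n
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    r2 = 1.0 - sum(e * e for e in errors) / ss_tot
    smse = None
    if pred_var is not None:
        smse = sum(e * e / v for e, v in zip(errors, pred_var)) / n
    return rmse, r2, smse
```

Computing these on each held-out 10% test set, as the study does, and averaging over the ten splits gives the figures quoted above.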
Basso, Olga
2008-03-01
Birth weight is associated not just with infant morbidity and mortality, but with outcomes occurring much later in life, including adult mortality, as reported in a paper by Baker and colleagues in this issue of Epidemiology. While these associations are tantalizing per se, the truly interesting question concerns the mechanisms that underlie these links. The prevailing hypothesis suggests a "fetal origin" of diseases resulting from alterations in fetal nutrition that permanently program organ function. The most commonly proposed alternative is that factors, mainly genetic, that affect both fetal growth and disease risk are responsible for the observed associations. Although both mechanisms are intellectually attractive, and may well coexist, we should be cautious not to focus excessively on fetal growth. Doing so may lead us in the wrong direction, as has likely happened in the case of birth weight in relation to infant survival. PMID:18277158
Gain weighted eigenspace assignment
NASA Technical Reports Server (NTRS)
Davidson, John B.; Andrisani, Dominick, II
1994-01-01
This report presents the development of the gain weighted eigenspace assignment methodology. This provides a designer with a systematic methodology for trading off eigenvector placement versus gain magnitudes, while still maintaining desired closed-loop eigenvalue locations. This is accomplished by forming a cost function composed of a scalar measure of error between desired and achievable eigenvectors and a scalar measure of gain magnitude, determining analytical expressions for the gradients, and solving for the optimal solution by numerical iteration. For this development the scalar measure of gain magnitude is chosen to be a weighted sum of the squares of all the individual elements of the feedback gain matrix. An example is presented to demonstrate the method. In this example, solutions yielding achievable eigenvectors close to the desired eigenvectors are obtained with significant reductions in gain magnitude compared to a solution obtained using a previously developed eigenspace (eigenstructure) assignment method.
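The trade-off cost described above can be sketched as a single scalar: an eigenvector-error term plus a weighted sum of squares of the feedback-gain elements. The structure follows the report's description, but the numerical values and the simple elementwise weighting shown are assumptions for illustration:

```python
def cost(desired_vecs, achieved_vecs, gain_matrix, gain_weights):
    # Scalar measure of error between desired and achievable eigenvectors.
    vec_err = sum(
        sum((vd - va) ** 2 for vd, va in zip(dv, av))
        for dv, av in zip(desired_vecs, achieved_vecs)
    )
    # Weighted sum of squares of the individual feedback-gain elements;
    # larger weights penalize gain magnitude more heavily.
    gain_term = sum(
        w * k * k
        for wrow, krow in zip(gain_weights, gain_matrix)
        for w, k in zip(wrow, krow)
    )
    return vec_err + gain_term

# Tiny two-state example with invented vectors, gains, and weights.
J = cost([[1.0, 0.0], [0.0, 1.0]],   # desired eigenvectors
         [[1.0, 0.0], [0.0, 0.5]],   # achievable eigenvectors
         [[2.0, 0.0], [0.0, 1.0]],   # feedback gain matrix
         [[0.1, 0.1], [0.1, 0.1]])   # gain weights
```

In the report, the gradients of such a cost are derived analytically and the optimum is found by numerical iteration; scaling the gain weights up or down is what trades eigenvector placement against gain magnitude.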
Solubility in binary solvent systems III: predictive expressions based on molecular surface areas.
Acree, W E; Rytting, J H
1983-03-01
The nearly ideal binary solvent model, which has led to successful predictive equations for the partial molar Gibbs free energy of the solute in binary solvent mixtures, was extended to include molecular surface areas as weighting factors. Two additional expressions were derived and compared to previously developed equations (based on molar volumes as weighting factors) for their ability to predict anthracene and naphthalene solubilities in mixed solvents from measurements in the pure solvents. The most successful equation in terms of goodness of fit involved a surface fraction average of the excess Gibbs free energy relative to Raoult's law and predicted experimental solubilities in 25 systems with an average deviation of 1.7% and a maximum deviation of 7.5%. Two expressions approximating weighting factors with molar volumes provided accurate predictions in many of the systems studied but failed in their ability to predict anthracene solubilities in solvent mixtures containing benzene. PMID:6842382
Longitudinal Study of Body Weight Changes in Children: Who is Gaining and Who is Losing Weight
Williamson, D. A.; Han, H.; Johnson, W.D.; Stewart, T.M.; Harsha, D.
2010-01-01
Cross-sectional studies have reported significant temporal increases in prevalence of childhood obesity in both genders and various racial groups, but recently the rise has subsided. Childhood obesity prevention trials suggest that, on average, overweight/obese children lose body weight and non-overweight children gain weight. This investigation tested the hypothesis that overweight children lose body weight/fat and non-overweight children gain body weight/fat using a longitudinal research design that did not include an obesity prevention program. The participants were 451 children in 4th to 6th grades at baseline. Height, weight, and body fat were measured at Month 0 and Month 28. Each child’s body mass index (BMI) percentile score was calculated specific for their age, gender and height. Higher BMI percentile scores and percent body fat at baseline were associated with larger decreases in BMI and percent body fat after 28 months. The BMI percentile mean for African-American girls increased whereas BMI percentile means for white boys and girls and African-American boys were stable over the 28 month study period. Estimates of obesity and overweight prevalence were stable because incidence and remission were similar. These findings support the hypothesis that overweight children tend to lose body weight and non-overweight children tend to gain body weight. PMID:20885393
Social embeddedness in an online weight management programme is linked to greater weight loss.
Poncela-Casasnovas, Julia; Spring, Bonnie; McClary, Daniel; Moller, Arlen C; Mukogo, Rufaro; Pellegrini, Christine A; Coons, Michael J; Davidson, Miriam; Mukherjee, Satyam; Nunes Amaral, Luis A
2015-03-01
The obesity epidemic is heightening chronic disease risk globally. Online weight management (OWM) communities could potentially promote weight loss among large numbers of people at low cost. Because little is known about the impact of these online communities, we examined the relationship between individual and social network variables, and weight loss in a large, international OWM programme. We studied the online activity and weight change of 22,419 members of an OWM system during a six-month period, focusing especially on the 2033 members with at least one friend within the community. Using Heckman's sample-selection procedure to account for potential selection bias and data censoring, we found that initial body mass index, adherence to self-monitoring and social networking were significantly correlated with weight loss. Remarkably, greater embeddedness in the network was the variable with the highest statistical significance in our model for weight loss. Average per cent weight loss at six months increased in a graded manner from 4.1% for non-networked members, to 5.2% for those with a few (two to nine) friends, to 6.8% for those connected to the giant component of the network, to 8.3% for those with high social embeddedness. Social networking within an OWM community, and particularly when highly embedded, may offer a potent, scalable way to curb the obesity epidemic and other disorders that could benefit from behavioural changes. PMID:25631561
NASA Astrophysics Data System (ADS)
Catura, R. C.; Vieira, J. R.
1985-09-01
Lightweight mirror blanks were fabricated by dip-brazing a core of low-mass aluminum foam material to thin face sheets of solid aluminum. The blanks weigh 40% of an equivalent-size solid mirror and were diamond turned to provide reflective surfaces. Optical interferometry was used to assess their dimensional stability over 7 months. No changes in flatness were observed (to the sensitivity of the measurements, half a wavelength of red light).
Kirkland, L.; Anderson, R.
1993-01-01
Only 5% of patients dieting to achieve permanent weight loss will be successful and reap the associated health benefits. Ninety-five percent will be unsuccessful. The health implications of failed dieting attempts are numerous and include negative effects on both physical and psychological well-being. Better alternatives to dieting help patients take small, positive, enjoyable steps toward healthy eating, active living, and a positive self-image. PMID:8435552
NASA Astrophysics Data System (ADS)
Carletti, Timoteo; Righi, Simone
2010-05-01
In this paper we define a new class of weighted complex networks sharing several properties with fractal sets, whose topology can be completely analytically characterized in terms of the involved parameters and of the fractal dimension. General networks with fractal or hierarchical structures can be set in the proposed framework, which could moreover help explain the widespread emergence of fractal structures in nature.
Uncertainty of GHz-band Whole-body Average SARs in Infants based on their Kaup Indices
NASA Astrophysics Data System (ADS)
Miwa, Hironobu; Hirata, Akimasa; Fujiwara, Osamu; Nagaoka, Tomoaki; Watanabe, Soichi
We previously showed that a strong correlation exists between the absorption cross section and the body surface area of a human for 0.3-2 GHz far-field exposure, and proposed a formula for estimating whole-body-average specific absorption rates (WBA-SARs) in terms of height and weight. In this study, to evaluate variability in the WBA-SARs in infants based on their physique, we derived a new formula including the Kaup indices of infants, which are used to check their growth, and thereby estimated the WBA-SARs in infants with respect to their age from 0 months to three years. As a result, we found that for the same height/weight, the smaller the Kaup indices are, the larger the WBA-SARs become, and that the variability in the WBA-SARs is around 15% at the same age. To validate these findings, using the FDTD method, we simulated the GHz-band WBA-SARs in numerical human models corresponding to infants with ages of 0, 1, 3, 6 and 9 months, which were obtained by scaling down the anatomically based Japanese three-year child model developed by NICT (National Institute of Information and Communications Technology). Results show that the FDTD-simulated WBA-SARs are smaller by 20% compared to those estimated for infants having the median height and the Kaup index of the 0.5 percentile, which provide conservative WBA-SARs.
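The physique measure underlying the study above can be sketched in code. The Kaup index for infants is, numerically, the same quantity as BMI (weight in grams divided by height in centimeters squared, times ten); the paper's actual WBA-SAR estimation formula is not reproduced here, since its coefficients are not given in the abstract.

```python
# Hedged sketch: the Kaup index used to characterize infant physique.
# Only the index itself is computed; the WBA-SAR formula from the paper
# is not reproduced (its coefficients are not stated in the abstract).

def kaup_index(weight_kg: float, height_cm: float) -> float:
    """Kaup index: weight [g] / height [cm]^2 * 10.

    Numerically identical to BMI expressed in kg/m^2, which is why it
    plays the role BMI plays for adults.
    """
    return weight_kg * 1000.0 / (height_cm ** 2) * 10.0

# Illustrative (hypothetical) 6-month-old: 7.8 kg, 67 cm -> index of about 17.4
k = kaup_index(7.8, 67.0)
```

Because the index equals BMI in kg/m², two infants with the same height and weight always share the same Kaup index; the study's point is that, across infants of the same age, lower indices correspond to larger surface-area-to-mass ratios and hence larger WBA-SARs.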
Drop-Weight-Tear-Test Equipment Energy Calibration Program
Eiber, R.J.
1988-10-01
The Drop-Weight-Tear-Test (DWTT) energy absorption has not previously been considered as a measure of fracture toughness from this test. The DWTT was originally planned to define the fracture appearance transition temperature of line pipe. The test has been very successfully used for this purpose for the past 20 years. During this period of DWTT usage, the need for a toughness measurement to control ductile fracture propagation has led to the application of a Charpy shelf-energy requirement, in addition to a DWTT or Charpy shear-area requirement, in the specification of line-pipe properties. The purpose of exploring energy measurements in the DWTT is to determine if both fracture appearance and toughness can be obtained from a single test. To this end, a number of individual steel companies and gas-transmission companies have been examining the use of a DWTT energy measurement. Also, the Round Robin Program on the precracked drop-weight-tear test (DWTT) indicated that the energy measurements by the 12 participating laboratories had a coefficient of variation (standard deviation/average) that is about 30 percent larger than for the Charpy V-notch test. Also, the average energy variations between the various laboratories were quite large. This suggested that there was a need for calibrating the equipment used for energy measurements. It also suggested that equipment design may be a contributing factor. The objective of this program is to obtain reference steels that are uniform and of varying energy levels so that reference specimens can be supplied to a laboratory to assess the accuracy and precision of their DWTT energy measuring equipment. 7 figs., 5 tabs.
Code of Federal Regulations, 2010 CFR
2010-07-01
... average and corporate pool average sulfur level determined? 80.205 Section 80.205 Protection of... ADDITIVES Gasoline Sulfur Gasoline Sulfur Standards § 80.205 How is the annual refinery or importer average and corporate pool average sulfur level determined? (a) The annual refinery or importer average...
Code of Federal Regulations, 2011 CFR
2011-07-01
... average and corporate pool average sulfur level determined? 80.205 Section 80.205 Protection of... ADDITIVES Gasoline Sulfur Gasoline Sulfur Standards § 80.205 How is the annual refinery or importer average and corporate pool average sulfur level determined? (a) The annual refinery or importer average...
Code of Federal Regulations, 2013 CFR
2013-07-01
... average and corporate pool average sulfur level determined? 80.205 Section 80.205 Protection of... ADDITIVES Gasoline Sulfur Gasoline Sulfur Standards § 80.205 How is the annual refinery or importer average and corporate pool average sulfur level determined? (a) The annual refinery or importer average...
Code of Federal Regulations, 2014 CFR
2014-07-01
... average and corporate pool average sulfur level determined? 80.205 Section 80.205 Protection of... ADDITIVES Gasoline Sulfur Gasoline Sulfur Standards § 80.205 How is the annual refinery or importer average and corporate pool average sulfur level determined? (a) The annual refinery or importer average...
Low birth weight and residential proximity to PCB-contaminated waste sites.
Baibergenova, Akerke; Kudyakov, Rustam; Zdeb, Michael; Carpenter, David O
2003-01-01
Previous investigations have shown that women exposed to polychlorinated biphenyls (PCBs) are at increased risk of giving birth to an infant with low birth weight (< 2,500 g), and that this relationship is stronger for male than for female infants. We have tested the hypothesis that residents in a zip code that contains a PCB hazardous waste site or abuts a body of water contaminated with PCBs are at increased risk of giving birth to a low-birth-weight baby. We used the birth registry of the New York State Vital Statistics to identify all births between 1994 and 2000 in New York State except for New York City. This registry provides information on the infant, mother, and father together with the zip code of the mother's residence. The 865 state Superfund sites, the 86 National Priority List sites, and the six Areas of Concern in New York were characterized regarding whether or not they contain PCBs as a major contaminant. We identified 187 zip codes containing or abutting PCB-contaminated sites, and these zip codes were the residences of 24.5% of the 945,077 births. The birth weight in the PCB zip codes was on average 21.6 g less than in other zip codes (p < 0.001). Because there are many other risk factors for low birth weight, we have adjusted for these using a logistic regression model for these confounders. After adjusting for sex of the infant, mother's age, race, weight, height, education, income, marital status, and smoking, there was still a statistically significant 6% increased risk of giving birth to a male infant of low birth weight. These observations support the hypothesis that living in a zip code near a PCB-contaminated site poses a risk of exposure and giving birth to an infant of low birth weight. PMID:12896858
Relationship between smoking, weight and attitudes to weight in adolescent schoolgirls.
Halek, C.; Kerry, S.; Humphrey, H.; Crisp, A. H.; Hughes, J. M.
1993-01-01
A total of 1,932 schoolgirls aged 11-18 from seven schools in the South London area were surveyed using questionnaires which addressed eating patterns, body weight history, attitudes to body weight and shape, menstrual history and current smoking behaviour. They were also weighed and their height was measured. Twelve per cent of the girls were regular smokers and 10% smoked seven or more cigarettes over a 4 day period. Amongst girls aged 14 and over, 15% smoked regularly and a further 9% occasionally. A significant relationship was found between smoking and weight. Smokers were more likely to be moderately overweight in relation to their peers and to have been worried about their weight at some stage. There were differences between girls in state schools and those in independent schools with regard to smoking behaviour and weight. The findings have implications for anti-smoking strategies and health education generally. PMID:8506187
Hure, Alexis Jayne; Collins, Clare Elizabeth; Giles, Warwick Bruce; Paul, Jonathan Winter; Smith, Roger
2012-10-01
The objective of this study is to describe the fetal phenotype in utero and its associations with maternal pre-pregnancy weight and gestational weight gain. This prospective longitudinal cohort included 179 Australian women with singleton pregnancies. Serial ultrasound measurements were performed at 19, 25, 30 and 36 (±1) weeks gestation and maternal anthropometric measurements were collected concurrently. The ultrasound scans included the standard fetal biometry of head circumference, biparietal diameter, abdominal circumference, and femur length, and body composition at the abdomen and mid-thigh, including fat and lean mass cross-sectional areas. Maternal gestational weight gain was compared to current clinical guidelines. The participants had an average of 3.7 ± 0.8 scans and birth data were available for 165 neonates. Fifty-four per cent of the cohort gained weight in excess of current recommendations, according to pre-pregnancy body mass index (BMI). Maternal gestational weight positively predicted fetal abdominal circumference (P = 0.029) and lean abdominal mass area (P = 0.046) in linear mixed model regression analysis, adjusted for known and potential confounders. At any pre-pregnancy BMI, gaining weight above the current recommendations resulted in a larger fetus according to standard biometry, because of significantly larger lean muscle mass at the abdomen (P = 0.024) and not due to an increase in fat mass (P = 0.463). We have demonstrated the importance of maternal weight gain, independent of pre-pregnancy BMI, to support the growth of a large but lean fetus. Prenatal counselling should focus on achieving a healthy BMI prior to conception so that gestational weight gain restrictions can be minimised. PMID:22052171
Tonsillitis, tonsillectomy and weight disturbance.
Conlon, B J; Donnelly, M J; McShane, D P
1997-10-18
To determine the relationship between tonsillitis, tonsillectomy and abnormalities in body weight, we have analyzed pre- and post-operative weights in a population of 55 children who underwent adenotonsillectomy in our department. Pre-operative mean weight was 9.8% heavier than the standard mean normal weight for age and post-operative mean weight was 22% greater than standard mean weight for age. The mean weight gain during the follow-up period was 12% greater than that which would be normally expected (p < 0.001). This study suggests that children undergoing tonsillectomy are slightly heavier than their peers and that following the procedure this discrepancy increases. PMID:9477349
Heart weight and running ability.
Gunn, H M
1989-01-01
The weight of the heart as determined by dissection techniques was compared with liveweight and total muscle weight in different types of horses and dogs as adults and during growth. With increasing body size both within and between species, heart weight forms a lesser proportion of liveweight and of total muscle weight. Heart weight forms a greater proportion of liveweight in Thoroughbreds and Greyhounds (breeds noted for high speed running) than in other less fleet members of their species and Greyhounds have greater heart weights relative to total muscle weight than other dogs. PMID:2630537
Weight misperception amongst youth of a developing country: Pakistan -a cross-sectional study
2013-01-01
Background Weight misperception is the discordance between an individual's actual weight status and the perception of his/her weight. It is a common problem in the youth population as enumerated by many international studies. However, data from Pakistan in this area are deficient. Methods A multi-center cross-sectional survey was carried out in undergraduate university students of Karachi between the ages of 15-24. Participants were questioned regarding their perception of being thin, normal or fat and it was compared with their Body Mass Index (BMI). Measurements of height and weight were taken for this purpose and BMI was categorized using Asian cut-offs. Weight misperception was identified when the self-perceived weight (average, fat, thin) did not match the calculated BMI distribution. Chi-square tests and logistic regression tests were applied to show associations of misperception and types of misperception (overestimation, underestimation) with independent variables like age, gender, type of university and faculties. A P-value of <0.05 was taken as statistically significant. Results 42.4% of the total participants (43.3% of males and 41% of females) misperceived their weight. Amongst those who misperceived, 38.2% had overestimated and 61.8% had underestimated their weight. Greatest misperception was observed in the overweight category (91%), specifically amongst overweight males (95%). Females of the underweight category overestimated their weight and males of the overweight category underestimated their weight. Amongst the total participants, females overestimated 8 times more than males (OR 8.054, 95% CI 5.34-12.13). Misperception increased with the age of the participants (OR 1.114, 95% CI 1.041-1.191). Odds of misperception were greater in students of private sector universities as compared to public (OR 1.861, 95% CI: 1.29-2.67).
Odds of misperception were less in students of medical sciences (OR 0.693, 95% CI 0.491-0.977), engineering (OR 0.586, 95% CI 0.364-0.941) and business administration (OR 0.439, 95% CI 0.290-0.662) as compared to general faculty universities. Conclusion There was marked discrepancy between the calculated BMI and the self-perceived weight in the youth of Karachi. Better awareness campaigns need to be implemented to reverse these trends. PMID:23915180
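The classification scheme described in the Methods above can be sketched as follows. The specific Asian BMI cut points are an assumption (<18.5 thin, 18.5-22.9 average, ≥23.0 fat); the abstract states that Asian cut-offs were used but does not give the exact thresholds.

```python
# Hedged sketch of the weight-misperception classification in the study
# above. The Asian BMI cut-offs used here (<18.5 thin, 18.5-22.9 average,
# >=23.0 fat) are an assumption; the study's exact cut points are not
# stated in the abstract.

def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

def bmi_category(b: float) -> str:
    if b < 18.5:
        return "thin"
    if b < 23.0:
        return "average"
    return "fat"

# Ordering of categories, used to decide the direction of misperception.
ORDER = {"thin": 0, "average": 1, "fat": 2}

def misperception(self_perceived: str, weight_kg: float, height_m: float):
    """Return None if the self-perception matches the BMI category,
    otherwise 'overestimation' or 'underestimation'."""
    actual = bmi_category(bmi(weight_kg, height_m))
    if self_perceived == actual:
        return None
    return ("overestimation" if ORDER[self_perceived] > ORDER[actual]
            else "underestimation")
```

Under this scheme, a participant with BMI 17 who reports being "fat" is an overestimator, and one with BMI 27 who reports being "average" is an underestimator, matching the two misperception types analyzed in the study.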
Upper Limit of Weights in TAI Computation
NASA Technical Reports Server (NTRS)
Thomas, Claudine; Azoubib, Jacques
1996-01-01
The international reference time scale International Atomic Time (TAI) computed by the Bureau International des Poids et Mesures (BIPM) relies on a weighted average of data from a large number of atomic clocks. In it, the weight attributed to a given clock depends on its long-term stability. In this paper the TAI algorithm is used as the basis for a discussion of how to implement an upper limit of weight for clocks contributing to the ensemble time. This problem is approached through the comparison of two different techniques. In one case, a maximum relative weight is fixed: no individual clock can contribute more than a given fraction to the resulting time scale. The weight of each clock is then adjusted according to the qualities of the whole set of contributing elements. In the other case, a parameter characteristic of frequency stability is chosen: no individual clock can appear more stable than the stated limit. This is equivalent to choosing an absolute limit of weight and attributing this to the most stable clocks independently of the other elements of the ensemble. The first technique is more robust than the second and automatically optimizes the stability of the resulting time scale, but leads to a more complicated computation. The second technique has been used in the TAI algorithm since the very beginning. Careful analysis of tests on real clock data shows that improvement of the stability of the time scale requires revision from time to time of the fixed value chosen for the upper limit of absolute weight. In particular, we present results which confirm the decision of the CCDS Working Group on TAI to increase the absolute upper limit by a factor of 2.5. We also show that the use of an upper relative contribution further helps to improve the stability and may be a useful step towards better use of the massive ensemble of HP 5071A clocks now contributing to TAI.
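The first technique discussed above, capping each clock's relative contribution, can be illustrated with a minimal sketch. This is not the BIPM algorithm itself, only the fixed-point idea: clip any clock whose share of the total exceeds the cap, renormalize the remainder over the unclipped clocks, and repeat until no clock is over the limit.

```python
# Minimal sketch (not the actual BIPM/TAI algorithm) of a relative
# weight cap: no clock may contribute more than max_fraction of the
# ensemble average. Clipped clocks are pinned at the cap and the
# residual weight is redistributed proportionally over the rest.

def relative_weights(raw, max_fraction):
    """Normalize raw weights so they sum to 1, with no element
    exceeding max_fraction. Requires max_fraction * len(raw) >= 1."""
    n = len(raw)
    rel = [w / sum(raw) for w in raw]
    capped = set()
    while True:
        over = {i for i in range(n) if i not in capped and rel[i] > max_fraction}
        if not over:
            return rel
        capped |= over
        remainder = 1.0 - max_fraction * len(capped)
        uncapped_sum = sum(raw[i] for i in range(n) if i not in capped)
        rel = [max_fraction if i in capped else raw[i] / uncapped_sum * remainder
               for i in range(n)]

# One very stable clock among four ordinary ones, cap at 30%:
# its share drops from ~71% to exactly 30%, the rest share the remainder.
w = relative_weights([10, 1, 1, 1, 1], 0.3)
```

This mirrors the paper's observation that the relative cap adapts to the whole ensemble: which clocks end up pinned at the cap, and how much weight the others receive, depends on every contributing clock, which is also why the computation is more involved than a fixed absolute limit.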
Wee, Christina C.; Hamel, Mary Beth; Apovian, Caroline M.; Blackburn, George L.; Bolcic-Jankovic, Dragana; Colten, Mary Ellen; Hess, Donald T.; Huskey, Karen W.; Marcantonio, Edward R.; Schneider, Benjamin E.; Jones, Daniel B.
2015-01-01
Importance Weight loss surgery (WLS) has been shown to produce long-term weight loss but is not risk free or universally effective. The weight loss expectations and willingness to undergo perioperative risk among patients seeking WLS remain unknown. Objectives To examine the expectations and motivations of WLS patients and the mortality risks they are willing to undertake and to explore the demographic characteristics, clinical factors, and patient perceptions associated with high weight loss expectations and willingness to assume high surgical risk. Design We interviewed patients seeking WLS and conducted multivariable analyses to examine the characteristics associated with high weight loss expectations and the acceptance of mortality risks of 10% or higher. Setting Two WLS centers in Boston. Participants Six hundred fifty-four patients. Main Outcome Measures Disappointment with a sustained weight loss of 20% and willingness to accept a mortality risk of 10% or higher with WLS. Results On average, patients expected to lose as much as 38% of their weight after WLS and expressed disappointment if they did not lose at least 26%. Most patients (84.8%) accepted some risk of dying to undergo WLS, but only 57.5% were willing to undergo a hypothetical treatment that produced a 20% weight loss. The mean acceptable mortality risk to undergo WLS was 6.7%, but the median risk was only 0.1%; 19.5% of all patients were willing to accept a risk of at least 10%. Women were more likely than men to be disappointed with a 20% weight loss but were less likely to accept high mortality risk. After initial adjustment, white patients appeared more likely than African American patients to have high weight loss expectations and to be willing to accept high risk. Patients with lower quality-of-life scores and those who perceived needing to lose more than 10% and 20% of weight to achieve “any” health benefits were more likely to have unrealistic weight loss expectations. 
Low quality-of-life scores were also associated with willingness to accept high risk. Conclusions and Relevance Most patients seeking WLS have high weight loss expectations and believe they need to lose substantial weight to derive any health benefits. Educational efforts may be necessary to align expectations with clinical reality. PMID:23553327