Wieczorek, Michael E.
2014-01-01
This digital data release consists of seven data files of soil attributes for the United States and the District of Columbia. The files are derived from the Natural Resources Conservation Service's (NRCS) Soil Survey Geographic database (SSURGO). The data files can be linked to the raster datasets of soil mapping unit identifiers (MUKEY) available through the NRCS's Gridded Soil Survey Geographic (gSSURGO) database (http://www.nrcs.usda.gov/wps/portal/nrcs/detail/soils/survey/geo/?cid=nrcs142p2_053628). The associated files, named DRAINAGECLASS, HYDRATING, HYDGRP, HYDRICCONDITION, LAYER, TEXT, and WTDEP, contain area- and depth-weighted average values for selected soil characteristics from the SSURGO database for the conterminous United States and the District of Columbia. The SSURGO tables were acquired from the NRCS on March 5, 2014. The soil characteristic in the DRAINAGECLASS table is drainage class (DRNCLASS), which identifies the natural drainage conditions of the soil and refers to the frequency and duration of wet periods. The soil characteristic in the HYDRATING table is hydric rating (HYDRATE), a yes/no field that indicates whether or not a map unit component is classified as a "hydric soil". The soil characteristics in the HYDGRP table are the percentages for each hydrologic group per MUKEY. The soil characteristic in the HYDRICCONDITION table is hydric condition (HYDCON), which describes the natural condition of the soil component. The soil characteristics in the LAYER table are available water capacity (AVG_AWC), bulk density (AVG_BD), saturated hydraulic conductivity (AVG_KSAT), vertical saturated hydraulic conductivity (AVG_KV), soil erodibility factor (AVG_KFACT), porosity (AVG_POR), field capacity (AVG_FC), the soil fraction passing a number 4 sieve (AVG_NO4), the soil fraction passing a number 10 sieve (AVG_NO10), the soil fraction passing a number 200 sieve (AVG_NO200), and organic matter (AVG_OM).
The soil characteristics in the TEXT table are percent sand, silt, and clay (AVG_SAND, AVG_SILT, and AVG_CLAY). The soil characteristics in the WTDEP table are the annual minimum water table depth (WTDEP_MIN), available water storage in the 0-25 cm soil horizon (AWS025), the minimum water table depth for the months April, May, and June (WTDEPAMJ), the available water storage in the first 25 centimeters of the soil horizon (AWS25), the dominant drainage class (DRCLSD), the wettest drainage class (DRCLSWET), and the hydric classification (HYDCLASS), which indicates the proportion of the map unit, expressed as a class, that is "hydric", based on the hydric classification of a given MUKEY. (See Entity_Description for more detail.) The tables were created with a set of Arc Macro Language (AML) and awk scripts (awk was created at Bell Labs in the 1970s; its name is derived from the first letters of the last names of its authors, Alfred Aho, Peter Weinberger, and Brian Kernighan). Send an email to mewieczo@usgs.gov to obtain copies of the computer code (see Process_Description). The methods used are outlined in NRCS's "SSURGO Data Packaging and Use" (NRCS, 2011). The tables can be related or joined to the gSSURGO rasters of MUKEYs by the item 'MUKEY.' Joining or relating the tables to a MUKEY grid allows the creation of grids of area- and depth-weighted soil characteristics. A 90-meter raster of MUKEYs is provided, which can be used to produce rasters of soil attributes. Rasters with more detailed resolution are available from the NRCS via the link above.
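The area- and depth-weighted averaging described above can be sketched in a few lines. This is a minimal illustration, not the authors' AML/awk code (which is available by email); the horizon values, thicknesses, and component percentages are hypothetical:

```python
def depth_weighted(values, thicknesses):
    """Depth-weighted average of horizon values, weighted by horizon thickness (cm)."""
    total = sum(thicknesses)
    return sum(v * t for v, t in zip(values, thicknesses)) / total

def area_weighted(component_values, component_pcts):
    """Area-weighted average over map-unit components, weighted by component percent."""
    total = sum(component_pcts)
    return sum(v * p for v, p in zip(component_values, component_pcts)) / total

# Hypothetical map unit: two components (60% and 40% of the area), each with
# two horizons of available water capacity (cm/cm) over given thicknesses (cm).
comp_a = depth_weighted([0.15, 0.10], [25, 75])   # -> 0.1125
comp_b = depth_weighted([0.20, 0.12], [30, 70])   # -> 0.144
awc_mukey = area_weighted([comp_a, comp_b], [60, 40])
print(round(awc_mukey, 4))                        # -> 0.1251
```

Depth-weighting aggregates horizons within a soil component; area-weighting then aggregates components within a map unit (MUKEY).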
Average Weighted Receiving Time of Weighted Tetrahedron Koch Networks
NASA Astrophysics Data System (ADS)
Dai, Meifeng; Zhang, Danping; Ye, Dandan; Zhang, Cheng; Li, Lei
2015-07-01
We introduce weighted tetrahedron Koch networks with infinite weight factors, which are a generalization of finite ones. The notion of weighted time is first defined in this work. The mean weighted first-passage time (MWFPT) and the average weighted receiving time (AWRT) are defined via weighted time accordingly. We study the AWRT for a weight-dependent walk. Results show that the AWRT for a nontrivial weight factor sequence grows sublinearly with the network order. To investigate the reason for this sublinearity, the average receiving time (ART) is discussed for four cases.
Model Averaging Methods for Weight Trimming
Elliott, Michael R.
2009-01-01
In sample surveys where sampled units have unequal probabilities of inclusion, associations between the inclusion probabilities and the statistic of interest can induce bias. Weights equal to the inverse of the probability of inclusion are often used to counteract this bias. Highly disproportional sample designs have highly variable weights, which can introduce undesirable variability in statistics such as the population mean or linear regression estimates. Weight trimming reduces large weights to a fixed maximum value, reducing variability but introducing bias. Most standard approaches are ad-hoc in that they do not use the data to optimize bias-variance tradeoffs. This manuscript develops variable selection models, termed “weight pooling” models, that extend weight trimming procedures in a Bayesian model averaging framework to produce “data driven” weight trimming estimators. We develop robust yet efficient models that approximate fully-weighted estimators when bias correction is of greatest importance, and approximate unweighted estimators when variance reduction is critical. PMID:19946471
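The fixed-cutoff weight trimming that the manuscript generalizes can be sketched directly; this is not the Bayesian weight-pooling model itself, and the data and cutoff below are hypothetical:

```python
import numpy as np

def trimmed_weighted_mean(y, w, cap):
    """Weighted mean with sampling weights trimmed at `cap`.
    Trimming reduces the variance from extreme weights at the cost of some bias;
    trimmed weight mass is not redistributed in this minimal sketch."""
    wt = np.minimum(w, cap)
    return float(np.sum(wt * y) / np.sum(wt))

rng = np.random.default_rng(0)
w = rng.pareto(2.0, size=1000) + 1.0        # highly variable sampling weights
y = rng.normal(10.0, 1.0, size=1000)        # outcome, independent of the weights here
full = trimmed_weighted_mean(y, w, cap=np.inf)              # fully weighted estimator
trimmed = trimmed_weighted_mean(y, w, cap=np.quantile(w, 0.95))  # weights capped at the 95th percentile
```

Capping at a quantile of the weight distribution is one common ad hoc choice; the paper's contribution is letting the data choose the cutoff via Bayesian model averaging.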
Asymmetric network connectivity using weighted harmonic averages
NASA Astrophysics Data System (ADS)
Morrison, Greg; Mahadevan, L.
2011-02-01
We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph using a simple weighted harmonic average of connectivity, that is a real-valued Generalized Erdös Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and use our measure to study some simple analytically tractable networks. We show how this might be used to look at asymmetry in authorship networks such as those that inspired the integer Erdös numbers in mathematical coauthorships. We also show the utility of our approach to devise a ratings scheme that we apply to the data from the NetFlix prize, and find a significant improvement using our method over a baseline.
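The weighted harmonic average underlying this closeness measure can be illustrated generically (this is a sketch of the averaging operation only, not the authors' full recursive GEN definition; the distances and weights are hypothetical):

```python
def weighted_harmonic_mean(values, weights):
    """Weighted harmonic mean: sum(w) / sum(w / v).
    Small values dominate, so one strong (short) connection
    yields a small effective distance, i.e. high closeness."""
    return sum(weights) / sum(w / v for w, v in zip(weights, values))

# Two "distances" from a node to a target via different neighbors:
# a strong tie (distance 1, weight 3) and a weak tie (distance 10, weight 1).
print(weighted_harmonic_mean([1.0, 10.0], [3.0, 1.0]))  # -> ~1.29, dominated by the strong tie
```

An arithmetic mean of the same values would give 3.25; the harmonic form is what lets the measure distinguish topologies that ordinary path-length metrics treat as identical.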
Scaling of average receiving time and average weighted shortest path on weighted Koch networks
NASA Astrophysics Data System (ADS)
Dai, Meifeng; Chen, Dandan; Dong, Yujuan; Liu, Jie
2012-12-01
In this paper we present weighted Koch networks based on classic Koch networks. A new method is used to determine the average receiving time (ART), whose key step is to write the sum of mean first-passage times (MFPTs) for all nodes to absorption at the trap located at a hub node as a recursive relation. We show that the ART exhibits a sublinear or linear dependence on network order. Thus, the weighted Koch networks are more efficient than classic Koch networks in receiving information. Moreover, average weighted shortest path (AWSP) is calculated. In the infinite network order limit, the AWSP depends on the scaling factor. The weighted Koch network grows unbounded but with the logarithm of the network size, while the weighted shortest paths stay bounded.
NASA Astrophysics Data System (ADS)
Wu, Zikai; Hou, Baoyu; Zhang, Hongjuan; Jin, Feng
2014-04-01
Deterministic network models have been attractive media for discussing how dynamical processes depend on network structural features. On the other hand, the heterogeneity of weights affects dynamical processes taking place on networks. In this paper, we present a family of weighted expanded Koch networks based on Koch networks. They originate from an r-polygon, and at each subsequent evolutionary step every node of the current generation produces m r-polygons that include the node, with weighted edges scaled by a factor w. We derive closed-form expressions for the average weighted shortest path length (AWSP). In large networks, the AWSP stays bounded as the network order grows (0 < w < 1). We then focus on a special random walk and trapping issue on the networks. In more detail, we calculate exactly the average receiving time (ART). The ART exhibits a sub-linear dependence on network order (0 < w < 1), which implies that nontrivial weighted expanded Koch networks are more efficient than un-weighted expanded Koch networks in receiving information. Besides, the efficiency of receiving information at hub nodes also depends on the parameters m and r. These findings may pave the way for controlling information transportation on general weighted networks.
77 FR 74452 - Bus Testing: Calculation of Average Passenger Weight and Test Vehicle Weight
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-14
...2132-AB01 Bus Testing: Calculation of Average Passenger Weight and Test Vehicle Weight AGENCY: Federal Transit Administration (FTA...regulation to increase the assumed average passenger weight value used for ballasting test buses from the...
Describing Average- and Longtime-Behavior by Weighted MSO Logics
NASA Astrophysics Data System (ADS)
Droste, Manfred; Meinecke, Ingmar
Weighted automata model quantitative aspects of systems like memory or power consumption. Recently, Chatterjee, Doyen, and Henzinger introduced a new kind of weighted automata which compute objectives like the average cost or the longtime peak power consumption. In these automata, operations like average, limit superior, limit inferior, limit average, or discounting are used to assign values to finite or infinite words. In general, these weighted automata are not semiring weighted anymore. Here, we establish a connection between such new kinds of weighted automata and weighted logics. We show that suitable weighted MSO logics and these new weighted automata are expressively equivalent, both for finite and infinite words. The constructions employed are effective, leading to decidability results for the weighted logic formulas considered.
An Excel macro for transformed and weighted averaging
Stanley A. Klein
1992-01-01
An Excel macro is presented for averaging spreadsheet data. The macro has several special features: (1) the data are weighted by the inverse variance of each datum to decrease the contribution of noisy outliers; (2) there is a provision for a power or a log transform of the data before averaging. The rationale for transforming the data before averaging is discussed.
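Feature (1), inverse-variance weighting, is the standard precision-weighted estimator; a sketch with made-up measurements:

```python
def inverse_variance_average(values, variances):
    """Average with weights 1/sigma_i^2, so noisy data contribute less."""
    weights = [1.0 / v for v in variances]
    return sum(w * x for w, x in zip(weights, values)) / sum(weights)

# Three measurements; the last is noisy (variance 25) and is strongly down-weighted.
print(inverse_variance_average([10.0, 10.4, 14.0], [1.0, 1.0, 25.0]))  # -> ~10.27
```

An unweighted mean of the same data would be 11.47; the inverse-variance weights pull the estimate toward the two precise measurements.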
Weighted Kullback-Leibler average-based distributed filtering algorithm
NASA Astrophysics Data System (ADS)
Lu, Kelin; Chang, Kuo-Chu; Zhou, Rui
2015-05-01
This paper considers a distributed filtering problem over a multi-sensor network in which the correlation of local estimation errors is unknown. Recently, this problem was studied by G. Battistelli [1], who developed a data fusion rule that calculates the weighted Kullback-Leibler average of local estimates with consensus algorithms for distributed averaging, where the weighted Kullback-Leibler average is defined as the averaged probability density function that minimizes the sum of weighted Kullback-Leibler divergences from the original probability density functions. In this paper, we extend those earlier results by relaxing the prior assumption that all sensors share the same degree of confidence. Furthermore, a novel consensus-based distributed weighting-coefficient selection scheme is developed to improve the fusion accuracy, where the weight associated with each sensor is adjusted based on the local estimation error covariance and the ones received from neighboring sensors, so that larger weight values are assigned to sensors with a higher degree of confidence. Finally, a Monte-Carlo simulation with a 2D tracking system validates the effectiveness of the proposed distributed filtering algorithm.
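For Gaussian local estimates, the weighted Kullback-Leibler average admits a closed form in information space: the fused information matrix and information vector are the weighted sums of the local ones (the covariance-intersection form used in Battistelli's formulation). A sketch under that Gaussian assumption, with hypothetical local estimates:

```python
import numpy as np

def kl_average_gaussian(means, covs, weights):
    """Weighted KL average of Gaussians N(x_i, P_i), in information form:
    Omega = sum_i w_i P_i^-1,  q = sum_i w_i P_i^-1 x_i,  fused = (Omega^-1 q, Omega^-1)."""
    omega = sum(w * np.linalg.inv(P) for w, P in zip(weights, covs))
    q = sum(w * np.linalg.inv(P) @ x for w, x, P in zip(weights, means, covs))
    P_fused = np.linalg.inv(omega)
    return P_fused @ q, P_fused

# Two hypothetical local estimates of a 2D state, the second less confident.
x1, P1 = np.array([1.0, 0.0]), np.eye(2)
x2, P2 = np.array([3.0, 0.0]), 4.0 * np.eye(2)
x, P = kl_average_gaussian([x1, x2], [P1, P2], [0.5, 0.5])
```

With equal weights the fused mean lands at 1.4, closer to the more confident estimate; adjusting the weights per sensor, as the paper proposes, shifts this balance.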
Scaling of average sending time on weighted Koch networks
NASA Astrophysics Data System (ADS)
Dai, Meifeng; Liu, Jie
2012-10-01
Random walks on weighted complex networks, especially scale-free networks, have attracted considerable interest in the past. But the efficiency of a hub sending information on scale-free small-world networks has been addressed less. In this paper, we study random walks on a class of weighted Koch networks with scaling factor 0 < r ≤ 1. We derive some basic properties for random walks on the weighted Koch networks, based on which we calculate analytically the average sending time (AST), defined as the average of mean first-passage times (MFPTs) from a hub node to all other nodes, excluding the hub itself. The obtained result displays that for 0 < r < 1 in large networks the AST grows as a power-law function of the network order with the exponent log_4[(3r+1)/r], and for r = 1 in large networks the AST grows with network order as N ln N, which is larger than the linear scaling of the average receiving time, defined as the average of MFPTs for random walks to a given hub node averaged over all starting points.
Scaling of Average Weighted Receiving Time on Double-Weighted Koch Networks
NASA Astrophysics Data System (ADS)
Dai, Meifeng; Ye, Dandan; Hou, Jie; Li, Xingyi
2015-03-01
In this paper, we introduce a model of double-weighted Koch networks, based on actual road networks, that depends on two weight factors w, r ∈ (0, 1]. The double weights represent the capacity-flowing weight and the cost-traveling weight, respectively. Denote by wFij the capacity-flowing weight connecting nodes i and j, and by wCij the cost-traveling weight connecting nodes i and j. Let wFij be related to the weight factor w, and let wCij be related to the weight factor r. This paper assumes that the walker, at each step, starting from its current node, moves to any of its neighbors with probability proportional to the capacity-flowing weight of the edge linking them. The weighted time for two adjacent nodes is the cost-traveling weight connecting them. We define the average weighted receiving time (AWRT) on the double-weighted Koch networks. The obtained result displays that in large networks, the AWRT grows as a power-law function of the network order with the exponent θ(w,r) = (1/2)log_2(1 + 3wr). We show that the AWRT exhibits a sublinear or linear dependence on network order. Thus, the double-weighted Koch networks are more efficient than classic Koch networks in receiving information.
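The walk rule described above, a step probability proportional to the capacity-flowing weight wF with a step cost equal to the cost-traveling weight wC, can be sketched on a hypothetical neighborhood:

```python
import random

def weighted_step(neighbors, rng):
    """One step of the walk: move to a neighbor with probability proportional to
    its capacity-flowing weight wF; the step's weighted time is the cost-traveling
    weight wC of the traversed edge. `neighbors` maps node -> (wF, wC)."""
    nodes = list(neighbors)
    flows = [neighbors[n][0] for n in nodes]
    nxt = rng.choices(nodes, weights=flows)[0]
    return nxt, neighbors[nxt][1]

rng = random.Random(42)
# Hypothetical neighborhood of one node: a high-capacity cheap edge to 'b'
# and a low-capacity expensive edge to 'c'.
nbrs = {'b': (0.8, 2.0), 'c': (0.2, 5.0)}
node, cost = weighted_step(nbrs, rng)
```

Accumulating the per-step costs along a walk to a trap node gives the weighted first-passage time whose average over sources defines the AWRT.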
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-14
...0625-AA87 Antidumping Proceedings: Calculation of the Weighted-Average Dumping Margin...modifying its methodology regarding the calculation of the weighted-average dumping margins...investigations. Antidumping Proceedings: Calculation of the Weighted Average Dumping...
26 CFR 1.989(b)-1 - Definition of weighted average exchange rate.
Code of Federal Regulations, 2010 CFR
2010-04-01
...Definition of weighted average exchange rate. 1.989(b)-1...Definition of weighted average exchange rate. For purposes of section 989(b)(3) and (4), the term “weighted average exchange rate” means the simple...
Global Rotation Estimation Using Weighted Iterative Lie Algebraic Averaging
NASA Astrophysics Data System (ADS)
Reich, M.; Heipke, C.
2015-08-01
In this paper we present an approach to weighted rotation averaging for estimating absolute rotations from relative rotations between pairs of images in a set of multiple overlapping images. The solution does not depend on initial values for the unknown parameters and is robust against outliers. Our approach is one part of a solution for global image orientation. Because relative rotations are often not free from outliers, we use the redundancy in the available pairwise relative rotations and present a novel graph-based algorithm to detect and eliminate inconsistent rotations. The remaining relative rotations are input to a weighted least squares adjustment performed in the Lie algebra of the rotation manifold SO(3) to obtain absolute orientation parameters for each image. Weights are determined using prior information derived from the estimation of the relative rotations. Because we use the Lie algebra of SO(3) for averaging, no subsequent adaptation of the results has to be performed apart from the lossless projection back to the manifold. We evaluate our approach on synthetic and real data. Our approach is often able to detect and eliminate all outliers from the relative rotations, even when very high outlier rates are present. We show that we improve the quality of the estimated absolute rotations by introducing individual weights for the relative rotations based on various indicators. In comparison with the state of the art in recent publications on global image orientation, we achieve the best results on the examined datasets.
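Averaging in the Lie algebra of SO(3) can be illustrated with a simple weighted Karcher (geodesic) mean of rotation matrices. This is a sketch of the log/exp machinery only, not the authors' full graph-based pipeline; the toy rotations are hypothetical:

```python
import numpy as np

def hat(v):
    """Skew-symmetric matrix of a 3-vector (so(3) element)."""
    return np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])

def exp_so3(v):
    """Rodrigues' formula: axis-angle vector -> rotation matrix."""
    th = np.linalg.norm(v)
    if th < 1e-12:
        return np.eye(3)
    K = hat(v / th)
    return np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * (K @ K)

def log_so3(R):
    """Rotation matrix -> axis-angle vector (inverse of exp_so3)."""
    th = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    if th < 1e-12:
        return np.zeros(3)
    return th / (2 * np.sin(th)) * np.array(
        [R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])

def weighted_rotation_mean(rotations, weights, iters=20):
    """Weighted Karcher mean on SO(3): iterate mu <- mu * exp(avg_i w_i log(mu^T R_i)),
    i.e. average in the tangent space (Lie algebra) and project back losslessly."""
    mu = rotations[0]
    for _ in range(iters):
        delta = sum(w * log_so3(mu.T @ R)
                    for w, R in zip(weights, rotations)) / sum(weights)
        mu = mu @ exp_so3(delta)
    return mu

Rz = lambda a: exp_so3(np.array([0.0, 0.0, a]))  # rotation about z by angle a
mean = weighted_rotation_mean([Rz(0.1), Rz(0.3)], [1.0, 1.0])
# For equal weights and a common axis, the mean rotates about z by ~0.2 rad.
```

Down-weighting one input (e.g. weights [1.0, 0.1]) pulls the mean toward the other, which is the mechanism behind the paper's indicator-based weighting.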
47 CFR 65.305 - Calculation of the weighted average cost of capital.
Code of Federal Regulations, 2010 CFR
2010-10-01
...Calculation of the weighted average cost of capital. (a) The composite weighted average cost of capital is the sum of the cost of debt, the...each weighted by its proportion in the capital structure of the telephone companies....
47 CFR 65.305 - Calculation of the weighted average cost of capital.
Code of Federal Regulations, 2011 CFR
2011-10-01
...cost of capital. (a) The composite weighted average cost of...proportion in the capital structure of the telephone companies...prescription proceeding, the composite weighted average cost of debt...of preferred stock is the composite weight computed in...
Time-weighted averaging for nitrous oxide: an automated method.
McGill, W A; Rivera, O; Howard, R
1980-11-01
An automated method of obtaining a time-weighted average of nitrous oxide levels in an operating room was compared with a standard method. The automated method consisted of electronic integration of the voltage output of a nitrous oxide analyzer using a multimeter-microprocessor. The standard method utilized a bag and pump to collect a room air sample, which was subsequently analyzed with a nitrous oxide analyzer. There was a high degree of correlation (r = 0.99) between the two methods. It is concluded that the automated method is an accurate alternative and offers institutions a simple, cost-effective method of monitoring and documenting results of pollution control programs in anesthetizing locations. PMID:7425378
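The electronic integration performed by the multimeter-microprocessor amounts to a time-weighted average of the analyzer signal; a sketch using trapezoidal integration over hypothetical N2O readings:

```python
def time_weighted_average(times, values):
    """Time-weighted average: integrate the signal (trapezoidal rule)
    and divide by the total elapsed time."""
    area = sum((t1 - t0) * (v0 + v1) / 2.0
               for t0, t1, v0, v1 in zip(times, times[1:], values, values[1:]))
    return area / (times[-1] - times[0])

# Hypothetical N2O readings (ppm) sampled at minutes 0, 30, 60, and 120.
print(time_weighted_average([0, 30, 60, 120], [25.0, 75.0, 50.0, 30.0]))  # -> 48.125
```

Note the unequal sampling intervals: a simple mean of the four readings (45.0 ppm) would over-weight the short early intervals relative to the long final one.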
Fiber-optic large area average temperature sensor
Looney, L.L.; Forman, P.R.
1994-05-01
In many instances the desired temperature measurement is only the spatial average temperature over a large area, e.g., ground-truth calibration for a satellite imaging system, or the average temperature of a farm field. By making an accurate measurement of the optical length of a long fiber-optic cable, we can determine the absolute temperature averaged over its length and hence the temperature of the material in contact with it.
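The principle can be sketched as inverting a linear length-temperature relation. The sensitivity coefficient alpha (lumping thermal expansion and the thermo-optic effect into one number) and all values below are hypothetical:

```python
def average_temperature(L0, L_measured, alpha, T0):
    """Recover the length-averaged temperature from the measured optical path
    length of a fiber, assuming L = L0 * (1 + alpha * (T_avg - T0)).
    alpha is the fiber's combined optical-length sensitivity (per kelvin)."""
    return T0 + (L_measured / L0 - 1.0) / alpha

# Hypothetical: 1 km fiber, alpha = 1e-5 /K, reference 20 C, measured +0.1 m.
print(average_temperature(1000.0, 1000.1, 1e-5, 20.0))  # -> ~30.0 C
```

Because the optical length integrates along the whole cable, local hot and cold spots average out and only the mean temperature is recovered.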
Latent-variable approaches to the Jamesian model of importance-weighted averages.
Scalas, L Francesca; Marsh, Herbert W; Nagengast, Benjamin; Morin, Alexandre J S
2013-01-01
The individually importance-weighted average (IIWA) model posits that the contribution of specific areas of self-concept to global self-esteem varies systematically with the individual importance placed on each specific component. Although intuitively appealing, this model has weak empirical support; thus, within the framework of a substantive-methodological synergy, we propose a multiple-item latent approach to the IIWA model as applied to a range of self-concept domains (physical, academic, spiritual self-concepts) and subdomains (appearance, math, verbal self-concepts) in young adolescents from two countries. Tests considering simultaneously the effects of self-concept domains on trait self-esteem did not support the IIWA model. On the contrary, support for a normative group importance model was found, in which importance varied as a function of domains but not individuals. Individuals differentially weight the various components of self-concept; however, the weights are largely determined by normative processes, so that little additional information is gained from individual weightings. PMID:23150198
Cohen's Linearly Weighted Kappa Is a Weighted Average of 2 x 2 Kappas
ERIC Educational Resources Information Center
Warrens, Matthijs J.
2011-01-01
An agreement table with n ≥ 3 ordered categories can be collapsed into n - 1 distinct 2 x 2 tables by combining adjacent categories. Vanbelle and Albert ("Stat. Methodol." 6:157-163, 2009c) showed that the components of Cohen's weighted kappa with linear weights can be obtained from these n - 1…
SIMPLE AND WEIGHTED AVERAGING APPROACHES TO SCALING: WHEN CAN SPATIAL CONTEXT BE IGNORED?
Technology Transfer Automated Retrieval System (TEKTRAN)
Scaling from plots to landscapes, landscapes to regions, and regions to the globe based on simple or weighted averaging techniques can be accurate when applied to the appropriate problems. Simple averaging approaches work well when conditions are homogeneous spatially and temporally. For example, ...
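The contrast between simple and weighted averaging in scaling can be shown with two hypothetical land-cover patches of unequal area:

```python
# Two hypothetical patches with different areas and per-unit-area flux values.
areas = [10.0, 90.0]   # km^2
flux = [5.0, 1.0]      # per-unit-area quantity (e.g., emission rate)

simple = sum(flux) / len(flux)                                   # -> 3.0
weighted = sum(a * f for a, f in zip(areas, flux)) / sum(areas)  # -> 1.4
```

When conditions are homogeneous the two agree; under the spatial heterogeneity shown here, the simple average over-weights the small high-flux patch by more than a factor of two.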
NASA Astrophysics Data System (ADS)
Benkler, Erik; Lisdat, Christian; Sterr, Uwe
2015-08-01
The power spectral density in the Fourier frequency domain, and the different variants of the Allan deviation (ADEV) in dependence on the averaging time are well established tools to analyse the fluctuation properties and frequency instability of an oscillatory signal. It is often supposed that the statistical uncertainty of a measured average frequency is given by the ADEV at a well-considered averaging time. However, this approach requires further mathematical justification and refinement, which has already been done regarding the original ADEV for certain noise types. Here we provide the necessary background to use the modified Allan deviation (modADEV) and other two-sample deviations to determine the uncertainty of weighted frequency averages. The type of two-sample deviation used to determine the uncertainty depends on the method used for determination of the average. We find that the modADEV, which is connected with Λ-weighted averaging, and the two-sample deviation associated with a linear phase regression weighting (parADEV) are, in particular, advantageous for measurements in which white phase noise is dominating. Furthermore, we derive a procedure for how to minimise the uncertainty of a measurement for a typical combination of white phase and frequency noise by adaptive averaging of the data set with different weighting functions. Finally, some aspects of the theoretical considerations for real-world frequency measurement equipment are discussed.
Raoult's law-based method for determination of coal tar average molecular weight
Brown, D.G.; Gupta, L.; Horace, H.K.; Coleman, A.J. [Lehigh University, Bethlehem, PA (US). Dept. of Civil & Environmental Engineers
2005-08-01
A Raoult's law-based method for determining the number-average molecular weight of coal tars is presented. The method requires data from two-phase coal tar/water equilibrium experiments, which are readily performed in environmental laboratories. An advantage of this method for environmental samples is that it is not impacted by the small amount of inert debris often present in coal tar samples obtained from contaminated sites. Results are presented for 10 coal tars from nine former manufactured gas plants located in the eastern United States. Vapor pressure osmometry (VPO) analysis provided average molecular weights similar to those determined with the Raoult's law-based method, except for one highly viscous coal tar sample. Use of the VPO-based average molecular weight for this coal tar resulted in underprediction of the coal tar constituents' aqueous concentrations. Additionally, one other coal tar was not completely soluble in the solvents used for VPO analysis. The results indicate that the Raoult's law-based method is able to provide an average molecular weight that is consistent with the intended application of the data (e.g., modeling the dissolution of coal tar constituents into surrounding waters), and this method can be applied to coal tars that may be incompatible with other commonly used methods for determining average molecular weight, such as vapor pressure osmometry.
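Under Raoult's law, a constituent's tar-phase mole fraction is approximately its measured aqueous concentration divided by its pure (subcooled) liquid solubility; combined with the constituent's known mass fraction in the tar, each compound then yields an estimate of the tar's number-average molecular weight. A sketch of that arithmetic with hypothetical constituent data (the function name and all values are illustrative, not the paper's):

```python
def tar_avg_mw(aq_conc, liq_solubility, mass_frac, mw):
    """Raoult's law: tar-phase mole fraction x_i ~= C_i / S_i (measured aqueous
    concentration over pure-liquid solubility, same units). Since also
    x_i = (f_i / MW_i) * MW_avg for mass fraction f_i and compound weight MW_i,
    each compound gives MW_avg = x_i * MW_i / f_i; average the estimates."""
    estimates = []
    for C, S, f, M in zip(aq_conc, liq_solubility, mass_frac, mw):
        x = C / S
        estimates.append(x * M / f)
    return sum(estimates) / len(estimates)

# Two hypothetical constituents (naphthalene-like and phenanthrene-like):
# aqueous conc (mg/L), liquid solubility (mg/L), mass fraction in tar, MW (g/mol).
print(round(tar_avg_mw([3.63, 0.0404], [31.0, 1.2], [0.05, 0.02], [128.2, 178.2]), 1))
```

Agreement among the per-compound estimates (here both near 300 g/mol) is what indicates the Raoult's law assumption is holding for the sample.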
Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.
2013-01-01
When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models.
Using Cek obtained from the iterative two-stage method also improved predictive performance of the individual models and model averaging in both synthetic and experimental studies.
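The information-criterion weighting that produces the near-100% weights discussed above is the standard exponential formula; the paper's remedy changes the likelihood (via Cek), not this formula. A sketch with hypothetical criterion values:

```python
import math

def ic_weights(ic_values):
    """Model-averaging weights from information-criterion values:
    w_k = exp(-delta_k / 2) / sum_j exp(-delta_j / 2), delta_k = IC_k - min(IC).
    Because the gap enters an exponential, a delta of ~20+ drives a weight to ~0."""
    m = min(ic_values)
    raw = [math.exp(-(ic - m) / 2.0) for ic in ic_values]
    s = sum(raw)
    return [r / s for r in raw]

# Modest gaps give a balanced average; inflate the gaps (as a measurement-error-only
# covariance tends to) and the best model's weight approaches 1.
print([round(w, 3) for w in ic_weights([100.0, 102.0, 110.0])])  # -> [0.727, 0.268, 0.005]
```

With gaps of, say, [0, 50, 80] the first model would receive essentially 100% of the weight, which is exactly the pathology the iterative two-stage method corrects.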
Body weight, diet and home range area in primates
Katharine Milton; Michael L. May
1976-01-01
Primates show a strong positive relationship between body weight and home range area. Dietary habits also influence home range area. Folivorous primates occupy smaller home range areas for their body weight than do frugivores and omnivores. Primates generally require smaller home range area per individual than solitary terrestrial mammals, but primates living in social groups have much larger total home
Modeling daily average stream temperature from air temperature and watershed area
NASA Astrophysics Data System (ADS)
Butler, N. L.; Hunt, J. R.
2012-12-01
Habitat restoration efforts within watersheds require spatial and temporal estimates of water temperature for aquatic species especially species that migrate within watersheds at different life stages. Monitoring programs are not able to fully sample all aquatic environments within watersheds under the extreme conditions that determine long-term habitat viability. Under these circumstances a combination of selective monitoring and modeling are required for predicting future geospatial and temporal conditions. This study describes a model that is broadly applicable to different watersheds while using readily available regional air temperature data. Daily water temperature data from thirty-eight gauges with drainage areas from 2 km2 to 2000 km2 in the Sonoma Valley, Napa Valley, and Russian River Valley in California were used to develop, calibrate, and test a stream temperature model. Air temperature data from seven NOAA gauges provided the daily maximum and minimum air temperatures. The model was developed and calibrated using five years of data from the Sonoma Valley at ten water temperature gauges and a NOAA air temperature gauge. The daily average stream temperatures within this watershed were bounded by the preceding maximum and minimum air temperatures with smaller upstream watersheds being more dependent on the minimum air temperature than maximum air temperature. The model assumed a linear dependence on maximum and minimum air temperature with a weighting factor dependent on upstream area determined by error minimization using observed data. Fitted minimum air temperature weighting factors were consistent over all five years of data for each gauge, and they ranged from 0.75 for upstream drainage areas less than 2 km2 to 0.45 for upstream drainage areas greater than 100 km2. 
For the calibration data sets within the Sonoma Valley, the average error between the model-estimated daily water temperature and the observed water temperature data ranged from 0.7 °C to 1.5 °C for the different gauges. To test the model, the average water temperature was estimated at the six locations within the Sonoma Valley not used in the calibration. For each water temperature record, the prior area-dependent weighting factor was used. Regional maximum and minimum air temperature data were then used to estimate the average stream water temperature over the period of recorded water temperature. The average error between model-estimated and observed water temperature for the additional locations in the Sonoma Valley ranged from 0.7 °C to 3.5 °C. The model-estimated water temperature for gauges with upstream drainage area less than 50 km2 had average error between estimated and observed water temperature less than 1.7 °C. When upstream drainage area was greater than 50 km2, the average error increased up to 3.5 °C for some gauge locations. The model could also estimate water temperature in streams in other basins using the same area-dependent weighting factor. For eighteen gauges in the Napa Valley to the east, the average error between estimated and observed water temperature ranged from 0.7 °C to 1.9 °C, while for four gauges in the Russian River Valley to the northwest, the average error ranged from 1.2 °C to 3.2 °C. We speculate the area-dependent weighting factor reflects the temperature of groundwater contributions to stream flow.
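The functional form described in this abstract can be sketched directly. The endpoint weighting factors (0.75 below 2 km2, 0.45 above 100 km2) come from the text; the log-linear interpolation between them is purely an illustrative assumption, not the fitted relationship.

```python
import math

def min_temp_weight(area_km2):
    """Minimum-air-temperature weighting factor as a function of upstream
    drainage area (km^2). Endpoints follow the fitted values reported in
    the abstract (0.75 below 2 km^2, 0.45 above 100 km^2); the log-linear
    interpolation in between is an illustrative assumption."""
    if area_km2 <= 2.0:
        return 0.75
    if area_km2 >= 100.0:
        return 0.45
    frac = (math.log10(area_km2) - math.log10(2.0)) / (math.log10(100.0) - math.log10(2.0))
    return 0.75 - frac * (0.75 - 0.45)

def daily_avg_stream_temp(t_min_air, t_max_air, area_km2):
    """Daily average stream temperature as an area-weighted linear
    combination of the preceding minimum and maximum air temperatures."""
    w = min_temp_weight(area_km2)
    return w * t_min_air + (1.0 - w) * t_max_air
```

A small headwater gauge (1 km2) would thus weight the minimum air temperature at 0.75, while a large downstream gauge (500 km2) would weight it at 0.45.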
Predicting annual average particulate concentration in urban areas.
Progiou, Athena G; Ziomas, Ioannis C
2015-11-01
Particulate matter concentrations are a major environmental problem in most cities. This is also the case in Greece where, despite the various measures taken in the past, the problem still persists. In this respect, a cost-efficient, comprehensive method was developed to help decision makers take the most appropriate measures toward particulate pollution abatement. The method is based on source apportionment estimated from the application of 3D meteorological and dispersion modeling and is validated using 10 years (2002-2012) of PM10 monitoring data in Athens, Greece, as well as PM10 emission data for the same area and time period. It appears that the methodology can be used to estimate yearly average PM10 concentrations in a quite realistic manner, thus giving decision makers the possibility to evaluate ex ante the effectiveness of specific abatement measures. PMID:26081738
A New Minimal Average Weight Representation for Left-to-Right Point Multiplication Methods
International Association for Cryptologic Research (IACR)
This representation can be obtained by scanning the bits from left to right, a property that is also useful for memory-constrained devices.
Real-Time Impulse Noise Suppression from Images Using an Efficient Weighted-Average Filtering
NASA Astrophysics Data System (ADS)
Hosseini, Hossein; Hessar, Farzad; Marvasti, Farokh
2015-08-01
In this paper, we propose a method for real-time high-density impulse noise suppression from images. In our method, we first apply an impulse detector to identify the corrupted pixels and then employ an innovative weighted-average filter to restore them. The filter takes the nearest-neighbor interpolated image as the initial image and computes the weights according to the relative positions of the corrupted and uncorrupted pixels. Experimental results show that the proposed method outperforms the best existing methods in both PSNR measure and visual quality and is quite suitable for real-time applications.
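The abstract's filter computes weights from the relative positions of corrupted and uncorrupted pixels. As a minimal sketch of that idea, using an inverse-distance weighting rather than the authors' exact weight formula, a flagged pixel can be restored from its uncorrupted neighbors:

```python
def restore_pixel(image, mask, i, j, radius=2):
    """Restore corrupted pixel (i, j) by an inverse-distance weighted
    average of uncorrupted pixels in a (2*radius+1)^2 window.
    `mask[r][c]` is True where the impulse detector flagged a pixel.
    The inverse-distance weighting is an illustrative choice, not the
    paper's exact position-dependent weight formula."""
    num, den = 0.0, 0.0
    for r in range(max(0, i - radius), min(len(image), i + radius + 1)):
        for c in range(max(0, j - radius), min(len(image[0]), j + radius + 1)):
            if mask[r][c] or (r == i and c == j):
                continue  # skip corrupted pixels and the center itself
            d = ((r - i) ** 2 + (c - j) ** 2) ** 0.5
            num += image[r][c] / d
            den += 1.0 / d
    return num / den if den > 0 else image[i][j]
```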
Raoult's law-based method for determination of coal tar average molecular weight
Derick G. Brown; Lovleen Gupta; H. K. Horace; Andrew J. Coleman
2005-01-01
A Raoult's law-based method for determining the number average molecular weight of coal tars is presented. The method requires data from two-phase coal tar/water equilibrium experiments, which are readily performed in environmental laboratories. An advantage of this method for environmental samples is that it is not impacted by the small amount of inert debris often present in coal tar samples.
Girshick, Ahna R.; Banks, Martin S.
2010-01-01
Depth perception involves combining multiple, possibly conflicting, sensory measurements to estimate the 3D structure of the viewed scene. Previous work has shown that the perceptual system combines measurements using a statistically optimal weighted average. However, the system should only combine measurements when they come from the same source. We asked whether the brain avoids combining measurements when they differ from one another: that is, whether the system is robust to outliers. To do this, we investigated how two slant cues—binocular disparity and texture gradients—influence perceived slant as a function of the size of the conflict between the cues. When the conflict was small, we observed weighted averaging. When the conflict was large, we observed robust behavior: perceived slant was dictated solely by one cue, the other being rejected. Interestingly, the rejected cue was either disparity or texture, and was not necessarily the more variable cue. We modeled the data in a probabilistic framework, and showed that weighted averaging and robustness are predicted if the underlying likelihoods have heavier tails than Gaussians. We also asked whether observers had conscious access to the single-cue estimates when they exhibited robustness and found they did not, i.e. they completely fused despite the robust percepts. PMID:19761341
Exponentially Weighted Moving Average Change Detection Around the Country (and the World)
NASA Astrophysics Data System (ADS)
Brooks, E.; Wynne, R. H.; Thomas, V. A.; Blinn, C. E.; Coulston, J.
2014-12-01
With freely available moderate-resolution imagery of the Earth's surface, and with the promise of more imagery to come, change detection based on continuous process models continues to be a major area of research. One such method, exponentially weighted moving average change detection (EWMACD), is based on a mixture of harmonic regression (HR) and statistical quality control, a branch of statistics commonly used to detect aberrations in industrial and medical processes. By using HR to approximate per-pixel seasonal curves, the resulting residuals characterize information about the pixels which stands outside of the periodic structure imposed by HR. For stable pixels, these residuals behave as expected, but in the presence of changes (growth, stress, removal), the residuals clearly show these changes when they are used as inputs into an EWMA chart. In prior work in Alabama, USA, EWMACD yielded an overall accuracy of 85% on a random sample of known thinned stands, in some cases detecting thinnings as sparse as 25% removal. It was also shown to correctly identify the timing of the thinning activity, typically within a single image date of the change. The net result of the algorithm was to produce date-by-date maps of afforestation and deforestation on a variable scale of severity. In other research, EWMACD has also been applied to detect land use and land cover changes in central Java, Indonesia, despite the heavy incidence of clouds and a monsoonal climate. Preliminary results show that EWMACD accurately identifies land use conversion (agricultural to residential, for example) and also identifies neighborhoods where the building density has increased, removing neighborhood vegetation. In both cases, initial results indicate the potential utility of EWMACD to detect both gross and subtle ecosystem disturbance, but further testing across a range of ecosystems and disturbances is clearly warranted.
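The EWMA control chart at the core of EWMACD can be sketched as follows. The smoothing parameter `lam`, limit multiplier `L` and residual scale `sigma` are illustrative placeholders, not the published EWMACD settings, and the harmonic-regression step that produces the residuals is assumed done upstream.

```python
def ewma_flags(residuals, lam=0.3, L=3.0, sigma=1.0):
    """Exponentially weighted moving average control chart over
    harmonic-regression residuals. Flags dates where the EWMA statistic
    leaves +/- L standard deviations, using the steady-state control
    limit for an EWMA of i.i.d. residuals with scale `sigma`."""
    z, flags = 0.0, []
    for r in residuals:
        z = lam * r + (1.0 - lam) * z  # EWMA update
        # steady-state control limit for the EWMA statistic
        limit = L * sigma * (lam / (2.0 - lam)) ** 0.5
        flags.append(abs(z) > limit)
    return flags
```

A persistent shift in the residuals (e.g., canopy removal) drives the EWMA statistic past the limit within a few observations, which is how the method dates a disturbance.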
Equating of Subscores and Weighted Averages under the NEAT Design. Research Report. ETS RR-11-01
ERIC Educational Resources Information Center
Sinharay, Sandip; Haberman, Shelby
2011-01-01
Recently, the literature has seen increasing interest in subscores for their potential diagnostic values; for example, one study suggested the report of weighted averages of a subscore and the total score, whereas others showed, for various operational and simulated data sets, that weighted averages, as compared to subscores, lead to more accurate…
A new state reconstructor for digital controls systems using weighted-average measurements
NASA Technical Reports Server (NTRS)
Polites, Michael E.
1989-01-01
A state reconstructor is presented for a linear continuous-time plant driven by a zero-order-hold. It takes a continuous-time output vector from the plant and convolutes it with a weighting-function matrix whose elements are time dependent. This result is integrated over T second intervals to generate weighted-averaged measurements, every T seconds, that are used in the state reconstruction process. If the plant is noise-free and can be modeled precisely, the output of this state reconstructor exactly equals the true state of the plant and accomplishes this without knowledge of the plant's initial state. If noise or modeling errors are a problem, it can be catenated with a state observer or a Kalman filter for a synergistic effect.
Calculation of weighted averages approach for the estimation of ping tolerance values
Silalom, S.; Carter, J.L.; Chantaramongkol, P.
2010-01-01
A biotic index was created and proposed as a tool to assess water quality in the Upper Mae Ping sub-watersheds. The Ping biotic index was calculated by utilizing Ping tolerance values. This paper presents the calculation of Ping tolerance values of the collected macroinvertebrates. Ping tolerance values were estimated by a weighted averages approach based on the abundance of macroinvertebrates and six chemical constituents that include conductivity, dissolved oxygen, biochemical oxygen demand, ammonia nitrogen, nitrate nitrogen and orthophosphate. Ping tolerance values range from 0 to 10. Macroinvertebrates assigned a 0 are very sensitive to organic pollution while macroinvertebrates assigned 10 are highly tolerant to pollution.
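The weighted-averages approach named above is, at its core, an abundance-weighted mean of a gradient value across the samples where each taxon occurs. A minimal sketch follows; the linear rescaling to the 0-10 scale is an illustrative choice, not the published Ping calibration.

```python
def weighted_average_optimum(abundances, gradient_values):
    """Abundance-weighted average of an environmental gradient
    (e.g., BOD or conductivity) across the samples in which a taxon
    occurs, giving the taxon's estimated optimum."""
    total = sum(abundances)
    return sum(a * g for a, g in zip(abundances, gradient_values)) / total

def rescale_to_tolerance(optima, low=0.0, high=10.0):
    """Linearly rescale taxon optima to a 0-10 tolerance-value scale,
    with 0 most sensitive and 10 most tolerant (an illustrative
    rescaling, not the published Ping calibration)."""
    lo, hi = min(optima), max(optima)
    return [low + (o - lo) * (high - low) / (hi - lo) for o in optima]
```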
Pedersen, T.A.; LaVelle, J.M.
1997-12-31
Estimating average soil contaminant concentrations is a routine site investigation, risk assessment and remediation verification activity. Some techniques, however, fail to adequately consider the spatial distribution of contaminants and may result in erroneous estimates. Kriging may provide the best linear unbiased estimator of soil contaminant levels for sites where concentrations are spatially correlated. When variogram models show limited relationships, or when sampling density is limited, the application of geostatistical techniques may not be appropriate. In these situations the geographic distribution of contaminants is often discounted when developing estimates of soil contaminant concentrations. In this poster presentation we describe the use of a Voronoi tessellation weighted averaging approach for estimating soil lead and polyaromatic hydrocarbon (PAH) concentrations that in turn were used in performing risk assessment computations. The exposure point concentrations estimated using this approach provided a more realistic assessment of the risks actually posed by soil contaminants at this site.
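The core computation in a Voronoi tessellation weighted average is simple once each sample's polygon area is known: every measured concentration is weighted by the area of its Voronoi cell, so densely sampled hot spots do not dominate the site-wide estimate. A minimal sketch, assuming the polygon areas have already been computed by a geometry package:

```python
def area_weighted_mean(concentrations, polygon_areas):
    """Voronoi-tessellation weighted average: each sample concentration
    is weighted by the area of its Voronoi polygon. Computing the
    polygon areas themselves requires a geometry library (e.g., a
    Voronoi routine) and is assumed done upstream."""
    total_area = sum(polygon_areas)
    return sum(c * a for c, a in zip(concentrations, polygon_areas)) / total_area
```

For example, a 100 mg/kg sample representing one-tenth of the site pulls the estimate far less than a simple arithmetic mean of the sample values would.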
Fuzzy weighted average based on left and right scores in Malaysia tourism industry
NASA Astrophysics Data System (ADS)
Kamis, Nor Hanimah; Abdullah, Kamilah; Zulkifli, Muhammad Hazim; Sahlan, Shahrazali; Mohd Yunus, Syaizzal
2013-04-01
Tourism is known as an important sector of the Malaysian economy, generating income and creating businesses and jobs. It is reported to bring in almost RM30 billion of national income, thanks to intense worldwide promotion by Tourism Malaysia. Among the well-known attractions in Malaysia are its beautiful islands, which continue to be developed into tourist spots and attract a steady flow of tourists. Chalets, luxury bungalows and resorts quickly develop along the coastlines of popular islands like Tioman, Redang, Pangkor, Perhentian, Sibu and many others. In this study, we applied the Fuzzy Weighted Average (FWA) method based on left and right scores to determine the criteria weights and to select the best island in Malaysia. Cost, safety, attractive activities, accommodation and scenery are the five main criteria considered, and five selected islands in Malaysia are taken into account as alternatives. The most important criteria considered by tourists are identified from the ranking of criteria weights, and the best island in Malaysia is then determined in terms of FWA values. This pilot study can be used as a reference for evaluating performance or solving other selection problems, where more criteria, alternatives and decision makers will be considered in the future.
Wu, Zhihai; Fang, Huajing; She, Yingying
2012-10-01
In this paper, the weighted average prediction (WAP) is introduced into the existing consensus protocol for simultaneously improving the robustness to communication delay and the convergence speed of achieving consensus. Frequency-domain analysis and algebraic graph theory are employed to derive the necessary and sufficient condition guaranteeing that second-order delayed multi-agent systems applying the WAP-based consensus protocol achieve the stationary consensus. It is proved that introducing the WAP with the proper length into the existing consensus protocol can improve the robustness against communication delay. Also, we prove that for two kinds of second-order delayed multi-agent systems, 1) those with communication delay approaching zero and 2) those with communication delay approaching the maximum delay, introducing the WAP with the proper length into the existing consensus protocol can accelerate the convergence speed of achieving the stationary consensus. PMID:22453642
NASA Astrophysics Data System (ADS)
Nadi, S.; Delavar, M. R.
2011-06-01
This paper presents a generic model for using different decision strategies in multi-criteria, personalized route planning. Some researchers have considered user preferences in navigation systems. However, these prior studies typically employed a high tradeoff decision strategy, which used a weighted linear aggregation rule, and neglected other decision strategies. The proposed model integrates a pairwise comparison method and quantifier-guided ordered weighted averaging (OWA) aggregation operators to form a personalized route planning method that incorporates different decision strategies. The model can be used to calculate the impedance of each link regarding user preferences in terms of the route criteria, criteria importance and the selected decision strategy. Depending on the decision strategy, the calculated impedance lies between aggregations that use a logical "and" (which requires all the criteria to be satisfied) and a logical "or" (which requires at least one criterion to be satisfied), and includes taking the average of the criteria scores. The model results in multiple alternative routes, which apply different decision strategies and provide users with the flexibility to select one of them en route based on the real-world situation. The model also defines the robust personalized route under different decision strategies. The influence of different decision strategies on the results is investigated in an illustrative example. The model is implemented in a web-based geographical information system (GIS) for Isfahan, Iran and verified in a tourist routing scenario. The results demonstrated, in real-world situations, the validity of the route planning carried out in the model.
Conductivity image enhancement in MREIT using adaptively weighted spatial averaging filter
2014-01-01
Background: In magnetic resonance electrical impedance tomography (MREIT), we reconstruct conductivity images using magnetic flux density data induced by externally injected currents. Since we extract magnetic flux density data from acquired MR phase images, the amount of measurement noise increases in regions of weak MR signals. Especially in local regions of MR signal void, excessive noise may deteriorate the quality of reconstructed conductivity images. In this paper, we propose a new conductivity image enhancement method as a postprocessing technique to improve the image quality. Methods: Within a magnetic flux density image, the amount of noise varies depending on the position-dependent MR signal intensity. Using the MR magnitude image, which is always available in MREIT, we estimate noise levels of measured magnetic flux density data in local regions. Based on the noise estimates, we adjust the window size and weights of a spatial averaging filter, which is applied to reconstructed conductivity images. Without relying on a partial differential equation, the new method is fast and can be easily implemented. Results: Applying the new conductivity image enhancement method to experimental data, we could improve the image quality to better distinguish local regions with different conductivity contrasts. In phantom experiments, the estimated conductivity values had 80% less variation inside regions of homogeneous objects. Reconstructed conductivity images from upper and lower abdominal regions of animals showed much fewer artifacts in local regions of weak MR signals. Conclusion: We developed a fast and simple method to enhance conductivity image quality by adaptively adjusting the weights and window size of the spatial averaging filter using MR magnitude images.
Since the new method is implemented as a postprocessing step, we suggest adopting it with or without other preprocessing methods for application studies where conductivity contrast is of primary concern. PMID:24970640
Measurement of area density of vertically aligned carbon nanotube forests by the weight-gain method
NASA Astrophysics Data System (ADS)
Esconjauregui, Santiago; Xie, Rongsie; Fouquet, Martin; Cartwright, Richard; Hardeman, David; Yang, Junwei; Robertson, John
2013-04-01
The area density of vertically aligned carbon nanotube forests is measured and analysed by the weight-gain method. The mass density of a close-packed array of single- and multi-walled nanotubes is analysed as a function of the average nanotube diameter and number of walls, and this is used to derive the area density, from which the filling factor can be extracted. Densities of the order of 10^13 tubes cm^-2 are grown from cyclic catalyst methods.
Area-averaged surface fluxes and their time-space variability over the FIFE experimental domain
NASA Astrophysics Data System (ADS)
Smith, E. A.; Hsu, A. Y.; Crosson, W. L.; Field, R. T.; Fritschen, L. J.; Gurney, R. J.; Kanemasu, E. T.; Kustas, W. P.; Nie, D.; Shuttleworth, W. J.; Stewart, J. B.; Verma, S. B.; Weaver, H. L.; Wesely, M. L.
1992-11-01
The underlying mean and variance properties of surface net radiation, soil heat flux, and sensible-latent heat fluxes are examined over the densely instrumented grassland region encompassing the First ISLSCP Field Experiment (FIFE). Twenty-two surface flux stations at 20 sites were deployed during the four 1987 intensive field campaigns (IFCs). Flux variability is addressed together with the problem of scaling up to area-averaged fluxes. Successful parameterization of area-averaged fluxes in atmospheric models is based on accounting for internal spatial and temporal scales correctly. Mean and variance properties of fluxes are examined in both daily and diurnally averaged frameworks. Results are compared and contrasted for clear and cloudy situations and checked for the influence of surface-induced biophysical controls (burn and grazing treatments) and topographic controls (slope factors and aspect ratios). Examination of the sensitivity of domain-averaged fluxes to different averaging procedures demonstrates that this may be an important consideration. 
The results reveal six key features of the 1987 surface fluxes: (1) cloudiness variability and ample rainfall throughout the growing season led to near-consistency in flux magnitudes during the first three IFCs; (2) burn treatment, grazing conditions, and topography have clearly delineated influences on the diurnal cycle flux amplitudes but do not alter the evaporative fraction significantly; (3) cloudiness is the major control on flux variability in terms of both mean and variance properties but has little impact on the Bowen ratio or evaporative fraction; (4) spatial weighting of fluxes based on a biophysical-topographical cross stratification generates a measurable bias with respect to straight arithmetic averaging (up to 20 W m^-2 in available heating); (5) structure function analysis demonstrates significant underlying spatial autocorrelation structure in the fluxes, but the observed distance dependence is due to cloudiness controls, not surface controls; (6) Monte Carlo analysis of high resolution vegetation indices obtained from SPOT satellite measurements suggests that the mean domain amplitudes of the diurnal sensible and latent heat flux cycles can be biased up to 30-40 W m^-2 by repositioning the 20 site locations within the experimental domain.
Mendel, Jerry M.
The Fuzzy Weighted Average as Computed by the Karnik-Mendel Algorithms. Feilong Liu, Student Member, IEEE, and Jerry M. Mendel, Life Fellow, IEEE. By connecting work from two different problems, a new algorithm for the fuzzy weighted average (FWA) is obtained; it uses the Karnik-Mendel (KM) algorithms to compute the FWA α-cut endpoints.
Wingard, G.L.; Hudley, J.W.
2012-01-01
A molluscan analogue dataset is presented in conjunction with a weighted-averaging technique as a tool for estimating past salinity patterns in south Florida’s estuaries and developing targets for restoration based on these reconstructions. The method, here referred to as cumulative weighted percent (CWP), was tested using modern surficial samples collected in Florida Bay from sites located near fixed water monitoring stations that record salinity. The results were calibrated using species weighting factors derived from examining species occurrence patterns. A comparison of the resulting calibrated species-weighted CWP (SW-CWP) to the observed salinity at the water monitoring stations averaged over a 3-year time period indicates, on average, the SW-CWP comes within less than two salinity units of estimating the observed salinity. The SW-CWP reconstructions were conducted on a core from near the mouth of Taylor Slough to illustrate the application of the method.
Pardo, C E; Kreuzer, M; Bee, G
2013-11-01
Offspring born from normal litter sizes (10 to 15 piglets) but classified as having lower than average birth weight (average of the sow herd used: 1.46 ± 0.2 kg; mean ± s.d.) carry at birth negative phenotypic traits normally associated with intrauterine growth restriction, such as brain-sparing and impaired myofiber hyperplasia. The objective of the study was to assess long-term effects of intrauterine crowding by comparing postnatal performance, carcass characteristics and pork quality of offspring born from litters with higher (>1.7 kg) or lower (<1.3 kg) than average litter birth weight. From a population of multiparous Swiss Large White sows (parity 2 to 6), 16 litters with high (H = 1.75 kg) or low (L = 1.26 kg) average litter birth weight were selected. At farrowing, two female pigs and two castrated pigs were chosen from each litter: from the H-litters those with the intermediate (HI = 1.79 kg) and lowest (HL = 1.40 kg) birth weight, and from the L-litters those with the highest (LH = 1.49 kg) and intermediate (LI = 1.26 kg) birth weight. Average birth weight of the selected HI and LI piglets differed (P < 0.05), whereas birth weight of the HL- and LH-piglets was similar (P > 0.05). These pigs were fattened in group pens and slaughtered at 165 days of age. Pre-weaning performance of the litters and growth performance, carcass and meat quality traits of the selected pigs were assessed. Number of stillborn piglets and pig mortality were greater (P < 0.05) in L- than in H-litters. Consequently, fewer (P < 0.05) piglets were weaned and average litter weaning weight decreased by 38% (P < 0.05). The selected pigs of the L-litters displayed catch-up growth during the starter and grower-finisher periods, leading to similar (P > 0.05) slaughter weight at 165 days of age. However, HL-gilts were more feed efficient and had leaner carcasses than HI-, LH- and LI-pigs (birth weight class × gender interaction P < 0.05). Meat quality traits were mostly similar between groups.
The marked between-litter birth weight variation observed in normal size litters had therefore no evident negative impact on growth potential and quality of pigs from the lower birth weight group. PMID:23896082
Theoretical and empirical analysis of the average cross-sectional areas of breakup fragments
NASA Astrophysics Data System (ADS)
Hanada, T.; Liou, J.-C.
2011-05-01
This paper compares two different approaches to calculate the average cross-sectional area of breakup fragments. The first one is described in the NASA standard breakup model 1998 revision. This approach visually classifies fragments into several shapes, and then applies formulae developed for each shape to calculate the average cross-sectional area. The second approach was developed jointly by Kyushu University and the NASA Orbital Debris Program Office. This new approach automatically classifies fragments into plate- or irregular-shaped objects based on their aspect ratio and thickness, and then applies formulae developed for each shape to calculate the average cross-sectional area. The comparison between the two approaches is demonstrated in the area-to-mass ratio (A/m) distribution of fragments from two microsatellite impact experiments completed in early 2008. A major difference between the two approaches comes from the calculation of the average cross-sectional area of plates. In order to determine which of the two approaches provides a better description of the actual A/m distribution of breakup fragments, a theoretical analysis in the calculation of the average cross-sectional area of an ideal plate is conducted. This paper also investigates the average cross-sectional area of multi-layer insulation fragments. The average cross-sectional area of 214 multi-layer insulation fragments was measured by a planimeter, and then the data were used to benchmark the average cross-sectional areas estimated by the two approaches. The uncertainty in the calculation of the average cross-sectional area with the two approaches is also discussed in terms of size and thickness.
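As background geometry for the ideal-plate analysis mentioned above (not a statement of either model's exact formula): Cauchy's projection formula says that a convex body with surface area S has mean projected, i.e. cross-sectional, area S/4 when averaged over uniformly random orientations. For a thin plate of one-sided area A_p, the total surface area is approximately 2A_p, giving

```latex
\bar{A} \;=\; \frac{S}{4} \;\approx\; \frac{2A_p}{4} \;=\; \frac{A_p}{2},
```

so roughly half the one-sided plate area is the orientation-averaged cross section.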
S. Nadi; M. R. Delavar
2011-01-01
This paper presents a generic model for using different decision strategies in multi-criteria, personalized route planning. Some researchers have considered user preferences in navigation systems. However, these prior studies typically employed a high tradeoff decision strategy, which used a weighted linear aggregation rule, and neglected other decision strategies. The proposed model integrates a pairwise comparison method and quantifier-guided ordered weighted
Area-averaged profiles over the mock urban setting test array
Nelson, M. A. (Matthew Aaron); Brown, M. J. (Michael J.); Pardyjak, E. R. (Eric R.); Klewicki, J. C.
2004-01-01
Urban areas have a large effect on the local climate and meteorology. Efforts have been made to incorporate the bulk dynamic and thermodynamic effects of urban areas into mesoscale models (e.g., Chin et al., 2000; Holt et al., 2002; Lacser and Otte, 2002). At this scale buildings cannot be resolved individually, but parameterizations have been developed to capture their aggregate effect. These urban canopy parameterizations have been designed to account for the area-averaged drag, turbulent kinetic energy (TKE) production, and surface energy balance modifications due to buildings (e.g., Sorbjan and Uliasz, 1982; Ca, 1999; Brown, 2000; Martilli et al., 2002). These models compute an area-averaged mean profile that is representative of the bulk flow characteristics over the entire mesoscale grid cell. One difficulty has been the testing of these parameterizations due to a lack of area-averaged data. In this paper, area-averaged velocity and turbulent kinetic energy profiles are derived from data collected at the Mock Urban Setting Test (MUST). The MUST experiment was designed to be a near full-scale model of an idealized urban area embedded in the Atmospheric Surface Layer (ASL). Its purpose was to study airflow and plume transport in urban areas and to provide a test case for model validation. A large number of velocity measurements were taken at the test site so that it was possible to derive area-averaged velocity and TKE profiles.
Theoretical and Empirical Analysis of the Average Cross-sectional Areas of Breakup Fragments
NASA Astrophysics Data System (ADS)
Hanada, Toshiya; Liou, Jer-Chyi
This paper will compare two different approaches to calculate the average cross-sectional areas of breakup fragments. The first one is described in the NASA standard breakup model 1998 revision. This approach visually classifies fragments into several shapes, and then applies formulae developed for each shape to calculate the average cross-sectional area. The second approach was developed jointly by the Kyushu University and the NASA Orbital Debris Program Office. This new approach automatically classifies fragments into plate- or irregular-shaped objects based on their aspect ratio and thickness, and then applies formulae for each shape to calculate the average cross-sectional area. The comparison between the two approaches will be demonstrated in the area-to-mass ratio (A/m) distribution of fragments from two microsatellite impact tests completed in early 2008. In order to determine which one of the two approaches provides a better description of the actual A/m distribution of breakup fragments, a theoretical analysis of two objects in ideal shape was conducted. The first one is an ideal plate. It is used to investigate the uncertainty of the formula described in the NASA standard breakup model. The second shape is an ideal cylinder. It is used to investigate the uncertainty in the calculation of the average cross-sectional area of needle-like fragments generated from the CFRP layers and side panels of the microsatellite tests. This paper will also investigate the average cross-sectional areas of multi-layer insulation (MLI) fragments. The average cross-sectional areas of 214 MLI fragments were measured by a planimeter, and then the data were used to benchmark the average cross-sectional areas estimated by the two approaches. The uncertainty in the calculation of the average cross-sectional area with the two approaches will also be discussed in terms of size and thickness.
López-Soria, S; Sibila, M; Nofrarías, M; Calsamiglia, M; Manzanilla, E G; Ramírez-Mendoza, H; Mínguez, A; Serrano, J M; Marín, O; Joisel, F; Charreyre, C; Segalés, J
2014-12-01
Porcine circovirus type 2 (PCV2) is a ubiquitous virus that mainly affects nursery and fattening pigs, causing systemic disease (PCV2-SD) or subclinical infection. A characteristic sign in both presentations is reduction of average daily weight gain (ADWG). The present study aimed to assess the relationship between PCV2 load in serum and ADWG from 3 (weaning) to 21 weeks of age (slaughter) (ADWG 3-21). Thus, three different boar lines were used to inseminate sows from two PCV2-SD affected farms. One or two pigs per sow were selected (60, 61 and 51 piglets from Pietrain, Pietrain×Large White and Duroc×Large White boar lines, respectively). Pigs were bled at 3, 9, 15 and 21 weeks of age and weighed at 3 and 21 weeks. The area under the curve of the viral load at all sampling times (AUCqPCR 3-21) was calculated for each animal according to standard and real time quantitative PCR results; this variable was categorized as "negative or low" (<10^4.3 PCV2 genome copies/ml of serum), "medium" (10^4.3 to 10^5.3) and "high" (>10^5.3). Data regarding sex, PCV2 antibody titre at weaning and sow parity were also collected. A generalized linear model was performed, showing that paternal genetic line and AUCqPCR 3-21 were related to ADWG 3-21. ADWG 3-21 (mean ± typical error) for "negative or low", "medium" and "high" AUCqPCR 3-21 was 672±9, 650±12 and 603±16 g/day, respectively, showing significant differences among them. This study describes different ADWG performances in 3 pig populations that suffered from different degrees of PCV2 viraemia. PMID:25448444
Hanusa, Christopher
faster using weighted-average Madelung constant calculations. Bonding in bulk ionic materials such as Mg… The structures and relative stabilities of ionic… can be accurately modeled using Born-Mayer-like potential functions. However, the bonding…
Full-custom design of split-set data weighted averaging with output register for jitter suppression
NASA Astrophysics Data System (ADS)
Jubay, M. C.; Gerasta, O. J.
2015-06-01
A full-custom design of an element selection algorithm, named Split-set Data Weighted Averaging (SDWA), is implemented in a 90nm CMOS Technology Synopsys library. SDWA is applied to seven unit elements (3-bit) using a thermometer-coded input. Split-set DWA is an improved DWA algorithm that caters to the requirements of randomization together with long-term equal element usage. Randomization and equal element usage improve the spectral response of the unit elements, yielding a higher spurious-free dynamic range (SFDR) without significantly degrading the signal-to-noise ratio (SNR). Being a full-custom design, it is brought to the transistor level, and the custom chip layout is also provided, with a total area of 0.3 mm2 and a power consumption of 0.566 mW, simulated at a 50MHz clock frequency. In this implementation, SDWA is successfully derived and improved by introducing a register at the output that suppresses the jitter introduced at the final stage by switching loops and successive delays.
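For context, conventional DWA (the baseline that SDWA improves on) equalizes element usage by rotating a pointer through the element array. The sketch below shows only that baseline rotation; the paper's split-set randomization is not reproduced, and all names are illustrative.

```python
# Baseline (conventional) DWA element selection: a pointer rotates through
# the unit-element array, advancing by each input code, so every element is
# fired equally often in the long run. This is NOT the split-set variant.
def dwa_select(codes, n_elements=7):
    pointer = 0
    usage = [0] * n_elements
    selections = []
    for code in codes:  # code = number of elements to fire this cycle
        chosen = [(pointer + i) % n_elements for i in range(code)]
        for e in chosen:
            usage[e] += 1
        pointer = (pointer + code) % n_elements
        selections.append(chosen)
    return selections, usage

# Codes summing to a multiple of 7 complete whole rotations, so usage
# is exactly equal across the seven elements.
sel, usage = dwa_select([3, 4, 7, 3, 4])
print(usage)  # [3, 3, 3, 3, 3, 3, 3]
```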
Bacillus subtilis 168 levansucrase (SacB) activity affects average levan molecular weight.
Porras-Domínguez, Jaime R; Ávila-Fernández, Ángela; Miranda-Molina, Afonso; Rodríguez-Alegría, María Elena; Munguía, Agustín López
2015-11-01
Levan is a fructan polymer that offers a variety of applications in the chemical, health, cosmetic and food industries. Most levan applications depend on levan molecular weight, which in turn depends on the source of the synthesizing enzyme and/or on reaction conditions. Here we demonstrate that, in the particular case of levansucrase from Bacillus subtilis 168, enzyme concentration is also a factor defining the levan molecular weight distribution. While a bimodal distribution has been reported at the usual enzyme concentrations (1 U/ml, equivalent to 0.1 µM levansucrase), we found that a low-molecular-weight normal distribution is solely obtained at high enzyme concentrations (>5 U/ml, equivalent to 0.5 µM levansucrase), while a high-molecular-weight normal distribution is synthesized at low enzyme doses (0.1 U/ml, equivalent to 0.01 µM levansucrase). PMID:26256357
The effect of capsule-filling machine vibrations on average fill weight.
Llusa, Marcos; Faulhammer, Eva; Biserni, Stefano; Calzolari, Vittorio; Lawrence, Simon; Bresciani, Massimo; Khinast, Johannes
2013-09-15
The aim of this paper is to study the effect of the speed of capsule filling and the inherent machine vibrations on fill weight for a dosator-nozzle machine. The results show that increasing the capsule-filling speed amplifies the vibration intensity (as measured by a laser Doppler vibrometer) of the machine frame, which leads to powder densification. The mass of the powder (fill weight) collected via the nozzle is significantly larger at a higher capsule-filling speed. Therefore, there is a correlation between powder densification under more intense vibrations and larger fill weights. Quality-by-Design of powder-based products should evaluate the effect of environmental vibrations on material attributes, which in turn may affect product quality. PMID:23872302
Kim, Tad; Rivara, Frederick P; Mozingo, David W; Lottenberg, Lawrence; Harris, Zachary B; Casella, George; Liu, Huazhi; Moldawer, Lyle L; Efron, Philip A; Ang, Darwin N
2015-01-01
Objective The state of Florida has some of the most dangerous highways in the USA. In 2006, Florida averaged 1.65 fatalities per 100 million vehicle miles travelled (VMT) compared with the national average of 1.42. A study was undertaken to find a method of identifying the counties that contributed the most driver fatalities after a motor vehicle collision (MVC). By regionalising interventions unique to this subset of counties, the use of resources would have the greatest potential of reducing statewide driver deaths. Methods The Florida Highway Safety Motor Vehicle database 2000–2006 was used to calculate driver VMT-weighted deaths by county. A total of 3 468 326 motor vehicle crashes were evaluated. Counties that had driver death rates higher than the state average were sorted by a weighted-averages method. Multivariate regression was used to calculate the likelihood of death for various risk factors. Results VMT-weighted death rates identified 12 out of 67 counties that contributed up to 50% of overall driver fatalities. These counties were primarily clustered in central and south Florida. The strongest independent risk factors for driver death attributable to MVC in these high-risk counties were alcohol/drug use, rural roads, speed limit ≥45 mph, adverse weather conditions, divided highways, vehicle type, vehicle defects and roadway location. Conclusions Using the weighted-averages method, a small subset of counties contributing the majority of statewide driver fatalities was identified. Regionalised interventions on specific risk factors in these counties may have the greatest impact on reducing driver-related MVC fatalities. PMID:21685144
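The fatality rate quoted above is a simple normalization: deaths per 100 million VMT. A minimal sketch, with invented totals chosen only to reproduce the stated 1.65 statewide rate:

```python
# Fatality-rate normalization: deaths per 100 million vehicle miles
# travelled (VMT). The totals below are invented for illustration.
def deaths_per_100m_vmt(fatalities, vmt_miles):
    return fatalities / vmt_miles * 1e8

florida_2006 = deaths_per_100m_vmt(3365, 203.9e9)  # invented totals
print(round(florida_2006, 2))
```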
Krishnamoorthy, Kalimuthu
…for a product of powers of Poisson parameters are also used to assess the reliability of a parallel system… some Bayesian credible intervals. It should be noted that the merits of the proposed methods… The Bayesian credible interval developed by Kim is not in closed form, and a weighted Monte Carlo simulation…
High surface area, low weight composite nickel fiber electrodes
NASA Technical Reports Server (NTRS)
Johnson, Bradley A.; Ferro, Richard E.; Swain, Greg M.; Tatarchuk, Bruce J.
1993-01-01
The energy density and power density of light weight aerospace batteries utilizing the nickel oxide electrode are often limited by the microstructures of both the collector and the resulting active deposit in/on the collector. Heretofore, these two microstructures were intimately linked to one another by the materials used to prepare the collector grid as well as the methods and conditions used to deposit the active material. Significant weight and performance advantages were demonstrated by Britton and Reid at NASA-LeRC using FIBREX nickel mats of ca. 28-32 microns diameter. Work in our laboratory investigated the potential performance advantages offered by nickel fiber composite electrodes containing a mixture of fibers as small as 2 microns diameter (Available from Memtec America Corporation). These electrode collectors possess in excess of an order of magnitude more surface area per gram of collector than FIBREX nickel. The increase in surface area of the collector roughly translates into an order of magnitude thinner layer of active material. Performance data and advantages of these thin layer structures are presented. Attributes and limitations of their electrode microstructure to independently control void volume, pore structure of the Ni(OH)2 deposition, and resulting electrical properties are discussed.
On the theory relating changes in area-average and pan evaporation (Invited)
NASA Astrophysics Data System (ADS)
Shuttleworth, W.; Serrat-Capdevila, A.; Roderick, M. L.; Scott, R.
2009-12-01
Theory relating changes in area-average evaporation with changes in the evaporation from pans or open water is developed. Such changes can arise by Type (a) processes related to large-scale changes in atmospheric concentrations and circulation that modify surface evaporation rates in the same direction, and Type (b) processes related to coupling between the surface and atmospheric boundary layer (ABL) at the landscape scale that usually modify area-average evaporation and pan evaporation in different directions. The interrelationship between evaporation rates in response to Type (a) changes is derived. They have the same sign and broadly similar magnitude but the change in area-average evaporation is modified by surface resistance. As an alternative to assuming the complementary evaporation hypothesis, the results of previous modeling studies that investigated surface-atmosphere coupling are parameterized and used to develop a theoretical description of Type (b) coupling via vapor pressure deficit (VPD) in the ABL. The interrelationship between appropriately normalized pan and area-average evaporation rates is shown to vary with temperature and wind speed but, on average, the Type (b) changes are approximately equal and opposite. Long-term Australian pan evaporation data are analyzed to demonstrate the simultaneous presence of Type (a) and (b) processes, and observations from three field sites in southwestern USA show support for the theory describing Type (b) coupling via VPD. England's victory over Australia in 2009 Ashes cricket test match series will not be mentioned.
ON THE THEORY RELATING CHANGES IN AREA-AVERAGE AND PAN EVAPORATION
Technology Transfer Automated Retrieval System (TEKTRAN)
Theory relating changes in the area-average evaporation from a landscape with changes in the evaporation from pans or open water within the landscape is developed. Such changes can arise in two ways, by Type (a) processes related to large-scale changes in atmospheric concentrations and circulation t...
MacDonald, Lee
…to turbidity and the deposition of fine sediment (Rogers 1990), and the offshore coral communities… Average annual sediment yields from undeveloped areas were estimated from a sediment pond… of anthropogenic sediment. Field measurements of the road network in two catchments led to the development…
Average distance, surface area, and other structural properties of exchanged hypercubes
Klavzar, Sandi
Exchanged hypercubes [Loh et al., IEEE Transactions on Parallel and Distributed Systems 16 (2005) 866–874] are spanning subgraphs of hypercubes with about one half of their edges but still with many desirable properties of hypercubes.
Adaptive beamformer based on average vowels/consonant spectrum weights for noisy speech recognition
NASA Astrophysics Data System (ADS)
Nakayama, Masato; Nishiura, Takanobu; Kawahara, Hideki
2002-11-01
Background noise and reverberation seriously degrade sound capture quality. A microphone array is an ideal candidate for capturing distant-talking speech. With a microphone array, a desired speech signal can be acquired selectively by steering the directivity. AMNOR (Adaptive Microphone-Array for Noise Reduction) is an adaptive beamformer proposed by Kaneda et al. In addition, as a beamformer for speech capture, S-AMNOR, an AMNOR with a long-time speech spectrum, was also proposed by Okada et al. However, the performance of S-AMNOR may be further improved if each adaptive filter for vowels and consonants could be designed with an average vowel/consonant spectrum. Therefore, we propose a new AMNOR with adaptive filters for vowels/consonants, in order to improve the signal-capturing performance. We evaluated the ASR (Automatic Speech Recognition) performance with the enhanced desired signal using the adaptive filters for vowels/consonants after detecting vowels and consonants for each phoneme. As a result of evaluation experiments, by comparing the results from the proposed AMNOR and the conventional AMNOR/S-AMNOR, we could confirm that ASR performance was improved with the proposed AMNOR. [Work supported by JSPS.]
Austin G. Fowler
2014-10-10
Consider a 2-D square array of qubits of extent $L\\times L$. We provide a proof that the minimum weight perfect matching problem associated with running a particular class of topological quantum error correction codes on this array can be exactly solved with a 2-D square array of classical computing devices, each of which is nominally associated with a fixed number $N$ of qubits, in constant average time per round of error detection independent of $L$ provided physical error rates are below fixed nonzero values, and other physically reasonable assumptions. This proof is applicable to the fully fault-tolerant case only, not the case of perfect stabilizer measurements.
Shih, H C; Tsai, S W; Kuo, C H
2012-01-01
A solid-phase microextraction (SPME) device was used as a diffusive sampler for airborne propylene glycol ethers (PGEs), including propylene glycol monomethyl ether (PGME), propylene glycol monomethyl ether acetate (PGMEA), and dipropylene glycol monomethyl ether (DPGME). A Carboxen-polydimethylsiloxane (CAR/PDMS) SPME fiber was selected for this study. A polytetrafluoroethylene (PTFE) tubing was used as the holder, and the SPME fiber assembly was inserted into the tubing as a diffusive sampler. The diffusion path length and area of the sampler were 0.3 cm and 0.00086 cm^2, respectively. The theoretical sampling constants at 30°C and 1 atm for PGME, PGMEA, and DPGME were 1.50 × 10^-2, 1.23 × 10^-2 and 1.14 × 10^-2 cm^3 min^-1, respectively. For evaluations, known concentrations of PGEs around the threshold limit values/time-weighted average with specific relative humidities (10% and 80%) were generated both by the air bag method and the dynamic generation system, while 15, 30, 60, 120, and 240 min were selected as the time periods for vapor exposures. Comparisons of the SPME diffusive sampling method to Occupational Safety and Health Administration (OSHA) organic Method 99 were performed side-by-side in an exposure chamber at 30°C for PGME. A gas chromatography/flame ionization detector (GC/FID) was used for sample analysis. The experimental sampling constants of the sampler at 30°C were (6.93 ± 0.12) × 10^-1, (4.72 ± 0.03) × 10^-1, and (3.29 ± 0.20) × 10^-1 cm^3 min^-1 for PGME, PGMEA, and DPGME, respectively. The adsorption of chemicals on the stainless steel needle of the SPME fiber was suspected to be one of the reasons why significant differences between theoretical and experimental sampling rates were observed. Correlations between the results for PGME from both the SPME device and OSHA organic Method 99 were linear (r = 0.9984) and consistent (slope = 0.97 ± 0.03). Face velocity (0-0.18 m/s) also proved to have no effect on the sampler.
However, the effects of temperature and humidity have been observed. Therefore, adjustments of experimental sampling constants at different environmental conditions will be necessary. PMID:22651222
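The theoretical sampling constant of such a diffusive sampler follows Fick's first law, SR = D·A/L, where D is the analyte's diffusion coefficient in air, A the cross-sectional area of the diffusion path, and L its length. The sketch below uses the A and L values from the abstract; the D value is an assumption back-calculated to match the stated PGME constant, not a measured property.

```python
# Fick's-law sampling constant of a diffusive sampler: SR = D * A / L.
# A and L come from the abstract; D is an assumed diffusion coefficient.
def sampling_constant(d_cm2_per_min, area_cm2, path_cm):
    return d_cm2_per_min * area_cm2 / path_cm

A = 0.00086    # cm^2, diffusion path cross-section (from the abstract)
L = 0.3        # cm, diffusion path length (from the abstract)
D_PGME = 5.23  # cm^2/min, assumed value
sr = sampling_constant(D_PGME, A, L)
print(round(sr, 4))  # ~1.50e-2 cm^3/min, matching the stated constant
```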
Area-averaged surface fluxes and their time-space variability over the FIFE experimental domain
NASA Technical Reports Server (NTRS)
Smith, E. A.; Hsu, A. Y.; Crosson, W. L.; Field, R. T.; Fritschen, L. J.; Gurney, R. J.; Kanemasu, E. T.; Kustas, W. P.; Nie, D.; Shuttleworth, W. J.
1992-01-01
The underlying mean and variance properties of surface net radiation, sensible-latent heat fluxes and soil heat flux are studied over the densely instrumented grassland region encompassing FIFE. Flux variability is discussed together with the problem of scaling up to area-averaged fluxes. Results are compared and contrasted for cloudy and clear situations and examined for the influence of surface-induced biophysical controls (burn and grazing treatments) and topographic controls (aspect ratios and slope factors).
Kundu, Prasun K
2015-01-01
Rainfall exhibits extreme variability at many space and time scales and calls for a statistical description. Based on an analysis of radar measurements of precipitation over the tropical oceans, we introduce a new probability law for the area-averaged rain rate constructed from the class of log-infinitely divisible distributions that accurately describes the frequency of the most intense rain events. The dependence of its parameters on the spatial averaging length L allows one to relate spatial statistics at different scales. In particular, it enables us to explain the observed power law scaling of the moments of the data and successfully predicts the continuous spectrum of scaling exponents expressing multiscaling characteristics of the rain intensity field.
NASA Astrophysics Data System (ADS)
Panchanathan, Sethuraman; Ramaswamy, Karthik; Fang, Jian-Jun; Moseler, Kathy; Levi, Sami
2000-12-01
In this paper we present the Elliptical Weighted Average (EWA) filtering algorithm and an optimized implementation of a two-pass algorithm used in digital image and video warping. Two-pass algorithms are well suited for hardware implementation due to their reduced complexity in using 1-D re-sampling and anti-aliasing filters, but their primary disadvantage is the need for a large buffer to store the temporary image, since warping is performed in two passes. The size of the temporary buffer is equal to or greater than the size of the input image, so a dedicated hardware implementation of this algorithm implies a huge cost in terms of real estate on chip. In our approach, Wolberg-Boult's resampling algorithm is modified to use only two rows of temporary buffer, thereby making the algorithm more amenable to hardware implementation. We present a complexity analysis based on the number of arithmetic and logic operations (add, shift, compare, multiply, clip and divide) per macroblock. The EWA filter is the most cost-effective high-quality filtering method because point-inclusion testing can be done with one function evaluation and the filter weights can be stored in lookup tables to reduce computation. Mapping the quadrilaterals previously required four equations for the four lines of each quadrilateral, which was computationally complex, with the computational cost directly proportional to the number of input pixels accessed. We also present the complexity analysis per macroblock.
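A minimal sketch of the EWA idea described above, assuming a Gaussian weight profile: a pixel is included when the elliptical quadratic form Q(u,v) = A·u² + B·u·v + C·v² falls below a threshold F (one function evaluation), and its weight comes from a precomputed lookup table indexed by Q. All names and the weight profile are illustrative, not the paper's implementation.

```python
import math

# EWA sampling sketch: ellipse-inclusion test plus lookup-table weights.
def ewa_sample(image, cx, cy, A, B, C, F, lut_size=256):
    # Gaussian-like weights indexed by normalized Q, precomputed once.
    lut = [math.exp(-2.0 * i / lut_size) for i in range(lut_size)]
    num = den = 0.0
    for y in range(len(image)):
        for x in range(len(image[0])):
            u, v = x - cx, y - cy
            q = A * u * u + B * u * v + C * v * v
            if q < F:  # point-inclusion test: one function evaluation
                w = lut[int(q * lut_size / F)]
                num += w * image[y][x]
                den += w
    return num / den if den else 0.0

# Sanity check: a constant image averages to that constant.
flat = [[7.0] * 8 for _ in range(8)]
val = ewa_sample(flat, 3.5, 3.5, A=1.0, B=0.0, C=1.0, F=9.0)
print(val)
```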
Coombes, Brandon; Basu, Saonli; Guha, Sharmistha; Schork, Nicholas
2015-01-01
Multi-locus effect modeling is a powerful approach for detection of genes influencing a complex disease. Especially for rare variants, we need to analyze multiple variants together to achieve adequate power for detection. In this paper, we propose several parsimonious branching model techniques to assess the joint effect of a group of rare variants in a case-control study. These models implement a data reduction strategy within a likelihood framework and use a weighted score test to assess the statistical significance of the effect of the group of variants on the disease. The primary advantage of the proposed approach is that it performs model-averaging over a substantially smaller set of models supported by the data and thus gains power to detect multi-locus effects. We illustrate these proposed approaches on simulated and real data and study their performance compared to several existing rare variant detection approaches. The primary goal of this paper is to assess whether there is any gain in power to detect association by averaging over a number of models instead of selecting the best model. Extensive simulations and a real data application demonstrate the advantage of the proposed approach in the presence of causal variants with opposite directional effects along with a moderate number of null variants in linkage disequilibrium. PMID:26436424
NASA Astrophysics Data System (ADS)
Gasser, Guy; Pankratov, Irena; Elhanany, Sara; Glazman, Hillel; Lev, Ovadia
2014-05-01
A methodology used to estimate the percentage of wastewater effluent in an otherwise pristine water site is proposed on the basis of the weighted mean of the level of a consortium of indicator pollutants. This method considers the levels of uncertainty in the evaluation of each of the indicators in the site, potential effluent sources, and uncontaminated surroundings. A detailed demonstrative study was conducted on a site that is potentially subject to wastewater leakage. The research concentrated on several perched springs that are influenced to an unknown extent by agricultural communities. A comparison was made to a heavily contaminated site receiving wastewater effluent and surface water runoff. We investigated six springs in two nearby ridges where fecal contamination was detected in the past; the major sources of pollution in the area have since been diverted to a wastewater treatment system. We used chloride, acesulfame, and carbamazepine as domestic pollution tracers. Good correlation (R2 > 0.86) was observed between the mixing ratio predictions based on the two organic tracers (the slope of the linear regression was 1.05), whereas the chloride predictions differed considerably. This methodology is potentially useful, particularly for cases in which detailed hydrological modeling is unavailable but in which quantification of wastewater penetration is required. We demonstrate that the use of more than one tracer for estimation of the mixing ratio reduces the combined uncertainty level associated with the estimate and can also help to disqualify biased tracers.
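The weighted-mean construction described above can be sketched as an inverse-variance weighted average: each tracer's mixing-ratio estimate is weighted by the reciprocal of its squared uncertainty, and the combined uncertainty shrinks accordingly. Tracer values and uncertainties below are invented.

```python
# Inverse-variance weighted mean of several tracer-based estimates of the
# effluent mixing ratio, plus the combined uncertainty of the estimate.
def weighted_mixing_ratio(estimates, sigmas):
    weights = [1.0 / s ** 2 for s in sigmas]
    mean = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    combined_sigma = (1.0 / sum(weights)) ** 0.5
    return mean, combined_sigma

# e.g. chloride, acesulfame, carbamazepine estimates of effluent fraction (%)
est, sig = weighted_mixing_ratio([12.0, 10.0, 15.0], [2.0, 1.0, 5.0])
print(round(est, 2), round(sig, 2))
```

Note that the combined uncertainty (about 0.88 here) is smaller than that of the best single tracer (1.0), illustrating the abstract's point that using more than one tracer reduces the combined uncertainty of the estimate.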
Area-preserving maps models of gyro-averaged ${\\bf E} \\times {\\bf B}$ chaotic transport
J. D. da Fonseca; D. del-Castillo-Negrete; I. L. Caldas
2014-09-10
Discrete maps have been extensively used to model 2-dimensional chaotic transport in plasmas and fluids. Here we focus on area-preserving maps describing finite Larmor radius (FLR) effects on ${\\bf E} \\times {\\bf B}$ chaotic transport in magnetized plasmas with zonal flows perturbed by electrostatic drift waves. FLR effects are included by gyro-averaging the Hamiltonians of the maps which, depending on the zonal flow profile, can have monotonic or non-monotonic frequencies. In the limit of zero Larmor radius, the monotonic frequency map reduces to the standard Chirikov-Taylor map, and, in the case of non-monotonic frequency, the map reduces to the standard nontwist map. We show that in both cases FLR leads to chaos suppression, changes in the stability of fixed points, and robustness of transport barriers. FLR effects are also responsible for changes in the phase space topology and zonal flow bifurcations. Dynamical systems methods based on recurrence time statistics are used to quantify the dependence on the Larmor radius of the threshold for the destruction of transport barriers.
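In the zero-Larmor-radius limit the monotonic-frequency map reduces to the Chirikov-Taylor standard map, which can be iterated in a few lines. The sketch below omits the gyro-averaging factor (a Bessel-function modulation of the perturbation amplitude) and shows only the bare map; K and the initial condition are illustrative.

```python
import math

# Chirikov-Taylor standard map: p' = p + K*sin(x) (mod 2*pi),
# x' = x + p' (mod 2*pi). K is the perturbation strength.
def standard_map(x, p, K, steps):
    two_pi = 2.0 * math.pi
    for _ in range(steps):
        p = (p + K * math.sin(x)) % two_pi
        x = (x + p) % two_pi
    return x, p

# K = 0 is the integrable limit: the momentum is conserved exactly.
x, p = standard_map(0.5, 1.0, K=0.0, steps=100)
print(p)  # 1.0
```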
Area-to-point parameter estimation with geographically weighted regression
NASA Astrophysics Data System (ADS)
Murakami, Daisuke; Tsutsumi, Morito
2015-07-01
The modifiable areal unit problem (MAUP) is a problem by which aggregated units of data influence the results of spatial data analysis. Standard GWR, which ignores aggregation mechanisms, cannot be considered to serve as an efficient countermeasure of MAUP. Accordingly, this study proposes a type of GWR with aggregation mechanisms, termed area-to-point (ATP) GWR herein. ATP GWR, which is closely related to geostatistical approaches, estimates the disaggregate-level local trend parameters by using aggregated variables. We examine the effectiveness of ATP GWR for mitigating MAUP through a simulation study and an empirical study. The simulation study indicates that the method proposed herein is robust to the MAUP when the spatial scales of aggregation are not too global compared with the scale of the underlying spatial variations. The empirical studies demonstrate that the method provides intuitively consistent estimates.
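For reference, standard GWR (the method ATP GWR extends) fits a weighted least-squares regression at each target location, with kernel weights that decay with distance from that location. A minimal sketch with a Gaussian kernel and synthetic data (all values illustrative):

```python
import numpy as np

# Local GWR coefficients at one target location via weighted least squares.
def gwr_local_coefficients(coords, x, y, target, bandwidth):
    d = np.linalg.norm(coords - target, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)    # Gaussian kernel weights
    X = np.column_stack([np.ones(len(x)), x])  # intercept + covariate
    XtW = X.T * w                              # X^T W (columns scaled by w)
    return np.linalg.solve(XtW @ X, XtW @ y)   # local [intercept, slope]

rng = np.random.default_rng(0)
coords = rng.uniform(0.0, 10.0, size=(50, 2))
x = rng.uniform(0.0, 1.0, 50)
y = 2.0 + 3.0 * x                              # spatially constant truth
beta = gwr_local_coefficients(coords, x, y,
                              target=np.array([5.0, 5.0]), bandwidth=2.0)
print(np.round(beta, 6))                       # recovers [2, 3]
```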
Michiels, A; Piepers, S; Ulens, T; Van Ransbeeck, N; Del Pozo Sacristán, R; Sierens, A; Haesebrouck, F; Demeyer, P; Maes, D
2015-09-01
The present study investigated the simultaneous influence of particulate matter (PM10) and ammonia (NH3) on performance, lung lesions and the presence of Mycoplasma hyopneumoniae (M. hyopneumoniae) in finishing pigs. A pig herd experiencing clinical problems of M. hyopneumoniae infections was selected. In total, 1095 finishing pigs of two replicates in eight compartments each were investigated during the entire finishing period (FP). Indoor PM10 and NH3 were measured at regular intervals during the FP with two Grimm spectrometers and two Graywolf Particle Counters (PM10) and an Innova photoacoustic gas monitor (NH3). Average daily weight gain (ADG) and mortality were calculated and associated with PM10 and NH3 during the FP. Nasal swabs (10 pigs/compartment) were collected one week prior to slaughter to detect DNA of M. hyopneumoniae with nested PCR (nPCR). The prevalence and extent of pneumonia lesions, and prevalence of fissures and pleurisy were examined at slaughter (29 weeks). The results from the nasal swabs and lung lesions were associated with PM10 and NH3 during the FP and the second half of the FP. In the univariable model, increasing PM10 concentrations resulted in a higher odds of pneumonia lesions (second half of the FP: OR=8.72; P=0.015), more severe pneumonia lesions (FP: P=0.04, second half of the FP: P=0.009), a higher odds of pleurisy lesions (FP: OR=20.91; P<0.001 and second half of the FP: OR=40.85; P<0.001) and a higher number of nPCR positive nasal samples (FP: OR=328.00; P=0.01 and second half of the FP: OR=185.49; P=0.02). Increasing NH3 concentrations in the univariable model resulted in a higher odds of pleurisy lesions (FP: OR=21.54; P=0.003) and a higher number of nPCR positive nasal samples (FP: OR=70.39; P=0.049; second half of the FP: OR=8275.05; P=0.01). In the multivariable model, an increasing PM10 concentration resulted in a higher odds of pleurisy lesions (FP: OR=8.85; P=0.049). 
These findings indicate that the respiratory health of finishing pigs was significantly affected by PM10. PMID:26148844
J. M. Line; Cajo J. E ter Braak; H. J. B. Birks
1994-01-01
A computer program for reconstructing environmental variables (e.g. lake-water pH) from fossil assemblages (e.g. diatoms) by weighted averaging regression and calibration is described. The estimation of sample-specific errors of prediction by bootstrapping is outlined. The program runs on IBM-compatible personal computers.
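The weighted-averaging method such a program implements can be sketched in a few lines: a taxon's optimum is the abundance-weighted mean of the environmental variable over the training set, and a fossil sample's reconstruction is the abundance-weighted mean of its taxa's optima. The sketch below omits the deshrinking step and the bootstrap error estimation; all data are invented.

```python
# Weighted-averaging (WA) regression (taxon optima) and calibration
# (environmental reconstruction), without deshrinking.
def wa_optima(abundances, env):
    # abundances[i][k]: abundance of taxon k in training sample i
    n_taxa = len(abundances[0])
    optima = []
    for k in range(n_taxa):
        num = sum(row[k] * x for row, x in zip(abundances, env))
        den = sum(row[k] for row in abundances)
        optima.append(num / den)
    return optima

def wa_reconstruct(sample, optima):
    return sum(a * u for a, u in zip(sample, optima)) / sum(sample)

train = [[10, 0], [5, 5], [0, 10]]  # two diatom taxa in three lakes
ph = [5.0, 6.0, 7.0]                # observed lake-water pH
optima = wa_optima(train, ph)
inferred = wa_reconstruct([5, 5], optima)  # fossil sample, equal abundances
print(round(inferred, 2))
```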
Sether, Bradley A.; Berkas, Wayne R.; Vecchia, Aldo V.
2004-01-01
Data were collected at 11 water-quality sampling sites in the upper Red River of the North (Red River) Basin from May 1997 through September 1999 to describe the water-quality characteristics of the upper Red River and to estimate constituent loads and flow-weighted average concentrations for major tributaries of the Red River upstream from the bridge crossing the Red River at Perley, Minn. Samples collected from the sites were analyzed for 5-day biochemical oxygen demand, bacteria, dissolved solids, nutrients, and suspended sediment. Concentration data indicated the median concentrations for most constituents and sampling sites during the study period were less than existing North Dakota and Minnesota standards or guidelines. However, more than 25 percent of the samples for the Red River at Perley, Minn., site had fecal coliform concentrations that were greater than 200 colonies per 100 milliliters, indicating an abundance of pathogens in the upper Red River Basin. Although total nitrite plus nitrate concentrations generally increased in a downstream direction, the median concentrations for all sites were less than the North Dakota suggested guideline of 1.0 milligram per liter. Total and dissolved phosphorus concentrations also generally increased in a downstream direction, but, for those constituents, the median concentrations for most sampling sites exceeded the North Dakota suggested guideline of 0.1 milligram per liter. For dissolved solids, nutrients, and suspended sediments, a relation between constituent concentration and streamflow was determined using the data collected during the study period. The relation was determined by a multiple regression model in which concentration was the dependent variable and streamflow was the primary explanatory variable. 
The regression model was used to compute unbiased estimates of annual loads for each constituent and for each of eight primary water-quality sampling sites and to compute the degree of uncertainty associated with each estimated annual load. The estimated annual loads for the eight primary sites then were used to estimate annual loads for five intervening reaches in the study area. Results were used as a screening tool to identify which subbasins contributed a disproportionate amount of pollutants to the Red River. To compare the relative water quality of the different subbasins, an estimated flow-weighted average (FWA) concentration was computed from the estimated average annual load and the average annual streamflow for each subbasin. The 5-day biochemical oxygen demands in the upper Red River Basin were fairly small, and medians ranged from 1 to 3 milligrams per liter. The largest estimated FWA concentration for dissolved solids (about 630 milligrams per liter) was for the Bois de Sioux River near Doran, Minn., site. The Otter Tail River above Breckenridge, Minn., site had the smallest estimated FWA concentration (about 240 milligrams per liter). The estimated FWA concentrations for dissolved solids for the main-stem sites ranged from about 300 to 500 milligrams per liter and generally increased in a downstream direction. The estimated FWA concentrations for total nitrite plus nitrate for the main-stem sites increased from about 0.2 milligram per liter for the Red River below Wahpeton, N. Dak., site to about 0.9 milligram per liter for the Red River at Perley, Minn., site. Much of the increase probably resulted from flows from the tributary sites and intervening reaches, excluding the Otter Tail River above Breckenridge, Minn., site. However, uncertainty in the estimated concentrations prevented any reliable conclusions regarding which sites or reaches contributed most to the increase. 
The estimated FWA concentrations for total ammonia for the main-stem sites increased from about 0.05 milligram per liter for the Red River above Fargo, N. Dak., site to about 0.15 milligram per liter for the Red River near Harwood, N. Dak., site. T
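The flow-weighted average (FWA) concentration used above is the estimated average annual load divided by the average annual flow volume, with unit conversions. A sketch with invented figures, not values from the report:

```python
# Flow-weighted average concentration in mg/L from an annual load in
# metric tons and a mean streamflow in m^3/s.
def fwa_concentration_mg_per_l(load_tons_per_yr, flow_m3_per_s):
    load_mg = load_tons_per_yr * 1e9                        # metric tons -> mg
    annual_volume_l = flow_m3_per_s * 86400 * 365 * 1000.0  # m^3/s -> L/yr
    return load_mg / annual_volume_l

fwa = fwa_concentration_mg_per_l(9460.8, 1.0)  # invented: ~9461 t/yr at 1 m^3/s
print(round(fwa, 1))  # 300.0 mg/L
```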
Mendel, Jerry M.
The focus of this paper is the linguistic weighted average (LWA)… Mendel [19] notes that "words mean different things to different people and so…"
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Cheng, Yang; Crassidis, John L.; Oshman, Yaakov
2007-01-01
Many applications require an algorithm that averages quaternions in an optimal manner. For example, when combining the quaternion outputs of multiple star trackers having this output capability, it is desirable to properly average the quaternions without recomputing the attitude from the raw star tracker data. Other applications requiring some sort of optimal quaternion averaging include particle filtering and multiple-model adaptive estimation, where weighted quaternions are used to determine the quaternion estimate. For spacecraft attitude estimation applications, prior work derives an optimal averaging scheme to compute the average of a set of weighted attitude matrices using the singular value decomposition method. Focusing on a 4-dimensional quaternion Gaussian distribution on the unit hypersphere, another work provides an approach to computing the average quaternion by minimizing a quaternion cost function that is equivalent to the attitude matrix cost function. Motivated by and extending these results, this Note derives an algorithm that determines an optimal average quaternion from a set of scalar- or matrix-weighted quaternions. Furthermore, a sufficient condition for the uniqueness of the average quaternion, and the equivalence of the minimization problem stated herein to maximum likelihood estimation, are shown.
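One standard construction of a scalar-weighted quaternion average, consistent with the averaging problem this Note addresses, takes the eigenvector belonging to the largest eigenvalue of the weighted outer-product matrix M = Σᵢ wᵢ qᵢ qᵢᵀ; this handles the q ↔ -q sign ambiguity automatically. A hedged sketch (function names are illustrative):

```python
import numpy as np

# Weighted quaternion average as the dominant eigenvector of
# M = sum_i w_i * q_i q_i^T, insensitive to the sign ambiguity q ~ -q.
def average_quaternion(quats, weights):
    M = np.zeros((4, 4))
    for q, w in zip(quats, weights):
        q = np.asarray(q, dtype=float)
        M += w * np.outer(q, q)
    _, eigvecs = np.linalg.eigh(M)   # eigenvalues in ascending order
    return eigvecs[:, -1]            # eigenvector of the largest eigenvalue

q = np.array([0.0, 0.0, 0.0, 1.0])             # identity attitude
avg = average_quaternion([q, -q], [1.0, 1.0])  # -q is the same attitude
print(np.round(np.abs(avg), 6))
```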
Numerous urban canopy schemes have recently been developed for mesoscale models in order to approximate the drag and turbulent production effects of a city on the air flow. However, little data exists by which to evaluate the efficacy of the schemes since "area-averaged"…
ERIC Educational Resources Information Center
Sadler, Philip M.; Tai, Robert H.
2007-01-01
Honors and advanced placement (AP) courses are commonly viewed as more demanding than standard high school offerings. Schools employ a range of methods to account for such differences when calculating grade point average and the associated rank in class for graduating students. In turn, these statistics have a sizeable impact on college admission…
ERIC Educational Resources Information Center
Warne, Russell T.; Nagaishi, Chanel; Slade, Michael K.; Hermesmeyer, Paul; Peck, Elizabeth Kimberli
2014-01-01
While research has shown the statistical significance of high school grade point averages (HSGPAs) in predicting future academic outcomes, the systems with which HSGPAs are calculated vary drastically across schools. Some schools employ unweighted grades that carry the same point value regardless of the course in which they are earned; other…
Watershed Weighting of Export Coefficients to Map Critical Phosphorus Loading Areas
NASA Astrophysics Data System (ADS)
Endreny, Theodore A.; Wood, Eric F.
2003-02-01
The Export Coefficient model (ECM) is capable of generating reasonable estimates of annual phosphorus loading simply from a watershed's land cover data and export coefficient values (ECVs). In its current form, the ECM assumes that ECVs are homogeneous within each land cover type, yet basic nutrient runoff and hydrological theory suggests that runoff rates have spatial patterns controlled by loading and filtering along the flow paths from the upslope contributing area and downslope dispersal area. Using a geographic information system (GIS) raster, or pixel, modeling format, these contributing area and dispersal area (CADA) controls were derived from the perspective of each individual watershed pixel to weight the otherwise homogeneous ECVs for phosphorus. Although the CADA-ECM predicts export coefficient spatial variation for a single land use type, the lumped basin load is unaffected by weighting. After CADA weighting, a map of the new ECVs addressed the three fundamental criteria for targeting critical pollutant loading areas: (1) the presence of the pollutant, (2) the likelihood for runoff to carry the pollutant offsite, and (3) the likelihood that buffers will trap nutrients prior to their runoff into the receiving water body. These spatially distributed maps of the most important pollutant management areas were used within New York's West Branch Delaware River watershed to demonstrate how the CADA-ECM could be applied in targeting phosphorus critical loading areas.
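The basic export-coefficient bookkeeping, plus a load-preserving pixel weighting in the spirit of the CADA-ECM, can be sketched as follows. The land-cover codes and export coefficient values here are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Hypothetical export coefficients (kg P / ha / yr) per land-cover code.
ECV = {1: 0.1, 2: 1.5, 3: 2.0}   # e.g. 1=forest, 2=pasture, 3=cropland

def ecm_load(landcover, pixel_area_ha, weight=None):
    """Annual P load: sum over pixels of weight * ECV[cover] * area.
    With weight=None (homogeneous ECVs) this is the classical ECM."""
    lc = np.asarray(landcover)
    ev = np.vectorize(ECV.get)(lc).astype(float)
    if weight is None:
        weight = np.ones_like(ev)
    return float(np.sum(np.asarray(weight, float) * ev * pixel_area_ha))

def normalize_weights(raw_w, landcover):
    """Rescale raw pixel weights so each land-cover class has mean 1,
    redistributing load within a class while leaving the lumped basin
    load unchanged -- the property noted for the CADA-ECM."""
    w = np.asarray(raw_w, float).copy()
    lc = np.asarray(landcover)
    for k in np.unique(lc):
        m = lc == k
        w[m] /= w[m].mean()
    return w
```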
Mapping Human Cortical Areas in vivo Based on Myelin Content as Revealed by T1- and T2-weighted MRI
Glasser, Matthew F.; Van Essen, David C.
2011-01-01
Non-invasively mapping the layout of cortical areas in humans is a continuing challenge for neuroscience. We present a new method of mapping cortical areas based on myelin content as revealed by T1-weighted (T1w) and T2-weighted (T2w) MRI. The method is generalizable across different 3T scanners and pulse sequences. We use the ratio of T1w/T2w image intensities to eliminate the MR-related image intensity bias and enhance the contrast to noise ratio for myelin. Data from each subject was mapped to the cortical surface and aligned across individuals using surface-based registration. The spatial gradient of the group average myelin map provides an observer-independent measure of sharp transitions in myelin content across the surface—i.e. putative cortical areal borders. We found excellent agreement between the gradients of the myelin maps and the gradients of published probabilistic cytoarchitectonically defined cortical areas that were registered to the same surface-based atlas. For other cortical regions, we used published anatomical and functional information to make putative identifications of dozens of cortical areas or candidate areas. In general, primary and early unimodal association cortices are heavily myelinated and higher, multi-modal, association cortices are more lightly myelinated, but there are notable exceptions in the literature that are confirmed by our results. The overall pattern in the myelin maps also has important correlations with the developmental onset of subcortical white matter myelination, evolutionary cortical areal expansion in humans compared to macaques, postnatal cortical expansion in humans, and maps of neuronal density in non-human primates. PMID:21832190
NASA Technical Reports Server (NTRS)
Schols, J. L.; Eloranta, E. W.
1992-01-01
Area-averaged horizontal wind measurements are derived from the motion of spatial inhomogeneities in aerosol backscattering observed with a volume-imaging lidar. Spatial averaging provides high precision, reducing sample variations of wind measurements well below the level of turbulent fluctuations, even under conditions of very light mean winds and strong convection or under the difficult conditions represented by roll convection. Wind velocities are measured using the two-dimensional spatial cross correlation computed between successive horizontal plane maps of aerosol backscattering, assembled from three-dimensional lidar scans. Prior to calculation of the correlation function, three crucial steps are performed: (1) the scans are corrected for image distortion by the wind during a finite scan time; (2) a temporal high pass median filtering is applied to eliminate structures that do not move with the wind; and (3) a histogram equalization is employed to reduce biases to the brightest features.
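The correlation step of the method above can be illustrated with an FFT-based circular cross-correlation: the displacement that maximizes the correlation between two successive backscatter maps, divided by the time between them, gives the area-averaged wind. This is a sketch of the correlation step only, under periodic-boundary assumptions, and omits the distortion correction, median filtering, and histogram equalization the abstract describes:

```python
import numpy as np

def wind_from_frames(frame1, frame2, pixel_size_m, dt_s):
    """Horizontal wind (vx, vy) from the displacement maximizing the 2-D
    circular cross-correlation of two successive aerosol-backscatter maps."""
    f1 = frame1 - frame1.mean()
    f2 = frame2 - frame2.mean()
    # Cross-correlation theorem: corr[k] = sum_n f1[n] * f2[n + k].
    corr = np.fft.ifft2(np.fft.fft2(f1).conj() * np.fft.fft2(f2)).real
    iy, ix = np.unravel_index(np.argmax(corr), corr.shape)
    ny, nx = corr.shape
    dy = iy if iy <= ny // 2 else iy - ny   # map FFT index to signed shift
    dx = ix if ix <= nx // 2 else ix - nx
    return dx * pixel_size_m / dt_s, dy * pixel_size_m / dt_s
```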
Boosting with Averaged Weight Vectors
NASA Technical Reports Server (NTRS)
Oza, Nikunj C.; Clancy, Daniel (Technical Monitor)
2002-01-01
AdaBoost is a well-known ensemble learning algorithm that constructs its constituent or base models in sequence. A key step in AdaBoost is constructing a distribution over the training examples to create each base model. This distribution, represented as a vector, is constructed to be orthogonal to the vector of mistakes made by the previous base model in the sequence. The idea is to make the next base model's errors uncorrelated with those of the previous model. Some researchers have pointed out the intuition that it is probably better to construct a distribution that is orthogonal to the mistake vectors of all the previous base models, but that this is not always possible. We present an algorithm that attempts to come as close as possible to this goal in an efficient manner. We present experimental results demonstrating significant improvement over AdaBoost and the Totally Corrective boosting algorithm, which also attempts to satisfy this goal.
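The orthogonality property described above follows from the standard AdaBoost reweighting step: after the update, the latest base model's weighted error is exactly 1/2, so the new distribution is orthogonal to that model's mistake vector. The sketch below shows the classical update (not the averaged-weight-vector algorithm this paper proposes):

```python
import numpy as np

def adaboost_reweight(D, correct):
    """One AdaBoost reweighting step. `correct` is a boolean vector of
    which training examples the latest base model classified correctly.
    After the update, that model's weighted error is exactly 1/2."""
    D = np.asarray(D, float)
    eps = D[~correct].sum()                       # weighted error
    alpha = 0.5 * np.log((1 - eps) / eps)
    D_new = D * np.exp(np.where(correct, -alpha, alpha))
    return D_new / D_new.sum()                    # renormalize to a distribution
```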
Meyer, Philip D.; Ye, Ming; Neuman, Shlomo P.; Rockhold, Mark L.
2008-06-01
A methodology to systematically and quantitatively assess model predictive uncertainty was applied to saturated zone uranium transport at the 300 Area of the U.S. Department of Energy Hanford Site in Washington State, USA. The methodology extends Maximum Likelihood Bayesian Model Averaging (MLBMA) to account jointly for uncertainties due to the conceptual-mathematical basis of models, model parameters, and the scenarios to which the models are applied. Conceptual uncertainty was represented by postulating four alternative models of hydrogeology and uranium adsorption. Parameter uncertainties were represented by estimation covariances resulting from the joint calibration of each model to observed heads and uranium concentration. Posterior model probability was dominated by one model. Results demonstrated the role of model complexity and fidelity to observed system behavior in determining model probabilities, as well as the impact of prior information. Two scenarios representing alternative future behavior of the Columbia River adjacent to the site were considered. Predictive simulations carried out with the calibrated models illustrated the computation of model- and scenario-averaged predictions and how results can be displayed to clearly indicate the individual contributions to predictive uncertainty of the model, parameter, and scenario uncertainties. The application demonstrated the practicability of applying a comprehensive uncertainty assessment to large-scale, detailed groundwater flow and transport modelling.
A comparison of spatial smoothing methods for small area estimation with sampling weights
Mercer, Laina; Wakefield, Jon; Chen, Cici; Lumley, Thomas
2014-01-01
Small area estimation (SAE) is an important endeavor in many fields and is used for resource allocation by both public health and government organizations. Often, complex surveys are carried out within areas, in which case it is common for the data to consist only of the response of interest and an associated sampling weight, reflecting the design. While it is appealing to use spatial smoothing models, and many approaches have been suggested for this endeavor, it is rare for spatial models to incorporate the weighting scheme, leaving the analysis potentially subject to bias. To examine the properties of various approaches to estimation we carry out a simulation study, looking at bias due to both non-response and non-random sampling. We also carry out SAE of smoking prevalence in Washington State, at the zip code level, using data from the 2006 Behavioral Risk Factor Surveillance System. The computation times for the methods we compare are short, and all approaches are implemented in R using currently available packages. PMID:24959396
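The design-weighted direct estimate that underlies the comparison above is the ratio (Hajek-type) estimator: weight each binary response by its sampling weight before averaging. A minimal sketch (not the smoothing models themselves):

```python
import numpy as np

def weighted_prevalence(y, w):
    """Design-weighted (Hajek) prevalence estimate from binary responses
    y and sampling weights w. Ignoring the weights biases the estimate
    whenever the sampling design is related to the outcome."""
    y = np.asarray(y, float)
    w = np.asarray(w, float)
    return float(np.sum(w * y) / np.sum(w))
```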
NASA Astrophysics Data System (ADS)
Meyer, P. D.; Ye, M.; Neuman, S. P.; Rockhold, M. L.
2006-12-01
Applications of groundwater flow and transport models to regulatory and design problems have illustrated the potential importance of accounting for uncertainties in model conceptualization and structure as well as model parameters. One approach to this issue is to characterize model uncertainty using a discrete set of alternatives and assess the prediction uncertainty arising from the joint impact of model and parameter uncertainty. We demonstrate the application of this approach to the modeling of groundwater flow and uranium transport at the 300 Area of the Dept. of Energy Hanford Site in Washington State using the recently developed Maximum Likelihood Bayesian Model Averaging (MLBMA) method. Model uncertainty was included using alternative representations of the hydrogeologic units at the 300 Area and alternative representations of uranium adsorption. Parameter uncertainties for each model were based on the estimated parameter covariances resulting from the joint calibration of each model alternative to observations of hydraulic head and uranium concentration. The relative plausibility of each calibrated model was expressed in terms of a posterior model probability computed on the basis of Kashyap's information criterion KIC. Results of the application show that model uncertainty may dominate parameter uncertainty for the set of alternative models considered. We discuss the sensitivity of model probabilities to differences in KIC values and examine the effect of particular calibration data on model probabilities. In addition, we discuss the advantages of KIC over other model discrimination criteria for estimating model probabilities.
Relationship of T2-Weighted MRI Myocardial Hyperintensity and the Ischemic Area-At-Risk
Kim, Han W.; Van Assche, Lowie; Jennings, Robert B.; Wince, W. Benjamin; Jensen, Christoph J.; Rehwald, Wolfgang G.; Wendell, David C.; Bhatti, Lubna; Spatz, Deneen M.; Parker, Michele A.; Jenista, Elizabeth R.; Klem, Igor; Crowley, Anna Lisa C.; Chen, Enn-Ling; Judd, Robert M.
2015-01-01
Rationale: After acute myocardial infarction (MI), delineating the area-at-risk (AAR) is crucial for measuring how much, if any, ischemic myocardium has been salvaged. T2-weighted MRI is promoted as an excellent method to delineate the AAR. However, the evidence supporting the validity of this method to measure the AAR is indirect, and it has never been validated with direct anatomic measurements. Objective: To determine whether T2-weighted MRI delineates the AAR. Methods and Results: Twenty-one canines and 24 patients with acute MI were studied. We compared bright-blood and black-blood T2-weighted MRI with images of the AAR and MI by histopathology in canines and with MI by in vivo delayed-enhancement MRI in canines and patients. Abnormal regions on MRI and pathology were compared by (a) quantitative measurement of the transmural-extent of the abnormality and (b) picture matching of contours. We found no relationship between the transmural-extent of T2-hyperintense regions and that of the AAR (bright-blood-T2: r=0.06, P=0.69; black-blood-T2: r=0.01, P=0.97). Instead, there was a strong correlation with that of infarction (bright-blood-T2: r=0.94, P<0.0001; black-blood-T2: r=0.95, P<0.0001). Additionally, contour analysis demonstrated a fingerprint match of T2-hyperintense regions with the intricate contour of infarcted regions by delayed-enhancement MRI. Similarly, in patients there was a close correspondence between contours of T2-hyperintense and infarcted regions, and the transmural-extent of these regions were highly correlated (bright-blood-T2: r=0.82, P<0.0001; black-blood-T2: r=0.83, P<0.0001). Conclusion: T2-weighted MRI does not depict the AAR. Accordingly, T2-weighted MRI should not be used to measure myocardial salvage, either to inform patient management decisions or to evaluate novel therapies for acute MI. PMID:25972514
NASA Astrophysics Data System (ADS)
Hu, X.; Waller, L.; Liu, Y.
2010-12-01
Using remote sensing data to study the characteristics of PM2.5 (particles smaller than 2.5 µm in size), especially in areas not covered by ground monitoring networks, has attracted much interest due to the multiple health outcomes related to PM2.5 exposure. To accurately predict PM2.5 exposure, successfully modeling the relationship between PM2.5 concentration and aerosol optical thickness (AOT), as well as other environmental parameters, is crucial. Most currently reported models are global methods that do not consider local variations, which might introduce significant errors into prediction results. In this paper, a geographically weighted regression (GWR) model was developed to model the relationship among PM2.5, AOT, and meteorological parameters such as mixing height, surface air temperature, relative humidity, and surface wind speed. GWR estimates local rather than global parameters as a function of geographical location, with all coefficients varying geographically to capture spatial variation. The study area is centered on the Atlanta metro area, and data from 2001 to 2007 were collected from various sources. After developing the model, cross-validation techniques were implemented to assess its accuracy. The results indicated that GWR, due to its ability to explain local variations, has the potential to generate a better fit and can provide a promising alternative for PM2.5 exposure estimation.
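The core GWR computation is a weighted least-squares fit at each target location, with weights from a spatial kernel of the distances to the observations. The sketch below uses a Gaussian kernel and a fixed bandwidth; it is a minimal illustration of the GWR idea, not the authors' model:

```python
import numpy as np

def gwr_fit(coords, X, y, target, bandwidth):
    """Local regression coefficients (intercept first) at one target
    location: Gaussian kernel weights by distance, then weighted least
    squares via sqrt-weighted lstsq."""
    d = np.linalg.norm(np.asarray(coords, float) - target, axis=1)
    sw = np.sqrt(np.exp(-0.5 * (d / bandwidth) ** 2))  # sqrt of kernel weights
    Xd = np.column_stack([np.ones(len(y)), X])          # design with intercept
    beta, *_ = np.linalg.lstsq(Xd * sw[:, None], np.asarray(y, float) * sw,
                               rcond=None)
    return beta
```

Repeating the fit over a grid of target locations yields the geographically varying coefficient surfaces; bandwidth selection (e.g. by cross-validation, as in the paper) controls how local the fits are.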
On the Possibility of Testing Miocene Clay from Cracow Area using Weight Sounding Test (WST)
NASA Astrophysics Data System (ADS)
Olesiak, Sebastian
2014-03-01
Polish standards concerning field investigation with the use of a Weight Sounding Test (WST) probe give interpretation of results for non-cohesive soils only. The lack of such interpretation for cohesive soils excludes this testing equipment from use. This paper presents the results of geotechnical site investigation and laboratory tests performed for Miocene clays in Carpathian Foredeep in the Cracow area. Based on the analysis of the results a correlation was determined between the characteristic values for the WST probe (number of half-turns NWST) and the selected properties of Miocene clays. The article is an attempt to create a complete interpretation of test results obtained for cohesive soil with WST equipment.
Borah, Madhur; Baruah, Rupali
2015-01-01
Introduction: Low birth weight (LBW) infants suffer more episodes of common childhood diseases, and their spells of illness are more prolonged and serious. Longitudinal studies are useful for observing the health and disease pattern of LBW babies over time. Aims: This study was carried out in rural areas of Assam to assess the morbidity pattern of LBW babies during their first 6 months of life and to compare them with normal birth weight (NBW) counterparts. Materials and Methods: A total of 30 LBW babies (0-2 months) and an equal number of NBW babies from three subcenters under Boko Primary Health Centre of Assam were followed up at monthly intervals until 6 months of age in a prospective fashion. Results: More than two-thirds of the LBW babies (77%) suffered from moderate or severe under-nutrition during the follow-up. Acute respiratory tract infection (ARI) was the predominant morbidity suffered by LBW infants. The other illnesses suffered by the LBW infants during the follow-up were diarrhea, skin disorders, fever, and ear disorders. LBW infants had more episodes of hospitalization (65%) than the NBW infants (35%). The incidence rate of episodes of morbidity was higher among those LBW infants who remained underweight at 6 months of age (49.3 per 100 infant-months) and those who were not exclusively breastfed until 6 months of age (66.7 per 100 infant-months). Conclusion: The study revealed that during the follow-up, the incidence of morbidities was higher among the LBW babies than among the NBW babies. It was also observed that ARI was the predominant morbidity in the LBW infants during the first 6 months of age.
John W. Tukey
1948-01-01
The greatest fractional increase in variance when a weighted mean is calculated with approximate weights is, quite closely, the square of the largest fractional error in an individual weight. The average increase will be about one-half this amount. The use of weights accurate to two significant figures, or even to the nearest number of the form: 10, 11, 12, …
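Tukey's rule is easy to check numerically. In the sketch below (the variance values are arbitrary assumptions), perturbing one inverse-variance weight by 10% increases the variance of the weighted mean by at most about (0.10)^2 = 1%:

```python
import numpy as np

def var_of_weighted_mean(a, sigma2):
    """Variance of sum(a_i x_i) / sum(a_i) for independent x_i with
    variances sigma2_i."""
    a = np.asarray(a, float)
    return float(np.sum(a**2 * np.asarray(sigma2, float)) / np.sum(a)**2)

sigma2 = np.array([1.0, 4.0, 9.0])   # illustrative observation variances
w = 1.0 / sigma2                      # optimal (inverse-variance) weights
w_approx = w.copy()
w_approx[0] *= 1.10                   # 10% fractional error in one weight
increase = (var_of_weighted_mean(w_approx, sigma2)
            / var_of_weighted_mean(w, sigma2) - 1.0)
# `increase` stays well under the 1% bound Tukey's rule predicts.
```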
Kira, S. (Okayama Univ. Medical School, Japan); Sakano, M.; Nogami, Y. (Okayama Univ. of Science, Japan)
1997-06-01
There have been several different methods of measurement for waterborne pollutants. The most frequently utilized methods for sample preparation have been liquid-to-liquid and liquid-to-solid partition. In these methods, pollutants such as polycyclic aromatic hydrocarbons (PAHs) are extracted into organic solvents directly from the sample water, or the pollutants are first adsorbed to a solid-phase adsorbent and subsequently eluted with organic solvents. In either case, the measured level represents conditions at the time of sampling, namely a spot sample. On the other hand, a time-weighted average concentration (TWA) has been used as a determinant to evaluate the atmospheric environment. Estimating the TWA of pollutants in water, however, has been laborious, since frequent spot-sampling of water is required at a field site. Furthermore, no data on the TWA of PAHs in field water have been published, although the TWA of pollutants could be an important factor in chronic effects on biota. In our previous report, we set up a continuous sampling device, using a Sep-Pak C18 cartridge and a peristaltic pump, which enabled us to measure the TWA of benzo(a)pyrene in an experimental water system. The present paper describes a portable sampling device that can continuously sample PAHs in water. We evaluated the basic characteristics of the sampling device in the laboratory and optimized chromatographic detection of 4 PAHs: fluoranthene, perylene, benzo(b)fluoranthene (BbF), and benzo(a)pyrene (BaP). After these procedures, we brought the sampling device to field water sites to verify its performance. The levels of PAHs were calculated as TWAs over a 24-hr period at each site. 9 refs., 1 fig., 1 tab.
Baimatova, Nassiba; Koziel, Jacek A; Kenessov, Bulat
2015-05-11
A new and simple method for benzene, toluene, ethylbenzene and o-xylene (BTEX) quantification in vehicle exhaust was developed based on diffusion-controlled extraction onto a retracted solid-phase microextraction (SPME) fiber coating. The rationale was to develop a method based on existing and proven SPME technology that is feasible for field adaptation in developing countries. Passive sampling with the SPME fiber retracted into the needle extracted nearly two orders of magnitude less mass (n) compared with an exposed fiber (outside the needle), and sampling was in a time-weighted averaging (TWA) mode. Both the sampling time (t) and fiber retraction depth (Z) were adjusted to quantify a wider range of gas concentrations (Cgas). Extraction and quantification are conducted in a non-equilibrium mode. Effects of Cgas, t, Z and temperature (T) were tested. In addition, the contribution of n extracted by the metallic surfaces of the needle assembly without the SPME coating was studied, as were the effects of sample storage time on loss of n. Retracted TWA-SPME extractions followed the theoretical model: extracted n of BTEX was proportional to Cgas, t, the gas-phase diffusion coefficient (Dg) and T, and inversely proportional to Z. Method detection limits were 1.8, 2.7, 2.1 and 5.2 mg m(-3) (0.51, 0.83, 0.66 and 1.62 ppm) for BTEX, respectively. The contribution of extraction onto metallic surfaces was reproducible and influenced by Cgas and t, and less so by T and Z. The new method was applied to measure BTEX in the exhaust gas of a 1995 Ford Crown Victoria and compared with a whole-gas direct-injection method. PMID:25911428
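The theoretical model for retracted-fiber TWA-SPME is conventionally Fick's first law of diffusion through the needle opening, n = Dg · A · Cgas · t / Z, which can be inverted for the TWA concentration. The sketch below assumes that standard model; the numeric values in the test are illustrative, not the paper's calibration data:

```python
def twa_concentration(n_extracted, Z, D_g, A, t):
    """Time-weighted average gas concentration from the mass extracted by
    a retracted SPME fiber, inverting the Fick's-first-law model
    n = D_g * A * C * t / Z. Consistent units are assumed, e.g. n in ng,
    Z in cm, D_g in cm^2/s, A in cm^2, t in s gives C in ng/cm^3."""
    return n_extracted * Z / (D_g * A * t)
```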
White, R R; Capper, J L
2013-12-01
The objective of this study was to assess environmental impact, economic viability, and social acceptability of 3 beef production systems with differing levels of efficiency. A deterministic model of U.S. beef production was used to predict the number of animals required to produce 1 × 10(9) kg HCW beef. Three production treatments were compared: 1 representing average U.S. production (control), 1 with a 15% increase in ADG, and 1 with a 15% increase in finishing weight (FW). For each treatment, various socioeconomic scenarios were compared to account for uncertainty in producer and consumer behavior. Environmental impact metrics included feed consumption, land use, water use, greenhouse gas emissions (GHGe), and N and P excretion. Feed cost, animal purchase cost, animal sales revenue, and income over costs (IOVC) were used as metrics of economic viability. Willingness to pay (WTP) was used to identify improvements or reductions in social acceptability. When ADG improved, feedstuff consumption, land use, and water use decreased by 6.4%, 3.2%, and 12.3%, respectively, compared with the control. Carbon footprint decreased 11.7% and N and P excretion were reduced by 4% and 13.8%, respectively. When FW improved, decreases were seen in feedstuff consumption (12.1%), water use (9.2%), and land use (15.5%); total GHGe decreased 14.7%; and N and P excretion decreased by 10.1% and 17.2%, compared with the control. Changes in IOVC were dependent on socioeconomic scenario. When the ADG scenario was compared with the control, changes in sector profitability ranged from 51 to 117% (cow-calf), -38 to 157% (stocker), and 37 to 134% (feedlot). When improved FW was compared, changes in cow-calf profit ranged from 67% to 143%, stocker profit ranged from -41% to 155% and feedlot profit ranged from 37% to 136%. When WTP was based on marketing beef being more efficiently produced, WTP improved by 10%; thus, social acceptability increased. 
When marketing was based on production efficiency and consumer knowledge of growth-enhancing technology use, WTP decreased by 12%, leading to a decrease in social acceptability. Results demonstrated that improved efficiency also improved environmental impact, but impacts on economic viability and social acceptability are highly dependent on consumer and producer behavioral responses to efficiency improvements. PMID:24146151
Krpálková, L; Cabrera, V E; Kvapilík, J; Burdych, J; Crump, P
2014-10-01
The objective of this study was to evaluate the associations of variable intensity in rearing dairy heifers on 33 commercial dairy herds, including 23,008 cows and 18,139 heifers, with age at first calving (AFC), average daily weight gain (ADG), and milk yield (MY) level on reproduction traits and profitability. Milk yield during the production period was analyzed relative to reproduction and economic parameters. Data were collected during a 1-yr period (2011). The farms were located in 12 regions in the Czech Republic. The results show that those herds with more intensive rearing periods had lower conception rates among heifers at first and overall services. The differences in those conception rates between the group with the greatest ADG (≥0.800 kg/d) and the group with the least ADG (≤0.699 kg/d) were approximately 10 percentage points in favor of the least ADG. All the evaluated reproduction traits differed between AFC groups. Conception at first and overall services (cows) was greatest in herds with AFC ≥800 d. The shortest days open (105 d) and calving interval (396 d) were found in the middle AFC group (799 to 750 d). The highest number of completed lactations (2.67) was observed in the group with latest AFC (≥800 d). The earliest AFC group (≤749 d) was characterized by the highest depreciation costs per cow at 8,275 Czech crowns (US$414), and the highest culling rate for cows of 41%. The most profitable rearing approach was reflected in the middle AFC (799 to 750 d) and middle ADG (0.799 to 0.700 kg) groups. The highest MY (≥8,500 kg) occurred with the earliest AFC of 780 d. Higher MY led to lower conception rates in cows, but the highest MY group also had the shortest days open (106 d) and a calving interval of 386 d. The same MY group had the highest cow depreciation costs, net profit, and profitability without subsidies of 2.67%. 
We conclude that achieving low AFC will not always be the most profitable approach, which will depend upon farm-specific herd management. The MY is a very important factor for dairy farm profitability. The group of farms having the highest MY achieved the highest net profit despite having greater fertility problems. PMID:25064657
Ito, Tadashi; Sakai, Yoshihito; Nakamura, Eishi; Yamazaki, Kazunori; Yamada, Ayaka; Sato, Noritaka; Morita, Yoshifumi
2015-01-01
[Purpose] The purpose of this study was to examine the relationship between the paraspinal muscle cross-sectional area and the relative proprioceptive weighting ratio during local vibratory stimulation of older persons with lumbar spondylosis in an upright position. [Subjects] In all, 74 older persons hospitalized for lumbar spondylosis were included. [Methods] We measured the relative proprioceptive weighting ratio of postural sway using a Wii board while vibratory stimulations of 30, 60, or 240 Hz were applied to the subjects’ paraspinal or gastrocnemius muscles. Back strength, abdominal muscle strength, and erector spinae muscle (L1/L2, L4/L5) and lumbar multifidus (L1/L2, L4/L5) cross-sectional areas were evaluated. [Results] The erector spinae muscle (L1/L2) cross-sectional area was associated with the relative proprioceptive weighting ratio during 60 Hz stimulation. [Conclusion] These findings show that the relative proprioceptive weighting ratio compared to the erector spinae muscle (L1/L2) cross-sectional area under 60 Hz proprioceptive stimulation might be a good indicator of trunk proprioceptive sensitivity. PMID:26311962
Roberts, Graham J; McDonald, Fraser; Neil, Monica; Lucas, Victoria S
2014-08-01
The mathematical principle of weighting averages to determine the most appropriate numerical outcome is well established in economic and social studies. It has seen little application in forensic dentistry. This study re-evaluated the data from a previous study of age assessment at the 10 year threshold. A semiautomatic process of weighting averages by n-td, x-tds, sd-tds, se-tds, 1/sd-tds, 1/se-tds was prepared in an Excel worksheet and the different weighted mean values reported. In addition the Fixed Effects and Random Effects models for Meta-Analysis were used and applied to the same data sets. In conclusion it has been shown that the most accurate age estimation method is to use the Random Effects Model for the mathematical procedures. PMID:25066175
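The inverse-variance weighted mean used by the fixed-effects meta-analytic model mentioned above can be sketched as follows (a generic illustration of the weighting machinery, not the study's worksheet):

```python
import numpy as np

def fixed_effects_mean(estimates, variances):
    """Fixed-effects (inverse-variance weighted) pooled mean and its
    standard error: each estimate is weighted by 1/variance, so more
    precise estimates contribute more to the pooled value."""
    est = np.asarray(estimates, float)
    w = 1.0 / np.asarray(variances, float)
    pooled = float(np.sum(w * est) / np.sum(w))
    se = float(np.sqrt(1.0 / np.sum(w)))
    return pooled, se
```

A random-effects model, which the study found most accurate for age estimation, adds a between-study variance component to each weight before pooling.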
ERIC Educational Resources Information Center
Wang, Wen-Chung; Su, Ya-Hui
2004-01-01
In this study we investigated the effects of the average signed area (ASA) between the item characteristic curves of the reference and focal groups and three test purification procedures on the uniform differential item functioning (DIF) detection via the Mantel-Haenszel (M-H) method through Monte Carlo simulations. The results showed that ASA,…
Tomoaki Nagaoka; Soichi Watanabe; Kiyoko Sakurai; Etsuo Kunieda; Satoshi Watanabe; Masao Taki; Yukio Yamanaka
2004-01-01
With advances in computer performance, the use of high-resolution voxel models of the entire human body has become more frequent in numerical dosimetries of electromagnetic waves. Using magnetic resonance imaging, we have developed realistic high-resolution whole-body voxel models for Japanese adult males and females of average height and weight. The developed models consist of cubic voxels of 2 mm on
Claude Tomberg
1999-01-01
The mid-dorsolateral prefrontal area 46 has working memory functions for putting current cognitive processing into context and for updating relevant information on a trial-by-trial basis. Using non-averaged human brain responses to a target finger stimulus attended by the subject, we identified the cognitive prefrontal N140 electrogenesis with the Z method which numerically assesses the detailed consistency between scalp topographies of
Code of Federal Regulations, 2010 CFR
2010-10-01
...Specifications of ABCs, OYs, and HGs, by Management Area (weights in metric tons) 1a Table 1a to Part 660, Subpart C Wildlife and...Specifications of ABCs, OYs, and HGs, by Management Area (weights in metric tons) ER01OC10.000 ER01OC10.001...
Code of Federal Regulations, 2010 CFR
2010-10-01
...Specifications of ABCs, OYs, and HGs, by Management Area (weights in metric tons) 2a Table 2a to Part 660, Subpart C Wildlife and...Specifications of ABCs, OYs, and HGs, by Management Area (weights in metric tons) ER01OC10.006 ER01OC10.007...
NASA Astrophysics Data System (ADS)
Liu, X.; Zhang, J. X.; Zhao, Z.; Ma, A. D.
2015-06-01
Synthetic aperture radar is used ever more widely in remote sensing because of its all-time, all-weather operation, and feature extraction from high-resolution SAR imagery has become a topic of keen interest. In particular, as the resolution of airborne SAR imagery continues to improve, image texture information becomes more abundant, which is of great significance for classification and extraction. In this paper, a novel method for extracting built-up areas using both statistical and structural features is proposed, based on the texture characteristics of built-up areas. First, statistical and structural texture features are extracted with the classical gray-level co-occurrence matrix and the variogram function, respectively, taking direction information into account. Next, feature weights are calculated from the Bhattacharyya distance, and all features are combined by weighted fusion. Finally, the fused image is classified with the K-means method and built-up areas are extracted in a post-classification step. The proposed method was tested on domestic airborne P-band polarimetric SAR images, together with two comparison experiments based on statistical texture alone and structural texture alone. In addition to qualitative analysis, quantitative analysis against manually delineated built-up areas gives a detection rate above 90% in the relatively simple test area, and a detection rate higher than those of the other two methods in the relatively complex test area. The results for the study area show that this method can effectively and accurately extract built-up areas from high-resolution airborne SAR imagery.
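The Bhattacharyya-distance feature weighting step described in the abstract can be sketched as follows. This is a minimal illustration assuming one Gaussian per class per texture feature; the function names, array layout, and normalisation are assumptions for illustration, not the authors' code:

```python
import numpy as np

def bhattacharyya_gaussian(mu1, s1, mu2, s2):
    """Bhattacharyya distance between two 1-D Gaussians."""
    v1, v2 = s1 ** 2, s2 ** 2
    return (0.25 * np.log(0.25 * (v1 / v2 + v2 / v1 + 2))
            + 0.25 * (mu1 - mu2) ** 2 / (v1 + v2))

def feature_weights(built_up, background):
    """Weight each texture feature by its class separability.
    built_up, background: (n_samples, n_features) arrays of feature values."""
    d = np.array([
        bhattacharyya_gaussian(built_up[:, j].mean(), built_up[:, j].std(),
                               background[:, j].mean(), background[:, j].std())
        for j in range(built_up.shape[1])
    ])
    return d / d.sum()  # normalise so the weights sum to 1
```

Features that separate built-up pixels from background well receive larger weights before the weighted fusion step.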
Arshad, F; Nor, I M; Ali, R M; Hamzah, F
1996-06-01
Diet is one of the major factors contributing to the development of obesity, apart from heredity and energy balance. The objective of this cross-sectional study was to assess energy, carbohydrate, protein and fat intakes in relation to bodyweight status among government office workers in Kuala Lumpur. A total of 185 Malay men and 196 Malay women aged 18 and above were randomly selected as the study sample. Height and weight were taken to determine body mass index (BMI). The dietary profile was obtained using 24-hour dietary recalls and food frequency methods, and was analysed to determine average nutrient intake per day. Other information was ascertained from tested and coded questionnaires. The subjects were categorised into three bodyweight status groups, namely underweight (BMI < 20 kg/m2), normal weight (BMI 20-25 kg/m2) and obese (BMI > 25 kg/m2). The prevalence of obesity was 37.8%. The study showed that the mean energy intake of the respondents was 1709 ± 637 kcal/day. The energy composition comprised 55.7 ± 7.6% carbohydrates, 29.7 ± 21.7% fat and 15.6 ± 3.8% protein. There was no significant difference in diet composition among the three groups. The findings indicate that normal weight and overweight individuals had a lower intake of calories and carbohydrates than the underweight individuals (p < 0.05). However, there were no significant differences in fat intakes. PMID:24394516
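As a small worked example of the BMI grouping used in the study (BMI = weight / height²; the function name is hypothetical, the cut-offs follow the abstract):

```python
def bmi_category(weight_kg, height_m):
    """Classify bodyweight status by BMI = weight / height**2 (kg/m2),
    using the study's cut-offs: <20 underweight, 20-25 normal, >25 obese."""
    bmi = weight_kg / height_m ** 2
    if bmi < 20:
        return "underweight"
    if bmi <= 25:
        return "normal"
    return "obese"
```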
NASA Astrophysics Data System (ADS)
Fumal, T. E.; Heingartner, G. F.; Dawson, T. E.; Flowers, R.; Hamilton, J. C.; Kessler, J.; Reidy, L. M.; Samrad, L.; Seitz, G. G.; Southon, J.
2003-12-01
Paleoseismic excavations at Mill Canyon and Arano Flat, two sites 0.6 km apart on the San Andreas fault near Watsonville, California, provide the first high-resolution chronology of large earthquakes on the Santa Cruz Mountains segment of the fault. At Mill Canyon, a 2-m-wide zone of faulting has deformed latest Holocene deposits consisting of well-sorted sand and gravel interbedded with poorly sorted, commonly organic-rich debris flows ponded behind a small shutter ridge. We found evidence for the 1906 San Francisco earthquake and three additional ground-rupturing earthquakes since about 1500 A.D. Radiocarbon ages and pollen analyses indicate that the penultimate earthquake at this site occurred about 1700-1790 A.D. This indicates that the 1838 San Francisco peninsula earthquake did not rupture this portion of the fault. At Arano Flat, faulting is expressed as a 1- to 2-m-wide zone that deforms alluvial fan deposits overlying well-bedded overbank deposits. We found evidence at this location for at least nine earthquakes since about 1000 A.D. We constrain earthquake ages using a chronological model incorporating AMS radiocarbon ages of 113 samples of detrital charcoal from 19 layers and stratigraphic ordering. The mean recurrence interval is about 105 years, while individual intervals range from about 10-310 years. Two offset features at Arano Flat provide slip-per-event and slip-rate data. A partially buried channel containing bottles from 1887-1890 is offset 3.5 m. Given that we found no evidence at either site for the 1890 M 6.3 earthquake, which produced surface rupture on the San Andreas fault southeast of Pajaro Gap, this entire slip may have occurred during the 1906 earthquake. This value is unexpectedly high compared to the geodetic estimate of 2.3-3.1 m for the slip at depth (Thatcher et al., 1997) or the geologic estimate of 1.7-1.8 m of surface slip at Wright's tunnel (Prentice and Ponti, 1997), about 33 km northwest of Arano Flat.
A fold that formed during two earthquakes, most recently about 1400-1470 A.D., has been offset about 10.5 m during the past five earthquakes. This yields a slip rate of 22.5 +/- 2 mm/yr, significantly higher than values previously used for this segment. Average slip for the four earthquakes prior to 1906 is 1.2-1.8 m, indicating M ~7. Thus the mean recurrence interval is half the value used by the Working Group on California Earthquake Probabilities (WG 03) for earthquakes of this magnitude on the Santa Cruz Mountains segment.
Chun K. Kim; Naresh C. Gupta; B. Chandramouli; Abass Alavi
Standardized uptake values (SUVs) are widely used to measure 18F-fluorodeoxyglucose (FDG) uptake in various tumors. It has been reported that normalization of FDG uptake for patient body weight (SUVbw) overestimates FDG uptake in heavy patients, as their fraction of body fat (with low FDG uptake) is often increased. The objective of this study was to determine if "normalization of FDG uptake for the body surface area" (SUVbsa) is independent of the patient's body size and is more reliable than SUVbw. Methods: FDG-PET images were acquired on 44 patients (body weight range: 45-
Ghosh, Debarchana; Manson, Steven M.
2013-01-01
In this paper, we present a hybrid approach, robust principal component geographically weighted regression (RPCGWR), in examining urbanization as a function of both extant urban land use and the effect of social and environmental factors in the Twin Cities Metropolitan Area (TCMA) of Minnesota. We used remotely sensed data to treat urbanization via the proxy of impervious surface. We then integrated two different methods, robust principal component analysis (RPCA) and geographically weighted regression (GWR) to create an innovative approach to model urbanization. The RPCGWR results show significant spatial heterogeneity in the relationships between proportion of impervious surface and the explanatory factors in the TCMA. We link this heterogeneity to the “sprawling” nature of urban land use that has moved outward from the core Twin Cities through to their suburbs and exurbs. PMID:23814454
Tullos, Desiree
The Hearing Conservation Program has been designed to reduce hearing loss at the College of Agricultural Sciences. Workers who are educated about hearing and its loss are likely to use hearing protection prior to working in a noisy area. It has been shown that an eight hour time
NASA Astrophysics Data System (ADS)
Zaman, Muhammad; Kim, Guinyun; Naik, Haladhara; Kim, Kwangsoo; Yang, Sung-Chul; Shahid, Muhammad; Shin, Sung-Gyun; Cho, Moo-Hyun
2015-01-01
The flux-weighted average cross-sections of the natAg(γ,xn)103-106Ag reactions at end-point bremsstrahlung energies of 45, 50, 55, and 60 MeV were determined by activation and the off-line γ-ray spectrometric technique using the 100 MeV electron linac at the Pohang Accelerator Laboratory (PAL). The natAg(γ,xn)103-106Ag reaction cross-sections as a function of photon energy were estimated from the TENDL-2013 library, based on the TALYS 1.6 computer code. The flux-weighted average cross-sections at end-point bremsstrahlung energies of 45-60 MeV were obtained from the literature data and from the theoretical data based on mono-energetic photons. We found that both the present data and the flux-weighted theoretical values for the natAg(γ,xn)103-106Ag reactions increase sharply from their thresholds up to the energies at which other reaction channels open; each cross-section then remains nearly constant until the next channel reaches its maximum, and thereafter decreases slowly with increasing end-point bremsstrahlung energy owing to the opening of further reaction channels.
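A flux-weighted average cross-section of this kind is the integral of σ(E) weighted by the photon spectrum, ⟨σ⟩ = ∫σ(E)φ(E)dE / ∫φ(E)dE. A minimal numerical sketch; the 1/E spectrum and step-function cross-section below are schematic stand-ins, not the TENDL-2013 data:

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal rule (kept local to avoid NumPy version differences)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def flux_weighted_cross_section(energies, sigma, flux):
    """<sigma> = integral of sigma(E)*phi(E) dE / integral of phi(E) dE,
    with all three arrays on a common energy grid."""
    return _trapz(sigma * flux, energies) / _trapz(flux, energies)

# Schematic example: 1/E bremsstrahlung-like spectrum, 20 MeV threshold
E = np.linspace(10.0, 60.0, 500)            # MeV
phi = 1.0 / E                                # schematic photon flux
sigma = np.where(E > 20.0, 30.0, 0.0)        # mb, step at threshold
avg = flux_weighted_cross_section(E, sigma, phi)
```

The average lands between 0 and the 30 mb plateau because the flux below threshold contributes zero cross-section but full weight.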
Kim, Ai-Rhan Ellen; Lee, Yeon Kyung; Kim, Kyung Ah; Chu, Young Kyu; Baik, Byung Yoon; Kim, Eun Soon; Yun, Sung Cheol; Kim, Ki Soo; Pi, Soo Young
2006-02-01
This study investigated the incidence of acquired cytomegalovirus (CMV) infection in very low birth weight infants (VLBWI) given CMV seropositive blood, and sought to determine whether filtering and irradiation of blood products could help prevent CMV infection and the time required to clear passively-derived anti-CMV IgG among 80 VLBWI transfused with filtered-irradiated blood, 20 VLBWI transfused with nonfiltered- nonirradiated blood and 26 nontransfused VLBWI. CMV IgG and IgM values were obtained from all blood products prior to transfusions, and from VLBWI at birth until the infants became seronegative. Urine was obtained for CMV culture at birth and every 3-4 weeks until 12 weeks after the final transfusion. The incidence of CMV IgG seropositivity among the 126 infants at birth and the blood products given were 96% and 95%, respectively. The incidence of acquired CMV infection was 4/100 (4%) in the transfused group: 2/80 (2.5%) and 2/20 (10%) in the filtered-irradiated and nonfiltered-nonirradiated transfusion groups, respectively. Approximately 9-10 months elapsed to clear passively acquired CMV IgG. The irradiation and filtering of the blood products did not seem to decrease the transfusion-related CMV infection rate in Korea among VLBWI, however, further validation is recommended in a larger cohort of infants. PMID:16479056
NASA Astrophysics Data System (ADS)
Jadhav, Nitin A.; Singh, Pramod K.; Rhee, Hee Woo; Bhattacharya, Bhaskar
2014-10-01
Mesoporous ZnO nanoparticles have been synthesized with tremendous increase in specific surface area of up to 578 m2/g which was 5.54 m2/g in previous reports (J. Phys. Chem. C 113:14676-14680, 2009). Different mesoporous ZnO nanoparticles with average pore sizes ranging from 7.22 to 13.43 nm and specific surface area ranging from 50.41 to 578 m2/g were prepared through the sol-gel method via a simple evaporation-induced self-assembly process. The hydrolysis rate of zinc acetate was varied using different concentrations of sodium hydroxide. Morphology, crystallinity, porosity, and J- V characteristics of the materials have been studied using transmission electron microscopy (TEM), X-ray diffraction (XRD), BET nitrogen adsorption/desorption, and Keithley instruments.
NASA Technical Reports Server (NTRS)
Kovich, G.; Moore, R. D.; Urasek, D. C.
1973-01-01
The overall and blade-element performance are presented for an air compressor stage designed to study the effect of weight flow per unit annulus area on efficiency and flow range. At the design speed of 424.8 m/sec the peak efficiency of 0.81 occurred at the design weight flow and a total pressure ratio of 1.56. Design pressure ratio and weight flow were 1.57 and 29.5 kg/sec (65.0 lb/sec), respectively. Stall margin at design speed was 19 percent based on the weight flow and pressure ratio at peak efficiency and at stall.
Simon, Dirk; Fritzsche, Klaus H; Thieke, Christian; Klein, Jan; Parzer, Peter; Weber, Marc-André; Stieltjes, Bram
2012-01-01
The apparent diffusion coefficient (ADC) derived from diffusion-weighted imaging (DWI) correlates inversely with tumor proliferation rates. High-grade gliomas are typically heterogeneous and the delineation of areas of high and low proliferation is impeded by partial volume effects and blurred borders. Commonly used manual delineation is further impeded by potential overlap with cerebrospinal fluid and necrosis. Here we present an algorithm to reproducibly delineate and probabilistically quantify the ADC in areas of high and low proliferation in heterogeneous gliomas, resulting in a reproducible quantification in regions of tissue inhomogeneity. We used an expectation maximization (EM) clustering algorithm, applied on a Gaussian mixture model, consisting of pure superpositions of Gaussian distributions. Soundness and reproducibility of this approach were evaluated in 10 patients with glioma. High- and low-proliferating areas found using the clustering correspond well with conservative regions of interest drawn using all available imaging data. Systematic placement of model initialization seeds shows good reproducibility of the method. Moreover, we illustrate an automatic initialization approach that completely removes user-induced variability. In conclusion, we present a rapid, reproducible and automatic method to separate and quantify heterogeneous regions in gliomas. PMID:22487677
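The EM clustering on a Gaussian mixture described above can be sketched for the 1-D two-compartment case. This is a generic EM implementation run on synthetic ADC-like values, not the authors' algorithm; the initialisation and iteration count are assumptions for illustration:

```python
import numpy as np

def em_two_gaussians(x, iters=200):
    """1-D expectation maximization for a two-component Gaussian mixture.
    Returns mixing proportions, means, and standard deviations."""
    mu = np.array([x.min(), x.max()], dtype=float)  # spread-out initialisation
    sd = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        pdf = (pi / (sd * np.sqrt(2 * np.pi))
               * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2))
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: re-estimate proportions, means, standard deviations
        n = r.sum(axis=0)
        pi = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n)
    return pi, mu, sd
```

On bimodal ADC values, the two recovered components would correspond to the high- and low-proliferation compartments.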
Julien M. E. Fraïsse; Daniel Braun
2015-04-13
We investigate in detail a recently introduced "coherent averaging scheme" in terms of its usefulness for achieving Heisenberg limited sensitivity in the measurement of different parameters. In the scheme, $N$ quantum probes in a product state interact with a quantum bus. Instead of measuring the probes directly and then averaging as in classical averaging, one measures the quantum bus or the entire system and tries to estimate the parameters from these measurement results. Combining analytical results from perturbation theory and an exactly solvable dephasing model with numerical simulations, we draw a detailed picture of the scaling of the best achievable sensitivity with $N$, the dependence on the initial state, the interaction strength, the part of the system measured, and the parameter under investigation.
Forecasting Sales by Exponentially Weighted Moving Averages
Peter R. Winters
1960-01-01
The growing use of computers for mechanized inventory control and production planning has brought with it the need for explicit forecasts of sales and usage for individual products and materials. These forecasts must be made on a routine basis for thousands of products, so that they must be made quickly, and, both in terms of computing time and information storage,
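The exponentially weighted moving average underlying such routine forecasts can be sketched as simple exponential smoothing; Winters' method extends this with trend and seasonal terms. The smoothing constant below is illustrative:

```python
def ewma_forecast(series, alpha=0.3):
    """Simple exponential smoothing: the next-period forecast is
    alpha * latest observation + (1 - alpha) * previous forecast,
    so older observations receive exponentially decaying weight."""
    level = float(series[0])
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level
```

Only the current smoothed level needs to be stored per product, which is what made the method attractive for thousands of items under 1960s computing constraints.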
NASA Technical Reports Server (NTRS)
Urasek, D. C.; Kovich, G.; Moore, R. D.
1973-01-01
Performance was obtained for a 50-cm-diameter compressor designed for a high weight flow per unit annulus area of 208 (kg/sec)/sq m. Peak efficiency values of 0.83 and 0.79 were obtained for the rotor and stage, respectively. The stall margin for the stage was 23 percent, based on equivalent weight flow and total-pressure ratio at peak efficiency and stall.
NASA Astrophysics Data System (ADS)
Shi, Y.; Long, Y.; Wi, X. L.
2014-04-01
When tourists visit multiple scenic spots, the actual travel line is usually the most effective route through the road network, and it may differ from the planned travel line. In the field of navigation, a proposed travel line is normally generated automatically by a path-planning algorithm that considers the scenic spots' positions and the road network. But when a scenic spot covers a certain area and has multiple entrances or exits, the traditional representation by a single point coordinate cannot reflect these structural features. To solve this problem, this paper focuses on how scenic spots' structural features, such as multiple entrances and exits, influence path planning, and proposes a double-weighted graph model in which the weights of both vertices and edges can be selected dynamically. It then discusses the model-building method and an optimal path-planning algorithm based on the Dijkstra and Prim algorithms. Experimental results show that the optimal planned travel line derived from the proposed model and algorithm is more reasonable, and that the travelling order and distance are further optimized.
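A double-weighted shortest path of this kind, where traversal cost accumulates both edge weights and vertex weights (e.g., the cost of entering a scenic spot through one of its entrances), can be sketched with Dijkstra's algorithm. The data structures below are assumptions for illustration, not the paper's model:

```python
import heapq

def dijkstra_double_weighted(adj, node_w, start, goal):
    """Shortest path on a double-weighted graph: the cost of a path is the
    sum of its edge weights plus the weight of every vertex entered.
    adj: {u: [(v, edge_weight), ...]}; node_w: {v: vertex_weight}."""
    dist = {start: node_w.get(start, 0)}
    pq = [(dist[start], start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            return d
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adj.get(u, []):
            nd = d + w + node_w.get(v, 0)  # edge cost + cost of entering v
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")
```

Giving a congested or hard-to-enter spot a large vertex weight steers the route away from it even when its edges are short.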
NASA Astrophysics Data System (ADS)
Wang, Gongwen; Chen, Jianping; Li, Qing; Ding, Huoping
2007-06-01
This paper aims to monitor desertification evolution over different stages and assess its factors using remote sensing (RS) data and cellular automata (CA)-geographical information system (GIS) with an adaptive analytic hierarchy process (AHP) to derive weights of the desertification factors. The study area (114°E to 117°E and 39.5°N to 42.2°N) is an important agro-pastoral transitional zone located in Beijing and its neighboring areas, a marginally desertified region in North China. Desertification information, including NDVI and desertified area, was derived from 1987 and 1996 Landsat TM images (28.5 m resolution) and a 2006 CBERS image (19.5 m resolution) of the study area. Ancillary data on meteorology, geology, 30-m DEM, and hydrography were statistically analyzed with GIS technology. A CA model based on the desertification factors with AHP-derived weights was built with an AML program in the ArcGIS workstation to assess the evolution of desertification in two stages (1987 to 1996 and 1996 to 2006). The results show that the desertified area increased by 3.28% per year from 1987 to 1996 and by 0.51% per year from 1996 to 2006. Although the weights of the desertification factors changed somewhat between stages, the main factors, including climate, NDVI, and terrain, did not change, except for their values in the study area.
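AHP-derived weights such as those used here are conventionally obtained as the normalised principal eigenvector of a pairwise-comparison matrix. A minimal sketch; the comparison values below are hypothetical, not the study's:

```python
import numpy as np

def ahp_weights(pairwise):
    """Factor weights from an AHP pairwise-comparison matrix: the
    principal eigenvector, normalised to sum to 1."""
    vals, vecs = np.linalg.eig(np.asarray(pairwise, dtype=float))
    k = np.argmax(vals.real)          # Perron (largest) eigenvalue
    w = np.abs(vecs[:, k].real)       # eigenvector sign is arbitrary
    return w / w.sum()

# Hypothetical comparisons among three factors: climate, NDVI, terrain
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
w = ahp_weights(A)
```

Here climate is judged 3x as important as NDVI and 5x as important as terrain, so it receives the largest weight.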
Lopes, Thomas J.; Evetts, David M.
2004-01-01
Nevada's reliance on ground-water resources has increased because of increased development and surface-water resources being fully appropriated. The need to accurately quantify Nevada's water resources and water use is more critical than ever to meet future demands. Estimated ground-water pumpage, artificial and natural recharge, and interbasin flow can be used to help evaluate stresses on aquifer systems. In this report, estimates of ground-water pumpage and artificial recharge during calendar year 2000 were made using data from a variety of sources, such as reported estimates and estimates made using Landsat satellite imagery. Average annual natural recharge and interbasin flow were compiled from published reports. An estimated 1,427,100 acre-feet of ground water was pumped in Nevada during calendar year 2000. This total was calculated by summing six categories of ground-water pumpage, based on water use. Total artificial recharge during 2000 was about 145,970 acre-feet. At least one estimate of natural recharge was available for 209 of the 232 hydrographic areas (HAs). Natural recharge for the 209 HAs ranges from 1,793,420 to 2,583,150 acre-feet. Estimates of interbasin flow were available for 151 HAs. The categories and their percentage of the total ground-water pumpage are irrigation and stock watering (47 percent), mining (26 percent), water systems (14 percent), geothermal production (8 percent), self-supplied domestic (4 percent), and miscellaneous (less than 1 percent). Pumpage in the top 10 HAs accounted for about 49 percent of the total ground-water pumpage. The most ground-water pumpage in an HA was due to mining in Pumpernickel Valley (HA 65), Boulder Flat (HA 61), and Lower Reese River Valley (HA 59). Pumpage by water systems in Las Vegas Valley (HA 212) and Truckee Meadows (HA 87) were the fourth and fifth highest pumpage in 2000, respectively. 
Irrigation and stock watering pumpage accounted for most ground-water withdrawals in the HAs with the sixth through ninth highest pumpage. Geothermal production accounted for most pumpage in the Carson Desert (HA 101). Reinjection of ground water pumped for geothermal energy production accounted for about 64 percent (93,310 acre-feet) of the total artificial recharge. The only artificial recharge by water systems was in Las Vegas Valley, where 29,790 acre-feet of water from the Colorado River was injected into the aquifer system. Artificial recharge by mining totaled 22,870 acre-feet. Net ground-water flow was estimated only for the 143 HAs with available estimates of both natural recharge and interbasin flow. Of the 143 estimates, 58 have negative net ground-water flow, indicating that ground-water storage could be depleted if pumpage continues at the same rate. The State has designated HAs where permitted ground-water rights approach or exceed the estimated average annual recharge. Ten HAs were identified that are not designated and have a net ground-water flow between -1,000 to -35,000 acre-feet. Due to uncertainties in recharge, the water budgets for these HAs may need refining to determine if ground-water storage is being depleted.
Haines, Aaron M.; Leu, Matthias; Svancara, Leona K.; Wilson, Gina; Scott, J. Michael
2010-01-01
Identification of biodiversity hotspots (hereafter, hotspots) has become a common strategy to delineate important areas for wildlife conservation. However, the use of hotspots has not often incorporated important habitat types, ecosystem services, anthropogenic activity, or consistency in identifying important conservation areas. The purpose of this study was to identify hotspots to improve avian conservation efforts for Species of Greatest Conservation Need (SGCN) in the state of Idaho, United States. We evaluated multiple approaches to define hotspots and used a unique approach based on weighting species by their distribution size and conservation status to identify hotspot areas. All hotspot approaches identified bodies of water (Bear Lake, Grays Lake, and American Falls Reservoir) as important hotspots for Idaho avian SGCN, but we found that the weighted approach produced more congruent hotspot areas when compared to other hotspot approaches. To incorporate anthropogenic activity into hotspot analysis, we grouped species based on their sensitivity to specific human threats (i.e., urban development, agriculture, fire suppression, grazing, roads, and logging) and identified ecological sections within Idaho that may require specific conservation actions to address these human threats using the weighted approach. The Snake River Basalts and Overthrust Mountains ecological sections were important areas for potential implementation of conservation actions to conserve biodiversity. Our approach to identifying hotspots may be useful as part of a larger conservation strategy to aid land managers or local governments in applying conservation actions on the ground.
2012-01-01
Background The study conducts statistical and spatial analyses to investigate amounts and types of permitted surface water pollution discharges in relation to population mortality rates for cancer and non-cancer causes nationwide and by urban-rural setting. Data from the Environmental Protection Agency's (EPA) Discharge Monitoring Report (DMR) were used to measure the location, type, and quantity of a selected set of 38 discharge chemicals for 10,395 facilities across the contiguous US. Exposures were refined by weighting amounts of chemical discharges by their estimated toxicity to human health, and by estimating the discharges that occur not only in a local county, but area-weighted discharges occurring upstream in the same watershed. Centers for Disease Control and Prevention (CDC) mortality files were used to measure age-adjusted population mortality rates for cancer, kidney disease, and total non-cancer causes. Analysis included multiple linear regressions to adjust for population health risk covariates. Spatial analyses were conducted by applying geographically weighted regression to examine the geographic relationships between releases and mortality. Results Greater non-carcinogenic chemical discharge quantities were associated with significantly higher non-cancer mortality rates, regardless of toxicity weighting or upstream discharge weighting. Cancer mortality was higher in association with carcinogenic discharges only after applying toxicity weights. Kidney disease mortality was related to higher non-carcinogenic discharges only when both applying toxicity weights and including upstream discharges. Effects for kidney mortality and total non-cancer mortality were stronger in rural areas than urban areas. Spatial results show correlations between non-carcinogenic discharges and cancer mortality for much of the contiguous United States, suggesting that chemicals not currently recognized as carcinogens may contribute to cancer mortality risk. 
The geographically weighted regression results suggest spatial variability in effects, and also indicate that some rural communities may be impacted by upstream urban discharges. Conclusions There is evidence that permitted surface water chemical discharges are related to population mortality. Toxicity weights and upstream discharges are important for understanding some mortality effects. Chemicals not currently recognized as carcinogens may nevertheless play a role in contributing to cancer mortality risk. Spatial models allow for the examination of geographic variability not captured through the regression models. PMID:22471926
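Geographically weighted regression, as applied in the study above, fits a separate weighted least-squares regression at each location, with observation weights decaying with distance. A minimal sketch assuming a Gaussian kernel; the bandwidth and variable names are illustrative:

```python
import numpy as np

def gwr_coefficients(coords, X, y, point, bandwidth):
    """Local regression coefficients at `point` via weighted least squares
    with a Gaussian distance-decay kernel (a minimal GWR sketch).
    coords: (n, 2) locations; X: (n, k) design matrix; y: (n,) response."""
    d = np.linalg.norm(coords - point, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)  # spatial kernel weights
    Xw = X * w[:, None]                       # W @ X with diagonal W
    # solve (X^T W X) beta = X^T W y
    return np.linalg.solve(X.T @ Xw, Xw.T @ y)
```

Repeating this at every location produces a surface of coefficients, which is how spatial variability in the discharge-mortality relationship can be mapped.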
ERIC Educational Resources Information Center
Gutiérrez-Zornoza, Myriam; Sánchez-López, Mairena; García-Hermoso, Antonio; González-García, Alberto; Chillón, Palma; Martínez-Vizcaíno, Vicente
2015-01-01
Purpose: The aim of this study was to examine (a) whether distance from home to school is a determinant of active commuting to school (ACS), (b) the relationship between distance from home to heavily used facilities (school, green spaces, and sports facilities) and the weight status and cardiometabolic risk categories, and (c) whether ACS has a…
Leakey, R R; Coutts, M P
1989-03-01
Single-node, leafy stem cuttings of Triplochiton scleroxylon K. Schum. were collected from successive nodes down the uppermost shoot of 2-shoot stockplants. The leaves were trimmed to 10, 50 and 100 cm2 before the cuttings were set under intermittent mist to root. Batches of cuttings were harvested after 0, 14, 28 and 42 days to assess leaf water potential, dry weight and carbohydrate content of their leaf and stem portions. Cuttings with leaf areas of 10, 50 and 100 cm2 increased in total dry weight by 29, 61 and 90%, respectively, during the 6-week period. The increase in dry weight was accompanied by increases in reflux-extracted soluble carbohydrates (RSC), water-soluble carbohydrates (WSC) and starch. By contrast, increase in leaf area reduced leaf water potential of cuttings before root emergence. Fewer large-leaved cuttings rooted than smaller-leaved cuttings, suggesting that rooting ability is at least partially determined by the balance between photosynthesis and transpiration. Fewer roots per cutting were produced on cuttings with 10 cm2 leaves than on cuttings with larger leaves. Node position affected increments in dry weight, carbohydrate content and leaf water potential, with differences between nodes on day 0 generally being lost or slightly reversed by day 14. Rooting ability was not related to initial (day 0) carbohydrate content, suggesting that rooting is dependent on carbohydrates formed after severance. During the rooting period, the proportions of total non-structural carbohydrate as WSC and starch were reversed, from mostly WSC on day 0 to mostly starch by day 42. These changes in WSC and starch occurred most rapidly in large-leaved cuttings. PMID:14973005
NASA Astrophysics Data System (ADS)
Corsini, Alessandro; Cervi, Federico; Ronchetti, Francesco
2009-10-01
Locations of potential groundwater springs were mapped in an area of 68 km2 in the Northern Apennines of Italy based on Weight of Evidence (WofE) and Radial Basis Function Link Net (RBFLN). A map of more than 200 springs and maps of five causal factors were uploaded to ArcGIS with Spatial Data Modelling extensions. The WofE and RBFLN potential groundwater spring maps had similar prediction rates, allowing about 50% of the training and validation springs to be predicted in about 15 to 20% of the study area. The two maps were merged using a heuristic combination matrix in order to produce two hybrid maps: one representing areas susceptible in both the WofE and RBFLN maps (type A), the other representing areas susceptible in at least one of the two maps (type B). For small cumulated areas, the success rate of both hybrid maps was higher than that of the parent maps, while for large cumulated areas, only the type B hybrid map performed similarly to the parent maps. This conclusion suggests different applications of these maps for water management purposes.
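The Weights of Evidence calculation for one binary evidence layer can be sketched as follows. This is the standard W+, W-, and contrast formulation over rasterised study-area cells; the array layout is an assumption for illustration:

```python
import numpy as np

def wofe_contrast(evidence, springs):
    """Weights of Evidence for a binary evidence layer B and occurrences S:
    W+ = ln P(B|S)/P(B|~S), W- = ln P(~B|S)/P(~B|~S), contrast C = W+ - W-.
    evidence, springs: boolean arrays over the study-area cells."""
    b = np.asarray(evidence, bool)
    s = np.asarray(springs, bool)
    p = lambda num, den: num.sum() / max(den.sum(), 1)  # conditional proportion
    w_plus = np.log(p(b & s, s) / p(b & ~s, ~s))
    w_minus = np.log(p(~b & s, s) / p(~b & ~s, ~s))
    return w_plus, w_minus, w_plus - w_minus
```

A positive contrast means the layer is spatially associated with springs; summing posterior log-odds over layers yields the WofE susceptibility map.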
Gnanvo, Kondo; Gua, Chao; Liyanage, Nilanga; Nelyubin, Vladimir; Zhao, Yuxiang
2015-01-01
A large-area and light-weight Gas Electron Multiplier (GEM) detector was built at the University of Virginia as a prototype for the detector R&D program of the future Electron Ion Collider. The prototype has a trapezoidal geometry designed as a generic sector module in a disk-layer configuration of a forward tracker in collider detectors. It is based on light-weight material and narrow support frames in order to minimize multiple scattering and the dead-to-sensitive-area ratio. The chamber has a novel type of two-dimensional (2D) stereo-angle readout board with U-V strips that provides (r, φ) position information in the cylindrical coordinate system of a collider environment. The prototype was tested at the Fermilab Test Beam Facility in October 2013, and the analysis of the test beam data demonstrates an excellent response uniformity of the large-area chamber with an efficiency higher than 95%. An angular resolution of 60 μrad in the azimuthal direction and a position resolution better than 550 ...
NASA Astrophysics Data System (ADS)
Ferraris, Stefano; Agnese, Carmelo; Baiamonte, Giorgio; Canone, Davide; Previati, Maurizio; Cat Berro, Daniele; Mercalli, Luca
2015-04-01
Modeling the statistical structure of rainfall represents an important research area in hydrology, meteorology, atmospheric physics and climatology, because of its several theoretical and practical implications. The statistical inference of the alternation of wet periods (WP) and dry periods (DP) in daily rainfall records can be achieved through modelling of the inter-arrival time series (IT), defined as the succession of times elapsed between a rainy day and the one immediately preceding it. It has been shown previously that the statistical structure of IT can be well described by the 3-parameter Lerch distribution (Lch). In this work, Lch was successfully applied to IT data belonging to a sub-alpine area (Piemonte and Valle d'Aosta, NW Italy); furthermore, the same statistical procedure was applied to the rainfall depths associated with the ITs in the daily records. The analysis was carried out for 26 long daily rainfall series (≥90 yr of observations). The main objective of this work was to detect temporal trends in some features describing the statistical structure of both the inter-arrival time series (IT) and the associated rainfall depth (H). Each time series was divided into subsets five years long, and for each of them the Lch parameters were estimated, so as to extend the trend analysis to some high quantiles.
NSDL National Science Digital Library
2005-12-11
Your skin covers and protects your body. Your skin can also detect pressure and weight. You can tell that a one gram weight feels lighter than a one kilogram weight because the receptors on your skin detect more pressure from a one kilogram weight compared to a one gram weight.
How to Address Measurement Noise in Bayesian Model Averaging
NASA Astrophysics Data System (ADS)
Schöniger, A.; Wöhling, T.; Nowak, W.
2014-12-01
When confronted with the challenge of selecting one out of several competing conceptual models for a specific modeling task, Bayesian model averaging is a rigorous choice. It ranks the plausibility of models based on Bayes' theorem, which yields an optimal trade-off between performance and complexity. With the resulting posterior model probabilities, the models' individual predictions are combined into a robust weighted average, and the overall predictive uncertainty (including conceptual uncertainty) can be quantified. However, this rigorous framework does not yet explicitly consider the statistical significance of measurement noise in the calibration data set. This is a major drawback, because model weights may be unstable due to the uncertainty in noisy data, which can compromise the reliability of model ranking. We present a new extension to the Bayesian model averaging framework that explicitly accounts for measurement noise as a source of uncertainty for the weights. This enables modelers to assess the reliability of model ranking for a specific application and a given calibration data set. Also, the impact of measurement noise on the overall prediction uncertainty can be determined. Technically, our extension is built within a Monte Carlo framework: we repeatedly perturb the observed data with random realizations of measurement error, and then determine the robustness of the resulting model weights against measurement noise. We quantify the variability of the posterior model weights as a weighting variance, and we add this new variance term to the overall prediction uncertainty analysis within the Bayesian model averaging framework to make uncertainty quantification more realistic and complete. We illustrate the importance of our suggested extension with an application to soil-plant model selection, based on studies by Wöhling et al. (2013, 2014).
Results confirm that noise in leaf area index or evaporation rate observations produces a significant amount of weighting uncertainty and compromises the reliability of model ranking. Without our suggested extension, this additional contribution to prediction uncertainty could not be detected and model ranking results would be misinterpreted. We therefore advise modelers to include our suggested upgrade in the Bayesian model averaging routine.
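The Monte Carlo noise-perturbation idea described in this abstract can be sketched briefly. The models, data, noise level, and number of realizations below are hypothetical placeholders for illustration, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: two competing models' predictions of the same
# calibration data, plus observations with a known noise level.
obs = np.array([1.0, 1.9, 3.2, 3.9, 5.1])
model_preds = {
    "model_A": np.array([1.0, 2.0, 3.0, 4.0, 5.0]),
    "model_B": np.array([1.1, 2.2, 2.9, 4.2, 4.8]),
}
sigma = 0.2  # assumed measurement-error standard deviation

def posterior_weights(data):
    """Bayes' theorem with a Gaussian likelihood and equal priors."""
    log_like = {m: -0.5 * np.sum(((data - p) / sigma) ** 2)
                for m, p in model_preds.items()}
    shift = max(log_like.values())               # avoid underflow
    like = {m: np.exp(v - shift) for m, v in log_like.items()}
    total = sum(like.values())
    return {m: v / total for m, v in like.items()}

# Monte Carlo: perturb the observations with random noise realizations
# and record how much the posterior model weights move.
samples = np.array([
    list(posterior_weights(obs + rng.normal(0, sigma, obs.size)).values())
    for _ in range(2000)
])
weighting_variance = samples.var(axis=0)
print("mean weights:", samples.mean(axis=0))
print("weighting variance:", weighting_variance)
```

A large weighting variance signals that the model ranking is not robust against the noise in the calibration data.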
Wire weight is lowered to water surface to measure stage at a site. Levels are made to the wire weights elevation from known benchmarks to ensure correct readings. This wire weight is located along the Missouri River in Bismarck, ND....
Gong, Lunli; Zhou, Xiao; Wu, Yaohao; Zhang, Yun; Wang, Chen; Zhou, Heng; Guo, Fangfang; Cui, Lei
2014-02-01
The present study was designed to investigate the possibility of full-thickness defect repair in the porcine articular cartilage (AC) weight-bearing area using chondrogenically differentiated autologous adipose-derived stem cells (ASCs), with a follow-up of 3 and 6 months, successive to our previous study on the nonweight-bearing area. The isolated ASCs were seeded onto phosphoglycerate/polylactic acid (PGA/PLA) scaffolds with chondrogenic induction in vitro for 2 weeks as the experimental group prior to implantation in porcine AC defects (8 mm in diameter, deep to subchondral bone), with PGA/PLA only as control. At both the 3- and 6-month follow-ups, the neo-cartilage in the experimental group integrated well histologically with the neighboring normal cartilage and subchondral bone, whereas only fibrous tissue formed in the control group. Immunohistochemical and toluidine blue staining confirmed a distribution of COL II and glycosaminoglycan in the regenerated cartilage similar to that of the native one. A vivid remodeling process over the repair time was also witnessed in the neo-cartilage, as the compressive modulus significantly increased from 70% of normal cartilage at 3 months to nearly 90% at 6 months, similar to our former research. Nevertheless, differences between the regenerated cartilage and the native one could still be detected. Meanwhile, the exact mechanism involved in chondrogenic differentiation of ASCs seeded on PGA/PLA is still unknown. Therefore, a proteomic analysis was carried out, identifying 43 differentially expressed proteins from 20 chosen two-dimensional spots, which helps further our research on certain committed factors. In conclusion, the proteomic comparison provided a thorough understanding of the mechanisms implicated in ASC differentiation toward chondrocytes, and the present study serves as a complement to the former one in the nonweight-bearing area. PMID:24044689
Unimodular Gravity and Averaging
NASA Astrophysics Data System (ADS)
Coley, Alan
The question of the averaging of inhomogeneous spacetimes in cosmology is important for the correct interpretation of cosmological data. In this paper we suggest a conceptually simpler approach to averaging in cosmology based on the averaging of scalars within unimodular gravity. As an illustration, we consider the example of an exact spherically symmetric dust model, and show that within this approach averaging introduces correlations (corrections) to the effective dynamical evolution equation in the form of a spatial curvature term.
Unimodular Gravity and Averaging
A. Coley; J. Brannlund; J. Latta
2011-02-16
The question of the averaging of inhomogeneous spacetimes in cosmology is important for the correct interpretation of cosmological data. In this paper we suggest a conceptually simpler approach to averaging in cosmology based on the averaging of scalars within unimodular gravity. As an illustration, we consider the example of an exact spherically symmetric dust model, and show that within this approach averaging introduces correlations (corrections) to the effective dynamical evolution equation in the form of a spatial curvature term.
Aggregation operators for linguistic weighted information
Francisco Herrera; Enrique Herrera-Viedma
1997-01-01
The aim of this paper is to model the processes of aggregation of weighted information in a linguistic framework. Three aggregation operators of weighted linguistic information are presented: the linguistic weighted disjunction operator, the linguistic weighted conjunction operator, and the linguistic weighted averaging operator. A study of their axiomatics is presented to demonstrate their rational aggregation behavior.
Bariatric and Metabolic Institute Weight Management Program
Gleeson, Joseph G.
The Weight Management Program is a clinical partner of Health Management Resources (HMR), a proven weight-loss system. Participants receive a 20 percent discount on our Weight Management Program, which applies to weight-loss strategy classes and medical fees for our clinic-based weight-loss program. Average program weight loss ranges from ...
Córdova-Palomera, Aldo; Fatjó-Vilas, Mar; Falcón, Carles; Bargalló, Nuria; Alemany, Silvia; Crespo-Facorro, Benedicto; Nenadic, Igor; Fañanás, Lourdes
2015-01-01
Background: Previous research suggests that low birth weight (BW) induces reduced brain cortical surface area (SA), which would persist until at least early adulthood. Moreover, low BW has been linked to psychiatric disorders such as depression and psychological distress, and to altered neurocognitive profiles. Aims: We present novel findings obtained by analysing high-resolution structural MRI scans of 48 twins; specifically, we aimed: i) to test the BW-SA association in a middle-aged adult sample; and ii) to assess whether either depression/anxiety disorders or intellectual quotient (IQ) influence the BW-SA link, using a monozygotic (MZ) twin design to separate environmental and genetic effects. Results: Both lower BW and decreased IQ were associated with smaller total and regional cortical SA in adulthood. Within a twin pair, lower BW was related to smaller total cortical and regional SA. In contrast, MZ twin differences in SA were not related to differences in either IQ or depression/anxiety disorders. Conclusion: The present study supports findings indicating that i) BW has a long-lasting effect on cortical SA, where some familial and environmental influences alter both foetal growth and brain morphology; ii) uniquely environmental factors affecting BW also alter SA; iii) higher IQ correlates with larger SA; and iv) these effects are not modified by internalizing psychopathology. PMID:26086820
... thyroid problems, heart failure, and kidney disease. Good nutrition and exercise can help in losing weight. Eating extra calories within a well-balanced diet and treating any underlying medical problems can help to add weight.
... obese. Achieving a healthy weight can help you control your cholesterol, blood pressure and blood sugar. It ... use more calories than you eat. A weight-control strategy might include Choosing low-fat, low-calorie ...
... weight... Read full story >> Healthy Weight Loss share Body Image: It's Not Just About How You See Your ... your own skin share The ABC's of Positive Body Image Feeling comfortable in your own skin can be ...
... problem. Causes for sudden weight loss can include Thyroid problems Cancer Infectious diseases Digestive diseases Certain medicines Sudden weight gain can be due to medicines, thyroid problems, heart failure, and kidney disease. Good nutrition ...
... weight, the calories you eat must equal the energy you burn. To lose weight, you must use more calories than you eat. A weight-control strategy might include Choosing low-fat, low-calorie foods Eating smaller portions Drinking water instead of sugary drinks Being physically active Eating ...
Averaging undifferentiable monitored parameters
A. D. Bolychevtsev; L. B. Bystritskaya; A. L. Oksman; V. G. Onoprienko; A. I. Yatsenko
1987-01-01
Averaging monitored parameters over time intervals is an essential part of data processing in calculating the performance indices of equipment in an industrial organization (power station, gas transport organization, and so on). The averaging is performed discretely by data-acquisition systems (DAS) or other such facilities, which periodically accumulate the input data on the current parameters and process them.
Owori, Steven
1991-01-01
diet), and (3) all-forage diet (100% forage). Lambs were slaughtered individually upon reaching a targeted slaughter weight of 54.4 kg. Data collected were: warm carcass weight; chilled carcass weight; fat depth at the 12th rib; adjusted fat depth; body wall thickness; loin eye area; muscle depth; fat depth at the tail head; fat depth at the scapula; average daily gain; total days on feed; shrunk weight; Warner-Bratzler shear values; conformation; leg conformation; kidney and pelvic percentages...
42 CFR 423.279 - National average monthly bid amount.
Code of Federal Regulations, 2012 CFR
2012-10-01
...MA-PD plans included in the national average bid a weight based...assigned zero weight). (c) Geographic adjustment. (1) Upon...appropriate methodology, the national average monthly bid amount...2) CMS does not apply any geographic adjustments if CMS...
42 CFR 423.279 - National average monthly bid amount.
Code of Federal Regulations, 2013 CFR
2013-10-01
...MA-PD plans included in the national average bid a weight based...assigned zero weight). (c) Geographic adjustment. (1) Upon...appropriate methodology, the national average monthly bid amount...2) CMS does not apply any geographic adjustments if CMS...
42 CFR 423.279 - National average monthly bid amount.
Code of Federal Regulations, 2011 CFR
2011-10-01
...MA-PD plans included in the national average bid a weight based...assigned zero weight). (c) Geographic adjustment. (1) Upon...appropriate methodology, the national average monthly bid amount...2) CMS does not apply any geographic adjustments if CMS...
42 CFR 423.279 - National average monthly bid amount.
Code of Federal Regulations, 2014 CFR
2014-10-01
...MA-PD plans included in the national average bid a weight based...assigned zero weight). (c) Geographic adjustment. (1) Upon...appropriate methodology, the national average monthly bid amount...2) CMS does not apply any geographic adjustments if CMS...
42 CFR 423.279 - National average monthly bid amount.
Code of Federal Regulations, 2010 CFR
2010-10-01
...MA-PD plans included in the national average bid a weight based...assigned zero weight). (c) Geographic adjustment. (1) Upon...appropriate methodology, the national average monthly bid amount...2) CMS does not apply any geographic adjustments if CMS...
Weighted Gossip: Distributed Averaging Using Non-Doubly Stochastic Matrices
Shafran, Izhak
... INRIA, France; Vincent Blondel, UCL, Belgium; Patrick Thiran, EPFL, Switzerland; John Tsitsiklis, MIT, USA; Martin ... The previous algorithm gained efficiency at the price of more complex coordination. At every round of Path ...
NASA Astrophysics Data System (ADS)
Du, Wen-Bo; Cao, Xian-Bin; Zhao, Lin; Zhou, Hong
2009-05-01
We investigate the evolutionary prisoner's dilemma game (PDG) on weighted Newman-Watts (NW) networks. In weighted NW networks, the link weight w_ij is assigned to the link between the nodes i and j as: w_ij = (k_i · k_j)^α, where k_i (k_j) is the degree of node i (j) and α represents the strength of the correlations. Obviously, the link weight can be tuned by only one parameter, α. We focus on the cooperative behavior and wealth distribution in the system. Simulation results show that the cooperator frequency is promoted by a large range of α and there is a minimal cooperation frequency around α = -1. Moreover, we also employ the Gini coefficient to study the wealth distribution in the population. Numerical results show that the Gini coefficient reaches its minimum when α ≈ -1. Our work may be helpful in understanding the emergence of cooperation and unequal wealth distribution in society.
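The degree-based weighting scheme in this abstract is simple to reproduce. A minimal sketch on a hypothetical toy graph (the edge list and the α value are invented for illustration):

```python
import numpy as np

# Hypothetical 5-node graph; the weighting rule w_ij = (k_i * k_j)**alpha
# follows the abstract above, with k_i the degree of node i and alpha
# tuning the strength of the degree correlations.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3)]
n = 5
alpha = -1.0

# Compute node degrees from the edge list.
degree = np.zeros(n, dtype=int)
for i, j in edges:
    degree[i] += 1
    degree[j] += 1

# Assign each link its weight from the two endpoint degrees.
weights = {(i, j): (degree[i] * degree[j]) ** alpha for i, j in edges}
for (i, j), w in weights.items():
    print(f"w_{i}{j} = {w:.3f}")
```

With α = -1, links between high-degree nodes get the smallest weights, which is the regime where the abstract reports minimal cooperation.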
The Molecular Weight Distribution of Polymer Samples
ERIC Educational Resources Information Center
Horta, Arturo; Pastoriza, M. Alejandra
2007-01-01
Various methods for the determination of the molecular weight distribution (MWD) of different polymer samples are presented. The study shows that the molecular weight averages and distribution of a polymerization completely depend on the characteristics of the reaction itself.
NSDL National Science Digital Library
2012-05-18
Weights and measures may not be the first thing that most people wake up in the morning thinking about, but such matters are terribly important. The National Institute of Standards and Technology (NIST) has created this site to provide information to concerned stakeholders about the work of their Office of Weights and Measures. The mission of the Office is to promote "uniformity in U.S. weights and measures laws, regulations, and standards to achieve equity between buyers and sellers in the marketplaces." On the right hand side of the page, visitors can look over a Resources area. Here they will find educational materials, information on the National Conference on Weights and Measures, past newsletters, and a rather fine digital exhibit that traces "the struggle to achieve weights and measures standardization" in the United States. On the left-hand side of the page, visitors can learn more about price verification, packaging and labeling, and the metric system.
NASA Technical Reports Server (NTRS)
Moore, R. D.; Urasek, D. C.; Kovich, G.
1973-01-01
The overall and blade-element performances are presented over the stable flow operating range from 50 to 100 percent of design speed. Stage peak efficiency of 0.834 was obtained at a weight flow of 26.4 kg/sec (58.3 lb/sec) and a pressure ratio of 1.581. The stall margin for the stage was 7.5 percent based on weight flow and pressure ratio at stall and peak efficiency conditions. The rotor minimum losses were approximately equal to design except in the blade vibration damper region. Stator minimum losses were less than design except in the tip and damper regions.
Product (a) Type (b) Time of Harvest Gear Code (c) Area of catch (d) Net Weight No. of Fish F/FR RD/GG/DR/FL/OT (mm/yy) (kg) (when RD, GG or DR) Name Address Signature Date Licence Number (if applicable) Name knowledge and belief. (a): F=Fresh, FR=Frozen (b): RD=Round, GG=Gilled and Gutted, DR=Dressed, FL=Fillet, OT
Effect of high-speed jet on flow behavior, retrogradation, and molecular weight of rice starch.
Fu, Zhen; Luo, Shun-Jing; BeMiller, James N; Liu, Wei; Liu, Cheng-Mei
2015-11-20
Effects of high-speed jet (HSJ) treatment on flow behavior, retrogradation, and degradation of the molecular structure of indica rice starch were investigated. Decreasing with the number of HSJ treatment passes were the turbidity of pastes (degree of retrogradation), the enthalpy of melting of retrograded rice starch, weight-average molecular weights and weight-average root-mean square radii of gyration of the starch polysaccharides, and the amylopectin peak areas of SEC profiles. The areas of lower-molecular-weight polymers increased. The chain-length distribution was not significantly changed. Pastes of all starch samples exhibited pseudoplastic, shear-thinning behavior. HSJ treatment increased the flow behavior index and decreased the consistency coefficient and viscosity. The data suggested that degradation of amylopectin was mainly involved and that breakdown preferentially occurred in chains between clusters. PMID:26344255
Covariant approximation averaging
NASA Astrophysics Data System (ADS)
Shintani, Eigo; Arthur, Rudy; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph
2015-06-01
We present a new class of statistical error reduction techniques for Monte Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in Nf=2 +1 lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte Carlo calculations over conventional methods for the same cost.
Covariant approximation averaging
Eigo Shintani; Rudy Arthur; Thomas Blum; Taku Izubuchi; Chulwoo Jung; Christoph Lehner
2015-07-08
We present a new class of statistical error reduction techniques for Monte-Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in $N_f=2+1$ lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte-Carlo calculations over conventional methods for the same cost.
NASA Technical Reports Server (NTRS)
Howard, W. H.; Young, D. R.
1972-01-01
Device applies compressive force to bone to minimize loss of bone calcium during weightlessness or bedrest. Force is applied through weights, or hydraulic, pneumatic or electrically actuated devices. Device is lightweight and easy to maintain and operate.
Unknown
2011-08-17
it extremely important that geocoding results be as accurate as possible. Existing global-weighting approaches to geocoding assume spatial stationarity of addressing systems and address data characteristic distributions across space, resulting in heuristics...
Improved averaging for non-null interferometry
NASA Astrophysics Data System (ADS)
Fleig, Jon F.; Murphy, Paul E.
2013-09-01
Arithmetic averaging of interferometric phase measurements is a well-established method for reducing the effects of time varying disturbances, such as air turbulence and vibration. Calculating a map of the standard deviation for each pixel in the average map can provide a useful estimate of its variability. However, phase maps of complex and/or high density fringe fields frequently contain defects that severely impair the effectiveness of simple phase averaging and bias the variability estimate. These defects include large or small-area phase unwrapping artifacts, large alignment components, and voids that change in number, location, or size. Inclusion of a single phase map with a large area defect into the average is usually sufficient to spoil the entire result. Small-area phase unwrapping and void defects may not render the average map metrologically useless, but they pessimistically bias the variance estimate for the overwhelming majority of the data. We present an algorithm that obtains phase average and variance estimates that are robust against both large and small-area phase defects. It identifies and rejects phase maps containing large area voids or unwrapping artifacts. It also identifies and prunes the unreliable areas of otherwise useful phase maps, and removes the effect of alignment drift from the variance estimate. The algorithm has several run-time adjustable parameters to adjust the rejection criteria for bad data. However, a single nominal setting has been effective over a wide range of conditions. This enhanced averaging algorithm can be efficiently integrated with the phase map acquisition process to minimize the number of phase samples required to approach the practical noise floor of the metrology environment.
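The rejection-and-pruning strategy outlined in this abstract can be illustrated with a hedged NumPy sketch; the thresholds, the MAD-based outlier test, and the piston-removal step are illustrative choices, not the authors' exact algorithm:

```python
import numpy as np

def robust_average(maps, max_dev=3.0, min_valid=0.8):
    """Sketch of robust averaging of a stack of phase maps:
    reject whole maps with too many invalid (NaN) pixels, remove a
    per-map piston (mean) term so alignment drift does not inflate the
    variance, then prune per-pixel outliers against the median map
    before averaging. Thresholds are illustrative defaults."""
    maps = np.asarray(maps, dtype=float)
    # 1. Reject maps dominated by voids or unwrapping artifacts.
    valid_frac = np.isfinite(maps).mean(axis=(1, 2))
    maps = maps[valid_frac >= min_valid]
    # 2. Remove the per-map piston term (alignment drift).
    piston = np.nanmean(maps, axis=(1, 2), keepdims=True)
    maps = maps - piston
    # 3. Prune unreliable pixels: deviation from the median map,
    #    scaled by a robust (MAD-based) estimate of the spread.
    med = np.nanmedian(maps, axis=0)
    mad = np.nanmedian(np.abs(maps - med), axis=0) + 1e-12
    keep = np.abs(maps - med) <= max_dev * 1.4826 * mad
    maps = np.where(keep, maps, np.nan)
    # Per-pixel average and variability estimate over surviving data.
    return np.nanmean(maps, axis=0), np.nanstd(maps, axis=0)
```

A map that is mostly voids is dropped entirely, so a single bad acquisition no longer spoils the average or biases the per-pixel variability estimate.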
Americans' Average Radiation Exposure
NA
2000-08-11
We live with radiation every day. We receive radiation exposures from cosmic rays, from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and from our own bodies. We also are exposed to man-made sources of radiation, including medical and dental treatments, television sets and emission from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body.
NSDL National Science Digital Library
2011-04-06
In this lesson students use a rule of thumb about the weight of babies to practice doubling and halving numbers. They complete an organized table and compare data using vertical and horizontal double bar graphs. The lesson includes a student activity sheet and extension ideas.
Lorcaserin for weight management
Taylor, James R; Dietrich, Eric; Powell, Jason
2013-01-01
Type 2 diabetes and obesity commonly occur together. Obesity contributes to insulin resistance, a main cause of type 2 diabetes. Modest weight loss reduces glucose, lipids, blood pressure, need for medications, and cardiovascular risk. A number of approaches can be used to achieve weight loss, including lifestyle modification, surgery, and medication. Lorcaserin, a novel antiobesity agent, affects central serotonin subtype 2A receptors, resulting in decreased food intake and increased satiety. It has been studied in obese patients with type 2 diabetes and results in an approximately 5.5 kg weight loss, on average, when used for one year. Headache, back pain, nasopharyngitis, and nausea were the most common adverse effects noted with lorcaserin. Hypoglycemia was more common in the lorcaserin groups in the clinical trials, but none of the episodes were categorized as severe. Based on the results of these studies, lorcaserin was approved at a dose of 10 mg twice daily in patients with a body mass index ≥30 kg/m2 or ≥27 kg/m2 with at least one weight-related comorbidity, such as hypertension, type 2 diabetes mellitus, or dyslipidemia, in addition to a reduced calorie diet and increased physical activity. Lorcaserin is effective for weight loss in obese patients with and without type 2 diabetes, although its specific role in the management of obesity is unclear at this time. This paper reviews the clinical trials of lorcaserin, its use from the patient perspective, and its potential role in the treatment of obesity. PMID:23788837
Lorcaserin for weight management.
Taylor, James R; Dietrich, Eric; Powell, Jason
2013-01-01
Type 2 diabetes and obesity commonly occur together. Obesity contributes to insulin resistance, a main cause of type 2 diabetes. Modest weight loss reduces glucose, lipids, blood pressure, need for medications, and cardiovascular risk. A number of approaches can be used to achieve weight loss, including lifestyle modification, surgery, and medication. Lorcaserin, a novel antiobesity agent, affects central serotonin subtype 2A receptors, resulting in decreased food intake and increased satiety. It has been studied in obese patients with type 2 diabetes and results in an approximately 5.5 kg weight loss, on average, when used for one year. Headache, back pain, nasopharyngitis, and nausea were the most common adverse effects noted with lorcaserin. Hypoglycemia was more common in the lorcaserin groups in the clinical trials, but none of the episodes were categorized as severe. Based on the results of these studies, lorcaserin was approved at a dose of 10 mg twice daily in patients with a body mass index ≥30 kg/m(2) or ≥27 kg/m(2) with at least one weight-related comorbidity, such as hypertension, type 2 diabetes mellitus, or dyslipidemia, in addition to a reduced calorie diet and increased physical activity. Lorcaserin is effective for weight loss in obese patients with and without type 2 diabetes, although its specific role in the management of obesity is unclear at this time. This paper reviews the clinical trials of lorcaserin, its use from the patient perspective, and its potential role in the treatment of obesity. PMID:23788837
Combining forecast weights: Why and how?
NASA Astrophysics Data System (ADS)
Yin, Yip Chee; Kok-Haur, Ng; Hock-Eam, Lim
2012-09-01
This paper proposes a procedure called forecast weight averaging, a specific combination of forecast weights obtained from different methods of constructing forecast weights, for the purpose of improving the accuracy of pseudo out-of-sample forecasting. It is found that under certain specified conditions, forecast weight averaging can lower the mean squared forecast error obtained from model averaging. In addition, we show that in a linear and homoskedastic environment, this superior predictive ability of forecast weight averaging holds true irrespective of whether the coefficients are tested by the t statistic or the z statistic, provided the significance level is within the 10% range. By theoretical proofs and a simulation study, we show that model averaging variants such as variance model averaging, simple model averaging and standard error model averaging each produce a mean squared forecast error larger than that of forecast weight averaging. Finally, this result also holds true, marginally, when applied to business and economic empirical data sets: the Gross Domestic Product (GDP) growth rate, the Consumer Price Index (CPI) and the Average Lending Rate (ALR) of Malaysia.
76 FR 13580 - Bus Testing; Calculation of Average Passenger Weight and Test Vehicle Weight
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-14
...standing passenger from 1.5 to 1.75 square feet, and updating the Structural Strength and Distortion...dimensions for a standing passenger from 1.5 square feet of free floor space to 1.75 square feet of free floor space to acknowledge the...
Flexibility of spatial averaging in visual perception
Lombrozo, Tania; Judson, Jeff; MacLeod, Donald I.A
2005-01-01
The classical receptive field (RF) concept—the idea that a visual neuron responds to fixed parts and properties of a stimulus—has been challenged by a series of recent physiological results. Here, we extend these findings to human vision, demonstrating that the extent of spatial averaging in contrast perception is also flexible, depending strongly on stimulus contrast and uniformity. At low contrast, spatial averaging is greatest (about 11 min of arc) within uniform regions such as edges, as expected if the relevant neurons have orientation-selective RFs. At high contrast, spatial averaging is minimal. These results can be understood if the visual system is balancing a trade-off between noise reduction, which favours large areas of averaging, and detail preservation, which favours minimal averaging. Two distinct populations of neurons with hard-wired RFs could account for our results, as could the more intriguing possibility of dynamic, contrast-dependent RFs. PMID:15870034
Ultrahigh molecular weight aromatic siloxane polymers
NASA Technical Reports Server (NTRS)
Ludwick, L. M.
1982-01-01
The condensation of a diol with a silane in toluene yields a silphenylene-siloxane polymer. The reaction of stoichiometric amounts of the diol and silane produced products with molecular weights in the range 2.0 - 6.0 x 10 to the 5th power. The molecular weight of the product was greatly increased by a multistep technique, and the methodology for synthesis of high molecular weight polymers using a two-step procedure was refined. Polymers with weight-average molecular weights in excess of 1.0 x 10 to the 6th power were produced by this method. Two more reactive silanes, bis(pyrrolidinyl)dimethylsilane and bis(gamma-butyrolactam)dimethylsilane, are compared with the dimethylaminodimethylsilane in their ability to advance the molecular weight of the prepolymer. The polymers produced are characterized by intrinsic viscosity in tetrahydrofuran. Weight- and number-average molecular weights and polydispersity are determined by gel permeation chromatography.
C A Befort; E E Stewart; B K Smith; C A Gibson; D K Sullivan; J E Donnelly
2008-01-01
Objective: To examine weight loss maintenance among previous participants of a university-based behavioral weight management program and to compare behavioral strategies and perceived barriers between successful and unsuccessful maintainers. Method: Previous program participants (n=179) completed mailed surveys assessing current weight, weight control behaviors and perceived barriers to weight loss maintenance. Results: At 14.1±10.8 months following completion of treatment, survey respondents were on average 12.6±12.6 kg,
McCracken, Don Frederick
1955-01-01
II. Statistical comparison of White Leghorn, DeKalb 101 inbred hybrids, and Hyline 934 inbred hybrids, for body weight, egg weight, egg production, and feed efficiency. III. Correlation coefficients between total feed consumption and various hereditary characteristics in which the three breeding groups differ. FIGURES: 1. Average body weights by 4-week periods 2. Average egg weights by 4-week periods 3. Average egg production by 4-week periods Pounds...
14-Day Boxcar averaged Terra-CERES (Reflected Solar Radiation)
NSDL National Science Digital Library
Tom Bridgman
2001-06-20
This animation displays one year of Reflected Solar Radiation (RSR) Terra-CERES data (March 1, 2000 to May 25, 2001) with a 14-day boxcar average. At the endpoints the average is re-weighted for the smaller amount of available data. The data are at 2.5-degree resolution.
14-Day Boxcar averaged Terra-CERES (Outgoing Longwave Radiation)
NSDL National Science Digital Library
Tom Bridgman
2001-06-20
This animation displays one year of Outgoing Longwave Radiation (OLR) Terra-CERES data (March 1, 2000 to May 25, 2001) with a 14-day boxcar average. At the endpoints the average is re-weighted for the smaller amount of available data. The data are at 2.5-degree resolution.
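The endpoint re-weighting described in the two Terra-CERES animations above can be sketched as a truncated (re-normalized) moving average. This is a minimal 1-D illustration, not the actual gridded CERES processing; the function name and window handling are assumptions.

```python
from statistics import mean

def boxcar_average(data, window=14):
    # Centered moving average. Near the endpoints the window is
    # truncated and the mean is taken over however many samples
    # remain -- i.e. the average is re-weighted for the smaller
    # amount of data, as in the animations described above.
    half = window // 2
    out = []
    for i in range(len(data)):
        lo = max(0, i - half)
        hi = min(len(data), i + half + 1)
        out.append(mean(data[lo:hi]))
    return out

# A constant series is left unchanged, including at the endpoints.
flat = boxcar_average([1.0] * 30, window=14)
```

Because the mean is renormalized over the available samples, the smoothed series has the same length as the input and no edge values are dropped.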
Minimizing Average Shortest Path Distances via Shortcut Edge Addition
Meyerson, Adam W.
Minimizing Average Shortest Path Distances via Shortcut Edge Addition. Adam Meyerson and Brian ... typically use mesh networks since regular topologies are easier to manufacture. However, many pairs of nodes ... k shortcut edges (of length 0) whose addition minimizes the weighted average shortest path
Interpreting Bivariate Regression Coefficients: Going beyond the Average
ERIC Educational Resources Information Center
Halcoussis, Dennis; Phillips, G. Michael
2010-01-01
Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…
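The note's central observation, that each of these averages drops out of an intercept-only regression, can be illustrated directly: with weights w, the fitted constant of a weighted least-squares regression of y on 1 is sum(w*y)/sum(w), and transforming y first yields the geometric and harmonic means. A minimal sketch; the function name is illustrative, not from the note.

```python
import math

def intercept_only_wls(y, w):
    # Weighted least squares of y on a constant: the fitted intercept
    # is the weighted average sum(w_i * y_i) / sum(w_i).
    return sum(wi * yi for wi, yi in zip(w, y)) / sum(w)

y = [2.0, 8.0]
arithmetic = intercept_only_wls(y, [1.0, 1.0])                  # plain mean
geometric = math.exp(intercept_only_wls([math.log(v) for v in y], [1.0, 1.0]))
harmonic = 1.0 / intercept_only_wls([1.0 / v for v in y], [1.0, 1.0])
weighted = intercept_only_wls(y, [3.0, 1.0])                    # weighted mean
```

For y = [2, 8] this gives an arithmetic mean of 5, a geometric mean of 4, a harmonic mean of 3.2, and a 3:1 weighted mean of 3.5.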
Averaging Measurements with Hidden Correlations and Asymmetric Errors
Michael Schmelling
2000-06-02
Properties of weighted averages are studied for the general case that the individual measurements are subject to hidden correlations and have asymmetric statistical as well as systematic errors. Explicit expressions are derived for an unbiased average with a well defined error estimate.
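For context, the familiar special case that this paper generalizes, uncorrelated measurements with symmetric Gaussian errors, is the textbook inverse-variance weighted average. The sketch below shows only that baseline, not the paper's treatment of hidden correlations or asymmetric errors.

```python
import math

def inverse_variance_average(values, sigmas):
    # Textbook combination of uncorrelated measurements with symmetric
    # errors: weight each value by 1/sigma^2; the combined uncertainty
    # is 1/sqrt(sum of weights).
    weights = [1.0 / s ** 2 for s in sigmas]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    error = 1.0 / math.sqrt(sum(weights))
    return mean, error

combined, uncertainty = inverse_variance_average([10.0, 12.0], [1.0, 2.0])
```

The more precise measurement dominates: combining 10.0 ± 1.0 with 12.0 ± 2.0 gives 10.4, closer to the first value.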
Macleod, Maureen; Craigie, Angela M; Barton, Karen L; Treweek, Shaun; Anderson, Annie S
2013-07-01
Little is known about the response of post-partum women from deprived backgrounds to weight management interventions; however, behavioural intervention trials in disadvantaged communities are often characterised by recruitment difficulties. Recruitment and retention are key to the robust conduct of an effective trial, and exploratory work is essential prior to a definitive randomised controlled trial. This paper describes strategies used to recruit to the WeighWell feasibility study, which aimed to recruit 60 overweight or obese post-partum women living in areas of deprivation to a trial of a weight-loss intervention. Recruitment strategies included the following: (1) distribution of posters and 'business cards'; (2) newspaper advertisements; (3) visits to community groups; and (4) personalised letters of invitation sent via the National Health Service (NHS). Potential participants were screened for eligibility following response to a Freephone number. Body mass index was calculated using self-reported body weight and height. Over 6 months, 142 women responded, of whom 65 (46%) met the eligibility criteria. The most effective methods for recruiting eligible women and those who went on to complete the study (n = 36) were visits to community groups (37% and 42%, respectively), personalised letters (26% and 17%, respectively) and posters and 'business cards' (22% and 31%, respectively). These results emphasise the need to utilise a range of strategies beyond traditional NHS settings. Current approaches might be enhanced by sending personal letters of contact, via the General Practitioner, to women identified as eligible at post-natal discharge. Under-reporting of body weight by self-report suggests that a threshold lower than 25 kg/m2 should be utilised for screening purposes. PMID:22284216
NASA Technical Reports Server (NTRS)
1995-01-01
The Attitude Adjuster is a system for weight repositioning corresponding to a SCUBA diver's changing positions. Compact tubes on the diver's air tank permit controlled movement of lead balls within the Adjuster, automatically repositioning when the diver changes position. Manufactured by Think Tank Technologies, the system is light and small, reducing drag and energy requirements and contributing to lower air consumption. The Mid-Continent Technology Transfer Center helped the company with both technical and business information and arranged for the testing at Marshall Space Flight Center's Weightlessness Environmental Training Facility for astronauts.
NASA Technical Reports Server (NTRS)
Chen, Fei; Yates, David; LeMone, Margaret
2001-01-01
To understand the effects of land-surface heterogeneity and the interactions between the land-surface and the planetary boundary layer at different scales, we develop a multiscale data set. This data set, based on the Cooperative Atmosphere-Surface Exchange Study (CASES97) observations, includes atmospheric, surface, and sub-surface observations obtained from a dense observation network covering a large region on the order of 100 km. We use this data set to drive three land-surface models (LSMs) to generate multi-scale (with three resolutions of 1, 5, and 10 kilometers) gridded surface heat flux maps for the CASES area. Upon validating these flux maps with measurements from surface station and aircraft, we utilize them to investigate several approaches for estimating the area-integrated surface heat flux for the CASES97 domain of 71x74 square kilometers, which is crucial for land surface model development/validation and area water and energy budget studies. This research is aimed at understanding the relative contribution of random turbulence versus organized mesoscale circulations to the area-integrated surface flux at the scale of 100 kilometers, and identifying the most important effective parameters for characterizing the subgrid-scale variability for large-scale atmosphere-hydrology models.
Cosmological measures without volume weighting
Page, Don N, E-mail: don@phys.ualberta.ca [Theoretical Physics Institute, Department of Physics, University of Alberta, Room 238 CEB, 11322-89 Avenue, Edmonton, AB T6G 2G7 (Canada)
2008-10-15
Many cosmologists (myself included) have advocated volume weighting for the cosmological measure problem, weighting spatial hypersurfaces by their volume. However, this often leads to the Boltzmann brain problem, that almost all observations would be by momentary Boltzmann brains that arise very briefly as quantum fluctuations in the late universe when it has expanded to a huge size, so that our observations (too ordered for Boltzmann brains) would be highly atypical and unlikely. Here it is suggested that volume weighting may be a mistake. Volume averaging is advocated as an alternative. One consequence may be a loss of the argument that eternal inflation gives a nonzero probability that our universe now has infinite volume.
The influence of aquariums on weight in individuals with dementia.
Edwards, Nancy E; Beck, Alan M
2013-01-01
This study assessed whether individuals with dementia who observe aquariums increase the amount of food they consume and maintain body weight. The sample included 70 residents in dementia units within 3 extended care facilities in 2 states. The intervention included the introduction of an aquarium into each common dining area. A total increase of 196.9 g of daily food intake (25.0%) was noted from baseline to the end of the 10-week study. Resident body weight increased an average of 2.2 pounds during the study. Eight of 70 residents experienced a weight loss (mean = 1.89 lbs; equation included in the full-text article). People with advanced dementia responded to aquariums in their environment, documenting that attraction to the natural environment is so innate that it survives dementia. PMID:23138175
Weight loss attempts in adults: goals, duration, and rate of weight loss.
Williamson, D F; Serdula, M K; Anda, R F; Levy, A; Byers, T
1992-01-01
OBJECTIVES: Although attempted weight loss is common, little is known about the goals and durations of weight loss attempts and the rates of achieved weight loss in the general population. METHODS. Data were collected by telephone in 1989 from adults aged 18 years and older in 39 states and the District of Columbia. Analyses were carried out separately for the 6758 men and 14,915 women who reported currently trying to lose weight. RESULTS. Approximately 25% of the men respondents and 40% of the women respondents reported that they were currently trying to lose weight. Among men, a higher percentage of Hispanics (31%) than of Whites (25%) or Blacks (23%) reported trying to lose weight. Among women, however, there were no ethnic differences in prevalence. The average man wanted to lose 30 pounds and to weigh 178 pounds; the average woman wanted to lose 31 pounds and to weigh 133 pounds. Black women wanted to lose an average of 8 pounds more than did White women, but Black women's goal weight was 10 pounds heavier. The average rate of achieved weight loss was 1.4 pounds per week for men and 1.1 pounds per week for women; these averages, however, may reflect only the experience of those most successful at losing weight. CONCLUSIONS. Attempted weight loss is a common behavior, regardless of age, gender, or ethnicity, and weight loss goals are substantial; however, obesity remains a major public health problem in the United States. PMID:1503167
High average-power induction linacs
Prono, D.S.; Barrett, D.; Bowles, E.; Caporaso, G.J.; Chen, Yu-Jiuan; Clark, J.C.; Coffield, F.; Newton, M.A.; Nexsen, W.; Ravenscroft, D.
1989-03-15
Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of approximately 50-ns duration pulses to more than 100 MeV. In this paper we report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs.
NASA Technical Reports Server (NTRS)
Jessee, R. D.
1970-01-01
Averaging circuit provides a secondary control signal during inoperative periods of an intermittent primary control system. It can also provide an average pulse rate over a fixed time interval, such as in a digital frequency meter.
Weighted Pushdown Systems with Indexed Weight Domains
Minamide, Yasuhiko
Weighted Pushdown Systems with Indexed Weight Domains. Yasuhiko Minamide, Faculty of Engineering, Information and Systems, University of Tsukuba. Abstract: The reachability analysis of weighted pushdown systems ... of a weighted pushdown system is associated with an element of a bounded semiring representing the weight
Application Bayesian Model Averaging method for ensemble system for Poland
NASA Astrophysics Data System (ADS)
Guzikowski, Jakub; Czerwinska, Agnieszka
2014-05-01
The aim of the project is to evaluate methods for generating numerical ensemble weather predictions using meteorological data from the Weather Research & Forecasting (WRF) Model and calibrating these data by means of the Bayesian Model Averaging (WRF BMA) approach. We construct high-resolution short-range ensemble forecasts using meteorological data (temperature) generated by nine WRF models. The WRF models have 35 vertical levels and 2.5 km x 2.5 km horizontal resolution. The main emphasis is that the ensemble members use different parameterizations of the physical phenomena occurring in the boundary layer. To calibrate the ensemble forecast we use the Bayesian Model Averaging (BMA) approach. The BMA predictive Probability Density Function (PDF) is a weighted average of the predictive PDFs associated with the individual ensemble members, with weights that reflect each member's relative skill. As a test case we chose a heat wave with convective weather conditions over Poland from 23 July to 1 August 2013. From 23 to 29 July 2013 temperatures oscillated below or above 30 degrees Celsius at many meteorological stations and new temperature records were set. During this time a rise in the number of patients hospitalized with cardiovascular problems was registered. On 29 July 2013 an advection of moist tropical air masses over Poland caused a strong convection event with a mesoscale convective system (MCS). The MCS caused local flooding, damage to transport infrastructure, destroyed buildings and trees, injuries, and direct threats to life. A comparison of the meteorological data from the ensemble system with data recorded at 74 weather stations in Poland is made. We prepare a set of model-observation pairs. The data from the individual ensemble members and the median from the WRF BMA system are then evaluated using the deterministic statistical errors Root Mean Square Error (RMSE) and Mean Absolute Error (MAE).
To evaluate the probabilistic data, the Brier Score (BS) and Continuous Ranked Probability Score (CRPS) were used. Finally, a comparison between BMA-calibrated data and data from the ensemble members will be displayed.
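The BMA predictive PDF described above, a weighted average of the member PDFs, can be sketched with Gaussian member densities. The member values, spreads, and skill weights below are illustrative assumptions, not data from the study.

```python
import math

def normal_pdf(x, mu, sigma):
    # Gaussian density, used here as each ensemble member's predictive PDF.
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def bma_pdf(x, means, sigmas, weights):
    # BMA predictive density: a weighted average of the member PDFs,
    # with weights (summing to 1) reflecting each member's relative skill.
    return sum(w * normal_pdf(x, m, s) for w, m, s in zip(weights, means, sigmas))

# Hypothetical 3-member temperature forecast (degrees C) -- illustrative only.
member_means = [29.5, 30.2, 31.0]
member_spreads = [1.0, 1.2, 0.9]
skill_weights = [0.5, 0.3, 0.2]
density_at_30 = bma_pdf(30.0, member_means, member_spreads, skill_weights)
```

Because the skill weights sum to 1 and each member PDF integrates to 1, the mixture is itself a proper probability density.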
Weight structures, weight filtrations, weight spectral sequences, and weight complexes (for
Weight structures, weight filtrations, weight spectral sequences, and weight complexes (for ... basic notion is the new definition of a weight structure for a triangulated C. We prove that a weight ... to cohomology zero". For Hw being the heart of the weight structure we define a canonical conservative weakly
Model averaging, optimal inference, and habit formation
FitzGerald, Thomas H. B.; Dolan, Raymond J.; Friston, Karl J.
2014-01-01
Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function—the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge—that of determining which model or models of their environment are the best for guiding behavior. Bayesian model averaging—which says that an agent should weight the predictions of different models according to their evidence—provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent's behavior should show an equivalent balance. We hypothesize that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realizable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behavior. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focusing particularly upon the relationship between goal-directed and habitual behavior. PMID:25018724
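The core of Bayesian model averaging as described above, weighting each model's prediction by its posterior probability (proportional to its evidence under a uniform model prior), can be sketched as follows; the helper names are illustrative.

```python
import math

def model_weights(log_evidences):
    # Posterior model probabilities under a uniform model prior,
    # computed from log evidences with the max subtracted for
    # numerical stability.
    top = max(log_evidences)
    unnorm = [math.exp(le - top) for le in log_evidences]
    total = sum(unnorm)
    return [u / total for u in unnorm]

def averaged_prediction(predictions, log_evidences):
    # Weight each model's point prediction by its posterior probability.
    w = model_weights(log_evidences)
    return sum(wi * p for wi, p in zip(w, predictions))
```

Because evidence trades off accuracy against complexity, a model with much higher evidence receives nearly all the weight, while models with equal evidence are averaged evenly.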
Non-Homogeneous Fractal Hierarchical Weighted Networks
Dong, Yujuan; Dai, Meifeng; Ye, Dandan
2015-01-01
A model of fractal hierarchical structures that share the property of non-homogeneous weighted networks is introduced. These networks can be completely and analytically characterized in terms of the involved parameters, i.e., the size of the original graph Nk and the non-homogeneous weight scaling factors r1, r2, ..., rM. We also study the average weighted shortest path (AWSP), the average degree, and the average node strength on the non-homogeneous hierarchical weighted networks. Moreover, the AWSP is calculated rigorously. We show that the AWSP depends on the number of copies and the sum of all non-homogeneous weight scaling factors in the infinite network order limit. PMID:25849619
Averaging Models: Parameters Estimation with the R-Average Procedure
ERIC Educational Resources Information Center
Vidotto, G.; Massidda, D.; Noventa, S.
2010-01-01
The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto &…
Physical Theories with Average Symmetry
Roberto C. Alamino
2013-05-03
This Letter probes the existence of physical laws invariant only in average when subjected to some transformation. The concept of a symmetry transformation is broadened to include corruption by random noise and average symmetry is introduced by considering functions which are invariant only in average under these transformations. It is then shown that actions with average symmetry obey a modified version of Noether's Theorem with dissipative currents. The relation of this with possible violations of physical symmetries, as for instance Lorentz invariance in some quantum gravity theories, is briefly commented.
Average Speed and Unit Conversion
NSDL National Science Digital Library
2009-01-01
Students will determine average speeds from collected data and convert units for speed problems. Students first try to roll a ball at a prescribed average speed based on intuition; then, using unit conversion, they see how accurate the rolls really were.
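The two calculations the activity practices, average speed from data and unit conversion, can be sketched as follows; the numbers are illustrative.

```python
def average_speed(distance_m, time_s):
    # Average speed is total distance divided by total time.
    return distance_m / time_s

def mps_to_kmh(v):
    # 1 m/s = 3.6 km/h (3600 s per hour / 1000 m per km).
    return v * 3.6

roll = average_speed(12.0, 8.0)  # a 12 m roll taking 8 s -> 1.5 m/s
```

Converting the same roll to km/h multiplies by 3.6, giving 5.4 km/h.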
Weight structures, weight filtrations, weight spectral sequences, and weight complexes (for
Weight structures, weight filtrations, weight spectral sequences, and weight complexes (for ... and functors. Our basic notion is the new definition of a weight structure for a triangulated C_. We prove that a weight structure defines Postnikov towers of objects of C_; these towers
Weight trimming and propensity score weighting.
Lee, Brian K; Lessler, Justin; Stuart, Elizabeth A
2011-01-01
Propensity score weighting is sensitive to model misspecification and outlying weights that can unduly influence results. The authors investigated whether trimming large weights downward can improve the performance of propensity score weighting and whether the benefits of trimming differ by propensity score estimation method. In a simulation study, the authors examined the performance of weight trimming following logistic regression, classification and regression trees (CART), boosted CART, and random forests to estimate propensity score weights. Results indicate that although misspecified logistic regression propensity score models yield increased bias and standard errors, weight trimming following logistic regression can improve the accuracy and precision of final parameter estimates. In contrast, weight trimming did not improve the performance of boosted CART and random forests. The performance of boosted CART and random forests without weight trimming was similar to the best performance obtainable by weight trimmed logistic regression estimated propensity scores. While trimming may be used to optimize propensity score weights estimated using logistic regression, the optimal level of trimming is difficult to determine. These results indicate that although trimming can improve inferences in some settings, in order to consistently improve the performance of propensity score weighting, analysts should focus on the procedures leading to the generation of weights (i.e., proper specification of the propensity score model) rather than relying on ad-hoc methods such as weight trimming. PMID:21483818
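Weight trimming as studied here simply caps large propensity score weights at a threshold. A minimal sketch; the nearest-rank percentile rule and the choice of q are illustrative assumptions, since the paper notes the optimal trimming level is difficult to determine.

```python
def trim_weights(weights, max_weight):
    # Trim: cap each weight at the chosen threshold so that no single
    # observation can dominate the weighted estimate.
    return [min(w, max_weight) for w in weights]

def percentile_cap(weights, q=0.95):
    # Cap at the q-th empirical quantile (nearest-rank rule; an
    # illustrative convention, not the paper's prescription).
    ordered = sorted(weights)
    k = min(len(ordered) - 1, int(q * len(ordered)))
    return trim_weights(weights, ordered[k])
```

Trimming reduces variance from outlying weights at the cost of some bias, which is why the paper recommends focusing on correct propensity model specification first.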
Determinants of Low Birth Weight in Malawi: Bayesian Geo-Additive Modelling.
Ngwira, Alfred; Stanley, Christopher C
2015-01-01
Studies on factors of low birth weight in Malawi have neglected the flexible approach of using smooth functions for some covariates in models. Such a flexible approach reveals the detailed relationship of covariates with the response. The study aimed to investigate risk factors of low birth weight in Malawi by assuming a flexible approach for continuous covariates and a geographical random effect. A Bayesian geo-additive model for birth weight in kilograms and size of the child at birth (less than average, or average and higher), with district as a spatial effect, using the 2010 Malawi demographic and health survey data, was adopted. A Gaussian model for birth weight in kilograms and a binary logistic model for the binary outcome (size of child at birth) were fitted. Continuous covariates were modelled by penalized (p) splines and spatial effects were smoothed by the two-dimensional p-spline. The study found that the child's birth order and the mother's weight and height are significant predictors of birth weight. Secondary education for the mother, birth-order categories 2-3 and 4-5, a wealth index of richer family, and the mother's height were significant predictors of child size at birth. The area associated with low birth weight was Chitipa, and the areas with increased risk of less-than-average size at birth were Chitipa and Mchinji. The study found support for the flexible modelling of some covariates that clearly have nonlinear influences. Nevertheless, there is no strong support for inclusion of geographical spatial analysis. The spatial patterns, though, point to the influence of omitted variables with some spatial structure, or possibly epidemiological processes that account for this spatial structure, and the maps generated could be used for targeting development efforts at a glance. PMID:26114866
AVERAGE PREDICTIVE COMPARISONS FOR MODELS
Gelman, Andrew
on the values of the predictors. We consider various definitions based on averages over a population of interest ... whether a convicted felon received a prison sentence rather than a ...
High average power pockels cell
Daly, Thomas P. (Pleasanton, CA)
1991-01-01
A high average power Pockels cell is disclosed which reduces the effect of thermally induced strains in high average power laser technology. The Pockels cell includes an elongated, substantially rectangular crystalline structure formed from a KDP-type material to eliminate shear strains. The X- and Y-axes are oriented substantially perpendicular to the edges of the crystal cross-section and to the C-axis direction of propagation to eliminate shear strains.
76 FR 28998 - Implementation of Revised Passenger Weight Standards for Existing Passenger Vessels
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-19
...Implementation of Revised Passenger Weight Standards for Existing Passenger Vessels...Implementation of Revised Passenger Weight Standards for Existing Passenger Vessels...prior to a change in the assumed average weight per person standard that will become...
Effect of molecular weight on polyphenylquinoxaline properties
NASA Technical Reports Server (NTRS)
Jensen, Brian J.
1991-01-01
A series of polyphenyl quinoxalines with different molecular weight and end-groups were prepared by varying monomer stoichiometry. Thus, 4,4'-oxydibenzil and 3,3'-diaminobenzidine were reacted in a 50/50 mixture of m-cresol and xylenes. Reaction concentration, temperature, and stir rate were studied and found to have an effect on polymer properties. Number and weight average molecular weights were determined and correlated well with viscosity data. Glass transition temperatures were determined and found to vary with molecular weight and end-groups. Mechanical properties of films from polymers with different molecular weights were essentially identical at room temperature but showed significant differences at 232 C. Diamine terminated polymers were found to be much less thermooxidatively stable than benzil terminated polymers when aged at 316 C even though dynamic thermogravimetric analysis revealed only slight differences. Lower molecular weight polymers exhibited better processability than higher molecular weight polymers.
Molecular relaxation study of polystyrene: influence of temperature, draw rate and molecular weight
Pezolet, Michel
Molecular relaxation study of polystyrene: influence of temperature, draw rate and molecular weight ... different polystyrene samples, four monodisperse, of weight-average molecular weight ranging from 210 000 ... relaxation time (t1), which is of the order of seconds, is independent of average molecular weight (Mw
Mining Weighted Association Rules without Preassigned Weights
Bai, Fengshan
Mining Weighted Association Rules without Preassigned Weights Ke Sun and Fengshan Bai Abstract--Association rule mining is a key issue in data mining. However, the classical models ignore the difference between the transactions, and the weighted association rule mining does not work on databases with only binary attributes
Efficient brightness averaging of heterogeneous achromatic patches.
Kimura, Eiji; Takano, Yusuke
2015-09-01
Mean brightness in a variegated region may work as a clue to illumination intensity over the region and play an important role in the perception of object lightness. This study investigated whether brightness can be efficiently averaged for heterogeneous achromatic patches. Experiment 1 investigated discrimination thresholds for mean brightness between two arrays of 12 heterogeneous patches of different luminances. The thresholds were compared to brightness discrimination thresholds between two arrays of 12 homogeneous patches and to those between two single patches. The two arrays (or patches) were simultaneously presented for 200 msec and followed by a pattern mask. Results showed that mean brightness judgments for heterogeneous arrays were as accurate as simple brightness comparison for single patches, although they were slightly worse than brightness judgments for homogeneous arrays. This finding is consistent with efficient brightness averaging of different luminance patches. However, additional experiments revealed that inexperienced naive observers may use shortcuts for mean brightness judgments; they tended to choose as the brighter array the one containing a highest luminance patch or the one consisting of the larger number of patches. To investigate the effects of these confounding factors, Experiment 2 measured discrimination thresholds for mean brightness between two arrays composed of different numbers of heterogeneous patches (6 vs. 12 or 9 vs. 12). The highest luminance patch was included in the array consisting of either the smaller or the larger number of patches, and thus using this clue for mean judgments would lead to highly biased thresholds. Results were consistent with brightness averaging, but a small bias (varying in the magnitude among observers) was found to choose the array containing the highest luminance patch. 
Overall, the present findings suggest that brightness can be efficiently averaged, but with a greater weight to the highest luminance. Meeting abstract presented at VSS 2015. PMID:26326319
Pregnancy Weight Gain Calculator
You should gain weight ... more from each food group. Related tools: SuperTracker, What's Cooking?, Daily Food Plans, BMI Calculator.
Clayton, Dale H.
live egg indices and mean monthly gonadal weights (in mg/100 g of body weight) was taken ... weight (in mg/100 g body weight). Hence, path coefficient analysis was employed to obtain a clearer picture. [Figure: testis weight for M. eurysternus, Brueelia sp., and S. bannoo, May through October]
String-Averaging Projected Subgradient Methods for Constrained
Censor, Yair
unconstrained objective function descent steps by moving from x to x := x - t f'(x) and then regaining feasibility ... Dynamic String-Averaging Projection (DSAP) methods wherein iteration-index-dependent variable strings and variable weights ... f is a convex objective function mapping from the n-dimensional Euclidean space into the reals and C is a given
Determining GPS average performance metrics
NASA Technical Reports Server (NTRS)
Moore, G. V.
1995-01-01
Analytic and semi-analytic methods are used to show that users of the GPS constellation can expect performance variations based on their location. Specifically, performance is shown to be a function of both altitude and latitude. These results stem from the fact that the GPS constellation is itself non-uniform. For example, GPS satellites are over four times as likely to be directly over Tierra del Fuego than over Hawaii or Singapore. Inevitable performance variations due to user location occur for ground, sea, air and space GPS users. These performance variations can be studied in an average relative sense. A semi-analytic tool which symmetrically allocates GPS satellite latitude belt dwell times among longitude points is used to compute average performance metrics. These metrics include average number of GPS vehicles visible, relative average accuracies in the radial, intrack and crosstrack (or radial, north/south, east/west) directions, and relative average PDOP or GDOP. The tool can be quickly changed to incorporate various user antenna obscuration models and various GPS constellation designs. Among other applications, tool results can be used in studies to: predict locations and geometries of best/worst case performance, design GPS constellations, determine optimal user antenna location and understand performance trends among various users.
The causal meaning of Fisher’s average effect
LEE, JAMES J.; CHOW, CARSON C.
2013-01-01
Summary In order to formulate the Fundamental Theorem of Natural Selection, Fisher defined the average excess and average effect of a gene substitution. Finding these notions to be somewhat opaque, some authors have recommended reformulating Fisher’s ideas in terms of covariance and regression, which are classical concepts of statistics. We argue that Fisher intended his two averages to express a distinction between correlation and causation. On this view, the average effect is a specific weighted average of the actual phenotypic changes that result from physically changing the allelic states of homologous genes. We show that the statistical and causal conceptions of the average effect, perceived as inconsistent by Falconer, can be reconciled if certain relationships between the genotype frequencies and non-additive residuals are conserved. There are certain theory-internal considerations favouring Fisher’s original formulation in terms of causality; for example, the frequency-weighted mean of the average effects equaling zero at each locus becomes a derivable consequence rather than an arbitrary constraint. More broadly, Fisher’s distinction between correlation and causation is of critical importance to gene-trait mapping studies and the foundations of evolutionary biology. PMID:23938113
... weight. That's because metabolism (how you burn the calories you eat) can slow down with age. ... eat and drink. To lose weight, burn more calories than you eat and drink. To gain weight, ...
Weight loss surgery helps people with extreme obesity to lose weight. It may be an option if you cannot ... caused by obesity. There are different types of weight loss surgery. They often limit the amount of food you ...
... to change. Changing old habits can be hard. Trimming down to a healthier weight involves making ... to manage your weight, but it can be hard to actually do it. Weight-management experts can ...
Evaluations of average level spacings
Liou, H.I.
1980-01-01
The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, detecting a complete sequence of levels without mixing other parities is extremely difficult, if not totally impossible. Most methods derive the average level spacings by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both the distributions of level widths and positions is discussed extensively, with an example of 168Er data. 19 figures, 2 tables.
Dryer, Rachel; Ware, Nicole
2014-01-01
Purpose: To identify beliefs held by the general public regarding causes of weight gain, weight prevention strategies, and barriers to weight management; and to examine whether such beliefs predict the actual body mass of participants. Methods: A questionnaire-based survey was administered to participants recruited from regional and metropolitan areas of Australia. The questionnaire obtained demographic information, height, and weight, as well as beliefs about causes of weight gain, weight prevention strategies, and barriers to weight management. Results: The sample consisted of 376 participants (94 males, 282 females) between the ages of 18 and 88 years (mean age = 43.25, SD = 13.64). The range and nature of the belief dimensions identified suggest that the Australian public has an understanding of the interaction between internal and external factors that contribute to weight gain but also prevent successful weight management. Beliefs about prevention strategies and barriers to effective weight management were found to predict the participants’ actual body mass, even after controlling for demographic characteristics. Conclusions: The general public has a good understanding of the multiple factors contributing to weight gain and successful weight management. However, this understanding may not necessarily lead to individuals adopting the required lifestyle changes that result in the achievement or maintenance of healthy weight levels. PMID:25750768
New applications for high average power beams
Neau, E.L.; Turman, B.N.; Patterson, E.L.
1993-08-01
The technology base formed by the development of high peak power simulators, laser drivers, FELs, and ICF drivers from the early 1960s through the late 1980s is being extended to high average power short-pulse machines with the capabilities of supporting new types of manufacturing processes and performing new roles in environmental cleanup applications. This paper discusses a process for identifying and developing possible commercial applications, specifically those requiring very high average power levels of hundreds of kilowatts to perhaps megawatts. The authors discuss specific technology requirements and give examples of application development efforts. The application development work is directed at areas that can possibly benefit from the high specific energies attainable with short-pulse machines.
Walker, Lawrence R.
Body Weight Relationships in Early Marriage: Weight Relevance, Weight Comparisons, and Weight Talk
Bove, Caron F.; Sobal, Jeffery
2011-01-01
This investigation uncovered processes underlying the dynamics of body weight and body image among individuals involved in nascent heterosexual marital relationships in Upstate New York. In-depth, semi-structured qualitative interviews conducted with 34 informants, 20 women and 14 men, just prior to marriage and again one year later were used to explore continuity and change in cognitive, affective, and behavioral factors relating to body weight and body image at the time of marriage, an important transition in the life course. Three major conceptual themes operated in the process of developing and enacting informants’ body weight relationships with their partner: weight relevance, weight comparisons, and weight talk. Weight relevance encompassed the changing significance of weight during early marriage and included attracting and capturing a mate, relaxing about weight, living healthily, and concentrating on weight. Weight comparisons between partners involved weight relativism, weight competition, weight envy, and weight role models. Weight talk employed pragmatic talk, active and passive reassurance, and complaining and criticizing. Concepts emerging from this investigation may be useful in designing future studies of and approaches to managing body weight in adulthood. PMID:21864601
Mood and Weight Loss in a Behavioral Treatment Program.
ERIC Educational Resources Information Center
Wing, Rena R.; And Others
1983-01-01
Evaluated the relationship between mood and weight loss for 76 patients participating in two consecutive behavioral treatment programs. Weight losses averaged 12.2 pounds (5.55 kg) during the 10-week program. Positive changes in mood were reported during this interval, and these changes appeared to be related to changes in weight. (Author/RC)
Recovery of petroleum with chemically treated high molecular weight polymers
Gibb, C.L.; Rhudy, J.S.
1980-11-18
Plugging of reservoirs with high molecular weight polymers (e.g., partially hydrolyzed polyacrylamide) is overcome by chemically treating a polymer having an excessively high average molecular weight with an oxidizing chemical (e.g., sodium hypochlorite) prior to injection into a reservoir, and thereafter incorporating a reducing chemical (e.g., sodium sulfite) to stop degradation of the polymer when the desired lower average molecular weight and flooding characteristics are attained.
YOSHIO SANO
We introduce a generalization of competition graphs, called weighted competition graphs. The weighted competition graph of a digraph D = (V, A), denoted by Cw(D), is an edge-weighted graph (G, w) such that G = (V, E) is the competition graph of D, and the weight w(e) of an edge e = xy ∈ E is the number of
ERIC Educational Resources Information Center
Ryan, Kevin Michael
2011-01-01
Research on syllable weight in generative phonology has focused almost exclusively on systems in which weight is treated as an ordinal hierarchy of clearly delineated categories (e.g. light and heavy). As I discuss, canonical weight-sensitive phenomena in phonology, including quantitative meter and quantity-sensitive stress, can also treat weight…
Weighted distributed hash tables
Christian Schindelhauer; Gunnar Schomaker
2005-01-01
We present two methods for weighted consistent hashing, also known as weighted distributed hash tables. The first method, called the Linear Method, combines the standard consistent hashing introduced by Karger et al. [9] with a linear weighted distance measure. By using node copies and different partitions of the hash space, the balance of this scheme approximates the fair weight relationship with
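The node-copies idea mentioned in the abstract can be sketched as follows. This is a generic virtual-node scheme on a standard consistent-hashing ring, not a reproduction of the authors' Linear Method, and the node names and weights are invented.

```python
import hashlib
from bisect import bisect_right

def h(key: str) -> int:
    # Map a string to a point on the hash ring [0, 2^32).
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % (1 << 32)

class WeightedRing:
    """Consistent hashing where each node gets ring copies proportional to its weight."""
    def __init__(self, weights: dict, copies_per_unit: int = 100):
        self.ring = sorted(
            (h(f"{node}#{i}"), node)
            for node, w in weights.items()
            for i in range(int(w * copies_per_unit))
        )
        self.points = [p for p, _ in self.ring]

    def lookup(self, key: str) -> str:
        # First ring point clockwise from the key's hash (wrapping around).
        i = bisect_right(self.points, h(key)) % len(self.ring)
        return self.ring[i][1]

ring = WeightedRing({"a": 1.0, "b": 2.0})
counts = {"a": 0, "b": 0}
for k in range(10000):
    counts[ring.lookup(f"key{k}")] += 1
# node "b", with twice the weight, should receive roughly twice as many keys
```

Doubling a node's weight doubles its number of ring positions, so its expected share of keys doubles while most existing keys stay on their old nodes.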
Averaging Robertson-Walker Cosmologies
Iain A. Brown; Georg Robbers; Juliane Behrend
2009-09-10
The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the Buchert averaging formalism to linear Robertson-Walker universes containing matter, radiation and dark energy and evaluate numerically the discrepancies between the assumed and the averaged behaviour, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h=0.701. For the LCDM concordance model, the backreaction is of the order of Omega_eff~4x10^-6, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10^-8 and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff<-1/3 can be found for strongly phantom models.
Multispecies weighted Hurwitz numbers
Harnad, J
2015-01-01
The construction of hypergeometric 2D Toda $\tau$-functions as generating functions for weighted Hurwitz numbers is extended to multispecies families. Both the enumerative geometrical significance of multispecies weighted Hurwitz numbers, as weighted enumerations of branched coverings of the Riemann sphere, and their combinatorial significance, in terms of weighted paths in the Cayley graph of $S_n$, are derived. The particular case of multispecies quantum weighted Hurwitz numbers is studied in detail.
PREVENTING WEIGHT REGAIN AFTER WEIGHT LOSS
Technology Transfer Automated Retrieval System (TEKTRAN)
For most dieters, a regaining of lost weight is an all too common experience. Indeed, virtually all interventions for weight loss show limited or even poor long-term effectiveness. This sobering reality was reflected in a comprehensive review of nonsurgical treatments of obesity conducted by the Ins...
NASA Astrophysics Data System (ADS)
Ding, F.; Theobald, M.; Vollmer, B.; Savtchenko, A. K.; Hearty, T. J.; Esfandiari, A. E.
2012-12-01
Producing timely and accurate water forecasts and information is the mission of the National Weather Service River Forecast Centers (NWS RFCs) of the National Oceanic and Atmospheric Administration (NOAA). The river forecast system in RFCs requires average surface temperature in the fixed 6-hour periods 0000-0600, 0600-1200, 1200-1800, and 1800-0000 UTC. The current logic of RFC temperature forecasting relies on ingest of point values of daytime maximum and nighttime minimum temperature; the mean temperature for each 6-hour period is estimated from a weighted average of the daytime maximum and nighttime minimum. The Atmospheric Infrared Sounder (AIRS) is the first high spectral resolution infrared sounder on board the Aqua satellite, which was launched in May 2002 and follows a Sun-synchronous polar orbit; it is designed to produce high resolution atmospheric profiles and surface atmospheric parameters. Because Aqua crosses the equator at about 1330 and 0130 local time, the AIRS-retrieved surface temperature may represent daytime maximum and nighttime minimum values. Compared to point observations from surface weather stations, which are often sparse over less-populated areas and unevenly distributed, satellites may obtain better area-averaged observations. This study assesses the potential of using AIRS-retrieved surface temperature to forecast 6-hour average temperature for NWS RFCs. The California Nevada RFC was selected because of the poor coverage of surface observations in its mountainous region and its spring snowmelt. The study focuses on the March-May spring season, when water from snowpack melting often plays an important role in flooding. AIRS-retrieved temperatures and a surface weather station data set will be used to derive statistical weighting coefficients for the 6-hour average temperature forecast. The resulting forecast biases and errors will be the main indicators of potential usage. All study results will be presented at the meeting.
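The weighted-average form used by the forecast logic, T6h ≈ a·Tmax + b·Tmin, can be fitted by least squares once matched retrievals and observations are in hand. The sketch below uses invented temperatures purely to show the fitting step; it is not the RFC procedure itself.

```python
import numpy as np

# Hypothetical training data: daytime max, nighttime min, and the observed
# 6-hour mean temperature (deg C) for a few days; all values are invented.
tmax = np.array([18.0, 21.0, 15.0, 24.0, 19.5])
tmin = np.array([6.0, 8.0, 3.0, 10.0, 7.0])
t6h = np.array([13.0, 15.5, 10.0, 18.0, 14.0])

# Least-squares fit of T6h ~ a*Tmax + b*Tmin, the weighted-average form
# described in the abstract.
A = np.column_stack([tmax, tmin])
(a, b), *_ = np.linalg.lstsq(A, t6h, rcond=None)
predicted = a * tmax + b * tmin
```

With real data one would fit separate coefficient pairs for each of the four 6-hour periods, since the mix of daytime and nighttime conditions differs between them.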
NSDL National Science Digital Library
This is a task from the Illustrative Mathematics website that is one part of a complete illustration of the standard to which it is aligned. Each task has at least one solution and some commentary that addresses important aspects of the task and its potential use. Here are the first few lines of the commentary for this task: John makes DVDs of his friend’s shows. He has realized that, because of his fixed costs, his average cost per DVD depends on the number of DVDs he prod...
Ensemble averaging of acoustic data
NASA Technical Reports Server (NTRS)
Stefanski, P. K.
1982-01-01
A computer program called Ensemble Averaging of Acoustic Data is documented. The program samples analog data, analyzes the data, and displays them in the time and frequency domains. Hard copies of the displays are the program's output. The documentation includes a description of the program and detailed user instructions. This software was developed for use on the Ames 40- by 80-Foot Wind Tunnel's Dynamic Analysis System, consisting of a PDP-11/45 computer, two RK05 disk drives, a Tektronix 611 keyboard/display terminal, an FPE-4 Fourier Processing Element, and an analog-to-digital converter.
NASA Astrophysics Data System (ADS)
Afonina, I. A.; Kleptsyna, E. S.; Petukhov, V. L.; Patrashkov, S. A.
2003-05-01
Copper plays an important part in the bodies of living beings, but both high and low Cu levels may cause human and animal diseases. Some East Siberian areas are characterized by Cu pollution [1]. Five groups of hens were formed: group 1 as control and groups 2-5 as experimental. For a month, the hens in the experimental groups were given drinking water in which the Cu content was 5, 10, 20, and 30 times higher than the upper limit (UL). The weight of the hens in groups 1-3 was almost unchanged during the experiment. A weight decrease (from 2020 to 1656 g) was detected in group 4 (20 UL) during the first half of the month, and all but 3 hens in group 4 died during the last 2 weeks. In group 5 (30 UL), all the hens died after 2 to 14 days. Thus, high Cu concentrations (20-30 UL) cause weight reduction in hens and their death.
NASA Astrophysics Data System (ADS)
Colorado, G.; Salinas, J. A.; Cavazos, T.; de Grau, P.
2013-05-01
Precipitation simulations from 15 CMIP5 GCMs were combined in a weighted ensemble using the Reliable Ensemble Averaging (REA) method, obtaining the weight of each model. This was done for a historical period (1961-2000) and for future emissions based on low (RCP4.5) and high (RCP8.5) radiative forcing for the period 2075-2099. The annual cycles of the simple ensemble of the historical GCM simulations, the historical REA average, and the Climate Research Unit (CRU TS3.1) database were compared in four zones of México. For precipitation we can see the improvement from using the REA method, especially in the two northern zones of México, where the REA average is closer to the observations (CRU) than the simple average. In the southern zones there is also an improvement, but it is not as marked as in the north; in particular, in the southeast the REA average reproduces the annual cycle qualitatively well but greatly underestimates the mid-summer drought. The main reason is that precipitation is underestimated by all the models, and the mid-summer drought does not even exist in some of them. In the REA average of the future scenarios, as expected, the most drastic decrease in precipitation was simulated under RCP8.5, especially in the monsoon area and in the south of México in summer and in winter. In central and southern México, however, the same scenario simulates an increase of precipitation in autumn.
Differential absorption lidar signal averaging
NASA Technical Reports Server (NTRS)
Grant, William B.; Brothers, Alan M.; Bogan, James R.
1988-01-01
This paper presents experimental results using an atmospheric backscatter dual CO2 laser DIAL. It is shown that DIAL signals can be averaged to obtain an N^(-1/2) decrease in the standard deviation of the ratio of backscattered returns from two lasers, where N is the number of DIAL signals averaged, and that such a lidar system can make measurements of gas concentrations with a precision of 0.7 percent in absorptance over 75 m in a short measurement time when the signal strength is high. Factors that eventually limit the rate of improvement in the SNR, such as changes in the ratio of the absorption and/or backscatter at the two laser frequencies and background noise, are discussed. In addition, it is noted that DIAL measurements made using hard-target backscatter often show departures from the N^(-1/2) improvement in the standard deviation because they are further limited by the combined effects of atmospheric turbulence and speckle (since the relative reproducibility of the speckle pattern on the receiver gives rise to correlations of the lidar signals).
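The N^(-1/2) averaging behaviour described above is easy to verify numerically: the standard deviation of the mean of N independent ratio measurements falls as N^(-1/2). The noise level and sample counts below are arbitrary, chosen only to make the scaling visible.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated ratio measurements around a true value of 1.0 with 5% noise.
# The standard deviation of the mean of n samples should fall as n^(-1/2).
def std_of_mean(n, trials=2000):
    samples = 1.0 + rng.normal(0, 0.05, size=(trials, n))
    return samples.mean(axis=1).std()

s1, s100 = std_of_mean(1), std_of_mean(100)
# expect s100 to be roughly s1 / 10, i.e. 100^(-1/2) of the single-shot spread
```

Correlated noise sources, such as the turbulence and speckle effects noted in the abstract, would flatten this improvement curve, which is exactly the departure the paper reports for hard-target returns.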
Model Averaging for Improving Inference from Causal Diagrams.
Hamra, Ghassan B; Kaufman, Jay S; Vahratian, Anjel
2015-01-01
Model selection is an integral, yet contentious, component of epidemiologic research. Unfortunately, there remains no consensus on how to identify a single, best model among multiple candidate models. Researchers may be prone to selecting the model that best supports their a priori, preferred result; a phenomenon referred to as "wish bias". Directed acyclic graphs (DAGs), based on background causal and substantive knowledge, are a useful tool for specifying a subset of adjustment variables to obtain a causal effect estimate. In many cases, however, a DAG will support multiple, sufficient or minimally-sufficient adjustment sets. Even though all of these may theoretically produce unbiased effect estimates they may, in practice, yield somewhat distinct values, and the need to select between these models once again makes the research enterprise vulnerable to wish bias. In this work, we suggest combining adjustment sets with model averaging techniques to obtain causal estimates based on multiple, theoretically-unbiased models. We use three techniques for averaging the results among multiple candidate models: information criteria weighting, inverse variance weighting, and bootstrapping. We illustrate these approaches with an example from the Pregnancy, Infection, and Nutrition (PIN) study. We show that each averaging technique returns similar, model averaged causal estimates. An a priori strategy of model averaging provides a means of integrating uncertainty in selection among candidate, causal models, while also avoiding the temptation to report the most attractive estimate from a suite of equally valid alternatives. PMID:26270672
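Two of the three averaging techniques named in the abstract, information-criterion (Akaike) weighting and inverse-variance weighting, can be sketched in a few lines. The effect estimates, standard errors, and AIC values below are invented for illustration and do not come from the PIN study.

```python
import numpy as np

# Hypothetical effect estimates and standard errors from three DAG-derived
# adjustment sets; all numbers are illustrative only.
estimates = np.array([0.42, 0.38, 0.47])
std_errors = np.array([0.10, 0.12, 0.09])

# Inverse-variance weighting: more precise models get more weight.
w_iv = 1.0 / std_errors**2
w_iv /= w_iv.sum()
beta_iv = w_iv @ estimates

# Akaike (information-criterion) weighting from hypothetical AIC values.
aic = np.array([210.3, 211.1, 209.8])
delta = aic - aic.min()
w_aic = np.exp(-0.5 * delta)
w_aic /= w_aic.sum()
beta_aic = w_aic @ estimates
```

Both weight vectors sum to one, and each averaged estimate lies within the range of the candidate estimates, which is the sense in which averaging sidesteps the choice of a single "best" model.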
Effectiveness and Impact of Corporate Average Fuel Economy Standards
NSDL National Science Digital Library
2001-01-01
Several new and forthcoming books published by the National Academies Press (NAP) can now be read online through NAP's OpenBook feature, which allows readers to view the full text of books in HTML. The first listed here, "Effectiveness and Impact of Corporate Average Fuel Economy (CAFE) Standards", is an unedited pre-print. It gives the results of the National Academies Transportation Research Board's recent investigation into the impacts of the CAFE program, which was passed in 1975 in response to oil shortages and required that auto manufacturers increase the sales-weighted average fuel economy for passenger cars and light-duty trucks.
Direct measurement of the resistivity weighting function
NASA Astrophysics Data System (ADS)
Koon, D. W.; Chan, Winston K.
1998-12-01
We have directly measured the resistivity weighting function—the sensitivity of a four-wire resistance measurement to local variations in resistivity—for a square specimen of photoconducting material. This was achieved by optically perturbing the local resistivity of the specimen while measuring the effect of this perturbation on its four-wire resistance. The weighting function we measure for a square geometry with electrical leads at its corners agrees well with calculated results, displaying two symmetric regions of negative weighting which disappear when van der Pauw averaging is performed.
Weighted Watson-Crick automata
Tamrin, Mohd Izzuddin Mohd [Department of Information System, Kulliyyah of Information and Communication Technology, International Islamic University Malaysia, 50728 Gombak, Selangor (Malaysia); Turaev, Sherzod; Sembok, Tengku Mohd Tengku [Department of Computer Science, Kulliyyah of Information and Communication Technology, International Islamic University Malaysia, 50728 Gombak, Selangor (Malaysia)
2014-07-10
There is a tremendous body of work in biotechnology, especially in the area of DNA molecules. The computing community is attempting to develop smaller computing devices through computational models based on operations performed on DNA molecules. A Watson-Crick automaton, a theoretical model for DNA-based computation, has two reading heads and works on double-stranded sequences of the input related by a complementarity relation similar to the Watson-Crick complementarity of DNA nucleotides. Over time, several variants of Watson-Crick automata have been introduced and investigated. However, they cannot be used as suitable DNA-based computational models for molecular stochastic processes and fuzzy processes that are related to important practical problems such as molecular parsing, gene disease detection, and food authentication. In this paper we define new variants of Watson-Crick automata, called weighted Watson-Crick automata, developing theoretical models for molecular stochastic and fuzzy processes. We define weighted Watson-Crick automata by adapting weight restriction mechanisms associated with formal grammars and automata. We also study the generative capacities of weighted Watson-Crick automata, including probabilistic and fuzzy variants, and show that the weighted variants increase their generative power.
Preventing Weight Gain
... of cancer. Choosing an Eating Plan to Prevent Weight Gain So, how do you choose a healthful ...
Weight Management and Calories
Why is weight management important? In addition to helping you feel and ... a health care provider to determine appropriate weight management for him or her. Because children and adolescents ...
ERIC Educational Resources Information Center
Clarke, Doug
1993-01-01
Describes an activity shared at an inservice teacher workshop and suitable for middle school in which students predict their ideal weight in kilograms based on tables giving ideal weights for given heights. (MDH)
Healthy Weight during Pregnancy
Volker Becker; Thorsten Poeschel
2007-01-31
In contrast to a still common belief, a steadily flowing hourglass changes its weight in the course of time. We will show that, nevertheless, it is possible to construct hourglasses that do not change their weight.
Measuring complexity through average symmetry
NASA Astrophysics Data System (ADS)
Alamino, Roberto C.
2015-07-01
This work introduces a complexity measure which addresses some conflicting issues between existing ones by using a new principle—measuring the average amount of symmetry broken by an object. It attributes low (although different) complexity to either deterministic or random homogeneous densities and higher complexity to the intermediate cases. This new measure is easily computable, breaks the coarse graining paradigm and can be straightforwardly generalized, including to continuous cases and general networks. By applying this measure to a series of objects, it is shown that it can be consistently used for both small scale structures with exact symmetry breaking and large scale patterns, for which, unlike similar measures, it consistently discriminates between repetitive patterns, random configurations and self-similar structures.
Topological quantization of ensemble averages
NASA Astrophysics Data System (ADS)
Prodan, Emil
2009-02-01
We define the current of a quantum observable and, under well-defined conditions, we connect its ensemble average to the index of a Fredholm operator. The present work builds on a formalism developed by Kellendonk and Schulz-Baldes (2004 J. Funct. Anal. 209 388) to study the quantization of edge currents for continuous magnetic Schrödinger operators. The generalization given here may be a useful tool to scientists looking for novel manifestations of the topological quantization. As a new application, we show that the differential conductance of atomic wires is given by the index of a certain operator. We also comment on how the formalism can be used to probe the existence of edge states.
Scheduling to Minimize Average Completion Time Revisited: Deterministic On-line Algorithms
Megow, Nicole
2004-02-06
We consider the scheduling problem of minimizing the average weighted completion time on identical parallel machines when jobs are arriving over time. For both the preemptive and the nonpreemptive setting, we show that ...
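For intuition, the offline single-machine special case of this objective is solved exactly by Smith's weighted-shortest-processing-time (WSPT) rule: schedule jobs in nondecreasing order of processing time divided by weight. The online parallel-machine setting studied above is harder; the sketch below covers only the offline single-machine case, with invented job data.

```python
def wspt_schedule(jobs):
    """jobs: list of (processing_time, weight) pairs.

    Returns the WSPT order and the total weighted completion time.
    Sorting by p/w (Smith's rule) is optimal for the offline
    single-machine problem.
    """
    order = sorted(jobs, key=lambda pw: pw[0] / pw[1])
    t, total = 0, 0
    for p, w in order:
        t += p            # completion time of this job
        total += w * t    # accumulate weighted completion time
    return order, total

jobs = [(3, 1), (1, 2), (2, 2)]  # (p, w); illustrative data
order, total = wspt_schedule(jobs)
# order: (1,2), (2,2), (3,1); completions 1, 3, 6; total = 2*1 + 2*3 + 1*6 = 14
```

Any swap of two adjacent jobs out of p/w order can only increase the objective, which is the classical exchange argument behind the rule.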
Flexible time domain averaging technique
NASA Astrophysics Data System (ADS)
Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng
2013-09-01
Time domain averaging (TDA) is essentially a comb filter; it cannot extract specified harmonics that may be caused by certain faults, such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to some extent. Several improved TDA methods have been proposed, but they cannot completely eliminate the waveform reconstruction error caused by PCE. To overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of the FTDA is first constructed by frequency domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which improves the calculating efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of the FTDA in signal de-noising, interpolation, and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed with it. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively; moreover, it improves the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments were also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of gear eccentricity, and it further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE but also provides a useful tool for fault symptom extraction in rotating machinery.
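For reference, conventional TDA, the comb-filter baseline that FTDA improves upon, is just a segment-and-average operation when the period is an exact integer number of samples. The signal parameters below are invented; this sketch does not implement the FTDA or its CZT step.

```python
import numpy as np

def time_domain_average(signal, period):
    """Conventional TDA: average an integer number of whole periods.

    Assumes `period` is an exact integer number of samples; the period
    cutting error discussed in the abstract arises when it is not.
    """
    n = len(signal) // period
    return signal[:n * period].reshape(n, period).mean(axis=0)

rng = np.random.default_rng(0)
t = np.arange(1000)
clean = np.sin(2 * np.pi * t / 50)        # periodic component, period 50 samples
noisy = clean + rng.normal(0, 1, t.size)  # buried in unit-variance noise
avg = time_domain_average(noisy, 50)
# averaging 20 periods reduces the noise standard deviation by about sqrt(20)
```

When the true period is a non-integer number of samples, each segment is cut slightly short or long, smearing the average; that accumulated misalignment is the PCE the paper addresses.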
SEA SURFACE TEMPERATURE MONTHLY AVERAGE AND ANOMALY CHARTS, NORTHEASTERN PACIFIC OCEAN, 1947
Part I - Sea surface temperature monthly average charts, northeastern Pacific Ocean. Part II - Sea
Monthly Average Temperature for Boston, MA
NSDL National Science Digital Library
The phenomenon is monthly average temperature data for Boston, MA from March 1872 until September 2000. In addition to monthly averages, the National Weather Service table also shows the yearly average temperature.
Empty and Elusive Averages in Performance Measurement.
ERIC Educational Resources Information Center
Bonetti, S. M.
1992-01-01
Two pitfalls are identified in the use of arithmetic averages for performance measurement in higher education. First, asymmetric average targeting rules involve a fallacy of composition; and, second, comparisons with adjusted averages involve a serious methodological error. (DB)
Theory of optimal weighting of data to detect climatic change
NASA Technical Reports Server (NTRS)
Bell, T. L.
1986-01-01
A search for climatic change predicted by climate models can easily yield unconvincing results because of 'climatic noise,' the inherent, unpredictable variability of time-averaged atmospheric data. A weighted average of data that maximizes the probability of detecting predicted climatic change is presented. To obtain the optimal weights, an estimate of the covariance matrix of the data from a prior data set is needed; this introduces additional sampling error into the method, which is taken into account here. A form of the weighted average is found whose probability distribution is independent of the true (but unknown) covariance statistics of the data and of the climate model prediction.
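Ignoring the sampling-error complication, the optimal weights take the classical matched-filter form w ∝ C⁻¹s, where s is the predicted change pattern and C the covariance of the climatic noise. The fingerprint and covariance below are invented, purely to illustrate the gain over a plain average.

```python
import numpy as np

# Hypothetical setup: s is the model-predicted climate-change "fingerprint"
# at 3 grid points; C is the covariance of the climatic noise. Invented values.
s = np.array([1.0, 0.5, 0.2])
C = np.array([[1.0, 0.3, 0.1],
              [0.3, 2.0, 0.2],
              [0.1, 0.2, 0.5]])

# Weights maximizing the signal-to-noise ratio of the weighted average w.x:
w = np.linalg.solve(C, s)
w /= w @ s  # normalize so the expected response to the signal is 1

# Detection SNR of a weight vector v applied to data with signal s, noise C.
snr = lambda v: (v @ s) ** 2 / (v @ C @ v)
uniform = np.ones(3) / 3
# the matched-filter weights beat the plain (unweighted) average
```

Points where the noise is large or strongly correlated with other points are automatically down-weighted, which is what a plain spatial mean cannot do.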
NASA Astrophysics Data System (ADS)
Zhang, S.; Zheng, X.; Chen, Z.; Chen, J.; Wu, G.; Yi, X.
2014-12-01
Atmospheric CO2 abundance data can be used to constrain surface carbon fluxes and evaluate prediction skills of ecosystem models. In this study a multimodel carbon assimilation system is developed for assimilating atmospheric CO2 abundance data into three ecosystem models and exploiting the diversity of prediction skills of these models. The assimilation approach is based on a modified ensemble Kalman filter (EnKF) which estimates the inflation factor of the forecast error with a maximum likelihood function. The Bayesian model averaging scheme infers best predictions of ecosystem carbon fluxes by weighting individual predictions based on their probabilistic likelihood measurements. The proposed system was used to estimate the terrestrial ecosystem carbon fluxes from 2000 to 2008 and evaluate ecosystem models in different areas of the globe and at different times.
A Weighted and Directed Interareal Connectivity Matrix for Macaque Cerebral Cortex
Markov, N. T.; Ercsey-Ravasz, M. M.; Ribeiro Gomes, A. R.; Lamy, C.; Magrou, L.; Vezoli, J.; Misery, P.; Falchier, A.; Quilodran, R.; Gariel, M. A.; Sallet, J.; Gamanut, R.; Huissoud, C.; Clavagnier, S.; Giroud, P.; Sappey-Marinier, D.; Barone, P.; Dehay, C.; Toroczkai, Z.; Knoblauch, K.; Van Essen, D. C.; Kennedy, H.
2014-01-01
Retrograde tracer injections in 29 of the 91 areas of the macaque cerebral cortex revealed 1,615 interareal pathways, a third of which have not previously been reported. A weight index (extrinsic fraction of labeled neurons [FLNe]) was determined for each area-to-area pathway. Newly found projections were weaker on average compared with the known projections; nevertheless, the 2 sets of pathways had extensively overlapping weight distributions. Repeat injections across individuals revealed modest FLNe variability given the range of FLNe values (standard deviation <1 log unit, range 5 log units). The connectivity profile for each area conformed to a lognormal distribution, where a majority of projections are moderate or weak in strength. In the G29 × 29 interareal subgraph, two-thirds of the connections that can exist do exist. Analysis of the smallest set of areas that collects links from all 91 nodes of the G29 × 91 subgraph (dominating set analysis) confirms the dense (66%) structure of the cortical matrix. The G29 × 29 subgraph suggests an unexpectedly high incidence of unidirectional links. The directed and weighted G29 × 91 connectivity matrix for the macaque will be valuable for comparison with connectivity analyses in other species, including humans. It will also inform future modeling studies that explore the regularities of cortical networks. PMID:23010748
[Evaluation of a weight reduction program: slender without diets].
Kiefer, I; Schoberberger, R; Kunze, M
1991-05-01
"Schlank ohne Diät" ("Weight Reduction Without Diet") is a strategy for normalizing body weight by influencing the multiple factors that appear to promote obesity. The basis of the therapy is the modification of nutritional habits. Self-control, especially the monitoring and recording of calorie intake and of energy expended through physical activity, is the key that trains every client to change his or her nutritional habits, helps to reduce body weight, and keeps normal body weight stable. In a retrospective study of 134 persons, 84 clients (62.69%) were able to reduce their body weight, 9 clients (6.72%) returned to their starting weight, and 41 clients (30.60%) gained weight during participation in the method. On average, 120 clients achieved a weight reduction of 5.98 kg during participation in the method. The loss of weight ranged from 1 to 31 kg per person. PMID:1897283
Do Diurnal Aerosol Changes Affect Daily Average Radiative Forcing?
Kassianov, Evgueni I.; Barnard, James C.; Pekour, Mikhail S.; Berg, Larry K.; Michalsky, Joseph J.; Lantz, K.; Hodges, G. B.
2013-06-17
Strong diurnal variability of aerosol has been observed frequently for many urban/industrial regions. How this variability may alter the direct aerosol radiative forcing (DARF), however, is largely unknown. To quantify changes in the time-averaged DARF, we perform an assessment of 29 days of high temporal resolution ground-based data collected during the Two-Column Aerosol Project (TCAP) on Cape Cod, which is downwind of metropolitan areas. We demonstrate that strong diurnal changes of aerosol loading (about 20% on average) have a negligible impact on the 24-h average DARF, when daily averaged optical properties are used to find this quantity. However, when there is a sparse temporal sampling of aerosol properties, which may preclude the calculation of daily averaged optical properties, large errors (up to 100%) in the computed DARF may occur. We describe a simple way of reducing these errors, which suggests the minimal temporal sampling needed to accurately find the forcing.
Do diurnal aerosol changes affect daily average radiative forcing?
NASA Astrophysics Data System (ADS)
Kassianov, Evgueni; Barnard, James; Pekour, Mikhail; Berg, Larry K.; Michalsky, Joseph; Lantz, Kathy; Hodges, Gary
2013-06-01
Strong diurnal variability of aerosol has been observed frequently for many urban/industrial regions. How this variability may alter the direct aerosol radiative forcing (DARF), however, is largely unknown. To quantify changes in the time-averaged DARF, we perform an assessment of 29 days of high temporal resolution ground-based data collected during the Two-Column Aerosol Project on Cape Cod, which is downwind of metropolitan areas. We demonstrate that strong diurnal changes of aerosol loading (about 20% on average) have a negligible impact on the 24-h average DARF when daily averaged optical properties are used to find this quantity. However, when there is a sparse temporal sampling of aerosol properties, which may preclude the calculation of daily averaged optical properties, large errors (up to 100%) in the computed DARF may occur. We describe a simple way of reducing these errors, which suggests the minimal temporal sampling needed to accurately find the forcing.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-19
...to clarify any misunderstandings regarding the NPRM published on March 14, 2011 (76 FR 13580). Furthermore, due to the complexity of the issues proposed in the NPRM, FTA is extending the comment period to June 15, 2011, to allow interested parties...
Demonstration of a Model Averaging Capability in FRAMES
NASA Astrophysics Data System (ADS)
Meyer, P. D.; Castleton, K. J.
2009-12-01
Uncertainty in model structure can be incorporated in risk assessment using multiple alternative models and model averaging. To facilitate application of this approach to regulatory applications based on risk or dose assessment, a model averaging capability was integrated with the Framework for Risk Analysis in Multimedia Environmental Systems (FRAMES) version 2 software. FRAMES is a software platform that allows the non-parochial communication between disparate models, databases, and other frameworks. Users have the ability to implement and select environmental models for specific risk assessment and management problems. Standards are implemented so that models produce information that is readable by other downstream models and accept information from upstream models. Models can be linked across multiple media and from source terms to quantitative risk/dose estimates. Parameter sensitivity and uncertainty analysis tools are integrated. A model averaging module was implemented to accept output from multiple models and produce average results. These results can be deterministic quantities or probability distributions obtained from an analysis of parameter uncertainty. Output from alternative models is averaged using weights determined from user input and/or model calibration results. A model calibration module based on the PEST code was implemented to provide FRAMES with a general calibration capability. An application illustrates the implementation, user interfaces, execution, and results of the FRAMES model averaging capabilities.
Averaging of nonlinearity-managed pulses.
Zharnitsky, Vadim; Pelinovsky, Dmitry
2005-09-01
We consider the nonlinear Schrödinger equation with nonlinearity management, which describes Bose-Einstein condensates under Feshbach resonance. By using an averaging theory, we derive the Hamiltonian averaged equation and compare it with other averaging methods developed for this problem. The averaged equation is used for analytical approximations of nonlinearity-managed solitons. PMID:16253000
Estimating Average Wind Velocity Along a Trajectory
NASA Technical Reports Server (NTRS)
Bertsch, P.
1986-01-01
Average Wind Velocity (VWAVE) program calculates average wind velocity over time for particular vehicle trajectory. Calculation based on wind profile, which is wind magnitude at various altitudes. Average of wind profile over altitude does not correlate well with actual apparent effect of wind. Wind profiles with low average velocities more severe than some wind profiles with high average velocities. VWAVE written in FORTRAN V for interactive execution.
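The distinction the abstract draws, between averaging a wind profile over altitude and averaging the wind the vehicle actually encounters in time, can be illustrated with a short sketch. The profile and trajectory below are invented, and VWAVE itself is a FORTRAN program whose exact method is not reproduced here:

```python
import numpy as np

# Hypothetical wind profile: magnitude (m/s) at tabulated altitudes (m)
profile_alt = np.array([0.0, 2000.0, 5000.0, 10000.0])
profile_wind = np.array([5.0, 12.0, 25.0, 40.0])

# Hypothetical trajectory: altitude sampled at uniform time steps (s)
traj_time = np.linspace(0.0, 100.0, 101)
traj_alt = 10000.0 * (traj_time / 100.0) ** 2  # slow early climb

# Wind encountered at each instant; with uniform time sampling the
# time average is a plain mean.
wind_along_traj = np.interp(traj_alt, profile_alt, profile_wind)
v_wave = float(wind_along_traj.mean())

# Averaging the profile over altitude instead weights all layers equally,
# ignoring how long the vehicle spends in each layer.
alt_grid = np.linspace(0.0, 10000.0, 1001)
v_alt_avg = float(np.interp(alt_grid, profile_alt, profile_wind).mean())
```

Because this vehicle spends most of its time at low altitude, the trajectory-weighted average is well below the altitude average, which is the abstract's point about low-average profiles sometimes being the more severe ones.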
Ali Afsahi; Jacob J. Rael; Arya Behzad; Michael Pan; Stephen Au; Adedayo Ojo; C. Paul Lee; Seema Butala Anand; Kevin Chien; Stephen Wu; Alireza Zolfaghari; John C. Leete; Long Tran; Keith A. Carter; Mohammad Nariman; Keno Wai-Ki Yeung; Walter Morton; Mark Gonikberg; Mukul Seth; Marcellus Forbes; Jay Pattin; Luis Gutierrez; Sumant Ranganathan; Ning Li; Eric Blecker; Tom Kwan; Mark Chambers; Maryam Rofougaran; Jason Trachewsky; Pieter Van Rooyen
2008-01-01
A low-power 802.11abg SoC which achieves the best reported sensitivity as well as lowest reported power consumption and utilizes an extensive array of auto calibrations is reported. This SoC utilizes a two-antenna array receiver to build a single weight combiner (SWC) system. A new signal-path Cartesian phase generation and combination technique is proposed that shifts the RF signal in 22.5deg
Data Point Averaging for Computational Fluid Dynamics Data
NASA Technical Reports Server (NTRS)
Norman, David, Jr. (Inventor)
2014-01-01
A system and method for generating fluid flow parameter data for use in aerodynamic heating analysis. Computational fluid dynamics data is generated for a number of points in an area on a surface to be analyzed. Sub-areas corresponding to areas of the surface for which an aerodynamic heating analysis is to be performed are identified. A computer system automatically determines a sub-set of the number of points corresponding to each of the number of sub-areas and determines a value for each of the number of sub-areas using the data for the sub-set of points corresponding to each of the number of sub-areas. The value is determined as an average of the data for the sub-set of points corresponding to each of the number of sub-areas. The resulting parameter values then may be used to perform an aerodynamic heating analysis.
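The grouping-and-averaging step the patent abstract describes can be sketched minimally; the point coordinates, values, and the sub-area assignment rule below are hypothetical:

```python
from collections import defaultdict

def subarea_averages(points, assign):
    """Average a flow parameter over the points falling in each sub-area.
    points: iterable of (x, y, value); assign: maps (x, y) -> sub-area id."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for x, y, value in points:
        key = assign(x, y)
        sums[key] += value
        counts[key] += 1
    return {key: sums[key] / counts[key] for key in sums}

# Hypothetical heat-flux samples on a surface split into two sub-areas
samples = [(0.1, 0.5, 10.0), (0.4, 0.2, 14.0), (1.2, 0.8, 30.0), (1.9, 0.1, 34.0)]
avg = subarea_averages(samples, lambda x, y: "left" if x < 1.0 else "right")
# avg == {"left": 12.0, "right": 32.0}
```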
Adjustment factors and genetic parameter estimates for yearling pelvic area in Limousin cattle
Cowley, Joel Douglas
1995-01-01
area were .26 for bulls and .30 for heifers indicating that pelvic area can be modified through selection. Genetic correlations between pelvic area and birth weight direct, weaning weight direct, yearling weight and yearling hip height converged at .16...
A multidisciplinary approach to weight control.
Hudiburgh, N K
1984-04-01
A college course in weight modification, which combines a nutritionally adequate, deficit kilocalorie diet with behavior modification and exercise, is described. Course content, weight loss data, and subjective appraisal of behavior modification techniques by students are presented. During one semester in which the course was offered, the 20 women enrolled lost an average of 10 lb. A one-year follow-up of 8 of the women revealed an average loss of 20 lb since the beginning of the course. Results compare favorably with those of other studies in which behavior modification was used as a component of a weight control program. In addition, the students were very positive regarding the benefits of the course in promoting sound nutrition and weight control practices. PMID:6707402
Psychological effects of weight retained after pregnancy.
Jenkin, W; Tiggemann, M
1997-01-01
This study is a prospective investigation of the effect of weight retained after pregnancy on weight satisfaction, self-esteem and depressive affect, utilising the framework provided by expectancy-value theory. Self-report data were obtained from 115 women who were in the last month of their first pregnancy, and then again a month following the birth. On average women were heavier four weeks after having their baby than they were prior to becoming pregnant, and were less satisfied with their post-natal weight and shape. They were also slightly heavier than they had anticipated, particularly in the case of the younger women. Actual post-natal weight proved the most important predictor of psychological well-being following birth. PMID:9253140
Web ensemble averages for retrieving relevant information from rejected Monte Carlo moves
NASA Astrophysics Data System (ADS)
Athènes, M.
2007-07-01
We study the relevance of including information about rejected Monte Carlo moves in path-sampling computations of free energies. For this purpose, we define webs as sets of paths linked by the path-sampling scheme and introduce an associated statistical ensemble. Within this web ensemble, we derive and test several statistical averages that make it possible to include information about configurational and path quantities belonging to the unselected trial moves. We numerically observe that retrieving this information does not always result in variance reduction, as theoretically predicted by Delmas and Jourdain. To explain the possible detrimental effect of retrieving information from web sampling, an action for the webs is introduced. The behaviour of the statistical variance is observed to correlate with an overlapping area of a web action histogram. This area represents the probability that a generated web is such that the difference of its action between the targeted and reference ensembles is lower than the corresponding difference of free energy. Variance reductions are numerically observed for increased areas, as is the case for the residence weight method proposed previously. More generally, web ensembles yield a rigorous framework for rationalizing existing methods and for deriving potentially new methods to retrieve relevant information from rejected trial moves.
Multi-scale modelling of flow in periodic solid structures through spatial averaging
NASA Astrophysics Data System (ADS)
Buckinx, Geert; Baelmans, Martine
2015-06-01
This paper presents spatially averaged Navier-Stokes equations for modelling macro-scale flow in devices with periodic solid structures such as fin and tube arrays. The properties of steady and unsteady periodically developed flow are investigated to assess different strategies for determining the closure terms in the macro-scale flow equations. It is shown that the spatial averaging technique requires an appropriate weighting function to ensure that the closure terms are spatially constant for periodically developed flow. Moreover, through an appropriate choice of the weighting function, the closure terms can be obtained by solving a local closure problem on a unit cell of the periodic structures. The theoretical framework of this paper is applied as multi-scale modelling technique for flow through a cylindrical tube array. This case study illustrates the advantages of the weighted spatial averaging technique over the volume averaging technique.
ERIC Educational Resources Information Center
Katch, Victor L.
This paper describes a number of factors which go into determining weight. The paper describes what calories are, how caloric expenditure is measured, and why caloric expenditure is different for different people. The paper then outlines the way the body tends to adjust food intake and exercise to maintain a constant body weight. It is speculated…
... in a person's diabetes management plan. Weight and Type 1 Diabetes If a person has type 1 diabetes but hasn't been treated yet, he or she often loses weight. In type 1 diabetes, the body can't use glucose (pronounced: GLOO- ...
What Is Weight Loss Surgery? For some people, being overweight is about more than just looks. People who are 100 or more pounds ... t make these plans work. Doctors may do weight loss surgery if someone has tried but failed to lose ...
... your age or your BMI is above 40. Weight loss surgery (bariatric surgery) is the most effective treatment for weight loss in women with a BMI greater than 40. I have ... signs of insulin resistance and/or obesity. A low-calorie diet and exercise may lead ...
... Started Success Stories Tips for Parents The Health Effects of Overweight & Obesity External Resources Breastfeeding IMMPaCt State and Local Programs ... weight and prevent weight gain. The Possible Health Effects from Having Obesity Having obesity can increase your chances of developing ...
ERIC Educational Resources Information Center
Iona, Mario
1975-01-01
Presents a summary and comparison of various views on the concepts of mass and weight. Includes a consideration of gravitational force in an inertial system and apparent gravitational force on a rotating earth. Discusses the units and methods for measuring mass and weight. (GS)
Technology Transfer Automated Retrieval System (TEKTRAN)
This review evaluated the available scientific literature relative to anthocyanins and weight loss and/or obesity with mention of other effects of anthocyanins on pathologies that are closely related to obesity. Although there is considerable popular press concerning anthocyanins and weight loss, th...
A wire weight is lowered to the water surface to measure stage at a site. Levels are run to the wire weight's elevation from known benchmarks to ensure correct readings. In the background is housing protected by dikes along the Missouri River in Mandan, ND....
NASA Astrophysics Data System (ADS)
Davis, C. V.; Hill, T. M.; Moffitt, S. E.
2013-12-01
Foraminiferal shell weight can be impacted by environmental factors both during initial shell formation and as the result of post mortem preservation. An improved understanding of what determines this relationship can lead to both an understanding of foraminiferal calcite production in modern oceans and proxy development for past environmental conditions. Significantly, foraminiferal shell weight has been linked to carbonate ion concentration in both laboratory culture (of both planktic and benthic species) and in the modern and fossil record (in planktic foraminifera). This study explores the relationship between shell weight and changes in oxygenation and carbonate saturation in fossil benthic foraminifera from a high-resolution sedimentary record (MV0811-15JC; 34°36.930' N, 119°12.920' W; 418 m water depth; 16.1-3.4 ka; sedimentation rate 44-100 cm kyr-1) from Santa Barbara Basin, CA (SBB). Ongoing work in SBB has described rapid biotic reorganization through the recent deglaciation in response to changes in dissolved oxygen concentrations, which are used here to create a semi-quantitative oxygenation history for site MV0811-15JC. In modern Oxygen Minimum Zones, decreases in oxygen closely covary with increases in Total Carbon (with a corresponding decrease in the carbonate saturation state). We interpret that records from SBB of the average size-normalized test weight of Uvigerinid and Bolivinid foraminifera show that shell weight responds to these changes in oxygenation and saturation state. Multiple metrics of 'size normalization' including by length, geometric estimation of surface area and volume, and tracing of individual silhouettes are tested. Regardless of method utilized, the size normalized shell weight of all species fluctuates with abrupt changes in oxygenation and saturation state.
Although all species respond to large-scale environmental changes, the weight records of Bolivinids and Uvigerinids reveal distinct differences, indicating that processes governing shell weight may vary across taxonomic groups.
The average distances in random graphs with given expected degrees
NASA Astrophysics Data System (ADS)
Chung, Fan; Lu, Linyuan
2002-12-01
Random graph theory is used to examine the "small-world phenomenon"; any two strangers are connected through a short chain of mutual acquaintances. We will show that for certain families of random graphs with given expected degrees the average distance is almost surely of order log n/log d̃, where d̃ is the weighted average of the sum of squares of the expected degrees. Of particular interest are power law random graphs in which the number of vertices of degree k is proportional to 1/k^β for some fixed exponent β. For the case of β > 3, we prove that the average distance of the power law graphs is almost surely of order log n/log d̃. However, many Internet, social, and citation networks are power law graphs with exponents in the range 2 < β < 3, for which the power law random graphs have average distance almost surely of order log log n, but have diameter of order log n (provided some mild constraints on the average distance and maximum degree hold). In particular, these graphs contain a dense subgraph, which we call the core, having n^(c/log log n) vertices. Almost all vertices are within distance log log n of the core although there are vertices at distance log n from the core.
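The estimate of the average distance as log n divided by the log of the second-order average degree (the sum of squared expected degrees divided by their sum) can be evaluated numerically; the degree sequence below is a toy example, not data from the paper:

```python
import math

def avg_distance_estimate(expected_degrees):
    """log n / log d~, where d~ = sum(d^2) / sum(d) is the weighted
    (second-order) average degree of the expected-degree sequence."""
    n = len(expected_degrees)
    d_tilde = sum(d * d for d in expected_degrees) / sum(expected_degrees)
    return math.log(n) / math.log(d_tilde)

# Toy degree sequence: 1000 vertices of expected degree 4, plus 10 hubs of 100
degrees = [4.0] * 1000 + [100.0] * 10
est = avg_distance_estimate(degrees)  # small even though most degrees are 4
```

Note how a handful of hubs inflates d̃ (here to 23.2) and hence shrinks the predicted average distance, the mechanism behind the short distances of power law graphs.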
The average distances in random graphs with given expected degrees.
Chung, Fan; Lu, Linyuan
2002-12-10
Random graph theory is used to examine the "small-world phenomenon"; any two strangers are connected through a short chain of mutual acquaintances. We will show that for certain families of random graphs with given expected degrees the average distance is almost surely of order log n/log d̃, where d̃ is the weighted average of the sum of squares of the expected degrees. Of particular interest are power law random graphs in which the number of vertices of degree k is proportional to 1/k^β for some fixed exponent β. For the case of β > 3, we prove that the average distance of the power law graphs is almost surely of order log n/log d̃. However, many Internet, social, and citation networks are power law graphs with exponents in the range 2 < β < 3, for which the power law random graphs have average distance almost surely of order log log n, but have diameter of order log n (provided some mild constraints on the average distance and maximum degree hold). In particular, these graphs contain a dense subgraph, which we call the core, having n^(c/log log n) vertices. Almost all vertices are within distance log log n of the core although there are vertices at distance log n from the core. PMID:12466502
Locally Weighted Learning
Atkeson, Christopher G.; Moore, Andrew W.; Schaal, Stefan
...a set of nearest neighbors and selects or votes on the predictions made by each of the stored models. Local models include nearest neighbor, weighted average, and locally weighted regression (Figure 1). Each of these local models combines points near a query point to estimate the appropriate output...
Interpolation weights calculation distributed with OpenMP
Maisonnave, Eric (WN-CMGC-15-5)
...the inverse-distance-weighted average of a user-specified number of nearest-neighbor values from a source grid. ...weights associated to interpolations between two grids in spherical coordinates. This option is widely used by OASIS...
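The inverse-distance-weighted average of a user-specified number of nearest-neighbor values can be sketched as follows. Plain Euclidean distance is used here for simplicity; an implementation for grids in spherical coordinates would use great-circle distance, and nothing below is OASIS code:

```python
import numpy as np

def idw_interpolate(src_pts, src_vals, query, k=4, power=2.0):
    """Inverse-distance-weighted average of the k nearest source values."""
    src_pts = np.asarray(src_pts, dtype=float)
    src_vals = np.asarray(src_vals, dtype=float)
    d = np.linalg.norm(src_pts - np.asarray(query, dtype=float), axis=1)
    nearest = np.argsort(d)[:k]
    if d[nearest[0]] == 0.0:  # query coincides with a source point
        return float(src_vals[nearest[0]])
    w = 1.0 / d[nearest] ** power
    return float(np.sum(w * src_vals[nearest]) / np.sum(w))

pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
vals = [0.0, 1.0, 1.0, 2.0]
center = idw_interpolate(pts, vals, (0.5, 0.5))  # equidistant -> plain mean
```

Because the weights depend only on source-target geometry, they can be precomputed once per grid pair, which is what makes the calculation a natural target for OpenMP parallelization.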
Evaluation of a Viscosity-Molecular Weight Relationship.
ERIC Educational Resources Information Center
Mathias, Lon J.
1983-01-01
Background information, procedures, and results are provided for a series of graduate/undergraduate polymer experiments. These include synthesis of poly(methylmethacrylate), viscosity experiment (indicating large effect even small amounts of a polymer may have on solution properties), and measurement of weight-average molecular weight by light…
A Behavioral Weight Reduction Model for Moderately Mentally Retarded Adolescents.
ERIC Educational Resources Information Center
Rotatori, Anthony F.; And Others
1980-01-01
A behavioral weight reduction treatment and maintenance program for moderately mentally retarded adolescents which involves six phases from background information collection to followup relies on stimulus control procedures to modify eating behaviors. Data from pilot studies show an average weekly weight loss of .5 to 1 pound per S. (CL)
Tyler Helmuth
2014-10-12
Loop-weighted walk with parameter $\\lambda\\geq 0$ is a non-Markovian model of random walks that is related to the loop $O(N)$ model of statistical mechanics. A walk receives weight $\\lambda^{k}$ if it contains $k$ loops; whether this is a reward or punishment for containing loops depends on the value of $\\lambda$. A challenging feature of loop-weighted walk is that it is not purely repulsive, meaning the weight of the future of a walk may either increase or decrease if the past is forgotten. Repulsion is typically an essential property for lace expansion arguments. This article circumvents the lack of repulsion and proves, via the lace expansion, that for any $\\lambda\\geq 0$ loop-weighted walk is diffusive in high dimensions.
Effects of spatial variability and scale on areal-average evapotranspiration
NASA Technical Reports Server (NTRS)
Famiglietti, J. S.; Wood, Eric F.
1993-01-01
This paper explores the effect of spatial variability and scale on areally-averaged evapotranspiration. A spatially-distributed water and energy balance model is employed to determine the effect of explicit patterns of model parameters and atmospheric forcing on modeled areally-averaged evapotranspiration over a range of increasing spatial scales. The analysis is performed from the local scale to the catchment scale. The study area is King's Creek catchment, an 11.7 sq km watershed located on the native tallgrass prairie of Kansas. The dominant controls on the scaling behavior of catchment-average evapotranspiration are investigated by simulation, as is the existence of a threshold scale for evapotranspiration modeling, with implications for explicit versus statistical representation of important process controls. It appears that some of our findings are fairly general, and will therefore provide a framework for understanding the scaling behavior of areally-averaged evapotranspiration at the catchment and larger scales.
Code of Federal Regulations, 2010 CFR
2010-07-01
...ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE NONROAD COMPRESSION-IGNITION ENGINES Averaging, Banking, and Trading Provisions § 89.204 Averaging. (a) Requirements for...
RHIC BPM system average orbit calculations
Michnoff,R.; Cerniglia, P.; Degen, C.; Hulsart, R.; et al.
2009-05-04
RHIC beam position monitor (BPM) system average orbit was originally calculated by averaging positions of 10000 consecutive turns for a single selected bunch. Known perturbations in RHIC particle trajectories, with multiple frequencies around 10 Hz, contribute to observed average orbit fluctuations. In 2006, the number of turns for average orbit calculations was made programmable; this was used to explore averaging over single periods near 10 Hz. Although this has provided an average orbit signal quality improvement, an average over many periods would further improve the accuracy of the measured closed orbit. A new continuous average orbit calculation was developed just prior to the 2009 RHIC run and was made operational in March 2009. This paper discusses the new algorithm and performance with beam.
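The benefit of averaging over full periods of the ~10 Hz perturbation, rather than a fixed 10000 turns, can be demonstrated on a synthetic BPM signal. The revolution frequency and perturbation amplitude below are assumed round numbers for illustration, not RHIC measurements:

```python
import numpy as np

f_rev = 78.0e3   # assumed revolution frequency, Hz (order of RHIC's)
f_pert = 10.0    # ~10 Hz orbit perturbation
turns = np.arange(200000)
t = turns / f_rev
pos = 0.5 + 0.2 * np.sin(2.0 * np.pi * f_pert * t)  # synthetic BPM reading, mm

n_period = int(round(f_rev / f_pert))     # turns per perturbation period
short_avg = pos[:10000].mean()            # original fixed 10000-turn average
period_avg = pos[:n_period].mean()        # one full 10 Hz period
long_avg = pos[:20 * n_period].mean()     # average over many periods
# period_avg and long_avg recover the true closed orbit (0.5 mm);
# the fixed 10000-turn average carries a residual from the 10 Hz motion.
```

Averaging over an integer number of perturbation periods cancels the oscillation exactly, while a turn count unrelated to the period leaves a bias, which is why the programmable and then continuous averaging improved the measured closed orbit.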
Paper area density measurement from forward transmitted scattered light
Koo, Jackson C. (San Ramon, CA)
2001-01-01
A method whereby the average paper fiber area density (weight per unit area) can be directly calculated from the intensity of transmitted, scattered light at two different wavelengths, one being a non-absorbed wavelength. The method also makes it possible to derive the water percentage per fiber area density from a two-wavelength measurement. In this optical measuring technique, the transmitted intensity at, for example, the 2.1-micron cellulose absorption line is measured and compared with a reference scattered, transmitted intensity in a nearby spectral region, such as 1.68 microns, where there is no absorption. From the ratio of these two intensities, one can calculate the scattering absorption coefficient at 2.1 microns. This absorption coefficient is then experimentally correlated to the paper fiber area density. The water percentage per fiber area density can be derived from this two-wavelength measurement approach.
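The two-wavelength ratio idea can be sketched with a Beer-Lambert-style proportionality. The intensities, the calibration density, and the assumption of strict linearity are all hypothetical here; the patent itself relies on an experimental correlation rather than this simple model:

```python
import math

def absorbance_ratio(i_absorbed, i_reference):
    """ln of the reference/absorption transmitted-intensity ratio, assumed
    (Beer-Lambert style) proportional to fiber area density after calibration."""
    return math.log(i_reference / i_absorbed)

# Hypothetical calibration sheet of known area density 60 g/m^2,
# measured at 2.1 um (absorbed) and 1.68 um (reference)
k = absorbance_ratio(0.42, 0.70) / 60.0   # per (g/m^2)

# Unknown sheet measured at the same two wavelengths
density = absorbance_ratio(0.30, 0.65) / k  # estimated g/m^2
```

Taking the ratio of the two wavelengths cancels the common scattering losses, so only the cellulose absorption at 2.1 microns remains to carry the density signal.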
Averaging in LRS class II spacetimes
NASA Astrophysics Data System (ADS)
Kašpar, Petr; Svítek, Otakar
2015-02-01
We generalize Buchert's averaged equations (Gen Relativ Gravit 32; 105, 2000; Gen Relativ Gravit 33; 1381, 2001) to LRS class II dust model in the sense that all Einstein equations are averaged, not only the trace part. We derive the relevant averaged equations and we investigate backreaction on expansion and shear scalars in an approximate LTB model. Finally we propose a way to close the system of averaged equations.
Averaging and Adding in Children's Worth Judgements
ERIC Educational Resources Information Center
Schlottmann, Anne; Harman, Rachel M.; Paine, Julie
2012-01-01
Under the normative Expected Value (EV) model, multiple outcomes are additive, but in everyday worth judgement intuitive averaging prevails. Young children also use averaging in EV judgements, leading to a disordinal, crossover violation of utility when children average the part worths of simple gambles involving independent events (Schlottmann,…
Stochastic averaging of GRACE data
Devaraju, Balaji; Sneeuw, Nico
Stuttgart, Universität
Geodätisches Institut der Universität Stuttgart. ...deterministic (Gaussian) or stochastic (Wiener), and at the same time isotropic or anisotropic. Stochastic operators depend on an averaging radius. To perform stochastic averaging the desired signal structure...
ERIC Educational Resources Information Center
Pape, K. E.; And Others
1978-01-01
For availability, see EC 103 548. Among findings of a 2-year followup study of 43 infants of birth weight less than 1000 grams were the following: average height at age 2 years was between the tenth and twenty-fifth percentiles; average weight was between the third and tenth percentiles; 15 Ss developed lower respiratory tract infections during the…
Carter, Megan Ann; Dubois, Lise; Tremblay, Mark S; Taljaard, Monica
2013-04-01
The objective of this paper was to determine the influence of place factors on weight gain in a contemporary cohort of children while also adjusting for early life and individual/family social factors. Participants from the Québec Longitudinal Study of Child Development comprised the sample for analysis (n = 1,580). A mixed-effects regression analysis was conducted to determine the longitudinal relationship between these place factors and standardized BMI, from age 4 to 10 years. The average relationship with time was found to be quadratic (rate of weight gain increased over time). Neighborhood material deprivation was found to be positively related to weight gain. Social deprivation, social disorder, and living in a medium density area were inversely related, while no association was found for social cohesion. Early life factors and genetic proxies appeared to be important in explaining weight gain in this sample. This study suggests that residential environments may play a role in childhood weight change; however, pathways are likely to be complex and interacting and perhaps not as important as early life factors and genetic proxies. Further work is required to clarify these relationships. PMID:22806452
Andersson, Neil; Mitchell, Steven
2006-01-01
Evaluation of mine risk education in Afghanistan used population weighted raster maps as an evaluation tool to assess mine education performance, coverage and costs. A stratified last-stage random cluster sample produced representative data on mine risk and exposure to education. Clusters were weighted by the population they represented, rather than the land area. A "friction surface" hooked the population weight into interpolation of cluster-specific indicators. The resulting population weighted raster contours offer a model of the population effects of landmine risks and risk education. Five indicator levels ordered the evidence from simple description of the population-weighted indicators (level 0), through risk analysis (levels 1–3) to modelling programme investment and local variations (level 4). Using graphic overlay techniques, it was possible to metamorphose the map, portraying the prediction of what might happen over time, based on the causality models developed in the epidemiological analysis. Based on a lattice of local site-specific predictions, each cluster being a small universe, the "average" prediction was immediately interpretable without losing the spatial complexity. PMID:16390549
Zhao, Kaiguang; Valle, Denis; Popescu, Sorin; Zhang, Xuesong; Malick, Bani
2013-05-15
Model specification remains challenging in spectroscopy of plant biochemistry, as exemplified by the availability of various spectral indices or band combinations for estimating the same biochemical. This lack of consensus in model choice across applications argues for a paradigm shift in hyperspectral methods to address model uncertainty and misspecification. We demonstrated one such method using Bayesian model averaging (BMA), which performs variable/band selection and quantifies the relative merits of many candidate models to synthesize a weighted average model with improved predictive performances. The utility of BMA was examined using a portfolio of 27 foliage spectral–chemical datasets representing over 80 species across the globe to estimate multiple biochemical properties, including nitrogen, hydrogen, carbon, cellulose, lignin, chlorophyll (a or b), carotenoid, polar and nonpolar extractives, leaf mass per area, and equivalent water thickness. We also compared BMA with partial least squares (PLS) and stepwise multiple regression (SMR). Results showed that all the biochemicals except carotenoid were accurately estimated from hyperspectral data with R2 values > 0.80.
Computation of vertically averaged velocities in irregular sections of straight channels
NASA Astrophysics Data System (ADS)
Spada, E.; Tucciarelli, T.; Sinagra, M.; Sammartano, V.; Corato, G.
2015-09-01
Two new methods for vertically averaged velocity computation are presented, validated and compared with other available formulas. The first method derives from the well-known Huthoff algorithm, which is first shown to be dependent on the way the river cross section is discretized into several subsections. The second method assumes the vertically averaged longitudinal velocity to be a function only of the friction factor and of the so-called "local hydraulic radius", computed as the ratio between the integral of the elementary areas around a given vertical and the integral of the elementary solid boundaries around the same vertical. Both integrals are weighted with a linear shape function equal to zero at a distance from the integration variable which is proportional to the water depth according to an empirical coefficient. Both formulas are validated against (1) laboratory experimental data, (2) discharge hydrographs measured in a real site, where the friction factor is estimated from an unsteady-state analysis of water levels recorded in two different river cross sections, and (3) the 3-D solution obtained using the commercial ANSYS CFX code, computing the steady-state uniform flow in a cross section of the Alzette River.
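A discrete sketch of the second method's "local hydraulic radius" (the discretization is illustrative, not the authors' formulation; `beta` stands in for the empirical coefficient named in the abstract, and the linear hat weighting is a plain reading of the description):

```python
import numpy as np

def local_hydraulic_radius(y, depth, y0, beta):
    """Discrete sketch of the 'local hydraulic radius' at transverse position y0.

    y:     transverse coordinates of the cross-section verticals
    depth: water depth at each vertical
    beta:  empirical coefficient scaling the influence width (beta * local depth)
    Elementary areas (depth * dy) and elementary solid boundaries
    (wetted-perimeter increments) are both weighted by a linear hat function
    that falls to zero at a distance beta * depth(y0) from y0.
    """
    y = np.asarray(y, float)
    depth = np.asarray(depth, float)
    dy = np.gradient(y)
    dz = np.gradient(depth)               # bed elevation change between verticals
    dP = np.sqrt(dy ** 2 + dz ** 2)       # solid-boundary increment
    half_width = beta * np.interp(y0, y, depth)
    w = np.clip(1.0 - np.abs(y - y0) / half_width, 0.0, None)  # linear hat weight
    return np.sum(w * depth * dy) / np.sum(w * dP)

# Wide rectangular channel: away from the banks the local hydraulic radius
# tends to the flow depth.
y = np.linspace(0.0, 10.0, 101)
d = np.full_like(y, 2.0)
print(local_hydraulic_radius(y, d, y0=5.0, beta=1.0))
```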
Precipitation interpolation in mountainous areas
NASA Astrophysics Data System (ADS)
Kolberg, Sjur
2015-04-01
Different precipitation interpolation techniques as well as external drift covariates are tested and compared in a 26000 km2 mountainous area in Norway, using daily data from 60 stations. The main method of assessment is cross-validation. Annual precipitation in the area varies from below 500 mm to more than 2000 mm. The data were corrected for wind-driven undercatch according to operational standards. While temporal evaluation produces seemingly acceptable at-station correlation values (on average around 0.6), the average daily spatial correlation is less than 0.1. When bias is also penalised, Nash-Sutcliffe R2 values are negative for spatial correspondence, and around 0.15 for temporal correspondence. Despite largely violated assumptions, plain kriging produces better results than simple inverse distance weighting. More surprisingly, the presumably 'worst-case' benchmark of no interpolation at all, simply averaging all 60 stations for each day, actually outperformed the standard interpolation techniques. For logistic reasons, high altitudes are under-represented in the gauge network. The possible effect of this was investigated by a) fitting a precipitation lapse rate as an external drift, and b) applying a linear model of orographic enhancement (Smith and Barstad, 2004). These techniques improved the results only marginally. The gauge density in the region is one gauge per 433 km2, higher than the overall density of the Norwegian national network. Admittedly the cross-validation technique reduces the effective gauge density; still, the results suggest that we are far from able to provide hydrological models with adequate data for the main driving force.
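The comparison between inverse distance weighting and the "no interpolation" all-station mean can be reproduced in miniature with a leave-one-out experiment on synthetic data (illustrative only; the station layout and noise level are invented and are not the Norwegian dataset):

```python
import numpy as np

rng = np.random.default_rng(0)

def idw(xy_obs, z_obs, xy_tgt, power=2.0):
    """Inverse-distance-weighted interpolation at target points."""
    d = np.linalg.norm(xy_obs[None, :, :] - xy_tgt[:, None, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-12) ** power
    return (w @ z_obs) / w.sum(axis=1)

# Synthetic daily precipitation field with weak spatial structure, mimicking
# the low daily spatial correlation reported in the abstract.
n = 60
xy = rng.uniform(0, 160, size=(n, 2))              # station coordinates, km
z = 5.0 + 0.01 * xy[:, 0] + rng.normal(0, 4, n)    # weak trend + strong noise

# Leave-one-out cross-validation of IDW vs. the all-station mean benchmark
err_idw, err_mean = [], []
for i in range(n):
    mask = np.arange(n) != i
    pred = idw(xy[mask], z[mask], xy[i:i + 1])[0]
    err_idw.append((pred - z[i]) ** 2)
    err_mean.append((z[mask].mean() - z[i]) ** 2)

print("IDW RMSE:     ", np.sqrt(np.mean(err_idw)))
print("all-mean RMSE:", np.sqrt(np.mean(err_mean)))
```

When spatial correlation is this weak, the plain mean can indeed rival or beat distance-based weighting, which is the abstract's surprising benchmark result.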
Trapping on Weighted Tetrahedron Koch Networks with Small-World Property
NASA Astrophysics Data System (ADS)
Dai, Meifeng; Xie, Qi; Xi, Lifeng
2013-04-01
In this paper, we present weighted tetrahedron Koch networks depending on a weight factor. According to their self-similar construction, we obtain the analytical expressions of the weighted clustering coefficient and the average weighted shortest path (AWSP). The obtained solutions show that the weighted tetrahedron Koch networks exhibit the small-world property. Then, we calculate the average receiving time (ART) on weight-dependent walks, which is the sum of the mean first-passage times (MFPTs) for all nodes to an absorbing trap located at a hub node. We find that the ART exhibits a sublinear or linear dependence on network order.
Cernicchiaro, N; Renter, D G; Xiang, S; White, B J; Bello, N M
2013-06-01
Variability in ADG of feedlot cattle can affect profits, thus making overall returns more unstable. Hence, knowledge of the factors that contribute to heterogeneity of variances in animal performance can help feedlot managers evaluate risks and minimize profit volatility when making managerial and economic decisions in commercial feedlots. The objectives of the present study were to evaluate heteroskedasticity, defined as heterogeneity of variances, in ADG of cohorts of commercial feedlot cattle, and to identify cattle demographic factors at feedlot arrival as potential sources of variance heterogeneity, accounting for cohort- and feedlot-level information in the data structure. An operational dataset compiled from 24,050 cohorts from 25 U.S. commercial feedlots in 2005 and 2006 was used for this study. Inference was based on a hierarchical Bayesian model implemented with Markov chain Monte Carlo, whereby cohorts were modeled at the residual level and feedlot-year clusters were modeled as random effects. Forward model selection based on deviance information criteria was used to screen potentially important explanatory variables for heteroskedasticity at cohort- and feedlot-year levels. The Bayesian modeling framework was preferred as it naturally accommodates the inherently hierarchical structure of feedlot data whereby cohorts are nested within feedlot-year clusters. Evidence for heterogeneity of variance components of ADG was substantial and primarily concentrated at the cohort level. Feedlot-year specific effects were, by far, the greatest contributors to ADG heteroskedasticity among cohorts, with an estimated ~12-fold change in dispersion between most and least extreme feedlot-year clusters. In addition, identifiable demographic factors associated with greater heterogeneity of cohort-level variance included smaller cohort sizes, fewer days on feed, and greater arrival BW, as well as feedlot arrival during summer months.
These results support that heterogeneity of variances in ADG is prevalent in feedlot performance and indicate potential sources of heteroskedasticity. Further investigation of factors associated with heteroskedasticity in feedlot performance is warranted to increase consistency and uniformity in commercial beef cattle production and subsequent profitability. PMID:23482583
Implications of the method of capital cost payment on the weighted average cost of capital.
Boles, K E
1986-01-01
The author develops a theoretical and mathematical model, based on published financial management literature, to describe the cost of capital structure for health care delivery entities. This model is then used to generate the implications of changing the capital cost reimbursement mechanism from a cost basis to a prospective basis. The implications are that the cost of capital is increased substantially, the use of debt must be restricted, interest rates for borrowed funds will increase, and, initially, firms utilizing debt efficiently under cost-basis reimbursement will be restricted to the generation of funds from equity only under a prospective system. PMID:3525468
ERIC Educational Resources Information Center
Jenny, Mirjam A.; Rieskamp, Jörg; Nilsson, Håkan
2014-01-01
Judging whether multiple events will co-occur is an important aspect of everyday decision making. The underlying probabilities of occurrence are usually unknown and have to be inferred from experience. Using a rigorous, quantitative model comparison, we investigate how people judge the conjunctive probabilities of multiple events to co-occur. In 2…
Approximation Schemes for Minimizing Average Weighted Completion Time with Release Dates
Skutella, Martin
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-28
...prior WTO dispute settlement reports...Antidumping Proceedings: Calculation...In several disputes,\\5\\ the...WTO Dispute Settlement Body has...Antidumping Duty Proceedings and Request...in which a dispute settlement panel...
MedlinePLUS Videos and Cool Tools
1997-11-01
Oxandrolone, an oral drug that promotes weight gain in people experiencing weight loss, has been approved by the Food and Drug Administration (FDA) for patients with HIV. Oxandrolone's effectiveness in HIV-related weight loss is unknown. The drug is a man-made anabolic steroid. Several small studies have shown encouraging results for HIV-related weight loss when doses two to eight times the recommended dosage were used. Daily doses ranging from 20 to 80 mg appear to be needed for treating HIV-associated wasting syndrome. A host of side effects usually associated with anabolic steroids are not seen as frequently with oxandrolone, including liver toxicity. More information can be obtained by contacting the Project Inform Hotline. PMID:11365375
NASA Astrophysics Data System (ADS)
Matulef, Kevin; O'Donnell, Ryan; Rubinfeld, Ronitt; Servedio, Rocco A.
We consider the problem of testing whether a Boolean function f: {-1,1}^n → {-1,1} is a ±1-weight halfspace, i.e. a function of the form f(x) = sgn(w_1 x_1 + w_2 x_2 + ... + w_n x_n) where the weights w_i take values in {-1,1}. We show that the complexity of this problem is markedly different from the problem of testing whether f is a general halfspace with arbitrary weights. While the latter can be done with a number of queries that is independent of n [7], to distinguish whether f is a ±1-weight halfspace versus ε-far from all such halfspaces we prove that nonadaptive algorithms must make Ω(log n) queries. We complement this lower bound with a sublinear upper bound showing that O(√n · poly(1/ε)) queries suffice.
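For intuition, a ±1-weight halfspace and a brute-force distance check over the Boolean cube can be written as follows (a sketch for tiny n only; the paper's testers are query-efficient algorithms, which this exhaustive check does not implement):

```python
import itertools

def pm1_halfspace(weights):
    """f(x) = sgn(w . x) with every weight in {-1, +1}; sgn(0) taken as +1."""
    return lambda x: 1 if sum(w * xi for w, xi in zip(weights, x)) >= 0 else -1

def dist_to_pm1_halfspaces(f, n):
    """Fraction of the 2^n cube on which f disagrees with the NEAREST
    ±1-weight halfspace (brute force over all 2^n weight vectors)."""
    cube = list(itertools.product((-1, 1), repeat=n))
    best = 1.0
    for w in itertools.product((-1, 1), repeat=n):
        g = pm1_halfspace(w)
        dist = sum(f(x) != g(x) for x in cube) / len(cube)
        best = min(best, dist)
    return best

n = 4
f = pm1_halfspace((1, -1, 1, 1))
print(dist_to_pm1_halfspaces(f, n))  # 0.0: f itself is a ±1-weight halfspace
```

A function is "ε-far" from the class exactly when this distance exceeds ε; the testers in the paper estimate that property from few queries instead of reading all 2^n values.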
ERIC Educational Resources Information Center
Blum, Ann
1987-01-01
Presents a lesson in multiple parts designed to explain the importance of standardized weights and measures and to demonstrate how governmental activities have changed standards and influenced commerce. (JDH)
A Stochastic Model of Space-Time Variability of Mesoscale Rainfall: Statistics of Spatial Averages
NASA Technical Reports Server (NTRS)
Kundu, Prasun K.; Bell, Thomas L.
2003-01-01
A characteristic feature of rainfall statistics is that they depend on the space and time scales over which rain data are averaged. A previously developed spectral model of rain statistics, designed to capture this property, predicts power law scaling behavior for the second moment statistics of area-averaged rain rate on the averaging length scale L as L → 0. In the present work a more efficient method of estimating the model parameters is presented, and used to fit the model to the statistics of area-averaged rain rate derived from gridded radar precipitation data from TOGA COARE. Statistical properties of the data and the model predictions are compared over a wide range of averaging scales. An extension of the spectral model scaling relations to describe the dependence of the average fraction of grid boxes within an area containing nonzero rain (the "rainy area fraction") on the grid scale L is also explored.
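The dependence of second-moment statistics on the averaging length L can be illustrated with block averaging of a synthetic correlated field (the field and its correlation scale are invented stand-ins for the gridded radar data, not TOGA COARE itself):

```python
import numpy as np

rng = np.random.default_rng(1)

def block_variance(field, L):
    """Variance of the field averaged over L x L boxes (area-averaged rain rate)."""
    m = (field.shape[0] // L) * L
    blocks = field[:m, :m].reshape(m // L, L, m // L, L).mean(axis=(1, 3))
    return blocks.var()

# Synthetic "radar" rain field: lognormal noise given spatial correlation by a
# k x k box smoothing implemented with a summed-area table.
raw = rng.lognormal(0.0, 1.0, size=(264, 264))
s = np.cumsum(np.cumsum(raw, 0), 1)
k = 8
field = (s[k:, k:] - s[:-k, k:] - s[k:, :-k] + s[:-k, :-k]) / k ** 2

for L in (2, 4, 8, 16, 32):
    print(f"L={L:2d}  var(area-averaged rain) = {block_variance(field, L):.4f}")
```

The variance of the area average shrinks as L grows; fitting the rate of that decay against scale is what constrains the spectral model's parameters.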
Efficiency of transportation on weighted extended Koch networks
NASA Astrophysics Data System (ADS)
Wu, Zikai; Hou, Baoyu; Zhang, Hongjuan
2013-10-01
In this paper, we propose a family of weighted extended Koch networks based on a class of extended Koch networks. They originate from an r-complete graph, and each node in each r-complete graph of the current generation produces m r-complete graphs whose weighted edges are scaled by the factor h in the subsequent evolutionary step. We study the structural properties of these networks and random walks on them. In more detail, we calculate exactly the average weighted shortest path length (AWSP), the average receiving time (ART) and the average sending time (AST). Besides, the technique of resistor networks is employed to uncover the relationship between ART and AST on networks with unit weight. In the infinite network order limit, the average weighted shortest path length stays bounded with growing network order (0 < h < 1). The closed form expression of the ART shows that it exhibits a sub-linear (0 < h < 1) or linear (h = 1) dependence on network order. On the contrary, the AST behaves super-linearly with the network order. Collectively, all the obtained results show that the efficiency of message transportation on weighted extended Koch networks is closely related to the network parameters h, m and r. All these findings could shed light on the structure of, and random walks on, general weighted networks.
NSDL National Science Digital Library
2010-07-28
This printable sheet is an excellent reference tool for geometry students. It details the formulae for finding the area, volume, and surface area of a variety of two- and three-dimensional shapes and includes an illustration of each that shows which measurements are important to the calculation. Presented are: areas of plane figures (square, rectangle, parallelogram, trapezoid, circle, ellipse, triangles); volumes of solids (cube, rectangular prism, irregular prism, cylinder, pyramid, cone, sphere, ellipsoid); and surface areas (cube, prism, sphere).
The average path length of scale free networks
NASA Astrophysics Data System (ADS)
Chen, Fei; Chen, Zengqiang; Wang, Xiufeng; Yuan, Zhuzhi
2008-09-01
In this paper, the exact solution for the average path length of the Barabási-Albert model is given. The average path length is an important property of networks and attracts much attention in many areas. The Barabási-Albert model, also called the scale free model, is a popular model used in modeling real systems. Hence it is valuable to examine the average path length of the scale free model. Two answers regarding the exact solution for the average path length of scale free networks have already been provided, by Newman and Bollobás respectively. As Newman proposed, the average path length grows as log(n) with the network size n. However, Bollobás suggested that while this is true when m = 1, the answer changes to log(n)/log(log(n)) when m > 1. In this paper, we propose that the exact solution for the average path length of the BA model should approach log(n)/log(log(n)) regardless of the value of m. Finally, a simulation is presented to show the validity of our result.
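The claimed log(n)/log(log(n)) growth can be probed numerically with a small preferential-attachment simulation and exact breadth-first-search path lengths (an illustrative experiment, not the paper's analytical derivation):

```python
import math
import random
from collections import deque

def ba_graph(n, m, seed=0):
    """Barabási-Albert preferential-attachment graph as adjacency sets."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    targets = list(range(m))   # the first m nodes seed the network
    repeated = []              # each node appears here once per unit of degree
    for v in range(m, n):
        for t in targets:
            adj[v].add(t)
            adj[t].add(v)
        repeated.extend(targets)
        repeated.extend([v] * m)
        chosen = set()
        while len(chosen) < m:           # degree-proportional sampling
            chosen.add(rng.choice(repeated))
        targets = list(chosen)
    return adj

def average_path_length(adj):
    """Exact mean shortest-path length over reachable ordered pairs (BFS)."""
    total = pairs = 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

for n in (200, 400, 800):
    apl = average_path_length(ba_graph(n, m=3))
    print(n, round(apl, 3),
          "log(n)/log(log(n)) =", round(math.log(n) / math.log(math.log(n)), 3))
```

At these small sizes both log(n) and log(n)/log(log(n)) grow slowly, so the simulation only illustrates the quantities being compared; distinguishing the two asymptotics requires much larger n.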
Random time averaged diffusivities for Lévy walks
NASA Astrophysics Data System (ADS)
Froemberg, D.; Barkai, E.
2013-07-01
We investigate a Lévy walk alternating between velocities ±v0 with opposite sign. The sojourn time probability distribution at large times is a power law lacking its mean or second moment. The first case corresponds to a ballistic regime where the ensemble averaged mean squared displacement (MSD) at large times is ⟨x²⟩ ∝ t², the latter to enhanced diffusion with ⟨x²⟩ ∝ t^ν, 1 < ν < 2. The correlation function and the time averaged MSD are calculated. In the ballistic case, the deviations of the time averaged MSD from a purely ballistic behavior are shown to be distributed according to a Mittag-Leffler density function. In the enhanced diffusion regime, the fluctuations of the time averaged MSD vanish at large times, yet very slowly. In both cases we quantify the discrepancy between the time averaged and ensemble averaged MSDs.
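A single-trajectory sketch of the time averaged MSD for such a walk (Pareto sojourn times with exponent 1.5 are assumed here to place the walk in the enhanced-diffusion regime; all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def levy_walk(total_time, alpha, v0=1.0, n_grid=4096):
    """Sample a 1-D Lévy walk: constant speed ±v0, Pareto(alpha) sojourn times."""
    t, x = [0.0], [0.0]
    while t[-1] < total_time:
        tau = rng.pareto(alpha) + 1.0            # heavy-tailed sojourn time
        v = v0 if rng.random() < 0.5 else -v0    # random flight direction
        t.append(t[-1] + tau)
        x.append(x[-1] + v * tau)
    grid = np.linspace(0.0, total_time, n_grid)  # uniform grid for time averaging
    return grid, np.interp(grid, t, x)

def time_averaged_msd(x, lag_steps):
    """Time averaged MSD of one trajectory at a lag given in grid steps."""
    d = x[lag_steps:] - x[:-lag_steps]
    return np.mean(d ** 2)

grid, x = levy_walk(total_time=1e4, alpha=1.5)
for lag in (8, 32, 128):
    print(lag, time_averaged_msd(x, lag))
```

Repeating this over many trajectories shows the trajectory-to-trajectory scatter of the time averaged MSD that the paper characterizes (Mittag-Leffler distributed in the ballistic case, slowly vanishing in the enhanced-diffusion case).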
Low birth weight and residential proximity to PCB-contaminated waste sites.
Baibergenova, Akerke; Kudyakov, Rustam; Zdeb, Michael; Carpenter, David O
2003-01-01
Previous investigations have shown that women exposed to polychlorinated biphenyls (PCBs) are at increased risk of giving birth to an infant with low birth weight (< 2,500 g), and that this relationship is stronger for male than for female infants. We have tested the hypothesis that residents in a zip code that contains a PCB hazardous waste site or abuts a body of water contaminated with PCBs are at increased risk of giving birth to a low-birth-weight baby. We used the birth registry of the New York State Vital Statistics to identify all births between 1994 and 2000 in New York State except for New York City. This registry provides information on the infant, mother, and father together with the zip code of the mother's residence. The 865 state Superfund sites, the 86 National Priority List sites, and the six Areas of Concern in New York were characterized regarding whether or not they contain PCBs as a major contaminant. We identified 187 zip codes containing or abutting PCB-contaminated sites, and these zip codes were the residences of 24.5% of the 945,077 births. The birth weight in the PCB zip codes was on average 21.6 g less than in other zip codes (p < 0.001). Because there are many other risk factors for low birth weight, we have adjusted for these using a logistic regression model for these confounders. After adjusting for sex of the infant, mother's age, race, weight, height, education, income, marital status, and smoking, there was still a statistically significant 6% increased risk of giving birth to a male infant of low birth weight. These observations support the hypothesis that living in a zip code near a PCB-contaminated site poses a risk of exposure and giving birth to an infant of low birth weight. PMID:12896858
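The unadjusted association can be summarized with an odds ratio from a 2x2 table, shown here with hypothetical counts (the study's reported 6% excess risk came from a logistic regression adjusting for the listed confounders, which this sketch does not reproduce):

```python
import math

def odds_ratio(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Unadjusted odds ratio with a Wald 95% CI on the log-odds scale."""
    a = exposed_cases
    b = exposed_total - exposed_cases
    c = unexposed_cases
    d = unexposed_total - unexposed_cases
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)

# Hypothetical counts: low-birth-weight births in PCB vs. non-PCB zip codes
print(odds_ratio(1500, 23000, 5400, 92000))
```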
High average power active-mirror amplifier
D. C. Brown; K. K. Lee; R. Bowman; J. Menders; J. Kuper
1986-01-01
Operation of the first high average power Nd:glass active-mirror amplifier, a scalable laser device that may be used to configure solid-state laser systems with high average power output into the kilowatt regime, is reported. An extractable average power of over 120 W was achieved at the device laser material fracture limit, and at a repetition rate of 5 Hz.
High average power active-mirror amplifier.
Brown, D C; Bowman, R; Kuper, J; Lee, K K; Menders, J
1986-03-01
We report operation of the first high average power Nd:glass active-mirror amplifier, a scalable laser device that may be used to configure solid-state laser systems with high average power output into the kilowatt regime. An extractable average power of over 120 W was achieved at the device laser material fracture limit and at a repetition rate of 5 Hz. PMID:18231222
Particle sizing by weighted measurements of scattered light
NASA Technical Reports Server (NTRS)
Buchele, Donald R.
1988-01-01
A description is given of a measurement method, applicable to a polydispersion of particles, in which the intensity of scattered light at any angle is weighted by a factor proportional to that angle. Determination is then made of four angles at which the weighted intensity is four fractions of the maximum intensity. These yield four characteristic diameters: the volume/area mean (D_32, the Sauter mean) and the volume/diameter mean (D_31), and the diameters at cumulative volume fractions of 0.5 (D_v0.5, the volume median) and 0.75 (D_v0.75). They also yield the volume dispersion of diameters. Mie scattering computations show that an average diameter less than three micrometers cannot be accurately measured. The results are relatively insensitive to extraneous background light and to the nature of the diameter distribution. Also described is an experimental method of verifying the conclusions by using two microscope slides coated with polystyrene microspheres to simulate the particles and the background.
Weight Loss Nutritional Supplements
NASA Astrophysics Data System (ADS)
Eckerson, Joan M.
Obesity has reached what may be considered epidemic proportions in the United States, not only for adults but for children. Because of the medical implications and health care costs associated with obesity, as well as the negative social and psychological impacts, many individuals turn to nonprescription nutritional weight loss supplements hoping for a quick fix, and the weight loss industry has responded by offering a variety of products that generate billions of dollars each year in sales. Most nutritional weight loss supplements are purported to work by increasing energy expenditure, modulating carbohydrate or fat metabolism, increasing satiety, inducing diuresis, or blocking fat absorption. To review the literally hundreds of nutritional weight loss supplements available on the market today is well beyond the scope of this chapter. Therefore, several of the most commonly used supplements were selected for critical review, and practical recommendations are provided based on the findings of well controlled, randomized clinical trials that examined their efficacy. In most cases, the nutritional supplements reviewed either elicited no meaningful effect or resulted in changes in body weight and composition that are similar to what occurs through a restricted diet and exercise program. Although there is some evidence to suggest that herbal forms of ephedrine, such as ma huang, combined with caffeine or caffeine and aspirin (i.e., ECA stack) are effective for inducing moderate weight loss in overweight adults, because of the recent ban on ephedra, manufacturers must now use ephedra-free ingredients, such as bitter orange, which do not appear to be as effective. The dietary fiber, glucomannan, also appears to hold some promise as a possible treatment for weight loss, but other related forms of dietary fiber, including guar gum and psyllium, are ineffective.
Weight misperception amongst youth of a developing country: Pakistan -a cross-sectional study
2013-01-01
Background Weight misperception is the discordance between an individual's actual weight status and the perception of his/her weight. It is a common problem in the youth population, as enumerated by many international studies. However, data from Pakistan in this area are deficient. Methods A multi-center cross-sectional survey was carried out in undergraduate university students of Karachi between the ages of 15–24. Participants were questioned regarding their perception of being thin, normal or fat, and it was compared with their Body Mass Index (BMI). Measurements of height and weight were taken for this purpose and BMI was categorized using Asian cut-offs. Weight misperception was identified when the self-perceived weight (average, fat, thin) did not match the calculated BMI distribution. Chi square tests and logistic regression tests were applied to show associations of misperception and types of misperception (overestimation, underestimation) with independent variables like age, gender, type of university and faculties. A p-value of <0.05 was taken as statistically significant. Results 42.4% of the total participants, i.e. 43.3% of males and 41% of females, misperceived their weight. Amongst those who misperceived, 38.2% had overestimated and 61.8% had underestimated their weight. The greatest misperception was observed in the overweight category (91%), specifically amongst overweight males (95%). Females of the underweight category overestimated their weight and males of the overweight category underestimated their weight. Amongst the total participants, females overestimated 8 times more than males (OR 8.054, 95% CI 5.34-12.13). Misperception increased with the age of the participants (OR 1.114, 95% CI 1.041-1.191). Odds of misperception were greater in students of private sector universities as compared to public ones (OR 1.861, 95% CI: 1.29-2.67).
Odds of misperception were less in students of medical sciences (OR 0.693, 95% CI 0.491-0.977), engineering (OR 0.586, 95% CI 0.364-0.941) and business administration (OR 0.439, 95% CI 0.290-0.662) as compared to general faculty universities. Conclusion There was marked discrepancy between the calculated BMI and the self-perceived weight in the youth of Karachi. Better awareness campaigns need to be implemented to reverse these trends. PMID:23915180
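The BMI classification and misperception flag described in the Methods can be sketched as follows (Asian BMI cut-offs of <18.5, 18.5-22.9, and >=23 kg/m² are assumed here; the abstract does not state the study's exact category boundaries):

```python
def bmi_category(weight_kg, height_m):
    """Classify BMI using commonly cited Asian cut-offs:
    <18.5 underweight ('thin'), 18.5-22.9 normal ('average'), >=23 overweight ('fat')."""
    bmi = weight_kg / height_m ** 2
    if bmi < 18.5:
        return "thin"
    if bmi < 23.0:
        return "average"
    return "fat"

def misperception(self_perceived, weight_kg, height_m):
    """None if perception matches the calculated BMI category; otherwise the
    direction of the error (over- or underestimation)."""
    actual = bmi_category(weight_kg, height_m)
    if self_perceived == actual:
        return None
    order = ["thin", "average", "fat"]
    return ("overestimated"
            if order.index(self_perceived) > order.index(actual)
            else "underestimated")

print(bmi_category(70, 1.70))              # prints "fat" (BMI 24.2, overweight)
print(misperception("average", 70, 1.70))  # prints "underestimated"
```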
Ensemble Weight Enumerators for Protograph LDPC Codes
NASA Technical Reports Server (NTRS)
Divsalar, Dariush
2006-01-01
Recently, LDPC codes with projected graph, or protograph, structures have been proposed. In this paper, finite length ensemble weight enumerators for LDPC codes with protograph structures are obtained. Asymptotic results are derived as the block size goes to infinity. In particular we are interested in obtaining ensemble average weight enumerators for protograph LDPC codes which have minimum distance that grows linearly with block size. As with irregular ensembles, the linear minimum distance property is sensitive to the proportion of degree-2 variable nodes. In this paper the derived results on ensemble weight enumerators show that the linear minimum distance condition on the degree distribution of unstructured irregular LDPC codes is a sufficient but not a necessary condition for protograph LDPC codes.
Short-Term Auditory Memory of Above-Average and Below-Average Grade Three Readers.
ERIC Educational Resources Information Center
Caruk, Joan Marie
To determine if performance on short term auditory memory tasks is influenced by reading ability or sex differences, 62 third grade reading students (16 above average boys, 16 above average girls, 16 below average boys, and 14 below average girls) were administered four memory tests--memory for consonant names, memory for words, memory for…
Dynamic Multiscale Averaging (DMA) of Turbulent Flow
Richard W. Johnson
2012-09-01
A new approach called dynamic multiscale averaging (DMA) for computing the effects of turbulent flow is described. The new method encompasses multiple applications of temporal and spatial averaging, that is, multiscale operations. Initially, a direct numerical simulation (DNS) is performed for a relatively short time; it is envisioned that this short time should be long enough to capture several fluctuating time periods of the smallest scales. The flow field variables are subject to running time averaging during the DNS. After the relatively short time, the time-averaged variables are volume averaged onto a coarser grid. Both time and volume averaging of the describing equations generate correlations in the averaged equations. These correlations are computed from the flow field and added as source terms to the computation on the next coarser mesh. They represent coupling between the two adjacent scales. Since they are computed directly from first principles, there is no modeling involved. However, there is approximation involved in the coupling correlations as the flow field has been computed for only a relatively short time. After the time and spatial averaging operations are applied at a given stage, new computations are performed on the next coarser mesh using a larger time step. The process continues until the coarsest scale needed is reached. New correlations are created for each averaging procedure. The number of averaging operations needed is expected to be problem dependent. The new DMA approach is applied to a relatively low Reynolds number flow in a square duct segment. Time-averaged stream-wise velocity and vorticity contours from the DMA approach appear to be very similar to a full DNS for a similar flow reported in the literature. Expected symmetry for the final results is produced for the DMA method. 
The results obtained indicate that DMA holds significant potential in being able to accurately compute turbulent flow without modeling for practical engineering applications.
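The two averaging operations at the heart of DMA, a running time average during the DNS stage followed by volume averaging onto a coarser mesh, can be sketched as below (array sizes and the coarsening factor are illustrative; the coupling-correlation source terms described in the abstract are omitted):

```python
import numpy as np

def running_time_average(samples):
    """Incremental (running) time average, as applied during the DNS stage."""
    avg = np.zeros_like(samples[0], dtype=float)
    for k, s in enumerate(samples, start=1):
        avg += (s - avg) / k          # online mean update
    return avg

def volume_average(field, factor):
    """Volume-average a 2-D field onto a grid coarsened by `factor` per axis."""
    nx, ny = field.shape
    return field.reshape(nx // factor, factor, ny // factor, factor).mean(axis=(1, 3))

rng = np.random.default_rng(3)
snapshots = [rng.normal(size=(8, 8)) for _ in range(100)]  # mock DNS snapshots
u_bar = running_time_average(snapshots)     # fine-grid running time average
u_coarse = volume_average(u_bar, 4)         # handed to the next coarser mesh
print(u_coarse.shape)  # prints (2, 2)
```

In the full method, correlations such as the time average of a product minus the product of the time averages would be computed at each stage and added as source terms on the coarser mesh.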
NASA Astrophysics Data System (ADS)
Noda, H.; Lapusta, N.; Kanamori, H.
2010-12-01
Static stress drop is often estimated using the seismic moment and rupture area based on a model for uniform stress drop distribution; we denote this estimate by Δσ_M. Δσ_M is sometimes interpreted as the spatial average of stress change over the ruptured area, denoted here as Δσ_A, and used accordingly, for example, to discuss the relation between recurrence interval and the healing of the frictional surface in a system with one degree of freedom [e.g., Marone, 1998]. Δσ_M is also used to estimate available energy (defined as the strain energy change computed using the final stress state as the reference one) and radiation efficiency [e.g., Venkataraman and Kanamori, 2004]. In this work, we define a stress drop measure, Δσ_E, that would enter the exact computation of available energy and radiation efficiency. The three stress drop measures - Δσ_M, which can be estimated from observations, Δσ_A, and Δσ_E - are equal if the static stress change is spatially uniform, and that motivates substituting Δσ_M for the other two quantities in applications. However, finite source inversions suggest that the stress change is heterogeneous in natural earthquakes [e.g., Bouchon, 1997]. Since Δσ_M is the average of stress change weighted by the slip distribution due to a uniform stress drop [Madariaga, 1979], Δσ_E is the average of stress change weighted by the actual slip distribution in the event (this work), and Δσ_A is the simple spatial average of stress change, the three measures should, in general, be different. Here, we investigate the effect of heterogeneity aiming to understand how to use the seismological estimates of stress drop appropriately. We create heterogeneous slip distributions for both circular and rectangular planar ruptures using the approach motivated by Liu-Zeng et al. [2005] and Lavallée et al. [2005]. We find that, indeed, the three stress drop measures differ in our scenarios.
In particular, heterogeneity increases Δσ_E and thus the available energy when the seismic moment (and hence Δσ_M) is preserved. So using Δσ_M instead of Δσ_E would underestimate available energy and hence overestimate radiation efficiency. For a range of parameters, Δσ_E is well approximated by the seismic estimate Δσ_M if the latter is computed using a modified (decreased) rupture area that excludes low-slip regions; a qualitatively similar procedure is already being used in practice [Somerville et al., 1999].
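The distinction between the three measures reduces to three different weightings of the same stress-change field, sketched here on a toy discretized rupture (the patch values are invented and equal patch areas are assumed; on this toy example heterogeneity raises the energy-based measure above the other two, mirroring the abstract's point):

```python
import numpy as np

def stress_drop_measures(dtau, slip_actual, slip_uniform):
    """Three averages of a heterogeneous static stress change over a rupture.

    dtau:         stress change on each fault patch (equal patch areas assumed)
    slip_actual:  actual slip on each patch     -> energy-based weighting
    slip_uniform: slip from a uniform-stress-drop crack model -> seismic estimate
    The plain spatial mean corresponds to the area-based measure.
    """
    dtau = np.asarray(dtau, float)
    ds_A = dtau.mean()                                # simple spatial average
    ds_E = np.average(dtau, weights=slip_actual)      # weighted by actual slip
    ds_M = np.average(dtau, weights=slip_uniform)     # weighted by crack-model slip
    return ds_M, ds_A, ds_E

# Toy heterogeneous rupture: high stress drop where the slip concentrates
dtau = np.array([1.0, 5.0, 1.0])
slip = np.array([0.2, 1.0, 0.2])       # actual slip, peaked at the center
slip_u = np.array([0.6, 0.8, 0.6])     # smoother uniform-stress-drop slip
print(stress_drop_measures(dtau, slip, slip_u))
```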
Drop-Weight-Tear-Test Equipment Energy Calibration Program
Eiber, R.J.
1988-10-01
The Drop-Weight-Tear-Test (DWTT) energy absorption has not previously been considered as a measure of fracture toughness from this test. The DWTT was originally planned to define the fracture appearance transition temperature of line pipe. The test has been very successfully used for this purpose for the past 20 years. During this period of DWTT usage, the need for a toughness measurement to control ductile fracture propagation has seen the application of a Charpy shelf-energy in addition to a DWTT or Charpy shear area requirement in the specification of line-pipe properties. The purpose of exploring energy measurements in the DWTT is to determine if both fracture appearance and toughness can be obtained from a single test. To this end, a number of individual steel companies and gas-transmission companies have been examining the use of a DWTT energy measurement. Also, the Round Robin Program on the precracked drop-weight-tear test (DWTT) indicated that the energy measurements by the 12 participating laboratories had a coefficient of variation (standard deviation/average) that is about 30 percent larger than for the Charpy V-notch test. Also, the average energy variations between the various laboratories were quite large. This suggested that there was a need for calibrating the equipment used for energy measurements. It also suggested that equipment design may be a contributing factor. The objective of this program is to obtain reference steels that are uniform and of varying energy levels so that reference specimens can be supplied to a laboratory to assess the accuracy and precision of their DWTT energy measuring equipment. 7 figs., 5 tabs.
Selective Model Averaging with Bayesian Rule Learning for Predictive Biomedicine
Balasubramanian, Jeya B.; Visweswaran, Shyam; Cooper, Gregory F.; Gopalakrishnan, Vanathi
2014-01-01
Accurate disease classification and biomarker discovery remain challenging tasks in biomedicine. In this paper, we develop and test a practical approach to combining evidence from multiple models when making predictions using selective Bayesian model averaging of probabilistic rules. This method is implemented within a Bayesian Rule Learning system and compared to model selection when applied to twelve biomedical datasets using the area under the ROC curve measure of performance. Cross-validation results indicate that selective Bayesian model averaging statistically significantly outperforms model selection on average in these experiments, suggesting that combining predictions from multiple models may lead to more accurate quantification of classifier uncertainty. This approach would directly impact the generation of robust predictions on unseen test data, while also increasing knowledge for biomarker discovery and mechanisms that underlie disease. PMID:25717394
INVERSIONS FOR AVERAGE SUPERGRANULAR FLOWS USING FINITE-FREQUENCY KERNELS
Svanda, Michal, E-mail: michal@astronomie.cz [Astronomical Institute, Academy of Sciences of the Czech Republic (v.v.i.), Fricova 298, CZ-25165 Ondrejov (Czech Republic)
2012-11-10
I analyze the maps recording the travel-time shifts caused by averaged plasma anomalies under an 'average supergranule', constructed by means of statistical averaging over 5582 individual supergranules with large divergence signals detected in two months of Helioseismic and Magnetic Imager Dopplergrams. By utilizing a three-dimensional validated time-distance inversion code, I measure a peak vertical velocity of 117 ± 2 m s^-1 at depths around 1.2 Mm in the center of the supergranule and a root-mean-square vertical velocity of 21 m s^-1 over the area of the supergranule. A discrepancy between this measurement and the measured surface vertical velocity (a few m s^-1) can be explained by the existence of the large-amplitude vertical flow under the surface of supergranules with large divergence signals, recently suggested by Duvall and Hanasoge.
Weight knowledge and weight magnitude: impact on lumbosacral loading.
Farrag, Ahmed T; Elsayed, Walaa H; El-Sayyad, Mohsen M; Marras, William S
2015-01-01
Several factors can impact lumbosacral loads during lifting, including weight knowledge and weight magnitude. However, the interaction between them has never been tested. This study investigated the interaction effect of these variables on lumbosacral forces and moments. Participants performed symmetrical lifts using three different weights. Weight knowledge involved known and unknown weight conditions. A biologically assisted dynamic model was used to calculate spinal loading parameters. Weight impacted all variables, while knowledge impacted only compression, by a moderate amount (5%), and spinal moments. Lifting a light weight resulted in a difference of 16% and 7.2% between knowledge conditions for compression and anterior-posterior shear forces, respectively, compared with a negligible difference of < 1% when lifting a heavy weight. Increased spinal loading with a light unknown weight can be attributed to increased muscular co-contraction. Weight knowledge is important to consider at low weight levels, as it can increase tissue loading to values equivalent to lifting a heavier weight. PMID:25329859
Light weight phosphate cements
Wagh, Arun S. (Naperville, IL); Natarajan, Ramkumar (Woodridge, IL); Kahn, David (Miami, FL)
2010-03-09
A sealant having a specific gravity in the range of from about 0.7 to about 1.6 for heavy oil and/or coal bed methane fields is disclosed. The sealant has a binder including an oxide or hydroxide of Al or of Fe and a phosphoric acid solution. The binder may have MgO or an oxide of Fe and/or an acid phosphate. The binder is present from about 20 to about 50% by weight of the sealant with a lightweight additive present in the range of from about 1 to about 10% by weight of said sealant, a filler, and water sufficient to provide chemically bound water present in the range of from about 9 to about 36% by weight of the sealant when set. A porous ceramic is also disclosed.
Sethi, Bipin Kumar; Nagesh, V Sri
2015-05-01
Ramadan fasting is associated with significant weight loss in both men and women. Reductions in blood pressure, lipids, blood glucose, body mass index, and waist and hip circumference may also occur. However, benefits accrued during this month often reverse within a few weeks of cessation of fasting, with most people returning to their pre-Ramadan body weights and body composition. To ensure maintenance of this fasting-induced weight loss, health care professionals should encourage continuation of healthy dietary habits, moderate physical activity and behaviour modification, even after the conclusion of fasting. It should be realized that Ramadan is an ideal platform to target year-long lifestyle modification, to ensure that whatever health care benefits have been gained during this month are perpetuated. PMID:26013789
Weighting by Inverse Variance or by Sample Size in Random-Effects Meta-Analysis
ERIC Educational Resources Information Center
Marin-Martinez, Fulgencio; Sanchez-Meca, Julio
2010-01-01
Most of the statistical procedures in meta-analysis are based on the estimation of average effect sizes from a set of primary studies. The optimal weight for averaging a set of independent effect sizes is the inverse variance of each effect size, but in practice these weights have to be estimated, being affected by sampling error. When assuming a…
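The inverse-variance weighting this abstract describes can be sketched in a few lines; the effect sizes and variances below are illustrative stand-ins, not data from any primary studies.

```python
# Inverse-variance weighted average of independent effect sizes:
# each study is weighted by 1/variance, the optimal weight noted above.
# A minimal sketch with made-up inputs.

def inverse_variance_mean(effects, variances):
    """Combine independent effect sizes, weighting each by 1/variance."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    mean = sum(w * e for w, e in zip(weights, effects)) / total
    variance = 1.0 / total  # variance of the combined estimate
    return mean, variance

mean, var = inverse_variance_mean([0.30, 0.50, 0.20], [0.01, 0.04, 0.02])
```

In practice, as the abstract notes, the variances themselves are estimated from the primary studies and therefore carry sampling error of their own.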
High School Weight Training: A Comprehensive Program.
ERIC Educational Resources Information Center
Viscounte, Roger; Long, Ken
1989-01-01
Describes a weight training program, suitable for the general student population and the student-athlete, which is designed to produce improvement in specific, measurable areas including bench press (upper body), leg press (lower body), vertical jump (explosiveness); and 40-yard dash (speed). Two detailed charts are included, with notes on their…
Average Transmission Probability of a Random Stack
ERIC Educational Resources Information Center
Lu, Yin; Miniatura, Christian; Englert, Berthold-Georg
2010-01-01
The transmission through a stack of identical slabs that are separated by gaps with random widths is usually treated by calculating the average of the logarithm of the transmission probability. We show how to calculate the average of the transmission probability itself with the aid of a recurrence relation and derive analytical upper and lower…
Determinants of College Grade Point Averages
ERIC Educational Resources Information Center
Bailey, Paul Dean
2012-01-01
Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even continued enrollment at a university. However, GPAs are determined not only by student ability but also by…
Annealing Between Distributions by Averaging Moments
Toronto, University of
Annealing Between Distributions by Averaging Moments. Roger Grosse (joint work with Chris J. Maddison and Ruslan Salakhutdinov), Dept. of Computer Science, University of Toronto.
40 CFR 63.846 - Emission averaging.
Code of Federal Regulations, 2011 CFR
2011-07-01
...divided by the quantity of aluminum produced during the period...comprising the averaging group. (2) To determine...total emissions by total aluminum production. (3...total emissions by total aluminum production. (c...making up each averaging group shall not exceed...
Scalar averaging in Szekeres dust models
NASA Astrophysics Data System (ADS)
Sussman, Roberto A.
2013-07-01
We consider a formalism of weighted proper-volume scalar averages (the "q-average") for the study of quasi-spherical Szekeres models. We show that the q-averages of the main covariant fluid-flow scalars are spherically symmetric and satisfy FLRW evolution laws, so that fluctuations and perturbations with respect to these averages provide a full description of the deviation of the models from homogeneity and spherical symmetry. The main proper tensors of the models are given in terms of these fluctuations, with the averages of scalar invariant contractions expressed as second-order statistical moments of the density and the Hubble scalar expansion. We discuss a possible application of this formalism in connection with a gravitational entropy functional in which entropy production is directly related to a negative statistical correlation between density and velocity fluctuations.
The Hubble rate in averaged cosmology
Umeh, Obinna; Larena, Julien; Clarkson, Chris E-mail: julien.larena@gmail.com
2011-03-01
The calculation of the averaged Hubble expansion rate in an averaged perturbed Friedmann-Lemaître-Robertson-Walker cosmology leads to small corrections to the background value of the expansion rate, which could be important for measuring the Hubble constant from local observations. It also predicts an intrinsic variance associated with the finite scale of any measurement of H{sub 0}, the Hubble rate today. Both the mean Hubble rate and its variance depend on both the definition of the Hubble rate and the spatial surface on which the average is performed. We quantitatively study different definitions of the averaged Hubble rate encountered in the literature by consistently calculating the backreaction effect at second order in perturbation theory, and compare the results. We employ for the first time a recently developed gauge-invariant definition of an averaged scalar. We also discuss the variance of the Hubble rate for the different definitions.
Average diurnal variation of summer lightning over the Florida peninsula
NASA Technical Reports Server (NTRS)
Maier, L. M.; Krider, E. P.; Maier, M. W.
1984-01-01
Data derived from a large network of electric field mills are used to determine the average diurnal variation of lightning in a Florida seacoast environment. The variation at the NASA Kennedy Space Center and the Cape Canaveral Air Force Station area is compared with standard weather observations of thunder, and the variation of all discharges in this area is compared with the statistics of cloud-to-ground flashes over most of the South Florida peninsula and offshore waters. The results show average diurnal variations that are consistent with statistics of thunder start times and the times of maximum thunder frequency, but that the actual lightning tends to stop one to two hours before the recorded thunder. The variation is also consistent with previous determinations of the times of maximum rainfall and maximum rainfall rate.
Ensemble bayesian model averaging using markov chain Monte Carlo sampling
Vrugt, Jasper A [Los Alamos National Laboratory; Diks, Cees G H [NON LANL; Clark, Martyn P [NON LANL
2008-01-01
Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper (Raftery et al., Mon Weather Rev 133:1155-1174, 2005), the authors recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
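As a rough illustration of the BMA predictive distribution discussed in this abstract (a weighted mixture of member forecast densities), the sketch below evaluates a Gaussian mixture at one forecast point. The weights, variances, and member forecasts are invented for illustration; they stand in for values that EM or DREAM would estimate from training data.

```python
import math

def bma_density(y, forecasts, weights, variances):
    """Evaluate the BMA predictive density p(y) = sum_k w_k * N(y; f_k, s2_k),
    a weighted mixture of Gaussians centred on each member's forecast."""
    total = 0.0
    for f, w, s2 in zip(forecasts, weights, variances):
        total += w * math.exp(-(y - f) ** 2 / (2 * s2)) / math.sqrt(2 * math.pi * s2)
    return total

# Illustrative three-member ensemble for a single temperature forecast point.
p = bma_density(21.0, forecasts=[20.5, 21.5, 22.0],
                weights=[0.5, 0.3, 0.2], variances=[1.0, 1.0, 1.5])
```

The training problem the abstract compares EM against DREAM on is precisely the estimation of these weights and variances from past forecast/observation pairs.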
Unbiased Average Age-Appropriate Atlases for Pediatric Studies
Fonov, Vladimir; Evans, Alan C.; Botteron, Kelly; Almli, C. Robert; McKinstry, Robert C.; Collins, D. Louis
2010-01-01
Spatial normalization, registration, and segmentation techniques for Magnetic Resonance Imaging (MRI) often use a target or template volume to facilitate processing, take advantage of prior information, and define a common coordinate system for analysis. In the neuroimaging literature, the MNI305 Talairach-like coordinate system is often used as a standard template. However, when studying pediatric populations, variation from the adult brain makes the MNI305 suboptimal for processing brain images of children. Morphological changes occurring during development render the use of age-appropriate templates desirable to reduce potential errors and minimize bias during processing of pediatric data. This paper presents the methods used to create unbiased, age-appropriate MRI atlas templates for pediatric studies that represent the average anatomy for the age range of 4.5–18.5 years, while maintaining a high level of anatomical detail and contrast. The anatomical T1-weighted, T2-weighted, and proton-density-weighted templates for specific developmentally important age ranges were created using data derived from the largest epidemiological, representative (healthy and normal) sample of the U.S. population, where each subject was carefully screened for medical and psychiatric factors and characterized using established neuropsychological and behavioral assessments. Use of these age-specific templates was evaluated by computing average tissue maps for gray matter, white matter, and cerebrospinal fluid for each specific age range, and by conducting an exemplar voxel-wise deformation-based morphometry study using 66 young (4.5–6.9 years) participants to demonstrate the benefits of using the age-appropriate templates. The public availability of these atlases/templates will facilitate analysis of pediatric MRI data and enable comparison of results between studies in a common standardized space specific to pediatric research. PMID:20656036
Time averaging of instantaneous quantities in HYDRA
McCallen, R.C.
1996-09-01
For turbulent flow, the evaluation of direct numerical simulations (DNS), where all scales are resolved, and large-eddy simulations (LES), where only large scales are resolved, is difficult because the results are three-dimensional and transient. To simplify the analysis, the instantaneous flow field can be averaged in time for evaluation and comparison to experimental results. The incompressible Navier-Stokes flow code HYDRA has been modified to calculate time-averaged quantities for both DNS and LES. This report describes how time averages of instantaneous quantities are generated during program execution (i.e., while generating the instantaneous quantities, instead of as a postprocessing operation). The calculations are performed during program execution to avoid storing values at each time step and thus to reduce storage requirements. The methods used in calculating the time-average velocities, the turbulent intensities <u'^2>, <v'^2>, and <w'^2>, and the turbulent shear <u'v'> are outlined; the brackets <> used here represent a time average. The described averaging methods were implemented in the HYDRA code for three-dimensional problem solutions. Also presented is a method for taking the time averages for a number of consecutive intervals and calculating the time average for the sum of the intervals. This method could be used for code restarts or further postprocessing of the time averages from consecutive intervals. It was not used in the HYDRA implementation, but is included here for completeness. In HYDRA, the running sums needed for time averaging are simply written to the restart dump.
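The in-execution running-sum scheme this report describes can be sketched as follows; the class interface, variable names, and single velocity component are illustrative assumptions, not HYDRA's actual code.

```python
# Running time averages accumulated during a simulation, so that no
# per-time-step history needs to be stored. <u> and <u'^2> follow from
# the accumulated integrals of u dt and u*u dt.

class RunningTimeAverage:
    def __init__(self):
        self.t = 0.0        # accumulated simulated time
        self.sum_u = 0.0    # running integral of u dt
        self.sum_uu = 0.0   # running integral of u*u dt

    def update(self, u, dt):
        """Called once per time step with the instantaneous value."""
        self.t += dt
        self.sum_u += u * dt
        self.sum_uu += u * u * dt

    def mean(self):
        return self.sum_u / self.t

    def fluctuation_sq(self):
        # <u'^2> = <u^2> - <u>^2
        return self.sum_uu / self.t - self.mean() ** 2

avg = RunningTimeAverage()
for u in [1.0, 3.0, 1.0, 3.0]:   # toy instantaneous samples
    avg.update(u, dt=0.5)
```

Writing `t`, `sum_u`, and `sum_uu` to a restart dump, as the report notes HYDRA does with its running sums, is sufficient to continue the averaging across code restarts.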
Social embeddedness in an online weight management programme is linked to greater weight loss.
Poncela-Casasnovas, Julia; Spring, Bonnie; McClary, Daniel; Moller, Arlen C; Mukogo, Rufaro; Pellegrini, Christine A; Coons, Michael J; Davidson, Miriam; Mukherjee, Satyam; Nunes Amaral, Luis A
2015-03-01
The obesity epidemic is heightening chronic disease risk globally. Online weight management (OWM) communities could potentially promote weight loss among large numbers of people at low cost. Because little is known about the impact of these online communities, we examined the relationship between individual and social network variables, and weight loss in a large, international OWM programme. We studied the online activity and weight change of 22,419 members of an OWM system during a six-month period, focusing especially on the 2033 members with at least one friend within the community. Using Heckman's sample-selection procedure to account for potential selection bias and data censoring, we found that initial body mass index, adherence to self-monitoring and social networking were significantly correlated with weight loss. Remarkably, greater embeddedness in the network was the variable with the highest statistical significance in our model for weight loss. Average per cent weight loss at six months increased in a graded manner from 4.1% for non-networked members, to 5.2% for those with a few (two to nine) friends, to 6.8% for those connected to the giant component of the network, to 8.3% for those with high social embeddedness. Social networking within an OWM community, and particularly when highly embedded, may offer a potent, scalable way to curb the obesity epidemic and other disorders that could benefit from behavioural changes. PMID:25631561
Conservation benefits of temperate marine protected areas: variation among fish species.
Blyth-Skyrme, Robert E; Kaiser, Michel J; Hiddink, Jan G; Edwards-Jones, Gareth; Hart, Paul J B
2006-06-01
Marine protected areas, and other fishery management systems that impart partial or total protection from fishing, are increasingly advocated as an essential management tool to ensure the sustainable use of marine resources. Beneficial effects for fish species are well documented for tropical and reef systems, but the effects of marine protected areas remain largely untested in temperate waters. We compared trends in sport-fishing catches of nine fish species in an area influenced by a large (500 km²) towed-fishing-gear restriction zone and in adjacent areas under conventional fishery management controls. Over the period 1973-2002 the mean reported weight of above-average-sized (trophy) fish of species with early age at maturity and limited home range was greatest within the area influenced by the fishing-gear restriction zone. The reported weight of trophy fish of species that mature early also declined less and more slowly over time within the area influenced by the fishing-gear restriction zone. Importantly, the mean reported weight of trophy fish of species that mature late and those that undertake extensive spatial movements declined at the same rate in all areas. Hence these species are likely to require protected areas > 500 km² for effective protection. Our results also indicated that fish species with a localized distribution or high site fidelity may require additional protection from sport fishing to prevent declines in the number or size of fish within the local population. PMID:16909574
Self-averaging in complex brain neuron signals
NASA Astrophysics Data System (ADS)
Bershadskii, A.; Dremencov, E.; Fukayama, D.; Yadid, G.
2002-12-01
Nonlinear statistical properties of the Ventral Tegmental Area (VTA) of the limbic brain are studied in vivo. The VTA plays a key role in the generation of pleasure and in the development of psychological drug addiction. It is shown that spiking time series of the VTA dopaminergic neurons exhibit long-range correlations with self-averaging behavior. This specific VTA phenomenon has no relation to the VTA rewarding function. The latter result reveals the complex role of the VTA in the limbic brain.
Ideal weight and weight satisfaction: association with health practices.
Kuk, Jennifer L; Ardern, Chris I; Church, Timothy S; Hebert, James R; Sui, Xuemei; Blair, Steven N
2009-08-15
Evidence suggests that individuals have become more tolerant of higher body weights over time. To investigate this issue further, the authors examined cross-sectional associations among ideal weight, examination year, and obesity as well as the association of ideal weight and body weight satisfaction with health practices among 15,221 men and 4,126 women in the United States. Participants in 1987 reported higher ideal weights than participants in 2001, an effect particularly pronounced from 1987 to 2001 for younger and obese men (85.5 kg to 94.9 kg) and women (62.2 kg to 70.5 kg). For a given body mass index, higher ideal body weights were associated with greater weight satisfaction but lower intentions to lose weight. Body weight satisfaction was subsequently associated with greater walking/jogging, better diet, and lower lifetime weight loss but with less intention to change physical activity and diet or lose weight (P < 0.01). Conversely, body mass index was negatively associated with weight satisfaction (P < 0.01) and was associated with less walking/jogging, poorer diet, and greater lifetime weight loss but with greater intention to change physical activity and diet or lose weight. Although the health implications of these findings are somewhat unclear, increased weight satisfaction, in conjunction with increases in societal overweight/obesity, may result in decreased motivation to lose weight and/or adopt healthier lifestyle behaviors. PMID:19546153
A NOTE ON Ap WEIGHTS: PASTING WEIGHTS AND CHANGING VARIABLES
Pérez, Mario
For two weights u, w on R^n, we show that w ∈ A_{p,u} (the Muckenhoupt class of weights) if and only if wu ∈ A_p and wu^{1-p} ∈ A_p, under the assumption that u ∈ A_r for every r > 1. We also prove a rather general result
Support weight enumerators and coset weight distributions of isodual codes
Milenkovic, Olgica
In this paper, various methods for computing the support weight enumerators of binary, linear, even, isodual codes are described. It is shown that there exist relationships between support weight enumerators
Minimum Weight Euclidean Matching and Weighted Relative Neighborhood Graphs
Mirzaian, Andranik
The Minimum Weight Euclidean Matching (MWEM) problem is: given 2n point sites in the plane with Euclidean distances, match the sites into n pairs so that the total length of the matching edges is minimized. An O((n² + F) log n) time algorithm is described, based on the Weighted Voronoi Diagram (WVD) of the sites, where F
Barto, Libor
Weighted Clones. Libor Barto, Department of Algebra, Faculty of Mathematics and Physics, Charles University in Prague. AAA 89, February 27, 2015. What is this talk about? Clones and relational clones; why study clones: almost the whole of universal algebra, plus fun; why study relational clones: the CSP; why we care about weighted clones: the connection between universal algebra and the CSP.
NSDL National Science Digital Library
Lance King
2011-07-26
Students will recognize that the mass of an object is a measure that is independent of gravity. If they can effectively complete the guided inquiry activity as well as the short writing summary to reinforce what they learned, they will gain a foundation for understanding the difference between mass and weight.
NSDL National Science Digital Library
COSI
2009-01-01
In this activity about weights and balances, learners create their own balance using paper cups. Then, learners explore how to compare the relative mass of objects. In the "Now, explore!" section, to take the experiment one step further, they can make carbon dioxide gas and discover its mass relative to the air around it.
ERIC Educational Resources Information Center
Nutter, June
1995-01-01
Secondary level physical education teachers can have their students use math concepts while working out on the weight-room equipment. The article explains how students can reinforce math skills while weightlifting by estimating their strength, estimating their power, or calculating other formulas. (SM)
Exponential smoothing weighted correlations
NASA Astrophysics Data System (ADS)
Pozzi, F.; Di Matteo, T.; Aste, T.
2012-06-01
In many practical applications, correlation matrices may be affected by the "curse of dimensionality" and by an excessive sensitivity to outliers and remote observations. These shortcomings can cause problems of statistical robustness, especially accentuated when a system of dynamic correlations over a running window is concerned. These drawbacks can be partially mitigated by assigning a structure of weights to observational events. In this paper, we discuss Pearson's ρ and Kendall's τ correlation matrices, weighted with an exponential smoothing, computed on moving windows using a data set of daily returns for 300 NYSE highly capitalized companies in the period between 2001 and 2003. Criteria for jointly determining optimal weights together with the optimal length of the running window are proposed. We find that the exponential smoothing can provide more robust and reliable dynamic measures, and we discuss how a careful choice of the parameters can reduce the autocorrelation of dynamic correlations whilst keeping the significance and robustness of the measure. Weighted correlations are found to be smoother and to recover faster from market turbulence than their unweighted counterparts, helping also to discriminate more effectively genuine from spurious correlations.
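A minimal sketch of an exponentially smoothed weighted Pearson correlation over one window of paired returns, in the spirit of the scheme this abstract describes; the decay form and the parameter `theta` are illustrative choices, not the authors' exact specification.

```python
import math

def exp_weighted_corr(x, y, theta):
    """Weighted Pearson correlation with exponentially decaying weights:
    the most recent observation carries the largest weight."""
    n = len(x)
    w = [math.exp((i - n) / theta) for i in range(1, n + 1)]
    s = sum(w)
    w = [wi / s for wi in w]  # normalize weights to sum to 1
    mx = sum(wi * xi for wi, xi in zip(w, x))   # weighted means
    my = sum(wi * yi for wi, yi in zip(w, y))
    cov = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
    vx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
    vy = sum(wi * (yi - my) ** 2 for wi, yi in zip(w, y))
    return cov / math.sqrt(vx * vy)

# Toy usage on two perfectly correlated return series.
r = exp_weighted_corr([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0], theta=3.0)
```

Sliding this computation over consecutive windows yields the kind of dynamic weighted correlation series whose smoothness the abstract compares against unweighted counterparts.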
E. Kalai; D. Samet
1987-01-01
Nonsymmetric Shapley values for coalitional form games with transferable utility are studied. The nonsymmetries are modeled through nonsymmetric weight systems defined on the players of the games. It is shown axiomatically that two families of solutions of this type are possible. These families are strongly related to each other through the duality relationship on games. While the first family lends
Testing +/- 1-Weight Halfspaces
Matulef, Kevin M.
2009-01-01
We consider the problem of testing whether a Boolean function f: {-1,1}^n → {-1,1} is a ±1-weight halfspace, i.e. a function of the form f(x) = sgn(w_1 x_1 + w_2 x ...
Light propagation in the averaged universe
Bagheri, Samae; Schwarz, Dominik J. E-mail: dschwarz@physik.uni-bielefeld.de
2014-10-01
Cosmic structures determine how light propagates through the Universe and consequently must be taken into account in the interpretation of observations. In the standard cosmological model at the largest scales, such structures are either ignored or treated as small perturbations to an isotropic and homogeneous Universe. This isotropic and homogeneous model is commonly assumed to emerge from some averaging process at the largest scales. We assume that there exists an averaging procedure that preserves the causal structure of space-time. Based on that assumption, we study the effects of averaging the geometry of space-time and derive an averaged version of the null geodesic equation of motion. For the averaged geometry we then assume a flat Friedmann-Lemaître (FL) model and find that light propagation in this averaged FL model is not given by null geodesics of that model, but rather by a modified light propagation equation that contains an effective Hubble expansion rate, which differs from the Hubble rate of the averaged space-time.
Lagged average predictions in a predictability experiment
NASA Technical Reports Server (NTRS)
Roads, John O.
1988-01-01
Lagged average predictions are examined here within the context of an idealized predictability experiment. Lagged predictions contribute to making better forecasts than the forecasts obtained from using only the latest initial state. Analytic models suggest that lagged predictions contribute the greatest amount when the error growth rates are small. Little dependence upon the magnitude of the initial error is found if the growth rates remain constant. It is also shown how lagged average forecasts can be used to predict the error. Discriminating forecasts, made only when the error is predicted to be small, are shown to have much better than average skill.
Cosmic inhomogeneities and averaged cosmological dynamics.
Paranjape, Aseem; Singh, T P
2008-10-31
If general relativity (GR) describes the expansion of the Universe, the observed cosmic acceleration implies the existence of a "dark energy." However, while the Universe is on average homogeneous on large scales, it is inhomogeneous on smaller scales. While GR governs the dynamics of the inhomogeneous Universe, the averaged homogeneous Universe obeys modified Einstein equations. Can such modifications alone explain the acceleration? For a simple generic model with realistic initial conditions, we show the answer to be "no." Averaging effects negligibly influence the cosmological dynamics. PMID:18999811
GROUP ACTION INDUCED AVERAGING FOR HARDI PROCESSING
Çetingül, H. Ertan; Afsari, Bijan; Wright, Margaret J.; Thompson, Paul M.; Vidal, Rene
2012-01-01
We consider the problem of processing high angular resolution diffusion images described by orientation distribution functions (ODFs). Prior work showed that several processing operations, e.g., averaging, interpolation and filtering, can be reduced to averaging in the space of ODFs. However, this approach leads to anatomically erroneous results when the ODFs to be processed have very different orientations. To address this issue, we propose a group action induced distance for averaging ODFs, which leads to a novel processing framework on the spaces of orientation (the space of 3D rotations) and shape (the space of ODFs with the same orientation). Experiments demonstrate that our framework produces anatomically meaningful results. PMID:22903055
Ensemble Averaged Probability Density Function (APDF) for Compressible Turbulent Reacting Flows
NASA Technical Reports Server (NTRS)
Shih, Tsan-Hsing; Liu, Nan-Suey
2012-01-01
In this paper, we present a concept of the averaged probability density function (APDF) for studying compressible turbulent reacting flows. The APDF is defined as an ensemble average of the fine grained probability density function (FG-PDF) with a mass density weighting. It can be used to exactly deduce the mass density weighted, ensemble averaged turbulent mean variables. The transport equation for APDF can be derived in two ways. One is the traditional way that starts from the transport equation of FG-PDF, in which the compressible Navier-Stokes equations are embedded. The resulting transport equation of APDF is then in a traditional form that contains conditional means of all terms from the right hand side of the Navier-Stokes equations except for the chemical reaction term. These conditional means are new unknown quantities that need to be modeled. Another way of deriving the transport equation of APDF is to start directly from the ensemble averaged Navier-Stokes equations. The resulting transport equation of APDF derived from this approach appears in a closed form without any need for additional modeling. The methodology of ensemble averaging presented in this paper can be extended to other averaging procedures: for example, the Reynolds time averaging for statistically steady flow and the Reynolds spatial averaging for statistically homogeneous flow. It can also be extended to a time or spatial filtering procedure to construct the filtered density function (FDF) for the large eddy simulation (LES) of compressible turbulent reacting flows.
Inadequate gestational weight gain and adverse pregnancy outcomes among normal weight women in China
Wen, Tingyuan; Lv, Yanwei
2015-01-01
Objective: To examine the association between inadequate gestational weight gain and adverse pregnancy outcomes in normal weight women in China. Method: A retrospective study was conducted among 13,776 normal weight pregnant women who received antenatal care and delivered singleton infants at the participating hospital from August 2009 to July 2013. Adverse pregnancy outcomes such as low birth weight (LBW), preterm birth, birth asphyxia, neonatal intensive care unit (NICU) admission, and length of hospital stay were compared and analyzed between two groups with inadequate and adequate gestational weight gain. Results: According to the IOM recommendations, the prevalence of inadequate gestational weight gain was 14.7% in this study. Women with inadequate gestational weight gain (GWG) were at a higher risk for LBW (aOR = 2.13, 95% CI: 1.75, 2.86) and preterm birth (aOR = 1.44, 95% CI: 1.21, 1.67) than those in the adequate gestational weight gain group, after adjusting for monthly family income, maternal education, occupation, residential area, and whether they received any advice regarding the benefits of gestational weight gain. However, inadequate GWG was not associated with a longer hospital stay (aOR = 1.13, 95% CI: 0.91, 1.43) in the adjusted model. In addition, the rates of birth asphyxia and NICU admission were similar in both groups (P > 0.05). Conclusions: Normal weight pregnant women with GWG below the IOM 2009 guidelines were at an increased risk of low birth weight and preterm birth. PMID:25932249
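The abstract reports adjusted odds ratios from a multivariable model. For readers unfamiliar with the measure, a crude (unadjusted) odds ratio from a 2×2 table is computed as follows; the counts below are purely illustrative, not the study's data:

```python
def odds_ratio(exposed_cases, exposed_noncases, unexposed_cases, unexposed_noncases):
    """Crude odds ratio for a 2x2 table: (a/b) / (c/d)."""
    return (exposed_cases / exposed_noncases) / (unexposed_cases / unexposed_noncases)

# Illustrative counts: LBW among inadequate-GWG vs. adequate-GWG mothers.
or_lbw = odds_ratio(80, 1920, 230, 11546)
```

An adjusted OR such as the reported aOR = 2.13 would instead come from a logistic regression that conditions on the listed covariates (income, education, occupation, etc.); the crude ratio is only the starting point.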
The Sensitivity of Guttman Weights.
ERIC Educational Resources Information Center
Green, Bert F., Jr.
The use of Guttman weights in scoring tests is discussed. Scores of 2,500 men on one subtest of the CEEB SAT-Verbal Test were examined using cross-validated Guttman weights. Several scores were compared, as follows: Scores obtained from cross-validated Guttman weights; Scores obtained by rounding the Guttman weights to one digit, ranging from 0 to…
Correlation Weights in Multiple Regression
ERIC Educational Resources Information Center
Waller, Niels G.; Jones, Jeff A.
2010-01-01
A general theory on the use of correlation weights in linear prediction has yet to be proposed. In this paper we take initial steps in developing such a theory by describing the conditions under which correlation weights perform well in population regression models. Using OLS weights as a comparison, we define cases in which the two weighting…
Biodegradation of high molecular weight polylactic acid
NASA Astrophysics Data System (ADS)
Stloukal, Petr; Koutny, Marek; Sedlarik, Vladimir; Kucharczyk, Pavel
2012-07-01
Polylactic acid appears to be an appropriate replacement for conventional non-biodegradable synthetic polymers, primarily because of the comparable mechanical, thermal, and processing properties of its high molecular weight form. Biodegradation of high molecular weight PLA was studied in compost for various forms differing in their specific surface area. The material proved its good biodegradability under composting conditions, and all investigated forms were found acceptable for industrial composting. Despite expectations, no significant differences in the resulting mineralization were observed between the fiber, film, and powder sample forms with different specific surface areas. Clearly faster biodegradation was detected only for the thin coating on a porous material with high specific surface area.
How the economy affects teenage weight.
Arkes, Jeremy
2009-06-01
Much research has focused on the proximate determinants of weight gain and obesity for adolescents, but not much information has emerged on identifying which adolescents might be at risk or on prevention. This research focuses on a distal determinant of teenage weight gain, namely changes in the economy, which may help identify geographical areas where adolescents may be at risk and may provide insights into the mechanisms by which adolescents gain weight. This study uses a nationally representative sample of individuals, between 15 and 18 years old from the 1997 US National Longitudinal Survey of Youth, to estimate a model with state and year fixed effects to examine how within-state changes in the unemployment rate affect four teenage weight outcomes: an age- and gender-standardized percentile in the body-mass-index distribution and indicators for being overweight, obese, and underweight. I found statistically significant estimates, indicating that females gain weight in weaker economic periods and males gain weight in stronger economic periods. Possible causes for the contrasting results across gender include, among other things, differences in the responsiveness of labor market work to the economy and differences in the types of jobs generally occupied by female and male teenagers. PMID:19364624
Spacetime Average Density (SAD) Cosmological Measures
Don N. Page
2014-10-22
The measure problem of cosmology is how to obtain normalized probabilities of observations from the quantum state of the universe. This is particularly a problem when eternal inflation leads to a universe of unbounded size so that there are apparently infinitely many realizations or occurrences of observations of each of many different kinds or types, making the ratios ambiguous. There is also the danger of domination by Boltzmann Brains. Here two new Spacetime Average Density (SAD) measures are proposed, Maximal Average Density (MAD) and Biased Average Density (BAD), for getting a finite number of observation occurrences by using properties of the Spacetime Average Density (SAD) of observation occurrences to restrict to finite regions of spacetimes that have a preferred beginning or bounce hypersurface. These measures avoid Boltzmann brain domination and appear to give results consistent with other observations that are problematic for other widely used measures, such as the observation of a positive cosmological constant.
40 CFR 86.449 - Averaging provisions.
Code of Federal Regulations, 2012 CFR
2012-07-01
...Emission Regulations for 1978 and Later New Motorcycles, General Provisions § 86.449 ...certification averaging program. Include only motorcycles certified under this subpart and intended...for which you manufacture or import motorcycles. (d) Calculate your...
40 CFR 86.449 - Averaging provisions.
Code of Federal Regulations, 2014 CFR
2014-07-01
...Emission Regulations for 1978 and Later New Motorcycles, General Provisions § 86.449 ...certification averaging program. Include only motorcycles certified under this subpart and intended...for which you manufacture or import motorcycles. (d) Calculate your...
40 CFR 86.449 - Averaging provisions.
Code of Federal Regulations, 2013 CFR
2013-07-01
...Emission Regulations for 1978 and Later New Motorcycles, General Provisions § 86.449 ...certification averaging program. Include only motorcycles certified under this subpart and intended...for which you manufacture or import motorcycles. (d) Calculate your...
Thermal ghost imaging with averaged speckle patterns
Shapiro, Jeffrey H.
We present theoretical and experimental results showing that a thermal ghost imaging system can produce images of high quality even when it uses detectors so slow that they respond only to intensity-averaged (that is, ...
Convergence speed in distributed consensus and averaging
Olshevsky, Alexander
We study the convergence speed of distributed iterative algorithms for the consensus and averaging problems, with emphasis on the latter. We first consider the case of a fixed communication topology. We show that a simple ...
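The distributed averaging iteration studied in such work can be sketched in a few lines: each node repeatedly replaces its value with a convex combination of its own and its neighbors' values, and a doubly stochastic update matrix guarantees convergence to the average. The ring topology and weights below are an illustrative choice, not taken from the thesis:

```python
import numpy as np

# Ring of 5 nodes. W is doubly stochastic (rows and columns sum to 1),
# so the iteration x <- W x preserves the average and converges to it.
n = 5
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

x = np.array([10.0, 0.0, 4.0, 6.0, 5.0])  # initial node values; average is 5.0
for _ in range(200):
    x = W @ x  # one round of local exchanges
```

The convergence speed that the thesis analyzes is governed by the second-largest eigenvalue modulus of W; for this ring it is about 0.905, so the disagreement shrinks by roughly 10% per round.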
Spacetime Average Density (SAD) cosmological measures
Page, Don N.
2014-11-01
The measure problem of cosmology is how to obtain normalized probabilities of observations from the quantum state of the universe. This is particularly a problem when eternal inflation leads to a universe of unbounded size so that there are apparently infinitely many realizations or occurrences of observations of each of many different kinds or types, making the ratios ambiguous. There is also the danger of domination by Boltzmann Brains. Here two new Spacetime Average Density (SAD) measures are proposed, Maximal Average Density (MAD) and Biased Average Density (BAD), for getting a finite number of observation occurrences by using properties of the Spacetime Average Density (SAD) of observation occurrences to restrict to finite regions of spacetimes that have a preferred beginning or bounce hypersurface. These measures avoid Boltzmann brain domination and appear to give results consistent with other observations that are problematic for other widely used measures, such as the observation of a positive cosmological constant.
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2014 CFR
2014-07-01
...PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11...in an approved averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2011 CFR
2011-07-01
...PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11...in an approved averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2013 CFR
2013-07-01
...PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11...in an approved averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2012 CFR
2012-07-01
...PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11...in an approved averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2010 CFR
2010-07-01
...76.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General provisions. In lieu of...
Averaging Sampled Sensor Outputs To Detect Failures
NASA Technical Reports Server (NTRS)
Panossian, Hagop V.
1990-01-01
Fluctuating signals smoothed by taking consecutive averages. Sampling-and-averaging technique processes noisy or otherwise erratic signals from number of sensors to obtain indications of failures in complicated system containing sensors. Used under both transient and steady-state conditions. Useful in monitoring automotive engines, chemical-processing plants, powerplants, and other systems in which outputs of sensors contain noise or other fluctuations in measured quantities.
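The sampling-and-averaging idea amounts to a running mean over consecutive samples. A minimal sketch, with an illustrative window size and synthetic sensor data (none of it from the NASA brief):

```python
import numpy as np

def consecutive_average(samples, window=8):
    """Smooth a noisy sensor signal by averaging consecutive samples."""
    samples = np.asarray(samples, dtype=float)
    kernel = np.ones(window) / window
    return np.convolve(samples, kernel, mode="valid")

# Synthetic sensor: a constant true level of 2.0 buried in Gaussian noise.
rng = np.random.default_rng(1)
noisy = 2.0 + 0.5 * rng.standard_normal(256)
smoothed = consecutive_average(noisy, window=16)
```

Averaging N consecutive samples reduces uncorrelated noise by roughly a factor of √N, which is why the smoothed signal tracks the underlying level closely enough to flag a sensor whose output drifts away from it.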
Heuristic approach to capillary pressures averaging
Coca, B.P.
1980-10-01
Several methods are available for averaging capillary pressure curves. Among these are the J-curve and regression equations of the wetting-fluid saturation in porosity and permeability (capillary pressure held constant). While the regression equations seem completely empirical, the J-curve method appears theoretically sound because its expression is based on a relation between the average capillary radius and the permeability-porosity ratio. An analysis is given of each of these methods.
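The J-curve method referred to above is commonly implemented with the Leverett J-function, which scales capillary pressure by the square root of the permeability-porosity ratio. A minimal sketch of the standard dimensionless form (the input values are illustrative; consistent SI units are assumed):

```python
import math

def leverett_j(pc, sigma, theta_deg, k, phi):
    """Dimensionless Leverett J-function:
    J = Pc / (sigma * cos(theta)) * sqrt(k / phi),
    with pc in Pa, sigma in N/m, k in m^2, phi dimensionless."""
    return pc / (sigma * math.cos(math.radians(theta_deg))) * math.sqrt(k / phi)

# Illustrative values: Pc = 5 kPa, sigma = 0.03 N/m, theta = 0 deg,
# k = 1e-13 m^2 (about 100 mD), phi = 0.2.
j = leverett_j(5e3, 0.03, 0.0, 1e-13, 0.2)
```

Because J depends on capillary pressure only through the ratio √(k/φ), curves measured on cores of different permeability and porosity can be collapsed onto one average curve, which is the theoretical soundness the abstract alludes to.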
Applications of high average power nonlinear optics
Velsko, S.P.; Krupke, W.F.
1996-02-05
Nonlinear optical frequency converters (harmonic generators and optical parametric oscillators) are reviewed with an emphasis on high average power performance and limitations. NLO materials issues and NLO device designs are discussed in reference to several emerging scientific, military, and industrial commercial applications requiring approximately 100 watt average power levels in the visible and infrared spectral regions. Research efforts required to enable practical 100-watt-class NLO-based laser systems are identified.
Method of averaging in Clifford algebras
D. S. Shirokov
2015-06-19
In this paper we consider different operators acting on Clifford algebras. We consider the Reynolds operator of Salingaros' vee group; this operator averages the action of Salingaros' vee group on the Clifford algebra. We also consider the conjugate action on the Clifford algebra. We present a relation between these operators and projection operators onto fixed subspaces of Clifford algebras. Using the method of averaging, we present solutions of a system of commutator equations.
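The averaging idea here is the standard Reynolds operator for a finite group, R(x) = (1/|G|) Σ_{g∈G} g x g⁻¹, which projects onto the subspace fixed by the conjugation action. A generic matrix sketch for a tiny illustrative group (this is the general construction, not the specific Clifford-algebra setting of the paper):

```python
import numpy as np

def reynolds_average(x, group):
    """Average the conjugation action of a finite matrix group on x:
    R(x) = (1/|G|) * sum_g g @ x @ inv(g).
    R is a projection onto the subspace of matrices fixed by the action."""
    return sum(g @ x @ np.linalg.inv(g) for g in group) / len(group)

# Illustrative group: {I, P}, where P swaps the two coordinates (a Z/2 action).
I = np.eye(2)
P = np.array([[0.0, 1.0], [1.0, 0.0]])
x = np.array([[1.0, 2.0], [3.0, 4.0]])
rx = reynolds_average(x, [I, P])
```

The averaged matrix rx commutes with every group element, and averaging it again changes nothing, which is the projection property the paper relates to projections onto fixed subspaces.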
Monthly average polar sea-ice concentration
Schweitzer, Peter N.
1995-01-01
The data contained in this CD-ROM depict monthly averages of sea-ice concentration in the modern polar oceans. These averages were derived from the Scanning Multichannel Microwave Radiometer (SMMR) and Special Sensor Microwave/Imager (SSM/I) instruments aboard satellites of the U.S. Air Force Defense Meteorological Satellite Program from 1978 through 1992. The data are provided as 8-bit images using the Hierarchical Data Format (HDF) developed by the National Center for Supercomputing Applications.
Instrument to average 100 data sets
NASA Technical Reports Server (NTRS)
Tuma, G. B.; Birchenough, A. G.; Rice, W. J.
1977-01-01
An instrumentation system is currently under development which will measure many of the important parameters associated with the operation of an internal combustion engine. Some of these parameters include mass-fraction burn rate, ignition energy, and the indicated mean effective pressure. One of the characteristics of an internal combustion engine is the cycle-to-cycle variation of these parameters. A curve-averaging instrument has been produced which will generate the average curve, over 100 cycles, of any engine parameter. The average curve is described by 2048 discrete points, which are displayed on an oscilloscope screen to facilitate recording, and is available in real time. Input can be any parameter which is expressed as a ±10-volt signal. Operation of the curve-averaging instrument is defined between 100 and 6000 rpm. Provisions have also been made for averaging as many as four parameters simultaneously, with a subsequent decrease in resolution. This provides the means to correlate and perhaps interrelate the phenomena occurring in an internal combustion engine. This instrument has been used successfully on a 1975 Chevrolet V8 engine and on a Continental 6-cylinder aircraft engine. While this instrument was designed for use on an internal combustion engine, with some modification it can be used to average any cyclically varying waveform.
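A software sketch of the operation the instrument performs: 100 cycles, each sampled at 2048 points, averaged point by point so that cycle-to-cycle noise cancels while the underlying waveform survives. The waveform and noise level below are synthetic, chosen only to illustrate the technique:

```python
import numpy as np

N_CYCLES, N_POINTS = 100, 2048  # as in the instrument's specification

rng = np.random.default_rng(2)
theta = np.linspace(0.0, 2.0 * np.pi, N_POINTS, endpoint=False)
clean = np.sin(theta)  # the underlying per-cycle waveform (illustrative)

# Each recorded cycle is the waveform plus independent cycle-to-cycle noise.
cycles = clean + 0.3 * rng.standard_normal((N_CYCLES, N_POINTS))

# Point-by-point average over the 100 cycles, as the instrument does in hardware.
average_curve = cycles.mean(axis=0)
```

With 100 cycles the random component at each of the 2048 points shrinks by a factor of 10, which is what makes the averaged curve usable for quantities like mass-fraction burn rate that are buried in cycle-to-cycle variation.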
Gain weighted eigenspace assignment
NASA Technical Reports Server (NTRS)
Davidson, John B.; Andrisani, Dominick, II
1994-01-01
This report presents the development of the gain weighted eigenspace assignment methodology. This provides a designer with a systematic methodology for trading off eigenvector placement versus gain magnitudes, while still maintaining desired closed-loop eigenvalue locations. This is accomplished by forming a cost function composed of a scalar measure of error between desired and achievable eigenvectors and a scalar measure of gain magnitude, determining analytical expressions for the gradients, and solving for the optimal solution by numerical iteration. For this development the scalar measure of gain magnitude is chosen to be a weighted sum of the squares of all the individual elements of the feedback gain matrix. An example is presented to demonstrate the method. In this example, solutions yielding achievable eigenvectors close to the desired eigenvectors are obtained with significant reductions in gain magnitude compared to a solution obtained using a previously developed eigenspace (eigenstructure) assignment method.
Kirkland, L; Anderson, R
1993-01-01
Only 5% of patients dieting to achieve permanent weight loss will be successful and reap the associated health benefits. Ninety-five percent will be unsuccessful. The health implications of failed dieting attempts are numerous and include negative effects on both physical and psychological well-being. Better alternatives to dieting help patients take small, positive, enjoyable steps toward healthy eating, active living, and a positive self-image. PMID:8435552
NSDL National Science Digital Library
2014-09-18
Using the same method for measuring friction that was used in the previous lesson (Discovering Friction), students design and conduct experiments to determine if weight added incrementally to objects affects the amount of friction encountered when they slide across flat surfaces. After graphing the data from their experiments, students calculate the coefficients of friction between the objects and the surfaces they moved upon, for both static and kinetic friction.
(Bessel-) weighted asymmetries
Bernhard Musch, Alexey Prokudin
2011-11-01
Semi-inclusive deep inelastic scattering experiments allow us to probe the motion of quarks inside the proton in terms of so-called transverse momentum dependent parton distribution functions (TMD PDFs), but the information is convoluted with fragmentation functions (TMD FFs) and soft factors. It has long been known that weighting the measured event counts with powers of the hadron momentum before forming angular asymmetries de-convolutes TMD PDFs and TMD FFs in an elegant way, but this also entails an undesirable sensitivity to high momentum contributions. Using Bessel functions as weights, we find a natural generalization of weighted asymmetries that preserves the de-convolution property and features soft-factor cancellation, yet allows us to be less sensitive to high transverse momenta. The formalism also relates to TMD quantities studied in lattice QCD. We briefly show preliminary lattice results from an exploratory calculation of the Boer-Mulders shift using lattices generated by the MILC and LHP collaborations at a pion mass of 500 MeV.
Does light attract piglets to the creep area?
Larsen, M L V; Pedersen, L J
2015-06-01
Hypothermia, experienced by piglets, has been related to piglet deaths, and high and early use of a heated creep area is considered important to prevent hypothermia. The aims of the present study were to investigate how a newly invented radiant heat source, eHeat, would affect piglets' use of the creep area and whether light in the creep area works as an attractant on piglets. A total of 39 sows, divided between two batches, were randomly distributed to three heat source treatments: (1) standard infrared heat lamp (CONT, n=19), (2) eHeat with light (EL, n=10) and (3) eHeat without light (ENL, n=10). Recordings of piglets' use of the creep area were made as scan sampling every 10 min for 3 h during two periods, one in daylight (0900 to 1200 h) and one in darkness (2100 to 2400 h), on day 1, 2, 3, 7, 14 and 21 postpartum. On the same days, piglets were weighed. Results showed an interaction between treatment and observation period (P<0.05) with a lower use of the creep area during darkness compared with daylight for CONT and EL litters, but not for ENL litters. Piglets' average daily weight gain was not affected by treatment, but was positively correlated with piglets' birth weight and was lower in batch 1 compared with batch 2. From the present results, neither eHeat nor light worked as an attractant on piglets; in contrast, piglets preferred to sleep in the dark, and it would therefore be recommended to turn off the light in the creep area during darkness. Heating up the creep area without light can be accomplished by using a radiant heat source such as eHeat in contrast to the normally used light-emitting infrared heat lamp. PMID:25711807
Measurement of the average b hadron lifetime in Z 0 decays
NASA Astrophysics Data System (ADS)
Acton, P. D.; Akers, R.; Alexander, G.; Allison, J.; Anderson, K. J.; Arcelli, S.; Astbury, A.; Axen, D.; Azuelos, G.; Baines, J. T. M.; Ball, A. H.; Banks, J.; Barlow, R. J.; Barnett, S.; Batoldus, R.; Batley, J. R.; Beaudoin, G.; Beck, A.; Beck, G. A.; Becker, J.; Beeston, C.; Behnke, T.; Bell, K. W.; Bella, G.; Bentkowski, P.; Berlich, P.; Bethke, S.; Biebel, O.; Bloodworth, I. J.; Bock, P.; Boden, B.; Bosch, H. M.; Boutemeur, M.; Breuker, H.; Bright-Thomas, P.; Brown, R. M.; Buijs, A.; Burckhart, H. J.; Burgard, C.; Capiluppi, P.; Carnegie, R. K.; Carter, A. A.; Carter, J. R.; Chang, C. Y.; Charlton, D. G.; Chu, S. L.; Clarke, P. E. L.; Clayton, J. C.; Cohen, I.; Conboy, J. E.; Cooper, M.; Coupland, M.; Cuffiani, M.; Dado, S.; Dallavalle, G. M.; de Jong, S.; Del Pozo, L. A.; Deng, H.; Dieckmann, A.; Dittmar, M.; Dixit, M. S.; Do Couto E Silva, E.; Duboscq, J. E.; Duchovni, E.; Duckeck, G.; Duerdoth, I. P.; Dumas, D. J. P.; Elcombe, P. A.; Estabrooks, P. G.; Etzion, E.; Evans, H. G.; Fabbri, F.; Fabbro, B.; Fierro, M.; Fincke-Keeler, M.; Fischer, H. M.; Fong, D. G.; Foucher, M.; Gaidot, A.; Gary, J. W.; Gascon, J.; Geddes, N. I.; Geich-Gimbel, C.; Gensler, S. W.; Gentit, F. X.; Giacomelli, G.; Giacomelli, R.; Gibson, V.; Gibson, W. R.; Gillies, J. D.; Goldberg, J.; Gingrich, D. M.; Goodrick, M. J.; Gorn, W.; Grandi, C.; Grant, F. C.; Hagemann, J.; Hanson, G. G.; Hansroul, M.; Hargrove, C. K.; Harrison, P. F.; Hart, J.; Hattersley, P. M.; Hauschild, M.; Hawkes, C. M.; Heflin, E.; Hemingway, R. J.; Herten, G.; Heuer, R. D.; Hill, J. C.; Hillier, S. J.; Hilse, T.; Hinshaw, D. A.; Hobbs, J. D.; Hobson, P. R.; Hochman, D.; Homer, R. J.; Honma, A. K.; Hughes-Jones, R. E.; Humbert, R.; Igo-Kemenes, P.; Ihssen, H.; Imrie, D. C.; Janissen, A. C.; Jawahery, A.; Jeffreys, P. W.; Jeremie, H.; Jimack, M.; Jones, M.; Jones, R. W. L.; Jovanovic, P.; Jui, C.; Karlen, D.; Kawagoe, K.; Kawamoto, T.; Keeler, R. K.; Kellogg, R. G.; Kennedy, B. 
W.; Kluth, S.; Kobayashi, T.; Koetke, D. S.; Kokott, T. P.; Komamiya, S.; Köpke, L.; Kral, J. F.; Kowalewski, R.; von Krogh, J.; Kroll, J.; Kuwano, M.; Kyberd, P.; Lafferty, G. D.; Lafoux, H.; Lahmann, R.; Lamarche, F.; Layter, J. G.; Leblanc, P.; Lee, A. M.; Lehto, M. H.; Lellouch, D.; Leroy, C.; Letts, J.; Levegrün, S.; Levinson, L.; Lloyd, S. L.; Loebinger, F. K.; Lorah, J. M.; Lorazo, B.; Losty, M. J.; Lou, X. C.; Ludwig, J.; Luig, A.; Mannelli, M.; Marcellini, S.; Markus, C.; Martin, A. J.; Martin, J. P.; Mashimo, T.; Mättig, P.; Maur, U.; McKenna, J.; McMahon, T. J.; McNutt, J. R.; Meijers, F.; Menszner, D.; Merritt, F. S.; Mes, H.; Michelini, A.; Middleton, R. P.; Mikenberg, G.; Mildenberger, J.; Miller, D. J.; Mir, R.; Mohr, W.; Moisan, C.; Montanari, A.; Mori, T.; Morii, M.; Müller, U.; Nellen, B.; Nguyen, H. H.; O'Neale, S. W.; Oakham, F. G.; Odorici, F.; Ogren, H. O.; Oram, C. J.; Oreglia, M. J.; Orito, S.; Pansart, J. P.; Panzer-Steindel, B.; Paschievici, P.; Patrick, G. N.; Paz-Jaoshvili, N.; Pearce, M. J.; Pfister, P.; Pilcher, J. E.; Pinfold, J.; Pitman, D.; Plane, D. E.; Poffenberger, P.; Poli, B.; Pouladdej, A.; Pritchard, T. W.; Przysiezniak, H.; Quast, G.; Redmond, M. W.; Rees, D. L.; Richards, G. E.; Robins, S. A.; Robinson, D.; Rollnik, A.; Roney, J. M.; Ros, E.; Rossberg, S.; Rossi, A. M.; Rosvick, M.; Routenburg, P.; Runge, K.; Runolfsson, O.; Rust, D. R.; Sasaki, M.; Sbarra, C.; Schaile, A. D.; Schaile, O.; Schappert, W.; Scharff-Hansen, P.; Schenk, P.; Schmitt, B.; von der Schmitt, H.; Schröder, M.; Schwick, C.; Schwiening, J.; Scott, W. G.; Settles, M.; Shears, T. G.; Shen, B. C.; Shepherd-Themistocleous, C. H.; Sherwood, P.; Siroli, G. P.; Skillman, A.; Skuja, A.; Smith, A. M.; Smith, T. J.; Snow, G. A.; Sobie, R.; Springer, R. W.; Sproston, M.; Stahl, A.; Stegmann, C.; Stephens, K.; Steuerer, J.; Ströhmer, R.; Strom, D.; Takeda, H.; Takeshita, T.; Tarem, S.; Tecchio, M.; Teixeira-Dias, P.; Tesch, N.; Thomson, M. 
A.; Torrente-Lujan, E.; Towers, S.; Transtromer, G.; Tresilian, N. J.; Tsukamoto, T.; Turner, M. F.; van den Plas, D.; van Kooten, R.; Vandalen, G. J.; Vasseur, G.; Virtue, C. J.; Wagner, A.; Wagner, D. L.; Wahl, C.; Ward, C. P.; Ward, D. R.; Watkins, P. M.; Watson, A. T.; Watson, N. K.; Weber, M.; Weber, P.; Wells, P. S.; Wermes, N.; Whalley, M. A.; Wilkens, B.; Wilson, G. W.; Wilson, J. A.; Winterer, V.-H.; Wlodek, T.; Wolf, G.; Wotton, S.; Wyatt, T. R.; Yaari, R.; Yeaman, A.; Yekutieli, G.; Yurko, M.; Zeuner, W.; Zorn, G. T.
1993-06-01
A sample of 2610 electron candidates and 2762 muon candidates identified in hadronic Z0 decays has been used to measure the average b hadron lifetime. These data were recorded with the OPAL detector during 1990 and 1991. Maximum likelihood fits to the distributions of the lepton impact parameters yield an average b hadron lifetime of τ_b = 1523 ± 34 ± 38 fs, where the first error is statistical and the second systematic. This result is a weighted average over the semileptonic branching fractions and production rates of the b hadrons produced in Z0 decays.
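The quoted lifetime is a weighted average over channels. The standard way to combine measurements with uncertainties is the inverse-variance-weighted mean, sketched below; the two input "measurements" are illustrative numbers, not the OPAL inputs:

```python
def weighted_mean(values, errors):
    """Inverse-variance weighted mean and its combined uncertainty.
    Each measurement is weighted by 1/sigma^2; the combined error is
    1/sqrt(sum of weights)."""
    weights = [1.0 / e**2 for e in errors]
    wsum = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / wsum
    return mean, wsum ** -0.5

# Illustrative: combine two hypothetical lifetime measurements (in fs).
mean, err = weighted_mean([1500.0, 1560.0], [40.0, 60.0])
```

The more precise measurement dominates the average, and the combined uncertainty is smaller than either input error, which is the point of weighting by inverse variance.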
Normal Organ Weights in Women: Part I-The Heart.
Molina, D Kimberley; DiMaio, Vincent J M
2015-09-01
Cardiac enlargement is a well-known independent risk factor for sudden cardiac death, though the definition of what constitutes cardiac enlargement is not universally established. A previous study was undertaken to establish a normal range for male hearts to address this issue; the present study was designed to address the same issue and determine normal cardiac weights in adult human females. A prospective study was undertaken of healthy females aged 18 to 35 years who died of sudden, traumatic causes. Cases were excluded if: there was a history of medical illness, including illicit drug use; prolonged medical treatment was performed; there was a prolonged period between the time of injury and death; body length and weight could not be accurately assessed; there was significant cardiac injury; or any illness or intoxication was identified after gross, microscopic, and toxicologic analysis, including evidence of systemic disease. A total of 102 cases met criteria for inclusion in the study during the approximately 10-year period of data collection from 2004 to 2014. The decedents had an average age of 24.4 years and ranged in length from 141 to 182 cm (56.4 to 72.8 in.) with an average length of 160 cm (64 in.). The weight ranged from 35.9 to 152 kg (79 to 334 lbs) with an average weight of 65.3 kg (143 lbs). The majority of the decedents (86%) died from either ballistic or blunt force (including craniocerebral) injuries. Overall, the heart weights ranged from 156 to 422 g with an average of 245 g and a standard deviation of 52 g. Regression analysis was performed to assess the relationship between heart weight and body weight, body length, and body mass index, respectively, and found insufficient associations to enable predictability. The authors, therefore, propose establishing a normal range for heart weight in women of 148 to 296 g. PMID:26153896
Very-Low-Calorie Diets and Sustained Weight Loss
Wim H. M. Saris
2001-01-01
Objective: To review the literature on very-low-calorie diets (VLCDs) and long-term weight-maintenance success in the treatment of obesity. Research Methods and Procedures: A literature search using the following keywords: VLCD, long-term weight maintenance, and dietary treatment of obesity. Results: VLCDs and low-calorie diets with an average intake between 400 and 800 kcal do not differ in body
The causal meaning of Fisher's average effect
Lee, James J.; Chow, Carson C.
, 1992; Edwards, 1994, 2002; Lessard, 1997; Okasha, 2008). In the discrete-time formulation of the FTNS by Falconer, can be reconciled if certain relationships between the genotype frequencies and non formulation in terms of causality; for example, the frequency-weighted mean of the average effects equaling
Experimental: the average size of the silver nanoparticles prepared by the method of Korgel
Braun, Paul
The average size of the silver nanoparticles, prepared by the method of Korgel, with approximate molecular weight of 900 000, was then added to 0.5 mg of these silver nanocrystals in a vial, and radically improved photochemical reactors [1-6]. Colloidal self-assembly has been suggested as an efficient
40 CFR 63.5710 - How do I demonstrate compliance using emissions averaging?
Code of Federal Regulations, 2010 CFR
2010-07-01
...Manufacturing Standards for Open Molding Resin and Gel Coat Operations § 63.5710...Weighted-average MACT model point value for production resin used in the past 12 months, kilograms per megagram. MR = Mass of production resin used in the past 12 months,...
Impact of Field of Study, College and Year on Calculation of Cumulative Grade Point Average
ERIC Educational Resources Information Center
Trail, Carla; Reiter, Harold I.; Bridge, Michelle; Stefanowska, Patricia; Schmuck, Marylou; Norman, Geoff
2008-01-01
A consistent finding from many reviews is that undergraduate Grade Point Average (uGPA) is a key predictor of academic success in medical school. Curiously, while uGPA has established predictive validity, little is known about its reliability. For a variety of reasons, medical schools use different weighting schemas to combine years of study.…
Average luminosity distance in inhomogeneous universes
Kostov, Valentin
2010-04-01
Using numerical ray tracing, the paper studies how the average distance modulus in an inhomogeneous universe differs from its homogeneous counterpart. The averaging is over all directions from a fixed observer, not over all possible observers (cosmic), and thus is more directly applicable to our observations. In contrast to previous studies, the averaging is exact, non-perturbative, and includes all non-linear effects. The inhomogeneous universes are represented by Swiss-cheese models containing random and simple cubic lattices of mass-compensated voids. The Earth observer is in the homogeneous cheese, which has an Einstein-de Sitter metric. For the first time, the averaging is widened to include the supernovas inside the voids by assuming the probability for supernova emission from any comoving volume is proportional to the rest mass in it. Voids aligned along a certain direction give rise to a distance modulus correction which increases with redshift and is caused by cumulative gravitational lensing. That correction is present even for small voids and depends on their density contrast, not on their radius. Averaging over all directions destroys the cumulative lensing correction even in a non-randomized simple cubic lattice of voids. At low redshifts, the average distance modulus correction does not vanish due to the peculiar velocities, despite the photon flux conservation argument. A formula for the maximal possible average correction as a function of redshift is derived and shown to be in excellent agreement with the numerical results. The formula applies to voids of any size that: (a) have approximately constant densities in their interior and walls; and (b) are not in a deep nonlinear regime. The average correction calculated in random and simple cubic void lattices is severely damped below the predicted maximal one after a single void diameter. That is traced to cancellations between the corrections from the fronts and backs of different voids.
The results obtained allow one to readily predict the redshift above which the direction-averaged fluctuation in the Hubble diagram falls below a required precision and suggest a method to extract the background Hubble constant from low redshift data without the need to correct for peculiar velocities.
Volume averaging in the quasispherical Szekeres model
NASA Astrophysics Data System (ADS)
Bolejko, Krzysztof
2009-07-01
This paper considers volume averaging in the quasispherical Szekeres model. Volume averaging became of considerable interest after it was shown that the volume acceleration calculated within the averaging framework can be positive even when the local expansion rate decelerates. This issue was intensively studied within spherically symmetric models. However, since our Universe is not spherically symmetric, a similar analysis is needed in non-symmetrical models. This paper presents the averaging analysis within the quasispherical Szekeres model, which is a non-symmetrical generalisation of the spherically symmetric Lemaître-Tolman family of models. In the quasispherical Szekeres model the distribution of mass over a surface of constant t and r has the form of a mass dipole superposed on a monopole. This paper shows that when calculating the volume acceleration, ä, within the Szekeres model, the dipole does not contribute to the final result; hence ä depends only on the monopole configuration. Thus, volume averaging within the Szekeres model leads to literally the same solutions as those obtained within the Lemaître-Tolman model.
Klos, Lori A; Greenleaf, Christy; Paly, Natalie; Kessler, Molly M; Shoemaker, Colby G; Suchla, Erika A
2015-01-01
A number of weight loss-related reality television programs chronicle the weight loss experience of obese individuals in a competitive context. Although highly popular, such shows may misrepresent the behavior change necessary to achieve substantial weight loss. A systematic, quantitative content analysis of Seasons 10-13 (n = 66 episodes) of The Biggest Loser was conducted to determine the amount of time and number of instances that diet, physical activity, or other weight management strategies were presented. The average episode was 78.8 ± 15.7 min in length. Approximately 33.3% of an episode, representing 1,121 segments, portrayed behavioral weight management-related content. Within the episode time devoted to weight management content, 85.2% was related to physical activity, 13.5% to diet, and 1.2% to other. Recent seasons of The Biggest Loser suggest that substantial weight loss is achieved primarily through physical activity, with little emphasis on modifying diet and eating behavior. Although physical activity can impart substantial metabolic health benefits, it may be difficult to create enough of an energy deficit to induce significant weight loss in the real world. Future studies should examine the weight loss attitudes and behaviors of obese individuals and health professionals after exposure to reality television shows focused on weight loss. PMID:25909247
Miller, Marshall Middleton
1956-01-01
[Garbled table-of-contents fragment from a poultry-science thesis: sections cover feed efficiency, egg production, body weights, egg weights, pauses in production, effect of position, cage versus floor performance, and age to 50 percent production; tables give statistical comparisons of White Leghorns, Inbred Hybrid No. 1, and Inbred Hybrid No. 2 for initial body weight, change in average body weight, and age to 50 percent production.]
Marital status and body weight, weight perception, and weight management among U.S. adults.
Klos, Lori A; Sobal, Jeffery
2013-12-01
Married individuals often have higher body weights than unmarried individuals, but it is unclear how marital roles affect body weight-related perceptions, desires, and behaviors. This study analyzed cross-sectional data for 4,089 adult men and 3,989 adult women using multinomial logistic regression to examine associations between marital status, perceived body weight, desired body weight, and weight management approach. Controlling for demographics and current weight, married or cohabiting women and divorced or separated women more often perceived themselves as overweight and desired to weigh less than women who had never married. Marital status was unrelated to men's weight perception and desired weight change. Marital status was also generally unrelated to weight management approach, except that divorced or separated women were more likely to have intentionally lost weight within the past year compared to never married women. Additionally, never married men were more likely to be attempting to prevent weight gain than married or cohabiting men and widowed men. Overall, married and formerly married women more often perceived themselves as overweight and desired a lower weight. Men's marital status was generally unassociated with weight-related perceptions, desires, and behaviors. Women's but not men's marital roles appear to influence their perceived and desired weight, suggesting that weight management interventions should be sensitive to both marital status and gender differences. PMID:24183145
[Garbled fragment of a grading rubric for a project summary and introduction, with performance levels including "Above Average" (5) and "Excellent"; criteria address the completeness and clarity of the argument and whether the reader is left satisfied.]
Average System Cost Methodology : Administrator's Record of Decision.
United States. Bonneville Power Administration.
1984-06-01
Significant features of the average system cost (ASC) methodology adopted are: retention of the jurisdictional approach, where retail rate orders of regulatory agencies provide primary data for computing the ASC for utilities participating in the residential exchange; inclusion of transmission costs; exclusion of construction work in progress; use of a utility's weighted cost of debt securities; exclusion of income taxes; simplification of procedures for separating subsidized generation and transmission accounts from other accounts; clarification of ASC methodology rules; a more generous review timetable for individual filings; phase-in of the reformed methodology; and a requirement that each exchanging utility file under the new methodology within 20 days of its implementation by the Federal Energy Regulatory Commission. Of the ten major participating utilities, the revised ASC will substantially affect only three. (PSB)
Body Weight Independently Affects Articular Cartilage Catabolism
Denning, W. Matt; Winward, Jason G.; Pardo, Michael Becker; Hopkins, J. Ty; Seeley, Matthew K.
2015-01-01
Although obesity is associated with osteoarthritis, it is unclear whether body weight (BW) independently affects articular cartilage catabolism (i.e., independently of the physiological factors that also accompany obesity). The primary purpose of this study was to evaluate the independent effect of BW on articular cartilage catabolism associated with walking. A secondary purpose was to determine how decreased BW influenced cardiovascular response due to walking. Twelve able-bodied subjects walked for 30 minutes on a lower-body positive pressure treadmill during three sessions: control (unadjusted BW), +40% BW, and -40% BW. Serum cartilage oligomeric matrix protein (COMP) was measured immediately before (baseline) and after, and 15 and 30 minutes after the walk. Heart rate (HR) and rating of perceived exertion (RPE) were measured every three minutes during the walk. Relative to baseline, average serum COMP concentration was 13% and 5% greater immediately after and 15 minutes after the walk, respectively. Immediately after the walk, serum COMP concentration was 14% greater for the +40% BW session than for the -40% BW session. HR and RPE were greater for the +40% BW session than for the other two sessions, but did not differ between the control and -40% BW sessions. BW independently influences acute articular cartilage catabolism and cardiovascular response due to walking: as BW increases, so do acute articular cartilage catabolism and cardiovascular response. These results indicate that lower-body positive pressure walking may benefit certain individuals by reducing the acute articular cartilage catabolism due to walking while maintaining cardiovascular response. Key points: Walking for 30 minutes with adjustments in body weight (normal, +40%, and -40% body weight) significantly influences articular cartilage catabolism, measured via serum COMP concentration. Compared to baseline levels, walking with +40% body weight and normal body weight both elicited significant increases in articular cartilage catabolism, while walking with -40% body weight did not. Cardiovascular response (HR and RPE) did not differ significantly between walking with normal body weight and walking with -40% body weight. PMID:25983577
Perceiving the average hue of color arrays
Webster, Jacquelyn; Kay, Paul; Webster, Michael A.
2014-01-01
The average of a color distribution has special significance for color coding (e.g. to estimate the illuminant) but how it depends on the visual representation (e.g. perceptual vs. cone-opponent) or nonlinearities (e.g. categorical coding) is unknown. We measured the perceived average of two colors shown alternated in spatial arrays. Observers adjusted the components until the average equaled a specified reference hue. Matches for red, blue-red, or yellow-green were consistent with the arithmetic mean chromaticity, while blue-green settings deviated toward blue. The settings show little evidence for categorical coding, and cannot be predicted from the scaled appearances of the individual components. PMID:24695184
Books Average Previous Decade of Economic Misery
Bentley, R. Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios
2014-01-01
For the 20th century since the Depression, we find a strong correlation between a ‘literary misery index’ derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159
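The core computation the abstract describes, a trailing moving average of one annual series correlated against another, is simple to sketch. Only the 11-year window comes from the abstract; the series below are invented for illustration:

```python
# Correlate a "literary misery" series with a trailing moving average of an
# economic misery index (inflation + unemployment). All data here are
# hypothetical; only the 11-year window is taken from the study.

def trailing_mean(xs, window):
    """Mean of the `window` values ending at each index (None until filled)."""
    out = []
    for i in range(len(xs)):
        if i + 1 < window:
            out.append(None)
        else:
            out.append(sum(xs[i + 1 - window:i + 1]) / window)
    return out

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical annual series (economic misery = inflation% + unemployment%).
economic = [5, 6, 8, 11, 13, 16, 17, 15, 12, 10, 9, 8, 7, 7, 6, 6]
literary = [0.2, 0.3, 0.4, 0.6, 0.7]

ma = trailing_mean(economic, 11)                      # 11-year trailing average
aligned = [m for m in ma if m is not None][-len(literary):]
r = pearson_r(aligned, literary)                      # goodness of fit
```

In the study, the window length would be varied and the peak of r located (reported at 11 years).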
Matrix averages relating to the Ginibre ensembles
Peter J. Forrester; Eric M. Rains
2009-07-02
The theory of zonal polynomials is used to compute the average of a Schur polynomial of argument $AX$, where $A$ is a fixed matrix and $X$ is from the real Ginibre ensemble. This generalizes a recent result of Sommers and Khoruzhenko [J. Phys. A {\bf 42} (2009), 222002], and furthermore allows analogous results to be obtained for the complex and real quaternion Ginibre ensembles. As applications, the positive integer moments of the general variance Ginibre ensembles are computed in terms of generalized hypergeometric functions; these are written in terms of averages over matrices of the same size as the moment to give duality formulas, and the averages of the power sums of the eigenvalues are expressed as finite sums of zonal polynomials.
Exploiting scale dependence in cosmological averaging
Mattsson, Teppo; Ronkainen, Maria E-mail: maria.ronkainen@helsinki.fi
2008-02-15
We study the role of scale dependence in the Buchert averaging method, using the flat Lemaître-Tolman-Bondi model as a testing ground. Within this model, a single averaging scale gives predictions that are too coarse, but by replacing it with the distance of the objects R(z) for each redshift z, we find an O(1%) precision at z < 2 in the averaged luminosity and angular diameter distances compared to their exact expressions. At low redshifts, we show the improvement for generic inhomogeneity profiles, and our numerical computations further verify it up to redshifts z ≈ 2. At higher redshifts, the method breaks down due to its inability to capture the time evolution of the inhomogeneities. We also demonstrate that the running smoothing scale R(z) can mimic acceleration, suggesting that it could be at least as important as the backreaction in explaining dark energy as an inhomogeneity-induced illusion.
NASA Astrophysics Data System (ADS)
Soltanzadeh, I.; Azadi, M.; Vakili, G. A.
2011-07-01
Using Bayesian Model Averaging (BMA), an attempt was made to obtain calibrated probabilistic numerical forecasts of 2-m temperature over Iran. The ensemble employs three limited-area models (WRF, MM5 and HRM), with WRF used in five different configurations. Initial and boundary conditions for MM5 and WRF are obtained from the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS), and for HRM the initial and boundary conditions come from analysis of the Global Model Europe (GME) of the German Weather Service. The resulting ensemble of seven members was run for a period of 6 months (from December 2008 to May 2009) over Iran. The 48-h raw ensemble outputs were calibrated using the BMA technique for 120 days, using a 40-day training sample of forecasts and corresponding verification data. The calibrated probabilistic forecasts were assessed using rank histograms and attribute diagrams. Results showed that application of BMA improved the reliability of the raw ensemble. Using the weighted ensemble mean forecast as a deterministic forecast, it was found that the deterministic-style BMA forecasts usually performed better than the best member's deterministic forecast.
Raboisson, Didier; Dervillé, Marie; Herman, Nicolas; Cahuzac, Eric; Sans, Pierre; Allaire, Gilles
2012-08-01
Mastitis is a multifactorial disease and the most costly dairy production issue. In spite of extensive literature on udder-health risk factors, the effects of metabolic diseases, farmers' competencies and livestock farming system on somatic cell count (SCC) are sparsely described. Herd-level or territorial-level factors affecting the monthly composite milk weighted mean cow SCC (CMSCC) were analysed with a linear mixed effect model. The average CMSCC was 266,000 cells/ml. Half of the herds had CMSCC >300,000 cells/ml for 2-6 months a year, and 15% of herds for more than 7 months a year. CMSCC was positively associated with the number of cows, having a beef or fattening herd in addition to the dairy herd, the monthly average days in milk, the yearly age at first calving, the yearly proportion of purchased cows and the yearly culling rate. Moreover, a positive association is reported between CMSCC and the monthly proportion of cows probably with subacute ruminal acidosis (fat percentage minus protein percentage ≤0·30%, for Holstein) and negative energy balance (protein to fat ratio ≤0·66, for Holstein), the yearly average calving interval, having at least one dead cow and the mean monthly temperature. The association was negative for a predominant breed other than Holstein, the monthly milk production, the yearly dry-off period length, the monthly first-calving cow proportion, having an autumn calving peak, being a Good Breeding Practices member, the monthly number of days with rain, the altitude and the territorial cattle density. CMSCC varied widely among the 11 dairy production areas. In conclusion, this study showed the average CMSCC for French dairy cows, compared with international results. Moreover, it quantified the contribution of several factors to CMSCC, in particular metabolic diseases and the farm environment. PMID:22687283
Information filtering via weighted heat conduction algorithm
NASA Astrophysics Data System (ADS)
Liu, Jian-Guo; Guo, Qiang; Zhang, Yi-Cheng
2011-06-01
In this paper, by taking into account the effects of user and object correlations on a heat conduction (HC) algorithm, a weighted heat conduction (WHC) algorithm is presented. We argue that the edge weight of the user-object bipartite network should be embedded into the HC algorithm to measure the object similarity. The numerical results indicate that both accuracy and diversity could be improved greatly compared with the standard HC algorithm, with the optimal values reached simultaneously. On the Movielens and Netflix datasets, the algorithmic accuracy, measured by the average ranking score, can be improved by 39.7% and 56.1% in the optimal case, respectively, and the diversity could reach 0.9587 and 0.9317 when the recommendation list length equals 5. Further statistical analysis indicates that, in the optimal case, the distributions of the edge weight are changed to the Poisson form, which may be the reason why the HC algorithm's performance could be improved. This work highlights the effect of edge weight on personalized recommendation, which may be an important factor affecting personalized recommendation performance.
An improved moving average technical trading rule
NASA Astrophysics Data System (ADS)
Papailias, Fotis; Thomakos, Dimitrios D.
2015-06-01
This paper proposes a modified version of the widely used price and moving average cross-over trading strategies. The suggested approach (presented in its 'long only' version) is a combination of cross-over 'buy' signals and a dynamic threshold value which acts as a dynamic trailing stop. The trading behaviour and performance from this modified strategy are different from the standard approach with results showing that, on average, the proposed modification increases the cumulative return and the Sharpe ratio of the investor while exhibiting smaller maximum drawdown and smaller drawdown duration than the standard strategy.
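The 'long only' logic described above, a cross-over buy signal paired with a dynamic trailing-stop exit, can be sketched as follows. The specific stop rule used here (exit when price falls a fixed fraction below the highest price since entry) is an illustrative assumption, not the paper's exact threshold specification:

```python
# Minimal long-only sketch: enter on a price/moving-average cross-over "buy"
# signal, exit when price drops below a dynamic threshold that trails the
# highest price reached since entry. Stop rule is a simplifying assumption.

def sma(prices, window):
    """Simple moving average; None until the window is filled."""
    return [sum(prices[i + 1 - window:i + 1]) / window if i + 1 >= window else None
            for i in range(len(prices))]

def run_strategy(prices, window=3, stop_frac=0.05):
    """Return the list of per-trade returns for the sketched strategy."""
    ma = sma(prices, window)
    in_pos, entry, peak, trades = False, 0.0, 0.0, []
    for p, m in zip(prices, ma):
        if m is None:
            continue
        if not in_pos and p > m:              # cross-over buy signal
            in_pos, entry, peak = True, p, p
        elif in_pos:
            peak = max(peak, p)               # trailing high since entry
            if p < peak * (1 - stop_frac):    # dynamic threshold exit
                trades.append(p / entry - 1)
                in_pos = False
    return trades
```

A full evaluation in the paper's spirit would then compare cumulative return, Sharpe ratio, and drawdown statistics against the standard cross-over strategy.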
A singularity theorem based on spatial averages
José M. M. Senovilla
2007-05-06
Inspired by Raychaudhuri's work, and using the equation named after him as a basic ingredient, a new singularity theorem is proved. Open non-rotating everywhere expanding universes with non-vanishing spatial average of the matter variables are severely geodesically incomplete to the past. Another way of stating the result is that, under the same conditions, any singularity-free model must have a vanishing spatial average of the energy density (and other physical variables). This is very satisfactory and provides a clear decisive difference between singular and non-singular cosmologies.
Polarized electron beams at milliampere average current
Poelker, Matthew [JLAB]
2013-11-01
This contribution describes some of the challenges associated with developing a polarized electron source capable of uninterrupted days-long operation at milliampere average beam current with polarization greater than 80%. Challenges will be presented in the context of assessing the required level of extrapolation beyond the performance of today's CEBAF polarized source operating at ~200 µA average current. Estimates of performance at higher current will be based on hours-long demonstrations at 1 and 4 mA. Particular attention will be paid to beam-related lifetime-limiting mechanisms, and strategies to construct a photogun that operates reliably at bias voltages > 350 kV.
Combination approach of highly conflicting evidence based on weighted distance of evidence
NASA Astrophysics Data System (ADS)
Liu, Zhicheng; He, Jiazhou; Qiao, Hui
2013-10-01
In order to fuse highly conflicting evidence effectively, a novel combination method based on the weighted distance of evidence is proposed, combining the ideas of Murphy's averaging method and Deng's weighted averaging method. Firstly, the essentiality of each element in the frame of discernment is given following Murphy's idea. Secondly, the weighted averaging distance between any two bodies of evidence (BOEs) is calculated under the modified City Block distance norm; from this, the degree to which each piece of evidence is supported by the others can be obtained. Thirdly, the normalized total support degree of each piece of evidence is used as the weight of the BOEs, and a new weighted-average BOE is obtained. Finally, the information fusion is realized using Dempster's rule of combination. Simulation results show that the proposed method can deal with highly conflicting evidence with better convergence performance, and it can also recognize the target more effectively and quickly.
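The pipeline enumerated in the abstract above (pairwise distances, support degrees, normalized weights, weighted-average BOE, repeated Dempster combination) can be sketched with some simplifications: masses are defined on singleton hypotheses only, the distance is a plain City Block (L1) norm rather than the paper's modified version, and the averaged BOE is fused with itself n-1 times as in Murphy's method:

```python
# Sketch of weighted-distance evidence combination. Simplifying assumptions:
# singleton hypotheses only, plain L1 (City Block) distance, similarity
# taken as 1/(1 + distance). Not the paper's exact formulation.

def dempster(m1, m2):
    """Dempster's rule of combination for masses on singleton hypotheses."""
    hyps = set(m1) | set(m2)
    raw = {h: m1.get(h, 0.0) * m2.get(h, 0.0) for h in hyps}
    k = sum(raw.values())                    # normalization (1 - conflict)
    return {h: v / k for h, v in raw.items()}

def weighted_average_fusion(bodies):
    """Fuse a list of bodies of evidence (dicts hypothesis -> mass)."""
    n = len(bodies)
    hyps = set().union(*bodies)
    # City Block distance between two BOEs
    def dist(a, b):
        return sum(abs(a.get(h, 0.0) - b.get(h, 0.0)) for h in hyps)
    # support of each BOE = sum of similarities to all the other BOEs
    support = [sum(1.0 / (1.0 + dist(b, o)) for o in bodies if o is not b)
               for b in bodies]
    weights = [s / sum(support) for s in support]   # normalized weights
    avg = {h: sum(w * b.get(h, 0.0) for w, b in zip(weights, bodies))
           for h in hyps}
    fused = avg
    for _ in range(n - 1):                   # combine the average n-1 times
        fused = dempster(fused, avg)
    return fused
```

With two BOEs strongly supporting hypothesis A and one conflicting BOE supporting B, the fused mass concentrates on A, which is the behaviour the method is designed to achieve.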
Grappling with Weight Cutting. The Wisconsin Wrestling Minimum Weight Project.
ERIC Educational Resources Information Center
Oppliger, Robert A.; And Others
1995-01-01
In response to a new state rule, the Wisconsin Minimum Weight Project curtails weight cutting among high school wrestlers. The project uses skinfold testing to determine a minimum competitive weight and nutrition education to help wrestlers diet safely. It serves as a model for other states and other sports. (Author/SM)
Estimates of Random Error in Satellite Rainfall Averages
NASA Technical Reports Server (NTRS)
Bell, Thomas L.; Kundu, Prasun K.
2003-01-01
Satellite rain estimates are most accurate when obtained with microwave instruments on low earth-orbiting satellites. Estimation of daily or monthly total areal rainfall, typically of interest to hydrologists and climate researchers, is made difficult, however, by the relatively poor coverage generally available from such satellites. Intermittent coverage by the satellites leads to random "sampling error" in the satellite products. The inexact information about hydrometeors inferred from microwave data also leads to random "retrieval errors" in the rain estimates. In this talk we will review approaches to quantitative estimation of the sampling error in area/time averages of satellite rain retrievals using ground-based observations, and methods of estimating rms random error, both sampling and retrieval, in averages using satellite measurements themselves.
Bosomworth, N. John
2012-01-01
Objective: To explore the reasons why long-term weight loss is seldom achieved and to evaluate the consequences of various weight trajectories, including stability, loss, and gain. Quality of evidence: Studies evaluating population weight metrics were mainly observational. Level I evidence was available to evaluate the influence of weight interventions on mortality and quality of life. Main message: Sustained weight loss is achieved by a small percentage of those intending to lose weight. Mortality is lowest in the high-normal and overweight range. The safest body-size trajectory is stable weight with optimization of physical and metabolic fitness. With weight loss there is evidence for lower mortality in those with obesity-related comorbidities. There is also evidence for improved health-related quality of life in obese individuals who lose weight. Weight loss in the healthy obese, however, is associated with increased mortality. Conclusion: Weight loss is advisable only for those with obesity-related comorbidities. Healthy obese people wishing to lose weight should be informed that there might be associated risks. A strategy that leads to a stable body mass index with optimized physical and metabolic fitness at any size is the safest weight intervention option. PMID:22586192
Classification image weights and internal noise level estimation
NASA Technical Reports Server (NTRS)
Ahumada, Albert J Jr
2002-01-01
For the linear discrimination of two stimuli in white Gaussian noise in the presence of internal noise, a method is described for estimating linear classification weights from the sum of noise images segregated by stimulus and response. The recommended method for combining the two response images for the same stimulus is to difference the average images. Weights are derived for combining images over stimuli and observers. Methods for estimating the level of internal noise are described with emphasis on the case of repeated presentations of the same noise sample. Simple tests for particular hypotheses about the weights are shown based on observer agreement with a noiseless version of the hypothesis.
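The recommended estimation step described above, averaging the noise images segregated by response and differencing the averages, can be sketched directly. Images are flat pixel lists, the data are fabricated, and the equal-weight combination across stimuli is a simplifying assumption (the paper derives the appropriate weights):

```python
# Sketch of classification-image weight estimation: per stimulus, difference
# the average noise images for the two responses. Data are fabricated; the
# equal-weight combination across stimuli is an illustrative simplification.

def mean_image(images):
    """Pixelwise mean of a list of equal-length flat images."""
    n = len(images)
    return [sum(px) / n for px in zip(*images)]

def classification_image(noise_by_response):
    """noise_by_response: {'resp_a': [images...], 'resp_b': [images...]}."""
    ma = mean_image(noise_by_response['resp_a'])
    mb = mean_image(noise_by_response['resp_b'])
    return [a - b for a, b in zip(ma, mb)]   # difference of average images

def combine_over_stimuli(per_stimulus_weights):
    """Equal-weight combination of per-stimulus weight maps (assumption)."""
    return mean_image(per_stimulus_weights)
```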
STANDARD ATOMIC WEIGHT VALUES FOR THE MONONUCLIDIC ELEMENTS - 2005.
HOLDEN, N.E.
2005-08-13
When the policy for determining the atomic weight values for the mononuclidic elements was changed some decades ago, it was argued that new atomic mass tables would only be produced about once a decade. Since 1977, the average has been once every nine years, which is consistent with that early estimate. This report summarizes the changes over the years in the atomic weight values of the mononuclidic elements. It applies the Commission's technical rules to the latest atomic mass table and recommends changes in the values of the Standard Atomic Weights for eleven of the twenty-two mononuclidic elements in the Table of Standard Atomic Weights (TSAW).
Niv, Noosha; Cohen, Amy N.; Hamilton, Alison; Reist, Christopher; Young, Alexander S.
2013-01-01
The objective of this study was to examine the effectiveness of a weight loss program for individuals with schizophrenia in usual care. The study included 146 adults with schizophrenia from two mental health clinics of the Department of Veterans Affairs. The 109 individuals who were overweight or obese were offered a 16-week, psychosocial, weight management program. Weight and BMI were assessed at baseline, 1 year later and at each treatment session. Only 51% of those who were overweight or obese chose to enroll in the weight management program. Participants attended an average of 6.7 treatment sessions, lost an average of 2.4 pounds and had an average BMI decrease of 0.3. There was no significant change in weight or BMI compared to the control group. Intervention strategies that both improve utilization and yield greater weight loss need to be developed. PMID:22430566
Distracted Eating and Weight Gain
MedlinePLUS Videos and Cool Tools
... distraction can lead to weight gain.
HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS.
BEN-ZVI, ILAN; DAYRAN, D.; LITVINENKO, V.
2005-08-21
Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is very high average power generation, for instance FELs with average power of 100 kW or more. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERLs) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL-based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier are simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier being designed to operate on the 0.5 ampere Energy Recovery Linac under construction at Brookhaven National Laboratory's Collider-Accelerator Department.
Science of NHL Hockey: Statistics & Averages
NSDL National Science Digital Library
NBC Learn
2010-10-07
Being a top goalie in the NHL takes more than quick reflexes and nerves of steel; it also requires a firm grip on the numbers, namely the key averages and statistics of goaltending. "Science of NHL Hockey" is a 10-part video series produced in partnership with the National Science Foundation and the National Hockey League.
Averaging Theory for Non-linear Oscillators
Aritra Sinha
2015-06-24
I first discuss how averaging theory can be an effective tool for solving weakly non-linear oscillators. I then apply this technique to a Van der Pol oscillator and extend the stability criterion of the Van der Pol oscillator to any integer n (odd or even).
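As a concrete instance of the technique the abstract describes, first-order averaging applied to the weakly non-linear Van der Pol oscillator yields a closed-form amplitude equation. This is the standard textbook result, not an equation taken from the paper itself:

```latex
% Van der Pol equation with small parameter \epsilon:
%   \ddot{x} - \epsilon (1 - x^2)\dot{x} + x = 0.
% With the ansatz x = r\cos(t + \varphi), averaging the slow-flow
% equations over one period gives the amplitude/phase equations
\dot{r} = \frac{\epsilon}{2}\, r \left( 1 - \frac{r^2}{4} \right),
\qquad \dot{\varphi} = 0,
% so for any small \epsilon > 0 the amplitude relaxes to the stable
% limit cycle r = 2, while the phase drifts only at higher order.
```

The fixed point r = 2 of the averaged amplitude equation is the textbook route to the Van der Pol limit cycle, and the same averaging machinery underlies the stability analysis the abstract extends.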
A Functional Measurement Study on Averaging Numerosity
ERIC Educational Resources Information Center
Tira, Michael D.; Tagliabue, Mariaelena; Vidotto, Giulio
2014-01-01
In two experiments, participants judged the average numerosity between two sequentially presented dot patterns to perform an approximate arithmetic task. In Experiment 1, the response was given on a 0-20 numerical scale (categorical scaling), and in Experiment 2, the response was given by the production of a dot pattern of the desired numerosity…
Initial Conditions in the Averaging Cognitive Model
ERIC Educational Resources Information Center
Noventa, S.; Massidda, D.; Vidotto, G.
2010-01-01
The initial state parameters s_0 and w_0 are intricate issues of the averaging cognitive models in Information Integration Theory. Usually they are defined as a measure of prior information (Anderson, 1981; 1982), but there are no general rules to deal with them. In fact, there is no agreement as to their treatment except in…
Average configuration of the induced venus magnetotail
McComas, D.J.; Spence, H.E.; Russell, C.T.
1985-01-01
In this paper we discuss the interaction of the solar wind flow with Venus and describe the morphology of magnetic field line draping in the Venus magnetotail. In particular, we describe the importance of the interplanetary magnetic field (IMF) X-component in controlling the configuration of field draping in this induced magnetotail, and using the results of a recently developed technique, we examine the average magnetic configuration of this magnetotail. The derived J x B forces must balance the average, steady state acceleration of, and pressure gradients in, the tail plasma. From this relation the average tail plasma velocity, lobe and current sheet densities, and average ion temperature have been derived. In this study we extend these results by making a connection between the derived consistent plasma flow speed and density, and the observational energy/charge range and sensitivity of the Pioneer Venus Orbiter (PVO) plasma analyzer, and demonstrate that if the tail is principally composed of O+, the bulk of the plasma should not be observable much of the time that the PVO is within the tail. Finally, we examine the importance of solar wind slowing upstream of the obstacle and its implications for the temperature of pick-up planetary ions, compare the derived ion temperatures with their theoretical maximum values, and discuss the implications of this process for comets and AMPTE-type releases.
Glenzinski, D.; /Fermilab
2008-01-01
This paper summarizes a talk given at the Top2008 Workshop at La Biodola, Isola d'Elba, Italy. The status of the world-average top-quark mass is discussed. Some comments about the challenges facing the experiments in order to further improve the precision are offered.
Analytics for geometric average trigger reset options
Lyuu, Yuh-Dauh
…, respectively) in stock price. This makes a reset option useful for portfolio insurance. … (1) to mitigate the possibility of stock price manipulation, especially for thinly traded … in Taiwan, issued two average reset options on the Taipei Stock Exchange in 1999. Standard reset options…
Measuring Time-Averaged Blood Pressure
NASA Technical Reports Server (NTRS)
Rothman, Neil S.
1988-01-01
Device measures time-averaged component of absolute blood pressure in artery. Includes compliant cuff around artery and external monitoring unit. Ceramic construction in monitoring unit suppresses ebb and flow of pressure-transmitting fluid in sensor chamber. Transducer measures only static component of blood pressure.
Average Annual Rainfall over the Globe
ERIC Educational Resources Information Center
Agrawal, D. C.
2013-01-01
The atmospheric recycling of water is a very important phenomenon on the globe because it not only refreshes the water but it also redistributes it over land and oceans/rivers/lakes throughout the globe. This is made possible by the solar energy intercepted by the Earth. The half of the globe facing the Sun, on the average, intercepts 1.74 ×…
Why Johnny Can Be Average Today.
ERIC Educational Resources Information Center
Sturrock, Alan
1997-01-01
During a (hypothetical) phone interview with a university researcher, an elementary principal reminisced about a lifetime of reading groups with unmemorable names, medium-paced math problems, patchworked social studies/science lessons, and totally "average" IQ and batting scores. The researcher hung up at the mention of bell-curved assembly lines…
Average Values of Mean Squares in Factorials
Jerome Cornfield; John W. Tukey
1956-01-01
The assumptions appropriate to the application of analysis of variance to specific examples, and the effects of these assumptions on the resulting interpretations, are today a matter of very active discussion. Formulas for average values of mean squares play a central role in this problem, as do assumptions about interactions. This paper presents formulas for crossed (and, incidentally, for nested
Average Annual Rainfall Over the Globe
NASA Astrophysics Data System (ADS)
Agrawal, D. C.
2013-12-01
The atmospheric recycling of water is a very important phenomenon on the globe because it not only refreshes the water but also redistributes it over land and oceans/rivers/lakes throughout the globe. This is made possible by the solar energy intercepted by the Earth. The half of the globe facing the Sun, on average, intercepts 1.74 × 10^17 J of solar radiation per second, and this is divided over various channels as given in Table 1. It keeps our planet warm and maintains its average temperature of 288 K with the help of the atmosphere in such a way that life can survive. It also recycles the water in the oceans/rivers/lakes by initial evaporation and subsequent precipitation; the average annual rainfall over the globe is around one meter. According to M. King Hubbert, the amount of solar power going into the evaporation and precipitation channel is 4.0 × 10^16 W. Students can verify the value of average annual rainfall over the globe by utilizing this part of solar energy. This activity is described in the next section.
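The verification exercise described above is a short calculation. In the sketch below, only the 4.0 × 10^16 W evaporation/precipitation power comes from the abstract; the latent heat, Earth surface area, and water density are standard textbook values we assume for illustration.

```python
# Back-of-envelope check of the ~1 m global average annual rainfall.
# Only the evaporation/precipitation power is from the abstract; the
# other constants are standard assumed values.
SOLAR_POWER_EVAP = 4.0e16      # W, Hubbert's evaporation/precipitation channel
LATENT_HEAT = 2.45e6           # J/kg to evaporate water near ambient T (assumed)
EARTH_SURFACE_AREA = 5.1e14    # m^2, whole globe (assumed)
WATER_DENSITY = 1000.0         # kg/m^3
SECONDS_PER_YEAR = 3.156e7

energy_per_year = SOLAR_POWER_EVAP * SECONDS_PER_YEAR    # J spent on evaporation
mass_evaporated = energy_per_year / LATENT_HEAT          # kg of water cycled per year
depth_m = mass_evaporated / WATER_DENSITY / EARTH_SURFACE_AREA  # average rain depth

print(f"Average annual rainfall ~ {depth_m:.2f} m")
```

The result lands close to the one-meter figure quoted in the abstract.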
Uncertainty of GHz-band Whole-body Average SARs in Infants based on their Kaup Indices
NASA Astrophysics Data System (ADS)
Miwa, Hironobu; Hirata, Akimasa; Fujiwara, Osamu; Nagaoka, Tomoaki; Watanabe, Soichi
We previously showed that a strong correlation exists between the absorption cross section and the body surface area of a human for 0.3-2 GHz far-field exposure, and proposed a formula for estimating whole-body-average specific absorption rates (WBA-SARs) in terms of height and weight. In this study, to evaluate variability in the WBA-SARs of infants based on their physique, we derived a new formula including the Kaup indices of infants, which are used to check their growth, and thereby estimated the WBA-SARs in infants from 0 months to three years of age. As a result, we found that for the same height/weight, the smaller the Kaup index, the larger the WBA-SAR, and that the variability in the WBA-SARs is around 15% at a given age. To validate these findings, using the FDTD method, we simulated the GHz-band WBA-SARs in numerical human models corresponding to infants at ages of 0, 1, 3, 6, and 9 months, which were obtained by scaling down the anatomically based Japanese three-year-old child model developed by NICT (National Institute of Information and Communications Technology). Results show that the FDTD-simulated WBA-SARs are smaller by 20% compared to those estimated for infants having the median height and the Kaup index at the 0.5 percentile, which provide conservative WBA-SARs.
Predictive RANS simulations via Bayesian Model-Scenario Averaging
Edeling, W.N.; Cinnella, P.; Dwight, R.P.
2014-10-15
The turbulence closure model is the dominant source of error in most Reynolds-Averaged Navier–Stokes simulations, yet no reliable estimators for this error component currently exist. Here we develop a stochastic, a posteriori error estimate, calibrated to specific classes of flow. It is based on variability in model closure coefficients across multiple flow scenarios, for multiple closure models. The variability is estimated using Bayesian calibration against experimental data for each scenario, and Bayesian Model-Scenario Averaging (BMSA) is used to collate the resulting posteriors, to obtain a stochastic estimate of a Quantity of Interest (QoI) in an unmeasured (prediction) scenario. The scenario probabilities in BMSA are chosen using a sensor which automatically weights those scenarios in the calibration set which are similar to the prediction scenario. The methodology is applied to the class of turbulent boundary layers subject to various pressure gradients. For all considered prediction scenarios the standard deviation of the stochastic estimate is consistent with the measurement ground truth. Furthermore, the mean of the estimate is more consistently accurate than the individual model predictions.
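At its core, the collation step described above is a posterior-weighted mixture of per-model, per-scenario predictive distributions. The sketch below shows only that generic mixture identity with made-up numbers; it is not the paper's implementation, and the function and variable names are our own.

```python
def bmsa_mixture(predictions, weights):
    """Collate per-model/scenario predictive means and variances into one
    stochastic estimate of the QoI via the standard mixture identity:
    the mixture variance adds the between-model spread (m - mean)^2 to
    the within-model variances. Illustrative sketch only.

    predictions: list of (mean, variance), one per model/scenario.
    weights: model/scenario probabilities summing to 1.
    """
    mean = sum(w * m for (m, _v), w in zip(predictions, weights))
    var = sum(w * (v + (m - mean) ** 2) for (m, v), w in zip(predictions, weights))
    return mean, var

# Two hypothetical closure models predicting a skin-friction QoI:
print(bmsa_mixture([(1.0, 0.04), (3.0, 0.09)], [0.5, 0.5]))
```

A wide between-model spread inflates the mixture variance even when each model is individually confident, which is exactly why the collated estimate is more honest than any single model's error bar.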
Predictive RANS simulations via Bayesian Model-Scenario Averaging
NASA Astrophysics Data System (ADS)
Edeling, W. N.; Cinnella, P.; Dwight, R. P.
2014-10-01
The turbulence closure model is the dominant source of error in most Reynolds-Averaged Navier-Stokes simulations, yet no reliable estimators for this error component currently exist. Here we develop a stochastic, a posteriori error estimate, calibrated to specific classes of flow. It is based on variability in model closure coefficients across multiple flow scenarios, for multiple closure models. The variability is estimated using Bayesian calibration against experimental data for each scenario, and Bayesian Model-Scenario Averaging (BMSA) is used to collate the resulting posteriors, to obtain a stochastic estimate of a Quantity of Interest (QoI) in an unmeasured (prediction) scenario. The scenario probabilities in BMSA are chosen using a sensor which automatically weights those scenarios in the calibration set which are similar to the prediction scenario. The methodology is applied to the class of turbulent boundary layers subject to various pressure gradients. For all considered prediction scenarios the standard deviation of the stochastic estimate is consistent with the measurement ground truth. Furthermore, the mean of the estimate is more consistently accurate than the individual model predictions.
Averaging sensors technique for active vibration control applications
NASA Astrophysics Data System (ADS)
Cinquemani, S.; Cazzulani, G.; Braghin, F.; Resta, F.
2013-04-01
Fiber Bragg Grating (FBG) sensors have great potential in active vibration control of smart structures thanks to their small transversal size and the possibility of arranging many sensors in an array. The paper addresses the possibility of reducing vibration in structures by using distributed sensors embedded in carbon fiber structures through the so-called sensors-averaging technique. This method provides a properly weighted average of the outputs of a distributed array of sensors, generating spatial filters on a broad range of undesired resonance modes without adversely affecting phase and amplitude. This approach combines the positive sides of decentralized control techniques, as the control forces applied to the system are independent of one another, with the centralized controls' ability to exploit the information from all the sensors. The ability to easily manage this information allows an efficient modal controller to be synthesized. Furthermore, it enables evaluation of the stability of the control, the effects of spillover, and the consequent effectiveness in reducing vibration. Theoretical aspects are supported by experimental applications on a large flexible system composed of a thin cantilever beam with 30 longitudinal FBG sensors and 6 piezoelectric actuators (PZT).
Acoustic source localization in a reverberant environment by average beamforming
NASA Astrophysics Data System (ADS)
Castellini, Paolo; Sassaroli, Andrea
2010-04-01
This paper presents a strategy for applying acoustic beamforming to locate noise sources in a reverberant field. Under the hypothesis of stationary phenomena, the average amplitude and standard deviation of the beamforming output, obtained from different array locations, are calculated. The standard deviation, normalized by its maximum value, can be used to weight the beamforming output, so as to enhance the source contribution, which is space invariant, and to attenuate the mirror and sidelobe peaks, whose spatial positions change as the array position changes. The availability of microphone signals acquired while moving the array to different positions also allows a super-array to be obtained, i.e. an array obtained by considering all the data as coming from a single array. In this way, the capability of the averaging procedure to reject mirror effects and disturbances is combined with high-resolution beamforming for application in reverberant fields. These improvements extend over the entire frequency range, since the procedure is not greatly affected by signal wavelength.
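The weighting idea described above can be sketched as follows. This is our own illustration, not the authors' code; the array shapes and the simple (1 − normalized standard deviation) weight are assumptions.

```python
import numpy as np

def averaged_beamforming_map(maps):
    """Combine beamforming power maps acquired from different array positions.

    maps: array of shape (n_positions, ny, nx). The true source is space
    invariant across array positions, while mirror images and sidelobes
    move; pixels with high across-position standard deviation are
    therefore attenuated, and stable pixels are kept.
    """
    maps = np.asarray(maps, dtype=float)
    mean_map = maps.mean(axis=0)        # average amplitude per pixel
    norm_std = maps.std(axis=0)
    norm_std /= norm_std.max()          # standard deviation, normalized by its maximum
    return mean_map * (1.0 - norm_std)  # down-weight position-dependent peaks

# Toy example: a fixed source (first pixel) plus a mirror that moves
# with the array position; the averaged map suppresses the mirror.
maps = [[[1.0, 1.0, 0.0, 0.0]],
        [[1.0, 0.0, 1.0, 0.0]],
        [[1.0, 0.0, 0.0, 1.0]]]
print(averaged_beamforming_map(maps))
```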
Towards more practical average bounds on supervised learning.
Gu, H; Takahashi, H
1996-01-01
In this paper, we describe a method which enables us to study the average generalization performance of learning directly via hypothesis testing inequalities. The resulting theory provides a unified viewpoint of average-case learning curves of concept learning and regression in realistic learning problems not necessarily within the Bayesian framework. The advantages of the theory are that it alleviates the practical pessimism frequently claimed for the results of the Vapnik-Chervonenkis (VC) theory and the like, and provides general insights into generalization. Moreover, the bounds on learning curves are directly related to the number of adjustable system weights. Although the theory is based on an approximation assumption, and cannot apply to the worst-case learning setting, the precondition of the assumption is mild, and the approximation itself is only a sufficient condition for the validity of the theory. We illustrate the results with numerical simulations, and apply the theory to examining the generalization ability of combinations of neural networks. PMID:18263490
Hypnotherapy in Weight Loss Treatment.
ERIC Educational Resources Information Center
Cochrane, Gordon; Friesen, John
1986-01-01
Investigated effects of hypnosis as a treatment for weight loss among women. The primary hypothesis that hypnosis is an effective treatment for weight loss was confirmed, but seven concomitant variables and the use of audiotapes were not significant contributors to weight loss. (Author/ABB)
Fungible Weights in Multiple Regression
ERIC Educational Resources Information Center
Waller, Niels G.
2008-01-01
Every set of alternate weights (i.e., nonleast squares weights) in a multiple regression analysis with three or more predictors is associated with an infinite class of weights. All members of a given class can be deemed "fungible" because they yield identical "SSE" (sum of squared errors) and R² values. Equations for generating…
Successful habits of weight losers
Technology Transfer Automated Retrieval System (TEKTRAN)
Despite the availability of the US Dietary Guidelines for Americans, the prevalence of obesity in adults has increased by 200% since 1980. Although few people have lost weight and maintained weight loss long term, some have and are tracked by the National Weight Control Registry. Results from these ...
Weight Training for Wheelchair Sports.
ERIC Educational Resources Information Center
Practical Pointers, 1978
1978-01-01
The article examines weight lifting training procedures for persons involved in wheelchair sports. Popular myths about weight training are countered, and guidelines for a safe and sound weight or resistance training program are given. Diagrams and descriptions follow for specific weightlifting activities: regular or standing press, military press,…
MINIMUM WEIGHT PATHS IN TIME-DEPENDENT NETWORKS
Orda, Ariel
Minimum weight paths in time-dependent networks. Ariel Orda, Raphael Rom (Department of Electrical …). Abstract: We investigate the minimum weight path problem in networks whose link weights and link delays are both functions of time. We demonstrate that in general there exist cases in which no finite path…
UNITARY LOWEST WEIGHT REPRESENTATIONS OF THE NON-COMPACT SUPERGROUP OSp(2m*|2n)
Scalise, Randall J.
PSU/TH/62, February 1990. Unitary lowest weight representations of the non-compact supergroup OSp(2m*|2n) … the unitary lowest weight representations of the non-compact supergroup OSp(2m*|2n) with the even … × USp(2n) decomposition of the lowest weight representations of OSp(2m*|2n). The partic…
Physiological adaptations to weight loss and factors favouring weight regain.
Greenway, F L
2015-08-01
Obesity is a major global health problem and predisposes individuals to several comorbidities that can affect life expectancy. Interventions based on lifestyle modification (for example, improved diet and exercise) are integral components in the management of obesity. However, although weight loss can be achieved through dietary restriction and/or increased physical activity, over the long term many individuals regain weight. The aim of this article is to review the research into the processes and mechanisms that underpin weight regain after weight loss and comment on future strategies to address them. Maintenance of body weight is regulated by the interaction of a number of processes, encompassing homoeostatic, environmental and behavioural factors. In homoeostatic regulation, the hypothalamus has a central role in integrating signals regarding food intake, energy balance and body weight, while an 'obesogenic' environment and behavioural patterns exert effects on the amount and type of food intake and physical activity. The roles of other environmental factors are also now being considered, including sleep debt and iatrogenic effects of medications, many of which warrant further investigation. Unfortunately, physiological adaptations to weight loss favour weight regain. These changes include perturbations in the levels of circulating appetite-related hormones and energy homoeostasis, in addition to alterations in nutrient metabolism and subjective appetite. To maintain weight loss, individuals must adhere to behaviours that counteract physiological adaptations and other factors favouring weight regain. It is difficult to overcome physiology with behaviour. Weight loss medications and surgery change the physiology of body weight regulation and are the best chance for long-term success. 
An increased understanding of the physiology of weight loss and regain will underpin the development of future strategies to support overweight and obese individuals in their efforts to achieve and maintain weight loss. PMID:25896063
Average Strength Parameters of Reactivated Mudstone Landslide for Countermeasure Works
NASA Astrophysics Data System (ADS)
Nakamura, Shinya; Kimura, Sho; Buddhi Vithana, Shriwantha
2015-04-01
Among the many approaches to landslide stability analysis, several landslide-related studies have used shear strength parameters obtained from laboratory shear tests with the limit equilibrium method. Most of them concluded that the average strength parameters, i.e. average cohesion (c'avg) and average angle of shearing resistance (φ'avg), calculated from back analysis were in agreement with the residual shear strength parameters measured by torsional ring-shear tests on undisturbed and remolded samples. However, disagreement with this contention can be found elsewhere: the residual shear strength measured using a torsional ring-shear apparatus has been found to be lower than the average strength calculated by back analysis. One reason why applying residual shear strength alone in stability analysis underestimates the safety factor is that the condition of the slip surface of a landslide can be heterogeneous: it may consist of portions that have already reached residual conditions along with other portions that have not. To accommodate such possible differences in slip surface conditions, it is important first to characterize the heterogeneous nature of the actual slip surface, so that measured shear strength values can be selected more suitably for the stability calculation of landslides. In the present study, the procedure for determining the average strength parameters acting along the slip surface is presented through stability calculations of reactivated landslides in the Shimajiri-mudstone area, Okinawa, Japan. The average strength parameters along the slip surfaces of the landslides have been estimated using the results of laboratory shear tests of the slip surface/zone soils, accompanied by a rational way of assessing the actual, heterogeneous slip surface conditions.
The results tend to show that the shear strength acting along the slip surface of imperfectly reactivated landslides cannot always be considered equal to its laboratory-measured residual strength. Engineers should recognize that it is reasonable to apply different strength parameters in the stability analysis depending on the actual conditions of the slip surface visible on boring core samples. In that context, we suggest that it is more appropriate to consider average strength parameters for imperfectly reactivated landslides, using 'residual shear strength' in combination with other categories of shear strength. This way, the outcome of the stability analysis is more inclusive and representative of the non-slickensided portions of a slip surface as well.
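Combining residual and non-residual strengths over a heterogeneous slip surface amounts to an area-weighted average. The sketch below is our own illustration of that weighting, not the authors' procedure; the segment areas, values, and the two-parameter (c', φ') description are hypothetical.

```python
def average_strength(segments):
    """Area-weighted average strength parameters along a slip surface.

    Each segment of the slip surface may be at residual strength
    (slickensided portions) or at a higher, non-residual strength.
    Our own illustrative weighting, not the authors' procedure.

    segments: list of (area, cohesion_kPa, friction_angle_deg).
    Returns (c'avg, phi'avg).
    """
    total_area = sum(a for a, _, _ in segments)
    c_avg = sum(a * c for a, c, _ in segments) / total_area
    phi_avg = sum(a * phi for a, _, phi in segments) / total_area
    return c_avg, phi_avg

# Hypothetical slip surface: 60% at residual strength, 40% not yet there.
print(average_strength([(0.6, 5.0, 9.0), (0.4, 12.0, 24.0)]))
```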
Relationship of the weaning weight of beef calves to the size of their dams
Tanner, James Edward
1964-01-01
). Marlowe (1962) reported regression coefficients of 0.061 pound average daily gain for Angus and 0.052 pound for Hereford calves per hundredweight change of the dam. Cow weight in these analyses was adjusted to that of a mature cow in average flesh… differences in the weight of the dam at weaning. An analysis using Model A and eliminating the steers resulted in an increase of 20 pounds in the mean, but only extremely small changes in the weight difference between bulls and heifers or in the other con…
NASA Astrophysics Data System (ADS)
Schatten, Kenneth
1984-04-01
Having for numerous reasons acquired a three-digit kilogram mass, the author is experienced at the painful struggles that the gourmand must suffer to reduce weight, particularly if he/she enjoys reasonably large amounts of good food. To the avant-garde geophysicist, utilizing the following approach could be pleasurable, rewarding, and may even enable the accomplishment of what Genghis Khan, Alexander the Great, Napoleon, and Hitler could not! The basic approach is the full utilization of Newton's formula for the attraction of two massive bodies: F = GM₁M₂/r², where G is the gravitational constant; r, the distance between the two bodies; and M₁ and M₂, the masses of the two bodies. Although one usually chooses M₁ to be the Earth's mass Mₑ and M₂ to be the mass of a small object, this unnecessarily restricts the realm of phenomena. The less restrictive assumption is M₁ + M₂ = Mₑ.
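For readers who want to plug numbers into the quoted formula, here is a minimal calculator; the Earth mass and radius constants are standard reference values, not taken from the article.

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravitational_force(m1, m2, r):
    """F = G*M1*M2 / r^2, exactly as quoted in the piece above."""
    return G * m1 * m2 / r**2

# Standard reference values (not from the article):
M_EARTH = 5.972e24  # kg
R_EARTH = 6.371e6   # m

# The usual choice M1 = Earth's mass: the weight of a 100 kg reader, in newtons.
print(gravitational_force(M_EARTH, 100.0, R_EARTH), "N")
```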
Holiday Weight Management by Successful Weight Losers and Normal Weight Individuals
ERIC Educational Resources Information Center
Phelan, Suzanne; Wing, Rena R.; Raynor, Hollie A.; Dibello, Julia; Nedeau, Kim; Peng, Wanfeng
2008-01-01
This study compared weight control strategies during the winter holidays among successful weight losers (SWL) in the National Weight Control Registry and normal weight individuals (NW) with no history of obesity. SWL (n = 178) had lost a mean of 34.9 kg and had kept greater than or equal to 13.6 kg off for a mean of 5.9 years. NW (n = 101) had a…
Influence of Molecular Weight on the Mechanical Performance of a Thermoplastic Glassy Polyimide
NASA Technical Reports Server (NTRS)
Nicholson, Lee M.; Whitley, Karen S.; Gates, Thomas S.; Hinkley, Jeffrey A.
1999-01-01
Mechanical testing of an advanced thermoplastic polyimide (LaRC-TM-SI) with known variations in molecular weight was performed over a range of temperatures below the glass transition temperature. The physical characterization, elastic properties, and notched tensile strength were all determined as a function of molecular weight and test temperature. It was shown that notched tensile strength is a strong function of both temperature and molecular weight, whereas stiffness is a strong function of temperature only. A critical molecular weight (Mc) was observed to occur at a weight-average molecular weight (Mw) of approximately 22,000 g/mol, below which the notched tensile strength decreases rapidly. This critical molecular weight transition is temperature-independent. Furthermore, inelastic analysis showed that low molecular weight materials tended to fail in a brittle manner, whereas high molecular weight materials exhibited ductile failure. The microstructural images supported these findings.
Reference-tissue correction of T2-weighted signal intensity for prostate cancer detection
NASA Astrophysics Data System (ADS)
Peng, Yahui; Jiang, Yulei; Oto, Aytekin
2014-03-01
The purpose of this study was to investigate whether correction with respect to reference tissue of T2-weighted MR image signal intensity (SI) improves its effectiveness for classification of regions of interest (ROIs) as prostate cancer (PCa) or normal prostatic tissue. Two image datasets collected retrospectively were used in this study: 71 cases acquired with GE scanners (dataset A), and 59 cases acquired with Philips scanners (dataset B). Through a consensus histology-MR correlation review, 175 PCa and 108 normal-tissue ROIs were identified and drawn manually. Reference-tissue ROIs were selected in each case from the levator ani muscle, urinary bladder, and pubic bone. T2-weighted image SI was corrected as the ratio of the average T2-weighted image SI within an ROI to that of a reference-tissue ROI. Area under the receiver operating characteristic curve (AUC) was used to evaluate the effectiveness of T2-weighted image SIs for differentiation of PCa from normal-tissue ROIs. AUC (± standard error) for uncorrected T2-weighted image SIs was 0.78 ± 0.04 (dataset A) and 0.65 ± 0.05 (dataset B). AUC for corrected T2-weighted image SIs with respect to muscle, bladder, and bone reference was 0.77 ± 0.04 (p = 1.0), 0.77 ± 0.04 (p = 1.0), and 0.75 ± 0.04 (p = 0.8), respectively, for dataset A; and 0.81 ± 0.04 (p = 0.002), 0.78 ± 0.04 (p < 0.001), and 0.79 ± 0.04 (p < 0.001), respectively, for dataset B. Correction in reference to the levator ani muscle yielded the most consistent results between GE and Philips images. Correction of T2-weighted image SI in reference to three types of extra-prostatic tissue can improve its effectiveness for differentiation of PCa from normal-tissue ROIs, and correction in reference to the levator ani muscle produces consistent T2-weighted image SIs between GE and Philips MR images.
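The correction itself is a one-line ratio of ROI means. A minimal sketch of that ratio (our own; the function name, pixel values, and use of NumPy are assumptions, not the study's code):

```python
import numpy as np

def corrected_t2_si(roi_pixels, reference_pixels):
    """Reference-tissue correction of T2-weighted signal intensity:
    the ratio of the average SI within the ROI to the average SI
    within a reference-tissue ROI (e.g. levator ani muscle).
    """
    return float(np.mean(roi_pixels) / np.mean(reference_pixels))

# Hypothetical pixel values from a suspicious ROI and a muscle reference ROI:
print(corrected_t2_si([412, 398, 405], [205, 199, 202]))
```

Because the ratio is dimensionless, it removes scanner-dependent intensity scaling, which is why it can make SIs comparable across GE and Philips images.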
Rigid shape matching by segmentation averaging.
Wang, Hongzhi; Oliensis, John
2010-04-01
We use segmentations to match images by shape. The new matching technique does not require point-to-point edge correspondence and is robust to small shape variations and spatial shifts. To address the unreliability of segmentations computed bottom-up, we give a closed form approximation to an average over all segmentations. Our method has many extensions, yielding new algorithms for tracking, object detection, segmentation, and edge-preserving smoothing. For segmentation, instead of a maximum a posteriori approach, we compute the "central" segmentation minimizing the average distance to all segmentations of an image. For smoothing, instead of smoothing images based on local structures, we smooth based on the global optimal image structures. Our methods for segmentation, smoothing, and object detection perform competitively, and we also show promising results in shape-based tracking. PMID:20224119
Slater Averaged Pseudopotential and Its Improvements
NASA Astrophysics Data System (ADS)
Miao, Maosheng
2001-03-01
We demonstrate that the optimized effective potential (OEP) method, which can be viewed as a way of constructing an orbital-independent potential from known orbital-dependent potentials, is valid for pseudopotentials. It is further proved that for most group I and II elements, as well as elements with large radius, the Slater averaged pseudopotential, which is local and orbital independent, is applicable with very good transferability. A Heine-Abarenkov (HA) correction is proposed to make the pseudopotential workable for other elements, especially the first-row atoms. Furthermore, the combination of the Slater averaged potential and the Bachelet-Hamann-Schlüter (BHS) construction produces a new family of first-principles norm-conserving pseudopotentials.
Average entanglement for Markovian quantum trajectories
Vogelsberger, S. [Institut Fourier, Universite Joseph Fourier and CNRS, BP 74, F-38402 Saint Martin d'Heres (France); Spehner, D. [Institut Fourier, Universite Joseph Fourier and CNRS, BP 74, F-38402 Saint Martin d'Heres (France); Laboratoire de Physique et Modelisation des Milieux Condenses, Universite Joseph Fourier and CNRS, BP 166, F-38042 Grenoble (France)
2010-11-15
We study the evolution of the entanglement of noninteracting qubits coupled to reservoirs under monitoring of the reservoirs by means of continuous measurements. We calculate the average of the concurrence of the qubits' wave function over all quantum trajectories. For two qubits coupled to independent baths subjected to local measurements, this average decays exponentially with a rate depending on the measurement scheme only. This contrasts with the known disappearance of entanglement after a finite time for the density matrix in the absence of measurements. For two qubits coupled to a common bath, the mean concurrence can vanish at discrete times. Our analysis applies to arbitrary quantum jump or quantum state diffusion dynamics in the Markov limit. We discuss the best measurement schemes to protect entanglement in specific examples.
Average gluon and quark jet multiplicities
A. V. Kotikov
2014-11-30
We present the results of [1,2] for computing the QCD contributions to the scale evolution of average gluon and quark jet multiplicities. The new results follow from recent progress in timelike small-x resummation obtained in the MSbar factorization scheme. They depend on two nonperturbative parameters with clear and simple physical interpretations. A global fit of these two quantities to all available experimental data sets demonstrates, by its goodness of fit, how our results solve a long-standing problem of QCD. Including all the available theoretical input within our approach, αs(MZ) = 0.1199 ± 0.0026 has been obtained in the MSbar scheme in an approximation equivalent to next-to-next-to-leading order, enhanced by the resummations of ln x terms through the NNLL level and of ln Q² terms by the renormalization group. This result is in excellent agreement with the present world average.
Average plasma environment at geosynchronous orbit
NASA Technical Reports Server (NTRS)
Su, S. Y.; Konradi, A.
1979-01-01
The average plasma environment at geosynchronous orbit (GSO) is derived from a whole year's worth of plasma data obtained by the UCSD electrostatic electrometer on board ATS 5. The result is primarily intended for use as a general reference for engineers designing a large spacecraft to be flown at GSO. A simple mathematical formula using a 3rd order polynomial is found to be adequate for representing the yearly averaged particle energy spectrum from 70 to 41,000 eV under different geomagnetic conditions. Furthermore, correlation analyses with the geomagnetic planetary index Kp and with the auroral electrojet index AE were carried out in the hope that the ground observations of the geomagnetic field variations can be used to predict the plasma variations in space. Unfortunately, the results indicate that such forecasting is not feasible by use of these two popular geomagnetic parameters alone.
Polarized electron beams at milliampere average current
Poelker, M. [Thomas Jefferson National Accelerator Facility, Newport News, Virginia 23606 (United States)
2013-11-07
This contribution describes some of the challenges associated with developing a polarized electron source capable of uninterrupted days-long operation at milliampere average beam current with polarization greater than 80%. Challenges will be presented in the context of assessing the required level of extrapolation beyond the performance of today's CEBAF polarized source operating at ≈ 200 μA average current. Estimates of performance at higher current will be based on hours-long demonstrations at 1 and 4 mA. Particular attention will be paid to beam-related lifetime-limiting mechanisms, and to strategies to construct a photogun that operates reliably at bias voltage > 350 kV.
Averaging of temporal memories by rats.
Swanton, Dale N; Gooch, Cynthia M; Matell, Matthew S
2009-07-01
Rats were trained on a mixed fixed-interval schedule in which stimulus A (tone or light) indicated food availability after 10 s and stimulus B (the other stimulus) indicated food availability after 20 s. Testing consisted of nonreinforced probe trials in which the stimulus was A, B, or the compound AB. On single-stimulus trials, rats responded with a peak of activity around the programmed reinforced time. On compound-stimulus trials, rats showed a single scalar peak of responding at a time midway between those for stimulus A and B. These results suggest that when provided with discrepant information regarding the temporal predictability of reinforcement, rats compute an average of the scheduled reinforcement times for the A and B stimuli and use this average to generate an expectation of reward for the compound stimuli. PMID:19594288
Averaging of Temporal Memories by Rats
Swanton, Dale N.; Gooch, Cynthia M.; Matell, Matthew S.
2009-01-01
Rats were trained on a mixed fixed-interval schedule in which stimulus A (tone or light) indicated food availability after 10 s and stimulus B (the other stimulus) indicated food availability after 20 s. Testing consisted of non-reinforced probe trials in which the stimulus was A, B, or the compound AB. On single-stimulus trials, rats responded with a peak of activity around the programmed reinforced time. On compound-stimulus trials, rats showed a single scalar peak of responding at a time midway between those for stimulus A and B. These results suggest that when provided with discrepant information regarding the temporal predictability of reinforcement, rats compute an average of the scheduled reinforcement times for the A and B stimuli and use this average to generate an expectation of reward for the compound stimuli. PMID:19594288
Tongue motion averaging from contour sequences.
Li, Min; Kambhamettu, Chandra; Stone, Maureen
2005-01-01
In this paper, a method to obtain the best representation of a speech motion from several repetitions is presented. Each repetition is a representation of the same speech captured at a different time by a sequence of ultrasound images and is composed of a set of 2D spatio-temporal contours. The 2D contours in different repetitions are first time-aligned by a shape-based Dynamic Programming (DP) method. The best representation of the speech motion is then obtained by averaging the time-aligned contours from the different repetitions. Procrustes analysis is used to measure contour similarity in the time-alignment process and to obtain the averaged best representation. To get the point correspondence for Procrustes analysis, a nonrigid point correspondence recovery method based on a local stretching model and a global constraint is developed. Synthetic validations and experiments on real tongue motion are also presented. PMID:16206480
Ensemble averaging vs. time averaging in molecular dynamics simulations of thermal conductivity
NASA Astrophysics Data System (ADS)
Gordiz, Kiarash; Singh, David J.; Henry, Asegun
2015-01-01
In this report, we compare time averaging and ensemble averaging as two different methods for phase space sampling in molecular dynamics (MD) calculations of thermal conductivity. For the comparison, we calculate thermal conductivities of solid argon and silicon structures, using equilibrium MD. We introduce two different schemes for the ensemble averaging approach and show that both can reduce the total simulation time as compared to time averaging. It is also found that velocity rescaling is an efficient mechanism for phase space exploration. Although our methodology is tested using classical MD, the approaches used for generating independent trajectories may find their greatest utility in computationally expensive simulations such as first principles MD. For such simulations, where each time step is costly, time averaging can require long simulation times because each time step must be evaluated sequentially and therefore phase space averaging is achieved through sequential operations. On the other hand, with ensemble averaging, phase space sampling can be achieved through parallel operations, since each trajectory is independent. For this reason, particularly when using massively parallel architectures, ensemble averaging can result in much shorter simulation times (˜100-200X), but exhibits similar overall computational effort.
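The trade-off the authors describe can be illustrated with a toy estimator of an equilibrium average. The sketch below is hypothetical (an AR(1) series stands in for a correlated MD observable): time averaging runs one long sequential trajectory, while ensemble averaging runs many short independent trajectories that could execute in parallel, and both converge to the same mean.

```python
import numpy as np

def ar1_series(n, mean, phi, sigma, rng):
    """Correlated surrogate for an MD observable (AR(1) process)."""
    noise = sigma * rng.normal(size=n)
    x = np.empty(n)
    x[0] = mean
    for i in range(1, n):
        x[i] = mean + phi * (x[i - 1] - mean) + noise[i]
    return x

rng = np.random.default_rng(0)
true_mean = 1.0

# Time averaging: one long trajectory; each step depends on the last,
# so the samples must be generated sequentially.
time_avg = ar1_series(200_000, true_mean, 0.95, 0.1, rng).mean()

# Ensemble averaging: 100 short, independent trajectories; each could
# run on a separate processor, yet the total sample count is the same.
ensemble_avg = np.mean([ar1_series(2_000, true_mean, 0.95, 0.1, rng).mean()
                        for _ in range(100)])
```

Both estimators use the same total number of samples, mirroring the abstract's point that ensemble averaging trades wall-clock time for parallel work at similar overall computational effort.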
Parents' Reactions to Finding Out That Their Children Have Average or above Average IQ Scores.
ERIC Educational Resources Information Center
Dirks, Jean; And Others
1983-01-01
Parents of 41 children who had been given an individually-administered intelligence test were contacted 19 months after testing. Parents of average IQ children were less accurate in their memory of test results. Children with above average IQ experienced extremely low frequencies of sibling rivalry, conceit or pressure. (Author/HLM)
The Average Velocity in a Queue
ERIC Educational Resources Information Center
Frette, Vidar
2009-01-01
A number of cars drive along a narrow road that does not allow overtaking. Each driver has a certain maximum speed at which he or she will drive if alone on the road. As a result of slower cars ahead, many cars are forced to drive at speeds lower than their maximum ones. The average velocity in the queue offers a non-trivial example of a mean…
Stochastic Games with Average Payoff Criterion
Ghosh, M. K.; Bagchi, A.
1998-11-15
We study two-person stochastic games with a Polish state space and compact action spaces, and with average payoff criterion under a certain ergodicity condition. For the zero-sum game we establish the existence of a value and stationary optimal strategies for both players. For the nonzero-sum case the existence of a Nash equilibrium in stationary strategies is established under certain separability conditions.
NASA Astrophysics Data System (ADS)
Bostan, P. A.; Heuvelink, G. B. M.; Akyurek, S. Z.
2012-10-01
Accurate mapping of the spatial distribution of annual precipitation is important for many applications in hydrology, climatology, agronomy, ecology and other environmental sciences. In this study, we compared five different statistical methods to predict spatially the average annual precipitation of Turkey using point observations of annual precipitation at meteorological stations and spatially exhaustive covariate data (i.e. elevation, aspect, surface roughness, distance to coast, land use and eco-region). The methods compared were multiple linear regression (MLR), ordinary kriging (OK), regression kriging (RK), universal kriging (UK), and geographically weighted regression (GWR). Average annual precipitation of Turkey from 1970 to 2006 was measured at 225 meteorological stations that are fairly uniformly distributed across the country, with a somewhat higher spatial density along the coastline. The observed annual precipitation varied between 255 mm and 2209 mm with an average of 628 mm. The annual precipitation was highest along the southern and northern coasts and low in the centre of the country, except for the area near Lake Van and the Keban and Ataturk Dams. To compare the performance of the interpolation techniques, the total dataset was first randomly split into ten equally sized test datasets. Next, for each test dataset the remaining 90% of the data comprised the training dataset. Each training dataset was then used to calibrate and apply the spatial prediction model. Predictions at the test dataset locations were compared with the observed test data. Validation was done by calculating the Root Mean Squared Error (RMSE), R-square and Standardized MSE (SMSE) values. According to these criteria, universal kriging is the most accurate with an RMSE of 178 mm, an R-square of 0.61 and an SMSE of 1.06, whilst multiple linear regression performed worst (RMSE of 222 mm, R-square of 0.39, and SMSE of 1.44).
Ordinary kriging, UK using only elevation, and geographically weighted regression are intermediate, with RMSE values of 201 mm, 212 mm and 211 mm, and R-square values of 0.50, 0.44 and 0.45, respectively. The RK results are close to those of UK, with an RMSE of 186 mm and an R-square of 0.57. The spatial extrapolation performance of each method was also evaluated. This was done by predicting the annual precipitation in the eastern part of Turkey using observations from the western part. Results showed that MLR, GWR and RK performed best, with little difference between these methods. The large prediction error variances confirmed that extrapolation is more difficult than interpolation. Whilst spatial extrapolation benefits most from covariate information, as shown by an RMSE reduction of about 60 mm, in this study covariate information was also valuable for spatial interpolation because it reduced the RMSE by, on average, 30 mm.
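The validation criteria above (RMSE, R-square, SMSE) are straightforward to compute from held-out predictions; the sketch below uses synthetic stand-in data and a dummy predictor, not the Turkish precipitation dataset.

```python
import numpy as np

def validation_scores(y_obs, y_pred, pred_var=None):
    """RMSE, R-square, and (if kriging prediction variances are supplied)
    Standardized MSE, as computed on a held-out test set."""
    resid = y_obs - y_pred
    rmse = np.sqrt(np.mean(resid ** 2))
    r2 = 1.0 - np.sum(resid ** 2) / np.sum((y_obs - y_obs.mean()) ** 2)
    # SMSE near 1 means the model's variance estimates are well calibrated.
    smse = np.mean(resid ** 2 / pred_var) if pred_var is not None else None
    return rmse, r2, smse

rng = np.random.default_rng(1)
y = rng.normal(628.0, 200.0, size=225)        # synthetic "annual precipitation"
pred = y + rng.normal(0.0, 150.0, size=225)   # a stand-in interpolator
rmse, r2, smse = validation_scores(y, pred, pred_var=np.full(225, 150.0 ** 2))
```

In a 10-fold setup like the study's, these scores would be accumulated over the ten test datasets and compared across interpolation methods.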
Importance Weight Aware Gradient Updates
Karampatziakis, Nikos
2010-01-01
An importance weight quantifies the relative importance of one example over another, coming up in applications of boosting, asymmetric classification costs, and active learning. The standard approach for dealing with importance weights in gradient descent is via multiplication of the gradient. This approach has obvious problems when importance weights are large. We develop an alternate approach based on an invariance property: that updating twice with importance weight $h$ is equivalent to updating once with importance weight $2h$. For many important losses this has a closed form update which satisfies standard regret guarantees when all examples have $h=1$. Empirically, importance weight invariant updates yield better learned hypotheses and reduce the sensitivity of the algorithm to the exact setting of the learning rate even for datasets where all importance weights are equal to one.
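For squared loss, an update with the stated invariance property has a simple closed form; the sketch below illustrates the invariance numerically, and is an illustration consistent with the property described rather than necessarily the paper's exact update rule.

```python
import numpy as np

def invariant_update(w, x, y, h, eta):
    """Importance-weight-invariant update for squared loss 0.5*(w.x - y)^2.
    The prediction moves toward y by a factor that saturates smoothly in h,
    so updating twice with weight h equals updating once with weight 2h."""
    xx = x @ x
    p = w @ x
    return w - x * (p - y) * (1.0 - np.exp(-h * eta * xx)) / xx

x = np.array([1.0, 2.0, -0.5])
y = 3.0
w0 = np.zeros(3)

twice = invariant_update(invariant_update(w0, x, y, 2.0, 0.1), x, y, 2.0, 0.1)
once = invariant_update(w0, x, y, 4.0, 0.1)   # same net effect
```

Unlike plain gradient multiplication, the exponential factor never overshoots the label, no matter how large `h` or the learning rate `eta` becomes.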
Fast Optimal Transport Averaging of Neuroimaging Data.
Gramfort, A; Peyré, G; Cuturi, M
2015-01-01
Knowing how the human brain is anatomically and functionally organized at the level of a group of healthy individuals or patients is the primary goal of neuroimaging research. Yet computing an average of brain imaging data defined over a voxel grid or a triangulation remains a challenge. Data are large, the geometry of the brain is complex, and between-subject variability leads to spatially or temporally non-overlapping effects of interest. To address the problem of variability, data are commonly smoothed before performing a linear group averaging. In this work we build on ideas originally introduced by Kantorovich to propose a new algorithm that can efficiently average non-normalized data defined over arbitrary discrete domains using transportation metrics. We show how Kantorovich means can be linked to Wasserstein barycenters in order to take advantage of an entropic smoothing approach. This leads to a smooth convex optimization problem and an algorithm with strong convergence guarantees. We illustrate the versatility of this tool and its empirical behavior on functional neuroimaging data, functional MRI and magnetoencephalography (MEG) source estimates, defined on voxel grids and triangulations of the folded cortical surface. PMID:26221679
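An entropy-regularized Wasserstein barycenter can be computed with iterative Bregman projections; the sketch below is a generic solver on a toy 1-D grid, not the authors' implementation, and the grid, bandwidths, and regularization strength are illustrative.

```python
import numpy as np

def sinkhorn_barycenter(dists, C, weights, eps=0.05, n_iter=200):
    """Entropy-regularized Wasserstein barycenter of the histograms in the
    columns of `dists` (shape (n, k)) on a grid with cost matrix C, via
    iterative Bregman projections."""
    K = np.exp(-C / eps)                 # Gibbs kernel from the ground cost
    v = np.ones_like(dists)
    for _ in range(n_iter):
        u = dists / (K @ v)              # match each input's marginal
        # weighted geometric mean couples the k problems into one barycenter
        b = np.exp((weights * np.log(K.T @ u)).sum(axis=1))
        v = b[:, None] / (K.T @ u)
    return b

# Toy example: barycenter of two narrow bumps on [0, 1]
x = np.linspace(0.0, 1.0, 60)
C = (x[:, None] - x[None, :]) ** 2       # squared-distance ground cost

def bump(center):
    h = np.exp(-((x - center) ** 2) / 0.005)
    return h / h.sum()

dists = np.stack([bump(0.3), bump(0.7)], axis=1)
b = sinkhorn_barycenter(dists, C, weights=np.array([0.5, 0.5]))
```

With equal weights, the barycenter mass sits midway between the two inputs (near 0.5) rather than splitting into two peaks as a plain linear average would, which is the behavior that makes transportation metrics attractive for non-overlapping effects.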
Disk-averaged synthetic spectra of Mars
NASA Technical Reports Server (NTRS)
Tinetti, Giovanna; Meadows, Victoria S.; Crisp, David; Fong, William; Velusamy, Thangasamy; Snively, Heather
2005-01-01
The principal goal of the NASA Terrestrial Planet Finder (TPF) and European Space Agency's Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of a Mars-like planet to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra. We explore the detectability as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model. This model uses observational data as input to generate a database of spatially resolved synthetic spectra for a range of illumination conditions and viewing geometries. The model was validated against spectra recorded by the Mars Global Surveyor-Thermal Emission Spectrometer and the Mariner 9-Infrared Interferometer Spectrometer. Results presented here include disk-averaged synthetic spectra, light curves, and the spectral variability at visible and mid-infrared wavelengths for Mars as a function of viewing angle, illumination, and season. We also considered the differences in the spectral appearance of an increasingly ice-covered Mars, as a function of spectral resolution, signal-to-noise and integration time for both TPF-C and TPF-I/Darwin.
Terrestrial kilometric radiation: 3. Average spectral properties
NASA Technical Reports Server (NTRS)
Kaiser, M. L.; Alexander, J. K.
1976-01-01
A study is presented of the average spectral properties of terrestrial kilometric radiation (TKR) derived from observations made by radio astronomy experiments onboard the IMP-6 and RAE-2 spacecraft. As viewed from near the equatorial plane, TKR is most intense and most often observed in the 21-24 hr local time zone and is rarely seen in the 09-12 hr zone. The peak flux density usually occurs near 240 kHz, but there is evidence that the peak occurs at a somewhat lower frequency on the dayside. The frequency of the peak in the average flux spectrum varies inversely with increasing substorm activity as inferred from the auroral electrojet index (AE) from a maximum near 300 kHz during very quiet times to a minimum below 200 kHz during very disturbed times. The absolute flux levels in the 100-600 kHz TKR band increase significantly with increasing AE. The average power associated with a particular source region seems to decrease rapidly with increasing source altitude.
Digital Averaging Phasemeter for Heterodyne Interferometry
NASA Technical Reports Server (NTRS)
Johnson, Donald; Spero, Robert; Shaklan, Stuart; Halverson, Peter; Kuhnert, Andreas
2004-01-01
A digital averaging phasemeter has been built for measuring the difference between the phases of the unknown and reference heterodyne signals in a heterodyne laser interferometer. This phasemeter performs well enough to enable interferometric measurements of distance with accuracy of the order of 100 pm and with the ability to track distance as it changes at a speed of as much as 50 cm/s. This phasemeter is unique in that it is a single, integral system capable of performing three major functions that, heretofore, have been performed by separate systems: (1) measurement of the fractional-cycle phase difference, (2) counting of multiple cycles of phase change, and (3) averaging of phase measurements over multiple cycles for improved resolution. This phasemeter also offers the advantage of making repeated measurements at a high rate: the phase is measured on every heterodyne cycle. Thus, for example, in measuring the relative phase of two signals having a heterodyne frequency of 10 kHz, the phasemeter would accumulate 10,000 measurements per second. At this high measurement rate, an accurate average phase determination can be made more quickly than is possible at a lower rate.
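The resolution gain from averaging one phase reading per heterodyne cycle follows the usual 1/sqrt(N) law for independent noise; the toy illustration below uses a hypothetical single-shot noise level and a constant true phase.

```python
import numpy as np

rng = np.random.default_rng(2)

# One fractional-cycle phase reading per heterodyne cycle at 10 kHz:
# 10,000 measurements accumulated per second, as in the abstract.
f_het = 10_000
true_phase = 0.1234                    # cycles; assumed constant here
single_shot_err = 0.01                 # std of one reading (cycles), illustrative
readings = true_phase + rng.normal(0.0, single_shot_err, size=f_het)

averaged = readings.mean()             # one second of per-cycle averaging
# Averaging N independent readings shrinks the error by ~sqrt(N).
expected_err = single_shot_err / np.sqrt(f_het)
```

At 10,000 readings per second the averaged phase resolution improves by roughly a factor of 100 over a single-cycle measurement, which is why the high measurement rate lets the phasemeter reach a given accuracy quickly.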
Adaptive-Weighted Packet Scheduling for Premium Service
Wang, Haining; Shen, Chia; Shin, Kang G.
...according to the dynamics of the average queue size of the premium service, the proposed scheme can achieve low loss rate, low delay, and low delay jitter for the premium service. Moreover, it requires neither rigid
Weighted Centroid Localization in Zigbee-based Sensor Networks
Jan Blumenthal; Ralf Grossmann; Frank Golatowski; Dirk Timmermann
2007-01-01
Localization in wireless sensor networks is becoming more and more important, because many applications need to locate the source of incoming measurements as precisely as possible. Weighted centroid localization (WCL) provides a fast and easy algorithm to locate devices in wireless sensor networks. The algorithm is derived from a centroid determination which calculates the position of devices by averaging the coordinates
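The centroid computation WCL performs can be sketched in a few lines; the 1/d^g distance weighting is one common choice, and the anchor layout and distances below are illustrative.

```python
import math

def weighted_centroid(anchors, distances, g=1.0):
    """Weighted centroid localization: estimate a node's position as the
    average of known anchor coordinates, weighted by 1/d^g so that closer
    anchors (stronger received signals) count more."""
    weights = [1.0 / (d ** g) for d in distances]
    wsum = sum(weights)
    x = sum(w * ax for w, (ax, _) in zip(weights, anchors)) / wsum
    y = sum(w * ay for w, (_, ay) in zip(weights, anchors)) / wsum
    return x, y

# Four anchors at the corners of a unit square, node at the center:
anchors = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
distances = [math.dist((0.5, 0.5), a) for a in anchors]
est = weighted_centroid(anchors, distances)
```

With equal distances the weights cancel and the estimate is the plain centroid; in practice the distances (or RSSI-derived proxies) skew the estimate toward nearby anchors.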
Economic evaluation of an internet-based weight management program
Technology Transfer Automated Retrieval System (TEKTRAN)
To determine whether a behavioral Internet treatment (BIT) program for weight management is a viable, cost-effective option compared with usual care (UC) in a diverse sample of overweight (average body mass index = 29 kg/m2), healthy adults (mean age = 34 years) serving in the US Air Force. Two-grou...
14 CFR 25.1519 - Weight, center of gravity, and weight distribution.
Code of Federal Regulations, 2013 CFR
2013-01-01
§ 25.1519 Weight, center of gravity, and weight distribution. The airplane weight, center of gravity, and weight distribution...
14 CFR 25.1519 - Weight, center of gravity, and weight distribution.
Code of Federal Regulations, 2014 CFR
2014-01-01
§ 25.1519 Weight, center of gravity, and weight distribution. The airplane weight, center of gravity, and weight distribution...
14 CFR 25.1519 - Weight, center of gravity, and weight distribution.
Code of Federal Regulations, 2012 CFR
2012-01-01
§ 25.1519 Weight, center of gravity, and weight distribution. The airplane weight, center of gravity, and weight distribution...
Hamdi, Moustapha; Van Landuyt, Koenraad; Blondeel, Phillip; Hijjawi, John B; Roche, Nathalie; Monstrey, Stan
2009-01-01
The body contour deformities that develop in morbidly obese patients following bariatric surgery often involve the breasts. Mastopexy is virtually always required in the female massive weight loss patient, and breast augmentation is often an important adjunct to breast-lifting procedures. The lateral intercostal artery perforator (LICAP) pedicled flap provides ample material for autogenous breast augmentation in such patients. Between June 2001 and June 2005, bilateral LICAP flaps were used as a method of autologous breast augmentation in six patients after massive weight loss. Of the 12 pedicled LICAP flaps raised, the average flap dimension was 23.6x10.6 cm. Mean flap harvesting time was 60 min (range 45-75 min) for a single flap. All but two flaps were based on one perforator. All donor sites were closed primarily. Complete flap survival was achieved in all cases. A minor wound dehiscence occurred in two cases both of which healed secondarily. Patient satisfaction with both the appearance of their breasts and lateral axillary-thoracic region was high. The improved contour of the lateral axillary region was frequently noted as a significant benefit. In massive weight loss patients, harvesting the lateral skin-fat excess based on the LICAP provides supple tissue for breast augmentation, while simultaneously improving the contour of this area frequently affected by skin excess. Additionally, harvesting these flaps without sacrifice of the underlying muscle eases postoperative recovery and reduces donor site morbidity. PMID:18054303
Waring, Molly E.; Schneider, Kristin L.; Appelhans, Bradley M.; Busch, Andrew M.; Whited, Matthew C.; Rodrigues, Stephanie; Lemon, Stephenie C.; Pagoto, Sherry L.
2014-01-01
Objective Some adults with comorbid depression and obesity respond well to lifestyle interventions while others have poor outcomes. The objective of this study was to evaluate whether early-treatment weight loss progress predicts clinically significant 6-month weight loss among women with obesity and depression. Methods We conducted a secondary analysis of data from 75 women with obesity and depression who received a standard lifestyle intervention. Relative risks (RRs) and 95% confidence intervals (CIs) for achieving ≥5% weight loss by 6 months were calculated based on whether they achieved ≥1 pound/week weight loss in weeks 2–8. Among those on target at week 3, we examined potential subsequent time points at which weight loss progress might identify additional individuals at risk for treatment failure. Results At week 2, women who averaged ≥1 pound/week loss were twice as likely to achieve 5% weight loss by 6 months as those who did not (RR=2.40; 95% CI: 2.32–4.29); weight loss at weeks 3–8 was similarly predictive (RRs=2.02–3.20). Examining weight loss progress at week 3 and subsequently a time point during weeks 4–8, 52–67% of participants were not on target with their weight loss, and those on target were 2–3 times as likely to achieve 5% weight loss by 6 months (RRs=1.82–2.92). Conclusion Weight loss progress as early as week 2 of treatment predicts weight loss outcomes for women with comorbid obesity and depression, which supports the feasibility of developing stepped care interventions that adjust treatment intensity based on early progress in this population. PMID:24745781
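The relative risks reported above are ratios of outcome proportions in the on-target and off-target groups; a sketch with hypothetical counts (not the study's data) and the standard log-normal confidence interval:

```python
import math

def relative_risk(a, b, c, d):
    """Relative risk of an outcome for exposed vs. unexposed groups, with
    a 95% confidence interval (log-normal approximation).
    a/b: exposed with/without the outcome; c/d: unexposed with/without."""
    rr = (a / (a + b)) / (c / (c + d))
    se_log = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    lo = math.exp(math.log(rr) - 1.96 * se_log)
    hi = math.exp(math.log(rr) + 1.96 * se_log)
    return rr, lo, hi

# Hypothetical counts: 20 of 30 on-target women reach the 5% loss goal,
# versus 15 of 45 who were not on target.
rr, lo, hi = relative_risk(20, 10, 15, 30)
```

Here the on-target group's risk of success is 2/3 versus 1/3, giving RR = 2.0, i.e. "twice as likely" in the sense used in the abstract.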
Aperture averaging in a laser Gaussian beam: simulations and experiments
NASA Astrophysics Data System (ADS)
Barrios, Ricardo; Dios, Federico; Recolons, Jaume; Rodriguez, Alejandro
2010-08-01
In terrestrial free-space laser communication, aside from pointing issues, the major problem that has to be dealt with is the turbulent atmosphere, which produces irradiance fluctuations in the received signal, greatly reducing the link performance. Aperture averaging is the standard method used to mitigate these irradiance fluctuations; it consists of increasing the area of the detector, or effectively increasing it by using a collecting lens with a diameter as large as possible. Prediction of the aperture averaging factor for a Gaussian beam with currently available theory is compared with data collected experimentally and with simulations based on the beam propagation method, where the atmospheric turbulence is represented by linearly spaced random phase screens. Experiments were carried out using a collecting lens with two simultaneous detectors, one of them with a small aperture to emulate an effective point detector, while the other was mounted with interchangeable diaphragms, so that measurements for different aperture diameters could be made. The testbed for the experiments consists of a nearly horizontal path of 1.2 km with the transmitter and receiver on either side of the optical link. The analysis of the experimental data is used to characterize the aperture averaging factor when different values of laser divergence are selected.
Reliability of Calculating Average Soil Composition of Apollo Landing Sites
NASA Astrophysics Data System (ADS)
Basu, Abhijit; Riegsecker, Sue E.
1998-01-01
Lunar soil, i.e., the fine fraction of the lunar regolith, is the ground truth available for calibrating remotely sensed properties of virtually atmosphere-free planetary bodies. Such properties include albedo, IR-VIS-UV spectra, and secondary XRF, which are used to characterize the chemical and mineralogical compositions of planetary crusts. The quality of calibration, however, is dependent on the degree to which the ground truth is represented in the remotely sensed properties. The footprints and spatial resolution of orbital and Earth-based observations are much larger than the sampling areas at the landing sites. Yet an average composition of soils at each landing site is our best approximation for testing calibration. Previously, we have compiled chemical compositions of lunar soils and estimated the best average composition (CC) for each landing site. We have now compiled and estimated the best average mineralogical composition (MC) of soils (90-150-μm fraction) at each Apollo landing site. In this paper, we examine how these two estimates (Tables 1 and 2) compare and how representative they may be. For the purpose of comparison, we have calculated the normative mineralogy of each site (from Table 1) and recast them on a quartz-apatite-pyrite-free basis, i.e., in terms of feldspar, pyroxene, olivine, and ilmenite + chromite (Table 3).
High Average Power, High Energy Short Pulse Fiber Laser System
Messerly, M J
2007-11-13
Recently, continuous-wave fiber laser systems with output powers in excess of 500 W with good beam quality have been demonstrated [1]. High-energy, ultrafast, chirped-pulse fiber laser systems have achieved record output energies of 1 mJ [2]. However, these high-energy systems have not been scaled beyond a few watts of average output power. Fiber laser systems are attractive for many applications because they offer the promise of efficient, compact, robust systems that are turnkey. Applications such as cutting, drilling and materials processing, front-end systems for high-energy pulsed lasers (such as petawatts), and laser-based sources of high-spatial-coherence, high-flux x-rays all require high-energy short pulses, and two of these three applications also require high average power. The challenge in creating a high-energy chirped-pulse fiber laser system is to find a way to scale the output energy while avoiding nonlinear effects and maintaining good beam quality in the amplifier fiber. To this end, our 3-year LDRD program sought to demonstrate a high-energy, high-average-power fiber laser system. This work included exploring designs of large-mode-area optical fiber amplifiers for high-energy systems as well as understanding the issues associated with chirped-pulse amplification in optical fiber amplifier systems.
2010-01-01
Background This study aimed to canvass the nature of adolescent-parent interactions about weight, particularly overweight, and to explore ideas of how to foster supportive discussions regarding weight, both in the home and with family doctors. Methods A market research company was contracted to recruit and conduct a series of separate focus groups with adolescents and unrelated parents of adolescents from low-middle socio-economic areas in Sydney and a regional centre, Australia. Group discussions were audio recorded, transcribed, and then a qualitative content analysis of the data was performed. Results Nine focus groups were conducted; two were held with girls (n = 13), three with boys (n = 18), and four with parents (20 mothers, 12 fathers). Adolescent and parent descriptions of weight-related interactions could be classified into three distinct approaches: indirect/cautious (i.e. focus on eating or physical activity behaviors without discussing weight specifically); direct/open (i.e. body weight was discussed); and never/rarely discussing the subject. Indirect approaches were described most frequently by both adolescents and parents and were generally preferred over direct approaches. Parents and adolescents were circumspect but generally supportive of the potential role for family doctors to monitor and discuss adolescent weight status. Conclusions These findings have implications for developing acceptable messages for adolescent and family overweight prevention and treatment interventions. PMID:20205918
Random walks on non-homogenous weighted Koch networks
NASA Astrophysics Data System (ADS)
Dai, Meifeng; Li, Xingyi; Xi, Lifeng
2013-09-01
In this paper, we introduce new models of non-homogeneous weighted Koch networks for real traffic systems, depending on three scaling factors r1, r2, r3 ∈ (0,1). Inspired by the definition of the average weighted shortest path (AWSP), we define the average weighted receiving time (AWRT). Assuming that the walker, at each step, starting from its current node, moves uniformly to any of its neighbors, we show that in a large network the AWRT grows as a power-law function of the network order with exponent θ(r1, r2, r3) = log4(1 + r1 + r2 + r3). Moreover, the AWSP, in the infinite network order limit, depends only on the sum of the scaling factors r1, r2, r3.
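The stated exponent formula log4(1 + r1 + r2 + r3) is easy to evaluate; a quick numeric check (the sample scaling factors are arbitrary):

```python
import math

def awrt_exponent(r1, r2, r3):
    """Power-law exponent of the average weighted receiving time,
    log_4(1 + r1 + r2 + r3), for scaling factors in (0, 1)."""
    assert all(0.0 < r < 1.0 for r in (r1, r2, r3))
    return math.log(1.0 + r1 + r2 + r3, 4)

# Since each factor is below 1, the sum 1 + r1 + r2 + r3 is below 4,
# so the exponent is always strictly between 0 and 1 (sublinear growth).
theta = awrt_exponent(0.5, 0.5, 0.5)
```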
Computation of synthetic mammograms with an edge-weighting algorithm
NASA Astrophysics Data System (ADS)
Homann, Hanno; Bergner, Frank; Erhard, Klaus
2015-03-01
The promising increase in cancer detection rates [1, 2] makes digital breast tomosynthesis (DBT) an interesting alternative to full-field digital mammography (FFDM) in breast cancer screening. However, this benefit comes at the cost of an increased average glandular dose in a combined DBT plus FFDM acquisition protocol. Synthetic mammograms, which are computed from the reconstructed tomosynthesis volume data, have demonstrated to be an alternative to a regular FFDM exposure in a DBT plus synthetic 2D reading mode [3]. Besides weighted averaging and modified maximum intensity projection (MIP) methods [4, 5], the integration of CAD techniques for computing a weighting function in the forward projection step of the synthetic mammogram generation has been recently proposed [6, 7]. In this work, a novel and computationally efficient method is presented based on an edge-retaining algorithm, which directly computes the weighting function by an edge-detection filter.
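The general idea of an edge-weighted forward projection can be sketched as follows; this uses a generic gradient-magnitude weighting on a toy volume and does not reproduce the paper's actual edge-detection filter or weighting function.

```python
import numpy as np

def synthetic_projection(volume, axis=0, alpha=5.0):
    """Edge-weighted forward projection of a reconstructed volume: voxels
    with strong in-plane gradients get larger weights, so edges and fine
    structures survive the averaging along the projection axis.
    With alpha = 0 this reduces to a plain average."""
    grad = (np.abs(np.gradient(volume, axis=1)) +
            np.abs(np.gradient(volume, axis=2)))
    weights = 1.0 + alpha * grad
    return (weights * volume).sum(axis=axis) / weights.sum(axis=axis)

# Toy 3-slice "volume": the middle slice contains a bright in-plane edge.
vol = np.zeros((3, 8, 8))
vol[1, :, 4:] = 1.0
img = synthetic_projection(vol)
```

Near the edge (column 4) the bright slice dominates the weighted average, while in flat regions the result falls back to the plain mean across slices, which is the retention behavior an edge-weighting scheme is after.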
Body weight and composition dynamics of fall migrating canvasbacks
Serie, J.R.; Sharp, D.E.
1989-01-01
We studied body weights and composition of canvasbacks (Aythya valisineria) during fall migration 1975-77 on stopover sites along the upper Mississippi River near La Crosse, Wisconsin (Navigational Pools 7 and 8) and Keokuk, Iowa (Navigational Pool 19). Body weights varied (P < 0.001) by age and sex without interaction. Weights varied by year (P < 0.001) on Pools 7 and 8. Mean weights increased (P < 0.01) within age and sex classes by date and averaged 3.6 and 2.7 g daily on Pools 7 and 8 and Pool 19, respectively. Percent fat was highly correlated (P < 0.001) with carcass weight for each age and sex. Live weight was a good predictor of total body fat. Mean estimated total body fat ranged from 200 to 300 g and comprised 15-20% of live weights among age and sex classes. Temporal weight patterns were less variable for adults than immatures, but generally increased during migration. Length of stopover varied inversely with fat reserves among color-marked adult males. Variation in fat condition of canvasbacks during fall may explain the mechanism regulating population ingress and egress on stopover sites. Fat reserves attained by canvasbacks during fall stopover may have adaptive significance in improving survival by conditioning for winter.
Ness-Abramof, Rosane; Apovian, Caroline M
2005-08-01
Drug-induced weight gain is a serious side effect of many commonly used drugs leading to noncompliance with therapy and to exacerbation of comorbid conditions related to obesity. Improved glycemic control achieved by insulin, insulin secretagogues or thiazolidinedione therapy is generally accompanied by weight gain. It is a problematic side effect of therapy due to the known deleterious effect of weight gain on glucose control, increased blood pressure and worsening lipid profile. Weight gain may be lessened or prevented by adherence to diet and exercise or combination therapy with metformin. Weight gain is also common in psychotropic therapy. The atypical antipsychotic drugs (clozapine, olanzepine, risperidone and quetiapine) are known to cause marked weight gain. Antidepressants such as amitriptyline, mirtazapine and some serotonin reuptake inhibitors (SSRIs) also may promote appreciable weight gain that cannot be explained solely by improvement in depressive symptoms. The same phenomenon is observed with mood stabilizers such as lithium, valproic acid and carbamazepine. Antiepileptic drugs (AEDs) that promote weight gain include valproate, carbamazepine and gabapentin. Lamotrigine is an AED that is weight-neutral, while topiramate and zonisamide may induce weight loss. PMID:16234878
Lee, Jia-In; Yen, Cheng-Fang
2014-12-01
The aims of this cross-sectional study were to examine the associations between body weight and mental health indicators including depression, social phobia, insomnia, and self-esteem among Taiwanese adolescents in Grades 7-12. The body mass index (BMI) of 5254 adolescents was calculated based on self-reported weight and height measurements. Body weight status was determined by the age- and gender-specific International Obesity Task Force reference tables. Using participants of average weight as the reference group, the associations between body weight status (underweight, overweight, and obesity) and mental health indicators (depression, social phobia, insomnia, and self-esteem) were examined by multiple regression analysis. The possible moderating effects of sociodemographic characteristics on the association were also examined. After controlling for the effects of sociodemographic characteristics, both overweight (p < 0.05) and obese adolescents (p < 0.001) had a lower level of self-esteem than did those of average weight; however, no significant differences in depression, social phobia, or insomnia were found between those who were overweight/obese and those of average weight. No significant differences in the four mental health indicators were found between those who were underweight and those of average weight. Sociodemographic characteristics had no moderating effect on the association between body weight and mental health indicators. In conclusion, mental health and school professionals must take the association between overweight/obesity and self-esteem into consideration when approaching the issue of mental health among adolescents. PMID:25476101
Handling Dynamic Weights in Weighted Frequent Pattern Mining
NASA Astrophysics Data System (ADS)
Ahmed, Chowdhury Farhan; Tanbeer, Syed Khairuzzaman; Jeong, Byeong-Soo; Lee, Young-Koo
Even though weighted frequent pattern (WFP) mining is more effective than traditional frequent pattern mining because it can consider different semantic significances (weights) of items, existing WFP algorithms assume that each item has a fixed weight. But in real world scenarios, the weight (price or significance) of an item can vary with time. Reflecting these changes in item weight is necessary in several mining applications, such as retail market data analysis and web click stream analysis. In this paper, we introduce the concept of a dynamic weight for each item, and propose an algorithm, DWFPM (dynamic weighted frequent pattern mining), that makes use of this concept. Our algorithm can address situations where the weight (price or significance) of an item varies dynamically. It exploits a pattern growth mining technique to avoid the level-wise candidate set generation-and-test methodology. Furthermore, it requires only one database scan, so it is eligible for use in stream data mining. An extensive performance analysis shows that our algorithm is efficient and scalable for WFP mining using dynamic weights.
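One common way to define support when item weights vary per transaction (e.g. prices changing over time) can be sketched as follows; this is a simplified illustration of the weighted-support idea, not the DWFPM algorithm itself, and the pattern-weight convention (averaging the items' weights) and toy data are assumptions.

```python
def weighted_support(pattern, transactions):
    """Weighted support of an itemset under per-transaction item weights.
    A pattern's weight in a transaction is taken as the average of its
    items' weights there; support is the sum of those weights over the
    transactions containing the pattern, normalized by the number of
    transactions."""
    pattern = set(pattern)
    total = 0.0
    for items in transactions:          # items: dict item -> current weight
        if pattern <= items.keys():
            total += sum(items[i] for i in pattern) / len(pattern)
    return total / len(transactions)

# Toy click-stream: the weight (price) of item 'a' drifts over time.
transactions = [
    {'a': 1.0, 'b': 0.5},
    {'a': 0.8, 'c': 0.4},
    {'a': 0.6, 'b': 0.7},
]
ws = weighted_support({'a', 'b'}, transactions)
```

Because each transaction carries the weights current at its timestamp, a single database scan suffices to accumulate these sums, which is the property that makes the dynamic-weight setting compatible with stream mining.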