Wieczorek, Michael E.
2014-01-01
This digital data release consists of seven data files of soil attributes for the United States and the District of Columbia. The files are derived from the Natural Resources Conservation Service's (NRCS) Soil Survey Geographic database (SSURGO). The data files can be linked to the raster datasets of soil mapping unit identifiers (MUKEY) available through the NRCS's Gridded Soil Survey Geographic (gSSURGO) database (http://www.nrcs.usda.gov/wps/portal/nrcs/detail/soils/survey/geo/?cid=nrcs142p2_053628). The associated files, named DRAINAGECLASS, HYDRATING, HYDGRP, HYDRICCONDITION, LAYER, TEXT, and WTDEP, are area- and depth-weighted average values for selected soil characteristics from the SSURGO database for the conterminous United States and the District of Columbia. The SSURGO tables were acquired from the NRCS on March 5, 2014. The soil characteristics in the DRAINAGE table are drainage class (DRNCLASS), which identifies the natural drainage conditions of the soil and refers to the frequency and duration of wet periods. The soil characteristics in the HYDRATING table are hydric rating (HYDRATE), a yes/no field that indicates whether or not a map unit component is classified as a "hydric soil". The soil characteristics in the HYDGRP table are the percentages for each hydrologic group per MUKEY. The soil characteristics in the HYDRICCONDITION table are hydric condition (HYDCON), which describes the natural condition of the soil component. The soil characteristics in the LAYER table are available water capacity (AVG_AWC), bulk density (AVG_BD), saturated hydraulic conductivity (AVG_KSAT), vertical saturated hydraulic conductivity (AVG_KV), soil erodibility factor (AVG_KFACT), porosity (AVG_POR), field capacity (AVG_FC), the soil fraction passing a number 4 sieve (AVG_NO4), the soil fraction passing a number 10 sieve (AVG_NO10), the soil fraction passing a number 200 sieve (AVG_NO200), and organic matter (AVG_OM). The soil characteristics in the TEXT table are
Average Weighted Receiving Time of Weighted Tetrahedron Koch Networks
NASA Astrophysics Data System (ADS)
Dai, Meifeng; Zhang, Danping; Ye, Dandan; Zhang, Cheng; Li, Lei
2015-07-01
We introduce weighted tetrahedron Koch networks with infinite weight factors, which generalize the finite ones. The notion of weighted time is first defined in this work. The mean weighted first-passage time (MWFPT) and the average weighted receiving time (AWRT) are then defined in terms of weighted time. We study the AWRT under a weight-dependent walk. Results show that the AWRT for a nontrivial weight factor sequence grows sublinearly with the network order. To investigate the origin of this sublinearity, the average receiving time (ART) is discussed for four cases.
Asymmetric network connectivity using weighted harmonic averages
NASA Astrophysics Data System (ADS)
Morrison, Greg; Mahadevan, L.
2011-02-01
We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph using a simple weighted harmonic average of connectivity, yielding a real-valued Generalized Erdös Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and use our measure to study some simple analytically tractable networks. We show how this might be used to look at asymmetry in authorship networks such as those that inspired the integer Erdös numbers in mathematical coauthorship. We also show the utility of our approach by devising a ratings scheme that we apply to the data from the Netflix Prize, and find a significant improvement using our method over a baseline.
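As an illustration of the harmonic-averaging idea in this abstract (the actual GEN recursion is more involved than this), a weighted harmonic mean is dominated by its smallest values, so weak links pull the "closeness" down strongly. A minimal sketch, with hypothetical values:

```python
import numpy as np

def weighted_harmonic_mean(values, weights):
    """Weighted harmonic mean: sum(w) / sum(w / x)."""
    values = np.asarray(values, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return weights.sum() / (weights / values).sum()

# Two parallel connection strengths between a pair of nodes: the weak
# link (1.0) dominates the harmonic average, unlike an arithmetic mean.
print(weighted_harmonic_mean([1.0, 10.0], [1.0, 1.0]))  # ≈ 1.818 (arithmetic mean would be 5.5)
```

Because each node averages over its own set of incident weights, the value computed from i toward j need not equal the one from j toward i, which is the source of the asymmetry the abstract exploits.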
NASA Astrophysics Data System (ADS)
Wu, Zikai; Hou, Baoyu; Zhang, Hongjuan; Jin, Feng
2014-04-01
Deterministic network models have been attractive media for discussing how dynamical processes depend on network structural features. At the same time, the heterogeneity of weights affects dynamical processes taking place on networks. In this paper, we present a family of weighted expanded Koch networks based on Koch networks. They originate from an r-polygon; at each evolutionary step, every node of the current generation produces m r-polygons that include the node, with the weighted edges scaled by a factor w. We derive closed-form expressions for the average weighted shortest path length (AWSP). In the large-network limit, the AWSP stays bounded as the network order grows (0 < w < 1). We then focus on a special random walk and a trapping problem on these networks; in particular, we calculate the average receiving time (ART) exactly. The ART exhibits a sublinear dependence on network order (0 < w < 1), which implies that nontrivial weighted expanded Koch networks are more efficient at receiving information than un-weighted expanded Koch networks. The efficiency of receiving information at hub nodes also depends on the parameters m and r. These findings may pave the way for controlling information transport on general weighted networks.
Dynamic consensus estimation of weighted average on directed graphs
NASA Astrophysics Data System (ADS)
Li, Shuai; Guo, Yi
2015-07-01
Recent applications call for distributed weighted average estimation over sensor networks, where sensor measurement accuracy or environmental conditions need to be taken into consideration in the final consensus group decision. In this paper, we propose a new dynamic consensus filter design for distributed estimation of the weighted average of sensors' inputs on directed graphs. Based on recent advances in the field, we modify the existing proportional-integral consensus filter protocol to remove the requirement of bidirectional gain exchange between neighbouring sensors, so that the algorithm works on directed graphs where bidirectional communication is not possible. To compensate for the asymmetric structure of the system introduced by this removal, sufficient gain conditions are obtained under which the filter protocols are guaranteed to converge. It is rigorously proved that the proposed filter protocol converges to the weighted average of constant inputs asymptotically, and to the weighted average of time-varying inputs with a bounded error. Simulations verify the effectiveness of the proposed protocols.
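The target quantity of such filters can be illustrated with a much simpler ratio-consensus sketch (this is not the paper's proportional-integral protocol, and the undirected ring topology and mixing matrix below are assumptions): each node iterates average consensus on a weighted-numerator and a weight-denominator in parallel, and the ratio converges to the weighted average of the inputs.

```python
import numpy as np

inputs  = np.array([10.0, 12.0, 11.0, 13.0])   # constant sensor inputs
weights = np.array([1.0, 4.0, 2.0, 1.0])       # e.g. measurement accuracies

# Doubly stochastic mixing matrix for a 4-node ring (assumed topology).
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

num = weights * inputs      # each node starts from its weighted input
den = weights.copy()        # ... and its own weight
for _ in range(200):        # local mixing with neighbors only
    num = W @ num
    den = W @ den

estimate = num / den        # every node's local estimate of the weighted average
target = (weights * inputs).sum() / weights.sum()
print(estimate, target)     # all entries converge to target = 11.625
```

Since W is doubly stochastic, both sequences converge to the respective global means, so their ratio converges to the weighted average at every node.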
Global Rotation Estimation Using Weighted Iterative Lie Algebraic Averaging
NASA Astrophysics Data System (ADS)
Reich, M.; Heipke, C.
2015-08-01
In this paper we present an approach for weighted rotation averaging to estimate absolute rotations from the relative rotations between pairs of images in a set of multiple overlapping images. The solution does not depend on initial values for the unknown parameters and is robust against outliers. Our approach is one part of a solution for global image orientation. Because relative rotations are often not free from outliers, we exploit the redundancy in the available pairwise relative rotations and present a novel graph-based algorithm to detect and eliminate inconsistent rotations. The remaining relative rotations are input to a weighted least squares adjustment performed in the Lie algebra of the rotation manifold SO(3) to obtain absolute orientation parameters for each image. Weights are determined using prior information derived from the estimation of the relative rotations. Because we use the Lie algebra of SO(3) for averaging, no subsequent adaptation of the results is required beyond the lossless projection back to the manifold. We evaluate our approach on synthetic and real data. Our approach is often able to detect and eliminate all outliers from the relative rotations, even when very high outlier rates are present. We show that we improve the quality of the estimated absolute rotations by introducing individual weights for the relative rotations based on various indicators. In comparison with the state of the art in recent publications on global image orientation, we achieve the best results on the examined datasets.
Scaling of Average Weighted Receiving Time on Double-Weighted Koch Networks
NASA Astrophysics Data System (ADS)
Dai, Meifeng; Ye, Dandan; Hou, Jie; Li, Xingyi
2015-03-01
In this paper, we introduce a model of double-weighted Koch networks, motivated by actual road networks, that depends on two weight factors w, r ∈ (0, 1]. The double weights represent the capacity-flowing weight and the cost-traveling weight, respectively. Denote by w^F_ij the capacity-flowing weight connecting nodes i and j, and by w^C_ij the cost-traveling weight connecting nodes i and j; w^F_ij is governed by the weight factor w, and w^C_ij by the weight factor r. We assume that the walker, at each step, starting from its current node, moves to any of its neighbors with probability proportional to the capacity-flowing weight of the edge linking them. The weighted time for two adjacent nodes is the cost-traveling weight connecting them. We define the average weighted receiving time (AWRT) on the double-weighted Koch networks. The result shows that, in a large network, the AWRT grows as a power-law function of the network order with exponent θ(w,r) = (1/2) log2(1 + 3wr). We show that the AWRT exhibits a sublinear or linear dependence on network order. Thus, the double-weighted Koch networks are more efficient than classic Koch networks in receiving information.
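The stated exponent can be evaluated directly to see where growth is linear versus sublinear; a minimal sketch:

```python
import math

def awrt_exponent(w, r):
    """Theta(w, r) = (1/2) * log2(1 + 3*w*r), per the stated result."""
    return 0.5 * math.log2(1 + 3 * w * r)

print(awrt_exponent(1.0, 1.0))  # 1.0 -> AWRT grows linearly with network order
print(awrt_exponent(0.5, 0.5))  # ≈ 0.4037 -> sublinear growth
```

At w = r = 1 the exponent is exactly 1 (linear dependence); for any wr < 1 it drops below 1, matching the abstract's sublinear-or-linear claim.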
Weighted Average Consensus-Based Unscented Kalman Filtering.
Li, Wangyan; Wei, Guoliang; Han, Fei; Liu, Yurong
2016-02-01
In this paper, we investigate consensus-based distributed state estimation problems for a class of sensor networks within the unscented Kalman filter (UKF) framework. The communication status among sensors is represented by a connected undirected graph. A weighted average consensus-based UKF algorithm is developed for estimating the true state of interest, and its estimation error is proven to be bounded in mean square. Finally, the effectiveness of the proposed consensus-based UKF algorithm is validated through a simulation example. PMID:26168453
Modified box dimension and average weighted receiving time on the weighted fractal networks
Dai, Meifeng; Sun, Yanqiu; Shao, Shuxiang; Xi, Lifeng; Su, Weiyi
2015-01-01
In this paper a family of weighted fractal networks, in which the edge weights are assigned different values with a certain scale, is studied. For weighted fractal networks the definition of the modified box dimension is introduced, and a rigorous proof of its existence is given. The modified box dimension is then derived as a function of the weight factor and the number of copies. We assume that the walker, at each step, starting from its current node, moves uniformly to any of its nearest neighbors; the weighted time for two adjacent nodes is the weight connecting them. The average weighted receiving time (AWRT) is defined accordingly. The remarkable result obtained is that, in a large network, when the weight factor is larger than the number of copies, the AWRT grows as a power-law function of the network order with exponent equal to the reciprocal of the modified box dimension. This shows that the efficiency of the trapping process depends on the modified box dimension: the larger the modified box dimension, the more efficient the trapping process. PMID:26666355
Average receiving scaling of the weighted polygon Koch networks with the weight-dependent walk
NASA Astrophysics Data System (ADS)
Ye, Dandan; Dai, Meifeng; Sun, Yanqiu; Shao, Shuxiang; Xie, Qi
2016-09-01
Based on the weighted Koch networks and the self-similarity of fractals, we present a family of weighted polygon Koch networks with a weight factor r (0 < r ≤ 1). We study the average receiving time (ART) under a weight-dependent walk (i.e., the walker moves to any of its neighbors with probability proportional to the weight of the edge linking them), whose key step is to calculate the sum of mean first-passage times (MFPTs) for all nodes absorbed at a hub node. We use a recursive division method to divide the weighted polygon Koch networks in order to calculate the ART scaling more conveniently. We show that the ART scaling exhibits a sublinear or linear dependence on network order. Thus, the weighted polygon Koch networks are more efficient than expanded Koch networks in receiving information. Finally, compared with the results of previous studies (i.e., Koch networks, weighted Koch networks), we find that our model is more general.
Attention Disengagement Difficulties among Average Weight Women Who Binge Eat.
Lyu, Zhenyong; Zheng, Panpan; Jackson, Todd
2016-07-01
In this study, we assessed biases in attention disengagement among average-weight women with binge eating (BE; n = 33) and non-eating-disordered controls (n = 31). Participants engaged in a spatial cueing paradigm task wherein they first observed high-calorie food, low-calorie food, or neutral images and then had to quickly locate targets in either the same or a different location. Within both groups, reaction times (RTs) were longer on valid-cued trials (i.e., target appearing in the location of the preceding cue) than on invalid-cued trials (i.e., target appearing in a location different from the cued location), reflecting a general inhibition of return (IOR) effect. However, RT findings also indicated that women with BE had significantly more difficulty disengaging from high-calorie food images than did controls, even though neither group had disengagement problems related to other image types. Selective attention disengagement difficulties related to high-calorie food images suggest that increased reward sensitivity to such cues is related to binge eating risk. Copyright © 2016 John Wiley & Sons, Ltd and Eating Disorders Association. PMID:26856539
47 CFR 65.305 - Calculation of the weighted average cost of capital.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Carriers § 65.305 Calculation of the weighted average cost of capital. (a) The composite weighted average... Commission determines to the contrary in a prescription proceeding, the composite weighted average cost of debt and cost of preferred stock is the composite weight computed in accordance with §...
26 CFR 1.989(b)-1 - Definition of weighted average exchange rate.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 26 Internal Revenue 10 2010-04-01 2010-04-01 false Definition of weighted average exchange rate. 1... average exchange rate. For purposes of section 989(b)(3) and (4), the term “weighted average exchange rate” means the simple average of the daily exchange rates (determined by reference to a qualified source...
Latent-variable approaches to the Jamesian model of importance-weighted averages.
Scalas, L Francesca; Marsh, Herbert W; Nagengast, Benjamin; Morin, Alexandre J S
2013-01-01
The individually importance-weighted average (IIWA) model posits that the contribution of specific areas of self-concept to global self-esteem varies systematically with the individual importance placed on each specific component. Although intuitively appealing, this model has weak empirical support; thus, within the framework of a substantive-methodological synergy, we propose a multiple-item latent approach to the IIWA model as applied to a range of self-concept domains (physical, academic, spiritual self-concepts) and subdomains (appearance, math, verbal self-concepts) in young adolescents from two countries. Tests considering simultaneously the effects of self-concept domains on trait self-esteem did not support the IIWA model. On the contrary, support for a normative group importance model was found, in which importance varied as a function of domains but not individuals. Individuals differentially weight the various components of self-concept; however, the weights are largely determined by normative processes, so that little additional information is gained from individual weightings. PMID:23150198
Cohen's Linearly Weighted Kappa Is a Weighted Average of 2 x 2 Kappas
ERIC Educational Resources Information Center
Warrens, Matthijs J.
2011-01-01
An agreement table with n ≥ 3 ordered categories can be collapsed into n − 1 distinct 2 × 2 tables by combining adjacent categories. Vanbelle and Albert ("Stat. Methodol." 6:157-163, 2009c) showed that the components of Cohen's weighted kappa with linear weights can be obtained from these n − 1…
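For context, a minimal sketch of Cohen's kappa with linear disagreement weights (this is the standard definition, not the Vanbelle-Albert decomposition the truncated abstract references):

```python
import numpy as np

def linear_weighted_kappa(table):
    """Cohen's kappa with linear disagreement weights v_ij = |i - j|:
    kappa_w = 1 - sum(v * p_obs) / sum(v * p_exp)."""
    p = np.asarray(table, dtype=float)
    p = p / p.sum()                        # observed proportions
    r, c = p.sum(axis=1), p.sum(axis=0)    # row / column marginals
    e = np.outer(r, c)                     # expected proportions under independence
    n = p.shape[0]
    v = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    return 1.0 - (v * p).sum() / (v * e).sum()

# Perfect agreement (a purely diagonal table) gives kappa_w = 1.
print(linear_weighted_kappa(np.diag([10, 20, 30])))  # 1.0
```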
76 FR 13580 - Bus Testing; Calculation of Average Passenger Weight and Test Vehicle Weight
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-14
... Federal Register (74 FR 51083) that incorporated brake performance and emissions tests into FTA's bus... Weight Per Person (See, ``Passenger Weight and Inspected Vessel Stability Requirements: Final Rule, 75 FR... Transportation (44 FR 11032). Executive Order 12866 requires agencies to regulate in the ``most...
77 FR 74452 - Bus Testing: Calculation of Average Passenger Weight and Test Vehicle Weight
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-14
... (GVWR) (74 FR 51083, October 5, 2009). The testing procedure simulated a 150 lb. weight for each seated... square feet (76 FR 13850, March 14, 2011). Subsequent to the NPRM, on July 6, 2012, Congress passed the..., Executive Order 13563, the Regulatory Flexibility Act, or the DOT Regulatory Policies and Procedures (44...
Rainfall Estimation Over Tropical Oceans. 1: Area Average Rain Rate
NASA Technical Reports Server (NTRS)
Cuddapah, Prabhakara; Cadeddu, Maria; Meneghini, R.; Short, David A.; Yoo, Jung-Moon; Dalu, G.; Schols, J. L.; Weinman, J. A.
1997-01-01
Multichannel dual-polarization microwave radiometer SSM/I observations over oceans do not contain sufficient information to differentiate quantitatively the rain from other hydrometeors on a scale comparable to the radiometer field of view (approx. 30 km). For this reason we have developed a method to retrieve the average rain rate over a mesoscale grid box of approx. 300 x 300 sq km area in the TOGA COARE region, where simultaneous radiometer and radar observations are available for four months (Nov. 92 to Feb. 93). The rain area in the grid box, inferred from the scattering depression due to hydrometeors in the 85 GHz brightness temperature, constitutes a key parameter in this method. The spectral and polarization information contained in all the channels of the SSM/I is then utilized to deduce a second parameter: the ratio S/E of the scattering index S and the emission index E calculated from the SSM/I data. The rain rate retrieved with this method over the mesoscale area reproduces the radar-observed rain rate with a correlation coefficient of about 0.85. Furthermore, the monthly total rainfall estimated with this method over that area has an average error of about 15%.
SIMPLE AND WEIGHTED AVERAGING APPROACHES TO SCALING: WHEN CAN SPATIAL CONTEXT BE IGNORED?
Technology Transfer Automated Retrieval System (TEKTRAN)
Scaling from plots to landscapes, landscapes to regions, and regions to the globe based on simple or weighted averaging techniques can be accurate when applied to the appropriate problems. Simple averaging approaches work well when conditions are homogeneous spatially and temporally. For example, ...
Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.
2013-01-01
When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives an overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem is caused by using the covariance matrix of measurement errors, C_E, to estimate the negative log-likelihood function common to all the model selection criteria. The problem can be resolved by using the covariance matrix of total errors (including both model errors and measurement errors), C_Ek, to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown C_Ek from the residuals during model calibration. The inferred C_Ek was then used in the evaluation of the model selection criteria and model averaging weights. While the method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using C_Ek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using C_Ek
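The near-100% weight behavior this abstract describes is easy to reproduce with the standard information-criterion weighting formula w_k ∝ exp(−Δ_k/2), where Δ_k is each model's criterion value minus the minimum. A minimal sketch with generic IC values (not the study's):

```python
import math

def ic_model_weights(ic_values):
    """Model-averaging weights from information criteria (e.g. AIC):
    w_k = exp(-delta_k / 2) / sum_j exp(-delta_j / 2)."""
    best = min(ic_values)
    raw = [math.exp(-(ic - best) / 2.0) for ic in ic_values]
    total = sum(raw)
    return [x / total for x in raw]

# A modest IC gap already concentrates nearly all weight on one model.
print(ic_model_weights([100.0, 120.0, 125.0]))  # first weight > 0.999
```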
NASA Technical Reports Server (NTRS)
Scruggs, T.; Moraguez, M.; Patankar, K.; Fitz-Coy, N.; Liou, J.-C.; Sorge, M.; Huynh, T.
2016-01-01
Debris fragments from the hypervelocity impact testing of DebriSat are being collected and characterized for use in updating existing satellite breakup models. One of the key parameters utilized in these models is the ballistic coefficient of the fragment, which is directly related to its area-to-mass ratio. However, since the attitude of fragments varies during their orbital lifetime, it is customary to use the average cross-sectional area in the calculation of the area-to-mass ratio. The average cross-sectional area is defined as the average of the projected surface areas perpendicular to the direction of motion and has been shown to be equal to one-fourth of the total surface area of a convex object. Unfortunately, numerous fragments obtained from the DebriSat experiment show significant concavity (i.e., shadowing), and thus we have explored alternate methods for computing the average cross-sectional area of the fragments. An imaging system based on the volumetric reconstruction of a 3D object from multiple 2D photographs of the object was developed for use in determining the size characteristic (i.e., characteristic length) of the DebriSat fragments. For each fragment, the imaging system generates N images from varied azimuth and elevation angles and processes them using a space-carving algorithm to construct a 3D point cloud of the fragment. This paper describes two approaches for calculating the average cross-sectional area of debris fragments based on the 3D imager. Approach A utilizes the constructed 3D object to generate equally distributed cross-sectional area projections and then averages them to determine the average cross-sectional area. Approach B utilizes a weighted average of the areas of the 2D photographs to directly compute the average cross-sectional area. A comparison of the accuracy and computational needs of each approach is described, as well as preliminary results of an analysis to determine the "optimal" number of images needed for
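The stated convex-body property (average projected area equals one-fourth of the total surface area, Cauchy's formula) can be checked by Monte Carlo for a unit cube, whose projected area along a unit direction u is |ux| + |uy| + |uz|:

```python
import numpy as np

rng = np.random.default_rng(0)

# Directions uniform on the unit sphere (normalized Gaussian samples).
u = rng.normal(size=(200_000, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)

# Projected area of a unit cube along u: each pair of opposite unit faces
# with normal e_i contributes |u_i| to the silhouette area.
mean_projection = np.abs(u).sum(axis=1).mean()

surface_area = 6.0  # total surface area of the unit cube
print(mean_projection, surface_area / 4.0)  # both ≈ 1.5
```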
Code of Federal Regulations, 2010 CFR
2010-04-01
... weighted-average dumping margins disregarded. 351.106 Section 351.106 Customs Duties INTERNATIONAL TRADE... minimis net countervailable subsidies and weighted-average dumping margins disregarded. (a) Introduction... practice of disregarding net countervailable subsidies or weighted-average dumping margins that were...
Code of Federal Regulations, 2011 CFR
2011-04-01
... weighted-average dumping margins disregarded. 351.106 Section 351.106 Customs Duties INTERNATIONAL TRADE... minimis net countervailable subsidies and weighted-average dumping margins disregarded. (a) Introduction... practice of disregarding net countervailable subsidies or weighted-average dumping margins that were...
NASA Astrophysics Data System (ADS)
Lin, Bangjiang; Li, Yiwei; Zhang, Shihao; Tang, Xuan
2015-10-01
Weighted interframe averaging (WIFA)-based channel estimation (CE) is presented for orthogonal frequency division multiplexing passive optical networks (OFDM-PON), in which the CE results of adjacent frames are directly averaged to increase the estimation accuracy. The effectiveness of WIFA combined with conventional least squares, intrasymbol frequency-domain averaging, and minimum mean square error estimation, respectively, is demonstrated through 26.7-km standard single-mode fiber transmission. The experimental results show that the WIFA method, with low complexity, can significantly enhance the transmission performance of OFDM-PON.
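The variance-reduction intuition behind interframe averaging can be sketched with a simple synthetic model (the channel model, noise level, and equal interframe weights below are assumptions for illustration, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed model: the channel varies slowly, so per-frame least-squares
# estimates H_k = H + noise can be averaged across adjacent frames.
n_sub, n_frames = 64, 8
H_true = rng.normal(size=n_sub) + 1j * rng.normal(size=n_sub)
noise = 0.3 * (rng.normal(size=(n_frames, n_sub)) +
               1j * rng.normal(size=(n_frames, n_sub)))
H_ls = H_true + noise                 # per-frame LS channel estimates

w = np.ones(n_frames) / n_frames      # equal interframe weights (assumption)
H_avg = (w[:, None] * H_ls).sum(axis=0)

mse_single = np.mean(np.abs(H_ls[0] - H_true) ** 2)
mse_avg = np.mean(np.abs(H_avg - H_true) ** 2)
print(mse_single, mse_avg)            # averaging lowers the estimation MSE
```

With independent noise across frames, averaging K frames cuts the noise variance by roughly a factor of K, which is the accuracy gain the abstract reports.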
47 CFR 65.305 - Calculation of the weighted average cost of capital.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 3 2011-10-01 2011-10-01 false Calculation of the weighted average cost of capital. 65.305 Section 65.305 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) INTERSTATE RATE OF RETURN PRESCRIPTION PROCEDURES AND METHODOLOGIES...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-28
... Dumping Margin During an Antidumping Investigation; Final Modification, 71 FR 77,722 (December 27, 2006... Measures Concerning Certain Softwood Lumber Products From Canada, 70 FR 22,636 (May 2, 2005). The above... Weighted- Average Dumping Margin During an Antidumping Investigation; Final Modification, 71 FR...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-01
... certain antidumping duty proceedings (75 FR 81533). That proposed rule and proposed modification indicated... International Trade Administration 19 CFR Part 351 RIN 0625-AA87 Antidumping Proceedings: Calculation of the... regarding the calculation of the weighted average dumping margin and antidumping duty assessment rate...
12 CFR 702.105 - Weighted-average life of investments.
Code of Federal Regulations, 2014 CFR
2014-01-01
..., as defined in 17 CFR 270.2a-7, and collective investment funds operated in accordance with short-term investment fund rules set forth in 12 CFR 9.18(b)(4)(ii)(B)(1)-(3), the weighted-average life is defined as... and nonperpetual capital in corporate credit unions, as defined in 12 CFR 704.2, the...
12 CFR 702.105 - Weighted-average life of investments.
Code of Federal Regulations, 2012 CFR
2012-01-01
..., as defined in 17 CFR 270.2a-7, and collective investment funds operated in accordance with short-term investment fund rules set forth in 12 CFR 9.18(b)(4)(ii)(B)(1)-(3), the weighted-average life is defined as... and nonperpetual capital in corporate credit unions, as defined in 12 CFR 704.2, the...
12 CFR 702.105 - Weighted-average life of investments.
Code of Federal Regulations, 2013 CFR
2013-01-01
..., as defined in 17 CFR 270.2a-7, and collective investment funds operated in accordance with short-term investment fund rules set forth in 12 CFR 9.18(b)(4)(ii)(B)(1)-(3), the weighted-average life is defined as... and nonperpetual capital in corporate credit unions, as defined in 12 CFR 704.2, the...
Binary weighted averaging of an ensemble of coherently collected image frames.
MacDonald, Adam; Cain, Stephen; Oxley, Mark
2007-04-01
Recent interest in the collection of remote laser radar imagery has motivated novel systems that process temporally contiguous frames of collected imagery to produce an average image that reduces laser speckle, increases image SNR, decreases the deleterious effects of atmospheric distortion, and enhances image detail. This research seeks an algorithm based on Bayesian estimation theory to select those frames from an ensemble that increase spatial resolution compared to simple unweighted averaging of all frames. The resulting binary weighted motion-compensated frame average is compared to the unweighted average using simulated and experimental data collected from a fielded laser vision system. Image resolution is significantly enhanced, as quantified by estimation of the atmospheric seeing parameter through which the average image was formed. PMID:17405439
Modeling daily average stream temperature from air temperature and watershed area
NASA Astrophysics Data System (ADS)
Butler, N. L.; Hunt, J. R.
2012-12-01
Habitat restoration efforts within watersheds require spatial and temporal estimates of water temperature for aquatic species, especially for species that migrate within watersheds at different life stages. Monitoring programs are not able to fully sample all aquatic environments within watersheds under the extreme conditions that determine long-term habitat viability. Under these circumstances, a combination of selective monitoring and modeling is required for predicting future geospatial and temporal conditions. This study describes a model that is broadly applicable to different watersheds while using readily available regional air temperature data. Daily water temperature data from thirty-eight gauges with drainage areas from 2 km2 to 2000 km2 in the Sonoma Valley, Napa Valley, and Russian River Valley in California were used to develop, calibrate, and test a stream temperature model. Air temperature data from seven NOAA gauges provided the daily maximum and minimum air temperatures. The model was developed and calibrated using five years of data from the Sonoma Valley at ten water temperature gauges and a NOAA air temperature gauge. The daily average stream temperatures within this watershed were bounded by the preceding maximum and minimum air temperatures, with smaller upstream watersheds being more dependent on the minimum air temperature than the maximum air temperature. The model assumed a linear dependence on maximum and minimum air temperature, with a weighting factor dependent on upstream area determined by error minimization against observed data. Fitted minimum air temperature weighting factors were consistent over all five years of data for each gauge, and they ranged from 0.75 for upstream drainage areas less than 2 km2 to 0.45 for upstream drainage areas greater than 100 km2. For the calibration data sets within the Sonoma Valley, the average error between the model-estimated daily water temperature and the observed water temperature data ranged from 0.7
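The abstract's weighting model can be sketched as follows. Only the end-point weights are reported (0.75 below 2 km2, 0.45 above 100 km2), so the log-linear interpolation between them is an assumption, as is the exact functional form:

```python
import math

def min_temp_weight(area_km2):
    """Weight on the preceding minimum air temperature as a function of
    upstream drainage area. End points per the abstract; the log-linear
    interpolation in between is an assumption."""
    if area_km2 <= 2.0:
        return 0.75
    if area_km2 >= 100.0:
        return 0.45
    frac = (math.log(area_km2) - math.log(2.0)) / (math.log(100.0) - math.log(2.0))
    return 0.75 + frac * (0.45 - 0.75)

def daily_avg_stream_temp(t_min_air, t_max_air, area_km2):
    """Linear blend of preceding min/max air temperature (deg C)."""
    w = min_temp_weight(area_km2)
    return w * t_min_air + (1.0 - w) * t_max_air

print(daily_avg_stream_temp(10.0, 30.0, 1.0))    # small basin: 0.75*10 + 0.25*30 = 15.0
print(daily_avg_stream_temp(10.0, 30.0, 500.0))  # large basin: 0.45*10 + 0.55*30 = 21.0
```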
Nesterov, V V; Kurenbin, O I; Krasikov, V D; Belenkii, B G
1987-01-01
The problem of preparation of a block copolymer of precise molecular-weight distribution (MWD) and with heterogeneous composition on the basis of gel-permeation chromatography (GPC) data has been investigated. It has been shown that in MWD calculations the distribution f(p) of the composition p in individual GPC fractions should be taken into account. The type of the f(p) functions can be simultaneously established by an independent method, such as use of adsorption-column or thin-layer chromatography sensitive to the composition of the copolymer. It has also been shown that the actual f(p) may be replaced by a corresponding piecewise distribution, of simple form, without decrease in the precision of calculation of the MWD and average molecular weights of most known block copolymers. PMID:18964273
Model Averaging Methods for Weight Trimming in Generalized Linear Regression Models
Elliott, Michael R.
2012-01-01
In sample surveys where units have unequal probabilities of inclusion, associations between the inclusion probability and the statistic of interest can induce bias in unweighted estimates. This is true even in regression models, where the estimates of the population slope may be biased if the underlying mean model is misspecified or the sampling is nonignorable. Weights equal to the inverse of the probability of inclusion are often used to counteract this bias. Highly disproportional sample designs have highly variable weights; weight trimming reduces large weights to a maximum value, reducing variability but introducing bias. Most standard approaches are ad hoc in that they do not use the data to optimize bias-variance trade-offs. This article uses Bayesian model averaging to create “data driven” weight trimming estimators. We extend previous results for linear regression models (Elliott 2008) to generalized linear regression models, developing robust models that approximate fully-weighted estimators when bias correction is of greatest importance, and approximate unweighted estimators when variance reduction is critical. PMID:23275683
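A minimal sketch of the weight-trimming operation this abstract describes (the cap value and data below are illustrative; the article's Bayesian model-averaging estimators are not reproduced here):

```python
import numpy as np

def trim_weights(weights, cap):
    """Reduce survey weights above `cap` to the cap (simple weight trimming):
    lowers variance from extreme weights at the cost of some bias."""
    return np.minimum(np.asarray(weights, dtype=float), cap)

y = np.array([2.0, 3.0, 10.0, 4.0])
w = np.array([1.0, 1.0, 50.0, 1.0])   # one unit with an extreme weight

full = np.average(y, weights=w)                    # dominated by one unit
trimmed = np.average(y, weights=trim_weights(w, 5.0))
print(full, trimmed)                               # ≈ 9.604 vs 7.375
```

The Bayesian model-averaging approach in the article can be read as a data-driven compromise between these two extremes, rather than a fixed ad hoc cap.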
WUATSA: Weighted usable area time series analysis
Franc, G.M.
1995-12-31
As stated in my paper entitled "FISHN-Minimum Flow Selection Made Easy," there continues to exist differences of opinion between environmental resource agencies (Agencies) and power producers in the interpretation of Weighted Usable Area (WUA) versus flow data, as a tool for making minimum flow recommendations. WUA-flow curves are developed from Instream Flow Incremental Methodology (IFIM) studies. Each point on a WUA-flow curve defines the usable habitat area created within a bypassed reach, for a specific species and life stage, due to a specified minimum flow being constantly maintained within that reach. In the FISHN paper I discussed the Federal Energy Regulatory Commission's (FERC's) effort to standardize the use of WUA-flow data to assist in minimum flow selection, as proposed in their article entitled "Evaluating Relicense Proposals at the Federal Energy Regulatory Commission". This FERC paper advanced a technique which has subsequently become known as the FARGO method (named after the primary author). The FISHN paper initially critiqued FARGO and then focused discussion on an alternative approach (FISHN) which is an extension to the IFIM methodology.
12 CFR 702.105 - Weighted-average life of investments.
Code of Federal Regulations, 2011 CFR
2011-01-01
..., as defined in 17 CFR 270.2a-7, and collective investment funds operated in accordance with short-term investment fund rules set forth in 12 CFR 9.18(b)(4)(ii)(B)(1)-(3), the weighted-average life is defined as...-in capital and membership capital in corporate credit unions, as defined in 12 CFR 704.2,...
12 CFR 702.105 - Weighted-average life of investments.
Code of Federal Regulations, 2010 CFR
2010-01-01
..., as defined in 17 CFR 270.2a-7, and collective investment funds operated in accordance with short-term investment fund rules set forth in 12 CFR 9.18(b)(4)(ii)(B)(1)-(3), the weighted-average life is defined as...-in capital and membership capital in corporate credit unions, as defined in 12 CFR 704.2,...
Girshick, Ahna R.; Banks, Martin S.
2010-01-01
Depth perception involves combining multiple, possibly conflicting, sensory measurements to estimate the 3D structure of the viewed scene. Previous work has shown that the perceptual system combines measurements using a statistically optimal weighted average. However, the system should only combine measurements when they come from the same source. We asked whether the brain avoids combining measurements when they differ from one another: that is, whether the system is robust to outliers. To do this, we investigated how two slant cues—binocular disparity and texture gradients—influence perceived slant as a function of the size of the conflict between the cues. When the conflict was small, we observed weighted averaging. When the conflict was large, we observed robust behavior: perceived slant was dictated solely by one cue, the other being rejected. Interestingly, the rejected cue was either disparity or texture, and was not necessarily the more variable cue. We modeled the data in a probabilistic framework, and showed that weighted averaging and robustness are predicted if the underlying likelihoods have heavier tails than Gaussians. We also asked whether observers had conscious access to the single-cue estimates when they exhibited robustness and found they did not, i.e. they completely fused despite the robust percepts. PMID:19761341
Weighted Averaging for Calculating Azimuthal Angles and Filtering Love Waves Using S-transforms
NASA Astrophysics Data System (ADS)
Napoli, V.; Russell, D. R.
2015-12-01
The S-transform methodology is based on the Stockwell transform, a form of short-time Fourier transform whose time-domain window is a Gaussian function. The Gaussian window has a standard deviation inversely proportional to the frequency of interest. Applying the transform at multiple frequencies of interest yields a time/frequency spectrogram, which has the advantage of being simply invertible back to the time domain. This allows the calculation of instantaneous frequency/time phase and amplitude measurements, which makes 2D signal filtration of surface waves possible. By solving for the transverse angle of propagation of narrow-band-filtered Love waves over a range of periods (8-25 s), we calculate a vector of possible azimuths, one at each period. We then average over all the bands of interest to determine the mean angle of propagation. To avoid using unreliable low signal-to-noise ratio (SNR) azimuth estimates, we use an SNR-weighted average to more accurately reflect the overall signal propagation azimuth. We then use the mean signal azimuth to design a 2D Love wave rejection filter that rejects off-azimuth noise, and invert this to the time domain for an improved signal on the propagation azimuth. We apply this method to the 2009 Democratic People's Republic of Korea nuclear test. After applying the weighted averaging approach, the SNR increases by a factor of 2 overall, and a signal on the transverse component is identified as a Rayleigh wave that "leaked" into the transverse component. Without this method, the Love wave signal for the event could have been misidentified. Using this SNR-weighted averaging technique to calculate the propagation angle indicates that S-transform filters can lower the noise level by a factor of 2 or more, helping with low-SNR events, and can remove Rayleigh "leakage" from the transverse channel.
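The per-period azimuth averaging described above can be sketched as an SNR-weighted circular mean, i.e. averaging unit vectors rather than raw angles so that wrap-around at 0°/360° is handled correctly (the function name and data are illustrative; the abstract does not give the authors' exact weighting formula):

```python
import math

def snr_weighted_azimuth(azimuths_deg, snrs):
    """SNR-weighted circular mean of per-period azimuth estimates, in degrees."""
    # Sum SNR-weighted unit vectors, then take the angle of the resultant
    x = sum(s * math.cos(math.radians(a)) for a, s in zip(azimuths_deg, snrs))
    y = sum(s * math.sin(math.radians(a)) for a, s in zip(azimuths_deg, snrs))
    return math.degrees(math.atan2(y, x)) % 360.0

# One azimuth estimate per narrow-band period; the low-SNR outlier barely matters
print(snr_weighted_azimuth([40.0, 44.0, 42.0, 120.0], [10.0, 8.0, 9.0, 0.5]))
```

The result stays close to 42° because the 120° estimate carries a tiny weight, which is the point of down-weighting low-SNR bands.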
A new state reconstructor for digital controls systems using weighted-average measurements
NASA Technical Reports Server (NTRS)
Polites, Michael E.
1989-01-01
A state reconstructor is presented for a linear continuous-time plant driven by a zero-order-hold. It takes a continuous-time output vector from the plant and convolutes it with a weighting-function matrix whose elements are time dependent. This result is integrated over T second intervals to generate weighted-averaged measurements, every T seconds, that are used in the state reconstruction process. If the plant is noise-free and can be modeled precisely, the output of this state reconstructor exactly equals the true state of the plant and accomplishes this without knowledge of the plant's initial state. If noise or modeling errors are a problem, it can be catenated with a state observer or a Kalman filter for a synergistic effect.
A new lot inspection procedure based on exponentially weighted moving average
NASA Astrophysics Data System (ADS)
Aslam, Muhammad; Azam, Muhammad; Jun, Chi-Hyuck
2015-06-01
In this manuscript, a new variable sampling plan based on the exponentially weighted moving average (EWMA) statistic is proposed, assuming that the quality characteristic follows a normal distribution. Plans are proposed for the cases where the standard deviation of the normal distribution is known and where it is unknown. The plan parameters for both cases are determined such that the given producer's risk and consumer's risk are satisfied. The proposed plan includes the ordinary variable single sampling plan as a special case, and its advantage over the single sampling plan is discussed in terms of the sample size. Extensive tables are provided for industrial use.
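The EWMA statistic that drives such plans is the standard recursion z_t = λ·x_t + (1 − λ)·z_{t−1}; a minimal sketch (λ, starting value, and data are illustrative):

```python
def ewma(values, lam, z0):
    """Exponentially weighted moving average: z_t = lam*x_t + (1-lam)*z_{t-1}."""
    z = z0
    out = []
    for x in values:
        z = lam * x + (1 - lam) * z  # recent observations get weight lam
        out.append(z)
    return out

print(ewma([10.0, 12.0, 11.0, 13.0], lam=0.2, z0=10.0))
```

With λ = 0.2 each new observation moves the statistic only a fifth of the way toward itself, which is what smooths sampling variation while still tracking a drift in the mean.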
Calculation of weighted averages approach for the estimation of Ping tolerance values
Silalom, S.; Carter, J.L.; Chantaramongkol, P.
2010-01-01
A biotic index was created and proposed as a tool to assess water quality in the Upper Mae Ping sub-watersheds. The Ping biotic index was calculated by utilizing Ping tolerance values. This paper presents the calculation of Ping tolerance values of the collected macroinvertebrates. Ping tolerance values were estimated by a weighted averages approach based on the abundance of macroinvertebrates and six chemical constituents that include conductivity, dissolved oxygen, biochemical oxygen demand, ammonia nitrogen, nitrate nitrogen and orthophosphate. Ping tolerance values range from 0 to 10. Macroinvertebrates assigned a 0 are very sensitive to organic pollution while macroinvertebrates assigned 10 are highly tolerant to pollution.
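The weighted-averages approach can be sketched as two steps: each taxon's optimum is the abundance-weighted mean of a chemical gradient across sites, and the optima are then rescaled onto the 0-10 tolerance scale (a sketch under assumed data; the published method combines six chemical constituents and its own scaling):

```python
def wa_optimum(abundances, gradient):
    """Abundance-weighted mean of an environmental gradient for one taxon."""
    total = sum(abundances)
    return sum(a * g for a, g in zip(abundances, gradient)) / total

def rescale_0_10(optima):
    """Linearly rescale taxon optima onto the 0-10 tolerance-value scale."""
    lo, hi = min(optima), max(optima)
    return [10.0 * (o - lo) / (hi - lo) for o in optima]

# Hypothetical BOD (mg/L) at 4 sites, and two taxa's abundances at those sites
bod = [1.0, 2.0, 5.0, 9.0]
sensitive = wa_optimum([10, 5, 1, 0], bod)  # abundant at clean sites
tolerant = wa_optimum([0, 1, 5, 10], bod)   # abundant at polluted sites
print(rescale_0_10([sensitive, tolerant]))  # → [0.0, 10.0]
```

The pollution-sensitive taxon lands at 0 and the tolerant one at 10, matching the scale convention stated in the abstract.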
Correlation between weighted spectral distribution and average path length in evolving networks.
Jiao, Bo; Shi, Jianmai; Wu, Xiaoqun; Nie, Yuanping; Huang, Chengdong; Du, Jing; Zhou, Ying; Guo, Ronghua; Tao, Yerong
2016-02-01
The weighted spectral distribution (WSD) is a metric defined on the normalized Laplacian spectrum. In this study, synchronic random graphs are first used to rigorously analyze the metric's scaling feature, which indicates that the metric grows sublinearly as the network size increases, and the metric's scaling feature is demonstrated to be common in networks with Gaussian, exponential, and power-law degree distributions. Furthermore, a deterministic model of diachronic graphs is developed to illustrate the correlation between the slope coefficient of the metric's asymptotic line and the average path length, and the similarities and differences between synchronic and diachronic random graphs are investigated to better understand the correlation. Finally, numerical analysis is presented based on simulated and real-world data of evolving networks, which shows that the ratio of the WSD to the network size is a good indicator of the average path length. PMID:26931591
Correlation between weighted spectral distribution and average path length in evolving networks
NASA Astrophysics Data System (ADS)
Jiao, Bo; Shi, Jianmai; Wu, Xiaoqun; Nie, Yuanping; Huang, Chengdong; Du, Jing; Zhou, Ying; Guo, Ronghua; Tao, Yerong
2016-02-01
The weighted spectral distribution (WSD) is a metric defined on the normalized Laplacian spectrum. In this study, synchronic random graphs are first used to rigorously analyze the metric's scaling feature, which indicates that the metric grows sublinearly as the network size increases, and the metric's scaling feature is demonstrated to be common in networks with Gaussian, exponential, and power-law degree distributions. Furthermore, a deterministic model of diachronic graphs is developed to illustrate the correlation between the slope coefficient of the metric's asymptotic line and the average path length, and the similarities and differences between synchronic and diachronic random graphs are investigated to better understand the correlation. Finally, numerical analysis is presented based on simulated and real-world data of evolving networks, which shows that the ratio of the WSD to the network size is a good indicator of the average path length.
Fuzzy weighted average based on left and right scores in Malaysia tourism industry
NASA Astrophysics Data System (ADS)
Kamis, Nor Hanimah; Abdullah, Kamilah; Zulkifli, Muhammad Hazim; Sahlan, Shahrazali; Mohd Yunus, Syaizzal
2013-04-01
Tourism is known as an important sector to the Malaysian economy including economic generator, creating business and job offers. It is reported to bring in almost RM30 billion of the national income, thanks to intense worldwide promotion by Tourism Malaysia. One of the well-known attractions in Malaysia is our beautiful islands. The islands continue to be developed into tourist spots and attracting a continuous number of tourists. Chalets, luxury bungalows and resorts quickly develop along the coastlines of popular islands like Tioman, Redang, Pangkor, Perhentian, Sibu and so many others. In this study, we applied Fuzzy Weighted Average (FWA) method based on left and right scores in order to determine the criteria weights and to select the best island in Malaysia. Cost, safety, attractive activities, accommodation and scenery are five main criteria to be considered and five selected islands in Malaysia are taken into accounts as alternatives. The most important criteria that have been considered by the tourist are defined based on criteria weights ranking order and the best island in Malaysia is then determined in terms of FWA values. This pilot study can be used as a reference to evaluate performances or solving any selection problems, where more criteria, alternatives and decision makers will be considered in the future.
Equating of Subscores and Weighted Averages under the NEAT Design. Research Report. ETS RR-11-01
ERIC Educational Resources Information Center
Sinharay, Sandip; Haberman, Shelby
2011-01-01
Recently, the literature has seen increasing interest in subscores for their potential diagnostic values; for example, one study suggested the report of weighted averages of a subscore and the total score, whereas others showed, for various operational and simulated data sets, that weighted averages, as compared to subscores, lead to more accurate…
Merigó, José M.
2014-01-01
Linguistic variables are very useful to evaluate alternatives in decision making problems because they provide a vocabulary in natural language rather than numbers. Some aggregation operators for linguistic variables force the use of a symmetric and uniformly distributed set of terms. The need to relax these conditions has recently been posited. This paper presents the induced unbalanced linguistic ordered weighted average (IULOWA) operator. This operator can deal with a set of unbalanced linguistic terms that are represented using fuzzy sets. We propose a new order-inducing criterion based on the specificity and fuzziness of the linguistic terms. Different relevancies are given to the fuzzy values according to their uncertainty degree. To illustrate the behaviour of the precision-based IULOWA operator, we present an environmental assessment case study in which a multiperson multicriteria decision making model is applied. PMID:25136677
Weighted averages of magnetization from magnetic field measurements: A fast interpretation tool
NASA Astrophysics Data System (ADS)
Fedi, Maurizio
2003-08-01
Magnetic anomalies may be interpreted in terms of weighted averages of magnetization (WAM) by a simple transformation. The WAM transformation consists of dividing, at each measurement point, the experimental magnetic field by a normalizing field computed from a source volume with a homogeneous unit magnetization. The transformation yields a straightforward link between source and field position vectors. A main WAM outcome is that sources at different depths appear well discriminated. Due to the symmetry of the problem, the higher the considered field altitude, the deeper the sources outlined by the transformation. This is shown for single- and multi-source synthetic cases as well as for real data. We analyze the real case of Mt. Vulture volcano (Southern Italy), where the related anomaly strongly interferes with that from deep intrusive sources. The volcanic edifice is well identified. The deep source is estimated at about 9 km depth, in agreement with other results.
Detecting the start of an influenza outbreak using exponentially weighted moving average charts
2010-01-01
Background Influenza viruses cause seasonal outbreaks in temperate climates, usually during winter and early spring, and are endemic in tropical climates. The severity and length of influenza outbreaks vary from year to year. Quick and reliable detection of the start of an outbreak is needed to promote public health measures. Methods We propose the use of an exponentially weighted moving average (EWMA) control chart of laboratory confirmed influenza counts to detect the start and end of influenza outbreaks. Results The chart is shown to provide timely signals in an example application with seven years of data from Victoria, Australia. Conclusions The EWMA control chart could be applied in other applications to quickly detect influenza outbreaks. PMID:20587013
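An EWMA control chart of this kind smooths the weekly counts and signals when the smoothed value crosses an upper control limit; a sketch using the standard asymptotic limit μ₀ + L·σ₀·√(λ/(2 − λ)) (the baseline, λ, L, and counts below are illustrative, not the Victorian data):

```python
import math

def ewma_signal(counts, mu0, sigma0, lam=0.3, L=3.0):
    """Index of the first EWMA point above the upper control limit, else None."""
    # Asymptotic upper control limit of the EWMA statistic
    ucl = mu0 + L * sigma0 * math.sqrt(lam / (2 - lam))
    z = mu0
    for i, x in enumerate(counts):
        z = lam * x + (1 - lam) * z
        if z > ucl:
            return i  # candidate start of the outbreak
    return None

# Baseline of ~5 confirmed cases/week, then a sustained rise
print(ewma_signal([4, 5, 6, 5, 9, 14, 20], mu0=5.0, sigma0=2.0))  # → 5
```

Because the EWMA accumulates evidence across weeks, it flags a sustained rise earlier than waiting for a single week to exceed a fixed threshold.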
A Weight-Averaged Interpolation Method for Coupling Time-Accurate Rarefied and Continuum Flows
NASA Astrophysics Data System (ADS)
Diaz, Steven William
A novel approach to coupling rarefied and continuum flow regimes as a single, hybrid model is introduced. The method borrows from techniques used in the simulation of spray flows to interpolate Lagrangian point-particles onto an Eulerian grid in a weight-averaged sense. A brief overview of traditional methods for modeling both rarefied and continuum domains is given, and a review of the literature regarding rarefied/continuum flow coupling is presented. Details of the theoretical development of the method of weighted interpolation are then described. The method evaluates macroscopic properties at the nodes of a CFD grid via the weighted interpolation of all simulated molecules in a set surrounding the node. The weight factor applied to each simulated molecule is the inverse of the linear distance between it and the given node. During development, the method was applied to several preliminary cases, including supersonic flow over an airfoil, subsonic flow over tandem airfoils, and supersonic flow over a backward facing step; all at low Knudsen numbers. The main thrust of the research centered on the time-accurate expansion of a rocket plume into a near-vacuum. The method proves flexible enough to be used with various flow solvers, demonstrated by the use of Fluent as the continuum solver for the preliminary cases and a NASA-developed Large Eddy Simulation research code, WRLES, for the full lunar model. The method is applicable to a wide range of Mach numbers and is completely grid independent, allowing the rarefied and continuum solvers to be optimized for their respective domains without consideration of the other. The work presented demonstrates the validity, and flexibility of the method of weighted interpolation as a novel concept in the field of hybrid flow coupling. The method marks a significant divergence from current practices in the coupling of rarefied and continuum flow domains and offers a kernel on which to base an ongoing field of research. It has the
Conductivity image enhancement in MREIT using adaptively weighted spatial averaging filter
2014-01-01
Background In magnetic resonance electrical impedance tomography (MREIT), we reconstruct conductivity images using magnetic flux density data induced by externally injected currents. Since we extract magnetic flux density data from acquired MR phase images, the amount of measurement noise increases in regions of weak MR signals. Especially for local regions of MR signal void, there may occur excessive amounts of noise to deteriorate the quality of reconstructed conductivity images. In this paper, we propose a new conductivity image enhancement method as a postprocessing technique to improve the image quality. Methods Within a magnetic flux density image, the amount of noise varies depending on the position-dependent MR signal intensity. Using the MR magnitude image which is always available in MREIT, we estimate noise levels of measured magnetic flux density data in local regions. Based on the noise estimates, we adjust the window size and weights of a spatial averaging filter, which is applied to reconstructed conductivity images. Without relying on a partial differential equation, the new method is fast and can be easily implemented. Results Applying the novel conductivity image enhancement method to experimental data, we could improve the image quality to better distinguish local regions with different conductivity contrasts. From phantom experiments, the estimated conductivity values had 80% less variations inside regions of homogeneous objects. Reconstructed conductivity images from upper and lower abdominal regions of animals showed much less artifacts in local regions of weak MR signals. Conclusion We developed the fast and simple method to enhance the conductivity image quality by adaptively adjusting the weights and window size of the spatial averaging filter using MR magnitude images. Since the new method is implemented as a postprocessing step, we suggest adopting it without or with other preprocessing methods for application studies where conductivity
NASA Astrophysics Data System (ADS)
Nadi, S.; Delavar, M. R.
2011-06-01
This paper presents a generic model for using different decision strategies in multi-criteria, personalized route planning. Some researchers have considered user preferences in navigation systems. However, these prior studies typically employed a high tradeoff decision strategy, which used a weighted linear aggregation rule, and neglected other decision strategies. The proposed model integrates a pairwise comparison method and quantifier-guided ordered weighted averaging (OWA) aggregation operators to form a personalized route planning method that incorporates different decision strategies. The model can be used to calculate the impedance of each link regarding user preferences in terms of the route criteria, criteria importance and the selected decision strategy. Depending on the decision strategy, the calculated impedance lies between aggregations that use a logical "and" (which requires all the criteria to be satisfied) and a logical "or" (which requires at least one criterion to be satisfied); it also includes taking the average of the criteria scores. The model results in multiple alternative routes, which apply different decision strategies and provide users with the flexibility to select one of them en-route based on the real world situation. The model also defines the robust personalized route under different decision strategies. The influence of different decision strategies on the results is investigated in an illustrative example. The model is implemented in a web-based geographical information system (GIS) for Isfahan in Iran and verified in a tourist routing scenario. The results demonstrated the validity of the model's route planning in real-world situations.
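Quantifier-guided OWA derives its weights from a fuzzy quantifier Q via w_i = Q(i/n) − Q((i−1)/n) and applies them to the sorted criteria scores, which is how the operator slides between "and" (min-like) and "or" (max-like) behavior. A sketch with the common power quantifier Q(r) = r^q (the quantifier choice and scores are illustrative):

```python
def owa(scores, q_exponent):
    """Quantifier-guided OWA with Q(r) = r**q_exponent.

    Large exponents push toward min ("and"-like aggregation), small exponents
    toward max ("or"-like); an exponent of 1 recovers the plain average.
    """
    n = len(scores)
    ordered = sorted(scores, reverse=True)  # OWA weights attach to ranked positions
    weights = [(i / n) ** q_exponent - ((i - 1) / n) ** q_exponent
               for i in range(1, n + 1)]
    return sum(w * s for w, s in zip(weights, ordered))

s = [0.9, 0.5, 0.1]   # criteria scores for one route link
print(owa(s, 1.0))    # plain average
print(owa(s, 10.0))   # close to min(s): strict "and"
print(owa(s, 0.1))    # close to max(s): lenient "or"
```

The same scores thus yield different link impedances under different decision strategies, which is the mechanism behind the model's alternative routes.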
Fuzzy Petri nets Using Intuitionistic Fuzzy Sets and Ordered Weighted Averaging Operators.
Liu, Hu-Chen; You, Jian-Xin; You, Xiao-Yue; Su, Qiang
2016-08-01
Fuzzy Petri nets (FPNs) are an important modeling tool for knowledge representation and reasoning, and have been extensively used in many fields. However, the conventional FPN models have been criticized in the literature for many shortcomings. Many different models have been suggested to enhance the performance of FPNs, but deficiencies still exist. First, the various types of uncertain knowledge provided by domain experts are very hard to model with the existing FPN models. Second, traditional FPNs determine the results of knowledge reasoning using the min, max, and product operators, which may not work well in many practical applications. In this paper, we propose a new type of FPN model based on intuitionistic fuzzy sets and ordered weighted averaging operators to address these problems and improve the effectiveness of the conventional FPNs. Moreover, a max-algebra-based reasoning algorithm is developed to implement the intuitionistic fuzzy reasoning formally and automatically. Finally, a case study concerning fault diagnosis of an aircraft generator is presented to demonstrate the proposed intuitionistic FPN model. Numerical experiments show that the new FPN model is feasible and quite effective for knowledge representation and reasoning in intuitionistic fuzzy expert systems. PMID:26259253
Robust HLLC Riemann solver with weighted average flux scheme for strong shock
NASA Astrophysics Data System (ADS)
Kim, Sung Don; Lee, Bok Jik; Lee, Hyoung Jin; Jeung, In-Seuck
2009-11-01
Many researchers have reported failures of the approximate Riemann solvers in the presence of strong shock. This is believed to be due to perturbation transfer in the transverse direction of shock waves. We propose a simple and clear method to prevent such problems for the Harten-Lax-van Leer contact (HLLC) scheme. By defining a sensing function in the transverse direction of strong shock, the HLLC flux is switched to the Harten-Lax-van Leer (HLL) flux in that direction locally, and the magnitude of the additional dissipation is automatically determined using the HLL scheme. We combine the HLLC and HLL schemes in a single framework using a switching function. High-order accuracy is achieved using a weighted average flux (WAF) scheme, and a method for v-shear treatment is presented. The modified HLLC scheme is named HLLC-HLL. It is tested against a steady normal shock instability problem and Quirk's test problems, and spurious solutions in the strong shock regions are successfully controlled.
Time-weighted average SPME analysis for in planta determination of cVOCs.
Sheehan, Emily M; Limmer, Matt A; Mayer, Philipp; Karlson, Ulrich Gosewinkel; Burken, Joel G
2012-03-20
The potential of phytoscreening for plume delineation at contaminated sites has promoted interest in innovative, sensitive contaminant sampling techniques. Solid-phase microextraction (SPME) methods have been developed, offering quick, undemanding, noninvasive sampling without the use of solvents. In this study, time-weighted average SPME (TWA-SPME) sampling was evaluated for in planta quantification of chlorinated solvents. TWA-SPME was found to have increased sensitivity over headspace and equilibrium SPME sampling. Using a variety of chlorinated solvents and a polydimethylsiloxane/carboxen (PDMS/CAR) SPME fiber, most compounds exhibited near linear or linear uptake over the sampling period. Smaller, less hydrophobic compounds exhibited more nonlinearity than larger, more hydrophobic molecules. Using a specifically designed in planta sampler, field sampling was conducted at a site contaminated with chlorinated solvents. Sampling with TWA-SPME produced instrument responses ranging from 5 to over 200 times higher than headspace tree core sampling. This work demonstrates that TWA-SPME can be used for in planta detection of a broad range of chlorinated solvents and methods can likely be applied to other volatile and semivolatile organic compounds. PMID:22332592
Iterative weighted average diffusion as a novel external force in the active contour model
NASA Astrophysics Data System (ADS)
Mirov, Ilya S.; Nakhmani, Arie
2016-03-01
The active contour model has good performance in boundary extraction for medical images; in particular, the Gradient Vector Flow (GVF) active contour model converges well into concavities and is insensitive to initialization, yet it is susceptible to edge leaking, struggles with deep and narrow concavities, and has some issues handling noisy images. This paper proposes a novel external force, called Iterative Weighted Average Diffusion (IWAD), which, used in tandem with parametric active contours, provides superior performance in images with highly concave boundaries. The image gradient is first turned into an edge image, smoothed, and modified with enhanced corner detection; the IWAD algorithm then diffuses the force at a given pixel based on its 3x3 pixel neighborhood. A forgetting factor, φ, is employed to ensure that forces spread away from the boundary of the image will attenuate. The experimental results show better behavior in high-curvature regions, faster convergence, and less edge leaking than GVF when both are compared to expert manual segmentation of the images.
Wingard, G.L.; Hudley, J.W.
2012-01-01
A molluscan analogue dataset is presented in conjunction with a weighted-averaging technique as a tool for estimating past salinity patterns in south Florida’s estuaries and developing targets for restoration based on these reconstructions. The method, here referred to as cumulative weighted percent (CWP), was tested using modern surficial samples collected in Florida Bay from sites located near fixed water monitoring stations that record salinity. The results were calibrated using species weighting factors derived from examining species occurrence patterns. A comparison of the resulting calibrated species-weighted CWP (SW-CWP) to the observed salinity at the water monitoring stations averaged over a 3-year time period indicates, on average, the SW-CWP comes within less than two salinity units of estimating the observed salinity. The SW-CWP reconstructions were conducted on a core from near the mouth of Taylor Slough to illustrate the application of the method.
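At its core, the cumulative weighted percent idea estimates salinity by abundance-weighting each taxon's known salinity preference; a sketch with hypothetical taxa and preference values (the published SW-CWP method additionally applies species weighting factors and calibration against monitoring stations):

```python
def cwp_salinity(abundance_pct, salinity_optimum):
    """Estimate salinity as the abundance-weighted mean of taxon salinity optima."""
    total = sum(abundance_pct.values())
    return sum(abundance_pct[t] * salinity_optimum[t] for t in abundance_pct) / total

# Relative abundances (%) in one sample, and assumed taxon salinity optima (psu)
sample = {"taxonA": 60.0, "taxonB": 30.0, "taxonC": 10.0}
optima = {"taxonA": 20.0, "taxonB": 30.0, "taxonC": 35.0}
print(cwp_salinity(sample, optima))  # → 24.5
```

Applied down a sediment core, such per-sample estimates yield the salinity history used to set restoration targets.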
Pardo, C E; Kreuzer, M; Bee, G
2013-11-01
Offspring born from normal litter size (10 to 15 piglets) but classified as having lower than average birth weight (average of the sow herd used: 1.46 ± 0.2 kg; mean ± s.d.) carry at birth negative phenotypic traits normally associated with intrauterine growth restriction, such as brain-sparing and impaired myofiber hyperplasia. The objective of the study was to assess long-term effects of intrauterine crowding by comparing postnatal performance, carcass characteristics and pork quality of offspring born from litters with higher (>1.7 kg) or lower (<1.3 kg) than average litter birth weight. From a population of multiparous Swiss Large White sows (parity 2 to 6), 16 litters with high (H = 1.75 kg) or low (L = 1.26 kg) average litter birth weight were selected. At farrowing, two female pigs and two castrated pigs were chosen from each litter: from the H-litters those with the intermediate (HI = 1.79 kg) and lowest (HL = 1.40 kg) birth weight, and from the L-litters those with the highest (LH = 1.49 kg) and intermediate (LI = 1.26 kg) birth weight. Average birth weight of the selected HI and LI piglets differed (P < 0.05), whereas birth weights of the HL- and LH-piglets were similar (P > 0.05). These pigs were fattened in group pens and slaughtered at 165 days of age. Pre-weaning performance of the litters and growth performance, carcass and meat quality traits of the selected pigs were assessed. The number of stillborn piglets and pig mortality were greater (P < 0.05) in L- than in H-litters. Consequently, fewer (P < 0.05) piglets were weaned and average litter weaning weight decreased by 38% (P < 0.05). The selected pigs of the L-litters displayed catch-up growth during the starter and grower-finisher periods, leading to similar (P > 0.05) slaughter weight at 165 days of age. However, HL-gilts were more feed efficient and had leaner carcasses than HI-, LH- and LI-pigs (birth weight class × gender interaction P < 0.05). Meat quality traits were mostly similar between groups. The
Averages, Areas and Volumes; Cambridge Conference on School Mathematics Feasibility Study No. 45.
ERIC Educational Resources Information Center
Cambridge Conference on School Mathematics, Newton, MA.
Presented is an elementary approach to areas, volumes and other mathematical concepts usually treated in calculus. The approach is based on the idea of average, and this concept is utilized throughout the report. In the beginning, the average (arithmetic mean) of a set of numbers is considered and two properties of the average which often simplify…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-19
..., 2011 (76 FR 13580). Furthermore, due to the complexity of the issues proposed in the NPRM, FTA is..., FTA published an NPRM in the Federal Register (76 FR 13850) proposing to amend its bus testing... Federal Transit Administration 49 CFR Part 665 RIN 2132-AB01 Bus Testing: Calculation of Average...
Bacillus subtilis 168 levansucrase (SacB) activity affects average levan molecular weight.
Porras-Domínguez, Jaime R; Ávila-Fernández, Ángela; Miranda-Molina, Afonso; Rodríguez-Alegría, María Elena; Munguía, Agustín López
2015-11-01
Levan is a fructan polymer that offers a variety of applications in the chemical, health, cosmetic and food industries. Most levan applications depend on levan molecular weight, which in turn depends on the source of the synthesizing enzyme and/or on reaction conditions. Here we demonstrate that, in the particular case of levansucrase from Bacillus subtilis 168, enzyme concentration is also a factor defining the levan molecular weight distribution. While a bimodal distribution has been reported at the usual enzyme concentrations (1 U/ml, equivalent to 0.1 μM levansucrase), we found that a low molecular weight normal distribution is obtained only at high enzyme concentrations (>5 U/ml, equivalent to 0.5 μM levansucrase), while a high molecular weight normal distribution is synthesized at low enzyme doses (0.1 U/ml, equivalent to 0.01 μM levansucrase). PMID:26256357
The effect of capsule-filling machine vibrations on average fill weight.
Llusa, Marcos; Faulhammer, Eva; Biserni, Stefano; Calzolari, Vittorio; Lawrence, Simon; Bresciani, Massimo; Khinast, Johannes
2013-09-15
The aim of this paper is to study the effect of capsule-filling speed and the inherent machine vibrations on fill weight for a dosator-nozzle machine. The results show that increasing the capsule-filling speed amplifies the vibration intensity (as measured by laser Doppler vibrometer) of the machine frame, which leads to powder densification. The mass of the powder (fill weight) collected via the nozzle is significantly larger at a higher capsule-filling speed. Therefore, there is a correlation between powder densification under more intense vibrations and larger fill weights. Quality-by-Design of powder-based products should evaluate the effect of environmental vibrations on material attributes, which in turn may affect product quality. PMID:23872302
López-Soria, S; Sibila, M; Nofrarías, M; Calsamiglia, M; Manzanilla, E G; Ramírez-Mendoza, H; Mínguez, A; Serrano, J M; Marín, O; Joisel, F; Charreyre, C; Segalés, J
2014-12-01
Porcine circovirus type 2 (PCV2) is a ubiquitous virus that mainly affects nursery and fattening pigs causing systemic disease (PCV2-SD) or subclinical infection. A characteristic sign in both presentations is reduction of average daily weight gain (ADWG). The present study aimed to assess the relationship between PCV2 load in serum and ADWG from 3 (weaning) to 21 weeks of age (slaughter) (ADWG 3-21). Thus, three different boar lines were used to inseminate sows from two PCV2-SD affected farms. One or two pigs per sow were selected (60, 61 and 51 piglets from Pietrain, Pietrain×Large White and Duroc×Large White boar lines, respectively). Pigs were bled at 3, 9, 15 and 21 weeks of age and weighed at 3 and 21 weeks. Area under the curve of the viral load at all sampling times (AUCqPCR 3-21) was calculated for each animal according to standard and real time quantitative PCR results; this variable was categorized as "negative or low" (<10(4.3) PCV2 genome copies/ml of serum), "medium" (≥10(4.3) to ≤10(5.3)) and "high" (>10(5.3)). Data regarding sex, PCV2 antibody titre at weaning and sow parity were also collected. A generalized linear model was fitted, showing that paternal genetic line and AUCqPCR 3-21 were related to ADWG 3-21. ADWG 3-21 (mean ± standard error) for "negative or low", "medium" and "high" AUCqPCR 3-21 was 672±9, 650±12 and 603±16 g/day, respectively, showing significant differences among them. This study describes different ADWG performances in 3 pig populations that suffered from different degrees of PCV2 viraemia. PMID:25448444
Area-averaged profiles over the mock urban setting test array
Nelson, M. A.; Brown, M. J.; Pardyjak, E. R.; Klewicki, J. C.
2004-01-01
Urban areas have a large effect on the local climate and meteorology. Efforts have been made to incorporate the bulk dynamic and thermodynamic effects of urban areas into mesoscale models (e.g., Chin et al., 2000; Holt et al., 2002; Lacser and Otte, 2002). At this scale buildings cannot be resolved individually, but parameterizations have been developed to capture their aggregate effect. These urban canopy parameterizations have been designed to account for the area-average drag, turbulent kinetic energy (TKE) production, and surface energy balance modifications due to buildings (e.g., Sorbjan and Uliasz, 1982; Ca, 1999; Brown, 2000; Martilli et al., 2002). These models compute an area-averaged mean profile that is representative of the bulk flow characteristics over the entire mesoscale grid cell. One difficulty has been testing of these parameterizations due to the lack of area-averaged data. In this paper, area-averaged velocity and turbulent kinetic energy profiles are derived from data collected at the Mock Urban Setting Test (MUST). The MUST experiment was designed to be a near full-scale model of an idealized urban area embedded in the Atmospheric Surface Layer (ASL). Its purpose was to study airflow and plume transport in urban areas and to provide a test case for model validation. A large number of velocity measurements were taken at the test site so that it was possible to derive area-averaged velocity and TKE profiles.
Full-custom design of split-set data weighted averaging with output register for jitter suppression
NASA Astrophysics Data System (ADS)
Jubay, M. C.; Gerasta, O. J.
2015-06-01
A full-custom design of an element selection algorithm, named Split-set Data Weighted Averaging (SDWA), is implemented in a 90 nm CMOS Technology Synopsys Library. SDWA is applied to seven unit elements (3-bit) using a thermometer-coded input. Split-set DWA is an improved DWA algorithm that caters to the requirement for randomization along with long-term equal element usage. Randomization and equal element usage improve the spectral response of the unit elements, yielding a higher spurious-free dynamic range (SFDR) without significantly degrading the signal-to-noise ratio (SNR). As a full-custom design, it is brought to transistor level, and a custom chip layout is provided, with a total area of 0.3 mm2 and a power consumption of 0.566 mW, simulated at a 50 MHz clock frequency. In this implementation, SDWA is further improved by introducing a register at the output that suppresses the jitter introduced at the final stage due to switching loops and successive delays.
47 CFR 36.622 - National and study area average unseparated loop costs.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 2 2010-10-01 2010-10-01 false National and study area average unseparated... Universal Service Fund Calculation of Loop Costs for Expense Adjustment § 36.622 National and study area... provided in paragraph (c) of this section, this is equal to the sum of the Loop Costs for each study...
47 CFR 36.622 - National and study area average unseparated loop costs.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 2 2011-10-01 2011-10-01 false National and study area average unseparated... Universal Service Fund Calculation of Loop Costs for Expense Adjustment § 36.622 National and study area... provided in paragraph (c) of this section, this is equal to the sum of the Loop Costs for each study...
The Effect of Area Averaging on the Approximated Profile of the H α Spectral Line
NASA Astrophysics Data System (ADS)
Bodnárová, M.; Utz, D.; Rybák, J.
2016-04-01
The Hα line is widely used as a diagnostic of the chromosphere. Often one needs to average the line profile over some area to increase the signal-to-noise ratio, so it is important to understand how derived parameters vary with changing approximations. In this study we investigate the effect of spatial averaging over a selected area on the temporal variations of the width, the intensity and the Doppler shift of the Hα spectral line profile. The approximated profile was deduced from co-temporal observations in five points throughout the Hα line profile obtained by the tunable Lyot filter installed on the Dutch Open Telescope. We found variations of the intensity and the Doppler velocities that were independent of the size of the area used for the computation of the area-averaged Hα spectral line profile.
Prediction of oil palm production using the weighted average of fuzzy sets concept approach
NASA Astrophysics Data System (ADS)
Nugraha, R. F.; Setiyowati, Susi; Mukhaiyar, Utriweni; Yuliawati, Apriliani
2015-12-01
Proper planning is crucial for decision making in a company. For oil palm producers, predictions of future production are useful inputs to company strategy, so predicting as accurately as possible is essential. Until now, to predict the next month's oil palm production, the company has used the simple mean of the latest five years of observations. Lately, imprecision in these estimates of oil palm production (overestimation) has become a problem and a focus of attention in the company. Here we propose a weighted-mean approach based on the fuzzy set concept for estimation and prediction. We find that the fuzzy-concept predictions are almost always lower than the realizations, in contrast to the overestimates produced by the simple mean.
Baeck, Annelies; Wagemans, Johan; Op de Beeck, Hans P
2013-04-15
Natural scenes typically contain multiple visual objects, often in interaction, such as when a bottle is used to fill a glass. Previous studies disagree about how multiple objects are represented and about the role of object position, and they did not pinpoint the effect of potential interactions between the objects. In an fMRI study, we presented four single objects in two different positions and object pairs consisting of all possible combinations of the single objects. Object pairs could form either a meaningful action configuration in which they interact with each other or a non-meaningful configuration. We found that for single objects and object pairs both identity and position were represented in multi-voxel activity patterns in LOC. The response patterns of object pairs were best predicted by a weighted average of the response patterns of the constituent objects, with the strongest single-object response (the max response) weighted more than the min response. The difference in weight between the max and the min object was larger for familiar action pairs than for other pairs when participants attended to the configuration. A weighted average thus relates the response patterns of object pairs to the response patterns of single objects, even when the objects interact. PMID:23266747
On the theory relating changes in area-average and pan evaporation (Invited)
NASA Astrophysics Data System (ADS)
Shuttleworth, W.; Serrat-Capdevila, A.; Roderick, M. L.; Scott, R.
2009-12-01
Theory relating changes in area-average evaporation with changes in the evaporation from pans or open water is developed. Such changes can arise by Type (a) processes related to large-scale changes in atmospheric concentrations and circulation that modify surface evaporation rates in the same direction, and Type (b) processes related to coupling between the surface and atmospheric boundary layer (ABL) at the landscape scale that usually modify area-average evaporation and pan evaporation in different directions. The interrelationship between evaporation rates in response to Type (a) changes is derived. They have the same sign and broadly similar magnitude but the change in area-average evaporation is modified by surface resistance. As an alternative to assuming the complementary evaporation hypothesis, the results of previous modeling studies that investigated surface-atmosphere coupling are parameterized and used to develop a theoretical description of Type (b) coupling via vapor pressure deficit (VPD) in the ABL. The interrelationship between appropriately normalized pan and area-average evaporation rates is shown to vary with temperature and wind speed but, on average, the Type (b) changes are approximately equal and opposite. Long-term Australian pan evaporation data are analyzed to demonstrate the simultaneous presence of Type (a) and (b) processes, and observations from three field sites in southwestern USA show support for the theory describing Type (b) coupling via VPD. England's victory over Australia in 2009 Ashes cricket test match series will not be mentioned.
ON THE THEORY RELATING CHANGES IN AREA-AVERAGE AND PAN EVAPORATION
Technology Transfer Automated Retrieval System (TEKTRAN)
Theory relating changes in the area-average evaporation from a landscape with changes in the evaporation from pans or open water within the landscape is developed. Such changes can arise in two ways, by Type (a) processes related to large-scale changes in atmospheric concentrations and circulation t...
Shih, H C; Tsai, S W; Kuo, C H
2012-01-01
A solid-phase microextraction (SPME) device was used as a diffusive sampler for airborne propylene glycol ethers (PGEs), including propylene glycol monomethyl ether (PGME), propylene glycol monomethyl ether acetate (PGMEA), and dipropylene glycol monomethyl ether (DPGME). Carboxen-polydimethylsiloxane (CAR/PDMS) SPME fiber was selected for this study. A polytetrafluoroethylene (PTFE) tubing was used as the holder, and the SPME fiber assembly was inserted into the tubing as a diffusive sampler. The diffusion path length and area of the sampler were 0.3 cm and 0.00086 cm(2), respectively. The theoretical sampling constants at 30°C and 1 atm for PGME, PGMEA, and DPGME were 1.50 × 10(-2), 1.23 × 10(-2) and 1.14 × 10(-2) cm(3) min(-1), respectively. For evaluations, known concentrations of PGEs around the threshold limit values/time-weighted average with specific relative humidities (10% and 80%) were generated both by the air bag method and the dynamic generation system, while 15, 30, 60, 120, and 240 min were selected as the time periods for vapor exposures. Comparisons of the SPME diffusive sampling method to Occupational Safety and Health Administration (OSHA) organic Method 99 were performed side-by-side in an exposure chamber at 30°C for PGME. A gas chromatography/flame ionization detector (GC/FID) was used for sample analysis. The experimental sampling constants of the sampler at 30°C were (6.93 ± 0.12) × 10(-1), (4.72 ± 0.03) × 10(-1), and (3.29 ± 0.20) × 10(-1) cm(3) min(-1) for PGME, PGMEA, and DPGME, respectively. The adsorption of chemicals on the stainless steel needle of the SPME fiber was suspected to be one of the reasons why significant differences between theoretical and experimental sampling rates were observed. Correlations between the results for PGME from both SPME device and OSHA organic Method 99 were linear (r = 0.9984) and consistent (slope = 0.97 ± 0.03). Face velocity (0-0.18 m/s) also proved to have no effects on the sampler
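The theoretical sampling constant quoted above follows Fick's first law for a diffusive sampler, S = D·A/L, and the time-weighted average concentration follows as C = m/(S·t). A minimal Python sketch of those two relations (the function names are illustrative, and the diffusion coefficient value below is an assumption chosen to roughly reproduce the reported S for PGME, not a figure from the paper):

```python
def sampling_constant(diff_coeff, area, path_length):
    """Fick's-law diffusive sampling constant S = D * A / L (cm^3/min)."""
    return diff_coeff * area / path_length

def twa_concentration(mass_collected, s_const, minutes):
    """Time-weighted average concentration C = m / (S * t)."""
    return mass_collected / (s_const * minutes)

# Sampler geometry from the study: L = 0.3 cm, A = 0.00086 cm^2.
# An assumed diffusion coefficient of ~5.2 cm^2/min gives S close to the
# reported theoretical value of about 1.5e-2 cm^3/min.
s = sampling_constant(5.2, 0.00086, 0.3)
c = twa_concentration(0.03, s, 240)  # ug collected over a 240-min exposure
```

The same arithmetic, run in reverse with an experimentally calibrated S, converts collected mass into an exposure concentration.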
Sontag, C A; Stafford, W F; Correia, J J
2004-03-01
Analysis of sedimentation velocity data for indefinite self-associating systems is often achieved by fitting of weight average sedimentation coefficients (s(20,w)). However, this method discriminates poorly between alternative models of association and is biased by the presence of inactive monomers and irreversible aggregates. Therefore, a more robust method for extracting the binding constants for indefinite self-associating systems has been developed. This approach utilizes a set of fitting routines (SedAnal) that perform global non-linear least squares fits of up to 10 sedimentation velocity experiments, corresponding to different loading concentrations, by a combination of finite element simulations and a fitting algorithm that uses a simplex convergence routine to search parameter space. Indefinite self-association is analyzed with the software program isodesfitter, which incorporates user provided functions for sedimentation coefficients as a function of the degree of polymerization for spherical, linear and helical polymer models. The computer program hydro was used to generate the sedimentation coefficient values for the linear and helical polymer assembly mechanisms. Since this curve fitting method directly fits the shape of the sedimenting boundary, it is in principle very sensitive to alternative models and the presence of species not participating in the reaction. This approach is compared with traditional fitting of weight average data and applied to the initial stages of Mg(2+)-induced tubulin self-association into small curved polymers, and vinblastine-induced tubulin spiral formation. The appropriate use and limitations of the methods are discussed. PMID:15043931
Long-path Scintillometry To Determine Area-averaged Evaporation Over Heterogeneous Terrain
NASA Astrophysics Data System (ADS)
Meininger, W. M. L.; de Bruin, H. A. R.
Results of the Flevopolder 1998 field experiment will be presented. Area-averaged evaporation determined with a combined system of a Large Aperture Scintillometer (LAS) and a Radio-wave (small aperture) Scintillometer (RWS) with a path length of 2.2 km will be compared with 'ground-truth' eddy-correlation measurements. The landscape consists of different rectangular agricultural fields. The main crops are potatoes, sugar beets, onions and wheat. Over each of these crops micro-meteorological stations, including eddy-correlation equipment, were installed. In addition, area-averaged evaporation derived from the LAS alone and a simple estimate of available energy will also be discussed. The results appear to be very promising. Finally, first results of evaporation derived from scintillometry and from satellite images will be presented.
High surface area, low weight composite nickel fiber electrodes
NASA Technical Reports Server (NTRS)
Johnson, Bradley A.; Ferro, Richard E.; Swain, Greg M.; Tatarchuk, Bruce J.
1993-01-01
The energy density and power density of light weight aerospace batteries utilizing the nickel oxide electrode are often limited by the microstructures of both the collector and the resulting active deposit in/on the collector. Heretofore, these two microstructures were intimately linked to one another by the materials used to prepare the collector grid as well as the methods and conditions used to deposit the active material. Significant weight and performance advantages were demonstrated by Britton and Reid at NASA-LeRC using FIBREX nickel mats of ca. 28-32 microns diameter. Work in our laboratory investigated the potential performance advantages offered by nickel fiber composite electrodes containing a mixture of fibers as small as 2 microns diameter (Available from Memtec America Corporation). These electrode collectors possess in excess of an order of magnitude more surface area per gram of collector than FIBREX nickel. The increase in surface area of the collector roughly translates into an order of magnitude thinner layer of active material. Performance data and advantages of these thin layer structures are presented. Attributes and limitations of their electrode microstructure to independently control void volume, pore structure of the Ni(OH)2 deposition, and resulting electrical properties are discussed.
NASA Astrophysics Data System (ADS)
Boroushaki, Soheil; Malczewski, Jacek
2008-04-01
This paper focuses on the integration of GIS and an extension of the analytical hierarchy process (AHP) using quantifier-guided ordered weighted averaging (OWA) procedure. AHP_OWA is a multicriteria combination operator. The nature of the AHP_OWA depends on some parameters, which are expressed by means of fuzzy linguistic quantifiers. By changing the linguistic terms, AHP_OWA can generate a wide range of decision strategies. We propose a GIS-multicriteria evaluation (MCE) system through implementation of AHP_OWA within ArcGIS, capable of integrating linguistic labels within conventional AHP for spatial decision making. We suggest that the proposed GIS-MCE would simplify the definition of decision strategies and facilitate an exploratory analysis of multiple criteria by incorporating qualitative information within the analysis.
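The quantifier-guided OWA idea can be sketched concretely: a regular increasing monotone (RIM) quantifier Q generates rank weights w_i = Q(i/n) − Q((i−1)/n), which are applied to the criterion values after sorting them, so the weights attach to rank positions rather than to specific criteria. A minimal Python sketch (the power-function quantifier and function names are illustrative assumptions, not the paper's implementation):

```python
def rim_weights(n, alpha=2.0):
    """Rank weights from the RIM quantifier Q(r) = r**alpha:
    w_i = Q(i/n) - Q((i-1)/n).  alpha = 1 reproduces the plain mean;
    alpha > 1 shifts weight toward lower-ranked (smaller) values."""
    return [(i / n) ** alpha - ((i - 1) / n) ** alpha for i in range(1, n + 1)]

def owa(values, weights):
    """Ordered weighted average: sort values in descending order, then
    take the dot product with the rank weights."""
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))
```

Varying alpha sweeps the decision strategy from optimistic ("at least one criterion") toward pessimistic ("all criteria"), which is how changing the linguistic quantifier changes AHP_OWA's behavior.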
Yasugi, T; Kawai, T; Mizunuma, K; Horiguchi, S; Iguchi, H; Ikeda, M
1992-01-01
A diffusive sampling method with water as absorbent was examined in comparison with 3 conventional methods of diffusive sampling with carbon cloth as absorbent, pumping through National Institute of Occupational Safety and Health (NIOSH) charcoal tubes, and pumping through NIOSH silica gel tubes to measure time-weighted average concentration of dimethylformamide (DMF). DMF vapors of constant concentrations at 3-110 ppm were generated by bubbling air at constant velocities through liquid DMF followed by dilution with fresh air. Both types of diffusive samplers could either absorb or adsorb DMF in proportion to time (0.25-8 h) and concentration (3-58 ppm), except that the DMF adsorbed was below the measurable amount when carbon cloth samplers were exposed at 3 ppm for less than 1 h. When both diffusive samplers were loaded with DMF and kept in fresh air, the DMF in water samplers stayed unchanged for at least for 12 h. The DMF in carbon cloth samplers showed a decay with a half-time of 14.3 h. When the carbon cloth was taken out immediately after termination of DMF exposure, wrapped in aluminum foil, and kept refrigerated, however, there was no measurable decrease in DMF for at least 3 weeks. When the air was drawn at 0.2 l/min, a breakthrough of the silica gel tube took place at about 4,000 ppm.min (as the lower 95% confidence limit), whereas charcoal tubes could tolerate even heavier exposures, suggesting that both tubes are fit to measure the 8-h time-weighted average of DMF at 10 ppm. PMID:1577523
NASA Astrophysics Data System (ADS)
Amir Rahmani, Mohammad; Zarghami, Mahdi
2013-03-01
The projections of the climate change by using General Climate Models (GCMs) are uncertain. Hence, combining the results of GCMs is now an effective solution to tackle this uncertainty. To evaluate the performance of GCMs, a new measure based on the similarity of the projections is defined. In defining this measure the Ordered Weighted Averaging (OWA) approach is used. The relative weights of the GCMs projections in different stations, to be aggregated by the OWA operator, are obtained by regular increasing monotone fuzzy quantifiers, which model the risk preferences of the decision maker. To show the effectiveness of the approach, climate change in the northwestern provinces of Iran is studied by using the data of 15 synoptic stations. The weather generator of LARS-WG is used to downscale the GCMs under three emission scenarios (A2, A1B and B1) for the period 2011 to 2030. The combined results, by using the similarity values, indicate a - 0.1 °C to + 4.5 °C change in temperature in the region. Precipitation is expected to increase in summer and fall. Changes in wintry precipitation depend on the location; however the precipitation in spring would have a medium change. The results of this study show the usefulness of OWA operator, which considers the risk attitudes of the decision maker. This approach could help water and environmental managers to tackle the climate uncertainties.
Coombes, Brandon; Basu, Saonli; Guha, Sharmistha; Schork, Nicholas
2015-01-01
Multi-locus effect modeling is a powerful approach for detection of genes influencing a complex disease. Especially for rare variants, we need to analyze multiple variants together to achieve adequate power for detection. In this paper, we propose several parsimonious branching model techniques to assess the joint effect of a group of rare variants in a case-control study. These models implement a data reduction strategy within a likelihood framework and use a weighted score test to assess the statistical significance of the effect of the group of variants on the disease. The primary advantage of the proposed approach is that it performs model-averaging over a substantially smaller set of models supported by the data and thus gains power to detect multi-locus effects. We illustrate these proposed approaches on simulated and real data and study their performance compared to several existing rare variant detection approaches. The primary goal of this paper is to assess if there is any gain in power to detect association by averaging over a number of models instead of selecting the best model. Extensive simulations and real data application demonstrate the advantage of the proposed approach in the presence of causal variants with opposite directional effects, along with a moderate number of null variants in linkage disequilibrium. PMID:26436424
Cook, D A; Coory, M; Webster, R A
2011-06-01
OBJECTIVE To introduce a new type of risk-adjusted (RA) exponentially weighted moving average (EWMA) chart and to compare it to a commonly used type of variable life adjusted display chart for analysis of patient outcomes. DATA Routine inpatient data on mortality following admission for acute myocardial infarction, from all public and private hospitals in Queensland, Australia. METHODS The RA-EWMA plots the EWMA of the observed and predicted values. Predicted values were obtained from a logistic regression model for all hospitals in Queensland. The EWMA of the predicted values is a moving centre line, reflecting current patient case mix at a particular hospital. Thresholds around this moving centre line provide a scale by which to assess the importance of trends in the EWMA of the observed values. RESULTS The RA-EWMA chart can be designed to have equivalent performance, in terms of average run lengths, as variable life adjusted display chart. The advantages of the RA-EWMA are that it communicates information about the current level of an indicator in a direct and understandable way, and it explicitly displays information about the current patient case mix. Also, because it is not reset, the RA-EWMA is a more natural chart to use in health, where it is exceedingly rare to stop or dramatically and abruptly alter a process of care. CONCLUSION The RA-EWMA chart is a direct and intuitive way to display information about an indicator while accounting for differences in case mix. PMID:21209145
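The recursion underlying any EWMA chart is z_t = λ·x_t + (1−λ)·z_{t−1}; the RA-EWMA described above smooths both the observed outcomes and the model-predicted risks with it, the latter forming the moving centre line. A minimal Python sketch (the data and λ are made up for illustration; the paper's threshold design is not reproduced here):

```python
def ewma(xs, lam=0.2, z0=None):
    """Exponentially weighted moving average: z_t = lam*x_t + (1-lam)*z_{t-1}.
    lam controls how quickly older observations fade from the statistic."""
    z = xs[0] if z0 is None else z0
    out = []
    for x in xs:
        z = lam * x + (1 - lam) * z
        out.append(z)
    return out

# Hypothetical series: 0/1 mortality outcomes and model-predicted risks.
deaths    = [0, 0, 1, 0, 1, 0, 0, 0]
pred_risk = [0.10, 0.10, 0.25, 0.10, 0.30, 0.10, 0.10, 0.10]
observed_line = ewma(deaths, lam=0.1, z0=pred_risk[0])   # the monitored statistic
centre_line   = ewma(pred_risk, lam=0.1, z0=pred_risk[0])  # moving centre line
```

Plotting `observed_line` against `centre_line` (plus thresholds around the centre line) gives the chart's direct, case-mix-adjusted display.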
NASA Astrophysics Data System (ADS)
Davies, G. R.; Chaplin, W. J.; Elsworth, Y.; Hale, S. J.
2014-07-01
The Birmingham Solar Oscillations Network (BiSON) has provided high-quality high-cadence observations from as far back in time as 1978. These data must be calibrated from the raw observations into radial velocity and the quality of the calibration has a large impact on the signal-to-noise ratio of the final time series. The aim of this work is to maximize the potential science that can be performed with the BiSON data set by optimizing the calibration procedure. To achieve better levels of signal-to-noise ratio, we perform two key steps in the calibration process: we attempt a correction for terrestrial atmospheric differential extinction; and the resulting improvement in the calibration allows us to perform weighted averaging of contemporaneous data from different BiSON stations. The improvements listed produce significant improvement in the signal-to-noise ratio of the BiSON frequency-power spectrum across all frequency ranges. The reduction of noise in the power spectrum will allow future work to provide greater constraint on changes in the oscillation spectrum with solar activity. In addition, the analysis of the low-frequency region suggests that we have achieved a noise level that may allow us to improve estimates of the upper limit of g-mode amplitudes.
Woolcock, Patrick J; Koziel, Jacek A; Cai, Lingshuang; Johnston, Patrick A; Brown, Robert C
2013-03-15
Time-weighted average (TWA) passive sampling using solid-phase microextraction (SPME) and gas chromatography was investigated as a new method of collecting, identifying and quantifying contaminants in process gas streams. Unlike previous TWA-SPME techniques using the retracted fiber configuration (fiber within needle) to monitor ambient conditions or relatively stagnant gases, this method was developed for fast-moving process gas streams at temperatures approaching 300 °C. The goal was to develop a consistent and reliable method of analyzing low concentrations of contaminants in hot gas streams without performing time-consuming exhaustive extraction with a slipstream. This work in particular aims to quantify trace tar compounds found in a syngas stream generated from biomass gasification. This paper evaluates the concept of retracted SPME at high temperatures by testing the three essential requirements for TWA passive sampling: (1) zero-sink assumption, (2) consistent and reliable response by the sampling device to changing concentrations, and (3) equal concentrations in the bulk gas stream relative to the face of the fiber syringe opening. Results indicated the method can accurately predict gas stream concentrations at elevated temperatures. Evidence was also discovered to validate the existence of a second boundary layer within the fiber during the adsorption/absorption process. This limits the technique to operating within reasonable mass loadings and loading rates, established by appropriate sampling depths and times for concentrations of interest. A limit of quantification for the benzene model tar system was estimated at 0.02 g m(-3) (8 ppm) with a limit of detection of 0.5 mg m(-3) (200 ppb). Using the appropriate conditions, the technique was applied to a pilot-scale fluidized-bed gasifier to verify its feasibility. Results from this test were in good agreement with literature and prior pilot plant operation, indicating the new method can measure low
NASA Astrophysics Data System (ADS)
Gasser, Guy; Pankratov, Irena; Elhanany, Sara; Glazman, Hillel; Lev, Ovadia
2014-05-01
A methodology used to estimate the percentage of wastewater effluent in an otherwise pristine water site is proposed on the basis of the weighted mean of the level of a consortium of indicator pollutants. This method considers the levels of uncertainty in the evaluation of each of the indicators in the site, potential effluent sources, and uncontaminated surroundings. A detailed demonstrative study was conducted on a site that is potentially subject to wastewater leakage. The research concentrated on several perched springs that are influenced to an unknown extent by agricultural communities. A comparison was made to a heavily contaminated site receiving wastewater effluent and surface water runoff. We investigated six springs in two nearby ridges where fecal contamination was detected in the past; the major sources of pollution in the area have since been diverted to a wastewater treatment system. We used chloride, acesulfame, and carbamazepine as domestic pollution tracers. Good correlation (R2 > 0.86) was observed between the mixing ratio predictions based on the two organic tracers (the slope of the linear regression was 1.05), whereas the chloride predictions differed considerably. This methodology is potentially useful, particularly for cases in which detailed hydrological modeling is unavailable but in which quantification of wastewater penetration is required. We demonstrate that the use of more than one tracer for estimation of the mixing ratio reduces the combined uncertainty level associated with the estimate and can also help to disqualify biased tracers.
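The two ingredients described, a per-tracer mixing estimate and an uncertainty-weighted combination, can be sketched under stated assumptions: a two-endmember mixing model per tracer and inverse-variance weighting for the combination (the paper's exact weighting scheme may differ, and all names below are illustrative):

```python
def mixing_ratio(c_site, c_background, c_effluent):
    """Two-endmember mixing: effluent fraction implied by one tracer."""
    return (c_site - c_background) / (c_effluent - c_background)

def combined_estimate(fractions, variances):
    """Inverse-variance weighted mean of per-tracer estimates; the combined
    variance shrinks as more independent tracers are added."""
    ws = [1.0 / v for v in variances]
    est = sum(w * f for w, f in zip(ws, fractions)) / sum(ws)
    return est, 1.0 / sum(ws)
```

A tracer whose per-site estimate disagrees strongly with the others (as chloride did here) can then be flagged and down-weighted or disqualified.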
Michiels, A; Piepers, S; Ulens, T; Van Ransbeeck, N; Del Pozo Sacristán, R; Sierens, A; Haesebrouck, F; Demeyer, P; Maes, D
2015-09-01
The present study investigated the simultaneous influence of particulate matter (PM10) and ammonia (NH3) on performance, lung lesions and the presence of Mycoplasma hyopneumoniae (M. hyopneumoniae) in finishing pigs. A pig herd experiencing clinical problems of M. hyopneumoniae infections was selected. In total, 1095 finishing pigs of two replicates in eight compartments each were investigated during the entire finishing period (FP). Indoor PM10 and NH3 were measured at regular intervals during the FP with two Grimm spectrometers and two Graywolf Particle Counters (PM10) and an Innova photoacoustic gas monitor (NH3). Average daily weight gain (ADG) and mortality were calculated and associated with PM10 and NH3 during the FP. Nasal swabs (10 pigs/compartment) were collected one week prior to slaughter to detect DNA of M. hyopneumoniae with nested PCR (nPCR). The prevalence and extent of pneumonia lesions, and prevalence of fissures and pleurisy were examined at slaughter (29 weeks). The results from the nasal swabs and lung lesions were associated with PM10 and NH3 during the FP and the second half of the FP. In the univariable model, increasing PM10 concentrations resulted in a higher odds of pneumonia lesions (second half of the FP: OR=8.72; P=0.015), more severe pneumonia lesions (FP: P=0.04, second half of the FP: P=0.009), a higher odds of pleurisy lesions (FP: OR=20.91; P<0.001 and second half of the FP: OR=40.85; P<0.001) and a higher number of nPCR positive nasal samples (FP: OR=328.00; P=0.01 and second half of the FP: OR=185.49; P=0.02). Increasing NH3 concentrations in the univariable model resulted in a higher odds of pleurisy lesions (FP: OR=21.54; P=0.003) and a higher number of nPCR positive nasal samples (FP: OR=70.39; P=0.049; second half of the FP: OR=8275.05; P=0.01). In the multivariable model, an increasing PM10 concentration resulted in a higher odds of pleurisy lesions (FP: OR=8.85; P=0.049). These findings indicate that the respiratory health
Wells, Frank C.; Schertz, Terry L.
1984-01-01
A computer program using the Statistical Analysis System has been developed to perform the arithmetic calculations and regression analyses to determine volume-weighted-average concentrations of selected water-quality constituents in lakes and reservoirs. The program has been used in Texas to show decreasing trends in dissolved-solids and total-phosphorus concentrations in Lake Arlington after the discharge of sewage effluent into the reservoir was stopped. The program also was used to show that the August 1978 and October 1981 floods on the Brazos River greatly decreased the volume-weighted-average concentrations of selected constituents in Hubbard Creek Reservoir and Possum Kingdom Lake.
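The arithmetic behind a volume-weighted-average concentration can be sketched in a few lines. This Python fragment uses made-up layer concentrations and volumes, not the Texas reservoir data, and simply weights each depth layer's concentration by its water volume:

```python
def volume_weighted_avg(concs, volumes):
    """Volume-weighted-average concentration for a lake or reservoir.

    concs   -- constituent concentration in each depth layer (mg/L)
    volumes -- water volume of each layer (any consistent unit)
    """
    total = sum(volumes)
    return sum(c * v for c, v in zip(concs, volumes)) / total

# Hypothetical three-layer profile: dissolved solids (mg/L) and layer volumes
vwa = volume_weighted_avg([250.0, 300.0, 400.0], [60.0, 30.0, 10.0])  # 280.0 mg/L
```

Trend analysis then amounts to regressing such volume-weighted averages against time.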
Code of Federal Regulations, 2012 CFR
2012-07-01
... weighted average in Equation 2 of § 63.2840 to determine the compliance ratio. (b) To determine the volume... determine chemical properties of the solvent and the volume percentage of all HAP components present in the... by the total volume of all deliveries as expressed in Equation 1 of this section. Record the...
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Cheng, Yang; Crassidis, John L.; Oshman, Yaakov
2007-01-01
Many applications require an algorithm that averages quaternions in an optimal manner. For example, when combining the quaternion outputs of multiple star trackers having this output capability, it is desirable to properly average the quaternions without recomputing the attitude from the raw star tracker data. Other applications requiring some sort of optimal quaternion averaging include particle filtering and multiple-model adaptive estimation, where weighted quaternions are used to determine the quaternion estimate. For spacecraft attitude estimation applications, earlier work derives an optimal averaging scheme to compute the average of a set of weighted attitude matrices using the singular value decomposition method. Another approach, focusing on a 4-dimensional quaternion Gaussian distribution on the unit hypersphere, computes the average quaternion by minimizing a quaternion cost function that is equivalent to the attitude matrix cost function. Motivated by and extending these results, this Note derives an algorithm that determines an optimal average quaternion from a set of scalar- or matrix-weighted quaternions. Furthermore, a sufficient condition for the uniqueness of the average quaternion, and the equivalence of the minimization problem, stated herein, to maximum likelihood estimation, are shown.
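A standard way to realize this kind of optimal averaging is to take the dominant eigenvector of the weighted outer-product matrix of the quaternions, which minimizes the weighted attitude-matrix cost function. The NumPy sketch below, with illustrative inputs and a scalar-weight-only case, is an assumption about that general approach, not the Note's exact algorithm:

```python
import numpy as np

def average_quaternion(quats, weights):
    """Weighted average of unit quaternions: the eigenvector of
    M = sum_i w_i q_i q_i^T with the largest eigenvalue (the sign
    of the result is arbitrary, as q and -q encode the same attitude)."""
    M = np.zeros((4, 4))
    for q, w in zip(quats, weights):
        q = np.asarray(q, dtype=float)
        M += w * np.outer(q, q)
    vals, vecs = np.linalg.eigh(M)   # symmetric matrix: eigh, ascending order
    return vecs[:, -1]               # eigenvector of the largest eigenvalue

# Two nearly aligned unit quaternions (scalar-last convention, illustrative):
# identity, and a 0.1-rad rotation about z; the average is the 0.05-rad rotation
q1 = np.array([0.0, 0.0, 0.0, 1.0])
q2 = np.array([0.0, 0.0, np.sin(0.05), np.cos(0.05)])
q_avg = average_quaternion([q1, q2], [0.5, 0.5])
```

For equal weights the result is the normalized bisector of the two quaternions, i.e. the half-way rotation, which matches the intuitive average.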
Spatially-averaged and point measurements of wind variability in the Geyser's area
Porch, W.M.
1980-05-01
This paper describes the results of a comparison of wind measurements made with conventional cup-vane tower mounted anemometers and optical space-averaged anemometer techniques. The results described cover the period from 7/17/79 to 7/27/79 during the intensive ASCOT experiment in the Geyser's region. The average height of the laser beam above terrain was about 30 meters. Most of the optical anemometer wind data was obtained using a laser beam system described in detail by Lawrence, et al. Some measurements were also made along the same path using a white light photodiode array system developed at LLL.
NASA Astrophysics Data System (ADS)
Kassianov, Evgueni; Barnard, James; Flynn, Connor; Riihimaki, Laura; Marinovici, Cristina
2015-10-01
Areal-averaged albedos are particularly difficult to measure in coastal regions, because the surface is not homogeneous, consisting of a sharp demarcation between land and water. With this difficulty in mind, we evaluate a simple retrieval of areal-averaged surface albedo using ground-based measurements of atmospheric transmission alone under fully overcast conditions. To illustrate the performance of our retrieval, we find the areal-averaged albedo using measurements from the Multi-Filter Rotating Shadowband Radiometer (MFRSR) at five wavelengths (415, 500, 615, 673, and 870 nm). These MFRSR data are collected at a coastal site on Graciosa Island, Azores, supported by the U.S. Department of Energy's (DOE's) Atmospheric Radiation Measurement (ARM) Program. The areal-averaged albedos obtained from the MFRSR are compared with collocated and coincident Moderate Resolution Imaging Spectroradiometer (MODIS) white-sky albedo at four nominal wavelengths (470, 560, 670 and 860 nm). These comparisons are made during a 19-month period (June 2009 - December 2010). We also calculate composite-based spectral values of surface albedo by a weighted-average approach using estimated fractions of major surface types observed in an area surrounding this coastal site. Taken as a whole, these three methods of finding albedo show spectral and temporal similarities, and suggest that our simple, transmission-based technique holds promise, but with estimated errors of about ±0.03. Additional work is needed to reduce this uncertainty in areas with inhomogeneous surfaces.
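The composite-based step is plain weighted averaging: each surface type's spectral albedo is weighted by its estimated area fraction. A minimal Python sketch with invented fractions and per-type albedos (not the published Graciosa Island values):

```python
def composite_albedo(fractions, albedos):
    """Composite-based areal-averaged albedo at one wavelength:
    area-fraction-weighted average of the major surface types."""
    assert abs(sum(fractions) - 1.0) < 1e-9, "area fractions must sum to 1"
    return sum(f * a for f, a in zip(fractions, albedos))

# e.g. 60% vegetation, 30% water, 10% bare rock (illustrative values)
a = composite_albedo([0.6, 0.3, 0.1], [0.20, 0.06, 0.25])
```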
ERIC Educational Resources Information Center
Warne, Russell T.; Nagaishi, Chanel; Slade, Michael K.; Hermesmeyer, Paul; Peck, Elizabeth Kimberli
2014-01-01
While research has shown the statistical significance of high school grade point averages (HSGPAs) in predicting future academic outcomes, the systems with which HSGPAs are calculated vary drastically across schools. Some schools employ unweighted grades that carry the same point value regardless of the course in which they are earned; other…
ERIC Educational Resources Information Center
Sadler, Philip M.; Tai, Robert H.
2007-01-01
Honors and advanced placement (AP) courses are commonly viewed as more demanding than standard high school offerings. Schools employ a range of methods to account for such differences when calculating grade point average and the associated rank in class for graduating students. In turn, these statistics have a sizeable impact on college admission…
Thompson, Amanda L; Adair, Linda; Bentley, Margaret E
2014-01-01
Biomedical researchers have raised concerns that mothers’ inability to recognize infant and toddler overweight poses a barrier to stemming increasing rates of overweight and obesity, particularly among low-income or minority mothers. Little anthropological research has examined the sociocultural, economic or structural factors shaping maternal perceptions of infant and toddler size or addressed biomedical depictions of maternal misperception as a “socio-cultural problem.” We use qualitative and quantitative data from 237 low-income, African-American mothers to explore how they define ‘normal’ infant growth and infant overweight. Our quantitative results document that mothers’ perceptions of infant size change with infant age, are sensitive to the size of other infants in the community, and are associated with concerns over health and appetite. Qualitative analysis documents that mothers are concerned with their children’s weight status and assess size in relation to their infants’ cues, local and societal norms of appropriate size, interactions with biomedicine, and concerns about infant health and sufficiency. These findings suggest that mothers use multiple models to interpret and respond to child weight. An anthropological focus on the complex social and structural factors shaping what is considered ‘normal’ and ‘abnormal’ infant weight is critical for shaping appropriate and successful interventions. PMID:25684782
Sether, Bradley A.; Berkas, Wayne R.; Vecchia, Aldo V.
2004-01-01
associated with each estimated annual load. The estimated annual loads for the eight primary sites then were used to estimate annual loads for five intervening reaches in the study area. Results were used as a screening tool to identify which subbasins contributed a disproportionate amount of pollutants to the Red River. To compare the relative water quality of the different subbasins, an estimated flow-weighted average (FWA) concentration was computed from the estimated average annual load and the average annual streamflow for each subbasin. The 5-day biochemical oxygen demands in the upper Red River Basin were fairly small, and medians ranged from 1 to 3 milligrams per liter. The largest estimated FWA concentration for dissolved solids (about 630 milligrams per liter) was for the Bois de Sioux River near Doran, Minn., site. The Otter Tail River above Breckenridge, Minn., site had the smallest estimated FWA concentration (about 240 milligrams per liter). The estimated FWA concentrations for dissolved solids for the main-stem sites ranged from about 300 to 500 milligrams per liter and generally increased in a downstream direction. The estimated FWA concentrations for total nitrite plus nitrate for the main-stem sites increased from about 0.2 milligram per liter for the Red River below Wahpeton, N. Dak., site to about 0.9 milligram per liter for the Red River at Perley, Minn., site. Much of the increase probably resulted from flows from the tributary sites and intervening reaches, excluding the Otter Tail River above Breckenridge, Minn., site. However, uncertainty in the estimated concentrations prevented any reliable conclusions regarding which sites or reaches contributed most to the increase. The estimated FWA concentrations for total ammonia for the main-stem sites increased from about 0.05 milligram per liter for the Red River above Fargo, N. Dak., site to about 0.15 milligram per liter for the Red River near Harwood, N. Dak., site. T
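The flow-weighted average (FWA) concentration described above is the estimated average annual load divided by the average annual streamflow, with a unit conversion. A short Python sketch with a hypothetical subbasin, not one of the Red River sites:

```python
def flow_weighted_avg_conc(annual_load_kg, annual_flow_m3):
    """Flow-weighted average (FWA) concentration in mg/L:
    average annual load / average annual streamflow.
    1 kg per 1 m^3 = 1e6 mg per 1e3 L = 1000 mg/L."""
    return annual_load_kg * 1e6 / (annual_flow_m3 * 1e3)

# Hypothetical subbasin: 3.0e6 kg/yr of dissolved solids in 1.0e7 m^3/yr of flow
fwa = flow_weighted_avg_conc(3.0e6, 1.0e7)  # 300.0 mg/L
```

Comparing FWA concentrations across subbasins normalizes loads for streamflow, which is what makes them usable as a screening tool.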
The daily computed weighted averaging basic reproduction number R0,k,ωn for MERS-CoV in South Korea
NASA Astrophysics Data System (ADS)
Jeong, Darae; Lee, Chang Hyeong; Choi, Yongho; Kim, Junseok
2016-06-01
In this paper, we propose the daily computed weighted averaging basic reproduction number R0,k,ωn for the Middle East respiratory syndrome coronavirus (MERS-CoV) outbreak in South Korea, May to July 2015. We use an SIR model with piecewise constant parameters β (contact rate) and γ (removed rate). We use the explicit Euler method for the solution of the SIR model and a nonlinear least-square fitting procedure for finding the best parameters. In R0,k,ωn, the parameters n, k, and ω denote days from a reference date, the number of days in averaging, and a weighting factor, respectively. We perform a series of numerical experiments and compare the results with the real-world data. In particular, using the predicted reproduction number based on the previous two consecutive reproduction numbers, we can predict the future behavior of the reproduction number.
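The two computational pieces are simple to sketch: an explicit Euler step for the SIR system with constant β and γ over a window, and a weighted average of the last k daily reproduction numbers. This Python sketch uses a geometric weighting by ω as an assumption for illustration; the paper's exact weighting scheme may differ:

```python
def euler_sir(S0, I0, R0_, beta, gamma, dt, steps):
    """Explicit Euler integration of the SIR model with piecewise
    constant contact rate beta and removal rate gamma (one window)."""
    S, I, R = S0, I0, R0_
    N = S + I + R  # total population is conserved
    for _ in range(steps):
        new_inf = beta * S * I / N * dt
        new_rem = gamma * I * dt
        S, I, R = S - new_inf, I + new_inf - new_rem, R + new_rem
    return S, I, R

def weighted_avg_R0(daily_R0, k, omega):
    """Weighted average of the last k daily reproduction numbers
    (each beta_n / gamma_n), most recent day weighted most heavily."""
    recent = daily_R0[-k:]
    weights = [omega ** j for j in range(len(recent) - 1, -1, -1)]
    return sum(w * r for w, r in zip(weights, recent)) / sum(weights)

S_end, I_end, R_end = euler_sir(990.0, 10.0, 0.0, beta=0.3, gamma=0.1, dt=0.1, steps=100)
r = weighted_avg_R0([4.0, 2.0, 1.0], k=2, omega=0.5)
```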
Numerous urban canopy schemes have recently been developed for mesoscale models in order to approximate the drag and turbulent production effects of a city on the air flow. However, little data exist by which to evaluate the efficacy of the schemes since "area-averaged"...
Collins, Alison M; Barchia, Idris M
2014-01-31
Serology indicates that Lawsonia intracellularis infection is widespread in many countries, with most pigs seroconverting before 22 weeks of age. However, the majority of animals appear to be sub-clinically affected, demonstrated by the low reported prevalence of diarrhoea. Production losses caused by sub-clinical proliferative enteropathy (PE) are more difficult to diagnose, indicating the need for a quantitative L. intracellularis assay that correlates well with disease severity. In previous studies, increasing numbers of L. intracellularis in pig faeces, quantified with a real time polymerase chain reaction (qPCR), showed a strong negative correlation with average daily gain (ADG). In this study, the association between faecal L. intracellularis numbers and PE severity was examined in two L. intracellularis experimental challenge trials (n1=32 and n2=95). The number of L. intracellularis shed in individual faeces was determined by qPCR on days 0, 7, 14, 17 and 21 days post challenge, and average daily gain was recorded over the same period. The severity of histopathological lesions of PE was scored at 21 days post challenge. L. intracellularis numbers correlated well with histopathology severity and faecal consistency scores (r=0.72 and 0.68, respectively), and negatively with ADG (r=-0.44). Large reductions in ADG (131 g/day) occurred when the number of L. intracellularis shed by experimentally challenged pigs increased from 10(7) to 10(8)L. intracellularis, although smaller ADG reductions were also observed (15 g/day) when the number of L. intracellularis increased from 10(6) to 10(7)L. intracellularis. PMID:24388631
Südmeyer, T; Brunner, F; Innerhofer, E; Paschotta, R; Furusawa, K; Baggett, J C; Monro, T M; Richardson, D J; Keller, U
2003-10-15
We demonstrate that nonlinear fiber compression is possible at unprecedented average power levels by use of a large-mode-area holey (microstructured) fiber and a passively mode-locked thin disk Yb:YAG laser operating at 1030 nm. We broaden the optical spectrum of the 810-fs pump pulses by nonlinear propagation in the fiber and remove the resultant chirp with a dispersive prism pair to achieve 18 W of average power in 33-fs pulses with a peak power of 12 MW and a repetition rate of 34 MHz. The output beam is nearly diffraction limited and is linearly polarized. PMID:14587786
The Average Body Surface Area of Adult Cancer Patients in the UK: A Multicentre Retrospective Study
Sacco, Joseph J.; Botten, Joanne; Macbeth, Fergus; Bagust, Adrian; Clark, Peter
2010-01-01
The majority of chemotherapy drugs are dosed based on body surface area (BSA). No standard BSA values for patients being treated in the United Kingdom are available on which to base dose and cost calculations. We therefore retrospectively assessed the BSA of patients receiving chemotherapy treatment at three oncology centres in the UK between 1st January 2005 and 31st December 2005. A total of 3613 patients receiving chemotherapy for head and neck, ovarian, lung, upper GI/pancreas, breast or colorectal cancers were included. The overall mean BSA was 1.79 m2 (95% CI 1.78–1.80) with a mean BSA for men of 1.91 m2 (1.90–1.92) and 1.71 m2 (1.70–1.72) for women. Results were consistent across the three centres. No significant differences were noted between treatment in the adjuvant or palliative setting in patients with breast or colorectal cancer. However, statistically significant, albeit small, differences were detected between some tumour groups. In view of the consistency of results between three geographically distinct UK cancer centres, we believe the results of this study may be generalised and used in future costings and budgeting for new chemotherapy agents in the UK. PMID:20126669
NASA Astrophysics Data System (ADS)
Elmore, A. J.; Guinn, S. M.
2009-12-01
Land surface phenology (LSP) is the seasonal pattern of vegetation dynamics that occur each spring and fall. Multiple drivers of spatial variation in LSP and its variation over time have been analyzed using satellite remote sensing. Until recently, these observations have been restricted to moderate- and low-resolution data, as it is only at these spatial resolutions for which temporally continuous data is available. However, understanding small scale variation in LSP over space and time may be key to linking pattern to process, and in particular, could be used to understand how ecological processes at the stand level scale to landscapes and continents. Through utilization of the large, and now free, Landsat record, recent research has led to the development of robust methods for calculating average phenological patterns at 30-m resolution by stacking two decades worth of data by acquisition day of year (DOY). Here we have extended these techniques to calculate the deviation from the average LSP for any given acquisition DOY-year combination. We model the average LSP as two sigmoid functions, one increasing in spring and a second decreasing in fall, connected by a sloped line representing gradual summer leaf area changes (see Figure). Deviation from the average LSP is considered here to take two forms: (1) residual vegetation cover in mid- to late-summer represent locations in which disturbance, drought, or (alternatively) better than average growing conditions have resulted a separation (either negative or positive) from the average vegetation cover for that DOY, and (2) climate conditions that result in an earlier or later onset of greenness, exhibited as a separation from the average spring onset of greenness curve in the DOY direction (either early or late.) Our study system for this work is the deciduous forests of the mid-Atlantic, USA, where we show that late summer vegetation cover is tied to edaphic properties governing the site specific soil moisture
Gorsevski, Pece V; Donevska, Katerina R; Mitrovski, Cvetko D; Frizado, Joseph P
2012-02-01
This paper presents a GIS-based multi-criteria decision analysis approach for evaluating the suitability for landfill site selection in the Polog Region, Macedonia. The multi-criteria decision framework considers environmental and economic factors which are standardized by fuzzy membership functions and combined by integration of analytical hierarchy process (AHP) and ordered weighted average (OWA) techniques. The AHP is used for the elicitation of attribute weights while the OWA operator function is used to generate a wide range of decision alternatives for addressing uncertainty associated with interaction between multiple criteria. The usefulness of the approach is illustrated by different OWA scenarios that report landfill suitability on a scale between 0 and 1. The OWA scenarios are intended to quantify the level of risk taking (i.e., optimistic, pessimistic, and neutral) and to facilitate a better understanding of patterns that emerge from decision alternatives involved in the decision making process. PMID:22030279
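The OWA operator itself is compact: sort the standardized criterion values, then take a dot product with the order weights, whose shape encodes the level of risk taking. A Python sketch with invented suitability scores for a single map cell:

```python
def owa(values, order_weights):
    """Ordered weighted average: sort the standardized criterion values
    in descending order, then weight by position. [1,0,...,0] returns the
    max (optimistic), [0,...,0,1] the min (pessimistic), and uniform
    weights give the ordinary mean (risk-neutral)."""
    assert abs(sum(order_weights) - 1.0) < 1e-9, "order weights must sum to 1"
    ranked = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(order_weights, ranked))

# Three standardized (0-1) landfill-suitability criteria for one cell, illustrative
crit = [0.9, 0.4, 0.6]
optimistic = owa(crit, [1.0, 0.0, 0.0])     # = max
pessimistic = owa(crit, [0.0, 0.0, 1.0])    # = min
neutral = owa(crit, [1/3, 1/3, 1/3])        # = mean
```

In the GIS workflow the AHP-derived attribute weights are applied to the criteria before this ordering step; varying the order weights then generates the spectrum of decision scenarios.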
Idih, E. E.; Ezem, B. U.; Nzeribe, E. A.; Onyegbule, A. O.; Duru, B. C.; Amajoyi, C. C.
2016-01-01
Background: Despite the global efforts made to eradicate malaria, it continues to be a significant cause of morbidity and mortality in both neonates and the parturients. This study was done to determine the relationship between placental parasitemia, average neonatal birth weight and the relationship between the use of malaria preventive measures and the occurrence of placental parasitemia with the aim to improving maternal and neonatal outcome. Patients and Methods: This cross-sectional study was done at the labor ward unit of the Federal Medical Center, Owerri, from December 2013 to May 2014. It involved one hundred and eighty primigravidae and baby pairs recruited consecutively. Thick and thin blood films were made from maternal peripheral blood and placenta. The babies were examined and weighed immediately after delivery. Results: Most of the participants had only one dose of intermittent preventive therapy (75%) with statistically significant higher level of fever episodes (P < 0.0001). Forty participants (58.0%) did not use any form of malaria preventive measure in pregnancy (P < 0.0001) and had a significantly higher placental parasitemia when compared with their counterparts. Average birth weight of neonates with placental parasitemia in mothers who used intermittent preventive therapy (IPT) only (t = 2.22, P = 0.005), and IPT + insecticide-treated net (ITN) (t = 7.91, P ≤ 0.000) was significantly higher than those who did not use any form of malaria prevention in pregnancy (t = 4.69, P ≤ 0.0001). Conclusion: Primigravidae with placental or maternal peripheral parasitemia who failed to use malaria preventive measures delivered babies with reduced average birth weight. A scheme aimed at making ITN readily available, and improving the girl child education is highly recommended.
Tsodikov, Oleg V; Record, M Thomas; Sergeev, Yuri V
2002-04-30
New computer programs, SurfRace and FastSurf, perform fast calculations of the solvent accessible and molecular (solvent excluded) surface areas of macromolecules. Program SurfRace also calculates the areas of cavities inaccessible from the outside. We introduce the definition of average curvature of molecular surface and calculate average molecular surface curvatures for each atom in a structure. All surface area and curvature calculations are analytic and therefore yield exact values of these quantities. High calculation speed of this software is achieved primarily by avoiding computationally expensive mathematical procedures wherever possible and by efficient handling of surface data structures. The programs are written initially in the language C for PCs running Windows 2000/98/NT, but their code is portable to other platforms with only minor changes in input-output procedures. The algorithm is robust and does not ignore either multiplicity or degeneracy of atomic overlaps. Fast, memory-efficient and robust execution make this software attractive for applications both in computationally expensive energy minimization algorithms, such as docking or molecular dynamics simulations, and in stand-alone surface area and curvature calculations. PMID:11939594
Larsen, Inge; Hjulsager, Charlotte Kristiane; Holm, Anders; Olsen, John Elmerdahl; Nielsen, Søren Saxmose; Nielsen, Jens Peter
2016-01-01
Oral treatment with antimicrobials is widely used in pig production for the control of gastrointestinal infections. Lawsonia intracellularis (LI) causes enteritis in pigs older than six weeks of age and is commonly treated with antimicrobials. The objective of this study was to evaluate the efficacy of three oral dosage regimens (5, 10 and 20 mg/kg body weight) of oxytetracycline (OTC) in drinking water over a five-day period on diarrhoea, faecal shedding of LI and average daily weight gain (ADG). A randomised clinical trial was carried out in four Danish pig herds. In total, 539 animals from 37 batches of nursery pigs were included in the study. The dosage regimens were randomly allocated to each batch and initiated at presence of assumed LI-related diarrhoea. In general, all OTC doses used for the treatment of LI infection resulted in reduced diarrhoea and LI shedding after treatment. Treatment with a low dose of 5 mg OTC per kg body weight, however, tended to cause more watery faeces and resulted in higher odds of pigs shedding LI above detection level when compared to medium and high doses (with odds ratios of 5.5 and 8.4, respectively). No association was found between the dose of OTC and the ADG. In conclusion, a dose of 5 mg OTC per kg body weight was adequate for reducing the high-level LI shedding associated with enteropathy, but a dose of 10 mg OTC per kg body weight was necessary to obtain a maximum reduction in LI shedding. PMID:26718056
Boosting with Averaged Weight Vectors
NASA Technical Reports Server (NTRS)
Oza, Nikunj C.; Clancy, Daniel (Technical Monitor)
2002-01-01
AdaBoost is a well-known ensemble learning algorithm that constructs its constituent or base models in sequence. A key step in AdaBoost is constructing a distribution over the training examples to create each base model. This distribution, represented as a vector, is constructed to be orthogonal to the vector of mistakes made by the previous base model in the sequence. The idea is to make the next base model's errors uncorrelated with those of the previous model. Some researchers have pointed out the intuition that it is probably better to construct a distribution that is orthogonal to the mistake vectors of all the previous base models, but that this is not always possible. We present an algorithm that attempts to come as close as possible to this goal in an efficient manner. We present experimental results demonstrating significant improvement over AdaBoost and the Totally Corrective boosting algorithm, which also attempts to satisfy this goal.
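The orthogonality property described above can be seen in the standard AdaBoost reweighting step: after the exponential update, the total weight on examples the previous model got wrong equals the weight on those it got right, so the new distribution is orthogonal to that model's ±1 mistake vector. A minimal Python sketch (a generic AdaBoost-style update, not the averaged-weight-vector algorithm of this paper):

```python
import math

def next_distribution(dist, mistakes):
    """One AdaBoost reweighting step. `mistakes` is the +/-1 mistake vector
    of the previous base model (+1 = wrong, -1 = right). The returned
    distribution is orthogonal to that mistake vector."""
    err = sum(d for d, m in zip(dist, mistakes) if m == 1)  # weighted error
    alpha = 0.5 * math.log((1 - err) / err)                 # model weight
    new = [d * math.exp(alpha * m) for d, m in zip(dist, mistakes)]
    z = sum(new)                                            # normalizer
    return [d / z for d in new]

# Uniform start; the first base model misclassifies only example 0
d = next_distribution([0.25, 0.25, 0.25, 0.25], [1, -1, -1, -1])
```

The algorithm in this paper aims for near-orthogonality to all previous mistake vectors at once, rather than only the most recent one.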
NASA Technical Reports Server (NTRS)
Schols, J. L.; Eloranta, E. W.
1992-01-01
Area-averaged horizontal wind measurements are derived from the motion of spatial inhomogeneities in aerosol backscattering observed with a volume-imaging lidar. Spatial averaging provides high precision, reducing sample variations of wind measurements well below the level of turbulent fluctuations, even under conditions of very light mean winds and strong convection or under the difficult conditions represented by roll convection. Wind velocities are measured using the two-dimensional spatial cross correlation computed between successive horizontal plane maps of aerosol backscattering, assembled from three-dimensional lidar scans. Prior to calculation of the correlation function, three crucial steps are performed: (1) the scans are corrected for image distortion by the wind during a finite scan time; (2) a temporal high pass median filtering is applied to eliminate structures that do not move with the wind; and (3) a histogram equalization is employed to reduce biases to the brightest features.
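The core of the correlation step can be sketched with synthetic data: the peak of the 2-D cross correlation of two successive backscatter maps gives the displacement of the aerosol structures, and dividing by the time between scans gives the area-averaged wind. The NumPy fragment below (FFT-based correlation on mean-removed images, illustrative inputs) omits the distortion correction, median filtering, and histogram equalization described above:

```python
import numpy as np

def displacement_from_images(img1, img2):
    """Pixel displacement between two successive aerosol-backscatter maps,
    from the peak of their 2-D cross correlation (computed via the FFT
    on mean-removed images)."""
    a = img1 - img1.mean()
    b = img2 - img2.mean()
    corr = np.fft.ifft2(np.fft.fft2(b) * np.conj(np.fft.fft2(a))).real
    iy, ix = np.unravel_index(np.argmax(corr), corr.shape)
    ny, nx = corr.shape
    dy = iy if iy <= ny // 2 else iy - ny   # wrap indices to signed shifts
    dx = ix if ix <= nx // 2 else ix - nx
    return dy, dx

rng = np.random.default_rng(0)
img1 = rng.random((64, 64))
img2 = np.roll(img1, shift=(3, 5), axis=(0, 1))  # features advected by (3, 5)
dy, dx = displacement_from_images(img1, img2)
```

Averaging over the whole scan plane is what gives the technique its precision relative to point measurements from a single tower anemometer.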
NASA Astrophysics Data System (ADS)
Obata, Kenta; Miura, Tomoaki; Yoshioka, Hiroki
2012-01-01
Area-averaged vegetation index (VI) depends on spatial resolution and the computational approach used to calculate the VI from the data. Certain data treatments can introduce scaling effects and a systematic bias into datasets gathered from different sensors. This study investigated the mechanisms underlying the scaling effects of a two-band spectral VI defined in terms of the ratio of two linear sums of the red and near-infrared reflectances (a general form of the two-band VI). The general form of the VI model was linearly transformed to yield a common functional VI form that elucidated the nature of the monotonic behavior. An analytic investigation was conducted in which a two-band linear mixture model was assumed. The trends (increasing or decreasing) in the area-averaged VIs could be explained in terms of a single scalar index, ην, which may be expressed in terms of the spectra of the vegetation and nonvegetation endmembers as well as the coefficients unique to each VI. The maximum error bounds on the scaling effects were derived as a function of the endmember spectra and the choice of VI. The validity of the expressions was explored by conducting a set of numerical experiments that focused on the monotonic behavior and trends in several VIs.
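The scaling effect is easy to demonstrate numerically with a two-endmember linear mixture: averaging the reflectances first (the coarse pixel) and averaging the VIs of the endmembers (the fine pixels) generally give different numbers, because a ratio of linear sums is nonlinear in the reflectances. A Python sketch using NDVI as the instance of the general two-band VI form, with invented endmember spectra:

```python
def ndvi(red, nir):
    """NDVI as an instance of the general two-band VI:
    a ratio of two linear sums of the red and NIR reflectances."""
    return (nir - red) / (nir + red)

# Illustrative two-endmember spectra: vegetation and nonvegetation (soil)
red_v, nir_v = 0.05, 0.40
red_s, nir_s = 0.20, 0.30
f = 0.5  # vegetation fraction inside the coarse pixel

# Coarse pixel: average reflectances, then compute the VI
vi_coarse = ndvi(f * red_v + (1 - f) * red_s, f * nir_v + (1 - f) * nir_s)
# Fine pixels: compute the VI per endmember, then area-average
vi_fine = f * ndvi(red_v, nir_v) + (1 - f) * ndvi(red_s, nir_s)
```

The sign and size of the gap between `vi_coarse` and `vi_fine` is what the scalar index ην and the error bounds in the abstract characterize analytically.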
Inducing Conservation of Number, Weight, Volume, Area, and Mass in Pre-School Children.
ERIC Educational Resources Information Center
Young, Beverly S.
The major question this study attempted to answer was, "Can conservation of number, area, weight, mass, and volume be induced and retained by 3- and 4-year-old children by structured instruction with a multivariate approach?" Three nursery schools in Iowa City supplied subjects for this study. The Institute of Child Behavior and Development…
Meyer, Philip D.; Ye, Ming; Neuman, Shlomo P.; Rockhold, Mark L.
2008-06-01
A methodology to systematically and quantitatively assess model predictive uncertainty was applied to saturated zone uranium transport at the 300 Area of the U.S. Department of Energy Hanford Site in Washington State, USA. The methodology extends Maximum Likelihood Bayesian Model Averaging (MLBMA) to account jointly for uncertainties due to the conceptual-mathematical basis of models, model parameters, and the scenarios to which the models are applied. Conceptual uncertainty was represented by postulating four alternative models of hydrogeology and uranium adsorption. Parameter uncertainties were represented by estimation covariances resulting from the joint calibration of each model to observed heads and uranium concentration. Posterior model probability was dominated by one model. Results demonstrated the role of model complexity and fidelity to observed system behavior in determining model probabilities, as well as the impact of prior information. Two scenarios representing alternative future behavior of the Columbia River adjacent to the site were considered. Predictive simulations carried out with the calibrated models illustrated the computation of model- and scenario-averaged predictions and how results can be displayed to clearly indicate the individual contributions to predictive uncertainty of the model, parameter, and scenario uncertainties. The application demonstrated the practicability of applying a comprehensive uncertainty assessment to large-scale, detailed groundwater flow and transport modelling.
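The model-averaging arithmetic at the heart of MLBMA can be sketched compactly: each calibrated model gets a posterior probability from its information-criterion value, and the averaged prediction is the probability-weighted sum of the individual predictions. The Python fragment below uses the standard exp(-ΔIC/2) weighting and invented IC values and predictions as an illustration, not the Hanford 300 Area results:

```python
import math

def posterior_model_probs(ic_values):
    """Posterior model probabilities from information-criterion values
    (e.g., KIC in MLBMA): p_k proportional to exp(-delta_IC_k / 2)."""
    d_min = min(ic_values)
    w = [math.exp(-(ic - d_min) / 2.0) for ic in ic_values]
    s = sum(w)
    return [x / s for x in w]

def model_averaged_prediction(predictions, probs):
    """Posterior-probability-weighted average of the model predictions."""
    return sum(p * y for p, y in zip(probs, predictions))

# Four hypothetical alternative models: IC values and predicted concentrations
probs = posterior_model_probs([100.0, 104.0, 110.0, 120.0])
avg = model_averaged_prediction([10.0, 12.0, 9.0, 15.0], probs)
```

The strong concentration of probability on the lowest-IC model mirrors the abstract's finding that one model dominated the posterior; scenario averaging adds a second weighted sum over scenario probabilities.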
Yaskolka Meir, Anat; Shelef, Ilan; Schwarzfuchs, Dan; Gepner, Yftach; Tene, Lilac; Zelicha, Hila; Tsaban, Gal; Bilitzky, Avital; Komy, Oded; Cohen, Noa; Bril, Nitzan; Rein, Michal; Serfaty, Dana; Kenigsbuch, Shira; Chassidim, Yoash; Zeller, Lior; Ceglarek, Uta; Stumvoll, Michael; Blüher, Matthias; Thiery, Joachim; Stampfer, Meir J; Rudich, Assaf; Shai, Iris
2016-08-01
It remains unclear whether intermuscular adipose tissue (IMAT) has any metabolic influence or whether it is merely a marker of abnormalities, as well as what are the effects of specific lifestyle strategies for weight loss on the dynamics of both IMAT and thigh muscle area (TMA). We followed the trajectory of IMAT and TMA during 18-mo lifestyle intervention among 278 sedentary participants with abdominal obesity, using magnetic resonance imaging. We measured the resting metabolic rate (RMR) by an indirect calorimeter. Among 273 eligible participants (47.8 ± 9.3 yr of age), the mean IMAT was 9.6 ± 4.6 cm(2). Baseline IMAT levels were directly correlated with waist circumference, abdominal subdepots, C-reactive protein, and leptin and inversely correlated with baseline TMA and creatinine (P < 0.05 for all). After 18 mo (86.3% adherence), both IMAT (-1.6%) and TMA (-3.3%) significantly decreased (P < 0.01 vs. baseline). The changes in both IMAT and TMA were similar across the lifestyle intervention groups and directly corresponded with moderate weight loss (P < 0.001). IMAT change did not remain independently associated with decreased abdominal subdepots or improved cardiometabolic parameters after adjustments for age, sex, and 18-mo weight loss. In similar models, 18-mo TMA loss remained associated with decreased RMR, decreased activity, and with increased fasting glucose levels and IMAT (P < 0.05 for all). Unlike other fat depots, IMAT may not represent a unique or specific adipose tissue, instead largely reflecting body weight change per se. Moderate weight loss induced a significant decrease in thigh muscle area, suggesting the importance of resistance training to accompany weight loss programs. PMID:27402560
NASA Astrophysics Data System (ADS)
Naik, Haladhara; Kim, Guinyun; Kim, Kwangsoo; Zaman, Muhammad; Goswami, Ashok; Lee, Man Woo; Yang, Sung-Chul; Lee, Young-Ouk; Shin, Sung-Gyun; Cho, Moo-Hyun
2016-04-01
Photo-neutron cross sections of 197Au were experimentally determined for bremsstrahlung end-point energies of 50, 60, and 70 MeV by utilizing the activation and off-line γ-ray spectrometric technique, using the 100 MeV electron linac at the Pohang Accelerator Laboratory (PAL), Pohang, Korea. The 197Au(γ, xn; x = 1-6) reaction cross sections were calculated as a function of the bombarding photon energy by using the TALYS 1.6 computer code with default parameters. The flux-weighted average cross sections were obtained from the literature data and the theoretical values of TALYS 1.6 and TENDL-2014 for mono-energetic photons, and are found to be in good agreement with the present data. Isomeric yield ratios of 196m2,gAu from the 197Au(γ, n) reaction were also determined for the bremsstrahlung end-point energies of 50, 60, and 70 MeV, from the reaction cross sections of the m2- and g-states, based on the present experimental data, and are found to be in good agreement with the theoretical values based on TALYS 1.6 and TENDL-2014.
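A flux-weighted average cross section of the kind compared here is simply σ(E) weighted by the photon flux spectrum over the energy range. A minimal sketch, assuming a 1/E bremsstrahlung-like spectrum and a toy Gaussian σ(E) — illustrative values only, not the measured 197Au data:

```python
import numpy as np

def flux_weighted_average(energies, sigma, flux):
    """Trapezoidal flux-weighted average: ∫σ(E)φ(E)dE / ∫φ(E)dE."""
    num = np.trapz(sigma * flux, energies)
    den = np.trapz(flux, energies)
    return num / den

E = np.linspace(10.0, 50.0, 200)    # photon energy grid (MeV), assumed range
phi = 1.0 / E                        # 1/E bremsstrahlung-like spectrum (assumed)
# toy giant-dipole-resonance-shaped cross section in mb (assumed, not data)
sigma = 500.0 * np.exp(-0.5 * ((E - 14.0) / 3.0) ** 2)

sigma_avg = flux_weighted_average(E, sigma, phi)
# the 1/E flux emphasizes low energies, where the toy peak sits,
# so the flux-weighted average exceeds the unweighted mean of sigma
```

Because the flux weighting emphasizes the low-energy region, the weighted average differs systematically from a plain mean — which is why the weighting must match the actual bremsstrahlung spectrum when comparing with TALYS/TENDL values.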
Shmool, Jessie L C; Bobb, Jennifer F; Ito, Kazuhiko; Elston, Beth; Savitz, David A; Ross, Zev; Matte, Thomas D; Johnson, Sarah; Dominici, Francesca; Clougherty, Jane E
2015-10-01
Numerous studies have linked air pollution with adverse birth outcomes, but relatively few have examined differential associations across the socioeconomic gradient. To evaluate interaction effects of gestational nitrogen dioxide (NO2) and area-level socioeconomic deprivation on fetal growth, we used: (1) highly spatially-resolved air pollution data from the New York City Community Air Survey (NYCCAS); and (2) spatially-stratified principal component analysis of census variables previously associated with birth outcomes to define area-level deprivation. New York City (NYC) hospital birth records for years 2008-2010 were restricted to full-term, singleton births to non-smoking mothers (n=243,853). We used generalized additive mixed models to examine the potentially non-linear interaction of NO2 and deprivation categories on birth weight (and estimated linear associations, for comparison), adjusting for individual-level socio-demographic characteristics and sensitivity testing adjustment for co-pollutant exposures. Estimated NO2 exposures were highest, and most varying, among mothers residing in the most-affluent census tracts, and lowest among mothers residing in mid-range deprivation tracts. In non-linear models, we found an inverse association between NO2 and birth weight in the least-deprived and most-deprived areas (p-values<0.001 and 0.05, respectively) but no association in the mid-range of deprivation (p=0.8). Likewise, in linear models, a 10 ppb increase in NO2 was associated with a decrease in birth weight among mothers in the least-deprived and most-deprived areas of -16.2 g (95% CI: -21.9 to -10.5) and -11.0 g (95% CI: -22.8 to 0.9), respectively, and a non-significant change in the mid-range areas [β=0.5 g (95% CI: -7.7 to 8.7)]. Linear slopes in the most- and least-deprived quartiles differed from the mid-range (reference group) (p-values<0.001 and 0.09, respectively). The complex patterning in air pollution exposure and deprivation
Baimatova, Nassiba; Koziel, Jacek A; Kenessov, Bulat
2015-05-11
A new and simple method for benzene, toluene, ethylbenzene, and o-xylene (BTEX) quantification in vehicle exhaust was developed based on diffusion-controlled extraction onto a retracted solid-phase microextraction (SPME) fiber coating. The rationale was to develop a method based on existing and proven SPME technology that is feasible for field adaptation in developing countries. Passive sampling with the SPME fiber retracted into the needle extracted nearly two orders of magnitude less mass (n) compared with the exposed fiber (outside of the needle), and sampling was in a time-weighted averaging (TWA) mode. Both the sampling time (t) and fiber retraction depth (Z) were adjusted to quantify a wider range of Cgas. Extraction and quantification are conducted in a non-equilibrium mode. Effects of Cgas, t, Z, and T were tested. In addition, the contribution of n extracted by the metallic surfaces of the needle assembly without the SPME coating was studied, as was the effect of sample storage time on loss of n. Retracted TWA-SPME extractions followed the theoretical model. The extracted n of BTEX was proportional to Cgas, t, Dg, and T, and inversely proportional to Z. Method detection limits were 1.8, 2.7, 2.1, and 5.2 mg/m³ (0.51, 0.83, 0.66, and 1.62 ppm) for BTEX, respectively. The contribution of extraction onto metallic surfaces was reproducible and influenced by Cgas and t, and less so by T and Z. The new method was applied to measure BTEX in the exhaust gas of a 1995 Ford Crown Victoria and compared with a whole-gas, direct-injection method. PMID:25911428
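The diffusion-controlled model behind retracted-fiber TWA sampling is Fick's first law applied over the retraction gap, n = Dg·A·Cgas·t/Z, which reproduces the proportionalities listed in the abstract. A minimal sketch with assumed parameter values (not the paper's numbers):

```python
# Sketch of the retracted-SPME TWA relationship (Fick's first law over a
# cylindrical diffusion path): n = Dg * A * Cgas * t / Z.
# All numeric values below are illustrative assumptions, not study data.

def twa_extracted_mass(Dg_cm2_s, A_cm2, C_g_cm3, t_s, Z_cm):
    """Mass (g) extracted onto a retracted fiber coating after time t."""
    return Dg_cm2_s * A_cm2 * C_g_cm3 * t_s / Z_cm

Dg = 0.088     # benzene diffusion coefficient in air, cm^2/s (assumed)
A = 7.85e-5    # needle-opening cross-sectional area, cm^2 (assumed)
C = 5.0e-9     # gas concentration, g/cm^3 (= 5 mg/m^3, assumed)
t = 600.0      # sampling time, s
Z = 0.5        # fiber retraction depth, cm

n = twa_extracted_mass(Dg, A, C, t, Z)
# n scales linearly with C and t and inversely with Z, matching the
# dependencies reported for the retracted-fiber extractions
```

Doubling the retraction depth Z halves the extracted mass, which is exactly the lever the authors use to tune the method's dynamic range for Cgas.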
MPWide: a light-weight library for efficient message passing over wide area networks
NASA Astrophysics Data System (ADS)
Groen, D.; Rieder, S.; Portegies Zwart, S.
2013-12-01
We present MPWide, a lightweight communication library which allows efficient message passing over a distributed network. MPWide has been designed to connect applications running on distributed (super)computing resources, and to maximize the communication performance on wide area networks for those without administrative privileges. It can be used to provide message passing between applications, move files, and make very fast connections in client-server environments. MPWide has already been applied to enable distributed cosmological simulations across up to four supercomputers on two continents, and to couple two different blood flow simulations to form a multiscale simulation.
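MPWide itself is a C++ library; as a language-agnostic illustration of the client-server message passing pattern it provides, here is a minimal Python socket sketch with a length-prefixed wire format. This is not MPWide's actual API — every function name here is a stand-in:

```python
import socket
import threading

def send_msg(sock, payload: bytes):
    # length-prefixed frame: 8-byte big-endian size, then the payload
    sock.sendall(len(payload).to_bytes(8, "big") + payload)

def recv_exact(sock, nbytes):
    buf = b""
    while len(buf) < nbytes:
        chunk = sock.recv(nbytes - len(buf))
        if not chunk:
            raise ConnectionError("peer closed connection")
        buf += chunk
    return buf

def recv_msg(sock) -> bytes:
    size = int.from_bytes(recv_exact(sock, 8), "big")
    return recv_exact(sock, size)

# toy echo server: reply with the upper-cased message
srv = socket.socket()
srv.bind(("127.0.0.1", 0))       # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

def serve():
    conn, _ = srv.accept()
    send_msg(conn, recv_msg(conn).upper())
    conn.close()

threading.Thread(target=serve, daemon=True).start()

cli = socket.socket()
cli.connect(("127.0.0.1", port))
send_msg(cli, b"hello over the wide area")
reply = recv_msg(cli)            # b"HELLO OVER THE WIDE AREA"
cli.close()
```

The length-prefixed framing is the essential ingredient: TCP gives a byte stream, not messages, so a coupling library must delimit messages itself before it can route them between simulations.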
White, R R; Capper, J L
2013-12-01
The objective of this study was to assess the environmental impact, economic viability, and social acceptability of 3 beef production systems with differing levels of efficiency. A deterministic model of U.S. beef production was used to predict the number of animals required to produce 1 × 10⁹ kg HCW of beef. Three production treatments were compared: 1 representing average U.S. production (control), 1 with a 15% increase in ADG, and 1 with a 15% increase in finishing weight (FW). For each treatment, various socioeconomic scenarios were compared to account for uncertainty in producer and consumer behavior. Environmental impact metrics included feed consumption, land use, water use, greenhouse gas emissions (GHGe), and N and P excretion. Feed cost, animal purchase cost, animal sales revenue, and income over costs (IOVC) were used as metrics of economic viability. Willingness to pay (WTP) was used to identify improvements or reductions in social acceptability. When ADG improved, feedstuff consumption, land use, and water use decreased by 6.4%, 3.2%, and 12.3%, respectively, compared with the control. Carbon footprint decreased 11.7%, and N and P excretion were reduced by 4% and 13.8%, respectively. When FW improved, decreases were seen in feedstuff consumption (12.1%), water use (9.2%), and land use (15.5%); total GHGe decreased 14.7%; and N and P excretion decreased by 10.1% and 17.2%, compared with the control. Changes in IOVC were dependent on the socioeconomic scenario. When the ADG scenario was compared with the control, changes in sector profitability ranged from 51 to 117% (cow-calf), -38 to 157% (stocker), and 37 to 134% (feedlot). When improved FW was compared, changes in cow-calf profit ranged from 67% to 143%, stocker profit ranged from -41% to 155%, and feedlot profit ranged from 37% to 136%. When WTP was based on marketing beef being more efficiently produced, WTP improved by 10%; thus, social acceptability increased. When marketing was based on production
Krpálková, L; Cabrera, V E; Kvapilík, J; Burdych, J; Crump, P
2014-10-01
The objective of this study was to evaluate the associations of variable intensity in rearing dairy heifers on 33 commercial dairy herds, including 23,008 cows and 18,139 heifers, with age at first calving (AFC), average daily weight gain (ADG), and milk yield (MY) level on reproduction traits and profitability. Milk yield during the production period was analyzed relative to reproduction and economic parameters. Data were collected during a 1-yr period (2011). The farms were located in 12 regions in the Czech Republic. The results show that those herds with more intensive rearing periods had lower conception rates among heifers at first and overall services. The differences in those conception rates between the group with the greatest ADG (≥0.800 kg/d) and the group with the least ADG (≤0.699 kg/d) were approximately 10 percentage points in favor of the least ADG. All the evaluated reproduction traits differed between AFC groups. Conception at first and overall services (cows) was greatest in herds with AFC ≥800 d. The shortest days open (105 d) and calving interval (396 d) were found in the middle AFC group (799 to 750 d). The highest number of completed lactations (2.67) was observed in the group with latest AFC (≥800 d). The earliest AFC group (≤749 d) was characterized by the highest depreciation costs per cow at 8,275 Czech crowns (US$414), and the highest culling rate for cows of 41%. The most profitable rearing approach was reflected in the middle AFC (799 to 750 d) and middle ADG (0.799 to 0.700 kg) groups. The highest MY (≥8,500 kg) occurred with the earliest AFC of 780 d. Higher MY led to lower conception rates in cows, but the highest MY group also had the shortest days open (106 d) and a calving interval of 386 d. The same MY group had the highest cow depreciation costs, net profit, and profitability without subsidies of 2.67%. We conclude that achieving low AFC will not always be the most profitable approach, which will depend upon farm
Code of Federal Regulations, 2011 CFR
2011-07-01
... Extraction for Vegetable Oil Production Compliance Requirements § 63.2854 How do I determine the weighted... received for use in your vegetable oil production process. By the end of each calendar month following an... the solvent in each delivery of solvent, including solvent recovered from off-site oil. To...
Code of Federal Regulations, 2010 CFR
2010-07-01
... Extraction for Vegetable Oil Production Compliance Requirements § 63.2854 How do I determine the weighted... received for use in your vegetable oil production process. By the end of each calendar month following an... the solvent in each delivery of solvent, including solvent recovered from off-site oil. To...
NASA Astrophysics Data System (ADS)
Kumazawa, Shinsuke; Kato, Takeyoshi; Honda, Nobuyuki; Koaizawa, Masakazu; Nishino, Shinichi; Suzuoki, Yasuo
Based on past studies of insolation fluctuation, the smoothing effect of insolation among different locations is not sufficient for cycles longer than a few tens of minutes. This study evaluated the maximum fluctuation width (MFW) within at most 120 min of the ensemble-average insolation of 40 points, of its clearness index, and of the ensemble-average insolation excluding the sun-position-dependent component. When weather conditions worsened after noon in almost all areas, the ensemble-average insolation was significantly reduced, resulting in an MFW of 540 W/m² within 120 min. In another example, when the weather recovered during the morning in many areas, the MFW was also large. Using data observed over 6 months, this study calculated the cumulative frequency distribution of the MFW of the ensemble-average insolation, its clearness index, and the ensemble-average insolation excluding the sun-position-dependent component. The results show that the absolute MFW of the ensemble-average insolation calculated with a 120-min window ranges mainly between 200 and 300 W/m². The absolute MFW of the insolation excluding the sun-position-dependent component, evaluated with a 120-min window, is smaller than 200 W/m² on most days and differs little from the MFW evaluated with a 60-min window. Finally, this study discusses the practical usability of insolation forecasts.
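The MFW statistic described above can be sketched as the largest max-min spread of the insolation series inside any sliding window of the given length. The data below are synthetic (a sine plus an abrupt afternoon drop), not the 40-point observations:

```python
import math

def max_fluctuation_width(series, window):
    """Largest (max - min) spread inside any contiguous window of `window` samples."""
    best = 0.0
    for i in range(len(series) - window + 1):
        w = series[i:i + window]
        best = max(best, max(w) - min(w))
    return best

# synthetic 1-min ensemble-average insolation (W/m^2), with a sharp
# afternoon drop standing in for worsening weather across the area
insolation = [600.0 + 200.0 * math.sin(i / 30.0) for i in range(480)]
for i in range(240, 480):
    insolation[i] -= 350.0

mfw_120 = max_fluctuation_width(insolation, 120)  # 120-min window
mfw_60 = max_fluctuation_width(insolation, 60)    # 60-min window
# a longer window can only widen, never shrink, the observed spread
```

Because every 60-sample window is contained in some 120-sample window, the 120-min MFW is always at least the 60-min MFW — consistent with the paper's finding that the two windows give similar values once the sun-position component is removed.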
Borah, Madhur; Baruah, Rupali
2015-01-01
Introduction: Low birth weight (LBW) infants suffer more episodes of common childhood diseases, and their spells of illness are more prolonged and serious. Longitudinal studies are useful to observe the health and disease pattern of LBW babies over time. Aims: This study was carried out in rural areas of Assam to assess the morbidity pattern of LBW babies during their first 6 months of life and to compare them with normal birth weight (NBW) counterparts. Materials and Methods: A total of 30 LBW babies (0-2 months) and an equal number of NBW babies from three subcenters under Boko Primary Health Centre of Assam were followed up at monthly intervals till 6 months of age in a prospective fashion. Results: More than three-quarters of LBW babies (77%) were suffering from moderate or severe under-nutrition during the follow-up. Acute respiratory tract infection (ARI) was the predominant morbidity suffered by LBW infants. The other illnesses suffered by the LBW infants during the follow-up were diarrhea, skin disorders, fever, and ear disorders. LBW infants had more episodes of hospitalization (65%) than the NBW infants (35%). The incidence rate of episodes of morbidity was found to be higher among those LBW infants who remained underweight at 6 months of age (incidence rate of 49.3 per 100 infant-months) and those who were not exclusively breast fed till 6 months of age (incidence rate of 66.7 per 100 infant-months). Conclusion: The study revealed that during the follow-up, the incidence of morbidities was higher among the LBW babies compared to NBW babies. It was also observed that ARI was the predominant morbidity in the LBW infants during the first 6 months of age. PMID:26288777
NASA Technical Reports Server (NTRS)
Huff, Edward M.; Mosher, Marianne; Barszcz, Eric
2002-01-01
Recent research using NASA Ames AH-1 and OH-58C helicopters, and NASA Glenn test rigs, has shown that in-flight vibration data are typically non-stationary [1-4]. The nature and extent of this non-stationarity is most likely produced by several factors operating simultaneously. The aerodynamic flight environment and pilot commands provide continuously changing inputs, with a complex dynamic response that includes automatic feedback control from the engine regulator. It would appear that the combined effects operate primarily through an induced torque profile, which causes concomitant stress modulation at the individual internal gear meshes in the transmission. This notion is supported by several analyses, which show that upwards of 93% of the vibration signal's variance can be explained by knowledge of torque alone. That this relationship is stronger in an AH-1 than an OH-58, where measured non-stationarity is greater, suggests that the overall mass of the vehicle is an important consideration. In the lighter aircraft, the unsteady aerodynamic influences transmit relatively greater unsteady dynamic forces to the mechanical components, quite possibly contributing to its greater non-stationarity. In a recent paper using OH-58C pinion data [5], the authors have shown that in computing a time synchronous average (TSA) for various single-value metric computations, an effective trade-off can be obtained between sample size and measured stationarity by using data from only a single mesh cycle. A mesh cycle, which is defined as the number of rotations required for the gear teeth to return to their original mating position, has the property of representing all of the discrete phase angles of the opposing gears exactly once in the average. Measured stationarity is probably maximized because a single mesh cycle of the pinion gear occurs over a very short span of time, during which time-dependent non-stationary effects are kept to a minimum. Clearly, the advantage of local
Ruiz, J M; Busnel, J P; Benoît, J P
1990-09-01
The phase separation of fractionated poly(DL-lactic acid-co-glycolic acid) copolymers 50/50 was determined by silicone oil addition. Polymer fractionation by preparative size exclusion chromatography afforded five different microsphere batches. Average molecular weight determined the existence, width, and displacement of the "stability window" inside the phase diagrams, and also microsphere characteristics such as core loading and amount released over 6 hr. Further, the gyration and hydrodynamic radii were measured by light scattering. It is concluded that the polymer-solvent affinity is largely modified by the variation of average molecular weights owing to different levels of solubility. The lower the average molecular weight is, the better methylene chloride serves as a solvent for the coating material. However, a paradoxical effect due to an increase in free carboxyl and hydroxyl groups is noticed for polymers of 18,130 and 31,030 SEC (size exclusion chromatography) Mw. For microencapsulation, polymers having an intermediate molecular weight (47,250) were the most appropriate in terms of core loading and release purposes. PMID:2235892
NASA Astrophysics Data System (ADS)
Hernández, Leonor; Juliá, J. Enrique; Paranjape, Sidharth; Hibiki, Takashi; Ishii, Mamoru
2010-11-01
In this work, the use of the area-averaged void fraction and bubble chord length entropies is introduced as flow regime indicators in two-phase flow systems. The entropy provides quantitative information about the disorder in the area-averaged void fraction or bubble chord length distributions. The CPDF (cumulative probability distribution function) of void fractions and bubble chord lengths, obtained by means of impedance meters and conductivity probes, is used to calculate both entropies. Entropy values for 242 flow conditions in upward two-phase flows in 25.4- and 50.8-mm pipes have been calculated. The measured conditions cover ranges from 0.13 to 5 m/s in the superficial liquid velocity j f and from 0.01 to 25 m/s in the superficial gas velocity j g. The physical meaning of both entropies has been interpreted using the visual flow regime map information. The capability of the area-averaged void fraction and bubble chord length entropies as flow regime indicators has been checked against other statistical parameters and with input signals of different durations. The area-averaged void fraction and bubble chord length entropies provide better, or at least similar, results compared with those obtained from other indicators that use more than one parameter. The entropy reduces the relevant information of the flow regimes to a single significant and useful parameter. In addition, the entropy computation time is shorter than that of the majority of the other indicators, and the use of a single input parameter also makes predictions faster.
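A minimal sketch of the entropy indicator, assuming the Shannon entropy of a binned empirical distribution as the disorder measure. The two sample distributions are synthetic, not the 242 measured flow conditions:

```python
import numpy as np

def shannon_entropy(samples, bins=20):
    """Shannon entropy (nats) of the empirical distribution of `samples`."""
    counts, _ = np.histogram(samples, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(0)
bubbly = rng.normal(0.10, 0.02, 5000)  # narrow distribution: ordered, bubbly-like
churn = rng.uniform(0.0, 0.8, 5000)    # broad distribution: disordered, churn-like

# the broader, more disordered distribution yields the larger entropy,
# which is the property exploited to separate flow regimes
```

Collapsing a whole void-fraction distribution into one scalar like this is what makes the entropy cheap to compute and usable as a single-parameter regime indicator.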
Ito, Tadashi; Sakai, Yoshihito; Nakamura, Eishi; Yamazaki, Kazunori; Yamada, Ayaka; Sato, Noritaka; Morita, Yoshifumi
2015-01-01
[Purpose] The purpose of this study was to examine the relationship between the paraspinal muscle cross-sectional area and the relative proprioceptive weighting ratio during local vibratory stimulation of older persons with lumbar spondylosis in an upright position. [Subjects] In all, 74 older persons hospitalized for lumbar spondylosis were included. [Methods] We measured the relative proprioceptive weighting ratio of postural sway using a Wii board while vibratory stimulations of 30, 60, or 240 Hz were applied to the subjects’ paraspinal or gastrocnemius muscles. Back strength, abdominal muscle strength, and erector spinae muscle (L1/L2, L4/L5) and lumbar multifidus (L1/L2, L4/L5) cross-sectional areas were evaluated. [Results] The erector spinae muscle (L1/L2) cross-sectional area was associated with the relative proprioceptive weighting ratio during 60 Hz stimulation. [Conclusion] These findings show that the relative proprioceptive weighting ratio compared to the erector spinae muscle (L1/L2) cross-sectional area under 60 Hz proprioceptive stimulation might be a good indicator of trunk proprioceptive sensitivity. PMID:26311962
NASA Astrophysics Data System (ADS)
Shakilur Rahman, Md.; Kim, Kwangsoo; Kim, Guinyun; Naik, Haladhara; Nadeem, Muhammad; Thi Hien, Nguyen; Shahid, Muhammad; Yang, Sung-Chul; Cho, Young-Sik; Lee, Young-Ouk; Shin, Sung-Gyun; Cho, Moo-Hyun; Woo Lee, Man; Kang, Yeong-Rok; Yang, Gwang-Mo; Ro, Tae-Ik
2016-07-01
We measured the flux-weighted average cross-sections and the isomeric yield ratios of 99m,g, 100m,g, 101m,g, 102m,gRh in the 103Rh(γ, xn) reactions with bremsstrahlung end-point energies of 55 and 60 MeV by the activation and off-line γ-ray spectrometric technique, using the 100 MeV electron linac at the Pohang Accelerator Laboratory (PAL), Korea. The flux-weighted average cross-sections were calculated by using the computer code TALYS 1.6 based on mono-energetic photons and compared with the present experimental data. The flux-weighted average cross-sections of the 103Rh(γ, xn) reactions at intermediate bremsstrahlung energies are measured here for the first time, and are found to increase from their threshold value up to a particular value, where the other reaction channels open up. Thereafter, they decrease with bremsstrahlung energy due to partitioning among the different reaction channels. The isomeric yield ratios (IR) of 99m,g, 100m,g, 101m,g, 102m,gRh in the 103Rh(γ, xn) reactions from the present work were compared with the literature data for the 103Rh(d, x), 102-99Ru(p, x), 103Rh(α, αn), 103Rh(α, 2p3n), 102Ru(3He, x), and 103Rh(γ, xn) reactions. It was found that the IR values of 102, 101, 100, 99Rh in all these reactions increase with the projectile energy, which indicates the role of excitation energy. At the same excitation energy, the IR values of 102, 101, 100, 99Rh are higher in the charged-particle-induced reactions than in the photon-induced reaction, which indicates the role of input angular momentum.
2014-01-01
Mesoporous ZnO nanoparticles have been synthesized with a tremendous increase in specific surface area, up to 578 m²/g compared with 5.54 m²/g in previous reports (J. Phys. Chem. C 113:14676-14680, 2009). Different mesoporous ZnO nanoparticles with average pore sizes ranging from 7.22 to 13.43 nm and specific surface areas ranging from 50.41 to 578 m²/g were prepared through the sol-gel method via a simple evaporation-induced self-assembly process. The hydrolysis rate of zinc acetate was varied using different concentrations of sodium hydroxide. Morphology, crystallinity, porosity, and J-V characteristics of the materials have been studied using transmission electron microscopy (TEM), X-ray diffraction (XRD), BET nitrogen adsorption/desorption, and Keithley instruments. PMID:25339855
NASA Astrophysics Data System (ADS)
Nagaoka, Tomoaki; Watanabe, Soichi; Sakurai, Kiyoko; Kunieda, Etsuo; Watanabe, Satoshi; Taki, Masao; Yamanaka, Yukio
2004-01-01
With advances in computer performance, the use of high-resolution voxel models of the entire human body has become more frequent in numerical dosimetry of electromagnetic waves. Using magnetic resonance imaging, we have developed realistic high-resolution whole-body voxel models for Japanese adult males and females of average height and weight. The developed models consist of cubic voxels of 2 mm on each side, segmented into 51 anatomic regions. The adult female model is the first of its kind in the world, and both are the first Asian voxel models (representing average Japanese adults) that enable numerical evaluation of electromagnetic dosimetry at high frequencies of up to 3 GHz. In this paper, we also describe the basic SAR characteristics of the developed models for the VHF/UHF bands, calculated using the finite-difference time-domain method.
Huang, Yuanyuan; Varsier, Nadège; Niksic, Stevan; Kocan, Enis; Pejanovic-Djurisic, Milica; Popovic, Milica; Koprivica, Mladen; Neskovic, Aleksandar; Milinkovic, Jelena; Gati, Azeddine; Person, Christian; Wiart, Joe
2016-09-01
This article is the first thorough study of average population exposure to third generation network (3G)-induced electromagnetic fields (EMFs), from both uplink and downlink radio emissions in different countries, geographical areas, and for different wireless device usages. Indeed, previous publications in the framework of exposure to EMFs generally focused on individual exposure coming from either personal devices or base stations. Results, derived from device usage statistics collected in France and Serbia, show a strong heterogeneity of exposure, both in time, that is, the traffic distribution over 24 h was found highly variable, and space, that is, the exposure to 3G networks in France was found to be roughly two times higher than in Serbia. Such heterogeneity is further explained based on real data and network architecture. Among those results, authors show that, contrary to popular belief, exposure to 3G EMFs is dominated by uplink radio emissions, resulting from voice and data traffic, and average population EMF exposure differs from one geographical area to another, as well as from one country to another, due to the different cellular network architectures and variability of mobile usage. Bioelectromagnetics. 37:382-390, 2016. © 2016 Wiley Periodicals, Inc. PMID:27385053
NASA Astrophysics Data System (ADS)
Obata, Kenta; Huete, Alfredo R.
2014-01-01
This study investigated the mechanisms underlying the scaling effects that apply to fraction of vegetation cover (FVC) estimates derived using two-band spectral vegetation index (VI) isoline-based linear mixture models (VI isoline-based LMMs). The VIs included the normalized difference vegetation index, a soil-adjusted vegetation index, and a two-band enhanced vegetation index (EVI2). This study focused in part on the monotonicity of an area-averaged FVC estimate as a function of spatial resolution. The proof of monotonicity yielded measures of the intrinsic area-averaged FVC uncertainties due to scaling effects. The derived results demonstrate that a factor ξ, defined as a function of the "true" and "estimated" endmember spectra of the vegetated and nonvegetated surfaces, determines whether the estimate is monotonic or nonmonotonic. The monotonic FVC values displayed a uniform increasing or decreasing trend that was independent of the choice of the two-band VI. Conditions under which scaling effects are eliminated from the FVC were identified. The monotonicity and the practical utility of the scaling theory were verified using numerical experiments applied to Landsat 7 Enhanced Thematic Mapper Plus (ETM+) data. The findings contribute to developing scale-invariant FVC estimation algorithms for multisensor use and data continuity.
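The simplest member of the model family studied here is a two-endmember linear mixture, FVC = (VI − VI_soil)/(VI_veg − VI_soil). A sketch with assumed endmember values and a synthetic 4×4 NDVI scene, comparing estimate-then-average (fine resolution) against average-then-estimate (coarse resolution):

```python
import numpy as np

# Illustrative two-endmember linear unmixing for FVC. Endmember NDVI
# values and the scene are assumptions for this sketch; the paper's
# VI-isoline-based LMMs generalize this linear form.

VI_SOIL, VI_VEG = 0.10, 0.85   # assumed soil / full-vegetation NDVI

def fvc(vi):
    """Linear-mixture FVC estimate, clipped to the physical range [0, 1]."""
    return np.clip((vi - VI_SOIL) / (VI_VEG - VI_SOIL), 0.0, 1.0)

ndvi_fine = np.array([[0.12, 0.80, 0.70, 0.15],
                      [0.20, 0.85, 0.60, 0.10],
                      [0.30, 0.40, 0.55, 0.25],
                      [0.11, 0.18, 0.45, 0.80]])

fvc_fine_avg = fvc(ndvi_fine).mean()   # estimate per pixel, then average
fvc_coarse = fvc(ndvi_fine.mean())     # average the scene, then estimate
# for a purely linear estimator (no clipping active) the two agree exactly;
# clipping and VI nonlinearity are what introduce the resolution dependence
```

This illustrates the scale-invariance condition the paper formalizes: only when the estimator is effectively linear over the scene do area-averaged FVC values become independent of spatial resolution.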
NASA Astrophysics Data System (ADS)
Liu, X.; Zhang, J. X.; Zhao, Z.; Ma, A. D.
2015-06-01
Synthetic aperture radar is used increasingly widely in remote sensing because of its day-and-night, all-weather operation, and feature extraction from high-resolution SAR images has become a topic of considerable interest. In particular, with the continuous improvement of airborne SAR image resolution, image texture information has become more abundant, which is of great significance for classification and extraction. In this paper, a novel method for extracting built-up areas using both statistical and structural features is proposed, based on the texture characteristics of built-up areas. First, statistical and structural texture features are extracted by the classical gray-level co-occurrence matrix method and the variogram-function method, respectively, taking direction information into account in the process. Next, feature weights are calculated from the Bhattacharyya distance, and all features are fused by weighting. Finally, the fused image is classified with the K-means method, and built-up areas are extracted after post-classification processing. The proposed method was tested on domestic airborne P-band polarimetric SAR images, together with two comparison experiments based on statistical texture alone and structural texture alone. In addition to qualitative analysis, quantitative analysis against manually delineated built-up areas was performed: in the relatively simple test area the detection rate exceeds 90%, and in the relatively complex test area the detection rate is also higher than that of the other two methods. The results for the study area show that this method can effectively and accurately extract built-up areas from high-resolution airborne SAR imagery.
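Bhattacharyya-distance feature weighting of the kind described can be sketched by modeling each feature as univariate Gaussian per class and weighting features by their class separability. All statistics and feature names below are made-up illustrations, not values from the paper:

```python
import math

def bhattacharyya(mu1, var1, mu2, var2):
    """Bhattacharyya distance between two univariate Gaussians."""
    return (0.25 * (mu1 - mu2) ** 2 / (var1 + var2)
            + 0.5 * math.log((var1 + var2) / (2.0 * math.sqrt(var1 * var2))))

# per-feature (mean, variance) for built-up vs. non-built-up classes (assumed)
features = {
    "glcm_contrast":   ((8.0, 4.0),  (3.0, 2.0)),
    "glcm_entropy":    ((2.1, 0.3),  (1.9, 0.4)),
    "variogram_range": ((40.0, 50.0), (15.0, 30.0)),
}

dists = {name: bhattacharyya(m1, v1, m2, v2)
         for name, ((m1, v1), (m2, v2)) in features.items()}
total = sum(dists.values())
weights = {name: d / total for name, d in dists.items()}  # normalized to sum to 1
# features whose class distributions are better separated get larger weights
```

The fused feature image would then be a weighted sum of the normalized feature layers before K-means classification, so the most discriminative textures dominate the clustering.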
Code of Federal Regulations, 2010 CFR
2010-10-01
..., by Management Area (weights in metric tons) 2a Table 2a to Part 660, Subpart C Wildlife and Fisheries FISHERY CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF..., Subpart C—2010, Specifications of ABCs, OYs, and HGs, by Management Area (weights in metric tons)...
Code of Federal Regulations, 2010 CFR
2010-10-01
..., by Management Area (weights in metric tons) 1a Table 1a to Part 660, Subpart C Wildlife and Fisheries FISHERY CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF..., Subpart C—2009, Specifications of ABCs, OYs, and HGs, by Management Area (weights in metric tons)...
Code of Federal Regulations, 2010 CFR
2010-10-01
..., by Management Area (weights in metric tons) 1a Table 1a to Part 660, Subpart G Wildlife and Fisheries FISHERY CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF..., Subpart G—2009, Specifications of ABCs, OYs, and HGs, by Management Area (weights in metric tons) Link...
Code of Federal Regulations, 2010 CFR
2010-10-01
..., by Management Area (weights in metric tons) 2a Table 2a to Part 660, Subpart G Wildlife and Fisheries FISHERY CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF..., Subpart G—2010, Specifications of ABCs, OYs, and HGs, by Management Area (weights in metric tons) Link...
NASA Astrophysics Data System (ADS)
Heinemann, Günther; Kerschgens, Michael
2005-03-01
The quantification of subgrid land surface heterogeneity effects on the scale of climate and numerical weather prediction models is of vital interest for the energy budget of the atmospheric boundary layer and for the atmospheric branch of the hydrological cycle. This paper focuses on heterogeneity effects for the exchange processes between land surfaces and the atmosphere. The results are based on high-resolution non-hydrostatic model simulations for the LITFASS area near Berlin. This area represents a highly heterogeneous landscape of 20 × 20 km² around the Meteorological Observatory Lindenberg of the German Weather Service (DWD). Model simulations were carried out using the non-hydrostatic model FOOT3DK of the University of Köln with resolutions of 1 km and 250 m. The performance of different area-averaging methods for the turbulent surface fluxes was tested for the LITFASS area, namely the aggregation, mosaic and tile methods. For one tile method (station-tile), the experimental setup of the surface energy balance stations of the LITFASS98 experiment was investigated. Two different simulation types are considered: (1) realistic topography and idealized synoptic forcing; (2) realistic topography and realistic synoptic forcing for LITFASS98 cases. A double one-way nesting procedure is used for nesting FOOT3DK in the Lokalmodell of the DWD. The mosaic method shows good results if the wind speed is sufficiently high. During weak-wind convective conditions, errors are particularly large for the latent heat flux on the 20 × 20 km² scale. The aggregation method yields generally higher errors than the mosaic method, which even increase for higher wind speeds. The main reason is the strong surface heterogeneity associated with the lakes and forests in the LITFASS area. The main uncertainty of the station-tile method is the knowledge of the area coverage in combination with the representativity of the stations for the land-use type and surface conditions. The results of
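The difference between the aggregation and mosaic strategies comes down to averaging before or after a nonlinear flux law. A conceptual sketch with a toy flux function and illustrative surface parameters (not FOOT3DK output):

```python
import numpy as np

# Toy nonlinear flux law F(r): flux as a function of a surface parameter r
# (think of r as a bulk resistance). Purely illustrative.
def flux(r):
    return 400.0 / (1.0 + r)   # W/m^2

# heterogeneous patches within one grid cell (e.g. water, crops, forest)
r_patches = np.array([0.2, 0.5, 3.0, 8.0])   # assumed patch parameters

# mosaic: compute the flux per patch, then average the fluxes
f_mosaic = flux(r_patches).mean()
# aggregation: average the surface parameter first, then compute one flux
f_aggregation = flux(r_patches.mean())
# because F is nonlinear, the two differ, and the gap grows with
# surface heterogeneity, as found for the lake/forest mix of LITFASS
```

With a linear flux law the two would coincide; the paper's finding that aggregation errors exceed mosaic errors over the heterogeneous LITFASS landscape reflects exactly this nonlinearity.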
The Average of Rates and the Average Rate.
ERIC Educational Resources Information Center
Lindstrom, Peter
1988-01-01
Defines arithmetic, harmonic, and weighted harmonic means, and discusses their properties. Describes the application of these properties in problems involving fuel economy estimates and average rates of motion. Gives example problems and solutions. (CW)
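The classic pitfall these means address can be sketched in a few lines (a minimal illustration in the spirit of the article's example problems, not drawn from the article itself): a trip covering equal distances at different speeds averages the harmonic mean of the speeds, not the arithmetic mean, and unequal distances call for the weighted harmonic mean.

```python
def harmonic_mean(rates):
    """Harmonic mean: the average rate over equal distances."""
    return len(rates) / sum(1.0 / r for r in rates)

def weighted_harmonic_mean(rates, weights):
    """Weighted harmonic mean: average rate when distance weights[i] is covered at rates[i]."""
    return sum(weights) / sum(w / r for w, r in zip(weights, rates))

# Equal distances at 30 mph and 60 mph: the average speed is 40 mph, not 45.
print(harmonic_mean([30, 60]))  # 40.0
# 100 miles at 25 mpg then 200 miles at 50 mpg: overall fuel economy is 37.5 mpg.
print(weighted_harmonic_mean([25, 50], [100, 200]))  # 37.5
```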
Mitchell, Nia S; Nassel, Ariann F; Thomas, Deborah
2015-12-01
Obesity rates are higher for ethnic minority, low-income, and rural communities. Programs are needed to support these communities with weight management. We determined the reach of a low-cost, nationally-available weight loss program in Health Resources and Services Administration medically underserved areas (MUAs) and described the demographics of the communities with program locations. This is a cross-sectional analysis of Take Off Pounds Sensibly (TOPS) chapter locations. Geographic information systems technology was used to combine information about TOPS chapter locations, the geographic boundaries of MUAs, and socioeconomic data from the Decennial 2010 Census. TOPS is available in 30 % of MUAs. The typical TOPS chapter is in a Census Tract that is predominantly white, urban, with a median annual income between $25,000 and $50,000. However, there are TOPS chapters in Census Tracts that can be classified as predominantly black or predominantly Hispanic; predominantly rural; and as low or high income. TOPS provides weight management services in MUAs and across many types of communities. TOPS can help treat obesity in the medically underserved. Future research should determine the differential effectiveness among chapters in different types of communities. PMID:26072259
Lawrence, T E; Farrow, R L; Zollinger, B L; Spivey, K S
2008-06-01
With the adoption of visual instrument grading, the calculated yield grade can be used for payment to cattle producers selling on grid pricing systems. The USDA beef carcass grading standards include a relationship between required LM area (LMA) and HCW that is an important component of the final yield grade. As noted on a USDA yield grade LMA grid, a 272-kg (600-lb) carcass requires a 71-cm(2) (11.0-in.(2)) LMA and a 454-kg (1,000-lb) carcass requires a 102-cm(2) (15.8-in.(2)) LMA. This is a linear relationship, where required LMA = 0.171(HCW) + 24.526. If a beef carcass has a larger LMA than required, the calculated yield grade is lowered, whereas a smaller LMA than required increases the calculated yield grade. The objective of this investigation was to evaluate the LMA to HCW relationship against data on 434,381 beef carcasses in the West Texas A&M University (WTAMU) Beef Carcass Research Center database. In contrast to the USDA relationship, our data indicate a quadratic relationship [WTAMU LMA = 33.585 + 0.17729(HCW) -0.0000863(HCW(2))] between LMA and HCW whereby, on average, a 272-kg carcass has a 75-cm(2) (11.6-in.(2)) LMA and a 454-kg carcass has a 96-cm(2) (14.9-in.(2)) LMA, indicating a different slope and different intercept than those in the USDA grading standards. These data indicate that the USDA calculated yield grade equation favors carcasses lighter than 363 kg (800 lb) for having above average muscling and penalizes carcasses heavier than 363 kg (800 lb) for having below average muscling. If carcass weights continue to increase, we are likely to observe greater proportions of yield grade 4 and 5 carcasses because of the measurement bias that currently exists in the USDA yield grade equation. PMID:18310492
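The two relationships quoted in the abstract can be compared directly; a minimal sketch reproducing its figures from the stated equations:

```python
def usda_required_lma(hcw_kg):
    """USDA linear requirement: LMA (cm^2) as a function of hot carcass weight (kg)."""
    return 0.171 * hcw_kg + 24.526

def wtamu_average_lma(hcw_kg):
    """WTAMU quadratic fit of observed average LMA (cm^2) to carcass weight (kg)."""
    return 33.585 + 0.17729 * hcw_kg - 0.0000863 * hcw_kg ** 2

for hcw in (272, 363, 454):
    print(hcw, round(usda_required_lma(hcw), 1), round(wtamu_average_lma(hcw), 1))
# At 272 kg the observed average (~75 cm^2) exceeds the USDA requirement (~71 cm^2);
# at 454 kg it falls short (~96 vs. ~102 cm^2); the two curves cross near 363 kg,
# matching the 363-kg (800-lb) break-even point described in the abstract.
```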
Meirovitch, E; Meirovitch, H
1996-01-01
A small linear peptide in solution may populate several stable states (called here microstates) in thermodynamic equilibrium; elucidating its dynamic three-dimensional structure by multidimensional NMR is complex since the experimentally measured nuclear Overhauser effect intensities (NOEs) represent averages over the individual contributions. We propose a new methodology based on statistical mechanical considerations for analyzing NMR data of such peptides. In a previous paper [called paper I; H. Meirovitch et al. (1995) Journal of Physical Chemistry, 99, 4847-4854] we developed theoretical methods for determining the contribution to the partition function Z of the most stable microstates, i.e. those that pertain to a given energy range above the global energy minimum (GEM). This relatively small set of dominant microstates provides the main contribution to medium- and long-range NOE intensities. In this work the individual populations and NOEs of the dominant microstates are determined, and then weighted averages are calculated and compared with experiment. Our methodology is applied to the pentapeptide Leu-enkephalin H-Tyr-Gly-Gly-Phe-Leu-OH, described by the potential energy function ECEPP. Twenty-one significantly different energy-minimized structures are first identified within the range of 2 kcal/mol above the GEM by an extensive conformational search; this range was found in paper I to contribute 0.6 of Z. These structures then become "seeds" for Monte Carlo (MC) simulations designed to keep the molecule relatively close to its seed. Indeed, the MC samples (called MC microstates) illustrate what we define as intermediate chain flexibility; some dihedral angles remain in the vicinity of their seed value, while others visit the full range of [-180 degrees, 180 degrees]. The free energies of the MC microstates (which lead to the populations) are calculated by the local states method, which (unlike other techniques) can handle any chain flexibility
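The final weighting step described above, populations from microstate free energies and a population-weighted ensemble average of a per-microstate observable, follows the standard Boltzmann form; a minimal sketch in which the free energies and NOE values are invented for illustration:

```python
import math

def boltzmann_populations(free_energies_kcal, temperature_k=300.0):
    """Populations p_i proportional to exp(-F_i / RT), from free energies in kcal/mol."""
    R = 0.0019872  # gas constant, kcal/(mol*K)
    weights = [math.exp(-f / (R * temperature_k)) for f in free_energies_kcal]
    z = sum(weights)  # normalization over the dominant microstates
    return [w / z for w in weights]

# Hypothetical free energies of three microstates and their computed NOE intensities
free_energies = [0.0, 0.5, 1.2]
noe = [1.8, 0.9, 0.3]

populations = boltzmann_populations(free_energies)
weighted_noe = sum(p * x for p, x in zip(populations, noe))
print(populations, weighted_noe)  # the ensemble average compared with experiment
```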
Fritzsche, Klaus H.; Thieke, Christian; Klein, Jan; Parzer, Peter; Weber, Marc-André; Stieltjes, Bram
2012-01-01
The apparent diffusion coefficient (ADC) derived from diffusion-weighted imaging (DWI) correlates inversely with tumor proliferation rates. High-grade gliomas are typically heterogeneous and the delineation of areas of high and low proliferation is impeded by partial volume effects and blurred borders. Commonly used manual delineation is further impeded by potential overlap with cerebrospinal fluid and necrosis. Here we present an algorithm to reproducibly delineate and probabilistically quantify the ADC in areas of high and low proliferation in heterogeneous gliomas, resulting in a reproducible quantification in regions of tissue inhomogeneity. We used an expectation maximization (EM) clustering algorithm, applied on a Gaussian mixture model, consisting of pure superpositions of Gaussian distributions. Soundness and reproducibility of this approach were evaluated in 10 patients with glioma. High- and low-proliferating areas found using the clustering correspond well with conservative regions of interest drawn using all available imaging data. Systematic placement of model initialization seeds shows good reproducibility of the method. Moreover, we illustrate an automatic initialization approach that completely removes user-induced variability. In conclusion, we present a rapid, reproducible and automatic method to separate and quantify heterogeneous regions in gliomas. PMID:22487677
NASA Technical Reports Server (NTRS)
Kovich, G.; Moore, R. D.; Urasek, D. C.
1973-01-01
The overall and blade-element performance are presented for an air compressor stage designed to study the effect of weight flow per unit annulus area on efficiency and flow range. At the design speed of 424.8 m/sec the peak efficiency of 0.81 occurred at the design weight flow and a total pressure ratio of 1.56. Design pressure ratio and weight flow were 1.57 and 29.5 kg/sec (65.0 lb/sec), respectively. Stall margin at design speed was 19 percent based on the weight flow and pressure ratio at peak efficiency and at stall.
Lopes, Thomas J.; Evetts, David M.
2004-01-01
Nevada's reliance on ground-water resources has increased because of increased development and because surface-water resources are fully appropriated. The need to accurately quantify Nevada's water resources and water use is more critical than ever to meet future demands. Estimated ground-water pumpage, artificial and natural recharge, and interbasin flow can be used to help evaluate stresses on aquifer systems. In this report, estimates of ground-water pumpage and artificial recharge during calendar year 2000 were made using data from a variety of sources, such as reported estimates and estimates made using Landsat satellite imagery. Average annual natural recharge and interbasin flow were compiled from published reports. An estimated 1,427,100 acre-feet of ground water was pumped in Nevada during calendar year 2000. This total was calculated by summing six categories of ground-water pumpage, based on water use. Total artificial recharge during 2000 was about 145,970 acre-feet. At least one estimate of natural recharge was available for 209 of the 232 hydrographic areas (HAs). Natural recharge for the 209 HAs ranges from 1,793,420 to 2,583,150 acre-feet. Estimates of interbasin flow were available for 151 HAs. The categories and their percentage of the total ground-water pumpage are irrigation and stock watering (47 percent), mining (26 percent), water systems (14 percent), geothermal production (8 percent), self-supplied domestic (4 percent), and miscellaneous (less than 1 percent). Pumpage in the top 10 HAs accounted for about 49 percent of the total ground-water pumpage. The most ground-water pumpage in an HA was due to mining in Pumpernickel Valley (HA 65), Boulder Flat (HA 61), and Lower Reese River Valley (HA 59). Pumpage by water systems in Las Vegas Valley (HA 212) and Truckee Meadows (HA 87) ranked fourth and fifth highest in 2000, respectively. Irrigation and stock watering pumpage accounted for most ground-water withdrawals in the HAs with the sixth
Hossain, Ahmed; Beyene, Joseph
2013-12-01
MicroRNAs (miRNAs) are short non-coding RNAs that play critical roles in numerous cellular processes through post-transcriptional functions. The aberrant role of miRNAs has been reported in a number of diseases. A robust computational method is vital for discovering novel miRNAs when the level of noise varies dramatically across the different miRNAs. In this paper, we propose a flexible rank-based procedure for estimating a weighted log partial area under the receiver operating characteristic (ROC) curve statistic for selecting differentially expressed miRNAs. The statistic combines the partial area under the curve (pAUC) with its corresponding variance. The proposed method does not involve complicated formulas and does not require advanced programming skills. Two real datasets are analyzed to illustrate the method and a simulation study is carried out to assess the performance of different miRNA ranking statistics. We conclude that the proposed method offers robust results with large samples for miRNA expression data, and the method can be used as an alternative analytical tool for identifying a list of target miRNAs for further biological and clinical investigation. PMID:24246291
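The core quantity here, the partial area under the ROC curve up to a chosen false-positive-rate bound, can be computed with a short trapezoidal sweep. A minimal sketch of pAUC alone (the paper's full statistic additionally applies a log transform and a variance weight, which are omitted; tied scores are handled only approximately):

```python
def partial_auc(labels, scores, max_fpr=0.2):
    """Area under the ROC curve restricted to false positive rates in [0, max_fpr].

    labels: 1 for a true positive case, 0 otherwise;
    scores: higher score = stronger evidence for the positive class.
    """
    num_pos = sum(labels)
    num_neg = len(labels) - num_pos
    points = [(0.0, 0.0)]  # (FPR, TPR) sweep from the strictest threshold down
    tp = fp = 0
    for _, y in sorted(zip(scores, labels), reverse=True):
        if y:
            tp += 1
        else:
            fp += 1
        points.append((fp / num_neg, tp / num_pos))
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 >= max_fpr:
            break
        x1c = min(x1, max_fpr)  # clip the segment at the FPR bound
        y1c = y0 + (y1 - y0) * (x1c - x0) / (x1 - x0) if x1 > x0 else y1
        area += (x1c - x0) * (y0 + y1c) / 2.0
    return area

# A perfect ranking concentrates all of its partial area early
print(partial_auc([1, 1, 0, 0], [0.9, 0.8, 0.2, 0.1], max_fpr=0.5))  # 0.5
```

Ranking miRNAs by a statistic of this kind, rather than by full AUC, emphasizes the low-false-positive region that matters when selecting a short candidate list.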
NASA Astrophysics Data System (ADS)
Rung-Arunwan, Tawat; Siripunvaraporn, Weerachai; Utada, Hisashi
2016-04-01
Through a large number of magnetotelluric (MT) observations conducted in a study area, one can obtain regional one-dimensional (1-D) features of the subsurface electrical conductivity structure simply by taking the geometric average of determinant invariants of observed impedances. This method, proposed by Berdichevsky and coworkers, is based on the expectation that distortion effects due to near-surface electrical heterogeneities will be statistically smoothed out. A good estimate of a regional mean 1-D model is useful, especially in recent years, as an a priori (or starting) model in 3-D inversion. However, the original theory was derived before the establishment of the present knowledge on galvanic distortion. This paper, therefore, reexamines the meaning of the Berdichevsky average by using the conventional formulation of galvanic distortion. A simple derivation shows that the determinant invariant of a distorted impedance, and hence its Berdichevsky average, is always downward biased by the distortion parameters of shear and splitting. This means that the regional mean 1-D model obtained from the Berdichevsky average tends to be more conductive. As an alternative rotational invariant, the sum of the squared elements (ssq) invariant is found to be less affected by bias from distortion parameters; thus, we conclude that its geometric average would be more suitable for estimating the regional structure. We find that the combination of determinant and ssq invariants provides parameters useful in dealing with a set of distorted MT impedances.
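The two rotational invariants and their geometric average over sites can be written in a few lines; a sketch using one common definition of the det and ssq invariants of a 2 × 2 complex impedance tensor (the site impedances below are made up for illustration):

```python
import math

def det_invariant(Z):
    """|det Z|^(1/2) for a 2x2 complex impedance tensor Z = [[Zxx, Zxy], [Zyx, Zyy]]."""
    (zxx, zxy), (zyx, zyy) = Z
    return math.sqrt(abs(zxx * zyy - zxy * zyx))

def ssq_invariant(Z):
    """Square root of half the sum of squared moduli of the four tensor elements."""
    (zxx, zxy), (zyx, zyy) = Z
    return math.sqrt((abs(zxx) ** 2 + abs(zxy) ** 2 + abs(zyx) ** 2 + abs(zyy) ** 2) / 2)

def geometric_average(values):
    """Geometric mean over sites, as in the Berdichevsky average."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical impedances observed at three sites
sites = [
    [[0.10 + 0.10j, 1.0 + 0.9j], [-1.1 - 0.8j, -0.05 - 0.10j]],
    [[0.20 + 0.00j, 0.8 + 1.1j], [-0.9 - 1.0j, 0.10 + 0.00j]],
    [[0.00 + 0.10j, 1.2 + 0.7j], [-1.0 - 0.9j, -0.10 + 0.10j]],
]
print(geometric_average([det_invariant(Z) for Z in sites]))
print(geometric_average([ssq_invariant(Z) for Z in sites]))
```

For an ideal 1-D impedance (zero diagonal, anti-symmetric off-diagonal) both invariants reduce to the same scalar; they differ, and respond differently to galvanic distortion, once the tensor departs from that form.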
NASA Astrophysics Data System (ADS)
Shi, Y.; Long, Y.; Wi, X. L.
2014-04-01
When tourists visit multiple scenic spots, the route actually travelled is usually the most efficient one through the road network for the actual tour, and it may differ from the planned route. In navigation applications, a proposed route is normally generated automatically by a path-planning algorithm from the scenic spots' positions and the road network. But when a scenic spot covers a substantial area and has multiple entrances or exits, the traditional representation of the spot as a single point coordinate cannot capture these structural features. To solve this problem, this paper examines how such structural features, in particular multiple entrances and exits, affect path planning, and proposes a double-weighted graph model in which the weights of both vertices and edges can be selected dynamically. It then discusses how the model is built, and presents an optimal path-planning algorithm based on the Dijkstra and Prim algorithms. Experimental results show that the optimal route derived from the proposed model and algorithm is more reasonable, and that the visiting order and travel distance are further optimized.
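A double-weighted graph of the kind described, where edges carry travel costs and vertices carry their own costs (for example, internal walking distance inside a scenic spot), can be handled by folding the vertex weight into the relaxation step of Dijkstra's algorithm. A minimal sketch; the graph, the weights, and the convention that a vertex's weight is paid on arrival (including at the start) are assumptions for illustration, not the paper's exact model:

```python
import heapq

def dijkstra_double_weighted(edges, vertex_weight, start, goal):
    """Shortest path where cost = sum of edge weights + weights of visited vertices.

    edges: {u: [(v, edge_weight), ...]}; vertex_weight: {v: weight}.
    Returns (total cost, path as a list of vertices).
    """
    dist = {start: vertex_weight.get(start, 0)}
    heap = [(dist[start], start)]
    prev = {}
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            path = [u]
            while u in prev:
                u = prev[u]
                path.append(u)
            return d, path[::-1]
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in edges.get(u, []):
            nd = d + w + vertex_weight.get(v, 0)  # edge cost plus arrival-vertex cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    return float("inf"), []

# Hypothetical network: spot B carries a vertex weight of 3 (internal walking cost),
# so A -> B -> C costs 2 + 3 + 1 = 6, beating the direct A -> C edge of cost 7.
edges = {"A": [("B", 2), ("C", 7)], "B": [("C", 1)], "C": []}
vertex_weight = {"A": 0, "B": 3, "C": 0}
print(dijkstra_double_weighted(edges, vertex_weight, "A", "C"))  # (6, ['A', 'B', 'C'])
```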
NASA Technical Reports Server (NTRS)
Urasek, D. C.; Kovich, G.; Moore, R. D.
1973-01-01
Performance was obtained for a 50-cm-diameter compressor designed for a high weight flow per unit annulus area of 208 (kg/sec)/sq m. Peak efficiency values of 0.83 and 0.79 were obtained for the rotor and stage, respectively. The stall margin for the stage was 23 percent, based on equivalent weight flow and total-pressure ratio at peak efficiency and stall.
Yulia; Khusun, Helda; Fahmida, Umi
2016-07-01
Developing countries including Indonesia imperatively require an understanding of factors leading to the emerging problem of obesity, especially within low socio-economic groups, whose dietary pattern may contribute to obesity. In this cross-sectional study, we compared the dietary patterns and food consumption of 103 obese and 104 normal-weight women of reproductive age (19-49 years) in urban slum areas in Central Jakarta. A single 24-h food recall was used to assess energy and macronutrient intakes (carbohydrate, protein and fat) and calculate energy density. A principal component analysis was used to define the dietary patterns from the FFQ. Obese women had significantly higher intakes of energy (8436·6 (sd 2358·1) v. 7504·4 (sd 1887·8) kJ (2016·4 (sd 563·6) v. 1793·6 (sd 451·2) kcal)), carbohydrate (263·9 (sd 77·0) v. 237·6 (sd 63·0) g) and fat (83·11 (sd 31·3) v. 70·2 (sd 26·1) g) compared with normal-weight women; however, their protein intake (59·4 (sd 19·1) v. 55·9 (sd 18·5) g) and energy density (8·911 (sd 2·30) v. 8·58 (sd 1·88) kJ/g (2·13 (sd 0·55) v. 2·05 (sd 0·45) kcal/g)) did not differ significantly. Two dietary patterns were revealed and subjectively named 'more healthy' and 'less healthy'. The 'less healthy' pattern was characterised by the consumption of fried foods (snacks, soyabean and roots and tubers) and meat and poultry products, whereas the more healthy pattern was characterised by the consumption of seafood, vegetables, eggs, milk and milk products and non-fried snacks. Subjects with a high score for the more healthy pattern had a lower obesity risk compared with those with a low score. Thus, obesity is associated with high energy intake and unhealthy dietary patterns characterised by consumption of oils and fats through fried foods and snacks. PMID:26931206
NASA Astrophysics Data System (ADS)
Skoczowsky, D.; Heuer, A.; Jechow, A.; Menzel, R.
2007-11-01
Detailed investigations of the spatiotemporal and spectral emission properties of a high power diode laser are presented. The AR coated laser diode with design wavelength of 940 nm is driven in an external resonator. The laser generates up to 340 mW average output power in a train of picosecond pulses with durations of 25 ps and repetition rates of 2.6 GHz. The mechanism of mode locking is discussed as self pulsation because of the strong correlation between round trip time and repetition rate. The double-sided exponential pulses suggest saturable absorber action.
ERIC Educational Resources Information Center
Kurtz, David K.
This paper explores the Veterans Administration (VA) work-study program and its implications for student/veterans at Harrisburg Area Community College in Pennsylvania. Unique advantages of the program include tax-free income, flexible working schedules around students' class schedules, additional study time, easy access to the office from classes,…
NASA Astrophysics Data System (ADS)
Wang, Gongwen; Chen, Jianping; Li, Qing; Ding, Huoping
2007-06-01
This paper aims to monitor desertification evolution over different stages and assess its factors using remote sensing (RS) data and a cellular automata (CA)-geographical information system (GIS) approach, with an adaptive analytic hierarchy process (AHP) to derive the weights of the desertification factors. The study areas (114°E to 117°E and 39.5°N to 42.2°N) are an important agro-pastoral transitional zone located in Beijing and its neighboring areas, marginal desertified areas in North China. Desertification information, including NDVI and desertified area, was derived from Landsat TM images of 1987 and 1996 (28.5 m resolution) and a 2006 CBERS image (19.5 m resolution) of the study areas. Ancillary data on meteorology, geology, a 30 m DEM, and hydrography were statistically analyzed with GIS technology. A CA model based on the desertification factors with AHP-derived weights was built with an AML program in ArcGIS Workstation to assess the evolution of desertification in different stages (from 1987 to 1996, and from 1996 to 2006). The results show that the desertified area increased by 3.28% per year from 1987 to 1996 and by 0.51% per year from 1996 to 2006. Although the weights of the desertification factors changed somewhat between stages, the main factors, including climate, NDVI, and terrain, remained the same; only their values in the study areas changed.
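The AHP step, deriving factor weights from a pairwise comparison matrix, reduces to finding the principal eigenvector of that matrix. A minimal power-iteration sketch; the pairwise judgments for the three factors below are hypothetical, not the paper's values:

```python
def ahp_weights(matrix, iterations=100):
    """Principal-eigenvector weights of an AHP pairwise comparison matrix.

    matrix[i][j] states how strongly factor i is preferred over factor j
    (Saaty's 1-9 scale); weights are obtained by power iteration.
    """
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iterations):
        w = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(w)
        w = [x / total for x in w]  # renormalize so the weights sum to 1
    return w

# Hypothetical judgments for three desertification factors: climate, NDVI, terrain
comparisons = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 3.0],
    [1 / 5, 1 / 3, 1.0],
]
print(ahp_weights(comparisons))  # climate carries the largest weight
```

In a CA-GIS workflow these weights would then multiply the normalized factor layers cell by cell to drive the transition rules.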
Xaverius, Pamela; Alman, Cameron; Holtz, Lori; Yarber, Laura
2016-03-01
Objectives This study examined risk and protective factors associated with very low birth weight (VLBW) for babies born to women receiving adequate or inadequate prenatal care. Methods Birth records from St. Louis City and County from 2000 to 2009 were used (n = 152,590). Data were categorized across risk factors and stratified by adequacy of prenatal care (PNC). Multivariate logistic regression and population attributable risk (PAR) were used to explore risk factors for VLBW infants. Results Women receiving inadequate prenatal care had a higher prevalence of delivering a VLBW infant than those receiving adequate PNC (4.11 vs. 1.44 %, p < .0001). The distribution of risk factors differed between adequate and inadequate PNC regarding Black race (36.4 vs. 79.0 %, p < .0001), age under 20 (13.0 vs. 33.6 %, p < .0001), <13 years of education (35.9 vs. 77.9 %, p < .0001), Medicaid status (35.7 vs. 74.9 %, p < .0001), primiparity (41.6 vs. 31.4 %, p < .0001), smoking (9.7 vs. 24.5 %, p < .0001), and diabetes (4.0 vs. 2.4 %, p < .0001), respectively. Black race, advanced maternal age, primiparity and gestational hypertension were significant predictors of VLBW, regardless of adequate or inadequate PNC. Among women with inadequate PNC, Medicaid was protective against (aOR 0.671, 95 % CI 0.563-0.803; PAR -32.6 %) and smoking a risk factor for (aOR 1.23, 95 % CI 1.01-1.49; PAR 40.1 %) VLBW. When prematurity was added to the adjusted models, the largest PAR shifts to education (44.3 %) among women with inadequate PNC. Conclusions Community actions around broader issues of racism and social determinants of health are needed to prevent VLBW in a large urban area. PMID:26537389
Haines, Aaron M.; Leu, Matthias; Svancara, Leona K.; Wilson, Gina; Scott, J. Michael
2010-01-01
Identification of biodiversity hotspots (hereafter, hotspots) has become a common strategy to delineate important areas for wildlife conservation. However, the use of hotspots has not often incorporated important habitat types, ecosystem services, anthropogenic activity, or consistency in identifying important conservation areas. The purpose of this study was to identify hotspots to improve avian conservation efforts for Species of Greatest Conservation Need (SGCN) in the state of Idaho, United States. We evaluated multiple approaches to define hotspots and used a unique approach based on weighting species by their distribution size and conservation status to identify hotspot areas. All hotspot approaches identified bodies of water (Bear Lake, Grays Lake, and American Falls Reservoir) as important hotspots for Idaho avian SGCN, but we found that the weighted approach produced more congruent hotspot areas when compared to other hotspot approaches. To incorporate anthropogenic activity into hotspot analysis, we grouped species based on their sensitivity to specific human threats (i.e., urban development, agriculture, fire suppression, grazing, roads, and logging) and identified ecological sections within Idaho that may require specific conservation actions to address these human threats using the weighted approach. The Snake River Basalts and Overthrust Mountains ecological sections were important areas for potential implementation of conservation actions to conserve biodiversity. Our approach to identifying hotspots may be useful as part of a larger conservation strategy to aid land managers or local governments in applying conservation actions on the ground.
2012-01-01
Background The study conducts statistical and spatial analyses to investigate amounts and types of permitted surface water pollution discharges in relation to population mortality rates for cancer and non-cancer causes nationwide and by urban-rural setting. Data from the Environmental Protection Agency's (EPA) Discharge Monitoring Report (DMR) were used to measure the location, type, and quantity of a selected set of 38 discharge chemicals for 10,395 facilities across the contiguous US. Exposures were refined by weighting amounts of chemical discharges by their estimated toxicity to human health, and by estimating the discharges that occur not only in a local county, but area-weighted discharges occurring upstream in the same watershed. Centers for Disease Control and Prevention (CDC) mortality files were used to measure age-adjusted population mortality rates for cancer, kidney disease, and total non-cancer causes. Analysis included multiple linear regressions to adjust for population health risk covariates. Spatial analyses were conducted by applying geographically weighted regression to examine the geographic relationships between releases and mortality. Results Greater non-carcinogenic chemical discharge quantities were associated with significantly higher non-cancer mortality rates, regardless of toxicity weighting or upstream discharge weighting. Cancer mortality was higher in association with carcinogenic discharges only after applying toxicity weights. Kidney disease mortality was related to higher non-carcinogenic discharges only when both applying toxicity weights and including upstream discharges. Effects for kidney mortality and total non-cancer mortality were stronger in rural areas than urban areas. Spatial results show correlations between non-carcinogenic discharges and cancer mortality for much of the contiguous United States, suggesting that chemicals not currently recognized as carcinogens may contribute to cancer mortality risk. The
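The exposure refinement described above, weighting discharge amounts by toxicity and folding in same-watershed upstream releases, can be sketched as follows; the chemicals, toxicity weights, amounts, and the upstream discount factor are all hypothetical:

```python
def county_exposure(local_discharges, upstream_discharges, toxicity, upstream_weight=0.5):
    """Toxicity-weighted discharge exposure for one county.

    local_discharges / upstream_discharges: {chemical: amount released};
    toxicity: {chemical: toxicity weight};
    upstream_weight: hypothetical factor discounting upstream releases
    occurring in the same watershed.
    """
    local = sum(amount * toxicity[c] for c, amount in local_discharges.items())
    upstream = sum(amount * toxicity[c] for c, amount in upstream_discharges.items())
    return local + upstream_weight * upstream

toxicity = {"benzene": 10.0, "ammonia": 0.5}
local = {"benzene": 2.0, "ammonia": 100.0}
upstream = {"benzene": 4.0}
print(county_exposure(local, upstream, toxicity))  # 20 + 50 + 0.5 * 40 = 90.0
```

An exposure index of this form, rather than raw discharge tonnage, is what allows carcinogenic and non-carcinogenic releases to be related separately to the mortality outcomes.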
ERIC Educational Resources Information Center
Gutiérrez-Zornoza, Myriam; Sánchez-López, Mairena; García-Hermoso, Antonio; González-García, Alberto; Chillón, Palma; Martínez-Vizcaíno, Vicente
2015-01-01
Purpose: The aim of this study was to examine (a) whether distance from home to school is a determinant of active commuting to school (ACS), (b) the relationship between distance from home to heavily used facilities (school, green spaces, and sports facilities) and the weight status and cardiometabolic risk categories, and (c) whether ACS has a…
Peng, Xiang; Mielke, Michael; Booth, Timothy
2011-01-17
We demonstrate high average power, high energy 1.55 μm ultra-short pulse (<1 ps) laser delivery using helium-filled and argon-filled large mode area hollow core photonic band-gap fibers and compare relevant performance parameters. The ultra-short pulse laser beam, with pulse energy higher than 7 μJ and pulse train average power larger than 0.7 W, is output from a 2 m long hollow core fiber with diffraction-limited beam quality. We introduce a pulse tuning mechanism of argon-filled hollow core photonic band-gap fiber. We assess the damage threshold of the hollow core photonic band-gap fiber and propose methods to further increase pulse energy and average power handling. PMID:21263632
On generalized averaged Gaussian formulas
NASA Astrophysics Data System (ADS)
Spalevic, Miodrag M.
2007-09-01
We present a simple numerical method for constructing the optimal (generalized) averaged Gaussian quadrature formulas which are the optimal stratified extensions of Gauss quadrature formulas. These extensions exist in many cases in which real positive Kronrod formulas do not exist. For the Jacobi weight functions $w(x) \equiv w^{(\alpha,\beta)}(x) = (1-x)^{\alpha}(1+x)^{\beta}$ ($\alpha, \beta > -1$) we give a necessary and sufficient condition on the parameters $\alpha$ and $\beta$ such that the optimal averaged Gaussian quadrature formulas are internal.
Chrien, R.E.
1986-10-01
The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs.
Iwashima, Satoru; Ishikawa, Takamichi
2016-08-01
Background Our goal was to evaluate the hemodynamic status of very low-birth-weight infants (VLBWIs) with patent ductus arteriosus (PDA) by measuring the vena contracta width (VCW) and effective shunt orifice area (ESOA) using the proximal isovelocity surface area (PISA) on color Doppler imaging. Method and Results In this study, 34 VLBWIs with PDA (median weight: 949 g) were studied. We measured VCW and ESOA using the PISA on echocardiography. PDA-VCW was measured at the narrowest PDA flow region. ESOA determined using PISA (PDA-ESOA) was defined as the hemispheric area of laminar flow with aliased velocities on color Doppler flow imaging: PDA-ESOA = 2π(PDA radius)² × aliasing velocity/PDA velocity. Of the 34 VLBWIs, 26 received indomethacin (IND) for symptomatic PDA. Comparing echocardiographic parameters between infants who did versus did not receive IND, significant differences were seen in the left atrial-to-aortic root ratio (LA/AO), PDA-VCW, and PDA-ESOA. Receiver operating characteristic curve analysis to differentiate between IND usage status produced statistically significant results for PDA-VCW (area under the curve [AUC] = 0.880), PDA-ESOA (AUC = 0.813), and LA/AO (AUC = 0.769). Conclusion PDA-VCW and PDA-ESOA may allow noninvasive assessment of PDA severity and are useful when determining the timing of clinical decision making for IND administration. PMID:27057768
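The PISA relation quoted in the abstract reduces to one line of arithmetic; a minimal sketch in which the measurement values are hypothetical, chosen only to show the scale of the result:

```python
import math

def pda_esoa(pisa_radius_cm, aliasing_velocity_cm_s, pda_velocity_cm_s):
    """Effective shunt orifice area (cm^2) from the hemispheric PISA relation:
    ESOA = 2 * pi * r^2 * aliasing velocity / peak PDA velocity."""
    return 2 * math.pi * pisa_radius_cm ** 2 * aliasing_velocity_cm_s / pda_velocity_cm_s

# Hypothetical measurements: PISA radius 0.4 cm, aliasing velocity 32 cm/s,
# peak ductal velocity 200 cm/s
print(round(pda_esoa(0.4, 32, 200), 3))  # ~0.161 cm^2
```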
Kazemier, Brenda M.; Schuit, Ewoud; Mol, Ben Willem J.; Pajkrt, Eva; Ganzevoort, Wessel
2014-01-01
Objective. To compare birth weight ratio and birth weight percentile to express infant weight when assessing pregnancy outcome. Study Design. We performed a national cohort study. Birth weight ratio was calculated as the observed birth weight divided by the median birth weight for gestational age. The discriminative ability of birth weight ratio and birth weight percentile to identify infants at risk of perinatal death (fetal death and neonatal death) or adverse pregnancy outcome (perinatal death + severe neonatal morbidity) was compared using the area under the curve. Outcomes were expressed stratified by gestational age at delivery separate for birth weight ratio and birth weight percentile. Results. We studied 1,299,244 pregnant women, with an overall perinatal death rate of 0.62%. Birth weight ratio and birth weight percentile have equivalent overall discriminative performance for perinatal death and adverse perinatal outcome. In late preterm infants (33+0–36+6 weeks), birth weight ratio has better discriminative ability than birth weight percentile for perinatal death (0.68 versus 0.63, P 0.01) or adverse pregnancy outcome (0.67 versus 0.60, P < 0.001). Conclusion. Birth weight ratio is a potentially valuable instrument to identify infants at risk of perinatal death and adverse pregnancy outcome and provides several advantages for use in research and clinical practice. Moreover, it allows comparison of groups with different average birth weights. PMID:25197283
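The central quantity is simple to compute; a minimal sketch in which the gestational-age medians are invented placeholders, not reference values:

```python
# Hypothetical median birth weights (g) by completed week of gestation
MEDIAN_BW = {34: 2300, 36: 2700, 38: 3100, 40: 3500}

def birth_weight_ratio(observed_g, gestational_week):
    """Observed birth weight divided by the median for that gestational age."""
    return observed_g / MEDIAN_BW[gestational_week]

# A ~2,170 g infant at 36 weeks and a 2,800 g infant at 40 weeks share a ratio of ~0.80,
# which is what makes groups with different average birth weights comparable.
print(birth_weight_ratio(2170, 36))
print(birth_weight_ratio(2800, 40))
```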
NASA Astrophysics Data System (ADS)
Liu, Zhengjun; Wang, Jian; Chi, Changyan
2008-11-01
Multi-source earth observation data are highly desirable in current landslide hazard prediction modeling, and Landslide Hazard Zonation (LHZ) is an important component of such modeling. In this paper, taking Wan County as an example, we investigate the potential of multi-source data sets for landslide hazard zonation based on the ordinal-scale relative weighting-rating technique. LHZ is performed with the chosen factor layers, including a buffer map of thrusts, lithology, slope angle and relative relief calculated from a DEM, NDVI, and buffer maps of drainage and lineaments extracted from digital satellite imagery (TM). The Landslide Hazard Index (LHI) is then calculated, and landslide hazard zones are delineated by slicing the LHI histogram. The statistics demonstrate that the high stable slope zone, stable slope zone, quasi-stable slope zone, relatively unstable slope zone, unstable slope zone and defended slope zone account for 2.20%, 14.02%, 39.88%, 28.27%, 12.17% and 3.47% of the area, respectively. Finally, GPS deformation control points on the landslide bodies are used to verify the validity of the LHZ technique.
NASA Astrophysics Data System (ADS)
Corsini, Alessandro; Cervi, Federico; Ronchetti, Francesco
2009-10-01
Locations of potential groundwater springs were mapped in an area of 68 km2 in the Northern Apennines of Italy based on Weight of Evidence (WofE) and Radial Basis Function Link Net (RBFLN). A map of more than 200 springs and maps of five causal factors were uploaded to ArcGIS with Spatial Data Modelling extensions. The WofE and RBFLN potential groundwater spring maps had similar prediction rates, allowing about 50% of the training and validation springs to be predicted in about 15 to 20% of the study area. The two maps were merged using a heuristic combination matrix in order to produce two hybrid maps: one representing areas susceptible in both the WofE and RBFLN maps (type A), the other representing areas susceptible in at least one of the two maps (type B). For small cumulated areas, the success rate of both hybrid maps was higher than that of the parent maps, while for large cumulated areas, only the type B hybrid map performed similarly to the parent maps. This suggests different applications of these maps for water management purposes.
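The WofE weights behind such maps are log-likelihood ratios computed from the spatial overlap between the known springs and each evidence layer; a minimal sketch computing W+, W− and the contrast from hypothetical unit-cell counts:

```python
import math

def wofe_weights(n_b_and_d, n_b, n_d, n_total):
    """Weight of Evidence from unit-cell counts.

    n_b_and_d: cells with both evidence B and spring D; n_b: cells with B;
    n_d: cells with D; n_total: all cells in the study area.
    Returns (W+, W-, contrast C = W+ - W-).
    """
    p_b_given_d = n_b_and_d / n_d                          # P(B | D)
    p_b_given_not_d = (n_b - n_b_and_d) / (n_total - n_d)  # P(B | ~D)
    w_plus = math.log(p_b_given_d / p_b_given_not_d)
    w_minus = math.log((1 - p_b_given_d) / (1 - p_b_given_not_d))
    return w_plus, w_minus, w_plus - w_minus

# Hypothetical counts: 120 of 200 spring cells fall on the evidence layer,
# which covers 2,000 of 10,000 cells in the study area
print(wofe_weights(120, 2000, 200, 10000))
```

A positive W+ (and a large contrast) indicates the evidence layer is spatially associated with springs; summing the weights of several conditionally independent layers gives the posterior-odds map that is then compared or merged with the RBFLN output.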
NASA Astrophysics Data System (ADS)
Gnanvo, Kondo; Bai, Xinzhan; Gu, Chao; Liyanage, Nilanga; Nelyubin, Vladimir; Zhao, Yuxiang
2016-02-01
A large-area and light-weight gas electron multiplier (GEM) detector was built at the University of Virginia as a prototype for the detector R&D program of the future Electron Ion Collider. The prototype has a trapezoidal geometry designed as a generic sector module in a disk layer configuration of a forward tracker in collider detectors. It is based on light-weight material and narrow support frames in order to minimize multiple scattering and dead-to-sensitive area ratio. The chamber has a novel type of two dimensional (2D) stereo-angle readout board with U-V strips that provides (r,φ) position information in the cylindrical coordinate system of a collider environment. The prototype was tested at the Fermilab Test Beam Facility in October 2013 and the analysis of the test beam data demonstrates an excellent response uniformity of the large area chamber with an efficiency higher than 95%. An angular resolution of 60 μrad in the azimuthal direction and a position resolution better than 550 μm in the radial direction were achieved with the U-V strip readout board. The results are discussed in this paper.
Average Cost of Common Schools.
ERIC Educational Resources Information Center
White, Fred; Tweeten, Luther
The paper shows costs of elementary and secondary schools applicable to Oklahoma rural areas, including the long-run average cost curve, which indicates the minimum per-student cost of educating various numbers of students, and the application of the cost curves in determining the optimum school district size. In a stratified sample, the school…
Hmelnitsky, I; Nettheim, N
1987-06-01
Functional anatomy and physiology have naturally attended mainly to those functions which occur most commonly in everyday life. Piano playing is a more specialized area, where functions arise which have so far been neglected in medical science. These functions are here described by a pianist (IH) in the hope that medical researchers will respond to fill the gaps. The importance of this lies not only in the understanding of skilled manipulative activity but also in the avoidance of overuse syndrome (OUS) or repetitive strain injury (RSI). PMID:3614013
ERIC Educational Resources Information Center
Francis, Richard L.
1992-01-01
Presents the template method developed by Galileo for calculating areas of geometric shapes constructed of uniform density and thickness. The method compares the weight of a shape of known area to the weight of a shape of unknown area. Applies this hands-on method to problems involving calculus, the Pythagorean theorem, and cycloids. (MDH)
How to Address Measurement Noise in Bayesian Model Averaging
NASA Astrophysics Data System (ADS)
Schöniger, A.; Wöhling, T.; Nowak, W.
2014-12-01
When confronted with the challenge of selecting one out of several competing conceptual models for a specific modeling task, Bayesian model averaging is a rigorous choice. It ranks the plausibility of models based on Bayes' theorem, which yields an optimal trade-off between performance and complexity. With the resulting posterior model probabilities, the individual model predictions are combined into a robust weighted average, and the overall predictive uncertainty (including conceptual uncertainty) can be quantified. This rigorous framework does not, however, yet explicitly consider measurement noise in the calibration data set. This is a major drawback, because model weights can be unstable due to the uncertainty in noisy data, which may compromise the reliability of model ranking. We present a new extension to the Bayesian model averaging framework that explicitly accounts for measurement noise as a source of uncertainty for the weights. This enables modelers to assess the reliability of model ranking for a specific application and a given calibration data set. Also, the impact of measurement noise on the overall prediction uncertainty can be determined. Technically, our extension is built within a Monte Carlo framework. We repeatedly perturb the observed data with random realizations of measurement error. Then, we determine the robustness of the resulting model weights against measurement noise. We quantify the variability of posterior model weights as weighting variance. We add this new variance term to the overall prediction uncertainty analysis within the Bayesian model averaging framework to make uncertainty quantification more realistic and "complete". We illustrate the importance of our suggested extension with an application to soil-plant model selection, based on studies by Wöhling et al. (2013, 2014). Results confirm that noise in leaf area index or evaporation rate observations produces a significant amount of weighting
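The Monte Carlo perturbation described in the abstract can be sketched in a few lines. The two competing model predictions, the noise level, and the simple Gaussian likelihood below are all illustrative assumptions, not the authors' soil-plant models or their exact weighting scheme:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical competing model predictions of the same observable
preds = [np.array([1.0, 2.0, 3.0]),
         np.array([1.2, 1.9, 3.3])]
obs = np.array([1.1, 2.0, 3.1])   # observed calibration data
sigma = 0.2                        # assumed measurement-noise std. dev.

def bma_weights(y):
    """Posterior model weights from a Gaussian likelihood, equal priors."""
    loglik = np.array([-0.5 * np.sum((y - p) ** 2) / sigma**2 for p in preds])
    w = np.exp(loglik - loglik.max())   # stabilized exponentiation
    return w / w.sum()

# Repeatedly perturb the observed data with noise realizations and track
# how the posterior model weights move in response
ws = np.array([bma_weights(obs + rng.normal(0.0, sigma, obs.shape))
               for _ in range(2000)])
weighting_variance = ws.var(axis=0)  # robustness of the ranking under noise
```

A large `weighting_variance` signals that the model ranking is not reliable for this data set at this noise level.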
Kriging without negative weights
Szidarovszky, F.; Baafi, E.Y.; Kim, Y.C.
1987-08-01
Under a constant drift, the linear kriging estimator is considered as a weighted average of n available sample values. Kriging weights are determined such that the estimator is unbiased and optimal. To meet these requirements, negative kriging weights are sometimes found. Use of negative weights can produce negative block grades, which makes no practical sense. In some applications, all kriging weights may be required to be nonnegative. In this paper, a derivation of a set of nonlinear equations with the nonnegative constraint is presented. A numerical algorithm also is developed for the solution of the new set of kriging equations.
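As a minimal illustration of the kriging estimator as a weighted average, the sketch below solves an ordinary kriging system for a hypothetical 1-D data set and then applies a crude clip-and-renormalize repair to enforce nonnegative weights. This repair is only a stand-in; the paper derives a constrained nonlinear system instead. The locations, values, and covariance model are assumptions:

```python
import numpy as np

# Hypothetical 1-D sample locations and values (not from the paper)
x = np.array([0.0, 1.0, 2.0, 5.0])
z = np.array([3.1, 2.9, 3.4, 4.0])
x0 = 1.5  # estimation location

def cov(h):
    return np.exp(-np.abs(h))  # assumed exponential covariance model

n = len(x)
# Ordinary kriging system: bordered covariance matrix, with the extra
# row/column (a Lagrange multiplier) enforcing unbiasedness (sum w = 1)
A = np.ones((n + 1, n + 1))
A[:n, :n] = cov(x[:, None] - x[None, :])
A[n, n] = 0.0
b = np.append(cov(x - x0), 1.0)
w = np.linalg.solve(A, b)[:n]  # unconstrained weights (may go negative)

# Crude nonnegativity repair: clip negatives and renormalize
w_nn = np.clip(w, 0.0, None)
w_nn /= w_nn.sum()

estimate = w_nn @ z  # weighted average of the sample values
```

With all weights nonnegative and summing to one, the estimate is a convex combination of the data, so negative block grades cannot occur.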
Kim, Sun Hye; Hwang, Ji-Yun; Kim, Mi Kyung; Chung, Hye Won; Nguyet, Tran Thi Phuc
2010-01-01
The objectives of this study were to examine the association between dietary factors and underweight and overweight in adult Vietnamese living in rural areas of Vietnam. A cross-sectional study of 497 Vietnamese aged 19 to 60 years (204 males, 293 females) was conducted in rural areas of Haiphong, Vietnam. The subjects were classified as underweight, normal weight, and overweight based on BMI. General characteristics, anthropometric parameters, blood profiles, and eating habits were obtained, and dietary intake was assessed using 24-hour recalls for 2 consecutive days. A high prevalence of both underweight (BMI < 18.5 kg/m2) and overweight (BMI ≥ 23 kg/m2) individuals was observed (14.2% and 21.6% for males and 18.9% and 20.6% for females, respectively). For both genders, the overweight group was older than the under- and normal weight groups (P = 0.0118 for males and P = 0.0002 for females). In female subjects, the overweight group consumed significantly less cereals (P = 0.0033), energy (P = 0.0046), protein (P = 0.0222), and carbohydrate (P = 0.0017) and more fruits (P = 0.0026) than the underweight group; however, no such differences existed in males. The overweight subjects overate more frequently (P = 0.0295) and consumed fish (P = 0.0096) and fruits (P = 0.0083) more often. The prevalence of both underweight and overweight individuals poses serious public health problems in rural areas of Vietnam, and overweight was related to overeating and frequent fish and fruit consumption. These findings may provide basic data for policymakers and dieticians in order to develop future nutrition and health programs for rural populations in Vietnam. PMID:20607070
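The BMI cutoffs stated in the abstract (< 18.5 kg/m² underweight, ≥ 23 kg/m² overweight) translate directly into a small classifier; the function name is ours, and the cutoffs are taken only from the text above:

```python
def bmi_class(bmi_kg_m2: float) -> str:
    """Classify BMI using the cutoffs reported in the study."""
    if bmi_kg_m2 < 18.5:
        return "underweight"
    if bmi_kg_m2 >= 23.0:
        return "overweight"
    return "normal weight"

print(bmi_class(17.9))  # prints "underweight"
```

Note that 18.5 itself falls in the normal-weight band, since underweight is strictly below 18.5, while 23.0 is already overweight.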
Averaging the inhomogeneous universe
NASA Astrophysics Data System (ADS)
Paranjape, Aseem
2012-03-01
A basic assumption of modern cosmology is that the universe is homogeneous and isotropic on the largest observable scales. This greatly simplifies Einstein's general relativistic field equations applied at these large scales, and allows a straightforward comparison between theoretical models and observed data. However, Einstein's equations should ideally be imposed at length scales comparable to, say, the solar system, since this is where these equations have been tested. We know that at these scales the universe is highly inhomogeneous. It is therefore essential to perform an explicit averaging of the field equations in order to apply them at large scales. It has long been known that due to the nonlinear nature of Einstein's equations, any explicit averaging scheme will necessarily lead to corrections in the equations applied at large scales. Estimating the magnitude and behavior of these corrections is a challenging task, due to difficulties associated with defining averages in the context of general relativity (GR). It has recently become possible to estimate these effects in a rigorous manner, and we will review some of the averaging schemes that have been proposed in the literature. A tantalizing possibility explored by several authors is that the corrections due to averaging may in fact account for the apparent acceleration of the expansion of the universe. We will explore this idea, reviewing some of the work done in the literature to date. We will argue, however, that this rather attractive idea is in fact not viable as a solution of the dark energy problem, when confronted with observational constraints.
NASA Astrophysics Data System (ADS)
Zavala, M.; Herndon, S. C.; Slott, R. S.; Dunlea, E. J.; Marr, L. C.; Shorter, J. H.; Zahniser, M.; Knighton, W. B.; Rogers, T. M.; Kolb, C. E.; Molina, L. T.; Molina, M. J.
2006-06-01
A mobile laboratory was used to measure on-road vehicle emission ratios during the MCMA-2003 field campaign held during the spring of 2003 in the Mexico City Metropolitan Area (MCMA). The measured emission ratios represent a sample of emissions of in-use vehicles under real-world driving conditions for the MCMA. From the relative amounts of NOx and selected VOCs sampled, the results indicate that the technique is capable of differentiating among vehicle categories and fuel types in real-world driving conditions. Emission ratios for NOx, NOy, NH3, H2CO, CH3CHO, and other selected volatile organic compounds (VOCs) are presented for chase-sampled vehicles and fleet-averaged emissions. Results indicate that colectivos, particularly CNG-powered colectivos, are potentially significant contributors of NOx and aldehydes in the MCMA. Similarly, ratios of selected VOCs and NOy showed a strong dependence on traffic mode. These results are compared with the vehicle emissions inventory for the MCMA, other vehicle emissions measurements in the MCMA, and measurements of on-road emissions in US cities. Our estimates for motor vehicle emissions of benzene, toluene, formaldehyde, and acetaldehyde in the MCMA indicate these species are present in concentrations higher than previously reported. The high motor vehicle aldehyde emissions may have an impact on the photochemistry of urban areas.
Yu, Xianyu; Wang, Yi; Niu, Ruiqing; Hu, Youjian
2016-01-01
In this study, a novel coupling model for landslide susceptibility mapping is presented. In practice, environmental factors may have different impacts at a local scale in study areas. To provide better predictions, a geographically weighted regression (GWR) technique is firstly used in our method to segment study areas into a series of prediction regions with appropriate sizes. Meanwhile, a support vector machine (SVM) classifier is exploited in each prediction region for landslide susceptibility mapping. To further improve the prediction performance, the particle swarm optimization (PSO) algorithm is used in the prediction regions to obtain optimal parameters for the SVM classifier. To evaluate the prediction performance of our model, several SVM-based prediction models are utilized for comparison on a study area of the Wanzhou district in the Three Gorges Reservoir. Experimental results, based on three objective quantitative measures and visual qualitative evaluation, indicate that our model can achieve better prediction accuracies and is more effective for landslide susceptibility mapping. For instance, our model can achieve an overall prediction accuracy of 91.10%, which is 7.8%–19.1% higher than the traditional SVM-based models. In addition, the obtained landslide susceptibility map by our model can demonstrate an intensive correlation between the classified very high-susceptibility zone and the previously investigated landslides. PMID:27187430
Covariant approximation averaging
NASA Astrophysics Data System (ADS)
Shintani, Eigo; Arthur, Rudy; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph
2015-06-01
We present a new class of statistical error reduction techniques for Monte Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in Nf=2 +1 lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte Carlo calculations over conventional methods for the same cost.
Bonnor, W.B.
1987-05-01
The Einstein-Straus (1945) vacuole is here used to represent a bound cluster of galaxies embedded in a standard pressure-free cosmological model, and the average density of the cluster is compared with the density of the surrounding cosmic fluid. The two are nearly but not quite equal, and the more condensed the cluster, the greater the difference. A theoretical consequence of the discrepancy between the two densities is discussed. 25 references.
NASA Astrophysics Data System (ADS)
Zavala, M.; Herndon, S. C.; Slott, R. S.; Dunlea, E. J.; Marr, L. C.; Shorter, J. H.; Zahniser, M.; Knighton, W. B.; Rogers, T. M.; Kolb, C. E.; Molina, L. T.; Molina, M. J.
2006-11-01
A mobile laboratory was used to measure on-road vehicle emission ratios during the MCMA-2003 field campaign held during the spring of 2003 in the Mexico City Metropolitan Area (MCMA). The measured emission ratios represent a sample of emissions of in-use vehicles under real-world driving conditions for the MCMA. From the relative amounts of NOx and selected VOCs sampled, the results indicate that the technique is capable of differentiating among vehicle categories and fuel types in real-world driving conditions. Emission ratios for NOx, NOy, NH3, H2CO, CH3CHO, and other selected volatile organic compounds (VOCs) are presented for chase-sampled vehicles in the form of frequency distributions, as well as estimates of fleet-averaged emissions. Our measurements of emission ratios for both CNG- and gasoline-powered "colectivos" (public transportation buses that are intensively used in the MCMA) indicate that, on a mole-per-mole basis, they have significantly larger NOx and aldehyde emission ratios compared to other sampled vehicles in the MCMA. Similarly, ratios of selected VOCs and NOy showed a strong dependence on traffic mode. These results are compared with the vehicle emissions inventory for the MCMA, other vehicle emissions measurements in the MCMA, and measurements of on-road emissions in U.S. cities. We estimate NOx emissions as 100,600 ± 29,200 metric tons per year for light-duty gasoline vehicles in the MCMA for 2003. According to these results, annual NOx emissions estimated in the emissions inventory for this category are within the range of our estimated NOx annual emissions. Our estimates for motor vehicle emissions of benzene, toluene, formaldehyde, and acetaldehyde in the MCMA indicate these species are present in concentrations higher than previously reported. The high motor vehicle aldehyde emissions may have an impact on the photochemistry of urban areas.
Gong, Lunli; Zhou, Xiao; Wu, Yaohao; Zhang, Yun; Wang, Chen; Zhou, Heng; Guo, Fangfang
2014-01-01
The present study was designed to investigate the possibility of repairing full-thickness defects in the weight-bearing area of porcine articular cartilage (AC) using chondrogenically differentiated autologous adipose-derived stem cells (ASCs), with a follow-up of 3 and 6 months, extending our previous study on the nonweight-bearing area. The isolated ASCs were seeded onto phosphoglycerate/polylactic acid (PGA/PLA) scaffolds with chondrogenic induction in vitro for 2 weeks as the experimental group prior to implantation in porcine AC defects (8 mm in diameter, deep to subchondral bone), with PGA/PLA only as control. At both the 3- and 6-month follow-ups, the neo-cartilage in the experimental group integrated well with the neighboring normal cartilage and subchondral bone histologically, whereas only fibrous tissue formed in the control group. Immunohistochemical and toluidine blue staining confirmed a distribution of COL II and glycosaminoglycan in the regenerated cartilage similar to that of the native one. A vivid remodeling process over the repair time was also witnessed in the neo-cartilage, as the compressive modulus significantly increased from 70% of that of normal cartilage at 3 months to nearly 90% at 6 months, similar to our former research. Nevertheless, differences between the regenerated cartilage and the native one could still be detected. Meanwhile, the exact mechanism of chondrogenic differentiation of ASCs seeded on PGA/PLA is still unknown. Therefore, proteome analysis was employed, leading to 43 proteins differentially identified from 20 chosen two-dimensional spots, which help us further our research on some committed factors. In conclusion, the proteome comparison provided a thorough understanding of the mechanisms implicated in ASC differentiation toward chondrocytes, and the present study serves as a supplement to the former one on the nonweight-bearing area. PMID:24044689
NASA Technical Reports Server (NTRS)
Feiveson, A. H. (Principal Investigator)
1979-01-01
The use of a weighted aggregation technique to improve the precision of the overall LACIE estimate is considered. The manner in which a weighted aggregation technique is implemented given a set of weights is described. The problem of variance estimation is discussed and the question of how to obtain the weights in an operational environment is addressed.
Pearlman, David A; Rao, B Govinda; Charifson, Paul
2008-05-15
We demonstrate a new approach to the development of scoring functions through the formulation and parameterization of a new function, which can be used both for rapidly ranking the binding of ligands to proteins and for estimating relative aqueous molecular solubilities. The intent of this work is to introduce a new paradigm for the creation of scoring functions, wherein we impose the following criteria upon the function: (1) simple; (2) intuitive; (3) requires no postparameterization tweaking; (4) can be applied (without reparameterization) to multiple target systems; and (5) can be rapidly evaluated for any potential ligand. Following these criteria, a new function, FURSMASA (function for rapid scoring using an MD-averaged grid and the accessible surface area), has been developed. Three novel features of the function are: (1) use of an MD-averaged potential energy grid for ligand-protein interactions, rather than a simple static grid; (2) inclusion of a term that depends on changes in the solvent-accessible surface area on an atomic (not molecular) basis; and (3) use of the recently derived predictive index (PI) target when optimizing the function, which focuses the function on its intended purpose of relative ranking. A genetic algorithm is used to optimize the function against test data sets that include ligands for the following proteins: IMPDH, p38, gyrase B, HIV-1, and TACE, as well as the Syracuse Research solubility database. We find that the function is predictive and can simultaneously fit all the test data sets with cross-validated predictive indices ranging from 0.68 to 0.82. As a test of the ability of this function to predict binding for systems not in the training set, the resulting fitted FURSMASA function is then applied to 23 ligands of the COX-2 enzyme. Comparing the results for COX-2 against those obtained using a variety of well-known rapid scoring functions demonstrates that FURSMASA outperforms all of them in terms of the PI and
Americans' Average Radiation Exposure
NA
2000-08-11
We live with radiation every day. We receive radiation exposures from cosmic rays from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and from our own bodies. We also are exposed to man-made sources of radiation, including medical and dental treatments, television sets and emissions from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is the high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body.
Chen, Xue-shuang; Jiang, Tao; Lu, Song; Wei, Shi-qiang; Wang, Ding-yong; Yan, Jin-long
2016-03-15
The study of the molecular weight (MW) fractions of dissolved organic matter (DOM) in aquatic environments is of interest because size plays an important role in determining the biogeochemical characteristics of DOM. Thus, using the ultrafiltration (UF) technique combined with three-dimensional fluorescence spectroscopy, DOM samples from four sampling sites in typical water-level fluctuation zones of the Three Gorges Reservoir area were selected to investigate differences in the properties and sources of different DOM MW fractions. The results showed that in these areas the distribution of MW fractions was highly dispersive, but approximately equal contributions from the colloidal (Mr 1 x 10³-0.22 µm) and truly dissolved (Mr < 1 x 10³) fractions to the total DOC concentration were found. Four fluorescence signals (humic-like A and C; protein-like B and T) were observed in all MW fractions including bulk DOM, which showed the same distribution trend: truly dissolved > low MW (Mr 1 x 10³-10 x 10³) > medium MW (Mr 10 x 10³-30 x 10³) > high MW (Mr 30 x 10³-0.22 µm). Additionally, with decreasing MW fraction, the fluorescence index (FI) and freshness index (BIX) increased, suggesting enhanced signals of autochthonous inputs, whereas the humification index (HIX) decreased, indicating a lower humification degree. This strongly suggested that terrestrial input mainly affected the composition and properties of the higher MW fractions of DOM, whereas the lower MW and truly dissolved fractions were controlled by autochthonous sources such as microbial and algal activities rather than allochthonous sources. Meanwhile, the different riparian land-use types also clearly affected the characteristics of DOM. Therefore, higher diversity of land-use types, and thus higher complexity of ecosystems and landscapes, induced higher heterogeneity of fluorescence components in the different MW fractions of DOM. PMID:27337878
Córdova-Palomera, Aldo; Fatjó-Vilas, Mar; Falcón, Carles; Bargalló, Nuria; Alemany, Silvia; Crespo-Facorro, Benedicto; Nenadic, Igor; Fañanás, Lourdes
2015-01-01
Background Previous research suggests that low birth weight (BW) induces reduced brain cortical surface area (SA) which would persist until at least early adulthood. Moreover, low BW has been linked to psychiatric disorders such as depression and psychological distress, and to altered neurocognitive profiles. Aims We present novel findings obtained by analysing high-resolution structural MRI scans of 48 twins; specifically, we aimed: i) to test the BW-SA association in a middle-aged adult sample; and ii) to assess whether either depression/anxiety disorders or intellectual quotient (IQ) influence the BW-SA link, using a monozygotic (MZ) twin design to separate environmental and genetic effects. Results Both lower BW and decreased IQ were associated with smaller total and regional cortical SA in adulthood. Within a twin pair, lower BW was related to smaller total cortical and regional SA. In contrast, MZ twin differences in SA were not related to differences in either IQ or depression/anxiety disorders. Conclusion The present study supports findings indicating that i) BW has a long-lasting effect on cortical SA, where some familial and environmental influences alter both foetal growth and brain morphology; ii) uniquely environmental factors affecting BW also alter SA; iii) higher IQ correlates with larger SA; and iv) these effects are not modified by internalizing psychopathology. PMID:26086820
Shin, Youn Ho; Choi, Suk-Joo; Kim, Kyung Won; Yu, Jinho; Ahn, Kang Mo; Kim, Hyung Young; Seo, Ju-Hee; Kwon, Ji-Won; Kim, Byoung-Ju; Kim, Hyo-Bin; Shim, Jung Yeon; Kim, Woo Kyung; Song, Dae Jin; Lee, So-Yeon; Lee, Soo Young; Jang, Gwang Cheon; Kwon, Ja-Young; Lee, Kyung-Ju; Park, Hee Jin; Lee, Pil Ryang; Won, Hye-Sung
2013-01-01
Previous studies suggest that maternal characteristics may be associated with neonatal outcomes. However, the influence of maternal characteristics on birth weight (BW) has not been adequately determined in Korean populations. We investigated associations between maternal characteristics and BW in a sample of 813 Korean women living in the Seoul metropolitan area, Korea recruited using data from the prospective hospital-based COhort for Childhood Origin of Asthma and allergic diseases (COCOA) between 2007 and 2011. The mean maternal age at delivery was 32.3 ± 3.5 yr and prepregnancy maternal body mass index (BMI) was 20.7 ± 2.5 kg/m2. The mean BW of infant was 3,196 ± 406 g. The overall prevalence of a maternal history of allergic disease was 32.9% and the overall prevalence of allergic symptoms was 65.1%. In multivariate regression models, prepregnancy maternal BMI and gestational age at delivery were positively and a maternal history of allergic disease and nulliparity were negatively associated with BW (all P < 0.05). Presence of allergic symptoms in the mother was not associated with BW. In conclusion, prepregnancy maternal BMI, gestational age at delivery, a maternal history of allergic disease, and nulliparity may be associated with BW, respectively. PMID:23579316
Wang, Tingting; Li, Wenhua; Wu, Xiangru; Yin, Bing; Chu, Caiting; Ding, Ming; Cui, Yanfen
2016-01-01
Objective To assess the added value of diffusion-weighted magnetic resonance imaging (DWI) with apparent diffusion coefficient (ADC) values compared to MRI, for characterizing tubo-ovarian abscesses (TOA) mimicking ovarian malignancy. Materials and Methods Patients with TOA (or ovarian abscess alone; n = 34) or ovarian malignancy (n = 35) who underwent DWI and MRI were retrospectively reviewed. The signal intensity of the cystic and solid components of TOAs and ovarian malignant tumors on DWI and the corresponding ADC values were evaluated, and clinical characteristics, morphological features, and MRI findings were comparatively analyzed. Receiver operating characteristic (ROC) curve analysis based on logistic regression was applied to identify different imaging characteristics between the two patient groups and assess the predictive value of combination diagnosis with area under the curve (AUC) analysis. Results The mean ADC value of the cystic component in TOA was significantly lower than in malignant tumors (1.04 ± 0.41 × 10−3 mm2/s vs. 2.42 ± 0.38 × 10−3 mm2/s; p < 0.001). The mean ADC value of the enhanced solid component in 26 TOAs was 1.43 ± 0.16 × 10−3 mm2/s, and 46.2% (12 TOAs; pseudotumor areas) showed significantly higher signal intensity on DW-MRI than in ovarian malignancy (mean ADC value 1.44 ± 0.20 × 10−3 mm2/s vs. 1.18 ± 0.36 × 10−3 mm2/s; p = 0.043). The combination diagnosis of ADC value and dilated tubal structure achieved the best AUC of 0.996. Sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy of MRI vs. DWI with ADC values for predicting TOA were 47.1%, 91.4%, 84.2%, 64%, and 69.6% vs. 100%, 97.1%, 97.1%, 100%, and 98.6%, respectively. Conclusions DW-MRI is superior to MRI in the assessment of TOA mimicking ovarian malignancy, and the ADC values aid in discriminating the pseudotumor area of TOA from the solid portion of ovarian malignancy. PMID:26894926
... heart failure, and kidney disease. Good nutrition and exercise can help in losing weight. Eating extra calories within a well-balanced diet and treating any underlying medical problems can help to add weight.
... obese. Achieving a healthy weight can help you control your cholesterol, blood pressure and blood sugar. It ... use more calories than you eat. A weight-control strategy might include Choosing low-fat, low-calorie ...
NASA Astrophysics Data System (ADS)
Hermance, J. F.; Jacob, R. W.; Bradley, B. A.; Mustard, J. F.
2005-12-01
In studying vegetation patterns remotely, the objective is to draw inferences on the development of specific or general land surface phenology (LSP) as a function of space and time by determining the behavior of a parameter (in our case NDVI), when the parameter estimate may be biased by noise, data dropouts and obfuscations from atmospheric and other effects. We describe the underpinning concepts of a procedure for a robust interpolation of NDVI data that does not have the limitations of other mathematical approaches which require orthonormal basis functions (e.g. Fourier analysis). In this approach, data need not be uniformly sampled in time, nor do we expect noise to be Gaussian-distributed. Our approach is intuitive and straightforward, and is applied here to the refined modeling of LSP using 7 years of weekly and biweekly AVHRR NDVI data for a 150 x 150 km study area in central Nevada. This site is a microcosm of a broad range of vegetation classes, from irrigated agriculture with annual NDVI values of up to 0.7 to playas and alkali salt flats with annual NDVI values of only 0.07. Our procedure involves a form of parameter estimation employing Bayesian statistics. In utilitarian terms, the latter procedure is a method of statistical analysis (in our case, robustified, weighted least-squares recursive curve-fitting) that incorporates a variety of prior knowledge when forming current estimates of a particular process or parameter. In addition to the standard Bayesian approach, we account for outliers due to data dropouts or obfuscations because of clouds and snow cover. An initial "starting model" for the average annual cycle and long term (7 year) trend is determined by jointly fitting a common set of complex annual harmonics and a low order polynomial to an entire multi-year time series in one step. This is not a formal Fourier series in the conventional sense, but rather a set of 4 cosine and 4 sine coefficients with fundamental periods of 12, 6, 3 and 1
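A stripped-down version of the one-step harmonic-plus-trend starting model can be fit by ordinary least squares. The synthetic NDVI series, the monthly sampling, and the chosen periods below are illustrative assumptions; the authors' full procedure additionally uses robust (outlier-resistant) recursive Bayesian estimation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic monthly "NDVI" series over 7 years: an offset plus a single
# annual harmonic, with additive noise (a stand-in for AVHRR data)
t = np.arange(7 * 12, dtype=float)  # time in months
true = 0.3 + 0.1 * np.cos(2 * np.pi * t / 12.0)
y = true + rng.normal(0.0, 0.02, t.size)

# Design matrix: constant, linear trend, and cosine/sine pairs for a set
# of assumed harmonic periods (in months)
periods = [12.0, 6.0, 3.0]
cols = [np.ones_like(t), t]
for p in periods:
    cols.append(np.cos(2 * np.pi * t / p))
    cols.append(np.sin(2 * np.pi * t / p))
X = np.column_stack(cols)

# One-step joint least-squares fit of harmonics and trend to the series
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
fit = X @ coef
```

Because the harmonics and the trend are fit jointly in one step, the annual cycle is not absorbed into the long-term trend term.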
Combining remotely sensed and other measurements for hydrologic areal averages
NASA Technical Reports Server (NTRS)
Johnson, E. R.; Peck, E. L.; Keefer, T. N.
1982-01-01
A method is described for combining measurements of hydrologic variables of various sampling geometries and measurement accuracies to produce an estimated mean areal value over a watershed and a measure of the accuracy of the mean areal value. The method provides a means to integrate measurements from conventional hydrological networks and remote sensing. The resulting areal averages can be used to enhance a wide variety of hydrological applications including basin modeling. The correlation area method assigns weights to each available measurement (point, line, or areal) based on the area of the basin most accurately represented by the measurement. The statistical characteristics of the accuracy of the various measurement technologies and of the random fields of the hydrologic variables used in the study (water equivalent of the snow cover and soil moisture) required to implement the method are discussed.
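A minimal sketch of the area-based weighting idea, assuming weights proportional only to the basin area each measurement best represents; the full correlation area method also folds in the measurement-accuracy statistics discussed above:

```python
def areal_average(measurements):
    """Combine point, line, or areal measurements into a mean areal value.
    measurements: list of (value, represented_area) pairs, where
    represented_area is the basin area best represented by that measurement."""
    total_area = sum(area for _, area in measurements)
    weights = [area / total_area for _, area in measurements]     # area-based weights
    mean = sum(value * w for (value, _), w in zip(measurements, weights))
    return mean, weights
```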
Dissociating Averageness and Attractiveness: Attractive Faces Are Not Always Average
ERIC Educational Resources Information Center
DeBruine, Lisa M.; Jones, Benedict C.; Unger, Layla; Little, Anthony C.; Feinberg, David R.
2007-01-01
Although the averageness hypothesis of facial attractiveness proposes that the attractiveness of faces is mostly a consequence of their averageness, 1 study has shown that caricaturing highly attractive faces makes them mathematically less average but more attractive. Here the authors systematically test the averageness hypothesis in 5 experiments…
Spatial limitations in averaging social cues.
Florey, Joseph; Clifford, Colin W G; Dakin, Steven; Mareschal, Isabelle
2016-01-01
The direction of social attention from groups provides stronger cueing than from an individual. It has previously been shown that both basic visual features such as size or orientation and more complex features such as face emotion and identity can be averaged across multiple elements. Here we used an equivalent noise procedure to compare observers' ability to average social cues with their averaging of a non-social cue. Estimates of observers' internal noise (uncertainty associated with processing any individual) and sample-size (the effective number of gaze-directions pooled) were derived by fitting equivalent noise functions to discrimination thresholds. We also used reverse correlation analysis to estimate the spatial distribution of samples used by participants. Averaging of head-rotation and cone-rotation was less noisy and more efficient than averaging of gaze direction, though presenting only the eye region of faces at a larger size improved gaze averaging performance. The reverse correlation analysis revealed greater sampling areas for head rotation compared to gaze. We attribute these differences in averaging between gaze and head cues to poorer visual processing of faces in the periphery. The similarity between head and cone averaging is examined within the framework of a general mechanism for averaging of object rotation. PMID:27573589
NASA Technical Reports Server (NTRS)
Moore, R. D.; Urasek, D. C.; Kovich, G.
1973-01-01
The overall and blade-element performances are presented over the stable flow operating range from 50 to 100 percent of design speed. Stage peak efficiency of 0.834 was obtained at a weight flow of 26.4 kg/sec (58.3 lb/sec) and a pressure ratio of 1.581. The stall margin for the stage was 7.5 percent based on weight flow and pressure ratio at stall and peak efficiency conditions. The rotor minimum losses were approximately equal to design except in the blade vibration damper region. Stator minimum losses were less than design except in the tip and damper regions.
Improving Reading Abilities of Average and Below Average Readers through Peer Tutoring.
ERIC Educational Resources Information Center
Galezio, Marne; And Others
A program was designed to improve the progress of average and below average readers in a first-grade, a second-grade, and a sixth-grade classroom in a multicultural, multi-social economic district located in a three-county area northwest of Chicago, Illinois. Classroom teachers noted that students were having difficulty making adequate progress in…
NASA Technical Reports Server (NTRS)
Howard, W. H.; Young, D. R.
1972-01-01
Device applies compressive force to bone to minimize loss of bone calcium during weightlessness or bedrest. Force is applied through weights, or hydraulic, pneumatic or electrically actuated devices. Device is lightweight and easy to maintain and operate.
The Molecular Weight Distribution of Polymer Samples
ERIC Educational Resources Information Center
Horta, Arturo; Pastoriza, M. Alejandra
2007-01-01
Various methods for the determination of the molecular weight distribution (MWD) of different polymer samples are presented. The study shows that the molecular weight averages and distribution of a polymerization completely depend on the characteristics of the reaction itself.
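The two standard molecular weight averages behind any MWD can be computed directly from a discrete chain distribution; a minimal sketch:

```python
def molecular_weight_averages(samples):
    """samples: list of (n_i, M_i) pairs, i.e. n_i chains of molar mass M_i.
    Returns (Mn, Mw): number-average and weight-average molecular weights."""
    n_total = sum(n for n, _ in samples)
    mass_total = sum(n * M for n, M in samples)
    Mn = mass_total / n_total                             # sum(nM) / sum(n)
    Mw = sum(n * M * M for n, M in samples) / mass_total  # sum(nM^2) / sum(nM)
    return Mn, Mw
```

The ratio Mw/Mn (the dispersity) is what ties the averages back to the breadth of the distribution: Mw >= Mn always, with equality only for a monodisperse sample.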
Some series of intuitionistic fuzzy interactive averaging aggregation operators.
Garg, Harish
2016-01-01
In this paper, some series of new intuitionistic fuzzy averaging aggregation operators have been presented under the intuitionistic fuzzy sets environment. For this, some shortcomings of the existing operators are first highlighted, and then a new operational law, which considers the hesitation degree between the membership functions, has been proposed to overcome them. Based on these new operational laws, some new averaging aggregation operators, namely the intuitionistic fuzzy Hamacher interactive weighted averaging, ordered weighted averaging, and hybrid weighted averaging operators, labeled IFHIWA, IFHIOWA, and IFHIHWA respectively, have been proposed. Furthermore, some desirable properties such as idempotency, boundedness, and homogeneity are studied. Finally, a multi-criteria decision-making method based on the proposed operators for selecting the best alternative has been presented. A comparative study between the proposed operators and the existing operators is conducted in detail. PMID:27441128
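For background, the classical intuitionistic fuzzy weighted averaging (IFWA) operator that these proposals generalize can be sketched as below; note this is the standard baseline operator, not the Hamacher interactive IFHIWA variant introduced in the paper:

```python
def ifwa(alphas, weights):
    """Classical intuitionistic fuzzy weighted averaging operator.
    alphas: list of (mu, nu) membership/non-membership pairs in [0, 1];
    weights: non-negative weights summing to 1.
    IFWA = (1 - prod((1 - mu_i)^w_i), prod(nu_i^w_i))."""
    prod_one_minus_mu = 1.0
    prod_nu = 1.0
    for (mu, nu), w in zip(alphas, weights):
        prod_one_minus_mu *= (1.0 - mu) ** w
        prod_nu *= nu ** w
    return 1.0 - prod_one_minus_mu, prod_nu
```

The idempotency property studied in the paper is easy to check here: aggregating identical pairs returns the same pair.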
Physics of the spatially averaged snowmelt process
NASA Astrophysics Data System (ADS)
Horne, Federico E.; Kavvas, M. Levent
1997-04-01
It has been recognized that the snowmelt models developed in the past do not fully meet current prediction requirements. Part of the reason is that they do not account for the spatial variation in the dynamics of the spatially heterogeneous snowmelt process. Most of the current physics-based distributed snowmelt models utilize point-location-scale conservation equations which do not represent the spatially varying snowmelt dynamics over a grid area that surrounds a computational node. In this study, to account for the spatial heterogeneity of the snowmelt dynamics, areally averaged mass and energy conservation equations for the snowmelt process are developed. As a first step, energy and mass conservation equations that govern the snowmelt dynamics at a point location are averaged over the snowpack depth, resulting in depth-averaged equations (DAE). In this averaging, it is assumed that the snowpack has two layers. Then, the point-location DAE are averaged over the snowcover area. To develop the areally averaged equations of the snowmelt physics, we make the fundamental assumption that the snowmelt process is spatially ergodic. The snow temperature and the snow density are considered as the stochastic variables. The areally averaged snowmelt equations are obtained in terms of their corresponding ensemble averages. Only the first two moments are considered. A numerical solution scheme (Runge-Kutta) is then applied to solve the resulting system of ordinary differential equations. This equation system is solved for the areal mean and areal variance of snow temperature and of snow density, for the areal mean of snowmelt, and for the areal covariance of snow temperature and snow density. The developed model is tested using Scott Valley (Siskiyou County, California) snowmelt and meteorological data. The performance of the model in simulating the observed areally averaged snowmelt is satisfactory.
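The Runge-Kutta integration named above can be illustrated with a generic classical fourth-order step; the actual state vector of the study (areal means, variances, and the temperature-density covariance) is not reproduced here, so treat this as a sketch of the numerical scheme only:

```python
def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2.0, y + h / 2.0 * k1)
    k3 = f(t + h / 2.0, y + h / 2.0 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
```

Applied repeatedly, such steps advance the coupled ODE system for the first two moments forward in time.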
NASA Technical Reports Server (NTRS)
Chen, Fei; Yates, David; LeMone, Margaret
2001-01-01
To understand the effects of land-surface heterogeneity and the interactions between the land-surface and the planetary boundary layer at different scales, we develop a multiscale data set. This data set, based on the Cooperative Atmosphere-Surface Exchange Study (CASES97) observations, includes atmospheric, surface, and sub-surface observations obtained from a dense observation network covering a large region on the order of 100 km. We use this data set to drive three land-surface models (LSMs) to generate multi-scale (with three resolutions of 1, 5, and 10 kilometers) gridded surface heat flux maps for the CASES area. Upon validating these flux maps with measurements from surface station and aircraft, we utilize them to investigate several approaches for estimating the area-integrated surface heat flux for the CASES97 domain of 71x74 square kilometers, which is crucial for land surface model development/validation and area water and energy budget studies. This research is aimed at understanding the relative contribution of random turbulence versus organized mesoscale circulations to the area-integrated surface flux at the scale of 100 kilometers, and identifying the most important effective parameters for characterizing the subgrid-scale variability for large-scale atmosphere-hydrology models.
Chen, Chieh-Li; Ishikawa, Hiroshi; Wollstein, Gadi; Bilonick, Richard A.; Kagemann, Larry; Schuman, Joel S.
2016-01-01
Purpose: To develop a novel image enhancement method so that nonframe-averaged optical coherence tomography (OCT) images become comparable to active eye-tracking frame-averaged OCT images. Methods: Twenty-one eyes of 21 healthy volunteers were scanned with a noneye-tracking, nonframe-averaged OCT device and an active eye-tracking, frame-averaged OCT device. Virtual averaging was applied to nonframe-averaged images with voxel resampling and added amplitude deviation over 15 repetitions. Signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and the distance between the end of the visible nasal retinal nerve fiber layer (RNFL) and the foveola were assessed to evaluate the image enhancement effect and retinal layer visibility. Retinal thicknesses before and after processing were also measured. Results: All virtually averaged nonframe-averaged images showed notable improvement and clear resemblance to active eye-tracking frame-averaged images. SNR and CNR were significantly improved (SNR: 30.5 vs. 47.6 dB, CNR: 4.4 vs. 6.4 dB, original versus processed, P < 0.0001, paired t-test). The distance between the end of the visible nasal RNFL and the foveola was significantly different before (681.4 vs. 446.5 μm, Cirrus versus Spectralis, P < 0.0001) but not after processing (442.9 vs. 446.5 μm, P = 0.76). Sectoral macular total retinal and circumpapillary RNFL thicknesses showed systematic differences between Cirrus and Spectralis that were no longer significant after processing. Conclusion: The virtual averaging method successfully improved nontracking, nonframe-averaged OCT image quality and made the images comparable to active eye-tracking frame-averaged OCT images. Translational Relevance: Virtual averaging may enable detailed retinal structure studies on images acquired using a mixture of nonframe-averaged and frame-averaged OCT devices without concern about systematic differences in both qualitative and quantitative aspects. PMID:26835180
Averaging Models: Parameters Estimation with the R-Average Procedure
ERIC Educational Resources Information Center
Vidotto, G.; Massidda, D.; Noventa, S.
2010-01-01
The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto &…
Geologic analysis of averaged magnetic satellite anomalies
NASA Technical Reports Server (NTRS)
Goyal, H. K.; Vonfrese, R. R. B.; Ridgway, J. R.; Hinze, W. J.
1985-01-01
To investigate relative advantages and limitations for quantitative geologic analysis of magnetic satellite scalar anomalies derived from arithmetic averaging of orbital profiles within equal-angle or equal-area parallelograms, the anomaly averaging process was simulated by orbital profiles computed from spherical-earth crustal magnetic anomaly modeling experiments using Gauss-Legendre quadrature integration. The results indicate that averaging can provide reasonable values at satellite elevations, where contributing error factors within a given parallelogram include the elevation distribution of the data, and orbital noise and geomagnetic field attributes. Various inversion schemes including the use of equivalent point dipoles are also investigated as an alternative to arithmetic averaging. Although inversion can provide improved spherical grid anomaly estimates, these procedures are problematic in practice where computer scaling difficulties frequently arise due to a combination of factors including large source-to-observation distances ( 400 km), high geographic latitudes, and low geomagnetic field inclinations.
Interpreting Bivariate Regression Coefficients: Going beyond the Average
ERIC Educational Resources Information Center
Halcoussis, Dennis; Phillips, G. Michael
2010-01-01
Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…
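The regression framework the note describes can be sketched as intercept-only least-squares fits, one transform per average; this is a hedged illustration of the idea, not the authors' exact presentation:

```python
import numpy as np

def means_via_regression(y):
    """Recover common averages as intercept-only regressions: regress y
    (or a transform of y) on a constant and read off the intercept."""
    y = np.asarray(y, dtype=float)
    X = np.ones((y.size, 1))
    intercept = lambda z: np.linalg.lstsq(X, z, rcond=None)[0][0]
    arithmetic = intercept(y)                  # mean(y)
    geometric = np.exp(intercept(np.log(y)))   # exp(mean(log y))
    harmonic = 1.0 / intercept(1.0 / y)        # 1 / mean(1 / y)
    return arithmetic, geometric, harmonic
```

Weighted averages follow the same pattern with weighted least squares in place of ordinary least squares.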
42 CFR 423.279 - National average monthly bid amount.
Code of Federal Regulations, 2012 CFR
2012-10-01
... each MA-PD plan described in section 1851(a)(2)(A)(i) of the Act. The calculation does not include bids... section 1876(h) of the Act. (b) Calculation of weighted average. (1) The national average monthly bid... defined in § 422.258(c)(1) of this chapter) and the denominator equal to the total number of Part...
42 CFR 423.279 - National average monthly bid amount.
Code of Federal Regulations, 2011 CFR
2011-10-01
... bid amounts for each prescription drug plan (not including fallbacks) and for each MA-PD plan...(h) of the Act. (b) Calculation of weighted average. (1) The national average monthly bid amount is a....258(c)(1) of this chapter) and the denominator equal to the total number of Part D...
42 CFR 423.279 - National average monthly bid amount.
Code of Federal Regulations, 2014 CFR
2014-10-01
... each MA-PD plan described in section 1851(a)(2)(A)(i) of the Act. The calculation does not include bids... section 1876(h) of the Act. (b) Calculation of weighted average. (1) The national average monthly bid... defined in § 422.258(c)(1) of this chapter) and the denominator equal to the total number of Part...
42 CFR 423.279 - National average monthly bid amount.
Code of Federal Regulations, 2010 CFR
2010-10-01
... bid amounts for each prescription drug plan (not including fallbacks) and for each MA-PD plan...(h) of the Act. (b) Calculation of weighted average. (1) The national average monthly bid amount is a....258(c)(1) of this chapter) and the denominator equal to the total number of Part D...
Prediction of shelled shrimp weight by machine vision.
Pan, Peng-min; Li, Jian-ping; Lv, Gu-lai; Yang, Hui; Zhu, Song-ming; Lou, Jian-zhong
2009-08-01
The weight of shelled shrimp is an important parameter for the grading process. Weight prediction of shelled shrimp from contour area alone is not accurate enough because it ignores shrimp thickness. In this paper, a multivariate prediction model containing area, perimeter, length, and width was established. A new calibration algorithm for extracting the length of shelled shrimp was proposed, comprising binary image thinning, branch recognition and elimination, and length reconstruction; width was calculated during length extraction. The model was further validated with another set of images from 30 shelled shrimps. For comparison, an artificial neural network (ANN) was used for shrimp weight prediction. The ANN model gave better prediction accuracy (average relative error of 2.67%) but took a tenfold increase in calculation time compared with the weight-area-perimeter (WAP) model (average relative error of 3.02%). We thus conclude that the WAP model is the better method for predicting the weight of shelled red shrimp. PMID:19650197
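A minimal sketch of a WAP-style multivariate linear model fitted by least squares; the function names are illustrative and any coefficients come from whatever data is supplied, since the paper's calibration data and fitted coefficients are not given here:

```python
import numpy as np

def fit_wap_model(features, weights):
    """Fit weight ~ intercept + area + perimeter + length + width by
    ordinary least squares. features: (n, 4) array of shape descriptors;
    weights: (n,) measured shrimp weights."""
    X = np.column_stack([np.ones(len(weights)), features])   # add intercept
    beta, *_ = np.linalg.lstsq(X, np.asarray(weights, float), rcond=None)
    return beta

def predict_weight(beta, feature_row):
    """Predict one shrimp's weight from its (area, perimeter, length, width)."""
    return float(beta[0] + np.dot(beta[1:], feature_row))
```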
[Comparison of formulas for calculating average skin temperature and their characteristics].
Mochida, T; Shimakura, K; Yoshida, N
1994-11-01
To obtain skin-temperature data, experiments were carried out with three healthy young Japanese males. The subjects were exposed to each of four environments with dry-bulb temperatures of 15 degrees C, 19 degrees C, 25 degrees C, and 33 degrees C. At each of these air temperatures, relative humidity and air movement were set at 50% and 0.15 m/s, respectively. The subjects wore only athletic shorts and were seated on a meshed chair. Each subject was measured with thermistors continuously for one hour under these conditions to obtain twenty-nine regional skin temperatures. The experiments were conducted with one subject at a time in the test chamber. The observed skin temperatures were substituted into twenty-eight different weighting formulas for comparison. The present analysis revealed that the 12-point and 7-point skin-area formulas of Hardy-DuBois gave values close to the mean of the twenty-eight formulas. Moreover, the values calculated from the formula of Nadel et al., which is weighted by skin area and thermal sensitivity, are similar to those calculated from the formula of Mochida, which is weighted by skin area, heat transfer coefficients, and thermal sensitivity. Furthermore, the authors verified that the area-mean weighting factor follows from Teichner's definition, in which the limiting value of the arithmetic mean of skin temperatures gives the average skin temperature. PMID:7880325
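An area-weighted mean skin temperature can be sketched as below. The 7-site weights are the values commonly quoted for the Hardy-DuBois formula; treat them as illustrative rather than as a transcription from this paper:

```python
# Commonly quoted 7-site Hardy-DuBois area weights (sum to 1.0);
# illustrative values, not taken from the paper under discussion.
HARDY_DUBOIS_7 = {
    "head": 0.07, "arm": 0.14, "hand": 0.05, "trunk": 0.35,
    "thigh": 0.19, "leg": 0.13, "foot": 0.07,
}

def mean_skin_temperature(site_temps, weights=HARDY_DUBOIS_7):
    """Area-weighted mean skin temperature from per-site readings (deg C)."""
    return sum(weights[site] * t for site, t in site_temps.items())
```

Other formulas in the comparison differ only in the weight vector (e.g. adding thermal-sensitivity or heat-transfer factors), so the same function covers them with a different `weights` argument.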
Averaging Internal Consistency Reliability Coefficients
ERIC Educational Resources Information Center
Feldt, Leonard S.; Charter, Richard A.
2006-01-01
Seven approaches to averaging reliability coefficients are presented. Each approach starts with a unique definition of the concept of "average," and no approach is more correct than the others. Six of the approaches are applicable to internal consistency coefficients. The seventh approach is specific to alternate-forms coefficients. Although the…
NASA Technical Reports Server (NTRS)
Ustino, Eugene A.
2006-01-01
This slide presentation reviews the observable radiances as functions of atmospheric parameters and of surface parameters; the mathematics of atmospheric weighting functions (WFs) and surface partial derivatives (PDs) are presented; and the equation of the forward radiative transfer (RT) problem is presented. For non-scattering atmospheres this can be done analytically, and all WFs and PDs can be computed analytically using the direct linearization approach. For scattering atmospheres, in the general case, the solution of the forward RT problem can be obtained only numerically, but we need only two numerical solutions, one of the forward RT problem and one of the adjoint RT problem, to compute all WFs and PDs we can think of. In this presentation we discuss applications of both the linearization and adjoint approaches.
The Averaging Problem in Cosmology
NASA Astrophysics Data System (ADS)
Paranjape, Aseem
2009-06-01
This thesis deals with the averaging problem in cosmology, which has gained considerable interest in recent years, and is concerned with correction terms (after averaging inhomogeneities) that appear in the Einstein equations when working on the large scales appropriate for cosmology. It has been claimed in the literature that these terms may account for the phenomenon of dark energy which causes the late time universe to accelerate. We investigate the nature of these terms by using averaging schemes available in the literature and further developed to be applicable to the problem at hand. We show that the effect of these terms when calculated carefully, remains negligible and cannot explain the late time acceleration.
Feng, Xiaoqi; Astell-Burt, Thomas
2016-06-01
Reductions in body mass index and reduced overweight/obesity risk among participants in the 45 and Up Study diagnosed with type 2 diabetes mellitus (T2DM) were relatively large in rural areas compared to those in urban environs. Further research is needed to explain why where people reside influences optimal management of T2DM. PMID:27321327
Lorcaserin for weight management
Taylor, James R; Dietrich, Eric; Powell, Jason
2013-01-01
Type 2 diabetes and obesity commonly occur together. Obesity contributes to insulin resistance, a main cause of type 2 diabetes. Modest weight loss reduces glucose, lipids, blood pressure, need for medications, and cardiovascular risk. A number of approaches can be used to achieve weight loss, including lifestyle modification, surgery, and medication. Lorcaserin, a novel antiobesity agent, affects central serotonin subtype 2A receptors, resulting in decreased food intake and increased satiety. It has been studied in obese patients with type 2 diabetes and results in an approximately 5.5 kg weight loss, on average, when used for one year. Headache, back pain, nasopharyngitis, and nausea were the most common adverse effects noted with lorcaserin. Hypoglycemia was more common in the lorcaserin groups in the clinical trials, but none of the episodes were categorized as severe. Based on the results of these studies, lorcaserin was approved at a dose of 10 mg twice daily in patients with a body mass index ≥30 kg/m2 or ≥27 kg/m2 with at least one weight-related comorbidity, such as hypertension, type 2 diabetes mellitus, or dyslipidemia, in addition to a reduced calorie diet and increased physical activity. Lorcaserin is effective for weight loss in obese patients with and without type 2 diabetes, although its specific role in the management of obesity is unclear at this time. This paper reviews the clinical trials of lorcaserin, its use from the patient perspective, and its potential role in the treatment of obesity. PMID:23788837
High average power pockels cell
Daly, Thomas P.
1991-01-01
A high average power pockels cell is disclosed which reduces the effect of thermally induced strains in high average power laser technology. The pockels cell includes an elongated, substantially rectangular crystalline structure formed from a KDP-type material to eliminate shear strains. The X- and Y-axes are oriented substantially perpendicular to the edges of the crystal cross-section and to the C-axis direction of propagation to eliminate shear strains.
Effect of high-speed jet on flow behavior, retrogradation, and molecular weight of rice starch.
Fu, Zhen; Luo, Shun-Jing; BeMiller, James N; Liu, Wei; Liu, Cheng-Mei
2015-11-20
Effects of high-speed jet (HSJ) treatment on flow behavior, retrogradation, and degradation of the molecular structure of indica rice starch were investigated. Decreasing with the number of HSJ treatment passes were the turbidity of pastes (degree of retrogradation), the enthalpy of melting of retrograded rice starch, weight-average molecular weights and weight-average root-mean square radii of gyration of the starch polysaccharides, and the amylopectin peak areas of SEC profiles. The areas of lower-molecular-weight polymers increased. The chain-length distribution was not significantly changed. Pastes of all starch samples exhibited pseudoplastic, shear-thinning behavior. HSJ treatment increased the flow behavior index and decreased the consistency coefficient and viscosity. The data suggested that degradation of amylopectin was mainly involved and that breakdown preferentially occurred in chains between clusters. PMID:26344255
Application of the Bayesian Model Averaging method to an ensemble system for Poland
NASA Astrophysics Data System (ADS)
Guzikowski, Jakub; Czerwinska, Agnieszka
2014-05-01
The aim of the project is to evaluate methods for generating numerical ensemble weather predictions using meteorological data from the Weather Research & Forecasting (WRF) Model and calibrating these data by means of the Bayesian Model Averaging (WRF BMA) approach. We construct high-resolution, short-range ensemble forecasts using meteorological data (temperature) generated by nine WRF models. The WRF models have 35 vertical levels and 2.5 km x 2.5 km horizontal resolution. The main emphasis is that the ensemble members use different parameterizations of the physical phenomena occurring in the boundary layer. To calibrate the ensemble forecast we use the Bayesian Model Averaging (BMA) approach. The BMA predictive probability density function (PDF) is a weighted average of the predictive PDFs associated with each individual ensemble member, with weights that reflect the member's relative skill. As a test we chose a case with a heat wave and convective weather conditions over Poland from 23 July to 1 August 2013. From 23 July to 29 July 2013, temperatures oscillated around 30 degrees Celsius at many meteorological stations and new temperature records were set. During this time an increase in patients hospitalized with cardiovascular problems was registered. On 29 July 2013 an advection of moist tropical air masses was recorded over Poland, causing a strong convective event with a mesoscale convective system (MCS). The MCS caused local flooding, damage to transport infrastructure, destroyed buildings and trees, and posed a direct threat to life. A comparison of the meteorological data from the ensemble system with data recorded at 74 weather stations located in Poland is made. We prepare a set of model-observation pairs. Then, the data obtained from single ensemble members and the median from the WRF BMA system are evaluated on the basis of the deterministic statistical errors Root Mean Square Error (RMSE) and Mean Absolute Error (MAE). To evaluation
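The BMA predictive PDF described above, a skill-weighted mixture of member PDFs, can be sketched as follows. Gaussian member PDFs and fixed weights are assumed here for illustration; in practice the weights and spreads are estimated from training data (typically by maximum likelihood via EM):

```python
import numpy as np

def bma_pdf(x, member_means, member_sigmas, weights):
    """BMA predictive PDF: weighted mixture of (here Gaussian) member PDFs,
    with weights reflecting each member's relative skill."""
    x = np.asarray(x, dtype=float)
    pdf = np.zeros_like(x)
    for m, s, w in zip(member_means, member_sigmas, weights):
        pdf += w * np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))
    return pdf

def bma_mean(member_means, weights):
    """Mean of the BMA mixture: the skill-weighted average of member means."""
    return float(np.dot(weights, member_means))
```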
Creating "Intelligent" Ensemble Averages Using a Process-Based Framework
NASA Astrophysics Data System (ADS)
Baker, Noel; Taylor, Patrick
2014-05-01
The CMIP5 archive contains future climate projections from over 50 models provided by dozens of modeling centers from around the world. Individual model projections, however, are subject to biases created by structural model uncertainties. As a result, ensemble averaging of multiple models is used to add value to individual model projections and construct a consensus projection. Previous reports for the IPCC establish climate change projections based on an equal-weighted average of all model projections. However, individual models reproduce certain climate processes better than other models. Should models be weighted based on performance? Unequal ensemble averages have previously been constructed using a variety of mean state metrics. What metrics are most relevant for constraining future climate projections? This project develops a framework for systematically testing metrics in models to identify optimal metrics for unequal weighting multi-model ensembles. The intention is to produce improved ("intelligent") unequal-weight ensemble averages. A unique aspect of this project is the construction and testing of climate process-based model evaluation metrics. A climate process-based metric is defined as a metric based on the relationship between two physically related climate variables—e.g., outgoing longwave radiation and surface temperature. Several climate process metrics are constructed using high-quality Earth radiation budget data from NASA's Clouds and Earth's Radiant Energy System (CERES) instrument in combination with surface temperature data sets. It is found that regional values of tested quantities can vary significantly when comparing the equal-weighted ensemble average and an ensemble weighted using the process-based metric. Additionally, this study investigates the dependence of the metric weighting scheme on the climate state using a combination of model simulations including a non-forced preindustrial control experiment, historical simulations, and
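One simple way to realize the unequal weighting discussed above is inverse-error weighting on a process-based metric; this is an illustrative scheme under that assumption, not necessarily the weighting used in the study:

```python
import numpy as np

def metric_weights(metric_errors):
    """Turn per-model errors on a process-based metric (e.g. the OLR versus
    surface temperature relationship) into normalized ensemble weights.
    Smaller error => larger weight (inverse-error weighting)."""
    inv = 1.0 / np.asarray(metric_errors, dtype=float)
    return inv / inv.sum()

def weighted_ensemble_mean(projections, weights):
    """Unequal-weight ensemble average of model projections."""
    return float(np.dot(weights, projections))
```

With equal metric errors this reduces to the equal-weighted average used in previous IPCC reports, which makes the comparison between the two schemes direct.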
NASA Technical Reports Server (NTRS)
1995-01-01
The Attitude Adjuster is a system for weight repositioning corresponding to a SCUBA diver's changing positions. Compact tubes on the diver's air tank permit controlled movement of lead balls within the Adjuster, automatically repositioning when the diver changes position. Manufactured by Think Tank Technologies, the system is light and small, reducing drag and energy requirements and contributing to lower air consumption. The Mid-Continent Technology Transfer Center helped the company with both technical and business information and arranged for the testing at Marshall Space Flight Center's Weightlessness Environmental Training Facility for astronauts.
Vocal attractiveness increases by averaging.
Bruckert, Laetitia; Bestelmeyer, Patricia; Latinus, Marianne; Rouger, Julien; Charest, Ian; Rousselet, Guillaume A; Kawahara, Hideki; Belin, Pascal
2010-01-26
Vocal attractiveness has a profound influence on listeners-a bias known as the "what sounds beautiful is good" vocal attractiveness stereotype [1]-with tangible impact on a voice owner's success at mating, job applications, and/or elections. The prevailing view holds that attractive voices are those that signal desirable attributes in a potential mate [2-4]-e.g., lower pitch in male voices. However, this account does not explain our preferences in more general social contexts in which voices of both genders are evaluated. Here we show that averaging voices via auditory morphing [5] results in more attractive voices, irrespective of the speaker's or listener's gender. Moreover, we show that this phenomenon is largely explained by two independent by-products of averaging: a smoother voice texture (reduced aperiodicities) and a greater similarity in pitch and timbre with the average of all voices (reduced "distance to mean"). These results provide the first evidence for a phenomenon of vocal attractiveness increases by averaging, analogous to a well-established effect of facial averaging [6, 7]. They highlight prototype-based coding [8] as a central feature of voice perception, emphasizing the similarity in the mechanisms of face and voice perception. PMID:20129047
Determining GPS average performance metrics
NASA Technical Reports Server (NTRS)
Moore, G. V.
1995-01-01
Analytic and semi-analytic methods are used to show that users of the GPS constellation can expect performance variations based on their location. Specifically, performance is shown to be a function of both altitude and latitude. These results stem from the fact that the GPS constellation is itself non-uniform. For example, GPS satellites are over four times as likely to be directly over Tierra del Fuego than over Hawaii or Singapore. Inevitable performance variations due to user location occur for ground, sea, air and space GPS users. These performance variations can be studied in an average relative sense. A semi-analytic tool which symmetrically allocates GPS satellite latitude belt dwell times among longitude points is used to compute average performance metrics. These metrics include average number of GPS vehicles visible, relative average accuracies in the radial, intrack and crosstrack (or radial, north/south, east/west) directions, and relative average PDOP or GDOP. The tool can be quickly changed to incorporate various user antenna obscuration models and various GPS constellation designs. Among other applications, tool results can be used in studies to: predict locations and geometries of best/worst case performance, design GPS constellations, determine optimal user antenna location and understand performance trends among various users.
Modeling operating weight and axle weight distributions for highway vehicles
Greene, D.L.; Liang, J.C.
1988-07-01
The estimation of highway cost responsibility requires detailed information on vehicle operating weights and axle weights by type of vehicle. Typically, 10--20 vehicle types must be cross-classified by 10--20 registered weight classes and again by 20 or more operating weight categories, resulting in 100--400 relative frequencies to be determined for each vehicle type. For each of these, gross operating weight must be distributed to each axle or axle unit. Given the rarity of many of the heaviest vehicle types, direct estimation of these frequencies and axle weights from traffic classification count statistics and truck weight data may exceed the reliability of even the largest (e.g., 250,000 record) data sources. An alternative is to estimate statistical models of operating weight distributions as functions of registered weight, and models of axle weight shares as functions of operating weight. This paper describes the estimation of such functions using the multinomial logit model (a log-linear model) and the implementation of the modeling framework as a PC-based FORTRAN program. Areas for further research include the addition of highway class and region as explanatory variables in operating weight distribution models, and the development of theory for including registration costs and costs of operating overweight in the modeling framework. 14 refs., 45 figs., 5 tabs.
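The multinomial logit model mentioned above maps a registered weight to a vector of class shares via exponentiated linear utilities. A minimal sketch; the (intercept, slope) parameterization per class is hypothetical, not the paper's fitted model.

```python
import math

def multinomial_logit_shares(x, betas):
    """Predicted share of each operating-weight class for registered
    weight x under a multinomial logit; betas is a list of
    (intercept, slope) pairs, a hypothetical parameterization."""
    utils = [b0 + b1 * x for b0, b1 in betas]
    m = max(utils)                      # subtract max for numerical stability
    exps = [math.exp(u - m) for u in utils]
    s = sum(exps)
    return [e / s for e in exps]
```

Shares are guaranteed positive and sum to one, which is what makes the logit form convenient for distributing frequencies across many sparse weight categories.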
Combining forecast weights: Why and how?
NASA Astrophysics Data System (ADS)
Yin, Yip Chee; Kok-Haur, Ng; Hock-Eam, Lim
2012-09-01
This paper proposes a procedure called forecast weight averaging, a specific combination of the forecast weights obtained from different methods of constructing forecast weights, for the purpose of improving the accuracy of pseudo out-of-sample forecasting. It is found that under certain specified conditions, forecast weight averaging can lower the mean squared forecast error obtained from model averaging. In addition, we show that in a linear and homoskedastic environment, this superior predictive ability of forecast weight averaging holds true irrespective of whether the coefficients are tested by the t statistic or the z statistic, provided the significance level is within the 10% range. By theoretical proofs and simulation study, we show that model averaging variants such as variance model averaging, simple model averaging, and standard error model averaging each produce a mean squared forecast error larger than that of forecast weight averaging. Finally, this result also holds true, marginally, when applied to business and economic empirical data sets: the Gross Domestic Product (GDP) growth rate, Consumer Price Index (CPI), and Average Lending Rate (ALR) of Malaysia.
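The core idea, combining the weight vectors produced by different weighting methods before forming the forecast, can be sketched as follows; the simple unweighted mean of the weight vectors is an assumption for illustration, not necessarily the paper's specific combination.

```python
import numpy as np

def forecast_weight_average(weight_sets, forecasts):
    """Average the weight vectors produced by several weighting
    methods, renormalize, and apply the combined weights to the
    individual model forecasts."""
    w = np.mean(weight_sets, axis=0)
    w = w / w.sum()
    return float(w @ np.asarray(forecasts))
```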
Evaluations of average level spacings
Liou, H.I.
1980-01-01
The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, it is extremely difficult, if not totally impossible, to detect a complete sequence of levels without mixing in levels of other parities. Most methods derive the average level spacings by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both the distributions of level widths and of level positions is discussed extensively with an example of ¹⁶⁸Er data. 19 figures, 2 tables.
Choi, Young Min; Oh, Hee Kyung
2016-01-01
In order to attain heavier live weight without impairing pork or sensory quality characteristics, carcass performance, muscle fiber, pork quality, and sensory quality characteristics were compared among the heavy weight (HW, average live weight of 130.5 kg), medium weight (MW, average weight of 111.1 kg), and light weight (LW, average weight of 96.3 kg) pigs at time of slaughter. The loin eye area was 1.47 times greater in the HW group compared to the LW group (64.0 and 43.5 cm², p<0.001), while carcass percent was similar between the HW and MW groups (p>0.05). This greater performance by the HW group compared to the LW group can be explained by a greater total number (1,436 vs. 1,188 ×10³, p<0.001) and larger area (4,452 vs. 3,716 μm², p<0.001) of muscle fibers. No significant differences were observed in muscle pH at 45 min, lightness, drip loss, and shear force among the groups (p>0.05), and higher live weights did not influence sensory quality attributes, including tenderness, juiciness, and flavor. Therefore, these findings indicate that increased live weights in this study did not influence the technological and sensory quality characteristics. Moreover, muscles with a higher number of medium or large size fibers tend to exhibit good carcass performance without impairing meat and sensory quality characteristics. PMID:27433110
Polyhedral Painting with Group Averaging
ERIC Educational Resources Information Center
Farris, Frank A.; Tsao, Ryan
2016-01-01
The technique of "group-averaging" produces colorings of a sphere that have the symmetries of various polyhedra. The concepts are accessible at the undergraduate level, without being well-known in typical courses on algebra or geometry. The material makes an excellent discovery project, especially for students with some background in…
Averaged Electroencephalic Audiometry in Infants
ERIC Educational Resources Information Center
Lentz, William E.; McCandless, Geary A.
1971-01-01
Normal, preterm, and high-risk infants were tested at 1, 3, 6, and 12 months of age using averaged electroencephalic audiometry (AEA) to determine the usefulness of AEA as a measurement technique for assessing auditory acuity in infants, and to delineate some of the procedural and technical problems often encountered. (KW)
Averaging inhomogeneous cosmologies - a dialogue.
NASA Astrophysics Data System (ADS)
Buchert, T.
The averaging problem for inhomogeneous cosmologies is discussed in the form of a disputation between two cosmologists, one of them (RED) advocating the standard model, the other (GREEN) advancing some arguments against it. Technical explanations of these arguments as well as the conclusions of this debate are given by BLUE.
Averaging facial expression over time
Haberman, Jason; Harp, Tom; Whitney, David
2010-01-01
The visual system groups similar features, objects, and motion (e.g., Gestalt grouping). Recent work suggests that the computation underlying perceptual grouping may be one of summary statistical representation. Summary representation occurs for low-level features, such as size, motion, and position, and even for high level stimuli, including faces; for example, observers accurately perceive the average expression in a group of faces (J. Haberman & D. Whitney, 2007, 2009). The purpose of the present experiments was to characterize the time-course of this facial integration mechanism. In a series of three experiments, we measured observers’ abilities to recognize the average expression of a temporal sequence of distinct faces. Faces were presented in sets of 4, 12, or 20, at temporal frequencies ranging from 1.6 to 21.3 Hz. The results revealed that observers perceived the average expression in a temporal sequence of different faces as precisely as they perceived a single face presented repeatedly. The facial averaging was independent of temporal frequency or set size, but depended on the total duration of exposed faces, with a time constant of ~800 ms. These experiments provide evidence that the visual system is sensitive to the ensemble characteristics of complex objects presented over time. PMID:20053064
Exact averaging of laminar dispersion
NASA Astrophysics Data System (ADS)
Ratnakar, Ram R.; Balakotaiah, Vemuri
2011-02-01
We use the Liapunov-Schmidt (LS) technique of bifurcation theory to derive a low-dimensional model for laminar dispersion of a nonreactive solute in a tube. The LS formalism leads to an exact averaged model, consisting of the governing equation for the cross-section averaged concentration, along with the initial and inlet conditions, to all orders in the transverse diffusion time. We use the averaged model to analyze the temporal evolution of the spatial moments of the solute and show that they do not have the centroid displacement or variance deficit predicted by the coarse-grained models derived by other methods. We also present a detailed analysis of the first three spatial moments for short and long times as a function of the radial Peclet number and identify three clearly defined time intervals for the evolution of the solute concentration profile. By examining the skewness in some detail, we show that the skewness increases initially, attains a maximum for time scales of the order of transverse diffusion time, and the solute concentration profile never attains the Gaussian shape at any finite time. Finally, we reason that there is a fundamental physical inconsistency in representing laminar (Taylor) dispersion phenomena using truncated averaged models in terms of a single cross-section averaged concentration and its large scale gradient. Our approach evaluates the dispersion flux using a local gradient between the dominant diffusive and convective modes. We present and analyze a truncated regularized hyperbolic model in terms of the cup-mixing concentration for the classical Taylor-Aris dispersion that has a larger domain of validity compared to the traditional parabolic model. By analyzing the temporal moments, we show that the hyperbolic model has no physical inconsistencies that are associated with the parabolic model and can describe the dispersion process to first order accuracy in the transverse diffusion time.
Ultrahigh molecular weight aromatic siloxane polymers
NASA Technical Reports Server (NTRS)
Ludwick, L. M.
1982-01-01
The condensation of a diol with a silane in toluene yields a silphenylene-siloxane polymer. The reaction of stoichiometric amounts of the diol and silane produced products with molecular weights in the range 2.0-6.0 × 10⁵. The molecular weight of the product was greatly increased by a multistep technique. The methodology for synthesis of high molecular weight polymers using a two-step procedure was refined. Polymers with weight average molecular weights in excess of 1.0 × 10⁶ were produced by this method. Two more reactive silanes, bis(pyrrolidinyl)dimethylsilane and bis(gamma-butyrolactam)dimethylsilane, are compared with dimethylaminodimethylsilane in their ability to advance the molecular weight of the prepolymer. The polymers produced are characterized by intrinsic viscosity in tetrahydrofuran. Weight and number average molecular weights and polydispersity are determined by gel permeation chromatography.
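The weight- and number-average molecular weights and polydispersity determined by GPC at the end of the abstract are standard quantities; a minimal sketch of their textbook definitions (the numbers below are not values from the paper):

```python
def molecular_weight_averages(counts, masses):
    """Standard polymer definitions: Mn = sum(N*M)/sum(N),
    Mw = sum(N*M^2)/sum(N*M), polydispersity index = Mw/Mn."""
    first = sum(N * M for N, M in zip(counts, masses))
    second = sum(N * M * M for N, M in zip(counts, masses))
    Mn = first / sum(counts)
    Mw = second / first
    return Mn, Mw, Mw / Mn
```

Because Mw weights longer chains more heavily, Mw ≥ Mn always, with equality only for a perfectly monodisperse sample.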
Spectral Approach to Optimal Estimation of the Global Average Temperature.
NASA Astrophysics Data System (ADS)
Shen, Samuel S. P.; North, Gerald R.; Kim, Kwang-Y.
1994-12-01
Making use of EOF analysis and statistical optimal averaging techniques, the problem of random sampling error in estimating the global average temperature by a network of surface stations has been investigated. The EOF representation makes it unnecessary to use simplified empirical models of the correlation structure of temperature anomalies. If an adjustable weight is assigned to each station according to the criterion of minimum mean-square error, a formula for this error can be derived that consists of a sum of contributions from successive EOF modes. The EOFs were calculated from both observed data and a noise-forced EBM for the problem of one-year and five-year averages. The mean square statistical sampling error depends on the spatial distribution of the stations, length of the averaging interval, and the choice of the weight for each station data stream. Examples used here include four symmetric configurations of 4 × 4, 6 × 4, 9 × 7, and 20 × 10 stations and the Angell-Korshover configuration. Comparisons with the 100-yr U.K. dataset show that correlations for the time series of the global temperature anomaly average between the full dataset and this study's sparse configurations are rather high. For example, the 63-station Angell-Korshover network with uniform weighting explains 92.7% of the total variance, whereas the same network with optimal weighting can lead to 97.8% explained total variance of the U.K. dataset.
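The adjustable station weights described above solve a constrained minimum mean-square-error problem. A sketch of the textbook Lagrange-multiplier solution under a sum-to-one constraint; the paper's actual derivation proceeds through EOF modes, so this is an assumed simplification.

```python
import numpy as np

def optimal_weights(C, c):
    """Station weights minimizing the mean-square error of the
    estimated global average, constrained to sum to one.
    C: station-station anomaly covariance matrix (n, n);
    c: covariance of each station with the true global mean (n,)."""
    Ci = np.linalg.inv(C)
    ones = np.ones(len(c))
    lam = (1.0 - ones @ Ci @ c) / (ones @ Ci @ ones)
    return Ci @ (c + lam * ones)
```

With uncorrelated, identically distributed stations (C proportional to the identity, uniform c) the solution collapses to uniform weighting, consistent with the abstract's comparison of uniform versus optimal weights.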
Pregnancy Weight Gain Calculator
Calculating High Speed Centrifugal Compressor Performance from Averaged Measurements
NASA Astrophysics Data System (ADS)
Lou, Fangyuan; Fleming, Ryan; Key, Nicole L.
2012-12-01
To improve the understanding of high performance centrifugal compressors found in modern aircraft engines, the aerodynamics through these machines must be experimentally studied. To accurately capture the complex flow phenomena through these devices, research facilities that can accurately simulate these flows are necessary. One such facility has been recently developed, and it is used in this paper to explore the effects of averaging total pressure and total temperature measurements to calculate compressor performance. Different averaging techniques (including area averaging, mass averaging, and work averaging) have been applied to the data. Results show that there is a negligible difference in both the calculated total pressure ratio and efficiency for the different techniques employed. However, the uncertainty in the performance parameters calculated with the different averaging techniques is significantly different, with area averaging providing the least uncertainty.
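Two of the averaging techniques compared above, area averaging and mass averaging, can be sketched for discrete probe measurements (variable names are illustrative; work averaging is omitted):

```python
import numpy as np

def area_average(p, dA):
    """Area-weighted average of probe values p over elemental areas dA."""
    p, dA = np.asarray(p), np.asarray(dA)
    return float(np.sum(p * dA) / np.sum(dA))

def mass_average(p, rho, u, dA):
    """Mass-flux-weighted average; rho*u*dA is the elemental mass flow."""
    mdot = np.asarray(rho) * np.asarray(u) * np.asarray(dA)
    return float(np.sum(np.asarray(p) * mdot) / np.sum(mdot))
```

In a uniform flow the two coincide; in a nonuniform flow the mass average emphasizes the high-velocity regions that carry most of the flow, which is why the choice matters mainly for the uncertainty rather than the mean, as the abstract reports.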
Averaging Robertson-Walker cosmologies
NASA Astrophysics Data System (ADS)
Brown, Iain A.; Robbers, Georg; Behrend, Juliane
2009-04-01
The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ωeff⁰ ≈ 4 × 10⁻⁶, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10⁻⁸ and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state weff < -1/3 can be found for strongly phantom models.
Ensemble averaging of acoustic data
NASA Technical Reports Server (NTRS)
Stefanski, P. K.
1982-01-01
A computer program called Ensemble Averaging of Acoustic Data is documented. The program samples analog data, analyzes the data, and displays them in the time and frequency domains. Hard copies of the displays are the program's output. The documentation includes a description of the program and detailed user instructions. This software was developed for use on the Ames 40- by 80-Foot Wind Tunnel's Dynamic Analysis System, consisting of a PDP-11/45 computer, two RK05 disk drives, a Tektronix 611 keyboard/display terminal, an FPE-4 Fourier Processing Element, and an analog-to-digital converter.
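Ensemble averaging of repeated, time-aligned records is the standard way to pull a coherent signal out of uncorrelated noise; a minimal sketch of the technique (not the documented program's implementation):

```python
import numpy as np

def ensemble_average(records):
    """Average N time-aligned records; the coherent signal survives
    while uncorrelated noise amplitude drops roughly as 1/sqrt(N)."""
    return np.mean(np.asarray(records), axis=0)
```

Averaging 100 records reduces the noise standard deviation by about a factor of ten relative to any single record.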
The averaging method in applied problems
NASA Astrophysics Data System (ADS)
Grebenikov, E. A.
1986-04-01
The book presents the body of methods for investigating complicated nonlinear oscillating systems known in the literature as the "averaging method". The author describes the constructive side of this method, i.e., its concrete forms and corresponding algorithms, using mathematical models that are sufficiently general but built around concrete problems. The book is written so that a reader interested in the techniques and algorithms of the asymptotic theory of ordinary differential equations can solve such problems independently. For specialists in the area of applied mathematics and mechanics.
The causal meaning of Fisher’s average effect
LEE, JAMES J.; CHOW, CARSON C.
2013-01-01
Summary In order to formulate the Fundamental Theorem of Natural Selection, Fisher defined the average excess and average effect of a gene substitution. Finding these notions to be somewhat opaque, some authors have recommended reformulating Fisher’s ideas in terms of covariance and regression, which are classical concepts of statistics. We argue that Fisher intended his two averages to express a distinction between correlation and causation. On this view, the average effect is a specific weighted average of the actual phenotypic changes that result from physically changing the allelic states of homologous genes. We show that the statistical and causal conceptions of the average effect, perceived as inconsistent by Falconer, can be reconciled if certain relationships between the genotype frequencies and non-additive residuals are conserved. There are certain theory-internal considerations favouring Fisher’s original formulation in terms of causality; for example, the frequency-weighted mean of the average effects equaling zero at each locus becomes a derivable consequence rather than an arbitrary constraint. More broadly, Fisher’s distinction between correlation and causation is of critical importance to gene-trait mapping studies and the foundations of evolutionary biology. PMID:23938113
The influence of aquariums on weight in individuals with dementia.
Edwards, Nancy E; Beck, Alan M
2013-01-01
This study assessed whether individuals with dementia who observe aquariums increase the amount of food they consume and maintain body weight. The sample included 70 residents in dementia units within 3 extended care facilities in 2 states. The intervention included the introduction of an aquarium into each common dining area. A total increase of 196.9 g of daily food intake (25.0%) was noted from baseline to the end of the 10-week study. Resident body weight increased an average of 2.2 pounds during the study. Eight of 70 residents experienced a weight loss (mean = 1.89 lbs). People with advanced dementia responded to aquariums in their environment, demonstrating that attraction to the natural environment is so innate that it survives dementia. PMID:23138175
Using Bayes Model Averaging for Wind Power Forecasts
NASA Astrophysics Data System (ADS)
Preede Revheim, Pål; Beyer, Hans Georg
2014-05-01
For operational purposes, forecasts of the lumped output of groups of wind farms spread over larger geographic areas will often be of interest. A naive approach is to make forecasts for each individual site and sum them up to get the group forecast. It is however well documented that a better choice is to use a model that also takes advantage of spatial smoothing effects. It might however be the case that some sites tend to more accurately reflect the total output of the region, either in general or for certain wind directions. It will then be of interest to give these a greater influence over the group forecast. Bayesian model averaging (BMA) is a statistical post-processing method for producing probabilistic forecasts from ensembles. Raftery et al. [1] show how BMA can be used for statistical post-processing of forecast ensembles, producing PDFs of future weather quantities. The BMA predictive PDF of a future weather quantity is a weighted average of the ensemble members' PDFs, where the weights can be interpreted as posterior probabilities and reflect the ensemble members' contribution to overall forecasting skill over a training period. In Revheim and Beyer [2] the BMA procedure used in Sloughter, Gneiting and Raftery [3] was found to produce fairly accurate PDFs for the future mean wind speed of a group of sites from the single-site wind speeds. However, when the procedure was applied to wind power it resulted in either problems with the estimation of the parameters (mainly caused by longer consecutive periods of no power production) or severe underestimation (mainly caused by problems with reflecting the power curve). In this paper the problems that arose when applying BMA to wind power forecasting are met through two strategies. First, the BMA procedure is run with a combination of single-site wind speeds and single-site wind power production as input. This solves the problem with longer consecutive periods where the input data
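The BMA predictive PDF described above is a weighted mixture of the ensemble members' densities. A minimal sketch assuming Gaussian components (Sloughter, Gneiting and Raftery use other component distributions for wind quantities, so the Gaussian form is a simplifying assumption):

```python
import math

def bma_pdf(x, means, sds, weights):
    """BMA predictive density at x: a weighted mixture of the
    ensemble members' PDFs (Gaussian components assumed here)."""
    return sum(
        w * math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2.0 * math.pi))
        for w, m, s in zip(weights, means, sds)
    )
```

As long as the weights sum to one, the mixture integrates to one, so it is a proper predictive PDF regardless of how skill is distributed among members.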
Flexible time domain averaging technique
NASA Astrophysics Data System (ADS)
Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng
2013-09-01
Time domain averaging (TDA) is essentially a comb filter; it cannot extract specified harmonics that may be caused by certain faults, such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to some extent. Several improved TDA methods have been proposed; however, they cannot completely eliminate the waveform reconstruction error caused by PCE. In order to overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of the FTDA is first constructed by frequency-domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which improves the computational efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of the FTDA in signal de-noising, interpolation, and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by the FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it can improve the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for fault symptom extraction in rotating machinery.
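Conventional time domain averaging, the baseline that FTDA improves on, can be sketched as synchronous averaging over whole periods (assuming an integer period length in samples; the PCE the paper targets arises precisely when the true period is not an integer number of samples):

```python
import numpy as np

def time_domain_average(signal, period_samples):
    """Classical TDA / synchronous averaging: cut the record into whole
    periods and average them, which passes the rotation frequency and
    its harmonics while attenuating everything else (comb filter)."""
    n = (len(signal) // period_samples) * period_samples
    return signal[:n].reshape(-1, period_samples).mean(axis=0)
```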
Non-Homogeneous Fractal Hierarchical Weighted Networks
Dong, Yujuan; Dai, Meifeng; Ye, Dandan
2015-01-01
A model of fractal hierarchical structures that share the property of non-homogeneous weighted networks is introduced. These networks can be completely and analytically characterized in terms of the involved parameters, i.e., the size of the original graph Nk and the non-homogeneous weight scaling factors r1, r2, ..., rM. We also study the average weighted shortest path (AWSP), the average degree and the average node strength, taking place on the non-homogeneous hierarchical weighted networks. Moreover, the AWSP is calculated rigorously. We show that the AWSP depends on the number of copies and the sum of all non-homogeneous weight scaling factors in the infinite network order limit. PMID:25849619
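The average weighted shortest path (AWSP) studied above can be computed numerically for any small weighted graph; a plain Dijkstra-based sketch (the paper derives this quantity analytically for its hierarchical networks, so this is a generic check, not their method):

```python
import heapq

def average_weighted_shortest_path(adj):
    """AWSP: mean Dijkstra distance over all ordered pairs of distinct
    nodes, for a graph given as {node: {neighbor: weight}}."""
    def dijkstra(src):
        dist = {src: 0.0}
        pq = [(0.0, src)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist.get(u, float("inf")):
                continue                      # stale queue entry
            for v, w in adj[u].items():
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(pq, (nd, v))
        return dist

    nodes = list(adj)
    total = pairs = 0
    for u in nodes:
        dist = dijkstra(u)
        for v in nodes:
            if v != u:
                total += dist[v]
                pairs += 1
    return total / pairs
```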
Birth weight reduction associated with residence near a hazardous waste landfill.
Berry, M; Bove, F
1997-01-01
We examined the relationship between birth weight and mother's residence near a hazardous waste landfill. Twenty-five years of birth certificates (1961-1985) were collected for four towns. Births were grouped into five 5-year periods corresponding to hypothesized exposure periods (1971-1975 having the greatest potential for exposure). From 1971 to 1975, term births (37-44 weeks gestation) to parents living closest to the landfill (Area 1A) had a statistically significant lower average birth weight (192 g) and a statistically significant higher proportion of low birth weight [odds ratio (OR) = 5.1; 95% confidence interval (CI), 2.1-12.3] than the control population. Average term birth weights in Area 1A rebounded by about 332 g after 1975. Parallel results were found for all births (gestational age > 27 weeks) in Area 1A during 1971-1975. Area 1A infants had twice the risk of prematurity (OR = 2.1; 95% CI, 1.0-4.4) during 1971-1975 compared to the control group. The results indicate a significant impact on infants born to residents living near the landfill during the period postulated as having the greatest potential for exposure. The magnitude of the effect is in the range of birth weight reduction due to cigarette smoking during pregnancy. PMID:9347901
College Freshman Stress and Weight Change: Differences by Gender
ERIC Educational Resources Information Center
Economos, Christina D.; Hildebrandt, M. Lise; Hyatt, Raymond R.
2008-01-01
Objectives: To examine how stress and health-related behaviors affect freshman weight change by gender. Methods: Three hundred ninety-six freshmen completed a 40-item health behavior survey and height and weight were collected at baseline and follow-up. Results: Average weight change was 5.04 lbs for males, 5.49 lbs for females. Weight gain was…
Pollutant roses for daily averaged ambient air pollutant concentrations
NASA Astrophysics Data System (ADS)
Cosemans, Guido; Kretzschmar, Jan; Mensink, Clemens
Pollutant roses are indispensable tools to identify unknown (fugitive) sources of heavy metals at industrial sites whose current impact exceeds the target values imposed for the year 2012 by the European Air Quality Daughter Directive 2004/207/EC. As most of the measured concentrations of heavy metals in ambient air are daily averaged values, a method to obtain high quality pollutant roses from such data is of practical interest for cost-effective air quality management. A computational scheme is presented to obtain, from daily averaged concentrations, 10° angular resolution pollutant roses, called PRP roses, that are in many respects comparable to pollutant roses made with half-hourly concentrations. The computational scheme is a ridge regression, based on three building blocks: ordinary least squares regression; outlier handling by weighting based on expected values of the higher percentiles in a lognormal distribution; weighted averages whereby observed values, raised to a power m, and daily wind rose frequencies are used as weights. Distance measures are used to find the optimal value for m. The performance of the computational scheme is illustrated by comparing the pollutant roses, constructed with measured half-hourly SO2 data for 10 monitoring sites in the Antwerp harbour, with the PRP roses made with the corresponding daily averaged SO2 concentrations. A miniature dataset, made up of 7 daily concentrations and of half-hourly wind directions assigned to 4 wind sectors, is used to illustrate the formulas and their results.
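For context, the half-hourly pollutant rose that the PRP roses are benchmarked against is simply the mean concentration per wind-direction sector. A minimal sketch follows; the source location, concentrations, and wind data are synthetic, and this is not the paper's ridge-regression scheme for daily data:

```python
import numpy as np

def pollutant_rose(conc, wind_dir_deg, sector_width=10.0):
    """Conventional pollutant rose from short-interval (e.g. half-hourly)
    data: mean concentration observed in each wind-direction sector.
    Returns (sector_centres_deg, mean_concentration_per_sector)."""
    n_sectors = int(round(360.0 / sector_width))
    sectors = (np.asarray(wind_dir_deg) // sector_width).astype(int) % n_sectors
    means = np.full(n_sectors, np.nan)
    for s in range(n_sectors):
        mask = sectors == s
        if mask.any():
            means[s] = np.mean(np.asarray(conc)[mask])
    centres = (np.arange(n_sectors) + 0.5) * sector_width
    return centres, means

# A fictitious point source to the north (wind from ~0 deg) raises
# concentrations in the northern sectors.
rng = np.random.default_rng(1)
wd = rng.uniform(0, 360, 5000)
c = 5.0 + 40.0 * np.exp(-0.5 * (np.minimum(wd, 360 - wd) / 15.0) ** 2)
centres, rose = pollutant_rose(c, wd)
```

The rose peaks in the sector pointing at the source, which is exactly the diagnostic property the paper seeks to preserve when only daily averages are available.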
Probabilistic climate change predictions applying Bayesian model averaging.
Min, Seung-Ki; Simonis, Daniel; Hense, Andreas
2007-08-15
This study explores the sensitivity of probabilistic predictions of twenty-first-century surface air temperature (SAT) changes to different multi-model averaging methods, using available simulations from the Intergovernmental Panel on Climate Change fourth assessment report. An observationally constrained prediction is obtained by training multi-model simulations for the second half of the twentieth century with respect to long-term components. Bayesian model averaging (BMA) produces weighted probability density functions (PDFs), and we compare two methods of estimating the weighting factors: the Bayes factor and the expectation-maximization algorithm. It is shown that Bayesian-weighted PDFs for the global mean SAT changes are characterized by multi-modal structures from the middle of the twenty-first century onward, which are not clearly seen in the arithmetic ensemble mean (AEM). This occurs because BMA tends to select a few high-skill models and down-weight the others. Additionally, the Bayesian results exhibit larger means and broader PDFs in the global mean predictions than the unweighted AEM. Multi-modality is more pronounced in the continental analysis using 30-year mean (2070-2099) SATs, while there is only a small effect of Bayesian weighting on the 5-95% range. These results indicate that this approach to observationally constrained probabilistic prediction can be highly sensitive to the method of training, particularly for the latter half of the twenty-first century, and that a more comprehensive approach combining different regions and/or variables is required. PMID:17569647
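The weighted-PDF idea can be illustrated with a toy Gaussian mixture: when BMA concentrates weight on two high-skill models with different projections, the averaged PDF becomes bimodal, unlike a unimodal ensemble mean. All means, spreads, and weights below are invented for illustration:

```python
import numpy as np

def bma_mixture_pdf(x, means, sds, weights):
    """Bayesian-model-averaged predictive PDF: a weighted mixture of the
    individual models' (here Gaussian) predictive densities."""
    x = np.asarray(x)[:, None]
    comps = np.exp(-0.5 * ((x - means) / sds) ** 2) / (sds * np.sqrt(2 * np.pi))
    return comps @ (np.asarray(weights) / np.sum(weights))

# Two dominant models (weights 0.45 each) with distinct projections,
# two down-weighted models with broad, intermediate projections.
means = np.array([2.0, 4.5, 3.0, 3.2])
sds = np.array([0.3, 0.3, 0.8, 0.8])
weights = np.array([0.45, 0.45, 0.05, 0.05])
grid = np.linspace(0, 7, 701)
pdf = bma_mixture_pdf(grid, means, sds, weights)
```

The resulting density has two clear peaks near 2.0 and 4.5, a miniature version of the multi-modal structures the abstract describes.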
Indices of relative weight and obesity.
Keys, Ancel; Fidanza, Flaminio; Karvonen, Martti J; Kimura, Noburu; Taylor, Henry L
2014-06-01
Analyses are reported on the correlation with height and with subcutaneous fat thickness of relative weight expressed as per cent of average weight at given height, and of the ratios weight/height, weight/height squared, and the ponderal index (cube root of weight divided by height) in 7424 ‘healthy’ men in 12 cohorts in five countries. Analyses are also reported on the relationship of those indicators of relative weight to body density in 180 young men and in 248 men aged 49–59. Judged by the criteria of correlation with height (lowest is best) and with measures of body fatness (highest is best), the ponderal index is the poorest of the relative weight indices studied. The ratio of weight to height squared, here termed the body mass index, is slightly better in these respects than the simple ratio of weight to height. The body mass index seems preferable to other indices of relative weight on these grounds as well as on the simplicity of the calculation and, in contrast to percentage of average weight, the applicability to all populations at all times. PMID:24691951
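The indices compared above can be computed directly. A small sketch in SI units; the ponderal index is taken here in its usual form, height divided by the cube root of weight, and the sample subject is invented:

```python
def body_indices(weight_kg, height_m):
    """The relative-weight indices compared by Keys et al.: the simple ratio
    weight/height, the body mass index weight/height^2, and the ponderal
    index (here taken as height divided by the cube root of weight)."""
    return {
        "w_over_h": weight_kg / height_m,
        "bmi": weight_kg / height_m ** 2,
        "ponderal": height_m / weight_kg ** (1.0 / 3.0),
    }

# A 70 kg subject at 1.75 m gives a BMI of about 22.9.
idx = body_indices(70.0, 1.75)
```

Only the BMI line carries over into modern practice; the other two are retained to mirror the comparison in the abstract.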
Effect of clothing weight on body weight
Technology Transfer Automated Retrieval System (TEKTRAN)
Background: In clinical settings, it is common to measure weight of clothed patients and estimate a correction for the weight of clothing, but we can find no papers in the medical literature regarding the variability in clothing weight with weather, season, and gender. Methods: Fifty adults (35 wom...
Judging body weight from faces: the height-weight illusion.
Schneider, Tobias M; Hecht, Heiko; Carbon, Claus-Christian
2012-01-01
Being able to exploit features of the human face to predict health and fitness can serve as an evolutionary advantage. Surface features such as facial symmetry, averageness, and skin colour are known to influence attractiveness. We sought to determine whether observers are able to extract more complex features, namely body weight. If possible, it could be used as a predictor for health and fitness. For instance, facial adiposity could be taken to indicate a cardiovascular challenge or proneness to infections. Observers seem to be able to glean body weight information from frontal views of a face. Is weight estimation robust across different viewing angles? We showed that participants strongly overestimated body weight for faces photographed from a lower vantage point while underestimating it for faces photographed from a higher vantage point. The perspective distortions of simple facial measures (e.g., width-to-height ratio) that accompany changes in vantage point do not suffice to predict body weight. Instead, more complex patterns must be involved in the height-weight illusion. PMID:22611670
Informed Test Component Weighting.
ERIC Educational Resources Information Center
Rudner, Lawrence M.
2001-01-01
Identifies and evaluates alternative methods for weighting tests. Presents formulas for composite reliability and validity as a function of component weights and suggests a rational process that identifies and considers trade-offs in determining weights. Discusses drawbacks to implicit weighting and explicit weighting and the difficulty of…
Model Averaging for Improving Inference from Causal Diagrams
Hamra, Ghassan B.; Kaufman, Jay S.; Vahratian, Anjel
2015-01-01
Model selection is an integral, yet contentious, component of epidemiologic research. Unfortunately, there remains no consensus on how to identify a single, best model among multiple candidate models. Researchers may be prone to selecting the model that best supports their a priori, preferred result, a phenomenon referred to as “wish bias”. Directed acyclic graphs (DAGs), based on background causal and substantive knowledge, are a useful tool for specifying a subset of adjustment variables to obtain a causal effect estimate. In many cases, however, a DAG will support multiple, sufficient or minimally sufficient adjustment sets. Even though all of these may theoretically produce unbiased effect estimates, they may, in practice, yield somewhat distinct values, and the need to select between these models once again makes the research enterprise vulnerable to wish bias. In this work, we suggest combining adjustment sets with model averaging techniques to obtain causal estimates based on multiple, theoretically unbiased models. We use three techniques for averaging the results among multiple candidate models: information criteria weighting, inverse variance weighting, and bootstrapping. We illustrate these approaches with an example from the Pregnancy, Infection, and Nutrition (PIN) study. We show that each averaging technique returns similar, model-averaged causal estimates. An a priori strategy of model averaging provides a means of integrating uncertainty in selection among candidate, causal models, while also avoiding the temptation to report the most attractive estimate from a suite of equally valid alternatives. PMID:26270672
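Of the three averaging techniques named above, information-criteria weighting is the simplest to sketch: AIC differences are converted to normalized Akaike weights, which then average the candidate models' estimates. The AIC values and effect estimates below are hypothetical, not taken from the PIN study:

```python
import math

def akaike_weights(aics):
    """Information-criteria weighting: convert AIC values of candidate
    models into normalized Akaike weights."""
    best = min(aics)
    raw = [math.exp(-0.5 * (a - best)) for a in aics]
    total = sum(raw)
    return [r / total for r in raw]

# Hypothetical effect estimates from three DAG-supported adjustment sets.
aics = [100.0, 101.0, 104.0]
estimates = [1.20, 1.35, 1.10]
w = akaike_weights(aics)
averaged = sum(wi * ei for wi, ei in zip(w, estimates))
```

The averaged estimate leans toward the better-fitting models without discarding the others, which is the point of reporting a model-averaged value instead of a single hand-picked one.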
Birth weight pattern in Karnataka.
Prasad, K N; Rao, R S; Sujatha, A
1994-07-01
The pattern of birth weight is described among births recorded in rural maternity homes in coastal areas of Udupi taluk in South Kanara district in Karnataka state, India. Literacy of the study area was 78.5%, and female literacy was 73.0%. The mean age at marriage was 21.4 years. Over 90% of mothers received some prenatal care. Contraceptive prevalence was 43%. The study area had six rural maternity homes that each served a population of about 10,000 people. The homes were well equipped with trained nurse-midwives, medical rooms, and equipment, and were connected by roads and telephones with Kasturba Hospital. High risk cases were transported by air to Kasturba Hospital. Birth weight was recorded with a UNICEF infant lever balance scale within one hour of delivery. Between July 1985 and June 1988, 4498 singleton live births were recorded: 2308 (51.3%) boys and 2190 (48.7%) girls. 80% weighed between 2500 and 3400 g. 13.3% were of low birth weight (under 2500 g), and 0.4% were of very low birth weight (under 1500 g). The mean birth weight was 2823 g: 2850 g for boys and 2765.4 g for girls. The mean birth weight increased with maternal age; it also increased with increased parity and increased gestational age. The lowest mean birth weight, 2767.7 g, occurred among first births; the highest, 2897.6 g, occurred among births to multiparous women. 91.3% were born between 37-40 weeks, and 7.5% were preterm. There were statistically significant differences in the mean birth weights by gender. 9.1% of births were to teenagers, and 69% of mothers were 20-29 years old. 30% of births were first births, and 51% were second and third births. The small family norm appeared to be accepted by this study population. PMID:7890348
Dietary restraint and gestational weight gain
Mumford, Sunni L.; Siega-Riz, Anna Maria; Herring, Amy; Evenson, Kelly R.
2008-01-01
Objective To determine whether a history of preconceptional dieting and restrained eating was related to higher weight gains in pregnancy. Design Dieting practices were assessed among a prospective cohort of pregnant women using the Revised Restraint Scale. Women were classified on three separate subscales as restrained eaters, dieters, and weight cyclers. Subjects Participants included 1,223 women in the Pregnancy, Infection and Nutrition Study. Main outcome measures Total gestational weight gain and adequacy of weight gain (ratio of observed/expected weight gain based on Institute of Medicine (IOM) recommendations). Statistical analyses performed Multiple linear regression was used to model the two weight gain outcomes, while controlling for potential confounders including physical activity and weight gain attitudes. Results There was a positive association between each subscale and total weight gain, as well as adequacy of weight gain. Women classified as cyclers gained an average of 2 kg more than non-cyclers, and showed higher observed/expected ratios by 0.2 units. Among restrained eaters and dieters, there was a differential effect by BMI. With the exception of underweight women, all other weight status women with a history of dieting or restrained eating gained more weight during pregnancy and had higher adequacy of weight gain ratios. In contrast, underweight women with a history of restrained eating behaviors gained less weight compared to underweight women without those behaviors. Conclusions Restrained eating behaviors were associated with weight gains above the IOM recommendations for normal, overweight, and obese women, and weight gains below the recommendations for underweight women. Excessive gestational weight gain is of concern given its association with postpartum weight retention. The dietary restraint tool is useful for identifying women who would benefit from nutritional counseling prior to or during pregnancy in regards to achieving targeted
Programmable noise bandwidth reduction by means of digital averaging
NASA Technical Reports Server (NTRS)
Poklemba, John J. (Inventor)
1993-01-01
Predetection noise bandwidth reduction is effected by a pre-averager capable of digitally averaging the samples of an input data signal over two or more symbols, the averaging interval being defined by the input sampling rate divided by the output sampling rate. As the averaged sample is clocked to a suitable detector at a much slower rate than the input signal sampling rate, the noise bandwidth at the input to the detector is reduced; the input to the detector has an improved signal-to-noise ratio as a result of the averaging process, and the rate at which subsequent processing must operate is correspondingly reduced. The pre-averager forms a data filter with an output sampling rate of one sample per symbol of received data. More specifically, selected ones of a plurality of samples accumulated over two or more symbol intervals are output in response to clock signals at a rate of one sample per symbol interval. The pre-averager includes circuitry for weighting digitized signal samples using stored finite impulse response (FIR) filter coefficients. A method according to the present invention is also disclosed.
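The simplest special case of this pre-averager, a boxcar average over each symbol interval emitting one output sample per symbol (ignoring the patent's multi-symbol FIR weighting), can be sketched as follows; the symbol pattern, oversampling factor, and noise level are invented:

```python
import numpy as np

def pre_average(samples, samples_per_symbol):
    """Decimating pre-averager: average the input samples over each symbol
    interval and emit one sample per symbol. The output noise variance
    drops by the decimation factor, narrowing the predetection noise
    bandwidth seen by the detector."""
    n_sym = len(samples) // samples_per_symbol
    blocks = samples[:n_sym * samples_per_symbol].reshape(n_sym, samples_per_symbol)
    return blocks.mean(axis=1)

# BPSK-like +/-1 symbols, 16 samples per symbol, noisy channel.
rng = np.random.default_rng(2)
pattern = [1.0, -1.0, 1.0, 1.0, -1.0] * 40            # 200 symbols
symbols = np.repeat(np.array(pattern), 16)
noisy = symbols + rng.normal(0, 0.5, symbols.size)
detected = pre_average(noisy, 16)
```

After averaging, the per-symbol noise standard deviation falls from 0.5 to 0.5/4 = 0.125, so a simple sign decision recovers the symbol stream reliably.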
Phase-compensated averaging for analyzing electroencephalography and magnetoencephalography epochs.
Matani, Ayumu; Naruse, Yasushi; Terazono, Yasushi; Iwasaki, Taro; Fujimaki, Norio; Murata, Tsutomu
2010-05-01
Stimulus-locked averaging for electroencephalography and/or magnetoencephalography (EEG/MEG) epochs cancels out ongoing spontaneous activities by treating them as noise. However, such spontaneous activities are the object of interest for EEG/MEG researchers who study phase-related phenomena, e.g., long-distance synchronization, phase-reset, and event-related synchronization/desynchronization (ERS/ERD). We propose a complex-weighted averaging method, called phase-compensated averaging, to investigate phase-related phenomena. In this method, any EEG/MEG channel is used as a trigger for averaging by setting the instantaneous phases at the trigger timings to 0, so that cross-channel averages are obtained. First, we evaluated the fundamental characteristics of this method by performing simulations. The results showed that this method could selectively average ongoing spontaneous activity phase-locked in each channel; that is, it evaluates the directional phase-synchronizing relationship between channels. We then analyzed flash evoked potentials. This method clarified the directional phase-synchronizing relationship from the frontal to occipital channels and recovered another piece of information, perhaps regarding the sequence of experiments, which is lost when using only conventional averaging. This method can also be used to reconstruct EEG/MEG time series to visualize long-distance synchronization and phase-reset directly, and ERS/ERD can then be explained as a side effect of phase-reset. PMID:20172813
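The core operation, rotating each epoch's analytic signal so that its instantaneous phase at the trigger sample becomes 0 before averaging, can be sketched with a Hilbert transform. The simulation below is a toy version of the evaluation described above: ordinary averaging cancels a random-phase oscillation, while the compensated average preserves it. The sampling rate, frequency, and epoch count are arbitrary:

```python
import numpy as np
from scipy.signal import hilbert

def phase_compensated_average(epochs, trigger_idx):
    """Complex-weighted averaging: rotate each epoch's analytic signal so
    its instantaneous phase at the trigger sample is 0, then average.
    Phase-locked ongoing activity survives; unrelated activity cancels."""
    analytic = hilbert(epochs, axis=1)
    phase_at_trigger = np.angle(analytic[:, trigger_idx])
    rotated = analytic * np.exp(-1j * phase_at_trigger)[:, None]
    return rotated.mean(axis=0)

# 100 epochs of a 10 Hz oscillation, random phase per epoch, 250 Hz sampling.
rng = np.random.default_rng(3)
t = np.arange(0, 1.0, 1 / 250.0)
phases = rng.uniform(0, 2 * np.pi, 100)
epochs = np.sin(2 * np.pi * 10 * t + phases[:, None])

plain = epochs.mean(axis=0)                       # cancels toward zero
compensated = phase_compensated_average(epochs, trigger_idx=125)
```

The plain average is near zero everywhere, while the magnitude of the compensated average at the trigger sample stays at the full oscillation amplitude, mirroring the selectivity claimed in the abstract.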
... Measure and Interpret Weight Status Adult Body Mass Index or BMI Body Mass Index (BMI) is a person's weight in kilograms divided ... finding your height and weight in this BMI Index Chart 1 . If your BMI is less than ...
Effect of molecular weight on polyphenylquinoxaline properties
NASA Technical Reports Server (NTRS)
Jensen, Brian J.
1991-01-01
A series of polyphenylquinoxalines with different molecular weights and end-groups was prepared by varying monomer stoichiometry. Thus, 4,4'-oxydibenzil and 3,3'-diaminobenzidine were reacted in a 50/50 mixture of m-cresol and xylenes. Reaction concentration, temperature, and stir rate were studied and found to affect polymer properties. Number- and weight-average molecular weights were determined and correlated well with viscosity data. Glass transition temperatures were determined and found to vary with molecular weight and end-groups. Mechanical properties of films from polymers with different molecular weights were essentially identical at room temperature but showed significant differences at 232 C. Diamine-terminated polymers were found to be much less thermooxidatively stable than benzil-terminated polymers when aged at 316 C, even though dynamic thermogravimetric analysis revealed only slight differences. Lower molecular weight polymers exhibited better processability than higher molecular weight polymers.
Does body weight affect wages? Evidence from Europe.
Brunello, Giorgio; D'Hombres, Béatrice
2007-03-01
We use data from the European Community Household Panel to investigate the impact of body weight on wages in nine European countries. When we pool the available data across countries and years, we find that a 10% increase in the average body mass index reduces the real earnings of males and females by 3.27% and 1.86%, respectively. Since European culture, society and labour market are heterogeneous, we estimate separate regressions for Northern and Southern Europe and find that the negative impact of the body mass index on earnings is larger--and statistically significant--in the latter area. PMID:17174614
Graph-balancing algorithms for average consensus over directed networks
NASA Astrophysics Data System (ADS)
Fan, Yuan; Han, Runzhe; Qiu, Jianbin
2016-01-01
Consensus strategies find extensive applications in the coordination of robot groups and the decision-making of agents. Since the balanced graph plays an important role in the average consensus problem and many other coordination problems for directed communication networks, this work explores conditions and algorithms for the digraph balancing problem. Based on an analysis of graph cycles, we prove that a digraph can be balanced if and only if the null space of its incidence matrix contains a positive vector. Then, based on this result and the corresponding analysis, two weight balancing algorithms are proposed, and the conditions for obtaining a unique balanced solution, together with a set of analytical results on weight balance problems, are introduced. We then point out the relationship between the weight balance problem and the features of the corresponding underlying Markov chain. Finally, two numerical examples are presented to verify the proposed algorithms.
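The null-space criterion stated above can be checked numerically: build the node-edge incidence matrix and look for a positive null vector. A sketch for small digraphs, assuming for simplicity a null space of dimension at most one (this is an illustration of the criterion, not the paper's balancing algorithms):

```python
import numpy as np

def balancing_weights(n_nodes, edges):
    """Node-edge incidence matrix B (+1 where an edge leaves a node, -1
    where it enters). A weight vector w > 0 with B @ w = 0 balances the
    digraph: weighted out-degree equals weighted in-degree at each node."""
    B = np.zeros((n_nodes, len(edges)))
    for j, (u, v) in enumerate(edges):
        B[u, j], B[v, j] = 1.0, -1.0
    _, s, vt = np.linalg.svd(B)
    if s.size == len(edges) and s[-1] > 1e-10:
        return None                      # trivial null space: not balanceable
    w = vt[-1]                           # null vector (1-D null space assumed)
    w = w * np.sign(w[0])                # fix the overall sign
    return w if np.all(w > 0) else None

# Directed triangle 0 -> 1 -> 2 -> 0: balanced by equal edge weights.
w = balancing_weights(3, [(0, 1), (1, 2), (2, 0)])
```

A directed path, by contrast, has no cycle and hence no positive null vector, so the same function returns None for it, matching the cycle-based intuition in the abstract.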
PRECONCEPTION PREDICTORS OF WEIGHT GAIN DURING PREGNANCY
Weisman, Carol S.; Hillemeier, Marianne M.; Downs, Danielle Symons; Chuang, Cynthia H.; Dyer, Anne-Marie
2010-01-01
Objectives We examined preconception (prepregnancy) predictors of pregnancy weight gain and weight gain that exceeds the 2009 Institute of Medicine (IOM) recommendations based on pre-pregnancy body mass index (BMI), in a prospective study. Methods Data are from a population-based cohort study of 1,420 women who were interviewed at baseline and 2 years later. The analytic sample includes 103 women who were not pregnant at baseline and gave birth to full-term singletons during the follow-up period. Preconception maternal weight category as well as health behaviors, psychosocial stress, parity, and age were examined as predictors of pregnancy weight gain and of weight gain in excess of the IOM recommendations using multiple linear and logistic regression analysis. Results Pregnancy weight gain averaged 33.01 pounds, with 51% of women gaining weight in excess of the 2009 IOM recommendations for their preconception weight category. Preconception overweight (BMI = 25–29.9) increased the odds of excessive pregnancy weight gain nearly threefold, whereas preconception physical activity levels meeting activity guidelines reduced the odds of excessive weight gain, though this effect was only marginally statistically significant. Conclusion Although future research examining the role of physical activity in relation to pregnancy weight gain is needed, preconception overweight and physical activity levels are prime targets for interventions to avoid excessive pregnancy weight gain. PMID:20133152
Body Weight Relationships in Early Marriage: Weight Relevance, Weight Comparisons, and Weight Talk
Bove, Caron F.; Sobal, Jeffery
2011-01-01
This investigation uncovered processes underlying the dynamics of body weight and body image among individuals involved in nascent heterosexual marital relationships in Upstate New York. In-depth, semi-structured qualitative interviews conducted with 34 informants, 20 women and 14 men, just prior to marriage and again one year later were used to explore continuity and change in cognitive, affective, and behavioral factors relating to body weight and body image at the time of marriage, an important transition in the life course. Three major conceptual themes operated in the process of developing and enacting informants’ body weight relationships with their partner: weight relevance, weight comparisons, and weight talk. Weight relevance encompassed the changing significance of weight during early marriage and included attracting and capturing a mate, relaxing about weight, living healthily, and concentrating on weight. Weight comparisons between partners involved weight relativism, weight competition, weight envy, and weight role models. Weight talk employed pragmatic talk, active and passive reassurance, and complaining and critiquing. Concepts emerging from this investigation may be useful in designing future studies of and approaches to managing body weight in adulthood. PMID:21864601
ERIC Educational Resources Information Center
Molfese, Dennis L.; Key, Alexandra Fonaryova; Kelly, Spencer; Cunningham, Natalie; Terrell, Shona; Ferguson, Melissa; Molfese, Victoria J.; Bonebright, Terri
2006-01-01
Event-related potentials (ERPs) were recorded from 27 children (14 girls, 13 boys) who varied in their reading skill levels. Both behavior performance measures recorded during the ERP word classification task and the ERP responses themselves discriminated between children with above-average, average, and below-average reading skills. ERP…
Do Diurnal Aerosol Changes Affect Daily Average Radiative Forcing?
Kassianov, Evgueni I.; Barnard, James C.; Pekour, Mikhail S.; Berg, Larry K.; Michalsky, Joseph J.; Lantz, K.; Hodges, G. B.
2013-06-17
Strong diurnal variability of aerosol has been observed frequently for many urban/industrial regions. How this variability may alter the direct aerosol radiative forcing (DARF), however, is largely unknown. To quantify changes in the time-averaged DARF, we perform an assessment of 29 days of high temporal resolution ground-based data collected during the Two-Column Aerosol Project (TCAP) on Cape Cod, which is downwind of metropolitan areas. We demonstrate that strong diurnal changes of aerosol loading (about 20% on average) have a negligible impact on the 24-h average DARF, when daily averaged optical properties are used to find this quantity. However, when there is a sparse temporal sampling of aerosol properties, which may preclude the calculation of daily averaged optical properties, large errors (up to 100%) in the computed DARF may occur. We describe a simple way of reducing these errors, which suggests the minimal temporal sampling needed to accurately find the forcing.
Do diurnal aerosol changes affect daily average radiative forcing?
NASA Astrophysics Data System (ADS)
Kassianov, Evgueni; Barnard, James; Pekour, Mikhail; Berg, Larry K.; Michalsky, Joseph; Lantz, Kathy; Hodges, Gary
2013-06-01
Strong diurnal variability of aerosol has been observed frequently for many urban/industrial regions. How this variability may alter the direct aerosol radiative forcing (DARF), however, is largely unknown. To quantify changes in the time-averaged DARF, we perform an assessment of 29 days of high temporal resolution ground-based data collected during the Two-Column Aerosol Project on Cape Cod, which is downwind of metropolitan areas. We demonstrate that strong diurnal changes of aerosol loading (about 20% on average) have a negligible impact on the 24-h average DARF when daily averaged optical properties are used to find this quantity. However, when there is a sparse temporal sampling of aerosol properties, which may preclude the calculation of daily averaged optical properties, large errors (up to 100%) in the computed DARF may occur. We describe a simple way of reducing these errors, which suggests the minimal temporal sampling needed to accurately find the forcing.
Determinants of Low Birth Weight in Malawi: Bayesian Geo-Additive Modelling.
Ngwira, Alfred; Stanley, Christopher C
2015-01-01
Studies on factors of low birth weight in Malawi have neglected the flexible approach of using smooth functions for some covariates in models. Such a flexible approach reveals the detailed relationship of covariates with the response. The study aimed at investigating risk factors of low birth weight in Malawi by assuming a flexible approach for continuous covariates and a geographical random effect. A Bayesian geo-additive model for birth weight in kilograms and size of the child at birth (less than average, or average and higher) with district as a spatial effect, using the 2010 Malawi demographic and health survey data, was adopted. A Gaussian model for birth weight in kilograms and a binary logistic model for the binary outcome (size of child at birth) were fitted. Continuous covariates were modelled by penalized (p) splines and spatial effects were smoothed by the two-dimensional p-spline. The study found that child birth order and the mother's weight and height are significant predictors of birth weight. Secondary education of the mother, birth order categories 2-3 and 4-5, a richer-family wealth index, and the mother's height were significant predictors of child size at birth. The area associated with low birth weight was Chitipa, and the areas with increased risk of less-than-average size at birth were Chitipa and Mchinji. The study found support for the flexible modelling of some covariates that clearly have nonlinear influences. Nevertheless, there is no strong support for the inclusion of a geographical spatial analysis. The spatial patterns, though, point to the influence of omitted variables with some spatial structure, or possibly epidemiological processes that account for this spatial structure, and the maps generated could be used for targeting development efforts at a glance. PMID:26114866
Fact Sheet Proven Weight Loss Methods What can weight loss do for you? Losing weight can improve your health in a number of ways. It can lower ... at www.hormone.org/Spanish . Proven Weight Loss Methods Fact Sheet www.hormone.org
Weight loss surgery helps people with extreme obesity to lose weight. It may be an option if you cannot lose weight through diet and exercise or have serious health problems caused by obesity. There are different types of weight loss surgery. They often limit the ...
Gestational weight gain among Hispanic women.
Sangi-Haghpeykar, Haleh; Lam, Kim; Raine, Susan P
2014-01-01
To describe gestational weight gain among Hispanic women and to examine psychological, social, and cultural contexts affecting weight gain. A total of 282 Hispanic women were surveyed post-partum before leaving the hospital. Women were queried about their prepregnancy weight and weight gained during pregnancy. Adequacy of gestational weight gain was based on guidelines set by the Institute of Medicine in 2009. Independent risk factors for excessive or insufficient weight gain were examined by logistic regression. Most women were unmarried (59%), with a mean age of 28.4 ± 6.6 years and an average weight gain of 27.9 ± 13.3 lbs. Approximately 45% of women had gained too much, 32% too little, and only 24% an adequate amount. The mean birth weight was 7.3, 7.9, and 6.8 lbs in the adequate, excessive, and insufficient weight gain groups, respectively. Among women who exercised before pregnancy, two-thirds continued to do so during pregnancy; the mean gestational weight gain of those who continued was lower than that of those who stopped (26.8 vs. 31.4 lbs, p = 0.04). Independent risk factors for excessive weight gain were being unmarried, being U.S. born, higher prepregnancy body mass index, and having indifferent or negative views about weight gain. Independent risk factors for insufficient weight gain were low levels of support and late initiation of prenatal care. Depression, stress, and a woman's or her partner's happiness regarding pregnancy were unrelated to weight gain. The results of this study can be used by prenatal programs to identify Hispanic women at risk for excessive or insufficient gestational weight gain. PMID:23456347
Johnston, Caitlin E.; Herschel, Daniel; Lasek, Amy W.; Hammer, Ronald P.; Nikulina, Ella M.
2014-01-01
Social defeat stress causes social avoidance and long-lasting cross-sensitization to psychostimulants, both of which are associated with increased brain-derived neurotrophic factor (BDNF) expression in the ventral tegmental area (VTA). Moreover, social stress upregulates VTA mu-opioid receptor (MOR) mRNA. In the VTA, MOR activation inhibits GABA neurons to disinhibit VTA dopamine neurons, thus providing a role for VTA MORs in the regulation of psychostimulant sensitization. The present study determined the effect of lentivirus-mediated MOR knockdown in the VTA on the consequences of intermittent social defeat stress, a salient and profound stressor in humans and rodents. Social stress exposure induced social avoidance and attenuated weight gain in animals with non-manipulated VTA MORs, but both these effects were prevented by VTA MOR knockdown. Rats with non-manipulated VTA MOR expression exhibited cross-sensitization to amphetamine challenge (1.0 mg/kg, i.p.), evidenced by a significant augmentation of locomotion. By contrast, knockdown of VTA MORs prevented stress-induced cross-sensitization without blunting the locomotor-activating effects of amphetamine. At the time point corresponding to amphetamine challenge, immunohistochemical analysis was performed to examine the effect of stress on VTA BDNF expression. Prior stress exposure increased VTA BDNF expression in rats with non-manipulated VTA MOR expression, while VTA MOR knockdown prevented stress-induced expression of VTA BDNF. Taken together, these results suggest that upregulation of VTA MOR is necessary for the behavioral and biochemical changes induced by social defeat stress. Elucidating VTA MOR regulation of stress effects on the mesolimbic system may provide new therapeutic targets for treating stress-induced vulnerability to substance abuse. PMID:25446676
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2014 CFR
2014-07-01
...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2013 CFR
2013-07-01
...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...
RHIC BPM system average orbit calculations
Michnoff, R.; Cerniglia, P.; Degen, C.; Hulsart, R.; et al.
2009-05-04
RHIC beam position monitor (BPM) system average orbit was originally calculated by averaging positions of 10000 consecutive turns for a single selected bunch. Known perturbations in RHIC particle trajectories, with multiple frequencies around 10 Hz, contribute to observed average orbit fluctuations. In 2006, the number of turns for average orbit calculations was made programmable; this was used to explore averaging over single periods near 10 Hz. Although this has provided an average orbit signal quality improvement, an average over many periods would further improve the accuracy of the measured closed orbit. A new continuous average orbit calculation was developed just prior to the 2009 RHIC run and was made operational in March 2009. This paper discusses the new algorithm and performance with beam.
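The continuous running average described above can be sketched in a few lines. This is a host-side illustration only; the function name, interface, and window handling are assumptions for this sketch, not the RHIC front-end implementation:

```python
import math
from collections import deque

def continuous_average(positions, window):
    """Running mean of the most recent `window` turn-by-turn positions.
    Hypothetical helper: `window` stands in for the programmable
    number of turns mentioned in the abstract."""
    buf = deque(maxlen=window)
    averages = []
    for p in positions:
        buf.append(p)
        averages.append(sum(buf) / len(buf))
    return averages

# A 10 Hz-like perturbation averaged over exactly one period cancels,
# leaving the closed-orbit value.
turns_per_period = 100
orbit = [1.0 + math.sin(2 * math.pi * i / turns_per_period)
         for i in range(1000)]
avg = continuous_average(orbit, turns_per_period)
print(round(avg[-1], 6))  # 1.0
```

Averaging over many such periods, as in the 2009 algorithm, further suppresses the residual fluctuation.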
NASA Astrophysics Data System (ADS)
Ding, F.; Theobald, M.; Vollmer, B.; Savtchenko, A. K.; Hearty, T. J.; Esfandiari, A. E.
2012-12-01
Producing timely and accurate water forecasts and information is the mission of the National Weather Service River Forecast Centers (NWS RFCs) of the National Oceanic and Atmospheric Administration (NOAA). The river forecast system in RFCs requires average surface temperature in the fixed 6-hour periods 0000-0600, 0600-1200, 1200-1800, and 1800-0000 UTC. The current logic of RFC temperature forecasting relies on ingest of point values of daytime maximum and nighttime minimum temperature, and the mean temperature for each 6-hour period is estimated from a weighted average of the daytime maximum and nighttime minimum temperature. The Atmospheric Infrared Sounder (AIRS) is the first high-spectral-resolution infrared sounder on board the Aqua satellite, which was launched in May 2002 and follows a Sun-synchronous polar orbit; it is designed to produce high-resolution atmospheric profiles and surface atmospheric parameters. Because Aqua crosses the equator at about 1330 and 0130 local time, the AIRS-retrieved surface temperature may represent the daytime maximum and nighttime minimum values. Compared to point observations from surface weather stations, which are often sparse over less-populated areas and unevenly distributed, satellites may obtain better area-averaged observations. This test study assesses the potential of using AIRS-retrieved surface temperature to forecast 6-hour average temperature for NWS RFCs. The California Nevada RFC was selected because of the poor coverage of surface observations in its mountainous region and its spring snowmelt. The study focuses on the March-May spring season, when water from snowpack melting often plays an important role in flooding. AIRS-retrieved temperatures and a surface weather station data set will be used to derive statistical weighting coefficients for the 6-hour average temperature forecast. The resulting forecast biases and errors will be the main indicators of the potential usage. All study results will be presented at the meeting.
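The weighted-average estimate described above can be sketched as follows. The function name and the example weight of 0.75 are assumptions for illustration; the study derives such coefficients statistically from AIRS retrievals and station data:

```python
def six_hour_mean(t_max, t_min, w_max):
    """Estimate a 6-hour mean surface temperature as a weighted average
    of the daytime maximum and nighttime minimum. `w_max` is a
    hypothetical weighting coefficient, not a value from the study."""
    return w_max * t_max + (1.0 - w_max) * t_min

# An afternoon period weighted toward the daytime maximum (weight assumed):
print(six_hour_mean(22.0, 10.0, 0.75))  # 19.0
```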
Dryer, Rachel; Ware, Nicole
2014-01-01
Purpose: To identify beliefs held by the general public regarding causes of weight gain, weight prevention strategies, and barriers to weight management; and to examine whether such beliefs predict the actual body mass of participants. Methods: A questionnaire-based survey was administered to participants recruited from regional and metropolitan areas of Australia. The questionnaire obtained demographic information, height, and weight, as well as beliefs about causes of weight gain, weight prevention strategies, and barriers to weight management. Results: The sample consisted of 376 participants (94 males, 282 females) between the ages of 18 and 88 years (mean age = 43.25, SD = 13.64). The range and nature of the belief dimensions identified suggest that the Australian public understand the interaction between internal and external factors that both drive weight gain and hinder successful weight management. Beliefs about prevention strategies and barriers to effective weight management were found to predict the participants' actual body mass, even after controlling for demographic characteristics. Conclusions: The general public have a good understanding of the multiple factors contributing to weight gain and successful weight management. However, this understanding may not necessarily lead individuals to adopt the lifestyle changes required to achieve or maintain healthy weight levels. PMID:25750768
Molecular weight determinations of biosolubilized coals
Linehan, J.C.; Clauss, S.; Bean, R.; Campbell, J.
1991-05-01
We have compared several different methods for determining the molecular weight of biosolubilized coals: aqueous gel permeation chromatography (GPC), organic GPC, preparative GPC, dynamic laser light scattering (LLS), static LLS, mass spectrometry, vapor phase osmometry (VPO), and ultrafiltration. We have found that careful consideration must be given to the molecular weight result obtained from each method. The average molecular weight and the molecular weight distribution were found to depend on many factors, including the technique used, the molecular weight standards, pH, and the percentage of sample analyzed. Weight average molecular weights, M{sub w}, obtained for biosolubilized leonardite range from 800,000 daltons for neutral-pH aqueous GPC based on polyethylene glycol molecular weight standards to 570 daltons for pH 11.5 buffered aqueous GPC based on a fulvic acid standard. It is clear that the state of association of the biocoal analyte, as well as the interactions of the sample with the separation matrix, can have a large influence on the observed result, and these must be understood before reliable GPC measurements can be made. Furthermore, a uniform set of molecular weight standards for biodegraded coals is needed. 10 refs., 1 tab.
PREVENTING WEIGHT REGAIN AFTER WEIGHT LOSS
Technology Transfer Automated Retrieval System (TEKTRAN)
For most dieters, a regaining of lost weight is an all too common experience. Indeed, virtually all interventions for weight loss show limited or even poor long-term effectiveness. This sobering reality was reflected in a comprehensive review of nonsurgical treatments of obesity conducted by the Ins...
Spectral averaging techniques for Jacobi matrices
Rio, Rafael del; Martinez, Carmen; Schulz-Baldes, Hermann
2008-02-15
Spectral averaging techniques for one-dimensional discrete Schroedinger operators are revisited and extended. In particular, simultaneous averaging over several parameters is discussed. Special focus is put on proving lower bounds on the density of the averaged spectral measures. These Wegner-type estimates are used to analyze stability properties for the spectral types of Jacobi matrices under local perturbations.
Averaging and Adding in Children's Worth Judgements
ERIC Educational Resources Information Center
Schlottmann, Anne; Harman, Rachel M.; Paine, Julie
2012-01-01
Under the normative Expected Value (EV) model, multiple outcomes are additive, but in everyday worth judgement intuitive averaging prevails. Young children also use averaging in EV judgements, leading to a disordinal, crossover violation of utility when children average the part worths of simple gambles involving independent events (Schlottmann,…
NASA Astrophysics Data System (ADS)
Thapa, Damber; Raahemifar, Kaamran; Lakshminarayanan, Vasudevan
2015-12-01
In this paper, we propose a speckle noise reduction method for spectral-domain optical coherence tomography (SD-OCT) images called multi-frame weighted nuclear norm minimization (MWNNM). This method is a direct extension of weighted nuclear norm minimization (WNNM) to the multi-frame setting, since an adequately denoised image could not be achieved with single-frame denoising methods. The MWNNM method exploits multiple B-scans collected from a small area of an SD-OCT volumetric image, then denoises and averages them together to obtain a high signal-to-noise-ratio B-scan. The results show that the image quality metrics obtained by denoising and averaging only five nearby B-scans with the MWNNM method are considerably better than those of the average image obtained by registering and averaging 40 azimuthally repeated B-scans.
Averaging procedures for flow within vegetation canopies
NASA Astrophysics Data System (ADS)
Raupach, M. R.; Shaw, R. H.
1982-01-01
Most one-dimensional models of flow within vegetation canopies are based on horizontally averaged flow variables. This paper formalizes the horizontal averaging operation. Two averaging schemes are considered: pure horizontal averaging at a single instant, and time averaging followed by horizontal averaging. These schemes produce different forms for the mean and turbulent kinetic energy balances, and especially for the ‘wake production’ term describing the transfer of energy from large-scale motion to wake turbulence by form drag. The differences are primarily due to the appearance, in the covariances produced by the second scheme, of dispersive components arising from the spatial correlation of time-averaged flow variables. The two schemes are shown to coincide if these dispersive fluxes vanish.
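In schematic form (notation assumed here: an overbar for the time average, angle brackets for the horizontal average, single primes for turbulent fluctuations, and double primes for departures of time-averaged variables from their horizontal mean), the covariance decomposition in the second scheme that gives rise to the dispersive term can be written as:

```latex
% Time averaging followed by horizontal averaging of a momentum flux;
% \bar{u}'' = \bar{u} - \langle\bar{u}\rangle is the dispersive component.
\langle \overline{u\,w} \rangle
  = \langle \bar{u} \rangle \langle \bar{w} \rangle
  + \underbrace{\langle \bar{u}''\,\bar{w}'' \rangle}_{\text{dispersive flux}}
  + \underbrace{\langle \overline{u'\,w'} \rangle}_{\text{Reynolds flux}}
```

The two schemes coincide exactly when the dispersive flux term vanishes.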
Data Point Averaging for Computational Fluid Dynamics Data
NASA Technical Reports Server (NTRS)
Norman, Jr., David (Inventor)
2016-01-01
A system and method for generating fluid flow parameter data for use in aerodynamic heating analysis. Computational fluid dynamics data is generated for a number of points in an area on a surface to be analyzed. Sub-areas corresponding to areas of the surface for which an aerodynamic heating analysis is to be performed are identified. A computer system automatically determines a sub-set of the number of points corresponding to each of the number of sub-areas and determines a value for each of the number of sub-areas using the data for the sub-set of points corresponding to each of the number of sub-areas. The value is determined as an average of the data for the sub-set of points corresponding to each of the number of sub-areas. The resulting parameter values then may be used to perform an aerodynamic heating analysis.
Data Point Averaging for Computational Fluid Dynamics Data
NASA Technical Reports Server (NTRS)
Norman, David, Jr. (Inventor)
2014-01-01
A system and method for generating fluid flow parameter data for use in aerodynamic heating analysis. Computational fluid dynamics data is generated for a number of points in an area on a surface to be analyzed. Sub-areas corresponding to areas of the surface for which an aerodynamic heating analysis is to be performed are identified. A computer system automatically determines a sub-set of the number of points corresponding to each of the number of sub-areas and determines a value for each of the number of sub-areas using the data for the sub-set of points corresponding to each of the number of sub-areas. The value is determined as an average of the data for the sub-set of points corresponding to each of the number of sub-areas. The resulting parameter values then may be used to perform an aerodynamic heating analysis.
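The sub-area averaging step described in the patent abstract can be sketched as follows. The data layout, function name, and box-based sub-area definition are hypothetical, chosen only to illustrate the averaging of a point subset per sub-area:

```python
def subarea_averages(points, subareas):
    """Average CFD point data over named sub-areas.

    `points` holds (x, y, value) samples and `subareas` maps a name to
    an axis-aligned box (xmin, xmax, ymin, ymax). Each sub-area's value
    is the mean over the points falling inside its box."""
    result = {}
    for name, (xmin, xmax, ymin, ymax) in subareas.items():
        inside = [v for (x, y, v) in points
                  if xmin <= x <= xmax and ymin <= y <= ymax]
        result[name] = sum(inside) / len(inside) if inside else None
    return result

points = [(0.1, 0.1, 300.0), (0.2, 0.3, 320.0), (0.9, 0.9, 500.0)]
subareas = {"leading_edge": (0.0, 0.5, 0.0, 0.5),
            "trailing_edge": (0.5, 1.0, 0.5, 1.0)}
values = subarea_averages(points, subareas)
print(values)  # {'leading_edge': 310.0, 'trailing_edge': 500.0}
```

The resulting per-sub-area values would then feed the downstream aerodynamic heating analysis.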
Mood and Weight Loss in a Behavioral Treatment Program.
ERIC Educational Resources Information Center
Wing, Rena R.; And Others
1983-01-01
Evaluated the relationship between mood and weight loss for 76 patients participating in two consecutive behavioral treatment programs. Weight losses averaged 12.2 pounds (5.55 kg) during the 10-week program. Positive changes in mood were reported during this interval, and these changes appeared to be related to changes in weight. (Author/RC)
76 FR 19275 - Passenger Weight and Inspected Vessel Stability Requirements
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-07
... SECURITY Coast Guard 46 CFR Parts 115, 170, 176, and 178 RIN 1625-AB20 Passenger Weight and Inspected.... SUMMARY: On December 14, 2010, the Coast Guard amended its regulations governing the maximum weight and..., including increasing the Assumed Average Weight per Person (AAWPP) to 185 lb. The amendment triggered...
Recovery of petroleum with chemically treated high molecular weight polymers
Gibb, C.L.; Rhudy, J.S.
1980-11-18
Plugging of reservoirs by high molecular weight polymers, e.g., partially hydrolyzed polyacrylamide, is overcome by chemically treating a polymer having an excessively high average molecular weight, prior to injection into a reservoir, with an oxidizing chemical, e.g., sodium hypochlorite, and thereafter incorporating a reducing chemical, e.g., sodium sulfite, to stop degradation of the polymer when the desired lower average molecular weight and flooding characteristics are attained.
ERIC Educational Resources Information Center
Jenny, Mirjam A.; Rieskamp, Jörg; Nilsson, Håkan
2014-01-01
Judging whether multiple events will co-occur is an important aspect of everyday decision making. The underlying probabilities of occurrence are usually unknown and have to be inferred from experience. Using a rigorous, quantitative model comparison, we investigate how people judge the conjunctive probabilities of multiple events to co-occur. In 2…
Implications of the method of capital cost payment on the weighted average cost of capital.
Boles, K E
1986-01-01
The author develops a theoretical and mathematical model, based on published financial management literature, to describe the cost of capital structure for health care delivery entities. This model is then used to generate the implications of changing the capital cost reimbursement mechanism from a cost basis to a prospective basis. The implications are that the cost of capital is increased substantially, the use of debt must be restricted, interest rates for borrowed funds will increase, and, initially, firms utilizing debt efficiently under cost-basis reimbursement will be restricted to the generation of funds from equity only under a prospective system. PMID:3525468
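The weighted average cost of capital named in the title has a standard textbook form, sketched below. This is the generic definition, not the author's specific model for health care entities under prospective reimbursement:

```python
def wacc(equity, debt, cost_equity, cost_debt, tax_rate):
    """Weighted average cost of capital: each financing source's cost
    weighted by its share of total capital, with the tax shield on
    debt interest. Standard textbook form."""
    total = equity + debt
    return (equity / total) * cost_equity + \
           (debt / total) * cost_debt * (1.0 - tax_rate)

# 60% equity at 12%, 40% debt at 8%, 30% tax rate:
print(round(wacc(60.0, 40.0, 0.12, 0.08, 0.30), 4))  # 0.0944
```

Removing the deductibility of interest (as under a prospective payment basis) raises the effective `cost_debt` term, which is the mechanism behind the implications the author derives.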
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-14
... Dumping Margin and Assessment Rate in Certain Antidumping Duty Proceedings, 75 FR 81533 (December 28, 2010... Duty Proceedings, 76 FR 5518 (Feb. 1, 2011). In September, 2011, pursuant to section 123(g)(1)(D) of..., 71 FR 77722 (Dec. 27, 2006) (``Final Modification for Investigations''). The Department has...
Average wavefunction method for multiple scattering theory and applications
Singh, H.
1985-01-01
A general approximation scheme, the average wavefunction method (AWM), applicable to the scattering of atoms and molecules off multi-center targets, is proposed. The total potential is replaced by a sum of nonlocal, separable interactions. Each term in the sum projects the wave function onto a weighted average in the vicinity of a given scattering center. The resultant solution is an infinite-order approximation to the true solution, and choosing the weighting function as the zeroth-order solution guarantees agreement with the Born approximation to second order. In addition, the approximation becomes increasingly accurate in the low-energy, long-wavelength limit. A nonlinear, nonperturbative iterative scheme for the wave function is proposed. An extension of the scheme to multichannel scattering suitable for treating inelastic scattering is also presented. The method is applied to elastic scattering of a gas off a solid surface. The formalism is developed for both periodic and disordered surfaces. Numerical results are presented for atomic clusters on a flat hard wall with a Gaussian-like potential at each atomic scattering site. The effect of relative lateral displacement of two clusters upon the scattering pattern is shown. The ability of the AWM to accommodate disorder through statistical averaging over cluster configurations is illustrated. Enhanced uniform backscattering is observed with increasing roughness on the surface. Finally, the AWM is applied to atom-molecule scattering.
Average-cost based robust structural control
NASA Technical Reports Server (NTRS)
Hagood, Nesbitt W.
1993-01-01
A method is presented for the synthesis of robust controllers for linear time invariant structural systems with parameterized uncertainty. The method involves minimizing quantities related to the quadratic cost (H2-norm) averaged over a set of systems described by real parameters such as natural frequencies and modal residues. Bounded average cost is shown to imply stability over the set of systems. Approximations for the exact average are derived and proposed as cost functionals. The properties of these approximate average cost functionals are established. The exact average and approximate average cost functionals are used to derive dynamic controllers which can provide stability robustness. The robustness properties of these controllers are demonstrated in illustrative numerical examples and tested in a simple SISO experiment on the MIT multi-point alignment testbed.
... this page: //medlineplus.gov/ency/patientinstructions/000346.htm Weight-loss medicines Several weight-loss medicines are available. Ask your health care provider ...
... Weight loss & acute Porphyria Being overweight is a particular problem ... one of these diseases before they enter a weight-loss program. Also, they should not participate in a ...
... below the minimum number of calories you need. Breastfeeding If you are breastfeeding, you will want to lose weight slowly. Weight ... not affect your milk supply or your health. Breastfeeding makes your body burn calories. It helps you ...
NASA Astrophysics Data System (ADS)
Farkas, Illés; Ábel, Dániel; Palla, Gergely; Vicsek, Tamás
2007-06-01
The inclusion of link weights in the analysis of network properties allows a deeper insight into the (often overlapping) modular structure of real-world webs. We introduce a clustering algorithm, the clique percolation method with weights (CPMw), for weighted networks, based on the concept of percolating k-cliques with high enough intensity. The algorithm allows overlaps between the modules. First, we give detailed analytical and numerical results about the critical point of weighted k-clique percolation on (weighted) Erdős-Rényi graphs. Then, for a scientist collaboration web and a stock correlation graph, we compute three-link weight correlations and, with the CPMw, the weighted modules. After reshuffling link weights in both networks and computing the same quantities for the randomized control graphs as well, we show that groups of three or more strong links prefer to cluster together in both original graphs.
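The clique intensity that gates percolation can be sketched as a geometric mean of link weights, a common intensity definition for weighted subgraphs; the function name and threshold usage here are illustrative assumptions:

```python
import math

def clique_intensity(edge_weights):
    """Geometric mean of a clique's link weights. In a CPMw-style
    analysis, a k-clique percolates only if this intensity exceeds a
    chosen threshold. `edge_weights` maps an edge (i, j) to its weight
    and is assumed to cover a complete k-clique."""
    vals = list(edge_weights.values())
    return math.exp(sum(math.log(w) for w in vals) / len(vals))

# A 3-clique with link weights 1, 2 and 4: geometric mean is 2.
edges = {(0, 1): 1.0, (0, 2): 2.0, (1, 2): 4.0}
print(round(clique_intensity(edges), 6))  # 2.0
```

The geometric mean penalizes a single weak link more strongly than an arithmetic mean would, which is why groups of uniformly strong links percolate preferentially.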
... Division (HMD) of the National Academies of Sciences, Engineering, and Medicine released updated guidelines for weight gain ... Division (HMD) of the National Academies of Sciences, Engineering, and Medicine: Weight Gain During Pregnancy: Reexamining the ...
ERIC Educational Resources Information Center
Clarke, Doug
1993-01-01
Describes an activity shared at an inservice teacher workshop and suitable for middle school in which students predict their ideal weight in kilograms based on tables giving ideal weights for given heights. (MDH)
Effects of spatial variability and scale on areal-average evapotranspiration
NASA Technical Reports Server (NTRS)
Famiglietti, J. S.; Wood, Eric F.
1993-01-01
This paper explores the effect of spatial variability and scale on areally-averaged evapotranspiration. A spatially-distributed water and energy balance model is employed to determine the effect of explicit patterns of model parameters and atmospheric forcing on modeled areally-averaged evapotranspiration over a range of increasing spatial scales. The analysis is performed from the local scale to the catchment scale. The study area is King's Creek catchment, an 11.7 sq km watershed located on the native tallgrass prairie of Kansas. The dominant controls on the scaling behavior of catchment-average evapotranspiration are investigated by simulation, as is the existence of a threshold scale for evapotranspiration modeling, with implications for explicit versus statistical representation of important process controls. It appears that some of our findings are fairly general, and will therefore provide a framework for understanding the scaling behavior of areally-averaged evapotranspiration at the catchment and larger scales.
Averaging of Backscatter Intensities in Compounds
Donovan, John J.; Pingitore, Nicholas E.; Westphal, Andrew J.
2002-01-01
Low uncertainty measurements on pure element stable isotope pairs demonstrate that mass has no influence on the backscattering of electrons at typical electron microprobe energies. The traditional prediction of average backscatter intensities in compounds using elemental mass fractions is improperly grounded in mass and thus has no physical basis. We propose an alternative model to mass fraction averaging, based on the number of electrons or protons, termed “electron fraction,” which predicts backscatter yield better than mass fraction averaging.
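The electron-fraction idea can be sketched as a normalization of mass fractions by each element's electrons per unit mass (Z/A); the function name and the pyrite example are illustrative, not taken from the paper:

```python
def electron_fractions(mass_fractions, Z, A):
    """Convert mass fractions to electron fractions: each element's
    share of the compound's electrons, c_i * Z_i / A_i normalized to
    sum to one. An electron-fraction average then weights elemental
    backscatter yields by these values instead of by mass fractions."""
    raw = {el: c * Z[el] / A[el] for el, c in mass_fractions.items()}
    total = sum(raw.values())
    return {el: v / total for el, v in raw.items()}

# Pyrite, FeS2 (mass fractions approximately 0.466 Fe, 0.534 S):
Z = {"Fe": 26, "S": 16}
A = {"Fe": 55.845, "S": 32.06}
ef = electron_fractions({"Fe": 0.466, "S": 0.534}, Z, A)
print({el: round(v, 3) for el, v in ef.items()})  # {'Fe': 0.449, 'S': 0.551}
```

Because Z/A is lower for heavy elements, electron fractions shift weight away from them relative to mass fractions.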
Neutron resonance averaging with filtered beams
Chrien, R.E.
1985-01-01
Neutron resonance averaging using filtered beams from a reactor source has proven to be an effective nuclear structure tool within certain limitations. These limitations are imposed by the nature of the averaging process, which produces fluctuations in radiative intensities. The fluctuations have been studied quantitatively. Resonance averaging also gives us information about initial or capture state parameters, in particular the photon strength function. Suitable modifications of the filtered beams are suggested for the enhancement of non-resonant processes.
NASA Astrophysics Data System (ADS)
Afonina, I. A.; Kleptsyna, E. S.; Petukhov, V. L.; Patrashkov, S. A.
2003-05-01
Copper plays an important part in the bodies of living beings, but both high and low Cu levels may cause human and animal diseases. Some East Siberia areas are characterized by Cu pollution [1]. Five groups of hens were formed: group 1 was the control and groups 2-5 were experimental. For a month the hens in the experimental groups were given drinking water in which the Cu content was 5, 10, 20, and 30 times higher than the upper limit (UL). The weight of the hens in groups 1-3 was almost the same throughout the experiment. A weight decrease (from 2020 to 1656 g) was detected in group 4 (20 UL) during the first half of the month. All but 3 of the hens in group 4 died during the last 2 weeks. In group 5 (30 UL) all the hens died after 2 ... 14 days. Thus, high Cu concentrations (20 ... 30 UL) cause weight reduction and death in hens.
NASA Astrophysics Data System (ADS)
Arsenault, Richard; Gatien, Philippe; Renaud, Benoit; Brissette, François; Martel, Jean-Luc
2015-10-01
This study aims to test whether a weighted combination of several hydrological models can simulate flows more accurately than the models taken individually. In addition, the project attempts to identify the most efficient model averaging method and the optimal number of models to include in the weighting scheme. In order to address the first objective, streamflow was simulated using four lumped hydrological models (HSAMI, HMETS, MOHYSE and GR4J-6), each of which were calibrated with three different objective functions on 429 watersheds. The resulting 12 hydrographs (4 models × 3 metrics) were weighted and combined with the help of 9 averaging methods which are the simple arithmetic mean (SAM), Akaike information criterion (AICA), Bates-Granger (BGA), Bayes information criterion (BICA), Bayesian model averaging (BMA), Granger-Ramanathan average variant A, B and C (GRA, GRB and GRC) and the average by SCE-UA optimization (SCA). The same weights were then applied to the hydrographs in validation mode, and the Nash-Sutcliffe Efficiency metric was measured between the averaged and observed hydrographs. Statistical analyses were performed to compare the accuracy of weighted methods to that of individual models. A Kruskal-Wallis test and a multi-objective optimization algorithm were then used to identify the most efficient weighted method and the optimal number of models to integrate. Results suggest that the GRA, GRB, GRC and SCA weighted methods perform better than the individual members. Model averaging from these four methods were superior to the best of the individual members in 76% of the cases. Optimal combinations on all watersheds included at least one of each of the four hydrological models. None of the optimal combinations included all members of the ensemble of 12 hydrographs. The Granger-Ramanathan average variant C (GRC) is recommended as the best compromise between accuracy, speed of execution, and simplicity.
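The least-squares weighting behind the Granger-Ramanathan family can be sketched for two ensemble members. This follows the spirit of variant A (ordinary least-squares regression of observations on simulations, no intercept); the closed-form two-member solver is an illustrative simplification, as the paper combines twelve members:

```python
def gr_weights_two(sim1, sim2, obs):
    """Least-squares weights for combining two simulated hydrographs:
    solve the 2x2 normal equations of the regression
    obs ~ w1*sim1 + w2*sim2 (no intercept) by Cramer's rule."""
    a11 = sum(x * x for x in sim1)
    a22 = sum(y * y for y in sim2)
    a12 = sum(x * y for x, y in zip(sim1, sim2))
    b1 = sum(x * o for x, o in zip(sim1, obs))
    b2 = sum(y * o for y, o in zip(sim2, obs))
    det = a11 * a22 - a12 * a12
    return (b1 * a22 - b2 * a12) / det, (a11 * b2 - a12 * b1) / det

# Observations built as an exact 0.3/0.7 blend of two model outputs,
# so the recovered weights are 0.3 and 0.7.
sim1 = [10.0, 2.0, 8.0, 4.0]
sim2 = [1.0, 9.0, 3.0, 7.0]
obs = [0.3 * x + 0.7 * y for x, y in zip(sim1, sim2)]
w1, w2 = gr_weights_two(sim1, sim2, obs)
print(round(w1, 3), round(w2, 3))  # 0.3 0.7
```

In practice the weights are fitted in calibration and then held fixed in validation, exactly as done for the 12-member ensembles in the study.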
[Success in maintaining weight loss in Portugal: the Portuguese Weight Control Registry].
Vieira, Paulo Nuno; Teixeira, Pedro; Sardinha, Luís Bettencourt; Santos, Teresa; Coutinho, Sílvia; Mata, Jutta; Silva, Marlene Nunes
2014-01-01
The scope of this article is to describe the Portuguese Weight Control Registry (PWCR) methodology and the participants currently enrolled specifically with respect to their individual and family weight history, previous weight loss attempts, and psychosocial characteristics. One hundred and ninety-eight adults (age: 39.7±11.1 years; BMI: 26.0±3.9 kg/m2), 59% women, filled out a questionnaire about demographics, health-related behaviors and motivation, and methods and strategies used to lose and/or maintain weight loss. Participants reported an average weight loss of 17.4 kg for an average of 29 months. Concerning the number of weight loss attempts, 73% of participants reported a maximum of three attempts of going on a diet, and 34% indicated only one attempt to lose weight in the past. The PWCR now features a considerable number of successful long-term weight loss maintainers in Portugal. Participants will be followed over the next years to learn about their characteristics and weight loss strategies in further detail, as well as to identify predictors of continued weight loss maintenance. PMID:24473606
40 CFR 63.10009 - May I use emissions averaging to comply with this subpart?
Code of Federal Regulations, 2012 CFR
2012-07-01
...) You may choose to have your EGU emissions averaging group meet either the heat input basis (MMBtu or... equations. ER16FE12.003 Where: WAERm = Weighted average emissions rate maximum in terms of lb/heat input or... sorbent trap monitoring for hour i, Rmmi = Maximum rated heat input or gross electrical output of unit...
40 CFR 63.5710 - How do I demonstrate compliance using emissions averaging?
Code of Federal Regulations, 2010 CFR
2010-07-01
... demonstrate that the organic HAP emissions from those operations included in the average do not exceed the....) ER22AU01.012 Where: HAP emissions= Organic HAP emissions calculated using MACT model point values for each... section to compute the weighted-average MACT model point value for each open molding resin and gel...
Spectral and parametric averaging for integrable systems
NASA Astrophysics Data System (ADS)
Ma, Tao; Serota, R. A.
2015-05-01
We analyze two theoretical approaches to ensemble averaging for integrable systems in quantum chaos: spectral averaging (SA) and parametric averaging (PA). For SA, we introduce a new procedure, namely, rescaled spectral averaging (RSA). Unlike traditional SA, it can describe the correlation function of the spectral staircase (CFSS) and produce persistent oscillations of the interval level number variance (IV). PA, while not as accurate as RSA for the CFSS and IV, can also produce persistent oscillations of the global level number variance (GV) and better describes saturation level rigidity as a function of the running energy. Overall, it is the most reliable method for a wide range of statistics.
Statistics of time averaged atmospheric scintillation
Stroud, P.
1994-02-01
A formulation has been constructed to recover the statistics of the moving average of the scintillation Strehl from a discrete set of measurements. A program of airborne atmospheric propagation measurements was analyzed to find the correlation function of the relative intensity over displaced propagation paths. The variance in continuous moving averages of the relative intensity was then found in terms of the correlation functions. An empirical formulation of the variance of the continuous moving average of the scintillation Strehl has been constructed. The resulting characterization of the variance of the finite time averaged Strehl ratios is being used to assess the performance of an airborne laser system.
ERIC Educational Resources Information Center
Ryan, Kevin Michael
2011-01-01
Research on syllable weight in generative phonology has focused almost exclusively on systems in which weight is treated as an ordinal hierarchy of clearly delineated categories (e.g. light and heavy). As I discuss, canonical weight-sensitive phenomena in phonology, including quantitative meter and quantity-sensitive stress, can also treat weight…
Dynamic Multiscale Averaging (DMA) of Turbulent Flow
Richard W. Johnson
2012-09-01
A new approach called dynamic multiscale averaging (DMA) for computing the effects of turbulent flow is described. The new method encompasses multiple applications of temporal and spatial averaging, that is, multiscale operations. Initially, a direct numerical simulation (DNS) is performed for a relatively short time; it is envisioned that this short time should be long enough to capture several fluctuating time periods of the smallest scales. The flow field variables are subject to running time averaging during the DNS. After the relatively short time, the time-averaged variables are volume averaged onto a coarser grid. Both time and volume averaging of the describing equations generate correlations in the averaged equations. These correlations are computed from the flow field and added as source terms to the computation on the next coarser mesh. They represent coupling between the two adjacent scales. Since they are computed directly from first principles, there is no modeling involved. However, there is approximation involved in the coupling correlations as the flow field has been computed for only a relatively short time. After the time and spatial averaging operations are applied at a given stage, new computations are performed on the next coarser mesh using a larger time step. The process continues until the coarsest scale needed is reached. New correlations are created for each averaging procedure. The number of averaging operations needed is expected to be problem dependent. The new DMA approach is applied to a relatively low Reynolds number flow in a square duct segment. Time-averaged stream-wise velocity and vorticity contours from the DMA approach appear to be very similar to a full DNS for a similar flow reported in the literature. Expected symmetry for the final results is produced for the DMA method. The results obtained indicate that DMA holds significant potential in being able to accurately compute turbulent flow without modeling for practical
Factor weighting in DRASTIC modeling.
Pacheco, F A L; Pires, L M G R; Santos, R M B; Sanches Fernandes, L F
2015-02-01
Evaluation of aquifer vulnerability involves the integration of very diverse data, including soil characteristics (texture), hydrologic settings (recharge), aquifer properties (hydraulic conductivity), environmental parameters (relief), and ground water quality (nitrate contamination). It is therefore a multi-geosphere problem to be handled by a multidisciplinary team. The DRASTIC model remains the most popular technique in use for aquifer vulnerability assessments. The algorithm calculates an intrinsic vulnerability index based on a weighted addition of seven factors. In many studies, the method is subject to adjustments, especially in the factor weights, to meet the particularities of the studied regions. However, adjustments made by different techniques may lead to markedly different vulnerabilities and hence to insecurity in the selection of an appropriate technique. This paper reports a comparison of 5 weighting techniques, an enterprise not attempted before. The studied area comprises 26 aquifer systems located in Portugal. The tested approaches include the Delphi consensus (original DRASTIC, used as reference), Sensitivity Analysis, Spearman correlations, Logistic Regression, and Correspondence Analysis (used as adjustment techniques). In all cases but Sensitivity Analysis, the adjustment techniques have privileged the factors representing soil characteristics, hydrologic settings, aquifer properties, and environmental parameters, by leveling their weights to ≈4.4, and have subordinated the factors describing the aquifer media by downgrading their weights to ≈1.5. Logistic Regression predicts the highest and Sensitivity Analysis the lowest vulnerabilities. Overall, the vulnerability indices may be separated by a maximum value of 51 points. This represents an uncertainty of 2.5 vulnerability classes, because they are 20 points wide. Given this ambiguity, the selection of a weighting technique to integrate a vulnerability index may require additional
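The weighted addition at the core of DRASTIC can be sketched directly. The weights below are the original Delphi-consensus values; the example ratings are hypothetical:

```python
# Original Delphi-consensus DRASTIC weights for the seven factors:
# Depth to water, net Recharge, Aquifer media, Soil media, Topography,
# Impact of the vadose zone, hydraulic Conductivity.
DRASTIC_WEIGHTS = {"D": 5, "R": 4, "A": 3, "S": 2, "T": 1, "I": 5, "C": 3}

def drastic_index(ratings, weights=DRASTIC_WEIGHTS):
    """Intrinsic vulnerability index as a weighted addition of the
    seven factor ratings (each typically on a 1-10 scale). The
    adjustment techniques compared in the paper change the weight
    values, not this formula."""
    return sum(weights[f] * ratings[f] for f in weights)

# Hypothetical aquifer with mid-range ratings of 5 across the board:
print(drastic_index({f: 5 for f in DRASTIC_WEIGHTS}))  # 115
```

With weights summing to 23 and ratings of 1-10, the index spans 23-230, consistent with the 20-point-wide vulnerability classes mentioned above.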
Weighted Watson-Crick automata
NASA Astrophysics Data System (ADS)
Tamrin, Mohd Izzuddin Mohd; Turaev, Sherzod; Sembok, Tengku Mohd Tengku
2014-07-01
There is a tremendous body of work in biotechnology, especially in the area of DNA molecules. The computing community is attempting to develop smaller computing devices through computational models based on operations performed on DNA molecules. A Watson-Crick automaton, a theoretical model for DNA-based computation, has two reading heads and works on double-stranded input sequences related by a complementarity relation similar to the Watson-Crick complementarity of DNA nucleotides. Over time, several variants of Watson-Crick automata have been introduced and investigated. However, they cannot serve as suitable DNA-based computational models for the molecular stochastic and fuzzy processes that underlie important practical problems such as molecular parsing, gene disease detection, and food authentication. In this paper we define new variants of Watson-Crick automata, called weighted Watson-Crick automata, to provide theoretical models for molecular stochastic and fuzzy processes. We define weighted Watson-Crick automata by adapting weight restriction mechanisms associated with formal grammars and automata. We also study the generative capacity of weighted Watson-Crick automata, including probabilistic and fuzzy variants, and show that the weighted variants increase the generative power.
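The weight-restriction idea can be illustrated with a toy single-head weighted automaton: each accepted run contributes the product of its transition weights, and a string's weight is the sum over accepting runs. A real Watson-Crick automaton would run two heads over complementary strands, which this sketch deliberately omits; all states and weights here are hypothetical:

```python
# Toy weighted (probabilistic-style) automaton. Each transition carries a
# weight; a run's weight is the product of its transition weights, and a
# string's weight is the sum over all accepting runs.
transitions = {
    ("q0", "a"): [("q0", 0.5), ("q1", 0.5)],
    ("q1", "b"): [("q1", 1.0)],
}

def string_weight(word, state="q0", accept=("q1",)):
    """Sum, over all accepting runs, of the product of transition weights."""
    if not word:
        return 1.0 if state in accept else 0.0
    total = 0.0
    for nxt, w in transitions.get((state, word[0]), []):
        total += w * string_weight(word[1:], nxt, accept)
    return total

print(string_weight("aab"))  # 0.25: only the run q0 -a-> q0 -a-> q1 -b-> q1 accepts
```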
Weighted Watson-Crick automata
Tamrin, Mohd Izzuddin Mohd; Turaev, Sherzod; Sembok, Tengku Mohd Tengku
2014-07-10
There is a tremendous body of work in biotechnology, especially in the area of DNA molecules. The computing community is attempting to develop smaller computing devices through computational models based on operations performed on DNA molecules. A Watson-Crick automaton, a theoretical model for DNA-based computation, has two reading heads and works on double-stranded input sequences related by a complementarity relation similar to the Watson-Crick complementarity of DNA nucleotides. Over time, several variants of Watson-Crick automata have been introduced and investigated. However, they cannot serve as suitable DNA-based computational models for the molecular stochastic and fuzzy processes that underlie important practical problems such as molecular parsing, gene disease detection, and food authentication. In this paper we define new variants of Watson-Crick automata, called weighted Watson-Crick automata, to provide theoretical models for molecular stochastic and fuzzy processes. We define weighted Watson-Crick automata by adapting weight restriction mechanisms associated with formal grammars and automata. We also study the generative capacity of weighted Watson-Crick automata, including probabilistic and fuzzy variants, and show that the weighted variants increase the generative power.
Whatever Happened to the Average Student?
ERIC Educational Resources Information Center
Krause, Tom
2005-01-01
Mandated state testing, college entrance exams, and the perceived need for ever-higher grade point averages have raised the anxiety levels felt by many average students. Too much focus is placed on state test scores and college entrance standards, and not enough on the students' true levels. The author contends that…
40 CFR 63.846 - Emission averaging.
Code of Federal Regulations, 2014 CFR
2014-07-01
... averaging. (a) General. The owner or operator of an existing potline or anode bake furnace in a State that... by total aluminum production. (c) Anode bake furnaces. The owner or operator may average TF emissions from anode bake furnaces and demonstrate compliance with the limits in Table 3 of this subpart...
40 CFR 63.846 - Emission averaging.
Code of Federal Regulations, 2012 CFR
2012-07-01
... averaging. (a) General. The owner or operator of an existing potline or anode bake furnace in a State that... by total aluminum production. (c) Anode bake furnaces. The owner or operator may average TF emissions from anode bake furnaces and demonstrate compliance with the limits in Table 3 of this subpart...
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Emissions averaging. 76.11 Section 76.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General provisions. In lieu of complying with the...
A note on generalized averaged Gaussian formulas
NASA Astrophysics Data System (ADS)
Spalevic, Miodrag
2007-11-01
We have recently proposed a very simple numerical method for constructing the averaged Gaussian quadrature formulas. These formulas exist in many more cases than the real positive Gauss-Kronrod formulas. In this note we try to answer whether the averaged Gaussian formulas are an adequate alternative to the corresponding Gauss-Kronrod quadrature formulas, to estimate the remainder term of a Gaussian rule.
Determinants of College Grade Point Averages
ERIC Educational Resources Information Center
Bailey, Paul Dean
2012-01-01
Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even from continued enrollment at a university. However, GPAs are determined not only by student ability but also by…
40 CFR 63.846 - Emission averaging.
Code of Federal Regulations, 2013 CFR
2013-07-01
... operator may average TF emissions from potlines and demonstrate compliance with the limits in Table 1 of... operator also may average POM emissions from potlines and demonstrate compliance with the limits in Table 2... limit in Table 1 of this subpart (for TF emissions) and/or Table 2 of this subpart (for POM...
40 CFR 63.846 - Emission averaging.
Code of Federal Regulations, 2011 CFR
2011-07-01
... operator may average TF emissions from potlines and demonstrate compliance with the limits in Table 1 of... operator also may average POM emissions from potlines and demonstrate compliance with the limits in Table 2... limit in Table 1 of this subpart (for TF emissions) and/or Table 2 of this subpart (for POM...
40 CFR 63.846 - Emission averaging.
Code of Federal Regulations, 2010 CFR
2010-07-01
... operator may average TF emissions from potlines and demonstrate compliance with the limits in Table 1 of... operator also may average POM emissions from potlines and demonstrate compliance with the limits in Table 2... limit in Table 1 of this subpart (for TF emissions) and/or Table 2 of this subpart (for POM...
Average Transmission Probability of a Random Stack
ERIC Educational Resources Information Center
Lu, Yin; Miniatura, Christian; Englert, Berthold-Georg
2010-01-01
The transmission through a stack of identical slabs that are separated by gaps with random widths is usually treated by calculating the average of the logarithm of the transmission probability. We show how to calculate the average of the transmission probability itself with the aid of a recurrence relation and derive analytical upper and lower…
New results on averaging theory and applications
NASA Astrophysics Data System (ADS)
Cândido, Murilo R.; Llibre, Jaume
2016-08-01
The usual averaging theory reduces the computation of some periodic solutions of a system of ordinary differential equations to finding the simple zeros of an associated averaged function. When one of these zeros is not simple, i.e., the Jacobian of the averaged function at it is zero, the classical averaging theory provides no information about the periodic solution associated with that non-simple zero. Here we provide sufficient conditions under which the averaging theory can also be applied to non-simple zeros to study their associated periodic solutions. Additionally, we present two applications of this new result, studying the zero-Hopf bifurcation in the Lorenz system and in the FitzHugh-Nagumo system.
The Hubble rate in averaged cosmology
Umeh, Obinna; Larena, Julien; Clarkson, Chris E-mail: julien.larena@gmail.com
2011-03-01
The calculation of the averaged Hubble expansion rate in an averaged perturbed Friedmann-Lemaître-Robertson-Walker cosmology leads to small corrections to the background value of the expansion rate, which could be important for measuring the Hubble constant from local observations. It also predicts an intrinsic variance associated with the finite scale of any measurement of H{sub 0}, the Hubble rate today. Both the mean Hubble rate and its variance depend on both the definition of the Hubble rate and the spatial surface on which the average is performed. We quantitatively study different definitions of the averaged Hubble rate encountered in the literature by consistently calculating the backreaction effect at second order in perturbation theory, and compare the results. We employ for the first time a recently developed gauge-invariant definition of an averaged scalar. We also discuss the variance of the Hubble rate for the different definitions.
Short-Term Auditory Memory of Above-Average and Below-Average Grade Three Readers.
ERIC Educational Resources Information Center
Caruk, Joan Marie
To determine if performance on short-term auditory memory tasks is influenced by reading ability or sex differences, 62 third grade reading students (16 above-average boys, 16 above-average girls, 16 below-average boys, and 14 below-average girls) were administered four memory tests--memory for consonant names, memory for words, memory for…
Effects of wildfire disaster exposure on male birth weight in an Australian population
O’Donnell, M. H.; Behie, A. M.
2015-01-01
Background and objectives: Maternal stress can depress birth weight and gestational age, with potential health effects. A growing number of studies examine the effect of maternal stress caused by environmental disasters on birth outcomes. These changes may indicate an adaptive response. In this study, we examine the effects of maternal exposure to wildfire on birth weight and gestational age, hypothesising that maternal stress will negatively influence these measures. Methodology: Using data from the Australian Capital Territory, we employed Analysis of Variance to examine the influence of the 2003 Canberra wildfires on the weight of babies born to mothers resident in fire-affected regions, while considering the role of other factors. Results: We found that male infants born in the most severely fire-affected area had significantly higher average birth weights than their less exposed peers and were also heavier than males born in the same areas in non-fire years. Higher average weights were attributable to an increase in the number of macrosomic infants. There was no significant effect on the weight of female infants or on gestational age for either sex. Conclusions and implications: Our findings indicate heightened environmental responsivity in the male cohort. We find that elevated maternal stress acted to accelerate the growth of male fetuses, potentially through an elevation of maternal blood glucose levels. Like previous studies, our work finds effects of disaster exposure and suggests that fetal growth patterns respond to maternal signals. However, the direction of the change in birth weight is opposite to that of many earlier studies. PMID:26574560
Clarifying the Relationship between Average Excesses and Average Effects of Allele Substitutions.
Alvarez-Castro, José M; Yang, Rong-Cai
2012-01-01
Fisher's concepts of average effects and average excesses are at the core of the quantitative genetics theory. Their meaning and relationship have regularly been discussed and clarified. Here we develop a generalized set of one locus two-allele orthogonal contrasts for average excesses and average effects, based on the concept of the effective gene content of alleles. Our developments help understand the average excesses of alleles for the biallelic case. We dissect how average excesses relate to the average effects and to the decomposition of the genetic variance. PMID:22509178
Light propagation in the averaged universe
Bagheri, Samae; Schwarz, Dominik J. E-mail: dschwarz@physik.uni-bielefeld.de
2014-10-01
Cosmic structures determine how light propagates through the Universe and consequently must be taken into account in the interpretation of observations. In the standard cosmological model at the largest scales, such structures are either ignored or treated as small perturbations to an isotropic and homogeneous Universe. This isotropic and homogeneous model is commonly assumed to emerge from some averaging process at the largest scales. We assume that there exists an averaging procedure that preserves the causal structure of space-time. Based on that assumption, we study the effects of averaging the geometry of space-time and derive an averaged version of the null geodesic equation of motion. For the averaged geometry we then assume a flat Friedmann-Lemaître (FL) model and find that light propagation in this averaged FL model is not given by null geodesics of that model, but rather by a modified light propagation equation that contains an effective Hubble expansion rate, which differs from the Hubble rate of the averaged space-time.
Tailoring dietary approaches for weight loss
Gardner, C D
2012-01-01
Although the ‘Low-Fat' diet was the predominant public health recommendation for weight loss and weight control for the past several decades, the obesity epidemic continued to grow during this time period. An alternative ‘low-carbohydrate' (Low-Carb) approach, although originally dismissed and even vilified, was comparatively tested in a series of studies over the past decade, and has been found in general to be as effective, if not more, as the Low-Fat approach for weight loss and for several related metabolic health measures. From a glass half full perspective, this suggests that there is more than one choice for a dietary approach to lose weight, and that Low-Fat and Low-Carb diets may be equally effective. From a glass half empty perspective, the average amount of weight lost on either of these two dietary approaches under the conditions studied, particularly when followed beyond 1 year, has been modest at best and negligible at worst, suggesting that the two approaches may be equally ineffective. One could resign themselves at this point to focusing on calories and energy intake restriction, regardless of macronutrient distributions. However, before throwing out the half-glass of water, it is worthwhile to consider that focusing on average results may mask important subgroup successes and failures. In all weight-loss studies, without exception, the range of individual differences in weight change within any particular diet groups is orders of magnitude greater than the average group differences between diet groups. Several studies have now reported that adults with greater insulin resistance are more successful with weight loss on a lower-carbohydrate diet compared with a lower-fat diet, whereas adults with greater insulin sensitivity are equally or more successful with weight loss on a lower-fat diet compared with a lower-carbohydrate diet. Other preliminary findings suggest that there may be some promise with matching individuals with certain genotypes
NASA Astrophysics Data System (ADS)
Li, Xiaobao; Tsai, Frank T.-C.
2009-09-01
This study introduces a Bayesian model averaging (BMA) method that incorporates multiple groundwater models and multiple hydraulic conductivity estimation methods to predict groundwater heads and evaluate prediction uncertainty. BMA is able to distinguish prediction uncertainty arising from individual models, between models, and between methods. Moreover, BMA is able to identify unfavorable models even though they may present small prediction uncertainty. Uncertainty propagation, from model parameter uncertainty to model prediction uncertainty, can also be studied through BMA. This study adopts a variance window to obtain reasonable BMA weights for the best models, which are usually exaggerated by Occam's window. Results from a synthetic case study show that BMA with the variance window can provide better head prediction than individual models, or at least can obtain better predictions close to the best model. The BMA was applied to predicting groundwater heads in the "1500-foot" sand of the Baton Rouge area in Louisiana. Head prediction uncertainty was assessed by the BMA prediction variance. BMA confirms that large head prediction uncertainty occurs at areas lacking head observations and hydraulic conductivity measurements. Further study in these areas is necessary to reduce head prediction uncertainty.
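The BMA prediction and its variance decomposition (within-model plus between-model) described above can be sketched as follows; the model means, variances, and posterior weights are illustrative stand-ins, not values from the Baton Rouge study:

```python
import numpy as np

# Bayesian model averaging (BMA) sketch: combine head predictions from
# several models using posterior model weights. All numbers are illustrative.
means = np.array([10.0, 12.0, 11.0])    # each model's predicted head
variances = np.array([1.0, 2.0, 1.5])   # each model's prediction variance
weights = np.array([0.5, 0.3, 0.2])     # posterior model probabilities (sum to 1)

bma_mean = np.sum(weights * means)
# Total BMA variance = within-model part + between-model part:
within = np.sum(weights * variances)
between = np.sum(weights * (means - bma_mean) ** 2)
bma_variance = within + between
print(bma_mean, bma_variance)  # 10.8, 2.16 (within 1.4 + between 0.76)
```

The between-model term is what lets BMA report prediction uncertainty arising from model choice, not just from each model's own parameters.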
Cosmic Inhomogeneities and Averaged Cosmological Dynamics
NASA Astrophysics Data System (ADS)
Paranjape, Aseem; Singh, T. P.
2008-10-01
If general relativity (GR) describes the expansion of the Universe, the observed cosmic acceleration implies the existence of a “dark energy.” However, while the Universe is on average homogeneous on large scales, it is inhomogeneous on smaller scales. While GR governs the dynamics of the inhomogeneous Universe, the averaged homogeneous Universe obeys modified Einstein equations. Can such modifications alone explain the acceleration? For a simple generic model with realistic initial conditions, we show the answer to be “no.” Averaging effects negligibly influence the cosmological dynamics.
Average shape of transport-limited aggregates.
Davidovitch, Benny; Choi, Jaehyuk; Bazant, Martin Z
2005-08-12
We study the relation between stochastic and continuous transport-limited growth models. We derive a nonlinear integro-differential equation for the average shape of stochastic aggregates, whose mean-field approximation is the corresponding continuous equation. Focusing on the advection-diffusion-limited aggregation (ADLA) model, we show that the average shape of the stochastic growth is similar, but not identical, to the corresponding continuous dynamics. Similar results should apply to DLA, thus explaining the known discrepancies between average DLA shapes and viscous fingers in a channel geometry. PMID:16196793
Average Shape of Transport-Limited Aggregates
NASA Astrophysics Data System (ADS)
Davidovitch, Benny; Choi, Jaehyuk; Bazant, Martin Z.
2005-08-01
We study the relation between stochastic and continuous transport-limited growth models. We derive a nonlinear integro-differential equation for the average shape of stochastic aggregates, whose mean-field approximation is the corresponding continuous equation. Focusing on the advection-diffusion-limited aggregation (ADLA) model, we show that the average shape of the stochastic growth is similar, but not identical, to the corresponding continuous dynamics. Similar results should apply to DLA, thus explaining the known discrepancies between average DLA shapes and viscous fingers in a channel geometry.
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2011 CFR
2011-07-01
...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... compliance with the Acid Rain emission limitation for NOX under the plan only if the following...
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2012 CFR
2012-07-01
...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... compliance with the Acid Rain emission limitation for NOX under the plan only if the following...
Code of Federal Regulations, 2013 CFR
2013-07-01
... offset by positive credits from engine families below the applicable emission standard, as allowed under the provisions of this subpart. Averaging of credits in this manner is used to determine...
Orbit-averaged implicit particle codes
NASA Astrophysics Data System (ADS)
Cohen, B. I.; Freis, R. P.; Thomas, V.
1982-03-01
The merging of orbit-averaged particle code techniques with recently developed implicit methods to perform numerically stable and accurate particle simulations is reported. Implicitness and orbit averaging can extend the applicability of particle codes to the simulation of long time-scale plasma physics phenomena by relaxing time-step and statistical constraints. Difference equations for an electrostatic model are presented, and analyses of the numerical stability of each scheme are given. Simulation examples are presented for a one-dimensional electrostatic model. Schemes are constructed that are stable at large time steps, require fewer particles, and hence reduce input-output and memory requirements. Orbit averaging, however, in the unmagnetized electrostatic models tested so far is not as successful as in cases where there is a magnetic field. Methods are suggested in which orbit averaging should achieve more significant improvements in code efficiency.
Code of Federal Regulations, 2010 CFR
2010-07-01
... may use averaging to offset an emission exceedance of a nonroad engine family caused by a NOX FEL... exceedance of a nonroad engine family caused by an NMHC+;NOX FEL or a PM FEL above the applicable...
Code of Federal Regulations, 2011 CFR
2011-07-01
... may use averaging to offset an emission exceedance of a nonroad engine family caused by a NOX FEL... exceedance of a nonroad engine family caused by an NMHC+;NOX FEL or a PM FEL above the applicable...
Code of Federal Regulations, 2013 CFR
2013-07-01
... may use averaging to offset an emission exceedance of a nonroad engine family caused by a NOX FEL... exceedance of a nonroad engine family caused by an NMHC+;NOX FEL or a PM FEL above the applicable...
Code of Federal Regulations, 2012 CFR
2012-07-01
... may use averaging to offset an emission exceedance of a nonroad engine family caused by a NOX FEL... exceedance of a nonroad engine family caused by an NMHC+;NOX FEL or a PM FEL above the applicable...
Code of Federal Regulations, 2014 CFR
2014-07-01
... may use averaging to offset an emission exceedance of a nonroad engine family caused by a NOX FEL... exceedance of a nonroad engine family caused by an NMHC+;NOX FEL or a PM FEL above the applicable...
Total-pressure averaging in pulsating flows.
NASA Technical Reports Server (NTRS)
Krause, L. N.; Dudzinski, T. J.; Johnson, R. C.
1972-01-01
A number of total-pressure tubes were tested in a nonsteady flow generator in which the fraction of period that pressure is a maximum is approximately 0.8, thereby simulating turbomachine-type flow conditions. Most of the tubes indicated a pressure which was higher than the true average. Organ-pipe resonance which further increased the indicated pressure was encountered with the tubes at discrete frequencies. There was no obvious combination of tube diameter, length, and/or geometry variation used in the tests which resulted in negligible averaging error. A pneumatic-type probe was found to measure true average pressure and is suggested as a comparison instrument to determine whether nonlinear averaging effects are serious in unknown pulsation profiles.
Stochastic Averaging of Duhem Hysteretic Systems
NASA Astrophysics Data System (ADS)
YING, Z. G.; ZHU, W. Q.; NI, Y. Q.; KO, J. M.
2002-06-01
The response of a Duhem hysteretic system to externally and/or parametrically non-white random excitations is investigated by using the stochastic averaging method. A class of integrable Duhem hysteresis models covering many existing hysteresis models is identified, and the potential energy and dissipated energy of the Duhem hysteretic component are determined. The Duhem hysteretic system under random excitations is replaced equivalently by a non-hysteretic non-linear random system. The averaged Ito stochastic differential equation for the total energy is derived, and the Fokker-Planck-Kolmogorov equation associated with the averaged Ito equation is solved to yield the stationary probability density of total energy, from which the statistics of the system response can be evaluated. It is observed that the numerical results obtained by the stochastic averaging method are in good agreement with those from digital simulation.
Spacetime Average Density (SAD) cosmological measures
Page, Don N.
2014-11-01
The measure problem of cosmology is how to obtain normalized probabilities of observations from the quantum state of the universe. This is particularly a problem when eternal inflation leads to a universe of unbounded size, so that there are apparently infinitely many realizations or occurrences of observations of each of many different kinds or types, making the ratios ambiguous. There is also the danger of domination by Boltzmann brains. Here two new Spacetime Average Density (SAD) measures are proposed, Maximal Average Density (MAD) and Biased Average Density (BAD), which obtain a finite number of observation occurrences by using properties of the spacetime average density of observation occurrences to restrict to finite regions of spacetimes that have a preferred beginning or bounce hypersurface. These measures avoid Boltzmann brain domination and appear to give results consistent with other observations that are problematic for other widely used measures, such as the observation of a positive cosmological constant.
Total pressure averaging in pulsating flows
NASA Technical Reports Server (NTRS)
Krause, L. N.; Dudzinski, T. J.; Johnson, R. C.
1972-01-01
A number of total-pressure tubes were tested in a non-steady flow generator in which the fraction of period that pressure is a maximum is approximately 0.8, thereby simulating turbomachine-type flow conditions. Most of the tubes indicated a pressure which was higher than the true average. Organ-pipe resonance which further increased the indicated pressure was encountered within the tubes at discrete frequencies. There was no obvious combination of tube diameter, length, and/or geometry variation used in the tests which resulted in negligible averaging error. A pneumatic-type probe was found to measure true average pressure, and is suggested as a comparison instrument to determine whether nonlinear averaging effects are serious in unknown pulsation profiles. The experiments were performed at a pressure level of 1 bar, for Mach number up to near 1, and frequencies up to 3 kHz.
Monthly average polar sea-ice concentration
Schweitzer, Peter N.
1995-01-01
The data contained in this CD-ROM depict monthly averages of sea-ice concentration in the modern polar oceans. These averages were derived from the Scanning Multichannel Microwave Radiometer (SMMR) and Special Sensor Microwave/Imager (SSM/I) instruments aboard satellites of the U.S. Air Force Defense Meteorological Satellite Program from 1978 through 1992. The data are provided as 8-bit images using the Hierarchical Data Format (HDF) developed by the National Center for Supercomputing Applications.
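A minimal sketch of the monthly-averaging step, assuming daily fractional-concentration grids and an 8-bit packing convention; the actual SMMR/SSM/I scaling is not given in this abstract, so the value 254 and the grid shape are assumptions:

```python
import numpy as np

# Sketch: average daily sea-ice concentration grids into a monthly mean and
# pack it into 8-bit form, as in the CD-ROM's HDF images. The daily data here
# are synthetic, and the 0..254 packing (255 reserved, e.g. for missing data)
# is an illustrative convention, not the documented SMMR/SSM/I scaling.
rng = np.random.default_rng(0)
daily = rng.uniform(0.0, 1.0, size=(30, 4, 4))   # 30 days of 4x4 grids
monthly_mean = daily.mean(axis=0)                 # fractional concentration, 0..1
packed = np.round(monthly_mean * 254).astype(np.uint8)
```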
Heuristic approach to capillary pressures averaging
Coca, B.P.
1980-10-01
Several methods are available for averaging capillary pressure curves. Among these are the J-curve and regression equations of the wetting-fluid saturation on porosity and permeability (capillary pressure held constant). While the regression equations seem completely empirical, the J-curve method seems theoretically sound, since its expression is based on a relation between the average capillary radius and the permeability-porosity ratio. An analysis is given of each of these methods.
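The J-curve method rests on the Leverett J-function, which normalizes capillary pressure by sqrt(k/phi), the term tied to the average capillary radius mentioned above. A brief sketch with illustrative inputs:

```python
import math

# Leverett J-function sketch: normalizing capillary pressure by sqrt(k/phi)
# puts curves from samples of different permeability and porosity on a common
# dimensionless scale, so they can be averaged. Inputs are illustrative.
def leverett_j(pc, sigma, theta_deg, k, phi):
    """J(Sw) = Pc / (sigma * cos(theta)) * sqrt(k / phi)."""
    return pc / (sigma * math.cos(math.radians(theta_deg))) * math.sqrt(k / phi)

# Two samples at the same wetting saturation: the higher-permeability rock
# shows lower Pc, but both collapse to the same J value.
j1 = leverett_j(pc=10.0, sigma=72.0, theta_deg=0.0, k=100e-15, phi=0.20)
j2 = leverett_j(pc=20.0, sigma=72.0, theta_deg=0.0, k=25e-15, phi=0.20)
```

Here the second sample has one quarter the permeability and twice the capillary pressure, so the two J values coincide; that collapse is what makes averaging on the J scale attractive.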
Lokemoen, J.T.; Johnson, D.H.; Sharp, D.E.
1990-01-01
During 1976-81 we weighed several thousands of wild Mallard, Gadwall, and Blue-winged Teal in central North Dakota to examine duckling growth patterns, adult weights, and the factors influencing them. One-day-old Mallard and Gadwall averaged 32.4 and 30.4 g, respectively, a reduction of 34% and 29% from fresh egg weights. In all three species, the logistic growth curve provided a good fit for duckling growth patterns. Except for the asymptote, there was no difference in growth curves between males and females of a species. Mallard and Gadwall ducklings were heavier in years when wetland area was extensive or had increased from the previous year. Weights of after-second-year females were greater than yearlings for Mallard but not for Gadwall or Blue-winged Teal. Adult Mallard females lost weight continuously from late March to early July. Gadwall and Blue-winged Teal females, which nest later than Mallard, gained weight after spring arrival, lost weight from the onset of nesting until early July, and then regained some weight. Females of all species captured on nests were lighter than those captured off nests at the same time. Male Mallard weights decreased from spring arrival until late May. Male Gadwall and Blue-winged Teal weights increased after spring arrival, then declined until early June. Males of all three species then gained weight until the end of June. Among adults, female Gadwall and male Mallard and Blue-winged Teal were heavier in years when wetland area had increased from the previous year; female Blue-winged Teal were heavier in years with more wetland area.
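The logistic growth curve reported to fit duckling weights has the form W(t) = A / (1 + b e^(-kt)), with males and females of a species differing only in the asymptote A. A sketch with hypothetical parameters (not fitted values from the study):

```python
import math

# Logistic growth curve for duckling weight. A is the asymptotic weight (g);
# b and k set the shape and rate. All parameter values are hypothetical.
def logistic_weight(t, A, b, k):
    """Weight (g) at age t (days) under logistic growth."""
    return A / (1.0 + b * math.exp(-k * t))

A, b, k = 1000.0, 30.0, 0.12
w_day1 = logistic_weight(1, A, b, k)   # near hatch weight
w_late = logistic_weight(60, A, b, k)  # approaching the asymptote A
```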
Instrument to average 100 data sets
NASA Technical Reports Server (NTRS)
Tuma, G. B.; Birchenough, A. G.; Rice, W. J.
1977-01-01
An instrumentation system is currently under development which will measure many of the important parameters associated with the operation of an internal combustion engine. Some of these parameters include mass-fraction burn rate, ignition energy, and the indicated mean effective pressure. One of the characteristics of an internal combustion engine is the cycle-to-cycle variation of these parameters. A curve-averaging instrument has been produced which will generate the average curve, over 100 cycles, of any engine parameter. The average curve is described by 2048 discrete points, which are displayed on an oscilloscope screen to facilitate recording, and is available in real time. The input can be any parameter expressed as a ±10 V signal. Operation of the curve-averaging instrument is defined between 100 and 6000 rpm. Provisions have also been made for averaging as many as four parameters simultaneously, with a subsequent decrease in resolution. This provides the means to correlate and perhaps interrelate the phenomena occurring in an internal combustion engine. The instrument has been used successfully on a 1975 Chevrolet V8 engine and on a Continental 6-cylinder aircraft engine. While it was designed for use on an internal combustion engine, with some modification it can be used to average any cyclically varying waveform.
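The point-by-point averaging the instrument performs (100 cycles, 2048 points per curve) can be sketched on a synthetic waveform; the signal here is a noisy sine, not real engine data:

```python
import numpy as np

# Average 100 cycles of a waveform, each resampled to 2048 points, exactly
# as the curve-averaging instrument does. Synthetic data stand in for the
# engine parameter; cycle-to-cycle variation is modeled as Gaussian noise.
rng = np.random.default_rng(1)
n_cycles, n_points = 100, 2048
theta = np.linspace(0.0, 2.0 * np.pi, n_points)
true_curve = np.sin(theta)
cycles = true_curve + rng.normal(0.0, 0.3, size=(n_cycles, n_points))
average_curve = cycles.mean(axis=0)  # the 2048-point averaged curve
```

Averaging over 100 cycles reduces the cycle-to-cycle noise by a factor of 10 (sqrt of the cycle count), recovering the underlying curve.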
The Influence of Sleep Disordered Breathing on Weight Loss in a National Weight Management Program
Janney, Carol A.; Kilbourne, Amy M.; Germain, Anne; Lai, Zongshan; Hoerster, Katherine D.; Goodrich, David E.; Klingaman, Elizabeth A.; Verchinina, Lilia; Richardson, Caroline R.
2016-01-01
Study Objective: To investigate the influence of sleep disordered breathing (SDB) on weight loss in overweight/obese veterans enrolled in MOVE!, a nationally implemented behavioral weight management program delivered by the National Veterans Health Administration health system. Methods: This observational study evaluated weight loss by SDB status in overweight/obese veterans enrolled in MOVE! from May 2008–February 2012 who had at least two MOVE! visits, baseline weight, and at least one follow-up weight (n = 84,770). SDB was defined by International Classification of Diseases, Ninth Revision, Clinical Modification codes. Primary outcome was weight change (lb) from MOVE! enrollment to 6- and 12-mo assessments. Weight change over time was modeled with repeated-measures analyses. Results: SDB was diagnosed in one-third of the cohort (n = 28,269). At baseline, veterans with SDB weighed 29 [48] lb more than those without SDB (P < 0.001). On average, veterans attended eight MOVE! visits. Weight loss patterns over time were statistically different between veterans with and without SDB (P < 0.001); veterans with SDB lost less weight (−2.5 [0.1] lb) compared to those without SDB (−3.3 [0.1] lb; P = 0.001) at 6 months. At 12 mo, veterans with SDB continued to lose weight whereas veterans without SDB started to re-gain weight. Conclusions: Veterans with sleep disordered breathing (SDB) had significantly less weight loss over time than veterans without SDB. SDB should be considered in the development and implementation of weight loss programs due to its high prevalence and negative effect on health. Citation: Janney CA, Kilbourne AM, Germain A, Lai Z, Hoerster KD, Goodrich DE, Klingaman EA, Verchinina L, Richardson CR. The influence of sleep disordered breathing on weight loss in a national weight management program. SLEEP 2016;39(1):59–65. PMID:26350475
Computation of vertically averaged velocities in irregular sections of straight channels
NASA Astrophysics Data System (ADS)
Spada, E.; Tucciarelli, T.; Sinagra, M.; Sammartano, V.; Corato, G.
2015-09-01
Two new methods for vertically averaged velocity computation are presented, validated and compared with other available formulas. The first method derives from the well-known Huthoff algorithm, which is first shown to be dependent on the way the river cross section is discretized into several subsections. The second method assumes the vertically averaged longitudinal velocity to be a function only of the friction factor and of the so-called "local hydraulic radius", computed as the ratio between the integral of the elementary areas around a given vertical and the integral of the elementary solid boundaries around the same vertical. Both integrals are weighted with a linear shape function equal to zero at a distance from the integration variable which is proportional to the water depth according to an empirical coefficient β. Both formulas are validated against (1) laboratory experimental data, (2) discharge hydrographs measured in a real site, where the friction factor is estimated from an unsteady-state analysis of water levels recorded in two different river cross sections, and (3) the 3-D solution obtained using the commercial ANSYS CFX code, computing the steady-state uniform flow in a cross section of the Alzette River.
Zhao, Kaiguang; Valle, Denis; Popescu, Sorin; Zhang, Xuesong; Mallick, Bani
2013-05-15
Model specification remains challenging in spectroscopy of plant biochemistry, as exemplified by the availability of various spectral indices or band combinations for estimating the same biochemical. This lack of consensus in model choice across applications argues for a paradigm shift in hyperspectral methods to address model uncertainty and misspecification. We demonstrated one such method using Bayesian model averaging (BMA), which performs variable/band selection and quantifies the relative merits of many candidate models to synthesize a weighted average model with improved predictive performances. The utility of BMA was examined using a portfolio of 27 foliage spectral–chemical datasets representing over 80 species across the globe to estimate multiple biochemical properties, including nitrogen, hydrogen, carbon, cellulose, lignin, chlorophyll (a or b), carotenoid, polar and nonpolar extractives, leaf mass per area, and equivalent water thickness. We also compared BMA with partial least squares (PLS) and stepwise multiple regression (SMR). Results showed that all the biochemicals except carotenoid were accurately estimated from hyperspectral data with R2 values > 0.80.
Theory of optimal weighting of data to detect climatic change
NASA Technical Reports Server (NTRS)
Bell, T. L.
1986-01-01
A search for climatic change predicted by climate models can easily yield unconvincing results because of 'climatic noise,' the inherent, unpredictable variability of time-averaged atmospheric data. A weighted average of data that maximizes the probability of detecting predicted climatic change is presented. To obtain the optimal weights, an estimate of the covariance matrix of the data from a prior data set is needed. This introduces additional sampling error into the method, which is taken into account in the present analysis. A form of the weighted average is found whose probability distribution is independent of the true (but unknown) covariance statistics of the data and of the climate model prediction.
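The optimal-weighting idea summarized above can be illustrated with a standard matched-filter sketch: given a predicted change pattern s and an estimated data covariance matrix C, the weights that maximize the detection signal-to-noise ratio are proportional to C⁻¹s. The numbers below are purely illustrative assumptions, not values from the paper.

```python
# Hedged sketch of optimal weighting for detection (matched filter):
# with predicted change pattern s and noise covariance C, the optimal
# weights satisfy C w = s (up to normalization). Numbers are made up.

def solve_2x2(C, s):
    """Solve C w = s for a 2x2 matrix C via Cramer's rule."""
    (a, b), (c, d) = C
    det = a * d - b * c
    return [(d * s[0] - b * s[1]) / det, (a * s[1] - c * s[0]) / det]

C = [[2.0, 0.5], [0.5, 1.0]]   # hypothetical covariance of the noise
s = [1.0, 1.0]                 # hypothetical predicted change pattern
w = solve_2x2(C, s)            # optimal (unnormalized) weights
print(w)
```

In practice the covariance matrix is estimated from a prior data set, which is exactly the extra sampling error the paper accounts for.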
Ensemble bayesian model averaging using markov chain Monte Carlo sampling
Vrugt, Jasper A; Diks, Cees G H; Clark, Martyn P
2008-01-01
Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper, Raftery et al. (Mon Weather Rev 133:1155-1174, 2005) recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov Chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model streamflow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
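Once the weights are trained (whether by EM or by DREAM), the BMA point forecast itself is simply a weighted average of the competing model forecasts. A minimal sketch, with made-up forecasts and weights rather than values from the paper:

```python
# Minimal sketch of a BMA point forecast: a weighted average of the
# competing models' forecasts, with weights summing to one. The
# forecasts and weights below are illustrative assumptions only.

def bma_mean(forecasts, weights):
    """BMA point forecast: weighted average of competing forecasts."""
    if abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("BMA weights must sum to 1")
    return sum(w * f for w, f in zip(weights, forecasts))

# Three hypothetical 48-hour surface-temperature forecasts (deg C):
print(bma_mean([21.0, 23.0, 22.0], [0.5, 0.3, 0.2]))  # about 21.8
```

The full BMA predictive distribution also carries per-model variances; this sketch shows only the weighted mean.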
Jackson-type inequalities for spherical neural networks with doubling weights.
Lin, Shaobo; Zeng, Jinshan; Xu, Lin; Xu, Zongben
2015-03-01
Recently, spherical data processing has emerged in many applications and attracted a lot of attention. Among the methods for dealing with spherical data, the spherical neural network (SNN) method has been recognized as a very efficient tool because SNNs possess both good approximation capability and the spatial localization property. For a better localized approximant, weighted approximation should be considered, since different areas of the sphere may play different roles in the approximation process. In this paper, using the minimal Riesz energy points and the spherical cap average operator, we first construct a class of well-localized SNNs with a bounded sigmoidal activation function, and then study their approximation capabilities. More specifically, we establish a Jackson-type error estimate for the weighted SNN approximation in the metric of the L(p) space for well-developed doubling weights. PMID:25481671
... trying to do so can have many causes. Metabolism slows down as you age . This can cause weight gain if you eat too much, eat the wrong foods, or do not get enough exercise. Drugs that can cause weight gain include: Birth control ...
... weight gain in a couple of ways. First, alcohol is high in calories. Some mixed drinks can contain as many calories as a meal, but without the nutrients. You also may make poor food choices ... to cut out all alcohol if you are trying to lose weight, you ...
Sansone, Randy A; Sansone, Lori A
2014-07-01
Acute marijuana use is classically associated with snacking behavior (colloquially referred to as "the munchies"). In support of these acute appetite-enhancing effects, several authorities report that marijuana may increase body mass index in patients suffering from human immunodeficiency virus and cancer. However, for these medical conditions, while appetite may be stimulated, some studies indicate that weight gain is not always clinically meaningful. In addition, in a study of cancer patients in which weight gain did occur, it was less than the comparator drug (megestrol). However, data generally suggest that acute marijuana use stimulates appetite, and that marijuana use may stimulate appetite in low-weight individuals. As for large epidemiological studies in the general population, findings consistently indicate that users of marijuana tend to have lower body mass indices than nonusers. While paradoxical and somewhat perplexing, these findings may be explained by various study confounds, such as potential differences between acute versus chronic marijuana use; the tendency for marijuana use to be associated with other types of drug use; and/or the possible competition between food and drugs for the same reward sites in the brain. Likewise, perhaps the effects of marijuana are a function of initial weight status; i.e., maybe marijuana is a metabolic regulatory substance that increases body weight in low-weight individuals but not in normal-weight or overweight individuals. Only further research will clarify the complex relationships between marijuana and body weight. PMID:25337447
ERIC Educational Resources Information Center
Katch, Victor L.
This paper describes a number of factors which go into determining weight. The paper describes what calories are, how caloric expenditure is measured, and why caloric expenditure is different for different people. The paper then outlines the way the body tends to adjust food intake and exercise to maintain a constant body weight. It is speculated…
Technology Transfer Automated Retrieval System (TEKTRAN)
This review evaluated the available scientific literature relative to anthocyanins and weight loss and/or obesity with mention of other effects of anthocyanins on pathologies that are closely related to obesity. Although there is considerable popular press concerning anthocyanins and weight loss, th...
... in a person's diabetes management plan. Weight and Type 1 Diabetes If a person has type 1 diabetes but hasn't been treated yet, he or she often loses weight. In type 1 diabetes, the body can't use glucose (pronounced: GLOO- ...
ERIC Educational Resources Information Center
Lakdawalla, Darius; Philipson, Tomas
2007-01-01
We use panel data from the National Longitudinal Survey of Youth to investigate on-the-job exercise and weight. For male workers, job-related exercise has causal effects on weight, but for female workers, the effects seem primarily selective. A man who spends 18 years in the most physical fitness-demanding occupation is about 25 pounds (14…
... 22990030 www.ncbi.nlm.nih.gov/pubmed/22990030 . Weight-control Information NetworkNational Institute of Diabetes and Digestive and ... www.niddk.nih.gov/health-information/health-topics/weight-control/very-low-calorie-diets/Pages/very-low-calorie- ...
Average luminosity distance in inhomogeneous universes
Kostov, Valentin
2010-04-01
Using numerical ray tracing, the paper studies how the average distance modulus in an inhomogeneous universe differs from its homogeneous counterpart. The averaging is over all directions from a fixed observer, rather than over all possible observers (cosmic averaging), and is thus more directly applicable to our observations. In contrast to previous studies, the averaging is exact, non-perturbative, and includes all non-linear effects. The inhomogeneous universes are represented by Swiss-cheese models containing random and simple cubic lattices of mass-compensated voids. The Earth observer is in the homogeneous cheese, which has an Einstein-de Sitter metric. For the first time, the averaging is widened to include the supernovas inside the voids by assuming the probability for supernova emission from any comoving volume is proportional to the rest mass in it. Voids aligned along a certain direction give rise to a distance modulus correction which increases with redshift and is caused by cumulative gravitational lensing. That correction is present even for small voids and depends on their density contrast, not on their radius. Averaging over all directions destroys the cumulative lensing correction even in a non-randomized simple cubic lattice of voids. At low redshifts, the average distance modulus correction does not vanish due to the peculiar velocities, despite the photon flux conservation argument. A formula for the maximal possible average correction as a function of redshift is derived and shown to be in excellent agreement with the numerical results. The formula applies to voids of any size that: (a) have approximately constant densities in their interior and walls; and (b) are not in a deep nonlinear regime. The average correction calculated in random and simple cubic void lattices is severely damped below the predicted maximal one after a single void diameter. That is traced to cancellations between the corrections from the fronts and backs of different voids. The results obtained
A re-averaged WENO reconstruction and a third order CWENO scheme for hyperbolic conservation laws
NASA Astrophysics Data System (ADS)
Huang, Chieh-Sen; Arbogast, Todd; Hung, Chen-Hui
2014-04-01
A WENO re-averaging (or re-mapping) technique is developed that converts function averages on one grid to another grid to high order. Nonlinear weighting gives the essentially non-oscillatory property to the re-averaged function values. The new reconstruction grid is used to obtain a standard high order WENO reconstruction of the function averages at a select point. By choosing the reconstruction grid to include the point of interest, a high order function value can be reconstructed using only positive linear weights. The re-averaging technique is applied to define two variants of a classic CWENO3 scheme that combines two linear polynomials to obtain formal third order accuracy. Such a scheme cannot otherwise be defined, due to the nonexistence of linear weights for third order reconstruction at the center of a grid element. The new scheme uses a compact stencil of three solution averages, and only positive linear weights are used. The scheme extends easily to problems in higher space dimensions, essentially as a tensor product of the one-dimensional scheme. The scheme maintains formal third order accuracy in higher dimensions. Numerical results show that this CWENO3 scheme is third order accurate for smooth problems and gives good results for non-smooth problems, including those with shocks.
Explicit cosmological coarse graining via spatial averaging
NASA Astrophysics Data System (ADS)
Paranjape, Aseem; Singh, T. P.
2008-01-01
The present matter density of the Universe, while highly inhomogeneous on small scales, displays approximate homogeneity on large scales. We propose that whereas it is justified to use the Friedmann-Lemaître-Robertson-Walker (FLRW) line element (which describes an exactly homogeneous and isotropic universe) as a template to construct luminosity distances in order to compare observations with theory, the evolution of the scale factor in such a construction must be governed not by the standard Einstein equations for the FLRW metric, but by the modified Friedmann equations derived by Buchert (Gen Relat Gravit 32:105, 2000; 33:1381, 2001) in the context of spatial averaging in Cosmology. Furthermore, we argue that this scale factor, defined in the spatially averaged cosmology, will correspond to the effective FLRW metric provided the size of the averaging domain coincides with the scale at which cosmological homogeneity arises. This allows us, in principle, to compare predictions of a spatially averaged cosmology with observations, in the standard manner, for instance by computing the luminosity distance versus redshift relation. The predictions of the spatially averaged cosmology would in general differ from standard FLRW cosmology, because the scale factor now obeys the modified FLRW equations. This could help determine, by comparing with observations, whether or not cosmological inhomogeneities are an alternative explanation for the observed cosmic acceleration.
A Stochastic Model of Space-Time Variability of Mesoscale Rainfall: Statistics of Spatial Averages
NASA Technical Reports Server (NTRS)
Kundu, Prasun K.; Bell, Thomas L.
2003-01-01
A characteristic feature of rainfall statistics is that they depend on the space and time scales over which rain data are averaged. A previously developed spectral model of rain statistics, designed to capture this property, predicts power-law scaling behavior for the second moment statistics of area-averaged rain rate on the averaging length scale L as L → 0. In the present work a more efficient method of estimating the model parameters is presented, and used to fit the model to the statistics of area-averaged rain rate derived from gridded radar precipitation data from TOGA COARE. Statistical properties of the data and the model predictions are compared over a wide range of averaging scales. An extension of the spectral model scaling relations to describe the dependence of the average fraction of grid boxes within an area containing nonzero rain (the "rainy area fraction") on the grid scale L is also explored.
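Power-law scaling of this kind, var(L) ~ L^(-γ), is typically checked by fitting a slope in log-log space. A hedged sketch with invented variance values (not TOGA COARE data):

```python
import math

# Hedged sketch: estimate a power-law scaling exponent for the variance
# of area-averaged rain rate, var(L) ~ L^(-gamma), from (L, var) pairs
# via the least-squares slope of log(var) against log(L). The data
# points below are invented for illustration.

def scaling_exponent(scales, variances):
    """Least-squares slope of log(variance) against log(scale)."""
    xs = [math.log(L) for L in scales]
    ys = [math.log(v) for v in variances]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# An exact power law var = 4 * L^(-0.5) recovers a slope of -0.5:
print(scaling_exponent([1, 2, 4, 8],
                       [4.0, 4.0 / 2 ** 0.5, 2.0, 4.0 / 8 ** 0.5]))
```

With real gridded data the slope would be estimated over the range of averaging scales where the power law holds.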
Assessing exposure metrics for PM and birth weight models.
Gray, Simone C; Edwards, Sharon E; Miranda, Marie Lynn
2010-07-01
The link between air pollution exposure and adverse birth outcomes is of public health concern due to the relationship between poor pregnancy outcomes and the onset of childhood and adult diseases. As personal exposure measurements are difficult and expensive to obtain, proximate measures of air pollution exposure are traditionally used. We explored how different air pollution exposure metrics affect birth weight regression models. We examined the effect of maternal exposure to ambient levels of particulate matter <10, <2.5 mum in aerodynamic diameter (PM(10), PM(2.5)) on birth weight among infants in North Carolina. We linked maternal residence to the closest monitor during pregnancy for 2000-2002 (n=350,754). County-level averages of air pollution concentrations were estimated for the entire pregnancy and each trimester. For a finer spatially resolved metric, we calculated exposure averages for women living within 20, 10, and 5 km of a monitor. Multiple linear regression was used to determine the association between exposure and birth weight, adjusting for standard covariates. In the county-level model, an interquartile increase in PM(10) and PM(2.5) during the entire gestational period reduced the birth weight by 5.3 g (95% CI: 3.3-7.4) and 4.6 g (95% CI: 2.3-6.8), respectively. This model also showed a reduction in birth weight for PM(10) (7.1 g, 95% CI: 1.0-13.2) and PM(2.5) (10.4 g, 95% CI: 6.4-14.4) during the third trimester. Proximity models for 20, 10, and 5 km distances showed results similar to the county-level models. County-level models assume that exposure is spatially homogeneous over a larger surface area than proximity models. Sensitivity analysis showed that at varying spatial resolutions, there is still a stable and negative association between air pollution and birth weight, despite North Carolina's consistent attainment of federal air quality standards. PMID:19773814
Weight discrimination and bullying.
Puhl, Rebecca M; King, Kelly M
2013-04-01
Despite significant attention to the medical impacts of obesity, often ignored are the negative outcomes that obese children and adults experience as a result of stigma, bias, and discrimination. Obese individuals are frequently stigmatized because of their weight in many domains of daily life. Research spanning several decades has documented consistent weight bias and stigmatization in employment, health care, schools, the media, and interpersonal relationships. For overweight and obese youth, weight stigmatization translates into pervasive victimization, teasing, and bullying. Multiple adverse outcomes are associated with exposure to weight stigmatization, including depression, anxiety, low self-esteem, body dissatisfaction, suicidal ideation, poor academic performance, lower physical activity, maladaptive eating behaviors, and avoidance of health care. This review summarizes the nature and extent of weight stigmatization against overweight and obese individuals, as well as the resulting consequences that these experiences create for social, psychological, and physical health for children and adults who are targeted. PMID:23731874
Self-averaging in complex brain neuron signals
NASA Astrophysics Data System (ADS)
Bershadskii, A.; Dremencov, E.; Fukayama, D.; Yadid, G.
2002-12-01
Nonlinear statistical properties of the Ventral Tegmental Area (VTA) of the limbic brain are studied in vivo. The VTA plays a key role in the generation of pleasure and in the development of psychological drug addiction. It is shown that spiking time series of the VTA dopaminergic neurons exhibit long-range correlations with self-averaging behavior. This specific VTA phenomenon has no relation to the VTA rewarding function. This last result reveals the complex role of the VTA in the limbic brain.
The weight loss blogosphere: an online survey of weight loss bloggers.
Evans, Martinus; Faghri, Pouran D; Pagoto, Sherry L; Schneider, Kristin L; Waring, Molly E; Whited, Matthew C; Appelhans, Bradley M; Busch, Andrew; Coleman, Ailton S
2016-09-01
Blogging is a form of online journaling that has been increasingly used to document an attempt in weight loss. Despite the prevalence of weight loss bloggers, few studies have examined this population. We examined characteristics of weight loss bloggers and their blogs, including blogging habits, reasons for blogging, like and dislikes of blogging, and associations between blogging activity and weight loss. Participants (N = 194, 92.3 % female, mean age = 35) were recruited from Twitter and Facebook to complete an online survey. Participants reported an average weight loss of 42.3 pounds since starting to blog about their weight loss attempt. Blogging duration significantly predicted greater weight loss during blogging (β = -3.65, t(185) = -2.97, p = .003). Findings suggest that bloggers are generally successful with their weight loss attempt. Future research should explore what determines weight loss success/failure in bloggers and whether individuals desiring to lose weight would benefit from blogging. PMID:27528529
Books average previous decade of economic misery.
Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios
2014-01-01
For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159
High Average Power Yb:YAG Laser
Zapata, L E; Beach, R J; Payne, S A
2001-05-23
We are working on a composite thin-disk laser design that can be scaled as a source of high-brightness laser power for tactical engagement and other high average power applications. The key component is a diffusion-bonded composite comprising a thin gain medium and a thicker cladding that is strikingly robust and resolves prior difficulties with high average power pumping/cooling and the rejection of amplified spontaneous emission (ASE). In contrast to high-power rods or slabs, the one-dimensional nature of the cooling geometry and the edge-pump geometry scale gracefully to very high average power. The crucial design ideas have been verified experimentally. Progress this last year included: extraction with high beam quality using a telescopic resonator; a heterogeneous thin-film coating prescription that meets the unusual requirements demanded by this laser architecture; and thermal management with our first-generation cooler. Progress was also made in the design of a second-generation laser.
Deciphering faces: quantifiable visual cues to weight.
Coetzee, Vinet; Chen, Jingying; Perrett, David I; Stephen, Ian D
2010-01-01
Body weight plays a crucial role in mate choice, as weight is related to both attractiveness and health. People are quite accurate at judging weight in faces, but the cues used to make these judgments have not been defined. This study consisted of two parts. First, we wanted to identify quantifiable facial cues that are related to body weight, as defined by body mass index (BMI). Second, we wanted to test whether people use these cues to judge weight. In study 1, we recruited two groups of Caucasian and two groups of African participants, determined their BMI and measured their 2-D facial images for: width-to-height ratio, perimeter-to-area ratio, and cheek-to-jaw-width ratio. All three measures were significantly related to BMI in males, while the width-to-height and cheek-to-jaw-width ratios were significantly related to BMI in females. In study 2, these images were rated for perceived weight by Caucasian observers. We showed that these observers use all three cues to judge weight in African and Caucasian faces of both sexes. These three facial cues, width-to-height ratio, perimeter-to-area ratio, and cheek-to-jaw-width ratio, are therefore not only related to actual weight but provide a basis for perceptual attributes as well. PMID:20301846
Attractors and Time Averages for Random Maps
NASA Astrophysics Data System (ADS)
Araujo, Vitor
2006-07-01
Considering random noise in finite dimensional parameterized families of diffeomorphisms of a compact finite dimensional boundaryless manifold M, we show the existence of time averages for almost every orbit of each point of M, imposing mild conditions on the families. Moreover these averages are given by a finite number of physical absolutely continuous stationary probability measures. We use this result to deduce that situations with infinitely many sinks and Hénon-like attractors are not stable under random perturbations, e.g., Newhouse's and Colli's phenomena in the generic unfolding of a quadratic homoclinic tangency by a one-parameter family of diffeomorphisms.
An improved moving average technical trading rule
NASA Astrophysics Data System (ADS)
Papailias, Fotis; Thomakos, Dimitrios D.
2015-06-01
This paper proposes a modified version of the widely used price and moving average cross-over trading strategies. The suggested approach (presented in its 'long only' version) is a combination of cross-over 'buy' signals and a dynamic threshold value which acts as a dynamic trailing stop. The trading behaviour and performance from this modified strategy are different from the standard approach with results showing that, on average, the proposed modification increases the cumulative return and the Sharpe ratio of the investor while exhibiting smaller maximum drawdown and smaller drawdown duration than the standard strategy.
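To make the mechanics concrete, here is a hedged sketch of such a 'long only' rule: buy when price crosses above a short moving average, then exit when price falls below a fixed fraction of the running maximum since entry (the dynamic trailing stop). The window length and stop fraction are illustrative assumptions, not the paper's calibrated values.

```python
# Hedged sketch of a 'long only' cross-over rule with a dynamic
# trailing stop: enter when price exceeds the short moving average,
# exit when price drops below stop_frac times the running maximum
# since entry. Window and stop_frac are illustrative assumptions.

def sma(prices, window):
    """Simple moving average of the last `window` prices."""
    return sum(prices[-window:]) / window

def run_strategy(prices, window=3, stop_frac=0.95):
    """Return a list of (day, action) trade events."""
    events, in_pos, peak = [], False, 0.0
    for t in range(window, len(prices)):
        p, ma = prices[t], sma(prices[:t], window)
        if not in_pos and p > ma:            # cross-over 'buy' signal
            in_pos, peak = True, p
            events.append((t, "buy"))
        elif in_pos:
            peak = max(peak, p)              # update running maximum
            if p < stop_frac * peak:         # dynamic trailing stop hit
                in_pos = False
                events.append((t, "sell"))
    return events

# Rising then falling price series: buy on the cross-over, sell when
# the trailing stop is breached on the way down.
print(run_strategy([10, 10, 10, 11, 12, 13, 12, 11, 10]))
```

The stop level ratchets up with the running maximum, which is what distinguishes this dynamic threshold from a fixed stop-loss.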
The modulated average structure of mullite.
Birkenstock, Johannes; Petříček, Václav; Pedersen, Bjoern; Schneider, Hartmut; Fischer, Reinhard X
2015-06-01
Homogeneous and inclusion-free single crystals of 2:1 mullite (Al(4.8)Si(1.2)O(9.6)) grown by the Czochralski technique were examined by X-ray and neutron diffraction methods. The observed diffuse scattering together with the pattern of satellite reflections confirm previously published data and are thus inherent features of the mullite structure. The ideal composition was closely met as confirmed by microprobe analysis (Al(4.82 (3))Si(1.18 (1))O(9.59 (5))) and by average structure refinements. 8 (5) to 20 (13)% of the available Si was found in the T* position of the tetrahedra triclusters. The strong tendency for disorder in mullite may be understood from considerations of hypothetical superstructures which would have to be n-fivefold with respect to the three-dimensional average unit cell of 2:1 mullite and n-fourfold in case of 3:2 mullite. In any of these, the possible arrangements of the vacancies and of the tetrahedral units would inevitably be unfavorable. Three directions of incommensurate modulations were determined: q1 = [0.3137 (2) 0 ½], q2 = [0 0.4021 (5) 0.1834 (2)] and q3 = [0 0.4009 (5) -0.1834 (2)]. The one-dimensional incommensurately modulated crystal structure associated with q1 was refined for the first time using the superspace approach. The modulation is dominated by harmonic occupational modulations of the atoms in the di- and the triclusters of the tetrahedral units in mullite. The modulation amplitudes are small and the harmonic character implies that the modulated structure still represents an average structure in the overall disordered arrangement of the vacancies and of the tetrahedral structural units. In other words, when projecting the local assemblies at the scale of a few tens of average mullite cells into cells determined by either one of the modulation vectors q1, q2 or q3 a weak average modulation results with slightly varying average occupation factors for the tetrahedral units. As a result, the real
Polarized electron beams at milliampere average current
Poelker, Matthew
2013-11-01
This contribution describes some of the challenges associated with developing a polarized electron source capable of uninterrupted days-long operation at milliampere average beam current with polarization greater than 80%. Challenges will be presented in the context of assessing the required level of extrapolation beyond the performance of today's CEBAF polarized source operating at ~200 μA average current. Estimates of performance at higher current will be based on hours-long demonstrations at 1 and 4 mA. Particular attention will be paid to beam-related lifetime-limiting mechanisms, and strategies to construct a photogun that operates reliably at bias voltages > 350 kV.
Average: the juxtaposition of procedure and context
NASA Astrophysics Data System (ADS)
Watson, Jane; Chick, Helen; Callingham, Rosemary
2014-09-01
This paper presents recent data on the performance of 247 middle school students on questions concerning average in three contexts. Analysis includes considering levels of understanding linking definition and context, performance across contexts, the relative difficulty of tasks, and difference in performance for male and female students. The outcomes lead to a discussion of the expectations of the curriculum and its implementation, as well as assessment, in relation to students' skills in carrying out procedures and their understanding about the meaning of average in context.
Mean Element Propagations Using Numerical Averaging
NASA Technical Reports Server (NTRS)
Ely, Todd A.
2009-01-01
The long-term evolution characteristics (and stability) of an orbit are best characterized using a mean element propagation of the perturbed two body variational equations of motion. The averaging process eliminates short period terms leaving only secular and long period effects. In this study, a non-traditional approach is taken that averages the variational equations using adaptive numerical techniques and then numerically integrating the resulting EOMs. Doing this avoids the Fourier series expansions and truncations required by the traditional analytic methods. The resultant numerical techniques can be easily adapted to propagations at most solar system bodies.
Blanc, Ann K.; Wardlaw, Tessa
2005-01-01
OBJECTIVE: To critically examine the data used to produce estimates of the proportion of infants with low birth weight in developing countries and to describe biases in these data. To assess the effect of adjustment procedures on the estimates and propose a modified estimation procedure for international reporting purposes. METHODS: Mothers' reports about their recent births in 62 nationally representative Demographic and Health Surveys (DHS) conducted between 1990 and 2000 were analysed. The proportion of infants weighed at birth, characteristics of those weighed, extent of misreporting, and mothers' subjective assessments of their children's size at birth were examined. FINDINGS: In many developing countries the majority of infants were not weighed at birth. Those who were weighed were more likely to have mothers who live in urban areas and are educated, and to be born in a medical facility with assistance from medically trained personnel. Birth weights reported by mothers are "heaped" on multiples of 500 grams. CONCLUSION: Current survey-based estimates of the prevalence of low birth weight are biased substantially downwards. Two adjustments to reported data are recommended: a weighting procedure that combines reported birth weights with mothers' assessment of the child's size at birth, and categorization of one-quarter of the infants reported to have a birth weight of exactly 2500 grams as having low birth weight. Averaged over all surveys, these procedures increased the proportion classified as having low birth weight by 25%. We also recommend that the proportion of infants not weighed at birth be routinely reported. Efforts are needed to increase the weighing of newborns and the recording of their weights. PMID:15798841
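The second recommended adjustment can be sketched in a few lines. In this hedged illustration (all counts are invented, and the survey-weighting step that combines reported weights with mothers' size assessments is omitted), one quarter of the births reported at exactly 2500 g are reclassified as low birth weight:

```python
# Hedged sketch of the 2500 g heaping adjustment described above:
# one quarter of births reported at exactly 2500 g are reclassified
# as low birth weight (< 2500 g). All counts here are invented.

def adjusted_lbw_proportion(below_2500, exactly_2500, total_births):
    """Proportion low birth weight after the heaping adjustment."""
    adjusted = below_2500 + 0.25 * exactly_2500
    return adjusted / total_births

# e.g. 120 births reported < 2500 g, 80 heaped at exactly 2500 g,
# out of 1000 weighed births:
print(adjusted_lbw_proportion(120, 80, 1000))  # 0.14
```

The rationale is that rounding to the 2500 g heap pulls some genuinely low-weight births above the cutoff, so a fraction of the heap is reassigned below it.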
40 CFR 86.449 - Averaging provisions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... the new FEL. Manufacturers must test the motorcycles according to 40 CFR part 1051, subpart D...) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission Regulations for 1978 and Later New Motorcycles, General Provisions § 86.449 Averaging provisions. (a) This section describes...
A Functional Measurement Study on Averaging Numerosity
ERIC Educational Resources Information Center
Tira, Michael D.; Tagliabue, Mariaelena; Vidotto, Giulio
2014-01-01
In two experiments, participants judged the average numerosity between two sequentially presented dot patterns to perform an approximate arithmetic task. In Experiment 1, the response was given on a 0-20 numerical scale (categorical scaling), and in Experiment 2, the response was given by the production of a dot pattern of the desired numerosity…
Cryo-Electron Tomography and Subtomogram Averaging.
Wan, W; Briggs, J A G
2016-01-01
Cryo-electron tomography (cryo-ET) allows 3D volumes to be reconstructed from a set of 2D projection images of a tilted biological sample. It allows densities to be resolved in 3D that would otherwise overlap in 2D projection images. Cryo-ET can be applied to resolve structural features in complex native environments, such as within the cell. Analogous to single-particle reconstruction in cryo-electron microscopy, structures present in multiple copies within tomograms can be extracted, aligned, and averaged, thus increasing the signal-to-noise ratio and resolution. This reconstruction approach, termed subtomogram averaging, can be used to determine protein structures in situ. It can also be applied to facilitate more conventional 2D image analysis approaches. In this chapter, we provide an introduction to cryo-ET and subtomogram averaging. We describe the overall workflow, including tomographic data collection, preprocessing, tomogram reconstruction, subtomogram alignment and averaging, classification, and postprocessing. We consider theoretical issues and practical considerations for each step in the workflow, along with descriptions of recent methodological advances and remaining limitations. PMID:27572733
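As a toy illustration of the averaging step described above: the voxel-wise mean of aligned subtomograms raises the signal-to-noise ratio, since uncorrelated noise shrinks roughly as 1/sqrt(N). The tiny flattened volumes below are made up for the example; real subtomograms are large 3D arrays.

```python
def subtomogram_average(subvolumes):
    """Voxel-wise mean of N aligned subtomograms (here flattened to 1D lists);
    uncorrelated noise is reduced by roughly a factor of sqrt(N)."""
    n = len(subvolumes)
    return [sum(vox) / n for vox in zip(*subvolumes)]

# three hypothetical aligned copies of the same structure plus noise
aligned = [[1.0, 2.0, 3.0], [3.0, 2.0, 1.0], [2.0, 2.0, 2.0]]
avg_map = subtomogram_average(aligned)  # -> [2.0, 2.0, 2.0]
```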
Initial Conditions in the Averaging Cognitive Model
ERIC Educational Resources Information Center
Noventa, S.; Massidda, D.; Vidotto, G.
2010-01-01
The initial state parameters s[subscript 0] and w[subscript 0] are intricate issues of the averaging cognitive models in Information Integration Theory. Usually they are defined as a measure of prior information (Anderson, 1981; 1982) but there are no general rules to deal with them. In fact, there is no agreement as to their treatment except in…
Averaging on Earth-Crossing Orbits
NASA Astrophysics Data System (ADS)
Gronchi, G. F.; Milani, A.
The orbits of planet-crossing asteroids (and comets) can undergo close approaches and collisions with some major planet. This introduces a singularity in the N-body Hamiltonian, and the averaging of the equations of motion, traditionally used to compute secular perturbations, is undefined. We show that it is possible to define in a rigorous way some generalised averaged equations of motion, in such a way that the generalised solutions are unique and piecewise smooth. This is obtained, both in the planar and in the three-dimensional case, by means of the method of extraction of the singularities by Kantorovich. The modified distance used to approximate the singularity is the one used by Wetherill in his method to compute probability of collision. Some examples of averaged dynamics have been computed; a systematic exploration of the averaged phase space to locate the secular resonances should be the next step. "Alice sighed wearily. 'I think you might do something better with the time,' she said, 'than waste it asking riddles with no answers'" (Alice in Wonderland, L. Carroll)
Averaging models for linear piezostructural systems
NASA Astrophysics Data System (ADS)
Kim, W.; Kurdila, A. J.; Stepanyan, V.; Inman, D. J.; Vignola, J.
2009-03-01
In this paper, we consider a linear piezoelectric structure which employs a fast-switched, capacitively shunted subsystem to yield a tunable vibration absorber or energy harvester. The dynamics of the system is modeled as a hybrid system, where the switching law is considered as a control input and the ambient vibration is regarded as an external disturbance. It is shown that under mild assumptions of existence and uniqueness of the solution of this hybrid system, averaging theory can be applied, provided that the original system dynamics is periodic. The resulting averaged system is controlled by the duty cycle of a driven pulse-width modulated signal. The response of the averaged system approximates the performance of the original fast-switched linear piezoelectric system. It is analytically shown that the averaging approximation can be used to predict the electromechanically coupled system modal response as a function of the duty cycle of the input switching signal. This prediction is experimentally validated for the system consisting of a piezoelectric bimorph connected to an electromagnetic exciter. Experimental results show that the analytical predictions are observed in practice over a fixed "effective range" of switching frequencies. The same experiments show that the response of the switched system is insensitive to an increase in switching frequency above the effective frequency range.
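The core of the averaging result above is that, under fast switching, the slow dynamics see only the duty-cycle-weighted mean of the switched parameter. A minimal sketch, with hypothetical shunt capacitance values (not taken from the paper):

```python
def duty_cycle_average(value_on, value_off, duty):
    """Averaged-model parameter for a fast-switched two-state system: the
    effective value is the duty-cycle-weighted mean of the two states."""
    return duty * value_on + (1 - duty) * value_off

# hypothetical shunt capacitances (farads); the duty cycle is the control input
c_eff = duty_cycle_average(1e-6, 4.7e-6, duty=0.25)
```

Sweeping `duty` from 0 to 1 then tunes the effective capacitance, and hence the absorber frequency, continuously between the two switched values.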
A Measure of the Average Intercorrelation
ERIC Educational Resources Information Center
Meyer, Edward P.
1975-01-01
Bounds are obtained for a coefficient proposed by Kaiser as a measure of average correlation and the coefficient is given an interpretation in the context of reliability theory. It is suggested that the root-mean-square intercorrelation may be a more appropriate measure of degree of relationships among a group of variables. (Author)
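The distinction drawn in the abstract is easy to see numerically: correlations of mixed sign cancel in the plain average but not in the root-mean-square. A small sketch with a made-up correlation matrix:

```python
import math

def intercorrelation_summaries(R):
    """Mean and root-mean-square of the off-diagonal entries of a symmetric
    correlation matrix R (given as a list of lists)."""
    n = len(R)
    offdiag = [R[i][j] for i in range(n) for j in range(i + 1, n)]
    mean = sum(offdiag) / len(offdiag)
    rms = math.sqrt(sum(r * r for r in offdiag) / len(offdiag))
    return mean, rms

# hypothetical correlations of mixed sign
R = [[1.0, 0.6, -0.6],
     [0.6, 1.0, 0.0],
     [-0.6, 0.0, 1.0]]
mean_r, rms_r = intercorrelation_summaries(R)  # mean_r = 0.0, rms_r ~ 0.49
```

Here the average correlation suggests no relationship at all, while the RMS intercorrelation correctly reports a moderate degree of association among the variables.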
HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS.
BEN-ZVI, ILAN; DAYRAN, D.; LITVINENKO, V.
2005-08-21
Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is very high average power generation, for instance FELs with average power of 100 kW or more. The high electron beam power, high brightness, and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERLs) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL-based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier are simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier that is being designed to operate on the 0.5 ampere Energy Recovery Linac under construction at Brookhaven National Laboratory's Collider-Accelerator Department.
Measuring Time-Averaged Blood Pressure
NASA Technical Reports Server (NTRS)
Rothman, Neil S.
1988-01-01
Device measures time-averaged component of absolute blood pressure in artery. Includes compliant cuff around artery and external monitoring unit. Ceramic construction in monitoring unit suppresses ebb and flow of pressure-transmitting fluid in sensor chamber. Transducer measures only static component of blood pressure.
Reformulation of Ensemble Averages via Coordinate Mapping.
Schultz, Andrew J; Moustafa, Sabry G; Lin, Weisong; Weinstein, Steven J; Kofke, David A
2016-04-12
A general framework is established for reformulation of the ensemble averages commonly encountered in statistical mechanics. This "mapped-averaging" scheme allows approximate theoretical results that have been derived from statistical mechanics to be reintroduced into the underlying formalism, yielding new ensemble averages that represent exactly the error in the theory. The result represents a distinct alternative to perturbation theory for methodically employing tractable systems as a starting point for describing complex systems. Molecular simulation is shown to provide one appealing route to exploit this advance. Calculation of the reformulated averages by molecular simulation can proceed without contamination by noise produced by behavior that has already been captured by the approximate theory. Consequently, accurate and precise values of properties can be obtained while using less computational effort, in favorable cases, many orders of magnitude less. The treatment is demonstrated using three examples: (1) calculation of the heat capacity of an embedded-atom model of iron, (2) calculation of the dielectric constant of the Stockmayer model of dipolar molecules, and (3) calculation of the pressure of a Lennard-Jones fluid. It is observed that improvement in computational efficiency is related to the appropriateness of the underlying theory for the condition being simulated; the accuracy of the result is however not impacted by this. The framework opens many avenues for further development, both as a means to improve simulation methodology and as a new basis to develop theories for thermophysical properties. PMID:26950263
Bayesian Model Averaging for Propensity Score Analysis
ERIC Educational Resources Information Center
Kaplan, David; Chen, Jianshen
2013-01-01
The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…
40 CFR 86.449 - Averaging provisions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... the new FEL. Manufacturers must test the motorcycles according to 40 CFR part 1051, subpart D...) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission Regulations for 1978 and Later New Motorcycles, General Provisions § 86.449 Averaging provisions. (a) This section describes...
Average configuration of the induced venus magnetotail
McComas, D.J.; Spence, H.E.; Russell, C.T.
1985-01-01
In this paper we discuss the interaction of the solar wind flow with Venus and describe the morphology of magnetic field line draping in the Venus magnetotail. In particular, we describe the importance of the interplanetary magnetic field (IMF) X-component in controlling the configuration of field draping in this induced magnetotail, and using the results of a recently developed technique, we examine the average magnetic configuration of this magnetotail. The derived J x B forces must balance the average, steady state acceleration of, and pressure gradients in, the tail plasma. From this relation the average tail plasma velocity, lobe and current sheet densities, and average ion temperature have been derived. In this study we extend these results by making a connection between the derived consistent plasma flow speed and density, and the observational energy/charge range and sensitivity of the Pioneer Venus Orbiter (PVO) plasma analyzer, and demonstrate that if the tail is principally composed of O^+, the bulk of the plasma should not be observable much of the time that the PVO is within the tail. Finally, we examine the importance of solar wind slowing upstream of the obstacle and its implications for the temperature of pick-up planetary ions, compare the derived ion temperatures with their theoretical maximum values, and discuss the implications of this process for comets and AMPTE-type releases.
Glenzinski, D.; /Fermilab
2008-01-01
This paper summarizes a talk given at the Top2008 Workshop at La Biodola, Isola d'Elba, Italy. The status of the world average top-quark mass is discussed. Some comments about the challenges facing the experiments in order to further improve the precision are offered.
Why Johnny Can Be Average Today.
ERIC Educational Resources Information Center
Sturrock, Alan
1997-01-01
During a (hypothetical) phone interview with a university researcher, an elementary principal reminisced about a lifetime of reading groups with unmemorable names, medium-paced math problems, patchworked social studies/science lessons, and totally "average" IQ and batting scores. The researcher hung up at the mention of bell-curved assembly lines…
Steinke, Hanno; Rabi, Suganthy; Saito, Toshiyuki; Sawutti, Alimjan; Miyaki, Takayoshi; Itoh, Masahiro; Spanel-Borowski, Katharina
2008-11-20
Plastination is an excellent technique which helps to keep the anatomical specimens in a dry, odourless state. Since the invention of plastination technique by von Hagens, research has been done to improve the quality of plastinated specimens. In this paper, we have described a method of producing light-weight plastinated specimens using xylene along with silicone and in the final step, substitute xylene with air. The finished plastinated specimens were light-weight, dry, odourless and robust. This method requires less use of resin thus making the plastination technique more cost-effective. The light-weight specimens are easy to carry and can easily be used for teaching. PMID:18752934
Averaging processes in granular flows driven by gravity
NASA Astrophysics Data System (ADS)
Rossi, Giulia; Armanini, Aronne
2016-04-01
One of the more promising theoretical frameworks for analysing two-phase granular flows is offered by the similarity of their rheology with the kinetic theory of gases [1]. Granular flows can be considered a macroscopic equivalent of the molecular case: the collisions among molecules are compared to the collisions among grains at a macroscopic scale [2,3]. However, there are important statistical differences between the two applications. In two-phase fluid mechanics, there are two main types of average: the phasic average and the mass-weighted average [4]. The kinetic theories assume that the size of atoms is so small that the number of molecules in a control volume is infinite. With this assumption, the concentration (number of particles n) does not change during the averaging process and the two definitions of average coincide. This hypothesis is no longer true in granular flows: contrary to gases, the dimension of a single particle becomes comparable to that of the control volume. For this reason, in a single realization the number of grains is constant and the two averages coincide; on the contrary, for more than one realization, n is no longer constant and the two types of average lead to different results. Therefore, the ensemble average used in the standard kinetic theory (which usually is the phasic average) is suitable for a single realization, but not for several realizations, as already pointed out in [5,6]. In the literature, three main length scales have been identified [7]: the smallest is the particle size, the intermediate corresponds to the local averaging (in order to describe some instability phenomena or secondary circulation), and the largest arises from phenomena such as large eddies in turbulence. Our aim is to resolve the intermediate scale by applying the mass-weighted average when dealing with more than one realization. This statistical approach leads to additional diffusive terms in the continuity equation: starting from experimental
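The difference between the two averages discussed above only appears once the grain count varies between realizations. A minimal sketch with two invented realizations:

```python
def ensemble_average(vel):
    """Plain (phasic-style) ensemble average of a quantity over realizations."""
    return sum(vel) / len(vel)

def mass_weighted_average(conc, vel):
    """Mass-weighted (Favre-type) average: each realization is weighted by its
    grain concentration n, which is not constant across realizations."""
    return sum(c * u for c, u in zip(conc, vel)) / sum(conc)

conc = [2.0, 6.0]  # grains in the control volume, per realization
vel = [1.0, 3.0]   # mean grain velocity, per realization
u_phasic = ensemble_average(vel)            # -> 2.0
u_favre = mass_weighted_average(conc, vel)  # -> 2.5
```

With a single realization (or constant n) the two numbers would coincide, exactly as the abstract argues.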
Orbit Averaging in Perturbed Planetary Rings
NASA Astrophysics Data System (ADS)
Stewart, Glen R.
2015-11-01
The orbital period is typically much shorter than the time scale for dynamical evolution of large-scale structures in planetary rings. This large separation in time scales motivates the derivation of reduced models by averaging the equations of motion over the local orbit period (Borderies et al. 1985, Shu et al. 1985). A more systematic procedure for carrying out the orbit averaging is to use Lie transform perturbation theory to remove the dependence on the fast angle variable from the problem order-by-order in epsilon, where the small parameter epsilon is proportional to the fractional radial distance from exact resonance. This powerful technique has been developed and refined over the past thirty years in the context of gyrokinetic theory in plasma physics (Brizard and Hahm, Rev. Mod. Phys. 79, 2007). When the Lie transform method is applied to resonantly forced rings near a mean motion resonance with a satellite, the resulting orbit-averaged equations contain the nonlinear terms found previously, but also contain additional nonlinear self-gravity terms of the same order that were missed by Borderies et al. and by Shu et al. The additional terms result from the fact that the self-consistent gravitational potential of the perturbed rings modifies the orbit-averaging transformation at nonlinear order. These additional terms are the gravitational analog of electrostatic ponderomotive forces caused by large amplitude waves in plasma physics. The revised orbit-averaged equations are shown to modify the behavior of nonlinear density waves in planetary rings compared to the previously published theory. This research was supported by NASA's Outer Planets Research program.
A Note on the Linearly and Quadratically Weighted Kappa Coefficients.
Li, Pingke
2016-09-01
The linearly and quadratically weighted kappa coefficients are popular statistics in measuring inter-rater agreement on an ordinal scale. It has been recently demonstrated that the linearly weighted kappa is a weighted average of the kappa coefficients of the embedded 2 by 2 agreement matrices, while the quadratically weighted kappa is insensitive to the agreement matrices that are row or column reflection symmetric. A rank-one matrix decomposition approach to the weighting schemes is presented in this note such that these phenomena can be demonstrated in a concise manner. PMID:27246436
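The weighted kappa family discussed above can be written down in a few lines. The sketch below uses the standard disagreement-weight form (|i-j| for linear, (i-j)^2 for quadratic); the agreement matrix is invented.

```python
def weighted_kappa(counts, weights="linear"):
    """Weighted kappa for a k-by-k agreement matrix of raw frequencies.
    Disagreement weights: |i-j| ("linear") or (i-j)**2 ("quadratic")."""
    k = len(counts)
    n = sum(sum(row) for row in counts)
    p = [[c / n for c in row] for row in counts]
    row_m = [sum(row) for row in p]
    col_m = [sum(p[i][j] for i in range(k)) for j in range(k)]
    w = (lambda i, j: abs(i - j)) if weights == "linear" else (lambda i, j: (i - j) ** 2)
    observed = sum(w(i, j) * p[i][j] for i in range(k) for j in range(k))
    expected = sum(w(i, j) * row_m[i] * col_m[j] for i in range(k) for j in range(k))
    return 1 - observed / expected

# perfect agreement gives kappa = 1 under either weighting scheme
perfect = [[10, 0, 0], [0, 10, 0], [0, 0, 10]]
k_lin = weighted_kappa(perfect, "linear")      # -> 1.0
k_quad = weighted_kappa(perfect, "quadratic")  # -> 1.0
```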
Particulate matter and early childhood body weight.
Kim, Eunjeong; Park, Hyesook; Park, Eun Ae; Hong, Yun-Chul; Ha, Mina; Kim, Hwan-Cheol; Ha, Eun-Hee
2016-09-01
Concerns over adverse effects of air pollution on children's health have been rapidly rising. However, the effects of air pollution on childhood growth remain poorly studied. We investigated the association between prenatal and postnatal exposure to PM10 and children's weight from birth to 60 months of age. This birth cohort study evaluated 1129 mother-child pairs in South Korea. Children's weight was measured at birth and at six, 12, 24, 36, and 60 months. The average levels of children's exposure to particulate matter up to 10 μm in diameter (PM10) were estimated during pregnancy and during the period between each visit until 60 months of age. Exposure to PM10 during pregnancy lowered children's weight at 12 months. PM10 exposure from seven to 12 months negatively affected weight at 12, 36, and 60 months. Repeated measures of PM10 and weight from 12 to 60 months revealed a negative association between postnatal exposure to PM10 and children's weight. Children continuously exposed to a high level of PM10 (>50 μg/m³) from pregnancy to 24 months of age had weight z-scores at 60 months that were 0.44 times lower than in children constantly exposed to a lower level of PM10 (≤50 μg/m³) for the same period. Furthermore, growth was more vulnerable to PM10 exposure in children with birth weight <3.3 kg than in children with birth weight >3.3 kg. Air pollution may delay growth in early childhood and exposure to air pollution may be more harmful to children when their birth weight is low. PMID:27344372
Englberger, L.
1999-01-01
A programme of weight loss competitions and associated activities in Tonga, intended to combat obesity and the noncommunicable diseases linked to it, has popular support and the potential to effect significant improvements in health. PMID:10063662
... Differences in BMRs are associated with changes in energy balance. Energy balance reflects the difference between the amount of ... such as amphetamines, animals often have a negative energy balance which leads to weight loss. Based on ...
... of laxatives Other causes such as: Eating disorders, anorexia nervosa that have not been diagnosed yet Diabetes that ... do not know the reason. You have other symptoms along with the weight loss.
... If this is the case, preventing further weight gain is a worthy goal. As people age, their body composition gradually shifts: the proportion of muscle decreases and the proportion of fat increases. This ...
... spurts in height and weight gain in both boys and girls. Once these changes start, they continue for several ... or obese . Different BMI charts are used for boys and girls under the age of 20 because the amount ...
NASA Astrophysics Data System (ADS)
Fijani, E.; Chitsazan, N.; Nadiri, A.; Tsai, F. T.; Asghari Moghaddam, A.
2012-12-01
Artificial Neural Networks (ANNs) have been widely used to estimate concentration of chemicals in groundwater systems. However, estimation uncertainty is rarely discussed in the literature. Uncertainty in ANN output stems from three sources: ANN inputs, ANN parameters (weights and biases), and ANN structures. Uncertainty in ANN inputs may come from input data selection and/or input data error. ANN parameters are naturally uncertain because they are maximum-likelihood estimated. ANN structure is also uncertain because there is no unique ANN model for a given case. Therefore, multiple plausible ANN models generally result for a study. One might ask why good models have to be ignored in favor of the best model in traditional estimation. What is the ANN estimation variance? How do the variances from different ANN models accumulate to the total estimation variance? To answer these questions we propose a Hierarchical Bayesian Model Averaging (HBMA) framework. Instead of choosing one ANN model (the best ANN model) for estimation, HBMA averages outputs of all plausible ANN models. The model weights are based on the evidence of data. Therefore, the HBMA avoids overconfidence in the single best ANN model. In addition, HBMA is able to analyze uncertainty propagation through aggregation of ANN models in a hierarchy framework. This method is applied for estimation of fluoride concentration in the Poldasht plain and the Bazargan plain in Iran. Unusually high fluoride concentration in the Poldasht and Bazargan plains has caused negative effects on public health. Management of this anomaly requires estimation of the fluoride concentration distribution in the area. The results show that the HBMA provides a knowledge-decision-based framework that facilitates analyzing and quantifying ANN estimation uncertainties from different sources. In addition, HBMA allows comparative evaluation of the realizations for each source of uncertainty by segregating the uncertainty sources in
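The averaging step at the heart of BMA-style schemes like the one above can be sketched independently of any particular ANN: predictions are pooled under normalized model weights, and the total variance splits into a within-model part and a between-model (structural) part. All numbers below are invented.

```python
def model_average(predictions, weights):
    """Pool (mean, variance) predictions from several models under normalized
    weights; returns the pooled mean and the total variance, i.e. the
    within-model variance plus the between-model variance."""
    s = sum(weights)
    w = [x / s for x in weights]
    means = [m for m, _ in predictions]
    variances = [v for _, v in predictions]
    mean = sum(wi * mi for wi, mi in zip(w, means))
    within = sum(wi * vi for wi, vi in zip(w, variances))
    between = sum(wi * (mi - mean) ** 2 for wi, mi in zip(w, means))
    return mean, within + between

# three hypothetical models predicting fluoride concentration (mg/L)
preds = [(1.2, 0.04), (1.0, 0.09), (1.1, 0.01)]
pooled_mean, pooled_var = model_average(preds, weights=[2.0, 1.0, 1.0])
```

The between-model term is exactly the contribution that is lost when a single "best" model is chosen, which is the overconfidence such a framework is designed to avoid.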
Correctly Expressing Atomic Weights
NASA Astrophysics Data System (ADS)
Paolini, Moreno; Cercignani, Giovanni; Bauer, Carlo
2000-11-01
Very often, atomic or molecular weights are expressed as dimensionless quantities, but although the historical importance of their definition as "pure numbers" is acknowledged, it is inconsistent with experimental formulas and with the theory of measure in general. Here, we propose on the basis of clear-cut formulas that, contrary to customary statements, atomic and molecular weights should be expressed as dimensional quantities (masses) in which the Dalton (= 1.663 × 10^-24 g) is taken as the unit.
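The proposal amounts to carrying the unit through the arithmetic rather than dropping it. A minimal sketch, taking 1 Da ≈ 1.6605 × 10^-24 g:

```python
DALTON_G = 1.6605e-24  # grams per dalton (unified atomic mass unit)

def atomic_weight_in_grams(relative_mass):
    """Express a relative atomic or molecular weight as a dimensional mass."""
    return relative_mass * DALTON_G

mass_c12 = atomic_weight_in_grams(12.0)  # one 12C atom, about 1.99e-23 g
```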
The entire mean weighted first-passage time on a family of weighted treelike networks
Dai, Meifeng; Sun, Yanqiu; Sun, Yu; Xi, Lifeng; Shao, Shuxiang
2016-01-01
In this paper, we consider the entire mean weighted first-passage time (EMWFPT) with random walks on a family of weighted treelike networks. The EMWFPT on weighted networks is proposed for the first time in the literature. The dominating terms of the EMWFPT obtained by the following two methods are coincident. On the one hand, using the construction algorithm, we calculate the receiving and sending times for the central node to obtain the asymptotic behavior of the EMWFPT. On the other hand, applying the relationship equation between the EMWFPT and the average weighted shortest path, we also obtain the asymptotic behavior of the EMWFPT. The obtained results show that the effective resistance is equal to the weighted shortest path between two nodes, and the dominating term of the EMWFPT scales linearly with network size in large networks. PMID:27357233
Vulnerability of weighted networks
NASA Astrophysics Data System (ADS)
Dall'Asta, Luca; Barrat, Alain; Barthélemy, Marc; Vespignani, Alessandro
2006-04-01
In real networks complex topological features are often associated with a diversity of interactions as measured by the weights of the links. Moreover, spatial constraints may also play an important role, resulting in a complex interplay between topology, weight, and geography. In order to study the vulnerability of such networks to intentional attacks, these attributes must therefore be considered along with the topological quantities. In order to tackle this issue, we consider the case of the worldwide airport network, which is a weighted heterogeneous network whose evolution and structure are influenced by traffic and geographical constraints. We first characterize relevant topological and weighted centrality measures and then use these quantities as selection criteria for the removal of vertices. We consider different attack strategies and different measures of the damage achieved in the network. The analysis of weighted properties shows that centrality driven attacks are capable of shattering the network's communication or transport properties even at a very low level of damage in the connectivity pattern. The inclusion of weight and traffic therefore provides evidence for the extreme vulnerability of complex networks to any targeted strategy and the need for them to be considered as key features in the finding and development of defensive strategies.
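A simple instance of the weighted centrality measures used above is node strength, the sum of the weights of a node's links; a centrality-driven attack then removes nodes in decreasing order of strength rather than degree. The toy network below is invented.

```python
def node_strength(edges):
    """Strength (weighted degree) of each node: the sum of the weights of
    all links attached to it."""
    strength = {}
    for u, v, w in edges:
        strength[u] = strength.get(u, 0.0) + w
        strength[v] = strength.get(v, 0.0) + w
    return strength

# hypothetical weighted network: hub "A" carries most of the traffic
edges = [("A", "B", 5.0), ("A", "C", 3.0), ("B", "C", 1.0)]
ranked = sorted(node_strength(edges).items(), key=lambda kv: -kv[1])
# -> [("A", 8.0), ("B", 6.0), ("C", 4.0)]: a strength-driven attack removes "A" first
```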
The Economic Impact of Weight Regain
Sheppard, Caroline E.; Lester, Erica L. W.; Chuck, Anderson W.; Birch, Daniel W.; Karmali, Shahzeer; de Gara, Christopher J.
2013-01-01
Background. Obesity is well known for being associated with significant economic repercussions. Bariatric surgery is the only evidence-based solution to this problem as well as a cost-effective method of addressing the concern. Numerous authors have calculated the cost effectiveness and cost savings of bariatric surgery; however, to date the economic impact of weight regain as a component of overall cost has not been addressed. Methods. The literature search was conducted to elucidate the direct costs of obesity and primary bariatric surgery, the rate of weight recidivism and surgical revision, and any costs therein. Results. The quoted cost of obesity in Canada was $2.0 billion–$6.7 billion in 2013 CAD. The median percentage of bariatric procedures that fail due to weight gain or insufficient weight loss is 20% (average: 21.1% ± 10.1%, range: 5.2–39, n = 10). Revision of primary surgeries on average ranges from 2.5% to 18.4%, and depending on the procedure accounts for an additional cost between $14,000 and $50,000 USD per patient. Discussion. There was a significant deficit of the literature pertaining to the cost of revision surgery as compared with primary bariatric surgery. As such, the cycle of weight recidivism and bariatric revisions has not as of yet been introduced into any previous cost analysis of bariatric surgery. PMID:24454339
5 CFR 591.210 - What are weights?
Code of Federal Regulations, 2013 CFR
2013-01-01
... 5 Administrative Personnel 1 2013-01-01 2013-01-01 false What are weights? 591.210 Section 591.210 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS ALLOWANCES AND DIFFERENTIALS Cost-of-Living Allowance and Post Differential-Nonforeign Areas Cost-Of-Living Allowances § 591.210 What are weights? (a) A weight is...
NASA Astrophysics Data System (ADS)
Maleika, Wojciech
2015-02-01
The paper presents a new method of digital terrain model (DTM) estimation based on modified moving average interpolation. There are many methods that can be employed in DTM creation, such as kriging, inverse distance weighting, nearest neighbour and moving average. The moving average method is not as precise as the others; hence, it is not commonly employed in scientific work. Considering the high accuracy, the relatively low time costs, and the huge amount of measurement data collected by multibeam echosounder, however, the moving average method is definitely one of the most promising approaches. In this study, several variants of this method are analysed. An optimization of the moving average method is proposed based on a new module for selecting neighbouring points during the interpolation process—the "growing radius" approach. Test experiments performed on various multibeam echosounder datasets demonstrate the high potential of this modified moving average method for improved DTM generation.
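The "growing radius" neighbour selection described above can be sketched directly: start from a small search radius and enlarge it until enough soundings fall inside, then take their plain average. Parameter names and values are illustrative, not taken from the paper.

```python
import math

def growing_radius_average(points, x, y, r0=1.0, k_min=3, growth=1.5, r_max=16.0):
    """Moving-average DTM estimate at (x, y): grow the search radius from r0
    by factor `growth` until at least k_min soundings are inside, then
    average their depths."""
    r = r0
    while r <= r_max:
        depths = [z for (px, py, z) in points if math.hypot(px - x, py - y) <= r]
        if len(depths) >= k_min:
            return sum(depths) / len(depths)
        r *= growth
    return None  # too few soundings even at r_max: leave the grid cell empty

# hypothetical multibeam soundings (x, y, depth)
pts = [(0, 0, 10.0), (1, 0, 12.0), (0, 1, 14.0), (5, 5, 40.0)]
depth = growing_radius_average(pts, 0.0, 0.0)  # -> 12.0 (3 neighbours at r = 1.0)
```

The adaptive radius is what keeps sparse regions populated without oversmoothing dense ones.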
Lidar uncertainty and beam averaging correction
NASA Astrophysics Data System (ADS)
Giyanani, A.; Bierbooms, W.; van Bussel, G.
2015-05-01
Remote sensing of the atmospheric variables with the use of Lidar is a relatively new technology field for wind resource assessment in wind energy. A review of the draft version of an international guideline (CD IEC 61400-12-1 Ed.2) used for wind energy purposes is performed and some extra atmospheric variables are taken into account for proper representation of the site. A measurement campaign with two Leosphere vertical scanning WindCube Lidars and metmast measurements is used for comparison of the uncertainty in wind speed measurements using the CD IEC 61400-12-1 Ed.2. The comparison revealed higher but realistic uncertainties. A simple model for Lidar beam averaging correction is demonstrated for understanding deviation in the measurements. It can be further applied for beam averaging uncertainty calculations in flat and complex terrain.
Rigid shape matching by segmentation averaging.
Wang, Hongzhi; Oliensis, John
2010-04-01
We use segmentations to match images by shape. The new matching technique does not require point-to-point edge correspondence and is robust to small shape variations and spatial shifts. To address the unreliability of segmentations computed bottom-up, we give a closed form approximation to an average over all segmentations. Our method has many extensions, yielding new algorithms for tracking, object detection, segmentation, and edge-preserving smoothing. For segmentation, instead of a maximum a posteriori approach, we compute the "central" segmentation minimizing the average distance to all segmentations of an image. For smoothing, instead of smoothing images based on local structures, we smooth based on the global optimal image structures. Our methods for segmentation, smoothing, and object detection perform competitively, and we also show promising results in shape-based tracking. PMID:20224119
Polarized electron beams at milliampere average current
Poelker, M.
2013-11-07
This contribution describes some of the challenges associated with developing a polarized electron source capable of uninterrupted days-long operation at milliampere average beam current with polarization greater than 80%. Challenges are presented in the context of assessing the required level of extrapolation beyond the performance of today's CEBAF polarized source operating at ~200 μA average current. Estimates of performance at higher current are based on hours-long demonstrations at 1 and 4 mA. Particular attention is paid to beam-related lifetime-limiting mechanisms and to strategies for constructing a photogun that operates reliably at bias voltages > 350 kV.
Apparent and average accelerations of the Universe
Bolejko, Krzysztof; Andersson, Lars E-mail: larsa@math.miami.edu
2008-10-15
In this paper we consider the relation between the volume deceleration parameter obtained within the Buchert averaging scheme and the deceleration parameter derived from supernova observations. This work was motivated by recent findings that there are models which, despite having Λ = 0, have volume deceleration parameter q^vol < 0. This opens the possibility that back-reaction and averaging effects may be used as an interesting alternative explanation of the dark energy phenomenon. We have calculated q^vol in some Lemaître-Tolman models. For those models which are chosen to be realistic and which fit the supernova data, we find that q^vol > 0, while those models which we have been able to find that exhibit q^vol < 0 turn out to be unrealistic. This indicates that care must be exercised in relating the deceleration parameter to observations.
Emissions averaging top option for HON compliance
Kapoor, S.
1993-05-01
In one of its first major rule-setting directives under the CAA Amendments, EPA recently proposed tough new emissions controls for nearly two-thirds of the commercial chemical substances produced by the synthetic organic chemical manufacturing industry (SOCMI). However, the Hazardous Organic National Emission Standards for Hazardous Air Pollutants (HON) also affects several non-SOCMI processes. The author discusses proposed compliance deadlines, emissions averaging, and basic operating and administrative requirements.
The Average Velocity in a Queue
ERIC Educational Resources Information Center
Frette, Vidar
2009-01-01
A number of cars drive along a narrow road that does not allow overtaking. Each driver has a certain maximum speed at which he or she will drive if alone on the road. As a result of slower cars ahead, many cars are forced to drive at speeds lower than their maximum ones. The average velocity in the queue offers a non-trivial example of a mean…
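The setup described can be simulated directly: with the front car at index 0, each car's actual speed is the running minimum of the maximum speeds of the cars ahead of it (a small illustrative sketch, not taken from the article):

```python
from itertools import accumulate

def queue_average_velocity(max_speeds):
    """Average velocity in a no-overtaking queue of cars.

    max_speeds[0] is the front car.  Each car travels at the minimum
    of its own maximum speed and the speeds of all cars ahead of it,
    i.e. the running minimum of the list.
    """
    speeds = list(accumulate(max_speeds, min))
    return sum(speeds) / len(speeds)
```

For instance, maximum speeds [3, 5, 2, 4] yield actual speeds [3, 3, 2, 2] and an average velocity of 2.5, which is below the mean of the maximum speeds (3.5), illustrating the non-trivial mean the abstract refers to.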
Stochastic Games with Average Payoff Criterion
Ghosh, M. K.; Bagchi, A.
1998-11-15
We study two-person stochastic games on Polish state and compact action spaces with average payoff criterion, under a certain ergodicity condition. For the zero-sum game we establish the existence of a value and of stationary optimal strategies for both players. For the nonzero-sum case, the existence of a Nash equilibrium in stationary strategies is established under certain separability conditions.
Average Annual Rainfall over the Globe
ERIC Educational Resources Information Center
Agrawal, D. C.
2013-01-01
The atmospheric recycling of water is a very important phenomenon on the globe because it not only refreshes the water but it also redistributes it over land and oceans/rivers/lakes throughout the globe. This is made possible by the solar energy intercepted by the Earth. The half of the globe facing the Sun, on the average, intercepts 1.74 ×…
Representation of average drop sizes in sprays
NASA Astrophysics Data System (ADS)
Dodge, Lee G.
1987-06-01
Procedures are presented for processing drop-size measurements to obtain average drop sizes that represent overall spray characteristics. These procedures are not currently in general use, but they would represent an improvement over current practice. Clear distinctions are made between processing data for spatial- and temporal-type measurements. The conversion between spatial and temporal measurements is discussed. The application of these procedures is demonstrated by processing measurements of the same spray by two different types of instruments.
Evaluation of a Viscosity-Molecular Weight Relationship.
ERIC Educational Resources Information Center
Mathias, Lon J.
1983-01-01
Background information, procedures, and results are provided for a series of graduate/undergraduate polymer experiments. These include synthesis of poly(methylmethacrylate), viscosity experiment (indicating large effect even small amounts of a polymer may have on solution properties), and measurement of weight-average molecular weight by light…
Average System Cost Methodology : Administrator's Record of Decision.
United States. Bonneville Power Administration.
1984-06-01
Significant features of the average system cost (ASC) methodology adopted are: retention of the jurisdictional approach, in which retail rate orders of regulatory agencies provide the primary data for computing the ASC of utilities participating in the residential exchange; inclusion of transmission costs; exclusion of construction work in progress; use of a utility's weighted cost of debt securities; exclusion of income taxes; simplification of procedures for separating subsidized generation and transmission accounts from other accounts; clarification of ASC methodology rules; a more generous review timetable for individual filings; phase-in of the reformed methodology; and a requirement that each exchanging utility file under the new methodology within 20 days of its implementation by the Federal Energy Regulatory Commission. Of the ten major participating utilities, the revised ASC will substantially affect only three. (PSB)
Modern average global sea-surface temperature
Schweitzer, Peter N.
1993-01-01
The data contained in this data set are derived from the NOAA Advanced Very High Resolution Radiometer Multichannel Sea Surface Temperature data (AVHRR MCSST), which are obtainable from the Distributed Active Archive Center at the Jet Propulsion Laboratory (JPL) in Pasadena, Calif. The JPL tapes contain weekly images of SST from October 1981 through December 1990 in nine regions of the world ocean: North Atlantic, Eastern North Atlantic, South Atlantic, Agulhas, Indian, Southeast Pacific, Southwest Pacific, Northeast Pacific, and Northwest Pacific. This data set represents the results of calculations carried out on the NOAA data and also contains the source code of the programs that made the calculations. The objective was to derive the average sea-surface temperature of each month and week throughout the whole 10-year series, meaning, for example, that data from January of each year would be averaged together. The result is 12 monthly and 52 weekly images for each of the oceanic regions. Averaging the images in this way tends to reduce the number of grid cells that lack valid data and to suppress interannual variability.
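The month-by-month averaging described above can be sketched in a few lines (a hypothetical illustration, assuming the weekly or monthly images are stacked as a years × months × rows × columns array with NaN marking missing cells, e.g. cloud-contaminated retrievals):

```python
import warnings
import numpy as np

def monthly_climatology(sst):
    """Average SST images month-by-month across years.

    `sst` has shape (years, 12, ny, nx) with NaN for missing cells.
    Averaging the same calendar month across all years both fills gaps
    (a cell stays NaN only if it is missing in *every* year) and
    suppresses interannual variability, as the data set description
    notes.  Returns an array of shape (12, ny, nx).
    """
    with warnings.catch_warnings():
        # all-NaN cells trigger a RuntimeWarning; they stay NaN
        warnings.simplefilter("ignore", RuntimeWarning)
        return np.nanmean(sst, axis=0)
```

The same call with a (years, 52, ny, nx) stack produces the 52 weekly climatology images.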
Digital Averaging Phasemeter for Heterodyne Interferometry
NASA Technical Reports Server (NTRS)
Johnson, Donald; Spero, Robert; Shaklan, Stuart; Halverson, Peter; Kuhnert, Andreas
2004-01-01
A digital averaging phasemeter has been built for measuring the difference between the phases of the unknown and reference heterodyne signals in a heterodyne laser interferometer. This phasemeter performs well enough to enable interferometric measurements of distance with accuracy of the order of 100 pm and with the ability to track distance as it changes at a speed of as much as 50 cm/s. This phasemeter is unique in that it is a single, integral system capable of performing three major functions that, heretofore, have been performed by separate systems: (1) measurement of the fractional-cycle phase difference, (2) counting of multiple cycles of phase change, and (3) averaging of phase measurements over multiple cycles for improved resolution. This phasemeter also offers the advantage of making repeated measurements at a high rate: the phase is measured on every heterodyne cycle. Thus, for example, in measuring the relative phase of two signals having a heterodyne frequency of 10 kHz, the phasemeter would accumulate 10,000 measurements per second. At this high measurement rate, an accurate average phase determination can be made more quickly than is possible at a lower rate.
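The combination of the phasemeter's three functions (fractional-cycle measurement, cycle counting, multi-cycle averaging) can be illustrated conceptually in software (a sketch of the averaging step only; the instrument itself does this digitally per heterodyne cycle, and the names here are ours):

```python
from statistics import mean

def averaged_phase(cycle_counts, fractional_phases):
    """Combine per-cycle phasemeter readings into one averaged phase.

    Each heterodyne cycle yields an integer count of full cycles plus
    a fractional-cycle phase in [0, 1); the total phase in cycles is
    their sum.  Averaging N per-cycle readings of a slowly varying
    phase difference improves resolution roughly as 1/sqrt(N).
    """
    totals = [c + f for c, f in zip(cycle_counts, fractional_phases)]
    return mean(totals)
```

At a 10 kHz heterodyne frequency, one second of data supplies 10,000 such readings to the average, which is why the phasemeter reaches a given resolution faster than a lower-rate instrument.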
Disk-averaged synthetic spectra of Mars
NASA Technical Reports Server (NTRS)
Tinetti, Giovanna; Meadows, Victoria S.; Crisp, David; Fong, William; Velusamy, Thangasamy; Snively, Heather
2005-01-01
The principal goal of the NASA Terrestrial Planet Finder (TPF) and the European Space Agency's Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of a Mars-like planet to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra. We explore the detectability as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model. This model uses observational data as input to generate a database of spatially resolved synthetic spectra for a range of illumination conditions and viewing geometries. The model was validated against spectra recorded by the Mars Global Surveyor Thermal Emission Spectrometer and the Mariner 9 Infrared Interferometer Spectrometer. Results presented here include disk-averaged synthetic spectra, light curves, and the spectral variability at visible and mid-infrared wavelengths for Mars as a function of viewing angle, illumination, and season. We also considered the differences in the spectral appearance of an increasingly ice-covered Mars as a function of spectral resolution, signal-to-noise, and integration time for both TPF-C and TPF-I/Darwin.
Viewpoint: observations on scaled average bioequivalence.
Patterson, Scott D; Jones, Byron
2012-01-01
The two one-sided test procedure (TOST) has been used for average bioequivalence testing since 1992 and is required when marketing new formulations of an approved drug. TOST is known to require comparatively large numbers of subjects to demonstrate bioequivalence for highly variable drugs, defined as those drugs having intra-subject coefficients of variation greater than 30%. However, TOST has been shown to protect public health when multiple generic formulations enter the marketplace following patent expiration. Recently, scaled average bioequivalence (SABE) has been proposed as an alternative statistical analysis procedure for such products by multiple regulatory agencies. SABE testing requires that a three-period partial replicate cross-over or full replicate cross-over design be used. Following a brief summary of SABE analysis methods applied to existing data, we will consider three statistical ramifications of the proposed additional decision rules and the potential impact of implementation of scaled average bioequivalence in the marketplace using simulation. It is found that a constraint being applied is biased, that bias may also result from the common problem of missing data and that the SABE methods allow for much greater changes in exposure when generic-generic switching occurs in the marketplace. PMID:22162308
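The TOST decision rule the abstract builds on can be sketched as follows (a simplified illustration: a normal quantile stands in for the t quantile a real analysis would use with the design's degrees of freedom, and the 0.80-1.25 limits are the conventional regulatory bounds, not values taken from this paper):

```python
import math
from statistics import NormalDist

def tost_bioequivalence(log_ratio, se, alpha=0.05, limits=(0.80, 1.25)):
    """Two one-sided tests (TOST) for average bioequivalence.

    `log_ratio` is the estimated log-scale difference in mean exposure
    (test minus reference) and `se` its standard error.  Bioequivalence
    is concluded when the 100*(1 - 2*alpha)% confidence interval for
    the geometric mean ratio lies entirely within `limits`.
    """
    z = NormalDist().inv_cdf(1 - alpha)          # one-sided quantile
    lo = math.exp(log_ratio - z * se)
    hi = math.exp(log_ratio + z * se)
    return limits[0] < lo and hi < limits[1]
```

The abstract's point about highly variable drugs falls out directly: a larger `se` (high intra-subject variability or few subjects) widens the interval, so demonstrating bioequivalence requires more subjects; SABE addresses this by scaling the limits with the reference variability.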
Ensemble Averaged Probability Density Function (APDF) for Compressible Turbulent Reacting Flows
NASA Technical Reports Server (NTRS)
Shih, Tsan-Hsing; Liu, Nan-Suey
2012-01-01
In this paper, we present a concept of the averaged probability density function (APDF) for studying compressible turbulent reacting flows. The APDF is defined as an ensemble average of the fine grained probability density function (FG-PDF) with a mass density weighting. It can be used to exactly deduce the mass density weighted, ensemble averaged turbulent mean variables. The transport equation for APDF can be derived in two ways. One is the traditional way that starts from the transport equation of FG-PDF, in which the compressible Navier-Stokes equations are embedded. The resulting transport equation of APDF is then in a traditional form that contains conditional means of all terms from the right hand side of the Navier-Stokes equations except for the chemical reaction term. These conditional means are new unknown quantities that need to be modeled. Another way of deriving the transport equation of APDF is to start directly from the ensemble averaged Navier-Stokes equations. The resulting transport equation of APDF derived from this approach appears in a closed form without any need for additional modeling. The methodology of ensemble averaging presented in this paper can be extended to other averaging procedures: for example, the Reynolds time averaging for statistically steady flow and the Reynolds spatial averaging for statistically homogeneous flow. It can also be extended to a time or spatial filtering procedure to construct the filtered density function (FDF) for the large eddy simulation (LES) of compressible turbulent reacting flows.
Weight Loss Nutritional Supplements
NASA Astrophysics Data System (ADS)
Eckerson, Joan M.
Obesity has reached what may be considered epidemic proportions in the United States, not only for adults but for children. Because of the medical implications and health care costs associated with obesity, as well as the negative social and psychological impacts, many individuals turn to nonprescription nutritional weight loss supplements hoping for a quick fix, and the weight loss industry has responded by offering a variety of products that generate billions of dollars each year in sales. Most nutritional weight loss supplements are purported to work by increasing energy expenditure, modulating carbohydrate or fat metabolism, increasing satiety, inducing diuresis, or blocking fat absorption. To review the literally hundreds of nutritional weight loss supplements available on the market today is well beyond the scope of this chapter. Therefore, several of the most commonly used supplements were selected for critical review, and practical recommendations are provided based on the findings of well-controlled, randomized clinical trials that examined their efficacy. In most cases, the nutritional supplements reviewed either elicited no meaningful effect or resulted in changes in body weight and composition similar to what occurs through a restricted diet and exercise program. Although there is some evidence to suggest that herbal forms of ephedrine, such as ma huang, combined with caffeine or with caffeine and aspirin (i.e., the ECA stack) are effective for inducing moderate weight loss in overweight adults, because of the recent ban on ephedra, manufacturers must now use ephedra-free ingredients, such as bitter orange, which do not appear to be as effective. The dietary fiber glucomannan also appears to hold some promise as a possible treatment for weight loss, but other related forms of dietary fiber, including guar gum and psyllium, are ineffective.
Parents' Reactions to Finding Out That Their Children Have Average or above Average IQ Scores.
ERIC Educational Resources Information Center
Dirks, Jean; And Others
1983-01-01
Parents of 41 children who had been given an individually-administered intelligence test were contacted 19 months after testing. Parents of average IQ children were less accurate in their memory of test results. Children with above average IQ experienced extremely low frequencies of sibling rivalry, conceit or pressure. (Author/HLM)
Popular weight reduction diets.
Volpe, Stella Lucia
2006-01-01
The percentage of people who are overweight or obese has increased tremendously over the last 30 years; it has become a worldwide epidemic. This is evident in the number of children being diagnosed with a body mass index >85th percentile and the number of children being diagnosed with type 2 diabetes mellitus, a disease previously reserved for adults. The weight loss industry has also gained from this epidemic; it is a billion-dollar industry. People spend large sums of money on diet pills, remedies, and books in the hope of losing weight permanently. Despite these efforts, the number of individuals who are overweight or obese continues to increase. Obesity is a complex, multifactorial disorder, and it would be impossible to address all aspects of diet, exercise, and weight loss in this review. Therefore, this article reviews popular weight loss diets, with particular attention given to comparing low-fat diets with low-carbohydrate diets. In addition, the roles that the environment plays in both diet and exercise, and how they impact obesity, are addressed. Finally, the National Weight Control Registry is discussed. PMID:16407735
Estimates of Random Error in Satellite Rainfall Averages
NASA Technical Reports Server (NTRS)
Bell, Thomas L.; Kundu, Prasun K.
2003-01-01
Satellite rain estimates are most accurate when obtained with microwave instruments on low earth-orbiting satellites. Estimation of daily or monthly total areal rainfall, typically of interest to hydrologists and climate researchers, is made difficult, however, by the relatively poor coverage generally available from such satellites. Intermittent coverage by the satellites leads to random "sampling error" in the satellite products. The inexact information about hydrometeors inferred from microwave data also leads to random "retrieval errors" in the rain estimates. In this talk we will review approaches to quantitative estimation of the sampling error in area/time averages of satellite rain retrievals using ground-based observations, and methods of estimating rms random error, both sampling and retrieval, in averages using satellite measurements themselves.
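The ground-based approach to quantifying sampling error can be illustrated with a toy subsampling scheme (our own construction for illustration, not the authors' estimator): treat a dense gauge or radar time series as truth and emulate the satellite's intermittent visits by keeping only every k-th sample.

```python
def sampling_error_rms(rain, revisit):
    """Estimate the rms sampling error of intermittent coverage.

    `rain` is a dense ground-based rain-rate series treated as truth;
    the satellite is assumed to see only every `revisit`-th sample.
    The rms difference between the subsampled mean and the true mean,
    taken over all possible overpass phases, estimates the random
    sampling error of the satellite time average.
    """
    truth = sum(rain) / len(rain)
    errs = []
    for offset in range(revisit):
        sub = rain[offset::revisit]
        errs.append(sum(sub) / len(sub) - truth)
    return (sum(e * e for e in errs) / len(errs)) ** 0.5
```

A constant series gives zero sampling error regardless of revisit time, while an intermittent series sampled out of phase with its variability gives a large one, mirroring the intuition that sparse coverage is most damaging for highly intermittent rain.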
NASA Astrophysics Data System (ADS)
Bradley-Hutchison, Doug
2014-11-01
Once a controversial idea, the fact that gases like air have weight can easily be demonstrated using reasonably precise scales in the modern teaching laboratory. But unlike a liquid, where a mechanical model suggests a pile of hard spheres resting on each other, gas molecules are in continual motion and may interact only minimally. How should we think about the effect these molecules have on the scale? And, more importantly, how should we explain it to students? Several models of gas behavior are employed to answer these questions, and it is shown that the weight of a gas is, like electric current, an emergent phenomenon, in contrast to the weight of a liquid, which is direct and causal.
Sethi, Bipin Kumar; Nagesh, V Sri
2015-05-01
Ramadan fasting is associated with significant weight loss in both men and women. Reduction in blood pressure, lipids, blood glucose, body mass index and waist and hip circumference may also occur. However, benefits accrued during this month often reverse within a few weeks of cessation of fasting, with most people returning back to their pre-Ramadan body weights and body composition. To ensure maintenance of this fasting induced weight loss, health care professionals should encourage continuation of healthy dietary habits, moderate physical activity and behaviour modification, even after conclusion of fasting. It should be realized that Ramadan is an ideal platform to target year long lifestyle modification, to ensure that whatever health care benefits have been gained during this month, are perpetuated. PMID:26013789
Generalized constructive tree weights
Rivasseau, Vincent; Tanasa, Adrian E-mail: adrian.tanasa@ens-lyon.org
2014-04-15
The Loop Vertex Expansion (LVE) is a quantum field theory (QFT) method which explicitly computes the Borel sum of Feynman perturbation series. This LVE relies in a crucial way on symmetric tree weights which define a measure on the set of spanning trees of any connected graph. In this paper we generalize this method by defining new tree weights. They depend on the choice of a partition of a set of vertices of the graph, and when the partition is non-trivial, they are no longer symmetric under permutation of vertices. Nevertheless we prove they have the required positivity property to lead to a convergent LVE; in fact we formulate this positivity property precisely for the first time. Our generalized tree weights are inspired by the Brydges-Battle-Federbush work on cluster expansions and could be particularly suited to the computation of connected functions in QFT. Several concrete examples are explicitly given.
Evers, Daan
2013-04-01
According to Stephen Finlay, 'A ought to X' means that X-ing is more conducive to contextually salient ends than relevant alternatives. This in turn is analysed in terms of probability. I show why this theory of 'ought' is hard to square with a theory of a reason's weight which could explain why 'A ought to X' logically entails that the balance of reasons favours that A X-es. I develop two theories of weight to illustrate my point. I first look at the prospects of a theory of weight based on expected utility theory. I then suggest a simpler theory. Although neither allows that 'A ought to X' logically entails that the balance of reasons favours that A X-es, this price may be accepted. For there remains a strong pragmatic relation between these claims. PMID:23576822
Light weight phosphate cements
Wagh, Arun S.; Natarajan, Ramkumar,; Kahn, David
2010-03-09
A sealant having a specific gravity in the range of from about 0.7 to about 1.6 for heavy oil and/or coal bed methane fields is disclosed. The sealant has a binder including an oxide or hydroxide of Al or of Fe and a phosphoric acid solution. The binder may have MgO or an oxide of Fe and/or an acid phosphate. The binder is present from about 20 to about 50% by weight of the sealant with a lightweight additive present in the range of from about 1 to about 10% by weight of said sealant, a filler, and water sufficient to provide chemically bound water present in the range of from about 9 to about 36% by weight of the sealant when set. A porous ceramic is also disclosed.
Predictive RANS simulations via Bayesian Model-Scenario Averaging
Edeling, W.N.; Cinnella, P.; Dwight, R.P.
2014-10-15
The turbulence closure model is the dominant source of error in most Reynolds-Averaged Navier–Stokes simulations, yet no reliable estimators for this error component currently exist. Here we develop a stochastic, a posteriori error estimate, calibrated to specific classes of flow. It is based on variability in model closure coefficients across multiple flow scenarios, for multiple closure models. The variability is estimated using Bayesian calibration against experimental data for each scenario, and Bayesian Model-Scenario Averaging (BMSA) is used to collate the resulting posteriors, to obtain a stochastic estimate of a Quantity of Interest (QoI) in an unmeasured (prediction) scenario. The scenario probabilities in BMSA are chosen using a sensor which automatically up-weights those scenarios in the calibration set that are similar to the prediction scenario. The methodology is applied to the class of turbulent boundary layers subject to various pressure gradients. For all considered prediction scenarios, the standard deviation of the stochastic estimate is consistent with the measurement ground truth. Furthermore, the mean of the estimate is more consistently accurate than the individual model predictions.
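The BMSA collation step, a posterior mixture weighted by model and scenario probabilities, can be sketched as follows (an illustrative mixture computation under our own naming; the paper's sensor-based scenario weights are taken as given inputs):

```python
def bmsa_estimate(means, variances, model_probs, scenario_probs):
    """Collate per-model, per-scenario posteriors of a quantity of
    interest (QoI) into one stochastic estimate, in the spirit of
    Bayesian Model-Scenario Averaging.

    means[i][j], variances[i][j]: posterior mean/variance of the QoI
    under closure model i calibrated against scenario j.
    model_probs[i][j]: posterior probability of model i in scenario j.
    scenario_probs[j]: sensor-based weight of calibration scenario j.
    Returns the mean and variance of the weighted posterior mixture
    (variance via the law of total variance).
    """
    weights = [[m * s for m, s in zip(row, scenario_probs)]
               for row in model_probs]
    total = sum(sum(row) for row in weights)
    weights = [[w / total for w in row] for row in weights]
    mu = sum(w * m for wr, mr in zip(weights, means)
             for w, m in zip(wr, mr))
    var = sum(w * (v + (m - mu) ** 2)
              for wr, mr, vr in zip(weights, means, variances)
              for w, m, v in zip(wr, mr, vr))
    return mu, var
```

The mixture variance contains a between-component term `(m - mu)**2`, so disagreement among closure models inflates the error bar even when each individual posterior is narrow.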
NASA Astrophysics Data System (ADS)
Davis, C. V.; Hill, T. M.; Moffitt, S. E.
2013-12-01
Foraminiferal shell weight can be affected by environmental factors both during initial shell formation and as a result of post mortem preservation. An improved understanding of what determines this relationship can lead both to an understanding of foraminiferal calcite production in modern oceans and to proxy development for past environmental conditions. Significantly, foraminiferal shell weight has been linked to carbonate ion concentration in laboratory culture (of both planktic and benthic species) and in the modern and fossil record (in planktic foraminifera). This study explores the relationship between shell weight and changes in oxygenation and carbonate saturation in fossil benthic foraminifera from a high-resolution sedimentary record (MV0811-15JC; 34°36.930' N, 119°12.920' W; 418 m water depth; 16.1-3.4 ka; sedimentation rate 44-100 cm kyr-1) from Santa Barbara Basin, CA (SBB). Ongoing work in SBB has described rapid biotic reorganization through the recent deglaciation in response to changes in dissolved oxygen concentrations, which are used here to create a semi-quantitative oxygenation history for site MV0811-15JC. In modern oxygen minimum zones, decreases in oxygen closely covary with increases in total carbon (with a corresponding decrease in the carbonate saturation state). We interpret records from SBB of the average size-normalized test weight of Uvigerinid and Bolivinid foraminifera as showing that shell weight responds to these changes in oxygenation and saturation state. Multiple 'size normalization' metrics, including normalization by length, geometric estimation of surface area and volume, and tracing of individual silhouettes, are tested. Regardless of the method utilized, the size-normalized shell weight of all species fluctuates with abrupt changes in oxygenation and saturation state. Although all species respond to large-scale environmental changes, the weight records of Bolivinids and Uvigerinids reveal distinct differences, indicating that
A Green's function quantum average atom model
Starrett, Charles Edward
2015-05-21
A quantum average atom model is reformulated using Green's functions. This allows integrals along the real energy axis to be deformed into the complex plane, the advantage being that sharp features such as resonances and bound states are broadened by a Lorentzian with a half-width chosen for numerical convenience. An implementation of this method therefore avoids numerically challenging resonance tracking and the search for weakly bound states, without changing the physical content or results of the model. A straightforward implementation results in up to a factor of 5 speed-up relative to an optimized orbital-based code.
Average shape of fluctuations for subdiffusive walks
NASA Astrophysics Data System (ADS)
Yuste, S. B.; Acedo, L.
2004-03-01
We study the average shape of fluctuations for subdiffusive processes, i.e., processes with uncorrelated increments but where the waiting time distribution has a broad power-law tail. This shape is obtained analytically by means of a fractional diffusion approach. We find that, in contrast with processes where the waiting time between increments has finite variance, the fluctuation shape is no longer a semicircle: it tends to adopt a tablelike form as the subdiffusive character of the process increases. The theoretical predictions are compared with numerical simulation results.
Auto-exploratory average reward reinforcement learning
Ok, DoKyeong; Tadepalli, P.
1996-12-31
We introduce a model-based average reward Reinforcement Learning method called H-learning and compare it with its discounted counterpart, Adaptive Real-Time Dynamic Programming, in a simulated robot scheduling task. We also introduce an extension to H-learning which automatically explores the unexplored parts of the state space, while always choosing greedy actions with respect to the current value function. We show that this "Auto-exploratory H-learning" performs better than the original H-learning under previously studied exploration methods such as random, recency-based, or counter-based exploration.
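H-learning itself is model-based, but the average-reward criterion it optimizes can be illustrated with the simpler model-free tabular update known as R-learning (a hedged sketch of the criterion only, not the paper's algorithm; the step sizes and greedy-update rule here are generic choices):

```python
def r_learning_step(Q, rho, s, a, r, s_next, actions,
                    alpha=0.1, beta=0.01):
    """One tabular update under the average-reward criterion.

    Q[(state, action)] holds average-adjusted action values and `rho`
    is the running estimate of the gain (average reward per step).
    As in R-learning, the gain is only updated on transitions where
    the chosen action was greedy.  Returns the updated rho.
    """
    best_next = max(Q.get((s_next, b), 0.0) for b in actions)
    q = Q.get((s, a), 0.0)
    Q[(s, a)] = q + alpha * (r - rho + best_next - q)
    best_here = max(Q.get((s, b), 0.0) for b in actions)
    if Q[(s, a)] >= best_here:  # greedy action was taken
        rho += beta * (r - rho + best_next - best_here)
    return rho
```

Subtracting `rho` from the reward inside the target is what distinguishes the average-reward formulation from discounted Q-learning: values measure advantage relative to the long-run reward rate rather than a discounted sum.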
Behavioral transitions and weight change patterns within the PREMIER trial.
Bartfield, Jessica K; Stevens, Victor J; Jerome, Gerald J; Batch, Bryan C; Kennedy, Betty M; Vollmer, William M; Harsha, David; Appel, Lawrence J; Desmond, Renee; Ard, Jamy D
2011-08-01
Little is known about the transition in behaviors from short-term weight loss to maintenance of weight loss. We wanted to determine how short-term and long-term weight loss and patterns of weight change were associated with intervention behavioral targets. This analysis includes overweight/obese participants in active treatment (n = 507) from the previously published PREMIER trial, an 18-month, multicomponent lifestyle intervention for blood pressure reduction, including 33 intervention sessions and recommendations to self-monitor food intake and physical activity daily. Associations between behaviors (attendance, recorded days/week of physical activity, food records/week) and weight loss of ≥5% at 6 and 18 months were examined using logistic regression. We characterized the sample using 5 weight change categories (weight gained, weight stable, weight loss then relapse, late weight loss, and weight loss then maintenance) and analyzed adherence to the behaviors for each category, comparing means with ANOVA. Participants lost an average of 5.3 ± 5.6 kg at 6 months and 4.0 ± 6.7 kg (4.96% of body weight) by 18 months. Higher levels of attendance, food record completion, and recorded days/week of physical activity were associated with increased odds of achieving 5% weight loss. All weight change groups showed declines in the behaviors over time; however, compared with the other four groups, the weight loss/maintenance group (n = 154) had significantly smaller declines in food records/week (48%), recorded days/week of physical activity (41.7%), and intervention sessions attended (12.8%) through 18 months. Behaviors associated with short-term weight loss continue to be associated with long-term weight loss, albeit at lower frequencies. Minimizing the decline in these behaviors may be important in achieving long-term weight loss. PMID:21455122
Lepere, A. J.; Slack-Smith, L. M.
2002-01-01
Intravenous sedation has been used in dentistry for many years because of its perceived advantages over general anesthesia, including shorter recovery times. However, there is limited literature available on recovery from intravenous dental sedation, particularly in the private general practice setting. The aim of this study was to describe the recovery times when sedation was conducted in private dental practice and to consider this in relation to age, weight, procedure type, and procedure time. The data were extracted from the intravenous sedation records available with 1 general anesthesia-trained dental practitioner who provides ambulatory sedation services to a number of private general dental practices in the Perth, Western Australia Metropolitan Area. Standardized intravenous sedation techniques as well as clear standardized discharge criteria were utilized. The sedatives used were fentanyl, midazolam, and propofol. Results from 85 patients produced an average recovery time of 19 minutes. Recovery time was not associated with the type or length of dental procedures performed. PMID:15384295
High-average-power diode-pumped Yb: YAG lasers
Avizonis, P V; Beach, R; Bibeau, C M; Emanuel, M A; Harris, D G; Honea, E C; Monroe, R S; Payne, S A; Skidmore, J A; Sutton, S B
1999-10-01
A scalable diode end-pumping technology for high-average-power slab and rod lasers has been under development for the past several years at Lawrence Livermore National Laboratory (LLNL). This technology has particular application to high-average-power Yb:YAG lasers that utilize a rod-configured gain element. Previously, this rod-configured approach has achieved average output powers in a single 5 cm long by 2 mm diameter Yb:YAG rod of 430 W cw and 280 W q-switched. High beam quality (M^2 = 2.4) q-switched operation has also been demonstrated at over 180 W of average output power. More recently, using a dual-rod configuration consisting of two 5 cm long by 2 mm diameter laser rods with birefringence compensation, we have achieved 1080 W of cw output with an M^2 value of 13.5 at an optical-to-optical conversion efficiency of 27.5%. With the same dual-rod laser operated in a q-switched mode, we have also demonstrated 532 W of average power with an M^2 < 2.5 at 17% optical-to-optical conversion efficiency. These q-switched results were obtained at a 10 kHz repetition rate and resulted in 77 ns pulse durations. These improved levels of operational performance have been achieved as a result of technology advancements made in several areas that will be covered in this manuscript. These enhancements to our architecture include: (1) hollow lens ducts that enable the use of advanced cavity architectures permitting birefringence compensation and the ability to run in large aperture-filling near-diffraction-limited modes; (2) compound laser rods with flanged, nonabsorbing endcaps fabricated by diffusion bonding; and (3) techniques for suppressing amplified spontaneous emission (ASE) and parasitics in the polished-barrel rods.
Li, Yongfu; Shen, Hongwei; Lyons, John W; Sammler, Robert L; Brackhagen, Meinolf; Meunier, David M
2016-03-15
Size-exclusion chromatography (SEC) coupled with multi-angle laser light scattering (MALLS) and differential refractive index (DRI) detectors was employed for determination of the molecular weight distributions (MWD) of methylcellulose ethers (MC) and hydroxypropyl methylcellulose ethers (HPMC) having weight-average molecular weights (Mw) ranging from 20 to more than 1,000 kg/mol. In comparison to previous work involving right-angle light scattering (RALS) and a viscometer for MWD characterization of MC and HPMC, MALLS yields more reliable molecular weights for materials having weight-average molecular weights (Mw) exceeding about 300 kg/mol. A non-ideal SEC separation was observed for cellulose ethers with Mw > 800 kg/mol, manifested by upward divergence of log M vs. elution volume (EV) at larger elution volumes at typical SEC flow rates such as 1.0 mL/min. As such, the number-average molecular weight (Mn) determined for the sample was erroneously large and the polydispersity (Mw/Mn) was erroneously small. This non-ideality, resulting in the late elution of high molecular weight chains, could be due to the elongation of polymer chains when experimental conditions yield Deborah numbers (De) exceeding 0.5. Non-idealities were eliminated when sufficiently low flow rates were used. Thus, using carefully selected experimental conditions, SEC coupled with MALLS and DRI can provide reliable MWD characterization of MC and HPMC covering the entire ranges of compositions and molecular weights of commercial interest. PMID:26794765
Weighted Uncertainty Relations
NASA Astrophysics Data System (ADS)
Xiao, Yunlong; Jing, Naihuan; Li-Jost, Xianqing; Fei, Shao-Ming
2016-03-01
Recently, Maccone and Pati have given two stronger uncertainty relations based on the sum of variances and one of them is nontrivial when the quantum state is not an eigenstate of the sum of the observables. We derive a family of weighted uncertainty relations to provide an optimal lower bound for all situations and remove the restriction on the quantum state. Generalization to multi-observable cases is also given and an optimal lower bound for the weighted sum of the variances is obtained in general quantum situation.
Average observational quantities in the timescape cosmology
Wiltshire, David L.
2009-12-15
We examine the properties of a recently proposed observationally viable alternative to homogeneous cosmology with smooth dark energy, the timescape cosmology. In the timescape model cosmic acceleration is realized as an apparent effect related to the calibration of clocks and rods of observers in bound systems relative to volume-average observers in an inhomogeneous geometry in ordinary general relativity. The model is based on an exact solution to a Buchert average of the Einstein equations with backreaction. The present paper examines a number of observational tests which will enable the timescape model to be distinguished from homogeneous cosmologies with a cosmological constant or other smooth dark energy, in current and future generations of dark energy experiments. Predictions are presented for comoving distance measures; H(z); the equivalent of the dark energy equation of state, w(z); the Om(z) measure of Sahni, Shafieloo, and Starobinsky; the Alcock-Paczynski test; the baryon acoustic oscillation measure, D_V; the inhomogeneity test of Clarkson, Bassett, and Lu; and the time drift of cosmological redshifts. Where possible, the predictions are compared to recent independent studies of similar measures in homogeneous cosmologies with dark energy. Three separate tests with indications of results in possible tension with the ΛCDM model are found to be consistent with the expectations of the timescape cosmology.
MACHINE PROTECTION FOR HIGH AVERAGE CURRENT LINACS
Jordan, Kevin; Allison, Trent; Evans, Richard; Coleman, James; Grippo, Albert
2003-05-01
A fully integrated Machine Protection System (MPS) is critical to efficient commissioning and safe operation of all high current accelerators. The Jefferson Lab FEL [1,2] has multiple electron beam paths and many different types of diagnostic insertion devices. The MPS [3] needs to monitor both the status of these devices and the magnet settings which define the beam path. The matrix of these devices and beam paths are programmed into gate arrays, the output of the matrix is an allowable maximum average power limit. This power limit is enforced by the drive laser for the photocathode gun. The Beam Loss Monitors (BLMs), RF status, and laser safety system status are also inputs to the control matrix. There are 8 Machine Modes (electron path) and 8 Beam Modes (average power limits) that define the safe operating limits for the FEL. Combinations outside of this matrix are unsafe and the beam is inhibited. The power limits range from no beam to 2 megawatts of electron beam power.
Climatology of globally averaged thermospheric mass density
NASA Astrophysics Data System (ADS)
Emmert, J. T.; Picone, J. M.
2010-09-01
We present a climatological analysis of daily globally averaged density data, derived from orbit data and covering the years 1967-2007, along with an empirical Global Average Mass Density Model (GAMDM) that encapsulates the 1986-2007 data. The model represents density as a function of the F10.7 solar radio flux index, the day of year, and the Kp geomagnetic activity index. We discuss in detail the dependence of the data on each of the input variables, and demonstrate that all of the terms in the model represent consistent variations in both the 1986-2007 data (on which the model is based) and the independent 1967-1985 data. We also analyze the uncertainty in the results, and quantify how the variance in the data is apportioned among the model terms. We investigate the annual and semiannual variations of the data and quantify the amplitude, height dependence, solar cycle dependence, and interannual variability of these oscillatory modes. The auxiliary material includes Fortran 90 code for evaluating GAMDM.
Global atmospheric circulation statistics: Four year averages
NASA Technical Reports Server (NTRS)
Wu, M. F.; Geller, M. A.; Nash, E. R.; Gelman, M. E.
1987-01-01
Four year averages of the monthly mean global structure of the general circulation of the atmosphere are presented in the form of latitude-altitude, time-altitude, and time-latitude cross sections. The numerical values are given in tables. Basic parameters utilized include daily global maps of temperature and geopotential height for 18 pressure levels between 1000 and 0.4 mb for the period December 1, 1978 through November 30, 1982 supplied by NOAA/NMC. Geopotential heights and geostrophic winds are constructed using hydrostatic and geostrophic formulae. Meridional and vertical velocities are calculated using thermodynamic and continuity equations. Fields presented in this report are zonally averaged temperature, zonal, meridional, and vertical winds, and amplitude of the planetary waves in geopotential height with zonal wave numbers 1-3. The northward fluxes of sensible heat and eastward momentum by the standing and transient eddies along with their wavenumber decomposition and Eliassen-Palm flux propagation vectors and divergences by the standing and transient eddies along with their wavenumber decomposition are also given. Large interhemispheric differences and year-to-year variations are found to originate in the changes in the planetary wave activity.
Brief report: Weight dissatisfaction, weight status, and weight loss in Mexican-American children
Technology Transfer Automated Retrieval System (TEKTRAN)
The study objectives were to assess the association between weight dissatisfaction, weight status, and weight loss in Mexican-American children participating in a weight management program. Participants included 265 Mexican American children recruited for a school-based weight management program. Al...
Orthopedic stretcher with average-sized person can pass through 18-inch opening
NASA Technical Reports Server (NTRS)
Lothschuetz, F. X.
1966-01-01
A modified Robinson stretcher for vertical lifting and carrying will pass through an opening 18 inches in diameter while containing a person of average height and weight. A subject 6 feet tall and weighing 200 pounds was lowered into and raised out of an 18-inch-diameter opening in a tank to test the stretcher.
Impact of Field of Study, College and Year on Calculation of Cumulative Grade Point Average
ERIC Educational Resources Information Center
Trail, Carla; Reiter, Harold I.; Bridge, Michelle; Stefanowska, Patricia; Schmuck, Marylou; Norman, Geoff
2008-01-01
A consistent finding from many reviews is that undergraduate Grade Point Average (uGPA) is a key predictor of academic success in medical school. Curiously, while uGPA has established predictive validity, little is known about its reliability. For a variety of reasons, medical schools use different weighting schemas to combine years of study.…
40 CFR 63.7522 - Can I use emission averaging to comply with this subpart?
Code of Federal Regulations, 2012 CFR
2012-07-01
... of pounds per million Btu of heat input. Hm = Maximum rated heat input capacity of boiler, i, in... to calculate the monthly average weighted emission rate using the actual heat capacity for each... particulate matter or TSM, HCl, or mercury, in units of pounds per million Btu of heat input. Er =...
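The CFR excerpt above describes a monthly average weighted emission rate in which each boiler's emission rate is weighted by its actual heat input. A minimal sketch of such a heat-input-weighted average (the function name and numbers are illustrative, not from the regulation):

```python
def weighted_emission_rate(rates, heat_inputs):
    """Heat-input-weighted average emission rate.

    rates: per-boiler emission rates, in lb per million Btu (MMBtu)
    heat_inputs: actual heat input of each boiler over the month, in MMBtu
    """
    if not rates or len(rates) != len(heat_inputs):
        raise ValueError("need matching, non-empty rate and heat-input lists")
    total_heat = sum(heat_inputs)
    # Weight each boiler's rate by its share of the total heat input.
    return sum(r * h for r, h in zip(rates, heat_inputs)) / total_heat

# Two boilers at 0.03 and 0.01 lb/MMBtu, burning 1000 and 3000 MMBtu:
print(weighted_emission_rate([0.03, 0.01], [1000.0, 3000.0]))  # 0.015
```

The higher-firing boiler dominates the average, which is the point of heat-input weighting.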
Weight change among people randomized to minimal intervention control groups in weight loss trials
Johns, David J.; Hartmann‐Boyce, Jamie; Jebb, Susan A.; Aveyard, Paul
2016-01-01
Objective Evidence on the effectiveness of behavioral weight management programs often comes from uncontrolled program evaluations. These frequently make the assumption that, without intervention, people will gain weight. The aim of this study was to use data from minimal intervention control groups in randomized controlled trials to examine the evidence for this assumption and the effect of frequency of weighing on weight change. Methods Data were extracted from minimal intervention control arms in a systematic review of multicomponent behavioral weight management programs. Two reviewers classified control arms into three categories based on intensity of minimal intervention and calculated 12‐month mean weight change using baseline observation carried forward. Meta‐regression was conducted in STATA v12. Results Thirty studies met the inclusion criteria, twenty‐nine of which had usable data, representing 5,963 participants allocated to control arms. Control arms were categorized according to intensity, as offering leaflets only, a single session of advice, or more than one session of advice from someone without specialist skills in supporting weight loss. Mean weight change at 12 months across all categories was −0.8 kg (95% CI −1.1 to −0.4). In an unadjusted model, increasing intensity by moving up a category was associated with an additional weight loss of −0.53 kg (95% CI −0.96 to −0.09). Also in an unadjusted model, each additional weigh‐in was associated with a weight change of −0.42 kg (95% CI −0.81 to −0.03). However, when both variables were placed in the same model, neither intervention category nor number of weigh‐ins was associated with weight change. Conclusions Uncontrolled evaluations of weight loss programs should assume that, in the absence of intervention, their population would weigh up to a kilogram on average less than baseline at the end of the first year of follow‐up. PMID:27028279
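The 12-month mean weight change above was calculated using baseline observation carried forward (BOCF), in which a missing follow-up measurement is replaced by the baseline value so the imputed change is zero. A minimal sketch of that imputation (function names are illustrative):

```python
def bocf_change(baseline, followup):
    """Weight change per participant with baseline observation carried
    forward: a missing follow-up (None) is treated as no change."""
    return [(f - b) if f is not None else 0.0
            for b, f in zip(baseline, followup)]

def mean_change(baseline, followup):
    """Mean weight change across participants under BOCF."""
    changes = bocf_change(baseline, followup)
    return sum(changes) / len(changes)

# Two participants: one lost 2 kg, one dropped out (carried forward -> 0 kg)
print(mean_change([80.0, 90.0], [78.0, None]))  # -1.0
```

BOCF is conservative for weight loss estimates, since dropouts contribute zero change by construction.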
Implicit Bias about Weight and Weight Loss Treatment Outcomes
Carels, Robert A; Hinman, Nova G; Hoffmann, Debra A; Burmeister, Jacob M; Borushok, Jessica E.; Marx, Jenna M; Ashrafioun, Lisham
2014-01-01
Objectives The goal of the current study was to examine the impact of a weight loss intervention on implicit bias toward weight, as well as the relationship among implicit bias, weight loss behaviors, and weight loss outcomes. Additionally, of interest was the relationship among these variables when implicit weight bias was measured with a novel assessment that portrays individuals who are thin and obese engaged in both stereotypical and nonstereotypical health-related behaviors. Methods Implicit weight bias (stereotype consistent and stereotype inconsistent), binge eating, self-monitoring, and body weight were assessed among weight loss participants at baseline and post-treatment (N=44) participating in two weight loss programs. Results Stereotype consistent bias significantly decreased from baseline to post-treatment. Greater baseline stereotype consistent bias was associated with lower binge eating and greater self-monitoring. Greater post-treatment stereotype consistent bias was associated with greater percent weight loss. Stereotype inconsistent bias did not change from baseline to post-treatment and was generally unrelated to outcomes. Conclusion Weight loss treatment may reduce implicit bias toward overweight individuals among weight loss participants. Higher post-treatment stereotype consistent bias was associated with a higher percent weight loss, possibly suggesting that losing weight may serve to maintain implicit weight bias. Alternatively, greater implicit weight bias may identify individuals motivated to make changes necessary for weight loss. PMID:25261809
Efficiency of transportation on weighted extended Koch networks
NASA Astrophysics Data System (ADS)
Wu, Zikai; Hou, Baoyu; Zhang, Hongjuan
2013-10-01
In this paper, we propose a family of weighted extended Koch networks based on a class of extended Koch networks. They originate from an r-complete graph, and each node in each r-complete graph of the current generation produces m r-complete graphs whose weighted edges are scaled by a factor h in the subsequent evolutionary step. We study the structural properties of these networks and random walks on them. In more detail, we calculate exactly the average weighted shortest path length (AWSP), average receiving time (ART) and average sending time (AST). Besides, the technique of resistor networks is employed to uncover the relationship between ART and AST on networks with unit weight. In the infinite network order limit, the average weighted shortest path lengths stay bounded with growing network order (0 < h < 1). The closed-form expression of ART shows that it exhibits a sub-linear dependence (0 < h < 1) or linear dependence (h = 1) on network order. On the contrary, the AST behaves super-linearly with the network order. Collectively, all the obtained results show that the efficiency of message transportation on weighted extended Koch networks has a close relation to the network parameters h, m and r. All these findings could shed light on the structure and random walks of general weighted networks.
ERIC Educational Resources Information Center
Nutter, June
1995-01-01
Secondary level physical education teachers can have their students use math concepts while working out on the weight-room equipment. The article explains how students can reinforce math skills while weightlifting by estimating their strength, estimating their power, or calculating other formulas. (SM)
Trapping on Weighted Tetrahedron Koch Networks with Small-World Property
NASA Astrophysics Data System (ADS)
Dai, Meifeng; Xie, Qi; Xi, Lifeng
2013-04-01
In this paper, we present weighted tetrahedron Koch networks depending on a weight factor. According to their self-similar construction, we obtain analytical expressions for the weighted clustering coefficient and the average weighted shortest path (AWSP). The obtained solutions show that weighted tetrahedron Koch networks exhibit the small-world property. Then, we calculate the average receiving time (ART) for weight-dependent walks, which is the sum of the mean first-passage times (MFPTs) for all nodes to be absorbed at the trap located at a hub node. We find that the ART exhibits a sublinear or linear dependence on network order.
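The average weighted shortest path (AWSP) appearing in the two Koch-network abstracts above is simply the mean of weighted distances over all ordered node pairs. A minimal sketch over an explicit adjacency list, using Dijkstra's algorithm (a generic illustration, not the papers' closed-form self-similar derivation):

```python
import heapq

def dijkstra(adj, src):
    """Weighted shortest-path distances from src.
    adj: {node: [(neighbor, weight), ...]} with positive weights."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def average_weighted_shortest_path(adj):
    """Mean weighted distance over all ordered pairs of distinct nodes."""
    nodes = list(adj)
    n = len(nodes)
    total = sum(d for u in nodes
                for v, d in dijkstra(adj, u).items() if v != u)
    return total / (n * (n - 1))

# Complete graph on 4 nodes (a "tetrahedron") with every edge weight 0.5:
k4 = {i: [(j, 0.5) for j in range(4) if j != i] for i in range(4)}
print(average_weighted_shortest_path(k4))  # 0.5
```

On the papers' networks this quantity is obtained analytically from the self-similar construction rather than by explicit search.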
Andersson, Neil; Mitchell, Steven
2006-01-01
Evaluation of mine risk education in Afghanistan used population weighted raster maps as an evaluation tool to assess mine education performance, coverage and costs. A stratified last-stage random cluster sample produced representative data on mine risk and exposure to education. Clusters were weighted by the population they represented, rather than the land area. A "friction surface" hooked the population weight into interpolation of cluster-specific indicators. The resulting population weighted raster contours offer a model of the population effects of landmine risks and risk education. Five indicator levels ordered the evidence from simple description of the population-weighted indicators (level 0), through risk analysis (levels 1-3) to modelling programme investment and local variations (level 4). Using graphic overlay techniques, it was possible to metamorphose the map, portraying the prediction of what might happen over time, based on the causality models developed in the epidemiological analysis. Based on a lattice of local site-specific predictions, each cluster being a small universe, the "average" prediction was immediately interpretable without losing the spatial complexity. PMID:16390549
Perceived weight in youths and risk of overweight or obesity six years later
Duong, Hao T.; Roberts, Robert E.
2014-01-01
Objective To examine the association between perceived overweight in adolescents and the development of overweight or obesity later in life. Methods This paper uses data from a prospective, two-wave cohort study. Participants are 2445 adolescents 11-17 years of age who reported perceived weight at baseline and also had height and weight measured at baseline and at follow-up six years later sampled from managed care groups in a large metropolitan area. Results Youths who perceived themselves as overweight at baseline were approximately 2.5 times as likely to be overweight or obese six years later compared to youths who perceived themselves as average weight (OR= 2.45, 95% CI=1.77-3.39), after adjusting for weight status at baseline, demographic characteristics, major depression, physical activity and dieting behaviors. Those who perceived themselves as skinny were less likely to be overweight or obese later (OR=0.36, 95% CI=0.27-0.49). Conclusions Perceived overweight was associated with overweight or obesity later in life. This relationship was not fully explained by extreme weight control behaviors or major depression. Further research is needed to explore the mechanism involved. PMID:24360137
Average Gait Differential Image Based Human Recognition
Chen, Jinyan; Liu, Jiansheng
2014-01-01
The difference between adjacent frames of human walking contains useful information for human gait identification. Based on this idea, a silhouette-difference-based human gait recognition method, named the average gait differential image (AGDI), is proposed in this paper. The AGDI is generated by the accumulation of the silhouette differences between adjacent frames. The advantage of this method lies in the fact that, as a feature image, it can preserve both the kinetic and static information of walking. Compared to the gait energy image (GEI), the AGDI is better suited to representing the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that the AGDI has better identification and verification performance than the GEI. Compared to PCA, 2DPCA is a more efficient and less memory-consuming feature extraction method in gait-based recognition. PMID:24895648
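The AGDI construction described above, accumulating silhouette differences between adjacent frames, can be sketched as follows (the array layout and function name are assumptions; the abstract does not give implementation details):

```python
import numpy as np

def average_gait_differential_image(frames):
    """Average gait differential image from a walking sequence.

    frames: array-like of shape (T, H, W) holding T binary (0/1)
    silhouette frames. Returns an (H, W) image: the mean absolute
    difference between adjacent frames.
    """
    frames = np.asarray(frames, dtype=float)
    diffs = np.abs(np.diff(frames, axis=0))  # (T-1, H, W) frame differences
    return diffs.mean(axis=0)                # accumulate and normalize
```

Pixels that change often between frames (moving limbs) get large values, while pixels that are always on or always off (the static torso, background) get small ones, which is how the feature keeps both kinetic and static information.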
Quetelet, the average man and medical knowledge.
Caponi, Sandra
2013-01-01
Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du systeme social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine. PMID:23970171
Average power laser experiment (APLE) design
NASA Astrophysics Data System (ADS)
Parazzoli, C. G.; Rodenburg, R. E.; Dowell, D. H.; Greegor, R. B.; Kennedy, R. C.; Romero, J. B.; Siciliano, J. A.; Tong, K.-O.; Vetter, A. M.; Adamski, J. L.; Pistoresi, D. J.; Shoffstall, D. R.; Quimby, D. C.
1992-07-01
We describe the details and the design requirements for the 100 kW CW radio frequency free electron laser at 10 μm to be built at Boeing Aerospace and Electronics Division in Seattle with the collaboration of Los Alamos National Laboratory. APLE is a single-accelerator master-oscillator and power-amplifier (SAMOPA) device. The goal of this experiment is to demonstrate a fully operational RF-FEL at 10 μm with an average power of 100 kW. The approach and wavelength were chosen on the basis of maximum cost effectiveness, including utilization of existing hardware and reasonable risk, and potential for future applications. Current plans call for an initial oscillator power demonstration in the fall of 1994 and full SAMOPA operation by December 1995.
Average deployments versus missile and defender parameters
Canavan, G.H.
1991-03-01
This report evaluates the average number of reentry vehicles (RVs) that could be deployed successfully as a function of missile burn time, RV deployment times, and the number of space-based interceptors (SBIs) in defensive constellations. Leakage estimates of boost-phase kinetic-energy defenses as functions of launch parameters and defensive constellation size agree with integral predictions of near-exact calculations for constellation sizing. The calculations discussed here test more detailed aspects of the interaction. They indicate that SBIs can efficiently remove about 50% of the RVs from a heavy missile attack. The next 30% can be removed with two-fold less effectiveness. The next 10% could double constellation sizes. 5 refs., 7 figs.
Average prime-pair counting formula
NASA Astrophysics Data System (ADS)
Korevaar, Jaap; Riele, Herman Te
2010-04-01
Taking r > 0, let π_{2r}(x) denote the number of prime pairs (p, p+2r) with p ≤ x. The prime-pair conjecture of Hardy and Littlewood (1923) asserts that π_{2r}(x) ~ 2C_{2r} li_2(x) with an explicit constant C_{2r} > 0. There seems to be no good conjecture for the remainders ω_{2r}(x) = π_{2r}(x) − 2C_{2r} li_2(x) that corresponds to Riemann's formula for π(x) − li(x). However, there is a heuristic approximate formula for averages of the remainders ω_{2r}(x) which is supported by numerical results.
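The counting function π_{2r}(x) and the Hardy–Littlewood main term 2C_{2r} li_2(x) can be checked numerically for the twin-prime case r = 1, where C_2 ≈ 0.6601618 is the twin prime constant and li_2(x) = ∫_2^x dt/(ln t)^2. A small sketch with a sieve and a crude midpoint integration (the integration scheme is an illustration, not the paper's method):

```python
from math import log

def twin_prime_pairs(x):
    """pi_2(x): count pairs (p, p+2) of primes with p <= x."""
    sieve = bytearray([1]) * (x + 3)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int((x + 2) ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return sum(1 for p in range(3, x + 1) if sieve[p] and sieve[p + 2])

def hardy_littlewood_estimate(x, C2=0.6601618158, steps=10000):
    """2 * C2 * li_2(x) with li_2 approximated by a midpoint sum."""
    h = (x - 2) / steps
    integral = sum(h / log(2 + (k + 0.5) * h) ** 2 for k in range(steps))
    return 2 * C2 * integral

print(twin_prime_pairs(1000))  # 35 twin prime pairs up to 1000
```

At x = 1000 the main term overshoots the true count noticeably; the paper's subject is precisely the behavior of such remainders ω_{2r}(x) on average over r.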
The balanced survivor average causal effect.
Greene, Tom; Joffe, Marshall; Hu, Bo; Li, Liang; Boucher, Ken
2013-01-01
Statistical analysis of longitudinal outcomes is often complicated by the absence of observable values in patients who die prior to their scheduled measurement. In such cases, the longitudinal data are said to be "truncated by death" to emphasize that the longitudinal measurements are not simply missing, but are undefined after death. Recently, the truncation by death problem has been investigated using the framework of principal stratification to define the target estimand as the survivor average causal effect (SACE), which in the context of a two-group randomized clinical trial is the mean difference in the longitudinal outcome between the treatment and control groups for the principal stratum of always-survivors. The SACE is not identified without untestable assumptions. These assumptions have often been formulated in terms of a monotonicity constraint requiring that the treatment does not reduce survival in any patient, in conjunction with assumed values for mean differences in the longitudinal outcome between certain principal strata. In this paper, we introduce an alternative estimand, the balanced-SACE, which is defined as the average causal effect on the longitudinal outcome in a particular subset of the always-survivors that is balanced with respect to the potential survival times under the treatment and control. We propose a simple estimator of the balanced-SACE that compares the longitudinal outcomes between equivalent fractions of the longest surviving patients between the treatment and control groups and does not require a monotonicity assumption. We provide expressions for the large sample bias of the estimator, along with sensitivity analyses and strategies to minimize this bias. We consider statistical inference under a bootstrap resampling procedure. PMID:23658214
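The balanced-SACE estimator described above compares mean longitudinal outcomes between equivalent fractions of the longest-surviving patients in the treatment and control arms. A deliberately naive sketch of that comparison (the helper names are hypothetical, and this omits the paper's bias analysis, sensitivity analyses, and bootstrap inference):

```python
import numpy as np

def balanced_sace(surv_treat, y_treat, surv_ctrl, y_ctrl, frac=0.5):
    """Naive balanced-SACE-style estimate: difference in mean outcome
    between the longest-surviving fraction `frac` of each arm.

    surv_*: survival times; y_*: longitudinal outcomes, same order.
    """
    def top_fraction_mean(surv, y, frac):
        surv = np.asarray(surv, dtype=float)
        y = np.asarray(y, dtype=float)
        k = max(1, int(round(frac * len(surv))))
        idx = np.argsort(surv)[-k:]  # indices of the k longest survivors
        return y[idx].mean()

    return (top_fraction_mean(surv_treat, y_treat, frac)
            - top_fraction_mean(surv_ctrl, y_ctrl, frac))
```

Because both arms are trimmed to the same survival fraction, the comparison is balanced with respect to survival without invoking the monotonicity assumption the abstract mentions.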
Averaged implicit hydrodynamic model of semiflexible filaments.
Chandran, Preethi L; Mofrad, Mohammad R K
2010-03-01
We introduce a method to incorporate hydrodynamic interaction in a model of semiflexible filament dynamics. Hydrodynamic screening and other hydrodynamic interaction effects lead to nonuniform drag along even a rigid filament, and cause bending fluctuations in semiflexible filaments, in addition to the nonuniform Brownian forces. We develop our hydrodynamics model from a string-of-beads idealization of filaments, and capture hydrodynamic interaction by Stokes superposition of the solvent flow around beads. However, instead of the commonly used first-order Stokes superposition, we do an equivalent of infinite-order superposition by solving for the true relative velocity or hydrodynamic velocity of the beads implicitly. We also avoid the computational cost of the string-of-beads idealization by assuming a single normal, parallel and angular hydrodynamic velocity over sections of beads, excluding the beads at the filament ends. We do not include the end beads in the averaging and solve for them separately instead, in order to better resolve the drag profiles along the filament. A large part of the hydrodynamic drag is typically concentrated at the filament ends. The averaged implicit hydrodynamics method can be easily incorporated into a string-of-rods idealization of semiflexible filaments that was developed earlier by the authors. The earlier model was used to solve the Brownian dynamics of semiflexible filaments, but without hydrodynamic interactions incorporated. We validate our current model at each stage of development, and reproduce experimental observations on the mean-squared displacement of fluctuating actin filaments. We also show how hydrodynamic interaction confines a fluctuating actin filament between two stationary lateral filaments. Finally, preliminary examinations suggest that a large part of the observed velocity in the interior segments of a fluctuating filament can be attributed to induced solvent flow or hydrodynamic screening. PMID:20365783
The entropy in finite N-unit nonextensive systems: The normal average and q-average
NASA Astrophysics Data System (ADS)
Hasegawa, Hideo
2010-09-01
We discuss the Tsallis entropy in finite N-unit nonextensive systems by using the multivariate q-Gaussian probability distribution functions (PDFs) derived by the maximum entropy methods with the normal average and the q-average (q: the entropic index). The Tsallis entropy obtained by the q-average has an exponential N dependence: Sq(N)/N ≃ e^{(1-q)N S1(1)} for large N (≫1/(1-q)>0). In contrast, the Tsallis entropy obtained by the normal average is given by Sq(N)/N ≃ [1/(q-1)N] for large N (≫1/(q-1)>0). The N dependences of the Tsallis entropy obtained by the q- and normal averages are generally quite different, although both results are in fairly good agreement for |q-1| ≪ 1. The validity of the factorization approximation (FA) to PDFs, which has been commonly adopted in the literature, has been examined. We have calculated correlations defined by Cm = ⟨(δxi δxj)^m⟩ - ⟨(δxi)^m⟩⟨(δxj)^m⟩ for i ≠ j, where δxi = xi - ⟨xi⟩ and the bracket ⟨·⟩ stands for the normal and q-averages. The first-order correlation (m = 1) expresses the intrinsic correlation, and higher-order correlations with m ≥ 2 include nonextensivity-induced correlation, whose physical origin is elucidated in the superstatistics.
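The abstract above works with the multivariate q-Gaussian case; as a point of reference, a minimal sketch of the standard discrete Tsallis entropy, Sq = (1 - Σ p_i^q)/(q - 1), which recovers the Boltzmann-Gibbs/Shannon entropy in the q → 1 limit (this is the textbook definition, not the authors' code):

```python
import numpy as np

def tsallis_entropy(p, q):
    """Discrete Tsallis entropy S_q = (1 - sum_i p_i^q) / (q - 1).

    Reduces to the Shannon entropy -sum_i p_i ln p_i as q -> 1.
    """
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # zero-probability states contribute nothing
    if abs(q - 1.0) < 1e-12:
        return float(-np.sum(p * np.log(p)))  # Boltzmann-Gibbs limit
    return float((1.0 - np.sum(p ** q)) / (q - 1.0))

# For a uniform distribution over W states, S_q = (1 - W**(1-q)) / (q - 1).
W, q = 4, 0.8
print(tsallis_entropy(np.full(W, 1.0 / W), q))
```

For a uniform distribution the closed form (1 - W^(1-q))/(q - 1) can be checked directly against the function, which is a convenient sanity test of any nonextensive-entropy implementation.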
Future research in weight bias: What next?
Alberga, Angela S; Russell-Mayhew, Shelly; von Ranson, Kristin M; McLaren, Lindsay; Ramos Salas, Ximena; Sharma, Arya M
2016-06-01
The 2015 Canadian Weight Bias Summit disseminated the newest research advances and brought together 40 experts, stakeholders, and policy makers in various disciplines in health, education, and public policy to identify future research directions in weight bias. In this paper we aim to share the results of the Summit as well as encourage international and interdisciplinary research collaborations in weight bias reduction. Consensus emerged on six research areas that warrant further investigation in weight bias: costs, causes, measurement, qualitative research and lived experience, interventions, and learning from other models of discrimination. These discussions highlighted three key lessons that were informed by the Summit, namely: language matters, the voices of people living with obesity should be incorporated, and interdisciplinary stakeholders should be included. PMID:27129601
Ideal Weight and Weight Satisfaction: Association With Health Practices
Ardern, Chris I.; Church, Timothy S.; Hebert, James R.; Sui, Xuemei; Blair, Steven N.
2009-01-01
Evidence suggests that individuals have become more tolerant of higher body weights over time. To investigate this issue further, the authors examined cross-sectional associations among ideal weight, examination year, and obesity as well as the association of ideal weight and body weight satisfaction with health practices among 15,221 men and 4,126 women in the United States. Participants in 1987 reported higher ideal weights than participants in 2001, an effect particularly pronounced from 1987 to 2001 for younger and obese men (85.5 kg to 94.9 kg) and women (62.2 kg to 70.5 kg). For a given body mass index, higher ideal body weights were associated with greater weight satisfaction but lower intentions to lose weight. Body weight satisfaction was subsequently associated with greater walking/jogging, better diet, and lower lifetime weight loss but with less intention to change physical activity and diet or lose weight (P < 0.01). Conversely, body mass index was negatively associated with weight satisfaction (P < 0.01) and was associated with less walking/jogging, poorer diet, and greater lifetime weight loss but with greater intention to change physical activity and diet or lose weight. Although the health implications of these findings are somewhat unclear, increased weight satisfaction, in conjunction with increases in societal overweight/obesity, may result in decreased motivation to lose weight and/or adopt healthier lifestyle behaviors. PMID:19546153
High Average Power, High Energy Short Pulse Fiber Laser System
Messerly, M J
2007-11-13
Recently, continuous-wave fiber laser systems with output powers in excess of 500 W and good beam quality have been demonstrated [1]. High-energy, ultrafast, chirped-pulse fiber laser systems have achieved record output energies of 1 mJ [2]. However, these high-energy systems have not been scaled beyond a few watts of average output power. Fiber laser systems are attractive for many applications because they offer the promise of efficient, compact, robust, turnkey systems. Applications such as cutting, drilling, and materials processing; front-end systems for high-energy pulsed lasers (such as petawatts); and laser-based sources of high-spatial-coherence, high-flux x-rays all require high-energy short pulses, and two of these three applications also require high average power. The challenge in creating a high-energy chirped-pulse fiber laser system is to find a way to scale the output energy while avoiding nonlinear effects and maintaining good beam quality in the amplifier fiber. To this end, our 3-year LDRD program sought to demonstrate a high-energy, high-average-power fiber laser system. This work included exploring designs of large-mode-area optical fiber amplifiers for high-energy systems as well as understanding the issues associated with chirped-pulse amplification in optical fiber amplifier systems.
NASA Astrophysics Data System (ADS)
Soltanzadeh, I.; Azadi, M.; Vakili, G. A.
2011-07-01
Using Bayesian Model Averaging (BMA), an attempt was made to obtain calibrated probabilistic numerical forecasts of 2-m temperature over Iran. The ensemble employs three limited-area models (WRF, MM5, and HRM), with WRF used in five different configurations. Initial and boundary conditions for MM5 and WRF are obtained from the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS); for HRM, the initial and boundary conditions come from the analysis of the Global Model Europe (GME) of the German Weather Service. The resulting seven-member ensemble was run for a period of 6 months (from December 2008 to May 2009) over Iran. The 48-h raw ensemble outputs were calibrated using the BMA technique for 120 days, using a 40-day training sample of forecasts and the corresponding verification data. The calibrated probabilistic forecasts were assessed using rank histograms and attribute diagrams. Results showed that application of BMA improved the reliability of the raw ensemble. Using the weighted ensemble mean forecast as a deterministic forecast, it was found that the deterministic-style BMA forecasts usually performed better than the best member's deterministic forecast.
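The core of the BMA calibration described above can be sketched in a few lines: the predictive density is a weights-weighted mixture of kernels centered on the member forecasts, and the "deterministic-style" forecast is the weighted ensemble mean. This is a minimal illustration with Gaussian kernels and hypothetical member values and weights, not the study's fitted model (in practice the weights and kernel spread are estimated from the training sample, e.g. by EM):

```python
import numpy as np

def bma_predictive_pdf(y, forecasts, weights, sigma):
    """BMA predictive density at y: a mixture of normal kernels,
    each centered on one member forecast and weighted by its BMA weight."""
    f = np.asarray(forecasts, dtype=float)
    w = np.asarray(weights, dtype=float)
    kernels = np.exp(-0.5 * ((y - f) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return float(np.sum(w * kernels))

def bma_deterministic(forecasts, weights):
    """Weighted ensemble mean, used as the deterministic-style BMA forecast."""
    return float(np.dot(weights, forecasts))

# Hypothetical 2-m temperature forecasts (deg C) from a 7-member ensemble.
members = [21.0, 22.5, 20.2, 23.1, 21.8, 22.0, 20.5]
w = [0.25, 0.20, 0.15, 0.15, 0.10, 0.10, 0.05]  # BMA weights sum to 1
print(bma_deterministic(members, w))
```

The mixture form is what improves reliability: it spreads probability according to how much each member has been trusted over the training window, rather than treating all members as equally likely.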
A hierarchical Bayesian model averaging framework for groundwater prediction under uncertainty.
Chitsazan, Nima; Tsai, Frank T-C
2015-01-01
Groundwater prediction models are subject to various sources of uncertainty. This study introduces a hierarchical Bayesian model averaging (HBMA) method to segregate and prioritize sources of uncertainty in a hierarchical structure and conduct BMA for concentration prediction. A BMA tree of models is developed to understand the impact of individual sources of uncertainty and the propagation of uncertainty to model predictions. HBMA evaluates the relative importance of different modeling propositions at each level in the BMA tree through the model weights. The HBMA method is applied to chloride concentration prediction for the "1,500-foot" sand of the Baton Rouge area, Louisiana, from 2005 to 2029. The groundwater head data from 1990 to 2004 are used for model calibration. Four sources of uncertainty are considered, resulting in 180 flow and transport models for concentration prediction. The results show that the prediction variances of concentration from uncertain model elements are much higher than the prediction variance from uncertain model parameters. The HBMA method is able to quantify the contributions of individual sources of uncertainty to the total uncertainty. PMID:24890644
NASA Astrophysics Data System (ADS)
Chitsazan, Nima; Nadiri, Ata Allah; Tsai, Frank T.-C.
2015-09-01
This study adopts a hierarchical Bayesian model averaging (HBMA) method to analyze prediction uncertainty resulting from uncertain components in artificial neural networks (ANNs). The HBMA is an ensemble method for prediction and is used to segregate the sources of model structure uncertainty in ANNs and investigate their variance contributions to total prediction variance. Specific sources of uncertainty considered in ANNs include the uncertainty in neural network weights and biases (model parameters), uncertainty in selecting an activation function for the hidden layer, and uncertainty in selecting the number of hidden layer nodes (model structure). Prediction uncertainties due to uncertain inputs and ANN model parameters are represented by within-model variance. Prediction uncertainties due to the uncertain activation function and uncertain number of hidden layer nodes are represented by between-model variance. The method is demonstrated through a study that employs ANNs to predict fluoride concentration in the aquifers of the Maku area, Azarbaijan, Iran. The results show that uncertain inputs and ANN model parameters produce the most prediction variance, followed by the prediction variances from the uncertain number of hidden layer nodes and uncertain activation function.
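The within-model/between-model split used in the two HBMA abstracts above follows the law of total variance: the total predictive variance is the weighted average of member variances (within-model) plus the weighted spread of member means around the ensemble mean (between-model). A minimal sketch of that decomposition, with illustrative numbers (not the studies' data):

```python
import numpy as np

def variance_decomposition(means, variances, weights):
    """Law of total variance for a weighted model ensemble.

    within  = weighted average of per-model predictive variances
    between = weighted variance of per-model predictive means
    total   = within + between
    """
    m = np.asarray(means, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize model weights
    within = float(np.sum(w * v))
    m_bar = float(np.sum(w * m))
    between = float(np.sum(w * (m - m_bar) ** 2))
    return within, between, within + between

# Two equally weighted models with differing mean predictions:
print(variance_decomposition([1.0, 3.0], [0.5, 0.5], [0.5, 0.5]))
```

In the HBMA tree this decomposition is applied level by level, so each uncertain proposition (activation function, number of nodes, etc.) contributes its own between-model term.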
Precipitation interpolation in mountainous areas
NASA Astrophysics Data System (ADS)
Kolberg, Sjur
2015-04-01
Different precipitation interpolation techniques and external drift covariates are tested and compared in a 26,000 km2 mountainous area in Norway, using daily data from 60 stations. The main method of assessment is cross-validation. Annual precipitation in the area varies from below 500 mm to more than 2000 mm. The data were corrected for wind-driven undercatch according to operational standards. While temporal evaluation produces seemingly acceptable at-station correlation values (on average around 0.6), the average daily spatial correlation is less than 0.1. When bias is also penalised, Nash-Sutcliffe R2 values are negative for spatial correspondence, and around 0.15 for temporal. Despite largely violated assumptions, plain kriging produces better results than simple inverse distance weighting. More surprisingly, the presumed 'worst-case' benchmark of no interpolation at all, simply averaging all 60 stations for each day, actually outperformed the standard interpolation techniques. For logistic reasons, high altitudes are under-represented in the gauge network. The possible effect of this was investigated by (a) fitting a precipitation lapse rate as an external drift, and (b) applying a linear model of orographic enhancement (Smith and Barstad, 2004). These techniques improved the results only marginally. The gauge density in the region is one gauge per 433 km2, higher than the overall density of the Norwegian national network. Admittedly, the cross-validation technique reduces the effective gauge density, but the results still suggest that we are far from able to provide hydrological models with adequate data for the main driving force.
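The comparison above, inverse distance weighting versus the all-station-mean "no interpolation" benchmark under leave-one-out cross-validation, can be sketched compactly. This is an illustrative reimplementation under simplified assumptions (2-D Euclidean distances, no undercatch correction, no kriging), not the study's code:

```python
import numpy as np

def idw(xy_obs, values, xy_target, power=2.0):
    """Inverse-distance-weighted estimate at a single target point."""
    d = np.linalg.norm(xy_obs - xy_target, axis=1)
    if np.any(d == 0):
        return float(values[np.argmin(d)])  # exact hit on a gauge
    w = d ** -power
    return float(np.sum(w * values) / np.sum(w))

def loo_errors(xy, values):
    """Leave-one-out cross-validation: predict each gauge from all the others,
    for both IDW and the all-station-mean benchmark."""
    n = len(values)
    errs_idw, errs_mean = [], []
    for i in range(n):
        mask = np.arange(n) != i
        errs_idw.append(idw(xy[mask], values[mask], xy[i]) - values[i])
        errs_mean.append(values[mask].mean() - values[i])  # 'no interpolation'
    return np.array(errs_idw), np.array(errs_mean)
```

Comparing e.g. the root-mean-square of the two error vectors over many days reproduces the kind of benchmark test the abstract describes.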
Models for predicting objective function weights in prostate cancer IMRT
Boutilier, Justin J.; Lee, Taewoo; Craig, Tim; Sharpe, Michael B.; Chan, Timothy C. Y.
2015-04-15
Purpose: To develop and evaluate the clinical applicability of advanced machine learning models that simultaneously predict multiple optimization objective function weights from patient geometry for intensity-modulated radiation therapy of prostate cancer. Methods: A previously developed inverse optimization method was applied retrospectively to determine optimal objective function weights for 315 treated patients. The authors used an overlap volume ratio (OV) of bladder and rectum for different PTV expansions and overlap volume histogram slopes (OVSR and OVSB for the rectum and bladder, respectively) as explanatory variables that quantify patient geometry. Using the optimal weights as ground truth, the authors trained and applied three prediction models: logistic regression (LR), multinomial logistic regression (MLR), and weighted K-nearest neighbor (KNN). The population average of the optimal objective function weights was also calculated. Results: The OV at 0.4 cm and OVSR at 0.1 cm features were found to be the most predictive of the weights. The authors observed comparable performance (i.e., no statistically significant difference) between LR, MLR, and KNN methodologies, with LR appearing to perform the best. All three machine learning models outperformed the population average by a statistically significant amount over a range of clinical metrics including bladder/rectum V53Gy, bladder/rectum V70Gy, and dose to the bladder, rectum, CTV, and PTV. When comparing the weights directly, the LR model predicted bladder and rectum weights that had, on average, a 73% and 74% relative improvement over the population average weights, respectively. The treatment plans resulting from the LR weights had, on average, a rectum V70Gy that was 35% closer to the clinical plan and a bladder V70Gy that was 29% closer, compared to the population average weights. Similar results were observed for all other clinical metrics. Conclusions: The authors demonstrated that the KNN and MLR
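The weighted KNN model in the study above predicts a patient's objective-function weight vector as a distance-weighted average of the weights of geometrically similar previously treated patients. A minimal sketch of that idea, with hypothetical feature and weight arrays standing in for the OV/OVS features and inverse-optimized weights (not the authors' implementation):

```python
import numpy as np

def weighted_knn_predict(X_train, Y_train, x, k=5, eps=1e-9):
    """Distance-weighted K-nearest-neighbor regression.

    X_train: (n, d) geometry features (e.g., overlap-volume metrics)
    Y_train: (n, m) target vectors (e.g., objective-function weights)
    Returns the inverse-distance-weighted mean of the k nearest targets.
    """
    d = np.linalg.norm(np.asarray(X_train, float) - np.asarray(x, float), axis=1)
    idx = np.argsort(d)[:k]          # k nearest training patients
    w = 1.0 / (d[idx] + eps)         # closer neighbors get larger weight
    w = w / w.sum()
    return w @ np.asarray(Y_train, float)[idx]

# Hypothetical 1-D feature with 2-component weight vectors:
X = np.array([[0.0], [1.0], [2.0]])
Y = np.array([[0.0, 10.0], [1.0, 11.0], [2.0, 12.0]])
print(weighted_knn_predict(X, Y, np.array([0.5]), k=2))
```

The population-average baseline the authors compare against corresponds to `Y_train.mean(axis=0)`, i.e., ignoring patient geometry entirely.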
Establishing weights of members in a multi-model ensemble
NASA Astrophysics Data System (ADS)
Murdi Hartanto, Isnaeni; van Andel, Schalk Jan
2016-04-01
In recent years, multi-model ensemble methods have been utilized in hydrology to integrate several model outputs to simulate and predict events. Apart from using the ensemble as multiple predictions of equal weight or probability, a weighting scheme can be applied to improve the probability density function and derivatives such as the ensemble mean. The weighting scheme can be static or dynamic. A multi-model ensemble of discharge simulations for the Rijnland water system in the Netherlands was processed using several weighting schemes. The ensemble was constructed using multiple catchment characteristics and forcing data sources available for the area, resulting in 24 members. The first weighting scheme used equal weights. The second was a static scheme using the relative historic performance of a member as its weight; performance metrics, i.e., bias and Nash-Sutcliffe efficiency (NSE), were used. Dynamic weighting used the previous day's relative performance to establish the weights of the members: first, the error (distance) between simulated and observed discharge was used; second, the trend of the previous day's simulated discharge was used, giving zero weight to members with the wrong trend. For the static weighting, results showed that simple equal weighting already gave satisfactory results. The scheme with previous-year performance gave only a small improvement to the ensemble mean compared to the mean with uniform weights, as did the weighting using combined performance metrics. The dynamic weighting using the previous-day error resulted in stronger improvements: giving zero weight to the half of the members with high error resulted in a significant improvement of the ensemble mean NSE. The weight based on the trend, however, improved the ensemble mean only slightly compared to equal weighting. Note that part of these results may be specific to the case-study water system of Rijnland, which is a highly controlled water system.
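The two dynamic schemes described above, inverse previous-day error and trend filtering, are simple to state in code. This is a hedged sketch of those weighting rules under stated assumptions (the error-to-weight mapping and the equal-weight fallback when every member's trend is wrong are illustrative choices, not necessarily the authors'):

```python
import numpy as np

def inverse_error_weights(prev_errors, eps=1e-6):
    """Dynamic weights from previous-day absolute error:
    a smaller error yields a larger weight; weights sum to 1."""
    w = 1.0 / (np.abs(np.asarray(prev_errors, float)) + eps)
    return w / w.sum()

def trend_filtered_weights(prev_sim_trends, prev_obs_trend):
    """Zero weight for members whose previous-day simulated trend sign
    disagrees with the observed trend; remaining members share equal weight."""
    ok = np.sign(np.asarray(prev_sim_trends, float)) == np.sign(prev_obs_trend)
    if not ok.any():
        # Fallback (assumed here): revert to equal weights if no member agrees.
        return np.full(len(prev_sim_trends), 1.0 / len(prev_sim_trends))
    w = ok.astype(float)
    return w / w.sum()

# Three members; observed discharge was rising (positive trend):
print(trend_filtered_weights([1.2, -0.4, 0.7], 0.5))
```

The weighted ensemble mean is then simply the dot product of these weights with the member forecasts for the next day.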
Particle sizing by weighted measurements of scattered light
NASA Technical Reports Server (NTRS)
Buchele, Donald R.
1988-01-01
A description is given of a measurement method, applicable to a polydispersion of particles, in which the intensity of scattered light at any angle is weighted by a factor proportional to that angle. Four angles are then determined at which the weighted intensity is four fractions of the maximum intensity. These yield four characteristic diameters: the volume/area mean diameter (D32, the Sauter mean), the volume/diameter mean diameter (D31), and the diameters at cumulative volume fractions of 0.5 (Dv0.5, the volume median) and 0.75 (Dv0.75). They also yield the volume dispersion of diameters. Mie scattering computations show that an average diameter less than three micrometers cannot be accurately measured. The results are relatively insensitive to extraneous background light and to the nature of the diameter distribution. Also described is an experimental method of verifying the conclusions by using two microscope slides coated with polystyrene microspheres to simulate the particles and the background.
Microstructural effects on the average properties in porous battery electrodes
NASA Astrophysics Data System (ADS)
García-García, Ramiro; García, R. Edwin
2016-03-01
A theoretical framework is formulated to analytically quantify the effects of the microstructure on the average properties of porous electrodes, including reactive area density and through-thickness tortuosity, as observed in experimentally determined tomographic sections. The proposed formulation includes microstructural non-idealities but also captures the well-known perfectly spherical limit. Results demonstrate that, in the absence of any particle alignment, the through-thickness Bruggeman exponent α reaches an asymptotic value of α ∼ 2/3 as the shape of the particles becomes increasingly prolate (needle- or fiber-like). In contrast, the Bruggeman exponent diverges as the shape of the particles becomes increasingly oblate, regardless of the degree of particle alignment. For aligned particles, tortuosity can be dramatically suppressed, e.g., α → 1/10 for ra → 1/10 and MRD ∼ 40. Particle size polydispersity impacts the porosity-tortuosity relation when the average particle size is comparable to the thickness of the electrode layers. Electrode reactivity density can be arbitrarily increased as the particles become increasingly oblate, but asymptotically reaches a minimum value as the particles become increasingly prolate. In the limit of a porous electrode comprised of fiber-like particles, the area density decreases by 24% with respect to a distribution of perfectly spherical particles.
Aubuchon, Mira; Liu, Ying; Petroski, Gregory F; Thomas, Tom R; Polotsky, Alex J
2016-08-01
What is the impact of intentional weight loss and regain on serum androgens in women? We conducted an ancillary analysis of prospectively collected samples from a randomized controlled trial. The trial involved supervised 10% weight loss (8.5 kg on average) with diet and exercise over 4-6 months, followed by supervised intentional regain of 50% of the lost weight (4.6 kg on average) over 4-6 months. Participants were randomized prior to the partial weight-regain component to either continuation or cessation of endurance exercise. The analytic sample included 30 obese premenopausal women (mean age 40 ± 5.9 years, mean baseline body mass index (BMI) 32.9 ± 4.2 kg/m(2)) with metabolic syndrome. We evaluated sex hormone binding globulin (SHBG), total testosterone (T), free androgen index (FAI), and high molecular weight adiponectin (HMWAdp). Insulin, homeostasis model assessment (HOMA), quantitative insulin sensitivity check index (QUICKI), and visceral adipose tissue (VAT) measured in the original trial were reanalyzed for the current analytic sample. Insulin, HOMA, and QUICKI improved with weight loss and were maintained despite weight regain. Log-transformed SHBG significantly increased from baseline to weight loss and then significantly decreased with weight regain. LogFAI and logVAT similarly decreased with weight loss and then increased with weight regain. No changes were found in logT and logHMWAdp. There was no significant difference between the exercise groups in any tested parameter. SHBG showed prominent sensitivity to body mass fluctuations, as its reduction with controlled intentional weight regain showed an inverse relationship to VAT and occurred despite stable HMWAdp and sustained improvements in insulin resistance. FAI showed changes opposite to SHBG, while T did not change significantly with weight. Continued exercise during weight regain did not appear to impact these findings. PMID:27192090
Gain weighted eigenspace assignment
NASA Technical Reports Server (NTRS)
Davidson, John B.; Andrisani, Dominick, II
1994-01-01
This report presents the development of the gain weighted eigenspace assignment methodology. This provides a designer with a systematic methodology for trading off eigenvector placement versus gain magnitudes, while still maintaining desired closed-loop eigenvalue locations. This is accomplished by forming a cost function composed of a scalar measure of error between desired and achievable eigenvectors and a scalar measure of gain magnitude, determining analytical expressions for the gradients, and solving for the optimal solution by numerical iteration. For this development the scalar measure of gain magnitude is chosen to be a weighted sum of the squares of all the individual elements of the feedback gain matrix. An example is presented to demonstrate the method. In this example, solutions yielding achievable eigenvectors close to the desired eigenvectors are obtained with significant reductions in gain magnitude compared to a solution obtained using a previously developed eigenspace (eigenstructure) assignment method.
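The cost function described above trades eigenvector placement error against gain magnitude. A minimal sketch of evaluating that cost, assuming the gain-magnitude term is the weighted sum of squared feedback-gain elements stated in the report (the gradient computation and numerical iteration are omitted here):

```python
import numpy as np

def gwea_cost(V_des, V_ach, K, W):
    """Gain weighted eigenspace assignment cost:

    J = sum_i ||v_des_i - v_ach_i||^2  (eigenvector placement error)
      + sum_jk W_jk * K_jk^2           (weighted squared gain elements)

    V_des, V_ach: columns are desired/achievable closed-loop eigenvectors.
    K: feedback gain matrix; W: per-element (nonnegative) gain weights.
    """
    ev_error = float(np.sum(np.abs(np.asarray(V_des) - np.asarray(V_ach)) ** 2))
    gain_term = float(np.sum(np.asarray(W) * np.asarray(K, float) ** 2))
    return ev_error + gain_term

# Illustrative 2-state example: perfect eigenvector match, unit gains/weights.
print(gwea_cost(np.eye(2), np.eye(2), np.ones((2, 2)), np.ones((2, 2))))
```

Increasing individual entries of W penalizes the corresponding gain elements more heavily, which is the designer's knob for trading eigenvector fidelity against gain magnitude while the eigenvalues stay fixed.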