Wieczorek, Michael E.
2014-01-01
This digital data release consists of seven data files of soil attributes for the United States and the District of Columbia. The files are derived from the Natural Resources Conservation Service's (NRCS) Soil Survey Geographic database (SSURGO). The data files can be linked to the raster datasets of soil mapping unit identifiers (MUKEY) available through the NRCS's Gridded Soil Survey Geographic (gSSURGO) database (http://www.nrcs.usda.gov/wps/portal/nrcs/detail/soils/survey/geo/?cid=nrcs142p2_053628). The associated files, named DRAINAGECLASS, HYDRATING, HYDGRP, HYDRICCONDITION, LAYER, TEXT, and WTDEP, are area- and depth-weighted average values for selected soil characteristics from the SSURGO database for the conterminous United States and the District of Columbia. The SSURGO tables were acquired from the NRCS on March 5, 2014. The soil characteristic in the DRAINAGECLASS table is drainage class (DRNCLASS), which identifies the natural drainage conditions of the soil and refers to the frequency and duration of wet periods. The soil characteristic in the HYDRATING table is hydric rating (HYDRATE), a yes/no field that indicates whether or not a map unit component is classified as a "hydric soil". The soil characteristics in the HYDGRP table are the percentages for each hydrologic group per MUKEY. The soil characteristic in the HYDRICCONDITION table is hydric condition (HYDCON), which describes the natural condition of the soil component. The soil characteristics in the LAYER table are available water capacity (AVG_AWC), bulk density (AVG_BD), saturated hydraulic conductivity (AVG_KSAT), vertical saturated hydraulic conductivity (AVG_KV), soil erodibility factor (AVG_KFACT), porosity (AVG_POR), field capacity (AVG_FC), the soil fraction passing a number 4 sieve (AVG_NO4), the soil fraction passing a number 10 sieve (AVG_NO10), the soil fraction passing a number 200 sieve (AVG_NO200), and organic matter (AVG_OM). The soil characteristics in the TEXT table are
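As a toy illustration of the depth-weighted averaging used to collapse per-horizon soil attributes (such as AVG_AWC) into one value per profile, here is a minimal Python sketch; the layer structure, field layout, and numbers are illustrative, not the actual SSURGO schema.

```python
# Hypothetical sketch of depth-weighted averaging over soil horizons;
# not the actual SSURGO processing code or schema.

def depth_weighted_average(horizons):
    """Average an attribute over soil horizons, weighting each
    horizon by its thickness (bottom depth minus top depth)."""
    total_thickness = sum(bot - top for top, bot, _ in horizons)
    if total_thickness == 0:
        return None
    return sum((bot - top) * value for top, bot, value in horizons) / total_thickness

# Three horizons: (top_cm, bottom_cm, available_water_capacity)
layers = [(0, 20, 0.15), (20, 50, 0.12), (50, 100, 0.10)]
awc = depth_weighted_average(layers)
print(awc)  # thickness-weighted AWC for the profile: 0.116
```

The same pattern, with polygon areas as weights, gives the area-weighted step of the aggregation.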
Average Weighted Receiving Time of Weighted Tetrahedron Koch Networks
NASA Astrophysics Data System (ADS)
Dai, Meifeng; Zhang, Danping; Ye, Dandan; Zhang, Cheng; Li, Lei
2015-07-01
We introduce weighted tetrahedron Koch networks with infinite weight factors, which generalize those with finite weight factors. The notion of weighted time is first defined in this work. The mean weighted first-passage time (MWFPT) and the average weighted receiving time (AWRT) are then defined in terms of weighted time. We study the AWRT under the weight-dependent walk. Results show that the AWRT for a nontrivial weight factor sequence grows sublinearly with the network order. To investigate the origin of this sublinearity, the average receiving time (ART) is discussed for four cases.
Asymmetric network connectivity using weighted harmonic averages
NASA Astrophysics Data System (ADS)
Morrison, Greg; Mahadevan, L.
2011-02-01
We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph using a simple weighted harmonic average of connectivity, that is, a real-valued Generalized Erdös Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and use our measure to study some simple analytically tractable networks. We show how this might be used to look at asymmetry in authorship networks such as those that inspired the integer Erdös numbers in mathematical coauthorship. We also show the utility of our approach by devising a ratings scheme that we apply to data from the Netflix Prize, and find a significant improvement using our method over a baseline.
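The building block of the measure is the weighted harmonic average itself; the tiny sketch below shows how it emphasizes strongly connected, nearby neighbors. This is only the averaging primitive, not the paper's actual GEN recursion, and the numbers are illustrative.

```python
# Weighted harmonic mean: the averaging primitive behind the GEN.
# This is a toy illustration, not the paper's exact recursion.

def weighted_harmonic_mean(values, weights):
    """Harmonic mean of `values` with the given nonnegative weights."""
    return sum(weights) / sum(w / v for v, w in zip(values, weights))

# A node sees two neighbors at "distances" 1 and 4, with collaboration
# strengths 3 and 1; the harmonic average is pulled toward the close,
# strongly weighted neighbor.
print(weighted_harmonic_mean([1.0, 4.0], [3.0, 1.0]))  # about 1.23
```

Because each node averages over its own neighborhood, the resulting closeness between two nodes need not be symmetric, which is the asymmetry the paper exploits.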
NASA Astrophysics Data System (ADS)
Wu, Zikai; Hou, Baoyu; Zhang, Hongjuan; Jin, Feng
2014-04-01
Deterministic network models have been attractive media for discussing how dynamical processes depend on network structural features. On the other hand, the heterogeneity of weights affects dynamical processes taking place on networks. In this paper, we present a family of weighted expanded Koch networks based on Koch networks. They originate from an r-polygon, and in each subsequent evolutionary step every node of the current generation produces m r-polygons that include the node, with the weighted edges scaled by a factor w. We derive closed-form expressions for the average weighted shortest path length (AWSP). In large networks, the AWSP stays bounded as the network order grows (0 < w < 1). Then, we focus on a special random walk and a trapping issue on the networks. In more detail, we calculate exactly the average receiving time (ART). The ART exhibits a sub-linear dependence on network order (0 < w < 1), which implies that nontrivial weighted expanded Koch networks are more efficient than un-weighted expanded Koch networks in receiving information. Besides, the efficiency of receiving information at hub nodes also depends on the parameters m and r. These findings may pave the way for controlling information transportation on general weighted networks.
Dynamic consensus estimation of weighted average on directed graphs
NASA Astrophysics Data System (ADS)
Li, Shuai; Guo, Yi
2015-07-01
Recent applications call for distributed weighted average estimation over sensor networks, where sensor measurement accuracy or environmental conditions need to be taken into consideration in the final consensus group decision. In this paper, we propose a new dynamic consensus filter design to estimate the weighted average of sensors' inputs on directed graphs in a distributed manner. Based on recent advances in the field, we modify the existing proportional-integral consensus filter protocol to remove the requirement of bi-directional gain exchange between neighbouring sensors, so that the algorithm works for directed graphs where bi-directional communications are not possible. To compensate for the asymmetric structure introduced by this removal, sufficient gain conditions are obtained for the filter protocols to guarantee convergence. It is rigorously proved that the proposed filter protocol converges to the weighted average of constant inputs asymptotically, and to the weighted average of time-varying inputs with a bounded error. Simulations verify the effectiveness of the proposed protocols.
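As a rough sketch of the underlying idea (not the paper's proportional-integral protocol), a weighted average can be recovered by running plain consensus separately on the weighted inputs and on the weights, then taking the ratio. This toy version assumes an undirected, connected graph; handling directed graphs without bi-directional exchange is precisely the paper's contribution.

```python
# Toy weighted-average consensus via ratio consensus on an undirected
# graph. Gains, topology, and step count are illustrative assumptions.

def weighted_average_consensus(adj, inputs, weights, eps=0.1, steps=2000):
    n = len(inputs)
    num = [w * u for w, u in zip(weights, inputs)]   # consensus on w_i * u_i
    den = list(weights)                              # consensus on w_i
    for _ in range(steps):
        new_num = num[:]
        new_den = den[:]
        for i in range(n):
            for j in adj[i]:
                new_num[i] += eps * (num[j] - num[i])
                new_den[i] += eps * (den[j] - den[i])
        num, den = new_num, new_den
    return [a / b for a, b in zip(num, den)]

# Path graph 0-1-2; node 1's sensor is twice as trustworthy.
adj = {0: [1], 1: [0, 2], 2: [1]}
est = weighted_average_consensus(adj, [1.0, 2.0, 4.0], [1.0, 2.0, 1.0])
print(est)  # each entry approaches (1*1 + 2*2 + 1*4) / 4 = 2.25
```

Each node ends up holding the same weighted average using only neighbor-to-neighbor exchanges.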
Global Rotation Estimation Using Weighted Iterative Lie Algebraic Averaging
NASA Astrophysics Data System (ADS)
Reich, M.; Heipke, C.
2015-08-01
In this paper we present an approach to weighted rotation averaging that estimates absolute rotations from the relative rotations between pairs of images in a set of multiple overlapping images. The solution does not depend on initial values for the unknown parameters and is robust against outliers. Our approach is one part of a solution for a global image orientation. Because relative rotations are often not free from outliers, we use the redundancy in the available pairwise relative rotations and present a novel graph-based algorithm to detect and eliminate inconsistent rotations. The remaining relative rotations are input to a weighted least squares adjustment performed in the Lie algebra of the rotation manifold SO(3) to obtain absolute orientation parameters for each image. Weights are determined using prior information derived from the estimation of the relative rotations. Because we average in the Lie algebra of SO(3), no subsequent adaptation of the results is required apart from the lossless projection back to the manifold. We evaluate our approach on synthetic and real data. Our approach is often able to detect and eliminate all outliers from the relative rotations even when very high outlier rates are present. We show that we improve the quality of the estimated absolute rotations by introducing individual weights for the relative rotations based on various indicators. In comparison with the state of the art in recent publications on global image orientation, we achieve the best results on the examined datasets.
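The log/exp machinery that such Lie-algebraic averaging rests on can be sketched with a plain (unweighted) Karcher-mean iteration; the paper's actual method is a weighted least-squares adjustment on top of this, so the code below is only a hedged illustration of the rotation averaging primitive.

```python
import math

# Unweighted Karcher-mean rotation averaging in the Lie algebra of SO(3).
# A sketch of the log/exp primitive, not the paper's weighted adjustment.

def hat(v):
    x, y, z = v
    return [[0, -z, y], [z, 0, -x], [-y, x, 0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

def exp_so3(v):
    """Rodrigues' formula: axis-angle vector -> rotation matrix."""
    theta = math.sqrt(sum(c * c for c in v))
    I = [[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]]
    if theta < 1e-12:
        return I
    K = hat([c / theta for c in v])
    K2 = matmul(K, K)
    s, c = math.sin(theta), 1 - math.cos(theta)
    return [[I[i][j] + s * K[i][j] + c * K2[i][j] for j in range(3)]
            for i in range(3)]

def log_so3(R):
    """Rotation matrix -> axis-angle vector (valid away from theta = pi)."""
    tr = R[0][0] + R[1][1] + R[2][2]
    theta = math.acos(max(-1.0, min(1.0, (tr - 1) / 2)))
    if theta < 1e-12:
        return [0.0, 0.0, 0.0]
    f = theta / (2 * math.sin(theta))
    return [f * (R[2][1] - R[1][2]), f * (R[0][2] - R[2][0]),
            f * (R[1][0] - R[0][1])]

def rotation_mean(rotations, iters=20):
    """Iterate: pull rotations into the tangent space at the current
    mean, average there, and push the update back to the manifold."""
    M = rotations[0]
    for _ in range(iters):
        avg = [0.0, 0.0, 0.0]
        for R in rotations:
            v = log_so3(matmul(transpose(M), R))
            avg = [a + c / len(rotations) for a, c in zip(avg, v)]
        M = matmul(M, exp_so3(avg))
    return M

# Average two rotations about the z-axis by 0.2 and 0.4 rad:
Rs = [exp_so3([0, 0, 0.2]), exp_so3([0, 0, 0.4])]
v = log_so3(rotation_mean(Rs))
print(v)  # close to [0, 0, 0.3]
```

Adding per-rotation weights to the tangent-space average would turn this into a weighted mean, which is the direction the paper takes.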
Scaling of Average Weighted Receiving Time on Double-Weighted Koch Networks
NASA Astrophysics Data System (ADS)
Dai, Meifeng; Ye, Dandan; Hou, Jie; Li, Xingyi
2015-03-01
In this paper, we introduce a model of double-weighted Koch networks based on actual road networks, depending on two weight factors w, r ∈ (0, 1]. The double weights represent the capacity-flowing weight and the cost-traveling weight, respectively. Denote by wFij the capacity-flowing weight connecting nodes i and j, and by wCij the cost-traveling weight connecting nodes i and j. Let wFij be related to the weight factor w, and let wCij be related to the weight factor r. This paper assumes that the walker, at each step, starting from its current node, moves to any of its neighbors with probability proportional to the capacity-flowing weight of the edge linking them. The weighted time for two adjacent nodes is the cost-traveling weight connecting the two nodes. We define the average weighted receiving time (AWRT) on the double-weighted Koch networks. The obtained result shows that in the large network, the AWRT grows as a power-law function of the network order with exponent θ(w,r) = ½ log2(1 + 3wr). We show that the AWRT exhibits a sublinear or linear dependence on network order. Thus, the double-weighted Koch networks are more efficient than classic Koch networks in receiving information.
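Taking the abstract's scaling exponent at face value, a quick numerical check shows the sublinear-versus-linear regimes; this sketch uses only the stated formula.

```python
import math

# AWRT scaling exponent from the abstract: theta(w, r) = (1/2) log2(1 + 3wr).

def awrt_exponent(w, r):
    return 0.5 * math.log2(1 + 3 * w * r)

# At w = r = 1 the exponent is (1/2) log2(4) = 1: linear scaling, as in
# the classic (unweighted) Koch network; smaller w*r gives a sublinear
# exponent.
print(awrt_exponent(1, 1))      # 1.0
print(awrt_exponent(0.5, 0.5))  # about 0.40
```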
Weighted Average Consensus-Based Unscented Kalman Filtering.
Li, Wangyan; Wei, Guoliang; Han, Fei; Liu, Yurong
2016-02-01
In this paper, we investigate consensus-based distributed state estimation problems for a class of sensor networks within the unscented Kalman filter (UKF) framework. The communication status among sensors is represented by a connected undirected graph. A weighted average consensus-based UKF algorithm is developed for estimating the true state of interest, and its estimation error is proven to be bounded in mean square. Finally, the effectiveness of the proposed consensus-based UKF algorithm is validated through a simulation example. PMID:26168453
Modified box dimension and average weighted receiving time on the weighted fractal networks
Dai, Meifeng; Sun, Yanqiu; Shao, Shuxiang; Xi, Lifeng; Su, Weiyi
2015-01-01
In this paper a family of weighted fractal networks, in which the weights of edges are assigned different values according to a certain scale, is studied. For the weighted fractal networks the definition of modified box dimension is introduced, and a rigorous proof of its existence is given. Then, the modified box dimension depending on the weight factor and the number of copies is deduced. We assume that the walker, at each step, moves uniformly from its current node to any of its nearest neighbors. The weighted time for two adjacent nodes is the weight connecting the two nodes. The average weighted receiving time (AWRT) is then defined accordingly. The remarkable result is that in the large network, when the weight factor is larger than the number of copies, the AWRT grows as a power-law function of the network order with exponent equal to the reciprocal of the modified box dimension. This result shows that the efficiency of the trapping process depends on the modified box dimension: the larger the modified box dimension, the more efficient the trapping process. PMID:26666355
Average receiving scaling of the weighted polygon Koch networks with the weight-dependent walk
NASA Astrophysics Data System (ADS)
Ye, Dandan; Dai, Meifeng; Sun, Yanqiu; Shao, Shuxiang; Xie, Qi
2016-09-01
Based on the weighted Koch networks and the self-similarity of fractals, we present a family of weighted polygon Koch networks with a weight factor r (0 < r ≤ 1). We study the average receiving time (ART) under the weight-dependent walk (i.e., the walker moves to any of its neighbors with probability proportional to the weight of the edge linking them), whose key step is to calculate the sum of mean first-passage times (MFPTs) for all nodes to an absorbing hub node. We use a recursive division method to divide the weighted polygon Koch networks in order to calculate the ART scaling more conveniently. We show that the ART scaling exhibits a sublinear or linear dependence on network order. Thus, the weighted polygon Koch networks are more efficient than expanded Koch networks in receiving information. Finally, compared with the results of previous studies (i.e., on Koch networks and weighted Koch networks), we find that our models are more general.
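The key step named above, summing MFPTs of a weight-dependent walk to an absorbing hub, can be illustrated on a toy graph by solving the standard linear system T_i = 1 + Σ_j P_ij T_j with T_hub = 0, where P_ij is proportional to the edge weight. The 3-node triangle below is illustrative, not a Koch network.

```python
# MFPT of a weight-dependent walk to an absorbing hub on a toy weighted
# graph, solved by Gaussian elimination. Illustrative, not a Koch network.

def mfpt_to_hub(weights, hub):
    """Solve T_i = 1 + sum_j P_ij T_j with T_hub = 0, where
    P_ij = w_ij / (node i's total edge weight)."""
    n = len(weights)
    nodes = [i for i in range(n) if i != hub]
    # Build (I - P) T = 1 restricted to the non-hub nodes.
    A, b = [], []
    for i in nodes:
        s = sum(weights[i])
        A.append([(1.0 if i == j else 0.0) - weights[i][j] / s for j in nodes])
        b.append(1.0)
    # Gaussian elimination with partial pivoting.
    m = len(nodes)
    for c in range(m):
        p = max(range(c, m), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        b[c], b[p] = b[p], b[c]
        for r in range(c + 1, m):
            f = A[r][c] / A[c][c]
            b[r] -= f * b[c]
            for k in range(c, m):
                A[r][k] -= f * A[c][k]
    T = [0.0] * m
    for r in range(m - 1, -1, -1):
        T[r] = (b[r] - sum(A[r][k] * T[k] for k in range(r + 1, m))) / A[r][r]
    return dict(zip(nodes, T))

# Triangle with edge weights w01 = 2, w02 = 1, w12 = 1; hub = node 0.
w = [[0, 2, 1], [2, 0, 1], [1, 1, 0]]
T = mfpt_to_hub(w, hub=0)
print(T, sum(T.values()))  # the sum over nodes is what the ART averages
```

On the self-similar Koch networks the papers obtain this sum recursively rather than by direct linear solves, which is what makes the exact scaling tractable.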
Attention Disengagement Difficulties among Average Weight Women Who Binge Eat.
Lyu, Zhenyong; Zheng, Panpan; Jackson, Todd
2016-07-01
In this study, we assessed biases in attention disengagement among average-weight women with binge eating (BE; n = 33) and non-eating-disordered controls (n = 31). Participants engaged in a spatial cueing paradigm task wherein they first observed high-calorie food, low-calorie food, or neutral images and then had to quickly locate targets in either the same or a different location. Within both groups, reaction times (RTs) were longer on valid-cued trials (i.e. target appearing in the location of the preceding cue) than on invalid-cued trials (i.e. targets appearing in a location different from the initial location), reflecting a general inhibition of return (IOR) effect. However, RT findings also indicated that women with BE had significantly more difficulty disengaging from high-calorie food images than did controls, even though neither group had disengagement problems related to other image types. Selective attention disengagement difficulties related to high-calorie food images suggest that increased reward sensitivity to such cues is related to binge-eating risk. Copyright © 2016 John Wiley & Sons, Ltd and Eating Disorders Association. PMID:26856539
47 CFR 65.305 - Calculation of the weighted average cost of capital.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Carriers § 65.305 Calculation of the weighted average cost of capital. (a) The composite weighted average... Commission determines to the contrary in a prescription proceeding, the composite weighted average cost of debt and cost of preferred stock is the composite weight computed in accordance with §...
26 CFR 1.989(b)-1 - Definition of weighted average exchange rate.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 26 Internal Revenue 10 2010-04-01 2010-04-01 false Definition of weighted average exchange rate. 1... average exchange rate. For purposes of section 989(b)(3) and (4), the term “weighted average exchange rate” means the simple average of the daily exchange rates (determined by reference to a qualified source...
Latent-variable approaches to the Jamesian model of importance-weighted averages.
Scalas, L Francesca; Marsh, Herbert W; Nagengast, Benjamin; Morin, Alexandre J S
2013-01-01
The individually importance-weighted average (IIWA) model posits that the contribution of specific areas of self-concept to global self-esteem varies systematically with the individual importance placed on each specific component. Although intuitively appealing, this model has weak empirical support; thus, within the framework of a substantive-methodological synergy, we propose a multiple-item latent approach to the IIWA model as applied to a range of self-concept domains (physical, academic, spiritual self-concepts) and subdomains (appearance, math, verbal self-concepts) in young adolescents from two countries. Tests considering simultaneously the effects of self-concept domains on trait self-esteem did not support the IIWA model. On the contrary, support for a normative group importance model was found, in which importance varied as a function of domains but not individuals. Individuals differentially weight the various components of self-concept; however, the weights are largely determined by normative processes, so that little additional information is gained from individual weightings. PMID:23150198
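The IIWA model's core computation is itself a weighted average; the sketch below shows the arithmetic with made-up scores and importance ratings (the scales and domain names are illustrative, and the abstract's finding is that normative rather than individual weights fit better).

```python
# The individually importance-weighted average (IIWA) arithmetic:
# domain self-concepts averaged with a person's own importance ratings.
# Scores and weights here are illustrative, not study data.

def iiwa(self_concepts, importances):
    return sum(s * w for s, w in zip(self_concepts, importances)) / sum(importances)

# Physical/academic/spiritual self-concepts of 4, 6, 2, rated with
# importance weights 3, 5, 1:
print(iiwa([4, 6, 2], [3, 5, 1]))  # (12 + 30 + 2) / 9, about 4.89
```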
Cohen's Linearly Weighted Kappa Is a Weighted Average of 2 x 2 Kappas
ERIC Educational Resources Information Center
Warrens, Matthijs J.
2011-01-01
An agreement table with [n as an element of N is greater than or equal to] 3 ordered categories can be collapsed into n - 1 distinct 2 x 2 tables by combining adjacent categories. Vanbelle and Albert ("Stat. Methodol." 6:157-163, 2009c) showed that the components of Cohen's weighted kappa with linear weights can be obtained from these n - 1…
76 FR 13580 - Bus Testing; Calculation of Average Passenger Weight and Test Vehicle Weight
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-14
... Federal Register (74 FR 51083) that incorporated brake performance and emissions tests into FTA's bus... Weight Per Person (See, ``Passenger Weight and Inspected Vessel Stability Requirements: Final Rule, 75 FR... Transportation (44 FR 11032). Executive Order 12866 requires agencies to regulate in the ``most...
77 FR 74452 - Bus Testing: Calculation of Average Passenger Weight and Test Vehicle Weight
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-14
... (GVWR) (74 FR 51083, October 5, 2009). The testing procedure simulated a 150 lb. weight for each seated... square feet (76 FR 13850, March 14, 2011). Subsequent to the NPRM, on July 6, 2012, Congress passed the..., Executive Order 13563, the Regulatory Flexibility Act, or the DOT Regulatory Policies and Procedures (44...
Rainfall Estimation Over Tropical Oceans. 1; Area Average Rain Rate
NASA Technical Reports Server (NTRS)
Cuddapah, Prabhakara; Cadeddu, Maria; Meneghini, R.; Short, David A.; Yoo, Jung-Moon; Dalu, G.; Schols, J. L.; Weinman, J. A.
1997-01-01
Multichannel dual-polarization microwave radiometer SSM/I observations over oceans do not contain sufficient information to differentiate quantitatively the rain from other hydrometeors on a scale comparable to the radiometer field of view (approx. 30 km). For this reason we have developed a method to retrieve the average rain rate over a mesoscale grid box of approx. 300 x 300 sq km over the TOGA COARE region, where simultaneous radiometer and radar observations are available for four months (Nov. 92 to Feb. 93). The rain area in the grid box, inferred from the scattering depression due to hydrometeors in the 85-GHz brightness temperature, constitutes a key parameter in this method. Then the spectral and polarization information contained in all the channels of the SSM/I is utilized to deduce a second parameter: the ratio S/E of the scattering index S and the emission index E calculated from the SSM/I data. The rain rate retrieved with this method over the mesoscale area can reproduce the radar-observed rain rate with a correlation coefficient of about 0.85. Furthermore, the monthly total rainfall estimated with this method over that area has an average error of about 15%.
SIMPLE AND WEIGHTED AVERAGING APPROACHES TO SCALING: WHEN CAN SPATIAL CONTEXT BE IGNORED?
Technology Transfer Automated Retrieval System (TEKTRAN)
Scaling from plots to landscapes, landscapes to regions, and regions to the globe based on simple or weighted averaging techniques can be accurate when applied to the appropriate problems. Simple averaging approaches work well when conditions are homogeneous spatially and temporally. For example, ...
Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.
2013-01-01
When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives an overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek
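The failure mode the abstract describes is easy to reproduce: information-criterion model averaging weights are a softmax of -0.5 times the criterion differences, so even modest gaps concentrate nearly all weight on the best model. The sketch below uses the standard AIC-weight formula with made-up criterion values.

```python
import math

# Standard information-criterion averaging weights:
# w_k proportional to exp(-0.5 * (IC_k - IC_min)). Values are illustrative.

def model_weights(criteria):
    best = min(criteria)
    raw = [math.exp(-0.5 * (c - best)) for c in criteria]
    s = sum(raw)
    return [r / s for r in raw]

# Three candidate models whose criterion values differ by only 10 units:
w = model_weights([100.0, 110.0, 120.0])
print([round(x, 4) for x in w])  # nearly all weight on the first model
```

This is the "best model gets ~100% weight" behavior that the paper's total-error covariance Cek is designed to correct.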
NASA Technical Reports Server (NTRS)
Scruggs, T.; Moraguez, M.; Patankar, K.; Fitz-Coy, N.; Liou, J.-C.; Sorge, M.; Huynh, T.
2016-01-01
Debris fragments from the hypervelocity impact testing of DebriSat are being collected and characterized for use in updating existing satellite breakup models. One of the key parameters utilized in these models is the ballistic coefficient of the fragment, which is directly related to its area-to-mass ratio. However, since the attitude of fragments varies during their orbital lifetime, it is customary to use the average cross-sectional area in the calculation of the area-to-mass ratio. The average cross-sectional area is defined as the average of the projected surface areas perpendicular to the direction of motion and has been shown to be equal to one-fourth of the total surface area of a convex object. Unfortunately, numerous fragments obtained from the DebriSat experiment show significant concavity (i.e., shadowing) and thus we have explored alternate methods for computing the average cross-sectional area of the fragments. An imaging system based on the volumetric reconstruction of a 3D object from multiple 2D photographs of the object was developed for use in determining the size characteristic (i.e., characteristic length) of the DebriSat fragments. For each fragment, the imaging system generates N images from varied azimuth and elevation angles and processes them using a space-carving algorithm to construct a 3D point cloud of the fragment. This paper describes two approaches for calculating the average cross-sectional area of debris fragments based on the 3D imager. Approach A utilizes the constructed 3D object to generate equally distributed cross-sectional area projections and then averages them to determine the average cross-sectional area. Approach B utilizes a weighted average of the area of the 2D photographs to directly compute the average cross-sectional area. A comparison of the accuracy and computational needs of each approach is described as well as preliminary results of an analysis to determine the "optimal" number of images needed for
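The convex-body identity cited above (orientation-averaged projected area equals one quarter of the total surface area) can be checked by Monte Carlo; the sketch below does so for a unit cube, whose projected area along a unit direction u is |u_x| + |u_y| + |u_z|. This is only an illustration of the identity, not the DebriSat imaging pipeline.

```python
import math, random

# Monte Carlo check of Cauchy's formula for a unit cube:
# average projected area = (total surface area) / 4 = 6 / 4 = 1.5.

def avg_projected_area_unit_cube(samples=100000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        # Uniform direction on the sphere via normalized Gaussians.
        u = [rng.gauss(0, 1) for _ in range(3)]
        norm = math.sqrt(sum(c * c for c in u))
        total += sum(abs(c) for c in u) / norm
    return total / samples

area = avg_projected_area_unit_cube()
print(area)  # close to 1.5
```

For concave fragments this identity fails, which is why the paper averages actual projections (Approach A) or photograph areas (Approach B) instead.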
Code of Federal Regulations, 2011 CFR
2011-04-01
... weighted-average dumping margins disregarded. 351.106 Section 351.106 Customs Duties INTERNATIONAL TRADE... minimis net countervailable subsidies and weighted-average dumping margins disregarded. (a) Introduction... practice of disregarding net countervailable subsidies or weighted-average dumping margins that were...
Code of Federal Regulations, 2010 CFR
2010-04-01
... weighted-average dumping margins disregarded. 351.106 Section 351.106 Customs Duties INTERNATIONAL TRADE... minimis net countervailable subsidies and weighted-average dumping margins disregarded. (a) Introduction... practice of disregarding net countervailable subsidies or weighted-average dumping margins that were...
NASA Astrophysics Data System (ADS)
Lin, Bangjiang; Li, Yiwei; Zhang, Shihao; Tang, Xuan
2015-10-01
Weighted interframe averaging (WIFA)-based channel estimation (CE) is presented for orthogonal frequency division multiplexing passive optical network (OFDM-PON), in which the CE results of the adjacent frames are directly averaged to increase the estimation accuracy. The effectiveness of WIFA combined with conventional least square, intrasymbol frequency-domain averaging, and minimum mean square error, respectively, is demonstrated through 26.7-km standard single-mode fiber transmission. The experimental results show that the WIFA method with low complexity can significantly enhance transmission performance of OFDM-PON.
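The core of WIFA, as described, is averaging the channel estimates of adjacent frames to suppress estimation noise; the toy sketch below shows that step with uniform weights over a small window. The weighting scheme and two-tap channel are our assumptions, not the paper's exact configuration.

```python
# Toy weighted interframe averaging of per-frame channel estimates.
# Uniform weights and the example channel are illustrative assumptions.

def wifa(ce_frames, weights=None):
    """Combine per-frame channel estimates (lists of equal length)."""
    if weights is None:
        weights = [1.0] * len(ce_frames)
    s = sum(weights)
    n = len(ce_frames[0])
    return [sum(w * f[k] for w, f in zip(weights, ce_frames)) / s
            for k in range(n)]

# Three noisy estimates of a two-tap channel [1.0, 0.5]:
frames = [[1.1, 0.45], [0.9, 0.55], [1.0, 0.5]]
est = wifa(frames)
print(est)  # averages toward [1.0, 0.5]
```

Averaging across frames is effective when the channel is quasi-static over the window, which is the regime the experiment operates in.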
12 CFR 702.105 - Weighted-average life of investments.
Code of Federal Regulations, 2014 CFR
2014-01-01
..., as defined in 17 CFR 270.2a-7, and collective investment funds operated in accordance with short-term investment fund rules set forth in 12 CFR 9.18(b)(4)(ii)(B)(1)-(3), the weighted-average life is defined as... and nonperpetual capital in corporate credit unions, as defined in 12 CFR 704.2, the...
12 CFR 702.105 - Weighted-average life of investments.
Code of Federal Regulations, 2012 CFR
2012-01-01
..., as defined in 17 CFR 270.2a-7, and collective investment funds operated in accordance with short-term investment fund rules set forth in 12 CFR 9.18(b)(4)(ii)(B)(1)-(3), the weighted-average life is defined as... and nonperpetual capital in corporate credit unions, as defined in 12 CFR 704.2, the...
12 CFR 702.105 - Weighted-average life of investments.
Code of Federal Regulations, 2013 CFR
2013-01-01
..., as defined in 17 CFR 270.2a-7, and collective investment funds operated in accordance with short-term investment fund rules set forth in 12 CFR 9.18(b)(4)(ii)(B)(1)-(3), the weighted-average life is defined as... and nonperpetual capital in corporate credit unions, as defined in 12 CFR 704.2, the...
47 CFR 65.305 - Calculation of the weighted average cost of capital.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 3 2011-10-01 2011-10-01 false Calculation of the weighted average cost of capital. 65.305 Section 65.305 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) INTERSTATE RATE OF RETURN PRESCRIPTION PROCEDURES AND METHODOLOGIES...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-28
... Dumping Margin During an Antidumping Investigation; Final Modification, 71 FR 77,722 (December 27, 2006... Measures Concerning Certain Softwood Lumber Products From Canada, 70 FR 22,636 (May 2, 2005). The above... Weighted- Average Dumping Margin During an Antidumping Investigation; Final Modification, 71 FR...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-01
... certain antidumping duty proceedings (75 FR 81533). That proposed rule and proposed modification indicated... International Trade Administration 19 CFR Part 351 RIN 0625-AA87 Antidumping Proceedings: Calculation of the... regarding the calculation of the weighted average dumping margin and antidumping duty assessment rate...
Binary weighted averaging of an ensemble of coherently collected image frames.
MacDonald, Adam; Cain, Stephen; Oxley, Mark
2007-04-01
Recent interest in the collection of remote laser radar imagery has motivated novel systems that process temporally contiguous frames of collected imagery to produce an average image that reduces laser speckle, increases image SNR, decreases the deleterious effects of atmospheric distortion, and enhances image detail. This research seeks an algorithm based on Bayesian estimation theory to select, from an ensemble, those frames that increase spatial resolution compared to simple unweighted averaging of all frames. The resulting binary weighted motion-compensated frame average is compared to the unweighted average using simulated and experimental data collected from a fielded laser vision system. Image resolution is significantly enhanced, as quantified by the estimation of the atmospheric seeing parameter through which the average image was formed. PMID:17405439
Modeling daily average stream temperature from air temperature and watershed area
NASA Astrophysics Data System (ADS)
Butler, N. L.; Hunt, J. R.
2012-12-01
Habitat restoration efforts within watersheds require spatial and temporal estimates of water temperature for aquatic species, especially species that migrate within watersheds at different life stages. Monitoring programs are not able to fully sample all aquatic environments within watersheds under the extreme conditions that determine long-term habitat viability. Under these circumstances a combination of selective monitoring and modeling is required for predicting future geospatial and temporal conditions. This study describes a model that is broadly applicable to different watersheds while using readily available regional air temperature data. Daily water temperature data from thirty-eight gauges with drainage areas from 2 km2 to 2000 km2 in the Sonoma Valley, Napa Valley, and Russian River Valley in California were used to develop, calibrate, and test a stream temperature model. Air temperature data from seven NOAA gauges provided the daily maximum and minimum air temperatures. The model was developed and calibrated using five years of data from the Sonoma Valley at ten water temperature gauges and a NOAA air temperature gauge. The daily average stream temperatures within this watershed were bounded by the preceding maximum and minimum air temperatures, with smaller upstream watersheds being more dependent on the minimum air temperature than the maximum air temperature. The model assumed a linear dependence on maximum and minimum air temperature with a weighting factor, dependent on upstream area, determined by error minimization using observed data. Fitted minimum air temperature weighting factors were consistent over all five years of data for each gauge, and they ranged from 0.75 for upstream drainage areas less than 2 km2 to 0.45 for upstream drainage areas greater than 100 km2. For the calibration data sets within the Sonoma Valley, the average error between the model-estimated daily water temperature and the observed water temperature data ranged from 0.7
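The model described reduces to a convex combination of the preceding minimum and maximum air temperatures, with a minimum-temperature weight that decreases with upstream drainage area (about 0.75 below 2 km2, about 0.45 above 100 km2). The log-linear interpolation between those endpoints in the sketch below is our assumption, not the study's fitted curve.

```python
import math

# Sketch of the abstract's stream temperature model:
# T_stream = a * T_air_min + (1 - a) * T_air_max,
# with the weight a falling with upstream drainage area. The
# interpolation between the two reported endpoints is an assumption.

def min_weight(upstream_km2):
    """Illustrative log-linear interpolation between reported weights."""
    if upstream_km2 <= 2:
        return 0.75
    if upstream_km2 >= 100:
        return 0.45
    frac = (math.log(upstream_km2) - math.log(2)) / (math.log(100) - math.log(2))
    return 0.75 - frac * (0.75 - 0.45)

def stream_temp(t_air_min, t_air_max, upstream_km2):
    a = min_weight(upstream_km2)
    return a * t_air_min + (1 - a) * t_air_max

print(stream_temp(10.0, 30.0, 2))    # small creek: 0.75*10 + 0.25*30 = 15.0
print(stream_temp(10.0, 30.0, 100))  # larger river: 0.45*10 + 0.55*30 = 21.0
```

Small headwater streams thus track the cooler nighttime minimum, while larger rivers track daytime maxima more closely, consistent with the fitted weights.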
Nesterov, V V; Kurenbin, O I; Krasikov, V D; Belenkii, B G
1987-01-01
The problem of preparation of a block copolymer of precise molecular-weight distribution (MWD) and with heterogeneous composition on the basis of gel-permeation chromatography (GPC) data has been investigated. It has been shown that in MWD calculations the distribution f(p) of the composition p in individual GPC fractions should be taken into account. The type of the f(p) functions can be simultaneously established by an independent method, such as use of adsorption-column or thin-layer chromatography sensitive to the composition of the copolymer. It has also been shown that the actual f(p) may be replaced by a corresponding piecewise distribution, of simple form, without decrease in the precision of calculation of the MWD and average molecular weights of most known block copolymers. PMID:18964273
WUATSA: Weighted usable area time series analysis
Franc, G.M.
1995-12-31
As stated in my paper entitled "FISHN-Minimum Flow Selection Made Easy," there continue to exist differences of opinion between environmental resource agencies (Agencies) and power producers in the interpretation of Weighted Usable Area (WUA) versus flow data as a tool for making minimum flow recommendations. WUA-flow curves are developed from Instream Flow Incremental Methodology (IFIM) studies. Each point on a WUA-flow curve defines the usable habitat area created within a bypassed reach, for a specific species and life stage, due to a specified minimum flow being constantly maintained within that reach. In the FISHN paper I discussed the Federal Energy Regulatory Commission's (FERC's) effort to standardize the use of WUA-flow data to assist in minimum flow selection, as proposed in its article entitled "Evaluating Relicense Proposals at the Federal Energy Regulatory Commission." This FERC paper advanced a technique which has subsequently become known as the FARGO method (named after the primary author). The FISHN paper initially critiqued FARGO and then focused discussion on an alternative approach (FISHN), which is an extension to the IFIM methodology.
Model Averaging Methods for Weight Trimming in Generalized Linear Regression Models
Elliott, Michael R.
2012-01-01
In sample surveys where units have unequal probabilities of inclusion, associations between the inclusion probability and the statistic of interest can induce bias in unweighted estimates. This is true even in regression models, where the estimates of the population slope may be biased if the underlying mean model is misspecified or the sampling is nonignorable. Weights equal to the inverse of the probability of inclusion are often used to counteract this bias. Highly disproportional sample designs have highly variable weights; weight trimming reduces large weights to a maximum value, reducing variability but introducing bias. Most standard approaches are ad hoc in that they do not use the data to optimize bias-variance trade-offs. This article uses Bayesian model averaging to create “data driven” weight trimming estimators. We extend previous results for linear regression models (Elliott 2008) to generalized linear regression models, developing robust models that approximate fully-weighted estimators when bias correction is of greatest importance, and approximate unweighted estimators when variance reduction is critical. PMID:23275683
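The ad hoc trimming the abstract contrasts with can be sketched in a few lines: inverse-probability weights are capped at a fixed multiple of the mean weight before computing a weighted estimate. This is an illustrative sketch, not the paper's Bayesian model averaging estimator; the cap value, function names, and data are hypothetical.

```python
def trimmed_weights(inclusion_probs, cap=1.5):
    """Inverse-probability weights, trimmed to at most `cap` times the mean weight."""
    weights = [1.0 / p for p in inclusion_probs]
    mean_w = sum(weights) / len(weights)
    return [min(w, cap * mean_w) for w in weights]

def weighted_mean(values, weights):
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# one unit was sampled with very low inclusion probability
probs = [0.5, 0.5, 0.05]
ws = trimmed_weights(probs)                 # [2.0, 2.0, 20.0] trimmed to [2.0, 2.0, 12.0]
est = weighted_mean([2.0, 4.0, 10.0], ws)   # 8.25
```

Trimming the largest weight reduces its influence on the estimate (and the variance it induces) at the cost of some bias, the trade-off that the data-driven estimators in the article tune rather than fix in advance.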
12 CFR 702.105 - Weighted-average life of investments.
Code of Federal Regulations, 2010 CFR
2010-01-01
..., as defined in 17 CFR 270.2a-7, and collective investment funds operated in accordance with short-term investment fund rules set forth in 12 CFR 9.18(b)(4)(ii)(B)(1)-(3), the weighted-average life is defined as...-in capital and membership capital in corporate credit unions, as defined in 12 CFR 704.2,...
12 CFR 702.105 - Weighted-average life of investments.
Code of Federal Regulations, 2011 CFR
2011-01-01
..., as defined in 17 CFR 270.2a-7, and collective investment funds operated in accordance with short-term investment fund rules set forth in 12 CFR 9.18(b)(4)(ii)(B)(1)-(3), the weighted-average life is defined as...-in capital and membership capital in corporate credit unions, as defined in 12 CFR 704.2,...
Girshick, Ahna R.; Banks, Martin S.
2010-01-01
Depth perception involves combining multiple, possibly conflicting, sensory measurements to estimate the 3D structure of the viewed scene. Previous work has shown that the perceptual system combines measurements using a statistically optimal weighted average. However, the system should only combine measurements when they come from the same source. We asked whether the brain avoids combining measurements when they differ from one another: that is, whether the system is robust to outliers. To do this, we investigated how two slant cues—binocular disparity and texture gradients—influence perceived slant as a function of the size of the conflict between the cues. When the conflict was small, we observed weighted averaging. When the conflict was large, we observed robust behavior: perceived slant was dictated solely by one cue, the other being rejected. Interestingly, the rejected cue was either disparity or texture, and was not necessarily the more variable cue. We modeled the data in a probabilistic framework, and showed that weighted averaging and robustness are predicted if the underlying likelihoods have heavier tails than Gaussians. We also asked whether observers had conscious access to the single-cue estimates when they exhibited robustness and found they did not, i.e. they completely fused despite the robust percepts. PMID:19761341
Weighted Averaging for Calculating Azimuthal Angles and Filtering Love Waves Using S-transforms
NASA Astrophysics Data System (ADS)
Napoli, V.; Russell, D. R.
2015-12-01
The S-transform methodology is based on Stockwell transforms, a form of short-time Fourier transform with a time-domain window defined by a Gaussian function. The Gaussian function has a standard deviation equal to the frequency of interest. Applying the transform to multiple frequencies of interest results in a time/frequency spectrogram, which has the advantage of being simply invertible back to the time domain. This allows for the calculation of instantaneous frequency/time phase and amplitude measurements, which makes 2D signal filtration of surface waves possible. By solving for the transverse angle of propagation of narrow-band filtered Love waves at a range of periods (8-25 s), we calculate a vector of possible azimuths, one at each period. We then average over all the bands of interest to determine the mean angle of propagation. To avoid using unreliable low signal-to-noise ratio (SNR) azimuth estimates, we use an SNR-weighted average to more accurately reflect the overall signal propagation azimuth. We then use the mean signal azimuth to design a 2D Love wave rejection filter that rejects off-azimuth noise, and invert this to the time domain for an improved signal on the propagation azimuth. We apply this method to the 2009 Democratic People's Republic of Korea nuclear test. In tests of the weighted averaging approach, the SNR increased by a factor of 2 overall, and a signal on the transverse component was identified as a Rayleigh wave that "leaked" into the transverse component. Without this method, the Love wave signal for the event could have been improperly identified. Using this SNR-weighted averaging technique to calculate the propagation angle indicates that S-transform filters can lower the noise level by a factor of 2 or more, helping with low-SNR events, and remove Rayleigh "leakage" from the transverse channel.
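An SNR-weighted azimuth average of the kind described can be sketched as below. This is a hedged illustration (the function name and values are made up, not from the study); it averages on unit vectors, since azimuths are circular quantities and a plain arithmetic mean fails near the 0/360 degree wrap.

```python
import math

def snr_weighted_azimuth(azimuths_deg, snrs):
    """SNR-weighted mean propagation direction, in degrees.

    Averaging is done on unit vectors so that angles wrapping
    through 0/360 degrees are handled correctly.
    """
    x = sum(w * math.cos(math.radians(a)) for a, w in zip(azimuths_deg, snrs))
    y = sum(w * math.sin(math.radians(a)) for a, w in zip(azimuths_deg, snrs))
    return math.degrees(math.atan2(y, x)) % 360.0

# one azimuth estimate per narrow-band period; the noisy band gets little weight
az = snr_weighted_azimuth([212.0, 215.0, 240.0], [8.0, 10.0, 1.0])
```

Down-weighting the low-SNR band pulls the estimate toward the two reliable measurements near 214 degrees rather than toward the outlier.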
A new state reconstructor for digital controls systems using weighted-average measurements
NASA Technical Reports Server (NTRS)
Polites, Michael E.
1989-01-01
A state reconstructor is presented for a linear continuous-time plant driven by a zero-order-hold. It takes a continuous-time output vector from the plant and convolutes it with a weighting-function matrix whose elements are time dependent. This result is integrated over T second intervals to generate weighted-averaged measurements, every T seconds, that are used in the state reconstruction process. If the plant is noise-free and can be modeled precisely, the output of this state reconstructor exactly equals the true state of the plant and accomplishes this without knowledge of the plant's initial state. If noise or modeling errors are a problem, it can be catenated with a state observer or a Kalman filter for a synergistic effect.
A new lot inspection procedure based on exponentially weighted moving average
NASA Astrophysics Data System (ADS)
Aslam, Muhammad; Azam, Muhammad; Jun, Chi-Hyuck
2015-06-01
In this manuscript, a new variable sampling plan based on the exponentially weighted moving average (EWMA) statistic is proposed, assuming that the quality characteristic follows the normal distribution. Plans are proposed for the cases where the standard deviation of the normal distribution is known or unknown. The plan parameters for both cases are determined such that the given producer's risk and consumer's risk are satisfied. The proposed plan includes the ordinary variable single sampling plan as a special case, and its advantage over the single sampling plan is discussed in terms of the sample size. Extensive tables are provided for industrial use.
Calculation of weighted averages approach for the estimation of ping tolerance values
Silalom, S.; Carter, J.L.; Chantaramongkol, P.
2010-01-01
A biotic index was created and proposed as a tool to assess water quality in the Upper Mae Ping sub-watersheds. The Ping biotic index was calculated by utilizing Ping tolerance values. This paper presents the calculation of Ping tolerance values of the collected macroinvertebrates. Ping tolerance values were estimated by a weighted averages approach based on the abundance of macroinvertebrates and six chemical constituents that include conductivity, dissolved oxygen, biochemical oxygen demand, ammonia nitrogen, nitrate nitrogen and orthophosphate. Ping tolerance values range from 0 to 10. Macroinvertebrates assigned a 0 are very sensitive to organic pollution while macroinvertebrates assigned 10 are highly tolerant to pollution.
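The core of the weighted-averages approach is an abundance-weighted mean of an environmental variable for each taxon. A minimal sketch follows, with hypothetical abundances and BOD values; the actual study combines six chemical constituents and expresses the result on the 0-10 tolerance scale.

```python
def weighted_average_optimum(abundances, gradient_values):
    """Abundance-weighted mean of an environmental variable for one taxon."""
    return (sum(a * g for a, g in zip(abundances, gradient_values))
            / sum(abundances))

# counts of one taxon at four sites, and BOD (mg/L) measured at those sites
optimum = weighted_average_optimum([10, 40, 40, 10], [1.0, 2.0, 4.0, 8.0])
```

A taxon found mostly at low-BOD sites gets a low weighted average, so after rescaling it receives a low (pollution-sensitive) tolerance value.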
Correlation between weighted spectral distribution and average path length in evolving networks
NASA Astrophysics Data System (ADS)
Jiao, Bo; Shi, Jianmai; Wu, Xiaoqun; Nie, Yuanping; Huang, Chengdong; Du, Jing; Zhou, Ying; Guo, Ronghua; Tao, Yerong
2016-02-01
The weighted spectral distribution (WSD) is a metric defined on the normalized Laplacian spectrum. In this study, synchronic random graphs are first used to rigorously analyze the metric's scaling feature, which indicates that the metric grows sublinearly as the network size increases, and the metric's scaling feature is demonstrated to be common in networks with Gaussian, exponential, and power-law degree distributions. Furthermore, a deterministic model of diachronic graphs is developed to illustrate the correlation between the slope coefficient of the metric's asymptotic line and the average path length, and the similarities and differences between synchronic and diachronic random graphs are investigated to better understand the correlation. Finally, numerical analysis is presented based on simulated and real-world data of evolving networks, which shows that the ratio of the WSD to the network size is a good indicator of the average path length. PMID:26931591
Fuzzy weighted average based on left and right scores in Malaysia tourism industry
NASA Astrophysics Data System (ADS)
Kamis, Nor Hanimah; Abdullah, Kamilah; Zulkifli, Muhammad Hazim; Sahlan, Shahrazali; Mohd Yunus, Syaizzal
2013-04-01
Tourism is known as an important sector of the Malaysian economy, serving as an economic generator and creating businesses and jobs. It is reported to bring in almost RM30 billion of the national income, thanks to intense worldwide promotion by Tourism Malaysia. One of the well-known attractions in Malaysia is our beautiful islands. The islands continue to be developed into tourist spots, attracting a steady flow of tourists. Chalets, luxury bungalows and resorts quickly develop along the coastlines of popular islands like Tioman, Redang, Pangkor, Perhentian, Sibu and so many others. In this study, we applied the Fuzzy Weighted Average (FWA) method based on left and right scores in order to determine the criteria weights and to select the best island in Malaysia. Cost, safety, attractive activities, accommodation and scenery are the five main criteria to be considered, and five selected islands in Malaysia are taken into account as alternatives. The most important criteria considered by tourists are identified based on the criteria weights ranking order, and the best island in Malaysia is then determined in terms of FWA values. This pilot study can be used as a reference for evaluating performance or solving other selection problems; more criteria, alternatives and decision makers will be considered in the future.
Equating of Subscores and Weighted Averages under the NEAT Design. Research Report. ETS RR-11-01
ERIC Educational Resources Information Center
Sinharay, Sandip; Haberman, Shelby
2011-01-01
Recently, the literature has seen increasing interest in subscores for their potential diagnostic values; for example, one study suggested the report of weighted averages of a subscore and the total score, whereas others showed, for various operational and simulated data sets, that weighted averages, as compared to subscores, lead to more accurate…
Weighted averages of magnetization from magnetic field measurements: A fast interpretation tool
NASA Astrophysics Data System (ADS)
Fedi, Maurizio
2003-08-01
Magnetic anomalies may be interpreted in terms of weighted averages of magnetization (WAM) by a simple transformation. The WAM transformation consists of dividing at each measurement point the experimental magnetic field by a normalizing field, computed from a source volume with a homogeneous unit-magnetization. The transformation yields a straightforward link among source and field position vectors. A main WAM outcome is that sources at different depths appear well discriminated. Due to the symmetry of the problem, the higher the considered field altitude, the deeper the sources outlined by the transformation. This is shown for single and multi-source synthetic cases as well as for real data. We analyze the real case of Mt. Vulture volcano (Southern Italy), where the related anomaly strongly interferes with that from deep intrusive sources. The volcanic edifice is well identified. The deep source is estimated at about 9 km depth, in agreement with other results.
Merigó, José M.
2014-01-01
Linguistic variables are very useful to evaluate alternatives in decision making problems because they provide a vocabulary in natural language rather than numbers. Some aggregation operators for linguistic variables force the use of a symmetric and uniformly distributed set of terms. The need to relax these conditions has recently been posited. This paper presents the induced unbalanced linguistic ordered weighted average (IULOWA) operator. This operator can deal with a set of unbalanced linguistic terms that are represented using fuzzy sets. We propose a new order-inducing criterion based on the specificity and fuzziness of the linguistic terms. Different relevancies are given to the fuzzy values according to their uncertainty degree. To illustrate the behaviour of the precision-based IULOWA operator, we present an environmental assessment case study in which a multiperson multicriteria decision making model is applied. PMID:25136677
Detecting the start of an influenza outbreak using exponentially weighted moving average charts
2010-01-01
Background Influenza viruses cause seasonal outbreaks in temperate climates, usually during winter and early spring, and are endemic in tropical climates. The severity and length of influenza outbreaks vary from year to year. Quick and reliable detection of the start of an outbreak is needed to promote public health measures. Methods We propose the use of an exponentially weighted moving average (EWMA) control chart of laboratory confirmed influenza counts to detect the start and end of influenza outbreaks. Results The chart is shown to provide timely signals in an example application with seven years of data from Victoria, Australia. Conclusions The EWMA control chart could be applied in other applications to quickly detect influenza outbreaks. PMID:20587013
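A minimal sketch of the EWMA statistic such a chart monitors is shown below; the smoothing constant and counts are illustrative, not values from the study.

```python
def ewma(counts, lam=0.3, start=0.0):
    """Exponentially weighted moving average of a count series.

    Each new value z_t = lam * x_t + (1 - lam) * z_{t-1} blends the
    latest count with the running average, so recent observations
    dominate while older ones decay geometrically.
    """
    z, path = start, []
    for x in counts:
        z = lam * x + (1 - lam) * z
        path.append(z)
    return path

# weekly confirmed counts; an outbreak start would be signalled
# when the statistic crosses an upper control limit
series = ewma([0, 1, 0, 2, 8, 12], lam=0.3)
```

Because the statistic accumulates evidence across weeks, it can flag a sustained rise earlier than a rule applied to each week's count in isolation.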
A Weight-Averaged Interpolation Method for Coupling Time-Accurate Rarefied and Continuum Flows
NASA Astrophysics Data System (ADS)
Diaz, Steven William
A novel approach to coupling rarefied and continuum flow regimes as a single, hybrid model is introduced. The method borrows from techniques used in the simulation of spray flows to interpolate Lagrangian point-particles onto an Eulerian grid in a weight-averaged sense. A brief overview of traditional methods for modeling both rarefied and continuum domains is given, and a review of the literature regarding rarefied/continuum flow coupling is presented. Details of the theoretical development of the method of weighted interpolation are then described. The method evaluates macroscopic properties at the nodes of a CFD grid via the weighted interpolation of all simulated molecules in a set surrounding the node. The weight factor applied to each simulated molecule is the inverse of the linear distance between it and the given node. During development, the method was applied to several preliminary cases, including supersonic flow over an airfoil, subsonic flow over tandem airfoils, and supersonic flow over a backward facing step; all at low Knudsen numbers. The main thrust of the research centered on the time-accurate expansion of a rocket plume into a near-vacuum. The method proves flexible enough to be used with various flow solvers, demonstrated by the use of Fluent as the continuum solver for the preliminary cases and a NASA-developed Large Eddy Simulation research code, WRLES, for the full lunar model. The method is applicable to a wide range of Mach numbers and is completely grid independent, allowing the rarefied and continuum solvers to be optimized for their respective domains without consideration of the other. The work presented demonstrates the validity, and flexibility of the method of weighted interpolation as a novel concept in the field of hybrid flow coupling. The method marks a significant divergence from current practices in the coupling of rarefied and continuum flow domains and offers a kernel on which to base an ongoing field of research. It has the
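The weighting described, the inverse of the linear distance between each simulated molecule and a grid node, can be sketched as follows. Function names and particle data are illustrative, not taken from the dissertation.

```python
import math

def idw_at_node(node, particles, values):
    """Interpolate particle values onto one grid node, weighting each
    particle by the inverse of its linear distance to the node."""
    num = den = 0.0
    for (x, y), v in zip(particles, values):
        d = math.hypot(x - node[0], y - node[1])
        if d == 0.0:
            return v              # particle sits exactly on the node
        w = 1.0 / d
        num += w * v
        den += w
    return num / den

# two particles on the x-axis; the nearer one dominates the node value
density = idw_at_node((0.0, 0.0), [(1.0, 0.0), (2.0, 0.0)], [1.0, 4.0])
```

Because the interpolation needs only particle positions and values, it is independent of the grid topology, which is consistent with the grid-independence claim in the abstract.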
Conductivity image enhancement in MREIT using adaptively weighted spatial averaging filter
2014-01-01
Background In magnetic resonance electrical impedance tomography (MREIT), we reconstruct conductivity images using magnetic flux density data induced by externally injected currents. Since we extract magnetic flux density data from acquired MR phase images, the amount of measurement noise increases in regions of weak MR signals. Especially in local regions of MR signal void, excessive noise may arise and deteriorate the quality of reconstructed conductivity images. In this paper, we propose a new conductivity image enhancement method as a postprocessing technique to improve the image quality. Methods Within a magnetic flux density image, the amount of noise varies depending on the position-dependent MR signal intensity. Using the MR magnitude image, which is always available in MREIT, we estimate noise levels of measured magnetic flux density data in local regions. Based on the noise estimates, we adjust the window size and weights of a spatial averaging filter, which is applied to reconstructed conductivity images. Without relying on a partial differential equation, the new method is fast and can be easily implemented. Results Applying the novel conductivity image enhancement method to experimental data, we could improve the image quality to better distinguish local regions with different conductivity contrasts. From phantom experiments, the estimated conductivity values had 80% less variation inside regions of homogeneous objects. Reconstructed conductivity images from upper and lower abdominal regions of animals showed far fewer artifacts in local regions of weak MR signals. Conclusion We developed a fast and simple method to enhance the conductivity image quality by adaptively adjusting the weights and window size of the spatial averaging filter using MR magnitude images. Since the new method is implemented as a postprocessing step, we suggest adopting it, with or without other preprocessing methods, for application studies where conductivity
NASA Astrophysics Data System (ADS)
Nadi, S.; Delavar, M. R.
2011-06-01
This paper presents a generic model for using different decision strategies in multi-criteria, personalized route planning. Some researchers have considered user preferences in navigation systems. However, these prior studies typically employed a high-tradeoff decision strategy, which used a weighted linear aggregation rule, and neglected other decision strategies. The proposed model integrates a pairwise comparison method and quantifier-guided ordered weighted averaging (OWA) aggregation operators to form a personalized route planning method that incorporates different decision strategies. The model can be used to calculate the impedance of each link regarding user preferences in terms of the route criteria, criteria importance and the selected decision strategy. Depending on the decision strategy, the calculated impedance lies between aggregations that use a logical "and" (which requires all the criteria to be satisfied) and a logical "or" (which requires at least one criterion to be satisfied); it also includes taking the average of the criteria scores. The model results in multiple alternative routes, which apply different decision strategies and provide users with the flexibility to select one of them en route based on the real-world situation. The model also defines the robust personalized route under different decision strategies. The influence of different decision strategies on the results is investigated in an illustrative example. The model is implemented in a web-based geographical information system (GIS) for Isfahan in Iran and verified in a tourist routing scenario. The results demonstrated the validity of the route planning carried out by the model in real-world situations.
Robust HLLC Riemann solver with weighted average flux scheme for strong shock
NASA Astrophysics Data System (ADS)
Kim, Sung Don; Lee, Bok Jik; Lee, Hyoung Jin; Jeung, In-Seuck
2009-11-01
Many researchers have reported failures of the approximate Riemann solvers in the presence of strong shock. This is believed to be due to perturbation transfer in the transverse direction of shock waves. We propose a simple and clear method to prevent such problems for the Harten-Lax-van Leer contact (HLLC) scheme. By defining a sensing function in the transverse direction of strong shock, the HLLC flux is switched to the Harten-Lax-van Leer (HLL) flux in that direction locally, and the magnitude of the additional dissipation is automatically determined using the HLL scheme. We combine the HLLC and HLL schemes in a single framework using a switching function. High-order accuracy is achieved using a weighted average flux (WAF) scheme, and a method for v-shear treatment is presented. The modified HLLC scheme is named HLLC-HLL. It is tested against a steady normal shock instability problem and Quirk's test problems, and spurious solutions in the strong shock regions are successfully controlled.
Time-weighted average SPME analysis for in planta determination of cVOCs.
Sheehan, Emily M; Limmer, Matt A; Mayer, Philipp; Karlson, Ulrich Gosewinkel; Burken, Joel G
2012-03-20
The potential of phytoscreening for plume delineation at contaminated sites has promoted interest in innovative, sensitive contaminant sampling techniques. Solid-phase microextraction (SPME) methods have been developed, offering quick, undemanding, noninvasive sampling without the use of solvents. In this study, time-weighted average SPME (TWA-SPME) sampling was evaluated for in planta quantification of chlorinated solvents. TWA-SPME was found to have increased sensitivity over headspace and equilibrium SPME sampling. Using a variety of chlorinated solvents and a polydimethylsiloxane/carboxen (PDMS/CAR) SPME fiber, most compounds exhibited near linear or linear uptake over the sampling period. Smaller, less hydrophobic compounds exhibited more nonlinearity than larger, more hydrophobic molecules. Using a specifically designed in planta sampler, field sampling was conducted at a site contaminated with chlorinated solvents. Sampling with TWA-SPME produced instrument responses ranging from 5 to over 200 times higher than headspace tree core sampling. This work demonstrates that TWA-SPME can be used for in planta detection of a broad range of chlorinated solvents and methods can likely be applied to other volatile and semivolatile organic compounds. PMID:22332592
Fuzzy Petri nets Using Intuitionistic Fuzzy Sets and Ordered Weighted Averaging Operators.
Liu, Hu-Chen; You, Jian-Xin; You, Xiao-Yue; Su, Qiang
2016-08-01
Fuzzy Petri nets (FPNs) are an important modeling tool for knowledge representation and reasoning, which have been extensively used in a lot of fields. However, the conventional FPN models have been criticized as having many shortcomings in the literature. Many different models have been suggested to enhance the performance of FPNs, but deficiencies still exist in these models. First, various types of uncertain knowledge information provided by domain experts are very hard to be modeled by the existing FPN models. Second, the traditional FPNs determine the results of knowledge reasoning using the min, max, and product operators, which may not work well in many practical applications. In this paper, we propose a new type of FPN model based on intuitionistic fuzzy sets and ordered weighted averaging operators to deal with the problems and improve the effectiveness of the conventional FPNs. Moreover, a max-algebra-based reasoning algorithm is developed in order to implement the intuitionistic fuzzy reasoning formally and automatically. Finally, a case study concerning fault diagnosis of aircraft generator is presented to demonstrate the proposed intuitionistic FPN model. Numerical experiments show that the new FPN model is feasible and quite effective for knowledge representation and reasoning of intuitionistic fuzzy expert systems. PMID:26259253
Iterative weighted average diffusion as a novel external force in the active contour model
NASA Astrophysics Data System (ADS)
Mirov, Ilya S.; Nakhmani, Arie
2016-03-01
The active contour model has good performance in boundary extraction for medical images; in particular, the Gradient Vector Flow (GVF) active contour model shows good performance at concavity convergence and insensitivity to initialization, yet it is susceptible to edge leaking and deep, narrow concavities, and has some issues handling noisy images. This paper proposes a novel external force, called Iterative Weighted Average Diffusion (IWAD), which, used in tandem with parametric active contours, provides superior performance in images with high values of concavity. The image gradient is first turned into an edge image, smoothed, and modified with enhanced corner detection; the IWAD algorithm then diffuses the force at a given pixel based on its 3×3 pixel neighborhood. A forgetting factor, φ, is employed to ensure that forces being spread away from the boundary of the image will attenuate. The experimental results show better behavior in high-curvature regions, faster convergence, and less edge leaking than GVF when both are compared to expert manual segmentation of the images.
Wingard, G.L.; Hudley, J.W.
2012-01-01
A molluscan analogue dataset is presented in conjunction with a weighted-averaging technique as a tool for estimating past salinity patterns in south Florida’s estuaries and developing restoration targets based on these reconstructions. The method, here referred to as cumulative weighted percent (CWP), was tested using modern surficial samples collected in Florida Bay from sites located near fixed water monitoring stations that record salinity. The results were calibrated using species weighting factors derived from examining species occurrence patterns. A comparison of the resulting calibrated species-weighted CWP (SW-CWP) to the observed salinity at the water monitoring stations, averaged over a 3-year period, indicates that, on average, the SW-CWP estimate falls within two salinity units of the observed salinity. SW-CWP reconstructions were conducted on a core from near the mouth of Taylor Slough to illustrate the application of the method.
Pardo, C E; Kreuzer, M; Bee, G
2013-11-01
Offspring born from litters of normal size (10 to 15 piglets) but classified as having lower than average birth weight (average of the sow herd used: 1.46 ± 0.2 kg; mean ± s.d.) carry at birth negative phenotypic traits normally associated with intrauterine growth restriction, such as brain-sparing and impaired myofiber hyperplasia. The objective of the study was to assess long-term effects of intrauterine crowding by comparing postnatal performance, carcass characteristics and pork quality of offspring born from litters with higher (>1.7 kg) or lower (<1.3 kg) than average litter birth weight. From a population of multiparous Swiss Large White sows (parity 2 to 6), 16 litters with high (H = 1.75 kg) or low (L = 1.26 kg) average litter birth weight were selected. At farrowing, two female pigs and two castrated pigs were chosen from each litter: from the H-litters those with the intermediate (HI = 1.79 kg) and lowest (HL = 1.40 kg) birth weight, and from L-litters those with the highest (LH = 1.49 kg) and intermediate (LI = 1.26 kg) birth weight. Average birth weight of the selected HI and LI piglets differed (P < 0.05), whereas birth weight of the HL- and LH-piglets was similar (P > 0.05). These pigs were fattened in group pens and slaughtered at 165 days of age. Pre-weaning performance of the litters and growth performance, carcass and meat quality traits of the selected pigs were assessed. Number of stillborn and pig mortality were greater (P < 0.05) in L- than in H-litters. Consequently, fewer (P < 0.05) piglets were weaned and average litter weaning weight decreased by 38% (P < 0.05). The selected pigs of the L-litters displayed catch-up growth during the starter and grower-finisher periods, leading to similar (P > 0.05) slaughter weight at 165 days of age. However, HL-gilts were more feed efficient and had leaner carcasses than HI-, LH- and LI-pigs (birth weight class × gender interaction P < 0.05). Meat quality traits were mostly similar between groups. The
Averages, Areas and Volumes; Cambridge Conference on School Mathematics Feasibility Study No. 45.
ERIC Educational Resources Information Center
Cambridge Conference on School Mathematics, Newton, MA.
Presented is an elementary approach to areas, volumes and other mathematical concepts usually treated in calculus. The approach is based on the idea of average and this concept is utilized throughout the report. In the beginning the average (arithmetic mean) of a set of numbers is considered and two properties of the average which often simplify…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-19
..., 2011 (76 FR 13580). Furthermore, due to the complexity of the issues proposed in the NPRM, FTA is..., FTA published an NPRM in the Federal Register (76 FR 13850) proposing to amend its bus testing... Federal Transit Administration 49 CFR Part 665 RIN 2132-AB01 Bus Testing: Calculation of Average...
Bacillus subtilis 168 levansucrase (SacB) activity affects average levan molecular weight.
Porras-Domínguez, Jaime R; Ávila-Fernández, Ángela; Miranda-Molina, Afonso; Rodríguez-Alegría, María Elena; Munguía, Agustín López
2015-11-01
Levan is a fructan polymer that offers a variety of applications in the chemical, health, cosmetic and food industries. Most levan applications depend on levan molecular weight, which in turn depends on the source of the synthesizing enzyme and/or on reaction conditions. Here we demonstrate that, in the particular case of levansucrase from Bacillus subtilis 168, enzyme concentration is also a factor defining the levan molecular weight distribution. While a bimodal distribution has been reported at the usual enzyme concentrations (1 U/ml, equivalent to 0.1 μM levansucrase), we found that a normal distribution of low molecular weight is solely obtained at high enzyme concentrations (>5 U/ml, equivalent to 0.5 μM levansucrase), while a normal distribution of high molecular weight is synthesized at low enzyme doses (0.1 U/ml, equivalent to 0.01 μM levansucrase). PMID:26256357
The effect of capsule-filling machine vibrations on average fill weight.
Llusa, Marcos; Faulhammer, Eva; Biserni, Stefano; Calzolari, Vittorio; Lawrence, Simon; Bresciani, Massimo; Khinast, Johannes
2013-09-15
The aim of this paper is to study the effect of the capsule-filling speed and the inherent machine vibrations on fill weight for a dosator-nozzle machine. The results show that increasing the capsule-filling speed amplifies the vibration intensity (as measured by a laser Doppler vibrometer) of the machine frame, which leads to powder densification. The mass of the powder (fill weight) collected via the nozzle is significantly larger at a higher capsule-filling speed. Therefore, there is a correlation between powder densification under more intense vibrations and larger fill weights. Quality-by-Design of powder-based products should evaluate the effect of environmental vibrations on material attributes, which in turn may affect product quality. PMID:23872302
López-Soria, S; Sibila, M; Nofrarías, M; Calsamiglia, M; Manzanilla, E G; Ramírez-Mendoza, H; Mínguez, A; Serrano, J M; Marín, O; Joisel, F; Charreyre, C; Segalés, J
2014-12-01
Porcine circovirus type 2 (PCV2) is a ubiquitous virus that mainly affects nursery and fattening pigs, causing systemic disease (PCV2-SD) or subclinical infection. A characteristic sign in both presentations is a reduction of average daily weight gain (ADWG). The present study aimed to assess the relationship between PCV2 load in serum and ADWG from 3 (weaning) to 21 weeks of age (slaughter) (ADWG 3-21). Thus, three different boar lines were used to inseminate sows from two PCV2-SD affected farms. One or two pigs per sow were selected (60, 61 and 51 piglets from Pietrain, Pietrain×Large White and Duroc×Large White boar lines, respectively). Pigs were bled at 3, 9, 15 and 21 weeks of age and weighed at 3 and 21 weeks. The area under the curve of the viral load at all sampling times (AUCqPCR 3-21) was calculated for each animal according to standard and real-time quantitative PCR results; this variable was categorized as "negative or low" (<10(4.3) PCV2 genome copies/ml of serum), "medium" (≥10(4.3) to ≤10(5.3)) and "high" (>10(5.3)). Data regarding sex, PCV2 antibody titre at weaning and sow parity were also collected. A generalized linear model showed that paternal genetic line and AUCqPCR 3-21 were related to ADWG 3-21. ADWG 3-21 (mean ± standard error) for the "negative or low", "medium" and "high" AUCqPCR 3-21 categories was 672±9, 650±12 and 603±16 g/day, respectively, showing significant differences among them. This study describes different ADWG performances in three pig populations that suffered from different degrees of PCV2 viraemia. PMID:25448444
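The viral-load summary in the study above, an area under the curve of log10 titres across the four bleeds, categorized by thresholds, can be sketched as follows. The trapezoidal rule and the normalization of the AUC by the 18-week span are illustrative assumptions; the abstract does not spell out the exact computation.

```python
def auc_viral_load(weeks, log10_load):
    """Trapezoidal area under the log10 viral-load curve across sampling weeks."""
    pts = list(zip(weeks, log10_load))
    return sum((t1 - t0) * (y0 + y1) / 2.0
               for (t0, y0), (t1, y1) in zip(pts, pts[1:]))

def categorize(auc, span_weeks):
    """Apply the study's thresholds to the time-averaged log10 load
    (dividing the AUC by the sampling span is an assumed normalization)."""
    mean_log10 = auc / span_weeks
    if mean_log10 < 4.3:
        return "negative or low"
    if mean_log10 <= 5.3:
        return "medium"
    return "high"
```

For a pig held at 10^5 copies/ml from week 3 to week 21, the AUC is 5.0 × 18 = 90 and the category comes out as "medium".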
Area-averaged profiles over the mock urban setting test array
Nelson, M. A.; Brown, M. J.; Pardyjak, E. R.; Klewicki, J. C.
2004-01-01
Urban areas have a large effect on the local climate and meteorology. Efforts have been made to incorporate the bulk dynamic and thermodynamic effects of urban areas into mesoscale models (e.g., Chin et al., 2000; Holt et al., 2002; Lacser and Otte, 2002). At this scale buildings cannot be resolved individually, but parameterizations have been developed to capture their aggregate effect. These urban canopy parameterizations have been designed to account for the area-average drag, turbulent kinetic energy (TKE) production, and surface energy balance modifications due to buildings (e.g., Sorbjan and Uliasz, 1982; Ca, 1999; Brown, 2000; Martilli et al., 2002). These models compute an area-averaged mean profile that is representative of the bulk flow characteristics over the entire mesoscale grid cell. One difficulty has been testing of these parameterizations due to a lack of area-averaged data. In this paper, area-averaged velocity and turbulent kinetic energy profiles are derived from data collected at the Mock Urban Setting Test (MUST). The MUST experiment was designed to be a near full-scale model of an idealized urban area embedded in the Atmospheric Surface Layer (ASL). Its purpose was to study airflow and plume transport in urban areas and to provide a test case for model validation. A large number of velocity measurements were taken at the test site so that it was possible to derive area-averaged velocity and TKE profiles.
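A minimal sketch of the area-averaging step described above: each site's vertical profile is weighted by the plan area it represents before averaging. The function name and the area-based weighting are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def area_averaged_profile(profiles, areas):
    """Area-weighted mean vertical profile over a set of measurement sites.

    profiles: (n_sites, n_heights) array, e.g. mean wind speed or TKE per height
    areas:    plan area represented by each site (the assumed weighting)
    """
    profiles = np.asarray(profiles, dtype=float)
    w = np.asarray(areas, dtype=float)
    w = w / w.sum()          # normalize the area weights
    return w @ profiles      # weighted sum over sites, one value per height
```

With equal areas this reduces to the plain mean profile; unequal areas tilt the average toward the sites representing more of the grid cell.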
Full-custom design of split-set data weighted averaging with output register for jitter suppression
NASA Astrophysics Data System (ADS)
Jubay, M. C.; Gerasta, O. J.
2015-06-01
A full-custom design of an element selection algorithm, named Split-set Data Weighted Averaging (SDWA), is implemented in a 90 nm CMOS Technology Synopsys Library. SDWA is applied to seven unit elements (3-bit) using a thermometer-coded input. Split-set DWA is an improved DWA algorithm which caters to the requirement for randomization along with long-term equal element usage. Randomization and equal element usage improve the spectral response of the unit elements, yielding a higher spurious-free dynamic range (SFDR) without significantly degrading the signal-to-noise ratio (SNR). As a full-custom design, it is brought to the transistor level and a custom chip layout is also provided, having a total area of 0.3 mm², a power consumption of 0.566 mW, simulated at a 50 MHz clock frequency. In this implementation, SDWA is further improved by introducing a register at the output that suppresses the jitter introduced at the final stage due to switching loops and successive delays.
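For background, conventional DWA (the baseline that SDWA improves upon) can be sketched in a few lines: a pointer advances through the unit elements by each input code, so over time every element is used equally often. This is the textbook rotation scheme, not the split-set variant the paper implements.

```python
def dwa_select(codes, n_elements=7):
    """Conventional DWA element selection for a thermometer-coded DAC.

    codes: sequence of input values (0..n_elements per sample)
    Returns one list of selected element indices per sample; the starting
    pointer rotates by the previous code so element usage equalizes.
    """
    ptr = 0
    selections = []
    for code in codes:
        sel = [(ptr + k) % n_elements for k in range(code)]
        selections.append(sel)
        ptr = (ptr + code) % n_elements
    return selections
```

For example, with seven elements, inputs [3, 4] select elements [0, 1, 2] and then [3, 4, 5, 6], using each element exactly once across the two samples.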
47 CFR 36.622 - National and study area average unseparated loop costs.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 2 2010-10-01 2010-10-01 false National and study area average unseparated... Universal Service Fund Calculation of Loop Costs for Expense Adjustment § 36.622 National and study area... provided in paragraph (c) of this section, this is equal to the sum of the Loop Costs for each study...
47 CFR 36.622 - National and study area average unseparated loop costs.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 2 2011-10-01 2011-10-01 false National and study area average unseparated... Universal Service Fund Calculation of Loop Costs for Expense Adjustment § 36.622 National and study area... provided in paragraph (c) of this section, this is equal to the sum of the Loop Costs for each study...
The Effect of Area Averaging on the Approximated Profile of the H α Spectral Line
NASA Astrophysics Data System (ADS)
Bodnárová, M.; Utz, D.; Rybák, J.
2016-04-01
The Hα line is widely used as a diagnostic of the chromosphere. Often one needs to average the line profile over some area to increase the signal-to-noise ratio. Thus it is important to understand how derived parameters vary with changing approximations. In this study we investigate the effect of spatial averaging over a selected area on the temporal variations of the width, the intensity and the Doppler shift of the Hα spectral line profile. The approximated profile was deduced from co-temporal observations at five points throughout the Hα line profile obtained by the tunable Lyot filter installed on the Dutch Open Telescope. We found variations of the intensity and the Doppler velocities which were independent of the size of the area used for the computation of the area-averaged Hα spectral line profile.
Prediction of oil palm production using the weighted average of fuzzy sets concept approach
NASA Astrophysics Data System (ADS)
Nugraha, R. F.; Setiyowati, Susi; Mukhaiyar, Utriweni; Yuliawati, Apriliani
2015-12-01
Proper planning is crucial for decision making in a company. For oil palm producers, predictions of future production are useful inputs to company strategy, so predicting as accurately as possible is essential. Until now, to predict the next month's oil palm production, the company has used the simple mean of the latest five years of observations. Lately, imprecision in the estimates of oil palm production (overestimation) has become a problem and a focus of attention in the company. Here we propose a weighted-mean approach using the fuzzy set concept for estimation and prediction. We find that the fuzzy-concept predictions almost always underestimate realizations, in contrast to the simple mean.
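The contrast between a simple five-year mean and a weighted-mean predictor can be illustrated as below. The recency-based weights and the production figures are hypothetical; the paper derives its weights from fuzzy set membership, which is not reproduced here.

```python
def weighted_mean(values, weights):
    """Weighted mean: sum(w_i * x_i) / sum(w_i)."""
    return sum(w * x for w, x in zip(weights, values)) / sum(weights)

# Hypothetical monthly production figures (not from the paper):
history = [100.0, 104.0, 98.0, 110.0, 120.0]
simple = sum(history) / len(history)                # the company's baseline
recency = weighted_mean(history, [1, 2, 3, 4, 5])   # emphasizes recent months
```

With these numbers the simple mean is 106.4, while the recency-weighted mean is about 109.5; a fuzzy-derived weighting would likewise shift the estimate away from the unweighted baseline.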
Baeck, Annelies; Wagemans, Johan; Op de Beeck, Hans P
2013-04-15
Natural scenes typically contain multiple visual objects, often in interaction, such as when a bottle is used to fill a glass. Previous studies disagree about the representation of multiple objects and about the role of object position, and they did not pinpoint the effect of potential interactions between the objects. In an fMRI study, we presented four single objects in two different positions and object pairs consisting of all possible combinations of the single objects. Object pairs could form either a meaningful action configuration in which they interact with each other or a non-meaningful configuration. We found that for single objects and object pairs both identity and position were represented in multi-voxel activity patterns in LOC. The response patterns of object pairs were best predicted by a weighted average of the response patterns of the constituent objects, with the strongest single-object response (the max response) weighted more than the min response. The difference in weight between the max and the min object was larger for familiar action pairs than for other pairs when participants attended to the configuration. A weighted average thus relates the response patterns of object pairs to the response patterns of single objects, even when the objects interact. PMID:23266747
On the theory relating changes in area-average and pan evaporation (Invited)
NASA Astrophysics Data System (ADS)
Shuttleworth, W.; Serrat-Capdevila, A.; Roderick, M. L.; Scott, R.
2009-12-01
Theory relating changes in area-average evaporation with changes in the evaporation from pans or open water is developed. Such changes can arise by Type (a) processes related to large-scale changes in atmospheric concentrations and circulation that modify surface evaporation rates in the same direction, and Type (b) processes related to coupling between the surface and atmospheric boundary layer (ABL) at the landscape scale that usually modify area-average evaporation and pan evaporation in different directions. The interrelationship between evaporation rates in response to Type (a) changes is derived. They have the same sign and broadly similar magnitude but the change in area-average evaporation is modified by surface resistance. As an alternative to assuming the complementary evaporation hypothesis, the results of previous modeling studies that investigated surface-atmosphere coupling are parameterized and used to develop a theoretical description of Type (b) coupling via vapor pressure deficit (VPD) in the ABL. The interrelationship between appropriately normalized pan and area-average evaporation rates is shown to vary with temperature and wind speed but, on average, the Type (b) changes are approximately equal and opposite. Long-term Australian pan evaporation data are analyzed to demonstrate the simultaneous presence of Type (a) and (b) processes, and observations from three field sites in southwestern USA show support for the theory describing Type (b) coupling via VPD. England's victory over Australia in 2009 Ashes cricket test match series will not be mentioned.
ON THE THEORY RELATING CHANGES IN AREA-AVERAGE AND PAN EVAPORATION
Technology Transfer Automated Retrieval System (TEKTRAN)
Theory relating changes in the area-average evaporation from a landscape with changes in the evaporation from pans or open water within the landscape is developed. Such changes can arise in two ways, by Type (a) processes related to large-scale changes in atmospheric concentrations and circulation t...
Shih, H C; Tsai, S W; Kuo, C H
2012-01-01
A solid-phase microextraction (SPME) device was used as a diffusive sampler for airborne propylene glycol ethers (PGEs), including propylene glycol monomethyl ether (PGME), propylene glycol monomethyl ether acetate (PGMEA), and dipropylene glycol monomethyl ether (DPGME). Carboxen-polydimethylsiloxane (CAR/PDMS) SPME fiber was selected for this study. A polytetrafluoroethylene (PTFE) tubing was used as the holder, and the SPME fiber assembly was inserted into the tubing as a diffusive sampler. The diffusion path length and area of the sampler were 0.3 cm and 0.00086 cm(2), respectively. The theoretical sampling constants at 30°C and 1 atm for PGME, PGMEA, and DPGME were 1.50 × 10(-2), 1.23 × 10(-2) and 1.14 × 10(-2) cm(3) min(-1), respectively. For evaluations, known concentrations of PGEs around the threshold limit values/time-weighted average with specific relative humidities (10% and 80%) were generated both by the air bag method and the dynamic generation system, while 15, 30, 60, 120, and 240 min were selected as the time periods for vapor exposures. Comparisons of the SPME diffusive sampling method to Occupational Safety and Health Administration (OSHA) organic Method 99 were performed side-by-side in an exposure chamber at 30°C for PGME. A gas chromatography/flame ionization detector (GC/FID) was used for sample analysis. The experimental sampling constants of the sampler at 30°C were (6.93 ± 0.12) × 10(-1), (4.72 ± 0.03) × 10(-1), and (3.29 ± 0.20) × 10(-1) cm(3) min(-1) for PGME, PGMEA, and DPGME, respectively. The adsorption of chemicals on the stainless steel needle of the SPME fiber was suspected to be one of the reasons why significant differences between theoretical and experimental sampling rates were observed. Correlations between the results for PGME from both SPME device and OSHA organic Method 99 were linear (r = 0.9984) and consistent (slope = 0.97 ± 0.03). Face velocity (0-0.18 m/s) also proved to have no effects on the sampler
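The theoretical sampling constant quoted above follows Fick's first law for a diffusive sampler, SR = D·A/L, and the time-weighted average concentration then follows from the collected mass. A sketch with the sampler's stated geometry (L = 0.3 cm, A = 0.00086 cm²); the diffusion coefficient D used below is an illustrative value, not taken from the paper.

```python
def sampling_constant(D, area, path_length):
    """Theoretical uptake rate of a diffusive sampler from Fick's first law:
    SR = D * A / L, in cm^3/min when D is in cm^2/min, A in cm^2, L in cm."""
    return D * area / path_length

def twa_concentration(mass_ug, SR, minutes):
    """Time-weighted average concentration (ug/cm^3) over the exposure period."""
    return mass_ug / (SR * minutes)

# Geometry from the sampler above; D is an assumed illustrative value.
SR = sampling_constant(D=5.2, area=0.00086, path_length=0.3)  # ~1.5e-2 cm^3/min
```

Dividing the analyte mass recovered from the fiber by SR × exposure time gives the TWA concentration, which is why an experimental SR that deviates from theory (as observed here) matters for quantification.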
Sontag, C A; Stafford, W F; Correia, J J
2004-03-01
Analysis of sedimentation velocity data for indefinite self-associating systems is often achieved by fitting of weight-average sedimentation coefficients (s(20,w)). However, this method discriminates poorly between alternative models of association and is biased by the presence of inactive monomers and irreversible aggregates. Therefore, a more robust method for extracting the binding constants for indefinite self-associating systems has been developed. This approach utilizes a set of fitting routines (SedAnal) that perform global non-linear least squares fits of up to 10 sedimentation velocity experiments, corresponding to different loading concentrations, by a combination of finite element simulations and a fitting algorithm that uses a simplex convergence routine to search parameter space. Indefinite self-association is analyzed with the software program isodesfitter, which incorporates user-provided functions for sedimentation coefficients as a function of the degree of polymerization for spherical, linear and helical polymer models. The computer program hydro was used to generate the sedimentation coefficient values for the linear and helical polymer assembly mechanisms. Since this curve-fitting method directly fits the shape of the sedimenting boundary, it is in principle very sensitive to alternative models and the presence of species not participating in the reaction. This approach is compared with traditional fitting of weight-average data and applied to the initial stages of Mg(2+)-induced tubulin self-association into small curved polymers, and vinblastine-induced tubulin spiral formation. The appropriate use and limitations of the methods are discussed. PMID:15043931
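For reference, the weight-average sedimentation coefficient that the traditional fitting approach relies on is simply a concentration-weighted mean over species, s_w = Σ c_i·s_i / Σ c_i; the bias from inactive monomers arises because they contribute to the weights without participating in the association.

```python
def weight_average_s(concentrations, s_values):
    """Weight-average sedimentation coefficient:
    s_w = sum(c_i * s_i) / sum(c_i), with c_i the weight concentration
    and s_i the sedimentation coefficient of each species."""
    num = sum(c * s for c, s in zip(concentrations, s_values))
    return num / sum(concentrations)
```

An equimolar mix of 2 S and 4 S species gives s_w = 3 S; skewing the concentrations toward one species pulls s_w toward that species' value.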
Long-path Scintillometry To Determine Area-averaged Evaporation Over Heterogeneous Terrain
NASA Astrophysics Data System (ADS)
Meininger, W. M. L.; de Bruin, H. A. R.
Results of the Flevopolder 1998 field experiment will be presented. Area-averaged evaporation determined with a combined system of a Large Aperture Scintillometer (LAS) and a Radio-wave (small aperture) Scintillometer (RWS) with a path length of 2.2 km will be compared with 'ground-truth' eddy-correlation measurements. The landscape consists of different rectangular agricultural fields. The main crops are potatoes, sugar beets, onions and wheat. Over each of these different crops micro-meteorological stations were installed, including eddy-correlation equipment. In addition, area-averaged evaporation derived from the LAS alone and a simple estimate of available energy will also be discussed. The results appear to be very promising. Finally, first results of evaporation derived from scintillometry and from satellite images will be presented.
High surface area, low weight composite nickel fiber electrodes
NASA Technical Reports Server (NTRS)
Johnson, Bradley A.; Ferro, Richard E.; Swain, Greg M.; Tatarchuk, Bruce J.
1993-01-01
The energy density and power density of light weight aerospace batteries utilizing the nickel oxide electrode are often limited by the microstructures of both the collector and the resulting active deposit in/on the collector. Heretofore, these two microstructures were intimately linked to one another by the materials used to prepare the collector grid as well as the methods and conditions used to deposit the active material. Significant weight and performance advantages were demonstrated by Britton and Reid at NASA-LeRC using FIBREX nickel mats of ca. 28-32 microns diameter. Work in our laboratory investigated the potential performance advantages offered by nickel fiber composite electrodes containing a mixture of fibers as small as 2 microns diameter (Available from Memtec America Corporation). These electrode collectors possess in excess of an order of magnitude more surface area per gram of collector than FIBREX nickel. The increase in surface area of the collector roughly translates into an order of magnitude thinner layer of active material. Performance data and advantages of these thin layer structures are presented. Attributes and limitations of their electrode microstructure to independently control void volume, pore structure of the Ni(OH)2 deposition, and resulting electrical properties are discussed.
NASA Astrophysics Data System (ADS)
Boroushaki, Soheil; Malczewski, Jacek
2008-04-01
This paper focuses on the integration of GIS and an extension of the analytical hierarchy process (AHP) using quantifier-guided ordered weighted averaging (OWA) procedure. AHP_OWA is a multicriteria combination operator. The nature of the AHP_OWA depends on some parameters, which are expressed by means of fuzzy linguistic quantifiers. By changing the linguistic terms, AHP_OWA can generate a wide range of decision strategies. We propose a GIS-multicriteria evaluation (MCE) system through implementation of AHP_OWA within ArcGIS, capable of integrating linguistic labels within conventional AHP for spatial decision making. We suggest that the proposed GIS-MCE would simplify the definition of decision strategies and facilitate an exploratory analysis of multiple criteria by incorporating qualitative information within the analysis.
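A quantifier-guided OWA operator of the kind combined with AHP here can be sketched with the common RIM quantifier Q(r) = r^α: the arguments are reordered before weighting, and α sets the decision strategy from optimistic to pessimistic. The specific quantifier family is an assumption for illustration; the paper works with fuzzy linguistic quantifiers more generally.

```python
def owa(values, alpha=1.0):
    """Quantifier-guided OWA with the RIM quantifier Q(r) = r**alpha.

    alpha < 1 leans toward the larger (optimistic) values,
    alpha > 1 toward the smaller (pessimistic) ones,
    alpha = 1 reduces to the plain arithmetic mean.
    """
    b = sorted(values, reverse=True)   # reorder the arguments, largest first
    n = len(b)
    weights = [(i / n) ** alpha - ((i - 1) / n) ** alpha
               for i in range(1, n + 1)]
    return sum(w * x for w, x in zip(weights, b))
```

Because the weights attach to ordered positions rather than to specific criteria, sweeping α generates the range of decision strategies the abstract describes, from "at least one criterion satisfied" toward "all criteria satisfied".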
Yasugi, T; Kawai, T; Mizunuma, K; Horiguchi, S; Iguchi, H; Ikeda, M
1992-01-01
A diffusive sampling method with water as absorbent was examined in comparison with 3 conventional methods of diffusive sampling with carbon cloth as absorbent, pumping through National Institute of Occupational Safety and Health (NIOSH) charcoal tubes, and pumping through NIOSH silica gel tubes to measure the time-weighted average concentration of dimethylformamide (DMF). DMF vapors of constant concentrations at 3-110 ppm were generated by bubbling air at constant velocities through liquid DMF followed by dilution with fresh air. Both types of diffusive samplers could either absorb or adsorb DMF in proportion to time (0.25-8 h) and concentration (3-58 ppm), except that the DMF adsorbed was below the measurable amount when carbon cloth samplers were exposed at 3 ppm for less than 1 h. When both diffusive samplers were loaded with DMF and kept in fresh air, the DMF in water samplers stayed unchanged for at least 12 h. The DMF in carbon cloth samplers showed a decay with a half-time of 14.3 h. When the carbon cloth was taken out immediately after termination of DMF exposure, wrapped in aluminum foil, and kept refrigerated, however, there was no measurable decrease in DMF for at least 3 weeks. When the air was drawn at 0.2 l/min, a breakthrough of the silica gel tube took place at about 4,000 ppm.min (as the lower 95% confidence limit), whereas charcoal tubes could tolerate even heavier exposures, suggesting that both tubes are suitable for measuring the 8-h time-weighted average of DMF at 10 ppm. PMID:1577523
NASA Astrophysics Data System (ADS)
Amir Rahmani, Mohammad; Zarghami, Mahdi
2013-03-01
Projections of climate change using General Climate Models (GCMs) are uncertain. Hence, combining the results of GCMs is now an effective solution to tackle this uncertainty. To evaluate the performance of GCMs, a new measure based on the similarity of the projections is defined. In defining this measure the Ordered Weighted Averaging (OWA) approach is used. The relative weights of the GCM projections in different stations, to be aggregated by the OWA operator, are obtained by regular increasing monotone fuzzy quantifiers, which model the risk preferences of the decision maker. To show the effectiveness of the approach, climate change in the northwestern provinces of Iran is studied by using the data of 15 synoptic stations. The weather generator LARS-WG is used to downscale the GCMs under three emission scenarios (A2, A1B and B1) for the period 2011 to 2030. The combined results, using the similarity values, indicate a −0.1 °C to +4.5 °C change in temperature in the region. Precipitation is expected to increase in summer and fall. Changes in winter precipitation depend on the location, while spring precipitation would change moderately. The results of this study show the usefulness of the OWA operator, which considers the risk attitudes of the decision maker. This approach could help water and environmental managers to tackle climate uncertainties.
Cook, D A; Coory, M; Webster, R A
2011-06-01
OBJECTIVE To introduce a new type of risk-adjusted (RA) exponentially weighted moving average (EWMA) chart and to compare it to a commonly used type of variable life adjusted display chart for analysis of patient outcomes. DATA Routine inpatient data on mortality following admission for acute myocardial infarction, from all public and private hospitals in Queensland, Australia. METHODS The RA-EWMA plots the EWMA of the observed and predicted values. Predicted values were obtained from a logistic regression model for all hospitals in Queensland. The EWMA of the predicted values is a moving centre line, reflecting current patient case mix at a particular hospital. Thresholds around this moving centre line provide a scale by which to assess the importance of trends in the EWMA of the observed values. RESULTS The RA-EWMA chart can be designed to have equivalent performance, in terms of average run lengths, to the variable life adjusted display chart. The advantages of the RA-EWMA are that it communicates information about the current level of an indicator in a direct and understandable way, and it explicitly displays information about the current patient case mix. Also, because it is not reset, the RA-EWMA is a more natural chart to use in health, where it is exceedingly rare to stop or dramatically and abruptly alter a process of care. CONCLUSION The RA-EWMA chart is a direct and intuitive way to display information about an indicator while accounting for differences in case mix. PMID:21209145
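The EWMA recursion underlying such a chart is z_t = λ·y_t + (1−λ)·z_{t−1}; the risk-adjusted version applies it to both the observed outcomes and the model-predicted risks, the latter giving the moving centre line. A minimal sketch (the smoothing constant λ and the seeding rule are illustrative choices, not the paper's):

```python
def ewma(series, lam=0.1, start=None):
    """Exponentially weighted moving average: z_t = lam*y_t + (1-lam)*z_{t-1}.

    Seeds with the first observation unless an explicit start is given."""
    z = series[0] if start is None else start
    out = []
    for y in series:
        z = lam * y + (1 - lam) * z
        out.append(z)
    return out

# A risk-adjusted chart would plot ewma(observed_mortality) against
# ewma(predicted_mortality), the latter serving as the moving centre line.
```

Because the recursion never resets, the chart tracks a continuing process of care, which is the property the authors highlight for health applications.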
Coombes, Brandon; Basu, Saonli; Guha, Sharmistha; Schork, Nicholas
2015-01-01
Multi-locus effect modeling is a powerful approach for detection of genes influencing a complex disease. Especially for rare variants, we need to analyze multiple variants together to achieve adequate power for detection. In this paper, we propose several parsimonious branching model techniques to assess the joint effect of a group of rare variants in a case-control study. These models implement a data reduction strategy within a likelihood framework and use a weighted score test to assess the statistical significance of the effect of the group of variants on the disease. The primary advantage of the proposed approach is that it performs model-averaging over a substantially smaller set of models supported by the data and thus gains power to detect multi-locus effects. We illustrate these proposed approaches on simulated and real data and study their performance compared to several existing rare variant detection approaches. The primary goal of this paper is to assess if there is any gain in power to detect association by averaging over a number of models instead of selecting the best model. Extensive simulations and real data application demonstrate the advantage of the proposed approach in the presence of causal variants with opposite directional effects along with a moderate number of null variants in linkage disequilibrium. PMID:26436424
NASA Astrophysics Data System (ADS)
Davies, G. R.; Chaplin, W. J.; Elsworth, Y.; Hale, S. J.
2014-07-01
The Birmingham Solar Oscillations Network (BiSON) has provided high-quality high-cadence observations from as far back in time as 1978. These data must be calibrated from the raw observations into radial velocity and the quality of the calibration has a large impact on the signal-to-noise ratio of the final time series. The aim of this work is to maximize the potential science that can be performed with the BiSON data set by optimizing the calibration procedure. To achieve better levels of signal-to-noise ratio, we perform two key steps in the calibration process: we attempt a correction for terrestrial atmospheric differential extinction; and the resulting improvement in the calibration allows us to perform weighted averaging of contemporaneous data from different BiSON stations. The improvements listed produce significant improvement in the signal-to-noise ratio of the BiSON frequency-power spectrum across all frequency ranges. The reduction of noise in the power spectrum will allow future work to provide greater constraint on changes in the oscillation spectrum with solar activity. In addition, the analysis of the low-frequency region suggests that we have achieved a noise level that may allow us to improve estimates of the upper limit of g-mode amplitudes.
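The abstract does not state BiSON's exact weighting scheme; the standard choice for averaging contemporaneous measurements with independent noise is the inverse-variance weighted mean, sketched here as an illustrative assumption:

```python
import numpy as np

def combine_stations(values, variances):
    """Inverse-variance weighted average of contemporaneous measurements,
    optimal when the stations' noise is independent and Gaussian.

    Returns (combined value, variance of the combination)."""
    v = np.asarray(values, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)
    return float(np.sum(w * v) / np.sum(w)), float(1.0 / np.sum(w))
```

The combined variance is always below the smallest single-station variance, which is why weighted averaging of overlapping station data improves the signal-to-noise ratio of the final time series.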
Woolcock, Patrick J; Koziel, Jacek A; Cai, Lingshuang; Johnston, Patrick A; Brown, Robert C
2013-03-15
Time-weighted average (TWA) passive sampling using solid-phase microextraction (SPME) and gas chromatography was investigated as a new method of collecting, identifying and quantifying contaminants in process gas streams. Unlike previous TWA-SPME techniques using the retracted fiber configuration (fiber within needle) to monitor ambient conditions or relatively stagnant gases, this method was developed for fast-moving process gas streams at temperatures approaching 300 °C. The goal was to develop a consistent and reliable method of analyzing low concentrations of contaminants in hot gas streams without performing time-consuming exhaustive extraction with a slipstream. This work in particular aims to quantify trace tar compounds found in a syngas stream generated from biomass gasification. This paper evaluates the concept of retracted SPME at high temperatures by testing the three essential requirements for TWA passive sampling: (1) zero-sink assumption, (2) consistent and reliable response by the sampling device to changing concentrations, and (3) equal concentrations in the bulk gas stream relative to the face of the fiber syringe opening. Results indicated the method can accurately predict gas stream concentrations at elevated temperatures. Evidence was also discovered to validate the existence of a second boundary layer within the fiber during the adsorption/absorption process. This limits the technique to operating within reasonable mass loadings and loading rates, established by appropriate sampling depths and times for concentrations of interest. A limit of quantification for the benzene model tar system was estimated at 0.02 g m(-3) (8 ppm) with a limit of detection of 0.5 mg m(-3) (200 ppb). Using the appropriate conditions, the technique was applied to a pilot-scale fluidized-bed gasifier to verify its feasibility. Results from this test were in good agreement with literature and prior pilot plant operation, indicating the new method can measure low
NASA Astrophysics Data System (ADS)
Gasser, Guy; Pankratov, Irena; Elhanany, Sara; Glazman, Hillel; Lev, Ovadia
2014-05-01
A methodology used to estimate the percentage of wastewater effluent in an otherwise pristine water site is proposed on the basis of the weighted mean of the level of a consortium of indicator pollutants. This method considers the levels of uncertainty in the evaluation of each of the indicators in the site, potential effluent sources, and uncontaminated surroundings. A detailed demonstrative study was conducted on a site that is potentially subject to wastewater leakage. The research concentrated on several perched springs that are influenced to an unknown extent by agricultural communities. A comparison was made to a heavily contaminated site receiving wastewater effluent and surface water runoff. We investigated six springs in two nearby ridges where fecal contamination was detected in the past; the major sources of pollution in the area have since been diverted to a wastewater treatment system. We used chloride, acesulfame, and carbamazepine as domestic pollution tracers. Good correlation (R2 > 0.86) was observed between the mixing ratio predictions based on the two organic tracers (the slope of the linear regression was 1.05), whereas the chloride predictions differed considerably. This methodology is potentially useful, particularly for cases in which detailed hydrological modeling is unavailable but in which quantification of wastewater penetration is required. We demonstrate that the use of more than one tracer for estimation of the mixing ratio reduces the combined uncertainty level associated with the estimate and can also help to disqualify biased tracers.
Michiels, A; Piepers, S; Ulens, T; Van Ransbeeck, N; Del Pozo Sacristán, R; Sierens, A; Haesebrouck, F; Demeyer, P; Maes, D
2015-09-01
The present study investigated the simultaneous influence of particulate matter (PM10) and ammonia (NH3) on performance, lung lesions and the presence of Mycoplasma hyopneumoniae (M. hyopneumoniae) in finishing pigs. A pig herd experiencing clinical problems of M. hyopneumoniae infections was selected. In total, 1095 finishing pigs of two replicates in eight compartments each were investigated during the entire finishing period (FP). Indoor PM10 and NH3 were measured at regular intervals during the FP with two Grimm spectrometers and two Graywolf Particle Counters (PM10) and an Innova photoacoustic gas monitor (NH3). Average daily weight gain (ADG) and mortality were calculated and associated with PM10 and NH3 during the FP. Nasal swabs (10 pigs/compartment) were collected one week prior to slaughter to detect DNA of M. hyopneumoniae with nested PCR (nPCR). The prevalence and extent of pneumonia lesions, and prevalence of fissures and pleurisy were examined at slaughter (29 weeks). The results from the nasal swabs and lung lesions were associated with PM10 and NH3 during the FP and the second half of the FP. In the univariable model, increasing PM10 concentrations resulted in a higher odds of pneumonia lesions (second half of the FP: OR=8.72; P=0.015), more severe pneumonia lesions (FP: P=0.04, second half of the FP: P=0.009), a higher odds of pleurisy lesions (FP: OR=20.91; P<0.001 and second half of the FP: OR=40.85; P<0.001) and a higher number of nPCR positive nasal samples (FP: OR=328.00; P=0.01 and second half of the FP: OR=185.49; P=0.02). Increasing NH3 concentrations in the univariable model resulted in a higher odds of pleurisy lesions (FP: OR=21.54; P=0.003) and a higher number of nPCR positive nasal samples (FP: OR=70.39; P=0.049; second half of the FP: OR=8275.05; P=0.01). In the multivariable model, an increasing PM10 concentration resulted in a higher odds of pleurisy lesions (FP: OR=8.85; P=0.049). These findings indicate that the respiratory health
Wells, Frank C.; Schertz, Terry L.
1984-01-01
A computer program using the Statistical Analysis System has been developed to perform the arithmetic calculations and regression analyses to determine volume-weighted-average concentrations of selected water-quality constituents in lakes and reservoirs. The program has been used in Texas to show decreasing trends in dissolved-solids and total-phosphorus concentrations in Lake Arlington after the discharge of sewage effluent into the reservoir was stopped. The program also was used to show that the August 1978 and October 1981 floods on the Brazos River greatly decreased the volume-weighted-average concentrations of selected constituents in Hubbard Creek Reservoir and Possum Kingdom Lake.
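The volume-weighted-average calculation the program automates is, at its core, a ratio of summed constituent loads to summed volumes. A minimal sketch in Python (rather than the SAS of the original program; the layer values below are illustrative, not data from the report):

```python
def volume_weighted_average(concentrations, volumes):
    """Return sum(c_i * V_i) / sum(V_i) for paired layer data."""
    if len(concentrations) != len(volumes):
        raise ValueError("inputs must be the same length")
    total_volume = sum(volumes)
    weighted_mass = sum(c * v for c, v in zip(concentrations, volumes))
    return weighted_mass / total_volume

# Example: dissolved-solids concentrations (mg/L) for three depth layers
# with differing volumes (illustrative units and values).
vwa = volume_weighted_average([200.0, 250.0, 300.0], [50.0, 30.0, 20.0])
```

Deeper, higher-concentration layers contribute in proportion to the water volume they represent, so the result sits closer to the concentration of the largest layer.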
Code of Federal Regulations, 2012 CFR
2012-07-01
... weighted average in Equation 2 of § 63.2840 to determine the compliance ratio. (b) To determine the volume... determine chemical properties of the solvent and the volume percentage of all HAP components present in the... by the total volume of all deliveries as expressed in Equation 1 of this section. Record the...
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Cheng, Yang; Crassidis, John L.; Oshman, Yaakov
2007-01-01
Many applications require an algorithm that averages quaternions in an optimal manner. For example, when combining the quaternion outputs of multiple star trackers having this output capability, it is desirable to properly average the quaternions without recomputing the attitude from the raw star tracker data. Other applications requiring some sort of optimal quaternion averaging include particle filtering and multiple-model adaptive estimation, where weighted quaternions are used to determine the quaternion estimate. For spacecraft attitude estimation applications, prior work derives an optimal averaging scheme to compute the average of a set of weighted attitude matrices using the singular value decomposition method. Focusing on a 4-dimensional quaternion Gaussian distribution on the unit hypersphere, other work provides an approach to computing the average quaternion by minimizing a quaternion cost function that is equivalent to the attitude matrix cost function. Motivated by and extending these results, this Note derives an algorithm that determines an optimal average quaternion from a set of scalar- or matrix-weighted quaternions. Furthermore, a sufficient condition for the uniqueness of the average quaternion, and the equivalence of the minimization problem, stated herein, to maximum likelihood estimation, are shown.
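A common formulation of the scalar-weighted case (a sketch consistent with this eigenvector-based line of work, not necessarily the Note's exact algorithm) takes the average quaternion to be the eigenvector associated with the largest eigenvalue of the weighted outer-product matrix M = sum_i w_i q_i q_i^T, which correctly treats q and -q as the same rotation:

```python
import numpy as np

def average_quaternion(quats, weights):
    """Average unit quaternions (rows of `quats`) as the eigenvector of
    the largest eigenvalue of M = sum_i w_i * q_i q_i^T (scalar weights)."""
    M = np.zeros((4, 4))
    for q, w in zip(quats, weights):
        q = np.asarray(q, dtype=float)
        q /= np.linalg.norm(q)          # enforce unit norm
        M += w * np.outer(q, q)         # q and -q contribute identically
    eigvals, eigvecs = np.linalg.eigh(M)
    return eigvecs[:, -1]               # eigh sorts eigenvalues ascending

# Two agreeing star-tracker quaternions (illustrative values)
q_avg = average_quaternion([[1, 0, 0, 0], [1, 0, 0, 0]], [0.5, 0.5])
```

Because the sign of an eigenvector is arbitrary, the result may come back as -q rather than q; both represent the same attitude.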
Spatially-averaged and point measurements of wind variability in the Geyser's area
Porch, W.M.
1980-05-01
This paper describes the results of a comparison of wind measurements made with conventional cup-vane tower mounted anemometers and optical space-averaged anemometer techniques. The results described cover the period from 7/17/79 to 7/27/79 during the intensive ASCOT experiment in the Geyser's region. The average height of the laser beam above terrain was about 30 meters. Most of the optical anemometer wind data was obtained using a laser beam system described in detail by Lawrence, et al. Some measurements were also made along the same path using a white light photodiode array system developed at LLL.
NASA Astrophysics Data System (ADS)
Kassianov, Evgueni; Barnard, James; Flynn, Connor; Riihimaki, Laura; Marinovici, Cristina
2015-10-01
Areal-averaged albedos are particularly difficult to measure in coastal regions, because the surface is not homogenous, consisting of a sharp demarcation between land and water. With this difficulty in mind, we evaluate a simple retrieval of areal-averaged surface albedo using ground-based measurements of atmospheric transmission alone under fully overcast conditions. To illustrate the performance of our retrieval, we find the areal-averaged albedo using measurements from the Multi-Filter Rotating Shadowband Radiometer (MFRSR) at five wavelengths (415, 500, 615, 673, and 870 nm). These MFRSR data are collected at a coastal site in Graciosa Island, Azores, supported by the U.S. Department of Energy's (DOE's) Atmospheric Radiation Measurement (ARM) Program. The areal-averaged albedos obtained from the MFRSR are compared with collocated and coincident Moderate Resolution Imaging Spectroradiometer (MODIS) white-sky albedo at four nominal wavelengths (470, 560, 670 and 860 nm). These comparisons are made during a 19-month period (June 2009 - December 2010). We also calculate composite-based spectral values of surface albedo by a weighted-average approach using estimated fractions of major surface types observed in an area surrounding this coastal site. Taken as a whole, these three methods of finding albedo show spectral and temporal similarities, and suggest that our simple, transmission-based technique holds promise, but with estimated errors of about ±0.03. Additional work is needed to reduce this uncertainty in areas with inhomogeneous surfaces.
Kassianov, Evgueni I.; Barnard, James C.; Flynn, Connor J.; Riihimaki, Laura D.; Marinovici, Maria C.
2015-10-15
Areal-averaged albedos are particularly difficult to measure in coastal regions, because the surface is not homogenous, consisting of a sharp demarcation between land and water. With this difficulty in mind, we evaluate a simple retrieval of areal-averaged surface albedo using ground-based measurements of atmospheric transmission alone under fully overcast conditions. To illustrate the performance of our retrieval, we find the areal-averaged albedo using measurements from the Multi-Filter Rotating Shadowband Radiometer (MFRSR) at five wavelengths (415, 500, 615, 673, and 870 nm). These MFRSR data are collected at a coastal site in Graciosa Island, Azores, supported by the U.S. Department of Energy’s (DOE’s) Atmospheric Radiation Measurement (ARM) Program. The areal-averaged albedos obtained from the MFRSR are compared with collocated and coincident Moderate Resolution Imaging Spectroradiometer (MODIS) white-sky albedo at four nominal wavelengths (470, 560, 670 and 860 nm). These comparisons are made during a 19-month period (June 2009 - December 2010). We also calculate composite-based spectral values of surface albedo by a weighted-average approach using estimated fractions of major surface types observed in an area surrounding this coastal site. Taken as a whole, these three methods of finding albedo show spectral and temporal similarities, and suggest that our simple, transmission-based technique holds promise, but with estimated errors of about ±0.03. Additional work is needed to reduce this uncertainty in areas with inhomogeneous surfaces.
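The composite-based weighted-average step described above reduces to an area-weighted sum of per-surface-type albedos. A minimal sketch (the surface types and values below are illustrative, not the actual Graciosa Island fractions):

```python
def composite_albedo(fractions, albedos):
    """Area-weighted composite albedo: each surface type's spectral albedo
    weighted by its estimated areal fraction (fractions should sum to 1)."""
    total = sum(fractions)
    return sum(f * a for f, a in zip(fractions, albedos)) / total

# Illustrative 500-nm example: 60% water, 30% vegetation, 10% bare rock
alb = composite_albedo([0.6, 0.3, 0.1], [0.05, 0.15, 0.25])
```

Dividing by the fraction total makes the function tolerant of fractions that sum to slightly less than 1 (e.g. after an unclassified remainder is dropped).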
ERIC Educational Resources Information Center
Warne, Russell T.; Nagaishi, Chanel; Slade, Michael K.; Hermesmeyer, Paul; Peck, Elizabeth Kimberli
2014-01-01
While research has shown the statistical significance of high school grade point averages (HSGPAs) in predicting future academic outcomes, the systems with which HSGPAs are calculated vary drastically across schools. Some schools employ unweighted grades that carry the same point value regardless of the course in which they are earned; other…
ERIC Educational Resources Information Center
Sadler, Philip M.; Tai, Robert H.
2007-01-01
Honors and advanced placement (AP) courses are commonly viewed as more demanding than standard high school offerings. Schools employ a range of methods to account for such differences when calculating grade point average and the associated rank in class for graduating students. In turn, these statistics have a sizeable impact on college admission…
Thompson, Amanda L; Adair, Linda; Bentley, Margaret E
2014-01-01
Biomedical researchers have raised concerns that mothers’ inability to recognize infant and toddler overweight poses a barrier to stemming increasing rates of overweight and obesity, particularly among low-income or minority mothers. Little anthropological research has examined the sociocultural, economic or structural factors shaping maternal perceptions of infant and toddler size or addressed biomedical depictions of maternal misperception as a “socio-cultural problem.” We use qualitative and quantitative data from 237 low-income, African-American mothers to explore how they define ‘normal’ infant growth and infant overweight. Our quantitative results document that mothers’ perceptions of infant size change with infant age, are sensitive to the size of other infants in the community, and are associated with concerns over health and appetite. Qualitative analysis documents that mothers are concerned with their children’s weight status and assess size in relation to their infants’ cues, local and societal norms of appropriate size, interactions with biomedicine, and concerns about infant health and sufficiency. These findings suggest that mothers use multiple models to interpret and respond to child weight. An anthropological focus on the complex social and structural factors shaping what is considered ‘normal’ and ‘abnormal’ infant weight is critical for shaping appropriate and successful interventions. PMID:25684782
Sether, Bradley A.; Berkas, Wayne R.; Vecchia, Aldo V.
2004-01-01
associated with each estimated annual load. The estimated annual loads for the eight primary sites then were used to estimate annual loads for five intervening reaches in the study area. Results were used as a screening tool to identify which subbasins contributed a disproportionate amount of pollutants to the Red River. To compare the relative water quality of the different subbasins, an estimated flow-weighted average (FWA) concentration was computed from the estimated average annual load and the average annual streamflow for each subbasin. The 5-day biochemical oxygen demands in the upper Red River Basin were fairly small, and medians ranged from 1 to 3 milligrams per liter. The largest estimated FWA concentration for dissolved solids (about 630 milligrams per liter) was for the Bois de Sioux River near Doran, Minn., site. The Otter Tail River above Breckenridge, Minn., site had the smallest estimated FWA concentration (about 240 milligrams per liter). The estimated FWA concentrations for dissolved solids for the main-stem sites ranged from about 300 to 500 milligrams per liter and generally increased in a downstream direction. The estimated FWA concentrations for total nitrite plus nitrate for the main-stem sites increased from about 0.2 milligram per liter for the Red River below Wahpeton, N. Dak., site to about 0.9 milligram per liter for the Red River at Perley, Minn., site. Much of the increase probably resulted from flows from the tributary sites and intervening reaches, excluding the Otter Tail River above Breckenridge, Minn., site. However, uncertainty in the estimated concentrations prevented any reliable conclusions regarding which sites or reaches contributed most to the increase. The estimated FWA concentrations for total ammonia for the main-stem sites increased from about 0.05 milligram per liter for the Red River above Fargo, N. Dak., site to about 0.15 milligram per liter for the Red River near Harwood, N. Dak., site. T
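In discrete form, a flow-weighted average concentration is the total constituent load divided by the total flow volume over the record, equivalent to computing it from average annual load and average annual streamflow as the report does. A minimal sketch (illustrative values, not the report's estimates):

```python
def flow_weighted_average(concentrations_mg_per_l, flows_l_per_day):
    """FWA concentration (mg/L): total load, sum(c_i * Q_i), divided by
    total flow volume, sum(Q_i), over the sampled record."""
    load = sum(c * q for c, q in zip(concentrations_mg_per_l, flows_l_per_day))
    volume = sum(flows_l_per_day)
    return load / volume

# Three illustrative samples; the high-flow day dominates the average
fwa = flow_weighted_average([240.0, 300.0, 630.0], [2.0, 1.0, 1.0])
```

Because weighting follows flow, a dilute sample taken at high flow pulls the FWA concentration down more than a concentrated sample at low flow pulls it up.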
The daily computed weighted averaging basic reproduction number R0,k,ωn for MERS-CoV in South Korea
NASA Astrophysics Data System (ADS)
Jeong, Darae; Lee, Chang Hyeong; Choi, Yongho; Kim, Junseok
2016-06-01
In this paper, we propose the daily computed weighted averaging basic reproduction number R0,k,ωn for the Middle East respiratory syndrome coronavirus (MERS-CoV) outbreak in South Korea, May to July 2015. We use an SIR model with piecewise constant parameters β (contact rate) and γ (removed rate). We use the explicit Euler's method for the solution of the SIR model and a nonlinear least-square fitting procedure for finding the best parameters. In R0,k,ωn, the parameters n, k, and ω denote days from a reference date, the number of days in averaging, and a weighting factor, respectively. We perform a series of numerical experiments and compare the results with the real-world data. In particular, using the predicted reproduction number based on the previous two consecutive reproduction numbers, we can predict the future behavior of the reproduction number.
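An explicit-Euler integration of the SIR model, together with the basic reproduction number R0 = β/γ for piecewise-constant parameters, can be sketched as follows (parameter values are illustrative, not the fitted MERS-CoV values):

```python
def sir_euler(beta, gamma, s0, i0, r0, dt, steps):
    """Integrate dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I,
    dR/dt = gamma*I with the explicit Euler method.
    States are population fractions, so S + I + R stays constant."""
    s, i, r = s0, i0, r0
    for _ in range(steps):
        new_inf = beta * s * i * dt   # susceptibles becoming infectious
        new_rem = gamma * i * dt      # infectious becoming removed
        s, i, r = s - new_inf, i + new_inf - new_rem, r + new_rem
    return s, i, r

def basic_reproduction_number(beta, gamma):
    """R0 = beta / gamma for one piecewise-constant parameter segment."""
    return beta / gamma

# Illustrative outbreak over 100 days with R0 = 2.5
s, i, r = sir_euler(beta=0.5, gamma=0.2, s0=0.99, i0=0.01, r0=0.0,
                    dt=0.1, steps=1000)
```

The paper's daily R0,k,ωn then weight-averages such segment-wise reproduction numbers over a k-day window.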
Numerous urban canopy schemes have recently been developed for mesoscale models in order to approximate the drag and turbulent production effects of a city on the air flow. However, little data exists by which to evaluate the efficacy of the schemes since "area-averaged"...
Collins, Alison M; Barchia, Idris M
2014-01-31
Serology indicates that Lawsonia intracellularis infection is widespread in many countries, with most pigs seroconverting before 22 weeks of age. However, the majority of animals appear to be sub-clinically affected, demonstrated by the low reported prevalence of diarrhoea. Production losses caused by sub-clinical proliferative enteropathy (PE) are more difficult to diagnose, indicating the need for a quantitative L. intracellularis assay that correlates well with disease severity. In previous studies, increasing numbers of L. intracellularis in pig faeces, quantified with a real time polymerase chain reaction (qPCR), showed a strong negative correlation with average daily gain (ADG). In this study, the association between faecal L. intracellularis numbers and PE severity was examined in two L. intracellularis experimental challenge trials (n1=32 and n2=95). The number of L. intracellularis shed in individual faeces was determined by qPCR on days 0, 7, 14, 17 and 21 days post challenge, and average daily gain was recorded over the same period. The severity of histopathological lesions of PE was scored at 21 days post challenge. L. intracellularis numbers correlated well with histopathology severity and faecal consistency scores (r=0.72 and 0.68, respectively), and negatively with ADG (r=-0.44). Large reductions in ADG (131 g/day) occurred when the number of L. intracellularis shed by experimentally challenged pigs increased from 10⁷ to 10⁸ L. intracellularis, although smaller ADG reductions were also observed (15 g/day) when the number of L. intracellularis increased from 10⁶ to 10⁷ L. intracellularis. PMID:24388631
Südmeyer, T; Brunner, F; Innerhofer, E; Paschotta, R; Furusawa, K; Baggett, J C; Monro, T M; Richardson, D J; Keller, U
2003-10-15
We demonstrate that nonlinear fiber compression is possible at unprecedented average power levels by use of a large-mode-area holey (microstructured) fiber and a passively mode-locked thin disk Yb:YAG laser operating at 1030 nm. We broaden the optical spectrum of the 810-fs pump pulses by nonlinear propagation in the fiber and remove the resultant chirp with a dispersive prism pair to achieve 18 W of average power in 33-fs pulses with a peak power of 12 MW and a repetition rate of 34 MHz. The output beam is nearly diffraction limited and is linearly polarized. PMID:14587786
NASA Astrophysics Data System (ADS)
Elmore, A. J.; Guinn, S. M.
2009-12-01
Land surface phenology (LSP) is the seasonal pattern of vegetation dynamics that occur each spring and fall. Multiple drivers of spatial variation in LSP and its variation over time have been analyzed using satellite remote sensing. Until recently, these observations have been restricted to moderate- and low-resolution data, as it is only at these spatial resolutions that temporally continuous data are available. However, understanding small-scale variation in LSP over space and time may be key to linking pattern to process, and in particular, could be used to understand how ecological processes at the stand level scale to landscapes and continents. Through utilization of the large, and now free, Landsat record, recent research has led to the development of robust methods for calculating average phenological patterns at 30-m resolution by stacking two decades worth of data by acquisition day of year (DOY). Here we have extended these techniques to calculate the deviation from the average LSP for any given acquisition DOY-year combination. We model the average LSP as two sigmoid functions, one increasing in spring and a second decreasing in fall, connected by a sloped line representing gradual summer leaf area changes (see Figure). Deviation from the average LSP is considered here to take two forms: (1) residual vegetation cover in mid- to late-summer represents locations in which disturbance, drought, or (alternatively) better than average growing conditions have resulted in a separation (either negative or positive) from the average vegetation cover for that DOY, and (2) climate conditions that result in an earlier or later onset of greenness, exhibited as a separation from the average spring onset of greenness curve in the DOY direction (either early or late). Our study system for this work is the deciduous forests of the mid-Atlantic, USA, where we show that late summer vegetation cover is tied to edaphic properties governing the site-specific soil moisture
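The average-LSP curve described above (a spring rise and a fall decline) is often approximated by a product of two logistic functions. The sketch below uses that common double-sigmoid form rather than the paper's exact sigmoid-plus-sloped-line parameterization, and all parameter names and values are illustrative:

```python
import math

def lsp_model(doy, v_min, v_max, spring_mid, spring_rate, fall_mid, fall_rate):
    """Double-sigmoid phenology curve: vegetation cover rises with a
    spring logistic and declines with an autumn logistic.
    doy        -- day of year
    v_min/max  -- dormant-season and peak-summer vegetation cover
    *_mid      -- DOY of the spring / fall inflection points
    *_rate     -- steepness of the spring rise / fall decline"""
    spring = 1.0 / (1.0 + math.exp(-spring_rate * (doy - spring_mid)))
    fall = 1.0 / (1.0 + math.exp(fall_rate * (doy - fall_mid)))
    return v_min + (v_max - v_min) * spring * fall
```

Deviations of an individual acquisition from this fitted curve, vertically (cover anomaly) or horizontally (early/late onset), correspond to the two forms of deviation the abstract distinguishes.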
The Average Body Surface Area of Adult Cancer Patients in the UK: A Multicentre Retrospective Study
Sacco, Joseph J.; Botten, Joanne; Macbeth, Fergus; Bagust, Adrian; Clark, Peter
2010-01-01
The majority of chemotherapy drugs are dosed based on body surface area (BSA). No standard BSA values for patients being treated in the United Kingdom are available on which to base dose and cost calculations. We therefore retrospectively assessed the BSA of patients receiving chemotherapy treatment at three oncology centres in the UK between 1st January 2005 and 31st December 2005. A total of 3613 patients receiving chemotherapy for head and neck, ovarian, lung, upper GI/pancreas, breast or colorectal cancers were included. The overall mean BSA was 1.79 m² (95% CI 1.78–1.80), with a mean BSA for men of 1.91 m² (1.90–1.92) and 1.71 m² (1.70–1.72) for women. Results were consistent across the three centres. No significant differences were noted between treatment in the adjuvant or palliative setting in patients with breast or colorectal cancer. However, statistically significant, albeit small, differences were detected between some tumour groups. In view of the consistency of results between three geographically distinct UK cancer centres, we believe the results of this study may be generalised and used in future costings and budgeting for new chemotherapy agents in the UK. PMID:20126669
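The abstract does not state which BSA formula the three centres used; the Mosteller formula is one common clinical choice, shown here only to illustrate how a BSA near the reported 1.79 m² mean arises for a typical adult:

```python
import math

def bsa_mosteller(height_cm, weight_kg):
    """Mosteller formula: BSA (m^2) = sqrt(height_cm * weight_kg / 3600)."""
    return math.sqrt(height_cm * weight_kg / 3600.0)

# Illustrative adult: 170 cm, 68 kg -> BSA close to the study's 1.79 m^2 mean
bsa = bsa_mosteller(170.0, 68.0)
```

Chemotherapy doses quoted in mg/m² are then multiplied by this value, which is why a population-mean BSA feeds directly into drug cost estimates.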
Gorsevski, Pece V; Donevska, Katerina R; Mitrovski, Cvetko D; Frizado, Joseph P
2012-02-01
This paper presents a GIS-based multi-criteria decision analysis approach for evaluating the suitability for landfill site selection in the Polog Region, Macedonia. The multi-criteria decision framework considers environmental and economic factors which are standardized by fuzzy membership functions and combined by integration of analytical hierarchy process (AHP) and ordered weighted average (OWA) techniques. The AHP is used for the elicitation of attribute weights while the OWA operator function is used to generate a wide range of decision alternatives for addressing uncertainty associated with interaction between multiple criteria. The usefulness of the approach is illustrated by different OWA scenarios that report landfill suitability on a scale between 0 and 1. The OWA scenarios are intended to quantify the level of risk taking (i.e., optimistic, pessimistic, and neutral) and to facilitate a better understanding of patterns that emerge from decision alternatives involved in the decision making process. PMID:22030279
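The OWA operator ranks the standardized criterion values and applies position-based (rather than criterion-based) weights, so shifting weight toward the top-ranked values yields optimistic aggregations and toward the bottom-ranked values pessimistic ones. A minimal sketch (criterion values and weights are illustrative):

```python
def ordered_weighted_average(values, order_weights):
    """OWA operator: sort criterion values in descending order, then take
    the dot product with the order weights (which must sum to 1)."""
    if abs(sum(order_weights) - 1.0) > 1e-9:
        raise ValueError("order weights must sum to 1")
    ranked = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(order_weights, ranked))

# Neutral (equal) order weights reproduce the plain arithmetic mean
owa_neutral = ordered_weighted_average([0.2, 0.9, 0.4], [1/3, 1/3, 1/3])
# All weight on the lowest-ranked value gives the pessimistic (AND-like) case
owa_pessimistic = ordered_weighted_average([0.2, 0.9, 0.4], [0.0, 0.0, 1.0])
```

Sweeping the order weights between these extremes generates the family of risk-taking scenarios the paper explores for landfill suitability.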
Idih, E. E.; Ezem, B. U.; Nzeribe, E. A.; Onyegbule, A. O.; Duru, B. C.; Amajoyi, C. C.
2016-01-01
Background: Despite the global efforts made to eradicate malaria, it continues to be a significant cause of morbidity and mortality in both neonates and parturients. This study was done to determine the relationship between placental parasitemia and average neonatal birth weight, and the relationship between the use of malaria preventive measures and the occurrence of placental parasitemia, with the aim of improving maternal and neonatal outcome. Patients and Methods: This cross-sectional study was done at the labor ward unit of the Federal Medical Center, Owerri, from December 2013 to May 2014. It involved one hundred and eighty primigravida-baby pairs recruited consecutively. Thick and thin blood films were made from maternal peripheral blood and placenta. The babies were examined and weighed immediately after delivery. Results: Most of the participants had only one dose of intermittent preventive therapy (75%), with a statistically significant higher level of fever episodes (P < 0.0001). Forty participants (58.0%) did not use any form of malaria preventive measure in pregnancy (P < 0.0001) and had significantly higher placental parasitemia when compared with their counterparts. Average birth weight of neonates with placental parasitemia in mothers who used intermittent preventive therapy (IPT) only (t = 2.22, P = 0.005), and IPT + insecticide-treated net (ITN) (t = 7.91, P ≤ 0.000), was significantly higher than that of those who did not use any form of malaria prevention in pregnancy (t = 4.69, P ≤ 0.0001). Conclusion: Primigravidae with placental or maternal peripheral parasitemia who failed to use malaria preventive measures delivered babies with reduced average birth weight. A scheme aimed at making ITN readily available, and improving the girl child education, is highly recommended.
Tsodikov, Oleg V; Record, M Thomas; Sergeev, Yuri V
2002-04-30
New computer programs, SurfRace and FastSurf, perform fast calculations of the solvent accessible and molecular (solvent excluded) surface areas of macromolecules. Program SurfRace also calculates the areas of cavities inaccessible from the outside. We introduce the definition of average curvature of molecular surface and calculate average molecular surface curvatures for each atom in a structure. All surface area and curvature calculations are analytic and therefore yield exact values of these quantities. High calculation speed of this software is achieved primarily by avoiding computationally expensive mathematical procedures wherever possible and by efficient handling of surface data structures. The programs are written initially in the language C for PCs running Windows 2000/98/NT, but their code is portable to other platforms with only minor changes in input-output procedures. The algorithm is robust and does not ignore either multiplicity or degeneracy of atomic overlaps. Fast, memory-efficient and robust execution make this software attractive for applications both in computationally expensive energy minimization algorithms, such as docking or molecular dynamics simulations, and in stand-alone surface area and curvature calculations. PMID:11939594
Larsen, Inge; Hjulsager, Charlotte Kristiane; Holm, Anders; Olsen, John Elmerdahl; Nielsen, Søren Saxmose; Nielsen, Jens Peter
2016-01-01
Oral treatment with antimicrobials is widely used in pig production for the control of gastrointestinal infections. Lawsonia intracellularis (LI) causes enteritis in pigs older than six weeks of age and is commonly treated with antimicrobials. The objective of this study was to evaluate the efficacy of three oral dosage regimens (5, 10 and 20 mg/kg body weight) of oxytetracycline (OTC) in drinking water over a five-day period on diarrhoea, faecal shedding of LI and average daily weight gain (ADG). A randomised clinical trial was carried out in four Danish pig herds. In total, 539 animals from 37 batches of nursery pigs were included in the study. The dosage regimens were randomly allocated to each batch and initiated at presence of assumed LI-related diarrhoea. In general, all OTC doses used for the treatment of LI infection resulted in reduced diarrhoea and LI shedding after treatment. Treatment with a low dose of 5 mg OTC per kg body weight, however, tended to cause more watery faeces and resulted in higher odds of pigs shedding LI above detection level when compared to medium and high doses (with odds ratios of 5.5 and 8.4, respectively). No association was found between the dose of OTC and the ADG. In conclusion, a dose of 5 mg OTC per kg body weight was adequate for reducing the high-level LI shedding associated with enteropathy, but a dose of 10 mg OTC per kg body weight was necessary to obtain a maximum reduction in LI shedding. PMID:26718056
Boosting with Averaged Weight Vectors
NASA Technical Reports Server (NTRS)
Oza, Nikunj C.; Clancy, Daniel (Technical Monitor)
2002-01-01
AdaBoost is a well-known ensemble learning algorithm that constructs its constituent or base models in sequence. A key step in AdaBoost is constructing a distribution over the training examples to create each base model. This distribution, represented as a vector, is constructed to be orthogonal to the vector of mistakes made by the previous base model in the sequence. The idea is to make the next base model's errors uncorrelated with those of the previous model. Some researchers have pointed out the intuition that it is probably better to construct a distribution that is orthogonal to the mistake vectors of all the previous base models, but that this is not always possible. We present an algorithm that attempts to come as close as possible to this goal in an efficient manner. We present experimental results demonstrating significant improvement over AdaBoost and the Totally Corrective boosting algorithm, which also attempts to satisfy this goal.
NASA Technical Reports Server (NTRS)
Schols, J. L.; Eloranta, E. W.
1992-01-01
Area-averaged horizontal wind measurements are derived from the motion of spatial inhomogeneities in aerosol backscattering observed with a volume-imaging lidar. Spatial averaging provides high precision, reducing sample variations of wind measurements well below the level of turbulent fluctuations, even under conditions of very light mean winds and strong convection or under the difficult conditions represented by roll convection. Wind velocities are measured using the two-dimensional spatial cross correlation computed between successive horizontal plane maps of aerosol backscattering, assembled from three-dimensional lidar scans. Prior to calculation of the correlation function, three crucial steps are performed: (1) the scans are corrected for image distortion by the wind during a finite scan time; (2) a temporal high pass median filtering is applied to eliminate structures that do not move with the wind; and (3) a histogram equalization is employed to reduce biases to the brightest features.
NASA Astrophysics Data System (ADS)
Obata, Kenta; Miura, Tomoaki; Yoshioka, Hiroki
2012-01-01
Area-averaged vegetation index (VI) depends on spatial resolution and the computational approach used to calculate the VI from the data. Certain data treatments can introduce scaling effects and a systematic bias into datasets gathered from different sensors. This study investigated the mechanisms underlying the scaling effects of a two-band spectral VI defined in terms of the ratio of two linear sums of the red and near-infrared reflectances (a general form of the two-band VI). The general form of the VI model was linearly transformed to yield a common functional VI form that elucidated the nature of the monotonic behavior. An analytic investigation was conducted in which a two-band linear mixture model was assumed. The trends (increasing or decreasing) in the area-averaged VIs could be explained in terms of a single scalar index, ην, which may be expressed in terms of the spectra of the vegetation and nonvegetation endmembers as well as the coefficients unique to each VI. The maximum error bounds on the scaling effects were derived as a function of the endmember spectra and the choice of VI. The validity of the expressions was explored by conducting a set of numerical experiments that focused on the monotonic behavior and trends in several VIs.
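The "general form of the two-band VI", a ratio of two linear sums of the red and near-infrared reflectances, can be written directly; NDVI is the familiar special case. A sketch with illustrative coefficients and reflectance values:

```python
def two_band_vi(rho_red, rho_nir, num_coeffs, den_coeffs):
    """General two-band VI: ratio of two linear sums of the red and NIR
    reflectances. Each coefficient triple (a, b, c) encodes
    a*rho_nir + b*rho_red + c."""
    a1, b1, c1 = num_coeffs
    a2, b2, c2 = den_coeffs
    return ((a1 * rho_nir + b1 * rho_red + c1) /
            (a2 * rho_nir + b2 * rho_red + c2))

# NDVI is the special case (1, -1, 0) / (1, 1, 0); reflectances illustrative
ndvi = two_band_vi(rho_red=0.05, rho_nir=0.40,
                   num_coeffs=(1, -1, 0), den_coeffs=(1, 1, 0))
```

The scaling analysis in the abstract turns on how this ratio behaves when the reflectances are themselves area averages of vegetation and non-vegetation endmember spectra.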
Inducing Conservation of Number, Weight, Volume, Area, and Mass in Pre-School Children.
ERIC Educational Resources Information Center
Young, Beverly S.
The major question this study attempted to answer was, "Can conservation of number, area, weight, mass, and volume be induced and retained by 3- and 4-year-old children by structured instruction with a multivariate approach?" Three nursery schools in Iowa City supplied subjects for this study. The Institute of Child Behavior and Development…
Meyer, Philip D.; Ye, Ming; Neuman, Shlomo P.; Rockhold, Mark L.
2008-06-01
A methodology to systematically and quantitatively assess model predictive uncertainty was applied to saturated zone uranium transport at the 300 Area of the U.S. Department of Energy Hanford Site in Washington State, USA. The methodology extends Maximum Likelihood Bayesian Model Averaging (MLBMA) to account jointly for uncertainties due to the conceptual-mathematical basis of models, model parameters, and the scenarios to which the models are applied. Conceptual uncertainty was represented by postulating four alternative models of hydrogeology and uranium adsorption. Parameter uncertainties were represented by estimation covariances resulting from the joint calibration of each model to observed heads and uranium concentration. Posterior model probability was dominated by one model. Results demonstrated the role of model complexity and fidelity to observed system behavior in determining model probabilities, as well as the impact of prior information. Two scenarios representing alternative future behavior of the Columbia River adjacent to the site were considered. Predictive simulations carried out with the calibrated models illustrated the computation of model- and scenario-averaged predictions and how results can be displayed to clearly indicate the individual contributions to predictive uncertainty of the model, parameter, and scenario uncertainties. The application demonstrated the practicability of applying a comprehensive uncertainty assessment to large-scale, detailed groundwater flow and transport modelling.
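The model-averaged prediction step combines per-model predictions using posterior model probabilities, and the spread of the per-model predictions contributes a between-model variance term on top of each model's own predictive variance. A minimal sketch of just this averaging step (not of the likelihood-based calibration; all values illustrative):

```python
def model_averaged_prediction(predictions, posterior_probs):
    """Posterior-probability-weighted mean of per-model predictions, plus
    the between-model variance term that model averaging contributes."""
    mean = sum(p * y for p, y in zip(posterior_probs, predictions))
    between_var = sum(p * (y - mean) ** 2
                      for p, y in zip(posterior_probs, predictions))
    return mean, between_var

# Four alternative models' predicted concentrations and their posteriors;
# one model dominating the posterior mirrors the behavior reported above
mean, between_var = model_averaged_prediction(
    [10.0, 12.0, 11.0, 20.0], [0.7, 0.2, 0.05, 0.05])
```

Even a low-probability outlier model inflates the between-model variance, which is why conceptual-model uncertainty can dominate the total predictive uncertainty.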
Yaskolka Meir, Anat; Shelef, Ilan; Schwarzfuchs, Dan; Gepner, Yftach; Tene, Lilac; Zelicha, Hila; Tsaban, Gal; Bilitzky, Avital; Komy, Oded; Cohen, Noa; Bril, Nitzan; Rein, Michal; Serfaty, Dana; Kenigsbuch, Shira; Chassidim, Yoash; Zeller, Lior; Ceglarek, Uta; Stumvoll, Michael; Blüher, Matthias; Thiery, Joachim; Stampfer, Meir J; Rudich, Assaf; Shai, Iris
2016-08-01
It remains unclear whether intermuscular adipose tissue (IMAT) has any metabolic influence or whether it is merely a marker of abnormalities, as well as what are the effects of specific lifestyle strategies for weight loss on the dynamics of both IMAT and thigh muscle area (TMA). We followed the trajectory of IMAT and TMA during 18-mo lifestyle intervention among 278 sedentary participants with abdominal obesity, using magnetic resonance imaging. We measured the resting metabolic rate (RMR) by an indirect calorimeter. Among 273 eligible participants (47.8 ± 9.3 yr of age), the mean IMAT was 9.6 ± 4.6 cm². Baseline IMAT levels were directly correlated with waist circumference, abdominal subdepots, C-reactive protein, and leptin and inversely correlated with baseline TMA and creatinine (P < 0.05 for all). After 18 mo (86.3% adherence), both IMAT (-1.6%) and TMA (-3.3%) significantly decreased (P < 0.01 vs. baseline). The changes in both IMAT and TMA were similar across the lifestyle intervention groups and directly corresponded with moderate weight loss (P < 0.001). IMAT change did not remain independently associated with decreased abdominal subdepots or improved cardiometabolic parameters after adjustments for age, sex, and 18-mo weight loss. In similar models, 18-mo TMA loss remained associated with decreased RMR, decreased activity, and with increased fasting glucose levels and IMAT (P < 0.05 for all). Unlike other fat depots, IMAT may not represent a unique or specific adipose tissue, instead largely reflecting body weight change per se. Moderate weight loss induced a significant decrease in thigh muscle area, suggesting the importance of resistance training to accompany weight loss programs. PMID:27402560
NASA Astrophysics Data System (ADS)
Naik, Haladhara; Kim, Guinyun; Kim, Kwangsoo; Zaman, Muhammad; Goswami, Ashok; Lee, Man Woo; Yang, Sung-Chul; Lee, Young-Ouk; Shin, Sung-Gyun; Cho, Moo-Hyun
2016-04-01
Photo-neutron cross sections of 197Au were experimentally determined for the bremsstrahlung end-point energies of 50, 60, and 70 MeV, by utilizing the activation and off-line γ-ray spectrometric technique, using the 100 MeV electron linac at the Pohang Accelerator Laboratory (PAL), Pohang, Korea. The 197Au(γ, xn; x = 1-6) reaction cross sections were calculated as a function of the bombarding photon energy by using the TALYS 1.6 computer code with default parameters. The flux-weighted average cross sections were obtained from the literature data and the theoretical values of TALYS 1.6 and TENDL-2014, for mono-energetic photons, and are found to be in good agreement with the present data. Isomeric yield ratios of 196m2,gAu from the 197Au(γ, n) reaction were also determined for the bremsstrahlung end-point energies of 50, 60, and 70 MeV, from the reaction cross sections of the m2- and g-states, based on the present experimental data, and are found to be in good agreement with the theoretical values based on TALYS 1.6 and TENDL-2014.
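The flux-weighting step described in the abstract can be sketched as a discrete sum that stands in for integration of σ(E) over the photon spectrum. All numbers below are illustrative placeholders, not data from this experiment, and the 1/E shape is only a crude stand-in for a bremsstrahlung flux:

```python
def flux_weighted_cross_section(energies, sigma, flux):
    """Flux-weighted average cross section:
    sigma_avg = sum(sigma_i * phi_i) / sum(phi_i),
    a discrete stand-in for the integral over the photon spectrum."""
    num = sum(s * f for s, f in zip(sigma, flux))
    den = sum(flux)
    return num / den

# Hypothetical mono-energetic cross sections and a crude 1/E
# bremsstrahlung-like relative photon flux above threshold.
energies = [10, 20, 30, 40, 50]        # MeV
sigma = [0.0, 0.5, 0.3, 0.2, 0.1]      # barns (made up for illustration)
flux = [1.0 / e for e in energies]     # relative photon flux

print(f"sigma_avg = {flux_weighted_cross_section(energies, sigma, flux):.4f} b")
```

With a falling 1/E-like flux, low-energy bins dominate the weighting, which is why the weighted average sits well below the unweighted mean of the nonzero cross sections here.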
Shmool, Jessie L C; Bobb, Jennifer F; Ito, Kazuhiko; Elston, Beth; Savitz, David A; Ross, Zev; Matte, Thomas D; Johnson, Sarah; Dominici, Francesca; Clougherty, Jane E
2015-10-01
Numerous studies have linked air pollution with adverse birth outcomes, but relatively few have examined differential associations across the socioeconomic gradient. To evaluate interaction effects of gestational nitrogen dioxide (NO2) and area-level socioeconomic deprivation on fetal growth, we used: (1) highly spatially-resolved air pollution data from the New York City Community Air Survey (NYCCAS); and (2) spatially-stratified principal component analysis of census variables previously associated with birth outcomes to define area-level deprivation. New York City (NYC) hospital birth records for years 2008-2010 were restricted to full-term, singleton births to non-smoking mothers (n=243,853). We used generalized additive mixed models to examine the potentially non-linear interaction of nitrogen dioxide (NO2) and deprivation categories on birth weight (and estimated linear associations, for comparison), adjusting for individual-level socio-demographic characteristics, with sensitivity tests adjusting for co-pollutant exposures. Estimated NO2 exposures were highest, and most variable, among mothers residing in the most-affluent census tracts, and lowest among mothers residing in mid-range deprivation tracts. In non-linear models, we found an inverse association between NO2 and birth weight in the least-deprived and most-deprived areas (p-values<0.001 and 0.05, respectively) but no association in the mid-range of deprivation (p=0.8). Likewise, in linear models, a 10 ppb increase in NO2 was associated with a decrease in birth weight among mothers in the least-deprived and most-deprived areas of -16.2g (95% CI: -21.9 to -10.5) and -11.0 g (95% CI: -22.8 to 0.9), respectively, and a non-significant change in the mid-range areas [β=0.5 g (95% CI: -7.7 to 8.7)]. Linear slopes in the most- and least-deprived quartiles differed from the mid-range (reference group) (p-values<0.001 and 0.09, respectively). The complex patterning in air pollution exposure and deprivation
Baimatova, Nassiba; Koziel, Jacek A; Kenessov, Bulat
2015-05-11
A new and simple method for benzene, toluene, ethylbenzene and o-xylene (BTEX) quantification in vehicle exhaust was developed based on diffusion-controlled extraction onto a retracted solid-phase microextraction (SPME) fiber coating. The rationale was to develop a method based on existing and proven SPME technology that is feasible for field adaptation in developing countries. Passive sampling with the SPME fiber retracted into the needle extracted nearly two orders of magnitude less mass (n) compared with the exposed fiber (outside of the needle), and sampling was in a time-weighted averaging (TWA) mode. Both the sampling time (t) and fiber retraction depth (Z) were adjusted to quantify a wider range of Cgas. Extraction and quantification are conducted in a non-equilibrium mode. The effects of Cgas, t, Z and T were tested. In addition, the contribution of n extracted by the metallic surfaces of the needle assembly without the SPME coating was studied, as was the effect of sample storage time on n loss. Retracted TWA-SPME extractions followed the theoretical model. The extracted n of BTEX was proportional to Cgas, t, Dg and T, and inversely proportional to Z. Method detection limits were 1.8, 2.7, 2.1 and 5.2 mg m(-3) (0.51, 0.83, 0.66 and 1.62 ppm) for BTEX, respectively. The contribution of extraction onto metallic surfaces was reproducible and influenced by Cgas and t, and less so by T and Z. The new method was applied to measure BTEX in the exhaust gas of a 1995 Ford Crown Victoria and compared with a whole gas and direct injection method. PMID:25911428
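The proportionalities reported above (n ∝ Cgas, t, Dg; n ∝ 1/Z) are consistent with a Fick's-first-law model of diffusion through the needle opening, the usual theoretical basis for retracted-fiber TWA-SPME. A minimal sketch, with every numeric value assumed for illustration rather than taken from the paper:

```python
def twa_spme_mass(D_g, A, C_gas, t, Z):
    """Extracted mass for a retracted-fiber TWA sampler (Fick's first law):
    n = D_g * A * (C_gas / Z) * t
    -- proportional to C_gas, t and D_g, inversely proportional to the
    fiber retraction depth Z (A is the needle opening cross-section)."""
    return D_g * A * (C_gas / Z) * t

# Hypothetical numbers, order-of-magnitude only:
D_g = 0.088e-4   # gas diffusion coefficient, m^2/s (approx. benzene in air)
A = 8.0e-7       # needle opening cross-sectional area, m^2 (assumed)
C_gas = 1.8e-3   # gas-phase concentration, kg/m^3
t = 30 * 60      # sampling time, s
Z = 0.005        # fiber retraction depth, m (assumed)

print(f"extracted mass n = {twa_spme_mass(D_g, A, C_gas, t, Z):.3e} kg")
```

Doubling t or halving Z doubles n in this model, which is exactly the knob the authors turn to cover a wider Cgas range.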
MPWide: a light-weight library for efficient message passing over wide area networks
NASA Astrophysics Data System (ADS)
Groen, D.; Rieder, S.; Portegies Zwart, S.
2013-12-01
We present MPWide, a light-weight communication library which allows efficient message passing over a distributed network. MPWide has been designed to connect applications running on distributed (super)computing resources, and to maximize the communication performance on wide area networks for those without administrative privileges. It can be used to provide message passing between applications, move files, and make very fast connections in client-server environments. MPWide has already been applied to enable distributed cosmological simulations across up to four supercomputers on two continents, and to couple two different blood flow simulations to form a multiscale simulation.
Krpálková, L; Cabrera, V E; Kvapilík, J; Burdych, J; Crump, P
2014-10-01
The objective of this study was to evaluate the associations of variable intensity in rearing dairy heifers on 33 commercial dairy herds, including 23,008 cows and 18,139 heifers, with age at first calving (AFC), average daily weight gain (ADG), and milk yield (MY) level on reproduction traits and profitability. Milk yield during the production period was analyzed relative to reproduction and economic parameters. Data were collected during a 1-yr period (2011). The farms were located in 12 regions in the Czech Republic. The results show that those herds with more intensive rearing periods had lower conception rates among heifers at first and overall services. The differences in those conception rates between the group with the greatest ADG (≥0.800 kg/d) and the group with the least ADG (≤0.699 kg/d) were approximately 10 percentage points in favor of the least ADG. All the evaluated reproduction traits differed between AFC groups. Conception at first and overall services (cows) was greatest in herds with AFC ≥800 d. The shortest days open (105 d) and calving interval (396 d) were found in the middle AFC group (799 to 750 d). The highest number of completed lactations (2.67) was observed in the group with latest AFC (≥800 d). The earliest AFC group (≤749 d) was characterized by the highest depreciation costs per cow at 8,275 Czech crowns (US$414), and the highest culling rate for cows of 41%. The most profitable rearing approach was reflected in the middle AFC (799 to 750 d) and middle ADG (0.799 to 0.700 kg) groups. The highest MY (≥8,500 kg) occurred with the earliest AFC of 780 d. Higher MY led to lower conception rates in cows, but the highest MY group also had the shortest days open (106 d) and a calving interval of 386 d. The same MY group had the highest cow depreciation costs, net profit, and profitability without subsidies of 2.67%. We conclude that achieving low AFC will not always be the most profitable approach, which will depend upon farm
White, R R; Capper, J L
2013-12-01
The objective of this study was to assess environmental impact, economic viability, and social acceptability of 3 beef production systems with differing levels of efficiency. A deterministic model of U.S. beef production was used to predict the number of animals required to produce 1 × 10(9) kg HCW beef. Three production treatments were compared, 1 representing average U.S. production (control), 1 with a 15% increase in ADG, and 1 with a 15% increase in finishing weight (FW). For each treatment, various socioeconomic scenarios were compared to account for uncertainty in producer and consumer behavior. Environmental impact metrics included feed consumption, land use, water use, greenhouse gas emissions (GHGe), and N and P excretion. Feed cost, animal purchase cost, animal sales revenue, and income over costs (IOVC) were used as metrics of economic viability. Willingness to pay (WTP) was used to identify improvements or reductions in social acceptability. When ADG improved, feedstuff consumption, land use, and water use decreased by 6.4%, 3.2%, and 12.3%, respectively, compared with the control. Carbon footprint decreased 11.7% and N and P excretion were reduced by 4% and 13.8%, respectively. When FW improved, decreases were seen in feedstuff consumption (12.1%), water use (9.2%), and land use (15.5%); total GHGe decreased 14.7%; and N and P excretion decreased by 10.1% and 17.2%, compared with the control. Changes in IOVC were dependent on socioeconomic scenario. When the ADG scenario was compared with the control, changes in sector profitability ranged from 51 to 117% (cow-calf), -38 to 157% (stocker), and 37 to 134% (feedlot). When improved FW was compared, changes in cow-calf profit ranged from 67% to 143%, stocker profit ranged from -41% to 155%, and feedlot profit ranged from 37% to 136%. When WTP was based on marketing beef being more efficiently produced, WTP improved by 10%; thus, social acceptability increased. When marketing was based on production
Code of Federal Regulations, 2011 CFR
2011-07-01
... Extraction for Vegetable Oil Production Compliance Requirements § 63.2854 How do I determine the weighted... received for use in your vegetable oil production process. By the end of each calendar month following an... the solvent in each delivery of solvent, including solvent recovered from off-site oil. To...
Code of Federal Regulations, 2010 CFR
2010-07-01
... Extraction for Vegetable Oil Production Compliance Requirements § 63.2854 How do I determine the weighted... received for use in your vegetable oil production process. By the end of each calendar month following an... the solvent in each delivery of solvent, including solvent recovered from off-site oil. To...
NASA Astrophysics Data System (ADS)
Kumazawa, Shinsuke; Kato, Takeyoshi; Honda, Nobuyuki; Koaizawa, Masakazu; Nishino, Shinichi; Suzuoki, Yasuo
Based on past studies of insolation fluctuation, the smoothing effect of insolation among different locations is not sufficient for fluctuation cycles longer than a few tens of minutes. This study evaluated the maximum fluctuation width (MFW) within at most 120 min of the ensemble-average insolation of 40 points, its clearness index, and the ensemble-average insolation excluding the sun-position-dependent component. When the weather worsened after noon in almost all areas, the ensemble-average insolation decreased significantly, resulting in an MFW of 540 W/m2 within 120 min. Similarly, when the weather recovered during the morning in many areas, the MFW was also large. Using data observed over 6 months, this study calculated the cumulative frequency distribution of the MFW of the ensemble-average insolation, its clearness index, and the ensemble-average insolation excluding the sun-position-dependent component. The absolute value of the MFW of ensemble-average insolation calculated with a 120-min window ranges mainly between 200 and 300 W/m2. The absolute value of the MFW of insolation excluding the sun-position-dependent component evaluated with a 120-min window is smaller than 200 W/m2 on most days, and is not very different from the MFW evaluated with a 60-min window. Finally, this study discusses the practical usability of insolation forecasts.
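The MFW statistic described above can be sketched as a sliding max-minus-min window over the insolation time series; the sample series and window length below are hypothetical:

```python
def max_fluctuation_width(series, window):
    """Maximum fluctuation width (MFW): the largest (max - min) spread
    of the series within any contiguous window of `window` samples."""
    best = 0.0
    for i in range(len(series) - window + 1):
        chunk = series[i:i + window]
        best = max(best, max(chunk) - min(chunk))
    return best

# Hypothetical ensemble-average insolation samples (W/m^2), e.g. one
# value per few minutes as the weather deteriorates and recovers:
insolation = [620, 640, 610, 580, 300, 150, 120, 200, 450, 600]
print(max_fluctuation_width(insolation, window=5))  # 490
```

A longer window can only enlarge (never shrink) the MFW, which matches the paper's comparison between the 60-min and 120-min evaluations.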
Borah, Madhur; Baruah, Rupali
2015-01-01
Introduction: Low birth weight (LBW) infants suffer more episodes of common childhood diseases, and their spells of illness are more prolonged and serious. Longitudinal studies are useful for observing the health and disease pattern of LBW babies over time. Aims: This study was carried out in rural areas of Assam to assess the morbidity pattern of LBW babies during their first 6 months of life and to compare them with normal birth weight (NBW) counterparts. Materials and Methods: A total of 30 LBW babies (0-2 months) and an equal number of NBW babies from three subcenters under Boko Primary Health Centre of Assam were followed up at monthly intervals till 6 months of age in a prospective fashion. Results: More than two-thirds of the LBW babies (77%) suffered from moderate or severe under-nutrition during the follow-up. Acute respiratory tract infection (ARI) was the predominant morbidity suffered by LBW infants. The other illnesses suffered by the LBW infants during the follow-up were diarrhea, skin disorders, fever and ear disorders. LBW infants had more episodes of hospitalization (65%) than the NBW infants (35%). The incidence rate of episodes of morbidity was found to be higher among those LBW infants who remained underweight at 6 months of age (incidence rate of 49.3 per 100 infant-months) and those who were not exclusively breast fed till 6 months of age (incidence rate of 66.7 per 100 infant-months). Conclusion: The study revealed that during the follow-up, the incidence of morbidities was higher among the LBW babies compared to the NBW babies. It was also observed that ARI was the predominant morbidity in the LBW infants during the first 6 months of age. PMID:26288777
NASA Technical Reports Server (NTRS)
Huff, Edward M.; Mosher, Marianne; Barszcz, Eric
2002-01-01
Recent research using NASA Ames AH-1 and OH-58C helicopters, and NASA Glenn test rigs, has shown that in-flight vibration data are typically non-stationary [1-4]. The nature and extent of this non-stationarity is most likely produced by several factors operating simultaneously. The aerodynamic flight environment and pilot commands provide continuously changing inputs, with a complex dynamic response that includes automatic feedback control from the engine regulator. It would appear that the combined effects operate primarily through an induced torque profile, which causes concomitant stress modulation at the individual internal gear meshes in the transmission. This notion is supported by several analyses, which show that upwards of 93% of the vibration signal's variance can be explained by knowledge of torque alone. That this relationship is stronger in an AH-1 than an OH-58, where measured non-stationarity is greater, suggests that the overall mass of the vehicle is an important consideration. In the lighter aircraft, the unsteady aerodynamic influences transmit relatively greater unsteady dynamic forces to the mechanical components, quite possibly contributing to its greater non-stationarity. In a recent paper using OH-58C pinion data [5], the authors have shown that in computing a time synchronous average (TSA) for various single-value metric computations, an effective trade-off can be obtained between sample size and measured stationarity by using data from only a single mesh cycle. A mesh cycle, which is defined as the number of rotations required for the gear teeth to return to their original mating position, has the property of representing all of the discrete phase angles of the opposing gears exactly once in the average. Measured stationarity is probably maximized because a single mesh cycle of the pinion gear occurs over a very short span of time, during which time-dependent non-stationary effects are kept to a minimum. Clearly, the advantage of local
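The single-mesh-cycle idea discussed above builds on the ordinary time synchronous average. A minimal sketch with a fixed number of samples per cycle (a real implementation would resample against a tachometer signal, which is omitted here):

```python
def time_synchronous_average(signal, samples_per_cycle):
    """Time synchronous average (TSA): split a vibration record into
    consecutive cycles of fixed length and average them point by point,
    reinforcing components synchronous with shaft rotation and
    attenuating asynchronous noise."""
    n_cycles = len(signal) // samples_per_cycle
    tsa = [0.0] * samples_per_cycle
    for c in range(n_cycles):
        segment = signal[c * samples_per_cycle:(c + 1) * samples_per_cycle]
        for i, v in enumerate(segment):
            tsa[i] += v / n_cycles
    return tsa

# Two noisy repetitions of a 4-sample cycle average toward the
# underlying synchronous waveform:
print(time_synchronous_average([1, 2, 3, 4, 3, 2, 5, 4], 4))  # [2.0, 2.0, 4.0, 4.0]
```

Restricting the average to one mesh cycle, as in the paper, simply means choosing the record length so that every gear-tooth phase pairing appears exactly once.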
Ruiz, J M; Busnel, J P; Benoît, J P
1990-09-01
The phase separation of fractionated poly(DL-lactic acid-co-glycolic acid) copolymers 50/50 was determined by silicone oil addition. Polymer fractionation by preparative size exclusion chromatography afforded five different microsphere batches. Average molecular weight determined the existence, width, and displacement of the "stability window" inside the phase diagrams, and also microsphere characteristics such as core loading and amount released over 6 hr. Further, the gyration and hydrodynamic radii were measured by light scattering. It is concluded that the polymer-solvent affinity is largely modified by the variation of average molecular weights owing to different levels of solubility. The lower the average molecular weight is, the better methylene chloride serves as a solvent for the coating material. However, a paradoxical effect due to an increase in free carboxyl and hydroxyl groups is noticed for polymers of 18,130 and 31,030 SEC (size exclusion chromatography) Mw. For microencapsulation, polymers having an intermediate molecular weight (47,250) were the most appropriate in terms of core loading and release purposes. PMID:2235892
NASA Astrophysics Data System (ADS)
Hernández, Leonor; Juliá, J. Enrique; Paranjape, Sidharth; Hibiki, Takashi; Ishii, Mamoru
2010-11-01
In this work, the use of the area-averaged void fraction and bubble chord length entropies is introduced as flow regime indicators in two-phase flow systems. The entropy provides quantitative information about the disorder in the area-averaged void fraction or bubble chord length distributions. The CPDFs (cumulative probability distribution functions) of void fractions and bubble chord lengths obtained by means of impedance meters and conductivity probes are used to calculate both entropies. Entropy values for 242 flow conditions in upward two-phase flows in 25.4- and 50.8-mm pipes have been calculated. The measured conditions cover ranges from 0.13 to 5 m/s in the superficial liquid velocity j_f and from 0.01 to 25 m/s in the superficial gas velocity j_g. The physical meaning of both entropies has been interpreted using the visual flow regime map information. The capability of the area-averaged void fraction and bubble chord length entropies as flow regime indicators has been checked against other statistical parameters and with input signals of different durations. The area-averaged void fraction and bubble chord length entropies provide results that are better than, or at least similar to, those obtained with other indicators that use more than one parameter. The entropy is able to condense the relevant information about the flow regime into a single significant and useful parameter. In addition, the entropy computation time is shorter than that of most other indicators, and the use of a single input parameter also makes predictions faster.
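The entropy indicator above is the ordinary Shannon entropy applied to a binned void-fraction (or chord length) distribution. A sketch with hypothetical histograms for two regimes; the bin probabilities are invented for illustration, not taken from the paper:

```python
import math

def shannon_entropy(probabilities):
    """Shannon entropy (bits) of a discrete distribution, e.g. a binned
    area-averaged void-fraction histogram. A narrow, ordered distribution
    gives low entropy; a spread-out, disordered one gives high entropy."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Hypothetical binned void-fraction distributions for two regimes:
bubbly = [0.70, 0.20, 0.08, 0.02, 0.00]   # concentrated around low values
churn  = [0.25, 0.20, 0.20, 0.20, 0.15]   # spread across many bins

print(f"bubbly entropy: {shannon_entropy(bubbly):.3f} bits")
print(f"churn  entropy: {shannon_entropy(churn):.3f} bits")
```

The appeal noted in the abstract is visible here: one scalar per signal separates regimes whose raw distributions would otherwise need several statistical moments to distinguish.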
Ito, Tadashi; Sakai, Yoshihito; Nakamura, Eishi; Yamazaki, Kazunori; Yamada, Ayaka; Sato, Noritaka; Morita, Yoshifumi
2015-01-01
[Purpose] The purpose of this study was to examine the relationship between the paraspinal muscle cross-sectional area and the relative proprioceptive weighting ratio during local vibratory stimulation of older persons with lumbar spondylosis in an upright position. [Subjects] In all, 74 older persons hospitalized for lumbar spondylosis were included. [Methods] We measured the relative proprioceptive weighting ratio of postural sway using a Wii board while vibratory stimulations of 30, 60, or 240 Hz were applied to the subjects’ paraspinal or gastrocnemius muscles. Back strength, abdominal muscle strength, and erector spinae muscle (L1/L2, L4/L5) and lumbar multifidus (L1/L2, L4/L5) cross-sectional areas were evaluated. [Results] The erector spinae muscle (L1/L2) cross-sectional area was associated with the relative proprioceptive weighting ratio during 60 Hz stimulation. [Conclusion] These findings show that the relative proprioceptive weighting ratio relative to the erector spinae muscle (L1/L2) cross-sectional area under 60 Hz proprioceptive stimulation might be a good indicator of trunk proprioceptive sensitivity. PMID:26311962
NASA Astrophysics Data System (ADS)
Shakilur Rahman, Md.; Kim, Kwangsoo; Kim, Guinyun; Naik, Haladhara; Nadeem, Muhammad; Thi Hien, Nguyen; Shahid, Muhammad; Yang, Sung-Chul; Cho, Young-Sik; Lee, Young-Ouk; Shin, Sung-Gyun; Cho, Moo-Hyun; Woo Lee, Man; Kang, Yeong-Rok; Yang, Gwang-Mo; Ro, Tae-Ik
2016-07-01
We measured the flux-weighted average cross-sections and the isomeric yield ratios of 99m,g, 100m,g, 101m,g, 102m,gRh in the 103Rh(γ, xn) reactions with the bremsstrahlung end-point energies of 55 and 60 MeV by the activation and off-line γ-ray spectrometric technique, using the 100 MeV electron linac at the Pohang Accelerator Laboratory (PAL), Korea. The flux-weighted average cross-sections were calculated by using the computer code TALYS 1.6 based on mono-energetic photons, and compared with the present experimental data. The flux-weighted average cross-sections of the 103Rh(γ, xn) reactions at intermediate bremsstrahlung energies are measured here for the first time and are found to increase from the threshold to a particular value, where the other reaction channels open up. Thereafter, they decrease with bremsstrahlung energy owing to partition among the different reaction channels. The isomeric yield ratios (IR) of 99m,g, 100m,g, 101m,g, 102m,gRh in the 103Rh(γ, xn) reactions from the present work were compared with the literature data for the 103Rh(d, x), 102-99Ru(p, x), 103Rh(α, αn), 103Rh(α, 2p3n), 102Ru(3He, x), and 103Rh(γ, xn) reactions. It was found that the IR values of 102, 101, 100, 99Rh in all these reactions increase with the projectile energy, which indicates the role of excitation energy. At the same excitation energy, the IR values of 102, 101, 100, 99Rh are higher in the charged-particle-induced reactions than in the photon-induced reaction, which indicates the role of input angular momentum.
2014-01-01
Mesoporous ZnO nanoparticles have been synthesized with tremendous increase in specific surface area of up to 578 m2/g which was 5.54 m2/g in previous reports (J. Phys. Chem. C 113:14676-14680, 2009). Different mesoporous ZnO nanoparticles with average pore sizes ranging from 7.22 to 13.43 nm and specific surface area ranging from 50.41 to 578 m2/g were prepared through the sol-gel method via a simple evaporation-induced self-assembly process. The hydrolysis rate of zinc acetate was varied using different concentrations of sodium hydroxide. Morphology, crystallinity, porosity, and J-V characteristics of the materials have been studied using transmission electron microscopy (TEM), X-ray diffraction (XRD), BET nitrogen adsorption/desorption, and Keithley instruments. PMID:25339855
NASA Astrophysics Data System (ADS)
Nagaoka, Tomoaki; Watanabe, Soichi; Sakurai, Kiyoko; Kunieda, Etsuo; Watanabe, Satoshi; Taki, Masao; Yamanaka, Yukio
2004-01-01
With advances in computer performance, the use of high-resolution voxel models of the entire human body has become more frequent in numerical dosimetries of electromagnetic waves. Using magnetic resonance imaging, we have developed realistic high-resolution whole-body voxel models for Japanese adult males and females of average height and weight. The developed models consist of cubic voxels of 2 mm on each side; the models are segmented into 51 anatomic regions. The adult female model is the first of its kind in the world and both are the first Asian voxel models (representing average Japanese) that enable numerical evaluation of electromagnetic dosimetry at high frequencies of up to 3 GHz. In this paper, we will also describe the basic SAR characteristics of the developed models for the VHF/UHF bands, calculated using the finite-difference time-domain method.
NASA Astrophysics Data System (ADS)
Obata, Kenta; Huete, Alfredo R.
2014-01-01
This study investigated the mechanisms underlying the scaling effects that apply to a fraction of vegetation cover (FVC) estimates derived using two-band spectral vegetation index (VI) isoline-based linear mixture models (VI isoline-based LMM). The VIs included the normalized difference vegetation index, a soil-adjusted vegetation index, and a two-band enhanced vegetation index (EVI2). This study focused in part on the monotonicity of an area-averaged FVC estimate as a function of spatial resolution. The proof of monotonicity yielded measures of the intrinsic area-averaged FVC uncertainties due to scaling effects. The derived results demonstrate that a factor ξ, which was defined as a function of "true" and "estimated" endmember spectra of the vegetated and nonvegetated surfaces, was responsible for conveying monotonicity or nonmonotonicity. The monotonic FVC values displayed a uniform increasing or decreasing trend that was independent of the choice of the two-band VI. Conditions under which scaling effects were eliminated from the FVC were identified. Numerical simulations verifying the monotonicity and the practical utility of the scaling theory were evaluated using numerical experiments applied to Landsat7-Enhanced Thematic Mapper Plus (ETM+) data. The findings contribute to developing scale-invariant FVC estimation algorithms for multisensor and data continuity.
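The VI isoline-based linear mixture estimate underlying the study reduces, in its simplest two-endmember form, to unmixing a pixel's VI between bare-soil and full-vegetation endmember values. The endmember and reflectance numbers below are assumed for illustration only:

```python
def ndvi(nir, red):
    """Normalized difference vegetation index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

def fvc_linear_mixture(vi, vi_soil, vi_veg):
    """Two-endmember linear mixture estimate of fraction of vegetation
    cover: FVC = (VI - VI_soil) / (VI_veg - VI_soil)."""
    return (vi - vi_soil) / (vi_veg - vi_soil)

# Hypothetical endmember NDVI values for soil and dense vegetation:
vi_soil, vi_veg = 0.15, 0.85

# A mixed pixel with assumed NIR/red reflectances:
vi = ndvi(0.40, 0.12)
print(f"pixel NDVI = {vi:.3f}, FVC estimate = {fvc_linear_mixture(vi, vi_soil, vi_veg):.3f}")
```

The scaling effects the paper analyzes arise because NDVI itself is a nonlinear (ratio) function of reflectance, so the FVC of an aggregated pixel is generally not the area average of the fine-resolution FVC values; the paper's factor ξ characterizes when that aggregation error is monotonic in resolution.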
Huang, Yuanyuan; Varsier, Nadège; Niksic, Stevan; Kocan, Enis; Pejanovic-Djurisic, Milica; Popovic, Milica; Koprivica, Mladen; Neskovic, Aleksandar; Milinkovic, Jelena; Gati, Azeddine; Person, Christian; Wiart, Joe
2016-09-01
This article is the first thorough study of average population exposure to third generation network (3G)-induced electromagnetic fields (EMFs), from both uplink and downlink radio emissions in different countries, geographical areas, and for different wireless device usages. Indeed, previous publications in the framework of exposure to EMFs generally focused on individual exposure coming from either personal devices or base stations. Results, derived from device usage statistics collected in France and Serbia, show a strong heterogeneity of exposure, both in time, that is, the traffic distribution over 24 h was found highly variable, and space, that is, the exposure to 3G networks in France was found to be roughly two times higher than in Serbia. Such heterogeneity is further explained based on real data and network architecture. Among those results, authors show that, contrary to popular belief, exposure to 3G EMFs is dominated by uplink radio emissions, resulting from voice and data traffic, and average population EMF exposure differs from one geographical area to another, as well as from one country to another, due to the different cellular network architectures and variability of mobile usage. Bioelectromagnetics. 37:382-390, 2016. © 2016 Wiley Periodicals, Inc. PMID:27385053
NASA Astrophysics Data System (ADS)
Liu, X.; Zhang, J. X.; Zhao, Z.; Ma, A. D.
2015-06-01
Synthetic aperture radar is being applied increasingly widely in remote sensing because of its all-time, all-weather operation, and feature extraction from high-resolution SAR images has become a topic of considerable interest. In particular, with the continuous improvement of airborne SAR image resolution, image texture information has become more abundant, which is of great significance for classification and extraction. In this paper, a novel method for extracting built-up areas using both statistical and structural features is proposed, based on the texture characteristics of built-up areas. First, statistical texture features and structural features are extracted with the classical gray-level co-occurrence matrix and the variogram function, respectively, taking direction information into account. Next, feature weights are calculated according to the Bhattacharyya distance, and all features are fused by weighting. Finally, the fused image is classified with the K-means method and the built-up areas are extracted in a post-classification step. The proposed method was tested on domestic airborne P-band polarimetric SAR images; for comparison, two groups of experiments based on statistical texture alone and on structural texture alone were also carried out. In addition to qualitative analysis, quantitative analysis based on manually selected built-up areas was performed: in the relatively simple test area the detection rate exceeds 90%, and in the relatively complex test area the detection rate is also higher than that of the other two methods. The results show that this method can effectively and accurately extract built-up areas from high-resolution airborne SAR imagery.
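The Bhattacharyya-distance weighting step can be sketched as follows. The abstract does not specify the class model, so 1-D Gaussian class statistics per feature are assumed here as one common choice, and the (mean, variance) pairs are hypothetical:

```python
import math

def bhattacharyya_distance(mu1, var1, mu2, var2):
    """Bhattacharyya distance between two 1-D Gaussian class models;
    a larger distance means the feature separates the classes better."""
    return (0.25 * math.log(0.25 * (var1 / var2 + var2 / var1 + 2))
            + 0.25 * (mu1 - mu2) ** 2 / (var1 + var2))

def feature_weights(class_stats):
    """Normalize per-feature Bhattacharyya distances into fusion weights."""
    d = [bhattacharyya_distance(*s) for s in class_stats]
    total = sum(d)
    return [x / total for x in d]

# Hypothetical (mean1, var1, mean2, var2) for built-up vs. background:
stats = [(5.0, 1.0, 1.0, 1.0),   # GLCM texture feature: well separated
         (2.0, 1.0, 1.5, 1.0)]   # variogram feature: weakly separated

print(feature_weights(stats))
```

Under this scheme the more discriminative feature automatically dominates the weighted fusion, which is the stated motivation for the Bhattacharyya-based weights.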
Code of Federal Regulations, 2010 CFR
2010-10-01
..., by Management Area (weights in metric tons) 2a Table 2a to Part 660, Subpart C Wildlife and Fisheries FISHERY CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF..., Subpart C—2010, Specifications of ABCs, OYs, and HGs, by Management Area (weights in metric tons)...
Code of Federal Regulations, 2010 CFR
2010-10-01
..., by Management Area (weights in metric tons) 1a Table 1a to Part 660, Subpart C Wildlife and Fisheries FISHERY CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF..., Subpart C—2009, Specifications of ABCs, OYs, and HGs, by Management Area (weights in metric tons)...
Code of Federal Regulations, 2010 CFR
2010-10-01
..., by Management Area (weights in metric tons) 1a Table 1a to Part 660, Subpart G Wildlife and Fisheries FISHERY CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF..., Subpart G—2009, Specifications of ABCs, OYs, and HGs, by Management Area (weights in metric tons) Link...
Code of Federal Regulations, 2010 CFR
2010-10-01
..., by Management Area (weights in metric tons) 2a Table 2a to Part 660, Subpart G Wildlife and Fisheries FISHERY CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF..., Subpart G—2010, Specifications of ABCs, OYs, and HGs, by Management Area (weights in metric tons) Link...
NASA Astrophysics Data System (ADS)
Heinemann, Günther; Kerschgens, Michael
2005-03-01
The quantification of subgrid land surface heterogeneity effects on the scale of climate and numerical weather prediction models is of vital interest for the energy budget of the atmospheric boundary layer and for the atmospheric branch of the hydrological cycle. This paper focuses on heterogeneity effects for the exchange processes between land surfaces and the atmosphere. The results are based on high-resolution non-hydrostatic model simulations for the LITFASS area near Berlin. This area represents a highly heterogeneous landscape of 20 × 20 km2 around the Meteorological Observatory Lindenberg of the German Weather Service (DWD). Model simulations were carried out using the non-hydrostatic model FOOT3DK of the University of Köln with resolutions of 1 km and 250 m. The performance of different area-averaging methods for the turbulent surface fluxes was tested for the LITFASS area, namely the aggregation, mosaic and tile methods. For one tile method (station-tile), the experimental setup of the surface energy balance stations of the LITFASS98 experiment was investigated. Two different simulation types are considered: (1) realistic topography and idealized synoptic forcing; (2) realistic topography and realistic synoptic forcing for LITFASS98 cases. A double one-way nesting procedure is used for nesting FOOT3DK in the Lokalmodell of the DWD. The mosaic method shows good results if the wind speed is sufficiently high. During weak-wind convective conditions, errors are particularly large for the latent heat flux on the 20 × 20 km2 scale. The aggregation method yields generally higher errors than the mosaic method, which even increase for higher wind speeds. The main reason is the strong surface heterogeneity associated with the lakes and forests in the LITFASS area. The main uncertainty of the station-tile method is the knowledge of the area coverage in combination with the representativity of the stations for the land-use type and surface conditions. The results of
Mitchell, Nia S; Nassel, Ariann F; Thomas, Deborah
2015-12-01
Obesity rates are higher for ethnic minority, low-income, and rural communities. Programs are needed to support these communities with weight management. We determined the reach of a low-cost, nationally available weight loss program in Health Resources and Services Administration medically underserved areas (MUAs) and described the demographics of the communities with program locations. This is a cross-sectional analysis of Take Off Pounds Sensibly (TOPS) chapter locations. Geographic information systems technology was used to combine information about TOPS chapter locations, the geographic boundaries of MUAs, and socioeconomic data from the 2010 Decennial Census. TOPS is available in 30 % of MUAs. The typical TOPS chapter is in a Census Tract that is predominantly white and urban, with a median annual income between $25,000 and $50,000. However, there are TOPS chapters in Census Tracts that can be classified as predominantly black or predominantly Hispanic; predominantly rural; and as low or high income. TOPS provides weight management services in MUAs and across many types of communities. TOPS can help treat obesity in the medically underserved. Future research should determine the differential effectiveness among chapters in different types of communities. PMID:26072259
The Average of Rates and the Average Rate.
ERIC Educational Resources Information Center
Lindstrom, Peter
1988-01-01
Defines arithmetic, harmonic, and weighted harmonic means, and discusses their properties. Describes the application of these properties in problems involving fuel economy estimates and average rates of motion. Gives example problems and solutions. (CW)
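The distinction the abstract draws between the average of rates and the average rate can be sketched in a few lines; the speeds below are illustrative, not from the article:

```python
def arithmetic_mean(rates):
    """Average of the rates themselves."""
    return sum(rates) / len(rates)

def harmonic_mean(rates):
    """Average rate over equal distances travelled at each rate."""
    return len(rates) / sum(1 / r for r in rates)

def weighted_harmonic_mean(rates, distances):
    """Average rate when a different distance is covered at each rate."""
    return sum(distances) / sum(d / r for d, r in zip(distances, rates))

# Driving equal distances at 30 mph and 60 mph: the average rate is the
# harmonic mean, 40 mph, not the arithmetic mean, 45 mph.
avg_rate = harmonic_mean([30, 60])
```

This is the standard fuel-economy pitfall the abstract alludes to: averaging mpg figures arithmetically overstates the true average rate whenever the rates differ.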
Lawrence, T E; Farrow, R L; Zollinger, B L; Spivey, K S
2008-06-01
With the adoption of visual instrument grading, the calculated yield grade can be used for payment to cattle producers selling on grid pricing systems. The USDA beef carcass grading standards include a relationship between required LM area (LMA) and HCW that is an important component of the final yield grade. As noted on a USDA yield grade LMA grid, a 272-kg (600-lb) carcass requires a 71-cm² (11.0-in.²) LMA and a 454-kg (1,000-lb) carcass requires a 102-cm² (15.8-in.²) LMA. This is a linear relationship, where required LMA = 0.171(HCW) + 24.526. If a beef carcass has a larger LMA than required, the calculated yield grade is lowered, whereas a smaller LMA than required increases the calculated yield grade. The objective of this investigation was to evaluate the LMA to HCW relationship against data on 434,381 beef carcasses in the West Texas A&M University (WTAMU) Beef Carcass Research Center database. In contrast to the USDA relationship, our data indicate a quadratic relationship [WTAMU LMA = 33.585 + 0.17729(HCW) - 0.0000863(HCW²)] between LMA and HCW whereby, on average, a 272-kg carcass has a 75-cm² (11.6-in.²) LMA and a 454-kg carcass has a 96-cm² (14.9-in.²) LMA, indicating a different slope and different intercept than those in the USDA grading standards. These data indicate that the USDA calculated yield grade equation favors carcasses lighter than 363 kg (800 lb) for having above average muscling and penalizes carcasses heavier than 363 kg (800 lb) for having below average muscling. If carcass weights continue to increase, we are likely to observe greater proportions of yield grade 4 and 5 carcasses because of the measurement bias that currently exists in the USDA yield grade equation. PMID:18310492
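The two published relationships can be compared directly. A quick sketch (coefficients taken from the abstract) reproduces the quoted LMA values and shows the sign of the bias on either side of 363 kg:

```python
def usda_required_lma(hcw):
    # USDA grading standard: required LM area (cm^2) as a linear function of HCW (kg)
    return 0.171 * hcw + 24.526

def wtamu_average_lma(hcw):
    # WTAMU database fit: average observed LM area (cm^2), quadratic in HCW (kg)
    return 33.585 + 0.17729 * hcw - 0.0000863 * hcw**2

# Light carcasses average more LMA than required (calculated yield grade lowered);
# heavy carcasses average less than required (calculated yield grade raised).
light = wtamu_average_lma(272) - usda_required_lma(272)   # positive
heavy = wtamu_average_lma(454) - usda_required_lma(454)   # negative
```

Evaluating both curves confirms the crossover sits close to the 363 kg (800 lb) break point named in the abstract.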
Fritzsche, Klaus H.; Thieke, Christian; Klein, Jan; Parzer, Peter; Weber, Marc-André; Stieltjes, Bram
2012-01-01
Abstract The apparent diffusion coefficient (ADC) derived from diffusion-weighted imaging (DWI) correlates inversely with tumor proliferation rates. High-grade gliomas are typically heterogeneous and the delineation of areas of high and low proliferation is impeded by partial volume effects and blurred borders. Commonly used manual delineation is further impeded by potential overlap with cerebrospinal fluid and necrosis. Here we present an algorithm to reproducibly delineate and probabilistically quantify the ADC in areas of high and low proliferation in heterogeneous gliomas, resulting in a reproducible quantification in regions of tissue inhomogeneity. We used an expectation maximization (EM) clustering algorithm, applied on a Gaussian mixture model, consisting of pure superpositions of Gaussian distributions. Soundness and reproducibility of this approach were evaluated in 10 patients with glioma. High- and low-proliferating areas found using the clustering correspond well with conservative regions of interest drawn using all available imaging data. Systematic placement of model initialization seeds shows good reproducibility of the method. Moreover, we illustrate an automatic initialization approach that completely removes user-induced variability. In conclusion, we present a rapid, reproducible and automatic method to separate and quantify heterogeneous regions in gliomas. PMID:22487677
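The clustering step can be illustrated with a minimal one-dimensional EM fit of a two-component Gaussian mixture. This is a NumPy-only sketch, not the authors' implementation: quantile-based initialization stands in for their automatic seeding, and the "ADC" values are synthetic:

```python
import numpy as np

def em_gmm_1d(x, iters=200):
    """Fit a two-component 1-D Gaussian mixture by expectation maximization."""
    mu = np.quantile(x, [0.25, 0.75])        # deterministic initialization
    sigma = np.full(2, x.std())
    pi = np.full(2, 0.5)
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each sample
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) \
               / (sigma * np.sqrt(2 * np.pi))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixture weights, means, and standard deviations
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return pi, mu, sigma

# Synthetic "ADC" sample: a low-ADC (high-proliferation) and a high-ADC mode
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.8, 0.1, 1000), rng.normal(1.6, 0.15, 500)])
pi, mu, sigma = em_gmm_1d(x)
```

The fitted means and mixture weights give the probabilistic split between the two tissue populations; in the paper this is done per voxel on real ADC maps.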
Meirovitch, E; Meirovitch, H
1996-01-01
A small linear peptide in solution may populate several stable states (called here microstates) in thermodynamic equilibrium; elucidating its dynamic three-dimensional structure by multidimensional NMR is complex since the experimentally measured nuclear Overhauser effect intensities (NOEs) represent averages over the individual contributions. We propose a new methodology based on statistical mechanical considerations for analyzing NMR data of such peptides. In a previous paper [called paper I; H. Meirovitch et al. (1995) Journal of Physical Chemistry, 99, 4847-4854] we developed theoretical methods for determining the contribution to the partition function Z of the most stable microstates, i.e. those that pertain to a given energy range above the global energy minimum (GEM). This relatively small set of dominant microstates provides the main contribution to medium- and long-range NOE intensities. In this work the individual populations and NOEs of the dominant microstates are determined, and then weighted averages are calculated and compared with experiment. Our methodology is applied to the pentapeptide Leu-enkephalin H-Tyr-Gly-Gly-Phe-Leu-OH, described by the potential energy function ECEPP. Twenty-one significantly different energy-minimized structures are first identified within the range of 2 kcal/mol above the GEM by an extensive conformational search; this range was found in paper I to contribute 0.6 of Z. These structures then become "seeds" for Monte Carlo (MC) simulations designed to keep the molecule relatively close to its seed. Indeed, the MC samples (called MC microstates) illustrate what we define as intermediate chain flexibility; some dihedral angles remain in the vicinity of their seed value, while others visit the full range of [-180 degrees, 180 degrees]. The free energies of the MC microstates (which lead to the populations) are calculated by the local states method, which (unlike other techniques) can handle any chain flexibility
NASA Technical Reports Server (NTRS)
Kovich, G.; Moore, R. D.; Urasek, D. C.
1973-01-01
The overall and blade-element performance are presented for an air compressor stage designed to study the effect of weight flow per unit annulus area on efficiency and flow range. At the design speed of 424.8 m/sec the peak efficiency of 0.81 occurred at the design weight flow and a total pressure ratio of 1.56. Design pressure ratio and weight flow were 1.57 and 29.5 kg/sec (65.0 lb/sec), respectively. Stall margin at design speed was 19 percent based on the weight flow and pressure ratio at peak efficiency and at stall.
Lopes, Thomas J.; Evetts, David M.
2004-01-01
Nevada's reliance on ground-water resources has increased because of increased development and surface-water resources being fully appropriated. The need to accurately quantify Nevada's water resources and water use is more critical than ever to meet future demands. Estimated ground-water pumpage, artificial and natural recharge, and interbasin flow can be used to help evaluate stresses on aquifer systems. In this report, estimates of ground-water pumpage and artificial recharge during calendar year 2000 were made using data from a variety of sources, such as reported estimates and estimates made using Landsat satellite imagery. Average annual natural recharge and interbasin flow were compiled from published reports. An estimated 1,427,100 acre-feet of ground water was pumped in Nevada during calendar year 2000. This total was calculated by summing six categories of ground-water pumpage, based on water use. Total artificial recharge during 2000 was about 145,970 acre-feet. At least one estimate of natural recharge was available for 209 of the 232 hydrographic areas (HAs). Natural recharge for the 209 HAs ranges from 1,793,420 to 2,583,150 acre-feet. Estimates of interbasin flow were available for 151 HAs. The categories and their percentage of the total ground-water pumpage are irrigation and stock watering (47 percent), mining (26 percent), water systems (14 percent), geothermal production (8 percent), self-supplied domestic (4 percent), and miscellaneous (less than 1 percent). Pumpage in the top 10 HAs accounted for about 49 percent of the total ground-water pumpage. The most ground-water pumpage in an HA was due to mining in Pumpernickel Valley (HA 65), Boulder Flat (HA 61), and Lower Reese River Valley (HA 59). Pumpage by water systems in Las Vegas Valley (HA 212) and Truckee Meadows (HA 87) were the fourth and fifth highest pumpage in 2000, respectively. Irrigation and stock watering pumpage accounted for most ground-water withdrawals in the HAs with the sixth
Hossain, Ahmed; Beyene, Joseph
2013-12-01
MicroRNAs (miRNAs) are short non-coding RNAs that play critical roles in numerous cellular processes through post-transcriptional functions. The aberrant role of miRNAs has been reported in a number of diseases. A robust computational method is vital for discovering novel miRNAs when the level of noise varies dramatically across the different miRNAs. In this paper, we propose a flexible rank-based procedure for estimating a weighted log partial area under the receiver operating characteristic (ROC) curve statistic for selecting differentially expressed miRNAs. The statistic combines the partial area under the curve (pAUC) estimates with their corresponding variances. The proposed method does not involve complicated formulas and does not require advanced programming skills. Two real datasets are analyzed to illustrate the method, and a simulation study is carried out to assess the performance of different miRNA ranking statistics. We conclude that the proposed method offers robust results with large samples for miRNA expression data, and the method can be used as an alternative analytical tool for identifying a list of target miRNAs for further biological and clinical investigation. PMID:24246291
NASA Astrophysics Data System (ADS)
Rung-Arunwan, Tawat; Siripunvaraporn, Weerachai; Utada, Hisashi
2016-04-01
Through a large number of magnetotelluric (MT) observations conducted in a study area, one can obtain regional one-dimensional (1-D) features of the subsurface electrical conductivity structure simply by taking the geometric average of determinant invariants of observed impedances. This method was proposed by Berdichevsky and coworkers, based on the expectation that distortion effects due to near-surface electrical heterogeneities will be statistically smoothed out. A good estimate of a regional mean 1-D model is useful, especially in recent years, as an a priori (or starting) model in 3-D inversion. However, the original theory was derived before the establishment of present knowledge on galvanic distortion. This paper, therefore, reexamines the meaning of the Berdichevsky average by using the conventional formulation of galvanic distortion. A simple derivation shows that the determinant invariant of distorted impedance, and hence its Berdichevsky average, is always downward biased by the distortion parameters of shear and splitting. This means that the regional mean 1-D model obtained from the Berdichevsky average tends to be more conductive. As an alternative rotational invariant, the sum of the squared elements (ssq) invariant is found to be less affected by bias from distortion parameters; thus, we conclude that its geometric average would be more suitable for estimating the regional structure. We find that the combination of determinant and ssq invariants provides parameters useful in dealing with a set of distorted MT impedances.
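The downward bias of the determinant invariant can be checked numerically. The sketch below uses trace-normalized shear and splitting factors (a Groom-Bailey-style parameterization assumed here, with an arbitrary illustrative regional impedance), not the paper's own data:

```python
import numpy as np

# 1-D regional impedance (off-diagonal form, illustrative magnitude)
Z = np.array([[0.0, 10.0], [-10.0, 0.0]])

def shear(e):
    return np.array([[1.0, e], [e, 1.0]]) / np.sqrt(1 + e**2)

def splitting(s):
    return np.array([[1.0 + s, 0.0], [0.0, 1.0 - s]]) / np.sqrt(1 + s**2)

def z_det(Z):
    # determinant invariant
    return np.sqrt(abs(np.linalg.det(Z)))

def z_ssq(Z):
    # sum-of-squared-elements invariant
    return np.sqrt((Z**2).sum() / 2.0)

Zd = splitting(0.3) @ shear(0.4) @ Z   # galvanically distorted impedance
# z_det(Zd) < z_det(Z): the det invariant is biased downward by shear/splitting,
# while z_ssq(Zd) equals z_ssq(Z) for a 1-D regional impedance under this
# normalization, illustrating why the ssq average is the more robust estimator.
```

Each distortion factor has determinant below one ((1-e²)/(1+e²) and (1-s²)/(1+s²)), which is exactly the bias the derivation in the paper identifies.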
NASA Astrophysics Data System (ADS)
Shi, Y.; Long, Y.; Wi, X. L.
2014-04-01
When tourists visit multiple scenic spots, the route actually travelled is usually the most efficient path through the road network, and it may differ from the planned route. In navigation systems, a proposed route is normally generated automatically by a path-planning algorithm from the positions of the scenic spots and the road network. But when a scenic spot covers a certain area and has multiple entrances or exits, the traditional representation of a spot as a single point coordinate cannot capture these structural features. To solve this problem, this paper focuses on how such structural features, in particular multiple entrances and exits, affect the path-planning process, and proposes a double-weighted graph model in which the weights of both vertices and edges can be selected dynamically. It then discusses how to build the model and presents an optimal path-planning algorithm based on the Dijkstra and Prim algorithms. Experimental results show that the optimal route derived from the proposed model and algorithm is more reasonable, and that the visiting order and travel distance are further optimized.
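A double-weighted shortest path of the kind described (edge weights for road segments, vertex weights for the cost incurred at each spot) can be sketched on top of Dijkstra's algorithm. The graph, weights, and cost model below are made up for illustration and are not taken from the paper:

```python
import heapq

def dijkstra_double_weighted(adj, node_w, src, dst):
    """Shortest path where cost = sum of edge weights plus the vertex
    weights of every node on the path (e.g. time spent inside a spot)."""
    dist = {src: node_w.get(src, 0)}
    pq = [(dist[src], src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adj.get(u, []):
            nd = d + w + node_w.get(v, 0)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

# A -> B -> C is shorter by edges alone, but a vertex weight on B
# (a costly spot to traverse) can make the direct A -> C edge win.
adj = {"A": [("B", 2), ("C", 5)], "B": [("C", 2)], "C": []}
```

With `node_w = {"B": 3}` the direct edge (total 5) beats the two-hop route (2 + 3 + 2 = 7); with no vertex weights the two-hop route (total 4) wins, which is the behavior a dynamically weighted model exploits.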
NASA Technical Reports Server (NTRS)
Urasek, D. C.; Kovich, G.; Moore, R. D.
1973-01-01
Performance was obtained for a 50-cm-diameter compressor designed for a high weight flow per unit annulus area of 208 (kg/sec)/sq m. Peak efficiency values of 0.83 and 0.79 were obtained for the rotor and stage, respectively. The stall margin for the stage was 23 percent, based on equivalent weight flow and total-pressure ratio at peak efficiency and stall.
Yulia; Khusun, Helda; Fahmida, Umi
2016-07-01
Developing countries including Indonesia imperatively require an understanding of factors leading to the emerging problem of obesity, especially within low socio-economic groups, whose dietary pattern may contribute to obesity. In this cross-sectional study, we compared the dietary patterns and food consumption of 103 obese and 104 normal-weight women of reproductive age (19-49 years) in urban slum areas in Central Jakarta. A single 24-h food recall was used to assess energy and macronutrient intakes (carbohydrate, protein and fat) and calculate energy density. A principal component analysis was used to define the dietary patterns from the FFQ. Obese women had significantly higher intakes of energy (8436·6 (sd 2358·1) v. 7504·4 (sd 1887·8) kJ (2016·4 (sd 563·6) v. 1793·6 (sd 451·2) kcal)), carbohydrate (263·9 (sd 77·0) v. 237·6 (sd 63·0) g) and fat (83·11 (sd 31·3) v. 70·2 (sd 26·1) g) compared with normal-weight women; however, their protein intake (59·4 (sd 19·1) v. 55·9 (sd 18·5) g) and energy density (8·911 (sd 2·30) v. 8·58 (sd 1·88) kJ/g (2·13 (sd 0·55) v. 2·05 (sd 0·45) kcal/g)) did not differ significantly. Two dietary patterns were revealed and subjectively named 'more healthy' and 'less healthy'. The 'less healthy' pattern was characterised by the consumption of fried foods (snacks, soyabean and roots and tubers) and meat and poultry products, whereas the more healthy pattern was characterised by the consumption of seafood, vegetables, eggs, milk and milk products and non-fried snacks. Subjects with a high score for the more healthy pattern had a lower obesity risk compared with those with a low score. Thus, obesity is associated with high energy intake and unhealthy dietary patterns characterised by consumption of oils and fats through fried foods and snacks. PMID:26931206
NASA Astrophysics Data System (ADS)
Skoczowsky, D.; Heuer, A.; Jechow, A.; Menzel, R.
2007-11-01
Detailed investigations of the spatiotemporal and spectral emission properties of a high-power diode laser are presented. The AR-coated laser diode, with a design wavelength of 940 nm, is driven in an external resonator. The laser generates up to 340 mW average output power in a train of picosecond pulses with durations of 25 ps and repetition rates of 2.6 GHz. The mechanism of mode locking is discussed as self-pulsation because of the strong correlation between round-trip time and repetition rate. The double-sided exponential pulses suggest saturable absorber action.
NASA Astrophysics Data System (ADS)
Wang, Gongwen; Chen, Jianping; Li, Qing; Ding, Huoping
2007-06-01
This paper aims to monitor desertification evolution at different stages and assess its factors using remote sensing (RS) data and a cellular automata (CA)-geographical information system (GIS) approach, with an adaptive analytic hierarchy process (AHP) to derive the weights of the desertification factors. The study area (114°E to 117°E and 39.5°N to 42.2°N) is one of the important agro-pastoral transitional zones, located in Beijing and its neighboring areas, marginal desertified areas in North China. Desertification information, including NDVI and desertified area, was derived from satellite images of the study area: 1987 TM and 1996 TM (with a resolution of 28.5 m) and 2006 CBERS (with a resolution of 19.5 m). Ancillary data on meteorology, geology, a 30-m DEM, and hydrography were statistically analyzed with GIS technology. A CA model based on the desertification factors with AHP-derived weights was built with an AML program in an ArcGIS workstation to assess the evolution of desertification over different stages (1987 to 1996 and 1996 to 2006). The results show that the desertified area increased by 3.28% per year from 1987 to 1996 and by 0.51% per year from 1996 to 2006. Although the weights of the desertification factors changed somewhat between stages, the main factors, including climate, NDVI, and terrain, did not change except in their values in the study areas.
ERIC Educational Resources Information Center
Kurtz, David K.
This paper explores the Veterans Administration (VA) work-study program and its implications for student/veterans at Harrisburg Area Community College in Pennsylvania. Unique advantages of the program include tax-free income, flexible working schedules around students' class schedules, additional study time, easy access to the office from classes,…
Xaverius, Pamela; Alman, Cameron; Holtz, Lori; Yarber, Laura
2016-03-01
Objectives This study examined risk and protective factors associated with very low birth weight (VLBW) for babies born to women receiving adequate or inadequate prenatal care. Methods Birth records from St. Louis City and County from 2000 to 2009 were used (n = 152,590). Data was categorized across risk factors and stratified by adequacy of prenatal care (PNC). Multivariate logistic regression and population attributable risk (PAR) was used to explore risk factors for VLBW infants. Results Women receiving inadequate prenatal care had a higher prevalence of delivering a VLBW infant than those receiving adequate PNC (4.11 vs. 1.44 %, p < .0001). The distribution of risk factors differed between adequate and inadequate PNC regarding Black race (36.4 vs. 79.0 %, p < .0001), age under 20 (13.0 vs. 33.6 %, p < .0001), <13 years of education (35.9 vs. 77.9 %, p < .0001), Medicaid status (35.7 vs. 74.9, p < .0001), primiparity (41.6 vs. 31.4 %, p < .0001), smoking (9.7 vs. 24.5 %, p < .0001), and diabetes (4.0 vs. 2.4 %, p < .0001), respectively. Black race, advanced maternal age, primiparity and gestational hypertension were significant predictors of VLBW, regardless of adequate or inadequate PNC. Among women with inadequate PNC, Medicaid was protective against (aOR 0.671, 95 % CI 0.563-0.803; PAR -32.6 %) and smoking a risk factor for (aOR 1.23, 95 % CI 1.01, 1.49; PAR 40.1 %) VLBW. When prematurity was added to the adjusted models, the largest PAR shifts to education (44.3 %) among women with inadequate PNC. Conclusions Community actions around broader issues of racism and social determinants of health are needed to prevent VLBW in a large urban area. PMID:26537389
Haines, Aaron M.; Leu, Matthias; Svancara, Leona K.; Wilson, Gina; Scott, J. Michael
2010-01-01
Identification of biodiversity hotspots (hereafter, hotspots) has become a common strategy to delineate important areas for wildlife conservation. However, the use of hotspots has not often incorporated important habitat types, ecosystem services, anthropogenic activity, or consistency in identifying important conservation areas. The purpose of this study was to identify hotspots to improve avian conservation efforts for Species of Greatest Conservation Need (SGCN) in the state of Idaho, United States. We evaluated multiple approaches to define hotspots and used a unique approach based on weighting species by their distribution size and conservation status to identify hotspot areas. All hotspot approaches identified bodies of water (Bear Lake, Grays Lake, and American Falls Reservoir) as important hotspots for Idaho avian SGCN, but we found that the weighted approach produced more congruent hotspot areas when compared to other hotspot approaches. To incorporate anthropogenic activity into hotspot analysis, we grouped species based on their sensitivity to specific human threats (i.e., urban development, agriculture, fire suppression, grazing, roads, and logging) and identified ecological sections within Idaho that may require specific conservation actions to address these human threats using the weighted approach. The Snake River Basalts and Overthrust Mountains ecological sections were important areas for potential implementation of conservation actions to conserve biodiversity. Our approach to identifying hotspots may be useful as part of a larger conservation strategy to aid land managers or local governments in applying conservation actions on the ground.
2012-01-01
Background The study conducts statistical and spatial analyses to investigate amounts and types of permitted surface water pollution discharges in relation to population mortality rates for cancer and non-cancer causes nationwide and by urban-rural setting. Data from the Environmental Protection Agency's (EPA) Discharge Monitoring Report (DMR) were used to measure the location, type, and quantity of a selected set of 38 discharge chemicals for 10,395 facilities across the contiguous US. Exposures were refined by weighting amounts of chemical discharges by their estimated toxicity to human health, and by estimating the discharges that occur not only in a local county, but area-weighted discharges occurring upstream in the same watershed. Centers for Disease Control and Prevention (CDC) mortality files were used to measure age-adjusted population mortality rates for cancer, kidney disease, and total non-cancer causes. Analysis included multiple linear regressions to adjust for population health risk covariates. Spatial analyses were conducted by applying geographically weighted regression to examine the geographic relationships between releases and mortality. Results Greater non-carcinogenic chemical discharge quantities were associated with significantly higher non-cancer mortality rates, regardless of toxicity weighting or upstream discharge weighting. Cancer mortality was higher in association with carcinogenic discharges only after applying toxicity weights. Kidney disease mortality was related to higher non-carcinogenic discharges only when both applying toxicity weights and including upstream discharges. Effects for kidney mortality and total non-cancer mortality were stronger in rural areas than urban areas. Spatial results show correlations between non-carcinogenic discharges and cancer mortality for much of the contiguous United States, suggesting that chemicals not currently recognized as carcinogens may contribute to cancer mortality risk. The
ERIC Educational Resources Information Center
Gutiérrez-Zornoza, Myriam; Sánchez-López, Mairena; García-Hermoso, Antonio; González-García, Alberto; Chillón, Palma; Martínez-Vizcaíno, Vicente
2015-01-01
Purpose: The aim of this study was to examine (a) whether distance from home to school is a determinant of active commuting to school (ACS), (b) the relationship between distance from home to heavily used facilities (school, green spaces, and sports facilities) and the weight status and cardiometabolic risk categories, and (c) whether ACS has a…
Peng, Xiang; Mielke, Michael; Booth, Timothy
2011-01-17
We demonstrate high average power, high energy 1.55 μm ultra-short pulse (<1 ps) laser delivery using helium-filled and argon-filled large mode area hollow core photonic band-gap fibers and compare relevant performance parameters. The ultra-short pulse laser beam, with pulse energy higher than 7 μJ and pulse-train average power larger than 0.7 W, is output from a 2 m long hollow core fiber with diffraction-limited beam quality. We introduce a pulse tuning mechanism of argon-filled hollow core photonic band-gap fiber. We assess the damage threshold of the hollow core photonic band-gap fiber and propose methods to further increase the pulse energy and average power handling. PMID:21263632
On generalized averaged Gaussian formulas
NASA Astrophysics Data System (ADS)
Spalevic, Miodrag M.
2007-09-01
We present a simple numerical method for constructing the optimal (generalized) averaged Gaussian quadrature formulas which are the optimal stratified extensions of Gauss quadrature formulas. These extensions exist in many cases in which real positive Kronrod formulas do not exist. For the Jacobi weight functions $w(x) \equiv w^{(\alpha,\beta)}(x) = (1-x)^{\alpha}(1+x)^{\beta}$ ($\alpha, \beta > -1$) we give a necessary and sufficient condition on the parameters $\alpha$ and $\beta$ such that the optimal averaged Gaussian quadrature formulas are internal.
Iwashima, Satoru; Ishikawa, Takamichi
2016-08-01
Background Our goal was to evaluate the hemodynamic status of very low-birth-weight infants (VLBWIs) with patent ductus arteriosus (PDA) by measuring the vena contracta width (VCW) and effective shunt orifice area (ESOA) using the proximal isovelocity surface area (PISA) on color Doppler imaging. Methods and Results In this study, 34 VLBWIs with PDA (median weight: 949 g) were studied. We measured VCW and ESOA using the PISA on echocardiography. PDA-VCW was measured at the narrowest PDA flow region. ESOA determined using PISA (PDA-ESOA) was defined as the hemispheric area of laminar flow with aliased velocities on color Doppler flow imaging: PDA-ESOA = 2π(PDA radius)² × aliasing velocity/PDA velocity. Of the 34 VLBWIs, 26 received indomethacin (IND) for symptomatic PDA. Comparing echocardiographic parameters between infants who did versus did not receive IND, significant differences were seen in the left atrial-to-aortic root ratio (LA/AO), PDA-VCW, and PDA-ESOA. Receiver operating characteristic curve analysis to differentiate between IND usage status produced statistically significant results for PDA-VCW (area under the curve [AUC] = 0.880), PDA-ESOA (AUC = 0.813), and LA/AO (AUC = 0.769). Conclusion PDA-VCW and PDA-ESOA may allow noninvasive assessment of PDA severity, and are useful when determining the timing of clinical decision making for IND administration. PMID:27057768
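The hemispheric PISA relation quoted in the abstract is a one-liner. The sketch below uses hypothetical measurements (aliasing-hemisphere radius in cm, velocities in cm/s), not values from the study:

```python
import math

def pda_esoa(pisa_radius, aliasing_velocity, pda_velocity):
    """Effective shunt orifice area (cm^2) from the hemispheric PISA relation:
    ESOA = 2 * pi * r^2 * (aliasing velocity / peak PDA velocity)."""
    return 2 * math.pi * pisa_radius**2 * aliasing_velocity / pda_velocity

# e.g. a 0.5 cm aliasing radius, 40 cm/s aliasing velocity, 200 cm/s peak jet
area = pda_esoa(0.5, 40, 200)   # ~0.31 cm^2
```

The relation assumes flow converges through a hemispheric isovelocity shell, so the flow rate 2πr²·v_aliasing divided by the peak jet velocity yields the effective orifice area by conservation of flow.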
Chrien, R.E.
1986-10-01
The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs.
Kazemier, Brenda M.; Schuit, Ewoud; Mol, Ben Willem J.; Pajkrt, Eva; Ganzevoort, Wessel
2014-01-01
Objective. To compare birth weight ratio and birth weight percentile to express infant weight when assessing pregnancy outcome. Study Design. We performed a national cohort study. Birth weight ratio was calculated as the observed birth weight divided by the median birth weight for gestational age. The discriminative ability of birth weight ratio and birth weight percentile to identify infants at risk of perinatal death (fetal death and neonatal death) or adverse pregnancy outcome (perinatal death + severe neonatal morbidity) was compared using the area under the curve. Outcomes were expressed stratified by gestational age at delivery, separately for birth weight ratio and birth weight percentile. Results. We studied 1,299,244 pregnant women, with an overall perinatal death rate of 0.62%. Birth weight ratio and birth weight percentile have equivalent overall discriminative performance for perinatal death and adverse perinatal outcome. In late preterm infants (33+0–36+6 weeks), birth weight ratio has better discriminative ability than birth weight percentile for perinatal death (0.68 versus 0.63, P < 0.01) or adverse pregnancy outcome (0.67 versus 0.60, P < 0.001). Conclusion. Birth weight ratio is a potentially valuable instrument to identify infants at risk of perinatal death and adverse pregnancy outcome and provides several advantages for use in research and clinical practice. Moreover, it allows comparison of groups with different average birth weights. PMID:25197283
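The ratio itself is straightforward to compute; the reference medians below are placeholders for illustration, not the national reference values used in the study:

```python
def birth_weight_ratio(observed_g, median_for_ga_g):
    """Observed birth weight divided by the median birth weight
    for the same gestational age."""
    return observed_g / median_for_ga_g

# Hypothetical reference medians (grams) by completed week of gestation
MEDIAN_BW = {34: 2300, 36: 2800, 40: 3500}

ratio = birth_weight_ratio(2100, MEDIAN_BW[36])   # 0.75
```

Because the ratio is dimensionless and centered at 1.0 at every gestational age, it supports the cross-group comparisons the abstract highlights in a way raw weights or percentiles do not.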
NASA Astrophysics Data System (ADS)
Liu, Zhengjun; Wang, Jian; Chi, Changyan
2008-11-01
Multi-source earth observation data are highly desirable in current landslide hazard prediction modeling, and Landslide Hazard Zonation (LHZ) is an important component of such modeling. In this paper, taking Wan County as an example, we investigate the potential of multi-source data sets for landslide hazard zonation based on an ordinal-scale relative weighting-rating technique. LHZ is performed with the chosen factor layers: a buffer map of thrusts, lithology, slope angle and relative relief calculated from a DEM, NDVI, and buffer maps of drainage and lineaments extracted from digital satellite imagery (TM). A Landslide Hazard Index (LHI) value is then calculated, and landslide hazard zones are delineated by slicing the LHI histogram. The statistics show that the high stable slope zone, stable slope zone, quasi-stable slope zone, relatively unstable slope zone, unstable slope zone and defended slope zone account for 2.20%, 14.02%, 39.88%, 28.27%, 12.17% and 3.47% of the area, respectively. GPS deformation control points on the landslide bodies are then used to verify the validity of the LHZ technique.
NASA Astrophysics Data System (ADS)
Corsini, Alessandro; Cervi, Federico; Ronchetti, Francesco
2009-10-01
Locations of potential groundwater springs were mapped in an area of 68 km² in the Northern Apennines of Italy based on Weight of Evidence (WofE) and Radial Basis Function Link Net (RBFLN). A map of more than 200 springs and maps of five causal factors were uploaded to ArcGIS with Spatial Data Modelling extensions. The WofE and RBFLN potential groundwater spring maps had similar prediction rates, allowing about 50% of the training and validation springs to be predicted in about 15 to 20% of the study area. The two maps were merged using a heuristic combination matrix in order to produce two hybrid maps: one representing areas susceptible in both the WofE and RBFLN maps (type A), the other representing areas susceptible in at least one of the two maps (type B). For small cumulated areas, the success rate of both hybrid maps was higher than that of the parent maps, while for large cumulated areas, only the type B hybrid map performed similarly to the parent maps. This suggests different applications of these maps for water management purposes.
NASA Astrophysics Data System (ADS)
Gnanvo, Kondo; Bai, Xinzhan; Gu, Chao; Liyanage, Nilanga; Nelyubin, Vladimir; Zhao, Yuxiang
2016-02-01
A large-area and light-weight gas electron multiplier (GEM) detector was built at the University of Virginia as a prototype for the detector R&D program of the future Electron Ion Collider. The prototype has a trapezoidal geometry designed as a generic sector module in a disk layer configuration of a forward tracker in collider detectors. It is based on light-weight material and narrow support frames in order to minimize multiple scattering and dead-to-sensitive area ratio. The chamber has a novel type of two dimensional (2D) stereo-angle readout board with U-V strips that provides (r,φ) position information in the cylindrical coordinate system of a collider environment. The prototype was tested at the Fermilab Test Beam Facility in October 2013 and the analysis of the test beam data demonstrates an excellent response uniformity of the large area chamber with an efficiency higher than 95%. An angular resolution of 60 μrad in the azimuthal direction and a position resolution better than 550 μm in the radial direction were achieved with the U-V strip readout board. The results are discussed in this paper.
Hmelnitsky, I; Nettheim, N
1987-06-01
Functional anatomy and physiology have naturally attended mainly to those functions which occur most commonly in everyday life. Piano playing is a more specialized area, where functions arise which have so far been neglected in medical science. These functions are here described by a pianist (IH) in the hope that medical researchers will respond to fill the gaps. The importance of this lies not only in the understanding of skilled manipulative activity but also in the avoidance of overuse syndrome (OUS) or repetitive strain injury (RSI). PMID:3614013
Average Cost of Common Schools.
ERIC Educational Resources Information Center
White, Fred; Tweeten, Luther
The paper shows costs of elementary and secondary schools applicable to Oklahoma rural areas, including the long-run average cost curve, which indicates the minimum per-student cost for educating various numbers of students, and the application of the cost curves in determining the optimum school district size. In a stratified sample, the school…
ERIC Educational Resources Information Center
Francis, Richard L.
1992-01-01
Presents the template method developed by Galileo for calculating the areas of geometric shapes of uniform density and thickness. The method compares the weight of a shape of known area to the weight of a shape of unknown area. Applies this hands-on method to problems involving calculus, the Pythagorean theorem, and cycloids. (MDH)
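Since area is proportional to weight for shapes cut from the same uniform sheet, the template method reduces to a single ratio. A minimal sketch with made-up numbers (a 100 cm² reference square and a cut-out of unknown area):

```python
# Galileo's template method: for shapes of uniform density and
# thickness, area is proportional to weight, so
#   A_unknown = (w_unknown / w_known) * A_known.
def area_from_weight(w_unknown, w_known, area_known):
    return w_unknown / w_known * area_known

# Illustrative numbers: reference square of 100 cm^2 weighing 40 g,
# unknown cut-out weighing 30 g.
print(area_from_weight(30.0, 40.0, 100.0))  # 75.0
```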
Kriging without negative weights
Szidarovszky, F.; Baafi, E.Y.; Kim, Y.C.
1987-08-01
Under a constant drift, the linear kriging estimator is considered as a weighted average of n available sample values. Kriging weights are determined such that the estimator is unbiased and optimal. To meet these requirements, negative kriging weights are sometimes found. Use of negative weights can produce negative block grades, which makes no practical sense. In some applications, all kriging weights may be required to be nonnegative. In this paper, a derivation of a set of nonlinear equations with the nonnegative constraint is presented. A numerical algorithm also is developed for the solution of the new set of kriging equations.
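The paper derives its own nonlinear equation system for nonnegativity-constrained kriging; as a stand-in illustration of the same optimization problem, one can minimize the kriging variance quadratic form subject to unbiasedness (weights sum to one) and nonnegativity, here by projected gradient descent. The covariance values are invented for the example.

```python
# A toy stand-in for nonnegativity-constrained kriging weights:
# minimize w^T C w - 2 c0^T w subject to sum(w) = 1, w >= 0.
# (The paper derives a dedicated nonlinear system; this projected
# gradient sketch only illustrates the constrained problem.)
def project_simplex(w):
    """Euclidean projection onto {w >= 0, sum(w) = 1}."""
    u = sorted(w, reverse=True)
    css, theta = 0.0, 0.0
    for i, ui in enumerate(u, 1):
        css += ui
        t = (css - 1.0) / i
        if ui - t > 0:
            theta = t
    return [max(wi - theta, 0.0) for wi in w]

def nn_kriging_weights(C, c0, steps=2000, lr=0.05):
    n = len(c0)
    w = [1.0 / n] * n
    for _ in range(steps):
        grad = [2 * sum(C[i][j] * w[j] for j in range(n)) - 2 * c0[i]
                for i in range(n)]
        w = project_simplex([wi - lr * g for wi, g in zip(w, grad)])
    return w

# Illustrative sample covariance matrix and sample-to-block covariances.
C  = [[1.0, 0.6, 0.1],
      [0.6, 1.0, 0.2],
      [0.1, 0.2, 1.0]]
c0 = [0.8, 0.7, 0.1]
w = nn_kriging_weights(C, c0)
print(w)  # all entries nonnegative, summing to 1
```

Unlike unconstrained ordinary kriging, this estimator can never produce the negative block grades mentioned above, because every weight is clipped to the feasible simplex.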
Kim, Sun Hye; Hwang, Ji-Yun; Kim, Mi Kyung; Chung, Hye Won; Nguyet, Tran Thi Phuc
2010-01-01
The objectives of this study were to examine associations between dietary factors and underweight and overweight among adult Vietnamese living in rural areas of Vietnam. A cross-sectional study of 497 Vietnamese aged 19 to 60 years (204 males, 293 females) was conducted in rural areas of Haiphong, Vietnam. The subjects were classified as underweight, normal weight, or overweight based on BMI. General characteristics, anthropometric parameters, blood profiles, and eating habits were obtained, and dietary intake was assessed using 24-hour recalls for 2 consecutive days. A high prevalence of both underweight (BMI < 18.5 kg/m2) and overweight (BMI ≥ 23 kg/m2) individuals was observed (14.2% and 21.6% for males and 18.9% and 20.6% for females, respectively). For both genders, the overweight group was older than the under- and normal-weight groups (P = 0.0118 for males and P = 0.0002 for females). In female subjects, the overweight group consumed significantly less cereals (P = 0.0033), energy (P = 0.0046), protein (P = 0.0222), and carbohydrate (P = 0.0017) and more fruits (P = 0.0026) than the underweight group; however, no such differences existed in males. The overweight subjects overate more frequently (P = 0.0295) and consumed fish (P = 0.0096) and fruits (P = 0.0083) more often. The prevalence of both underweight and overweight individuals poses serious public health problems in rural areas of Vietnam, and overweight was related to overeating and high fish and fruit consumption. These findings may provide basic data for policymakers and dieticians in developing future nutrition and health programs for rural populations in Vietnam. PMID:20607070
How to Address Measurement Noise in Bayesian Model Averaging
NASA Astrophysics Data System (ADS)
Schöniger, A.; Wöhling, T.; Nowak, W.
2014-12-01
When confronted with the challenge of selecting one out of several competing conceptual models for a specific modeling task, Bayesian model averaging is a rigorous choice. It ranks the plausibility of models based on Bayes' theorem, which yields an optimal trade-off between performance and complexity. With the resulting posterior model probabilities, the individual model predictions are combined into a robust weighted average, and the overall predictive uncertainty (including conceptual uncertainty) can be quantified. This rigorous framework does not, however, yet explicitly consider measurement noise in the calibration data set. This is a major drawback, because model weights may be unstable due to the uncertainty in noisy data, which may compromise the reliability of model ranking. We present a new extension to the Bayesian model averaging framework that explicitly accounts for measurement noise as a source of uncertainty for the weights. This enables modelers to assess the reliability of model ranking for a specific application and a given calibration data set. Also, the impact of measurement noise on the overall prediction uncertainty can be determined. Technically, our extension is built within a Monte Carlo framework: we repeatedly perturb the observed data with random realizations of measurement error and then determine the robustness of the resulting model weights against measurement noise. We quantify the variability of posterior model weights as weighting variance. We add this new variance term to the overall prediction uncertainty analysis within the Bayesian model averaging framework to make uncertainty quantification more realistic and "complete". We illustrate the importance of our suggested extension with an application to soil-plant model selection, based on studies by Wöhling et al. (2013, 2014). Results confirm that noise in leaf area index or evaporation rate observations produces a significant amount of weighting variance.
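The Monte Carlo procedure described above can be sketched with two hypothetical competing models and Gaussian likelihoods: perturb the calibration data with fresh noise realizations, recompute the posterior model weights each time, and report their mean and variance. Data, models, and the noise level are all invented for illustration.

```python
import math, random, statistics

random.seed(0)

# Sketch of the noise-perturbation idea: repeatedly add random
# measurement-error realizations to the calibration data, recompute
# Bayesian model weights, and quantify their spread ("weighting
# variance"). Models, data, and priors are illustrative only.
xs    = [0.0, 1.0, 2.0, 3.0, 4.0]
data  = [0.1, 1.1, 1.9, 3.2, 3.9]      # noisy observations of y = x
sigma = 0.3                            # assumed measurement-error std

models = [lambda x: x,                 # model 1: y = x
          lambda x: 0.5 * x + 1.0]     # model 2: y = x/2 + 1

def bma_weights(obs):
    """Posterior model probabilities from Gaussian likelihoods, flat prior."""
    logL = [sum(-0.5 * ((o - m(x)) / sigma) ** 2 for x, o in zip(xs, obs))
            for m in models]
    mx = max(logL)
    ws = [math.exp(l - mx) for l in logL]
    s = sum(ws)
    return [w / s for w in ws]

# Monte Carlo: perturb the data with random noise realizations.
w1 = [bma_weights([o + random.gauss(0.0, sigma) for o in data])[0]
      for _ in range(500)]

print(statistics.mean(w1), statistics.variance(w1))
```

The variance of `w1` is the "weighting variance" term: if it is large, the model ranking is not robust against the noise level assumed for the data.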
Yu, Xianyu; Wang, Yi; Niu, Ruiqing; Hu, Youjian
2016-01-01
In this study, a novel coupling model for landslide susceptibility mapping is presented. In practice, environmental factors may have different impacts at a local scale in study areas. To provide better predictions, a geographically weighted regression (GWR) technique is firstly used in our method to segment study areas into a series of prediction regions with appropriate sizes. Meanwhile, a support vector machine (SVM) classifier is exploited in each prediction region for landslide susceptibility mapping. To further improve the prediction performance, the particle swarm optimization (PSO) algorithm is used in the prediction regions to obtain optimal parameters for the SVM classifier. To evaluate the prediction performance of our model, several SVM-based prediction models are utilized for comparison on a study area of the Wanzhou district in the Three Gorges Reservoir. Experimental results, based on three objective quantitative measures and visual qualitative evaluation, indicate that our model can achieve better prediction accuracies and is more effective for landslide susceptibility mapping. For instance, our model can achieve an overall prediction accuracy of 91.10%, which is 7.8%–19.1% higher than the traditional SVM-based models. In addition, the obtained landslide susceptibility map by our model can demonstrate an intensive correlation between the classified very high-susceptibility zone and the previously investigated landslides. PMID:27187430
Averaging the inhomogeneous universe
NASA Astrophysics Data System (ADS)
Paranjape, Aseem
2012-03-01
A basic assumption of modern cosmology is that the universe is homogeneous and isotropic on the largest observable scales. This greatly simplifies Einstein's general relativistic field equations applied at these large scales, and allows a straightforward comparison between theoretical models and observed data. However, Einstein's equations should ideally be imposed at length scales comparable to, say, the solar system, since this is where these equations have been tested. We know that at these scales the universe is highly inhomogeneous. It is therefore essential to perform an explicit averaging of the field equations in order to apply them at large scales. It has long been known that due to the nonlinear nature of Einstein's equations, any explicit averaging scheme will necessarily lead to corrections in the equations applied at large scales. Estimating the magnitude and behavior of these corrections is a challenging task, due to difficulties associated with defining averages in the context of general relativity (GR). It has recently become possible to estimate these effects in a rigorous manner, and we will review some of the averaging schemes that have been proposed in the literature. A tantalizing possibility explored by several authors is that the corrections due to averaging may in fact account for the apparent acceleration of the expansion of the universe. We will explore this idea, reviewing some of the work done in the literature to date. We will argue however, that this rather attractive idea is in fact not viable as a solution of the dark energy problem, when confronted with observational constraints.
NASA Astrophysics Data System (ADS)
Zavala, M.; Herndon, S. C.; Slott, R. S.; Dunlea, E. J.; Marr, L. C.; Shorter, J. H.; Zahniser, M.; Knighton, W. B.; Rogers, T. M.; Kolb, C. E.; Molina, L. T.; Molina, M. J.
2006-06-01
A mobile laboratory was used to measure on-road vehicle emission ratios during the MCMA-2003 field campaign held during the spring of 2003 in the Mexico City Metropolitan Area (MCMA). The measured emission ratios represent a sample of emissions of in-use vehicles under real world driving conditions for the MCMA. From the relative amounts of NOx and selected VOCs sampled, the results indicate that the technique is capable of differentiating among vehicle categories and fuel type in real world driving conditions. Emission ratios for NOx, NOy, NH3, H2CO, CH3CHO, and other selected volatile organic compounds (VOCs) are presented for chase-sampled vehicles and fleet-averaged emissions. Results indicate that colectivos, particularly CNG-powered colectivos, are potentially significant contributors of NOx and aldehydes in the MCMA. Similarly, ratios of selected VOCs and NOy showed a strong dependence on traffic mode. These results are compared with the vehicle emissions inventory for the MCMA, other vehicle emissions measurements in the MCMA, and measurements of on-road emissions in US cities. Our estimates for motor vehicle emissions of benzene, toluene, formaldehyde, and acetaldehyde in the MCMA indicate these species are present in concentrations higher than previously reported. The high motor vehicle aldehyde emissions may have an impact on the photochemistry of urban areas.
Covariant approximation averaging
NASA Astrophysics Data System (ADS)
Shintani, Eigo; Arthur, Rudy; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph
2015-06-01
We present a new class of statistical error reduction techniques for Monte Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in Nf = 2+1 lattice QCD using domain-wall fermions. This comparison indicates that, for the same cost, AMA significantly reduces statistical errors in Monte Carlo calculations over conventional methods.
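The core of approximation averaging is an unbiased estimator that combines many cheap approximate measurements with a small number of exact corrections: O_AMA = ⟨O_approx⟩ over many samples + ⟨O_exact − O_approx⟩ over a few. A toy sketch with synthetic stand-in "observables" (not lattice QCD data):

```python
import random, statistics

random.seed(1)

# Toy illustration of the approximation-averaging estimator:
#   O_AMA = <O_approx>_many + <O_exact - O_approx>_few,
# which is unbiased as long as the correction term is computed on an
# unbiased subsample. The observables here are synthetic stand-ins.
def exact(cfg):
    return 1.0 + cfg            # "expensive" exact observable

def approx(cfg):
    return 0.9 + cfg            # "cheap", biased approximation

cheap_cfgs = [random.gauss(0.0, 1.0) for _ in range(1000)]
exact_cfgs = cheap_cfgs[:50]    # subset where the exact solve is affordable

o_approx  = statistics.mean(approx(c) for c in cheap_cfgs)
bias_corr = statistics.mean(exact(c) - approx(c) for c in exact_cfgs)

estimate = o_approx + bias_corr
print(estimate)                 # unbiased estimate of <O_exact> (true value 1.0)
```

The statistical error is driven mostly by the large cheap sample, while the few exact solves remove the approximation bias; this is the cost saving the abstract reports.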
Bonnor, W.B.
1987-05-01
The Einstein-Straus (1945) vacuole is here used to represent a bound cluster of galaxies embedded in a standard pressure-free cosmological model, and the average density of the cluster is compared with the density of the surrounding cosmic fluid. The two are nearly but not quite equal, and the more condensed the cluster, the greater the difference. A theoretical consequence of the discrepancy between the two densities is discussed. 25 references.
NASA Astrophysics Data System (ADS)
Zavala, M.; Herndon, S. C.; Slott, R. S.; Dunlea, E. J.; Marr, L. C.; Shorter, J. H.; Zahniser, M.; Knighton, W. B.; Rogers, T. M.; Kolb, C. E.; Molina, L. T.; Molina, M. J.
2006-11-01
A mobile laboratory was used to measure on-road vehicle emission ratios during the MCMA-2003 field campaign held during the spring of 2003 in the Mexico City Metropolitan Area (MCMA). The measured emission ratios represent a sample of emissions of in-use vehicles under real world driving conditions for the MCMA. From the relative amounts of NOx and selected VOCs sampled, the results indicate that the technique is capable of differentiating among vehicle categories and fuel types in real world driving conditions. Emission ratios for NOx, NOy, NH3, H2CO, CH3CHO, and other selected volatile organic compounds (VOCs) are presented for chase-sampled vehicles in the form of frequency distributions, as well as estimates of fleet-averaged emissions. Our measurements of emission ratios for both CNG and gasoline powered "colectivos" (public transportation buses that are intensively used in the MCMA) indicate that, on a mole-per-mole basis, they have significantly larger NOx and aldehyde emission ratios compared to other sampled vehicles in the MCMA. Similarly, ratios of selected VOCs and NOy showed a strong dependence on traffic mode. These results are compared with the vehicle emissions inventory for the MCMA, other vehicle emissions measurements in the MCMA, and measurements of on-road emissions in U.S. cities. We estimate NOx emissions as 100,600 ± 29,200 metric tons per year for light duty gasoline vehicles in the MCMA for 2003. According to these results, annual NOx emissions estimated in the emissions inventory for this category are within the range of our estimated NOx annual emissions. Our estimates for motor vehicle emissions of benzene, toluene, formaldehyde, and acetaldehyde in the MCMA indicate these species are present in concentrations higher than previously reported. The high motor vehicle aldehyde emissions may have an impact on the photochemistry of urban areas.
Gong, Lunli; Zhou, Xiao; Wu, Yaohao; Zhang, Yun; Wang, Chen; Zhou, Heng; Guo, Fangfang
2014-01-01
The present study was designed to investigate the possibility of repairing full-thickness defects in the weight-bearing area of porcine articular cartilage (AC) using chondrogenically differentiated autologous adipose-derived stem cells (ASCs), with follow-up at 3 and 6 months, extending our previous study on the non-weight-bearing area. The isolated ASCs were seeded onto polyglycolic acid/polylactic acid (PGA/PLA) scaffolds with chondrogenic induction in vitro for 2 weeks as the experimental group prior to implantation in porcine AC defects (8 mm in diameter, deep to subchondral bone), with PGA/PLA alone as control. At both 3 and 6 months of follow-up, the neo-cartilage in the experimental group integrated well histologically with the neighboring normal cartilage and subchondral bone, whereas only fibrous tissue formed in the control group. Immunohistochemical and toluidine blue staining confirmed a distribution of COL II and glycosaminoglycan in the regenerated cartilage similar to that of native cartilage. A vivid remodeling process over the repair period was also observed, as the compressive modulus of the neo-cartilage increased significantly from 70% of that of normal cartilage at 3 months to nearly 90% at 6 months, similar to our former research. Nevertheless, differences between the regenerated cartilage and native cartilage could still be detected. Meanwhile, the exact mechanism of chondrogenic differentiation of ASCs seeded on PGA/PLA remains unknown. Therefore, proteomic analysis was performed, identifying 43 differentially expressed proteins from 20 chosen two-dimensional spots, which help us further our research on committed factors. In conclusion, the proteomic comparison provided a thorough understanding of the mechanisms implicated in ASC differentiation toward chondrocytes, and the present study serves as a complement to the former one on the non-weight-bearing area. PMID:24044689
NASA Technical Reports Server (NTRS)
Feiveson, A. H. (Principal Investigator)
1979-01-01
The use of a weighted aggregation technique to improve the precision of the overall LACIE estimate is considered. The manner in which a weighted aggregation technique is implemented given a set of weights is described. The problem of variance estimation is discussed and the question of how to obtain the weights in an operational environment is addressed.
Pearlman, David A; Rao, B Govinda; Charifson, Paul
2008-05-15
We demonstrate a new approach to the development of scoring functions through the formulation and parameterization of a new function, which can be used both for rapidly ranking the binding of ligands to proteins and for estimating relative aqueous molecular solubilities. The intent of this work is to introduce a new paradigm for the creation of scoring functions, wherein we impose the following criteria upon the function: (1) simple; (2) intuitive; (3) requires no postparameterization tweaking; (4) can be applied (without reparameterization) to multiple target systems; and (5) can be rapidly evaluated for any potential ligand. Following these criteria, a new function, FURSMASA (function for rapid scoring using an MD-averaged grid and the accessible surface area), has been developed. Three novel features of the function are: (1) use of an MD-averaged potential energy grid for ligand-protein interactions, rather than a simple static grid; (2) inclusion of a term that depends on changes in the solvent-accessible surface area on an atomic (not molecular) basis; and (3) use of the recently derived predictive index (PI) target when optimizing the function, which focuses the function on its intended purpose of relative ranking. A genetic algorithm is used to optimize the function against test data sets that include ligands for the following proteins: IMPDH, p38, gyrase B, HIV-1, and TACE, as well as the Syracuse Research solubility database. We find that the function is predictive and can simultaneously fit all the test data sets with cross-validated predictive indices ranging from 0.68 to 0.82. As a test of the ability of this function to predict binding for systems not in the training set, the fitted FURSMASA function is then applied to 23 ligands of the COX-2 enzyme. Comparing the results for COX-2 against those obtained using a variety of well-known rapid scoring functions demonstrates that FURSMASA outperforms all of them in terms of the PI and
Chen, Xue-shuang; Jiang, Tao; Lu, Song; Wei, Shi-qiang; Wang, Ding-yong; Yan, Jin-long
2016-03-15
The study of the molecular weight (MW) fractions of dissolved organic matter (DOM) in aquatic environments is of interest because size plays an important role in determining the biogeochemical characteristics of DOM. Using ultrafiltration (UF) combined with three-dimensional fluorescence spectroscopy, DOM samples from four sampling sites in typical water-level fluctuation zones of the Three Gorges Reservoir area were selected to investigate differences in the properties and sources of different DOM MW fractions. The results showed that in these areas, the distribution of MW fractions was highly dispersive, but approximately equal contributions from the colloidal (Mr 1 x 10³-0.22 µm) and truly dissolved (Mr < 1 x 10³) fractions to the total DOC concentration were found. Four fluorescence signals (humic-like A and C; protein-like B and T) were observed in all MW fractions including bulk DOM, which showed the same distribution trend: truly dissolved > low MW (Mr 1 x 10³-10 x 10³) > medium MW (Mr 10 x 10³-30 x 10³) > high MW (Mr 30 x 10³-0.22 µm). Additionally, with decreasing MW fraction, the fluorescence index (FI) and freshness index (BIX) increased, suggesting enhanced signals of autochthonous inputs, whereas the humification index (HIX) decreased, indicating a lower humification degree. This strongly suggested that terrestrial input mainly affected the composition and properties of the higher MW fractions of DOM, whereas the lower MW and truly dissolved fractions were controlled by autochthonous sources such as microbial and algal activities rather than allochthonous sources. Meanwhile, the different riparian land-use types also clearly affected the characteristics of DOM. Therefore, higher diversity of land-use types, and thus higher complexity of ecosystems and landscapes, induced higher heterogeneity of fluorescence components in different MW fractions of DOM. PMID:27337878
Córdova-Palomera, Aldo; Fatjó-Vilas, Mar; Falcón, Carles; Bargalló, Nuria; Alemany, Silvia; Crespo-Facorro, Benedicto; Nenadic, Igor; Fañanás, Lourdes
2015-01-01
Background: Previous research suggests that low birth weight (BW) induces reduced brain cortical surface area (SA) which would persist until at least early adulthood. Moreover, low BW has been linked to psychiatric disorders such as depression and psychological distress, and to altered neurocognitive profiles. Aims: We present novel findings obtained by analysing high-resolution structural MRI scans of 48 twins; specifically, we aimed: i) to test the BW-SA association in a middle-aged adult sample; and ii) to assess whether either depression/anxiety disorders or intelligence quotient (IQ) influence the BW-SA link, using a monozygotic (MZ) twin design to separate environmental and genetic effects. Results: Both lower BW and decreased IQ were associated with smaller total and regional cortical SA in adulthood. Within a twin pair, lower BW was related to smaller total cortical and regional SA. In contrast, MZ twin differences in SA were not related to differences in either IQ or depression/anxiety disorders. Conclusion: The present study supports findings indicating that i) BW has a long-lasting effect on cortical SA, where some familial and environmental influences alter both foetal growth and brain morphology; ii) uniquely environmental factors affecting BW also alter SA; iii) higher IQ correlates with larger SA; and iv) these effects are not modified by internalizing psychopathology. PMID:26086820
Shin, Youn Ho; Choi, Suk-Joo; Kim, Kyung Won; Yu, Jinho; Ahn, Kang Mo; Kim, Hyung Young; Seo, Ju-Hee; Kwon, Ji-Won; Kim, Byoung-Ju; Kim, Hyo-Bin; Shim, Jung Yeon; Kim, Woo Kyung; Song, Dae Jin; Lee, So-Yeon; Lee, Soo Young; Jang, Gwang Cheon; Kwon, Ja-Young; Lee, Kyung-Ju; Park, Hee Jin; Lee, Pil Ryang; Won, Hye-Sung
2013-01-01
Previous studies suggest that maternal characteristics may be associated with neonatal outcomes. However, the influence of maternal characteristics on birth weight (BW) has not been adequately determined in Korean populations. We investigated associations between maternal characteristics and BW in a sample of 813 Korean women living in the Seoul metropolitan area, Korea, recruited using data from the prospective hospital-based COhort for Childhood Origin of Asthma and allergic diseases (COCOA) between 2007 and 2011. The mean maternal age at delivery was 32.3 ± 3.5 yr and prepregnancy maternal body mass index (BMI) was 20.7 ± 2.5 kg/m2. The mean BW of infants was 3,196 ± 406 g. The overall prevalence of a maternal history of allergic disease was 32.9% and the overall prevalence of allergic symptoms was 65.1%. In multivariate regression models, prepregnancy maternal BMI and gestational age at delivery were positively, and a maternal history of allergic disease and nulliparity were negatively, associated with BW (all P < 0.05). Presence of allergic symptoms in the mother was not associated with BW. In conclusion, prepregnancy maternal BMI, gestational age at delivery, a maternal history of allergic disease, and nulliparity may each be associated with BW. PMID:23579316
Americans' Average Radiation Exposure
NA
2000-08-11
We live with radiation every day. We receive radiation exposure from cosmic rays from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and our own bodies. We are also exposed to man-made sources of radiation, including medical and dental treatments, television sets, and emissions from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is the high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body.
Wang, Tingting; Li, Wenhua; Wu, Xiangru; Yin, Bing; Chu, Caiting; Ding, Ming; Cui, Yanfen
2016-01-01
Objective: To assess the added value of diffusion-weighted magnetic resonance imaging (DWI) with apparent diffusion coefficient (ADC) values compared to MRI for characterizing tubo-ovarian abscesses (TOA) mimicking ovarian malignancy. Materials and Methods: Patients with TOA (or ovarian abscess alone; n = 34) or ovarian malignancy (n = 35) who underwent DWI and MRI were retrospectively reviewed. The signal intensity of the cystic and solid components of TOAs and ovarian malignant tumors on DWI and the corresponding ADC values were evaluated, and clinical characteristics, morphological features, and MRI findings were comparatively analyzed. Receiver operating characteristic (ROC) curve analysis based on logistic regression was applied to identify different imaging characteristics between the two patient groups and to assess the predictive value of the combination diagnosis with area under the curve (AUC) analysis. Results: The mean ADC value of the cystic component in TOA was significantly lower than in malignant tumors (1.04 ± 0.41 × 10−3 mm2/s vs. 2.42 ± 0.38 × 10−3 mm2/s; p < 0.001). The mean ADC value of the enhanced solid component in 26 TOAs was 1.43 ± 0.16 × 10−3 mm2/s, and 46.2% (12 TOAs; pseudotumor areas) showed significantly higher signal intensity on DW-MRI than in ovarian malignancy (mean ADC value 1.44 ± 0.20 × 10−3 mm2/s vs. 1.18 ± 0.36 × 10−3 mm2/s; p = 0.043). The combination diagnosis of ADC value and dilated tubal structure achieved the best AUC of 0.996. Sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy of MRI vs. DWI with ADC values for predicting TOA were 47.1%, 91.4%, 84.2%, 64%, and 69.6% vs. 100%, 97.1%, 97.1%, 100%, and 98.6%, respectively. Conclusions: DW-MRI is superior to MRI in the assessment of TOA mimicking ovarian malignancy, and the ADC values aid in discriminating the pseudotumor area of TOA from the solid portion of ovarian malignancy. PMID:26894926
... obese. Achieving a healthy weight can help you control your cholesterol, blood pressure and blood sugar. It ... use more calories than you eat. A weight-control strategy might include Choosing low-fat, low-calorie ...
... heart failure, and kidney disease. Good nutrition and exercise can help in losing weight. Eating extra calories within a well-balanced diet and treating any underlying medical problems can help to add weight.
NASA Astrophysics Data System (ADS)
Hermance, J. F.; Jacob, R. W.; Bradley, B. A.; Mustard, J. F.
2005-12-01
In studying vegetation patterns remotely, the objective is to draw inferences on the development of specific or general land surface phenology (LSP) as a function of space and time by determining the behavior of a parameter (in our case NDVI), when the parameter estimate may be biased by noise, data dropouts, and obfuscation from atmospheric and other effects. We describe the underpinning concepts of a procedure for robust interpolation of NDVI data that does not have the limitations of other mathematical approaches that require orthonormal basis functions (e.g., Fourier analysis). In this approach, data need not be uniformly sampled in time, nor do we expect noise to be Gaussian-distributed. Our approach is intuitive and straightforward, and is applied here to the refined modeling of LSP using 7 years of weekly and biweekly AVHRR NDVI data for a 150 x 150 km study area in central Nevada. This site is a microcosm of a broad range of vegetation classes, from irrigated agriculture with annual NDVI values of up to 0.7 to playas and alkali salt flats with annual NDVI values of only 0.07. Our procedure involves a form of parameter estimation employing Bayesian statistics. In utilitarian terms, the latter is a method of statistical analysis (in our case, robustified, weighted least-squares recursive curve-fitting) that incorporates a variety of prior knowledge when forming current estimates of a particular process or parameter. In addition to the standard Bayesian approach, we account for outliers due to data dropouts or obfuscation caused by clouds and snow cover. An initial "starting model" for the average annual cycle and long-term (7-year) trend is determined by jointly fitting a common set of complex annual harmonics and a low-order polynomial to the entire multi-year time series in one step. This is not a formal Fourier series in the conventional sense, but rather a set of 4 cosine and 4 sine coefficients with fundamental periods of 12, 6, 3 and 1
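The core least-squares step of fitting annual harmonics can be illustrated for the simple case of uniformly sampled data, where the sine/cosine coefficients reduce to projections onto the basis functions. The NDVI series below is synthetic; the paper's full method adds more harmonics, a polynomial trend, robust weighting, and Bayesian priors.

```python
import math

# Sketch of fitting the annual harmonic of a uniformly sampled NDVI
# series by projection onto cosine/sine basis functions. The series
# here is synthetic: a 0.3 mean with a 0.2-amplitude annual cycle.
N = 24                                  # biweekly samples over one year
t = [i / N for i in range(N)]           # time in years
ndvi = [0.3 + 0.2 * math.cos(2 * math.pi * (ti - 0.5)) for ti in t]

mean = sum(ndvi) / N
a1 = 2.0 / N * sum(y * math.cos(2 * math.pi * ti) for y, ti in zip(ndvi, t))
b1 = 2.0 / N * sum(y * math.sin(2 * math.pi * ti) for y, ti in zip(ndvi, t))

amplitude = math.hypot(a1, b1)          # recovered annual amplitude
print(mean, amplitude)
```

For non-uniform sampling or non-Gaussian noise, as in the AVHRR record, the projection is replaced by the robustified weighted least-squares fit the abstract describes.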
Combining remotely sensed and other measurements for hydrologic areal averages
NASA Technical Reports Server (NTRS)
Johnson, E. R.; Peck, E. L.; Keefer, T. N.
1982-01-01
A method is described for combining measurements of hydrologic variables of various sampling geometries and measurement accuracies to produce an estimated mean areal value over a watershed and a measure of the accuracy of the mean areal value. The method provides a means to integrate measurements from conventional hydrological networks and remote sensing. The resulting areal averages can be used to enhance a wide variety of hydrological applications including basin modeling. The correlation area method assigns weights to each available measurement (point, line, or areal) based on the area of the basin most accurately represented by the measurement. The statistical characteristics of the accuracy of the various measurement technologies and of the random fields of the hydrologic variables used in the study (water equivalent of the snow cover and soil moisture) required to implement the method are discussed.
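The core idea of the correlation area method — weighting each available measurement by the basin area it most accurately represents — can be sketched as a simple area-weighted mean. The measurement values and represented areas below are illustrative assumptions:

```python
# Sketch of area-based weighting of heterogeneous measurements
# (point gauges, line surveys, areal remote-sensing estimates).
# Values are illustrative, not from the paper.
measurements = [12.0, 15.0, 10.0]      # e.g. snow water equivalent (cm)
represented_area = [30.0, 50.0, 20.0]  # km^2 each measurement represents

total_area = sum(represented_area)
weights = [a / total_area for a in represented_area]

# Mean areal value: each measurement contributes in proportion to the
# basin area it best represents.
areal_mean = sum(w * m for w, m in zip(weights, measurements))
```

In the full method the represented areas would come from the statistical (correlation) structure of the hydrologic field and the accuracy of each measurement technology, not from fixed constants as here.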
Dissociating Averageness and Attractiveness: Attractive Faces Are Not Always Average
ERIC Educational Resources Information Center
DeBruine, Lisa M.; Jones, Benedict C.; Unger, Layla; Little, Anthony C.; Feinberg, David R.
2007-01-01
Although the averageness hypothesis of facial attractiveness proposes that the attractiveness of faces is mostly a consequence of their averageness, 1 study has shown that caricaturing highly attractive faces makes them mathematically less average but more attractive. Here the authors systematically test the averageness hypothesis in 5 experiments…
Spatial limitations in averaging social cues.
Florey, Joseph; Clifford, Colin W G; Dakin, Steven; Mareschal, Isabelle
2016-01-01
The direction of social attention from groups provides stronger cueing than from an individual. It has previously been shown that both basic visual features such as size or orientation and more complex features such as face emotion and identity can be averaged across multiple elements. Here we used an equivalent noise procedure to compare observers' ability to average social cues with their averaging of a non-social cue. Estimates of observers' internal noise (uncertainty associated with processing any individual) and sample size (the effective number of gaze directions pooled) were derived by fitting equivalent noise functions to discrimination thresholds. We also used reverse correlation analysis to estimate the spatial distribution of samples used by participants. Averaging of head rotation and cone rotation was less noisy and more efficient than averaging of gaze direction, though presenting only the eye region of faces at a larger size improved gaze averaging performance. The reverse correlation analysis revealed greater sampling areas for head rotation compared to gaze. We attribute these differences in averaging between gaze and head cues to poorer visual processing of faces in the periphery. The similarity between head and cone averaging is examined within the framework of a general mechanism for averaging of object rotation. PMID:27573589
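The equivalent noise function referred to here is commonly modeled as: squared threshold = (internal noise² + external noise²) / effective sample size. A minimal sketch, with parameter values that are illustrative assumptions rather than the study's estimates:

```python
import math

# Equivalent-noise model of averaging thresholds: the observer's
# discrimination threshold depends on internal noise and on the
# effective number of elements pooled. Parameter values are illustrative.
def threshold(external_noise, internal_noise, n_samples):
    """Predicted discrimination threshold (degrees of rotation)."""
    return math.sqrt((internal_noise ** 2 + external_noise ** 2) / n_samples)

# With no external noise the threshold reveals internal_noise / sqrt(n);
# at high external noise, thresholds are dominated by sample size.
low_noise = threshold(0.0, internal_noise=4.0, n_samples=4)
high_noise = threshold(16.0, internal_noise=4.0, n_samples=4)
```

Fitting this two-parameter curve to thresholds measured at several external-noise levels yields the internal-noise and sample-size estimates the abstract describes.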
NASA Technical Reports Server (NTRS)
Moore, R. D.; Urasek, D. C.; Kovich, G.
1973-01-01
The overall and blade-element performances are presented over the stable flow operating range from 50 to 100 percent of design speed. Stage peak efficiency of 0.834 was obtained at a weight flow of 26.4 kg/sec (58.3 lb/sec) and a pressure ratio of 1.581. The stall margin for the stage was 7.5 percent based on weight flow and pressure ratio at stall and peak efficiency conditions. The rotor minimum losses were approximately equal to design except in the blade vibration damper region. Stator minimum losses were less than design except in the tip and damper regions.
NASA Technical Reports Server (NTRS)
Howard, W. H.; Young, D. R.
1972-01-01
Device applies compressive force to bone to minimize loss of bone calcium during weightlessness or bedrest. Force is applied through weights, or hydraulic, pneumatic or electrically actuated devices. Device is lightweight and easy to maintain and operate.
Improving Reading Abilities of Average and Below Average Readers through Peer Tutoring.
ERIC Educational Resources Information Center
Galezio, Marne; And Others
A program was designed to improve the progress of average and below average readers in a first-grade, a second-grade, and a sixth-grade classroom in a multicultural, multi-social economic district located in a three-county area northwest of Chicago, Illinois. Classroom teachers noted that students were having difficulty making adequate progress in…
The Molecular Weight Distribution of Polymer Samples
ERIC Educational Resources Information Center
Horta, Arturo; Pastoriza, M. Alejandra
2007-01-01
Various methods for the determination of the molecular weight distribution (MWD) of different polymer samples are presented. The study shows that the molecular weight averages and distribution of a polymerization completely depend on the characteristics of the reaction itself.
Some series of intuitionistic fuzzy interactive averaging aggregation operators.
Garg, Harish
2016-01-01
In this paper, some series of new intuitionistic fuzzy averaging aggregation operators are presented under the intuitionistic fuzzy sets environment. For this, some shortcomings of the existing operators are first highlighted, and then a new operational law, which considers the hesitation degree between the membership functions, is proposed to overcome them. Based on these new operational laws, some new averaging aggregation operators, namely the intuitionistic fuzzy Hamacher interactive weighted averaging, ordered weighted averaging, and hybrid weighted averaging operators, labeled IFHIWA, IFHIOWA, and IFHIHWA respectively, are proposed. Furthermore, some desirable properties such as idempotency, boundedness, and homogeneity are studied. Finally, a multi-criteria decision-making method based on the proposed operators is presented for selecting the best alternative. A comparative study between the proposed operators and the existing operators is carried out in detail. PMID:27441128
Physics of the spatially averaged snowmelt process
NASA Astrophysics Data System (ADS)
Horne, Federico E.; Kavvas, M. Levent
1997-04-01
It has been recognized that the snowmelt models developed in the past do not fully meet current prediction requirements. Part of the reason is that they do not account for the spatial variation in the dynamics of the spatially heterogeneous snowmelt process. Most of the current physics-based distributed snowmelt models utilize point-location-scale conservation equations which do not represent the spatially varying snowmelt dynamics over a grid area that surrounds a computational node. In this study, to account for the spatial heterogeneity of the snowmelt dynamics, areally averaged mass and energy conservation equations for the snowmelt process are developed. As a first step, the energy and mass conservation equations that govern the snowmelt dynamics at a point location are averaged over the snowpack depth, resulting in depth-averaged equations (DAE). In this averaging, it is assumed that the snowpack has two layers. Then the point-location DAE are averaged over the snowcover area. To develop the areally averaged equations of the snowmelt physics, we make the fundamental assumption that the snowmelt process is spatially ergodic. The snow temperature and the snow density are considered as the stochastic variables. The areally averaged snowmelt equations are obtained in terms of their corresponding ensemble averages. Only the first two moments are considered. A numerical solution scheme (Runge-Kutta) is then applied to solve the resulting system of ordinary differential equations. This equation system is solved for the areal mean and areal variance of snow temperature and of snow density, for the areal mean of snowmelt, and for the areal covariance of snow temperature and snow density. The developed model is tested using Scott Valley (Siskiyou County, California) snowmelt and meteorological data. The performance of the model in simulating the observed areally averaged snowmelt is satisfactory.
NASA Technical Reports Server (NTRS)
Chen, Fei; Yates, David; LeMone, Margaret
2001-01-01
To understand the effects of land-surface heterogeneity and the interactions between the land-surface and the planetary boundary layer at different scales, we develop a multiscale data set. This data set, based on the Cooperative Atmosphere-Surface Exchange Study (CASES97) observations, includes atmospheric, surface, and sub-surface observations obtained from a dense observation network covering a large region on the order of 100 km. We use this data set to drive three land-surface models (LSMs) to generate multi-scale (with three resolutions of 1, 5, and 10 kilometers) gridded surface heat flux maps for the CASES area. Upon validating these flux maps with measurements from surface station and aircraft, we utilize them to investigate several approaches for estimating the area-integrated surface heat flux for the CASES97 domain of 71x74 square kilometers, which is crucial for land surface model development/validation and area water and energy budget studies. This research is aimed at understanding the relative contribution of random turbulence versus organized mesoscale circulations to the area-integrated surface flux at the scale of 100 kilometers, and identifying the most important effective parameters for characterizing the subgrid-scale variability for large-scale atmosphere-hydrology models.
Chen, Chieh-Li; Ishikawa, Hiroshi; Wollstein, Gadi; Bilonick, Richard A.; Kagemann, Larry; Schuman, Joel S.
2016-01-01
Purpose To develop a novel image enhancement method so that nonframe-averaged optical coherence tomography (OCT) images become comparable to active eye-tracking frame-averaged OCT images. Methods Twenty-one eyes of 21 healthy volunteers were scanned with a noneye-tracking nonframe-averaged OCT device and an active eye-tracking frame-averaged OCT device. Virtual averaging was applied to the nonframe-averaged images with voxel resampling and adding amplitude deviation with 15 repetitions. Signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and the distance between the end of the visible nasal retinal nerve fiber layer (RNFL) and the foveola were assessed to evaluate the image enhancement effect and retinal layer visibility. Retinal thicknesses before and after processing were also measured. Results All virtual-averaged nonframe-averaged images showed notable improvement and clear resemblance to active eye-tracking frame-averaged images. SNR and CNR were significantly improved (SNR: 30.5 vs. 47.6 dB, CNR: 4.4 vs. 6.4 dB, original versus processed, P < 0.0001, paired t-test). The distance between the end of the visible nasal RNFL and the foveola was significantly different before (681.4 vs. 446.5 μm, Cirrus versus Spectralis, P < 0.0001) but not after processing (442.9 vs. 446.5 μm, P = 0.76). Sectoral macular total retinal and circumpapillary RNFL thicknesses showed systematic differences between Cirrus and Spectralis that became nonsignificant after processing. Conclusion The virtual averaging method successfully improved nontracking nonframe-averaged OCT image quality and made the images comparable to active eye-tracking frame-averaged OCT images. Translational Relevance Virtual averaging may enable detailed retinal structure studies on images acquired using a mixture of nonframe-averaged and frame-averaged OCT devices without concern about systematic differences in either qualitative or quantitative aspects. PMID:26835180
Prediction of shelled shrimp weight by machine vision.
Pan, Peng-min; Li, Jian-ping; Lv, Gu-lai; Yang, Hui; Zhu, Song-ming; Lou, Jian-zhong
2009-08-01
The weight of shelled shrimp is an important parameter for the grading process. Predicting shelled shrimp weight from contour area alone is not accurate enough because it ignores shrimp thickness. In this paper, a multivariate prediction model containing area, perimeter, length, and width was established. A new calibration algorithm for extracting the length of shelled shrimp was proposed, which comprises binary image thinning, branch recognition and elimination, and length reconstruction; width was calculated during length extraction. The model was further validated with another set of images from 30 shelled shrimps. For comparison, an artificial neural network (ANN) was used for shrimp weight prediction. The ANN model achieved better prediction accuracy (average relative error 2.67%) but required a tenfold increase in calculation time compared with the weight-area-perimeter (WAP) model (average relative error 3.02%). We thus conclude that the WAP model is the better method for predicting the weight of shelled red shrimp. PMID:19650197
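A WAP-style multivariate model of this kind can be sketched as ordinary least squares of weight on area, perimeter, length, and width. The synthetic data and coefficients below are illustrative assumptions, not the paper's calibration:

```python
import numpy as np

# Sketch of a weight-area-perimeter (WAP) style model: shrimp weight
# regressed on four image features. Feature ranges, true coefficients,
# and noise level are illustrative assumptions.
rng = np.random.default_rng(1)
n = 30
area = rng.uniform(4.0, 9.0, n)        # contour area (cm^2)
perimeter = rng.uniform(8.0, 14.0, n)  # contour perimeter (cm)
length = rng.uniform(5.0, 9.0, n)      # extracted length (cm)
width = rng.uniform(1.0, 2.0, n)       # extracted width (cm)
weight = (1.5 * area + 0.2 * perimeter + 0.1 * length + 0.3 * width
          + rng.normal(0.0, 0.05, n))  # synthetic weight (g)

# Fit the multivariate model (intercept + four features) by least squares.
X = np.column_stack([np.ones(n), area, perimeter, length, width])
coef, *_ = np.linalg.lstsq(X, weight, rcond=None)
predicted = X @ coef
```

In practice the features would come from the image-processing pipeline (thinning, branch elimination, length reconstruction) described in the abstract, and the fitted coefficients would serve as the grading model.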
Averaging Models: Parameters Estimation with the R-Average Procedure
ERIC Educational Resources Information Center
Vidotto, G.; Massidda, D.; Noventa, S.
2010-01-01
The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto &…
Geologic analysis of averaged magnetic satellite anomalies
NASA Technical Reports Server (NTRS)
Goyal, H. K.; Vonfrese, R. R. B.; Ridgway, J. R.; Hinze, W. J.
1985-01-01
To investigate relative advantages and limitations for quantitative geologic analysis of magnetic satellite scalar anomalies derived from arithmetic averaging of orbital profiles within equal-angle or equal-area parallelograms, the anomaly averaging process was simulated by orbital profiles computed from spherical-earth crustal magnetic anomaly modeling experiments using Gauss-Legendre quadrature integration. The results indicate that averaging can provide reasonable values at satellite elevations, where contributing error factors within a given parallelogram include the elevation distribution of the data, and orbital noise and geomagnetic field attributes. Various inversion schemes including the use of equivalent point dipoles are also investigated as an alternative to arithmetic averaging. Although inversion can provide improved spherical grid anomaly estimates, these procedures are problematic in practice where computer scaling difficulties frequently arise due to a combination of factors including large source-to-observation distances ( 400 km), high geographic latitudes, and low geomagnetic field inclinations.
Interpreting Bivariate Regression Coefficients: Going beyond the Average
ERIC Educational Resources Information Center
Halcoussis, Dennis; Phillips, G. Michael
2010-01-01
Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…
42 CFR 423.279 - National average monthly bid amount.
Code of Federal Regulations, 2012 CFR
2012-10-01
... each MA-PD plan described in section 1851(a)(2)(A)(i) of the Act. The calculation does not include bids... section 1876(h) of the Act. (b) Calculation of weighted average. (1) The national average monthly bid... defined in § 422.258(c)(1) of this chapter) and the denominator equal to the total number of Part...
42 CFR 423.279 - National average monthly bid amount.
Code of Federal Regulations, 2011 CFR
2011-10-01
... bid amounts for each prescription drug plan (not including fallbacks) and for each MA-PD plan...(h) of the Act. (b) Calculation of weighted average. (1) The national average monthly bid amount is a....258(c)(1) of this chapter) and the denominator equal to the total number of Part D...
42 CFR 423.279 - National average monthly bid amount.
Code of Federal Regulations, 2014 CFR
2014-10-01
... each MA-PD plan described in section 1851(a)(2)(A)(i) of the Act. The calculation does not include bids... section 1876(h) of the Act. (b) Calculation of weighted average. (1) The national average monthly bid... defined in § 422.258(c)(1) of this chapter) and the denominator equal to the total number of Part...
42 CFR 423.279 - National average monthly bid amount.
Code of Federal Regulations, 2010 CFR
2010-10-01
... bid amounts for each prescription drug plan (not including fallbacks) and for each MA-PD plan...(h) of the Act. (b) Calculation of weighted average. (1) The national average monthly bid amount is a....258(c)(1) of this chapter) and the denominator equal to the total number of Part D...
[Comparison of formulas for calculating average skin temperature and their characteristics].
Mochida, T; Shimakura, K; Yoshida, N
1994-11-01
In order to obtain skin temperature data, experiments were carried out using three healthy young Japanese males. The subjects were exposed to each of four environments with dry-bulb temperatures of 15 degrees C, 19 degrees C, 25 degrees C and 33 degrees C. At each of these air temperatures, relative humidity and air movement were set at 50% and 0.15 m/s respectively. The subjects wore only athletic shorts and were seated on a meshed chair. Each subject was measured with thermistors continuously for one hour under these conditions to obtain twenty-nine regional skin temperatures. The above experiments were made with one subject at a time in the test chamber. The observed skin temperatures were substituted into twenty-eight different weighting formulas for comparison. The present analysis revealed that the calculations from the 12-point and the 7-point skin area formulas by Hardy-DuBois approximated the mean values of the twenty-eight. Moreover, the values calculated from the formula by Nadel et al., which was weighted by skin area and thermal sensitivity, are similar to the values calculated by the formula of Mochida, which was weighted by skin area, heat transfer coefficients and thermal sensitivity. Furthermore, the authors verified that the area-mean weighting factor was derived from Teichner's definition, in which a limiting value of the arithmetical mean of skin temperatures gives the value of the average skin temperature. PMID:7880325
Averaging Internal Consistency Reliability Coefficients
ERIC Educational Resources Information Center
Feldt, Leonard S.; Charter, Richard A.
2006-01-01
Seven approaches to averaging reliability coefficients are presented. Each approach starts with a unique definition of the concept of "average," and no approach is more correct than the others. Six of the approaches are applicable to internal consistency coefficients. The seventh approach is specific to alternate-forms coefficients. Although the…
NASA Technical Reports Server (NTRS)
Ustino, Eugene A.
2006-01-01
This slide presentation reviews the observable radiances as functions of atmospheric parameters and of surface parameters; the mathematics of atmospheric weighting functions (WFs) and surface partial derivatives (PDs) are presented; and the equation of the forward radiative transfer (RT) problem is presented. For non-scattering atmospheres this can be done analytically, and all WFs and PDs can be computed analytically using the direct linearization approach. For scattering atmospheres, in the general case, the solution of the forward RT problem can be obtained only numerically, but we need only two numerical solutions, one of the forward RT problem and one of the adjoint RT problem, to compute all the WFs and PDs we can think of. In this presentation we discuss applications of both the linearization and adjoint approaches.
Feng, Xiaoqi; Astell-Burt, Thomas
2016-06-01
Reductions in body mass index and reduced overweight/obesity risk among participants in the 45 and Up Study diagnosed with type 2 diabetes mellitus (T2DM) were relatively large in rural areas compared to those in urban environs. Further research is needed to explain why where people reside influences optimal management of T2DM. PMID:27321327
Lorcaserin for weight management
Taylor, James R; Dietrich, Eric; Powell, Jason
2013-01-01
Type 2 diabetes and obesity commonly occur together. Obesity contributes to insulin resistance, a main cause of type 2 diabetes. Modest weight loss reduces glucose, lipids, blood pressure, need for medications, and cardiovascular risk. A number of approaches can be used to achieve weight loss, including lifestyle modification, surgery, and medication. Lorcaserin, a novel antiobesity agent, affects central serotonin subtype 2A receptors, resulting in decreased food intake and increased satiety. It has been studied in obese patients with type 2 diabetes and results in an approximately 5.5 kg weight loss, on average, when used for one year. Headache, back pain, nasopharyngitis, and nausea were the most common adverse effects noted with lorcaserin. Hypoglycemia was more common in the lorcaserin groups in the clinical trials, but none of the episodes were categorized as severe. Based on the results of these studies, lorcaserin was approved at a dose of 10 mg twice daily in patients with a body mass index ≥30 kg/m2 or ≥27 kg/m2 with at least one weight-related comorbidity, such as hypertension, type 2 diabetes mellitus, or dyslipidemia, in addition to a reduced calorie diet and increased physical activity. Lorcaserin is effective for weight loss in obese patients with and without type 2 diabetes, although its specific role in the management of obesity is unclear at this time. This paper reviews the clinical trials of lorcaserin, its use from the patient perspective, and its potential role in the treatment of obesity. PMID:23788837
The Averaging Problem in Cosmology
NASA Astrophysics Data System (ADS)
Paranjape, Aseem
2009-06-01
This thesis deals with the averaging problem in cosmology, which has gained considerable interest in recent years, and is concerned with correction terms (after averaging inhomogeneities) that appear in the Einstein equations when working on the large scales appropriate for cosmology. It has been claimed in the literature that these terms may account for the phenomenon of dark energy which causes the late time universe to accelerate. We investigate the nature of these terms by using averaging schemes available in the literature and further developed to be applicable to the problem at hand. We show that the effect of these terms when calculated carefully, remains negligible and cannot explain the late time acceleration.
Effect of high-speed jet on flow behavior, retrogradation, and molecular weight of rice starch.
Fu, Zhen; Luo, Shun-Jing; BeMiller, James N; Liu, Wei; Liu, Cheng-Mei
2015-11-20
Effects of high-speed jet (HSJ) treatment on flow behavior, retrogradation, and degradation of the molecular structure of indica rice starch were investigated. Decreasing with the number of HSJ treatment passes were the turbidity of pastes (degree of retrogradation), the enthalpy of melting of retrograded rice starch, weight-average molecular weights and weight-average root-mean square radii of gyration of the starch polysaccharides, and the amylopectin peak areas of SEC profiles. The areas of lower-molecular-weight polymers increased. The chain-length distribution was not significantly changed. Pastes of all starch samples exhibited pseudoplastic, shear-thinning behavior. HSJ treatment increased the flow behavior index and decreased the consistency coefficient and viscosity. The data suggested that degradation of amylopectin was mainly involved and that breakdown preferentially occurred in chains between clusters. PMID:26344255
High average power pockels cell
Daly, Thomas P.
1991-01-01
A high average power pockels cell is disclosed which reduces the effect of thermally induced strains in high average power laser technology. The pockels cell includes an elongated, substantially rectangular crystalline structure formed from a KDP-type material to eliminate shear strains. The X- and Y-axes are oriented substantially perpendicular to the edges of the crystal cross-section and to the C-axis direction of propagation to eliminate shear strains.
Application Bayesian Model Averaging method for ensemble system for Poland
NASA Astrophysics Data System (ADS)
Guzikowski, Jakub; Czerwinska, Agnieszka
2014-05-01
The aim of the project is to evaluate methods for generating numerical ensemble weather predictions using meteorological data from the Weather Research & Forecasting Model and calibrating these data by means of a Bayesian Model Averaging (WRF BMA) approach. We construct high-resolution short-range ensemble forecasts using meteorological data (temperature) generated by nine WRF models. The WRF models have 35 vertical levels and 2.5 km x 2.5 km horizontal resolution. The main emphasis is that the ensemble members used have different parameterizations of the physical phenomena occurring in the boundary layer. To calibrate the ensemble forecast we use the Bayesian Model Averaging (BMA) approach. The BMA predictive Probability Density Function (PDF) is a weighted average of the predictive PDFs associated with each individual ensemble member, with weights that reflect the member's relative skill. As a test we chose a case with a heat wave and convective weather conditions in the area of Poland from 23 July to 1 August 2013. From 23 July to 29 July 2013 the temperature oscillated below or above 30 degrees Celsius at many meteorological stations and new temperature records were set. During this time an increase in patients hospitalized with cardiovascular problems was registered. On 29 July 2013 an advection of moist tropical air masses was recorded in the area of Poland, causing a strong convection event with a mesoscale convective system (MCS). The MCS caused local flooding, damage to transport infrastructure, destroyed buildings and trees, and caused injuries and a direct threat to life. A comparison of the meteorological data from the ensemble system with data recorded at 74 weather stations located in Poland is made. We prepare a set of model-observation pairs. Then the data obtained from single ensemble members and the median from the WRF BMA system are evaluated on the basis of the deterministic statistical errors Root Mean Square Error (RMSE) and Mean Absolute Error (MAE). To evaluation
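The BMA predictive PDF described above — a skill-weighted mixture of member predictive densities — can be sketched with Gaussian member densities. The member forecasts, weights, and common spread below are illustrative assumptions, not values from the study:

```python
import math

# Sketch of the BMA predictive density: a weighted mixture of Gaussian
# densities centered on each ensemble member's forecast. All numbers
# are illustrative assumptions.
def normal_pdf(x, mean, sd):
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

forecasts = [28.5, 30.2, 29.1]  # member temperature forecasts (deg C)
weights = [0.5, 0.2, 0.3]       # BMA weights reflecting member skill
sd = 1.2                        # common member spread (assumed)

def bma_pdf(x):
    """Weighted average of the member predictive PDFs at x."""
    return sum(w * normal_pdf(x, f, sd) for w, f in zip(weights, forecasts))

# The BMA predictive mean is the skill-weighted mean of member forecasts.
bma_mean = sum(w * f for w, f in zip(weights, forecasts))
```

In the full method the weights and spread are estimated from a training set of model-observation pairs (typically by maximum likelihood via EM), which is the calibration step the abstract refers to.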
NASA Technical Reports Server (NTRS)
1995-01-01
The Attitude Adjuster is a system for weight repositioning corresponding to a SCUBA diver's changing positions. Compact tubes on the diver's air tank permit controlled movement of lead balls within the Adjuster, automatically repositioning when the diver changes position. Manufactured by Think Tank Technologies, the system is light and small, reducing drag and energy requirements and contributing to lower air consumption. The Mid-Continent Technology Transfer Center helped the company with both technical and business information and arranged for the testing at Marshall Space Flight Center's Weightlessness Environmental Training Facility for astronauts.
Creating "Intelligent" Ensemble Averages Using a Process-Based Framework
NASA Astrophysics Data System (ADS)
Baker, Noel; Taylor, Patrick
2014-05-01
The CMIP5 archive contains future climate projections from over 50 models provided by dozens of modeling centers from around the world. Individual model projections, however, are subject to biases created by structural model uncertainties. As a result, ensemble averaging of multiple models is used to add value to individual model projections and construct a consensus projection. Previous reports for the IPCC establish climate change projections based on an equal-weighted average of all model projections. However, individual models reproduce certain climate processes better than other models. Should models be weighted based on performance? Unequal ensemble averages have previously been constructed using a variety of mean state metrics. What metrics are most relevant for constraining future climate projections? This project develops a framework for systematically testing metrics in models to identify optimal metrics for unequal weighting multi-model ensembles. The intention is to produce improved ("intelligent") unequal-weight ensemble averages. A unique aspect of this project is the construction and testing of climate process-based model evaluation metrics. A climate process-based metric is defined as a metric based on the relationship between two physically related climate variables—e.g., outgoing longwave radiation and surface temperature. Several climate process metrics are constructed using high-quality Earth radiation budget data from NASA's Clouds and Earth's Radiant Energy System (CERES) instrument in combination with surface temperature data sets. It is found that regional values of tested quantities can vary significantly when comparing the equal-weighted ensemble average and an ensemble weighted using the process-based metric. Additionally, this study investigates the dependence of the metric weighting scheme on the climate state using a combination of model simulations including a non-forced preindustrial control experiment, historical simulations, and
Modeling operating weight and axle weight distributions for highway vehicles
Greene, D.L.; Liang, J.C.
1988-07-01
The estimation of highway cost responsibility requires detailed information on vehicle operating weights and axle weights by type of vehicle. Typically, 10--20 vehicle types must be cross-classified by 10--20 registered weight classes and again by 20 or more operating weight categories, resulting in 100--400 relative frequencies to be determined for each vehicle type. For each of these, gross operating weight must be distributed to each axle or axle unit. Given the rarity of many of the heaviest vehicle types, direct estimation of these frequencies and axle weights from traffic classification count statistics and truck weight data may exceed the reliability of even the largest (e.g., 250,000 record) data sources. An alternative is to estimate statistical models of operating weight distributions as functions of registered weight, and models of axle weight shares as functions of operating weight. This paper describes the estimation of such functions using the multinomial logit model (a log-linear model) and the implementation of the modeling framework as a PC-based FORTRAN program. Areas for further research include the addition of highway class and region as explanatory variables in operating weight distribution models, and the development of theory for including registration costs and costs of operating overweight in the modeling framework. 14 refs., 45 figs., 5 tabs.
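A multinomial logit model assigns each operating-weight class a relative frequency via a softmax over linear utilities. A minimal sketch, with hypothetical coefficients and only three weight classes (the paper's actual specification and parameters are not reproduced here):

```python
import math

def logit_shares(utilities):
    """Multinomial-logit relative frequencies: softmax of class utilities."""
    m = max(utilities)  # subtract max for numerical stability
    exps = [math.exp(u - m) for u in utilities]
    s = sum(exps)
    return [e / s for e in exps]

# Hypothetical linear utilities for three operating-weight classes as a
# function of registered weight (coefficients invented for illustration).
def class_utilities(reg_weight_kips,
                    betas=((0.00, 0.00), (-1.0, 0.05), (-3.0, 0.08))):
    return [a + b * reg_weight_kips for a, b in betas]

shares = logit_shares(class_utilities(60.0))
print([round(p, 3) for p in shares])
```

The shares sum to one by construction, so they can be used directly as the relative frequencies the cost-responsibility calculation requires.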
Combining forecast weights: Why and how?
NASA Astrophysics Data System (ADS)
Yin, Yip Chee; Kok-Haur, Ng; Hock-Eam, Lim
2012-09-01
This paper proposes a procedure called forecast weight averaging, a specific combination of the forecast weights obtained from different methods of constructing forecast weights, for the purpose of improving the accuracy of pseudo out-of-sample forecasting. It is found that, under certain specified conditions, forecast weight averaging can lower the mean squared forecast error obtained from model averaging. In addition, we show that in a linear and homoskedastic environment, this superior predictive ability of forecast weight averaging holds true irrespective of whether the coefficients are tested by a t statistic or a z statistic, provided the significance level is within the 10% range. By theoretical proofs and a simulation study, we show that model averaging methods such as variance model averaging, simple model averaging, and standard error model averaging each produce a mean squared forecast error larger than that of forecast weight averaging. Finally, this result also holds, marginally, when applied to business and economic empirical data sets for Malaysia: the Gross Domestic Product (GDP) growth rate, the Consumer Price Index (CPI), and the Average Lending Rate (ALR).
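The core mechanic, averaging the weight vectors produced by several weighting schemes before combining the model forecasts, can be sketched as follows. The forecasts, actuals, and scheme weights are all made up for illustration and do not come from the paper.

```python
import numpy as np

# Hypothetical data: two candidate models' pseudo out-of-sample forecasts
# over three periods, and three schemes for constructing model weights.
forecasts = np.array([[2.0, 2.5],    # period 1: model A, model B
                      [3.0, 2.0],    # period 2
                      [2.5, 3.5]])   # period 3
actual = np.array([2.2, 2.6, 3.0])

weight_schemes = np.array([[0.7, 0.3],   # e.g., simple model averaging
                           [0.5, 0.5],   # e.g., variance-based weights
                           [0.6, 0.4]])  # e.g., standard-error-based weights

# "Forecast weight averaging": average the weight vectors themselves,
# then combine the forecasts with the averaged weights.
avg_weights = weight_schemes.mean(axis=0)
combined = forecasts @ avg_weights
msfe = float(np.mean((combined - actual) ** 2))
print(np.round(avg_weights, 3), round(msfe, 4))
```

The mean squared forecast error of the combined forecast is then compared against each individual scheme's MSFE.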
Determining GPS average performance metrics
NASA Technical Reports Server (NTRS)
Moore, G. V.
1995-01-01
Analytic and semi-analytic methods are used to show that users of the GPS constellation can expect performance variations based on their location. Specifically, performance is shown to be a function of both altitude and latitude. These results stem from the fact that the GPS constellation is itself non-uniform. For example, GPS satellites are over four times as likely to be directly over Tierra del Fuego as over Hawaii or Singapore. Inevitable performance variations due to user location occur for ground, sea, air and space GPS users. These performance variations can be studied in an average relative sense. A semi-analytic tool which symmetrically allocates GPS satellite latitude belt dwell times among longitude points is used to compute average performance metrics. These metrics include average number of GPS vehicles visible, relative average accuracies in the radial, intrack and crosstrack (or radial, north/south, east/west) directions, and relative average PDOP or GDOP. The tool can be quickly changed to incorporate various user antenna obscuration models and various GPS constellation designs. Among other applications, tool results can be used in studies to: predict locations and geometries of best/worst case performance, design GPS constellations, determine optimal user antenna location and understand performance trends among various users.
Vocal attractiveness increases by averaging.
Bruckert, Laetitia; Bestelmeyer, Patricia; Latinus, Marianne; Rouger, Julien; Charest, Ian; Rousselet, Guillaume A; Kawahara, Hideki; Belin, Pascal
2010-01-26
Vocal attractiveness has a profound influence on listeners-a bias known as the "what sounds beautiful is good" vocal attractiveness stereotype [1]-with tangible impact on a voice owner's success at mating, job applications, and/or elections. The prevailing view holds that attractive voices are those that signal desirable attributes in a potential mate [2-4]-e.g., lower pitch in male voices. However, this account does not explain our preferences in more general social contexts in which voices of both genders are evaluated. Here we show that averaging voices via auditory morphing [5] results in more attractive voices, irrespective of the speaker's or listener's gender. Moreover, we show that this phenomenon is largely explained by two independent by-products of averaging: a smoother voice texture (reduced aperiodicities) and a greater similarity in pitch and timbre with the average of all voices (reduced "distance to mean"). These results provide the first evidence for a phenomenon of vocal attractiveness increases by averaging, analogous to a well-established effect of facial averaging [6, 7]. They highlight prototype-based coding [8] as a central feature of voice perception, emphasizing the similarity in the mechanisms of face and voice perception. PMID:20129047
Evaluations of average level spacings
Liou, H.I.
1980-01-01
The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, to detect a complete sequence of levels without mixing other parities is extremely difficult, if not totally impossible. Most methods derive the average level spacings by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both distributions of level widths and positions is discussed extensively with an example of ¹⁶⁸Er data. 19 figures, 2 tables.
Choi, Young Min; Oh, Hee Kyung
2016-01-01
In order to attain heavier live weight without impairing pork or sensory quality characteristics, carcass performance, muscle fiber, pork quality, and sensory quality characteristics were compared among the heavy weight (HW, average live weight of 130.5 kg), medium weight (MW, average weight of 111.1 kg), and light weight (LW, average weight of 96.3 kg) pigs at time of slaughter. The loin eye area was 1.47 times greater in the HW group compared to the LW group (64.0 and 43.5 cm², p<0.001), while carcass percent was similar between the HW and MW groups (p>0.05). This greater performance by the HW group compared to the LW group can be explained by a greater total number (1,436 vs. 1,188, ×10³, p<0.001) and larger area (4,452 vs. 3,716 μm², p<0.001) of muscle fibers. No significant differences were observed in muscle pH at 45 min, lightness, drip loss, and shear force among the groups (p>0.05), and higher live weights did not influence sensory quality attributes, including tenderness, juiciness, and flavor. Therefore, these findings indicate that increased live weights in this study did not influence the technological and sensory quality characteristics. Moreover, muscles with a higher number of medium or large size fibers tend to exhibit good carcass performance without impairing meat and sensory quality characteristics. PMID:27433110
Averaged Electroencephalic Audiometry in Infants
ERIC Educational Resources Information Center
Lentz, William E.; McCandless, Geary A.
1971-01-01
Normal, preterm, and high-risk infants were tested at 1, 3, 6, and 12 months of age using averaged electroencephalic audiometry (AEA) to determine the usefulness of AEA as a measurement technique for assessing auditory acuity in infants, and to delineate some of the procedural and technical problems often encountered. (KW)
Averaging inhomogeneous cosmologies - a dialogue.
NASA Astrophysics Data System (ADS)
Buchert, T.
The averaging problem for inhomogeneous cosmologies is discussed in the form of a disputation between two cosmologists, one of them (RED) advocating the standard model, the other (GREEN) advancing some arguments against it. Technical explanations of these arguments as well as the conclusions of this debate are given by BLUE.
Averaging facial expression over time
Haberman, Jason; Harp, Tom; Whitney, David
2010-01-01
The visual system groups similar features, objects, and motion (e.g., Gestalt grouping). Recent work suggests that the computation underlying perceptual grouping may be one of summary statistical representation. Summary representation occurs for low-level features, such as size, motion, and position, and even for high level stimuli, including faces; for example, observers accurately perceive the average expression in a group of faces (J. Haberman & D. Whitney, 2007, 2009). The purpose of the present experiments was to characterize the time-course of this facial integration mechanism. In a series of three experiments, we measured observers’ abilities to recognize the average expression of a temporal sequence of distinct faces. Faces were presented in sets of 4, 12, or 20, at temporal frequencies ranging from 1.6 to 21.3 Hz. The results revealed that observers perceived the average expression in a temporal sequence of different faces as precisely as they perceived a single face presented repeatedly. The facial averaging was independent of temporal frequency or set size, but depended on the total duration of exposed faces, with a time constant of ~800 ms. These experiments provide evidence that the visual system is sensitive to the ensemble characteristics of complex objects presented over time. PMID:20053064
Polyhedral Painting with Group Averaging
ERIC Educational Resources Information Center
Farris, Frank A.; Tsao, Ryan
2016-01-01
The technique of "group-averaging" produces colorings of a sphere that have the symmetries of various polyhedra. The concepts are accessible at the undergraduate level, without being well-known in typical courses on algebra or geometry. The material makes an excellent discovery project, especially for students with some background in…
Ultrahigh molecular weight aromatic siloxane polymers
NASA Technical Reports Server (NTRS)
Ludwick, L. M.
1982-01-01
The condensation of a diol with a silane in toluene yields a silphenylene-siloxane polymer. The reaction of stoichiometric amounts of the diol and silane produced products with molecular weights in the range 2.0–6.0 × 10⁵. The molecular weight of the product was greatly increased by a multistep technique. The methodology for synthesis of high molecular weight polymers using a two-step procedure was refined. Polymers with weight average molecular weights in excess of 1.0 × 10⁶ were produced by this method. Two more reactive silanes, bis(pyrrolidinyl)dimethylsilane and bis(gamma-butyrolactam)dimethylsilane, are compared with dimethylaminodimethylsilane in their ability to advance the molecular weight of the prepolymer. The polymers produced are characterized by intrinsic viscosity in tetrahydrofuran. Weight and number average molecular weights and polydispersity are determined by gel permeation chromatography.
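The weight- and number-average molecular weights and polydispersity reported from gel permeation chromatography follow standard definitions, which a short sketch can make concrete (the two-component blend below is hypothetical, not the paper's polymer):

```python
def molecular_weight_averages(slices):
    """Number average Mn, weight average Mw, and polydispersity index
    from (M_i, n_i) pairs, where n_i is the number (moles) of chains
    of molar mass M_i."""
    n_total = sum(n for _, n in slices)
    w_total = sum(m * n for m, n in slices)
    mn = w_total / n_total                              # number average
    mw = sum(m * m * n for m, n in slices) / w_total    # weight average
    return mn, mw, mw / mn                              # PDI = Mw/Mn

# Hypothetical equimolar blend of 1.0e5 and 1.0e6 g/mol chains.
mn, mw, pdi = molecular_weight_averages([(1.0e5, 1.0), (1.0e6, 1.0)])
print(round(mn), round(mw), round(pdi, 3))
```

Because Mw weights each chain by its mass, Mw ≥ Mn always, with equality only for a perfectly monodisperse sample (PDI = 1).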
Exact averaging of laminar dispersion
NASA Astrophysics Data System (ADS)
Ratnakar, Ram R.; Balakotaiah, Vemuri
2011-02-01
We use the Liapunov-Schmidt (LS) technique of bifurcation theory to derive a low-dimensional model for laminar dispersion of a nonreactive solute in a tube. The LS formalism leads to an exact averaged model, consisting of the governing equation for the cross-section averaged concentration, along with the initial and inlet conditions, to all orders in the transverse diffusion time. We use the averaged model to analyze the temporal evolution of the spatial moments of the solute and show that they do not have the centroid displacement or variance deficit predicted by the coarse-grained models derived by other methods. We also present a detailed analysis of the first three spatial moments for short and long times as a function of the radial Peclet number and identify three clearly defined time intervals for the evolution of the solute concentration profile. By examining the skewness in some detail, we show that the skewness increases initially, attains a maximum for time scales of the order of transverse diffusion time, and the solute concentration profile never attains the Gaussian shape at any finite time. Finally, we reason that there is a fundamental physical inconsistency in representing laminar (Taylor) dispersion phenomena using truncated averaged models in terms of a single cross-section averaged concentration and its large scale gradient. Our approach evaluates the dispersion flux using a local gradient between the dominant diffusive and convective modes. We present and analyze a truncated regularized hyperbolic model in terms of the cup-mixing concentration for the classical Taylor-Aris dispersion that has a larger domain of validity compared to the traditional parabolic model. By analyzing the temporal moments, we show that the hyperbolic model has no physical inconsistencies that are associated with the parabolic model and can describe the dispersion process to first order accuracy in the transverse diffusion time.
Pregnancy Weight Gain Calculator
Spectral Approach to Optimal Estimation of the Global Average Temperature.
NASA Astrophysics Data System (ADS)
Shen, Samuel S. P.; North, Gerald R.; Kim, Kwang-Y.
1994-12-01
Making use of EOF analysis and statistical optimal averaging techniques, the problem of random sampling error in estimating the global average temperature by a network of surface stations has been investigated. The EOF representation makes it unnecessary to use simplified empirical models of the correlation structure of temperature anomalies. If an adjustable weight is assigned to each station according to the criterion of minimum mean-square error, a formula for this error can be derived that consists of a sum of contributions from successive EOF modes. The EOFs were calculated from both observed data and a noise-forced EBM for the problem of one-year and five-year averages. The mean square statistical sampling error depends on the spatial distribution of the stations, length of the averaging interval, and the choice of the weight for each station data stream. Examples used here include four symmetric configurations of 4 × 4, 6 × 4, 9 × 7, and 20 × 10 stations and the Angell-Korshover configuration. Comparisons with the 100-yr U.K. dataset show that correlations for the time series of the global temperature anomaly average between the full dataset and this study's sparse configurations are rather high. For example, the 63-station Angell-Korshover network with uniform weighting explains 92.7% of the total variance, whereas the same network with optimal weighting can lead to 97.8% explained total variance of the U.K. dataset.
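Assigning each station an adjustable weight under the minimum mean-square-error criterion is a constrained least-squares problem. A minimal sketch with synthetic covariances (three stations, invented numbers; this stands in for, and does not reproduce, the paper's EOF-based machinery):

```python
import numpy as np

# Minimize E[(sum_i w_i T_i - G)^2] subject to sum(w) = 1, where C is the
# station-anomaly covariance matrix and c_i = Cov(T_i, G) with G the true
# global average. Solved with a Lagrange multiplier. Numbers are synthetic.
C = np.array([[1.0, 0.6, 0.2],
              [0.6, 1.0, 0.3],
              [0.2, 0.3, 1.0]])
c = np.array([0.8, 0.7, 0.5])

n = len(c)
# Block system: [[C, 1], [1^T, 0]] [w; mu] = [c; 1]
A = np.block([[C, np.ones((n, 1))],
              [np.ones((1, n)), np.zeros((1, 1))]])
b = np.concatenate([c, [1.0]])
sol = np.linalg.solve(A, b)
w = sol[:n]
print(np.round(w, 4), round(float(w.sum()), 6))
```

Stations that correlate more strongly with the target (larger c_i) and less redundantly with their neighbors receive larger weights, which is exactly why optimal weighting explains more variance than uniform weighting in the abstract's comparison.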
Calculating High Speed Centrifugal Compressor Performance from Averaged Measurements
NASA Astrophysics Data System (ADS)
Lou, Fangyuan; Fleming, Ryan; Key, Nicole L.
2012-12-01
To improve the understanding of high performance centrifugal compressors found in modern aircraft engines, the aerodynamics through these machines must be experimentally studied. To accurately capture the complex flow phenomena through these devices, research facilities that can accurately simulate these flows are necessary. One such facility has been recently developed, and it is used in this paper to explore the effects of averaging total pressure and total temperature measurements to calculate compressor performance. Different averaging techniques (including area averaging, mass averaging, and work averaging) have been applied to the data. Results show that there is a negligible difference in both the calculated total pressure ratio and efficiency for the different techniques employed. However, the uncertainty in the performance parameters calculated with the different averaging techniques is significantly different, with area averaging providing the least uncertainty.
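The difference between area and mass averaging can be illustrated with a toy rake of four probes; the sector areas, total pressures, and mass fluxes below are invented for the sketch:

```python
# Area-averaged vs mass-averaged total pressure at a measurement plane.
# Mass averaging weights each probe by the local mass flow (rho*V*A)
# rather than by sector area alone. All values are hypothetical.
areas = [0.2, 0.3, 0.3, 0.2]            # m^2 per probe sector
p_total = [101.0, 104.0, 106.0, 103.0]  # kPa
mass_flux = [1.0, 1.4, 1.6, 1.1]        # kg/(s*m^2), local rho*V

area_avg = sum(a * p for a, p in zip(areas, p_total)) / sum(areas)

mdots = [a * g for a, g in zip(areas, mass_flux)]       # kg/s per sector
mass_avg = sum(m * p for m, p in zip(mdots, p_total)) / sum(mdots)

print(round(area_avg, 3), round(mass_avg, 3))
```

Because the high-pressure sectors here also carry more mass flow, the mass average sits above the area average; the paper's finding is that for its compressor data the resulting performance numbers differ negligibly, while the uncertainties differ substantially.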
Averaging Robertson-Walker cosmologies
NASA Astrophysics Data System (ADS)
Brown, Iain A.; Robbers, Georg; Behrend, Juliane
2009-04-01
The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω_eff⁰ ≈ 4 × 10⁻⁶, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10⁻⁸ and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < -1/3 can be found for strongly phantom models.
Ensemble averaging of acoustic data
NASA Technical Reports Server (NTRS)
Stefanski, P. K.
1982-01-01
A computer program called Ensemble Averaging of Acoustic Data is documented. The program samples analog data, analyzes the data, and displays them in the time and frequency domains. Hard copies of the displays are the program's output. The documentation includes a description of the program and detailed user instructions for the program. This software was developed for use on the Ames 40- by 80-Foot Wind Tunnel's Dynamic Analysis System, consisting of a PDP-11/45 computer, two RK05 disk drives, a Tektronix 611 keyboard/display terminal, an FPE-4 Fourier Processing Element, and an analog-to-digital converter.
The influence of aquariums on weight in individuals with dementia.
Edwards, Nancy E; Beck, Alan M
2013-01-01
This study assessed whether individuals with dementia who observe aquariums increase the amount of food they consume and maintain body weight. The sample included 70 residents in dementia units within 3 extended care facilities in 2 states. The intervention included the introduction of an aquarium into each common dining area. A total increase of 196.9 g of daily food intake (25.0%) was noted from baseline to the end of the 10-week study. Resident body weight increased an average of 2.2 pounds during the study. Eight of 70 residents experienced a weight loss (mean = 1.89 lbs). People with advanced dementia responded to aquariums in their environment, documenting that attraction to the natural environment is so innate that it survives dementia. PMID:23138175
The averaging method in applied problems
NASA Astrophysics Data System (ADS)
Grebenikov, E. A.
1986-04-01
The book presents the totality of methods for investigating complicated nonlinear oscillating systems known in the literature as the "averaging method". The author describes the constructive part of this method, that is, its concrete forms and corresponding algorithms, using mathematical models that are sufficiently general yet built on concrete problems. The book is written so that a reader interested in the techniques and algorithms of the asymptotic theory of ordinary differential equations can solve such problems independently. It is intended for specialists in applied mathematics and mechanics.
The causal meaning of Fisher’s average effect
LEE, JAMES J.; CHOW, CARSON C.
2013-01-01
Summary In order to formulate the Fundamental Theorem of Natural Selection, Fisher defined the average excess and average effect of a gene substitution. Finding these notions to be somewhat opaque, some authors have recommended reformulating Fisher’s ideas in terms of covariance and regression, which are classical concepts of statistics. We argue that Fisher intended his two averages to express a distinction between correlation and causation. On this view, the average effect is a specific weighted average of the actual phenotypic changes that result from physically changing the allelic states of homologous genes. We show that the statistical and causal conceptions of the average effect, perceived as inconsistent by Falconer, can be reconciled if certain relationships between the genotype frequencies and non-additive residuals are conserved. There are certain theory-internal considerations favouring Fisher’s original formulation in terms of causality; for example, the frequency-weighted mean of the average effects equaling zero at each locus becomes a derivable consequence rather than an arbitrary constraint. More broadly, Fisher’s distinction between correlation and causation is of critical importance to gene-trait mapping studies and the foundations of evolutionary biology. PMID:23938113
Using Bayes Model Averaging for Wind Power Forecasts
NASA Astrophysics Data System (ADS)
Preede Revheim, Pål; Beyer, Hans Georg
2014-05-01
For operational purposes, predictions of the forecasts of the lumped output of groups of wind farms spread over larger geographic areas will often be of interest. A naive approach is to make forecasts for each individual site and sum them up to get the group forecast. It is however well documented that a better choice is to use a model that also takes advantage of spatial smoothing effects. It might however be the case that some sites tend to more accurately reflect the total output of the region, either in general or for certain wind directions. It will then be of interest to give these a greater influence over the group forecast. Bayesian model averaging (BMA) is a statistical post-processing method for producing probabilistic forecasts from ensembles. Raftery et al. [1] show how BMA can be used for statistical post-processing of forecast ensembles, producing PDFs of future weather quantities. The BMA predictive PDF of a future weather quantity is a weighted average of the ensemble members' PDFs, where the weights can be interpreted as posterior probabilities and reflect the ensemble members' contribution to overall forecasting skill over a training period. In Revheim and Beyer [2] the BMA procedure used in Sloughter, Gneiting and Raftery [3] was found to produce fairly accurate PDFs for the future mean wind speed of a group of sites from the single-site wind speeds. However, when the procedure was applied to wind power, it resulted in either problems with the estimation of the parameters (mainly caused by longer consecutive periods of no power production) or severe underestimation (mainly caused by problems with reflecting the power curve). In this paper the problems that arose when applying BMA to wind power forecasting are met through two strategies. First, the BMA procedure is run with a combination of single-site wind speeds and single-site wind power production as input. This solves the problem with longer consecutive periods where the input data
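The BMA predictive PDF as a weighted mixture of member PDFs can be sketched as follows, here with Gaussian components and hypothetical weights (the wind-speed components in Sloughter et al. are not plain Gaussians, so this is only the structural idea):

```python
import math

def normal_pdf(x, mu, sigma):
    """Gaussian density, used here as a stand-in component PDF."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Hypothetical ensemble: (mean, sd) per member in m/s, plus posterior
# weights summing to one that reflect each member's training-period skill.
members = [(7.0, 1.0), (8.5, 1.2), (6.0, 0.9)]
weights = [0.5, 0.3, 0.2]

def bma_pdf(x):
    """BMA predictive PDF: weighted average of the members' PDFs."""
    return sum(w * normal_pdf(x, mu, s) for w, (mu, s) in zip(weights, members))

# The BMA predictive mean is the weighted mean of the member means.
bma_mean = sum(w * mu for w, (mu, _) in zip(weights, members))
print(round(bma_mean, 3), round(bma_pdf(7.0), 4))
```

Giving a skilful site a larger posterior weight is exactly the mechanism by which it gains "greater influence over the group forecast".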
Flexible time domain averaging technique
NASA Astrophysics Data System (ADS)
Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng
2013-09-01
Time domain averaging (TDA) is essentially a comb filter: it cannot extract specified harmonics that may be caused by some faults, such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to differing extents. Several improved TDA methods have been proposed; however, they cannot completely eliminate the waveform reconstruction error caused by PCE. In order to overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal through adjusting each harmonic of the comb filter. In this technique, the explicit form of FTDA is first constructed by frequency domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the algorithm of FTDA, which improves the computational efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of FTDA in signal de-noising, interpolation, and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it can improve the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for fault symptom extraction in rotating machinery.
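Classical TDA, the baseline that FTDA improves on, simply slices the signal into segments of exactly one known period and averages them, attenuating non-synchronous noise. A minimal sketch on a synthetic signal (FTDA's harmonic-by-harmonic adjustment and CZT machinery are not reproduced here):

```python
import numpy as np

# Classical time domain averaging: average n_periods exact-length segments.
# Synchronous components survive; noise power drops by a factor n_periods.
rng = np.random.default_rng(0)
period = 64          # samples per (exactly known) period
n_periods = 200

t = np.arange(period * n_periods)
clean = np.sin(2 * np.pi * t / period)            # periodic component
noisy = clean + rng.normal(0.0, 1.0, t.size)      # add white noise

tda = noisy.reshape(n_periods, period).mean(axis=0)

# RMS error of the recovered waveform vs one clean period (~1/sqrt(200)).
residual = float(np.sqrt(np.mean((tda - clean[:period]) ** 2)))
print(round(residual, 4))
```

The PCE problem the paper addresses appears precisely when the true period is not an integer number of samples, so the `reshape` above no longer cuts exact periods.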
Non-Homogeneous Fractal Hierarchical Weighted Networks
Dong, Yujuan; Dai, Meifeng; Ye, Dandan
2015-01-01
A model of fractal hierarchical structures that share the property of non-homogeneous weighted networks is introduced. These networks can be completely and analytically characterized in terms of the involved parameters, i.e., the size of the original graph Nk and the non-homogeneous weight scaling factors r1, r2, …, rM. We also study the average weighted shortest path (AWSP), the average degree and the average node strength, taking place on the non-homogeneous hierarchical weighted networks. Moreover, the AWSP is calculated rigorously. We show that the AWSP depends on the number of copies and the sum of all non-homogeneous weight scaling factors in the infinite network order limit. PMID:25849619
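For a finite weighted graph, the AWSP is just the mean over all ordered node pairs of the weighted shortest-path length, computable directly with Dijkstra's algorithm; a small sketch on a hypothetical weighted triangle (not one of the paper's hierarchical networks):

```python
import heapq

def average_weighted_shortest_path(adj):
    """AWSP: mean over all ordered node pairs of the weighted
    shortest-path length, via Dijkstra from every source node."""
    nodes = list(adj)
    total, pairs = 0.0, 0
    for src in nodes:
        dist = {src: 0.0}
        heap = [(0.0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue  # stale heap entry
            for v, w in adj[u]:
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        for dst in nodes:
            if dst != src:
                total += dist[dst]
                pairs += 1
    return total / pairs

# Hypothetical weighted triangle: the direct a-c edge (5.0) loses to a-b-c (2.0).
g = {"a": [("b", 1.0), ("c", 5.0)],
     "b": [("a", 1.0), ("c", 1.0)],
     "c": [("a", 5.0), ("b", 1.0)]}
print(round(average_weighted_shortest_path(g), 4))
```

The paper's contribution is to evaluate this quantity analytically for the hierarchical construction, rather than numerically as here.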
Birth weight reduction associated with residence near a hazardous waste landfill.
Berry, M; Bove, F
1997-01-01
We examined the relationship between birth weight and mother's residence near a hazardous waste landfill. Twenty-five years of birth certificates (1961-1985) were collected for four towns. Births were grouped into five 5-year periods corresponding to hypothesized exposure periods (1971-1975 having the greatest potential for exposure). From 1971 to 1975, term births (37-44 weeks gestation) to parents living closest to the landfill (Area 1A) had a statistically significant lower average birth weight (192 g) and a statistically significant higher proportion of low birth weight [odds ratio (OR) = 5.1; 95% confidence interval (CI), 2.1-12.3] than the control population. Average term birth weights in Area 1A rebounded by about 332 g after 1975. Parallel results were found for all births (gestational age > 27 weeks) in Area 1A during 1971-1975. Area 1A infants had twice the risk of prematurity (OR = 2.1; 95% CI, 1.0-4.4) during 1971-1975 compared to the control group. The results indicate a significant impact to infants born to residents living near the landfill during the period postulated as having the greatest potential for exposure. The magnitude of the effect is in the range of birth weight reduction due to cigarette smoking during pregnancy. PMID:9347901
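The reported odds ratios with 95% confidence intervals follow the standard 2×2-table formula; a sketch with hypothetical counts (not the study's data):

```python
import math

# Odds ratio and Woolf 95% CI from a 2x2 exposure-by-outcome table.
# Counts are invented for illustration.
a, b = 12, 88    # exposed (near landfill): low-birth-weight cases, non-cases
c, d = 5, 195    # unexposed (control):     low-birth-weight cases, non-cases

or_ = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo = math.exp(math.log(or_) - 1.96 * se_log_or)
hi = math.exp(math.log(or_) + 1.96 * se_log_or)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

A CI whose lower bound exceeds 1, as in the study's OR = 5.1 (95% CI 2.1-12.3), indicates a statistically significant excess of low birth weight in the exposed group.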