Salganik, Matthew J; Fazito, Dimitri; Bertoni, Neilane; Abdo, Alexandre H; Mello, Maeve B; Bastos, Francisco I
2011-11-15
One of the many challenges hindering the global response to the human immunodeficiency virus (HIV)/acquired immunodeficiency syndrome (AIDS) epidemic is the difficulty of collecting reliable information about the populations most at risk for the disease. Thus, the authors empirically assessed a promising new method for estimating the sizes of most at-risk populations: the network scale-up method. Using 4 different data sources, 2 of which were from other researchers, the authors produced 5 estimates of the number of heavy drug users in Curitiba, Brazil. The authors found that the network scale-up and generalized network scale-up estimators produced estimates 5-10 times higher than estimates made using standard methods (the multiplier method and the direct estimation method using data from 2004 and 2010). Given that equally plausible methods produced such a wide range of results, the authors recommend that additional studies be undertaken to compare estimates based on the scale-up method with those made using other methods. If scale-up-based methods routinely produce higher estimates, this would suggest that scale-up-based methods are inappropriate for populations most at risk of HIV/AIDS or that standard methods may tend to underestimate the sizes of these populations.
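For context on the method under evaluation, the basic scale-up calculation is simple enough to sketch: each respondent reports how many members of the hidden population they know and how many people they know in total, and the ratio is scaled to the city population. The sketch below is a minimal illustration with made-up numbers, not data from the study.

```python
# Basic network scale-up estimate of a hidden population's size.
# Idea: respondents report how many heavy drug users they know (y_i)
# and their total personal network size (d_i); the hidden population
# is estimated as N_hidden ~= (sum y_i / sum d_i) * N_total.
import numpy as np

rng = np.random.default_rng(0)

N_total = 1_750_000            # city population (illustrative)
n = 500                        # survey respondents
d = rng.poisson(290, size=n)   # reported personal network sizes
y = rng.binomial(d, 0.004)     # reported heavy drug users known

N_hidden = y.sum() / d.sum() * N_total
print(f"scale-up estimate of hidden population: {N_hidden:,.0f}")
```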
ERIC Educational Resources Information Center
National Center for Educational Statistics (DHEW/OE), Washington, DC.
In response to needs expressed by the community of higher education institutions, the National Center for Educational Statistics has produced early estimates of a selected group of mean salaries of instructional faculty in institutions of higher education in 1972-73. The number and salaries of male and female instructional staff by rank are of…
Future Demand for Higher Education in Australia. Go8 Backgrounder 10
ERIC Educational Resources Information Center
Group of Eight (NJ1), 2010
2010-01-01
This paper produces two sets of estimates of future student demand for higher education in Australia. The two sets of estimates allow Go8 to consider the capacity of the university sector to accommodate future growth in student numbers (including staff and facilities), and to identify the costs involved, including for the Government which has…
Cost Efficiency in Public Higher Education.
ERIC Educational Resources Information Center
Robst, John
This study used the frontier cost function framework to examine cost efficiency in public higher education. The frontier cost function estimates the minimum predicted cost for producing a given amount of output. Data from the annual Almanac issues of the "Chronicle of Higher Education" were used to calculate state level enrollments at two-year and…
Estimating the size of the hardwood sawmill industry in Pennsylvania
Paul M. Smith; William G. Luppold; Sudipta Dasmohapatra
2003-01-01
The size of the hardwood sawmill industry in Pennsylvania in 1999 is estimated at 1.311 BBF produced by 556 mills. Study results show an 11 percent higher estimate of the volume of hardwood lumber produced and a 60 percent greater number of Pennsylvania sawmills in 1999 as compared to the 1.186 BBF of hardwood lumber produced by 339 sawmills estimated by the USDC Census Bureau for the...
Winter bird population studies and project prairie birds for surveying grassland birds
Twedt, D.J.; Hamel, P.B.; Woodrey, M.S.
2008-01-01
We compared 2 survey methods for assessing winter bird communities in temperate grasslands: Winter Bird Population Study surveys are area-searches that have long been used in a variety of habitats, whereas Project Prairie Bird surveys employ active-flushing techniques on strip-transects and are intended for use in grasslands. We used both methods to survey birds on 14 herbaceous reforested sites and 9 coastal pine savannas during winter and compared resultant estimates of species richness and relative abundance. These techniques did not yield similar estimates of avian populations. We found Winter Bird Population Studies consistently produced higher estimates of species richness, whereas Project Prairie Birds produced higher estimates of avian abundance for some species. When it is important to identify all species within the winter bird community, Winter Bird Population Studies should be the survey method of choice. If estimates of the abundance of relatively secretive grassland bird species are desired, the use of Project Prairie Birds protocols is warranted. However, we suggest that both survey techniques, as currently employed, are deficient and recommend that distance-based survey methods that provide species-specific estimates of detection probabilities be incorporated into these surveys.
Morris, Martina; Leslie-Cook, Ayn; Akom, Eniko; Stephen, Aloo; Sherard, Donna
2014-01-01
We compare estimates of multiple and concurrent sexual partnerships from Demographic and Health Surveys (DHS) with comparable Population Services International (PSI) surveys in four African countries (Kenya, Lesotho, Uganda, Zambia). DHS data produce significantly lower estimates of all indicators for both sexes in all countries. PSI estimates of multiple partnerships are 1.7 times higher [1.4 for men (M), 3.0 for women (W)], cumulative prevalence of concurrency is 2.4 times higher (2.2 M, 2.7 W), the point prevalence of concurrency is 3.5 times higher (3.5 M, 3.3 W), and the fraction of multi-partnered persons who report concurrency last year is 1.4 times higher (1.6 M, 0.9 W). These findings provide strong empirical evidence that DHS surveys systematically underestimate levels of multiple and concurrent partnerships. The underestimates will contaminate both empirical analyses of the link between sexual behavior and HIV infection, and theoretical models for combination prevention that use these data for inputs. PMID:24077973
Morris, Martina; Vu, Lung; Leslie-Cook, Ayn; Akom, Eniko; Stephen, Aloo; Sherard, Donna
2014-04-01
We compare estimates of multiple and concurrent sexual partnerships from Demographic and Health Surveys (DHS) with comparable Population Services International (PSI) surveys in four African countries (Kenya, Lesotho, Uganda, Zambia). DHS data produce significantly lower estimates of all indicators for both sexes in all countries. PSI estimates of multiple partnerships are 1.7 times higher [1.4 for men (M), 3.0 for women (W)], cumulative prevalence of concurrency is 2.4 times higher (2.2 M, 2.7 W), the point prevalence of concurrency is 3.5 times higher (3.5 M, 3.3 W), and the fraction of multi-partnered persons who report concurrency last year is 1.4 times higher (1.6 M, 0.9 W). These findings provide strong empirical evidence that DHS surveys systematically underestimate levels of multiple and concurrent partnerships. The underestimates will contaminate both empirical analyses of the link between sexual behavior and HIV infection, and theoretical models for combination prevention that use these data for inputs.
Active-passive data fusion algorithms for seafloor imaging and classification from CZMIL data
NASA Astrophysics Data System (ADS)
Park, Joong Yong; Ramnath, Vinod; Feygels, Viktor; Kim, Minsu; Mathur, Abhinav; Aitken, Jennifer; Tuell, Grady
2010-04-01
CZMIL will simultaneously acquire lidar and passive spectral data. These data will be fused to produce enhanced seafloor reflectance images from each sensor, and combined at a higher level to achieve seafloor classification. In the DPS software, the lidar data will first be processed to solve for depth, attenuation, and reflectance. The depth measurements will then be used to constrain the spectral optimization of the passive spectral data, and the resulting water column estimates will be used recursively to improve the estimates of seafloor reflectance from the lidar. Finally, the resulting seafloor reflectance cube will be combined with texture metrics estimated from the seafloor topography to produce classifications of the seafloor.
Comparing different stimulus configurations for population receptive field mapping in human fMRI
Alvarez, Ivan; de Haas, Benjamin; Clark, Chris A.; Rees, Geraint; Schwarzkopf, D. Samuel
2015-01-01
Population receptive field (pRF) mapping is a widely used approach to measuring aggregate human visual receptive field properties by recording non-invasive signals using functional MRI. Despite growing interest, no study to date has systematically investigated the effects of different stimulus configurations on pRF estimates from human visual cortex. Here we compared the effects of three different stimulus configurations on a model-based approach to pRF estimation: size-invariant bars and eccentricity-scaled bars defined in Cartesian coordinates and traveling along the cardinal axes, and a novel simultaneous “wedge and ring” stimulus defined in polar coordinates, systematically covering polar and eccentricity axes. We found that the presence or absence of eccentricity scaling had a significant effect on goodness of fit and pRF size estimates. Further, variability in pRF size estimates was directly influenced by stimulus configuration, particularly for higher visual areas including V5/MT+. Finally, we compared eccentricity estimation between phase-encoded and model-based pRF approaches. We observed a tendency for more peripheral eccentricity estimates using phase-encoded methods, independent of stimulus size. We conclude that both eccentricity scaling and polar rather than Cartesian stimulus configuration are important considerations for optimal experimental design in pRF mapping. While all stimulus configurations produce adequate estimates, simultaneous wedge and ring stimulation produced higher fit reliability, with a significant advantage in reduced acquisition time. PMID:25750620
NASA Astrophysics Data System (ADS)
Henderson, Laura S.; Subbarao, Kamesh
2017-12-01
This work presents a case wherein the selection of models when producing synthetic light curves affects the estimation of the size of unresolved space objects. Through this case, "inverse crime" (using the same model for the generation of synthetic data and for data inversion) is illustrated. This is done by using two models to produce the synthetic light curve and later invert it. It is shown here that the choice of model indeed affects the estimation of the shape/size parameters. When a higher fidelity model (defined here as the one that results in the smallest error residuals after the crime is committed) is used to both create and invert the light curve, the estimates of the shape/size parameters are significantly better than those obtained when a comparatively lower fidelity model is implemented for the estimation. It is therefore of utmost importance to consider the choice of models when producing synthetic data that will later be inverted, as the results might be misleadingly optimistic.
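The effect described here is easy to reproduce in miniature. The toy sketch below commits and avoids the "inverse crime" with two hypothetical phase-function models; the models, noise level, and diameter are illustrative inventions, not those of the paper.

```python
# Toy "inverse crime" demo: synthetic light-curve data are generated with
# one phase-function model, then inverted for object size with either the
# same model (the crime) or a simpler one.  Everything here is illustrative.
import numpy as np
from scipy.optimize import curve_fit

def lambert_flux(phase, diameter):
    # diffuse-sphere phase function (used to GENERATE the data)
    f = (np.sin(phase) + (np.pi - phase) * np.cos(phase)) / np.pi
    return diameter**2 * f

def flat_flux(phase, diameter):
    # lower-fidelity model: ignores the phase dependence entirely
    return diameter**2 * np.full_like(phase, 0.5)

rng = np.random.default_rng(1)
phase = np.linspace(0.1, 2.5, 60)      # solar phase angles [rad]
truth = 1.8                            # "true" diameter
data = lambert_flux(phase, truth) * rng.normal(1.0, 0.05, phase.size)

for model, name in [(lambert_flux, "same model (inverse crime)"),
                    (flat_flux, "different model")]:
    est, _ = curve_fit(model, phase, data, p0=[1.0])
    print(f"{name}: estimated diameter = {est[0]:.2f} (truth {truth})")
```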
Biomass estimators for thinned second-growth ponderosa pine trees.
P.H. Cochran; J.W. Jennings; C.T. Youngberg
1984-01-01
Usable estimates of the mass of live foliage and limbs of sapling and pole-sized ponderosa pine in managed stands in central Oregon can be obtained with equations using the logarithm of diameter as the only independent variable. These equations produce only slightly higher root mean square deviations than equations that include additional independent variables. A...
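The single-predictor allometric form described above can be sketched in a few lines; the diameter-mass pairs below are invented for illustration.

```python
# Sketch of the one-variable allometric equation: ln(mass) = a + b*ln(dbh).
# The data pairs are made up for illustration, not the study's measurements.
import numpy as np

dbh  = np.array([ 8.0, 10.0, 12.0, 15.0, 18.0, 22.0, 25.0, 30.0])  # cm
mass = np.array([ 4.0,  7.0, 11.0, 19.0, 30.0, 50.0, 68.0, 105.0]) # kg

b, a = np.polyfit(np.log(dbh), np.log(mass), 1)   # slope, intercept
pred = a + b * np.log(dbh)
rmse = np.sqrt(np.mean((np.log(mass) - pred)**2))
print(f"ln(mass) = {a:.2f} + {b:.2f} ln(dbh), log-scale RMSE = {rmse:.3f}")
```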
Curl, Cynthia L; Beresford, Shirley A A; Fenske, Richard A; Fitzpatrick, Annette L; Lu, Chensheng; Nettleton, Jennifer A; Kaufman, Joel D
2015-05-01
Organophosphate pesticide (OP) exposure to the U.S. population is dominated by dietary intake. The magnitude of exposure from diet depends partly on personal decisions such as which foods to eat and whether to choose organic food. Most studies of OP exposure rely on urinary biomarkers, which are limited by short half-lives and often lack specificity to parent compounds. A reliable means of estimating long-term dietary exposure to individual OPs is needed to assess the potential relationship with adverse health effects. We assessed long-term dietary exposure to 14 OPs among 4,466 participants in the Multi-Ethnic Study of Atherosclerosis, and examined the influence of organic produce consumption on this exposure. Individual-level exposure was estimated by combining information on typical intake of specific food items with average OP residue levels on those items. In an analysis restricted to a subset of participants who reported rarely or never eating organic produce ("conventional consumers"), we assessed urinary dialkylphosphate (DAP) levels across tertiles of estimated exposure (n = 480). In a second analysis, we compared DAP levels across subgroups with differing self-reported organic produce consumption habits (n = 240). Among conventional consumers, increasing tertile of estimated dietary OP exposure was associated with higher DAP concentrations (p < 0.05). DAP concentrations were also significantly lower in groups reporting more frequent consumption of organic produce (p < 0.02). Long-term dietary exposure to OPs was estimated from dietary intake data, and estimates were consistent with DAP measurements. More frequent consumption of organic produce was associated with lower DAPs.
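The exposure calculation described in the methods reduces to a weighted sum of intake and residue. A minimal sketch, with hypothetical foods, residue levels, and body weight:

```python
# Minimal sketch of combining typical food intake with average residue
# levels to estimate a chronic dietary dose.  Foods, residues, and body
# weight are hypothetical placeholders.
servings_g_per_day = {"apple": 120.0, "spinach": 40.0, "grape": 60.0}
residue_ng_per_g   = {"apple": 2.1, "spinach": 5.4, "grape": 1.2}  # mean OP residue
body_weight_kg = 70.0

dose_ng_per_kg_day = sum(servings_g_per_day[f] * residue_ng_per_g[f]
                         for f in servings_g_per_day) / body_weight_kg
print(f"estimated chronic dietary dose: {dose_ng_per_kg_day:.1f} ng/kg-day")
```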
Optical rangefinding applications using communications modulation technique
NASA Astrophysics Data System (ADS)
Caplan, William D.; Morcom, Christopher John
2010-10-01
A novel range detection technique combines optical pulse modulation patterns with signal cross-correlation to produce an accurate range estimate from low power signals. The cross-correlation peak is analyzed by a post-processing algorithm such that the phase delay is proportional to the range to target. This technique produces a stable range estimate from noisy signals. The advantage is higher accuracy obtained with relatively low optical power transmitted. The technique is useful for low cost, low power and low mass sensors suitable for tactical use. The signal coding technique allows applications including IFF and battlefield identification systems.
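A minimal sketch of the correlation-ranging idea follows; the code pattern, sample rate, and noise level are illustrative, not taken from the paper.

```python
# Pulse-coded ranging by cross-correlation: transmit a pseudorandom on/off
# pattern, receive a weak noisy echo, and locate the correlation peak to
# recover the round-trip delay.  All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(2)
fs = 100e6                                        # sample rate [Hz]
code = rng.integers(0, 2, 4096).astype(float)     # transmitted pattern

true_delay = 333                                  # samples
echo = np.concatenate([np.zeros(true_delay), 0.01 * code])[:code.size]
echo += rng.normal(0.0, 0.05, code.size)          # weak return buried in noise

xc = np.correlate(echo, code - code.mean(), mode="full")
lag = xc.argmax() - (code.size - 1)               # recovered delay in samples
c = 3e8
print(f"estimated range: {c * lag / fs / 2:.1f} m "
      f"(true {c * true_delay / fs / 2:.1f} m)")
```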
Education and Synthetic Work-Life Earnings Estimates. American Community Survey Reports. ACS-14
ERIC Educational Resources Information Center
Julian, Tiffany; Kominski, Robert
2011-01-01
The relationship between education and earnings is a long-analyzed topic of study. Generally, there is a strong belief that achievement of higher levels of education is a well established path to better jobs and better earnings. This report provides one view of the economic value of educational attainment by producing an estimate of the amount of…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-06
... this spring; and the potential for higher prices in the wine and juice markets, which compete for...-term benefits of this action are expected to outweigh the costs. The committee believes that with no... NS raisin producers benefit more from those raisins which are free tonnage, a lower free tonnage...
Direct Measurement of Perchlorate Exposure Biomarkers in a Highly Exposed Population: A Pilot Study
Wong, Michelle; Copan, Lori; Olmedo, Luis; Patton, Sharyle; Haas, Robert; Atencio, Ryan; Xu, Juhua; Valentin-Blasini, Liza
2011-01-01
Exposure to perchlorate is ubiquitous in the United States and has been found to be widespread in food and drinking water. People living in the lower Colorado River region may have perchlorate exposure because of perchlorate in ground water and locally-grown produce. Relatively high doses of perchlorate can inhibit iodine uptake and impair thyroid function, and thus could impair neurological development in utero. We examined human exposures to perchlorate in the Imperial Valley among individuals consuming locally grown produce and compared perchlorate exposure doses to state and federal reference doses. We collected 24-hour urine specimens from a convenience sample of 31 individuals and measured urinary excretion rates of perchlorate, thiocyanate, nitrate, and iodide. In addition, drinking water and local produce were also sampled for perchlorate. All but two of the water samples tested negative for perchlorate. Perchlorate levels in 79 produce samples ranged from non-detect to 1816 ppb. Estimated perchlorate doses ranged from 0.02 to 0.51 µg/kg of body weight/day. Perchlorate dose increased with the number of servings of dairy products consumed and with estimated perchlorate levels in produce consumed. The geometric mean perchlorate dose was 70% higher than for the NHANES reference population. Our sample of 31 Imperial Valley residents had higher perchlorate dose levels compared with national reference ranges. Although none of our exposure estimates exceeded the U.S. EPA reference dose, three participants exceeded the acceptable daily dose as defined by benchmark dose methods used by the California Office of Environmental Health Hazard Assessment. PMID:21394205
Cunha, C S; Lopes, N L; Veloso, C M; Jacovine, L A G; Tomich, T R; Pereira, L G R; Marcondes, M I
2016-11-15
The adoption of carbon inventories for dairy farms in tropical countries based on models developed from animals and diets of temperate climates is questionable. Thus, the objectives of this study were to estimate enteric methane (CH4) emissions through the SF6 tracer gas technique and through equations proposed by the Intergovernmental Panel on Climate Change (IPCC) Tier 2, and to calculate the inventory of greenhouse gas (GHG) emissions from two dairy systems. In addition, the carbon balance of these properties was estimated using enteric CH4 emissions obtained using both methodologies. In trial 1, the CH4 emissions were estimated from seven Holstein dairy cattle categories based on the SF6 tracer gas technique and on IPCC equations. The categories used in the study were prepubertal heifers (n=6); pubertal heifers (n=4); pregnant heifers (n=5); high-producing (n=6); medium-producing (n=5); low-producing (n=4) and dry cows (n=5). Enteric methane emission was higher for the category comprising prepubertal heifers when estimated by the equations proposed by the IPCC Tier 2. However, higher CH4 emissions were estimated by the SF6 technique in the categories including medium- and high-producing cows and dry cows. Pubertal heifers, pregnant heifers, and low-producing cows had equal CH4 emissions as estimated by both methods. In trial 2, two dairy farms were monitored for one year to identify all activities that contributed in any way to GHG emissions. The total emission from Farm 1 was 3.21 t CO2e/animal/yr, of which 1.63 t corresponded to enteric CH4. Farm 2 emitted 3.18 t CO2e/animal/yr, with 1.70 t of enteric CH4. IPCC estimates can underestimate CH4 emissions from some categories while overestimating others. However, considering the whole property, these discrepancies are offset, and we would submit that the equations suggested by the IPCC properly estimate the total CH4 emission and carbon balance of the properties. Thus, the IPCC equations should be utilized with caution, and the herd composition should be analysed at the property level. When the carbon stock in pasture and other crops was considered, the carbon balance suggested that both farms are sustainable in terms of GHG by both methods. On the other hand, the carbon balance without carbon stock, by both methods, suggests that the farms emit more carbon than the system is capable of storing.
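For reference, the heart of the IPCC Tier 2 enteric-fermentation estimate compared above is a one-line conversion from gross energy intake (GE) and the methane conversion factor (Ym) to annual CH4; the GE and Ym values below are illustrative, not the study's inputs.

```python
# Core of the IPCC Tier 2 enteric CH4 estimate: annual emission from gross
# energy intake and the methane conversion factor Ym.  The constant 55.65
# MJ/kg is the energy content of methane used by the IPCC.  Category GE and
# Ym values are illustrative placeholders.
def enteric_ch4_kg_per_year(ge_mj_per_day, ym_percent):
    return ge_mj_per_day * (ym_percent / 100.0) * 365.0 / 55.65

for category, ge, ym in [("high-producing cow", 300.0, 6.0),
                         ("dry cow",            180.0, 6.5),
                         ("prepubertal heifer", 100.0, 6.5)]:
    print(f"{category}: {enteric_ch4_kg_per_year(ge, ym):.0f} kg CH4/yr")
```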
Reitz, Meredith; Sanford, Ward E.; Senay, Gabriel; Cazenas, J.
2017-01-01
This study presents new data-driven, annual estimates of the division of precipitation into the recharge, quick-flow runoff, and evapotranspiration (ET) water budget components for 2000-2013 for the contiguous United States (CONUS). The algorithms used to produce these maps ensure water budget consistency over this broad spatial scale, with contributions from precipitation influx attributed to each component at 800 m resolution. The quick-flow runoff estimates for the contribution to the rapidly varying portion of the hydrograph are produced using data from 1,434 gaged watersheds, and depend on precipitation, soil saturated hydraulic conductivity, and surficial geology type. Evapotranspiration estimates are produced from a regression using water balance data from 679 gaged watersheds and depend on land cover, temperature, and precipitation. The quick-flow and ET estimates are combined to calculate recharge as the remainder of precipitation. The ET and recharge estimates are checked against independent field data, and the results show good agreement. Comparisons of recharge estimates with groundwater extraction data show that in 15% of the country, groundwater is being extracted at rates higher than the local recharge. These maps of the internally consistent water budget components of recharge, quick-flow runoff, and ET, being derived from and tested against data, are expected to provide reliable first-order estimates of these quantities across the CONUS, even where field measurements are sparse.
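The budget identity that enforces the study's water-budget consistency can be written directly: recharge is the remainder of precipitation after quick-flow runoff and ET, so the three components sum to precipitation by construction. The annual depths below are illustrative values for a single cell.

```python
# Water-budget identity sketch for one grid cell (illustrative annual depths).
precip_mm = 900.0
quickflow_mm = 130.0   # e.g., from a gaged-watershed regression
et_mm = 560.0          # e.g., from a land-cover/temperature regression

recharge_mm = precip_mm - quickflow_mm - et_mm   # remainder of precipitation
assert abs(quickflow_mm + et_mm + recharge_mm - precip_mm) < 1e-9  # consistency
print(f"recharge = {recharge_mm:.0f} mm/yr")
```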
Context retrieval and description benefits for recognition of unfamiliar faces.
Jones, Todd C; Robinson, Kealagh; Steel, Brenna C
2018-04-19
Describing unfamiliar faces during or immediately after their presentation in a study phase can produce better recognition memory performance compared with a view-only control condition. We treated descriptions as elaborative information that is part of the study context and investigated how context retrieval influences recognition memory. Following general dual-process theories, we hypothesized that recollection would be used to recall descriptions and that description recall would influence recognition decisions, including the level of recognition confidence. In four experiments, description conditions produced higher hit rates and higher levels of recognition confidence than control conditions. Participants recalled descriptive content on some trials, and this context retrieval was linked to an increase in the recognition confidence level. Repeating study faces in description conditions increased recognition scores, recognition confidence level, and context retrieval. Estimates of recollection from Yonelinas' (1994) dual-process signal detection ROCs were, on average, very close to the measures of context recall. Description conditions also produced higher estimates of familiarity. Finally, we found evidence that participants engaged in description activity in some ostensibly view-only trials. An emphasis on the information participants use in making their recognition decisions can advance understanding of description effects when descriptions are part of the study trial context.
A. M. S. Smith; N. A. Drake; M. J. Wooster; A. T. Hudak; Z. A. Holden; C. J. Gibbons
2007-01-01
Accurate production of regional burned area maps is necessary to reduce uncertainty in emission estimates from African savannah fires. Numerous methods have been developed that map burned and unburned surfaces. These methods are typically applied to coarse spatial resolution (1 km) data to produce regional estimates of the area burned, while higher spatial resolution...
Salinet, João L; Masca, Nicholas; Stafford, Peter J; Ng, G André; Schlindwein, Fernando S
2016-03-08
Areas with high frequency activity within the atrium are thought to be 'drivers' of the rhythm in patients with atrial fibrillation (AF), and ablation of these areas seems to be an effective therapy in eliminating the DF gradient and restoring sinus rhythm. Clinical groups have applied the traditional FFT-based approach to generate three-dimensional dominant frequency (3D DF) maps during electrophysiology (EP) procedures, but the literature is restricted on using alternative spectral estimation techniques that can have better frequency resolution than FFT-based spectral estimation. Autoregressive (AR) model-based spectral estimation techniques, with emphasis on selection of appropriate sampling rate and AR model order, were implemented to generate high-density 3D DF maps of atrial electrograms (AEGs) in persistent atrial fibrillation (persAF). For each patient, 2048 simultaneous AEGs were recorded for 20.478-s-long segments in the left atrium (LA) and exported for analysis, together with their anatomical locations. After the DFs were identified using AR-based spectral estimation, they were colour coded to produce sequential 3D DF maps. These maps were systematically compared with maps found using the Fourier-based approach. 3D DF maps can be obtained using AR-based spectral estimation after downsampling (DS) of the AEGs, and the resulting maps are very similar to those obtained using FFT-based spectral estimation (mean 90.23%). There were no significant differences between AR techniques (p = 0.62). The processing time for the AR-based approach was considerably shorter (from 5.44 to 5.05 s) when lower sampling frequencies and model order values were used. Higher levels of DS presented higher rates of DF agreement (sampling frequency of 37.5 Hz). We have demonstrated the feasibility of using AR spectral estimation methods for producing 3D DF maps and characterised their differences to the maps produced using the FFT technique, offering an alternative approach for 3D DF computation in human persAF studies.
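A minimal sketch of AR-based dominant-frequency estimation of the kind compared above (a Yule-Walker fit, then the peak of the AR spectrum) follows; the test signal, model order, and the 37.5 Hz downsampled rate are illustrative.

```python
# AR (Yule-Walker) dominant-frequency sketch: fit an AR model to a signal
# and take the frequency of the AR power-spectrum peak as the DF.
import numpy as np
from scipy.linalg import solve_toeplitz

def ar_dominant_frequency(x, fs, order=12, nfreq=2048):
    x = x - x.mean()
    r = np.correlate(x, x, mode="full")[x.size - 1:] / x.size  # autocorrelation
    a = solve_toeplitz(r[:order], r[1:order + 1])              # Yule-Walker AR coeffs
    freqs = np.linspace(0.0, fs / 2, nfreq)
    zinv = np.exp(-2j * np.pi * freqs / fs)                    # z^-1 on unit circle
    denom = 1 - sum(a[k] * zinv**(k + 1) for k in range(order))
    psd = 1.0 / np.abs(denom)**2                               # AR spectrum shape
    return freqs[psd.argmax()]

fs = 37.5                                  # Hz, after downsampling (illustrative)
t = np.arange(0.0, 20.478, 1 / fs)         # segment length as in the study
aeg = np.sin(2 * np.pi * 6.2 * t) + 0.5 * np.random.default_rng(3).normal(size=t.size)
print(f"dominant frequency ~= {ar_dominant_frequency(aeg, fs):.2f} Hz")
```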
Beresford, Shirley A.A.; Fenske, Richard A.; Fitzpatrick, Annette L.; Lu, Chensheng; Nettleton, Jennifer A.; Kaufman, Joel D.
2015-01-01
Background Organophosphate pesticide (OP) exposure to the U.S. population is dominated by dietary intake. The magnitude of exposure from diet depends partly on personal decisions such as which foods to eat and whether to choose organic food. Most studies of OP exposure rely on urinary biomarkers, which are limited by short half-lives and often lack specificity to parent compounds. A reliable means of estimating long-term dietary exposure to individual OPs is needed to assess the potential relationship with adverse health effects. Objectives We assessed long-term dietary exposure to 14 OPs among 4,466 participants in the Multi-Ethnic Study of Atherosclerosis, and examined the influence of organic produce consumption on this exposure. Methods Individual-level exposure was estimated by combining information on typical intake of specific food items with average OP residue levels on those items. In an analysis restricted to a subset of participants who reported rarely or never eating organic produce (“conventional consumers”), we assessed urinary dialkylphosphate (DAP) levels across tertiles of estimated exposure (n = 480). In a second analysis, we compared DAP levels across subgroups with differing self-reported organic produce consumption habits (n = 240). Results Among conventional consumers, increasing tertile of estimated dietary OP exposure was associated with higher DAP concentrations (p < 0.05). DAP concentrations were also significantly lower in groups reporting more frequent consumption of organic produce (p < 0.02). Conclusions Long-term dietary exposure to OPs was estimated from dietary intake data, and estimates were consistent with DAP measurements. More frequent consumption of organic produce was associated with lower DAPs. Citation Curl CL, Beresford SA, Fenske RA, Fitzpatrick AL, Lu C, Nettleton JA, Kaufman JD. 2015. Estimating pesticide exposure from dietary intake and organic food choices: the Multi-Ethnic Study of Atherosclerosis (MESA). Environ Health Perspect 123:475–483; http://dx.doi.org/10.1289/ehp.1408197 PMID:25650532
Can, Seda; van de Schoot, Rens; Hox, Joop
2015-06-01
Because variables may be correlated in the social and behavioral sciences, multicollinearity might be problematic. This study investigates the effect of collinearity manipulated in the within and between levels of a two-level confirmatory factor analysis by Monte Carlo simulation. Furthermore, the influence of the size of the intraclass correlation coefficient (ICC) and of the estimation method (maximum likelihood estimation with robust chi-squares and standard errors, or Bayesian estimation) on the convergence rate is investigated. The other variables of interest were the rate of inadmissible solutions and the relative parameter and standard error bias on the between level. The results showed that inadmissible solutions were obtained when there was between-level collinearity and the estimation method was maximum likelihood. In the within-level multicollinearity condition, all of the solutions were admissible but the bias values were higher compared with the between-level collinearity condition. Bayesian estimation appeared to be robust in obtaining admissible parameters but the relative bias was higher than for maximum likelihood estimation. Finally, as expected, high ICC produced less biased results compared to medium ICC conditions.
DOT National Transportation Integrated Search
2012-07-01
Microscopic models produce emissions and fuel consumption estimates with higher temporal resolution than other scales of models. Most emissions and fuel consumption models were developed with data from dynamometer testing which are sufficiently a...
Robust Mosaicking of Stereo Digital Elevation Models from the Ames Stereo Pipeline
NASA Technical Reports Server (NTRS)
Kim, Tae Min; Moratto, Zachary M.; Nefian, Ara Victor
2010-01-01
A robust estimation method is proposed to combine multiple observations and create consistent, accurate, dense Digital Elevation Models (DEMs) from lunar orbital imagery. The NASA Ames Intelligent Robotics Group (IRG) aims to produce higher-quality terrain reconstructions of the Moon from Apollo Metric Camera (AMC) data than is currently possible. In particular, IRG makes use of a stereo vision process, the Ames Stereo Pipeline (ASP), to automatically generate DEMs from consecutive AMC image pairs. However, the DEMs currently produced by the ASP often contain errors and inconsistencies due to image noise, shadows, etc. The proposed method addresses this problem by making use of multiple observations and by considering their goodness of fit to improve both the accuracy and robustness of the estimate. The stepwise regression method is applied to estimate the relaxed weight of each observation.
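One generic way to realize this kind of multi-observation robustness is to iteratively downweight each DEM's contribution by its disagreement with the current estimate. The sketch below is a Huber-weighted combination on synthetic data, offered as a general illustration rather than IRG's actual stepwise-regression weighting.

```python
# Generic robust combination of stacked DEM observations per pixel:
# iteratively reweight observations by agreement with the current estimate
# (Huber-style weights).  Data are synthetic; this is NOT the ASP algorithm.
import numpy as np

def robust_combine(stack, k=2.0, iters=10):
    # stack: (n_dems, rows, cols) elevation observations
    est = np.nanmedian(stack, axis=0)
    for _ in range(iters):
        resid = stack - est
        scale = np.nanmedian(np.abs(resid), axis=0) * 1.4826 + 1e-6  # robust sigma
        w = np.minimum(1.0, k * scale / (np.abs(resid) + 1e-12))     # Huber weight
        est = np.nansum(w * stack, axis=0) / np.nansum(w, axis=0)
    return est

rng = np.random.default_rng(4)
truth = rng.normal(0.0, 10.0, (50, 50))
stack = truth + rng.normal(0.0, 0.5, (4, 50, 50))
stack[2] += 5.0 * (rng.random((50, 50)) < 0.05)   # one DEM contains blunders
print("max error:", float(np.abs(robust_combine(stack) - truth).max()))
```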
Signorini, Marcelo L; Rossler, Eugenia; Díaz David, Diego C; Olivero, Carolina R; Romero-Scharpen, Analía; Soto, Lorena P; Astesana, Diego M; Berisvil, Ayelen P; Zimmermann, Jorge A; Fusari, Marcia L; Frizzo, Laureano S; Zbrun, María V
2018-04-30
The objective of this meta-analysis was to summarize available information on the prevalence of antimicrobial-resistant Campylobacter species in humans, food-producing animals, and products of animal origin. A number of multilevel random-effect meta-analysis models were fitted to estimate the mean occurrence rate of antimicrobial-resistant thermotolerant Campylobacter and to compare them throughout the years and among the species, food-producing animals (i.e., bovines, pigs, broilers, hens, goats, and sheep), country of origin, sample type, methodology to determine the antimicrobial susceptibility, and the species of Campylobacter. Among the considered antibiotics, thermotolerant Campylobacter showed the highest resistance to tetracycline (pool estimate [PE] = 0.493; 95% CI 0.466-0.519), nalidixic acid (PE = 0.385; 95% CI 0.348-0.423), and ciprofloxacin (PE = 0.376; 95% CI 0.339-0.415). In general, the prevalence of antimicrobial-resistant Campylobacter spp. was higher in hens, broilers, and swine. Campylobacter coli showed a higher prevalence of antimicrobial resistance than Campylobacter jejuni. Independent of the antimicrobial evaluated, the disk diffusion method showed a higher prevalence of antimicrobial-resistant Campylobacter than methods based on estimation of the minimum inhibitory concentration. The meta-analysis showed that the prevalence of antimicrobial-resistant Campylobacter is substantial, essentially in foods derived from hens and broilers, and was observed worldwide. The prevalence of this pathogen is of public health importance, and the increase in the prevalence of Campylobacter strains resistant to the antimicrobials of choice worsens the situation; hence, national authorities must monitor the situation in each country with the aim of establishing appropriate risk management measures.
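Pooled estimates ("PE") like those quoted above typically come from a random-effects model. Below is a minimal DerSimonian-Laird sketch on invented study proportions; a real analysis like this one would transform the proportions and model the multilevel structure.

```python
# DerSimonian-Laird random-effects pooling sketch (illustrative data).
import numpy as np

p = np.array([0.42, 0.55, 0.48, 0.61, 0.39])   # resistant proportion per study
n = np.array([120, 80, 200, 150, 90])          # isolates per study

v = p * (1 - p) / n                            # within-study variance
w = 1 / v                                      # fixed-effect weights
mu_fe = np.sum(w * p) / w.sum()
q = np.sum(w * (p - mu_fe) ** 2)               # heterogeneity statistic Q
tau2 = max(0.0, (q - (len(p) - 1)) / (w.sum() - np.sum(w**2) / w.sum()))

w_re = 1 / (v + tau2)                          # random-effects weights
pe = np.sum(w_re * p) / w_re.sum()
se = np.sqrt(1 / w_re.sum())
print(f"pooled estimate {pe:.3f} (95% CI {pe - 1.96*se:.3f}-{pe + 1.96*se:.3f})")
```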
2012-09-25
amplitudes of the model's production parameters (w, , s) and degradation parameters (kp, dc), because the estimates for all of these parameters are higher for group A than for group C. ... values of both production and degradation parameters (Table 3), but there is significant variability between subjects that is caused by underlying
Latimer, Nicholas R; Abrams, K R; Lambert, P C; Crowther, M J; Wailoo, A J; Morden, J P; Akehurst, R L; Campbell, M J
2017-04-01
Estimates of the overall survival benefit of new cancer treatments are often confounded by treatment switching in randomised controlled trials (RCTs), whereby patients randomised to the control group are permitted to switch onto the experimental treatment upon disease progression. In health technology assessment, estimates of the unconfounded overall survival benefit associated with the new treatment are needed. Several switching adjustment methods have been advocated in the literature, some of which have been used in health technology assessment. However, it is unclear which methods are likely to produce least bias in realistic RCT-based scenarios. We simulated RCTs in which switching, associated with patient prognosis, was permitted. Treatment effect size and time dependency, switching proportions, and disease severity were varied across scenarios. We assessed the performance of alternative adjustment methods based upon bias, coverage, and mean squared error, related to the estimation of true restricted mean survival in the absence of switching in the control group. We found that when the treatment effect was not time-dependent, rank preserving structural failure time models (RPSFTM) and iterative parameter estimation methods produced low levels of bias. However, in the presence of a time-dependent treatment effect, these methods produced higher levels of bias, similar to those produced by an inverse probability of censoring weights method. The inverse probability of censoring weights and structural nested models produced high levels of bias when switching proportions exceeded 85%. A simplified two-stage Weibull method produced low bias across all scenarios and, provided the treatment switching mechanism is suitable, represents an appropriate adjustment method.
Cutler, J H Higginson; Rushen, J; de Passillé, A M; Gibbons, J; Orsel, K; Pajor, E; Barkema, H W; Solano, L; Pellerin, D; Haley, D; Vasseur, E
2017-12-01
Lameness is one of the most important welfare and productivity concerns in the dairy industry. Our objectives were to obtain producers' estimates of its prevalence and their perceptions of lameness, and to investigate how producers monitor lameness in tiestall (TS), freestall with milking parlor (FS), and automated milking system (AMS) herds. Forty focal cows per farm in 237 Canadian dairy herds were scored for lameness by trained researchers. On the same day, the producers completed a questionnaire. Mean herd-level prevalence of lameness estimated by producers was 9.0% (±0.9%; ±SE), whereas the researchers observed a mean prevalence of 22.2% (±0.9%). Correlation between producer- and researcher-estimated lameness prevalence was low (r = 0.19), and mean researcher prevalence was 1.6, 1.8, and 4.1 times higher in AMS, FS, and TS farms, respectively. A total of 48% of producers thought lameness was a moderate or major problem in their herds (TS = 34%; AMS = 53%; FS = 59%). One third of producers considered lameness the highest ranked health problem they were trying to control, whereas two-thirds of producers (TS = 43%; AMS = 63%; FS = 71%) stated that they had made management changes to deal with lameness in the past 2 yr. Almost all producers (98%) stated they routinely check cows to identify new cases of lameness; however, 40% of producers did not keep records of lameness (AMS = 24%; FS = 23%; TS = 60%). A majority (69%) of producers treated lame cows themselves immediately after detection, whereas 13% relied on hoof-trimmers or veterinarians to plan treatment. Producers are aware of lameness as an issue in dairy herds and almost all monitor lameness as part of their daily routine. However, producers underestimate lameness prevalence, which highlights that lameness detection continues to be difficult in all housing systems, especially in TS herds. Training to improve detection, record keeping, identification of farm-specific risk factors, and treatment planning for lame cows is likely to help decrease lameness prevalence.
Collinear Latent Variables in Multilevel Confirmatory Factor Analysis
van de Schoot, Rens; Hox, Joop
2014-01-01
Because variables may be correlated in the social and behavioral sciences, multicollinearity might be problematic. This study investigates the effect of collinearity manipulated in the within and between levels of a two-level confirmatory factor analysis by Monte Carlo simulation. Furthermore, the influence of the size of the intraclass correlation coefficient (ICC) and of the estimation method (maximum likelihood estimation with robust chi-squares and standard errors, or Bayesian estimation) on the convergence rate is investigated. The other variables of interest were the rate of inadmissible solutions and the relative parameter and standard error bias on the between level. The results showed that inadmissible solutions were obtained when there was between-level collinearity and the estimation method was maximum likelihood. In the within-level multicollinearity condition, all of the solutions were admissible but the bias values were higher compared with the between-level collinearity condition. Bayesian estimation appeared to be robust in obtaining admissible parameters but the relative bias was higher than for maximum likelihood estimation. Finally, as expected, high ICC produced less biased results compared to medium ICC conditions. PMID:29795827
NASA Astrophysics Data System (ADS)
Secchi, Silvia; Gassman, Philip W.; Williams, Jimmy R.; Babcock, Bruce A.
2009-10-01
Growing demand for corn due to the expansion of ethanol has increased concerns that environmentally sensitive lands retired from agricultural production and enrolled into the Conservation Reserve Program (CRP) will be cropped again. Iowa produces more ethanol than any other state in the United States, and it also produces the most corn. Thus, an examination of the impacts of higher crop prices on CRP land in Iowa can give insight into what we might expect nationally in the years ahead if crop prices remain high. We construct CRP land supply curves for various corn prices and then estimate the environmental impacts of cropping CRP land through the Environmental Policy Integrated Climate (EPIC) model. EPIC provides edge-of-field estimates of soil erosion, nutrient loss, and carbon sequestration. We find that incremental impacts increase dramatically as higher corn prices bring into production more and more environmentally fragile land. Maintaining current levels of environmental quality will require substantially higher spending levels. Even allowing for the cost savings that would accrue as CRP land leaves the program, a change in targeting strategies will likely be required to ensure that the most sensitive land does not leave the program.
Multiple imputation for handling missing outcome data when estimating the relative risk.
Sullivan, Thomas R; Lee, Katherine J; Ryan, Philip; Salter, Amy B
2017-09-06
Multiple imputation is a popular approach to handling missing data in medical research, yet little is known about its applicability for estimating the relative risk. Standard methods for imputing incomplete binary outcomes involve logistic regression or an assumption of multivariate normality, whereas relative risks are typically estimated using log binomial models. It is unclear whether misspecification of the imputation model in this setting could lead to biased parameter estimates. Using simulated data, we evaluated the performance of multiple imputation for handling missing data prior to estimating adjusted relative risks from a correctly specified multivariable log binomial model. We considered an arbitrary pattern of missing data in both outcome and exposure variables, with missing data induced under missing at random mechanisms. Focusing on standard model-based methods of multiple imputation, missing data were imputed using multivariate normal imputation or fully conditional specification with a logistic imputation model for the outcome. Multivariate normal imputation performed poorly in the simulation study, consistently producing estimates of the relative risk that were biased towards the null. Despite outperforming multivariate normal imputation, fully conditional specification also produced somewhat biased estimates, with greater bias observed for higher outcome prevalences and larger relative risks. Deleting imputed outcomes from analysis datasets did not improve the performance of fully conditional specification. Both multivariate normal imputation and fully conditional specification produced biased estimates of the relative risk, presumably since both use a misspecified imputation model. Based on simulation results, we recommend researchers use fully conditional specification rather than multivariate normal imputation and retain imputed outcomes in the analysis when estimating relative risks. However fully conditional specification is not without its shortcomings, and so further research is needed to identify optimal approaches for relative risk estimation within the multiple imputation framework.
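After the imputation step discussed above, per-imputation estimates are combined with Rubin's rules. A minimal sketch, with placeholder log relative risks and standard errors for m = 5 imputations (the small-sample degrees-of-freedom correction is omitted):

```python
# Rubin's rules for pooling log relative risks across imputed datasets.
# The per-imputation estimates and standard errors are illustrative.
import numpy as np

log_rr = np.array([0.262, 0.301, 0.248, 0.279, 0.290])  # log(RR) per imputation
se     = np.array([0.081, 0.084, 0.079, 0.082, 0.080])

m = log_rr.size
qbar = log_rr.mean()                    # pooled point estimate
ubar = np.mean(se**2)                   # within-imputation variance
b = log_rr.var(ddof=1)                  # between-imputation variance
t = ubar + (1 + 1 / m) * b              # total variance (Rubin, 1987)

rr = np.exp(qbar)
lo, hi = np.exp(qbar - 1.96 * np.sqrt(t)), np.exp(qbar + 1.96 * np.sqrt(t))
print(f"pooled RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```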
Impact of calibration on estimates of central blood pressures.
Soender, T K; Van Bortel, L M; Møller, J E; Lambrechtsen, J; Hangaard, J; Egstrup, K
2012-12-01
Using the SphygmoCor device, it is recommended that the radial pressure wave be calibrated for brachial systolic blood pressure (SBP) and diastolic blood pressure (DBP). However, it has been suggested that brachial-to-radial pressure amplification causes underestimation of central blood pressures (BPs) using this calibration. In the present study we examined whether different calibrations had an impact on estimates of central BPs and on the clinical interpretation of our results. On the basis of ambulatory BP measurements, patients were categorized into patients with controlled, uncontrolled, or resistant hypertension. We first calibrated the radial pressure wave as recommended and afterwards recalibrated the same pressure wave using brachial DBP and calculated mean arterial pressure. Recalibration of the pressure wave generated significantly higher estimates of central SBP (P=0.0003 and P<0.0001 at baseline and P=0.0001 and P=0.0002 after 6 months). Using the recommended calibration we found a significant change in central SBP in both treatment groups (P=0.05 and P=0.01); however, after recalibrating, significance was lost in patients with resistant hypertension (P=0.15). We conclude that calibration with DBP and mean arterial pressure produces higher estimates of central BPs than the recommended calibration. The present study also shows that this difference between the two calibration methods can produce more than a systematic error and has an impact on interpretation of clinical results.
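The two calibrations compared in the study amount to rescaling the same waveform to different anchor points. A sketch with a synthetic pulse (the waveform shape and pressure values are illustrative):

```python
# Two calibrations of one radial waveform: (a) span DBP..SBP (recommended);
# (b) anchor the minimum at DBP and the mean at calculated MAP.
import numpy as np

t = np.linspace(0.0, 1.0, 200, endpoint=False)
wave = 0.2 + 0.8 * np.exp(-((t - 0.3) / 0.12) ** 2)   # synthetic radial pulse

sbp, dbp = 130.0, 80.0
map_est = dbp + (sbp - dbp) / 3.0                     # common MAP approximation

# (a) recommended: rescale so min -> DBP and max -> SBP
cal_a = dbp + (wave - wave.min()) / (wave.max() - wave.min()) * (sbp - dbp)
# (b) DBP/MAP: rescale so min -> DBP and mean -> MAP (peak can exceed SBP)
cal_b = dbp + (wave - wave.min()) * (map_est - dbp) / (wave.mean() - wave.min())

print(f"systolic peak, SBP/DBP calibration: {cal_a.max():.1f} mmHg")
print(f"systolic peak, DBP/MAP calibration: {cal_b.max():.1f} mmHg")
```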
Truong, Q T; Nguyen, Q V; Truong, V T; Park, H C; Byun, D Y; Goo, N S
2011-09-01
We present an unsteady blade element theory (BET) model to estimate the aerodynamic forces produced by a freely flying beetle and a beetle-mimicking flapping wing system. Added mass and rotational forces are included to accommodate the unsteady force. In addition to the aerodynamic forces, the inertial forces of the wings are calculated to accurately estimate the time history of the total forces. All of the force components are considered based on the full three-dimensional (3D) motion of the wing. The result obtained by the present BET model is validated against data presented in a reference paper. The difference between the averages of the estimated forces (lift and drag) and the measured forces in the reference is about 5.7%. The BET model is also used to estimate the force produced by a freely flying beetle and a beetle-mimicking flapping wing system. The wing kinematics used in the BET calculation of a real beetle and the flapping wing system are captured using high-speed cameras. The results show that the average estimated vertical force of the beetle is reasonably close to the weight of the beetle, and the average estimated thrust of the beetle-mimicking flapping wing system is in good agreement with the measured value. Our results show that the unsteady lift and drag coefficients measured by Dickinson et al. are still useful for relatively higher Reynolds number cases, and the proposed BET can be a good way to estimate the force produced by a flapping wing system.
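The quasi-steady core that unsteady BET builds on can be sketched briefly. The geometry and kinematics below are invented, and only the translational lift term is shown (the paper adds added-mass, rotational, and wing-inertia terms); the C_L(alpha) expression is the flat-plate fit reported by Dickinson et al.

```python
# Quasi-steady blade-element lift sketch: integrate 0.5*rho*U^2*c(r)*C_L
# along the span.  Wing geometry and kinematics are illustrative.
import numpy as np

rho = 1.225                     # air density [kg/m^3]
R = 0.05                        # wing length [m]
chord = lambda r: 0.02 * np.sqrt(np.maximum(1 - (r / R) ** 2, 0.0))  # [m]
omega = 2 * np.pi * 30 * 0.8    # flapping angular speed at mid-stroke [rad/s]
alpha_deg = 35.0                # angle of attack [deg]

# translational lift coefficient, Dickinson et al. flat-plate fit
cl = 0.225 + 1.58 * np.sin(np.deg2rad(2.13 * alpha_deg - 7.2))

r = np.linspace(0.0, R, 200)
u = omega * r                             # local blade-element speed
dL = 0.5 * rho * u ** 2 * chord(r) * cl   # lift per unit span
lift = np.sum(0.5 * (dL[1:] + dL[:-1]) * np.diff(r))   # trapezoidal integral
print(f"quasi-steady translational lift, one wing: {lift * 1e3:.2f} mN")
```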
NASA Astrophysics Data System (ADS)
Wang, Yu; Liu, Qun
2013-01-01
Surplus-production models are widely used in fish stock assessment and fisheries management due to their simplicity and lower data demands than age-structured models such as Virtual Population Analysis. The CEDA (catch-effort data analysis) and ASPIC (a surplus-production model incorporating covariates) computer packages are data-fitting or parameter estimation tools that have been developed to analyze catch-and-effort data using non-equilibrium surplus production models. We applied CEDA and ASPIC to the hairtail (Trichiurus japonicus) fishery in the East China Sea. Both packages produced robust results and yielded similar estimates. In CEDA, the Schaefer surplus production model with log-normal error assumption produced results close to those of ASPIC. CEDA is sensitive to the choice of initial proportion, while ASPIC is not. However, CEDA produced higher R² values than ASPIC.
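For readers unfamiliar with these packages, both fit variants of the same underlying dynamics. Below is a sketch of the Schaefer surplus-production model; r, K, q, and the effort series are illustrative, not the hairtail fishery estimates.

```python
# Schaefer surplus-production dynamics: B[t+1] = B[t] + r*B*(1 - B/K) - C[t],
# with catch C = q*E*B.  All parameter values are illustrative.
import numpy as np

r, K = 0.6, 1.0e6          # intrinsic growth rate, carrying capacity (t)
q = 2.0e-4                 # catchability coefficient
B = 0.8 * K                # initial biomass
effort = np.linspace(800, 1500, 20)   # fishing effort by year

for E in effort:
    catch = q * E * B
    B = max(B + r * B * (1 - B / K) - catch, 1.0)   # keep biomass positive

print(f"final biomass: {B:,.0f} t; MSY = rK/4 = {r * K / 4:,.0f} t")
```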
United States Foreign Policy in Africa: A Right Approach
1990-04-01
a higher cost: Except for two of the platinum group metals (platinum and rhodium), andalusite, and a specific type of industrial diamond and grade... Defense Department officials, albeit at a higher cost. The Bureau of Mines report in 1988 estimated the 5-year cumulative direct economic cost of a US... the report understated the economic costs and overstated the ability of other mineral-producing nations to replace South African exports. Presently
Estimating the Efficiency of Michigan's Rural and Urban Public School Districts
ERIC Educational Resources Information Center
Maranowski, Rita
2012-01-01
This study examined student achievement in Michigan public school districts to determine if rural school districts are demonstrating greater financial efficiency by producing higher levels of student achievement than school districts in other geographic locations with similar socioeconomics. Three models were developed using multiple regression…
Scaling an in situ network for high resolution modeling during SMAPVEX15
USDA-ARS?s Scientific Manuscript database
Among the greatest challenges within the field of soil moisture estimation is that of scaling sparse point measurements within a network to produce higher resolution map products. Large-scale field experiments present an ideal opportunity to develop methodologies for this scaling, by coupling in si...
Perceptions and practices of Finnish dairy producers on disbudding pain in calves.
Hokkanen, A-H; Wikman, I; Korhonen, T; Pastell, M; Valros, A; Vainio, O; Hänninen, L
2015-02-01
Disbudding causes pain-related distress and behavioral changes in calves. Local anesthesia and non-steroidal anti-inflammatory drugs are effective for treating disbudding-related pain. Dairy producers play a key role in whether or not calves to be disbudded are properly medicated. Pain and distress related to disbudding of calves often remains untreated. Thus, we conducted this study to characterize perceptions and practices of dairy producers on disbudding and disbudding-related pain management. A questionnaire was sent to 1,000 randomly selected Finnish dairy producers (response rate: 45%). Our aim was to investigate producer perceptions about disbudding-related pain, the perceived need for pain alleviation before disbudding, and how these perceptions affect the valuing and use of pain alleviation before disbudding. More than 70% of Finnish dairy farms disbud their calves. Producers who ranked disbudding-related pain and need for pain alleviation higher called a veterinarian to medicate calves before disbudding more often than producers who ranked disbudding pain and need for pain alleviation lower. Among respondents who disbudded calves on their farms, 69% stated that disbudding caused severe pain, 63% stated that pain alleviation during disbudding is important, and 45% always had a veterinarian medicate their calves before disbudding. Producers with a herd healthcare agreement with their veterinarian estimated disbudding-related pain to be higher and had a veterinarian medicate calves more often than producers without such an agreement. Producers with tiestall systems and producers who did not use disbudding valued pain alleviation prior to disbudding higher than producers with freestalls and producers who used disbudding.
NASA Technical Reports Server (NTRS)
Huffman, George J.; Adler, Robert F.; Rudolf, Bruno; Schneider, Udo; Keehn, Peter R.
1995-01-01
The 'satellite-gauge model' (SGM) technique is described for combining precipitation estimates from microwave satellite data, infrared satellite data, rain gauge analyses, and numerical weather prediction models into improved estimates of global precipitation. Throughout, monthly estimates on a 2.5 degrees x 2.5 degrees lat-long grid are employed. First, a multisatellite product is developed using a combination of low-orbit microwave and geosynchronous-orbit infrared data in the latitude range 40 degrees N - 40 degrees S (the adjusted geosynchronous precipitation index) and low-orbit microwave data alone at higher latitudes. Then the rain gauge analysis is brought in, weighting each field by its inverse relative error variance to produce a nearly global, observationally based precipitation estimate. To produce a complete global estimate, the numerical model results are used to fill data voids in the combined satellite-gauge estimate. Our sequential approach to combining estimates allows a user to select the multisatellite estimate, the satellite-gauge estimate, or the full SGM estimate (observationally based estimates plus the model information). The primary limitation in the method is imperfections in the estimation of relative error for the individual fields. The SGM results for one year of data (July 1987 to June 1988) show important differences from the individual estimates, including model estimates as well as climatological estimates. In general, the SGM results are drier in the subtropics than the model and climatological results, reflecting the relatively dry microwave estimates that dominate the SGM in oceanic regions.
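The inverse relative-error-variance weighting described above is a standard combination rule. A sketch for one grid cell, with illustrative monthly rain rates and variances:

```python
# Inverse-error-variance combination of two precipitation estimates at one
# grid cell.  Values are illustrative monthly rain rates.
sat, sat_var = 4.2, 1.00       # mm/day, relative error variance (multisatellite)
gauge, gauge_var = 3.6, 0.25   # mm/day, relative error variance (gauge analysis)

w_sat, w_gauge = 1 / sat_var, 1 / gauge_var
combined = (w_sat * sat + w_gauge * gauge) / (w_sat + w_gauge)
print(f"satellite-gauge estimate: {combined:.2f} mm/day")  # pulled toward the gauge
```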
Validation of Pooled Whole-Genome Re-Sequencing in Arabidopsis lyrata.
Fracassetti, Marco; Griffin, Philippa C; Willi, Yvonne
2015-01-01
Sequencing pooled DNA of multiple individuals from a population instead of sequencing individuals separately has become popular due to its cost-effectiveness and simple wet-lab protocol, although some criticism of this approach remains. Here we validated a protocol for pooled whole-genome re-sequencing (Pool-seq) of Arabidopsis lyrata libraries prepared with low amounts of DNA (1.6 ng per individual). The validation was based on comparing single nucleotide polymorphism (SNP) frequencies obtained by pooling with those obtained by individual-based Genotyping By Sequencing (GBS). Furthermore, we investigated the effect of sample number, sequencing depth per individual and variant caller on population SNP frequency estimates. For Pool-seq data, we compared frequency estimates from two SNP callers, VarScan and Snape; the former employs a frequentist SNP calling approach while the latter uses a Bayesian approach. Results revealed concordance correlation coefficients well above 0.8, confirming that Pool-seq is a valid method for acquiring population-level SNP frequency data. Higher accuracy was achieved by pooling more samples (25 compared to 14) and working with higher sequencing depth (4.1× per individual compared to 1.4× per individual), which increased the concordance correlation coefficient to 0.955. The Bayesian-based SNP caller produced somewhat higher concordance correlation coefficients, particularly at low sequencing depth. We recommend pooling at least 25 individuals combined with sequencing at a depth of 100× to produce satisfactory frequency estimates for common SNPs (minor allele frequency above 0.05).
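The concordance correlation coefficient used above (Lin, 1989) penalizes both scatter and systematic offset between the two sets of frequency estimates. A sketch on synthetic frequency vectors:

```python
# Lin's concordance correlation coefficient between individual-based (GBS)
# and pooled (Pool-seq) allele frequency estimates.  Data are synthetic.
import numpy as np

rng = np.random.default_rng(5)
gbs = rng.uniform(0.05, 0.95, 1000)                       # "true" frequencies
pool = np.clip(gbs + rng.normal(0.0, 0.05, 1000), 0, 1)   # pooled estimates

def ccc(x, y):
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

print(f"concordance correlation coefficient: {ccc(gbs, pool):.3f}")
```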
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Xingchang; Wang, Chuankuan; Bond-Lamberty, Benjamin
Carbon dioxide (CO2) fluxes between terrestrial ecosystems and the atmosphere are primarily measured with eddy covariance (EC), biometric, and chamber methods. However, it is unclear why the estimates of CO2 fluxes, when measured using these different methods, converge at some sites but diverge at others. We synthesized a novel global dataset of forest CO2 fluxes to evaluate the consistency between EC and biometric or chamber methods for quantifying the CO2 budget in forests. Compared with the other two methods, the EC approach tended to produce a 25% higher estimate of net ecosystem production (NEP, 0.52 Mg C ha-1 yr-1), a 10% lower estimate of ecosystem respiration (Re, 1.39 Mg C ha-1 yr-1), and a 3% lower estimate of gross primary production (0.48 Mg C ha-1 yr-1); the NEP discrepancy resulted mainly from the lower EC-estimated Re. The discrepancies between EC and the other methods were higher at sites with complex topography and dense canopies versus those with flat topography and open canopies. Forest age also influenced the discrepancy through the change of leaf area index. The open-path EC system induced >50% of the discrepancy in NEP, presumably due to its surface heating effect. These results provided strong evidence that EC produces biased estimates of NEP and Re in forest ecosystems. A global extrapolation suggested that the discrepancies in CO2 fluxes between methods were consistent with a global underestimation of Re, and overestimation of NEP, by the EC method. Accounting for these discrepancies would substantially improve our estimates of the terrestrial carbon budget.
Tervo, Outi M; Christoffersen, Mads F; Simon, Malene; Miller, Lee A; Jensen, Frants H; Parks, Susan E; Madsen, Peter T
2012-01-01
The low-frequency, powerful vocalizations of blue and fin whales may potentially be detected by conspecifics across entire ocean basins. In contrast, humpback and bowhead whales produce equally powerful, but more complex broadband vocalizations composed of higher frequencies that suffer from higher attenuation. Here we evaluate the active space of high frequency song notes of bowhead whales (Balaena mysticetus) in Western Greenland using measurements of song source levels and ambient noise. Four independent, GPS-synchronized hydrophones were deployed through holes in the ice to localize vocalizing bowhead whales, estimate source levels and measure ambient noise. The song had a mean apparent source level of 185±2 dB rms re 1 µPa @ 1 m and a high mean centroid frequency of 444±48 Hz. Using measured ambient noise levels in the area and Arctic sound spreading models, the estimated active space of these song notes is between 40 and 130 km, an order of magnitude smaller than the estimated active space of low frequency blue and fin whale songs produced at similar source levels and for similar noise conditions. We propose that bowhead whales spatially compensate for their smaller communication range through mating aggregations that co-evolved with broadband song to form a complex and dynamic acoustically mediated sexual display.
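The active-space logic here is a passive sonar equation: the communication range is where the received song level falls to the masking noise level. A back-of-envelope sketch under a simple intermediate spreading law follows; the study used measured noise and Arctic-specific propagation models, so the absorption coefficient and noise level below are illustrative assumptions.

```python
# Back-of-envelope active-space estimate: find the range r where
# SL - 15*log10(r) - alpha*r drops to the band noise level NL.
import numpy as np

SL = 185.0       # source level, dB re 1 uPa @ 1 m (from the study)
NL = 105.0       # band noise level, dB (illustrative)
alpha = 2e-5     # absorption, dB/m (illustrative for ~450 Hz)

r = np.logspace(0, 6, 100_000)                    # 1 m to 1000 km
received = SL - 15 * np.log10(r) - alpha * r      # intermediate spreading law
active = r[received >= NL].max()
print(f"estimated active space: {active / 1000:.0f} km")
```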
Tervo, Outi M.; Christoffersen, Mads F.; Simon, Malene; Miller, Lee A.; Jensen, Frants H.; Parks, Susan E.; Madsen, Peter T.
2012-01-01
The low-frequency, powerful vocalizations of blue and fin whales may potentially be detected by conspecifics across entire ocean basins. In contrast, humpback and bowhead whales produce equally powerful, but more complex broadband vocalizations composed of higher frequencies that suffer from higher attenuation. Here we evaluate the active space of high frequency song notes of bowhead whales (Balaena mysticetus) in Western Greenland using measurements of song source levels and ambient noise. Four independent, GPS-synchronized hydrophones were deployed through holes in the ice to localize vocalizing bowhead whales, estimate source levels and measure ambient noise. The song had a mean apparent source level of 185±2 dB rms re 1 µPa @ 1 m and a high mean centroid frequency of 444±48 Hz. Using measured ambient noise levels in the area and Arctic sound spreading models, the estimated active space of these song notes is between 40 and 130 km, an order of magnitude smaller than the estimated active space of low frequency blue and fin whale songs produced at similar source levels and for similar noise conditions. We propose that bowhead whales spatially compensate for their smaller communication range through mating aggregations that co-evolved with broadband song to form a complex and dynamic acoustically mediated sexual display. PMID:23300591
Ghrelin and cholecystokinin in term and preterm human breast milk.
Kierson, Jennifer A; Dimatteo, Darlise M; Locke, Robert G; Mackley, Amy B; Spear, Michael L
2006-08-01
To determine whether ghrelin and cholecystokinin (CCK) are present in significant quantities in term and preterm human breast milk, and to identify their source. Samples were collected from 10 mothers who delivered term infants and 10 mothers who delivered preterm infants. Fat content was estimated. Ghrelin and CCK levels were measured in whole and skim breast milk samples using radioimmunoassays (RIA). Reverse transcriptase-polymerase chain reaction (RT-PCR) was performed using RNA from human mammary epithelial cells (hMECs) and mammary gland with primers specific to ghrelin. The median ghrelin level in whole breast milk was 2125 pg/ml, which is significantly higher than normal plasma levels. There was a direct correlation between whole milk ghrelin levels and estimated milk fat content (r=0.84, p<0.001). Both the mammary gland and hMECs produced ghrelin. While CCK was detected in some samples, levels were insignificant. Infant gestational age, birthweight, maternal age, and maternal pre-pregnancy body mass index did not significantly affect the results. Ghrelin, but not CCK, is present in breast milk. Since the mammary gland produces ghrelin message, and ghrelin levels in breast milk are higher than those found in plasma, we conclude that ghrelin is produced and secreted by the breast.
The impact of high-end climate change on agricultural welfare
Stevanović, Miodrag; Popp, Alexander; Lotze-Campen, Hermann; Dietrich, Jan Philipp; Müller, Christoph; Bonsch, Markus; Schmitz, Christoph; Bodirsky, Benjamin Leon; Humpenöder, Florian; Weindl, Isabelle
2016-01-01
Climate change threatens agricultural productivity worldwide, resulting in higher food prices. Associated economic gains and losses differ not only by region but also between producers and consumers and are affected by market dynamics. On the basis of an impact modeling chain, starting with 19 different climate projections that drive plant biophysical process simulations and ending with agro-economic decisions, this analysis focuses on distributional effects of high-end climate change impacts across geographic regions and across economic agents. By estimating the changes in surpluses of consumers and producers, we find that climate change can have detrimental impacts on global agricultural welfare, especially after 2050, because losses in consumer surplus generally outweigh gains in producer surplus. Damage in agriculture may reach an annual loss of 0.3% of future total gross domestic product at the end of the century globally, assuming further opening of trade in agricultural products, which typically leads to interregional production shifts to higher latitudes. Those estimated global losses could increase substantially if international trade is more restricted. If beneficial effects of atmospheric carbon dioxide fertilization can be realized in agricultural production, much of the damage could be avoided. Although trade policy reforms toward further liberalization help alleviate climate change impacts, additional compensation mechanisms for associated environmental and development concerns have to be considered. PMID:27574700
International Student Guide to U.S. Community Colleges, 2007-2008
ERIC Educational Resources Information Center
American Association of Community Colleges (NJ1), 2007
2007-01-01
This guide was produced for prospective students considering studying in the United States. The guide is organized to help students through all stages of the process, including learning about the U.S. higher education system, finding the right college, benefits of attending community college, obtaining a student visa, estimating expenses, living…
Crangle, R.D.
2013-01-01
The United States is the world’s fifth ranked producer and consumer of gypsum. Production of crude gypsum in the United States during 2012 was estimated to be 9.9 Mt (10.9 million st), an increase of 11 percent compared with 2011 production. The average price of mined crude gypsum was $7/t ($6.35/st). Synthetic gypsum production in 2012, most of which is generated as a flue-gas desulphurization product from coal-fired electric powerplants, was estimated to be 11.8 Mt (13 million st) and priced at approximately $1.50/t ($1.36/st). Forty-seven companies produced gypsum in the United States at 54 mines and plants in 34 states. U.S. gypsum exports totaled 408 kt (450,000 st). Imports were much higher at 3.2 Mt (3.5 million st).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Newsom, R. K.; Sivaraman, C.; Shippert, T. R.
Accurate height-resolved measurements of higher-order statistical moments of vertical velocity fluctuations are crucial for improved understanding of turbulent mixing and diffusion, convective initiation, and cloud life cycles. The Atmospheric Radiation Measurement (ARM) Climate Research Facility operates coherent Doppler lidar systems at several sites around the globe. These instruments provide measurements of clear-air vertical velocity profiles in the lower troposphere with a nominal temporal resolution of 1 sec and height resolution of 30 m. The purpose of the Doppler lidar vertical velocity statistics (DLWSTATS) value-added product (VAP) is to produce height- and time-resolved estimates of vertical velocity variance, skewness, and kurtosis from these raw measurements. The VAP also produces estimates of cloud properties, including cloud-base height (CBH), cloud frequency, cloud-base vertical velocity, and cloud-base updraft fraction.
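The statistical core of such a product is compact: variance, skewness, and kurtosis profiles computed over time at each range gate. A minimal sketch follows; the 1-s/30-m grid comes from the text, while the array shapes and averaging window are illustrative assumptions.

```python
import numpy as np

def vertical_velocity_stats(w, axis=0):
    """Variance, skewness, and kurtosis profiles from a (time, height)
    array of vertical velocity; statistics are taken over time."""
    w = np.asarray(w, dtype=float)
    anom = w - w.mean(axis=axis, keepdims=True)   # perturbation w'
    var = (anom ** 2).mean(axis=axis)
    skew = (anom ** 3).mean(axis=axis) / var ** 1.5
    kurt = (anom ** 4).mean(axis=axis) / var ** 2
    return var, skew, kurt

# e.g. one hour of 1-s samples over 100 range gates (30 m each)
w = np.random.default_rng(0).standard_normal((3600, 100))
var, skew, kurt = vertical_velocity_stats(w)
```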
Compaction of forest soil by logging machinery favours occurrence of prokaryotes.
Schnurr-Pütz, Silvia; Bååth, Erland; Guggenberger, Georg; Drake, Harold L; Küsel, Kirsten
2006-12-01
Soil compaction caused by passage of logging machinery reduces the soil air capacity. Changed abiotic factors might induce a change in the soil microbial community and favour organisms capable of tolerating anoxic conditions. The goals of this study were to resolve differences between soil microbial communities obtained from wheel-tracks (i.e. compacted) and their adjacent undisturbed sites, and to evaluate differences in potential anaerobic microbial activities of these contrasting soils. Soil samples obtained from compacted soil had a greater bulk density and a higher pH than uncompacted soil. Analyses of phospholipid fatty acids demonstrated that the eukaryotic/prokaryotic ratio in compacted soils was lower than that of uncompacted soils, suggesting that fungi were not favoured by the in situ conditions produced by compaction. Indeed, most-probable-number (MPN) estimates of nitrous oxide-producing denitrifiers, acetate- and lactate-utilizing iron and sulfate reducers, and methanogens were higher in compacted than in uncompacted soils obtained from one site that had large differences in bulk density. Compacted soils from this site yielded higher iron-reducing, sulfate-reducing and methanogenic potentials than did uncompacted soils. MPN estimates of H2-utilizing acetogens in compacted and uncompacted soils were similar. These results indicate that compaction of forest soil alters the structure and function of the soil microbial community and favours occurrence of prokaryotes.
Improving the Measurement of Poverty
Hutto, Nathan; Waldfogel, Jane; Kaushal, Neeraj; Garfinkel, Irwin
2013-01-01
This study estimates 2007 national poverty rates using an approach largely conceptualized by a 1995 National Academy of Sciences panel and similar to the supplemental poverty measure that will soon be produced by the U.S. Census Bureau. The study uses poverty thresholds based on expenditures for shelter, food, clothing, and utilities, as well as a measure of family income that includes earnings, cash transfers, near-cash benefits, tax credits, and tax payments. The measure also accounts for child care, work, and out-of-pocket medical expenses; variation in regional cost of living; and mortgage-free homeownership. Under this method, the rate of poverty is estimated to be higher than the rate calculated in the traditional manner, rising from 12.4 percent in the official measure to 16 percent in the new measure; the rate of child poverty is more than 3 percentage points higher, and elderly poverty is nearly 7 points higher. PMID:26316658
Proper orthogonal decomposition-based spectral higher-order stochastic estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baars, Woutijn J., E-mail: wbaars@unimelb.edu.au; Tinney, Charles E.
A unique routine, capable of identifying both linear and higher-order coherence in multiple-input/output systems, is presented. The technique combines two well-established methods: Proper Orthogonal Decomposition (POD) and Higher-Order Spectra Analysis. The latter of these is based on known methods for characterizing nonlinear systems by way of Volterra series. In that, both linear and higher-order kernels are formed to quantify the spectral (nonlinear) transfer of energy between the system's input and output. This reduces essentially to spectral Linear Stochastic Estimation when only first-order terms are considered, and is therefore presented in the context of stochastic estimation as spectral Higher-Order Stochastic Estimation (HOSE). The trade-off to seeking higher-order transfer kernels is that the increased complexity restricts the analysis to single-input/output systems. Low-dimensional (POD-based) analysis techniques are inserted to alleviate this void as POD coefficients represent the dynamics of the spatial structures (modes) of a multi-degree-of-freedom system. The mathematical framework behind this POD-based HOSE method is first described. The method is then tested in the context of jet aeroacoustics by modeling acoustically efficient large-scale instabilities as combinations of wave packets. The growth, saturation, and decay of these spatially convecting wave packets are shown to couple both linearly and nonlinearly in the near-field to produce waveforms that propagate acoustically to the far-field for different frequency combinations.
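For orientation, the first-order building block of this approach, the linear spectral stochastic estimation kernel H1(f) = Sxy(f)/Sxx(f), can be sketched in a few lines. The quadratic Volterra kernel and the POD projection, which are the paper's actual contribution, are omitted here; the scipy spectral-density calls are an assumed implementation choice.

```python
import numpy as np
from scipy.signal import csd, welch

def linear_spectral_kernel(x, y, fs=1.0, nperseg=256):
    """First-order (linear) spectral stochastic estimation kernel
    H1(f) = Sxy(f) / Sxx(f), mapping input x to an estimate of y."""
    f, s_xy = csd(x, y, fs=fs, nperseg=nperseg)
    _, s_xx = welch(x, fs=fs, nperseg=nperseg)
    return f, s_xy / s_xx

# toy single-input/single-output example with a lagged, scaled response
rng = np.random.default_rng(0)
x = rng.standard_normal(8192)
y = 0.5 * np.roll(x, 3) + 0.1 * rng.standard_normal(8192)
f, h1 = linear_spectral_kernel(x, y)  # |h1| ~ 0.5 with a phase ramp
```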
Economic and environmental benefits of higher-octane gasoline.
Speth, Raymond L; Chow, Eric W; Malina, Robert; Barrett, Steven R H; Heywood, John B; Green, William H
2014-06-17
We quantify the economic and environmental benefits of designing U.S. light-duty vehicles (LDVs) to attain higher fuel economy by utilizing higher octane (98 RON) gasoline. We use engine simulations, a review of experimental data, and drive cycle simulations to estimate the reduction in fuel consumption associated with using higher-RON gasoline in individual vehicles. Lifecycle CO2 emissions and economic impacts for the U.S. LDV fleet are estimated based on a linear-programming refinery model, a historically calibrated fleet model, and a well-to-wheels emissions analysis. We find that greater use of high-RON gasoline in appropriately tuned vehicles could reduce annual gasoline consumption in the U.S. by 3.0-4.4%. Accounting for the increase in refinery emissions from production of additional high-RON gasoline, net CO2 emissions are reduced by 19-35 Mt/y in 2040 (2.5-4.7% of total direct LDV CO2 emissions). For the strategies studied, the annual direct economic benefit is estimated to be $0.4-6.4 billion in 2040, and the annual net societal benefit including the social cost of carbon is estimated to be $1.7-8.8 billion in 2040. Adoption of a RON standard in the U.S. in place of the current antiknock index (AKI) may enable refineries to produce larger quantities of high-RON gasoline.
The effects of sample size on population genomic analyses--implications for the tests of neutrality.
Subramanian, Sankar
2016-02-20
One of the fundamental measures of molecular genetic variation is Watterson's estimator (θ), which is based on the number of segregating sites. The estimation of θ is unbiased only under neutrality and constant population growth. It is well known that the estimation of θ is biased when these assumptions are violated. However, the effect of sample size in modulating this bias has not been well appreciated. We examined this issue in detail based on large-scale exome data and robust simulations. Our investigation revealed that sample size appreciably influences θ estimation, and this effect was much higher for constrained genomic regions than for neutral regions. For instance, θ estimated for synonymous sites using 512 human exomes was 1.9 times higher than that obtained using 16 exomes. However, this difference was 2.5 times for the nonsynonymous sites of the same data. We observed a positive correlation between the rate of increase in θ estimates (with respect to the sample size) and the magnitude of selection pressure. For example, θ estimated for the nonsynonymous sites of highly constrained genes (dN/dS < 0.1) using 512 exomes was 3.6 times higher than that estimated using 16 exomes. In contrast, this difference was only 2 times for the less constrained genes (dN/dS > 0.9). The results of this study reveal the extent of underestimation owing to small sample sizes and thus emphasize the importance of sample size in estimating a number of population genomic parameters. Our results have serious implications for neutrality tests such as Tajima's D, Fu and Li's D, and those based on the McDonald-Kreitman test: the Neutrality Index and the fraction of adaptive substitutions. For instance, use of 16 exomes produced a 2.4 times higher proportion of adaptive substitutions compared to that obtained using 512 exomes (24% vs. 10%).
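For reference, Watterson's estimator divides the segregating-site count S by the harmonic number a_n = Σ_{i=1}^{n-1} 1/i, which makes the dependence on sample size explicit; a minimal sketch with illustrative numbers, not the paper's data:

```python
def watterson_theta(num_segregating_sites, n_sequences):
    """Watterson's theta: S divided by the (n-1)th harmonic number a_n.
    Unbiased only under neutrality and constant population size."""
    a_n = sum(1.0 / i for i in range(1, n_sequences))
    return num_segregating_sites / a_n

# With S held fixed, a larger sample gives a smaller theta; in real data
# under selection, S grows slowly with n, so small samples underestimate
# theta most where constraint is strongest (illustrative numbers only).
print(watterson_theta(100, 16), watterson_theta(100, 512))
```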
Chen, Yi; Ho, Kin Fai; Ho, Steven Sai Hang; Ho, Wing Kei; Lee, Shun Cheng; Yu, Jian Zhen; Sit, Elber Hoi Leung
2007-12-01
Commercial cooking emissions are important air pollution sources in a heavily urbanized city. Exhaust samples were collected in six representative commercial kitchens including Chinese restaurants, Western restaurants, and Western fast-food restaurants in Hong Kong during peak lunch hours. Both gaseous and particulate emissions were evaluated. Eight gaseous and twenty-two particulate polycyclic aromatic hydrocarbons (PAHs) were quantified in this study. In the gaseous phase, naphthalene (67-89%) was the most abundant PAH in all of the exhaust samples. The contribution of acenaphthylene in the gaseous phase was significantly higher in emissions from the Chinese restaurants, whereas fluorene was higher in emissions from the Western cooking style restaurants (i.e., Western restaurants and Western fast-food restaurants). Pyrene was the most abundant particulate PAH in the Chinese restaurants (14-49%), while its contribution was much lower in the Western cooking style restaurants (10-22%). Controlled cooking conditions were monitored in a staff canteen to compare the emissions from several different local cooking styles, including deep frying, steaming, and mixed cooking styles (combination of steaming and frying). Deep frying produced the highest amount of total gaseous PAHs, 6 times higher than steaming. However, steaming produced the highest particulate emissions. The estimated annual gaseous PAH emissions for the Chinese restaurants, Western restaurants, and Western fast-food restaurants were 255, 173, and 20.2 t y-1, whereas 252, 1.9, and 0.4 t y-1 were estimated for particulate-phase PAH emissions. The study provides useful information and estimates for PAH emissions from commercial cooking exhaust in Hong Kong.
Regional Distribution of Forest Height and Biomass from Multisensor Data Fusion
NASA Technical Reports Server (NTRS)
Yu, Yifan; Saatchi, Sassan; Heath, Linda S.; LaPoint, Elizabeth; Myneni, Ranga; Knyazikhin, Yuri
2010-01-01
Elevation data acquired from radar interferometry at C-band from SRTM are used in data fusion techniques to estimate regional scale forest height and aboveground live biomass (AGLB) over the state of Maine. Two fusion techniques have been developed to perform post-processing and parameter estimations from four data sets: 1 arc sec National Elevation Data (NED), SRTM derived elevation (30 m), Landsat Enhanced Thematic Mapper (ETM) bands (30 m), derived vegetation index (VI), and NLCD2001 land cover map. The first fusion algorithm corrects for missing or erroneous NED data using an iterative interpolation approach and produces distribution of scattering phase centers from SRTM-NED in three dominant forest types of evergreen conifers, deciduous, and mixed stands. The second fusion technique integrates the USDA Forest Service, Forest Inventory and Analysis (FIA) ground-based plot data to develop an algorithm to transform the scattering phase centers into mean forest height and aboveground biomass. Height estimates over evergreen (R2 = 0.86, P < 0.001; RMSE = 1.1 m) and mixed forests (R2 = 0.93, P < 0.001, RMSE = 0.8 m) produced the best results. Estimates over deciduous forests were less accurate because of the winter acquisition of SRTM data and loss of scattering phase center from tree-surface interaction. We used two methods to estimate AGLB; algorithms based on direct estimation from the scattering phase center produced higher precision (R2 = 0.79, RMSE = 25 Mg/ha) than those estimated from forest height (R2 = 0.25, RMSE = 66 Mg/ha). We discuss sources of uncertainty and implications of the results in the context of mapping regional and continental scale forest biomass distribution.
Bhattarai, Bikash; Fosgate, Geoffrey T; Osterstock, Jason B; Fossler, Charles P; Park, Seong C; Roussel, Allen J
2013-11-01
This study compares the perceptions of producers and veterinarians on the economic impacts of Mycobacterium avium subspecies paratuberculosis (MAP) infection in cow-calf herds. Questionnaires were mailed to beef producers through the Designated Johne's Coordinators and to veterinarians belonging to a nationwide professional organization. Important components of losses associated with MAP infected cows were used to estimate total loss per infected cow-year using an iterative approach based on collected survey data. Veterinarians were more likely to perceive a lower calving percentage in MAP infected cows compared to producers (P=0.02). Income lost due to the presence of Johne's disease (JD) in an infected cattle herd was perceived to be higher by veterinarians (P<0.01). Compared to veterinarians without JD certification, seedstock producers were more likely to perceive genetic losses due to culling cows positive for MAP (P<0.01). There were mixed opinions regarding the magnitude of lowered weaning weight in calves from infected cows and perceived differences in risk of other diseases or conditions in infected cows. An annual loss of $235 (95% CR: $89-$457) for each infected animal was estimated based on information from the producer survey. The analogous estimate using information inputs from veterinarians was $250 ($82-$486). Mean annual loss due to JD in a 100 cow herd with a 7% true prevalence was $1644 ($625-$3250) based on information provided by producers. Similarly, mean annual loss based on information collected from veterinarians was $1747 ($575-$3375). Copyright © 2013 Elsevier B.V. All rights reserved.
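The herd-level figure is close to simple arithmetic: 7% prevalence in a 100-cow herd gives 7 infected cow-years, and 7 × $235 ≈ $1,645, essentially the reported $1,644. A sketch of that calculation follows, with a crude Monte Carlo over the reported 95% range; the triangular distribution is an assumption, since the paper's iterative approach is not described in enough detail to reproduce here.

```python
import random

def annual_herd_loss(herd_size, true_prevalence, loss_per_infected_cow):
    """Expected annual loss: number of infected cows times per-cow loss."""
    return herd_size * true_prevalence * loss_per_infected_cow

print(annual_herd_loss(100, 0.07, 235))  # ~1645, vs. the reported $1644

# Crude uncertainty propagation over the reported range ($89-$457),
# assuming a triangular distribution centred on the $235 point estimate.
draws = [annual_herd_loss(100, 0.07, random.triangular(89, 457, 235))
         for _ in range(10000)]
print(sum(draws) / len(draws))
```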
SOFTCOST - DEEP SPACE NETWORK SOFTWARE COST MODEL
NASA Technical Reports Server (NTRS)
Tausworthe, R. C.
1994-01-01
The early-on estimation of required resources and a schedule for the development and maintenance of software is usually the least precise aspect of the software life cycle. However, it is desirable to make some sort of an orderly and rational attempt at estimation in order to plan and organize an implementation effort. The Software Cost Estimation Model program, SOFTCOST, was developed to provide a consistent automated resource and schedule model which is more formalized than the often used guesswork model based on experience, intuition, and luck. SOFTCOST was developed after the evaluation of a number of existing cost estimation programs indicated that there was a need for a cost estimation program with a wide range of application and adaptability to diverse kinds of software. SOFTCOST combines several software cost models found in the open literature into one comprehensive set of algorithms that compensate for nearly fifty implementation factors relative to size of the task, inherited baseline, organizational and system environment, and difficulty of the task. SOFTCOST produces mean and variance estimates of software size, implementation productivity, recommended staff level, probable duration, amount of computer resources required, and amount and cost of software documentation. Since the confidence level for a project using mean estimates is small, the user is given the opportunity to enter risk-biased values for effort, duration, and staffing, to achieve higher confidence levels. SOFTCOST then produces a PERT/CPM file with subtask efforts, durations, and precedences defined so as to produce the Work Breakdown Structure (WBS) and schedule having the asked-for overall effort and duration. The SOFTCOST program operates in an interactive environment prompting the user for all of the required input. The program builds the supporting PERT data base in a file for later report generation or revision. The PERT schedule and the WBS schedule may be printed and stored in a file for later use. The SOFTCOST program is written in Microsoft BASIC for interactive execution and has been implemented on an IBM PC-XT/AT operating MS-DOS 2.1 or higher with 256K bytes of memory. SOFTCOST was originally developed for the Zilog Z80 system running under CP/M in 1981. It was converted to run on the IBM PC XT/AT in 1986. SOFTCOST is a copyrighted work with all copyright vested in NASA.
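The mean-and-variance outputs and the PERT/CPM file generation rest on standard PERT arithmetic. Below is a sketch of the classic three-point estimate (the usual beta-distribution approximation), shown only to illustrate the style of calculation; it is not SOFTCOST's actual fifty-factor model.

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    """Classic PERT three-point estimate: mean and variance of a task
    duration under the usual beta-distribution approximation."""
    mean = (optimistic + 4 * most_likely + pessimistic) / 6.0
    variance = ((pessimistic - optimistic) / 6.0) ** 2
    return mean, variance

# e.g. a subtask believed to take 4-10 weeks, most likely 6
mean, var = pert_estimate(4, 6, 10)
print(f"mean={mean:.2f} weeks, sd={var ** 0.5:.2f} weeks")
```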
Poland, Jesse A; Nelson, Rebecca J
2011-02-01
The agronomic importance of developing durably resistant cultivars has led to substantial research in the field of quantitative disease resistance (QDR) and, in particular, mapping quantitative trait loci (QTL) for disease resistance. The assessment of QDR is typically conducted by visual estimation of disease severity, which raises concern over the accuracy and precision of visual estimates. Although previous studies have examined the factors affecting the accuracy and precision of visual disease assessment in relation to the true value of disease severity, the impact of this variability on the identification of disease resistance QTL has not been assessed. In this study, the effects of rater variability and rating scales on mapping QTL for northern leaf blight resistance in maize were evaluated in a recombinant inbred line population grown under field conditions. The population of 191 lines was evaluated by 22 different raters using a direct percentage estimate, a 0-to-9 ordinal rating scale, or both. It was found that more experienced raters had higher precision and that using a direct percentage estimation of diseased leaf area produced higher precision than using an ordinal scale. QTL mapping was then conducted using the disease estimates from each rater using stepwise general linear model selection (GLM) and inclusive composite interval mapping (ICIM). For GLM, the same QTL were largely found across raters, though some QTL were only identified by a subset of raters. The magnitudes of estimated allele effects at identified QTL varied drastically, sometimes by as much as threefold. ICIM produced highly consistent results across raters and for the different rating scales in identifying the location of QTL. We conclude that, despite variability between raters, the identification of QTL was largely consistent among raters, particularly when using ICIM. However, care should be taken in estimating QTL allele effects, because this was highly variable and rater dependent.
NASA Astrophysics Data System (ADS)
Uprety, Bibhisha
Within the aerospace industry, the need to detect and locate impact events, even when no visible damage is present, is important from both the maintenance and design perspectives. This research focused on the use of Acoustic Emission (AE) based sensing technologies to identify impact events and characterize damage modes in composite structures for structural health monitoring. Six commercially available piezoelectric AE sensors were evaluated for use with impact location estimation algorithms under development at the University of Utah. Both active and passive testing were performed to estimate the time of arrival and plate wave mode velocities for impact location estimation. Four sensors were recommended for further comparative investigations. Furthermore, instrumented low-velocity impact experiments were conducted on quasi-isotropic carbon/epoxy composite laminates to initiate specific types of damage: matrix cracking, delamination and fiber breakage. AE signal responses were collected during impacting, and the test panels were ultrasonically C-scanned after impact to identify the internal damage corresponding to the AE signals. Matrix cracking and delamination damage, produced using more compliant test panels and a larger-diameter impactor, were characterized by lower-frequency signals, while fiber breakage produced higher-frequency responses. The results obtained suggest that selected characteristics of sensor response signals can be used both to determine whether damage is produced during impacting and to characterize the types of damage produced in an impacted composite structure.
Hill, Benjamin Mako; Shaw, Aaron
2013-01-01
Opt-in surveys are the most widespread method used to study participation in online communities, but produce biased results in the absence of adjustments for non-response. A 2008 survey conducted by the Wikimedia Foundation and United Nations University at Maastricht is the source of a frequently cited statistic that less than 13% of Wikipedia contributors are female. However, the same study suggested that only 39.9% of Wikipedia readers in the US were female - a finding contradicted by a representative survey of American adults by the Pew Research Center conducted less than two months later. Combining these two datasets through an application and extension of a propensity score estimation technique used to model survey non-response bias, we construct revised estimates, contingent on explicit assumptions, for several of the Wikimedia Foundation and United Nations University at Maastricht claims about Wikipedia editors. We estimate that the proportion of female US adult editors was 27.5% higher than the original study reported (22.7%, versus 17.8%), and that the total proportion of female editors was 26.8% higher (16.1%, versus 12.7%).
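The adjustment is, at heart, inverse-propensity reweighting: strata that the opt-in survey under-samples relative to the representative survey get proportionally larger weights. A schematic sketch follows, with hypothetical strata and an assumed reference share; it is not the study's actual model.

```python
def inverse_propensity_weights(opt_in_counts, reference_shares):
    """Weight each stratum by reference share / observed share, so the
    reweighted opt-in sample matches the representative survey."""
    total = sum(opt_in_counts.values())
    observed = {g: n / total for g, n in opt_in_counts.items()}
    return {g: reference_shares[g] / observed[g] for g in opt_in_counts}

# hypothetical strata: the opt-in survey under-samples female readers
weights = inverse_propensity_weights(
    {"female": 399, "male": 601},     # opt-in reader counts (39.9% female)
    {"female": 0.52, "male": 0.48},   # representative shares (assumed)
)
print(weights)  # female respondents get weight > 1
```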
Kim, Hyun-Kyung; Zhang, Yanxin
2017-04-01
Large spinal compressive force combined with axial torsional shear force during asymmetric lifting tasks is highly associated with lower back injury (LBI). The aim of this study was to estimate lumbar spinal loading and muscle forces during symmetric lifting (SL) and asymmetric lifting (AL) tasks using a whole-body musculoskeletal modelling approach. Thirteen healthy males lifted loads of 7 and 12 kg under two lifting conditions (SL and AL). Kinematic data and ground reaction force data were collected and then processed by a whole-body musculoskeletal model. The results show AL produced a significantly higher peak lateral shear force as well as greater peak force of psoas major, quadratus lumborum, multifidus, iliocostalis lumborum pars lumborum, longissimus thoracis pars lumborum and external oblique than SL. The greater lateral shear forces combined with higher muscle force and asymmetrical muscle contractions may constitute the biomechanical mechanism responsible for the increased risk of LBI during AL. Practitioner Summary: Estimating lumbar spinal loading and muscle forces during free-dynamic asymmetric lifting tasks with a whole-body musculoskeletal modelling in OpenSim is the core value of this research. The results show that certain muscle groups are fundamentally responsible for asymmetric movement, thereby producing high lumbar spinal loading and muscle forces, which may increase risks of LBI during asymmetric lifting tasks.
Impact of data assimilation on Eulerian versus Lagrangian estimates of upper ocean transport
NASA Astrophysics Data System (ADS)
Sperrevik, Ann Kristin; Röhrs, Johannes; Christensen, Kai Håkon
2017-07-01
Using four-dimensional variational analysis, we produce an estimate of the state of a coastal region in Northern Norway during the late winter and spring in 1984. We use satellite sea surface temperature and in situ observations from a series of intensive field campaigns, and obtain a more realistic distribution of water masses both in the horizontal and the vertical than a pure downscaling approach can achieve. Although the distribution of Eulerian surface current speeds are similar, we find that they are more variable and less dependent on model bathymetry in our reanalysis compared to a hindcast produced using the same modeling system. Lagrangian drift currents on the other hand are significantly changed, with overall higher kinetic energy levels in the reanalysis than in the hindcast, particularly in the superinertial frequency band.
Information flow in an atmospheric model and data assimilation
NASA Astrophysics Data System (ADS)
Yoon, Young-noh
2011-12-01
Weather forecasting consists of two processes, model integration and analysis (data assimilation). During the model integration, the state estimate produced by the analysis evolves to the next cycle time according to the atmospheric model to become the background estimate. The analysis then produces a new state estimate by combining the background state estimate with new observations, and the cycle repeats. In an ensemble Kalman filter, the probability distribution of the state estimate is represented by an ensemble of sample states, and the covariance matrix is calculated using the ensemble of sample states. We perform numerical experiments on toy atmospheric models introduced by Lorenz in 2005 to study the information flow in an atmospheric model in conjunction with ensemble Kalman filtering for data assimilation. This dissertation consists of two parts. The first part of this dissertation is about the propagation of information and the use of localization in ensemble Kalman filtering. If we can perform data assimilation locally by considering the observations and the state variables only near each grid point, then we can reduce the number of ensemble members necessary to cover the probability distribution of the state estimate, reducing the computational cost for the data assimilation and the model integration. Several localized versions of the ensemble Kalman filter have been proposed. Although tests applying such schemes have proven them to be extremely promising, a full basic understanding of the rationale and limitations of localization is currently lacking. We address these issues and elucidate the role played by chaotic wave dynamics in the propagation of information and the resulting impact on forecasts. The second part of this dissertation is about ensemble regional data assimilation using joint states. Assuming that we have a global model and a regional model of higher accuracy defined in a subregion inside the global region, we propose a data assimilation scheme that produces the analyses for the global and the regional model simultaneously, considering forecast information from both models. We show that our new data assimilation scheme produces better results both in the subregion and the global region than the data assimilation scheme that produces the analyses for the global and the regional model separately.
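The analysis step described above is compact in its simplest stochastic form: the ensemble supplies the background covariance, and each member is nudged toward perturbed observations. A minimal sketch of a generic perturbed-observation EnKF update follows; this is a textbook form, not the dissertation's specific localized or joint-state scheme.

```python
import numpy as np

def enkf_analysis(ensemble, obs, obs_operator, obs_cov, rng):
    """Perturbed-observation EnKF update.
    ensemble: (n_state, n_members) background sample."""
    n_state, n_members = ensemble.shape
    x_mean = ensemble.mean(axis=1, keepdims=True)
    x_pert = ensemble - x_mean
    b_cov = x_pert @ x_pert.T / (n_members - 1)     # sample covariance
    h = obs_operator
    gain = b_cov @ h.T @ np.linalg.inv(h @ b_cov @ h.T + obs_cov)
    analysis = ensemble.copy()
    for k in range(n_members):
        y_k = obs + rng.multivariate_normal(np.zeros(len(obs)), obs_cov)
        analysis[:, k] += gain @ (y_k - h @ ensemble[:, k])
    return analysis

rng = np.random.default_rng(1)
ens = rng.standard_normal((3, 20))            # 3 state vars, 20 members
H = np.array([[1.0, 0.0, 0.0]])               # observe the first variable
post = enkf_analysis(ens, np.array([0.5]), H, np.array([[0.1]]), rng)
```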
Pollutant loading from low-density residential neighborhoods in California.
Bale, Andrew E; Greco, Steven E; Pitton, Bruno J L; Haver, Darren L; Oki, Lorence R
2017-08-01
This paper presents a comparison of pollutant load estimations for runoff from two geographically distinct residential suburban neighborhoods in northern and southern California. The two neighborhoods represent a single urban land use type: low-density residential in small catchments (<0.3 km2) under differing regional climates and irrigation practices. Pollutant loads of pesticides, nutrients, and drinking water constituents of concern are estimated for both storm and non-storm runoff. From continuous flow monitoring, it was found that a daily cycle of persistent runoff that peaks mid-morning occurs at both sites. These load estimations indicate that many residential neighborhoods in California produce significant non-storm pollutant loads year-round. Results suggest that non-storm flow accounted for 47-69% of total annual runoff and significantly contributed to annual loading rates of most nutrients and pesticides at both sites. At the Southern California site, annual non-storm loads are 1.2-10 times higher than storm loads of all conventional constituents and nutrients with one exception (total suspended solids). At the Northern California site, annual storm loads range from 51 to 76% of total loads for all conventional constituents and nutrients with one exception (total dissolved solids). Non-storm yields of pesticides at the Southern California site range from 1.3-65 times higher than those at the Northern California site. The disparity in estimated pollutant loads between the two sites indicates large potential variation from site-to-site within the state and suggests neighborhoods in drier and milder climates may produce significantly larger non-storm loads due to persistent dry season runoff and year-round pest control.
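Load estimates of this kind reduce to integrating flow times concentration over time; a minimal sketch with hypothetical values and units, not the study's monitoring data:

```python
def pollutant_load_kg(flows_L_per_s, concs_mg_per_L, dt_s):
    """Load as the sum of flow x concentration over each sampling
    interval, converted from milligrams to kilograms."""
    mg = sum(q * c * dt_s for q, c in zip(flows_L_per_s, concs_mg_per_L))
    return mg / 1e6

# e.g. hourly samples over one day (hypothetical values)
flows = [2.0] * 24     # L/s of persistent dry-season runoff
concs = [0.05] * 24    # mg/L of some pesticide
print(pollutant_load_kg(flows, concs, 3600.0), "kg/day")
```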
Assessing the feasibility of using produced water for irrigation in Colorado.
Dolan, Flannery C; Cath, Tzahi Y; Hogue, Terri S
2018-06-01
The Colorado Water Plan estimates that as much as 0.8 million irrigated acres statewide may dry up as a result of water transfers from agricultural to municipal and industrial uses. To help mitigate this loss, new sources of water are being explored in Colorado. One such source may be produced water. Oil and gas production in 2016 alone produced over 300 million barrels of produced water. Currently, the most common method of disposal of produced water is deep well injection, which is costly and has been shown to cause induced seismicity. Treating this water to agricultural standards eliminates the need to dispose of this water and provides a new source of water. This research explores which counties in Colorado may be best suited to reusing produced water for agriculture based on a combined index of need, quality of produced water, and quantity of produced water. The volumetric impact of using produced water for agricultural needs is determined for the top six counties. Irrigation demand is obtained using evapotranspiration estimates from a range of methods, including remote sensing products and ground-based observations. The economic feasibility of treating produced water to irrigation standards is also determined using an integrated decision selection tool (iDST). We find that produced water can make a substantial volumetric impact on irrigation demand in some counties. Results from the iDST indicate that while costs of treating produced water are higher than the cost of injection into private disposal wells, the costs are much less than disposal into commercial wells. The results of this research may aid in the transition between viewing produced water as a waste product and using it as a tool to help secure water for the arid West. Copyright © 2018 Elsevier B.V. All rights reserved.
Wu, Hulin; Xue, Hongqi; Kumar, Arun
2012-06-01
Differential equations are extensively used for modeling dynamics of physical processes in many scientific fields such as engineering, physics, and biomedical sciences. Parameter estimation of differential equation models is a challenging problem because of high computational cost and high-dimensional parameter space. In this article, we propose a novel class of methods for estimating parameters in ordinary differential equation (ODE) models, which is motivated by HIV dynamics modeling. The new methods exploit the form of numerical discretization algorithms for an ODE solver to formulate estimating equations. First, a penalized-spline approach is employed to estimate the state variables, and the estimated state variables are then plugged into a discretization formula of an ODE solver to obtain the ODE parameter estimates via a regression approach. We consider three discretization methods of different orders: Euler's method, the trapezoidal rule, and the Runge-Kutta method. A higher-order numerical algorithm reduces numerical error in the approximation of the derivative, which produces a more accurate estimate, but its computational cost is higher. To balance the computational cost and estimation accuracy, we demonstrate, via simulation studies, that the trapezoidal discretization-based estimate is the best and is recommended for practical use. The asymptotic properties for the proposed numerical discretization-based estimators are established. Comparisons between the proposed methods and existing methods show a clear benefit of the proposed methods in regards to the trade-off between computational cost and estimation accuracy. We apply the proposed methods to an HIV study to further illustrate the usefulness of the proposed approaches. © 2012, The International Biometric Society.
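The recommended trapezoidal variant can be sketched compactly: increments of the smoothed state are regressed on trapezoid-averaged basis evaluations. The sketch below substitutes exact state values for the penalized-spline smoothing step and assumes an ODE that is linear in its parameters; it illustrates the estimating-equation idea rather than the authors' full procedure.

```python
import numpy as np

def trapezoidal_ode_fit(t, x_smooth, basis):
    """Estimate theta in x' = basis(x) @ theta by regressing trapezoidal
    increments of the (smoothed) state on averaged basis evaluations."""
    dt = np.diff(t)
    dx = np.diff(x_smooth)
    f = basis(x_smooth)                       # (n, p) design at each time
    design = 0.5 * (f[:-1] + f[1:]) * dt[:, None]
    theta, *_ = np.linalg.lstsq(design, dx, rcond=None)
    return theta

# toy example: exponential decay x' = -a x, true a = 0.7
t = np.linspace(0, 5, 200)
x = np.exp(-0.7 * t)                  # stand-in for spline-smoothed data
theta = trapezoidal_ode_fit(t, x, lambda x: x[:, None])
print(theta)                          # approx [-0.7]
```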
Guadagnin, S G; Rath, S; Reyes, F G R
2005-12-01
The nitrate content of leafy vegetables (watercress, lettuce and arugula) produced by different agricultural systems (conventional, organic and hydroponic) was determined. The daily nitrate intake from the consumption of these crop species by the average Brazilian consumer was also estimated. Sampling was carried out between June 2001 and February 2003 in Campinas, São Paulo State, Brazil. Nitrate was extracted from the samples using the procedure recommended by the AOAC. Flow injection analysis with spectrophotometric detection at 460 nm was used for nitrate determination through the ternary complex FeSCNNO+. For lettuce and arugula, the average nitrate content varied (p < 0.05) between the three agricultural systems, with the nitrate level in the crops produced by the organic system being lower than in the conventional system that, in turn, was lower than in the hydroponic system. For watercress, no difference (p < 0.05) was found between the organic and hydroponic samples, both having higher nitrate contents (p < 0.05) than conventionally cultivated samples. The nitrate content for each crop species varied among producers, between different parts of the plant and in relation to the season. The estimated daily nitrate intake, calculated from the consumption of the crops produced by the hydroponic system, represented 29% of the acceptable daily intake established for this ion.
ERIC Educational Resources Information Center
Hickson, Stephen; Reed, W. Robert; Sander, Nicholas
2012-01-01
This study investigates the degree to which grades based solely on constructed-response (CR) questions differ from grades based solely on multiple-choice (MC) questions. If CR questions are to justify their higher costs, they should produce different grade outcomes than MC questions. We use a data set composed of thousands of observations on…
Primer and platform effects on 16S rRNA tag sequencing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tremblay, Julien; Singh, Kanwar; Fern, Alison
Sequencing of 16S rRNA gene tags is a popular method for profiling and comparing microbial communities. The protocols and methods used, however, vary considerably with regard to amplification primers, sequencing primers, and sequencing technologies, as well as quality filtering and clustering. How results are affected by these choices, and whether data produced with different protocols can be meaningfully compared, is often unknown. Here we compare results obtained using three different amplification primer sets (targeting V4, V6–V8, and V7–V8) and two sequencing technologies (454 pyrosequencing and Illumina MiSeq) using DNA from a mock community containing a known number of species as well as complex environmental samples whose PCR-independent profiles were estimated using shotgun sequencing. We find that paired-end MiSeq reads produced higher-quality data and enabled the use of more aggressive quality-control parameters than 454, resulting in a higher retention rate of high-quality reads for downstream data analysis. While primer choice considerably influences quantitative abundance estimations, sequencing platform has relatively minor effects when matched primers are used. In conclusion, beta diversity metrics are surprisingly robust to both primer and sequencing platform biases.
Martínez, Fedra S; Franceschini, Celeste
2018-01-01
We assessed the damage produced by invertebrate herbivores per leaf lamina and per m2 in populations of floating-leaf macrophytes of Neotropical wetlands during the growth and decay periods, and assessed whether this damage should be taken into account when estimating the plant biomass of these macrophytes. The biomass removed per lamina and per m2 was higher during the growth period than in the decay period in Nymphoides indica and Hydrocleys nymphoides, while Nymphaea prolifera had low herbivory values in the growth period. During the decay period this plant is present only as vegetative propagules. According to the values of biomass removed per m2 of N. indica, underestimates of up to 17.69% would be produced if herbivory were not taken into account when evaluating these plant parameters for this macrophyte. Therefore, for the study of biomass and productivity in the study area, we suggest using corrected lamina biomass, after estimating the biomass removed by herbivores, for N. indica. The herbivory values for N. indica emphasize the importance of this macrophyte as a food resource for invertebrate herbivores in the trophic networks of Neotropical wetlands.
Xiaopeng, Qi; Liang, Wei; Barker, Laurie; Lekiachvili, Akaki; Xingyou, Zhang
Temperature changes are known to have significant impacts on human health. Accurate estimates of population-weighted average monthly air temperature for US counties are needed to evaluate temperature's association with health behaviours and disease, which are sampled or reported at the county level and measured on a monthly or 30-day basis. Most reported temperature estimates were calculated using ArcGIS; relatively few used SAS. We compared the performance of geostatistical models to estimate population-weighted average temperature in each month for counties in 48 states using ArcGIS v9.3 and SAS v9.2 on a CITGO platform. Monthly average temperature for Jan-Dec 2007 and elevation from 5435 weather stations were used to estimate the temperature at county population centroids. County estimates were produced with elevation as a covariate. Performance of models was assessed by comparing adjusted R2, mean squared error, root mean squared error, and processing time. Prediction accuracy for split validation was above 90% for 11 months in ArcGIS and all 12 months in SAS. Cokriging in SAS achieved higher prediction accuracy and lower estimation bias as compared to cokriging in ArcGIS. County-level estimates produced by both packages were positively correlated (adjusted R2 range = 0.95 to 0.99); accuracy and precision improved with elevation as a covariate. Both methods from ArcGIS and SAS are reliable for U.S. county-level temperature estimates; however, ArcGIS's merits in spatial data pre-processing and processing time may be important considerations for software selection, especially for multi-year or multi-state projects.
Ford, Michael J; Murdoch, Andrew; Hughes, Michael
2015-03-01
We used parentage analysis based on microsatellite genotypes to measure rates of homing and straying of Chinook salmon (Oncorhynchus tshawytscha) among five major spawning tributaries within the Wenatchee River, Washington. On the basis of analysis of 2248 natural-origin and 11594 hatchery-origin fish, we estimated that the rate of homing to natal tributaries by natural-origin fish ranged from 0% to 99% depending on the tributary. Hatchery-origin fish released in one of the five tributaries homed to that tributary at a far lower rate than the natural-origin fish (71% compared to 96%). For hatchery-released fish, stray rates based on parentage analysis were consistent with rates estimated using physical tag recoveries. Stray rates among major spawning tributaries were generally higher than stray rates of tagged fish to areas outside of the Wenatchee River watershed. Within the Wenatchee watershed, rates of straying by natural-origin fish were significantly affected by spawning tributary and by parental origin: progeny of naturally spawning hatchery-produced fish strayed at significantly higher rates than progeny whose parents were themselves of natural origin. Notably, none of the 170 offspring that were products of mating by two natural-origin fish strayed from their natal tributary. Indirect estimates of gene flow based on FST statistics were correlated with but higher than the estimates from the parentage data. Tributary-specific estimates of effective population size were also correlated with the number of spawners in each tributary. Published [2015]. This article is a U.S. Government work and is in the public domain in the USA.
Mokánszki, Attila; Molnár, Zsuzsanna; Ujfalusi, Anikó; Balogh, Erzsébet; Bazsáné, Zsuzsa Kassai; Varga, Attila; Jakab, Attila; Oláh, Éva
2012-12-01
Infertile men with low sperm concentration and/or less motile spermatozoa have an increased risk of producing aneuploid spermatozoa. Selecting spermatozoa by hyaluronic acid (HA) binding may reduce genetic risks such as chromosomal rearrangements and numerical aberrations. Fluorescence in-situ hybridization (FISH) has been used to evaluate the presence of aneuploidies. This study examined spermatozoa of 10 oligozoospermic, 9 asthenozoospermic, 9 oligoasthenozoospermic and 17 normozoospermic men by HA binding and FISH. Mean percentage of HA-bound spermatozoa in the normozoospermic group was 81%, which was significantly higher than in the oligozoospermic (P<0.001), asthenozoospermic (P<0.001) and oligoasthenozoospermic (P<0.001) groups. Disomy of sex chromosomes (P=0.014) and chromosome 17 (P=0.0019), diploidy (P=0.03) and estimated numerical chromosome aberrations (P=0.004) were significantly higher in the oligoasthenozoospermic group compared with the other groups. There were statistically significant relationships (P<0.001) between sperm concentration and HA binding (r=0.658), between sperm concentration and estimated numerical chromosome aberrations (r=-0.668) and between HA binding and estimated numerical chromosome aberrations (r=-0.682). HA binding and aneuploidy studies of spermatozoa in individual cases allow prediction of reproductive prognosis and provision of appropriate genetic counselling. Infertile men with normal karyotypes and low sperm concentrations and/or less motile spermatozoa have significantly increased risks of producing aneuploid (diminished mature) spermatozoa. Selecting spermatozoa by hyaluronic acid (HA) binding, based on a binding between sperm receptors for zona pellucida and HA, may reduce the potential genetic risks such as chromosomal rearrangements and numerical aberrations. In the present study we examined sperm samples of 45 men with different sperm parameters by HA-binding assay and fluorescence in-situ hybridization (FISH). Mean percentage of HA-bound spermatozoa in the normozoospermic group was significantly higher than the oligozoospermic, the asthenozoospermic and the oligoasthenozoospermic groups. Using FISH, disomy of sex chromosomes and chromosome 17, diploidy and estimated numerical chromosome aberration frequencies were significantly higher in the oligoasthenozoospermic group compared with the three other groups. A significant positive correlation was found between the sperm concentration and the HA-binding capacity, and significant negative correlations between the sperm concentration and the estimated numerical chromosomes aberrations as well as between the HA-binding ability and the estimated numerical chromosome aberrations were identified. We conclude that HA-binding assay and sperm aneuploidy study using FISH may help to predict the reproductive ability of selected infertile male patients and to provide appropriate genetic counselling. Crown Copyright © 2012. Published by Elsevier Ltd. All rights reserved.
Adsorption of the Martian regolith: Specific surface area and missing CO2
NASA Technical Reports Server (NTRS)
Zent, A. P.; Fanale, F. P.; Postawko, S. E.
1987-01-01
For most estimates of available regolith and initial degassed CO2 inventories, it appears that any initial inventory must have been lost to space or incorporated into carbonates. Most estimates of the total available degassed CO2 inventory are only marginally sufficient to allow for a major early greenhouse effect. It is suggested that the requirements for greenhouse warming to produce old desiccated terrain would be greatly lessened if groundwater brines rather than rainfall were involved and if a higher internal gradient were involved to raise the water (brine) table, leading to more frequent sapping.
Role of fish distribution on estimates of standing crop in a cooling reservoir
Barwick, D. Hugh
1984-01-01
Estimates of fish standing crop from coves in Keowee Reservoir, South Carolina, were obtained in May and August for 3 consecutive years. Estimates were significantly higher in May than in August for most of the major species of fish collected, suggesting that considerable numbers of fish had migrated from the coves by August. This change in fish distribution may have resulted from the operation of a 2,580-megawatt nuclear power plant which altered reservoir stratification. Because fish distribution is sensitive to conditions of reservoir stratification, and because power plants often alter reservoir stratification, annual cove sampling in August may not be sufficient to produce comparable estimates of fish standing crop on which to assess the impact of power plant operations on fish populations. Comparable estimates of fish standing crop can probably be obtained from cooling reservoirs by collecting annual samples at similar water temperatures and concentrations of dissolved oxygen.
Huang, Kuo-Chen; Wang, Hsiu-Feng; Chen, Chun-Ching
2010-06-01
Effects of shape, size, and chromaticity of stimuli on participants' errors when estimating the size of simultaneously presented standard and comparison stimuli were examined. 48 Taiwanese college students ages 20 to 24 years old (M = 22.3, SD = 1.3) participated. Analysis showed that the error for estimated size was significantly greater for those in the low-vision group than for those in the normal-vision and severe-myopia groups. The errors were significantly greater with green and blue stimuli than with red stimuli. Circular stimuli produced smaller mean errors than did square stimuli. The actual size of the standard stimulus significantly affected the error for estimated size. Errors for estimations using smaller sizes were significantly higher than when the sizes were larger. Implications of the results for graphics-based interface design, particularly when taking account of visually impaired users, are discussed.
Parametric study of ion heating in a burnout device (HIP-1)
NASA Technical Reports Server (NTRS)
Sigman, D. R.; Reinmann, J. J.; Lauver, M. R.
1974-01-01
Results of further studies on the Lewis Research Center hot-ion plasma source (HIP-1) are reported. Changes have been made in both the electrode geometry and materials to produce higher ion temperatures. Ion temperature increased significantly with increased vacuum pumping speed. The best ion temperatures achieved so far for H+, D+, and He+ plasmas are estimated to be ≥0.6, ≥0.9, and ≥2.0 keV, respectively. Electrode pairs produced high ion temperatures whether on the magnetic axis or off it by 5.5 cm. Multiple sources, one on-axis and one off-axis, were run simultaneously from a single power supply by using independent gas feed rates. A momentum analyzer has been added to the charge-exchange neutral particle analyzer to identify particles according to mass, as well as energy. Under any given plasma condition, the higher mass ions have higher average energies, but not by as much as the ratio of their respective masses.
Using flow cytometry to estimate pollen DNA content: improved methodology and applications
Kron, Paul; Husband, Brian C.
2012-01-01
Background and Aims Flow cytometry has been used to measure nuclear DNA content in pollen, mostly to understand pollen development and detect unreduced gametes. Published data have not always met the high-quality standards required for some applications, in part due to difficulties inherent in the extraction of nuclei. Here we describe a simple and relatively novel method for extracting pollen nuclei, involving the bursting of pollen through a nylon mesh, compare it with other methods and demonstrate its broad applicability and utility. Methods The method was tested across 80 species, 64 genera and 33 families, and the data were evaluated using established criteria for estimating genome size and analysing cell cycle. Filter bursting was directly compared with chopping in five species, yields were compared with published values for sonicated samples, and the method was applied by comparing genome size estimates for leaf and pollen nuclei in six species. Key Results Data quality met generally applied standards for estimating genome size in 81 % of species and the higher best practice standards for cell cycle analysis in 51 %. In 41 % of species we met the most stringent criterion of screening 10 000 pollen grains per sample. In direct comparison with two chopping techniques, our method produced better quality histograms with consistently higher nuclei yields, and yields were higher than previously published results for sonication. In three binucleate and three trinucleate species we found that pollen-based genome size estimates differed from leaf tissue estimates by 1·5 % or less when 1C pollen nuclei were used, while estimates from 2C generative nuclei differed from leaf estimates by up to 2·5 %. Conclusions The high success rate, ease of use and wide applicability of the filter bursting method show that this method can facilitate the use of pollen for estimating genome size and dramatically improve unreduced pollen production estimation with flow cytometry. PMID:22875815
Method and apparatus for digitally based high speed x-ray spectrometer
Warburton, W.K.; Hubbard, B.
1997-11-04
A high speed, digitally based, signal processing system which accepts input data from a detector-preamplifier and produces a spectral analysis of the x-rays illuminating the detector. The system achieves high throughputs at low cost by dividing the required digital processing steps between a "hardwired" processor implemented in combinatorial digital logic, which detects the presence of the x-ray signals in the digitized data stream and extracts filtered estimates of their amplitudes, and a programmable digital signal processing computer, which refines the filtered amplitude estimates and bins them to produce the desired spectral analysis. One set of algorithms allows this hybrid system to match the resolution of analog systems while operating at much higher data rates. A second set of algorithms implemented in the processor allows the system to be self-calibrating as well. The same processor also handles the interface to an external control computer. 19 figs.
NASA Astrophysics Data System (ADS)
Suzuki, Ryosuke; Nishimura, Motoki; Yuan, Lee Chang; Kamahara, Hirotsugu; Atsuta, Yoichi; Daimon, Hiroyuki
2017-10-01
Utilization of sewage sludge via anaerobic digestion has been promoted for decades, but it remains relatively uncommon, especially in Japan. As an approach to promote it, an integrated system that combines anaerobic digestion with greenhouse crop production, composting and seaweed cultivation was proposed. Under this concept, sewage sludge is treated by anaerobic digestion to generate green energy, while by-products such as CO2 and heat produced during the process are utilized for crop production. In this study, the potential of such an integrated system was assessed by estimating a possible commercial scale and by comparing its energy consumption with the conventional approach to sewage sludge treatment, incineration. The possible commercial scale was calculated from the carbon flow of the system. Results showed that 25% of the current total electricity demand of the wastewater treatment plant could be covered by energy produced from anaerobic digestion of sewage sludge, and the total energy consumption of the integrated system was estimated to be 14% lower than that of the incineration approach. Together with the large amount of crops that can be produced, this study aims to showcase the potential of sewage sludge as a biomass resource; the added value of producing crops from the by-product CO2 and heat could stimulate public interest in utilizing sewage sludge through anaerobic digestion.
Estimation of Uncertainties in the Global Distance Test (GDT_TS) for CASP Models.
Li, Wenlin; Schaeffer, R Dustin; Otwinowski, Zbyszek; Grishin, Nick V
2016-01-01
The Critical Assessment of techniques for protein Structure Prediction (or CASP) is a community-wide blind test experiment to reveal the best accomplishments of structure modeling. Assessors have been using the Global Distance Test (GDT_TS) measure to quantify prediction performance since CASP3 in 1998. However, identifying significant score differences between close models is difficult because of the lack of uncertainty estimations for this measure. Here, we utilized the atomic fluctuations caused by structure flexibility to estimate the uncertainty of GDT_TS scores. Structures determined by nuclear magnetic resonance are deposited as ensembles of alternative conformers that reflect the structural flexibility, whereas standard X-ray refinement produces the static structure averaged over time and space for the dynamic ensembles. To recapitulate the structurally heterogeneous ensemble in the crystal lattice, we performed time-averaged refinement for X-ray datasets to generate structural ensembles for our GDT_TS uncertainty analysis. Using those generated ensembles, our study demonstrates that the time-averaged refinements produced structure ensembles in better agreement with the experimental datasets than the averaged X-ray structures with B-factors. The uncertainty of the GDT_TS scores, quantified by their standard deviations (SDs), increases for scores lower than 50 for X-ray structures and 70 for NMR structures, with maximum SDs of 0.3 and 1.23, respectively. We also applied our procedure to the high-accuracy version of the GDT-based score and produced similar results with slightly higher SDs. To facilitate score comparisons by the community, we developed a user-friendly web server that produces structure ensembles for NMR and X-ray structures and is accessible at http://prodata.swmed.edu/SEnCS. Our work helps to identify the significance of GDT_TS score differences, as well as to provide structure ensembles for estimating SDs of any scores.
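As an aside for readers implementing this kind of uncertainty estimate: once an ensemble of conformers is in hand, the GDT_TS spread reduces to scoring each conformer against the reference and taking the standard deviation. A minimal Python sketch with synthetic CA coordinates, assuming the models are already superposed on the reference (the real assessment performs a superposition search, e.g. with LGA):

```python
import numpy as np

def gdt_ts(model, reference, cutoffs=(1.0, 2.0, 4.0, 8.0)):
    """GDT_TS: mean percentage of CA atoms within 1/2/4/8 A of the
    reference (superposition assumed already done here)."""
    dists = np.linalg.norm(model - reference, axis=1)  # per-residue CA distance
    return np.mean([100.0 * np.mean(dists <= c) for c in cutoffs])

# Toy ensemble: 20 conformers of a 100-residue structure, jittered around a
# reference to mimic the flexibility captured by time-averaged refinement.
rng = np.random.default_rng(0)
reference = rng.normal(size=(100, 3)) * 10.0
ensemble = [reference + rng.normal(scale=0.8, size=reference.shape) for _ in range(20)]

scores = np.array([gdt_ts(m, reference) for m in ensemble])
print(f"GDT_TS = {scores.mean():.1f} +/- {scores.std(ddof=1):.2f} (SD over ensemble)")
```

With real ensembles, such as those produced by the SEnCS server mentioned above, the same loop would yield the score-dependent SDs the study reports.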
A Comparative Study of Automated Infrasound Detectors - PMCC and AFD with Analyst Review.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Junghyun; Hayward, Chris; Zeiler, Cleat
Automated detections calculated by the progressive multi-channel correlation (PMCC) method (Cansi, 1995) and the adaptive F detector (AFD) (Arrowsmith et al., 2009) are compared to the signals identified by five independent analysts. Each detector was applied to a four-hour time sequence recorded by the Korean infrasound array CHNAR. This array was used because it is composed of both small (<100 m) and large (~1000 m) aperture element spacing. The four-hour time sequence contained a number of easily identified signals under noise conditions with average RMS amplitudes varying from 1.2 to 4.5 mPa (1 to 5 Hz), estimated with a running five-minute window. The effectiveness of the detectors was estimated for the small aperture, large aperture, small aperture combined with the large aperture, and full array. The full and combined arrays performed best for AFD under all noise conditions, while the large aperture array had the poorest performance for both detectors. PMCC produced results similar to AFD under the lower noise conditions, but did not produce as dramatic an increase in detections using the full and combined arrays. Both the automated detectors and the analysts produced fewer detections under the higher noise conditions. Comparing the detection probabilities with Estimated Receiver Operating Characteristic (EROC) curves, we found that the smaller value of consistency for PMCC and the larger p-value for AFD had the highest detection probability. These parameters produced greater changes in detection probability than estimates of the false alarm rate. The detection probability was impacted the most by noise level, with low noise (average RMS amplitude of 1.7 mPa) having an average detection probability of ~40% and high noise (average RMS amplitude of 2.9 mPa) an average detection probability of ~23%.
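The quoted noise amplitudes are band-limited RMS values in a running window. A sketch of that computation in Python/SciPy (the 20 Hz sampling rate and the synthetic trace are assumptions; the 1-5 Hz band and five-minute window follow the abstract):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 20.0                        # sampling rate in Hz (assumed)
t = np.arange(0, 4 * 3600, 1 / fs)
x = np.random.default_rng(1).normal(scale=2.0, size=t.size)  # stand-in trace (mPa)

# Band-pass 1-5 Hz, as used for the noise estimates above.
sos = butter(4, [1.0, 5.0], btype="bandpass", fs=fs, output="sos")
xb = sosfiltfilt(sos, x)

# RMS amplitude in a running five-minute window.
win = int(5 * 60 * fs)
power = np.convolve(xb ** 2, np.ones(win) / win, mode="valid")
rms = np.sqrt(power)
print(f"running RMS: {rms.min():.2f} to {rms.max():.2f} mPa")
```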
Canivez, Gary L; Watkins, Marley W; Dombrowski, Stefan C
2017-04-01
The factor structure of the Wechsler Intelligence Scale for Children-Fifth Edition (WISC-V; Wechsler, 2014a) standardization sample (N = 2,200) was examined using confirmatory factor analyses (CFA) with maximum likelihood estimation for all reported models from the WISC-V Technical and Interpretation Manual (Wechsler, 2014b). Additionally, alternative bifactor models were examined and variance estimates and model-based reliability estimates (ω coefficients) were provided. Results from analyses of the 16 primary and secondary WISC-V subtests found that all higher-order CFA models with 5 group factors (VC, VS, FR, WM, and PS) produced model specification errors where the Fluid Reasoning factor produced negative variance and were thus judged inadequate. Of the 16 models tested, the bifactor model containing 4 group factors (VC, PR, WM, and PS) produced the best fit. Results from analyses of the 10 primary WISC-V subtests also found the bifactor model with 4 group factors (VC, PR, WM, and PS) produced the best fit. Variance estimates from both 16 and 10 subtest based bifactor models found dominance of general intelligence (g) in accounting for subtest variance (except for PS subtests) and large ω-hierarchical coefficients supporting general intelligence interpretation. The small portions of variance uniquely captured by the 4 group factors and low ω-hierarchical subscale coefficients likely render the group factors of questionable interpretive value independent of g (except perhaps for PS). Present CFA results confirm the EFA results reported by Canivez, Watkins, and Dombrowski (2015); Dombrowski, Canivez, Watkins, and Beaujean (2015); and Canivez, Dombrowski, and Watkins (2015). (PsycINFO Database Record (c) 2017 APA, all rights reserved).
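For readers reproducing the reliability analysis: ω and ω-hierarchical follow directly from standardized bifactor loadings. A sketch with made-up loadings for eight subtests and two group factors (the formulas are the standard model-based reliability expressions; none of the numbers are WISC-V values):

```python
import numpy as np

# Hypothetical standardized bifactor loadings: general factor g plus
# two group factors over eight subtests (illustrative values only).
g  = np.array([0.70, 0.75, 0.68, 0.72, 0.65, 0.60, 0.55, 0.58])
f1 = np.array([0.30, 0.25, 0.35, 0.28, 0.00, 0.00, 0.00, 0.00])
f2 = np.array([0.00, 0.00, 0.00, 0.00, 0.40, 0.35, 0.30, 0.38])

uniq = 1 - (g**2 + f1**2 + f2**2)                 # unique variances
total = g.sum()**2 + f1.sum()**2 + f2.sum()**2 + uniq.sum()

omega   = (g.sum()**2 + f1.sum()**2 + f2.sum()**2) / total  # total-score omega
omega_h = g.sum()**2 / total                       # omega-hierarchical: g only
print(f"omega = {omega:.3f}, omega-hierarchical = {omega_h:.3f}")
```

A large ω-hierarchical relative to ω is what licenses the interpretation of the total score as reflecting general intelligence, the pattern the study reports for g.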
Abbreviation definition identification based on automatic precision estimates.
Sohn, Sunghwan; Comeau, Donald C; Kim, Won; Wilbur, W John
2008-09-25
The rapid growth of biomedical literature presents challenges for automatic text processing, and one of these challenges is abbreviation identification. The presence of unrecognized abbreviations in text hinders indexing algorithms and adversely affects information retrieval and extraction. Automatic abbreviation definition identification can help resolve these issues. However, abbreviations and their definitions identified by an automatic process are of uncertain validity. Due to the size of databases such as MEDLINE, only a small fraction of abbreviation-definition pairs can be examined manually. An automatic way to estimate the accuracy of abbreviation-definition pairs extracted from text is needed. In this paper we propose an abbreviation definition identification algorithm that employs a variety of strategies to identify the most probable abbreviation definition. In addition, our algorithm produces an accuracy estimate, pseudo-precision, for each strategy without using a human-judged gold standard. The pseudo-precisions determine the order in which the algorithm applies the strategies in seeking to identify the definition of an abbreviation. On the Medstract corpus our algorithm produced 97% precision and 85% recall, which is higher than previously reported results. We also annotated 1250 randomly selected MEDLINE records as a gold standard. On this set we achieved 96.5% precision and 83.2% recall. This compares favourably with the well-known Schwartz and Hearst algorithm. We developed an algorithm for abbreviation identification that uses a variety of strategies to identify the most probable definition for an abbreviation and also produces an estimated accuracy of the result. This process is purely automatic.
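The Schwartz and Hearst baseline mentioned above is easy to state concretely: given a short form and the text preceding its parenthesis, match the short form's characters right-to-left, requiring its first character to begin a word. A Python sketch of that core matching step (a transcription of the published pseudo-code, not the authors' multi-strategy algorithm):

```python
def find_best_long_form(short_form, candidate):
    """Right-to-left character matching from Schwartz & Hearst (2003)."""
    s = len(short_form) - 1          # index into the short form
    l = len(candidate) - 1           # index into the candidate long form
    while s >= 0:
        c = short_form[s].lower()
        if not c.isalnum():          # skip non-alphanumerics in the short form
            s -= 1
            continue
        # Scan left for a matching character; the short form's first
        # character must additionally begin a word of the long form.
        while (l >= 0 and candidate[l].lower() != c) or \
              (s == 0 and l > 0 and candidate[l - 1].isalnum()):
            l -= 1
        if l < 0:
            return None              # no valid long form found
        l -= 1
        s -= 1
    start = candidate.rfind(" ", 0, l + 1) + 1
    return candidate[start:]

print(find_best_long_form("HPLC",
      "separated by high performance liquid chromatography"))
# -> "high performance liquid chromatography"
```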
The effect of multiple primary rules on population-based cancer survival
Weir, Hannah K.; Johnson, Christopher J.; Thompson, Trevor D.
2015-01-01
Purpose: Different rules for registering multiple primary (MP) cancers are used by cancer registries throughout the world, making international data comparisons difficult. This study evaluates the effect of Surveillance, Epidemiology, and End Results (SEER) and International Association of Cancer Registries (IACR) MP rules on population-based cancer survival estimates. Methods: Data from five US states and six metropolitan area cancer registries participating in the SEER Program were used to estimate age-standardized relative survival (RS%) for first cancers-only and for all first cancers matching the selection criteria according to SEER and IACR MP rules, for all cancer sites combined and for the top 25 cancer site groups among men and women. Results: During 1995-2008, the percentage of MP cancers (all sites, both sexes) increased 25.4% using SEER rules (from 14.6 to 18.4%) and 20.1% using IACR rules (from 13.2 to 15.8%). More MP cancers were registered among females than among males, and SEER rules registered more MP cancers than IACR rules (15.8 vs. 14.4% among males; 17.2 vs. 14.5% among females). The top 3 cancer sites with the largest differences were melanoma (5.8%), urinary bladder (3.5%), and kidney and renal pelvis (2.9%) among males, and breast (5.9%), melanoma (3.9%), and urinary bladder (3.4%) among females. Five-year survival estimates (all sites combined) restricted to first primary cancers-only were higher than estimates using all first site-specific primaries (SEER or IACR rules), and likewise for 11 of 21 sites among males and 11 of 23 sites among females. SEER estimates are comparable to IACR estimates for all site-specific cancers and marginally higher for all sites combined among females (RS 62.28 vs. 61.96%). Conclusion: Survival after diagnosis has improved for many leading cancers. However, cancer patients remain at risk of subsequent cancers. Survival estimates based on first cancers-only exclude a large and increasing number of MP cancers. To produce clinically and epidemiologically relevant and less biased cancer survival estimates, data on all cancers should be included in the analysis. The multiple primary rules (SEER or IACR) used to identify primary cancers do not affect survival estimates if all first cancers matching the selection criteria are used to produce site-specific survival estimates. PMID:23558444
NASA Astrophysics Data System (ADS)
Hansel, Amie K.; Ehrenhauser, Franz S.; Richards-Henderson, Nicole K.; Anastasio, Cort; Valsaraj, Kalliat T.
2015-02-01
Green leaf volatiles (GLVs) are a group of biogenic volatile organic compounds (BVOCs) released into the atmosphere by vegetation. BVOCs produce secondary organic aerosol (SOA) via gas-phase reactions, but little is known of their aqueous-phase oxidation as a source of SOA. GLVs can partition into atmospheric water phases, e.g., fog, mist, dew or rain, and be oxidized by hydroxyl radicals (˙OH). These reactions in the liquid phase also lead to products that have higher molecular weights, increased polarity, and lower vapor pressures, ultimately forming SOA after evaporation of the droplet. To examine this process, we investigated the aqueous, ˙OH-mediated oxidation of methyl jasmonate (MeJa) and methyl salicylate (MeSa), two GLVs that produce aqueous-phase SOA. High performance liquid chromatography/electrospray ionization mass spectrometry (HPLC-ESI-MS) was used to monitor product formation. The oxidation products identified exhibit higher molecular mass than their parent GLV due to either dimerization or the addition of oxygen and hydroxyl functional groups. The proposed structures of potential products are based on mechanistic considerations combined with the HPLC/ESI-MS data. Based on the structures, the vapor pressure and the Henry's law constant were estimated with multiple methods (SPARC, SIMPOL, MPBPVP, Bond and Group Estimations). The estimated vapor pressures of the products identified are significantly (up to 7 orders of magnitude) lower than those of the associated parent compounds, and therefore, the GLV oxidation products may remain as SOA after evaporation of the water droplet. The contribution of the identified oxidation products to SOA formation is estimated based on measured HPLC-ESI/MS responses relative to previous aqueous SOA mass yield measurements.
Kauffman, Mandy; Peck, Dannele; Scurlock, Brandon; Logan, Jim; Robinson, Timothy; Cook, Walt; Boroff, Kari; Schumaker, Brant
2016-09-15
Livestock producers and state wildlife agencies have used multiple management strategies to control bovine brucellosis in the Greater Yellowstone Area (GYA). However, spillover from elk to domestic bison and cattle herds continues to occur. Although knowledge is increasing about the location and behavior of elk in the GYA, predicting spatiotemporal overlap between elk and cattle requires locations of livestock operations and observations of elk contact by producers. We queried all producers in a three-county area using a questionnaire designed to determine the location of cattle and whether producers saw elk commingle with their animals. This information was used to parameterize a spatially explicit risk model to estimate the number of elk expected to overlap with cattle during the brucellosis transmission risk period. Elk-cattle overlap was predicted in areas further from roads and forest boundaries, in areas with wolf activity, with higher slopes, lower hunter densities, and where the cost-distance to feedgrounds was very low or very high. The model was used to estimate the expected number of years until a cattle reactor will be detected, under alternative management strategies. The model predicted a cattle case every 4.28 years in the highest risk herd unit, a higher prediction than the one case in 26 years we have observed. This difference likely indicates that ongoing management strategies are at least somewhat effective in preventing potential elk-cattle brucellosis transmission in these areas. Using this model, we can infer the expected effectiveness of various management strategies for reducing the risk of brucellosis spillover from elk to cattle. Copyright © 2016 Elsevier B.V. All rights reserved.
On the nature of the anti-tail of Comet Kohoutek /1973f/. I - A working model
NASA Technical Reports Server (NTRS)
Sekanina, Z.
1974-01-01
The model derived for the anti-tail of Comet Kohoutek describes it as a flat formation, confined essentially to the comet's orbit plane and composed of relatively heavy particles (mostly in the size range 0.1-1 mm) whose motions are controlled by solar gravity and solar radiation pressure. Almost all the material was produced by the comet before perihelion at a rate about an order of magnitude higher than for Comets Arend-Roland and Bennett. The latent heat of vaporization of the particle material is estimated at 40-45 kcal/mole or higher.
Wood, Molly S.; Fosness, Ryan L.; Etheridge, Alexandra B.
2015-12-14
Acoustic surrogate ratings were developed between backscatter data collected using acoustic Doppler velocity meters (ADVMs) and results of suspended-sediment samples. Ratings were successfully fit to various sediment size classes (total, fines, and sands) using ADVMs of different frequencies (1.5 and 3 megahertz). Surrogate ratings also were developed using variations of streamflow and seasonal explanatory variables. The streamflow surrogate ratings produced average annual sediment load estimates that were 8–32 percent higher, depending on site and sediment type, than estimates produced using the acoustic surrogate ratings. The streamflow surrogate ratings tended to overestimate suspended-sediment concentrations and loads during periods of elevated releases from Libby Dam as well as on the falling limb of the streamflow hydrograph. Estimates from the acoustic surrogate ratings more closely matched suspended-sediment sample results than did estimates from the streamflow surrogate ratings during these periods as well as for rating validation samples collected in water year 2014. Acoustic surrogate technologies are an effective means to obtain continuous, accurate estimates of suspended-sediment concentrations and loads for general monitoring and sediment-transport modeling. In the Kootenai River, continued operation of the acoustic surrogate sites and use of the acoustic surrogate ratings to calculate continuous suspended-sediment concentrations and loads will allow for tracking changes in sediment transport over time.
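Such acoustic surrogate ratings are typically log-linear regressions of concentration on sediment-corrected backscatter, with a bias correction applied when retransforming. A sketch in Python (the calibration pairs and fitted coefficients are illustrative, not the Kootenai River ratings; the 0.0027 tons/day conversion constant is the standard one for streamflow in cfs and concentration in mg/L):

```python
import numpy as np

# Paired calibration data: sediment-corrected backscatter (dB) from the
# ADVM and suspended-sediment concentration (mg/L) from physical samples.
scb = np.array([60.0, 65.0, 70.0, 75.0, 80.0, 85.0])
ssc = np.array([8.0, 15.0, 30.0, 55.0, 110.0, 200.0])

# Fit log10(SSC) = b0 + b1 * SCB, a typical surrogate-rating form.
b1, b0 = np.polyfit(scb, np.log10(ssc), 1)
resid = np.log10(ssc) - (b0 + b1 * scb)
bcf = np.mean(10 ** resid)          # smearing bias-correction factor

def ssc_from_scb(x):
    return bcf * 10 ** (b0 + b1 * x)

# Continuous load from streamflow Q (cfs) and estimated SSC (mg/L).
q, x = 5000.0, 72.0
load_tons_day = 0.0027 * q * ssc_from_scb(x)
print(f"SSC ~ {ssc_from_scb(x):.0f} mg/L, load ~ {load_tons_day:.0f} tons/day")
```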
Caetano, Raul; Mills, Britain A; Harris, T Robert
2012-01-01
This study was conducted to examine discrepancies in alcohol consumption estimates between a self-reported standard quantity-frequency measure and an adjusted version based on respondents' typically used container size. Using a multistage cluster sample design, 5,224 Hispanic individuals 18 years of age and older were selected from the household population in five metropolitan areas of the United States: Miami, New York, Philadelphia, Houston, and Los Angeles. The survey-weighted response rate was 76%. Personal interviews lasting an average of 1 hour were conducted in respondents' homes in either English or Spanish. First, the overall effect of container adjustment was to increase estimates of ethanol consumption by 68% for women (range across Hispanic groups: 17%-99%) and 30% for men (range: 14%-42%). With the exception of female Cuban American, Mexican American, and South/Central American beer drinkers and male Cuban American wine drinkers, all percentage differences between unadjusted and container-adjusted estimates were positive. Second, container adjustments produced the largest change for volume of distilled spirits, followed by wine and beer. Third, container-size adjustments generally produced larger percentage increases in consumption estimates for the higher volume drinkers, especially the upper tertile of female drinkers. Self-reported alcohol consumption based on standard drinks underreports consumption when compared with reports based on the amount of alcohol poured into commonly used containers.
Xiaopeng, QI; Liang, WEI; BARKER, Laurie; LEKIACHVILI, Akaki; Xingyou, ZHANG
2015-01-01
Temperature changes are known to have significant impacts on human health. Accurate estimates of population-weighted average monthly air temperature for US counties are needed to evaluate temperature's association with health behaviours and disease, which are sampled or reported at the county level and measured on a monthly (or 30-day) basis. Most reported temperature estimates were calculated using ArcGIS; relatively few used SAS. We compared the performance of geostatistical models to estimate population-weighted average temperature in each month for counties in 48 states using ArcGIS v9.3 and SAS v9.2 on a CITGO platform. Monthly average temperature for Jan-Dec 2007 and elevation from 5435 weather stations were used to estimate the temperature at county population centroids. County estimates were produced with elevation as a covariate. Performance of the models was assessed by comparing adjusted R2, mean squared error, root mean squared error, and processing time. Prediction accuracy for split validation was above 90% for 11 months in ArcGIS and all 12 months in SAS. Cokriging in SAS achieved higher prediction accuracy and lower estimation bias than cokriging in ArcGIS. County-level estimates produced by the two packages were positively correlated (adjusted R2 range = 0.95 to 0.99); accuracy and precision improved with elevation as a covariate. Both the ArcGIS and SAS methods are reliable for US county-level temperature estimates; however, ArcGIS's merits in spatial data pre-processing and processing time may be important considerations for software selection, especially for multi-year or multi-state projects. PMID:26167169
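Neither NumPy nor scikit-learn ships cokriging, so as a simplified stand-in the sketch below captures the two ingredients these models exploit: a trend on the elevation covariate plus spatial interpolation of the residuals (inverse-distance weighting here, where the real workflow fits kriging variograms). All numbers are synthetic:

```python
import numpy as np

def idw(xy_obs, z_obs, xy_new, power=2.0):
    """Inverse-distance-weighted interpolation of residuals."""
    d = np.linalg.norm(xy_obs[None, :, :] - xy_new[:, None, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    return (w * z_obs).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(7)
xy = rng.uniform(0, 1000, size=(200, 2))              # station coordinates (km)
elev = rng.uniform(0, 3000, size=200)                 # station elevation (m)
temp = 25 - 0.0065 * elev + rng.normal(0, 0.5, 200)   # lapse-rate signal + noise

# Step 1: regress temperature on the covariate (as cokriging also exploits);
# step 2: interpolate the residuals spatially.
beta = np.polyfit(elev, temp, 1)
resid = temp - np.polyval(beta, elev)

centroid_xy = np.array([[500.0, 500.0]])              # county population centroid
centroid_elev = 1200.0
estimate = np.polyval(beta, centroid_elev) + idw(xy, resid, centroid_xy)[0]
print(f"estimated monthly mean temperature: {estimate:.1f} C")
```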
Disentangling Aerosol Cooling and Greenhouse Warming to Reveal Earth's Climate Sensitivity
NASA Astrophysics Data System (ADS)
Storelvmo, Trude; Leirvik, Thomas; Phillips, Petter; Lohmann, Ulrike; Wild, Martin
2015-04-01
Earth's climate sensitivity has been the subject of heated debate for decades, and recently spurred renewed interest after the latest IPCC assessment report suggested a downward adjustment of the most likely range of climate sensitivities. Here, we present a study based on the time period 1964 to 2010, which is unique in that it does not rely on global climate models (GCMs) in any way. The study uses surface observations of temperature and incoming solar radiation from approximately 1300 surface sites, along with observations of the equivalent CO2 concentration (CO2,eq) in the atmosphere, to produce a new best estimate for the transient climate sensitivity of 1.9K (95% confidence interval 1.2K - 2.7K). This is higher than other recent observation-based estimates, and is better aligned with the estimate of 1.8K and range (1.1K - 2.5K) derived from the latest generation of GCMs. The new estimate is produced by incorporating the observations in an energy balance framework, and by applying statistical methods that are standard in the field of Econometrics, but less common in climate studies. The study further suggests that about a third of the continental warming due to increasing CO2,eq was masked by aerosol cooling during the time period studied.
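The energy-balance framework described here amounts to regressing observed temperature on estimated radiative forcing and scaling the slope to a CO2 doubling. A toy version with synthetic data (the 5.35 ln(C/C0) forcing expression and F_2x = 3.71 W/m^2 are standard; the study's full regression also includes observed surface solar radiation to separate the aerosol term, which is omitted here):

```python
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1964, 2011)

# Synthetic stand-ins for the observed series: CO2-equivalent (ppm) and
# annual-mean surface temperature anomaly (K).
co2eq = 320.0 * np.exp(0.005 * (years - 1964))
forcing = 5.35 * np.log(co2eq / co2eq[0])                 # W/m^2
temp = 0.5 * forcing + rng.normal(0.0, 0.08, years.size)  # assumed 0.5 K per W/m^2

# OLS slope of temperature on forcing, scaled to a CO2 doubling.
slope, _ = np.polyfit(forcing, temp, 1)
tcr = slope * 5.35 * np.log(2.0)                          # F_2x = 3.71 W/m^2
print(f"transient climate sensitivity ~ {tcr:.2f} K per CO2 doubling")
```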
Disentangling Greenhouse Warming and Aerosol Cooling to Reveal Earth's Transient Climate Sensitivity
NASA Astrophysics Data System (ADS)
Storelvmo, T.
2015-12-01
Earth's climate sensitivity has been the subject of heated debate for decades, and recently spurred renewed interest after the latest IPCC assessment report suggested a downward adjustment of the most likely range of climate sensitivities. Here, we present an observation-based study based on the time period 1964 to 2010, which is unique in that it does not rely on global climate models (GCMs) in any way. The study uses surface observations of temperature and incoming solar radiation from approximately 1300 surface sites, along with observations of the equivalent CO2 concentration (CO2,eq) in the atmosphere, to produce a new best estimate for the transient climate sensitivity of 1.9K (95% confidence interval 1.2K - 2.7K). This is higher than other recent observation-based estimates, and is better aligned with the estimate of 1.8K and range (1.1K - 2.5K) derived from the latest generation of GCMs. The new estimate is produced by incorporating the observations in an energy balance framework, and by applying statistical methods that are standard in the field of Econometrics, but less common in climate studies. The study further suggests that about a third of the continental warming due to increasing CO2,eq was masked by aerosol cooling during the time period studied.
Acute and chronic environmental effects of clandestine methamphetamine waste.
Kates, Lisa N; Knapp, Charles W; Keenan, Helen E
2014-09-15
The illicit manufacture of methamphetamine (MAP) produces substantial amounts of hazardous waste that is dumped illegally. This study presents the first environmental evaluation of waste produced from illicit MAP manufacture. Chemical oxygen demand (COD) was measured to assess immediate oxygen depletion effects. A mixture of five waste components (10 mg/L per chemical) was found to have a COD (130 mg/L) higher than the European Union wastewater discharge regulations (125 mg/L). Two environmental partition coefficients, K(OW) and K(OC), were measured for several chemicals identified in MAP waste. Experimental values were input into a computer fugacity model (EPI Suite™) to estimate environmental fate. Experimental log K(OW) values ranged from -0.98 to 4.91, which were in accordance with computer-estimated values. Experimental K(OC) values ranged from 11 to 72, which were much lower than the default computer values. The experimental fugacity model for discharge to water estimates that waste components will remain in the water compartment for 15 to 37 days. Using a combination of laboratory experimentation and computer modelling, the environmental fate of MAP waste products was estimated. While fugacity models using experimental and computational values were very similar, default computer models should not take the place of laboratory experimentation. Copyright © 2014 Elsevier B.V. All rights reserved.
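A fugacity model of this type can be illustrated with the simplest Level I (Mackay) calculation: the chemical distributes so that all compartments share one fugacity. A two-compartment sketch (all volumes, amounts, and the Henry's law constant are illustrative; EPI Suite's fuller models add soil and sediment, intermedia transfer, and degradation):

```python
R, T = 8.314, 298.0            # gas constant (Pa m3/mol K), temperature (K)

# Two-compartment Level I fugacity model (air + water), after Mackay.
V_air, V_water = 1e9, 1e6      # compartment volumes (m3), illustrative
H = 50.0                       # Henry's law constant (Pa m3/mol), assumed

Z_air = 1.0 / (R * T)          # fugacity capacities (mol/m3/Pa)
Z_water = 1.0 / H

M = 1000.0                     # total moles of chemical released
f = M / (V_air * Z_air + V_water * Z_water)   # common fugacity (Pa)

for name, V, Z in [("air", V_air, Z_air), ("water", V_water, Z_water)]:
    moles = V * Z * f
    print(f"{name}: {100 * moles / M:.1f}% of mass, C = {Z * f:.2e} mol/m3")
```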
Preserving subject variability in group fMRI analysis: performance evaluation of GICA vs. IVA
Michael, Andrew M.; Anderson, Mathew; Miller, Robyn L.; Adalı, Tülay; Calhoun, Vince D.
2014-01-01
Independent component analysis (ICA) is a widely applied technique to derive functionally connected brain networks from fMRI data. Group ICA (GICA) and Independent Vector Analysis (IVA) are extensions of ICA that enable users to perform group fMRI analyses; however, a full comparison of the performance limits of GICA and IVA has not been investigated. Recent interest in resting state fMRI data, with its potentially higher degree of subject variability, makes the evaluation of the above techniques important. In this paper we compare component estimation accuracies of GICA and an improved version of IVA using simulated fMRI datasets. We systematically change the degree of inter-subject spatial variability of components and evaluate estimation accuracy over all spatial maps (SMs) and time courses (TCs) of the decomposition. Our results indicate the following: (1) at low levels of SM variability, or when just one SM is varied, both GICA and IVA perform well; (2) at higher levels of SM variability, or when more than one SM is varied, IVA continues to perform well but GICA yields SM estimates that are composites of other SMs, with errors in TCs; (3) both GICA and IVA remove spatial correlations of overlapping SMs and introduce artificial correlations in their TCs; (4) if the number of SMs is overestimated, IVA continues to perform well but GICA introduces artifacts in the varying and extra SMs, with artificial correlations in the TCs of extra components; and (5) in the absence or presence of SMs unique to one subject, GICA produces errors in TCs whereas IVA estimates are accurate. In summary, our simulation experiments (both simplistic and realistic) and our holistic analysis approach indicate that IVA produces results that are closer to ground truth and thereby better preserves subject variability. The improved version of IVA is now packaged into the GIFT toolbox (http://mialab.mrn.org/software/gift). PMID:25018704
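The underlying decomposition in both GICA and IVA is spatial ICA. A single-subject sketch with scikit-learn shows the data model used in such simulations (two overlapping spatial maps mixed by random time courses; the group-level machinery of the GIFT toolbox is not reproduced here):

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(5)
n_t, n_v = 200, 500                        # time points, voxels

# Two overlapping spatial sources and random time courses -> "fMRI" data.
s1 = np.zeros(n_v); s1[50:150] = 1.0
s2 = np.zeros(n_v); s2[120:220] = 1.0      # overlaps s1 on voxels 120-149
S = np.vstack([s1, s2]) + 0.05 * rng.normal(size=(2, n_v))
A = rng.normal(size=(n_t, 2))              # time courses (mixing matrix)
X = A @ S + 0.1 * rng.normal(size=(n_t, n_v))

# Spatial ICA: treat voxels as samples so the maps are the independent sources.
ica = FastICA(n_components=2, random_state=0)
maps = ica.fit_transform(X.T).T            # estimated spatial maps (2 x n_v)
tcs = ica.mixing_                          # corresponding time courses (n_t x 2)

corr = np.abs(np.corrcoef(np.vstack([S, maps]))[:2, 2:])
print("abs corr(true maps, estimated maps):\n", corr.round(2))
```

Because the two sources overlap on voxels 120-149, even this single-subject case exhibits finding (3): the recovered maps are decorrelated over voxels, pushing the shared variance into the time courses.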
Reflectance of vegetation, soil, and water. [Hidalgo County, Texas
NASA Technical Reports Server (NTRS)
Wiegand, C. L. (Principal Investigator)
1974-01-01
The author has identified the following significant results. The majority of the rangelands of Hidalgo County, Texas are used in cow-calf operations. Continuous year-long grazing is practiced on about 60% of the acreage and some type of deferred system on the rest. Mechanical brush control is used more than chemical control. Ground surveys gave representative estimates for 15 vegetable crops produced in Hidalgo County. ERTS-1 data were used to estimate the acreage of citrus in the county. Combined Kubelka-Munk and regression models, which included a term for shadow areas, gave a higher correlation of composite canopy reflectance with ground truth than either model alone.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wood, Brian E.; Müller, Hans-Reinhard; Bzowski, Maciej
We explore the possibility that interstellar O and Ne may be contributing to the particle signal from the GAS instrument on Ulysses, which is generally assumed to be entirely He. Motivating this study is the recognition that an interstellar temperature higher than any previously estimated from Ulysses data could potentially resolve a discrepancy between Ulysses He measurements and those from the Interstellar Boundary Explorer (IBEX). Contamination by O and Ne could lead to Ulysses temperature measurements that are too low. We estimate the degree of O and Ne contamination necessary to increase the inferred Ulysses temperature to 8500 K, which would be consistent with both the Ulysses and IBEX data given the same interstellar flow speed. We find that producing the desired effect requires a heavy element contamination level of ∼9% of the total Ulysses/GAS signal. However, this degree of heavy element contribution is about an order of magnitude higher than expected based on our best estimates of detection efficiencies, ISM abundances, and heliospheric survival probabilities, making it unlikely that heavy element contamination is significantly affecting temperatures derived from Ulysses data.
2009-06-01
capability. Mass Rearing: The ability to mass-produce large numbers of high quality insect biocontrol agents can be a tremendous asset when...implementing a biocontrol program. Common sense would dictate that releasing a high number of agents allows for higher establishment success, more rapid...varying densities of biocontrol agent (costs estimated using a constant weight). Table 1. Hypothetical calculation of Hydrellia pakistanae production cost
Toward Assessing the Causes of Volcanic Diversity in the Cascades Arc
NASA Astrophysics Data System (ADS)
Till, C. B.; Kent, A. J.; Abers, G. A.; Pitcher, B.; Janiszewski, H. A.; Schmandt, B.
2017-12-01
A fundamental unanswered question in subduction system science is the cause of the observed diversity in volcanic arc style at an arc-segment to whole-arc scale. Specifically, we have yet to distinguish the predominant mantle and crustal processes responsible for the diversity of arc volcanic phenomena, including the presence of central volcanoes vs. dispersed volcanism; episodicity in volcanic fluxes in time and space; variations in magma chemistry; and differences in the extent of magmatic focusing. Here we present a thought experiment using currently available data to estimate the relative role of crustal magmatic processes in producing the observed variations in Cascades arc volcanism. A compilation of available major element compositions of Quaternary arc volcanism and estimates of eruptive volumes are used to examine variations in the composition of arc magmas along strike. We then calculate the Quaternary volcanic heat flux into the crust, assuming steady state, required to produce the observed distribution of compositions via crystallization of mantle-derived primitive magmas vs. crustal melting, using experimental constraints on possible liquid lines of descent and crustal melting scenarios. For pure crystallization, heat input into the crust scales with silica content, with dacitic to rhyolitic compositions producing significantly greater latent heat relative to basalts to andesites. In contrast, the heat required to melt lower crustal amphibolite decreases with increasing silica and is likely provided by the latent heat of crystallization. Thus we develop maximum and minimum estimates for heat added to the crust at a given SiO2 range. When volumes are considered, we find the average Quaternary volcanic heat flux at latitudes south of South Sister to be more than twice that to the north. Distributed mafic volcanism produces only a quarter to half the heat flux calculated for the main edifices at a given latitude because of its lesser eruptive volumes and quantities of evolved magma. When we compare our Quaternary heat flux calculations to a variety of geophysical observations, we find that regions of calculated higher volcanic heat flux coincide with regions of significantly lower crustal seismic wave speeds beneath and behind the arc, as well as with regions of significantly higher heat flow.
Probable Maximum Precipitation in the U.S. Pacific Northwest in a Changing Climate
NASA Astrophysics Data System (ADS)
Chen, Xiaodong; Hossain, Faisal; Leung, L. Ruby
2017-11-01
The safety of large and aging water infrastructures is gaining attention in water management given the accelerated rate of change in landscape, climate, and society. In current engineering practice, such safety is ensured by the design of infrastructure for the Probable Maximum Precipitation (PMP). Recently, several numerical modeling approaches have been proposed to modernize the conventional and ad hoc PMP estimation approach. However, the underlying physics have not been fully investigated and thus differing PMP estimates are sometimes obtained without physics-based interpretations. In this study, we present a hybrid approach that takes advantage of both traditional engineering practice and modern climate science to estimate PMP for current and future climate conditions. The traditional PMP approach is modified and applied to five statistically downscaled CMIP5 model outputs, producing an ensemble of PMP estimates in the Pacific Northwest (PNW) during the historical (1970-2016) and future (2050-2099) time periods. The hybrid approach produced consistent historical PMP estimates as the traditional estimates. PMP in the PNW will increase by 50% ± 30% of the current design PMP by 2099 under the RCP8.5 scenario. Most of the increase is caused by warming, which mainly affects moisture availability through increased sea surface temperature, with minor contributions from changes in storm efficiency in the future. Moist track change tends to reduce the future PMP. Compared with extreme precipitation, PMP exhibits higher internal variability. Thus, long-time records of high-quality data in both precipitation and related meteorological fields (temperature, wind fields) are required to reduce uncertainties in the ensemble PMP estimates.
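The traditional PMP procedure applied to each climate model here is, at its core, moisture maximization: scale each large observed storm by the ratio of maximum precipitable water to the storm's actual precipitable water, then take the envelope. A sketch with illustrative numbers:

```python
# Moisture maximization, the core of the traditional PMP procedure.
storms = [
    # (observed storm depth in mm, storm precipitable water in mm,
    #  climatological maximum precipitable water in mm) - illustrative values
    (250.0, 30.0, 45.0),
    (310.0, 38.0, 46.0),
    (180.0, 22.0, 44.0),
]

maximized = [p * (pw_max / pw) for p, pw, pw_max in storms]
pmp = max(maximized)   # envelope over the maximized storms
print([f"{m:.0f}" for m in maximized], f"PMP ~ {pmp:.0f} mm")
```

Warming raises the climatological maximum precipitable water via increased sea surface temperature, which is why the moisture term dominates the projected PMP increase in the abstract.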
Estimates of cancer incidence, mortality and survival in aboriginal people from NSW, Australia
2012-01-01
Background Aboriginal status has been unreliably and incompletely recorded in health and vital registration data collections for the most populous areas of Australia, including NSW where 29% of Australian Aboriginal people reside. This paper reports an assessment of Aboriginal status recording in NSW cancer registrations and estimates incidence, mortality and survival from cancer in NSW Aboriginal people using multiple imputation of missing Aboriginal status in NSW Central Cancer Registry (CCR) records. Methods Logistic regression modelling and multiple imputation were used to assign Aboriginal status to those records of cancer diagnosed from 1999 to 2008 with missing Aboriginality (affecting 12-18% of NSW cancers registered in this period). Estimates of incidence, mortality and survival from cancer in NSW Aboriginal people were compared with the NSW total population, as standardised incidence and mortality ratios, and with the non-Aboriginal population. Results Following imputation, 146 (12.2%) extra cancers in Aboriginal males and 140 (12.5%) in Aboriginal females were found for 1999-2007. Mean annual cancer incidence in NSW Aboriginal people was estimated to be 660 per 100,000 and 462 per 100,000, 9% and 6% higher than all NSW males and females respectively. Mean annual cancer mortality in NSW Aboriginal people was estimated to be 373 per 100,000 in males and 240 per 100,000 in females, 68% and 73% higher than for all NSW males and females respectively. Despite similar incidence of localised cancer, mortality from localised cancer in Aboriginal people is significantly higher than in non-Aboriginal people, as is mortality from cancers with regional, distant and unknown degree of spread at diagnosis. Cancer survival in Aboriginal people is significantly lower: 51% of males and 43% of females had died of the cancer by 5 years following diagnosis, compared to 36% and 33% of non-Aboriginal males and females respectively. Conclusion The present study is the first to produce valid and reliable estimates of cancer incidence, survival and mortality in Australian Aboriginal people from NSW. Despite somewhat higher cancer incidence in Aboriginal than in non-Aboriginal people, substantially higher mortality and lower survival in Aboriginal people is only partly explained by more advanced cancer at diagnosis. PMID:22559220
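The imputation step can be sketched compactly: fit a logistic model for Aboriginal status on complete records, draw the missing flags from the predicted probabilities m times, and combine across imputations. A simplified Python sketch with synthetic data (a full implementation would also propagate imputation-model parameter uncertainty, e.g. via bootstrap or posterior draws, per Rubin's rules):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(11)
n = 5000
X = rng.normal(size=(n, 3))                    # predictors, e.g. age, sex, area (coded)
true_p = 1 / (1 + np.exp(-(-2.5 + X @ np.array([0.8, -0.5, 0.3]))))
aboriginal = rng.binomial(1, true_p)
missing = rng.random(n) < 0.15                 # ~15% of records lack the flag

# Fit the imputation model on complete cases only.
model = LogisticRegression().fit(X[~missing], aboriginal[~missing])
p_miss = model.predict_proba(X[missing])[:, 1]

# m imputed datasets: draw the missing flags, then combine the statistic.
m = 20
counts = []
for _ in range(m):
    imputed = aboriginal.copy()
    imputed[missing] = rng.binomial(1, p_miss)
    counts.append(imputed.sum())
counts = np.array(counts)
print(f"estimated cases: {counts.mean():.0f} "
      f"(between-imputation SD {counts.std(ddof=1):.1f})")
```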
NASA Astrophysics Data System (ADS)
Shoko, Cletah; Mutanga, Onisimo; Dube, Timothy; Slotow, Rob
2018-06-01
C3 and C4 grass species composition, with different physiological, morphological and, most importantly, phenological characteristics, influences Aboveground Biomass (AGB) and the ability of these species to provide ecosystem goods and services over space and time. For decades, the lack of appropriate remote sensing data sources compromised the estimation of C3 and C4 grass AGB over space and time. This resulted in uncertainties in understanding their potential and contribution to the provision of services. This study therefore examined the utility of the new multi-temporal Sentinel 2 to estimate and map C3 and C4 grass AGB over time, using the advanced Sparse Partial Least Squares Regression (SPLSR) model. Overall results showed variability in AGB between C3 and C4 grasses, in estimation accuracies and in the performance of the SPLSR model over time. Themeda (C4) produced higher AGB from February to April, whereas from May to September Festuca (C3) produced higher AGB. Both species also showed a decrease in AGB in August and September, although this was more apparent for Themeda than for its counterpart. Spectral band information alone predicted species AGB with the lowest accuracies, and an improvement was observed when both spectral bands and vegetation indices were applied. For instance, in the month of May, spectral bands alone predicted species AGB with the lowest accuracies for Festuca (R2 = 0.57; 31.70% of the mean), Themeda (R2 = 0.59; 24.02% of the mean) and combined species (R2 = 0.61; 15.64% of the mean); the use of spectral bands and vegetation indices yielded 0.77 (18.64%), 0.75 (14.27%) and 0.73 (16.47%) for Festuca, Themeda and combined species, respectively. The red edge (at 0.705 and 0.74 μm) and derived indices, the NIR and SWIR 2 (2.19 μm) were found to contribute most to grass species AGB estimation over time. Findings also revealed the potential of the SPLSR model in estimating C3 and C4 grass AGB using Sentinel 2 images over time. The AGB spatial variability maps produced in this study can be used to quantify C3 and C4 forage availability or accumulating fuel, as well as for developing operational management strategies.
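The modeling step is partial least squares regression of field-measured AGB on the Sentinel 2 predictors. scikit-learn has no sparse PLS, so the sketch below uses ordinary PLSRegression as a stand-in on synthetic plots (SPLSR additionally penalizes loadings so that informative variables, such as the red-edge bands, are selected):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(2)
n = 120                                       # field plots
X = rng.normal(size=(n, 14))                  # Sentinel 2 bands + vegetation indices
agb = 3.0 * X[:, 7] + 2.0 * X[:, 4] + rng.normal(0, 1.0, n)  # red-edge/NIR driven

pls = PLSRegression(n_components=3)
pred = cross_val_predict(pls, X, agb, cv=10).ravel()

ss_res = np.sum((agb - pred) ** 2)
r2 = 1 - ss_res / np.sum((agb - agb.mean()) ** 2)
rmse = np.sqrt(ss_res / n)
print(f"cross-validated R2 = {r2:.2f}, RMSE = {rmse:.2f}")
```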
Polyhydroxyalkanoate (PHA) production from waste.
Rhu, D H; Lee, W H; Kim, J Y; Choi, E
2003-01-01
PHA (polyhydroxyalkanoate) production was attempted with SBRs (sequencing batch reactors) fed fermented food waste. Seed microbes were collected from a sewage treatment plant with a biological nutrient removal process and acclimated with synthetic substrate prior to the application of the fermented food waste. Laboratory SBRs were used to produce PHA under limited oxygen and nutrients. A maximum content of 51% PHA was obtained with an anaerobic/aerobic cycle under P limitation, and the yield was estimated at about 0.05 g PHA(produced)/g COD(applied), or 25 kg PHA/dry ton of food waste, assuming more than 40% of the PHA content was recoverable. PHB/PHA ratios were 0.74 to 0.77 due to the higher acetate concentrations. Economic analysis suggested that PHA produced from food waste could be an alternative material for biodegradable plastics, such as collection bags for solid waste.
Neckband retention for lesser snow geese in the western Arctic
Samuel, M.D.; Goldberg, Diana R.; Smith, A.E.; Baranyuk, W.; Cooch, E.G.
2001-01-01
Neckbands are commonly used in waterfowl studies (especially geese) to identify individuals for determination of movement and behavior and to estimate population parameters. Substantial neckband loss can adversely affect these research objectives and produce biased survival estimates. We used capture, recovery, and observation histories for lesser snow geese (Chen caerulescens caerulescens) banded in the western Arctic, 1993-1996, to estimate neckband retention. We found that neckband retention differed between snow goose breeding colonies at Wrangel Island, Russia, and Banks Island, Northwest Territories, Canada. Male snow geese had higher neckband loss than females, a pattern similar to that found for Canada geese (Branta canadensis) and lesser snow geese in Alaska. We found that the rate of neckband loss increased with time, suggesting that neckbands are lost as the plastic deteriorates. Survival estimates for geese based on resighting neckbands will be biased unless estimates are corrected for neckband loss. We recommend that neckband loss be estimated using survival estimators that incorporate recaptures, recoveries, and observations of marked birds. Research and management studies using neckbands should be designed to improve neckband retention and to include the assessment of neckband retention.
The association between food prices and the blood glucose level of US adults with type 2 diabetes.
Anekwe, Tobenna D; Rahkovsky, Ilya
2014-04-01
We estimated the association between the price of healthy and less-healthy food groups and blood sugar among US adults with type 2 diabetes. We linked 1999-2006 National Health and Nutrition Examination Survey health information to food prices contained in the Quarterly Food-at-Home Price Database. We regressed blood sugar levels on food prices from the previous calendar quarter, controlling for market region and a range of other covariates. We also examined whether the association between food prices and blood sugar varies among different income groups. The prices of produce and low-fat dairy foods were associated with blood sugar levels of people with type 2 diabetes. Specifically, higher prices for produce and low-fat dairy foods were associated with higher levels of glycated hemoglobin and fasting plasma glucose 3 months later. Food prices had a greater association with blood sugar for low-income people than for higher-income people, and in the expected direction. Higher prices of healthy foods were associated with increased blood sugar among people with type 2 diabetes. The association was especially pronounced among low-income people with type 2 diabetes.
Estimation of the rate of egg contamination from Salmonella-infected chickens.
Arnold, M E; Martelli, F; McLaren, I; Davies, R H
2014-02-01
Salmonella enterica serovar Enteritidis (S. Enteritidis) is one of the most prevalent causes of human gastroenteritis and is by far the predominant Salmonella serovar among human cases, followed by Salmonella Typhimurium. Contaminated eggs produced by infected laying hens are thought to be the main source of human infection with S. Enteritidis throughout the world. Although previous studies have looked at the proportion of infected eggs from infected flocks, there is still uncertainty over the rate at which infected birds produce contaminated eggs. The aim of this study was to estimate the rate at which infected birds produce contaminated egg shells and egg contents. Data were collected from two studies, consisting of 15 and 20 flocks, respectively. Faecal and environmental sampling and testing of ovaries/caeca from laying hens were carried out in parallel with (i) for the first study, testing 300 individual eggs, contents and shells together, and (ii) for the second study, testing 4000 eggs in pools of six, with shells and contents tested separately. Bayesian methods were used to estimate the within-flock prevalence of infection from the faecal and hen post-mortem data, and this was related to the proportion of positive eggs. Results indicated a linear relationship between the rate of contamination of egg contents and the prevalence of infected chickens, but a nonlinear (quadratic) relationship between infection prevalence and the rate of egg shell contamination, with egg shell contamination occurring at a much higher rate than that of egg contents. There was also a significant difference in the rate of egg contamination between serovars, with S. Enteritidis causing a higher rate of contamination of egg contents and a lower rate of contamination of egg shells compared to non-S. Enteritidis serovars. These results will be useful for risk assessments of human exposure to Salmonella-contaminated eggs. © 2013 Crown copyright. This article is published with the permission of the Controller of HMSO and the Queen's Printer for Scotland.
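One ingredient of the second study is worth making concrete: recovering a per-egg contamination rate from eggs tested in pools of six. A maximum-likelihood sketch (the pool counts below are illustrative; the paper embeds this step in a fuller Bayesian model linking flock prevalence to egg positivity):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Eggs tested in pools of 6: estimate the per-egg contamination
# rate p from the number of positive pools.
pool_size = 6
n_pools, positive_pools = 667, 12            # illustrative counts (~4000 eggs)

def neg_log_lik(p):
    q = 1 - (1 - p) ** pool_size             # P(pool positive)
    return -(positive_pools * np.log(q) +
             (n_pools - positive_pools) * np.log(1 - q))

res = minimize_scalar(neg_log_lik, bounds=(1e-6, 0.5), method="bounded")
print(f"per-egg contamination rate: {res.x:.4f}")
```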
Meal frequency of pre-exercise carbohydrate feedings.
Chryssanthopoulos, C; Petridou, A; Maridaki, M; Mougios, V
2008-04-01
This study compared the effect of single and multiple carbohydrate feedings before exercise on biochemical and physiological responses during exercise. Eight males performed 3 runs for 1 h at 70% VO2max after consuming a meal containing 2.5 g carbohydrate per kg body mass as a single dose 3 h before exercise (SF), the same meal in 5 equal doses at 3, 2.5, 2, 1.5, and 1 h before exercise (MF), or a liquid placebo 3 h before exercise (P). RER and carbohydrate oxidation rates were higher in the SF and MF trials than in the P trials, but there was no difference between SF and MF. Pre-exercise insulin was 2.0- and 3.4-fold higher in SF and MF, respectively, compared to P, and 1.7-fold higher in MF compared to SF. Glycerol and NEFA were higher in P than in the SF and MF trials before and at the end of exercise. In conclusion, a carbohydrate meal containing 2.5 g/kg ingested in divided doses over the 3 h before running produced greater pre-exercise hyperinsulinemia than the same meal consumed as a single dose. Nevertheless, estimated carbohydrate utilization and adipose tissue lipolysis during exercise after multiple feedings seemed to be as high as after a single feeding.
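Carbohydrate oxidation rates in such studies are conventionally derived from gas exchange using Frayn-type stoichiometric equations. A sketch with illustrative gas-exchange values (protein oxidation neglected):

```python
# Whole-body substrate oxidation from gas exchange (indirect calorimetry),
# Frayn-type equations in g/min, protein oxidation assumed negligible.
def substrate_oxidation(vo2_l_min, vco2_l_min):
    cho = 4.55 * vco2_l_min - 3.21 * vo2_l_min   # carbohydrate oxidation
    fat = 1.67 * (vo2_l_min - vco2_l_min)        # fat oxidation
    return cho, fat

# Illustrative values for running at 70% VO2max.
vo2, vco2 = 2.8, 2.6                             # L/min
cho, fat = substrate_oxidation(vo2, vco2)
print(f"RER = {vco2 / vo2:.2f}, CHO = {cho:.2f} g/min, fat = {fat:.2f} g/min")
```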
Nova Scorpii and Coalescing Low-Mass Black Hole Binaries as LIGO Sources
NASA Astrophysics Data System (ADS)
Sipior, Michael S.; Sigurdsson, Steinn
2002-06-01
Double neutron star (NS-NS) binaries, analogous to the well-known Hulse-Taylor pulsar PSR 1913+16 (documented by Hulse & Taylor in 1974), are guaranteed-to-exist sources of high-frequency gravitational radiation detectable by LIGO. There is considerable uncertainty in the estimated rate of coalescence of such systems (see the work of Phinney in 1991, Narayan and coworkers in 1991, and Kalogera and coworkers in 2001), with conservative estimates of ~1 per 10^6 yr per galaxy, and optimistic theoretical estimates an order of magnitude or more larger. Formation rates of low-mass black hole (BH)-neutron star binaries may be higher than those of NS-NS binaries and may dominate the detectable LIGO signal rate. Rate estimates for such binaries are plagued by severe model uncertainties. Recent estimates by Portegies Zwart & Yungelson in 1998 and De Donder & Vanbeveren in 1998 suggest that BH-BH binaries do not coalesce at significant rates despite being formed at high rates. We estimate the enhanced coalescence rate for BH-BH binaries due to weak asymmetric kicks during the formation of low-mass black holes like Nova Sco (see the work of Brandt, Podsiadlowski, & Sigurdsson in 1995) and find they may contribute significantly to the LIGO signal rate, possibly dominating the phase I detectable signals if the range of black hole masses for which there is significant kick is broad enough. For a standard Salpeter initial mass function, assuming mild natal kicks, we project that the R6 merger rate (the rate of mergers per 10^6 yr in a Milky Way-like galaxy) of BH-BH systems is ~0.5, smaller than that of NS-NS systems. However, the higher chirp mass of these systems produces a signal nearly 4 times greater, on average, with a commensurate increase in search volume; hence our claim that BH-BH mergers (and, to a lesser extent, BH-NS coalescence) should comprise a significant fraction of the signal seen by LIGO. The BH-BH coalescence channel considered here also predicts that a substantial fraction of BH-BH systems should have at least one component with near-maximal spin (a/M~1). This comes from the spin-up provided by the fallback material after a supernova. If no mass transfer occurs between the two supernovae, both components could be spinning rapidly. The waveforms produced by the coalescence of such a system should produce a clear spin signature, so this hypothesis could be directly tested by LIGO.
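The "nearly 4 times greater" signal figure follows directly from the chirp-mass scaling of the waveform amplitude. A quick check, assuming 1.4 Msun neutron stars and ~7 Msun Nova Sco-like black holes (the mass choice is ours, for illustration):

```python
# GW amplitude at fixed distance scales as Mchirp^(5/6); detectable
# volume scales as amplitude^3.
def chirp_mass(m1, m2):
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

mc_ns = chirp_mass(1.4, 1.4)   # double neutron star
mc_bh = chirp_mass(7.0, 7.0)   # assumed Nova Sco-like black hole pair
gain = (mc_bh / mc_ns) ** (5.0 / 6.0)
print(f"amplitude ratio ~ {gain:.1f}, search-volume ratio ~ {gain ** 3:.0f}")
```

This yields an amplitude ratio of about 3.8, consistent with the factor quoted in the abstract, and a search volume larger by a factor of several tens.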
Pinto-Prades, Jose-Luis; Farreras, Veronica; de Bobadilla, Jaime Fernandez
2008-02-01
In order to allocate health care resources more efficiently, it is necessary to relate the health improvements provided by new medicines to their cost. It is necessary to ascertain when the additional cost of introducing a new health technology is justified by the additional health gain produced. Eplerenone is a new medicine that reduces the risk of death after myocardial infarction (MI) but produces additional cost to the health system. The contingent valuation approach can be used to measure the monetary value of this risk reduction and to estimate society's willingness to pay (WTP) for a new medicine that reduces the risk of death after MI by 2 percentage points. We used a contingent valuation approach to evaluate WTP amongst members of the general population, under both an ex-ante and an ex-post framing. In the ex-ante approach, subjects are asked if they would accept an increase in their taxes in order to have access to eplerenone should they need it in the future. In the ex-post approach, subjects are asked if they would pay a certain amount of money as a co-payment per month for 5 years if they suffered an MI. We used the dichotomous choice method, with five bids in each approach. WTP was estimated using both single-bound and double-bound dichotomous choice (SBDC, DBDC). Extensive piloting (n = 187) preceded the final survey (n = 350). WTP in the ex-ante case was euro 58 per year under both SBDC and DBDC. In the ex-post case, monthly WTP was euro 141 for SBDC and euro 85 for DBDC. Subjects with higher income and subjects with a higher perception of risk showed a higher WTP (P < 0.05). Society is willing to pay an additional amount of money in order to give eplerenone to present and future patients. We estimate that euro 85 per month is a conservative estimate of the monetary value of a 2 percentage point reduction in mortality risk after MI, and spending this additional amount on eplerenone can be considered an efficient policy.
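Estimation from single-bound dichotomous-choice data reduces to a binary regression of yes/no answers on the bid. A sketch with simulated responses (the bid ladder and data are invented; under a linear logit, mean WTP is -a/b):

```python
import numpy as np
from scipy.optimize import minimize

# Single-bound dichotomous choice: each respondent sees one bid and
# answers yes/no; Pr(yes) = 1 / (1 + exp(-(a + b*bid))), b < 0.
rng = np.random.default_rng(4)
bids = rng.choice([10.0, 30.0, 60.0, 100.0, 150.0], size=350)
true_wtp = rng.normal(58.0, 30.0, size=350)      # simulated latent WTP
yes = (true_wtp >= bids).astype(float)

def neg_log_lik(theta):
    a, b = theta
    p = 1.0 / (1.0 + np.exp(-(a + b * bids)))
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(yes * np.log(p) + (1 - yes) * np.log(1 - p))

a_hat, b_hat = minimize(neg_log_lik, x0=[1.0, -0.01]).x
print(f"estimated mean WTP: {-a_hat / b_hat:.0f} euros")
```

The double-bound variant adds a follow-up bid conditional on the first answer, which tightens the likelihood and usually the confidence interval.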
Implications of allometric model selection for county-level biomass mapping.
Duncanson, Laura; Huang, Wenli; Johnson, Kristofer; Swatantran, Anu; McRoberts, Ronald E; Dubayah, Ralph
2017-10-18
Carbon accounting in forests remains a large source of uncertainty in the global carbon cycle. Forest aboveground biomass is therefore an attribute of great interest for the forest management community, but the accuracy of aboveground biomass maps depends on the accuracy of the underlying field estimates used to calibrate models. These field estimates depend on the application of allometric models, which often have unknown and unreported uncertainties outside of the size class or environment in which they were developed. Here, we test three popular allometric approaches to field biomass estimation, and explore the implications of allometric model selection for county-level biomass mapping in Sonoma County, California. We test three allometric models: Jenkins et al. (For Sci 49(1): 12-35, 2003), Chojnacky et al. (Forestry 87(1): 129-151, 2014) and the US Forest Service's Component Ratio Method (CRM). We found that the Jenkins and Chojnacky models perform comparably, but that at both the field plot level and the total county level there was a ~20% difference between these estimates and the CRM estimates. Further, we show that discrepancies are greater in high-biomass areas with high canopy cover and relatively moderate heights (25-45 m). The CRM models, although on average ~20% lower than Jenkins and Chojnacky, produce higher estimates in the tallest forest samples (> 60 m), while Jenkins generally produces higher estimates of biomass in forests < 50 m tall. Discrepancies do not continually increase with increasing forest height, suggesting that the inclusion of height in allometric models is not the primary driver of the discrepancies. Models developed using all three allometries underestimate high biomass and overestimate low biomass, as expected with random forest biomass modeling. However, these deviations were generally larger using the Jenkins and Chojnacky allometries, suggesting that the CRM approach may be more appropriate for biomass mapping with lidar. These results confirm that allometric model selection considerably impacts biomass maps and estimates, and that allometric model errors remain poorly understood. Our finding that allometric model discrepancies are not explained by lidar heights suggests that allometric model form does not drive these discrepancies. A better understanding of the sources of allometric model errors, particularly in high-biomass systems, is essential for improved forest biomass mapping.
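The Jenkins et al. (2003) equations are national-scale allometries of the form AGB = exp(b0 + b1 ln(dbh)), with dbh in cm and biomass in kg; comparing two such curves shows how model choice propagates into plot-level discrepancies. A sketch with placeholder coefficients (illustrative values, not entries from the published species-group tables):

```python
import numpy as np

def jenkins_agb_kg(dbh_cm, b0, b1):
    """Allometry of the Jenkins et al. (2003) form:
    aboveground biomass (kg) = exp(b0 + b1 * ln(dbh in cm))."""
    return np.exp(b0 + b1 * np.log(dbh_cm))

# Placeholder coefficients for two hypothetical species groups
# (illustrative values, not the published table entries):
dbh = np.array([10.0, 25.0, 50.0, 80.0])
agb_group_a = jenkins_agb_kg(dbh, b0=-2.48, b1=2.48)
agb_group_b = jenkins_agb_kg(dbh, b0=-2.00, b1=2.35)

# Percent discrepancy between the two allometries per stem size:
pct_diff = 100 * (agb_group_a - agb_group_b) / agb_group_b
for d, p in zip(dbh, pct_diff):
    print(f"dbh {d:5.1f} cm: group A vs B differ by {p:+.0f}%")
```

Because the curves diverge with stem size, small coefficient differences compound into the ~20% plot- and county-level discrepancies reported above.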
Potential and Limitations of an Improved Method to Produce Dynamometric Wheels
García de Jalón, Javier
2018-01-01
A new methodology for the estimation of tyre-contact forces is presented. The new procedure is an evolution of a previous method based on harmonic elimination techniques, developed with the aim of producing low-cost dynamometric wheels. While the original method required stress measurements along many rim radial lines and the fulfilment of rigid symmetry conditions, the new methodology described in this article significantly reduces the number of required measurement points and greatly relaxes the symmetry constraints, without compromising the estimation error level. The reduction in the number of measuring radial lines increases the ripple of the demodulated signals due to non-eliminated higher-order harmonics; the calibration procedure must therefore be adapted to this new scenario. A new calibration procedure that takes the angular position of the wheel into account is described in full. This new methodology is tested on a standard commercial five-spoke car wheel. The results obtained are qualitatively compared to those derived from the former methodology, leading to the conclusion that the new method is both simpler and more robust thanks to the reduction in the number of measuring points, while the contact-force estimation error remains at an acceptable level. PMID:29439427
Angular motion estimation using dynamic models in a gyro-free inertial measurement unit.
Edwan, Ezzaldeen; Knedlik, Stefan; Loffeld, Otmar
2012-01-01
In this paper, we summarize the results of using dynamic models borrowed from tracking theory to describe the time evolution of the state vector and thereby estimate the angular motion in a gyro-free inertial measurement unit (GF-IMU). The GF-IMU is a special type of inertial measurement unit (IMU) that uses only a set of accelerometers to infer the angular motion. Using distributed accelerometers, we obtain an angular information vector (AIV) composed of angular acceleration and quadratic angular velocity terms. We use a Kalman filter approach to estimate the angular velocity vector since it is not expressed explicitly within the AIV. The bias parameters inherent in the accelerometer measurements produce a biased AIV, so the AIV bias parameters are estimated within an augmented state vector. Using dynamic models, the appended bias parameters of the AIV become observable, yielding an unbiased angular motion estimate. Moreover, a good model is required to extract the maximum amount of information from the observations. An observability analysis is performed to determine the conditions for an observable state space model. For higher grades of accelerometers and under relatively high sampling frequencies, the error of the accelerometer measurements is dominated by noise. Consequently, simulations are conducted on two models: one with bias parameters appended in the state space model and a reduced model without bias parameters.
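A minimal one-axis sketch of the filtering idea, assuming a constant-angular-acceleration dynamic model and an unbiased angular-acceleration measurement (the paper's full formulation is three-axis, nonlinear in the quadratic velocity terms, and augments the state with the AIV bias parameters):

```python
import numpy as np

# One-axis toy version: estimate angular velocity from noisy
# angular-acceleration samples (the kind of quantity a GF-IMU's AIV
# delivers) with a dynamic model. State x = [omega, alpha]; only
# alpha is measured, and the Kalman filter integrates it into omega.
dt = 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant angular acceleration model
H = np.array([[0.0, 1.0]])              # we observe angular acceleration
Q = np.diag([1e-6, 1e-3])               # process noise (model mismatch)
R = np.array([[0.05]])                  # measurement noise variance

x = np.zeros(2)
P = np.eye(2)
rng = np.random.default_rng(1)
t = np.arange(0, 5, dt)
true_alpha = 0.5 * np.sin(t)             # synthetic angular acceleration
true_omega = np.cumsum(true_alpha) * dt  # its integral

for a in true_alpha:
    z = a + rng.normal(0.0, R[0, 0] ** 0.5)  # noisy AIV sample
    x = F @ x                                # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                      # update
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P

print(f"final omega estimate {x[0]:.3f} vs truth {true_omega[-1]:.3f}")
```

In the paper's setting the measurement also contains a bias term; appending it to the state only helps if the dynamic model renders it observable, which is exactly the condition examined in the observability analysis above.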
Impact-induced thermal effects in the lunar and Mercurian regoliths
NASA Technical Reports Server (NTRS)
Cintala, Mark J.
1992-01-01
Thermal effects of micrometeoroid impact into the regoliths of the Moon and Mercury, and some comparisons between the regoliths of the two bodies, are presented. The impact calculations used to estimate the volumes of melt and vapor produced in the regoliths of the two bodies are described. An overview of the process of impact metamorphism in a modeled regolith target is presented, in which the roles played by impact velocity and target temperature in determining the quantities of melt and vapor are evaluated. The model impact process and fluxes are combined to estimate the production rates for impact melt and vapor on the two bodies, and the results are compared with those of previous studies. It is concluded that the rates of impact melting and vaporization on Mercury are much greater than on the Moon: in a given period of time, 14 times more melt and 20 times more vapor are produced on Mercury. A 'typical' Mercurian microcratering event produces 2.6 times more melt than its lunar counterpart, and the flux calculated for Mercury is 5.5 times higher than at the Moon.
Transceiver optics for interplanetary communications
NASA Astrophysics Data System (ADS)
Roberts, W. T.; Farr, W. H.; Rider, B.; Sampath, D.
2017-11-01
In-situ interplanetary science missions constantly push spacecraft communications systems to support successively higher downlink rates. However, the highly restrictive mass and power constraints placed on interplanetary spacecraft significantly limit the bandwidth increases achievable with current radio frequency (RF) technology. To overcome these limitations, we have evaluated the ability of free-space optical communications systems to make substantial gains in downlink bandwidth while holding to the mass and power limits allocated to current state-of-the-art Ka-band communications systems. A primary component of such an optical communications system is the optical assembly, comprising the optical support structure, optical elements, baffles and outer enclosure. We wish to estimate the total mass that such an optical assembly might require and assess what form it might take. Finally, to ground this generalized study, we produce a conceptual design and use it to verify the assembly's ability to achieve the required downlink gain, estimate its specific optical and opto-mechanical requirements, and evaluate the feasibility of producing the assembly.
The World Bank’s Clean Technology Fund (CTF)
2008-11-24
economies such as China and India. The incremental carbon dioxide (CO2) emissions from China and India alone have accounted for an estimated 62% of new... gas reductions. Carbon dioxide emissions from power plants fall as efficiency rises, so a plant with an 18% (absolute) increase in efficiency would produce roughly 18% less carbon dioxide. These are higher heating value (HHV) thermal efficiency rates, not to be confused with lower heating value (LHV) rates.
A Comparison of Two Methods for Initiating Air Mass Back Trajectories
NASA Astrophysics Data System (ADS)
Putman, A.; Posmentier, E. S.; Faiia, A. M.; Sonder, L. J.; Feng, X.
2014-12-01
Lagrangian air mass tracking programs run in backcast mode are a powerful tool for estimating the water vapor sources of precipitation events. The altitudes above the precipitation site at which particles' back trajectories begin influence the source estimation. We assume that precipitation comes from water vapor in condensing regions of the air column, so particles are placed in proportion to an estimated condensation profile. We compare two methods for estimating where condensation occurs, and the resulting evaporation sites, for 63 events at Barrow, AK. The first method (M1) uses measurements from a 35 GHz vertically resolved cloud radar (MMCR) and algorithms developed by Zhao and Garrett (2008) to calculate precipitation rate. The second method (M2) uses Global Data Assimilation System reanalysis data in a lofting model. We assess how accurately M2, developed for global coverage, performs in the absence of direct cloud observations. Results from the two methods are statistically similar. The mean particle height estimated by M2 is, on average, 695 m (s.d. = 1800 m) higher than M1's. The corresponding average vapor source estimated by M2 is 1.5° (s.d. = 5.4°) south of M1's. In addition, vapor sources for M2 relative to M1 have ocean surface temperatures averaging 1.1 °C (s.d. = 3.5 °C) warmer and reported ocean surface relative humidities 0.31% (s.d. = 6.1%) drier. All biases except the latter are statistically significant (p = 0.02 for each). Results were skewed by events where M2 estimated very high altitudes of condensation. When M2 produced an average particle height of less than 5000 m (89% of events), M2 estimated mean particle heights 76 m (s.d. = 741 m) higher than M1, corresponding to a vapor source 0.54° (s.d. = 4.2°) south of M1's. The ocean surface at the vapor source was an average of 0.35 °C (s.d. = 2.35 °C) warmer, and ocean surface relative humidities were 0.02% (s.d. = 5.5%) wetter. None of these biases was statistically significant. If the vapor source meteorology estimated by M2 is used to determine vapor isotopic properties, it would produce results similar to M1 in all cases except the occasional very high cloud. The methods strive to balance a sufficient number of tracked air masses for meaningful vapor source estimation with minimal computational time. Zhao, C. and Garrett, T.J. 2008, J. Geophys. Res.
Mills, Britain A.; Harris, T. Robert
2012-01-01
Objective: This study was conducted to examine discrepancies in alcohol consumption estimates between a self-reported standard quantity-frequency measure and an adjusted version based on respondents' typically used container size. Method: Using a multistage cluster sample design, 5,224 Hispanic individuals 18 years of age and older were selected from the household population in five metropolitan areas of the United States: Miami, New York, Philadelphia, Houston, and Los Angeles. The survey-weighted response rate was 76%. Personal interviews lasting an average of 1 hour were conducted in respondents' homes in either English or Spanish. Results: The overall effect of container adjustment was to increase estimates of ethanol consumption by 68% for women (range across Hispanic groups: 17%-99%) and 30% for men (range: 14%-42%). With the exception of female Cuban American, Mexican American, and South/Central American beer drinkers and male Cuban American wine drinkers, all percentage differences between unadjusted and container-adjusted estimates were positive. Container adjustments produced the largest change for volume of distilled spirits, followed by wine and beer, and generally produced larger percentage increases in consumption estimates for higher-volume drinkers, especially the upper tertile of female drinkers. Conclusions: Self-reported alcohol consumption based on standard drinks underreports consumption when compared with reports based on the amount of alcohol poured into commonly used containers. PMID:22152669
NASA Astrophysics Data System (ADS)
Yoshikawa, C.; Sasai, Y.; Wakita, M.; Honda, M. C.; Fujiki, T.; Harada, N.; Makabe, A.; Matsushima, S.; Toyoda, S.; Yoshida, N.; Ogawa, N. O.; Suga, H.; Ohkouchi, N.
2016-02-01
Based on the observed inverse relationship between the dissolved oxygen and N2O concentrations in the ocean, previous models have indirectly predicted marine N2O emissions from the apparent oxygen utilization (AOU). In this study, a marine ecosystem model that incorporates nitrous oxide (N2O) production processes (i.e., ammonium oxidation during nitrification and nitrite reduction during nitrifier denitrification) was newly developed to estimate the sea-air N2O flux and to quantify N2O production processes. Site preference of ¹⁵N (SP) in N2O isotopomers (¹⁴N¹⁵N¹⁶O and ¹⁵N¹⁴N¹⁶O) and the average nitrogen isotope ratio (δ¹⁵N) were added to the model because they are useful tracers for distinguishing between ammonium oxidation and nitrite reduction. This model was applied to two contrasting time series sites, a subarctic station (K2) and a subtropical station (S1) in the western North Pacific. The model was validated with observed nitrogen concentration and nitrogen isotopomer datasets, and successfully simulated the higher N2O concentrations, higher δ¹⁵N values, and higher site preference values for N2O at K2 compared with S1. The annual mean N2O emissions were estimated to be 34 mg N m⁻² yr⁻¹ at K2 and 2 mg N m⁻² yr⁻¹ at S1. Using this model, we conducted three case studies: 1) estimating the ratio of in-situ biological N2O production to nitrate (NO3-) production during nitrification, 2) estimating the ratio of N2O production by ammonium oxidation to that by nitrite reduction, and 3) estimating the ratio of archaeal (AOA) ammonium oxidation to bacterial (AOB) ammonium oxidation. The case studies estimated the ratio of in-situ biological N2O production to nitrate production during nitrification to be 0.22% at K2 and 0.06% at S1. They also suggest that N2O was mainly produced via ammonium oxidation at K2 but via both ammonium oxidation and nitrite reduction at S1, and reveal that 80% of the ammonium oxidation at K2 was carried out by archaea in the subsurface water. The results of isotope tracer incubation experiments using an archaeal activity inhibitor supported this conclusion.
Disruption of State Estimation in the Human Lateral Cerebellum
Miall, R. Chris; Christensen, Lars O. D; Cain, Owen; Stanley, James
2007-01-01
The cerebellum has been proposed to be a crucial component in the state estimation process that combines information from motor efferent and sensory afferent signals to produce a representation of the current state of the motor system. Such a state estimate of the moving human arm would be expected to be used when the arm is rapidly and skillfully reaching to a target. We now report the effects of transcranial magnetic stimulation (TMS) over the ipsilateral cerebellum as healthy humans were made to interrupt a slow voluntary movement to rapidly reach towards a visually defined target. Errors in the initial direction and in the final finger position of this reach-to-target movement were significantly higher for cerebellar stimulation than they were in control conditions. The average directional errors in the cerebellar TMS condition were consistent with the reaching movements being planned and initiated from an estimated hand position that was 138 ms out of date. We suggest that these results demonstrate that the cerebellum is responsible for estimating the hand position over this time interval and that TMS disrupts this state estimate. PMID:18044990
Duchêne, Sebastián; Geoghegan, Jemma L; Holmes, Edward C; Ho, Simon Y W
2016-11-15
In rapidly evolving pathogens, including viruses and some bacteria, genetic change can accumulate over short time-frames. Accordingly, their sampling times can be used to calibrate molecular clocks, allowing estimation of evolutionary rates. Methods for estimating rates from time-structured data vary in how they treat phylogenetic uncertainty and rate variation among lineages. We compiled 81 virus data sets and estimated nucleotide substitution rates using root-to-tip regression, least-squares dating and Bayesian inference. Although estimates from these three methods were often congruent, this largely relied on the choice of clock model. In particular, relaxed-clock models tended to produce higher rate estimates than methods that assume constant rates. Discrepancies in rate estimates were also associated with high among-lineage rate variation, and phylogenetic and temporal clustering. These results provide insights into the factors that affect the reliability of rate estimates from time-structured sequence data, emphasizing the importance of clock-model testing.
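Root-to-tip regression is the simplest of the three rate estimators named here: regress each tip's distance from the root against its sampling date, and the slope estimates the substitution rate while the x-intercept estimates the age of the root. A sketch on synthetic data (the rate, dates and noise level are invented):

```python
import numpy as np

# Root-to-tip regression: slope of root-to-tip distance vs sampling
# time estimates the substitution rate; x-intercept estimates the
# root age. Synthetic, illustrative data only.
rng = np.random.default_rng(2)
sampling_years = rng.uniform(1990, 2015, size=60)
true_rate = 1.5e-3            # substitutions/site/year (hypothetical)
root_year = 1980.0
distances = (true_rate * (sampling_years - root_year)
             + rng.normal(0, 5e-3, size=60))

slope, intercept = np.polyfit(sampling_years, distances, 1)
print(f"estimated rate: {slope:.2e} subs/site/year")
print(f"estimated root age: {-intercept / slope:.0f}")
```

Unlike the Bayesian and least-squares approaches, this regression ignores the shared ancestry of the tips, which is one reason the methods can disagree when among-lineage rate variation or temporal clustering is strong.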
Ice shelf basal melt rates around Antarctica from simulations and observations
NASA Astrophysics Data System (ADS)
Schodlok, M. P.; Menemenlis, D.; Rignot, E. J.
2016-02-01
We introduce an explicit representation of Antarctic ice shelf cavities in the Estimating the Circulation and Climate of the Ocean, Phase II (ECCO2) ocean retrospective analysis; and compare resulting basal melt rates and patterns to independent estimates from satellite observations. Two simulations are carried out: the first is based on the original ECCO2 vertical discretization; the second has higher vertical resolution particularly at the depth range of ice shelf cavities. The original ECCO2 vertical discretization produces higher than observed melt rates and leads to a misrepresentation of Southern Ocean water mass properties and transports. In general, thicker levels at the base of the ice shelves lead to increased melting because of their larger heat capacity. This strengthens horizontal gradients and circulation within and outside the cavities and, in turn, warm water transports from the shelf break to the ice shelves. The simulation with more vertical levels produces basal melt rates (1735 ± 164 Gt/a) and patterns that are in better agreement with observations. Thinner levels in the sub-ice-shelf cavities improve the representation of a fresh/cold layer at the ice shelf base and of warm/salty water near the bottom, leading to a sharper pycnocline and reduced vertical mixing underneath the ice shelf. Improved water column properties lead to more accurate melt rates and patterns, especially for melt/freeze patterns under large cold-water ice shelves. At the 18 km grid spacing of the ECCO2 model configuration, the smaller, warm-water ice shelves cannot be properly represented, with higher than observed melt rates in both simulations.
Venus - Global gravity and topography
NASA Technical Reports Server (NTRS)
Mcnamee, J. B.; Borderies, N. J.; Sjogren, W. L.
1993-01-01
A new gravity field determination has been produced that combines both Pioneer Venus Orbiter (PVO) and Magellan Doppler radio data. Comparisons between this estimate, a spherical harmonic model of degree and order 21, and previous models show that significant improvements have been made. Results are displayed as gravity contours overlaying a topographic map. We also calculate a new spherical harmonic model of topography based on Magellan altimetry, with PVO altimetry included where gaps exist in the Magellan data. This model is also of degree and order 21, so in conjunction with the gravity model, Bouguer and isostatic anomaly maps can be produced. These results are very consistent with previous results but reveal more spatial resolution at higher latitudes.
Crangle, R.D.
2012-01-01
The United States is the world's fourth leading producer and consumer of gypsum. Production of gypsum in the U.S. during 2011 was estimated to be 9.4 Mt (10.3 million st), an increase of 6 percent compared with 2010 production. The average price of mined crude gypsum was $7/t ($6.35/st). Synthetic gypsum, most of which is generated as a byproduct of flue-gas desulfurization at coal-fired electric power plants, was priced at approximately $1.50/t ($1.36/st). Forty-seven companies produced gypsum in the U.S. at 54 mines and plants in 34 states. U.S. gypsum exports totaled about 300 kt (330,000 st). Imports were much higher at approximately 3.3 Mt (3.6 million st).
Night sampling improves indices used for management of yellow perch in Lake Erie
Kocovsky, P.M.; Stapanian, M.A.; Knight, C.T.
2010-01-01
Catch rate (catch per hour, CPH) was examined for age-0 and age-1 yellow perch, Perca flavescens (Mitchill), captured in bottom trawls from 1991 to 2005 in western Lake Erie: (1) to examine variation in catch rate among years, seasons, diel periods and their interactions; and (2) to determine whether sampling during particular diel periods improved the management value of CPH data used in models to project abundance of age-2 yellow perch. Catch rate varied with year, season and the diel period during which sampling was conducted, as well as with the interaction between year and season. Indices of abundance of age-0 and age-1 yellow perch estimated from night samples typically produced better-fitting models and lower estimates of age-2 abundance than those using morning or afternoon samples, whereas indices using afternoon samples typically produced less precise and higher estimates of abundance. The diel period during which sampling is conducted will not affect observed population trends but may affect estimates of abundance of age-0 and age-1 yellow perch, which in turn affect recommended allowable harvest. A field experiment throughout western Lake Erie is recommended to examine the potential benefits of night sampling for the management of yellow perch.
NASA Astrophysics Data System (ADS)
Sherwood, S. C.; Fuchs, D.; Bony, S.; Jean-Louis, D.
2014-12-01
Earth's climate sensitivity has been the subject of heated debate for decades, and recently spurred renewed interest after the latest IPCC assessment report suggested a downward adjustment of the most likely range of climate sensitivities. Here, we present an observation-based study of the period 1964 to 2010, which is unique in that it does not rely on global climate models (GCMs) in any way. The study uses surface observations of temperature and incoming solar radiation from approximately 1300 surface sites, along with observations of the equivalent CO2 concentration (CO2,eq) in the atmosphere, to produce a new best estimate for the transient climate sensitivity of 1.9 K (95% confidence interval 1.2-2.7 K). This is higher than other recent observation-based estimates, and is better aligned with the estimate of 1.8 K and range (1.1-2.5 K) derived from the latest generation of GCMs. The new estimate is produced by incorporating the observations in an energy balance framework and by applying statistical methods that are standard in the field of econometrics but less common in climate studies. The study further suggests that about a third of the continental warming due to increasing CO2,eq was masked by aerosol cooling during the period studied.
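A hedged sketch of the energy-balance regression idea, using the standard F = 5.35 ln(C/C0) forcing approximation; the CO2,eq series, noise level and simple OLS fit are placeholders, and the study's actual econometric treatment (including aerosol masking) is far richer:

```python
import numpy as np

# Regress temperature anomalies on CO2,eq radiative forcing, then
# convert the slope to warming per CO2 doubling. Synthetic data only.
rng = np.random.default_rng(3)
years = np.arange(1964, 2011)
co2_eq = 320 * np.exp(0.005 * (years - 1964))   # hypothetical ppm series
forcing = 5.35 * np.log(co2_eq / co2_eq[0])     # W m-2 (standard formula)
true_sensitivity = 1.9 / (5.35 * np.log(2))     # K per (W m-2), so TCR = 1.9 K
temp = true_sensitivity * forcing + rng.normal(0, 0.1, years.size)

lam = np.polyfit(forcing, temp, 1)[0]           # K per (W m-2)
tcr = lam * 5.35 * np.log(2)                    # K per CO2 doubling
print(f"transient sensitivity estimate: {tcr:.2f} K per doubling")
```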
NASA Astrophysics Data System (ADS)
Mafanya, Madodomzi; Tsele, Philemon; Botai, Joel; Manyama, Phetole; Swart, Barend; Monate, Thabang
2017-07-01
Invasive alien plants (IAPs) not only pose a serious threat to biodiversity and water resources but also affect human and animal wellbeing. To support decision making in IAP monitoring, semi-automated image classifiers capable of extracting valuable information from remotely sensed data are vital. This study evaluated the mapping accuracies of supervised and unsupervised image classifiers for mapping Harrisia pomanensis (a cactus plant commonly known as the Midnight Lady) using two interlinked evaluation strategies, i.e. point- and area-based accuracy assessment. Results of the point-based accuracy assessment show that, with reference to 219 ground control points, the supervised image classifiers (i.e. Maxver and Bhattacharya) mapped H. pomanensis better than the unsupervised image classifiers (i.e. K-mediuns, Euclidian Length and Isoseg). In this regard, user and producer accuracies were 82.4% and 84% respectively for the Maxver classifier, and 90% and 95.7% respectively for the Bhattacharya classifier. Though the Maxver produced a higher overall accuracy and Kappa estimate than the Bhattacharya classifier, the Maxver Kappa estimate of 0.8305 is not statistically significantly greater than the Bhattacharya Kappa estimate of 0.8088 at a 95% confidence interval. The area-based accuracy assessment results show that the Bhattacharya classifier estimated the spatial extent of H. pomanensis with an average mapping accuracy of 86.1%, whereas the Maxver classifier only gave an average mapping accuracy of 65.2%. Based on these results, the Bhattacharya classifier is therefore recommended for mapping H. pomanensis. These findings will aid algorithm selection in the development of a semi-automated image classification system for mapping IAPs.
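User's accuracy, producer's accuracy and the Kappa statistic are all derived from a confusion matrix; a sketch with made-up counts showing the formulas (the paper's significance test then compares two such Kappa estimates):

```python
import numpy as np

# 2x2 confusion matrix (rows = classified, columns = reference).
# Counts are invented for illustration; only the formulas matter.
cm = np.array([[150, 18],    # classified as target: 150 correct, 18 commission
               [ 12, 180]])  # classified as other:  12 omission, 180 correct

n = cm.sum()
overall = np.trace(cm) / n
users = np.diag(cm) / cm.sum(axis=1)       # 1 - commission error
producers = np.diag(cm) / cm.sum(axis=0)   # 1 - omission error

# Cohen's kappa: agreement beyond chance expectation.
expected = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / n**2
kappa = (overall - expected) / (1 - expected)

print(f"overall {overall:.3f}, user's {users}, producer's {producers}")
print(f"kappa {kappa:.3f}")
```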
Projected 1981 exposure estimates using iterative proportional fitting
DOT National Transportation Integrated Search
1985-10-01
1981 VMT estimates categorized by eight driver, vehicle, and environmental variables are produced. These 1981 estimates are produced using analytical methods developed in a previous report. The estimates are based on 1977 NPTS data (the latest ...
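Iterative proportional fitting (raking) rescales a seed contingency table until its margins match known target totals; a minimal sketch with invented counts and margins (not the NPTS driver/vehicle/environment categories):

```python
import numpy as np

# Rake a toy 3x2 seed table of survey counts to hypothetical targets.
seed = np.array([[20., 30.],
                 [40., 10.],
                 [15., 35.]])
row_targets = np.array([60., 55., 45.])
col_targets = np.array([90., 70.])   # must sum to the same grand total

table = seed.copy()
for _ in range(100):
    table *= (row_targets / table.sum(axis=1))[:, None]   # match row margins
    table *= (col_targets / table.sum(axis=0))[None, :]   # match column margins
    if np.allclose(table.sum(axis=1), row_targets, atol=1e-9):
        break

print(np.round(table, 2))
```

The fitted table preserves the interaction structure of the seed while honoring the target margins, which is what makes the method suitable for projecting older survey data to a later year.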
Kenneth Skog; Richard W. Haynes
2002-01-01
The RPA Timber Assessment projects, over the next 50 years, the likelihood of increasing relative use of imported forest products versus U.S. roundwood harvest to meet U.S. consumption needs, although our projected consumption needs are now estimated to be lower than we projected in 1993. The projected higher imports relative to U.S. roundwood harvest are due in part to...
Tamura, Motoi; Hori, Sachiko; Nakagawa, Hiroyuki
2011-01-01
Much attention has been focused on the biological effects of equol, a metabolite of daidzein produced by intestinal microbiota. However, little is known about the role of isoflavone metabolizing bacteria in the intestinal microbiota. Recently, we isolated a dihydrodaidzein (DHD)-producing Clostridium-like bacterium, strain TM-40, from human feces. We investigated the effects of strain TM-40 on in vitro daidzein metabolism by human fecal microbiota from a male equol producer and two male equol non-producers. In the fecal suspension from the male equol non-producer and DHD producer, DHD was detected in the in vitro fecal incubation of daidzein after addition of TM-40. The DHD concentration increased as the concentration of strain TM-40 increased. In the fecal suspension from the equol producer, the fecal equol production was increased by the addition of strain TM-40. The occupation ratios of Bifidobacterium and Lactobacillales were higher in the equol non-producers than in the equol producer. Adding isoflavone-metabolizing bacteria to the fecal microbiota should facilitate the estimation of the metabolism of isoflavonoids by fecal microbiota. Studies on the interactions among equol-producing microbiota and DHD-producing bacteria might lead to clarification of some of the mechanisms regulating the production of equol by fecal microbiota.
Jelicic Kadic, Antonia; Vucic, Katarina; Dosenovic, Svjetlana; Sapunar, Damir; Puljak, Livia
2016-06-01
To compare the speed and accuracy of graphical data extraction using manual estimation and open source software. Data points from eligible graphs/figures published in randomized controlled trials (RCTs) from 2009 to 2014 were extracted by two authors independently, both by manual estimation and with Plot Digitizer, an open source software package. Corresponding authors of each RCT were contacted up to four times via e-mail to obtain the exact numbers that were used to create the graphs. The accuracy of each method was compared against the source data from which the original graphs were produced. Software data extraction was significantly faster, reducing extraction time by 47%. Percent agreement between the two raters was 51% for manual and 53.5% for software data extraction. Percent agreement between the raters and the original data was 66% vs. 75% for the first rater and 69% vs. 73% for the second rater, for manual and software extraction, respectively. Data extraction from figures should be conducted using software, whereas manual estimation should be avoided. Using software to extract data presented only in figures is faster and enables higher interrater reliability.
Preliminary Assessment of Spatial Competition in the Market for E85
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clinton, Bentley
Anecdotal evidence suggests retail E85 prices may track retail gasoline prices rather than wholesale costs. This indicates E85 prices may be higher than they would be if priced on a cost basis, thereby limiting adoption by some price-sensitive consumers. Using publicly available and proprietary E85 and regular gasoline price data, we examine pricing behavior in the market for E85. Specifically, we assess the extent to which local retail competition in E85 markets decreases E85 retail prices. Results of econometric analysis suggest that higher levels of retail competition (measured in terms of station density) are associated with lower E85 prices at the pump. While more precise causal estimates may be produced from more comprehensive data, this study is the first to our knowledge that estimates the spatial competition dimension of E85 pricing behavior by firms. This is an initial presentation; a related technical report is also available.
Preliminary Assessment of Spatial Competition in the Market for E85: Presentation Supplement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clinton, Bentley; Johnson, Caley; Moriarty, Kristi
Anecdotal evidence suggests retail E85 prices may track retail gasoline prices rather than wholesale costs. This indicates E85 prices may be higher than they would be if priced on a cost basis, thereby limiting adoption by some price-sensitive consumers. Using publicly available and proprietary E85 and regular gasoline price data, we examine pricing behavior in the market for E85. Specifically, we assess the extent to which local retail competition in E85 markets decreases E85 retail prices. Results of econometric analysis suggest that higher levels of retail competition (measured in terms of station density) are associated with lower E85 prices at the pump. While more precise causal estimates may be produced from more comprehensive data, this study is the first to our knowledge that estimates the spatial competition dimension of E85 pricing behavior by firms. This technical report elaborates on a related presentation.
NASA Astrophysics Data System (ADS)
Chen, Yuanchen; Shen, Guofeng; Liu, Weijian; Du, Wei; Su, Shu; Duan, Yonghong; Lin, Nan; Zhuo, Shaojie; Wang, Xilong; Xing, Baoshan; Tao, Shu
2016-01-01
Pollutant emissions into outdoor air from cooking and space heating with various solid fuels were measured, and daily household emissions were estimated from kitchen performance tests. Burning honeycomb briquettes had the lowest emission factors, while burning wood produced the highest. Daily emissions from space heating were significantly higher than those from cooking, and using honeycomb briquettes for cooking and raw coal chunks for space heating reduces CO, PM10 and PM2.5 emissions by 28%, 24% and 25%, respectively, compared to wood for cooking and peat for space heating. Much higher emissions were observed during the initial phase than the stable phase, due to insufficient air supply and lower combustion temperature at the beginning of the burning process. However, the greater mass fraction of fine particles formed in the later high-temperature stable burning phase may increase potential inhalation exposure risks.
Broom, Mark; Johanis, Michal; Rychtář, Jan
2018-01-01
In the "producer-scrounger" model, a producer discovers a resource and is in turn discovered by a second individual, the scrounger, who attempts to steal it. This resource can be food or a territory, and in some situations, potentially divisible. In a previous paper we considered a producer and scrounger competing for an indivisible resource, where each individual could choose the level of energy that they would invest in the contest. The higher the investment, the higher the probability of success, but also the higher the costs incurred in the contest. In that paper decisions were sequential with the scrounger choosing their strategy before the producer. In this paper we consider a version of the game where decisions are made simultaneously. For the same cost functions as before, we analyse this case in detail, and then make comparisons between the two cases. Finally we discuss some real examples with potentially variable and asymmetric energetic investments, including intraspecific contests amongst spiders and amongst parasitoid wasps. In the case of the spiders, detailed estimates of energetic expenditure are available which demonstrate the asymmetric values assumed in our models. For the wasps the value of the resource can affect the probabilities of success of the defender and attacker, and differential energetic investment can be inferred. In general for real populations energy usage varies markedly depending upon crucial parameters extrinsic to the individual such as resource value and intrinsic ones such as age, and is thus an important factor to consider when modelling.
Jedrejčić, Nikolina; Ganić, Karin Kovačević; Staver, Mario; Peršurić, Đordano
2015-01-01
To investigate the phenolic and aroma composition of Malvazija istarska (Vitis vinifera L.) white wines produced by an unconventional technology comprising prolonged maceration followed by maturation in wooden barrels, representative samples were subjected to analysis by UV/Vis spectrometry, high-performance liquid chromatography, and gas chromatography-mass spectrometry. When compared to standard wines, the investigated samples contained higher levels of dry extract, volatile acidity, lactic acid, phenols, colour intensity, antioxidant activity, most monoterpenes, C13-norisoprenoids, methanol, higher alcohols, ethyl acetate, branched-chain esters and esters of hydroxy and dicarboxylic acids, ethylphenols, furans, and acetals, as well as lower levels of malic acid, β-damascenone, straight-chain fatty acids, and ethyl and acetate esters. It was estimated that maceration had a stronger influence on phenols, and maturation on volatile aromas. Despite different vintages and technological details, the investigated wines showed a relative homogeneity in composition, representing a clear and distinctive type. PMID:27904375
Ion cyclotron waves at Saturn: Implications of latitudinal distribution for the neutral water torus
NASA Astrophysics Data System (ADS)
Crary, F. J.; Dols, V. J.
2016-12-01
Ion cyclotron waves in Saturn's magnetosphere, produced by freshly created pickup ions, are an indication of plasma production and constrain the distribution of the parent neutrals. Cassini spacecraft observations have shown that these waves are generally present between 4 and 6 Saturn radii, are generated near the equator and propagate to higher latitudes. Wave amplitudes peak at approximately 2 degrees off the equator, where the amplitude is roughly twice its equatorial value. At higher latitudes, the wave amplitudes decrease, dropping by over an order of magnitude by 5 degrees latitude. This has been interpreted as advective growth due to equatorially confined pickup ions. Away from this source population, the waves are damped by the thermal background ions. Here, we present an analysis of this growth and damping. Using both analytic theory and hybrid simulations, we calculate ion cyclotron wave amplitudes as a function of latitude. These results allow us to estimate the vertical extent of the neutral cloud.
An estimation of the artisanal small-scale production of gold in the world.
Seccatore, Jacopo; Veiga, Marcello; Origliasso, Chiara; Marin, Tatiane; De Tomi, Giorgio
2014-10-15
The increase in the gold price of over 400% between 2002 and 2012, due to a shift towards safe investments in a period of crisis in the global economy, created a rapid increase in gold production. A response to this shift in production was observed for artisanal and small-scale mining (ASM) units in remote locations of the world, but this phenomenon had not been quantified. The work presented here provides a quantitative tool for estimating the gold (Au) produced by ASM and the population of workers involved in the production process, and for assessing the mercury (Hg) consumed. The following hypotheses were addressed: i) it is possible to estimate, to a first approximation, the amount of Au produced in the world by artisanal mining; ii) Au production by artisanal mining varies by country and continent; and iii) Hg consumption due to ASM can be correlated with the methods applied in the different countries and continents for the production of Au. To do this we estimated the number of miners, calculated the change in Au price and production, and then applied an adjustment factor to calculate Hg consumption by country and continent. The amount of Au produced depends on the technology of the miners by continent (highest in South America, medium in Asia and Central America, and lowest in Africa) and on the geologic setting (not investigated here). The results of the estimation show that, as of 2011, over 16 million artisanal miners worldwide were involved in gold extraction (mining or treatment), producing between 380 and 450 t of gold per year, with clear differences between the continents in terms of recovery efficiency, confirmed by data on Hg release, which is higher in countries with lower technology.
Gastelum, Sandra L; Mejía-Velázquez, G M; Lozano-García, D Fabián
2016-06-01
In addition to oxygen, hydrocarbons are the most reactive chemical compounds emitted by plants into the atmosphere. These compounds are part of the family of volatile organic compounds (VOCs) and are released in a great variety of forms. Among the VOCs produced by natural sources such as vegetation, the most studied to date are isoprene and the monoterpenes. These substances can play an important role in the chemical balance of the atmosphere of a region. In this project, we developed a methodology to estimate the natural (vegetation) emissions of isoprene and monoterpenes and applied it to the Monterrey Metropolitan Area, Mexico, and its surrounding areas. Landsat-TM data were used to identify the dominant vegetation communities, and field work was used to determine the foliage biomass density of key species. The studied communities were submontane scrub, oak and pine forests, and a combination of both forest types. We estimated emissions of isoprene and monoterpenes in the different plant communities using two different criteria: (1) the average foliage biomass density obtained from the various sample points in each vegetation community, and (2) the foliage biomass density obtained for each transect, associated with an individual spectral class within a particular vegetation type. With this information, we obtained emission maps for each case. The results show that the main producers of isoprene are the communities that include species of the genus Quercus, located mainly on the Sierra Madre Oriental and Sierra de Picachos, with average isoprene emissions of 314.6 ton/day and 207.3 ton/day for the two methods. The higher estimates of monoterpenes were found in the submontane scrub areas distributed along the valley of the metropolitan zone, with estimated average emissions of 47.1 ton/day and 181.4 ton/day for the two methods, respectively.
Locatelli, R.; Bousquet, P.; Chevallier, F.; ...
2013-10-08
A modelling experiment has been conceived to assess the impact of transport model errors on methane emissions estimated by an atmospheric inversion system. Synthetic methane observations, obtained from 10 different model outputs from the international TransCom-CH4 model inter-comparison exercise, are combined with a prior scenario of methane emissions and sinks, and integrated into the three-component PYVAR-LMDZ-SACS (PYthon VARiational-Laboratoire de Météorologie Dynamique model with Zooming capability-Simplified Atmospheric Chemistry System) inversion system to produce 10 different methane emission estimates at the global scale for the year 2005. The same methane sinks, emissions and initial conditions were applied to produce the 10 synthetic observation datasets. The same inversion set-up (statistical errors, prior emissions, inverse procedure) is then applied to derive flux estimates by inverse modelling. Consequently, only differences in the modelling of atmospheric transport may cause differences in the estimated fluxes. In this framework, we show that transport model errors lead to a discrepancy of 27 Tg yr⁻¹ at the global scale, representing 5% of total methane emissions. At continental and annual scales, transport model errors are proportionally larger than at the global scale, with errors ranging from 36 Tg yr⁻¹ in North America to 7 Tg yr⁻¹ in Boreal Eurasia (from 23 to 48%, respectively). At the model grid scale, the spread of inverse estimates can reach 150% of the prior flux. Therefore, transport model errors contribute significantly to overall uncertainties in emission estimates by inverse modelling, especially when small spatial scales are examined. Sensitivity tests have been carried out to estimate the impact of the measurement network and the advantage of higher horizontal resolution in transport models. The large differences found between methane flux estimates inferred in these different configurations seriously question the consistency of transport model errors in current inverse systems.
van de Kassteele, Jan; Zwakhals, Laurens; Breugelmans, Oscar; Ameling, Caroline; van den Brink, Carolien
2017-07-01
Local policy makers increasingly need information on health-related indicators at smaller geographic levels like districts or neighbourhoods. Although more large data sources have become available, direct estimates of the prevalence of a health-related indicator cannot be produced for neighbourhoods for which only small samples or no samples are available. Small area estimation provides a solution, but unit-level models for binary-valued outcomes that can handle both non-linear effects of the predictors and spatially correlated random effects in a unified framework are rarely encountered. We used data on 26 binary-valued health-related indicators collected on 387,195 persons in the Netherlands. We associated the health-related indicators at the individual level with a set of 12 predictors obtained from national registry data. We formulated a structured additive regression model for small area estimation. The model captured potential non-linear relations between the predictors and the outcome through additive terms in a functional form using penalized splines and included a term that accounted for spatially correlated heterogeneity between neighbourhoods. The registry data were used to predict individual outcomes which in turn are aggregated into higher geographical levels, i.e. neighbourhoods. We validated our method by comparing the estimated prevalences with observed prevalences at the individual level and by comparing the estimated prevalences with direct estimates obtained by weighting methods at municipality level. We estimated the prevalence of the 26 health-related indicators for 415 municipalities, 2599 districts and 11,432 neighbourhoods in the Netherlands. We illustrate our method on overweight data and show that there are distinct geographic patterns in the overweight prevalence. Calibration plots show that the estimated prevalences agree very well with observed prevalences at the individual level. The estimated prevalences agree reasonably well with the direct estimates at the municipal level. Structured additive regression is a useful tool to provide small area estimates in a unified framework. We are able to produce valid nationwide small area estimates of 26 health-related indicators at neighbourhood level in the Netherlands. The results can be used for local policy makers to make appropriate health policy decisions.
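A simplified unit-level sketch of the modelling idea: a binary indicator regressed on a smooth covariate effect, with individual predictions then aggregated to areas. Regression splines via patsy's bs() stand in for the paper's penalized splines, and the spatially correlated random effect is omitted; all data below are synthetic:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic registry-style data: one row per person, with an area code.
rng = np.random.default_rng(4)
n = 5000
df = pd.DataFrame({
    "age": rng.uniform(18, 90, n),
    "income": rng.normal(0, 1, n),
    "area": rng.integers(0, 50, n),
})
logit = -3 + 0.06 * df.age - 0.0004 * df.age**2 - 0.3 * df.income
df["overweight"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Binomial GLM with a spline in age (non-linear predictor effect).
fit = smf.glm("overweight ~ bs(age, df=4) + income",
              data=df, family=sm.families.Binomial()).fit()

# Predict each person's probability, then aggregate to area level.
df["p"] = fit.predict(df)
area_prev = df.groupby("area")["p"].mean()
print(area_prev.head())
```

The key design point mirrors the paper: prediction happens at the individual level from registry covariates, so prevalences can be aggregated to any geography, including neighbourhoods with no survey respondents.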
Estimating parameters for probabilistic linkage of privacy-preserved datasets.
Brown, Adrian P; Randall, Sean M; Ferrante, Anna M; Semmens, James B; Boyd, James H
2017-07-10
Probabilistic record linkage is a process used to bring together person-based records from within the same dataset (de-duplication) or from disparate datasets using pairwise comparisons and matching probabilities. The linkage strategy and associated match probabilities are often estimated through investigations into data quality and manual inspection. However, as privacy-preserved datasets comprise encrypted data, such methods are not possible. In this paper, we present a method for estimating the probabilities and threshold values for probabilistic privacy-preserved record linkage using Bloom filters. Our method was tested through a simulation study using synthetic data, followed by an application using real-world administrative data. Synthetic datasets were generated with error rates from zero to 20% error. Our method was used to estimate parameters (probabilities and thresholds) for de-duplication linkages. Linkage quality was determined by F-measure. Each dataset was privacy-preserved using separate Bloom filters for each field. Match probabilities were estimated using the expectation-maximisation (EM) algorithm on the privacy-preserved data. Threshold cut-off values were determined by an extension to the EM algorithm allowing linkage quality to be estimated for each possible threshold. De-duplication linkages of each privacy-preserved dataset were performed using both estimated and calculated probabilities. Linkage quality using the F-measure at the estimated threshold values was also compared to the highest F-measure. Three large administrative datasets were used to demonstrate the applicability of the probability and threshold estimation technique on real-world data. Linkage of the synthetic datasets using the estimated probabilities produced an F-measure that was comparable to the F-measure using calculated probabilities, even with up to 20% error. Linkage of the administrative datasets using estimated probabilities produced an F-measure that was higher than the F-measure using calculated probabilities. Further, the threshold estimation yielded results for F-measure that were only slightly below the highest possible for those probabilities. The method appears highly accurate across a spectrum of datasets with varying degrees of error. As there are few alternatives for parameter estimation, the approach is a major step towards providing a complete operational approach for probabilistic linkage of privacy-preserved datasets.
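A minimal sketch of EM estimation of match (m) and non-match (u) probabilities from binary field-agreement vectors, assuming conditional independence between fields; the error rates and field count are invented, and the Bloom-filter encoding step itself is not shown:

```python
import numpy as np

# Fellegi-Sunter-style EM on synthetic agreement vectors: estimate
# m = P(agree | match), u = P(agree | non-match) and the match
# proportion p without labelled data.
rng = np.random.default_rng(5)
n_pairs, n_fields = 20000, 4
true_p, true_m, true_u = 0.05, 0.9, 0.1
is_match = rng.random(n_pairs) < true_p
probs = np.where(is_match[:, None], true_m, true_u)
gamma = (rng.random((n_pairs, n_fields)) < probs).astype(float)

p, m, u = 0.1, np.full(n_fields, 0.8), np.full(n_fields, 0.2)
for _ in range(50):
    # E-step: posterior probability that each pair is a match.
    lm = p * np.prod(m**gamma * (1 - m)**(1 - gamma), axis=1)
    lu = (1 - p) * np.prod(u**gamma * (1 - u)**(1 - gamma), axis=1)
    w = lm / (lm + lu)
    # M-step: reweighted field agreement rates.
    p = w.mean()
    m = (w[:, None] * gamma).sum(axis=0) / w.sum()
    u = ((1 - w)[:, None] * gamma).sum(axis=0) / (1 - w).sum()

print(f"match proportion ~{p:.3f}; m ~{np.round(m, 2)}; u ~{np.round(u, 2)}")
```

Scanning candidate thresholds on the resulting match weights, and scoring each with an estimated F-measure, is the natural extension that corresponds to the threshold-estimation step described above.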
Energy Efficiency of Biogas Produced from Different Biomass Sources
NASA Astrophysics Data System (ADS)
Begum, Shahida; Nazri, A. H.
2013-06-01
Malaysia has different sources of biomass, such as palm oil waste, agricultural waste, cow dung, sewage waste and landfill sites, which can be used to produce biogas and serve as a source of energy. Depending on the type of biomass, the biogas produced can have different calorific values. At the same time, the energy used to produce biogas depends on transportation distance, means of transportation, conversion techniques, and the handling of raw materials and digested residues. An energy systems analysis approach based on the literature is applied to calculate the energy efficiency of biogas produced from biomass. The methodology comprises collecting data, proposing locations, and estimating the energy input needed to produce biogas and the energy output obtained from the generated biogas. The study showed that palm oil waste and municipal solid waste are two potential sources of biomass. The energy efficiency (output-to-input ratio) of biogas produced from palm oil residues and municipal solid wastes is 1.70 and 3.33, respectively. Municipal solid wastes have the higher energy efficiency due to shorter transportation distances and lower electricity consumption. Despite the inherent uncertainties in the calculations, it can be concluded that using biomass for biogas production is a promising alternative.
Survival and harvest-related mortality of white-tailed deer in Massachusetts
Mcdonald, John E.; DeStefano, Stephen; Gaughan, Christopher; Mayer, Michael; Woytek, William A.; Christensen, Sonja; Fuller, Todd K.
2011-01-01
We monitored 142 radiocollared adult (≥1.0 yr old) white-tailed deer (Odocoileus virginianus) in 3 study areas of Massachusetts, USA, to estimate annual survival and mortality due to legal hunting. We then applied these rates to deer harvest information to estimate deer population trends over time, and compared these to trends derived solely from harvest data estimates. Estimated adult female survival rates were similar (0.82–0.86), and uniformly high, across 3 management zones in Massachusetts that differed in landscape composition, human density, and harvest regulations. Legal hunting accounted for 16–29% of all adult female mortality. Estimated adult male survival rates varied from 0.55 to 0.79, and legal hunting accounted for 40–75% of all mortality. Use of composite hunting mortality rates produced realistic estimates for adult deer populations in 2 zones, but not for the third, where estimation was hindered by regulatory restrictions on antlerless deer harvest. In addition, the population estimates we calculated were generally higher than those derived from population reconstruction, likely due to relatively low harvest pressure. Legal harvest may not be the dominant form of deer mortality in developed landscapes; thus, estimates of populations or trends that rely solely on harvest data will likely be underestimates.
NASA Astrophysics Data System (ADS)
Herdeiro, Victor
2017-09-01
Herdeiro and Doyon [Phys. Rev. E 94, 043322 (2016), 10.1103/PhysRevE.94.043322] introduced a numerical recipe, dubbed the uv sampler, offering precise estimations of the conformal field theory (CFT) data of the planar two-dimensional (2D) critical Ising model. It made use of the scale invariance emerging at the critical point to sample finite sublattice marginals of the infinite-plane Gibbs measure of the model by producing holographic boundary distributions. The main ingredient of the Markov chain Monte Carlo sampler is invariance under dilation. This paper presents a generalization to higher dimensions with the critical 3D Ising model. This leads to numerical estimations of a subset of the CFT data (scaling weights and structure constants) through fitting of measured correlation functions. The results are shown to agree with the most precise recent estimations from numerical bootstrap methods [Kos, Poland, Simmons-Duffin, and Vichi, J. High Energy Phys. 08 (2016) 036, 10.1007/JHEP08(2016)036].
Estimates of present and future flood risk in the conterminous United States
NASA Astrophysics Data System (ADS)
Wing, Oliver E. J.; Bates, Paul D.; Smith, Andrew M.; Sampson, Christopher C.; Johnson, Kris A.; Fargione, Joseph; Morefield, Philip
2018-03-01
Past attempts to estimate rainfall-driven flood risk across the US either have incomplete coverage or coarse resolution, or use overly simplified models of the flooding process. In this paper, we use a new 30 m resolution model of the entire conterminous US with a 2D representation of flood physics to produce estimates of flood hazard, which match to within 90% accuracy the skill of local models built with detailed data. These flood depths are combined with exposure datasets of commensurate resolution to calculate current and future flood risk. Our data show that the total US population exposed to serious flooding is 2.6-3.1 times higher than previous estimates, and that nearly 41 million Americans live within the 1% annual exceedance probability floodplain (compared to only 13 million when calculated using FEMA flood maps). We find that population and GDP growth alone are expected to lead to significant future increases in exposure, and this change may be exacerbated in the future by climate change.
Evaluating uses of data mining techniques in propensity score estimation: a simulation study.
Setoguchi, Soko; Schneeweiss, Sebastian; Brookhart, M Alan; Glynn, Robert J; Cook, E Francis
2008-06-01
In propensity score modeling, it is a standard practice to optimize the prediction of exposure status based on the covariate information. In a simulation study, we examined in what situations analyses based on various types of exposure propensity score (EPS) models using data mining techniques such as recursive partitioning (RP) and neural networks (NN) produce unbiased and/or efficient results. We simulated data for a hypothetical cohort study (n = 2000) with a binary exposure/outcome and 10 binary/continuous covariates with seven scenarios differing by non-linear and/or non-additive associations between exposure and covariates. EPS models used logistic regression (LR) (all possible main effects), RP1 (without pruning), RP2 (with pruning), and NN. We calculated c-statistics (C), standard errors (SE), and bias of exposure-effect estimates from outcome models for the PS-matched dataset. Data mining techniques yielded higher C than LR (mean: NN, 0.86; RP1, 0.79; RP2, 0.72; and LR, 0.76). SE tended to be greater in models with higher C. Overall bias was small for each strategy, although NN estimates tended to be the least biased. C was not correlated with the magnitude of bias (correlation coefficient [COR] = -0.3, p = 0.1) but was correlated with increased SE (COR = 0.7, p < 0.001). Effect estimates from EPS models by simple LR were generally robust. NN models generally provided the least numerically biased estimates. C was not associated with the magnitude of bias but was associated with increased SE.
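To make the comparison concrete, here is a minimal Python sketch of one such scenario, using logistic regression and a neural network as EPS models with 1:1 nearest-neighbor matching. The data-generating process, effect size, and model settings are illustrative assumptions, not the authors' exact simulation design.

```python
# Sketch: compare EPS models on simulated data (illustrative, not the
# authors' exact setup). Assumes scikit-learn is available.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 10))
# Non-linear, non-additive true exposure model (one of many possible scenarios)
logit = 0.5 * X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.5 * X[:, 0] * X[:, 2]
exposure = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))
true_effect = 1.0
outcome = true_effect * exposure + X[:, 0] + rng.normal(size=n)

for name, model in [("LR", LogisticRegression(max_iter=1000)),
                    ("NN", MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000))]:
    ps = model.fit(X, exposure).predict_proba(X)[:, 1]
    c = roc_auc_score(exposure, ps)                  # the c-statistic
    treated = np.flatnonzero(exposure == 1)
    control = np.flatnonzero(exposure == 0)
    # 1:1 nearest-neighbor match on the propensity score (with replacement)
    matches = control[np.abs(ps[treated][:, None] - ps[control][None, :]).argmin(axis=1)]
    bias = (outcome[treated].mean() - outcome[matches].mean()) - true_effect
    print(f"{name}: C = {c:.2f}, bias = {bias:+.3f}")
```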
Small Area Income and Poverty Estimates (SAIPE): 2010 Highlights
ERIC Educational Resources Information Center
US Census Bureau, 2011
2011-01-01
This document presents 2010 data from the Small Area Income and Poverty Estimates (SAIPE) program of the U.S. Census Bureau. The SAIPE program produces poverty estimates for the total population and median household income estimates annually for all counties and states. SAIPE data also produces single-year poverty estimates for the school-age…
NASA Astrophysics Data System (ADS)
Huang, Shih-Yu; Deng, Yi; Wang, Jingfeng
2017-09-01
The maximum-entropy-production (MEP) model of surface heat fluxes, based on contemporary non-equilibrium thermodynamics, information theory, and atmospheric turbulence theory, is used to re-estimate the global surface heat fluxes. The MEP-modeled surface fluxes automatically balance the surface energy budgets at all time and space scales without the explicit use of near-surface temperature and moisture gradients, wind speed, or surface roughness data. The new MEP-based global annual mean fluxes over the land surface, using surface radiation and temperature data from the National Aeronautics and Space Administration Clouds and the Earth's Radiant Energy System (NASA CERES) supplemented by surface specific humidity data from the Modern-Era Retrospective Analysis for Research and Applications (MERRA), agree closely with previous estimates. The new estimate of ocean evaporation, which does not use the MERRA reanalysis data as model input, is lower than previous estimates, while the new estimate of ocean sensible heat flux is higher than previously reported. The MEP model also produces the first global map of ocean surface heat flux that is not available from existing global reanalysis products.
García-González, Miguel A; Fernández-Chimeno, Mireya; Ramos-Castro, Juan
2009-02-01
An analysis of the errors due to the finite resolution of RR time series in the estimation of the approximate entropy (ApEn) is described. The quantification errors in the discrete RR time series produce considerable errors in the ApEn estimation (bias and variance) when the signal variability or the sampling frequency is low. Similar errors can be found in indices related to the quantification of recurrence plots. An easy way to calculate a figure of merit [the signal to resolution of the neighborhood ratio (SRN)] is proposed in order to predict when the bias in the indices could be high. When SRN is close to an integer value n, the bias is higher than when near n - 1/2 or n + 1/2. Moreover, if SRN is close to an integer value, the lower this value, the greater the bias is.
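The quantization effect the authors analyze is easy to reproduce. Below is a minimal sketch of approximate entropy (standard Pincus definition, with the conventional m = 2 and r = 0.2 SD) applied to a synthetic low-variability RR series before and after rounding to a finite clock resolution; the series and sampling rate are invented for illustration.

```python
# Sketch: ApEn of an RR series, before and after quantizing to a finite
# resolution, illustrating the bias discussed above. Data are synthetic.
import numpy as np

def apen(x, m=2, r=None):
    x = np.asarray(x, dtype=float)
    n = len(x)
    r = 0.2 * x.std() if r is None else r
    def phi(m):
        # Template matches counted with Chebyshev distance <= r
        emb = np.array([x[i:i + m] for i in range(n - m + 1)])
        d = np.abs(emb[:, None, :] - emb[None, :, :]).max(axis=2)
        return np.log((d <= r).mean(axis=1)).mean()
    return phi(m) - phi(m + 1)

rng = np.random.default_rng(1)
rr = 0.8 + 0.02 * rng.standard_normal(500)     # RR intervals, s (low variability)
rr_quantized = np.round(rr * 128) / 128        # e.g. a 128 Hz sampling clock
print(round(apen(rr), 3), round(apen(rr_quantized, r=0.2 * rr.std()), 3))
```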
Turbulent vertical diffusivity in the sub-tropical stratosphere
NASA Astrophysics Data System (ADS)
Pisso, I.; Legras, B.
2008-02-01
Vertical (cross-isentropic) mixing is produced by small-scale turbulent processes which are still poorly understood and poorly parameterized in numerical models. In this work we provide estimates of local equivalent diffusion in the lower stratosphere by comparing balloon-borne high-resolution measurements of chemical tracers with mixing ratios reconstructed from large ensembles of random Lagrangian backward trajectories using European Centre for Medium-range Weather Forecasts analysed winds and a chemistry-transport model (REPROBUS). We focus on a case study in subtropical latitudes using data from the HIBISCUS campaign. An upper bound on the vertical diffusivity in this case study is found to be of the order of 0.5 m2 s-1 in the subtropical region, which is larger than estimates at higher latitudes. The relation between diffusion and dispersion is studied by estimating Lyapunov exponents and studying their variation according to the presence of active dynamical structures.
Gregory, K E; Maurer, R R
1991-03-01
Brown Swiss-Hereford (BS-H) reciprocal cross embryos were transferred to BS and H recipient cows and Red Poll-Angus (RP-A) reciprocal cross embryos were transferred to RP and A recipient cows to estimate the relative contributions of ovum cytoplasm and uterine influences to prenatal maternal effects. Calves resulting from embryo transfers (ET) were weaned early (3 to 5 d). Reciprocal cross matings also were made by natural service (NS) between BS and H and between RP and A breeds; part of the offspring were weaned at 3 to 5 d, and the remainder nursed their dams to an age of 150 to 180 d. This was done to estimate breed differences in prenatal and postnatal effects combined and to separate the effects of prenatal maternal influences from postnatal maternal influences of these breeds. Females produced in both ET and NS parts of the experiment were retained to produce three calf crops to an age of about 4.5 yr. The following traits were analyzed: conception rate; calf survival; percentage of calves produced per cow exposed; birth and weaning weights of calves produced; and periodic weights, heights, and condition scores of females to an age of 4.5 yr. Neither breed of donor (cytoplasmic influence) nor breed of recipient (uterine influence) had consistently important effects on the traits evaluated. In NS matings, differences between reciprocal crosses were small for most of the traits evaluated. Method of rearing (nursed vs weaned at 3 to 5 d) had no effect on reproductive and maternal traits for RP-A reciprocal cross females, but females that nursed generally were heavier, were taller, and had higher condition scores at most ages than early-weaned females. For the BS-H reciprocal cross, early-weaned females were favored over females reared by their dams in percentage of calves produced per cow exposed, but the method of rearing did not affect other reproductive or maternal traits. BS-H reciprocal cross females that nursed their dams were heavier at 550 d and were heavier and had higher condition scores at an age of 34 mo than early-weaned females.
Samuel, Michael D.; Storm, Daniel J.
2016-01-01
Chronic wasting disease (CWD) is a fatal neurodegenerative disease affecting free-ranging and captive cervids that now occurs in 24 U.S. states and two Canadian provinces. Despite the potential threat of CWD to deer populations, little is known about the rates of infection and mortality caused by this disease. We used epidemiological models to estimate the force of infection and disease-associated mortality for white-tailed deer in the Wisconsin and Illinois CWD outbreaks. Models were based on age-prevalence data corrected for bias in aging deer using the tooth wear and replacement method. Both male and female deer in the Illinois outbreak had higher corrected age-specific prevalence with slightly higher female infection than deer in the Wisconsin outbreak. Corrected ages produced more complex models with different infection and mortality parameters than those based on apparent prevalence. We found that adult male deer have a more than threefold higher risk of CWD infection than female deer. Males also had higher disease mortality than female deer. As a result, CWD prevalence was twofold higher in adult males than females. We also evaluated the potential impacts of alternative contact structures on transmission dynamics in Wisconsin deer. Results suggested that transmission of CWD among male deer during the nonbreeding season may be a potential mechanism for producing higher rates of infection and prevalence characteristically found in males. However, alternatives based on high environmental transmission and transmission from females to males during the breeding season may also play a role.
Refusal bias in HIV prevalence estimates from nationally representative seroprevalence surveys.
Reniers, Georges; Eaton, Jeffrey
2009-03-13
To assess the relationship between prior knowledge of one's HIV status and the likelihood of refusing HIV testing in population-based surveys, and to explore its potential for producing bias in HIV prevalence estimates. Using longitudinal survey data from Malawi, we estimate the relationship between prior knowledge of HIV-positive status and subsequent refusal of an HIV test. We use that parameter to develop a heuristic model of refusal bias that is applied to six Demographic and Health Surveys, in which refusal by HIV status is not observed. The model only adjusts for refusal bias conditional on a completed interview. Ecologically, HIV prevalence, prior testing rates, and refusal of HIV testing are highly correlated. Malawian data further suggest that amongst individuals who know their status, HIV-positive individuals are 4.62 (95% confidence interval, 2.60-8.21) times more likely to refuse testing than HIV-negative ones. On the basis of that parameter and other inputs from the Demographic and Health Surveys, our model predicts downward bias in national HIV prevalence estimates ranging from 1.5% (95% confidence interval, 0.7-2.9) for Senegal to 13.3% (95% confidence interval, 7.2-19.6) for Malawi. In absolute terms, bias in HIV prevalence estimates is negligible for Senegal but 1.6 (95% confidence interval, 0.8-2.3) percentage points for Malawi. Downward bias is more severe in urban populations. Because refusal rates are higher in men, seroprevalence surveys also tend to overestimate the female-to-male ratio of infections. Prior knowledge of HIV status informs decisions to participate in seroprevalence surveys. Informed refusals may produce bias in estimates of HIV prevalence and the sex ratio of infections.
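A back-of-envelope version of the adjustment idea (not the authors' full heuristic model) can be sketched as follows; the survey counts and observed prevalence below are invented, and only the 4.62 risk ratio comes from the abstract.

```python
# Crude refusal-bias correction: refusers are apportioned an inflated
# prevalence using the estimated relative likelihood of refusal among
# HIV-positives. All counts below are illustrative.
p_obs = 0.10                       # prevalence among completed tests
n_tested, n_refused = 9000, 1000   # tested vs refused (invented)
rr = 4.62                          # refusal risk ratio, positive vs negative

# Bayes-style update: among refusers, the odds of being HIV-positive are
# roughly the population odds multiplied by the refusal risk ratio.
p_refusers = p_obs * rr / (p_obs * rr + (1.0 - p_obs))
p_adjusted = (p_obs * n_tested + p_refusers * n_refused) / (n_tested + n_refused)
print(f"observed {p_obs:.3f} -> adjusted {p_adjusted:.3f}")
```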
NASA Astrophysics Data System (ADS)
Yildirim, N.; Shaler, S.
2016-10-01
Nanocellulose is a polymer which can be isolated from nature (woods, plants, bacteria, and sea animals) through chemical or mechanical treatments, as cellulose nanofibrils (CNF), cellulose nanocrystals, or bacterial celluloses. Focused global research activities have resulted in decreasing costs, and a nascent industry of producers has created strong market interest in CNF. However, there is still a lack of knowledge of the nanomechanical properties of CNF, which creates barriers for scientists and producers seeking to optimize and predict the behavior of the final product. In this research, the behavior of CNF under nano-scale compression loads was investigated through three different approaches, Oliver-Pharr (OP), fused silica (FS), and tip imaging (TI), via nanoindentation in an atomic force microscope. The CNF modulus estimates for the three approaches were 16.6 GPa for OP, 15.8 GPa for FS, and 10.9 GPa for TI. The CNF reduced moduli estimates were consistently higher and followed the same ranking by analysis technique (18.2, 17.4, and 11.9 GPa). This study reduces the uncertainties related to the nanomechanical properties of CNFs, improves understanding of the role of CNFs as a reinforcing material in composites, and strengthens the basis for accurate theoretical calculations and predictions.
Computation of acoustic pressure fields produced in feline brain by high-intensity focused ultrasound
NASA Astrophysics Data System (ADS)
Omidi, Nazanin
In 1975, Dunn et al. (JASA 58:512-514) showed that a simple relation describes the ultrasonic threshold for cavitation-induced changes in the mammalian brain. The thresholds for tissue damage were estimated for a variety of acoustic parameters in exposed feline brain. The goal of this study was to improve the estimates of the acoustic pressures and intensities present in vivo during those experimental exposures by estimating them using nonlinear rather than linear theory. In our current project, the acoustic pressure waveforms produced in the brains of anesthetized felines were numerically simulated for a spherically focused, nominally f/1 transducer (focal length = 13 cm) at increasing values of the source pressure at frequencies of 1, 3, and 9 MHz. The corresponding focal intensities were correlated with the experimental data of Dunn et al. The focal pressure waveforms were also computed at the location of the true maximum. For low source pressures, the computed waveforms were the same as those determined using linear theory, and the focal intensities matched experimentally determined values. For higher source pressures, the focal pressure waveforms became increasingly distorted, with the compressional amplitude of the wave becoming greater, and the rarefactional amplitude becoming lower than the values calculated using linear theory. The implications of these results for clinical exposures are discussed.
Inverse modeling of Asian (222)Rn flux using surface air (222)Rn concentration.
Hirao, Shigekazu; Yamazawa, Hiromi; Moriizumi, Jun
2010-11-01
When used with an atmospheric transport model, the (222)Rn flux distribution estimated in our previous study using soil transport theory caused underestimation of atmospheric (222)Rn concentrations as compared with measurements in East Asia. In this study, we applied a Bayesian synthesis inverse method to produce revised estimates of the annual (222)Rn flux density in Asia by using atmospheric (222)Rn concentrations measured at seven sites in East Asia. The Bayesian synthesis inverse method requires a prior estimate of the flux distribution and its uncertainties. The atmospheric transport model MM5/HIRAT and our previous estimate of the (222)Rn flux distribution as the prior value were used to generate new flux estimates for the eastern half of the Eurasian continent, divided into 10 regions. The (222)Rn flux densities estimated using the Bayesian inversion technique were generally higher than the prior flux densities. The area-weighted average (222)Rn flux density for Asia was estimated to be 33.0 mBq m(-2) s(-1), which is substantially higher than the prior value (16.7 mBq m(-2) s(-1)). The estimated (222)Rn flux densities decrease with increasing latitude as follows: Southeast Asia (36.7 mBq m(-2) s(-1)); East Asia (28.6 mBq m(-2) s(-1)) including China, Korean Peninsula and Japan; and Siberia (14.1 mBq m(-2) s(-1)). Increases of the newly estimated fluxes in Southeast Asia, China, Japan, and the southern part of Eastern Siberia relative to the prior ones contributed most significantly to improved agreement of the model-calculated concentrations with the atmospheric measurements. The sensitivity analysis of prior flux errors and effects of locally exhaled (222)Rn showed that the estimated fluxes in Northern and Central China, Korea, Japan, and the southern part of Eastern Siberia were robust, but that in Central Asia had a large uncertainty.
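The core of a Bayesian synthesis inversion is a standard linear-Gaussian update. The sketch below shows the matrix algebra with invented transport sensitivities standing in for the MM5/HIRAT model; only the prior (16.7) and posterior-scale (33.0) flux values echo the abstract.

```python
# Sketch: Bayesian synthesis inversion for regional (222)Rn flux densities.
# H maps regional fluxes to modeled site concentrations (random here,
# standing in for transport-model sensitivities).
import numpy as np

rng = np.random.default_rng(2)
n_regions, n_sites = 10, 7
H = rng.uniform(0.1, 1.0, size=(n_sites, n_regions))   # transport sensitivities
x_prior = np.full(n_regions, 16.7)                     # prior flux, mBq m-2 s-1
Q = np.diag((0.5 * x_prior) ** 2)                      # prior error covariance
R = np.eye(n_sites)                                    # observation error covariance
y = H @ np.full(n_regions, 33.0) + rng.normal(0, 1, n_sites)  # synthetic obs

K = Q @ H.T @ np.linalg.inv(H @ Q @ H.T + R)           # gain matrix
x_post = x_prior + K @ (y - H @ x_prior)               # posterior flux estimate
P_post = (np.eye(n_regions) - K @ H) @ Q               # posterior covariance
print(x_post.round(1))
```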
Sandhu, Harpinder; Waterhouse, Benjamin; Boyer, Stephane; Wratten, Steve
2016-01-01
Ecosystem services (ES) such as pollination are vital for the continuous supply of food to a growing human population, but the decline in populations of insect pollinators worldwide poses a threat to food and nutritional security. Using a pollinator (honeybee) exclusion approach, we evaluated the impact of pollinator scarcity on production in four brassica fields, two producing hybrid seeds and two producing open-pollinated ones. There was a clear reduction in seed yield as pollination rates declined. Open-pollinated crops produced significantly higher yields than did the hybrid ones at all pollination rates. The hybrid crops required at least 0.50 of background pollination rates to achieve maximum yield, whereas in open-pollinated crops, 0.25 pollination rates were necessary for maximum yield. The total estimated economic value of pollination services provided by honeybees to the agricultural industry in New Zealand is NZD $1.96 billion annually. This study indicates that loss of pollination services can result in significant declines in production and have serious implications for the market economy in New Zealand. Depending on the extent of honeybee population decline, and assuming that results in declining pollination services, the estimated economic loss to New Zealand agriculture could be in the range of NZD $295-728 million annually.
Integration versus apartheid in post-Roman Britain: a response to Thomas et al. (2008).
Pattison, John E
2011-12-01
The genetic surveys of the population of Britain conducted by Weale et al. and Capelli et al. produced estimates of the Germani immigration into Britain during the early Anglo-Saxon period, c.430-c.730. These estimates are considerably higher than the estimates of archaeologists. A possible explanation suggests that an apartheid-like social system existed in the early Anglo-Saxon kingdoms resulting in the Germani breeding more quickly than the Britons. Thomas et al. attempted to model this suggestion and showed that it was a possible explanation if all Anglo-Saxon kingdoms had such a system for up to 400 years. I noted that their explanation ignored the probability that Germani have been arriving in Britain for at least the past three millennia, including Belgae and Roman soldiers, and not only during the early Anglo-Saxon period. I produced a population model for Britain taking into account this long-term, low-level migration that showed that the estimates could be reconciled without the need for introducing an apartheid-like system. In turn, Thomas et al. responded, criticizing my model and arguments, which they considered persuasively written but wanting in terms of methodology, data sources, underlying assumptions, and application. Here, I respond in detail to those criticisms and argue that it is still unnecessary to introduce an apartheid-like system in order to reconcile the different estimates of Germani arrivals. A point of confusion is that geneticists are interested in ancestry, while archaeologists are interested in ethnicity: it is the bones, not the burial rites, which are important in the present context.
Velásquez, A V; da Silva, G G; Sousa, D O; Oliveira, C A; Martins, C M M R; Dos Santos, P P M; Balieiro, J C C; Rennó, F P; Fukushima, R S
2018-04-18
Feed intake assessment is a valuable tool for herd management decisions. The use of markers, either internal or external, is currently the most common technique for estimating feed intake in production animals. The experiment used 10 multiparous Holstein cows fed a corn silage-based diet with a 55:45 forage-to-concentrate ratio. The average fecal recovery (FR) of TiO2 was higher than the FR of Cr2O3, and both FRs were greater than unity. With internal markers, acetyl bromide lignin and cutin FRs were lower than unity, and the average FR for indigestible neutral detergent fiber (iNDF) and indigestible acid detergent fiber (iADF) was 1.5. The FR was unaffected by the fecal sampling procedure and appears to be an intrinsic property of each molecule and how it interacts with digesta. Of the 2 external markers, only Cr2O3 produced accurate fecal output (FO) estimates, and the same was true for dry matter digestibility (DMD) when iNDF and iADF were used. Estimates for DMD and FO were affected by sampling procedure; 72-h bulk sampling [sub-sample from total feces collection (TFC)] consistently produced accurate results. The grab sampling procedures (sub-samples taken at specific times during the day) were accurate when using either of the indigestible fibers (iNDF or iADF) to estimate DMD. However, grab sampling procedures can only be recommended when concomitant TFC is performed on at least one animal per treatment to determine FR. Under these conditions, Cr2O3 is a suitable marker for estimating FO, and iNDF and iADF are adequate for estimating DMD. Moreover, the Cr2O3+iADF marker pair produces accurate dry matter intake estimates and deserves further attention in ruminant nutrition studies. The method of dosing the external markers is extremely important and greatly affects results. Whichever the method, it must allow the animals to display normal feeding behavior and not affect performance. The grab sampling procedures can replace TFC (once FR is established), which may open new possibilities for pasture-based or collectively housed animals. Copyright © 2018 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Resonant scattering and charm showers in ultrahigh-energy neutrino interactions
NASA Technical Reports Server (NTRS)
Wilczek, F.
1985-01-01
Electron antineutrinos with energy of about 7 × 10^6 GeV have much-enhanced cross sections due to W-boson production off electrons. Possible signals due to cosmic-ray sources are estimated. Higher-energy antineutrinos can efficiently produce a W accompanied by radiation. Another possibility, which could lead to shadowing at modest depths, is resonant production of a charged Higgs particle. The importance of muon production by charm showers in rock is pointed out.
NASA Technical Reports Server (NTRS)
Douglas, A. R.; Stolarski, R. S.; Schoeberl, M. R.; Jackman, C. H.; Gupta, M. L.; Newman, P. A.; Nielsen, J. E.; Fleming, E. L.
2008-01-01
Model-derived estimates of the annually integrated destruction and lifetime for various ozone depleting substances (ODSs) depend on the simulated stratospheric transport and mixing in the global model used to produce the estimate. Observations in the middle and high latitude lower stratosphere show that the mean age of an air parcel (i.e., the time since its stratospheric entry) is related to the fractional release for the ODSs (i.e., the amount of the ODS that has been destroyed relative to the amount at the time of stratospheric entry). We use back trajectory calculations to produce an age spectrum, and explain the relationship between the mean age and the fractional release by showing that older elements in the age spectrum have experienced higher altitudes and greater ODS destruction than younger elements. In our study, models with faster circulations produce distributions for the age-of-air that are 'young' compared to a distribution derived from observations. These models also fail to reproduce the observed relationship between the mean age of air and the fractional release. Models with slower circulations produce both realistic distributions for mean age and a realistic relationship between mean age and fractional release. These models also produce a CFCl3 lifetime of approximately 56 years, longer than the 45 year lifetime used to project future mixing ratios. We find that the use of flux boundary conditions in assessment models would have several advantages, including consistency between ODS evolution and simulated loss even if the simulated residual circulation changes due to climate change.
Simulation and analysis of chemical release in the ionosphere
NASA Astrophysics Data System (ADS)
Gao, Jing-Fan; Guo, Li-Xin; Xu, Zheng-Wen; Zhao, Hai-Sheng; Feng, Jie
2018-05-01
Ionospheric inhomogeneous plasma produced by a single-point chemical release has a simple space-time structure and cannot affect radio waves at frequencies above the Very High Frequency (VHF) band. In order to produce a more complicated ionospheric plasma perturbation structure and trigger instability phenomena, a multiple-point chemical release scheme is presented in this paper. The effects of chemical release on low-latitude ionospheric plasma are estimated using linear instability growth rate theory, in which a high growth rate corresponds to strong irregularities, a high probability of ionospheric scintillation occurrence, and high scintillation intensity over the scintillation duration. The amplitude and phase scintillations at 150 MHz, 400 MHz, and 1000 MHz are calculated based on the theory of multiple phase screens (MPS) as the waves propagate through the disturbed area.
The economic feasibility of producing sweet sorghum as an ethanol feedstock in Mississippi
NASA Astrophysics Data System (ADS)
Linton, Joseph Andrew
This study examines the feasibility of producing sweet sorghum as an ethanol feedstock in Mississippi. An enterprise budgeting system is used along with estimates of transportation costs to estimate farmers' breakeven costs for producing and delivering sweet sorghum biomass. This breakeven cost for the farmer, along with breakeven costs for the producer based on wholesale ethanol price, production costs, and transportation and marketing costs for the refined ethanol, is used to estimate the amounts that farmers and ethanol producers would be willing to accept (WTA) and willing to pay (WTP), respectively, for sweet sorghum biomass. These WTA and WTP estimates are analyzed by varying key factors in the biomass and ethanol production processes. Deterministic and stochastic models are used to estimate profits for sweet sorghum and competing crops in two representative counties in Mississippi, with sweet sorghum consistently yielding negative per-acre profits in both counties.
Weitz, Melissa; Coburn, Jeffrey B; Salinas, Edgar
2008-05-01
This paper estimates national methane emissions from solid waste disposal sites in Panama over the time period 1990-2020 using both the 2006 Intergovernmental Panel on Climate Change (IPCC) Waste Model spreadsheet and the default emissions estimate approach presented in the 1996 IPCC Good Practice Guidelines. The IPCC Waste Model can calculate emissions from a variety of solid waste disposal site types, taking into account country- or region-specific waste composition and climate information, and can be used with a limited amount of data. Countries with detailed data can also run the model with country-specific values. The paper discusses methane emissions from solid waste disposal; explains the differences between the two methodologies in terms of data needs, assumptions, and results; describes solid waste disposal circumstances in Panama; and presents the results of this analysis. It also demonstrates the Waste Model's ability to incorporate landfill gas recovery data and to make projections. Methane emission estimates from the former default method are 25 Gg in 1994 and range from 23.1 Gg in 1990 to a projected 37.5 Gg in 2020. The Waste Model estimates are 26.7 Gg in 1994, ranging from 24.6 Gg in 1990 to 41.6 Gg in 2020. Emissions estimates for Panama produced by the new model were, on average, 8% higher than estimates produced by the former default methodology. The increased estimate can be attributed to the inclusion of all solid waste disposal in Panama (as opposed to only disposal in managed landfills), but the increase was offset somewhat by the different default factors and regional waste values between the 1996 and 2006 IPCC guidelines, and the use of the first-order decay model with a time delay for waste degradation in the IPCC Waste Model.
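The first-order decay (FOD) approach that underlies the 2006 Waste Model can be sketched in a few lines. The parameter values and deposition series below are generic assumptions for illustration, not Panama-specific inputs.

```python
# Sketch: first-order decay estimate of landfill CH4 generation. Each
# year's waste deposit decays exponentially; annual generation sums the
# contributions of all earlier deposits. Values are illustrative only.
import numpy as np

years = np.arange(1990, 2021)
waste = np.linspace(300, 600, len(years))   # Gg deposited per year (assumed)
k = 0.05      # decay rate constant, 1/yr (climate-dependent)
L0 = 0.06     # Gg CH4 per Gg waste (combines DOC, DOCf, MCF, F, 16/12)
delay = 0.5   # delay, yr, before degradation starts

def ch4_generated(t):
    age = t - years[years <= t] - delay
    return (L0 * waste[years <= t] * k * np.exp(-k * np.clip(age, 0, None))).sum()

print([round(ch4_generated(t), 1) for t in (1994, 2010, 2020)])  # Gg CH4/yr
```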
NASA Astrophysics Data System (ADS)
Zhang, Y.; Paulson, K. V.
For audio-frequency magnetotelluric surveys where the signals are lightning-stroke transients, the conventional Fourier transform method often fails to produce a high quality impedance tensor. An alternative approach is to use the wavelet transform method, which is capable of localizing target information simultaneously in both the temporal and frequency domains. Unlike Fourier analysis, which yields an average amplitude and phase, the wavelet transform produces an instantaneous estimate of the amplitude and phase of a signal. In this paper a complex well-localized wavelet, the Morlet wavelet, has been used to transform and analyze audio-frequency magnetotelluric data. With the Morlet wavelet, the magnetotelluric impedance tensor can be computed directly in the wavelet transform domain. The lightning-stroke transients are easily identified on the dilation-translation plane. Choosing those wavelet transform values where the signals are located, a higher signal-to-noise ratio estimate of the impedance tensor can be obtained. In a test using real data, the wavelet transform showed a significant improvement in the signal-to-noise ratio over the conventional Fourier transform.
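A toy single-component version of this idea can be written compactly: compute a Morlet transform of each channel, keep only coefficients with high transient energy, and form the impedance as a coefficient-weighted ratio. The signals, scale, and threshold below are invented; the full tensor estimate follows the same pattern with two channels per field.

```python
# Sketch: Morlet-wavelet estimate of a scalar magnetotelluric impedance,
# using only transform values where transient (lightning-like) energy is high.
import numpy as np

def morlet_cwt(x, scale, dt, w0=6.0):
    t = np.arange(-4.0 * scale, 4.0 * scale, dt)
    psi = np.exp(1j * w0 * t / scale - (t / scale) ** 2 / 2.0) / np.sqrt(scale)
    # Cross-correlation of the signal with the scaled wavelet
    return np.convolve(x, np.conj(psi)[::-1], mode="same") * dt

dt, n = 1e-3, 8000
rng = np.random.default_rng(3)
h = rng.normal(size=n)                        # magnetic channel: noise...
h[2000:2050] += 20.0 * np.hanning(50)         # ...plus a lightning-like transient
z_true = 2.0
e = z_true * h + rng.normal(0.0, 0.5, n)      # electric channel

We = morlet_cwt(e, 0.02, dt)
Wh = morlet_cwt(h, 0.02, dt)
mask = np.abs(Wh) > 5.0 * np.median(np.abs(Wh))   # keep high-SNR coefficients
Z = (We[mask] * np.conj(Wh[mask])).sum() / (np.abs(Wh[mask]) ** 2).sum()
print(f"estimated impedance: {Z:.2f} (true {z_true})")
```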
Aeroacoustic prediction of turbulent free shear flows
NASA Astrophysics Data System (ADS)
Bodony, Daniel Joseph
2005-12-01
For many people living in the immediate vicinity of an active airport the noise of jet aircraft flying overhead can be a nuisance, if not worse. Airports, which are held accountable for the noise they produce, and upcoming international noise limits are pressuring the major airframe and jet engine manufacturers to bring quieter aircraft into service. However, component designers need a predictive tool that can estimate the sound generated by a new configuration. Current noise prediction techniques are almost entirely based on previously collected experimental data and are applicable only to evolutionary, not revolutionary, changes in the basic design. Physical models of final candidate designs must still be built and tested before a single design is selected. By focusing on the noise produced in the jet engine exhaust at take-off conditions, the prediction of sound generated by turbulent flows is addressed. The technique of large-eddy simulation is used to calculate directly the radiated sound produced by jets at different operating conditions. Predicted noise spectra agree with measurements for frequencies up to, and slightly beyond, the peak frequency. Higher frequencies are missed, however, due to the limited resolution of the simulations. Two methods of estimating the 'missing' noise are discussed. In the first a subgrid scale noise model, analogous to a subgrid scale closure model, is proposed. In the second method the governing equations are expressed in a wavelet basis from which simplified time-dependent equations for the subgrid scale fluctuations can be derived. These equations are inexpensively integrated to yield estimates of the subgrid scale fluctuations with proper space-time dynamics.
Consistency of Rasch Model Parameter Estimation: A Simulation Study.
ERIC Educational Resources Information Center
van den Wollenberg, Arnold L.; And Others
1988-01-01
The unconditional--simultaneous--maximum likelihood (UML) estimation procedure for the one-parameter logistic model produces biased estimators. The UML method is inconsistent and is not a good alternative to the conditional maximum likelihood method, at least with small numbers of items. The minimum Chi-square estimation procedure produces unbiased…
NASA Astrophysics Data System (ADS)
Song, X. P.; Potapov, P.; Adusei, B.; King, L.; Khan, A.; Krylov, A.; Di Bella, C. M.; Pickens, A. H.; Stehman, S. V.; Hansen, M.
2016-12-01
Reliable and timely information on agricultural production is essential for ensuring world food security. Freely available medium-resolution satellite data (e.g. Landsat, Sentinel) offer the possibility of improved global agriculture monitoring. Here we develop and test a method for estimating in-season crop acreage using a probability sample of field visits and producing wall-to-wall crop type maps at national scales. The method is first illustrated for soybean cultivated area in the US for 2015. A stratified, two-stage cluster sampling design was used to collect field data to estimate national soybean area. The field-based estimate employed historical soybean extent maps from the U.S. Department of Agriculture (USDA) Cropland Data Layer to delineate and stratify U.S. soybean growing regions. The estimated 2015 U.S. soybean cultivated area based on the field sample was 341,000 km2 with a standard error of 23,000 km2. This result is 1.0% lower than USDA's 2015 June survey estimate and 1.9% higher than USDA's 2016 January estimate. Our area estimate was derived in early September, about 2 months ahead of harvest. To map soybean cover, the Landsat image archive for the year 2015 growing season was processed using an active learning approach. Overall accuracy of the soybean map was 84%. The field-based sample estimated area was then used to calibrate the map such that the soybean acreage of the map derived through pixel counting matched the sample-based area estimate. The strength of the sample-based area estimation lies in the stratified design that takes advantage of the spatially explicit cropland layers to construct the strata. The success of the mapping was built upon an automated system which transforms Landsat images into standardized time-series metrics. The developed method produces reliable and timely information on soybean area in a cost-effective way and could be implemented in an operational mode. The approach has also been applied for other crops in other regions, such as winter wheat in Pakistan, soybean in Argentina and soybean in the entire South America. Similar levels of accuracy and timeliness were achieved as in the US.
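The design-based part of this approach reduces to a stratified expansion estimator plus a map-calibration step. The sketch below collapses the two sampling stages for brevity; strata sizes, segment areas, and sampled soybean fractions are invented.

```python
# Sketch: stratified area estimator for crop acreage, then calibration of a
# pixel-counted map to the sample-based estimate. All numbers are invented.
import numpy as np

rng = np.random.default_rng(4)
strata = {
    "high": dict(N=20000, n=40, seg_km2=25.0, frac=rng.beta(2, 3, 40)),
    "low":  dict(N=100000, n=40, seg_km2=25.0, frac=rng.beta(1, 20, 40)),
}
total, var = 0.0, 0.0
for s in strata.values():
    m, v = s["frac"].mean(), s["frac"].var(ddof=1)
    total += s["N"] * s["seg_km2"] * m                 # expanded stratum area
    var += (s["N"] * s["seg_km2"]) ** 2 * v / s["n"]   # stratum variance term
print(f"sample-based area: {total:,.0f} km2 (SE {var ** 0.5:,.0f} km2)")

map_area = 0.95 * total                # pixel-counted area from the map (toy)
print(f"calibration factor applied to the map: {total / map_area:.3f}")
```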
Chandra, Hukum; Aditya, Kaustav; Sud, U C
2018-01-01
Poverty affects many people, but the ramifications and impacts affect all aspects of society. Information about the incidence of poverty is therefore an important parameter of the population for policy analysis and decision making. In order to provide specific, targeted solutions when addressing poverty disadvantage small area statistics are needed. Surveys are typically designed and planned to produce reliable estimates of population characteristics of interest mainly at higher geographic area such as national and state level. Sample sizes are usually not large enough to provide reliable estimates for disaggregated analysis. In many instances estimates are required for areas of the population for which the survey providing the data was unplanned. Then, for areas with small sample sizes, direct survey estimation of population characteristics based only on the data available from the particular area tends to be unreliable. This paper describes an application of small area estimation (SAE) approach to improve the precision of estimates of poverty incidence at district level in the State of Bihar in India by linking data from the Household Consumer Expenditure Survey 2011-12 of NSSO and the Population Census 2011. The results show that the district level estimates generated by SAE method are more precise and representative. In contrast, the direct survey estimates based on survey data alone are less stable.
Assessing the impact of Syrian refugees on earthquake fatality estimations in southeast Turkey
NASA Astrophysics Data System (ADS)
Wilson, Bradley; Paradise, Thomas
2018-01-01
The influx of millions of Syrian refugees into Turkey has rapidly changed the population distribution along the Dead Sea Rift and East Anatolian fault zones. In contrast to other countries in the Middle East where refugees are accommodated in camp environments, the majority of displaced individuals in Turkey are integrated into local cities, towns, and villages - placing stress on urban settings and increasing potential exposure to strong earthquake shaking. Yet displaced populations are often unaccounted for in the census-based population models used in earthquake fatality estimations. This study creates a minimally modeled refugee gridded population model and analyzes its impact on semi-empirical fatality estimations across southeast Turkey. Daytime and nighttime fatality estimates were produced for five fault segments at earthquake magnitudes 5.8, 6.4, and 7.0. Baseline fatality estimates calculated from census-based population estimates for the study area varied in scale from tens to thousands of fatalities, with higher death totals in nighttime scenarios. Refugee fatality estimations were analyzed across 500 semi-random building occupancy distributions. Median fatality estimates for refugee populations added non-negligible contributions to earthquake fatalities at four of five fault locations, increasing total fatality estimates by 7-27 %. These findings communicate the necessity of incorporating refugee statistics into earthquake fatality estimations in southeast Turkey and the ongoing importance of placing environmental hazards in their appropriate regional and temporal context.
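The structure of such a calculation is simple to sketch: add a displaced population to the exposed total and sample occupancy distributions by Monte Carlo. All rates and counts below are invented and stand in for the semi-empirical model's inputs.

```python
# Sketch: effect of an unaccounted displaced population on fatality
# estimates, sampled over semi-random occupancy draws. Values illustrative.
import numpy as np

rng = np.random.default_rng(9)
census_pop, refugee_pop = 200_000, 30_000
rates = {"day": 0.004, "night": 0.009}   # fatality rate per exposed person (assumed)

for scenario, rate in rates.items():
    # 500 occupancy distributions: fraction of refugees exposed indoors
    exposed = census_pop + refugee_pop * rng.beta(2, 2, 500)
    baseline = census_pop * rate
    increase = np.median(exposed * rate) / baseline - 1
    print(f"{scenario}: median fatality increase {increase:.1%}")
```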
Aguirre-von-Wobeser, Eneas; Eguiarte, Luis E; Souza, Valeria; Soberón-Chávez, Gloria
2015-01-01
Many strains of bacteria produce antagonistic substances that restrain the growth of others, and potentially give them a competitive advantage. These substances are commonly released to the surrounding environment, involving metabolic costs in terms of energy and nutrients. The rate at which these molecules need to be produced to maintain a certain amount of them close to the producing cell before they are diluted into the environment has not been explored so far. To understand the potential cost of production of antagonistic substances in water environments, we used two different theoretical approaches. Using a probabilistic model, we determined the rate at which a cell needs to produce individual molecules in order to keep on average a single molecule in its vicinity at all times. For this minimum protection, a cell would need to invest 3.92 × 10(-22) kg s(-1) of organic matter, which is 9 orders of magnitude lower than the estimated expense for growth. Next, we used a continuous model, based on Fick's laws, to explore the production rate needed to sustain minimum inhibitory concentrations around a cell, which would provide much more protection from competitors. In this scenario, cells would need to invest 1.20 × 10(-11) kg s(-1), which is 2 orders of magnitude higher than the estimated expense for growth, and thus not sustainable. We hypothesize that the production of antimicrobial compounds by bacteria in aquatic environments lies between these two extremes.
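The continuous (Fick's-law) calculation rests on the steady-state solution for diffusion from a spherical source, C(r) = Q / (4*pi*D*r), so the production rate needed to hold a target concentration at the cell surface is Q = 4*pi*D*a*C. The sketch below uses rough assumed parameter values and is not meant to reproduce the paper's figures.

```python
# Sketch: production rate to sustain a surface concentration around a cell,
# from steady-state Fick diffusion. All parameter values are assumptions.
import numpy as np

D = 5e-10     # diffusivity in water, m^2/s (typical small molecule)
a = 0.5e-6    # effective cell radius, m
C = 1e-3      # target concentration at the surface, kg/m^3 (~1 mg/L MIC)

Q = 4 * np.pi * D * a * C   # steady-state source strength, kg/s
print(f"required production rate: {Q:.2e} kg/s")
```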
NASA Astrophysics Data System (ADS)
Lyman, S. N.
2017-12-01
Most of the water extracted with oil and natural gas (i.e., produced water) is disposed of by injection into the subsurface. In the arid western United States, however, a significant portion of produced water is discharged in ponds for evaporative disposal, and produced water is often stored in open ponds prior to subsurface injection. Even though they are common in the West (Utah's Uinta Basin has almost 200 ha), produced water ponds have been excluded from oil and gas emissions inventories because little information about their emission rates and speciation is available. We used flux chambers and inverse plume modeling to measure emissions of methane, C2-C11 hydrocarbons, light alcohols, carbonyls, and carbon dioxide from oil and gas produced water storage and disposal ponds in the Uinta Basin and the Upper Green River Basin, Wyoming, during 2013-2017. Methanol was the most abundant organic compound in produced water (91 ± 2% of the total volatile organic concentration; mean ± 95% confidence interval) but accounted for only 25 ± 30% of total organic compound emissions from produced water ponds. Non-methane hydrocarbons, especially C6-C9 alkanes and aromatics, accounted for the majority of emitted organics. We were able to predict emissions of individual compounds based on water concentrations, but only to within an order of magnitude. The speciation and magnitude of emissions varied strongly across facilities and was influenced by water age, the presence or absence of oil sheens, and meteorological conditions (especially ice cover). Flux chamber measurements were lower than estimates from inverse modeling techniques. Based on our flux chamber measurements, we estimate that produced water ponds are responsible for between 3 and 9% of all non-methane organic compound emissions in the Uinta Basin (or as much as 18% if we rely on our inverse modeling results). Emissions from produced water ponds contain little methane and are more reactive (i.e., they have higher maximum incremental reactivity) than typical oil and gas-related emissions. Produced water ponds emit about 11% and 28%, respectively, of all aromatics and alcohols from the Uinta Basin oil and gas industry.
A study of helium atmospheric-pressure guided streamers for potential biological applications
NASA Astrophysics Data System (ADS)
Gazeli, K.; Noël, C.; Clément, F.; Daugé, C.; Svarnas, P.; Belmonte, T.
2013-04-01
The origin of differences in the rotational temperatures of various molecules and ions (N2+(B), OH(A) and N2(C)) is studied in helium atmospheric-pressure guided streamers. The rotational temperature of N2+(B) is room temperature. It is estimated from the emission band of the first negative system at 391.4 nm, and it is governed by the temperature of N2(X) in the surrounding air. N2(X) is ionized by direct electron impact in the outer part of the plasma. N2+(B) is deactivated by collisions with N2 and O2. The rotational temperature of OH(A), estimated from the OH band at 306.4 nm, is slightly higher than that of N2+(B). OH(A) is excited by electron impact with H2O during the first 100 ns of the applied voltage pulse. Next, OH(A) is produced by electron impact with OH(X) created by the quenching of OH(A) by N2 and O2. H2O diffuses deeper than N2 into the plasma ring and the rotational temperature of OH(A) is slightly higher than that of N2+(B). The rotational temperature of N2(C), estimated from the emission of the second positive system at 315.9 nm, is governed by its collisions with helium. The gas temperature of helium at the beginning of the pulse is predicted to be several hundred kelvin higher than room temperature.
Ekinci, Yunus Levent
2016-01-01
This paper presents an easy-to-use open source computer algorithm (code) for estimating the depths of isolated single thin dike-like source bodies by using numerical second-, third-, and fourth-order horizontal derivatives computed from observed magnetic anomalies. The approach does not require a priori information and uses some filters of successive graticule spacings. The computed higher-order horizontal derivative datasets are used to solve nonlinear equations for depth determination. The solutions are independent from the magnetization and ambient field directions. The practical usability of the developed code, designed in MATLAB R2012b (MathWorks Inc.), was successfully examined using some synthetic simulations with and without noise. The algorithm was then used to estimate the depths of some ore bodies buried in different regions (USA, Sweden, and Canada). Real data tests clearly indicated that the obtained depths are in good agreement with those of previous studies and drilling information. Additionally, a state-of-the-art inversion scheme based on particle swarm optimization produced comparable results to those of the higher-order horizontal derivative analyses in both synthetic and real anomaly cases. Accordingly, the proposed code is verified to be useful in interpreting isolated single thin dike-like magnetized bodies and may be an alternative processing technique. The open source code can be easily modified and adapted to suit the benefits of other researchers.
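For a flavor of the depth-estimation problem, the sketch below fits a textbook thin-sheet anomaly form, dT(x) = (a*h + b*x) / (x^2 + h^2), to a synthetic profile by nonlinear least squares. The paper's scheme instead solves equations built from numerical second- to fourth-order horizontal derivatives; this direct fit is a simpler, generic stand-in, and all numbers are invented.

```python
# Sketch: depth to a thin dike-like body via least squares on a textbook
# thin-sheet anomaly form (a stand-in for the derivative-based scheme).
import numpy as np
from scipy.optimize import curve_fit

def thin_sheet(x, a, b, h):
    return (a * h + b * x) / (x ** 2 + h ** 2)

x = np.linspace(-500.0, 500.0, 201)             # profile coordinate, m
rng = np.random.default_rng(6)
obs = thin_sheet(x, 400.0, 150.0, 60.0) + rng.normal(0.0, 0.05, x.size)

(a, b, h), _ = curve_fit(thin_sheet, x, obs, p0=(100.0, 100.0, 30.0))
print(f"estimated depth: {abs(h):.1f} m (true 60 m)")
```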
Beck, H J; Birch, G F
2013-06-01
Stormwater contaminant loading estimates using event mean concentration (EMC), rainfall/runoff relationship calculations and computer modelling (Model of Urban Stormwater Infrastructure Conceptualisation--MUSIC) demonstrated high variability in common methods of water quality assessment. Predictions of metal, nutrient and total suspended solid loadings for three highly urbanised catchments in Sydney estuary, Australia, varied greatly within and amongst methods tested. EMC and rainfall/runoff relationship calculations produced similar estimates (within 1 SD) in a statistically significant number of trials; however, considerable variability within estimates (∼50 and ∼25 % relative standard deviation, respectively) questions the reliability of these methods. Likewise, upper and lower default inputs in a commonly used loading model (MUSIC) produced an extensive range of loading estimates (3.8-8.3 times above and 2.6-4.1 times below typical default inputs, respectively). Default and calibrated MUSIC simulations produced loading estimates that agreed with EMC and rainfall/runoff calculations in some trials (4-10 from 18); however, they were not frequent enough to statistically infer that these methods produced the same results. Great variance within and amongst mean annual loads estimated by common methods of water quality assessment has important ramifications for water quality managers requiring accurate estimates of the quantities and nature of contaminants requiring treatment.
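The EMC hand calculation being compared reduces to concentration times runoff volume per event. A minimal sketch, with invented catchment values and concentrations:

```python
# Sketch: event-mean-concentration (EMC) loading estimate with a simple
# rainfall/runoff relationship. All values are illustrative.
import numpy as np

rain_mm = np.array([12.0, 5.0, 30.0, 18.0])   # event rainfall depths
area_m2 = 2.5e6                               # catchment area
runoff_coeff = 0.6                            # fraction of rain becoming runoff
emc_mg_L = 0.15                               # EMC of, e.g., copper

runoff_m3 = rain_mm / 1000.0 * area_m2 * runoff_coeff
load_kg = emc_mg_L * runoff_m3 / 1000.0       # mg/L equals g/m^3; /1000 -> kg
print(f"total load: {load_kg.sum():.1f} kg over {rain_mm.size} events")
```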
Nickerson, Brett S; Tinsley, Grant M
2018-03-21
The purpose of this study was to compare body fat estimates and fat-free mass (FFM) characteristics produced by multicompartment models when utilizing either dual energy X-ray absorptiometry (DXA) or single-frequency bioelectrical impedance analysis (SF-BIA) for bone mineral content (BMC) in a sample of physically active adults. Body fat percentage (BF%) was estimated with 5-compartment (5C), 4-compartment (4C), 3-compartment (3C), and 2-compartment (2C) models, and DXA. The 5C-Wang with DXA for BMC (i.e., 5C-Wang_DXA) was the criterion. 5C-Wang using SF-BIA for BMC (i.e., 5C-Wang_BIA), 4C-Wang_DXA (DXA for BMC), 4C-Wang_BIA (BIA for BMC), and 3C-Siri all produced values similar to 5C-Wang_DXA (r > 0.99; total error [TE] < 0.83%; standard error of estimate < 0.67%; 95% limits of agreement [LOAs] < ±1.35%). The 2C models (2C-Pace, 2C-Siri, and 2C-Brozek) and DXA each produced similar standard error of estimate and 95% LOAs (2.13%-3.12% and ±4.15%-6.14%, respectively). Furthermore, 3C-Lohman_DXA (underwater weighing for body volume and DXA for BMC) and 3C-Lohman_BIA (underwater weighing for body volume and SF-BIA for BMC) produced the largest 95% LOAs (±5.94%-8.63%). The FFM characteristics (i.e., FFM density, water/FFM, mineral/FFM, and protein/FFM) for 5C-Wang_DXA and 5C-Wang_BIA were each compared with the "reference body" cadavers of Brozek et al. 5C-Wang_BIA FFM density differed significantly from the "reference body" in women (1.103 ± 0.007 g/cm3; p < 0.001), but no differences were observed for 5C-Wang_DXA or either 5C model in men. Moreover, water/FFM and mineral/FFM were significantly lower in men and women when comparing 5C-Wang_DXA and 5C-Wang_BIA with the "reference body," whereas protein/FFM was significantly higher (all p ≤ 0.001). 3C-Lohman_BIA and 3C-Lohman_DXA produced error similar to 2C models and DXA and are therefore not recommended multicompartment models. Although more advanced multicompartment models (e.g., 4C-Wang and 5C-Wang) can utilize BIA-derived BMC with minimal impact on body fat estimates, the increased accuracy of these models over 3C-Siri is minimal. Copyright © 2018 The International Society for Clinical Densitometry. Published by Elsevier Inc. All rights reserved.
Alam, Iftikhar; Alam, Ibrar; Paracha, Parvez I; Pawelec, Graham
2012-01-01
Dietary intake has been shown to influence the acid–base balance in human subjects; however, this phenomenon is poorly understood and rarely reported for the least well-studied segment of older people in a developing country. The aims of the present study were to: (1) quantify estimates of daily net endogenous acid production (NEAP) (mEq/d) in a sample of otherwise healthy elderly aged 50 years and above; and (2) compare NEAP between the elderly and young to determine the effects of aging, which could contribute to changes in the acid–base balance. Analyses were carried out among 526 elderly and 131 young participants (aged 50–80 and 23–28 years, respectively), all of whom were free of discernible disease, nonsmokers, and not on any chronic medication. Selected anthropometric factors were measured and 24-hour dietary recall was recorded. We used two measures to characterize dietary acid load: (1) NEAP estimated as the dietary potential renal acid load plus organic acid excretion, the latter as a multiple of estimated body surface area; and (2) estimated NEAP based on protein and K. For the young and elderly, the ranges of NEAP were 12.1–67.8 mEq/d and 2.0–78.3 mEq/d, respectively. Regardless of the method used, the mean dietary acid–base balance (NEAP) was significantly higher for the elderly than the young (P = 0.0035 for NEAP [elderly, 44.1 mEq/d versus young 40.1 mEq/d]; and P = 0.0035 for the protein:potassium ratio [elderly, 1.4 mEq/d versus young 1.1 mEq/d]). A positive and significant correlation was found between NEAP and energy, protein, and phosphorus (P < 0.05 for all trends). The findings from this study provide evidence of the relatively higher production of NEAP in older people, possibly as an effect of higher consumption of certain acid-producing foods by the elderly. PMID:23271903
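The two NEAP estimators named above can be written out directly. The sketch below uses published coefficients (Remer & Manz PRAL, rounded; the Frassetto protein:potassium formula; organic acid excretion scaled by body surface area); the intake values are invented for a single illustrative subject.

```python
# Sketch: the two dietary acid load estimators described in the abstract.
# Intakes are illustrative; coefficients are rounded published values.
protein_g, p_mg, k_mg, mg_mg, ca_mg = 70.0, 1100.0, 2500.0, 250.0, 600.0
bsa_m2 = 1.8   # estimated body surface area, m^2

# (1) NEAP = PRAL + organic acid excretion (OA scaled by body surface area)
pral = (0.49 * protein_g + 0.037 * p_mg
        - 0.021 * k_mg - 0.026 * mg_mg - 0.013 * ca_mg)
oa = 41.0 * bsa_m2 / 1.73
neap_pral = pral + oa

# (2) NEAP from the protein-to-potassium ratio (K in mEq: mg / 39.1)
neap_pk = 54.5 * protein_g / (k_mg / 39.1) - 10.2
print(f"NEAP (PRAL+OA): {neap_pral:.1f} mEq/d; NEAP (protein:K): {neap_pk:.1f} mEq/d")
```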
Does choice of estimators influence conclusions from true metabolizable energy feeding trials?
Sherfy, M.H.; Kirkpatrick, R.L.; Webb, K.E.
2005-01-01
True metabolizable energy (TME) is a measure of avian dietary quality that accounts for metabolic fecal and endogenous urinary energy losses (EL) of non-dietary origin. The TME is calculated using a bird fed the test diet and an estimate of EL derived from another bird (Paired Bird Correction), the same bird (Self Correction), or several other birds (Group Mean Correction). We evaluated precision of these estimators by using each to calculate TME of three seed diets in blue-winged teal (Anas discors). The TME varied by <2% among estimators for all three diets, and Self Correction produced the least variable TMEs for each. The TME did not differ between estimators in nine paired comparisons within diets, but variation between estimators within individual birds was sufficient to be of practical consequence. Although differences in precision among methods were slight, Self Correction required the lowest sample size to achieve a given precision. Feeding trial methods that minimize variation among individuals have several desirable properties, including higher precision of TME estimates and more rigorous experimental control. Consequently, we believe that Self Correction is most likely to accurately represent nutritional value of food items and should be considered the standard method for TME feeding trials. © Dt. Ornithologen-Gesellschaft e.V. 2005.
Objectivity and validity of EMG method in estimating anaerobic threshold.
Kang, S-K; Kim, J; Kwon, M; Eom, H
2014-08-01
The purposes of this study were to verify and compare the performance of anaerobic threshold (AT) point estimates among different filtering intervals (9, 15, 20, 25, 30 s) and to investigate the interrelationships of AT point estimates obtained by ventilatory threshold (VT) and muscle fatigue thresholds using electromyographic (EMG) activity during incremental exercise on a cycle ergometer. 69 untrained male university students who nonetheless exercised regularly volunteered to participate in this study. The incremental exercise protocol was applied with a consistent stepwise increase in power output of 20 watts per minute until exhaustion. The AT point was also estimated in the same manner using the V-slope program with gas exchange parameters. In general, the estimated values of AT point-time computed by the EMG method were more consistent across the 5 filtering intervals and demonstrated higher correlations among themselves when compared with those values obtained by the VT method. The results of the present study suggest that EMG signals could be used as an alternative or a new option for estimating the AT point. Also, the proposed computing procedure implemented in Matlab for the analysis of EMG signals appeared to be valid and reliable, as it produced nearly identical values and high correlations with VT estimates. © Georg Thieme Verlag KG Stuttgart · New York.
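The threshold-detection idea shared by the V-slope and EMG approaches is a breakpoint search: fit two line segments and pick the break that minimizes total squared error. A minimal sketch on synthetic data (the signal shape, noise level, and true break are invented; filtering-interval effects are not modeled):

```python
# Sketch: locating a threshold as the breakpoint of a two-segment linear
# fit, applied to a synthetic RMS-EMG-vs-time series from an incremental test.
import numpy as np

t = np.arange(0, 900, 5.0)                       # s
rng = np.random.default_rng(7)
emg = np.where(t < 540, 0.05 * t, 0.05 * 540 + 0.25 * (t - 540))
emg += rng.normal(0, 3, t.size)

def sse_for_break(i):
    # Total squared error of independent linear fits left/right of break i
    sse = 0.0
    for seg in (slice(None, i), slice(i, None)):
        coef = np.polyfit(t[seg], emg[seg], 1)
        sse += ((np.polyval(coef, t[seg]) - emg[seg]) ** 2).sum()
    return sse

best = min(range(10, t.size - 10), key=sse_for_break)
print(f"estimated threshold at t = {t[best]:.0f} s (true 540 s)")
```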
NASA Astrophysics Data System (ADS)
Chang, Yaping; Qin, Dahe; Ding, Yongjian; Zhao, Qiudong; Zhang, Shiqiang
2018-06-01
The long-term change of evapotranspiration (ET) is crucial for managing water resources in areas with extreme climates, such as the Tibetan Plateau (TP). This study proposed a modified version of the global MOD16 algorithm for estimating ET over alpine meadow on the TP in China. Wind speed and vegetation height were integrated to estimate aerodynamic resistance, while the temperature and moisture constraints for stomatal conductance were revised based on the technique proposed by Fisher et al. (2008). Moreover, Fisher's method for soil evaporation was adopted to reduce the uncertainty in soil evaporation estimation. Five representative alpine meadow sites on the TP were selected to investigate the performance of the modified algorithm. Comparisons were made between the ET observed using the Eddy Covariance (EC) method and that estimated using both the original and modified algorithms. The results revealed that the modified algorithm performed better than the original MOD16 algorithm, with the coefficient of determination (R2) increasing from 0.26 to 0.68 and the root mean square error (RMSE) decreasing from 1.56 to 0.78 mm d-1. The modified algorithm performed slightly better, with a higher R2 (0.70) and lower RMSE (0.61 mm d-1), for after-precipitation days than for non-precipitation days at the Suli site. In contrast, better results were obtained for non-precipitation days than for after-precipitation days at the Arou, Tanggula, and Hulugou sites, indicating that the modified algorithm may be more suitable for estimating ET on non-precipitation days, with higher accuracy than on after-precipitation days, which had large observation errors. The comparisons between the modified algorithm and two mainstream methods suggested that the modified algorithm can produce high-accuracy ET over the alpine meadow sites on the TP.
NASA Astrophysics Data System (ADS)
Mainhagu, J.; Brusseau, M. L.
2016-09-01
The mass of contaminant present at a site, particularly in source zones, is one of the key parameters for assessing the risk posed by contaminated sites and for setting and evaluating remediation goals and objectives. This quantity is rarely known and is challenging to estimate accurately. This work investigated the efficacy of fitting mass-depletion functions to temporal contaminant mass discharge (CMD) data as a means of estimating initial mass. Two common mass-depletion functions, exponential and power functions, were applied to historic soil vapor extraction (SVE) CMD data collected from 11 contaminated sites for which the SVE operations are considered to be at or close to essentially complete mass removal. The functions were applied to the entire available data set for each site, as well as to the early-time data (the initial one-third of the available data). Additionally, a complete differential-time analysis was conducted. The latter two analyses were conducted to investigate the impact of limited data on method performance, given that the primary mode of application would be to use the method during the early stages of a remediation effort. The estimated initial masses were compared to the total masses removed by the SVE operations. The mass estimates obtained from application to the full data sets were reasonably similar to the measured masses removed for both functions (13 and 15% mean error). The use of the early-time data resulted in only slightly higher error for the exponential function (17%) but much higher error (51%) for the power function. These results suggest that the method can produce reasonable estimates of initial mass useful for planning and assessing remediation efforts.
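A hedged sketch of the fitting idea follows: fit exponential and power mass-depletion functions to a CMD time series, then recover mass by integration. The data, parameter values, and unit labels are synthetic illustrations, not values from the 11 sites.

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_cmd(t, m0, k):      # discharge rate implied by exponential depletion
    return m0 * k * np.exp(-k * t)

def pow_cmd(t, a, b):       # power-function discharge rate
    return a * t ** (-b)

t = np.linspace(1.0, 60.0, 30)                       # months of SVE operation
cmd = exp_cmd(t, 5000.0, 0.08)                       # synthetic "true" series
cmd *= np.random.default_rng(1).lognormal(0.0, 0.1, t.size)

(m0, k), _ = curve_fit(exp_cmd, t, cmd, p0=(1000.0, 0.05))
print("exponential initial-mass estimate:", round(m0, 1), "kg")

(a, b), _ = curve_fit(pow_cmd, t, cmd, p0=(500.0, 0.5))
# Integrating the power form between t0 and t1 gives mass removed over that
# window (the integral diverges as t -> 0 when b >= 1, a known limitation).
if abs(b - 1.0) > 1e-9:
    mass_pow = a / (1 - b) * (t[-1] ** (1 - b) - t[0] ** (1 - b))
else:
    mass_pow = a * np.log(t[-1] / t[0])
print("power-function mass over the record:", round(mass_pow, 1), "kg")
```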
Rezende, Daniela; Melo, José W S; Oliveira, José E M; Gondim, Manoel G C
2016-07-01
Reducing the losses caused by Aceria guerreronis Keifer has been an arduous task for farmers. However, there are no detailed studies on losses that simultaneously analyse correlated parameters, and very few studies that address the economic viability of chemical control, the main strategy for managing this pest. In this study the objectives were (1) to estimate the crop loss due to coconut mite and (2) to perform a financial analysis of acaricide application to control the pest. For this, the following parameters were evaluated: number and weight of fruits, liquid albumen volume, and market destination of plants with and without monthly abamectin spraying (three harvests). The costs involved in the chemical control of A. guerreronis were also quantified. Higher A. guerreronis incidence on plants resulted in a 60 % decrease in the mean number of fruits harvested per bunch and a 28 % decrease in liquid albumen volume. Mean fruit weight remained unaffected. The market destination of the harvested fruit was also affected by higher A. guerreronis incidence. Untreated plants, with higher A. guerreronis infestation intensity, produced a lower proportion of fruit intended for fresh market and higher proportions of non-marketable fruit and fruit intended for industrial processing. Despite the costs involved in controlling A. guerreronis, the difference between the profit from the treated site and the untreated site was 18,123.50 Brazilian Real; this value represents 69.1 % higher profit at the treated site.
Pouget, Enrique R; Friedman, Samuel R; Cleland, Charles M; Tempalski, Barbara; Cooper, Hannah L F
2012-06-01
Little information exists on the population prevalence or geographic distribution of injection drug users (IDUs) who are Hispanic in the USA. Here, we present yearly estimates of IDU population prevalence among Hispanic residents of the 96 most populated US metropolitan statistical areas (MSAs) for 1992-2002. First, yearly estimates of the proportion of IDUs who were Hispanic in each MSA were created by combining data on (1) IDUs receiving drug treatment services in Substance Abuse and Mental Health Services Administration (SAMHSA)'s Treatment Entry Data System, (2) IDUs being tested in the Centers for Disease Control and Prevention (CDC) HIV-Counseling and Testing System, and (3) incident AIDS diagnoses among IDUs, supplemented by (4) data on IDUs who were living with AIDS. Then, the resulting proportions were multiplied by published yearly estimates of the number of IDUs of all racial/ethnic groups in each MSA to produce Hispanic IDU population estimates. These were divided by Hispanic population data to produce population prevalence rates. Time trends were tested using mixed-effects regression models. Hispanic IDU prevalence declined significantly on average (1992 mean = 192, median = 133; 2002 mean = 144, median = 93; units are per 10,000 Hispanics aged 15-64). The highest prevalence rates across time tended to be in smaller northeastern MSAs. Comparing the last three study years to the first three, prevalence decreased in 82% of MSAs and increased in 18%. Comparisons with data on drug-related mortality and hepatitis C mortality supported the validity of the estimates. Generally, estimates of Hispanic IDU population prevalence were higher than published estimates for non-Hispanic White residents and lower than published estimates for non-Hispanic Black residents. Further analysis indicated that the proportion of IDUs that was Hispanic decreased in 52% and increased in 48% of MSAs between 2002 and 2007. The estimates resulting from this study can be used to investigate MSA-level social and economic factors that may have contributed to variations across MSAs and to help guide prevention program planning for Hispanic IDUs within MSAs. Future research should attempt to determine to what extent these trends are applicable to Hispanic national origin subgroups.
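The two-step arithmetic of the estimation procedure can be illustrated with a toy Python example; the numbers below are made up for a single hypothetical MSA and are not values from the study.

```python
# Step 1 output: proportion of IDUs who are Hispanic, combined from drug
# treatment, HIV testing, and AIDS surveillance data (hypothetical value).
prop_idu_hispanic = 0.22

# Step 2 inputs: published estimate of IDUs of all racial/ethnic groups and
# the Hispanic population aged 15-64 in the same MSA (hypothetical values).
total_idus = 40_000
hispanic_pop_15_64 = 450_000

hispanic_idus = prop_idu_hispanic * total_idus
rate_per_10k = hispanic_idus / hispanic_pop_15_64 * 10_000
print(round(rate_per_10k, 1), "Hispanic IDUs per 10,000 Hispanics aged 15-64")
# -> 195.6, of the same order as the 1992 mean reported above
```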
Accounting for climate and air quality damages in future U.S. electricity generation scenarios.
Brown, Kristen E; Henze, Daven K; Milford, Jana B
2013-04-02
The EPA-MARKAL model of the U.S. electricity sector is used to examine how imposing emissions fees based on estimated health and environmental damages might change electricity generation. Fees are imposed on life-cycle emissions of SO2, nitrogen oxides (NOx), particulate matter, and greenhouse gases (GHG) from 2015 through 2055. Changes in electricity production, fuel type, emissions controls, and emissions produced under various fees are examined. A shift in fuels used for electricity production results from $30/ton CO2-equivalent GHG fees or from criteria pollutant fees set at the higher end of the range of published damage estimates, but not from criteria pollutant fees based on low or midrange damage estimates. With midrange criteria pollutant fees assessed, SO2 and NOx emissions are lower than in the business as usual case (by 52% and 10%, respectively), with larger differences in the western U.S. than in the eastern U.S. GHG emissions are not significantly impacted by midrange criteria pollutant fees alone; conversely, with only GHG fees, NOx emissions are reduced by up to 11%, yet SO2 emissions are slightly higher than in the business as usual case. Therefore, fees on both GHG and criteria pollutants may be needed to achieve significant reductions in both sets of pollutants.
Evaluation of methods for the quantification of ether extract contents in forage and cattle feces.
Barbosa, Marcília M; Detmann, Edenio; Valadares, Sebastião C; Detmann, Kelly S C; Franco, Marcia O; Batista, Erick D; Rocha, Gabriel C
2017-01-01
The objective of this study was to compare the estimates of ether extract (EE) contents obtained by the Randall method and by the high-temperature method of the American Oil Chemists' Society (AOCS; Am 5-04) in forages (n = 20) and cattle feces (n = 15). The EE contents were quantified using the Randall extraction or AOCS method, with XT4 filter bags or cartridges made of qualitative filter paper (80 g/m²) as containers for the samples. The loss of particles, the concentration of residual chlorophyll after extraction, and the recovery of protein and minerals in the material subjected to extraction were also evaluated. A significant interaction was observed between extraction method and material for EE contents. The EE estimates using the AOCS method were higher, mainly in forages. No loss of particles was observed with the different containers. The chlorophyll contents in the residues of cattle feces were not affected by the extraction method; however, residual chlorophyll was lower using the AOCS method in forages. There was complete recovery of protein and ash after extraction. The results suggest that the AOCS method produces higher estimates of EE contents in forages and cattle feces, possibly by providing greater extraction of non-fatty EE.
Chen, Ming-Jen; Hsu, Hui-Tsung; Lin, Cheng-Li; Ju, Wei-Yuan
2012-10-01
Human exposure to acrylamide (AA) through consumption of French fries and other foods has been recognized as a potential health concern. Here, we used a statistical non-linear regression model, based on the two most influential factors, cooking temperature and time, to estimate AA concentrations in French fries. The R² of the predictive model is 0.83, suggesting the developed model is significant and valid. Based on French fry intake survey data collected in this study and eight frying temperature-time schemes that produce tasty and visually appealing French fries, Monte Carlo simulation results showed that if the AA concentration is higher than 168 ppb, the estimated cancer risk for adolescents aged 13-18 years in Taichung City would already exceed the target excess lifetime cancer risk (ELCR), even when only this limited life span is taken into account. In order to reduce the cancer risk associated with AA intake, the AA levels in French fries might have to be reduced even further if the epidemiological observations are valid. Our mathematical model can serve as a basis for further investigations of ELCR covering different life stages, behaviors, and population groups. Copyright © 2012 Elsevier Ltd. All rights reserved.
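A hedged sketch of a Monte Carlo risk screen in the spirit of the study follows, using the conventional ELCR = cancer slope factor × lifetime average daily dose formulation; the slope factor, intake and body-weight distributions, and sample size are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
aa_conc_ppb = 168.0                                 # AA level screened (ng/g)
intake_g_day = rng.lognormal(np.log(20), 0.6, n)    # French fry intake, g/day
bw_kg = rng.normal(55, 8, n).clip(35, 90)           # adolescent body weight
slope_factor = 0.5                                  # (mg/kg-day)^-1, assumed

# Dose in mg/kg-day: ppb (ng/g) -> mg/g is a factor of 1e-6.
dose_mg_kg_day = aa_conc_ppb * 1e-6 * intake_g_day / bw_kg
elcr = slope_factor * dose_mg_kg_day
print("median ELCR:", float(np.median(elcr)))
print("fraction above a 1e-6 target risk:", float((elcr > 1e-6).mean()))
```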
Hain, Christopher R; Anderson, Martha C
2017-10-16
Observations of land surface temperature (LST) are crucial for monitoring surface energy fluxes from satellite. Methods that require high temporal resolution LST observations (e.g., from geostationary orbit) can be difficult to apply globally because several geostationary sensors are required to attain near-global coverage (60°N to 60°S). While LST observations from polar-orbiting sensors provide global coverage at higher spatial resolutions, their temporal sampling (twice-daily observations) can pose significant limitations. For example, the Atmosphere Land Exchange Inverse (ALEXI) surface energy balance model, used for monitoring evapotranspiration and drought, requires an observation of the morning change in LST - a quantity not directly observable from polar-orbiting sensors. Therefore, we have developed and evaluated a data-mining approach to estimate the mid-morning rise in LST from a single sensor (2 LST observations per day), the Moderate Resolution Imaging Spectroradiometer (MODIS) on the Aqua platform. In general, the data-mining approach produced estimates with low relative error (5 to 10%) and statistically significant correlations when compared against geostationary observations. This approach will facilitate global, near real-time applications of ALEXI at higher spatial and temporal coverage from a single sensor than is achievable with current geostationary datasets.
Comparing four methods to estimate usual intake distributions.
Souverein, O W; Dekkers, A L; Geelen, A; Haubrock, J; de Vries, J H; Ocké, M C; Harttig, U; Boeing, H; van 't Veer, P
2011-07-01
The aim of this paper was to compare methods to estimate usual intake distributions of nutrients and foods. As 'true' usual intake distributions are not known in practice, the comparison was carried out through a simulation study, as well as empirically, by application to data from the European Food Consumption Validation (EFCOVAL) Study in which two 24-h dietary recalls (24-HDRs) and food frequency data were collected. The methods being compared were the Iowa State University Method (ISU), National Cancer Institute Method (NCI), Multiple Source Method (MSM) and Statistical Program for Age-adjusted Dietary Assessment (SPADE). Simulation data were constructed with varying numbers of subjects (n), different values for the Box-Cox transformation parameter (λ_BC) and different values for the ratio of the within- and between-person variance (r_var). All data were analyzed with the four different methods and the estimated usual mean intake and selected percentiles were obtained. Moreover, the 2-day within-person mean was estimated as an additional 'method'. These five methods were compared in terms of the mean bias, which was calculated as the mean of the differences between the estimated value and the known true value. The application to data from the EFCOVAL Project included calculations for nutrients (that is, protein, potassium, protein density) and foods (that is, vegetables, fruit and fish). Overall, the mean bias of the ISU, NCI, MSM and SPADE Methods was small. However, for all methods, the mean bias and the variation of the bias increased with smaller sample size, higher variance ratios and more pronounced departures from normality. Serious mean bias (especially in the 95th percentile) was seen using the NCI Method when r_var = 9, λ_BC = 0 and n = 1000. The ISU Method and MSM showed a somewhat higher s.d. of the bias compared with the NCI and SPADE Methods, indicating a larger method uncertainty. Furthermore, whereas the ISU, NCI and SPADE Methods produced unimodal density functions by definition, MSM produced distributions with 'peaks' when sample size was small, because the population's usual intake distribution was based on estimated individual usual intakes. The application to the EFCOVAL data showed that all estimates of the percentiles and mean were within 5% of each other for the three nutrients analyzed. For vegetables, fruit and fish, the differences were larger than for nutrients, but overall the sample mean was estimated reasonably. The four methods that were compared seem to provide good estimates of the usual intake distribution of nutrients. Nevertheless, care needs to be taken when a nutrient has a high within-person variation or a highly skewed distribution, and when the sample size is small. As the methods offer different features, practical reasons may exist to prefer one method over the other.
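The core problem the four methods address can be shown in a few lines of Python: with only two 24-HDRs per person, the naive 2-day mean inflates the tails of the usual-intake distribution, which is why all four methods shrink toward the mean. The sample size, variance ratio, and normality assumption below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r_var = 1000, 4.0                  # r_var: within/between variance ratio
between_sd = 10.0
within_sd = np.sqrt(r_var) * between_sd

usual = 100 + between_sd * rng.normal(size=n)            # true usual intakes
recalls = usual[:, None] + within_sd * rng.normal(size=(n, 2))
two_day_mean = recalls.mean(axis=1)                      # the naive 'method'

for q in (5, 50, 95):
    print(f"P{q}: true = {np.percentile(usual, q):6.1f}  "
          f"2-day mean = {np.percentile(two_day_mean, q):6.1f}")
# The P5 is underestimated and the P95 overestimated by the 2-day mean.
```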
Genome-wide minor histocompatibility matching as related to the risk of graft-versus-host disease.
Martin, Paul J; Levine, David M; Storer, Barry E; Warren, Edus H; Zheng, Xiuwen; Nelson, Sarah C; Smith, Anajane G; Mortensen, Bo K; Hansen, John A
2017-02-09
The risk of acute graft-versus-host disease (GVHD) is higher after allogeneic hematopoietic cell transplantation (HCT) from unrelated donors as compared with related donors. This difference has been explained by increased recipient mismatching for major histocompatibility antigens or minor histocompatibility antigens. In the current study, we used genome-wide arrays to enumerate single nucleotide polymorphisms (SNPs) that produce graft-versus-host (GVH) amino acid coding differences between recipients and donors. We then tested the hypothesis that higher degrees of genome-wide recipient GVH mismatching correlate with higher risks of GVHD after allogeneic HCT. In HLA-genotypically matched sibling recipients, the average recipient mismatching of coding SNPs was 9.35%. Each 1% increase in genome-wide recipient mismatching was associated with an estimated 20% increase in the hazard of grades III-IV GVHD (hazard ratio [HR], 1.20; 95% confidence interval [CI], 1.05-1.37; P = .007) and an estimated 22% increase in the hazard of stage 2-4 acute gut GVHD (HR, 1.22; 95% CI, 1.02-1.45; P = .03). In HLA-A, B, C, DRB1, DQA1, DQB1, DPA1, DPB1-phenotypically matched unrelated recipients, the average recipient mismatching of coding SNPs was 17.3%. The estimated risks of GVHD-related outcomes in HLA-phenotypically matched unrelated recipients were low, relative to the large difference in genome-wide mismatching between the 2 groups. In contrast, the risks of GVHD-related outcomes were higher in HLA-DP GVH-mismatched unrelated recipients than in HLA-matched sibling recipients. Taken together, these results suggest that the increased GVHD risk after unrelated HCT is predominantly an effect of HLA-mismatching. © 2017 by The American Society of Hematology.
Lebozec, Kristell; Jandrot-Perrus, Martine; Avenard, Gilles; Favre-Bulle, Olivier; Billiald, Philippe
2018-09-25
Monoclonal antibody fragments (Fab) are a promising class of therapeutic agents. Fabs are aglycosylated proteins, so many expression platforms have been developed, including prokaryotic, yeast and mammalian cells. However, these platforms are not equivalent in terms of cell line development and culture time, product quality, and possibly cost of production, all of which greatly influence the success of a drug candidate's pharmaceutical development. This study is an assessment of the humanized Fab fragment ACT017 produced from two microorganisms (Escherichia coli and Pichia pastoris) and one mammalian cell host (CHO). Following low-scale production and Protein L-affinity purification under generic conditions, physico-chemical and functional quality assessments were carried out prior to economic analysis of industrial-scale production using specialized software (Biosolve, Biopharm Services, UK). Results show higher titers when using E. coli, but these were associated with high heterogeneity of the protein content recovered in the supernatant. We also observed glycoforms of the Fab produced from P. pastoris, while the Fab secreted from CHO was the most homogeneous, despite a much longer culture time and slightly higher estimated cost of goods. This study may help inform future pharmaceutical development of this class of therapeutic proteins. Copyright © 2018 Elsevier B.V. All rights reserved.
An improved procedure for the validation of satellite-based precipitation estimates
NASA Astrophysics Data System (ADS)
Tang, Ling; Tian, Yudong; Yan, Fang; Habib, Emad
2015-09-01
The objective of this study is to propose and test a new procedure to improve the validation of remote-sensing, high-resolution precipitation estimates. Our recent studies show that many conventional validation measures do not accurately capture the unique error characteristics in precipitation estimates, and thus fail to adequately inform both data producers and users. The proposed new validation procedure has two steps: 1) an error decomposition approach to separate the total retrieval error into three independent components: hit error, false precipitation and missed precipitation; and 2) further analysis of the hit error based on a multiplicative error model. In the multiplicative error model, the error features are captured by three model parameters, separating systematic and random errors and leading to more accurate quantification of the uncertainties. The proposed procedure is used to quantitatively evaluate the two recent versions (Version 6 and 7) of TRMM's Multi-sensor Precipitation Analysis (TMPA) real-time and research product suite (3B42 and 3B42RT) for seven years (2005-2011) over the continental United States (CONUS). The gauge-based National Centers for Environmental Prediction (NCEP) Climate Prediction Center (CPC) near-real-time daily precipitation analysis is used as the reference. In addition, the radar-based NCEP Stage IV precipitation data are also model-fitted to verify the effectiveness of the multiplicative error model. The results show that the winter total bias is dominated by missed precipitation over the west coastal areas and the Rocky Mountains, and by false precipitation over large areas of the Midwest. The summer total bias comes largely from the hit bias in the central US. Meanwhile, the new version (V7) tends to produce more rainfall at higher rain rates, which moderates the significant underestimation exhibited in the previous V6 products. Moreover, the error analysis from the multiplicative error model provides a clear and concise picture of the systematic and random errors, with both versions of 3B42RT having higher errors, to varying degrees, than their research (post-real-time) counterparts. The new V7 algorithm shows obvious improvements in reducing random errors in both winter and summer seasons compared to its predecessor V6. Stage IV, as expected, surpasses the satellite-based datasets in all the metrics over CONUS. Based on the results, we recommend the new procedure be adopted for routine validation of satellite-based precipitation datasets, and we expect the procedure will work effectively for the higher resolution data to be produced in the Global Precipitation Measurement (GPM) era.
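Under simple assumptions, the two-step procedure can be sketched in Python: threshold-based decomposition of the total error into hit, false, and missed components, followed by a multiplicative fit to the hits, ln(est) = ln(alpha) + beta·ln(ref) + eps, so that alpha and beta capture systematic error and the residual spread captures random error. All data below are synthetic, and the 0.1 mm rain/no-rain threshold is an assumption.

```python
import numpy as np

def decompose(est, ref, thresh=0.1):
    """Split total error into hit bias, false precipitation, missed precipitation."""
    hit = (est >= thresh) & (ref >= thresh)
    false_p = (est >= thresh) & (ref < thresh)
    missed = (est < thresh) & (ref >= thresh)
    return hit, {
        "hit_bias": float(np.sum(est[hit] - ref[hit])),
        "false_precip": float(np.sum(est[false_p])),
        "missed_precip": float(-np.sum(ref[missed])),
    }

rng = np.random.default_rng(3)
ref = rng.gamma(0.5, 4.0, 5000)                       # reference daily rain (mm)
est = 0.9 * ref ** 1.05 * rng.lognormal(0.0, 0.3, ref.size)
est[rng.random(ref.size) < 0.05] = 0.0                # inject some misses

hit, parts = decompose(est, ref)
beta, ln_alpha = np.polyfit(np.log(ref[hit]), np.log(est[hit]), 1)
print(parts)
print("alpha:", round(float(np.exp(ln_alpha)), 3), "beta:", round(float(beta), 3))
```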
Characteristics of produced water discharged to the Gulf of Mexico hypoxic zone.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Veil, J. A.; Kimmell, T. A.; Rechner, A. C.
2005-08-24
Each summer, an area of low dissolved oxygen (the hypoxic zone) forms in the shallow nearshore Gulf of Mexico waters from the Mississippi River Delta westward to near the Texas/Louisiana border. Most scientists believe that the leading contributor to the hypoxic zone is input of nutrients (primarily nitrogen and phosphorus compounds) from the Mississippi and Atchafalaya Rivers. The nutrients stimulate growth of phytoplankton. As the phytoplankton subsequently die, they fall to the bottom waters where they are decomposed by microorganisms. The decomposition process consumes oxygen in the bottom waters to create hypoxic conditions. Sources other than the two rivers mentioned above may also contribute significant quantities of oxygen-demanding pollutants. One very visible potential source is the hundreds of offshore oil and gas platforms located within or near the hypoxic zone. Many of these platforms discharge varying volumes of produced water. However, only limited data characterizing oxygen demand and nutrient concentration and loading from offshore produced water discharges have been collected. No comprehensive and coordinated oxygen demand data exist for produced water discharges in the Gulf of Mexico. This report describes the results of a program to sample 50 offshore oil and gas platforms located within the Gulf of Mexico hypoxic zone. The program was conducted in response to a requirement in the U.S. Environmental Protection Agency (EPA) general National Pollutant Discharge Elimination System (NPDES) permit for offshore oil and gas discharges. EPA requested information on the amount of oxygen-demanding substances contained in the produced water discharges. This information is needed as inputs to several water quality models that EPA intends to run to estimate the relative contributions of the produced water discharges to the occurrence of the hypoxic zone. Sixteen platforms were sampled 3 times each at approximately one-month intervals to give an estimate of temporal variability. An additional 34 platforms were sampled one time. The 50 sampled platforms were scattered throughout the hypoxic zone to give an estimate of spatial variability. Each platform was sampled for biochemical oxygen demand (BOD), total organic carbon (TOC), nitrogen (ammonia, nitrate, nitrite, and total Kjeldahl nitrogen [TKN]), and phosphorus (total phosphorus and orthophosphate). In addition to these parameters, each sample was monitored for pH, conductivity, salinity, and temperature. The sampling provided average platform concentrations for each parameter. Table ES-1 shows the mean, median, maximum, and minimum for the sampled parameters. For some of the parameters, the mean is considerably larger than the median, suggesting that one or a few data points are much higher than the rest of the points (outliers). Chapter 4 contains an extensive discussion of outliers and shows how the sample results change if outliers are deleted from consideration. A primary goal of this study is to estimate the mass loading (lb/day) of each of the oxygen-demanding pollutants from the 50 platforms sampled in the study. Loading is calculated by multiplying concentrations by the discharge volume and then by a conversion factor to allow units to match. The loadings calculated in this study of 50 platforms represent a produced water discharge volume of about 176,000 bbl/day. The total amount of produced water generated in the hypoxic zone during the year 2003 was estimated as 508,000 bbl/day.
This volume is based on reports by operators to the Minerals Management Service each year. It reflects the volume of produced water that is generated from each lease, not the volume that is discharged from each platform. The mass loadings from offshore oil and gas discharges to the entire hypoxic zone were estimated by multiplying the 50-platform loadings by the ratio of total water generated to 50-platform discharge volume. The loadings estimated for the 50 platforms and for the entire hypoxic zone are shown in Table ES-2. These estimates and the sampling data from 50 platforms represent the most complete and comprehensive effort ever undertaken to characterize the amount and potential sources of the oxygen demand in offshore oil and gas produced water discharges.
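The loading calculation is pure unit conversion, sketched below in Python; the concentration used is a placeholder, not a Table ES-1 value, while the discharge volumes are the 176,000 and 508,000 bbl/day figures quoted above.

```python
# loading (lb/day) = concentration (mg/L) x discharge (bbl/day) x conversions
L_PER_BBL = 42 * 3.78541           # 42 US gallons per barrel, 3.78541 L per gallon
LB_PER_MG = 2.20462e-6

def loading_lb_per_day(conc_mg_per_L, discharge_bbl_per_day):
    return conc_mg_per_L * discharge_bbl_per_day * L_PER_BBL * LB_PER_MG

sampled = loading_lb_per_day(120.0, 176_000)      # e.g., BOD at a placeholder 120 mg/L
# Scale to the entire hypoxic zone by the ratio of total produced water
# generated to the sampled discharge volume, as described above.
zone_total = sampled * 508_000 / 176_000
print(round(sampled), "lb/day for the 50 platforms;", round(zone_total), "lb/day zone-wide")
```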
Streptozotocin and Alloxan-based Selection Improves Toxin Resistance of Insulin-Producing RINm Cells
Zemel, Romy; Bloch, Olga V.; Grief, Hagar; Vardi, Pnina
2000-01-01
The aim of our study was to develop a method for selection of subpopulations of insulin-producing RINm cells with higher resistance to beta cell toxins. Cells resistant to streptozotocin (RINmS) and alloxan (RINmA) were obtained by repeated exposure of parental RINm cells to these two toxins, while the defense capacity was estimated by the MTT colorimetric method and [3H]-thymidine incorporation assay. We found that RINmS and RINmA displayed higher resistance to both streptozotocin (STZ) and alloxan (AL) when compared to the parental RINm cells. In contrast, no differences in sensitivity to hydrogen peroxide were found between toxin-selected and parental cells. Partial protection from the toxic effect of STZ and AL was obtained only in the parental RINm cells after preincubation of cells with the unmetabolizable 3-O-methyl-glucose. The possibility that GLUT-2 is involved in cell sensitivity to toxins was confirmed by Western blot analysis, which showed higher expression of GLUT-2 in parental RINm compared to RINmS and RINmA cells. In addition to the higher cell defense property evidenced in the selected cells, we also found higher insulin content and insulin secretion in both RINmS and RINmA cells when compared to the parental RINm cells. In conclusion, STZ and AL treatment can be used for selection of cell subpopulations with higher cell defense properties and hormone production. The different GLUT-2 expression in parental and resistant cells suggests involvement of GLUT-2 in mechanisms of cell response to different toxins. PMID:11467412
The Effects of Hot Corrosion Pits on the Fatigue Resistance of a Disk Superalloy
NASA Technical Reports Server (NTRS)
Gabb, Timothy P.; Telesman, Jack; Hazel, Brian; Mourer, David P.
2009-01-01
The effects of hot corrosion pits on the low cycle fatigue life and failure modes of the disk superalloy ME3 were investigated. Low cycle fatigue specimens were subjected to hot corrosion exposures producing pits, then tested at low and high temperatures. Fatigue lives and failure initiation points were compared to those of specimens without corrosion pits. Several tests were interrupted to estimate the fraction of fatigue life at which fatigue cracks initiated at pits. Corrosion pits significantly reduced fatigue life, by 60 to 98 percent. Fatigue cracks initiated at a very small fraction of life in high temperature tests, but at higher fractions in tests at low temperature. Critical pit sizes required to promote fatigue cracking were estimated, based on measurements of pits initiating cracks on fracture surfaces.
Adjusting for sampling variability in sparse data: geostatistical approaches to disease mapping.
Hampton, Kristen H; Serre, Marc L; Gesink, Dionne C; Pilcher, Christopher D; Miller, William C
2011-10-06
Disease maps of crude rates from routinely collected health data indexed at a small geographical resolution pose specific statistical problems due to the sparse nature of the data. Spatial smoothers allow areas to borrow strength from neighboring regions to produce a more stable estimate of the areal value. Geostatistical smoothers are able to quantify the uncertainty in smoothed rate estimates without a high computational burden. In this paper, we introduce a uniform model extension of Bayesian Maximum Entropy (UMBME) and compare its performance to that of Poisson kriging in measures of smoothing strength and estimation accuracy as applied to simulated data and the real data example of HIV infection in North Carolina. The aim is to produce more reliable maps of disease rates in small areas to improve identification of spatial trends at the local level. In all data environments, Poisson kriging exhibited greater smoothing strength than UMBME. With the simulated data where the true latent rate of infection was known, Poisson kriging resulted in greater estimation accuracy with data that displayed low spatial autocorrelation, while UMBME provided more accurate estimators with data that displayed higher spatial autocorrelation. With the HIV data, UMBME performed slightly better than Poisson kriging in cross-validatory predictive checks, with both models performing better than the observed data model with no smoothing. Smoothing methods have different advantages depending upon both internal model assumptions that affect smoothing strength and external data environments, such as spatial correlation of the observed data. Further model comparisons in different data environments are required to provide public health practitioners with guidelines needed in choosing the most appropriate smoothing method for their particular health dataset.
Global petroleum resources: A view to the future
Ahlbrandt, T.S.; McCabe, P.J.
2002-01-01
It is necessary to periodically reassess petroleum resources, not only because new data become available and better geologic models are developed, but also because many non-geologic factors determine which part of the crustal abundance of petroleum will be economic and acceptable over the foreseeable future. In 2000, the U.S. Geological Survey completed an assessment of the world's conventional petroleum resources, exclusive of the United States. This assessment differs from those before it: overall, the 2000 assessment of potential petroleum resources is higher than previous assessments, largely because it is the first USGS world assessment to include field growth estimates. Based on a thorough investigation of the petroleum geology of each province, the assessment couples geologic analysis with a probabilistic methodology to estimate remaining potential. Including the assessment numbers for the United States from the USGS and the Minerals Management Service (MMS), the world's endowment of recoverable oil - which consists of cumulative production, remaining reserves, reserve growth and undiscovered resources - is estimated at about 3 trillion barrels of oil. Of this, about 24 percent has been produced and an additional 29 percent has been discovered and booked as reserves. The natural gas endowment is estimated at 15.4 quadrillion cubic feet (2.5 trillion barrels of oil equivalent), of which only about 11 percent has been produced and an additional 31 percent has been discovered and booked as reserves. The USGS assessment is not exhaustive, because it does not cover all sedimentary basins of the world. Relatively small volumes of oil or gas have been found in an additional 279 provinces, and significant accumulations may occur in these or other basins that were not assessed. The estimates are therefore conservative.
GRACE time-variable gravity field recovery using an improved energy balance approach
NASA Astrophysics Data System (ADS)
Shang, Kun; Guo, Junyi; Shum, C. K.; Dai, Chunli; Luo, Jia
2015-12-01
A new approach based on the energy conservation principle has been developed for satellite gravimetry missions; it yields more accurate estimation of in situ geopotential difference observables using K-band ranging (KBR) measurements from the Gravity Recovery and Climate Experiment (GRACE) twin-satellite mission. This new approach preserves more of the gravity information sensed by KBR range-rate measurements and reduces orbit error compared to previous energy balance methods. Results from analysis of 11 yr of GRACE data indicate that the resulting geopotential difference estimates agree well with predicted values from official Level 2 solutions, with a much higher correlation (0.9) than the 0.5-0.8 reported by previously published energy balance studies. We demonstrate that our approach produces a time-variable gravity solution comparable to the Level 2 solutions. The regional GRACE temporal gravity solutions over Greenland reveal that a substantially higher temporal resolution is achievable at 10-d sampling compared to the official monthly solutions, without compromising spatial resolution or requiring regularization or post-processing.
Friedman, Bernard
2013-01-01
Objective Our objective was to provide a national estimate across all payers of the distribution and cost of selected chronic conditions for hospitalized adults in 2009, stratified by demographic characteristics. Analysis We analyzed the Nationwide Inpatient Sample (NIS), the largest all-payer inpatient database in the United States. Use, cost, and mortality estimates across payer, age, sex, and race/ethnicity are produced for grouped or multiple chronic conditions (MCC). The 5 most common dyads and triads were determined. Results In 2009, there were approximately 28 million adult discharges from US hospitals other than those related to pregnancy and maternity; 39% had 2 to 3 MCC, and 33% had 4 or more. A higher number of MCC was associated with higher mortality, use of services, and average cost. The percentages of Medicaid, privately insured patients, and ethnic/racial groups with 4 or more MCC were highly sensitive to age. Summary This descriptive analysis of multipayer inpatient data provides a robust national view of the substantial use and costs among adults hospitalized with MCC. PMID:23618542
Higher-Order Systematic Effects in the Muon Beam-Spin Dynamics for Muon g-2
NASA Astrophysics Data System (ADS)
Crnkovic, Jason; Brown, Hugh; Krouppa, Brandon; Metodiev, Eric; Morse, William; Semertzidis, Yannis; Tishchenko, Vladimir
2016-03-01
The BNL Muon g-2 Experiment (E821) produced a precision measurement of the muon anomalous magnetic moment, whereas the Fermilab Muon g-2 Experiment (E989) is an upgraded version of E821 with the goal of producing a measurement approximately 4 times more precise. Improving the precision requires a more detailed understanding of the experimental systematic effects, and three higher-order systematic effects in the muon beam-spin dynamics have recently been found and estimated for E821. The beamline systematic effect originates from muon production in beamline spectrometers, as well as from muons traversing beamline bending magnets. The kicker systematic effect comes from a combination of the variation in time spent inside the muon storage ring across a muon bunch and the temporal structure of the storage ring kicker waveform. Finally, the detector systematic effect arises from a combination of the energy-dependent muon equilibrium orbit in the storage ring, muon decay electron drift time, and decay electron detector acceptance effects.
Exposure to TCDD from base perimeter application of Agent Orange in Vietnam.
Ross, John H; Hewitt, Andrew; Armitage, James; Solomon, Keith; Watkins, Deborah K; Ginevan, Michael E
2015-04-01
Using recognized methods routinely employed by pesticide regulatory agencies, the exposures of military personnel who were mixer/loader/applicators (M/L/A) of Agent Orange (AO) for perimeter foliage at bases during the Vietnam War were estimated. From the fraction of TCDD in AO, the absorbed dosage of the manufacturing contaminant was estimated. Dermal exposure of base residents from spray drift was calculated using internationally recognized software that accounts for proximity, foliar density of the application site, droplet size, and wind speed, among other factors, and produces estimates of deposition. Those who directly handled AO generally had much higher exposures than those further from the areas of use. The differences in exposure potential varied by M/L/A activity, but were typically orders of magnitude greater than those of bystanders. However, even the most-exposed M/L/A involved in perimeter application had lifetime exposures comparable to persons living in the U.S. at the time, i.e., ~1.3 to 5 pg TCDD/kg bodyweight. Copyright © 2014 Elsevier B.V. All rights reserved.
Implicit individual discount rate in China: A contingent valuation study.
Wang, Hua; He, Jie
2018-03-15
Two contingent valuation (CV) surveys were conducted in Kunming, China, to estimate households' willingness to pay (WTP) for the Panlong River rehabilitation project. The two surveys used the same procedures and questionnaires except for the payment schedule arrangements, which permitted a calculation of respondents' implicit discount rate. The surveys provided two estimates of WTP, one with a mean of 23 Yuan in monthly payments over 5 years and the other with a mean of 311 Yuan in a lump-sum payment covering all expenses for a period of 5 years. The results produce an estimated monthly discount rate of 7.6%-12.6%, or an annual discount rate of 141-315%. These estimates are higher than those reported from studies conducted in the U.S., but are compatible with those of some other studies. This study also shows that both mean individual WTP and implicit individual discount rates are closely related to household demographic and economic characteristics and environment-related perceptions, as reported in studies conducted in other countries. Copyright © 2017 Elsevier Ltd. All rights reserved.
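The implicit discount rate is the monthly rate that equates the present value of the monthly-payment WTP stream to the lump-sum WTP. A hedged Python sketch follows; the end-of-month payment timing and 60-payment horizon are assumptions, so the result only approximates the paper's range.

```python
from scipy.optimize import brentq

wtp_monthly, wtp_lump, months = 23.0, 311.0, 60

def pv_gap(r):
    """PV of the monthly stream minus the lump sum, at monthly rate r."""
    annuity = (1 - (1 + r) ** -months) / r        # standard annuity factor
    return wtp_monthly * annuity - wtp_lump

r_month = brentq(pv_gap, 1e-6, 1.0)               # root-find the implicit rate
r_year = (1 + r_month) ** 12 - 1
print(f"monthly rate {r_month:.1%}, annual rate {r_year:.0%}")
# -> roughly 7% per month (~130% per year), near the lower end of the
#    7.6%-12.6% monthly range reported above
```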
Doubell, Marcé; Grant, Paul B C; Esterhuizen, Nanike; Bazelet, Corinna S; Addison, Pia; Terblanche, John S
2017-12-01
Katydids produce acoustic signals via stridulation, which they use to attract conspecific females for mating. However, direct estimates of the metabolic costs of calling to date have produced diverse cost estimates and are limited to only a handful of insect species. Therefore, in this study, we investigated the metabolic cost of calling in an unstudied sub-Saharan katydid, Plangia graminea. Using wild-caught animals, we measured katydid metabolic rate using standard flow-through respirometry while simultaneously recording the number of calls produced. Overall, the metabolic rate during calling in P. graminea males was 60% higher than the resting metabolic rate (0.443±0.056 versus 0.279±0.028 ml CO2 h-1 g-1), although this was highly variable among individuals. Although individual call costs were relatively inexpensive (ranging from a 0.02 to 5.4% increase in metabolic rate per call), the individuals with cheaper calls called more often and for longer than those with expensive calls, resulting in the former group having significantly greater cumulative costs over a standard amount of time (9.5 h). However, the metabolic costs of calling are context dependent, because the amount of time spent calling greatly influenced these costs in our trials. A power law function described the relationship between cumulative cost (y) and percentage increase per call (x): y = 130.21x^-1.068, R2 = 0.858. The choice of metric employed for estimating energy costs (i.e., how costs are expressed) also affects the outcome and any interpretation of the costs of sexual signalling. For example, the absolute, relative and cumulative metabolic costs of calling yielded strongly divergent estimates, and any fitness implications depend on the organism's energy budget and the potential trade-offs in allocation of resources that are made as a direct consequence of increased calling effort. © 2017. Published by The Company of Biologists Ltd.
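To see what the reported power law implies, the fitted curve can be evaluated directly; the x values below simply span the reported range of per-call costs, and the y units follow the study's cumulative-cost metric.

```python
# Reported fit: cumulative cost y versus percentage increase per call x,
# y = 130.21 * x**(-1.068) (R^2 = 0.858). Illustrative x values only.
for x in (0.02, 0.5, 5.4):
    y = 130.21 * x ** (-1.068)
    print(f"x = {x:>4}% per call -> cumulative cost y = {y:9.1f}")
# Cheaper calls (small x) imply far larger cumulative costs, matching the
# observation that individuals with cheap calls called more and for longer.
```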
Luminescence imaging of water during carbon-ion irradiation for range estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamamoto, Seiichi, E-mail: s-yama@met.nagoya-u.ac.jp; Komori, Masataka; Koyama, Shuji
Purpose: The authors previously reported successful luminescence imaging of water during proton irradiation and its application to range estimation. However, since the feasibility of this approach for carbon-ion irradiation remained unclear, the authors conducted luminescence imaging during carbon-ion irradiation and estimated the ranges. Methods: The authors placed a pure-water phantom on the patient couch of a carbon-ion therapy system and measured the luminescence images with a high-sensitivity, cooled charge-coupled device camera during carbon-ion irradiation. The authors also carried out imaging of three types of phantoms (tap-water, an acrylic block, and a plastic scintillator) and compared their intensities and distributions with those of a phantom containing pure water. Results: The luminescence images of pure-water phantoms during carbon-ion irradiation showed clear Bragg peaks, and the carbon-ion ranges measured from the images were almost the same as those obtained by simulation. The image of the tap-water phantom showed almost the same distribution as that of the pure-water phantom. The acrylic block phantom's luminescence image produced seven times higher luminescence and had a 13% shorter range than that of the water phantoms; the range with the acrylic phantom generally matched the calculated value. The plastic scintillator produced approximately 15,000 times more light than water. Conclusions: Luminescence imaging during carbon-ion irradiation of water is not only possible but also a promising method for range estimation in carbon-ion therapy.
Atmospheric Carbon Dioxide and the Global Carbon Cycle: The Key Uncertainties
DOE R&D Accomplishments Database
Peng, T. H.; Post, W. M.; DeAngelis, D. L.; Dale, V. H.; Farrell, M. P.
1987-12-01
The biogeochemical cycling of carbon between its sources and sinks determines the rate of increase in atmospheric CO2 concentrations. The observed increase in atmospheric CO2 content is less than the estimated release from fossil fuel consumption and deforestation. This discrepancy can be explained by interactions between the atmosphere and other global carbon reservoirs, such as the oceans and the terrestrial biosphere, including soils. Undoubtedly, the oceans have been the most important sinks for CO2 produced by man. But the physical, chemical, and biological processes of oceans are complex, and credible estimates of CO2 uptake can therefore probably only come from mathematical models. Unfortunately, one- and two-dimensional ocean models do not allow for enough CO2 uptake to accurately account for known releases. Thus, they produce higher concentrations of atmospheric CO2 than was historically the case. More complex three-dimensional models, while currently being developed, may make better use of existing tracer data than do one- and two-dimensional models and will also incorporate climate feedback effects to provide a more realistic view of ocean dynamics and CO2 fluxes. The inability of current models to accurately estimate oceanic uptake of CO2 creates one of the key uncertainties in predictions of atmospheric CO2 increases and climate responses over the next 100 to 200 years.
NASA Astrophysics Data System (ADS)
Hoesly, Rachel M.; Smith, Steven J.; Feng, Leyang; Klimont, Zbigniew; Janssens-Maenhout, Greet; Pitkanen, Tyler; Seibert, Jonathan J.; Vu, Linh; Andres, Robert J.; Bolt, Ryan M.; Bond, Tami C.; Dawidowski, Laura; Kholod, Nazar; Kurokawa, June-ichi; Li, Meng; Liu, Liang; Lu, Zifeng; Moura, Maria Cecilia P.; O'Rourke, Patrick R.; Zhang, Qiang
2018-01-01
We present a new data set of annual historical (1750-2014) anthropogenic chemically reactive gases (CO, CH4, NH3, NOx, SO2, NMVOCs), carbonaceous aerosols (black carbon - BC, and organic carbon - OC), and CO2 developed with the Community Emissions Data System (CEDS). We improve upon existing inventories with a more consistent and reproducible methodology applied to all emission species, updated emission factors, and recent estimates through 2014. The data system relies on existing energy consumption data sets and regional and country-specific inventories to produce trends over recent decades. All emission species are consistently estimated using the same activity data over all time periods. Emissions are provided on an annual basis at the level of country and sector and gridded with monthly seasonality. These estimates are comparable to, but generally slightly higher than, existing global inventories. Emissions over the most recent years are more uncertain, particularly in low- and middle-income regions where country-specific emission inventories are less available. Future work will involve refining and updating these emission estimates, estimating emissions' uncertainty, and publication of the system as open-source software.
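The core bookkeeping step behind such an inventory can be illustrated with a toy Python sketch: emissions are activity data times emission factors by country, sector, and fuel, scaled so that totals match an authoritative inventory where one exists. All names and numbers are illustrative, not CEDS values.

```python
# Activity (e.g., PJ of fuel consumed) by (country, sector, fuel).
activity = {("USA", "power", "coal"): 500.0,
            ("USA", "transport", "oil"): 800.0}
# Emission factors for one species, e.g., kt SO2 per PJ (illustrative).
ef_so2 = {("USA", "power", "coal"): 0.30,
          ("USA", "transport", "oil"): 0.02}

emissions = {key: activity[key] * ef_so2[key] for key in activity}
inventory_total = 150.0                             # country inventory total, kt
scale = inventory_total / sum(emissions.values())   # calibration factor
print({k: round(v * scale, 1) for k, v in emissions.items()})
```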
Wagner, James; Schroeder, Heather M.; Piskorowski, Andrew; Ursano, Robert J.; Stein, Murray B.; Heeringa, Steven G.; Colpe, Lisa J.
2017-01-01
Mixed-mode surveys need to determine a number of design parameters that may have a strong influence on costs and errors. In a sequential mixed-mode design with web followed by telephone, one of these decisions is when to switch modes. The web mode is relatively inexpensive but produces lower response rates. The telephone mode complements the web mode in that it is relatively expensive but produces higher response rates. Among the potential negative consequences, delaying the switch from web to telephone may lead to lower response rates if the effectiveness of the prenotification contact materials is reduced by longer time lags, or if the additional e-mail reminders to complete the web survey annoy the sampled person. On the positive side, delaying the switch may decrease the costs of the survey. We evaluate these costs and errors by experimentally testing four different timings (1, 2, 3, or 4 weeks) for the mode switch in a web–telephone survey. This experiment was conducted on the fourth wave of a longitudinal study of the mental health of soldiers in the U.S. Army. We find that the different timings of the switch in the range of 1–4 weeks do not produce differences in final response rates or key estimates, but longer delays before switching do lead to lower costs. PMID:28943717
Radiation Safety Issues in High Altitude Commercial Aircraft
NASA Technical Reports Server (NTRS)
Wilson, John W.; Cucinotta, Francis A.; Shinn, Judy L.
1995-01-01
The development of a global economy makes the outlook for high speed commercial intercontinental flight feasible, and various configurations operating from 20 to 30 km have been proposed. In addition to the still unresolved issues relating to current commercial operations (12-16 km), the higher dose rates associated with the higher operating altitudes make it imperative that the uncertainties in the atmospheric radiation environment and the associated health risks be re-examined. Atmospheric radiation associated with the galactic cosmic rays forms a background level which may, under some circumstances, exceed newly recommended allowable exposure limits proposed on the basis of recent evaluations of the A-bomb survivor data (due to increased risk coefficients). These larger risk coefficients, within the context of the methodology for estimating exposure limits, result in exceedingly low estimated allowable exposure limits which may impact even present-day flight operations and were the reason for the CEC workshop in Luxembourg (1990). At higher operating altitudes, solar particle events can produce exposures many orders of magnitude above background levels and pose significant health risks to the most sensitive individuals (such as during pregnancy). In this case the appropriate quality factors are undefined, and some evidence indicates that the quality factor for stochastic effects is a substantial underestimate.
Sedimentation rates in Atibaia River basin, São Paulo State, Brazil, using 210Pb as geochronometer.
Sabaris, T P P; Bonotto, D M
2011-01-01
The constant initial concentration (CIC) model for unsupported/excess 210Pb was successfully used to assess 210Pb data from nine sediment cores from the Atibaia River basin, São Paulo State, Brazil. The 210Pb-based apparent sediment mass accumulation rates ranged from 47.7 to 782.4 mg/cm2/yr, whereas the average linear sedimentation rates ranged between 0.16 and 1.32 cm/yr; these are mutually compatible, i.e. a higher sediment mass accumulation rate yielded a higher linear sedimentation rate. The higher long-term accumulation rates tended to be found in topographically softer regions. This occurs because sediments are preferentially transported, rather than deposited, in topographically steeper regions. Anthropic activities like deforestation possibly interfered with the natural sedimentation processes, which increased in accordance with modifications of the channel drainage. The radionuclide geochronology described in this paper allows determination of sedimentation rates that are compatible with values estimated elsewhere. The adoption of an appropriate factor generated from previous laboratory experiments resulted in a successful correction for 222Rn loss from the sediments, bringing the estimate of the parent-supported (in situ produced) 210Pb to the reliable values required by the CIC model. Copyright © 2010 Elsevier Ltd. All rights reserved.
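A minimal Python sketch of the CIC calculation follows: under CIC, excess 210Pb activity decays with sediment age t = z/s, so ln(activity) is linear in depth z with slope -λ/s, and a log-linear fit yields the sedimentation rate s. The core data below are synthetic.

```python
import numpy as np

LAMBDA_PB210 = np.log(2) / 22.3           # 210Pb decay constant, 1/yr

depth_cm = np.arange(2, 42, 4)            # section mid-depths of a core
s_true = 0.6                              # cm/yr, synthetic truth
excess = 12.0 * np.exp(-LAMBDA_PB210 * depth_cm / s_true)
excess *= np.random.default_rng(7).lognormal(0.0, 0.08, depth_cm.size)

slope, _ = np.polyfit(depth_cm, np.log(excess), 1)   # ln(activity) vs depth
s_est = -LAMBDA_PB210 / slope
print(f"linear sedimentation rate: {s_est:.2f} cm/yr")
```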
DOT National Transportation Integrated Search
2010-03-01
The objectives of this project were to (a) produce historic estimates of travel times on Twin-Cities arterials for 1995 and 2005, and (b) develop an initial architecture and database that could, in the future, produce timely estimates of arterial...
The Greenville Fault: preliminary estimates of its long-term creep rate and seismic potential
Lienkaemper, James J.; Barry, Robert G.; Smith, Forrest E.; Mello, Joseph D.; McFarland, Forrest S.
2013-01-01
Once assumed to be locked, the northern third of the Greenville fault (GF) is shown here to creep at 2 mm/yr, based on 47 yr of trilateration net data. This northern GF creep rate equals its 11-ka slip rate, suggesting a low strain accumulation rate. In 1980, the GF, the easternmost strand of the San Andreas fault system east of San Francisco Bay, produced a Mw5.8 earthquake with a 6-km surface rupture and dextral slip growing to ≥2 cm on cracks over a few weeks. Trilateration shows a 10-cm post-1980 transient slip ending in 1984. Analysis of 2000-2012 crustal velocities at continuous global positioning system stations allows creep rates of ~2 mm/yr on the northern GF, 0-1 mm/yr on the central GF, and ~0 mm/yr on its southern third. Modeled depth ranges of creep along the GF allow 5-25% aseismic release. Greater locking in the southern two-thirds of the GF is consistent with paleoseismic evidence there for large late Holocene ruptures. Because the GF lacks large (>1 km) discontinuities likely to arrest higher (~1 m) slip ruptures, we expect full-length (54-km) ruptures to occur that include the northern creeping zone. We estimate sufficient strain accumulation on the entire GF to produce Mw6.9 earthquakes with a mean recurrence of ~575 yr. While the creeping 16-km northern part has the potential to produce a Mw6.2 event in 240 yr, it may rupture in both moderate (1980) and large events. These two-dimensional-model estimates of creep rate along the southern GF need verification with small-aperture surveys.
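As a back-of-the-envelope check on the stated Mw ~6.9 for a full-length rupture, the standard Hanks-Kanamori relation can be applied with assumed geometry and slip; the rigidity, seismogenic width, and average slip below are our assumptions, not the authors' inputs.

```python
import math

mu = 3.0e10                  # crustal rigidity, N/m^2 (assumed)
length_m = 54_000            # full GF rupture length, from the abstract
width_m = 12_000             # assumed seismogenic width
slip_m = 1.0                 # assumed average slip

m0 = mu * length_m * width_m * slip_m            # seismic moment, N-m
mw = (2.0 / 3.0) * math.log10(m0 * 1e7) - 10.7   # Hanks-Kanamori, dyne-cm
print(f"Mw = {mw:.1f}")      # ~6.8 with these assumptions, close to the stated 6.9
```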
Robust Ambiguity Estimation for an Automated Analysis of the Intensive Sessions
NASA Astrophysics Data System (ADS)
Kareinen, Niko; Hobiger, Thomas; Haas, Rüdiger
2016-12-01
Very Long Baseline Interferometry (VLBI) is a unique space-geodetic technique that can directly determine the Earth's phase of rotation, namely UT1. The daily estimates of the difference between UT1 and Coordinated Universal Time (UTC) are computed from one-hour-long VLBI Intensive sessions. These sessions are essential for providing timely UT1 estimates for satellite navigation systems. To produce timely UT1 estimates, efforts have been made to completely automate the analysis of VLBI Intensive sessions. This requires automated processing of X- and S-band group delays, which often contain an unknown number of integer ambiguities. In an automated analysis with the c5++ software, the standard approach to resolving the ambiguities is to perform a simplified parameter estimation using a least-squares adjustment (L2-norm minimization). We implemented the robust L1-norm as an alternative estimation method in c5++ and used it to automatically estimate the ambiguities in VLBI Intensive sessions for the Kokee-Wettzell baseline. The results are compared to an analysis setup in which the ambiguity estimation is computed using the L2-norm. Additionally, we investigate three alternative weighting strategies for the ambiguity estimation. The results show that in automated analysis the L1-norm resolves ambiguities better than the L2-norm, and its use leads to a significantly higher number of good-quality UT1-UTC estimates with each of the three weighting strategies.
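The L1/L2 contrast can be illustrated with iteratively reweighted least squares (IRLS), a common way to approximate an L1 fit; this is a generic sketch, not the c5++ implementation. An unresolved integer ambiguity acts like a gross outlier, which drags the L2 solution but barely moves the L1 solution.

    import numpy as np

    def l1_fit(A, y, iters=50, eps=1e-6):
        """Approximate min ||A x - y||_1 via iteratively reweighted least squares."""
        x = np.linalg.lstsq(A, y, rcond=None)[0]          # L2 solution as start
        for _ in range(iters):
            w = 1.0 / np.maximum(np.abs(y - A @ x), eps)  # down-weight big residuals
            x = np.linalg.lstsq(np.sqrt(w)[:, None] * A, np.sqrt(w) * y,
                                rcond=None)[0]
        return x

    # Toy clock fit (offset + rate) with one delay carrying an ambiguity jump
    t = np.linspace(0.0, 1.0, 20)
    A = np.c_[np.ones_like(t), t]
    y = 0.5 + 2.0 * t
    y[7] += 30.0                                          # the "ambiguity"
    print("L2:", np.linalg.lstsq(A, y, rcond=None)[0])    # pulled by the outlier
    print("L1:", l1_fit(A, y))                            # close to (0.5, 2.0)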
Tie, Qiang; Hu, Hongchang; Tian, Fuqiang; Holbrook, N Michele
2018-08-15
Accurately estimating forest evapotranspiration and its components is of great importance for hydrology, ecology, and meteorology. In this study, a comparison of methods for determining forest evapotranspiration and its components at annual, monthly, daily, and diurnal scales was conducted based on in situ measurements in the subhumid mountainous forest of North China. The goal of the study was to evaluate the accuracies and reliabilities of the different methods. The results indicate the following: (1) The sap flow upscaling procedure, which takes into account diversity in forest types and tree species, produced a component-based forest evapotranspiration estimate that agreed with the eddy covariance-based estimate at yearly, monthly, and daily scales, while the soil water budget-based estimate was also qualitatively consistent with the eddy covariance-based estimate at the daily scale; (2) At the annual scale, the catchment water balance-based estimate was significantly higher than the eddy covariance-based estimate, which probably results from non-negligible subsurface runoff through the widely distributed regolith and fractured bedrock; (3) At the sub-daily scale, the diurnal course of the sap flow-based canopy transpiration estimate lagged significantly behind the eddy covariance-based forest evapotranspiration estimate, which may be physiologically due to stem water storage and stem hydraulic conductivity. The results may serve as a useful reference for forest evapotranspiration estimation and method evaluation in regions with similar environmental conditions. Copyright © 2018 Elsevier B.V. All rights reserved.
Evaluating lidar point densities for effective estimation of aboveground biomass
Wu, Zhuoting; Dye, Dennis G.; Stoker, Jason M.; Vogel, John M.; Velasco, Miguel G.; Middleton, Barry R.
2016-01-01
The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) was recently established to provide airborne lidar data coverage on a national scale. As part of a broader research effort of the USGS to develop an effective remote sensing-based methodology for the creation of an operational biomass Essential Climate Variable (Biomass ECV) data product, we evaluated the performance of airborne lidar data at various pulse densities against Landsat 8 satellite imagery in estimating aboveground biomass for forests and woodlands in a study area in east-central Arizona, U.S. High point density airborne lidar data were randomly sampled to produce five lidar datasets with reduced densities ranging from 0.5 to 8 points/m2, corresponding to the point density range of 3DEP for national lidar coverage over time. Lidar-derived aboveground biomass estimate errors showed an overall decreasing trend as lidar point density increased from 0.5 to 8 points/m2. Landsat 8-based aboveground biomass estimates produced errors larger than those at the lowest lidar point density of 0.5 point/m2, and therefore Landsat 8 observations alone were ineffective relative to airborne lidar for generating a Biomass ECV product, at least for the forest and woodland vegetation types of the Southwestern U.S. While a national Biomass ECV product with optimal accuracy could potentially be achieved with 3DEP data at 8 points/m2, our results indicate that even lower density lidar data could be sufficient to provide a national Biomass ECV product with accuracies significantly higher than those from Landsat observations alone.
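A hedged sketch of the density-reduction step described above: randomly thinning a point cloud over a known area to target densities. Point formats, file I/O, and the biomass regression itself are omitted, and the cloud here is synthetic.

    import numpy as np

    rng = np.random.default_rng(42)

    def thin_to_density(points, area_m2, target_density, rng):
        """Randomly subsample an (N, 3) point array to ~target_density points/m^2."""
        n_keep = int(target_density * area_m2)
        if n_keep >= len(points):
            return points
        return points[rng.choice(len(points), size=n_keep, replace=False)]

    # Illustrative cloud: ~10 points/m^2 over a 100 m x 100 m tile
    area = 100.0 * 100.0
    cloud = rng.uniform(0.0, 100.0, size=(int(10 * area), 3))
    for density in [0.5, 1, 2, 4, 8]:
        subset = thin_to_density(cloud, area, density, rng)
        print(f"{density:>4} pts/m2 -> {len(subset):>6} points")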
Short-chain chlorinated paraffins in cooking oil and related products from China.
Cao, Yang; Harada, Kouji H; Liu, Wanyang; Yan, Junxia; Zhao, Can; Niisoe, Tamon; Adachi, Ayumu; Fujii, Yukiko; Nouda, Chihiro; Takasuga, Takumi; Koizumi, Akio
2015-11-01
Short-chain chlorinated paraffins (SCCPs) are emerging persistent organic pollutants. Dietary intakes of SCCPs in China have recently increased and are now higher than in Japan and Korea. The contribution of cooking oil to dietary exposure to SCCPs in China was evaluated by analyzing SCCPs in cooking oil, raw seeds used to produce cooking oil, and fried confectionery products collected in China in 2010 and 2012. Detectable amounts of SCCP homologs were found in 48 out of the 49 cooking oil samples analyzed, and the SCCP concentrations varied widely, from <9 to 7500 ng g(-1). Estimated dietary intakes of total SCCPs from cooking oil ranged from <0.78 to 38 μg d(-1). The estimated dietary intake of SCCPs was relatively high (mean 14.8 μg d(-1)) for residents of Beijing. Fried confectionery was found to contain SCCP concentrations of 11-1000 ng g(-1). Cooking oil might therefore be one of the sources of SCCPs in Chinese diets. SCCPs were also detected in raw seeds used to produce cooking oil, but the concentrations varied widely. The SCCP homolog patterns in the raw seed and cooking oil samples were different, implying that the seeds used to produce the oil (and therefore the soil on which the seeds were grown) were unlikely to be the source of SCCPs in cooking oil. Further investigations are needed to determine the routes through which cooking oil becomes contaminated with SCCPs during the production and processing of the oil. Copyright © 2015 Elsevier Ltd. All rights reserved.
Smith, Rachel M; Derado, Gordana; Wise, Matthew; Harris, Julie R; Chiller, Tom; Meltzer, Martin I; Park, Benjamin J
2015-06-01
During 2012-2013, the US Centers for Disease Control and Prevention and partners responded to a multistate outbreak of fungal infections linked to methylprednisolone acetate (MPA) injections produced by a compounding pharmacy. We evaluated the effects of public health actions on the scope of this outbreak. A comparison of 60-day case-fatality rates and clinical characteristics of patients given a diagnosis on or before October 4, the date the outbreak was widely publicized, with those of patients given a diagnosis after October 4 showed that an estimated 3,150 MPA injections, 153 cases of meningitis or stroke, and 124 deaths were averted. Compared with diagnosis after October 4, diagnosis on or before October 4 was significantly associated with a higher 60-day case-fatality rate (28% vs. 5%; p<0.0001). Aggressive public health action resulted in a substantially reduced estimated number of persons affected by this outbreak and improved survival of affected patients.
Gamma-ray burst constraints on the galactic frequency of extrasolar Oort Clouds
NASA Technical Reports Server (NTRS)
Shull, J. Michael; Stern, S. Alan
1995-01-01
With the strong Compton Gamma-Ray Observatory/Burst and Transient Source Experiment (CGRO/BATSE) evidence that most gamma-ray bursts do not come from galactic neutron stars, models involving the accretion of a comet onto a neutron star (NS) no longer appear to be strong contenders for explaining the majority of bursts. If this is the case, then it is worth asking whether the lack of an observed galactic gamma-ray burst population provides a useful constraint on the number of comets and comet clouds in the galaxy. Owing to the previously unrecognized structural weakness of cometary nuclei, we find the capture cross sections for comet-NS events to be much higher than previously published estimates, with tidal breakup at distances R(sub b) approx. equals 4 x 10(exp 10) cm from the NS. As a result, impacts of comets onto field NSs penetrating the Oort Clouds of other stars are found to dominate all other galactic NS-comet capture rates by a factor of 100. This in turn predicts that if comet clouds are common, there should be a significant population of repeater sources with (1) a galactic distribution, (2) space-correlated repetition, and (3) a wide range of peak luminosities and luminosity time histories. If all main sequence stars have Oort Clouds like our own, we predict approximately 4000 such repeater sources in the Milky Way at any time, each repeating on time scales of months to years. Based on estimates of the sensitivity of the CGRO/BATSE instrument and assuming isotropic gamma-ray beaming from such events, we estimate that a population of approximately 20-200 of these galactic NS-Oort Cloud gamma-ray repeater sources should be detectable by CGRO. In addition, if giant planet formation is common in the galaxy, we estimate that the accretion of isolated comets injected into the interstellar medium by giant planet formation should produce an additional source of galactic, nonrepeating events. Comparing these estimates to the 3-4 soft gamma-ray repeater sources detected by BATSE, one is forced to conclude that (1) comet impacts on NSs are inefficient at producing gamma rays; or (2) the gamma rays from such events are highly beamed; or (3) the fraction of stars in the galaxy with Oort Clouds like our own is not higher than a few percent.
Olaiya, Oluwatosin; Nerlander, Lina; Mattson, Christine L; Beer, Linda
2018-04-20
Many studies of persons who exchange sex for money or drugs have focused on their HIV acquisition risk and are often limited to select populations and/or geographic locations. National estimates of exchange sex among people living with HIV (PLWH) who are in medical care, and of its correlates, are lacking. To address these gaps, we analyzed data from the Medical Monitoring Project, a surveillance system that produces nationally representative estimates of behavioral and clinical characteristics of PLWH receiving medical care in the United States, to estimate the weighted prevalence of exchange sex overall and by selected socio-demographic, behavioral, and clinical characteristics. We found that 3.6% of sexually active adults reported exchange sex in the past 12 months. We found a higher prevalence of exchange sex among transgender persons, those who experienced homelessness, and those with unmet needs for social and medical services. Persons who exchanged sex were more likely to report depression and substance use than those who did not exchange sex. We found a higher prevalence of sexual behaviors that increase the risk of HIV transmission, and lower viral suppression, among persons who exchanged sex. PLWH who exchanged sex had a higher prevalence of not being prescribed ART and of not being ART adherent than those who did not exchange sex. We identify several areas for intervention, including provision of or referral to services for unmet needs (such as housing or shelter), enhanced delivery of mental health and substance abuse screening and treatment, risk-reduction counseling, and ART prescription and adherence support services.
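A weighted prevalence like the 3.6% figure is, in outline, a ratio of survey weights; a sketch with hypothetical data (the actual MMP analysis additionally accounts for its complex sample design in the variance estimation):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000
    weights = rng.uniform(50, 500, n)          # survey weights (hypothetical)
    exchanged_sex = rng.random(n) < 0.036      # indicator from interview data

    weighted_prev = np.sum(weights * exchanged_sex) / np.sum(weights)
    print(f"weighted prevalence ~ {100 * weighted_prev:.1f}%")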
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brady, P.; Ditmire, T.; Horton, W.
Magnetosphere-solar wind interactions are simulated in a laboratory setting with a small permanent magnet driven by two types of supersonic plasma wind source. The first, a higher-speed, shorter-duration plasma wind, comes from a laser blow-off plasma, while the second, a longer-duration, lower-speed plasma wind, is produced with a capacitor-discharge-driven coaxial electrode creating plasma jets. The standoff distance of the solar wind from the magnetosphere was measured to be 1.7±0.3 cm for the laser-produced plasma experiment and 0.87±0.03 cm for the coaxial electrode plasma experiment. The standoff distance of the plasma was calculated using data from HYADES [J. T. Larsen and S. M. Lane, J. Quant. Spectrosc. Radiat. Transf. 51, 179 (1994)] as 1.46±0.02 cm for the laser-produced plasma, and estimated for the coaxial plasma jet as r_mp = 0.72±0.07 cm. Plasma build-up on the poles of the magnets, consistent with magnetosphere systems, was also observed.
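The standoff (magnetopause) distance follows from pressure balance between the dipole's magnetic pressure and the wind's ram pressure, B(r)^2/(2 mu0) = rho v^2; a sketch with assumed wind and magnet parameters, not the experiment's values:

    import math

    MU0 = 4e-7 * math.pi    # vacuum permeability, T*m/A
    B0 = 0.5                # magnet surface field, T (assumed)
    R_MAG = 0.01            # magnet radius, m (assumed)
    RHO = 1e-8              # wind mass density, kg/m^3 (assumed)
    V = 1e5                 # wind speed, m/s (assumed)

    # Dipole field B(r) = B0 (R/r)^3  ->  r_mp = R * (B0^2 / (2 mu0 rho v^2))^(1/6)
    r_mp = R_MAG * (B0**2 / (2 * MU0 * RHO * V**2)) ** (1.0 / 6.0)
    print(f"standoff distance ~ {100 * r_mp:.2f} cm")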
Looking at the world with your ears: how do we get the size of an object from its sound?
Grassi, Massimo; Pastore, Massimiliano; Lemaitre, Guillaume
2013-05-01
Identifying the properties of ongoing events from the sounds they produce is crucial for our interaction with the environment when visual information is not available. Here, we investigated the ability of listeners to estimate the size of an object (a ball) dropped on a plate under ecological listening conditions (balls were dropped in real time) and with an ecological response method (listeners estimated ball size by drawing a disk). Previous studies had shown that listeners can veridically estimate the size of objects from the sounds they produce, but it is still unclear which acoustical index listeners use to produce their estimates. In particular, it is unclear whether listeners rely on amplitude-domain cues (related to loudness) or frequency-domain cues (related to the sound's brightness). In the current study, in order to understand which cue listeners use to recover the size of the object, we manipulated the sound source event so that frequency and amplitude cues provided contrasting size information (balls were dropped from various heights). Results showed that listeners' estimates were accurate regardless of the experimental manipulations. In addition, the results suggest that listeners were likely integrating frequency and amplitude cues to produce their estimates, even though these cues often provided contrasting size information. Copyright © 2013 Elsevier B.V. All rights reserved.
Dorazio, R.M.; Rago, P.J.
1991-01-01
We simulated mark–recapture experiments to evaluate a method for estimating fishing mortality and migration rates of populations stratified at release and recovery. When fish released in two or more strata were recovered from different recapture strata in nearly the same proportions, conditional recapture probabilities were estimated outside the [0, 1] interval. The maximum likelihood estimates tended to be biased and imprecise when the patterns of recaptures produced extremely "flat" likelihood surfaces. Absence of bias was not guaranteed, however, in experiments where recapture rates could be estimated within the [0, 1] interval. Inadequate numbers of tag releases and recoveries also produced biased estimates, although the bias was easily detected by the high sampling variability of the estimates. A stratified tag–recapture experiment with sockeye salmon (Oncorhynchus nerka) was used to demonstrate procedures for analyzing data that produce biased estimates of recapture probabilities. An estimator was derived to examine the sensitivity of recapture rate estimates to assumed differences in natural and tagging mortality, tag loss, and incomplete reporting of tag recoveries.
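The failure mode described above, unstable estimates when release strata are recovered in nearly the same proportions, can be reproduced with a toy stratified estimator (a sketch of the general instability, not the authors' likelihood code): when the rows of the estimated movement matrix are nearly proportional, the linear system is ill-conditioned and estimates can fall outside admissible ranges.

    import numpy as np

    rng = np.random.default_rng(7)

    releases = np.array([500, 500])          # tagged fish released in strata 1, 2
    # True movement probabilities (release stratum -> recovery stratum); the
    # rows are nearly proportional -- the problem case in the abstract.
    theta = np.array([[0.30, 0.60],
                      [0.28, 0.62]])

    recov = rng.binomial(releases[:, None], theta)   # tag recoveries m_ij
    theta_hat = recov / releases[:, None]            # estimated movement rates

    unmarked = np.array([4000.0, 6000.0])            # true stratum sizes
    catches = unmarked @ theta                       # expected unmarked catches

    # Stratified estimator: solve  catches_j = sum_i u_i * theta_hat_ij  for u
    u_hat = np.linalg.solve(theta_hat.T, catches)
    print("estimated stratum sizes:", u_hat)         # can be wild or negative
    print("condition number:", np.linalg.cond(theta_hat))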
Classification of CO2 Geologic Storage: Resource and Capacity
Frailey, S.M.; Finley, R.J.
2009-01-01
The use of the term capacity to describe possible geologic storage implies a realistic or likely volume of CO2 to be sequestered. Poor data quantity and quality may lead to very high uncertainty in the storage estimate. Use of the term "storage resource" alleviates the implied certainty of the term "storage capacity". This is especially important to non-scientists (e.g. policy makers) because "capacity" is commonly used to describe the very specific and more certain quantities such as volume of a gas tank or a hotel's overnight guest limit. Resource is a term used in the classification of oil and gas accumulations to infer lesser certainty in the commercial production of oil and gas. Likewise for CO2 sequestration, a suspected porous and permeable zone can be classified as a resource, but capacity can only be estimated after a well is drilled into the formation and a relatively higher degree of economic and regulatory certainty is established. Storage capacity estimates are lower risk or higher certainty compared to storage resource estimates. In the oil and gas industry, prospective resource and contingent resource are used for estimates with less data and certainty. Oil and gas reserves are classified as Proved and Unproved, and by analogy, capacity can be classified similarly. The highest degree of certainty for an oil or gas accumulation is Proved, Developed Producing (PDP) Reserves. For CO2 sequestration this could be Proved Developed Injecting (PDI) Capacity. A geologic sequestration storage classification system is developed by analogy to that used by the oil and gas industry. When a CO2 sequestration industry emerges, storage resource and capacity estimates will be considered a company asset and consequently regulated by the Securities and Exchange Commission. Additionally, storage accounting and auditing protocols will be required to confirm projected storage estimates and assignment of credits from actual injection. An example illustrates the use of these terms and how storage classification changes as new data become available. © 2009 Elsevier Ltd. All rights reserved.
Prospects of poverty eradication through the existing Zakat system in Pakistan.
Mohammad, F
1991-01-01
In the Muslim system, Zakat functions as a means to reduce inequalities and eradicate poverty. Zakat means growth, extension, and purification. It is a usually annual premium charged on all accumulated productive wealth and on a variety of agricultural produce. Various rates are used. In the past, Zakat was paid on a self-assessed basis and given to the needy. In 1980, collection and disbursement for Sunni Muslims was deemed the function of an Islamic state, and the state system was introduced. The formal system is described in detail. A survey of a random sample (n = 1,050) of Local Zakat Committee (LZC) members, Zakat recipients, and the general population was conducted in 1988 to see to what extent poverty had been eradicated under this system. Zakat recipients were either those receiving a subsistence allowance or those receiving funds for permanent rehabilitation. Estimates of Zakat and Ushr (for agricultural produce) received, the maximum limit to collection, and the maximum potential are given by region. Estimates are also given for the number of Mustahqueen-e-Zakat (MZ) (needy) by province. The total number is 5.46 million households, or 32.22% of all households in Pakistan, which is slightly higher than prior estimates. Those receiving Zakat number 3.967 million, or 23.43% of total households. Clearly not all those in need are receiving aid. The range of needy is 18.4% to 42.58% and could include those who are not poor but qualify for receiving Zakat according to Islamic principles. Estimates are given for the shortfall in funds needed to fill the gap. Other funding is needed to retrain MZ, and estimates by province are generated to this end. It is clear that the present system needs to be reformed, because the estimated funding requirements exceed the potential; there is a gap between the number needing aid and those receiving aid; and there is a gap between funds secured for rehabilitation and requests for rehabilitation. To augment the system, it is suggested that Zakat exemptions be removed, stock in trade be included, all agricultural produce be included, subsistence be given only to the most poor and disabled while the rest receive a modest amount for starting a project on an annual rotation, and greater government emphasis at all levels be placed on eliminating poverty.
Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong
2016-05-30
Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring, and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height, and biomass (R2 = 0.80, 0.874, and 0.838, respectively) using the original LiDAR data (7.32 points/m2), and the results showed that LiDAR data can accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size, and height threshold on the estimation accuracy of LAI, height, and biomass. Our findings indicated that LiDAR point density has an important effect on the estimation accuracy of vegetation biophysical parameters; however, high point density did not always produce highly accurate estimates, and reduced point density could still deliver reasonable results. Furthermore, the results showed that sampling size and height threshold are additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, a larger sampling size, and a higher height threshold are required to obtain accurate corn LAI estimates compared with height and biomass estimates. In general, our results provide valuable guidance for LiDAR data acquisition and for the estimation of vegetation biophysical parameters using LiDAR data.
The Flex Track: Flexible Partitioning between Low- and High-Acuity Areas of an Emergency Department
Laker, Lauren F.; Froehle, Craig M.; Lindsell, Christopher J.; Ward, Michael J.
2014-01-01
Study Objective: EDs with both low- and high-acuity treatment areas often have fixed allocation of resources, regardless of demand. We demonstrate the utility of discrete-event simulation to evaluate flexible partitioning between low- and high-acuity ED areas and to identify the best operational strategy for subsequent implementation. Methods: A discrete-event simulation was used to model patient flow through a 50-bed, urban, teaching ED that handles 85,000 patient visits annually. The ED has historically allocated ten beds to a Fast Track for low-acuity patients. We estimated the effect on patient waiting times of a Flex Track policy, in which up to five of these Fast Track beds could serve both low- and high-acuity patients. When the high-acuity beds were not at capacity, low-acuity patients were given priority access to flexible beds; otherwise, high-acuity patients were given priority access to flexible beds. Wait times were estimated for patients by disposition and emergency severity index (ESI) score. Results: A Flex Track policy using three flexible beds produced the lowest mean patient wait of 30.9 (95% CI 30.6-31.2) minutes. The typical Fast Track approach of rigidly separating high- and low-acuity beds produced a mean patient wait time of 40.6 (95% CI 40.2-50.0) minutes, 31% higher than the three-bed Flex Track. A completely flexible ED, where all beds can accommodate any patient, produced a mean wait time of 35.1 (95% CI 34.8-35.4) minutes. The results for the three-bed Flex Track scenario were robust, performing well across a range of scenarios involving higher and lower patient volumes and care durations. Conclusion: Using discrete-event simulation, we have shown that adding some flexibility into bed allocation between low- and high-acuity areas can provide substantial reductions in overall patient waiting times and a more efficient ED. PMID:24954578
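The flavor of such a discrete-event simulation can be sketched with a generic DES library; this is a deliberately simplified stand-in for the paper's Flex Track rules (all arrival rates, bed counts, and stay lengths are invented, and patients who queue for a flex bed do not later reclaim a freed dedicated bed):

    import random
    import statistics

    import simpy  # generic discrete-event simulation library (pip install simpy)

    RNG = random.Random(42)
    WAITS = {"high": [], "low": []}

    def patient(env, kind, dedicated, flex, mean_stay_h):
        arrived = env.now
        # Take a dedicated bed if one is free; otherwise queue for a flex bed,
        # where high-acuity requests outrank low-acuity ones.
        pool = dedicated if dedicated.count < dedicated.capacity else flex
        with pool.request(priority=0 if kind == "high" else 1) as req:
            yield req
            WAITS[kind].append(env.now - arrived)
            yield env.timeout(RNG.expovariate(1.0 / mean_stay_h))

    def arrivals(env, kind, per_hour, dedicated, flex, mean_stay_h):
        while True:
            yield env.timeout(RNG.expovariate(per_hour))
            env.process(patient(env, kind, dedicated, flex, mean_stay_h))

    env = simpy.Environment()
    high_beds = simpy.PriorityResource(env, capacity=40)
    fast_beds = simpy.PriorityResource(env, capacity=7)  # rigid Fast Track beds
    flex_beds = simpy.PriorityResource(env, capacity=3)  # beds made flexible

    env.process(arrivals(env, "high", 6.0, high_beds, flex_beds, 5.0))
    env.process(arrivals(env, "low", 3.7, fast_beds, flex_beds, 1.5))
    env.run(until=24.0 * 30)                             # simulate 30 days

    for kind, w in WAITS.items():
        print(kind, f"mean wait {60 * statistics.mean(w):.1f} min (n={len(w)})")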
The employer's decision to provide health insurance under the health reform law.
Pang, Gaobo; Warshawsky, Mark J
2013-01-01
This article considers the employer's decision to continue or to drop health insurance coverage for its workers under the provisions of the 2010 health reform law, on the presumption that the primary influence on that decision is what will produce a higher worker standard of living during working years and retirement. The authors incorporate the most recent empirical estimates of health care costs into their long-horizon, optimal savings consumption model for workers. Their results show that the employer sponsorship of health plans is valuable for maintaining a consistent and higher living standard over the life cycle for middle- and upper-income households considered here, whereas exchange-purchased and subsidized coverage is more beneficial for lower income households (roughly 4-6% of illustrative single workers and 15-22% of working families).
Exposure to electromagnetic fields from laptop use of "laptop" computers.
Bellieni, C V; Pinto, I; Bogi, A; Zoppetti, N; Andreuccetti, D; Buonocore, G
2012-01-01
Portable computers are often used in close contact with the body and are therefore called "laptops." The authors measured the electromagnetic fields (EMFs) that laptop computers produce and estimated the induced currents in the body, to assess the safety of laptop computers. The authors evaluated 5 commonly used laptops of different brands. They measured the EMF exposure produced and, using validated computerized models, exploited the data from one of the laptop computers (LTCs) to estimate the magnetic flux exposure of the user and of the fetus in the womb, when the laptop is used in close contact with the woman's womb. In the LTCs analyzed, EMF values (range 1.8-6 μT) are within International Commission on Non-Ionizing Radiation Protection (ICNIRP) guidelines, but are considerably higher than the values recommended by 2 recent guidelines for computer monitor magnetic field emissions, MPR II (Swedish Board for Technical Accreditation) and TCO (Swedish Confederation of Professional Employees), and than those considered risky for tumor development. When close to the body, the laptop induces currents in the adult's body and in the fetus (in pregnant women) that are within 34.2% to 49.8% of ICNIRP recommendations, but not negligible. In contrast, the power supply induces strong intracorporeal electric current densities in the fetus and in the adult subject, which are respectively 182-263% and 71-483% higher than the ICNIRP 98 basic restriction recommended to prevent adverse health effects. The lap is thus paradoxically an improper site for the use of an LTC, which consequently should be renamed so as not to lead customers towards improper use.
Pandemic risk: how large are the expected losses?
Fan, Victoria Y; Jamison, Dean T; Summers, Lawrence H
2018-02-01
There is an unmet need for greater investment in preparedness against major epidemics and pandemics. The arguments in favour of such investment have largely been based on estimates of the losses in national incomes that might occur as the result of a major epidemic or pandemic. Recently, we extended those estimates to include the valuation of the lives lost as a result of pandemic-related increases in mortality. This produced markedly higher estimates of the full value of the losses that might occur as the result of a future pandemic. We parametrized an exceedance probability function for a global influenza pandemic and estimated that the expected number of influenza-pandemic-related deaths is about 720 000 per year. We calculated the expected annual losses from pandemic risk to be about 500 billion United States dollars, or 0.6% of global income, per year. This estimate falls within, but towards the lower end of, the Intergovernmental Panel on Climate Change's estimates of the value of the losses from global warming, which range from 0.2% to 2% of global income. The estimated percentage of annual national income represented by the expected value of losses varied by country income grouping: from a little over 0.3% in high-income countries to 1.6% in lower-middle-income countries. Most of the losses from influenza pandemics come from rare, severe events.
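The expected-loss arithmetic can be reproduced in outline by integrating losses against an exceedance probability curve. The severity grid, exceedance probabilities, and per-death valuation below are invented for illustration (chosen to land near the abstract's headline figures), not the authors' parametrization.

    import numpy as np

    # Invented severity grid: pandemic deaths and assumed annual probabilities
    # of an event at least that severe (exceedance probabilities)
    deaths = np.array([7e5, 6e6, 6e7])
    exceed_p = np.array([0.08, 0.025, 0.01])

    # Probability mass of each severity band = difference of exceedance probs
    band_p = -np.diff(np.append(exceed_p, 0.0))
    expected_deaths = np.sum(band_p * deaths)

    VALUE_PER_DEATH = 7e5   # illustrative valuation, US$ (assumed)
    print(f"expected deaths/yr ~ {expected_deaths:,.0f}")
    print(f"expected loss/yr ~ ${expected_deaths * VALUE_PER_DEATH / 1e9:,.0f} billion")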
Baldys, Stanley; Raines, T.H.; Mansfield, B.L.; Sandlin, J.T.
1998-01-01
Local regression equations were developed to estimate loads produced by individual storms. Mean annual loads were estimated by applying the storm-load equations for all runoff-producing storms in an average climatic year and summing individual storm loads to determine the annual load.
Information Flow in an Atmospheric Model and Data Assimilation
ERIC Educational Resources Information Center
Yoon, Young-noh
2011-01-01
Weather forecasting consists of two processes, model integration and analysis (data assimilation). During the model integration, the state estimate produced by the analysis evolves to the next cycle time according to the atmospheric model to become the background estimate. The analysis then produces a new state estimate by combining the background…
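The forecast/analysis cycle described above is, in its simplest form, the textbook Kalman filter; a scalar toy version (not the dissertation's atmospheric model) showing how each analysis becomes the next background:

    import numpy as np

    rng = np.random.default_rng(3)

    # Toy "atmosphere": scalar state decaying toward 0 with model error
    M, Q, R = 0.95, 0.02, 0.25     # model operator, model-error var, obs-error var
    truth, xb, pb = 1.0, 0.5, 1.0  # true state, background estimate and variance

    for cycle in range(5):
        # Nature run and a noisy observation of it
        truth = M * truth + rng.normal(0, np.sqrt(Q))
        y = truth + rng.normal(0, np.sqrt(R))

        # Analysis: combine the background with the observation
        k = pb / (pb + R)              # Kalman gain
        xa = xb + k * (y - xb)         # analysis state estimate
        pa = (1 - k) * pb              # analysis error variance

        # Forecast: evolve the analysis to the next cycle -> new background
        xb, pb = M * xa, M**2 * pa + Q
        print(f"cycle {cycle}: truth={truth:+.3f} analysis={xa:+.3f}")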
Nonequilibrium electroweak baryogenesis at preheating after inflation
NASA Astrophysics Data System (ADS)
García-Bellido, Juan; Grigoriev, Dmitri; Kusenko, Alexander; Shaposhnikov, Mikhail
1999-12-01
We present a novel scenario for baryogenesis in a hybrid inflation model at the electroweak scale, in which the standard model Higgs field triggers the end of inflation. One of the conditions for successful baryogenesis, the departure from thermal equilibrium, is naturally achieved at the stage of preheating after inflation. The inflaton oscillations induce large occupation numbers for long-wavelength configurations of the Higgs and gauge fields, which leads to a large rate of sphaleron transitions. We estimate this rate during the first stages of reheating and evaluate the amount of baryons produced due to a particular type of higher-dimensional CP violating operator. The universe thermalizes through fermion interactions, at a temperature below critical, Trh<~100 GeV, preventing the wash-out of the produced baryon asymmetry. Numerical simulations in 1+1 dimensions support our theoretical analyses.
Conventional development versus managed growth: the costs of sprawl.
Burchell, Robert W; Mukherji, Sahan
2003-09-01
We examined the effects of sprawl, or conventional development, versus managed (or "smart") growth on land and infrastructure consumption as well as on real estate development and public service costs in the United States. Mathematical impact models were used to produce US estimates of differences in resources consumed under each growth scenario over the period 2000-2025. Sprawl produces a 21% increase in the amount of undeveloped land converted to developed land (2.4 million acres) and approximately a 10% increase in local road lane-miles (188 300). Furthermore, sprawl causes about 10% more in annual public service (fiscal) deficits ($4.2 billion US dollars) and 8% higher housing occupancy costs ($13 000 US dollars per dwelling unit). Managed growth can save significant amounts of human and natural resources with limited effects on traditional development procedures.
Nunn, Amy S; Fonseca, Elize M; Bastos, Francisco I; Gruskin, Sofia; Salomon, Joshua A
2007-11-13
Little is known about the long-term drug costs associated with treating AIDS in developing countries. Brazil's AIDS treatment program has been cited widely as the developing world's largest and most successful AIDS treatment program. The program guarantees free access to highly active antiretroviral therapy (HAART) for all people living with HIV/AIDS in need of treatment. Brazil produces non-patented generic antiretroviral drugs (ARVs), procures many patented ARVs with negotiated price reductions, and recently issued a compulsory license to import one patented ARV. In this study, we investigate the drivers of recent ARV cost trends in Brazil through analysis of drug-specific prices and expenditures between 2001 and 2005. We compared Brazil's ARV prices to those in other low- and middle-income countries. We analyzed trends in drug expenditures for HAART in Brazil from 2001 to 2005 on the basis of cost data disaggregated by each ARV purchased by the Brazilian program. We decomposed the overall changes in expenditures to compare the relative impacts of changes in drug prices and drug purchase quantities. We also estimated the excess costs attributable to the difference between prices for generics in Brazil and the lowest global prices for these drugs. Finally, we estimated the savings attributable to Brazil's reduced prices for patented drugs. Negotiated drug prices in Brazil are lowest for patented ARVs for which generic competition is emerging. In recent years, the prices for efavirenz and lopinavir-ritonavir (lopinavir/r) have been lower in Brazil than in other middle-income countries. In contrast, the price of tenofovir is US$200 higher per patient per year than that reported in other middle-income countries. Despite precipitous price declines for four patented ARVs, total Brazilian drug expenditures doubled, to reach US$414 million in 2005. We find that the major driver of cost increases was increased purchase quantities of six specific drugs: patented lopinavir/r, efavirenz, tenofovir, atazanavir, enfuvirtide, and a locally produced generic, fixed-dose combination of zidovudine and lamivudine (AZT/3TC). Because prices declined for many of the patented drugs that constitute the largest share of drug costs, nearly the entire increase in overall drug expenditures between 2001 and 2005 is attributable to increases in drug quantities. Had all drug quantities been held constant from 2001 until 2005 (or for those drugs entering treatment guidelines after 2001, held constant between the year of introduction and 2005), total costs would have increased by only an estimated US$7 million. We estimate that in the absence of price declines for patented drugs, Brazil would have spent a cumulative total of US$2 billion on drugs for HAART between 2001 and 2005, implying a savings of US$1.2 billion from price declines. Finally, in comparing Brazilian prices for locally produced generic ARVs to the lowest international prices meeting global pharmaceutical quality standards, we find that current prices for Brazil's locally produced generics are generally much higher than corresponding global prices, and note that these prices have risen in Brazil while declining globally. We estimate the excess costs of Brazil's locally produced generics totaled US$110 million from 2001 to 2005. Despite Brazil's more costly generic ARVs, the net result of ARV price changes has been a cost savings of approximately US$1 billion since 2001. HAART costs have nevertheless risen steeply as Brazil has scaled up treatment. 
These trends may foreshadow future AIDS treatment cost trends in other developing countries as more people start treatment, AIDS patients live longer and move from first-line to second and third-line treatment, AIDS treatment becomes more complex, generic competition emerges, and newer patented drugs become available. The specific application of the Brazilian model to other countries will depend, however, on the strength of their health systems, intellectual property regulations, epidemiological profiles, AIDS treatment guidelines, and differing capacities to produce drugs locally.
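The decomposition used in this analysis, attributing expenditure changes to price versus quantity movements, can be sketched for a single drug; the figures below are invented for illustration, not actual Brazilian data (the study applies this drug by drug across the program's purchases):

    # Invented figures for one ARV (US$/patient-year and patient-years)
    p0, q0 = 2500.0, 10_000     # base year
    p1, q1 = 1500.0, 40_000     # end year

    total_change = p1 * q1 - p0 * q0
    price_effect = (p1 - p0) * q0          # effect of the price change alone
    quantity_effect = p0 * (q1 - q0)       # effect of the quantity change alone
    interaction = (p1 - p0) * (q1 - q0)    # joint effect of both changing

    print(f"expenditure change: {total_change / 1e6:+.1f} M$")
    print(f"  price effect    : {price_effect / 1e6:+.1f} M$")
    print(f"  quantity effect : {quantity_effect / 1e6:+.1f} M$")
    print(f"  interaction     : {interaction / 1e6:+.1f} M$")

With these numbers, a steep price decline is more than offset by the fourfold increase in patient-years, mirroring the paper's finding that quantities, not prices, drove the growth in total expenditures.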
Furnham, Adrian; Reeves, Emma; Budhani, Salima
2002-03-01
In this study, 156 participants, predominantly White British adults (M age = 44.3 years), rated themselves on overall IQ and on H. Gardner's (1983) 7 intelligence subtypes. Parents (n = 120) also estimated the intelligence of their children. Men's self-estimates were significantly higher than women's (110.15 vs. 104.84). Participants thought their verbal, mathematical, and spatial intelligence scores were the best indicators of their own overall intelligence. Parents estimated that their sons had significantly higher IQs than their daughters (115.21 vs. 107.49). Self-estimates and estimates of children's multiple intelligences were higher for men and sons, significantly so for logical-mathematical and spatial intelligence. Parents rated 2nd-born daughters as having significantly higher verbal and musical intelligence than their male counterparts. Higher parental IQ self-estimates corresponded with higher IQ estimates for children. Results for 1st-born children were clearest and showed the most significant differences. The findings are interpreted in terms of sociocultural and familial influences and the possibility of actual sex differences in particular abilities.
Giacaman, Rodrigo A; Torres, Sebastián; Gómez, Yenifer; Muñoz-Sandoval, Cecilia; Kreth, Jens
2015-01-01
This study was conducted to estimate oral colonization by Streptococcus mutans and Streptococcus sanguinis in adults with high caries experience and in adults without any caries experience. Furthermore, differences in the amount of hydrogen peroxide (H2O2) produced by S. sanguinis isolated from the two groups were assessed. Forty adults were divided into: (i) carious lesion-free (CF), without any carious lesion, as assessed by the International Caries Detection and Assessment System (ICDAS), or restoration, and (ii) high caries experience (HC). Saliva samples were collected and seeded on the respective agar plates for enumeration of total streptococci, S. mutans, and S. sanguinis (CFU/mL), and counts were compared between groups. Additionally, S. sanguinis colonies obtained from both groups were inoculated on Prussian blue agar for H2O2 detection. Production of H2O2 was quantified and compared between the two groups. S. sanguinis counts were significantly higher in CF than HC individuals (p<0.05). Conversely, S. mutans showed significantly higher levels in HC than CF subjects (p<0.001). S. sanguinis colonies from CF individuals produced significantly larger H2O2 halos compared with HC subjects. S. sanguinis predominates over S. mutans in the saliva of adults without caries experience. In those people, S. sanguinis produces more H2O2 ex vivo. Copyright © 2014 Elsevier Ltd. All rights reserved.
Publication Bias in Meta-Analysis: Confidence Intervals for Rosenthal's Fail-Safe Number.
Fragkos, Konstantinos C; Tsagris, Michail; Frangos, Christos C
2014-01-01
The purpose of the present paper is to assess the efficacy of confidence intervals for Rosenthal's fail-safe number. Although Rosenthal's estimator is highly used by researchers, its statistical properties are largely unexplored. First of all, we developed statistical theory which allowed us to produce confidence intervals for Rosenthal's fail-safe number. This was produced by discerning whether the number of studies analysed in a meta-analysis is fixed or random. Each case produces different variance estimators. For a given number of studies and a given distribution, we provided five variance estimators. Confidence intervals are examined with a normal approximation and a nonparametric bootstrap. The accuracy of the different confidence interval estimates was then tested by methods of simulation under different distributional assumptions. The half normal distribution variance estimator has the best probability coverage. Finally, we provide a table of lower confidence intervals for Rosenthal's estimator.
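For reference, Rosenthal's point estimator itself, around which the paper builds confidence intervals, is simple to compute; a sketch assuming the conventional one-tailed alpha = 0.05 cutoff and hypothetical study z-scores:

    def fail_safe_n(z_values, z_alpha=1.645):
        """Rosenthal's fail-safe number: how many unpublished null studies
        would be needed to drag the combined z below significance,
        N_fs = (sum z / z_alpha)^2 - k."""
        k = len(z_values)
        return (sum(z_values) / z_alpha) ** 2 - k

    # Hypothetical z-scores from a meta-analysis of 8 studies
    zs = [2.1, 1.4, 2.8, 0.9, 1.7, 2.3, 1.1, 2.0]
    print(f"fail-safe N ~ {fail_safe_n(zs):.0f}")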
Nachman, Keeve E; Love, David C; Baron, Patrick A; Nigra, Anne E; Murko, Manuela; Raber, Georg; Francesconi, Kevin A; Navas-Acien, Ana
2017-03-01
Use of nitarsone, an arsenic-based poultry drug, may result in dietary exposures to inorganic arsenic (iAs) and other arsenic species. Nitarsone was withdrawn from the U.S. market in 2015, but its use in other countries may continue. We characterized the impact of nitarsone use on arsenic species in turkey meat and arsenic exposures among turkey consumers, and we estimated cancer risk increases from consuming turkey treated with nitarsone before its 2015 U.S. withdrawal. Turkey from three cities was analyzed for total arsenic, iAs, methylarsonate (MA), dimethylarsinate, and nitarsone, which were compared across label type and month of purchase. Turkey consumption was estimated from NHANES data to estimate daily arsenic exposures for adults and children 4-30 months of age and cancer risks among adult consumers. Turkey meat from conventional producers not prohibiting nitarsone use showed increased mean levels of iAs (0.64 μg/kg) and MA (5.27 μg/kg) compared with antibiotic-free and organic meat (0.39 μg/kg and 1.54 μg/kg, respectively) and meat from conventional producers prohibiting nitarsone use (0.33 μg/kg and 0.28 μg/kg, respectively). Samples with measurable nitarsone had the highest mean iAs and MA (0.92 μg/kg and 10.96 μg/kg, respectively). Nitarsone was higher in October samples than in March samples, possibly resulting from increased summer use. Based on mean iAs concentrations in samples from conventional producers with no known policy versus policies prohibiting nitarsone, estimated lifetime daily consumption by an 80-kg adult, and a recently proposed cancer slope factor, we estimated that use of nitarsone by all turkey producers would result in 3.1 additional cases of bladder or lung cancer per 1,000,000 consumers. Nitarsone use can expose turkey consumers to iAs and MA. The results of our study support the U.S. Food and Drug Administration's removal of nitarsone from the U.S. market and further support its removal from the global marketplace. Citation: Nachman KE, Love DC, Baron PA, Nigra AE, Murko M, Raber G, Francesconi KA, Navas-Acien A. 2017. Nitarsone, inorganic arsenic, and other arsenic species in turkey meat: exposure and risk assessment based on a 2014 U.S. market basket sample. Environ Health Perspect 125:363-369; http://dx.doi.org/10.1289/EHP225.
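The risk arithmetic here follows the standard lifetime-average-daily-dose times cancer-slope-factor calculation. A sketch using the abstract's concentration difference and body weight, with an assumed consumption rate and slope factor (these two values are illustrative, not taken from the paper):

    # Incremental iAs in turkey attributable to nitarsone (from the abstract)
    delta_ias_ug_per_kg = 0.64 - 0.33   # ug iAs per kg of turkey meat

    TURKEY_G_PER_DAY = 30.0             # assumed lifetime-average consumption
    BODY_WEIGHT_KG = 80.0               # adult body weight used in the abstract
    CSF = 25.7                          # assumed cancer slope factor, per (mg/kg-day)

    dose_ug_per_day = delta_ias_ug_per_kg * (TURKEY_G_PER_DAY / 1000.0)
    ladd_mg_kg_day = dose_ug_per_day / 1000.0 / BODY_WEIGHT_KG
    extra_risk = ladd_mg_kg_day * CSF
    print(f"extra lifetime risk ~ {extra_risk * 1e6:.1f} per million consumers")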
Updated folate data in the Dutch Food Composition Database and implications for intake estimates
Westenbrink, Susanne; Jansen-van der Vliet, Martine; van Rossum, Caroline
2012-01-01
Background and objective: Nutrient values are influenced by the analytical method used. Food folate measured by high-performance liquid chromatography (HPLC) or by microbiological assay (MA) yields different results, with generally higher results from MA than from HPLC. This raises the question of how to deal with different analytical methods when compiling standardised and internationally comparable food composition databases. A recent inventory of folate in European food composition databases indicated that MA is currently more widely used than HPLC. Since older Dutch values were produced by HPLC and newer values by MA, the analytical methods and procedures for compiling folate data in the Dutch Food Composition Database (NEVO) were reconsidered and the folate values were updated. This article describes the impact of this revision of folate values in the NEVO database as well as the expected impact on the folate intake assessment in the Dutch National Food Consumption Survey (DNFCS). Design: The folate values were revised by replacing HPLC values with MA values from recent Dutch analyses. Previously, MA folate values taken from foreign food composition tables had been recalculated to the HPLC level, assuming a 27% lower value from HPLC analyses; these recalculated values were replaced by the original MA values. Dutch HPLC and MA values were compared to each other. Folate intake was assessed for a subgroup within the DNFCS to estimate the impact of the update. Results: In the updated NEVO database nearly all folate values were produced by MA or derived from MA values, which resulted in an average increase of 24%. The median habitual folate intake in young children increased by 11-15% using the updated folate values. Conclusion: The current approach for folate in NEVO resulted in more transparency in data production and documentation and higher comparability among European databases. Results of food consumption surveys are expected to show higher folate intakes when the updated values are used. PMID:22481900
Barker, S Fiona; Amoah, Philip; Drechsel, Pay
2014-07-15
With a rapidly growing urban population in Kumasi, Ghana, the consumption of street food is increasing. Raw salads, which often accompany street food dishes, are typically composed of perishable vegetables that are grown in close proximity to the city using poor quality water for irrigation. This study assessed the risk of gastroenteritis illness (caused by rotavirus, norovirus and Ascaris lumbricoides) associated with the consumption of street food salads using Quantitative Microbial Risk Assessment (QMRA). Three different risk assessment models were constructed, based on availability of microbial concentrations: 1) Water - starting from irrigation water quality, 2) Produce - starting from the quality of produce at market, and 3) Street - using microbial quality of street food salad. In the absence of viral concentrations, published ratios between faecal coliforms and viruses were used to estimate the quality of water, produce and salad, and annual disease burdens were determined. Rotavirus dominated the estimates of annual disease burden (~10(-3) Disability Adjusted Life Years per person per year (DALYs pppy)), although norovirus also exceeded the 10(-4) DALY threshold for both Produce and Street models. The Water model ignored other on-farm and post-harvest sources of contamination and consistently produced lower estimates of risk; it likely underestimates disease burden and therefore is not recommended. Required log reductions of up to 5.3 (95th percentile) for rotavirus were estimated for the Street model, demonstrating that significant interventions are required to protect the health and safety of street food consumers in Kumasi. Estimates of virus concentrations were a significant source of model uncertainty and more data on pathogen concentrations is needed to refine QMRA estimates of disease burden. Copyright © 2014 Elsevier B.V. All rights reserved.
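A minimal sketch of the QMRA chain for one pathogen, using the widely cited approximate beta-Poisson dose-response parameters for rotavirus and invented exposure and burden values (the study's models are considerably more elaborate):

    ALPHA, N50 = 0.253, 6.17              # rotavirus dose-response parameters
    BETA = N50 / (2 ** (1 / ALPHA) - 1)

    def p_infection(dose):
        """Approximate beta-Poisson dose-response."""
        return 1.0 - (1.0 + dose / BETA) ** (-ALPHA)

    # Invented exposure: viruses per salad serving, servings per year
    dose_per_serving = 0.8
    servings_per_year = 150

    p_serving = p_infection(dose_per_serving)
    p_annual = 1.0 - (1.0 - p_serving) ** servings_per_year

    P_ILL = 0.5            # P(illness | infection), assumed
    DALY_PER_CASE = 0.013  # DALYs per rotavirus case, assumed
    burden = p_annual * P_ILL * DALY_PER_CASE
    print(f"annual infection risk ~ {p_annual:.2f}, burden ~ {burden:.1e} DALYs pppy")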
Testing an automated method to estimate ground-water recharge from streamflow records
Rutledge, A.T.; Daniel, C.C.
1994-01-01
The computer program, RORA, allows automated analysis of streamflow hydrographs to estimate ground-water recharge. Output from the program, which is based on the recession-curve-displacement method (often referred to as the Rorabaugh method, for whom the program is named), was compared to estimates of recharge obtained from a manual analysis of 156 years of streamflow record from 15 streamflow-gaging stations in the eastern United States. Statistical tests showed that there was no significant difference between paired estimates of annual recharge by the two methods. Tests of results produced by the four workers who performed the manual method showed that results can differ significantly between workers. Twenty-two percent of the variation between manual and automated estimates could be attributed to having different workers perform the manual method. The program RORA will produce estimates of recharge equivalent to estimates produced manually, greatly increase the speed of analysis, and reduce the subjectivity inherent in manual analysis.
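A simplified single-event sketch of the recession-curve-displacement idea, under the commonly quoted Rorabaugh result that total recharge per storm is 2*K*(Q2 - Q1)/2.3026, where K is the recession index and Q1, Q2 are the pre- and post-storm recessions extrapolated to the peak (all numbers invented; RORA automates critical-time selection and handles sequences of overlapping events):

    import math

    K_DAYS = 45.0     # recession index: days per log cycle of baseflow decline
    Q_PRE = 20.0      # pre-storm recession extrapolated to the peak, cfs
    Q_POST = 35.0     # post-storm recession extrapolated back to the peak, cfs
    AREA_MI2 = 50.0   # drainage area, mi^2

    recharge_cfs_days = 2.0 * K_DAYS * (Q_POST - Q_PRE) / math.log(10)

    # Convert cfs-days over the basin to inches of water
    volume_ft3 = recharge_cfs_days * 86400.0
    depth_in = volume_ft3 / (AREA_MI2 * 5280.0**2) * 12.0
    print(f"storm recharge ~ {depth_in:.2f} in over the basin")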
Kirtadze, Irma; Otiashvili, David; Tabatadze, Mzia; Vardanashvili, Irina; Sturua, Lela; Zabransky, Tomas; Anthony, James C
2018-06-01
Validity of responses in surveys is an important research concern, especially in emerging market economies where general population surveys are a novelty and the level of social control is traditionally higher. The Randomized Response Technique (RRT) can be used as a check on response validity when the study aim is to estimate the population prevalence of drug experiences and other socially sensitive and/or illegal behaviors. Our aim was to apply RRT and to study potential under-reporting of drug use in a nation-scale, population-based general population survey of alcohol and other drug use. For this first-ever household survey on addictive substances in the country of Georgia, we used multi-stage probability sampling of 18-to-64-year-old household residents of 111 urban and 49 rural areas. During the interviewer-administered assessments, RRT involved pairing of sensitive and non-sensitive questions about drug experiences. Based upon the standard household self-report survey estimate, an estimated 17.3% [95% confidence interval, CI: 15.5%, 19.1%] of Georgian household residents have tried cannabis. The corresponding RRT estimate was 29.9% [95% CI: 24.9%, 34.9%]. The RRT estimates for other drugs, such as heroin, also were larger than the standard self-report estimates. We remain unsure about the "true" prevalence of illegal psychotropic drug use in the Georgian study population. Our RRT results suggest that standard non-RRT approaches might produce under-estimates or, at best, highly conservative lower-end estimates. Copyright © 2018 Elsevier B.V. All rights reserved.
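A sketch of the unmasking step for a forced-response RRT design (the survey's exact design and parameters may differ, and the values here are assumptions): each respondent answers truthfully with probability p and otherwise gives a predetermined "yes" with probability theta, so the observed "yes" rate can be inverted for the true prevalence.

    def rrt_prevalence(yes_rate, p_truth=0.75, theta_forced_yes=0.5):
        """Invert a forced-response RRT 'yes' rate into a prevalence estimate.
        Observed: yes_rate = p_truth * pi + (1 - p_truth) * theta_forced_yes."""
        return (yes_rate - (1 - p_truth) * theta_forced_yes) / p_truth

    # Example: a 34.9% observed 'yes' rate under this (assumed) design
    print(f"estimated prevalence ~ {100 * rrt_prevalence(0.349):.1f}%")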
Modified fast frequency acquisition via adaptive least squares algorithm
NASA Technical Reports Server (NTRS)
Kumar, Rajendra (Inventor)
1992-01-01
A method and the associated apparatus for estimating the amplitude, frequency, and phase of a signal of interest are presented. The method comprises the following steps: (1) inputting the signal of interest; (2) generating a reference signal with adjustable amplitude, frequency and phase at an output thereof; (3) mixing the signal of interest with the reference signal and a signal 90 deg out of phase with the reference signal to provide a pair of quadrature sample signals comprising respectively a difference between the signal of interest and the reference signal and a difference between the signal of interest and the signal 90 deg out of phase with the reference signal; (4) using the pair of quadrature sample signals to compute estimates of the amplitude, frequency, and phase of an error signal comprising the difference between the signal of interest and the reference signal employing a least squares estimation; (5) adjusting the amplitude, frequency, and phase of the reference signal from the numerically controlled oscillator in a manner which drives the error signal towards zero; and (6) outputting the estimates of the amplitude, frequency, and phase of the error signal in combination with the reference signal to produce a best estimate of the amplitude, frequency, and phase of the signal of interest. The preferred method includes the step of providing the error signal as a real time confidence measure as to the accuracy of the estimates wherein the closer the error signal is to zero, the higher the probability that the estimates are accurate. A matrix in the estimation algorithm provides an estimate of the variance of the estimation error.
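As a rough software analogue of the patented loop (an illustrative sketch, not the patented apparatus; all names and parameters are invented), the code below mixes the input with a reference from a software NCO to form quadrature error samples, fits the baseband error phase by least squares, and uses the fit to drive the reference toward the signal of interest.

```python
import numpy as np

def track_tone(x, fs, f0, n_iter=20, block=256):
    """Crude quadrature-mixing + least-squares tracker for one tone.

    x: real input samples; fs: sample rate (Hz); f0: initial frequency
    guess. Each iteration mixes x with an NCO reference, block-averages
    the product as a cheap lowpass to get the baseband error signal,
    then fits phase(t) ~ a + 2*pi*df*t by least squares: df corrects
    the frequency, a corrects the phase, until the error goes to zero.
    """
    t = np.arange(len(x)) / fs
    freq, phase = f0, 0.0
    for _ in range(n_iter):
        ref = np.exp(-1j * (2 * np.pi * freq * t + phase))  # I/Q reference
        base = x * ref                                      # error near DC
        nb = len(base) // block
        z = base[: nb * block].reshape(nb, block).mean(axis=1)
        tb = t[: nb * block].reshape(nb, block).mean(axis=1)
        ph = np.unwrap(np.angle(z))
        A = np.vstack([np.ones_like(tb), 2 * np.pi * tb]).T
        a, df = np.linalg.lstsq(A, ph, rcond=None)[0]
        freq += df
        phase = (phase + a) % (2 * np.pi)
        amp = 2 * np.abs(z).mean()  # real tone -> half amplitude at baseband
    return amp, freq, phase

# Synthetic 1020 Hz tone sampled at 48 kHz, initial guess 1000 Hz:
fs = 48000.0
t = np.arange(4096) / fs
x = 0.8 * np.cos(2 * np.pi * 1020.0 * t + 0.3)
print(track_tone(x, fs, f0=1000.0))
```

In the spirit of the patent's confidence measure, the flatter and closer to a pure DC value the baseband error becomes, the more trustworthy the amplitude, frequency, and phase estimates are.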
A RECONNECTION-DRIVEN MODEL OF THE HARD X-RAY LOOP-TOP SOURCE FROM FLARE 2004 FEBRUARY 26
DOE Office of Scientific and Technical Information (OSTI.GOV)
Longcope, Dana; Qiu, Jiong; Brewer, Jasmine
A compact X-class flare on 2004 February 26 showed a concentrated source of hard X-rays at the tops of the flare’s loops. This was analyzed in previous work and interpreted as plasma heated and compressed by slow magnetosonic shocks (SMSs) generated during post-reconnection retraction of the flux. That work used analytic expressions from a thin flux tube (TFT) model, which neglected many potentially important factors such as thermal conduction and chromospheric evaporation. Here we use a numerical solution of the TFT equations to produce a more comprehensive and accurate model of the same flare, including those effects previously omitted. These simulations corroborate the prior hypothesis that slow-mode shocks persist well after the retraction has ended, thus producing a compact, loop-top source instead of an elongated jet, as steady reconnection models predict. Thermal conduction leads to densities higher than analytic estimates had predicted, and evaporation enhances the density still higher, but at lower temperatures. X-ray light curves and spectra are synthesized by convolving the results from a single TFT simulation with the rate at which flux is reconnected, as measured through motion of flare ribbons, for example. These agree well with light curves observed by RHESSI and GOES and spectra from RHESSI. An image created from a superposition of TFT model runs resembles one produced from RHESSI observations. This suggests that the HXR loop-top source, at least the one observed in this flare, could be the result of SMSs produced in fast reconnection models like Petschek’s.
Correlation Between Hierarchical Bayesian and Aerosol ...
Tools to estimate PM2.5 mass have expanded in recent years, and now include: 1) stationary monitor readings; 2) Community Multi-Scale Air Quality (CMAQ) model estimates; 3) Hierarchical Bayesian (HB) estimates from combined stationary monitor readings and CMAQ model output; and 4) calibrated Aerosol Optical Depth (AOD) readings from the two Moderate Resolution Imaging Spectroradiometer (MODIS) units on the National Aeronautics and Space Administration's (NASA) Terra and Aqua satellites. Case-crossover design and conditional logistic regression were used to determine concentration-response (CR) functions for three different PM2.5 levels on asthma emergency department (ED) visits and acute myocardial infarction (MI) inpatient hospitalizations in ninety-nine 12-km² grids in Baltimore, MD (2005 data). HB analyses for asthma ED visits produced significant results at 3-day lags for the main effect (OR=1.002, 95% CI=1.000-1.005), and two effect modifiers for females (OR=1.003, 95% CI=1.000-1.006), and non-Caucasian/non-African American persons (OR=1.010, 95% CI=1.001-1.019). HB analyses for acute MI inpatient hospitalizations also consistently produced a significant outcome for persons of other race (OR=1.031, 95% CI=1.006-1.056). Correlation coefficients computed between stationary monitor and satellite AOD PM2.5 values were significant for both asthma (rxy=0.944) and acute MI (rxy=0.940). Both monitor and AOD PM2.5 values were higher in February and June through August.
Lu, Sen; Ren, Tusheng; Lu, Yili; Meng, Ping; Sun, Shiyou
2014-01-01
Accurate estimation of the soil water retention curve (SWRC) at the dry region is required to describe the relation between soil water content and matric suction from saturation to oven dryness. In this study, the extrapolative capability of two models for predicting the complete SWRC from limited ranges of soil water retention data was evaluated. When the model parameters were obtained from SWRC data in the 0-1500 kPa range, the FX model (Fredlund and Xing, 1994) estimations agreed well with measurements from saturation to oven dryness, with RMSEs less than 0.01. The GG model (Groenevelt and Grant, 2004) produced larger errors at the dry region, with significantly larger RMSEs and MEs than the FX model. Further evaluations indicated that when SWRC measurements in the 0-100 kPa suction range were applied for model establishment, the FX model was capable of producing acceptable SWRCs across the entire water content range. For a higher accuracy, the FX model requires soil water retention data at least in the 0- to 300-kPa range to extend the SWRC to oven dryness. Compared with the Khlosi et al. (2006) model, which requires measurements in the 0-500 kPa range to reproduce the complete SWRCs, the FX model has the advantage of requiring fewer SWRC measurements. Thus the FX modeling approach has the potential to eliminate the processes for measuring soil water retention in the dry range.
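For reference, the Fredlund and Xing (1994) function fitted here is usually written as below (the standard published form, with fitting parameters a, n, m, saturated water content θs and residual suction ψr; the notation is the literature's, not this abstract's). The correction factor C(ψ) forces θ to zero at ψ = 10⁶ kPa (oven dryness), which is what gives the model its extrapolative reach into the dry range.

```latex
\theta(\psi) = C(\psi)\,\frac{\theta_s}{\left\{\ln\left[e + (\psi/a)^{n}\right]\right\}^{m}},
\qquad
C(\psi) = 1 - \frac{\ln\left(1 + \psi/\psi_r\right)}{\ln\left(1 + 10^{6}/\psi_r\right)}
```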
NASA Astrophysics Data System (ADS)
Wang, Rong; Chen, Jing M.; Pavlic, Goran; Arain, Altaf
2016-09-01
Winter leaf area index (LAI) of evergreen coniferous forests exerts strong control on the interception of snow, snowmelt and energy balance. Simulation of winter LAI and associated winter processes in land surface models is challenging. Retrieving winter LAI from remote sensing data is difficult due to cloud contamination, poor illumination, lower solar elevation and higher radiation reflection by the snow background. Underestimated winter LAI in evergreen coniferous forests is one of the major issues limiting the application of current remote sensing LAI products, and it has not been fully addressed in past studies. In this study, we used needle lifespan to correct winter LAI in a remote sensing product developed by the University of Toronto. For validation purposes, the corrected winter LAI was then used to calculate land surface albedo at five FLUXNET coniferous forests in Canada. The RMSE and bias values for estimated albedo were 0.05 and 0.011, respectively, across all sites. The albedo map over coniferous forests across Canada produced with the corrected winter LAI showed much better agreement with the GLASS (Global LAnd Surface Satellites) albedo product than the one produced with the uncorrected winter LAI. The results revealed that the corrected winter LAI yielded much greater accuracy in simulating land surface albedo, making the new LAI product an improvement over the original one. Our study will help to increase the usability of remote sensing LAI products in land surface energy budget modeling.
Major influencing factors of indoor radon concentrations in Switzerland.
Kropat, Georg; Bochud, Francois; Jaboyedoff, Michel; Laedermann, Jean-Pascal; Murith, Christophe; Palacios, Martha; Baechler, Sébastien
2014-03-01
In Switzerland, nationwide large-scale radon surveys have been conducted since the early 1980s to establish the distribution of indoor radon concentrations (IRC). The aim of this work was to study the factors influencing IRC in Switzerland using univariate analyses that take into account biases caused by spatial irregularities of sampling. About 212,000 IRC measurements carried out in more than 136,000 dwellings were available for this study. A probability map to assess the risk of exceeding an IRC of 300 Bq/m³ was produced using basic geostatistical techniques. Univariate analyses of IRC for different variables, namely the type of radon detector, various building characteristics such as foundation type, year of construction and building type, as well as the altitude, the average outdoor temperature during measurement and the lithology, were performed comparing 95% confidence intervals among classes of each variable. Furthermore, a map showing the spatial aggregation of the number of measurements was generated for each class of variable in order to assess biases due to spatially irregular sampling. IRC measurements carried out with electret detectors were 35% higher than measurements performed with track detectors. Regarding building characteristics, the IRC of apartments is significantly lower than that of individual houses. Furthermore, buildings with concrete foundations have the lowest IRC. A significant decrease in IRC was found in buildings constructed after 1900 and again after 1970. Moreover, IRC decreases at higher outdoor temperatures. There is also a tendency toward higher IRC with altitude. Regarding lithology, carbonate rock in the Jura Mountains produces significantly higher IRC, almost by a factor of 2, than carbonate rock in the Alps. Sedimentary rock and sediment produce the lowest IRC, while carbonate rock from the Jura Mountains and igneous rock produce the highest IRC. Potential biases due to spatially unbalanced sampling of measurements were identified for several influencing factors. Significant associations were found between IRC and all variables under study. However, we showed that the spatial distribution of samples strongly affected the relevance of those associations. Therefore, future methods to estimate local radon hazards should take the multidimensionality of the process of IRC into account.
Botelho, Anabela
2013-10-01
This study empirically evaluates whether the increasingly large numbers of private outpatient healthcare facilities (HCFs) within the European Union (EU) countries comply with the existing European waste legislation, and whether compliance with such legislation affects the fraction of healthcare waste (HCW) classified as hazardous. To that end, this study uses data collected by a large survey of more than 700 small private HCFs distributed throughout Portugal, a full member of the EU since 1986, where 50% of outpatient care is currently dominated by private operators. The collected data are then used to estimate a hurdle model, i.e. a statistical specification in which there are two processes: one is the process by which some HCFs generate zero or some positive fraction of hazardous HCW, and another is the process by which HCFs generate a specific positive fraction of hazardous HCW conditional on producing any. Taken together, the results show that although compliance with the law is far from ideal, it is the strongest factor influencing hazardous waste generation. In particular, it is found that higher compliance has a small and insignificant effect on the probability of generating (or reporting) positive amounts of hazardous waste, but it does have a large and significant effect on the fraction of hazardous waste produced, conditional on producing any, with a unit increase in the compliance rate leading to an estimated decrease in the fraction of hazardous HCW by 16.3 percentage points.
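The hurdle structure described above is generic enough to sketch. Below is a minimal two-part illustration on simulated data; the variable names and numbers are invented, and the paper's actual conditional model for the positive fraction is not given in the abstract (fractional outcomes are also often handled with beta or fractional-logit regressions instead).

```python
import numpy as np
import statsmodels.api as sm

# Minimal two-part ("hurdle") sketch with made-up data:
# part 1 models P(fraction > 0); part 2 models the positive fraction,
# conditional on producing any, via a logit-transformed OLS.
rng = np.random.default_rng(0)
n = 700
compliance = rng.uniform(0, 1, n)        # hypothetical compliance rate
X = sm.add_constant(compliance)

# Fabricated outcome purely for illustration
frac = np.clip(rng.normal(0.4 - 0.163 * compliance, 0.15, n), 0, 0.95)
frac[rng.uniform(size=n) < 0.3] = 0.0    # ~30% report zero hazardous waste

any_waste = (frac > 0).astype(int)
part1 = sm.Logit(any_waste, X).fit(disp=False)   # hurdle: zero vs positive

pos = frac > 0
y = np.log(frac[pos] / (1 - frac[pos]))          # logit of positive fraction
part2 = sm.OLS(y, X[pos]).fit()                  # conditional-on-positive part

print(part1.params)   # compliance effect on producing any hazardous waste
print(part2.params)   # compliance effect on the fraction, given any
```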
Impact of multicollinearity on small sample hydrologic regression models
NASA Astrophysics Data System (ADS)
Kroll, Charles N.; Song, Peter
2013-06-01
Often hydrologic regression models are developed with ordinary least squares (OLS) procedures. The use of OLS with highly correlated explanatory variables produces multicollinearity, which creates highly sensitive parameter estimators with inflated variances and improper model selection. It is not clear how to best address multicollinearity in hydrologic regression models. Here a Monte Carlo simulation is developed to compare four techniques to address multicollinearity: OLS, OLS with variance inflation factor screening (VIF), principal component regression (PCR), and partial least squares regression (PLS). The performance of these four techniques was observed for varying sample sizes, correlation coefficients between the explanatory variables, and model error variances consistent with hydrologic regional regression models. The negative effects of multicollinearity are magnified at smaller sample sizes, higher correlations between the variables, and larger model error variances (smaller R2). The Monte Carlo simulation indicates that if the true model is known, multicollinearity is present, and the estimation and statistical testing of regression parameters are of interest, then PCR or PLS should be employed. If the model is unknown, or if the interest is solely in model predictions, it is recommended that OLS be employed, since using more complicated techniques did not produce any improvement in model performance. A leave-one-out cross-validation case study was also performed using low-streamflow data sets from the eastern United States. Results indicate that OLS with stepwise selection generally produces models across study regions with varying levels of multicollinearity that are as good as biased regression techniques such as PCR and PLS.
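A toy version of this Monte Carlo comparison can be set up as below (assumed two-predictor design with invented parameters, not the study's configuration); it reproduces the qualitative point that OLS coefficient variance inflates under collinearity, while one-component PCR and PLS estimates are biased but far more stable.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
n, rho, beta = 20, 0.95, np.array([1.0, 1.0])  # small n, high collinearity
cov = np.array([[1.0, rho], [rho, 1.0]])

def pcr_coefs(X, y, k=1):
    """Principal component regression: regress y on the first k PCs,
    then map the coefficients back to the original variables."""
    pca = PCA(n_components=k).fit(X)
    lr = LinearRegression().fit(pca.transform(X), y)
    return pca.components_.T @ lr.coef_

ols_c, pcr_c, pls_c = [], [], []
for _ in range(2000):
    X = rng.multivariate_normal(np.zeros(2), cov, size=n)
    y = X @ beta + rng.normal(0.0, 1.0, n)
    ols_c.append(LinearRegression().fit(X, y).coef_)
    pcr_c.append(pcr_coefs(X, y))
    pls_c.append(PLSRegression(n_components=1).fit(X, y).coef_.ravel())

for name, c in [("OLS", ols_c), ("PCR", pcr_c), ("PLS", pls_c)]:
    c = np.asarray(c)
    print(f"{name}: mean {c.mean(axis=0).round(2)}, var {c.var(axis=0).round(3)}")
```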
NASA Astrophysics Data System (ADS)
Luo, Shezhou; Wang, Cheng; Xi, Xiaohuan; Pan, Feifei; Qian, Mingjie; Peng, Dailiang; Nie, Sheng; Qin, Haiming; Lin, Yi
2017-06-01
Wetland biomass is essential for monitoring the stability and productivity of wetland ecosystems. Conventional field methods to measure or estimate wetland biomass are accurate and reliable, but expensive, time consuming and labor intensive. This research explored the potential for estimating wetland reed biomass using a combination of airborne discrete-return Light Detection and Ranging (LiDAR) and hyperspectral data. To derive the optimal predictor variables of reed biomass, a range of LiDAR and hyperspectral metrics at different spatial scales were regressed against the field-observed biomasses. The results showed that the LiDAR-derived H_p99 (99th percentile of the LiDAR height) and hyperspectral-calculated modified soil-adjusted vegetation index (MSAVI) were the best metrics for estimating reed biomass using the single regression model. Although the LiDAR data yielded a higher estimation accuracy compared to the hyperspectral data, the combination of LiDAR and hyperspectral data produced a more accurate prediction model for reed biomass (R2 = 0.648, RMSE = 167.546 g/m2, RMSEr = 20.71%) than LiDAR data alone. Thus, combining LiDAR data with hyperspectral data has a great potential for improving the accuracy of aboveground biomass estimation.
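MSAVI has a standard closed form (the MSAVI2 of Qi et al., 1994, presumably the index computed here), and combining it with the LiDAR height percentile amounts to ordinary multiple regression; a minimal sketch with made-up plot values follows.

```python
import numpy as np

def msavi(nir, red):
    """Modified soil-adjusted vegetation index (MSAVI2, Qi et al. 1994)."""
    return (2 * nir + 1 - np.sqrt((2 * nir + 1) ** 2 - 8 * (nir - red))) / 2

# Hypothetical plot-level predictors: LiDAR 99th-percentile height + MSAVI
h_p99 = np.array([1.2, 2.4, 0.9, 3.1, 1.8])               # m, invented
m = msavi(np.array([0.45, 0.52, 0.40, 0.60, 0.50]),       # NIR reflectance
          np.array([0.08, 0.06, 0.10, 0.05, 0.07]))       # red reflectance
biomass = np.array([310.0, 620.0, 240.0, 810.0, 450.0])   # g/m^2, invented

# Multiple linear regression combining both sensors (least squares)
A = np.column_stack([np.ones_like(h_p99), h_p99, m])
coef, *_ = np.linalg.lstsq(A, biomass, rcond=None)
print("intercept, b_height, b_msavi:", coef.round(1))
```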
A meta-analysis of the worldwide prevalence of pica during pregnancy and the postpartum period.
Fawcett, Emily J; Fawcett, Jonathan M; Mazmanian, Dwight
2016-06-01
Although pica has long been associated with pregnancy, the exact prevalence in this population remains unknown. To estimate the prevalence of pica during pregnancy and the postpartum period, and to explain variations in prevalence estimates by examining potential moderating variables. PsycARTICLES, PsycINFO, PubMed, and Google Scholar were searched from inception to February 2014 using the keywords pica, prevalence, and epidemiology. Articles estimating pica prevalence during pregnancy and/or the postpartum period using a self-report questionnaire or interview were included. Study characteristics, pica prevalence, and eight potential moderating variables were recorded (parity, anemia, duration of pregnancy, mean maternal age, education, sampling method employed, region, and publication date). Random-effects models were employed. In total, 70 studies were included, producing an aggregate prevalence estimate of 27.8% (95% confidence interval 22.8-33.3). In light of substantial heterogeneity across studies, the primary focus was on identifying moderator variables. Pica prevalence was higher in Africa compared with elsewhere in the world, increased as the prevalence of anemia increased, and decreased as educational attainment increased. Geographical region, anemia, and education were found to moderate pica prevalence, partially explaining the heterogeneity in prevalence estimates across the literature.
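Random-effects pooled prevalences of this kind are commonly computed with a DerSimonian-Laird estimator. The sketch below works on the raw proportion scale for brevity (prevalence meta-analyses usually apply a logit or double-arcsine transform first), and the study proportions and sample sizes are invented.

```python
import numpy as np

def dersimonian_laird(p, n):
    """Random-effects pooled proportion via DerSimonian-Laird."""
    v = p * (1 - p) / n                    # within-study variances
    w = 1 / v
    p_fixed = np.sum(w * p) / np.sum(w)    # fixed-effect pooled estimate
    q = np.sum(w * (p - p_fixed) ** 2)     # Cochran's Q heterogeneity stat
    df = len(p) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)          # between-study variance
    w_star = 1 / (v + tau2)                # random-effects weights
    p_re = np.sum(w_star * p) / np.sum(w_star)
    se = np.sqrt(1 / np.sum(w_star))
    return p_re, (p_re - 1.96 * se, p_re + 1.96 * se)

# Illustrative study proportions and sizes (not the review's data):
p = np.array([0.31, 0.18, 0.44, 0.22, 0.27])
n = np.array([120, 340, 95, 210, 150])
print(dersimonian_laird(p, n))
```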
Address-based versus random-digit-dial surveys: comparison of key health and risk indicators.
Link, Michael W; Battaglia, Michael P; Frankel, Martin R; Osborn, Larry; Mokdad, Ali H
2006-11-15
Use of random-digit dialing (RDD) for conducting health surveys is increasingly problematic because of declining participation rates and eroding frame coverage. Alternative survey modes and sampling frames may improve response rates and increase the validity of survey estimates. In a 2005 pilot study conducted in six states as part of the Behavioral Risk Factor Surveillance System, the authors administered a mail survey to selected household members sampled from addresses in a US Postal Service database. The authors compared estimates based on data from the completed mail surveys (n = 3,010) with those from the Behavioral Risk Factor Surveillance System telephone surveys (n = 18,780). The mail survey data appeared reasonably complete, and estimates based on data from the two survey modes were largely equivalent. Differences found, such as differences in the estimated prevalences of binge drinking (mail = 20.3%, telephone = 13.1%) or behaviors linked to human immunodeficiency virus transmission (mail = 7.1%, telephone = 4.2%), were consistent with previous research showing that, for questions about sensitive behaviors, self-administered surveys generally produce higher estimates than interviewer-administered surveys. The mail survey also provided access to cell-phone-only households and households without telephones, which cannot be reached by means of standard RDD surveys.
Methodological comparison of alpine meadow evapotranspiration on the Tibetan Plateau, China.
Chang, Yaping; Wang, Jie; Qin, Dahe; Ding, Yongjian; Zhao, Qiudong; Liu, Fengjing; Zhang, Shiqiang
2017-01-01
Estimation of evapotranspiration (ET) for alpine meadow areas in the Tibetan Plateau (TP) is essential for water resource management. However, observation data have been limited due to the extreme climates and complex terrain of this region. To address these issues, four representative methods, the Penman-Monteith (PM), Priestley-Taylor (PT), Hargreaves-Samani (HS), and Mahringer (MG) methods, were adopted to estimate ET, which was then compared with ET measured using Eddy Covariance (EC) at five alpine meadow sites during the growing seasons from 2010 to 2014; each site was measured for one growing season during this period. The results demonstrate that the PT method performed best at all sites, with a coefficient of determination (R2) ranging from 0.76 to 0.94 and root mean square error (RMSE) ranging from 0.41 to 0.62 mm d-1. The PM method showed better performance than the HS and MG methods, and the HS method produced relatively acceptable results, with a higher R2 (0.46) and lower RMSE (0.89 mm d-1) than the MG method (R2 of 0.16 and RMSE of 1.62 mm d-1), while MG underestimated ET at all alpine meadow sites. Therefore, the PT method, being the simpler approach and less data dependent, is recommended for estimating ET in alpine meadow areas of the Tibetan Plateau. The PM method produced reliable results when available data were sufficient, and the HS method proved to be a complementary method when variables were insufficient. On the contrary, the MG method always underestimated ET and is thus not suitable for alpine meadows. These results provide a basis for estimating ET on the Tibetan Plateau for annual data collection, analysis, and future studies.
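For reference, the PT and HS forms compared above are standard (textbook notation, not this paper's symbols): Δ is the slope of the saturation vapour pressure curve, γ the psychrometric constant, Rn net radiation, G soil heat flux, λ the latent heat of vaporization, and Ra extraterrestrial radiation expressed as equivalent evaporation.

```latex
\lambda\,ET_{\mathrm{PT}} = \alpha\,\frac{\Delta}{\Delta+\gamma}\,(R_n - G),\qquad \alpha \approx 1.26
\\[4pt]
ET_{\mathrm{HS}} = 0.0023\,R_a\,(T_{\mathrm{mean}} + 17.8)\,\sqrt{T_{\max} - T_{\min}}
```

The PT form needs only radiation and temperature, and HS only temperature and latitude (through Ra), which is why they serve as fallbacks when the full PM forcing data are missing.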
NASA Astrophysics Data System (ADS)
Sullivan, A. B.; Mulholland, P. J.; Jones, J. B.
2001-05-01
Headwater streams are almost always supersaturated with CO2 compared to concentrations expected in equilibrium with atmospheric CO2. Direct measurements of CO2 in two streams in eastern Tennessee with different bedrock lithologies (Walker Branch, Upper Gum Hollow Branch) over a year revealed levels of supersaturation of two to five times atmospheric CO2. Highest levels were generally found during the summer months. Springs discharging into the stream had dissolved CO2 concentration up to an order of magnitude higher than that in streamwater. These levels of supersaturation are a reflection of the high concentrations of CO2 in soil produced by root respiration and organic matter decomposition. The hydrologic connection between soil CO2 and streamwater CO2 forms the basis of our method to determine soil CO2 concentrations and efflux from the soil to the atmosphere. The method starts with streamwater measurements of CO2. Then corrections are made for evasion from the stream surface using injections of a conservative solute tracer and volatile gas, and for instream metabolism using a dissolved oxygen change technique. The approach then works backward along the hydrologic flowpath and evaluates the contribution of bedrock weathering, which consumes CO2, by examining the changes in major ion chemistry between precipitation and the stream. This produces estimates of CO2 concentration in soil water and soil atmosphere, which when coupled with soil porosity, allows estimation of CO2 efflux from soil. The hydrologic integration of CO2 signals from whole watersheds into streamwater allows calculation of soil CO2 efflux at large scales. These estimates are at scales larger than current chamber or tower methods, and can provide broad estimates of soil CO2 efflux with easily collected stream chemistry data.
Sampling effects on the identification of roadkill hotspots: Implications for survey design.
Santos, Sara M; Marques, J Tiago; Lourenço, André; Medinas, Denis; Barbosa, A Márcia; Beja, Pedro; Mira, António
2015-10-01
Although locating wildlife roadkill hotspots is essential to mitigate road impacts, the influence of study design on hotspot identification remains uncertain. We evaluated how sampling frequency affects the accuracy of hotspot identification, using a dataset of vertebrate roadkills (n = 4427) recorded over a year of daily surveys along 37 km of roads. "True" hotspots were identified using this baseline dataset, as the 500-m segments where the number of road-killed vertebrates exceeded the upper 95% confidence limit of the mean, assuming a Poisson distribution of road-kills per segment. "Estimated" hotspots were identified likewise, using datasets representing progressively lower sampling frequencies, which were produced by extracting data from the baseline dataset at appropriate time intervals (1-30 days). Overall, 24.3% of segments were "true" hotspots, concentrating 40.4% of roadkills. For different groups, "true" hotspots accounted for from 6.8% (bats) to 29.7% (small birds) of road segments, concentrating from <40% (frogs and toads, snakes) to >60% (lizards, lagomorphs, carnivores) of roadkills. Spatial congruence between "true" and "estimated" hotspots declined rapidly with increasing time interval between surveys, due primarily to increasing false negatives (i.e., missing "true" hotspots). There were also false positives (i.e., wrong "estimated" hotspots), particularly at low sampling frequencies. Spatial accuracy decay with increasing time interval between surveys was higher for smaller-bodied (amphibians, reptiles, small birds, small mammals) than for larger-bodied species (birds of prey, hedgehogs, lagomorphs, carnivores). Results suggest that widely used surveys at weekly or longer intervals may produce poor estimates of roadkill hotspots, particularly for small-bodied species. Surveying daily or at two-day intervals may be required to achieve high accuracy in hotspot identification for multiple species.
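One reading of the hotspot rule above (counts exceeding the upper 95% point of a Poisson distribution with the network-wide mean rate) fits in a few lines; the segment counts below are invented.

```python
import numpy as np
from scipy import stats

def hotspot_segments(counts, alpha=0.05):
    """Flag road segments whose roadkill count exceeds the upper
    (1 - alpha) quantile of a Poisson with the network-wide mean,
    mirroring one reading of the abstract's hotspot definition."""
    lam = np.mean(counts)
    threshold = stats.poisson.ppf(1 - alpha, lam)   # upper 95% limit
    return np.flatnonzero(counts > threshold), threshold

# Illustrative counts per 500-m segment (made-up numbers):
counts = np.array([2, 0, 5, 1, 14, 3, 0, 9, 2, 1])
idx, thr = hotspot_segments(counts)
print(f"threshold {thr:.0f}; hotspot segments: {idx}")
```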
NASA Technical Reports Server (NTRS)
Petty, Alek A.; Tsamados, Michel C.; Kurtz, Nathan T.
2017-01-01
Sea ice topography significantly impacts turbulent energy/momentum exchange, e.g., atmospheric (wind) drag, over Arctic sea ice. Unfortunately, observational estimates of this contribution to atmospheric drag variability are spatially and temporally limited. Here we present new estimates of the neutral atmospheric form drag coefficient over Arctic sea ice in early spring, using high-resolution Airborne Topographic Mapper elevation data from NASA's Operation IceBridge mission. We utilize a new three-dimensional ice topography data set and combine this with an existing parameterization scheme linking surface feature height and spacing to form drag. To be consistent with previous studies investigating form drag, we compare these results with those produced using a new linear profiling topography data set. The form drag coefficient from surface feature variability shows lower values [less than 0.5-1 × 10⁻³] in the Beaufort/Chukchi Seas, compared with higher values [greater than 0.5-1 × 10⁻³] in the more deformed ice regimes of the Central Arctic (north of Greenland and the Canadian Archipelago), which increase with coastline proximity. The results show moderate interannual variability, including a strong increase in the form drag coefficient from 2013 to 2014/2015 north of the Canadian Archipelago. The form drag coefficient estimates are extrapolated across the Arctic with Advanced Scatterometer satellite radar backscatter data, further highlighting the regional/interannual drag coefficient variability. Finally, we combine the results with existing parameterizations of form drag from floe edges (a function of ice concentration) and skin drag to produce, to our knowledge, the first pan-Arctic estimates of the total neutral atmospheric drag coefficient (in early spring) from 2009 to 2015.
2014-01-01
Disability and sexual orientation have been used by some to unjustly discriminate against differently-abled and differently-oriented minority groups. Because little is known about the disability rates of individuals in same-sex unions, this technical report presents disability rates by separating couples into same-sex-female, same-sex-male, different-sex-married, and different-sex-unmarried couples. Data from the American Community Survey (ACS) Public Use Microdata Sample (PUMS) 2009–2011 3-year file are utilized to produce estimates (and their standard errors) for the following six disability items: independent living; ambulatory; self-care; cognitive; hearing; and vision. Estimates of disability for selected geographies, i.e., Public Use Microdata Areas (PUMAs), are also presented, as is a figure showing a PUMA polygon. Qualitative comparisons seem to indicate that same-sex-female couples have higher rates of disability compared to the other three groups; that, in general, disability estimates for individuals in same-sex couples have a greater degree of uncertainty; and that disability-item allocations are most prevalent in same-sex couples. Because societal marginalization may increase through cumulative processes, public health professionals should continue to seek out ways to identify underserved populations.
Gu, Sol-A; Jun, Chanha; Joo, Jeong Chan; Kim, Seil; Lee, Seung Hwan; Kim, Yong Hwan
2014-05-10
Lactobacillus coryniformis is known to produce d-lactic acid as a dominant fermentation product at a cultivation temperature of approximately 30°C. However, considerable production of l-lactic acid is observed when the fermentation temperature is greater than 40°C. Because optically pure lactates are synthesized from pyruvate by the catalysis of chiral-specific d- or l-lactate dehydrogenase, the higher thermostability of l-LDHs is assumed to be one of the key factors decreasing the optical purity of d-lactic acid produced from L. coryniformis at high temperature. To verify this hypothesis, two types of d-ldh genes and six types of l-ldh genes based on the genomic information of L. coryniformis were synthesized and expressed in Escherichia coli. Among the LDHs tested, five LDHs showed activity and were used to construct polyclonal antibodies. d-LDH1, l-LDH2, and l-LDH3 were found to be expressed in L. coryniformis by Western blotting analysis. The half-life values (t1/2) of these LDHs at 40°C were estimated to be 10.50, 41.76, and 2311 min, and the T50(10) values were 39.50, 39.90, and 58.60°C, respectively. In addition, the Tm values were 36.0, 41.0, and 62.4°C, respectively, which indicates that l-LDH has greater thermostability than d-LDH. The higher thermostability of l-LDHs compared with that of d-LDH1 may be a major reason why the enantiopurity of d-lactic acid is decreased at high fermentation temperatures. The key enzymes characterized here suggest a direction for the design of genetically modified lactic acid bacteria to produce optically pure d-lactic acid.
Dachery, Bruna; Veras, Flávio Fonseca; Dal Magro, Lucas; Manfroi, Vitor; Welke, Juliane Elisa
2017-11-01
The goals of this study were (i) to verify the effect of the steam extraction used in juice production and of the stages of vinification on the ochratoxin A (OTA) levels found in naturally contaminated grapes, and (ii) to evaluate the risk of exposure to this toxin when the daily consumption of juice and wine recommended to prevent cardiovascular disease is followed. OTA-producing fungi were isolated from Cabernet Sauvignon, Moscato Itálico and Concord grapes harvested from the same vineyard and intended for the production of red wine, white wine and juice, respectively. The highest levels of this toxin were found in the Concord grapes used for juice production. Although a greater reduction in OTA levels occurred during juice production (73%) than during winemaking (66 and 44%, for red and white, respectively), the estimated OTA exposure through juice was higher than the tolerable intake established for this toxin by JECFA. The risk associated with juice consumption, rather than wine, can be explained by (i) the higher OTA levels found in Concord must compared to Cabernet and Moscato musts, indicating that Concord grapes appear to be more susceptible to OTA production by toxigenic fungi; and (ii) the daily recommended juice consumption being higher than that proposed for red wine.
Macroecological and macroevolutionary patterns of leaf herbivory across vascular plants.
Turcotte, Martin M; Davies, T Jonathan; Thomsen, Christina J M; Johnson, Marc T J
2014-07-22
The consumption of plants by animals underlies important evolutionary and ecological processes in nature. Arthropod herbivory evolved approximately 415 Ma and the ensuing coevolution between plants and herbivores is credited with generating much of the macroscopic diversity on the Earth. In contemporary ecosystems, herbivory provides the major conduit of energy from primary producers to consumers. Here, we show that when averaged across all major lineages of vascular plants, herbivores consume 5.3% of the leaf tissue produced annually by plants, whereas previous estimates are up to 3.8× higher. This result suggests that for many plant species, leaf herbivory may play a smaller role in energy and nutrient flow than currently thought. Comparative analyses of a diverse global sample of 1058 species across 2085 populations reveal that models of stabilizing selection best describe rates of leaf consumption, and that rates vary substantially within and among major plant lineages. A key determinant of this variation is plant growth form, where woody plant species experience 64% higher leaf herbivory than non-woody plants. Higher leaf herbivory in woody species supports a key prediction of the plant apparency theory. Our study provides insight into how a long history of coevolution has shaped the ecological and evolutionary relationships between plants and herbivores.
NASA Astrophysics Data System (ADS)
Niyogi, Dev K.; Koren, Mark; Arbuckle, Chris J.; Townsend, Colin R.
2007-02-01
When native grassland catchments are converted to pasture, the main effects on stream physicochemistry are usually related to increased nutrient concentrations and fine-sediment input. We predicted that increasing nutrient concentrations would produce a subsidy-stress response (where several ecological metrics first increase and then decrease at higher concentrations) and that increasing sediment cover of the streambed would produce a linear decline in stream health. We predicted that the net effect of agricultural development, estimated as percentage pastoral land cover, would have a nonlinear subsidy-stress or threshold pattern. In our suite of 21 New Zealand streams, epilithic algal biomass and invertebrate density and biomass were higher in catchments with a higher proportion of pastoral land cover, responding mainly to increased nutrient concentration. Invertebrate species richness had a linear, negative relationship with fine-sediment cover but was unrelated to nutrients or pastoral land cover. In accord with our predictions, several invertebrate stream health metrics (Ephemeroptera-Plecoptera-Trichoptera density and richness, New Zealand Macroinvertebrate Community Index, and percent abundance of noninsect taxa) had nonlinear relationships with pastoral land cover and nutrients. Most invertebrate health metrics usually had linear negative relationships with fine-sediment cover. In this region, stream health, as indicated by macroinvertebrates, primarily followed a subsidy-stress pattern with increasing pastoral development; management of these streams should focus on limiting development beyond the point where negative effects are seen.
Price responsiveness of demand for cigarettes: does rationality matter?
Laporte, Audrey
2006-01-01
Meta-analysis is applied to aggregate-level studies that model the demand for cigarettes using static, myopic, or rational addiction frameworks in an attempt to synthesize key findings in the literature and to identify determinants of the variation in reported price elasticity estimates across studies. The results suggest that the rational addiction framework produces statistically similar estimates to the static framework but that studies that use the myopic framework tend to report more elastic price effects. Studies that applied panel data techniques or controlled for cross-border smuggling reported more elastic price elasticity estimates, whereas the use of instrumental variable techniques and time trends or time dummy variables produced less elastic estimates. The finding that myopic models produce different estimates than either of the other two model frameworks underscores that careful attention must be given to time series properties of the data.
Kinetics of ion and prompt electron emission from laser-produced plasma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farid, N.; Key Laboratory of Materials Modification by Laser, Ion and Electron Beams, School of Physics and Optical Engineering, Dalian University of Technology, Dalian; Harilal, S. S.
2013-07-15
We investigated ion emission dynamics of laser-produced plasma from several elements, comprised of metals and non-metals (C, Al, Si, Cu, Mo, Ta, W), under vacuum conditions using a Faraday cup. The estimated ion flux for the various targets studied showed a decreasing tendency with increasing atomic mass. For metals, the ion flux is found to be a function of sublimation energy. A comparison of temporal ion profiles of various materials showed that only high-Z elements exhibited multiple structures in the ion time-of-flight profile, indicated by the observation of higher peak kinetic energies, which were absent for low-Z element targets. The slower ions were seen regardless of the atomic number of the target material and propagated with a kinetic energy of 1–5 keV, while the fast ions observed in high-Z materials possessed significantly higher energies. A systematic study of plasma properties employing fast photography, time- and space-resolved optical emission spectroscopy, and electron analysis showed that different mechanisms exist for generating ions in laser ablation plumes. The origin of high kinetic energy ions is related to prompt electron emission from high-Z targets.
Young, Brent; Conti, David V; Dean, Matthew D
2013-12-01
In a variety of taxa, males deploy alternative reproductive tactics to secure fertilizations. In many species, small "sneaker" males attempt to steal fertilizations while avoiding encounters with larger, more aggressive, dominant males. Sneaker males usually face a number of disadvantages, including reduced access to females and the higher likelihood that upon ejaculation, their sperm face competition from other males. Nevertheless, sneaker males represent an evolutionarily stable strategy under a wide range of conditions. Game theory suggests that sneaker males compensate for these disadvantages by investing disproportionately in spermatogenesis, by producing more sperm per unit body mass (the "fair raffle") and/or by producing higher quality sperm (the "loaded raffle"). Here, we test these models by competing sperm from sneaker "jack" males against sperm from dominant "hooknose" males in Chinook salmon. Using two complementary approaches, we reject the fair raffle in favor of the loaded raffle and estimate that jack males were ∼1.35 times as likely as hooknose males to fertilize eggs under controlled competitive conditions. Interestingly, the direction and magnitude of this skew in paternity shifted according to individual female egg donors, suggesting cryptic female choice could moderate the outcomes of sperm competition in this externally fertilizing species.
NASA Astrophysics Data System (ADS)
Potter, Christopher; Brooks Genovese, Vanessa; Klooster, Steven; Bobo, Matthew; Torregrosa, Alicia
To produce a new daily record of gross carbon emissions from biomass burning events and post-burning decomposition fluxes in the states of the Brazilian Legal Amazon (Instituto Brasileiro de Geografia e Estatistica (IBGE), 1991. Anuario Estatistico do Brasil, Vol. 51. Rio de Janeiro, Brazil, pp. 1-1024), we used vegetation greenness estimates from satellite images as inputs to a terrestrial ecosystem production model. This carbon allocation model generates new estimates of regional aboveground vegetation biomass at 8-km resolution. The modeled biomass product is then combined for the first time with fire pixel counts from the advanced very high-resolution radiometer (AVHRR) to overlay regional burning activities in the Amazon. Results from our analysis indicate that carbon emission estimates from annual region-wide sources of deforestation and biomass burning in the early 1990s are apparently three to five times higher than reported in previous studies for the Brazilian Legal Amazon (Houghton et al., 2000. Nature 403, 301-304; Fearnside, 1997. Climatic Change 35, 321-360), i.e., studies which implied that the Legal Amazon region tends toward a net-zero annual source of terrestrial carbon. In contrast, our analysis implies that the total source fluxes over the entire Legal Amazon region range from 0.2 to 1.2 Pg C yr⁻¹, depending strongly on annual rainfall patterns. The reasons for our higher burning emission estimates are (1) the use of combustion fractions typically measured during Amazon forest burning events for computing carbon losses, (2) a more detailed geographic distribution of vegetation biomass and daily fire activity for the region, and (3) the inclusion of fire effects in extensive areas of the Legal Amazon covered by open woodland, secondary forests, savanna, and pasture vegetation. The total area of rainforest estimated annually to be deforested did not differ substantially among the previous analyses cited and our own.
España-Romero, Vanesa; Golubic, Rajna; Martin, Kathryn R; Hardy, Rebecca; Ekelund, Ulf; Kuh, Diana; Wareham, Nicholas J; Cooper, Rachel; Brage, Soren
2014-01-01
To compare physical activity (PA) subcomponents from EPIC Physical Activity Questionnaire (EPAQ2) and combined heart rate and movement sensing in older adults. Participants aged 60-64y from the MRC National Survey of Health and Development in Great Britain completed EPAQ2, which assesses self-report PA in 4 domains (leisure time, occupation, transportation and domestic life) during the past year and wore a combined sensor for 5 consecutive days. Estimates of PA energy expenditure (PAEE), sedentary behaviour, light (LPA) and moderate-to-vigorous PA (MVPA) were obtained from EPAQ2 and combined sensing and compared. Complete data were available in 1689 participants (52% women). EPAQ2 estimates of PAEE and MVPA were higher than objective estimates and sedentary time and LPA estimates were lower [bias (95% limits of agreement) in men and women were 32.3 (-61.5 to 122.6) and 29.0 (-39.2 to 94.6) kJ/kg/day for PAEE; -4.6 (-10.6 to 1.3) and -6.0 (-10.9 to -1.0) h/day for sedentary time; -171.8 (-454.5 to 110.8) and -60.4 (-367.5 to 246.6) min/day for LPA; 91.1 (-159.5 to 341.8) and 55.4 (-117.2 to 228.0) min/day for MVPA]. There were significant positive correlations between all self-reported and objectively assessed PA subcomponents (rho= 0.12 to 0.36); the strongest were observed for MVPA (rho = 0.30 men; rho = 0.36 women) and PAEE (rho = 0.26 men; rho = 0.25 women). EPAQ2 produces higher estimates of PAEE and MVPA and lower estimates of sedentary and LPA than objective assessment. However, both methodologies rank individuals similarly, suggesting that EPAQ2 may be used in etiological studies in this population.
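The bias and 95% limits of agreement quoted in brackets above are standard Bland-Altman quantities; a minimal sketch follows (with invented PAEE values, not the MRC survey's data).

```python
import numpy as np

def bland_altman(self_report, objective):
    """Bias and 95% limits of agreement between two measurement methods,
    as used for the PAEE and sedentary-time comparisons above."""
    d = np.asarray(self_report) - np.asarray(objective)
    bias = d.mean()
    sd = d.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Illustrative PAEE values (kJ/kg/day), purely made up:
paee_epaq = [55.1, 60.3, 48.9, 72.0, 66.4]     # self-report
paee_sensor = [40.2, 35.8, 30.1, 45.5, 38.9]   # combined sensing
bias, loa = bland_altman(paee_epaq, paee_sensor)
print(f"bias {bias:.1f}, 95% LoA {loa[0]:.1f} to {loa[1]:.1f}")
```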
Genetic parameter estimation for pre- and post-weaning traits in Brahman cattle in Brazil.
Vargas, Giovana; Buzanskas, Marcos Eli; Guidolin, Diego Gomes Freire; Grossi, Daniela do Amaral; Bonifácio, Alexandre da Silva; Lôbo, Raysildo Barbosa; da Fonseca, Ricardo; Oliveira, João Ademir de; Munari, Danísio Prado
2014-10-01
Beef cattle producers in Brazil use body weight traits as breeding program selection criteria due to their great economic importance. The objectives of this study were to evaluate different animal models, estimate genetic parameters, and define the most fitting model for Brahman cattle body weight standardized at 120 (BW120), 210 (BW210), 365 (BW365), 450 (BW450), and 550 (BW550) days of age. To estimate genetic parameters, single-, two-, and multi-trait analyses were performed using the animal model. The likelihood ratio test was verified between all models. For BW120 and BW210, additive direct genetic, maternal genetic, maternal permanent environment, and residual effects were considered, while for BW365 and BW450, additive direct genetic, maternal genetic, and residual effects were considered. Finally, for BW550, additive direct genetic and residual effects were considered. Estimates of direct heritability for BW120 were similar in all analyses; however, for the other traits, multi-trait analysis resulted in higher estimates. The maternal heritability and proportion of maternal permanent environmental variance to total variance were minimal in multi-trait analyses. Genetic, environmental, and phenotypic correlations were of high magnitude between all traits. Multi-trait analyses would aid in the parameter estimation for body weight at older ages because they are usually affected by a lower number of animals with phenotypic information due to culling and mortality.
Estimating chronic disease rates in Canada: which population-wide denominator to use?
Ellison, J; Nagamuthu, C; Vanderloo, S; McRae, B; Waters, C
2016-10-01
Chronic disease rates are produced from the Public Health Agency of Canada's Canadian Chronic Disease Surveillance System (CCDSS) using administrative health data from provincial/territorial health ministries. Denominators for these rates are based on estimates of populations derived from health insurance files. However, these data may not be accessible to all researchers. Another source for population size estimates is the Statistics Canada census. The purpose of our study was to calculate the major differences between the CCDSS and Statistics Canada's population denominators and to identify the sources or reasons for the potential differences between these data sources. We compared the 2009 denominators from the CCDSS and Statistics Canada. The CCDSS denominator was adjusted for the growth components (births, deaths, emigration and immigration) from Statistics Canada's census data. The unadjusted CCDSS denominator was 34 429 804, 3.2% higher than Statistics Canada's estimate of population in 2009. After the CCDSS denominator was adjusted for the growth components, the difference between the two estimates was reduced to 431 323 people, a difference of 1.3%. The CCDSS overestimates the population relative to Statistics Canada overall. The largest difference between the two estimates was from the migrant growth component, while the smallest was from the emigrant component. By using data descriptions by data source, researchers can make decisions about which population to use in their calculations of disease frequency.
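The reported figures are internally consistent, which a few lines of arithmetic can verify; note that the Statistics Canada total below is implied from the reported 3.2% gap rather than quoted directly.

```python
# Checking the abstract's arithmetic (only the CCDSS total and the
# post-adjustment gap are reported; the StatCan total is implied).
ccdss = 34_429_804
statcan = ccdss / 1.032              # implied StatCan estimate (~33.36M)
gap_after_adjust = 431_323

print(f"implied StatCan population: {statcan:,.0f}")
print(f"initial gap: {ccdss - statcan:,.0f} ({ccdss / statcan - 1:.1%})")
print(f"gap after growth-component adjustment: {gap_after_adjust:,} "
      f"({gap_after_adjust / statcan:.1%})")   # ~1.3%, as reported
```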
Fan, Chong; Wu, Chaoyun; Li, Grand; Ma, Jun
2017-01-01
To solve the problem of inaccuracy when estimating the point spread function (PSF) of the ideal original image in traditional projection onto convex sets (POCS) super-resolution (SR) reconstruction, this paper presents an improved POCS SR algorithm based on PSF estimation of low-resolution (LR) remote sensing images. The proposed algorithm can improve the spatial resolution of the image and benefit agricultural crop visual interpretation. The PSF of the high-resolution (HR) image is unknown in reality. Therefore, analysis of the relationship between the PSF of the HR image and the PSF of the LR image is important for estimating the PSF of the HR image by using multiple LR images. In this study, the linear relationship between the PSFs of the HR and LR images can be proven. In addition, a novel slant knife-edge method is employed, which can improve the accuracy of the PSF estimation of LR images. Finally, the proposed method is applied to reconstruct airborne digital sensor 40 (ADS40) three-line array images and the overlapped areas of two adjacent GF-2 images by embedding the estimated PSF of the HR image into the original POCS SR algorithm. Experimental results show that the proposed method yields higher quality reconstructed images than the blind SR method and the bicubic interpolation method.
Yield of acid curd cheese produced from cow's milk from different lactation periods.
Salamończyk, Ewa; Młynek, Krzysztof; Guliński, Piotr; Zawadzka, Wiesława
2017-01-01
Milk production intensification has led in many countries, including Poland, to increased milk yields per cow. A higher milk yield resulted in changes in cow productivity, including extended lactations. There is a paucity of information on the quality of milk harvested during the last months of lactations exceeding 10 months. Cheese yield ("cheese expenditure") is an important production parameter, since it reflects how much of the dry matter in the processed milk is recovered, which in turn affects the economics of production. The aim of the study was to determine the influence of the lactation period (standard lactation; extended lactation phase) on the yield of acid curd cheese. The relationship between the total protein content and acidity of fresh milk collected in two separate periods of lactation and the yield of acid cheese was also evaluated. The study included 1384 samples of milk collected from Polish Holstein-Friesian cows of the Black-White variety. The basic chemical composition of fresh milk and of acid curd cheese produced in the laboratory was analyzed. The cheese milk yield was evaluated on the basis of the quantity of the resulting curd mass. According to our estimates, under laboratory conditions 100 kg of milk from an average cow in the population produced an estimated 20.1 kg of curd cheese. The basic chemical composition of raw milk, which differed by period of lactation, showed higher dry matter, fat and protein content in milk obtained during the extended phase of lactation compared to milk from standard lactation. It was found that lower titratable acidity of fresh milk was associated with a higher yield of cheese curd. This difference ranged from 1.76 kg (for milk from cows milked during the extended lactation phase) to 2.72 kg per 100 kg of cheese milk (for milk from standard lactation). Thus, the optimum level of titratable acidity of milk for cheese yield is 6.0–7.5°SH. Most samples with the highest yields of acid curd cheese (>20%) were obtained from milk collected in the period from day 306 until the end of lactation (60.54%).
Economic analysis of Mycobacterium avium subspecies paratuberculosis vaccines in dairy herds.
Cho, J; Tauer, L W; Schukken, Y H; Gómez, M I; Smith, R L; Lu, Z; Grohn, Y T
2012-04-01
Johne's disease, or paratuberculosis, is a chronic infectious enteric disease of ruminants, caused by infection with Mycobacterium avium ssp. paratuberculosis (MAP). Given the absence of a fail-safe method of prevention or a cure, Johne's disease can inflict significant economic loss on the US dairy industry, with an estimated annual cost of over $200 million. Currently available MAP control strategies include management measures to improve hygiene, culling MAP serologic- or fecal-positive adult cows, and vaccination. Although the first 2 control strategies have been reported to be effective in reducing the incidence of MAP infection, the changes in herd management needed to conduct these control strategies require significant effort on the part of the dairy producer. On the other hand, vaccination is relatively simple to apply and requires minor changes in herd management. Despite these advantages, only 5% of US dairy operations use vaccination to control MAP. This low level of adoption is due to limited information on the cost-effectiveness and efficacy of vaccination, and to some important inherent drawbacks associated with current MAP vaccines. This study investigates the epidemiological effects and economic values of MAP vaccines in various stages of development. We create scenarios for the potential epidemiological effects of MAP vaccines, and then estimate economically justifiable monetary values at which vaccines become economically beneficial to dairy producers, such that the net present value (NPV) of a farm's net cash flow can be higher than the NPV of a farm using no control or alternative nonvaccine controls. Any vaccination with either low or high efficacy considered in this study yielded a higher NPV compared with no MAP control. Moreover, high-efficacy vaccines generated an even higher NPV compared with alternative controls, making vaccination economically attractive. Two high-efficacy vaccines were particularly effective in MAP control and NPV maximization. One was a high-efficacy vaccine that reduced susceptibility to MAP infection. The other was a high-efficacy vaccine that had multiple efficacies on the dynamics of MAP infection and disease progress. Only one high-efficacy vaccine, in which the vaccine is targeted at reducing MAP shedding and the number of clinical cases, was not economically beneficial to dairy producers compared with an alternative nonvaccine control, when herds were highly infected with MAP.
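The NPV comparison that drives these conclusions is conceptually simple; the sketch below uses invented cash flows and an assumed 5% discount rate, whereas the study's herd-level simulation economics are far richer.

```python
# Minimal NPV comparison sketch; all cash flows and the discount
# rate are invented for illustration, not taken from the study.
def npv(cashflows, r=0.05):
    """Net present value of a list of annual net cash flows."""
    return sum(cf / (1 + r) ** t for t, cf in enumerate(cashflows))

years = 10
no_control = [100_000 - 20_000] * years          # MAP losses persist
vaccination = [100_000 - 6_000 - 3_000] * years  # lower losses + vaccine cost

print(f"NPV, no control : {npv(no_control):,.0f}")
print(f"NPV, vaccination: {npv(vaccination):,.0f}")
# A vaccine is economically justifiable when its NPV exceeds the best
# alternative; a breakeven vaccine price follows from equating the two.
```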
FOOTPRINT is a screening model used to estimate the length and surface area of benzene, toluene, ethylbenzene, and xylene (BTEX) plumes in groundwater, produced from a gasoline spill that contains ethanol.
Robinson, Mark; Shipton, Deborah; Walsh, David; Whyte, Bruce; McCartney, Gerry
2015-01-07
Regional differences in population levels of alcohol-related harm exist across Great Britain, but these are not entirely consistent with differences in population levels of alcohol consumption. This incongruence may be due to the use of self-report surveys to estimate consumption. Survey data are subject to various biases and typically produce consumption estimates much lower than those based on objective alcohol sales data. However, sales data have never been used to estimate regional consumption within Great Britain (GB). This ecological study uses alcohol retail sales data to provide novel insights into regional alcohol consumption in GB, and to explore the relationship between alcohol consumption and alcohol-related mortality. Alcohol sales estimates derived from electronic sales, delivery records and retail outlet sampling were obtained. The volume of pure alcohol sold was used to estimate per adult consumption, by market sector and drink type, across eleven GB regions in 2010-11. Alcohol-related mortality rates were calculated for the same regions and a cross-sectional correlation analysis between consumption and mortality was performed. Per adult consumption in northern England was above the GB average and characterised by high beer sales. A high level of consumption in South West England was driven by on-trade sales of cider and spirits and off-trade wine sales. Scottish regions had substantially higher spirits sales than elsewhere in GB, particularly through the off-trade. London had the lowest per adult consumption, attributable to lower off-trade sales across most drink types. Alcohol-related mortality was generally higher in regions with higher per adult consumption. The relationship was weakened by the South West and Central Scotland regions, which had the highest consumption levels, but discordantly low and very high alcohol-related mortality rates, respectively. This study provides support for the ecological relationship between alcohol-related mortality and alcohol consumption. The synthesis of knowledge from a combination of sales, survey and mortality data, as well as primary research studies, is key to ensuring that regional alcohol consumption, and its relationship with alcohol-related harms, is better understood.
Melzer, David; Osborne, Nicholas J; Henley, William E; Cipelli, Riccardo; Young, Anita; Money, Cathryn; McCormack, Paul; Luben, Robert; Khaw, Kay-Tee; Wareham, Nicholas J; Galloway, Tamara S
2012-03-27
The endocrine-disrupting chemical bisphenol A (BPA) is widely used in food and beverage packaging. Higher urinary BPA concentrations were cross-sectionally associated with heart disease in National Health and Nutrition Examination Survey (NHANES) 2003-2004 and NHANES 2005-2006, independent of traditional risk factors. We included 758 incident coronary artery disease (CAD) cases and 861 controls followed for 10.8 years from the European Prospective Investigation of Cancer-Norfolk UK. Respondents aged 40 to 74 years and free of CAD, stroke, or diabetes mellitus provided baseline spot urine samples. Urinary BPA concentrations (median value, 1.3 ng/mL) were low. Per-SD (4.56 ng/mL) increases in urinary BPA concentration were associated with incident CAD in age-, sex-, and urinary creatinine-adjusted models (n=1919; odds ratio=1.13; 95% confidence interval, 1.02-1.24; P=0.017). With CAD risk factor adjustment (including education, occupational social class, body mass index category, systolic blood pressure, lipid concentrations, and exercise), the estimate was similar but narrowly missed 2-sided significance (n=1744; odds ratio=1.11; 95% confidence interval, 1.00-1.23; P=0.058). Sensitivity analyses with the fully adjusted model, excluding those with early CAD (<3-year follow-up), body mass index >30, or abnormal renal function or with additional adjustment for vitamin C, C-reactive protein, or alcohol consumption, all produced similar estimates, and all showed associations at P≤0.05. Associations between higher BPA exposure (reflected in higher urinary concentrations) and incident CAD during >10 years of follow-up showed trends similar to previously reported cross-sectional findings in the more highly exposed NHANES respondents. Further work is needed to accurately estimate the prospective exposure-response curve and to establish the underlying mechanisms.
Knochenmus, L.A.; Bowman, Geronia
1998-01-01
The intermediate aquifer system is an important water source in Sarasota County, Florida, because the quality of water in it is usually better than that in the underlying Upper Floridan aquifer. The intermediate aquifer system consists of a group of up to three water-producing zones separated by less-permeable units that restrict the vertical movement of ground water between zones. The diverse lithology that makes up the intermediate aquifer system reflects the variety of depositional environments that occurred during the late Oligocene and Miocene epochs. Slight changes in the depositional environment resulted in aquifer heterogeneity, creating both localized connections between water-producing zones and abrupt terminations of water-producing zones that are not well documented. Aquifer heterogeneity results in vertical and areal variability in hydraulic and water-quality properties. The uppermost water-producing zone is designated producing zone 1 but is not extensively used because of its limited production capability and limited areal extent. The second water-producing zone is designated producing zone 2, and most of the domestic- and irrigation-supply wells in the area are open to this zone. Additionally, producing zone 2 is utilized for public supply in southern coastal areas of Sarasota County. Producing zone 3 is the lowermost and most productive water-producing zone in the intermediate aquifer system. Public-supply well fields serving the cities of Sarasota and Venice, as well as the Plantation and Mabry Carlton Reserve well fields, utilize producing zone 3. Heads within the intermediate aquifer system generally increase with aquifer depth. However, localized head-gradient reversals occur in the study area, coinciding with sites of intense ground-water withdrawals. Heads in producing zones 1, 2, and 3 range from 1 to 23, 0.2 to 34, and 7 to 42 feet above sea level, respectively. Generally, an upward head gradient exists between producing zones 3 and 2. However, an upward head gradient between producing zones 2 and 1 does not consistently occur throughout Sarasota County, probably the result of greater ground-water withdrawals from producing zone 2 than from producing zone 1. The transmissivity of the intermediate aquifer system is spatially variable. Specific-capacity data from selected wells penetrating producing zones 2 and 3 were used to estimate transmissivity. Estimated transmissivity values for producing zones 2 and 3 range from about 100 to 26,000 feet squared per day and from about 1,300 to 6,200 feet squared per day, respectively. Because the capacity of specific water-producing zones is highly variable from site to site, estimating the performance of a specific water-producing zone as a water resource is difficult. Water samples collected during the study were analyzed for major-ion concentrations. Generally, bicarbonate type water from rock interaction occurs in northern Sarasota County; enriched calcium-magnesium-sulfate type water from deeper aquifers occurs in central Sarasota County; and sodium-chloride type water from saltwater mixing occurs in southern Sarasota County. In some areas of northern Sarasota County, the major-ion concentrations in water are lower in producing zone 2 than in producing zone 1. Major-ion concentrations in water are higher in producing zone 3 throughout the study area. A major objective of the study was to evaluate hydraulic and water-quality data to determine distinctions that could be used to characterize a particular producing zone.
However, data indicate that both hydraulic and water-quality properties are highly variable within and between zones, and are more related to the degree of connection between and areal extent of water-producing zones than to aquifer depth and distance from the coast.
Cross-sectional study of equol producer status and cognitive impairment in older adults.
Igase, Michiya; Igase, Keiji; Tabara, Yasuharu; Ohyagi, Yasumasa; Kohara, Katsuhiko
2017-11-01
It is well known that consumption of isoflavones reduces the risk of cardiovascular disease. However, the effectiveness of isoflavones in preventing dementia is controversial. A number of intervention studies have produced conflicting results. One possible reason is that the ability to produce equol, a metabolite of a soy isoflavone, differs greatly among individuals. In addition to existing data, we sought to confirm whether an apparent beneficial effect on cognitive function is observed after soy consumption in equol producers compared with non-producers. The present study was a cross-sectional, observational study of 152 (male/female = 61/91, mean age 69.2 ± 9.2 years) individuals. Participants were divided into two groups according to equol production status, which was determined using urine samples collected after a soy challenge test. Cognitive function was assessed using two computer-based questionnaires (touch panel-type dementia assessment scale [TDAS] and mild cognitive impairment [MCI] screen). Overall, 60 (40%) of 152 participants were equol producers. Both TDAS scores and the prevalence of MCI were significantly higher in the equol non-producer group than in the producer group. In univariate analyses, TDAS significantly correlated with age, serum creatinine, estimated glomerular filtration rate and low-density lipoprotein cholesterol. In multiple regression analysis using TDAS as a dependent variable, equol non-producer status (β = 0.236, P = 0.005) was selected as an independent variable. In addition, multiple logistic regression analysis to assess the presence of MCI showed that being an equol non-producer was an independent risk factor for MCI (odds ratio 3.961). Compared with equol non-producers, equol producers showed an apparent beneficial effect on cognitive function after soy intake. Geriatr Gerontol Int 2017; 17: 2103-2108. © 2017 Japan Geriatrics Society.
Global estimates of country health indicators: useful, unnecessary, inevitable?
AbouZahr, Carla; Boerma, Ties; Hogan, Daniel
2017-01-01
Background: The MDG era relied on global health estimates to fill data gaps and ensure temporal and cross-country comparability in reporting progress. Monitoring the Sustainable Development Goals will present new challenges, requiring enhanced capacities to generate, analyse, interpret and use country-produced data. Objective: To summarize the development of global health estimates and discuss their utility and limitations from global and country perspectives. Design: Descriptive paper based on findings of intercountry workshops, reviews of literature and synthesis of experiences. Results: Producers of global health estimates focus on the technical soundness of estimation methods and comparability of the results across countries and over time. By contrast, country users are more concerned about the extent of their involvement in the estimation process and hesitate to buy into estimates derived using methods their technical staff cannot explain and that differ from national data sources. Quantitative summaries of uncertainty may be of limited practical use in policy discussions where decisions need to be made about what to do next. Conclusions: Greater transparency and involvement of country partners in the development of global estimates will help improve ownership, strengthen country capacities for data production and use, and reduce reliance on externally produced estimates. PMID:28532307
Njage, Patrick Murigu Kamau; Sawe, Chemutai Tonui; Onyango, Cecilia Moraa; Habib, I; Njagi, Edmund Njeru; Aerts, Marc; Molenberghs, Geert
2017-01-01
Current approaches such as inspections, audits, and end product testing cannot detect the distribution and dynamics of microbial contamination. Despite the implementation of current food safety management systems, foodborne outbreaks linked to fresh produce continue to be reported. A microbial assessment scheme and statistical modeling were used to systematically assess the microbial performance of core control and assurance activities in five Kenyan fresh produce processing and export companies. Generalized linear mixed models and correlated random-effects joint models for multivariate clustered data followed by empirical Bayes estimates enabled the analysis of the probability of contamination across critical sampling locations (CSLs) and factories as a random effect. Salmonella spp. and Listeria monocytogenes were not detected in the final products. However, none of the processors attained the maximum safety level for environmental samples. Escherichia coli was detected in five of the six CSLs, including the final product. Among the processing-environment samples, the hand or glove swabs of personnel revealed a higher level of predicted contamination with E. coli, and 80% of the factories were E. coli positive at this CSL. End products showed higher predicted probabilities of having the lowest level of food safety compared with raw materials. The final products were E. coli positive despite the raw materials being E. coli negative for 60% of the processors. There was a higher probability of contamination with coliforms in water at the inlet than in the final rinse water. Four (80%) of the five assessed processors had poor to unacceptable counts of Enterobacteriaceae on processing surfaces. Personnel-, equipment-, and product-related hygiene measures to improve the performance of preventive and intervention measures are recommended.
Time-shifted synchronization of chaotic oscillator chains without explicit coupling delays.
Blakely, Jonathan N; Stahl, Mark T; Corron, Ned J
2009-12-01
We examine chains of unidirectionally coupled oscillators in which time-shifted synchronization occurs without explicit delays in the coupling. In numerical simulations and in an experimental system of electronic oscillators, we examine the time shift and the degree of distortion (primarily in the form of attenuation) of the waveforms of the oscillators located far from the drive oscillator. Surprisingly, under weak coupling we observe minimal attenuation in spite of a significant total time shift. In contrast, at higher coupling strengths the observed attenuation increases dramatically and approaches the value predicted by an analytically derived estimate. In this regime, we verify directly that generalized synchronization is maintained over the entire chain length despite severe attenuation. These results suggest that weak coupling generally may produce higher quality synchronization in systems for which truly identical synchronization is not possible.
Fertility, Human Capital, and Economic Growth over the Demographic Transition
Mason, Andrew
2009-01-01
Do low fertility and population aging lead to economic decline if couples have fewer children, but invest more in each child? By addressing this question, this article extends previous work in which the authors show that population aging leads to an increased demand for wealth that can, under some conditions, lead to increased capital per worker and higher per capita consumption. This article is based on an overlapping generations (OLG) model which highlights the quantity–quality tradeoff and the links between human capital investment and economic growth. It incorporates new national level estimates of human capital investment produced by the National Transfer Accounts project. Simulation analysis is employed to show that, even in the absence of the capital dilution effect, low fertility leads to higher per capita consumption through human capital accumulation, given plausible model parameters. PMID:20495605
Low-Cost energy contraption design using playground seesaw
NASA Astrophysics Data System (ADS)
Banlawe, I. A. P.; Acosta, N. J. E. L.
2017-05-01
The study was conducted at Western Philippines University, San Juan, Aborlan, Palawan. The study used the mechanical motion of a playground seesaw as a means to produce electrical energy. The study aimed to design a low-cost prototype energy contraption based on a playground seesaw, built from locally available and recycled materials; to measure the voltage, current and power outputs produced under different conditions; and to estimate the cost of the prototype. Using the principle of pneumatics, two hand air pumps were mounted at the two ends of the seesaw, and the up-and-down motion of the seesaw produces compressed air that is used to rotate a DC motor and generate electrical energy. This electricity can be utilized for powering basic or low-power appliances. There were two trials of testing; each trial tested different pressure levels of the air tank and different openings of the on-off valve (full open and half open) when the compressed air was released. Results showed that all pressure levels at full open produced significantly higher voltage than at half open. However, the mean values of the current and power produced at all pressure levels at full and half open showed negligible variation. These results signify that the energy contraption using a playground seesaw is a viable alternative source of electrical energy in playgrounds, parks and other places and can be used as an auxiliary or back-up source of electricity.
Thorup, V M; Edwards, D; Friggens, N C
2012-04-01
Precise energy balance estimates for individual cows are of great importance to monitor health, reproduction, and feed management. Energy balance is usually calculated as energy input minus output (EB(inout)), requiring measurements of feed intake and energy output sources (milk, maintenance, activity, growth, and pregnancy). Except for milk yield, direct measurements of the other sources are difficult to obtain in practice, and estimates contain considerable error sources, limiting on-farm use. Alternatively, energy balance can be estimated from body reserve changes (EB(body)) using body weight (BW) and body condition score (BCS). Automated weighing systems exist and new technology performing semi-automated body condition scoring has emerged, so frequent automated BW and BCS measurements are feasible. We present a method to derive individual EB(body) estimates from frequently measured BW and BCS and evaluate the performance of the estimated EB(body) against the traditional EB(inout) method. From 76 Danish Holstein and Jersey cows, parity 1 or 2+, on a glycerol-rich or a whole grain-rich total mixed ration, BW was measured automatically at each milking. The BW was corrected for the weight of milk produced and for gutfill. Changes in BW and BCS were used to calculate changes in body protein, body lipid, and EB(body) during the first 150 d in milk. The EB(body) was compared with the traditional EB(inout) by isolating the term within EB(inout) associated with most uncertainty, that is, feed energy content (FEC): FEC = (EB(body) + E(milk) + E(maintenance) + E(activity))/dry matter intake, where the energy requirements are for milk produced (E(milk)), maintenance (E(maintenance)), and activity (E(activity)). Estimated FEC agreed well with FEC values derived from tables (the mean estimate was 0.21 MJ of effective energy/kg of dry matter, or 2.2%, higher than the mean table value). Further, the FEC profile did not suggest systematic bias in EB(body) with stage of lactation. The EB(body) estimated from daily BW, adjusted for milk and meal-related gutfill and combined with frequent BCS, can provide a successful tool. This offers a pragmatic solution to on-farm calculation of energy balance with the perspective of improved precision under commercial conditions. Copyright © 2012 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
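The FEC check above is plain arithmetic once the energy terms are in hand. A minimal sketch with invented daily values (MJ of effective energy, kg of dry matter), not data from the trial:

```python
# Sketch of the feed-energy-content check described above; all input
# values are invented placeholders for one cow-day.

def feed_energy_content(eb_body, e_milk, e_maintenance, e_activity, dmi):
    """FEC = (EB(body) + E(milk) + E(maintenance) + E(activity)) / DMI."""
    return (eb_body + e_milk + e_maintenance + e_activity) / dmi

fec = feed_energy_content(eb_body=-10.0,   # negative: mobilizing body reserves
                          e_milk=90.0, e_maintenance=40.0,
                          e_activity=5.0, dmi=20.0)
print(f"estimated FEC: {fec:.2f} MJ/kg DM")  # compare against the table value
```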
Jordan, Brett Watson
2017-06-03
Firms that extract and produce multiple metals are an important component of mineral supply. The reaction of such firms to changes in their relevant output prices is tested econometrically for five metals using a panel representing more than 100 mines across the time period 1991-2005. Here, the estimation strategy is drawn from joint production theory, namely a flexible form, dual revenue approach with seemingly unrelated regressions (SUR) estimation. The results indicate that multi-product mines respond (in the short run) to higher prices of a particular metal by reducing output of that metal (indicative of low-grading behavior) and increasing and/or decreasing output of joint metal products (indicative of substitutes and complements in supply). As a result, the price responses are not readily explained by a metal's classification as a by-product or main product based on revenue.
Investigation on the structural characterization of pulsed p-type porous silicon
NASA Astrophysics Data System (ADS)
Wahab, N. H. Abd; Rahim, A. F. Abd; Mahmood, A.; Yusof, Y.
2017-08-01
P-type porous silicon (PS) was successfully formed by using electrochemical pulse etching (PC) and conventional direct current (DC) etching techniques. The PS was etched in a hydrofluoric acid (HF) based solution at a current density of J = 10 mA/cm2 for 30 minutes from a crystalline silicon wafer with (100) orientation. For the PC process, the current was supplied through a pulse generator with a 14 ms cycle time (T), comprising a 10 ms on time (Ton) and a 4 ms pause time (Toff). FESEM, EDX, AFM, and XRD were used to characterize the morphological properties of the PS. FESEM images showed that the pulsed PS (PPC) sample produced more uniform circular structures, with an estimated average pore size of 42.14 nm, compared with an estimated average of 16.37 nm for the DC porous (PDC) sample. The EDX spectra for both samples showed high Si content with minimal presence of oxide.
Estimation of global anthropogenic dust aerosol using CALIOP satellite
NASA Astrophysics Data System (ADS)
Chen, B.; Huang, J.; Liu, J.
2014-12-01
Anthropogenic dust aerosols are those produced by human activity; in this paper they mainly comprise dust from cropland, pasture, and urban areas. Because understanding of the emissions of anthropogenic dust is still very limited, a new technique for separating anthropogenic dust from natural dust, using CALIPSO dust and planetary boundary layer height retrievals along with a land use dataset, is introduced. Using this technique, the global distribution of dust is analyzed and the relative contributions of anthropogenic and natural dust sources to regional and global emissions are estimated. Local anthropogenic dust aerosol due to human activity, such as agriculture, industrial activity, transportation, and overgrazing, accounts for about 22.3% of the global continental dust load. Of these anthropogenic dust aerosols, more than 52.5% come from semi-arid and semi-wet regions. On the whole, anthropogenic dust emissions from East China and India are higher than those from other regions.
Recent Advances in Synthesis and Characterization of SWCNTs Produced by Laser Oven Process
NASA Technical Reports Server (NTRS)
Arepalli, Sivaram
2004-01-01
Results from the parametric study of the two-laser oven process indicated possible improvements with flow conditions and laser characteristics. Higher flow rates, lower operating pressures coupled with changes in flow tube material are found to improve the nanotube yields. The collected nanotube material is analyzed using a combination of characterization techniques including SEM, TEM, TGA, Raman and UV-VIS-NIR to estimate the purity of the samples. In-situ diagnostics of the laser oven process is now extended to include the surface temperature of the target material. Spectral emission from the target surface is compared with black body type emission to estimate the temperature. The surface temperature seemed to correlate well with the ablation rate as well as the quality of the SWCNTs. Recent changes in improving the production rate by rastering the target and using cw laser will be presented.
Nie, Zhiqiang; Yang, Yufei; Tang, Zhenwu; Liu, Feng; Wang, Qi; Huang, Qifei
2014-11-01
Field monitoring was conducted to develop a polycyclic aromatic hydrocarbon (PAH) emission inventory for the magnesium (Mg) metallurgy industry in China. PAH emissions in stack gas and fly/bottom ash samples from different smelting units of a typical Mg smelter were measured and compared. Large variations of concentrations, congener patterns, and emission factors of PAHs during the oxidation and reduction stages in the Mg smelter were observed. The measured average emission factor (166,487 μg/t Mg) was significantly higher than those of other industrial sources. Annual emission from Mg metallurgy in 2012 in China was estimated at 116 kg (514 g BaPeq) for PAHs. The results of this study suggest that PAH emission from Mg industries should be considered by local government agencies. These data may be helpful for understanding PAH levels produced by the Mg industry and in developing a PAH inventory.
Sato, H; Hirayama, H; Yamamoto, T; Ishizawa, F; Mizugaki, M
1998-06-01
The purpose of this study was to evaluate the usefulness of extra-weak chemiluminescence (CL) measurement as a rapid method to estimate the stability of Kampo extract preparations. It was found that the Kampo drugs that emit little CL were stable, while those with higher CL were comparatively unstable with regard to various stability markers, including changes in coloration (browning) and in the contents of specific ingredients, high-molecular-weight compounds, amino acids and sugars under various heat-storage conditions. Excellent correlation existed between the CL of the Kampo drugs and the coloration (ΔE*ab) as well as the other above-mentioned evaluation markers. From this investigation, it was deduced that the CL of Kampo drugs originates in the early stage of the Maillard reaction and reflects the stability of the preparations, and that CL is useful for estimating the stability of Kampo drugs.
Estimation of Saxophone Control Parameters by Convex Optimization.
Wang, Cheng-I; Smyth, Tamara; Lipton, Zachary C
2014-12-01
In this work, an approach to jointly estimating the tone hole configuration (fingering) and reed model parameters of a saxophone is presented. The problem is not merely one of estimating pitch, since a single applied fingering can produce several different pitches by bugling or overblowing. Nor can a fingering be estimated solely from the spectral envelope of the produced sound (as it might be for estimation of vocal tract shape in speech), since one fingering can produce markedly different spectral envelopes depending on the player's embouchure and control of the reed. The problem is therefore addressed by jointly estimating both the reed (source) parameters and the fingering (filter) of a saxophone model using convex optimization and 1) a bank of filter frequency responses derived from measurement of the saxophone configured with all possible fingerings and 2) sample recordings of notes produced using all possible fingerings, played with different overblowing, dynamics and timbre. The saxophone model couples one of several possible frequency response pairs (corresponding to the applied fingering), and a quasi-static reed model generating input pressure at the mouthpiece, with control parameters being blowing pressure and reed stiffness. Applied fingering and reed parameters are estimated for a given recording by formalizing a minimization problem, where the cost function is the error between the recording and the synthesized sound produced by the model having incremental parameter values for blowing pressure and reed stiffness. The minimization problem is nonlinear and not differentiable and is made solvable using convex optimization. The fingering identification achieves better accuracy than previously reported values.
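The joint source-filter idea can be illustrated with a toy stand-in: exhaustive search over fingerings plus a grid over the two reed parameters, applied to synthetic responses. This substitutes brute force for the paper's convex optimization and toy functions for its physical models; every number below is invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n_freq, n_fingerings = 64, 8

# Hypothetical bank of measured filter magnitude responses, one per fingering.
filters = np.abs(rng.standard_normal((n_fingerings, n_freq))) + 0.5

def source_spectrum(pressure, stiffness):
    """Toy quasi-static reed source whose spectral rolloff is steered by
    the two control parameters (illustrative, not the paper's reed model)."""
    k = np.arange(1, n_freq + 1)
    return pressure * k ** (-stiffness)

def synthesize(fingering, pressure, stiffness):
    return filters[fingering] * source_spectrum(pressure, stiffness)

# "Recording": fingering 3 played with particular reed settings, plus noise.
target = synthesize(3, 0.8, 1.5) + 0.01 * rng.standard_normal(n_freq)

# Joint estimation: exhaustive over fingerings, grid over reed parameters.
candidates = ((np.sum((synthesize(f, p, s) - target) ** 2), f, p, s)
              for f in range(n_fingerings)
              for p in np.linspace(0.1, 1.5, 30)
              for s in np.linspace(0.5, 3.0, 30))
err, f_hat, p_hat, s_hat = min(candidates)
print(f"fingering {f_hat}, pressure {p_hat:.2f}, stiffness {s_hat:.2f}")
```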
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paap, Scott M.; West, Todd H.; Manley, Dawn Kataoka
2013-01-01
In the current study, processes to produce either ethanol or a representative fatty acid ethyl ester (FAEE) via the fermentation of sugars liberated from lignocellulosic materials pretreated in acid or alkaline environments are analyzed in terms of economic and environmental metrics. Simplified process models are introduced and employed to estimate process performance, and Monte Carlo analyses were carried out to identify key sources of uncertainty and variability. We find that the near-term performance of processes to produce FAEE is significantly worse than that of ethanol production processes for all metrics considered, primarily due to poor fermentation yields and higher electricity demands for aerobic fermentation. In the longer term, the reduced cost and energy requirements of FAEE separation processes will be at least partially offset by inherent limitations in the relevant metabolic pathways that constrain the maximum yield potential of FAEE from biomass-derived sugars.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Jingke; Stanford, Chris; Westerdale, Shawn
Here, one major background in direct searches for weakly interacting massive particles (WIMPs) comes from the deposition of radon progeny on detector surfaces. A dangerous surface background is the 206Pb nuclear recoils produced by 210Po decays. In this paper, we report the first characterization of this background in liquid argon. The scintillation signal of low energy Pb recoils is measured to be highly quenched in argon, and we estimate that the 103 keV 206Pb recoil background will produce a signal equal to that of a ~5 keV electron recoil (30 keV 40Ar recoil). In addition, we demonstrate that this dangerous 210Po surface background can be suppressed, using pulse shape discrimination methods, by a factor of ~100 or higher, which can make argon dark matter detectors nearly background-free and enhance their potential for discovery of medium- and high-mass WIMPs. Lastly, we also discuss the impact on other low background experiments.
Alternative Methods for Handling Attrition
Foster, E. Michael; Fang, Grace Y.
2009-01-01
Using data from the evaluation of the Fast Track intervention, this article illustrates three methods for handling attrition. Multiple imputation and ignorable maximum likelihood estimation produce estimates that are similar to those based on listwise-deleted data. A panel selection model that allows for selective dropout reveals that highly aggressive boys accumulate in the treatment group over time and produces a larger estimate of treatment effect. In contrast, this model produces a smaller treatment effect for girls. The article's conclusion discusses the strengths and weaknesses of the alternative approaches and outlines ways in which researchers might improve their handling of attrition. PMID:15358906
Al-Quwaidhi, Abdulkareem J.; Pearce, Mark S.; Sobngwi, Eugene; Critchley, Julia A.; O’Flaherty, Martin
2014-01-01
Aims To compare the estimates and projections of type 2 diabetes mellitus (T2DM) prevalence in Saudi Arabia from a validated Markov model against other modelling estimates, such as those produced by the International Diabetes Federation (IDF) Diabetes Atlas and the Global Burden of Disease (GBD) project. Methods A discrete-state Markov model was developed and validated that integrates data on population, obesity and smoking prevalence trends in adult Saudis aged ≥25 years to estimate the trends in T2DM prevalence (annually from 1992 to 2022). The model was validated by comparing the age- and sex-specific prevalence estimates against a national survey conducted in 2005. Results Prevalence estimates from this new Markov model were consistent with the 2005 national survey and very similar to the GBD study estimates. Prevalence in men and women in 2000 was estimated by the GBD model respectively at 17.5% and 17.7%, compared to 17.7% and 16.4% in this study. The IDF estimates of the total diabetes prevalence were considerably lower at 16.7% in 2011 and 20.8% in 2030, compared with 29.2% in 2011 and 44.1% in 2022 in this study. Conclusion In contrast to other modelling studies, both the Saudi IMPACT Diabetes Forecast Model and the GBD model directly incorporated the trends in obesity prevalence and/or body mass index (BMI) to inform T2DM prevalence estimates. It appears that such a direct incorporation of obesity trends in modelling studies results in higher estimates of the future prevalence of T2DM, at least in countries where obesity has been rapidly increasing. PMID:24447810
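A discrete-state Markov projection of the kind described reduces to repeated multiplication of a state vector by a transition matrix. A toy sketch with invented transition probabilities, not the Saudi model's parameters:

```python
import numpy as np

states = ("healthy", "T2DM", "dead")
P = np.array([[0.95, 0.04, 0.01],   # transitions from healthy
              [0.00, 0.96, 0.04],   # from T2DM (no remission assumed)
              [0.00, 0.00, 1.00]])  # dead is absorbing

pop = np.array([0.85, 0.15, 0.00])  # assumed initial distribution over states
for year in range(1992, 2023):
    if (year - 1992) % 10 == 0:
        prev = pop[1] / (pop[0] + pop[1])
        print(year, f"prevalence among survivors: {prev:.1%}")
    pop = pop @ P                   # advance the cohort one year
```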
ERIC Educational Resources Information Center
Maxwell, Jane Carlisle; Pullum, Thomas W.
2001-01-01
Applied the capture-recapture model, through Poisson regression, to a time series of data on admissions to treatment from 1987 to 1996 to estimate the number of heroin addicts in Texas who are "at-risk" for treatment. The entire data set produced estimates that were lower and more plausible than those produced by drawing samples,…
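A simpler zero-truncated Poisson variant of capture-recapture (not the study's Poisson regression on a time series) conveys the core idea: the never-admitted "zero class" is inferred from the distribution of repeat admissions. All counts below are simulated:

```python
import numpy as np

rng = np.random.default_rng(1)
true_lambda, true_N = 0.7, 10_000
admissions = rng.poisson(true_lambda, true_N)
observed = admissions[admissions > 0]        # only admitted people are seen

# Solve  mean(observed) = lam / (1 - exp(-lam))  for lam by fixed point.
m = observed.mean()
lam = m
for _ in range(200):
    lam = m * (1 - np.exp(-lam))

N_hat = len(observed) / (1 - np.exp(-lam))   # inflate by P(seen at least once)
print(f"observed {len(observed)}, estimated total {N_hat:.0f} (true {true_N})")
```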
An Overdetermined System for Improved Autocorrelation Based Spectral Moment Estimator Performance
NASA Technical Reports Server (NTRS)
Keel, Byron M.
1996-01-01
Autocorrelation based spectral moment estimators are typically derived using the Fourier transform relationship between the power spectrum and the autocorrelation function, along with either an assumed form of the autocorrelation function, e.g., Gaussian, or a generic complex form and applying properties of the characteristic function. Passarelli has used a series expansion of the general complex autocorrelation function and has expressed the coefficients in terms of central moments of the power spectrum. A truncation of this series will produce a closed system of equations which can be solved for the central moments of interest. The autocorrelation function at various lags is estimated from samples of the random process under observation. These estimates are themselves random variables and exhibit a bias and variance that is a function of the number of samples used in the estimates and the operational signal-to-noise ratio. This contributes to a degradation in performance of the moment estimators. This dissertation investigates the use of autocorrelation function estimates at higher-order lags to reduce the bias and standard deviation of spectral moment estimates. In particular, Passarelli's series expansion is cast in terms of an overdetermined system to form a framework under which the application of additional autocorrelation function estimates at higher-order lags can be defined and assessed. The solution of the overdetermined system is the least squares solution. Furthermore, an overdetermined system can be solved for any moment or moments of interest and is not tied to a particular form of the power spectrum or corresponding autocorrelation function. As an application of this approach, autocorrelation based variance estimators are defined by a truncation of Passarelli's series expansion and applied to simulated Doppler weather radar returns which are characterized by a Gaussian shaped power spectrum. The performance of the variance estimators determined from a closed system is shown to improve through the application of additional autocorrelation lags in an overdetermined system. This improvement is greater in the narrowband spectrum region where the information is spread over more lags of the autocorrelation function. The number of lags needed in the overdetermined system is a function of the spectral width, the number of terms in the series expansion, the number of samples used in estimating the autocorrelation function, and the signal-to-noise ratio. The overdetermined system provides robustness to the chosen variance estimator by expanding the region of spectral widths and signal-to-noise ratios over which the estimator can perform, as compared to the closed system.
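For a Gaussian spectrum, ln|R(k)/R(0)| is linear in k², so using several lags turns width estimation into an overdetermined least-squares problem rather than a closed system from one lag. A minimal synthetic sketch of that idea (not Passarelli's full expansion; all signal parameters are invented):

```python
import numpy as np

rng = np.random.default_rng(2)
n, sigma = 512, 0.04                      # record length, true normalized width

# Synthesize a complex signal with a Gaussian power spectrum.
f = np.fft.fftfreq(n)
amp = np.exp(-f ** 2 / (4 * sigma ** 2))  # sqrt of the Gaussian PSD
z = np.fft.ifft(amp * (rng.standard_normal(n) + 1j * rng.standard_normal(n)))

# Estimate the ACF at lags 1..K; here ln|R(k)/R(0)| = -2 (pi sigma k)^2.
K = 8
lags = np.arange(1, K + 1)
R0 = np.mean(np.abs(z) ** 2)
R = np.array([np.mean(z[k:] * np.conj(z[:n - k])) for k in lags])
y = np.log(np.abs(R) / R0)

# Overdetermined system: least-squares slope of y against k^2 (through 0).
x = (lags ** 2).astype(float)
slope = (x @ y) / (x @ x)
sigma_hat = np.sqrt(-slope) / (np.pi * np.sqrt(2.0))
print(f"true width {sigma}, estimate from {K} lags {sigma_hat:.4f}")
```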
Evaluation of Bayesian Sequential Proportion Estimation Using Analyst Labels
NASA Technical Reports Server (NTRS)
Lennington, R. K.; Abotteen, K. M. (Principal Investigator)
1980-01-01
The author has identified the following significant results. A total of ten Large Area Crop Inventory Experiment Phase 3 blind sites and analyst-interpreter labels were used in a study to compare proportion estimates obtained by the Bayes sequential procedure with estimates obtained from simple random sampling and from Procedure 1. The analyst error rate using the Bayes technique was shown to be no greater than that for simple random sampling. Also, the segment proportion estimates produced using this technique had smaller bias and mean squared errors than the estimates produced using either simple random sampling or Procedure 1.
How EIA Estimates Natural Gas Production
2004-01-01
The Energy Information Administration (EIA) publishes estimates monthly and annually of the production of natural gas in the United States. The estimates are based on data EIA collects from gas producing states and data collected by the U. S. Minerals Management Service (MMS) in the Department of Interior. The states and MMS collect this information from producers of natural gas for various reasons, most often for revenue purposes. Because the information is not sufficiently complete or timely for inclusion in EIA's Natural Gas Monthly (NGM), EIA has developed estimation methodologies to generate monthly production estimates that are described in this document.
On the Specification of Smoke Injection Heights for Aerosol Forecasting
NASA Astrophysics Data System (ADS)
da Silva, A.; Schaefer, C.; Randles, C. A.
2014-12-01
The proper forecasting of biomass burning (BB) aerosols in global or regional transport models requires not only the specification of emission rates with sufficient temporal resolution but also the injection layers of such emissions. While current near-realtime biomass burning inventories such as GFAS, QFED, FINN, GBBEP and FLAMBE provide such emission rates, it is left to each modeling system to come up with its own scheme for distributing these emissions in the vertical. A number of operational aerosol forecasting models deposit BB emissions in the near-surface model layers, relying on the model's parameterization of turbulent and convective transport to determine the vertical mass distribution of BB aerosols. Despite their simplicity, such schemes have been relatively successful at reproducing the vertical structure of BB aerosols, except for those large fires that produce enough buoyancy to puncture the PBL and deposit the smoke in higher layers. Plume rise models such as the so-called 'Freitas model' parameterize this sub-grid buoyancy effect, but require the specification of fire size and heat fluxes, neither of which is readily available in near real-time from current remotely sensed products. In this talk we will introduce a Bayesian algorithm for estimating fire size and heat fluxes from MODIS brightness temperatures. For small to moderate fires, the Freitas model driven by these heat flux estimates produces plume tops that are highly correlated with the GEOS-5 model estimate of PBL height. Comparison to MINX plume height estimates from MISR indicates moderate skill of this scheme in predicting the injection height of large fires. As an alternative, we make use of OMPS UV aerosol index data in combination with estimates of overshooting convective tops (from MODIS and geostationary satellites) to detect PyCu events and specify the BB emission vertical mass distribution in such cases. We will present a discussion of case studies during the SEAC4RS field campaign in August-September 2013.
Hunter, Margaret; Meigs-Friend, Gaia; Ferrante, Jason; Takoukam Kamla, Aristide; Dorazio, Robert; Keith Diagne, Lucy; Luna, Fabia; Lanyon, Janet M.; Reid, James P.
2018-01-01
Environmental DNA (eDNA) detection is a technique used to non-invasively detect cryptic, low density, or logistically difficult-to-study species, such as imperiled manatees. For eDNA measurement, genetic material shed into the environment is concentrated from water samples and analyzed for the presence of target species. Cytochrome b quantitative PCR and droplet digital PCR eDNA assays were developed for the 3 Vulnerable manatee species: African, Amazonian, and both subspecies of the West Indian (Florida and Antillean) manatee. Environmental DNA assays can help to delineate manatee habitat ranges, high use areas, and seasonal population changes. To validate the assay, water was analyzed from Florida’s east coast containing a high-density manatee population and produced 31,564 DNA molecules l-1 on average and high occurrence (ψ) and detection (p) estimates (ψ = 0.84 [0.40-0.99]; p = 0.99 [0.95-1.00]; limit of detection 3 copies µl-1). Similar occupancy estimates were produced in the Florida Panhandle (ψ = 0.79 [0.54-0.97]) and Cuba (ψ = 0.89 [0.54-1.00]), while occupancy estimates in Cameroon were lower (ψ = 0.49 [0.09-0.95]). The eDNA-derived detection estimates were higher than those generated using aerial survey data on the west coast of Florida and may be effective for population monitoring. Subsequent eDNA studies could be particularly useful in locations where manatees are (1) difficult to identify visually (e.g. the Amazon River and Africa), (2) present in patchy distributions or on the verge of extinction (e.g. Jamaica, Haiti), and (3) where repatriation efforts are proposed (e.g. Brazil, Guadeloupe). Extension of these eDNA techniques could be applied to other imperiled marine mammal populations such as African and Asian dugongs.
Can Family Planning Service Statistics Be Used to Track Population-Level Outcomes?
Magnani, Robert J; Ross, John; Williamson, Jessica; Weinberger, Michelle
2018-03-21
The need for annual family planning program tracking data under the Family Planning 2020 (FP2020) initiative has contributed to renewed interest in family planning service statistics as a potential data source for annual estimates of the modern contraceptive prevalence rate (mCPR). We sought to assess (1) how well a set of commonly recorded data elements in routine service statistics systems could, with some fairly simple adjustments, track key population-level outcome indicators, and (2) whether some data elements performed better than others. We used data from 22 countries in Africa and Asia to analyze 3 data elements collected from service statistics: (1) number of contraceptive commodities distributed to clients, (2) number of family planning service visits, and (3) number of current contraceptive users. Data quality was assessed via analysis of mean square errors, using the United Nations Population Division World Contraceptive Use annual mCPR estimates as the "gold standard." We also examined the magnitude of several components of measurement error: (1) variance, (2) level bias, and (3) slope (or trend) bias. Our results indicate modest levels of tracking error for data on commodities to clients (7%) and service visits (10%), and somewhat higher error rates for data on current users (19%). Variance and slope bias were relatively small for all data elements. Level bias was by far the largest contributor to tracking error. Paired comparisons of data elements in countries that collected at least 2 of the 3 data elements indicated a modest advantage of data on commodities to clients. None of the data elements considered was sufficiently accurate to be used to produce reliable stand-alone annual estimates of mCPR. However, the relatively low levels of variance and slope bias indicate that trends calculated from these 3 data elements can be productively used in conjunction with the Family Planning Estimation Tool (FPET) currently used to produce annual mCPR tracking estimates for FP2020. © Magnani et al.
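The decomposition of tracking error into level bias, slope (trend) bias, and variance can be reproduced with ordinary least squares on the error series against the gold standard. Both series below are invented placeholders, not the 22-country data:

```python
import numpy as np

years = np.arange(2012, 2019)
gold = np.array([20.0, 21.0, 22.1, 23.0, 24.2, 25.1, 26.0])  # "gold" mCPR, %
svc = np.array([23.5, 24.0, 25.8, 26.2, 28.0, 29.3, 30.1])   # service-stat series

err = svc - gold
slope = np.polyfit(years, err, 1)[0]                  # trend in the error
centered = years - years.mean()
resid = err - err.mean() - slope * centered           # what OLS cannot explain

level_bias = err.mean() ** 2                          # squared mean error
slope_bias = (slope ** 2) * (centered ** 2).mean()    # squared trend component
variance = (resid ** 2).mean()

mse = (err ** 2).mean()                               # exact sum of the three
print(f"MSE {mse:.2f} = level {level_bias:.2f}"
      f" + slope {slope_bias:.2f} + variance {variance:.2f}")
```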
Evaluation of wind field statistics near and inside clouds using a coherent Doppler lidar
NASA Astrophysics Data System (ADS)
Lottman, Brian Todd
1998-09-01
This work proposes advanced techniques for measuring the spatial wind field statistics near and inside clouds using a vertically pointing solid state coherent Doppler lidar on a fixed ground based platform. The coherent Doppler lidar is an ideal instrument for high spatial and temporal resolution velocity estimates. The basic parameters of lidar are discussed, including a complete statistical description of the Doppler lidar signal. This description is extended to cases with simple functional forms for aerosol backscatter and velocity. An estimate for the mean velocity over a sensing volume is produced by estimating the mean spectrum. There are many traditional spectral estimators, which are useful for conditions with slowly varying velocity and backscatter. A new class of estimators, termed novel estimators, is introduced that produces reliable velocity estimates for conditions with large variations in aerosol backscatter and velocity with range, such as cloud conditions. Performance of traditional and novel estimators is computed for a variety of deterministic atmospheric conditions using computer simulated data. Wind field statistics are produced from actual data for a cloud deck and for multi-layer clouds. Unique results include detection of possible spectral signatures for rain, estimates for the structure function inside a cloud deck, reliable velocity estimation techniques near and inside thin clouds, and estimates for simple wind field statistics between cloud layers.
Model fit evaluation in multilevel structural equation models
Ryu, Ehri
2014-01-01
Assessing goodness of model fit is one of the key questions in structural equation modeling (SEM). Goodness of fit is the extent to which the hypothesized model reproduces the multivariate structure underlying the set of variables. During the earlier development of multilevel structural equation models, the “standard” approach was to evaluate the goodness of fit for the entire model across all levels simultaneously. The model fit statistics produced by the standard approach have a potential problem in detecting lack of fit in the higher-level model, for which the effective sample size is much smaller. Also, when the standard approach results in poor model fit, it is not clear at which level the model does not fit well. This article reviews two alternative approaches that have been proposed to overcome the limitations of the standard approach. One is a two-step procedure which first produces estimates of saturated covariance matrices at each level and then performs single-level analysis at each level with the estimated covariance matrices as input (Yuan and Bentler, 2007). The other, level-specific approach utilizes partially saturated models to obtain test statistics and fit indices for each level separately (Ryu and West, 2009). Simulation studies (e.g., Yuan and Bentler, 2007; Ryu and West, 2009) have consistently shown that both alternative approaches performed well in detecting lack of fit at any level, whereas the standard approach failed to detect lack of fit at the higher level. It is recommended that the alternative approaches be used to assess model fit in multilevel structural equation models. Advantages and disadvantages of the two alternative approaches are discussed. The alternative approaches are demonstrated in an empirical example. PMID:24550882
Bach Kristensen, Mette; Hels, Ole; Morberg, Catrine; Marving, Jens; Bügel, Susanne; Tetens, Inge
2005-07-01
Meat increases absorption of non-haem iron in single-meal studies. The aim of the present study was to investigate, over a 5 d period, the potential increasing effect of consumption of pork meat in a whole diet on the fractional absorption of non-haem iron and the total absorption of iron, when compared to a vegetarian diet. A randomised cross-over design with 3 x 5 d whole-diet periods with diets containing Danish-produced meat, Polish-produced meat or a vegetarian diet was conducted. Nineteen healthy female subjects completed the study. All main meals in the meat diets contained 60 g of pork meat and all diets had a high phytic acid content (1250 µmol/d). All main meals were extrinsically labelled with the radioactive isotope 59Fe and absorption of iron was measured in a whole body counter. The non-haem iron absorption from the Danish meat diet was significantly higher compared to the vegetarian diet (P=0.031). The mean fractional absorption of non-haem iron was 7.9 (SE 1.1), 6.8 (SE 1.0) and 5.3 (SE 0.6) % for the Danish and Polish meat diets and the vegetarian diet, respectively. Total absorption of iron was higher for both meat diets compared to the vegetarian diet (Danish meat diet: P=0.006, Polish meat diet: P=0.003). The absorption ratios of the present study were well in accordance with absorption ratios estimated using algorithms on iron bioavailability. Neither the meat diets nor the vegetarian diet fulfilled the estimated daily requirements of absorbed iron, in spite of a meat intake of 180 g/d in the meat diets.
Coolbaugh, M.F.; Raines, G.L.; Zehner, R.E.; Shevenell, L.; Williams, C.F.
2006-01-01
Geothermal potential maps by themselves cannot directly be used to estimate undiscovered resources. To address the undiscovered resource base in the Great Basin, a new and relatively quantitative methodology is presented. The methodology involves three steps, the first being the construction of a data-driven probabilistic model of the location of known geothermal systems using weights of evidence. The second step is the construction of a degree-of-exploration model. This degree-of-exploration model uses expert judgment in a fuzzy logic context to estimate how well each spot in the state has been explored, using as constraints digital maps of the depth to the water table, presence of the carbonate aquifer, and the location, depth, and type of drill-holes. Finally, the exploration model and the data-driven occurrence model are combined together quantitatively using area-weighted modifications to the weights-of-evidence equations. Using this methodology in the state of Nevada, the number of undiscovered geothermal systems with reservoir temperatures ≥100°C is estimated at 157, which is 3.2 times greater than the 69 known systems. Currently, nine of the 69 known systems are producing electricity. If it is conservatively assumed that an additional nine, for a total of 18, of the known systems will eventually produce electricity, then the model predicts 59 known and undiscovered geothermal systems are capable of producing electricity under current economic conditions in the state, a figure that is more than six times higher than the current number. Many additional geothermal systems could potentially become economic under improved economic conditions or with improved methods of reservoir stimulation (Enhanced Geothermal Systems). This large predicted geothermal resource base appears corroborated by recent grass-roots geothermal discoveries in the state of Nevada. At least two and possibly three newly recognized geothermal systems with estimated reservoir temperatures ≥150°C have been identified on the Pyramid Lake Paiute Reservation in west-central Nevada. Evidence of three blind geothermal systems has recently been uncovered near the borate-bearing playas at Rhodes, Teels, and Columbus Marshes in southwestern Nevada. Recent gold exploration drilling has resulted in at least four new geothermal discoveries, including the McGinness Hills geothermal system with an estimated reservoir temperature of roughly 200°C. All of this evidence suggests that the potential for expansion of geothermal power production in Nevada is significant.
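The weights-of-evidence step reduces to log-ratios of conditional probabilities from a 2x2 table of cell counts. A minimal sketch with invented counts (only the figure of 69 known systems echoes the abstract):

```python
import numpy as np

n_cells = 100_000      # unit cells in the study area (invented)
n_deposit = 69         # cells containing a known geothermal system
n_evidence = 20_000    # cells where the evidence layer is present (invented)
n_both = 40            # known systems coinciding with the evidence (invented)

p_e_given_d = n_both / n_deposit
p_e_given_not_d = (n_evidence - n_both) / (n_cells - n_deposit)
w_plus = np.log(p_e_given_d / p_e_given_not_d)      # evidence present
w_minus = np.log((1 - p_e_given_d) / (1 - p_e_given_not_d))  # evidence absent

print(f"W+ = {w_plus:.2f}, W- = {w_minus:.2f}, contrast C = {w_plus - w_minus:.2f}")
# Posterior odds for a cell = prior odds * exp(W+ or W-), multiplied
# across (conditionally independent) evidence layers.
```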
Different approaches to assess the environmental performance of a cow manure biogas plant
NASA Astrophysics Data System (ADS)
Torrellas, Marta; Burgos, Laura; Tey, Laura; Noguerol, Joan; Riau, Victor; Palatsi, Jordi; Antón, Assumpció; Flotats, Xavier; Bonmatí, August
2018-03-01
In intensive livestock production areas, farmers must apply manure management systems to comply with governmental regulations. Biogas plants, as a source of renewable energy, have the potential to reduce environmental impacts compared with other manure management practices. Nevertheless, manure processing at biogas plants also produces undesired gaseous emissions that should be considered. At present, available emission calculation methods only partially cover the emissions produced at a biogas plant, with the subsequent difficulty in the preparation of life cycle inventories. The objective of this study is to characterise gaseous emissions: ammonia (NH3-N), methane (CH4), nitrous oxide (N2Oindirect and N2Odirect) and hydrogen sulphide (H2S) from the anaerobic co-digestion of cow manure, using different approaches for preparing gaseous emission inventories, and to compare the methodologies used. The chosen scenario for the study is a biogas plant located next to a dairy farm in the north of Catalonia, Spain. Emissions were calculated by two methods: field measurements and estimation following international guidelines. Intergovernmental Panel on Climate Change (IPCC) guidelines were adapted to estimate emissions for the specific situation according to Tier 1, Tier 2 and Tier 3 approaches. Total air emissions at the biogas plant were calculated from the emissions produced at the three main manure storage facilities on the plant: influent storage, liquid fraction storage, and solid fraction storage of the digestate. Results showed that most of the emissions were produced in the liquid fraction storage. Comparing measured emissions with estimated emissions, NH3, CH4, N2Oindirect and H2S total emission results were of the same order of magnitude for both methodologies, while N2Odirect total measured emissions were one order of magnitude higher than the estimates. A Monte Carlo analysis was carried out to examine the uncertainties of emissions determined from experimental data, providing probability distribution functions. Four emission inventories were developed with the different methodologies used. Estimation methods proved to be a useful tool to determine emissions when field sampling is not possible. Nevertheless, it was not possible to establish which methodology is more reliable. Therefore, more measurements at different biogas plants should be evaluated to validate the methodologies more precisely.
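The Monte Carlo step can be sketched by propagating assumed distributions for each storage stage through to the plant total. The lognormal parameters below are invented placeholders, not the measured Catalan data:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000  # Monte Carlo draws

# Assumed per-stage emission distributions (arbitrary units); the liquid
# fraction storage is made dominant, as reported in the study.
influent = rng.lognormal(mean=np.log(5.0), sigma=0.3, size=n)
liquid = rng.lognormal(mean=np.log(40.0), sigma=0.5, size=n)
solid = rng.lognormal(mean=np.log(8.0), sigma=0.4, size=n)

total = influent + liquid + solid
lo, med, hi = np.percentile(total, [2.5, 50, 97.5])
print(f"total emission: median {med:.1f}, 95% interval [{lo:.1f}, {hi:.1f}]")
```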
Analysis of Terrestrial Conditions and Dynamics
NASA Technical Reports Server (NTRS)
Goward, S. N.
1985-01-01
An ecological model is developed to estimate annual net primary productivity of vegetation in twelve major North American biomes. Three models are adapted and combined, each addressing a different factor known to govern primary productivity, i.e., photosynthesis, respiration, and moisture availability. Measures of intercepted photosynthetically active radiation (IPAR) for input to the photosynthesis model are derived from spectral vegetation index data. Normalized Difference Vegetation Index (NDVI) data are produced from NOAA-7 Advanced Very High Resolution Radiometer (AVHRR) observations for April 1982 through March 1983. NDVI values are sampled from within the biomes at locations for which climatological data are available. Monthly estimates of Net Primary Productivity (NPP) for each sample location are generated and summed over the twelve month period. These monthly estimates are averaged to produce a single annual estimated NPP value for each biome. Comparison of estimated NPP values with figures reported in the literature produces a correlation coefficient of 0.85.
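Models of this family typically compute NPP as a light-use efficiency times IPAR accumulated over months (a Monteith-type formulation). A hedged sketch with invented monthly values, an assumed linear NDVI-fPAR relation, and the respiration and moisture terms omitted:

```python
import numpy as np

# Twelve invented monthly values for one sample location.
ndvi = np.array([0.30, 0.32, 0.40, 0.52, 0.65, 0.72,
                 0.75, 0.70, 0.60, 0.48, 0.36, 0.31])
par = np.array([150, 190, 260, 330, 390, 420,
                410, 360, 290, 220, 160, 140])    # MJ/m2 per month

fpar = np.clip(1.2 * ndvi - 0.1, 0, 1)  # assumed NDVI-fPAR relation
ipar = fpar * par                       # intercepted PAR, MJ/m2 per month
epsilon = 0.5                           # assumed light-use efficiency, g C/MJ
npp_monthly = epsilon * ipar            # photosynthesis term only
print(f"annual NPP: {npp_monthly.sum():.0f} g C/m2/yr")
```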
Bartlett, D L; Ezzati-Rice, T M; Stokley, S; Zhao, Z
2001-05-01
The National Immunization Survey (NIS) and the National Health Interview Survey (NHIS) produce national coverage estimates for children aged 19 months to 35 months. The NIS is a cost-effective, random-digit-dialing telephone survey that produces national and state-level vaccination coverage estimates. The National Immunization Provider Record Check Study (NIPRCS) is conducted in conjunction with the annual NHIS, which is a face-to-face household survey. As the NIS is a telephone survey, potential coverage bias exists as the survey excludes children living in nontelephone households. To assess the validity of estimates of vaccine coverage from the NIS, we compared 1995 and 1996 NIS national estimates with results from the NHIS/NIPRCS for the same years. Both the NIS and the NHIS/NIPRCS produce similar results. The NHIS/NIPRCS supports the findings of the NIS.
Time scale bias in erosion rates of glaciated landscapes
Ganti, Vamsi; von Hagke, Christoph; Scherler, Dirk; Lamb, Michael P.; Fischer, Woodward W.; Avouac, Jean-Philippe
2016-01-01
Deciphering erosion rates over geologic time is fundamental for understanding the interplay between climate, tectonic, and erosional processes. Existing techniques integrate erosion over different time scales, and direct comparison of such rates is routinely done in earth science. On the basis of a global compilation, we show that erosion rate estimates in glaciated landscapes may be affected by a systematic averaging bias that produces higher estimated erosion rates toward the present, which do not reflect straightforward changes in erosion rates through time. This trend can result from a heavy-tailed distribution of erosional hiatuses (that is, time periods where no or relatively slow erosion occurs). We argue that such a distribution can result from the intermittency of erosional processes in glaciated landscapes that are tightly coupled to climate variability from decadal to millennial time scales. In contrast, we find no evidence for a time scale bias in spatially averaged erosion rates of landscapes dominated by river incision. We discuss the implications of our findings in the context of the proposed coupling between climate and tectonics, and interpreting erosion rate estimates with different averaging time scales through geologic time. PMID:27713925
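The averaging bias is easy to reproduce in simulation: with heavy-tailed hiatuses separating constant erosion pulses, the apparent rate falls as the measurement window grows, even though nothing about the process changes through time. All parameters below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(4)
n_events = 200_000
hiatus = rng.pareto(a=0.8, size=n_events) + 1.0  # heavy-tailed waiting times
t = np.cumsum(hiatus)                            # event times
depth = np.arange(1, n_events + 1, dtype=float)  # 1 unit eroded per event

for window in (1e2, 1e4, 1e6):
    i = np.searchsorted(t, window)               # events within the window
    print(f"interval {window:>9.0f}: apparent rate {depth[i - 1] / window:.4f}")
```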
NASA Astrophysics Data System (ADS)
Hamprecht, Fred A.; Peter, Christine; Daura, Xavier; Thiel, Walter; van Gunsteren, Wilfred F.
2001-02-01
We propose an approach for summarizing the output of long simulations of complex systems, affording a rapid overview and interpretation. First, multidimensional scaling techniques are used in conjunction with dimension reduction methods to obtain a low-dimensional representation of the configuration space explored by the system. A nonparametric estimate of the density of states in this subspace is then obtained using kernel methods. The free energy surface is calculated from that density, and the configurations produced in the simulation are then clustered according to the topography of that surface, such that all configurations belonging to one local free energy minimum form one class. This topographical cluster analysis is performed using basin spanning trees, which we introduce as subgraphs of Delaunay triangulations. Free energy surfaces obtained in dimensions lower than four can be visualized directly using iso-contours and iso-surfaces. Basin spanning trees also afford a glimpse of higher-dimensional topographies. The procedure is illustrated using molecular dynamics simulations on the reversible folding of peptide analogs. Finally, we emphasize the intimate relation of density estimation techniques to modern enhanced sampling algorithms.
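A hedged sketch of the pipeline under stated assumptions: MDS for the low-dimensional embedding, a Gaussian kernel density estimate, F = -kT ln(rho) for the free energy, and a grid watershed standing in for the paper's basin spanning trees (which are not reproduced here):

```python
# Minimal sketch (assumed data and parameters, not the authors' code):
# embed -> density -> free energy -> one cluster label per trajectory frame.
import numpy as np
from sklearn.manifold import MDS
from scipy.stats import gaussian_kde
from skimage.segmentation import watershed

def cluster_by_free_energy(configs, kT=1.0, grid_n=100):
    xy = MDS(n_components=2).fit_transform(configs)   # low-dim representation
    kde = gaussian_kde(xy.T)                          # nonparametric density
    gx = np.linspace(xy[:, 0].min(), xy[:, 0].max(), grid_n)
    gy = np.linspace(xy[:, 1].min(), xy[:, 1].max(), grid_n)
    X, Y = np.meshgrid(gx, gy, indexing="ij")
    rho = kde(np.vstack([X.ravel(), Y.ravel()])).reshape(grid_n, grid_n)
    F = -kT * np.log(rho + 1e-12)                     # free energy surface
    basins = watershed(F)                             # one label per minimum
    ix = np.clip(np.searchsorted(gx, xy[:, 0]), 0, grid_n - 1)
    iy = np.clip(np.searchsorted(gy, xy[:, 1]), 0, grid_n - 1)
    return basins[ix, iy]                             # cluster label per frame
```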
Basis for the ICRP's updated biokinetic model for carbon inhaled as CO2
Leggett, Richard W.
2017-03-02
Here, the International Commission on Radiological Protection (ICRP) is updating its biokinetic and dosimetric models for occupational intake of radionuclides (OIR) in a series of reports called the OIR series. This paper describes the basis for the ICRP's updated biokinetic model for inhalation of radiocarbon as carbon dioxide (CO2) gas. The updated model is based on biokinetic data for carbon isotopes inhaled as carbon dioxide or injected or ingested as bicarbonate (HCO3−). The data from these studies are expected to apply equally to internally deposited (or internally produced) carbon dioxide and bicarbonate, based on comparison of excretion rates for the two administered forms and the fact that carbon dioxide and bicarbonate are largely carried in a common form (CO2–HCO3−) in blood. Compared with dose estimates based on current ICRP biokinetic models for inhaled carbon dioxide or ingested carbon, the updated model will result in a somewhat higher dose estimate for 14C inhaled as CO2 and a much lower dose estimate for 14C ingested as bicarbonate.
Southern Ocean Carbon Dioxide and Oxygen Fluxes Detected by SOCCOM Biogeochemical Profiling Floats
NASA Astrophysics Data System (ADS)
Sarmiento, J. L.; Bushinsky, S.; Gray, A. R.
2016-12-01
The Southern Ocean is known to play an important role in the global carbon cycle, yet historically our measurements of this remote region have been sparse and heavily biased towards summer. Here we present new estimates of air-sea fluxes of carbon dioxide and oxygen calculated with measurements from autonomous biogeochemical profiling floats. At high latitudes in and southward of the Antarctic Circumpolar Current, we find a significant flux of CO2 from the ocean to the atmosphere during 2014-2016, which is particularly enhanced during winter months. These results suggest that previous estimates may be biased towards stronger Southern Ocean CO2 uptake due to undersampling in winter. We examine various implications of having a source of CO2 that is higher than previous estimates. We also find that CO2:O2 flux ratios north of the Subtropical Front are positive, consistent with the fluxes being driven by changes in solubility, while south of the Polar Front biological processes and upwelling of deep water combine to produce a negative CO2:O2 flux ratio.
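For context, air-sea fluxes of this kind are commonly computed with a bulk formula; the sketch below uses a Wanninkhof-style gas transfer velocity as an assumed stand-in for the authors' exact parameterization:

```python
# Bulk air-sea CO2 flux sketch (assumed parameterization, illustrative only).
def co2_flux(u10, k0, pco2_sea, pco2_air, sc=660.0):
    """u10: wind speed (m/s); k0: solubility (mol m-3 atm-1); pCO2 in uatm;
    sc: Schmidt number. Returns mol m-2 yr-1, positive = outgassing."""
    k_cm_hr = 0.251 * u10**2 * (sc / 660.0) ** -0.5  # gas transfer velocity
    k_m_yr = k_cm_hr * 24 * 365 / 100                # cm/hr -> m/yr
    return k_m_yr * k0 * (pco2_sea - pco2_air) * 1e-6  # uatm -> atm
```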
Han, Mira V; Thomas, Gregg W C; Lugo-Martinez, Jose; Hahn, Matthew W
2013-08-01
Current sequencing methods produce large amounts of data, but genome assemblies constructed from these data are often fragmented and incomplete. Incomplete and error-filled assemblies result in many annotation errors, especially in the number of genes present in a genome. This means that methods attempting to estimate rates of gene duplication and loss often will be misled by such errors and that rates of gene family evolution will be consistently overestimated. Here, we present a method that takes these errors into account, allowing one to accurately infer rates of gene gain and loss among genomes even with low assembly and annotation quality. The method is implemented in the newest version of the software package CAFE, along with several other novel features. We demonstrate the accuracy of the method with extensive simulations and reanalyze several previously published data sets. Our results show that errors in genome annotation do lead to higher inferred rates of gene gain and loss but that CAFE 3 sufficiently accounts for these errors to provide accurate estimates of important evolutionary parameters.
Estimation of Tooth Size Discrepancies among Different Malocclusion Groups.
Hasija, Narender; Bala, Madhu; Goyal, Virender
2014-05-01
Regards and Tribute: Late Dr Narender Hasija was a mentor and visionary in the light of knowledge and experience. We pay our regards with deepest gratitude to the departed soul, may it rest in peace. Bolton's ratios help in estimating overbite and overjet relationships, the effects of contemplated extractions on posterior occlusion, incisor relationships, and the identification of occlusal misfit produced by tooth size discrepancies. The aim was to determine any difference in tooth size discrepancy in the anterior as well as the overall ratio in different malocclusions, and to compare the results with Bolton's study. After measuring the teeth of all 100 patients, Bolton's analysis was performed. Results were compared with Bolton's means and standard deviations and were also subjected to statistical analysis. Results show that the means and standard deviations of the ideal occlusion cases are comparable with those of Bolton; however, when the means and standard deviations of the malocclusion groups are compared with Bolton's, the standard deviations are higher, though the means are comparable. How to cite this article: Hasija N, Bala M, Goyal V. Estimation of Tooth Size Discrepancies among Different Malocclusion Groups. Int J Clin Pediatr Dent 2014;7(2):82-85.
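Bolton's ratios themselves are simple width sums; here is a sketch using the standard formulas (the ideals of 77.2% and 91.3% are Bolton's published means, not values from this study):

```python
# Bolton's anterior and overall ratios from mesiodistal tooth widths (mm).
def bolton_ratios(mand_widths, max_widths):
    """Each list: the 12 mandibular or maxillary teeth, first molar to first
    molar, ordered so the first 6 entries are the anterior teeth."""
    anterior = 100.0 * sum(mand_widths[:6]) / sum(max_widths[:6])
    overall = 100.0 * sum(mand_widths) / sum(max_widths)
    return anterior, overall  # Bolton ideals: ~77.2% anterior, ~91.3% overall
```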
Neto, Félix; Furnham, Adrian
2011-05-01
In this study, 148 Portuguese adults (M = 45.4 years) rated themselves and their children on overall IQ and on H. Gardner's (1999) 10 intelligence subtypes. Men's self-estimates were not significantly higher than women's on any of the 11 estimates. The results were in line with previous studies, in that both sexes rated the overall intelligence of their first male children higher than that of their first female children. Higher parental IQ self-estimates corresponded with higher IQ estimates for children. Globally, parents estimated that their sons had significantly higher IQs than their daughters. In particular, parents rated their sons' spiritual intelligence higher than their daughters'. Children's age and sex, and parents' age and sex, were all non-significant predictors of the overall "g" score estimates of the first two children. Participants thought verbal, mathematical, and spatial intelligence were the best indicators of overall intelligence for self and children. There were no sex differences in experience of, or attitudes towards, intelligence testing. Results are discussed in terms of the growing literature on self-estimates of intelligence, as well as limitations of that approach.
The effect of bovine somatotropin on the cost of producing milk: Estimates using propensity scores.
Tauer, Loren W
2016-04-01
Annual farm-level data from New York dairy farms from the years 1994 through 2013 were used to estimate the cost effect from bovine somatotropin (bST) using propensity score matching. Cost of production was computed using the whole-farm method, which subtracts sales of crops and animals from total costs under the assumption that the cost of producing those products is equal to their sales values. For a farm to be included in this data set, milk receipts on that farm must have comprised 85% or more of total receipts, indicating that these farms are primarily milk producers. Farm use of bST, where 25% or more of the herd was treated, ranged annually from 25 to 47% of the farms. The average cost effect from the use of bST was estimated to be a reduction of $2.67 per 100 kg of milk produced in 2013 dollars, although annual cost reduction estimates ranged from a statistically insignificant zero to $3.42 in nominal dollars. Nearest neighbor matching techniques generated a similar estimate of $2.78 in 2013 dollars. These cost reductions estimated from the use of bST represented a cost savings of 5.5% per kilogram of milk produced. Herd-level production increase per cow from the use of bST over 20 yr averaged 1,160 kg.
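A minimal sketch of propensity score matching of the kind described, with hypothetical column names (not the author's code or the New York data):

```python
# Estimate the average treatment effect on the treated (ATT) by
# 1-nearest-neighbor matching on a logistic-regression propensity score.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def att_psm(df: pd.DataFrame, treat: str, outcome: str, covariates: list) -> float:
    X = df[covariates].to_numpy()
    t = df[treat].to_numpy()
    y = df[outcome].to_numpy()
    ps = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]
    treated = np.where(t == 1)[0]
    control = np.where(t == 0)[0]
    # for each treated unit, find the control unit with the closest score
    match = control[np.abs(ps[control][None, :] - ps[treated][:, None]).argmin(axis=1)]
    return float(np.mean(y[treated] - y[match]))
```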
Combining micro-structures and micro-algae to increase lipid production for bio-fuel
NASA Astrophysics Data System (ADS)
Vyawahare, Saurabh; Zhu, Emilly; Mestler, Troy; Estévez-Torres, André.; Austin, Robert
2011-03-01
Third-generation biofuels such as lipid-producing micro-algae are a promising source of energy that could reduce our dependence on petroleum. However, until there are improvements in algal oil yields and a reduction in the energy needed for processing, algae biofuels are not economically competitive with petroleum. Here, we describe our work combining micro-fabricated devices with the micro-alga Neochloris oleoabundans, a species first isolated on the sand dunes of Saudi Arabia. Inserting micro-algae of varying fitness into a landscape of micro-habitats allows us to evolve and select them based on a variety of conditions, such as specific gravity, starvation response, and Nile Red fluorescence (a marker for lipid production). Hence, we can both estimate the production of lipids and generate conditions that allow the creation and isolation of algae which produce higher amounts of lipids, while discarding the rest. Finally, we can use micro-fabricated structures and flocculation to de-water these high-lipid-producing algae, reducing the need for expensive centrifugation and filtration.
Nazir, Yusuf; Shuib, Shuwahida; Kalil, Mohd Sahaid; Song, Yuanda; Hamid, Aidil Abdul
2018-06-11
In this study, optimization of the growth, lipid, and DHA production of Aurantiochytrium SW1 was carried out using response surface methodology (RSM), optimizing initial fructose concentration, agitation speed, and monosodium glutamate (MSG) concentration. A central composite design was applied as the experimental design, and analysis of variance (ANOVA) was used to analyze the data. The ANOVA revealed that the process, adequately represented by a quadratic model, was significant (p < 0.0001) for all responses. All three factors were significant (p < 0.005) in influencing the biomass and lipid data, while only two factors (agitation speed and MSG) had a significant effect on DHA production (p < 0.005). The estimated optimal conditions for enhanced growth, lipid, and DHA production were 70 g/L fructose, 250 rpm agitation speed, and 10 g/L MSG. The quadratic model was then validated by applying the estimated optimum conditions, which confirmed its validity: 19.0 g/L biomass, 9.13 g/L lipid, and 4.75 g/L DHA were produced. Growth, lipid, and DHA were 28, 36, and 35% higher, respectively, than in the original medium prior to optimization.
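The second-order RSM fit behind such an analysis can be sketched as an ordinary least-squares model with linear, squared, and interaction terms; the column names below are assumptions:

```python
# Fit a full second-order response surface to a hypothetical data frame with
# columns fructose, rpm, msg, and response (not the authors' data).
import statsmodels.formula.api as smf

def fit_rsm(df):
    model = smf.ols(
        "response ~ fructose + rpm + msg"
        " + I(fructose**2) + I(rpm**2) + I(msg**2)"
        " + fructose:rpm + fructose:msg + rpm:msg",
        data=df).fit()
    return model  # inspect model.params and model.summary() for the ANOVA
```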
Mechanisms of force production during linear accelerations in bluegill sunfish Lepomis macrochirus
NASA Astrophysics Data System (ADS)
Tytell, Eric D.; Wise, Tyler N.; Boden, Alexandra L.; Sanders, Erin K.; Schwalbe, Margot A. B.
2016-11-01
In nature, fish rarely swim steadily. Although unsteady behaviors are common, we know little about how fish change their swimming kinematics for routine accelerations, and how these changes affect the fluid dynamic forces and the wake produced. To study force production during acceleration, particle image velocimetry was used to quantify the wake of bluegill sunfish Lepomis macrochirus and to estimate the pressure field during linear accelerations and steady swimming. We separated "steady" and "unsteady" trials and quantified the forward acceleration using inertial measurement units. Compared to steady sequences, unsteady sequences had larger accelerations and higher body amplitudes. The wake consisted of single vortices shed during each tail movement (a '2S' wake). The structure did not change during acceleration, but the circulation of the vortices increased, resulting in larger forces. A fish swimming unsteadily produced significantly more force than the same fish swimming steadily, even when the accelerations were the same. This increase is likely due to increased added mass during unsteady swimming, as a result of the larger body amplitude. Pressure estimates suggest that the increase in force is correlated with more low pressure regions on the anterior body. This work was supported by ARO W911NF-14-1-0494 and NSF RCN-PLS 1062052.
Olson, D.W.
2013-01-01
Estimated 2012 world production of natural and synthetic industrial diamond was about 4.45 billion carats. During 2012, natural industrial diamonds were produced in at least 20 countries, and synthetic industrial diamond was produced in at least 12 countries. About 99 percent of the combined natural and synthetic global output was produced in Belarus, China, Ireland, Japan, Russia, South Africa and the United States. During 2012, China was the world’s leading producer of synthetic industrial diamond followed by the United States and Russia. In 2012, the two U.S. synthetic producers, one in Pennsylvania and the other in Ohio, had an estimated output of 103 million carats, valued at about $70.6 million. This was an estimated 43.7 million carats of synthetic diamond bort, grit, and dust and powder with a value of $14.5 million combined with an estimated 59.7 million carats of synthetic diamond stone with a value of $56.1 million. Also in 2012, nine U.S. firms manufactured polycrystalline diamond (PCD) from synthetic diamond grit and powder. The United States government does not collect or maintain data for either domestic PCD producers or domestic chemical vapor deposition (CVD) diamond producers for quantity or value of annual production. Current trade and consumption quantity data are not available for PCD or for CVD diamond. For these reasons, PCD and CVD diamond are not included in the industrial diamond quantitative data reported here.
Producing HIV estimates: from global advocacy to country planning and impact measurement
Mahy, Mary; Brown, Tim; Stover, John; Walker, Neff; Stanecki, Karen; Kirungi, Wilford; Garcia-Calleja, Txema; Ghys, Peter D.
2017-01-01
Background: The development of global HIV estimates has been critical for understanding, advocating for, and funding the HIV response. The process of generating HIV estimates has been cited as the gold standard for public health estimates. Objective: This paper provides important lessons from an international scientific collaboration and provides a useful model for those producing public health estimates in other fields. Design: Through the compilation and review of published journal articles, United Nations reports, other documents, and personal experience, we compiled historical information about the estimates and identified potential lessons for other public health estimation efforts. Results: Through the development of core partnerships with country teams, implementers, demographers, mathematicians, epidemiologists, and international organizations, UNAIDS has led a process to develop the capacity of country teams to produce internationally comparable HIV estimates. The guidance provided by these experts has led to refinements in the estimated numbers of people living with HIV, new HIV infections, and AIDS-related deaths over the past 20 years. A number of important updates to the methods since 1997 resulted in fluctuations in the estimated levels, trends, and impact of HIV. The largest correction occurred between the 2005 and 2007 rounds with the addition of household survey data into the models. In 2001, the UNAIDS models of the time estimated there were 40 million people living with HIV; in 2016, improved models estimated there were 30 million (27.6-32.7 million) people living with HIV in 2001. Conclusions: Country ownership of the estimation tools has allowed the results to be used more widely than if they had been produced by researchers or a team in Geneva. Guidance from a reference group and input from country teams have led to critical improvements in the models over time. Those changes have improved countries' and stakeholders' understanding of the HIV epidemic. PMID:28532304
Near real-time estimation of burned area using VIIRS 375 m active fire product
NASA Astrophysics Data System (ADS)
Oliva, P.; Schroeder, W.
2016-12-01
Every year, more than 300 million hectares of land burn globally, causing significant ecological and economic consequences, and associated climatological effects as a result of fire emissions. In recent decades, burned area estimates generated from satellite data have provided systematic global information for ecological analysis of fire impacts, climate and carbon cycle models, and fire regime studies, among many others. However, there is still a need for near real-time burned area estimates in order to assess the impacts of fire and estimate smoke and emissions. The enhanced characteristics of the Visible Infrared Imaging Radiometer Suite (VIIRS) 375 m channels on board the Suomi National Polar-orbiting Partnership (S-NPP) make possible the use of near real-time active fire detection data for burned area estimation. In this study, consecutive VIIRS 375 m active fire detections were aggregated to produce the VIIRS 375 m burned area (BA) estimation over ten ecologically diverse study areas. The accuracy of the BA estimations was assessed by comparison with Landsat-8 supervised burned area classifications. The performance of the VIIRS 375 m BA estimates depended on ecosystem characteristics and fire behavior. Higher accuracy was observed in forested areas characterized by large, long-duration fires, while grasslands, savannas, and agricultural areas showed the highest omission and commission errors. Complementing those analyses, we performed burned area estimations of the largest fires in Oregon and Washington states during 2015 and the Fort McMurray fire in Canada in 2016. The results showed good agreement with NIROPS airborne fire perimeters, proving that the VIIRS 375 m BA estimations can be used for near real-time assessments of fire effects.
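A toy version of the aggregation step, under the strong assumption that burned area is simply the count of unique ~375 m grid cells containing detections (the authors' actual algorithm is more involved):

```python
# Bin active-fire detections (lon/lat pairs) to a ~375 m grid and count
# unique burned cells; hypothetical inputs, not the paper's implementation.
def burned_area_km2(lons, lats, cell_deg=0.00337):  # ~375 m at the equator
    cells = {(round(lon / cell_deg), round(lat / cell_deg))
             for lon, lat in zip(lons, lats)}
    return len(cells) * 0.375**2  # km2, one cell ~ one 375 m pixel footprint
```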
Griscom, Bronson W; Ellis, Peter W; Baccini, Alessandro; Marthinus, Delon; Evans, Jeffrey S; Ruslandi
2016-01-01
Forest conservation efforts are increasingly being implemented at the scale of sub-national jurisdictions in order to mitigate global climate change and provide other ecosystem services. We see an urgent need for robust estimates of historic forest carbon emissions at this scale, as the basis for credible measures of climate and other benefits achieved. Despite the arrival of a new generation of global datasets on forest area change and biomass, confusion remains about how to produce credible jurisdictional estimates of forest emissions. We demonstrate a method for estimating the relevant historic forest carbon fluxes within the Regency of Berau in eastern Borneo, Indonesia. Our method integrates the best available global and local datasets and includes a comprehensive analysis of uncertainty at the regency scale. We find that Berau generated 8.91 ± 1.99 million tonnes of net CO2 emissions per year during 2000-2010. Berau is an early frontier landscape where gross emissions are 12 times higher than gross sequestration. Yet most (85%) of Berau's original forests are still standing. The majority of net emissions were due to conversion of native forests to unspecified agriculture (43% of total), oil palm (28%), and fiber plantations (9%). Most of the remainder was due to legal commercial selective logging (17%). Our overall uncertainty estimate offers an independent basis for assessing three other estimates for Berau; two of those estimates were above the upper end of our uncertainty range. We emphasize the importance of including an uncertainty range for all parameters of the emissions equation to generate a comprehensive uncertainty estimate, which has not been done before. We believe comprehensive estimates of carbon flux uncertainty are increasingly important as national and international institutions are challenged with comparing alternative estimates and identifying a credible range of historic emissions values.
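The comprehensive uncertainty analysis the authors emphasize can be illustrated with a Monte Carlo propagation through the emissions equation; all numbers below are placeholders, not Berau values:

```python
# Hedged Monte Carlo sketch: sample each parameter of a simple emissions
# equation and report a percentile range (illustrative numbers only).
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
area = rng.normal(10_000, 1_000, n)        # ha/yr cleared (assumed)
carbon = rng.normal(150, 20, n)            # tC/ha in cleared forest (assumed)
sink = rng.normal(1.0e5, 2.0e4, n)         # tCO2/yr regrowth sink (assumed)
emissions = area * carbon * 44 / 12 - sink # net tCO2/yr
lo, mid, hi = np.percentile(emissions, [5, 50, 95])
print(f"net emissions: {mid:.3g} tCO2/yr (90% range {lo:.3g}-{hi:.3g})")
```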
Virtual Sensors: Using Data Mining Techniques to Efficiently Estimate Remote Sensing Spectra
NASA Technical Reports Server (NTRS)
Srivastava, Ashok N.; Oza, Nikunj; Stroeve, Julienne
2004-01-01
Various instruments are used to create images of the Earth and other objects in the universe in a diverse set of wavelength bands with the aim of understanding natural phenomena. These instruments are sometimes built in a phased approach, with some measurement capabilities being added in later phases. In other cases, there may not be a planned increase in measurement capability, but technology may mature to the point that it offers new measurement capabilities that were not available before. In still other cases, detailed spectral measurements may be too costly to perform on a large sample. Thus, lower resolution instruments with lower associated cost may be used to take the majority of measurements. Higher resolution instruments, with a higher associated cost may be used to take only a small fraction of the measurements in a given area. Many applied science questions that are relevant to the remote sensing community need to be addressed by analyzing enormous amounts of data that were generated from instruments with disparate measurement capability. This paper addresses this problem by demonstrating methods to produce high accuracy estimates of spectra with an associated measure of uncertainty from data that is perhaps nonlinearly correlated with the spectra. In particular, we demonstrate multi-layer perceptrons (MLPs), Support Vector Machines (SVMs) with Radial Basis Function (RBF) kernels, and SVMs with Mixture Density Mercer Kernels (MDMK). We call this type of an estimator a Virtual Sensor because it predicts, with a measure of uncertainty, unmeasured spectral phenomena.
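A minimal sketch of a virtual sensor as an ensemble of MLP regressors, with ensemble spread as a rough uncertainty proxy (the paper's Mixture Density Mercer Kernels are not reproduced here; data shapes are assumptions):

```python
# Regress an unmeasured spectral band from measured bands; return the
# ensemble mean as the estimate and the spread as a crude uncertainty.
import numpy as np
from sklearn.neural_network import MLPRegressor

def virtual_sensor(X_train, y_train, X_new, n_models=10):
    preds = []
    for seed in range(n_models):
        m = MLPRegressor(hidden_layer_sizes=(64,), random_state=seed,
                         max_iter=2000).fit(X_train, y_train)
        preds.append(m.predict(X_new))
    preds = np.asarray(preds)
    return preds.mean(axis=0), preds.std(axis=0)  # estimate, uncertainty
```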
Estimated lead (Pb) exposures for a population of urban community gardeners.
Spliethoff, Henry M; Mitchell, Rebecca G; Shayler, Hannah; Marquez-Bravo, Lydia G; Russell-Anelli, Jonathan; Ferenz, Gretchen; McBride, Murray
2016-08-01
Urban community gardens provide affordable, locally grown, healthy foods and many other benefits. However, urban garden soils can contain lead (Pb) that may pose risks to human health. To help evaluate these risks, we measured Pb concentrations in soil, vegetables, and chicken eggs from New York City community gardens, and we asked gardeners about vegetable consumption and time spent in the garden. We then estimated Pb intakes deterministically and probabilistically for adult gardeners, children who spend time in the garden, and adult (non-gardener) household members. Most central tendency Pb intakes were below provisional total tolerable intake (PTTI) levels. High contact intakes generally exceeded PTTIs. Probabilistic estimates showed approximately 40 % of children and 10 % of gardeners exceeding PTTIs. Children's exposure came primarily from dust ingestion and exposure to higher Pb soil between beds. Gardeners' Pb intakes were comparable to children's (in µg/day) but were dominated by vegetable consumption. Adult household members ate less garden-grown produce than gardeners and had the lowest Pb intakes. Our results suggest that healthy gardening practices to reduce Pb exposure in urban community gardens should focus on encouraging cultivation of lower Pb vegetables (i.e., fruits) for adult gardeners and on covering higher Pb non-bed soils accessible to young children. However, the common practice of replacement of root-zone bed soil with clean soil (e.g., in raised beds) has many benefits and should also continue to be encouraged.
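The deterministic side of such an intake estimate reduces to a concentration-times-intake-rate sum; the sketch below uses illustrative values, not the study's measurements:

```python
# Illustrative deterministic Pb intake (assumed values, not study data):
# intake (ug/day) = soil/dust ingestion term + garden produce term.
def pb_intake_ug_day(soil_ug_g, soil_mg_day, veg_ug_g, veg_g_day):
    soil = soil_ug_g * soil_mg_day / 1000.0  # ug/g x mg/day -> ug/day
    veg = veg_ug_g * veg_g_day               # ug/g x g/day  -> ug/day
    return soil + veg

# e.g. a child: 500 ug/g soil, 100 mg/day ingested, 0.5 ug/g produce, 50 g/day
# -> 50 + 25 = 75 ug/day, to be compared against the relevant PTTI
```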
Shear-induced aggregation dynamics in a polymer microrod suspension
NASA Astrophysics Data System (ADS)
Kumar, Pramukta S.
A non-Brownian suspension of micron-scale rods is found to exhibit reversible shear-driven formation of disordered aggregates, resulting in dramatic viscosity enhancement at low shear rates. Aggregate formation is imaged at low magnification using a combined rheometer and fluorescence microscope system. The size and structure of these aggregates are found to depend on shear rate and concentration, with larger aggregates present at lower shear rates and higher concentrations. Quantitative measurements of the early-stage aggregation process are modeled by collision-driven growth of porous structures, which shows that aggregate density increases with shear rate. A Krieger-Dougherty type constitutive relation and steady-state viscosity measurements are used to estimate the intrinsic viscosity of complex structures developed under shear. Higher magnification images are collected and used to validate the aggregate size versus density relationship, as well as to obtain particle flow fields via PIV. The flow fields provide a tantalizing view of fluctuations involved in the aggregation process. Interaction strength is estimated via contact force measurements and JKR theory and found to be extremely strong in comparison to the shear forces present in the system, estimated using hydrodynamic arguments. All of the results are then combined to produce a consistent conceptual model of aggregation in the system that features testable consequences. These results represent a direct, quantitative, experimental study of aggregation and viscosity enhancement in a rod suspension, and demonstrate a strategy for inferring inaccessible microscopic geometric properties of a dynamic system through the combination of quantitative imaging and rheology.
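The Krieger-Dougherty relation referred to above, eta_r = (1 - phi/phi_max)^(-[eta] phi_max), can be inverted for the intrinsic viscosity [eta] from a measured relative viscosity; a sketch:

```python
# Invert the Krieger-Dougherty relation for the intrinsic viscosity [eta],
# given relative viscosity eta_r, volume fraction phi, and a maximum packing
# fraction phi_max (phi_max must be assumed or fitted for the structures).
import numpy as np

def intrinsic_viscosity(eta_r, phi, phi_max):
    # eta_r = (1 - phi/phi_max) ** (-[eta] * phi_max)
    return -np.log(eta_r) / (phi_max * np.log(1.0 - phi / phi_max))
```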
Computational material design for Q&P steels with plastic instability theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, G.; Choi, K. S.; Hu, X. H.
In this paper, the deformation limits of Quenching and Partitioning (Q&P) steels are examined with plastic instability theory. For this purpose, the constituent phase properties of various Q&P steels were first experimentally obtained and used to estimate the overall tensile stress-strain curves based on the simple rule of mixtures (ROM) with the iso-strain and iso-stress assumptions. Plastic instability theory was then applied to the obtained overall stress-strain curves in order to estimate the deformation limits of the Q&P steels. A parametric study was also performed to examine the effects of various material parameters on the deformation limits of Q&P steels. Computational material design was subsequently carried out based on the information obtained from the parametric study. The results show that plastic instability theory with the iso-stress-based stress-strain curve may be used to provide a lower-bound estimate of the uniform elongation (UE) for the various Q&P steels considered. The results also indicate that higher austenite stability/volume fractions, a smaller strength difference between the primary phases, and higher hardening exponents of the constituent phases are generally beneficial for the performance of Q&P steels, and that various material parameters may be adjusted concurrently in a cohesive way to improve the performance of Q&P steel. The information from this study may be used to devise new heat treatment parameters and alloying elements to produce Q&P steels with improved performance.
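As a concrete illustration of the instability criterion such an analysis applies, here is a hedged sketch (illustrative Hollomon parameters, not measured Q&P phase properties) that builds an iso-strain ROM flow curve for a two-phase microstructure and reads off the uniform elongation from the Considere condition d(sigma)/d(eps) = sigma:

```python
# Iso-strain rule-of-mixtures flow curve + Considere instability criterion.
import numpy as np

eps = np.linspace(1e-4, 0.5, 5000)      # true strain grid
sigma_m = 1200 * eps**0.08              # martensite-like phase (MPa), assumed
sigma_a = 900 * eps**0.30               # austenite-like phase (MPa), assumed
f_a = 0.15                              # retained austenite fraction, assumed
sigma = (1 - f_a) * sigma_m + f_a * sigma_a   # iso-strain ROM composite curve

dsig = np.gradient(sigma, eps)
ue = eps[np.argmax(dsig <= sigma)]      # first strain where hardening rate
print(f"estimated uniform elongation ~ {ue:.3f}")  # drops to the flow stress
```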
Functioning efficiency of intermediate coolers of multistage steam-jet ejectors of steam turbines
NASA Astrophysics Data System (ADS)
Aronson, K. E.; Ryabchikov, A. Yu.; Brodov, Yu. M.; Zhelonkin, N. V.; Murmanskii, I. B.
2017-03-01
Designs of various types of intermediate coolers of multistage ejectors are analyzed, and the thermal effectiveness and gas-dynamic resistance of the coolers are estimated. Data on the quantity of steam condensed from the steam-air mixture in stage I of an ejector cooler were obtained on the basis of experimental results. It is established that the fraction of steam condensed in the cooler is 0.6-0.7 and is almost independent of operating steam pressure (and, consequently, of steam flow) and of the amount of air in the steam-air mixture. We suggest estimating the amount of steam condensed in the stage I cooler from a comparison of the computed and experimental characteristics of stage II. Computations based on this hypothesis for the main types of mass-produced multistage ejectors show that 0.60-0.85 of the steam should be condensed in the stage I cooler. For ejectors with "pipe-in-pipe" type coolers (EPO-3-200) and helical coolers (EO-30), the fraction of condensed steam may reach 0.93-0.98. Estimation of the gas-dynamic resistance of the coolers shows that the resistance on the steam side in coolers with built-in and remote pipe bundles is 100-300 Pa. The gas-dynamic resistance of "pipe-in-pipe" and helical type coolers is significantly (3-6 times) higher than that of pipe bundles. However, performance on "dry" (atmospheric) air is higher for ejectors with relatively high gas-dynamic resistance of coolers than for those with low resistance at approximately equal operating flow values.
Methyl mercury dynamics in a tidal wetland quantified using in situ optical measurements
Bergamaschi, B.A.; Fleck, J.A.; Downing, B.D.; Boss, E.; Pellerin, B.; Ganju, N.K.; Schoellhamer, D.H.; Byington, A.A.; Heim, W.A.; Stephenson, M.; Fujii, R.
2011-01-01
We assessed monomethylmercury (MeHg) dynamics in a tidal wetland over three seasons using a novel method that employs a combination of in situ optical measurements as concentration proxies. MeHg concentrations measured over a single spring tide were extended to a concentration time series using in situ optical measurements. Tidal fluxes were calculated using modeled concentrations and bi-directional velocities obtained acoustically. The magnitude of the flux was the result of complex interactions of tides, geomorphic features, particle sorption, and random episodic events such as wind storms and precipitation. Correlation of dissolved organic matter quality measurements with timing of MeHg release suggests that MeHg is produced in areas of fluctuating redox and not limited by buildup of sulfide. The wetland was a net source of MeHg to the estuary in all seasons, with particulate flux being much higher than dissolved flux, even though dissolved concentrations were commonly higher. Estimated total MeHg yields out of the wetland were approximately 2.5 μg m−2 yr−1 (4-40 times previously published yields), representing a potential loading to the estuary of 80 g yr−1, equivalent to 3% of the river loading. Thus, export from tidal wetlands should be included in mass balance estimates for MeHg loading to estuaries. Also, adequate estimation of loads and the interactions between physical and biogeochemical processes in tidal wetlands might not be possible without long-term, high-frequency in situ measurements.
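The tidal flux calculation described (proxy-derived concentrations times acoustically measured discharge) amounts to integrating C·Q over the record; a toy sketch with assumed units and sign convention:

```python
# Net tidal load from paired concentration and discharge series
# (hypothetical inputs; positive Q taken as ebb, i.e. export).
import numpy as np

def net_mehg_load_g(conc_ng_L, q_m3_s, dt_s):
    # ng/L x m3/s = ug/s; multiplying by dt gives ug; 1e-6 converts to grams
    return float(np.sum(conc_ng_L * q_m3_s * dt_s) * 1e-6)
```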
Estimation of indoor and outdoor ratios of selected volatile organic compounds in Canada
NASA Astrophysics Data System (ADS)
Xu, Jing; Szyszkowicz, Mieczyslaw; Jovic, Branka; Cakmak, Sabit; Austin, Claire C.; Zhu, Jiping
2016-09-01
The indoor-to-outdoor air concentration (I/O) ratio can be used to identify the origins of volatile organic compounds (VOCs). I/O ratios of 25 VOCs in Canada were estimated based on data collected in various areas of Canada between September 2009 and December 2011. The indoor VOC data were extracted from the Canadian Health Measures Survey (CHMS); outdoor VOC data were obtained from Canada's National Air Pollution Surveillance (NAPS) Network. The sampling locations covered nine areas in six provinces in Canada. Indoor air concentrations were found to be higher than outdoor concentrations for all studied VOCs except carbon tetrachloride. Two different approaches were employed to estimate the I/O ratios; both produced similar I/O values. The I/O ratios obtained from this study were similar to those of two other Canadian studies in which the indoor and outdoor air of individual dwellings was measured. However, the I/O ratios found in Canada were higher than those in European cities and in two large US cities, possibly because the outdoor air concentrations recorded in the Canadian studies were lower. The possible source origins identified for the studied VOCs based on their I/O ratios were similar to those reported by others. In general, chlorinated hydrocarbons, short-chain (C5, C6) n-alkanes, and benzene had significant outdoor sources, while long-chain (C10–C12) n-alkanes, terpenes, naphthalene, and styrene had significant indoor sources. The remaining VOCs had mixed indoor and outdoor sources.
NASA Astrophysics Data System (ADS)
Zhang, Min; Gong, Zhaoning; Zhao, Wenji; Pu, Ruiliang; Liu, Ke
2016-01-01
Mapping vegetation abundance using remote sensing data is an efficient means of detecting changes in an eco-environment. With Landsat-8 Operational Land Imager (OLI) imagery acquired on July 31, 2013, both linear spectral mixture analysis (LSMA) and multinomial logit model (MNLM) methods were applied to estimate and assess vegetation abundance in the Wild Duck Lake Wetland in Beijing, China. To improve the mapping of vegetation abundance and increase the number of endmembers in the spectral mixture analysis, the normalized difference vegetation index was extracted from the OLI imagery and used along with the seven reflective bands of the OLI data. Five endmembers were selected: terrestrial plants, aquatic plants, bare soil, high albedo, and low albedo. The vegetation abundance maps derived from the Landsat OLI data were finally evaluated using WorldView-2 multispectral imagery. Both the fully constrained LSMA algorithm and the MNLM method produced similar spatial patterns of vegetation abundance: higher abundance levels were distributed in agricultural and riparian areas, and lower levels in urban/built-up areas. The experimental results also indicate that the MNLM model outperformed the LSMA algorithm, with a smaller root mean square error (0.0152 versus 0.0252) and a higher coefficient of determination (0.7856 versus 0.7214), as the MNLM model handles the nonlinear reflection of mixed pixels better than the LSMA.
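Fully constrained LSMA solves a nonnegative, sum-to-one least-squares problem per pixel; a common sketch uses an augmented NNLS system (illustrative, not the authors' implementation):

```python
# Fully constrained linear spectral unmixing for one pixel: E is the
# (n_bands x n_endmembers) endmember matrix, pixel the observed spectrum.
import numpy as np
from scipy.optimize import nnls

def fclsu(E, pixel, weight=1e5):
    # enforce sum-to-one by appending a heavily weighted constraint row,
    # then solve the nonnegative least-squares problem
    E_aug = np.vstack([E, weight * np.ones(E.shape[1])])
    b_aug = np.append(pixel, weight)
    abundances, _ = nnls(E_aug, b_aug)
    return abundances  # nonnegative fractions, approximately summing to one
```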
NASA Astrophysics Data System (ADS)
Hattori, S.; Kimura, H.; Nashimoto, H.; Koba, K.; Yamada, K.; Shimizu, M.; Watanabe, H.; Yoh, M.; Yoshida, N.
2009-04-01
The sedimentary layer in the southern part of Japan is an accretionary prism that includes enriched organic materials derived from sediment on the oceanic plate. It hosts a geothermal aquifer in which a large amount of methane (CH4) is dissolved. Since CH4 is important both as a greenhouse gas and as a natural gas fuel, the CH4-producing processes in subsurface environments need to be revealed. To understand the CH4 production process, we collected groundwater from the aquifer at 1,189-1,489 m depth and analyzed it using stable isotope and microbial analyses. 16S rRNA gene analysis showed a dominance of hydrogenotrophic methanogens in the domain Archaea and a dominance of anaerobic heterotrophs known to produce H2 and CO2 by fermentation in the domain Bacteria. Anaerobic enrichment cultures of the groundwater amended with organic substrates showed that CH4 was produced by co-culture between the fermenters and hydrogenotrophic methanogens. On the other hand, conventional isotopic estimations of the origin of CH4 using δ13C-CH4 and δD-CH4, as well as δ13C-CH4 and the molecular ratio C1/(C2+C3), indicated that the CH4 was derived from a thermogenic pathway. The values of δ13C-CO2, however, were higher, and the carbon isotope fractionation factors between CH4 and CO2, α(CO2-CH4), were approximately 1.05 to 1.06, indicating the possibility of biogenic CH4 production. Therefore, from the isotopic data, the origin of the CH4 was estimated to be a mixture of thermogenic production and microbial CO2 reduction. Furthermore, we incubated these enriched co-cultures and measured the stable carbon isotope ratios of CH4 and CO2 and the stable hydrogen isotope ratios of H2O and CH4. We found that H2 concentrations were kept low by the co-cultures of fermenters and hydrogenotrophic methanogens, and that α(CO2-CH4) values were higher than those of cultures amended with high concentrations of H2 + CO2. The hydrogen isotope fractionation between H2O and CH4 in these co-cultures increased (αH values decreased) with increasing H2 concentration.
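The fractionation factor used above follows directly from delta notation; a small check with illustrative values:

```python
# Standard delta-notation fractionation factor (generic formula, not
# specific to this study): alpha = (1000 + d13C_CO2) / (1000 + d13C_CH4).
def alpha_co2_ch4(d13c_co2, d13c_ch4):
    return (1000.0 + d13c_co2) / (1000.0 + d13c_ch4)

# e.g. d13C-CO2 = -10 permil, d13C-CH4 = -60 permil -> alpha ~ 1.053,
# within the 1.05-1.06 range associated above with microbial CO2 reduction.
```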
Measurements of methane emissions at natural gas production sites in the United States.
Allen, David T; Torres, Vincent M; Thomas, James; Sullivan, David W; Harrison, Matthew; Hendler, Al; Herndon, Scott C; Kolb, Charles E; Fraser, Matthew P; Hill, A Daniel; Lamb, Brian K; Miskimins, Jennifer; Sawyer, Robert F; Seinfeld, John H
2013-10-29
Engineering estimates of methane emissions from natural gas production have led to varied projections of national emissions. This work reports direct measurements of methane emissions at 190 onshore natural gas sites in the United States (150 production sites, 27 well completion flowbacks, 9 well unloadings, and 4 workovers). For well completion flowbacks, which clear fractured wells of liquid to allow gas production, methane emissions ranged from 0.01 Mg to 17 Mg (mean = 1.7 Mg; 95% confidence bounds of 0.67-3.3 Mg), compared with an average of 81 Mg per event in the 2011 EPA national emission inventory from April 2013. Emission factors for pneumatic pumps and controllers as well as equipment leaks were both comparable to and higher than estimates in the national inventory. Overall, if emission factors from this work for completion flowbacks, equipment leaks, and pneumatic pumps and controllers are assumed to be representative of national populations and are used to estimate national emissions, total annual emissions from these source categories are calculated to be 957 Gg of methane (with sampling and measurement uncertainties estimated at ± 200 Gg). The estimate for comparable source categories in the EPA national inventory is ~1,200 Gg. Additional measurements of unloadings and workovers are needed to produce national emission estimates for these source categories. The 957 Gg in emissions for completion flowbacks, pneumatics, and equipment leaks, coupled with EPA national inventory estimates for other categories, leads to an estimated 2,300 Gg of methane emissions from natural gas production (0.42% of gross gas production).
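The national scale-up arithmetic is a sum of emission factors times activity counts; in the sketch below only the 1.7 Mg/event flowback mean follows the abstract, and the other numbers are placeholders, not inventory values:

```python
# Hedged scale-up sketch: national emissions = sum over source categories of
# (mean emission factor) x (national activity count).
sources = {
    "completion_flowbacks": (1.7, 8000),    # Mg/event (abstract), events/yr (assumed)
    "equipment_leaks":      (0.5, 500000),  # Mg/site-yr and site count (assumed)
}
total_Gg = sum(ef * n for ef, n in sources.values()) / 1000.0  # Mg -> Gg
print(f"~{total_Gg:.0f} Gg CH4/yr under these assumptions")
```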
Statistical Cost Estimation in Higher Education: Some Alternatives.
ERIC Educational Resources Information Center
Brinkman, Paul T.; Niwa, Shelley
Recent developments in econometrics that are relevant to the task of estimating costs in higher education are reviewed. The relative effectiveness of alternative statistical procedures for estimating costs are also tested. Statistical cost estimation involves three basic parts: a model, a data set, and an estimation procedure. Actual data are used…
Patient-specific FDG dosimetry for adult males, adult females, and very low birth weight infants
NASA Astrophysics Data System (ADS)
Niven, Erin
Fluorodeoxyglucose is the most commonly used radiopharmaceutical in Positron Emission Tomography, with applications in neurology, cardiology, and oncology. Despite its routine use worldwide, the radiation absorbed dose estimates from FDG have been based primarily on data obtained from two dogs studied in 1977 and 11 adults (most likely males) studied in 1982. In addition, the dose estimates calculated for FDG have been centered on the adult male, with little or no mention of variations in the dose estimates due to sex, age, height, weight, nationality, diet, or pathological condition. Through an extensive investigation into the Medical Internal Radiation Dose schema for calculating absorbed doses, I have developed a simple patient-specific equation; this equation incorporates the parameters necessary for alterations to the mathematical values of the human model to produce an estimate more representative of the individual under consideration. I have used this method to determine the range of absorbed doses to FDG from the collection of a large quantity of biological data obtained in adult males, adult females, and very low birth weight infants. Therefore, a more accurate quantification of the dose to humans from FDG has been completed. My results show that per unit administered activity, the absorbed dose from FDG is higher for infants compared to adults, and the dose for adult women is higher than for adult men. Given an injected activity of approximately 3.7 MBq kg-1, the doses for adult men, adult women, and full-term newborns would be on the order of 5.5, 7.1, and 2.8 mSv, respectively. These absorbed doses are comparable to the doses received from other nuclear medicine procedures.
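The MIRD schema referenced here is, at its core, dose = sum over source organs of (cumulated activity x S value); below is a hedged, generic sketch with crude organ-mass scaling standing in for the thesis' patient-specific equation, which is not reproduced:

```python
# Generic MIRD-style dose bookkeeping (hypothetical inputs and organ names).
def absorbed_dose_mGy(cumulated_act_MBq_s, s_values, ref_mass_g, patient_mass_g):
    """cumulated_act_MBq_s, s_values: dicts keyed by source organ; S values
    in mGy per MBq-s; masses in grams. Mass scaling S ~ 1/m is a crude
    approximation for the self-absorbed (non-penetrating) component."""
    dose = 0.0
    for organ, a_tilde in cumulated_act_MBq_s.items():
        s = s_values[organ]
        if organ in ref_mass_g:
            s *= ref_mass_g[organ] / patient_mass_g[organ]
        dose += a_tilde * s
    return dose
```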
Growth and smolting in lower-mode Atlantic Salmon stocked into the Penobscot River, Maine
Zydlewski, Joseph D.; O'Malley, Andrew; Cox, Oliver; Ruksznis, Peter; Trial, Joan G.
2014-01-01
Restoration of Atlantic Salmon Salmo salar in Maine has relied on hatchery-produced fry and smolts for critical stocking strategies. Stocking fry minimizes domestication selection, but these fish have poor survival. Conversely, stocked smolts have little freshwater experience but provide higher adult returns. Lower-mode (LM) fish, those not growing fast enough to ensure smolting by the time of stocking, are a by-product of the smolt program and are an intermediate hatchery product. From 2002 to 2009, between 70,000 and 170,000 marked LM Atlantic Salmon were stocked into the Pleasant River (a tributary in the Penobscot River drainage, Maine) in late September to early October. These fish were recaptured as actively migrating smolts (screw trapping), as nonmigrants (electrofishing), and as returning adults to the Penobscot River (Veazie Dam trap). Fork length (FL) was measured and a scale sample was taken to retrospectively estimate FL at winter annulus one (FW1) using the intercept-corrected direct proportion model. The LM fish were observed to migrate as age-1, age-2, and infrequently as age-3 smolts. Those migrating as age-1 smolts had a distinctly larger estimated FL at FW1 (>112 mm) than those that remained in the river for at least one additional year. At the time of migration, age-2 and age-3 smolts were substantially larger than age-1 smolts. Returning adult Atlantic Salmon of LM origin had estimated FLs at FW1 that corresponded to smolt age (greater FL for age 1 than age 2). The LM product produces both age-1 and age-2 smolts that have greater freshwater experience than hatchery smolts and may have growth and fitness advantages. The data from this study will allow managers to better assess the probability of smolting age and manipulate hatchery growth rates to produce a targeted-size LM product.
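The intercept-corrected direct proportion (Fraser-Lee) back-calculation named in the abstract is a one-line formula; the intercept value below is illustrative, not the study's fitted value:

```python
# Fraser-Lee back-calculation of fork length at a scale annulus:
# L_i = c + (L_c - c) * (S_i / S_c), with c the body-scale regression intercept.
def fl_at_annulus(fl_capture_mm, s_annulus, s_capture, c=35.0):
    """fl_capture_mm: fork length at capture; s_*: scale radii (same units)."""
    return c + (fl_capture_mm - c) * (s_annulus / s_capture)
```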
NASA Astrophysics Data System (ADS)
Weyant, C.; Athalye, V.; Ragavan, S.; Rajarathnam, U.; Kr, B.; Lalchandani, D.; Maithel, S.; Malhotra, G.; Bhanware, P.; Thoa, V.; Phuong, N.; Baum, E.; Bond, T. C.
2012-12-01
About 150-200 billion clay bricks are produced in India every year. Most of these bricks are fired in small-scale traditional kilns that burn coal or biomass without pollution controls. Reddy and Venkataraman (2001) estimated that 8% of fossil fuel related PM2.5 emissions and 23% of black carbon emissions in India are released from brick production. Few direct emissions measurements have been done in this industry and black carbon emissions, in particular, have not been previously measured. In this study, 9 kilns representing five common brick kiln technologies were tested for aerosol properties and gaseous pollutant emissions, including optical scattering and absorption and thermal-optical OC/EC. Simple relationships are then used to estimate the radiative-forcing impact. Kiln design and fuel quality greatly affect the overall emission profiles and relative climate warming. Batch production kilns, such as the Downdraft kiln, produce the most PM2.5 (0.97 gPM2.5/fired brick) with an OC/EC fraction of 0.3. Vertical Shaft Brick kilns using internally mixed fuels produce the least PM (0.09 gPM2.5/kg fired brick) with the least EC (OC/EC = 16.5), but these kilns are expensive to implement and their use throughout Southern Asia is minimal. The most popular kiln in India, the Bull's Trench kiln, had fewer emissions per brick than the Downdraft kiln, but an even higher EC fraction (OC/EC = 0.05). The Zig-zag kiln is similar in structure to the Bull's Trench kiln, but the emission factors are significantly lower: 50% reduction for CO, 17% for PM2.5 and 60% for black carbon. This difference in emissions suggests that converting traditional Bull's Trench kilns into less polluting Zig-zag kilns would result in reduced atmospheric warming from brick production.
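An inventory built from such emission factors is production times EF summed over kiln types; in this sketch only the Downdraft factor and the production range come from the abstract, while the Bull's Trench factor and the technology shares are assumed:

```python
# Back-of-envelope PM2.5 inventory: bricks/yr x g PM2.5 per brick, by kiln type.
bricks = 175e9                                   # midpoint of 150-200 billion/yr
ef_g = {"bulls_trench": 0.35, "downdraft": 0.97} # g/brick; Bull's Trench assumed
share = {"bulls_trench": 0.9, "downdraft": 0.1}  # assumed technology mix
pm25_Gg = sum(bricks * share[k] * ef_g[k] for k in share) / 1e9  # g -> Gg
print(f"~{pm25_Gg:.0f} Gg PM2.5/yr under these assumptions")
```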
Using Observations from GPM and CloudSat to Produce a Climatology of Precipitation over the Ocean
NASA Astrophysics Data System (ADS)
Hayden, L.; Liu, C.
2017-12-01
Satellite-based instruments are essential for observing precipitation at a global scale, especially over remote oceanic regions. Each instrument has its own strengths and limitations when it comes to accurately determining the rate of precipitation occurring at the surface. By using the complementary strengths of two satellite-based instruments, we attempt to produce a more complete climatology of global oceanic precipitation. The Global Precipitation Measurement (GPM) Core Observatory's Dual-frequency Precipitation Radar (DPR) is capable of measuring precipitation producing radar reflectivity above 12 dBZ [Hamada and Takayabu 2016]. The CloudSat satellite's Cloud Profiling Radar (CPR) uses higher-frequency W-band (94 GHz) radiation and is therefore capable of measuring the low precipitation rates that are not detected by the GPM DPR. The precipitation estimates derived from the two satellites are combined and the results examined. CloudSat data from July 2006 to December 2010 and GPM data from March 2014 through May 2016 are used. Since the two datasets do not temporally overlap, this study is conducted from a climatological standpoint. The average occurrence of different precipitation rates is calculated for both satellites. To produce the combined dataset, CloudSat estimates are used at the low precipitation rates where the CloudSat precipitation amount exceeds that from the GPM DPR; above the crossover rate at which the GPM DPR amount becomes higher, GPM data are used. By combining the two datasets, we discuss the seasonal and geographical distribution of weak precipitation detected by CloudSat that is beyond the sensitivity of the GPM DPR. We also hope to gain a more complete picture of the precipitation that occurs over oceanic regions.
Performance of differenced range data types in Voyager navigation
NASA Technical Reports Server (NTRS)
Taylor, T. H.; Campbell, J. K.; Jacobson, R. A.; Moultrie, B.; Nichols, R. A., Jr.; Riedel, J. E.
1982-01-01
Voyager radio navigation made use of a differenced range data type for both Saturn encounters because of the low declination singularity of Doppler data. Nearly simultaneous two-way range from two-station baselines was explicitly differenced to produce this data type. Concurrently, a differential VLBI data type (DDOR), utilizing doubly differenced quasar-spacecraft delays, with potentially higher precision was demonstrated. Performance of these data types is investigated on the Jupiter-to-Saturn leg of Voyager 2. The statistics of performance are presented in terms of actual data noise comparisons and sample orbit estimates. Use of DDOR as a primary data type for navigation to Uranus is discussed.
Lorre cluster: an outcome of recent asteroid collision
NASA Astrophysics Data System (ADS)
Novakovic, B.; Dell'Oro, A.; Cellino, A.; Knezevic, Z.
2012-09-01
Here we show an example of a young asteroid cluster located in a dynamically stable region, produced by the partial disruption of a primitive body about 30 km in size. According to our estimates it is only 1.9±0.3 Myr old; thus its post-impact evolution is very limited. The parent body had a large orbital inclination and was subject to collisions with typical impact speeds higher by a factor of 2 than in the most common situations encountered in the main belt. For the first time we have at our disposal the observable outcome of a very recent event with which to study high-speed collisions involving primitive asteroids.
Using r-process enhanced galaxies to estimate the neutron star merger rate at high redshift
NASA Astrophysics Data System (ADS)
Roederer, Ian
2018-01-01
The rapid neutron-capture process, or r-process, is one of the fundamental ways that stars produce heavy elements. I describe a new approach that uses the existence of r-process enhanced galaxies, like the recently discovered ultra-faint dwarf galaxy Reticulum II, to derive a rate for neutron star mergers at high redshift. This method relies on three assertions. First, several lines of reasoning point to neutron star mergers as a rare yet prolific producer of r-process elements, and one merger event is capable of enriching most of the stars in a low-mass dwarf galaxy. Second, the Local Group is cosmologically representative of the halo mass function at the mass scales of low-luminosity dwarf galaxies, and the volume that their progenitors spanned at high redshifts can be estimated from simulations. Third, many of these dwarf galaxies are extremely old, and the metals found in their stars today date from the earliest times at high redshift. These galaxies occupy a quantifiable volume of the Universe, from which the frequency of r-process enhanced galaxies can be estimated. This frequency may be interpreted as a lower limit to the neutron star merger rate at a redshift (z ~ 5-10) that is much higher than is accessible to gravitational wave observatories. I will present a proof of concept demonstration using medium-resolution multi-object spectroscopy from the Michigan/Magellan Fiber System (M2FS) to recover the known r-process galaxy Reticulum II, and I will discuss future plans to apply this method to other Local Group dwarf galaxies.
Potential of Solar Energy in Kota Kinabalu, Sabah: An Estimate Using a Photovoltaic System Model
NASA Astrophysics Data System (ADS)
Markos, F. M.; Sentian, J.
2016-04-01
Solar energy is becoming popular as a renewable alternative to conventional energy sources, particularly in the tropics, where the duration and intensity of solar radiation are greater. This study assesses the potential of solar energy for Kota Kinabalu, a rapidly developing city in the State of Sabah, Malaysia. A year of solar radiation data was obtained using a pyranometer located at Universiti Malaysia Sabah (6.0367° N, 116.1186° E). The annual average solar radiation received in Kota Kinabalu was 182 W/m2. In estimating the potential energy generated from solar radiation for the Kota Kinabalu city area, a photovoltaic (PV) system model was used. The results showed that Kota Kinabalu is estimated to produce 29,794 kWh/m2 of electricity from the solar radiation received in a year, equivalent to 0.014 MW of electricity produced by a single solar panel. Considering that the power demand in Sabah by 2020 is 1,331 MW, the model showed that solar energy can contribute around 4% of the energy demand with a 1 MW capacity PV system. A 1 MW PV installation would require about 0.0328% of the total area of the city. This assessment suggests that exploration of solar power as an alternative source of renewable energy in the city can be optimised and designed to attain a significantly higher percentage of contribution to the energy demand in the state.
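As a rough illustration of the kind of PV yield calculation such a model performs (the module efficiency and performance ratio below are generic assumptions, not the study's parameters):

```python
# Back-of-envelope PV yield from an annual-average irradiance, a minimal
# sketch of the usual E = H * A * eta * PR estimate.
avg_irradiance_w_m2 = 182.0          # annual average from the abstract
hours_per_year = 8760.0
panel_area_m2 = 1.0                  # per square metre of panel
efficiency = 0.15                    # assumed module efficiency
performance_ratio = 0.75             # assumed system losses (inverter, soiling, ...)

insolation_kwh_m2 = avg_irradiance_w_m2 * hours_per_year / 1000.0   # ~1594 kWh/m2/yr
yield_kwh = insolation_kwh_m2 * panel_area_m2 * efficiency * performance_ratio
print(f"annual insolation ~{insolation_kwh_m2:.0f} kWh/m2, "
      f"electricity ~{yield_kwh:.0f} kWh per m2 of panel")
```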
NASA Technical Reports Server (NTRS)
Weissman, David E.; Hristova-Veleva, Svetla; Callahan, Philip
2006-01-01
The opportunity provided by satellite scatterometers to measure ocean surface winds in strong storms and hurricanes is diminished by the errors in the received backscatter (SIGMA-0) caused by the attenuation, scattering and surface roughening produced by heavy rain. Providing a good rain correction is a very challenging problem, particularly at Ku band (13.4 GHz), where rain effects are strong. Corrections to the scatterometer measurements of ocean surface winds can be pursued with either of two different methods: empirical or physical modeling. The latter method is employed in this study because of the availability of near-simultaneous and collocated measurements provided by the MIDORI-II suite of instruments. The AMSR was designed to measure atmospheric water-related parameters on a spatial scale comparable to that of the SeaWinds scatterometer. These quantities can be converted into volumetric attenuation and scattering at the Ku-band frequency of SeaWinds. Optimal estimates of the volume backscatter and attenuation require a knowledge of the three-dimensional distribution of reflectivity on a scale comparable to that of the precipitation. Case studies selected near the US coastline enable the much higher resolution NEXRAD reflectivity measurements to be used to evaluate the AMSR estimates. We are also conducting research into the effects of different beam geometries and nonuniform beamfilling of precipitation within the field-of-view of the AMSR and the scatterometer. Furthermore, both AMSR and NEXRAD estimates of atmospheric correction can be used to produce corrected SIGMA-0s, which are then input to the JPL wind retrieval algorithm.
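A common physical form for such a rain correction treats the measured backscatter as an attenuated surface return plus a rain volume contribution. The sketch below uses that generic form with made-up numbers; it is an assumed illustration, not the JPL retrieval itself:

```python
import math

# Generic rain-corrected sigma-0, assuming the usual two-way attenuation
# model: sigma0_meas = sigma0_surf * exp(-2*tau) + sigma0_rain.
sigma0_meas = 0.08        # measured normalized radar cross section (linear units)
tau = 0.15                # one-way rain optical depth at Ku band (hypothetical)
sigma0_rain = 0.01        # rain volume backscatter contribution (hypothetical)

# Invert for the surface return before wind retrieval.
sigma0_surf = (sigma0_meas - sigma0_rain) * math.exp(2.0 * tau)
print(f"corrected surface sigma-0: {sigma0_surf:.4f}")
```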
Hines, Stephanie A; Chappie, Daniel J; Lordo, Robert A; Miller, Brian D; Janke, Robert J; Lindquist, H Alan; Fox, Kim R; Ernst, Hiba S; Taft, Sarah C
2014-06-01
The Legionella species have been identified as important waterborne pathogens in terms of disease morbidity and mortality. Microbial exposure assessment is a tool that can be utilized to assess the potential of Legionella species inhalation exposure from common water uses. The screening-level exposure assessment presented in this paper developed emission factors to model aerosolization, quantitatively assessed inhalation exposures of aerosolized Legionella species or Legionella species surrogates while evaluating two generalized levels of assumed water concentrations, and developed a relative ranking of six common in-home uses of water for potential Legionella species inhalation exposure. Considerable variability in the calculated exposure dose was identified between the six identified exposure pathways, with the doses differing by over five orders of magnitude in each of the evaluated exposure scenarios. The assessment of exposure pathways that have been epidemiologically associated with legionellosis transmission (ultrasonic and cool mist humidifiers) produced higher estimated inhalation exposure doses than pathways where epidemiological evidence of transmission has been less strong (faucet and shower) or absent (toilets and therapy pool). With consideration of the large uncertainties inherent in the exposure assessment process used, a relative ranking of exposure pathways from highest to lowest exposure doses was produced using culture-based measurement data and the assumption of constant water concentration across exposure pathways. In this ranking, the ultrasonic and cool mist humidifier exposure pathways were estimated to produce the highest exposure doses, followed by the shower and faucet exposure pathways, and then the toilet and therapy pool exposure pathways. Published by Elsevier Ltd.
Health literacy and the clozapine patient.
Brosnan, Susan; Barron, Elizabeth; Sahm, L J
2012-01-01
To estimate the prevalence of limited health literacy in patients receiving clozapine for schizophrenia, and to develop and produce a pharmacist-designed clozapine patient information leaflet (PIL) with a higher readability score than the company-produced PIL. This was a cross-sectional prevalence study. Ethical approval for the study was granted by the local ethics committee. Patients over 18 years of age attending the Clozapine Clinic of a Cork urban teaching hospital were asked to participate in the study. Demographics such as gender, age, employment and smoking status were gathered from all participants. The total daily clozapine dose, duration of clozapine treatment, and information regarding the clozapine DVD were also noted. The Rapid Estimate of Adult Literacy in Medicine (REALM) health literacy (HL) screening tool was then administered to each patient. A user-friendly PIL on clozapine was designed by the pharmacist and assessed for readability against the company-produced PIL using the Flesch Reading Ease Score (FRES) and Flesch-Kincaid Grade Level (FKGL). Data were analysed using SPSS Version 15. Forty patients (65% male, 95% unemployed and 70% smokers) of average age 38.0 years (+/- 11.2) completed the REALM. The average score was 60.6 (+/- 8.7). Twenty-nine patients (72.5%) were found to have "adequate" health literacy. The remaining eleven patients were found to have either "marginal" or "low" health literacy. The pharmacist-designed PIL would have been readable by 95% of the study population, in contrast to 72.5% for the company-designed PIL. More than a quarter of the population were found to have marginal or low health literacy. Patient information should be matched to the health literacy level of the target population.
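For reference, the two readability scores named above are computed from word, sentence, and syllable counts. A minimal sketch of the standard formulas (the sample counts are arbitrary):

```python
def fres(words: int, sentences: int, syllables: int) -> float:
    """Flesch Reading Ease Score: higher means easier to read."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def fkgl(words: int, sentences: int, syllables: int) -> float:
    """Flesch-Kincaid Grade Level: approximate US school grade required."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# e.g., a 100-word leaflet passage with 8 sentences and 140 syllables
print(f"FRES = {fres(100, 8, 140):.1f}, FKGL = {fkgl(100, 8, 140):.1f}")
```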
Jia, Peng; Purcell, Maureen; Pan, Guang; Wang, Jinjin; Kan, Shifu; Liu, Yin; Zheng, Xiaocong; Shi, Xiujie; He, Junqiang; Yu, Li; Hua, Qunyi; Lu, Tikang; Lan, Wensheng; Winton, James; Jin, Ningyi; Liu, Hong
2017-01-01
Infectious hematopoietic necrosis virus (IHNV) is an important pathogen of salmonid fishes. A validated universal reverse transcriptase quantitative PCR (RT-qPCR) assay that can quantify levels of IHNV in fish tissues has been previously reported. In the present study, we adapted the published set of IHNV primers and probe for use in a reverse-transcriptase droplet digital PCR (RT-ddPCR) assay for quantification of the virus in fish tissue samples. The RT-ddPCR and RT-qPCR assays detected 13 phylogenetically diverse IHNV strains, but neither assay produced detectable amplification when RNA from other fish viruses was used. The RT-ddPCR assay had a limit of detection (LOD) equating to 2.2 plaque forming units (PFU)/μl while the LOD for the RT-qPCR was 0.2 PFU/μl. Good agreement (69.4–100%) between assays was observed when used to detect IHNV RNA in cell culture supernatant and tissues from IHNV infected rainbow trout (Oncorhynchus mykiss) and arctic char (Salvelinus alpinus). Estimates of RNA copy number produced by the two assays were significantly correlated but the RT-qPCR consistently produced higher estimates than the RT-ddPCR. The analytical properties of the N gene RT-ddPCR test indicated that this method may be useful to assess IHNV RNA copy number for research and diagnostic purposes. Future work is needed to establish the within and between laboratory diagnostic performance of the RT-ddPCR assay.
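Droplet digital PCR quantification of the kind used here rests on Poisson statistics over droplets, which is why it yields absolute copy numbers without a standard curve. A minimal sketch of the standard calculation (the droplet counts and volume are illustrative, not the study's data):

```python
import math

# Absolute quantification from droplet counts via Poisson statistics:
# lambda = -ln(fraction of negative droplets) = mean copies per droplet.
total_droplets = 15000            # hypothetical accepted droplets
negative_droplets = 9000          # droplets with no amplification
droplet_volume_ul = 0.00085       # ~0.85 nL per droplet (typical, assumed)

lam = -math.log(negative_droplets / total_droplets)   # copies per droplet
copies_per_ul = lam / droplet_volume_ul
print(f"{lam:.3f} copies/droplet -> {copies_per_ul:.0f} copies/uL of reaction")
```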
12 CFR 324.153 - Internal models approach (IMA).
Code of Federal Regulations, 2014 CFR
2014-01-01
... potential decline in value of its modeled equity exposures; (ii) Are commensurate with the size, complexity... produce an estimate of potential losses for its modeled equity exposures that is no less than the estimate of potential losses produced by a VaR methodology employing a 99th percentile one-tailed confidence...
12 CFR 3.153 - Internal models approach (IMA).
Code of Federal Regulations, 2014 CFR
2014-01-01
... models that: (i) Assess the potential decline in value of its modeled equity exposures; (ii) Are...) The national bank's or Federal savings association's model must produce an estimate of potential losses for its modeled equity exposures that is no less than the estimate of potential losses produced by...
Commentary: Are Three Waves of Data Sufficient for Assessing Mediation?
ERIC Educational Resources Information Center
Reichardt, Charles S.
2011-01-01
Maxwell, Cole, and Mitchell (2011) demonstrated that simple structural equation models, when used with cross-sectional data, generally produce biased estimates of mediated effects. I extend those results by showing how simple structural equation models can produce biased estimates of mediated effects when used even with longitudinal data. Even…
High Heat Flow from Enceladus' South Polar Region Measured using 10-600/cm(exp -1) Cassini/CIRS Data
NASA Technical Reports Server (NTRS)
Howett, C. J. A.; Spencer, J. R.; Pearl, J.; Segura, M.
2011-01-01
Analysis of 2008 Cassini Composite Infrared Spectrometer (CIRS) 10 to 600/cm thermal emission spectra of Enceladus shows that for reasonable assumptions about the spatial distribution of the emission and the thermophysical properties of the solar-heated background surface, which are supported by CIRS observations of background temperatures at the edge of the active region, the endogenic power of Enceladus' south polar terrain is 15.8 +/- 3.1 GW. This is significantly higher than the previous estimate of 5.8 +/- 1.9 GW. The new value represents an improvement over the previous one, which was derived from higher wave number data (600 to 1100/cm) and was thus only sensitive to high-temperature emission. The mechanism capable of producing such a high endogenic power remains a mystery and challenges the current models of proposed heat production.
Rousset, Sylvie; Fardet, Anthony; Lacomme, Philippe; Normand, Sylvie; Montaurier, Christophe; Boirie, Yves; Morio, Béatrice
2015-01-01
The objective of this study was to evaluate the validity of total energy expenditure (TEE) estimates provided by Actiheart and Armband. Normal-weight adult volunteers wore both devices either for 17 hours in a calorimetric chamber (CC, n = 49) or for 10 days in free-living conditions (FLC) outside the laboratory (n = 41). TEE from the two devices was compared against indirect calorimetry in the CC group and against doubly labelled water in the FLC group. In the CC, the relative TEE error was not significant (p > 0.05) for Actiheart but was significantly different from zero for Armband, showing TEE underestimation (-4.9%, p < 0.0001). However, the mean absolute values of errors were significantly different between Actiheart and Armband: 8.6% and 6.7%, respectively (p = 0.05). Armband was more accurate for estimating TEE during sleeping, rest, recovery periods and sitting-standing. Actiheart provided better estimation during stepping and walking. In FLC, no significant error in relative value was detected. Nevertheless, Armband produced smaller errors in absolute value than Actiheart (8.6% vs. 12.8%). The distributions of differences were more scattered around the means, suggesting higher inter-individual variability in TEE estimated by Actiheart than by Armband. Our results show that both monitors are appropriate for estimating TEE. Armband is more effective than Actiheart at the individual level for daily light-intensity activities.
NASA Astrophysics Data System (ADS)
Xia, J.; McGuire, A. D.; Lawrence, D. M.; Burke, E.; Chen, X.; Delire, C. L.; Koven, C. D.; MacDougall, A. H.; Peng, S.; Rinke, A.; Saito, K.; Zhang, W.; Alkama, R.; Bohn, T. J.; Ciais, P.; Decharme, B.; Gouttevin, I.; Hajima, T.; Ji, D.; Krinner, G.; Lettenmaier, D. P.; Miller, P. A.; Moore, J. C.; Smith, B.; Sueyoshi, T.; Shi, Z.; Yan, L.; Liang, J.; Jiang, L.; Luo, Y.
2014-12-01
A more accurate prediction of future climate-carbon (C) cycle feedbacks requires better understanding and improved representation of the carbon cycle in permafrost regions within current earth system models. Here, we evaluated 10 terrestrial ecosystem models for their estimated net primary productivity (NPP) and its vulnerability to climate change in permafrost regions in the Northern Hemisphere. Those models were run retrospectively between 1960 and 2009. In comparison with MODIS satellite estimates, most models produce higher NPP (310 ± 12 g C m-2 yr-1) than MODIS (240 ± 20 g C m-2 yr-1) over the permafrost regions during 2000‒2009. The modeled NPP was then decomposed into gross primary productivity (GPP) and the NPP/GPP ratio (i.e., C use efficiency; CUE). By comparing the simulated GPP with a flux-tower-based database [Jung et al. Journal of Geophysical Research 116 (2011) G00J07] (JU11), we found that although the models produce only 10.6% higher mean GPP than JU11 over 1982‒2009, there was a two-fold disparity among models (397 to 830 g C m-2 yr-1). The model-to-model variation in GPP resulted mainly from the seasonal peak GPP and from low-latitude permafrost regions such as the Tibetan Plateau. Most models overestimate the CUE in permafrost regions in comparison to the CUE calculated from the MODIS NPP and JU11 GPP products and to observation-based estimates at 8 forest sites. The models vary in their sensitivities of NPP, GPP and CUE to historical changes in air temperature, atmospheric CO2 concentration and precipitation. For example, climate warming enhanced NPP in four models via increasing GPP but reduced NPP in two other models by decreasing both GPP and CUE. The results indicate that the model predictability of the C cycle in permafrost regions can be improved by better representation of the processes controlling the seasonal maximum GPP and the CUE, as well as their sensitivity to climate change.
Ants sow the seeds of global diversification in flowering plants.
Lengyel, Szabolcs; Gove, Aaron D; Latimer, Andrew M; Majer, Jonathan D; Dunn, Robert R
2009-01-01
The extraordinary diversification of angiosperm plants in the Cretaceous and Tertiary periods has produced an estimated 250,000-300,000 living angiosperm species and has fundamentally altered terrestrial ecosystems. Interactions with animals as pollinators or seed dispersers have long been suspected as drivers of angiosperm diversification, yet empirical examples remain sparse or inconclusive. Seed dispersal by ants (myrmecochory) may drive diversification as it can reduce extinction by providing selective advantages to plants and can increase speciation by enhancing geographical isolation through extremely limited dispersal distances. Using the most comprehensive sister-group comparison to date, we tested the hypothesis that myrmecochory leads to higher diversification rates in angiosperm plants. As predicted, diversification rates were substantially higher in ant-dispersed plants than in their non-myrmecochorous relatives. Data from 101 angiosperm lineages in 241 genera from all continents except Antarctica revealed that ant-dispersed lineages contained on average more than twice as many species as did their non-myrmecochorous sister groups. Contrasts in species diversity between sister groups demonstrated that diversification rates did not depend on seed dispersal mode in the sister group and were higher in myrmecochorous lineages in most biogeographic regions. Myrmecochory, which has evolved independently at least 100 times in angiosperms and is estimated to be present in at least 77 families and 11 000 species, is a key evolutionary innovation and a globally important driver of plant diversity. Myrmecochory provides the best example to date for a consistent effect of any mutualism on large-scale diversification.
Higher-level fusion for military operations based on abductive inference: proof of principle
NASA Astrophysics Data System (ADS)
Pantaleev, Aleksandar V.; Josephson, John
2006-04-01
The ability of contemporary military commanders to estimate and understand complicated situations already suffers from information overload, and the situation can only grow worse. We describe a prototype application that uses abductive inferencing to fuse information from multiple sensors to evaluate the evidence for higher-level hypotheses that are close to the levels of abstraction needed for decision making (approximately JDL levels 2 and 3). Abductive inference (abduction, inference to the best explanation) is a pattern of reasoning that occurs naturally in diverse settings such as medical diagnosis, criminal investigations, scientific theory formation, and military intelligence analysis. Because abduction is part of common-sense reasoning, implementations of it can produce reasoning traces that are very human understandable. Automated abductive inferencing can be deployed to augment human reasoning, taking advantage of computation to process large amounts of information, and to bypass limits to human attention and short-term memory. We illustrate the workings of the prototype system by describing an example of its use for small-unit military operations in an urban setting. Knowledge was encoded as it might be captured prior to engagement from a standard military decision making process (MDMP) and analysis of commander's priority intelligence requirements (PIR). The system is able to reasonably estimate the evidence for higher-level hypotheses based on information from multiple sensors. Its inference processes can be examined closely to verify correctness. Decision makers can override conclusions at any level and changes will propagate appropriately.
Comparison of Globally Complete Versions of GPCP and CMAP Monthly Precipitation Analyses
NASA Technical Reports Server (NTRS)
Curtis, Scott; Adler, Robert; Huffman, George
1998-01-01
In this study two global observational precipitation products, namely the Global Precipitation Climatology Project's (GPCP) community data set and CPC's Merged Analysis of Precipitation (CMAP), are compared on global to regional scales in the context of the different satellite and gauge data inputs and merger techniques. The average annual global precipitation rates, calculated from data common in regions/times to both GPCP and CMAP, are similar for the two. However, CMAP is larger than GPCP in the tropics because: (1) CMAP values in the tropics are adjusted month-by-month to atoll gauge data in the West Pacific, which are greater than any satellite observations used; and (2) CMAP is produced from a linear combination of data inputs, which tends to give higher values than the microwave emission estimates alone to which the inputs are adjusted in the GPCP merger over the ocean. The CMAP month-to-month adjustment to the atolls also appears to introduce temporal variations throughout the tropics which are not detected by satellite-only products. On the other hand, GPCP is larger than CMAP in the high-latitude oceans, where CMAP includes the scattering-based microwave estimates, which are consistently smaller than the emission estimates used in both techniques. Also, in the polar regions GPCP transitions from the emission microwave estimates to the larger TOVS-based estimates. Finally, in high-latitude land areas GPCP can be significantly larger than CMAP because GPCP attempts to correct the gauge estimates for errors due to wind loss effects.
The flex track: flexible partitioning between low- and high-acuity areas of an emergency department.
Laker, Lauren F; Froehle, Craig M; Lindsell, Christopher J; Ward, Michael J
2014-12-01
Emergency departments (EDs) with both low- and high-acuity treatment areas often have fixed allocation of resources, regardless of demand. We demonstrate the utility of discrete-event simulation to evaluate flexible partitioning between low- and high-acuity ED areas to identify the best operational strategy for subsequent implementation. A discrete-event simulation was used to model patient flow through a 50-bed, urban, teaching ED that handles 85,000 patient visits annually. The ED has historically allocated 10 beds to a fast track for low-acuity patients. We estimated the effect of a flex track policy, which involved switching up to 5 of these fast track beds to serving both low- and high-acuity patients, on patient waiting times. When the high-acuity beds were not at capacity, low-acuity patients were given priority access to flexible beds. Otherwise, high-acuity patients were given priority access to flexible beds. Wait times were estimated for patients by disposition and Emergency Severity Index score. A flex track policy using 3 flexible beds produced the lowest mean patient waiting time of 30.9 minutes (95% confidence interval [CI] 30.6 to 31.2 minutes). The typical fast track approach of rigidly separating high- and low-acuity beds produced a mean patient wait time of 40.6 minutes (95% CI 40.2 to 50.0 minutes), 31% higher than that of the 3-bed flex track. A completely flexible ED, in which all beds can accommodate any patient, produced mean wait times of 35.1 minutes (95% CI 34.8 to 35.4 minutes). The results from the 3-bed flex track scenario were robust, performing well across a range of scenarios involving higher and lower patient volumes and care durations. Using discrete-event simulation, we have shown that adding some flexibility into bed allocation between low and high acuity can provide substantial reductions in overall patient waiting and a more efficient ED. Copyright © 2014 American College of Emergency Physicians. Published by Elsevier Inc. All rights reserved.
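Discrete-event simulation of this kind of bed-allocation policy is straightforward to prototype. The sketch below, using the SimPy library, implements a simplified flex-bed routing rule with invented arrival and service rates; it roughly mirrors the described 10-bed fast track with 3 beds flexed, but it is an illustration of the approach, not the paper's calibrated 50-bed model:

```python
import random
import simpy

RNG = random.Random(1)
ARRIVAL_MIN = {"low": 12.0, "high": 8.0}    # mean minutes between arrivals (assumed)
SERVICE_MIN = {"low": 45.0, "high": 120.0}  # mean treatment times (assumed)

def patient(env, acuity, pools, waits):
    arrive = env.now
    # Simplified routing rule (an assumption, not the paper's exact policy):
    # take an idle flex bed if one is free, otherwise queue for a dedicated bed.
    flex, dedicated = pools["flex"], pools[acuity]
    target = flex if flex.count < flex.capacity else dedicated
    with target.request() as req:
        yield req
        waits[acuity].append(env.now - arrive)
        yield env.timeout(RNG.expovariate(1.0 / SERVICE_MIN[acuity]))

def arrivals(env, acuity, pools, waits):
    while True:
        yield env.timeout(RNG.expovariate(1.0 / ARRIVAL_MIN[acuity]))
        env.process(patient(env, acuity, pools, waits))

env = simpy.Environment()
pools = {"low": simpy.Resource(env, capacity=7),    # fast track minus flexed beds
         "high": simpy.Resource(env, capacity=40),  # main ED beds
         "flex": simpy.Resource(env, capacity=3)}   # shared flex-track beds
waits = {"low": [], "high": []}
for acuity in ("low", "high"):
    env.process(arrivals(env, acuity, pools, waits))
env.run(until=30 * 24 * 60)  # simulate 30 days, in minutes
for acuity, w in waits.items():
    print(acuity, f"mean wait {sum(w) / len(w):.1f} min over {len(w)} patients")
```

Sweeping the flex capacity from 0 to 5 and comparing mean waits reproduces the kind of policy comparison the paper reports.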
Silva, Tânia L S; Morales-Torres, Sergio; Castro-Silva, Sérgio; Figueiredo, José L; Silva, Adrián M T
2017-09-15
Rising global energy demands, together with the unbalanced allocation of water resources, highlight the importance of water management solutions for the gas industry. Advanced drilling, completion and stimulation techniques for gas extraction allow more economical access to unconventional gas reserves. This has stimulated a shale gas revolution, alongside tight gas and coalbed methane, while also creating escalating water-handling challenges that must be met to avoid a major impact on the environment. Hydraulic fracturing combined with horizontal drilling is gaining relevance in the exploitation of unconventional gas reserves, but a large amount of wastewater (known as "produced water") is generated. Its variable chemical composition and flow rates, together with more severe regulations and public concern, have promoted the development of solutions for the treatment and reuse of such produced water. This work provides an overview of the exploration and subsequent environmental implications of unconventional gas sources, as well as the technologies for the treatment of produced water, describing the main results and drawbacks together with some cost estimates. In particular, the growing volumes of produced water from shale gas plays are creating an interesting market opportunity for water technology and service providers. Membrane-based technologies (membrane distillation, forward osmosis, membrane bioreactors and pervaporation) and advanced oxidation processes (ozonation, Fenton, photocatalysis) are claimed to be adequate treatment solutions. Copyright © 2017 Elsevier Ltd. All rights reserved.
The contextual effects of social capital on health: a cross-national instrumental variable analysis.
Kim, Daniel; Baum, Christopher F; Ganz, Michael L; Subramanian, S V; Kawachi, Ichiro
2011-12-01
Past research on the associations between area-level/contextual social capital and health has produced conflicting evidence. However, interpreting this rapidly growing literature is difficult because estimates using conventional regression are prone to major sources of bias including residual confounding and reverse causation. Instrumental variable (IV) analysis can reduce such bias. Using data on up to 167,344 adults in 64 nations in the European and World Values Surveys and applying IV and ordinary least squares (OLS) regression, we estimated the contextual effects of country-level social trust on individual self-rated health. We further explored whether these associations varied by gender and individual levels of trust. Using OLS regression, we found higher average country-level trust to be associated with better self-rated health in both women and men. Instrumental variable analysis yielded qualitatively similar results, although the estimates were more than double in size in both sexes when country population density and corruption were used as instruments. The estimated health effects of raising the percentage of a country's population that trusts others by 10 percentage points were at least as large as the estimated health effects of an individual developing trust in others. These findings were robust to alternative model specifications and instruments. Conventional regression and to a lesser extent IV analysis suggested that these associations are more salient in women and in women reporting social trust. In a large cross-national study, our findings, including those using instrumental variables, support the presence of beneficial effects of higher country-level trust on self-rated health. Previous findings for contextual social capital using traditional regression may have underestimated the true associations. Given the close linkages between self-rated health and all-cause mortality, the public health gains from raising social capital within and across countries may be large. Copyright © 2011 Elsevier Ltd. All rights reserved.
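The instrumental variable estimation described can be sketched as two-stage least squares. The data below are simulated; the instrument and confounding structure are chosen only to mirror the logic of the analysis (an instrument shifts the exposure but affects the outcome only through it), not the study's actual variables:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)                 # instrument (stand-in for, e.g., population density)
u = rng.normal(size=n)                 # unobserved confounder
x = 0.8 * z + u + rng.normal(size=n)   # exposure (e.g., country-level trust)
y = 0.5 * x - u + rng.normal(size=n)   # outcome (e.g., self-rated health); true effect 0.5

X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])

# OLS is biased by the confounder u; 2SLS projects X onto the instrument space.
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]      # first stage: fitted X
beta_iv = np.linalg.lstsq(X_hat, y, rcond=None)[0]    # second stage
print(f"OLS slope: {beta_ols[1]:.3f}, IV slope: {beta_iv[1]:.3f}")  # IV ~ 0.5
```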
Ober, J.A.
2003-01-01
Mexico and Spain are the leading producers of celestite, the most common strontium ore. Those countries produced nearly 80 percent of the estimated 360 kt (397,000 st) of celestite produced worldwide during 2002. China and Turkey are other significant celestite producers.
Mapping the Early Language Environment Using All-Day Recordings and Automated Analysis.
Gilkerson, Jill; Richards, Jeffrey A; Warren, Steven F; Montgomery, Judith K; Greenwood, Charles R; Kimbrough Oller, D; Hansen, John H L; Paul, Terrance D
2017-05-17
This research provided a first-generation standardization of automated language environment estimates, validated these estimates against standard language assessments, and extended previous research reporting language behavior differences across socioeconomic groups. Typically developing children between 2 and 48 months of age completed monthly, daylong recordings in their natural language environments over a span of approximately 6-38 months. The resulting data set contained 3,213 12-hr recordings automatically analyzed by using the Language Environment Analysis (LENA) System to generate estimates of (a) the number of adult words in the child's environment, (b) the amount of caregiver-child interaction, and (c) the frequency of child vocal output. Child vocalization frequency and turn-taking increased with age, whereas adult word counts were age independent after early infancy. Child vocalization and conversational turn estimates predicted 7%-16% of the variance observed in child language assessment scores. Lower socioeconomic status (SES) children produced fewer vocalizations, engaged in fewer adult-child interactions, and were exposed to fewer daily adult words compared with their higher socioeconomic status peers, but within-group variability was high. The results offer new insight into the landscape of the early language environment, with clinical implications for identification of children at risk for impoverished language environments.
Methane oxidation by termite mounds estimated by the carbon isotopic composition of methane
NASA Astrophysics Data System (ADS)
Sugimoto, Atsuko; Inoue, Tetsushi; Kirtibutr, Nit; Abe, Takuya
1998-12-01
Emission rates and carbon isotope ratios of CH4 emitted by termite workers and of CH4 emitted from their mounds were observed in a dry evergreen forest in Thailand to estimate the proportion of CH4 oxidized during emission through the mound. The δ13C of CH4 emitted from a termite mound (-70.9 to -82.4‰) was higher than that of CH4 emitted by workers in the mound (-85.4 to -97.1‰). Using a fractionation factor (α = 0.987) for the oxidation of CH4, obtained in an incubation experiment, an emission factor defined as (CH4 emitted from a termite mound/CH4 produced by termites) was calculated. The emission factor obtained for each termite mound was nearly zero for Macrotermes (fungus-growing termites), whose nests have a thick soil wall, and for subterranean termites, and 0.17 to 0.47 for Termitinae (small-mound-making termites). Global CH4 emission by termites was estimated on the basis of the CH4 emission rates by workers and termite biomass, together with the emission factors. The calculated result was 1.5 to 7.4 Tg/y (0.3 to 1.3% of the total source), which is considerably smaller than the estimate by the IPCC [1994].
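One simple way to turn the isotopic shift into an emission factor is a Rayleigh-type fractionation model; the sketch below assumes that model, which is a plausible reconstruction and may differ from the authors' exact mass balance:

```python
import math

# Rayleigh model (assumed): residual CH4 escaping the mound is isotopically
# heavier, delta_mound ~= delta_workers + eps * ln(EF), eps = (alpha - 1) * 1000.
alpha = 0.987                      # oxidation fractionation factor from the abstract
eps = (alpha - 1.0) * 1000.0       # ~ -13 permil
delta_workers = -91.0              # permil, mid-range of worker CH4 (from abstract)
delta_mound = -76.0                # permil, mid-range of mound CH4 (from abstract)

emission_factor = math.exp((delta_mound - delta_workers) / eps)
print(f"fraction of produced CH4 escaping unoxidized: {emission_factor:.2f}")
```

With these mid-range values the emission factor comes out near 0.3, within the 0.17-0.47 range reported for Termitinae.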
Potvin, Olivier; Dieumegarde, Louis; Duchesne, Simon
2017-08-01
We recently built normative data for FreeSurfer morphometric estimates of cortical regions using its default atlas parcellation (Desikan-Killiany or DK) according to individual and scanner characteristics. We aimed to produce similar normative values for the Desikan-Killiany-Tourville (DKT) and ex vivo-based labeling protocols, as well as examine the differences between these three atlases. Surfaces, thicknesses, and volumes of cortical regions were produced using cross-sectional magnetic resonance scans from the same 2713 healthy individuals aged 18-94 years as used in the reported DK norms. Models predicting regional cortical estimates of each hemisphere were produced using age, sex, estimated intracranial volume (eTIV), scanner manufacturer and magnetic field strength (MFS) as predictors. The DKT and DK models generally included the same predictors and produced similar R2. Comparison of the DK, DKT, and ex vivo atlas normative cortical measures showed that the three protocols generally produced similar normative values. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Shyu, J. Bruce H.; Wang, Chung-Che; Wang, Yu; Shen, Chuan-Chou; Chiang, Hong-Wei; Liu, Sze-Chieh; Min, Soe; Aung, Lin Thu; Than, Oo; Tun, Soe Thura
2018-02-01
Upper-plate structures that splay out from the megathrusts are common features along major convergent plate boundaries. However, their earthquake and tsunami hazard potentials have not yet received significant attention. In this study, we identified at least one earthquake event that may have been produced by an upper-plate splay fault offshore western Myanmar, based on U-Th ages of uplifted coral microatolls. This event is likely an earthquake that was documented historically in C.E. 1848, with an estimated magnitude between 6.8 and 7.2 based on regional structural characteristics. Such magnitude is consistent with the observed co-seismic uplift amount of ∼0.5 m. Although these events are smaller in magnitude than events produced by megathrusts, they may produce higher earthquake and tsunami hazards for local coastal communities due to their proximity. Our results also indicate that earthquake events with co-seismic uplift along the coast may not necessarily produce a flight of marine terraces. Therefore, using only records of uplifted marine terraces as megathrust earthquake proxies may overlook the importance of upper-plate splay fault ruptures, and underestimate the overall earthquake frequency for future seismic and tsunami hazards along major subduction zones of the world.
Elhakeem, Ahmed; Hannam, Kimberly; Deere, Kevin C; Hartley, April; Clark, Emma M; Moss, Charlotte; Edwards, Mark H; Dennison, Elaine; Gaysin, Tim; Kuh, Diana; Wong, Andrew; Cooper, Cyrus; Cooper, Rachel; Tobias, Jon H
2018-01-01
Background: High impact physical activity (PA) is thought to improve skeletal health, but its relations to other health outcomes are unclear. We investigated associations between PA impact magnitude and body mass index (BMI) in older adults. Methods: Data were taken from the Cohort for Skeletal Health in Bristol and Avon (COSHIBA), Hertfordshire Cohort Study, and MRC National Survey of Health and Development. Vertical acceleration peaks from 7-day hip-worn accelerometer recordings were used to classify PA as low (0.5 < g < 1.0g), medium (1 < g < 1.5g), or higher (≥1.5g) impact. Cohort-specific associations of low, medium, and higher impact PA with BMI were examined using linear regressions, and estimates were combined using random-effects meta-analysis. Results: A total of 1182 participants (mean age = 72.7 years, 68% female) were included. Low, medium, and higher impact PA were inversely related to BMI in initial models. After adjustment for confounders and other impacts, low, but not medium or higher, impacts were inversely related to BMI (−0.31, p < .001: overall combined standard deviation change in BMI per doubling in the number of low impacts). In adjusted analyses of body composition measured by dual-energy X-ray absorptiometry in COSHIBA, low, but not medium or higher, impacts were inversely related to total body fat mass (−0.19, p < .001) and android:gynoid fat mass ratio (−0.16, p = .01), whereas high impact PA was weakly and positively associated with lean mass (0.05, p = .06). Conclusions: Greater exposure to PA producing low magnitude vertical impacts was associated with lower BMI and fat mass at older age. Low impact PA may help reduce obesity risk in older adults.
Foreman, Michael G G; Guo, Ming; Garver, Kyle A; Stucchi, Dario; Chandler, Peter; Wan, Di; Morrison, John; Tuele, Darren
2015-01-01
Finite volume ocean circulation and particle tracking models are used to simulate water-borne transmission of infectious hematopoietic necrosis virus (IHNV) among Atlantic salmon (Salmo salar) farms in the Discovery Islands region of British Columbia, Canada. Historical simulations for April and July 2010 are carried out to demonstrate the seasonal impact of river discharge, wind, ultra-violet (UV) radiation, and heat flux conditions on near-surface currents, viral dispersion and survival. Numerical particles released from infected farm fish in accordance with IHNV shedding rates estimated through laboratory experiments are dispersed by model oceanic flows. Viral particles are inactivated by ambient UV radiation levels and by the natural microbial community at rates derived through laboratory studies. Viral concentration maps showing temporal and spatial changes are produced and combined with lab-determined minimum infectious dosages to estimate the infective connectivity among farms. Results demonstrate that neighbouring naïve farms can become exposed to IHNV via water-borne transport from an IHNV diseased farm, with a higher risk in April than July, and that many events in the sequence of farm outbreaks in 2001-2002 are consistent with higher risks in our farm connectivity matrix. Applications to other diseases, transfers between farmed and wild fish, and the effect of vaccinations are also discussed.
Ambient temperature and added heat wave effects on hospitalizations in California from 1999 to 2009.
Sherbakov, Toki; Malig, Brian; Guirguis, Kristen; Gershunov, Alexander; Basu, Rupa
2018-01-01
Investigators have examined how heat waves or incremental changes in temperature affect health outcomes, but few have examined both simultaneously. We utilized distributed lag nonlinear models (DLNM) to explore temperature associations and evaluate possible added heat wave effects on hospitalizations in 16 climate zones throughout California from May through October 1999-2009. We define heat waves as a period when daily mean temperatures were above the zone- and month-specific 95th percentile for at least two consecutive days. DLNMs were used to estimate climate zone-specific non-linear temperature and heat wave effects, which were then combined using random effects meta-analysis to produce an overall estimate for each. With higher temperatures, admissions for acute renal failure, appendicitis, dehydration, ischemic stroke, mental health, non-infectious enteritis, and primary diabetes were significantly increased, with added effects from heat waves observed for acute renal failure and dehydration. Higher temperatures also predicted statistically significant decreases in hypertension admissions, respiratory admissions, and respiratory diseases with secondary diagnoses of diabetes, though heat waves independently predicted an added increase in risk for both respiratory types. Our findings provide evidence that both heat wave and temperature exposures can exert effects independently. Copyright © 2017 Elsevier Inc. All rights reserved.
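The pooling step described, combining climate-zone-specific estimates into one overall effect, is classically done with DerSimonian-Laird random-effects weights. A minimal sketch with invented zone estimates (not the study's results):

```python
import numpy as np

# Zone-specific effect estimates (e.g., log relative risk) and their variances;
# values are invented for illustration.
beta = np.array([0.05, 0.12, 0.08, 0.02, 0.10])
var = np.array([0.002, 0.004, 0.003, 0.005, 0.002])

# DerSimonian-Laird between-zone variance tau^2, then random-effects pooling.
w = 1.0 / var
beta_fe = np.sum(w * beta) / np.sum(w)                 # fixed-effect mean
Q = np.sum(w * (beta - beta_fe) ** 2)                  # heterogeneity statistic
tau2 = max(0.0, (Q - (len(beta) - 1)) /
           (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
w_re = 1.0 / (var + tau2)
beta_re = np.sum(w_re * beta) / np.sum(w_re)
print(f"pooled effect {beta_re:.4f}, SE {np.sqrt(1.0 / np.sum(w_re)):.4f}")
```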
NASA Technical Reports Server (NTRS)
Cruden, Brett A.; Rao, M. V. V. S.; Sharma, Surendra P.; Meyyappan, M.
2001-01-01
This work examines the accuracy of plasma neutral temperature estimates by fitting the rotational band envelope of different diatomic species in emission. Experiments are performed in an inductively coupled CF4 plasma generated in a Gaseous Electronics Conference reference cell. Visible and ultraviolet emission spectra are collected at a power of 300 W (approximately 0.7 W/cc) and pressure of 30 mtorr. The emission bands of several molecules (CF, CN, C2, CO, and SiF) are fit simultaneously for rotational and vibrational temperatures and compared. Four different rotational temperatures are obtained: 1250 K for CF and CN, 1600 K for CO, 1800 K for C2, and 2300 K for SiF. The vibrational temperatures obtained vary from 1750 to 5950 K, with the higher vibrational temperatures generally corresponding to the lower rotational temperatures. These results suggest that the different species have achieved different degrees of equilibration between the rotational and vibrational modes and may not be equilibrated with the translational temperatures. The different temperatures are also related to the likelihood that the species are produced by ion bombardment of the surface, with etch products like SiF, CO, and C2 having higher temperatures than species expected to have formed in the gas phase.
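Rotational temperatures of this kind are typically extracted from a Boltzmann plot: line intensities divided by line strength scale as exp(-E_rot/kT), so a linear fit of ln(I/S) against rotational energy yields -1/kT. A sketch with synthetic lines (the band-envelope fitting in the paper is more elaborate; this shows only the underlying principle):

```python
import numpy as np

K_B = 0.695  # Boltzmann constant in cm^-1 per K

# Synthetic rotational lines: energies (cm^-1) and intensities generated at a
# "true" temperature of 1250 K, then fit back via a Boltzmann plot.
E_rot = np.linspace(50, 2000, 15)
line_strength = np.ones_like(E_rot)          # assume equal line-strength factors
T_true = 1250.0
intensity = line_strength * np.exp(-E_rot / (K_B * T_true))

slope, _ = np.polyfit(E_rot, np.log(intensity / line_strength), 1)
T_rot = -1.0 / (K_B * slope)
print(f"fitted rotational temperature: {T_rot:.0f} K")
```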
NASA Astrophysics Data System (ADS)
Burbidge, D.; Cummins, P. R.
2005-12-01
Since the Boxing Day tsunami, various countries surrounding the Indian Ocean have been investigating the potential hazard from trans-Indian Ocean tsunami generated along the Sunda Arc, south of Indonesia. This study presents some preliminary estimates of the tsunami hazard faced by Western Australia from tsunami generated along the Arc. To estimate the hazard, a suite of tsunami sources spaced evenly along the subduction zone south of Indonesia was numerically modelled. Offshore wave heights from tsunami generated in this region are significantly higher along the northwestern part of the Western Australian coast, from Exmouth to the Kimberley, than they are along the rest of the coast south of Exmouth. Due to the offshore bathymetry, the area around Onslow in particular may face a higher tsunami hazard than other areas of the Western Australian coast. Earthquakes between Java and Timor are likely to produce the greatest hazard to northwest WA. Earthquakes off Sumatra are likely the main source of tsunami hazard to locations south of Exmouth; however, the hazard there is likely to be lower than along the northwestern part of the coast. Tsunami generated by other sources (e.g., large intra-plate events, volcanoes, landslides and asteroids) could threaten other parts of the coast.
The safety of high-hazard water infrastructures in the U.S. Pacific Northwest in a changing climate
NASA Astrophysics Data System (ADS)
Chen, X.; Hossain, F.; Leung, L. R.
2017-12-01
The safety of large and aging water infrastructures is gaining attention in water management given the accelerated rate of change in landscape, climate and society. In current engineering practice, such safety is ensured by the design of infrastructure for the Probable Maximum Precipitation (PMP). Recently, several numerical modeling approaches have been proposed to modernize the conventional and ad hoc PMP estimation approach. However, the underlying physics have not been investigated and thus differing PMP estimates are obtained without clarity on their interpretation. In this study, we present a hybrid approach that takes advantage of both traditional engineering practice and modern climate science to estimate PMP for current and future climate conditions. The traditional PMP approach is improved and applied to five statistically downscaled CMIP5 model outputs, producing an ensemble of PMP estimates in the Pacific Northwest (PNW) during the historical (1970-2016) and future (2050-2099) time periods. The new historical PMP estimates are verified against the traditional estimates. PMP in the PNW will increase by 50%±30% of the current level by 2099 under the RCP8.5 scenario. Most of the increase is caused by warming, which mainly affects moisture availability through increased sea surface temperature, with minor contributions from changes in storm efficiency in the future. Moist track change tends to reduce the future PMP. Compared with extreme precipitation, PMP exhibits higher internal variability. Thus long-time records of high-quality data in both precipitation and related meteorological fields (temperature, wind fields) are required to reduce uncertainties in the ensemble PMP estimates.
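The "traditional PMP approach" referred to is, at its core, storm moisture maximization: an observed extreme storm is scaled by the ratio of maximum to observed precipitable water. A minimal sketch of that scaling (the values are illustrative, and the paper's hybrid method adds climate-model inputs on top of this step):

```python
# Moisture maximization, the core of the traditional PMP estimate:
# scale an observed extreme storm by the maximum moisture available.
storm_precip_mm = 300.0        # observed 72-h storm total (hypothetical)
pw_storm_mm = 40.0             # precipitable water during the storm (hypothetical)
pw_max_mm = 60.0               # climatological maximum precipitable water

pmp_mm = storm_precip_mm * (pw_max_mm / pw_storm_mm)
print(f"moisture-maximized storm: {pmp_mm:.0f} mm")
# Under warming, pw_max rises with sea surface temperature, which is why
# the ensemble PMP grows in the future scenarios described above.
```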
Probable Maximum Precipitation in the U.S. Pacific Northwest in a Changing Climate
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Xiaodong; Hossain, Faisal; Leung, Lai-Yung
2017-12-22
The safety of large and aging water infrastructures is gaining attention in water management given the accelerated rate of change in landscape, climate and society. In current engineering practice, such safety is ensured by the design of infrastructure for the Probable Maximum Precipitation (PMP). Recently, several physics-based numerical modeling approaches have been proposed to modernize the conventional and ad hoc PMP estimation approach. However, the underlying physics has not been investigated and thus differing PMP estimates are obtained without clarity on their interpretation. In this study, we present a hybrid approach that takes advantage of both traditional engineering wisdom and modern climate science to estimate PMP for current and future climate conditions. The traditional PMP approach is improved and applied to outputs from an ensemble of five CMIP5 models. This hybrid approach is applied in the Pacific Northwest (PNW) to produce ensemble PMP estimation for the historical (1970-2016) and future (2050-2099) time periods. The new historical PMP estimates are verified by comparing them with the traditional estimates. PMP in the PNW will increase by 50% of the current level by 2099 under the RCP8.5 scenario. Most of the increase is caused by warming, which mainly affects moisture availability, with minor contributions from changes in storm efficiency in the future. Moist track change tends to reduce the future PMP. Compared with extreme precipitation, ensemble PMP exhibits higher internal variation. Thus high-quality data of both precipitation and related meteorological fields (temperature, wind fields) are required to reduce uncertainties in the ensemble PMP estimates.
Cao, Qian-Jin; Xia, Hui; Yang, Xiao; Lu, Bao-Rong
2009-12-01
Transgene escape from genetically modified (GM) rice into weedy rice via gene flow may cause undesired environmental consequences. Estimating the field performance of crop-weed hybrids will facilitate our understanding of potential introgression of crop genes (including transgenes) into weedy rice populations, allowing for effective biosafety assessment. Comparative studies of three weedy rice strains and their hybrids with two GM rice lines containing different insect-resistance transgenes (CpTI or Bt/CpTI) indicated an enhanced relative performance of the crop-weed hybrids, with taller plants, more tillers, panicles, and spikelets per plant, as well as higher 1 000-seed weight, compared with the weedy rice parents, although the hybrids produced fewer filled seeds per plant than their weedy parents. Seeds from the F(1) hybrids had higher germination rates and produced more seedlings than the weedy parents, which correlated positively with 1 000-seed weight. The crop-weed hybrids demonstrated a generally enhanced relative performance over their weedy rice parents in our field experiments. These findings indicate that transgenes from GM rice can persist in and introgress into weedy rice populations through recurrent crop-to-weed gene flow, with the aid of slightly increased relative fitness in F(1) hybrids.
ac electroosmotic pumping induced by noncontact external electrodes.
Wang, Shau-Chun; Chen, Hsiao-Ping; Chang, Hsueh-Chia
2007-09-21
Electroosmotic (EO) pumps based on dc electroosmosis are plagued by bubble generation and other electrochemical reactions at the electrodes at voltages beyond 1 V for electrolytes. These disadvantages limit their throughput and offset their portability advantage over mechanical syringe or pneumatic pumps. ac electroosmotic pumps at high frequency (>100 kHz) circumvent the bubble problem by inducing polarization and slip velocity on embedded electrodes, but they require complex electrode designs to produce a net flow. We report a new high-throughput ac EO pump design based on induced polarization on the entire channel surface instead of just on the electrodes. Like dc EO pumps, our pump electrodes are outside of the load section and form a cm-long pump unit consisting of three circular reservoirs (3 mm in diameter) connected by a 1×1 mm channel. The field-induced polarization can produce an effective zeta potential exceeding 1 V and an ac slip velocity estimated as 1 mm/sec or higher, both one order of magnitude higher than earlier dc and ac pumps, giving rise to a maximum throughput of 1 μl/sec. Polarization over the entire channel surface, quadratic scaling with respect to the field and high voltage at high frequency without electrode bubble generation are the reasons why the current pump is superior to earlier dc and ac EO pumps.
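The quoted slip velocity follows the Helmholtz-Smoluchowski relation u = εε0ζE/μ; the field strength below is an assumed value chosen only to show the order of magnitude:

```python
# Helmholtz-Smoluchowski electroosmotic slip velocity u = eps_r*eps0*zeta*E/mu.
EPS0 = 8.854e-12          # vacuum permittivity, F/m
eps_r = 80.0              # relative permittivity of water
zeta = 1.0                # induced zeta potential, V (from the abstract)
E = 1.0e4                 # applied field, V/m (assumed, ~100 V/cm)
mu = 1.0e-3               # water viscosity, Pa*s

u = eps_r * EPS0 * zeta * E / mu
print(f"slip velocity ~ {u * 1000:.1f} mm/s")   # order of mm/s, as quoted
```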
Rahman, Motior M; Islam, Aminul M; Azirun, Sofian M; Boyce, Amru N
2014-01-01
Bush bean, long bean, mung bean, and winged bean plants were grown with N fertilizer at rates of 0, 2, 4, and 6 g N m(-2) preceding rice planting. Concurrently, rice was grown with N fertilizer at rates of 0, 4, 8, and 12 g N m(-2). No chemical fertilizer was used in the second year of cropping, in order to estimate the nitrogen agronomic efficiency (NAE), nitrogen recovery efficiency (NRE), N uptake, and rice yield when legume crops were grown in rotation with rice. Rice after winged bean grown with N at the rate of 4 g N m(-2) achieved significantly higher NRE, NAE, and N uptake in both years. Rice after winged bean grown without N fertilizer produced 13-23% higher grain yield than rice after fallow rotation with 8 g N m(-2). The results revealed that rice after winged bean without fertilizer and rice after long bean with N fertilizer at the rate of 4 g N m(-2) can produce rice yield equivalent to that of rice after fallow with N fertilizer at a rate of 8 g N m(-2). The NAE, NRE, and harvest index values for rice after winged bean or other legume crop rotations indicated a positive response for rice production without deteriorating soil fertility.
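The two efficiency indices are standard agronomic definitions: NAE is the extra grain yield per unit of N applied, and NRE is the share of applied N recovered by the crop. A minimal sketch with invented plot values (not the study's data):

```python
# Nitrogen agronomic efficiency (NAE) and recovery efficiency (NRE);
# numbers below are invented for illustration (per m2 basis).
yield_fertilized = 620.0    # g grain m-2, fertilized plot
yield_control = 540.0       # g grain m-2, zero-N plot
uptake_fertilized = 9.5     # g N m-2 taken up, fertilized plot
uptake_control = 7.2        # g N m-2 taken up, zero-N plot
n_applied = 4.0             # g N m-2

nae = (yield_fertilized - yield_control) / n_applied          # g grain per g N
nre = (uptake_fertilized - uptake_control) / n_applied * 100  # percent
print(f"NAE = {nae:.1f} g/g, NRE = {nre:.0f}%")
```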
STABILIZING CLOUD FEEDBACK DRAMATICALLY EXPANDS THE HABITABLE ZONE OF TIDALLY LOCKED PLANETS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang Jun; Abbot, Dorian S.; Cowan, Nicolas B., E-mail: abbot@uchicago.edu
2013-07-10
The habitable zone (HZ) is the circumstellar region where a planet can sustain surface liquid water. Searching for terrestrial planets in the HZ of nearby stars is the stated goal of ongoing and planned extrasolar planet surveys. Previous estimates of the inner edge of the HZ were based on one-dimensional radiative-convective models. The most serious limitation of these models is the inability to predict cloud behavior. Here we use global climate models with sophisticated cloud schemes to show that due to a stabilizing cloud feedback, tidally locked planets can be habitable at twice the stellar flux found by previous studies. This dramatically expands the HZ and roughly doubles the frequency of habitable planets orbiting red dwarf stars. At high stellar flux, strong convection produces thick water clouds near the substellar location that greatly increase the planetary albedo and reduce surface temperatures. Higher insolation produces stronger substellar convection and therefore higher albedo, making this phenomenon a stabilizing climate feedback. Substellar clouds also effectively block outgoing radiation from the surface, reducing or even completely reversing the thermal emission contrast between dayside and nightside. The presence of substellar water clouds and the resulting clement surface conditions will therefore be detectable with the James Webb Space Telescope.
Young, Brent; Conti, David V; Dean, Matthew D
2013-01-01
In a variety of taxa, males deploy alternative reproductive tactics to secure fertilizations. In many species, small “sneaker” males attempt to steal fertilizations while avoiding encounters with larger, more aggressive, dominant males. Sneaker males usually face a number of disadvantages, including reduced access to females and the higher likelihood that upon ejaculation, their sperm face competition from other males. Nevertheless, sneaker males represent an evolutionarily stable strategy under a wide range of conditions. Game theory suggests that sneaker males compensate for these disadvantages by investing disproportionately in spermatogenesis, by producing more sperm per unit body mass (the “fair raffle”) and/or by producing higher quality sperm (the “loaded raffle”). Here, we test these models by competing sperm from sneaker “jack” males against sperm from dominant “hooknose” males in Chinook salmon. Using two complementary approaches, we reject the fair raffle in favor of the loaded raffle and estimate that jack males were ∼1.35 times as likely as hooknose males to fertilize eggs under controlled competitive conditions. Interestingly, the direction and magnitude of this skew in paternity shifted according to individual female egg donors, suggesting cryptic female choice could moderate the outcomes of sperm competition in this externally fertilizing species.
Ismail, Gehan A; Ismail, Mona M
2017-02-01
Concentrations of nine heavy metals (Cd, Co, Cr, Cu, Fe, Mn, Ni, Pb, and Zn) were determined in the green seaweed species Cladophora glomerata and Ulva compressa collected from El-Mex and Sidi Kirayr locations. The heavy metal concentrations in algal tissues were in direct correlation with their soluble concentrations in seawater with the descending order: Fe
12 CFR 217.153 - Internal models approach (IMA).
Code of Federal Regulations, 2014 CFR
2014-01-01
...-regulated institution must have one or more models that: (i) Assess the potential decline in value of its... idiosyncratic risk. (2) The Board-regulated institution's model must produce an estimate of potential losses for its modeled equity exposures that is no less than the estimate of potential losses produced by a VaR...
FOOTPRINT is a simple and user-friendly screening model to estimate the length and surface area of BTEX plumes in ground water produced from a spill of gasoline that contains ethanol. Ethanol has a potential negative impact on the natural biodegradation of BTEX compounds in ground water.
An object-based approach for areal rainfall estimation and validation of atmospheric models
NASA Astrophysics Data System (ADS)
Troemel, Silke; Simmer, Clemens
2010-05-01
An object-based approach for areal rainfall estimation is applied to pseudo-radar data simulated with a weather-forecast model as well as to real radar volume data. The method aims to exploit as fully as possible the three-dimensional radar signals produced by precipitation-generating systems during their lifetimes in order to enhance areal rainfall estimation. To this end, tracking of radar-detected precipitation centroids is performed, and rain events are investigated using so-called Integral Radar Volume Descriptors (IRVDs) containing relevant information about the underlying precipitation process. Some of the investigated descriptors are statistical quantities derived from the radar reflectivities within the boundary of a tracked rain cell, like the area-mean reflectivity or the compactness of a cell; others evaluate the mean vertical structure during the tracking period at the near-surface reflectivity-weighted center of the cell, like the mean effective efficiency or the mean echo top height. The stage of evolution of a system is given by the trend in the brightband fraction or related quantities. Furthermore, two descriptors not directly derived from radar data are considered: the mean wind shear and an orographic rainfall amplifier. While in the case of pseudo-radar data a model based on a small set of IRVDs alone provides rainfall estimates of high accuracy, the application of such a model to the real world remains within the accuracies achievable with a constant Z-R relationship. However, a combined model based on single IRVDs and the Marshall-Palmer Z-R estimator already provides considerable enhancements, even though the resolution of the data base used has room for improvement. The mean echo top height, the mean effective efficiency, the empirical standard deviation and the Marshall-Palmer estimator are selected for the final rainfall estimator. High correlations between storm height and rain rates, a shift of the probability distribution to higher values with increasing effective efficiency, and the possibility to classify continental and maritime systems using the effective efficiency confirm the informative value of the qualified descriptors. The IRVDs especially correct for the underestimation in the case of intense rain events, and the information content of the descriptors is most likely higher than demonstrated so far. We used quite sparse information about the meteorological variables needed for the calculation of some IRVDs from single radiosoundings, and several descriptors suffered from the range-dependent vertical resolution of the reflectivity profile. Inclusion of neighbouring radars and assimilation runs of weather forecasting models will further enhance the accuracy of rainfall estimates. Finally, the clear difference between the IRVD selection from the pseudo-radar data and from the real-world data hints at a new object-based avenue for the validation of higher resolution atmospheric models and for evaluating their potential to digest radar observations in data assimilation schemes.
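The baseline estimator mentioned throughout is the Marshall-Palmer Z-R relationship, Z = 200 R^1.6. A sketch of inverting it and layering a descriptor-based linear correction on top (the event values and reference rainfalls are invented placeholders, not the study's data):

```python
import numpy as np

def marshall_palmer_rain_rate(dbz):
    """Invert the Marshall-Palmer relation Z = 200 * R^1.6 to R in mm/h."""
    return (10.0 ** (dbz / 10.0) / 200.0) ** (1.0 / 1.6)

# Hypothetical per-event values: mean reflectivity (dBZ) plus three IRVDs
# (echo-top height in km, effective efficiency, reflectivity std-dev).
dbz = np.array([28.0, 36.0, 24.0, 33.0, 30.0, 38.0])
irvds = np.array([[6.0, 0.30, 4.2],
                  [9.5, 0.55, 6.8],
                  [4.0, 0.20, 3.1],
                  [8.0, 0.45, 5.5],
                  [7.0, 0.40, 4.9],
                  [10.0, 0.60, 7.2]])
gauge_mm = np.array([2.5, 10.2, 1.1, 5.9, 4.0, 12.5])  # invented reference rainfall

# Combined model: Marshall-Palmer estimate plus IRVD predictors, least squares.
mp = marshall_palmer_rain_rate(dbz)
A = np.column_stack([np.ones_like(mp), mp, irvds])
coef, *_ = np.linalg.lstsq(A, gauge_mm, rcond=None)
print("fitted coefficients:", np.round(coef, 3))
```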
Modified microplate method for rapid and efficient estimation of siderophore produced by bacteria.
Arora, Naveen Kumar; Verma, Maya
2017-12-01
In this study, siderophore production by various bacteria amongst the plant-growth-promoting rhizobacteria was quantified by a rapid and efficient method. In total, 23 siderophore-producing bacterial isolates/strains were used to estimate siderophore-producing ability by the standard method (chrome azurol sulphonate assay) as well as by a 96-well microplate method. Siderophore production was estimated in percent siderophore units by both methods. The data obtained by the two methods correlated positively with each other, confirming the validity of the microplate method. With the modified microplate method, siderophore production by several bacterial strains can be estimated both qualitatively and quantitatively in one run, saving time and chemicals; it is also less tedious and cheaper than the method currently in use. The modified microtiter plate method proposed here makes it far easier to screen plant-associated bacteria for this plant-growth-promoting trait.
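The percent siderophore unit reported in such CAS assays is conventionally computed from the loss of absorbance of the dye relative to an uninoculated reference. A minimal Python sketch of that standard calculation (the 630 nm wavelength and all absorbance values are illustrative assumptions, not taken from the paper):

    def percent_siderophore_units(a_reference, a_sample):
        """Percent siderophore units, psu = (Ar - As) / Ar * 100, where Ar is
        the absorbance of the CAS reagent with uninoculated medium and As the
        absorbance of the sample (both commonly read near 630 nm)."""
        return (a_reference - a_sample) / a_reference * 100.0

    # Hypothetical microplate readings for three isolates against Ar = 0.68:
    for a_s in (0.52, 0.31, 0.18):
        print(round(percent_siderophore_units(0.68, a_s), 1))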
A technique for estimating the absolute gain of a photomultiplier tube
NASA Astrophysics Data System (ADS)
Takahashi, M.; Inome, Y.; Yoshii, S.; Bamba, A.; Gunji, S.; Hadasch, D.; Hayashida, M.; Katagiri, H.; Konno, Y.; Kubo, H.; Kushida, J.; Nakajima, D.; Nakamori, T.; Nagayoshi, T.; Nishijima, K.; Nozaki, S.; Mazin, D.; Mashuda, S.; Mirzoyan, R.; Ohoka, H.; Orito, R.; Saito, T.; Sakurai, S.; Takeda, J.; Teshima, M.; Terada, Y.; Tokanai, F.; Yamamoto, T.; Yoshida, T.
2018-06-01
Detection of low-intensity light relies on the conversion of photons to photoelectrons, which are then multiplied and detected as an electrical signal. To measure the actual intensity of the light, one must know the factor by which the photoelectrons have been multiplied. To obtain this amplification factor, we have developed a procedure for estimating precisely the signal caused by a single photoelectron. The method utilizes the fact that the photoelectrons conform to a Poisson distribution. The average signal produced by a single photoelectron can then be estimated from the number of noise events, without requiring analysis of the distribution of the signal produced by a single photoelectron. The signal produced by one or more photoelectrons can be estimated experimentally without any assumptions. This technique, and an example of the analysis of a signal from a photomultiplier tube, are described in this study.
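The core of the procedure can be sketched compactly: if the photoelectron count per flash is Poisson with mean lambda, the fraction of noise-only (zero-photoelectron) events gives lambda = -ln(N0/N), and the mean single-photoelectron signal is the mean observed charge divided by lambda. A minimal Python sketch under those assumptions (the pedestal cut and charge units are hypothetical, and charges are assumed already pedestal-subtracted):

    import numpy as np

    def single_pe_signal(charges, pedestal_cut):
        """Mean single-photoelectron charge from many low-intensity flashes.
        Events below `pedestal_cut` are counted as zero-photoelectron events;
        Poisson statistics give lam = -ln(N0/N), and the charge per
        photoelectron follows as mean(charge) / lam."""
        charges = np.asarray(charges, dtype=float)
        n_zero = np.count_nonzero(charges < pedestal_cut)
        lam = -np.log(n_zero / charges.size)   # mean photoelectrons per flash
        return charges.mean() / lam

    # Crude synthetic run: 10,000 flashes, true single-p.e. charge near 1.0:
    rng = np.random.default_rng(1)
    q = rng.poisson(0.3, 10000) * rng.normal(1.0, 0.2, 10000)
    print(single_pe_signal(q, 0.25))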
Using Doppler radar images to estimate aircraft navigational heading error
Doerry, Armin W [Albuquerque, NM; Jordan, Jay D [Albuquerque, NM; Kim, Theodore J [Albuquerque, NM
2012-07-03
A yaw angle error of a motion measurement system carried on an aircraft for navigation is estimated from Doppler radar images captured using the aircraft. At least two radar pulses aimed at respectively different physical locations in a targeted area are transmitted from a radar antenna carried on the aircraft. At least two Doppler radar images that respectively correspond to the at least two transmitted radar pulses are produced. These images are used to produce an estimate of the yaw angle error.
Study on Comparison of Bidding and Pricing Behavior Distinction between Estimate Methods
NASA Astrophysics Data System (ADS)
Morimoto, Emi; Namerikawa, Susumu
The most notable recent trend in bidding and pricing behavior is the increasing number of bids placed just above the criteria for low-price bidding investigations. The contractor's markup is the difference between the bidding price and the execution price; in Japanese public works bids, it is therefore the difference between the low-price investigation threshold and the execution price. In practice, bidders' strategies and behavior have been shaped by public engineers' budgets, and estimation and bidding are inseparably linked in the Japanese public works procurement system. A trial of the unit-price-type estimation method began in 2004, while the accumulated estimation method remains in general use for public works, so two standard estimation methods coexist in Japan. In this study, we statistically analyzed bid information on civil engineering works for the Ministry of Land, Infrastructure, and Transportation in 2008. The analysis shows that bidding and pricing behavior is related to the estimation method used. The two standard estimation methods produce different numbers of bidders (the bid/no-bid decision) and different distributions of bid prices (the markup decision). Comparing the bid-price distributions shows that, for large-sized public works, bids concentrate on the low-price investigation threshold more strongly under the unit-price-type estimation method than under the accumulated estimation method. At the same time, the number of bidders for works estimated by unit price tends to increase significantly, suggesting that unit-price estimation has been one of the factors construction companies weigh in deciding whether to participate in biddings.
Stoklosa, Michal; Ross, Hana
2014-05-01
To compare two different methods for estimating the size of the illicit cigarette market with each other and to contrast the estimates obtained by these two methods with the results of an industry-commissioned study. We used two observational methods: collection of data from packs in smokers' personal possession, and collection of data from packs discarded on streets. The data were obtained in Warsaw, Poland in September 2011 and October 2011. We used tests of independence to compare the results based on the two methods, and to contrast those with the estimate from the industry-commissioned discarded pack collection conducted in September 2011. We found that the proportions of cigarette packs classified as not intended for the Polish market estimated by our two methods were not statistically different. These estimates were 14.6% (95% CI 10.8% to 19.4%) using the survey data (N=400) and 15.6% (95% CI 13.2% to 18.4%) using the discarded pack data (N=754). The industry estimate (22.9%) was higher by nearly a half compared with our estimates, and this difference is statistically significant. Our findings are consistent with previous evidence of the tobacco industry exaggerating the scope of illicit trade and with the general pattern of the industry manipulating evidence to mislead the debate on tobacco control policy in many countries. Collaboration between governments and the tobacco industry to estimate tobacco tax avoidance and evasion is likely to produce upward-biased estimates of illicit cigarette trade. If governments are presented with industry estimates, they should strictly require a disclosure of all methodological details and data used in generating these estimates, and should seek advice from independent experts.
D'Agnese, F. A.; Faunt, C.C.; Turner, A.K.; ,
1996-01-01
The recharge and discharge components of the Death Valley regional groundwater flow system were defined by remote sensing and GIS techniques that integrated disparate data types to develop a spatially complex representation of near-surface hydrological processes. Image classification methods were applied to multispectral satellite data to produce a vegetation map. This map provided a basis for subsequent evapotranspiration and infiltration estimations. The vegetation map was combined with ancillary data in a GIS to delineate different types of wetlands, phreatophytes and wet playa areas. Existing evapotranspiration-rate estimates were then used to calculate discharge volumes for these areas. A previously used empirical method of groundwater recharge estimation was modified by GIS methods to incorporate data describing soil-moisture conditions, and a recharge potential map was produced. These discharge and recharge maps were readily converted to data arrays for numerical modelling codes. Inverse parameter estimation techniques also used these data to evaluate the reliability and sensitivity of estimated values.
Use of wastewater treatment plant biogas for the operation of Solid Oxide Fuel Cells (SOFCs).
Lackey, Jillian; Champagne, Pascale; Peppley, Brant
2017-12-01
Solid Oxide Fuel Cells (SOFCs) perform well on light hydrocarbon fuels, and the use of biogas derived from the anaerobic digestion (AD) of municipal wastewater sludges could provide an opportunity for the CH4 produced to be used as a renewable fuel. Greenhouse gas (GHG), NOx, SOx, and hydrocarbon pollutant emissions would also be reduced. In this study, SOFCs were operated on AD-derived biogas. Initially, different H2 dilutions were tested (N2, Ar, CO2) to examine the performance of tubular SOFCs. With inert gases as diluents, a decrease in cell performance was observed; however, CO2 caused a larger decrease in performance because it promoted the reverse water-gas shift (WGS) reaction, reducing the H2 partial pressure in the gas mixture. A model was developed to predict system efficiency and GHG emissions. A higher electrical system efficiency was noted for a steam:carbon ratio of 2 compared to 1, owing to the increased H2 partial pressure in the reformate resulting from the higher H2O concentration. Reductions in GHG emissions were estimated at 2400 tonnes CO2, 60 kg CH4 and 18 kg N2O. SOFCs were also tested using a simulated biogas reformate mixture (66.7% H2, 16.1% CO, 16.5% CO2, 0.7% N2, humidified to 2.3 or 20 mol% H2O). Higher humidification yielded better performance as the WGS reaction produced more H2 with additional H2O. It was concluded that AD-derived biogas, when cleaned to remove H2S, Si compounds, halides and other contaminants, could be reformed to provide a clean, renewable fuel for SOFCs.
Storm Physics and Lightning Properties over Northern Alabama during DC3
NASA Astrophysics Data System (ADS)
Matthee, R.; Carey, L. D.; Bain, A. L.
2013-12-01
The Deep Convective Clouds and Chemistry (DC3) experiment seeks to examine the relationship between deep moist convection (DMC) and the production of nitrogen oxides (NOx) via lightning (LNOx). The focus of this study is to examine integrated storm microphysics and lightning properties of DMC across northern Alabama (NA) during the DC3 campaign through use of polarimetric radar [UAHuntsville's Advanced Radar for Meteorological and Operational Radar (ARMOR)] and lightning mapping [National Aeronautics and Space Administration's (NASA) north Alabama Lightning Mapping Array (NA LMA)] platforms. Specifically, ARMOR and NA LMA are being used to explore the ability of radar-inferred microphysical measurements (e.g., ice mass, graupel volume) to parameterize flash rates (F) and flash area for estimation of LNOx production in cloud-resolving models. Flash area was calculated using the 'convex hull' method: the convex hull is the minimum polygon that circumscribes all the sources comprising a flash, and its area is taken as the flash area. Two storms have been analyzed so far, one on 21 May 2012 (S1) and another on 11 June 2012 (S2), both of which were penetrated by aircraft during DC3. For S1 and S2, radar reflectivity (Z) estimates of precipitation ice mass (M) within the mixed-phase zone (-10°C to -40°C) were well correlated with the trend of lightning flash rate. However, a useful radar-based F parameterization must provide accurate quantification of rates in addition to proper trends. The difference reflectivity was used to estimate Z associated with ice, and a single Z-M relation was then employed to calculate M in the mixed-phase zone. Using this approach, it was estimated that S1 produced an order of magnitude greater M, but about a third as many total flashes, compared with S2. Expectations based on non-inductive charging (NIC) theory suggest that the M-to-F ratio (M/F) should be stable from storm to storm, all else being equal. Further investigation revealed that the mean mixed-phase Z was 11 dB higher in S1 than in S2, suggesting larger diameters and lower concentrations of ice particles in S1. Reducing the intercept parameter (N0) of an assumed exponential ice particle size distribution within the Z-M relation by an order of magnitude for S1 resulted in a proportional reduction in S1's inferred M and therefore a more comparable M/F ratio between the storms. Flash statistics between S1 and S2 revealed the following: S1 produced 1.92 flashes/minute and a total of 102 flashes, while S2 produced 3.45 flashes/minute and a total of 307 flashes. On average, S1 (S2) produced 212 (78) sources per flash and an average flash area of 89.53 km2 (53.85 km2). Thus, S1 produced fewer flashes and a lower F, but more sources per flash and larger flash areas, compared with S2. Ongoing analysis is exploring the tuning of N0 within the Z-M relation by the mean Z in the mixed-phase zone. The suitability of various M estimates and other radar properties (graupel volume, ice fluxes, anvil ice mass) for parameterizing F, flash area and LNOx will be investigated for different storm types across NA.
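The convex-hull flash area described above is straightforward to compute from LMA source locations. A minimal Python sketch using scipy (the source coordinates are hypothetical; note that for 2-D points scipy's ConvexHull exposes the enclosed area as .volume, while .area is the perimeter):

    import numpy as np
    from scipy.spatial import ConvexHull

    def flash_area_km2(source_xy_km):
        """Flash area as the area of the minimum polygon (convex hull)
        circumscribing all VHF sources in a flash."""
        pts = np.asarray(source_xy_km, dtype=float)
        if len(pts) < 3:
            return 0.0                 # a 2-D hull needs at least 3 points
        return ConvexHull(pts).volume  # .volume is the area for 2-D input

    # Hypothetical flash with five mapped sources (x, y in km):
    print(flash_area_km2([[0, 0], [4, 1], [6, 5], [2, 7], [-1, 3]]))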
New Estimates of Rhenium in the Crust: Implications for Mantle Re-Os Budgets
NASA Astrophysics Data System (ADS)
Bennett, V. C.; Sun, W.
2002-12-01
The 187Re-187Os isotopic system has provided a new probe of mantle chemical structure; numerous studies, for example, now balance estimates of the Os isotopic composition of the modern upper mantle against the sizes and ages of proposed conjugate reservoirs stored within the deep mantle. This style of modeling depends on estimates of the parent Re in the various reservoirs, including the total crust, upper mantle, MORB and ocean island basalts. New laser ICP-MS in situ and ID whole-rock results from OIB, arc and back-arc basalts suggest that Re concentrations in oceanic and crustal domains may have been greatly underestimated. For example, Hawaiian OIBs show a clear distinction between subaerially and submarine erupted samples, with the latter having Re much closer to the higher MORB estimates (1) than to previous OIB estimates. This difference has been attributed to Re volatility and loss during syn- and post-eruption degassing of subaerial samples. Recent work has produced similar results for submarine arc samples using both dredged glasses and melt inclusions in olivines from primitive basalts. Both have much higher average Re (ca. 1.5 and 3.4 ppb; 2,3) than literature values for arcs (ca. 0.30 ppb) determined largely from subaerial samples, or for average crust estimated from loess (0.2 ppb; 4). If the undegassed arc samples are representative, then the total crust may have more than 5 times the Re previously estimated. Re lost during arc eruptions may ultimately be concentrated in anoxic seafloor sediments. Prior underestimates may be linked to the extremely heterogeneous concentration (> 5 orders of magnitude) of the chalcophile, redox-sensitive Re in crustal environments. If the residence time of high Re in the crust is long (>1 Ga) then, 1) much smaller reservoirs of stored Re in the deep mantle are required to balance Re depletions in the upper mantle, and 2) significant portions of the upper mantle are likely Re depleted. Alternatively, Re may be rapidly recycled in oceanic sediments (short residence time), resulting in a smaller effect on Re-Os budgets but creating areas of extreme Re heterogeneity in the upper mantle. Refs: 1. Bennett, Norman and Garcia, EPSL 2000. 2. Sun et al. (in press, Chemical Geology). 3. Sun et al. (submitted). 4. Peucker-Ehrenbrink and Jahn, G3, 2001.
Eyles, Helen; Neal, Bruce; Jiang, Yannan; Ni Mhurchu, Cliona
2016-05-28
Population exposure to food and nutrients can be estimated from household food purchases, but store surveys of foods and their composition are more available, less costly and might provide similar information. Our aim was to compare estimates of nutrient exposure from a store survey of packaged food with those from household panel food purchases. A cross-sectional store survey of all packaged foods for sale in two major supermarkets was undertaken in Auckland, New Zealand, between February and May 2012. Longitudinal household food purchase data (November 2011 to October 2012) were obtained from the nationally representative, population-weighted New Zealand Nielsen HomeScan® panel. Data on 8440 packaged food and non-alcoholic beverage products were collected in the store survey. Food purchase data were available for 1229 households and 16 812 products. Store survey data alone produced higher estimates of exposure to Na and sugar compared with estimates from household panel food purchases. The estimated mean difference in exposure to Na was 94 (95 % CI 72, 115) mg/100 g (20 % relative difference; P<0·01), to sugar 1·6 (95 % CI 0·8, 2·5) g/100 g (11 %; P<0·01), to SFA -0·3 (95 % CI -0·8, 0·3) g/100 g (6 %; P=0·3) and to energy -18 (-71, 35) kJ/100 g (2 %; P=0·51). Compared with household panel food purchases, store survey data provided a reasonable estimate of average population exposure to key nutrients from packaged foods. However, caution should be exercised in using such data to estimate population exposure to Na and sugar and in generalising these findings to other countries, as well as over time.
NASA Astrophysics Data System (ADS)
Masterenko, Dmitry A.; Metel, Alexander S.
2018-03-01
The process capability indices Cp and Cpk are widely used in modern quality management as statistical measures of the ability of a process to produce output X within specification limits. A customer requirement to ensure Cp ≥ 1.33 is often written into contracts. Capability index estimates may be calculated from estimates of the mean µ and the variability 6σ, for which the quality characteristic must be measured on a sample of pieces. This requires, in turn, advanced measuring devices and well-qualified staff. On the other hand, quality inspection by attributes, performed with limit (go/no-go) gauges, is much simpler and faster, but it does not yield numerical values of the quality characteristic. The described method allows estimating the mean and the variability of the process from the results of limit-gauge inspection with a certain lower limit LCL and upper limit UCL, which separate the pieces into three groups: n1 pieces with X < LCL, n2 pieces with LCL ≤ X < UCL, and n3 pieces with X ≥ UCL. So-called Pittman-type estimates, developed by the author, are functions of n1, n2 and n3 and allow calculation of the estimated µ and σ; thus Cp and Cpk may also be estimated without precise measurements. The estimates can be used in quality inspection of lots of pieces as well as in monitoring and control of the manufacturing process, which is important for improving the quality of machined articles through tolerance control.
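The Pittman-type estimators themselves are not given in the abstract, but the idea of recovering µ and σ from the three counts can be illustrated with a simpler quantile-matching sketch: assuming X is normal, match the observed fractions below LCL and above UCL to normal quantiles and solve the two equations for µ and σ (this stand-in requires n1 > 0 and n3 > 0, and all numbers below are hypothetical):

    from statistics import NormalDist

    def process_estimates(n1, n2, n3, lcl, ucl, lsl, usl):
        """Quantile-matching stand-in (not the paper's Pittman-type estimates):
        z1 = (LCL - mu)/sigma and z3 = (UCL - mu)/sigma are read off the
        observed tail fractions, then Cp and Cpk are formed against the
        specification limits LSL/USL."""
        n = n1 + n2 + n3
        z1 = NormalDist().inv_cdf(n1 / n)
        z3 = NormalDist().inv_cdf(1.0 - n3 / n)
        sigma = (ucl - lcl) / (z3 - z1)
        mu = lcl - sigma * z1
        cp = (usl - lsl) / (6.0 * sigma)
        cpk = min(usl - mu, mu - lsl) / (3.0 * sigma)
        return mu, sigma, cp, cpk

    # 500 pieces gauged at 9.95/10.05 mm, specification limits 9.90/10.10 mm:
    print(process_estimates(12, 470, 18, 9.95, 10.05, 9.90, 10.10))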
NASA Astrophysics Data System (ADS)
Terry, N.; Day-Lewis, F. D.; Werkema, D. D.; Lane, J. W., Jr.
2017-12-01
Soil moisture is a critical parameter for agriculture, water supply, and management of landfills. Whereas direct data (as from TDR or soil moisture probes) provide localized point scale information, it is often more desirable to produce 2D and/or 3D estimates of soil moisture from noninvasive measurements. To this end, geophysical methods for indirectly assessing soil moisture have great potential, yet are limited in terms of quantitative interpretation due to uncertainty in petrophysical transformations and inherent limitations in resolution. Simple tools to produce soil moisture estimates from geophysical data are lacking. We present a new standalone program, MoisturEC, for estimating moisture content distributions from electrical conductivity data. The program uses an indicator kriging method within a geostatistical framework to incorporate hard data (as from moisture probes) and soft data (as from electrical resistivity imaging or electromagnetic induction) to produce estimates of moisture content and uncertainty. The program features data visualization and output options as well as a module for calibrating electrical conductivity with moisture content to improve estimates. The user-friendly program is written in R - a widely used, cross-platform, open source programming language that lends itself to further development and customization. We demonstrate use of the program with a numerical experiment as well as a controlled field irrigation experiment. Results produced from the combined geostatistical framework of MoisturEC show improved estimates of moisture content compared to those generated from individual datasets. This application provides a convenient and efficient means for integrating various data types and has broad utility to soil moisture monitoring in landfills, agriculture, and other problems.
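MoisturEC itself is an R program; purely to illustrate what its calibration module contributes, the sketch below fits an assumed Archie-like power law sigma = a * theta**n between co-located moisture-probe readings and bulk electrical conductivity, then inverts it to translate EC estimates into moisture content. All numbers are hypothetical and the sketch is in Python rather than the program's R:

    import numpy as np

    theta_probe = np.array([0.08, 0.15, 0.22, 0.30])  # probe moisture (m3/m3)
    sigma_probe = np.array([2.1, 6.5, 13.0, 23.0])    # co-located bulk EC (mS/m)

    # Fit log(sigma) = n*log(theta) + log(a), i.e. sigma = a * theta**n.
    n, log_a = np.polyfit(np.log(theta_probe), np.log(sigma_probe), 1)
    a = np.exp(log_a)

    def moisture_from_ec(sigma):
        """Invert the fitted power law for moisture content."""
        return (np.asarray(sigma, dtype=float) / a) ** (1.0 / n)

    print(moisture_from_ec([4.0, 10.0, 20.0]))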
Gender and gender role differences in self- and other-estimates of multiple intelligences.
Szymanowicz, Agata; Furnham, Adrian
2013-01-01
This study examined participant gender and gender role differences in estimates of multiple intelligences for self, partner, and various hypothetical, stereotypical, and counter-stereotypical target persons. A general population sample of 261 British participants completed one of four questionnaires that required them to estimate their own and others' multiple intelligences and personality traits. Males estimated their general IQ slightly, but mathematic IQ significantly higher than females, who rated their social and emotional intelligence higher than males. Masculine individuals awarded themselves somewhat higher verbal and practical IQ scores than did female participants. Both participant gender and gender role differences in IQ estimates were found, with gender effects stronger in cognitive and gender role than in "personal" ability estimates. There was a significant effect of gender role on hypothetical persons' intelligence evaluations, with masculine targets receiving significantly higher intelligence estimates compared to feminine targets. More intelligent hypothetical figures were judged as more masculine and less feminine than less intelligent ones.
A Nonresponse Bias Analysis of the Health Information National Trends Survey (HINTS).
Maitland, Aaron; Lin, Amy; Cantor, David; Jones, Mike; Moser, Richard P; Hesse, Bradford W; Davis, Terisa; Blake, Kelly D
2017-07-01
We conducted a nonresponse bias analysis of the Health Information National Trends Survey (HINTS) 4, Cycles 1 and 3, collected in 2011 and 2013, respectively, using three analysis methods: comparison of response rates for subgroups, comparison of estimates with weighting adjustments and external benchmarks, and level-of-effort analysis. Areas with higher concentrations of low socioeconomic status, higher concentrations of young households, and higher concentrations of minority and Hispanic populations had lower response rates. Estimates of health information seeking behavior were higher in HINTS compared to the National Health Interview Survey (NHIS). The HINTS estimate of doctors always explaining things in a way that the patient understands was not significantly different from the same estimate from the Medical Expenditure Panel Survey (MEPS); however, the HINTS estimate of health professionals always spending enough time with the patient was significantly lower than the same estimate from MEPS. A level-of-effort analysis found that those who respond later in the survey field period were less likely to have looked for information about health in the past 12 months, but found only small differences between early and late respondents for the majority of estimates examined. There is some evidence that estimates from HINTS could be biased toward finding higher levels of health information seeking.
España-Romero, Vanesa; Golubic, Rajna; Martin, Kathryn R.; Hardy, Rebecca; Ekelund, Ulf; Kuh, Diana; Wareham, Nicholas J.; Cooper, Rachel; Brage, Soren
2014-01-01
Objectives To compare physical activity (PA) subcomponents from EPIC Physical Activity Questionnaire (EPAQ2) and combined heart rate and movement sensing in older adults. Methods Participants aged 60–64y from the MRC National Survey of Health and Development in Great Britain completed EPAQ2, which assesses self-report PA in 4 domains (leisure time, occupation, transportation and domestic life) during the past year and wore a combined sensor for 5 consecutive days. Estimates of PA energy expenditure (PAEE), sedentary behaviour, light (LPA) and moderate-to-vigorous PA (MVPA) were obtained from EPAQ2 and combined sensing and compared. Complete data were available in 1689 participants (52% women). Results EPAQ2 estimates of PAEE and MVPA were higher than objective estimates and sedentary time and LPA estimates were lower [bias (95% limits of agreement) in men and women were 32.3 (−61.5 to 122.6) and 29.0 (−39.2 to 94.6) kJ/kg/day for PAEE; −4.6 (−10.6 to 1.3) and −6.0 (−10.9 to −1.0) h/day for sedentary time; −171.8 (−454.5 to 110.8) and −60.4 (−367.5 to 246.6) min/day for LPA; 91.1 (−159.5 to 341.8) and 55.4 (−117.2 to 228.0) min/day for MVPA]. There were significant positive correlations between all self-reported and objectively assessed PA subcomponents (rho = 0.12 to 0.36); the strongest were observed for MVPA (rho = 0.30 men; rho = 0.36 women) and PAEE (rho = 0.26 men; rho = 0.25 women). Conclusion EPAQ2 produces higher estimates of PAEE and MVPA and lower estimates of sedentary and LPA than objective assessment. However, both methodologies rank individuals similarly, suggesting that EPAQ2 may be used in etiological studies in this population. PMID:24516543
Assessment of biomass open burning emissions in Indonesia and potential climate forcing impact
NASA Astrophysics Data System (ADS)
Permadi, Didin Agustian; Kim Oanh, Nguyen Thi
2013-10-01
This paper presents an emission inventory (EI) for biomass open burning (OB) sources, including forest, agro-residue and municipal solid waste (MSW), in Indonesia for the year 2007. The EI covered toxic air pollutants and greenhouse gases (GHGs) and was presented as annual and monthly averages for every district, and further on a grid of 0.25° × 0.25°. A rigorous analysis of activity data and emission-factor ranges was done to produce low, best and high emission estimates for each species. Development of an EI methodology for MSW OB, which, to the best of our knowledge, has not been presented in detail in the literature, was a focus of this paper. The best estimates of biomass OB emissions of toxic air pollutants for the country, in Gg, were: 9.6 SO2; 98 NOx; 7411 CO; 335 NMVOC; 162 NH3; 439 PM10; 357 PM2.5; 24 BC; and 147 OC. The best emission estimates of GHGs, in Gg, were: 401 CH4; 57,247 CO2; and 3.6 N2O. The low and high values of the emission estimates for the different species ranged from -86% to +260% of the corresponding best estimates. Crop-residue OB contributed more than 80% of the total biomass OB emissions, followed by forest fire at 2-12% (not including peat-soil fire emissions) and MSW (1-8%). Inter-annual active-fire counts for Indonesia showed relatively low values in 2007, which may be attributed to the high rainfall intensity under the influence of the La Niña climate pattern in that year. Total estimated net climate forcing from OB in Indonesia was 110 (20-year horizon) and 73 (100-year horizon) Tg CO2 equivalents, which is around 0.9-1.1% of that reported for global biomass OB over both time horizons. The spatial distribution showed higher emissions in large urban areas on Java and Sumatra, while the monthly emissions indicated higher values during the dry months of August-October.
Recovering the Atmospheric Resources of Mars: Updating the MARRS Study
NASA Astrophysics Data System (ADS)
England, Christopher; Hrubes, J. Dana
2006-01-01
In 2000, a conceptual design study was conducted for a plant that extracts oxygen (O2) directly from the martian atmosphere and that makes water and carbon monoxide (CO) as by-products. Updated estimates suggest that the amount of O2 in the atmosphere is about 2.3 times greater than the value used as the basis for the 2000 study. In this paper, estimates for O2 and by-products, and for energy and mass requirements, are updated on the basis of the higher O2 value. The basis for the design, termed "MARRS" for Mars Atmosphere Resource Recovery System, is the NASA/JSC Mars Reference Mission (MRM) requirement for O2, estimated at 5.8 kg/hr for about 500 sols. The 2000 study based its design on an atmospheric O2 content of 0.13%, the then-accepted value. Analysis now places the O2 content at about 0.3%, reducing the amount of energy and equipment proportionately. The revised estimate of the thermal power needed to meet MRM requirements for O2 is an average of about 52 kW, seasonally variable. The new mass estimate is 7898 kg, down from 13650 kg. The new estimate of oxygen content correspondingly reduces the amounts of by-products that can be recovered. CO, a primary fuel and propellant precursor, is produced at about 0.2 kg/kg O2. Water, possibly available at about 0.04 kg/kg O2, is believed not recoverable by the MARRS process at this lower level, even seasonally. An equation is provided for the seasonal variation in the atmospheric O2 fraction based on Viking pressure measurements. Oxygen varies seasonally between about 0.25% and 0.34%, the variability affecting plant design. While the higher O2 fraction means reduced amounts of by-products from the MARRS process, large amounts of nitrogen (liquid and gas), argon gas and liquid carbon dioxide (CO2) remain available as by-products for use as respiratory agents, refrigerants, propellants, propellant precursors and working fluids for emergency or backup power, transportation, and surface operations such as drilling.
Baines, S.B.; Fisher, N.S.; Doblin, M.A.; Cutter, G.A.; Cutter, L.S.; Cole, B.
2004-01-01
The potentially toxic element selenium is first concentrated from solution to a large but highly variable degree by algae and bacteria before being passed on to consumers. The large loads of abiotic and detrital suspended particles often present in rivers and estuaries may obscure spatial and temporal patterns in Se concentrations at the base of the food web. We used radiotracers to estimate uptake of both selenite (Se(IV)) and C by intact plankton communities at two sites in the Sacramento/San Joaquin River Delta. Our goals were to determine (1) whether C and Se(IV) uptake were coupled, (2) the role of bacteria in Se(IV) uptake, and (3) the Se:C uptake ratio of newly produced organic material. Se(IV) uptake, like C uptake, was strongly related to irradiance. The shapes of both relationships were very similar except that at least 42-56% of Se(IV) uptake occurred in the dark, whereas C uptake in the dark was negligible. Of this dark Se(IV) uptake, 34-67% occurred in the 0.2-1.0-µm size fraction, indicating significant uptake by bacteria. In addition to dark uptake, total Se(IV) uptake consisted of a light-driven component that was in fixed proportion to C uptake. Our estimates of daily areal Se(IV):C uptake ratios agreed very well with particulate Se:C measured at a site dominated by phytoplankton biomass. Estimates of bacterial Se:C were 2.4-13 times higher than for the phytoplankton, suggesting that bacteriovores may be exposed to higher dietary Se concentrations than herbivores.
USDA-ARS?s Scientific Manuscript database
Walnuts are grown on almost every continent with total world-wide production estimated at over 4 billion in-shell pounds. California walnut growers, who produce 99% of the US walnut crop, produced an estimated 1.2 billion pounds on approximately 310,000 bearing acres with a farm gate value of approx...
Estimation of group means when adjusting for covariates in generalized linear models.
Qu, Yongming; Luo, Junxiang
2015-01-01
Generalized linear models are commonly used to analyze categorical data such as binary, count, and ordinal outcomes. Adjusting for important prognostic factors or baseline covariates in generalized linear models may improve estimation efficiency. The model-based mean for a treatment group produced by most software packages estimates the response at the mean covariate, not the mean response for that treatment group in the studied population. Although this is not an issue for linear models, the model-based group mean estimates in generalized linear models can be seriously biased estimates of the true group means. We propose a new method to estimate the group means consistently, together with the corresponding variance estimation. Simulation showed that the proposed method produces an unbiased estimator of the group means and provides the correct coverage probability. The proposed method was applied to analyze hypoglycemia data from clinical trials in diabetes.
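The point about model-based means can be made concrete with a logistic model: because the inverse link is nonlinear, the response evaluated at the mean covariate is not the mean response over the covariate distribution. A minimal Python sketch with statsmodels and simulated data (names and numbers hypothetical; the averaging shown is the generic population-averaging idea, not the authors' exact estimator or its variance formula):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 1000
    x = rng.normal(size=n)                     # baseline covariate
    trt = rng.integers(0, 2, size=n)           # treatment indicator
    y = rng.binomial(1, 1 / (1 + np.exp(-(-0.5 + 1.0 * trt + 0.8 * x))))

    X = np.column_stack([np.ones(n), trt, x])
    fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()

    # "Model-based" treated-group mean as many packages report it:
    at_mean_x = fit.predict([[1.0, 1.0, x.mean()]])[0]

    # Population-averaged treated-group mean: predict everyone as if treated,
    # then average on the response scale.
    pop_avg = fit.predict(np.column_stack([np.ones(n), np.ones(n), x])).mean()

    print(at_mean_x, pop_avg)  # these differ because the link is nonlinear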
Hollis, Geoff; Westbury, Chris
2018-02-01
Large-scale semantic norms have become both prevalent and influential in recent psycholinguistic research. However, little attention has been directed towards understanding the methodological best practices of such norm collection efforts. We compared the quality of semantic norms obtained through rating scales, numeric estimation, and a less commonly used judgment format called best-worst scaling. We found that best-worst scaling usually produces norms with higher predictive validities than other response formats, and does so requiring less data to be collected overall. We also found evidence that the various response formats may be producing qualitatively, rather than just quantitatively, different data. This raises the issue of potential response format bias, which has not been addressed by previous efforts to collect semantic norms, likely because of previous reliance on a single type of response format for a single type of semantic judgment. We have made available software for creating best-worst stimuli and scoring best-worst data. We also made available new norms for age of acquisition, valence, arousal, and concreteness collected using best-worst scaling. These norms include entries for 1,040 words, of which 1,034 are also contained in the ANEW norms (Bradley & Lang, Affective norms for English words (ANEW): Instruction manual and affective ratings (pp. 1-45). Technical report C-1, the center for research in psychophysiology, University of Florida, 1999).
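A common counting rule for scoring best-worst data, shown here purely as an illustration (the authors' released software may implement a different scheme), is (times chosen best - times chosen worst) / appearances. A minimal Python sketch with hypothetical valence trials:

    from collections import defaultdict

    def score_best_worst(trials):
        """Score = (best count - worst count) / appearances per item.
        Each trial is (items_shown, best_item, worst_item)."""
        best, worst, seen = defaultdict(int), defaultdict(int), defaultdict(int)
        for items, b, w in trials:
            for item in items:
                seen[item] += 1
            best[b] += 1
            worst[w] += 1
        return {item: (best[item] - worst[item]) / seen[item] for item in seen}

    trials = [(("calm", "panic", "chair", "joy"), "joy", "panic"),
              (("calm", "dread", "chair", "joy"), "calm", "dread")]
    print(score_best_worst(trials))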
In-situ Production of High Density Polyethylene and Other Useful Materials on Mars
NASA Technical Reports Server (NTRS)
Flynn, Michael
2005-01-01
This paper describes a revolutionary materials-structure and power-storage concept based on the in-situ production of abiotic C4 compounds. One of the largest single mass penalties required to support the human exploration of Mars is the surface habitat. This proposal will use physicochemical technologies to produce high-density polyethylene (HDPE) inflatable structures and construction materials from Mars atmospheric CO2. The formation of polyethylene from Mars CO2 is based on the use of the Sabatier and modified Fischer-Tropsch reactions. The proposed system will fully integrate with existing in-situ propellant production concepts. The technology will also be capable of supplementing human caloric requirements, providing solid and liquid fuels for energy storage, and providing a significant reduction in mission risk. The NASA Mars Reference Mission Definition Team estimated that a conventional Mars surface habitat structure would weigh 10 tonnes. It is estimated that this technology could reduce this mass by 80%. This reduction in mass would contribute significantly to the reduction in total mission cost needed to make a Mars mission a reality. In addition, the potential reduction of risk provided by the ability to produce C4 and potentially higher carbon-based materials in-situ on Mars is significant. Food, fuel, and shelter are only three of many requirements that would be affected by this research.
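The chemistry implied by the abstract can be written out with standard textbook stoichiometry; the exact process route in the paper may differ, and the reverse water-gas shift step is included here only as a common way to supply the CO that Fischer-Tropsch synthesis requires:

    \begin{align*}
    \text{Sabatier:} \quad & \mathrm{CO_2} + 4\,\mathrm{H_2} \rightarrow \mathrm{CH_4} + 2\,\mathrm{H_2O} \\
    \text{Reverse water-gas shift:} \quad & \mathrm{CO_2} + \mathrm{H_2} \rightarrow \mathrm{CO} + \mathrm{H_2O} \\
    \text{Fischer--Tropsch:} \quad & n\,\mathrm{CO} + 2n\,\mathrm{H_2} \rightarrow (\mathrm{CH_2})_n + n\,\mathrm{H_2O} \\
    \text{Polymerization to HDPE:} \quad & n\,\mathrm{C_2H_4} \rightarrow (\mathrm{C_2H_4})_n
    \end{align*}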
Simulated effect of tobacco tax variation on population health in California.
Kaplan, R M; Ake, C F; Emery, S L; Navarro, A M
2001-02-01
This study simulated the effects of tobacco excise tax increases on population health. Five simulations were used to estimate health outcomes associated with tobacco tax policies: (1) the effect of price on smoking prevalence; (2) the effect of tobacco use on years of potential life lost; (3) the effect of tobacco use on quality of life (morbidity); (4) the integration of prevalence, mortality, and morbidity into a model of quality-adjusted life years (QALYs); and (5) the development of confidence intervals around these estimates. Effects were estimated for 1 year after the tax's initiation and 75 years into the future. In California, a $0.50 tax increase and a price elasticity of -0.40 would result in about 8389 QALYs (95% confidence interval [CI] = 4629, 12,113) saved in the first year. Greater benefits would accrue each year until a steady state was reached after 75 years, when 52,136 QALYs (95% CI = 38,297, 66,262) would accrue each year. Higher taxes would produce even greater health benefits. A tobacco excise tax may be among the few policy options that can enhance a population's health status while making revenues available to government.
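Step (1) of the simulation chain reduces to simple elasticity arithmetic. A minimal Python sketch (the $3.00 baseline pack price is a hypothetical figure, not taken from the study; the elasticity and tax increase are the scenario values quoted above):

    base_price = 3.00    # hypothetical pre-tax pack price, USD
    tax_increase = 0.50  # scenario tax increase from the study
    elasticity = -0.40   # price elasticity of smoking prevalence

    pct_price_change = tax_increase / base_price           # about +16.7%
    pct_prevalence_change = elasticity * pct_price_change  # about -6.7%
    print(f"{pct_prevalence_change:.1%}")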
MMI attenuation and historical earthquakes in the basin and range province of western North America
Bakun, W.H.
2006-01-01
Earthquakes in central Nevada (1932-1959) were used to develop a modified Mercalli intensity (MMI) attenuation model for estimating moment magnitude M for earthquakes in the Basin and Range province of interior western North America. M is 7.4-7.5 for the 26 March 1872 Owens Valley, California, earthquake, in agreement with Beanland and Clark's (1994) M 7.6 estimated from geologic field observations. M is 7.5 for the 3 May 1887 Sonora, Mexico, earthquake, in agreement with Natali and Sbar's (1982) M 7.4 and Suter's (2006) M 7.5, both estimated from geologic field observations. MMI values at sites in California for earthquakes in the Nevada Basin and Range are apparently not much affected by the Sierra Nevada, except at sites near the Sierra Nevada, where MMI is reduced. This reduction in MMI is consistent with a shadow zone produced by the root of the Sierra Nevada. In contrast, MMI assignments for earthquakes located in the eastern Sierra Nevada, near the west margin of the Basin and Range, are greater than predicted at sites in California. These higher MMI values may result from critical reflections due to layering near the base of the Sierra Nevada.
Estimation of Tooth Size Discrepancies among Different Malocclusion Groups
Bala, Madhu; Goyal, Virender
2014-01-01
ABSTRACT Regards and Tribute: Late Dr Narender Hasija was a mentor and a visionary in the light of knowledge and experience. We pay our regards with deepest gratitude; may the departed soul rest in peace. Bolton's ratios help in estimating overbite and overjet relationships, the effects of contemplated extractions on posterior occlusion, incisor relationships, and the identification of occlusal misfit produced by tooth size discrepancies. Aim: To determine any difference in tooth size discrepancy in the anterior as well as the overall ratio in different malocclusions, and to compare the results with Bolton's study. Materials and methods: After measuring the teeth of all 100 patients, Bolton's analysis was performed. Results were compared with Bolton's means and standard deviations and were subjected to statistical analysis. The means and standard deviations of the ideal occlusion cases are comparable with those of Bolton, but when the means and standard deviations of the malocclusion groups are compared with Bolton's, the standard deviations are higher while the means remain comparable. PMID:25356005
Improved and standardized method for assessing years lived with disability after injury
Polinder, S; Lyons, RA; Lund, J; Ditsuwan, V; Prinsloo, M; Veerman, JL; van Beeck, EF
2012-01-01
Abstract Objective To develop a standardized method for calculating years lived with disability (YLD) after injury. Methods The method developed consists of obtaining data on injury cases seen in emergency departments as well as injury-related hospital admissions, using the EUROCOST system to link the injury cases to disability information and employing empirical data to describe functional outcomes in injured patients. Findings Overall, 87 weights and proportions for 27 injury diagnoses involving lifelong consequences were included in the method. Almost all of the injuries investigated (96–100%) could be assigned to EUROCOST categories. The mean number of YLD per case of injury varied with the country studied. Use of the novel method resulted in estimated burdens of injury that were 3 to 8 times higher, in terms of YLD, than the corresponding estimates produced using the conventional methods employed in global burden of disease studies, which employ disability-adjusted life years. Conclusion The novel method for calculating YLD after injury can be applied in different settings, overcomes some limitations of the method used to calculate the global burden of disease, and allows more accurate estimates of the population burden of injury. PMID:22807597
Physical installation of Pelletron and electron cooling system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hurh, P.
1997-09-01
Bremsstrahlung of 5 MeV electrons at a loss current of 50 microamp in the acceleration region is estimated to produce X-ray intensities of 7 Rad/sec. Radiation losses due to a mis-steer or sudden obstruction will of course be much higher still (estimated at 87,500 Rad/hr for a 0.5 mA beam current). It is estimated that 1.8 meters of concrete will be necessary to adequately shield the surrounding building areas at any possible Pelletron installation site. To satisfy our present electron cooling development plan, two Pelletron installations are required: the first at our development lab in the Lab B/NEF Enclosure area and the second at the operational Main Injector service building, MI-30, in the Main Injector ring. The same actual Pelletron and electron beam-line components will be used at both locations. The Lab B installation will allow experimentation with an actual high-energy electron beam to develop the optics necessary for the cooling straight while Main Injector/Recycler commissioning is taking place. The MI-30 installation is obviously the permanent home for the Pelletron when electron cooling becomes operational. Construction plans for both installations are discussed here.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nurten Vardar; Zehra Yumurtaci
The major gaseous emissions (e.g. sulfur dioxide, nitrogen oxides, carbon dioxide, and carbon monoxide), various organic emissions (e.g. benzene, toluene and xylenes) and some trace metals (e.g. arsenic, cobalt, chromium, manganese and nickel) generated from lignite-fired power plants in Turkey are estimated. The estimations are made separately for each of the thirteen plants that produced electricity in 2007, because the lignite-fired thermal plants in Turkey are installed near the regions where the lignite is mined, and the characteristics and composition of the lignite used in each power plant differ considerably from one region to another. An emission-factors methodology is used for the estimations. The emission factors obtained from well-known literature are then modified according to the local moisture content of the lignite. Emission rates and specific emissions (per MWh) of the pollutants from plants without electrostatic precipitators and flue-gas desulfurization systems are found to be higher than emissions from plants having electrostatic precipitators and flue-gas desulfurization systems. Finally, a projection of future emissions from lignite-based power plants is given. The predicted increase in generation capacity based on lignite-fired thermal power plants from 2008 to 2017 is around 30%. 39 refs., 13 figs., 10 tabs.
Marine fisheries declines viewed upside down: human impacts on consumer-driven nutrient recycling.
Layman, Craig A; Allgeier, Jacob E; Rosemond, Amy D; Dahlgren, Craig P; Yeager, Lauren A
2011-03-01
We quantified how two human impacts (overfishing and habitat fragmentation) in nearshore marine ecosystems may affect ecosystem function by altering the role of fish as nutrient vectors. We empirically quantified size-specific excretion rates of one of the most abundant fishes (gray snapper, Lutjanus griseus) in The Bahamas and combined these with surveys of fish abundance to estimate population-level excretion rates. The study was conducted across gradients of two human disturbances: overfishing and ecosystem fragmentation (estuaries bisected by roads), to evaluate how each could result in reduced population-level nutrient cycling by consumers. Mean estimated N and P excretion rates for gray snapper populations were on average 456% and 541% higher, respectively, in unfished sites. Ecosystem fragmentation resulted in significant reductions of recycling rates by snapper, with degree of creek fragmentation explaining 86% and 72% of the variance in estimated excretion for dissolved N and P, respectively. Additionally, we used nutrient limitation assays and primary producer nutrient content to provide a simple example of how marine fishery declines may affect primary production. This study provides an initial step toward integrating marine fishery declines and consumer-driven nutrient recycling to more fully understand the implications of human impacts in marine ecosystems.
Sources of atmospheric methane in the south Florida environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harriss, R.C.; Sebacher, D.I.; Bartlett, K.B.
1988-09-01
Direct measurement of methane (CH4) flux from wetland ecosystems of south Florida demonstrates that freshwater wet prairies and inundated saw-grass marsh are the dominant sources of atmospheric CH4 in the region. Fluctuations in soil moisture are an important environmental factor controlling both seasonal and interannual fluctuations in CH4 emissions from undisturbed wetlands. Land use estimates for 1900 and 1973 were used to calculate regional CH4 flux. Human settlement in south Florida has modified wetland sources of CH4, reducing the natural prairie and marsh sources by 37%. During the same period, impoundments and disturbed wetlands were created which produce CH4 at rates approximately 50% higher than the natural wetlands they replaced. Preliminary estimates of urban and ruminant sources of CH4 based on extrapolation from literature data indicate these sources may now contribute approximately 23% of the total regional source. It was estimated that the integrated effects of urban and agricultural development in south Florida between 1900 and 1973 resulted in a 26% enhancement in CH4 flux to the troposphere. 35 refs., 3 figs., 6 tabs.
Stability of individual loudness functions obtained by magnitude estimation and production
NASA Technical Reports Server (NTRS)
Hellman, R. P.
1981-01-01
A correlational analysis of individual magnitude estimation and production exponents at the same frequency is performed, as is an analysis of individual exponents produced in different sessions by the same procedure across frequency (250, 1000, and 3000 Hz). Taken as a whole, the results show that individual exponent differences do not decrease by counterbalancing magnitude estimation with magnitude production, and that individual exponent differences remain stable over time despite changes in stimulus frequency. Further results show that although individual magnitude estimation and production exponents do not necessarily obey the 0.6 power law, it is possible to predict the slope of an equal-sensation function averaged for a group of listeners from individual magnitude estimation and production data. On the assumption that individual listeners with sensorineural hearing loss also produce stable and reliable magnitude functions, it is also shown that the slope of the loudness-recruitment function measured by magnitude estimation and production can be predicted for individuals with bilateral losses of long duration. Results obtained in normal and pathological ears thus suggest that individual listeners can produce loudness judgements that reveal, although indirectly, the input-output characteristic of the auditory system.
Ventura, Emily E; Davis, Jaimie N; Goran, Michael I
2011-04-01
The consumption of fructose, largely in the form of high fructose corn syrup (HFCS), has risen over the past several decades and is thought to contribute negatively to metabolic health. However, the fructose content of foods and beverages produced with HFCS is not disclosed and estimates of fructose content are based on the common assumption that the HFCS used contains 55% fructose. The objective of this study was to conduct an objective laboratory analysis of the sugar content and composition in popular sugar-sweetened beverages with a particular focus on fructose content. Twenty-three sugar-sweetened beverages along with four standard solutions were analyzed for sugar profiles using high-performance liquid chromatography (HPLC) in an independent, certified laboratory. Total sugar content was calculated as well as percent fructose in the beverages that use HFCS as the sole source of fructose. Results showed that the total sugar content of the beverages ranged from 85 to 128% of what was listed on the food label. The mean fructose content in the HFCS used was 59% (range 47-65%) and several major brands appear to be produced with HFCS that is 65% fructose. Finally, the sugar profile analyses detected forms of sugar that were inconsistent with what was listed on the food labels. This analysis revealed significant deviations in sugar amount and composition relative to disclosures from producers. In addition, the tendency for use of HFCS that is higher in fructose could be contributing to higher fructose consumption than would otherwise be assumed.
Well-to-refinery emissions and net-energy analysis of China's crude-oil supply
NASA Astrophysics Data System (ADS)
Masnadi, Mohammad S.; El-Houjeiri, Hassan M.; Schunack, Dominik; Li, Yunpo; Roberts, Samori O.; Przesmitzki, Steven; Brandt, Adam R.; Wang, Michael
2018-03-01
Oil is China's second-largest energy source, so it is essential to understand the country's greenhouse gas emissions from crude-oil production. Chinese crude supply is sourced from numerous major global petroleum producers. Here, we use a per-barrel well-to-refinery life-cycle analysis model with data derived from hundreds of public and commercial sources to model the Chinese crude mix and the upstream carbon intensities and energetic productivity of China's crude supply. We generate a carbon-denominated supply curve representing Chinese crude-oil supply from 146 oilfields in 20 countries. The selected fields are estimated to emit between 1.5 and 46.9 g CO2eq MJ-1 of oil, with volume-weighted average emissions of 8.4 g CO2eq MJ-1. These estimates are higher than some existing databases, illustrating the importance of bottom-up models to support life-cycle analysis databases. This study provides quantitative insight into China's energy policy and the economic and environmental implications of China's oil consumption.
EnviroAtlas - New York, NY - Estimated Intersection Density of Walkable Roads
This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections are defined as any point where 3 or more roads meet and density is calculated using kernel density, where closer intersections are weighted higher than further intersections. Intersection density is highly correlated with walking for transportation. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
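The kernel-density calculation these EnviroAtlas records describe can be sketched for a single pixel: intersections within the 750 m search radius are weighted so that closer ones count more, and the weighted count is normalized by the search area. A minimal Python sketch of the idea (hypothetical coordinates, and a quartic kernel as an assumed weighting; this is an illustration, not EPA's implementation):

    import numpy as np

    def intersection_density(pixel_xy, intersections_xy, radius_m=750.0):
        """Kernel-weighted intersections per square km around one pixel."""
        d = np.linalg.norm(np.asarray(intersections_xy, dtype=float)
                           - np.asarray(pixel_xy, dtype=float), axis=1)
        inside = d < radius_m
        w = (1.0 - (d[inside] / radius_m) ** 2) ** 2  # quartic kernel weights
        area_km2 = np.pi * (radius_m / 1000.0) ** 2
        return w.sum() / area_km2

    # Pixel at the origin and four nearby intersections (meters):
    pts = [(100, 50), (400, -300), (700, 100), (900, 0)]
    print(intersection_density((0, 0), pts))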
EnviroAtlas - Paterson, NJ - Estimated Intersection Density of Walkable Roads
This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections are defined as any point where 3 or more roads meet and density is calculated using kernel density, where closer intersections are weighted higher than further intersections. Intersection density is highly correlated with walking for transportation. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
EnviroAtlas - Fresno, CA - Estimated Intersection Density of Walkable Roads
This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections are defined as any point where 3 or more roads meet and density is calculated using kernel density, where closer intersections are weighted higher than further intersections. Intersection density is highly correlated with walking for transportation. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
EnviroAtlas - Green Bay, WI - Estimated Intersection Density of Walkable Roads
This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections are defined as any point where 3 or more roads meet and density is calculated using kernel density, where closer intersections are weighted higher than further intersections. Intersection density is highly correlated with walking for transportation. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
EnviroAtlas - Des Moines, IA - Estimated Intersection Density of Walkable Roads
This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections are defined as any point where 3 or more roads meet and density is calculated using kernel density, where closer intersections are weighted higher than further intersections. Intersection density is highly correlated with walking for transportation. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
EnviroAtlas - Minneapolis/St. Paul, MN - Estimated Intersection Density of Walkable Roads
This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections are defined as any point where 3 or more roads meet and density is calculated using kernel density, where closer intersections are weighted higher than further intersections. Intersection density is highly correlated with walking for transportation. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
EnviroAtlas - Woodbine, IA - Estimated Intersection Density of Walkable Roads
This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections are defined as any point where 3 or more roads meet and density is calculated using kernel density, where closer intersections are weighted higher than further intersections. Intersection density is highly correlated with walking for transportation. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
EnviroAtlas - Phoenix, AZ - Estimated Intersection Density of Walkable Roads
This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections are defined as any point where 3 or more roads meet and density is calculated using kernel density, where closer intersections are weighted higher than further intersections. Intersection density is highly correlated with walking for transportation. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
EnviroAtlas - Pittsburgh, PA - Estimated Intersection Density of Walkable Roads
This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections are defined as any point where 3 or more roads meet and density is calculated using kernel density, where closer intersections are weighted higher than further intersections. Intersection density is highly correlated with walking for transportation. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
EnviroAtlas - New Bedford, MA - Estimated Intersection Density of Walkable Roads
This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections are defined as any point where 3 or more roads meet and density is calculated using kernel density, where closer intersections are weighted higher than further intersections. Intersection density is highly correlated with walking for transportation. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
EnviroAtlas - Milwaukee, WI - Estimated Intersection Density of Walkable Roads
This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections are defined as any point where 3 or more roads meet and density is calculated using kernel density, where closer intersections are weighted higher than further intersections. Intersection density is highly correlated with walking for transportation. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
Economic evaluation of technology for a new generation biofuel production using wastes.
Koutinas, Athanasios; Kanellaki, Maria; Bekatorou, Argyro; Kandylis, Panagiotis; Pissaridi, Katerina; Dima, Agapi; Boura, Konstantina; Lappa, Katerina; Tsafrakidou, Panagiota; Stergiou, Panagiota-Yiolanda; Foukis, Athanasios; Gkini, Olga A; Papamichael, Emmanuel M
2016-01-01
An economic evaluation of an integrated technology for industrial-scale new-generation biofuel production using whey, vinasse, and lignocellulosic biomass as raw materials is reported. Anaerobic packed-bed bioreactors were used for organic acid production, initially with synthetic media and then with wastes. Butyric, lactic, and acetic acid were predominantly produced from vinasse, whey, and cellulose, respectively. A mass balance was calculated for a 16,000 L daily production capacity. Liquid-liquid extraction was applied to recover the organic acids using 1-butanol as an effective extraction solvent, which also serves as the alcohol for the subsequent enzyme-catalyzed esterification. The investment needed for the installation of the factory was estimated at about €1.7 million, with depreciation expected in about 3 months. For cellulosics, the installation investment was estimated to be about 7-fold higher, with depreciation in about 1.5 years. The proposed technology is an alternative trend in biofuel production. Copyright © 2015. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Kosch, M. J.; Vickers, H.; Ogawa, Y.; Senior, A.; Blagoveshchenskaya, N.
2014-11-01
We have developed an active ground-based technique to estimate the steady state field-aligned anomalous electric field (E*) in the topside ionosphere, up to ~600 km, using the European Incoherent Scatter (EISCAT) ionospheric modification facility and UHF incoherent scatter radar. When pumping the ionosphere with high-power high-frequency radio waves, the F region electron temperature is significantly raised, increasing the plasma pressure gradient in the topside ionosphere, resulting in ion upflow along the magnetic field line. We estimate E* using a modified ion momentum equation and the Mass Spectrometer Incoherent Scatter model. From an experiment on 23 October 2013, E* points downward with an average amplitude of ~1.6 μV/m, becoming weaker at higher altitudes. The mechanism for anomalous resistivity is thought to be low-frequency ion acoustic waves generated by the pump-induced flux of suprathermal electrons. These high-energy electrons are produced near the pump wave reflection altitude by plasma resonance and also result in observed artificially induced optical emissions.
EnviroAtlas - Austin, TX - Estimated Intersection Density of Walkable Roads
This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections are defined as any point where 3 or more roads meet and density is calculated using kernel density, where closer intersections are weighted higher than further intersections. Intersection density is highly correlated with walking for transportation. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
Zaloshnja, Eduard; Miller, Ted; Council, Forrest; Persaud, Bhagwant
2004-01-01
This paper presents estimates for both the economic and comprehensive costs per crash for three police-coded severity groupings within 16 selected crash types and within two speed limit categories (<=45 and >=50 mph). The economic costs are hard dollar costs. The comprehensive costs include economic costs and quality-of-life losses. We merged previously developed costs per victim keyed on the Abbreviated Injury Scale (AIS) into US crash data files that scored injuries in both the AIS and police-coded severity scales to produce per-crash estimates. The most costly crashes were non-intersection fatal/disabling injury crashes on a road with a speed limit of 50 miles per hour or higher where multiple vehicles crashed head-on or a single vehicle struck a human (over $1.69 million and $1.16 million per crash, respectively). The annual cost of police-reported run-off-road collisions, which include both rollovers and object impacts, represented 34% of total costs. PMID:15319129
Estimation of old field ecosystem biomass using low altitude imagery
NASA Technical Reports Server (NTRS)
Nor, S. M.; Safir, G.; Burton, T. M.; Hook, J. E.; Schultink, G.
1977-01-01
Color-infrared photography was used to evaluate the biomass of experimental plots in an old-field ecosystem that was treated with different levels of waste water from a sewage treatment facility. Cibachrome prints at a scale of approximately 1:1,600 produced from 35 mm color infrared slides were used to analyze density patterns using prepared tonal density scales and multicell grids registered to ground panels shown on the photograph. Correlations between mean tonal density and harvest biomass data gave consistently high coefficients ranging from 0.530 to 0.896 at the 0.001 significance level. Corresponding multiple regression analysis resulted in higher correlation coefficients. The results indicate that aerial infrared photography can be used to estimate standing crop biomass on waste water irrigated old field ecosystems. Combined with minimal ground truth data, this technique could enable managers of waste water irrigation projects to precisely time harvest of such systems for maximal removal of nutrients in harvested biomass.
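A minimal sketch of the core analysis described above, correlating mean tonal density with harvested biomass via a simple linear fit; the data points are fabricated for illustration only.

```python
import numpy as np

# Correlate photo tonal density with harvested biomass and fit a line,
# in the spirit of the study above. Fabricated data for illustration.
tonal_density = np.array([0.21, 0.35, 0.42, 0.55, 0.61, 0.70])
biomass_g_m2 = np.array([110.0, 180.0, 205.0, 300.0, 330.0, 410.0])

r = np.corrcoef(tonal_density, biomass_g_m2)[0, 1]
slope, intercept = np.polyfit(tonal_density, biomass_g_m2, 1)
print(f"r = {r:.3f}; biomass ~ {slope:.0f} * density + {intercept:.0f} g/m^2")
```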
EnviroAtlas - Cleveland, OH - Estimated Intersection Density of Walkable Roads
This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections are defined as any point where 3 or more roads meet and density is calculated using kernel density, where closer intersections are weighted higher than further intersections. Intersection density is highly correlated with walking for transportation. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
EnviroAtlas - Portland, ME - Estimated Intersection Density of Walkable Roads
This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections are defined as any point where 3 or more roads meet and density is calculated using kernel density, where closer intersections are weighted higher than further intersections. Intersection density is highly correlated with walking for transportation. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
EnviroAtlas - Portland, OR - Estimated Intersection Density of Walkable Roads
This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections are defined as any point where 3 or more roads meet and density is calculated using kernel density, where closer intersections are weighted higher than further intersections. Intersection density is highly correlated with walking for transportation. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
EnviroAtlas - Durham, NC - Estimated Intersection Density of Walkable Roads
This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections are defined as any point where 3 or more roads meet and density is calculated using kernel density, where closer intersections are weighted higher than further intersections. Intersection density is highly correlated with walking for transportation. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
EnviroAtlas - Tampa, FL - Estimated Intersection Density of Walkable Roads
This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections are defined as any point where 3 or more roads meet and density is calculated using kernel density, where closer intersections are weighted higher than further intersections. Intersection density is highly correlated with walking for transportation. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
EnviroAtlas - Memphis, TN - Estimated Intersection Density of Walkable Roads
This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections are defined as any point where 3 or more roads meet and density is calculated using kernel density, where closer intersections are weighted higher than further intersections. Intersection density is highly correlated with walking for transportation. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
Feasibility study of a Great Lakes bioenergy system.
Hacatoglu, Kevork; McLellan, P James; Layzell, David B
2011-01-01
A bioenergy production and delivery system built around the Great Lakes St. Lawrence Seaway (GLSLS) transportation corridor was assessed for its ability to mitigate energy security and climate change risks. The land area within 100 km of the GLSLS and associated railway lines was estimated to be capable of producing at least 30 Mt (dry) yr^-1 of lignocellulosic biomass with minimal adverse impacts on food and fibre production. This was estimated to be sufficient to displace all of the coal-fired electricity in Ontario plus more than 620 million L of green diesel (equivalent to 5.3% of diesel consumption in GLSLS provinces). Lifecycle greenhouse gas emissions were 88% and 76% lower than coal-fired power and conventional diesel, respectively. Production costs of $120 MWh^-1 for power and up to $30 GJ^-1 ($1.1 L^-1) for green diesel were higher than current market prices, but a value for low-carbon energy would narrow the price differential. Copyright © 2010 Elsevier Ltd. All rights reserved.
Horse dung waste utilization as a household energy resource and estimation of biogas production
NASA Astrophysics Data System (ADS)
Umbara, Rian F.; Sumaryatie, Erni D.; Kirom, M. R.; Iskandar, Reza F.
2013-09-01
Horses are still used for traditional transportation in Soreang, West Java. About 6-7 horses can produce 25-30 kg of dung every day. Horse dung can produce biogas that can be used as an energy resource. A biogas reactor with a capacity of 4 m^3 has been built in Soreang. The reactor is filled with a mixture of 50 kg of horse dung and 100 liters of water every two days. This research was conducted to observe the quality of biogas produced from the reactor and to estimate the volume of biogas produced per day. Daily biogas production was observed for 22 days. Laboratory tests showed that the produced biogas consists of 56.53% CH4, 26.98% CO2, 12.35% N2, 4.13% O2, and 0.007% H2. The daily production data indicate a stationary trend, so a moving-average time series model was used to model the data. Using the model, it is estimated that the reactor can produce 0.240112 m^3 of biogas per day, which is sufficient to meet the energy needs of a household.
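The study fits a moving-average time series model to the daily production data; the sketch below shows only a simple moving-average smoother as a stand-in, with made-up daily volumes rather than the Soreang reactor data.

```python
import numpy as np

# Estimate mean daily biogas output from a stationary series with a simple
# moving average. Note: the paper fits a moving-average (MA) time series
# model, which is a different object; this smoother is only an illustration.
daily_m3 = np.array([0.23, 0.25, 0.22, 0.26, 0.24, 0.25, 0.23, 0.24])

window = 3
smoothed = np.convolve(daily_m3, np.ones(window) / window, mode="valid")
print(f"moving-average estimate: {smoothed.mean():.3f} m^3/day")
```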
Oldeschulte, David L; Halley, Yvette A; Wilson, Miranda L; Bhattarai, Eric K; Brashear, Wesley; Hill, Joshua; Metz, Richard P; Johnson, Charles D; Rollins, Dale; Peterson, Markus J; Bickhart, Derek M; Decker, Jared E; Sewell, John F; Seabury, Christopher M
2017-09-07
Northern bobwhite (Colinus virginianus; hereafter bobwhite) and scaled quail (Callipepla squamata) populations have suffered precipitous declines across most of their US ranges. Illumina-based first- (v1.0) and second- (v2.0) generation draft genome assemblies for the scaled quail and the bobwhite produced N50 scaffold sizes of 1.035 and 2.042 Mb, yielding a 45-fold improvement in contiguity over the existing bobwhite assembly, and ≥90% of the assembled genomes were captured within 1313 and 8990 scaffolds, respectively. The scaled quail assembly (v1.0 = 1.045 Gb) was ∼20% smaller than the bobwhite (v2.0 = 1.254 Gb), which was supported by kmer-based estimates of genome size. Nevertheless, estimates of GC content (41.72%; 42.66%), genome-wide repetitive content (10.40%; 10.43%), and MAKER-predicted protein coding genes (17,131; 17,165) were similar for the scaled quail (v1.0) and bobwhite (v2.0) assemblies, respectively. BUSCO analyses utilizing 3023 single-copy orthologs revealed a high level of assembly completeness for the scaled quail (v1.0; 84.8%) and the bobwhite (v2.0; 82.5%), as verified by comparison with well-established avian genomes. We also detected 273 putative segmental duplications in the scaled quail genome (v1.0), and 711 in the bobwhite genome (v2.0), including some that were shared among both species. Autosomal variant prediction revealed ∼2.48 and 4.17 heterozygous variants per kilobase within the scaled quail (v1.0) and bobwhite (v2.0) genomes, respectively, and estimates of historic effective population size were uniformly higher for the bobwhite across all time points in a coalescent model. However, large-scale declines were predicted for both species beginning ∼15-20 KYA. Copyright © 2017 Oldeschulte et al.
2004 Methane and Nitrous Oxide Emissions from Manure Management in South Africa
Moeletsi, Mokhele Edmond; Tongwane, Mphethe Isaac
2015-01-01
Simple Summary Livestock manure management is one of the main sources of greenhouse gas (GHG) emissions in South Africa, producing mainly methane and nitrous oxide. The emissions from this sub-category depend on how manure is stored: liquid-stored manure predominantly produces methane, while dry-based manure mainly enhances production of nitrous oxide. Intergovernmental Panel on Climate Change (IPCC) guidelines were utilized at different tier levels in estimating GHG emissions from manure management. The results show that methane emissions are higher than nitrous oxide emissions, at 3104 Gg and 2272 Gg respectively in carbon dioxide global warming equivalent. Abstract Manure management in livestock makes a significant contribution towards greenhouse gas emissions in the Agriculture, Forestry and Other Land Use category in South Africa. Methane and nitrous oxide emissions are prevalent in contrasting manure management systems, which promote anaerobic and aerobic conditions respectively. In this paper, both Tier 1 and modified Tier 2 approaches of the IPCC guidelines are utilized to estimate the emissions from South African livestock manure management. Activity data (animal population, animal weights, manure management systems, etc.) were sourced from various resources for estimation of both emission factors and emissions of methane and nitrous oxide. The results show relatively high methane emission factors from manure management for mature female dairy cattle (40.98 kg/year/animal), sows (25.23 kg/year/animal) and boars (25.23 kg/year/animal). Hence, contributions from pig farming and dairy cattle are the highest at 54.50 Gg and 32.01 Gg respectively, with total emissions of 134.97 Gg (3104 Gg CO2 equivalent). Total nitrous oxide emissions are estimated at 7.10 Gg (2272 Gg CO2 equivalent), and the three main contributors are commercial beef cattle, poultry and small-scale beef farming at 1.80 Gg, 1.72 Gg and 1.69 Gg respectively. Mitigation options for manure management must be chosen with care because methane and nitrous oxide production are favoured by divergent storage conditions. PMID:26479229
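A minimal sketch of the Tier 1 logic used above: emissions equal animal population times a per-head emission factor, converted to CO2 equivalents. The head counts below are hypothetical; the dairy cattle and sow emission factors are the values quoted in the abstract, and the CH4-to-CO2eq factor of 23 is inferred from the reported totals.

```python
# IPCC Tier 1 sketch: CH4 emissions = population * per-head emission factor,
# converted to CO2 equivalents. Head counts are hypothetical; emission
# factors are the per-animal values quoted in the abstract.
GWP_CH4 = 23.0  # inferred from 134.97 Gg CH4 -> 3104 Gg CO2eq in the paper

herds = {
    # category: (head count [hypothetical], CH4 emission factor, kg/head/yr)
    "dairy_cattle": (1_200_000, 40.98),
    "sows": (150_000, 25.23),
}

ch4_gg = sum(n * ef for n, ef in herds.values()) / 1e6  # kg -> Gg
print(f"CH4: {ch4_gg:.1f} Gg = {ch4_gg * GWP_CH4:.0f} Gg CO2eq")
```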
Zablotska, Iryna B; Frankland, Andrew; Holt, Martin; de Wit, John; Brown, Graham; Maycock, Bruce; Fairley, Christopher; Prestage, Garrett
2014-01-01
Behavioural surveillance and research among gay and other men who have sex with men (GMSM) commonly rely on non-random recruitment approaches. Methodological challenges limit their ability to accurately represent the population of adult GMSM. We compared the social and behavioural profiles of GMSM recruited via venue-based, online, and respondent-driven sampling (RDS) and discuss their utility for behavioural surveillance. Data from four studies were selected to reflect each recruitment method. We compared demographic characteristics and the prevalence of key indicators, including sexual and HIV testing practices, obtained from samples recruited through different methods, and population estimates from respondent-driven sampling partition analysis. Overall, the socio-demographic profile of GMSM was similar across samples, with some differences observed in age and sexual identification. Men recruited through time-location sampling appeared more connected to the gay community and reported a greater number of sexual partners, but engaged in less unprotected anal intercourse with regular (UAIR) or casual partners (UAIC). The RDS sample overestimated the proportion of HIV-positive men and appeared to recruit men with an overall higher number of sexual partners. A single-website survey recruited a sample whose characteristics differed considerably from the population estimates with regard to age, ethnic diversity and behaviour. Data acquired through time-location sampling underestimated the rates of UAIR and UAIC, while RDS and online sampling both generated samples that underestimated UAIR. Simulated composite samples combining recruits from time-location and multi-website online sampling may produce characteristics more consistent with the population estimates, particularly with regard to sexual practices. Respondent-driven sampling produced the sample most consistent with population estimates, but this methodology is complex and logistically demanding. Time-location and online recruitment are more cost-effective and easier to implement; using these approaches in combination may offer the potential to recruit a more representative sample of GMSM.
NASA Astrophysics Data System (ADS)
Corradini, Stefano; Merucci, Luca; Guerrieri, Lorenzo; Pugnaghi, Sergio; McGarragh, Greg; Carboni, Elisa; Ventress, Lucy; Grainger, Roy; Scollo, Simona; Pardini, Federica; Zaksek, Klemen; Langmann, Baerbel; Bancalá, Severin; Stelitano, Dario
2016-04-01
The volcanic ash cloud altitude is one of the most important parameters needed for volcanic ash cloud estimations (mass, effective radius and optical depth). It is essential for modelers to initialize ash cloud transport models, and for volcanologists to gain insights into eruption dynamics. Moreover, it is extremely important for reducing the disruption to flights as a result of volcanic activity whilst still ensuring safe travel. In this work, the volcanic ash cloud altitude is computed from remote sensing passive satellite data (SEVIRI, MODIS, IASI and MISR) using most of the existing retrieval techniques. A novel approach, based on the CO2 slicing procedure, is also shown. Comparisons among the different techniques are presented, and advantages and drawbacks emphasized. As test cases, the Etna eruptions between 3 and 9 December 2015 are considered. During this time four lava fountain events occurred at the Voragine crater, forming eruption columns higher than 12 km asl and producing copious tephra fallout on the volcano flanks. These events, among the biggest of the last 20 years, produced emissions that reached the stratosphere and underwent circum-global transport throughout the northern hemisphere.
Konigsfeld, Katie M; Lee, Melissa; Urata, Sarah M; Aguilera, Joe A; Milligan, Jamie R
2012-03-01
Electron-deficient guanine radical species are major intermediates produced in DNA by the direct effect of ionizing radiation. There is evidence that they react with amine groups in closely bound ligands to form covalent crosslinks. Crosslink formation is very poorly characterized in terms of quantitative rate and yield data. We sought to address this issue by using oligo-arginine ligands to model the close association of DNA and its binding proteins in chromatin. Guanine radicals were prepared in plasmid DNA by single electron oxidation. The product distribution derived from them was assayed by strand break formation after four different post-irradiation incubations. We compared the yields of DNA damage produced in the presence of four ligands in which neither, one, or both of the amino and carboxylate termini were blocked with amides. Free carboxylate groups were unreactive. Significantly higher yields of heat-labile sites were observed when the amino terminus was unblocked. The rate of the reaction was characterized by diluting the unblocked amino group with its amide-blocked derivative. These observations provide a means to develop quantitative estimates of the yields in which these labile sites are formed in chromatin by exposure to ionizing radiation.
Cha, Thye San; Chen, Jian Woon; Goh, Eng Giap; Aziz, Ahmad; Loh, Saw Hong
2011-11-01
This study was undertaken to investigate the effects of different nitrate concentrations in the culture medium on the oil content and fatty acid composition of Chlorella vulgaris (UMT-M1) and Chlorella sorokiniana (KS-MB2). Results showed that both species produced significantly higher (p<0.05) oil content at nitrate concentrations ranging from 0.18 to 0.66 mM, with C. vulgaris producing 10.20-11.34% dw and C. sorokiniana producing 15.44-17.32% dw. The major fatty acids detected included C16:0, C18:0, C18:1, C18:2 and C18:3. It is interesting to note that both species displayed differentially regulated fatty acid accumulation patterns in response to nitrate treatments at the early stationary growth phase. Their potential use for biodiesel applications could be enhanced by exploring the concept of binary blending of the two microalgal oils, using the developed mathematical equations to calculate the oil mass blending ratio and simultaneously estimate the weight percentage (wt.%) of desirable fatty acid compositions. Copyright © 2011 Elsevier Ltd. All rights reserved.
Child Mortality Estimation: Estimating Sex Differences in Childhood Mortality since the 1970s
Sawyer, Cheryl Chriss
2012-01-01
Introduction Producing estimates of infant (under age 1 y), child (age 1–4 y), and under-five (under age 5 y) mortality rates disaggregated by sex is complicated by problems with data quality and availability. Interpretation of sex differences requires nuanced analysis: girls have a biological advantage against many causes of death that may be eroded if they are disadvantaged in access to resources. Earlier studies found that girls in some regions were not experiencing the survival advantage expected at given levels of mortality. In this paper I generate new estimates of sex differences for the 1970s to the 2000s. Methods and Findings Simple fitting methods were applied to male-to-female ratios of infant and under-five mortality rates from vital registration, surveys, and censuses. The sex ratio estimates were used to disaggregate published series of both-sexes mortality rates that were based on a larger number of sources. In many developing countries, I found that sex ratios of mortality have changed in the same direction as historically occurred in developed countries, but typically had a lower degree of female advantage for a given level of mortality. Regional average sex ratios weighted by numbers of births were found to be highly influenced by China and India, the only countries where both infant mortality and overall under-five mortality were estimated to be higher for girls than for boys in the 2000s. For the less developed regions (comprising Africa, Asia excluding Japan, Latin America/Caribbean, and Oceania excluding Australia and New Zealand), on average, boys' under-five mortality in the 2000s was about 2% higher than girls'. A number of countries were found to still experience higher mortality for girls than boys in the 1–4-y age group, with concentrations in southern Asia, northern Africa/western Asia, and western Africa. In the more developed regions (comprising Europe, northern America, Japan, Australia, and New Zealand), I found that the sex ratio of infant mortality peaked in the 1970s or 1980s and declined thereafter. Conclusions The methods developed here pinpoint regions and countries where sex differences in mortality merit closer examination to ensure that both sexes are sharing equally in access to health resources. Further study of the distribution of causes of death in different settings will aid the interpretation of differences in survival for boys and girls. Please see later in the article for the Editors' Summary. PMID:22952433
Liu, Hong; Yan, Meng; Song, Enmin; Wang, Jie; Wang, Qian; Jin, Renchao; Jin, Lianghai; Hung, Chih-Cheng
2016-05-01
Myocardial motion estimation in tagged cardiac magnetic resonance (TCMR) images is of great significance in the clinical diagnosis and treatment of heart disease. Currently, the harmonic phase analysis method (HARP) and the local sine-wave modeling method (SinMod) are two state-of-the-art motion estimation methods for TCMR images, since they can directly obtain the inter-frame motion displacement vector field (MDVF) with high accuracy and fast speed. By comparison, SinMod has better performance than HARP in terms of displacement detection and noise and artifact reduction. However, the SinMod method has some drawbacks: 1) it is unable to estimate local displacements larger than half of the tag spacing; 2) it has observable errors in tracking of tag motion; and 3) the estimated MDVF usually has large local errors. To overcome these problems, we present a novel motion estimation method in this study. The proposed method tracks the motion of tags and then estimates the dense MDVF by interpolation. In this new method, a parameter estimation procedure for global motion is applied to match tag intersections between different frames, ensuring that specific kinds of large displacements are correctly estimated. In addition, a strategy of tag motion constraints is applied to eliminate most of the errors produced by inter-frame tracking of tags, and the multi-level B-splines approximation algorithm is utilized to enhance the local continuity and accuracy of the final MDVF. In the estimation of the motion displacement, our proposed method can obtain a more accurate MDVF than the SinMod method and can overcome its drawbacks. However, the motion estimation accuracy of our method depends on the accuracy of tag line detection, and our method has a higher time complexity. Copyright © 2015 Elsevier Inc. All rights reserved.
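A minimal sketch of the final interpolation step described above: sparse displacements tracked at tag intersections are interpolated to a dense MDVF. The paper uses multi-level B-spline approximation; SciPy's griddata with cubic interpolation is a simpler stand-in, and all displacements are synthetic.

```python
import numpy as np
from scipy.interpolate import griddata

# Interpolate sparse displacements at tracked tag intersections to a dense
# motion displacement vector field (MDVF). The paper uses multi-level
# B-spline approximation; cubic griddata is a simpler illustrative stand-in.
rng = np.random.default_rng(1)
tag_xy = rng.uniform(0, 64, size=(50, 2))  # tracked tag intersections
disp = np.sin(tag_xy / 10.0)               # hypothetical (dx, dy) displacements

gx, gy = np.mgrid[0:64, 0:64]              # dense pixel grid
dense_dx = griddata(tag_xy, disp[:, 0], (gx, gy), method="cubic")
dense_dy = griddata(tag_xy, disp[:, 1], (gx, gy), method="cubic")
print(dense_dx.shape, dense_dy.shape)      # (64, 64) (64, 64)
```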
Automated ambiguity estimation for VLBI Intensive sessions using L1-norm
NASA Astrophysics Data System (ADS)
Kareinen, Niko; Hobiger, Thomas; Haas, Rüdiger
2016-12-01
Very Long Baseline Interferometry (VLBI) is a space-geodetic technique that is uniquely capable of direct observation of the angle of the Earth's rotation about the Celestial Intermediate Pole (CIP) axis, namely UT1. The daily estimates of the difference between UT1 and Coordinated Universal Time (UTC) provided by the 1-h long VLBI Intensive sessions are essential in providing timely UT1 estimates for satellite navigation systems and orbit determination. In order to produce timely UT1 estimates, efforts have been made to completely automate the analysis of VLBI Intensive sessions. This involves the automatic processing of X- and S-band group delays. These data contain an unknown number of integer ambiguities in the observed group delays. They are introduced as a side-effect of the bandwidth synthesis technique, which is used to combine correlator results from the narrow channels that span the individual bands. In an automated analysis with the c5++ software the standard approach in resolving the ambiguities is to perform a simplified parameter estimation using a least-squares adjustment (L2-norm minimisation). We implement L1-norm as an alternative estimation method in c5++. The implemented method is used to automatically estimate the ambiguities in VLBI Intensive sessions on the Kokee-Wettzell baseline. The results are compared to an analysis set-up where the ambiguity estimation is computed using the L2-norm. For both methods three different weighting strategies for the ambiguity estimation are assessed. The results show that the L1-norm is better at automatically resolving the ambiguities than the L2-norm. The use of the L1-norm leads to a significantly higher number of good quality UT1-UTC estimates with each of the three weighting strategies. The increase in the number of sessions is approximately 5% for each weighting strategy. This is accompanied by smaller post-fit residuals in the final UT1-UTC estimation step.
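A minimal sketch contrasting the two estimators compared above: an L2-norm (least-squares) line fit versus an L1-norm fit obtained by iteratively reweighted least squares, on synthetic data with one gross outlier playing the role of a mis-resolved ambiguity. This is not the c5++ implementation.

```python
import numpy as np

# L2-norm (least squares) vs L1-norm line fitting via iteratively
# reweighted least squares (IRLS). Synthetic data; the outlier at index 10
# plays the role of a mis-resolved ambiguity.
rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 40)
y = 2.0 * x + 0.5 + 0.01 * rng.standard_normal(40)
y[10] += 3.0                                    # one gross outlier

A = np.column_stack([x, np.ones_like(x)])
beta_l2 = np.linalg.lstsq(A, y, rcond=None)[0]  # L2-norm solution

beta_l1 = beta_l2.copy()
for _ in range(50):                             # IRLS iterations for L1
    w = 1.0 / np.maximum(np.abs(y - A @ beta_l1), 1e-8)
    Aw = A * w[:, None]                         # row-weighted design matrix
    beta_l1 = np.linalg.solve(A.T @ Aw, Aw.T @ y)

print("L2 slope/intercept:", beta_l2)           # pulled toward the outlier
print("L1 slope/intercept:", beta_l1)           # robust to the outlier
```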
Foam imbibition in a Hele-Shaw cell via laminated microfluidic ``T-junction'' device
NASA Astrophysics Data System (ADS)
Parra, Dina; Ward, Thomas
2013-11-01
In this talk we analyze experimental results from a novel microfluidic ``T-junction'' device, made from laminated plastic, that is used to produce foam in porous media. The fluids, both Newtonian and non-Newtonian liquids and air, are driven using constant static-pressure fluid pumping. For the T-junction geometry studied there are novel observations with this type of pumping: 1) at low pressure ratios there is an increase in the liquid and total flow rates, and 2) at higher pressure ratios there is a decrease in the liquid flow rate. To understand this phenomenon we visualize the drop production process near the T-junction. Furthermore, flow rates for the liquid and total volume are estimated by imbibing the foam into a Hele-Shaw cell. Foam is produced using a mixture containing aqueous polyacrylamide at concentrations ranging from 0.01-0.10% by weight, with several solutions also containing a sodium lauryl sulfate (SLS) surfactant at concentrations ranging from 0.01-0.1% by weight.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ene, D.; Andersson, K.; Jensen, M.
The European Spallation Source (ESS) will produce tritium via spallation and activation processes during operational activities. For the ESS site in Lund, Sweden, it is mandatory to demonstrate that the management strategy for the produced tritium ensures compliance with the country's regulatory criteria. The aim of this paper is to give an overview of the different aspects of tritium management at the ESS facility. Besides the design parameter study of the helium coolant purification system of the target, the consequences of tritium release into the environment were also analyzed. Calculations show that the annual release of tritium during normal operations represents a small fraction of the estimated total dose. However, more refined calculations of the migration of activated groundwater should be performed for higher hydraulic conductivities, once results of soil examinations are available. Assuming a 100% release of tritium to the atmosphere during extreme accidents, it was found that the total dose still complies with the constraint. (authors)
First measurement of surface nuclear recoil background for argon dark matter searches
Xu, Jingke; Stanford, Chris; Westerdale, Shawn; ...
2017-09-19
Here, one major background in direct searches for weakly interacting massive particles (WIMPs) comes from the deposition of radon progeny on detector surfaces. A dangerous surface background is the 206Pb nuclear recoils produced by 210Po decays. In this paper, we report the first characterization of this background in liquid argon. The scintillation signal of low energy Pb recoils is measured to be highly quenched in argon, and we estimate that the 103 keV 206Pb recoil background will produce a signal equal to that of a ~5 keV (30 keV) electron recoil (40Ar recoil). In addition, we demonstrate that this dangerous 210Po surface background can be suppressed, using pulse shape discrimination methods, by a factor of ~100 or higher, which can make argon dark matter detectors near background-free and enhance their potential for discovery of medium- and high-mass WIMPs. Lastly, we also discuss the impact on other low background experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nugraha, Andri Dian; Adisatrio, Philipus Ronnie
2013-09-09
Seismic refraction surveying is a geophysical method useful for imaging the earth's interior, particularly the near surface. One of the common problems in seismic refraction surveys is weak amplitude due to attenuation at far offsets. This phenomenon makes it difficult to pick the first refraction arrival, and hence challenging to produce the near-surface image. Seismic interferometry is a new technique that manipulates seismic traces to obtain the Green's function from a pair of receivers. One of its uses is improving first refraction arrival quality at far offsets. This research shows that we could estimate physical properties such as seismic velocity and thickness from virtual refraction processing. Also, virtual refraction can enhance the far-offset signal amplitude, since a stacking procedure is involved. Our results show that super-virtual refraction processing produces a seismic image with a higher signal-to-noise ratio than the raw seismic image. In the end, the number of reliable first-arrival picks is also increased.
NASA Astrophysics Data System (ADS)
Hayer, C. S.; Wadge, G.; Edmonds, M.; Christopher, T.
2016-02-01
Since 2004, the satellite-borne Ozone Monitoring Instrument (OMI) has observed sulphur dioxide (SO2) plumes during both quiescence and effusive eruptive activity at Soufrière Hills Volcano, Montserrat. On average, OMI detected a SO2 plume 4-6 times more frequently during effusive periods than during quiescence in the 2008-2010 period. The increased ability of OMI to detect SO2 during eruptive periods is mainly due to an increase in plume altitude rather than a higher SO2 emission rate. Three styles of eruptive activity cause thermal lofting of gases (Vulcanian explosions, pyroclastic flows, and a hot lava dome), and the resultant plume altitudes are estimated from observations and models. Most lofting plumes from Soufrière Hills are derived from hot domes and pyroclastic flows. Although Vulcanian explosions produced the largest plumes, some produced only negligible SO2 signals detected by OMI. OMI is most valuable for monitoring purposes at this volcano during periods of lava dome growth and during explosive activity.
NASA Astrophysics Data System (ADS)
Ramsey, M. S.; Chevrel, O.; Harris, A. J. L.
2017-12-01
Satellite-based thermal infrared (TIR) observations of new volcanic activity and ongoing lava flow emplacement become increasingly more detailed with improved spatial, spectral and/or temporal resolution data. The cooling of the glassy surface is directly imaged by TIR instruments to determine temperature, which is then used to initialize thermo-rheological models. Higher temporal resolution data (i.e., minutes to hours) are used to detect new eruptions and determine the time-averaged discharge rate (TADR). Calculation of the TADR, along with new observations later in time and accurate digital elevation models (DEMs), enables modeling of the advancing flow's down-slope inundation area. Better spectral and spatial resolution data, on the other hand, allow the flow's composition, small-scale morphological changes and real-time DEMs to be determined, in addition to confirming prior model predictions. Combined, these data help improve the accuracy of models such as FLOWGO. A new adaptation of this model in Python (PyFLOWGO) has been used to produce the best-fit eruptive conditions for the final flow morphology of the 2012-2013 eruption of Tolbachik volcano, Russia. This was the largest and most thermally intense flow-forming eruption of the past 50 years, producing longer lava flows than typical Kilauea or Etna eruptions. The progress of these flows was imaged by multiple TIR sensors at various spatial, spectral and temporal scales throughout the flow field emplacement. We have refined the model based on the high resolution data to determine the TADR and make improved estimates of cooling, viscosity, velocity and crystallinity with distance. Understanding the cooling and dynamics of basaltic surfaces ultimately produces an improved hazard forecast capability. In addition, the direct connection of the final flow morphology to the specific eruption conditions that produced it allows the eruptive conditions of older flows to be estimated.
Brady, Eoghan; Hill, Kenneth
2017-01-01
Under-five mortality estimates are increasingly used in low- and middle-income countries to target interventions and measure performance against global development goals. Two new methods to rapidly estimate under-5 mortality based on Summary Birth Histories (SBH) were described in a previous paper and tested with the data then available. This analysis tests the methods using data appropriate to each method from 5 countries that lack vital registration systems. SBH data are collected across many countries through censuses and surveys, and indirect methods often rely upon their quality to estimate mortality rates. The Birth History Imputation method imputes data from a recent Full Birth History (FBH) onto the birth, death and age distribution of the SBH to produce estimates based on the resulting distribution of child mortality. DHS FBHs and MICS SBHs are used for all five countries. In the implementation, 43 of 70 estimates are within 20% of validation estimates (61%). Mean absolute relative error is 17.7%. 1 of 7 countries produces acceptable estimates. The Cohort Change method considers the differences in births and deaths between repeated Summary Birth Histories at 1- or 2-year intervals to estimate the mortality rate in that period. SBHs are taken from Brazil's PNAD Surveys 2004-2011 and validated against IGME estimates. 2 of 10 estimates are within 10% of validation estimates. Mean absolute relative error is greater than 100%. Appropriate testing of these new methods demonstrates that they do not produce sufficiently good estimates from the data available. We conclude this is due to the poor quality of most SBH data included in the study. This has wider implications for the next round of censuses and future household surveys across many low- and middle-income countries.
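A minimal sketch of the Cohort Change arithmetic described above: differencing aggregate children ever born and children dead between two survey rounds gives an approximate death probability for births in the intervening period. All counts are hypothetical, and the published method involves adjustments not shown here.

```python
# Cohort Change sketch: compare aggregate children ever born (CEB) and
# children dead (CD) between two survey rounds. Hypothetical counts; the
# published method involves further adjustments not shown here.
ceb_round1, cd_round1 = 100_000, 4_000  # survey round 1 totals
ceb_round2, cd_round2 = 112_000, 4_700  # survey round 2, one year later

new_births = ceb_round2 - ceb_round1    # births in the intervening period
new_deaths = cd_round2 - cd_round1      # deaths in the intervening period
print(f"approximate period death probability: {new_deaths / new_births:.3f}")
```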
Biocompatible silicon quantum dots by ultrasound-induced solution route
NASA Astrophysics Data System (ADS)
Lee, Soojin; Cho, Woon-Jo
2004-10-01
Water-soluble silicon quantum dots (QDs) of average diameter ~3 nm were prepared in organic solvent by an ultrasound-induced solution route. This rapid route produces silicon QDs in the size range from 2 nm to 4 nm at room temperature and ambient pressure. The product yield of QDs was estimated to be higher than 60% based on the initial NaSi weight. The surfaces of the QDs were terminated with organic molecules including biocompatible end groups (hydroxyl, amine and carboxyl) during the simple preparation. Covalently attached molecules were characterized by FT-IR spectroscopy. This water-soluble passivation has only a small effect on the optical properties of the original QDs.
Laser-photofield emission from needle cathodes for low-emittance electron beams.
Ganter, R; Bakker, R; Gough, C; Leemann, S C; Paraliev, M; Pedrozzi, M; Le Pimpec, F; Schlott, V; Rivkin, L; Wrulich, A
2008-02-15
Illumination of a ZrC needle with short laser pulses (16 ps, 266 nm) while high voltage pulses (-60 kV, 2 ns, 30 Hz) are applied produces photo-field emitted electron bunches. The electric field is high and varies rapidly over the needle surface, so that the quantum efficiency (QE) near the apex can be much higher than for a flat photocathode due to the Schottky effect. Up to 150 pC (2.9 A peak current) have been extracted by photo-field emission from a ZrC needle. The effective emitting area has an estimated radius below 50 µm, leading to a theoretical intrinsic emittance below 0.05 mm mrad.
Damping Estimation from Free Decay Responses of Cables with MR Dampers.
Weber, Felix; Distl, Hans
2015-01-01
This paper discusses the damping measurements on cables with real-time controlled MR dampers that were performed on a laboratory-scale single-strand cable and on cables of the Sutong Bridge, China. The control approach aims at producing amplitude- and frequency-independent cable damping, which is confirmed by the tests. Comparison of the experimentally obtained cable damping with the theoretical value for optimal linear viscous damping reveals that the support conditions of the cable anchors, force-tracking errors in the actual MR damper force, energy spillover to higher modes, and excitation and sensor cables hanging on the stay cable must be taken into consideration when interpreting the identified cable damping values.
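[Editor's note: as an illustration of the free-decay identification named in the title, here is the textbook logarithmic-decrement estimate; this is a generic sketch, not necessarily the authors' exact procedure.]

    import numpy as np

    def damping_ratio_from_decay(peaks):
        """Estimate the modal damping ratio from successive positive peak
        amplitudes of a free-decay response via the logarithmic decrement."""
        peaks = np.asarray(peaks, dtype=float)
        n_cycles = len(peaks) - 1
        delta = np.log(peaks[0] / peaks[-1]) / n_cycles  # log decrement
        return delta / np.sqrt(4.0 * np.pi**2 + delta**2)

    # Amplitude halving each cycle gives delta = ln 2 and zeta ~ 0.11:
    print(damping_ratio_from_decay([1.0, 0.5, 0.25, 0.125]))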
Antibunched emission of photon pairs via quantum Zeno blockade.
Huang, Yu-Ping; Kumar, Prem
2012-01-20
We propose a new methodology, namely, the "quantum Zeno blockade," for managing light scattering at a few-photon level in general nonlinear-optical media, such as crystals, fibers, silicon microrings, and atomic vapors. Using this tool, antibunched emission of photon pairs can be achieved, leading to potent quantum-optics applications such as deterministic entanglement generation without the need for heralding. In a practical implementation using an on-chip toroidal microcavity immersed in rubidium vapor, we estimate that high-fidelity entangled photons can be produced on demand at MHz rates or higher, corresponding to an improvement of ≳10^7 times over the state of the art.
A study to determine whether cavitation occurs around dental ultrasonic scaling instruments.
Lea, S C; Price, G J; Walmsley, A D
2005-02-01
The aim of this investigation was to determine whether cavitation occurs around dental ultrasonic scalers and to estimate the amount of cavitation occurring. Three styles of tip (3 x TFI-10, 3 x TFI-3, 3 x TFI-1) were used, in conjunction with a Cavitron SPS ultrasonic generator (Dentsply, USA), to insonate terephthalic acid solution. The concentration of hydroxyl radicals (•OH) produced by cavitation at the scaler tips was monitored by fluorescence spectroscopy. Cavitational activity was enhanced at higher power settings and longer operating times. The tip dimensions and geometry, as well as the generator power setting, are important factors affecting the production of cavitation.
NASA Technical Reports Server (NTRS)
Orme, John S.
1995-01-01
The performance seeking control (PSC) algorithm optimizes total propulsion system performance. This adaptive, model-based optimization algorithm has been successfully flight demonstrated on two engines with differing levels of degradation. Models of the engine, nozzle, and inlet produce reliable, accurate estimates of engine performance, but because of an observability problem, component-level degradation cannot be accurately determined. Depending on engine-specific operating characteristics, PSC achieves various levels of performance improvement. For example, engines with more deterioration typically operate at higher turbine temperatures than less deteriorated engines; thus, when the PSC maximum thrust mode is applied, there is less temperature margin available to trade for increased thrust.
Smith, R.L.; Garabedian, S.P.; Brooks, M.H.
1996-01-01
The transport of many solutes in groundwater is dependent upon the relative rates of physical flow and microbial metabolism. Quantifying rates of microbial processes under subsurface conditions is difficult and is most commonly approximated using laboratory studies with aquifer materials. In this study, we measured in situ rates of denitrification in a nitrate-contaminated aquifer using small-scale, natural-gradient tracer tests and compared the results with rates obtained from laboratory incubations with aquifer core material. Activity was measured using the acetylene block technique. For the tracer tests, co-injection of acetylene and bromide into the aquifer produced a 30 μM increase in nitrous oxide after 10 m of transport (23-30 days). An advection-dispersion transport model was modified to include an acetylene-dependent nitrous oxide production term and used to simulate the tracer breakthrough curves. The model required a 4-day lag period and a relatively low sensitivity to acetylene to match the narrow nitrous oxide breakthrough curves. Estimates of in situ denitrification rates were 0.60 and 1.51 nmol of N2O produced cm^-3 aquifer day^-1 for two successive tests. Aquifer core material collected from the tracer test site and incubated as mixed slurries in flasks and as intact cores yielded rates that were 1.2-26 times higher than the tracer test rate estimates. Results with the coring-dependent techniques were variable and subject to the small-scale heterogeneity within the aquifer, while the tracer tests integrated the heterogeneity along a flow path, giving a rate estimate that is more applicable to transport at the scale of the aquifer.
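[Editor's note: in outline (my notation; the abstract does not reproduce the authors' exact formulation), the modified model is the one-dimensional advection-dispersion equation with a lagged, acetylene-dependent N2O source term:

\[
\frac{\partial C}{\partial t} = D\,\frac{\partial^{2}C}{\partial x^{2}} - v\,\frac{\partial C}{\partial x} + k\,H(t - t_{\mathrm{lag}})\,f(C_{\mathrm{ac}}),
\]

where C is the N2O concentration, v the pore-water velocity, D the dispersion coefficient, k the production rate, H the Heaviside step encoding the 4-day lag, and f(C_ac) the relatively weak dependence on acetylene concentration.]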
Geologic emissions of methane to the atmosphere.
Etiope, Giuseppe; Klusman, Ronald W
2002-12-01
The atmospheric methane budget is commonly defined assuming that the major sources derive from the biosphere (wetlands, rice paddies, animals, termites) and that fossil, radiocarbon-free CH4 emission is due to and mediated by anthropogenic activity (natural gas production and distribution, and coal mining). However, the amount of radiocarbon-free CH4 in the atmosphere, estimated at approximately 20% of atmospheric CH4, is higher than the estimates from statistical data of CH4 emission from fossil fuel related anthropogenic sources. This work documents that significant amounts of "old" methane, produced within the Earth's crust, can be released naturally into the atmosphere through gas-permeable faults and fractured rocks. Major geologic emissions of methane are related to hydrocarbon production in sedimentary basins (biogenic and thermogenic methane) and, subordinately, to inorganic reactions (Fischer-Tropsch type) in geothermal systems. Geologic CH4 emissions include diffuse fluxes over wide areas, or microseepage, on the order of 10^0-10^2 mg m^-2 day^-1, and localised flows and gas vents, on the order of 10^2 t y^-1, both on land and on the seafloor. Mud volcanoes producing flows of up to 10^3 t y^-1 represent the largest visible expression of geologic methane emission. Several studies have indicated that methanotrophic consumption in soil may be insufficient to consume all leaking geologic CH4, and positive fluxes into the atmosphere can take place in dry or seasonally cold environments. Unsaturated soils have generally been considered a major sink for atmospheric methane, and never a continuous, intermittent, or localised source to the atmosphere. Although geologic CH4 sources need to be quantified more accurately, a preliminary global estimate indicates that there are likely more than enough sources to provide the amount of methane required to account for the suspected missing source of fossil CH4.
Variability of measurements of sweat sodium using the regional absorbent-patch method.
Dziedzic, Christine E; Ross, Megan L; Slater, Gary J; Burke, Louise M
2014-09-01
There is interest in including recommendations for the replacement of the sodium lost in sweat in individualized hydration plans for athletes. Although the regional absorbent-patch method provides a practical approach to measuring sweat sodium losses in field conditions, there is a need to understand the variability of estimates associated with this technique. Sweat samples were collected from the forearms, chest, scapula, and thigh of 12 cyclists during 2 standardized cycling time trials in the heat and 2 in temperate conditions. Single measure analysis of sodium concentration was conducted immediately by ion-selective electrodes (ISE). A subset of 30 samples was frozen for reanalysis of sodium concentration using ISE, flame photometry (FP), and conductivity (SC). Sweat samples collected in hot conditions produced higher sweat sodium concentrations than those from the temperate environment (P = .0032). A significant difference (P = .0048) in estimates of sweat sodium concentration was evident when calculated from the forearm average (mean ± 95% CL; 64 ± 12 mmol/L) compared with using a 4-site equation (70 ± 12 mmol/L). There was a high correlation between the values produced using different analytical techniques (r^2 = .95), but mean values were different between treatments (frozen FP, frozen SC > immediate ISE > frozen ISE; P < .0001). Whole-body sweat sodium concentration estimates differed depending on the number of sites included in the calculation. Environmental testing conditions should be considered in the interpretation of results. The impact of sample freezing and subsequent analytical technique was small but statistically significant. Nevertheless, when undertaken using a standardized protocol, the regional absorbent-patch method appears to be a relatively robust field test.
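[Editor's note: a minimal sketch of the site-averaging step under discussion; the paper's actual 4-site regression coefficients are not given in the abstract, so the equal weights below are a placeholder assumption.]

    import numpy as np

    SITES = ("forearm", "chest", "scapula", "thigh")

    def whole_body_sweat_sodium(patch_na_mmol_l, weights=None):
        """Combine regional absorbent-patch sodium concentrations (mmol/L)
        into a whole-body estimate as a weighted mean over sites."""
        values = np.array([patch_na_mmol_l[s] for s in SITES], dtype=float)
        if weights is None:
            weights = np.full(len(SITES), 1.0 / len(SITES))  # placeholder
        weights = np.asarray(weights, dtype=float)
        return float(np.dot(weights, values) / weights.sum())

    # Illustrative patch values only:
    print(whole_body_sweat_sodium(
        {"forearm": 64.0, "chest": 75.0, "scapula": 72.0, "thigh": 69.0}))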
Einarsson, Rasmus; Persson, U Martin
2017-01-01
This paper presents a spatially explicit method for making regional estimates of the potential for biogas production from crop residues and manure, accounting for key technical, biochemical, environmental and economic constraints. Methods for making such estimates are important as biofuels from agricultural residues are receiving increasing policy support from the EU and major biogas producers, such as Germany and Italy, in response to concerns over unintended negative environmental and social impacts of conventional biofuels. This analysis comprises a spatially explicit estimate of crop residue and manure production for the EU at 250 m resolution, and a biogas production model accounting for local constraints such as the sustainable removal of residues, transportation of substrates, and the substrates' biochemical suitability for anaerobic digestion. In our base scenario, the EU biogas production potential from crop residues and manure is about 0.7 EJ/year, nearly double the current EU production of biogas from agricultural substrates, most of which does not come from residues or manure. An extensive sensitivity analysis of the model shows that the potential could easily be 50% higher or lower, depending on the stringency of economic, technical and biochemical constraints. We find that the potential is particularly sensitive to constraints on the substrate mixtures' carbon-to-nitrogen ratio and dry matter concentration. Hence, the potential to produce biogas from crop residues and manure in the EU depends to a large extent on the possibility to overcome the challenges associated with these substrates, either by complementing them with suitable co-substrates (e.g. household waste and energy crops), or through further development of biogas technology (e.g. pretreatment of substrates and recirculation of effluent).
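[Editor's note: a schematic sketch of the per-cell constraint logic described above; all field names, thresholds, and units are assumptions for illustration, not the paper's calibrated values.]

    def cell_biogas_potential(substrates, cn_range=(15.0, 30.0), dm_max=0.12):
        """Energy potential of one grid cell's substrate mix.

        substrates: list of dicts with keys mass (t/yr), removable
        (sustainable-removal fraction), C_to_N, dry_matter (fraction),
        and yield_MJ_per_t (methane potential).
        """
        usable = [s["mass"] * s["removable"] for s in substrates]
        total = sum(usable)
        if total == 0:
            return 0.0
        # Mass-weighted mix properties, checked against digester constraints.
        cn = sum(u * s["C_to_N"] for u, s in zip(usable, substrates)) / total
        dm = sum(u * s["dry_matter"] for u, s in zip(usable, substrates)) / total
        if not (cn_range[0] <= cn <= cn_range[1]) or dm > dm_max:
            return 0.0  # mix unsuitable for wet anaerobic digestion
        return sum(u * s["yield_MJ_per_t"] for u, s in zip(usable, substrates))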
Large-area Soil Moisture Surveys Using a Cosmic-ray Rover: Approaches and Results from Australia
NASA Astrophysics Data System (ADS)
Hawdon, A. A.; McJannet, D. L.; Renzullo, L. J.; Baker, B.; Searle, R.
2017-12-01
Recent improvements in satellite instrumentation have increased the resolution and frequency of soil moisture observations, and this in turn has supported the development of higher-resolution land surface process models. Calibration and validation of these products are restricted by the mismatch of scales between remotely sensed and contemporary ground-based observations. Although the cosmic-ray neutron soil moisture probe can provide estimates of soil moisture at a scale useful for calibration and validation purposes, it is spatially limited to a single, fixed location. This scaling issue has been addressed with the development of mobile soil moisture monitoring systems that utilize the cosmic-ray neutron method, typically referred to as a 'rover'. This manuscript describes a project designed to develop approaches for undertaking rover surveys to produce soil moisture estimates at scales comparable to satellite observations and land surface process models. A custom-designed, trailer-mounted rover was used to conduct repeat surveys at two scales in the Mallee region of Victoria, Australia. A broad-scale survey was conducted at 36 x 36 km, covering the area of a standard SMAP pixel, and an intensive-scale survey was conducted over a 10 x 10 km portion of the broad-scale survey, a scale equivalent to that used for national water balance modelling. We describe the design of the rover and the methods used for converting neutron counts into soil moisture, and discuss factors controlling soil moisture variability. We found that the intensive-scale rover surveys produced reliable soil moisture estimates at 1 km resolution and the broad-scale surveys at 9 km resolution. We conclude that these products are well suited for future analysis of satellite soil moisture retrievals and finer-scale soil moisture models.
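[Editor's note: the abstract does not specify the count-to-moisture conversion; one widely used option is the Desilets et al. (2010) calibration function, sketched here with its commonly quoted default parameters — values that would need to be confirmed against the survey's own calibration.]

    def neutron_to_soil_moisture(n_counts, n0, a0=0.0808, a1=0.372, a2=0.115,
                                 bulk_density=1.4):
        """Desilets-style conversion of a corrected neutron count rate to
        soil moisture. n0 is the count rate over dry soil at the site;
        bulk_density (g/cm^3, illustrative) converts gravimetric water
        content (g/g) to volumetric (m^3/m^3)."""
        theta_g = a0 / (n_counts / n0 - a1) - a2  # gravimetric content
        return theta_g * bulk_density             # volumetric content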
Global mercury emissions from combustion in light of international fuel trading.
Chen, Yilin; Wang, Rong; Shen, Huizhong; Li, Wei; Chen, Han; Huang, Ye; Zhang, Yanyan; Chen, Yuanchen; Su, Shu; Lin, Nan; Liu, Junfeng; Li, Bengang; Wang, Xilong; Liu, Wenxin; Coveney, Raymond M; Tao, Shu
2014-01-01
The spatially resolved emission inventory is essential for understanding the fate of mercury. Previous global mercury emission inventories for fuel combustion sources overlooked the influence of fuel trading on local emission estimates for many countries, mostly developing countries, for which national emission data are not available. This study demonstrates that in many countries the mercury content of locally consumed coal and petroleum differs significantly from that of locally produced fuels. If the mercury content of locally produced fuels were used to estimate emissions, the resulting global mercury emissions from coal and petroleum would be overestimated by 4.7% and 72%, respectively. Even larger misestimates would occur in individual countries, leading to strong spatial bias. On the basis of the available data on fuel trading and an updated global fuel consumption database, a new mercury emission inventory for 64 combustion sources has been developed. The emissions were mapped at 0.1° × 0.1° resolution for 2007 and at country resolution for the period from 1960 to 2006. The estimated global total mercury emission from all combustion sources (fossil fuel, biomass fuel, solid waste, and wildfires) in 2007 was 1454 Mg (1232-1691 Mg as interquartile range from Monte Carlo simulation), among which elemental mercury (Hg^0), divalent gaseous mercury (Hg^2+), and particulate mercury (Hg^p) were 725, 548, and 181 Mg, respectively. The total emission from anthropogenic sources, excluding wildfires, was 1040 Mg (886-1248 Mg), with coal combustion contributing more than half. Globally, total annual anthropogenic mercury emission from combustion sources increased from 285 Mg (263-358 Mg) in 1960 to 1040 Mg (886-1248 Mg) in 2007, owing to increased fuel consumption in developing countries. However, mercury emissions from developed countries have decreased since 2000.
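[Editor's note: a toy sketch of the trade adjustment at the heart of this inventory — the mercury content of the fuel a country actually burns is a consumption-weighted mix of domestic and imported fuel. All field names and the abatement parameter are hypothetical.]

    def consumption_weighted_hg(domestic_t, domestic_ppm, imports):
        """Hg content (ppm) of consumed fuel; imports is a list of
        (tonnes, ppm) pairs from trading partners."""
        mass = domestic_t + sum(t for t, _ in imports)
        hg = domestic_t * domestic_ppm + sum(t * ppm for t, ppm in imports)
        return hg / mass

    def hg_emission_tonnes(fuel_t, hg_ppm, abatement=0.0):
        """Emission = fuel burned x Hg content x (1 - removal efficiency)."""
        return fuel_t * hg_ppm * 1e-6 * (1.0 - abatement)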
Rao, Krishna D; Shahrawat, Renu; Bhatnagar, Aarushi
2016-09-01
The availability of reliable and comprehensive information on the health workforce is crucial for workforce planning. In India, routine information sources on the health workforce are incomplete and unreliable. This paper addresses this issue and provides a comprehensive picture of India's health workforce. Data from the 68th round (July 2011 to June 2012) of the National Sample Survey on the Employment and unemployment situation in India were analysed to produce estimates of the health workforce in India. The estimates were based on self-reported occupations, categorized using a combination of both National Classification of Occupations (2004) and National Industrial Classification (2008) codes. Findings suggest that in 2011-2012, there were 2.5 million health workers (density of 20.9 workers per 10 000 population) in India. However, 56.4% of all health workers were unqualified, including 42.3% of allopathic doctors, 27.5% of dentists, 56.1% of Ayurveda, yoga and naturopathy, Unani, Siddha and homoeopathy (AYUSH) practitioners, 58.4% of nurses and midwives and 69.2% of health associates. By cadre, there were 3.3 qualified allopathic doctors and 3.1 nurses and midwives per 10 000 population; this is around one quarter of the World Health Organization benchmark of 22.8 doctors, nurses and midwives per 10 000 population. Out of all qualified workers, 77.4% were located in urban areas, even though the urban population is only 31% of the total population of the country. This urban-rural difference was higher for allopathic doctors (density 11.4 times higher in urban areas) compared to nurses and midwives (5.5 times higher in urban areas). The study highlights several areas of concern: overall low numbers of qualified health workers; a large presence of unqualified health workers, particularly in rural areas; and large urban-rural differences in the distribution of qualified health workers.
De Crée, C; Ball, P; Seidlitz, B; Van Kranenburg, G; Geurten, P; Keizer, H A
1997-10-01
It has been hypothesized that exercise-related hypo-estrogenemia occurs as a consequence of increased competition of catecholestrogens (CE) for catechol-O-methyltransferase (COMT). This may result in higher norepinephrine (NE) concentrations, which could interfere with normal gonadotropin pulsatility. The present study investigates the effects of training on CE responses to acute exercise stress. Nine untrained eumenorrheic women (mean percentage of body fat +/-SD: 24.8 +/- 3.1%) volunteered for an intensive 5-day training program. Resting, submaximal, and maximal (tmax) exercise plasma CE, estrogen, and catecholamine responses were determined pre- and posttraining in both the follicular (FPh) and luteal phase (LPh). Acute exercise stress increased total primary estrogens (E) but had little effect on total 2-hydroxyestrogens (2-OHE) and 2-hydroxyestrogen-monomethylethers (2-MeOE; i.e., O-methylated CE formed after competition for COMT). This pattern was not significantly changed by training. However, posttraining LPh mean (+/-SE) plasma E, 2-OHE, and 2-MeOE concentrations were significantly lower (P < 0.05) at each exercise intensity (for 2-OHE: 332 +/- 47 vs. 422 +/- 57 pg/mL at tmax; for 2-MeOE: 317 +/- 26 vs. 354 +/- 34 pg/mL at tmax). Training produced opposite effects on 2-OHE:E ratios (an estimate of CE formation) during acute exercise in the FPh (reduction) and LPh (increase). The 2-MeOE:2-OHE ratio (an estimate of CE activity) showed significantly higher values at tmax in both menstrual phases after training (FPh: +11%; LPh: +23%; P < 0.05). After training, NE values were significantly higher (P < 0.05). The major findings of this study were that training lowers absolute concentrations of plasma estrogens and CE; that the acute exercise challenge altered plasma estrogens but had little effect on CE; and that estimates of the formation and activity of CE suggest that formation and O-methylation of CE increase proportionately. These findings may be of importance for NE-mediated effects on gonadotropin release.
NASA Astrophysics Data System (ADS)
Murray, K. E.
2016-12-01
Management of produced fluids has become an important issue in Oklahoma because large volumes of saltwater are co-produced with oil and gas, and disposed into saltwater disposal wells at high rates. Petroleum production increased from 2009-2015, especially in central and north-central Oklahoma where the Mississippian and Hunton zones were redeveloped using horizontal wells and dewatering techniques that have led to a disproportional increase in produced water volumes. Improved management of co-produced water, including desalination for beneficial reuse and decreased saltwater disposal volumes, is only possible if spatial and temporal trends can be defined and related to the producing zones. It is challenging to quantify the volumes of co-produced water by region or production zone because co-produced water volumes are generally not reported. Therefore, the goal of this research is to estimate co-produced water volumes for 2008-present with an approach that can be replicated as petroleum production shifts to other regions. Oil and gas production rates from subsurface zones were multiplied by ratios of H2O:oil and H2O:gas for the respective zones. Initial H2O:oil and H2O:gas ratios were adjusted/calibrated, by zone, to maximize correlation of county-scale produced H2O estimates versus saltwater disposal volumes from 2013-2015. These calibrated ratios were then used to compute saltwater disposal volumes from 2008-2012 because of apparent data gaps in reported saltwater disposal volumes during that timeframe. This research can be used to identify regions that have the greatest need for produced water treatment systems. The next step in management of produced fluids is to explore optimal energy-efficient strategies that reduce deleterious effects.
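[Editor's note: a minimal sketch of the ratio-based estimate described here, assuming per-zone H2O:oil and H2O:gas ratios and county-aggregated series; names and units are placeholders, not the study's calibrated numbers.]

    import numpy as np

    def produced_water_bbl(oil_bbl, gas_mcf, wor, wgr):
        """Co-produced water volume for one zone: oil production times the
        H2O:oil ratio plus gas production times the H2O:gas ratio."""
        return oil_bbl * wor + gas_mcf * wgr

    def calibration_score(est_by_county, disposal_by_county):
        """Correlation used to tune per-zone ratios against reported
        county-scale saltwater disposal volumes (2013-2015)."""
        return np.corrcoef(est_by_county, disposal_by_county)[0, 1]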
Aarnisalo, Kaarina; Vihavainen, Elina; Rantala, Leila; Maijala, Riitta; Suihko, Maija-Liisa; Hielm, Sebastian; Tuominen, Pirkko; Ranta, Jukka; Raaska, Laura
2008-02-10
Microbial risk assessment provides a means of estimating consumer risks associated with food products. The methods can also be applied at the plant level. In this study results of microbiological analyses were used to develop a robust single plant level risk assessment. Furthermore, the prevalence and numbers of Listeria monocytogenes in marinated broiler legs in Finland were estimated. These estimates were based on information on the prevalence, numbers and genotypes of L. monocytogenes in 186 marinated broiler legs from 41 retail stores. The products were from three main Finnish producers, which produce 90% of all marinated broiler legs sold in Finland. The prevalence and numbers of L. monocytogenes were estimated by Monte Carlo simulation using WinBUGS, but the model is applicable to any software featuring standard probability distributions. The estimated mean annual number of L. monocytogenes-positive broiler legs sold in Finland was 7.2 x 10^6, with a 95% credible interval (CI) of 6.7 x 10^6 - 7.7 x 10^6. That would be 34% ± 1% of the marinated broiler legs sold in Finland. The mean number of L. monocytogenes in marinated broiler legs estimated at the sell-by-date was 2 CFU/g, with a 95% CI of 0-14 CFU/g. Producer-specific L. monocytogenes strains were recovered from the products throughout the year, which emphasizes the importance of characterizing the isolates and identifying strains that may cause problems as part of risk assessment studies. As the levels of L. monocytogenes were low, the risk of acquiring listeriosis from these products proved to be insignificant. Consequently there was no need for a thorough national level risk assessment. However, an approach using worst-case and average point estimates was applied to produce an example of single producer level risk assessment based on limited data. This assessment also indicated that the risk from these products was low. The risk-based approach presented in this work can provide estimation of public health risk on which control measures at the plant level can be based.
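[Editor's note: an illustrative Monte Carlo sketch in the spirit of the WinBUGS model; the positives count (34% of 186) and the annual sales figure are back-of-envelope assumptions, not the study's inputs.]

    import numpy as np

    rng = np.random.default_rng(1)

    n_sampled, n_positive = 186, 63     # assumed positives among sampled legs
    annual_legs_sold = 21_000_000       # assumed annual sales volume

    # Beta posterior for prevalence, scaled to annual positive units:
    prevalence = rng.beta(n_positive + 1, n_sampled - n_positive + 1, 10_000)
    positive_legs = prevalence * annual_legs_sold

    lo, median, hi = np.percentile(positive_legs, [2.5, 50.0, 97.5])
    print(f"median {median:.2e}, 95% interval {lo:.2e}-{hi:.2e}")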
Risk assessment of Salmonella in Danish meatballs produced in the catering sector.
Møller, Cleide O de A; Nauta, Maarten J; Schaffner, Donald W; Dalgaard, Paw; Christensen, Bjarke B; Hansen, Tina B
2015-03-02
A modular process risk model approach was used to assess health risks associated with Salmonella spp. after consumption of the Danish meatball product (frikadeller) produced with fresh pork in a catering unit. Meatball production and consumption were described as a series of processes (modules), starting from 1.3 kg meat pieces through conversion to 70 g meatballs, followed by a dose response model to assess the risk of illness from consumption of these meatballs. Changes in bacterial prevalence, concentration, and unit size were modelled within each module. The risk assessment was built using observational data and models that were specific for Salmonella spp. in meatballs produced in the catering sector. Danish meatballs are often pan-fried followed by baking in an oven before consumption, in order to reach the core temperature of 75°C recommended by the Danish Food Safety Authority. However, in practice this terminal heat treatment in the oven may be accidentally omitted. Eleven production scenarios were evaluated with the model, to test the impact of heat treatments and cooling rates at different room temperatures. The risk estimates revealed that a process comprising heat treatment of meatballs to core temperatures higher than 70°C, and subsequent holding at room temperatures lower than 20°C, for no longer than 3.5 h, was very effective in Salmonella control. The current Danish Food Safety Authority recommendation of cooking to an internal temperature of 75°C is conservative, at least with respect to Salmonella risk. Survival and growth of Salmonella during cooling of meatballs not heat treated in oven had a significant impact on the risk estimates, and therefore, cooling should be considered a critical step during meatball processing.
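[Editor's note: a schematic of the modular chain (partitioning, inactivation, dose-response); every parameter value and the exponential dose-response choice are illustrative stand-ins, since the abstract does not reproduce the study's fitted models.]

    import numpy as np

    rng = np.random.default_rng(7)

    def partition_batch(cfu_in_batch, batch_g=1300.0, unit_g=70.0):
        """Randomly partition the Salmonella cells in a 1.3 kg meat batch
        among the ~70 g meatballs made from it."""
        n_units = int(batch_g // unit_g)
        return rng.multinomial(cfu_in_batch, [1.0 / n_units] * n_units)

    def heat_survivors(cfu, log10_reduction):
        """Each cell independently survives cooking with probability
        10**(-log10_reduction)."""
        return rng.binomial(cfu, 10.0 ** (-log10_reduction))

    def p_illness(dose, r=1e-3):
        """Exponential dose-response model: P(ill) = 1 - exp(-r * dose)."""
        return 1.0 - np.exp(-r * dose)

    balls = partition_batch(cfu_in_batch=500)
    survivors = heat_survivors(balls, log10_reduction=6.0)
    print(p_illness(survivors).max())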