Development of municipal solid waste classification in Korea based on fossil carbon fraction.
Lee, Jeongwoo; Kang, Seongmin; Kim, Seungjin; Kim, Ki-Hyun; Jeon, Eui-Chan
2015-10-01
Environmental problems and climate change arising from waste incineration are a serious concern worldwide. In Korea, waste disposal methods are largely classified into landfill, incineration, recycling, etc., and the amount of incinerated waste has risen by 24.5% since 2002. Under the IPCC methodology for estimating CO₂ emissions from waste incinerators, the fossil carbon content of the waste is the main factor. The fossil carbon fraction (FCF) differs depending on the characteristics of waste in each country, and a wide range of default values is proposed by the IPCC. This study examined the existing IPCC and Korean waste classification systems on the basis of FCF, with the aim of accurate greenhouse gas emission estimation for waste incineration. Waste characteristics amenable to sorting were classified according to FCF and physical form. The characteristics sorted according to FCF were paper, textiles, rubber, and leather: paper was classified into pure and processed paper; textiles into cotton and synthetic fibers; and rubber and leather into artificial and natural. FCF was analyzed by collecting representative samples from each classification group and applying the ¹⁴C method with accelerator mass spectrometry (AMS). The measured values were then compared with the default values proposed by the IPCC. For garden and park waste and for plastics, the differences were within the range of the IPCC default values or were negligible. However, coated paper, synthetic textiles, natural rubber, synthetic rubber, artificial leather, and other wastes showed differences of over 10% in FCF content. The IPCC system comprises roughly nine broad qualitative waste categories; using finely classified waste characteristics, as in this study, can therefore yield emission estimates that differ greatly from those based on the existing IPCC classification system.
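The FCF enters emission estimation through the IPCC incineration equation (2006 Guidelines, Vol. 5, Eq. 5.1). A minimal sketch in Python; the two-component waste stream and all factor values below are illustrative placeholders, not the study's measured Korean values:

```python
# Sketch of the IPCC 2006 fossil CO2 estimate for waste incineration:
# CO2 = MSW * sum_j(WF_j * dm_j * CF_j * FCF_j * OF_j) * 44/12

def fossil_co2_from_incineration(msw_tonnes, components):
    """Return fossil CO2 (tonnes) from incinerated MSW.

    components: list of dicts with keys wf (wet-weight fraction),
    dm (dry-matter content), cf (carbon fraction of dry matter),
    fcf (fossil carbon fraction), of (oxidation factor).
    """
    factor = sum(c["wf"] * c["dm"] * c["cf"] * c["fcf"] * c["of"]
                 for c in components)
    return msw_tonnes * factor * 44.0 / 12.0  # 44/12 converts C to CO2

# Illustrative two-component stream: plastics (fully fossil carbon)
# and paper (almost entirely biogenic carbon).
stream = [
    {"wf": 0.30, "dm": 1.00, "cf": 0.75, "fcf": 1.00, "of": 1.0},  # plastics
    {"wf": 0.70, "dm": 0.90, "cf": 0.46, "fcf": 0.01, "of": 1.0},  # paper
]
print(round(fossil_co2_from_incineration(100.0, stream), 2))
```

Because FCF multiplies every other factor, a 10% error in FCF propagates directly into a 10% error in that component's fossil CO₂, which is why the component-level FCF measurements above matter.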
NASA Astrophysics Data System (ADS)
Brown, L.; Armstrong Brown, S.; Jarvis, S. C.; Syed, B.; Goulding, K. W. T.; Phillips, V. R.; Sneath, R. W.; Pain, B. F.
Nitrous oxide emission from UK agriculture was estimated, using the IPCC default values of all emission factors and parameters, to be 87 Gg N2O-N in both 1990 and 1995. This estimate was shown, however, to have an overall uncertainty of 62%. The largest component of the emission (54%) was from the direct (soil) sector. Two of the three emission factors applied within the soil sector, EF1 (direct emission from soil) and EF3(PRP) (emission from pasture, range and paddock), were amongst the most influential on the total estimate, producing a ±31% and a +11% to -17% change in emissions, respectively, when varied through the IPCC range from the default value. The indirect sector (from leached N and deposited ammonia) contributed 29% of the total emission, and had the largest uncertainty (126%). The factors determining the fraction of N leached (FracLEACH) and emissions from it (EF5) were the two most influential. These parameters are poorly specified and there is great potential to improve the emission estimate for this component. Use of mathematical models (NCYCLE and SUNDIAL) to predict FracLEACH suggested that the IPCC default value for this parameter may be too high for most situations in the UK. Comparison with other UK-derived inventories suggests that the IPCC methodology may overestimate emission. Although the IPCC approach includes components additional to the other inventories (most notably emission from indirect sources), the estimates for the common components (i.e. fertiliser and animals), and the emission factors used, are higher than those of other inventories. Whilst it is recognised that the IPCC approach is generalised in order to allow widespread applicability, sufficient data are available to specify at least two of the most influential parameters, i.e. EF1 and FracLEACH, more accurately, and so provide an improved estimate of nitrous oxide emissions from UK agriculture.
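The Tier 1 arithmetic behind the direct soil component can be sketched as follows. EF1 = 1.25% is the 1996-guideline default discussed above; the N input figure is purely illustrative, not the UK total:

```python
# Sketch of the IPCC Tier 1 direct soil N2O calculation:
# N2O-N = N applied * EF1, then 44/28 converts N2O-N mass to N2O mass.

def direct_soil_n2o(n_input_kg, ef1=0.0125):
    """Return (kg N2O-N, kg N2O) emitted from applied nitrogen."""
    n2o_n = n_input_kg * ef1
    return n2o_n, n2o_n * 44.0 / 28.0

n2o_n, n2o = direct_soil_n2o(1000.0)  # illustrative 1 t of applied N
print(round(n2o_n, 2), round(n2o, 2))
```

Varying `ef1` across the IPCC range (rather than the default) is what produces the ±31% swing in the total estimate reported above.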
Weitz, Melissa; Coburn, Jeffrey B; Salinas, Edgar
2008-05-01
This paper estimates national methane emissions from solid waste disposal sites in Panama over the time period 1990-2020 using both the 2006 Intergovernmental Panel on Climate Change (IPCC) Waste Model spreadsheet and the default emissions estimate approach presented in the 1996 IPCC Good Practice Guidelines. The IPCC Waste Model has the ability to calculate emissions from a variety of solid waste disposal site types, taking into account country- or region-specific waste composition and climate information, and can be used with a limited amount of data. Countries with detailed data can also run the model with country-specific values. The paper discusses methane emissions from solid waste disposal; explains the differences between the two methodologies in terms of data needs, assumptions, and results; describes solid waste disposal circumstances in Panama; and presents the results of this analysis. It also demonstrates the Waste Model's ability to incorporate landfill gas recovery data and to make projections. Methane emission estimates from the former default method are 25 Gg in 1994, ranging from 23.1 Gg in 1990 to a projected 37.5 Gg in 2020. The Waste Model estimates are 26.7 Gg in 1994, ranging from 24.6 Gg in 1990 to 41.6 Gg in 2020. Emissions estimates for Panama produced by the new model were, on average, 8% higher than estimates produced by the former default methodology. The increased estimate can be attributed to the inclusion of all solid waste disposal in Panama (as opposed to only disposal in managed landfills), but the increase was offset somewhat by the different default factors and regional waste values between the 1996 and 2006 IPCC guidelines, and the use of the first-order decay model with a time delay for waste degradation in the IPCC Waste Model.
The study on biomass fraction estimate methodology of municipal solid waste incinerator in Korea.
Kang, Seongmin; Kim, Seungjin; Lee, Jeongwoo; Yun, Hyunki; Kim, Ki-Hyun; Jeon, Eui-Chan
2016-10-01
In Korea, greenhouse gas emissions from waste amounted to 14,800,000 t CO2eq in 2012, up from 5,000,000 t CO2eq in 2010. This included emissions from incineration, which have gradually increased since 2010; incineration was the biggest contributor, releasing 7,400,000 t CO2eq in 2012. Therefore, with regard to the greenhouse gas emissions trading scheme initiated in 2015 and the preparation of the national inventory report, it is important to increase the reliability of measurements related to waste incineration. This research explored methods for estimating the biomass fraction at Korean MSW incinerator facilities and compared the biomass fractions obtained with different estimation methods: the method using the IPCC default fossil carbon fraction values, the method using the solid waste composition, and the method using incinerator flue gas. The highest biomass fractions in Korean municipal solid waste incinerator facilities were estimated by the IPCC Default method, followed by the MSW analysis method and the Flue gas analysis method. The difference in the biomass fraction estimate was therefore greatest between the IPCC Default and Flue gas analysis methods, while the difference between the MSW analysis and Flue gas analysis methods was smaller. This suggests that the IPCC default method cannot reflect the characteristics of Korean waste incinerator facilities and Korean MSW. Incineration is one of the most effective methods for disposal of municipal solid waste (MSW). This paper investigates the applicability of using biomass content to estimate the amount of CO2 released, and compares the biomass contents determined by different methods in order to establish a method for estimating biomass in the MSW incinerator facilities of Korea.
After analyzing the biomass contents of the collected solid waste samples and the flue gas samples, the results were compared with the Intergovernmental Panel on Climate Change (IPCC) method; they suggest that the flue gas analysis method is better suited than the IPCC method for calculating the biomass fraction. These findings are valuable for the design and operation of new incineration power plants, especially for the estimation of greenhouse gas emissions.
NASA Astrophysics Data System (ADS)
Langner, Andreas; Achard, Frédéric; Grassi, Giacomo
2014-05-01
The IPCC proposes three Tier levels for greenhouse gas emission monitoring, with a hierarchical order in terms of accuracy as well as data requirements/complexity. While Tier 1 provides default above-ground biomass (AGB) values per ecological zone and continent, Tiers 2 and 3 are based on country-specific remote sensing or permanent sample-plot data. Owing to limited capacity, most developing countries have to rely on Tier 1 default values, which carry the highest uncertainties. Furthermore, IPCC Tier 1 values lack transparency, as they are based on a variety of studies that have been repeatedly updated and combined with expert opinions, thus blurring the original data sources. A possible way to increase credibility is a conservative monitoring approach, following the principle of conservativeness, thus reducing the likelihood of unjustified payments for emission reductions not reflecting reality. Implementing that principle requires knowledge of the biomass distribution within each ecological zone. However, such information is not available for the IPCC Tier 1 values, which only provide mean values and/or AGB ranges that are not based on a common statistical analysis. Using the pan-tropical datasets of Saatchi et al (Proc Natl Acad Sci USA, 108, 9899-9904, 2011; 1 km spatial resolution) and Baccini et al (Nat Climate Change, 2:182-185, 2012; 500 m spatial resolution), we calculated the mean AGB values as well as their 50% confidence intervals for each ecological zone within the DRC, using Globcover2009 as a forest/non-forest mask together with the FAO ecological zones dataset. Such an analysis is more transparent while at the same time leading to "statistically improved" Tier 1 values, potentially allowing a conservative monitoring approach by selecting the lower bound of the confidence interval for emission estimation during the reference period and the higher bound for the assessment period.
Within the DRC, Baccini generally delivers higher AGB estimates than Saatchi, yet even Baccini's estimates are 81 t/ha and 143 t/ha lower than the IPCC values for Tropical Rain Forests and Moist Deciduous Forests, respectively. While the AGB values for Tropical Dry Forest in both maps are similar to the IPCC value, Tropical Mountain Systems cannot easily be compared as the IPCC data lack a mean value. A recent study by Mitchard et al (Carbon Balance and Management, 8, 10, 2013) compared both pan-tropical datasets, pointing out notable differences in the Congo basin; however, their analysis revealed that neither map is generally superior. Therefore, we suggest using the average of both maps as a reasonable approximation to the real but unknown AGB values, resulting in 213±69 t/ha for Tropical Rain Forests, 94±19 t/ha for Moist Deciduous Forests, 119±31 t/ha for Tropical Dry Forests and 182±61 t/ha for Tropical Mountain Systems of the DRC, while the corresponding IPCC values are 310 t/ha, 260 t/ha, 120 t/ha and 40-190 t/ha.
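The conservative bound selection described above can be sketched directly from the DRC zone figures quoted in the text (mean ± 50% CI half-width per ecological zone):

```python
# Sketch of the conservative Tier 1 approach: average the two AGB maps,
# then use the lower CI bound for the reference period and the upper
# bound for the assessment period. Values are the DRC figures from the text.
zones = {  # ecological zone: (mean AGB t/ha, 50% CI half-width t/ha)
    "Tropical Rain Forest": (213, 69),
    "Moist Deciduous Forest": (94, 19),
    "Tropical Dry Forest": (119, 31),
    "Tropical Mountain System": (182, 61),
}

def conservative_bounds(mean, half_width):
    """Return (reference-period AGB, assessment-period AGB) in t/ha."""
    return mean - half_width, mean + half_width

ref, assess = conservative_bounds(*zones["Tropical Rain Forest"])
print(ref, assess)
```

Using the lower bound in the reference period and the upper bound in the assessment period deliberately biases the estimated emission reduction downward, which is the point of the conservativeness principle.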
O'Dwyer, Jean; Walshe, Dylan; Byrne, Kenneth A
2018-03-01
Large quantities of wood products have historically been disposed of in landfills. The fate of this vast pool of carbon plays an important role in national carbon balances and accurate emission reporting. The Republic of Ireland, like many EU countries, uses the 2006 Intergovernmental Panel on Climate Change (IPCC) guidelines for greenhouse gas reporting in the waste sector, which provide default factors for emission estimation. For wood products, the release of carbon is directly proportional to the decomposition of the degradable organic carbon fraction of the product, for which the IPCC provides a value of 0.5 (50%). However, in situ analytical results on the decomposition rates of carbon in landfilled wood do not corroborate this figure, suggesting that carbon emissions are likely overestimated. To assess the impact of this overestimation on emission reporting, carbon decomposition values obtained from the literature and the IPCC default factor were applied to the Irish wood fraction of landfilled waste for the years 1957-2016 and compared. Univariate analysis found a statistically significant difference between carbon (methane) emissions calculated using the IPCC default factor and decomposition factors from direct measurements for softwoods (F = 45.362, p < .001), hardwoods (F = 20.691, p < .001) and engineered wood products (U = 4.726, p < .001). However, there was no significant difference between emissions calculated using only the in situ analytical decomposition factors, regardless of time in landfill, location or, subsequently, climate. This suggests that methane emissions from the wood fraction of landfilled waste in Ireland could be drastically overestimated, potentially by a factor of 56. The results of this study highlight the implications of emission reporting at a lower tier and prompt further research into the decomposition of wood products in landfills at a national level. Copyright © 2017 Elsevier Ltd. All rights reserved.
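The disputed factor enters the estimate as DOCf in the IPCC landfill methane relation. A hedged sketch of how a lower decomposable fraction scales the estimate; the waste mass, DOC content, and the alternative DOCf value are illustrative assumptions, not the paper's Irish figures:

```python
# Sketch of how the decomposable fraction (DOCf) drives the IPCC landfill
# methane potential: CH4 = W * DOC * DOCf * MCF * F * 16/12.

def ch4_potential(waste_t, doc=0.43, doc_f=0.5, mcf=1.0, f=0.5):
    """Return CH4 potential (tonnes) for a mass of landfilled wood waste."""
    ddoc_m = waste_t * doc * doc_f * mcf  # decomposable DOC actually degrading
    return ddoc_m * f * 16.0 / 12.0       # F = CH4 share; 16/12 converts C to CH4

ipcc = ch4_potential(1000.0, doc_f=0.5)     # IPCC default decomposable fraction
in_situ = ch4_potential(1000.0, doc_f=0.1)  # illustrative lower in-situ value
print(round(ipcc, 1), round(in_situ, 1))
```

Because DOCf is a pure multiplier, halving it halves the methane estimate; an in-situ DOCf far below 0.5 produces proportionally large overestimates under the default.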
Kim, Seungjin; Kang, Seongmin; Lee, Jeongwoo; Lee, Seehyung; Kim, Ki-Hyun; Jeon, Eui-Chan
2016-10-01
In this study, in order to accurately calculate greenhouse gas emissions from municipal solid waste incineration facilities, which are the major waste incineration facilities, and to identify the problems likely to occur in doing so, emissions were calculated using three calculation methods. For the comparison of calculation methods, the waste characteristics ratio, dry substance content by waste type, carbon content in dry substance, and ¹²C content were analyzed; in particular, the CO2 concentration in incineration gases and its ¹²C content were analyzed together. Three calculation methods were constructed from the assay values, and the emissions of the municipal solid waste incineration facilities calculated with each method were compared. With Calculation Method A, which used the default values presented in the IPCC guidelines, greenhouse gas emissions for municipal solid waste incineration facilities A and B were calculated at 244.43 ton CO2/day and 322.09 ton CO2/day, respectively. This differed considerably from Calculation Methods B and C, which used the assay values of this study; it appears that the IPCC default value, being a world average, could not reflect the characteristics of the municipal solid waste incineration facilities. Calculation Method B gave 163.31 ton CO2/day and 230.34 ton CO2/day for facilities A and B, respectively; Calculation Method C gave 151.79 ton CO2/day and 218.99 ton CO2/day. This study compares greenhouse gas emissions calculated using the ¹²C content default value provided by the IPCC (Intergovernmental Panel on Climate Change) with those calculated using ¹²C content and waste assay values that reflect the characteristics of the target municipal solid waste incineration facilities.
In addition, the CO2 concentration and ¹²C content were determined by directly collecting incineration gases from the target municipal solid waste incineration facilities, and the greenhouse gas emissions obtained through this survey were compared with the emissions calculated from the previously obtained solid waste assay values.
Biomass expansion factor and root-to-shoot ratio for Pinus in Brazil.
Sanquetta, Carlos R; Corte, Ana Pd; da Silva, Fernando
2011-09-24
The Biomass Expansion Factor (BEF) and the Root-to-Shoot Ratio (R) are variables used to quantify carbon stock in forests. They are treated as constant or species/area-specific values in most studies. This study aimed to show the dependence of BEF and R on tree size and age, and proposed equations to improve forest biomass and carbon stock estimates. Data from 70 sample trees of Pinus spp. grown in southern Brazil, covering different diameter classes and ages, were used to demonstrate the correlation between BEF and R and forest inventory data such as DBH, tree height and age. Total dry biomass, carbon stock and CO2 equivalent were simulated using the IPCC default values of BEF and R, the corresponding averages calculated from the data used in this study, and the values estimated by regression equations. The mean values of BEF and R calculated in this study were 1.47 and 0.17, respectively. BEF and R were inversely related to the tree measurement variables, with negative exponential behavior. Simulations indicated that the use of fixed values of BEF and R, whether the IPCC defaults or current average data, may lead to unreliable estimates for carbon stock inventories and CDM projects. It was concluded that accounting for the variations in BEF and R, using regression equations to relate them to DBH, tree height and age, is fundamental to obtaining reliable estimates of forest tree biomass, carbon sink and CO2 equivalent.
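The arithmetic that BEF and R feed into can be sketched as below, using the study's mean values (BEF = 1.47, R = 0.17); the stem biomass input and the 0.47 carbon fraction are illustrative assumptions, not the study's data:

```python
# Sketch of a carbon-stock calculation from stem biomass via BEF and R.

def carbon_stock(stem_biomass_t, bef=1.47, r=0.17, carbon_frac=0.47):
    """Return (total biomass, carbon, CO2-equivalent) in tonnes."""
    aboveground = stem_biomass_t * bef  # expand stem to whole aboveground tree
    total = aboveground * (1.0 + r)     # add roots via root-to-shoot ratio R
    carbon = total * carbon_frac        # carbon fraction of dry biomass
    return total, carbon, carbon * 44.0 / 12.0  # 44/12 converts C to CO2e

total, carbon, co2e = carbon_stock(100.0)
print(round(total, 2), round(carbon, 2), round(co2e, 2))
```

Because BEF and R multiply the stem biomass directly, replacing the fixed means with size- and age-dependent regression values changes every downstream carbon and CO₂e figure proportionally, which is the study's central point.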
Review and analysis of global agricultural N₂O emissions relevant to the UK.
Buckingham, S; Anthony, S; Bellamy, P H; Cardenas, L M; Higgins, S; McGeough, K; Topp, C F E
2014-07-15
As part of a UK government funded research project to update the UK N2O inventory methodology, a systematic review of published nitrous oxide (N2O) emission factors was carried out on non-UK research, for future comparison and synthesis with the UK measurement-based evidence base. The aim of the study is to assess how the UK IPCC default emission factor for N2O emissions derived from synthetic or organic fertiliser inputs (EF1) compares to international values reported in the published literature. The availability of data for comparing and/or refining the UK IPCC default value, and the possibility of analysing sufficient auxiliary data to propose a Tier 2 EF1 reporting strategy, are evaluated. The review demonstrated a lack of consistency in reporting error bounds for fertiliser-derived EFs and N2O flux data, with 8% and 44% of publications reporting EF and N2O flux error bounds, respectively. There was also poor description of environmental (climate and soil) and experimental design auxiliary data. This is likely to be due to differences in study objectives; however, potential improvements to soil parameter reporting are proposed. The review demonstrates that emission factors for agriculture-derived N2O emissions ranged from -0.34% to 37%, showing high variation compared to the UK Tier 1 IPCC EF1 default values of 1.25% (IPCC 1996) and 1% (IPCC 2006). However, the majority (83%) of EFs reported for UK-relevant soils fell within the UK IPCC EF1 uncertainty range of 0.03% to 3%. Residual maximum likelihood (REML) analysis of the data collated in the review showed that the type and rate of fertiliser N applied and the soil type were significant factors influencing the EFs reported. Country of emission, the length of the measurement period, the number of splits, the crop type, pH and SOC did not have a significant impact on N2O emissions. A subset of publications where sufficient data were reported for meta-analysis to be conducted was identified.
Meta-analysis of effect sizes of 41 treatments demonstrated that the application of fertiliser has a significant effect on N2O emissions in comparison to control plots and that emission factors were significantly different from zero. However, no significant relationship was found between the quantity of fertiliser applied and the effect size of N2O emitted from fertilised plots compared to control plots. Annual addition of fertiliser of 35 to 557 kg N/ha gave a mean increase in emissions of 2.02 ± 0.28 g N2O/ha/day compared to control treatments (p<0.01). Emission factors were significantly different from zero, with a mean emission factor estimated directly from the meta-analysis of 0.17 ± 0.02%. This is lower than the IPCC 2006 Tier 1 EF1 value of 1% but falls within the uncertainty bound for the IPCC 2006 Tier 1 EF1 (0.03% to 3%). As only a small number of papers were viable for meta-analysis, owing to the lack of reporting of the key controlling factors, the EF estimates in this paper cannot capture the true variability under conditions similar to the UK. Review-derived EFs of 0.34% to 37% and a mean EF from meta-analysis of 0.17 ± 0.02% highlight the variability in reported EFs depending on the method applied and sample size. A protocol for systematic reporting of N2O emissions and key auxiliary parameters in publications across disciplines is proposed. If adopted, this would strengthen the evidence base informing IPCC Tier 2 reporting development and reduce the uncertainty surrounding reported UK N2O emissions. Copyright © 2014 Elsevier B.V. All rights reserved.
Kaewpila, Chatchai; Sommart, Kritapon
2016-10-01
The enteric methane conversion factor (Ym) is an important country-specific value for the provision of precise enteric methane emission inventory reports. The objectives of this meta-analysis were to develop and evaluate empirical Ym models at the national level and the farm level for tropical developing countries according to the IPCC's categorization. We used datasets derived from 18 in vivo feeding experiments from 1999 to 2015 on Zebu beef cattle breeds fed low-quality crop residues and by-products. We found that the observed Ym value was 8.2% of gross energy (GE) intake (~120 g methane emission head⁻¹ day⁻¹) and ranged from 4.8% to 13.7% of GE intake. The IPCC default model (tier 2, Ym = 6.5% ± 1.0% of GE intake) underestimated the Ym values by up to 26.1% compared with its refinement of 8.4% ± 0.4% of GE intake for the national-level estimate. Both the IPCC default model and the refined model performed worse in predicting Ym trends at the farm level (root mean square prediction error [MSPE] = 15.1%-23.1%, concordance correlation coefficient [CCC] = 0.16-0.18, R² = .32). Seven of the extant Ym models based on a linear regression approach also showed inaccurately estimated Ym values (root MSPE = 16.2%-36.0%, CCC = 0.02-0.27, R² < .37). However, one of the developed models, which related the complexity of the energy use efficiencies of the consumed diet to Ym, showed adequate accuracy at the farm level (root MSPE = 9.1%, CCC = 0.75, R² = .67). Our results thus suggest a new Ym model and highlight future challenges for estimating enteric methane from Zebu beef cattle production in tropical developing countries.
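The Ym conversion the meta-analysis evaluates can be sketched as follows: methane energy is a fixed share of gross energy intake, converted to mass with the IPCC energy content of methane (55.65 MJ/kg). The 82 MJ/day GE intake below is an illustrative assumption:

```python
# Sketch of the IPCC tier-2 enteric methane relation:
# CH4 (g/day) = GE intake * (Ym / 100) / 55.65 MJ/kg * 1000 g/kg

def enteric_ch4_g_per_day(ge_intake_mj, ym_pct):
    """Return enteric methane emission (g/day) from GE intake and Ym (%)."""
    return ge_intake_mj * (ym_pct / 100.0) / 55.65 * 1000.0

ipcc = enteric_ch4_g_per_day(82.0, 6.5)      # IPCC tier-2 default Ym
observed = enteric_ch4_g_per_day(82.0, 8.2)  # study's observed mean Ym
print(round(ipcc, 1), round(observed, 1))
```

With these inputs the observed Ym yields roughly the ~120 g head⁻¹ day⁻¹ quoted above, while the IPCC default yields a proportionally lower figure, illustrating the underestimation the authors report.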
Won, S G; Cho, W S; Lee, J E; Park, K H; Ra, C S
2014-03-01
Many studies on methane (CH4) and nitrous oxide (N2O) emissions from livestock industries have revealed that livestock production contributes directly to greenhouse gas (GHG) emissions through enteric fermentation and manure management, which negatively affects the environmental sustainability of animal production. In the present study, three values essential for GHG emission estimation were measured: i) the maximum CH4-producing capacity at mesophilic temperature (37°C) of anaerobically stored manure, by livestock category (B0,KM, Korean livestock manure B0); ii) the EF3(s) value, the emission factor for direct N2O emissions from manure management system S in the country (kg N2O-N (kg N)⁻¹), at mesophilic (37°C) and thermophilic (55°C) temperatures; and iii) Nex(T), the annual N excretion for livestock category T (kg N animal⁻¹ yr⁻¹), for different livestock manures. Static incubation with and without aeration was performed to obtain the N2O and CH4 emissions, respectively, from each sample. The chemical compositions of pre- and post-incubation manure were analyzed. The contents of total solids (% TS) and volatile solids (% VS) and the carbon-to-nitrogen ratio (C/N) decreased significantly in all samples owing to the generation of carbon-containing biogas, whereas moisture content (%) and pH increased after incubation. No large difference in total nitrogen content was observed between pre- and post-incubation during CH4 and N2O emission. CH4 emissions (g CH4 (kg VS)⁻¹) from the three manures (sows, layers and Korean cattle) differed, and a high C/N ratio resulted in high CH4 emission. Similarly, N2O emission was found to be affected by % VS, pH, and temperature. The B0,KM values for sows, layers, and Korean cattle obtained at 37°C were 0.0579, 0.0006, and 0.0828 m³ CH4 (kg VS)⁻¹, respectively, which are much lower than the default values in the IPCC guidelines (GL), except the value for Korean cattle.
For sows and Korean cattle, the Nex(T) values of 7.67 and 28.19 kg N yr⁻¹, respectively, are about 2.5-fold lower than the corresponding IPCC GL values. However, the Nex(T) value for layers, 0.63 kg N yr⁻¹, is very similar to the default value of 0.6 kg N yr⁻¹ in the IPCC GLs for national greenhouse gas inventories for countries such as South Korea/Asia. The EF3(s) values obtained at 37°C and 55°C were found to be far less than the default value.
Estimation of methane emission rate changes using age-defined waste in a landfill site.
Ishii, Kazuei; Furuichi, Toru
2013-09-01
Long-term methane emissions from landfill sites are often predicted with first-order decay (FOD) models, in which the default coefficients for the methane generation potential and the methane generation rate given by the Intergovernmental Panel on Climate Change (IPCC) are usually used. However, previous studies have demonstrated the large uncertainty in these coefficients, because they are derived from a calibration procedure under ideal steady-state conditions rather than actual landfill site conditions. In this study, the coefficients in the FOD model were estimated by a new approach to predict long-term methane generation more precisely by considering region-specific conditions. In the new approach, age-defined waste samples, which had been under actual landfill site conditions, were collected in Hokkaido, Japan (a cold region), and the time series of the age-defined waste samples' methane generation potential was used to estimate the coefficients in the FOD model. The degradation coefficients were 0.0501/y and 0.0621/y for paper and food waste, and the methane generation potentials were 214.4 mL/g wet waste and 126.7 mL/g wet waste for paper and food waste, respectively. These coefficients were compared with the default coefficients given by the IPCC. Although the degradation coefficient for food waste was smaller than the default value, the other coefficients were within the range of the default coefficients. With these new coefficients, the long-term methane emissions from the landfill site were estimated at 1.35×10⁴ m³ CH₄, which corresponds to approximately 2.53% of the total carbon dioxide emissions in the city (5.34×10⁵ t CO₂/y). Copyright © 2013 Elsevier Ltd. All rights reserved.
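The FOD relation these coefficients feed into can be sketched as below, using the study's fitted values for paper waste (k = 0.0501/y, L0 = 214.4 mL CH₄ per g wet waste); the 1 kg waste mass is an illustrative assumption:

```python
# Sketch of cumulative methane generation under first-order decay:
# CH4(t) = W * L0 * (1 - exp(-k * t))
import math

def cumulative_ch4_ml(wet_waste_g, l0_ml_per_g, k_per_year, years):
    """Return cumulative CH4 (mL) generated after a given number of years."""
    return wet_waste_g * l0_ml_per_g * (1.0 - math.exp(-k_per_year * years))

# Methane generated from 1 kg of landfilled paper after 10 years,
# using the study's fitted coefficients for paper.
print(round(cumulative_ch4_ml(1000.0, 214.4, 0.0501, 10.0), 1))
```

Because the exponent is k·t, a small change in the fitted degradation coefficient shifts the whole emission time profile, which is why region-specific k values matter for long-term projections.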
Effects of crop management, soil type, and climate on N2O emissions from Austrian Soils
NASA Astrophysics Data System (ADS)
Zechmeister-Boltenstern, Sophie; Sigmund, Elisabeth; Kasper, Martina; Kitzler, Barbara; Haas, Edwin; Wandl, Michael; Strauss, Peter; Poetzelsberger, Elisabeth; Dersch, Georg; Winiwarter, Wilfried; Amon, Barbara
2015-04-01
Within the project FarmClim ("Farming for a better climate") we assessed recent N2O emissions from two selected regions in Austria. Our aim was to deepen the understanding of Austrian N2O fluxes with regard to region-specific properties. Currently, N2O emissions are estimated with the IPCC default emission factor, which considers only the amount of N input as an influencing factor for N2O emissions. We evaluated the IPCC default emission factor for its validity under spatially distinct environmental conditions. For this purpose, two regions were identified in this project for modeling with LandscapeDNDC. The benefit of using LandscapeDNDC is its detailed representation of microbial processes in the soil. Required input data to run the model included daily climate data, vegetation properties, soil characteristics and land management. The analysis of present agricultural practices was the basis for assessing the hot spots and hot moments of nitrogen emissions on a regional scale. During our work with LandscapeDNDC we were able to adapt specific model algorithms to Austrian agricultural conditions. The model revealed a strong dependency of N2O emissions on soil type, and we could estimate how strongly soil texture affects N2O emissions. Based on detailed soil maps with high spatial resolution, we calculated region-specific contributions to N2O emissions. Accordingly, we differentiated regions whose gas fluxes deviate from the predictions of the IPCC inventory methodology. Taking region-specific management practices into account (tillage, irrigation, residues), calculations over crop rotations (fallow, catch crop, winter wheat, barley, winter barley, sugar beet, corn, potato, onion and rapeseed) resulted in N2O emissions differing by a factor of 30 depending on the preceding crop and climate. A maximum of 2% of N fertilizer input was emitted as N2O. Residual N in the soil was a major factor stimulating N2O emissions.
Interannual variability was affected by varying N deposition even under constant management practices. The high temporal resolution of the model outputs enabled us to identify hot moments of N turnover and total N2O emissions associated with extreme weather events. We analysed how strongly these event-based emissions, which are not accounted for by classical inventories, affect emission factors. The evaluation of the IPCC default emission factor under spatially distinct environmental conditions revealed which environmental conditions are responsible for major deviations of actual emissions from the theoretical values. Scrutinizing these conditions can help to improve climate reporting and greenhouse gas mitigation measures.
Willcock, Simon; Phillips, Oliver L.; Platts, Philip J.; Balmford, Andrew; Burgess, Neil D.; Lovett, Jon C.; Ahrends, Antje; Bayliss, Julian; Doggart, Nike; Doody, Kathryn; Fanning, Eibleis; Green, Jonathan; Hall, Jaclyn; Howell, Kim L.; Marchant, Rob; Marshall, Andrew R.; Mbilinyi, Boniface; Munishi, Pantaleon K. T.; Owen, Nisha; Swetnam, Ruth D.; Topp-Jorgensen, Elmer J.; Lewis, Simon L.
2012-01-01
Monitoring landscape carbon storage is critical for supporting and validating climate change mitigation policies. These may be aimed at reducing deforestation and degradation, or increasing terrestrial carbon storage at local, regional and global levels. However, due to data-deficiencies, default global carbon storage values for given land cover types such as ‘lowland tropical forest’ are often used, termed ‘Tier 1 type’ analyses by the Intergovernmental Panel on Climate Change (IPCC). Such estimates may be erroneous when used at regional scales. Furthermore, uncertainty assessments are rarely provided, leading to estimates of land cover change carbon fluxes of unknown precision which may undermine efforts to properly evaluate land cover policies aimed at altering land cover dynamics. Here, we present a repeatable method to estimate carbon storage values and associated 95% confidence intervals (CI) for all five IPCC carbon pools (aboveground live carbon, litter, coarse woody debris, belowground live carbon and soil carbon) for data-deficient regions, using a combination of existing inventory data and systematic literature searches, weighted to ensure the final values are regionally specific. The method meets the IPCC ‘Tier 2’ reporting standard. We use this method to estimate carbon storage over an area of 33.9 million hectares of eastern Tanzania, reporting values for 30 land cover types. We estimate that this area stored 6.33 (5.92–6.74) Pg C in the year 2000. Carbon storage estimates for the same study area extracted from five published Africa-wide or global studies show a mean carbon storage value of ∼50% of that reported using our regional values, with four of the five studies reporting lower carbon storage values. This suggests that carbon storage may have been underestimated for this region of Africa. 
Our study demonstrates the importance of obtaining regionally appropriate carbon storage estimates, and shows how such values can be produced for a relatively low investment. PMID:23024764
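The per-pool aggregation described above can be sketched as follows. This is a minimal illustration, not the authors' method: the five pool values and CI half-widths below are hypothetical numbers chosen so the total matches the reported 6.33 Pg C, and the quadrature combination assumes independent, symmetric errors.

```python
import math

# Hypothetical per-pool carbon stocks (Pg C) with symmetric 95% CI
# half-widths, one entry per IPCC carbon pool. Values are illustrative.
pools = {
    "aboveground_live":    (3.10, 0.30),
    "litter":              (0.15, 0.03),
    "coarse_woody_debris": (0.20, 0.05),
    "belowground_live":    (0.90, 0.12),
    "soil":                (1.98, 0.20),
}

# Total stock is the sum of pool means; assuming independent errors, the
# CI half-widths combine in quadrature.
total_mean = sum(mean for mean, _ in pools.values())
total_hw = math.sqrt(sum(hw ** 2 for _, hw in pools.values()))
ci_low, ci_high = total_mean - total_hw, total_mean + total_hw
```

The quadrature step is the standard assumption for independent uncertainties; correlated pool errors would require the covariance terms as well.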
Noyola, A; Paredes, M G; Güereca, L P; Molina, L T; Zavala, M
2018-10-15
Wastewater treatment (WWT) may be an important source of methane (CH4), a greenhouse gas with significant global warming potential. Sources of CH4 emissions from WWT facilities can be found in the water and in the sludge process lines. Among the methodologies for estimating CH4 emission inventories from WWT, the most widely adopted are the guidelines of the Intergovernmental Panel on Climate Change (IPCC), which recommend default emission factors (Tier 1) depending on the WWT system. Recently published results show that well-managed treatment facilities may emit CH4, due to dissolved CH4 in the influent wastewater; in addition, biological nutrient removal will also produce this gas in the anaerobic (or anoxic) steps. However, none of these elements is considered in the current IPCC guidelines. The aim of this work is to propose modified (and new) methane correction factors (MCF) for the current Tier 1 IPCC guidelines for CH4 emissions from aerobic treatment systems, with and without anaerobic sludge digesters, focusing on intertropical countries. The modifications are supported by in situ assessments of fugitive CH4 emissions at two facilities in Mexico and by relevant literature data. In the case of a well-managed centralized aerobic treatment plant, an MCF of 0.06 (instead of the current 0.0) is proposed, considering that the assumption of a CH4-neutral treatment facility, as established in the IPCC methodology, is not supported. Similarly, an MCF of 0.08 is proposed for biological nutrient removal processes, as a new entry in the guidelines. Finally, a one-step straightforward calculation is proposed for centralized aerobic treatment plants with anaerobic digesters that avoids confusion when selecting the appropriate default MCF based on the Tier 1 IPCC guidelines. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
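The effect of the proposed MCF change can be seen in the standard IPCC (2006) Tier 1 domestic-wastewater form, CH4 = (TOW − S) × B0 × MCF − R. The sketch below is illustrative only: the organic load, sludge removal and recovery figures are hypothetical, not values from the paper.

```python
# Hedged sketch of the IPCC (2006) Tier 1 CH4 estimate for domestic
# wastewater, comparing the current default MCF for a well-managed
# centralized aerobic plant (0.0) with the study's proposed 0.06.
B0 = 0.6            # kg CH4 / kg BOD: IPCC default maximum CH4 producing capacity
TOW = 1_000_000.0   # kg BOD/yr organic load (illustrative)
S = 0.0             # kg BOD/yr removed as sludge (illustrative)
R = 0.0             # kg CH4/yr recovered (illustrative)

def ch4_emissions(mcf):
    """Tier 1 CH4 emissions (kg CH4/yr) for a given methane correction factor."""
    return (TOW - S) * B0 * mcf - R

current = ch4_emissions(0.0)    # current default: plant assumed CH4-neutral
proposed = ch4_emissions(0.06)  # with the MCF proposed in the study
```

With the default MCF of 0.0 the facility reports zero emissions regardless of load, which is exactly the assumption the study argues is unsupported.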
Soil nitrous oxide emissions after deposition of dairy cow excreta in eastern Canada.
Rochette, Philippe; Chantigny, Martin H; Ziadi, Noura; Angers, Denis A; Bélanger, Gilles; Charbonneau, Édith; Pellerin, Doris; Liang, Chang; Bertrand, Normand
2014-05-01
Urine and dung deposited by grazing dairy cows are a major source of nitrous oxide (N2O), a potent greenhouse gas that also contributes to stratospheric ozone depletion. In this study, we quantified the emissions of N2O after deposition of dairy cow excreta onto two grassland sites with contrasting soil types in eastern Canada. Our objectives were to determine the impact of excreta type, urine-N rate, time of the year, and soil type on annual N2O emissions. Emissions were monitored on sandy loam and clay soils after spring, summer, and fall urine (5 and 10 g N per patch) and dung (1.75 kg fresh weight dung) applications to perennial grasses in two successive years. The mean N2O emission factor (EF) for urine was 1.09% of applied N in the clay soil and 0.31% in the sandy loam soil, estimates much smaller than the Intergovernmental Panel on Climate Change (IPCC) default value for total excreta N (2%). Despite variations in urine composition and in climatic conditions, these soil-specific EFs were similar for the two urine-N application rates. The time of the year when urine was applied had no impact on emissions from the sandy loam soil, but greater EFs were observed after summer (1.59%) than spring (1.14%) and fall (0.55%) applications in the clay soil. The impact of dung deposition on N2O emission was smaller than that of urine, with a mean EF of 0.15% in the sandy loam soil and 0.08% in the clay soil. Our results suggest (i) that the IPCC default EF overestimates N2O emissions from grazing cattle excreta in eastern Canada by a factor of 4.3 and (ii) that a region-specific inventory methodology should account for soil type and should use specific EFs for urine and dung. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
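An excreta emission factor of the kind compared against the IPCC default is conventionally the control-corrected fraction of applied N lost as N2O-N. A minimal sketch, with illustrative flux numbers chosen so the result matches the reported clay-soil urine EF of 1.09%:

```python
# EF (%) = (cumulative N2O-N from treated patch - control) / N applied * 100.
# The emission totals below are hypothetical; only the 50 g N rate and the
# resulting 1.09% EF correspond to figures in the abstract.
n_applied = 50.0        # g N applied per urine patch
n2o_n_treated = 0.645   # g N2O-N emitted over the year, treated patch (illustrative)
n2o_n_control = 0.100   # g N2O-N emitted, unamended control (illustrative)

ef_percent = (n2o_n_treated - n2o_n_control) / n_applied * 100.0
```

The control subtraction matters: without it, background soil emissions would be attributed to the excreta and inflate the factor.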
Mou, Zishen; Scheutz, Charlotte; Kjeldsen, Peter
2015-06-01
Methane (CH₄) generated from low-organic waste degradation at four Danish landfills was estimated by three first-order decay (FOD) landfill gas (LFG) generation models (LandGEM, IPCC, and Afvalzorg). Actual waste data from Danish landfills were applied to fit the waste categories required by the IPCC and Afvalzorg models. In general, the single-phase model, LandGEM, significantly overestimated CH₄ generation, because its default values for the key parameters were too high for low-organic waste scenarios. The key parameters were the biochemical CH₄ potential (BMP) and the CH₄ generation rate constant (k-value). In comparison to the IPCC model, the Afvalzorg model was more suitable for estimating CH₄ generation at Danish landfills, because it defined more appropriate waste categories rather than traditional municipal solid waste (MSW) fractions. Moreover, the Afvalzorg model could better show the influence not only of the total disposed waste amount, but also of the various waste categories. By using laboratory-determined BMPs and k-values for shredder, sludge, mixed bulky waste, and street-cleaning waste, the Afvalzorg model was revised. The revised model estimated smaller cumulative CH₄ generation at the four Danish landfills (from the start of disposal until 2020 and until 2100). Through a CH₄ mass balance approach, fugitive CH₄ emissions from whole sites and from a specific cell for shredder waste were aggregated based on the revised Afvalzorg model outcomes. Aggregated results were in good agreement with field measurements, indicating that the revised Afvalzorg model can provide practical and accurate estimates of Danish LFG emissions. This study is valuable for both researchers and engineers aiming to predict, control, and mitigate fugitive CH₄ emissions from landfills receiving low-organic waste. Landfill operators use first-order decay (FOD) models to estimate methane (CH₄) generation. 
A single-phase model (LandGEM) and a traditional model (IPCC) can result in overestimation when handling a low-organic waste scenario. Site-specific data were important and capable of calibrating key parameter values in FOD models. The comparison of the revised Afvalzorg model outcomes with field measurements at four Danish landfills provided a guideline for revising the Pollutants Release and Transfer Registers (PRTR) model, as well as indicating noteworthy waste fractions that can emit CH₄ at modern landfills.
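The FOD form shared by LandGEM, the IPCC Waste Model and Afvalzorg can be sketched generically: waste deposited in year i contributes L0·k·e^(−k·(t−i)) per tonne in year t. The parameter values and deposit history below are illustrative, not the Danish site-specific calibrations.

```python
import math

# Generic single-phase first-order-decay CH4 generation. L0 is the methane
# generation potential (m3 CH4/tonne) and k the decay rate (1/yr); both are
# illustrative here, not laboratory-determined values from the study.
def fod_ch4(deposits, L0, k, year):
    """deposits: {deposit_year: tonnes}. Returns CH4 (m3) generated in `year`."""
    return sum(
        tonnes * L0 * k * math.exp(-k * (year - y))
        for y, tonnes in deposits.items()
        if y <= year
    )

deposits = {2000: 10_000.0, 2001: 12_000.0}   # hypothetical disposal history
gen_2005 = fod_ch4(deposits, L0=50.0, k=0.05, year=2005)
```

The abstract's point maps directly onto these two parameters: applying default (high-organic) L0 and k values to low-organic waste inflates every term of the sum, which is why the single-phase defaults overestimate generation.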
Verifying the UK agricultural N2O emission inventory with tall tower measurements
NASA Astrophysics Data System (ADS)
Carnell, E. J.; Meneguz, E.; Skiba, U. M.; Misselbrook, T. H.; Cardenas, L. M.; Arnold, T.; Manning, A.; Dragosits, U.
2016-12-01
Nitrous oxide (N2O) is a key greenhouse gas (GHG), with a global warming potential 300 times greater than that of CO2. N2O is emitted from a variety of sources, predominantly from agriculture. Annual UK emission estimates are reported, to comply with government commitments under the United Nations Framework Convention on Climate Change (UNFCCC). The UK N2O inventory follows internationally agreed protocols and emission estimates are derived by applying emission factors to estimates of (anthropogenic) emission sources. This approach is useful for comparing anthropogenic emissions from different countries, but does not capture regional differences and inter-annual variability associated with environmental factors (such as climate and soils) and agricultural management. In recent years, the UK inventory approach has been refined to include regional information into its emissions estimates, in an attempt to reduce uncertainty. This study attempts to assess the difference between current published inventory methodology (default IPCC methodology) and an alternative approach, which incorporates the latest thinking, using data from recent work. For 2013, emission estimates made using the alternative approach were 30 % lower than those made using default IPCC methodology, due to the use of lower emission factors suggested by recent projects (Defra projects: AC0116, AC0213 and MinNO). The 2013 emissions estimates were disaggregated on a monthly basis using agricultural management (e.g. sowing dates), climate data and soil properties. The temporally disaggregated emission maps were used as input to the Met Office atmospheric dispersion model NAME, for comparison with measured N2O concentrations, at three observation stations (Tacolneston, E. England; Ridge Hill, W. England; Mace Head, W. Ireland) in the UK DECC network (Deriving Emissions linked to Climate Change). The Mace Head site, situated on the west coast of Ireland, was used to establish baseline concentrations. 
The trends in the modelled data were found to correspond with the observational data trends, with concentration peaks coinciding with periods of land spreading of manures and fertiliser application. The model run using the default IPCC methodology was found to correspond with the observed data more closely than the alternative approach.
NASA Astrophysics Data System (ADS)
Johansson, A. E.; Kasimir Klemedtsson, Å.; Klemedtsson, L.; Svensson, B. H.
2003-07-01
Static chamber measurements of N2O fluxes were taken during the 1998 and 1999 growth seasons in a Swedish constructed wetland receiving wastewater. The dominant plant species in different parts of the wetland were Lemna minor L., Typha latifolia L., Spirogyra sp., Glyceria maxima (Hartm.) and Phalaris arundinacea (L.), respectively. There were large temporal and spatial variations in N2O fluxes, which ranged from consumption at -350 to emissions at 1791 μg N2O m-2 h-1. The largest positive flux occurred in October 1999 and the lowest in the middle of July 1999. The average N2O flux for the two years was 130 μg N2O m-2 h-1 (SD = 220). No significant differences in N2O fluxes were found between the years, even though the two growing seasons differed considerably with respect to both air temperature and precipitation. Fifteen percent of the fluxes were negative, showing a consumption of N2O. Consumption occurred on a few occasions at most measurement sites and ranged from 1 to 350 μg N2O m-2 h-1. Between 13% and 43% of the variation in N2O fluxes was explained by multiple linear regression analysis including principal components. Emission factors were calculated according to IPCC methods from the N2O fluxes in the constructed wetland. The calculated emission factors (0.02-0.27%) were always lower than the default factor provided by the IPCC (0.75%). Thus, direct application of the IPCC default factor may lead to overestimation of N2O fluxes from constructed wastewater-treating wetlands.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeong, Sangjae; Nam, Anwoo; Yi, Seung-Muk
Highlights: • CH4/CO2 and CH4 + CO2% are proposed as indices to evaluate semi-aerobic landfills. • A landfill with CH4/CO2 > 1.0 is difficult to categorize as a semi-aerobic landfill. • Field conditions should be carefully investigated to determine landfill types. • The MCF default value for semi-aerobic landfills underestimates methane emissions. - Abstract: According to IPCC guidelines, a semi-aerobic landfill site produces one-half of the amount of CH4 produced by an equally-sized anaerobic landfill site. Categorizing the landfill type is therefore important for greenhouse gas inventories. In order to assess semi-aerobic conditions at the sites and the MCF value for semi-aerobic landfills, landfill gas was measured from vent pipes at five semi-aerobically designed landfills in South Korea. All five sites satisfied the requirements for semi-aerobic landfills in the 2006 IPCC guidelines. However, the ends of the leachate collection pipes, which are the main air entry points in a semi-aerobic landfill, were closed at all five sites. The CH4/CO2 ratio in landfill gas, an indicator of aerobic versus anaerobic decomposition, ranged from 1.08 to 1.46, which is higher than the values (0.3–1.0) reported for semi-aerobic landfill sites and is closer to those (1.0–2.0) for anaerobic landfill sites. The low CH4 + CO2% in landfill gas implied air intrusion into the landfill. However, there was no evidence that the air intrusion was caused by semi-aerobic design and operation. Therefore, the landfills investigated in this study are difficult to classify as semi-aerobic landfills. Also, an MCF of 0.5 may significantly underestimate methane emissions compared with other studies. According to the carbon mass balance analyses, a higher MCF should be proposed for semi-aerobic landfills. Consequently, methane emission estimates for semi-aerobically designed landfills should be based on field evaluation.
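The CH4/CO2 diagnostic in the highlights can be sketched as a simple ratio check. The threshold ranges mirror those quoted in the abstract (~0.3-1.0 semi-aerobic, ~1.0-2.0 anaerobic); the wrapper function itself is an illustrative construction, not a method from the paper.

```python
# Classify landfill-gas composition by the CH4/CO2 ratio, the indicator of
# aerobic vs. anaerobic decomposition used in the study. Thresholds follow
# the ranges quoted in the abstract; inputs are volume percentages.
def classify_landfill(ch4_pct, co2_pct):
    ratio = ch4_pct / co2_pct
    if ratio <= 1.0:
        return ratio, "consistent with semi-aerobic decomposition"
    return ratio, "closer to anaerobic decomposition"

# Illustrative composition in the range the study measured (ratio 1.08-1.46).
ratio, label = classify_landfill(ch4_pct=38.0, co2_pct=30.0)
```

A low CH4 + CO2 sum (the second proposed index) would separately flag air intrusion, since dilution by N2/O2 lowers both combustion-gas fractions.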
2012-01-01
Background The default international accounting rules estimate the carbon emissions from forest products by assuming all harvest is immediately emitted to the atmosphere. This makes it difficult to assess the greenhouse gas (GHG) consequences of different forest management or manufacturing activities that maintain the storage of carbon. The Intergovernmental Panel on Climate Change (IPCC) addresses this issue by allowing other accounting methods. The objective of this paper is to provide a new model for estimating annual stock changes of carbon in harvested wood products (HWP). Results The model, British Columbia Harvested Wood Products version 1 (BC-HWPv1), estimates carbon stocks and fluxes for wood harvested in BC from 1965 to 2065, based on new parameters on local manufacturing, updated and new information for North America on consumption and disposal of wood and paper products, and updated parameters on methane management at landfills in the USA. Based on model results, reporting on emissions as they occur would substantially lower BC’s greenhouse gas inventory in 2010 from 48 Mt CO2 to 26 Mt CO2 because of the long-term forest carbon storage in use and in the non-degradable material in landfills. In addition, if offset projects created under BC’s protocol reported 100-year cumulative emissions using BC-HWPv1, the emissions would be lower by about 11%. Conclusions This research showed that the IPCC default methods overestimate the emissions from North American wood products. Future IPCC GHG accounting methods could include a lower emission factor (e.g. 0.52) multiplied by the annual harvest, rather than the current multiplier of 1.0. The simulations demonstrated that the primary opportunities for climate change mitigation are in shifting from burning mill waste to using the wood for longer-lived products. PMID:22828161
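The simplified accounting suggested in the conclusion reduces to one multiplication. A hedged sketch, with a hypothetical harvest figure (the 0.52 factor is from the abstract; the harvest value is not):

```python
# Compare the default HWP accounting (all harvested carbon emitted
# immediately, multiplier 1.0) with the lower emission factor the paper
# suggests (e.g. 0.52) applied to annual harvest. Harvest is illustrative.
annual_harvest_mt_co2 = 48.0   # Mt CO2 in harvested wood (hypothetical)

ipcc_default_emissions = 1.0 * annual_harvest_mt_co2
proposed_emissions = 0.52 * annual_harvest_mt_co2
```

The gap between the two numbers is the carbon the default method treats as emitted but which, per the model, remains stored in products and landfills.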
Translating landfill methane generation parameters among first-order decay models.
Krause, Max J; Chickering, Giles W; Townsend, Timothy G
2016-11-01
Landfill gas (LFG) generation is predicted by a first-order decay (FOD) equation that incorporates two parameters: a methane generation potential (L0) and a methane generation rate (k). Because non-hazardous waste landfills may accept many types of waste streams, multiphase models have been developed in an attempt to more accurately predict methane generation from heterogeneous waste streams. The ability of a single-phase FOD model to predict methane generation using weighted-average methane generation parameters and tonnages translated from multiphase models was assessed in two exercises. In the first exercise, waste composition from four Danish landfills represented by low-biodegradable waste streams was modeled in the Afvalzorg Multiphase Model and methane generation was compared to the single-phase Intergovernmental Panel on Climate Change (IPCC) Waste Model and LandGEM. In the second exercise, waste composition represented by IPCC waste components was modeled in the multiphase IPCC model and compared to single-phase LandGEM and Australia's Solid Waste Calculator (SWC). In both cases, weighted averaging of methane generation parameters from waste composition data in single-phase models was effective in predicting cumulative methane generation to within -7% to +6% of the multiphase models. The results underscore the understanding that multiphase models will not necessarily improve LFG generation prediction, because the uncertainty of the method rests largely within the input parameters. A unique method of calculating the methane generation rate constant by mass of anaerobically degradable carbon (kc) was presented and compared to existing methods, providing a better fit in 3 of 8 scenarios. Generally, single-phase models with weighted-average inputs can accurately predict methane generation from multiple waste streams with varied characteristics; weighted averages should therefore be used instead of regional default values when comparing models. 
Translating multiphase first-order decay model input parameters by weighted average shows that single-phase models can predict cumulative methane generation within the level of uncertainty of many of the input parameters as defined by the Intergovernmental Panel on Climate Change (IPCC), which indicates that decreasing the uncertainty of the input parameters will make the model more accurate rather than adding multiple phases or input parameters.
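The parameter translation the paper tests is a tonnage-weighted average of each stream's L0 and k. A minimal sketch; the three waste streams and their parameter values are illustrative, not the study's inputs.

```python
# Collapse multiphase FOD inputs to single-phase parameters by weighting
# each stream's L0 (m3 CH4/tonne) and k (1/yr) by its tonnage.
streams = [
    # (tonnes,   L0,   k)  -- all values hypothetical
    (40_000.0, 70.0, 0.09),   # e.g. mixed MSW-like stream
    (25_000.0, 20.0, 0.05),   # e.g. sludge-like stream
    (35_000.0,  5.0, 0.02),   # e.g. low-organic bulky waste
]

total_tonnes = sum(t for t, _, _ in streams)
L0_avg = sum(t * L0 for t, L0, _ in streams) / total_tonnes
k_avg = sum(t * k for t, _, k in streams) / total_tonnes
```

Feeding (total_tonnes, L0_avg, k_avg) to a single-phase FOD model is the translation whose cumulative-generation error the paper reports as -7% to +6% against the multiphase runs.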
Yu, Zhongjie; Deng, Huanguang; Wang, Dongqi; Ye, Mingwu; Tan, Yongjie; Li, Yangjie; Chen, Zhenlou; Xu, Shiyuan
2013-10-01
Global nitrogen (N) enrichment has resulted in increased nitrous oxide (N2O) emission that greatly contributes to climate change and stratospheric ozone destruction, but little is known about the N2O emissions from urban river networks receiving anthropogenic N inputs. We examined N2O saturation and emission in the Shanghai city river network, covering 6300 km2, over 27 months. The overall mean saturation and emission from 87 locations were 770% and 1.91 mg N2O-N m-2 d-1, respectively. N2O saturation did not exhibit a clear seasonality, but the temporal pattern was co-regulated by both water temperature and N loadings. Rivers draining through urban and suburban areas receiving more sewage N inputs had higher N2O saturation and emission than those in rural areas. Regression analysis indicated that water ammonium (NH4+) and dissolved oxygen (DO) levels exerted great control on N2O production and were better predictors of N2O emission in the urban watershed. About 0.29 Gg N2O-N yr-1 was emitted from the Shanghai river network annually, which was about 131% of the IPCC's prediction using default emission values. Given the rapid progress of global urbanization, more study efforts, particularly on nitrification and its N2O yield, are needed to better quantify the role of urban rivers in global riverine N2O emission. © 2013 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Etminan, M.; Myhre, G.; Highwood, E. J.; Shine, K. P.
2016-12-01
New calculations of the radiative forcing (RF) are presented for the three main well-mixed greenhouse gases, methane, nitrous oxide, and carbon dioxide. Methane's RF is particularly impacted because of the inclusion of the shortwave forcing; the 1750-2011 RF is about 25% higher (increasing from 0.48 W m-2 to 0.61 W m-2) compared to the value in the Intergovernmental Panel on Climate Change (IPCC) 2013 assessment; the 100 year global warming potential is 14% higher than the IPCC value. We present new simplified expressions to calculate RF. Unlike previous expressions used by IPCC, the new ones include the overlap between CO2 and N2O; for N2O forcing, the CO2 overlap can be as important as the CH4 overlap. The 1750-2011 CO2 RF is within 1% of IPCC's value but is about 10% higher when CO2 amounts reach 2000 ppm, a value projected to be possible under the extended RCP8.5 scenario.
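For context, the pre-existing simplified expression for CO2 forcing that such updates replace has the logarithmic form RF = α·ln(C/C0) with α = 5.35 W m-2 (Myhre et al., 1998, as used in earlier IPCC reports). The sketch below shows only that functional form; it does not reproduce the paper's new expressions, which add terms for the CO2-N2O overlap and fit a wider concentration range.

```python
import math

# Earlier IPCC simplified expression for CO2 radiative forcing:
# RF = 5.35 * ln(C / C0) W m-2, with C0 the pre-industrial concentration.
def rf_co2_simple(c_ppm, c0_ppm=278.0):
    return 5.35 * math.log(c_ppm / c0_ppm)

rf_2011 = rf_co2_simple(390.0)  # roughly the 2011 CO2 concentration in ppm
```

The abstract's point is that this single-gas logarithm drifts at high concentrations (about 10% low near 2000 ppm) and ignores band overlap with N2O, which the new expressions correct.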
Emission of greenhouse gases from waste incineration in Korea.
Hwang, Kum-Lok; Choi, Sang-Min; Kim, Moon-Kyung; Heo, Jong-Bae; Zoh, Kyung-Duk
2017-07-01
Greenhouse gas (GHG) emission factors previously reported from various waste incineration plants have shown significant variations according to country-specific, plant-specific, and operational conditions. The purpose of this study is to estimate GHG emissions and emission factors at nine incineration facilities in Korea by measuring the GHG concentrations in flue gas samples. The selected incineration plants had different operation systems (i.e., stoker, fluidized bed, moving grate, rotary kiln, and kiln & stoker) and different nitrogen oxide (NOx) removal systems (i.e., selective catalytic reduction (SCR) and selective non-catalytic reduction (SNCR)) to treat municipal solid waste (MSW), commercial solid waste (CSW), and specified waste (SW). The total mean emission factors for the A and B facilities for MSW incineration were found to be 134 ± 17 kg CO2 ton-1, 88 ± 36 g CH4 ton-1, and 69 ± 16 g N2O ton-1, while those for CSW incineration were 22.56 g CH4 ton-1 and 259.76 g N2O ton-1, and for SW incineration the emission factors were 2959 kg CO2 ton-1, 43.44 g CH4 ton-1 and 401.21 g N2O ton-1, respectively. Total emissions calculated using annual incineration amounts for MSW were 3587 ton CO2-eq yr-1 for the A facility and 11,082 ton CO2-eq yr-1 for the B facility, while those based on IPCC default values were 13,167 ton CO2-eq yr-1 for the A facility and 32,916 ton CO2-eq yr-1 for the B facility, indicating that the emissions based on IPCC default values were estimated higher than those based on the plant-specific emission factors. The emissions of CSW for the C facility were 1403 ton CO2-eq yr-1, while those of SW for the D to I facilities were 28,830 ton CO2-eq yr-1. The sensitivity analysis using a Monte Carlo simulation for GHG emission factors in MSW showed that the GHG concentrations have a greater impact than the incineration amount and the flow rate of flue gas. 
For MSW incineration plants using the same stoker type in operation, the estimated emissions and emission factors of CH4 showed the opposite trend to those of N2O when the NOx removal system was used, whereas there was no difference in CO2 emissions. Copyright © 2017 Elsevier Ltd. All rights reserved.
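The Monte Carlo sensitivity idea can be sketched as follows: sample the measured inputs (GHG concentration, flue-gas flow, incinerated mass) from assumed distributions and propagate them to an emission factor. All distributions and parameter values below are illustrative assumptions, not the study's data.

```python
import random

# Propagate input uncertainty to a plant emission factor (kg CO2 per tonne
# of waste). Distributions are hypothetical normal distributions; the point
# is the propagation pattern, not the specific numbers.
random.seed(42)

def sample_emission_factor():
    conc = random.gauss(120.0, 15.0)        # mg CO2 per m3 flue gas
    flow = random.gauss(50_000.0, 2_000.0)  # m3 flue gas per hour
    waste = random.gauss(10.0, 0.5)         # tonnes incinerated per hour
    return conc * flow / (waste * 1e6)      # kg CO2 per tonne waste

samples = [sample_emission_factor() for _ in range(10_000)]
mean_ef = sum(samples) / len(samples)
```

With these (assumed) spreads, the concentration term carries the largest relative uncertainty, which matches the abstract's finding that GHG concentrations dominate the sensitivity over incineration amount and flue-gas flow rate.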
NASA Astrophysics Data System (ADS)
Spencer, S.; Ogle, S. M.; Wirth, T. C.; Sivakami, G.
2016-12-01
The Intergovernmental Panel on Climate Change (IPCC) provides methods and guidance for estimating anthropogenic greenhouse gas emissions for reporting to the United Nations Framework Convention on Climate Change. The methods are comprehensive and require extensive data compilation, management, aggregation, documentation and calculations of source and sink categories to achieve robust emissions estimates. IPCC Guidelines describe three estimation tiers that require increasing levels of country-specific data and method complexity. Use of higher tiers should improve overall accuracy and reduce uncertainty in estimates. The AFOLU sector represents a complex set of methods for estimating greenhouse gas emissions and carbon sinks. Major AFOLU emissions and sinks include carbon dioxide (CO2) from carbon stock change in biomass, dead organic matter and soils, urea or lime application to soils, and oxidation of carbon in drained organic soils; nitrous oxide (N2O) and methane (CH4) emissions from livestock management and biomass burning; N2O from organic amendments and fertilizer application to soils; and CH4 emissions from rice cultivation. To assist inventory compilers with calculating AFOLU-sector estimates, the Agriculture and Land Use Greenhouse Gas Inventory Tool (ALU) was designed to implement Tier 1 and 2 methods using IPCC Good Practice Guidance. It guides the compiler through activity data entry, emission factor assignment, and emissions calculations while carefully maintaining data integrity. ALU also provides IPCC defaults and can estimate uncertainty. ALU was designed to simplify the AFOLU inventory compilation process at regional or national scales; disaggregating the process into a series of steps reduces the potential for errors in the compilation process. An example application has been developed using ALU to estimate methane emissions from rice production in the United States.
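The Tier 1/Tier 2 pattern such a tool implements is, at its core, emissions = activity data × emission factor, summed over source categories. A minimal sketch; the categories and factor values are illustrative placeholders, not actual IPCC defaults.

```python
# Tier-1-style AFOLU aggregation: pair each activity datum with an
# emission factor and sum. All numbers are hypothetical.
activity = {               # activity data (head of cattle, ha of rice)
    "dairy_cattle": 1_000.0,
    "rice_ha": 500.0,
}
emission_factor = {        # kg CH4 per unit activity per year (illustrative)
    "dairy_cattle": 120.0,
    "rice_ha": 100.0,
}

total_ch4_kg = sum(activity[c] * emission_factor[c] for c in activity)
```

Moving to Tier 2 keeps this structure but replaces the default factors with country-specific ones, which is why a tool that manages the pairing and aggregation steps reduces compilation errors.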
Trace gas emissions following deposition of excreta by grazing dairy cows in eastern Canada
NASA Astrophysics Data System (ADS)
Rochette, P.; Pelster, D. E.; Chantigny, M. H.; Angers, D. A.; Liang, C.; Belanger, G.; Ziadi, N.; Charbonneau, E.; Pellerin, D.
2012-04-01
The N2O emission factor proposed for cattle excreta N by the Tier I IPCC methodology (EF3) is 2% (IPCC, 2006). While N2O emissions from excreta deposited by grazing animals have been reported in several publications, relatively few estimated EF3 values because measurements did not cover the entire year. This study measured N2O and CH4 flux and crop dry matter (DM) yield over two years (2009 to 2011) from a clay and a sandy loam soil cultivated with Timothy grass (Phleum pratense L.). A split-plot design was used on each soil type, with different application dates (either spring, summer or autumn application) as main plots and treatment (U-50: urine 50 g N m-2, U-100: urine 100 g N m-2, dung: 60 g N m-2, and control) as the sub-plots. Regardless of application time, annual DM yield increased in all treated plots when compared to the control. Also, DM yields were generally greater when urine as opposed to dung was applied suggesting greater N-availability from the urine application. The CH4 flux from the dung plots increased for only the first two weeks after treatment while the flux from the urine plots was similar to the control plots. Cumulative N2O emissions on the U-50 and U-100 plots increased linearly with urine N rate on both soils, resulting in nearly identical mean emission factors for both urine rates. The emission factor for the urine was three times greater on the clay (1.02% of applied N on both rates) than on the sandy loam soil (0.26% (U100) and 0.31% (U50) of applied N). Cumulative N2O emissions from dung plots also differed between soil types; however the impact of soil type on N2O emissions was opposite to that of urine, with greater losses from the sandy loam (0.15%) compared with the clay soil (0.07%). 
These results suggest that estimates of soil N2O emissions from grazing cattle in Eastern Canada obtained using the IPCC default methodology overestimate actual values, and that these estimates should include a stratification according to soil type.
Green Infrastructure Tool | EPA Center for Exposure ...
2016-03-07
- Units option added: SI or US units; the default option is US units.
- Additional options added to FTABLE, such as clear FTABLE.
- Significant digits for FTABLE calculations changed to 5.
- Previously a default Cd value was used for calculations (under-drain and riser); now a user-defined value option is given.
- Conversion options added wherever necessary.
- Default values of suction head and hydraulic conductivity are changed based on the units selected in the infiltration panel.
- Default value of Cd for the riser orifice and under-drain textboxes changed to 0.6.
- Previously a default increment value of 0.1 was used for all channel panels; now the user can specify the increment.
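The Cd default mentioned above is the discharge coefficient in the standard orifice relation Q = Cd·A·√(2gh). The sketch below shows that relation with the tool's new default Cd = 0.6; the orifice geometry and head are illustrative, and the US-unit convention (feet, cfs) follows the tool's default units.

```python
import math

# Orifice flow Q = Cd * A * sqrt(2 * g * h), in US units (ft, ft/s^2, cfs),
# using the changelog's new default discharge coefficient Cd = 0.6.
def orifice_flow(cd, diameter_ft, head_ft, g=32.2):
    area = math.pi * (diameter_ft / 2.0) ** 2   # orifice area, ft^2
    return cd * area * math.sqrt(2.0 * g * head_ft)

# Hypothetical 6-inch riser orifice under 2 ft of head.
q_cfs = orifice_flow(cd=0.6, diameter_ft=0.5, head_ft=2.0)
```

Making Cd user-defined (per the changelog) lets modelers substitute a calibrated coefficient for sharp-edged versus rounded orifices instead of the 0.6 default.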
NASA Astrophysics Data System (ADS)
Wu, S.; Zou, J.; Liu, S.; Chen, J.; Kong, D.; Geng, Y.
2017-12-01
Agricultural irrigation watersheds cover a large area in southeast China and are a potentially important source of carbon dioxide (CO2), methane (CH4) and nitrous oxide (N2O). However, data on the magnitudes of these fluxes, their contribution to overall catchment greenhouse gas (GHG) fluxes, and the drivers of their seasonal variability are limited for agricultural irrigation watersheds. In-situ observations were performed to measure annual CO2, CH4 and N2O fluxes from an agricultural irrigation watershed in southeast China from September 2014 to September 2016. GHG fluxes were measured using floating chambers, and a gas exchange model was also used to predict CH4 and N2O fluxes. Fluxes of all GHGs varied seasonally, with the highest fluxes in early summer (July) and the lowest in winter. Estimated seasonal CH4-C fluxes (11.5-97.6 mg m-2 hr-1) and N2O-N fluxes (2.8-80.8 μg m-2 hr-1) were in relative agreement with the CH4-C fluxes (0.05-74.9 mg m-2 hr-1) and N2O-N fluxes (3.9-68.7 μg m-2 hr-1) measured using floating chambers. Both CH4 and N2O fluxes were positively related to water temperature. The CH4 fluxes were negatively related to water dissolved oxygen (DO) concentration but positively related to sediment dissolved organic carbon (DOC). The N2O fluxes were positively related to water NH4+ and NO3- concentrations. The calculated EF5-r value in this study (mean = 0.0016; range = 0.0013-0.0018) was below the current IPCC (2006) default value of 0.0025. This implies that the IPCC methodology may overestimate N2O emissions associated with nitrogen leaching and runoff from agriculture.
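The EF5-r comparison reduces to a simple ratio of dissolved N2O-N to NO3-N in the receiving water. A hedged sketch: the concentrations below are illustrative, chosen so the ratio lands on the study's mean of 0.0016.

```python
# EF5-r is the emission factor for N2O from agricultural N in rivers,
# here sketched as the ratio of dissolved N2O-N to NO3-N concentration.
# Concentrations are hypothetical; only the resulting 0.0016 and the
# 0.0025 default correspond to figures in the abstract.
n2o_n_mg_l = 0.0008   # dissolved N2O-N, mg/L (illustrative)
no3_n_mg_l = 0.5      # NO3-N, mg/L (illustrative)

ef5r = n2o_n_mg_l / no3_n_mg_l
below_ipcc_default = ef5r < 0.0025
```

Since the inventory multiplies leached/runoff N by EF5-r, a default of 0.0025 applied where the true ratio is 0.0016 overstates riverine N2O by roughly half again.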
What is the value of biomass remote sensing data for blue carbon inventories?
NASA Astrophysics Data System (ADS)
Byrd, K. B.; Simard, M.; Crooks, S.; Windham-Myers, L.
2015-12-01
The U.S. is testing approaches for accounting for carbon emissions and removals associated with wetland management according to 2013 IPCC Wetlands Supplement guidelines. Quality of reporting is measured from low (Tier 1) to high (Tier 3) depending upon data availability and analytical capacity. The use of satellite remote sensing data to derive carbon stocks and flux provides a practical approach for moving beyond IPCC Tier 1, the global default factor approach, to support Tier 2 or Tier 3 quantification of carbon emissions or removals. We are determining the "price of precision," or the extent to which improved satellite data will continue to increase the accuracy of "blue carbon" accounting. Tidal marsh biomass values are needed to quantify aboveground carbon stocks and stock changes, and to run process-based models of carbon accumulation. Maps of tidal marsh biomass have been produced from high resolution commercial and moderate resolution Landsat satellite data with relatively low error [percent normalized RMSE (%RMSE) from 7 to 14%]. Recently for a brackish marsh in Suisun Bay, California, we used Landsat 8 data to produce a biomass map that applied the Wide Dynamic Range Vegetation Index (WDRVI) (ρNIR*0.2 - ρR)/(ρNIR*0.2+ρR) to fully vegetated pixels and the Simple Ratio index (ρRed/ρGreen) to pixels with a mix of vegetation and water. Overall RMSE was 208 g/m2, while %RMSE = 13.7%. Also, preliminary use of airborne and spaceborne RADAR data in coastal Louisiana produced a marsh biomass map with 30% error. The integration of RADAR and LiDAR with optical remote sensing data has the potential to further reduce error in biomass estimation. In 2017, nations will report back to the U.N. Framework Convention on Climate Change on their experience in applying the Wetlands Supplement guidelines. These remote sensing efforts will mark an important step toward quantifying human impacts to wetlands within the global carbon cycle.
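The two band indices named in the abstract are straightforward reflectance ratios. A minimal sketch using illustrative reflectance values (the a = 0.2 weighting and the index definitions follow the abstract; the pixel values are hypothetical):

```python
# WDRVI with weighting a = 0.2, applied to fully vegetated pixels, and the
# green/red Simple Ratio used for mixed vegetation-water pixels, per the
# abstract. Inputs are surface reflectances in [0, 1]; values illustrative.
def wdrvi(nir, red, a=0.2):
    return (a * nir - red) / (a * nir + red)

def simple_ratio(red, green):
    return red / green

veg_index = wdrvi(nir=0.45, red=0.05)          # dense marsh canopy (hypothetical)
mixed_index = simple_ratio(red=0.06, green=0.05)  # mixed pixel (hypothetical)
```

The a·ρNIR weighting is what keeps WDRVI sensitive at high biomass, where the unweighted NDVI saturates; a per-pixel rule then selects which index feeds the biomass regression.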
Variable carbon losses from recurrent fires in drained tropical peatlands.
Konecny, Kristina; Ballhorn, Uwe; Navratil, Peter; Jubanski, Juilson; Page, Susan E; Tansey, Kevin; Hooijer, Aljosja; Vernimmen, Ronald; Siegert, Florian
2016-04-01
Tropical peatland fires play a significant role in the context of global warming through emissions of substantial amounts of greenhouse gases. However, the state of knowledge on carbon loss from these fires is still poorly developed, with few studies reporting the associated mass of peat consumed. Furthermore, spatial and temporal variations in burn depth have not been previously quantified. This study presents the first spatially explicit investigation of fire-driven tropical peat loss and its variability. An extensive airborne Light Detection and Ranging data set was used to develop a prefire peat surface modelling methodology, enabling the spatially differentiated quantification of burned area depth over the entire burned area. We observe a strong interdependence between burned area depth, fire frequency and distance to drainage canals. For the first time, we show that relative burned area depth decreases over the first four fire events and is constant thereafter. Based on our results, we revise existing peat and carbon loss estimates for recurrent fires in drained tropical peatlands. We suggest values for the dry mass of peat fuel consumed that are 206 t ha-1 for initial fires, reducing to 115 t ha-1 for second, 69 t ha-1 for third and 23 t ha-1 for successive fires, which are 58-7% of the current IPCC Tier 1 default value for all fires. In our study area, this results in carbon losses of 114, 64, 38 and 13 t C ha-1 for first to fourth fires, respectively. Furthermore, we show that with increasing proximity to drainage canals both burned area depth and the probability of recurrent fires increase and present equations explaining burned area depth as a function of distance to drainage canal. This improved knowledge enables a more accurate approach to emissions accounting and will support IPCC Tier 2 reporting of fire emissions. © 2015 John Wiley & Sons Ltd.
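The carbon-loss figures above follow from multiplying the dry peat mass consumed by a carbon fraction; a back-of-envelope sketch, where the carbon fraction (~0.55) is an assumption back-calculated from the reported pairs (e.g. 206 t ha-1 of peat to 114 t C ha-1), not a value stated in the abstract:

```python
# Per-hectare carbon loss implied by the abstract:
#   C loss = dry peat mass consumed * carbon fraction of peat
PEAT_CARBON_FRACTION = 0.55  # assumed, t C per t dry peat

def carbon_loss_t_per_ha(dry_mass_consumed_t_per_ha: float) -> float:
    """Approximate carbon loss (t C/ha) from dry peat fuel consumed (t/ha)."""
    return dry_mass_consumed_t_per_ha * PEAT_CARBON_FRACTION

# Fuel-consumption values suggested in the abstract, by fire sequence
for fire, mass in [("1st", 206), ("2nd", 115), ("3rd", 69), ("4th+", 23)]:
    loss = carbon_loss_t_per_ha(mass)  # close to the reported 114/64/38/13
```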
Gray Infrastructure Tool | EPA Center for Exposure ...
2016-03-07
Natural channel with flood plain panel added. Default depth increment of 0.5 is used for Natural Channel with FP. Units option added (SI or US units); default option is US units. Conversion options added wherever necessary. Additional options added to FTABLE, such as clear FTABLE. Significant digits for FTABLE calculations changed to 4. Previously a default Cd value was used for calculations (under-drain and riser); now a user-defined value is used. Default value of Cd for the riser orifice and under-drain textboxes changed to 0.6. Previously a default increment value of 0.1 was used for all the channel panels; now the user can specify the increment.
Hiyama, Kyosuke
2015-01-01
Applying data mining techniques on a database of BIM models could provide valuable insights in key design patterns implicitly present in these BIM models. The architectural designer would then be able to use previous data from existing building projects as default values in building performance simulation software for the early phases of building design. The author has proposed the method to minimize the magnitude of the variation in these default values in subsequent design stages. This approach maintains the accuracy of the simulation results in the initial stages of building design. In this study, a more convincing argument is presented to demonstrate the significance of the new method. The variation in the ideal default values for different building design conditions is assessed first. Next, the influence of each condition on these variations is investigated. The space depth is found to have a large impact on the ideal default value of the window to wall ratio. In addition, the presence or absence of lighting control and natural ventilation has a significant influence on the ideal default value. These effects can be used to identify the types of building conditions that should be considered to determine the ideal default values.
A tool to evaluate local biophysical effects on temperature due to land cover change transitions
NASA Astrophysics Data System (ADS)
Perugini, Lucia; Caporaso, Luca; Duveiller, Gregory; Cescatti, Alessandro; Abad-Viñas, Raul; Grassi, Giacomo; Quesada, Benjamin
2017-04-01
Land Cover Changes (LCC) affect local, regional and global climate through biophysical variations of the surface energy budget mediated by albedo, evapotranspiration, and roughness. Assessments of the full climate impacts of anthropogenic LCC are incomplete without considering biophysical effects, but the high level of uncertainty in quantifying their impacts to date has made it impractical to offer clear advice on which policy makers could act. To overcome this barrier, we provide a tool to evaluate the biophysical impact of a matrix of land cover transitions, following a tiered methodological approach similar to the one provided by the IPCC to estimate the biogeochemical effects, i.e. through three levels of methodological complexity, from Tier 1 (i.e. default method and factors) to Tier 3 (i.e. specific methods and factors). In particular, the tool provides guidance for quantitative assessment of changes in temperature following a land cover transition. The tool focuses on temperature for two main reasons: (i) it is the main variable of interest for policy makers at local and regional level, and (ii) temperature is able to summarize the impact of radiative and non-radiative processes following LCC. The potential changes in annual air temperature that can be expected from various land cover transitions are derived from a dedicated dataset constructed by the JRC in the framework of the LUC4C FP7 project. The inputs for the dataset are air temperature values derived from satellite Earth Observation data (MODIS) and land cover characterization from the ESA Climate Change Initiative product reclassified into their IPCC land use category equivalent. These data, originally at 0.05 degree spatial resolution, are aggregated and analysed at regional level to provide guidance on the expected temperature impact following specific LCC transitions.
NASA Astrophysics Data System (ADS)
Windham-Myers, L.; Holmquist, J. R.; Woo, I.; Bergamaschi, B. A.; Byrd, K. B.; Crooks, S.; Drexler, J. Z.; Feagin, R. A.; Ferner, M. C.; Gonneea, M. E.; Kroeger, K. D.; Megonigal, P.; Morris, J. T.; Schile, L. M.; Simard, M.; Sutton-Grier, A.; Takekawa, J.; Troxler, T.; Weller, D.; Callaway, J.; Herold, N.
2016-12-01
In Year 2, the NASA Blue Carbon Monitoring Systems group leveraged USDA, USFWS and NOAA datasets, extensive field datasets, and targeted remote-sensing products to address basic questions regarding the size of carbon (C) stocks, and the directions and magnitudes of C fluxes in the US coastal zone since 1996. We review the uncertainty associated with 5 major terms in our Land Use-Land Cover Change (LULCC)-based accounting, both nationally and within sentinel sites (Cape Cod, Chesapeake Bay, Everglades, Louisiana, San Francisco Bay, Puget Sound). 1) To make distinctions between tidal and non-tidal wetlands we have relied on a combination of wetland and LiDAR-derived elevation maps. Existing products appear sufficient for saline wetlands, however many freshwater wetlands (1M ha) may be tidal despite current hydrologic mapcodes. 2) We are currently estimating methane emissions using salinity regime as a proxy. Methane emissions are variable across intermediate salinities, though not captured by the current binary classification of wetlands as either fresh or saline. 3) We are currently using a combination of USDA's SSURGO and independent core data to map soil C stocks. Soil C density varies little and is consistent across depth, salinity regime, and dominant plant cover type. 4) To model soil C fluxes, with C accumulating as sea level rises and C released with erosion or oxidation, we have applied IPCC default emission factors for the 2% of tidal wetland acreage lost to water (the dominant conversion), but have modeled C gain in wetlands-remaining-wetlands (98% of CONUS tidal wetlands) based on correlations between sea-level rise and sediment accretion, with the equation - Δ soil organic C stock = Δ elevation x soil C density. 5) To quantify biomass change through time, we developed a robust (R2 > 0.6) hybrid mapping approach including object-based image analysis, multispectral data, and RADAR. 
Overall, soil and biomass C stocks appear readily estimated and improved from Tier 1 default values. To further reduce uncertainty in the US GHG inventory for coastal wetlands, we propose efforts to confirm the extent of tidal inundation, develop default values for methane emissions associated with intermediate salinities, and model soil C accretion, the dominant "blue carbon" sink, across continental and local gradients.
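The stock-change relation quoted above (Δ soil organic C stock = Δ elevation × soil C density) can be sketched as follows; the units and example inputs are assumptions for illustration, not values from the abstract:

```python
# Soil C gain in wetlands-remaining-wetlands, per the quoted relation:
#   delta soil organic C stock = delta elevation * soil C density
# Assumed units: elevation change in cm, C density in g C per cm^3;
# scaling by 1e4 cm^2/m^2 gives g C per m^2.

def soil_c_gain(delta_elevation_cm: float, c_density_g_cm3: float) -> float:
    """g C per m^2 gained for a given elevation gain, assuming uniform C density."""
    return delta_elevation_cm * c_density_g_cm3 * 1e4  # cm^2 -> m^2

# Illustrative: 0.3 cm/yr accretion at an assumed 0.027 g C/cm^3
gain = soil_c_gain(0.3, 0.027)
```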
Preliminary Evaluation of Method to Monitor Landfills Resilience against Methane Emission
NASA Astrophysics Data System (ADS)
Chusna, Noor Amalia; Maryono, Maryono
2018-02-01
Methane emissions from landfill sites contribute to global warming, and improper methane treatment can pose an explosion hazard. Stakeholders and governments in Indonesian cities face significant difficulties in monitoring the resilience of landfills against methane emission, and the management of methane gas has long been a challenging issue for waste management services and operations. Landfills are a significant contributor to anthropogenic methane emissions. This study conducted a preliminary evaluation of methods to manage methane gas emissions by assessing the LandGEM and IPCC methods. From this evaluation, the study found that the IPCC method is based on the availability of current and historical country-specific data on the waste disposed of in landfills, while LandGEM is an automated tool for estimating emission rates of total landfill gas, accounting for methane, carbon dioxide and other gases. Either method can be used with site-specific data to estimate emissions, or with default parameters if no site-specific data are available. Both methods could be used to monitor methane emissions from landfill sites in the cities of Central Java.
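LandGEM-style estimates rest on a first-order-decay model of methane generation; a simplified annual-step sketch (the actual tool works with sub-annual waste increments, and the k and L0 defaults here are illustrative assumptions, not values from the abstract):

```python
import math

# First-order decay: each year's landfilled waste mass contributes
#   k * L0 * M * exp(-k * age)  cubic metres of CH4 per year.
K = 0.05    # methane generation rate constant, 1/yr (assumed)
L0 = 170.0  # methane generation potential, m^3 CH4 per Mg waste (assumed)

def methane_m3_per_yr(waste_by_year: dict, year: int) -> float:
    """Sum FOD contributions of all waste placed up to `year` (mass in Mg)."""
    return sum(K * L0 * m * math.exp(-K * (year - y))
               for y, m in waste_by_year.items() if y <= year)

# Illustrative two-year disposal history
q = methane_m3_per_yr({2010: 1000.0, 2011: 1200.0}, 2012)
```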
40 CFR Appendix Ix to Part 266 - Methods Manual for Compliance With the BIF Regulations
Code of Federal Regulations, 2010 CFR
2010-07-01
... Systems 2.1 Performance Specifications for Continuous Emission Monitoring of Carbon Monoxide and Oxygen for... Methodology for Bevill Residue Determinations 8.0 Procedures for Determining Default Values for Air Pollution Control System Removal Efficiencies 8.1 APCS RE Default Values for Metals 8.2 APCS RE Default Values for HCl...
40 CFR Appendix Ix to Part 266 - Methods Manual for Compliance With the BIF Regulations
Code of Federal Regulations, 2011 CFR
2011-07-01
... Systems 2.1 Performance Specifications for Continuous Emission Monitoring of Carbon Monoxide and Oxygen for... Methodology for Bevill Residue Determinations 8.0 Procedures for Determining Default Values for Air Pollution Control System Removal Efficiencies 8.1 APCS RE Default Values for Metals 8.2 APCS RE Default Values for HCl...
Kouazounde, J B; Gbenou, J D; Babatounde, S; Srivastava, N; Eggleston, S H; Antwi, C; Baah, J; McAllister, T A
2015-03-01
The objective of this study was to develop emission factors (EF) for methane (CH4) emissions from enteric fermentation in cattle native to Benin. Information on livestock characteristics and diet practices specific to the Benin cattle population were gathered from a variety of sources and used to estimate EF according to Tier 2 methodology of the 2006 Intergovernmental Panel on Climate Change (IPCC) Guidelines for National Greenhouse Gas Inventories. Most cattle from Benin are Bos taurus represented by Borgou, Somba and Lagune breeds. They are mainly multi-purpose, being used for production of meat, milk, hides and draft power and grazed in open pastures and crop lands comprising tropical forages and crops. Estimated enteric CH4 EFs varied among cattle breeds and subcategory owing to differences in proportions of gross energy intake expended to meet maintenance, production and activity. EFs ranged from 15.0 to 43.6, 16.9 to 46.3 and 24.7 to 64.9 kg CH4/head per year for subcategories of Lagune, Somba and Borgou cattle, respectively. Average EFs for cattle breeds were 24.8, 29.5 and 40.2 kg CH4/head per year for Lagune, Somba and Borgou cattle, respectively. The national EF for cattle from Benin was 39.5 kg CH4/head per year. This estimated EF was 27.4% higher than the default EF suggested by IPCC for African cattle with the exception of dairy cattle. The outcome of the study underscores the importance of obtaining country-specific EF to estimate global enteric CH4 emissions.
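The Tier 2 approach described above converts gross energy intake and a methane conversion factor into an emission factor; a sketch of the standard IPCC (2006) form, with illustrative inputs rather than the Benin herd data:

```python
# IPCC (2006) Tier 2 enteric fermentation emission factor:
#   EF = GE * (Ym/100) * 365 / 55.65
# GE: gross energy intake (MJ/head/day); Ym: methane conversion factor (%);
# 55.65 MJ/kg is the energy content of methane.

def enteric_ef_kg_per_head_yr(ge_mj_per_day: float, ym_percent: float) -> float:
    """Annual enteric CH4 emission factor, kg CH4 per head per year."""
    return ge_mj_per_day * (ym_percent / 100.0) * 365.0 / 55.65

# Illustrative: 100 MJ/day gross energy intake at Ym = 6.5%
ef = enteric_ef_kg_per_head_yr(100.0, 6.5)
```

Differences in gross energy expended on maintenance, production and activity across breeds and subcategories feed directly into GE here, which is why the estimated EFs in the study vary so widely.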
Finding Top-kappa Unexplained Activities in Video
2012-03-09
parameters that define a UAP instance affect the running time by varying the values of each parameter while keeping the others fixed to a default value. Runtime of Top-k TUA. Table 1 reports the values we considered for each parameter along with the corresponding default value.

Parameter | Values               | Default value
k         | 1, 2, 5, All         | All
τ         | 0.4, 0.6, 0.8        | 0.6
L         | 160, 200, 240, 280   | 200
# worlds  | 7E+04, 4E+05, 2E+07  | 2E+07

TABLE 1: Parameter values used in
Model for estimating enteric methane emissions from United States dairy and feedlot cattle.
Kebreab, E; Johnson, K A; Archibeque, S L; Pape, D; Wirth, T
2008-10-01
Methane production from enteric fermentation in cattle is one of the major sources of anthropogenic greenhouse gas emission in the United States and worldwide. National estimates of methane emissions rely on mathematical models such as the one recommended by the Intergovernmental Panel on Climate Change (IPCC). Models used for prediction of methane emissions from cattle range from empirical to mechanistic with varying input requirements. Two empirical and two mechanistic models (COWPOLL and MOLLY) were evaluated for their prediction ability using individual cattle measurements. Model selection was based on mean square prediction error (MSPE), concordance correlation coefficient, and residuals vs. predicted values analyses. In dairy cattle, COWPOLL had the lowest root MSPE and greatest accuracy and precision of predicting methane emissions (correlation coefficient estimate = 0.75). The model simulated differences in diet more accurately than the other models, and the residuals vs. predicted value analysis showed no mean bias (P = 0.71). In feedlot cattle, MOLLY had the lowest root MSPE with almost all errors from random sources (correlation coefficient estimate = 0.69). The IPCC model also had good agreement with observed values, and no significant mean (P = 0.74) or linear bias (P = 0.11) was detected when residuals were plotted against predicted values. A fixed methane conversion factor (Ym) might be an easier alternative to diet-dependent variable Ym. Based on the results, the two mechanistic models were used to simulate methane emissions from representative US diets and were compared with the IPCC model. The average Ym in dairy cows was 5.63% of GE (range 3.78 to 7.43%) compared with 6.5% +/- 1% recommended by IPCC. In feedlot cattle, the average Ym was 3.88% (range 3.36 to 4.56%) compared with 3% +/- 1% recommended by IPCC.
Based on our simulations, using IPCC values can result in an overestimate of about 12.5% and underestimate of emissions by about 9.8% for dairy and feedlot cattle, respectively. In addition to providing improved estimates of emissions based on diets, mechanistic models can be used to assess mitigation options such as changing source of carbohydrate or addition of fat to decrease methane, which is not possible with empirical models. We recommend national inventories use diet-specific Ym values predicted by mechanistic models to estimate methane emissions from cattle.
Environmental analysis of sunflower production with different forms of mineral nitrogen fertilizers.
Spinelli, D; Bardi, L; Fierro, A; Jez, S; Basosi, R
2013-11-15
Environmental profiles of mineral nitrogen fertilizers were used to evaluate the environmental disturbances related to their use in cultivation systems in Europe. Since the production of mineral fertilizers requires a large amount of energy, the present study of bioenergy systems is relevant in order to achieve crop yields less dependent on fossil fuels and to reduce the environmental impact due to fertilization. In this study, the suitability of the LCA methodology for analyzing the environmental impact of sunflower cultivation systems with different forms of mineral nitrogen fertilizer (urea and ammonium nitrate) was investigated. Effects on climate change were estimated using the Ecoinvent 2.2 database default value for the soil N2O emission factor (1%) and local emission data (0.8%) for mineral nitrogen applied to soils. LCA analysis showed a higher impact on environmental categories (human health and ecosystem quality) for the system in which urea was used as a nitrogen source. Use of urea fertilizer showed a higher impact on resource consumption due to fossil fuel consumption. Use of mineral nitrogen fertilizers showed a higher environmental burden than the other inputs required for the sunflower cultivation systems under study. Urea and ammonium nitrate showed, respectively, a 7.8% and 4.9% reduced impact of N2O as a greenhouse gas when using direct field data for the soil N2O emission factor compared to the default soil emission factor of the 2006 IPCC Guidelines. Use of ammonium nitrate as the mineral nitrogen fertilizer in sunflower cultivation would have a lower impact on the environmental categories considered. Further environmental analysis of available technologies for fertilizer production might also be evaluated in order to reduce the environmental impacts of each fertilizer. Copyright © 2013 Elsevier Ltd. All rights reserved.
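The sensitivity to the soil N2O emission factor can be illustrated with the standard direct-emission arithmetic (N applied × EF × 44/28), converted to CO2-equivalents; the GWP value used here (298, from AR4) is an assumption, not stated in the abstract:

```python
# Direct soil N2O from applied mineral N, as CO2-equivalents:
#   N2O = N_applied * EF * 44/28   (N2O-N to N2O mass conversion)
GWP_N2O = 298.0  # assumed AR4 GWP100 for N2O

def n2o_co2eq_kg(n_applied_kg: float, ef: float) -> float:
    """kg CO2-eq from direct soil N2O for a given N input and emission factor."""
    return n_applied_kg * ef * (44.0 / 28.0) * GWP_N2O

# Comparing the two emission factors from the study, for 100 kg N applied
default = n2o_co2eq_kg(100.0, 0.010)  # IPCC/Ecoinvent default EF (1%)
local = n2o_co2eq_kg(100.0, 0.008)    # local field-data EF (0.8%)
```

The 20% lower EF translates linearly into a 20% lower direct-N2O term, which is diluted by the other life-cycle stages into the smaller whole-system reductions (7.8% and 4.9%) reported above.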
19 CFR 113.73 - Foreign trade zone operator bond conditions.
Code of Federal Regulations, 2010 CFR
2010-04-01
... the foreign trade zone or subzone. If the principal defaults and the default involves merchandise... merchandise involved in the default, or three times the value of the merchandise involved in the default if... as may be authorized by law or regulation. It is understood and agreed that whether the default...
Tluczkiewicz, Inga; Bitsch, Annette; Hahn, Stefan; Hahn, Torsten
2010-04-01
Under the European Union (EU) Biocidal Products Directive 98/8/EC, comprehensive evaluations on substances of the Third Priority List were conducted until 31 July 2007. This list includes, among other categories, disinfectants for human hygiene (e.g., skin and surface disinfection). For environmental exposure assessment of biocides, the EU emission scenarios apply. Currently available default values for disinfectants are based on consumption data from not more than 8 hospitals and were originally assembled for other purposes. To revalidate these default values, a survey on annual consumption data was performed in 27 German hospitals. These data were analyzed to provide consumption data per bed and day and per nurse and day for particular categories of active ingredients and were compared with default values from the EU emission scenario documents. Although several deviations were detected, an overall acceptable correspondence between Emission Scenario Documents default values and the current survey data was found. (c) 2009 SETAC
NASA Astrophysics Data System (ADS)
O, Hyong-Chol; Jo, Jong-Jun; Kim, Ji-Sok
2016-02-01
We provide representations of solutions to terminal value problems of inhomogeneous Black-Scholes equations and study such general properties as min-max estimates, gradient estimates, monotonicity and convexity of the solutions with respect to the stock price variable, which are important for financial security pricing. In particular, we focus on finding representation of the gradient (with respect to the stock price variable) of solutions to the terminal value problems with discontinuous terminal payoffs or inhomogeneous terms. Such terminal value problems are often encountered in pricing problems of compound-like options such as Bermudan options or defaultable bonds with discrete default barrier, default intensity and endogenous default recovery. Our results can be used in pricing real defaultable bonds under consideration of existence of discrete coupons or taxes on coupons.
Biodiesel production in a semiarid environment: a life cycle assessment approach.
Biswas, Wahidul K; Barton, Louise; Carter, Daniel
2011-04-01
While the use of biodiesel appears to be a promising alternative to petroleum fuel, the replacement of fossil fuel by biofuel may not bring about the intended climate cooling because of the increased soil N2O emissions due to N-fertilizer applications. Using a life cycle assessment approach, we assessed the influence of soil nitrous oxide (N2O) emissions on the life cycle global warming potential of the production and combustion of biodiesel from canola oil produced in a semiarid climate. Utilizing locally measured soil N2O emissions, rather than the Intergovernmental Panel on Climate Change (IPCC) default values, decreased greenhouse gas (GHG) emissions from the production and combustion of 1 GJ biodiesel from 63 to 37 carbon dioxide equivalents (CO2-e)/GJ. GHG were 1.1 to 2.1 times lower than those from petroleum or petroleum-based diesel depending on which soil N2O emission factors were included in the analysis. The advantages of utilizing biodiesel rapidly declined when blended with petroleum diesel. Mitigation strategies that decrease emissions from the production and application of N fertilizers may further decrease the life cycle GHG emissions in the production and combustion of biodiesel.
NASA Astrophysics Data System (ADS)
Amon, Barbara; Zechmeister-Boltenstern, Sophie; Kasper, Martina; Foldal, Cecilie; Schiefer, Jasmin; Kitzler, Barbara; Schwarzl, Bettina; Zethner, Gerhard; Anderl, Michael; Sedy, Katrin; Gaugitsch, Helmut; Dersch, Georg; Baumgarten, Andreas; Haas, Edwin; Kiese, Ralf
2016-04-01
Results from a previous project, "FarmClim", highlight that the IPCC default emission factor is not able to reflect region-specific N2O emissions from Austrian arable soils. The methodology is limited in identifying hot spots and hot moments of N2O emissions. When estimations are based on default emission factors, no recommendations can be given on optimisation measures that would lead to a reduction of soil N2O emissions. The better the knowledge of nitrogen and carbon budgets in Austrian agriculturally managed soils, the better the situation can be reflected in the Austrian GHG emission inventory calculations. Therefore, nationally and regionally modelled emission factors should improve the evidence for national deviation from the IPCC default emission factors and reduce the uncertainties. The overall aim of NitroAustria is to identify the drivers of N2O emissions on a regional basis, taking different soil types, climate, and agricultural management into account. We use the LandscapeDNDC model to update the N2O emission factors for N fertilizer and animal manure applied to soils. Key regions in Austria were selected and region-specific N2O emissions calculated. The model runs at sub-daily time steps and uses data such as maximum and minimum air temperature, precipitation, radiation, and wind speed as meteorological drivers. Further input data are used to reflect agricultural management practices, e.g., planting/harvesting, tillage, fertilizer application and irrigation, and information on soil and vegetation properties for site characterization and model initialization. While at site scale arable management data (crop cultivation, rotations, timings etc.) are obtained from experimental data from field trials or observations, at regional scale such data need to be generated using region-specific proxy data such as land use and management statistics, crop cultivations and yields, crop rotations, fertilizer sales, manure resulting from livestock units etc.
The farming community can only profit from NitroAustria if model developments and results are integrated into the national emission inventory. Trade-offs between different greenhouse gas emissions and other nitrogen losses have to be discussed. The derivation of suitable mitigation options, by optimization of common management practices and evaluation of potential ones for current and future climatic conditions, is crucial to minimize threats to the environment while ensuring the long-term productivity and sustainability of agro-ecosystems. From the results gained in NitroAustria we will be able to show potential environmental impacts and propose measures for a policy framework towards climate-friendly farming.
NASA Astrophysics Data System (ADS)
Eickenscheidt, T.; Heinichen, J.; Drösler, M.
2015-04-01
Drained organic soils are considered as hotspots for greenhouse gas (GHG) emissions. Particularly arable lands and intensively used grasslands have been regarded as the main producers of carbon dioxide (CO2) and nitrous oxide (N2O). However, GHG balances of former peatlands and associated organic soils not considered as peatland according to the definition of the Intergovernmental Panel on Climate Change (IPCC) have not been investigated so far. Therefore, our study addressed the question to what extent the soil organic carbon (SOC) content affects the GHG release of drained organic soils under two different land-use types (arable land and intensively used grassland). Both land-use types were established on a mollic Gleysol (named Cmedium) as well as on a sapric Histosol (named Chigh). The two soil types significantly differed in their SOC contents in the topsoil (Cmedium: 9.4-10.9% SOC; Chigh: 16.1-17.2% SOC). We determined GHG fluxes (CO2, N2O and methane (CH4)) over a period of 2 years. The daily and annual net ecosystem exchange (NEE) of CO2 was determined with the closed dynamic chamber technique and by modeling the ecosystem respiration (RECO) and the gross primary production (GPP). N2O and CH4 were determined by the closed chamber technique. Estimated NEE of CO2 significantly differed between the two land-use types, with lower NEE values (-6 to 1707 g CO2-C m-2 yr-1) at the arable sites and higher values (1354 to 1823 g CO2-C m-2 yr-1) at the grassland sites. No effect on NEE was found regarding the SOC content. Significantly higher annual N2O exchange rates were observed at the arable sites (0.23-0.86 g N m-2 yr-1) compared to the grassland sites (0.12-0.31 g N m-2 yr-1). Furthermore, N2O fluxes from the Chigh sites significantly exceeded those of the Cmedium sites. CH4 fluxes were found to be close to zero at all plots.
Estimated global warming potential, calculated for a time horizon of 100 years (GWP100), revealed a very high release of GHGs from all plots, ranging from 1837 to 7095 g CO2 eq. m-2 yr-1. Calculated global warming potential (GWP) values did not differ between soil types and partly exceeded the IPCC default emission factors of the Tier 1 approach by far. However, despite being subject to high uncertainties, the results clearly highlight the importance of adjusting the IPCC guidelines for organic soils not falling under the definition, to avoid a significant underestimation of GHG emissions in the corresponding sectors of the national climate reporting. Furthermore, the present results revealed that mainly the land use, including the management, and not the SOC content is responsible for the magnitude of GHG exchange from intensive farming on drained organic soils.
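A GWP100 figure like those above aggregates the three measured gases into CO2-equivalents; a minimal sketch, assuming AR4 GWP values (298 for N2O, 25 for CH4), which the abstract does not specify:

```python
# GWP100 aggregation: CO2-eq = CO2 + GWP_N2O * N2O + GWP_CH4 * CH4.
# Inputs are gas masses; note that fluxes reported as CO2-C or N2O-N
# must first be converted to gas mass (* 44/12 and * 44/28 respectively).
GWP = {"co2": 1.0, "n2o": 298.0, "ch4": 25.0}  # assumed AR4 values

def gwp100_g_co2eq(co2_g: float, n2o_g: float, ch4_g: float) -> float:
    """Total CO2-equivalent (g) of the three gas fluxes (g of each gas)."""
    return GWP["co2"] * co2_g + GWP["n2o"] * n2o_g + GWP["ch4"] * ch4_g

# Illustrative per-m^2 annual fluxes (assumed, not the study's data)
total = gwp100_g_co2eq(5000.0, 1.0, 0.1)
```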
N2O release from agro-biofuel production negates global warming reduction by replacing fossil fuels
NASA Astrophysics Data System (ADS)
Crutzen, P. J.; Mosier, A. R.; Smith, K. A.; Winiwarter, W.
2008-01-01
The relationship, on a global basis, between the amount of N fixed by chemical, biological or atmospheric processes entering the terrestrial biosphere, and the total emission of nitrous oxide (N2O), has been re-examined, using known global atmospheric removal rates and concentration growth of N2O as a proxy for overall emissions. For both the pre-industrial period and in recent times, after taking into account the large-scale changes in synthetic N fertiliser production, we find an overall conversion factor of 3-5% from newly fixed N to N2O-N. We assume the same factor to be valid for biofuel production systems. It is covered only in part by the default conversion factor for "direct" emissions from agricultural crop lands (1%) estimated by IPCC (2006), and the default factors for the "indirect" emissions (following volatilization/deposition and leaching/runoff of N: 0.35-0.45%) cited therein. However, as we show in the paper, when additional emissions included in the IPCC methodology, e.g. those from livestock production, are included, the total may not be inconsistent with that given by our "top-down" method. When the extra N2O emission from biofuel production is calculated in "CO2-equivalent" global warming terms, and compared with the quasi-cooling effect of "saving" emissions of fossil fuel derived CO2, the outcome is that the production of commonly used biofuels, such as biodiesel from rapeseed and bioethanol from corn (maize), depending on N fertilizer uptake efficiency by the plants, can contribute as much or more to global warming by N2O emissions than cooling by fossil fuel savings. Crops with less N demand, such as grasses and woody coppice species, have more favourable climate impacts. This analysis only considers the conversion of biomass to biofuel. It does not take into account the use of fossil fuel on the farms and for fertilizer and pesticide production, but it also neglects the production of useful co-products. 
Both factors partially compensate each other. This needs to be analyzed in a full life cycle assessment.
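The core of the "top-down" comparison in the record above is a conversion factor (3-5%) from newly fixed N to N2O-N, expressed in CO2-equivalent terms. A minimal sketch of that arithmetic, assuming the AR4 100-year GWP of 298 for N2O (the paper's exact GWP value and N inputs are not given here, so the numbers are illustrative only):

```python
# Sketch of the top-down N2O accounting: newly fixed N -> N2O-N (3-5%)
# -> N2O mass -> CO2-equivalents. GWP value is an assumption (IPCC AR4).

GWP_N2O = 298               # 100-year global warming potential of N2O (assumed, AR4)
N2O_N_TO_N2O = 44.0 / 28.0  # convert mass of N2O-N to mass of N2O

def n2o_co2eq_per_kg_fixed_n(conversion_factor):
    """CO2-equivalent emissions (kg) per kg of newly fixed N."""
    n2o_n = conversion_factor * 1.0        # kg N2O-N per kg fixed N
    return n2o_n * N2O_N_TO_N2O * GWP_N2O  # kg CO2-eq

# The paper's overall conversion factor is 3-5% of newly fixed N:
low = n2o_co2eq_per_kg_fixed_n(0.03)
high = n2o_co2eq_per_kg_fixed_n(0.05)
print(f"{low:.0f}-{high:.0f} kg CO2-eq per kg fixed N")
```

Comparing this per-kg-N burden with the fossil CO2 nominally displaced by the fuel is what drives the paper's conclusion for high-N crops such as rapeseed and corn.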
Verifying the UK N2O emission inventory with tall tower measurements
NASA Astrophysics Data System (ADS)
Carnell, Ed; Meneguz, Elena; Skiba, Ute; Misselbrook, Tom; Cardenas, Laura; Arnold, Tim; Manning, Alistair; Dragosits, Ulli
2016-04-01
Nitrous oxide (N2O) is a key greenhouse gas (GHG), with a global warming potential ~300 times greater than that of CO2. N2O is emitted from a variety of sources, predominantly from agriculture. Annual UK emission estimates are reported to comply with government commitments under the United Nations Framework Convention on Climate Change (UNFCCC). The UK N2O inventory follows internationally agreed protocols, and emission estimates are derived by applying emission factors to estimates of (anthropogenic) emission sources. This approach is useful for comparing anthropogenic emissions from different countries, but does not capture regional differences and inter-annual variability associated with environmental factors (such as climate and soils) and agricultural management. In recent years, the UK inventory approach has been refined to incorporate regional information into its emission estimates (e.g. agricultural management data), in an attempt to reduce uncertainty. This study attempts to assess the difference between the current published inventory methodology (default IPCC methodology) and a revised approach that incorporates the latest thinking, using data from recent work. For 2013, emission estimates made using the revised approach were 30% lower than those made using the default IPCC methodology, due to the lower emission factors suggested by recent projects (www.ghgplatform.org.uk, Defra projects: AC0116, AC0213 and MinNO). The 2013 emission estimates were disaggregated on a monthly basis using agricultural management (e.g. sowing dates), climate data and soil properties. The temporally disaggregated emission maps were used as input to the Met Office atmospheric dispersion model NAME, for comparison with measured N2O concentrations at three observation stations (Tacolneston, E England; Ridge Hill, W England; Mace Head, W Ireland) in the UK DECC network (Deriving Emissions linked to Climate Change).
The Mace Head site, situated on the west coast of Ireland, was used to establish baseline concentrations. The trends in the modelled data were found to fit with the observational data trends, with concentration peaks coinciding with periods of fertiliser application and land spreading of manures. The model run using the 'experimental' approach was found to give a closer agreement with the observed data.
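The inventory methodology described above multiplies activity data by emission factors. A minimal sketch of that Tier-1-style structure, and of the effect of the ~30% lower revised factors; the activity figures and source names are hypothetical, only the factor-times-activity form reflects the methodology:

```python
# Tier-1-style direct N2O inventory: emissions = sum over sources of
# (activity data x emission factor). All activity values are hypothetical.

def direct_n2o_emissions(activity_kg_n, emission_factor):
    """Direct N2O-N emissions (kg) from a given mass of applied N."""
    return activity_kg_n * emission_factor

sources = {                      # kg N applied per year (hypothetical)
    "synthetic_fertiliser": 1_000_000,
    "manure": 400_000,
}
default_ef = 0.01   # IPCC default: 1% of applied N emitted as N2O-N
revised_ef = 0.007  # an illustrative 30% lower factor, as found by the revised approach

default_total = sum(direct_n2o_emissions(a, default_ef) for a in sources.values())
revised_total = sum(direct_n2o_emissions(a, revised_ef) for a in sources.values())
```

Because the calculation is linear in the emission factor, a 30% cut in the factors propagates directly to a 30% cut in the estimated total, which is the difference the tall-tower measurements are used to test.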
Code of Federal Regulations, 2011 CFR
2011-07-01
... PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Municipal Solid Waste Landfills Pt. 98, Subpt. NN, Table NN-2 Table NN-2 to Subpart HH of Part 98—Lookup Default Values...
Code of Federal Regulations, 2011 CFR
2011-07-01
... Calculation Methodology 1 of This Subpart Fuel Default high heating value factor Default CO2 emission factor (kg CO2/MMBtu) Natural Gas 1.028 MMBtu/Mscf 53.02 Propane 3.822 MMBtu/bbl 61.46 Normal butane 4.242...
Code of Federal Regulations, 2013 CFR
2013-07-01
... Calculation Methodology 1 of This Subpart Fuel Default high heating value factor Default CO2 emission factor (kg CO2/MMBtu) Natural Gas 1.028 MMBtu/Mscf 53.02 Propane 3.822 MMBtu/bbl 61.46 Normal butane 4.242...
Code of Federal Regulations, 2012 CFR
2012-07-01
... Calculation Methodology 1 of This Subpart Fuel Default high heating value factor Default CO2 emission factor (kg CO2/MMBtu) Natural Gas 1.028 MMBtu/Mscf 53.02 Propane 3.822 MMBtu/bbl 61.46 Normal butane 4.242...
A Simulation Program with Latency Exploitation for the Transient Analysis of Digital Circuits.
1983-08-01
PW PER) Examples: VIN 3 0 PULSE(-5 5 1NS 1NS 1NS 50NS 100NS) parameters default values units V1 (initial value) volts or amps V2 (pulsed value) ...TAU1 TD2 TAU2) Examples: VIN 3 0 EXP(-5 0 2NS 30NS 60NS 40NS) parameters default values units V1 (initial value) volts or amps V2 (pulsed value
NASA Astrophysics Data System (ADS)
Wu, Xianjun; Di, Qian; Li, Yao; Zhao, Xiaojie
2009-02-01
Recently, evidence from fMRI studies has shown decreased activity in the default-mode network in Alzheimer's disease (AD), and DTI research has also demonstrated that demyelination exists in the white matter of AD patients. Therefore, combining these two MRI methods may help to reveal the relationship between white matter damage and alterations of the resting-state functional connectivity network. In the present study, we tried to address this issue by means of correlation analysis between DTI and resting-state fMRI images. First, the default-mode networks of the AD and normal control groups were compared to find the areas with significantly declined activity. Then, the white matter regions whose fractional anisotropy (FA) values correlated with this decline were located through multiple regressions between the FA values and the BOLD response of the default networks. Among these correlated white matter regions, those whose FA values also declined were found by a group comparison between AD patients and healthy elderly control subjects. Our results showed that the areas with decreased activity in the default-mode network included the left posterior cingulate cortex (PCC) and the left medial temporal gyrus, among others. The damaged white matter areas correlated with the default-mode network alterations were located around the left sub-gyral temporal lobe. These changes may relate to the decreased connectivity between the PCC and the medial temporal lobe (MTL), and thus correlate with the deficiency of default-mode network activity.
Understanding the origins of uncertainty in landscape-scale variations of emissions of nitrous oxide
NASA Astrophysics Data System (ADS)
Milne, Alice; Haskard, Kathy; Webster, Colin; Truan, Imogen; Goulding, Keith
2014-05-01
Nitrous oxide is a potent greenhouse gas which is over 300 times more radiatively effective than carbon dioxide. In the UK, the agricultural sector is estimated to be responsible for over 80% of nitrous oxide emissions, with these emissions resulting from livestock and farmers adding nitrogen fertilizer to soils. For the purposes of reporting emissions to the IPCC, the estimates are calculated using simple models whereby readily-available national or international statistics are combined with IPCC default emission factors. The IPCC emission factor for direct emissions of nitrous oxide from soils has a very large uncertainty. This is primarily because the variability of nitrous oxide emissions in space is large and this results in uncertainty that may be regarded as sample noise. To both reduce uncertainty through improved modelling, and to communicate an understanding of this uncertainty, we must understand the origins of the variation. We analysed data on nitrous oxide emission rate and some other soil properties collected from a 7.5-km transect across contrasting land uses and parent materials in eastern England. We investigated the scale-dependence and spatial uniformity of the correlations between soil properties and emission rates from farm to landscape scale using wavelet analysis. The analysis revealed a complex pattern of scale-dependence. Emission rates were strongly correlated with a process-specific function of the water-filled pore space at the coarsest scale and nitrate at intermediate and coarsest scales. We also found significant correlations between pH and emission rates at the intermediate scales. The wavelet analysis showed that these correlations were not spatially uniform and that at certain scales changes in parent material coincided with significant changes in correlation. 
Our results indicate that, at the landscape scale, nitrate content and water-filled pore space are key soil properties for predicting nitrous oxide emissions and should therefore be incorporated into process models and emission factors for inventory calculations.
NASA Astrophysics Data System (ADS)
Eickenscheidt, T.; Heinichen, J.; Drösler, M.
2015-09-01
Drained organic soils are considered to be hotspots for greenhouse gas (GHG) emissions. Arable lands and intensively used grasslands, in particular, have been regarded as the main producers of carbon dioxide (CO2) and nitrous oxide (N2O). However, GHG balances of former peatlands and associated organic soils not considered to be peatland according to the definition of the Intergovernmental Panel on Climate Change (IPCC) have not been investigated so far. Therefore, our study addressed the question to what extent the soil organic carbon (SOC) content affects the GHG release of drained organic soils under two different land-use types (arable land and intensively used grassland). Both land-use types were established on a Mollic Gleysol (labeled Cmedium) as well as on a Sapric Histosol (labeled Chigh). The two soil types differed significantly in their SOC contents in the topsoil (Cmedium: 9.4-10.9 % SOC; Chigh: 16.1-17.2 % SOC). We determined GHG fluxes over a period of 1 or 2 years in case of N2O or methane (CH4) and CO2, respectively. The daily and annual net ecosystem exchange (NEE) of CO2 was determined by measuring NEE and the ecosystem respiration (RECO) with the closed dynamic chamber technique and by modeling the RECO and the gross primary production (GPP). N2O and CH4 were measured with the static closed chamber technique. Estimated NEE of CO2 differed significantly between the two land-use types, with lower NEE values (-6 to 1707 g CO2-C m-2 yr-1) at the arable sites and higher values (1354 to 1823 g CO2-C m-2 yr-1) at the grassland sites. No effect on NEE was found regarding the SOC content. Significantly higher annual N2O exchange rates were observed at the arable sites (0.23-0.86 g N m-2 yr-1) than at the grassland sites (0.12-0.31 g N m-2 yr-1). Furthermore, N2O fluxes from the Chigh sites significantly exceeded those of the Cmedium sites. CH4 fluxes were found to be close to zero at all plots. 
Estimated global warming potential, calculated for a time horizon of 100 years (GWP100) revealed a very high release of GHGs from all plots ranging from 1837 to 7095 g CO2 eq. m-2 yr-1. Calculated global warming potential (GWP) values did not differ between soil types and partly exceeded the IPCC default emission factors of the Tier 1 approach by far. However, despite being subject to high uncertainties, the results clearly highlight the importance of adjusting the IPCC guidelines for organic soils not falling under the definition in order to avoid a significant underestimation of GHG emissions in the corresponding sectors of the national climate reporting. Furthermore, the present results revealed that mainly the type of land-use, including the management type, and not the SOC content is responsible for the height of GHG exchange from intensive farming on drained organic soils.
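The GWP100 figures above aggregate CO2, N2O and CH4 fluxes into a single CO2-equivalent number. A sketch of that aggregation, assuming AR4 GWP values (298 for N2O, 25 for CH4) and the standard mass conversions; the example fluxes are illustrative values within the ranges reported above, not the study's site data:

```python
# Aggregate annual fluxes into a GWP100 estimate (g CO2-eq m-2 yr-1).
# GWP values are assumptions (IPCC AR4); input fluxes below are illustrative.

def gwp100(nee_co2_c, n2o_n, ch4_c=0.0):
    """Combine annual fluxes into g CO2-eq m-2 yr-1.

    nee_co2_c : net ecosystem exchange, g CO2-C m-2 yr-1
    n2o_n     : N2O emission, g N m-2 yr-1
    ch4_c     : CH4 emission, g C m-2 yr-1 (near zero at these plots)
    """
    co2 = nee_co2_c * 44.0 / 12.0      # CO2-C -> CO2
    n2o = n2o_n * 44.0 / 28.0 * 298.0  # N2O-N -> N2O -> CO2-eq
    ch4 = ch4_c * 16.0 / 12.0 * 25.0   # CH4-C -> CH4 -> CO2-eq
    return co2 + n2o + ch4

# e.g. a grassland plot: NEE 1500 g CO2-C, N2O 0.2 g N, negligible CH4
print(round(gwp100(1500, 0.2)))
```

The CO2 term dominates at these drained sites, which is consistent with the reported GWP range of 1837-7095 g CO2 eq. m-2 yr-1 being driven largely by NEE.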
The Application of Optimal Defaults to Improve Elementary School Lunch Selections: Proof of Concept
ERIC Educational Resources Information Center
Loeb, Katharine L.; Radnitz, Cynthia; Keller, Kathleen L.; Schwartz, Marlene B.; Zucker, Nancy; Marcus, Sue; Pierson, Richard N.; Shannon, Michael; DeLaurentis, Danielle
2018-01-01
Background: In this study, we applied behavioral economics to optimize elementary school lunch choices via parent-driven decisions. Specifically, this experiment tested an optimal defaults paradigm, examining whether strategically manipulating the health value of a default menu could be co-opted to improve school-based lunch selections. Methods:…
24 CFR 203.370 - Pre-foreclosure sales.
Code of Federal Regulations, 2010 CFR
2010-04-01
... sold by the mortgagor, after default and prior to foreclosure, at its current fair market value (less... determined by the Secretary, which default is the result of an adverse and unavoidable financial situation... whose current fair market value, compared to the amount needed to discharge the mortgage, meets the...
A multistage crucible of revision and approval shapes IPCC policymaker summaries.
Mach, Katharine J; Freeman, Patrick T; Mastrandrea, Michael D; Field, Christopher B
2016-08-01
Intergovernmental Panel on Climate Change (IPCC) member governments approve each report's summary for policymakers (SPM) by consensus, discussing and agreeing on each sentence in a plenary session with scientist authors. A defining feature of IPCC assessment, the governmental approval process builds joint ownership of current knowledge by scientists and governments. The resulting SPM revisions have been extensively discussed in anecdotes, interviews, and perspectives, but they have not been comprehensively analyzed. We provide an in-depth evaluation of IPCC SPM revisions, establishing an evidential basis for understanding their nature. Revisions associated with governmental review and approval generally expand SPMs, with SPM text growing by 17 to 53% across recent assessment reports. Cases of high political sensitivity and failure to reach consensus are notable exceptions, resulting in SPM contractions. In contrast to recent claims, we find that IPCC SPMs are as readable, for multiple metrics of reading ease, as other professionally edited assessment summaries. Across reading-ease metrics, some SPMs become more readable through governmental review and approval, whereas others do not. In an SPM examined through the entire revision process, most revisions associated with governmental review and approval occurred before the start of the government-approval plenary session. These author revisions emphasize clarity, scientific rigor, and explanation. In contrast, the subsequent plenary revisions place greater emphasis especially on policy relevance, comprehensiveness of examples, and nuances of expert judgment. Overall, the value added by the IPCC process emerges in a multistage crucible of revision and approval, as individuals together navigate complex science-policy terrain.
Time varying default barrier as an agreement rules on bond contract
NASA Astrophysics Data System (ADS)
Maruddani, Di Asih I.; Safitri, Diah; Hoyyi, Abdul
2018-05-01
There are several default-time rules in the contract agreement of a bond. The classical default time is known from the Merton model. The most important characteristic of Merton's model is the restriction of default to the maturity of the debt, not taking into consideration the possibility of an early default. If the firm's value falls to a minimal level before the maturity of the debt, but the firm is able to recover and meet the debt's payment at maturity, default would be avoided in Merton's approach. The Merton model has been extended by Hull & White [6] and Avellaneda & Zhu [1], who introduced a time-varying default barrier for modelling the distance-to-default process. This model uses a time-varying variable as the barrier. In this paper, we give a valuation of a bond with a time-varying default-barrier agreement. We use straightforward integration to obtain the equity and liability equations. The theory is applied to an Indonesian corporate bond.
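The distinction drawn above — Merton default checked only at maturity versus barrier default at the first crossing — can be illustrated on a simulated firm-value path. This is a sketch of the two default rules only, not the paper's valuation (which proceeds by integration rather than simulation); all parameters and the exponential barrier rule are hypothetical:

```python
# Contrast Merton-style default (value below debt at maturity only) with
# first-passage default against a time-varying barrier, on one simulated
# geometric Brownian motion path. All parameters are hypothetical.
import math
import random

def simulate_default(v0, debt, mu, sigma, T, steps, barrier, seed=0):
    """Return (merton_default, barrier_default) for one firm-value path."""
    random.seed(seed)
    dt = T / steps
    v = v0
    hit_barrier = False
    for i in range(1, steps + 1):
        z = random.gauss(0.0, 1.0)
        v *= math.exp((mu - 0.5 * sigma**2) * dt + sigma * math.sqrt(dt) * z)
        if v < barrier(i * dt):   # early default if value crosses the barrier
            hit_barrier = True
    merton_default = v < debt     # Merton: only the value at maturity matters
    return merton_default, hit_barrier

# Example: a barrier growing toward the debt face value (hypothetical rule)
debt = 80.0
barrier = lambda t: debt * math.exp(-0.1 * (1.0 - t))
print(simulate_default(100.0, debt, 0.05, 0.3, 1.0, 252, barrier))
```

A path that dips below the barrier mid-life but recovers by maturity defaults under the barrier rule yet survives under Merton's, which is exactly the early-default possibility the time-varying barrier is meant to capture.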
Kataev, G V; Korotkov, A D; Kireev, M V; Medvedev, S V
2013-01-01
In the present article, it is shown that the functional connectivity of brain structures, revealed by factor analysis of resting-state PET CBF and rCMRglu data, is an adequate tool for studying the default mode of the human brain. The identification of the neuroanatomic systems of the default mode (default mode network) during routine clinical PET investigations is important for further study of the functional organization of the normal brain and its reorganization in pathological conditions.
Risk Factors and Mortality Associated with Default from Multidrug-Resistant Tuberculosis Treatment
Franke, Molly F.; Appleton, Sasha C.; Bayona, Jaime; Arteaga, Fernando; Palacios, Eda; Llaro, Karim; Shin, Sonya S.; Becerra, Mercedes C.; Murray, Megan B.; Mitnick, Carole D.
2008-01-01
Background Completing treatment for multidrug-resistant (MDR) tuberculosis (TB) may be more challenging than completing first-line TB therapy, especially in resource poor settings. The objectives of this study were to (1) identify risk factors for default from MDR TB therapy; (2) quantify mortality among patients who default; and (3) identify risk factors for death following default. Methods We performed a retrospective chart review to identify risk factors for default and conducted home visits to assess mortality among patients who defaulted. Results 67 of 671 patients (10.0%) defaulted. The median time to default was 438 days (interquartile range [IQR]: 152−710), and 40.3% of patients had culture-positive sputum at the time of default. Substance use (hazard ratio [HR]: 2.96, 95% confidence interval [CI]: [1.56, 5.62], p-value [p]=0.001), substandard housing conditions (HR: 1.83, CI: [1.07, 3.11], p=0.03), later year of enrollment (HR: 1.62, CI: [1.09, 2.41], p=0.02) and health district (p=0.02) predicted default in a multivariable analysis. Severe adverse events did not predict default. Of 47 (70.1%) patients who defaulted and were successfully traced, 25 (53.2%) had died. Poor bacteriologic response, less than a year of treatment at default, low education level, and diagnosis with a psychiatric disorder significantly predicted death after default in a multivariable analysis. Conclusions The proportion of patients who defaulted from MDR TB treatment was relatively low. The large proportion of patients who defaulted while culture-positive underscores the public health importance of minimizing default. Prognosis for patients who defaulted was poor. Interventions aimed at preventing default may reduce TB-related mortality. PMID:18462099
A Neural Network Approach to Estimating the Allowance for Bad Debt
ERIC Educational Resources Information Center
Joyner, Donald Thomas
2011-01-01
The granting of credit is a necessary risk of doing business. If companies only accepted cash, sales would be negatively impacted. In a perfect world, all consumers would pay their bills when they become due. However, the fact is that some consumers do default on debt. Companies are willing to accept default risk because the value of defaults does…
40 CFR Table Tt-1 to Subpart Tt - Default DOC and Decay Rate Values for Industrial Waste Landfills
Code of Federal Regulations, 2012 CFR
2012-07-01
... Industrial Waste Landfills TT Table TT-1 to Subpart TT Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Industrial Waste Landfills Pt. 98, Subpt. TT, Table TT Table TT-1 to Subpart TT—Default DOC and Decay Rate Values for Industrial...
NASA Astrophysics Data System (ADS)
Kim, Gil Won; Jeong, Seung Tak; Kim, Gun Yeob; Kim, Pil Joo; Kim, Sang Yoon
2016-08-01
Fertilization with urea can lead to a loss of carbon dioxide (CO2) that was fixed during the industrial production process. The extent of atmospheric CO2 removal by urea manufacturing is estimated in the Industrial Processes and Product Use (IPPU) sector. On this basis, the Intergovernmental Panel on Climate Change (IPCC) has proposed a value of 0.2 Mg C per Mg urea (available in the 2006 revised IPCC guidelines for greenhouse gas inventories), which is the mass fraction of C in urea, as the CO2 emission coefficient of urea for the agricultural sector. Notably, due to the possibility of bicarbonate leaching to waters, not all C in urea may be released as CO2 to the atmosphere. Hence, in order to provide an accurate value of the CO2 emission coefficient of applied urea in the rice ecosystem, the CO2 emission factors were characterized under different levels of 13C-urea applied to a paddy field in the current study. The total CO2 fluxes and rice grain yields increased significantly with increasing urea application (110-130 kg N ha-1) and thereafter decreased. However, with increasing 13C-urea application, a significant and proportional increase of the 13CO2-C emissions from 13C-urea was also observed. From the relationships between urea application levels and 13CO2-C fluxes from 13C-urea, the CO2-C emission factor of urea was estimated to range between 0.0143 and 0.0156 Mg C per Mg urea. Thus, the CO2-C emission factor of this study is less than the value proposed by the IPCC. Therefore, for the first time, we propose to revise the current IPCC guideline value of the CO2-C emission factor of urea to 0.0143-0.0156 Mg C per Mg urea for Korean paddy soils.
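The practical effect of the revised coefficient is a simple scaling: CO2 emitted equals urea mass times the C emission factor times the C-to-CO2 mass ratio. A sketch with a hypothetical application amount, using the IPCC default (0.2 Mg C per Mg urea) and the upper bound of the study's paddy-soil range:

```python
# CO2 emitted from applied urea: mass x C emission factor x (44/12).
# The applied amount below is hypothetical; the factors are from the record above.

C_TO_CO2 = 44.0 / 12.0  # convert mass of C to mass of CO2

def co2_from_urea(urea_mg, emission_factor_c):
    """CO2 emissions (Mg) from a given mass of applied urea (Mg)."""
    return urea_mg * emission_factor_c * C_TO_CO2

applied = 100.0  # Mg urea (hypothetical)
ipcc = co2_from_urea(applied, 0.2)      # IPCC default coefficient
paddy = co2_from_urea(applied, 0.0156)  # upper bound of the study's range
```

Under these factors the paddy-soil estimate is more than an order of magnitude below the IPCC default, which is the size of the revision the study proposes for Korean paddy soils.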
Estimates of N2O, NO and NH3 Emissions From Croplands in East, Southeast and South Asia
NASA Astrophysics Data System (ADS)
Yan, X.; Ohara, T.; Akimoto, H.
2002-12-01
Agricultural activities have greatly altered the global nitrogen cycle and produced nitrogenous gases of environmental significance. More than half of the global chemical nitrogen fertilizer is used for crop production in East, Southeast and South Asia, where rice is the center of nutrition. Emissions of nitrous oxide (N2O), nitric oxide (NO) and ammonia (NH3) from croplands in this region were estimated by considering both background emissions and emissions resulting from nitrogen added to croplands, including chemical nitrogen, animal manure used as fertilizer, biologically fixed nitrogen, and nitrogen in crop residue returned to the field. Background emission fluxes of N2O and NO from croplands were estimated at 1.16 and 0.52 kg N ha-1 yr-1, respectively. A fertilizer-induced N2O emission factor of 1.25% for upland was adopted from the IPCC guidelines, and a factor of 0.25% was derived for paddy fields from measurements. Total N2O emission from croplands in the region was estimated at 1.16 Tg N yr-1, with 41% contributed by background emissions, which were not considered in previous global estimates. However, the average fertilizer-induced N2O emission factor is only 0.93%, lower than the default IPCC value of 1.25%, due to the low emission factor for paddy fields. A fertilizer-induced NO emission factor of 0.66% for upland was derived from field measurements, and a factor of 0.13% was assumed for paddy fields. Total NO emission was 572 Gg N yr-1 in the region, with 38% due to background emissions. The average fertilizer-induced NO emission factor was 0.48%. Extrapolating this estimate to the global scale results in a global NO emission from cropland of 1.6 Tg N yr-1, smaller than other global estimates. Total NH3 emission was estimated at 11.8 Tg N yr-1. The use of urea and ammonium bicarbonate and the cultivation of rice lead to a high average NH3 loss rate of chemical fertilizer in the region. Emissions were distributed on a 0.5° grid using a global land-use database.
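The estimation structure described above is background flux times area plus applied N times a fertilizer-induced emission factor, with different factors for upland and paddy. A sketch using the background flux (1.16 kg N ha-1 yr-1) and the 1.25%/0.25% factors from the record; the area and applied-N amounts are hypothetical:

```python
# N2O emission for one land-use class: background flux x area
# + applied N x fertilizer-induced emission factor. Areas and N inputs
# below are hypothetical; the flux and factors are from the record above.

def n2o_emission(area_ha, n_applied_kg, background_flux, ef):
    """Annual N2O-N emission (kg) for one land-use class."""
    return area_ha * background_flux + n_applied_kg * ef

upland = n2o_emission(area_ha=1_000, n_applied_kg=150_000,
                      background_flux=1.16, ef=0.0125)  # IPCC upland factor
paddy = n2o_emission(area_ha=1_000, n_applied_kg=150_000,
                     background_flux=1.16, ef=0.0025)   # measured paddy factor
```

Averaging the two factors weighted by applied N is what pulls the regional mean fertilizer-induced factor (0.93%) below the IPCC default of 1.25%.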
NASA Astrophysics Data System (ADS)
Zhong, Jia; Wei, Yuansong; Wan, Hefeng; Wu, Yulong; Zheng, Jiaxi; Han, Shenghui; Zheng, Bofu
2013-12-01
Greenhouse gas (GHG) emissions from animal manure management are of great concern in China. However, there are still great uncertainties in China's GHG inventory because the GHG emission factors partly rely on default values from the Intergovernmental Panel on Climate Change (IPCC) guidelines. The purpose of this study was to use a case study in Beijing to determine regional GHG emission factors for the combination of swine manure composting and land application of the compost, with both on-site examination and a life cycle assessment (LCA). The results showed that the total GHG emission factor was 240 kgCO2eq tDS-1 (dry solids), including a direct GHG emission factor of 115 kgCO2eq tDS-1 for swine manure composting and 48 kgCO2eq tDS-1 for land application of the compost. Of the total GHG emissions of 5.06 kgCH4 tDS-1 and 0.13 kgN2O tDS-1, swine manure composting contributed approximately 89% of CH4 emissions, while land application accounted for 92% of N2O emissions. In addition, the GHG emission profile of the full process in Beijing in 2015 and 2020 was predicted by scenario analysis. Composting followed by land application is a cost-effective approach to animal manure management in China considering GHG emissions.
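The per-tDS CH4 and N2O figures above can be converted to CO2-equivalents with GWP100 values; assuming the common AR4 values of 25 (CH4) and 298 (N2O), the two gases account for most, but not all, of the reported total factor, with the remainder presumably coming from the indirect stages counted in the LCA:

```python
# Convert the reported per-tDS CH4 and N2O masses into CO2-equivalents.
# GWP100 values are assumptions (IPCC AR4); the masses are from the record above.

GWP_CH4, GWP_N2O = 25.0, 298.0

def co2eq(ch4_kg, n2o_kg):
    """Direct CO2-equivalent (kg) from given CH4 and N2O masses."""
    return ch4_kg * GWP_CH4 + n2o_kg * GWP_N2O

direct = co2eq(5.06, 0.13)  # per tonne of dry solids, as reported above
```

Under these GWPs the CH4 and N2O sum to roughly 165 kgCO2eq tDS-1, against the study's total emission factor of 240 kgCO2eq tDS-1.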
Dornic, N; Ficheux, A S; Bernard, A; Roudot, A C
2017-08-01
The Notes of Guidance for the testing of cosmetic ingredients and their safety evaluation, published by the Scientific Committee on Consumer Safety (SCCS), is a document dedicated to ensuring the safety of European consumers. It contains useful data for risk assessment, such as default values for Skin Surface Area (SSA). A more in-depth study of anthropometric data across Europe reveals considerable variation. The default SSA value was derived from a study of the Dutch population, which is known to be one of the tallest in the world. This value could be inadequate for the shorter populations of Europe. Data were collected in a survey on cosmetic consumption in France. Probabilistic treatment of these data, together with analysis of the case of methylisothiazolinone, a sensitizer recently evaluated via a deterministic approach submitted to the SCCS, suggests that the default SSA value used in quantitative risk assessment might not be relevant for a significant share of the French female population. Other female populations of Southern Europe may also be excluded. This is of importance given that some studies show an increasing risk of developing skin sensitization among women. The disparities in anthropometric data across Europe should be taken into consideration.
Kauffman, J Boone; Bhomia, Rupesh K
2017-01-01
Globally, it is recognized that blue carbon ecosystems, especially mangroves, often sequester large quantities of carbon and are of interest for inclusion in climate change mitigation strategies. While 19% of the world's mangroves are in Africa, they are among the least investigated of all blue carbon ecosystems. We quantified total ecosystem carbon stocks in 33 different mangrove stands along the Atlantic coast of West-Central Africa from Senegal to Southern Gabon spanning large gradients of latitude, soil properties, porewater salinity, and precipitation. Mangrove structure ranged from low and dense stands that were <1 m in height and >35,000 trees ha-1 to tall and open stands >40 m in height and <100 trees ha-1. Tremendous variation in ecosystem carbon (C) stocks was measured, ranging from 154 to 1,484 Mg C ha-1. The mean total ecosystem carbon stock for all mangroves of West-Central Africa was 799 Mg C ha-1. Soils comprised an average of 86% of the total carbon stock. The greatest carbon stocks were found in the tall mangroves of Liberia and Gabon North, with means >1,000 Mg C ha-1. The lowest carbon stocks were found in the low mangroves of the semiarid region of Senegal (463 Mg C ha-1) and in mangroves on coarse-textured soils in Gabon South (541 Mg C ha-1). At the scale of the entirety of West-Central Africa, total ecosystem carbon stocks were poorly correlated with aboveground ecosystem carbon pools, precipitation, latitude and soil salinity (r2 ≤ 0.07 for all parameters). Based upon a sample of 158 sites from Africa, Asia and Latin America that were sampled in a similar manner to this study, the global mean of carbon stocks for mangroves is 885 Mg C ha-1. The ecosystem carbon stocks of mangroves for West-Central Africa are slightly lower than those of Latin America (940 Mg C ha-1) and Asia (1049 Mg C ha-1) but substantially higher than the default Intergovernmental Panel on Climate Change (IPCC) values for mangroves (511 Mg C ha-1). 
This study provides improved default (Tier 1) estimates of mangrove carbon stocks for Asia, Latin America, and West-Central Africa. PMID:29131832
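Soils dominate these stocks (86% on average); in standard blue-carbon field protocols the soil pool is computed per depth interval from bulk density, carbon concentration, and interval thickness, then added to the biomass pools. A minimal sketch of that arithmetic (hypothetical numbers; not the authors' code):

```python
def soil_c_stock_mg_ha(bulk_density_g_cm3, c_fraction, depth_cm):
    """Soil carbon stock (Mg C/ha) for one depth interval.

    Mg C/ha = bulk density (g/cm3) x carbon mass fraction x depth (cm) x 100,
    since 1 g/cm3 of soil over 1 cm depth on 1 ha weighs 100 Mg.
    """
    return bulk_density_g_cm3 * c_fraction * depth_cm * 100.0

def total_ecosystem_stock(biomass_c_mg_ha, soil_layers):
    """Total stock = aboveground/biomass pools + sum of soil depth intervals."""
    return biomass_c_mg_ha + sum(soil_c_stock_mg_ha(*layer) for layer in soil_layers)
```

For example, a stand with 112 Mg C/ha in biomass over 1 m of soil at 1.0 g/cm3 and 10% carbon would hold about 1,112 Mg C/ha, in the range reported above.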
NASA Astrophysics Data System (ADS)
Maruddani, Di Asih I.; Rosadi, Dedi; Gunardic, Abdurakhman
2015-02-01
The value of a corporate bond is conventionally expressed in terms of a zero-coupon bond. In practice, the most common form of debt instrument is the coupon bond, which allows early default before maturity as a safety covenant for the bondholder. This paper studies valuation of a one-period coupon bond, i.e., a coupon bond that pays a single coupon over the bond's life. The model gives the bondholder the right to reorganize the firm if its value falls below a given barrier. A revised first-passage-time approach is applied for the default time rule. As a result, formulas for equity, liability, and probability of default are derived for this model, using straightforward integration under risk-neutral pricing. As an application, a bond of Bank Rakyat Indonesia (BRI), one of the largest banks in Indonesia, is analyzed. Computation in R shows that the value of equity is IDR 453,724,549,000,000, the liability is IDR 2,657,394,000,000, and the probability of default is 5.645305E-47%.
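The abstract does not reproduce the paper's revised formulas; for context, the textbook first-passage probability that a lognormal firm value hits a default barrier before maturity (a sketch of the general approach, not the paper's one-period-coupon result) is:

```python
from math import log, sqrt, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def first_passage_default_prob(v0, barrier, r, sigma, t):
    """P(firm value hits the barrier before time t) for geometric Brownian
    motion with risk-neutral drift r and volatility sigma, given v0 > barrier."""
    nu = r - 0.5 * sigma ** 2
    a = log(barrier / v0)
    return (norm_cdf((a - nu * t) / (sigma * sqrt(t)))
            + (barrier / v0) ** (2.0 * nu / sigma ** 2)
            * norm_cdf((a + nu * t) / (sigma * sqrt(t))))
```

With a barrier far below firm value the probability is vanishingly small, consistent with the near-zero default probability reported for BRI.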
How IPCC Science-Policy Interactions Shape Its Policymaker Summaries
NASA Astrophysics Data System (ADS)
Mach, K. J.; Freeman, P. T.; Mastrandrea, M.; Field, C. B.
2016-12-01
Government approval is a defining feature of the Intergovernmental Panel on Climate Change (IPCC) assessment process. In plenary sessions with scientist authors, IPCC member governments discuss and agree each sentence of every report's summary for policymakers (SPM). This consensus-based approval builds joint ownership of scientific knowledge by both scientists and governments. The approval process and its resulting SPM revisions have received extensive attention in published anecdotes and perspectives, but without comprehensive evaluation to date. We present the results of an in-depth analysis of IPCC SPM revisions, providing an evidence basis for understanding a complex science-policy interaction. Revisions resulting from governmental review and approval expand SPMs. SPM text lengthens by 17 to 53% in recent assessment summaries. Political sensitivities and associated failures of consensus have led to prominent exceptions resulting in SPM contractions. Contrasting recent assertions, we find IPCC SPMs to be as readable as other professionally edited assessment summaries, for multiple measures of reading ease. Across metrics, some SPMs, but not all, become more readable through the revision process. We additionally examine each revision in an SPM for which we have deep familiarity. Most of the SPM's revisions occur prior to the in-person government-approval session, and they emphasize different purposes compared to revisions made during the approval session. Revisions prior to the in-person session largely pertain to clarity, scientific rigor, and explanation, whereas the subsequent in-person government-approval revisions place more emphasis on policy relevance, comprehensiveness of examples, and nuances of expert judgment. The value added in the IPCC government-approval process emerges through multiple stages of revision and approval, as scientists and governments together navigate a complex science-policy interaction.
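Readability of the SPMs can be scored with standard reading-ease formulas; the study uses multiple measures, which are not named in this summary. A sketch using Flesch Reading Ease with a crude vowel-group syllable heuristic (an assumption, not the authors' tooling):

```python
import re

def flesch_reading_ease(text):
    """Flesch Reading Ease: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).
    Syllables are approximated by counting vowel groups, a rough heuristic."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
    return 206.835 - 1.015 * len(words) / sentences - 84.6 * syllables / len(words)
```

Lower scores indicate harder text, so comparing SPM drafts before and after approval amounts to comparing such scores across revisions.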
Marquart, Hans; Warren, Nicholas D; Laitinen, Juha; van Hemmen, Joop J
2006-07-01
Dermal exposure needs to be addressed in regulatory risk assessment of chemicals. The models used so far are based on very limited data. The EU project RISKOFDERM has gathered a large number of new measurements on dermal exposure to industrial chemicals in various work situations, together with information on possible determinants of exposure. These data and information, together with some non-RISKOFDERM data were used to derive default values for potential dermal exposure of the hands for so-called 'TGD exposure scenarios'. TGD exposure scenarios have similar values for some very important determinant(s) of dermal exposure, such as amount of substance used. They form narrower bands within the so-called 'RISKOFDERM scenarios', which cluster exposure situations according to the same purpose of use of the products. The RISKOFDERM scenarios in turn are narrower bands within the so-called Dermal Exposure Operation units (DEO units) that were defined in the RISKOFDERM project to cluster situations with similar exposure processes and exposure routes. Default values for both reasonable worst case situations and typical situations were derived, both for single datasets and, where possible, for combined datasets that fit the same TGD exposure scenario. 
The following reasonable worst case potential hand exposures were derived from combined datasets: (i) loading and filling of large containers (or mixers) with large amounts (many litres) of liquids: 11,500 mg per scenario (14 mg cm-2 per scenario with surface of the hands assumed to be 820 cm2); (ii) careful mixing of small quantities (tens of grams in <1 l): 4.1 mg per scenario (0.005 mg cm-2 per scenario); (iii) spreading of (viscous) liquids with a comb on a large surface area: 130 mg per scenario (0.16 mg cm-2 per scenario); (iv) brushing and rolling of (relatively viscous) liquid products on surfaces: 6500 mg per scenario (8 mg cm-2 per scenario) and (v) spraying large amounts of liquids (paints, cleaning products) on large areas: 12,000 mg per scenario (14 mg cm-2 per scenario). These default values are considered useful for estimating exposure for similar substances in similar situations with low uncertainty. Several other default values based on single datasets can also be used, but lead to estimates with a higher uncertainty, due to their more limited basis. Sufficient analogy in all described parameters of the scenario, including duration, is needed to enable proper use of the default values. The default values lead to similar estimates as the RISKOFDERM dermal exposure model that was based on the same datasets, but uses very different parameters. Both approaches are preferred over older general models, such as EASE, that are not based on data from actual dermal exposure situations.
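The per-area defaults quoted in parentheses follow directly from the assumed 820 cm2 hand surface; a quick arithmetic check (sketch only):

```python
HAND_AREA_CM2 = 820.0  # surface of both hands assumed in the study

def per_area_mg_cm2(mg_per_scenario):
    """Convert a mass-per-scenario default to mg cm-2 over both hands."""
    return mg_per_scenario / HAND_AREA_CM2

# e.g. the loading/filling default: 11,500 mg over 820 cm2 is about 14 mg cm-2
```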
NASA Astrophysics Data System (ADS)
Brown, L.; Syed, B.; Jarvis, S. C.; Sneath, R. W.; Phillips, V. R.; Goulding, K. W. T.; Li, C.
A mechanistic model of N2O emission from agricultural soil (DeNitrification-DeComposition, DNDC) was modified for application to the UK, and was used as the basis of an inventory of N2O emission from UK agriculture in 1990. UK-specific input data were added to DNDC's database and the ability to simulate daily C and N inputs from grazing animals and applied animal waste was added to the model. The UK version of the model, UK-DNDC, simulated emissions from 18 different crop types on the 3 areally dominant soils in each county. Validation of the model at the field scale showed that predictions matched observations well. Emission factors for the inventory were calculated from estimates of N2O emission from UK-DNDC, in order to maintain direct comparability with the IPCC approach. These, along with activity data, were included in a transparent spreadsheet format. Using UK-DNDC, the estimate of N2O-N emission from UK current agricultural practice in 1990 was 50.9 Gg. This total comprised 31.7 Gg from the soil sector, 5.9 Gg from animals and 13.2 Gg from the indirect sector. The range of this estimate (using the range of soil organic C for each soil used) was 30.5-62.5 Gg N. Estimates of emissions in each sector were compared to those calculated using the IPCC default methodology. Emissions from the soil and indirect sectors were smaller with the UK-DNDC approach than with the IPCC methodology, while emissions from the animal sector were larger. The model runs suggested a relatively large emission from agricultural land that was not attributable to current agricultural practices (33.8 Gg in total, 27.4 Gg from the soil sector). This 'background' component is partly the result of historical agricultural land use. It is not normally included in inventories of emission, but would increase the total emission of N2O-N from agricultural land in 1990 to 78.3 Gg.
NASA Astrophysics Data System (ADS)
Wilson, D.; Dixon, S. D.; Artz, R. R. E.; Smith, T. E. L.; Evans, C. D.; Owen, H. J. F.; Archer, E.; Renou-Wilson, F.
2015-09-01
Drained peatlands are significant hotspots of carbon dioxide (CO2) emissions and may also be more vulnerable to fire with its associated gaseous emissions. Under the United Nations Framework Convention on Climate Change (UNFCCC) and the Kyoto Protocol, greenhouse gas (GHG) emissions from peatlands managed for extraction are reported on an annual basis. However, the Tier 1 (default) emission factors (EFs) provided in the IPCC 2013 Wetlands Supplement for this land use category may not be representative in all cases and countries are encouraged to move to higher-tier reporting levels with reduced uncertainty levels based on country- or regional-specific data. In this study, we quantified (1) CO2-C emissions from nine peat extraction sites in the Republic of Ireland and the United Kingdom, which were initially disaggregated by land use type (industrial versus domestic peat extraction), and (2) a range of GHGs that are released to the atmosphere with the burning of peat. Drainage-related methane (CH4) and nitrous oxide (N2O) emissions as well as CO2-C emissions associated with the off-site decomposition of horticultural peat were not included here. Our results show that net CO2-C emissions were strongly controlled by soil temperature at the industrial sites (bare peat) and by soil temperature and leaf area index at the vegetated domestic sites. Our derived EFs of 1.70 (±0.47) and 1.64 (±0.44) t CO2-C ha-1 yr-1 for the industrial and domestic sites respectively are considerably lower than the Tier 1 EF (2.8 ± 1.7 t CO2-C ha-1 yr-1) provided in the Wetlands Supplement. We propose that the difference between our derived values and the Wetlands Supplement value is due to differences in peat quality and, consequently, decomposition rates. 
Emissions from burning of the peat (g kg-1 dry fuel burned) were estimated to be approximately 1346 CO2, 8.35 methane (CH4), 218 carbon monoxide (CO), 1.53 ethane (C2H6), 1.74 ethylene (C2H4), 0.60 methanol (CH3OH), 2.21 hydrogen cyanide (HCN) and 0.73 ammonia (NH3), and this emphasises the importance of understanding the full suite of trace gas emissions from biomass burning. Our results highlight the importance of generating reliable Tier 2 values for different regions and land use categories. Furthermore, given that the IPCC Tier 1 EF was based on only 20 sites (all from Canada and Fennoscandia), we suggest that data from another 9 sites significantly expand the global data set and add a new region.
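An EF in t CO2-C ha-1 yr-1 is an annual integral of measured net CO2 exchange. Assuming, for illustration only, a constant mean flux (field derivations model the temperature and vegetation dependence described above), the unit conversion is:

```python
def nee_to_ef_t_c_ha_yr(mean_flux_mg_co2_m2_h):
    """Annual CO2-C emission factor (t CO2-C ha-1 yr-1) from a constant
    mean net CO2 flux in mg CO2 m-2 h-1.

    mg CO2 m-2 h-1 -> x 12/44 (C fraction of CO2) x 8760 h/yr
                      x 1e4 m2/ha / 1e9 mg/t
    """
    return mean_flux_mg_co2_m2_h * (12.0 / 44.0) * 8760.0 * 1e4 / 1e9
```

A sustained mean net emission of roughly 70 mg CO2 m-2 h-1 would correspond to about 1.7 t CO2-C ha-1 yr-1, the order of the EFs derived in the study.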
How prior preferences determine decision-making frames and biases in the human brain
Lopez-Persem, Alizée; Domenech, Philippe; Pessiglione, Mathias
2016-01-01
Understanding how option values are compared when making a choice is a key objective for decision neuroscience. In natural situations, agents may have a priori preferences that create default policies and shape the neural comparison process. We asked participants to make choices between items belonging to different categories (e.g., jazz vs. rock music). Behavioral data confirmed that the items taken from the preferred category were chosen more often and more rapidly, which qualified them as default options. fMRI data showed that baseline activity in classical brain valuation regions, such as the ventromedial Prefrontal Cortex (vmPFC), reflected the strength of prior preferences. In addition, evoked activity in the same regions scaled with the default option value, irrespective of the eventual choice. We therefore suggest that in the brain valuation system, choices are framed as comparisons between default and alternative options, which might save resources but induce a decision bias. DOI: http://dx.doi.org/10.7554/eLife.20317.001 PMID:27864918
Charm and beauty quark masses in the MMHT2014 global PDF analysis.
Harland-Lang, L A; Martin, A D; Motylinski, P; Thorne, R S
We investigate the variation in the MMHT2014 PDFs when we allow the heavy-quark masses [Formula: see text] and [Formula: see text] to vary away from their default values. We make PDF sets available in steps of [Formula: see text] and [Formula: see text], and present the variation in the PDFs and in the predictions. We examine the comparison to the HERA data on charm and beauty structure functions and note that in each case the heavy-quark data, and the inclusive data, have a slight preference for lower masses than our default values. We provide PDF sets with three and four active quark flavours, as well as the standard value of five flavours. We use the pole mass definition of the quark masses, as in the default MMHT2014 analysis, but briefly comment on the [Formula: see text] definition.
Regional landfills methane emission inventory in Malaysia.
Abushammala, Mohammed F M; Noor Ezlin Ahmad Basri; Basri, Hassan; Ahmed Hussein El-Shafie; Kadhum, Abdul Amir H
2011-08-01
The decomposition of municipal solid waste (MSW) in landfills under anaerobic conditions produces landfill gas (LFG) containing approximately 50-60% methane (CH4) and 30-40% carbon dioxide (CO2) by volume. CH4 has a global warming potential 21 times greater than CO2; thus, it poses a serious environmental problem. As landfills are the main method for waste disposal in Malaysia, the major aim of this study was to estimate the total CH4 emissions from landfills in all Malaysian regions and states for the year 2009 using the IPCC 1996 first-order decay (FOD) model, focusing on clean development mechanism (CDM) project applications to initiate emission reductions. Furthermore, the authors attempted to assess, in quantitative terms, the amount of CH4 that would be emitted from landfills in the period from 1981-2024 using the IPCC 2006 FOD model. The total CH4 emission using the IPCC 1996 model was estimated to be 318.8 Gg in 2009. The Northern region had the highest CH4 emission inventory, with 128.8 Gg, whereas the Borneo region had the lowest, with 24.2 Gg. It was estimated that Pulau Penang state produced the highest CH4 emission, 77.6 Gg, followed by the remaining states with emission values ranging from 38.5 to 1.5 Gg. Based on the IPCC 1996 FOD model, the total Malaysian CH4 emission was forecast to be 397.7 Gg by 2020. The IPCC 2006 FOD model estimated a 201 Gg CH4 emission in 2009, and estimates ranged from 98 Gg in 1981 to 263 Gg in 2024.
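The first-order decay calculation the abstract describes can be sketched as follows; the parameter values are illustrative Tier 1-style defaults, not the study's Malaysian inputs, and the exact worksheet conventions of the IPCC 1996 and 2006 models differ in detail:

```python
from math import exp

def fod_ch4(deposits, k=0.05, doc=0.15, doc_f=0.5, mcf=1.0, f_ch4=0.5):
    """Yearly CH4 generation from annual MSW deposits {year: mass landfilled},
    following the IPCC 2006 first-order decay shape.

    DDOCm = waste x DOC x DOCf x MCF; the fraction decomposing in year t from
    a deposit in year x is exp(-k(t-x-1)) - exp(-k(t-x)) (decay assumed to
    start the year after deposition); CH4 = decomposed C x F x 16/12.
    """
    out = {}
    for t in range(min(deposits), max(deposits) + 30):
        ddoc_decomposed = sum(
            w * doc * doc_f * mcf * (exp(-k * (t - x - 1)) - exp(-k * (t - x)))
            for x, w in deposits.items() if x < t)
        out[t] = ddoc_decomposed * f_ch4 * 16.0 / 12.0  # mass C -> mass CH4
    return out
```

The yearly terms telescope, so the cumulative CH4 over the horizon equals the total decomposable carbon times the fraction 1 - exp(-k x years), a useful mass-balance check.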
Code of Federal Regulations, 2011 CFR
2011-07-01
... Heat Values for Various Types of Fuel C Table C-1 to Subpart C of Part 98 Protection of Environment... Stationary Fuel Combustion Sources Pt. 98, Subpt. C, Table C-1 Table C-1 to Subpart C of Part 98—Default CO2... input from MSW and/or tires; and (c) small batch incinerators that combust no more than 1,000 tons of...
An Improved Approach to Estimate Methane Emissions from Coal Mining in China.
Zhu, Tao; Bian, Wenjing; Zhang, Shuqing; Di, Pingkuan; Nie, Baisheng
2017-11-07
China, the largest coal producer in the world, is responsible for over 50% of the total global methane (CH4) emissions from coal mining. However, the current emission inventory of CH4 from coal mining has large uncertainties because of the lack of localized emission factors (EFs). In this study, province-level CH4 EFs from coal mining in China were developed based on the data analysis of coal production and corresponding discharged CH4 emissions from 787 coal mines distributed in 25 provinces with different geological and operation conditions. Results show that the spatial distribution of CH4 EFs is highly variable, with values as high as 36 m3/t and as low as 0.74 m3/t. Based on newly developed CH4 EFs and activity data, an inventory of the province-level CH4 emissions was built for 2005-2010. Results reveal that the total CH4 emissions in China increased from 11.5 Tg in 2005 to 16.0 Tg in 2010. By constructing a gray forecasting model for CH4 EFs and a regression model for activity, the province-level CH4 emissions from coal mining in China are forecasted for the years 2011-2020. The estimates are compared with other published inventories. Our results have a reasonable agreement with USEPA's inventory and are lower by a factor of 1-2 than those estimated using the IPCC default EFs. This study could help guide CH4 mitigation policies and practices in China.
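The inventory arithmetic described here is emissions = coal production x EF, converted from gas volume to mass. A sketch assuming a CH4 density of about 0.67 kg m-3 (near 20 °C and 1 atm; the study's conversion convention may differ):

```python
def coal_ch4_gg(production_t, ef_m3_per_t, rho_kg_m3=0.67):
    """Mine- or province-level CH4 emissions in Gg.

    production (t coal) x EF (m3 CH4 / t coal) x density (kg/m3) -> kg,
    then / 1e6 kg per Gg.
    """
    return production_t * ef_m3_per_t * rho_kg_m3 / 1e6
```

For example, 100 Mt of coal at an EF of 10 m3/t yields about 670 Gg CH4, and the reported EF spread (0.74-36 m3/t) implies roughly a 50-fold range in per-tonne emissions between provinces.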
NASA Astrophysics Data System (ADS)
Senzeba, K. T.; Rajkumari, S.; Bhadra, A.; Bandyopadhyay, A.
2016-04-01
The snowmelt runoff model (SRM), based on the degree-day approach, has been employed to evaluate the change in snow-cover depletion and corresponding streamflow under different projected climatic scenarios for an eastern Himalayan catchment in India. Nuranang catchment, located in Tawang district of Arunachal Pradesh with an area of 52 km2, is selected for the present study, with an elevation range of 3143-4946 m above mean sea level. Satellite images from October to June of the selected hydrological year 2006-2007 were procured from the National Remote Sensing Centre, Hyderabad. Snow cover mapping is done using the NDSI method. Based on long-term meteorological data, temperature and precipitation data of the selected hydrological year are normalized to represent the present climatic condition. The projected temperature and precipitation data are downloaded from NCAR's GIS data portal for different emission scenarios (SRES), viz., A1B, A2, B1, and the IPCC commitment (non-SRES) scenario for different future years (2020, 2030, 2040 and 2050). Projected temperature and precipitation data are obtained at the desired location by spatially interpolating the gridded data and then by statistical downscaling using linear regression. Snow depletion curves for all projected scenarios are generated for the study area and compared with the conventional depletion curve for the present climatic condition. Changes in cumulative snowmelt depth for different future years are highest under A1B and lowest under the IPCC commitment scenario, whereas A2 and B1 values lie between the two. Percentage increase in streamflow for different future years follows almost the same trend as the change in precipitation from the present climate under all projected climatic scenarios. Hence, it is concluded that for small catchments having seasonal snow cover, the total streamflow under projected climatic scenarios in future years will be governed primarily by the change in precipitation and not by the change in snowmelt depth. 
Advancement of the depletion curves for different future years is highest under A1B and lowest under the IPCC commitment scenario; A2 and B1 values lie between the two.
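SRM's daily recursion combines degree-day melt over the snow-covered area with rainfall and a recession term. A one-step sketch of that structure (all coefficient values hypothetical, not the study's calibration):

```python
def srm_step(q_prev_m3s, t_mean_c, precip_cm, sca, ddf_cm_per_degc,
             area_km2, k_recession=0.9, runoff_coef=0.8, t_crit_c=0.0):
    """One daily step of a degree-day snowmelt runoff recursion (SRM-style).

    melt (cm) = DDF x positive degree-days; water input = runoff coefficient
    x (melt over snow-covered fraction + rain); 1 cm over 1 km2 = 1e4 m3;
    new discharge blends today's input with yesterday's flow via recession k.
    """
    melt_cm = ddf_cm_per_degc * max(t_mean_c - t_crit_c, 0.0)
    input_cm = runoff_coef * (melt_cm * sca + precip_cm)
    q_in_m3s = input_cm * area_km2 * 1e4 / 86400.0
    return q_in_m3s * (1.0 - k_recession) + q_prev_m3s * k_recession
```

In this structure, warming shifts the snow-covered-area term earlier in the season (advancing the depletion curve), while total annual flow tracks the precipitation input, consistent with the study's conclusion.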
Information relevant to KABAM and explanations of default parameters used to define the 7 trophic levels. KABAM is a simulation model used to predict pesticide concentrations in aquatic regions for use in exposure assessments.
Wang, Zhen; Scott, W Casan; Williams, E Spencer; Ciarlo, Michael; DeLeo, Paul C; Brooks, Bryan W
2018-04-01
Uncertainty factors (UFs) are commonly used during hazard and risk assessments to address uncertainties, including extrapolations among mammals and experimental durations. In risk assessment, default values are routinely used for interspecies extrapolation and interindividual variability. Whether default UFs are sufficient for various chemical uses or specific chemical classes remains understudied, particularly for ingredients in cleaning products. Therefore, we examined publicly available acute median lethal dose (LD50), and reproductive and developmental no-observed-adverse-effect level (NOAEL) and lowest-observed-adverse-effect level (LOAEL) values for the rat model (oral). We employed probabilistic chemical toxicity distributions to identify likelihoods of encountering acute, subacute, subchronic and chronic toxicity thresholds for specific chemical categories and ingredients in cleaning products. We subsequently identified thresholds of toxicological concern (TTC) and then various UFs for: 1) acute (LD50s)-to-chronic (reproductive/developmental NOAELs) ratios (ACRs), 2) exposure duration extrapolations (e.g., subchronic-to-chronic; reproductive/developmental), and 3) LOAEL-to-NOAEL ratios considering subacute/acute developmental responses. These ratios (95% CIs) were calculated from pairwise threshold levels using Monte Carlo simulations to identify UFs for all ingredients in cleaning products. Based on data availability, chemical category-specific UFs were also identified for aliphatic acids and salts, aliphatic alcohols, inorganic acids and salts, and alkyl sulfates. In a number of cases, derived UFs were smaller than default values (e.g., 10) employed by regulatory agencies; however, larger UFs were occasionally identified. Such UFs could be used by assessors instead of relying on default values. 
These approaches for identifying mammalian TTCs and diverse UFs represent robust alternatives to application of default values for ingredients in cleaning products and other chemical classes. Findings can also support chemical substitutions during alternatives assessment, data dossier development (e.g., read-across), identification of TTCs, and screening-level hazard and risk assessment when toxicity data are unavailable for specific chemicals.
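The UF derivation pairs threshold levels and characterizes their ratio distribution by Monte Carlo. A simplified sketch, resampling published endpoint values and reading off an upper percentile (not the authors' implementation, which models the underlying chemical toxicity distributions):

```python
import random

def monte_carlo_ratio_percentile(numerators, denominators, pct=0.95,
                                 n=100_000, seed=1):
    """Percentile of pairwise ratios (e.g. acute LD50 / chronic NOAEL) under
    random pairing, as a candidate uncertainty factor."""
    rng = random.Random(seed)
    ratios = sorted(rng.choice(numerators) / rng.choice(denominators)
                    for _ in range(n))
    return ratios[int(pct * n)]
```

A derived 95th-percentile acute-to-chronic ratio below 10 would suggest the default UF of 10 is conservative for that chemical category, matching the paper's finding for several categories.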
Code of Federal Regulations, 2014 CFR
2014-07-01
... Heat Values for Various Types of Fuel C Table C-1 to Subpart C of Part 98 Protection of Environment... Stationary Fuel Combustion Sources Pt. 98, Subpt. C, Table C-1 Table C-1 to Subpart C of Part 98—Default CO2... exception of ethylene. 2 Ethylene HHV determined at 41 °F (5 °C) and saturation pressure. 3 Use of this...
17 CFR 230.239T - Temporary exemption for eligible credit default swaps.
Code of Federal Regulations, 2010 CFR
2010-04-01
... be delivered if there is a credit-related event or whose value is used to determine the amount of the... a payout if there is a default or other credit event involving identified obligation(s) or... agreement; (iii) Notional amount upon which payment obligations are calculated; (iv) Credit-related events...
Revised spatially distributed global livestock emissions
NASA Astrophysics Data System (ADS)
Asrar, G.; Wolf, J.; West, T. O.
2015-12-01
Livestock play an important role in agricultural carbon cycling through consumption of biomass and emissions of methane. Quantification and spatial distribution of methane and carbon dioxide produced by livestock is needed to develop bottom-up estimates for carbon monitoring. These estimates serve as stand-alone international emissions estimates, as input to global emissions modeling, and as comparisons or constraints to flux estimates from atmospheric inversion models. Recent results for the US suggest that the 2006 IPCC default coefficients may underestimate livestock methane emissions. In this project, revised coefficients were calculated for cattle and swine in all global regions, based on reported changes in body mass, quality and quantity of feed, milk production, and management of living animals and manure for these regions. New estimates of livestock methane and carbon dioxide emissions were calculated using the revised coefficients and global livestock population data. Spatial distribution of population data and associated fluxes was conducted using the MODIS Land Cover Type 5, version 5.1 (i.e. MCD12Q1 data product), and a previously published downscaling algorithm for reconciling inventory and satellite-based land cover data at 0.05 degree resolution. Preliminary results for 2013 indicate greater emissions than those calculated using the IPCC 2006 coefficients. Global total enteric fermentation methane increased by 6%, while manure management methane increased by 38%, with variation among species and regions resulting in improved spatial distributions of livestock emissions. These new estimates of total livestock methane are comparable to other recently reported studies for the entire US and the State of California. These new regional/global estimates will improve the ability to reconcile top-down and bottom-up estimates of methane production as well as provide updated global estimates for use in development and evaluation of Earth system models.
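The bottom-up pattern described here is population x per-head emission coefficient, summed over species (and, in the study, distributed spatially). A sketch with hypothetical coefficients (the revised coefficients in the study are derived from body mass, feed, milk production, and manure management):

```python
def livestock_ch4_gg(head, enteric_ef_kg_head_yr, manure_ef_kg_head_yr):
    """National CH4 (Gg/yr) = heads x (enteric + manure EF, kg/head/yr),
    summed over species, / 1e6 kg per Gg."""
    return sum(head[s] * (enteric_ef_kg_head_yr[s] + manure_ef_kg_head_yr[s])
               for s in head) / 1e6
```

Raising the manure-management coefficients by 38%, as the revised estimates do globally, raises total emissions by the manure share of each species' total.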
NASA Astrophysics Data System (ADS)
Mohan, Riya Rachel
2018-04-01
Greenhouse gas (GHG) emissions are the major cause of global warming and climate change. Carbon dioxide (CO2) is the main GHG emitted through human activities, at the household level, by burning fuels for cooking and lighting. As per the 2006 methodology of the Intergovernmental Panel on Climate Change (IPCC), the energy sector is divided into sectors such as electricity generation, transport, fugitive emissions, 'other' sectors, etc. The 'other' sectors under energy include residential, commercial, agriculture and fisheries. Time-series GHG emission estimates were prepared for the residential, commercial, agriculture and fisheries sectors in India, for the period 2005 to 2014, to understand the historical emission changes in the 'other' sector. Sectoral activity data, with respect to fuel consumption, were collected from various ministry reports such as Indian Petroleum and Natural Gas Statistics, Energy Statistics, etc. The default emission factors from IPCC 2006 were used to calculate the emissions for each activity, and sector-wise CO2, CH4, N2O and CO2e emissions were compiled. It was observed that the residential sector generates the highest GHG emissions, followed by the agriculture/fisheries and commercial sectors. In the residential sector, LPG, kerosene and fuelwood are the major contributors of emissions, whereas diesel is the main contributor in the commercial, agriculture and fisheries sectors. CO2e emissions have been observed to rise at cumulative annual growth rates of 0.6%, 9.11%, 7.94% and 5.26% for the residential, commercial, agriculture and fisheries sectors, respectively. In addition, a comparative study of the sectoral inventories against the national inventories published by the Ministry of Environment, Forest and Climate Change for 2007 and 2010 was also performed.
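The IPCC 2006 Tier 1 pattern used here is emissions = fuel consumed x per-gas emission factor, then GWP-weighting to CO2e. A sketch (GWPs shown are AR4 100-year values and the EFs in the test are illustrative magnitudes; the inventory's exact choices may differ):

```python
GWP_100 = {"CO2": 1.0, "CH4": 25.0, "N2O": 298.0}  # AR4 100-yr GWPs (assumption)

def fuel_co2e_gg(fuel_tj, ef_kg_per_tj):
    """CO2e (Gg) from fuel use (TJ) and per-gas EFs (kg gas per TJ):
    sum over gases of activity x EF x GWP, / 1e6 kg per Gg."""
    return sum(fuel_tj * ef * GWP_100[g] for g, ef in ef_kg_per_tj.items()) / 1e6
```

Each fuel-sector pair (e.g. LPG in the residential sector) is evaluated this way and the results summed to the sector totals compared in the study.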
Agricultural soil greenhouse gas emissions: a review of national inventory methods.
Lokupitiya, Erandathie; Paustian, Keith
2006-01-01
Parties to the United Nations Framework Convention on Climate Change (UNFCCC) are required to submit national greenhouse gas (GHG) inventories, together with information on methods used in estimating their emissions. Currently agricultural activities contribute a significant portion (approximately 20%) of global anthropogenic GHG emissions, and agricultural soils have been identified as one of the main GHG source categories within the agricultural sector. However, compared to many other GHG sources, inventory methods for soils are relatively more complex and have been implemented only to varying degrees among member countries. This review summarizes and evaluates the methods used by Annex 1 countries in estimating CO2 and N2O emissions in agricultural soils. While most countries utilize the Intergovernmental Panel on Climate Change (IPCC) default methodology, several Annex 1 countries are developing more advanced methods that are tailored for specific country circumstances. Based on the latest national inventory reporting, about 56% of the Annex 1 countries use IPCC Tier 1 methods, about 26% use Tier 2 methods, and about 18% do not estimate or report N2O emissions from agricultural soils. More than 65% of the countries do not report CO2 emissions from the cultivation of mineral soils, organic soils, or liming, and only a handful of countries have used country-specific, Tier 3 methods. Tier 3 methods usually involve process-based models and detailed, geographically specific activity data. Such methods can provide more robust, accurate estimates of emissions and removals but require greater diligence in documentation, transparency, and uncertainty assessment to ensure comparability between countries. Availability of detailed, spatially explicit activity data is a major constraint to implementing higher tiered methods in many countries.
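For direct soil emissions, the IPCC Tier 1 default method discussed here reduces to N input times an emission factor, converted from N2O-N to N2O by the molecular mass ratio. A sketch using the 2006 Guidelines default EF1 = 0.01 kg N2O-N per kg N (country-specific Tier 2 factors replace this value):

```python
def tier1_direct_n2o(n_input, ef1=0.01):
    """Direct soil N2O (same mass unit as n_input):
    N2O = N applied x EF1 x 44/28 (N2O-N -> N2O)."""
    return n_input * ef1 * 44.0 / 28.0
```

Tier 2 keeps this structure with measured EFs, while Tier 3 replaces the whole equation with process models such as the UK-DNDC example above.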
Multisite experimental cost study of intensive psychiatric community care.
Rosenheck, R; Neale, M; Leaf, P; Milstein, R; Frisman, L
1995-01-01
A 2-year experimental cost study of 10 Intensive Psychiatric Community Care (IPCC) programs was conducted at Department of Veterans Affairs (VA) medical centers in the Northeast. High hospital users were randomly assigned to either IPCC (n = 454) or standard VA care (n = 419) at four neuropsychiatric (NP) and six general medical and surgical (GMS) hospitals. National computerized data were used to track all VA health care service usage and costs for 2 years following program entry. At 9 of the 10 sites, IPCC treatment resulted in reduced inpatient service usage. Overall, for IPCC patients compared with control patients, average inpatient usage was 89 days (33%) less, while average cost per patient (for IPCC, inpatient, and outpatient services) was $15,556 (20%) less. Additionally, costs for IPCC patients compared with control patients were $33,295 (29%) less at NP sites but were $6,273 (15%) greater at GMS sites. At both NP and GMS sites, costs were lower for IPCC patients in two subgroups: veterans over age 45 and veterans with high levels of inpatient service use before program entry. No interaction was noted between the impact of IPCC on costs and other clinical or sociodemographic characteristics. Similarly, no linear relationship was observed between the intensity of IPCC services and the impact of IPCC on VA costs, although the two sites that did not fully implement the IPCC program had the poorest results. With these sites excluded, the total cost of care for IPCC patients at GMS sites was $579 (3%) more per year than that for the control patients.
VizieR Online Data Catalog: FAMA code for stellar parameters and abundances (Magrini+, 2013)
NASA Astrophysics Data System (ADS)
Magrini, L.; Randich, S.; Friel, E.; Spina, L.; Jacobson, H.; Cantat-Gaudin, T.; Donati, P.; Baglioni, R.; Maiorca, E.; Bragaglia, A.; Sordo, R.; Vallenari, A.
2013-07-01
FAMA v.1, July 2013, distributed with MOOGv2013 and Kurucz models. Perl Codes: read_out2.pl read_final.pl driver.pl sclipping_26.0.pl sclipping_final.pl sclipping_26.1.pl confronta.pl fama.pl Model atmospheres and interpolator (Kurucz models): MODEL_ATMO MOOG_files: files to compile MOOG (the most recent version of MOOG can be obtained from http://www.as.utexas.edu/~chris/moog.html) FAMAmoogfiles: files to update when compiling MOOG OUTPUT: directory in which the results will be stored, contains a sm macro to produce final plots automoog.par: file with parameters for FAMA 1) OUTPUTdir 2) MOOGdir 3) modelsdir 4) 1.0 (default) percentage of the dispersion of FeI abundances to be considered to compute the errors on the stellar parameters; 1.0 means 100%, thus to compute e.g. the error on Teff we allow the code to find the Teff corresponding to a slope given by σ(FeI)/range(EP). 5) 1.2 (default) σ clipping for FeI lines 6) 1.0 (default) σ clipping for FeII lines 7) 1.0 (default) σ clipping for the other elements 8) 1.0 (default) value of the QP parameter; higher values mean less strong convergence criteria. star.iron: EWs in the correct format to test the code sun.par: initial parameters for the test (1 data file).
40 CFR 98.463 - Calculating GHG emissions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... generation using Equation TT-1 of this section. ER29NO11.004 Where: GCH4 = Modeled methane generation in....464(b)(4)(i), use a default value of 1.0. MCF = Methane correction factor (fraction). Use the default... paragraphs (a)(2)(ii)(A) and (B) of this section when historical production or processing data are available...
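Equation TT-1 referenced above is a first-order decay (FOD) model of landfill methane generation. The sketch below illustrates only the general FOD structure, not the regulatory text: the function name, parameter names, and every numeric value are illustrative assumptions rather than the defaults in 40 CFR 98.463.

```python
import math

def modeled_methane(waste_by_year, doc, doc_f, mcf, f, k, report_year):
    """First-order-decay estimate of methane generated in report_year,
    in the spirit of Equation TT-1: each year's landfilled waste W_x
    contributes W_x * DOC * DOC_F * MCF * F * (16/12)
    * (e^{-k(T-x-1)} - e^{-k(T-x)}), where T is the report year."""
    g = 0.0
    for x, w in waste_by_year.items():
        t = report_year - x  # age of this year's waste, in years
        if t <= 0:
            continue  # waste placed after the report year cannot contribute
        g += w * doc * doc_f * mcf * f * (16.0 / 12.0) * (
            math.exp(-k * (t - 1)) - math.exp(-k * t))
    return g

# Illustrative only: waste (metric tons) landfilled 2010-2012, reported for 2013
g2013 = modeled_methane({2010: 1000, 2011: 1200, 2012: 900},
                        doc=0.20, doc_f=0.5, mcf=1.0, f=0.5, k=0.05,
                        report_year=2013)
```

Fresher waste contributes more to a given report year, because the decay weight e^{-k(T-x-1)} - e^{-k(T-x)} shrinks as the waste ages.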
Estimation of CO2 emissions from waste incinerators: Comparison of three methods.
Lee, Hyeyoung; Yi, Seung-Muk; Holsen, Thomas M; Seo, Yong-Seok; Choi, Eunhwa
2018-03-01
Climate-relevant CO2 emissions from waste incineration were compared using three methods: making use of CO2 concentration data, converting O2 concentration and waste characteristic data, and using a mass balance method following Intergovernmental Panel on Climate Change (IPCC) guidelines. For the first two methods, CO2 and O2 concentrations were measured continuously from 24 to 86 days. The O2 conversion method in comparison to the direct CO2 measurement method had a 4.8% mean difference in daily CO2 emissions for four incinerators where analyzed waste composition data were available. However, the IPCC method had a higher difference of 13% relative to the direct CO2 measurement method. For three incinerators using designed values for waste composition, the O2 conversion and IPCC methods in comparison to the direct CO2 measurement method had mean differences of 7.5% and 89%, respectively. Therefore, the use of O2 concentration data measured for monitoring air pollutant emissions is an effective method for estimating CO2 emissions resulting from waste incineration. Copyright © 2017 Elsevier Ltd. All rights reserved.
Linguistic analysis of IPCC summaries for policymakers and associated coverage
NASA Astrophysics Data System (ADS)
Barkemeyer, Ralf; Dessai, Suraje; Monge-Sanz, Beatriz; Renzi, Barbara Gabriella; Napolitano, Giulio
2016-03-01
The Intergovernmental Panel on Climate Change (IPCC) Summary for Policymakers (SPM) is the most widely read section of IPCC reports and the main springboard for the communication of its assessment reports. Previous studies have shown that communicating IPCC findings to a variety of scientific and non-scientific audiences presents significant challenges to both the IPCC and the mass media. Here, we employ widely established sentiment analysis tools and readability metrics to explore the extent to which information published by the IPCC differs from the presentation of respective findings in the popular and scientific media between 1990 and 2014. IPCC SPMs clearly stand out in terms of low readability, which has remained relatively constant despite the IPCC’s efforts to consolidate and readjust its communications policy. In contrast, scientific and quality newspaper coverage has become increasingly readable and emotive. Our findings reveal easy gains that could be achieved in making SPMs more accessible for non-scientific audiences.
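The abstract above does not name the specific readability metric applied to the SPMs. As an illustration of the kind of metric involved, here is a minimal sketch of the Flesch Reading Ease score; the vowel-group syllable counter is a crude heuristic, and the helper name and sample sentences are hypothetical.

```python
import re

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: higher scores mean easier text.
    Syllables are approximated by counting vowel groups per word."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower())))
                    for w in words)
    n_words = max(1, len(words))
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (syllables / n_words)

# Short, simple sentences score far higher than dense, polysyllabic prose.
simple = flesch_reading_ease("The cat sat. The dog ran.")
dense = flesch_reading_ease(
    "Anthropogenic radiative forcing necessitates multidisciplinary "
    "mitigation strategies throughout intergovernmental institutions.")
```

The gap between the two scores mirrors the study's finding that SPM prose sits at the low-readability end of the scale relative to newspaper coverage.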
New chairman takes helm at Climate Change Panel
NASA Astrophysics Data System (ADS)
Showstack, Randy
An Indian industrial engineer and economist who supports the Kyoto Protocol, and who has sharply criticized the administration of George W. Bush on the climate change issue for not doing enough to curb greenhouse gas emissions, won the first-ever contested election for chairman of the Intergovernmental Panel on Climate Change (IPCC) during a meeting on 19 April.Rajendra Pachauri is the first representative from a developing country to chair the IPCC, a panel of about 2,500 experts on a wide range of areas related to climate change. The IPCC was established in 1988 by the World Meteorological Organization and the United Nations Environment Programme. In total, the IPCC currently includes 192 member states. Although the bulk of the IPCC's work is conducted by three technical working groups, the chairman plays a key role in facilitating the overall process of the IPCC, organizing the scientific debate within the IPCC, and serving as chief spokesman.
Entropy measure of credit risk in highly correlated markets
NASA Astrophysics Data System (ADS)
Gottschalk, Sylvia
2017-07-01
We compare the single and multi-factor structural models of corporate default by calculating the Jeffreys-Kullback-Leibler divergence between their predicted default probabilities when asset correlations are either high or low. Single-factor structural models assume that the stochastic process driving the value of a firm is independent of that of other companies. A multi-factor structural model, on the contrary, is built on the assumption that a single firm's value follows a stochastic process correlated with that of other companies. Our main results show that the divergence between the two models increases in highly correlated, volatile, and large markets, but that it is closer to zero in small markets, when asset correlations are low and firms are highly leveraged. These findings suggest that during periods of financial instability, when asset volatility and correlations increase, one of the models misreports actual default risk.
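Treating each model's predicted default probability as a Bernoulli distribution, the Jeffreys (symmetrised Kullback-Leibler) divergence used above has a simple closed form. A minimal sketch, with the probability values purely illustrative:

```python
import math

def jeffreys_divergence(p: float, q: float) -> float:
    """Jeffreys divergence KL(p||q) + KL(q||p) between two Bernoulli
    default-probability distributions; collapses to this closed form."""
    return (p - q) * math.log((p * (1 - q)) / (q * (1 - p)))

# Divergence grows as the two models' predicted default probabilities diverge.
close = jeffreys_divergence(0.05, 0.06)
far = jeffreys_divergence(0.05, 0.20)
```

When the single- and multi-factor models agree (p = q) the divergence is zero; it increases with the gap between their predicted probabilities, matching the paper's finding that the models diverge most in highly correlated, volatile markets.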
A Value-Added Model to Measure Higher Education Returns on Government Investment
ERIC Educational Resources Information Center
Sparks, Roland J.
2011-01-01
The cost of college is increasing faster than inflation with the government funding over 19 million student loans that have a current outstanding balance of over $850 billion in 2010. Student default rates for 2008 averaged 7% but for some colleges, default rates were as high as 46.8%. Congress is demanding answers from colleges and universities…
Code of Federal Regulations, 2012 CFR
2012-07-01
... 52.07 Biomass Fuels—Liquid mmBtu/gallon kg CO2/mmBtu Ethanol 0.084 68.44 Biodiesel 0.128 73.84 Biodiesel (100%) 0.128 73.84 Rendered Animal Fat 0.125 71.06 Vegetable Oil 0.120 81.55 1 Use of this default...
Solar cycle length hypothesis appears to support the IPCC on global warming
NASA Astrophysics Data System (ADS)
Laut, P.; Gundermann, J.
1998-12-01
Since the discovery of a striking correlation between 1-2-2-2-1 filtered solar cycle lengths and the 11-year running average of northern hemisphere land air temperatures, there have been widespread speculations as to whether these findings would rule out any significant contributions to global warming from the enhanced concentrations of greenhouse gases. The solar hypothesis (as we shall term this assumption) claims that solar activity causes a significant component of the global mean temperature to vary in phase opposite to the filtered solar cycle lengths. In an earlier article we have demonstrated that for data covering the period 1860-1980 the solar hypothesis does not rule out any significant contribution from man-made greenhouse gases and sulphate aerosols. The present analysis goes a step further. We analyse the period 1579-1987 and find that the solar hypothesis, instead of contradicting, appears to support the assumption of a significant warming due to human activities. We have tentatively corrected the historical northern hemisphere land air temperature anomalies by removing the assumed effects of human activities. These are represented by northern hemisphere land air temperature anomalies calculated as the contributions from man-made greenhouse gases and sulphate aerosols by using an upwelling diffusion-energy balance model similar to the model of Wigley and Raper [1993] employed in the Second Assessment Report of the Intergovernmental Panel on Climate Change (IPCC). It turns out that the agreement of the filtered solar cycle lengths with the corrected temperature anomalies is substantially better than with the historical anomalies, with the mean square deviation reduced by 36% for a climate sensitivity of 2.5°C, the central value of the IPCC assessment, and by 43% for the best-fit value of 1.7°C.
Therefore our findings support a total reversal of the common assumption that a verification of the solar hypothesis would challenge the IPCC assessment of man-made global warming.
Intra-operative peritoneal lavage for colorectal cancer
Passot, Guillaume; Mohkam, Kayvan; Cotte, Eddy; Glehen, Olivier
2014-01-01
Free cancer cells can be detected in peritoneal fluid at the time of colorectal surgery. Peritoneal lavage in colorectal surgery for cancer is not routinely used, and the prognostic significance of intraperitoneal free cancer cells (IPCC) remains unclear. Data concerning the technique of peritoneal lavage to detect IPCC and its timing with respect to colorectal resection are scarce. However, positive IPCC might be the first step of peritoneal spread in colorectal cancers, which could lead to early specific treatments. Because of the important heterogeneity of IPCC determination in reported studies, no treatment has been proposed for patients with positive IPCC. Herein, we provide an overview of IPCC detection and its impact on recurrence and survival, and we suggest further multi-institutional studies to evaluate new treatment strategies. PMID:24616569
Introduction of risk size in the determination of uncertainty factor UFL in risk assessment
NASA Astrophysics Data System (ADS)
Xue, Jinling; Lu, Yun; Velasquez, Natalia; Yu, Ruozhen; Hu, Hongying; Liu, Zhengtao; Meng, Wei
2012-09-01
The methodology for using uncertainty factors in health risk assessment has been developed over several decades. A default value is usually applied for the uncertainty factor UFL, which is used to extrapolate from the LOAEL (lowest observed adverse effect level) to the NAEL (no adverse effect level). Here, we have developed a new method that establishes a linear relationship between UFL and the additional risk level at the LOAEL based on the dose-response information, which represents a very important factor that should be carefully considered. This linear formula makes it possible to select UFL properly in the additional risk range from 5.3% to 16.2%. The results also indicate that the default value of 10 may not be conservative enough when the additional risk level at the LOAEL exceeds 16.2%. Furthermore, this novel method not only provides a flexible UFL instead of the traditional default value, but also ensures a conservative estimation of the UFL with fewer errors, and avoids the benchmark response selection involved in the benchmark dose method. These advantages can improve the estimation of the extrapolation starting point in risk assessment.
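The abstract fixes only the validity range (5.3%-16.2% additional risk) and implies that UFL = 10 is borderline at the upper end; it does not give the fitted coefficients. The sketch below therefore only illustrates the *shape* of such a linear rule: both anchor points, including the lower UFL value of 3.0, are hypothetical placeholders, not the paper's result.

```python
def ufl_linear(risk, lo=(0.053, 3.0), hi=(0.162, 10.0)):
    """Linear interpolation of UFL against additional risk at the LOAEL,
    valid on [5.3%, 16.2%]. Anchor UFL values are hypothetical; the abstract
    only fixes the validity range and that UFL = 10 is borderline at 16.2%."""
    (r0, u0), (r1, u1) = lo, hi
    if not (r0 <= risk <= r1):
        raise ValueError("outside the range where the linear rule applies")
    return u0 + (u1 - u0) * (risk - r0) / (r1 - r0)
```

A rule of this shape yields a smaller, less conservative UFL at low additional risk and approaches the traditional default of 10 near the top of the validated range.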
An Alternative Default Soil Organic Carbon Method for National GHG Inventory Reporting to the UNFCCC
NASA Astrophysics Data System (ADS)
Ogle, S. M.; Gurung, R.; Klepfer, A.; Spencer, S.; Breidt, J.
2016-12-01
Estimating soil organic C stocks is challenging because of the large amount of data needed to evaluate the impact of land use and management on this terrestrial C pool. Moreover, some of the required data are rarely collected by governments through surveys programs, and are not typically available in remote sensing products. Examples include data on organic amendments, cover crops, crop rotation sequences, vegetated fallows, and fertilization practices. Due to these difficulties, only about 20% of the countries report soil organic C stock changes in their national communications to the UNFCCC. Yet, C sequestration in soils represents one of the least expensive options for reducing greenhouse gas emissions, and has the largest potential for mitigation in the agricultural sector. In order to facilitate reporting, we developed an alternative approach to the current default method provided by the Intergovernmental Panel on Climate Change (IPCC) for estimating soil organic C stock changes in mineral soils. The alternative method estimates the steady-state C stocks for a three pool model given annual crop yields or net primary production as the main input, along with monthly average temperature, total precipitation and soil texture data. Yield data are commonly available in a national agricultural census, and global datasets exists with adequate data for weather and soil texture if national datasets are not available. Tillage and irrigation data are also needed to address the impact of these practices on decomposition rates. The change in steady-state stocks is assumed to occur over a few decades. A Bayesian analysis framework has been developed to derive probability distribution functions for the parameters, and the method is being applied in a global analysis of soil organic carbon stock changes.
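For a pool obeying first-order decay, dC_i/dt = alloc_i * I - k_i * C_i, the steady state is C_i* = alloc_i * I / k_i. A minimal sketch of that arithmetic for a generic three-pool model; the pool allocation fractions and rate constants below are illustrative, not the method's calibrated values:

```python
def steady_state_stocks(carbon_input, alloc, k):
    """Steady state of dC_i/dt = alloc_i * I - k_i * C_i, i.e.
    C_i* = alloc_i * I / k_i.
    carbon_input: annual C input (e.g. Mg C/ha/yr)
    alloc: fraction of input routed to each pool
    k: first-order decomposition rate of each pool (1/yr)"""
    return [a * carbon_input / ki for a, ki in zip(alloc, k)]

# Illustrative active / slow / passive pools (rates are hypothetical)
stocks = steady_state_stocks(3.0, alloc=(0.6, 0.3, 0.1), k=(2.0, 0.05, 0.002))
total = sum(stocks)
```

The slowly decomposing pools dominate the steady-state stock, which is why a change in management that shifts inputs or decomposition rates plays out over decades, as the abstract notes.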
Modeling fluctuations in default-mode brain network using a spiking neural network.
Yamanishi, Teruya; Liu, Jian-Qin; Nishimura, Haruhiko
2012-08-01
Recently, numerous attempts have been made to understand the dynamic behavior of complex brain systems using neural network models. The fluctuations in blood-oxygen-level-dependent (BOLD) brain signals at less than 0.1 Hz have been observed by functional magnetic resonance imaging (fMRI) for subjects in a resting state. This phenomenon is referred to as a "default-mode brain network." In this study, we model the default-mode brain network by functionally connecting neural communities composed of spiking neurons in a complex network. Through computational simulations of the model, including transmission delays and complex connectivity, the network dynamics of the neural system and its behavior are discussed. The results show that the power spectrum of the modeled fluctuations in the neuron firing patterns is consistent with the default-mode brain network's BOLD signals when transmission delays, a characteristic property of the brain, have finite values in a given range.
Zhang, Shuzi; Dai, Hehua; Wan, Ni; Moore, Yolonda; Dai, Zhenhua
2011-01-01
Background Insulin-producing cell clusters (IPCCs) have recently been generated in vitro from adipose tissue-derived stem cells (ASCs) to circumvent islet shortage. However, it is unknown how long they can survive upon transplantation, whether they are eventually rejected by recipients, and how their long-term survival can be induced to permanently cure type 1 diabetes. IPCC graft survival is critical for their clinical application and this issue must be systematically addressed prior to their in-depth clinical trials. Methodology/Principal Findings Here we found that IPCC grafts that differentiated from murine ASCs in vitro, unlike their freshly isolated islet counterparts, did not survive long-term in syngeneic mice, suggesting that ASC-derived IPCCs have intrinsic survival disadvantage over freshly isolated islets. Indeed, β cells retrieved from IPCC syngrafts underwent faster apoptosis than their islet counterparts. However, blocking both Fas and TNF receptor death pathways inhibited their apoptosis and restored their long-term survival in syngeneic recipients. Furthermore, blocking CD40-CD154 costimulation and Fas/TNF signaling induced long-term IPCC allograft survival in overwhelming majority of recipients. Importantly, Fas-deficient IPCC allografts exhibited certain immune privilege and enjoyed long-term survival in diabetic NOD mice in the presence of CD28/CD40 joint blockade while their islet counterparts failed to do so. Conclusions/Significance Long-term survival of ASC-derived IPCC syngeneic grafts requires blocking Fas and TNF death pathways, whereas blocking both death pathways and CD28/CD40 costimulation is needed for long-term IPCC allograft survival in diabetic NOD mice. Our studies have important clinical implications for treating type 1 diabetes via ASC-derived IPCC transplantation. PMID:22216347
Flight dynamics analysis and simulation of heavy lift airships, volume 4. User's guide: Appendices
NASA Technical Reports Server (NTRS)
Emmen, R. D.; Tischler, M. B.
1982-01-01
This table contains all of the input variables to the three programs. The variables are arranged according to the namelist groups in which they appear in the data files. The program name, subroutine name, definition and, where appropriate, a default input value and any restrictions are listed with each variable. The default input values are user supplied, not generated by the computer. These values remove a specific effect from the calculations, as explained in the table. The phrase "not used" indicates that a variable is not used in the calculations and is listed for identification purposes only. The engineering symbol, where it exists, is listed to assist the user in correlating these inputs with the discussion in the Technical Manual.
Use of Navier-Stokes methods for the calculation of high-speed nozzle flow fields
NASA Technical Reports Server (NTRS)
Georgiadis, Nicholas J.; Yoder, Dennis A.
1994-01-01
Flows through three reference nozzles have been calculated to determine the capabilities and limitations of the widely used Navier-Stokes solver, PARC. The nozzles examined have dominant flow characteristics similar to those considered for supersonic transport programs. Flows from an inverted velocity profile (IVP) nozzle, an underexpanded nozzle, and an ejector nozzle were examined. PARC calculations were obtained with its standard algebraic turbulence model (Thomas) and the two-equation turbulence model (Chien k-epsilon). The Thomas model was run with the mixing coefficient set at both the default value of 0.09 and a larger value of 0.13 to improve the mixing prediction. Calculations using the default value substantially underpredicted the mixing for all three flows. The calculations obtained with the higher mixing coefficient better predicted mixing in the IVP and underexpanded nozzle flows but adversely affected PARC's convergence characteristics for the IVP nozzle case. The ejector nozzle case did not converge with the Thomas model and the higher mixing coefficient. The Chien k-epsilon results were in better agreement with the experimental data overall than were those of the Thomas model run with the default mixing coefficient, but the default boundary conditions for k and epsilon underestimated the levels of mixing near the nozzle exits.
Jha, Arvind K; Sharma, C; Singh, Nahar; Ramesh, R; Purvaja, R; Gupta, Prabhat K
2008-03-01
Municipal solid waste generation rate is outpacing the population growth rate in all mega-cities in India. A greenhouse gas emission inventory for the landfills of Chennai has been generated by measuring site-specific emission factors in conjunction with relevant activity data, as well as by using the IPCC methodologies for CH4 inventory preparation. In Chennai, emission flux ranged from 1.0 to 23.5 mg CH4 m(-2) h(-1), 6 to 460 microg N2O m(-2) h(-1) and 39 to 906 mg CO2 m(-2) h(-1) at Kodungaiyur, and from 0.9 to 433 mg CH4 m(-2) h(-1), 2.7 to 1200 microg N2O m(-2) h(-1) and 12.3 to 964.4 mg CO2 m(-2) h(-1) at Perungudi. CH4 emissions from municipal solid waste management in Chennai were estimated to be about 0.12 Gg for the year 2000, which is lower than the value computed using IPCC, 1996 [IPCC, 1996. Report of the 12th session of the intergovernmental panel of climate change, Mexico City, 1996] methodologies.
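The IPCC (1996) default approach for CH4 from solid waste disposal is a mass-balance formula. A hedged sketch of its standard form follows; all input values below are illustrative placeholders, not Chennai's activity data.

```python
def ipcc_default_ch4(msw_t, msw_f, mcf, doc, doc_f, f, r=0.0):
    """IPCC default (mass-balance) method for annual CH4 from solid waste
    disposal, Gg CH4/yr:
        CH4 = MSW_T * MSW_F * MCF * DOC * DOC_F * F * (16/12) - R
    msw_t: total MSW generated (Gg/yr); msw_f: fraction landfilled;
    mcf: methane correction factor; doc: degradable organic carbon fraction;
    doc_f: fraction of DOC dissimilated; f: CH4 fraction of landfill gas;
    r: recovered CH4 (Gg/yr). 16/12 converts C to CH4 by molecular weight."""
    return msw_t * msw_f * mcf * doc * doc_f * f * (16.0 / 12.0) - r

# Illustrative inputs only; site-specific factors drove the lower Chennai estimate.
estimate = ipcc_default_ch4(500.0, 0.6, 0.6, 0.15, 0.77, 0.5)
```

Because every factor enters multiplicatively, replacing an IPCC default (e.g. DOC or MCF) with a measured site-specific value scales the whole estimate, which is how measured inventories can come in well below the default-based figure.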
Maruza, Magda; Albuquerque, Maria F P Militão; Coimbra, Isabella; Moura, Líbia V; Montarroyos, Ulisses R; Miranda Filho, Demócrito B; Lacerda, Heloísa R; Rodrigues, Laura C; Ximenes, Ricardo A A
2011-12-16
Concomitant treatment of Human Immunodeficiency Virus (HIV) infection and tuberculosis (TB) presents a series of challenges for treatment compliance for both providers and patients. We carried out this study to identify risk factors for default from TB treatment in people living with HIV. We conducted a cohort study to monitor HIV/TB co-infected subjects in Pernambuco, Brazil, on a monthly basis, until completion or default of treatment for TB. Logistic regression was used to calculate crude and adjusted odds ratios, 95% confidence intervals and P-values. From a cohort of 2310 HIV subjects, 390 individuals (16.9%) who had started treatment after a diagnosis of TB were selected, and data on 273 individuals who completed or defaulted on treatment for TB were analyzed. The default rate was 21.7% and the following risk factors were identified: male gender, smoking and CD4 T-cell count less than 200 cells/mm3. Age over 29 years, complete or incomplete secondary or university education and the use of highly active antiretroviral therapy (HAART) were identified as protective factors for the outcome. The results point to the need for more specific actions, aiming to reduce the default from TB treatment in males, younger adults with low education, smokers and people with CD4 T-cell counts < 200 cells/mm3. Default was less likely to occur in patients under HAART, reinforcing the strategy of early initiation of HAART in individuals with TB.
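The crude odds ratios reported in cohort analyses like this one come from a 2x2 table. A minimal sketch with a Wald-type 95% confidence interval; the counts below are hypothetical, not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed & defaulted,   b = exposed & completed,
    c = unexposed & defaulted, d = unexposed & completed."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: treatment default by smoking status
or_, lo, hi = odds_ratio_ci(30, 70, 29, 144)
```

An odds ratio above 1 with a confidence interval excluding 1 would flag the exposure (here, smoking) as a risk factor for default, as in the study's adjusted models.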
Pricing for a basket of LCDS under fuzzy environments.
Wu, Liang; Liu, Jie-Fang; Wang, Jun-Tao; Zhuang, Ya-Ming
2016-01-01
This paper looks at both the prepayment risks of housing mortgage loan credit default swaps (LCDS) and the fuzziness and hesitation of investors as regards prepayments by borrowers. It further discusses the first-default pricing of a basket of LCDS in a fuzzy environment by using stochastic analysis and triangular intuitionistic fuzzy set theory. Through the 'fuzzification' of the sensitivity coefficient in the prepayment intensity, this paper describes the dynamic features of mortgage housing values using the one-factor copula function and concludes with a formula for 'fuzzy' pricing of the first default of a basket of LCDS. Using analog simulation to analyze the sensitivity of hesitation, we derive a model for the fair LCDS premium in a fuzzy environment, including a purely random environment. In addition, the model also shows that a suitable pricing range will give investors more flexible choices and make the predictions of the model closer to real market values.
NASA Astrophysics Data System (ADS)
Fitton, N.; Datta, A.; Hastings, A.; Kuhnert, M.; Topp, C. F. E.; Cloy, J. M.; Rees, R. M.; Cardenas, L. M.; Williams, J. R.; Smith, K.; Chadwick, D.; Smith, P.
2014-09-01
The United Kingdom currently reports nitrous oxide emissions from agriculture using the IPCC default Tier 1 methodology. However, Tier 1 estimates have a large degree of uncertainty as they do not account for spatial variations in emissions. Therefore, biogeochemical models such as DailyDayCent (DDC) are increasingly being used to provide a spatially disaggregated assessment of annual emissions. Prior to use, an assessment of the ability of the model to predict annual emissions should be undertaken, coupled with an analysis of how model inputs influence model outputs, and of whether the modelled estimates are more robust than those derived from the Tier 1 methodology. The aims of the study were (a) to evaluate whether the DailyDayCent model can accurately estimate annual N2O emissions across nine different experimental sites, (b) to examine its sensitivity to different soil and climate inputs across a number of experimental sites and (c) to examine the influence of uncertainty in the measured inputs on modelled N2O emissions. DailyDayCent performed well across the range of cropland and grassland sites, particularly for fertilized fields, indicating that it is robust for UK conditions. The sensitivity of the model varied across the sites and also between fertilizer/manure treatments. Overall, our results showed that modelled N2O emissions were more sensitive to changes in soil pH and clay content than to the remaining input parameters used in this study. The lower the initial site values for soil pH and clay content, the more sensitive DDC was to changes from their initial value. When we compared modelled estimates with Tier 1 estimates for each site, we found that DailyDayCent provided a more accurate representation of the rate of annual emissions.
Regional Climate Change Hotspots over Africa
NASA Astrophysics Data System (ADS)
Anber, U.
2009-04-01
A Regional Climate Change Index (RCCI) is developed based on regional mean precipitation change, mean surface air temperature change, and change in precipitation and temperature interannual variability. The RCCI is a comparative index designed to identify the regions most responsive to climate change, or Hot-Spots. The RCCI is calculated for seven land regions over North Africa and the Arabian region from the latest set of climate change projections by 14 global climate models for the A1B, A2 and B1 IPCC emission scenarios. The concept of a climate change Hot-Spot can be approached from the viewpoint of vulnerability or from that of climate response. In the former case a Hot-Spot can be defined as a region for which potential climate change impacts on the environment or different activity sectors can be particularly pronounced. In the other case, a Hot-Spot can be defined as a region whose climate is especially responsive to global change. In particular, the characterization of response-based Hot-Spots can provide key information to identify and investigate climate change Hot-Spots based on results from a multi-model ensemble of climate change simulations performed by modeling groups from around the world as contributions to the Assessment Report of the Intergovernmental Panel on Climate Change (IPCC). The RCCI is defined based on four variables: change in regional mean surface air temperature relative to the global average temperature change (the Regional Warming Amplification Factor, RWAF), change in mean regional precipitation (ΔP, % of present-day value), change in regional surface air temperature interannual variability (ΔσT, % of present-day value), and change in regional precipitation interannual variability (ΔσP, % of present-day value). In the definition of the RCCI it is important to include quantities other than mean change because often mean changes are not the only important factors for specific impacts.
We thus also include interannual variability, which is critical for many activity sectors, such as agriculture and water management. The RCCI is calculated for the above-mentioned set of global climate change simulations and is intercompared across regions to identify climate change Hot-Spots, that is, regions with the largest values of RCCI. It is important to stress that, as will be seen, the RCCI is a comparative index: a small RCCI value does not imply a small absolute change, but only a small climate response compared to other regions. The models used are: CCMA-3-T47, CNRM-CM3, CSIRO-MK3, GFDL-CM2-0, GISS-ER, INMCM3, IPSL-CM4, MIROC3-2M, MIUB-ECHO-G, MPI-ECHAM5, MRI-CGCM2, NCAR-CCSM3, NCAR-PCM1, UKMO-HADCM3. Note that the 3 IPCC emission scenarios, A1B, B1 and A2, almost encompass the entire IPCC scenario range, the A2 being close to the high end of the range, the B1 close to the low end and the A1B lying toward the middle of the range. The model data are obtained from the IPCC site and are interpolated onto a common 1 degree grid to facilitate intercomparison. The RCCI is here defined as in Giorgi (2006), except that the entire year is divided into two six-month periods, DJFMAM and JJASON: RCCI = [n(ΔP) + n(ΔσP) + n(RWAF) + n(ΔσT)]_DJFMAM + [n(ΔP) + n(ΔσP) + n(RWAF) + n(ΔσT)]_JJASON (1)
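Equation (1) sums, over the two six-month seasons, integer index values n(.) assigned to each normalized change. A minimal sketch of that aggregation follows; the threshold-to-score mapping inside n(.) is an illustrative placeholder, not Giorgi's (2006) calibrated table.

```python
def n(change, thresholds=(0.05, 0.10, 0.15, 0.20), scores=(0, 1, 2, 4, 8)):
    """Map the magnitude of a normalized change onto an integer index value,
    as in the n(.) terms of the RCCI. Thresholds and scores here are
    illustrative placeholders, not the published calibration."""
    m = abs(change)
    for t, s in zip(thresholds, scores):
        if m < t:
            return s
    return scores[-1]  # change exceeds the largest threshold

def rcci(seasonal_changes):
    """seasonal_changes: one tuple per season (e.g. DJFMAM, JJASON) of
    (dP, d_sigma_P, RWAF, d_sigma_T) normalized changes; the RCCI is the
    sum of n(.) over all four variables and both seasons, as in Eq. (1)."""
    return sum(n(dp) + n(dsp) + n(rwaf) + n(dst)
               for dp, dsp, rwaf, dst in seasonal_changes)

# Hypothetical normalized changes for the two seasons of one region
score = rcci([(0.12, 0.06, 0.18, 0.03), (0.25, 0.11, 0.08, 0.02)])
```

Because every term is binned before summing, the RCCI ranks regions by relative responsiveness rather than by absolute change, which is exactly the comparative behaviour described above.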
Towards the Sixth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC)
NASA Astrophysics Data System (ADS)
Fuglestvedt, J. S.; Masson-Delmotte, V.; Zhai, P.; Pirani, A.
2016-12-01
The IPCC, set up in 1988 by WMO and UNEP, is the international body for assessing the science related to climate change. The reports of the IPCC include Assessments, Synthesis and Special Reports (and their Summaries for Policymakers), as well as Methodological Reports, providing policymakers with regular assessments of the scientific basis of climate change, its impacts and future risks, and options for adaptation and mitigation. These assessments are policy-relevant, but not policy-prescriptive, and based on the assessment of the published literature. The assessments of the IPCC follow precise procedures to ensure that they provide rigorous and balanced scientific information. Particularly critical is the voluntary involvement of dozens of scientists in the scoping of each report, as well as the work of hundreds of Coordinating Lead Authors and Lead Authors of reports, with the complementary expertise of hundreds of solicited Contributing Authors. The review process plays a key role in the open and transparent process underlying the IPCC reports. It is organized in multiple rounds and mobilizes thousands of other experts, a process monitored by Review Editors. The author teams develop rigorous methodologies to report the degree of confidence associated with each finding and to report information with its uncertainty. As a result, successive IPCC reports provide regular steps to determine matured climate science, through robust findings, but also emerging research pathways, and facilitate science maturation through analyses of multiple perspectives provided by the scientific literature in a comprehensive approach. While the IPCC does not conduct its own scientific research, the timeline of the IPCC reports acts as a stimulus for the research community, especially for internationally coordinated research programmes associated with global climate projections.
These aspects will be developed in this presentation, with a focus on Working Group I (the physical science basis) and the 6th Assessment Report (AR6). For more information, see: www.ipcc.ch. For the new special reports planned for 2018-2019: http://www.ipcc.ch/activities/activities.shtml. For the strategic planning schedule for the AR6: http://www.ipcc.ch/activities/pdf/ar6_WSPSchedule_07072016.pdf
Aggarwal, M; Fisher, P; Hüser, A; Kluxen, F M; Parr-Dobrzanski, R; Soufi, M; Strupp, C; Wiemann, C; Billington, R
2015-06-01
Dermal absorption is a key parameter in non-dietary human safety assessments for agrochemicals. Conservative default values and other criteria in the EFSA guidance have substantially increased generation of product-specific in vitro data and in some cases, in vivo data. Therefore, data from 190 GLP- and OECD guideline-compliant human in vitro dermal absorption studies were published, suggesting EFSA defaults and criteria should be revised (Aggarwal et al., 2014). This follow-up article presents data from an additional 171 studies and also the combined dataset. Collectively, the data provide consistent and compelling evidence for revision of EFSA's guidance. This assessment covers 152 agrochemicals, 19 formulation types and representative ranges of spray concentrations. The analysis used EFSA's worst-case dermal absorption definition (i.e., an entire skin residue, except for surface layers of stratum corneum, is absorbed). It confirmed previously proposed default values of 6% for liquid and 2% for solid concentrates, irrespective of active substance loading, and 30% for all spray dilutions, irrespective of formulation type. For concentrates, absorption from solvent-based formulations provided reliable read-across for other formulation types, as did water-based products for solid concentrates. The combined dataset confirmed that absorption does not increase linearly beyond a 5-fold increase in dilution. Finally, despite using EFSA's worst-case definition for absorption, a rationale for routinely excluding the entire stratum corneum residue, and ideally the entire epidermal residue in in vitro studies, is presented. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
VanderZaag, A. C.; MacDonald, J. D.; Evans, L.; Vergé, X. P. C.; Desjardins, R. L.
2013-09-01
Methane emissions from manure management represent an important mitigation opportunity, yet emission quantification methods remain crude and do not contain adequate detail to capture changes in agricultural practices that may influence emissions. Using the Canadian emission inventory methodology as an example, this letter explores three key aspects for improving emission quantification: (i) obtaining emission measurements to improve and validate emission model estimates, (ii) obtaining more useful activity data, and (iii) developing a methane emission model that uses the available farm management activity data. In Canada, national surveys to collect manure management data have been inconsistent and not designed to provide quantitative data. Thus, the inventory has not been able to accurately capture changes in management systems, even the basic distinction between solid and liquid manure storage. To address this, we re-analyzed four farm management surveys from the past decade and quantified the significant change in manure management, which can be linked to the annual agricultural survey to create a continuous time series. In the dairy industry of one province, for example, the percentage of manure stored as liquid increased by 300% between 1991 and 2006, which greatly affects the methane emission estimates. Methane emissions are greatest from liquid manure, but vary by an order of magnitude depending on how the liquid manure is managed. Even if more complete activity data are collected on manure storage systems, default Intergovernmental Panel on Climate Change (IPCC) guidance does not capture the impacts of management decisions in enough detail to reflect variation among farms and regions in inventory calculations. We propose a model that stays within the IPCC framework but would be more responsive to farm management by generating a matrix of methane conversion factors (MCFs) that account for key factors known to affect methane emissions: temperature, retention time and inoculum. 
This MCF matrix would be populated using a mechanistic emission model verified with on-farm emission measurements. Implementation of these MCF values will require re-analysis of farm surveys to quantify liquid manure emptying frequency and timing, and will rely on the continued collection of this activity data in the future. For model development and validation, emission measurement campaigns will be needed on representative farms over at least one full year, or manure management cycle (whichever is longer). The proposed approach described in this letter is long-term, but is required to establish baseline data for emissions from manure management systems. With these improvements, the manure management emission inventory will become more responsive to the changing practices on Canadian livestock farms.
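The MCF-matrix idea above fits inside the standard IPCC Tier 2 manure-methane equation. The following sketch illustrates that structure only; the matrix entries and input numbers are invented placeholders, not values proposed in the letter.

```python
# IPCC Tier 2 manure methane: CH4 (kg) = VS * B0 * 0.67 * MCF, where
# VS  = volatile solids excreted (kg),
# B0  = maximum methane-producing capacity (m3 CH4 per kg VS),
# 0.67 converts m3 CH4 to kg CH4, and
# MCF = methane conversion factor (fraction of B0 actually realized).

def methane_emission(vs_kg, b0, mcf):
    """CH4 emission in kg via the IPCC Tier 2 equation."""
    return vs_kg * b0 * 0.67 * mcf

# Hypothetical MCF matrix keyed by the three factors the letter identifies:
# temperature, retention time, and presence of inoculum (residual manure).
MCF_MATRIX = {
    ("cool", "short", False): 0.10,
    ("cool", "long", False): 0.17,
    ("cool", "long", True): 0.35,
    ("warm", "long", True): 0.55,
}

def lookup_mcf(temperature, retention, inoculum):
    """Pick an MCF from the (illustrative) management-condition matrix."""
    return MCF_MATRIX[(temperature, retention, inoculum)]

# Example farm: 1000 kg VS, B0 = 0.24, liquid storage with long retention
# and inoculum present under cool conditions.
emission = methane_emission(1000.0, 0.24, lookup_mcf("cool", "long", True))
print(emission)  # 56.28 kg CH4
```

The point of the matrix is that the same VS and B0 yield very different emissions depending on how the liquid manure is managed, which a single national MCF cannot express.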
Cotte, E; Peyrat, P; Piaton, E; Chapuis, F; Rivoire, M; Glehen, O; Arvieux, C; Mabrut, J-Y; Chipponi, J; Gilly, F-N
2013-07-01
In digestive cancers, the prognostic significance of intraperitoneal free cancer cells (IPCC) remains unclear. The main objective of this study was to assess the prognostic significance of IPCC in colorectal and gastric adenocarcinoma. The secondary objectives were to evaluate the predictive significance of IPCC for the development of peritoneal carcinomatosis (PC) and to evaluate the prevalence of synchronous PC and IPCC. This was a prospective multicentre study. All patients undergoing surgery for a digestive tract cancer had peritoneal cytology taken. Patients with gastric and colorectal cancer with no residual tumour after surgery and no evidence of PC were followed up for 2 years. The primary end point was overall survival. Between 2002 and 2007, 1364 patients were enrolled and 956 were followed up over 2 years. Prevalence of IPCC was 5.7% in colon cancer, 0.6% in rectal cancer and 19.5% in gastric cancer. The overall 2-year survival rate for patients with IPCC was 34.7% versus 86.8% for patients with negative cytology (p<0.0001). By multivariate analysis, IPCC was not an independent prognostic factor. No relationship between cytology and recurrence was found. The presence of IPCC was not an independent prognostic factor and did not add any prognostic information to the usual tumour-related prognostic factors (pTNM and differentiation). Moreover, the presence of IPCC detected with this method did not appear to predict development of PC. Peritoneal cytology using conventional staining does not seem to be a useful tool for the staging of colorectal and gastric cancers. Copyright © 2013 Elsevier Ltd. All rights reserved.
Bekelman, Justin E.; Suneja, Gita; Guzzo, Thomas; Evan Pollack, Craig; Armstrong, Katrina; Epstein, Andrew J.
2013-01-01
Purpose National attention has focused on whether urology-radiation oncology practice integration – known as integrated prostate cancer centers (IPCCs) – contributes to use of intensity-modulated radiation therapy (IMRT), a common and expensive treatment for prostate cancer. Methods We examined prostate cancer treatment patterns pre- and post-conversion of a urology practice to an IPCC in July 2006. Using the SEER-Medicare database, we identified patients age ≥ 65 years diagnosed in one state-wide registry with non-metastatic prostate cancer between 2004 and 2007 and classified patients into 3 groups: (1) those seen by IPCC physicians (exposure group); (2) those living in the same hospital referral region (HRR) and not seen by IPCC physicians (HRR-control group); and (3) those living elsewhere in the state (state-control group). We compared changes in treatment among the 3 groups, adjusting for patient, clinical, and socio-economic factors. Results Compared with the 8.1 percentage point (ppt) increase in adjusted IMRT use in the state-control group, IMRT increased 20.3 ppts (95% confidence interval [CI] 13.4, 27.1) in the IPCC group and 19.2 ppts (95% CI 9.6, 28.9) in the HRR-control group. Androgen-deprivation therapy (ADT), for which Medicare reimbursement declined sharply, decreased similarly in the IPCC and HRR-control groups. Prostatectomy declined significantly in the IPCC group. Conclusions Coincident with the conversion of a urology group practice to an IPCC, we observed increases in IMRT and decreases in ADT among patients seen by IPCC physicians and those seen in the surrounding healthcare market that were not observed in the remainder of the state. PMID:23399652
Cost-effectiveness of intensive psychiatric community care for high users of inpatient services.
Rosenheck, R A; Neale, M S
1998-05-01
This 2-year experimental study evaluated the effectiveness and cost of 10 intensive psychiatric community care (IPCC) programs at Department of Veterans Affairs medical centers in the northeastern United States. High users of inpatient services were randomly assigned to either IPCC or standard Department of Veterans Affairs care at 6 general medical and surgical hospitals (n=271 vs 257) and 4 neuropsychiatric hospitals (n=183 vs 162). Patient interviews every 6 months and national computerized data were used to assess clinical outcomes, health service use, health care costs, and non-health care costs. There was only 1 significant clinical difference between groups across follow-up periods: IPCC patients at general medical and surgical sites had higher community living skills. However, at the final interview, IPCC patients at general medical and surgical sites showed significantly lower symptoms, higher functioning, and greater satisfaction with services. Treatment with IPCC significantly reduced hospital use only at neuropsychiatric sites (320 vs 513 days, P<.001). Total societal costs, including the cost of IPCC, were lower for IPCC at neuropsychiatric sites ($82,454 vs $116,651, P<.001), but greater at general medical and surgical sites ($51,537 vs $46,491, P<.01). When 2 sites that incompletely implemented the model were dropped from the analysis, costs at general medical and surgical sites were $38 lower for IPCC (P=.26). At acute care hospitals, IPCC treatment is associated with greater long-term clinical improvement and, when fully implemented, is cost-neutral. At long-stay hospitals treating older, less-functional patients, it is not associated with clinical or functional improvement but generates substantial cost savings. Intensive psychiatric community care thus has beneficial, but somewhat different, outcome profiles at different types of hospitals.
Sharma, Rashmi K; Cameron, Kenzie A; Chmiel, Joan S; Von Roenn, Jamie H; Szmuilowicz, Eytan; Prigerson, Holly G; Penedo, Frank J
2015-11-10
Inpatient palliative care consultation (IPCC) may help address barriers that limit the use of hospice and the receipt of symptom-focused care for racial/ethnic minorities, yet little is known about disparities in the rates of IPCC. We evaluated the association between race/ethnicity and rates of IPCC for patients with advanced cancer. Patients with metastatic cancer who were hospitalized between January 1, 2009, and December 31, 2010, at an urban academic medical center participated in the study. Patient-level multivariable logistic regression was used to evaluate the association between race/ethnicity and IPCC. A total of 6,288 patients (69% non-Hispanic white, 19% African American, and 6% Hispanic) were eligible. Of these patients, 16% of whites, 22% of African Americans, and 20% of Hispanics had an IPCC (overall P < .001). Compared with whites, African Americans had a greater likelihood of receiving an IPCC (odds ratio, 1.21; 95% CI, 1.01 to 1.44), even after adjusting for insurance, hospitalizations, marital status, and illness severity. Among patients who received an IPCC, African Americans had a higher median number of days from IPCC to death compared with whites (25 v 17 days; P = .006), and were more likely than Hispanics (59% v 41%; P = .006), but not whites, to be referred to hospice. Inpatient settings may neutralize some racial/ethnic differences in access to hospice and palliative care services; however, irrespective of race/ethnicity, rates of IPCC remain low and occur close to death. Additional research is needed to identify interventions to improve access to palliative care in the hospital for all patients with advanced cancer. © 2015 by American Society of Clinical Oncology.
CMIP6 Citation Services and the Data Services of the IPCC Data Distribution Centre for AR6
NASA Astrophysics Data System (ADS)
Stockhause, Martina; Lautenschlager, Michael
2017-04-01
As a result of the experiences from CMIP5, the two services contributed by DKRZ to the CMIP research infrastructure have been improved for CMIP6: the Citation Services and the Services of the IPCC Data Distribution Centre (DDC, http://ipcc-data.org). 1. Data Citation Services: Within CMIP5 it took a couple of years before the data were citable via DataCite DOIs. The DataCite DOI registration by the WDC Climate at DKRZ (World Data Center Climate at the Climate Computing Center) requires data transfer and long-term archival at DKRZ according to the DDC's quality standards. Based on a request from WGCM (Working Group on Climate Models), an additional early citation possibility for the evolving CMIP6 data was added to the citation service (http://cmip6cite.wdc-climate.de). 2. IPCC DDC Services: WDC Climate has been hosting the IPCC DDC's Reference Data Archive for the climate model output underlying the IPCC Assessment Reports (ARs) since the Second Assessment Report in 1995. One task of the DDC is the support of the IPCC Working Groups (WGs) and their authors. The WG support was not sufficient for AR5, resulting in WG I setting up and maintaining its own CMIP5 data repository hosting a data subset. The DDC will open DKRZ's CMIP data pool as an additional DDC service for IPCC authors, exploiting synergies with the interests of the national climate community. Within the PICO, the Citation and the IPCC DDC services will be presented from a user's perspective. The connections to and integration into the infrastructure for CMIP6 (see https://www.earthsystemcog.org/projects/wip/) will be explained.
2011-01-01
Background Concomitant treatment of Human Immunodeficiency Virus (HIV) infection and tuberculosis (TB) presents a series of challenges for treatment compliance for both providers and patients. We carried out this study to identify risk factors for default from TB treatment in people living with HIV. Methods We conducted a cohort study to monitor HIV/TB co-infected subjects in Pernambuco, Brazil, on a monthly basis, until completion or default of treatment for TB. Logistic regression was used to calculate crude and adjusted odds ratios, 95% confidence intervals and P-values. Results From a cohort of 2310 HIV subjects, 390 individuals (16.9%) who had started treatment after a diagnosis of TB were selected, and data on 273 individuals who completed or defaulted on treatment for TB were analyzed. The default rate was 21.7% and the following risk factors were identified: male gender, smoking and CD4 T-cell count less than 200 cells/mm3. Age over 29 years, complete or incomplete secondary or university education and the use of highly active antiretroviral therapy (HAART) were identified as protective factors for the outcome. Conclusion The results point to the need for more specific actions, aiming to reduce the default from TB treatment in males, younger adults with low education, smokers and people with CD4 T-cell counts < 200 cells/mm3. Default was less likely to occur in patients under HAART, reinforcing the strategy of early initiation of HAART in individuals with TB. PMID:22176628
Quantitative and descriptive comparison of four acoustic analysis systems: vowel measurements.
Burris, Carlyn; Vorperian, Houri K; Fourakis, Marios; Kent, Ray D; Bolt, Daniel M
2014-02-01
This study examines the accuracy and comparability of 4 trademarked acoustic analysis software packages (AASPs): Praat, WaveSurfer, TF32, and CSL, using synthesized and natural vowels. Features of the AASPs are also described. Synthesized and natural vowels were analyzed using each AASP's default settings to secure 9 acoustic measures: fundamental frequency (F0), formant frequencies (F1-F4), and formant bandwidths (B1-B4). The discrepancy between the software-measured values and the input values (synthesized, previously reported, and manual measurements) was used to assess comparability and accuracy. Basic AASP features are described. Results indicate that Praat, WaveSurfer, and TF32 generate accurate and comparable F0 and F1-F4 data for synthesized vowels and adult male natural vowels. Results varied by vowel for women and children, with some serious errors. Bandwidth measurements by AASPs were highly inaccurate compared with manual measurements and published data on formant bandwidths. Values of F0 and F1-F4 are generally consistent and fairly accurate for adult vowels and for some child vowels using the default settings in Praat, WaveSurfer, and TF32. Manipulation of default settings yields improved output values in TF32 and CSL. Caution is recommended especially before accepting F1-F4 results for children and B1-B4 results for all speakers.
Wang, Yajing; Guo, Jingheng; Vogt, Rolf David; Mulder, Jan; Wang, Jingguo; Zhang, Xiaoshan
2018-02-01
Nitrous oxide (N2O) is a greenhouse gas that also plays the primary role in stratospheric ozone depletion. The use of nitrogen fertilizers is known as the major reason for the atmospheric N2O increase. Empirical bottom-up models therefore estimate agricultural N2O inventories using N loading as the sole predictor, disregarding regional heterogeneities in the inherent response of soils to external N loading. Several environmental factors have been found to influence the response of soil N2O emission to N fertilization, but their interdependence and relative importance have not been addressed properly. Here, we show that soil pH is the chief factor explaining regional disparities in N2O emission, using a global meta-analysis of 1,104 field measurements. The emission factor (EF) of N2O increases significantly (p < .001) as soil pH decreases. The default EF value of 1.0%, according to the IPCC (Intergovernmental Panel on Climate Change) for agricultural soils, occurs at soil pH 6.76. Moreover, the change in EF with N fertilization (i.e. ΔEF) is also negatively correlated (p < .001) with soil pH. This indicates that N2O emission in acidic soils is more sensitive to changing N fertilization than that in alkaline soils. Incorporating our findings into bottom-up models has significant consequences for regional and global N2O emission inventories and for reconciling them with those from top-down models. Moreover, our results allow region-specific development of tailor-made N2O mitigation measures in agriculture. © 2017 John Wiley & Sons Ltd.
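The emission factor underlying this analysis has a simple definition, which can be sketched as follows. The EF formula is the standard fertilizer-induced one; the linear EF(pH) function is purely illustrative, anchored only at the single point the abstract reports (EF = 1.0% at soil pH 6.76), with a made-up slope for the negative pH trend.

```python
# Fertilizer-induced N2O emission factor, as used in bottom-up inventories:
#   EF (%) = (E_fertilized - E_control) / N_applied * 100
# where emissions and N input are in the same units (e.g. kg N ha-1 yr-1).

def emission_factor(e_fert, e_ctrl, n_applied):
    """EF as the percentage of applied N emitted as N2O-N."""
    return (e_fert - e_ctrl) / n_applied * 100.0

def ef_from_ph(ph, slope=-0.4):
    """Hypothetical linear EF(pH) passing through the IPCC default
    (EF = 1.0%) at soil pH 6.76; the slope is an invented illustration
    of the reported negative correlation, not a fitted value."""
    return 1.0 + slope * (ph - 6.76)

# A plot with 3.5 kg N2O-N emitted, a 1.5 kg unfertilized control,
# and 200 kg N applied gives the IPCC default EF of 1.0%.
print(emission_factor(e_fert=3.5, e_ctrl=1.5, n_applied=200.0))  # 1.0
```

Under such a pH-dependent EF, an inventory would assign acidic soils a larger share of emissions per unit N than the flat 1.0% default does.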
Nitrous oxide emissions in Chinese vegetable systems: A meta-analysis.
Wang, Xiaozhong; Zou, Chunqin; Gao, Xiaopeng; Guan, Xilin; Zhang, Wushuai; Zhang, Yueqiang; Shi, Xiaojun; Chen, Xinping
2018-08-01
China accounts for more than half of the world's vegetable production, and identifying the contribution of vegetable production to nitrous oxide (N2O) emissions in China is therefore important. We performed a meta-analysis that included 153 field measurements of N2O emissions from 21 field studies in China. Our goal was to quantify N2O emissions and fertilizer nitrogen (N) based emission factors (EFs) in Chinese vegetable systems and to clarify the effects of rates and types of N fertilizer in both open-field and greenhouse systems. The results indicated that the intensive vegetable systems in China had an average N2O emission of 3.91 kg N2O-N ha⁻¹ and an EF of 0.69%. Although the EF was lower than the IPCC default value of 1.0%, the average N2O emission was generally greater than in other cropping systems due to greater input of N fertilizers. The EFs were similar in greenhouse vs. open-field systems, but N2O emissions were about 1.4 times greater in greenhouses. The EFs were not affected by N rate, but N2O emissions for both open-field and greenhouse systems increased with N rate. The total and fertilizer-induced N2O emissions, as well as EFs, were unaffected by the type of fertilizer in the greenhouse system under the same N rates. In addition to providing basic information about N2O emissions from Chinese vegetable systems, the results suggest that N2O emissions could be reduced without reducing yields by treating vegetable systems in China with a combination of synthetic N fertilizer and manure at optimized economic rates. Copyright © 2018 Elsevier Ltd. All rights reserved.
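The practical consequence of a measured EF of 0.69% versus the IPCC default of 1.0% appears when emissions are upscaled in a Tier 1-style calculation. The sketch below shows that arithmetic only; the N rate and planted area are invented inputs, not figures from the meta-analysis.

```python
# Tier 1-style direct N2O estimate: emission (kg N2O-N) = N input (kg) * EF.

def direct_n2o(n_input_kg, ef_percent):
    """Direct N2O-N emission in kg from a fertilizer N input and an EF in %."""
    return n_input_kg * ef_percent / 100.0

n_rate = 600.0    # kg N ha-1 yr-1, hypothetical intensive vegetable rate
area_ha = 1000.0  # hypothetical planted area

meta_ef = direct_n2o(n_rate * area_ha, 0.69)  # meta-analysis EF
ipcc_ef = direct_n2o(n_rate * area_ha, 1.00)  # IPCC default EF

# Using the default EF instead of the measured one inflates the estimate:
print(ipcc_ef - meta_ef)  # 1860.0 kg N2O-N over this hypothetical area
```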
VLSI (Very Large Scale Integration) Design Tools Reference Manual - Release 1.0.
1983-10-01
1. Pulse: PULSE(V1 V2 TD TR TF PW PER). Example: VIN 3 0 PULSE(-1 1 2NS 2NS 2NS 50NS 100NS). Parameters (default, units): V1 (initial value) Volts or Amps; V2 ... 2. Sinusoidal: SIN(VO VA FREQ TD THETA). Example: VIN 3 0 SIN(0 1 100MEG 1NS 1E10). Parameters (default value, units): VO (offset) Volts or Amps; VA (amplitude) Volts or Amps ... Value for TD to TSTOP: VO + VA*e^(-(t - TD)*THETA)*sin(2*pi*FREQ*(t + TD)). 3. Exponential: EXP(V1 V2 TD1 TAU1 TD2 TAU2). Example: VIN 3 0 EXP(-4 -1 2NS 30NS 60NS
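The SIN and EXP source waveforms described in this manual excerpt can be evaluated numerically. Here is a minimal Python sketch following the SPICE2 conventions as quoted (constant VO before TD, then a damped sine; two-exponential rise and fall for EXP); the function names and signatures are my own, not part of any SPICE tool.

```python
import math

def sin_source(t, vo, va, freq, td=0.0, theta=0.0):
    """SIN(VO VA FREQ TD THETA): VO for t < TD, damped sine afterwards,
    per the value formula quoted in the manual."""
    if t < td:
        return vo
    return vo + va * math.exp(-(t - td) * theta) * math.sin(2 * math.pi * freq * (t + td))

def exp_source(t, v1, v2, td1, tau1, td2, tau2):
    """EXP(V1 V2 TD1 TAU1 TD2 TAU2): exponential rise toward V2 after TD1,
    exponential return toward V1 after TD2."""
    v = v1
    if t >= td1:
        v += (v2 - v1) * (1.0 - math.exp(-(t - td1) / tau1))
    if t >= td2:
        v += (v1 - v2) * (1.0 - math.exp(-(t - td2) / tau2))
    return v

# VIN 3 0 SIN(0 1 100MEG 1NS 1E10): the source holds 0 V until TD = 1 ns.
print(sin_source(0.5e-9, 0.0, 1.0, 100e6, td=1e-9, theta=1e10))  # 0.0
```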
The decay of wood in landfills in contrasting climates in Australia
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ximenes, Fabiano, E-mail: fabiano.ximenes@dpi.nsw.gov.au; Björdal, Charlotte; Cowie, Annette
Highlights: • We examine decay in wood from landfills in contrasting environments in Australia. • Analysis is based on changes in chemical composition and microscopy. • Climate did not influence levels of decay observed. • Microscopy of retrieved samples revealed most of the decay was aerobic in nature. • Current default factors for wood decay in landfills overestimate methane emissions. - Abstract: Wood products in landfill are commonly assumed to decay within several decades, returning the carbon contained therein to the atmosphere, with about half the carbon released as methane. However, the rate and extent of decay is not well known, as very few studies have examined the decay of wood products in landfills. This study reports on the findings from landfill excavations conducted in the Australian cities of Sydney and Cairns located in temperate and tropical environments, respectively. The objective of this study was to determine whether burial of the wood in warmer, more tropical conditions in Cairns would result in greater levels of decay than occurs in the temperate environment of Sydney. Wood samples recovered after 16–44 years in landfill were examined through physical, chemical and microscopic analyses, and compared with control samples to determine the carbon loss. There was typically little or no decay in the wood samples analysed from the landfill in Sydney. Although there was significant decay in rainforest wood species excavated from Cairns, decay levels for wood types that were common to both Cairns and Sydney landfills were similar. The current Intergovernmental Panel on Climate Change (IPCC, 2006) default decay factor for organic materials in landfills is 50%. In contrast, the carbon loss determined for Pinus radiata recovered from Sydney and Cairns landfills was 7.9% and 4.4%, respectively, and 0% for Agathis sp. 
This suggests that climate did not influence decay, and that the more extensive levels of decay observed for some wood samples from Cairns indicates that those wood types were more susceptible to biodegradation. Microscopic analyses revealed that most decay patterns observed in samples analysed from Sydney were consistent with aerobic fungal decay. Only a minor portion of the microbial decay was due to erosion bacteria active in anaerobic/near anaerobic environments. The findings of this study strongly suggest that models that adopt current accepted default factors for the decay of wood in landfills greatly overestimate methane emissions.
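The size of the overestimate can be sketched with a back-of-envelope calculation contrasting the IPCC (2006) 50% default decay factor with the measured Pinus radiata carbon loss from Sydney (7.9%). The 1 tonne of wood carbon is an invented illustrative input; the 50% methane split follows the assumption stated in the abstract, and the 16/12 factor converts carbon mass to CH4 mass.

```python
# CH4 (kg) = wood carbon * fraction decomposed * fraction released as CH4
#            * 16/12 (molar mass ratio of CH4 to C).

def landfill_ch4(carbon_kg, decay_fraction, ch4_fraction=0.5):
    """Methane yield in kg from decomposed landfill wood carbon."""
    return carbon_kg * decay_fraction * ch4_fraction * 16.0 / 12.0

c = 1000.0  # kg of buried wood carbon (illustrative)
default_estimate = landfill_ch4(c, 0.50)    # IPCC 2006 default decay factor
measured_sydney = landfill_ch4(c, 0.079)    # Pinus radiata loss, Sydney

print(default_estimate / measured_sydney)   # overestimation factor, about 6.3
```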
76 FR 6651 - Intergovernmental Panel on Climate Change Special Report Review
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-07
... time that they accept the overall report. Principles and procedures for the IPCC and its preparation of..._documents/ipcc-principles-appendix-a.pdf (pdf) http://ipcc.ch/organization/organization_procedures.shtml In.... The following section of the report discusses risk management at the local, national and international...
Smith, Joel B; Schneider, Stephen H; Oppenheimer, Michael; Yohe, Gary W; Hare, William; Mastrandrea, Michael D; Patwardhan, Anand; Burton, Ian; Corfee-Morlot, Jan; Magadza, Chris H D; Füssel, Hans-Martin; Pittock, A Barrie; Rahman, Atiq; Suarez, Avelino; van Ypersele, Jean-Pascal
2009-03-17
Article 2 of the United Nations Framework Convention on Climate Change [United Nations (1992) http://unfccc.int/resource/docs/convkp/conveng.pdf. Accessed February 9, 2009] commits signatory nations to stabilizing greenhouse gas concentrations in the atmosphere at a level that "would prevent dangerous anthropogenic interference (DAI) with the climate system." In an effort to provide some insight into impacts of climate change that might be considered DAI, authors of the Third Assessment Report (TAR) of the Intergovernmental Panel on Climate Change (IPCC) identified 5 "reasons for concern" (RFCs). Relationships between various impacts reflected in each RFC and increases in global mean temperature (GMT) were portrayed in what has come to be called the "burning embers diagram." In presenting the "embers" in the TAR, IPCC authors did not assess whether any single RFC was more important than any other; nor did they conclude what level of impacts or what atmospheric concentrations of greenhouse gases would constitute DAI, a value judgment that would be policy prescriptive. Here, we describe revisions of the sensitivities of the RFCs to increases in GMT and a more thorough understanding of the concept of vulnerability that has evolved over the past 8 years. This is based on our expert judgment about new findings in the growing literature since the publication of the TAR in 2001, including literature that was assessed in the IPCC Fourth Assessment Report (AR4), as well as additional research published since AR4. Compared with results reported in the TAR, smaller increases in GMT are now estimated to lead to significant or substantial consequences in the framework of the 5 "reasons for concern."
Climate sensitivity uncertainty: when is good news bad?
Freeman, Mark C; Wagner, Gernot; Zeckhauser, Richard J
2015-11-28
Climate change is real and dangerous. Exactly how bad it will get, however, is uncertain. Uncertainty is particularly relevant for estimates of one of the key parameters: equilibrium climate sensitivity--how eventual temperatures will react as atmospheric carbon dioxide concentrations double. Despite significant advances in climate science and increased confidence in the accuracy of the range itself, the 'likely' range has been 1.5-4.5°C for over three decades. In 2007, the Intergovernmental Panel on Climate Change (IPCC) narrowed it to 2-4.5°C, only to reverse its decision in 2013, reinstating the prior range. In addition, the 2013 IPCC report removed prior mention of 3°C as the 'best estimate'. We interpret the implications of the 2013 IPCC decision to lower the bottom of the range and excise a best estimate. Intuitively, it might seem that a lower bottom would be good news. Here we ask: when might apparently good news about climate sensitivity in fact be bad news in the sense that it lowers societal well-being? The lowered bottom value also implies higher uncertainty about the temperature increase, definitely bad news. Under reasonable assumptions, both the lowering of the lower bound and the removal of the 'best estimate' may well be bad news. © 2015 The Author(s).
Methods for Assessing Uncertainties in Climate Change, Impacts and Responses (Invited)
NASA Astrophysics Data System (ADS)
Manning, M. R.; Swart, R.
2009-12-01
Assessing the scientific uncertainties or confidence levels for the many different aspects of climate change is particularly important because of the seriousness of potential impacts and the magnitude of economic and political responses that are needed to mitigate climate change effectively. This has made the treatment of uncertainty and confidence a key feature in the assessments carried out by the Intergovernmental Panel on Climate Change (IPCC). Because climate change is very much a cross-disciplinary area of science, adequately dealing with uncertainties requires recognition of their wide range and different perspectives on assessing and communicating those uncertainties. The structural differences that exist across disciplines are often embedded deeply in the corresponding literature that is used as the basis for an IPCC assessment. The assessment of climate change science by the IPCC has from its outset tried to report the levels of confidence and uncertainty in the degree of understanding in both the underlying multi-disciplinary science and in projections for future climate. The growing recognition of the seriousness of this led to the formation of a detailed approach for consistent treatment of uncertainties in the IPCC’s Third Assessment Report (TAR) [Moss and Schneider, 2000]. However, in completing the TAR there remained some systematic differences between the disciplines raising concerns about the level of consistency. So further consideration of a systematic approach to uncertainties was undertaken for the Fourth Assessment Report (AR4). The basis for the approach used in the AR4 was developed at an expert meeting of scientists representing many different disciplines. 
This led to the introduction of a broader way of addressing uncertainties in the AR4 [Manning et al., 2004] which was further refined by lengthy discussions among many IPCC Lead Authors, for over a year, resulting in a short summary of a standard approach to be followed for that assessment [IPCC, 2005]. This paper extends a review of the treatment of uncertainty in the IPCC assessments by Swart et al [2009]. It is shown that progress towards consistency has been made but that there also appears to be a need for continued use of several complementary approaches in order to cover the wide range of circumstances across different disciplines involved in climate change. While this reflects the situation in the science community, it also raises the level of complexity for policymakers and other users of the assessments who would prefer one common consensus approach. References IPCC (2005), Guidance Notes for Lead Authors of the IPCC Fourth Assessment Report on Addressing Uncertainties, IPCC, Geneva. Manning, M., et al. (2004), IPCC Workshop on Describing Scientific Uncertainties in Climate Change to Support Analysis of Risk and of Options. IPCC Moss, R., and S. Schneider (2000), Uncertainties, in Guidance Papers on the Cross Cutting Issues of the Third Assessment Report of the IPCC, edited by R. Pachauri, et al., Intergovernmental Panel on Climate Change (IPCC), Geneva. Swart, R., et al. (2009), Agreeing to disagree: uncertainty management in assessing climate change, impacts and responses by the IPCC Climatic Change, 92(1-2), 1 - 29.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levin, Alan; Chaves, Chris
2015-04-04
The Department of Energy (DOE) has performed an evaluation of the technical bases for the default value of the atmospheric dispersion parameter χ/Q. This parameter appears in the calculation of radiological dose at the onsite receptor location (co-located worker at 100 meters) in safety analyses of DOE nuclear facilities. The results of the calculation are then used to determine whether safety-significant engineered controls should be established to prevent and/or mitigate the event causing the release of hazardous material. An evaluation of methods for calculating the dispersion of potential chemical releases, for the purpose of estimating the chemical exposure at the co-located worker location, was also performed. DOE's evaluation consisted of: (a) a review of the regulatory basis for the default χ/Q dispersion parameter; (b) an analysis of this parameter's sensitivity to various factors that affect the dispersion of radioactive material; and (c) performance of additional independent calculations to assess the appropriate use of the default χ/Q value.
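χ/Q enters the dose estimate as a simple multiplicative dilution factor: dose ≈ source term × (χ/Q) × breathing rate × dose conversion factor. A hedged sketch of that chain (the function and all the numbers in the example are illustrative placeholders, not DOE defaults):

```python
def inhalation_dose(source_term_ci, chi_over_q, breathing_rate, dcf):
    """Illustrative inhalation dose estimate (rem).

    source_term_ci  -- activity released (Ci)
    chi_over_q      -- atmospheric dispersion factor at the receptor (s/m^3)
    breathing_rate  -- receptor breathing rate (m^3/s)
    dcf             -- dose conversion factor (rem per Ci inhaled)
    """
    return source_term_ci * chi_over_q * breathing_rate * dcf

# Made-up example: a 1 Ci release, chi/Q = 3.5e-3 s/m^3,
# breathing rate 3.5e-4 m^3/s, DCF = 1e3 rem/Ci.
dose = inhalation_dose(1.0, 3.5e-3, 3.5e-4, 1.0e3)
```

Because the dose is linear in χ/Q, any change to the default value scales the calculated co-located worker dose proportionally, which is why the technical basis for the default matters for control selection.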
Anterior Cingulate Engagement in a Foraging Context Reflects Choice Difficulty, Not Foraging Value
Shenhav, Amitai; Straccia, Mark A.; Cohen, Jonathan D.; Botvinick, Matthew M.
2014-01-01
Previous theories predict that human dorsal anterior cingulate (dACC) should respond to decision difficulty. An alternative theory has been recently advanced which proposes that dACC evolved to represent the value of “non-default,” foraging behavior, calling into question its role in choice difficulty. However, this new theory does not take into account that choosing whether or not to pursue foraging-like behavior can also be more difficult than simply resorting to a “default.” The results of two neuroimaging experiments show that dACC is only associated with foraging value when foraging value is confounded with choice difficulty; when the two are dissociated, dACC engagement is only explained by choice difficulty, and not the value of foraging. In addition to refuting this new theory, our studies help to formalize a fundamental connection between choice difficulty and foraging-like decisions, while also prescribing a solution for a common pitfall in studies of reward-based decision making. PMID:25064851
Djae, Tanalou; Bravin, Matthieu N; Garnier, Cédric; Doelsch, Emmanuel
2017-04-01
Parameterizing speciation models by setting the percentage of dissolved organic matter (DOM) that is reactive (% r-DOM) toward metal cations at a single 65% default value is very common in predictive ecotoxicology. The authors tested this practice by comparing the free copper activity (pCu 2+ = -log 10 [Cu 2+ ]) measured in 55 soil sample solutions with pCu 2+ predicted with the Windermere humic aqueous model (WHAM) parameterized by default. Predictions of Cu toxicity to soil organisms based on measured or predicted pCu 2+ were also compared. Default WHAM parameterization substantially skewed the prediction of measured pCu 2+ by up to 2.7 pCu 2+ units (root mean square residual = 0.75-1.3) and subsequently the prediction of Cu toxicity for microbial functions, invertebrates, and plants by up to 36%, 45%, and 59% (root mean square residuals ≤9 %, 11%, and 17%), respectively. Reparametrizing WHAM by optimizing the 2 DOM binding properties (i.e., % r-DOM and the Cu complexation constant) within a physically realistic value range much improved the prediction of measured pCu 2+ (root mean square residual = 0.14-0.25). Accordingly, this WHAM parameterization successfully predicted Cu toxicity for microbial functions, invertebrates, and plants (root mean square residual ≤3.4%, 4.4%, and 5.8%, respectively). Thus, it is essential to account for the real heterogeneity in DOM binding properties for relatively accurate prediction of Cu speciation in soil solution and Cu toxic effects on soil organisms. Environ Toxicol Chem 2017;36:898-905. © 2016 SETAC. © 2016 SETAC.
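The comparison above is between measured and WHAM-predicted free Cu²⁺ activity on the pCu scale, summarized by root mean square residuals. Those two steps can be sketched with generic helpers (not code from the study):

```python
import math

def pCu(cu2_activity):
    """pCu = -log10 of the free Cu2+ activity (mol/L)."""
    return -math.log10(cu2_activity)

def rms_residual(measured, predicted):
    """Root mean square residual between measured and model-predicted values."""
    n = len(measured)
    return math.sqrt(sum((m - p) ** 2 for m, p in zip(measured, predicted)) / n)
```

On this scale a skew of 2.7 pCu units corresponds to a roughly 500-fold error in the predicted free ion activity, which is why the default 65% r-DOM parameterization can translate into large toxicity-prediction errors.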
Simulating Soil C Stock with the Process-based Model CQESTR
NASA Astrophysics Data System (ADS)
Gollany, H.; Liang, Y.; Rickman, R.; Albrecht, S.; Follett, R.; Wilhelm, W.; Novak, J.; Douglas, C.
2009-04-01
The prospect of storing carbon (C) in soil, as soil organic matter (SOM), provides an opportunity for agriculture to contribute to the reduction of carbon dioxide in the atmosphere while enhancing soil properties. Soil C models are useful for examining the complex interactions between crop, soil management practices and climate and their effects on long-term carbon storage or loss. The process-based carbon model CQESTR, pronounced 'sequester', was developed by USDA-ARS scientists at the Columbia Plateau Conservation Research Center, Pendleton, Oregon, USA. It computes the rate of biological decomposition of crop residues or organic amendments as they convert to SOM. CQESTR uses readily available field-scale data to assess long-term effects of cropping systems or crop residue removal on SOM accretion/loss in agricultural soil. Data inputs include weather, above-ground and below-ground biomass additions, N content of residues and amendments, soil properties, and management factors such as tillage and crop rotation. The model was calibrated using information from six long-term experiments across North America (Florence, SC, 19 yrs; Lincoln, NE, 26 yrs; Hoytville, OH, 31 yrs; Breton, AB, 60 yrs; Pendleton, OR, 76 yrs; and Columbia, MO, >100 yrs) having a range of soil properties and climate. CQESTR was validated using data from several additional long-term experiments (8-106 yrs) across North America having a range of SOM (7.3-57.9 g SOM/kg). Regression analysis of 306 pairs of predicted and measured SOM data under diverse climate, soil texture and drainage classes, and agronomic practices at 13 agricultural sites resulted in a linear relationship with an r2 of 0.95 (P < 0.0001) and a 95% confidence interval of 4.3 g SOM/kg. Estimated SOC values from CQESTR and IPCC (the Intergovernmental Panel on Climate Change) were compared to observed values in three relatively long-term experiments (20-24 years).
At one site, CQESTR and IPCC estimates of SOC stocks were within 5% of each other for three rotations. At a second site, decreasing tillage intensity increased SOC stocks for winter wheat-fallow rotation for both observed and estimated values by CQESTR and IPCC. At the third site, CQESTR simulated an increase in SOC stocks with increased fertility levels, while IPCC estimates of SOC stocks did not reflect an increase. The CQESTR model successfully predicts SOM dynamics from various management practices and offers the potential for C sequestration planning for C credits or to guide crop residue removal for bio-energy production without degrading the soil resource, environmental quality, or productivity.
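Model evaluations like the one above reduce to regressing predicted against measured SOM and reporting r². A minimal sketch of that evaluation statistic (a generic helper, not the CQESTR code):

```python
def r_squared(observed, predicted):
    """Coefficient of determination for predicted vs. observed values:
    1 - SS_residual / SS_total."""
    n = len(observed)
    mean_obs = sum(observed) / n
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot
```

An r² of 0.95 over 306 prediction-measurement pairs, as reported for CQESTR, means only 5% of the variance in measured SOM is left unexplained by the model.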
Vanhaudenhuyse, Audrey; Noirhomme, Quentin; Tshibanda, Luaba J.-F.; Bruno, Marie-Aurelie; Boveroux, Pierre; Schnakers, Caroline; Soddu, Andrea; Perlbarg, Vincent; Ledoux, Didier; Brichant, Jean-François; Moonen, Gustave; Maquet, Pierre; Greicius, Michael D.
2010-01-01
The ‘default network’ is defined as a set of areas, encompassing posterior-cingulate/precuneus, anterior cingulate/mesiofrontal cortex and temporo-parietal junctions, that show more activity at rest than during attention-demanding tasks. Recent studies have shown that it is possible to reliably identify this network in the absence of any task, by resting state functional magnetic resonance imaging connectivity analyses in healthy volunteers. However, the functional significance of these spontaneous brain activity fluctuations remains unclear. The aim of this study was to test if the integrity of this resting-state connectivity pattern in the default network would differ in different pathological alterations of consciousness. Fourteen non-communicative brain-damaged patients and 14 healthy controls participated in the study. Connectivity was investigated using probabilistic independent component analysis, and an automated template-matching component selection approach. Connectivity in all default network areas was found to be negatively correlated with the degree of clinical consciousness impairment, ranging from healthy controls and locked-in syndrome to minimally conscious, vegetative then coma patients. Furthermore, precuneus connectivity was found to be significantly stronger in minimally conscious patients as compared with unconscious patients. Locked-in syndrome patients’ default network connectivity was not significantly different from controls. Our results show that default network connectivity is decreased in severely brain-damaged patients, in proportion to their degree of consciousness impairment. Future prospective studies in a larger patient population are needed in order to evaluate the prognostic value of the presented methodology. PMID:20034928
Environment Modeling Using Runtime Values for JPF-Android
NASA Technical Reports Server (NTRS)
van der Merwe, Heila; Tkachuk, Oksana; Nel, Seal; van der Merwe, Brink; Visser, Willem
2015-01-01
Software applications are developed to be executed in a specific environment. This environment includes external native libraries that add functionality to the application and drivers that fire the application execution. For testing and verification, the environment of an application is abstracted and simplified using models or stubs. Empty stubs, returning default values, are simple to generate automatically, but they do not perform well when the application expects specific return values. Symbolic execution is used to find input parameters for drivers and return values for library stubs, but it struggles to detect the values of complex objects. In this work-in-progress paper, we explore an approach to generate drivers and stubs based on values collected during runtime instead of using default values. Entry points and methods that need to be modeled are instrumented to log their parameters and return values. The instrumented applications are then executed using a driver and instrumented libraries. The values collected during runtime are used to generate driver and stub values on-the-fly that improve coverage during verification by enabling the execution of code that previously crashed or was missed. We are implementing this approach to improve the environment model of JPF-Android, our model checking and analysis tool for Android applications.
Shepherd, Anita; Yan, Xiaoyuan; Nayak, Dali; Newbold, Jamie; Moran, Dominic; Dhanoa, Mewa Singh; Goulding, Keith; Smith, Pete; Cardenas, Laura M.
2015-01-01
China accounts for a third of global nitrogen fertilizer consumption. Under an Intergovernmental Panel on Climate Change (IPCC) Tier 2 assessment, emission factors (EFs) are developed for the major crop types using country-specific data. IPCC advises a separate calculation for the direct nitrous oxide (N2O) emissions of rice cultivation from that of cropland and the consideration of the water regime used for irrigation. In this paper we combine these requirements in two independent analyses, using different data quality acceptance thresholds, to determine the influential parameters on emissions with which to disaggregate and create N2O EFs. Across China, the N2O EF for lowland horticulture was slightly higher (between 0.74% and 1.26% of fertilizer applied) than that for upland crops (values ranging between 0.40% and 1.54%), and significantly higher than for rice (values ranging between 0.29% and 0.66% on temporarily drained soils, and between 0.15% and 0.37% on un-drained soils). Higher EFs for rice were associated with longer periods of drained soil and the use of compound fertilizer; lower emissions were associated with the use of urea or acid soils. Higher EFs for upland crops were associated with clay soil, compound fertilizer or maize crops; lower EFs were associated with sandy soil and the use of urea. Variation in emissions for lowland vegetable crops was closely associated with crop type. The two independent analyses in this study produced consistent disaggregated N2O EFs for rice and mixed crops, showing that the use of influential cropping parameters can produce robust EFs for China. PMID:26865831
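A direct N2O emission factor of the kind disaggregated above is, in essence, the fertilizer-induced emission expressed as a percentage of N applied. A hedged sketch of that difference-method formula (the numbers in the example are invented):

```python
def n2o_emission_factor_pct(n2o_n_fertilized, n2o_n_control, n_applied):
    """Direct N2O emission factor (%): fertilizer-induced N2O-N loss per unit N applied.

    n2o_n_fertilized -- cumulative N2O-N emission from the fertilized plot (kg N/ha)
    n2o_n_control    -- cumulative N2O-N emission from the unfertilized control (kg N/ha)
    n_applied        -- fertilizer N applied (kg N/ha)
    """
    return (n2o_n_fertilized - n2o_n_control) / n_applied * 100.0

# Invented example: 2.0 kg N2O-N/ha with fertilizer, 0.8 kg without,
# 120 kg N/ha applied -> an EF of about 1%.
ef = n2o_emission_factor_pct(2.0, 0.8, 120.0)
```

Disaggregating by crop, soil, and water regime, as the paper does, amounts to computing such EFs separately within each stratum rather than pooling all measurements into one national value.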
Jeong, Sangjae; Nam, Anwoo; Yi, Seung-Muk; Kim, Jae Young
2015-02-01
According to IPCC guidelines, a semi-aerobic landfill site produces one-half of the amount of CH4 produced by an equally sized anaerobic landfill site; therefore, categorizing the landfill type is important for greenhouse gas inventories. In order to assess the semi-aerobic conditions at the sites and the MCF value for semi-aerobic landfills, landfill gas was measured from vent pipes in five semi-aerobically designed landfills in South Korea. All five sites satisfied the requirements for semi-aerobic landfills in the 2006 IPCC guidelines. However, the ends of the leachate collection pipes, which are the main entrance of air into a semi-aerobic landfill, were closed at all five sites. The CH4/CO2 ratio in landfill gas, an indicator of aerobic versus anaerobic decomposition, ranged from 1.08 to 1.46, which is higher than the values (0.3-1.0) reported for semi-aerobic landfill sites and rather close to those (1.0-2.0) for anaerobic landfill sites. The low CH4+CO2% in the landfill gas implied air intrusion into the landfill; however, there was no evidence that this air intrusion was caused by the semi-aerobic design and operation. Therefore, the landfills investigated in this study are difficult to classify as semi-aerobic landfills. Moreover, an MCF of 0.5 may significantly underestimate methane emissions compared with other studies. According to the carbon mass balance analyses, a higher MCF needs to be proposed for semi-aerobic landfills. Consequently, methane emission estimates should be based on field evaluation for semi-aerobically designed landfills. Copyright © 2015. Published by Elsevier Ltd.
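The factor-of-two assumption questioned above comes from the IPCC default (mass-balance) method, in which CH4 generation scales linearly with the methane correction factor (MCF). A simplified sketch of that scaling (the parameter defaults here are illustrative placeholders, not the guideline values for Korea):

```python
def landfill_ch4(waste_gg, mcf, doc=0.15, doc_f=0.5, f=0.5, ox=0.0):
    """Simplified IPCC default-method CH4 estimate (Gg CH4).

    waste_gg -- waste landfilled (Gg)
    mcf      -- methane correction factor (1.0 anaerobic, 0.5 semi-aerobic default)
    doc      -- degradable organic carbon fraction (illustrative value)
    doc_f    -- fraction of DOC that actually decomposes (illustrative value)
    f        -- fraction of CH4 in generated landfill gas
    ox       -- oxidation factor
    """
    return waste_gg * mcf * doc * doc_f * f * (16.0 / 12.0) * (1.0 - ox)

anaerobic = landfill_ch4(100.0, mcf=1.0)
semi_aerobic = landfill_ch4(100.0, mcf=0.5)
```

Because the estimate is linear in MCF, classifying an effectively anaerobic site as semi-aerobic (MCF = 0.5) halves the reported emission outright, which is the underestimation risk the study points to.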
Koga, Hiroyuki; Okawada, Manabu; Doi, Takashi; Miyano, Go; Lane, Geoffrey J; Yamataka, Atsuyuki
2015-10-01
During surgery for choledochal cyst (CC), any intrapancreatic CC (IPCC) must also be excised to prevent postoperative pancreatitis and stone formation. We report our technique for laparoscopic total IPCC excision (n = 16; mean age 6.0 years). We insert a fine ureteroscope with a light source into the opened CC through an extra 3.9-mm trocar placed in the epigastrium through a minute incision to identify the pancreatic duct orifice. By gently pulling the end of the ureteroscope emerging from the trocar to withdraw its tip from the pancreatic duct to the point where distal dissection ceased under laparoscopic view, the IPCC can be measured. If longer than 5 mm, the distal CC is dissected further caudally until it is less than 5 mm. For accuracy, the distal CC is elevated with a suture that is exteriorized and clamped to provide constant traction. The IPCC could be measured in 11 of 16 cases (68%). Initial lengths measured were 3-10 mm (5.2 ± 2.7 mm). Final IPCC lengths were all 5 mm or less. Surgery was uncomplicated, without any pancreatic duct injury, and postoperative recovery was unremarkable. Follow-up MRI at 32 months showed no IPCC in any case. Measuring the IPCC enables total CC excision, thus reducing the potential for postoperative complications.
USDA-ARS?s Scientific Manuscript database
Models are often used to quantify how land use change and management impact soil organic carbon (SOC) stocks because it is often not feasible to use direct measuring methods. Because models are simplifications of reality, it is essential to compare model outputs with measured values to evaluate mode...
MMA, A Computer Code for Multi-Model Analysis
Poeter, Eileen P.; Hill, Mary C.
2007-01-01
This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information, and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that they tend to favor more complicated models as more data become available than do the other methods, which makes sense in many situations. 
Many applications of MMA will be well served by the default methods provided. To use the default methods, the only required input for MMA is a list of directories where the files for the alternative models are located. Evaluation and development of model-analysis methods are active areas of research. To facilitate exploration and innovation, MMA allows the user broad discretion to define alternatives to the default procedures. For example, MMA allows the user to (a) rank models based on model criteria defined using a wide range of provided and user-defined statistics in addition to the default AIC, AICc, BIC, and KIC criteria, (b) create their own criteria using model measures available from the code, and (c) define how each model criterion is used to calculate related posterior model probabilities. The default model criteria rate models based on model fit to observations; the number of observations and estimated parameters; and, for KIC, the Fisher information matrix. In addition, MMA allows the analysis to include an evaluation of estimated parameter values. This is accomplished by allowing the user to define unreasonable estimated parameter values or relative estimated parameter values. An example of the latter is that one parameter value may be expected to be less than another, as might be the case if two parameters represented the hydraulic conductivity of distinct materials such as fine and coarse sand. Models with parameter values that violate the user-defined conditions are excluded from further consideration by MMA. Ground-water models are used as examples in this report, but MMA can be used to evaluate any set of models for which the required files have been produced. MMA needs to read files from a separate directory for each alternative model considered. The needed files are produced when using the Sensitivity-Analysis or Parameter-Estimation mode of UCODE_2005, or, possibly, the equivalent capability of another program.
MMA is constructed using
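The default discrimination criteria MMA provides (AIC, AICc, BIC, KIC) all feed the same final recipe: rescale each criterion relative to the best model and normalize into posterior model probabilities. A minimal sketch of that step, assuming the criterion values have already been computed for each calibrated model (generic formulas, not MMA's code):

```python
import math

def aicc(aic_value, n_obs, n_params):
    """Second-order bias-corrected AIC (AICc) from a plain AIC value."""
    return aic_value + 2.0 * n_params * (n_params + 1) / (n_obs - n_params - 1)

def posterior_model_probabilities(criteria):
    """Akaike-type model weights from any discrimination criterion (AIC/AICc/BIC/KIC).

    Lower criterion values indicate better models; the returned weights sum to 1.
    """
    best = min(criteria)
    rel_likelihoods = [math.exp(-0.5 * (c - best)) for c in criteria]
    total = sum(rel_likelihoods)
    return [r / total for r in rel_likelihoods]
```

These weights are what make model-averaged parameter estimates and predictions possible: each model's prediction is weighted by its posterior probability before summing.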
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-21
... Reviewers to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC) ACTION... Intergovernmental Panel on Climate Change (IPCC). SUMMARY: The U.S. Department of State invites recommendations for... Intergovernmental Panel on Climate Change (IPCC), which will be developed and finalized over the coming four years...
Code of Federal Regulations, 2012 CFR
2012-01-01
... or market value characteristics and the credit quality of transferred financial assets (together with... with maximizing the net present value of the financial asset. Servicers shall have the authority to modify assets to address reasonably foreseeable default, and to take other action to maximize the value...
[The climate debate: the facts].
van den Broeke, Michiel R
2009-01-01
The first report by the Intergovernmental Panel on Climate Change (IPCC) appeared almost 20 years ago. Environmental contamination has a negative effect on the environment in which we live. However, the public at large is confused about the ins and outs of climate change. Managers, politicians, various kinds of advisors, scientists, so-called experts, sceptics and journalists have all taken it upon themselves to lead the debate. Whose task is it to ensure a sound discussion? Surely it is the IPCC's task. However, most politicians and many journalists, and even many scientists, do not take the trouble to read the entire IPCC report or parts of it. As a consequence, much nonsense is published and broadcast. An effective procedure to deal with the climate problem starts with a fair discussion of the scientific evidence. My advice is: just read the free IPCC report: http://www.ipcc.ch/ and click on 'WG I The Physical Science Basis'.
The North American Forest Sector Outlook Study 2006-2030
Jeffrey P. Prestemon; Joseph Buongiorno
2012-01-01
Projections for the United States and Canada to 2030 have been made with a global model to account for concurrent changes in other countries. Three future scenarios were investigated: two IPCC-based scenarios assuming the rapid growth of wood-based energy, and one IPCC-based scenario without this assumption. The model, under the IPCC scenarios, accounted for trends in...
Popescu, V; Battaglini, M; Hoogstrate, W S; Verfaillie, S C J; Sluimer, I C; van Schijndel, R A; van Dijk, B W; Cover, K S; Knol, D L; Jenkinson, M; Barkhof, F; de Stefano, N; Vrenken, H
2012-07-16
Brain atrophy studies often use FSL-BET (Brain Extraction Tool) as the first step of image processing. Default BET does not always give satisfactory results on 3DT1 MR images, which negatively impacts atrophy measurements. Finding the right alternative BET settings can be a difficult and time-consuming task, which can introduce unwanted variability. Our aim was to systematically analyze the performance of BET in images of MS patients by varying its parameter and option combinations, and to quantitatively compare its results to a manual gold standard. Images from 159 MS patients were selected from different MAGNIMS consortium centers, covering 16 different 3DT1 acquisition protocols at 1.5 T or 3 T. Before running BET, one of three pre-processing pipelines was applied: (1) no pre-processing, (2) removal of neck slices, or (3) additional N3 inhomogeneity correction. Then BET was applied, systematically varying the fractional intensity threshold (the "f" parameter) and using either one of the main BET options ("B" - bias field correction and neck cleanup, "R" - robust brain center estimation, or "S" - eye and optic nerve cleanup) or none. For comparison, intracranial cavity masks were manually created for all image volumes. FSL-FAST (FMRIB's Automated Segmentation Tool) tissue-type segmentation was run on all BET output images and on the image volumes masked with the manual intracranial cavity masks (thus creating the gold-standard tissue masks). The resulting brain tissue masks were quantitatively compared to the gold standard using the Dice overlap coefficient (DOC). Normalized brain volumes (NBV) were calculated with SIENAX. NBV values obtained with SIENAX using BET settings other than the default were compared to gold-standard NBV with the paired t-test. The parameter/preprocessing/options combinations resulted in 20,988 BET runs. The median DOC for default BET (f=0.5, g=0) was 0.913 (range 0.321-0.977) across all 159 native scans.
For all acquisition protocols, brain extraction was substantially improved at lower values of "f" than the default. Using native images, optimum BET performance was observed for f=0.2 with option "B", giving a median DOC of 0.979 (range 0.867-0.994). Using neck removal before BET, optimum BET performance was observed for f=0.1 with option "B", giving a median DOC of 0.983 (range 0.844-0.996). When SIENAX was run with these BET options instead of the default, the NBV values obtained from images after neck removal with f=0.1 and option "B" did not differ statistically from the NBV values obtained with the gold standard. Although default BET performs reasonably well on most 3DT1 images of MS patients, its performance can be improved substantially. The removal of the neck slices, either externally or within BET, has a marked positive effect on brain extraction quality. BET option "B" with f=0.1 after removal of the neck slices seems to work best for all acquisition protocols. Copyright © 2012 Elsevier Inc. All rights reserved.
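The Dice overlap coefficient used above to score each BET run against the manual gold standard is a simple set-overlap measure on binary masks. A generic sketch (not the MAGNIMS pipeline code):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice overlap coefficient for two binary masks: 2|A intersect B| / (|A| + |B|)."""
    intersection = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    size_a = sum(1 for a in mask_a if a)
    size_b = sum(1 for b in mask_b if b)
    return 2.0 * intersection / (size_a + size_b)
```

A DOC of 1.0 means perfect agreement with the manual intracranial mask; the jump from a median of 0.913 (default settings) to 0.983 (f=0.1, option "B", neck removal) is what motivates the non-default settings.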
Zeng, Yeting; Wang, Xinrui; Xie, Feilai; Zheng, Zhiyong
2017-08-01
Ischemic pseudo-cellular crescent (IPCC), which is induced by ischemia and composed of hyperplastic glomerular parietal epithelial cells, resembles the cellular crescent. In this study, we aimed to assess the clinical and pathological features of IPCC in renal biopsy to avoid over-diagnosis and to determine the diagnostic basis. Four IPCC cases diagnosed over a 4-year period (2012-2015) were evaluated for the study. Meanwhile, 5 cases of ANCA-associated glomerulonephritis and 5 cases of lupus nephritis (LN) were selected as controls. Relevant clinical data, morphology, and immunohistochemical features of all cases were retrieved. Results showed that the basement membrane of glomeruli with IPCC appeared as a concentric twisted ball, glomerular cells of the lesion were reduced or even entirely absent, and the adjacent afferent arterioles showed sclerosis or luminal stenosis. Furthermore, immune globulin deposition, vasculitis, and fibrinous exudate were not observed in IPCC, while the cellular crescents showed diverse characteristics in both morphology and immunostaining in the control group. These results indicate that IPCC is a form of ischemic reactive hyperplasia associated with sclerosis, stenosis, or obstruction of the adjacent afferent arterioles, clearly different from cellular crescents resulting from glomerulonephritis. Copyright © 2017 Elsevier GmbH. All rights reserved.
NASA Astrophysics Data System (ADS)
Mörner, Nils-Axel
2014-05-01
Sea level may rise due to glacier melting, heat expansion of the oceanic water column, and redistribution of the water masses - all these factors can be handled as to rates and amplitudes (provided one knows what one is talking about). In key areas over the entire Indian Ocean and in many Pacific Islands there are no traces of any sea level rise over the last 40-50 years. This is also the case for test areas like Venice and the North Sea coasts. In the Kattegatt Sea one can fix the sea level factor to a maximum rise of 1.0-0.9 mm/year over the last century. The 204 tide gauges selected by NOAA for their global sea level monitoring provide a strong and sharp maximum (182 sites) in the range of 0.0-2.0 mm/yr. Satellite altimetry is said to give a rise of 3.2 mm/yr; this, however, is a value achieved after a quite subjective and surely erroneous "correction". The IPCC is talking about exceptionally much higher rates, and even worse are some "boy scouts'" desperate attempts to launch real horror ratios. Physical laws set the frames of the rate and amount of ice melting, and so do records of events in the past (i.e. the geological records). During the Last Ice Age so much ice was accumulated on land that the sea level dropped by about 120 m. When the process was reversed and ice melted under exceptionally strong climate forcing, sea level rose at a maximum rate of about 10 mm/yr (a meter per century). This can never happen under today's climate conditions. Even with the IPCC's hypothetical scenarios, the true sea level rise must be far less. When people like Rahmstorf (claiming 1 m or more by 2100) and Hansen (claiming a 4 m rise from 2080 to 2100) give their values, they exceed what is possible according to physical laws and accumulated geological knowledge. The expansion of the oceanic water column may reach amounts of sea level rise in the order of a few centimetres, at the most a decimetre.
Old temperature measurements may record a temperature rise over the last 50 years in the order of 0.4o C. The improved ARGO measurements starting 2004 give virtually no change, however. The physically possible amount of expansion decreases, of course, with the decreasing water columns towards the coasts, and at the coasts it is zero (±0.0 mm). The redistribution of water masses in response to the Earth's rotation, surface current beat, wind stress, air pressure, etc. is an important factor. It gives local to regional changes, cancelled out on the global scale, however. From a geoethical point of view, it is of course quite blameworthy that IPCC excels in spreading these horror scenarios of a rapid, even accelerating, sea level rise. Besides, modern understanding of the planetary-solar-terrestrial interaction shows that we are now on our way into grand solar minimum with severely colder climate - that is just the opposite to IPCC's talk about an accelerating warming. In science we should debate - but we should not dictate (as IPCC insist upon), and it is here the perspectives of geoethics comes into the picture.
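The abstract's thermosteric claim can be checked with back-of-envelope arithmetic: steric rise is roughly the expansion coefficient times the warming times the depth of the warmed column. The coefficient and the 700 m depth below are assumed typical values, not figures from the text; only the 0.4 °C warming is the abstract's own number.

```python
# Back-of-envelope thermosteric sea level rise: alpha * dT * column depth.
alpha = 2.0e-4      # 1/K, a typical upper-ocean thermal expansion value (assumed)
dT = 0.4            # K, the ~50-year warming the abstract cites
column_m = 700.0    # m, depth over which the warming is assumed to act

rise_cm = alpha * dT * column_m * 100.0
print(round(rise_cm, 1))   # a few centimetres, consistent with the abstract's claim
```

With these assumptions the result is about 5.6 cm, the "few centimetres" order of magnitude the abstract argues for.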
NASA Astrophysics Data System (ADS)
Di, Zhenhua; Duan, Qingyun; Wang, Chen; Ye, Aizhong; Miao, Chiyuan; Gong, Wei
2018-03-01
Forecasting skills of complex weather and climate models have been improved by tuning the sensitive parameters that exert the greatest impact on simulated results, using increasingly effective optimization methods. However, whether the optimal parameter values still work when the model simulation conditions vary is a scientific problem deserving study. In this study, a highly effective optimization method, adaptive surrogate model-based optimization (ASMO), was first used to tune nine sensitive parameters from four physical parameterization schemes of the Weather Research and Forecasting (WRF) model to obtain better summer precipitation forecasting over the Greater Beijing Area in China. Then, to assess the applicability of the optimal parameter values, simulation results from the WRF model with default and optimal parameter values were compared across precipitation events, boundary conditions, spatial scales, and physical processes in the Greater Beijing Area. Summer precipitation events from 6 years were used to calibrate and evaluate the optimal parameter values of the WRF model. Three boundary datasets and two spatial resolutions were adopted to evaluate the superiority of the calibrated optimal parameters over the default parameters under WRF simulations with different boundary conditions and spatial resolutions, respectively. Physical interpretations of how the optimal parameters improve the precipitation simulation results were also examined. All the results showed that the optimal parameters obtained by ASMO are superior to the default parameters for predicting summer precipitation in the Greater Beijing Area because the optimal parameters are not constrained by specific precipitation events, boundary conditions, or spatial resolutions.
The optimal values of the nine parameters were determined from only 127 parameter samples, which shows that the ASMO method is highly efficient for optimizing WRF model parameters.
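The adaptive surrogate idea - fit a cheap model to the expensive model's responses, minimize the surrogate, run the expensive model only at the proposed point, and refine - can be sketched in a toy one-parameter setting. The quadratic surrogate and the synthetic "model" below are stand-ins for ASMO's actual surrogate and for WRF; everything here is illustrative, not the paper's setup.

```python
import numpy as np

def expensive_model(x):
    # Stand-in for one costly WRF run: returns an error score to minimize.
    return (x - 0.3) ** 2 + 0.1 * np.sin(8.0 * x)

rng = np.random.default_rng(0)
X = list(rng.uniform(0.0, 1.0, 5))       # initial design over the parameter range
y = [expensive_model(x) for x in X]

for _ in range(20):                      # adaptive refinement: one true run per step
    coeffs = np.polyfit(X, y, deg=2)     # cheap quadratic surrogate of the response
    cand = np.linspace(0.0, 1.0, 1001)
    x_new = cand[np.argmin(np.polyval(coeffs, cand))]
    X.append(x_new)                      # add the surrogate's minimizer to the design
    y.append(expensive_model(x_new))

best_x = X[int(np.argmin(y))]
print(0.0 <= best_x <= 1.0)
```

The point of the design is that the expensive model is called once per iteration; the 127-sample budget reported above reflects the same economy at full scale.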
77 FR 46699 - Honey From the People's Republic of China: Preliminary Results of Review
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-06
... quantity and value, its separate rate status, structure and affiliations, sales process, accounting and... quantity and value, separate rate status, structure and affiliations, sales process, accounting and... (CIT August 10, 2009) (''Commerce may, of course, begin its total AFA selection process by defaulting...
Mackey, Scott; Olafsson, Valur; Aupperle, Robin L; Lu, Kun; Fonzo, Greg A; Parnass, Jason; Liu, Thomas; Paulus, Martin P
2016-09-01
Why a similar set of brain regions is associated with both the default mode network and value-related neural processes remains to be clarified. Here, we examined i) whether brain regions exhibiting willingness-to-pay (WTP) task-related activity are intrinsically connected when the brain is at rest, ii) whether these regions overlap spatially with the default mode network, and iii) whether individual differences in choice behavior during the WTP task are reflected in functional brain connectivity at rest. Blood-oxygen-level dependent (BOLD) signal was measured by functional magnetic resonance imaging while subjects performed the WTP task and at rest with eyes open. Brain regions that tracked the value of bids during the WTP task were used as seed regions in an analysis of functional connectivity in the resting state data. The seed in the ventromedial prefrontal cortex was functionally connected to core regions of the WTP task-related network. Brain regions within the WTP task-related network, namely the ventral precuneus, ventromedial prefrontal and posterior cingulate cortex, overlapped spatially with publicly available maps of the default mode network. Also, those individuals with higher functional connectivity during rest between the ventromedial prefrontal cortex and the ventral striatum showed greater preference consistency during the WTP task. Thus, WTP task-related regions are an intrinsic network of the brain that corresponds spatially with the default mode network, and individual differences in functional connectivity within the WTP network at rest may reveal a priori biases in choice behavior.
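The seed-based connectivity analysis the authors describe reduces to correlating a seed region's average time course with every other voxel. A sketch on synthetic data follows; the array sizes, the seed location, and the one planted "connected" voxel are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
T, V = 200, 500                        # timepoints, voxels (toy sizes)
bold = rng.standard_normal((T, V))     # preprocessed resting-state BOLD signal
seed_idx = [10, 11, 12]                # voxels of a hypothetical vmPFC seed
# Plant one voxel that genuinely co-fluctuates with the seed region.
bold[:, 40] = bold[:, seed_idx].mean(axis=1) + 0.1 * rng.standard_normal(T)

seed_ts = bold[:, seed_idx].mean(axis=1)           # average seed time course
z = (bold - bold.mean(axis=0)) / bold.std(axis=0)  # z-score every voxel
zs = (seed_ts - seed_ts.mean()) / seed_ts.std()
conn = z.T @ zs / T                                # Pearson r map, one value per voxel

print(conn[40] > 0.9)                              # the planted voxel stands out
```

In a real analysis the resulting r map would be Fisher-z transformed and tested across subjects; this sketch shows only the single-subject map construction.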
Neural correlates of childhood trauma with executive function in young healthy adults.
Lu, Shaojia; Pan, Fen; Gao, Weijia; Wei, Zhaoguo; Wang, Dandan; Hu, Shaohua; Huang, Manli; Xu, Yi; Li, Lingjiang
2017-10-03
The aim of this study was to investigate the relationship among childhood trauma, executive impairments, and altered resting-state brain function in young healthy adults. Twenty-four subjects with childhood trauma and 24 age- and gender-matched subjects without childhood trauma were recruited. Executive function was assessed by a series of validated test procedures. Localized brain activity was evaluated by the fractional amplitude of low-frequency fluctuations (fALFF) method and compared between the two groups. Areas with altered fALFF were further selected as seeds in a subsequent functional connectivity analysis. Correlations of fALFF and connectivity values with severity of childhood trauma and executive dysfunction were analyzed as well. Subjects with childhood trauma exhibited impaired executive function as assessed by the Wisconsin Card Sorting Test and Stroop Color Word Test. Individuals with childhood trauma also showed increased fALFF in the right precuneus and decreased fALFF in the right superior temporal gyrus. Significant correlations of specific childhood trauma severity with executive dysfunction and fALFF value in the right precuneus were found in the whole sample. In addition, individuals with childhood trauma exhibited diminished precuneus-based connectivity in the default mode network with the left ventromedial prefrontal cortex, left orbitofrontal cortex, and right cerebellum. Decreased default mode network connectivity was also associated with childhood trauma severity and executive dysfunction. The present findings suggest that childhood trauma is associated with executive deficits and aberrant default mode network functions even in healthy adults. Moreover, this study demonstrates that executive dysfunction is related to disrupted default mode network connectivity.
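fALFF, as used above, is the ratio of a voxel's spectral amplitude in the low-frequency band (conventionally 0.01-0.08 Hz) to its amplitude over the whole spectrum. A single-voxel sketch follows; the TR and the synthetic signal are invented, only the band limits follow the standard definition.

```python
import numpy as np

fs = 0.5                                  # Hz, sampling rate for a 2 s TR (assumed)
t = np.arange(240) / fs                   # an 8-minute toy voxel time series
rng = np.random.default_rng(2)
sig = np.sin(2 * np.pi * 0.05 * t) + 0.3 * rng.standard_normal(t.size)

freqs = np.fft.rfftfreq(sig.size, d=1.0 / fs)
amp = np.abs(np.fft.rfft(sig - sig.mean()))

low = (freqs >= 0.01) & (freqs <= 0.08)   # the standard low-frequency band
alff = amp[low].sum()                     # ALFF: summed low-band amplitude
falff = alff / amp[1:].sum()              # fALFF: fraction of total amplitude
print(0.0 < falff <= 1.0)
```

Because fALFF is a ratio, it is less sensitive than raw ALFF to non-neural broadband noise, which is why resting-state studies such as this one favor it.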
Tubiello, Francesco N; Salvatore, Mirella; Ferrara, Alessandro F; House, Jo; Federici, Sandro; Rossi, Simone; Biancalani, Riccardo; Condor Golec, Rocio D; Jacobs, Heather; Flammini, Alessandro; Prosperi, Paolo; Cardenas-Galindo, Paola; Schmidhuber, Josef; Sanz Sanchez, Maria J; Srivastava, Nalin; Smith, Pete
2015-01-10
We refine the information available through the IPCC AR5 with regard to recent trends in global GHG emissions from agriculture, forestry and other land uses (AFOLU), including global emission updates to 2012. Using all three available AFOLU datasets employed for analysis in the IPCC AR5, rather than just one as done in the IPCC AR5 WGIII Summary for Policy Makers, our analyses point to a down-revision of global AFOLU shares of total anthropogenic emissions, while providing important additional information on subsectoral trends. Our findings confirm that the share of AFOLU emissions to the anthropogenic total declined over time. They indicate a decadal average of 28.7 ± 1.5% in the 1990s and 23.6 ± 2.1% in the 2000s and an annual value of 21.2 ± 1.5% in 2010. The IPCC AR5 had indicated a 24% share in 2010. In contrast to previous decades, when emissions from land use (land use, land use change and forestry, including deforestation) were significantly larger than those from agriculture (crop and livestock production), in 2010 agriculture was the larger component, contributing 11.2 ± 0.4% of total GHG emissions, compared to 10.0 ± 1.2% of the land use sector. Deforestation was responsible for only 8% of total anthropogenic emissions in 2010, compared to 12% in the 1990s. Since 2010, the last year assessed by the IPCC AR5, new FAO estimates indicate that land use emissions have remained stable, at about 4.8 Gt CO₂ eq yr⁻¹ in 2012. Emissions minus removals have also remained stable, at 3.2 Gt CO₂ eq yr⁻¹ in 2012. By contrast, agriculture emissions have continued to grow, at roughly 1% annually, and remained larger than the land use sector, reaching 5.4 Gt CO₂ eq yr⁻¹ in 2012. These results are useful to further inform the current climate policy debate on land use, suggesting that more efforts and resources should be directed to further explore options for mitigation in agriculture, much in line with the large efforts devoted to REDD+ in the past decade.
© 2015 John Wiley & Sons Ltd.
The impact of manual threshold selection in medical additive manufacturing.
van Eijnatten, Maureen; Koivisto, Juha; Karhu, Kalle; Forouzanfar, Tymour; Wolff, Jan
2017-04-01
Medical additive manufacturing requires standard tessellation language (STL) models. Such models are commonly derived from computed tomography (CT) images using thresholding. Threshold selection can be performed manually or automatically. The aim of this study was to assess the impact of manual and default threshold selection on the reliability and accuracy of skull STL models using different CT technologies. One female and one male human cadaver head were imaged using multi-detector row CT, dual-energy CT, and two cone-beam CT scanners. Four medical engineers manually thresholded the bony structures on all CT images. The lowest and highest selected mean threshold values and the default threshold value were used to generate skull STL models. Geometric variations between all manually thresholded STL models were calculated. Furthermore, in order to calculate the accuracy of the manually and default thresholded STL models, all STL models were superimposed on an optical scan of the dry female and male skulls (the "gold standard"). The intra- and inter-observer variability of the manual threshold selection was good (intra-class correlation coefficients >0.9). All engineers selected grey values closer to soft tissue to compensate for bone voids. Geometric variations between the manually thresholded STL models were 0.13 mm (multi-detector row CT), 0.59 mm (dual-energy CT), and 0.55 mm (cone-beam CT). All STL models demonstrated inaccuracies ranging from -0.8 to +1.1 mm (multi-detector row CT), -0.7 to +2.0 mm (dual-energy CT), and -2.3 to +4.8 mm (cone-beam CT). This study demonstrates that manual threshold selection results in better STL models than default thresholding. The use of dual-energy CT and cone-beam CT technology in its present form does not deliver reliable or accurate STL models for medical additive manufacturing. New approaches are required, based on pattern recognition and machine learning algorithms.
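The thresholding step itself is a simple voxel classification, and it shows directly why the choice of threshold moves the STL surface. The toy volume and both threshold values below are invented; real skull segmentation operates on calibrated HU values and is followed by surface extraction (e.g. marching cubes).

```python
import numpy as np

rng = np.random.default_rng(3)
ct = rng.normal(40.0, 30.0, size=(64, 64, 64))   # toy soft-tissue intensity values
ct[20:40, 20:40, 20:40] += 800.0                 # a block of "bone" around 840

def bone_mask(volume, threshold):
    # Voxels at or above the threshold are the ones exported to the STL model.
    return volume >= threshold

default_mask = bone_mask(ct, 600.0)   # fixed vendor-style default threshold (assumed)
manual_mask = bone_mask(ct, 400.0)    # observer-chosen lower threshold (assumed)

# A lower threshold can only add voxels, so the manual surface grows outward;
# that outward/inward shift is what the study measures against the optical scan.
print(manual_mask.sum() >= default_mask.sum())
```

This monotonic relationship explains the finding that observers chose grey values closer to soft tissue: a lower threshold fills bone voids at the cost of thickening the model.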
The fundamental theorem of asset pricing under default and collateral in finite discrete time
NASA Astrophysics Data System (ADS)
Alvarez-Samaniego, Borys; Orrillo, Jaime
2006-08-01
We consider a financial market where time and uncertainty are modeled by a finite event-tree. The event-tree has a length of N, a unique initial node at the initial date, and a continuum of branches at each node of the tree. Prices and returns of J assets are modeled, respectively, by an ℝ^{2J} × ℝ^{2J}-valued stochastic process. In this framework we prove a version of the Fundamental Theorem of Asset Pricing which applies to defaultable securities backed by exogenous collateral suffering a contingent linear depreciation.
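For orientation, the frictionless benchmark that this theorem generalizes can be stated as follows. This is the textbook finite-horizon statement, with prices expressed in units of a numéraire; the paper's version modifies the pricing condition to account for collateral-backed default, so the display below is context, not the paper's result.

```latex
% Classical FTAP, finite discrete time, frictionless benchmark.
% The paper adapts this statement to collateralized, defaultable assets.
\text{No arbitrage}
\;\Longleftrightarrow\;
\exists\, \mathbb{Q} \sim \mathbb{P} \ \text{such that}\quad
S_t = \mathbb{E}_{\mathbb{Q}}\!\left[\, S_{t+1} \,\middle|\, \mathcal{F}_t \right],
\qquad t = 0, \dots, N-1.
```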
IPCC Report Calls Climate Changes Unprecedented
NASA Astrophysics Data System (ADS)
Showstack, Randy
2013-10-01
Warming of the Earth's climate "is unequivocal and since the 1950s many of the observed changes are unprecedented over decades to millennia," according to a new assessment report by the Intergovernmental Panel on Climate Change (IPCC). The 27 September summary for policy makers of IPCC's report "Climate Change 2013: The Physical Science Basis" also states that "it is extremely likely that human influence has been the dominant cause of the observed warming since the mid-20th century."
A General Linear Model (GLM) was used to evaluate the deviation of predicted values from expected values for a complex environmental model. For this demonstration, we used the default level interface of the Regional Mercury Cycling Model (R-MCM) to simulate epilimnetic total mer...
Hasker, Epco; Khodjikhanov, Maksad; Usarova, Shakhnoz; Asamidinov, Umid; Yuldashova, Umida; van der Werf, Marieke J; Uzakova, Gulnoz; Veen, Jaap
2008-07-22
In Tashkent (Uzbekistan), TB treatment is provided in accordance with the DOTS strategy. Of 1087 pulmonary TB patients started on treatment in 2005, 228 (21%) defaulted. This study investigates who the defaulters in Tashkent are, when they default and why they default. We reviewed the records of 126 defaulters (cases) and 132 controls and collected information on time of default, demographic factors, social factors, potential risk factors for default, characteristics of treatment and recorded reasons for default. Unemployment, being a pensioner, alcoholism and homelessness were associated with default. Patients defaulted mostly during the intensive phase, while they were hospitalized (61%), or just before they were to start the continuation phase (26%). Reasons for default listed in the records were various, 'Refusal of further treatment' (27%) and 'Violation of hospital rules' (18%) were most frequently recorded. One third of the recorded defaulters did not really default but continued treatment under 'non-DOTS' conditions. Whereas patient factors such as unemployment, being a pensioner, alcoholism and homelessness play a role, there are also system factors that need to be addressed to reduce default. Such system factors include the obligatory admission in TB hospitals and the inadequately organized transition from hospitalized to ambulatory treatment.
Zomer, Robert J; Neufeldt, Henry; Xu, Jianchu; Ahrends, Antje; Bossio, Deborah; Trabucco, Antonio; van Noordwijk, Meine; Wang, Mingcheng
2016-07-20
Agroforestry systems and tree cover on agricultural land make an important contribution to climate change mitigation, but are not systematically accounted for in either global carbon budgets or national carbon accounting. This paper assesses the role of trees on agricultural land and their significance for carbon sequestration at a global level, along with recent change trends. Remote sensing data show that in 2010, 43% of all agricultural land globally had at least 10% tree cover and that this has increased by 2% over the previous ten years. Combining geographically and bioclimatically stratified Intergovernmental Panel on Climate Change (IPCC) Tier 1 default estimates of carbon storage with this tree cover analysis, we estimated 45.3 PgC on agricultural land globally, with trees contributing >75%. Between 2000 and 2010 tree cover increased by 3.7%, resulting in an increase of >2 PgC (or 4.6%) of biomass carbon. On average, globally, biomass carbon increased from 20.4 to 21.4 tC ha⁻¹. Regional and country-level variation in stocks and trends were mapped and tabulated globally, and for all countries. Brazil, Indonesia, China and India had the largest increases in biomass carbon stored on agricultural land, while Argentina, Myanmar, and Sierra Leone had the largest decreases.
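The Tier 1 approach used here is pure bookkeeping: multiply the tree-covered area in each bioclimatic stratum by a default per-hectare carbon density, then sum. A minimal sketch, with both the areas and the per-hectare defaults entirely hypothetical (the study's stratification is far finer):

```python
# Tier 1 style bookkeeping: biomass carbon = area per stratum x default density.
# Both the areas and the per-hectare defaults below are hypothetical.
tree_cover_ha = {"humid_tropics": 1.2e8, "temperate": 0.9e8}    # ha with tree cover
default_tc_ha = {"humid_tropics": 35.0, "temperate": 15.0}      # tC per hectare

stock_pg = sum(tree_cover_ha[k] * default_tc_ha[k] for k in tree_cover_ha) / 1e9
print(stock_pg)   # PgC (1 Pg = 1e9 t)  → 5.55
```

The study's 45.3 PgC figure comes from the same arithmetic applied over a global, remotely sensed tree-cover map with IPCC default densities per stratum.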
Bank Regulation: Analysis of the Failure of Superior Bank, FSB, Hinsdale, Illinois
2002-02-07
statement of financial position based on the fair value. The best evidence of fair value is a quoted market price in an active market, but if there is no...market price, the value must be estimated. In estimating the fair value of retained interests, valuation techniques include estimating the present...about interest rates, default, prepayment, and volatility. In 1999, FASB explained that when estimating the fair value for FAS No. 140: Accounting for
Overestimating resource value and its effects on fighting decisions.
Dugatkin, Lee Alan; Dugatkin, Aaron David
2011-01-01
Much work in behavioral ecology has shown that animals fight over resources such as food, and that they make strategic decisions about when to engage in such fights. Here, we examine the evolution of one, heretofore unexamined, component of that strategic decision about whether to fight for a resource. We present the results of a computer simulation that examined the evolution of over- or underestimating the value of a resource (food) as a function of an individual's current hunger level. In our model, animals fought for food when they perceived their current food level to be below the mean for the environment. We considered seven strategies for estimating food value: 1) always underestimate food value, 2) always overestimate food value, 3) never over- or underestimate food value, 4) overestimate food value when hungry, 5) underestimate food value when hungry, 6) overestimate food value when relatively satiated, and 7) underestimate food value when relatively satiated. We first competed all seven strategies against each other when they began at approximately equal frequencies. In such a competition, two strategies--"always overestimate food value," and "overestimate food value when hungry"--were very successful. We next competed each of these strategies against the default strategy of "never over- or underestimate," when the default strategy was set at 99% of the population. Again, the strategies of "always overestimate food value" and "overestimate food value when hungry" fared well. Our results suggest that overestimating food value when deciding whether to fight should be favored by natural selection.
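The comparison between the "never misestimate" default and the "always overestimate" strategy can be reproduced in a toy Monte Carlo simulation. The decision rule, payoff scale, and bias factor below are all invented; the paper's actual model evolves strategy frequencies over generations rather than comparing fixed fight rates.

```python
import random

random.seed(4)
MEAN_FOOD = 10.0   # environment mean; animals fight when they feel below it

def decides_to_fight(own_food, bias):
    # bias > 1 inflates the perceived value of the contested food item.
    perceived_value = bias * (MEAN_FOOD - own_food)
    return perceived_value > random.uniform(0.0, MEAN_FOOD)

def fight_rate(bias, trials=20000):
    fights = sum(decides_to_fight(random.uniform(0.0, 20.0), bias)
                 for _ in range(trials))
    return fights / trials

honest = fight_rate(1.0)      # "never over- or underestimate food value"
inflated = fight_rate(1.5)    # "always overestimate food value"
print(inflated > honest)      # overestimators pick more fights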
NASA Astrophysics Data System (ADS)
Yao, Zhigang; Xue, Zuo; He, Ruoying; Bao, Xianwen; Song, Jun
2016-08-01
A multivariate statistical downscaling method is developed to produce regional, high-resolution, coastal surface wind fields based on the IPCC global model predictions for the U.S. east coastal ocean, the Gulf of Mexico (GOM), and the Caribbean Sea. The statistical relationship is built upon linear regressions between the empirical orthogonal function (EOF) spaces of a cross-calibrated, multi-platform, multi-instrument ocean surface wind velocity dataset (predictand) and the global NCEP wind reanalysis (predictor) over a 10-year period from 2000 to 2009. The statistical relationship is validated before application, and its effectiveness is confirmed by the good agreement between downscaled wind fields based on the NCEP reanalysis and in-situ surface wind measured at 16 National Data Buoy Center (NDBC) buoys in the U.S. east coastal ocean and the GOM during 1992-1999. The predictand-predictor relationship is then applied to IPCC GFDL model output (2.0° × 2.5°) to produce downscaled coastal wind at 0.25° × 0.25° resolution. The temporal and spatial variability of future predicted wind speeds and wind energy potential over the study region are further quantified. It is shown that wind speed and power would be significantly reduced in the high-CO2 climate scenario offshore of the mid-Atlantic and northeast U.S., with the speed falling to one quarter of its original value.
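The core of the method - project both fields onto their leading EOFs, regress predictand PCs on predictor PCs, then reconstruct the fine field - can be sketched on synthetic data. The grid sizes, the three planted shared modes, and the train/test split are invented; for simplicity the EOFs here are computed once on the full record, whereas a real application would derive them from the training period only.

```python
import numpy as np

rng = np.random.default_rng(5)
T, n_coarse, n_fine = 120, 20, 200                  # months, grid points (toy sizes)
modes = rng.standard_normal((T, 3))                 # three shared "climate modes"
coarse = modes @ rng.standard_normal((3, n_coarse)) + 0.1 * rng.standard_normal((T, n_coarse))
fine = modes @ rng.standard_normal((3, n_fine)) + 0.1 * rng.standard_normal((T, n_fine))

# EOF decomposition of both anomaly fields via SVD.
Uc, sc, _ = np.linalg.svd(coarse - coarse.mean(0), full_matrices=False)
Uf, sf, Vf = np.linalg.svd(fine - fine.mean(0), full_matrices=False)
pc_c = (Uc * sc)[:, :3]                             # leading coarse PCs (predictor)
pc_f = (Uf * sf)[:, :3]                             # leading fine PCs (predictand)

# Linear regression between the two EOF spaces, trained on the first 100 months.
A, *_ = np.linalg.lstsq(pc_c[:100], pc_f[:100], rcond=None)

# Downscale the remaining months and compare with the withheld "truth".
pred = pc_c[100:] @ A @ Vf[:3] + fine.mean(0)
corr = np.corrcoef(pred.ravel(), fine[100:].ravel())[0, 1]
print(corr > 0.9)
```

The same fitted map A is what gets applied to the GFDL PCs in the study: the regression is trained on the observational era and then fed future coarse-model PCs.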
NASA Astrophysics Data System (ADS)
Barrett, K.
2017-12-01
Scientific integrity is the hallmark of any assessment and is a paramount consideration in the Intergovernmental Panel on Climate Change (IPCC) assessment process. Procedures are in place for rigorous scientific review and to quantify confidence levels and uncertainty in the communication of key findings. However, the IPCC is unique in that its reports are formally accepted by governments through consensus agreement. This presentation will present the unique requirements of the IPCC intergovernmental assessment and discuss the advantages and challenges of its approach.
Geochemical monitoring for potential environmental impacts of geologic sequestration of CO2
Kharaka, Yousif K.; Cole, David R.; Thordsen, James J.; Gans, Kathleen D.; Thomas, Randal B.
2013-01-01
Carbon dioxide sequestration is now considered an important component of the portfolio of options for reducing greenhouse gas emissions to stabilize their atmospheric levels at values that would limit global temperature increases to the target of 2 °C by the end of the century (Pacala and Socolow 2004; IPCC 2005, 2007; Benson and Cook 2005; Benson and Cole 2008; IEA 2012; Romanak et al. 2013). Increased anthropogenic emissions of CO2 have raised its atmospheric concentrations from about 280 ppmv during pre-industrial times to ~400 ppmv today, and based on several defined scenarios, CO2 concentrations are projected to increase to values as high as 1100 ppmv by 2100 (White et al. 2003; IPCC 2005, 2007; EIA 2012; Global CCS Institute 2012). An atmospheric CO2 concentration of 450 ppmv is generally the accepted level that is needed to limit global temperature increases to the target of 2 °C by the end of the century. This temperature limit likely would moderate the adverse effects related to climate change that could include sea-level rise from the melting of alpine glaciers and continental ice sheets and from ocean warming; increased frequency and intensity of wildfires, floods, droughts, and tropical storms; and changes in the amount, timing, and distribution of rain, snow, and runoff (IPCC 2007; Sundquist et al. 2009; IEA 2012). Rising atmospheric CO2 concentrations are also increasing the amount of CO2 dissolved in ocean water, lowering its pH from 8.1 to 8.0, with potentially disruptive effects on coral reefs, plankton, and marine ecosystems (Adams and Caldeira 2008; Schrag 2009; Sundquist et al. 2009). Sedimentary basins in general and deep saline aquifers in particular are being investigated as possible repositories for the large volumes of anthropogenic CO2 that must be sequestered to mitigate global warming and related climate changes (Hitchon 1996; Benson and Cole 2008; Verma and Warwick 2011).
Activities of NASA's Global Modeling Initiative (GMI) in the Assessment of Subsonic Aircraft Impact
NASA Technical Reports Server (NTRS)
Rodriquez, J. M.; Logan, J. A.; Rotman, D. A.; Bergmann, D. J.; Baughcum, S. L.; Friedl, R. R.; Anderson, D. E.
2004-01-01
The Intergovernmental Panel on Climate Change estimated a peak increase in ozone ranging from 7-12 ppbv (zonal and annual average, and relative to a baseline with no aircraft) due to subsonic aircraft in the year 2015, corresponding to aircraft emissions of 1.3 TgN/year. This range of values presumably reflects differences in model input (e.g., chemical mechanism, ground emission fluxes, and meteorological fields) and algorithms. The model implemented by the Global Modeling Initiative allows testing the impact of individual model components on the assessment calculations. We present results of the impact of doubling the 1995 aircraft emissions of NOx, corresponding to an extra 0.56 TgN/year, utilizing meteorological data from NASA's Data Assimilation Office (DAO), the Goddard Institute for Space Studies (GISS), and the Middle Atmosphere Community Climate Model, version 3 (MACCM3). Comparison of results to observations can be used to assess model performance. Peak ozone perturbations ranging from 1.7 to 2.2 ppbv are calculated using the different fields. These correspond to increases in total tropospheric ozone ranging from 3.3 to 4.1 Tg O₃. These perturbations are consistent with the IPCC results, given the difference in aircraft emissions. However, the range of values calculated is much smaller than in the IPCC assessment.
40 CFR 98.464 - Monitoring and QA/QC requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... = Degradable organic content of waste stream in Year X (weight fraction, wet basis) FDOC = Fraction of the volatile residue that is degradable organic carbon (weight fraction). Use a default value of 0.6...
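The product the rule snippet is building toward - the mass of carbon in a waste stream that can actually degrade to methane - is a simple multiplication of the waste mass by the two fractions it defines. A hedged sketch, where the waste mass and DOC value are invented and only FDOC = 0.6 is the rule's stated default:

```python
# Sketch of the degradable-organic-carbon term the 40 CFR 98.464 snippet defines.
waste_wet_tonnes = 1000.0   # waste stream received in Year X (hypothetical)
doc = 0.2                   # degradable organic content (weight fraction, wet basis; hypothetical)
fdoc = 0.6                  # default fraction of DOC that is degradable organic carbon

degradable_carbon = waste_wet_tonnes * doc * fdoc
print(degradable_carbon)    # tonnes of carbon available for CH4 generation  → 120.0
```

In the full first-order-decay methodology this quantity then feeds a time-dependent decay model to yield annual CH4 generation.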
Design of a Sixteen Bit Pipelined Adder Using CMOS Bulk P-Well Technology.
1984-12-01
node's current value. These rules are based on the assumption that the event that was last calculated reflects the latest configuration of the network...Lines beginning with - are treated as a comment. The parameter names and their default values are: ;configuration file for 'standard' MPC procem capm .2a
Defaulters among lung cancer patients in a suburban district in a developing country.
Ng, T H; How, S H; Kuan, Y C; Fauzi, A R
2012-01-01
This study was carried out to determine the prevalence, patient characteristics, and reasons for defaulting follow-up and treatment among patients with lung cancer. Patients with histologically confirmed lung cancer were recruited. Detailed demographic data, occupation, socioeconomic status, and the educational level of both the patients and their children were recorded. Defaulters were classified as either intermittent or persistent defaulters. Using the Chi-square test, defaulter status was compared with various demographic and disease characteristics. The reasons for default were determined. Ninety-five patients were recruited. Among them, 81.1% were male and 66.3% were Malays. The mean age (SD) was 60 ± 10.5 years. About 46.3% of the patients had Eastern Cooperative Oncology Group (ECOG) functional status 0/1 and 96.8% presented with advanced disease (Stage 3b or 4). Overall, 20 patients (21.1%) were defaulters (35.0% intermittent; 65.0% persistent). Among the intermittent defaulters, 8 patients defaulted once and one patient defaulted 3 times. Among the 20 defaulters, only 2 (10%) turned up for the second follow-up appointment after a telephone reminder. The two main reasons for default were 'too ill to come' (38.5%) and logistic difficulties (23.1%). No correlation was found between patient education, income, ECOG status, stage of the disease, race, or gender and the defaulter rate. Children's education level was the only significant factor associated with the defaulter rate.
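The Chi-square comparison used above reduces, for a 2x2 table of defaulters versus non-defaulters split by one binary factor, to a single closed-form statistic. The counts below are illustrative only, not the study's data.

```python
# Hand-rolled Pearson chi-square and odds ratio for a 2x2 defaulter table.
a, b = 12, 8    # factor present: defaulted / did not default (hypothetical counts)
c, d = 8, 67    # factor absent:  defaulted / did not default (hypothetical counts)
n = a + b + c + d

chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
odds_ratio = (a * d) / (b * c)
print(round(chi2, 2), round(odds_ratio, 2))   # → 23.12 12.56
```

With 1 degree of freedom, a chi-square this large is far beyond the 3.84 critical value at p = 0.05, so the (invented) factor would be flagged as significantly associated with default.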
Fifth IPCC Assessment Report Now Out
NASA Astrophysics Data System (ADS)
Kundzewicz, Zbigniew W.
2014-01-01
The Fifth Assessment Report (AR5) of the Intergovernmental Panel on Climate Change (IPCC) is now available. It provides policymakers with an assessment of information on climate change, its impacts and possible response options (adaptation and mitigation). Summaries for policymakers of the three reports of the IPCC working groups and of the Synthesis Report have now been approved by IPCC plenaries. The present paper reports on the most essential findings in AR5. It briefly describes the contents of the reports of all IPCC working groups. It discusses the physical science findings, including observed changes (ubiquitous warming, shrinking cryosphere, sea level rise, changes in precipitation and extremes, and biogeochemical cycles). It deals with the drivers of climate change, progress in climate system understanding (evaluation of climate models, quantification of climate system responses), and projections for the future. It reviews impacts, adaptation and vulnerability, including observed changes, key risks, key reasons for concern, sectors and systems, and managing risks and building resilience. Finally, mitigation of climate change is discussed, including greenhouse gas emissions in the past, present and future, and mitigation in sectors. It is hoped that the present article will encourage the readership of this journal to dive into the AR5 report, which provides a wealth of useful information.
O'Reilly, Jessica; Oreskes, Naomi; Oppenheimer, Michael
2012-10-01
How and why did the scientific consensus about sea level rise due to the disintegration of the West Antarctic Ice Sheet (WAIS), expressed in the third Intergovernmental Panel on Climate Change (IPCC) assessment, disintegrate on the road to the fourth? Using ethnographic interviews and analysis of IPCC documents, we trace the abrupt disintegration of the WAIS consensus. First, we provide a brief historical overview of scientific assessments of the WAIS. Second, we provide a detailed case study of the decision not to provide a WAIS prediction in the Fourth Assessment Report. Third, we discuss the implications of this outcome for the general issue of scientists and policymakers working in assessment organizations to make projections. IPCC authors were less certain about potential WAIS futures than in previous assessment reports in part because of new information, but also because of the outcome of cultural processes within the IPCC, including how people were selected for and worked together within their writing groups. It became too difficult for IPCC assessors to project the range of possible futures for WAIS due to shifts in scientific knowledge as well as in the institutions that facilitated the interpretations of this knowledge.
NASA Technical Reports Server (NTRS)
Maynard, Nancy G.
2012-01-01
Dr. Nancy Maynard was invited by the Alaska Forum on the Environment to participate in a panel discussion covering (1) background on what the US NCA and international IPCC assessments are, (2) the impact the assessments have on policy-making, (3) the process for participation in both assessments, (4) how we can increase participation by Indigenous Peoples such as Native Americans and Alaska Natives, and (5) how we can increase input on historical and current impacts from Native communities through stories, oral history, "grey" literature, etc. The session will be chaired by Dr. Bull Bennett, co-chair of the US NCA chapter on "Native and Tribal Lands and Resources"; Dr. Maynard, the chapter's other co-chair, will join him to discuss the latest activities under the NCA process relevant to Native Americans and Alaska Natives. Dr. Maynard is also a Lead Author of the "Polar Regions" chapter of the IPCC WG2 (5th Assessment), and she will describe some of the latest approaches by the IPCC to involve more Indigenous peoples in the IPCC process.
Acute and chronic environmental effects of clandestine methamphetamine waste.
Kates, Lisa N; Knapp, Charles W; Keenan, Helen E
2014-09-15
The illicit manufacture of methamphetamine (MAP) produces substantial amounts of hazardous waste that is dumped illegally. This study presents the first environmental evaluation of waste produced from illicit MAP manufacture. Chemical oxygen demand (COD) was measured to assess immediate oxygen depletion effects. A mixture of five waste components (10 mg/L per chemical) was found to have a COD (130 mg/L) above the European Union wastewater discharge limit (125 mg/L). Two environmental partition coefficients, K(OW) and K(OC), were measured for several chemicals identified in MAP waste. Experimental values were input into a computer fugacity model (EPI Suite™) to estimate environmental fate. Experimental log K(OW) values ranged from -0.98 to 4.91, in accordance with computer-estimated values. Experimental K(OC) values ranged from 11 to 72, much lower than the default computer values. The experimental fugacity model for discharge to water estimates that waste components will remain in the water compartment for 15 to 37 days. Using a combination of laboratory experimentation and computer modelling, the environmental fate of MAP waste products was estimated. While fugacity models using experimental and computational values were very similar, default computer models should not take the place of laboratory experimentation. Copyright © 2014 Elsevier B.V. All rights reserved.
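The partition coefficients measured here are equilibrium concentration ratios, so the arithmetic behind a log K(OW) value is compact. A minimal Python sketch (the function name and concentrations are invented for illustration, not data from the study):

```python
import math

def log_kow(c_octanol_mg_l: float, c_water_mg_l: float) -> float:
    """log10 of the octanol-water partition coefficient K_OW.

    K_OW is the ratio of a chemical's equilibrium concentration in
    octanol to that in water; values above 0 indicate a chemical that
    partitions preferentially into the organic phase.
    """
    return math.log10(c_octanol_mg_l / c_water_mg_l)

# Hydrophilic waste components give negative values (cf. the study's
# low end of -0.98); hydrophobic ones give strongly positive values.
hydrophilic = log_kow(1.0, 9.5)     # negative
hydrophobic = log_kow(800.0, 0.01)  # strongly positive
```

Coefficients like these are what fugacity models such as EPI Suite™ use to apportion a chemical across air, water, soil, and sediment compartments.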
Radium concentration factors and their use in health and environmental risk assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meinhold, A.F.; Hamilton, L.D.
1991-12-31
Radium is known to be taken up by aquatic animals, and tends to accumulate in bone, shell and exoskeleton. The most common approach to estimating the uptake of a radionuclide by aquatic animals for use in health and environmental risk assessments is the concentration factor method. The concentration factor method relates the concentration of a contaminant in an organism to the concentration in the surrounding water. Site specific data are not usually available, and generic, default values are often used in risk assessment studies. This paper describes the concentration factor method, summarizes some of the variables which may influence the concentration factor for radium, reviews reported concentration factors measured in marine environments and presents concentration factors derived from data collected in a study in coastal Louisiana. The use of generic default values for the concentration factor is also discussed.
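The concentration factor method described here reduces to a ratio and its forward use in prediction. A minimal sketch (names and numbers are illustrative, not tied to the Louisiana dataset):

```python
def concentration_factor(c_organism: float, c_water: float) -> float:
    """Concentration factor: contaminant concentration in the organism
    (e.g. Bq/kg radium in bone or shell) divided by the concentration
    in the surrounding water (e.g. Bq/L)."""
    return c_organism / c_water

def predicted_tissue_concentration(cf: float, c_water: float) -> float:
    """Forward use in a risk assessment: a generic (default) or
    site-specific CF times a measured water concentration estimates
    the concentration in the organism."""
    return cf * c_water
```

The paper's caution about generic defaults amounts to the observation that the same water concentration yields very different tissue predictions depending on which CF is assumed.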
Default "Gunel and Dickey" Bayes factors for contingency tables.
Jamil, Tahira; Ly, Alexander; Morey, Richard D; Love, Jonathon; Marsman, Maarten; Wagenmakers, Eric-Jan
2017-04-01
The analysis of R×C contingency tables usually features a test for independence between row and column counts. Throughout the social sciences, the adequacy of the independence hypothesis is generally evaluated by the outcome of a classical p-value null-hypothesis significance test. Unfortunately, however, the classical p-value comes with a number of well-documented drawbacks. Here we outline an alternative, Bayes factor method to quantify the evidence for and against the hypothesis of independence in R×C contingency tables. First, we describe different sampling models for contingency tables and provide the corresponding default Bayes factors as originally developed by Gunel and Dickey (Biometrika, 61(3):545-557 (1974)). We then illustrate the properties and advantages of a Bayes factor analysis of contingency tables through simulations and practical examples. Computer code is available online and has been incorporated in the "BayesFactor" R package and the JASP program (jasp-stats.org).
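The Gunel and Dickey factors require the specific priors developed in the original paper (and are implemented in the BayesFactor R package), but the general idea of quantifying evidence for as well as against independence can be sketched with a coarse BIC-based Bayes factor approximation. This stand-in is an assumption of the example, not the method of the paper:

```python
import math

def bic_approx_bf01(table):
    """Rough BIC-based approximation to the Bayes factor for
    independence (H0) vs. association (H1) in an R x C count table.
    NOT the Gunel & Dickey factor; it only illustrates the idea of
    weighing evidence in both directions. Returns BF01 (> 1 favours
    independence)."""
    rows, cols = len(table), len(table[0])
    n = sum(sum(r) for r in table)
    row_tot = [sum(r) for r in table]
    col_tot = [sum(table[i][j] for i in range(rows)) for j in range(cols)]

    ll0 = 0.0  # independence model: p_ij = p_i. * p_.j
    ll1 = 0.0  # saturated model:    p_ij = n_ij / n
    for i in range(rows):
        for j in range(cols):
            nij = table[i][j]
            if nij > 0:
                ll0 += nij * math.log(row_tot[i] * col_tot[j] / (n * n))
                ll1 += nij * math.log(nij / n)
    k0 = (rows - 1) + (cols - 1)   # free parameters under H0
    k1 = rows * cols - 1           # free parameters under H1
    bic0 = -2 * ll0 + k0 * math.log(n)
    bic1 = -2 * ll1 + k1 * math.log(n)
    return math.exp((bic1 - bic0) / 2.0)
```

A perfectly balanced table yields BF01 > 1 (evidence for independence), something a p-value alone cannot express, while a strongly associated table drives BF01 toward 0.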
Risk factors and mortality associated with default from multidrug-resistant tuberculosis treatment.
Franke, Molly F; Appleton, Sasha C; Bayona, Jaime; Arteaga, Fernando; Palacios, Eda; Llaro, Karim; Shin, Sonya S; Becerra, Mercedes C; Murray, Megan B; Mitnick, Carole D
2008-06-15
Completing treatment for multidrug-resistant (MDR) tuberculosis (TB) may be more challenging than completing first-line TB therapy, especially in resource-poor settings. The objectives of this study were to (1) identify risk factors for default from MDR TB therapy (defined as prolonged treatment interruption), (2) quantify mortality among patients who default from treatment, and (3) identify risk factors for death after default from treatment. We performed a retrospective chart review to identify risk factors for default from MDR TB therapy and conducted home visits to assess mortality among patients who defaulted from such therapy. Sixty-seven (10.0%) of 671 patients defaulted from MDR TB therapy. The median time to treatment default was 438 days (interquartile range, 152-710 days), and 27 (40.3%) of the 67 patients who defaulted from treatment had culture-positive sputum at the time of default. Substance use (hazard ratio, 2.96; 95% confidence interval, 1.56-5.62; P = .001), substandard housing conditions (hazard ratio, 1.83; 95% confidence interval, 1.07-3.11; P = .03), later year of enrollment (hazard ratio, 1.62, 95% confidence interval, 1.09-2.41; P = .02), and health district (P = .02) predicted default from therapy in a multivariable analysis. Severe adverse events did not predict default from therapy. Forty-seven (70.1%) of 67 patients who defaulted from therapy were successfully traced; of these, 25 (53.2%) had died. Poor bacteriologic response, <1 year of treatment at the time of default, low education level, and diagnosis with a psychiatric disorder significantly predicted death after default in a multivariable analysis. The proportion of patients who defaulted from MDR TB treatment was relatively low. The large proportion of patients who had culture-positive sputum at the time of treatment default underscores the public health importance of minimizing treatment default. Prognosis for patients who defaulted from therapy was poor. 
Interventions aimed at preventing treatment default may reduce TB-related mortality.
ERIC Educational Resources Information Center
Daniels, Randell W.
2013-01-01
Default management practices and their relationship to the student loan default rate in public two-year community colleges was the focus of this investigation. Five research questions regarding written default management plans, default management practices, process management, accountability, and other factors impacting default guided the study.…
A review of uncertainty visualization within the IPCC reports
NASA Astrophysics Data System (ADS)
Nocke, Thomas; Reusser, Dominik; Wrobel, Markus
2015-04-01
Results derived from climate model simulations confront non-expert users with a variety of uncertainties. The challenge is that the scientific information must be communicated so that it is easily understood while the complexity of the underlying science is still conveyed. For the IPCC assessment reports the situation is even more complicated, because heterogeneous sources and multiple types of uncertainty must be compiled together. In this work, we systematically (1) analyzed the visual representation of uncertainties in the IPCC AR4 and AR5 reports, and (2) administered a questionnaire to evaluate how different user groups, such as decision-makers and teachers, understand these uncertainty visualizations. In the first step, we classified visual uncertainty metaphors for spatial, temporal, and abstract representations. We identified a high complexity in the IPCC visualizations compared to standard presentation graphics, sometimes integrating two or more uncertainty classes or measures together with the "certain" (mean) information. Further, we identified complex written uncertainty explanations within image captions, even in the "Summary for Policymakers". In the second step, based on these observations, we designed a questionnaire to investigate how non-climate experts understand these visual representations of uncertainties, how visual uncertainty coding might hinder perception of the "non-uncertain" data, and whether alternatives to certain IPCC visualizations exist. In the talk/poster, we will present first results from this questionnaire. In summary, we identified a clear trend toward complex images in the latest IPCC reports, with a tendency to incorporate as much information as possible into the visual representations, resulting in proprietary, non-standard graphics that are not easy to comprehend at a glance.
We conclude that further translation is required to (visually) present the IPCC results to non-experts, providing tailored static and interactive visualization solutions for different user groups.
Global Mean Temperature Timeseries Projections from GCMs: The Implications of Rebasing
NASA Astrophysics Data System (ADS)
Chapman, S. C.; Stainforth, D. A.; Watkins, N. W.
2017-12-01
Global climate models are assessed by comparison with observations through several benchmarks. One highlighted by the Intergovernmental Panel on Climate Change (IPCC) is their ability to reproduce "general features of the global and annual mean surface temperature changes over the historical period" [1,2] and to simulate "a trend in global-mean surface temperature from 1951 to 2012 that agrees with the observed trend" [3]. These aspects of annual mean global mean temperature (GMT) change are presented as one feature demonstrating the relevance of these models for climate projections. Here we consider a formal interpretation of "general features" and discuss the implications of this approach to model assessment and intercomparison for the interpretation of GCM projections. Following the IPCC, we interpret a major element of "general features" as being the slow timescale response to external forcings. (Shorter timescale behaviour, such as the response to volcanic eruptions, is also an element of "general features" but is not considered here.) Also following the IPCC, we consider only GMT anomalies. The models have absolute temperatures which range over about 3 K, so their timeseries (and the observations) are rebased. We show that rebasing, in combination with general agreement, implies a separation of scales which limits the degree to which sub-global behaviour can feed back on the global response. It also implies a degree of linearity in the GMT slow timescale response. For each individual model these implications only apply over the range of absolute temperatures simulated by the model in historic simulations. Taken together, however, they imply consequences over a wider range of GMTs. [1] IPCC, Fifth Assessment Report, Working Group 1, Technical Summary: Stocker et al. 2013. [2] IPCC, Fifth Assessment Report, Working Group 1, Chapter 9 - "Evaluation of Climate Models": Flato et al. 2013.
[3] IPCC, Fifth Assessment Report, Working Group 1, Summary for Policy Makers: IPCC, 2013.
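The rebasing the authors refer to is the standard conversion of absolute temperatures to anomalies relative to a base-period mean; because models differ in absolute GMT by about 3 K, it is the anomalies that get compared. A minimal sketch with hypothetical numbers:

```python
def rebase(series, years, base_start, base_end):
    """Convert an absolute GMT timeseries (K) into anomalies relative
    to the mean over a chosen base period."""
    base = [t for t, y in zip(series, years) if base_start <= y <= base_end]
    ref = sum(base) / len(base)
    return [t - ref for t in series]

years = [1951, 1952, 1953]
model_a = [287.0, 287.5, 288.0]        # absolute GMT in K (hypothetical)
model_b = [t + 3.0 for t in model_a]   # same shape, offset by 3 K
# After rebasing over 1951-1953 the two models are indistinguishable:
anom_a = rebase(model_a, years, 1951, 1953)
anom_b = rebase(model_b, years, 1951, 1953)
```

This is precisely why rebased GMT comparisons constrain the shape of the response but say nothing about absolute-temperature differences between models.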
NASA Astrophysics Data System (ADS)
He, Yixin; Wang, Xiaofeng; Chen, Huai; Yuan, Xingzhong; Wu, Ning; Zhang, Yuewei; Yue, Junsheng; Zhang, Qiaoyong; Diao, Yuanbin; Zhou, Lilei
2017-12-01
Watershed urbanization, an integrated anthropogenic perturbation, is another considerable global concern in addition to that of global warming and may significantly enrich the N loadings of watersheds, which then greatly influences the nitrous oxide (N2O) production and fluxes of these aquatic systems. However, little is known about the N2O dynamics in human-dominated metropolitan river networks. In this study, we present the temporal and spatial variations in N2O saturation and emission in the Chongqing metropolitan river network, which is undergoing intensified urbanization. The N2O saturation and fluxes at 84 sampling sites ranged from 126% to 10536% and from 4.5 to 1566.8 μmol N2O m-2 d-1, with means of 1780% and 261 μmol N2O m-2 d-1. The riverine N2O saturation and fluxes increased along with the urbanization gradient and urbanization rate, with disproportionately higher values in urban rivers due to the N2O-rich sewage inputs and enriched in situ N substrates. We found a clear seasonal pattern of N2O saturation, which was co-regulated by both water temperature and precipitation. Regression analysis indicated that the N substrates and dissolved oxygen (DO) that controlled nitrogen metabolism acted as good predictors of the N2O emissions of urban river networks. Particularly, phosphorus (P) and hydromorphological factors (water velocity, river size and bottom substrate) had stronger relationships with the N2O saturation and could also be used to predict the N2O emission hotspots in regions with rapid urbanization. In addition, the default emission factors (EF5-r) used in the Intergovernmental Panel on Climate Change (IPCC) methodology may need revision given the differences among the physical and chemical factors in different rivers, especially urban rivers.
Verheijen, Lieneke M; Aerts, Rien; Brovkin, Victor; Cavender-Bares, Jeannine; Cornelissen, Johannes H C; Kattge, Jens; van Bodegom, Peter M
2015-08-01
Earth system models demonstrate large uncertainty in projected changes in terrestrial carbon budgets. The lack of inclusion of adaptive responses of vegetation communities to the environment has been suggested to hamper the ability of modeled vegetation to adequately respond to environmental change. In this study, variation in functional responses of vegetation has been added to an earth system model (ESM) based on ecological principles. The restriction of viable mean trait values of vegetation communities by the environment, called 'habitat filtering', is an important ecological assembly rule and allows for determination of global scale trait-environment relationships. These relationships were applied to model trait variation for different plant functional types (PFTs). For three leaf traits (specific leaf area, maximum carboxylation rate at 25 °C, and maximum electron transport rate at 25 °C), relationships with multiple environmental drivers, such as precipitation, temperature, radiation, and CO2, were determined for the PFTs within the Max Planck Institute ESM. With these relationships, spatiotemporal variation in these formerly fixed traits in PFTs was modeled in global change projections (IPCC RCP8.5 scenario). Inclusion of this environment-driven trait variation resulted in a strong reduction of the global carbon sink by at least 33% (2.1 Pg C yr−1) from the 2nd quarter of the 21st century onward compared to the default model with fixed traits. In addition, the mid- and high latitudes became a stronger carbon sink and the tropics a stronger carbon source, caused by trait-induced differences in productivity and relative respirational costs. These results point toward a reduction of the global carbon sink when including a more realistic representation of functional vegetation responses, implying more carbon will stay airborne, which could fuel further climate change. © 2015 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Yan, X.; Zhou, W.
2017-12-01
The Taihu Lake region (TLR) is one of the most intensive agricultural regions with high nitrogen (N) loading in eastern China. Large inputs of synthetic N fertilizer have led to a series of environmental problems, including eutrophication of surface waters and nitrate (NO3-) pollution of groundwater. To fully evaluate the risk NO3- poses to groundwater environments, it is necessary to know the natural NO3- removal capacity. In this study, denitrification capacity was assessed for two years by measuring the concentrations of different N species (NO3-, NH4+, TN, excess N2 and dissolved N2O) in groundwater below three typical agricultural land-use types in the TLR. The results suggested that the conversion of paddy field (PF) to vineyard (VY) and vegetable field (VF) significantly increased the groundwater NO3-N concentration, but denitrification consumed 76%, 83% and 65% of the groundwater NO3-N in VY, VF and PF, respectively. Because of the low O2 and high DOC concentrations in groundwater, denitrification activity was high at the study sites, resulting in high excess N2 accumulation in groundwater; the concentration even exceeded the total active N in the deep layer. The large amounts of excess N2 observed in the VY and VF across all sampling times indicated that considerable N was stored as gaseous N2 in groundwater and should not be ignored when balancing N budgets in aquifers where denitrification is high. Our results also demonstrated that the indirect N2O emission factor (EF5-g) in VY (0.0052) and VF (0.0057) was significantly higher than in PF (0.0011), as well as higher than the IPCC default value (0.0025). In view of the increasing trend of paddy fields being converted to uplands, combined with the low GWT in the TLR, we conclude that the risk of NO3- contamination in groundwater and indirect N2O emission will intensify below arable land.
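In the IPCC emission-factor approach, the indirect N2O-N flux is simply the leached NO3-N multiplied by EF5. A sketch using the factors quoted in the abstract (the leached amount is invented for illustration):

```python
def indirect_n2o_n(no3_n_leached_kg: float, ef5: float) -> float:
    """Indirect N2O-N emission (kg) = N leached to groundwater (kg) x EF5."""
    return no3_n_leached_kg * ef5

leached = 100.0  # kg NO3-N, hypothetical
vineyard = indirect_n2o_n(leached, 0.0052)  # site-derived VY factor
default = indirect_n2o_n(leached, 0.0025)   # IPCC default EF5-g
# The site-derived vineyard factor roughly doubles the default estimate.
```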
Asayama, Shinichiro; Ishii, Atsushi
2014-02-01
The Intergovernmental Panel on Climate Change (IPCC) plays a significant role in bridging the boundary between climate science and politics. Media coverage is crucial for understanding how climate science is communicated and embedded in society. This study analyzes the discursive construction of the IPCC in three Japanese newspapers from 1988 to 2007 in terms of the science-politics boundary. The results show media discourses engaged in boundary-work which rhetorically separated science and politics, and constructed the iconic image of the IPCC as a pure scientific authority. In the linkages between the global and national arenas of climate change, the media "domesticate" the issue, translating the global nature of climate change into a discourse that suits the national context. We argue that the Japanese media's boundary-work is part of the media domestication that reconstructed the boundary between climate science and politics reflecting the Japanese context.
"Agreement" in the IPCC Confidence measure
NASA Astrophysics Data System (ADS)
Rehg, William; Staley, Kent
2017-02-01
The Intergovernmental Panel on Climate Change (IPCC) has, in its most recent Assessment Report (AR5), articulated guidelines for evaluating and communicating uncertainty that include a qualitative scale of confidence. We examine one factor included in that scale: the "degree of agreement." Some discussions of the degree of agreement in AR5 suggest that the IPCC is employing a consensus-oriented social epistemology. We consider the application of the degree of agreement factor in practice in AR5. Our findings, though based on a limited examination, suggest that agreement attributions do not so much track the overall consensus among investigators as the degree to which relevant research findings substantively converge in offering support for IPCC claims. We articulate a principle guiding confidence attributions in AR5 that centers not on consensus but on the notion of support. In concluding, we tentatively suggest a pluralist approach to the notion of support.
A Systems Model for Power Technology Assessment
NASA Technical Reports Server (NTRS)
Hoffman, David J.
2002-01-01
A computer model is under continuing development at NASA Glenn Research Center that enables first-order assessments of space power technology. The model, an evolution of NASA Glenn's Array Design Assessment Model (ADAM), is an Excel workbook that consists of numerous spreadsheets containing power technology performance data and sizing algorithms. Underlying the model is a number of databases that contain default values for various power generation, energy storage, and power management and distribution component parameters. These databases are actively maintained by a team of systems analysts so that they contain state-of-the-art data as well as the most recent technology performance projections. Sizing of the power subsystems can be accomplished either by using an assumed mass specific power (W/kg) or energy (Wh/kg) or by a bottom-up calculation that accounts for individual component performance and masses. The power generation, energy storage and power management and distribution subsystems are sized for given mission requirements for a baseline case and up to three alternatives. This allows four different power systems to be sized and compared using consistent assumptions and sizing algorithms. The component sizing models contained in the workbook are modular so that they can be easily maintained and updated. All significant input values have default values loaded from the databases that can be over-written by the user. The default data and sizing algorithms for each of the power subsystems are described in some detail. The user interface and workbook navigational features are also discussed. Finally, an example study case that illustrates the model's capability is presented.
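The two sizing routes described (top-down specific-power scaling versus bottom-up component summation) can be sketched in a few lines; the function names and figures are illustrative assumptions, not ADAM's actual spreadsheets or algorithms:

```python
def mass_from_specific_power(load_w: float, specific_power_w_per_kg: float) -> float:
    """Top-down sizing: subsystem mass from an assumed mass-specific
    power (W/kg)."""
    return load_w / specific_power_w_per_kg

def mass_bottom_up(component_masses_kg: list) -> float:
    """Bottom-up sizing: sum of individual component masses."""
    return sum(component_masses_kg)

# A 10 kW array at an assumed 50 W/kg, vs. a component-level tally:
top_down = mass_from_specific_power(10_000.0, 50.0)  # 200 kg
bottom_up = mass_bottom_up([120.0, 50.0, 30.0])      # 200 kg
```

Comparing the two answers for the same subsystem is one way a workbook like this can sanity-check an assumed specific power against component-level data.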
Valerie Esposito; Spencer Phillips; Roelof Boumans; Azur Moulaert; Jennifer Boggs
2011-01-01
The Intergovernmental Panel on Climate Change (IPCC) (2007) reports a likely 2 °C to 4.5 °C temperature rise in the upcoming decades. This warming is likely to affect ecosystems and their ability to provide services that benefit human well-being. Ecosystem services valuation (ESV), meanwhile, has emerged as a way to recognize the economic value embodied in these...
Default neglect in attempts at social influence.
Zlatev, Julian J; Daniels, David P; Kim, Hajin; Neale, Margaret A
2017-12-26
Current theories suggest that people understand how to exploit common biases to influence others. However, these predictions have received little empirical attention. We consider a widely studied bias with special policy relevance: the default effect, which is the tendency to choose whichever option is the status quo. We asked participants (including managers, law/business/medical students, and US adults) to nudge others toward selecting a target option by choosing whether to present that target option as the default. In contrast to theoretical predictions, we find that people often fail to understand and/or use defaults to influence others, i.e., they show "default neglect." First, in one-shot default-setting games, we find that only 50.8% of participants set the target option as the default across 11 samples (n = 2,844), consistent with people not systematically using defaults at all. Second, when participants have multiple opportunities for experience and feedback, they still do not systematically use defaults. Third, we investigate beliefs related to the default effect. People seem to anticipate some mechanisms that drive default effects, yet most people do not believe in the default effect on average, even in cases where they do use defaults. We discuss implications of default neglect for decision making, social influence, and evidence-based policy.
26 CFR 1.503(b)-1 - Prohibited transactions.
Code of Federal Regulations, 2010 CFR
2010-04-01
... otherwise disposed of in default of repayment of the loan, the value and liquidity of which security is such... to the issuer by the purchaser. For rules relating to loan of funds to, or investment of funds in...
NASA Astrophysics Data System (ADS)
Ekenes, K.
2017-12-01
This presentation will outline the process of creating a web application for exploring large amounts of scientific geospatial data using modern automated cartographic techniques. Traditional cartographic methods, including data classification, may inadvertently hide geospatial and statistical patterns in the underlying data. This presentation demonstrates how to use smart web APIs that quickly analyze the data when it loads and suggest the most appropriate visualizations based on the statistics of the data. Since there are only a few ways to visualize any given dataset well, and since many users never go beyond default values, it is imperative to provide smart default color schemes tailored to the dataset rather than static defaults. Multiple functions for automating visualizations are available in the Smart APIs, along with UI elements allowing users to create more than one visualization for a dataset, since there isn't a single best way to visualize a given dataset. Because bivariate and multivariate visualizations are particularly difficult to create effectively, this automated approach takes the guesswork out of the process and provides a number of ways to generate multivariate visualizations for the same variables. This allows the user to choose which visualization is most appropriate for their presentation. The methods used in these APIs and the renderers generated by them are not available elsewhere. The presentation will show how statistics can be used as the basis for automating default visualizations of data along continuous ramps, creating more refined visualizations while revealing the spread and outliers of the data. Adding interactive components to instantaneously alter visualizations allows users to unearth previously unknown spatial patterns among one or more variables.
These applications may focus on a single dataset that is frequently updated, or configurable for a variety of datasets from multiple sources.
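The idea of statistics-driven defaults can be illustrated with a small stand-in: derive continuous color-ramp stops from summary statistics rather than fixed class breaks. This sketch is an assumption for illustration, not the actual Smart APIs described in the talk:

```python
import statistics

def smart_ramp_stops(values, spread=1.0):
    """Center a continuous color ramp on the mean and span
    +/- `spread` standard deviations, so the default visualization
    adapts to each dataset and reveals its spread and outliers
    instead of hiding them behind static class breaks."""
    mean = statistics.fmean(values)
    sd = statistics.pstdev(values)
    return {"min": mean - spread * sd, "mid": mean, "max": mean + spread * sd}

stops = smart_ramp_stops([0.0, 10.0])
# -> {'min': 0.0, 'mid': 5.0, 'max': 10.0}
```

Values beyond the computed max fall outside the ramp's main range and stand out visually as outliers, which is the behavior the presentation argues static defaults fail to deliver.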
Asfaw, Abiyot Getachew; Koye, Digsu Negese; Demssie, Amsalu Feleke; Zeleke, Ejigu Gebeye; Gelaw, Yalemzewod Assefa
2016-01-01
Immunization is a cost-effective intervention against vaccine-preventable disease. Still, 2.5 million children die of vaccine-preventable diseases every year in developing countries. In Ethiopia, default from full completion of child immunization is high, and its determinants are not well explored in the study setting. The aim of this study was to identify determinants of default from full completion of immunization among children aged 12 to 23 months in Sodo Zurea District, Southern Ethiopia. A community-based unmatched case-control study was conducted. A census was done to identify cases and controls before the actual data collection. A total of 344 participants (172 cases and 172 controls) were selected by a simple random sampling technique. Cases were children aged 12 to 23 months who had missed at least one dose of the recommended schedule. Bivariable and multivariable binary logistic regression were used to identify the determinant factors; odds ratios, 95% CIs, and a p-value of less than 0.05 were used to measure the presence and strength of associations. Mothers who were unable to read and write (AOR=8.9; 95% CI: 2.4, 33.9) or had attended only primary school (AOR=4.1; 95% CI: 1.4, 15.8), mothers who had no postnatal care follow-up (AOR=0.4; 95% CI: 0.3, 0.7), good maternal knowledge of immunization (AOR=0.5; 95% CI: 0.3, 0.8), and favorable maternal perception of using health institutions for maternal and child care (AOR=0.2; 95% CI: 0.1, 0.6) were significant determinants of default from full completion of immunization. Strengthening maternal education and postnatal care follow-up and promoting maternal knowledge and perception of child immunization are recommended measures to mitigate defaults from complete immunization.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-08
... to the trade. NSCC will then look to CDS for the satisfaction of the clearance and settlement... indistinguishable from the risk of a clearing broker default, but because the value of the trades of the Canadian broker-dealers cleared through the mechanism is likely to be small in comparison to the values cleared...
40 CFR 60.759 - Specifications for active collection systems.
Code of Federal Regulations, 2013 CFR
2013-07-01
...: Qi = 2 k Lo Mi (e^−kti) (CNMOC) (3.6 × 10^−9), where Qi = NMOC emission rate from the ith section, megagrams per year; k = methane generation rate constant, year^−1; Lo = methane generation potential, cubic... performed, the default values for k, Lo and CNMOC provided in § 60.754(a)(1) or the alternative values from...
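The section's NMOC equation evaluates directly. A sketch in Python; the default parameter values shown are the commonly cited § 60.754(a)(1) defaults (an assumption of this example; confirm against the regulation before use):

```python
import math

def nmoc_emission_rate(m_i_mg, t_i_yr, k=0.05, lo=170.0, c_nmoc=4000.0):
    """NMOC emission rate Q_i (Mg/yr) for the ith landfill section:

        Q_i = 2 k Lo M_i e^(-k t_i) C_NMOC (3.6e-9)

    m_i_mg:  refuse mass in the section, Mg
    t_i_yr:  age of the section, years
    k:       methane generation rate constant, 1/yr
    lo:      methane generation potential, m^3/Mg
    c_nmoc:  NMOC concentration, ppmv
    """
    return 2.0 * k * lo * m_i_mg * math.exp(-k * t_i_yr) * c_nmoc * 3.6e-9
```

The exponential term means a section's contribution decays with age, so older sections of the landfill weigh less in the collection-system design total.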
40 CFR 60.759 - Specifications for active collection systems.
Code of Federal Regulations, 2014 CFR
2014-07-01
...: Qi = 2 k Lo Mi (e-kt i) (CNMOC) (3.6 × 10−9) where, Qi = NMOC emission rate from the ith section, megagrams per year k = methane generation rate constant, year−1 Lo = methane generation potential, cubic... performed, the default values for k, LO and CNMOC provided in § 60.754(a)(1) or the alternative values from...
40 CFR 60.759 - Specifications for active collection systems.
Code of Federal Regulations, 2012 CFR
2012-07-01
...: Qi = 2 k Lo Mi e^(−k ti) (CNMOC) (3.6 × 10−9), where: Qi = NMOC emission rate from the ith section, megagrams per year; k = methane generation rate constant, year−1; Lo = methane generation potential, cubic... performed, the default values for k, Lo and CNMOC provided in § 60.754(a)(1) or the alternative values from...
40 CFR 60.759 - Specifications for active collection systems.
Code of Federal Regulations, 2011 CFR
2011-07-01
...: Qi = 2 k Lo Mi e^(−k ti) (CNMOC) (3.6 × 10−9), where: Qi = NMOC emission rate from the ith section, megagrams per year; k = methane generation rate constant, year−1; Lo = methane generation potential, cubic... performed, the default values for k, Lo and CNMOC provided in § 60.754(a)(1) or the alternative values from...
ERIC Educational Resources Information Center
Gordovil-Merino, Amalia; Guardia-Olmos, Joan; Pero-Cebollero, Maribel
2012-01-01
In this paper, we used simulations to compare the performance of classical and Bayesian estimations in logistic regression models using small samples. In the performed simulations, conditions were varied, including the type of relationship between independent and dependent variable values (i.e., unrelated and related values), the type of variable…
Acikalin, M Yavuz; Gorgolewski, Krzysztof J; Poldrack, Russell A
2017-01-01
Previous research has provided qualitative evidence for overlap in a number of brain regions across the subjective value network (SVN) and the default mode network (DMN). In order to quantitatively assess this overlap, we conducted a series of coordinate-based meta-analyses (CBMA) of results from 466 functional magnetic resonance imaging experiments on task-negative or subjective value-related activations in the human brain. In these analyses, we first identified significant overlaps and dissociations across activation foci related to SVN and DMN. Second, we investigated whether these overlapping subregions also showed similar patterns of functional connectivity, suggesting a shared functional subnetwork. We find considerable overlap between SVN and DMN in subregions of central ventromedial prefrontal cortex (cVMPFC) and dorsal posterior cingulate cortex (dPCC). Further, our findings show that similar patterns of bidirectional functional connectivity between cVMPFC and dPCC are present in both networks. We discuss ways in which our understanding of how subjective value (SV) is computed and represented in the brain can be synthesized with what we know about the DMN, mind-wandering, and self-referential processing in light of our findings.
Quantifying Uncertainties in N2O Emission Due to N Fertilizer Application in Cultivated Areas
Philibert, Aurore; Loyce, Chantal; Makowski, David
2012-01-01
Nitrous oxide (N2O) is a greenhouse gas with a global warming potential approximately 298 times greater than that of CO2. In 2006, the Intergovernmental Panel on Climate Change (IPCC) estimated N2O emission due to synthetic and organic nitrogen (N) fertilization at 1% of applied N. We investigated the uncertainty in this estimated value by fitting 13 different models to a published dataset including 985 N2O measurements. These models were characterized by (i) the presence or absence of the explanatory variable “applied N”, (ii) the function relating N2O emission to applied N (exponential or linear function), (iii) fixed or random background (i.e. in the absence of N application) N2O emission and (iv) fixed or random applied N effect. We calculated ranges of uncertainty in N2O emissions from a subset of these models, and compared them with the uncertainty ranges currently used in the IPCC-Tier 1 method. The exponential models outperformed the linear models, and models including one or two random effects outperformed those including fixed effects only. The use of an exponential function rather than a linear function has an important practical consequence: the emission factor is not constant and increases as a function of applied N. Emission factors estimated using the exponential function were lower than 1% when the amount of N applied was below 160 kg N ha−1. Our uncertainty analysis shows that the uncertainty range currently used by the IPCC-Tier 1 method could be reduced. PMID:23226430
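The contrast between the constant IPCC Tier 1 emission factor and an exponential emission response can be made concrete. A minimal sketch, where ALPHA and BETA are hypothetical coefficients chosen only to show the shape of the effect, not the fitted values from this study:

```python
import math

# Hypothetical coefficients for illustration only (NOT the study's fitted values).
ALPHA, BETA = -0.5, 0.0061

def emission_exponential(n_applied):
    """N2O emission (e.g. kg N2O-N/ha) as an exponential function of applied N
    (kg N/ha); at n_applied = 0 this gives the background emission."""
    return math.exp(ALPHA + BETA * n_applied)

def emission_factor(n_applied):
    """Fraction of applied N emitted as N2O-N, net of background emission."""
    background = emission_exponential(0.0)
    return (emission_exponential(n_applied) - background) / n_applied

# With an exponential response the emission factor grows with applied N,
# unlike the constant 1% assumed by the IPCC Tier 1 method.
factors = {n: emission_factor(n) for n in (80.0, 160.0, 240.0)}
```

With these illustrative coefficients the factor stays below 1% up to roughly 160 kg N/ha and keeps rising beyond it, mirroring the qualitative behavior the abstract describes.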
Risk factors for treatment default among re-treatment tuberculosis patients in India, 2006.
Jha, Ugra Mohan; Satyanarayana, Srinath; Dewan, Puneet K; Chadha, Sarabjit; Wares, Fraser; Sahu, Suvanand; Gupta, Devesh; Chauhan, L S
2010-01-25
Under India's Revised National Tuberculosis Control Programme (RNTCP), >15% of previously-treated patients in the reported 2006 patient cohort defaulted from anti-tuberculosis treatment. To assess the timing, characteristics, and risk factors for default amongst re-treatment TB patients. For this case-control study, in 90 randomly-selected programme units treatment records were abstracted from all 2006 defaulters from the RNTCP re-treatment regimen (cases), with one consecutively-selected non-defaulter per case. Patients who interrupted anti-tuberculosis treatment for >2 months were classified as defaulters. 1,141 defaulters and 1,189 non-defaulters were included. The median duration of treatment prior to default was 81 days (25%-75% interquartile range 44-117 days) and documented retrieval efforts after treatment interruption were inadequate. Defaulters were more likely to have been male (adjusted odds ratio [aOR] 1.4, 95% confidence interval [CI] 1.2-1.7), to have previously defaulted from anti-tuberculosis treatment (aOR 1.3, 95%CI 1.1-1.6), to have had previous treatment from non-RNTCP providers (aOR 1.3, 95%CI 1.0-1.6), or to have had public health facility-based treatment observation (aOR 1.3, 95%CI 1.1-1.6). Amongst the large number of re-treatment patients in India, default occurs early and often. Improved pre-treatment counseling and community-based treatment provision may reduce default rates. Efforts to retrieve treatment interrupters prior to default require strengthening.
An evaluation of the treatment of risk and uncertainties in the IPCC reports on climate change.
Aven, Terje; Renn, Ortwin
2015-04-01
Few global threats rival global climate change in scale and potential consequence. The principal international authority assessing climate risk is the Intergovernmental Panel on Climate Change (IPCC). Through repeated assessments the IPCC has devoted considerable effort and interdisciplinary competence to articulating a common characterization of climate risk and uncertainties. We have reviewed the assessment and its foundation for the Fifth Assessment Reports published in 2013 and 2014, in particular the guidance note for lead authors of the fifth IPCC assessment report on consistent treatment of uncertainties. Our analysis shows that the work carried out by the IPCC falls short of providing a theoretically and conceptually convincing foundation for the treatment of risk and uncertainties. The main reasons for our assessment are: (i) the concept of risk is given too narrow a definition (a function of consequences and probability/likelihood); and (ii) the reports lack precision in delineating their concepts and methods. The goal of this article is to contribute to improving the handling of uncertainty and risk in future IPCC studies, thereby obtaining a more theoretically substantiated characterization as well as enhanced scientific quality for risk analysis in this area. Several suggestions for how to improve the risk and uncertainty treatment are provided. © 2014 Society for Risk Analysis.
Voice over internet protocol with prepaid calling card solutions
NASA Astrophysics Data System (ADS)
Gunadi, Tri
2001-07-01
VoIP technology is growing rapidly, and it has a big network impact on PT Telkom Indonesia, the biggest telecommunications operator in Indonesia. Telkom has adopted VoIP together with one other technology, the Intelligent Network (IN). We develop these technologies together in one service product, called the Internet Prepaid Calling Card (IPCC). The IPCC is becoming a new breakthrough for Indonesian telecommunication services, especially for VoIP and prepaid calling card solutions. The network architecture of Indonesian telecommunications consists of three layers: the Local, Tandem, and Trunk Exchange layers. Network development research for the IPCC architecture focuses on an overlay hierarchy of the Internet and the PSTN. With this design hierarchy, the goal of interworking the PSTN, VoIP, and the IN calling card becomes reality. The overlay design for the IPCC is not on the Trunk Exchange; in this new architecture, the overlay sits on the Tandem and Local Exchanges to make call processing faster. Two nodes are added: a Gateway (GW) and a Card Management Center (CMC). The GW interfaces between the PSTN and the Internet network using ISDN-PRA and Ethernet; its other functions are bridging the circuit-based (PSTN) and packet-based (VoIP) networks and real-time billing. The CMC is used for data storage, PIN validation, report activation, the tariff system, directory numbers, and all administrative transactions. With these two nodes added, the IPCC service is offered to the market.
Evaluation of Global Observations-Based Evapotranspiration Datasets and IPCC AR4 Simulations
NASA Technical Reports Server (NTRS)
Mueller, B.; Seneviratne, S. I.; Jimenez, C.; Corti, T.; Hirschi, M.; Balsamo, G.; Ciais, P.; Dirmeyer, P.; Fisher, J. B.; Guo, Z.;
2011-01-01
Quantification of global land evapotranspiration (ET) has long been associated with large uncertainties due to the lack of reference observations. Several recently developed products now provide the capacity to estimate ET at global scales. These products, partly based on observational data, include satellite-based products, land surface model (LSM) simulations, atmospheric reanalysis output, estimates based on empirical upscaling of eddy-covariance flux measurements, and atmospheric water balance datasets. The LandFlux-EVAL project aims to evaluate and compare these newly developed datasets. Additionally, an evaluation of IPCC AR4 global climate model (GCM) simulations is presented, providing an assessment of their capacity to reproduce flux behavior relative to the observations-based products. Though differently constrained with observations, the analyzed reference datasets display similar large-scale ET patterns. ET from the IPCC AR4 simulations was significantly smaller than that from the other products for India (up to 1 mm/d) and parts of eastern South America, and larger in the western USA, Australia and China. The inter-product variance is lower across the IPCC AR4 simulations than across the reference datasets in several regions, which indicates that uncertainties may be underestimated in the IPCC AR4 models due to shared biases of these simulations.
Creating History: By Design or by Default.
ERIC Educational Resources Information Center
Baugher, Shirley L.
1989-01-01
The author presents social demographic forecasts for the future. She examines social, economic, and political transitions in U.S. society separately and argues that the transitions that society makes depend ultimately on the values upon which individuals choose to act. (CH)
Code of Federal Regulations, 2011 CFR
2011-07-01
... 34 Education 3 2011-07-01 2011-07-01 false Draft cohort default rates and your ability to... OF EDUCATION STUDENT ASSISTANCE GENERAL PROVISIONS Cohort Default Rates § 668.204 Draft cohort.... (1) We notify you of your draft cohort default rate before your official cohort default rate is...
Code of Federal Regulations, 2014 CFR
2014-07-01
... 34 Education 3 2014-07-01 2014-07-01 false Draft cohort default rates and your ability to... OF EDUCATION STUDENT ASSISTANCE GENERAL PROVISIONS Two Year Cohort Default Rates § 668.185 Draft...) General. (1) We notify you of your draft cohort default rate before your official cohort default rate is...
Code of Federal Regulations, 2011 CFR
2011-07-01
... 34 Education 3 2011-07-01 2011-07-01 false Draft cohort default rates and your ability to... OF EDUCATION STUDENT ASSISTANCE GENERAL PROVISIONS Two Year Cohort Default Rates § 668.185 Draft...) General. (1) We notify you of your draft cohort default rate before your official cohort default rate is...
Code of Federal Regulations, 2013 CFR
2013-07-01
... 34 Education 3 2013-07-01 2013-07-01 false Draft cohort default rates and your ability to... OF EDUCATION STUDENT ASSISTANCE GENERAL PROVISIONS Two Year Cohort Default Rates § 668.185 Draft...) General. (1) We notify you of your draft cohort default rate before your official cohort default rate is...
Code of Federal Regulations, 2012 CFR
2012-07-01
... 34 Education 3 2012-07-01 2012-07-01 false Draft cohort default rates and your ability to... OF EDUCATION STUDENT ASSISTANCE GENERAL PROVISIONS Two Year Cohort Default Rates § 668.185 Draft...) General. (1) We notify you of your draft cohort default rate before your official cohort default rate is...
Code of Federal Regulations, 2012 CFR
2012-07-01
... 34 Education 3 2012-07-01 2012-07-01 false Draft cohort default rates and your ability to... OF EDUCATION STUDENT ASSISTANCE GENERAL PROVISIONS Cohort Default Rates § 668.204 Draft cohort.... (1) We notify you of your draft cohort default rate before your official cohort default rate is...
Code of Federal Regulations, 2014 CFR
2014-07-01
... 34 Education 3 2014-07-01 2014-07-01 false Draft cohort default rates and your ability to... OF EDUCATION STUDENT ASSISTANCE GENERAL PROVISIONS Cohort Default Rates § 668.204 Draft cohort.... (1) We notify you of your draft cohort default rate before your official cohort default rate is...
Code of Federal Regulations, 2013 CFR
2013-07-01
... 34 Education 3 2013-07-01 2013-07-01 false Draft cohort default rates and your ability to... OF EDUCATION STUDENT ASSISTANCE GENERAL PROVISIONS Cohort Default Rates § 668.204 Draft cohort.... (1) We notify you of your draft cohort default rate before your official cohort default rate is...
Risk Factors for Treatment Default among Re-Treatment Tuberculosis Patients in India, 2006
Jha, Ugra Mohan; Satyanarayana, Srinath; Dewan, Puneet K.; Chadha, Sarabjit; Wares, Fraser; Sahu, Suvanand; Gupta, Devesh; Chauhan, L. S.
2010-01-01
Setting Under India's Revised National Tuberculosis Control Programme (RNTCP), >15% of previously-treated patients in the reported 2006 patient cohort defaulted from anti-tuberculosis treatment. Objective To assess the timing, characteristics, and risk factors for default amongst re-treatment TB patients. Methodology For this case-control study, in 90 randomly-selected programme units treatment records were abstracted from all 2006 defaulters from the RNTCP re-treatment regimen (cases), with one consecutively-selected non-defaulter per case. Patients who interrupted anti-tuberculosis treatment for >2 months were classified as defaulters. Results 1,141 defaulters and 1,189 non-defaulters were included. The median duration of treatment prior to default was 81 days (25%–75% interquartile range 44–117 days) and documented retrieval efforts after treatment interruption were inadequate. Defaulters were more likely to have been male (adjusted odds ratio [aOR] 1.4, 95% confidence interval [CI] 1.2–1.7), to have previously defaulted from anti-tuberculosis treatment (aOR 1.3, 95%CI 1.1–1.6), to have had previous treatment from non-RNTCP providers (aOR 1.3, 95%CI 1.0–1.6), or to have had public health facility-based treatment observation (aOR 1.3, 95%CI 1.1–1.6). Conclusions Amongst the large number of re-treatment patients in India, default occurs early and often. Improved pre-treatment counseling and community-based treatment provision may reduce default rates. Efforts to retrieve treatment interrupters prior to default require strengthening. PMID:20111727
Treatment of uncertainties in the IPCC: a philosophical analysis
NASA Astrophysics Data System (ADS)
Jebeile, J.; Drouet, I.
2014-12-01
The IPCC produces scientific reports out of findings on climate and climate change. Because the findings are uncertain in many respects, the production of reports requires aggregating assessments of uncertainties of different kinds. This difficult task is currently regulated by the Guidance note for lead authors of the IPCC fifth assessment report on consistent treatment of uncertainties. The note recommends that two metrics—i.e. confidence and likelihood—be used for communicating the degree of certainty in findings. Confidence is expressed qualitatively "based on the type, amount, quality, and consistency of evidence […] and the degree of agreement", while likelihood is expressed probabilistically "based on statistical analysis of observations or model results, or expert judgment". Therefore, depending on the evidence evaluated, authors have the choice to present either an assigned level of confidence or a quantified measure of likelihood. But assessments of these two different kinds of uncertainty express distinct and conflicting methodologies, which makes aggregating them problematic. So the question arises whether the treatment of uncertainties in the IPCC is rationally justified. In order to answer the question, it is worth comparing the IPCC procedures with the formal normative theories of epistemic rationality which have been developed by philosophers. These theories—which include contributions to the philosophy of probability and to Bayesian probabilistic confirmation theory—are relevant for our purpose because they are commonly used to assess the rationality of collective judgement formation based on uncertain knowledge.
In this paper we make the comparison and pursue the following objectives: (i) we determine whether the IPCC confidence and likelihood can be compared with the notions of uncertainty targeted by or underlying the formal normative theories of epistemic rationality; (ii) we investigate whether the formal normative theories of epistemic rationality justify treating uncertainty along those two dimensions, and indicate how this can be avoided.
Unleashing Expert Judgment in the IPCC's Fifth Assessment Report
NASA Astrophysics Data System (ADS)
Freeman, P. T.; Mach, K. J.; Mastrandrea, M.; Field, C. B.
2016-12-01
IPCC assessments are critical vehicles for evaluating and synthesizing existing knowledge about climate change, its impacts, and potential options for adaptation and mitigation. In these assessments, rigorous expert judgment is essential for characterizing current scientific understanding including persistent and complex uncertainties related to climate change. Over its history the IPCC has iteratively developed frameworks for evaluating and communicating what is known and what is not known about climate change science. In this presentation, we explore advances and challenges in approaches to evaluating and communicating expert judgment in the Intergovernmental Panel on Climate Change's Fifth Assessment Report (IPCC AR5). We present an analysis of the frequency of the use of calibrated degree-of-certainty terms in the policymaker summaries from the IPCC's AR5 and Fourth Assessment Report (AR4). We find that revised guidance for IPCC author teams in the AR5 improved the development of balanced judgments on scientific evidence across disciplines. Overall, degree-of-certainty terms are more abundant in the AR5 policymaker summaries compared to those of the AR4, demonstrating an increased commitment to extensively and transparently characterizing expert judgments underpinning report conclusions. This analysis also shows that while working groups still favor different degree-of-certainty scales in the AR5, authors employed a wider array of degree-of-certainty scales to communicate expert judgment supporting report findings compared to the policymaker summaries of the AR4. Finally, our analysis reveals greater inclusion of lower-certainty findings in the AR5 as compared to the AR4, critical for communicating a fuller range of possible climate change impacts and response options. Building on our findings we propose a simpler, more transparent, and more rigorous framework for developing and communicating expert judgments in future climate and environmental assessments.
Kumar, Anil; Girdhar, Anita; Chakma, Joy Kumar; Girdhar, Bhuwneswar Kumar
2015-01-01
Aim. To study the magnitude of default, the time of default, its causes, and the final clinical outcome. Methods. Data collected in active surveys in Agra are analyzed. Patients were given treatment after medical confirmation and were followed up. Treatment default and other clinical outcomes were recorded. Results. Patients who defaulted had comparable demographic characteristics. However, among defaulters more women (62.7% in PB, 42.6% in MB) were seen than among treatment completers (PB 52.7% and MB 35.9%). Nerve involvement was high in treatment completers: 45.7% in PB and 91.3% in MB leprosy. The overall default rate was lower in ROM (14.8%) than in standard MDT (28.8%) for PB leprosy (χ²(1) = 11.6, P = 0.001) and also for MB leprosy: 9.1% in ROM compared to 34.5% in MDT (χ²(1) = 6.0, P = 0.015). The default rate was not different between the two types of leprosy given MDT (28.8% versus 34.5%, P > 0.05). Most patients defaulted at an early stage of treatment, mainly due to manageable side effects. Conclusion. The default rate in standard MDT, both for PB and MB leprosy, was observed to be significantly higher than in ROM treatment. Most defaults occurred at an early stage of treatment, and default is largely due to side effects like drowsiness, weakness, vomiting, diarrhea, and so forth, related to poor general health. Although about half of the defaulters were observed to be cured, 2.2% in PB-MDT and 10.9% in MB-MDT developed disability; this is an issue due to default. Attempts are needed to increase treatment compliance. The use of specially designed disease-related health education along with easily administered drug regimens may help to reduce default. PMID:25705679
Nudge for (the Public) Good: How Defaults Can Affect Cooperation
Fosgaard, Toke R.; Piovesan, Marco
2015-01-01
In this paper we test the effect of non-binding defaults on the level of contribution to a public good. We manipulate the default numbers appearing on the decision screen to nudge subjects toward a free-rider strategy or a perfect conditional cooperator strategy. Our results show that the vast majority of our subjects did not adopt the default numbers, but their stated strategy was affected by the default. Moreover, we find that our manipulation spilled over to a subsequent repeated public goods game where default was not manipulated. Here we found that subjects who previously saw the free rider default were significantly less cooperative than those who saw the perfect conditional cooperator default. PMID:26717569
Nudge for (the Public) Good: How Defaults Can Affect Cooperation.
Fosgaard, Toke R; Piovesan, Marco
2015-01-01
In this paper we test the effect of non-binding defaults on the level of contribution to a public good. We manipulate the default numbers appearing on the decision screen to nudge subjects toward a free-rider strategy or a perfect conditional cooperator strategy. Our results show that the vast majority of our subjects did not adopt the default numbers, but their stated strategy was affected by the default. Moreover, we find that our manipulation spilled over to a subsequent repeated public goods game where default was not manipulated. Here we found that subjects who previously saw the free rider default were significantly less cooperative than those who saw the perfect conditional cooperator default.
Regan, R. Steven; Markstrom, Steven L.; Hay, Lauren E.; Viger, Roland J.; Norton, Parker A.; Driscoll, Jessica M.; LaFontaine, Jacob H.
2018-01-08
This report documents several components of the U.S. Geological Survey National Hydrologic Model of the conterminous United States for use with the Precipitation-Runoff Modeling System (PRMS). It provides descriptions of the (1) National Hydrologic Model, (2) Geospatial Fabric for National Hydrologic Modeling, (3) PRMS hydrologic simulation code, (4) parameters and estimation methods used to compute spatially and temporally distributed default values as required by PRMS, (5) National Hydrologic Model Parameter Database, and (6) model extraction tool named Bandit. The National Hydrologic Model Parameter Database contains values for all PRMS parameters used in the National Hydrologic Model. The methods and national datasets used to estimate all the PRMS parameters are described. Some parameter values are derived from characteristics of topography, land cover, soils, geology, and hydrography using traditional Geographic Information System methods. Other parameters are set to long-established default values and computation of initial values. Additionally, methods (statistical, sensitivity, calibration, and algebraic) were developed to compute parameter values on the basis of a variety of nationally-consistent datasets. Values in the National Hydrologic Model Parameter Database can periodically be updated on the basis of new parameter estimation methods and as additional national datasets become available. A companion ScienceBase resource provides a set of static parameter values as well as images of spatially-distributed parameters associated with PRMS states and fluxes for each Hydrologic Response Unit across the conterminous United States.
Boyd, Ashleigh S; Wood, Kathryn J
2010-06-04
The fully differentiated progeny of ES cells (ESC) may eventually be used for cell replacement therapy (CRT). However, elements of the innate immune system may contribute to damage or destruction of these tissues when transplanted. Herein, we assessed the hitherto ill-defined contribution of the early innate immune response in CRT after transplantation of either ESC derived insulin producing cell clusters (IPCCs) or adult pancreatic islets. Ingress of neutrophil or macrophage cells was noted immediately at the site of IPCC transplantation, but this infiltration was attenuated by day three. Gene profiling identified specific inflammatory cytokines and chemokines that were either absent or sharply reduced by three days after IPCC transplantation. Thus, IPCC transplantation provoked less of an early immune response than pancreatic islet transplantation. Our study offers insights into the characteristics of the immune response of an ESC derived tissue in the incipient stages following transplantation and suggests potential strategies to inhibit cell damage to ensure their long-term perpetuation and functionality in CRT.
Lead and Arsenic Bioaccessibility and Speciation as a Function of Soil Particle Size
Bioavailability research of soil metals has advanced considerably from default values to validated in vitro bioaccessibility (IVBA) assays for site-specific risk assessment. Previously, USEPA determined that the soil-size fraction representative of dermal adherence and consequent...
Santalla, Estela; Córdoba, Verónica; Blanco, Gabriel
2013-08-01
The objective of this work was the application of 2006 Intergovernmental Panel on Climate Change (IPCC) Guidelines for the estimation of methane and nitrous oxide emissions from the waste sector in Argentina as a preliminary exercise for greenhouse gas (GHG) inventory development and to compare with previous inventories based on 1996 IPCC Guidelines. Emissions projections to 2030 were evaluated under two scenarios--business as usual (BAU), and mitigation--and the calculations were done by using the ad hoc developed IPCC software. According to local activity data, in the business-as-usual scenario, methane emissions from solid waste disposal will increase by 73% by 2030 with respect to the emissions of year 2000. In the mitigation scenario, based on the recorded trend of methane captured in landfills, a decrease of 50% from the BAU scenario should be achieved by 2030. In the BAU scenario, GHG emissions from domestic wastewater will increase 63% from 2000 to 2030. Methane emissions from industrial wastewater, calculated from activity data of dairy, swine, slaughterhouse, citric, sugar, and wine sectors, will increase by 58% from 2000 to 2030 while methane emissions from domestic will increase 74% in the same period. Results show that GHG emissions calculated from 2006 IPCC Guidelines resulted in lower levels than those reported in previous national inventories for solid waste disposal and domestic wastewater categories, while levels were 18% higher for industrial wastewater. The implementation of the 2006 IPCC Guidelines for National Greenhouse Inventories is now considering by the UNFCCC for non-Annex I countries in order to enhance the compilation of inventories based on comparable good practice methods. This work constitutes the first GHG emissions estimation from the waste sector of Argentina applying the 2006 IPCC Guidelines and the ad doc developed software. 
It will contribute to identifying the main differences between the models applied in the estimation of methane emissions on the key categories of waste emission sources and to comparing results with previous inventories based on 1996 IPCC Guidelines.
Environmental health risk assessment and management for global climate change
NASA Astrophysics Data System (ADS)
Carter, P.
2014-12-01
This environmental health risk assessment and management approach for atmospheric greenhouse gas (GHG) pollution is based almost entirely on IPCC AR5 (2014) content, but the IPCC does not make recommendations. Large climate model uncertainties may be large environmental health risks. In accordance with environmental health risk management, we use the standard (IPCC-endorsed) formula of risk as the product of magnitude times probability, with an extremely high standard of precaution. Atmospheric GHG pollution, causing global warming, climate change and ocean acidification, is increasing as fast as ever. Time is of the essence to inform and make recommendations to governments and the public. While the 2°C target is the only formally agreed-upon policy limit, for the most vulnerable nations, a 1.5°C limit is being considered by the UNFCCC Secretariat. The Climate Action Network International (2014), representing civil society, recommends that the 1.5°C limit be kept open and that emissions decline from 2015. James Hansen et al (2013) have argued that 1°C is the danger limit. Taking into account committed global warming, its millennial duration, multiple large sources of amplifying climate feedbacks and multiple adverse impacts of global warming and climate change on crops, and population health impacts, all the IPCC AR5 scenarios carry extreme environmental health risks to large human populations and to the future of humanity as a whole. Our risk consideration finds that 2°C carries high risks of many catastrophic impacts, that 1.5°C carries high risks of many disastrous impacts, and that 1°C is the danger limit. IPCC AR4 (2007) showed that emissions must be reversed by 2015 for a 2°C warming limit. For the IPCC AR5, only the best-case scenario, RCP2.6, is projected to stay under 2°C by 2100, but the upper range is just above 2°C. It calls for emissions to decline by 2020.
We recommend that, for aversion of catastrophic environmental health risk, emissions decline from 2015 (CAN International 2014), and, if policy makers are limited to the IPCC AR5 scenarios, we recommend RCP2.6, with emissions declining by 2020.
NASA Astrophysics Data System (ADS)
Raschke, E.; Kinne, S.
2013-05-01
Multi-year average radiative flux maps of three satellite datasets (CERES, ISCCP and GEWEX-SRB) are compared to each other and to typical values from global modeling (median values of results of 20 climate models of the 4th IPCC Assessment). Diversity assessments address radiative flux products at the top of the atmosphere (TOA) and at the surface, with particular attention to impacts by clouds. Involving data from both the surface and the TOA, special attention is given to the vertical radiative flux divergence and to the infrared greenhouse effect, which are rarely shown in the literature.
Intergovernmental Panel on Climate Change. First Assessment Report Overview.
ERIC Educational Resources Information Center
International Environmental Affairs, 1991
1991-01-01
Presented are policymakers' summaries of the three working groups of the Intergovernmental Panel on Climate Change (IPCC)--science, impacts, and response strategies, the report of the IPCC Special Committee on the Participation of Developing Countries, and a discussion of international cooperation and future work. (CW)
IPCC Methodologies for the Waste Sector: Past, Present, and Future
USDA-ARS?s Scientific Manuscript database
The reporting of national greenhouse gas (GHG) emissions began more than a decade ago by the signatory countries of the United Nations Framework Convention on Climate Change (UNFCCC). National GHG inventories rely on the evolving Intergovernmental Panel on Climate Change (IPCC) national GHG inventor...
Zomer, Robert J.; Neufeldt, Henry; Xu, Jianchu; Ahrends, Antje; Bossio, Deborah; Trabucco, Antonio; van Noordwijk, Meine; Wang, Mingcheng
2016-01-01
Agroforestry systems and tree cover on agricultural land make an important contribution to climate change mitigation, but are not systematically accounted for in either global carbon budgets or national carbon accounting. This paper assesses the role of trees on agricultural land and their significance for carbon sequestration at a global level, along with recent change trends. Remote sensing data show that in 2010, 43% of all agricultural land globally had at least 10% tree cover and that this has increased by 2% over the previous ten years. Combining geographically and bioclimatically stratified Intergovernmental Panel on Climate Change (IPCC) Tier 1 default estimates of carbon storage with this tree cover analysis, we estimated 45.3 PgC on agricultural land globally, with trees contributing >75%. Between 2000 and 2010 tree cover increased by 3.7%, resulting in an increase of >2 PgC (or 4.6%) of biomass carbon. On average, globally, biomass carbon increased from 20.4 to 21.4 tC ha−1. Regional and country-level variation in stocks and trends were mapped and tabulated globally, and for all countries. Brazil, Indonesia, China and India had the largest increases in biomass carbon stored on agricultural land, while Argentina, Myanmar, and Sierra Leone had the largest decreases. PMID:27435095
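The Tier 1 bookkeeping described above (agricultural land area per stratum times a default biomass-carbon density, summed globally) can be sketched as follows. The strata, areas, and densities below are illustrative placeholders, not values from the paper:

```python
# Sketch of IPCC Tier 1 style accounting: multiply the agricultural land
# area in each (hypothetical) bioclimatic stratum by a default biomass-carbon
# density, then sum. All numbers are illustrative, not the paper's values.

def biomass_carbon_stock(strata):
    """Total biomass carbon (tC) from (area_ha, density_tC_per_ha) pairs."""
    return sum(area_ha * density for area_ha, density in strata)

strata = [
    (1_000_000, 21.4),   # humid stratum (illustrative)
    (2_500_000, 10.0),   # semi-arid stratum (illustrative)
]
total_tC = biomass_carbon_stock(strata)
mean_density = total_tC / sum(area for area, _ in strata)  # tC/ha average
```

Per-country trends would then come from repeating this with tree-cover maps for different years and differencing the totals.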
The advantage of calculating emission reduction with local emission factor in South Sumatera region
NASA Astrophysics Data System (ADS)
Buchari, Erika
2017-11-01
Greenhouse gases (GHG) have different global warming potentials and are usually expressed in CO2 equivalent. Germany succeeded in reducing CO2 emissions in the 1990s, while Japan has increased the load factor of its public transport since 2001. Indonesia's National Medium Term Development Plan, 2015-2019, set a target of minimum 26% and maximum 41% national emission reduction by 2019. The Intergovernmental Panel on Climate Change (IPCC) defines three levels of accuracy for counting GHG emissions: tier 1, tier 2, and tier 3. In tier 1, the calculation is based on fuel used and an average (default) emission factor obtained from statistical data, while in tier 2 the calculation is based on fuel used and local emission factors. Tier 3 is more accurate than tiers 1 and 2; its calculation is based on fuel used derived from a modelling method or from direct measurement. This paper aims to evaluate the tier 2 and tier 3 calculations for the South Sumatera region. The 2012 Regional Action Plan for Greenhouse Gases of South Sumatera projects about 6,569,000 tons per year for 2020 without mitigation, while the tier 3 calculation gives about 6,229,858.468 tons per year. The tier 3 calculation was found to be more accurate in terms of the fuel used by different vehicle types, so that mitigation actions can be planned more realistically.
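The tier 1 vs tier 2 distinction described above amounts to swapping the emission factor in the same product (emissions = fuel consumed × emission factor). A minimal sketch, with illustrative numbers that are not the paper's:

```python
# Hedged sketch of the IPCC tier distinction: tier 1 uses a default emission
# factor, tier 2 substitutes a locally measured one. All values illustrative.

def co2_emissions_t(fuel_tj, ef_t_per_tj):
    """CO2 emissions (tonnes) from fuel consumed (TJ) and an emission factor."""
    return fuel_tj * ef_t_per_tj

fuel_tj = 1200.0      # fuel consumed, TJ (illustrative)
ef_default = 74.1     # tier 1 default factor, t CO2/TJ (illustrative)
ef_local = 69.3       # tier 2 local factor, t CO2/TJ (illustrative)

tier1 = co2_emissions_t(fuel_tj, ef_default)
tier2 = co2_emissions_t(fuel_tj, ef_local)
gap = tier1 - tier2   # how much the default overstates vs the local factor
```

The paper's point is that this gap (and the further refinement from tier 3 fuel-use modelling) can be large enough to change mitigation planning.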
46 CFR 298.41 - Remedies after default.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 46 Shipping 8 2011-10-01 2011-10-01 false Remedies after default. 298.41 Section 298.41 Shipping... Defaults and Remedies, Reporting Requirements, Applicability of Regulations § 298.41 Remedies after default... governing remedies after a default, which relate to our rights and duties, the rights and duties of the...
Brosschot, Jos F; Verkuil, Bart; Thayer, Julian F
2016-06-01
From a combined neurobiological and evolution-theoretical perspective, the stress response is a subcortically subserved response to uncertainty that is not 'generated' but 'default': the stress response is 'always there' but as long as safety is perceived, the stress response is under tonic prefrontal inhibition, reflected by high vagally mediated heart rate variability. Uncertainty of safety leads to disinhibiting the default stress response, even in the absence of threat. Due to the stress response's survival value, this 'erring on the side of caution' is passed to us via our genes. Thus, intolerance of uncertainty is not acquired during the life cycle, but is a given property of all living organisms, only to be alleviated in situations of which the safety is learned. When the latter is deficient, generalized unsafety ensues, which underlies chronic anxiety and stress and their somatic health risks, as well as other highly prevalent conditions carrying such risks, including loneliness, obesity, aerobic unfitness and old age. Copyright © 2016 Elsevier Ltd. All rights reserved.
Human health risk assessment due to global warming--a case study of the Gulf countries.
Husain, Tahir; Chaudhary, Junaid Rafi
2008-12-01
Accelerated global warming is predicted by the Intergovernmental Panel on Climate Change (IPCC) due to increasing anthropogenic greenhouse gas emissions. The climate changes are anticipated to have a long-term impact on human health, marine and terrestrial ecosystems, water resources and vegetation. Due to rising sea levels, low lying coastal regions will be flooded, farmlands will be threatened and scarcity of fresh water resources will be aggravated. This will in turn cause increased human suffering in different parts of the world. Spread of disease vectors will contribute towards high mortality, along with heat-related deaths. Arid and hot climatic regions will face devastating effects risking survival of the fragile plant species, wild animals, and other desert ecosystems. The paper presents future changes in temperature, precipitation and humidity and their direct and indirect potential impacts on human health in the coastal regions of the Gulf countries including Yemen, Oman, United Arab Emirates, Qatar, and Bahrain. The analysis is based on the long-term changes in the values of temperature, precipitation and humidity as predicted by the global climatic simulation models under different scenarios of GHG emission levels. Monthly data on temperature, precipitation, and humidity were retrieved from IPCC databases for longitude 41.25 degrees E to 61.875 degrees E and latitude 9.278 degrees N to 27.833 degrees N. Using an average of 1970 to 2000 values as baseline, the changes in the humidity, temperature and precipitation were predicted for the period 2020 to 2050 and 2070 to 2099.
Based on epidemiological studies on various diseases associated with the change in temperature, humidity and precipitation in arid and hot regions, empirical models were developed to assess human health risk in the Gulf region to predict elevated levels of diseases and mortality rates under different emission scenarios as developed by the IPCC. The preliminary assessment indicates increased mortality rates due to cardiovascular and respiratory illnesses, thermal stress, and increased frequency of infectious vector borne diseases in the region between 2070 and 2099.
Thresholds for Chemically Induced Toxicity: Theories and Evidence
Regulatory agencies define “science policies” as a means of proceeding with risk assessments and management decisions in the absence of all the data these bodies would like. Policies may include the use of default assumptions, values and methodologies. The U.S. EPA 20...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mather, Barry A; Boemer, Jens C.; Vittal, Eknath
The response of low voltage networks with high penetration of PV systems to transmission network faults will, in the future, determine the overall power system performance during certain hours of the year. The WECC distributed PV system model (PVD1) is designed to represent small-scale distribution-connected systems. Although default values are provided by WECC for the model parameters, tuning of those parameters seems to become important in order to accurately estimate the partial loss of distributed PV systems for bulk system studies. The objective of this paper is to describe a new methodology to determine the WECC distributed PV system (PVD1) model parameters and to derive parameter sets obtained for six distribution circuits of a Californian investor-owned utility with large amounts of distributed PV systems. The results indicate that the parameters for the partial loss of distributed PV systems may differ significantly from the default values provided by WECC.
Petersen, Nick; Perrin, David; Newhauser, Wayne; Zhang, Rui
2017-01-01
The purpose of this study was to evaluate the impact of selected configuration parameters that govern multileaf collimator (MLC) transmission and rounded leaf offset in a commercial treatment planning system (TPS) (Pinnacle3, Philips Medical Systems, Andover, MA, USA) on the accuracy of intensity-modulated radiation therapy (IMRT) dose calculation. The MLC leaf transmission factor was modified based on measurements made with ionization chambers. The table of parameters containing rounded-leaf-end offset values was modified by measuring the radiation field edge as a function of leaf bank position with an ionization chamber in a scanning water-tank dosimetry system and comparing the locations to those predicted by the TPS. The modified parameter values were validated by performing IMRT quality assurance (QA) measurements on 19 gantry-static IMRT plans. Planar dose measurements were performed with radiographic film and a diode array (MapCHECK2) and compared to TPS calculated dose distributions using default and modified configuration parameters. Based on measurements, the leaf transmission factor was changed from a default value of 0.001 to 0.005. Surprisingly, this modification resulted in a small but statistically significant worsening of IMRT QA gamma-index passing rate, which revealed that the overall dosimetric accuracy of the TPS depends on multiple configuration parameters in a manner that is coupled and not intuitive because of the commissioning protocol used in our clinic. The rounded leaf offset table had little room for improvement, with the average difference between the default and modified offset values being -0.2 ± 0.7 mm. While our results depend on the current clinical protocols, treatment unit and TPS used, the methodology used in this study is generally applicable. Different clinics could potentially obtain different results and improve their dosimetric accuracy using our approach.
Finding the CO₂ Culprit
ERIC Educational Resources Information Center
Clary, Renee; Wandersee, James
2015-01-01
In 2013, the Intergovernmental Panel on Climate Change (IPCC) released its fifth report, attributing 95% of "all" climate warming--from the 1950s through today--to humans. Not only did the report--like previous IPCC reports dating back to 1990--attribute global warming to anthropogenic carbon dioxide emissions, but over time the vast…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-01
... Climate Change (IPCC), Impacts, Adaptation & Vulnerability. SUMMARY: The United States Global Change... on Climate Change (IPCC), Impacts, Adaptation & Vulnerability. The United Nations Environment... socio-economic information for understanding the scientific basis of climate change, potential impacts...
Use of cellular phone contacts to increase return rates for immunization services in Kenya.
Mokaya, Evans; Mugoya, Isaac; Raburu, Jane; Shimp, Lora
2017-01-01
In Kenya, failure to complete immunization schedules by children who previously accessed immunization services is an obstacle to ensuring that children are fully immunized. Home visit approaches used to track defaulting children have not been successful in reducing the drop-out rate. This study tested the use of phone contacts as an approach for tracking immunization defaulters in twelve purposively-selected facilities in three districts of western Kenya. For nine months, children accessing immunization services in the facilities were tracked and caregivers were asked their reasons for defaulting. In all of the facilities, caregiver phone ownership was above 80%. In 11 of the 12 facilities, defaulter rates between pentavalent1 and pentavalent3 vaccination doses reduced significantly to within the acceptable level of < 10%. Caregivers provided reliable contact information and health workers positively perceived phone-based defaulter communications. Tracking a defaulter required on average 2 minutes by voice and Ksh 6 ($ 0.07). Competing tasks and concerns about vaccinating sick children and side-effects were the most cited reasons for caregivers defaulting. Notably, a significant number of children categorised as defaulters had been vaccinated in a different facility (and were therefore "false defaulters"). Use of phone contacts for follow-up is a feasible and cost-effective method for tracking defaulters. This approach should complement traditional home visits, especially for caregivers without phones. Given communication-related reasons for defaulting, it is important that immunization programs scale-up community education activities. A system for health facilities to share details of defaulting children should be established to reduce "false defaulters".
Time of default in tuberculosis patients on directly observed treatment.
Pardeshi, Geeta S
2010-09-01
Default remains an important challenge for the Revised National Tuberculosis Control Programme, which has achieved improved cure rates. This study describes the pattern of time of default in patients on DOTS. Tuberculosis Unit in District Tuberculosis Centre, Yavatmal, India; Retrospective cohort study. This analysis was done among the cohort of patients registered at the Tuberculosis Unit during the year 2004. The time of default was assessed from the tuberculosis register. The sputum smear conversion and treatment outcome were also assessed. Kaplan-Meier plots and log rank tests. Overall, the default rate amongst the 716 patients registered at the Tuberculosis Unit was 10.33%. There was a significant difference in the default rate over time between the three DOTS categories (log rank statistic = 15.49, P=0.0004). Amongst the 331 smear-positive patients, the cumulative default rates at the end of intensive phase were 4% and 16%; while by end of treatment period, the default rates were 6% and 31% in category I and category II, respectively. A majority of the smear-positive patients in category II belonged to the group 'treatment after default' (56/95), and 30% of them defaulted during re-treatment. The sputum smear conversion rate at the end of intensive phase was 84%. Amongst 36 patients without smear conversion at the end of intensive phase, 55% had treatment failure. Patients defaulting in intensive phase of treatment and without smear conversion at the end of intensive phase should be retrieved on a priority basis. Default constitutes not only a major reason for patients needing re-treatment but also a risk for repeated default.
Children's Responses to Line Spacing in Early Reading Books or "Holes to Tell Which Line You're On"
ERIC Educational Resources Information Center
Reynolds, Linda; Walker, Sue; Duncan, Alison
2006-01-01
This paper describes a study designed to find out whether children's reading would be affected by line spacing that is wider or narrower than the commonly used default values. The realistic, high quality test material was set using a range of four different line spacing values, and twenty-four children in Years 1 and 2 (between five and seven…
Method of Characteristic (MOC) Nozzle Flowfield Solver - User’s Guide and Input Manual
2013-01-01
Description: axisymmetric or planar calculation, selected by DELTA (value 1.0 = axisymmetric solution, 0.0 = planar solution; default 0.0). NI (data type: integer) sets the number of radial points on the inflow plane (max 99). Sample &INPUT namelist control values from the manual: DELTA = 1.0 !1 axi, 0 planar (mass flux not working correctly); NI = 71 !NUMBER OF RADIAL POINTS ON INFLOW PLANE (Max 99); NT = 35 !NUMBER OF
A Web-based tool for UV irradiance data: predictions for European and Southeast Asian sites.
Kift, Richard; Webb, Ann R; Page, John; Rimmer, John; Janjai, Serm
2006-01-01
There are a range of UV models available, but one needs significant pre-existing knowledge and experience in order to be able to use them. In this article a comparatively simple Web-based model developed for the SoDa (Integration and Exploitation of Networked Solar Radiation Databases for Environment Monitoring) project is presented. This is a clear-sky model with modifications for cloud effects. To determine if the model produces realistic UV data, the output is compared with 1-year sets of hourly measurements at sites in the United Kingdom and Thailand. The accuracy of the output depends on the input, but reasonable results were obtained with the use of the default database inputs and improved when pyranometer rather than modeled data provided the global radiation input needed to estimate the UV. The average modeled values of UV for the UK site were found to be within 10% of measurements. For the tropical sites in Thailand the average modeled values were within 11-20% of measurements for the four sites with the use of the default SoDa database values. These results improved when pyranometer data and TOMS ozone data from 2002 replaced the standard SoDa database values, reducing the error range for all four sites to less than 15%.
Student Loan Defaults in Texas: Yesterday, Today, and Tomorrow.
ERIC Educational Resources Information Center
Webster, Jeff; Meyer, Don; Arnold, Adreinne
In 1988, the Texas student aid community addressed the issue of defaults in the guaranteed student loan program, creating a strategic default initiative. In June 1998, this same group of student aid officials met again to examine the current status of defaults and to share ideas on ways to prevent defaults. This report was intended as a resource…
34 CFR 674.5 - Federal Perkins Loan program cohort default rate and penalties.
Code of Federal Regulations, 2011 CFR
2011-07-01
... from an institution's cohort default rate calculation if the loan is— (A) Discharged due to death or... 34 Education 3 2011-07-01 2011-07-01 false Federal Perkins Loan program cohort default rate and... Provisions § 674.5 Federal Perkins Loan program cohort default rate and penalties. (a) Default penalty. If an...
A reduced-form intensity-based model under fuzzy environments
NASA Astrophysics Data System (ADS)
Wu, Liang; Zhuang, Yaming
2015-05-01
External shocks and internal contagion are important sources of default events. However, their effect on a company is not directly observed, so we cannot obtain the accurate size of the shocks; the information available to investors about the default process exhibits a certain fuzziness. Therefore, using randomness and fuzziness together to study problems such as derivative pricing or default probability meets a practical need. But the idea of fuzzifying credit risk models is little exploited, especially in reduced-form models. This paper proposes a new default intensity model with fuzziness, presents a fuzzy default probability and default loss rate, and applies them to the pricing of defaultable debt and credit derivatives. Finally, a simulation analysis verifies the rationality of the model. Using fuzzy numbers and random analysis, one can consider more sources of uncertainty in the default process, as well as investors' subjective judgment of financial markets at a variety of fuzzy reliability levels, so as to broaden the scope of possible credit spreads.
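A minimal sketch of the kind of fuzzy reduced-form setup the abstract describes, assuming a triangular fuzzy default intensity and the standard exponential survival model (the authors' exact construction may differ):

```python
import math

# Reduced-form model with a fuzzy intensity (illustrative, not the paper's
# exact model): the default intensity lambda is a triangular fuzzy number
# (lo, mid, hi); an alpha-cut of it yields an interval of default
# probabilities P(default by T) = 1 - exp(-lambda * T).

def alpha_cut(tri, alpha):
    """Interval [l, u] of a triangular fuzzy number at membership level alpha."""
    lo, mid, hi = tri
    return (lo + alpha * (mid - lo), hi - alpha * (hi - mid))

def default_prob_interval(tri_lambda, T, alpha):
    """Fuzzy default probability as an interval at the given alpha-cut."""
    lam_lo, lam_hi = alpha_cut(tri_lambda, alpha)
    return (1.0 - math.exp(-lam_lo * T), 1.0 - math.exp(-lam_hi * T))

# e.g. intensity "around 3% per year", 5-year horizon, membership level 0.5
p_lo, p_hi = default_prob_interval((0.01, 0.03, 0.05), T=5.0, alpha=0.5)
```

Sweeping alpha from 0 to 1 recovers the full fuzzy default probability, which is what widens the range of credit spreads the model can produce.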
NASA Astrophysics Data System (ADS)
Harris, Adam
2014-05-01
The Intergovernmental Panel on Climate Change (IPCC) prescribes that the communication of risk and uncertainty information pertaining to scientific reports, model predictions etc. be communicated with a set of 7 likelihood expressions. These range from "Extremely likely" (intended to communicate a likelihood of greater than 99%) through "As likely as not" (33-66%) to "Extremely unlikely" (less than 1%). Psychological research has investigated the degree to which these expressions are interpreted as intended by the IPCC, both within and across cultures. I will present a selection of this research and demonstrate some problems associated with communicating likelihoods in this way, as well as suggesting some potential improvements.
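The 7-term scale described above can be sketched as a simple lookup. Only the three bounds the abstract gives are taken from it; the remaining ranges are filled in from commonly cited IPCC guidance-note values, so treat the whole table as illustrative:

```python
# Illustrative mapping of IPCC calibrated likelihood language to probability
# ranges. Bounds marked (abstract) come from the text above; the rest are
# assumptions based on commonly cited IPCC guidance and may not match it.

LIKELIHOOD_RANGES = {
    "extremely likely":   (0.99, 1.00),  # (abstract) > 99%
    "very likely":        (0.90, 1.00),
    "likely":             (0.66, 1.00),
    "as likely as not":   (0.33, 0.66),  # (abstract) 33-66%
    "unlikely":           (0.00, 0.33),
    "very unlikely":      (0.00, 0.10),
    "extremely unlikely": (0.00, 0.01),  # (abstract) < 1%
}

def consistent_terms(p):
    """All expressions whose range contains probability p (ranges overlap,
    which is one reason interpretation studies find ambiguity)."""
    return [term for term, (lo, hi) in LIKELIHOOD_RANGES.items() if lo <= p <= hi]
```

Note that a single probability can be consistent with several terms; the psychological research summarized above concerns how far lay interpretations drift from these intended ranges.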
Tier 1 Rice Model for Estimating Pesticide Concentrations in Rice Paddies
The Tier 1 Rice Model estimates screening level aquatic concentrations of pesticides in rice paddies. It is a simple pesticide soil:water partitioning model with default values for water volume, soil mass, and organic carbon. Pesticide degradation is not considered in the mode...
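A screening calculation of this soil:water partitioning kind can be sketched as below, assuming the usual linear-sorption form Kd = Koc × f_oc. The function name and every default are illustrative assumptions, not the EPA tool's own values:

```python
# Hedged sketch of a simple soil:water partitioning screen (not the actual
# Tier 1 Rice Model). Applied pesticide mass distributes between the water
# column and the sorbed phase:
#   c_water = mass / (V_water + Kd * m_soil),  with  Kd = Koc * f_oc

def paddy_water_conc(mass_mg, v_water_l, m_soil_kg, koc_l_per_kg, f_oc):
    """Screening-level dissolved concentration in paddy water, mg/L."""
    kd = koc_l_per_kg * f_oc                      # sorption coefficient, L/kg
    return mass_mg / (v_water_l + kd * m_soil_kg)

# all inputs illustrative: ~1.12 kg applied, 1e6 L water, 1.3e5 kg soil
c = paddy_water_conc(mass_mg=1.12e6, v_water_l=1.0e6, m_soil_kg=1.3e5,
                     koc_l_per_kg=100.0, f_oc=0.02)
```

Ignoring degradation, as the entry notes the model does, makes this a conservative (upper-bound) screen.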
Default Trends in Major Postsecondary Education Sectors.
ERIC Educational Resources Information Center
Merisotis, Jamie P.
1988-01-01
Information on GSL defaults in five states is reviewed: California, Illinois, Massachusetts, New Jersey, and Pennsylvania. Default rates are defined and levels of default are examined using a variety of analytical methods. (Author/MLW)
UFO (UnFold Operator) default data format
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kissel, L.; Biggs, F.; Marking, T.R.
The default format for the storage of x,y data for use with the UFO code is described. The format assumes that the data stored in a file is a matrix of values; two columns of this matrix are selected to define a function of the form y = f(x). This format is specifically designed to allow for easy importation of data obtained from other sources, or easy entry of data using a text editor, with a minimum of reformatting. This format is flexible and extensible through the use of inline directives stored in the optional header of the file. A special extension of the format implements encoded data which significantly reduces the storage required as compared with the unencoded form. UFO supports several extensions to the file specification that implement execute-time operations, such as transformation of the x and/or y values, selection of specific columns of the matrix for association with the x and y values, input of data directly from other formats (e.g., DAMP and PFF), and a simple type of library-structured file format.
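A reader for the kind of layout the record describes (a whitespace-separated matrix in a text file, with two selectable columns defining y = f(x)) might look like the sketch below. `read_xy` and the `#`-comment convention for header lines are assumptions for illustration, not the actual UFO parser:

```python
# Illustrative reader for a matrix-of-columns text format: skip header/
# comment lines, parse each remaining line as a row of floats, and select
# two columns as the x and y values of y = f(x). Not the real UFO code.

def read_xy(path, xcol=0, ycol=1):
    """Return (xs, ys) lists from the chosen columns of a matrix file."""
    xs, ys = [], []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#"):   # assumed header marker
                continue
            row = [float(tok) for tok in line.split()]
            xs.append(row[xcol])
            ys.append(row[ycol])
    return xs, ys
```

The real format's inline directives would additionally let the file itself declare which columns to use and what transformations to apply at read time.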
Cherkaoui, Imad; Sabouni, Radia; Ghali, Iraqi; Kizub, Darya; Billioux, Alexander C; Bennani, Kenza; Bourkadi, Jamal Eddine; Benmamoun, Abderrahmane; Lahlou, Ouafae; Aouad, Rajae El; Dooley, Kelly E
2014-01-01
Public tuberculosis (TB) clinics in urban Morocco. Explore risk factors for TB treatment default and develop a prediction tool. Assess consequences of default, specifically risk for transmission or development of drug resistance. Case-control study comparing patients who defaulted from TB treatment and patients who completed it using quantitative methods and open-ended questions. Results were interpreted in light of health professionals' perspectives from a parallel study. A predictive model and simple tool to identify patients at high risk of default were developed. Sputum from cases with pulmonary TB was collected for smear and drug susceptibility testing. 91 cases and 186 controls enrolled. Independent risk factors for default included current smoking, retreatment, work interference with adherence, daily directly observed therapy, side effects, quick symptom resolution, and not knowing one's treatment duration. Age >50 years, never smoking, and having friends who knew one's diagnosis were protective. A simple scoring tool incorporating these factors was 82.4% sensitive and 87.6% specific for predicting default in this population. Clinicians and patients described additional contributors to default and suggested locally-relevant intervention targets. Among 89 cases with pulmonary TB, 71% had sputum that was smear positive for TB. Drug resistance was rare. The causes of default from TB treatment were explored through synthesis of qualitative and quantitative data from patients and health professionals. A scoring tool with high sensitivity and specificity to predict default was developed. Prospective evaluation of this tool coupled with targeted interventions based on our findings is warranted. Of note, the risk of TB transmission from patients who default treatment to others is likely to be high. The commonly-feared risk of drug resistance, though, may be low; a larger study is required to confirm these findings.
GLOBAL CHANGE RESEARCH NEWS #3: IPCC SPECIAL REPORT ON "LAND USE, LAND USE CHANGE, AND FORESTRY"
ORD is participating in the development of an Intergovernmental Panel on Climate Change (IPCC) Special Report on "Land Use, Land Use Change and Forestry." Preparation of the Special Report was requested by the Conference of the Parties(COP) to the United Nations Framework Conve...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-27
... and heat waves over most land areas will likely increase (IPCC 2007, pp. 13, 53). The IPCC predicts..., future projections for the Southwest include increased temperatures; more intense and longer-lasting heat..., droughty, and deficient in nutrients. Species that occupy such sites have been called ``stress- tolerators...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-25
... on Climate Change (IPCC), Mitigation of Climate Change SUMMARY: The United States Global Change... Panel on Climate Change (IPCC), Mitigation of Climate Change. The United Nations Environment Programme...-economic information for understanding the scientific basis of climate change, potential impacts, and...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-26
... Climate Change (IPCC), Climate Change 2013: The Physical Science Basis Summary: The United States Global... Panel on Climate Change (IPCC) Climate Change 2013: The Physical Science Basis. The United Nations..., and socio-economic information for understanding the scientific basis of climate change, potential...
Temporal Considerations of Carbon Sequestration in LCA
James Salazar; Richard Bergman
2013-01-01
Accounting for carbon sequestration in LCA illustrates the limitations of a single global warming characterization factor. A typical cradle-to-grave LCA models all emissions from end-of-life processes and then characterizes these flows by IPCC GWP (100-yr) factors. A novel method estimates climate change impact by characterizing annual emissions with the IPCC GHG forcing...
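The single-factor characterization step the entry critiques can be sketched as a GWP-weighted sum. The CH4 and N2O factors below follow commonly cited IPCC AR5 100-yr values (without climate-carbon feedbacks) and should be verified against the report before use:

```python
# Sketch of conventional LCA characterization: each GHG emission (kg) is
# multiplied by its 100-yr GWP and summed into kg CO2-eq. Factors are
# commonly cited IPCC AR5 values, included here as assumptions to verify.

GWP100 = {"CO2": 1.0, "CH4": 28.0, "N2O": 265.0}

def co2_equivalent(emissions_kg):
    """Aggregate a {gas: kg} inventory into kg CO2-eq via GWP100 factors."""
    return sum(kg * GWP100[gas] for gas, kg in emissions_kg.items())

total = co2_equivalent({"CO2": 1000.0, "CH4": 10.0, "N2O": 1.0})
```

This collapses all timing information: a kg emitted at year 0 and a kg emitted at year 80 get the same score, which is exactly the limitation the annual-forcing approach above is meant to address.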
Southern United States climate, land use, and forest conditions
David N. Wear; Thomas L. Mote; J. Marshall Shepherd; K. C. Benita; Christopher W. Strother
2014-01-01
The Intergovernmental Panel on Climate Change (IPCC) has concluded, with 90% certainty, that human or "anthropogenic" activities (emissions of greenhouse gases, aerosols and pollution, land-use/land-cover change) have altered global temperature patterns over the past 100-150 years (IPCC 2007a). Such temperature changes have a set of cascading, and sometimes...
Boyd, Ashleigh S.; Wood, Kathryn J.
2010-01-01
Background The fully differentiated progeny of ES cells (ESC) may eventually be used for cell replacement therapy (CRT). However, elements of the innate immune system may contribute to damage or destruction of these tissues when transplanted. Methodology/Principal Findings Herein, we assessed the hitherto ill-defined contribution of the early innate immune response in CRT after transplantation of either ESC derived insulin producing cell clusters (IPCCs) or adult pancreatic islets. Ingress of neutrophil or macrophage cells was noted immediately at the site of IPCC transplantation, but this infiltration was attenuated by day three. Gene profiling identified specific inflammatory cytokines and chemokines that were either absent or sharply reduced by three days after IPCC transplantation. Thus, IPCC transplantation provoked less of an early immune response than pancreatic islet transplantation. Conclusions/Significance Our study offers insights into the characteristics of the immune response of an ESC derived tissue in the incipient stages following transplantation and suggests potential strategies to inhibit cell damage to ensure their long-term perpetuation and functionality in CRT. PMID:20532031
Beaty, Roger E; Christensen, Alexander P; Benedek, Mathias; Silvia, Paul J; Schacter, Daniel L
2017-03-01
Functional neuroimaging research has recently revealed brain network interactions during performance on creative thinking tasks-particularly among regions of the default and executive control networks-but the cognitive mechanisms related to these interactions remain poorly understood. Here we test the hypothesis that the executive control network can interact with the default network to inhibit salient conceptual knowledge (i.e., pre-potent responses) elicited from memory during creative idea production. Participants studied common noun-verb pairs and were given a cued-recall test with corrective feedback to strengthen the paired association in memory. They then completed a verb generation task that presented either a previously studied noun (high-constraint) or an unstudied noun (low-constraint), and were asked to "think creatively" while searching for a novel verb to relate to the presented noun. Latent Semantic Analysis of verbal responses showed decreased semantic distance values in the high-constraint (i.e., interference) condition, which corresponded to increased neural activity within regions of the default (posterior cingulate cortex and bilateral angular gyri), salience (right anterior insula), and executive control (left dorsolateral prefrontal cortex) networks. Independent component analysis of intrinsic functional connectivity networks extended this finding by revealing differential interactions among these large-scale networks across the task conditions. The results suggest that interactions between the default and executive control networks underlie response inhibition during constrained idea production, providing insight into specific neurocognitive mechanisms supporting creative cognition. Copyright © 2017 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Gillen, Andrew
2013-01-01
Student college loan default rates have nearly doubled in recent years. The three-year default rate exceeds 13 percent nationally. Tracking and reporting default rates is a crucial means of monitoring how well higher education dollars are spent. Yet, the way default data is gathered, measured, and reported by the federal government clouds…
Lalor, Maeve K.; Greig, Jane; Allamuratova, Sholpan; Althomsons, Sandy; Tigay, Zinaida; Khaemraev, Atadjan; Braker, Kai; Telnov, Oleksander; du Cros, Philipp
2013-01-01
Background The Médecins Sans Frontières project of Uzbekistan has provided multidrug-resistant tuberculosis treatment in the Karakalpakstan region since 2003. Rates of default from treatment have been high, despite psychosocial support, increasing particularly since programme scale-up in 2007. We aimed to determine factors associated with default in multi- and extensively drug-resistant tuberculosis patients who started treatment between 2003 and 2008 and thus had finished approximately 2 years of treatment by the end of 2010. Methods A retrospective cohort analysis of multi- and extensively drug-resistant tuberculosis patients enrolled in treatment between 2003 and 2008 compared baseline demographic characteristics and possible risk factors for default. Default was defined as missing ≥60 consecutive days of treatment (all drugs). Data were routinely collected during treatment and entered in a database. Potential risk factors for default were assessed in univariate analysis using the chi-square test and in multivariate analysis with logistic regression. Results 20% (142/710) of patients defaulted after a median of 6 months of treatment (IQR 2.6–9.9). Factors associated with default included severity of resistance patterns (pre-extensively drug-resistant/extensively drug-resistant tuberculosis adjusted odds ratio 0.52, 95%CI: 0.31–0.86), previous default (2.38, 1.09–5.24) and age >45 years (1.77, 1.10–2.87). The default rate was 14% (42/294) for patients enrolled 2003–2006 and 24% (100/416) for 2007–2008 enrolments (p = 0.001). Conclusions Default from treatment was high and increased with programme scale-up. It is essential to ensure scale-up of treatment is accompanied by scale-up of staff and patient support. A successful first course of tuberculosis treatment is important; patients who had previously defaulted were at increased risk of default and death. 
The protective effect of severe resistance profiles suggests that understanding disease severity or fear may motivate against default. Targeted health education and support for at-risk patients after 5 months of treatment when many begin to feel better may decrease default. PMID:24223148
Chida, Natasha; Ansari, Zara; Hussain, Hamidah; Jaswal, Maria; Symes, Stephen; Khan, Aamir J; Mohammed, Shama
2015-01-01
Non-adherence to tuberculosis therapy can lead to drug resistance, prolonged infectiousness, and death; therefore, understanding what causes treatment default is important. Pakistan has one of the highest burdens of tuberculosis in the world, yet there have been no qualitative studies in Pakistan that have specifically examined why default occurs. We conducted a mixed methods study at a tuberculosis clinic in Karachi to understand why patients with drug-susceptible tuberculosis default from treatment, and to identify factors associated with default. Patients attending this clinic pick up medications weekly and undergo family-supported directly observed therapy. In-depth interviews were administered to 21 patients who had defaulted. We also compared patients who defaulted with those who were cured, had completed, or had failed treatment in 2013. Qualitative analyses showed the most common reasons for default were the financial burden of treatment, and medication side effects and beliefs. The influence of finances on other causes of default was also prominent, as was concern about the effect of treatment on family members. In quantitative analysis, of 2120 patients, 301 (14.2%) defaulted. Univariate analysis found that male gender (OR: 1.34, 95% CI: 1.04-1.71), being 35-59 years of age (OR: 1.54, 95% CI: 1.14-2.08), or being 60 years of age or older (OR: 1.84, 95% CI: 1.17-2.88) were associated with default. After adjusting for gender, disease site, and patient category, being 35-59 years of age (aOR: 1.49, 95% CI: 1.10-2.03) or 60 years of age or older (aOR: 1.76, 95% CI: 1.12-2.77) were associated with default. In multivariate analysis age was the only variable associated with default. This lack of identifiable risk factors and our qualitative findings imply that default is complex and often due to extrinsic and medication-related factors. 
More tolerable medications, improved side effect management, and innovative cost-reduction measures are needed to reduce default from tuberculosis treatment. PMID:26562787
Toward Ada Verification: A Collection of Relevant Topics
1986-06-01
presumably it is this: if there are no default values, a programming error which results in failure to initialize a variable is more likely to advertise ... disadvantages to using AVID. First, TDL is a more complicated interface than first-order logic (as used in the CSG). Second, AVID is unsupported and
Code of Federal Regulations, 2014 CFR
2014-07-01
... fraction values in the following table for solvent blends for which you do not have test data or... spirits 64742-89-6 0.15 Toluene. 14. Low aromatic white spirit 64742-82-1 0 None. 15. Mineral spirits...
Code of Federal Regulations, 2012 CFR
2012-07-01
... fraction values in the following table for solvent blends for which you do not have test data or... spirits 64742-89-6 0.15 Toluene. 14. Low aromatic white spirit 64742-82-1 0 None. 15. Mineral spirits...
Provides detailed guidance to the user on how to select input parameters for running the Terrestrial Investigation Model (TIM) and recommendations for default values that can be used when no chemical-specific or species-specific information is available.
Default Network Modulation and Large-Scale Network Interactivity in Healthy Young and Old Adults
Schacter, Daniel L.
2012-01-01
We investigated age-related changes in default, attention, and control network activity and their interactions in young and old adults. Brain activity during autobiographical and visuospatial planning was assessed using multivariate analysis and with intrinsic connectivity networks as regions of interest. In both groups, autobiographical planning engaged the default network while visuospatial planning engaged the attention network, consistent with a competition between the domains of internalized and externalized cognition. The control network was engaged for both planning tasks. In young subjects, the control network coupled with the default network during autobiographical planning and with the attention network during visuospatial planning. In old subjects, default-to-control network coupling was observed during both planning tasks, and old adults failed to deactivate the default network during visuospatial planning. This failure is not indicative of default network dysfunction per se, as evidenced by default network engagement during autobiographical planning. Rather, a failure to modulate the default network in old adults is indicative of a lower degree of flexible network interactivity and reduced dynamic range of network modulation to changing task demands. PMID:22128194
Default network connectivity as a vulnerability marker for obsessive compulsive disorder.
Peng, Z W; Xu, T; He, Q H; Shi, C Z; Wei, Z; Miao, G D; Jing, J; Lim, K O; Zuo, X N; Chan, R C K
2014-05-01
Aberrant functional connectivity within the default network is generally assumed to be involved in the pathophysiology of obsessive compulsive disorder (OCD); however, the genetic risk of default network connectivity in OCD remains largely unknown. Here, we systematically investigated default network connectivity in 15 OCD patients, 15 paired unaffected siblings and 28 healthy controls. We sought to examine the profiles of default network connectivity in OCD patients and their siblings, exploring the correlation between abnormal default network connectivity and genetic risk for this population. Compared with healthy controls, OCD patients exhibited reduced strength of default network functional connectivity with the posterior cingulate cortex (PCC), and increased functional connectivity in the right inferior frontal lobe, insula, superior parietal cortex and superior temporal cortex, while their unaffected first-degree siblings only showed reduced local connectivity in the PCC. These findings suggest that the disruptions of default network functional connectivity might be associated with family history of OCD. The decreased default network connectivity in both OCD patients and their unaffected siblings may serve as a potential marker of OCD.
Compound Extremes and Bunched Black (or Grouped Grey) Swans.
NASA Astrophysics Data System (ADS)
Watkins, Nicholas
2013-04-01
Observed "wild" natural fluctuations may differ substantially in their character. Some events may be genuinely unforeseen (and unforeseeable), as with Taleb's "black swans". These may occur singly, or may have their impact further magnified by being ``bunched" in time. Some of the others may, however, be the rare extreme events from a light-tailed underlying distribution. Studying their occurrence may then be tractable with the methods of extreme value theory [e.g. Coles, 2001], suitably adapted to allow correlation if that is observed to be present. Yet others may belong to a third broad class, described in today's presentation [ reviewed in Watkins, GRL Frontiers, 2013, doi: 10.1002/grl.50103]. Such "bursty" time series may show comparatively frequent high amplitude events, and/or long range correlations between successive values. The frequent large values due to the first of these effects, modelled in economics by Mandelbrot in 1963 using heavy- tailed probability distributions, can give rise to an "IPCC type I" burst composed of successive wild events. Conversely, long range dependence, even in a light-tailed Gaussian model like Mandelbrot and van Ness' fractional Brownian motion, can integrate ``mild" events into an extreme "IPCC type III" burst. I will show how a standard statistical time series model, linear fractional stable motion (LFSM), which descends from the two special cases advocated by Mandelbrot, allows these two effects to be varied independently, and will present results from a preliminary study of such bursts in LFSM. The consequences for burst scaling when low frequency effects due to dissipation (FARIMA models), and multiplicative cascades (such as multifractals) are included will also be discussed, and the physical assumptions and constraints associated with making a given choice of model.
Impacts of past and future climate change on wind energy resources in the United States
NASA Astrophysics Data System (ADS)
McCaa, J. R.; Wood, A.; Eichelberger, S.; Westrick, K.
2009-12-01
The links between climate change and trends in wind energy resources have important potential implications for the wind energy industry, and have received significant attention in recent studies. We have conducted two studies that provide insights into the potential for climate change to affect future wind power production. In one experiment, we projected changes in power capacity for a hypothetical wind farm located near Kennewick, Washington, due to greenhouse gas-induced climate change, estimated using a set of regional climate model simulations. Our results show that the annual wind farm power capacity is projected to decrease 1.3% by 2050. In a wider study focusing on wind speed instead of power, we analyzed projected changes in wind speed from 14 different climate simulations that were performed in support of the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC AR4). Our results show that the predicted ensemble mean changes in annual mean wind speeds are expected to be modest. However, seasonal changes and changes predicted by individual models are large enough to affect the profitability of existing and future wind projects. The majority of the model simulations reveal that near-surface wind speed values are expected to shift poleward in response to the IPCC A2 emission scenario, particularly during the winter season. In the United States, most models agree that the mean annual wind speed values will increase in a region extending from the Great Lakes southward across the Midwest and into Texas. Decreased values, though, are predicted across most of the western United States. However, these predicted changes have a strong seasonal dependence, with wind speed increases over most of the United States during the winter and decreases over the northern United States during the summer.
Estimating the Contrail Impact on Climate Using the UK Met Office Model
NASA Astrophysics Data System (ADS)
Rap, A.; Forster, P. M.
2008-12-01
With air travel predicted to increase over the coming century, the emissions associated with air traffic are expected to have a significant warming effect on climate. According to current best estimates, an important contribution comes from contrails. However, as reported by the IPCC fourth assessment report, these current best estimates still have a high uncertainty. The development and validation of contrail parameterizations in global climate models is therefore very important. This study develops a contrail parameterization within the UK Met Office Climate Model. Using this new parameterization, we estimate that for 2002 traffic, the global mean annual contrail coverage is approximately 0.11%, a value which is in good agreement with several other estimates. The corresponding contrail radiative forcing (RF) is calculated to be approximately 4 and 6 mWm-2 in all-sky and clear-sky conditions, respectively. These values lie within the lower end of the RF range reported by the latest IPCC assessment. The relatively high cloud masking effect on contrails observed by our parameterization compared with other studies is investigated, and a possible cause for this difference is suggested. The effect of the diurnal variations of air traffic on both contrail coverage and contrail RF is also investigated. The new parameterization is also employed in thirty-year slab-ocean model runs in order to give one of the first insights into contrail effects on daily temperature range and the climate impact of contrails.
Integrating Representation Learning and Skill Learning in a Human-Like Intelligent Agent
2013-06-21
of 10 full-year controlled studies [Koedinger and MacLaren, 1997]. Nevertheless, the quality of the personalized instructions depends largely on the ... relation among its children. The value of the direction field can be d, h, or v. d is the default value set for grammar rules that have only one child ... nearly comparable performance while significantly reducing the amount of knowledge engineering effort needed. 6.3 Experimental Study on
Developing a java android application of KMV-Merton default rate model
NASA Astrophysics Data System (ADS)
Yusof, Norliza Muhamad; Anuar, Aini Hayati; Isa, Norsyaheeda Natasha; Zulkafli, Sharifah Nursyuhada Syed; Sapini, Muhamad Luqman
2017-11-01
This paper presents a developed Java Android application for the KMV-Merton model in predicting the default rate of a firm. Predicting the default rate is essential in the risk management area, as default risk can be immediately transmitted from one entity to another; this is the reason default risk is known as a global risk. Although there are several efforts, instruments and methods used to manage the risk, they are said to be insufficient. To the best of our knowledge, there has been limited innovation in developing default risk mathematical models into a mobile application. Therefore, through this study, default risk is predicted quantitatively using the KMV-Merton model. The KMV-Merton model has been integrated in the form of a Java program using the Android Studio software. The developed Java Android application is tested by predicting the levels of default risk of three different rated companies. It is found that the levels of default risk are equivalent to the ratings of the respective companies. This shows that the default rate predicted by the KMV-Merton model using the developed Java Android application can be a significant tool in the risk management field. The developed Java Android application grants users an alternative to predict the level of default risk with fewer procedures.
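The abstract does not give the implementation, but the core of the Merton leg of the KMV-Merton model is a closed-form distance to default. A minimal sketch follows (in Python rather than the paper's Java); the full KMV procedure also iterates to back out the unobservable asset value and volatility from equity data, which is omitted here, and the firm numbers are hypothetical.

```python
from math import erf, log, sqrt

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def distance_to_default(V, D, mu, sigma, T=1.0):
    """Merton distance to default.
    V: asset value, D: default point (face value of debt),
    mu: expected asset return, sigma: asset volatility, T: horizon in years."""
    return (log(V / D) + (mu - 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))

def default_probability(V, D, mu, sigma, T=1.0):
    # Probability that assets end below the default point at the horizon
    return norm_cdf(-distance_to_default(V, D, mu, sigma, T))

# Hypothetical firm: assets 120, debt 100, 7% drift, 25% asset volatility
dd = distance_to_default(120.0, 100.0, 0.07, 0.25)
pd = default_probability(120.0, 100.0, 0.07, 0.25)
```

A higher-rated firm would show a larger distance to default and a smaller default probability, which is how predicted risk levels can be lined up against agency ratings as the abstract describes.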
Chan, C M H; Wan Ahmad, W A; Md Yusof, M; Ho, G F; Krupat, E
2015-11-01
Defaulting is an important issue across all medical specialties, but much more so in cancer as delayed or incomplete treatment has been shown to result in worse clinical outcomes such as treatment resistance, disease progression as well as lower survival. Our objective was to identify psychosocial variables and characteristics associated with default among cancer patients. A total of 467 consecutive adult cancer patients attending the oncology clinic at a single academic medical centre completed the Hospital Anxiety and Depression Scale and reported their preference for psychological support at baseline, 4-6 weeks and 12-18 months follow-up. Default was defined as refusal, delay or discontinuation of treatment or visit, despite the ability to do so. A total of 159 of 467 (34.0%) cancer patients were defaulters. Of these 159 defaulters, 89 (56.0%) desired psychological support, compared to only 13 (4.2%) of 308 non-defaulters. Using a logistic regression, patients who were defaulters had 52 times higher odds (P = 0.001; 95% confidence interval 20.61-134.47) of desiring psychological support than non-defaulters after adjusting for covariates. These findings suggest that defaulters should be offered psychological support which may increase cancer treatment acceptance rates and improve survival. © 2015 John Wiley & Sons Ltd.
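The reported odds ratio of 52 is covariate-adjusted from a logistic model, but a crude version can be reproduced from the counts in the abstract (89 of 159 defaulters vs. 13 of 308 non-defaulters desiring support). A sketch of the unadjusted 2x2 calculation with a Wald confidence interval:

```python
from math import exp, log, sqrt

def odds_ratio_with_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio for a 2x2 table with a Wald 95% CI.
    a: defaulters desiring support, b: defaulters not desiring support,
    c: non-defaulters desiring support, d: non-defaulters not desiring support."""
    or_ = (a * d) / (b * c)
    se_log = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of the log odds ratio
    lo = exp(log(or_) - z * se_log)
    hi = exp(log(or_) + z * se_log)
    return or_, lo, hi

# Counts from the abstract: 89/159 defaulters, 13/308 non-defaulters desired support
or_, lo, hi = odds_ratio_with_ci(a=89, b=70, c=13, d=295)
```

The crude odds ratio comes out near 29; the adjusted figure of 52 reflects the covariates included in the logistic model.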
Harvested wood products : basis for future methodological development
Kenneth E. Skog
2003-01-01
The IPCC Guidelines (IPCC 1997) provide an outline of how harvested wood could be treated in national greenhouse gas (GHG) inventories. This section shows the relation of that outline to the approaches and estimation methods to be presented in this Appendix. Wood and paper products are referred to as harvested wood products (HWP). It does not include carbon in...
IPCC reasons for concern regarding climate change risks
NASA Astrophysics Data System (ADS)
O'Neill, Brian C.; Oppenheimer, Michael; Warren, Rachel; Hallegatte, Stephane; Kopp, Robert E.; Pörtner, Hans O.; Scholes, Robert; Birkmann, Joern; Foden, Wendy; Licker, Rachel; Mach, Katharine J.; Marbaix, Phillippe; Mastrandrea, Michael D.; Price, Jeff; Takahashi, Kiyoshi; van Ypersele, Jean-Pascal; Yohe, Gary
2017-01-01
The reasons for concern framework communicates scientific understanding about risks in relation to varying levels of climate change. The framework, now a cornerstone of the IPCC assessments, aggregates global risks into five categories as a function of global mean temperature change. We review the framework's conceptual basis and the risk judgments made in the most recent IPCC report, confirming those judgments in most cases in the light of more recent literature and identifying their limitations. We point to extensions of the framework that offer complementary climate change metrics to global mean temperature change and better account for possible changes in social and ecological system vulnerability. Further research should systematically evaluate risks under alternative scenarios of future climatic and societal conditions.
Default patterns of patients attending clinics for sexually transmitted diseases.
Mahony, J D; Bevan, J; Wall, B
1978-01-01
The influence of gender, propaganda, and treatment methods was studied in relation to default behaviour of patients with sexually transmitted diseases. The overall default rate of men and women was similar, but a larger proportion of men defaulted after the initial visit, while the biggest fall-out in women was after the second attendance at the clinic. The institution of a propaganda campaign was followed by a reduction in defaulting. The statistical significance of this is open to question, however; moreover, the observed improvement in default rate was not maintained once the propaganda had been relaxed. Men treated for non-gonococcal urethritis by a regimen which included one injection a week for three weeks showed a highly significantly lower default rate compared with those who received tablets alone. PMID:580413
Predictors of Default from Treatment for Tuberculosis: a Single Center Case–Control Study in Korea
2016-01-01
Default from tuberculosis (TB) treatment could exacerbate the disease and result in the emergence of drug resistance. This study identified the risk factors for default from TB treatment in Korea. This single-center case–control study analyzed 46 default cases and 100 controls. Default was defined as interrupting treatment for 2 or more consecutive months. The reasons for default were mainly incorrect perception or information about TB (41.3%) and experience of adverse events due to TB drugs (41.3%). In univariate analysis, low income (< 2,000 US dollars/month, 88.1% vs. 68.4%, P = 0.015), absence of TB stigma (4.3% vs. 61.3%, P < 0.001), treatment by a non-pulmonologist (74.1% vs. 25.9%, P < 0.001), history of previous treatment (37.0% vs. 19.0%, P = 0.019), former defaulter (15.2% vs. 2.0%, P = 0.005), and combined extrapulmonary TB (54.3% vs. 34.0%, P = 0.020) were significant risk factors for default. In multivariate analysis, the absence of TB stigma (adjusted odd ratio [aOR]: 46.299, 95% confidence interval [CI]: 8.078–265.365, P < 0.001), treatment by a non-pulmonologist (aOR: 14.567, 95% CI: 3.260–65.089, P < 0.001), former defaulters (aOR: 33.226, 95% CI: 2.658–415.309, P = 0.007), and low income (aOR: 5.246, 95% CI: 1.249–22.029, P = 0.024) were independent predictors of default from TB treatment. In conclusion, patients with absence of disease stigma, treated by a non-pulmonologist, who were former defaulters, and with low income should be carefully monitored during TB treatment in Korea to avoid treatment default. PMID:26839480
Predictors of Default from Treatment for Tuberculosis: a Single Center Case-Control Study in Korea.
Park, Cheol-Kyu; Shin, Hong-Joon; Kim, Yu-Il; Lim, Sung-Chul; Yoon, Jeong-Sun; Kim, Young-Su; Kim, Jung-Chul; Kwon, Yong-Soo
2016-02-01
Default from tuberculosis (TB) treatment could exacerbate the disease and result in the emergence of drug resistance. This study identified the risk factors for default from TB treatment in Korea. This single-center case-control study analyzed 46 default cases and 100 controls. Default was defined as interrupting treatment for 2 or more consecutive months. The reasons for default were mainly incorrect perception or information about TB (41.3%) and experience of adverse events due to TB drugs (41.3%). In univariate analysis, low income (< 2,000 US dollars/month, 88.1% vs. 68.4%, P = 0.015), absence of TB stigma (4.3% vs. 61.3%, P < 0.001), treatment by a non-pulmonologist (74.1% vs. 25.9%, P < 0.001), history of previous treatment (37.0% vs. 19.0%, P = 0.019), former defaulter (15.2% vs. 2.0%, P = 0.005), and combined extrapulmonary TB (54.3% vs. 34.0%, P = 0.020) were significant risk factors for default. In multivariate analysis, the absence of TB stigma (adjusted odd ratio [aOR]: 46.299, 95% confidence interval [CI]: 8.078-265.365, P < 0.001), treatment by a non-pulmonologist (aOR: 14.567, 95% CI: 3.260-65.089, P < 0.001), former defaulters (aOR: 33.226, 95% CI: 2.658-415.309, P = 0.007), and low income (aOR: 5.246, 95% CI: 1.249-22.029, P = 0.024) were independent predictors of default from TB treatment. In conclusion, patients with absence of disease stigma, treated by a non-pulmonologist, who were former defaulters, and with low income should be carefully monitored during TB treatment in Korea to avoid treatment default.
Kodama, Hitoshi; Miyata, Yoshimasa; Kuwajima, Mami; Izuchi, Ryoichi; Kobayashi, Ayumi; Gyoja, Fuki; Onuma, Takeshi A; Kumano, Gaku; Nishida, Hiroki
2016-08-01
During embryonic induction, the responding cells invoke an induced developmental program, whereas in the absence of an inducing signal, they assume a default uninduced cell fate. Suppression of the default fate during the inductive event is crucial for choice of the binary cell fate. In contrast to the mechanisms that promote an induced cell fate, those that suppress the default fate have been overlooked. Upon induction, intracellular signal transduction results in activation of genes encoding key transcription factors for induced tissue differentiation. It remains unclear whether an induced key transcription factor has dual functions involving suppression of the default fates and promotion of the induced fate, or whether suppression of the default fate is independently regulated by other factors that are also downstream of the signaling cascade. We show that during ascidian embryonic induction, default fates were suppressed by multifold redundant mechanisms. The key transcription factor, Twist-related.a, which is required for mesenchyme differentiation, and another independent transcription factor, Lhx3, which is dispensable for mesenchyme differentiation, sequentially and redundantly suppress the default muscle fate in induced mesenchyme cells. Similarly in notochord induction, Brachyury, which is required for notochord differentiation, and other factors, Lhx3 and Mnx, are likely to suppress the default nerve cord fate redundantly. Lhx3 commonly suppresses the default fates in two kinds of induction. Mis-activation of the autonomously executed default program in induced cells is detrimental to choice of the binary cell fate. Multifold redundant mechanisms would be required for suppression of the default fate to be secure. Copyright © 2016 Elsevier Inc. All rights reserved.
True status of smear-positive pulmonary tuberculosis defaulters in Malawi.
Kruyt, M. L.; Kruyt, N. D.; Boeree, M. J.; Harries, A. D.; Salaniponi, F. M.; van Noord, P. A.
1999-01-01
The article reports the results of a study to determine the true outcome of 8 months of treatment received by smear-positive pulmonary tuberculosis (PTB) patients who had been registered as defaulters in the Queen Elizabeth Central Hospital (QECH) and Mlambe Mission Hospital (MMH), Blantyre, Malawi. The treatment outcomes were documented from the tuberculosis registers of all patients registered between 1 October 1994 and 30 September 1995. The true treatment outcome for patients who had been registered as defaulters was determined by making personal inquiries at the treatment units and the residences of patients or relatives and, in a few cases, by writing to the appropriate postal address. Interviews were carried out with patients who had defaulted and were still alive and with matched, fully compliant PTB patients who had successfully completed the treatment to determine the factors associated with defaulter status. Of the 1099 patients, 126 (11.5%) had been registered as defaulters, and the true treatment outcome was determined for 101 (80%) of the latter; only 22 were true defaulters, 31 had completed the treatment, 31 had died during the treatment period, and 17 had left the area. A total of 8 of the 22 true defaulters were still alive and were compared with the compliant patients. Two significant characteristics were associated with the defaulters: they were unmarried, and they did not know the correct duration of antituberculosis treatment. Many of the smear-positive tuberculosis patients who had been registered as defaulters in the Blantyre district were found to have different treatment outcomes, without defaulting. The quality of reporting in the health facilities must therefore be improved in order to exclude individuals who are not true defaulters. PMID:10361755
Ghali, Iraqi; Kizub, Darya; Billioux, Alexander C.; Bennani, Kenza; Bourkadi, Jamal Eddine; Benmamoun, Abderrahmane; Lahlou, Ouafae; Aouad, Rajae El; Dooley, Kelly E.
2014-01-01
Setting Public tuberculosis (TB) clinics in urban Morocco. Objective Explore risk factors for TB treatment default and develop a prediction tool. Assess consequences of default, specifically the risk of transmission or development of drug resistance. Design Case-control study comparing patients who defaulted from TB treatment and patients who completed it, using quantitative methods and open-ended questions. Results were interpreted in light of health professionals' perspectives from a parallel study. A predictive model and a simple tool to identify patients at high risk of default were developed. Sputum from cases with pulmonary TB was collected for smear and drug susceptibility testing. Results 91 cases and 186 controls were enrolled. Independent risk factors for default included current smoking, retreatment, work interference with adherence, daily directly observed therapy, side effects, quick symptom resolution, and not knowing one's treatment duration. Age >50 years, never smoking, and having friends who knew one's diagnosis were protective. A simple scoring tool incorporating these factors was 82.4% sensitive and 87.6% specific for predicting default in this population. Clinicians and patients described additional contributors to default and suggested locally relevant intervention targets. Among 89 cases with pulmonary TB, 71% had sputum that was smear positive for TB. Drug resistance was rare. Conclusion The causes of default from TB treatment were explored through synthesis of qualitative and quantitative data from patients and health professionals. A scoring tool with high sensitivity and specificity to predict default was developed. Prospective evaluation of this tool, coupled with targeted interventions based on our findings, is warranted. Of note, the risk of TB transmission from patients who default from treatment is likely to be high. The commonly feared risk of drug resistance, though, may be low; a larger study is required to confirm these findings.
PMID:24699682
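The additive scoring tool described above can be illustrated with a minimal sketch. The factor names follow the risk and protective factors listed in the abstract, but the point weights and the cut-off below are hypothetical, chosen only for illustration; the published tool's actual items, weights, and threshold are not reproduced here.

```python
# Hypothetical additive risk score for TB treatment default.
# All weights and the threshold are invented for illustration.

RISK_FACTORS = {           # factors reported to increase default risk
    "current_smoker": 1,
    "retreatment": 1,
    "work_interferes": 1,
    "daily_dot": 1,
    "side_effects": 1,
    "quick_symptom_resolution": 1,
    "unknown_treatment_duration": 1,
}
PROTECTIVE_FACTORS = {     # factors reported to be protective
    "age_over_50": 1,
    "never_smoker": 1,
    "friends_know_diagnosis": 1,
}

def default_risk_score(patient: dict) -> int:
    """Sum risk points and subtract protective points for one patient."""
    score = sum(p for f, p in RISK_FACTORS.items() if patient.get(f))
    score -= sum(p for f, p in PROTECTIVE_FACTORS.items() if patient.get(f))
    return score

def high_risk(patient: dict, threshold: int = 3) -> bool:
    """Flag a patient at or above a (hypothetical) cut-off for follow-up."""
    return default_risk_score(patient) >= threshold
```

A clinic could apply `high_risk` at treatment initiation to target adherence support; in practice the weights and cut-off would be fitted to local data, as the study did.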
Projections of wind-waves in South China Sea for the 21st century
NASA Astrophysics Data System (ADS)
Mohammed, Aboobacker; Dykyi, Pavlo; Zheleznyak, Mark; Tkalich, Pavel
2013-04-01
IPCC-coordinated work has been completed within the Fourth Assessment Report (AR4) to project climate and ocean variables for the 21st century using coupled atmosphere-ocean General Circulation Models (GCMs). GCMs do not resolve wind-waves because of their coarse grid resolution; therefore, dynamical downscaling of wind-waves to the regional scale is advisable using well-established models such as Wave Watch III (WWIII) and SWAN. The rectilinear-coordinate WWIII model is adapted for the far field comprising the parts of the Pacific and Indian Oceans centered at the South China Sea and Sunda Shelf (90 °E-130 °E, 10 °S-26.83 °N) with a resolution of 10' (about 18 km). The near-field unstructured-mesh SWAN model covers the Sunda Shelf, is centered on the Singapore Strait, and reads lateral boundary values from the WWIII model. The unstructured grid has the coarsest resolution in the South China Sea (6 to 10 km), medium resolution in the Malacca Strait (1 to 2 km), and the finest resolution in the Singapore Strait (400 m) and along the Singapore coastline (up to 100 m). Following IPCC methodology, the model chain is validated climatologically for the past period 1961-1990 against Voluntary Observing Ship (VOS) data; additionally, the models are validated using recent high-resolution satellite data. The calibrated model chain is used to project waves into the 21st century using WRF-downscaled wind speed output of a CCSM GCM run for the A1FI climate change scenario. To comply with IPCC methodology, the entire modeling period is split into three 30-year periods for which statistical parameters are computed individually. Time series of significant wave height at key points near Singapore and on ship sea routes in the SCS are statistically analysed to obtain probability distribution functions (PDFs) of extreme values. Climatological maps of mean and maximum significant wave height (SWH) values, and of mean wave period, are built for the Singapore region for each 30-year period.
Linear trends of mean SWH values for the northeast (NE) and southwest (SW) monsoons have been derived. The maximum predicted 100-year return period (YRP) SWH values are obtained for the first 30-year period (2011-2040). In the deep eastern part of the Singapore region, 100 YRP SWH values are 2.4-2.8 m, whereas those in the shallow nearshore areas are 1.7-2.3 m. On the ship routes over the Sunda Shelf the 100 YRP SWHs are 1.1-3.2 m, and those on the SCS routes are 3.6-10.4 m. The largest changes of future relative to hindcast SWH occur in the first 30-year period, where extreme 100 YRP SWH grows by 36%-120% at points near Singapore and by 39%-108% along ship sea routes.
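Extreme-value statistics of the kind computed here (PDFs of extremes and N-year return levels of SWH) can be sketched with a method-of-moments Gumbel fit to annual maxima. This is an illustrative example, not the study's actual analysis chain, and the data used below are synthetic.

```python
import numpy as np

def gumbel_fit(annual_maxima):
    """Method-of-moments Gumbel fit to a series of annual maximum SWH (m)."""
    x = np.asarray(annual_maxima, dtype=float)
    scale = x.std(ddof=1) * np.sqrt(6.0) / np.pi   # from Gumbel variance
    loc = x.mean() - 0.5772156649 * scale          # Euler-Mascheroni constant
    return loc, scale

def return_level(loc, scale, T):
    """SWH exceeded on average once every T years under the fitted Gumbel."""
    return loc - scale * np.log(-np.log(1.0 - 1.0 / T))
```

For a 30-year period one would fit the 30 annual maxima at each key point and evaluate `return_level(loc, scale, 100)` to get the 100 YRP SWH; longer return periods always give higher levels for a fixed fit.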
48 CFR 609.405-70 - Termination action decision.
Code of Federal Regulations, 2010 CFR
2010-10-01
...) For overseas posts, A/OPE. (b) Termination for default. Termination for default under a contract's default clause is appropriate when the circumstances giving rise to the debarment or suspension also constitute a default in the contractor's performance of that contract. Debarment or suspension of the...
Determinants of default from pulmonary tuberculosis treatment in Kuwait.
Zhang, Qing; Gaafer, Mohamed; El Bayoumy, Ibrahim
2014-01-01
To determine the prevalence and risk factors of default from pulmonary tuberculosis treatment in Kuwait. Retrospective study. We studied all patients who were registered for pulmonary tuberculosis treatment between January 1, 2010, and December 31, 2012, and admitted into TB wards in El Rashid Center or treated in the outpatient clinic in the TB Control Unit. There were 110 (11.5%) patients who defaulted from treatment. Fifty-six percent of those who defaulted did so in the first 2 months of treatment, and 86.4% of them were still bacteriologically positive at the time of default. Key risk factors associated with noncompliance were male sex, low educational level, non-Kuwaiti nationality, a history of default, and a history of concomitant diabetes mellitus, liver disease, or lung cancer. Multiple drug resistance was also associated with default from treatment. Default from treatment may be partially responsible for the persistently high rates of tuberculosis in Kuwait. Health professionals and policy makers should ensure that all barriers to treatment are removed and that incentives are used to encourage treatment compliance.
SIM_ADJUST -- A computer code that adjusts simulated equivalents for observations or predictions
Poeter, Eileen P.; Hill, Mary C.
2008-01-01
This report documents the SIM_ADJUST computer code. SIM_ADJUST surmounts an obstacle that is sometimes encountered when using universal model analysis computer codes such as UCODE_2005 (Poeter and others, 2005), PEST (Doherty, 2004), and OSTRICH (Matott, 2005; Fredrick and others, 2007). These codes often read simulated equivalents from a list in a file produced by a process model, such as MODFLOW, that represents a system of interest. At times, values needed by the universal code are missing or assigned default values because the process model could not produce a useful solution. SIM_ADJUST can be used to (1) read a file that lists expected observation or prediction names and possible alternatives for the simulated values; (2) read a file produced by a process model that contains space- or tab-delimited columns, including a column of simulated values and a column of related observation or prediction names; (3) identify observations or predictions that have been omitted or assigned a default value by the process model; and (4) produce an adjusted file that contains a column of simulated values and a column of associated observation or prediction names. The user may provide alternatives that are constant values or that are alternative simulated values. The user may also provide a sequence of alternatives. For example, the heads from a series of cells may be specified to ensure that a meaningful value is available to compare with an observation located in a cell that may become dry. SIM_ADJUST is constructed using modules from the JUPITER API and is intended for use on any computer operating system. SIM_ADJUST consists of algorithms programmed in Fortran90, which efficiently perform the numerical calculations.
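The substitution logic of steps (1)-(4) above can be sketched in a few lines. This is an illustrative Python reimplementation under assumed conventions (the sentinel value and the data layout are invented), not the Fortran90 code the report documents.

```python
# Minimal sketch of SIM_ADJUST-style fallback substitution.
MODEL_DEFAULT = -999.0  # assumed sentinel written by the process model

def adjust(expected, simulated, alternatives):
    """
    expected:     observation/prediction names the universal code needs
    simulated:    dict name -> simulated value from the process model output
    alternatives: dict name -> sequence of fallbacks; each fallback is either
                  a constant value or the name of another simulated value
    Returns dict name -> usable simulated value.
    """
    adjusted = {}
    for name in expected:
        value = simulated.get(name, MODEL_DEFAULT)
        for alt in alternatives.get(name, []):
            if value != MODEL_DEFAULT:
                break  # a usable value has been found
            # an alternative may itself be a simulated value (e.g. the head
            # in a neighbouring cell) or a constant
            value = simulated.get(alt, MODEL_DEFAULT) if isinstance(alt, str) else alt
        adjusted[name] = value
    return adjusted
```

The dry-cell example from the report maps onto this directly: a head observation whose cell went dry falls back through neighbouring-cell heads before resorting to a constant.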
NASA Astrophysics Data System (ADS)
Norton, P. A., II
2015-12-01
The U. S. Geological Survey is developing a National Hydrologic Model (NHM) to support consistent hydrologic modeling across the conterminous United States (CONUS). The Precipitation-Runoff Modeling System (PRMS) simulates daily hydrologic and energy processes in watersheds and is used for the NHM application. For PRMS, each watershed is divided into hydrologic response units (HRUs); by default, each HRU is assumed to have a uniform hydrologic response. The Geospatial Fabric (GF) is a database containing initial parameter values for input to PRMS and was created for the NHM. The parameter values in the GF were derived from datasets that characterize the physical features of the entire CONUS. The NHM application is composed of more than 100,000 HRUs from the GF. Selected parameter values are commonly adjusted by basin in PRMS using an automated calibration process based on calibration targets, such as streamflow. Providing each HRU with distinct values that capture variability within the CONUS may improve simulation performance of the NHM. During calibration of the NHM by HRU, selected parameter values are adjusted for PRMS based on calibration targets, such as streamflow, snow water equivalent (SWE), and actual evapotranspiration (AET). Simulated SWE, AET, and runoff were compared to value ranges derived from multiple sources (e.g. the Snow Data Assimilation System, the Moderate Resolution Imaging Spectroradiometer (i.e. MODIS) Global Evapotranspiration Project, the Simplified Surface Energy Balance model, and the Monthly Water Balance Model). This provides each HRU with a distinct set of parameter values that captures the variability within the CONUS, leading to improved model performance. We present simulation results from the NHM after preliminary calibration, including the results of basin-level calibration for the NHM using: 1) default initial GF parameter values, and 2) parameter values calibrated by HRU.
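As a toy illustration of the target-range calibration idea described above, the sketch below bisects a single hypothetical parameter until a stand-in "model" produces SWE inside a target range. The parameter, the stand-in model, and all numbers are invented; PRMS and the NHM calibration machinery are far more elaborate.

```python
# Toy per-HRU calibration against a target range (all names hypothetical).

def simulate_swe(melt_factor):
    """Stand-in for a model run: SWE decreases as the melt factor grows."""
    return 500.0 / (1.0 + melt_factor)

def calibrate(target_lo, target_hi, lo=0.1, hi=10.0, iters=60):
    """Bisect one parameter until simulated SWE enters the target range."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        swe = simulate_swe(mid)
        if target_lo <= swe <= target_hi:
            return mid
        if swe > target_hi:      # too much snow -> raise the melt factor
            lo = mid
        else:                    # too little snow -> lower the melt factor
            hi = mid
    return 0.5 * (lo + hi)
```

In the NHM workflow this step would run per HRU and per target (SWE, AET, runoff), with many interacting parameters adjusted jointly rather than one by bisection.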
Code of Federal Regulations, 2010 CFR
2010-04-01
... 19 Customs Duties 3 2010-04-01 2010-04-01 false Default. 210.16 Section 210.16 Customs Duties UNITED STATES INTERNATIONAL TRADE COMMISSION INVESTIGATIONS OF UNFAIR PRACTICES IN IMPORT TRADE ADJUDICATION AND ENFORCEMENT Motions § 210.16 Default. (a) Definition of default. (1) A party shall be found in...
Code of Federal Regulations, 2011 CFR
2011-04-01
... 19 Customs Duties 3 2011-04-01 2011-04-01 false Default. 210.16 Section 210.16 Customs Duties UNITED STATES INTERNATIONAL TRADE COMMISSION INVESTIGATIONS OF UNFAIR PRACTICES IN IMPORT TRADE ADJUDICATION AND ENFORCEMENT Motions § 210.16 Default. (a) Definition of default. (1) A party shall be found in...
Code of Federal Regulations, 2013 CFR
2013-04-01
... 19 Customs Duties 3 2013-04-01 2013-04-01 false Default. 210.16 Section 210.16 Customs Duties UNITED STATES INTERNATIONAL TRADE COMMISSION INVESTIGATIONS OF UNFAIR PRACTICES IN IMPORT TRADE ADJUDICATION AND ENFORCEMENT Motions § 210.16 Default. (a) Definition of default. (1) A party shall be found in...
Code of Federal Regulations, 2014 CFR
2014-04-01
... 19 Customs Duties 3 2014-04-01 2014-04-01 false Default. 210.16 Section 210.16 Customs Duties UNITED STATES INTERNATIONAL TRADE COMMISSION INVESTIGATIONS OF UNFAIR PRACTICES IN IMPORT TRADE ADJUDICATION AND ENFORCEMENT Motions § 210.16 Default. (a) Definition of default. (1) A party shall be found in...
7 CFR 1980.470 - Defaults by borrower.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 14 2010-01-01 2009-01-01 true Defaults by borrower. 1980.470 Section 1980.470...) PROGRAM REGULATIONS (CONTINUED) GENERAL Business and Industrial Loan Program § 1980.470 Defaults by... property management. A. In case of any monetary or significant non-monetary default under the loan...
The informatics capability maturity of integrated primary care centres in Australia.
Liaw, Siaw-Teng; Kearns, Rachael; Taggart, Jane; Frank, Oliver; Lane, Riki; Tam, Michael; Dennis, Sarah; Walker, Christine; Russell, Grant; Harris, Mark
2017-09-01
Integrated primary care requires systems and service integration along with financial incentives to promote downward substitution to a single entry point to care. Integrated Primary Care Centres (IPCCs) aim to improve integration by co-location of health services. The Informatics Capability Maturity (ICM) describes how well health organisations collect, manage and share information; manage eHealth technology, implementation, change, data quality and governance; and use "intelligence" to improve care. Describe associations of ICM with systems and service integration in IPCCs. Mixed methods evaluation of IPCCs in metropolitan and rural Australia: an enhanced general practice, four GP Super Clinics, a "HealthOne" (private-public partnership) and a Community Health Centre. Data collection methods included self-assessed ICM, document review, interviews, observations in practice and assessment of electronic health record data. Data were analysed and compared across IPCCs. The IPCCs demonstrated a range of funding models, ownership, leadership, organisation and ICM. Digital tools were used with varying effectiveness to collect, use and share data. Connectivity was problematic, requiring "work-arounds" to communicate and share information. The lack of technical, data and software interoperability standards, clinical coding and secure messaging were barriers to data collection, integration and sharing. Strong leadership and governance were important for successful implementation of robust and secure eHealth systems. Patient engagement with eHealth tools was suboptimal. ICM is positively associated with integration of data, systems and care. Improved ICM requires a health workforce with eHealth competencies; technical, semantic and software standards; adequate privacy and security; and good governance and leadership. Copyright © 2017 Elsevier B.V. All rights reserved.
Gay-Antaki, Miriam; Liverman, Diana
2018-02-27
The Intergovernmental Panel on Climate Change (IPCC) is an authoritative and influential source of reports on climate change. The lead authors of IPCC reports include scientists from around the world, but questions have been raised about the dominance of specific disciplines in the report and the disproportionate number of scholars from the Global North. In this paper, we analyze the as-yet-unexamined issue of gender and IPCC authorship, looking at changes in gender balance over time and analyzing women's views about their experience and barriers to full participation, not only as women but also at the intersection of nationality, race, command of English, and discipline. Over time, we show that the proportion of female IPCC authors has seen a modest increase from less than 5% in 1990 to more than 20% in the most recent assessment reports. Based on responses from over 100 women IPCC authors, we find that many women report a positive experience in the way in which they are treated and in their ability to influence the report, although others report that some women were poorly represented and heard. We suggest that an intersectional lens is important: not all women experience the same obstacles: they face multiple and diverse barriers associated with social identifiers such as race, nationality, command of English, and disciplinary affiliation. The scientific community benefits from including all scientists, including women and those from the Global South. This paper documents barriers to participation and identifies opportunities to diversify climate science. Copyright © 2018 the Author(s). Published by PNAS.
Use of cellular phone contacts to increase return rates for immunization services in Kenya
Mokaya, Evans; Mugoya, Isaac; Raburu, Jane; Shimp, Lora
2017-01-01
Introduction In Kenya, failure to complete immunization schedules by children who previously accessed immunization services is an obstacle to ensuring that children are fully immunized. Home visit approaches used to track defaulting children have not been successful in reducing the drop-out rate. Methods This study tested the use of phone contacts as an approach for tracking immunization defaulters in twelve purposively selected facilities in three districts of western Kenya. For nine months, children accessing immunization services in the facilities were tracked and caregivers were asked their reasons for defaulting. Results In all of the facilities, caregiver phone ownership was above 80%. In 11 of the 12 facilities, defaulter rates between pentavalent1 and pentavalent3 vaccination doses fell significantly, to within the acceptable level of <10%. Caregivers provided reliable contact information and health workers positively perceived phone-based defaulter communications. Tracking a defaulter required on average 2 minutes by voice and Ksh 6 ($0.07). Competing tasks, and concerns about vaccinating sick children and about side-effects, were the most frequently cited reasons for caregivers defaulting. Notably, a significant number of children categorised as defaulters had been vaccinated in a different facility (and were therefore "false defaulters"). Conclusion Use of phone contacts for follow-up is a feasible and cost-effective method for tracking defaulters. This approach should complement traditional home visits, especially for caregivers without phones. Given communication-related reasons for defaulting, it is important that immunization programs scale up community education activities. A system for health facilities to share details of defaulting children should be established to reduce "false defaulters". PMID:29138660
Choosers, Obstructed Choosers, and Nonchoosers: A Framework for Defaulting in Schooling Choices
ERIC Educational Resources Information Center
Delale-O'Connor, Lori
2018-01-01
Background/Context: Prior research overlooks the importance of drawing distinctions within the category of defaulters or "nonchoosers" in schooling choices. Defaulters are both a theoretically and empirically interesting population, and understanding the processes by which families come to or are assigned the default school offers…
7 CFR 3575.75 - Defaults by borrower.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 15 2010-01-01 2010-01-01 false Defaults by borrower. 3575.75 Section 3575.75... AGRICULTURE GENERAL Community Programs Guaranteed Loans § 3575.75 Defaults by borrower. (a) Lender... default. The lender will continue to keep the Agency informed on a bimonthly basis until such time as the...
42 CFR 1001.1501 - Default of health education loan or scholarship obligations.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 42 Public Health 5 2011-10-01 2011-10-01 false Default of health education loan or scholarship... Permissive Exclusions § 1001.1501 Default of health education loan or scholarship obligations. (a... individual that the Public Health Service (PHS) determines is in default on repayments of scholarship...
42 CFR 1001.1501 - Default of health education loan or scholarship obligations.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 42 Public Health 5 2013-10-01 2013-10-01 false Default of health education loan or scholarship... Permissive Exclusions § 1001.1501 Default of health education loan or scholarship obligations. (a... individual that the Public Health Service (PHS) determines is in default on repayments of scholarship...
42 CFR 1001.1501 - Default of health education loan or scholarship obligations.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 42 Public Health 5 2010-10-01 2010-10-01 false Default of health education loan or scholarship... Permissive Exclusions § 1001.1501 Default of health education loan or scholarship obligations. (a... individual that the Public Health Service (PHS) determines is in default on repayments of scholarship...
42 CFR 1001.1501 - Default of health education loan or scholarship obligations.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 42 Public Health 5 2014-10-01 2014-10-01 false Default of health education loan or scholarship... Permissive Exclusions § 1001.1501 Default of health education loan or scholarship obligations. (a... individual that the Public Health Service (PHS) determines is in default on repayments of scholarship...
42 CFR 1001.1501 - Default of health education loan or scholarship obligations.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 42 Public Health 5 2012-10-01 2012-10-01 false Default of health education loan or scholarship... Permissive Exclusions § 1001.1501 Default of health education loan or scholarship obligations. (a... individual that the Public Health Service (PHS) determines is in default on repayments of scholarship...
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 2 2010-01-01 2010-01-01 false Default. 110.110 Section 110.110 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) EXPORT AND IMPORT OF NUCLEAR EQUIPMENT AND MATERIAL Hearings § 110.110 Default. When a participant fails to act within a specified time, the presiding officer may consider him in default, issue an...
Code of Federal Regulations, 2011 CFR
2011-04-01
...) General. The respondent may be found in default, upon motion, for failure to file a timely response to the Government's complaint. The motion shall include a copy of the complaint and a proposed default order, and... motion. (b) Default order. The ALJ shall issue a decision on the motion within 15 days after the...
ERIC Educational Resources Information Center
Department of Education, Washington, DC. Default Management Div.
This guide is designed to assist schools with their Federal Family Education Loan Program (FFEL) and the William D. Ford Federal Direct Loan (Direct Loan) Program cohort default rate. The guide is a reference tool in understanding cohort default rates and processes. This guide incorporates two former guides, the "Draft Cohort Default Rate…
24 CFR 907.3 - Bases for substantial default.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 24 Housing and Urban Development 4 2013-04-01 2013-04-01 false Bases for substantial default. 907.3 Section 907.3 Housing and Urban Development REGULATIONS RELATING TO HOUSING AND URBAN DEVELOPMENT... DEVELOPMENT SUBSTANTIAL DEFAULT BY A PUBLIC HOUSING AGENCY § 907.3 Bases for substantial default. (a...
24 CFR 907.3 - Bases for substantial default.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 24 Housing and Urban Development 4 2011-04-01 2011-04-01 false Bases for substantial default. 907.3 Section 907.3 Housing and Urban Development REGULATIONS RELATING TO HOUSING AND URBAN DEVELOPMENT... DEVELOPMENT SUBSTANTIAL DEFAULT BY A PUBLIC HOUSING AGENCY § 907.3 Bases for substantial default. (a...
24 CFR 907.3 - Bases for substantial default.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 24 Housing and Urban Development 4 2012-04-01 2012-04-01 false Bases for substantial default. 907.3 Section 907.3 Housing and Urban Development REGULATIONS RELATING TO HOUSING AND URBAN DEVELOPMENT... DEVELOPMENT SUBSTANTIAL DEFAULT BY A PUBLIC HOUSING AGENCY § 907.3 Bases for substantial default. (a...
Code of Federal Regulations, 2010 CFR
2010-01-01
... of overpayments, delinquencies, or defaults. 1261.413 Section 1261.413 Aeronautics and Space NATIONAL...) § 1261.413 Analysis of costs; automation; prevention of overpayments, delinquencies, or defaults. The... internal controls to identify causes, if any, of overpayments, delinquencies, and defaults, and establish...
48 CFR 49.403 - Termination of cost-reimbursement contracts for default.
Code of Federal Regulations, 2010 CFR
2010-10-01
...-reimbursement contracts for default. 49.403 Section 49.403 Federal Acquisition Regulations System FEDERAL... of cost-reimbursement contracts for default. (a) The right to terminate a cost-reimbursement contract... case by the clause. (b) Settlement of a cost-reimbursement contract terminated for default is subject...
Moss, Andrew; Brodie, Jon; Furnas, Miles
2005-01-01
The Australian and New Zealand Guidelines for Fresh and Marine Water Quality (ANZECC Guidelines) provide default national guideline values for a wide range of indicators of relevance to the protection of the ecological condition of natural waters. However, the ANZECC Guidelines also place a strong emphasis on the need to develop more locally relevant guidelines. Using a structured framework, this paper explores indicators and regional data sets that can be used to develop more locally relevant guidelines for the Great Barrier Reef World Heritage Area (GBRWHA). The paper focuses on the water quality impacts of adjacent catchments on the GBRWHA with the key stressors addressed being nutrients, sediments and agricultural chemicals. Indicators relevant to these stressors are discussed including both physico-chemical pressure indicators and biological condition indicators. Where adequate data sets are available, guideline values are proposed. Generally, data were much more readily available for physico-chemical pressure indicators than for biological condition indicators. Specifically, guideline values are proposed for the major nutrients nitrogen (N) and phosphorus (P) and for chlorophyll-a. More limited guidelines are proposed for sediment related indicators. For most agricultural chemicals, the ANZECC Guidelines are likely to remain the default of choice for some time but it is noted that there is data in the literature that could be used to develop more locally relevant guidelines.
40 CFR 98.463 - Calculating GHG emissions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... generation using Equation TT-1 of this section. ER29NO11.004 Where: GCH4 = Modeled methane generation in... = Methane correction factor (fraction). Use the default value of 1 unless there is active aeration of waste... paragraphs (a)(2)(ii)(A) and (B) of this section when historical production or processing data are available...
40 CFR 98.463 - Calculating GHG emissions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... generation using Equation TT-1 of this section. ER29NO11.004 Where: GCH4 = Modeled methane generation in... = Methane correction factor (fraction). Use the default value of 1 unless there is active aeration of waste... paragraphs (a)(2)(ii)(A) and (B) of this section when historical production or processing data are available...
ERIC Educational Resources Information Center
Beretvas, S. Natasha; Murphy, Daniel L.
2013-01-01
The authors assessed correct model identification rates of Akaike's information criterion (AIC), corrected criterion (AICC), consistent AIC (CAIC), Hannon and Quinn's information criterion (HQIC), and Bayesian information criterion (BIC) for selecting among cross-classified random effects models. Performance of default values for the 5…
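The information criteria being compared in this study are standard closed-form penalties on a model's maximized log-likelihood. As a minimal sketch using the textbook formulas (this is not the authors' simulation code), one can compute all five and pick the minimizer:

```python
import math

def info_criteria(log_lik, k, n):
    """AIC, AICC, CAIC, HQIC, and BIC from a model's maximized
    log-likelihood, parameter count k, and sample size n."""
    aic = -2.0 * log_lik + 2.0 * k
    return {
        "AIC":  aic,
        "AICC": aic + 2.0 * k * (k + 1) / (n - k - 1),       # small-sample correction
        "CAIC": -2.0 * log_lik + k * (math.log(n) + 1.0),
        "HQIC": -2.0 * log_lik + 2.0 * k * math.log(math.log(n)),
        "BIC":  -2.0 * log_lik + k * math.log(n),
    }

def best_model(candidates, criterion="BIC"):
    """candidates: dict name -> (log_lik, k, n); lower criterion wins."""
    return min(candidates, key=lambda m: info_criteria(*candidates[m])[criterion])
```

Model selection then amounts to fitting each candidate cross-classified model, evaluating the chosen criterion, and keeping the smallest value; the study's question is how often that procedure identifies the true model.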
77 FR 3559 - Energy Conservation Program for Consumer Products: Test Procedures for Refrigerators...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-25
..., which is typical of an approach enabled by more sophisticated electronic controls. Id. The interim final... and long- time automatic defrost or variable defrost control and adjust the default values of maximum... accurate measurement of the energy use of products with variable defrost control. DATES: The amendments are...
On the zeroth-order hamiltonian for CASPT2 calculations of spin crossover compounds.
Vela, Sergi; Fumanal, Maria; Ribas-Ariño, Jordi; Robert, Vincent
2016-04-15
Complete active space self-consistent field theory (CASSCF) calculations and subsequent second-order perturbation theory treatment (CASPT2) are discussed in the evaluation of the spin-state energy difference (ΔH(elec)) of a series of seven spin crossover (SCO) compounds. The reference values have been extracted from a combination of experimental measurements and DFT + U calculations, as discussed in a recent article (Vela et al., Phys Chem Chem Phys 2015, 17, 16306). It is conclusively shown that the critical IPEA parameter used in CASPT2 calculations of ΔH(elec), a key quantity in the design of SCO compounds, should be increased from its default value of 0.25 a.u. to 0.50 a.u. The satisfactory agreement observed previously in the literature might result from a cancellation between the error originating in the default IPEA, which overestimates the stability of the HS state, and the erroneous atomic orbital basis set contraction of carbon atoms, which stabilizes the LS states. © 2015 Wiley Periodicals, Inc.
Lu, Shaojia; Gao, Weijia; Wei, Zhaoguo; Wang, Dandan; Hu, Shaohua; Huang, Manli; Xu, Yi; Li, Lingjiang
2017-06-01
Childhood trauma confers great risk for the development of multiple psychiatric disorders; however, the neural basis for this association is still unknown. The present resting-state functional magnetic resonance imaging study aimed to detect the effects of childhood trauma on brain function in a group of young healthy adults. In total, 24 healthy individuals with childhood trauma and 24 age- and sex-matched adults without childhood trauma were recruited. Each participant underwent resting-state functional magnetic resonance imaging scanning. Intra-regional brain activity was evaluated by regional homogeneity method and compared between groups. Areas with altered regional homogeneity were further selected as seeds in subsequent functional connectivity analysis. Statistical analyses were performed by setting current depression and anxiety as covariates. Adults with childhood trauma showed decreased regional homogeneity in bilateral superior temporal gyrus and insula, and the right inferior parietal lobule, as well as increased regional homogeneity in the right cerebellum and left middle temporal gyrus. Regional homogeneity values in the left middle temporal gyrus, right insula and right cerebellum were correlated with childhood trauma severity. In addition, individuals with childhood trauma also exhibited altered default mode network, cerebellum-default mode network and insula-default mode network connectivity when the left middle temporal gyrus, right cerebellum and right insula were selected as seed area, respectively. The present outcomes suggest that childhood trauma is associated with disturbed intrinsic brain function, especially the default mode network, in adults even without psychiatric diagnoses, which may mediate the relationship between childhood trauma and psychiatric disorders in later life.
24 CFR 886.314 - Financial default.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false Financial default. 886.314 Section... Program for the Disposition of HUD-Owned Projects § 886.314 Financial default. In the event of a financial... payments to the mortgagee until such time as the default is cured, or until some other time agreeable to...
17 CFR 201.155 - Default; motion to set aside default.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 17 Commodity and Securities Exchanges 2 2010-04-01 2010-04-01 false Default; motion to set aside default. 201.155 Section 201.155 Commodity and Securities Exchanges SECURITIES AND EXCHANGE COMMISSION... instituting proceedings, the allegations of which may be deemed to be true, if that party fails: (1) To appear...
33 CFR 20.310 - Default by respondent.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Pleadings and Motions § 20.310 Default by respondent. (a) The ALJ may find a respondent in default upon failure to file a timely answer to the complaint or, after motion, upon failure to appear at a conference or hearing without good cause shown. (b) Each motion for default must conform to the rules of form...
33 CFR 20.310 - Default by respondent.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Pleadings and Motions § 20.310 Default by respondent. (a) The ALJ may find a respondent in default upon failure to file a timely answer to the complaint or, after motion, upon failure to appear at a conference or hearing without good cause shown. (b) Each motion for default must conform to the rules of form...
22 CFR 221.21 - Event of Default; Application for Compensation; payment.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 22 Foreign Relations 1 2013-04-01 2013-04-01 false Event of Default; Application for Compensation... GUARANTEE STANDARD TERMS AND CONDITIONS Procedure for Obtaining Compensation § 221.21 Event of Default; Application for Compensation; payment. At any time after an Event of Default, as this term is defined in an...
22 CFR 204.21 - Event of default; Application for compensation; Payment.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Event of default; Application for compensation... STANDARD TERMS AND CONDITIONS Procedure for Obtaining Compensation § 204.21 Event of default; Application for compensation; Payment. (a) Within one year after an Event of Default, as this term is defined in...
22 CFR 221.21 - Event of Default; Application for Compensation; payment.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 22 Foreign Relations 1 2012-04-01 2012-04-01 false Event of Default; Application for Compensation... GUARANTEE STANDARD TERMS AND CONDITIONS Procedure for Obtaining Compensation § 221.21 Event of Default; Application for Compensation; payment. At any time after an Event of Default, as this term is defined in an...
42 CFR 23.28 - What events constitute default?
Code of Federal Regulations, 2012 CFR
2012-10-01
... 42 Public Health 1 2012-10-01 2012-10-01 false What events constitute default? 23.28 Section 23.28... SERVICE CORPS Private Practice Special Loans for Former Corps Members § 23.28 What events constitute default? The following events will constitute defaults of the loan agreement: (a) Failure to make full...
22 CFR 204.21 - Event of default; Application for compensation; Payment.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 22 Foreign Relations 1 2013-04-01 2013-04-01 false Event of default; Application for compensation... STANDARD TERMS AND CONDITIONS Procedure for Obtaining Compensation § 204.21 Event of default; Application for compensation; Payment. (a) Within one year after an Event of Default, as this term is defined in...
42 CFR 23.28 - What events constitute default?
Code of Federal Regulations, 2013 CFR
2013-10-01
... 42 Public Health 1 2013-10-01 2013-10-01 false What events constitute default? 23.28 Section 23.28... SERVICE CORPS Private Practice Special Loans for Former Corps Members § 23.28 What events constitute default? The following events will constitute defaults of the loan agreement: (a) Failure to make full...
22 CFR 204.21 - Event of default; Application for compensation; Payment.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 22 Foreign Relations 1 2011-04-01 2011-04-01 false Event of default; Application for compensation... STANDARD TERMS AND CONDITIONS Procedure for Obtaining Compensation § 204.21 Event of default; Application for compensation; Payment. (a) Within one year after an Event of Default, as this term is defined in...
22 CFR 221.21 - Event of Default; Application for Compensation; payment.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 22 Foreign Relations 1 2011-04-01 2011-04-01 false Event of Default; Application for Compensation... GUARANTEE STANDARD TERMS AND CONDITIONS Procedure for Obtaining Compensation § 221.21 Event of Default; Application for Compensation; payment. At any time after an Event of Default, as this term is defined in an...
22 CFR 204.21 - Event of default; Application for compensation; Payment.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 22 Foreign Relations 1 2012-04-01 2012-04-01 false Event of default; Application for compensation... STANDARD TERMS AND CONDITIONS Procedure for Obtaining Compensation § 204.21 Event of default; Application for compensation; Payment. (a) Within one year after an Event of Default, as this term is defined in...
42 CFR 23.28 - What events constitute default?
Code of Federal Regulations, 2010 CFR
2010-10-01
... 42 Public Health 1 2010-10-01 2010-10-01 false What events constitute default? 23.28 Section 23.28... SERVICE CORPS Private Practice Special Loans for Former Corps Members § 23.28 What events constitute default? The following events will constitute defaults of the loan agreement: (a) Failure to make full...
22 CFR 204.21 - Event of default; Application for compensation; Payment.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 22 Foreign Relations 1 2014-04-01 2014-04-01 false Event of default; Application for compensation... STANDARD TERMS AND CONDITIONS Procedure for Obtaining Compensation § 204.21 Event of default; Application for compensation; Payment. (a) Within one year after an Event of Default, as this term is defined in...
42 CFR 23.28 - What events constitute default?
Code of Federal Regulations, 2014 CFR
2014-10-01
... 42 Public Health 1 2014-10-01 2014-10-01 false What events constitute default? 23.28 Section 23.28... SERVICE CORPS Private Practice Special Loans for Former Corps Members § 23.28 What events constitute default? The following events will constitute defaults of the loan agreement: (a) Failure to make full...
22 CFR 221.21 - Event of Default; Application for Compensation; payment.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 22 Foreign Relations 1 2014-04-01 2014-04-01 false Event of Default; Application for Compensation... GUARANTEE STANDARD TERMS AND CONDITIONS Procedure for Obtaining Compensation § 221.21 Event of Default; Application for Compensation; payment. At any time after an Event of Default, as this term is defined in an...
24 CFR 27.15 - Notice of default and foreclosure sale.
Code of Federal Regulations, 2010 CFR
2010-04-01
... sale. 27.15 Section 27.15 Housing and Urban Development Office of the Secretary, Department of Housing... Foreclosure of Multifamily Mortgages § 27.15 Notice of default and foreclosure sale. (a) Within 45 days after... serving a Notice of Default and Foreclosure Sale. (b) The Notice of Default and Foreclosure Sale shall...
Default risk modeling beyond the first-passage approximation: extended Black-Cox model.
Katz, Yuri A; Shokhirev, Nikolai V
2010-07-01
We develop a generalization of the Black-Cox structural model of default risk. The extended model captures uncertainty related to a firm's ability to avoid default even if the company's liabilities momentarily exceed its assets. Diffusion in a linear potential with the radiation boundary condition is used to mimic a company's default process. The exact solution of the corresponding Fokker-Planck equation allows for the derivation of analytical expressions for the cumulative probability of default and the relevant hazard rate. The closed formulas obtained fit the historical data on global corporate defaults well and demonstrate the split behavior of credit spreads for bonds of companies in different categories of speculative-grade ratings with varying time to maturity. Introduction of the finite rate of default at the boundary improves valuation of credit risk for short time horizons, which is the key advantage of the proposed model. We also consider the influence of uncertainty in the initial distance to the default barrier on the outcome of the model and demonstrate that this additional source of incomplete information may be responsible for nonzero credit spreads for bonds with very short time to maturity.
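This record builds on the classical first-passage (Black-Cox) approximation, in which default occurs the first time the firm's value, modeled as a geometric Brownian motion, touches a barrier. A minimal sketch of that baseline cumulative default probability, the quantity the extended model generalizes; the function and variable names are illustrative, not from the paper:

```python
from math import log, sqrt, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def first_passage_default_prob(V0, D, mu, sigma, T):
    """Probability that firm value V (geometric Brownian motion with
    drift mu and volatility sigma, starting at V0) hits the default
    barrier D before time T -- the classical first-passage baseline."""
    nu = mu - 0.5 * sigma ** 2          # drift of log-value
    b = log(D / V0)                     # log-distance to barrier (<= 0 if V0 > D)
    s = sigma * sqrt(T)
    return (norm_cdf((b - nu * T) / s)
            + (D / V0) ** (2.0 * nu / sigma ** 2) * norm_cdf((b + nu * T) / s))
```

The extended model replaces the absorbing barrier of this formula with a radiation (finite default rate) boundary condition, which is what improves short-horizon credit spreads.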
Default risk modeling with position-dependent killing
NASA Astrophysics Data System (ADS)
Katz, Yuri A.
2013-04-01
Diffusion in a linear potential in the presence of position-dependent killing is used to mimic a default process. Different assumptions regarding transport coefficients, initial conditions, and elasticity of the killing measure lead to diverse models of bankruptcy. One “stylized fact” is fundamental for our consideration: empirically default is a rather rare event, especially in the investment grade categories of credit ratings. Hence, the action of killing may be considered as a small parameter. In a number of special cases we derive closed-form expressions for the entire term structure of the cumulative probability of default, its hazard rate, and intensity. Comparison with historical data on aggregate global corporate defaults confirms the validity of the perturbation method for estimations of long-term probability of default for companies with high credit quality. On a single company level, we implement the derived formulas to estimate the one-year likelihood of default of Enron on a daily basis from August 2000 to August 2001, three months before its default, and compare the obtained results with forecasts of traditional structural models.
Kapella, B K; Anuwatnonthakate, A; Komsakorn, S; Moolphate, S; Charusuntonsri, P; Limsomboon, P; Wattanaamornkiat, W; Nateniyom, S; Varma, J K
2009-02-01
This study was set in Thailand's Tuberculosis (TB) Active Surveillance Network in four provinces of Thailand. As treatment default is common in mobile and foreign populations, we evaluated risk factors for default among non-Thai TB patients in Thailand. This was an observational cohort study using TB program data. Analysis was restricted to patients with an outcome categorized as cured, completed, failure or default. We used multivariate analysis to identify factors associated with default, including propensity score analysis, to adjust for factors associated with receiving directly observed treatment (DOT). During October 2004-September 2006, we recorded data for 14,359 TB patients, of whom 995 (7%) were non-Thais. Of the 791 patients analyzed, 313 (40%) defaulted. In multivariate analysis, age ≥45 years (RR 1.47, 95% CI 1.25-1.74), mobility (RR 2.36, 95% CI 1.77-3.14) and lack of DOT (RR 2.29, 95% CI 1.45-3.61) were found to be significantly associated with default among non-Thais. When controlling for propensity to be assigned DOT, the risk of default remained increased in those not assigned DOT (RR 1.99, 95% CI 1.03-3.85). In non-Thai TB patients, DOT was the only modifiable factor associated with default. Using DOT may help improve TB treatment outcomes in non-Thai TB patients.
Variations in algorithm implementation among quantitative texture analysis software packages
NASA Astrophysics Data System (ADS)
Foy, Joseph J.; Mitta, Prerana; Nowosatka, Lauren R.; Mendel, Kayla R.; Li, Hui; Giger, Maryellen L.; Al-Hallaq, Hania; Armato, Samuel G.
2018-02-01
Open-source texture analysis software allows for the advancement of radiomics research. Variations in texture features, however, result from discrepancies in algorithm implementation. Anatomically matched regions of interest (ROIs) that captured normal breast parenchyma were placed in the magnetic resonance images (MRI) of 20 patients at two time points. Six first-order features and six gray-level co-occurrence matrix (GLCM) features were calculated for each ROI using four texture analysis packages. Features were extracted using package-specific default GLCM parameters and using GLCM parameters modified to yield the greatest consistency among packages. Relative change in the value of each feature between time points was calculated for each ROI. Distributions of relative feature value differences were compared across packages. Absolute agreement among feature values was quantified by the intra-class correlation coefficient. Among first-order features, significant differences were found for max, range, and mean, and only kurtosis showed poor agreement. All six second-order features showed significant differences using package-specific default GLCM parameters, and five second-order features showed poor agreement; with modified GLCM parameters, no significant differences among second-order features were found, and all second-order features showed poor agreement. While relative texture change discrepancies existed across packages, these differences were not significant when consistent parameters were used.
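The discrepancies described above arise from implementation choices such as the GLCM offset, symmetry, and normalization. A minimal illustrative sketch of a gray-level co-occurrence matrix and one second-order feature (contrast), showing exactly where those parameter choices enter; the function names are hypothetical, not those of any of the packages studied:

```python
def glcm(image, dx=1, dy=0, levels=4, symmetric=True):
    """Normalized gray-level co-occurrence matrix for a 2-D list of
    integer gray levels in [0, levels). The offset (dx, dy), the
    symmetric flag, and the normalization are the implementation
    choices that commonly differ among texture analysis packages."""
    P = [[0.0] * levels for _ in range(levels)]
    rows, cols = len(image), len(image[0])
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                P[image[r][c]][image[r2][c2]] += 1
                if symmetric:                       # count the reverse pair too
                    P[image[r2][c2]][image[r][c]] += 1
    total = sum(sum(row) for row in P)
    return [[v / total for v in row] for row in P]

def glcm_contrast(P):
    """Contrast feature: sum over i, j of (i - j)^2 * P[i][j]."""
    return sum((i - j) ** 2 * P[i][j]
               for i in range(len(P)) for j in range(len(P)))
```

Changing `dx`, `dy`, the number of gray levels, or the symmetry convention changes `P`, and hence every feature derived from it, which is why package-specific defaults yield different feature values for the same ROI.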
Adjusting the fairshare policy to prevent computing power loss
NASA Astrophysics Data System (ADS)
Dal Pra, Stefano
2017-10-01
On a typical WLCG site providing batch access to computing resources according to a fairshare policy, the idle time lapse after a job ends and before a new one begins on a given slot is negligible compared to the duration of typical jobs. The overall amount of these intervals over a time window increases with the size of the cluster and the inverse of job duration, and can be considered equivalent to an average number of unavailable slots over that time window. This value has been investigated for the Tier-1 at CNAF and observed to occasionally grow to more than 10% of the roughly 20,000 available computing slots. Analysis reveals that this happens when a sustained rate of short jobs is submitted to the cluster and dispatched by the batch system. Because of how the default fairshare policy works, it increases the dynamic priority of those users mostly submitting short jobs, since they are not accumulating runtime, and will dispatch more of their jobs at the next round, thus worsening the situation until the submission flow ends. To address this problem, the default behaviour of the fairshare has been altered by adding a correcting term to the default formula for the dynamic priority. The LSF batch system, currently adopted at CNAF, provides a way to define its value by invoking a C function, which returns it for each user in the cluster. The correcting term works by rounding up to a minimum defined runtime the most recently completed jobs. In this way, each short job looks almost like a regular one and the dynamic priority settles to a proper value. The net effect is a reduction of the dispatching rate of short jobs and, consequently, a great improvement in the average number of available slots. Furthermore, a potential starvation problem, actually observed at least once, is also prevented. After describing short jobs and reporting on their impact on the cluster, possible workarounds are discussed and the selected solution is motivated.
Details on the most critical aspects of the implementation are explained and the observed results are presented.
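A rough sketch of the correcting idea described above: recently finished jobs are rounded up to a minimum runtime before entering the dynamic-priority formula, so users submitting many short jobs accumulate runtime as if their jobs were regular ones. The priority formula below is a simplification, not LSF's actual one; the parameter names and the minimum runtime are illustrative assumptions:

```python
MIN_RUNTIME = 600.0  # assumed minimum runtime in seconds for a finished job

def corrected_runtime(recent_job_runtimes, min_runtime=MIN_RUNTIME):
    """Accumulated runtime with each recently finished job rounded up
    to a minimum duration, so that a flood of short jobs no longer
    keeps a user's dynamic priority artificially high."""
    return sum(max(t, min_runtime) for t in recent_job_runtimes)

def dynamic_priority(share, run_time, num_started,
                     cpu_factor=1.0, run_job_factor=1.0):
    """Simplified fairshare dynamic priority: the user's share divided
    by a weighted sum of consumed resources (illustrative weights)."""
    return share / (1.0 + cpu_factor * run_time + run_job_factor * num_started)
```

With the correction, fifty 10-second jobs count as fifty 600-second jobs, so the submitter's priority drops and the dispatching rate of further short jobs falls, as described in the abstract.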
From vegetation zones to climatypes: Effects of climate warming on Siberian ecosystems
N. M. Tchebakova; G. E. Rehfeldt; E. I. Parfenova
2010-01-01
Evidence for global warming over the past 200 years is overwhelming, based on both direct weather observation and indirect physical and biological indicators such as retreating glaciers and snow/ice cover, increasing sea level, and longer growing seasons (IPCC 2001, 2007). Against the background of global warming at a rate of 0.6°C during the twentieth century (IPCC 2001),...
D. T. Price; D. W. McKenney; L. A. Joyce; R. M. Siltanen; P. Papadopol; K. Lawrence
2011-01-01
Projections of future climate were selected for four well-established general circulation models (GCMs) forced by each of three greenhouse gas (GHG) emissions scenarios recommended by the Intergovernmental Panel on Climate Change (IPCC), namely scenarios A2, A1B, and B1 of the IPCC Special Report on Emissions Scenarios. Monthly data for the period 1961-2100 were...
Framing the future in the Southern United States climate, land use, and forest conditions
David N. Wear; Thomas L. Mote; J. Marshall Shepherd; K.C. Binita; Christopher W. Strother
2014-01-01
The Intergovernmental Panel on Climate Change (IPCC) has concluded, with 90% certainty, that human or 'anthropogenic' activities (emissions of greenhouse gases, aerosols and pollution, land-use/land-cover change) have altered global temperature patterns over the past 100-150 years (IPCC 2007a). Such temperature changes have a set of cascading, and sometimes amplifying...
34 CFR Appendix A to Subpart N of... - Sample Default Prevention Plan
Code of Federal Regulations, 2011 CFR
2011-07-01
... 34 Education 3 2011-07-01 2011-07-01 false Sample Default Prevention Plan A Appendix A to Subpart N of Part 668 Education Regulations of the Offices of the Department of Education (Continued) OFFICE... Default Rates Appendix A to Subpart N of Part 668—Sample Default Prevention Plan This appendix is provided...
ERIC Educational Resources Information Center
General Accounting Office, Washington, DC. Health, Education, and Human Services Div.
This report to Congress analyzes student loan default rates at historically black colleges and universities (HBCUs), focusing on student characteristics which may predict the likelihood of default. The study examined available student databases for characteristics identified by previous studies as related to level of student loan defaults. Among…
7 CFR 4287.145 - Default by borrower.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 15 2010-01-01 2010-01-01 false Default by borrower. 4287.145 Section 4287.145... Loans § 4287.145 Default by borrower. (a) The lender must notify the Agency when a borrower is 30 days past due on a payment or is otherwise in default of the Loan Agreement. Form FmHA 1980-44, “Guaranteed...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 34 Education 3 2010-07-01 2010-07-01 false Draft cohort default rates and your ability to challenge before official cohort default rates are issued. 668.204 Section 668.204 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF POSTSECONDARY EDUCATION, DEPARTMENT...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 34 Education 3 2010-07-01 2010-07-01 false Draft cohort default rates and your ability to challenge before official cohort default rates are issued. 668.185 Section 668.185 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF POSTSECONDARY EDUCATION, DEPARTMENT...
49 CFR 260.47 - Events of default for direct loans.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 49 Transportation 4 2012-10-01 2012-10-01 false Events of default for direct loans. 260.47 Section... REHABILITATION AND IMPROVEMENT FINANCING PROGRAM Procedures To Be Followed in the Event of Default § 260.47 Events of default for direct loans. (a) Upon the Borrower's failure to make a scheduled payment, or upon...
49 CFR 260.47 - Events of default for direct loans.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 49 Transportation 4 2011-10-01 2011-10-01 false Events of default for direct loans. 260.47 Section... REHABILITATION AND IMPROVEMENT FINANCING PROGRAM Procedures To Be Followed in the Event of Default § 260.47 Events of default for direct loans. (a) Upon the Borrower's failure to make a scheduled payment, or upon...
49 CFR 260.45 - Events of default for guaranteed loans.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 49 Transportation 4 2011-10-01 2011-10-01 false Events of default for guaranteed loans. 260.45... REHABILITATION AND IMPROVEMENT FINANCING PROGRAM Procedures To Be Followed in the Event of Default § 260.45 Events of default for guaranteed loans. (a) If the Borrower is more than 30 days past due on a payment or...
49 CFR 260.45 - Events of default for guaranteed loans.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 4 2010-10-01 2010-10-01 false Events of default for guaranteed loans. 260.45... REHABILITATION AND IMPROVEMENT FINANCING PROGRAM Procedures To Be Followed in the Event of Default § 260.45 Events of default for guaranteed loans. (a) If the Borrower is more than 30 days past due on a payment or...
49 CFR 260.45 - Events of default for guaranteed loans.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 49 Transportation 4 2012-10-01 2012-10-01 false Events of default for guaranteed loans. 260.45... REHABILITATION AND IMPROVEMENT FINANCING PROGRAM Procedures To Be Followed in the Event of Default § 260.45 Events of default for guaranteed loans. (a) If the Borrower is more than 30 days past due on a payment or...
49 CFR 260.45 - Events of default for guaranteed loans.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 49 Transportation 4 2013-10-01 2013-10-01 false Events of default for guaranteed loans. 260.45... REHABILITATION AND IMPROVEMENT FINANCING PROGRAM Procedures To Be Followed in the Event of Default § 260.45 Events of default for guaranteed loans. (a) If the Borrower is more than 30 days past due on a payment or...
49 CFR 260.47 - Events of default for direct loans.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 49 Transportation 4 2014-10-01 2014-10-01 false Events of default for direct loans. 260.47 Section... REHABILITATION AND IMPROVEMENT FINANCING PROGRAM Procedures To Be Followed in the Event of Default § 260.47 Events of default for direct loans. (a) Upon the Borrower's failure to make a scheduled payment, or upon...
49 CFR 260.47 - Events of default for direct loans.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 4 2010-10-01 2010-10-01 false Events of default for direct loans. 260.47 Section... REHABILITATION AND IMPROVEMENT FINANCING PROGRAM Procedures To Be Followed in the Event of Default § 260.47 Events of default for direct loans. (a) Upon the Borrower's failure to make a scheduled payment, or upon...
49 CFR 260.47 - Events of default for direct loans.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 49 Transportation 4 2013-10-01 2013-10-01 false Events of default for direct loans. 260.47 Section... REHABILITATION AND IMPROVEMENT FINANCING PROGRAM Procedures To Be Followed in the Event of Default § 260.47 Events of default for direct loans. (a) Upon the Borrower's failure to make a scheduled payment, or upon...
49 CFR 260.45 - Events of default for guaranteed loans.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 49 Transportation 4 2014-10-01 2014-10-01 false Events of default for guaranteed loans. 260.45... REHABILITATION AND IMPROVEMENT FINANCING PROGRAM Procedures To Be Followed in the Event of Default § 260.45 Events of default for guaranteed loans. (a) If the Borrower is more than 30 days past due on a payment or...
Consumer default risk assessment in a banking institution
NASA Astrophysics Data System (ADS)
Costa e Silva, Eliana; Lopes, Isabel Cristina; Correia, Aldina; Faria, Susana
2016-12-01
Credit scoring is an application of financial risk forecasting to consumer lending. In this study, statistical analysis is applied to credit scoring data from a financial institution to evaluate the default risk of consumer loans. The default risk was found to be influenced by the spread, the age of the consumer, and the number of credit cards owned by the consumer. A lower spread, a higher number of credit cards and a younger age of the borrower are factors that decrease the risk of default. Clients receiving their salary in the same banking institution as the loan have a lower chance of default than clients receiving their salary in another institution. We also found that clients in the lowest income tax echelon have a higher propensity to default.
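A credit-scoring model of this kind is typically a logistic regression. The sketch below illustrates the signs of the reported effects (higher spread raises default risk; more credit cards, younger age, and receiving the salary at the lending bank lower it); the coefficients are invented for illustration, not taken from the study:

```python
from math import exp

# Hypothetical coefficients, signed to match the reported effects.
COEF = {"intercept": -3.0, "spread": 0.8, "age": 0.02,
        "num_cards": -0.3, "salary_same_bank": -0.5}

def default_probability(spread, age, num_cards, salary_same_bank):
    """Logistic credit-scoring sketch: P(default) = 1 / (1 + exp(-z)),
    where z is a linear combination of the borrower's attributes."""
    z = (COEF["intercept"]
         + COEF["spread"] * spread
         + COEF["age"] * age
         + COEF["num_cards"] * num_cards
         + COEF["salary_same_bank"] * (1 if salary_same_bank else 0))
    return 1.0 / (1.0 + exp(-z))
```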
Nicholson, Fiona; Bhogal, Anne; Cardenas, Laura; Chadwick, Dave; Misselbrook, Tom; Rollett, Alison; Taylor, Matt; Thorman, Rachel; Williams, John
2017-09-01
The anaerobic digestion of food waste for energy recovery produces a nutrient-rich digestate which is a valuable source of crop-available nitrogen (N). As with any 'new' material being recycled to agricultural land, it is important to develop best management practices that maximise crop-available N supply whilst minimising emissions to the environment. In this study, ammonia (NH₃) and nitrous oxide (N₂O) emissions to air and nitrate (NO₃⁻) leaching losses to water following digestate, compost and livestock manure applications to agricultural land were measured at 3 sites in England and Wales. Ammonia emissions were greater from applications of food-based digestate (c.40% of total N applied) than from livestock slurry (c.30% of total N applied) due to its higher ammonium-N content (mean 5.6 kg/t compared with 1-2 kg/t for slurry) and elevated pH (mean 8.3 compared with 7.7 for slurry). Whilst bandspreading was effective at reducing NH₃ emissions from slurry compared with surface broadcasting, it was not found to be an effective mitigation option for food-based digestate in this study. The majority of the NH₃ losses occurred within 6 h of spreading, highlighting the importance of rapid soil incorporation as a method for reducing NH₃ emissions. Nitrous oxide losses from food-based digestates were low, with emission factors all less than the IPCC default value of 1% (mean 0.45 ± 0.15%). Overwinter NO₃⁻ leaching losses from food-based digestate were similar to those from pig slurry, but much greater than from pig farmyard manure or compost. Both gaseous N losses and NO₃⁻ leaching from green and green/food composts were low, indicating that, in these terms, compost can be considered an 'environmentally benign' material. These findings have been used in the development of best practice guidelines which provide a framework for the responsible use of digestates and composts in agriculture. Copyright © 2017 Elsevier Ltd. All rights reserved.
De Rosa, Daniele; Rowlings, David W; Biala, Johannes; Scheer, Clemens; Basso, Bruno; Grace, Peter R
2018-05-11
Accounting for nitrogen (N) release from organic amendments (OA) can reduce the use of synthetic N-fertiliser, sustain crop production, and potentially reduce soil-borne greenhouse gas (GHG) emissions. However, it is difficult to assess the GHG mitigation potential of OA as a substitute for N-fertiliser over the long term because only part of the organic N added to soil is released in the first year after application. High-resolution nitrous oxide (N₂O) and carbon dioxide (CO₂) emissions monitored from a horticultural crop rotation over 2.5 years under conventional urea application rates were compared to treatments receiving an annual application of raw and composted chicken manure combined with conventional and reduced N-fertiliser rates. The repeated application of composted manure did not increase annual N₂O emissions, while the application of raw manure resulted in N₂O emissions up to 35.2 times higher than the zero-N-fertiliser treatment and up to 4.7 times higher than the conventional N-fertiliser rate, due to an increase in C and N availability following the repeated application of raw OA. The main factor driving N₂O emissions was the incorporation of organic material accompanied by high soil moisture, while the application of synthetic N-fertiliser induced only short-term N₂O emission pulses. The average annual N₂O emission factor calculated accounting for the total N applied, including OA, was equal to 0.27 ± 0.17%, 3.7 times lower than the IPCC default value. Accounting for the estimated N release from OA only enabled a more realistic N₂O emission factor to be defined for organically amended fields, equal to 0.48 ± 0.3%. This study demonstrated that accounting for the N released from repeated application of composted rather than raw manure can be a viable pathway to reduce N₂O emissions and maintain soil fertility. Copyright © 2017. Published by Elsevier B.V.
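The emission factors quoted in these two studies are simple ratios of N₂O-N emitted to N applied, expressed as a percentage and compared against the IPCC default of 1%. A minimal illustration (the numbers in the example are round illustrative values, not the studies' data):

```python
def n2o_emission_factor(n2o_n_emitted_kg, n_applied_kg):
    """Direct N2O emission factor: N2O-N emitted as a percentage of
    the total N applied -- the quantity compared against the IPCC
    default value of 1%."""
    return 100.0 * n2o_n_emitted_kg / n_applied_kg

# Illustrative: 0.54 kg N2O-N emitted from 200 kg N applied -> 0.27%
ef = n2o_emission_factor(0.54, 200.0)
```

Whether OA nitrogen is counted at its total content or only at its estimated first-year release changes the denominator, which is why the two accounting choices in the abstract yield different factors (0.27% vs 0.48%).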
ENSO Simulation in Coupled Ocean-Atmosphere Models: Are the Current Models Better?
DOE Office of Scientific and Technical Information (OSTI.GOV)
AchutaRao, K; Sperber, K R
Maintaining a multi-model database over a generation or more of model development provides an important framework for assessing model improvement. Using control integrations, we compare the simulation of the El Nino/Southern Oscillation (ENSO), and its extratropical impact, in models developed for the 2007 Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report with models developed in the late 1990's (the so-called Coupled Model Intercomparison Project-2 [CMIP2] models). The IPCC models tend to be more realistic in representing the frequency with which ENSO occurs, and they are better at locating enhanced temperature variability over the eastern Pacific Ocean. When compared with reanalyses, the IPCC models have larger pattern correlations of tropical surface air temperature than do the CMIP2 models during the boreal winter peak phase of El Nino. However, for sea-level pressure and precipitation rate anomalies, a clear separation in performance between the two vintages of models is not as apparent. The strongest improvement occurs for the modeling groups whose CMIP2 model tended to have the lowest pattern correlations with observations. This has been checked by subsampling the multi-century IPCC simulations in a manner consistent with the single 80-year time segment available from CMIP2. Our results suggest that multi-century integrations may be required to statistically assess model improvement of ENSO. The quality of the El Nino precipitation composite is directly related to the fidelity of the boreal winter precipitation climatology, highlighting the importance of reducing systematic model error. Over North America, distinct improvement of El Nino-forced boreal winter surface air temperature, sea-level pressure, and precipitation rate anomalies in the IPCC models occurs. This improvement is directly proportional to the skill of the tropical El Nino-forced precipitation anomalies.
Rescuing Data from International Scientific Assessments: A Case Study
NASA Astrophysics Data System (ADS)
Downs, R. R.; Chen, R. S.; Xing, X.
2016-12-01
International scientific assessments such as the Millennium Ecosystem Assessment (MA) and the Intergovernmental Panel on Climate Change (IPCC) assessments represent significant efforts by the global scientific community to review, synthesize, and communicate diverse scientific knowledge, data, and information to support societal decision making on pressing problems such as resource management and climate change. To support the transparency, integrity, and usability of these assessments, it is vital that the underlying data used in these assessments be made openly available and usable by diverse stakeholders. Unfortunately, due to the many geographically dispersed contributors to assessments of this kind, as well as the severe time pressures and limited resources when assessments are conducted, appropriate management and preservation of these data are not always a priority. This can lead to the need to "rescue" key data to ensure their long-term preservation, integrity, accessibility, and appropriate reuse, especially in subsequent assessments. We describe here efforts over two decades to rescue selected data from the MA and IPCC assessments, to work with assessment authors and other contributors to validate and document assessment data, and to develop appropriate levels of data stewardship in light of potential user needs and constrained resources. The IPCC efforts are supported by the IPCC Data Distribution Center (DDC), which is operated collaboratively by the Center for Environmental Data Analysis in the United Kingdom, the World Data Center-Climate in Germany, and the NASA Socioeconomic Data and Applications Center (SEDAC) in the U.S. With the sixth IPCC assessment cycle now starting, a key challenge is to help the assessment community improve data management during the assessment process to reduce the risks of data loss, inadequate documentation, incomplete provenance, unnecessary data restrictions, and other problems.
A Real Options Approach to Valuing the Risk Transfer in a Multi-Year Procurement Contract
2009-10-01
asset follows a Brownian motion process where the returns have a lognormal distribution. H. BLACK-SCHOLES MODEL The value of the put option p on... risk in a firm-fixed-price contract. The government also provides interest-free financing that can greatly reduce the amount of capital a contractor... structured finance and credit default swap applications. E. OPTIONS THEORY We will use closed-form BS-type option pricing methods to estimate the
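The closed-form Black-Scholes put value the snippet refers to can be sketched in a few lines. This is the generic textbook formula, not the report's own procurement model; parameter values below are purely illustrative.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_put(S, K, T, r, sigma):
    """Black-Scholes value of a European put.

    S: current asset value, K: strike, T: time to expiry (years),
    r: risk-free rate, sigma: volatility of the lognormal returns.
    """
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return K * exp(-r * T) * norm_cdf(-d2) - S * norm_cdf(-d1)
```

For example, an at-the-money one-year put (S = K = 100, r = 5%, sigma = 20%) is worth about 5.57 under these assumptions.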
Vertical Ship Motion Study for Ambrose Entrance Channel, New York
2014-05-01
channels, PIANC Bulletin 1971, Vol. 1, No. 7, 17-20. Hardy, T. A. 1993. The attenuation of spectral transformation of wind waves on a coral reef ...A80(12): 95 p. Hearn, C. J. 1999. Wave-breaking hydrodynamics within coral reef systems and the effect of changing relative sea level, Journal of...Values of cf applied for coral reefs range from 0.05 to 0.40 (Hardy 1993; Hearn 1999; Lowe et al. 2005). CMS-Wave uses a default value of cf
The Preliminary Pollutant Limit Value Approach: Manual for Users
1988-07-01
5.2.3 Plant Consumption by Dairy Cows (Upd); 5.2.4 Water Consumption by Dairy Cows (Uwd); 5.2.5 Soil...other equations include the effect of concurrent consumption of soil by grazing cows (equation 19), and for contaminated water intake, such as from a...ingestion of soil by dairy cow, kg/day. A default value of 0.87 kg/day is suggested (see Section 5.2.5). 4.2.6 Direct Soil Intake: Two pathway equations are
1992-03-06
and their respective values. Macro Parameter / Macro Value: $ACCSIZE 32; $ALIGNMENT 4; $COUNT-LAST 2 147 483 647; $DEFAULT-KMNSIZE 2147483648; $DEFAULT-STOR...The subprogram raise_exception raises the exception described by the information record supplied as parameter. In addition to the subprogram
Effects of pay-for-performance system on tuberculosis default cases control and treatment in Taiwan.
Tsai, Wen-Chen; Kung, Pei-Tseng; Khan, Mahmud; Campbell, Claudia; Yang, Wen-Ta; Lee, Tsuey-Fong; Li, Ya-Hsin
2010-09-01
In order to make tuberculosis (TB) treatment more effective and to lower the default rate of the disease, the Bureau of National Health Insurance (BNHI) in Taiwan implemented the "pay-for-performance on Tuberculosis" program (P4P on TB) in 2004. The purpose of this study is to investigate the effectiveness of the P4P system in terms of default rate. This is a retrospective study. National Health Insurance Research Datasets in Taiwan from 2002 to 2005 were used for the study. The study compared the differences in TB default rates before and after the implementation of the P4P program, between participating and non-participating hospitals, and between P4P hospitals with and without case managers. Furthermore, logistic regression analysis was conducted to explore the factors influencing treatment default among patients after TB was detected. The treatment default rate after "P4P on TB" was 11.37%, compared with 15.56% before "P4P on TB" implementation. The treatment default rate in P4P hospitals was 10.67%, compared to 12.7% in non-P4P hospitals. In addition, the default rate was 10.4% in hospitals with case managers, compared with 12.68% in hospitals without case managers. The results of the study showed that the "P4P on TB" program lowered the treatment default rate for TB patients. In addition, case managers improved treatment outcomes by controlling patients' default rate. Copyright 2010 The British Infection Society. Published by Elsevier Ltd. All rights reserved.
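As a quick check on the effect size reported above, the percentage-point and relative reductions implied by the before/after rates can be computed directly. A minimal sketch using only the rates quoted in the abstract:

```python
def rate_reduction(before_pct, after_pct):
    """Absolute (percentage-point) and relative (%) reduction in a rate."""
    absolute = before_pct - after_pct
    relative = 100.0 * absolute / before_pct
    return absolute, relative

# Default rates before and after "P4P on TB" implementation (from the abstract)
pp, rel = rate_reduction(15.56, 11.37)  # ~4.19 points, ~27% relative reduction
```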
Jayakumar, Niranjana; Gnanasekaran, Dhivyalakshmi
2014-01-01
Background: The Revised National Tuberculosis Control Programme (RNTCP) in India has achieved improved cure rates. Objectives: This study describes the achievements under the RNTCP in terms of conversion rates, treatment outcomes and the pattern of time of default in patients on directly observed short-course treatment for tuberculosis in Puducherry, Southern India. Settings: Retrospective cohort study; Tuberculosis Unit in the District Tuberculosis Centre, Puducherry, India. Materials and Methods: Cohort analysis of patients registered at the Tuberculosis Unit during the 1st and 2nd quarters of 2011. Details about sputum conversion, treatment outcome and time of default were obtained from the tuberculosis register. Statistical Analysis: Kaplan-Meier plots and log-rank tests. Results: RNTCP targets with respect to success rate (85.7%), death rate (2.7%) and failure rate (2.1%) in new cases have been achieved, but the sputum conversion rate (88%) and default rate (5.9%) targets have not. The overall default rate for all registered TB patients was 7.4%, significantly higher in category II. In retreatment cases registered as treatment after default, the default rate was high (9%). The cumulative default rate, though similar in the initial two months of treatment, was consistently higher in category II than in category I. Nearly 40% of all defaulters interrupted treatment between the second and fourth month after treatment initiation. Conclusion: Defaulting from treatment is more common among retreatment cases and usually occurs during the transition from the intensive phase to the continuation phase. PMID:25478371
Jakubowiak, W M; Bogorodskaya, E M; Borisov, S E; Danilova, I D; Kourbatova, E V
2007-01-01
Tuberculosis (TB) services in six Russian regions in which social support programmes for TB patients were implemented. To identify risk factors for default and to evaluate the possible impact of social support. Retrospective study of new pulmonary smear-positive and smear-negative TB patients registered during the second and third quarters of 2003. Data were analysed in a case-control study including default patients as cases and successfully treated patients as controls, using multivariate logistic regression modelling. A total of 1805 cases of pulmonary TB were enrolled. Default rates in the regions were 2.3-6.3%. On multivariate analysis, risk factors independently associated with default included: unemployment (OR 4.44; 95%CI 2.23-8.86), alcohol abuse (OR 1.99; 95%CI 1.04-3.81), and homelessness (OR 3.49; 95%CI 1.25-9.77). Social support reduced default (OR 0.13; 95%CI 0.06-0.28), controlling for age, sex, region, residence and acid-fast bacilli (AFB) smear of sputum. Unemployment, alcohol abuse and homelessness were associated with increased default among new TB patients, while social support for TB patients reduced default. Further prospective randomised studies are necessary to evaluate the impact and to determine the most cost-effective forms of social support for improving TB treatment outcomes in Russia, especially among populations at risk of default.
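The odds ratios and confidence intervals quoted above come from exponentiating logistic-regression coefficients. A generic sketch of that conversion; the beta and standard error below are back-solved from the paper's unemployment OR purely for illustration, not values taken from the paper:

```python
from math import exp, log

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio and 95% CI from a logistic-regression coefficient and its SE."""
    return exp(beta), exp(beta - z * se), exp(beta + z * se)

# Illustrative inputs roughly reproducing OR 4.44 (95%CI 2.23-8.86)
or_, lo, hi = odds_ratio_ci(log(4.44), 0.352)
```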
Sitienei, J; Kipruto, H; Mansour, O; Ndisha, M; Hanson, C; Wambu, R; Addona, V
2015-09-01
In 2012, the World Health Organization estimated that there were 120,000 new cases and 9500 deaths due to tuberculosis (TB) in Kenya. Almost a quarter of the cases were not detected, and the treatment of 4% of notified cases ended in default. To identify the determinants of anti-tuberculosis treatment default. Data from 2012 and 2013 were retrieved from a national case-based electronic data recording system. A comparison was made between new pulmonary TB patients for whom treatment was interrupted vs. those who successfully completed treatment. A total of 106,824 cases were assessed. Human immunodeficiency virus infection was the single most influential risk factor for default (aOR 2.7). More than 94% of patients received family-based directly observed treatment (DOT) and were more likely to default than patients who received DOT from health care workers (aOR 2.0). Caloric nutritional support was associated with lower default rates (aOR 0.89). Males were more likely to default than females (aOR 1.6). Patients cared for in the private sector were less likely to default than those in the public sector (aOR 0.86). Understanding the factors contributing to default can guide future program improvements and serve as a proxy to understanding the factors that constrain access to care among undetected cases.
Uncertainty of tipping elements on risk analysis in hydrology under climate change
NASA Astrophysics Data System (ADS)
Kiguchi, M.; Iseri, Y.; Tawatari, R.; Kanae, S.; Oki, T.
2015-12-01
Risk analysis in this study characterizes the events that could be caused by climate change and estimates their effects on society. In order to characterize climate change risks, events that might be caused by climate change will be investigated, focusing on critical geophysical phenomena such as changes in the thermohaline circulation (THC) of the oceans and the large-scale melting of the Greenland and other ice sheets. The results of numerical experiments with climate models and paleoclimate studies will be referenced in listing these phenomena. The trigger mechanisms, tendency to occur, and relationship of these phenomena to global climate will be clarified. To clarify the relationship between the RCP scenarios and tipping elements, we identified the year in which the tipping elements "Arctic summer sea ice" and "Greenland ice sheet" are triggered, using the increase in global average temperature in 5 GCMs under RCP 2.6, 4.5, 6.0, and 8.5 from Zickfeld et al. (2013) and IPCC (2013), and the tipping point of each element from IPCC (2013). For the "Greenland ice sheet" (whose tipping point takes a value in the range of 1.0°C to 4.0°C), we found that the ice sheet may melt down when the tipping point is at its lowest value of 1.0°C; on the other hand, when the tipping point is set to 4.0°C, it may not melt down except under RCP 8.5. This illustrates the uncertainty of the tipping point itself. In future work, it is necessary to determine how to reflect such uncertainty in hydrological risk analysis under climate change.
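The year-identification step described above reduces to finding the first year a projected warming trajectory crosses a given tipping point. A minimal sketch with a hypothetical warming trajectory (illustrative only, not actual GCM output):

```python
def tipping_year(years, temps, tipping_point):
    """First year the global-mean temperature increase reaches the
    tipping point; None if it is never reached in the series."""
    for year, t in zip(years, temps):
        if t >= tipping_point:
            return year
    return None

# Hypothetical linear warming trajectory, decadal steps (illustrative)
years = list(range(2020, 2101, 10))
temps = [1.2 + 0.02 * (y - 2020) for y in years]
```

Running the same trajectory against the low and high ends of a tipping-point range (e.g. 1.0°C vs. 4.0°C) is what produces the large spread in outcomes the abstract describes.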
Code of Federal Regulations, 2010 CFR
2010-01-01
... diversity between management and ownership as required by § 108.150. (g) SBA remedies for events of default... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Events of default and SBA's... Company's Noncompliance With Terms of Leverage § 108.1810 Events of default and SBA's remedies for NMVC...
Code of Federal Regulations, 2010 CFR
2010-01-01
... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Events of default and SBA's... Noncompliance With Terms of Leverage § 107.1810 Events of default and SBA's remedies for Licensee's... time of their issuance. (b) Automatic events of default. The occurrence of one or more of the events in...
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 15 2010-01-01 2010-01-01 false Events of default and the Secretary's remedies for... With Terms of Leverage § 4290.1810 Events of default and the Secretary's remedies for RBIC's... and as if fully set forth in the Debentures. (b) Automatic events of default. The occurrence of one or...
Student Loan Default: Do Characteristics of Four-Year Institutions Contribute to the Puzzle?
ERIC Educational Resources Information Center
Webber, Karen L.; Rogers, Sharon L.
2010-01-01
College student debt and loan default are growing concerns in the United States. For each U.S. institution, the federal government is now reporting a cohort default rate, which is the percent of students who defaulted on their loan, averaged over a three-year period. Previous studies have amply shown that student characteristics are strongly…
ERIC Educational Resources Information Center
Fraas, Charlotte J.
Congress, over the past decade, has enacted a number of laws with provisions aimed at preventing defaults and improving collections on defaulted student loans. This report presents a synopsis of legislative provisions enacted to combat student loan defaults beginning with the Education Amendments of 1980. The laws included in the report are:…
Multivariate Analysis of Student Loan Defaulters at Texas A&M University
ERIC Educational Resources Information Center
Steiner, Matt; Teszler, Natali
2005-01-01
In an effort to better understand student loan default behavior at Texas A&M University (TAMU), the research staff at TG, at the request of TAMU, conducted a study of the relationship between loan default, on the one hand, and many student and borrower characteristics, on the other hand. The study examines the default behavior of 12,776…
Predicting Student Loan Default for the University of Texas at Austin
ERIC Educational Resources Information Center
Herr, Elizabeth; Burt, Larry
2005-01-01
During spring 2001, Noel-Levitz created a student loan default model for the University of Texas at Austin (UT Austin). The goal of this project was to identify students most likely to default, to identify as risk elements those characteristics that contributed to student loan default, and to use these risk elements to plan and implement targeted,…
ERIC Educational Resources Information Center
Seifert, Charles F.; Wordern, Lorenz
2004-01-01
The cost of student loan defaults is a growing problem. At the beginning of this century, defaulted student loans exceeded $25 billion (Student Aid News, 2001). In addition to the costs borne by the taxpayer as the federal government purchases defaulted accounts, there are costs incurred by schools, lenders, loan servicers, and guaranty agencies for…
The maturing architecture of the brain's default network
Fair, Damien A.; Cohen, Alexander L.; Dosenbach, Nico U. F.; Church, Jessica A.; Miezin, Francis M.; Barch, Deanna M.; Raichle, Marcus E.; Petersen, Steven E.; Schlaggar, Bradley L.
2008-01-01
In recent years, the brain's “default network,” a set of regions characterized by decreased neural activity during goal-oriented tasks, has generated a significant amount of interest, as well as controversy. Much of the discussion has focused on the relationship of these regions to a “default mode” of brain function. In early studies, investigators suggested that, the brain's default mode supports “self-referential” or “introspective” mental activity. Subsequently, regions of the default network have been more specifically related to the “internal narrative,” the “autobiographical self,” “stimulus independent thought,” “mentalizing,” and most recently “self-projection.” However, the extant literature on the function of the default network is limited to adults, i.e., after the system has reached maturity. We hypothesized that further insight into the network's functioning could be achieved by characterizing its development. In the current study, we used resting-state functional connectivity MRI (rs-fcMRI) to characterize the development of the brain's default network. We found that the default regions are only sparsely functionally connected at early school age (7–9 years old); over development, these regions integrate into a cohesive, interconnected network. PMID:18322013
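The resting-state functional connectivity measured above is, at its core, the correlation between regional BOLD time series. A stdlib-only sketch of that basic measure; the study's actual preprocessing and network analysis are of course far more involved:

```python
def pearson_r(x, y):
    """Pearson correlation between two regional BOLD time series --
    the elementary functional-connectivity measure in rs-fcMRI."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5
```

In a connectivity analysis this is computed for every pair of seed regions, and development is then characterized by how those pairwise correlations change with age.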
Gler, M T; Podewils, L J; Munez, N; Galipot, M; Quelapio, M I D; Tupasi, T E
2012-07-01
In the Philippines, programmatic treatment of drug-resistant tuberculosis (TB) was initiated by the Tropical Disease Foundation in 1999 and transitioned to the National TB Program in 2006. To determine patient and socio-demographic characteristics associated with default, and the impact of patient support measures on default. Retrospective cohort analysis of 583 MDR-TB patients treated from 1999 to 2006. A total of 88 (15%) patients defaulted from treatment. The median follow-up time for patients who defaulted was 289 days (range 1-846). In multivariate analysis adjusted for age, sex and previous TB treatment, receiving a greater number of treatment drugs (≥ 5 vs. 2-3 drugs, HR 7.2, 95%CI 3.3-16.0, P < 0.001) was significantly associated with an increased risk of default, while decentralization reduced the risk of default (HR 0.3, 95%CI 0.2-0.7, P < 0.001). Improving access to treatment for MDR-TB through decentralization of care to centers near the patient's residence reduced the risk of default. Further research is needed to evaluate the feasibility, impact and cost-effectiveness of decentralized care models for MDR-TB treatment.
Default risk modeling beyond the first-passage approximation: Extended Black-Cox model
NASA Astrophysics Data System (ADS)
Katz, Yuri A.; Shokhirev, Nikolai V.
2010-07-01
We develop a generalization of the Black-Cox structural model of default risk. The extended model captures uncertainty related to a firm's ability to avoid default even if the company's liabilities momentarily exceed its assets. Diffusion in a linear potential with the radiation boundary condition is used to mimic a company's default process. The exact solution of the corresponding Fokker-Planck equation allows for the derivation of analytical expressions for the cumulative probability of default and the relevant hazard rate. The closed formulas obtained fit the historical data on global corporate defaults well and demonstrate the split behavior of credit spreads for bonds of companies in different categories of speculative-grade ratings with varying time to maturity. Introduction of a finite rate of default at the boundary improves the valuation of credit risk for short time horizons, which is the key advantage of the proposed model. We also consider the influence of uncertainty in the initial distance to the default barrier on the outcome of the model and demonstrate that this additional source of incomplete information may be responsible for nonzero credit spreads on bonds with very short time to maturity.
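For context, the classical Black-Cox first-passage default probability that this model generalizes has a well-known closed form. The sketch below is that standard textbook baseline, not the paper's extended radiation-boundary solution, and the parameter values are illustrative:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def first_passage_default_prob(V0, D, mu, sigma, t):
    """Cumulative default probability in the classical Black-Cox setting:
    default occurs when the GBM asset value V first hits the barrier D
    (D < V0) at or before time t."""
    m = mu - 0.5 * sigma ** 2        # drift of log-assets
    b = log(D / V0)                  # log-distance to the barrier (< 0)
    s = sigma * sqrt(t)
    return (norm_cdf((b - m * t) / s)
            + exp(2.0 * m * b / sigma ** 2) * norm_cdf((b + m * t) / s))
```

The extended model's radiation boundary replaces the "default immediately on first touch" assumption embedded in this formula with a finite default rate at the barrier, which is what lifts short-horizon default probabilities above zero.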
Guerrilla Video: A New Protocol for Producing Classroom Video
ERIC Educational Resources Information Center
Fadde, Peter; Rich, Peter
2010-01-01
Contemporary changes in pedagogy point to the need for a higher level of video production value in most classroom video, replacing the default video protocol of an unattended camera in the back of the classroom. The rich and complex environment of today's classroom can be captured more fully using the higher level, but still easily manageable,…
Relieving Consumer Overindebtedness in South Africa: Policy Reviews and Recommendations
ERIC Educational Resources Information Center
Ssebagala, Ralph Abbey
2017-01-01
A large fraction of South African consumers are highly leveraged, inadequately insured, and/or own little to no assets of value, which increases their exposure not only to idiosyncratic risk but also to severe indebtedness and/or default. This scenario can present negative ramifications that lead well beyond the confines of individual households.…
The Agency's guidance for the derivation of RfD and RfC values calls for the downward adjustment of exposure-response levels observed in animals and/or humans to account for the potentially greater sensitivity of humans as compared to test animals (UFA) and the differential sensit...
JEDI Transmission Line Model | Jobs and Economic Development Impact Models
Reasonable default values are provided; individual projects may vary, and when possible, project-specific data should be used. JEDI Transmission Line Model rel. TL12.23.16. JEDI Transmission Line Model User Reference Guide. Using MS Excel 2007: if macro security is set to "High," set the level to "Medium" or "Low" and then re-open the JEDI worksheet.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Industrial Waste Landfills TT Table TT-1 to Subpart TT of Part 98 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Industrial Waste... for Industrial Waste Landfills Industry/Waste Type DOC(weight fraction, wet basis) k[dry climatea] (yr...
Code of Federal Regulations, 2014 CFR
2014-07-01
... Industrial Waste Landfills TT Table TT-1 to Subpart TT of Part 98 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Industrial Waste... for Industrial Waste Landfills Industry/Waste Type DOC(weight fraction, wet basis) k[dry climatea] (yr...
Code of Federal Regulations, 2010 CFR
2010-07-01
... GREENHOUSE GAS REPORTING Suppliers of Natural Gas and Natural Gas Liquids § 98.408 Definitions. All terms... Heat content and CO2 emission factor: Natural Gas, 1.027 MMBtu/Mscf, 53.02 kg CO2/MMBtu; Propane, 3.836 MMBtu/bbl, 63.02 kg CO2/MMBtu; Normal butane, 4.326 MMBtu/bbl, 64.93 kg CO2/MMBtu... Default CO2 emission value (MT CO2/unit): Natural Gas, per Mscf, 0.054452; Propane, per barrel, 0.241745; Normal...
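The default CO2 emission values in this snippet are simply heat content times the CO2 emission factor, converted from kilograms to metric tons. A quick verification using the figures quoted above (the kg CO2/MMBtu reading of the factor column is inferred from this arithmetic):

```python
def default_co2_mt(heat_content_mmbtu, kg_co2_per_mmbtu):
    """Metric tons of CO2 per unit of fuel: heat content (MMBtu/unit)
    times emission factor (kg CO2/MMBtu), divided by 1000 kg/MT."""
    return heat_content_mmbtu * kg_co2_per_mmbtu / 1000.0

natural_gas = default_co2_mt(1.027, 53.02)  # per Mscf, ~0.054452 MT CO2
propane = default_co2_mt(3.836, 63.02)      # per barrel, ~0.241745 MT CO2
```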
Climate change and future land use in the United States: an economic approach
David Haim; Ralph J. Alig; Andrew J. Plantinga; Brent Sohngen
2011-01-01
An econometric land-use model is used to project regional and national land-use changes in the United States under two IPCC emissions scenarios. The key driver of land-use change in the model is county-level measures of net returns to five major land uses. The net returns are modified for the IPCC scenarios according to assumed trends in population and income and...
Fischer, Helen; Schütte, Stefanie; Depoux, Anneliese; Amelung, Dorothee; Sauerborn, Rainer
2018-04-27
Graphs are prevalent in the reports of the Intergovernmental Panel on Climate Change (IPCC), often depicting key points and major results. However, the popularity of graphs in the IPCC reports contrasts with a neglect of empirical tests of their understandability. Here we put the understandability of three graphs taken from the Health chapter of the Fifth Assessment Report to an empirical test. We present a pilot study where we evaluate objective understanding (mean accuracy in multiple-choice questions) and subjective understanding (self-assessed confidence in accuracy) in a sample of attendees of the United Nations Climate Change Conference in Marrakesh, 2016 (COP22), and a student sample. Results show a mean objective understanding of M = 0.33 for the COP sample, and M = 0.38 for the student sample. Subjective and objective understanding were unrelated for the COP22 sample, but associated for the student sample. These results suggest that (i) understandability of the IPCC health chapter graphs is insufficient, and that (ii) particularly COP22 attendees lacked insight into which graphs they did, and which they did not understand. Implications for the construction of graphs to communicate health impacts of climate change to decision-makers are discussed.
Precipitation extreme changes exceeding moisture content increases in MIROC and IPCC climate models
Sugiyama, Masahiro; Shiogama, Hideo; Emori, Seita
2010-01-01
Precipitation extreme changes are often assumed to scale with, or are constrained by, the change in atmospheric moisture content. Studies have generally confirmed the scaling based on moisture content for the midlatitudes but identified deviations for the tropics. In fact half of the twelve selected Intergovernmental Panel on Climate Change (IPCC) models exhibit increases faster than the climatological-mean precipitable water change for high percentiles of tropical daily precipitation, albeit with significant intermodel scatter. Decomposition of the precipitation extreme changes reveals that the variations among models can be attributed primarily to the differences in the upward velocity. Both the amplitude and vertical profile of vertical motion are found to affect precipitation extremes. A recently proposed scaling that incorporates these dynamical effects can capture the basic features of precipitation changes in both the tropics and midlatitudes. In particular, the increases in tropical precipitation extremes significantly exceed the precipitable water change in Model for Interdisciplinary Research on Climate (MIROC), a coupled general circulation model with the highest resolution among IPCC climate models whose precipitation characteristics have been shown to reasonably match those of observations. The expected intensification of tropical disturbances points to the possibility of precipitation extreme increases beyond the moisture content increase as is found in MIROC and some of IPCC models. PMID:20080720
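The moisture-content constraint discussed above rests on the Clausius-Clapeyron relation, under which saturation vapor pressure rises roughly 6-7% per kelvin of warming. A sketch using the standard Magnus approximation (constants are the common Bolton form; this is the scaling argument's baseline, not the paper's model diagnostics):

```python
from math import exp

def saturation_vapor_pressure(T_c):
    """Saturation vapor pressure (hPa) at temperature T_c (deg C),
    via the Magnus approximation."""
    return 6.112 * exp(17.67 * T_c / (T_c + 243.5))

# Fractional increase in moisture-holding capacity per 1 K of warming
# near 25 deg C -- the Clausius-Clapeyron rate the scaling assumes.
cc_rate = saturation_vapor_pressure(26.0) / saturation_vapor_pressure(25.0) - 1.0
```

Precipitation-extreme changes faster than this rate, as found in the tropics for MIROC, therefore imply a dynamical (vertical-velocity) contribution on top of the thermodynamic one.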
Rafique, Rashad; Fienen, Michael N.; Parkin, Timothy B.; Anex, Robert P.
2013-01-01
DayCent is a biogeochemical model of intermediate complexity widely used to simulate greenhouse gases (GHG), soil organic carbon and nutrients in crop, grassland, forest and savannah ecosystems. Although this model has been applied to a wide range of ecosystems, it is still typically parameterized through a traditional “trial and error” approach and has not been calibrated using statistical inverse modelling (i.e. algorithmic parameter estimation). The aim of this study is to establish and demonstrate a procedure for calibration of DayCent to improve estimation of GHG emissions. We coupled DayCent with the parameter estimation (PEST) software for inverse modelling. The PEST software can be used for calibration through regularized inversion as well as for model sensitivity and uncertainty analysis. The DayCent model was analysed and calibrated using N2O flux data collected over 2 years at the Iowa State University Agronomy and Agricultural Engineering Research Farms, Boone, IA. Crop year 2003 data were used for model calibration and 2004 data were used for validation. The optimization of DayCent model parameters using PEST significantly reduced model residuals relative to the default DayCent parameter values. Parameter estimation improved model performance by reducing the sum of weighted squared residuals between measured and modelled outputs by up to 67%. For the calibration period, simulation with the default model parameter values underestimated the mean daily N2O flux by 98%. After parameter estimation, the model underestimated the mean daily fluxes by 35%. During the validation period, the calibrated model reduced the sum of weighted squared residuals by 20% relative to the default simulation. The sensitivity analysis performed provides important insights into the model structure, offering guidance for model improvement.
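The PEST calibration described above minimizes a weighted sum of squared residuals between measured and modelled outputs. A toy, stdlib-only sketch of that objective with a hypothetical one-parameter flux model; none of this is DayCent's or PEST's actual code:

```python
def weighted_ssr(params, model, observations, weights):
    """Weighted sum of squared residuals -- the objective PEST minimizes."""
    return sum(w * (obs - model(x, params)) ** 2
               for (x, obs), w in zip(observations, weights))

# Hypothetical one-parameter flux model: flux = k * driver (illustrative)
model = lambda x, p: p[0] * x
obs = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]   # (driver, measured flux) pairs
w = [1.0, 1.0, 1.0]                          # observation weights

# Crude grid search over the single parameter k (PEST itself uses
# Gauss-Marquardt-Levenberg iterations with regularization)
best_k = min((k / 100.0 for k in range(0, 500)),
             key=lambda k: weighted_ssr([k], model, obs, w))
```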
Kendall, Emily A.; Theron, Danie; Franke, Molly F.; van Helden, Paul; Victor, Thomas C.; Murray, Megan B.; Warren, Robin M.; Jacobson, Karen R.
2013-01-01
Background Default from multidrug-resistant tuberculosis (MDR-TB) treatment remains a major barrier to cure and epidemic control. We sought to identify patient risk factors for default from MDR-TB treatment and high-risk time periods for default in relation to hospitalization and transition to outpatient care. Methods We retrospectively analyzed a cohort of 225 patients who initiated MDR-TB treatment from 2007 through 2010 at a rural TB hospital in the Western Cape Province, South Africa. Results Fifty percent of patients were cured or completed treatment, 27% defaulted, 14% died, 4% failed treatment, and 5% transferred out. Recent alcohol use was common (63% of patients). In multivariable proportional hazards regression, older age (hazard ratio [HR]= 0.97 [95% confidence interval 0.94-0.99] per year of greater age), formal housing (HR=0.38 [0.19-0.78]), and steady employment (HR=0.41 [0.19-0.90]) were associated with decreased risk of default, while recent alcohol use (HR=2.1 [1.1-4.0]), recent drug use (HR=2.0 [1.0-3.6]), and Coloured (mixed ancestry) ethnicity (HR=2.3 [1.1-5.0]) were associated with increased risk of default (P<0.05). Defaults occurred throughout the first 18 months of the two-year treatment course but were especially frequent among alcohol users after discharge from the initial four-to-five-month in-hospital phase of treatment, with the highest default rates occurring among alcohol users within two months of discharge. Default rates during the first two months after discharge were also elevated for patients who received care from mobile clinics. Conclusions Among patients who were not cured or did not complete MDR-TB treatment, the majority defaulted from treatment. Younger, economically-unstable patients and alcohol and drug users were particularly at risk.
For alcohol users as well as mobile-clinic patients, the early outpatient treatment phase is a high-risk period for default that could be targeted in efforts to increase treatment completion rates. PMID:24349518
Marx, Florian M; Dunbar, Rory; Enarson, Donald A; Beyers, Nulda
2012-01-01
High rates of recurrent tuberculosis after successful treatment have been reported from different high burden settings in Sub-Saharan Africa. However, little is known about the rate of smear-positive tuberculosis after treatment default. In particular, it is not known whether or not treatment defaulters continue to be or become again smear-positive and thus pose a potential for transmission of infection to others. To investigate, in a high tuberculosis burden setting, the rate of re-treatment for smear-positive tuberculosis among cases defaulting from standardized treatment compared to successfully treated cases. Retrospective cohort study among smear-positive tuberculosis cases treated between 1996 and 2008 in two urban communities in Cape Town, South Africa. Episodes of re-treatment for smear-positive tuberculosis were ascertained via probabilistic record linkage. Survival analysis and Poisson regression were used to compare the rate of smear-positive tuberculosis after treatment default to that after successful treatment. A total of 2,136 smear-positive tuberculosis cases were included in the study. After treatment default, the rate of re-treatment for smear-positive tuberculosis was 6.86 (95% confidence interval [CI]: 5.59-8.41) per 100 person-years compared to 2.09 (95% CI: 1.81-2.41) after cure (adjusted Hazard Ratio [aHR]: 3.97; 95% CI: 3.00-5.26). Among defaulters, the rate was inversely associated with treatment duration and sputum conversion prior to defaulting. Smear grade at start of the index treatment episode (Smear3+: aHR 1.61; 95%CI 1.11-2.33) was independently associated with smear-positive tuberculosis re-treatment, regardless of treatment outcome. In this high-burden setting, there is a high rate of subsequent smear-positive tuberculosis after treatment default. Treatment defaulters are therefore likely to contribute to the pool of infectious source cases in the community. 
Our findings underscore the importance of preventing treatment default, as a means of successful tuberculosis control in high-burden settings.
Pinnock, Farena; Parlar, Melissa; Hawco, Colin; Hanford, Lindsay; Hall, Geoffrey B.
2017-01-01
This study assessed whether cortical thickness across the brain and regionally in terms of the default mode, salience, and central executive networks differentiates schizophrenia patients and healthy controls with normal range or below-normal range cognitive performance. Cognitive normality was defined using the MATRICS Consensus Cognitive Battery (MCCB) composite score (T = 50 ± 10) and structural magnetic resonance imaging was used to generate cortical thickness data. Whole brain analysis revealed that cognitively normal range controls (n = 39) had greater cortical thickness than both cognitively normal (n = 17) and below-normal range (n = 49) patients. Cognitively normal controls also demonstrated greater thickness than patients in regions associated with the default mode and salience, but not central executive networks. No differences on any thickness measure were found between cognitively normal range and below-normal range controls (n = 24) or between cognitively normal and below-normal range patients. In addition, structural covariance between network regions was high and similar across subgroups. Positive and negative symptom severity did not correlate with thickness values. Cortical thinning across the brain and regionally in relation to the default and salience networks may index shared aspects of the psychotic psychopathology that defines schizophrenia with no relation to cognitive impairment. PMID:28348889
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, Changhyun; Park, Sungsu; Kim, Daehyun
2015-10-01
The Madden-Julian Oscillation (MJO), the dominant mode of tropical intraseasonal variability, influences weather and climate in the extratropics through atmospheric teleconnection. In this study, two simulations using the Community Atmosphere Model version 5 (CAM5) - one with the default shallow and deep convection schemes and the other with the Unified Convection scheme (UNICON) - are employed to examine the impacts of cumulus parameterizations on the simulation of the boreal wintertime MJO teleconnection in the Northern Hemisphere. We demonstrate that the UNICON substantially improves the MJO teleconnection. When the UNICON is employed, the simulated circulation anomalies associated with the MJO better resemble the observed counterpart, compared to the simulation with the default convection schemes. Quantitatively, the pattern correlation for the 300-hPa geopotential height anomalies between the simulations and observation increases from 0.07 for the default schemes to 0.54 for the UNICON. These circulation anomalies associated with the MJO further help to enhance the surface air temperature and precipitation anomalies over North America, although room for improvement is still evident. Initial value calculations suggest that the realistic MJO teleconnection with the UNICON is not attributed to the changes in the background wind, but primarily to the improved tropical convective heating associated with the MJO.
Why do Patients in Pre-Anti Retroviral Therapy (ART) Care Default: A Cross-Sectional Study.
Chakravarty, Jaya; Kansal, Sangeeta; Tiwary, Narendra; Sundar, Shyam
2016-01-01
Approximately 40% of the patients registered in the National AIDS Control Program in India are not on antiretroviral therapy (ART), i.e., are in pre-ART care. However, there are scarce data regarding the retention of pre-ART patients under routine program conditions. The main objective of this study was to find out the reasons for default among patients in pre-ART care. Patients enrolled in the ART Centre, Banaras Hindu University (BHU) between January and December 2009 and in pre-ART care were included in the study. Defaulters were those pre-ART patients who missed their last scheduled CD4 count appointment by more than 1 month. Defaulters were traced telephonically in 2011, and those who returned and gave their consent for the study were interviewed using a semi-structured questionnaire. Out of 620 patients in pre-ART care, 384 (68.2%) were defaulters. One hundred forty-four of the defaulters were traced, and only 83 reached the ART center for interview. Among defaulters who did not reach the ART center, illiterate and unmarried patients were significantly overrepresented, and the mean time from registration to default was significantly shorter, compared with those who returned for the interview. Most defaulters gave more than one reason for defaulting, as follows: inconvenient clinic timings (98%), need for multiple modes of transport (92%), perceived improved health (65%), distance of the center from home (61%), lack of social support (62%), and financial difficulty (59%). Active tracing of pre-ART patients through outreach and strengthening of the Link ART centers will improve the retention of patients in the program.
The effect of a default-based nudge on the choice of whole wheat bread.
van Kleef, Ellen; Seijdell, Karen; Vingerhoeds, Monique H; de Wijk, René A; van Trijp, Hans C M
2018-02-01
Consumer choices are often influenced by the default option presented. This study examines the effect of whole wheat bread as a default option in a sandwich choice situation. Whole wheat bread consists of 100% whole grain and is healthier than other bread types that are commonly consumed, such as brown or white bread. A pilot survey (N = 291) examined the strength of combinations of toppings and bread type as carrier to select stimuli for the main study. In the main experimental study consisting of a two (bread type) by two (topping type) between-subjects design, participants (N = 226) were given a free sandwich at a university stand with either a relatively unhealthy deep-fried snack (croquette) or a healthy topping. About half of the participants were offered a whole wheat bun unless they asked for a white bun, and the other half were offered a white bun unless they asked for a whole wheat bun. Regardless of the topping, the results show that when the whole wheat bun was the default option, 108 out of 115 participants (94%) decided to stick with this default option. When the default bread offered was white, 89 out of 111 participants (80%) similarly chose to stick with this default. Across conditions, participants felt equally free to make a choice. The attractiveness of and willingness to pay for the sandwich were not affected by the default type of bread. This study demonstrated a strong default effect of bread type. This clearly shows the benefit of steering consumers towards a healthier bread choice, by offering healthier default bread at various locations such as restaurants, schools and work place canteens. Copyright © 2017 Elsevier Ltd. All rights reserved.
The use of Meteonorm weather generator for climate change studies
NASA Astrophysics Data System (ADS)
Remund, J.; Müller, S. C.; Schilter, C.; Rihm, B.
2010-09-01
The global climatological database Meteonorm (www.meteonorm.com) is widely used as meteorological input for simulation of solar applications and buildings. It is a combination of a climate database, a spatial interpolation tool and a stochastic weather generator. In this way, typical years with hourly or one-minute time resolution can be calculated for any site. The input of Meteonorm for global radiation is the Global Energy Balance Archive (GEBA, http://proto-geba.ethz.ch). All other meteorological parameters are taken from databases of WMO and NCDC (periods 1961-90 and 1996-2005). The stochastic generation of global radiation is based on a Markov chain model for daily values and an autoregressive model for hourly and minute values (Aguiar and Collares-Pereira, 1988 and 1992). The generation of temperature is based on global radiation and the measured distribution of daily temperature values at approx. 5000 sites. Meteonorm also generates additional parameters such as precipitation and wind speed, as well as radiation parameters such as diffuse and direct normal irradiance. Meteonorm can also be used for climate change studies. Instead of climate values, the IPCC AR4 results are used as input. From all 18 public models an average has been made at a resolution of 1°. The anomalies of the parameters temperature, precipitation and global radiation for the three scenarios B1, A1B and A2 have been included. With the combination of Meteonorm's current database 1961-90, the interpolation algorithms and the stochastic generation, typical years can be calculated for any site, for different scenarios and for any period between 2010 and 2200. From the analysis of year-to-year and month-to-month variations of temperature, precipitation and global radiation over the past ten years, as well as of climate model forecasts (from the project PRUDENCE, http://prudence.dmi.dk), a simple autoregressive model has been formed which is used to generate realistic monthly time series for future periods.
Meteonorm can therefore be used as a relatively simple method to enhance the spatial and temporal resolution instead of using complicated and time-consuming downscaling methods based on regional climate models. The combination of Meteonorm, gridded historical data (based on the work of Luterbach et al.) and IPCC results has been used for studies of vegetation simulation between 1660 and 2600 (publication of the first version, based on the IS92a scenario and the limited time period 1950-2100: http://www.pbl.nl/images/H5_Part2_van%20CCE_opmaak%28def%29_tcm61-46625.pdf). It is also applicable to other adaptation studies, e.g. for road surfaces or building simulation. In Meteonorm 6.1 one scenario (IS92a) and one climate model (Hadley CM3) have been included. In the new Meteonorm 7 (coming spring 2011) the model averages of the three above-mentioned IPCC AR4 scenarios will be included.
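The Markov-chain/autoregressive generation of daily radiation described above can be sketched as a toy AR(1) process on the daily clearness index. This is an illustrative sketch only: the parameter names and values (`kt_mean`, `phi`, `sigma`) and the clipping bounds are assumptions, not Meteonorm's calibrated transition matrices from Aguiar and Collares-Pereira.

```python
import random

def generate_daily_kt(n_days, kt_mean=0.5, phi=0.7, sigma=0.1, seed=42):
    """Toy first-order autoregressive (Markov) generator for a daily
    clearness-index series. All parameters are illustrative placeholders."""
    rng = random.Random(seed)
    kt = kt_mean
    series = []
    for _ in range(n_days):
        # AR(1): tomorrow's anomaly is a damped copy of today's plus noise
        kt = kt_mean + phi * (kt - kt_mean) + rng.gauss(0.0, sigma)
        kt = min(max(kt, 0.05), 0.85)  # clip to a physically plausible range
        series.append(kt)
    return series
```

A fixed seed makes the generated "typical year" reproducible, which is the usual design choice when synthetic weather files must be regenerated identically.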
The application of defaults to optimize parents' health-based choices for children.
Loeb, Katharine L; Radnitz, Cynthia; Keller, Kathleen; Schwartz, Marlene B; Marcus, Sue; Pierson, Richard N; Shannon, Michael; DeLaurentis, Danielle
2017-06-01
Optimal defaults is a compelling model from behavioral economics and the psychology of human decision-making, designed to shape or "nudge" choices in a positive direction without fundamentally restricting options. The current study aimed to test the effectiveness of optimal (less obesogenic) defaults and parent empowerment priming on health-based decisions with parent-child (ages 3-8) dyads in a community-based setting. Two proof-of-concept experiments (one on breakfast food selections and one on activity choice) were conducted comparing the main and interactive effects of optimal versus suboptimal defaults, and parent empowerment priming versus neutral priming, on parents' health-related choices for their children. We hypothesized that in each experiment, making the default option more optimal will lead to more frequent health-oriented choices, and that priming parents to be the ultimate decision-makers on behalf of their child's health will potentiate this effect. Results show that in both studies, default condition, but not priming condition or the interaction between default and priming, significantly predicted choice (healthier vs. less healthy option). There was also a significant main effect for default condition (and no effect for priming condition or the interaction term) on the quantity of healthier food children consumed in the breakfast experiment. These pilot studies demonstrate that optimal defaults can be practicably implemented to improve parents' food and activity choices for young children. Results can inform policies and practices pertaining to obesogenic environmental factors in school, restaurant, and home environments. Copyright © 2017 Elsevier Ltd. All rights reserved.
Social and Clinical Characteristics of Immigrants with Tuberculosis in South Korea.
Min, Gee Ho; Kim, Young; Lee, Jong Seok; Oh, Jee Youn; Hur, Gyu Young; Lee, Young Seok; Min, Kyung Hoon; Lee, Sung Yong; Kim, Je Hyeong; Shin, Chol; Lee, Seung Heon
2017-05-01
To determine the social and clinical characteristics of immigrants with tuberculosis (TB) in South Korea. The registered adult TB patients who were diagnosed and treated in Korea Medical Centers from January 2013 to December 2015 were analyzed retrospectively. A total of 105 immigrants with TB were compared to 932 native Korean TB patients. Among these 105 immigrants with TB, 86 (82%) were Korean-Chinese. The rate of drug-susceptible TB was lower in the immigrant group than in the native Korean group [odds ratio (OR): 0.46; 95% confidence interval (CI): 0.22-0.96, p=0.035]. The cure rate was higher in the immigrant group than in the native Korean group (OR: 2.03; 95% CI: 1.26-3.28, p=0.003). The treatment completion rate was lower in the immigrant group than in the native Korean group (OR: 0.50; 95% CI: 0.33-0.74, p=0.001). However, the treatment success rate showed no significant difference between the two groups (p=0.141). The lost to follow up (default) rate was higher in the immigrant group than in the native Korean group after adjusting for age and drug resistance (OR: 3.61; 95% CI: 1.36-9.61, p=0.010). There was no difference between defaulters and non-defaulters in clinical characteristics or types of visa among these immigrants (null p value). However, 43 TB patients with recent immigration were diagnosed with TB even though they had been screened as normal at the time of immigration. Efforts to reduce the default rate among immigrants with TB and to reinforce TB screening during the immigration process must be made for TB infection control in South Korea. © Copyright: Yonsei University College of Medicine 2017
NASA Astrophysics Data System (ADS)
Kawecki, Stacey; Steiner, Allison L.
2018-01-01
We examine how aerosol composition affects precipitation intensity using the Weather Research and Forecasting model with Chemistry (version 3.6). By changing the prescribed default hygroscopicity values to updated values from laboratory studies, we test model assumptions about individual component hygroscopicity values of ammonium, sulfate, nitrate, and organic species. We compare a baseline simulation (BASE, using default hygroscopicity values) with four sensitivity simulations (SULF, increasing the sulfate hygroscopicity; ORG, decreasing organic hygroscopicity; SWITCH, using a concentration-dependent hygroscopicity value for ammonium; and ALL, including all three changes) to understand the role of aerosol composition on precipitation during a mesoscale convective system (MCS). Overall, the hygroscopicity changes influence the spatial patterns of precipitation and the intensity. Focusing on the maximum precipitation in the model domain downwind of an urban area, we find that changing the individual component hygroscopicities leads to bulk hygroscopicity changes, especially in the ORG simulation. Reducing bulk hygroscopicity (e.g., ORG simulation) initially causes fewer activated drops, weakened updrafts in the midtroposphere, and increased precipitation from larger hydrometeors. Increasing bulk hygroscopicity (e.g., SULF simulation) simulates more numerous and smaller cloud drops and increases precipitation. In the ALL simulation, a stronger cold pool and downdrafts lead to precipitation suppression later in the MCS evolution. In this downwind region, the combined changes in hygroscopicity (ALL) reduce the overprediction of intense events (>70 mm d-1) and better capture the range of moderate-intensity (30-60 mm d-1) events. The results of this single MCS analysis suggest that aerosol composition can play an important role in simulating high-intensity precipitation events.
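The "bulk hygroscopicity" that the component changes feed into is commonly computed with a volume-weighted mixing rule over component kappa values (in the style of the Petters-Kreidenweis kappa parameterization). A minimal sketch, with illustrative component values that are not the model's defaults or the paper's updated laboratory values:

```python
def bulk_kappa(volume_fractions, kappas):
    """Volume-weighted mixing rule for the aerosol hygroscopicity
    parameter kappa: kappa_bulk = sum_i (v_i * kappa_i).
    Inputs are per-component volume fractions (summing to 1) and kappas."""
    if abs(sum(volume_fractions) - 1.0) > 1e-9:
        raise ValueError("volume fractions must sum to 1")
    return sum(v * k for v, k in zip(volume_fractions, kappas))

# Illustration: lowering the organic kappa in a 50/50 sulfate/organic
# mixture lowers the bulk value, the direction of the ORG experiment.
base = bulk_kappa([0.5, 0.5], [0.6, 0.10])
org = bulk_kappa([0.5, 0.5], [0.6, 0.05])
```

This mirrors the paper's mechanism qualitatively: a single component's hygroscopicity change propagates linearly into the bulk value that drives droplet activation.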
Replacing Fortran Namelists with JSON
NASA Astrophysics Data System (ADS)
Robinson, T. E., Jr.
2017-12-01
Maintaining a log of input parameters for a climate model is very important to understanding potential causes for answer changes during the development stages. Additionally, since modern Fortran is now interoperable with C, a more modern approach to software infrastructure to include code written in C is necessary. Merging these two separate facets of climate modeling requires a quality control for monitoring changes to input parameters and model defaults that can work with both Fortran and C. JSON will soon replace namelists as the preferred key/value pair input in the GFDL model. By adding a JSON parser written in C into the model, the input can be used by all functions and subroutines in the model, errors can be handled by the model instead of by the internal namelist parser, and the values can be output into a single file that is easily parsable by readily available tools. Input JSON files can handle all of the functionality of a namelist while being portable between C and Fortran. Fortran wrappers using unlimited polymorphism are crucial to allow for simple and compact code which avoids the need for many subroutines contained in an interface. Errors can be handled with more detail by providing information about location of syntax errors or typos. The output JSON provides a ground truth for values that the model actually uses by providing not only the values loaded through the input JSON, but also any default values that were not included. This kind of quality control on model input is crucial for maintaining reproducibility and understanding any answer changes resulting from changes in the input.
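The default-merging and error-handling behavior described above can be sketched in a few lines. The parameter names (`dt_atmos`, `do_clouds`, `albedo`) are hypothetical stand-ins, not actual GFDL model inputs, and this is a sketch of the idea rather than the model's C parser:

```python
import json

# Hypothetical defaults; a real model would register many more.
DEFAULTS = {"dt_atmos": 1800, "do_clouds": True, "albedo": 0.3}

def load_config(user_json_text):
    """Merge user-supplied JSON over model defaults and return the
    effective configuration: the 'ground truth' of values actually used,
    including defaults the user never mentioned."""
    user = json.loads(user_json_text)
    unknown = set(user) - set(DEFAULTS)
    if unknown:
        # Unlike a Fortran namelist parser, the model itself can report
        # typos and unknown keys with a precise error.
        raise KeyError(f"unknown parameters: {sorted(unknown)}")
    return {**DEFAULTS, **user}
```

Dumping the merged dictionary with `json.dumps` then gives the single, easily parsable output file the abstract describes, readable from both C and Fortran wrappers.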
Jenkins, H E; Ciobanu, A; Plesca, V; Crudu, V; Galusca, I; Soltan, V; Cohen, T
2013-03-01
The Republic of Moldova, in Eastern Europe, has among the highest reported nationwide proportions of tuberculosis (TB) patients with multidrug-resistant tuberculosis (MDR-TB) worldwide. Default has been associated with increased mortality and amplification of drug resistance, and may contribute to the high MDR-TB rates in Moldova. To assess risk factors and timing of default from treatment for non-MDR-TB from 2007 to 2010. A retrospective analysis of routine surveillance data on all non-MDR-TB patients reported. A total of 14.7% of non-MDR-TB patients defaulted from treatment during the study period. Independent risk factors for default included sociodemographic factors, such as homelessness, living alone, less formal education and spending substantial time outside Moldova in the year prior to diagnosis; and health-related factors such as human immunodeficiency virus co-infection, greater lung pathology and increasing TB drug resistance. Anti-tuberculosis treatment is usually initiated within an institutional setting in Moldova, and the default risk was highest in the month following the phase of hospitalized treatment (among civilians) and after leaving prison (among those diagnosed while incarcerated). Targeted interventions to increase treatment adherence for patients at highest risk of default, and improving the continuity of care for patients transitioning from institutional to community care may substantially reduce risk of default.
Local Risk-Minimization for Defaultable Claims with Recovery Process
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biagini, Francesca, E-mail: biagini@mathematik.uni-muenchen.de; Cretarola, Alessandra, E-mail: alessandra.cretarola@dmi.unipg.it
We study the local risk-minimization approach for defaultable claims with random recovery at default time, seen as payment streams on the random interval [0, τ ∧ T], where T denotes the fixed time horizon. We find the pseudo-locally risk-minimizing strategy in the case when the agent's information takes into account the possibility of a default event (local risk-minimization with G-strategies), and we provide an application in the case of a corporate bond. We also discuss the problem of finding a pseudo-locally risk-minimizing strategy if we suppose the agent obtains her information only by observing the non-defaultable assets.
Summary for Policymakers IPCC Fourth Assessment Report, WorkingGroup III
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barker, Terry; Bashmakov, Igor; Bernstein, Lenny
2007-04-30
A. Introduction 1. The Working Group III contribution to the IPCC Fourth Assessment Report (AR4) focuses on new literature on the scientific, technological, environmental, economic and social aspects of mitigation of climate change, published since the IPCC Third Assessment Report (TAR) and the Special Reports on CO2 Capture and Storage (SRCCS) and on Safeguarding the Ozone Layer and the Global Climate System (SROC). The following summary is organised into six sections after this introduction: - Greenhouse gas (GHG) emission trends, - Mitigation in the short and medium term, across different economic sectors (until 2030), - Mitigation in the long term (beyond 2030), - Policies, measures and instruments to mitigate climate change, - Sustainable development and climate change mitigation, - Gaps in knowledge. References to the corresponding chapter sections are indicated at each paragraph in square brackets. An explanation of terms, acronyms and chemical symbols used in this SPM can be found in the glossary to the main report.
iPcc: a novel feature extraction method for accurate disease class discovery and prediction
Ren, Xianwen; Wang, Yong; Zhang, Xiang-Sun; Jin, Qi
2013-01-01
Gene expression profiling has gradually become a routine procedure for disease diagnosis and classification. In the past decade, many computational methods have been proposed, resulting in great improvements on various levels, including feature selection and algorithms for classification and clustering. In this study, we present iPcc, a novel method from the feature extraction perspective to further propel gene expression profiling technologies from bench to bedside. We define ‘correlation feature space’ for samples based on the gene expression profiles by iterative employment of Pearson’s correlation coefficient. Numerical experiments on both simulated and real gene expression data sets demonstrate that iPcc can greatly highlight the latent patterns underlying noisy gene expression data and thus greatly improve the robustness and accuracy of the algorithms currently available for disease diagnosis and classification based on gene expression profiles. PMID:23761440
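The "correlation feature space" idea described above (iterative application of Pearson's correlation coefficient across samples) can be sketched with NumPy. The stopping rule and any normalisation details of the published iPcc algorithm are not reproduced here; this is only the core iteration:

```python
import numpy as np

def correlation_feature_space(X, n_iter=2):
    """Sketch of the iterative Pearson-correlation idea behind iPcc:
    each pass replaces every sample's feature vector by its vector of
    Pearson correlations with all samples, so the feature space becomes
    sample-by-sample. Iteration count is an assumed parameter."""
    F = np.asarray(X, dtype=float)
    for _ in range(n_iter):
        F = np.corrcoef(F)  # rows = samples; new features = correlations
    return F
```

After one pass the representation is an n-by-n matrix regardless of the original gene count, which is what lets the transformation "highlight latent patterns" while suppressing per-gene noise.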
Santos, M M O; van Elk, A G P; Romanel, C
2015-12-01
Solid waste disposal sites (SWDS) - especially landfills - are a significant source of methane, a greenhouse gas. Although it has the potential to be captured and used as a fuel, most of the methane formed in SWDS is emitted to the atmosphere, mainly in developing countries. Methane emissions have to be estimated in national inventories. To help this task the Intergovernmental Panel on Climate Change (IPCC) has published three sets of guidelines. In addition, the Kyoto Protocol established the Clean Development Mechanism (CDM) to assist the developed countries to offset their own greenhouse gas emissions by assisting other countries to achieve sustainable development while reducing emissions. Based on methodologies provided by the IPCC regarding SWDS, the CDM Executive Board has issued a tool to be used by project developers for estimating baseline methane emissions in their project activities - on burning biogas from landfills or on preventing biomass from being landfilled and so avoiding methane emissions. Some inconsistencies in the first two sets of IPCC guidelines have already been pointed out in an annex of the latest IPCC edition, although with the details somewhat hidden. The CDM tool uses a model for methane estimation that takes on board parameters, factors and assumptions provided in the latest IPCC guidelines, while using as its core equation that of the second IPCC edition, with its shortcoming, as well as allowing a misunderstanding of the time variable. The consequences of wrong ex-ante estimation of baseline emissions in CDM project activities can be economic or environmental. An example of the first type is the overestimation of 18% in an actual project on biogas from landfill in Brazil, which harms its developers; of the second type, the overestimation of 35% in a project preventing municipal solid waste from being landfilled in China, which harms the environment, not through the project per se but through the unduly generated carbon credits.
In a simulated landfill receiving the same amount of waste each year for 20 years, the error would be an overestimation of 25% if the CDM project activity starts from the very first year, or an underestimation of 15% if it starts just after the landfill closure. Therefore, a correction in the tool to calculate emissions from landfills as adopted by the CDM Executive Board is needed. Moreover, in countries not using the latest IPCC guidelines, which provide clear formulas to prevent misunderstandings, inventory compilers can also benefit from this paper by having more accurate results in national GHG inventories related to solid waste disposal, especially when increasing amounts of waste are landfilled, which is the case in the developing countries. Copyright © 2015 Elsevier Ltd. All rights reserved.
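The first-order decay (FOD) idea at the core of the IPCC waste model discussed above can be sketched as follows. The decay constant `k` and the methane yield `ddoc_to_ch4` are illustrative placeholders, not IPCC default values for any waste category or climate, and the bookkeeping is deliberately simplified:

```python
import math

def fod_methane(waste_per_year, k=0.17, ddoc_to_ch4=0.1):
    """Toy first-order decay (FOD) model: each year's deposited waste
    releases its methane potential exponentially over the following
    years. Returns annual CH4 emissions over the deposit horizon."""
    horizon = len(waste_per_year)
    emissions = [0.0] * horizon
    for t_dep, amount in enumerate(waste_per_year):
        potential = amount * ddoc_to_ch4  # total CH4 this deposit can yield
        for t in range(t_dep, horizon):
            # fraction of the potential decaying during year t
            frac = math.exp(-k * (t - t_dep)) - math.exp(-k * (t - t_dep + 1))
            emissions[t] += potential * frac
    return emissions
```

With constant annual deposits, yearly emissions build toward a quasi-steady state, which is why the timing of a project start relative to deposition (first year vs. after closure) changes the over- or underestimation in the way the abstract describes.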
The implications of rebasing global mean temperature timeseries for GCM based climate projections
NASA Astrophysics Data System (ADS)
Stainforth, David; Chapman, Sandra; Watkins, Nicholas
2017-04-01
Global climate and earth system models are assessed by comparison with observations through a number of metrics. The Intergovernmental Panel on Climate Change (IPCC) highlights in particular their ability to reproduce "general features of the global and annual mean surface temperature changes over the historical period" [1,2] and to simulate "a trend in global-mean surface temperature from 1951 to 2012 that agrees with the observed trend" [3]. This focus on annual mean global mean temperature (hereafter GMT) change is presented as an important element in demonstrating the relevance of these models for climate projections. Any new model or new model version whose historic simulations fail to reproduce the "general features" and 20th century trends is therefore likely to undergo further tuning. Thus this focus could have implications for model development. Here we consider a formal interpretation of "general features" and discuss the implications of this approach to model assessment and intercomparison for the interpretation of GCM projections. Following the IPCC, we interpret a major element of "general features" as being the slow timescale response to external forcings. (Shorter timescale behaviour, such as the response to volcanic eruptions, is also an element of "general features" but is not considered here.) Also following the IPCC, we consider only GMT anomalies, i.e. changes with respect to some period. Since the models have absolute temperatures which range over about 3K (roughly observed GMT +/- 1.5K), this means their timeseries (and the observations) are rebased. We present timeseries of the slow timescale response of the CMIP5 models rebased to late-20th century temperatures and to mid-19th century temperatures. We provide a mathematical interpretation of this approach to model assessment and discuss two consequences. First is a separation of scales which limits the degree to which sub-global behaviour can feed back on the global response.
Second, is an implication of linearity in the GMT response (to the extent that the slow-timescale response of the historic simulations is consistent with observations, and given their uncertainties). For each individual model these consequences only apply over the range of absolute temperatures simulated by the model in historic simulations. Taken together, however, they imply consequences over a much wider range of GMTs. The analysis suggests that this aspect of model evaluation risks providing a model development pressure which acts against a wide exploration of physically plausible responses; in particular against an exploration of potentially globally significant nonlinear responses and feedbacks. [1] IPCC, Fifth Assessment Report, Working Group 1, Technical Summary: Stocker et al. 2013. [2] IPCC, Fifth Assessment Report, Working Group 1, Chapter 9 - "Evaluation of Climate Models": Flato et al. 2013. [3] IPCC, Fifth Assessment Report, Working Group 1, Summary for Policy Makers: IPCC, 2013.
34 CFR 668.193 - Loan servicing appeals.
Code of Federal Regulations, 2011 CFR
2011-07-01
... default rate; or (2) Any cohort default rate upon which a loss of eligibility under § 668.187 is based. (b... request for preclaims or default aversion assistance to the guaranty agency; and (ii) Submit a...
34 CFR 668.193 - Loan servicing appeals.
Code of Federal Regulations, 2010 CFR
2010-07-01
... default rate; or (2) Any cohort default rate upon which a loss of eligibility under § 668.187 is based. (b... request for preclaims or default aversion assistance to the guaranty agency; and (ii) Submit a...
Partitioning Residue-derived and Residue-induced Emissions of N2O Using 15N-labelled Crop Residues
NASA Astrophysics Data System (ADS)
Farrell, R. E.; Carverhill, J.; Lemke, R.; Knight, J. D.
2014-12-01
Estimates of N2O emissions in Canada indicate that 17% of all agriculture-based emissions are associated with the decomposition of crop residues. However, research specific to the western Canadian prairies (including Saskatchewan) has shown that the N2O emission factor for N sources in this region typically ranges between 0.2 and 0.6%, which is well below the current IPCC default emission factor of 1.0%. Thus, it stands to reason that emissions from crop residues should also be lower than those calculated using the current IPCC emission factor. Current data indicate that residue decomposition, N mineralization and N2O production are affected by a number of factors such as the C:N ratio and chemical composition of the residue, soil type, and soil water content; thus, a bench-scale incubation study was conducted to examine the effects of soil type and water content on N2O emissions associated with the decomposition of different crop residues. The study was carried out using soils from the Black, Dark Brown, Brown, and Gray soil zones and was conducted at both 50% and 70% water-filled pore space (WFPS); the soils were amended with 15N-labeled residues of wheat, pea, canola, and flax, or with an equivalent amount of 15N-labeled urea; 15N2O production was monitored using a Picarro G5101-i isotopic N2O analyzer. Crop residue additions to the soils resulted in both direct and indirect emissions of N2O, with residue-derived emissions (RDE; measured as 15N2O) generally exceeding residue-induced emissions (RIE) at 50% WFPS, with RDEs ranging from 42% to 88% (mean = 58%) of the total N2O. Conversely, at 70% WFPS, RDEs were generally lower than RIEs, ranging from 21% to 83% (mean = 48%).
Whereas both water content and soil type had an impact on N2O production, there was a clear and consistent trend in the emission factors for the residues: emissions were always greatest for the canola residue, lowest for the wheat residue and urea fertilizer, and intermediate for pea and flax. Results of this research demonstrate that, under the right environmental conditions, there is considerable potential for both direct and indirect N2O emissions during crop residue decomposition. Moreover, emission factors for the various crop residues tended to increase in the order: wheat ≤ urea < pea < flax << canola.
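The partitioning used in the study above, where residue-derived emissions (RDE) are measured directly as 15N2O from the labelled residue and residue-induced emissions (RIE) are obtained by difference, reduces to simple arithmetic; a minimal sketch (all numbers are illustrative, not data from the study):

```python
def partition_n2o(total_n2o, n2o_15n):
    """Split a total N2O flux into residue-derived emissions (RDE,
    measured directly as 15N2O from the labelled residue) and
    residue-induced emissions (RIE, the unlabelled remainder)."""
    rde = n2o_15n
    rie = total_n2o - n2o_15n
    return rde, rie

def emission_factor(n2o_n_emitted, n_applied):
    """N2O emission factor as a percentage of the N applied
    (the quantity compared against the IPCC default of 1.0%)."""
    return 100.0 * n2o_n_emitted / n_applied

# Illustrative values only (not measurements from the study):
rde, rie = partition_n2o(total_n2o=1.0, n2o_15n=0.58)
print(rde / (rde + rie))                       # residue-derived share of total N2O
print(emission_factor(n2o_n_emitted=0.4, n_applied=100.0))  # percent of applied N
```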
ERIC Educational Resources Information Center
Blanchette, Cornelia M.
This report examines the effectiveness of recent federal government efforts through amendments to the Higher Education Act (1993) to reduce student loan defaults. Key measures to curb defaults had been to make schools with high student loan default rates ineligible for federal student loan programs. However, many institutions have challenged…
Computer Center CDC Libraries/NSRDC (Subprograms).
1981-02-01
TRANSFORM," Comm. of the ACM, Vol. 10, No. 10, October 1967. 3. System/360 Scientific Subroutine Package, IBM Technical Publications Department, 1967...VARIABLE 3) UP TO 9 DEPENDENT VARIABLES PER PLOT. FUNCTIONAL CATEGORIES: J5 LANGUAGE: FORTRAN IV USAGE COMMON /PLO/ NRUN, NPLOT, ITP(6), ITY(6), ITX(6)...PLO/ NRUN - NUMBER OF THIS RUN (DEFAULT: 1) NPLOT - NUMBER OF PLOT (DEFAULT: 1) ITP - PAGE TITLE (DEFAULT: BLANK) ITY - Y TITLE (DEFAULT: BLANK) ITX - X
The brain's default network: origins and implications for the study of psychosis.
Buckner, Randy L
2013-09-01
The brain's default network is a set of regions that is spontaneously active during passive moments. The network is also active during directed tasks that require participants to remember past events or imagine upcoming events. One hypothesis is that the network facilitates construction of mental models (simulations) that can be used adaptively in many contexts. Extensive research has considered whether disruption of the default network may contribute to disease. While an intriguing possibility, a specific challenge to this notion is the fact that it is difficult to accurately measure the default network in patients where confounds of head motion and compliance are prominent. Nonetheless, some intriguing recent findings suggest that dysfunctional interactions between frontoparietal control systems and the default network contribute to psychosis. Psychosis may be a network disturbance that manifests as disordered thought partly because it disrupts the fragile balance between the default network and competing brain systems.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-12
... Communications Systems Group, Inc. (ACS) for Alaska, and using the default value of ``1'' for the regional cost adjustment for the U.S. Virgin Islands, which has the effect of increasing labor costs. Lastly, the Bureau... Puerto Rico Telephone Company, Inc. (PRTC) and Virgin Islands Telephone Corporation d/b/a Innovative...
Combining uncertainty factors in deriving human exposure levels of noncarcinogenic toxicants.
Kodell, R L; Gaylor, D W
1999-01-01
Acceptable levels of human exposure to noncarcinogenic toxicants in environmental and occupational settings generally are derived by reducing experimental no-observed-adverse-effect levels (NOAELs) or benchmark doses (BDs) by a product of uncertainty factors (Barnes and Dourson, Ref. 1). These factors are presumed to ensure safety by accounting for uncertainty in dose extrapolation, uncertainty in duration extrapolation, differential sensitivity between humans and animals, and differential sensitivity among humans. The common default value for each uncertainty factor is 10. This paper shows how estimates of means and standard deviations of the approximately log-normal distributions of individual uncertainty factors can be used to estimate percentiles of the distribution of the product of uncertainty factors. An appropriately selected upper percentile, for example, 95th or 99th, of the distribution of the product can be used as a combined uncertainty factor to replace the conventional product of default factors.
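The combination rule described above can be sketched in a few lines: if each uncertainty factor is approximately lognormal, the product of independent factors is itself lognormal, with log-scale mean equal to the sum of the individual log-means and log-scale variance equal to the sum of the individual log-variances, so an upper percentile is available in closed form. The parameter values below are hypothetical, not estimates from the paper:

```python
import math

def combined_uf_percentile(mus, sigmas, z=1.645):
    """Upper percentile of the product of independent lognormal
    uncertainty factors. On the log scale the product has mean
    sum(mus) and variance sum(sigma_i^2); z=1.645 gives the 95th
    percentile, z=2.326 the 99th."""
    mu = sum(mus)
    var = sum(s * s for s in sigmas)
    return math.exp(mu + z * math.sqrt(var))

# Hypothetical log-scale parameters for four uncertainty factors
# (dose, duration, interspecies, intraspecies); not the paper's values:
mus = [math.log(3.0)] * 4     # each factor has a median of 3
sigmas = [0.8] * 4
combined = combined_uf_percentile(mus, sigmas)
print(combined)  # compare against the conventional default product 10*10*10*10
```

With these illustrative parameters, the 95th percentile of the product falls well below the conventional default product of 10,000, which is the paper's motivation for replacing stacked default factors with a percentile of the combined distribution.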
2011-01-01
Background Successful treatment of tuberculosis (TB) involves taking anti-tuberculosis drugs for at least six months. Poor adherence to treatment means patients remain infectious for longer, are more likely to relapse or succumb to tuberculosis, and may experience treatment failure as well as foster the emergence of drug-resistant tuberculosis. Kenya is among the countries with a high tuberculosis burden globally. The purpose of this study was to determine how long tuberculosis patients stay in treatment before defaulting and the factors associated with default in Nairobi. Methods This was a case-control study: cases were patients who defaulted from treatment and controls were patients who completed the treatment course between January 2006 and March 2008. All 945 defaulters and 1033 randomly selected controls from among 5659 patients who completed the treatment course in 30 high-volume sites were enrolled. Secondary data were collected using a facility questionnaire. From among those enrolled, 120 cases and 154 controls were randomly selected and interviewed to obtain primary data not routinely collected. Data were analyzed using SPSS and Epi Info statistical software. Univariate and multivariate logistic regression analyses were used to determine associations, and the Kaplan-Meier method was used to estimate the probability of staying in treatment over time. Results Of the 945 defaulters, 22.7% (215) and 20.4% (193) abandoned treatment within the first and second months (intensive phase) of treatment, respectively. Among the 120 defaulters interviewed, 16.7% (20) attributed their default to ignorance, 12.5% (15) to traveling away from the treatment site, 11.7% (14) to feeling better and 10.8% (13) to side-effects.
On multivariate analysis, inadequate knowledge of tuberculosis (OR 8.67; 95% CI 1.47-51.3), herbal medication use (OR 5.7; 95% CI 1.37-23.7), low income (OR 5.57; 95% CI 1.07-30.0), alcohol abuse (OR 4.97; 95% CI 1.56-15.9), previous default (OR 2.33; 95% CI 1.16-4.68), co-infection with human immunodeficiency virus (HIV) (OR 1.56; 95% CI 1.25-1.94) and male gender (OR 1.43; 95% CI 1.15-1.78) were independently associated with default. Conclusion The rate of defaulting was highest during the initial two months, the intensive phase of treatment. Defaulting patients cited multiple reasons for abandoning treatment, and several factors were independently associated with default. Enhanced pre-treatment counseling and patient education about TB are recommended. PMID:21906291
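The odds ratios reported in case-control studies like the ones above are, in their unadjusted form, simple functions of a 2x2 exposure table; a generic sketch with made-up counts (the adjusted ORs in these abstracts come from multivariate logistic regression, which this does not reproduce):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and Wald confidence interval from a
    2x2 case-control table:
        a = exposed cases      b = unexposed cases
        c = exposed controls   d = unexposed controls"""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    lower = math.exp(math.log(or_) - z * se_log)
    upper = math.exp(math.log(or_) + z * se_log)
    return or_, lower, upper

# Made-up counts for illustration only:
or_, lower, upper = odds_ratio_ci(a=30, b=90, c=20, d=134)
print(f"OR {or_:.2f}; 95% CI {lower:.2f}-{upper:.2f}")
```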
Predictors and mortality associated with treatment default in pulmonary tuberculosis.
Kliiman, K; Altraja, A
2010-04-01
To identify risk factors for default from pulmonary tuberculosis (TB) treatment and to assess mortality associated with default in Estonia. All patients with culture-confirmed pulmonary TB who started treatment during 2003-2005 were included in a retrospective cohort study. In 1107 eligible patients, the treatment success rate was 81.5% and the default rate 9.4% (respectively 60.4% and 17.0% in multidrug-resistant TB [MDR-TB]). Independent predictors of treatment default were alcohol abuse (OR 3.22, 95%CI 1.93-5.38), unemployment (OR 3.05, 95%CI 1.84-5.03), MDR-TB (OR 2.17, 95%CI 1.35-3.50), urban residence (OR 1.85, 95%CI 1.00-3.42) and previous incarceration (OR 1.78, 95%CI 1.05-3.03). Of the defaulters, 29.4% died during follow-up (median survival 342.0 days). Cox regression analysis revealed that unemployment was associated with all-cause and TB-related mortality among defaulters (respectively HR 4.58, 95%CI 1.05-20.1 and HR 11.2, 95%CI 1.58-80.2). HIV infection (HR 51.2, 95%CI 6.06-432), sputum smear positivity (HR 9.59, 95%CI 1.79-51.4), MDR-TB (HR 8.56, 95%CI 1.81-40.4) and previous TB (HR 5.15, 95%CI 1.64-16.2) were predictors of TB-related mortality. The main risk factors for treatment default can be influenced. Interventions to reduce default should therefore concentrate on socially disadvantaged patients and prevention of alcohol abuse, with special attention given to MDR-TB patients.
Evaluation and improvement of wastewater treatment plant performance using BioWin
NASA Astrophysics Data System (ADS)
Oleyiblo, Oloche James; Cao, Jiashun; Feng, Qian; Wang, Gan; Xue, Zhaoxia; Fang, Fang
2015-03-01
In this study, the activated sludge model implemented in the BioWin® software was validated against full-scale wastewater treatment plant data. Only two stoichiometric parameters, Y_P/acetic and the heterotrophic yield Y_H, required calibration. A value of 0.42 was used for Y_P/acetic in this study, whereas the BioWin® default is 0.49; the calibrated value is comparable with the defaults for the corresponding parameter (yield of phosphorus release to substrate uptake) used in ASM2, ASM2d, and ASM3P. Three scenarios were evaluated to improve the performance of the wastewater treatment plant: wasting sludge from either the aeration tank or the secondary clarifier, constructing a new oxidation ditch, and constructing an equalization tank. The results suggest that construction of a new oxidation ditch or an equalization tank for the wastewater treatment plant is not necessary. However, sludge should be wasted from the aeration tank during wet weather to reduce the solids loading of the clarifiers and avoid effluent violations. It is therefore recommended that the design of wastewater treatment plants (WWTPs) include the flexibility to operate the plants in various modes. This helps in selecting the appropriate operating mode when necessary, resulting in substantial reductions in operating costs.
Ronald Raunikar; Joseph Buongiorno; James A. Turner; Shushuai Zhu
2010-01-01
The Global Forest Products Model (GFPM) was modified to link the forest sector to two scenarios of the Intergovernmental Panel on Climate Change (IPCC), and to represent the utilization of fuelwood and industrial roundwood to produce biofuels. The scenarios examined were a subset of the "story lines" prepared by the IPCC. Each scenario has projections of population and...
National-Level Multi-Hazard Risk Assessments in Sub-Saharan Africa
NASA Astrophysics Data System (ADS)
Murnane, R. J.; Balog, S.; Fraser, S. A.; Jongman, B.; Van Ledden, M.; Phillips, E.; Simpson, A.
2017-12-01
National-level risk assessments can provide important baseline information for decision-making on risk management and risk financing strategies. In this study, multi-hazard risk assessments were undertaken for 9 countries in Sub-Saharan Africa: Cape Verde, Ethiopia, Kenya, Niger, Malawi, Mali, Mozambique, Senegal and Uganda. The assessment was part of the Building Disaster Resilience in Sub-Saharan Africa Program and aimed at supporting the development of multi-risk financing strategies to help African countries make informed decisions to mitigate the socio-economic, fiscal and financial impacts of disasters. The assessments considered hazards and exposures consistent with the years 2010 and 2050. We worked with multiple firms to develop the hazard, exposure and vulnerability data and the risk results. The hazards include: coastal flood, drought, earthquake, landslide, riverine flood, tropical cyclone wind and storm surge, and volcanoes. For hazards expected to vary with climate, the 2050 hazard is based on the IPCC RCP 6.0. Geolocated exposure data for 2010 and 2050 at a 15 arc second (~0.5 km) resolution includes: structures as a function of seven development patterns; transportation networks including roads, bridges, tunnels and rail; critical facilities such as schools, hospitals, energy facilities and government buildings; crops; population; and, gross domestic product (GDP). The 2050 exposure values for population are based on the IPCC SSP 2. Values for other exposure data are a function of population change. Vulnerability was based on openly available vulnerability functions. Losses were based on replacement values (e.g., cost/m2 or cost/km). Risk results are provided in terms of annual average loss and a variety of return periods at the national and Admin 1 levels. Assessments of recent historical events are used to validate the model results. In the future, it would be useful to use hazard footprints of historical events for validation purposes.
The results will be visualized in a set of national risk profile documents intended to form the basis for conversations with governments on risk reduction and risk financing strategies.
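The headline risk metrics mentioned above, annual average loss and return-period losses, can be computed from an event-loss table; a minimal sketch with invented rates and losses (not values from the assessments):

```python
def annual_average_loss(events):
    """AAL: sum over modeled events of annual occurrence rate times loss."""
    return sum(rate * loss for rate, loss in events)

def return_period_loss(events, return_period):
    """Smallest loss whose annual exceedance rate reaches 1/return_period,
    read off an event-loss table sorted from largest to smallest loss."""
    target = 1.0 / return_period
    exceedance = 0.0
    for rate, loss in sorted(events, key=lambda rl: -rl[1]):
        exceedance += rate
        if exceedance >= target:
            return loss
    return 0.0  # losses at this return period fall below the smallest modeled event

# Invented event set: (annual occurrence rate, loss in USD millions)
events = [(0.002, 500.0), (0.008, 200.0), (0.04, 50.0), (0.2, 5.0)]
print(annual_average_loss(events))      # expected loss per year
print(return_period_loss(events, 100))  # loss exceeded on average once in 100 yr
```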
Assessing modelled spatial distributions of ice water path using satellite data
NASA Astrophysics Data System (ADS)
Eliasson, S.; Buehler, S. A.; Milz, M.; Eriksson, P.; John, V. O.
2010-05-01
The climate models used in the IPCC AR4 show large differences in monthly mean cloud ice. The most valuable source of information that can be used to potentially constrain the models is global satellite data. For this, the data sets must be long enough to capture the inter-annual variability of Ice Water Path (IWP). PATMOS-x was used together with ISCCP for the annual cycle evaluation in Fig. 7 while ECHAM-5 was used for the correlation with other models in Table 3. A clear distinction between ice categories in satellite retrievals, as desired from a model point of view, is currently impossible. However, long-term satellite data sets may still be used to indicate the climatology of IWP spatial distribution. We evaluated satellite data sets from CloudSat, PATMOS-x, ISCCP, MODIS and MSPPS in terms of monthly mean IWP, to determine which data sets can be used to evaluate the climate models. IWP data from CloudSat cloud profiling radar provides the most advanced data set on clouds. As CloudSat data are too short to evaluate the model data directly, it was mainly used here to evaluate IWP from the other satellite data sets. ISCCP and MSPPS were shown to have comparatively low IWP values. ISCCP shows particularly low values in the tropics, while MSPPS has particularly low values outside the tropics. MODIS and PATMOS-x were in closest agreement with CloudSat in terms of magnitude and spatial distribution, with MODIS being the best of the two. As PATMOS-x extends over more than 25 years and is in fairly close agreement with CloudSat, it was chosen as the reference data set for the model evaluation. In general there are large discrepancies between the individual climate models, and all of the models show problems in reproducing the observed spatial distribution of cloud-ice. Comparisons consistently showed that ECHAM-5 is the GCM from IPCC AR4 closest to satellite observations.
Pearce, Warren; Holmberg, Kim; Hellsten, Iina; Nerlich, Brigitte
2014-01-01
In September 2013 the Intergovernmental Panel on Climate Change published its Working Group 1 report, the first comprehensive assessment of physical climate science in six years, constituting a critical event in the societal debate about climate change. This paper analyses the nature of this debate in one public forum: Twitter. Using statistical methods, tweets were analyzed to discover the hashtags used when people tweeted about the IPCC report, and how Twitter users formed communities around their conversational connections. In short, the paper presents the topics and tweeters at this particular moment in the climate debate. The most used hashtags related to themes of science, geographical location and social issues connected to climate change. Particularly noteworthy were tweets connected to Australian politics, US politics, geoengineering and fracking. Three communities of Twitter users were identified. Researcher coding of Twitter users showed how these varied according to geographical location and whether users were supportive, unsupportive or neutral in their tweets about the IPCC. Overall, users were most likely to converse with users holding similar views. However, qualitative analysis suggested the emergence of a community of Twitter users, predominantly based in the UK, where greater interaction between contrasting views took place. This analysis also illustrated the presence of a campaign by the non-governmental organization Avaaz, aimed at increasing media coverage of the IPCC report. PMID:24718388
Sanchez-Padilla, E; Marquer, C; Kalon, S; Qayyum, S; Hayrapetyan, A; Varaine, F; Bastard, M; Bonnet, M
2014-02-01
Armenia, a country with a high prevalence of drug-resistant tuberculosis (DR-TB). To identify factors related to default from DR-TB treatment in Yerevan. Using a retrospective cohort design, we compared defaulters with patients who were cured, completed or failed treatment. Patients who initiated DR-TB treatment from 2005 to 2011 were included in the study. A qualitative survey was conducted including semi-structured interviews with defaulters and focus group discussions with care providers. Of 381 patients, 193 had achieved treatment success, 24 had died, 51 had failed treatment and 97 had defaulted. The number of drugs to which the patient was resistant at admission (aRR 1.16, 95%CI 1.05-1.27), the rate of treatment interruption based on patient's decision (aRR 1.03, 95%CI 1.02-1.05), the rate of side effects (aRR 1.18, 95%CI 1.09-1.27), and absence of culture conversion during the intensive phase (aRR 0.47, 95%CI 0.31-0.71) were independently associated with default from treatment. In the qualitative study, poor treatment tolerance, a perception that treatment was inefficient, lack of information, incorrect perception of being cured, working factors and behavioural problems were factors related to treatment default. In addition to economic reasons, poor tolerance of and poor response to treatment were the main factors associated with treatment default.
Meditation leads to reduced default mode network activity beyond an active task
Garrison, Kathleen A.; Zeffiro, Thomas A.; Scheinost, Dustin; Constable, R. Todd; Brewer, Judson A.
2015-01-01
Meditation has been associated with relatively reduced activity in the default mode network, a brain network implicated in self-related thinking and mind wandering. However, previous imaging studies have typically compared meditation to rest despite other studies reporting differences in brain activation patterns between meditators and controls at rest. Moreover, rest is associated with a range of brain activation patterns across individuals that has only recently begun to be better characterized. Therefore, this study compared meditation to another active cognitive task, both to replicate findings that meditation is associated with relatively reduced default mode network activity, and to extend these findings by testing whether default mode activity was reduced during meditation beyond the typical reductions observed during effortful tasks. In addition, prior studies have used small groups, whereas the current study tested these hypotheses in a larger group. Results indicate that meditation is associated with reduced activations in the default mode network relative to an active task in meditators compared to controls. Regions of the default mode showing a group by task interaction include the posterior cingulate/precuneus and anterior cingulate cortex. These findings replicate and extend prior work indicating that suppression of default mode processing may represent a central neural process in long-term meditation, and suggest that meditation leads to relatively reduced default mode processing beyond that observed during another active cognitive task. PMID:25904238
29 CFR 4219.32 - Interest on overdue, defaulted and overpaid withdrawal liability.
Code of Federal Regulations, 2010 CFR
2010-07-01
..., as reported by the Board of Governors of the Federal Reserve System in Statistical Release H.15... default, the date of the missed payment that gave rise to the delinquency or the default. (e) Date paid...
Kizub, D; Ghali, I; Sabouni, R; Bourkadi, J E; Bennani, K; El Aouad, R; Dooley, K E
2012-09-01
In Morocco, tuberculosis (TB) treatment default is increasing in some urban areas. To provide a detailed description of factors that contribute to patient default and solutions from the point of view of health care professionals who participate in TB care. In-depth interviews were conducted with 62 physicians and nurses at nine regional public pulmonary clinics and local health clinics. Participants had a median of 24 years of experience in health care. Treatment default was seen as a result of multilevel factors related to the patient (lack of means, being a migrant worker, distance to treatment site, poor understanding of treatment, drug use, mental illness), medical team (high patient load, low motivation, lack of resources for tracking defaulters), treatment organization (poor communication between treatment sites, no systematic strategy for patient education or tracking, incomplete record keeping), and health care system and society. Tailored recommendations for low- and higher-cost interventions are provided. Interventions to enhance TB treatment completion should take into account the local context and multilevel factors that contribute to default. Qualitative studies involving health care workers directly involved in TB care can be powerful tools to identify contributing factors and define strategies to help reduce treatment default.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 15 2010-01-01 2010-01-01 false Defaults. 2201.33 Section 2201.33 Agriculture Regulations of the Department of Agriculture (Continued) LOCAL TELEVISION LOAN GUARANTEE BOARD LOCAL TELEVISION LOAN GUARANTEE PROGRAM-PROGRAM REGULATIONS Loan Guarantees § 2201.33 Defaults. (a) In determining...
NASA Astrophysics Data System (ADS)
Seneviratne, S. I.; Nicholls, N.; Easterling, D.; Goodess, C. M.; Kanae, S.; Kossin, J.; Luo, Y.; Marengo, J.; McInnes, K.; Rahimi, M.; Reichstein, M.; Sorteberg, A.; Vera, C.; Zhang, X.
2012-04-01
In April 2009, the Intergovernmental Panel on Climate Change (IPCC) decided to prepare a new special report with involvement of the UN International Strategy for Disaster Reduction (ISDR) on the topic "Managing the Risks of Extreme Events and Disasters to Advance Climate Change Adaptation" (SREX, http://ipcc-wg2.gov/SREX/). This special report reviews the scientific literature on past and projected changes in weather and climate extremes, and the relevance of such changes to disaster risk reduction and climate change adaptation. The SREX Summary for Policymakers was approved at an IPCC Plenary session on November 14-18, 2011, and the full report is planned for release in February 2012. This presentation will provide an overview on the structure and contents of the SREX, focusing on Chapter 3: "Changes in climate extremes and their impacts on the natural physical environment" [1]. It will in particular present the main findings of the chapter, including differences between the SREX's conclusions and those of the IPCC Fourth Assessment of 2007, and the implications of this new assessment for disaster risk reduction. Finally, aspects relevant to impacts on the biogeochemical cycles will also be addressed. [1] Seneviratne, S.I., N. Nicholls, D. Easterling, C.M. Goodess, S. Kanae, J. Kossin, Y. Luo, J. Marengo, K. McInnes, M. Rahimi, M. Reichstein, A. Sorteberg, C. Vera, and X. Zhang, 2012: Changes in climate extremes and their impacts on the natural physical environment. In: Intergovernmental Panel on Climate Change Special Report on Managing the Risks of Extreme Events and Disasters to Advance Climate Change Adaptation [Field, C. B., Barros, V., Stocker, T.F., Qin, D., Dokken, D., Ebi, K.L., Mastrandrea, M. D., Mach, K. J., Plattner, G.-K., Allen, S. K., Tignor, M. and P. M. Midgley (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA
Karanjekar, Richa V; Bhatt, Arpita; Altouqui, Said; Jangikhatoonabad, Neda; Durai, Vennila; Sattler, Melanie L; Hossain, M D Sahadat; Chen, Victoria
2015-12-01
Accurately estimating landfill methane emissions is important for quantifying a landfill's greenhouse gas emissions and power generation potential. Current models, including LandGEM and IPCC, often greatly simplify the treatment of factors like rainfall and ambient temperature, which can substantially impact gas production. The newly developed Capturing Landfill Emissions for Energy Needs (CLEEN) model aims to improve landfill methane generation estimates while still requiring inputs that are fairly easy to obtain: waste composition, annual rainfall, and ambient temperature. To develop the model, methane generation was measured from 27 laboratory-scale landfill reactors with varying waste compositions (ranging from 0% to 100%), average rainfall rates of 2, 6, and 12 mm/day, and temperatures of 20, 30, and 37°C, according to a statistical experimental design. Refuse components considered were the major biodegradable wastes (food, paper, yard/wood, and textile) as well as inert inorganic waste. Based on the data collected, a multiple linear regression equation (R² = 0.75) was developed to predict first-order methane generation rate constant values k as functions of waste composition, annual rainfall, and temperature. Because laboratory methane generation rates exceed field rates, a second scale-up regression equation for k was developed using actual gas-recovery data from 11 landfills in high-income countries with conventional operation. The CLEEN model was developed by incorporating both regression equations into the first-order decay based model for estimating methane generation rates from landfills. CLEEN model values were compared to actual field data from 6 US landfills, and to estimates from LandGEM and IPCC. For 4 of the 6 cases, CLEEN model estimates were the closest to the actual values.
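LandGEM, the IPCC waste model, and CLEEN all rest on a first-order decay equation for methane generation; a generic sketch of that core calculation (the rate constant k and methane potential L0 below are illustrative placeholders, not the paper's regression-derived values):

```python
import math

def methane_generation(annual_waste, k, L0, year):
    """First-order decay methane generation rate (m3 CH4/yr):
        Q(year) = sum_i k * L0 * M_i * exp(-k * t_i)
    where M_i is the waste mass (Mg) placed in year i and t_i is its age.
    This is the core form shared by LandGEM, the IPCC model, and CLEEN;
    CLEEN's contribution is predicting k by regression from waste
    composition, rainfall, and temperature."""
    q = 0.0
    for placement_year, mass in annual_waste.items():
        age = year - placement_year
        if age >= 0:
            q += k * L0 * mass * math.exp(-k * age)
    return q

# Illustrative inputs (placeholders): 10,000 Mg/yr placed during
# 2000-2004, k = 0.05 /yr, L0 = 100 m3 CH4 per Mg of waste.
waste = {y: 10_000.0 for y in range(2000, 2005)}
print(methane_generation(waste, k=0.05, L0=100.0, year=2005))
```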
Compound Extremes and Bunched Black (or Grouped Grey) Swans
NASA Astrophysics Data System (ADS)
Watkins, N. W.
2014-12-01
Observed "wild" natural fluctuations may differ substantially in their character. Some events may be genuinely unforeseen (and unforeseeable), as with Taleb's "black swans". These may occur singly, or may have their impact further magnified by being "bunched" in time. Some of the others may, however, be the rare extreme events from a light-tailed underlying distribution. Studying their occurrence may then be tractable with the methods of extreme value theory [e.g. Coles, 2001], suitably adapted to allow correlation if that is observed to be present. Yet others may belong to a third broad class, described in today's presentation [reviewed in Watkins, GRL Frontiers, 2013, doi:10.1002/grl.50103]. Such "bursty" time series may show comparatively frequent high-amplitude events, and/or long-range correlations between successive values. The frequent large values due to the first of these effects, modelled in economics by Mandelbrot in 1963 using heavy-tailed probability distributions, can give rise to an "IPCC type I" burst composed of successive wild events. Conversely, long-range dependence, even in a light-tailed Gaussian model like Mandelbrot and van Ness' fractional Brownian motion, can integrate "mild" events into an extreme "IPCC type III" burst. I will show how a standard statistical time series model, linear fractional stable motion (LFSM), which descends from the two special cases advocated by Mandelbrot, allows these two effects to be varied independently, and will present results from a preliminary study of such bursts in LFSM. The consequences for burst scaling when low-frequency effects due to dissipation (FARIMA models) and multiplicative cascades (such as multifractals) are included will also be discussed, along with the physical assumptions and constraints associated with making a given choice of model.
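The two burst mechanisms contrasted above, occasional wild values from heavy tails versus accumulation of individually mild but correlated values, can be illustrated with two toy series. This is not LFSM itself, just its two limiting ingredients, and the AR(1) process used here is only a short-memory stand-in for genuine long-range dependence:

```python
import random

rng = random.Random(0)
n = 10_000

# Ingredient 1: heavy-tailed, independent increments (Pareto, tail
# index 1.5): a burst is dominated by rare single wild values.
heavy = [rng.paretovariate(1.5) for _ in range(n)]

# Ingredient 2: light-tailed but strongly autocorrelated increments
# (AR(1) with coefficient 0.95): a burst is built by many individually
# mild values accumulating over time.
mild = [0.0]
for _ in range(n - 1):
    mild.append(0.95 * mild[-1] + rng.gauss(0.0, 1.0))

# The heavy-tailed series reaches far larger single values, even though
# both series have comparable typical scales.
print(max(heavy), max(abs(x) for x in mild))
```

In LFSM proper, the tail index and the memory parameter can be tuned independently, and genuinely long-range-dependent increments (fractional Gaussian noise) would replace the AR(1) stand-in here.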
Ezechi, Oliver Chukwujekwu; Petterson, Karen Odberg; Gbajabiamila, Titilola A; Idigbe, Ifeoma Eugenia; Kuyoro, Olutunmike; Ujah, Innocent Achaya Otobo; Ostergren, Per Olof
2014-03-31
Increasing evidence is emerging from South East Asia and southern and east Africa on the burden of default from follow-up care after a positive cervical cancer screening/diagnosis, which impacts negatively on cervical cancer prevention and control. Unfortunately, little or no information exists on the subject in the West African sub-region. This study was designed to determine the proportion of, predictors of, and reasons for default from follow-up care after a positive cervical cancer screen. Women who screened positive at community cervical cancer screening using direct visual inspection were followed up to determine the proportion of default and associated factors. Multivariate logistic regression was used to determine independent predictors of default. One hundred and eight (16.1%) of the 673 women screened positive on direct visual inspection and were enrolled into the study. Fifty-one (47.2%) of the 108 women who screened positive defaulted from the follow-up appointment. Women who were poorly educated (OR: 3.1, CI: 2.0-5.2), lived more than 10 km from the clinic (OR: 2.0, CI: 1.0-4.1), or had never been screened for cervical cancer before (OR: 3.5, CI: 3.1-8.4) were more likely to default from follow-up after screening positive for a precancerous lesion of the cervix. The main reasons for default were the cost of transportation (48.6%) and time constraints (25.7%). The rate of default was high (47.2%) as a result of unaffordable transportation costs and limited time to keep the scheduled appointment. A change from the present strategy, which involves multiple visits, to a "see and treat" strategy in which both testing and treatment are performed at a single visit is recommended.
Sarangi, S S; Dutt, D
2014-07-01
In India in 2010, 14.1% of retreatment TB patients had a treatment outcome of 'default'. Since 2002, in Paschim Midnapur District (West Bengal), this figure has been around 15-20%. The aim was to determine the timing, characteristics and risk factors associated with default among retreatment TB patients on DOTS. This was a case-control study conducted in six TB units (TUs) of Paschim Midnapur District, selected by simple random sampling. Data were collected from treatment records of the TUs/DTC, and through interviews using the same pre-tested semi-structured questionnaire with 87 defaulters and 86 consecutively registered non-defaulters registered from the first quarter of 2009 to the second quarter of 2010. The median duration of treatment taken before default was 121 days (inter-quartile range 64-176 days). The median number of doses taken before default was 36 (inter-quartile range 26-63 doses). No retrieval action was documented in 57.5% of cases; retrieval was done within 0-7 days of missed doses in 29.9% of cases. Multiple logistic regression analysis indicated the following important risk factors for default (95% confidence intervals): male sex [aOR 3.957 (1.162-13.469)], alcohol inebriation [aOR 6.076 (2.088-17.675)], distance from the DOT centre [aOR 4.066 (1.675-9.872)], number of missed doses during treatment [aOR 1.849 (1.282-2.669)] and no initial home visit [aOR 10.607 (2.286-49.221)]. In Paschim Midnapur district, default among retreatment TB patients occurs mostly after a few doses in the continuation phase. Initial home visits, patient-provider meetings, retrieval action and community-based treatment as per RNTCP guidelines are required to strengthen the programme.
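The adjusted odds ratios (aORs) with 95% confidence intervals reported in the study above are standard outputs of multiple logistic regression. As a hedged illustration (not the authors' actual computation), a fitted coefficient β and its standard error SE convert to an odds ratio and Wald confidence interval as exp(β) and exp(β ± 1.96·SE); the coefficient and SE below are back-calculated for illustration only:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Convert a logistic-regression coefficient and its standard
    error into an odds ratio with a Wald 95% confidence interval."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Illustrative only: a coefficient of 1.376 with SE 0.625 reproduces
# the scale of the aOR reported for male sex above (~3.96, CI ~1.16-13.5).
or_, lo, hi = odds_ratio_ci(1.376, 0.625)
```

The same transformation applies to each covariate in the model; the interval is symmetric on the log-odds scale, which is why the reported CIs look skewed on the odds-ratio scale.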
Morality constrains the default representation of what is possible.
Phillips, Jonathan; Cushman, Fiery
2017-05-02
The capacity for representing and reasoning over sets of possibilities, or modal cognition, supports diverse kinds of high-level judgments: causal reasoning, moral judgment, language comprehension, and more. Prior research on modal cognition asks how humans explicitly and deliberatively reason about what is possible but has not investigated whether or how people have a default, implicit representation of which events are possible. We present three studies that characterize the role of implicit representations of possibility in cognition. Collectively, these studies differentiate explicit reasoning about possibilities from default implicit representations, demonstrate that human adults often default to treating immoral and irrational events as impossible, and provide a case study of high-level cognitive judgments relying on default implicit representations of possibility rather than explicit deliberation.
34 CFR 668.217 - Default prevention plans.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 34 Education 3 2010-07-01 2010-07-01 false Default prevention plans. 668.217 Section 668.217 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF POSTSECONDARY EDUCATION, DEPARTMENT OF EDUCATION STUDENT ASSISTANCE GENERAL PROVISIONS Cohort Default Rates § 668.217...
24 CFR 985.109 - Default under the Annual Contributions Contract (ACC).
Code of Federal Regulations, 2011 CFR
2011-04-01
... Contributions Contract (ACC). 985.109 Section 985.109 Housing and Urban Development REGULATIONS RELATING TO... § 985.109 Default under the Annual Contributions Contract (ACC). HUD may determine that a PHA's failure... required by HUD constitutes a default under the ACC. ...
29 CFR 2570.5 - Consequences of default.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 9 2010-07-01 2010-07-01 false Consequences of default. 2570.5 Section 2570.5 Labor Regulations Relating to Labor (Continued) EMPLOYEE BENEFITS SECURITY ADMINISTRATION, DEPARTMENT OF LABOR... ERISA Section 502(i) § 2570.5 Consequences of default. For prohibited transaction penalty proceedings...
CSIR Contribution to Defining Adaptive Capacity in the Context of Environmental Change
2015-06-30
Contract number: W911NF-14-1-0113. The metrics were used to identify areas of vulnerability within the Mississippi River basin and Nile River basin region. The IPCC ... The opportunities for improving the communities' adaptive capacity (in accordance with the IPCC framework) relate to the reduction of the hazard
Hydrogeological Controls on Regional-Scale Indirect Nitrous Oxide Emission Factors for Rivers.
Cooper, Richard J; Wexler, Sarah K; Adams, Christopher A; Hiscock, Kevin M
2017-09-19
Indirect nitrous oxide (N₂O) emissions from rivers are currently derived using poorly constrained default IPCC emission factors (EF5r), which yield unreliable flux estimates. Here, we demonstrate how hydrogeological conditions can be used to develop more refined regional-scale EF5r estimates required for compiling accurate national greenhouse gas inventories. Focusing on three UK river catchments with contrasting bedrock and superficial geologies, N₂O and nitrate (NO₃⁻) concentrations were analyzed in 651 river water samples collected from 2011 to 2013. Unconfined Cretaceous Chalk bedrock regions yielded the highest median N₂O-N concentration (3.0 μg L⁻¹), EF5r (0.00036), and N₂O-N flux (10.8 kg ha⁻¹ a⁻¹). Conversely, regions of bedrock confined by glacial deposits yielded significantly lower median N₂O-N concentration (0.8 μg L⁻¹), EF5r (0.00016), and N₂O-N flux (2.6 kg ha⁻¹ a⁻¹), regardless of bedrock type. Bedrock permeability is an important control in regions where groundwater is unconfined, with the high N₂O yield from high-permeability chalk contrasting with significantly lower median N₂O-N concentration (0.7 μg L⁻¹), EF5r (0.00020), and N₂O-N flux (2.0 kg ha⁻¹ a⁻¹) on lower-permeability unconfined Jurassic mudstone. The evidence presented here demonstrates that EF5r can be differentiated by hydrogeological conditions and can thus provide a valuable proxy for generating improved regional-scale N₂O emission estimates.
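Under the IPCC indirect-emission methodology referenced above, EF5r is (as commonly applied in this literature) the mass ratio of dissolved N₂O-N to NO₃⁻-N in river water. A minimal sketch, using illustrative concentrations rather than the paper's underlying data:

```python
def ef5r(n2o_n_ug_per_l, no3_n_ug_per_l):
    """Indirect river emission factor: mass ratio of dissolved
    N2O-N to NO3-N (dimensionless), per the usual EF5r definition."""
    return n2o_n_ug_per_l / no3_n_ug_per_l

# Illustrative sample only: 3.0 ug/L N2O-N against 8333 ug/L NO3-N
# gives EF5r ~ 0.00036, the same order as the chalk-catchment median
# reported above and well below the widely cited 2006 IPCC default
# of 0.0025 (an assumption here, not stated in the abstract).
ef = ef5r(3.0, 8333.0)
```

Because the factor is a simple concentration ratio, regional medians of EF5r can be tabulated per hydrogeological class and multiplied by nitrate leaching estimates to obtain the flux figures quoted in the abstract.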
26 CFR 20.2041-1 - Powers of appointment; in general.
Code of Federal Regulations, 2013 CFR
2013-04-01
... Chapter 11. For example, if a trust created by S provides for payment of the income to A for life with... income to A's widow, W, for her life and for payment of the remainder to A's estate, the value of A's... to A for life, then to W for life, with power in A to appoint the remainder by will and in default of...
40 CFR 1066.610 - Dilution air background correction.
Code of Federal Regulations, 2014 CFR
2014-07-01
.... a = atomic hydrogen-to-carbon ratio of the test fuel. You may measure a or use default values from Table 1 of 40 CFR 1065.655. b = atomic oxygen-to-carbon ratio of the test fuel. You may measure b or use.... ER28AP14.100 Where: xCO2 = amount of CO2 measured in the sample over the test interval. xNMHC = amount of...
40 CFR Table Nn-2 to Subpart Nn of... - Default Values for Calculation Methodology 2 of This Subpart
Code of Federal Regulations, 2014 CFR
2014-07-01
... PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Suppliers of Natural Gas and Natural Gas Liquids Pt. 98, Subpt. NN, Table NN-2 Table NN-2 to Subpart NN of Part 98... (/Unit) 1: Natural gas, Mscf, 0.0544; Propane, barrel, 0.241; Normal butane, barrel, 0.281; Ethane, barrel, 0.170...
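The flattened Table NN-2 record above pairs each product with a unit and a default CO₂ factor; the truncated header suggests the factors are per listed unit (metric tons CO₂/Unit is an assumption here, as the units column is cut off in the record). A hedged sketch of how a supplier might apply such defaults:

```python
# Factors transcribed from the record above; units-per-factor are an
# assumption (the table header is truncated in the source record).
NN2_DEFAULTS = {
    "natural_gas":   ("Mscf",   0.0544),
    "propane":       ("barrel", 0.241),
    "normal_butane": ("barrel", 0.281),
    "ethane":        ("barrel", 0.170),
}

def supplier_co2(quantities):
    """Sum CO2 over supplied products using the Table NN-2 default
    factors. `quantities` maps product -> amount in its listed unit."""
    return sum(qty * NN2_DEFAULTS[prod][1] for prod, qty in quantities.items())

# Illustrative: 1000 Mscf of natural gas plus 100 barrels of propane.
total = supplier_co2({"natural_gas": 1000, "propane": 100})
```

This is the "Calculation Methodology 2" pattern the table title refers to: quantity supplied times a per-unit default factor, summed over products.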
Kimeu, Muthusi; Burmen, Barbara; Audi, Beryl; Adega, Anne; Owuor, Karen; Arodi, Susan; Bii, Dennis; Zielinski-Gutiérrez, Emily
2016-01-01
This retrospective cohort analysis was conducted to describe the association between adherence to clinic appointments and mortality one year after enrollment into HIV care. We examined appointment adherence for patients newly enrolled between January 2011 and December 2012 at a regional referral hospital in western Kenya. The outcomes of interest were patient default, risk factors for repeat default, and year-one risk of death. Of 582 enrolled patients, 258 (44%) were defaulters. Generalized estimating equations (GEE) revealed that patients who had defaulted once were significantly more likely to default repeatedly (OR 1.4; 95% CI 1.12-1.77), especially the unemployed (OR 1.43; 95% CI 1.07-1.91), smokers (OR 2.22; 95% CI 1.31-3.76), and those with no known disclosure (OR 2.17; 95% CI 1.42-3.3). Nineteen patients (3%) died during the follow-up period. Cox proportional hazards regression revealed that the risk of death was significantly higher among defaulters (HR 3.12; 95% CI 1.2-8.0) and increased in proportion to the rate of patient default; the HR was 4.05 (95% CI 1.38-11.81) and 4.98 (95% CI 1.45-17.09) for a cumulative 4-60 days and ≥60 days, respectively, elapsed between all scheduled and actual clinic appointment dates. The risk factors for repeat default suggest a need to deliver targeted adherence programs.
Risk factors for treatment default among adult tuberculosis patients in Indonesia.
Rutherford, M E; Hill, P C; Maharani, W; Sampurno, H; Ruslami, R
2013-10-01
Defaulting from anti-tuberculosis treatment hinders tuberculosis (TB) control. The objective was to identify potential defaulters. We conducted a cohort study in newly diagnosed Indonesian TB patients. We administered a questionnaire, prospectively identified defaulters (treatment discontinued for ≥2 weeks) and assessed risk factors using Cox regression. Of 249 patients, 39 (16%) defaulted, 61% within the first 2 months. Default was associated with liver disease (HR 3.40, 95%CI 1.02-11.78), chest pain (HR 2.25, 95%CI 1.06-4.77), night sweats (HR 1.98, 95%CI 1.03-3.79), characteristics of the head of the household (self-employed, HR 2.47, 95%CI 1.15-5.34; patient's mother, HR 7.72, 95%CI 1.66-35.88), household wealth (HR 4.24, 95%CI 1.12-16.09), walking to the clinic (HR 4.53, 95%CI 1.39-14.71), being unaccompanied at diagnosis (HR 30.49, 95%CI 7.55-123.07) or when collecting medication (HR 3.34, 95%CI 1.24-8.98), and a low level of satisfaction with the clinic (HR 3.85, 95%CI 1.17-12.62) or doctors (HR 2.45, 95%CI 1.18-5.10). Health insurance (HR 0.24, 95%CI 0.07-0.74) and paying for diagnosis (HR 0.14, 95%CI 0.04-0.48) were protective. Defaulting is common and occurs early. Interventions that improve clinic services, strengthen patient support and increase insurance coverage may reduce default in Indonesia.
Who are the patients that default tuberculosis treatment? - space matters!
Nunes, C; Duarte, R; Veiga, A M; Taylor, B
2017-04-01
The goals of this article are: (i) to understand how individual characteristics affect the likelihood of patients defaulting from their pulmonary tuberculosis (PTB) treatment regimens; (ii) to quantify the predictive capacity of these risk factors; and (iii) to quantify and map spatial variation in the risk of defaulting. We used logistic regression models and generalized additive models with a spatial component to determine the odds of default across continental Portugal. We focused on new PTB cases diagnosed between 2000 and 2013, and included individual information (sex, age, residence area, alcohol abuse, intravenous drug use, homelessness, HIV, imprisonment status). We found that the global default rate was 4.88%, higher in individuals with well-known risk profiles (males, immigrants, HIV-positive, homeless, prisoners, alcohol and drug users). Of specific epidemiological interest, our geographical analysis found that Portugal's main urban areas (the two biggest cities) and one tourist region have higher default rates than the rest of the country, after adjusting for the previously mentioned risk factors. The challenge of treatment defaulting, whether due to other non-measured individual characteristics, healthcare system failure or patient recalcitrance, requires further analysis in the spatio-temporal domain. Our findings suggest the presence of significant within-country variation in the risk of defaulting that cannot be explained by these classical individual risk factors alone. The methods we advocate are simple to implement and could easily be applied to other diseases.
Meditation leads to reduced default mode network activity beyond an active task.
Garrison, Kathleen A; Zeffiro, Thomas A; Scheinost, Dustin; Constable, R Todd; Brewer, Judson A
2015-09-01
Meditation has been associated with relatively reduced activity in the default mode network, a brain network implicated in self-related thinking and mind wandering. However, previous imaging studies have typically compared meditation to rest, despite other studies having reported differences in brain activation patterns between meditators and controls at rest. Moreover, rest is associated with a range of brain activation patterns across individuals that has only recently begun to be better characterized. Therefore, in this study we compared meditation to another active cognitive task, both to replicate the findings that meditation is associated with relatively reduced default mode network activity and to extend these findings by testing whether default mode activity was reduced during meditation, beyond the typical reductions observed during effortful tasks. In addition, prior studies had used small groups, whereas in the present study we tested these hypotheses in a larger group. The results indicated that meditation is associated with reduced activations in the default mode network, relative to an active task, for meditators as compared to controls. Regions of the default mode network showing a Group × Task interaction included the posterior cingulate/precuneus and anterior cingulate cortex. These findings replicate and extend prior work indicating that the suppression of default mode processing may represent a central neural process in long-term meditation, and they suggest that meditation leads to relatively reduced default mode processing beyond that observed during another active cognitive task.
Mothersill, Omar; Tangney, Noreen; Morris, Derek W; McCarthy, Hazel; Frodl, Thomas; Gill, Michael; Corvin, Aiden; Donohoe, Gary
2017-06-01
Resting-state functional magnetic resonance imaging (rs-fMRI) has repeatedly shown evidence of altered functional connectivity of large-scale networks in schizophrenia. The relationship between these connectivity changes and behaviour (e.g. symptoms, neuropsychological performance) remains unclear. Functional connectivity in 27 patients with schizophrenia or schizoaffective disorder and 25 age- and gender-matched healthy controls was examined using rs-fMRI. Based on seed regions from previous studies, we examined functional connectivity of the default, cognitive control, affective and attention networks. Effects of symptom severity and theory of mind performance on functional connectivity were also examined. Patients showed increased connectivity between key nodes of the default network, including the precuneus and medial prefrontal cortex, compared to controls (p<0.01, FWE-corrected). Increasing positive symptoms and increasing theory of mind performance were both associated with altered connectivity of default regions within the patient group (p<0.01, FWE-corrected). This study confirms previous findings of default hyper-connectivity in schizophrenia spectrum patients and reveals an association between altered default connectivity and positive symptom severity. As a novel finding, this study also shows that default connectivity is correlated with and predictive of theory of mind performance. Extending these findings by examining the effects of emerging social cognition treatments on both default connectivity and theory of mind performance is now an important goal for research. Copyright © 2016 Elsevier B.V. All rights reserved.
Morais, Sérgio Alberto; Delerue-Matos, Cristina; Gabarrell, Xavier
2013-03-15
In life cycle impact assessment (LCIA) models, the sorption of the ionic fraction of dissociating organic chemicals is not adequately modeled because conventional non-polar partitioning models are applied. Therefore, high uncertainties are expected when modeling the mobility, as well as the bioavailability for uptake by exposed biota and degradation, of dissociating organic chemicals. Alternative regressions that account for the ionized fraction of a molecule to estimate fate parameters were applied to the USEtox model. The most sensitive model parameters in the estimation of ecotoxicological characterization factors (CFs) of micropollutants were evaluated by Monte Carlo analysis in both the default USEtox model and the alternative approach. Negligible differences in CF values and 95% confidence limits between the two approaches were estimated for direct emissions to the freshwater compartment; however, the default USEtox model overestimates the CFs and 95% confidence limits of basic compounds by up to three and four orders of magnitude, respectively, relative to the alternative approach for emissions to the agricultural soil compartment. For three emission scenarios, LCIA results show that the default USEtox model overestimates freshwater ecotoxicity impacts for the emission scenarios to agricultural soil by one order of magnitude, with larger confidence limits, relative to the alternative approach. Copyright © 2013 Elsevier B.V. All rights reserved.
The Co-evolution of Climate Models and the Intergovernmental Panel on Climate Change
NASA Astrophysics Data System (ADS)
Somerville, R. C.
2010-12-01
As recently as the 1950s, global climate models, or GCMs, did not exist, and the notion that man-made carbon dioxide might lead to significant climate change was not regarded as a serious possibility by most experts. Today, of course, the prospect or threat of exactly this type of climate change dominates the science and ranks among the most pressing issues confronting all mankind. Indeed, the prevailing scientific view throughout the first half of the twentieth century was that adding carbon dioxide to the atmosphere would have only a negligible effect on climate. The science of climate change caused by atmospheric carbon dioxide changes has thus undergone a genuine revolution. An extraordinarily rapid development of global climate models has also characterized this period, especially in the three decades since about 1980. In these three decades, the number of GCMs has greatly increased, and their physical and computational aspects have both markedly improved. Modeling progress has been enabled by many scientific advances, of course, but especially by a massive increase in available computer power, with supercomputer speeds increasing by roughly a factor of a million in the three decades from about 1980 to 2010. This technological advance has permitted a rapid increase in the physical comprehensiveness of GCMs as well as in spatial computational resolution. In short, GCMs have dramatically evolved over time, in exactly the same recent period as popular interest and scientific concern about anthropogenic climate change have markedly increased. In parallel, a unique international organization, the Intergovernmental Panel on Climate Change, or IPCC, has also recently come into being and also evolved rapidly. Today, the IPCC has become widely respected and globally influential. The IPCC was founded in 1988, and its history is thus even shorter than that of GCMs. 
Yet, its stature today is such that a series of IPCC reports assessing climate change science has already been endorsed by many leading scientific professional societies and academies of science worldwide. These reports are considered as definitive summaries of the state of the science. In 2007, in recognition of its exceptional accomplishments, the IPCC shared the Nobel Peace Prize equally with Al Gore. The present era is characterized not only by the reality and seriousness of human-caused climate change, but also by a young yet powerful science that enables us to understand much about the climate change that has occurred already and that awaits in the future. The development of GCMs is a critical part of the scientific story, and the development of the IPCC is a key factor in connecting the science to the perceptions and priorities of the global public and policymakers. GCMs and the IPCC have co-evolved and strongly influenced one another, as both scientists and the world at large have worked to confront the challenge of climate change.
24 CFR 907.7 - Remedies for substantial default.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 24 Housing and Urban Development 4 2011-04-01 2011-04-01 false Remedies for substantial default... URBAN DEVELOPMENT SUBSTANTIAL DEFAULT BY A PUBLIC HOUSING AGENCY § 907.7 Remedies for substantial... staff; or (3) Provide assistance deemed necessary, in the discretion of HUD, to remedy emergency...
7 CFR 1779.75 - Defaults by borrower.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 12 2010-01-01 2010-01-01 false Defaults by borrower. 1779.75 Section 1779.75 Agriculture Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE (CONTINUED) WATER AND WASTE DISPOSAL PROGRAMS GUARANTEED LOANS § 1779.75 Defaults by borrower. (a...
10 CFR 609.15 - Default, demand, payment, and collateral liquidation.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 4 2011-01-01 2011-01-01 false Default, demand, payment, and collateral liquidation. 609.15 Section 609.15 Energy DEPARTMENT OF ENERGY (CONTINUED) ASSISTANCE REGULATIONS LOAN GUARANTEES FOR PROJECTS THAT EMPLOY INNOVATIVE TECHNOLOGIES § 609.15 Default, demand, payment, and collateral liquidation...
24 CFR 266.515 - Record retention.
Code of Federal Regulations, 2010 CFR
2010-04-01
... FINANCE AGENCY RISK-SHARING PROGRAM FOR INSURED AFFORDABLE MULTIFAMILY PROJECT LOANS Project Management... insurance remains in force. (b) Defaults and claims. Records pertaining to a mortgage default and claim must be retained from the date of default through final settlement of the claim for a period of no less...
45 CFR 672.10 - Default order.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Welfare Regulations Relating to Public Welfare (Continued) NATIONAL SCIENCE FOUNDATION ENFORCEMENT AND..., an admission of all facts alleged in the complaint and a waiver of respondent's right to a hearing on... with the Hearing Clerk. (c) Contents of a default order. A default order shall include findings of fact...
47 CFR 51.707 - Default proxies for incumbent LECs' transport and termination rates.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 3 2011-10-01 2011-10-01 false Default proxies for incumbent LECs' transport... (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) INTERCONNECTION Reciprocal Compensation for Transport and Termination of Telecommunications Traffic § 51.707 Default proxies for incumbent LECs' transport and...