Sample records for reference case scenario

  1. Assessing Health Impacts of Pictorial Health Warning Labels on Cigarette Packs in Korea Using DYNAMO-HIA

    PubMed Central

    2017-01-01

    Objectives This study aimed to predict the 10-year impacts of the introduction of pictorial warning labels (PWLs) on cigarette packaging in 2016 in Korea for adults using DYNAMO-HIA. Methods In total, four scenarios were constructed to better understand the potential health impacts of PWLs: two for PWLs and two for a hypothetical cigarette tax increase. For both policies, an optimistic and a conservative scenario were constructed. The reference scenario assumed the 2015 smoking rate would remain the same. Demographic and epidemiological data were obtained from various sources. Differences in predicted smoking prevalence and in the prevalence, incidence, and mortality of diseases were compared between the reference scenario and the four policy scenarios. Results It was predicted that the optimistic PWLs scenario (PWO) would lower the smoking rate by 4.79% in males and 0.66% in females compared to the reference scenario in 2017. However, the impact on the reduction of the smoking rate was expected to diminish over time. PWO would prevent 85 238 cases of diabetes, 67 948 of chronic obstructive pulmonary disease, 31 526 of ischemic heart disease, 21 036 of lung cancer, and 3972 prevalent cases of oral cancer in total over the 10-year span due to the reductions in smoking prevalence. The impacts of PWO are expected to lie between those of the optimistic and conservative cigarette tax increase scenarios. The results were sensitive to the transition probabilities of smoking status. Conclusions The introduction of PWLs in 2016 in Korea is expected to reduce smoking prevalence and disease cases over the next 10 years, but regular replacement of PWLs is needed for persistent impacts. PMID:28768403
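    DYNAMO-HIA projects smoking prevalence forward using transition probabilities between smoking states, to which the abstract notes the results are sensitive. A minimal two-state sketch of that mechanism follows; the rates are purely illustrative, not the study's inputs.

```python
# Minimal sketch of a DYNAMO-HIA-style prevalence projection:
# each year, current smokers quit with probability `quit_rate` and
# non-smokers start with probability `start_rate`. Illustrative only.

def project_prevalence(p0, start_rate, quit_rate, years):
    """Project smoking prevalence forward `years` years."""
    p = p0
    history = [p]
    for _ in range(years):
        p = p * (1 - quit_rate) + (1 - p) * start_rate
        history.append(p)
    return history

# Reference scenario: transitions balance, so prevalence stays flat.
ref = project_prevalence(0.40, start_rate=0.02, quit_rate=0.03, years=10)

# Policy scenario: warning labels raise the quit rate.
pol = project_prevalence(0.40, start_rate=0.02, quit_rate=0.05, years=10)

gap = ref[-1] - pol[-1]   # prevalence reduction attributable to the policy
```

    With these illustrative rates the reference prevalence sits exactly at its equilibrium (start_rate / (start_rate + quit_rate) = 0.40) and stays flat, mirroring the assumption that the 2015 smoking rate remains the same, while the higher quit rate pulls the policy scenario below it.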

  2. Assessing Health Impacts of Pictorial Health Warning Labels on Cigarette Packs in Korea Using DYNAMO-HIA.

    PubMed

    Kang, Eunjeong

    2017-07-01

    This study aimed to predict the 10-year impacts of the introduction of pictorial warning labels (PWLs) on cigarette packaging in 2016 in Korea for adults using DYNAMO-HIA. In total, four scenarios were constructed to better understand the potential health impacts of PWLs: two for PWLs and two for a hypothetical cigarette tax increase. For both policies, an optimistic and a conservative scenario were constructed. The reference scenario assumed the 2015 smoking rate would remain the same. Demographic and epidemiological data were obtained from various sources. Differences in predicted smoking prevalence and in the prevalence, incidence, and mortality of diseases were compared between the reference scenario and the four policy scenarios. It was predicted that the optimistic PWLs scenario (PWO) would lower the smoking rate by 4.79% in males and 0.66% in females compared to the reference scenario in 2017. However, the impact on the reduction of the smoking rate was expected to diminish over time. PWO would prevent 85 238 cases of diabetes, 67 948 of chronic obstructive pulmonary disease, 31 526 of ischemic heart disease, 21 036 of lung cancer, and 3972 prevalent cases of oral cancer in total over the 10-year span due to the reductions in smoking prevalence. The impacts of PWO are expected to lie between those of the optimistic and conservative cigarette tax increase scenarios. The results were sensitive to the transition probabilities of smoking status. The introduction of PWLs in 2016 in Korea is expected to reduce smoking prevalence and disease cases over the next 10 years, but regular replacement of PWLs is needed for persistent impacts.

  3. Responding to cough presentations: an interview study with Cambodian pharmacies participating in a National Tuberculosis Referral Program.

    PubMed

    Bell, Carolyn A; Pichenda, Koeut; Ilomäki, Jenni; Duncan, Gregory J; Eang, Mao Tan; Saini, Bandana

    2016-04-01

    Asia-Pacific carries a high burden of respiratory-related mortality. Timely referral and detection of tuberculosis cases optimizes patient and public health outcomes. Registered private pharmacies in Cambodia participate in a National Tuberculosis Referral Program to refer clients with cough suggestive of tuberculosis to public sector clinics for diagnosis and care. The objective of this study was to investigate the clinical intentions of pharmacy staff when presented with a hypothetical case of a client with prolonged cough suggestive of tuberculosis. A random sample of 180 pharmacies was selected. Trained interviewers administered a hypothetical case scenario to trained pharmacy staff. Participants provided 'yes'/'no' responses to five clinical actions presented in the scenario. Actions were not mutually exclusive. Data were tabulated and compared using chi-square tests or Fisher's exact tests. Overall, 156 (92%) participants would have referred the symptomatic client in the case scenario. Participants who would have referred the client were less likely to sell a cough medicine (42% vs. 100%, P < 0.001) and less likely to sell an antibiotic (19% vs. 79%, P < 0.001) than those who would not have referred the client. Involving pharmacies in a Referral Program may have introduced concepts of appropriate clinical care when responding to clients presenting with cough suggestive of tuberculosis. However, the results showed that enhancing clinical competence among all referral programme participants, particularly non-referring pharmacies and those making concurrent sales of cough-related products, would optimize pharmacy-initiated referral. Further research into actual clinical practices at Referral Program pharmacies would be justified. © 2015 John Wiley & Sons, Ltd.
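    The comparisons above (e.g. 42% vs. 100% selling a cough medicine, P < 0.001) rest on 2x2 chi-square tests. A self-contained sketch of that computation follows; the counts are chosen to roughly mirror the reported percentages and are illustrative, not the study's raw data.

```python
# Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]],
# of the kind used to compare referring vs non-referring pharmacies.

def chi_square_2x2(a, b, c, d):
    """Return the Pearson chi-square statistic (1 degree of freedom)."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    stat = 0.0
    for obs, row, col in ((a, row1, col1), (b, row1, col2),
                          (c, row2, col1), (d, row2, col2)):
        expected = row * col / n
        stat += (obs - expected) ** 2 / expected
    return stat

# Illustrative counts: of 156 referrers, 66 (~42%) sold a cough
# medicine; all 14 non-referrers (100%) did.
stat = chi_square_2x2(66, 90, 14, 0)
```

    A statistic above 10.83 corresponds to P < 0.001 at one degree of freedom; Fisher's exact test would replace this when expected cell counts are small, as the record notes.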

  4. Long-term US energy outlook

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friesen, G.

    Chase Econometrics summarizes the assumptions underlying long-term US energy forecasts. To illustrate the uncertainty involved in forecasting for the period to the year 2000, they compare Chase Econometrics forecasts with some recent projections prepared by the DOE Office of Policy, Planning and Analysis for the annual National Energy Policy Plan supplement. Scenario B, the mid-range reference case, is emphasized. The purpose of providing Scenario B as well as Scenarios A and C as alternate cases is to show the sensitivity of oil price projections to small swings in energy demand. 4 tables.

  5. Exploring the reversibility of marine climate change impacts in temperature overshoot scenarios

    NASA Astrophysics Data System (ADS)

    Zickfeld, K.; Li, X.; Tokarska, K.; Kohfeld, K. E.

    2017-12-01

    Artificial carbon dioxide removal (CDR) from the atmosphere has been proposed as a measure for mitigating climate change and restoring the climate system to a 'safe' state after overshoot. Previous studies have demonstrated that the changes in surface air temperature due to anthropogenic CO2 emissions can be reversed through CDR, while some oceanic properties, for example thermosteric sea level rise, show a delay in their response to CDR. This research aims to investigate the reversibility of changes in ocean conditions after implementation of CDR, with a focus on ocean biogeochemical properties. To achieve this, we analyze climate model simulations based on two sets of emission scenarios. We first use RCP2.6 and its extension until year 2300 as the reference scenario and design several temperature and cumulative CO2 emissions "overshoot" scenarios based on other RCPs, which represent cases with less ambitious mitigation policies in the near term that temporarily exceed the 2 °C target adopted by the Paris Agreement. In addition, we use a set of emission scenarios with a reference scenario limiting warming to 1.5°C in the long term and two overshoot scenarios. The University of Victoria Earth System Climate Model (UVic ESCM), a climate model of intermediate complexity, is forced with these emission scenarios. We compare the response of select ocean variables (seawater temperature, pH, dissolved oxygen) in the overshoot scenarios to that in the respective reference scenario at the time the same amount of cumulative emissions is reached. Our results suggest that the overshoot and subsequent return to a reference CO2 cumulative emissions level would leave substantial impacts on the marine environment. Although the changes in global mean sea surface variables (temperature, pH and dissolved oxygen) are largely reversible, global mean ocean temperature, dissolved oxygen and pH differ significantly from those in the reference scenario.
Large ocean areas exhibit temperature increase and pH and dissolved oxygen decrease relative to the reference scenario without cumulative CO2 emissions overshoot. Furthermore, our results show that the higher the level of overshoot, the lower the reversibility of changes in the marine environment.

  6. Alternative Geothermal Power Production Scenarios

    DOE Data Explorer

    Sullivan, John

    2014-03-14

    The information given in this file pertains to Argonne LCAs of the plant cycle stage for a set of ten new geothermal scenario pairs, each comprised of a reference and improved case. These analyses were conducted to compare environmental performances among the scenarios and cases. The types of plants evaluated are hydrothermal binary and flash and Enhanced Geothermal Systems (EGS) binary and flash plants. Each scenario pair was developed by the LCOE group using GETEM as a way to identify plant operational and resource combinations that could reduce geothermal power plant LCOE values. Based on the specified plant and well field characteristics (plant type, capacity, capacity factor and lifetime, and well numbers and depths) for each case of each pair, Argonne generated a corresponding set of material to power ratios (MPRs) and greenhouse gas and fossil energy ratios.

  7. Fuel Cycle Analysis Framework Base Cases for the IAEA/INPRO GAINS Collaborative Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brent Dixon

    Thirteen countries participated in the Collaborative Project GAINS “Global Architecture of Innovative Nuclear Energy Systems Based on Thermal and Fast Reactors Including a Closed Fuel Cycle”, which was the primary activity within the IAEA/INPRO Program Area B: “Global Vision on Sustainable Nuclear Energy” for the last three years. The overall objective of GAINS was to develop a standard framework for assessing future nuclear energy systems taking into account sustainable development, and to validate results through sample analyses. This paper details the eight scenarios that constitute the GAINS framework base cases for analysis of the transition to future innovative nuclear energy systems. The framework base cases provide a reference for users of the framework to start from in developing and assessing their own alternate systems. Each base case is described along with performance results against the GAINS sustainability evaluation metrics. The eight cases include four using a moderate growth projection and four using a high growth projection for global nuclear electricity generation through 2100. The cases are divided into two sets, addressing homogeneous and heterogeneous scenarios developed by GAINS to model global fuel cycle strategies. The heterogeneous world scenario considers three separate nuclear groups based on their fuel cycle strategies, with non-synergistic and synergistic cases. The framework base case analysis results show the impact of these different fuel cycle strategies while providing references for future users of the GAINS framework. A large number of scenario alterations are possible and can be used to assess different strategies, different technologies, and different assumptions about possible futures of nuclear power. Results can be compared to the framework base cases to assess where these alternate cases perform differently versus the sustainability indicators.

  8. The Directed Case Method.

    ERIC Educational Resources Information Center

    Cliff, William H.; Curtin, Leslie Nesbitt

    2000-01-01

    Provides an example of a directed case on human anatomy and physiology. Uses brief real-life newspaper articles and clinical descriptions from medical reference texts to describe an actual, fictitious, or composite event. Includes interrelated human anatomy and physiology topics in the scenario. (YDS)

  9. Differences in case-mix can influence the comparison of standardised mortality ratios even with optimal risk adjustment: an analysis of data from paediatric intensive care.

    PubMed

    Manktelow, Bradley N; Evans, T Alun; Draper, Elizabeth S

    2014-09-01

    The publication of clinical outcomes for consultant surgeons in 10 specialties within the NHS has, along with national clinical audits, highlighted the importance of measuring and reporting outcomes with the aim of monitoring quality of care. Such information is vital to be able to identify good and poor practice and to inform patient choice. The need to adequately adjust outcomes for differences in case-mix has long been recognised as being necessary to provide 'like-for-like' comparisons between providers. However, directly comparing values of the standardised mortality ratio (SMR) between different healthcare providers can be misleading even when the risk-adjustment perfectly quantifies the risk of a poor outcome in the reference population. An example is shown from paediatric intensive care. Using observed case-mix differences for 33 paediatric intensive care units (PICUs) in the UK and Ireland for 2009-2011, SMRs were calculated under four different scenarios where, in each scenario, all of the PICUs were performing identically for each patient type. Each scenario represented a clinically plausible difference in outcome from the reference population. Despite the fact that the outcome for any patient was the same no matter which PICU they were to be admitted to, differences between the units were seen when compared using the SMR: scenario 1, 1.07-1.21; scenario 2, 1.00-1.14; scenario 3, 1.04-1.13; scenario 4, 1.00-1.09. Even if two healthcare providers are performing equally for each type of patient, if their patient populations differ in case-mix their SMRs will not necessarily take the same value. Clinical teams and commissioners must always keep in mind this weakness of the SMR when making decisions. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
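    The paper's central point can be reproduced in a few lines: give two units identical performance (the same observed-to-expected mortality ratio for every patient type) but different case-mixes, and their SMRs still diverge. The numbers below are illustrative, not the PICU data.

```python
# Sketch of the SMR case-mix problem: SMR = observed / expected deaths,
# summed over patient types. Both units below perform identically for
# every patient type, yet their SMRs differ because of case-mix alone.

def smr(case_mix, baseline_risk, true_ratio):
    """SMR for a unit given counts, reference risks, and per-type ratios."""
    observed = sum(n * p * r for n, p, r in zip(case_mix, baseline_risk, true_ratio))
    expected = sum(n * p for n, p in zip(case_mix, baseline_risk))
    return observed / expected

baseline_risk = [0.01, 0.20]   # reference-population risk: low-risk, high-risk
true_ratio    = [1.0, 1.3]     # identical performance per type in both units

smr_a = smr([900, 100], baseline_risk, true_ratio)  # mostly low-risk admissions
smr_b = smr([100, 900], baseline_risk, true_ratio)  # mostly high-risk admissions
```

    Unit A reports an SMR near 1.21 while unit B reports one near 1.30, even though both treat every patient type identically; this is exactly the comparability problem the paper warns about.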

  10. Future Scenarios of Livestock and Land Use in Brazil

    NASA Astrophysics Data System (ADS)

    Costa, M. H.; Abrahão, G. M.

    2016-12-01

    Brazil currently has about 213 M cattle heads on 151 M ha of pastures. Over the last 40 years, both the top 5% and the average stocking rate have been increasing exponentially in Brazil, while the relative yield gap has remained constant. Using these historical relationships, we estimate future scenarios of livestock and land use in Brazil. We assume a reference scenario for the top 5%, in which pasturelands are adequately fertilized, soil is well drained and not compacted, grasses are never burned, pastures are divided into 8 subdivisions of regular area, and cattle are rotated through the subdivisions. The reference scenario does not consider irrigation or feed supplementation. We calibrate a computer model and run it for the pasturelands throughout the entire country. We conclude that current pastures have about 20% efficiency in raising cattle compared to the reference scenario. Considering the reference scenario, we predict an equilibrium will be reached in about 100 years, with the top 5% at about 9.3 heads per ha and the average at 4.3 heads per ha, or 600 M heads of livestock. Considering a more pessimistic scenario, which assumes an inflection of the curve at present times, we predict an equilibrium will be reached in about 60 years, with the top 5% stocking rate equal to 4.3 heads per ha and the average equal to 2.2 heads per ha, or 300 M heads of livestock. Both cases represent a considerable expansion of the livestock herd, perhaps even greater than the growth of global demand for beef. These scenarios indicate that not all existing pasturelands need to be used in the future - a significant part of them may be converted to croplands, which will also contribute to the reduction of deforestation.
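    The scenario logic above can be sketched as a stocking rate that climbs toward the reference-scenario ceiling, with herd size equal to stocking rate times pasture area. The logistic form and growth constant below are illustrative assumptions, not the paper's calibrated model; only the starting herd (~213 M heads on 151 M ha) and the 4.3 heads/ha ceiling come from the record.

```python
import math

# Hedged sketch: average stocking rate (heads/ha) approaching its
# reference-scenario ceiling along an assumed logistic curve.

def stocking_rate(t, ceiling=4.3, r0=1.41, growth=0.05):
    """Average stocking rate after t years, starting at r0, approaching `ceiling`."""
    return ceiling / (1 + (ceiling / r0 - 1) * math.exp(-growth * t))

PASTURE_MHA = 151.0                           # current pasture area (million ha)

herd_now = stocking_rate(0) * PASTURE_MHA     # ~213 M heads today
herd_100y = stocking_rate(100) * PASTURE_MHA  # near the reference equilibrium
```

    With these assumed parameters the herd approaches roughly 640 M heads at constant pasture area; the paper's ~600 M equilibrium implies some pasture is released to cropland, which is its concluding point.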

  11. Electric energy savings from new technologies. Revision 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harrer, B.J.; Kellogg, M.A.; Lyke, A.J.

    1986-09-01

    Purpose of the report is to provide information about the electricity-saving potential of new technologies to OCEP that it can use in developing alternative long-term projections of US electricity consumption. Low-, base-, and high-case scenarios of the electricity savings for 10 technologies were prepared. The total projected annual savings for the year 2000 for all 10 technologies were 137 billion kilowatt hours (BkWh), 279 BkWh, and 470 BkWh, respectively, for the three cases. The magnitude of these savings projections can be gauged by comparing them to the Department's reference case projection for the 1985 National Energy Policy Plan. In the Department's reference case, total consumption in 2000 is projected to be 3319 BkWh. Because approximately 75% of the base-case estimate of savings are already incorporated into the reference projection, only 25% of the savings estimated here should be subtracted from the reference projection for analysis purposes.
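    The adjustment described in the record's last sentence is simple arithmetic, reproduced here with the base-case figures stated in the text.

```python
# Netting out only the savings not already embedded in the DOE
# reference projection, as the report prescribes.

REFERENCE_2000_BKWH = 3319.0   # DOE reference-case consumption, year 2000 (BkWh)
BASE_CASE_SAVINGS = 279.0      # base-case new-technology savings (BkWh)
ALREADY_IN_REFERENCE = 0.75    # share of savings already in the projection

incremental_savings = BASE_CASE_SAVINGS * (1 - ALREADY_IN_REFERENCE)
adjusted_projection = REFERENCE_2000_BKWH - incremental_savings
```

    Only about 70 BkWh of the 279 BkWh base-case savings should be subtracted, leaving an adjusted projection just under 3250 BkWh.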

  12. Web-GIS oriented systems viability for municipal solid waste selective collection optimization in developed and transient economies.

    PubMed

    Rada, E C; Ragazzi, M; Fedrizzi, P

    2013-04-01

    Municipal solid waste management is a multidisciplinary activity that includes generation, source separation, storage, collection, transfer and transport, processing and recovery, and, last but not least, disposal. The optimization of waste collection, through source separation, is compulsory where landfill-based management must be overcome. In this paper, a few aspects related to the implementation of a Web-GIS-based system are analyzed. This approach is critically analyzed with reference to the experience of two Italian case studies and two additional extra-European case studies. The first case is one of the best examples of selective collection optimization in Italy. The obtained efficiency is very high: 80% of waste is source separated for recycling purposes. In the second reference case, the local administration is facing the optimization of waste collection through Web-GIS oriented technologies for the first time. The starting scenario is far from an optimized management of municipal solid waste. The last two case studies concern pilot experiences in China and Malaysia. Each step of the Web-GIS oriented strategy is comparatively discussed with reference to typical scenarios of developed and transient economies. The main result is that transient economies are ready to move toward Web-oriented tools for MSW management, but this opportunity is not yet well exploited in the sector. Copyright © 2013 Elsevier Ltd. All rights reserved.

  13. Base-Case 1% Yield Increase (BC1), All Energy Crops scenario of the 2016 Billion Ton Report

    DOE Data Explorer

    Davis, Maggie R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)] (ORCID:0000000181319328); Hellwinkel, Chad [University of Tennessee] (ORCID:0000000173085058); Eaton, Laurence [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)] (ORCID:0000000312709626); Langholtz, Matthew H. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)] (ORCID:0000000281537154); Turhollow, Anthony [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)] (ORCID:0000000228159350); Brandt, Craig [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)] (ORCID:0000000214707379); Myers, Aaron (ORCID:0000000320373827)

    2016-07-13

    Scientific reason for data generation: to serve as the base-case scenario for the BT16 volume 1 agricultural scenarios, comparing these projections of potential biomass supplies against a reference case (agricultural baseline 10.11578/1337885). The simulation runs from 2015 through 2040; a starting year of 2014 is used but not reported. How each parameter was produced (methods), format, and relationship to other data in the data set: this exogenous-price simulation (also referred to as a “specified-price” simulation) introduces a farmgate price, and POLYSYS solves for biomass supplies that may be brought to market in response to these prices. In specified-price scenarios, a specified farmgate price is offered constantly in all counties over all years of the simulation. This simulation begins in 2015 with an offered farmgate price for primary crop residues only between 2015 and 2018 and long-term contracts for dedicated crops beginning in 2019. Expected mature energy crop yield grows at a compounding rate of 1% beginning in 2016. The yield growth assumptions are fixed after crops are planted, such that yield gains do not apply to crops already planted, but new plantings do take advantage of the gains in expected yield growth. Instruments used: Policy Analysis System (POLYSYS, version POLYS2015_V10_alt_JAN22B), an agricultural policy modeling system of U.S. agriculture (crops and livestock), supplied by the University of Tennessee Institute of Agriculture, Agricultural Policy Analysis Center.

  14. An Economic Aspect of the AVOID Programme: Analysis Using the AIM/CGE Model

    NASA Astrophysics Data System (ADS)

    Matsumoto, Ken'ichi; Masui, Toshihiko

    2010-05-01

    This presentation shows the results that the AIM/CGE [Global] model contributed to Work Stream 1 of the AVOID programme. Three economic models participate in this WS to analyze the economic aspects of defined climate policies, and the AIM/CGE [Global] model is one of them. The reference scenario is SRES A1B, and five policy scenarios (2016.R2.H, 2016.R4.L, 2016.R5.L, 2030.R2.H, and 2030.R5.L) are considered. The climate policies are expressed as emissions pathways of several gases such as greenhouse gases and aerosols. The AIM/CGE [Global] model is a recursive dynamic global CGE model with 21 industrial sectors and 24 world regions. These definitions are based on the GTAP6 database, which is also used as the economic data for the base year. Some important characteristics of this model can be summarized as follows: power generation by various sources (from non-renewables to renewables) is considered; CCS technology is modeled; biomass energy (both traditional and purpose-grown) production and consumption are included; not only CO2 emissions but also other gases are considered; international markets are modeled for international trade of some fossil fuels; and relationships between the costs and resource reserves of fossil fuels are modeled. The model is run with 10-year time steps until 2100. For the reference case, there are no constraints and the model is run based on the drivers (assumptions on GDP and population for A1B) and AEEI. The reference case does not have the same emissions pathways as the prescribed emissions for A1B in AVOID. For scenario cases, the model is run under emissions constraints. In particular, for each policy scenario, the constraint on each gas in each 10-year step is derived.
    The percentage reduction in emissions between the AVOID A1B scenario and the particular policy scenario is first calculated for each gas in each 10-year period, and then these percentage reductions are applied to the AIM reference case to derive the constraints for each gas over the 21st century. The main results provided to AVOID were carbon prices and GDP for each scenario case. Regarding the carbon prices, the results show that the higher the emissions reduction rate and the earlier the peak, the higher the carbon prices will be, and the prices tend to be higher over time (536/tCO2 in 2100 for 2016.R5.L). These trends are quite different from those of the E3MG model, which assumes a constant carbon tax for each scenario (232/tCO2 in 2100 for 2016.R5.L). In addition, higher carbon prices are necessary in the AIM/CGE model than in the E3MG model, especially in the latter half of the century. Regarding the GDP trends, the results indicate that negative GDP changes occur for all scenario cases, and higher GDP damage is observed as the reduction rate becomes higher and the peak comes earlier (-7.04% in 2100 for 2016.R5.L). These trends are extremely different from those of the E3MG model, which shows positive GDP effects (+4.89% in 2100 for 2016.R5.L). The differences in results between the two models are caused by (1) technological change assumptions, (2) revenue recycling methodology, (3) timing of emissions cuts, and (4) modeling approaches. We expect to have a more detailed discussion at the session.
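    The constraint-derivation step described above reduces to one scaling operation per gas per decade. A sketch with illustrative emissions values follows; the real inputs are the AVOID pathways and the AIM reference runs.

```python
# Derive an AIM emissions constraint by applying the AVOID percentage
# reduction (A1B vs policy) to the AIM reference case, per gas and decade.

def derive_constraint(avoid_a1b, avoid_policy, aim_reference):
    """Scale AIM reference emissions by the AVOID reduction fraction."""
    reduction = 1 - avoid_policy / avoid_a1b
    return aim_reference * (1 - reduction)

# Illustrative: the policy cuts AVOID A1B CO2 by 40% in some decade,
# and AIM's own reference for that decade is 50 GtCO2.
constraint = derive_constraint(avoid_a1b=60.0, avoid_policy=36.0, aim_reference=50.0)
```

    Applying the *relative* reduction rather than the absolute AVOID pathway is what reconciles the two models' differing reference emissions, as the record notes that AIM's reference case does not match the prescribed A1B pathway.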

  15. Alaska North Slope regional gas hydrate production modeling forecasts

    USGS Publications Warehouse

    Wilson, S.J.; Hunter, R.B.; Collett, T.S.; Hancock, S.; Boswell, R.; Anderson, B.J.

    2011-01-01

    A series of gas hydrate development scenarios was created to assess the range of outcomes predicted for the possible development of the "Eileen" gas hydrate accumulation, North Slope, Alaska. Production forecasts for the "reference case" were built using the 2002 Mallik production tests, mechanistic simulation, and geologic studies conducted by the US Geological Survey. Additional scenarios were considered: a "downside scenario," which fails to identify viable production, and an "upside scenario," which describes results that are better than expected. To capture the full range of possible outcomes and balance the downside case, an "extreme upside scenario" assumes each well is exceptionally productive. Starting with a representative type-well simulation forecast, field development timing is applied and the individual well forecasts are summed to create the field-wide production forecast. This technique is commonly used to schedule large-scale resource plays where drilling schedules are complex and production forecasts must account for many changing parameters. The complementary forecasts of rig count, capital investment, and cash flow can be used in a pre-appraisal assessment of potential commercial viability. Since no significant gas sales are currently possible on the North Slope of Alaska, typical parameters were used to create downside, reference, and upside case forecasts that predict from 0 to 71 BM3 (2.5 tcf) of gas may be produced in 20 years and nearly 283 BM3 (10 tcf) ultimate recovery after 100 years. Outlining a range of possible outcomes enables decision makers to visualize the pace and milestones that will be required to evaluate gas hydrate resource development in the Eileen accumulation. Critical values of peak production rate, time to meaningful production volumes, and investments required to rule out a downside case are provided. Upside cases identify potential if both depressurization and thermal stimulation yield positive results. An "extreme upside" case captures the full potential of unconstrained development with widely spaced wells. The results of this study indicate that recoverable gas hydrate resources may exist in the Eileen accumulation and that it represents a good opportunity for continued research. © 2010 Elsevier Ltd.
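    The type-well scheduling technique the record describes, summing time-shifted copies of one representative well profile according to the drilling schedule, can be sketched directly. The profile and schedule below are illustrative, not the Eileen forecasts.

```python
# Field-wide production forecast built from a single type-well profile
# and a drilling schedule, as in large-scale resource-play scheduling.

def field_forecast(type_well, start_years, horizon):
    """Sum time-shifted copies of a type-well profile over `horizon` years."""
    field = [0.0] * horizon
    for start in start_years:
        for i, rate in enumerate(type_well):
            t = start + i
            if t < horizon:
                field[t] += rate
    return field

type_well = [5.0, 4.0, 3.0, 2.0, 1.0]   # annual rate per well (arbitrary units)
starts = [0, 0, 1, 2, 2, 3]             # drilling schedule: well start years
profile = field_forecast(type_well, starts, horizon=8)
```

    Rig count and capital forecasts fall out of the same schedule: each entry in `starts` implies a drilling slot and a well cost in that year.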

  16. Impacts of potential CO2-reduction policies on air quality in the United States.

    PubMed

    Trail, Marcus A; Tsimpidi, Alexandra P; Liu, Peng; Tsigaridis, Kostas; Hu, Yongtao; Rudokas, Jason R; Miller, Paul J; Nenes, Athanasios; Russell, Armistead G

    2015-04-21

    Impacts of emissions changes from four potential U.S. CO2 emission reduction policies on 2050 air quality are analyzed using the community multiscale air quality model (CMAQ). Future meteorology was downscaled from the Goddard Institute for Space Studies (GISS) ModelE General Circulation Model (GCM) to the regional scale using the Weather Research Forecasting (WRF) model. We use emissions growth factors from the EPAUS9r MARKAL model to project emissions inventories for two climate tax scenarios, a combined transportation and energy scenario, a biomass energy scenario and a reference case. Implementation of a relatively aggressive carbon tax leads to improved PM2.5 air quality compared to the reference case as incentives increase for facilities to install flue-gas desulfurization (FGD) and carbon capture and sequestration (CCS) technologies. However, less capital is available to install NOX reduction technologies, resulting in an O3 increase. A policy aimed at reducing CO2 from the transportation sector and electricity production sectors leads to reduced emissions of mobile source NOX, thus reducing O3. Over most of the U.S., this scenario leads to reduced PM2.5 concentrations. However, increased primary PM2.5 emissions associated with fuel switching in the residential and industrial sectors leads to increased organic matter (OM) and PM2.5 in some cities.

  17. Alaska OCS socioeconomic studies program: St. George basin petroleum development scenarios, Anchorage impact analysis. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ender, R.L.; Gorski, S.

    1981-10-01

    The report consists of an update to the Anchorage socioeconomic and physical baseline and infrastructure standards used to forecast impacts with and without OCS oil and gas development in Alaska. This material is found in Technical Report 43, Volumes 1 and 2 entitled 'Gulf of Alaska and Lower Cook Inlet Petroleum Development Scenarios, Anchorage Socioeconomic and Physical Baseline and Anchorage Impact Analysis.' These updates should be read in conjunction with the above report. In addition, the Anchorage base case and petroleum development scenarios for the St. George Basin are given. These sections are written to stand alone without reference.

  18. San Pedro River Basin Data Browser (http://fws-case-12.nmsu.edu/SanPedro/)

    EPA Science Inventory

    Acquisition of primary spatial data and database development are initial features of any type of landscape assessment project. They provide contemporary land cover and the ancillary datasets necessary to establish reference condition and develop alternative future scenarios that ...

  19. Factors influencing general practitioner referral of patients developing end-stage renal failure: a standardised case-analysis study.

    PubMed

    Montgomery, Anthony J; McGee, Hannah M; Shannon, William; Donohoe, John

    2006-09-13

    To understand why treatment referral rates for ESRF are lower in Ireland than in other European countries, an investigation of the factors influencing general practitioner referral of patients developing ESRF was conducted. Randomly selected general practitioners (N = 51) were interviewed using 32 standardised written patient scenarios to elicit referral strategies. Outcome measures included general practitioner referral levels and thresholds for patients developing end-stage renal disease; referral routes (nephrologist vs other physicians); and the influence of patient age, marital status and co-morbidity on referral. Referral levels varied widely, with the full range of cases (0-32; median = 15) referred by different doctors after consideration of first laboratory results. Less than half (44%) of cases were referred to a nephrologist. Patient age (40 vs 70 years), marital status, co-morbidity (none vs rheumatoid arthritis) and general practitioner prior specialist renal training (yes or no) did not influence referral rates. Many patients were not referred to a specialist at creatinine levels of 129 micromol/l (47% not referred) or 250 micromol/l (45%). While all patients were referred at higher levels (350 and 480 micromol/l), referral to a nephrologist decreased in likelihood as scenarios became more complex: 28% at 129 micromol/l creatinine; 28% at 250 micromol/l; 18% at 350 micromol/l and 14% at 480 micromol/l. Referral levels and routes were not influenced by general practitioner age, sex or practice location. Most general practitioners had little current contact with chronic renal patients (mean number in practice = 0.7, s.d. = 1.3). The very divergent management patterns identified highlight the need for guidance to general practitioners on appropriate management of this serious condition.

  20. Cost-efficiency analyses for the US of biosimilar filgrastim-sndz, reference filgrastim, pegfilgrastim, and pegfilgrastim with on-body injector in the prophylaxis of chemotherapy-induced (febrile) neutropenia.

    PubMed

    McBride, Ali; Campbell, Kim; Bikkina, Mohan; MacDonald, Karen; Abraham, Ivo; Balu, Sanjeev

    2017-10-01

Guidelines recommend prophylaxis with granulocyte colony-stimulating factor for chemotherapy-induced (febrile) neutropenia (CIN/FN) based on regimen myelotoxicity and patient-related risk factors. The aim was to conduct a cost-efficiency analysis for the US of the direct acquisition and administration costs of the recently approved biosimilar filgrastim-sndz (Zarxio EP2006) relative to reference filgrastim (Neupogen), pegfilgrastim (Neulasta), and a pegfilgrastim injection device (Neulasta Onpro; hereafter pegfilgrastim-injector) for CIN/FN prophylaxis. A cost-efficiency analysis of the prophylaxis of one patient during one chemotherapy cycle under a 1- to 14-day time horizon was conducted using the unit dose average selling price (ASP) and Current Procedural Terminology (CPT) codes for subcutaneous prophylactic injection under four scenarios: cost of medication only (COSTMED), patient self-administration (SELFADMIN), healthcare provider (HCP) initiating administration followed by self-administration (HCPSTART), and HCP providing full administration (HCPALL). Two case studies were created to illustrate real-world clinical implications. The analyses were replicated using wholesale acquisition cost (WAC). Using ASP + CPT, cost savings achieved with filgrastim-sndz relative to reference filgrastim ranged from $65 (1 day) to $916 (14 days) across all scenarios. Relative to pegfilgrastim, savings with filgrastim-sndz ranged from $834 (14 days) up to $3,666 (1 day) under the COSTMED, SELFADMIN, and HCPSTART scenarios; and from $284 (14 days) up to $3,666 (1 day) under the HCPALL scenario. Similarly, filgrastim-sndz achieved savings relative to pegfilgrastim-injector: from $834 (14 days) to $3,666 (1 day) under the COSTMED scenario, from $859 (14 days) to $3,692 (1 day) under SELFADMIN, from $817 (14 days) to $3,649 (1 day) under HCPSTART, and from $267 (14 days) to $3,649 (1 day) under HCPALL.
Cost savings of filgrastim-sndz using WAC + CPT were even greater under all scenarios. Prophylaxis with filgrastim-sndz, a biosimilar filgrastim, was consistently associated with significant cost savings over prophylaxis with reference filgrastim, pegfilgrastim, and pegfilgrastim-injector, across all administration scenarios considered.
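The cost model described in this record reduces to simple arithmetic: drug acquisition cost per day plus any CPT administration fee, summed over the days of prophylaxis in one cycle. A minimal sketch in Python, using hypothetical unit prices (the abstract does not give the underlying ASP figures), also illustrates why the savings versus reference filgrastim are identical across administration scenarios: the administration fee is the same for both drugs at the same schedule and cancels out.

```python
# Hypothetical daily acquisition costs in USD; the real ASP figures are not in the abstract.
COST_PER_DAY = {"filgrastim-sndz": 290.0, "reference filgrastim": 355.0}
ADMIN_FEE = 25.0  # hypothetical CPT reimbursement per HCP-administered injection

def prophylaxis_cost(drug, days, hcp_days=0):
    """Drug cost for `days` of daily prophylaxis plus fees for HCP-given doses."""
    return COST_PER_DAY[drug] * days + ADMIN_FEE * hcp_days

def savings(days, hcp_days=0):
    """One-cycle savings of the biosimilar relative to reference filgrastim."""
    return (prophylaxis_cost("reference filgrastim", days, hcp_days)
            - prophylaxis_cost("filgrastim-sndz", days, hcp_days))
```

With these illustrative prices, `savings(1)` is $65 and `savings(14)` is $910, and passing a nonzero `hcp_days` leaves the result unchanged, mirroring the scenario-independence reported for the filgrastim comparison.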

  1. Application of automation and robotics to lunar surface human exploration operations

    NASA Technical Reports Server (NTRS)

    Woodcock, Gordon R.; Sherwood, Brent; Buddington, Patricia A.; Bares, Leona C.; Folsom, Rolfe; Mah, Robert; Lousma, Jack

    1990-01-01

    Major results of a study applying automation and robotics to lunar surface base buildup and operations concepts are reported. The study developed a reference base scenario with specific goals, equipment concepts, robot concepts, activity schedules and buildup manifests. It examined crew roles, contingency cases and system reliability, and proposed a set of technologies appropriate and necessary for effective lunar operations. This paper refers readers to four companion papers for quantitative details where appropriate.

  2. Web-GIS oriented systems viability for municipal solid waste selective collection optimization in developed and transient economies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rada, E.C., E-mail: Elena.Rada@ing.unitn.it; Ragazzi, M.; Fedrizzi, P.

Highlights: ► As an appropriate solution for MSW management in developed and transient countries. ► As an option to increase the efficiency of MSW selective collection. ► As an opportunity to integrate MSW management needs and services inventories. ► As a tool to develop Urban Mining actions. - Abstract: Municipal solid waste management is a multidisciplinary activity that includes generation, source separation, storage, collection, transfer and transport, processing and recovery, and, last but not least, disposal. The optimization of waste collection, through source separation, is compulsory where a landfill based management must be overcome. In this paper, a few aspects related to the implementation of a Web-GIS based system are analyzed. This approach is critically analyzed referring to the experience of two Italian case studies and two additional extra-European case studies. The first case is one of the best examples of selective collection optimization in Italy. The obtained efficiency is very high: 80% of waste is source separated for recycling purposes. In the second reference case, the local administration is going to be faced with the optimization of waste collection through Web-GIS oriented technologies for the first time. The starting scenario is far from an optimized management of municipal solid waste. The last two case studies concern pilot experiences in China and Malaysia. Each step of the Web-GIS oriented strategy is comparatively discussed referring to typical scenarios of developed and transient economies. The main result is that transient economies are ready to move toward Web oriented tools for MSW management, but this opportunity is not yet well exploited in the sector.

  3. Climate Change Effects of Forest Management and Substitution of Carbon-Intensive Materials and Fossil Fuels

    NASA Astrophysics Data System (ADS)

    Sathre, R.; Gustavsson, L.; Haus, S.; Lundblad, M.; Lundström, A.; Ortiz, C.; Truong, N.; Wikberg, P. E.

    2016-12-01

    Forests can play several roles in climate change mitigation strategies, for example as a reservoir for storing carbon and as a source of renewable materials and energy. To better understand the linkages and possible trade-offs between different forest management strategies, we conduct an integrated analysis where both sequestration of carbon in growing forests and the effects of substituting carbon intensive products within society are considered. We estimate the climate effects of directing forest management in Sweden towards increased carbon storage in forests, with more land set-aside for protection, or towards increased forest production for the substitution of carbon-intensive materials and fossil fuels, relative to a reference case of current forest management. We develop various scenarios of forest management and biomass use to estimate the carbon balances of the forest systems, including ecological and technological components, and their impacts on the climate in terms of cumulative radiative forcing over a 100-year period. For the reference case of current forest management, increasing the harvest of forest residues is found to give increased climate benefits. A scenario with increased set-aside area and the current level of forest residue harvest begins with climate benefits compared to the reference scenario, but the benefits cannot be sustained for 100 years because the rate of carbon storage in set-aside forests diminishes over time as the forests mature, but the demand for products and fuels remains. The most climatically beneficial scenario, expressed as reduced cumulative radiative forcing, in both the short and long terms is a strategy aimed at high forest production, high residue recovery rate, and high efficiency utilization of harvested biomass. Active forest management with high harvest level and efficient forest product utilization will provide more climate benefit, compared to reducing harvest and storing more carbon in the forest. 
Figure. Schematic diagram of complete modelled forest system including ecological and technological components, showing major flows of carbon.

  4. Prospective Conversion: Data Transfer between Fossil and New Microcomputer Technologies in Libraries.

    ERIC Educational Resources Information Center

    Vratny-Watts, Janet; Valauskas, Edward J.

    1989-01-01

    Discusses the technological changes that will necessitate the prospective conversion of library data over the next decade and addresses the problems of converting data from obsolete personal computers to newer models that feature radically different operating systems. Three case studies are used to illustrate possible scenarios. (11 references)…

  5. Proposed Requirements-driven User-scenario Development Protocol for the Belmont Forum E-Infrastructure and Data Management Cooperative Research Agreement

    NASA Astrophysics Data System (ADS)

    Wee, B.; Car, N.; Percivall, G.; Allen, D.; Fitch, P. G.; Baumann, P.; Waldmann, H. C.

    2014-12-01

The Belmont Forum E-Infrastructure and Data Management Cooperative Research Agreement (CRA) is designed to foster a global community to collaborate on e-infrastructure challenges. One of the deliverables is an implementation plan to address global data infrastructure interoperability challenges and align existing domestic and international capabilities. Work package three (WP3) of the CRA focuses on the harmonization of global data infrastructure for sharing environmental data. One of the subtasks under WP3 is the development of user scenarios that guide the development of applicable deliverables. This paper describes the proposed protocol for user scenario development. It enables the solicitation of user scenarios from a broad constituency, and exposes the mechanisms by which those solicitations are evaluated against requirements that map to the Belmont Challenge. The underlying principle of traceability forms the basis for a structured, requirements-driven approach resulting in work products amenable to trade-off analyses and objective prioritization. The protocol adopts the ISO Reference Model for Open Distributed Processing (RM-ODP) as a top level framework. User scenarios are developed within RM-ODP's "Enterprise Viewpoint". To harmonize with existing frameworks, the protocol utilizes the conceptual constructs of "scenarios", "use cases", "use case categories", and use case templates as adopted by recent GEOSS Architecture Implementation Project (AIP) deliverables and CSIRO's eReefs project. These constructs are encapsulated under the larger construct of "user scenarios". Once user scenarios are ranked by goodness-of-fit to the Belmont Challenge, secondary scoring metrics may be generated, like goodness-of-fit to FutureEarth science themes. The protocol also facilitates an assessment of the ease of implementing a given user scenario using existing GEOSS AIP deliverables.
In summary, the protocol results in a traceability graph that can be extended to coordinate across research programmes. If implemented using appropriate technologies and harmonized with existing ontologies, this approach enables queries, sensitivity analyses, and visualization of complex relationships.

  6. General dental practitioner's views on dental general anaesthesia services.

    PubMed

    Threlfall, A G; King, D; Milsom, K M; Blinkhom, A S; Tickle, M

    2007-06-01

Policy has recently changed on the provision of dental general anaesthetic services in England. The aim of this study was to investigate general dental practitioners' views about dental general anaesthesia, the reduction in its availability, and the impact on the care of children with toothache. This was a qualitative study using semi-structured interviews and clinical case scenarios with general dental practitioners providing NHS services in the North West of England. 93 general dental practitioners were interviewed and 91 answered a clinical case scenario about the care they would provide for a 7-year-old child with multiple decayed teeth presenting with toothache. Scenario responses showed variation: 8% would immediately refer for general anaesthesia, 25% would initially prescribe antibiotics, but the majority would attempt either to restore or to extract the tooth causing pain. Interview responses also demonstrated variation in care; however, most dentists agreed that general anaesthesia has a role for nervous children but said they refer only as a last resort. The responses indicated an increase in inequalities and that access to services did not match population needs, leaving some children waiting in pain. Most general dental practitioners support moving dental general anaesthesia into hospitals, but some believe that the change has widened health inequalities, and there is also a problem of variation in treatment provision. Additional general anaesthetic services are needed in some areas with high levels of tooth decay, and evidence-based guidelines on caring for children with toothache are required.

  7. The space station assembly phase: Flight telerobotic servicer feasibility. Volume 2: Methodology and case study

    NASA Technical Reports Server (NTRS)

    Smith, Jeffrey H.; Gyamfi, Max A.; Volkmer, Kent; Zimmerman, Wayne F.

    1987-01-01

    A methodology is described for examining the feasibility of a Flight Telerobotic Servicer (FTS) using two assembly scenarios, defined at the EVA task level, for the 30 shuttle flights (beginning with MB-1) over a four-year period. Performing all EVA tasks by crew only is compared to a scenario in which crew EVA is augmented by FTS. A reference FTS concept is used as a technology baseline and life-cycle cost analysis is performed to highlight cost tradeoffs. The methodology, procedure, and data used to complete the analysis are documented in detail.

  8. Analysis of JT-60SA operational scenarios

    NASA Astrophysics Data System (ADS)

    Garzotti, L.; Barbato, E.; Garcia, J.; Hayashi, N.; Voitsekhovitch, I.; Giruzzi, G.; Maget, P.; Romanelli, M.; Saarelma, S.; Stankiewitz, R.; Yoshida, M.; Zagórski, R.

    2018-02-01

    Reference scenarios for the JT-60SA tokamak have been simulated with one-dimensional transport codes to assess the stationary state of the flat-top phase and provide a profile database for further physics studies (e.g. MHD stability, gyrokinetic analysis) and diagnostics design. The types of scenario considered vary from pulsed standard H-mode to advanced non-inductive steady-state plasmas. In this paper we present the results obtained with the ASTRA, CRONOS, JINTRAC and TOPICS codes equipped with the Bohm/gyro-Bohm, CDBM and GLF23 transport models. The scenarios analysed here are: a standard ELMy H-mode, a hybrid scenario and a non-inductive steady state plasma, with operational parameters from the JT-60SA research plan. Several simulations of the scenarios under consideration have been performed with the above mentioned codes and transport models. The results from the different codes are in broad agreement and the main plasma parameters generally agree well with the zero dimensional estimates reported previously. The sensitivity of the results to different transport models and, in some cases, to the ELM/pedestal model has been investigated.

  9. Numerical analyses of baseline JT-60SA design concepts with the COREDIV code

    NASA Astrophysics Data System (ADS)

    Zagórski, R.; Gałązka, K.; Ivanova-Stanik, I.; Stępniewski, W.; Garzotti, L.; Giruzzi, G.; Neu, R.; Romanelli, M.

    2017-06-01

    JT-60SA reference design scenarios at high (#3) and low (#2) density have been analyzed with the help of the self-consistent COREDIV code. Simulations results for a standard C wall and full W wall have been compared in terms of the influence of impurities, both intrinsic (C, W) and seeded (N, Ar, Ne, Kr), on the radiation losses and plasma parameters. For scenario #3 in a C environment, the regime of detachment on divertor plates can be achieved with N or Ne seeding, whereas for the low density and high power scenario (#2), the C and seeding impurity radiation does not effectively reduce power to the targets. In this case, only an increase of either average density or edge density together with Kr seeding might help to develop conditions with strong radiation losses and semi-detached conditions in the divertor. The calculations show that, in the case of a W divertor, the power load to the plate is mitigated by seeding and the central plasma dilution is smaller compared to the C divertor. For the high density case (#3) with Ne seeding, operation in full detachment mode is predicted. Ar seems to be an optimal choice for the low-density high-power scenario #2, showing a wide operating window, whereas Ne leads to high plasma dilution at high seeding levels albeit not achieving semi-detached conditions in the divertor.

  10. Energy crops on landfills: functional, environmental, and costs analysis of different landfill configurations.

    PubMed

    Pivato, Alberto; Garbo, Francesco; Moretto, Marco; Lavagnolo, Maria Cristina

    2018-02-09

The cultivation of energy crops on landfills represents an important challenge for the near future, as the possibility of using devalued sites for energy production is very attractive. In this study, four scenarios were assessed and compared with respect to a reference case defined for northern Italy. The scenarios were defined taking current energy crop issues into consideration. In particular, the first three scenarios were based on energy maximisation, phytotreatment ability, and environmental impact, respectively. The fourth scenario was a combination of the characteristics emphasised by the previous scenarios. A multi-criteria analysis, based on economic, energetic, and environmental aspects, was performed. The fourth scenario emerged as the best, owing to its ability to pursue several objectives simultaneously and its top scores on both the environmental and energetic criteria. By contrast, the economic criterion emerged as weak, as all the considered scenarios showed limits in this respect. Important indications for future designs can be derived. The decrease in leachate production due to the presence of energy crops on the top cover, which enhances evapotranspiration, represents a favourable but critical aspect in the definition of the results.

  11. An Inquiry-Oriented Approach to Span and Linear Independence: The Case of the Magic Carpet Ride Sequence

    ERIC Educational Resources Information Center

    Wawro, Megan; Rasmussen, Chris; Zandieh, Michelle; Sweeney, George Franklin; Larson, Christine

    2012-01-01

    In this paper we present an innovative instructional sequence for an introductory linear algebra course that supports students' reinvention of the concepts of span, linear dependence, and linear independence. Referred to as the Magic Carpet Ride sequence, the problems begin with an imaginary scenario that allows students to build rich imagery and…

  12. Operations analysis for lunar surface construction: Results of two office of exploration case studies

    NASA Astrophysics Data System (ADS)

    Bell, Lisa Y.; Boles, Walter; Smith, Alvin

    1991-08-01

In an environment of intense competition for Federal funding, the U.S. space research community is responsible for developing a feasible, cost-effective approach to establishing a surface base on the moon to fulfill long-term Government objectives. This report presents the results of a construction operations analysis of two lunar scenarios provided by the National Aeronautics and Space Administration (NASA). Activities necessary to install the lunar base surface elements are defined and scheduled, based on the productivities and availability of the base resources allocated to the projects depicted in each scenario. The only construction project in which the required project milestones were not completed within the nominal timeframe was the initial startup phase of NASA's FY89 Lunar Evolution Case Study (LECS), primarily because this scenario did not include any Earth-based telerobotic site preparation before the arrival of the first crew. The other scenario analyzed, Reference Mission A from NASA's 90-Day Study of the Human Exploration of the Moon and Mars, did use telerobotic site preparation before the manned phase of base construction. Details of the analysis for LECS are provided, including spreadsheets indicating quantities of work and Gantt charts depicting the general schedule for the work. This level of detail is not presented for the scenario based on the 90-Day Study because many of the projects include the same (or similar) surface elements and facilities.

  13. Methodology for Generating Conflict Scenarios by Time Shifting Recorded Traffic Data

    NASA Technical Reports Server (NTRS)

    Paglione, Mike; Oaks, Robert; Bilimoria, Karl D.

    2003-01-01

    A methodology is presented for generating conflict scenarios that can be used as test cases to estimate the operational performance of a conflict probe. Recorded air traffic data is time shifted to create traffic scenarios featuring conflicts with characteristic properties similar to those encountered in typical air traffic operations. First, a reference set of conflicts is obtained from trajectories that are computed using birth points and nominal flight plans extracted from recorded traffic data. Distributions are obtained for several primary properties (e.g., encounter angle) that are most likely to affect the performance of a conflict probe. A genetic algorithm is then utilized to determine the values of time shifts for the recorded track data so that the primary properties of conflicts generated by the time shifted data match those of the reference set. This methodology is successfully demonstrated using recorded traffic data for the Memphis Air Route Traffic Control Center; a key result is that the required time shifts are less than 5 min for 99% of the tracks. It is also observed that close matching of the primary properties used in this study additionally provides a good match for some other secondary properties.
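A toy sketch of the time-shifting idea described above follows. It is not the study's implementation: tracks are reduced to straight-line segments, the matched conflict property is simply the total conflict count rather than distributions of encounter angle and other primary properties, and the genetic algorithm is a minimal elitist one. All names, geometry, and parameters are illustrative.

```python
import random

def min_separation(a, b):
    """Minimum horizontal distance between two straight-line tracks over their time overlap."""
    t0 = max(a["t0"], b["t0"])
    t1 = min(a["t0"] + a["dur"], b["t0"] + b["dur"])
    if t0 >= t1:
        return float("inf")
    # Coarse time sampling; an analytic closest-point-of-approach test would also work.
    return min(
        ((a["x"] + a["vx"] * (t - a["t0"]) - b["x"] - b["vx"] * (t - b["t0"])) ** 2
         + (a["y"] + a["vy"] * (t - a["t0"]) - b["y"] - b["vy"] * (t - b["t0"])) ** 2) ** 0.5
        for t in range(int(t0), int(t1) + 1)
    )

def conflict_count(tracks, shifts, sep=5.0):
    """Number of pairwise conflicts after applying a time shift to each track."""
    shifted = [dict(tr, t0=tr["t0"] + s) for tr, s in zip(tracks, shifts)]
    return sum(1 for i in range(len(shifted)) for j in range(i + 1, len(shifted))
               if min_separation(shifted[i], shifted[j]) < sep)

def evolve(tracks, target, max_shift=30, pop=30, gens=40, seed=0):
    """Minimal elitist GA: find per-track shifts whose conflict count matches `target`."""
    rng = random.Random(seed)
    n = len(tracks)
    fitness = lambda g: abs(conflict_count(tracks, g) - target)  # 0 is a perfect match
    popn = [[rng.randint(-max_shift, max_shift) for _ in range(n)] for _ in range(pop)]
    best = min(popn, key=fitness)
    for _ in range(gens):
        popn.sort(key=fitness)
        best = min(best, popn[0], key=fitness)
        parents = popn[: pop // 2]
        children = []
        while len(children) < pop - 1:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]          # one-point crossover
            child[rng.randrange(n)] = rng.randint(-max_shift, max_shift)  # point mutation
            children.append(child)
        popn = [best] + children               # elitism keeps the best genome
    return best, fitness(best)
```

In the real methodology the fitness function would compare full distributions of primary conflict properties against the reference set rather than a single count; the GA skeleton stays the same.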

  14. Choosing relatives for DNA identification of missing persons.

    PubMed

    Ge, Jianye; Budowle, Bruce; Chakraborty, Ranajit

    2011-01-01

DNA-based analysis is integral to missing person identification cases. When direct references are not available, indirect relative references can be used to identify missing persons by kinship analysis. Generally, more reference relatives yield greater accuracy of identification. However, it is costly to type multiple references, so at times decisions may need to be made on which relatives to type. In this study, pedigrees for 37 common reference scenarios with 13 CODIS STRs were simulated to rank the information content of different combinations of relatives. The results confirm that first-order relatives (parents and children) are the most preferred references for identifying missing persons; fullsibs are also informative. Less genetic dependence between references provides a higher likelihood ratio on average. Distant relatives may not be helpful with autosomal markers alone, but lineage-based Y-chromosome and mitochondrial DNA markers can increase the likelihood ratio or serve as filters to exclude putative relationships. © 2010 American Academy of Forensic Sciences.
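The kinship analysis referred to in this record rests on likelihood ratios. A minimal sketch of the standard single-parent paternity index (not the study's simulation pipeline), assuming Hardy-Weinberg proportions and ignoring mutation and linkage:

```python
def transmit(genotype, allele):
    """Probability a parent with this genotype transmits the allele (no mutation)."""
    return genotype.count(allele) / 2

def paternity_index(child, parent, freq):
    """Single-locus LR for 'tested person is a biological parent' vs 'unrelated'.

    `child` and `parent` are 2-tuples of alleles; `freq` maps allele -> population frequency.
    """
    a, b = child
    if a == b:  # child homozygous: the parent must transmit allele a
        x = transmit(parent, a) * freq[a]
        y = freq[a] ** 2
    else:       # child heterozygous: parent transmits a (mother gives b) or vice versa
        x = transmit(parent, a) * freq[b] + transmit(parent, b) * freq[a]
        y = 2 * freq[a] * freq[b]
    return x / y  # 0 means an exclusion under the no-mutation assumption

def combined_lr(loci):
    """Product of single-locus LRs across independent loci."""
    lr = 1.0
    for child, parent, freq in loci:
        lr *= paternity_index(child, parent, freq)
    return lr
```

For example, a child (A, B) and an alleged parent (A, D) with freq(A) = 0.1 give an LR of 1/(4 x 0.1) = 2.5 at that locus; multiplying such LRs over the 13 CODIS STRs is what makes multiple independent references so informative.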

  15. Colorful Twisted Top Partners and Partnerium at the LHC

    NASA Astrophysics Data System (ADS)

    Kats, Yevgeny; McCullough, Matthew; Perez, Gilad; Soreq, Yotam; Thaler, Jesse

    2017-06-01

    In scenarios that stabilize the electroweak scale, the top quark is typically accompanied by partner particles. In this work, we demonstrate how extended stabilizing symmetries can yield scalar or fermionic top partners that transform as ordinary color triplets but carry exotic electric charges. We refer to these scenarios as "hypertwisted" since they involve modifications to hypercharge in the top sector. As proofs of principle, we construct two hypertwisted scenarios: a supersymmetric construction with spin-0 top partners, and a composite Higgs construction with spin-1/2 top partners. In both cases, the top partners are still phenomenologically compatible with the mass range motivated by weak-scale naturalness. The phenomenology of hypertwisted scenarios is diverse, since the lifetimes and decay modes of the top partners are model dependent. The novel coupling structure opens up search channels that do not typically arise in top-partner scenarios, such as pair production of top-plus-jet resonances. Furthermore, hypertwisted top partners are typically sufficiently long lived to form "top-partnerium" bound states that decay predominantly via annihilation, motivating searches for rare narrow resonances with diboson decay modes.

  16. Impacts of feeding less food-competing feedstuffs to livestock on global food system sustainability.

    PubMed

    Schader, Christian; Muller, Adrian; Scialabba, Nadia El-Hage; Hecht, Judith; Isensee, Anne; Erb, Karl-Heinz; Smith, Pete; Makkar, Harinder P S; Klocke, Peter; Leiber, Florian; Schwegler, Patrizia; Stolze, Matthias; Niggli, Urs

    2015-12-06

    Increasing efficiency in livestock production and reducing the share of animal products in human consumption are two strategies to curb the adverse environmental impacts of the livestock sector. Here, we explore the room for sustainable livestock production by modelling the impacts and constraints of a third strategy in which livestock feed components that compete with direct human food crop production are reduced. Thus, in the outmost scenario, animals are fed only from grassland and by-products from food production. We show that this strategy could provide sufficient food (equal amounts of human-digestible energy and a similar protein/calorie ratio as in the reference scenario for 2050) and reduce environmental impacts compared with the reference scenario (in the most extreme case of zero human-edible concentrate feed: greenhouse gas emissions -18%; arable land occupation -26%, N-surplus -46%; P-surplus -40%; non-renewable energy use -36%, pesticide use intensity -22%, freshwater use -21%, soil erosion potential -12%). These results occur despite the fact that environmental efficiency of livestock production is reduced compared with the reference scenario, which is the consequence of the grassland-based feed for ruminants and the less optimal feeding rations based on by-products for non-ruminants. This apparent contradiction results from considerable reductions of animal products in human diets (protein intake per capita from livestock products reduced by 71%). We show that such a strategy focusing on feed components which do not compete with direct human food consumption offers a viable complement to strategies focusing on increased efficiency in production or reduced shares of animal products in consumption. © 2015 The Authors.

  17. Impacts of feeding less food-competing feedstuffs to livestock on global food system sustainability

    PubMed Central

    Hecht, Judith; Isensee, Anne; Smith, Pete; Makkar, Harinder P. S.; Klocke, Peter; Leiber, Florian; Stolze, Matthias; Niggli, Urs

    2015-01-01

    Increasing efficiency in livestock production and reducing the share of animal products in human consumption are two strategies to curb the adverse environmental impacts of the livestock sector. Here, we explore the room for sustainable livestock production by modelling the impacts and constraints of a third strategy in which livestock feed components that compete with direct human food crop production are reduced. Thus, in the outmost scenario, animals are fed only from grassland and by-products from food production. We show that this strategy could provide sufficient food (equal amounts of human-digestible energy and a similar protein/calorie ratio as in the reference scenario for 2050) and reduce environmental impacts compared with the reference scenario (in the most extreme case of zero human-edible concentrate feed: greenhouse gas emissions −18%; arable land occupation −26%, N-surplus −46%; P-surplus −40%; non-renewable energy use −36%, pesticide use intensity −22%, freshwater use −21%, soil erosion potential −12%). These results occur despite the fact that environmental efficiency of livestock production is reduced compared with the reference scenario, which is the consequence of the grassland-based feed for ruminants and the less optimal feeding rations based on by-products for non-ruminants. This apparent contradiction results from considerable reductions of animal products in human diets (protein intake per capita from livestock products reduced by 71%). We show that such a strategy focusing on feed components which do not compete with direct human food consumption offers a viable complement to strategies focusing on increased efficiency in production or reduced shares of animal products in consumption. PMID:26674194

  18. Emission Impacts of Electric Vehicles in the US Transportation Sector Following Optimistic Cost and Efficiency Projections.

    PubMed

    Keshavarzmohammadian, Azadeh; Henze, Daven K; Milford, Jana B

    2017-06-20

This study investigates emission impacts of introducing inexpensive and efficient electric vehicles into the US light duty vehicle (LDV) sector. Scenarios are explored using the ANSWER-MARKAL model with a modified version of the Environmental Protection Agency's (EPA) 9-region database. Modified cost and performance projections for LDV technologies are adapted from the National Research Council (2013) optimistic case. Under our optimistic scenario (OPT) we find 15% and 47% adoption of battery electric vehicles (BEVs) in 2030 and 2050, respectively. In contrast, gasoline vehicles (ICEVs) remain dominant through 2050 in the EPA reference case (BAU). Compared to BAU, OPT gives 16% and 36% reductions in LDV greenhouse gas (GHG) emissions for 2030 and 2050, respectively, corresponding to 5% and 9% reductions in economy-wide emissions. Total nitrogen oxides, volatile organic compounds, and SO2 emissions are similar in the two scenarios due to intersectoral shifts. Moderate, economy-wide GHG fees have little effect on GHG emissions from the LDV sector but are more effective in the electricity sector. In the OPT scenario, estimated well-to-wheels GHG emissions from full-size BEVs with a 100-mile range are 62 g CO2-e/mi in 2050, while those from full-size ICEVs are 121 g CO2-e/mi.

  19. Importance of anthropogenic climate impact, sampling error and urban development in sewer system design.

    PubMed

    Egger, C; Maurer, M

    2015-04-15

    Urban drainage design relying on observed precipitation series neglects the uncertainties associated with current and indeed future climate variability. Urban drainage design is further affected by the large stochastic variability of precipitation extremes and sampling errors arising from the short observation periods of extreme precipitation. Stochastic downscaling addresses anthropogenic climate impact by allowing relevant precipitation characteristics to be derived from local observations and an ensemble of climate models. This multi-climate model approach seeks to reflect the uncertainties in the data due to structural errors of the climate models. An ensemble of outcomes from stochastic downscaling allows for addressing the sampling uncertainty. These uncertainties are clearly reflected in the precipitation-runoff predictions of three urban drainage systems. They were mostly due to the sampling uncertainty. The contribution of climate model uncertainty was found to be of minor importance. Under the applied greenhouse gas emission scenario (A1B) and within the period 2036-2065, the potential for urban flooding in our Swiss case study is slightly reduced on average compared to the reference period 1981-2010. Scenario planning was applied to consider urban development associated with future socio-economic factors affecting urban drainage. The impact of scenario uncertainty was to a large extent found to be case-specific, thus emphasizing the need for scenario planning in every individual case. The results represent a valuable basis for discussions of new drainage design standards aiming specifically to include considerations of uncertainty. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. Cost Optimal Elastic Auto-Scaling in Cloud Infrastructure

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, S.; Sidhanta, S.; Ganguly, S.; Nemani, R. R.

    2014-12-01

    Today, elastic scaling is a critical part of leveraging the cloud. Elastic scaling refers to adding resources only when they are needed and deleting resources when they are not in use, ensuring that compute/server resources are not over-provisioned. Today, Amazon and Windows Azure are the only two platform providers that allow auto-scaling of cloud resources, where servers are automatically added and deleted. However, these solutions fall short on the following key features: A) they require explicit policy definitions, such as server-load thresholds, and therefore lack any predictive intelligence to make optimal decisions; B) they do not decide on the right size of resource and thereby do not produce a cost-optimal resource pool. In a typical cloud deployment model, we consider two types of application scenario: A. batch processing jobs (the Hadoop/big data case); B. transactional applications (any application that processes continuous request/response transactions). With reference to the classical queueing model, we are trying to model a scenario where servers have a price and a capacity (size) and the system can add or delete servers to maintain a certain queue length. Classical queueing models apply to scenarios where the number of servers is constant, so we cannot apply stationary system analysis in this case. We investigate the following questions: 1. Can we define a job queue, and use its metrics to predict the resource requirement in a quasi-stationary way? Can we map that into an optimal sizing problem? 2. Do we need to examine load (CPU/data) at the server level to characterize the size requirement? How do we learn that based on job type?
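    The sizing question posed above can be sketched as a small optimization: given a predicted arrival rate and a catalog of server types with price and capacity, pick the cheapest pool that keeps utilization below a target, a quasi-stationary stand-in for a queue-length constraint. The server types, prices, and rates below are hypothetical, not from any provider.

```python
import math

def cheapest_pool(arrival_rate, server_types, max_utilization=0.7):
    """Return (type_name, count, cost) minimizing cost subject to
    arrival_rate <= max_utilization * count * capacity."""
    best = None
    for name, price, capacity in server_types:
        # Smallest count of this type that satisfies the constraint.
        count = math.ceil(arrival_rate / (capacity * max_utilization))
        cost = count * price
        if best is None or cost < best[2]:
            best = (name, count, cost)
    return best

# Hypothetical catalog: (name, $/hour, requests/second handled).
types = [("small", 0.10, 50.0), ("large", 0.35, 200.0)]
print(cheapest_pool(300.0, types))
```

    A real controller would refresh the predicted arrival rate periodically and re-solve, rather than relying on a fixed load-threshold policy.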

  1. Virtual gait training for children with cerebral palsy using the Lokomat gait orthosis.

    PubMed

    Koenig, Alexander; Wellner, Mathias; Köneke, Susan; Meyer-Heim, Andreas; Lünenburger, Lars; Riener, Robert

    2008-01-01

    The Lokomat gait orthosis was developed in the Spinal Cord Injury Center at the University Hospital Balgrist Zurich and provides automatic gait training for patients with neurological gait impairments, such as Cerebral Palsy (CP). Each patient undergoes a task-oriented Lokomat rehabilitation training program via a virtual reality setup. In four virtual scenarios, the patient is able to exercise tasks such as wading through water, playing soccer, overstepping obstacles or training in a street scenario, each task offering varying levels of difficulty. Patients provided positive feedback in reference to the utilized haptic method, specifically addressing the sufficient degree of realism. In a single case study, we verified the task difficulty.

  2. Local digital control of power electronic converters in a dc microgrid based on a-priori derivation of switching surfaces

    NASA Astrophysics Data System (ADS)

    Banerjee, Bibaswan

    In power-electronic-based microgrids, the computational requirements needed to implement an optimized online control strategy can be prohibitive. The work presented in this dissertation proposes a generalized method of derivation of geometric manifolds in a dc microgrid that is based on the a-priori computation of the optimal reactions and trajectories for classes of events in a dc microgrid. The proposed states are the stored energies in all the energy storage elements of the dc microgrid and the power flowing into them. It is anticipated that calculating a large enough set of dissimilar transient scenarios will also span many scenarios not specifically used to develop the surface. These geometric manifolds will then be used as reference surfaces in any type of controller, such as a sliding mode hysteretic controller. The presence of switched power converters in microgrids involves different control actions for different system events. The control of the switch states of the converters is essential for steady-state and transient operations. A digital memory look-up based controller that uses a hysteretic sliding mode control strategy is an effective technique to generate the proper switch states for the converters. An example dc microgrid with three dc-dc boost converters and resistive loads is considered for this work. The geometric manifolds are successfully generated for transient events, such as step changes in the loads and the sources. The surfaces corresponding to a specific case of step change in the loads are then used as reference surfaces in an EEPROM for experimentally validating the control strategy. The required switch states corresponding to this specific transient scenario are programmed in the EEPROM as a memory table. This controls the switching of the dc-dc boost converters and drives the system states to the reference manifold. In this work, it is shown that this strategy effectively controls the system for a transient condition such as step changes in the loads for the example case.
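    The memory-lookup idea above can be sketched simply: quantize the state (stored energy e, power p) to a table index and look up a precomputed switch state. The reference surface below is an illustrative linear one, not a manifold from the dissertation, and a real controller would add a hysteresis band around the surface to avoid chattering.

```python
def build_table(n=16, e_max=10.0, p_max=5.0):
    """Precompute switch states: ON (1) when the state lies below the
    assumed reference surface p = p_max * (1 - e / e_max)."""
    table = {}
    for i in range(n):
        for j in range(n):
            e = e_max * i / (n - 1)
            p = p_max * j / (n - 1)
            table[(i, j)] = 1 if p < p_max * (1 - e / e_max) else 0
    return table

def lookup(table, e, p, n=16, e_max=10.0, p_max=5.0):
    # Quantize the measured state to the nearest table cell.
    i = min(n - 1, max(0, round(e / e_max * (n - 1))))
    j = min(n - 1, max(0, round(p / p_max * (n - 1))))
    return table[(i, j)]

table = build_table()
print(lookup(table, 2.0, 1.0))  # state well below the surface
```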

  3. Bridging the sanitation gap between disaster relief and development.

    PubMed

    Lai, Ka-Man; Ramirez, Claudia; Liu, Weilong; Kirilova, Darina; Vick, David; Mari, Joe; Smith, Rachel; Lam, Ho-Yin; Ostovari, Afshin; Shibakawa, Akifumi; Liu, Yang; Samant, Sidharth; Osaro, Lucky

    2015-10-01

    By interpreting disasters as opportunities to initiate the fulfilment of development needs, realise the vulnerability of the affected community and environment, and extend the legacy of relief funds and effort, this paper builds upon the concept linking relief, rehabilitation and development (LRRD) in the sanitation sector. It aims to use a composite of case studies to devise a framework for a semi-hypothetical scenario to identify critical components and generic processes for a LRRD action plan. The scenario is based on a latrine wetland sanitation system in a Muslim community. Several sub-frameworks are developed: (i) latrine design; (ii) assessment of human waste treatment; (iii) connective sanitation promotion strategy; and (iv) ecological systems and environmental services for sanitation and development. This scenario illustrates the complex issues involved in LRRD in sanitation work and provides technical notes and references for a legacy plan for disaster relief and development. © 2015 The Author(s). Disasters © Overseas Development Institute, 2015.

  4. Emotion recognition bias for contempt and anger in body dysmorphic disorder.

    PubMed

    Buhlmann, Ulrike; Etcoff, Nancy L; Wilhelm, Sabine

    2006-03-01

    Body dysmorphic disorder (BDD) patients are preoccupied with imagined defects or flaws in appearance (e.g., size or shape of nose). They are afraid of negative evaluations by others and often suffer significant morbidity including hospitalization and suicide attempts. Many patients experience ideas of reference, e.g., they often believe others take special notice of their "flaw". Facial expressions play an important role in conveying negative or positive feelings, and sympathy or rejection. In this study, we investigated emotion recognition deficits in 18 BDD patients and 18 healthy controls. Participants were presented with two questionnaires accompanying facial photographs. One questionnaire included self-referent scenarios ("Imagine that the bank teller is looking at you. What is his facial expression like?"), whereas the other one included other-referent scenarios ("Imagine that the bank teller is looking at a friend of yours," etc.), and participants were asked to identify the corresponding emotion (e.g., anger, contempt, neutral, or surprise). Overall, BDD patients, relative to controls, had difficulty identifying emotional expressions in self-referent scenarios. They misinterpreted more expressions as contemptuous and angry in self-referent scenarios than did controls. However, they did not have significantly more difficulties identifying emotional expressions in other-referent scenarios than controls. Thus, poor insight and ideas of reference, common in BDD, might be related to a bias for misinterpreting other people's emotional expressions as negative. Perceiving others as rejecting might reinforce concerns about one's personal perceived ugliness and social desirability.

  5. 4% Yield Increase (HH4), All Energy Crops scenario of the 2016 Billion Ton Report

    DOE Data Explorer

    Davis, Maggie R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)] (ORCID:0000000181319328); Hellwinkel, Chad [University of Tennessee] (ORCID:0000000173085058); Eaton, Laurence [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)] (ORCID:0000000312709626); Langholtz, Matthew H [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)] (ORCID:0000000281537154); Turhollow, Anthony [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)] (ORCID:0000000228159350); Brandt, Craig [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)] (ORCID:0000000214707379); Myers, Aaron [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)] (ORCID:0000000320373827)

    2016-07-13

    Scientific reason for data generation: to serve as an alternate high-yield scenario for the BT16 volume 1 agricultural scenarios to compare these projections of potential biomass supplies against a reference case (agricultural baseline 10.11578/1337885). The simulation runs from 2015 through 2040; a starting year of 2014 is used but not reported. Date the data set was last modified: 02/02/2016. How each parameter was produced (methods), format, and relationship to other data in the data set: These exogenous price simulations (also referred to as “specified-price” simulations) introduce a farmgate price, and POLYSYS solves for biomass supplies that may be brought to market in response to these prices. In specified-price scenarios, a specified farmgate price is offered constantly in all counties over all years of the simulation. This simulation begins in 2015 with an offered farmgate price for primary crop residues only between 2015 and 2018 and long-term contracts for dedicated crops beginning in 2019. Expected mature energy crop yield grows at a compounding rate of 4% beginning in 2016. The yield growth assumptions are fixed after crops are planted such that yield gains do not apply to crops already planted, but new plantings do take advantage of the gains in expected yield growth. Instruments used: Policy Analysis System –POLYSYS (version POLYS2015_V10_alt_JAN22B), an agricultural policy modeling system of U.S. agriculture (crops and livestock), supplied by the University of Tennessee Institute of Agriculture, Agricultural Policy Analysis Center.
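    The compounding 4%/yr yield assumption above implies a growing multiplier on expected mature yield for each new planting year. A quick tabulation of that multiplier (the formula is the standard compound-growth expression; the printed years are our choice):

```python
# Expected mature energy-crop yield multiplier under 4%/yr compounding
# growth beginning in 2016, applied to new plantings only.
BASE_YEAR, RATE = 2016, 0.04

def yield_multiplier(year):
    return (1 + RATE) ** (year - BASE_YEAR)

for year in (2016, 2020, 2030, 2040):
    print(year, round(yield_multiplier(year), 3))
```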

  6. Trade-offs across space, time, and ecosystem services

    USGS Publications Warehouse

    Rodriguez, J.P.; Beard, T.D.; Bennett, E.M.; Cumming, Graeme S.; Cork, S.J.; Agard, J.; Dobson, A.P.; Peterson, G.D.

    2006-01-01

    Ecosystem service (ES) trade-offs arise from management choices made by humans, which can change the type, magnitude, and relative mix of services provided by ecosystems. Trade-offs occur when the provision of one ES is reduced as a consequence of increased use of another ES. In some cases, a trade-off may be an explicit choice; but in others, trade-offs arise without premeditation or even awareness that they are taking place. Trade-offs in ES can be classified along three axes: spatial scale, temporal scale, and reversibility. Spatial scale refers to whether the effects of the trade-off are felt locally or at a distant location. Temporal scale refers to whether the effects take place relatively rapidly or slowly. Reversibility expresses the likelihood that the perturbed ES may return to its original state if the perturbation ceases. Across all four Millennium Ecosystem Assessment scenarios and selected case study examples, trade-off decisions show a preference for provisioning, regulating, or cultural services (in that order). Supporting services are more likely to be "taken for granted." Cultural ES are almost entirely unquantified in scenario modeling; therefore, the calculated model results do not fully capture losses of these services that occur in the scenarios. The quantitative scenario models primarily capture the services that are perceived by society as more important - provisioning and regulating ecosystem services - and thus do not fully capture trade-offs of cultural and supporting services. Successful management policies will be those that incorporate lessons learned from prior decisions into future management actions. Managers should complement their actions with monitoring programs that, in addition to monitoring the short-term provisions of services, also monitor the long-term evolution of slowly changing variables. Policies can then be developed to take into account ES trade-offs at multiple spatial and temporal scales. 
Successful strategies will recognize the inherent complexities of ecosystem management and will work to develop policies that minimize the effects of ES trade-offs. Copyright © 2006 by the author(s).

  7. Economic implications of mercury exposure in the context of the global mercury treaty: Hair mercury levels and estimated lost economic productivity in selected developing countries.

    PubMed

    Trasande, Leonardo; DiGangi, Joseph; Evers, David C; Petrlik, Jindrich; Buck, David G; Šamánek, Jan; Beeler, Bjorn; Turnquist, Madeline A; Regan, Kevin

    2016-12-01

    Several developing countries have limited or no information about exposures near anthropogenic mercury sources and no studies have quantified costs of mercury pollution or economic benefits to mercury pollution prevention in these countries. In this study, we present data on mercury concentrations in human hair from subpopulations in developing countries most likely to benefit from the implementation of the Minamata Convention on Mercury. These data are then used to estimate economic costs of mercury exposure in these communities. Hair samples were collected from sites located in 15 countries. We used a linear dose-response relationship that previously identified a 0.18 IQ point decrement per part per million (ppm) increase in hair mercury, and modeled a base case scenario assuming a reference level of 1 ppm, and a second scenario assuming no reference level. We then estimated the corresponding increases in intellectual disability and lost Disability-Adjusted Life Years (DALY). A total of 236 participants provided hair samples for analysis, with an estimated population at risk of mercury exposure near the 15 sites of 11,302,582. Average mercury levels were in the range of 0.48 ppm-4.60 ppm, and 61% of all participants had hair mercury concentrations greater than 1 ppm, the level that approximately corresponds to the USA EPA reference dose. An additional 1310 cases of intellectual disability attributable to mercury exposure were identified annually (4110 assuming no reference level), resulting in 16,501 lost DALYs (51,809 assuming no reference level). A total of $77.4 million in lost economic productivity was estimated assuming a 1 ppm reference level and $130 million if no reference level was used. 
We conclude that significant mercury exposures occur in developing and transition country communities near sources named in the Minamata Convention, and our estimates suggest that a large economic burden could be avoided by timely implementation of measures to prevent mercury exposures. Copyright © 2016 Elsevier Ltd. All rights reserved.
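    The dose-response reasoning above is a linear decrement of 0.18 IQ points per ppm of hair mercury, applied above a 1-ppm reference level in the base case (or with no reference level in the second scenario). A minimal sketch of that calculation; the example exposures are illustrative, not participant data:

```python
def iq_decrement(hair_hg_ppm, reference_ppm=1.0, slope=0.18):
    """IQ points lost per the linear dose-response used in the study:
    0.18 points per ppm above the reference level (set reference_ppm=0
    for the no-reference-level scenario)."""
    return slope * max(0.0, hair_hg_ppm - reference_ppm)

# Illustrative hair-mercury levels (ppm), spanning the reported range.
for hg in (0.5, 1.0, 2.5, 4.6):
    print(f"{hg} ppm -> {iq_decrement(hg):.3f} IQ points lost")
```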

  8. Human exploration mission studies

    NASA Technical Reports Server (NTRS)

    Cataldo, Robert L.

    1989-01-01

    The Office of Exploration has established a process whereby all NASA field centers and other NASA Headquarters offices participate in the formulation and analysis of a wide range of mission strategies. These strategies were manifested into specific scenarios or candidate case studies. The case studies provided a systematic approach into analyzing each mission element. First, each case study must address several major themes and rationale including: national pride and international prestige, advancement of scientific knowledge, a catalyst for technology, economic benefits, space enterprise, international cooperation, and education and excellence. Second, the set of candidate case studies is formulated to encompass the technology requirement limits in the life sciences, launch capabilities, space transfer, automation, and robotics in space operations, power, and propulsion. The first set of reference case studies identify three major strategies: human expeditions, science outposts, and evolutionary expansion. During the past year, four case studies were examined to explore these strategies. The expeditionary missions include the Human Expedition to Phobos and Human Expedition to Mars case studies. The Lunar Observatory and Lunar Outpost to Early Mars Evolution case studies examined the latter two strategies. This set of case studies established the framework to perform detailed mission analysis and system engineering to define a host of concepts and requirements for various space systems and advanced technologies. The details of each mission are described and, specifically, the results affecting the advanced technologies required to accomplish each mission scenario are presented.

  9. An inverse approach to perturb historical rainfall data for scenario-neutral climate impact studies

    NASA Astrophysics Data System (ADS)

    Guo, Danlu; Westra, Seth; Maier, Holger R.

    2018-01-01

    Scenario-neutral approaches are being used increasingly for climate impact assessments, as they allow water resource system performance to be evaluated independently of climate change projections. An important element of these approaches is the generation of perturbed series of hydrometeorological variables that form the inputs to hydrologic and water resource assessment models, with most scenario-neutral studies to-date considering only shifts in the average and a limited number of other statistics of each climate variable. In this study, a stochastic generation approach is used to perturb not only the average of the relevant hydrometeorological variables, but also attributes such as the intermittency and extremes. An optimization-based inverse approach is developed to obtain hydrometeorological time series with uniform coverage across the possible ranges of rainfall attributes (referred to as the 'exposure space'). The approach is demonstrated on a widely used rainfall generator, WGEN, for a case study at Adelaide, Australia, and is shown to be capable of producing evenly-distributed samples over the exposure space. The inverse approach expands the applicability of the scenario-neutral approach in evaluating a water resource system's sensitivity to a wider range of plausible climate change scenarios.
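    The perturbation idea above rests on a stochastic generator whose parameters control attributes such as the average, intermittency, and extremes of rainfall. A hedged sketch of a WGEN-style daily generator (two-state Markov occurrence plus gamma amounts); all parameter values are illustrative, not calibrated to Adelaide:

```python
import random

def generate(days, p_wd, p_ww, shape, scale, seed=0):
    """Daily rainfall (mm): wet/dry occurrence follows a two-state
    Markov chain (p_wd = P(wet|dry), p_ww = P(wet|wet)); wet-day
    amounts are gamma-distributed."""
    rng = random.Random(seed)
    wet, series = False, []
    for _ in range(days):
        wet = rng.random() < (p_ww if wet else p_wd)
        series.append(rng.gammavariate(shape, scale) if wet else 0.0)
    return series

# Perturbing the occurrence probabilities moves the series across the
# exposure space (here: toward a wetter, less intermittent climate).
base = generate(3650, p_wd=0.2, p_ww=0.5, shape=0.8, scale=6.0)
wetter = generate(3650, p_wd=0.3, p_ww=0.6, shape=0.8, scale=6.0)
print(f"mean {sum(base)/len(base):.2f} vs {sum(wetter)/len(wetter):.2f} mm/day")
```

    The inverse approach in the paper searches over such parameter vectors so that the resulting attribute combinations cover the exposure space uniformly.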

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xing, Y; Macq, B; Bondar, L

    Purpose: To quantify the accuracy in predicting the Bragg peak position using simulated in-room measurements of prompt gamma (PG) emissions for realistic treatment error scenarios that combine several sources of errors. Methods: Prompt gamma measurements by a knife-edge slit camera were simulated using an experimentally validated analytical simulation tool. Simulations were performed, for 143 treatment error scenarios, on an anthropomorphic phantom and a pencil beam scanning plan for nasal cavity. Three types of errors were considered: translation along each axis, rotation around each axis, and CT-calibration errors, with magnitude ranging, respectively, between −3 and 3 mm, between −5 and 5 degrees, and between −5 and +5%. We investigated the correlation between the Bragg peak (BP) shift and the horizontal shift of PG profiles. The shifts were calculated between the planned (reference) position and the position given by the error scenario. The prediction error for one spot was calculated as the absolute difference between the PG profile shift and the BP shift. Results: The PG shift was significantly and strongly correlated with the BP shift for 92% of the cases (p<0.0001, Pearson correlation coefficient R>0.8). Moderate but significant correlations were obtained for all cases that considered only CT-calibration errors and for 1 case that combined translation and CT-errors (p<0.0001, R ranged between 0.61 and 0.8). The average prediction errors for the simulated scenarios ranged between 0.08±0.07 and 1.67±1.3 mm (grand mean 0.66±0.76 mm). The prediction error was moderately correlated with the value of the BP shift (p=0, R=0.64). For the simulated scenarios the average BP shift ranged between −8±6.5 mm and 3±1.1 mm. Scenarios that considered combinations of the largest treatment errors were associated with large BP shifts. 
Conclusion: Simulations of in-room measurements demonstrate that prompt gamma profiles provide reliable estimation of the Bragg peak position for complex error scenarios. Yafei Xing and Luiza Bondar are funded by BEWARE grants from the Walloon Region. The work presents simulation results for a prompt gamma camera prototype developed by IBA.

  11. Industrial research for transmutation scenarios

    NASA Astrophysics Data System (ADS)

    Camarcat, Noel; Garzenne, Claude; Le Mer, Joël; Leroyer, Hadrien; Desroches, Estelle; Delbecq, Jean-Michel

    2011-04-01

    This article presents the results of research scenarios for americium transmutation in a 22nd century French nuclear fleet, using sodium fast breeder reactors. We benchmark the americium transmutation benefits and drawbacks with a reference case consisting of a hypothetical 60 GWe fleet of pure plutonium breeders. The fluxes in the various parts of the cycle (reactors, fabrication plants, reprocessing plants and underground disposals) are calculated using EDF's suite of codes, comparable in capabilities to those of other research facilities. We study underground thermal heat load reduction due to americium partitioning and repository area minimization. We endeavor to estimate the increased technical complexity of surface facilities to handle the americium fluxes in special fuel fabrication plants, americium fast burners, special reprocessing shops, handling equipments and transport casks between those facilities.

  12. Severe anaemia associated with Plasmodium falciparum infection in children: consequences for additional blood sampling for research.

    PubMed

    Kuijpers, Laura Maria Francisca; Maltha, Jessica; Guiraud, Issa; Kaboré, Bérenger; Lompo, Palpouguini; Devlieger, Hugo; Van Geet, Chris; Tinto, Halidou; Jacobs, Jan

    2016-06-02

    Plasmodium falciparum infection may cause severe anaemia, particularly in children. When planning a diagnostic study on children suspected of severe malaria in sub-Saharan Africa, it was questioned how much blood could be safely sampled; intended blood volumes (blood cultures and EDTA blood) were 6 mL (children aged <6 years) and 10 mL (6-12 years). A previous review [Bull World Health Organ. 89: 46-53. 2011] recommended not to exceed 3.8 % of total blood volume (TBV). In a simulation exercise using data of children previously enrolled in a study about severe malaria and bacteraemia in Burkina Faso, the impact of this 3.8 % safety guideline was evaluated. For a total of 666 children aged >2 months to <12 years, data of age, weight and haemoglobin value (Hb) were available. For each child, the estimated TBV (TBVe) (mL) was calculated by multiplying the body weight (kg) by the factor 80 (ml/kg). Next, TBVe was corrected for the degree of anaemia to obtain the functional TBV (TBVf). The correction factor consisted of the rate 'Hb of the child divided by the reference Hb'; both the lowest ('best case') and highest ('worst case') reference Hb values were used. Next, the exact volume that a 3.8 % proportion of this TBVf would present was calculated and this volume was compared to the blood volumes that were intended to be sampled. When applied to the Burkina Faso cohort, the simulation exercise pointed out that in 5.3 % (best case) and 11.4 % (worst case) of children the blood volume intended to be sampled would exceed the volume as defined by the 3.8 % safety guideline. Highest proportions would be in the age groups 2-6 months (19.0 %; worst scenario) and 6 months-2 years (15.7 %; worst case scenario). A positive rapid diagnostic test for P. falciparum was associated with an increased risk of violating the safety guideline in the worst case scenario (p = 0.016). Blood sampling in children for research in P. 
falciparum endemic settings may easily violate the proposed safety guideline when applied to TBVf. Ethical committees and researchers should be wary of this and take appropriate precautions.
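    The safety check described above is simple to express as a function: estimate total blood volume from weight at 80 mL/kg, scale by the child's Hb relative to a reference Hb, and flag samples exceeding 3.8% of the functional volume. The example weights and Hb values are illustrative, not cohort data:

```python
def sample_is_safe(weight_kg, hb_g_dl, reference_hb_g_dl,
                   intended_sample_ml, max_fraction=0.038):
    """True if the intended sample volume stays within the 3.8%
    safety guideline applied to the anaemia-corrected (functional)
    total blood volume."""
    tbv_e = 80.0 * weight_kg                      # estimated TBV, mL
    tbv_f = tbv_e * hb_g_dl / reference_hb_g_dl   # functional TBV
    return intended_sample_ml <= max_fraction * tbv_f

# A non-anaemic 10-kg child easily accommodates a 6 mL sample...
print(sample_is_safe(10, 12, 12, intended_sample_ml=6))
# ...but a severely anaemic 4-kg infant (Hb 5 vs reference 12) does not.
print(sample_is_safe(4, 5, 12, intended_sample_ml=6))
```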

  13. Analysis of LNG peakshaving-facility release-prevention systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pelto, P.J.; Baker, E.G.; Powers, T.B.

    1982-05-01

    The purpose of this study is to provide an analysis of release prevention systems for a reference LNG peakshaving facility. An overview assessment of the reference peakshaving facility, which preceded this effort, identified 14 release scenarios which are typical of the potential hazards involved in the operation of LNG peakshaving facilities. These scenarios formed the basis for this more detailed study. Failure modes and effects analysis and fault tree analysis were used to estimate the expected frequency of each release scenario for the reference peakshaving facility. In addition, the effectiveness of release prevention, release detection, and release control systems was evaluated.

  14. 3% Yield Increase (HH3), All Energy Crops scenario of the 2016 Billion Ton Report

    DOE Data Explorer

    Davis, Maggie R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)] (ORCID:0000000181319328); Hellwinkel, Chad [University of Tennessee] (ORCID:0000000173085058); Eaton, Laurence [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)] (ORCID:0000000312709626); Langholtz, Matthew H. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)] (ORCID:0000000281537154); Turhollow, Anthony [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)] (ORCID:0000000228159350); Brandt, Craig [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)] (ORCID:0000000214707379); Myers, Aaron (ORCID:0000000320373827)

    2016-07-13

    Scientific reason for data generation: to serve as an alternate high-yield scenario for the BT16 volume 1 agricultural scenarios to compare these projections of potential biomass supplies against a reference case (agricultural baseline 10.11578/1337885). The simulation runs from 2015 through 2040; a starting year of 2014 is used but not reported. Date the data set was last modified: 02/02/2016. How each parameter was produced (methods), format, and relationship to other data in the data set: These exogenous price simulations (also referred to as “specified-price” simulations) introduce a farmgate price, and POLYSYS solves for biomass supplies that may be brought to market in response to these prices. In specified-price scenarios, a specified farmgate price is offered constantly in all counties over all years of the simulation. This simulation begins in 2015 with an offered farmgate price for primary crop residues only between 2015 and 2018 and long-term contracts for dedicated crops beginning in 2019. Expected mature energy crop yield grows at a compounding rate of 3% beginning in 2016. The yield growth assumptions are fixed after crops are planted such that yield gains do not apply to crops already planted, but new plantings do take advantage of the gains in expected yield growth. Instruments used: Policy Analysis System –POLYSYS (version POLYS2015_V10_alt_JAN22B), an agricultural policy modeling system of U.S. agriculture (crops and livestock), supplied by the University of Tennessee Institute of Agriculture, Agricultural Policy Analysis Center.

  15. 2% Yield Increase (HH2), All Energy Crops scenario of the 2016 Billion Ton Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, Maggie R.; Hellwinkel, Chad; Eaton, Laurence

    Scientific reason for data generation: to serve as an alternate high-yield scenario for the BT16 volume 1 agricultural scenarios to compare these projections of potential biomass supplies against a reference case (agricultural baseline 10.11578/1337885). The simulation runs from 2015 through 2040; a starting year of 2014 is used but not reported. Date the data set was last modified: 02/02/2016. How each parameter was produced (methods), format, and relationship to other data in the data set: These exogenous price simulations (also referred to as “specified-price” simulations) introduce a farmgate price, and POLYSYS solves for biomass supplies that may be brought to market in response to these prices. In specified-price scenarios, a specified farmgate price is offered constantly in all counties over all years of the simulation. This simulation begins in 2015 with an offered farmgate price for primary crop residues only between 2015 and 2018 and long-term contracts for dedicated crops beginning in 2019. Expected mature energy crop yield grows at a compounding rate of 2% beginning in 2016. The yield growth assumptions are fixed after crops are planted such that yield gains do not apply to crops already planted, but new plantings do take advantage of the gains in expected yield growth. Instruments used: Policy Analysis System –POLYSYS (version POLYS2015_V10_alt_JAN22B), an agricultural policy modeling system of U.S. agriculture (crops and livestock), supplied by the University of Tennessee Institute of Agriculture, Agricultural Policy Analysis Center.

  16. Effect of density feedback on the two-route traffic scenario with bottleneck

    NASA Astrophysics Data System (ADS)

    Sun, Xiao-Yan; Ding, Zhong-Jun; Huang, Guo-Hua

    2016-12-01

    In this paper, we investigate the effect of density feedback on the two-route scenario with a bottleneck. Simulation and theoretical analysis show that there exist two critical vehicle entry probabilities αc1 and αc2. When the vehicle entry probability α≤αc1, four different states, i.e. the free flow state, transition state, maximum current state and congestion state, are identified in the system, corresponding to three critical reference densities. In the interval αc1<α<αc2, the free flow and transition states disappear, and only the congestion state remains when α≥αc2. According to these results, a traffic control center can adjust the reference density so that the system is in the maximum current state. In this case, the capacity of the traffic system reaches its maximum so that drivers can make full use of the roads. We hope that these results can provide good advice for alleviating traffic jams and be useful to traffic control centers designing advanced traveller information systems.
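    The feedback loop above can be caricatured in a few lines: vehicles enter with probability α per step and join the route with the lower broadcast density, while each route discharges vehicles stochastically. This is far simpler than the paper's cellular-automaton model (no bottleneck, no reference-density tuning) and is only meant to show how density feedback keeps the two routes balanced.

```python
import random

def simulate(alpha, steps=10000, capacity=100, seed=0):
    """Toy shortest-queue route choice; returns final densities."""
    rng = random.Random(seed)
    routes = [0, 0]  # vehicle counts on the two routes
    for _ in range(steps):
        if rng.random() < alpha:
            # Density feedback: join the less-occupied route.
            k = 0 if routes[0] <= routes[1] else 1
            if routes[k] < capacity:
                routes[k] += 1
        # Each route discharges one vehicle with probability 0.3.
        for k in (0, 1):
            if routes[k] > 0 and rng.random() < 0.3:
                routes[k] -= 1
    return [n / capacity for n in routes]

print(simulate(alpha=0.5))
```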

  17. Conservation planning under uncertainty in urban development and vegetation dynamics

    PubMed Central

    Carmel, Yohay

    2018-01-01

    Systematic conservation planning is a framework for optimally locating and prioritizing areas for conservation. An often-noted shortcoming of most conservation planning studies is that they do not address future uncertainty. The selection of protected areas that are intended to ensure the long-term persistence of biodiversity is often based on a snapshot of the current situation, ignoring processes such as climate change. Scenarios, in the sense of being accounts of plausible futures, can be utilized to identify conservation area portfolios that are robust to future uncertainty. We compared three approaches for utilizing scenarios in conservation area selection: considering a full set of scenarios (all-scenarios portfolio), assuming the realization of specific scenarios, and a reference strategy based on the current situation (current distributions portfolio). Our objective was to compare the robustness of these approaches in terms of their relative performance across future scenarios. We focused on breeding bird species in Israel’s Mediterranean region. We simulated urban development and vegetation dynamics scenarios 60 years into the future using DINAMICA-EGO, a cellular-automata simulation model. For each scenario, we mapped the target species’ available habitat distribution, identified conservation priority areas using the site-selection software MARXAN, and constructed conservation area portfolios using the three aforementioned strategies. We then assessed portfolio performance based on the number of species for which representation targets were met in each scenario. The all-scenarios portfolio consistently outperformed the other portfolios, and was more robust to ‘errors’ (e.g., when an assumed specific scenario did not occur). On average, the all-scenarios portfolio achieved representation targets for five additional species compared with the current distributions portfolio (approximately 33 versus 28 species). 
Our findings highlight the importance of considering a broad and meaningful set of scenarios, rather than relying on the current situation, the expected occurrence of specific scenarios, or the worst-case scenario. PMID:29621330

  18. Conservation planning under uncertainty in urban development and vegetation dynamics.

    PubMed

    Troupin, David; Carmel, Yohay

    2018-01-01

    Systematic conservation planning is a framework for optimally locating and prioritizing areas for conservation. An often-noted shortcoming of most conservation planning studies is that they do not address future uncertainty. The selection of protected areas that are intended to ensure the long-term persistence of biodiversity is often based on a snapshot of the current situation, ignoring processes such as climate change. Scenarios, in the sense of being accounts of plausible futures, can be utilized to identify conservation area portfolios that are robust to future uncertainty. We compared three approaches for utilizing scenarios in conservation area selection: considering a full set of scenarios (all-scenarios portfolio), assuming the realization of specific scenarios, and a reference strategy based on the current situation (current distributions portfolio). Our objective was to compare the robustness of these approaches in terms of their relative performance across future scenarios. We focused on breeding bird species in Israel's Mediterranean region. We simulated urban development and vegetation dynamics scenarios 60 years into the future using DINAMICA-EGO, a cellular-automata simulation model. For each scenario, we mapped the target species' available habitat distribution, identified conservation priority areas using the site-selection software MARXAN, and constructed conservation area portfolios using the three aforementioned strategies. We then assessed portfolio performance based on the number of species for which representation targets were met in each scenario. The all-scenarios portfolio consistently outperformed the other portfolios, and was more robust to 'errors' (e.g., when an assumed specific scenario did not occur). On average, the all-scenarios portfolio achieved representation targets for five additional species compared with the current distributions portfolio (approximately 33 versus 28 species). 
Our findings highlight the importance of considering a broad and meaningful set of scenarios, rather than relying on the current situation, the expected occurrence of specific scenarios, or the worst-case scenario.
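The all-scenarios portfolio idea above can be sketched as a greedy site-selection loop: keep adding the site that closes the most unmet (species, scenario) representation gaps until every target is met in every scenario. The sites, species, and targets below are invented toy inputs, not the study's MARXAN setup.

```python
# Hypothetical sketch of an "all-scenarios" portfolio: greedily add sites until
# every species meets its representation target under every scenario.

def build_portfolio(habitat, targets, sites, species, scenarios):
    """habitat[(site, sp, sc)] = 1 if site provides habitat for sp in scenario sc."""
    chosen = set()

    def unmet_gaps():
        gaps = []
        for sp in species:
            for sc in scenarios:
                have = sum(habitat.get((s, sp, sc), 0) for s in chosen)
                if have < targets[sp]:
                    gaps.append((sp, sc))
        return gaps

    while unmet_gaps():
        gaps = unmet_gaps()
        # pick the site that closes the most (species, scenario) gaps
        best = max(
            (s for s in sites if s not in chosen),
            key=lambda s: sum(habitat.get((s, sp, sc), 0) for sp, sc in gaps),
        )
        chosen.add(best)
    return chosen
```

A portfolio built this way meets targets under every scenario rather than only the current one, which is the property the study's comparison rewards.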

  19. An assessment of the inter-rater reliability of the ASA physical status score in the orthopaedic trauma population.

    PubMed

    Ihejirika, Rivka C; Thakore, Rachel V; Sathiyakumar, Vasanth; Ehrenfeld, Jesse M; Obremskey, William T; Sethi, Manish K

    2015-04-01

Although recent literature has demonstrated the utility of the ASA score in predicting postoperative length of stay, complication risk and potential utilization of other hospital resources, the ASA score has been inconsistently assigned by anaesthesia providers. This study tested the reliability of assignment of the ASA score classification by both attending anaesthesiologists and anaesthesia residents specifically among the orthopaedic trauma patient population. Nine case-based scenarios were created involving preoperative patients with isolated operative orthopaedic trauma injuries. The cases were created and assigned a reference score by both an attending anaesthesiologist and an orthopaedic trauma surgeon. Attending and resident anaesthesiologists were asked to assign an ASA score for each case. Rater versus reference and inter-rater agreement amongst respondents was then analyzed using Fleiss's Kappa and weighted and unweighted Cohen's Kappa. Thirty-three individuals provided ASA scores for each of the scenarios. The average rater versus reference reliability was substantial (Kw=0.78, SD=0.131, 95% CI=0.73-0.83). The average rater versus reference Kuw was also substantial (Kuw=0.64, SD=0.21, 95% CI=0.56-0.71). The inter-rater reliability as evaluated by Fleiss's Kappa was moderate (K=0.51, p<.001). Inter-rater comparisons within the group of attendings (K=0.50, p<.001) and within the group of residents (K=0.55, p<.001) were both moderate. There was a significant increase in the level of inter-rater reliability from the self-reported 'very uncomfortable' participants to the 'very comfortable' participants (uncomfortable K=0.43, comfortable K=0.59, p<.001). This study shows substantial agreement strength for reliability of the ASA score among anaesthesiologists when evaluating orthopaedic trauma patients. 
The significant increase in inter-rater reliability based on anaesthesiologists' comfort with the ASA scoring method implies a need for further evaluation of ASA assessment training and routine use on the ground. These findings support the use of the ASA score as a statistically reliable tool in orthopaedic trauma. Copyright © 2014 Elsevier Ltd. All rights reserved.
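The rater-versus-reference statistics reported above rest on Cohen's kappa. A minimal sketch of the unweighted and linearly weighted forms is below; the ASA-style ordinal scores in the test are toy values, not the study's data.

```python
from collections import Counter

# Unweighted and linearly weighted Cohen's kappa for one rater against a
# reference. ASA scores are ordinal (1-5), so the weighted form gives partial
# credit for near-misses.

def cohen_kappa(rater, reference, categories, weighted=False):
    n, k = len(rater), len(categories)
    idx = {c: i for i, c in enumerate(categories)}

    def w(a, b):  # disagreement weight: 0/1, or linear distance if weighted
        d = abs(idx[a] - idx[b])
        return d / (k - 1) if weighted else float(d != 0)

    # observed disagreement across paired ratings
    observed = sum(w(a, b) for a, b in zip(rater, reference)) / n
    # expected disagreement from the marginal frequencies of each rater
    pr, pf = Counter(rater), Counter(reference)
    expected = sum(w(a, b) * pr[a] * pf[b] / n ** 2
                   for a in categories for b in categories)
    return 1 - observed / expected
```

Perfect agreement yields kappa of 1; agreement no better than chance yields 0, matching the Kw/Kuw interpretation used in the study.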

  20. Dispersion modeling of accidental releases of toxic gases - utility for the fire brigades.

    NASA Astrophysics Data System (ADS)

    Stenzel, S.; Baumann-Stanzer, K.

    2009-09-01

Several air dispersion models are available for prediction and simulation of the hazard areas associated with accidental releases of toxic gases. Most model packages (commercial or free of charge) include a chemical database, an intuitive graphical user interface (GUI) and automated graphical output for effective presentation of results. The models are designed especially for analyzing different accidental toxic release scenarios ("worst-case scenarios"), preparing emergency response plans and optimal countermeasures, as well as for real-time risk assessment and management. The research project RETOMOD (reference scenario calculations for toxic gas releases - model systems and their utility for the fire brigade) was conducted by the Central Institute for Meteorology and Geodynamics (ZAMG) in cooperation with the Viennese fire brigade, OMV Refining & Marketing GmbH and Synex Ries & Greßlehner GmbH. RETOMOD was funded by the KIRAS safety research program of the Austrian Ministry of Transport, Innovation and Technology (www.kiras.at). The main tasks of this project were: 1. Sensitivity study and optimization of the meteorological input for modeling of the hazard areas (human exposure) during accidental toxic releases. 2. Comparison of several model packages (based on reference scenarios) in order to estimate their utility for the fire brigades. For the purpose of our study the following models were tested and compared: ALOHA (Areal Locations of Hazardous Atmospheres, EPA), MEMPLEX (Keudel av-Technik GmbH), Trace (Safer Systems), Breeze (Trinity Consultants), SAM (Engineering office Lohmeyer). A set of reference scenarios for chlorine, ammonia, butane and petrol was processed with the models above in order to predict and estimate human exposure during such events. 
Furthermore, the application of the observation-based analysis and forecasting system INCA, developed at the Central Institute for Meteorology and Geodynamics (ZAMG), in the case of a toxic release was investigated. INCA (Integrated Nowcasting through Comprehensive Analysis) data are calculated operationally at 1 km horizontal resolution, based on the weather forecast model ALADIN. The meteorological fields analysed with INCA include temperature, humidity, wind, precipitation, cloudiness and global radiation. In the frame of the project, INCA data were compared with measurements from the meteorological observational network at traffic-near sites in Vienna. INCA analyses and very short term forecast fields (up to 6 hours) were found to be a promising way to provide on-line meteorological input for the model packages used by the fire brigade. Since the input requirements differ from model to model, and the outputs are based on unequal criteria for toxic area and exposure, a high degree of caution in the interpretation of the model results is required - especially in the case of low wind speeds, stable atmospheric conditions, and flow deflection by buildings in urban areas or by complex topography.
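Dispersion packages of the kind compared above are commonly built around Gaussian plume formulas. The sketch below shows a minimal ground-level version; the power-law dispersion coefficients are made-up illustrative fits, and real tools such as ALOHA use far more elaborate parameterizations.

```python
import math

# Minimal Gaussian plume sketch: concentration downwind of a continuous point
# release, evaluated at ground level. sigma_y / sigma_z fits are assumed
# placeholder values, roughly in the spirit of a neutral stability class.

def plume_conc(q, u, x, y=0.0, h=0.0):
    """q: release rate [g/s], u: wind speed [m/s], x: downwind distance [m],
    y: crosswind offset [m], h: effective release height [m]."""
    sigma_y = 0.08 * x / math.sqrt(1 + 0.0001 * x)   # assumed fit
    sigma_z = 0.06 * x / math.sqrt(1 + 0.0015 * x)   # assumed fit
    return (q / (math.pi * u * sigma_y * sigma_z)
            * math.exp(-y**2 / (2 * sigma_y**2))
            * math.exp(-h**2 / (2 * sigma_z**2)))
```

Concentration falls off downwind and off-axis, and scales inversely with wind speed, which is one reason the low-wind cases flagged above demand extra caution.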

  1. Untangling Consequential Futures: Discovering Self-Consistent Regional and Global Multi-Sector Change Scenarios

    NASA Astrophysics Data System (ADS)

    Lamontagne, J. R.; Reed, P. M.

    2017-12-01

Impacts and adaptations to global change largely occur at regional scales, yet they are shaped globally through the interdependent evolution of the climate, energy, agriculture, and industrial systems. It is important for regional actors to account for the impacts of global changes on their systems in a globally consistent but regionally relevant way. This can be challenging because emerging global reference scenarios may not reflect regional challenges. Likewise, regionally specific scenarios may miss important global feedbacks. In this work, we contribute a scenario discovery framework to identify regionally specific, decision-relevant scenarios from an ensemble of scenarios of global change. To this end, we generated a large ensemble of time-evolving regional, multi-sector global change scenarios by a full factorial sampling of the underlying assumptions in the emerging shared socio-economic pathways (SSPs), using the Global Change Assessment Model (GCAM). Statistical and visual analytics were then used to discover which SSP assumptions are particularly consequential for various regions, considering a broad range of time-evolving metrics that encompass multiple spatial scales and sectors. In an illustrative example, we identify the most important global change narratives to inform water resource scenarios for several geographic regions using the proposed scenario discovery framework. Our results highlight the importance of demographic and agricultural evolution compared to technical improvements in the energy sector. We show that narrowly sampling a few canonical reference scenarios provides a very narrow view of the consequence space, increasing the risk of tacitly ignoring major impacts. Even optimistic scenarios contain unintended, disproportionate regional impacts and intergenerational transfers of consequence. 
Formulating consequential scenarios of deeply and broadly uncertain futures requires a better exploration of which quantitative measures of consequences are important, for whom they are important, where, and when. To this end, we have contributed a large database of climate change futures that can support 'backwards' scenario generation techniques that capture a broader array of consequences than those that emerge from limited sampling of a few reference scenarios.
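The ensemble-generation step described above, a full factorial over uncertain assumptions followed by a screen for consequential cases, can be sketched as follows. The factor names, levels, and the response function are hypothetical stand-ins for GCAM runs.

```python
from itertools import product

# Full factorial over assumed SSP-style uncertain factors, then a simple screen
# for scenarios whose (toy) regional metric crosses a threshold.

factors = {
    "population_growth": [0.5, 1.0, 1.5],   # relative to a reference path
    "ag_productivity":   [0.8, 1.0, 1.2],
    "energy_tech":       [0.9, 1.0, 1.1],
}

def regional_water_stress(pop, ag, tech):
    # toy response: demand scales with population, eases with ag productivity
    # and energy-technology improvement
    return pop * (2.0 - ag) / tech

# one scenario per combination of factor levels (3 x 3 x 3 = 27 here)
ensemble = [dict(zip(factors, levels)) for levels in product(*factors.values())]
consequential = [s for s in ensemble
                 if regional_water_stress(s["population_growth"],
                                          s["ag_productivity"],
                                          s["energy_tech"]) > 1.5]
```

Screening the full factorial, rather than a few canonical reference scenarios, is what exposes the corners of the consequence space the abstract warns about.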

  2. CERISE, a French radioprotection code, to assess the radiological impact and acceptance criteria of installations for material handling, and recycling or disposal of very low-level radioactive waste

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santucci, P.; Guetat, P.

    1993-12-31

This document describes the code CERISE (Code d'Evaluations Radiologiques Individuelles pour des Situations en Entreprise et dans l'Environnement). The code has been developed in the frame of European studies to establish acceptance criteria for very low-level radioactive waste and materials. It is written in Fortran and runs on PC. It calculates doses received through the different pathways: external exposure, ingestion, inhalation and skin contamination. Twenty basic scenarios, determined from previous studies, have already been elaborated. Calculations establish the relation between surface, specific and/or total activities, and doses. Results can be expressed as doses for an average activity unit, or as average activity limits for a set of reference doses (defined for each scenario analyzed). In this last case, the minimal activity values and the corresponding limiting scenarios are selected and summarized in a final table.
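The activity-to-dose bookkeeping described above can be sketched as follows. The scenario names and pathway dose coefficients are invented placeholders, not CERISE data; the structure just mirrors the relation between unit activity, dose, and limiting activity.

```python
# Dose per unit specific activity [(uSv/y) per (Bq/g)] by scenario and pathway.
# All values are hypothetical placeholders for illustration.
COEFF = {
    "landfill_worker": {"external": 0.8, "inhalation": 0.1, "ingestion": 0.05},
    "metal_recycling": {"external": 1.5, "inhalation": 0.3, "ingestion": 0.02},
}

def dose_for_activity(scenario, activity):
    """Total dose for a given specific activity, summed over pathways."""
    return activity * sum(COEFF[scenario].values())

def limiting_activity(reference_dose):
    """Largest specific activity keeping every scenario under reference_dose,
    together with the limiting scenario (the minimum across scenarios)."""
    per_scenario = {s: reference_dose / sum(c.values()) for s, c in COEFF.items()}
    limiting = min(per_scenario, key=per_scenario.get)
    return per_scenario[limiting], limiting
```

Inverting dose coefficients against a reference dose and keeping the minimum across scenarios reproduces, in miniature, the "minimal activity values and corresponding limiting scenarios" table the abstract describes.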

  3. Lunar Outpost Life Support Architecture Study Based on a High-Mobility Exploration Scenario

    NASA Technical Reports Server (NTRS)

    Lange, Kevin E.; Anderson, Molly S.

    2010-01-01

    This paper presents results of a life support architecture study based on a 2009 NASA lunar surface exploration scenario known as Scenario 12. The study focuses on the assembly complete outpost configuration and includes pressurized rovers as part of a distributed outpost architecture in both stand-alone and integrated configurations. A range of life support architectures are examined reflecting different levels of closure and distributed functionality. Monte Carlo simulations are used to assess the sensitivity of results to volatile high-impact mission variables, including the quantity of residual Lander oxygen and hydrogen propellants available for scavenging, the fraction of crew time away from the outpost on excursions, total extravehicular activity hours, and habitat leakage. Surpluses or deficits of water and oxygen are reported for each architecture, along with fixed and 10-year total equivalent system mass estimates relative to a reference case. System robustness is discussed in terms of the probability of no water or oxygen resupply as determined from the Monte Carlo simulations.
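The Monte Carlo treatment of uncertain mission variables can be sketched as below. All consumption rates, recycling credits, and sampling ranges are illustrative guesses, not Scenario 12 values; the point is the structure of sampling the named variables and reporting a no-resupply probability.

```python
import random

# Monte Carlo sketch of an outpost water balance: sample the uncertain mission
# variables named above (scavenged propellant, excursion fraction, EVA hours,
# leakage) and report the probability that no water resupply is needed.

def water_margin(rng):
    scavenged = rng.uniform(0, 500)          # kg water-equivalent from residuals
    excursion_frac = rng.uniform(0.1, 0.4)   # fraction of crew time on excursions
    eva_hours = rng.uniform(500, 1500)       # total EVA hours over the period
    leakage = rng.uniform(50, 200)           # kg lost to habitat leakage
    recovered = 2500 * (1 - excursion_frac) * 0.95   # recycling credit at outpost
    consumed = 2000 + 0.5 * eva_hours + leakage      # base use + EVA + leaks
    return recovered + scavenged - consumed          # surplus (+) or deficit (-)

def p_no_resupply(n=5000, seed=1):
    rng = random.Random(seed)
    return sum(water_margin(rng) > 0 for _ in range(n)) / n
```

The same loop, run per architecture, yields the robustness comparison the study reports as a probability of no water or oxygen resupply.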

  4. Ash fallout scenarios at Vesuvius: Numerical simulations and implications for hazard assessment

    NASA Astrophysics Data System (ADS)

    Macedonio, G.; Costa, A.; Folch, A.

    2008-12-01

Volcanic ash fallout subsequent to a possible renewal of Vesuvius activity represents a serious threat to the highly urbanized area around the volcano. In order to assess the relative hazard we consider three possible scenarios, corresponding to Plinian, Sub-Plinian, and violent Strombolian eruptions. Reference eruptions for each scenario are similar to the 79 AD (Pompeii), the 1631 AD (or 472 AD), and the 1944 AD Vesuvius events, respectively. Fallout deposits for the first two scenarios are modeled using HAZMAP, a model based on a semi-analytical solution of the 2D advection-diffusion-sedimentation equation. In contrast, fallout following a violent Strombolian event is modeled by means of FALL3D, a numerical model based on the solution of the full 3D advection-diffusion-sedimentation equation, which is valid also within the atmospheric boundary layer. Model inputs are total erupted mass, eruption column height, bulk grain size, bulk component distribution, and a statistical set of wind profiles obtained from the NCEP/NCAR re-analysis. We computed ground-load probability maps for different ash loadings. In the case of the Sub-Plinian scenario, the most representative tephra loading maps in 16 cardinal directions were also calculated. The probability maps obtained for the different scenarios are intended to support risk mitigation strategies.
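The ground-load probability maps described above reduce, per cell, to the fraction of ensemble members whose simulated tephra load exceeds a threshold. A minimal sketch, with toy deposit values standing in for HAZMAP/FALL3D output:

```python
# Turn an ensemble of simulated deposits (one member per wind profile) into an
# exceedance-probability map for a given ground-load threshold.

def probability_map(ensemble_loads, threshold):
    """ensemble_loads: list of {cell: load_kg_m2} dicts, one per wind profile.
    Returns {cell: fraction of members with load >= threshold}."""
    cells = set().union(*ensemble_loads)
    n = len(ensemble_loads)
    return {c: sum(m.get(c, 0.0) >= threshold for m in ensemble_loads) / n
            for c in cells}
```

Evaluating the same ensemble at several thresholds (e.g. loads relevant to roof collapse versus road closure) gives one probability map per ash loading, as in the study.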

  5. Land-Use Change and the Billion Ton 2016 Resource Assessment: Understanding the Effects of Land Management on Environmental Indicators

    NASA Astrophysics Data System (ADS)

    Kline, K. L.; Eaton, L. M.; Efroymson, R.; Davis, M. R.; Dunn, J.; Langholtz, M. H.

    2016-12-01

The federal government, led by the U.S. Department of Energy (DOE), quantified potential U.S. biomass resources for expanded production of renewable energy and bioproducts in the 2016 Billion-Ton Report: Advancing Domestic Resources for a Thriving Bioeconomy (BT16) (DOE 2016). Volume 1 of the report provides analysis of projected supplies from 2015 to 2040. Volume 2 (forthcoming) evaluates changes in environmental indicators for water quality and quantity, carbon, air quality, and biodiversity associated with production scenarios in BT16 volume 1. This presentation will review land-use allocations under the projected biomass production scenarios and the changes in land management that are implied, including drivers of direct and indirect LUC. National and global concerns such as deforestation and displacement of food production are addressed. The choice of reference scenario, input parameters and constraints (e.g., regarding land classes, availability, and productivity) drives LUC results in any model simulation; these choices are reviewed to put BT16 impacts into context. The principal LUC implied in the BT16 supply scenarios involves the transition of 25 to 47 million acres (net) from annual crops in the 2015 baseline to perennial cover by 2040, under the base case and the 3% yield growth case, respectively. We conclude that clear definitions of land parameters and effects are essential to assess LUC. A lack of consistency in parameters and outcomes of historic LUC analyses in the U.S. underscores the need for science-based approaches.

  6. Regional air quality management aspects of climate change: impact of climate mitigation options on regional air emissions.

    PubMed

    Rudokas, Jason; Miller, Paul J; Trail, Marcus A; Russell, Armistead G

    2015-04-21

We investigate the projected impact of six climate mitigation scenarios on U.S. emissions of carbon dioxide (CO2), sulfur dioxide (SO2), and nitrogen oxides (NOX) associated with energy use in major sectors of the U.S. economy (commercial, residential, industrial, electricity generation, and transportation). We use the EPA U.S. 9-region national database with the MARKet ALlocation (MARKAL) energy system model to project emissions changes over the 2005 to 2050 time frame. The modeled scenarios comprise two carbon tax, two low-carbon transportation, and two biomass fuel choice scenarios. In the lower carbon tax scenario and both biomass fuel choice scenarios, SO2 and NOX reductions are achieved largely through pre-existing rules and policies, with only relatively modest additional changes from the climate mitigation measures. The higher carbon tax scenario projects greater declines in CO2 and SO2 relative to the 2050 reference case, but electricity-sector NOX increases. This is a result of reduced investment in power plant NOX controls in earlier years in anticipation of accelerated coal power plant retirements, energy penalties associated with carbon capture systems, and shifting of NOX emissions in later years from power plants subject to a regional NOX cap to those in regions not subject to the cap.

  7. The use of scenarios for long-range planning by investor-owned electric utilities in the Pacific Northwest

    NASA Astrophysics Data System (ADS)

    Lyons, John V.

Scenario planning is a method of organizing and understanding large amounts of quantitative and qualitative data so that leaders can make better strategic decisions. There is a lack of academic research about scenario planning, with a consequent shortage of definitions and theories. This study utilized a case study methodology to analyze scenario planning by investor-owned electric utilities in the Pacific Northwest in their integrated resource planning (IRP) process. The cases include Avista Corporation, Idaho Power, PacifiCorp, Portland General Electric, and Puget Sound Energy. This study sought to determine how scenario planning was used, which scenario approach was used, the scenario outcomes, and the similarities and differences in the scenario planning processes. The literature review covered the development of scenario planning, common definitions and theories, approaches to scenario development, and scenario outcomes. A research methodology was developed to classify the scenario development approach as intuitive, hybrid, or quantitative, and the scenario outcomes as changed thinking, stories of plausible futures, improved decision making, and enhanced organizational learning. The study found all three forms of scenario planning in the IRPs. All of the cases used a similar approach to IRP development. All of the cases had at least improved decision making as an outcome of scenario planning. Only one case demonstrated all four scenario outcomes. A critical finding was a correlation between the use of the intuitive approach and the presence of all four scenario outcomes. Another major finding was the unique use of predetermined elements, which are normally held constant across scenarios but became critical uncertainties in some of the scenarios in this study's cases. Future research will need to confirm whether this finding is unique to the industry or an aberration. 
An unusually high number of scenarios were found for cases using the hybrid approach, which was unexpected based on the literature. This work expanded the methods for studying scenario planning, enhanced the body of scholarly works on scenario planning, and provided a starting point for additional research concerning the use of scenario planning by electric utilities.

  8. Putting flow-ecology relationships into practice: A decision-support system to assess fish community response to water-management scenarios

    USGS Publications Warehouse

    Cartwright, Jennifer M.; Caldwell, Casey; Nebiker, Steven; Knight, Rodney

    2017-01-01

    This paper presents a conceptual framework to operationalize flow–ecology relationships into decision-support systems of practical use to water-resource managers, who are commonly tasked with balancing multiple competing socioeconomic and environmental priorities. We illustrate this framework with a case study, whereby fish community responses to various water-management scenarios were predicted in a partially regulated river system at a local watershed scale. This case study simulates management scenarios based on interactive effects of dam operation protocols, withdrawals for municipal water supply, effluent discharges from wastewater treatment, and inter-basin water transfers. Modeled streamflow was integrated with flow–ecology relationships relating hydrologic departure from reference conditions to fish species richness, stratified by trophic, reproductive, and habitat characteristics. Adding a hypothetical new water-withdrawal site was predicted to increase the frequency of low-flow conditions with adverse effects for several fish groups. Imposition of new reservoir release requirements was predicted to enhance flow and fish species richness immediately downstream of the reservoir, but these effects were dissipated further downstream. The framework presented here can be used to translate flow–ecology relationships into evidence-based management by developing decision-support systems for conservation of riverine biodiversity while optimizing water availability for human use.
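The low-flow effect described above, where a new withdrawal increases the frequency of flows below an ecological threshold, can be sketched with a toy daily-flow series. The flows, withdrawal rate, and threshold are illustrative, not the case study's values.

```python
# Fraction of days below a low-flow threshold, compared between a reference
# flow series and a scenario with an added withdrawal.

def low_flow_frequency(flows, threshold):
    """Fraction of time steps with flow below the threshold."""
    return sum(q < threshold for q in flows) / len(flows)

reference = [12.0, 9.5, 8.0, 15.0, 7.0, 11.0, 6.5, 14.0]  # m^3/s, toy series
withdrawal = 2.0                                          # hypothetical new intake, m^3/s
scenario = [q - withdrawal for q in reference]

threshold = 8.0  # e.g. an ecological low-flow criterion
increase = (low_flow_frequency(scenario, threshold)
            - low_flow_frequency(reference, threshold))
```

In a decision-support system this increase would then be passed through the flow-ecology relationships to predict changes in richness for each fish group.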

  9. RENEB accident simulation exercise.

    PubMed

    Brzozowska, Beata; Ainsbury, Elizabeth; Baert, Annelot; Beaton-Green, Lindsay; Barrios, Leonardo; Barquinero, Joan Francesc; Bassinet, Celine; Beinke, Christina; Benedek, Anett; Beukes, Philip; Bortolin, Emanuela; Buraczewska, Iwona; Burbidge, Christopher; De Amicis, Andrea; De Angelis, Cinzia; Della Monaca, Sara; Depuydt, Julie; De Sanctis, Stefania; Dobos, Katalin; Domene, Mercedes Moreno; Domínguez, Inmaculada; Facco, Eva; Fattibene, Paola; Frenzel, Monika; Monteiro Gil, Octávia; Gonon, Géraldine; Gregoire, Eric; Gruel, Gaëtan; Hadjidekova, Valeria; Hatzi, Vasiliki I; Hristova, Rositsa; Jaworska, Alicja; Kis, Enikő; Kowalska, Maria; Kulka, Ulrike; Lista, Florigio; Lumniczky, Katalin; Martínez-López, Wilner; Meschini, Roberta; Moertl, Simone; Moquet, Jayne; Noditi, Mihaela; Oestreicher, Ursula; Orta Vázquez, Manuel Luis; Palma, Valentina; Pantelias, Gabriel; Montoro Pastor, Alegria; Patrono, Clarice; Piqueret-Stephan, Laure; Quattrini, Maria Cristina; Regalbuto, Elisa; Ricoul, Michelle; Roch-Lefevre, Sandrine; Roy, Laurence; Sabatier, Laure; Sarchiapone, Lucia; Sebastià, Natividad; Sommer, Sylwester; Sun, Mingzhu; Suto, Yumiko; Terzoudi, Georgia; Trompier, Francois; Vral, Anne; Wilkins, Ruth; Zafiropoulos, Demetre; Wieser, Albrecht; Woda, Clemens; Wojcik, Andrzej

    2017-01-01

The RENEB accident exercise was carried out in order to train the RENEB participants in coordinating and managing the potentially large data sets that would be generated in case of a major radiological event. Each participant was offered the possibility to activate the network by sending an alerting email about a simulated radiation emergency. The same participant then had to collect, compile and report the capacity, triage categorization and exposure scenario results obtained from all other participants. The exercise was performed over 27 weeks and involved a network of 28 institutes: 21 RENEB members, four candidates and three non-RENEB partners. The duration of a single exercise never exceeded 10 days, while the response from the assisting laboratories never came later than within half a day. During each week of the exercise, around 4500 samples were reported by all service laboratories (SL) to be examined, and 54 scenarios were coherently estimated by all laboratories (the standard deviation from the mean of all SL answers for a given scenario category and set of data was not larger than 3 patient codes). Each participant received training in both the role of a reference laboratory (activating the network) and of a service laboratory (responding to an activation request). The procedures for the case of a radiological event were successfully established and tested.

  10. [Health impact assessment of policies for municipal solid waste management: findings of the SESPIR Project].

    PubMed

    Ranzi, Andrea; Ancona, Carla; Angelini, Paola; Badaloni, Chiara; Cernigliaro, Achille; Chiusolo, Monica; Parmagnani, Federica; Pizzuti, Renato; Scondotto, Salvatore; Cadum, Ennio; Forastiere, Francesco; Lauriola, Paolo

    2014-01-01

The SESPIR Project (Epidemiological Surveillance of Health Status of Resident Population Around the Waste Treatment Plants) assessed the health impact on residents living near incinerators, landfills and mechanical-biological treatment plants in five Italian regions (Emilia-Romagna, Piedmont, Lazio, Campania, and Sicily). The assessment procedure took into account the available knowledge on health effects of waste disposal facilities. Analyses were related to three different scenarios: a Baseline scenario, referring to plants active in 2008-2009; the regional future scenario, with the plants expected in the regional waste plans; and a virtuous scenario (Green 2020), based on management of municipal solid waste (MSW) through reduced production and an intensive recovery policy. Of a total population of around 24 million in the 5 regions, more than 380,000 people lived near the plants at Baseline. This population is reduced to approximately 330,000 inhabitants and 170,000 inhabitants in the regional and Green 2020 scenarios, respectively. The health impact was assessed for the period 2008-2040. At Baseline, 1-2 cases per year of cancer attributable to MSW plants were estimated, as well as 26 cases per year of adverse pregnancy outcomes (including low birth weight and birth defects), 102 persons with respiratory symptoms, and about a thousand affected by annoyance caused by odours. These annual estimates translate into 2,725 disability-adjusted life years (DALYs) estimated for the entire period. The DALYs are reduced by approximately 20% and 80% in the two future scenarios, respectively. In both future scenarios, most of the remaining health impact is still driven by the effects on pregnancy and the annoyance associated with plant odours. 
In spite of the limitations due to the inevitable assumptions required by the present exercise, the proposed methodology is suitable for a first approach to assess different policies that can be adopted in regional planning in the field of waste management. The greatest reduction in health impact is achieved with a virtuous policy of reducing waste production and a significant increase in the collection and recycling of waste.
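The impact arithmetic underlying estimates like these, attributable cases from an exposed population and a relative risk, plus a percentage DALY reduction between scenarios, can be sketched as below. All inputs are illustrative, not SESPIR values.

```python
# Attributable annual cases and DALY reduction between scenarios.

def attributable_cases(exposed_pop, baseline_rate, relative_risk):
    """Annual cases attributable to exposure.
    baseline_rate is cases per person-year in the unexposed population."""
    return exposed_pop * baseline_rate * (relative_risk - 1)

def daly_reduction_pct(daly_baseline, daly_scenario):
    """Percentage reduction in disability-adjusted life years vs baseline."""
    return 100 * (daly_baseline - daly_scenario) / daly_baseline

# hypothetical inputs: 380,000 exposed, 2e-4 cases/person-year, RR = 1.02
cases = attributable_cases(380_000, 2e-4, 1.02)
```

With these made-up inputs the attributable burden comes out on the order of one to two cases per year, the same order of magnitude the abstract reports for cancer at Baseline.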

  11. General practitioners' attitude to sport and exercise medicine services: a questionnaire-based survey.

    PubMed

    Kassam, H; Tzortziou Brown, V; O'Halloran, P; Wheeler, P; Fairclough, J; Maffulli, N; Morrissey, D

    2014-12-01

    Sport and exercise medicine (SEM) aims to manage sporting injuries and promote physical activity. This study explores general practitioners' (GPs) awareness, understanding and utilisation of their local SEM services. A questionnaire survey, including patient case scenarios, was administered between February and May 2011. 693 GPs working in Cardiff and Vale, Leicester and Tower Hamlets were invited to participate. 244 GPs responded to the questionnaire (35.2% response rate). Less than half (46%; 112/244) were aware of their nearest SEM service and only 38% (92/244) had a clear understanding on referral indications. The majority (82%; 199/244) felt confident advising less active patients about exercise. There were divergent management opinions about the case scenarios of patients who were SEM referral candidates. Overall, GPs were significantly more likely to refer younger patients and patients with sport-related problems rather than patients who would benefit from increasing their activity levels in order to prevent or manage chronic conditions (p<0.01). GPs with previous SEM training were significantly more likely to refer (p<0.01). The majority (62%; 151/244) had never referred patients to their local SEM clinics but of those who had 75% (70/93) rated the service as good. There is a lack of awareness and understanding among GPs on the role of SEM within the National Health Service which may be resulting in suboptimal utilisation especially for patients who could benefit from increasing their activity levels. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  12. Utility of the AAOS Appropriate Use Criteria (AUC) for Pediatric Supracondylar Humerus Fractures in Clinical Practice.

    PubMed

    Ibrahim, Talal; Hegazy, Abdelsalam; Abulhail, Safa I S; Ghomrawi, Hassan M K

    2017-01-01

    The American Academy of Orthopaedic Surgeons (AAOS) recently developed an Appropriate Use Criteria (AUC) for pediatric supracondylar humerus fractures (PSHF). The AUC is intended to improve quality of care by informing surgeon decision making. The aim of our study was to cross-reference the management of operatively treated PSHF with the AAOS-published AUC. The AUC for PSHF include 220 patient scenarios, based on different combinations of 6 factors. For each patient scenario, 8 treatment options are evaluated as "appropriate," "maybe appropriate," and "rarely appropriate." We retrospectively reviewed the medical charts and radiographs of all operatively treated PSHF at our hospital from January 2013 to December 2014 and determined the appropriateness of the treatment. Over the study period, 94 children (mean age: 5.2 y; 51 male, 43 female) were admitted with PSHF and underwent a surgical procedure (type IIA: 7, type IIB: 14, type III: 70, flexion type: 3). Only 8 of the 220 scenarios were observed in our patient cohort. The most frequent scenario was represented by a type III fracture, palpable distal pulse, no nerve injury, closed soft-tissue envelope, no radius/ulna fracture, and typical swelling. Of the 94 fractures, the AUC was "appropriate" for 84 cases and "maybe appropriate" for 9 cases. There was only 1 case of "rarely appropriate" management. Closed reduction with lateral pinning and immobilization was the most prevalent treatment option (58.5%). The rate of appropriateness was not affected by the operating surgeon. However, the definition of a case as emergent had a significant impact on the rate of appropriateness. Application of the AUC to actual clinical data was relatively simple. The majority of operatively treated PSHF (89.4%) were managed appropriately. With the introduction of electronic medical charts, an AUC application becomes attractive and easy for orthopaedic surgeons to utilize in clinical practice. 
However, validity studies of the AUC in different clinical settings are still required. Level IV.
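The AUC's structure described above, patient scenarios built from combinations of clinical factors, with each treatment option rated on a three-level scale, can be sketched as a simple lookup table. The factor names, treatment labels, and ratings below are illustrative placeholders, not the AAOS definitions:

```python
# Sketch of an AUC-style lookup: each patient scenario is a combination of
# clinical factors, and each (scenario, treatment) pair maps to a rating.
# Factors, treatments, and ratings here are hypothetical examples.

RATINGS = {"appropriate", "maybe appropriate", "rarely appropriate"}

# (fracture_type, distal_pulse, nerve_injury) -> {treatment: rating}
auc_table = {
    ("type III", "palpable", "none"): {
        "closed reduction, lateral pinning": "appropriate",
        "open reduction": "maybe appropriate",
        "cast only": "rarely appropriate",
    },
}

def appropriateness(scenario, treatment):
    """Return the AUC rating for a treatment in a given patient scenario."""
    rating = auc_table.get(scenario, {}).get(treatment)
    if rating is None:
        raise KeyError("scenario/treatment combination not in table")
    assert rating in RATINGS
    return rating

print(appropriateness(("type III", "palpable", "none"),
                      "closed reduction, lateral pinning"))  # appropriate
```

With electronic charts, the same lookup could be driven directly from coded chart fields, which is what makes retrospective appropriateness audits like this one straightforward.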

  13. Climate change impact assessment on Veneto and Friuli Plain groundwater. Part I: an integrated modeling approach for hazard scenario construction.

    PubMed

    Baruffi, F; Cisotto, A; Cimolino, A; Ferri, M; Monego, M; Norbiato, D; Cappelletto, M; Bisaglia, M; Pretner, A; Galli, A; Scarinci, A; Marsala, V; Panelli, C; Gualdi, S; Bucchignani, E; Torresan, S; Pasini, S; Critto, A; Marcomini, A

    2012-12-01

The impact of climate change on water resources, particularly groundwater, is a highly debated topic worldwide, attracting attention and interest from both researchers and policy makers due to its relevant link with European water policy directives (e.g. 2000/60/EC and 2006/118/EC) and related environmental objectives. Understanding the long-term impacts of climate variability and change is therefore a key challenge for designing effective protection measures and implementing sustainable management of water resources. This paper presents the modeling approach adopted within the Life+ project TRUST (Tool for Regional-scale assessment of groUndwater Storage improvement in adaptation to climaTe change) in order to provide climate change hazard scenarios for the shallow groundwater of the upper Veneto and Friuli Plain, Northern Italy. Given the aim to evaluate potential impacts on water quantity and quality (e.g. groundwater level variation, decrease of water availability for irrigation, variations of nitrate infiltration processes), the modeling approach integrated an ensemble of climate, hydrologic and hydrogeologic models running from the global to the regional scale. Global and regional climate models and downscaling techniques were used to produce climate simulations for the reference period 1961-1990 and the projection period 2010-2100. The simulation of the recent climate was performed using observed radiative forcings, whereas the projections prescribed the radiative forcings according to the IPCC A1B emission scenario. The climate simulations and the downscaling then provided the precipitation, temperature and evapotranspiration fields used for the impact analysis. Based on the downscaled climate projections, three reference scenarios for the period 2071-2100 (i.e. the driest, the wettest and the mild year) were selected and used to run a regional geomorphoclimatic and hydrogeological model.
The final output of the model ensemble produced information about the potential variations of the water balance components (e.g. river discharge, groundwater level and volume) due to climate change. Such projections were used to develop potential hazard scenarios for the case study area, to be further applied within climate change risk assessment studies for groundwater resources and associated ecosystems. This paper describes the models' chain and the methodological approach adopted in the TRUST project and analyzes the hazard scenarios produced in order to investigate climate change risks for the case study area. Copyright © 2012 Elsevier B.V. All rights reserved.

  14. African Security Challenges: Now and Over the Horizon. Refugees, Internally-Displaced Persons, and Militancy in Africa: Current and/or Future Threat?

    DTIC Science & Technology

    2010-01-01

    asserted that in Africa, the general form this problem takes today might be different than the form it took in the past. Citing the 1994 Rwanda case as...community in promoting U.S. response to the genocide there. African Security Challenges: Now and Over the Horizon Working Group Discussion Report...nightmare scenario for humanitarian organizations and ultimately led to international war between Rwanda and Zaire. Using Rwanda as a reference

  15. Global Transportation Energy Consumption Examination of Scenarios to 2040 using ITEDD

    EIA Publications

    2017-01-01

    Energy consumption in the transportation sector is evolving. Over the next 25 years, the U.S. Energy Information Administration’s (EIA) International Energy Outlook (IEO) 2016 Reference case projects that transportation energy consumption in Organization for Economic Cooperation and Development (OECD) countries will remain relatively flat, while consumption in non-OECD countries will grow to levels exceeding OECD consumption by the early 2020s. This rapid non-OECD growth drives continued growth in global transportation energy consumption through at least 2040.

  16. Decommissioning of offshore oil and gas facilities: a comparative assessment of different scenarios.

    PubMed

    Ekins, Paul; Vanner, Robin; Firebrace, James

    2006-06-01

    A material and energy flow analysis, with corresponding financial flows, was carried out for different decommissioning scenarios for the different elements of an offshore oil and gas structure. A comparative assessment was made of the non-financial (especially environmental) outcomes of the different scenarios, with the reference scenario being to leave all structures in situ, while other scenarios envisaged leaving them on the seabed or removing them to shore for recycling and disposal. The costs of each scenario, when compared with the reference scenario, give an implicit valuation of the non-financial outcomes (e.g. environmental improvements), should that scenario be adopted by society. The paper concludes that it is not clear that the removal of the topsides and jackets of large steel structures to shore, as currently required by regulations, is environmentally justified; that concrete structures should certainly be left in place; and that leaving footings, cuttings and pipelines in place, with subsequent monitoring, would also be justified unless very large values were placed by society on a clear seabed and trawling access.

  17. Medical Content Searching, Retrieving, and Sharing Over the Internet: Lessons Learned From the mEducator Through a Scenario-Based Evaluation

    PubMed Central

    Spachos, Dimitris; Mylläri, Jarkko; Giordano, Daniela; Dafli, Eleni; Mitsopoulou, Evangelia; Schizas, Christos N; Pattichis, Constantinos; Nikolaidou, Maria

    2015-01-01

    Background The mEducator Best Practice Network (BPN) implemented and extended standards and reference models in e-learning to develop innovative frameworks as well as solutions that enable specialized state-of-the-art medical educational content to be discovered, retrieved, shared, and re-purposed across European Institutions, targeting medical students, doctors, educators and health care professionals. Scenario-based evaluation for usability testing, complemented with data from online questionnaires and field notes of users’ performance, was designed and utilized for the evaluation of these solutions. Objective The objective of this work is twofold: (1) to describe one instantiation of the mEducator BPN solutions (mEducator3.0 - “MEdical Education LINnked Arena” MELINA+) with a focus on the metadata schema used, as well as on other aspects of the system that pertain to usability and acceptance, and (2) to present evaluation results on the suitability of the proposed metadata schema for searching, retrieving, and sharing of medical content and with respect to the overall usability and acceptance of the system from the target users. Methods A comprehensive evaluation methodology framework was developed and applied to four case studies, which were conducted in four different countries (ie, Greece, Cyprus, Bulgaria and Romania), with a total of 126 participants. In these case studies, scenarios referring to creating, sharing, and retrieving medical educational content using mEducator3.0 were used. The data were collected through two online questionnaires, consisting of 36 closed-ended questions and two open-ended questions that referred to mEducator 3.0 and through the use of field notes during scenario-based evaluations. 
Results The main findings of the study showed that even though the informational needs of the mEducator target groups were addressed to a satisfactory extent and the metadata schema supported content creation, sharing, and retrieval from an end-user perspective, users faced difficulties in achieving a shared understanding of the meaning of some metadata fields and in correctly managing the intellectual property rights of repurposed content. Conclusions The results of this evaluation impact researchers, medical professionals, and designers interested in using similar systems for educational content sharing in medical and other domains. Recommendations on how to improve the search, retrieval, identification, and obtaining of medical resources are provided, by addressing issues of content description metadata, content description procedures, and intellectual property rights for re-purposed content. PMID:26453250

  18. System-Level Logistics for Dual Purpose Canister Disposal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalinina, Elena A.

    2014-06-03

    The analysis presented in this report investigated how the direct disposal of dual purpose canisters (DPCs) may be affected by the use of standard transportation aging and disposal canisters (STADs), early or late start of the repository, and the repository emplacement thermal power limits. The impacts were evaluated with regard to the availability of the DPCs for emplacement, achievable repository acceptance rates, additional storage required at an interim storage facility (ISF) and additional emplacement time compared to the corresponding repackaging scenarios, and fuel age at emplacement. The results of this analysis demonstrated that the biggest difference in the availability of UNF for emplacement between the DPC-only loading scenario and the DPCs and STADs loading scenario is for a repository start date of 2036 with a 6 kW thermal power limit. Differences are also seen in the availability of UNF for emplacement between the DPC-only loading scenario and the DPCs and STADs loading scenario for the alternative with a 6 kW thermal limit and a 2048 start date, and for the alternatives with a 10 kW thermal limit and 2036 and 2048 start dates. The alternatives with disposal of UNF in both DPCs and STADs did not require additional storage, regardless of the repository acceptance rate, as compared to the reference repackaging case. In comparison to the reference repackaging case, alternatives with the 18 kW emplacement thermal limit required little to no additional emplacement time, regardless of the repository start time, the fuel loading scenario, or the repository acceptance rate. Alternatives with the 10 kW emplacement thermal limit and the DPCs and STADs fuel loading scenario required some additional emplacement time. The most significant decrease in additional emplacement time occurred in the alternative with the 6 kW thermal limit and the 2036 repository starting date. The average fuel age at emplacement ranges from 46 to 88 years. 
The maximum fuel age at emplacement ranges from 81 to 146 years. The difference in the average and maximum age of fuel at emplacement between the DPC-only and the DPCs and STADs fuel loading scenarios becomes less significant as the repository thermal limit increases and as the repository start date moves later. In general, the role of STADs is to store young (30 years or younger), high-burnup (45 GWD/MTU or higher) fuel. Recommendations for future study include detailed evaluation of the feasible alternatives with regard to costs and factors not considered in this analysis, such as worker dose, dose to members of the public, and economic benefits to host entities. It is also recommended to conduct an additional analysis to evaluate the assumption regarding the transportability and disposability of DPCs for the next iteration of the direct disposal of DPCs study.

  19. ARAMIS project: a comprehensive methodology for the identification of reference accident scenarios in process industries.

    PubMed

    Delvosalle, Christian; Fievez, Cécile; Pipart, Aurore; Debray, Bruno

    2006-03-31

    In the frame of the Accidental Risk Assessment Methodology for Industries (ARAMIS) project, this paper aims at presenting the work carried out in the part of the project devoted to the definition of accident scenarios. This topic is a key-point in risk assessment and serves as basis for the whole risk quantification. The first result of the work is the building of a methodology for the identification of major accident hazards (MIMAH), which is carried out with the development of generic fault and event trees based on a typology of equipment and substances. The term "major accidents" must be understood as the worst accidents likely to occur on the equipment, assuming that no safety systems are installed. A second methodology, called methodology for the identification of reference accident scenarios (MIRAS) takes into account the influence of safety systems on both the frequencies and possible consequences of accidents. This methodology leads to identify more realistic accident scenarios. The reference accident scenarios are chosen with the help of a tool called "risk matrix", crossing the frequency and the consequences of accidents. This paper presents both methodologies and an application on an ethylene oxide storage.
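The risk-matrix step that MIRAS uses to select reference accident scenarios, crossing an accident's frequency class against its consequence class, can be sketched roughly as follows. The class labels and the selection rule are illustrative placeholders, not the actual ARAMIS classes or thresholds:

```python
# Minimal sketch of a "risk matrix" used to pick reference scenarios:
# frequency and consequence classes are ranked, and their combined rank
# decides whether an accident scenario is retained for detailed assessment.
# Class names and the cutoffs below are hypothetical.

FREQ_CLASSES = ["very unlikely", "unlikely", "possible", "probable"]
CONS_CLASSES = ["minor", "serious", "severe", "catastrophic"]

def risk_cell(frequency, consequence):
    """Combine frequency and consequence ranks into a qualitative risk level."""
    score = FREQ_CLASSES.index(frequency) + CONS_CLASSES.index(consequence)
    if score >= 5:
        return "reference scenario"   # retained for detailed risk assessment
    if score >= 3:
        return "to be examined"
    return "negligible"

print(risk_cell("probable", "severe"))      # reference scenario
print(risk_cell("very unlikely", "minor"))  # negligible
```

The point of the matrix is that neither frequency nor consequence alone selects a scenario; only their combination does, which is what makes the MIRAS scenarios more realistic than the worst-case MIMAH ones.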

  20. Organ and effective dose rate coefficients for submersion exposure in occupational settings

    DOE PAGES

    Veinot, K. G.; Y-12 National Security Complex, Oak Ridge, TN; Dewji, S. A.; ...

    2017-08-24

    External dose coefficients for environmental exposure scenarios are often computed using assumptions of infinite or semi-infinite radiation sources. For example, in the case of a person standing on contaminated ground, the source is assumed to be distributed at a given depth (or between various depths) and extending outwards to an essentially infinite distance. In the case of exposure to contaminated air, the person is modeled as standing within a cloud of infinite, or semi-infinite, source distribution. However, these scenarios do not mimic common workplace environments, where scatter off walls and ceilings may significantly alter the energy spectrum and dose coefficients. In this study, dose rate coefficients were calculated using the International Commission on Radiological Protection (ICRP) reference voxel phantoms positioned in rooms of three sizes representing an office, laboratory, and warehouse. For each room size, calculations using the reference phantoms were performed for photons, electrons, and positrons as the source particles to derive mono-energetic dose rate coefficients. Since the voxel phantoms lack the resolution to perform dose calculations at the sensitive depth for the skin, a mathematical phantom was developed and calculations were performed in each room size with the three source particle types. Coefficients for the noble gas radionuclides of ICRP Publication 107 (e.g., Ne, Ar, Kr, Xe, and Rn) were generated by folding the corresponding photon, electron, and positron emissions over the mono-energetic dose rate coefficients. Finally, results indicate that the smaller room sizes have a significant impact on the dose rate per unit air concentration compared to the semi-infinite cloud case. For example, for Kr-85 the warehouse dose rate coefficient is 7% higher than the office dose rate coefficient, while it is 71% higher for Xe-133.
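The folding step described above, producing a per-nuclide coefficient as the yield-weighted sum of interpolated mono-energetic coefficients over the nuclide's emission lines, can be sketched as follows. All energies, yields, and coefficient values are made up for illustration; they are not ICRP data:

```python
# Sketch of "folding" a radionuclide's discrete emission spectrum over
# mono-energetic dose rate coefficients: the nuclide coefficient is the
# yield-weighted sum of per-energy coefficients interpolated from a grid.

import bisect

def interp(x, xs, ys):
    """Linear interpolation of y(x) on a sorted grid xs, clamped at the ends."""
    i = bisect.bisect_left(xs, x)
    if i == 0:
        return ys[0]
    if i == len(xs):
        return ys[-1]
    t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
    return ys[i - 1] + t * (ys[i] - ys[i - 1])

def fold(emissions, energy_grid, coeff_grid):
    """Yield-weighted sum of mono-energetic coefficients over emission lines."""
    return sum(y * interp(e, energy_grid, coeff_grid) for e, y in emissions)

# mono-energetic photon coefficients for one room size (invented values)
grid_e = [0.1, 0.5, 1.0, 2.0]   # MeV
grid_c = [0.2, 1.0, 1.8, 3.0]   # dose rate per unit air concentration

# hypothetical nuclide: two photon lines as (energy in MeV, yield per decay)
nuclide = [(0.514, 0.004), (1.0, 0.5)]
print(fold(nuclide, grid_e, grid_c))
```

In the study this folding would be repeated per particle type (photon, electron, positron) and per room size, then summed, which is why the room-dependent mono-energetic coefficients dominate the final nuclide-specific differences.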

  1. Organ and effective dose rate coefficients for submersion exposure in occupational settings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Veinot, K. G.; Y-12 National Security Complex, Oak Ridge, TN; Dewji, S. A.

    External dose coefficients for environmental exposure scenarios are often computed using assumptions of infinite or semi-infinite radiation sources. For example, in the case of a person standing on contaminated ground, the source is assumed to be distributed at a given depth (or between various depths) and extending outwards to an essentially infinite distance. In the case of exposure to contaminated air, the person is modeled as standing within a cloud of infinite, or semi-infinite, source distribution. However, these scenarios do not mimic common workplace environments, where scatter off walls and ceilings may significantly alter the energy spectrum and dose coefficients. In this study, dose rate coefficients were calculated using the International Commission on Radiological Protection (ICRP) reference voxel phantoms positioned in rooms of three sizes representing an office, laboratory, and warehouse. For each room size, calculations using the reference phantoms were performed for photons, electrons, and positrons as the source particles to derive mono-energetic dose rate coefficients. Since the voxel phantoms lack the resolution to perform dose calculations at the sensitive depth for the skin, a mathematical phantom was developed and calculations were performed in each room size with the three source particle types. Coefficients for the noble gas radionuclides of ICRP Publication 107 (e.g., Ne, Ar, Kr, Xe, and Rn) were generated by folding the corresponding photon, electron, and positron emissions over the mono-energetic dose rate coefficients. Finally, results indicate that the smaller room sizes have a significant impact on the dose rate per unit air concentration compared to the semi-infinite cloud case. For example, for Kr-85 the warehouse dose rate coefficient is 7% higher than the office dose rate coefficient, while it is 71% higher for Xe-133.

  2. Problems encountered when defining Arctic amplification as a ratio

    PubMed Central

    Hind, Alistair; Zhang, Qiong; Brattström, Gudrun

    2016-01-01

    In climate change science the term ‘Arctic amplification’ has become synonymous with an estimation of the ratio of a change in Arctic temperatures compared with a broader reference change under the same period, usually in global temperatures. Here, it is shown that this definition of Arctic amplification comes with a suite of difficulties related to the statistical properties of the ratio estimator itself. Most problematic is the complexity of categorizing uncertainty in Arctic amplification when the global, or reference, change in temperature is close to 0 over a period of interest, in which case it may be impossible to set bounds on this uncertainty. An important conceptual distinction is made between the ‘Ratio of Means’ and ‘Mean Ratio’ approaches to defining a ratio estimate of Arctic amplification, as they do not only possess different uncertainty properties regarding the amplification factor, but are also demonstrated to ask different scientific questions. Uncertainty in the estimated range of the Arctic amplification factor using the latest global climate models and climate forcing scenarios is expanded upon and shown to be greater than previously demonstrated for future climate projections, particularly using forcing scenarios with lower concentrations of greenhouse gases. PMID:27461918
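The conceptual distinction between the "Ratio of Means" and "Mean Ratio" estimators can be made concrete with a small numeric sketch. The temperature-change samples below are invented, standing in for paired Arctic and global changes across model runs:

```python
# "Ratio of Means" divides the mean Arctic change by the mean global change;
# "Mean Ratio" averages the per-run ratios. They generally differ, and the
# latter is unstable whenever a single reference change is near zero.

arctic = [2.0, 1.5, 2.5, 1.8]    # Arctic temperature change per model run (K)
global_ = [0.8, 0.5, 1.1, 0.6]   # global (reference) change, same runs (K)

ratio_of_means = (sum(arctic) / len(arctic)) / (sum(global_) / len(global_))
mean_ratio = sum(a / g for a, g in zip(arctic, global_)) / len(arctic)

print(ratio_of_means, mean_ratio)
```

The two numbers answer different questions: the first describes amplification of the aggregate signal, the second the typical amplification within a run; and if any entry of `global_` approached 0, `mean_ratio` would diverge, which is exactly the uncertainty-bounding problem the abstract highlights.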

  3. Problems encountered when defining Arctic amplification as a ratio.

    PubMed

    Hind, Alistair; Zhang, Qiong; Brattström, Gudrun

    2016-07-27

    In climate change science the term 'Arctic amplification' has become synonymous with an estimation of the ratio of a change in Arctic temperatures compared with a broader reference change under the same period, usually in global temperatures. Here, it is shown that this definition of Arctic amplification comes with a suite of difficulties related to the statistical properties of the ratio estimator itself. Most problematic is the complexity of categorizing uncertainty in Arctic amplification when the global, or reference, change in temperature is close to 0 over a period of interest, in which case it may be impossible to set bounds on this uncertainty. An important conceptual distinction is made between the 'Ratio of Means' and 'Mean Ratio' approaches to defining a ratio estimate of Arctic amplification, as they do not only possess different uncertainty properties regarding the amplification factor, but are also demonstrated to ask different scientific questions. Uncertainty in the estimated range of the Arctic amplification factor using the latest global climate models and climate forcing scenarios is expanded upon and shown to be greater than previously demonstrated for future climate projections, particularly using forcing scenarios with lower concentrations of greenhouse gases.

  4. A new item response theory model to adjust data allowing examinee choice

    PubMed Central

    Costa, Marcelo Azevedo; Braga Oliveira, Rivert Paulo

    2018-01-01

    In a typical questionnaire testing situation, examinees are not allowed to choose which items they answer because of a technical issue in obtaining satisfactory statistical estimates of examinee ability and item difficulty. This paper introduces a new item response theory (IRT) model that incorporates information from a novel representation of questionnaire data using network analysis. Three scenarios in which examinees select a subset of items were simulated. In the first scenario, the assumptions required to apply the standard Rasch model are met, thus establishing a reference for parameter accuracy. The second and third scenarios include five increasing levels of violating those assumptions. The results show substantial improvements over the standard model in item parameter recovery. Furthermore, the accuracy was closer to the reference in almost every evaluated scenario. To the best of our knowledge, this is the first proposal to obtain satisfactory IRT statistical estimates in the last two scenarios. PMID:29389996
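The standard Rasch model, used above as the reference for parameter accuracy, can be stated in a few lines: the probability of a correct response depends only on the difference between examinee ability and item difficulty.

```python
# Minimal sketch of the standard Rasch model: P(correct) for examinee
# ability theta and item difficulty b is the logistic of (theta - b).

import math

def rasch_p(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# an examinee of average ability facing an average-difficulty item
print(rasch_p(0.0, 0.0))  # 0.5
```

The difficulty with examinee choice is that which `(theta, b)` pairs are observed is no longer random, biasing the usual estimates; the proposed model counters this by feeding a network representation of who chose which items into the estimation.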

  5. Evaluating hydrological response to forecasted land-use change—scenario testing with the automated geospatial watershed assessment (AGWA) tool

    USGS Publications Warehouse

    Kepner, William G.; Semmens, Darius J.; Hernandez, Mariano; Goodrich, David C.

    2009-01-01

    Envisioning and evaluating future scenarios has emerged as a critical component of both science and social decision-making. The ability to assess, report, map, and forecast the life support functions of ecosystems is absolutely critical to our capacity to make informed decisions to maintain the sustainable nature of our ecosystem services now and into the future. During the past two decades, important advances in the integration of remote imagery, computer processing, and spatial-analysis technologies have been used to develop landscape information that can be integrated with hydrologic models to determine long-term change and make predictive inferences about the future. Two diverse case studies in northwest Oregon (Willamette River basin) and southeastern Arizona (San Pedro River) were examined in regard to future land use scenarios relative to their impact on surface water conditions (e.g., sediment yield and surface runoff) using hydrologic models associated with the Automated Geospatial Watershed Assessment (AGWA) tool. The base reference grid for land cover was modified in both study locations to reflect stakeholder preferences 20 to 60 yrs into the future, and the consequences of landscape change were evaluated relative to the selected future scenarios. The two studies provide examples of integrating hydrologic modeling with a scenario analysis framework to evaluate plausible future forecasts and to understand the potential impact of landscape change on ecosystem services.

  6. Application of the BRAFO tiered approach for benefit-risk assessment to case studies on heat processing contaminants.

    PubMed

    Schütte, Katrin; Boeing, Heiner; Hart, Andy; Heeschen, Walther; Reimerdes, Ernst H; Santare, Dace; Skog, Kerstin; Chiodini, Alessandro

    2012-11-01

    The aim of the European-funded BRAFO (Benefit-Risk Analysis of Foods) project was to develop a framework that allows quantitative comparison of human health risks and benefits of foods based on a common scale of measurement. This publication describes the application of the BRAFO methodology to three different case studies: the formation of acrylamide in potato- and cereal-based products, the formation of benzo(a)pyrene through smoking and grilling of meat and fish, and the heat treatment of milk. The reference scenario, alternative scenario, and target population formed the basic structure used to test the tiers of the framework. Various intervention methods intended to reduce acrylamide in potato and cereal products were evaluated against the historical production methods. In conclusion, the benefits of the acrylamide-reducing measures were considered to prevail. For benzo(a)pyrene, three illustrative alternative scenarios were evaluated against the most common smoking practice. The alternative scenarios were assessed as delivering benefits while introducing only minimal potential risks. Similar considerations were made for the heat treatment of milk, where the microbiological effects of heat treatment were weighed against the physico-chemical changes of milk constituents with positive and negative health effects. In general, based on the data available, the benefits of the heat treatment outweighed any risks. Copyright © 2012 ILSI Europe. Published by Elsevier Ltd. All rights reserved.

  7. 3-D numerical evaluation of density effects on tracer tests.

    PubMed

    Beinhorn, M; Dietrich, P; Kolditz, O

    2005-12-01

    In this paper we present numerical simulations carried out to assess the importance of density-dependent flow on tracer plume development. The scenario considered in the study is characterized by a short-term tracer injection phase into a fully penetrating well and a natural hydraulic gradient. The scenario is thought to be typical for tracer tests conducted in the field. Using a reference case as a starting point, different model parameters were changed in order to determine their importance to density effects. The study is based on a three-dimensional model domain. Results were interpreted using concentration contours and a first moment analysis. Tracer injections of 0.036 kg per meter of saturated aquifer thickness do not cause significant density effects assuming hydraulic gradients of at least 0.1%. Higher tracer input masses, as used for geoelectrical investigations, may lead to buoyancy-induced flow in the early phase of a tracer test which in turn impacts further plume development. This also holds true for shallow aquifers. Results of simulations with different tracer injection rates and durations imply that the tracer input scenario has a negligible effect on density flow. Employing model cases with different realizations of a log conductivity random field, it could be shown that small variations of hydraulic conductivity in the vicinity of the tracer injection well have a major control on the local tracer distribution but do not mask effects of buoyancy-induced flow.
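The first-moment analysis used above to interpret the simulated plumes reduces to a concentration-weighted mean coordinate (the plume's centre of mass). A minimal sketch, with an invented discretized plume:

```python
# First spatial moment of a tracer plume: the concentration-weighted mean
# position over grid cells. Shifts of this centre of mass over time reveal
# buoyancy-induced sinking that raw concentration maps can obscure.

def first_moment(cells):
    """cells: list of ((x, y, z), concentration). Returns the centre of mass."""
    total = sum(c for _, c in cells)
    return tuple(sum(p[i] * c for p, c in cells) / total for i in range(3))

# tiny invented plume: three cells with coordinates (m) and concentrations
plume = [((0.0, 0.0, 1.0), 2.0), ((1.0, 0.0, 1.0), 2.0), ((2.0, 0.0, 0.0), 4.0)]
print(first_moment(plume))  # (1.25, 0.0, 0.5)
```

A density effect would show up here as the z-component of the moment drifting downward between output times even while the x-drift follows the hydraulic gradient.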

  8. SU-F-BRA-01: A Procedure for the Fast Semi-Automatic Localization of Catheters Using An Electromagnetic Tracker (EMT) for Image-Guided Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Damato, A; Viswanathan, A; Cormack, R

    2015-06-15

    Purpose: To evaluate the feasibility of brachytherapy catheter localization through use of an EMT and a 3D image set. Methods: A 15-catheter phantom mimicking an interstitial implantation was built and CT-scanned. Baseline catheter reconstruction was performed manually. An EMT was used to acquire the catheter coordinates in the EMT frame of reference. N user-identified catheter tips, without catheter number associations, were used to establish registration with the CT frame of reference. Two algorithms were investigated: brute-force registration (BFR), in which all possible permutations of N identified tips with the EMT tips were evaluated; and signature-based registration (SBR), in which a distance matrix was used to generate a list of matching signatures describing possible N-point matches with the registration points. Digitization error (average of the distance between corresponding EMT and baseline dwell positions; average, standard deviation, and worst-case scenario over all possible registration-point selections) and algorithm inefficiency (maximum number of rigid registrations required to find the matching fusion for all possible selections of registration points) were calculated. Results: Digitization errors on average <2 mm were observed for N ≥5, with standard deviation <2 mm for N ≥6, and worst-case scenario error <2 mm for N ≥11. Algorithm inefficiencies were: N = 5, 32,760 (BFR) and 9900 (SBR); N = 6, 360,360 (BFR) and 21,660 (SBR); N = 11, 5.45 × 10^10 (BFR) and 12 (SBR). Conclusion: A procedure was proposed for catheter reconstruction using EMT, requiring only user identification of catheter tips without catheter number associations. Digitization errors <2 mm were observed on average with 5 or more registration points, and in any scenario with 11 or more points. Inefficiency for N = 11 was 9 orders of magnitude lower for SBR than for BFR. Funding: Kaye Family Award.
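The signature idea behind SBR, screening candidate tip correspondences by their pairwise distances (which are invariant under rigid transforms) instead of running a full rigid registration for every permutation, can be sketched as follows. The point sets are invented, not the phantom data:

```python
# Sketch of distance-signature matching: a candidate assignment of EMT tips
# to CT tips is accepted only if it preserves all pairwise distances, since
# rigid transforms (rotation + translation) leave distances unchanged.

import itertools
import math

def matches(emt_pts, ct_pts, perm, tol=1e-6):
    """True if permutation perm preserves every pairwise distance within tol."""
    return all(
        abs(math.dist(emt_pts[i], emt_pts[j])
            - math.dist(ct_pts[perm[i]], ct_pts[perm[j]])) <= tol
        for i, j in itertools.combinations(range(len(emt_pts)), 2)
    )

def find_correspondence(emt_pts, ct_pts):
    """Return the first distance-preserving assignment of EMT to CT points."""
    for perm in itertools.permutations(range(len(ct_pts)), len(emt_pts)):
        if matches(emt_pts, ct_pts, perm):
            return perm
    return None

# CT tip positions, and the same tips renumbered and shifted by (10, 10, 5)
ct = [(0, 0, 0), (3, 0, 0), (0, 4, 0)]
emt = [(10, 14, 5), (10, 10, 5), (13, 10, 5)]
print(find_correspondence(emt, ct))  # (2, 0, 1)
```

This sketch still enumerates permutations, but because the distance check is cheap it prunes almost everything before any expensive rigid registration would run, which is the source of SBR's orders-of-magnitude efficiency gain.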

  9. Assessing the Formation of Experience-Based Gender Expectations in an Implicit Learning Scenario

    PubMed Central

    Öttl, Anton; Behne, Dawn M.

    2017-01-01

    The present study investigates the formation of new word-referent associations in an implicit learning scenario, using a gender-coded artificial language with spoken words and visual referents. Previous research has shown that when participants are explicitly instructed about the gender-coding system underlying an artificial lexicon, they monitor the frequency of exposure to male vs. female referents within this lexicon, and subsequently use this probabilistic information to predict the gender of an upcoming referent. In an explicit learning scenario, the auditory and visual gender cues are necessarily highlighted prior to acquisition, and the effects previously observed may therefore depend on participants' overt awareness of these cues. To assess whether the formation of experience-based expectations is dependent on explicit awareness of the underlying coding system, we present data from an experiment in which gender-coding was acquired implicitly, thereby reducing the likelihood that visual and auditory gender cues are used strategically during acquisition. Results show that even if the gender coding system was not perfectly mastered (as reflected in the number of gender coding errors), participants develop frequency-based expectations comparable to those previously observed in an explicit learning scenario. In line with previous findings, participants are quicker at recognizing a referent whose gender is consistent with an induced expectation than one whose gender is inconsistent with an induced expectation. At the same time, however, eyetracking data suggest that these expectations may surface earlier in an implicit learning scenario. These findings suggest that experience-based expectations are robust against manner of acquisition, and contribute to understanding why similar expectations observed in the activation of stereotypes during the processing of natural language stimuli are difficult or impossible to suppress. PMID:28936186

  10. Evolution of forensic odontology: An overview

    PubMed Central

    Balachander, N.; Babu, N. Aravindha; Jimson, Sudha; Priyadharsini, C.; Masthan, K. M. K.

    2015-01-01

    Forensic dentistry, or forensic odontology, involves dentists' participation in identifying victims and assisting with legal and criminal issues. It refers to the proper handling, examination, identification and evaluation of dental evidence. This article summarizes the evolution of forensic odontology, from the Garden of Eden to the modern scenario of identifying the victims in the gang rape case that happened in the state capital. Forensic dentistry plays a significant role in identifying victims of crime and deceased individuals through the examination of anatomical structures, dental appliances and dental restorations. PMID:26015703

  11. Identifying and treating a life-threatening disease.

    PubMed

    Cole, Beverley

    2014-02-01

    Meningococcal septicaemia is a life-threatening condition that all nurses working in emergency and urgent care settings are likely to come across during their careers. This article presents, and reflects on, a case study involving a woman with the disease whose signs and symptoms were atypical, and who was not therefore diagnosed with the condition immediately. The author aims to raise awareness among emergency nurses and nurse practitioners of the atypical signs and symptoms of the infection, and its consequences. The article also discusses how referring to patient scenarios can improve practice.

  12. Large Ensemble Analytic Framework for Consequence-Driven Discovery of Climate Change Scenarios

    NASA Astrophysics Data System (ADS)

    Lamontagne, Jonathan R.; Reed, Patrick M.; Link, Robert; Calvin, Katherine V.; Clarke, Leon E.; Edmonds, James A.

    2018-03-01

    An analytic scenario generation framework is developed based on the idea that the same climate outcome can result from very different socioeconomic and policy drivers. The framework builds on the Scenario Matrix Framework's abstraction of "challenges to mitigation" and "challenges to adaptation" to facilitate the flexible discovery of diverse and consequential scenarios. We combine visual and statistical techniques for interrogating a large factorial data set of 33,750 scenarios generated using the Global Change Assessment Model. We demonstrate how the analytic framework can aid in identifying which scenario assumptions are most strongly tied to user-specified measures of policy-relevant outcomes of interest (in our example, high or low mitigation costs). We show that the current approach for selecting reference scenarios can miss policy-relevant scenario narratives that often emerge as hybrids of optimistic and pessimistic scenario assumptions. We also show that the same scenario assumption can be associated with both high and low mitigation costs depending on the climate outcome of interest and the mitigation policy context. In the illustrative example, we show that agricultural productivity, population growth, and economic growth are most predictive of the level of mitigation costs. Formulating policy-relevant scenarios of deeply and broadly uncertain futures benefits from large ensemble-based exploration of quantitative measures of consequences. To this end, we have contributed a large database of climate change futures that can support "bottom-up" scenario generation techniques that capture a broader array of consequences than those that emerge from limited sampling of a few reference scenarios.

  13. Agricultural Baseline (BL0) scenario

    DOE Data Explorer

    Davis, Maggie R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)] (ORCID:0000000181319328); Hellwinckel, Chad M [University of Tennessee] (ORCID:0000000173085058); Eaton, Laurence [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)] (ORCID:0000000312709626); Turhollow, Anthony [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)] (ORCID:0000000228159350); Brandt, Craig [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)] (ORCID:0000000214707379); Langholtz, Matthew H. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)] (ORCID:0000000281537154)

    2016-07-13

    Scientific reason for data generation: to serve as the reference case for the BT16 volume 1 agricultural scenarios. The agricultural baseline runs from 2015 through 2040; a starting year of 2014 is used.
    Date the data set was last modified: 02/12/2016.
    How each parameter was produced (methods), format, and relationship to other data in the data set: the simulation was developed without offering a farmgate price to energy crops or residues (i.e., building on both the USDA 2015 baseline and the agricultural census data, USDA NASS 2014). Data generated are .txt output files by year, simulation identifier, and county code (1-3109).
    Instruments used: POLYSYS (version POLYS2015_V10_alt_JAN22B) supplied by the University of Tennessee APAC.
    The quality assurance and quality control that have been applied:
    • Check for negative planted area, harvested area, production, yield and cost values.
    • Check if harvested area exceeds planted area for annuals.
    • Check FIPS codes.

  14. New trends in transportation and land use scenario planning : five case studies of regional and local scenario planning efforts

    DOT National Transportation Integrated Search

    2010-04-01

    This report summarizes important findings from a literature review on scenario planning processes and a scan of stakeholders. It also presents case studies on innovative, next generation scenario planning efforts. The project team defined next ...

  15. Effect of pesticide fate parameters and their uncertainty on the selection of 'worst-case' scenarios of pesticide leaching to groundwater.

    PubMed

    Vanderborght, Jan; Tiktak, Aaldrik; Boesten, Jos J T I; Vereecken, Harry

    2011-03-01

    For the registration of pesticides in the European Union, model simulations for worst-case scenarios are used to demonstrate that leaching concentrations to groundwater do not exceed a critical threshold. A worst-case scenario is a combination of soil and climate properties for which predicted leaching concentrations are higher than a certain percentile of the spatial concentration distribution within a region. The derivation of scenarios is complicated by uncertainty about soil and pesticide fate parameters. As the ranking of climate and soil property combinations according to predicted leaching concentrations is different for different pesticides, the worst-case scenario for one pesticide may misrepresent the worst case for another pesticide, which leads to 'scenario uncertainty'. Pesticide fate parameter uncertainty led to higher concentrations in the higher percentiles of spatial concentration distributions, especially for distributions in smaller and more homogeneous regions. The effect of pesticide fate parameter uncertainty on the spatial concentration distribution was small when compared with the uncertainty of local concentration predictions and with the scenario uncertainty. Uncertainty in pesticide fate parameters and scenario uncertainty can be accounted for using higher percentiles of spatial concentration distributions and considering a range of pesticides for the scenario selection. Copyright © 2010 Society of Chemical Industry.

  16. The impact of demand management strategies on parents’ decision-making for out-of-hours primary care: findings from a survey in The Netherlands

    PubMed Central

    Giesen, Marie-Jeanne; Keizer, Ellen; van de Pol, Julia; Knoben, Joris; Wensing, Michel; Giesen, Paul

    2017-01-01

    Objective To explore the potential impact of demand management strategies on patient decision-making in medically non-urgent and urgent scenarios during out-of-hours care for children between the ages of 0 and 4 years. Design and methods We conducted a cross-sectional survey with paper-based case scenarios. A survey was sent to all 797 parents of children aged between 0 and 4 years from four Dutch general practitioner (GP) practices. Four demand management strategies (copayment, online advice, overview of medical costs and GP appointment the next morning) were incorporated in two medically non-urgent and two urgent case scenarios. Combining the case scenarios with the demand management strategies resulted in 16 cases (four scenarios, each with four demand management strategies). Each parent randomly received a questionnaire with three different case scenarios with three different demand strategies and a baseline case scenario without a demand management strategy. Results The response rate was 47.4%. The online advice strategy led to more medically appropriate decision-making for both non-urgent case scenarios (OR 0.26; 95% CI 0.11 to 0.58) and urgent case scenarios (OR 0.16; 95% CI 0.08 to 0.32). An overview of medical costs (OR 0.59; 95% CI 0.38 to 0.92) and a GP appointment planned for the next morning (OR 0.57; 95% CI 0.34 to 0.97) had some influence on patient decisions for urgent cases, but not for non-urgent cases. Copayment had no influence on patient decisions. Conclusion Online advice has the highest potential to reduce medically unnecessary use. Furthermore, it enhanced the safety of parents' decisions on seeking help for their young children during out-of-hours primary care. Valid online information on health symptoms for patients should be promoted. PMID:28487458

  17. The impact of demand management strategies on parents' decision-making for out-of-hours primary care: findings from a survey in The Netherlands.

    PubMed

    Giesen, Marie-Jeanne; Keizer, Ellen; van de Pol, Julia; Knoben, Joris; Wensing, Michel; Giesen, Paul

    2017-05-09

    To explore the potential impact of demand management strategies on patient decision-making in medically non-urgent and urgent scenarios during out-of-hours care for children between the ages of 0 and 4 years. We conducted a cross-sectional survey with paper-based case scenarios. A survey was sent to all 797 parents of children aged between 0 and 4 years from four Dutch general practitioner (GP) practices. Four demand management strategies (copayment, online advice, overview of medical costs and GP appointment the next morning) were incorporated in two medically non-urgent and two urgent case scenarios. Combining the case scenarios with the demand management strategies resulted in 16 cases (four scenarios, each with four demand management strategies). Each parent randomly received a questionnaire with three different case scenarios with three different demand strategies and a baseline case scenario without a demand management strategy. The response rate was 47.4%. The online advice strategy led to more medically appropriate decision-making for both non-urgent case scenarios (OR 0.26; CI 0.11 to 0.58) and urgent case scenarios (OR 0.16; CI 0.08 to 0.32). An overview of medical costs (OR 0.59; CI 0.38 to 0.92) and a GP appointment planned for the next morning (OR 0.57; CI 0.34 to 0.97) had some influence on patient decisions for urgent cases, but not for non-urgent cases. Copayment had no influence on patient decisions. Online advice has the highest potential to reduce medically unnecessary use. Furthermore, it enhanced the safety of parents' decisions on seeking help for their young children during out-of-hours primary care. Valid online information on health symptoms for patients should be promoted. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
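    The odds ratios above come from the study's analysis, but the basic computation of an odds ratio and its 95% confidence interval from a 2×2 table can be sketched in a few lines. This is a minimal illustration using the common Woolf (log-OR) standard error and hypothetical counts, not the survey's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI for a 2x2 table (Woolf/log method):
    a = exposed with event, b = exposed without event,
    c = unexposed with event, d = unexposed without event."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of ln(OR)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# Hypothetical counts for illustration only:
or_, lower, upper = odds_ratio_ci(10, 90, 30, 70)
```

    An OR below 1 with a CI excluding 1 (as for the online advice strategy) indicates a statistically significant reduction in the odds of the outcome.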

  18. Making Energy-Water Nexus Scenarios more Fit-for-Purpose through Better Characterization of Extremes

    NASA Astrophysics Data System (ADS)

    Yetman, G.; Levy, M. A.; Chen, R. S.; Schnarr, E.

    2017-12-01

    Often quantitative scenarios of future trends exhibit less variability than the historic data upon which the models that generate them are based. The problem of dampened variability, which typically also entails dampened extremes, manifests both temporally and spatially. As a result, risk assessments that rely on such scenarios are in danger of producing misleading results. This danger is pronounced in nexus issues because of the multiple dimensions of change that are relevant. We illustrate the above problem by developing alternative joint distributions of the probability of drought and of human population totals across U.S. counties over the period 2010-2030. For the dampened-extremes case we use drought frequencies derived from climate models used in the U.S. National Climate Assessment and the Environmental Protection Agency's population and land use projections contained in its Integrated Climate and Land Use Scenarios (ICLUS). For the elevated-extremes case we use an alternative spatial drought frequency estimate based on tree-ring data, covering a 555-year period (Ho et al., 2017), and we introduce greater temporal and spatial extremes in the ICLUS socioeconomic projections so that they conform to observed extremes in the historical U.S. spatial census data, 1790-present (National Historical Geographic Information System). We use spatial and temporal coincidence of high population and extreme drought as a proxy for energy-water nexus risk. We compare the representation of risk in the dampened-extreme and elevated-extreme scenario analyses. We identify areas of the country where using more realistic portrayals of extremes makes the biggest difference in estimated risk and suggest implications for future risk assessments. References: Michelle Ho, Upmanu Lall, Xun Sun, Edward R. Cook. 2017. Multiscale temporal variability and regional patterns in 555 years of conterminous U.S. streamflow. Water Resources Research. doi: 10.1002/2016WR019632

  19. Use of HRP-2-based rapid diagnostic test for Plasmodium falciparum malaria: assessing accuracy and cost-effectiveness in the villages of Dielmo and Ndiop, Senegal.

    PubMed

    Ly, Alioune Badara; Tall, Adama; Perry, Robert; Baril, Laurence; Badiane, Abdoulaye; Faye, Joseph; Rogier, Christophe; Touré, Aissatou; Sokhna, Cheikh; Trape, Jean-François; Michel, Rémy

    2010-06-04

    In 2006, the Senegalese National Malaria Control Programme (NMCP) recommended artemisinin-based combination therapy (ACT) as the first-line treatment for uncomplicated malaria and, in 2007, mandated testing of all suspected cases of malaria with a Plasmodium falciparum HRP-2-based rapid diagnostic test (RDT; Paracheck). Given the higher cost of ACT compared to earlier anti-malarials, the objectives of the present study were i) to study the accuracy of Paracheck compared to the thick blood smear (TBS) in two areas with different levels of malaria endemicity and ii) to analyse the cost-effectiveness of the NMCP-recommended strategy of parasitological confirmation of clinically suspected malaria cases. A cross-sectional study was undertaken in the villages of Dielmo and Ndiop (Senegal), nested in a cohort study of about 800 inhabitants. For all individuals consulting between October 2008 and January 2009 with a clinical diagnosis of malaria, a questionnaire was completed and finger-prick blood samples were taken for both microscopic examination and RDT. Cost and cost-effectiveness analyses considered five scenarios, with the NMCP recommendation as the reference scenario. In addition, a sensitivity analysis was performed assuming that all RDT-positive patients and 50% of RDT-negative patients were treated with ACT. A total of 189 consultations for clinically suspected malaria occurred during the study period. The sensitivity, specificity, positive and negative predictive values were 100%, 98.3%, 80.0% and 100%, respectively. The estimated cost of the reference scenario was close to 700 euros per 1000 episodes of illness, approximately twice as expensive as most of the other scenarios. Nevertheless, it appeared cost-effective while ensuring the diagnosis and treatment of 100% of malaria attacks and adequate management of 98.4% of episodes of illness. The present study also demonstrated that full compliance of health care providers with RDT results is required to avoid severe incremental costs. A rational use of ACT requires laboratory testing of all patients presenting with presumed malaria. Use of RDTs inevitably has incremental costs, but the strategy of testing all clinically suspected malaria with RDTs and prescribing ACT only to patients who test positive is cost-effective in areas where microscopy is unavailable.
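    The accuracy measures reported above follow directly from a 2×2 confusion matrix of the RDT against the thick-smear gold standard. A minimal sketch of that computation; the counts below are hypothetical, chosen only to be consistent with the reported percentages and the 189 consultations, and are not the study's raw data:

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Standard accuracy measures of a binary test against a gold standard."""
    sensitivity = tp / (tp + fn)   # true positives among all diseased
    specificity = tn / (tn + fp)   # true negatives among all non-diseased
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    return sensitivity, specificity, ppv, npv

# Hypothetical counts (12 + 3 + 0 + 174 = 189), for illustration only:
sens, spec, ppv, npv = diagnostic_accuracy(tp=12, fp=3, fn=0, tn=174)
```

    With zero false negatives, sensitivity and NPV are 100%, matching the pattern in the abstract; the false positives are what pull PPV down to 80%.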

  20. Budgetary Impact of Telotristat Ethyl, a Novel Treatment for Patients with Carcinoid Syndrome Diarrhea: A US Health Plan Perspective.

    PubMed

    Joish, Vijay N; Frech, Feride; Lapuerta, Pablo

    2017-12-01

    Telotristat ethyl (TE) was recently approved for carcinoid syndrome diarrhea (CSD) in patients not adequately controlled with somatostatin analog long-acting release (SSA LAR) therapy alone. A budget impact model was developed to determine the short-term affordability of reimbursing TE in a US health plan. A budget impact model compared health care costs when CSD is managed per current treatment patterns (SSA LAR, reference drug scenario) versus when TE is incorporated in the treatment algorithm (SSA LAR + TE, new drug scenario). Prevalence of CSD, proportion of patients not adequately controlled on SSA LAR, monthly treatment costs (pharmacy and medical), and treatment efficacy were derived from the literature. In the reference drug scenario, an escalated monthly dose of SSA LAR therapy of 40 mg was assumed to treat patients with CSD not adequately controlled on the labeled dose of SSA LAR. In the new drug scenario, TE was added to the maximum labeled monthly dose of SSA LAR therapy of 30 mg. The incremental budget impact was calculated based on an assumed TE market uptake of 28%, 42%, and 55% during Years 1, 2, and 3, respectively. One-way sensitivity analyses were conducted to test model assumptions. A hypothetical health plan of 1 million members was estimated to have 42 prevalent CSD patients of whom 17 would be inadequately controlled on SSA LAR therapy. The monthly medical cost per patient not adequately controlled on SSA LAR in addition to pharmacotherapy was estimated to be $3946 based on the literature. Based on the observed treatment response in a clinical trial of 20% and 44% for the base case reference and new drug scenarios, total per patient per month costs were estimated to be $7563 and $11,205, respectively. Total annual costs in the new drug scenario were estimated to be $2.3 to $2.5 million during the first 3 years. The overall incremental annual costs were estimated to be $154,000 in Year 1, $231,000 in Year 2, and $302,000 in Year 3. 
This translated to an incremental per patient per month cost of $0.013, $0.019, and $0.025 for Years 1, 2, and 3. These results remained robust in 1-way sensitivity analyses. The availability of TE for patients not adequately controlled on SSA LAR therapy provides a novel treatment option for CSD. This model showed that providing access to this first-in-class oral agent would have a minimal budget impact to a US health plan. Copyright © 2017 Elsevier HS Journals, Inc. All rights reserved.
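    The per-patient-per-month figures above follow from spreading each year's incremental annual cost over all member-months of the 1-million-member plan (the per-member-per-month convention standard in budget impact analyses). A minimal sketch of that arithmetic, using the figures stated in the abstract:

```python
def pmpm_impact(annual_incremental_cost, plan_members):
    """Incremental per-member-per-month cost: annual cost spread
    over all member-months in the plan year."""
    return annual_incremental_cost / (plan_members * 12)

# Figures from the abstract: 1-million-member plan,
# incremental annual costs for Years 1-3.
members = 1_000_000
for year, cost in [(1, 154_000), (2, 231_000), (3, 302_000)]:
    print(f"Year {year}: ${pmpm_impact(cost, members):.3f} per member per month")
# -> $0.013, $0.019, $0.025, matching the reported values
```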

  1. 30 CFR 254.47 - Determining the volume of oil of your worst case discharge scenario.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... the daily discharge rate, you must consider reservoir characteristics, casing/production tubing sizes, and historical production and reservoir pressure data. Your scenario must discuss how to respond to... drilling operations, the size of your worst case discharge scenario is the daily volume possible from an...

  2. Indoor exposure to toluene from printed matter matters: complementary views from life cycle assessment and risk assessment.

    PubMed

    Walser, Tobias; Juraske, Ronnie; Demou, Evangelia; Hellweg, Stefanie

    2014-01-01

    A pronounced presence of toluene from rotogravure printed matter has been frequently observed indoors. However, its consequences for human health over the life cycle of magazines are poorly known. Therefore, we quantified human-health risks in indoor environments with Risk Assessment (RA), and impacts relative to the total impact of toxic releases occurring in the life cycle of a magazine with Life Cycle Assessment (LCA). We used a one-box indoor model to estimate toluene concentrations in printing facilities, newsstands, and residences under best-, average-, and worst-case scenarios. The modeled concentrations are in the range of values measured in on-site campaigns. Toluene concentrations can approach or even surpass occupational legal thresholds in printing facilities under realistic worst-case scenarios. The concentrations in homes can surpass the US EPA reference dose (69 μg/kg/day) in worst-case scenarios, but are still at least 1 order of magnitude lower than in press rooms or newsstands. However, toluene inhaled at home becomes the dominant contribution to the total potential human toxicity impacts of toluene from printed matter when assessed with LCA, using the USEtox method complemented with indoor characterization factors for toluene. The significant contribution (44%) of toluene exposure in production, retail, and use in households to the total life cycle impact of a magazine in the category of human toxicity demonstrates that the indoor compartment requires particular attention in LCA. While RA works with threshold levels, LCA assumes that every toxic emission causes an incremental change to the total impact. Here, the combination of the two paradigms provides valuable information on the life cycle stages of printed matter.

  3. How dynamic number of evacuee affects the multi-objective optimum allocation for earthquake emergency shelters: A case study in the central area of Beijing, China

    NASA Astrophysics Data System (ADS)

    Ma, Y.; Xu, W.; Zhao, X.; Qin, L.

    2016-12-01

    Accurate location and allocation of earthquake emergency shelters is a key component of effective urban planning and emergency management. A number of models have been developed to solve the complex location-allocation problem with diverse and strict constraints, but there still remains a big gap between the models and the actual situation because the uncertainty of earthquakes, the damage rate of buildings, and evacuee behaviors have been neglected or excessively simplified in existing models. An innovative model was first developed to estimate the hourly dynamic changes in the number of evacuees under two earthquake damage scenarios by considering these factors at the community level, based on location-based service data; this was followed by a multi-objective model for the allocation of residents to earthquake shelters, using the central area of Beijing, China as a case study. The two objectives of this shelter allocation model were to minimize the total evacuation distance from communities to a specified shelter and to minimize the total area of all the shelters, with the constraints of shelter capacity and service radius. The modified particle swarm optimization algorithm was used to solve this model. The results show that increasing the shelter area results in a large decrease in the total evacuation distance in all of the schemes of the four scenarios (i.e., Scenarios A and B, in daytime and nighttime respectively). According to the schemes of minimum distance, parts of the communities in the downtown area needed to be reallocated due to the insufficient capacity of the nearest shelters, and the number of these communities sequentially decreased in scenarios Ad, An, Bd and Bn due to the decreasing population. According to the schemes of minimum area in each scenario, 27 or 28 shelters, covering a total area of approximately 37 km², were selected, and the communities evacuated along almost the same routes in the different scenarios. The results can serve as a scientific reference for the planning of shelters in Beijing.

  4. Does centralisation of acute obstetric care reduce intrapartum and first-week mortality? An empirical study of over 1 million births in the Netherlands.

    PubMed

    Poeran, Jashvant; Borsboom, Gerard J J M; de Graaf, Johanna P; Birnie, Erwin; Steegers, Eric A P; Mackenbach, Johan P; Bonsel, Gouke J

    2014-07-01

    In this hypothetical analysis with retrospective cohort data (1,160,708 hospital births), we estimated the outcomes of centralisation of acute obstetric care, i.e., closure of 10 hospitals (out of 99) in The Netherlands. The main outcome was predicted intrapartum and first-week mortality (further referred to as neonatal mortality) for several subgroups of patients affected by two centralisation scenarios: (1) closure of the 10 smallest hospitals; (2) closure of the 10 smallest hospitals, but avoiding adjacent closures. Predictions followed from regression coefficients of a multilevel logistic regression model. Scenario 1 resulted in doubled travel time and 10% increased mortality (from 210 [0.34%] to 231 [0.38%] cases). Scenario 2 showed less effect on mortality (268 [0.33%] to 259 [0.32%] cases) and travel time. Heterogeneity in hospital organisational features caused simultaneous improvement and deterioration of predicted neonatal mortality. Consequences vary across subgroups. We demonstrate that (in The Netherlands) centralisation of acute obstetric care according to the 'closure-of-the-smallest' rule yields suboptimal outcomes. In order to develop an optimal strategy, one would need to consider all positive and negative effects, e.g., organisational heterogeneity of closing and surviving hospitals, differential effects for patient subgroups, increased travel time, and financial aspects. The provided framework may be beneficial for other countries considering centralisation of acute obstetric care. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  5. Environmental consequences of future biogas technologies based on separated slurry.

    PubMed

    Hamelin, Lorie; Wesnæs, Marianne; Wenzel, Henrik; Petersen, Bjørn M

    2011-07-01

    This consequential life cycle assessment study highlights the key environmental aspects of producing biogas from separated pig and cow slurry, a relatively new but probable scenario for future biogas production, as it avoids the reliance on constrained carbon cosubstrates. Three scenarios involving different slurry separation technologies have been assessed and compared to a business-as-usual reference slurry management scenario. The results show that the environmental benefits of such biogas production are highly dependent upon the efficiency of the separation technology used to concentrate the volatile solids in the solid fraction. The biogas scenario involving the most efficient separation technology resulted in a dry matter separation efficiency of 87% and allowed a net reduction of the global warming potential of 40%, compared to the reference slurry management. This figure comprises the whole slurry life cycle, including the flows bypassing the biogas plant. This study includes soil carbon balances and a method for quantifying the changes in yield resulting from increased nitrogen availability as well as for quantifying mineral fertilizers displacement. Soil carbon balances showed that between 13 and 50% less carbon ends up in the soil pool with the different biogas alternatives, as opposed to the reference slurry management.

  6. Reference interval estimation: Methodological comparison using extensive simulations and empirical data.

    PubMed

    Daly, Caitlin H; Higgins, Victoria; Adeli, Khosrow; Grey, Vijay L; Hamid, Jemila S

    2017-12-01

    To statistically compare and evaluate commonly used methods of estimating reference intervals, and to determine which method is best based on characteristics of the distribution of various data sets. Three approaches for estimating reference intervals, i.e. parametric, non-parametric, and robust, were compared with simulated Gaussian and non-Gaussian data. The hierarchy of the performances of each method was examined based on bias and measures of precision. The findings of the simulation study were illustrated through real data sets. In all Gaussian scenarios, the parametric approach provided the least biased and most precise estimates. In non-Gaussian scenarios, no single method provided the least biased and most precise estimates for both limits of a reference interval across all sample sizes, although the non-parametric approach performed the best for most scenarios. The hierarchy of the performances of the three methods was only impacted by sample size and skewness. Differences between reference interval estimates established by the three methods were inflated by variability. Whenever possible, laboratories should attempt to transform data to a Gaussian distribution and use the parametric approach to obtain optimal reference intervals. When this is not possible, laboratories should consider sample size and skewness as factors in their choice of reference interval estimation method. The consequences of false positives or false negatives may also serve as factors in this decision. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
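    Two of the three approaches compared above can be sketched in a few lines: the parametric 95% reference interval is mean ± 1.96 SD (valid only under a Gaussian assumption), while the non-parametric interval takes the 2.5th and 97.5th empirical percentiles. A minimal illustration on simulated Gaussian data; the percentile rank convention and the data are illustrative only, and the study's robust method is not shown:

```python
import random
import statistics

def parametric_ri(values, z=1.96):
    """Parametric 95% reference interval: mean +/- 1.96 SD.
    Only valid when the data are (or are transformed to be) Gaussian."""
    m = statistics.mean(values)
    s = statistics.stdev(values)
    return m - z * s, m + z * s

def nonparametric_ri(values):
    """Non-parametric 95% reference interval: the 2.5th and 97.5th
    empirical percentiles (one simple rank convention of several)."""
    xs = sorted(values)
    n = len(xs)
    return xs[round(0.025 * (n - 1))], xs[round(0.975 * (n - 1))]

# Simulated Gaussian analyte values (hypothetical units):
random.seed(1)
data = [random.gauss(5.0, 0.5) for _ in range(500)]
lower_p, upper_p = parametric_ri(data)
lower_n, upper_n = nonparametric_ri(data)
```

    On Gaussian data like this, the two intervals nearly coincide; on skewed data they diverge, which is why the study's choice of method matters.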

  7. From scenarios to domain models: processes and representations

    NASA Astrophysics Data System (ADS)

    Haddock, Gail; Harbison, Karan

    1994-03-01

    The domain specific software architectures (DSSA) community has defined a philosophy for the development of complex systems. This philosophy improves productivity and efficiency by increasing the user's role in the definition of requirements, increasing the systems engineer's role in the reuse of components, and decreasing the software engineer's role to the development of new components and component modifications only. The scenario-based engineering process (SEP), the first instantiation of the DSSA philosophy, has been adopted by the next generation controller project. It is also the chosen methodology of the trauma care information management system project, and the surrogate semi-autonomous vehicle project. SEP uses scenarios from the user to create domain models and define the system's requirements. Domain knowledge is obtained from a variety of sources including experts, documents, and videos. This knowledge is analyzed using three techniques: scenario analysis, task analysis, and object-oriented analysis. Scenario analysis results in formal representations of selected scenarios. Task analysis of the scenario representations results in descriptions of tasks necessary for object-oriented analysis and also subtasks necessary for functional system analysis. Object-oriented analysis of task descriptions produces domain models and system requirements. This paper examines the representations that support the DSSA philosophy, including reference requirements, reference architectures, and domain models. The processes used to create and use the representations are explained through use of the scenario-based engineering process. Selected examples are taken from the next generation controller project.

  8. Relevance of workplace social mixing during influenza pandemics: an experimental modelling study of workplace cultures.

    PubMed

    Timpka, T; Eriksson, H; Holm, E; Strömgren, M; Ekberg, J; Spreco, A; Dahlström, Ö

    2016-07-01

    Workplaces are one of the most important regular meeting places in society. The aim of this study was to use simulation experiments to examine the impact of different workplace cultures on influenza dissemination during pandemics. The impact is investigated by experiments with defined social-mixing patterns at workplaces using semi-virtual models based on authentic sociodemographic and geographical data from a North European community (population 136 000). A simulated pandemic outbreak was found to affect 33% of the total population in the community with the reference academic-creative workplace culture; virus transmission at the workplace accounted for 10.6% of the cases. A model with a prevailing industrial-administrative workplace culture generated 11% lower incidence than the reference model, while the model with a self-employed workplace culture (also corresponding to a hypothetical scenario with all workplaces closed) produced 20% fewer cases. The model representing an academic-creative workplace culture with restricted workplace interaction generated 12% lower cumulative incidence compared to the reference model. The results display important theoretical associations between workplace social-mixing cultures and community-level incidence rates during influenza pandemics. Social interaction patterns at workplaces should be taken into consideration when analysing virus transmission patterns during influenza pandemics.

  9. Modeling and control of a brushless DC axial flow ventricular assist device.

    PubMed

    Giridharan, Guruprasad A; Skliar, Mikhail; Olsen, Donald B; Pantalos, George M

    2002-01-01

    This article presents an integrated model of the human circulatory system that incorporates circulatory support by a brushless DC axial flow ventricular assist device (VAD), and a feedback VAD controller designed to maintain physiologically sufficient perfusion. The developed integrated model combines a network-type model of the circulatory system with a nonlinear dynamic model of the brushless DC pump. We show that maintaining a reference differential pressure between the left ventricle and aorta leads to adequate perfusion for different pathologic cases, ranging from a normal heart to left heart asystole, and for widely varying physical activity scenarios from rest to exercise.
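
The control objective described here, holding a reference pressure difference between the left ventricle and aorta, can be illustrated with a minimal proportional-integral loop. This is a sketch only: the gains, time step, units, and the sign convention of the speed command are hypothetical illustration values, not the authors' controller design.

```python
class DeltaPController:
    """Minimal PI sketch for a VAD: adjust the pump speed command so that the
    measured left-ventricle-to-aorta pressure difference tracks a reference.
    All gains and the time step are hypothetical illustration values."""

    def __init__(self, dp_ref_mmhg, kp=50.0, ki=5.0, dt=0.01):
        self.dp_ref = dp_ref_mmhg
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def step(self, dp_measured):
        err = self.dp_ref - dp_measured          # positive when dP is too low
        self.integral += err * self.dt
        return self.kp * err + self.ki * self.integral  # speed adjustment

ctrl = DeltaPController(dp_ref_mmhg=70.0)
cmd = ctrl.step(60.0)  # measured difference below the reference: speed up
```

In the paper the plant is the coupled circulation/pump model; here the controller is shown in isolation, which is why only the error-to-command mapping appears.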

  10. A General Model for Performance Evaluation in DS-CDMA Systems with Variable Spreading Factors

    NASA Astrophysics Data System (ADS)

    Chiaraluce, Franco; Gambi, Ennio; Righi, Giorgia

    This paper extends previous analytical approaches for the study of CDMA systems to the relevant case of multipath environments where users can operate at different bit rates. This scenario is of interest for the Wideband CDMA strategy employed in UMTS, and the model permits the performance comparison of classic and more innovative spreading signals. The method is based on the characteristic function approach, which allows the various kinds of interference to be modelled accurately. Some numerical examples are given with reference to the ITU-R M.1225 Recommendation, but the analysis could be extended to different channel descriptions.

  11. Smart-DS: Synthetic Models for Advanced, Realistic Testing: Distribution Systems and Scenarios

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnan, Venkat K; Palmintier, Bryan S; Hodge, Brian S

    The National Renewable Energy Laboratory (NREL) in collaboration with Massachusetts Institute of Technology (MIT), Universidad Pontificia Comillas (Comillas-IIT, Spain) and GE Grid Solutions, is working on an ARPA-E GRID DATA project, titled Smart-DS, to create: 1) High-quality, realistic, synthetic distribution network models, and 2) Advanced tools for automated scenario generation based on high-resolution weather data and generation growth projections. Through these advancements, the Smart-DS project is envisioned to accelerate the development, testing, and adoption of advanced algorithms, approaches, and technologies for sustainable and resilient electric power systems, especially in the realm of U.S. distribution systems. This talk will present the goals and overall approach of the Smart-DS project, including the process of creating the synthetic distribution datasets using the reference network model (RNM) and the comprehensive validation process to ensure network realism, feasibility, and applicability to advanced use cases. The talk will provide demonstrations of early versions of synthetic models, along with the lessons learnt from expert engagements to enhance future iterations. Finally, the scenario generation framework, its development plans, and coordination with GRID DATA repository teams to house these datasets for public access will also be discussed.

  12. Reference-dependent preferences for maternity wards: an exploration of two reference points.

    PubMed

    Neuman, Einat

    2014-01-01

    It is now well established that a person's valuation of the benefit from an outcome of a decision is determined by the intrinsic "consumption utility" of the outcome itself and also by the relation of the outcome to some reference point. The most notable expression of such reference-dependent preferences is loss aversion. What precisely this reference point is, however, is less clear. This paper claims, and provides empirical evidence, that more than one reference point exists. Using a discrete choice experiment in the Israeli public health-care sector, within a sample of 219 women who had given birth, it is shown that respondents refer to two reference points: (i) the constant scenario used in the experiment; and (ii) the actual state of the quantitative attributes of the service (number of beds in the room of hospitalization, and travel time from residence to hospital). In line with loss aversion theory, it is also shown that losses (vis-à-vis the constant scenario and vis-à-vis the actual state) accumulate and have reinforced effects, while gains do not.
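
The loss-aversion pattern the study reports, losses relative to a reference point weighing more than equal-sized gains, is conventionally captured by the prospect-theory value function. The sketch below uses the classic Tversky-Kahneman parameter estimates (alpha = 0.88, lambda = 2.25), which are not estimates from this paper, and the additive treatment of the two reference points is an assumption for illustration.

```python
def value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value of a gain/loss x relative to a reference point.
    lam > 1 makes losses loom larger than equal-sized gains (loss aversion)."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def two_reference_value(outcome, ref_constant, ref_actual):
    """Illustrative two-reference-point valuation: the outcome is judged as a
    gain/loss against both the experiment's constant scenario and the
    respondent's actual state, and the two effects are summed."""
    return value(outcome - ref_constant) + value(outcome - ref_actual)
```

With these parameters, a one-unit loss is valued at -2.25 while a one-unit gain is valued at +1.0, which is the reinforced-losses asymmetry the abstract describes.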

  13. Utility of additional tissue sections in dermatopathology: diagnostic, clinical and financial implications.

    PubMed

    Stuart, Lauren N; Rodriguez, Adrianna S; Gardner, Jerad M; Foster, Toby E; MacKelfresh, Jamie; Parker, Douglas C; Chen, Suephy C; Stoff, Benjamin K

    2014-02-01

    As histopathologic assessment is subject to sampling error, some institutions 'preorder' deeper sections on some or all cases (hereafter referred to as prospective deeper sections), while others order additional sections only when needed (hereafter referred to as retrospective deeper sections). We investigated how often additional sections changed a diagnosis and/or clinical management. Given the recent decrease in reimbursement for CPT-code 88305, we also considered the financial implications of ordering additional sections. Cases (n = 204) were assigned a preliminary diagnosis, based on review of the initial slide, and a final diagnosis, after reviewing additional sections. Cases with discordant diagnoses were assessed by two dermatologists, who indicated whether the change in diagnosis altered clinical management. Expenses were estimated for three scenarios: (a) no additional sections, (b) prospective deeper sections and (c) retrospective deeper sections. Diagnoses were modified in 9% of cases, which changed clinical management in 56% of these cases. Lesions obtained by punch-biopsy and inflammatory lesions were disproportionately overrepresented amongst cases with changed diagnoses (p < 0.001, p = 0.12, respectively). The cost of prospective deeper sections and retrospective deeper sections represented a 56% and 115% increase over base costs, respectively. Labor costs, particularly the cost of dermatopathologist evaluation, were the most significant cost-drivers. While additional sections improve diagnostic accuracy, they delay turnaround time and increase expenditures. In our practice, prospective deeper sections are cost effective; however, this may vary by institution. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  14. Prediction of Change in Prescription Ingredient Costs and Co-payment Rates under a Reference Pricing System in South Korea.

    PubMed

    Heo, Ji Haeng; Rascati, Karen L; Lee, Eui-Kyung

    2017-05-01

    The reference pricing system (RPS) establishes reference prices within interchangeable reference groupings. For drugs priced higher than the reference point, patients pay the difference between the reference price and the total price. To predict potential changes in prescription ingredient costs and co-payment rates after implementation of an RPS in South Korea. Korean National Health Insurance claims data were used as a baseline to develop possible RPS models. Five components of a potential RPS policy were varied: reference groupings, reference pricing methods, co-pay reduction programs, manufacturer price reductions, and increased drug substitutions. The potential changes in prescription ingredient costs and co-payment rates were predicted for the various scenarios. It was predicted that transferring the difference (total price minus reference price) from the insurer to patients would reduce ingredient costs from 1.4% to 22.8% for the third-party payer (government), but patient co-payment rates would increase from a baseline of 20.4% to 22.0% using chemical groupings and to 25.0% using therapeutic groupings. Savings rates in prescription ingredient costs (government and patient combined) were predicted to range from 1.6% to 13.7% depending on various scenarios. Although the co-payment rate would increase, a 15% price reduction by manufacturers coupled with a substitution rate of 30% would result in a decrease in the co-payment amount (change in absolute dollars vs. change in rates). Our models predicted that the implementation of RPS in South Korea would lead to savings in ingredient costs for the third-party payer and co-payments for patients under the potential scenarios examined. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
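
The pricing rule in the abstract, insurance covers the drug only up to the reference price and the patient pays the full excess above it, can be sketched in a few lines. The 30% coinsurance rate and the prices below are illustrative assumptions, not Korean NHI figures.

```python
def patient_copay(total_price, reference_price, coinsurance=0.30):
    """Patient out-of-pocket under a reference pricing system (illustrative
    rule): coinsurance on the covered portion (up to the reference price)
    plus the full excess of the drug's price over the reference price."""
    covered = min(total_price, reference_price)
    excess = max(total_price - reference_price, 0.0)
    return coinsurance * covered + excess

def insurer_cost(total_price, reference_price, coinsurance=0.30):
    """Insurer share: the non-coinsured part of the covered portion only."""
    return (1 - coinsurance) * min(total_price, reference_price)
```

Under this rule a drug priced above the reference point shifts the whole excess onto the patient, which is why the abstract predicts lower insurer costs but higher patient co-payment rates unless manufacturers cut prices or patients substitute cheaper drugs.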

  15. Derivation of the Cramér-Rao Bound in the GNSS-Reflectometry Context for Static, Ground-Based Receivers in Scenarios with Coherent Reflection

    PubMed Central

    Ribot, Miguel Angel; Botteron, Cyril; Farine, Pierre-André

    2016-01-01

    The use of the reflected Global Navigation Satellite Systems’ (GNSS) signals in Earth observation applications, referred to as GNSS reflectometry (GNSS-R), has already been studied for more than two decades. However, the estimation precision that can be achieved by GNSS-R sensors in some particular scenarios is still not fully understood. In an effort to partially fill this gap, in this paper, we compute the Cramér–Rao bound (CRB) for the specific case of static ground-based GNSS-R receivers and scenarios where the coherent component of the reflected signal is dominant. We compute the CRB for GNSS signals with different modulations: GPS L1 C/A and GPS L5 I/Q, which use binary phase-shift keying, and Galileo E1 B/C and E5, which use the binary offset carrier. The CRB for these signals is evaluated as a function of the receiver bandwidth and different scenario parameters, such as the height of the receiver or the properties of the reflection surface. The CRB computation presented considers observation times of up to several tens of seconds, in which the satellite elevation angle observed changes significantly. Finally, the results obtained show the theoretical benefit of using modern GNSS signals with GNSS-R techniques using long observation times, such as the interference pattern technique. PMID:27929388

  16. Derivation of the Cramér-Rao Bound in the GNSS-Reflectometry Context for Static, Ground-Based Receivers in Scenarios with Coherent Reflection.

    PubMed

    Ribot, Miguel Angel; Botteron, Cyril; Farine, Pierre-André

    2016-12-05

    The use of the reflected Global Navigation Satellite Systems' (GNSS) signals in Earth observation applications, referred to as GNSS reflectometry (GNSS-R), has already been studied for more than two decades. However, the estimation precision that can be achieved by GNSS-R sensors in some particular scenarios is still not fully understood. In an effort to partially fill this gap, in this paper, we compute the Cramér-Rao bound (CRB) for the specific case of static ground-based GNSS-R receivers and scenarios where the coherent component of the reflected signal is dominant. We compute the CRB for GNSS signals with different modulations: GPS L1 C/A and GPS L5 I/Q, which use binary phase-shift keying, and Galileo E1 B/C and E5, which use the binary offset carrier. The CRB for these signals is evaluated as a function of the receiver bandwidth and different scenario parameters, such as the height of the receiver or the properties of the reflection surface. The CRB computation presented considers observation times of up to several tens of seconds, in which the satellite elevation angle observed changes significantly. Finally, the results obtained show the theoretical benefit of using modern GNSS signals with GNSS-R techniques using long observation times, such as the interference pattern technique.
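
As a reminder of what the bound expresses (this is the textbook scalar form, not the paper's full GNSS-R derivation), the CRB for one real parameter of a known signal observed in white Gaussian noise is the noise variance divided by the energy of the signal's sensitivity to that parameter:

```python
def crb_scalar(sensitivity, noise_var):
    """Cramer-Rao bound for one real parameter theta of a deterministic signal
    s_n(theta) observed in white Gaussian noise of variance noise_var:

        var(theta_hat) >= noise_var / sum_n (d s_n / d theta)^2

    `sensitivity` holds the samples of the derivative d s_n / d theta."""
    return noise_var / sum(d * d for d in sensitivity)

# Example: estimating a constant level A from N samples (d s_n / d A = 1):
# the bound is noise_var / N, the familiar variance of the sample mean.
bound = crb_scalar([1.0] * 10, 2.0)
```

The bound shrinks as more informative samples accumulate, which is the mechanism behind the paper's observation that long observation times (tens of seconds, over which the elevation angle sweeps) improve the attainable precision.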

  17. Sensitivity and Specificity Estimation for the Clinical Diagnosis of Highly Pathogenic Avian Influenza in the Egyptian Participatory Disease Surveillance Program.

    PubMed

    Verdugo, C; El Masry, I; Makonnen, Y; Hannah, H; Unger, F; Soliman, M; Galal, S; Lubroth, J; Grace, D

    2016-12-01

    Many developing countries lack sufficient resources to conduct animal disease surveillance. In recent years, participatory epidemiology has been used to increase the coverage and decrease the costs of surveillance. However, few diagnostic performance assessments have been carried out on participatory methods. The objective of the present study was to estimate the diagnostic performance of practitioners working for the Community-Based Animal Health and Outreach (CAHO) program, which is a participatory disease surveillance system for the detection of highly pathogenic avian influenza outbreaks in Egypt. CAHO practitioners' diagnostic assessment of inspected birds was compared with real-time reverse-transcriptase polymerase chain reaction (RRT-PCR) test results at the household level. Diagnostic performance was estimated directly from two-by-two tables using RRT-PCR as a reference test in two different scenarios. In the first scenario, only results from chickens were considered. In the second scenario, results for all poultry species were analyzed. Poultry flocks in 916 households located in 717 villages were inspected by CAHO practitioners, who collected 3458 bird samples. In the first scenario, CAHO practitioners yielded sensitivity (Se) and specificity (Sp) estimates of 40% (95% confidence interval [CI]: 21%-59%) and 92% (95% CI: 91%-94%), respectively. In the second scenario, diagnostic performance estimates were Se = 47% (95% CI: 29%-65%) and Sp = 88% (95% CI: 86%-90%). A significant difference was observed only between Sp estimates (P < 0.01). Practitioners' diagnostics and RRT-PCR results were in very poor agreement, with kappa values of 0.16 and 0.14 for scenarios 1 and 2, respectively. However, the use of a broad case definition, the possible presence of immunity against the virus in replacement birds, and the low prevalence observed during the survey may have negatively affected the practitioners' performance.
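
The two-by-two comparison behind these estimates can be reproduced in a few lines. The counts below are hypothetical, chosen only so that the resulting sensitivity, specificity, and kappa fall near the reported ranges; they are not the study's data.

```python
# Hypothetical 2x2 table: rows = practitioner diagnosis, columns = RRT-PCR.
tp, fn = 8, 12    # PCR-positive households: detected / missed
fp, tn = 70, 826  # PCR-negative households: false alarms / correct negatives

se = tp / (tp + fn)  # sensitivity: detected fraction of true positives
sp = tn / (tn + fp)  # specificity: cleared fraction of true negatives

# Cohen's kappa: observed agreement corrected for chance agreement.
n = tp + fn + fp + tn
p_obs = (tp + tn) / n
p_exp = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
kappa = (p_obs - p_exp) / (1 - p_exp)
```

Note how kappa stays low here even though raw agreement exceeds 90%: with very few true positives, most of the agreement is expected by chance, which matches the "very poor agreement" conclusion despite the high specificity.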

  18. A Case-Based Scenario with Interdisciplinary Guided-Inquiry in Chemistry and Biology: Experiences of First Year Forensic Science Students

    ERIC Educational Resources Information Center

    Cresswell, Sarah L.; Loughlin, Wendy A.

    2017-01-01

    In this paper, insight into forensic science students' experiences of a case-based scenario with an interdisciplinary guided-inquiry experience in chemistry and biology is presented. Evaluation of student experiences and interest showed that the students were engaged with all aspects of the case-based scenario, including the curriculum theory…

  19. Overview of ICRP Committee 5: protection of the environment.

    PubMed

    Larsson, C-M

    2016-06-01

    Protection of the environment is integral to the system of radiological protection, as outlined in the 2007 Recommendations of the International Commission on Radiological Protection (ICRP, Publication 103). The Commission's activities in this area are mainly pursued by Committee 5 and its associated Task Groups. Publication 91 broadly outlines the approach to radiological protection of the environment, and its alignment with approaches to environmental protection from hazardous substances in general. Publications 108 and 114 provide the cornerstones of the environmental protection system and relevant databases. Publication 124 considers its application in planned, existing, and emergency exposure situations. The system centres on 12 Reference Animals and Plants (RAPs) with broad relevance for environmental protection based on their ubiquity and significance as well as other criteria, as described in Publication 108. The databases comprise general biology of the RAPs, transfer parameters, dose conversion coefficients, and effects data. Derived Consideration Reference Levels (DCRLs) were established for each RAP; a DCRL represents a band of dose rates that might result in some deleterious effects in individuals of that type of RAP. The newly established Task Group 99 will compile the RAP-specific reference information into monographs, with a view to updating information and improving the applicability of the system in different exposure situations. For certain scenarios, more precise and ecosystem-specific protection benchmarks may be justified, which would have to be informed by consideration of representative organisms (i.e. representative of a particular ecosystem and relevant to the specific scenario; Publication 124). Committee 5 will explore this further, making use of a limited number of case studies. © The International Society for Prosthetics and Orthotics.

  20. Using existing case-mix methods to fund trauma cases.

    PubMed

    Monakova, Julia; Blais, Irene; Botz, Charles; Chechulin, Yuriy; Picciano, Gino; Basinski, Antoni

    2010-01-01

    Policymakers frequently face the need to increase funding in isolated and frequently heterogeneous (clinically and in terms of resource consumption) patient subpopulations. This article presents a methodologic solution for testing the appropriateness of using existing grouping and weighting methodologies for funding subsets of patients in the scenario where a case-mix approach is preferable to a flat-rate based payment system. Using as an example the subpopulation of trauma cases of Ontario lead trauma hospitals, the statistical techniques of linear and nonlinear regression models, regression trees, and spline models were applied to examine the fit of the existing case-mix groups and reference weights for the trauma cases. The analyses demonstrated that for funding Ontario trauma cases, the existing case-mix systems can form the basis for rational and equitable hospital funding, decreasing the need to develop a different grouper for this subset of patients. This study confirmed that Injury Severity Score is a poor predictor of costs for trauma patients. Although our analysis used the Canadian case-mix classification system and cost weights, the demonstrated concept of using existing case-mix systems to develop funding rates for specific subsets of patient populations may be applicable internationally.

  1. Temperature-based modeling of reference evapotranspiration using several artificial intelligence models: application of different modeling scenarios

    NASA Astrophysics Data System (ADS)

    Sanikhani, Hadi; Kisi, Ozgur; Maroufpoor, Eisa; Yaseen, Zaher Mundher

    2018-02-01

    The establishment of an accurate computational model for predicting reference evapotranspiration (ET0) is highly essential for several agricultural and hydrological applications, especially for rural water resource systems, water use allocation, utilization and demand assessments, and the management of irrigation systems. In this research, six artificial intelligence (AI) models were investigated for modeling ET0 using a small set of climatic inputs generated from the minimum and maximum air temperatures and extraterrestrial radiation. The investigated models were multilayer perceptron (MLP), generalized regression neural networks (GRNN), radial basis neural networks (RBNN), integrated adaptive neuro-fuzzy inference systems with grid partitioning and subtractive clustering (ANFIS-GP and ANFIS-SC), and gene expression programming (GEP). The implemented monthly time scale data set was collected at the Antalya and Isparta stations, which are located in the Mediterranean Region of Turkey. The Hargreaves-Samani (HS) equation and its calibrated version (CHS) were used to perform a verification analysis of the established AI models. Validation accuracy was assessed using multiple quantitative metrics, including root mean squared error (RMSE), mean absolute error (MAE), correlation coefficient (R²), coefficient of residual mass (CRM), and Nash-Sutcliffe efficiency coefficient (NS). The results of the conducted models were highly practical and reliable for the investigated case studies. At the Antalya station, the performance of the GEP and GRNN models was better than that of the other investigated models, while the RBNN and ANFIS-SC models performed best at the Isparta station. Except for the MLP model, all the investigated models presented better accuracy than the HS and CHS empirical models when applied in a cross-station scenario, in which the ET0 of a station is predicted using the input data of a nearby station. The performance of the CHS model in modeling ET0 was better than that of the original HS in all cases.
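
The temperature-based benchmark used in the paper, the Hargreaves-Samani equation, and one of the reported skill scores can be sketched directly. Here Ra is extraterrestrial radiation already converted to its evaporation equivalent in mm/day; the input values in the test are illustrative, not station data from the study.

```python
import math

def et0_hargreaves_samani(tmin_c, tmax_c, ra_mm_per_day):
    """Hargreaves-Samani reference evapotranspiration (mm/day) from daily
    min/max air temperature (deg C) and extraterrestrial radiation expressed
    as an evaporation equivalent (mm/day)."""
    tmean = (tmin_c + tmax_c) / 2.0
    return 0.0023 * ra_mm_per_day * (tmean + 17.8) * math.sqrt(tmax_c - tmin_c)

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect match; 0 means the model is
    no better than predicting the mean of the observations."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot
```

Calibrating HS (the paper's CHS variant) amounts to refitting the 0.0023 coefficient (and sometimes the exponent) against local data, which is why CHS outperforms the original HS at both stations.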

  2. Creation and Initial Validation of the International Dysphagia Diet Standardisation Initiative Functional Diet Scale

    PubMed Central

    Steele, Catriona M.; Namasivayam-MacDonald, Ashwini M.; Guida, Brittany T.; Cichero, Julie A.; Duivestein, Janice; Hanson, Ben; Lam, Peter; Riquelme, Luis F.

    2018-01-01

    Objective To assess consensual validity, interrater reliability, and criterion validity of the International Dysphagia Diet Standardisation Initiative Functional Diet Scale, a new functional outcome scale intended to capture the severity of oropharyngeal dysphagia, as represented by the degree of diet texture restriction recommended for the patient. Design Participants assigned International Dysphagia Diet Standardisation Initiative Functional Diet Scale scores to 16 clinical cases. Consensual validity was measured against reference scores determined by an author reference panel. Interrater reliability was measured overall and across quartile subsets of the dataset. Criterion validity was evaluated versus Functional Oral Intake Scale (FOIS) scores assigned by survey respondents to the same case scenarios. Feedback was requested regarding ease and likelihood of use. Setting Web-based survey. Participants Respondents (N=170) from 29 countries. Interventions Not applicable. Main Outcome Measures Consensual validity (percent agreement and Kendall τ), criterion validity (Spearman rank correlation), and interrater reliability (Kendall concordance and intraclass coefficients). Results The International Dysphagia Diet Standardisation Initiative Functional Diet Scale showed strong consensual validity, criterion validity, and interrater reliability. Scenarios involving liquid-only diets, transition from nonoral feeding, or trial diet advances in therapy showed the poorest consensus, indicating a need for clear instructions on how to score these situations. The International Dysphagia Diet Standardisation Initiative Functional Diet Scale showed greater sensitivity than the FOIS to specific changes in diet. Most (>70%) respondents indicated enthusiasm for implementing the International Dysphagia Diet Standardisation Initiative Functional Diet Scale.
Conclusions This initial validation study suggests that the International Dysphagia Diet Standardisation Initiative Functional Diet Scale has strong consensual and criterion validity and can be used reliably by clinicians to capture diet texture restriction and progression in people with dysphagia. PMID:29428348

  3. Creation and Initial Validation of the International Dysphagia Diet Standardisation Initiative Functional Diet Scale.

    PubMed

    Steele, Catriona M; Namasivayam-MacDonald, Ashwini M; Guida, Brittany T; Cichero, Julie A; Duivestein, Janice; Hanson, Ben; Lam, Peter; Riquelme, Luis F

    2018-05-01

    To assess consensual validity, interrater reliability, and criterion validity of the International Dysphagia Diet Standardisation Initiative Functional Diet Scale, a new functional outcome scale intended to capture the severity of oropharyngeal dysphagia, as represented by the degree of diet texture restriction recommended for the patient. Participants assigned International Dysphagia Diet Standardisation Initiative Functional Diet Scale scores to 16 clinical cases. Consensual validity was measured against reference scores determined by an author reference panel. Interrater reliability was measured overall and across quartile subsets of the dataset. Criterion validity was evaluated versus Functional Oral Intake Scale (FOIS) scores assigned by survey respondents to the same case scenarios. Feedback was requested regarding ease and likelihood of use. Web-based survey. Respondents (N=170) from 29 countries. Not applicable. Consensual validity (percent agreement and Kendall τ), criterion validity (Spearman rank correlation), and interrater reliability (Kendall concordance and intraclass coefficients). The International Dysphagia Diet Standardisation Initiative Functional Diet Scale showed strong consensual validity, criterion validity, and interrater reliability. Scenarios involving liquid-only diets, transition from nonoral feeding, or trial diet advances in therapy showed the poorest consensus, indicating a need for clear instructions on how to score these situations. The International Dysphagia Diet Standardisation Initiative Functional Diet Scale showed greater sensitivity than the FOIS to specific changes in diet. Most (>70%) respondents indicated enthusiasm for implementing the International Dysphagia Diet Standardisation Initiative Functional Diet Scale. 
This initial validation study suggests that the International Dysphagia Diet Standardisation Initiative Functional Diet Scale has strong consensual and criterion validity and can be used reliably by clinicians to capture diet texture restriction and progression in people with dysphagia. Copyright © 2018 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  4. Verification of GCM-generated regional seasonal precipitation for current climate and of statistical downscaling estimates under changing climate conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Busuioc, A.; Storch, H. von; Schnur, R.

    Empirical downscaling procedures relate large-scale atmospheric features with local features such as station rainfall in order to facilitate local scenarios of climate change. The purpose of the present paper is twofold: first, a downscaling technique is used as a diagnostic tool to verify the performance of climate models on the regional scale; second, a technique is proposed for verifying the validity of empirical downscaling procedures in climate change applications. The case considered is regional seasonal precipitation in Romania. The downscaling model is a regression based on canonical correlation analysis between observed station precipitation and European-scale sea level pressure (SLP). The climate models considered here are the T21 and T42 versions of the Hamburg ECHAM3 atmospheric GCM run in time-slice mode. The climate change scenario refers to the expected time of doubled carbon dioxide concentrations around the year 2050. Generally, applications of statistical downscaling to climate change scenarios have been based on the assumption that the empirical link between the large-scale and regional parameters remains valid under a changed climate. In this study, a rationale is proposed for this assumption by showing the consistency of the 2×CO2 GCM scenarios in winter, derived directly from the gridpoint data, with the regional scenarios obtained through empirical downscaling. Since the skill of the GCMs in regional terms is already established, it is concluded that the downscaling technique is adequate for describing climatically changing regional and local conditions, at least for precipitation in Romania during winter.
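
The regression step of such an empirical downscaling can be illustrated in reduced form. The paper regresses station precipitation on canonical correlation patterns of the SLP field; the sketch below collapses that to a single large-scale SLP index as a stand-in, and the data in the test are synthetic.

```python
def fit_downscaling(slp_index, precip):
    """Ordinary least squares of station precipitation on one large-scale SLP
    index: a one-predictor stand-in for the paper's CCA-based regression.
    Returns (intercept, slope) of precip ~ intercept + slope * slp_index."""
    n = len(slp_index)
    mean_x = sum(slp_index) / n
    mean_y = sum(precip) / n
    s_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(slp_index, precip))
    s_xx = sum((x - mean_x) ** 2 for x in slp_index)
    slope = s_xy / s_xx
    return mean_y - slope * mean_x, slope
```

The climate-change application then rests on the assumption discussed in the abstract: that the fitted (intercept, slope) link, estimated from the current climate, still holds when the large-scale predictor shifts under 2×CO2 conditions.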

  5. Defining climate change scenario characteristics with a phase space of cumulative primary energy and carbon intensity

    NASA Astrophysics Data System (ADS)

    Ritchie, Justin; Dowlatabadi, Hadi

    2018-02-01

    Climate change modeling relies on projections of future greenhouse gas emissions and other phenomena leading to changes in planetary radiative forcing. Scenarios of socio-technical development consistent with end-of-century forcing levels are commonly produced by integrated assessment models. However, outlooks for forcing from fossil energy combustion can also be presented and defined in terms of two essential components: total energy use this century and the carbon intensity of that energy. This formulation allows a phase space diagram to succinctly describe a broad range of possible outcomes for carbon emissions from the future energy system. In the following paper, we demonstrate this phase space method with the Representative Concentration Pathways (RCPs) as used in the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report (AR5). The resulting RCP phase space is applied to map IPCC Working Group III (WGIII) reference case ‘no policy’ scenarios. Once these scenarios are described as coordinates in the phase space, data mining techniques can readily distill their core features. Accordingly, we conduct a k-means cluster analysis to distinguish the shared outlooks of these scenarios for oil, gas and coal resource use. As a whole, the AR5 database depicts a transition toward re-carbonization, where a world without climate policy inevitably leads to an energy supply with increasing carbon intensity. This orientation runs counter to the experienced ‘dynamics as usual’ of gradual decarbonization, suggesting climate change targets outlined in the Paris Accord are more readily achievable than projected to date.
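
The two phase-space coordinates combine into cumulative emissions by simple multiplication. The unit choices below (EJ of cumulative primary energy, kgCO2 per GJ of average carbon intensity, GtCO2 out) are one convenient convention for illustration, not necessarily the paper's.

```python
def cumulative_co2_gt(energy_ej, mean_intensity_kg_per_gj):
    """Cumulative fossil CO2 this century (GtCO2) from total primary energy
    use (EJ) and its average carbon intensity (kgCO2/GJ).
    Since 1 EJ = 1e9 GJ and 1 Gt = 1e12 kg, the net conversion is 1/1000."""
    return energy_ej * mean_intensity_kg_per_gj / 1000.0

# Scenarios sharing the same cumulative emissions lie on a hyperbola
# (an iso-emissions contour) in the energy/intensity phase space, which is
# what lets a single diagram summarize many scenario outcomes at once.
```

In this framing, the "re-carbonization" tendency of the no-policy scenarios shows up as cluster centers drifting toward higher mean intensity at a given cumulative energy.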

  6. Proposing Telecardiology Services on Cloud for Different Medical Institutions: A Model of Reference.

    PubMed

    de la Torre-Díez, Isabel; Garcia-Zapirain, Begoña; López-Coronado, Miguel; Rodrigues, Joel J P C

    2017-08-01

    For a cloud-based telecardiology solution to be established in any scenario, it is necessary to ensure optimum levels of security, as the patient's data will not be held in the same place from which access is gained. The main objective of this article is to present a secure, cloud-based solution for a telecardiology service in different scenarios: a hospital, a health center in a city, and a group of health centers in a rural area. iCanCloud software is used to simulate the scenarios. The first scenario is a city hospital with over 220,000 patients at its emergency services and ∼1 million outpatient consultations. The health center in a city serves ∼107,000 medical consultations and 16,700 pediatric consultations/year. In the last scenario, a group of health centers in a rural area serves an average of 437.08 consultations/month and around 15.6 a day. Each of the proposed solutions shares common features, including secure authentication through smart cards, the use of StorageGRID technology, and load balancers. For all cases, the cloud is private and the estimated price of the solution is around 450 €/month. Thanks to the research conducted in this work, it has been possible to provide an adapted solution in the form of a telecardiology service for a hospital, city health center, and rural health centers that offers security, privacy, and robustness, and is also optimal for a large number of cloud requests.

  7. A comparison between the example reference biosphere model ERB 2B and a process-based model: simulation of a natural release scenario.

    PubMed

    Almahayni, T

    2014-12-01

    The BIOMASS methodology was developed with the objective of constructing defensible assessment biospheres for assessing potential radiological impacts of radioactive waste repositories. To this end, a set of Example Reference Biospheres were developed to demonstrate the use of the methodology and to provide an international point of reference. In this paper, the performance of the Example Reference Biosphere model ERB 2B associated with the natural release scenario, discharge of contaminated groundwater to the surface environment, was evaluated by comparing its long-term projections of radionuclide dynamics and distribution in a soil-plant system to those of a process-based, transient advection-dispersion model (AD). The models were parametrised with data characteristic of a typical rainfed winter wheat crop grown on a sandy loam soil under temperate climate conditions. Three safety-relevant radionuclides, (99)Tc, (129)I and (237)Np with different degree of sorption were selected for the study. Although the models were driven by the same hydraulic (soil moisture content and water fluxes) and radiological (Kds) input data, their projections were remarkably different. On one hand, both models were able to capture short and long-term variation in activity concentration in the subsoil compartment. On the other hand, the Reference Biosphere model did not project any radionuclide accumulation in the topsoil and crop compartments. This behaviour would underestimate the radiological exposure under natural release scenarios. The results highlight the potential role deep roots play in soil-to-plant transfer under a natural release scenario where radionuclides are released into the subsoil. When considering the relative activity and root depth profiles within the soil column, much of the radioactivity was taken up into the crop from the subsoil compartment. 
Further improvements were suggested to address the limitations of the Reference Biosphere model presented in this paper. Copyright © 2014 Elsevier Ltd. All rights reserved.
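    The behaviour discussed in this abstract can be illustrated with a minimal two-box compartment model. This is a sketch under stated assumptions, not the ERB 2B model itself: radionuclide activity is released into a subsoil box, transfers are first-order, and the `k_up`, `k_leach` and `release_rate` values are invented. It shows why a compartment model with no upward (subsoil-to-topsoil) pathway projects zero topsoil accumulation under a natural release scenario.

```python
# Minimal two-box soil model (illustrative only, not the ERB 2B model).
# Activity is released into the subsoil; transfers are first-order.
# With no upward pathway (k_up = 0), topsoil activity stays zero.

def simulate(years, release_rate, k_up, k_leach, dt=0.1):
    """Euler integration of activity in topsoil and subsoil boxes."""
    topsoil, subsoil = 0.0, 0.0
    for _ in range(int(years / dt)):
        up = k_up * subsoil        # upward transfer (e.g. root uptake, bioturbation)
        leach = k_leach * subsoil  # loss to deeper layers
        topsoil += up * dt
        subsoil += (release_rate - up - leach) * dt
    return topsoil, subsoil

# No upward pathway: all activity stays below the topsoil.
top0, sub0 = simulate(years=100, release_rate=1.0, k_up=0.0, k_leach=0.01)
# A modest upward transfer lets the topsoil accumulate activity.
top1, sub1 = simulate(years=100, release_rate=1.0, k_up=0.005, k_leach=0.01)
print(top0, top1)
```

    The contrast between `top0` and `top1` mirrors the difference reported between the Reference Biosphere model and the process-based AD model.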

  8. ESTIMATION OF EXPOSURE DOSES FOR THE SAFE MANAGEMENT OF NORM WASTE DISPOSAL.

    PubMed

    Jeong, Jongtae; Ko, Nak Yul; Cho, Dong-Keun; Baik, Min Hoon; Yoon, Ki-Hoon

    2018-03-16

    Naturally occurring radioactive materials (NORM) wastes with different radiological characteristics are generated in several industries. The appropriate options for NORM waste management, including disposal options, should be discussed and established based on the relevant acts and regulatory guidelines. Several studies have calculated the exposure dose and the mass of NORM waste to be disposed of in landfill sites by considering the activity concentration level and exposure dose. In 2012, the Korean government promulgated an act on the safety control of NORM around living environments to protect human health and the environment. For the successful implementation of this act, we suggest a reference design for a landfill for the disposal of NORM waste. Based on this reference landfill, we estimate the maximum exposure doses and the relative contribution of each pathway to the exposure dose for three scenarios: a reference scenario, an ingestion pathway exclusion scenario, and a low leach rate scenario. We also estimate the quantity of NORM waste that can be disposed of in a landfill as a function of the activity concentration levels of the U series, the Th series, and 40K for two exposure dose levels, 1 and 0.3 mSv/y. The results of this study can be used to support the establishment of technical bases for a management strategy for the safe disposal of NORM waste.

  9. Atmospheric circulation and hydroclimate impacts of alternative warming scenarios for the Eocene

    NASA Astrophysics Data System (ADS)

    Carlson, Henrik; Caballero, Rodrigo

    2017-08-01

    Recent work in modelling the warm climates of the early Eocene shows that it is possible to obtain a reasonable global match between model surface temperature and proxy reconstructions, but only by using extremely high atmospheric CO2 concentrations or more modest CO2 levels complemented by a reduction in global cloud albedo. Understanding the mix of radiative forcing that gave rise to Eocene warmth has important implications for constraining Earth's climate sensitivity, but progress in this direction is hampered by the lack of direct proxy constraints on cloud properties. Here, we explore the potential for distinguishing among different radiative forcing scenarios via their impact on regional climate changes. We do this by comparing climate model simulations of two end-member scenarios: one in which the climate is warmed entirely by CO2 (which we refer to as the greenhouse gas (GHG) scenario) and another in which it is warmed entirely by reduced cloud albedo (which we refer to as the low CO2-thin clouds or LCTC scenario). The two simulations have an almost identical global-mean surface temperature and equator-to-pole temperature difference, but the LCTC scenario has ~11% greater global-mean precipitation than the GHG scenario. The LCTC scenario also has cooler midlatitude continents and warmer oceans than the GHG scenario and a tropical climate which is significantly more El Niño-like. Extremely high warm-season temperatures in the subtropics are mitigated in the LCTC scenario, while cool-season temperatures are lower at all latitudes. These changes appear large enough to motivate further, more detailed study using other climate models and a more realistic set of modelling assumptions.

  10. Agricultural Baseline (BL0) scenario of the 2016 Billion-Ton Report

    DOE Data Explorer

    Davis, Maggie R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)] (ORCID:0000000181319328); Hellwinkel, Chad [University of Tennessee, APAC] (ORCID:0000000173085058); Eaton, Laurence [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)] (ORCID:0000000312709626); Langholtz, Matthew H [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)] (ORCID:0000000281537154); Turhollow, Anthony [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)] (ORCID:0000000228159350); Brandt, Craig [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)] (ORCID:0000000214707379); Myers, Aaron [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)] (ORCID:0000000320373827)

    2016-07-13

    Scientific reason for data generation: to serve as the reference case for the BT16 volume 1 agricultural scenarios. The agricultural baseline runs from 2015 through 2040; a starting year of 2014 is used. Date the data set was last modified: 02/12/2016. How each parameter was produced (methods), format, and relationship to other data in the data set: the simulation was developed without offering a farmgate price to energy crops or residues (i.e., building on both the USDA 2015 baseline and the agricultural census data (USDA NASS 2014)). Data generated are .txt output files by year, simulation identifier, and county code (1-3109). Instruments used: POLYSYS (version POLYS2015_V10_alt_JAN22B), supplied by the University of Tennessee APAC. The quality assurance and quality control that have been applied: • Check for negative planted area, harvested area, production, yield and cost values. • Check if harvested area exceeds planted area for annuals. • Check FIPS codes.
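    The three QA/QC checks listed above amount to simple row-level validations. The sketch below expresses them in Python; the field names and the sample records are hypothetical, not the actual POLYSYS output schema.

```python
# Sketch of the listed QA/QC checks on hypothetical county-level records.
# Field names are illustrative, not the actual POLYSYS output schema.

def qc_issues(row):
    """Return a list of QA/QC problems found in one county-year record."""
    issues = []
    # 1. No negative planted area, harvested area, production, yield, or cost.
    for field in ("planted_area", "harvested_area", "production", "yield", "cost"):
        if row[field] < 0:
            issues.append(f"negative {field}")
    # 2. Harvested area must not exceed planted area for annual crops.
    if row["is_annual"] and row["harvested_area"] > row["planted_area"]:
        issues.append("harvested > planted")
    # 3. FIPS code must be a valid 5-digit county code.
    if not (row["fips"].isdigit() and len(row["fips"]) == 5):
        issues.append("bad FIPS code")
    return issues

good = {"planted_area": 100.0, "harvested_area": 95.0, "production": 400.0,
        "yield": 4.2, "cost": 120.0, "is_annual": True, "fips": "47001"}
bad = {"planted_area": 100.0, "harvested_area": 110.0, "production": -5.0,
       "yield": 4.2, "cost": 120.0, "is_annual": True, "fips": "47-01"}
print(qc_issues(good), qc_issues(bad))
```

    A clean record yields an empty issue list; the second record trips all three checks.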

  11. Eager feelings and vigilant reasons: Regulatory focus differences in judging moral wrongs

    PubMed Central

    Cornwell, James F. M.; Higgins, E. Tory

    2015-01-01

    For over a decade, moral psychologists have been actively researching the processes underlying moral judgments that are made intuitively without reference to an action’s concrete harms or injustice, such as the well-known case of non-procreative, consensual incest. We suggest that the reason some judge this scenario as wrong (using intuitive feelings) and others do not (using deliberative reasons) is due to an important motivational distinction. Consistent with this view, across seven studies, we demonstrate that negative judgments of such intuitive moral scenarios are more intense when processed in the promotion focus compared to the prevention focus, and that this is due to differences in whether eager (intuitive) versus vigilant (deliberative) means are employed in judging these moral wrongs. By examining various boundary conditions for this phenomenon and foundations for these judgments, we learn about the overall differences between promotion and prevention regarding how proscriptive judgments are processed, and begin to integrate these differences with existing theories in moral psychology. PMID:26726912

  12. Dedicated outreach service for hard to reach patients with tuberculosis in London: observational study and economic evaluation

    PubMed Central

    Jit, Mark; Stagg, Helen R; Aldridge, Robert W; White, Peter J

    2011-01-01

    Objective To assess the cost effectiveness of the Find and Treat service for diagnosing and managing hard to reach individuals with active tuberculosis. Design Economic evaluation using a discrete, multiple age cohort, compartmental model of treated and untreated cases of active tuberculosis. Setting London, United Kingdom. Population Hard to reach individuals with active pulmonary tuberculosis screened or managed by the Find and Treat service (48 mobile screening unit cases, 188 cases referred for case management support, and 180 cases referred for loss to follow-up), and 252 passively presenting controls from London’s enhanced tuberculosis surveillance system. Main outcome measures Incremental costs, quality adjusted life years (QALYs), and cost effectiveness ratios for the Find and Treat service. Results The model estimated that, on average, the Find and Treat service identifies 16 and manages 123 active cases of tuberculosis each year in hard to reach groups in London. The service has a net cost of £1.4 million/year and, under conservative assumptions, gains 220 QALYs. The incremental cost effectiveness ratio was £6400-£10 000/QALY gained (about €7300-€11 000 or $10 000-$16 000 in September 2011). The two Find and Treat components were also cost effective, even in unfavourable scenarios (mobile screening unit (for undiagnosed cases), £18 000-£26 000/QALY gained; case management support team, £4100-£6800/QALY gained). Conclusions Both the screening and case management components of the Find and Treat service are likely to be cost effective in London. The cost effectiveness of the mobile screening unit in particular could be even greater than estimated, in view of the secondary effects of infection transmission and development of antibiotic resistance. PMID:22067473

  13. Dedicated outreach service for hard to reach patients with tuberculosis in London: observational study and economic evaluation.

    PubMed

    Jit, Mark; Stagg, Helen R; Aldridge, Robert W; White, Peter J; Abubakar, Ibrahim

    2011-09-14

    To assess the cost effectiveness of the Find and Treat service for diagnosing and managing hard to reach individuals with active tuberculosis. Economic evaluation using a discrete, multiple age cohort, compartmental model of treated and untreated cases of active tuberculosis. London, United Kingdom. Hard to reach individuals with active pulmonary tuberculosis screened or managed by the Find and Treat service (48 mobile screening unit cases, 188 cases referred for case management support, and 180 cases referred for loss to follow-up), and 252 passively presenting controls from London's enhanced tuberculosis surveillance system. Incremental costs, quality adjusted life years (QALYs), and cost effectiveness ratios for the Find and Treat service. The model estimated that, on average, the Find and Treat service identifies 16 and manages 123 active cases of tuberculosis each year in hard to reach groups in London. The service has a net cost of £1.4 million/year and, under conservative assumptions, gains 220 QALYs. The incremental cost effectiveness ratio was £6400-£10,000/QALY gained (about €7300-€11,000 or $10,000-$16,000 in September 2011). The two Find and Treat components were also cost effective, even in unfavourable scenarios (mobile screening unit (for undiagnosed cases), £18,000-£26,000/QALY gained; case management support team, £4100-£6800/QALY gained). Both the screening and case management components of the Find and Treat service are likely to be cost effective in London. The cost effectiveness of the mobile screening unit in particular could be even greater than estimated, in view of the secondary effects of infection transmission and development of antibiotic resistance.
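    The headline cost-effectiveness figure can be checked with the arithmetic an incremental cost effectiveness ratio (ICER) implies: net cost divided by QALYs gained. This is just a check on the reported numbers, not a re-implementation of the study's compartmental model.

```python
# ICER from the reported figures: net service cost / QALYs gained.
net_cost_per_year = 1_400_000   # £1.4 million/year (reported)
qalys_gained = 220              # conservative estimate (reported)

icer = net_cost_per_year / qalys_gained
print(round(icer))  # ≈ £6,364/QALY, consistent with the reported £6400-£10,000 range
```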

  14. The Assignment of American Society of Anesthesiologists Physical Status Classification for Adult Polytrauma Patients: Results From a Survey and Future Considerations.

    PubMed

    Kuza, Catherine M; Hatzakis, George; Nahmias, Jeffry T

    2017-12-01

    The American Society of Anesthesiologists (ASA) physical status (PS) classification system assesses the preoperative health of patients. Previous studies demonstrated poor interrater reliability and variable ASA PS scores, especially in trauma scenarios. There are few studies that evaluated the assignment of ASA PS scores in trauma patients and no studies that evaluated ASA PS assignment in severely injured adult polytrauma patients. Our objective was to assess interrater reliability and identify sources of discrepancy among anesthesiologists and trauma surgeons in designating ASA PS scores to adult polytrauma patients. A link to an online survey containing questions assessing attitudes regarding ASA PS classification, demographic information, and 8 fictional trauma cases was e-mailed to anesthesiologists and trauma surgeons. The participants were asked to assign an ASA PS score to each scenario and explain their choice. Rater-versus-reference and interrater reliability, beyond that expected by chance, among respondents were analyzed using the Fleiss kappa analysis. A total of 349 participants completed the survey. All 8 cases had inconsistent ASA PS scores; several cases had scores ranging from I to VI and variable emergency (E) designations. Using weighted kappa (Kw) analysis for a subset of 201 respondents (101 trauma surgeons [S] and 100 anesthesiologists [A]), we found moderate (Kw = 0.63; SE = 0.024; 95% confidence interval, 0.594-0.666; P < .001) rater-versus-reference reliability. The interrater reliability was fair (Kw = 0.43; SE = 0.037; 95% confidence interval, 0.360-0.491; P < .001). This study demonstrates fair interrater reliability, beyond that expected by chance, of ASA PS scores among anesthesiologists and trauma surgeons when assessing adult polytrauma patients. Although the ASA PS is used in some trauma risk stratification models, discrepancies of ASA PS scores assigned to trauma cases exist. 
Future modifications of the ASA PS guidelines should aim to improve the interrater reliability of ASA PS scores in trauma patients. Further studies are warranted to determine the value of the ASA PS score as a trauma prognostic metric.
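    The weighted kappa statistic used in this study discounts chance agreement on an ordinal scale. A minimal two-rater, linearly weighted version is sketched below; the study itself used Fleiss kappa across many raters, and the ratings here are made up (ASA PS I-VI coded 1-6), so the value produced is purely illustrative.

```python
# Linear-weighted Cohen's kappa for two raters on an ordinal 1..k scale.
# Illustrative sketch only; the ratings below are invented.

def weighted_kappa(a, b, k=6):
    n = len(a)
    # Observed and expected disagreement with linear weights w = |i - j| / (k - 1).
    obs = sum(abs(x - y) for x, y in zip(a, b)) / (n * (k - 1))
    exp = sum(abs(x - y) for x in a for y in b) / (n * n * (k - 1))
    return 1 - obs / exp

rater_a = [3, 4, 4, 2, 5, 3, 4, 3]   # hypothetical ASA PS scores, rater A
rater_b = [3, 4, 3, 2, 5, 4, 4, 2]   # hypothetical ASA PS scores, rater B
print(round(weighted_kappa(rater_a, rater_b), 2))
```

    Values near 1 indicate agreement well beyond chance; values near 0 indicate chance-level agreement, which is the scale on which the reported Kw = 0.63 and 0.43 are interpreted.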

  15. Climate change adaptation accounting for huge uncertainties in future projections - the case of urban drainage

    NASA Astrophysics Data System (ADS)

    Willems, Patrick

    2015-04-01

    Hydrological design parameters currently used in the guidelines for the design of urban drainage systems (Willems et al., 2013) have been revised, taking the Flanders region of Belgium as a case study. The revision involved extrapolation of the design rainfall statistics, taking into account the current knowledge on future climate change trends till 2100. Uncertainties in these trend projections have been assessed by statistically analysing and downscaling a broad ensemble set of climate model simulation results (44 regional + 69 global control-scenario climate model run combinations for different greenhouse gas scenarios) with a quantile perturbation tool. The impact results of the climate scenarios were investigated as changes to rainfall intensity-duration-frequency (IDF) curves. Thereafter, the climate scenarios and related changes in rainfall statistics were transferred to changes in flood frequencies of sewer systems and overflow frequencies of storage facilities. This has been done based on conceptual urban drainage models. The change in storage capacity required to exceed a given overflow return period has also been calculated for a range of return periods and infiltration or throughflow rates. These results were used as the basis for the revision of the hydraulic design rules of urban drainage systems. One of the major challenges while formulating these policy guidelines was the consideration of the huge uncertainties in the future climate change projections and impact assessments; see also the difficulties and pitfalls reported by the IWA/IAHR Joint Committee on Urban Drainage - Working group on urban rainfall (Willems et al., 2012). We made use of the risk concept, and found it a very useful approach to deal with the high uncertainties. 
    It involves an impact study of the different climate projections, or - for practical reasons - a reduced set of climate scenarios tailored for the specific type of impact considered (urban floods in our case study), following the approach proposed by Ntegeka et al. (2014). When the consequences of given scenarios are high, they should be taken into account in the decision making process. For the Flanders guidelines, it was agreed among the members of the regional Coordination Commission Integrated Water Management to consider (in addition to the traditional range of return periods up to 5 years) a 20-year design storm for scenario investigation. This was motivated by the outcome of this study that, under the high climate scenario, a 20-year storm would become - in order of magnitude - a 5-year storm. If, after a design for a 5-year storm, the 20-year scenario investigation concludes that specific zones along the sewer system would suffer severe additional impacts, it is recommended to apply changes to the system or to design flexible adaptation measures for the future (depending on which of the options would be most cost-efficient). Another adaptation action agreed was the installation of storm water infiltration devices at private houses, making these mandatory for new and renovated houses. Such installation was found to be cost-effective in any of the climate scenarios. This is one way of dealing with climate uncertainties, but lessons learned from other cases/applications are highly welcome. References Ntegeka, V., Baguis, P., Roulin, E., Willems, P. (2014), 'Developing tailored climate change scenarios for hydrological impact assessments', Journal of Hydrology, 508C, 307-321 Willems, P. (2013), 'Revision of urban drainage design rules after assessment of climate change impacts on precipitation extremes at Uccle, Belgium', Journal of Hydrology, 496, 166-177 Willems, P., Arnbjerg-Nielsen, K., Olsson, J., Nguyen, V.T.V. 
(2012), 'Climate change impact assessment on urban rainfall extremes and urban drainage: methods and shortcomings', Atmospheric Research, 103, 106-118
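    The quantile perturbation idea used in this study can be sketched as scaling observed rainfall extremes by rank-dependent change factors: rarer events are perturbed more strongly than common ones. The observed intensities and the perturbation factors below are invented; in the study the factors come from the climate model ensemble.

```python
# Sketch of a quantile perturbation of observed rainfall extremes.
# Perturbation factors per exceedance-probability class are invented here;
# in the study they are derived from an ensemble of climate model runs.

def perturb(observed, factors):
    """Scale sorted observed intensities by rank-dependent change factors."""
    ranked = sorted(observed, reverse=True)   # largest = rarest event first
    n = len(ranked)
    out = []
    for i, x in enumerate(ranked):
        # Map rank to one of the factor classes (most extreme -> first class).
        cls = min(i * len(factors) // n, len(factors) - 1)
        out.append(x * factors[cls])
    return out

obs = [42.0, 35.5, 30.1, 28.0, 25.2, 22.9, 21.0, 19.5]  # mm/h, hypothetical
high_scenario = [1.30, 1.15, 1.05, 1.00]  # stronger increase for rarer events
perturbed = perturb(obs, high_scenario)
print(perturbed[0], perturbed[-1])  # rarest event scaled most, commonest unchanged
```

    The perturbed series can then be refitted to give climate-adjusted IDF statistics.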

  16. The effects of prior workplace behavior on subsequent sexual harassment judgments.

    PubMed

    Wiener, Richard L; Winter, Ryan; Rogers, Melanie; Arnot, Lucy

    2004-02-01

    A dual processing model of sexual harassment judgments predicted that the behavior of a complainant in a prior case would influence evaluations in an unrelated subsequent case. In the first of two experimental scenarios depicting social-sexual conduct at work, the female complainant's conduct was manipulated to be aggressive, submissive, ambiguous, or neutral. Half of the participants were asked to reflect upon the first scenario after reading it and before answering responsibility questions. The other half simply reviewed the scenario and answered the questions. When the complainant acted aggressively, her behavior in the first scenario caused men who reflected on the fact pattern to find less evidence of harassment. Most interestingly, an aggressive complainant observed in the first scenario caused participants (especially women) to rate lower the likelihood that a neutral complainant in a second independent case was the victim of gender discrimination. Across cases, men found less evidence of harassment than did women.

  17. Comparing long-term projections of the space debris environment to real world data - Looking back to 1990

    NASA Astrophysics Data System (ADS)

    Radtke, Jonas; Stoll, Enrico

    2016-10-01

    Long-term projections of the space debris environment are commonly used to assess the trends within different scenarios for the assumed future development of spacefaring. General scenarios investigated include business-as-usual cases, in which spaceflight is performed as it is today, and mitigation scenarios, which assume the implementation of the Space Debris Mitigation Guidelines to different degrees or the effectiveness of more drastic measures, such as active debris removal. One problem that always accompanies the projection of a system's future behaviour is that the driving parameters, such as the launch rate, are unpredictable. In other fields of research it is common to look backwards and re-model the past. This is a rather difficult task for spaceflight, as it is still quite young and, furthermore, strongly influenced by drastic political changes, such as the break-up of the Soviet Union at the end of the 1980s. Furthermore, one major driver of the evolution of the number of on-orbit objects turns out to be collisions between objects. As of today, these collisions are, fortunately, very rare, and therefore a real-world-data modelling approach is difficult. Nevertheless, since the end of the Cold War more than 20 years of a comparably stable evolution of spaceflight activities have passed. For this study, this period is used in a comparison between the real evolution of the space debris environment and the one projected using the Institute of Space Systems' in-house tool for long-term assessment LUCA (Long-Term Utility for Collision Analysis). Four different scenarios are investigated in this comparison; all of them have the common starting point of using an initial population for 1st May 1989. The first scenario, which serves as the reference, is simply taken from MASTER-2009. All launch and mission related objects from the Two Line Elements (TLE) catalogue and other available sources are included. 
All events, such as explosions and collisions, have been re-modelled as closely to reality as possible and included in the corresponding population. They have furthermore been correlated with TLE catalogue objects. As the latest available validated population snapshot for MASTER is May 2009, this epoch is chosen as the endpoint for the simulations. The second scenario uses the knowledge of the past 25 years to perform a Monte-Carlo simulation of the evolution of the space debris environment. Necessary input parameters, such as explosions per year, launch rates, and the evolution of the solar cycle, are taken from their real evolutions. The third scenario goes a step further by only extracting mean numbers and trends from inputs such as launch and explosion rates and applying them. The fourth and final scenario disregards all knowledge of the time frame under investigation; inputs are determined based on data available in 1989 only. Results are compared to the reference scenario of the space debris environment.

  18. EPA QUICK REFERENCE GUIDES

    EPA Science Inventory

    EPA Quick Reference Guides are compilations of information on chemical and biological terrorist agents. The information is presented in consistent format and includes agent characteristics, release scenarios, health and safety data, real-time field detection, effect levels, samp...

  19. Participatory Approach to Long-Term Socio-Economic Scenarios as Building Block of a Local Vulnerability and Risk Assessment Tool - The Case Study Lienz (East-Tyrol)

    NASA Astrophysics Data System (ADS)

    Meyer, Ina; Eder, Brigitte; Hama, Michiko; Leitner, Markus

    2016-04-01

    Risks associated with climate change are mostly still understood and analyzed in a sector- or hazard-specific manner, and rarely in a systemic, dynamic and scenario-based one. In addition, socio-economic trends are often neglected in local vulnerability and risk assessments although they represent potential key determinants of risk and vulnerability. The project ARISE (Adaptation and Decision Support via Risk Management Through Local Burning Embers) aims at filling this gap by applying a participatory approach to socio-economic scenario building as a building block of a local vulnerability assessment and risk management tool. Overall, ARISE aims at developing a decision support system for climate-sensitive iterative risk management as a key adaptation tool for the local level, using Lienz in East Tyrol as a test-site city. One central building block is participatory socio-economic scenario building, which - together with regionalized climate change scenarios - forms a centrepiece in the process-oriented assessment of climate change risks and vulnerability. Major vulnerabilities and risks may stem from the economic performance, the socio-economic or socio-demographic developments or changes in asset exposition, and not from climate change impacts themselves. The IPCC 5th assessment report underlines this and states that for most economic sectors, the impact of climate change may be small relative to the impacts of other driving forces such as changes in population growth, age, income, technology, relative prices, lifestyle, regulation, governance and many other factors in the socio-economy (Arent et al., 2014). The paper presents the methodology, process and results with respect to the building of long-term local socio-economic scenarios for the City of Lienz and the surrounding countryside. Scenarios were developed in a participatory approach using a scenario workshop that involved major stakeholders from the region. 
Participatory approaches are increasingly recognized as an important element in management and decision-making, as problems in today's world are complex and require knowledge from many different domains and disciplines. Participation is also said to be a process of collective learning that changes the way people think and act, which is a relevant point in forming appropriate region-specific climate adaptation strategies. The scenarios are based on an analysis of data on recent states and trends in major local sector developments concerning absolute and relative employment and value creation, as well as on distinct socio-demographic developments in the region. Categories discussed in the scenario workshop cover inter alia institutions and governance, demographics, production and demand, markets, value-chains and trade, scientific and technological innovations, education and health. The derived stakeholder-based socio-economic scenarios were, in a second step, matched with the Shared Socio-economic Pathways (SSPs) in order to frame the locally produced scenarios with global narratives. Both strands were, in a third step, combined and backed up by scientific literature in order to build the local socio-economic scenarios that served as background information in the analysis of risks, vulnerability and appropriate adaptation measures in the case-study region.

  20. Employment references: defamation law in the clinical laboratory.

    PubMed

    Parks, D G

    1993-01-01

    The law of defamation and the risks involved in issuing employment references are discussed. A hypothetical scenario is used to illustrate the legal standards governing the tort of defamation and to apply those standards to employment references. Practical suggestions for a "controlled reference" policy are provided, with the objective of allowing for responsible exchange of employment information and avoiding a defamation lawsuit.

  1. Urban expansion simulation and the spatio-temporal changes of ecosystem services, a case study in Atlanta Metropolitan area, USA.

    PubMed

    Sun, Xiao; Crittenden, John C; Li, Feng; Lu, Zhongming; Dou, Xiaolin

    2018-05-01

    Urban expansion can lead to land use changes and, hence, threaten ecosystems. Understanding the effects of urbanization on ecosystem services (ESs) can provide scientific guidance for land use planning and the protection of ESs. We established a framework to assess the spatial distributions of ESs based on land use changes in the Atlanta Metropolitan area (AMA) from 1985 to 2012. A new comprehensive ecosystem service (CES) index was developed to reflect the comprehensive level of ESs. Associated with the influential factors, we simulated the business as usual scenario in 2030. Four alternative scenarios, including more compact growth (MCG), riparian vegetation buffer (RVB), soil conservation (SC), and combined development (CD) scenarios, were developed to explore the optimal land use strategies which can enhance the ESs. The results showed that forest and wetland had the greatest decreases, while low and high intensity built-up lands had the greatest increases. The values of the CES and of most ESs decreased significantly due to the sprawling expansion of built-up land. The scenario analysis revealed that the CD scenario performs best in CES value, while it performs the worst in food supply. Compared with the RVB and SC scenarios, the MCG scenario is a better land use strategy to enhance the ESs without coming at the expense of food supply. To integrate multiple ESs into land use planning and decision making, corresponding land management policies and ecological engineering measures should be implemented to enhance: (1) the water yield and water purification in urban core counties, (2) the carbon storage, habitat quality, and recreational opportunity in counties around the core area, and (3) the soil conservation and food supply in surrounding suburban counties. The land use strategies and ecological engineering measures in this study can provide a reference for enhancing the ESs in the AMA and other metropolitan areas. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. The economics of transboundary air pollution in Europe.

    PubMed

    Van Ierland, E C

    1991-01-01

    Acid rain is causing substantial damage in all Eastern and Western European countries. This article presents a stepwise linear optimisation model that places transboundary air pollution by SO2 and NOx in a game theoretical framework. The national authorities of 28 countries are perceived as players in a game in which they can choose optimal strategies. It is illustrated that optimal national abatement programmes may be far from optimal if considered from an international point of view. Several scenarios are discussed, including a reference case, full cooperation, Pareto optimality and a critical loads approach. The need for international cooperation and regional differentiation of abatement programmes is emphasised.
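    The central point of the abstract, that nationally optimal abatement can be far from the international optimum when pollution crosses borders, can be illustrated with a toy two-country game. All numbers (emissions, cost and damage coefficients, transport matrix) are invented, and the abstract's 28-country linear optimisation model is replaced here by a simple grid search.

```python
# Toy two-country transboundary pollution game (all numbers invented).
# Each country chooses an abatement fraction a in [0, 1].
# Cost: quadratic abatement cost + damage from deposition at home.
# T[i][j]: share of country i's emissions deposited in country j.

E = [100.0, 100.0]                    # unabated emissions
T = [[0.4, 0.6], [0.6, 0.4]]          # most pollution is exported
COST, DAMAGE = [50.0, 50.0], [1.0, 1.0]

def deposition(a, j):
    return sum(T[i][j] * (1 - a[i]) * E[i] for i in range(2))

def national_cost(a, i):
    return COST[i] * a[i] ** 2 + DAMAGE[i] * deposition(a, i)

grid = [x / 100 for x in range(101)]

# Nash-like outcome: iterate best responses to the other country's choice.
nash = [0.0, 0.0]
for _ in range(20):
    for i in range(2):
        nash[i] = min(grid, key=lambda x: national_cost(
            [x if k == i else nash[k] for k in range(2)], i))

# Cooperative outcome: minimise the sum of both countries' costs.
coop = min(((x, y) for x in grid for y in grid),
           key=lambda a: sum(national_cost(a, i) for i in range(2)))

print(nash, list(coop))  # cooperation supports much more abatement
```

    Because each country's own abatement mostly benefits its neighbour, the self-interested outcome under-abates relative to the cooperative one, which is the gap the article's game-theoretic framework formalises.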

  3. Cost-savings for biosimilars in the United States: a theoretical framework and budget impact case study application using filgrastim.

    PubMed

    Grewal, Simrun; Ramsey, Scott; Balu, Sanjeev; Carlson, Josh J

    2018-05-18

    Biosimilars can directly reduce the cost of treating patients for whom a reference biologic is indicated by offering a highly similar, lower priced alternative. We examine factors related to biosimilar regulatory approval, uptake, pricing, and financing, and the potential impact on drug expenditures in the U.S. We developed a framework to illustrate how key factors including regulatory policies, provider and patient perception, pricing, and payer policies impact biosimilar cost-savings. Further, we developed a budget impact cost model to estimate savings from filgrastim biosimilars under various scenarios. The model uses publicly available data on disease incidence, treatment patterns, market share, and drug prices to estimate the cost-savings over a 5-year time horizon. We estimate five-year cost savings of $256 million, of which 18% ($47 million) are from reduced patient out-of-pocket costs, 34% ($86 million) are savings to commercial payers, and 48% ($123 million) are savings for Medicare. Additional scenarios demonstrate the impact of uncertain factors, including price, uptake, and financing policies. A variety of interrelated factors influence the development, uptake, and cost-savings for biosimilar use in the U.S. The filgrastim case is a useful example that illustrates these factors and the potential magnitude of cost savings.
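    The reported split of the five-year savings is easy to verify: the three payer components should sum to the $256 million total and reproduce the stated percentage shares. This sketch only checks the abstract's arithmetic; it is not the budget impact model itself.

```python
# Reported 5-year filgrastim biosimilar savings by payer ($ millions).
savings = {"patient out-of-pocket": 47, "commercial payers": 86, "Medicare": 123}

total = sum(savings.values())
print(total)  # 256, matching the reported $256 million
for payer, s in savings.items():
    print(payer, f"{100 * s / total:.0f}%")  # 18%, 34%, 48% as reported
```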

  4. Applications of surface metrology in firearm identification

    NASA Astrophysics Data System (ADS)

    Zheng, X.; Soons, J.; Vorburger, T. V.; Song, J.; Renegar, T.; Thompson, R.

    2014-01-01

    Surface metrology is commonly used to characterize functional engineering surfaces. The technologies developed offer opportunities to improve forensic toolmark identification. Toolmarks are created when a hard surface, the tool, comes into contact with a softer surface and causes plastic deformation. Toolmarks are commonly found on fired bullets and cartridge cases. Trained firearms examiners use these toolmarks to link an evidence bullet or cartridge case to a specific firearm, which can lead to a criminal conviction. Currently, identification is typically based on qualitative visual comparison by a trained examiner using a comparison microscope. In 2009, a report by the National Academies called this method into question. Amongst other issues, they questioned the objectivity of visual toolmark identification by firearms examiners. The National Academies recommended the development of objective toolmark identification criteria and confidence limits. The National Institute of Standards and Technology (NIST) has applied its experience in surface metrology to develop objective identification criteria, measurement methods, and reference artefacts for toolmark identification. NIST developed the Standard Reference Material SRM 2460 standard bullet and SRM 2461 standard cartridge case to facilitate quality control and traceability of identifications performed in crime laboratories. Objectivity is improved through measurement of surface topography and application of unambiguous surface similarity metrics, such as the maximum value (ACCFMAX) of the areal cross correlation function. Case studies were performed on consecutively manufactured tools, such as gun barrels and breech faces, to demonstrate that, even in this worst-case scenario, all the tested tools imparted unique surface topographies that were identifiable. These studies provide scientific support for toolmark evidence admissibility in criminal court cases.
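    The similarity metric named in the abstract, the maximum of a cross correlation function, can be sketched in one dimension: normalize two mean-centred profiles and take the best correlation over a range of lags. The real ACCFMAX metric is areal (computed on 2-D topographies), so the 1-D profiles below are toy data, not NIST's actual procedure.

```python
# 1-D sketch of a max-cross-correlation similarity metric, in the spirit
# of ACCFMAX (the real metric is areal; these profiles are toy data).

def ncc(a, b):
    """Normalized cross-correlation of two equal-length signals."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    a = [x - ma for x in a]
    b = [x - mb for x in b]
    num = sum(x * y for x, y in zip(a, b))
    den = (sum(x * x for x in a) * sum(y * y for y in b)) ** 0.5
    return num / den

def ccf_max(a, b, max_lag=3):
    """Maximum correlation over integer lags of b relative to a."""
    n = len(a)
    best = -1.0
    for lag in range(-max_lag, max_lag + 1):
        xs = [a[i] for i in range(n) if 0 <= i + lag < n]
        ys = [b[i + lag] for i in range(n) if 0 <= i + lag < n]
        best = max(best, ncc(xs, ys))
    return best

profile = [0.1, 0.5, 0.9, 0.4, -0.2, -0.6, -0.1, 0.3]
shifted = profile[1:] + [0.0]          # same toolmark, shifted by one sample
print(round(ccf_max(profile, shifted), 3))  # near 1.0 at the matching lag
```

    Scores near 1 indicate the same toolmark despite misalignment, which is why a lag search precedes the correlation in practice.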

  5. Use of HRP-2-based rapid diagnostic test for Plasmodium falciparum malaria: assessing accuracy and cost-effectiveness in the villages of Dielmo and Ndiop, Senegal

    PubMed Central

    2010-01-01

Background In 2006, the Senegalese National Malaria Control Programme (NMCP) recommended artemisinin-based combination therapy (ACT) as the first-line treatment for uncomplicated malaria and, in 2007, mandated testing of all suspected cases of malaria with a Plasmodium falciparum HRP-2-based rapid diagnostic test (RDT, Paracheck®). Given the higher cost of ACT compared to earlier anti-malarials, the objectives of the present study were i) to study the accuracy of Paracheck® compared to the thick blood smear (TBS) in two areas with different levels of malaria endemicity and ii) to analyse the cost-effectiveness of the NMCP-recommended strategy of parasitological confirmation of clinically suspected malaria cases. Methods A cross-sectional study was undertaken in the villages of Dielmo and Ndiop (Senegal) nested in a cohort study of about 800 inhabitants. For all the individuals consulting between October 2008 and January 2009 with a clinical diagnosis of malaria, a questionnaire was completed and finger-prick blood samples were taken both for microscopic examination and RDT. The estimated costs and cost-effectiveness analysis were made considering five scenarios, the recommendations of the NMCP being the reference scenario. In addition, a sensitivity analysis was performed assuming that all the RDT-positive patients and 50% of RDT-negative patients were treated with ACT. Results A total of 189 consultations for clinically suspected malaria occurred during the study period. The sensitivity, specificity, and positive and negative predictive values were 100%, 98.3%, 80.0% and 100%, respectively. The estimated cost of the reference scenario was close to 700€ per 1000 episodes of illness, approximately twice as expensive as most of the other scenarios. Nevertheless, it appeared cost-effective while ensuring the diagnosis and treatment of 100% of malaria attacks and adequate management of 98.4% of episodes of illness. 
The present study also demonstrated that full compliance of health care providers with RDT results was required in order to avoid severe incremental costs. Conclusions A rational use of ACT requires laboratory testing of all patients presenting with presumed malaria. Use of RDTs inevitably has incremental costs, but the strategy associating RDT use for all clinically suspected malaria and prescribing ACT only to patients tested positive is cost-effective in areas where microscopy is unavailable. PMID:20525322
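The accuracy figures reported above follow directly from the standard 2×2 contingency-table definitions. A minimal sketch; the cell counts below are hypothetical, chosen only to be consistent with the 189 consultations and the reported percentages, and are not taken from the paper:

```python
def diagnostic_accuracy(tp, fp, tn, fn):
    """Standard 2x2 diagnostic accuracy metrics for a test
    evaluated against a gold standard (here, RDT vs. thick blood smear)."""
    return {
        "sensitivity": tp / (tp + fn),  # true positives detected
        "specificity": tn / (tn + fp),  # true negatives correctly ruled out
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Hypothetical counts summing to 189 consultations that roughly
# reproduce the reported 100% / 98.3% / 80.0% / 100% figures.
m = diagnostic_accuracy(tp=12, fp=3, tn=171, fn=0)
print({k: round(v, 3) for k, v in m.items()})
```

Note that with a sensitivity of 100% (no false negatives), the NPV is necessarily 100%, which is what makes a negative RDT safe grounds for withholding ACT.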

  6. Probabilistic-numerical assessment of pyroclastic current hazard at Campi Flegrei and Naples city: Multi-VEI scenarios as a tool for "full-scale" risk management.

    PubMed

    Mastrolorenzo, Giuseppe; Palladino, Danilo M; Pappalardo, Lucia; Rossano, Sergio

    2017-01-01

The Campi Flegrei volcanic field (Italy) poses very high risk to the highly urbanized Neapolitan area. Its eruptive history was dominated by explosive activity producing pyroclastic currents (hereafter PCs) ranging in scale from localized base surges to regional flows. Here we apply probabilistic numerical simulation approaches to produce PC hazard maps, based on a comprehensive spectrum of flow properties and vent locations. These maps are incorporated in a Geographic Information System (GIS) and provide all probable Volcanic Explosivity Index (VEI) scenarios from different source vents in the caldera, relevant for risk management planning. For each VEI scenario, we report the conditional probability for PCs (i.e., the probability for a given area to be affected by the passage of PCs in case of a PC-forming explosive event) and related dynamic pressure. Model results indicate that PCs from VEI<4 events would be confined within the Campi Flegrei caldera, PC propagation being impeded by the northern and eastern caldera walls. Conversely, PCs from VEI 4-5 events could invade a wide area beyond the northern caldera rim, as well as part of the Naples metropolitan area to the east. A major controlling factor of PC dispersal is the location of the vent area. PCs from the potentially largest eruption scenarios (analogous to the ~15 ka, VEI 6 Neapolitan Yellow Tuff or even the ~39 ka, VEI 7 Campanian Ignimbrite extreme event) would affect a large part of the Campanian Plain to the north and the city of Naples to the east. Thus, in case of renewal of eruptive activity at Campi Flegrei, up to 3 million people would potentially be exposed to volcanic hazard, pointing out the urgency of an emergency plan. 
Considering the present level of uncertainty in forecasting the future eruption type, size and location (essentially based on statistical analysis of previous activity), we suggest that appropriate planning measures should address at least the VEI 5 reference scenario (at least 2 occurrences documented in the last 10 ka).

  7. Probabilistic-numerical assessment of pyroclastic current hazard at Campi Flegrei and Naples city: Multi-VEI scenarios as a tool for “full-scale” risk management

    PubMed Central

    Mastrolorenzo, Giuseppe; Palladino, Danilo M.; Pappalardo, Lucia; Rossano, Sergio

    2017-01-01

The Campi Flegrei volcanic field (Italy) poses very high risk to the highly urbanized Neapolitan area. Its eruptive history was dominated by explosive activity producing pyroclastic currents (hereafter PCs) ranging in scale from localized base surges to regional flows. Here we apply probabilistic numerical simulation approaches to produce PC hazard maps, based on a comprehensive spectrum of flow properties and vent locations. These maps are incorporated in a Geographic Information System (GIS) and provide all probable Volcanic Explosivity Index (VEI) scenarios from different source vents in the caldera, relevant for risk management planning. For each VEI scenario, we report the conditional probability for PCs (i.e., the probability for a given area to be affected by the passage of PCs in case of a PC-forming explosive event) and related dynamic pressure. Model results indicate that PCs from VEI<4 events would be confined within the Campi Flegrei caldera, PC propagation being impeded by the northern and eastern caldera walls. Conversely, PCs from VEI 4–5 events could invade a wide area beyond the northern caldera rim, as well as part of the Naples metropolitan area to the east. A major controlling factor of PC dispersal is the location of the vent area. PCs from the potentially largest eruption scenarios (analogous to the ~15 ka, VEI 6 Neapolitan Yellow Tuff or even the ~39 ka, VEI 7 Campanian Ignimbrite extreme event) would affect a large part of the Campanian Plain to the north and the city of Naples to the east. Thus, in case of renewal of eruptive activity at Campi Flegrei, up to 3 million people would potentially be exposed to volcanic hazard, pointing out the urgency of an emergency plan. 
Considering the present level of uncertainty in forecasting the future eruption type, size and location (essentially based on statistical analysis of previous activity), we suggest that appropriate planning measures should address at least the VEI 5 reference scenario (at least 2 occurrences documented in the last 10 ka). PMID:29020018

  8. The policy effects of feed-in tariff and renewable portfolio standard: A case study of China's waste incineration power industry.

    PubMed

    Xin-Gang, Zhao; Yu-Zhuo, Zhang; Ling-Zhi, Ren; Yi, Zuo; Zhi-Gong, Wu

    2017-10-01

Among regulatory policies, feed-in tariffs (FIT) and renewable portfolio standards (RPS) are the most popular instruments for promoting the development of the renewable energy power industry, and they can significantly expand domestic industrial activity in sustainable energy. This paper uses system dynamics (SD) to model the long-term development of China's waste incineration power industry under FIT and RPS schemes, illustrated with a scenario-analysis case study. The model clearly shows the complex logical relationships between the driving factors and assesses the policy effects of the two instruments on the development of the industry; it also provides a reference for scholars studying similar problems in other countries, thereby facilitating an understanding of the long-term sustainable development pattern of waste incineration power under FIT and RPS schemes and informing policy-making institutions. The results show that, in a perfectly competitive market, the implementation of RPS can promote longer-term and more rapid development of China's waste incineration power industry than FIT, given the constraints and actions of the RPS quota proportion, the validity period of tradable green certificates (TGC), and fines. Policy implications are offered at the end of the paper as references for the government.

  9. Method for the technical, financial, economic and environmental pre-feasibility study of geothermal power plants by RETScreen - Ecuador's case study.

    PubMed

    Moya, Diego; Paredes, Juan; Kaparaju, Prasad

    2018-01-01

RETScreen provides a proven methodology focused on pre-feasibility studies. Although this tool has been used to carry out a number of pre-feasibility studies of solar, wind, and hydropower projects, that is not the case for geothermal developments. This method paper proposes a systematic methodology to cover all the necessary inputs of the RETScreen-International Geothermal Project Model. As a case study, geothermal power plant developments in the Ecuadorian context were analysed with the RETScreen-International Geothermal Project Model. Three different scenarios were considered for the analyses. Scenarios I and II considered incentives of 132.1 USD/MWh for electricity generation and grants of 3 million USD. Scenario III considered the geothermal project with an electricity export price of 49.3 USD/MWh. Scenario III was further divided into IIIA and IIIB case studies. Scenario IIIA considered a 3 million USD grant while Scenario IIIB considered an income of 8.9 USD/MWh for selling heat in direct applications. Modelling results showed that the binary power cycle was the most suitable geothermal technology to produce electricity along with aquaculture and greenhouse heating for direct use applications in all scenarios. Financial analyses showed that the debt payment would be 5.36 million USD/year in Scenarios I and III. The corresponding value for Scenario II was 7.06 million USD/year. Net Present Value (NPV) was positive for all studied scenarios except Scenario IIIA. Overall, Scenario II was identified as the most feasible project due to a positive NPV and a short payback period. Scenario IIIB could become financially attractive by selling heat for direct applications. The total initial investment for a 22 MW geothermal power plant was 114.3 million USD (at 2017 costs). Economic analysis showed annual savings of 24.3 million USD by avoiding fossil fuel electricity generation. More than 184,000 tCO2 eq. could be avoided annually.
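The financial indicators used in this abstract, NPV and payback period, reduce to short formulas. A hedged sketch using round figures taken loosely from the abstract; the 8% discount rate and 25-year project life are assumptions for illustration, not values from the study:

```python
def npv(rate, cashflows):
    """Net Present Value of a series of annual cash flows;
    cashflows[0] is the year-0 initial investment (negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def simple_payback(investment, annual_saving):
    """Years to recover the initial investment, ignoring discounting."""
    return investment / annual_saving

# Assumed inputs: 114.3 M USD initial cost and 24.3 M USD/year avoided
# fossil-fuel generation cost (from the abstract); 8% rate, 25 years
# (illustrative assumptions).
flows = [-114.3] + [24.3] * 25
print(round(npv(0.08, flows), 1))                 # positive => feasible
print(round(simple_payback(114.3, 24.3), 1))      # ~4.7 years
```

A scenario is financially attractive when its NPV is positive at the chosen discount rate; comparing scenarios then comes down to comparing NPVs and payback periods, as the abstract does.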

  10. Response and adaptation of grapevine cultivars to hydrological conditions forced by a changing climate in a complex landscape

    NASA Astrophysics Data System (ADS)

    De Lorenzi, Francesca; Bonfante, Antonello; Alfieri, Silvia Maria; Monaco, Eugenia; De Mascellis, Roberto; Manna, Piero; Menenti, Massimo

    2014-05-01

    Soil water availability is one of the main components of the terroir concept, influencing crop yield and fruit composition in grapes. The aim of this work is to analyze some elements of the "natural environment" of terroir (climate and soil) in combination with the intra-specific biodiversity of yield responses of grapevine to water availability. From a reference (1961-90) to a future (2021-50) climate case, the effects of climate evolution on soil water availability are assessed and, regarding soil water regime as a predictor variable, the potential spatial distribution of wine-producing cultivars is determined. In a region of Southern Italy (Valle Telesina, 20,000 ha), where a terroir classification has been produced (Bonfante et al., 2011), we applied an agro-hydrological model to determine water availability indicators. Simulations were performed in 60 soil typological units, over the entire study area, and water availability (= hydrological) indicators were determined. Two climate cases were considered: reference (1961-90) and future (2021-2050), the former from climatic statistics on observed variables, and the latter from statistical downscaling of predictions by general circulation models (AOGCM) under A1B SRES scenario. Climatic data consist of daily time series of maximum and minimum temperature, and daily rainfall on a grid with a spatial resolution of 35 km. Spatial and temporal variability of hydrological indicators was addressed. With respect to temporal variability, both inter-annual and intra-annual (i.e. at different stages of crop cycle) variability were analyzed. Some cultivar-specific relations between hydrological indicators and characteristics of must quality were established. Moreover, for several wine-producing cultivars, hydrological requirements were determined by means of yield response functions to soil water availability, through the re-analysis of experimental data derived from scientific literature. 
The standard errors of the estimated requirements were determined. To assess cultivar adaptability, hydrological requirements were evaluated against hydrological indicators. A probabilistic assessment of adaptability was performed, and the inaccuracy of the estimated hydrological requirements was accounted for through the error of estimate and its distribution. Maps of potential cultivar distribution, i.e. locations where each cultivar is expected to be compatible with climate, were derived, and possible options for adaptation to climate change were defined. The 2021-2050 climate scenario was characterized by higher temperatures throughout the year and by a significant decrease in precipitation during spring and autumn. The results show the considerable variability of the soil water regime and its effects on cultivar adaptability. In the future climate scenario, a hydrological indicator (relative evapotranspiration deficit, RETD), averaged over the growing season, showed an average increase of 5-8%, with more pronounced increases in the phenological phases of berry formation and ripening. At locations where soil hydrological conditions were favourable (such as the ancient terraces), hydrological indicators were quite similar in both climate cases and the adaptability of the cultivars was high in both the reference and future climate. The work was carried out within the Italian national project AGROSCENARI, funded by the Ministry for Agricultural, Food and Forest Policies (MIPAAF, D.M. 8608/7303/2008). Keywords: climate change, Vitis vinifera L., simulation model, yield response functions, potential cultivation area.

  11. An evaluation of climate change effects in estuarine salinity patterns: Application to Ria de Aveiro shallow water system

    NASA Astrophysics Data System (ADS)

    Vargas, Catarina I. C.; Vaz, Nuno; Dias, João M.

    2017-04-01

It is of global interest, for the definition of effective adaptation strategies, to assess climate change impacts in coastal environments. In this study, the adjustment of salinity patterns and the corresponding shifts in Venice System zonation are evaluated through numerical modelling for Ria de Aveiro, a mesotidal shallow water lagoon located on the Portuguese coast, for the end of the 21st century in a climate change context. A reference scenario (equivalent to present conditions) and three future scenarios are defined and simulated, both for wet and dry conditions. The future scenarios are designed with the following changes to the reference: scenario 1) projected mean sea level (MSL) rise; scenario 2) projected river flow discharges; and scenario 3) projections for both MSL and river flow discharges. The projections imposed are: a MSL rise of 0.42 m; a freshwater flow reduction of ∼22% for the wet season and a reduction of ∼87% for the dry season. Modelling results are analyzed for different tidal ranges. Results indicate: a) upstream salinity intrusion and a generalized salinity increase for the sea level rise scenario, with higher significance in middle-to-upper lagoon zones; b) a maximum salinity increase of ∼12 in scenario 3 under wet conditions for the Espinheiro channel, the one with the highest freshwater contribution; c) an upstream displacement of the saline fronts occurring in wet conditions for all future scenarios, with stronger expression for scenario 3, of ∼2 km in the Espinheiro channel; and d) a landward progression of the saltier physical zones established in the Venice System scheme. The adaptation of the ecosystem to the upstream relocation of physical zones may be blocked by human settlements and other artificial barriers surrounding the estuarine environment.

  12. Techno-economic evaluation of stillage treatment with anaerobic digestion in a softwood-to-ethanol process.

    PubMed

    Barta, Zsolt; Reczey, Kati; Zacchi, Guido

    2010-09-15

Replacing the energy-intensive evaporation of stillage by anaerobic digestion is one way of decreasing the energy demand of the lignocellulosic biomass-to-ethanol process. The biogas can be upgraded and sold as transportation fuel, injected directly into the gas grid, or incinerated on-site for combined heat and power generation. A techno-economic evaluation of the spruce-to-ethanol process, based on SO2-catalysed steam pretreatment followed by simultaneous saccharification and fermentation, has been performed using the commercial flow-sheeting program Aspen Plus™. Various process configurations of anaerobic digestion of the stillage, with different combinations of co-products, have been evaluated in terms of energy efficiency and ethanol production cost versus the reference case of evaporation. Anaerobic digestion of the stillage showed a significantly higher overall energy efficiency (87-92%), based on the lower heating values, than the reference case (81%). Although the amount of ethanol produced was the same in all scenarios, the production cost varied between 4.00 and 5.27 Swedish kronor per litre (0.38-0.50 euro/L), including the reference case. Higher energy efficiency options did not necessarily result in lower ethanol production costs. Anaerobic digestion of the stillage with biogas upgrading was demonstrated to be a favourable option for both energy efficiency and ethanol production cost. The difference in the production cost of ethanol between using the whole stillage or only the liquid fraction in anaerobic digestion was negligible for the combination of co-products including upgraded biogas, electricity and district heat.

  13. Techno-economic evaluation of stillage treatment with anaerobic digestion in a softwood-to-ethanol process

    PubMed Central

    2010-01-01

Background Replacing the energy-intensive evaporation of stillage by anaerobic digestion is one way of decreasing the energy demand of the lignocellulosic biomass-to-ethanol process. The biogas can be upgraded and sold as transportation fuel, injected directly into the gas grid, or incinerated on-site for combined heat and power generation. A techno-economic evaluation of the spruce-to-ethanol process, based on SO2-catalysed steam pretreatment followed by simultaneous saccharification and fermentation, has been performed using the commercial flow-sheeting program Aspen Plus™. Various process configurations of anaerobic digestion of the stillage, with different combinations of co-products, have been evaluated in terms of energy efficiency and ethanol production cost versus the reference case of evaporation. Results Anaerobic digestion of the stillage showed a significantly higher overall energy efficiency (87-92%), based on the lower heating values, than the reference case (81%). Although the amount of ethanol produced was the same in all scenarios, the production cost varied between 4.00 and 5.27 Swedish kronor per litre (0.38-0.50 euro/L), including the reference case. Conclusions Higher energy efficiency options did not necessarily result in lower ethanol production costs. Anaerobic digestion of the stillage with biogas upgrading was demonstrated to be a favourable option for both energy efficiency and ethanol production cost. The difference in the production cost of ethanol between using the whole stillage or only the liquid fraction in anaerobic digestion was negligible for the combination of co-products including upgraded biogas, electricity and district heat. PMID:20843330

  14. Building strategies for tsunami scenarios databases to be used in a tsunami early warning decision support system: an application to western Iberia

    NASA Astrophysics Data System (ADS)

    Tinti, S.; Armigliato, A.; Pagnoni, G.; Zaniboni, F.

    2012-04-01

One of the most challenging goals that the geo-scientific community has faced since the catastrophic tsunami of December 2004 in the Indian Ocean is to develop the so-called "next generation" Tsunami Early Warning Systems (TEWS). Indeed, "next generation" does not refer to the aim of a TEWS, which obviously remains to detect whether a tsunami has been generated or not by a given source and, in the first case, to send proper warnings and/or alerts in a suitable time to all the countries and communities that can be affected by the tsunami. Instead, "next generation" refers to the development of a Decision Support System (DSS) that, in general terms, relies on 1) an integrated set of seismic, geodetic and marine sensors whose objective is to detect and characterise the possible tsunamigenic sources and to monitor instrumentally the time and space evolution of the generated tsunami, 2) databases of pre-computed numerical tsunami scenarios to be suitably combined based on the information coming from the sensor environment and to be used to forecast the degree of exposure of different coastal places both in the near- and in the far-field, 3) a proper overall (software) system architecture. The EU-FP7 TRIDEC Project aims at developing such a DSS and has selected two test areas in the Euro-Mediterranean region, namely the western Iberian margin and the eastern Mediterranean (Turkish coasts). In this study, we discuss the strategies that are being adopted in TRIDEC to build the databases of pre-computed tsunami scenarios and we show some applications to the western Iberian margin. In particular, two different databases are being populated, called "Virtual Scenario Database" (VSDB) and "Matching Scenario Database" (MSDB). The VSDB contains detailed simulations of a few selected earthquake-generated tsunamis. 
The cases in the VSDB are simulated "real events": they represent the unknowns that the TRIDEC platform must be able to recognise and match during the early crisis-management phase. The MSDB contains a very large number (order of thousands) of tsunami simulations performed starting from many different simple earthquake sources of different magnitudes located in the "vicinity" of the virtual scenario earthquake. Examples from both databases will be presented.

  15. Change Ahead: Transient Scenarios for Long-term Water Management

    NASA Astrophysics Data System (ADS)

    Haasnoot, Marjolijn; Beersma, Jules; Schellekens, Jaap

    2013-04-01

While the use of an ensemble of transient scenarios is common in climate change studies, such scenarios are rarely used in water management studies. Present planning studies on long-term water management often use a few plausible futures for one or two projection years, ignoring the dynamic aspect of adaptation through the interaction between the water system and society. Over the course of time, society experiences, learns and adapts to changes and events, making policy responses part of a plausible future, and thus of the success of a water management strategy. Exploring transient scenarios and policy options over time can support decision making on water management strategies in an uncertain and changing environment. We have developed and applied such a method, called exploring adaptation pathways (Haasnoot et al., 2012; Haasnoot et al., 2011). This method uses multiple realisations of transient scenarios to assess the efficacy of policy actions over time. When specified objectives are no longer achieved, an adaptation tipping point (Kwadijk et al., 2010) is reached. After reaching a tipping point, additional actions are needed to reach the objectives. As a result, a pathway emerges. In this presentation we describe the development of transient scenarios for long-term water management, and how these scenarios can be used for long-term water management under uncertainty. We illustrate this with thought experiments and results from a computational modeling experiment for exploring adaptation pathways in the lower Rhine delta. The results and the thought experiments show, among other things, that climate variability is at least as important as climate change for taking decisions in water management. References Haasnoot, M., Middelkoop, H., Offermans, A., Beek, E., Deursen, W.A.v. (2012) Exploring pathways for sustainable water management in river deltas in a changing environment. Climatic Change 115, 795-819. Haasnoot, M., Middelkoop, H., van Beek, E., van Deursen, W.P.A. 
(2011) A Method to Develop Sustainable Water Management Strategies for an Uncertain Future. Sustainable Development 19, 369-381. Kwadijk, J.C.J., Haasnoot, M., Mulder, J.P.M., Hoogvliet, M.M.C., Jeuken, A.B.M., van der Krogt, R.A.A., van Oostrom, N.G.C., Schelfhout, H.A., van Velzen, E.H., van Waveren, H., de Wit, M.J.M. (2010) Using adaptation tipping points to prepare for climate change and sea level rise: a case study in the Netherlands. Wiley Interdisciplinary Reviews: Climate Change 1, 729-740.
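The adaptation-tipping-point logic described in this abstract can be expressed compactly: scan one transient realisation of a performance indicator and report the first time step at which the policy objective fails, which is where an additional action must begin a new pathway segment. A minimal sketch; the indicator series and threshold below are hypothetical:

```python
def find_tipping_point(indicator_series, threshold):
    """Return the first time step at which the policy objective
    (indicator must stay at or below `threshold`) is no longer met,
    i.e. the adaptation tipping point; None if the objective holds
    throughout the realisation."""
    for t, value in enumerate(indicator_series):
        if value > threshold:
            return t
    return None

# Hypothetical transient realisation, e.g. annual flood damage index,
# with an objective of keeping it at or below 1.0.
series = [0.2, 0.3, 0.5, 0.9, 1.4, 1.1]
print(find_tipping_point(series, threshold=1.0))  # 4
```

Running this over many realisations of the transient scenarios gives a distribution of tipping-point times per policy action, which is what lets pathways be compared under uncertainty rather than against a single projection year.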

  16. Future Education: Learning the Future. Scenarios and Strategies in Europe. CEDEFOP Reference Series.

    ERIC Educational Resources Information Center

    van Wieringen, Fons; Sellin, Burkart; Schmidt, Ghislaine

    Five research institutes covering five European Union (EU) member states and five Central and Eastern European countries participated in a scenario project designed to improve understanding of vocational education and training (VET) systems in their economic-technological, employment-labor, and training-knowledge environments. The participating…

  17. Climate change impacts on freshwater fish, coral reefs, and related ecosystem services in the United States

    EPA Science Inventory

    We analyzed the potential physical and economic impacts of climate change on freshwater fisheries and coral reefs in the United States, examining a reference scenario and two policy scenarios that limit global greenhouse gas (GHG) emissions. We modeled shifts in suitable habitat ...

  18. Hepatitis C virus infection in Argentina: Burden of chronic disease

    PubMed Central

    Ridruejo, Ezequiel; Bessone, Fernando; Daruich, Jorge R; Estes, Chris; Gadano, Adrián C; Razavi, Homie; Villamil, Federico G; Silva, Marcelo O

    2016-01-01

AIM: To estimate the progression of the hepatitis C virus (HCV) epidemic and measure the burden of HCV-related morbidity and mortality. METHODS: Age- and gender-defined cohorts were used to follow the viremic population in Argentina and estimate HCV incidence, prevalence, hepatic complications, and mortality. The relative impact of two scenarios on HCV-related outcomes was assessed: (1) increased sustained virologic response (SVR); and (2) increased SVR and treatment. RESULTS: Under scenario 1, SVR rose to 85%-95% in 2016. Compared to the base case scenario, there was a 0.3% reduction in prevalent cases and liver-related deaths by 2030. Given low treatment rates, cases of hepatocellular carcinoma and decompensated cirrhosis decreased by < 1% relative to the base case in 2030. Under scenario 2, the same increases in SVR were modeled, with gradual increases in the annual diagnosed and treated populations. This scenario decreased prevalent infections by 45%, liver-related deaths by 55%, liver cancer cases by 60%, and decompensated cirrhosis by 55%, as compared to the base case by 2030. CONCLUSION: In Argentina, cases of end-stage liver disease and liver-related deaths due to HCV are still growing, while HCV prevalence is decreasing. Increasing SVR rates alone is not enough; the number of patients diagnosed and eligible for treatment must also increase to reduce the HCV disease burden. Accordingly, strategies to increase diagnosis and treatment uptake must be developed to reduce the HCV burden in Argentina. PMID:27239258

  19. Improving representation of canopy temperatures for modeling subcanopy incoming longwave radiation to the snow surface

    NASA Astrophysics Data System (ADS)

    Webster, Clare; Rutter, Nick; Jonas, Tobias

    2017-09-01

A comprehensive analysis of canopy surface temperatures was conducted around a small and a large gap at a forested alpine site in the Swiss Alps during the 2015 and 2016 snowmelt seasons (March-April). Canopy surface temperatures within the small gap were within 2-3°C of measured reference air temperature. Vertical and horizontal variations in canopy surface temperatures were greatest around the large gap, varying up to 18°C above measured reference air temperature during clear-sky days. Nighttime canopy surface temperatures around the study site were up to 3°C cooler than reference air temperature. These measurements were used to develop a simple parameterization correcting reference air temperature for elevated canopy surface temperatures during (1) nighttime conditions (subcanopy shortwave radiation of 0 W m-2) and (2) periods of increased subcanopy shortwave radiation (>400 W m-2), representing penetration of shortwave radiation through the canopy. Subcanopy shortwave and longwave radiation collected at a single point in the subcanopy over a 24 h clear-sky period was used to calculate a nighttime bulk offset of 3°C for scenario 1 and to develop a multiple linear regression model for scenario 2 using reference air temperature and subcanopy shortwave radiation to predict canopy surface temperature with a root-mean-square error (RMSE) of 0.7°C. Outside of these two scenarios, reference air temperature was used to predict subcanopy incoming longwave radiation. Modeling at 20 radiometer locations throughout two snowmelt seasons using these parameterizations reduced the mean bias and RMSE to below 10 W m-2 at all locations.
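The two-scenario correction described in this abstract is essentially a piecewise function of reference air temperature and subcanopy shortwave radiation. A sketch under stated assumptions: the 3°C nighttime offset and the 400 W m-2 threshold come from the abstract, but the regression coefficients (b0, b_air, b_sw) are hypothetical placeholders, not the fitted values from the study:

```python
def canopy_temp(t_air_c, sw_wm2, night_offset=3.0,
                b0=2.0, b_air=1.0, b_sw=0.01):
    """Piecewise estimate of canopy surface temperature (degC) from
    reference air temperature and subcanopy shortwave radiation,
    following the abstract's two scenarios. Regression coefficients
    are illustrative placeholders only."""
    if sw_wm2 == 0:
        # Scenario 1 (nighttime): bulk offset above air temperature
        return t_air_c + night_offset
    if sw_wm2 > 400:
        # Scenario 2 (strong shortwave penetration): linear regression
        # on air temperature and subcanopy shortwave radiation
        return b0 + b_air * t_air_c + b_sw * sw_wm2
    # Otherwise: fall back to reference air temperature
    return t_air_c

print(canopy_temp(0.0, 0))     # night: 0.0 + 3.0 offset
print(canopy_temp(5.0, 600))   # sunny: 2.0 + 5.0 + 6.0
print(canopy_temp(5.0, 200))   # intermediate: air temperature
```

The corrected canopy temperature would then feed a Stefan-Boltzmann-type estimate of subcanopy incoming longwave radiation, which is where the reported bias and RMSE reductions arise.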

  20. Micro-Logistics Analysis for Human Space Exploration

    NASA Technical Reports Server (NTRS)

    Cirillo, William; Stromgren, Chel; Galan, Ricardo

    2008-01-01

    Traditionally, logistics analysis for space missions has focused on the delivery of elements and goods to a destination. This type of logistics analysis can be referred to as "macro-logistics". While the delivery of goods is a critical component of mission analysis, it captures only a portion of the constraints that logistics planning may impose on a mission scenario. The other component of logistics analysis concerns the local handling of goods at the destination, including storage, usage, and disposal. This type of logistics analysis, referred to as "micro-logistics", may also be a primary driver in the viability of a human lunar exploration scenario. With the rigorous constraints that will be placed upon a human lunar outpost, it is necessary to accurately evaluate micro-logistics operations in order to develop exploration scenarios that will result in an acceptable level of system performance.

  1. Assessment of riverine load of contaminants to European seas under policy implementation scenarios: an example with 3 pilot substances.

    PubMed

    Marinov, Dimitar; Pistocchi, Alberto; Trombetti, Marco; Bidoglio, Giovanni

    2014-01-01

    An evaluation of conventional emission scenarios is carried out targeting a possible impact of European Union (EU) policies on riverine loads to the European seas for 3 pilot pollutants: lindane, trifluralin, and perfluorooctane sulfonate (PFOS). The policy scenarios are investigated up to a time horizon of 2020, starting from chemical-specific reference conditions and considering different types of regulatory measures, including business as usual (BAU), current trend (CT), partial implementation (PI), or complete ban (PI ban) of emissions. The scenario analyses show that the model-estimated lindane load of 745 t to European seas in 1995, based on the official emission data, would be reduced by 98.3% to approximately 12.5 t in 2005 (BAU scenario), 10 years after the start of the EU regulation of this chemical. The CT and PI ban scenarios indicate a reduction of sea loads of lindane in 2020 by 74% and 95%, respectively, when compared to the BAU estimate. For trifluralin, an annual load of approximately 61.7 t is estimated for the baseline year 2003 (BAU scenario), despite the conservative assumptions applied because of limited pesticide use data availability in Europe. Under the PI (ban) scenario, assuming only small residual emissions of trifluralin, we estimate a sea loading of approximately 0.07 t/y. For PFOS, the total sea load from all European countries is estimated at approximately 5.8 t/y for the reference year 2007 (BAU scenario). Reducing the total load of PFOS below 1 t/y requires emissions to be reduced by 84%. The analysis of conventional scenarios or scenario typologies for emissions of contaminants using simple spatially explicit GIS-based models is suggested as a viable, affordable exercise that may support the assessment of implementation of policies and the identification or negotiation of emission reduction targets. © 2013 SETAC.
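    The headline percentage reductions can be reproduced from the quoted loads with simple arithmetic, assuming load scales proportionally with emissions; a sketch:

```python
def pct_reduction(baseline, value):
    """Percent reduction of `value` relative to `baseline`."""
    return (baseline - value) / baseline * 100.0

# Lindane: 745 t (1995) down to ~12.5 t (2005, BAU scenario)
lindane_cut = pct_reduction(745.0, 12.5)   # ~98.3 %, matching the abstract

# PFOS: cutting ~5.8 t/y to exactly 1 t/y is an ~83 % reduction, so the
# quoted 84 % brings the load just below the 1 t/y target
pfos_cut = pct_reduction(5.8, 1.0)
```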

  2. Dispersion modeling of accidental releases of toxic gases - Comparison of the models and their utility for the fire brigades.

    NASA Astrophysics Data System (ADS)

    Stenzel, S.; Baumann-Stanzer, K.

    2009-04-01

    In the case of an accidental release of hazardous gases into the atmosphere, emergency responders need a reliable and fast tool to assess the possible consequences and apply the optimal countermeasures. A number of air dispersion models are available for hazard prediction and simulation of hazard zones. Most model packages (commercial or free of charge) include a chemical database, an intuitive graphical user interface (GUI), and automated graphical output for displaying the results; they are easy to use and can operate quickly and effectively during stress situations. The models are designed especially for analyzing different accidental toxic release scenarios ("worst-case scenarios") and for preparing emergency response plans and optimal countermeasures, as well as for real-time risk assessment and management. Some can also be coupled directly to automatic meteorological stations in order to avoid uncertainties in the model output due to insufficient or incorrect meteorological data. Another key problem in coping with accidental toxic releases is the relatively wide spectrum of regulatory threshold values, such as IDLH, ERPG, AEGL, and MAK, and the different criteria for their application. Since individual emergency responders and organizations require different thresholds for their purposes, it is difficult to predict a single hazard area. A number of research studies have addressed this problem; the final decision, however, rests with the authorities.
    The research project RETOMOD (reference scenario calculations for toxic gas releases - model systems and their utility for the fire brigade) was conducted by the Central Institute for Meteorology and Geodynamics (ZAMG) in cooperation with the Vienna fire brigade, OMV Refining & Marketing GmbH and Synex Ries & Greßlehner GmbH. RETOMOD was funded by the KIRAS safety research program of the Austrian Ministry of Transport, Innovation and Technology (www.kiras.at). The main tasks of this project were (1) a sensitivity study and optimization of the meteorological input for modeling hazard areas (human exposure) during accidental toxic releases, and (2) a comparison of several model packages (based on reference scenarios) in order to estimate their utility for the fire brigades. This presentation introduces the models used in the project and presents the results of task 2; the results of task 1 are presented by Baumann-Stanzer and Stenzel in this session. For the purpose of this study the following models were tested and compared: ALOHA (Areal Locations of Hazardous Atmospheres, EPA), MEMPLEX (Keudel av-Technik GmbH), Breeze (Trinity Consulting), SAFER System, SAM (Engineering office Lohmeyer), and COMPAS. A set of reference scenarios for chlorine, ammonia, butane and petrol was processed in order to reliably predict and estimate human exposure during such events. The models simulated accidental releases of the above-mentioned gases and estimated the potential toxic areas. Since the input requirements differ from model to model, and the outputs are based on different criteria for toxic areas and exposure, a high degree of caution is needed in the interpretation of the model results.

  3. A Versatile Panel of Reference Gene Assays for the Measurement of Chicken mRNA by Quantitative PCR

    PubMed Central

    Maier, Helena J.; Van Borm, Steven; Young, John R.; Fife, Mark

    2016-01-01

    Quantitative real-time PCR assays are widely used for the quantification of mRNA within avian experimental samples. Multiple stably-expressed reference genes, selected for the lowest variation in representative samples, can be used to control random technical variation. Reference gene assays must be reliable, have high amplification specificity and efficiency, and not produce signals from contaminating DNA. Whilst recent research papers identify specific genes that are stable in particular tissues and experimental treatments, here we describe a panel of ten avian gene primer and probe sets that can be used to identify suitable reference genes in many experimental contexts. The panel was tested with TaqMan and SYBR Green systems in two experimental scenarios: a tissue collection and virus infection of cultured fibroblasts. GeNorm and NormFinder algorithms were able to select appropriate reference gene sets in each case. We show the effects of using the selected genes on the detection of statistically significant differences in expression. The results are compared with those obtained using 28S ribosomal RNA, currently the most widely accepted reference gene in chicken work, identifying circumstances where its use might provide misleading results. Methods for eliminating DNA contamination of RNA reduced, but did not completely remove, detectable DNA. We therefore attached special importance to testing each qPCR assay for absence of signal using DNA template. The assays and analyses developed here provide a useful resource for selecting reference genes for investigations of avian biology. PMID:27537060
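    The geNorm algorithm mentioned above ranks candidate reference genes by an expression stability value M: for each gene, M is the mean standard deviation of its pairwise log2 expression ratios against every other candidate, with lower M indicating more stable expression. A simplified reimplementation for illustration (not the published geNorm code):

```python
import numpy as np

def genorm_m(log2_expr):
    """geNorm-style expression stability values.

    log2_expr : (n_samples, n_genes) array of log2 expression levels.
    Returns one M value per gene; lower M = more stably expressed.
    Simplified sketch of the published stability measure, not the
    original implementation.
    """
    n_genes = log2_expr.shape[1]
    m = np.empty(n_genes)
    for j in range(n_genes):
        # log-ratios of gene j against every other candidate gene
        ratios = log2_expr[:, [j]] - log2_expr
        sds = ratios.std(axis=0, ddof=1)   # pairwise variation V_jk
        m[j] = np.delete(sds, j).mean()    # average over k != j
    return m
```

    Genes with the lowest M would be retained as the reference set; NormFinder takes a different, model-based approach, decomposing intra- and inter-group variance.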

  4. A survey of the views and capabilities of community pharmacists in Western Australia regarding the rescheduling of selected oral antibiotics in a framework of pharmacist prescribing

    PubMed Central

    Sinkala, Fatima; Parsons, Richard; Sunderland, Bruce; Hoti, Kreshnik

    2018-01-01

    Background Antibiotic misuse in the community contributes to antimicrobial resistance. One way to address this may be by better utilizing community pharmacists’ skills in antibiotic prescribing. The aims of this study were to examine the level of support for “down-scheduling” selected antibiotics and to evaluate factors determining the appropriateness of community pharmacist prescribing for a limited range of infections, including their decision to refer to a doctor. Methods Self-administered questionnaires, including graded case vignette scenarios simulating real practice, were sent to Western Australian community pharmacists. In addition to descriptive statistics and chi-square testing, a General Estimating Equation (GEE) was used to identify factors associated with appropriateness of therapy and the decision to refer, for each of the seven vignettes. Results Of the 240 pharmacists surveyed, 90 (37.5%) responded, yielding 630 responses to seven different case vignettes. There was more than 60% respondent support for expanded prescribing (rescheduling) of commonly prescribed antibiotics. Overall 426/630 (67.6%) chose to treat the patient while the remaining 204/630 (32.4%) referred the patient to a doctor. Of those electing to treat, 380/426 (89.2%) opted to use oral antibiotics, with 293/380 (77.2%) treating with an appropriate selection and regimen. The GEE model indicated that pharmacists were more likely to prescribe inappropriately for conditions such as otitis media (p = 0.0060) and urinary tract infection in pregnancy (p < 0.0001) compared to more complex conditions. Over 80% of all pharmacists would refer the patient to a doctor following no improvement within 3 days, or within 24 h in the case of community acquired pneumonia. It was more common for younger pharmacists to refer the patient to a doctor (p = 0.0165). 
Discussion This study adds further insight into community pharmacy/pharmacist characteristics associated with appropriateness of oral antibiotic selection and the decision to refer to doctors. These findings require consideration in designing pharmacist over-the-counter prescribing models for oral antibiotics. PMID:29761047

  5. A survey of the views and capabilities of community pharmacists in Western Australia regarding the rescheduling of selected oral antibiotics in a framework of pharmacist prescribing.

    PubMed

    Sinkala, Fatima; Parsons, Richard; Sunderland, Bruce; Hoti, Kreshnik; Czarniak, Petra

    2018-01-01

    Antibiotic misuse in the community contributes to antimicrobial resistance. One way to address this may be by better utilizing community pharmacists' skills in antibiotic prescribing. The aims of this study were to examine the level of support for "down-scheduling" selected antibiotics and to evaluate factors determining the appropriateness of community pharmacist prescribing for a limited range of infections, including their decision to refer to a doctor. Self-administered questionnaires, including graded case vignette scenarios simulating real practice, were sent to Western Australian community pharmacists. In addition to descriptive statistics and chi-square testing, a General Estimating Equation (GEE) was used to identify factors associated with appropriateness of therapy and the decision to refer, for each of the seven vignettes. Of the 240 pharmacists surveyed, 90 (37.5%) responded, yielding 630 responses to seven different case vignettes. There was more than 60% respondent support for expanded prescribing (rescheduling) of commonly prescribed antibiotics. Overall 426/630 (67.6%) chose to treat the patient while the remaining 204/630 (32.4%) referred the patient to a doctor. Of those electing to treat, 380/426 (89.2%) opted to use oral antibiotics, with 293/380 (77.2%) treating with an appropriate selection and regimen. The GEE model indicated that pharmacists were more likely to prescribe inappropriately for conditions such as otitis media (p = 0.0060) and urinary tract infection in pregnancy (p < 0.0001) compared to more complex conditions. Over 80% of all pharmacists would refer the patient to a doctor following no improvement within 3 days, or within 24 h in the case of community acquired pneumonia. It was more common for younger pharmacists to refer the patient to a doctor (p = 0.0165). 
This study adds further insight into community pharmacy/pharmacist characteristics associated with appropriateness of oral antibiotic selection and the decision to refer to doctors. These findings require consideration in designing pharmacist over-the-counter prescribing models for oral antibiotics.

  6. Impact of idealized future stratospheric aerosol injection on the large-scale ocean and land carbon cycles

    NASA Astrophysics Data System (ADS)

    Tjiputra, J. F.; Grini, A.; Lee, H.

    2016-01-01

    Using an Earth system model, we simulate stratospheric aerosol injection (SAI) on top of the Representative Concentration Pathway 8.5 future scenario. Our idealized method prescribes aerosol concentration, linearly increasing from 2020 to 2100, and thereafter remaining constant until 2200. In the aggressive scenario, the model projects a cooling trend toward 2100 despite warming that persists in the high latitudes. Following SAI termination in 2100, a rapid global warming of 0.35 K yr-1 is simulated in the subsequent 10 years, and the global mean temperature returns to levels close to the reference state, though roughly 0.5 K cooler. In contrast to earlier findings, we show a weak response in the terrestrial carbon sink during SAI implementation in the 21st century, which we attribute to nitrogen limitation. The SAI increases the land carbon uptake in the temperate forest-, grassland-, and shrub-dominated regions. The resultant lower temperatures lead to a reduction in the heterotrophic respiration rate and increase soil carbon retention. Changes in precipitation patterns are key drivers for variability in vegetation carbon. Upon SAI termination, the level of vegetation carbon storage returns to the reference case, whereas the soil carbon remains high. The ocean absorbs nearly 10% more carbon in the geoengineered simulation than in the reference simulation, leading to a ˜15 ppm lower atmospheric CO2 concentration in 2100. The largest enhancement in uptake occurs in the North Atlantic. In both hemispheres' polar regions, SAI delays the sea ice melting and, consequently, export production remains low. In the deep waters of the North Atlantic, SAI-induced circulation changes accelerate the ocean acidification rate and broaden the affected area.

  7. Climate change projections using the IPSL-CM5 Earth System Model: from CMIP3 to CMIP5

    NASA Astrophysics Data System (ADS)

    Dufresne, J.-L.; Foujols, M.-A.; Denvil, S.; Caubel, A.; Marti, O.; Aumont, O.; Balkanski, Y.; Bekki, S.; Bellenger, H.; Benshila, R.; Bony, S.; Bopp, L.; Braconnot, P.; Brockmann, P.; Cadule, P.; Cheruy, F.; Codron, F.; Cozic, A.; Cugnet, D.; de Noblet, N.; Duvel, J.-P.; Ethé, C.; Fairhead, L.; Fichefet, T.; Flavoni, S.; Friedlingstein, P.; Grandpeix, J.-Y.; Guez, L.; Guilyardi, E.; Hauglustaine, D.; Hourdin, F.; Idelkadi, A.; Ghattas, J.; Joussaume, S.; Kageyama, M.; Krinner, G.; Labetoulle, S.; Lahellec, A.; Lefebvre, M.-P.; Lefevre, F.; Levy, C.; Li, Z. X.; Lloyd, J.; Lott, F.; Madec, G.; Mancip, M.; Marchand, M.; Masson, S.; Meurdesoif, Y.; Mignot, J.; Musat, I.; Parouty, S.; Polcher, J.; Rio, C.; Schulz, M.; Swingedouw, D.; Szopa, S.; Talandier, C.; Terray, P.; Viovy, N.; Vuichard, N.

    2013-05-01

    We present the global general circulation model IPSL-CM5 developed to study the long-term response of the climate system to natural and anthropogenic forcings as part of the 5th Phase of the Coupled Model Intercomparison Project (CMIP5). This model includes an interactive carbon cycle, a representation of tropospheric and stratospheric chemistry, and a comprehensive representation of aerosols. As it represents the principal dynamical, physical, and bio-geochemical processes relevant to the climate system, it may be referred to as an Earth System Model. However, the IPSL-CM5 model may be used in a multitude of configurations associated with different boundary conditions and with a range of complexities in terms of processes and interactions. This paper presents an overview of the different model components and explains how they were coupled and used to simulate historical climate changes over the past 150 years and different scenarios of future climate change. A single version of the IPSL-CM5 model (IPSL-CM5A-LR) was used to provide climate projections associated with different socio-economic scenarios, including the different Representative Concentration Pathways considered by CMIP5 and several scenarios from the Special Report on Emission Scenarios considered by CMIP3. Results suggest that the magnitude of global warming projections primarily depends on the socio-economic scenario considered, that there is potential for an aggressive mitigation policy to limit global warming to about two degrees, and that the behavior of some components of the climate system such as the Arctic sea ice and the Atlantic Meridional Overturning Circulation may change drastically by the end of the twenty-first century in the case of a no climate policy scenario. 
Although the magnitude of regional temperature and precipitation changes depends fairly linearly on the magnitude of the projected global warming (and thus on the scenario considered), the geographical pattern of these changes is strikingly similar for the different scenarios. The representation of atmospheric physical processes in the model is shown to strongly influence the simulated climate variability and both the magnitude and pattern of the projected climate changes.

  8. Accuracy Analysis and Parameters Optimization in Urban Flood Simulation by PEST Model

    NASA Astrophysics Data System (ADS)

    Keum, H.; Han, K.; Kim, H.; Ha, C.

    2017-12-01

    The risk of urban flooding has been increasing due to heavy rainfall, flash flooding and rapid urbanization. Rainwater pumping stations and underground reservoirs are used to actively take measures against flooding; however, flood damage in lowlands continues to occur. Inundation in urban areas results from the overflow of sewers. Accurate two-dimensional flood analysis therefore requires representing the intricately entangled pipe network of a city as it physically exists, together with accurate terrain, because buildings and roads strongly affect the flow. The purpose of this study is to propose an optimal scenario construction procedure for watershed partitioning and parameterization for urban runoff analysis and pipe network analysis, and to increase the accuracy of flooded-area prediction through a coupled model. The procedure was verified by applying it to an actual drainage basin in Seoul. In this study, optimization was performed using four parameters: Manning's roughness coefficient for conduits, watershed width, Manning's roughness coefficient for impervious areas, and Manning's roughness coefficient for pervious areas. The calibration ranges of the parameters were determined using the SWMM manual and the ranges used in previous studies, and the parameters were estimated using the automatic calibration tool PEST. The correlation coefficient was high for the scenarios using PEST, and the RPE and RMSE also showed high accuracy. In the case of the RPE, the error was in the range of 13.9-28.9% in the no-parameter-estimation scenarios, but in the scenarios using PEST the error range was reduced to 6.8-25.7%.
    Based on these results, more accurate flood analysis is possible when the optimal scenario is selected by determining an appropriate reference conduit for future urban flooding analysis and when the results are applied to various rainfall event scenarios and parameter optimization. Keywords: parameter optimization; PEST model; urban area. Acknowledgement: This research was supported by a grant (17AWMP-B079625-04) from the Water Management Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.
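    The error metrics quoted above can be made explicit. RMSE is standard; the abstract does not define RPE, so it is taken here, as an assumption, to be a mean absolute percent error between observed and simulated values:

```python
import math

def rmse(obs, sim):
    """Root-mean-square error between observed and simulated series."""
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def rpe(obs, sim):
    """Relative percent error; assumed here to mean the mean absolute
    percent deviation of simulated from observed values."""
    return 100.0 * sum(abs(o - s) / o for o, s in zip(obs, sim)) / len(obs)
```

    In a PEST calibration these metrics (or a weighted sum-of-squares objective) would be evaluated after each model run to drive the parameter updates.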

  9. Solar power satellite system definition study, phase 2. Volume 2: Reference system description

    NASA Technical Reports Server (NTRS)

    1979-01-01

    System descriptions and cost estimates for the reference system of the solar power satellite program are presented. The reference system is divided into five principal elements: the solar power satellites; space construction and support; space and ground transportation; ground receiving stations; and operations control. The program scenario and non-recurring costs are briefly described.

  10. An approach to checking case-crossover analyses based on equivalence with time-series methods.

    PubMed

    Lu, Yun; Symons, James Morel; Geyh, Alison S; Zeger, Scott L

    2008-03-01

    The case-crossover design has been increasingly applied to epidemiologic investigations of acute adverse health effects associated with ambient air pollution. The correspondence of the design to that of matched case-control studies makes it inferentially appealing for epidemiologic studies. Case-crossover analyses generally use conditional logistic regression modeling. This technique is equivalent to time-series log-linear regression models when there is a common exposure across individuals, as in air pollution studies. Previous methods for obtaining unbiased estimates for case-crossover analyses have assumed that time-varying risk factors are constant within reference windows. In this paper, we rely on the connection between case-crossover and time-series methods to illustrate model-checking procedures from log-linear model diagnostics for time-stratified case-crossover analyses. Additionally, we compare the relative performance of the time-stratified case-crossover approach to time-series methods under 3 simulated scenarios representing different temporal patterns of daily mortality associated with air pollution in Chicago, Illinois, during 1995 and 1996. Whenever a model, be it time-series or case-crossover, fails to account appropriately for fluctuations in time that confound the exposure, the effect estimate will be biased. It is therefore important to perform model-checking in time-stratified case-crossover analyses rather than assume the estimator is unbiased.
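    In the time-stratified design referred to above, each event day's exposure is compared with exposure on referent days drawn from a fixed stratum, commonly the same day of week within the same calendar month. A minimal sketch of that referent selection (illustrative, not the authors' code):

```python
from datetime import date, timedelta

def time_stratified_referents(event_day):
    """Referent days for a time-stratified case-crossover analysis:
    all days sharing the event day's weekday within the same calendar
    month, excluding the event day itself."""
    d = event_day.replace(day=1)          # first day of the stratum (month)
    referents = []
    while d.month == event_day.month:
        if d.weekday() == event_day.weekday() and d != event_day:
            referents.append(d)
        d += timedelta(days=1)
    return referents
```

    Because the strata are fixed in advance, this referent scheme avoids the overlap bias of symmetric bidirectional designs, which is part of why it maps cleanly onto time-series log-linear models.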

  11. Accuracy of estimation of genomic breeding values in pigs using low-density genotypes and imputation.

    PubMed

    Badke, Yvonne M; Bates, Ronald O; Ernst, Catherine W; Fix, Justin; Steibel, Juan P

    2014-04-16

    Genomic selection has the potential to increase genetic progress. Genotype imputation of high-density single-nucleotide polymorphism (SNP) genotypes can improve the cost efficiency of genomic breeding value (GEBV) prediction for pig breeding. Consequently, the objectives of this work were to: (1) estimate accuracy of genomic evaluation and GEBV for three traits in a Yorkshire population and (2) quantify the loss of accuracy of genomic evaluation and GEBV when genotypes were imputed under two scenarios: a high-cost, high-accuracy scenario in which only selection candidates were imputed from a low-density platform and a low-cost, low-accuracy scenario in which all animals were imputed using a small reference panel of haplotypes. Phenotypes and genotypes obtained with the PorcineSNP60 BeadChip were available for 983 Yorkshire boars. Genotypes of selection candidates were masked and imputed using tagSNP in the GeneSeek Genomic Profiler (10K). Imputation was performed with BEAGLE using 128 or 1800 haplotypes as reference panels. GEBV were obtained through an animal-centric ridge regression model using de-regressed breeding values as response variables. Accuracy of genomic evaluation was estimated as the correlation between estimated breeding values and GEBV in a 10-fold cross validation design. Accuracy of genomic evaluation using observed genotypes was high for all traits (0.65-0.68). Using genotypes imputed from a large reference panel (accuracy: R(2) = 0.95) for genomic evaluation did not significantly decrease accuracy, whereas a scenario with genotypes imputed from a small reference panel (R(2) = 0.88) did show a significant decrease in accuracy. Genomic evaluation based on imputed genotypes in selection candidates can be implemented at a fraction of the cost of a genomic evaluation using observed genotypes and still yield virtually the same accuracy. 
    On the other hand, using a very small reference panel of haplotypes to impute both training animals and selection candidates results in lower accuracy of genomic evaluation.
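    The ridge-regression step used for GEBV prediction can be illustrated on simulated genotypes. This is a sketch with made-up data and an ad hoc shrinkage parameter; the study's actual model used de-regressed breeding values as the response and an animal-centric formulation:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 400, 300                                      # animals, SNPs (simulated)
X = rng.binomial(2, 0.5, size=(n, p)).astype(float)  # allele counts 0/1/2
X -= X.mean(axis=0)                                  # centre (full sample, for brevity)
b = rng.normal(0.0, 1.0, p)                          # simulated SNP effects
g = X @ b                                            # true genomic values
y = g + rng.normal(0.0, g.std(), n)                  # phenotypes, heritability ~0.5

# Ridge regression: beta = (X'X + lambda*I)^(-1) X'y, lambda chosen ad hoc
lam = float(p)
train, test = slice(0, 300), slice(300, n)
Xt = X[train]
beta = np.linalg.solve(Xt.T @ Xt + lam * np.eye(p), Xt.T @ y[train])

gebv = X[test] @ beta                                # GEBV for "selection candidates"
acc = np.corrcoef(gebv, g[test])[0, 1]               # accuracy vs true genomic value
```

    In practice, accuracy is estimated as the correlation between GEBV and estimated breeding values in k-fold cross-validation, as in the study; imputation error would enter by replacing X for the candidates with imputed genotypes.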

  12. Mobile, Collaborative Situated Knowledge Creation for Urban Planning

    PubMed Central

    Zurita, Gustavo; Baloian, Nelson

    2012-01-01

    Geo-collaboration is an emerging research area in computer science studying the way spatial, geographically referenced information and communication technologies can support collaborative activities. Scenarios in which information associated with a physical location is of paramount importance are often referred to as situated knowledge creation scenarios. To date, few computer systems supporting knowledge creation explicitly incorporate physical context as part of the knowledge being managed in mobile face-to-face scenarios. This work presents a collaborative software application supporting visually geo-referenced knowledge creation in mobile working scenarios while the users interact face-to-face. The system allows users to manage information associated with specific physical locations for in-the-field knowledge creation processes such as urban planning, identification of specific physical locations, and territorial management, using Tablet-PCs and GPS to geo-reference data and information. It presents a model for developing mobile applications supporting situated knowledge creation in the field, introducing the requirements for such an application and the functionalities it should have in order to fulfill them. The paper also presents the results of utility and usability evaluations. PMID:22778639

  13. Mobile, collaborative situated knowledge creation for urban planning.

    PubMed

    Zurita, Gustavo; Baloian, Nelson

    2012-01-01

    Geo-collaboration is an emerging research area in computer science studying the way spatial, geographically referenced information and communication technologies can support collaborative activities. Scenarios in which information associated with a physical location is of paramount importance are often referred to as situated knowledge creation scenarios. To date, few computer systems supporting knowledge creation explicitly incorporate physical context as part of the knowledge being managed in mobile face-to-face scenarios. This work presents a collaborative software application supporting visually geo-referenced knowledge creation in mobile working scenarios while the users interact face-to-face. The system allows users to manage information associated with specific physical locations for in-the-field knowledge creation processes such as urban planning, identification of specific physical locations, and territorial management, using Tablet-PCs and GPS to geo-reference data and information. It presents a model for developing mobile applications supporting situated knowledge creation in the field, introducing the requirements for such an application and the functionalities it should have in order to fulfill them. The paper also presents the results of utility and usability evaluations.

  14. [Effects of sampling plot number on tree species distribution prediction under climate change].

    PubMed

    Liang, Yu; He, Hong-Shi; Wu, Zhi-Wei; Li, Xiao-Na; Luo, Xu

    2013-05-01

    Based on the neutral landscapes under different degrees of landscape fragmentation, this paper studied the effects of sampling plot number on the prediction of tree species distribution at landscape scale under climate change. The tree species distribution was predicted by the coupled modeling approach which linked an ecosystem process model with a forest landscape model, and three contingent scenarios and one reference scenario of sampling plot numbers were assumed. The differences between the three scenarios and the reference scenario under different degrees of landscape fragmentation were tested. The results indicated that the effects of sampling plot number on the prediction of tree species distribution depended on the tree species life history attributes. For the generalist species, the prediction of their distribution at landscape scale needed more plots. Except for the extreme specialist, landscape fragmentation degree also affected the effects of sampling plot number on the prediction. With the increase of simulation period, the effects of sampling plot number on the prediction of tree species distribution at landscape scale could be changed. For generalist species, more plots are needed for the long-term simulation.

  15. Reference scenarios for deforestation and forest degradation in support of REDD: a review of data and methods

    NASA Astrophysics Data System (ADS)

    Olander, Lydia P.; Gibbs, Holly K.; Steininger, Marc; Swenson, Jennifer J.; Murray, Brian C.

    2008-04-01

    Global climate policy initiatives are now being proposed to compensate tropical forest nations for reducing carbon emissions from deforestation and forest degradation (REDD). These proposals have the potential to include developing countries more actively in international greenhouse gas mitigation and to address a substantial share of the world's emissions which come from tropical deforestation. For such a policy to be viable it must have a credible benchmark against which emissions reduction can be calculated. This benchmark, sometimes termed a baseline or reference emissions scenario, can be based directly on historical emissions or can use historical emissions as input for business-as-usual projections. Here, we review existing data and methods that could be used to measure historical deforestation and forest degradation reference scenarios, including FAO (Food and Agricultural Organization of the United Nations) national statistics and various remote sensing sources. The freely available and corrected global Landsat imagery for 1990, 2000 and soon 2005 may be the best primary data source for most developing countries, with coarser-resolution, high-frequency or radar data as a valuable complement for addressing problems with cloud cover and for distinguishing larger-scale degradation. While sampling of imagery has been effective for pan-tropical and continental estimates of deforestation, wall-to-wall (full) coverage allows more detailed assessments for measuring national-level reference emissions. It is possible to measure historical deforestation with sufficient certainty for determining reference emissions, but there must be continued calls at the international level for making high-resolution imagery available, and for financial and technical assistance to help countries determine credible reference scenarios. 
The data available for past years may not be sufficient for assessing all forms of forest degradation, but new data sources will have greater potential in 2007 and after. This paper focuses only on the methods for measuring changes in forest area, but this information must be coupled with estimates of change in forest carbon stocks in order to quantify emissions from deforestation and forest degradation.

  16. Low carbon and clean energy scenarios for India: Analysis of targets approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shukla, Priyadarshi R.; Chaturvedi, Vaibhav

    2012-12-01

    Low carbon energy technologies are gaining increasing importance in India for reducing emissions as well as diversifying its energy supply mix. This paper presents and analyses a targeted approach for pushing solar, wind and nuclear technologies in the Indian energy market. Targets for these technologies have been constructed on the basis of Indian government documents, policy announcements and expert opinion, with different targets set for the reference scenario and the carbon price scenario. In the reference scenario it is found that in the long run solar, wind and nuclear will all achieve their targets without any subsidy push. In the short run, however, nuclear and solar energy require a significant subsidy push; nuclear energy requires a much higher subsidy allocation than solar because the assumed targets are also higher for nuclear energy. Under a carbon price scenario, the carbon price drives the penetration of these technologies significantly, but subsidies are still required, especially in the short run when the carbon price is low. It is also found that pushing solar, wind and nuclear technologies might decrease the share of CCS under the carbon price scenario and of biomass under both the BAU and carbon price scenarios, which implies that one set of low carbon technologies is substituted for another; the objective of emission mitigation might therefore not be achieved. Moreover, a sensitivity analysis on nuclear energy cost was performed to represent risk mitigation for this technology; it was found that a higher cost can significantly decrease the share of this technology under both the BAU and carbon price scenarios.

  17. 30 CFR 254.47 - Determining the volume of oil of your worst case discharge scenario.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... associated with the facility. In determining the daily discharge rate, you must consider reservoir characteristics, casing/production tubing sizes, and historical production and reservoir pressure data. Your...) For exploratory or development drilling operations, the size of your worst case discharge scenario is...

  18. Dual Mission Scenarios for the Human Lunar Campaign - Performance, Cost and Risk Benefits

    NASA Technical Reports Server (NTRS)

    Saucillo, Rudolph J.; Reeves, David M.; Chrone, Jonathan D.; Stromgren, Chel; Reeves, John D.; North, David D.

    2008-01-01

    Scenarios for human lunar operations with capabilities significantly beyond Constellation Program baseline missions are potentially feasible based on the concept of dual, sequential missions utilizing a common crew and a single Ares I/CEV (Crew Exploration Vehicle). For example, scenarios possible within the scope of baseline technology planning include outpost-based sortie missions and dual sortie missions. Top-level cost benefits of these dual sortie scenarios may be estimated by comparison to the Constellation Program reference two-mission-per-year lunar campaign. The primary cost benefit is the accomplishment of Mission B with a "single launch solution" since no Ares I launch is required. Cumulative risk to the crew is lowered since crew exposure to launch risks and Earth return risks is reduced versus comparable Constellation Program reference two-mission-per-year scenarios. Payload-to-the-lunar-surface capability is substantially increased in the Mission B sortie as a result of additional propellant available for Lunar Lander #2 descent. This additional propellant is a result of EDS #2 transferring a smaller stack through trans-lunar injection and using remaining propellant to perform a portion of the lunar orbit insertion (LOI) maneuver. This paper describes these dual mission concepts, including cost, risk and performance benefits per lunar sortie site, and provides an initial feasibility assessment.

  19. Ecoregion-Based Conservation Planning in the Mediterranean: Dealing with Large-Scale Heterogeneity

    PubMed Central

    Giakoumi, Sylvaine; Sini, Maria; Gerovasileiou, Vasilis; Mazor, Tessa; Beher, Jutta; Possingham, Hugh P.; Abdulla, Ameer; Çinar, Melih Ertan; Dendrinos, Panagiotis; Gucu, Ali Cemal; Karamanlidis, Alexandros A.; Rodic, Petra; Panayotidis, Panayotis; Taskin, Ergun; Jaklin, Andrej; Voultsiadou, Eleni; Webster, Chloë; Zenetos, Argyro; Katsanevakis, Stelios

    2013-01-01

    Spatial priorities for the conservation of three key Mediterranean habitats, i.e. seagrass Posidonia oceanica meadows, coralligenous formations, and marine caves, were determined through a systematic planning approach. Available information on the distribution of these habitats across the entire Mediterranean Sea was compiled to produce basin-scale distribution maps. Conservation targets for each habitat type were set according to European Union guidelines. Surrogates were used to estimate the spatial variation of opportunity cost for commercial, non-commercial fishing, and aquaculture. Marxan conservation planning software was used to evaluate the comparative utility of two planning scenarios: (a) a whole-basin scenario, referring to selection of priority areas across the whole Mediterranean Sea, and (b) an ecoregional scenario, in which priority areas were selected within eight predefined ecoregions. Although both scenarios required approximately the same total area to be protected in order to achieve conservation targets, the opportunity cost differed between them. The whole-basin scenario yielded a lower opportunity cost, but the Alboran Sea ecoregion was not represented and priority areas were predominantly located in the Ionian, Aegean, and Adriatic Seas. In comparison, the ecoregional scenario resulted in a higher representation of ecoregions and a more even distribution of priority areas, albeit with a higher opportunity cost. We suggest that planning at the ecoregional level ensures better representativeness of the selected conservation features and adequate protection of species, functional, and genetic diversity across the basin. 
While there are several initiatives that identify priority areas in the Mediterranean Sea, our approach is novel as it combines three issues: (a) it is based on the distribution of habitats and not species, which was rarely the case in previous efforts, (b) it considers spatial variability of cost throughout this socioeconomically heterogeneous basin, and (c) it adopts ecoregions as the most appropriate level for large-scale planning. PMID:24155901

  20. Cost-effectiveness of tubal patency tests.

    PubMed

    Verhoeve, H R; Moolenaar, L M; Hompes, P; van der Veen, F; Mol, B W J

    2013-04-01

    Guidelines are not in agreement on the most effective diagnostic scenario for tubal patency testing; therefore, we evaluated the cost-effectiveness of invasive tubal testing in subfertile couples compared with no testing and treatment. Cost-effectiveness analysis. Decision analytic framework. Computer-simulated cohort of subfertile women. We evaluated six scenarios: (1) no tests and no treatment; (2) immediate treatment without tubal testing; (3) delayed treatment without tubal testing; (4) hysterosalpingogram (HSG), followed by immediate or delayed treatment, according to diagnosis (tailored treatment); (5) HSG and a diagnostic laparoscopy (DL) in case HSG does not prove tubal patency, followed by tailored treatment; and (6) DL followed by tailored treatment. Expected cumulative live births after 3 years. Secondary outcomes were cost per couple and the incremental cost-effectiveness ratio. For a 30-year-old woman with otherwise unexplained subfertility for 12 months, 3-year cumulative live birth rates were 51.8, 78.1, 78.4, 78.4, 78.6 and 78.4%, and costs per couple were €0, €6968, €5063, €5410, €5405 and €6163 for scenarios 1, 2, 3, 4, 5 and 6, respectively. The incremental cost-effectiveness ratios compared with scenario 1 (reference strategy), were €26,541, €19,046, €20,372, €20,150 and €23,184 for scenarios 2, 3, 4, 5 and 6, respectively. Sensitivity analysis showed the model to be robust over a wide range of values for the variables. The most cost-effective scenario is to perform no diagnostic tubal tests and to delay in vitro fertilisation (IVF) treatment for at least 12 months for women younger than 38 years old, and to perform no tubal tests and start immediate IVF treatment from the age of 39 years. If an invasive diagnostic test is planned, HSG followed by tailored treatment, or a DL if HSG shows no tubal patency, is more cost-effective than DL. © 2013 The Authors BJOG An International Journal of Obstetrics and Gynaecology © 2013 RCOG.
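The incremental cost-effectiveness ratios above follow directly from the reported per-couple costs and cumulative live birth rates. A minimal sketch of the calculation (small differences from the published ICERs reflect the rounding of rates and costs in the abstract):

```python
# Recompute incremental cost-effectiveness ratios (ICERs) versus the
# no-test/no-treatment reference (scenario 1), using the figures reported
# in the abstract. ICER = (cost_i - cost_ref) / (effect_i - effect_ref),
# with effects expressed as cumulative live birth probabilities.
costs = {1: 0, 2: 6968, 3: 5063, 4: 5410, 5: 5405, 6: 6163}            # EUR/couple
live_birth = {1: 0.518, 2: 0.781, 3: 0.784, 4: 0.784, 5: 0.786, 6: 0.784}

def icer(scenario, reference=1):
    """Cost per additional live birth relative to the reference scenario."""
    d_cost = costs[scenario] - costs[reference]
    d_effect = live_birth[scenario] - live_birth[reference]
    return d_cost / d_effect

for s in range(2, 7):
    print(f"scenario {s}: EUR {icer(s):,.0f} per additional live birth")
```

Scenario 3 (delayed treatment without tubal testing) yields the lowest cost per additional live birth, matching the abstract's conclusion.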

  1. An analysis of the impacts of global climate and emissions changes on regional tropospheric ozone

    NASA Technical Reports Server (NTRS)

    John, Kuruvilla; Crist, Kevin C.; Carmichael, Gregory R.

    1994-01-01

    Many of the synergistic impacts resulting from future changes in emissions as well as changes in ambient temperature, moisture, and UV flux have not been quantified. A three-dimensional regional-scale photochemical model (STEM-2) is used in this study to evaluate these perturbations to trace gas cycles over the eastern half of the United States of America. The model was successfully used to simulate a regional-scale ozone episode (base case - June 1984), four perturbation scenarios (perturbed emissions, temperature, water vapor column, and incoming UV flux), and a future scenario (for the year 2034). The impact of these perturbation scenarios on the distribution of ozone and other major pollutants such as SO2 and sulfates was analyzed in detail. The spatial distribution and the concentration of ozone at the surface increased by about 5-15 percent for most cases except for the perturbed water vapor case. The regional-scale surface ozone concentration distribution for the year 2034 (future scenario) showed an increase in non-attainment areas. The rural areas of Pennsylvania, West Virginia, and Georgia showed the largest change in the surface ozone field for the future scenario when compared to the base case.

  2. Forecasting age-related macular degeneration through the year 2050: the potential impact of new treatments.

    PubMed

    Rein, David B; Wittenborn, John S; Zhang, Xinzhi; Honeycutt, Amanda A; Lesesne, Sarah B; Saaddine, Jinan

    2009-04-01

    To forecast age-related macular degeneration (AMD) and its consequences in the United States through the year 2050 with different treatment scenarios. We simulated cases of early AMD, choroidal neovascularization (CNV), geographic atrophy (GA), and AMD-attributable visual impairment and blindness with 5 universal treatment scenarios: (1) no treatment; (2) focal laser and photodynamic therapy (PDT) for CNV; (3) vitamin prophylaxis at early-AMD incidence with focal laser/PDT for CNV; (4) no vitamin prophylaxis followed by focal laser treatment for extra and juxtafoveal CNV and anti-vascular endothelial growth factor treatment; and (5) vitamin prophylaxis at early-AMD incidence followed by CNV treatment, as in scenario 4. Cases of early AMD increased from 9.1 million in 2010 to 17.8 million in 2050 across all scenarios. In non-vitamin-receiving scenarios, cases of CNV and GA increased from 1.7 million in 2010 to 3.8 million in 2050 (25% lower in vitamin-receiving scenarios). Cases of visual impairment and blindness increased from 620 000 in 2010 to 1.6 million in 2050 when given no treatment and were 2.4%, 22.0%, 16.9%, and 34.5% lower in scenarios 2, 3, 4, and 5, respectively. Prevalence of AMD will increase substantially by 2050, but the use of new therapies can mitigate its effects.

  3. Software Architecture: Managing Design for Achieving Warfighter Capability

    DTIC Science & Technology

    2007-04-30

    The Government’s requirements and specifications for a new weapon...at the Preliminary Design Review (PDR) is likely to have a much higher probability of meeting the warfighters’ need for capability. Test-case...inventories of test cases are developed from the user-defined scenarios so that there are one or more test cases for every scenario. The test cases will

  4. New alternatives for reference evapotranspiration estimation in West Africa using limited weather data and ancillary data supply strategies.

    NASA Astrophysics Data System (ADS)

    Landeras, Gorka; Bekoe, Emmanuel; Ampofo, Joseph; Logah, Frederick; Diop, Mbaye; Cisse, Madiama; Shiri, Jalal

    2018-05-01

    Accurate estimation of reference evapotranspiration (ET0) is essential for the computation of crop water requirements, irrigation scheduling, and water resources management. In this context, having a battery of locally calibrated alternative ET0 estimation methods is of great interest for any irrigation advisory service, and the development of irrigation advisory services will be a major breakthrough for West African agriculture. In many West African countries, the high number of meteorological inputs required by the Penman-Monteith equation has been identified as a constraint. The present paper investigates, for the first time in Ghana, the estimation ability of artificial intelligence-based models (Artificial Neural Networks (ANNs) and Gene Expression Programming (GEP)) and of ancillary/external approaches for modeling reference evapotranspiration (ET0) using limited weather data. According to the results of this study, GEP models have emerged as a very interesting alternative for ET0 estimation at all of the Ghanaian locations evaluated under different scenarios of meteorological data availability. The adoption of ancillary/external approaches has also been successful, particularly at the southern locations. The encouraging results obtained in this study using GEP models and some ancillary approaches could serve as a reference for future studies on ET0 estimation in West Africa.
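As an illustration of ET0 estimation under limited weather data, the Hargreaves-Samani equation is a standard temperature-only alternative to Penman-Monteith. It is not one of the GEP or ancillary models evaluated in the study, and the input values below are illustrative, not measurements from the Ghanaian sites:

```python
import math

def hargreaves_et0(tmin, tmax, ra_mj):
    """Hargreaves-Samani reference evapotranspiration (mm/day).

    tmin, tmax : daily minimum/maximum air temperature (deg C)
    ra_mj      : extraterrestrial radiation (MJ m-2 day-1), a function of
                 latitude and day of year only (no measurement needed)
    """
    tmean = (tmin + tmax) / 2.0
    ra_mm = 0.408 * ra_mj  # convert MJ m-2 day-1 to mm/day equivalent
    return 0.0023 * ra_mm * (tmean + 17.8) * math.sqrt(tmax - tmin)

# Illustrative tropical values:
print(round(hargreaves_et0(tmin=23.0, tmax=33.0, ra_mj=37.0), 2))  # mm/day
```

Because extraterrestrial radiation is computable from latitude and date, the only measured inputs are daily temperature extremes, which is precisely the data-scarce setting the paper addresses.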

  5. Intellectual property rights related to the genetically modified glyphosate tolerant soybeans in Brazil.

    PubMed

    Rodrigues, Roberta L; Lage, Celso L S; Vasconcellos, Alexandre G

    2011-06-01

    The present work analyzes the different modalities of protection of intellectual creations in the agricultural biotechnology field. Considering the Brazilian laws related to the theme (the Industrial Property Law, No. 9.279/96, and the Plant Variety Protection Law, No. 9.456/97), and based on the international treaties signed by Brazil, the present work points out the scope of each, as well as their interfaces, using the case study of glyphosate-tolerant genetically modified soybean as a reference. For this case study, Monsanto's pipeline patents were searched and used to analyze the limits of patent protection with respect to the other related Intellectual Property (IP) laws. Thus, it was possible to elucidate the complex Intellectual Property scenario of glyphosate-tolerant soybeans, since for the farmer it is hard to correlate the royalty payments with the enterprise's IP rights.

  6. More on Weinberg's no-go theorem in quantum gravity

    NASA Astrophysics Data System (ADS)

    Nagahama, Munehiro; Oda, Ichiro

    2018-05-01

    We complement Weinberg's no-go theorem on the cosmological constant problem in quantum gravity by generalizing it to the case of a scale-invariant theory. Our analysis makes use of the effective action and the BRST symmetry in a manifestly covariant quantum gravity instead of the classical Lagrangian density and the GL(4) symmetry in classical gravity. In this sense, our proof is very general since it does not depend on details of quantum gravity and holds true for general gravitational theories which are invariant under diffeomorphisms. As an application of our theorem, we comment on an idea that in the asymptotic safety scenario the functional renormalization flow drives a cosmological constant to zero, solving the cosmological constant problem without reference to fine tuning of parameters. Finally, we also comment on the possibility of extending the Weinberg theorem in quantum gravity to the case where the translational invariance is spontaneously broken.

  7. Scope of practice: freedom within limits.

    PubMed

    Schuiling, K D; Slager, J

    2000-01-01

    "Scope of practice" has a variety of meanings amongst midwives, other health professionals, health organizations, and consumers of midwifery care. For some, it refers to the Standards for the Practice of Midwifery; for others, it encompasses the legal base of practice; still others equate it with the components of the clinical parameters of practice. Because "scope of practice" is dynamic and parameters of practice can be impacted by many variables, succinctly defining "scope of practice" is difficult. This article provides a comprehensive discussion of the concept "scope of practice." Clinical scenarios are provided as case exemplars. The aim of this paper is to provide both new and experienced midwives with a substantive definition of the concept "scope of practice."

  8. Effects of alternative outcome scenarios and structured outcome evaluation on case-based ethics instruction.

    PubMed

    Peacock, Juandre; Harkrider, Lauren N; Bagdasarov, Zhanna; Connelly, Shane; Johnson, James F; Thiel, Chase E; Macdougall, Alexandra E; Mumford, Michael D; Devenport, Lynn D

    2013-09-01

    Case-based instruction has been regarded by many as a viable alternative to traditional lecture-based education and training. However, little is known about how case-based training techniques impact training effectiveness. This study examined the effects of two such techniques: (a) presentation of alternative outcome scenarios to a case, and (b) conducting a structured outcome evaluation. Consistent with the hypotheses, results indicate that presentation of alternative outcome scenarios reduced knowledge acquisition, reduced sensemaking and ethical decision-making strategy use, and reduced decision ethicality. Conducting a structured outcome evaluation had no impact on these outcomes. Results indicate that those who use case-based instruction should take care to use clear, less complex cases with only a singular outcome if they are seeking these types of outcomes.

  9. Generic Argillite/Shale Disposal Reference Case

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Liange; Colon, Carlos Jové; Bianchi, Marco

    Radioactive waste disposal in a deep subsurface repository hosted in clay/shale/argillite is a subject of widespread interest given the desirable isolation properties, geochemically reduced conditions, and widespread geologic occurrence of this rock type (Hansen 2010; Bianchi et al. 2013). Bianchi et al. (2013) provides a description of diffusion in a clay-hosted repository based on single-phase flow and full saturation using parametric data from documented studies in Europe (e.g., ANDRA 2005). The predominance of diffusive transport and sorption phenomena in these clay media are key attributes that impede radionuclide mobility, making clay rock formations target sites for disposal of high-level radioactive waste. The reports by Hansen et al. (2010) and those from numerous studies in clay-hosted underground research laboratories (URLs) in Belgium, France and Switzerland outline the extensive scientific knowledge obtained to assess the long-term isolation performance of clay/shale/argillite repositories for nuclear waste. In the past several years under the UFDC, various kinds of models have been developed for argillite repositories to demonstrate model capability, understand the spatial and temporal alteration of the repository, and evaluate different scenarios. These models include the coupled Thermal-Hydrological-Mechanical (THM) and Thermal-Hydrological-Mechanical-Chemical (THMC) models (e.g. Liu et al. 2013; Rutqvist et al. 2014a; Zheng et al. 2014a) that focus on THMC processes in the Engineered Barrier System (EBS) bentonite and argillite host rock, the large-scale hydrogeologic model (Bianchi et al. 2014) that investigates the hydraulic connection between an emplacement drift and surrounding hydrogeological units, and Disposal Systems Evaluation Framework (DSEF) models (Greenberg et al. 2013) that evaluate thermal evolution in the host rock, approximated as a thermal conduction process, to facilitate the analysis of design options.
    However, the assumptions and the properties (parameters) used in these models are different, which not only makes inter-model comparisons difficult, but also compromises the applicability of lessons learned from one model to another. The establishment of a reference case would therefore be helpful to set up a baseline for model development. A generic salt repository reference case was developed in Freeze et al. (2013), and the generic argillite repository reference case is presented in this report. The definition of a reference case requires the characterization of the waste inventory, waste form, waste package, repository layout, EBS backfill, host rock, and biosphere. This report mainly documents the processes in EBS bentonite and host rock that are potentially important for performance assessment, and the properties needed to describe these processes, with brief descriptions of other components such as waste inventory, waste form, waste package, repository layout, aquifer, and biosphere. A thorough description of the generic argillite repository reference case will be given in Jové Colon et al. (2014).

  10. A multinational randomised study comparing didactic lectures with case scenario in a severe sepsis medical simulation course.

    PubMed

    Li, Chih-Huang; Kuan, Win-Sen; Mahadevan, Malcolm; Daniel-Underwood, Lynda; Chiu, Te-Fa; Nguyen, H Bryant

    2012-07-01

    Medical simulation has been used to teach critical illness in a variety of settings. This study examined the effect of didactic lectures compared with simulated case scenario in a medical simulation course on the early management of severe sepsis. A prospective multicentre randomised study was performed enrolling resident physicians in emergency medicine from four hospitals in Asia. Participants were randomly assigned to a course that included didactic lectures followed by a skills workshop and simulated case scenario (lecture-first) or to a course that included a skills workshop and simulated case scenario followed by didactic lectures (simulation-first). A pre-test was given to the participants at the beginning of the course, post-test 1 was given after the didactic lectures or simulated case scenario depending on the study group assignment, then a final post-test 2 was given at the end of the course. Performance on the simulated case scenario was evaluated with a performance task checklist. 98 participants were enrolled in the study. Post-test 2 scores were significantly higher than pre-test scores in all participants (80.8 ± 12.0% vs 65.4 ± 12.2%, p<0.01). There was no difference in pre-test scores between the two study groups. The lecture-first group had significantly higher post-test 1 scores than the simulation-first group (78.8 ± 10.6% vs 71.6 ± 12.6%, p<0.01). There was no difference in post-test 2 scores between the two groups. The simulated case scenario task performance completion was 90.8% (95% CI 86.6% to 95.0%) in the lecture-first group compared with 83.8% (95% CI 79.5% to 88.1%) in the simulation-first group (p=0.02). A medical simulation course can improve resident physician knowledge in the early management of severe sepsis. Such a course should include a comprehensive curriculum that includes didactic lectures followed by simulation experience.

  11. A First Look at the Upcoming SISO Space Reference FOM

    NASA Technical Reports Server (NTRS)

    Crues, Edwin; Dexter, Dan; Madden, Michael; Garro, Alfred; Vankov, Alexander; Skuratovskiy, Anton; Moller, Bjorn

    2016-01-01

    Simulation is increasingly used in the space domain for several purposes. One example is analysis and engineering, from the mission level down to individual systems and subsystems. Another example is training of space crew and flight controllers. Several distributed simulations have been developed, for example, for docking vehicles with the ISS and for mission training, in many cases with participants from several nations. Space-based scenarios are also used in the "Simulation Exploration Experience", SISO's university outreach program. We have thus realized that there is a need for a distributed simulation interoperability standard for data exchange within the space domain. Based on these experiences, SISO is developing a Space Reference FOM. Members of the product development group come from several countries and contribute experiences from projects within NASA, ESA and other organizations. Participants represent government, academia and industry. The first version will focus on handling of time and space. The Space Reference FOM will provide the following: (i) a flexible positioning system using reference frames for arbitrary bodies in space, (ii) naming conventions for well-known reference frames, (iii) definitions of common time scales, (iv) federation agreements for common types of time management with focus on time-stepped simulation, and (v) support for physical entities, such as space vehicles and astronauts. The Space Reference FOM is expected to make collaboration politically, contractually and technically easier. It is also expected to make collaboration easier to manage and extend.

  12. SERENITY in e-Business and Smart Item Scenarios

    NASA Astrophysics Data System (ADS)

    Benameur, Azzedine; Khoury, Paul El; Seguran, Magali; Sinha, Smriti Kumar

    SERENITY artefacts, such as Classes, Patterns, Implementations and Executable Components for Security & Dependability (S&D), as well as the Serenity Runtime Framework (SRF), were discussed in previous chapters. How to integrate these artefacts with applications in the SERENITY approach is discussed here through two scenarios. The e-Business scenario is a standard loan origination process in a bank. The Smart Item scenario is an ambient intelligence case study in which we take advantage of Smart Items to provide an electronic healthcare infrastructure for remote healthcare assistance. In both cases, we detail how the prototype implementations of the scenarios select the proper executable components through the Serenity Runtime Framework, and then demonstrate how these executable components of the S&D Patterns are deployed.

  13. Greenhouse gas emissions and land use change from Jatropha curcas-based jet fuel in Brazil.

    PubMed

    Bailis, Robert E; Baka, Jennifer E

    2010-11-15

    This analysis presents a comparison of life-cycle GHG emissions from synthetic paraffinic kerosene (SPK) produced as a jet fuel substitute from Jatropha curcas feedstock cultivated in Brazil against a reference scenario of conventional jet fuel. Life cycle inventory data are derived from surveys of actual Jatropha growers and processors. Results indicate that a baseline scenario, which assumes a medium yield of 4 tons of dry fruit per hectare under drip irrigation with existing logistical conditions using energy-based coproduct allocation methodology, and assumes a 20-year plantation lifetime with no direct land use change (dLUC), results in emissions of 40 kg CO₂e per GJ of fuel produced, a 55% reduction relative to conventional jet fuel. However, dLUC based on observations of land-use transitions leads to widely varying changes in carbon stocks, ranging from losses in excess of 50 tons of carbon per hectare when Jatropha is planted in native cerrado woodlands to gains of 10-15 tons of carbon per hectare when Jatropha is planted in former agro-pastoral land. Thus, aggregate emissions vary from a low of 13 kg CO₂e per GJ when Jatropha is planted in former agro-pastoral lands, an 85% decrease from the reference scenario, to 141 kg CO₂e per GJ when Jatropha is planted in cerrado woodlands, a 60% increase over the reference scenario. Additional sensitivities are also explored, including changes in yield, exclusion of irrigation, shortened supply chains, and alternative allocation methodologies.
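The reported relative changes can be roughly checked against an assumed conventional jet fuel life-cycle intensity of about 87.5 kg CO₂e per GJ, a commonly cited value; the abstract does not state the paper's exact reference figure, so discrepancies of a couple of percentage points are expected:

```python
# Check the reported relative changes against an assumed conventional
# jet fuel life-cycle intensity (~87.5 kg CO2e/GJ). This reference value
# is an assumption, not a figure taken from the paper.
REFERENCE = 87.5  # kg CO2e per GJ, assumed

def relative_change(emissions_kg_per_gj):
    """Percent change versus the conventional jet fuel reference."""
    return 100.0 * (emissions_kg_per_gj - REFERENCE) / REFERENCE

for label, e in [("baseline, no dLUC", 40),
                 ("former agro-pastoral land", 13),
                 ("cerrado woodland", 141)]:
    print(f"{label}: {relative_change(e):+.0f}%")
```

The computed changes (about -54%, -85%, and +61%) line up with the abstract's reported 55% reduction, 85% decrease, and 60% increase.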

  14. A decision support system to find the best water allocation strategies in a Mediterranean river basin in future scenarios of global change

    NASA Astrophysics Data System (ADS)

    Del Vasto-Terrientes, L.; Kumar, V.; Chao, T.-C.; Valls, A.

    2016-03-01

    Global change refers not only to climate change but also to demographic, technological and economic changes. Predicted water scarcity will be critical in the coastal Mediterranean region, especially for provision to mid-sized and large cities. This paper studies the case of the city of Tarragona, located in the Mediterranean area of north-eastern Spain (Catalonia). Several scenarios have been constructed to evaluate different sectorial water allocation policies to mitigate the water scarcity induced by global change. Future water supply and demand predictions have been made for three time spans. The decision support system presented is based on an outranking model, which constructs a partial pre-order from pairwise preference relations among all possible actions. The system analyses a hierarchical structure of criteria, including environmental and economic criteria. We compare several adaptation measures, including alternative water sources, inter-basin water transfer, and demand management in the industrial, agricultural and domestic sectors. Results indicate that the most appropriate water allocation strategies depend on the severity of the global change effects.
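A minimal sketch of the pairwise outranking idea the abstract describes, in the spirit of concordance-based (ELECTRE-type) methods; the criteria, weights, threshold, and alternatives below are invented for illustration and are not taken from the paper:

```python
# Concordance-based outranking sketch: action a outranks action b when
# the total weight of criteria on which a is at least as good as b
# reaches a concordance threshold. All data here are hypothetical.
WEIGHTS = {"env_flow": 0.4, "cost": 0.35, "supply_reliability": 0.25}

# Scores per water-allocation measure (higher is better on every criterion).
ACTIONS = {
    "desalination":   {"env_flow": 0.8, "cost": 0.3, "supply_reliability": 0.9},
    "water_transfer": {"env_flow": 0.4, "cost": 0.6, "supply_reliability": 0.7},
    "demand_mgmt":    {"env_flow": 0.9, "cost": 0.8, "supply_reliability": 0.5},
}

def concordance(a, b):
    """Total weight of criteria on which a is at least as good as b."""
    return sum(w for c, w in WEIGHTS.items()
               if ACTIONS[a][c] >= ACTIONS[b][c])

def outranks(a, b, threshold=0.7):
    return a != b and concordance(a, b) >= threshold

# The pairwise relations form a partial pre-order: some pairs of actions
# may be incomparable rather than strictly ranked.
for a in ACTIONS:
    for b in ACTIONS:
        if outranks(a, b):
            print(f"{a} outranks {b} (concordance {concordance(a, b):.2f})")
```

With these made-up scores, demand management outranks both alternatives, while desalination and water transfer remain mutually incomparable, illustrating why the method yields a partial rather than total order.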

  15. The Avellino 3780-yr-B.P. catastrophe as a worst-case scenario for a future eruption at Vesuvius

    PubMed Central

    Mastrolorenzo, Giuseppe; Petrone, Pierpaolo; Pappalardo, Lucia; Sheridan, Michael F.

    2006-01-01

    A volcanic catastrophe even more devastating than the famous anno Domini 79 Pompeii eruption occurred during the Old Bronze Age at Vesuvius. The 3780-yr-B.P. Avellino plinian eruption produced an early violent pumice fallout and a late pyroclastic surge sequence that covered the volcano surroundings as far as 25 km away, burying land and villages. Here we present the reconstruction of this prehistoric catastrophe and its impact on the Bronze Age culture in Campania, drawn from an interdisciplinary volcanological and archaeoanthropological study. Evidence shows that a sudden, en masse evacuation of thousands of people occurred at the beginning of the eruption, before the last destructive plinian column collapse. Most of the fugitives likely survived, but the desertification of the total habitat due to the huge eruption size caused a social–demographic collapse and the abandonment of the entire area for centuries. Because an event of this scale is capable of devastating a broad territory that includes the present metropolitan district of Naples, it should be considered as a reference for the worst eruptive scenario at Vesuvius. PMID:16537390

  16. Simulating forensic casework scenarios in experimental studies: The generation of footwear marks in blood.

    PubMed

    McElhone, Rachel L; Meakin, Georgina E; French, James C; Alexander, Tracy; Morgan, Ruth M

    2016-07-01

    A study was designed to investigate the effects of external variables, including blood type, flooring surface, footwear tread depth and blood dryness, on the appearance of blood-based footwear marks, with particular reference to simulating a specific casework scenario. Results showed that footwear marks left in human blood tended to be of greater quality than those in equine blood, highlighting a potential issue in applying data generated with equine blood to human bloodstains in casework. Footwear tread effects were also dependent on blood type, but the type of flooring surface did not affect the appearance of the mark. Under some conditions, as the blood dried, the amount of detail retained from footwear contact decreased. These results provide the beginnings of an empirical evidence base to allow a more accurate interpretation of blood-based footwear marks in forensic casework. When applied to a disputed bloodstain in a specific case, these results also demonstrate the importance of such experiments in narrowing the range of explanations possible in the interpretation of forensic evidence. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  17. Scenario-based and scenario-neutral assessment of climate change impacts on operational performance of a multipurpose reservoir

    Treesearch

    Allison G. Danner; Mohammad Safeeq; Gordon E. Grant; Charlotte Wickham; Desirée Tullos; Mary V. Santelmann

    2017-01-01

    Scenario-based and scenario-neutral impacts assessment approaches provide complementary information about how climate change-driven effects on streamflow may change the operational performance of multipurpose dams. Examining a case study of Cougar Dam in Oregon, United States, we simulated current reservoir operations under scenarios of plausible future hydrology....

  18. SERA Scenarios of Early Market Fuel Cell Electric Vehicle Introductions: Modeling Framework, Regional Markets, and Station Clustering; NREL (National Renewable Energy Laboratory)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Melaina, M.

    This presentation provides an overview of the Scenario Evaluation and Regionalization Analysis (SERA) model, describes the methodology for developing scenarios for hydrogen infrastructure development, outlines an example "Hydrogen Success" scenario, and discusses detailed scenario metrics for a particular case study region, the Northeast Corridor.

  19. Assessing National Employment Impacts of Investment in Residential and Commercial Sector Energy Efficiency: Review and Example Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, David M.; Belzer, David B.; Livingston, Olga V.

    Pacific Northwest National Laboratory (PNNL) modeled the employment impacts of a major national initiative to accelerate energy efficiency trends at one of two levels:
    • 15 percent savings by 2030. In this scenario, efficiency activities save about 15 percent of the Annual Energy Outlook (AEO) Reference Case electricity consumption by 2030. It is assumed that additional energy savings in both the residential and commercial sectors begin in 2015 at zero, and then increase in an S-shaped market penetration curve, with the level of savings equal to about 7.0 percent of the AEO 2014 U.S. national residential and commercial electricity consumption saved by 2020, 14.8 percent by 2025, and 15 percent by 2030.
    • 10 percent savings by 2030. In this scenario, additional savings begin at zero in 2015, increase to 3.8 percent in 2020, 9.8 percent by 2025, and 10 percent of the AEO reference case value by 2030.
    The analysis of the 15 percent case indicates that by 2030 more than 300,000 new jobs would likely result from such policies, including an annual average of more than 60,000 jobs directly supporting the installation and maintenance of energy efficiency measures and practices. These are new jobs resulting initially from the investment associated with the construction of more energy-efficient new buildings or the retrofit of existing buildings, and they would be sustained for as long as the investment continues. Based on what is known about the current level of building-sector energy efficiency jobs, this would represent an increase of more than 10 percent from the current estimated level of over 450,000 such jobs. The more significant and longer-lasting effect comes from the redirection of energy bill savings toward the purchase of other goods and services in the general economy, with its attendant influence on increasing the total number of jobs. This example analysis utilized PNNL’s ImSET model, a modeling framework that PNNL has used over the past two decades to assess the economic impacts of the U.S. Department of Energy’s (DOE’s) energy efficiency programs in the buildings sector.
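
The S-shaped market-penetration path in the 15-percent scenario can be sketched with a logistic curve. The functional form and parameter values below are illustrative choices tuned to roughly reproduce the quoted waypoints; they are not taken from the PNNL report.

```python
import math

def savings_fraction(year, ceiling=15.0, midpoint=2020.0, rate=0.86):
    """Illustrative logistic S-curve for the 15%-by-2030 savings scenario.

    Parameters are hypothetical, chosen to roughly match the quoted
    waypoints: ~0% in 2015, ~7% in 2020, ~14.8% in 2025, 15% by 2030.
    """
    return ceiling / (1.0 + math.exp(-rate * (year - midpoint)))

for y in (2015, 2020, 2025, 2030):
    print(y, round(savings_fraction(y), 1))
```

Any sigmoid with the right ceiling and steepness would serve; the point is the slow start, rapid mid-decade ramp, and saturation by 2030.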

  20. SU-C-19A-07: Influence of Immobilization On Plan Robustness in the Treatment of Head and Neck Cancer with IMPT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bues, M; Anand, A; Liu, W

    2014-06-15

    Purpose: We evaluated the effect of interposing immobilization devices into the beam's path on the robustness of a head and neck plan. Methods: An anthropomorphic head phantom was placed into a preliminary prototype of a specialized head and neck immobilization device for proton beam therapy. The device consists of a hard low-density shell, a custom mold insert, and a thermoplastic mask to immobilize the patient's head in the shell. This device was provided by CIVCO Medical Solutions for the purpose of evaluating its suitability for proton beam therapy. See Figure 1. Two pairs of treatment plans were generated. The first plan in each pair was a reference plan including only the anthropomorphic phantom, and the second plan in each pair included the immobilization device. In all other respects the plans within a pair were identical. Results: In the case of the simple plan, the degradation of plan robustness was found to be clinically insignificant: target coverage in the worst-case scenario was reduced from 95% of the target volume receiving 96.5% of prescription dose to 95% of the target volume receiving 96.3% of prescription dose by introducing the immobilization device. In the case of the complex plan, target coverage of the boost volume in the worst-case scenario was reduced from 95% of the boost target volume receiving 97% of prescription dose to 95% of the boost target volume receiving 83% of prescription dose by introducing the immobilization device. See Figure 2. Conclusion: Immobilization devices may have a deleterious effect on plan robustness. Evaluation of the preliminary prototype revealed a variable impact on plan robustness depending on the complexity of the case. Brian Morse is an employee of CIVCO Medical Solutions.

  1. Two types of physical inconsistency to avoid with quantile mapping: a case study with relative humidity over North America.

    NASA Astrophysics Data System (ADS)

    Grenier, P.

    2017-12-01

    Statistical post-processing techniques aim at generating plausible climate scenarios from climate simulations and observation-based reference products. These techniques are generally not physically-based, and consequently they remedy the problem of simulation biases at the risk of generating physical inconsistency (PI). Although this concern is often emphasized, it is rarely addressed quantitatively. Here, PI generated by quantile mapping (QM), a technique widely used in climatological and hydrological applications, is investigated using relative humidity (RH) and its parent variables, namely specific humidity (SH), temperature and pressure. PI is classified into two types: 1) inadequate value for an individual variable (e.g. RH > 100 %), and 2) breaking of an inter-variable relationship. Scenarios built for this study correspond to twelve sites representing a variety of climate types over North America. Data used are an ensemble of ten 3-hourly global (CMIP5) and regional (CORDEX-NAM) simulations, as well as the CFSR reanalysis. PI of type 1 is discussed in terms of frequency of occurrence and amplitude of unphysical cases for RH and SH variables. PI of type 2 is investigated with heuristic proxies designed to directly compare the physical inconsistency problem with the initial bias problem. Finally, recommendations are provided for an appropriate use of QM given the potential to generate physical inconsistency of types 1 and 2.
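
The quantile-mapping step at the heart of QM can be sketched as empirical CDF matching. This is a minimal sketch with synthetic Gaussian data; the function and variable names are illustrative, not from the study.

```python
import numpy as np

def quantile_map(sim_hist, obs_ref, sim_fut):
    """Map each simulated value onto the observed distribution via its rank.

    Minimal empirical quantile mapping: find each future value's quantile
    within the historical simulation, then look up that quantile in the
    observation-based reference product.
    """
    probs = np.searchsorted(np.sort(sim_hist), sim_fut) / len(sim_hist)
    return np.quantile(obs_ref, np.clip(probs, 0.0, 1.0))

rng = np.random.default_rng(0)
obs = rng.normal(50.0, 10.0, 5000)   # "observed" relative humidity (%)
sim = rng.normal(60.0, 15.0, 5000)   # biased simulation of the same period
corrected = quantile_map(sim, obs, sim)
print(round(float(corrected.mean()), 1))   # pulled toward the observed mean
```

Note that nothing in this per-variable mapping enforces RH ≤ 100% when the parent variables are corrected separately, which is exactly the kind of type-1 physical inconsistency the study quantifies.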

  2. A statistically robust EEG re-referencing procedure to mitigate reference effect

    PubMed Central

    Lepage, Kyle Q.; Kramer, Mark A.; Chu, Catherine J.

    2014-01-01

    Background The electroencephalogram (EEG) remains the primary tool for diagnosis of abnormal brain activity in clinical neurology and for in vivo recordings of human neurophysiology in neuroscience research. In EEG data acquisition, voltage is measured at positions on the scalp with respect to a reference electrode. When this reference electrode responds to electrical activity or artifact all electrodes are affected. Successful analysis of EEG data often involves re-referencing procedures that modify the recorded traces and seek to minimize the impact of reference electrode activity upon functions of the original EEG recordings. New method We provide a novel, statistically robust procedure that adapts a robust maximum-likelihood type estimator to the problem of reference estimation, reduces the influence of neural activity from the re-referencing operation, and maintains good performance in a wide variety of empirical scenarios. Results The performance of the proposed and existing re-referencing procedures are validated in simulation and with examples of EEG recordings. To facilitate this comparison, channel-to-channel correlations are investigated theoretically and in simulation. Comparison with existing methods The proposed procedure avoids using data contaminated by neural signal and remains unbiased in recording scenarios where physical references, the common average reference (CAR) and the reference estimation standardization technique (REST) are not optimal. Conclusion The proposed procedure is simple, fast, and avoids the potential for substantial bias when analyzing low-density EEG data. PMID:24975291
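
As a rough illustration of the idea, a per-sample robust location estimate can stand in for the reference signal. The channel median used below is a simple stand-in for the robust maximum-likelihood-type estimator the paper develops, and the data are synthetic.

```python
import numpy as np

def robust_rereference(eeg):
    """Re-reference EEG (channels x samples) against a robust estimate.

    The per-sample channel median is a crude stand-in for the paper's
    robust M-estimator: unlike the common average reference (the mean),
    it is not pulled by a few channels carrying strong focal activity.
    """
    return eeg - np.median(eeg, axis=0)

rng = np.random.default_rng(1)
eeg = rng.normal(0.0, 1.0, (10, 1000))   # 10 channels, 1000 samples
eeg[:2] += 50.0                          # strong focal signal on 2 channels
clean = robust_rereference(eeg)
# A mean-based reference would subtract ~10 units from every channel here;
# the median-based estimate leaves the 8 quiet channels near zero.
print(round(float(np.abs(clean[2:]).mean()), 1))
```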

  3. Simulating land-use changes by incorporating spatial autocorrelation and self-organization in CLUE-S modeling: a case study in Zengcheng District, Guangzhou, China

    NASA Astrophysics Data System (ADS)

    Mei, Zhixiong; Wu, Hao; Li, Shiyun

    2018-06-01

    The Conversion of Land Use and its Effects at Small regional extent (CLUE-S), which is a widely used model for land-use simulation, utilizes logistic regression to estimate the relationships between land use and its drivers, and thus, predict land-use change probabilities. However, logistic regression disregards possible spatial autocorrelation and self-organization in land-use data. Autologistic regression can depict spatial autocorrelation but cannot address self-organization, while logistic regression considering only self-organization (NE-logistic regression) fails to capture spatial autocorrelation. Therefore, this study developed a regression (NE-autologistic regression) method, which incorporated both spatial autocorrelation and self-organization, to improve CLUE-S. The Zengcheng District of Guangzhou, China was selected as the study area. The land-use data of 2001, 2005, and 2009, as well as 10 typical driving factors, were used to validate the proposed regression method and the improved CLUE-S model. Then, three future land-use scenarios in 2020: the natural growth scenario, ecological protection scenario, and economic development scenario, were simulated using the improved model. Validation results showed that NE-autologistic regression performed better than logistic regression, autologistic regression, and NE-logistic regression in predicting land-use change probabilities. The spatial allocation accuracy and kappa values of NE-autologistic-CLUE-S were higher than those of logistic-CLUE-S, autologistic-CLUE-S, and NE-logistic-CLUE-S for the simulations of two periods, 2001-2009 and 2005-2009, which proved that the improved CLUE-S model achieved the best simulation and was thereby effective to a certain extent. The scenario simulation results indicated that under all three scenarios, traffic land and residential/industrial land would increase, whereas arable land and unused land would decrease during 2009-2020. Apparent differences also existed in the simulated change sizes and locations of each land-use type under different scenarios. The results not only demonstrate the validity of the improved model but also provide a valuable reference for relevant policy-makers.
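
The autologistic idea can be illustrated by the construction of the autocovariate term: the fraction of neighboring cells already in a given land-use class, appended to the driver variables before the logistic fit. This is a generic sketch; the 8-cell neighborhood and the names are assumptions, not the paper's exact formulation.

```python
import numpy as np

def autocovariate(landuse, cls):
    """Fraction of a cell's 8-neighborhood occupied by class `cls`.

    Appended to the driver variables of a logistic model, this term lets
    the fitted change probabilities reflect spatial autocorrelation
    (the autologistic form); edge cells simply see fewer neighbors.
    """
    m = (landuse == cls).astype(float)
    pad = np.pad(m, 1)                     # zero-pad the grid border
    rows, cols = m.shape
    acc = np.zeros_like(m)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di or dj:                   # skip the cell itself
                acc += pad[1 + di : 1 + di + rows, 1 + dj : 1 + dj + cols]
    return acc / 8.0

grid = np.ones((3, 3), dtype=int)          # toy grid: everything is class 1
ac = autocovariate(grid, 1)
print(ac[1, 1], ac[0, 0])                  # center sees 8/8, corner sees 3/8
```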

  4. The use of cognitive continuum theory and patient scenarios to explore nurse prescribers' pharmacological knowledge and decision-making.

    PubMed

    Offredy, Maxine; Kendall, Sally; Goodman, Claire

    2008-06-01

    Nurses have been involved in prescribing in England since 1996, and to date over 41,000 nurses are registered with the Nursing and Midwifery Council as prescribers. The majority of evaluative research on nurse prescribing is descriptive and relies on self-report and assessment of patient satisfaction. To explore and test nurse prescribers' pharmacological knowledge and decision-making. An exploratory approach was utilised to test the usefulness of patient scenarios in examining why nurses decide whether or not to prescribe. Semi-structured interviews with nurse prescribers using patient scenarios were used as proxy methods of assessment of how nurses made their prescribing decisions. Two primary care trusts in the southeast of England were the settings for this study. Purposive sampling was used to ensure a mixed group of prescribers, to enable detailed exploration of the research objectives and to obtain in-depth understanding of the complex activities involved in nurse prescribing. Interviews and case scenarios. The use of cognitive continuum theory guided the analysis. The majority of participants were unable to identify the issues involved in all the scenarios; they also failed to provide an acceptable solution to the problem, suggesting that they would refer the patient to the general practitioner. A similar number described themselves as 'very confident', while seven participants felt that they were 'not confident' in dealing with medication issues, four of whom were actively prescribing. The effects of social and institutional factors are important in the decision-making process. A lack of appropriate pharmacological knowledge coupled with a lack of confidence in prescribing was demonstrated. The scenarios used in this study indicate that nurses are perhaps knowledgeable in their own small area of practice but flounder outside it. Further research could be conducted with a larger sample and with more scenarios to explore the decision-making and the pharmacological knowledge base of nurse prescribers, particularly in the light of government policy to extend prescribing rights to non-medical prescribers, including pharmacists.

  5. The Future of Utility Customer-Funded Energy Efficiency Programs in the United States: Projected Spending and Savings to 2025

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barbose, Galen; Goldman, Charles; Hoffman, Ian

    2012-09-11

    We develop projections of future spending on, and savings from, energy efficiency programs funded by electric and gas utility customers in the United States, under three scenarios through 2025. Our analysis, which updates a previous LBNL study, relies on detailed bottom-up modeling of current state energy efficiency policies, regulatory decisions, and demand-side management and utility resource plans. The three scenarios are intended to represent a range of potential outcomes under the current policy environment (i.e., without considering possible major new policy developments). By 2025, spending on electric and gas efficiency programs (excluding load management programs) is projected to double from 2010 levels to $9.5 billion in the medium case, compared to $15.6 billion in the high case and $6.5 billion in the low case. Compliance with statewide legislative or regulatory savings or spending targets is the primary driver for the increase in electric program spending through 2025, though a significant share of the increase is also driven by utility DSM planning activity and integrated resource planning. Our analysis suggests that electric efficiency program spending may approach a more even geographic distribution over time in terms of absolute dollars spent, with the Northeastern and Western states declining from over 70% of total U.S. spending in 2010 to slightly more than 50% in 2025, with the South and Midwest splitting the remainder roughly evenly. Under our medium case scenario, annual incremental savings from customer-funded electric energy efficiency programs increase from 18.4 TWh in 2010 in the U.S. (which is about 0.5% of electric utility retail sales) to 28.8 TWh in 2025 (0.8% of retail sales). These savings would offset the majority of load growth in the Energy Information Administration’s most recent reference case forecast, given specific assumptions about the extent to which future energy efficiency program savings are captured in that forecast. However, the pathway that customer-funded efficiency programs ultimately take will depend on a series of key challenges and uncertainties associated both with the broader market and policy context and with the implementation and regulatory oversight of the energy efficiency programs themselves.

  6. Integrating remediation and resource recovery: On the economic conditions of landfill mining.

    PubMed

    Frändegård, Per; Krook, Joakim; Svensson, Niclas

    2015-08-01

    This article analyzes the economic potential of integrating material separation and resource recovery into a landfill remediation project, and discusses the result and the largest impact factors. The analysis is done using a direct costs/revenues approach and the stochastic uncertainties are handled using Monte Carlo simulation. Two remediation scenarios are applied to a hypothetical landfill. One scenario includes only remediation, while the second scenario adds resource recovery to the remediation project. Moreover, the second scenario is divided into two cases, case A and B. In case A, the landfill tax needs to be paid for re-deposited material and the landfill holder does not own a combined heat and power plant (CHP), which leads to disposal costs in the form of gate fees. In case B, the landfill tax is waived on the re-deposited material and the landfill holder owns its own CHP. Results show that the remediation project in the first scenario costs about €23/ton. Adding resource recovery as in case A worsens the result to -€36/ton, while for case B the result improves to -€14/ton. This shows the importance of landfill tax and the access to a CHP. Other important factors for the result are the material composition in the landfill, the efficiency of the separation technology used, and the price of the saleable material. Copyright © 2015 Elsevier Ltd. All rights reserved.
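
The direct costs/revenues approach with Monte Carlo handling of uncertainty can be sketched as below. All distributions and values are hypothetical placeholders, not the study's parameters.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # Monte Carlo draws

# Hypothetical per-ton distributions (EUR/ton of excavated material):
excavation = rng.triangular(5, 8, 12, n)    # excavation and handling cost
separation = rng.triangular(6, 10, 15, n)   # material separation cost
gate_fee = rng.triangular(8, 15, 25, n)     # disposal cost for residues
metal_revenue = rng.normal(12, 4, n)        # revenue from recovered metals

net = metal_revenue - (excavation + separation + gate_fee)
print(f"mean net result: {net.mean():.1f} EUR/ton")
print(f"P(net > 0): {(net > 0).mean():.3f}")
```

Deterministic scenario switches, such as whether the landfill tax applies to re-deposited material or whether a CHP removes the gate fee, are how cases A and B of the second scenario would differ in such a model.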

  7. Impact of respiratory motion on worst-case scenario optimized intensity modulated proton therapy for lung cancers.

    PubMed

    Liu, Wei; Liao, Zhongxing; Schild, Steven E; Liu, Zhong; Li, Heng; Li, Yupeng; Park, Peter C; Li, Xiaoqiang; Stoker, Joshua; Shen, Jiajian; Keole, Sameer; Anand, Aman; Fatyga, Mirek; Dong, Lei; Sahoo, Narayan; Vora, Sujay; Wong, William; Zhu, X Ronald; Bues, Martin; Mohan, Radhe

    2015-01-01

    We compared conventionally optimized intensity modulated proton therapy (IMPT) treatment plans against worst-case scenario optimized treatment plans for lung cancer. The comparison of the 2 IMPT optimization strategies focused on the resulting plans' ability to retain dose objectives under the influence of patient setup, inherent proton range uncertainty, and dose perturbation caused by respiratory motion. For each of the 9 lung cancer cases, 2 treatment plans were created that accounted for treatment uncertainties in 2 different ways. The first used the conventional method: delivery of prescribed dose to the planning target volume that is geometrically expanded from the internal target volume (ITV). The second used a worst-case scenario optimization scheme that addressed setup and range uncertainties through beamlet optimization. The plan optimality and plan robustness were calculated and compared. Furthermore, the effects on dose distributions of changes in patient anatomy attributable to respiratory motion were investigated for both strategies by comparing the corresponding plan evaluation metrics at the end-inspiration and end-expiration phase and absolute differences between these phases. The mean plan evaluation metrics of the 2 groups were compared with 2-sided paired Student t tests. Without respiratory motion considered, we affirmed that worst-case scenario optimization is superior to planning target volume-based conventional optimization in terms of plan robustness and optimality. With respiratory motion considered, worst-case scenario optimization still achieved more robust dose distributions to respiratory motion for targets and comparable or even better plan optimality (D95% ITV, 96.6% vs 96.1% [P = .26]; D5%-D95% ITV, 10.0% vs 12.3% [P = .082]; D1% spinal cord, 31.8% vs 36.5% [P = .035]). Worst-case scenario optimization led to superior solutions for lung IMPT.
Despite the fact that worst-case scenario optimization did not explicitly account for respiratory motion, it produced motion-resistant treatment plans. However, further research is needed to incorporate respiratory motion into IMPT robust optimization. Copyright © 2015 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
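
The worst-case evaluation idea can be illustrated in a few lines: compute a DVH metric for each uncertainty scenario (setup shifts, range over/undershoot, breathing phases) and keep the worst value. The voxel arrays here are toy data; the quantile-based D95% definition is a common convention, not necessarily the paper's exact implementation.

```python
import numpy as np

def worst_case_d95(dose_scenarios, target_mask):
    """Worst-case D95%: the minimum, over all uncertainty scenarios, of
    the dose received by at least 95% of the target voxels (i.e., the
    5th percentile of the in-target dose distribution)."""
    return min(np.quantile(d[target_mask], 0.05) for d in dose_scenarios)

mask = np.ones(100, dtype=bool)        # toy target of 100 voxels
nominal = np.full(100, 100.0)          # nominal plan: uniform 100% dose
shifted = nominal.copy()
shifted[:10] -= 5.0                    # a setup-error scenario underdoses 10 voxels
print(worst_case_d95([nominal, shifted], mask))   # 95.0
```

In robust optimization this worst-case quantity is driven into the objective itself, rather than only being reported after planning.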

  8. Bridging Scales: Developing a Framework to Build a City-Scale Environmental Scenario for Japanese Municipalities

    NASA Astrophysics Data System (ADS)

    Hashimoto, S.; Fujita, T.; Nakayama, T.; Xu, K.

    2007-12-01

    There is an ongoing project to establish environmental scenarios in Japan for evaluating medium- to long-term environmental policy and technology options toward a low-carbon society. The time horizon of the scenarios is set at 2050, on the grounds that a large part of Japan's social infrastructure is likely to be renovated by that time, and cities are expected to play important roles in building a low-carbon society in Japan. This is because cities and local governments can implement policies and programs, such as land-use planning and the promotion of low-GHG technologies, whose effects vary with local socio-economic conditions, whereas higher levels of government, national or prefectural, can impose environmental taxes on electricity and gas that apply uniformly across their jurisdictions. For local governments to devise and implement concrete administrative actions with rational policies and technologies, drawing on the environmental scenarios developed for the entire nation, the national scenarios need to be localized, in both spatial and temporal extent, so that they better reflect local socio-economic and institutional conditions. In localizing the national scenarios, stakeholder participation matters because stakeholders play major roles in shaping future society; their participation in the localization process brings both creative and realistic input on how the future unfolds at the city scale. In this research, 1) we reviewed recent international and domestic scenario-development efforts to set a practical time horizon for a city-scale environmental scenario that would lead to concrete environmental policies and programs, 2) designed a participatory scenario development/localization process drawing on the 'Story-and-Simulation' (SAS) framework proposed by Alcamo (2001), and 3) began implementing it in the city of Kawasaki, Kanagawa, Japan, in cooperation with municipal officials and stakeholders. The participatory process is intended to develop city-scale environmental scenarios toward a low-carbon society, with reference to international and domestic environmental scenarios. Though scenario development is still in progress, it has already yielded practical knowledge and experience on bridging scenarios developed for different temporal and spatial scales.

  9. LWR First Recycle of TRU with Thorium Oxide for Transmutation and Cross Sections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrea Alfonsi; Gilles Youinou; Sonat Sen

    2013-02-01

    Thorium has been considered as an alternative to uranium-based fuel, on grounds of resource utilization (thorium is approximately three times more plentiful than uranium) and of concerns about proliferation and waste management (e.g., reduced production of plutonium). Since natural thorium consists entirely (100%) of the fertile isotope Th-232, thorium is useful only as a resource for breeding new fissile material, in this case U-233. Consequently, a certain amount of fissile material must be present at reactor start-up in order to guarantee its operation. Thorium fuel can be used in both once-through and recycle options, and in both fast- and thermal-spectrum systems. The present study was motivated by the need to investigate the option of mixing reprocessed plutonium/TRU from a once-through reference LEU scenario (50 GWd/tIHM) with natural thorium, and to collect data (mass fractions, cross sections, etc.) for this particular fuel cycle scenario. As noted above, the fissile plutonium is needed to guarantee the operation of the reactor. Four different scenarios have been considered:
    • Thorium – recycled Plutonium;
    • Thorium – recycled Plutonium/Neptunium;
    • Thorium – recycled Plutonium/Neptunium/Americium;
    • Thorium – recycled Transuranics.
    The calculations have been performed with SCALE6.1-TRITON.

  10. LWR First Recycle of TRU with Thorium Oxide for Transmutation and Cross Sections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrea Alfonsi; Gilles Youinou

    2012-07-01

    Thorium has been considered as an alternative to uranium-based fuel, on grounds of resource utilization (thorium is approximately three times more plentiful than uranium) and of concerns about proliferation and waste management (e.g., reduced production of plutonium). Since natural thorium consists entirely (100%) of the fertile isotope Th-232, thorium is useful only as a resource for breeding new fissile material, in this case U-233. Consequently, a certain amount of fissile material must be present at reactor start-up in order to guarantee its operation. Thorium fuel can be used in both once-through and recycle options, and in both fast- and thermal-spectrum systems. The present study was motivated by the need to investigate the option of mixing reprocessed plutonium/TRU from a once-through reference LEU scenario (50 GWd/tIHM) with natural thorium, and to collect data (mass fractions, cross sections, etc.) for this particular fuel cycle scenario. As noted above, the fissile plutonium is needed to guarantee the operation of the reactor. Four different scenarios have been considered:
    • Thorium – recycled Plutonium;
    • Thorium – recycled Plutonium/Neptunium;
    • Thorium – recycled Plutonium/Neptunium/Americium;
    • Thorium – recycled Transuranics.
    The calculations have been performed with SCALE6.1-TRITON.

  11. Health Impact Assessment for Second-Hand Smoke Exposure in Germany--Quantifying Estimates for Ischaemic Heart Diseases, COPD, and Stroke.

    PubMed

    Fischer, Florian; Kraemer, Alexander

    2016-02-05

    Evidence of the adverse health effects attributable to second-hand smoke (SHS) exposure is available. This study aims to quantify the impact of SHS exposure on ischaemic heart diseases (IHD), chronic obstructive pulmonary diseases (COPD), and stroke in Germany. Therefore, this study estimated and forecasted the morbidity for the three outcomes in the German population. Furthermore, a health impact assessment was performed using DYNAMO-HIA, which is a generic software tool applying a Markov model. Overall 687,254 IHD cases, 231,973 COPD cases, and 288,015 stroke cases were estimated to be attributable to SHS exposure in Germany for 2014. Under the assumption that the population prevalence of these diseases and the prevalence of SHS exposure remain constant, the total number of cases will increase due to demographic aging. Assuming a total eradication of SHS exposure beginning in 2014 leads to an estimated reduction of 50% in cases, compared to the reference scenario in 2040 for all three diseases. The results highlight the relevance of SHS exposure because it affects several chronic disease conditions and has a major impact on the population's health. Therefore, public health campaigns to protect non-smokers are urgently needed.
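
DYNAMO-HIA's dynamic projection rests on a Markov model of population health states. The three-state sketch below uses entirely hypothetical annual transition probabilities, just to show the mechanics of advancing prevalence from a base year to 2040.

```python
import numpy as np

# Hypothetical annual transition probabilities between population states
# healthy / diseased / dead (rows sum to 1; "dead" is absorbing):
P = np.array([
    [0.990, 0.004, 0.006],
    [0.000, 0.950, 0.050],
    [0.000, 0.000, 1.000],
])

state = np.array([0.95, 0.05, 0.00])   # illustrative 2014 population shares
for _ in range(2040 - 2014):
    state = state @ P                  # advance one year

print([round(float(s), 3) for s in state])
```

Comparing such a run against one with a lowered incidence entry (e.g., after eliminating SHS exposure) would yield difference-in-cases estimates of the kind the study reports.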

  12. All Information Is Not Equal: Using the Literature Databases PubMed and The Cochrane Library for Identifying the Evidence on Granulocyte Transfusion Therapy.

    PubMed

    Metzendorf, Maria-Inti; Schulz, Manuela; Braun, Volker

    2014-10-01

    To make well-informed decisions or carry out sound research, clinicians and researchers alike require information-seeking skills matched to their respective information needs. Biomedical information is traditionally available via different literature databases. This article gives an introduction to two contrasting sources, PubMed (23 million references) and The Cochrane Library (800,000 references), both of which offer sophisticated instruments for searching a growing number of medical publications of varying quality and ambition. Whereas PubMed, as an unfiltered source of primary literature, comprises all the different publication types occurring in academic journals, The Cochrane Library is a pre-filtered source offering access either to synthesized publication types or to critically appraised and carefully selected references. A search has to be approached deliberately and requires good knowledge of the scope and features of the databases, as well as the ability to build a search strategy in a structured way. We present a specific and a sensitive search approach, making use of both databases within two application case scenarios, in order to identify the evidence on granulocyte transfusions for infections in adult patients with neutropenia.

  13. Spread of the dust temperature distribution in circumstellar disks

    NASA Astrophysics Data System (ADS)

    Heese, S.; Wolf, S.; Dutrey, A.; Guilloteau, S.

    2017-07-01

    Context. Accurate temperature calculations for circumstellar disks are particularly important for their chemical evolution. Their temperature distribution is determined by the optical properties of the dust grains, which, among other parameters, depend on grain radius. However, most disk studies assume only average optical properties, and thus an average temperature, to account for an ensemble of grains with different radii. Aims: We investigate the impact of subdividing the grain radius distribution into multiple sub-intervals on the resulting dust temperature distribution and spectral energy distribution (SED). Methods: The temperature distribution, the relative grain surface below a certain temperature, the freeze-out radius, and the SED were computed for two different scenarios: (1) a radius distribution represented by 16 logarithmically distributed radius intervals, and (2) a radius distribution represented by a single grain species with averaged optical properties (reference). Results: Within the considered parameter range, i.e., grain radii between 5 nm and 1 mm and an optically thin and an optically thick disk with a parameterized density distribution, we obtain the following results: in optically thin disk regions, the temperature spread can be as large as 63%, and the relative grain surface below a certain temperature is lower than in the reference disk. With increasing optical depth, the differences in the midplane temperature and in the relative grain surface below a certain temperature decrease. Furthermore, below 20 K, this fraction is higher for the reference disk than for the case of multiple grain radii, while it shows the opposite behavior above this threshold. The thermal emission at short wavelengths is stronger in the case of multiple grain radii than for the reference disk. The freeze-out radius (snowline) is a function of grain radius, spanning a radial range of 30 AU between the coldest and warmest grain species.

  14. Estimating the future number of cases in the Ebola epidemic--Liberia and Sierra Leone, 2014-2015.

    PubMed

    Meltzer, Martin I; Atkins, Charisma Y; Santibanez, Scott; Knust, Barbara; Petersen, Brett W; Ervin, Elizabeth D; Nichol, Stuart T; Damon, Inger K; Washington, Michael L

    2014-09-26

    The first cases of the current West African epidemic of Ebola virus disease (hereafter referred to as Ebola) were reported on March 22, 2014, with a report of 49 cases in Guinea. By August 31, 2014, a total of 3,685 probable, confirmed, and suspected cases in West Africa had been reported. To aid in planning for additional disease-control efforts, CDC constructed a modeling tool called EbolaResponse to provide estimates of the potential number of future cases. If trends continue without scale-up of effective interventions, by September 30, 2014, Sierra Leone and Liberia will have a total of approximately 8,000 Ebola cases. A potential underreporting correction factor of 2.5 also was calculated. Using this correction factor, the model estimates that approximately 21,000 total cases will have occurred in Liberia and Sierra Leone by September 30, 2014. Reported cases in Liberia are doubling every 15-20 days, and those in Sierra Leone are doubling every 30-40 days. The EbolaResponse modeling tool also was used to estimate how control and prevention interventions can slow and eventually stop the epidemic. In a hypothetical scenario, the epidemic begins to decrease and eventually end if approximately 70% of persons with Ebola are in medical care facilities or Ebola treatment units (ETUs) or, when these settings are at capacity, in a non-ETU setting such that there is a reduced risk for disease transmission (including safe burial when needed). In another hypothetical scenario, every 30-day delay in increasing the percentage of patients in ETUs to 70% was associated with an approximate tripling in the number of daily cases that occur at the peak of the epidemic (however, the epidemic still eventually ends). Officials have developed a plan to rapidly increase ETU capacities and also are developing innovative methods that can be quickly scaled up to isolate patients in non-ETU settings in a way that can help disrupt Ebola transmission in communities. The U.S. 
government and international organizations recently announced commitments to support these measures. As these measures are rapidly implemented and sustained, the higher projections presented in this report become very unlikely.
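    The doubling-time arithmetic underlying these projections can be sketched in a few lines (an illustrative simplification of exponential growth with an underreporting correction, not the EbolaResponse tool itself; the function name and parameters are our own):

```python
def project_cases(reported_now, doubling_time_days, horizon_days,
                  correction_factor=1.0):
    """Project cumulative cases forward assuming unchecked exponential
    growth with a fixed doubling time.

    correction_factor scales reported cases to estimated true cases
    (the report uses 2.5 for underreporting).
    """
    return reported_now * correction_factor * 2 ** (horizon_days / doubling_time_days)

# Cases doubling every 20 days double exactly once over a 20-day horizon.
projected = project_cases(1000, doubling_time_days=20, horizon_days=20)
```

Under this simplification, applying the 2.5 correction factor multiplies any projection by 2.5 without changing its growth rate, which is why corrected and uncorrected estimates diverge so sharply over time.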

  15. Testbeds for Assessing Critical Scenarios in Power Control Systems

    NASA Astrophysics Data System (ADS)

    Dondossola, Giovanna; Deconinck, Geert; Garrone, Fabrizio; Beitollahi, Hakem

    The paper presents a set of control system scenarios implemented in two testbeds developed in the context of the European Project CRUTIAL - CRitical UTility InfrastructurAL Resilience. The selected scenarios refer to power control systems encompassing information and communication security of SCADA systems for grid teleoperation, impact of attacks on inter-operator communications in power emergency conditions, impact of intentional faults on the secondary and tertiary control in power grids with distributed generators. Two testbeds have been developed for assessing the effect of the attacks and prototyping resilient architectures.

  16. A Comparison of the Kernel Equating Method with Traditional Equating Methods Using SAT[R] Data

    ERIC Educational Resources Information Center

    Liu, Jinghua; Low, Albert C.

    2008-01-01

    This study applied kernel equating (KE) in two scenarios: equating to a very similar population and equating to a very different population, referred to as a distant population, using SAT[R] data. The KE results were compared to the results obtained from analogous traditional equating methods in both scenarios. The results indicate that KE results…

  17. A new scenario framework for climate change research: The concept of Shared Climate Policy Assumptions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kriegler, Elmar; Edmonds, James A.; Hallegatte, Stephane

    2014-04-01

    The paper presents the concept of shared climate policy assumptions as an important element of the new scenario framework. Shared climate policy assumptions capture key climate policy dimensions such as the type and scale of mitigation and adaptation measures. They are not specified in the socio-economic reference pathways, and therefore introduce an important third dimension to the scenario matrix architecture. Climate policy assumptions will have to be made in any climate policy scenario, and can have a significant impact on the scenario description. We conclude that a meaningful set of shared climate policy assumptions is useful for grouping individual climate policy analyses and facilitating their comparison. Shared climate policy assumptions should be designed to be policy relevant, and as a set to be broad enough to allow a comprehensive exploration of the climate change scenario space.

  18. Analysis of Pulsed Flow Modification Alternatives, Lower Missouri River, 2005

    USGS Publications Warehouse

    Jacobson, Robert B.

    2008-01-01

    The graphical, tabular, and statistical data presented in this report resulted from analysis of alternative flow regime designs considered by a group of Missouri River managers, stakeholders, and scientists during the summer of 2005. This plenary group was charged with designing a flow regime with increased spring flow pulses to support reproduction and survival of the endangered pallid sturgeon. Environmental flow components extracted from the reference natural flow regime were used to design and assess performance of alternative flow regimes. The analysis is based on modeled flow releases from Gavins Point Dam (near Yankton, South Dakota) for nine design alternatives and two reference scenarios; the reference scenarios are the run-of-the-river and the water-control plan implemented in 2004. The alternative designs were developed by the plenary group with the goal of providing pulsed spring flows, while retaining traditional social and economic uses of the river.

  19. Coordinated EV adoption: double-digit reductions in emissions and fuel use for $40/vehicle-year.

    PubMed

    Choi, Dong Gu; Kreikebaum, Frank; Thomas, Valerie M; Divan, Deepak

    2013-09-17

    Adoption of electric vehicles (EVs) would affect the costs and sources of electricity and the United States' efficiency requirements for conventional vehicles (CVs). We model EV adoption scenarios in each of six regions of the Eastern Interconnection, containing 70% of the United States population. We develop electricity system optimization models at the multidecade, day-ahead, and hour-ahead time scales, incorporating spatial wind energy modeling, endogenous modeling of CV efficiencies, projections for EV efficiencies, and projected CV and EV costs. We find two means to reduce total consumer expenditure (TCE): (i) controlling charge timing and (ii) unlinking the fuel economy regulations for CVs from EVs. Although EVs provide minimal direct GHG reductions, controlled charging provides load flexibility, lowering the cost of renewable electricity. Without EVs, a 33% renewable electricity standard (RES) would cost $193/vehicle-year more than the reference case (10% RES). Combining a 33% RES, EVs with controlled charging and unlinking would reduce combined electric- and vehicle-sector CO2 emissions by 27% and reduce gasoline consumption by 59% for $40/vehicle-year more than the reference case. Coordinating EV adoption with adoption of controlled charging, unlinked fuel economy regulations, and renewable electricity standards would provide low-cost reductions in emissions and fuel usage.

  20. Estimated cost of universal public coverage of prescription drugs in Canada

    PubMed Central

    Morgan, Steven G.; Law, Michael; Daw, Jamie R.; Abraham, Liza; Martin, Danielle

    2015-01-01

    Background: With the exception of Canada, all countries with universal health insurance systems provide universal coverage of prescription drugs. Progress toward universal public drug coverage in Canada has been slow, in part because of concerns about the potential costs. We sought to estimate the cost of implementing universal public coverage of prescription drugs in Canada. Methods: We used published data on prescribing patterns and costs by drug type, as well as source of funding (i.e., private drug plans, public drug plans and out-of-pocket expenses), in each province to estimate the cost of universal public coverage of prescription drugs from the perspectives of government, private payers and society as a whole. We estimated the cost of universal public drug coverage based on its anticipated effects on the volume of prescriptions filled, products selected and prices paid. We selected these parameters based on current policies and practices seen either in a Canadian province or in an international comparator. Results: Universal public drug coverage would reduce total spending on prescription drugs in Canada by $7.3 billion (worst-case scenario $4.2 billion, best-case scenario $9.4 billion). The private sector would save $8.2 billion (worst-case scenario $6.6 billion, best-case scenario $9.6 billion), whereas costs to government would increase by about $1.0 billion (worst-case scenario $5.4 billion net increase, best-case scenario $2.9 billion net savings). Most of the projected increase in government costs would arise from a small number of drug classes. Interpretation: The long-term barrier to the implementation of universal pharmacare owing to its perceived costs appears to be unjustified. Universal public drug coverage would likely yield substantial savings to the private sector with comparatively little increase in costs to government. PMID:25780047

  1. Estimated cost of universal public coverage of prescription drugs in Canada.

    PubMed

    Morgan, Steven G; Law, Michael; Daw, Jamie R; Abraham, Liza; Martin, Danielle

    2015-04-21

    With the exception of Canada, all countries with universal health insurance systems provide universal coverage of prescription drugs. Progress toward universal public drug coverage in Canada has been slow, in part because of concerns about the potential costs. We sought to estimate the cost of implementing universal public coverage of prescription drugs in Canada. We used published data on prescribing patterns and costs by drug type, as well as source of funding (i.e., private drug plans, public drug plans and out-of-pocket expenses), in each province to estimate the cost of universal public coverage of prescription drugs from the perspectives of government, private payers and society as a whole. We estimated the cost of universal public drug coverage based on its anticipated effects on the volume of prescriptions filled, products selected and prices paid. We selected these parameters based on current policies and practices seen either in a Canadian province or in an international comparator. Universal public drug coverage would reduce total spending on prescription drugs in Canada by $7.3 billion (worst-case scenario $4.2 billion, best-case scenario $9.4 billion). The private sector would save $8.2 billion (worst-case scenario $6.6 billion, best-case scenario $9.6 billion), whereas costs to government would increase by about $1.0 billion (worst-case scenario $5.4 billion net increase, best-case scenario $2.9 billion net savings). Most of the projected increase in government costs would arise from a small number of drug classes. The long-term barrier to the implementation of universal pharmacare owing to its perceived costs appears to be unjustified. Universal public drug coverage would likely yield substantial savings to the private sector with comparatively little increase in costs to government. © 2015 Canadian Medical Association or its licensors.

  2. Linking multimetric and multivariate approaches to assess the ecological condition of streams.

    PubMed

    Collier, Kevin J

    2009-10-01

    Few attempts have been made to combine multimetric and multivariate analyses for bioassessment despite recognition that an integrated method could yield powerful tools for bioassessment. An approach is described that integrates eight macroinvertebrate community metrics into a Principal Components Analysis to develop a Multivariate Condition Score (MCS) from a calibration dataset of 511 samples. The MCS is compared to an Index of Biotic Integrity (IBI) derived using the same metrics based on the ratio to the reference site mean. Both approaches were highly correlated although the MCS appeared to offer greater potential for discriminating a wider range of impaired conditions. Both the MCS and IBI displayed low temporal variability within reference sites, and were able to distinguish between reference conditions and low levels of catchment modification and local habitat degradation, although neither discriminated among three levels of low impact. Pseudosamples developed to test the response of the metric aggregation approaches to organic enrichment, urban, mining, pastoral and logging stressor scenarios ranked pressures in the same order, but the MCS provided a lower score for the urban scenario and a higher score for the pastoral scenario. The MCS was calculated for an independent test dataset of urban and reference sites, and yielded similar results to the IBI. Although both methods performed comparably, the MCS approach may have some advantages because it removes the subjectivity of assigning thresholds for scoring biological condition, and it appears to discriminate a wider range of degraded conditions.
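    The metric-aggregation idea (standardize several site metrics, then project sites onto the first principal component to obtain one composite score) can be sketched in plain Python; this is an illustration of the general technique under our own simplifications (population standard deviation, power iteration), not the MCS as published:

```python
def standardize(columns):
    """Z-score each metric column (list of per-site values)."""
    out = []
    for col in columns:
        m = sum(col) / len(col)
        sd = (sum((x - m) ** 2 for x in col) / len(col)) ** 0.5
        out.append([(x - m) / sd for x in col])
    return out

def first_pc_scores(columns, iters=200):
    """Composite per-site score: projection onto the first principal
    component of the metric covariance matrix (via power iteration)."""
    z = standardize(columns)
    k, n = len(z), len(z[0])
    cov = [[sum(z[i][s] * z[j][s] for s in range(n)) / n for j in range(k)]
           for i in range(k)]
    v = [1.0] * k
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(k)) for i in range(k)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return [sum(v[i] * z[i][s] for i in range(k)) for s in range(n)]
```

With metrics that all degrade together, the first component captures the shared "condition" axis, so the composite score orders sites from most to least impaired without hand-picked thresholds.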

  3. Risk assessment of buckwheat flour contaminated by thorn-apple (Datura stramonium L.) alkaloids: a case study from Slovenia.

    PubMed

    Perharič, Lucija; Koželj, Gordana; Družina, Branko; Stanovnik, Lovro

    2013-01-01

    In Slovenia, a mass poisoning incident involving 73 consumers with symptoms such as dry mouth, hot red skin, blurred vision, tachycardia, urinary retention, ataxia, speech disturbance, disorientation and visual hallucinations occurred in 2003. In all cases, consumers had eaten buckwheat flour food products within the preceding few hours. Investigations by the responsible authorities identified contamination of a range of buckwheat food products with thorn-apple (Datura stramonium L.) seeds containing the toxic alkaloids atropine and scopolamine. To ensure the safe consumption of buckwheat food products, we carried out risk characterisation and proposed provisional maximum residue levels (MRLs) for the atropine and scopolamine mixture in buckwheat flour. In the absence of critical "no observed adverse effect levels" for atropine and scopolamine, we based our estimation of the acute reference doses on the lowest recommended therapeutic doses. Taking into account the additive effect of the two alkaloids, we calculated acute reference doses of the mixture, that is 0.05 µg/kg of body mass for atropine and 0.03 µg/kg of body mass for scopolamine. MRLs for the atropine and scopolamine mixture in buckwheat flour were estimated for a worst-case scenario, that is consumption of 100 g of flour by a child weighing 10 kg, taking into account the range of atropine/scopolamine ratios in the implicated food products, that is 0.85-3.3. We proposed the following national MRLs for the atropine/scopolamine mixture in buckwheat food products: 4.0 µg/kg (atropine) and 2.0 µg/kg (scopolamine). However, in view of the large variability in alkaloid content, depending on the origin of the Datura, we propose that risk assessment should be carried out on a case-by-case basis, taking into account the ratio between atropine and scopolamine content in a particular sample.
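    The worst-case arithmetic behind the provisional MRLs can be reproduced directly (a sketch of the dose calculation only; the function name is our own, and the published values of 4.0 and 2.0 µg/kg additionally account for the additive effect of the mixture and the observed atropine/scopolamine ratios):

```python
def provisional_mrl(arfd_ug_per_kg_bw, body_mass_kg, flour_intake_kg):
    """MRL (ug per kg flour) at which the worst-case consumer just
    reaches the acute reference dose (ARfD)."""
    tolerable_dose_ug = arfd_ug_per_kg_bw * body_mass_kg  # total ug tolerated
    return tolerable_dose_ug / flour_intake_kg            # ug per kg flour

# Worst case from the study: a 10 kg child eating 100 g (0.1 kg) of flour.
atropine_ceiling = provisional_mrl(0.05, 10, 0.1)     # ug atropine per kg flour
scopolamine_ceiling = provisional_mrl(0.03, 10, 0.1)  # ug scopolamine per kg flour
```

These single-substance ceilings (5.0 and 3.0 µg/kg) sit above the proposed MRLs, consistent with the extra margin the authors applied for the mixture's additive effect.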

  4. Considering the Role of Natural Gas in the Deep Decarbonization of the U.S. Electricity Sector. Natural Gas and the Evolving U.S. Power Sector Monograph Series: Number 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cole, Wesley; Beppler, Ross; Zinaman, Owen

    Natural gas generation in the U.S. electricity sector has grown substantially in recent years, while the sector's carbon dioxide (CO2) emissions have generally declined. This relationship highlights the concept of natural gas as a potential enabler of a transition to a lower-carbon future. This work considers that concept by using the National Renewable Energy Laboratory (NREL) Renewable Energy Deployment System (ReEDS) model. ReEDS is a long-term capacity expansion model of the U.S. electricity sector. We examine the role of natural gas within the ReEDS modeling framework as increasingly strict carbon emission targets are imposed on the electricity sector. In addition to various natural gas price futures, we also consider scenarios that emphasize a low-carbon technology in order to better understand the role of natural gas if that low-carbon technology shows particular promise. Specifically, we consider scenarios with high amounts of energy efficiency (EE), low nuclear power costs, low renewable energy (RE) costs, and low carbon capture and storage (CCS) costs. Within these scenarios we find that requiring the electricity sector to lower CO2 emissions over time increases near-to-mid-term (through 2030) natural gas generation (see Figure 1 - left). The long-term (2050) role of natural gas generation in the electricity sector is dependent on the level of CO2 emission reduction required. Moderate reductions in long-term CO2 emissions have relatively little impact on long-term natural gas generation, while more stringent CO2 emission limits lower long-term natural gas generation (see Figure 1 - right). More stringent carbon targets also impact other generating technologies, with the scenarios considered here seeing significant decreases in coal generation and new capacity additions of nuclear and renewable energy technologies over time. Figure 1 also demonstrates the role of natural gas in the context of scenarios where a specific low-carbon technology is advantaged. 
In 2030, natural gas generation in the technology scenarios is quite similar to that in the reference scenarios, indicating relatively little change in the role of natural gas in the near-to-mid-term due to advancements in those technology areas. The 2050 natural gas generation shows more significant differences, suggesting that technology advancements will likely have substantial impacts on the role of natural gas in the longer-term timeframe. Natural gas generation differences are most strongly driven by alternative natural gas price trajectories: changes in natural gas generation in the Low NG Price and High NG Price scenarios are much larger than in any other scenario in both the 2030 and 2050 timeframes. The only low-carbon technology scenarios that showed any increase in long-term natural gas generation relative to the reference case were the Low CCS cost scenarios. Carbon capture and storage technology costs are currently high, but have the potential to allow fossil fuels to play a larger role in a low-carbon grid. This work considers three CCS cost trajectories for natural gas and coal generators: a baseline trajectory and two lower cost trajectories where CO2 capture costs reach $40/metric ton and $10/metric ton, respectively. We find that in the context of the ReEDS model and with these assumed cost trajectories, CCS can increase the long-term natural gas generation under a low carbon target (see Figure 2). Under less stringent carbon targets we do not see ReEDS electing to use CCS as part of its electricity generating portfolio for the scenarios considered in this work.

  5. Projections of Rainfall and Temperature from CMIP5 Models over BIMSTEC Countries

    NASA Astrophysics Data System (ADS)

    Pattnayak, K. C.; Kar, S. C.; Ragi, A. R.

    2014-12-01

    Rainfall and surface temperature are the most important climatic variables in the context of climate change. Thus, these variables simulated by models from the fifth phase of the Climate Model Inter-comparison Project (CMIP5) have been compared against Climatic Research Unit (CRU) observed data and projected for the twenty-first century under the Representative Concentration Pathways (RCPs) 4.5 and 8.5 emission scenarios. Results for the seven countries under the Bay of Bengal Initiative for Multi-Sectoral Technical and Economic Cooperation (BIMSTEC), namely Bangladesh, Bhutan, India, Myanmar, Nepal, Sri Lanka and Thailand, have been examined. Six CMIP5 models, namely GFDL-CM3, GFDL-ESM2M, GFDL-ESM2G, HadGEM2-AO, HadGEM2-CC and HadGEM2-ES, have been chosen for this study. The study period considered is 1861 to 2100: the first 145 years (1861 to 2005) serve as the reference or historical period, and the remaining 95 years (2005 to 2100) as the projected period. Climate change in the projected period has been examined with respect to the reference period. To validate the models, the mean annual rainfall and temperature have been compared with CRU data over the reference period 1901 to 2005. The comparison reveals that most of the models are able to capture the spatial distribution of rainfall and temperature over most regions of the BIMSTEC countries; these model data can therefore be used to study future changes in the 21st century. Four out of six models show a decreasing trend in rainfall over Central and North India, Thailand and the eastern part of Myanmar, and an increasing trend over Bangladesh, Bhutan, Nepal and Sri Lanka in both the RCP 4.5 and 8.5 scenarios. In the case of temperature, all of the models show an increasing trend over all the BIMSTEC countries in both scenarios; however, the rate of increase is relatively smaller over Sri Lanka than over the other countries. The annual cycles of rainfall and temperature over Bangladesh, Myanmar and Thailand reveal larger magnitudes during 2070 to 2100 under RCP8.5. Inter-model comparison shows large uncertainties within the CMIP5 model projections.
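    The change-with-respect-to-reference-period comparison described above reduces to a difference of period means, sketched here (a generic illustration; the function and variable names are our own, not from the study):

```python
def period_mean(annual, start_year, end_year):
    """Mean of an annual series (dict: year -> value) over [start, end]."""
    values = [v for year, v in annual.items() if start_year <= year <= end_year]
    return sum(values) / len(values)

def projected_change(annual, reference=(1861, 2005), projection=(2070, 2100)):
    """Projected-period mean minus reference-period mean."""
    return period_mean(annual, *projection) - period_mean(annual, *reference)
```

The same two-line computation applies whether the series is rainfall or temperature, and changing the period tuples reproduces comparisons against other baselines such as 1901 to 2005.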

  6. Projections of Rainfall and Surface Temperature from CMIP5 Models under RCP4.5 and 8.5 over BIMSTEC Countries

    NASA Astrophysics Data System (ADS)

    Charan Pattnayak, Kanhu; Kar, Sarat Chandra; Kumari Pattnayak, Rashmita

    2015-04-01

    Rainfall and surface temperature are the most important climatic variables in the context of climate change. Thus, these variables simulated from fifth phase of the Climate Model Inter-comparison Project (CMIP5) models have been compared against Climatic Research Unit (CRU) observed data and projected for the twenty first century under the Representative Concentration Pathways (RCPs) 4.5 and 8.5 emission scenarios. Results for the seven countries under Bay of Bengal Initiative for Multi-Sectoral Technical and Economic Cooperation (BIMSTEC) such as Bangladesh, Bhutan, India, Myanmar, Nepal, Sri Lanka and Thailand have been examined. Six CMIP5 models namely GFDL-CM3, GFDL-ESM2M, GFDL-ESM2G, HadGEM2-AO, HadGEM2-CC and HadGEM2-ES have been chosen for this study. The study period has been considered is from 1861 to 2100. From this period, initial 145 years i.e. 1861 to 2005 is reference or historical period and the later 95 years i.e. 2005 to 2100 is projected period. The climate change in the projected period has been examined with respect to the reference period. In order to validate the models, the mean annual rainfall and temperature has been compared with CRU over the reference period 1901 to 2005. Comparison reveals that most of the models are able to capture the spatial distribution of rainfall and temperature over most of the regions of BIMSTEC countries. Therefore these model data can be used to study the future changes in the 21st Century. Four out six models shows that the rainfall over Central and North India, Thailand and eastern part of Myanmar shows decreasing trend and Bangladesh, Bhutan, Nepal and Sri Lanka shows an increasing trend in both RCP 4.5 and 8.5 scenarios. In case of temperature, all of the models show an increasing trend over all the BIMSTEC countries in both scenarios, however, the rate of increase is relatively less over Sri Lanka than the other countries. 
Annual cycles of rainfall and temperature over Bangladesh, Myanmar and Thailand reveals that the magnitudes are more in 2070 to 2100 of RCP8.5. Inter-model comparison show that there are large more uncertainties within the CMIP5 model projections.

  7. A multi-source probabilistic hazard assessment of tephra dispersal in the Neapolitan area

    NASA Astrophysics Data System (ADS)

    Sandri, Laura; Costa, Antonio; Selva, Jacopo; Folch, Arnau; Macedonio, Giovanni; Tonini, Roberto

    2015-04-01

    In this study we present the results obtained from a long-term Probabilistic Hazard Assessment (PHA) of tephra dispersal in the Neapolitan area. A typical PHA for tephra dispersal requires the definition of eruptive scenarios (usually obtained by grouping eruption sizes and possible vent positions into a limited number of classes) with associated probabilities, a meteorological dataset covering a representative time period, and a tephra dispersal model. The PHA then results from combining simulations covering different volcanological and meteorological conditions through weights associated with their specific probability of occurrence. However, volcanological parameters (i.e., erupted mass, eruption column height, eruption duration, bulk granulometry, fraction of aggregates) typically span a wide range of values. Because of this natural variability, single representative scenarios or size classes cannot be adequately defined using single values for the volcanological inputs. In the present study, we use a method that accounts for this within-size-class variability in the framework of Event Trees. The variability of each parameter is modeled with specific Probability Density Functions, and meteorological and volcanological input values are chosen using a stratified sampling method. This procedure allows hazard to be quantified without relying on the definition of scenarios, thus avoiding potential biases introduced by selecting single representative scenarios. Embedding this procedure into the Bayesian Event Tree scheme enables quantification of tephra fall PHA and its epistemic uncertainties. We have applied this scheme to analyze long-term tephra fall PHA from Vesuvius and Campi Flegrei in a multi-source paradigm. We integrate two tephra dispersal models (the analytical HAZMAP and the numerical FALL3D) into BET_VH. The ECMWF reanalysis dataset is used for exploring different meteorological conditions. 
The results obtained show that PHA accounting for the whole natural variability is consistent with previous probability maps elaborated for Vesuvius and Campi Flegrei on the basis of single representative scenarios, but shows significant differences. In particular, the area characterized by a 300 kg/m2-load exceedance probability larger than 5%, accounting for the whole range of variability (that is, from small violent strombolian to plinian eruptions), is similar to that displayed in the maps based on the medium-magnitude reference eruption, but is of a smaller extent. This is due to the relatively higher weight of the small-magnitude eruptions considered in this study but neglected in the reference scenario maps. On the other hand, in our new maps the area characterized by a 300 kg/m2-load exceedance probability larger than 1% is much larger than that of the medium-magnitude reference eruption, due to the contribution of plinian eruptions at lower probabilities, again neglected in the reference scenario maps.

  8. Information-seeking behaviors of medical students: a classification of questions asked of librarians and physicians.

    PubMed Central

    Wildemuth, B M; de Bliek, R; Friedman, C P; Miya, T S

    1994-01-01

    To solve a problem, a person often asks questions of someone with more expertise. This paper reports on a study of the types of questions asked and how the experts are chosen. In the study, sixty-three first-year medical students responded to clinical scenarios, each describing a patient affected by a toxin and asking questions concerning the identity of the toxin and its characteristics. After answering those questions, the students were asked to imagine that they had access to a medical reference librarian and an internist specializing in toxicology. The students then generated two questions for each expert about each clinical scenario. Each question was categorized according to the type of information requested, and the frequency of each type of question was calculated. The study found that students most often asked for the identification of the toxin(s), references about the scenario, or the effects of the toxin; an explanation of the patient's symptoms; or a description of the appropriate treatment. Students were more likely to address questions on the identity of the toxin and references to the hypothetical librarian; they were more likely to ask the internist for explanations of the symptoms and descriptions of the treatment. The implications of these results for the design of information and educational systems are discussed. PMID:7920340

  9. Combined sewer overflow control with LID based on SWMM: an example in Shanghai, China.

    PubMed

    Liao, Z L; Zhang, G Q; Wu, Z H; He, Y; Chen, H

    2015-01-01

    Although low impact development (LID) has been commonly applied across developed countries to mitigate the negative impacts of combined sewer overflows (CSOs) on the urban hydrological environment, it has not yet been widely used in developing countries. In this paper, a typical combined sewer system in an urbanized area of Shanghai, China was used to demonstrate how to design and choose CSO control solutions with LID using the stormwater management model (SWMM). We constructed and simulated three types of CSO control scenarios. Our findings support the notion that LID measures can effectively reduce CSOs. Nevertheless, the green scenarios, which consist entirely of LID measures, fail to achieve the maximal effectiveness in CSO reduction, while the gray-green scenarios (LID measures combined with gray measures) achieve it. The unit cost-effectiveness of each type of scenario ranks as: green scenario > gray-green scenario > gray scenario. Because a storage tank has already been built in the case catchment, a completely green scenario is not feasible here. Through comprehensive evaluation and comparison, gray-green scenario F, which combines a storage tank, bio-retention and rain barrels, is considered the most feasible option in this case.

  10. Group decisions in biodiversity conservation: implications from game theory.

    PubMed

    Frank, David M; Sarkar, Sahotra

    2010-05-27

    Decision analysis and game theory have proved useful tools in various biodiversity conservation planning and modeling contexts. This paper shows how game theory may be used to inform group decisions in biodiversity conservation scenarios by modeling conflicts between stakeholders to identify Pareto-inefficient Nash equilibria. These are cases in which each agent pursuing individual self-interest leads to a worse outcome for all, relative to other feasible outcomes. Three case studies from biodiversity conservation contexts showing this feature are modeled to demonstrate how game-theoretical representation can inform group decision-making. The mathematical theory of games is used to model three biodiversity conservation scenarios with Pareto-inefficient Nash equilibria: (i) a two-agent case involving wild dogs in South Africa; (ii) a three-agent raptor and grouse conservation scenario from the United Kingdom; and (iii) an n-agent fish and coral conservation scenario from the Philippines. In each case there is reason to believe that traditional mechanism-design solutions that appeal to material incentives may be inadequate, and the game-theoretical analysis recommends a resumption of further deliberation between agents and the initiation of trust- and confidence-building measures. Game theory can and should be used as a normative tool in biodiversity conservation contexts: identifying scenarios with Pareto-inefficient Nash equilibria enables constructive action in order to achieve (closer to) optimal conservation outcomes, whether by policy solutions based on mechanism design or otherwise. However, there is mounting evidence that formal mechanism-design solutions may backfire in certain cases. Such scenarios demand a return to group deliberation and the creation of reciprocal relationships of trust.
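    The core diagnostic described above (a Nash equilibrium that is Pareto-dominated by another feasible outcome) can be checked by brute force for small games. The payoff table below is a prisoner's-dilemma-style stand-in, not the payoffs of the actual case studies:

```python
def nash_equilibria(payoffs):
    """Pure-strategy Nash equilibria of a 2-player game.

    payoffs maps (action1, action2) -> (payoff1, payoff2).
    """
    actions1 = {a for a, _ in payoffs}
    actions2 = {b for _, b in payoffs}
    eq = []
    for (a, b), (p1, p2) in payoffs.items():
        best1 = all(payoffs[(x, b)][0] <= p1 for x in actions1)
        best2 = all(payoffs[(a, y)][1] <= p2 for y in actions2)
        if best1 and best2:
            eq.append((a, b))
    return eq

def pareto_dominated(profile, payoffs):
    """True if some other profile makes both players at least as well off
    and at least one strictly better off."""
    p = payoffs[profile]
    return any(q[0] >= p[0] and q[1] >= p[1] and q != p
               for k, q in payoffs.items() if k != profile)

# Hypothetical payoffs: mutual defection ('D', 'D') is the unique Nash
# equilibrium, yet it is Pareto-dominated by mutual cooperation ('C', 'C').
pd_game = {('C', 'C'): (3, 3), ('C', 'D'): (0, 4),
           ('D', 'C'): (4, 0), ('D', 'D'): (1, 1)}
```

Finding an equilibrium that `pareto_dominated` flags is exactly the signal the paper uses to recommend further deliberation and trust-building rather than purely incentive-based mechanism design.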

  11. A multivariate copula-based framework for dealing with hazard scenarios and failure probabilities

    NASA Astrophysics Data System (ADS)

    Salvadori, G.; Durante, F.; De Michele, C.; Bernardi, M.; Petrella, L.

    2016-05-01

    This paper is methodological in nature and deals with the foundations of Risk Assessment. Several international guidelines have recently recommended selecting appropriate/relevant Hazard Scenarios in order to tame the consequences of (extreme) natural phenomena. In particular, the scenarios should be multivariate, i.e., they should take into account the fact that several variables, generally not independent, may be of interest. In this work, it is shown how a Hazard Scenario can be identified in terms of (i) a specific geometry and (ii) a suitable probability level. Several scenarios, as well as a Structural approach, are presented, and comparisons among them are carried out. In addition, it is shown how the Hazard Scenario approach illustrated here is well suited to cope with the notion of Failure Probability, a tool traditionally used for design and risk assessment in engineering practice. All the results outlined throughout the work are based on the Copula Theory, which turns out to be a fundamental theoretical apparatus for doing multivariate risk assessment: formulas for the calculation of the probability of Hazard Scenarios in the general multidimensional case (d≥2) are derived, and noteworthy analytical relationships among the probabilities of occurrence of Hazard Scenarios are presented. In addition, the Extreme Value and Archimedean special cases are dealt with, relationships between dependence ordering and scenario levels are studied, and a counter-example concerning Tail Dependence is shown. Suitable indications for the practical application of the techniques outlined in the work are given, and two case studies illustrate the procedures discussed in the paper.
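    In the bivariate case, the "OR" and "AND" Hazard Scenario probabilities follow directly from the copula. The sketch below uses the Gumbel copula (a family that is both Extreme-Value and Archimedean, two special cases treated in the paper) as an example; u and v denote the marginal non-exceedance probabilities at the chosen thresholds:

```python
from math import exp, log

def gumbel_copula(u, v, theta):
    """Gumbel copula C(u, v) for theta >= 1 (theta = 1 is independence)."""
    return exp(-(((-log(u)) ** theta + (-log(v)) ** theta) ** (1 / theta)))

def p_or(u, v, theta):
    """P(X > x OR Y > y): at least one variable exceeds its threshold."""
    return 1 - gumbel_copula(u, v, theta)

def p_and(u, v, theta):
    """P(X > x AND Y > y): joint exceedance, via the survival copula."""
    return 1 - u - v + gumbel_copula(u, v, theta)
```

At theta = 1 these reduce to the independent case; increasing theta (stronger upper-tail dependence) raises the joint exceedance probability, which is why ignoring dependence can understate compound hazards.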

  12. Secondary School Education in Assam (India) with Special Reference to Mathematics

    ERIC Educational Resources Information Center

    Das, N. R.; Baruah, Karuna

    2010-01-01

    This paper describes the prevailing academic scenarios of a representative group of secondary schools in Assam (India) with special reference to students' performance in general and mathematics performance in particular. The state of Assam is one of the economically backward regions of India and is witnessing socio-political disturbances mainly…

  13. Impact of pre-imputation SNP-filtering on genotype imputation results

    PubMed Central

    2014-01-01

    Background Imputation of partially missing or unobserved genotypes is an indispensable tool for SNP data analyses. However, research on and understanding of the impact of initial SNP-data quality control on imputation results is still limited. In this paper, we aim to evaluate the effect of different strategies of pre-imputation quality filtering on the performance of the widely used imputation algorithms MaCH and IMPUTE. Results We considered three scenarios: imputation of partially missing genotypes with use of an external reference panel, imputation without an external reference panel, and imputation of completely un-typed SNPs using an external reference panel. We first created various datasets applying different SNP quality filters and masking certain percentages of randomly selected high-quality SNPs. We imputed these SNPs and compared the results between the different filtering scenarios using established and newly proposed measures of imputation quality. While the established measures assess the certainty of imputation results, our newly proposed measures focus on agreement with the true genotypes. These measures showed that pre-imputation SNP-filtering can be detrimental to imputation quality. Moreover, the strongest drivers of imputation quality were in general the burden of missingness and the number of SNPs used for imputation. We also found that using a reference panel always improves imputation quality of partially missing genotypes. MaCH performed slightly better than IMPUTE2 in most of our scenarios. Again, these results were more pronounced when using our newly defined measures of imputation quality. Conclusion Even moderate filtering has a detrimental effect on imputation quality. Therefore, little or no SNP filtering prior to imputation appears to be the best strategy for imputing small to moderately sized datasets. Our results also showed that for these datasets, MaCH performs slightly better than IMPUTE2 in most scenarios, at the cost of increased computing time. PMID:25112433
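    The idea of measuring agreement with true genotypes can be sketched minimally as follows (function names and the dosage data are illustrative, not part of the MaCH/IMPUTE tooling): mask known genotypes, impute them, convert imputed allele dosages to best-guess calls, and report the concordance rate.

```python
def best_guess(dosage):
    """Convert an imputed allele dosage in [0, 2] into a hard 0/1/2 genotype call."""
    return min(2, max(0, round(dosage)))

def concordance(true_calls, imputed_dosages):
    """Fraction of masked genotypes whose best-guess imputed call matches the truth."""
    calls = [best_guess(d) for d in imputed_dosages]
    return sum(t == c for t, c in zip(true_calls, calls)) / len(true_calls)

truth = [0, 1, 2, 1, 0]               # masked high-quality genotypes (minor-allele counts)
dosages = [0.1, 1.2, 1.9, 0.4, 0.4]   # hypothetical imputed dosages for the same SNPs
rate = concordance(truth, dosages)    # 4 of 5 calls agree -> 0.8
```

    Unlike certainty-based metrics, this measure penalizes confidently wrong imputations, which is why the abstract's filtering effects show up more strongly under it.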

  14. SU-F-T-192: Study of Robustness Analysis Method of Multiple Field Optimized IMPT Plans for Head & Neck Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Y; Wang, X; Li, H

    Purpose: Proton therapy is more sensitive to uncertainties than photon treatments because protons' finite range depends on tissue density. The worst-case scenario (WCS) method, originally proposed by Lomax, has been adopted in our institution for robustness analysis of IMPT plans. This work demonstrates that the WCS method is sufficient to account for the uncertainties that could be encountered during daily clinical treatment. Methods: A fast, approximate dose calculation method was developed to calculate the dose for an IMPT plan under different setup and range uncertainties. The effects of two factors, the inverse square factor and the range uncertainty, are explored. The WCS robustness analysis method was evaluated using this fast dose calculation method. The worst-case dose distribution was generated by shifting the isocenter by 3 mm along the x, y, and z directions and modifying stopping power ratios by ±3.5%. 1000 randomly perturbed cases in proton range and in the x, y, and z directions were created, and the corresponding dose distributions were calculated using this approximate method. DVHs and dosimetric indexes of all 1000 perturbed cases were calculated and compared with the WCS results. Results: The distributions of dosimetric indexes of the 1000 perturbed cases were generated and compared with the WCS results. For D95 of the CTVs, at least 97% of the 1000 perturbed cases showed higher values than the worst-case scenario. For D5 of the CTVs, at least 98% of perturbed cases had lower values than the worst-case scenario. Conclusion: By extensively calculating the dose distributions under random uncertainties, the WCS method was verified to be reliable in evaluating the robustness of MFO IMPT plans for H&N patients. The extensive-sampling approach using the fast approximate method could be used in the future to evaluate the effects of different factors on the robustness of IMPT plans.
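    The verification logic can be sketched with a toy surrogate standing in for the dose engine (the falloff model below is invented purely for illustration and is not the approximate dose calculation used in the abstract): sample random perturbations inside the uncertainty bounds and compare each resulting D95 against the worst-case corner.

```python
import random

random.seed(0)  # reproducible sampling

def sample_perturbation():
    """One random perturbation: isocenter shift (mm per axis) and stopping-power error."""
    dx, dy, dz = (random.uniform(-3.0, 3.0) for _ in range(3))
    drange = random.uniform(-0.035, 0.035)
    return dx, dy, dz, drange

def toy_d95(dx, dy, dz, drange, nominal=60.0):
    """Invented surrogate: D95 (Gy) degrades with perturbation magnitude. Not a dose engine."""
    shift = (dx * dx + dy * dy + dz * dz) ** 0.5
    return nominal - 0.3 * shift - 40.0 * abs(drange)

# Worst-case corner: 3 mm shift on every axis plus the extreme -3.5% range error.
wcs_d95 = toy_d95(3.0, 3.0, 3.0, -0.035)
perturbed = [toy_d95(*sample_perturbation()) for _ in range(1000)]
frac_at_or_above_wcs = sum(d >= wcs_d95 for d in perturbed) / len(perturbed)
# With a monotone surrogate the corner bounds all interior perturbations,
# mirroring the abstract's finding that >= 97% of random cases beat the WCS D95.
```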

  15. Implications of an Absolute Simultaneity Theory for Cosmology and Universe Acceleration

    PubMed Central

    Kipreos, Edward T.

    2014-01-01

    An alternate Lorentz transformation, Absolute Lorentz Transformation (ALT), has similar kinematics to special relativity yet maintains absolute simultaneity in the context of a preferred reference frame. In this study, it is shown that ALT is compatible with current experiments to test Lorentz invariance only if the proposed preferred reference frame is locally equivalent to the Earth-centered non-rotating inertial reference frame, with the inference that in an ALT framework, preferred reference frames are associated with centers of gravitational mass. Applying this theoretical framework to cosmological data produces a scenario of universal time contraction in the past. In this scenario, past time contraction would be associated with increased levels of blueshifted light emissions from cosmological objects when viewed from our current perspective. The observation that distant Type Ia supernovae are dimmer than predicted by linear Hubble expansion currently provides the most direct evidence for an accelerating universe. Adjusting for the effects of time contraction on a redshift–distance modulus diagram produces a linear distribution of supernovae over the full redshift spectrum that is consistent with a non-accelerating universe. PMID:25536116

  16. Implications of an absolute simultaneity theory for cosmology and universe acceleration.

    PubMed

    Kipreos, Edward T

    2014-01-01

    An alternate Lorentz transformation, Absolute Lorentz Transformation (ALT), has similar kinematics to special relativity yet maintains absolute simultaneity in the context of a preferred reference frame. In this study, it is shown that ALT is compatible with current experiments to test Lorentz invariance only if the proposed preferred reference frame is locally equivalent to the Earth-centered non-rotating inertial reference frame, with the inference that in an ALT framework, preferred reference frames are associated with centers of gravitational mass. Applying this theoretical framework to cosmological data produces a scenario of universal time contraction in the past. In this scenario, past time contraction would be associated with increased levels of blueshifted light emissions from cosmological objects when viewed from our current perspective. The observation that distant Type Ia supernovae are dimmer than predicted by linear Hubble expansion currently provides the most direct evidence for an accelerating universe. Adjusting for the effects of time contraction on a redshift-distance modulus diagram produces a linear distribution of supernovae over the full redshift spectrum that is consistent with a non-accelerating universe.

  17. Combined semantic and similarity search in medical image databases

    NASA Astrophysics Data System (ADS)

    Seifert, Sascha; Thoma, Marisa; Stegmaier, Florian; Hammon, Matthias; Kramer, Martin; Huber, Martin; Kriegel, Hans-Peter; Cavallaro, Alexander; Comaniciu, Dorin

    2011-03-01

    The current diagnostic process at hospitals is mainly based on reviewing and comparing images coming from multiple time points and modalities in order to monitor disease progression over a period of time. However, for ambiguous cases the radiologist relies heavily on reference literature or a second opinion. Although there is a vast amount of acquired images stored in PACS systems that could be reused for decision support, these archives offer only weak search capabilities. Thus, we present a search methodology that enables the physician to carry out intelligent search scenarios on medical image databases, combining ontology-based semantic search with appearance-based similarity search. It enabled the elimination of 12% of the top-ten hits that would arise without taking the semantic context into account.
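    The combination can be sketched minimally as follows (the data layout, labels, and feature vectors are hypothetical, not the authors' system): first filter candidates by their semantic annotation, then rank the survivors by appearance-based distance.

```python
def combined_search(query_vec, query_label, database):
    """Ontology-based semantic filter followed by appearance-based similarity ranking."""
    semantic_hits = [item for item in database if query_label in item["labels"]]

    def distance(item):
        # Squared Euclidean distance between appearance feature vectors.
        return sum((a - b) ** 2 for a, b in zip(query_vec, item["vec"]))

    return sorted(semantic_hits, key=distance)

database = [
    {"id": "ct_001", "labels": {"liver", "lesion"}, "vec": [0.85, 0.15]},
    {"id": "ct_002", "labels": {"lung"},            "vec": [0.20, 0.80]},
    {"id": "ct_003", "labels": {"liver"},           "vec": [0.70, 0.30]},
]
hits = combined_search([0.8, 0.2], "liver", database)
# ct_002 is eliminated by the semantic filter; ct_001 ranks before ct_003.
```

    The semantic filter is what removes visually similar but contextually wrong hits, the effect quantified as the 12% elimination in the abstract.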

  18. Directed Energy Technology Working Group Report (IDA/OSD R&M (Institute for Defense Analyses/Office of the Secretary of Defense Reliability and Maintainability) Study).

    DTIC Science & Technology

    1983-08-01

    Case studies are provided for a Submarine-Launched Ballistic Missile (SLBM) Defense Scenario and a strategic Space-Based Anti-Ballistic Missile (ABM) Defense Scenario, in which an orbiting battle station operates as an element of a GBMD system.

  19. The potential effect of population development, smoking and antioxidant supplementation on the future epidemiology of age-related macular degeneration in Switzerland.

    PubMed

    Bauer, P; Barthelmes, D; Kurz, M; Fleischhauer, J C; Sutter, F K

    2008-05-01

    Due to the predicted age shift of the population, an increase in the number of patients with late AMD is expected. At present, smoking is the only modifiable risk factor, and supplementation of antioxidants in patients at risk is the sole effective pharmacological prevention. The aim of this study is to estimate the future epidemiological development of late AMD in Switzerland and to quantify the potential effects of smoking and antioxidant supplementation. The modelling of the future development of late AMD cases in Switzerland was based on a meta-analysis of the published data on AMD prevalence and on published Swiss population development scenarios until 2050. Three different scenarios were compared: low, mean, and high. The late AMD cases caused by smoking were calculated using the "population attributable fraction" formula and data on the current smoking habits of the Swiss population. The number of potentially preventable cases was estimated using the data of the Age-Related Eye Disease Study (AREDS). According to the mean population development scenario, late AMD cases in Switzerland will rise from 37 200 cases in 2005 to 52 500 cases in 2020 and to 93 200 cases in 2050. Under the "low" and "high" scenarios, late AMD cases may range from 49 500 to 56 000 in 2020 and from 73 700 to 118 400 in 2050, respectively. Smoking is responsible for approximately 7% of all late AMD cases, i.e., 2600 cases in 2005, 3800 cases in 2020, and 6600 cases in 2050 ("mean scenario"). With future antioxidant supplementation to all patients at risk, another 3100 cases would be preventable until 2020 and possibly 23 500 cases until 2050. Due to the age shift in the population, a 2.5-fold increase in late AMD cases until 2050 is expected, representing a socioeconomic challenge. Cessation of smoking and supplementation of antioxidants to all patients at risk have the potential to reduce this number. Unfortunately, public awareness is low. These data may support health-care providers and public opinion leaders when developing public education and prevention strategies.
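    The "population attributable fraction" calculation named above can be sketched with Levin's formula; the prevalence and relative-risk numbers below are illustrative placeholders, not the study's inputs.

```python
def population_attributable_fraction(prevalence, relative_risk):
    """Levin's formula: PAF = p*(RR - 1) / (1 + p*(RR - 1))."""
    excess = prevalence * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# Illustrative placeholders: ~27% smoking prevalence and a relative risk of ~1.3
# for late AMD yield a PAF near the ~7% quoted in the abstract.
paf = population_attributable_fraction(0.27, 1.3)   # ~0.075
smoking_cases_2005 = paf * 37200                    # ~2800 of the 2005 case total
```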

  20. Transitioning from Software Requirements Models to Design Models

    NASA Technical Reports Server (NTRS)

    Lowry, Michael (Technical Monitor); Whittle, Jon

    2003-01-01

    Summary: 1. Proof-of-concept of state machine synthesis from scenarios - CTAS case study. 2. The CTAS team wants to use the synthesis algorithm to validate trajectory generation. 3. Extending the synthesis algorithm towards requirements validation: (a) scenario relationships, (b) a methodology for generalizing/refining scenarios, and (c) interaction patterns to control synthesis. 4. Initial ideas tested on conflict detection scenarios.

  1. Children and adults exposed to electromagnetic fields at the ICNIRP reference levels: theoretical assessment of the induced peak temperature increase.

    PubMed

    Bakker, J F; Paulides, M M; Neufeld, E; Christ, A; Kuster, N; van Rhoon, G C

    2011-08-07

    To avoid potentially adverse health effects of electromagnetic fields (EMF), the International Commission on Non-Ionizing Radiation Protection (ICNIRP) has defined EMF reference levels. Restrictions on induced whole-body-averaged specific absorption rate (SAR(wb)) are provided to keep the whole-body temperature increase (T(body, incr)) under 1 °C during 30 min. Additional restrictions on the peak 10 g spatial-averaged SAR (SAR(10g)) are provided to prevent excessive localized tissue heating. The objective of this study is to assess the localized peak temperature increase (T(incr, max)) in children upon exposure at the reference levels. Finite-difference time-domain modeling was used to calculate T(incr, max) in six children and two adults exposed to orthogonal plane-wave configurations. We performed a sensitivity study and Monte Carlo analysis to assess the uncertainty of the results. Considering the uncertainties in the model parameters, we found that a peak temperature increase as high as 1 °C can occur for worst-case scenarios at the ICNIRP reference levels. Since the guidelines are deduced from temperature increase, we used T(incr, max), rather than localized peak SAR, as a better metric for preventing excessive localized tissue heating. However, we note that the exposure time should also be considered in future guidelines. Hence, we advise defining limits on T(incr, max) for specified durations of exposure.

  2. Towards clinical application of RayStretch for heterogeneity corrections in LDR permanent 125I prostate brachytherapy.

    PubMed

    Hueso-González, Fernando; Ballester, Facundo; Perez-Calatayud, Jose; Siebert, Frank-André; Vijande, Javier

    RayStretch is a simple algorithm proposed for heterogeneity corrections in low-dose-rate brachytherapy. It is built on top of TG-43 consensus data and has been validated with Monte Carlo (MC) simulations. In this study, we take a real clinical prostate implant with 71 iodine-125 (125I) seeds as reference and apply RayStretch to analyze its performance in worst-case scenarios. To do so, we design two cases in which large calcifications are located in the prostate lobules. RayStretch resilience under various calcification density values is also explored. Comparisons against MC calculations are performed. Dose-volume histogram-related parameters such as prostate D90, rectum D2cc, or urethra D10 obtained with RayStretch agree within a few percent with the detailed MC results for all cases considered. The robustness and compatibility of RayStretch with commercial treatment planning systems indicate its applicability in clinical practice for dosimetric corrections in prostate calcifications. Its use during intraoperative ultrasound planning is foreseen. Copyright © 2017 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.

  3. Impact of forecasted changes in Polish economy (2015 and 2020) on nutrient emission into the river basins.

    PubMed

    Pastuszak, Marianna; Kowalkowski, Tomasz; Kopiński, Jerzy; Stalenga, Jarosław; Panasiuk, Damian

    2014-09-15

    Poland, whose large drainage area contributes 50% of the agricultural land and 45% of the population of the Baltic catchment, is the largest exporter of riverine nitrogen (N) and phosphorus (P) to the sea. The economic transition has resulted in a substantial, statistically significant decline in N and P export from Polish territory to the Baltic Sea. Following the obligations arising from the Helsinki Commission (HELCOM) declarations, in the coming years Poland is expected to reduce riverine N loads by ca. 25% and P loads by ca. 60% relative to the average flow-normalized loads recorded in 1997-2003. The aim of this paper is to estimate annual source-apportioned N and P emissions into these river basins in 2015 and 2020 with application of modeling studies (MONERIS). Twelve scenarios, encompassing changes in anthropogenic (diffuse, point source) and natural pressures (precipitation, water outflow due to climate change), have been applied. The modeling outcome for the period 2003-2008 served as our reference material. In the applied scenarios, N emission into the Oder basin in 2015 and 2020 shows an increase from 4.2% up to 9.1% compared with the reference period. N emission into the Vistula basin is more variable and shows an increase of up to 17.8% or a decrease of up to 4.7%, depending on the scenario. The difference between N emission into the Oder and Vistula basins is related to catchment peculiarities and the handling of point-source emissions. P emission into both basins shows identical scenario patterns, with a maximum decrease of 17.8% in the Oder and 16.7% in the Vistula basin. Despite a declining tendency in P loads in both rivers in all the scenarios, the HELCOM-targeted P load reduction is not feasible. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. Signature of present and projected climate change at an urban scale: The case of Addis Ababa

    NASA Astrophysics Data System (ADS)

    Arsiso, Bisrat Kifle; Mengistu Tsidu, Gizaw; Stoffberg, Gerrit Hendrik

    2018-06-01

    Understanding climate change and variability at an urban scale is essential for water resource management, land use planning, the development of adaptation plans, and the mitigation of air and water pollution. However, there are serious challenges to meeting these goals due to the unavailability of observed and/or simulated high-resolution spatial and temporal climate data. The statistical downscaling of general circulation models, for instance, is usually driven by sparse observational data, hindering the use of downscaled data to investigate past urban-scale climate variability and change. Recently, these challenges have been partly resolved by a concerted international effort to produce global, high-spatial-resolution climate data. In this study, the 1 km2 high-resolution NIMR-HadGEM2-AO simulations for future projections under Representative Concentration Pathways (RCP4.5 and RCP8.5) scenarios and gridded observations provided by the Worldclim data center are used to assess the changes in rainfall and in minimum and maximum temperature expected under the two scenarios over Addis Ababa city. The gridded 1 km2 observational data set for the base period (1950-2000) was compared to observations from a meteorological station in the city in order to assess its quality for use as reference (baseline) data; the comparison revealed that the data set has very good quality. The rainfall anomalies under the RCP scenarios are wet in the 2030s (2020-2039), 2050s (2040-2069), and 2080s (2070-2099). Both minimum and maximum temperature anomalies under the RCPs become successively warmer over these periods. Thus, the projected changes under the RCP scenarios show a general increase in rainfall and temperatures, with strong variability in rainy-season rainfall, implying difficulties for water resource use and management as well as land use planning and management.

  5. The origins of dragon-kings and their occurrence in society

    NASA Astrophysics Data System (ADS)

    Malkov, Artemy; Zinkina, Julia; Korotayev, Andrey

    2012-11-01

    A society is a medium with a complex structure of one-to-one relations between people. These could be relations between friends, wife-husband relationships, relations between business partners, and so on. At a certain level of analysis, a society can be regarded as a gigantic maze constituted of one-to-one relationships between people; from a physical standpoint it can be considered a highly porous medium. Such media are widely known for outstanding properties and effects like self-organized criticality, percolation, and power-law distribution of network cluster sizes. In these media, supercritical events, referred to as dragon-kings, may occur in two cases: when increasing stress is applied to a system (self-organized criticality scenario) or when increasing conductivity of a system is observed (percolation scenario). In social applications the first scenario is typical of negative effects: crises, wars, revolutions, financial breakdowns, state collapses, etc. The second scenario is more typical of positive effects like the emergence of cities, growth of firms, population blow-ups, economic miracles, technology diffusion, and social network formation. If both conditions (increasing stress and increasing conductivity) are observed together, then extraordinary dragon-king effects can occur that involve most of human society. Historical examples of this effect are the emergence of the Mongol Empire, world religions, World War II, and the explosive proliferation of global internet services. This article describes these two scenarios in detail, beginning with an overview of historical dragon-king events and phenomena from early human history to recent decades, and concluding with an analysis of their possible near-future consequences for our global society. We thus demonstrate that in social systems a dragon-king is not a random outlier unexplainable by power-law statistics but a natural effect: a very large cluster in a porous percolation medium. It occurs as a result of changes in external conditions, such as supercritical load, an increase in the sensitivity of system elements, or growth in system connectivity.

  6. Accuracy and Reliability of Emergency Department Triage Using the Emergency Severity Index: An International Multicenter Assessment.

    PubMed

    Mistry, Binoy; Stewart De Ramirez, Sarah; Kelen, Gabor; Schmitz, Paulo S K; Balhara, Kamna S; Levin, Scott; Martinez, Diego; Psoter, Kevin; Anton, Xavier; Hinson, Jeremiah S

    2018-05-01

    We assess accuracy and variability of triage score assignment by emergency department (ED) nurses using the Emergency Severity Index (ESI) in 3 countries. In accordance with previous reports and clinical observation, we hypothesize low accuracy and high variability across all sites. This cross-sectional multicenter study enrolled 87 ESI-trained nurses from EDs in Brazil, the United Arab Emirates, and the United States. Standardized triage scenarios published by the Agency for Healthcare Research and Quality (AHRQ) were used. Accuracy was defined by concordance with the AHRQ key and calculated as percentages. Accuracy comparisons were made with one-way ANOVA and paired t test. Interrater reliability was measured with Krippendorff's α. Subanalyses based on nursing experience and triage scenario type were also performed. Mean accuracy pooled across all sites and scenarios was 59.2% (95% confidence interval [CI] 56.4% to 62.0%) and interrater reliability was modest (α=.730; 95% CI .692 to .767). There was no difference in overall accuracy between sites or according to nurse experience. Medium-acuity scenarios were scored with greater accuracy (76.4%; 95% CI 72.6% to 80.3%) than high- or low-acuity cases (44.1%, 95% CI 39.3% to 49.0% and 54%, 95% CI 49.9% to 58.2%), and adult scenarios were scored with greater accuracy than pediatric ones (66.2%, 95% CI 62.9% to 69.7% versus 46.9%, 95% CI 43.4% to 50.3%). In this multinational study, concordance of nurse-assigned ESI score with reference standard was universally poor and variability was high. Although the ESI is the most popular ED triage tool in the United States and is increasingly used worldwide, our findings point to a need for more reliable ED triage tools. Copyright © 2017 American College of Emergency Physicians. Published by Elsevier Inc. All rights reserved.
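    The headline accuracy figure here is plain percent agreement with the AHRQ key, optionally broken down by scenario type; a minimal sketch (the scenario data below are hypothetical, not the study's):

```python
def percent_agreement(assigned, key):
    """Percent of triage scenarios where the nurse's ESI level matches the reference key."""
    return 100.0 * sum(a == k for a, k in zip(assigned, key)) / len(key)

def accuracy_by_group(assigned, key, groups):
    """Break accuracy down by scenario type (e.g. acuity level or adult/pediatric)."""
    out = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        out[g] = percent_agreement([assigned[i] for i in idx], [key[i] for i in idx])
    return out

assigned = [1, 3, 3, 4, 2, 5]                                  # nurse-assigned ESI levels
key      = [1, 2, 3, 4, 3, 5]                                  # AHRQ reference key
groups   = ["high", "medium", "medium", "low", "medium", "low"]
overall = percent_agreement(assigned, key)            # 4 of 6 agree -> ~66.7%
by_group = accuracy_by_group(assigned, key, groups)   # medium-acuity: 1 of 3 agree
```

    Interrater reliability (the Krippendorff's α reported in the abstract) is a separate chance-corrected statistic and is not reproduced here.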

  7. A downscaling method for the assessment of local climate change

    NASA Astrophysics Data System (ADS)

    Bruno, E.; Portoghese, I.; Vurro, M.

    2009-04-01

    The use of complementary models is necessary to study the impact of climate change scenarios on the hydrological response at different space-time scales. However, the structure of GCMs is such that their spatial resolution (hundreds of kilometres) is too coarse to describe the variability of extreme events at the basin scale (Burlando and Rosso, 2002). Bridging the space-time gap between climate scenarios and the usual scale of the inputs for hydrological prediction models is a fundamental requisite for the evaluation of climate change impacts on water resources. Since models operate a simplification of a complex reality, their results cannot be expected to fit climate observations. Identifying local climate scenarios for impact analysis therefore implies defining more detailed local scenarios by downscaling GCM or RCM results. Among the output correction methods, we consider the statistical approach of Déqué (2007), reported as a 'variable correction method', in which model outputs are corrected by a function built from the observation dataset that operates a quantile-quantile transformation (Q-Q transform). However, in the case of daily precipitation fields the Q-Q transform is not able to correct the temporal properties of the model output concerning the dry-wet lacunarity process. An alternative correction method is proposed, based on a stochastic description of the arrival-duration-intensity processes in coherence with the Poissonian Rectangular Pulse (PRP) scheme (Eagleson, 1972). In this proposed approach, the Q-Q transform is applied to the PRP variables derived from the daily rainfall datasets. The corrected PRP parameters are then used for the synthetic generation of statistically homogeneous rainfall time series that mimic the persistence of daily observations for the reference period, and the PRP parameters are forced with the GCM scenarios to generate local-scale rainfall records for the 21st century. The statistical parameters characterizing daily storm occurrence, storm intensity, and storm duration needed to apply the PRP scheme are chosen from the STARDEX collection of extreme indices.
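    The Q-Q transform at the core of the 'variable correction method' amounts to an empirical quantile mapping; the sketch below uses synthetic data and raw values for simplicity (the authors apply the transform to PRP variables rather than to daily values directly):

```python
import numpy as np

def qq_correct(model_values, obs_ref, model_ref):
    """Empirical Q-Q transform: map each model value to the observed value at the
    same quantile of the reference-period distributions."""
    model_sorted = np.sort(model_ref)
    # Empirical quantile of each value within the model reference distribution...
    quantiles = np.searchsorted(model_sorted, model_values, side="right") / len(model_sorted)
    quantiles = np.clip(quantiles, 0.0, 1.0)
    # ...mapped onto the observed reference distribution at those quantiles.
    return np.quantile(np.sort(obs_ref), quantiles)

rng = np.random.default_rng(42)
obs = rng.gamma(2.0, 5.0, size=3000)   # synthetic "observed" daily rainfall, reference period
model = obs * 0.7 + 1.0                # synthetic biased "model" output for the same period
corrected = qq_correct(model, obs, model)
# After correction the model distribution closely matches the observed one.
```

    Applied to scenario-period output, the same mapping (built on the reference period) carries the bias correction into the future climate.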

  8. Projected changes in the evolution of drought assessed with the SPEI over the Czech Republic

    NASA Astrophysics Data System (ADS)

    Potop, V.; Boroneana, C.; Stepanek, P.; Skalak, P.; Mozny, M.

    2012-04-01

    In previous studies, the spatial and temporal evolution of drought events in the Czech Republic was extensively analyzed by comparing results from advanced drought indices (e.g., the SPI and SPEI) that take into account the role of antecedent conditions in quantifying drought severity. In the present study, the Standardized Precipitation Evapotranspiration Index (SPEI) was adopted to assess and project drought characteristics in the Czech Republic based on data simulated with the regional climate model ALADIN-Climate/CZ. The simulations were conducted at high resolution (10 km) for the current climate (1961-2000) and two future periods (2021-2050 and 2071-2100). First, the observed data of air temperature and precipitation totals were transferred onto the regular grid of the ALADIN-Climate/CZ model. A bias correction was applied to the scenario runs; the method is based on variable correction using individual percentiles whose relationship is derived from observations and the control RCM simulation. The SPEI was calculated from monthly mean temperature and precipitation totals for the period 1961-1990, as the reference period, and for the periods 2021-2050 and 2071-2100, as future climates under the A1B SRES scenario. The SPEI was calculated at lags of 1, 3, 6, 12, and 24 months, because drought at these time scales is relevant for agricultural, hydrological, and socio-economic impacts, respectively. The study refers to the warm season of the year (April to September). As in the observational study, we identified three climatically homogeneous regions, corresponding to altitudes below 400 m, between 401 and 700 m, and above 700 m. For these three regions, the frequency distribution of SPEI values in 7 classes of drought category (%) was calculated from the grid points falling in each region, both for the observed data and the scenario runs. The paper presents the projected changes in the frequency distribution of the SPEI at various time scales, and in the intensity, duration, and spatial distribution of drought over the territory of the Czech Republic under the A1B scenario for the middle and end of the 21st century. We gratefully acknowledge the support of the Ministry of Education, Youth and Sports for project OC10010.
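    Operational SPEI fits a parametric (log-logistic) distribution to the climatic water balance; the sketch below shows the underlying idea with a simpler nonparametric, empirical-quantile standardization and hypothetical data:

```python
import statistics

def spei_like(water_balance, climatology):
    """Empirical SPEI-style index: rank each monthly climatic water balance (P - PET)
    value against a reference climatology and map its quantile to a normal score."""
    nd = statistics.NormalDist()
    n = len(climatology)
    scores = []
    for d in water_balance:
        rank = sum(c <= d for c in climatology)
        p = (rank + 0.5) / (n + 1)      # plotting position, kept strictly inside (0, 1)
        scores.append(nd.inv_cdf(p))
    return scores

climatology = list(range(-50, 50))      # hypothetical reference P - PET values (mm)
index = spei_like([-45, 0, 45], climatology)
# Strongly negative scores indicate drought; values near 0 are normal conditions.
```

    Accumulating the water balance over 1, 3, 6, 12, or 24 months before standardizing yields the multi-scale versions used in the study.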

  9. Thermoelectric transport properties in graphene connected molecular junctions

    NASA Astrophysics Data System (ADS)

    Rodriguez, S. T.; Grosu, I.; Crisan, M.; Ţifrea, I.

    2018-02-01

    We study the electronic contribution to the main thermoelectric properties of a molecular junction consisting of a single quantum dot coupled to graphene external leads. The system's electrical conductivity (G), Seebeck coefficient (S), and thermal conductivity (κ) are numerically calculated based on a Green's function formalism that includes contributions up to the Hartree-Fock level. We consider the system leads to be made either of pure or gapped graphene. To describe the free electrons in the gapped-graphene electrodes we use two possible scenarios: the massive gap scenario and the massless gap scenario. In all cases, the Fano effect is responsible for a strong violation of the Wiedemann-Franz law, and we find a substantial increase of the system's figure of merit ZT due to a drastic reduction of the thermal conductivity. In the case of gapped-graphene electrodes, the figure of merit presents a maximum at an optimal value of the energy gap, of the order of Δ / D ∼ 0.002 (massive gap scenario) and Δ / D ∼ 0.0026 (massless gap scenario). Additionally, in all cases the figure of merit is temperature dependent.

  10. Cost-effectiveness comparison of lamivudine plus adefovir combination treatment and nucleos(t)ide analog monotherapies in Chinese chronic hepatitis B patients.

    PubMed

    Zhang, Chi; Ke, Weixia; Liu, Li; Gao, Yanhui; Yao, Zhenjiang; Ye, Xiaohua; Zhou, Shudong; Yang, Yi

    2016-01-01

    Lamivudine (LAM) plus adefovir (ADV) combination therapy is clinically efficacious for treating chronic hepatitis B (CHB) patients in China, but no pharmacoeconomic evaluations of this strategy are available. The aim of this study was to examine the cost-effectiveness of LAM plus ADV combination treatment compared with five other nucleos(t)ide analog monotherapies (LAM, ADV, telbivudine [TBV], entecavir [ETV], and tenofovir [TDF]). To simulate the lifetime (40-year time span) costs and quality-adjusted life-years (QALYs) for different therapy options, a Markov model that included five initial monotherapies and LAM plus ADV combination as an initial treatment was developed. Two kinds of rescue combination strategies (base-case: LAM + ADV then ETV + ADV; alternative: direct use of ETV + ADV) were considered separately for treating patients refractory to initial therapy. One-way and probabilistic sensitivity analyses were used to explore model uncertainties. In base-case analysis, ETV had the lowest lifetime cost and served as the reference therapy. Compared to the reference, LAM, ADV, and TBV had higher costs and lower efficacy, and were completely dominated by ETV. LAM plus ADV combination therapy or TDF was more efficacious than ETV, but also more expensive. Although the incremental cost-effectiveness ratios of combination therapy or TDF were both higher than the willingness-to-pay threshold of $20,466/QALY gained for the reference treatment, in an alternative scenario analysis LAM plus ADV combination therapy would be the preferable treatment option. ETV and LAM plus ADV combination therapy are both cost-effective strategies for treating Chinese CHB patients.
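    The base-case comparison logic (dominance, then the incremental cost-effectiveness ratio against a willingness-to-pay threshold) can be sketched as follows; the costs and QALY values are hypothetical placeholders, with only the $20,466/QALY threshold taken from the abstract:

```python
def icer(cost_new, qaly_new, cost_ref, qaly_ref):
    """Incremental cost-effectiveness ratio ($ per QALY gained) versus the reference."""
    return (cost_new - cost_ref) / (qaly_new - qaly_ref)

def classify(cost_new, qaly_new, cost_ref, qaly_ref, wtp):
    """Dominance / willingness-to-pay classification against the reference strategy."""
    if cost_new >= cost_ref and qaly_new <= qaly_ref:
        return "dominated"        # at least as costly, no more effective
    if cost_new <= cost_ref and qaly_new >= qaly_ref:
        return "dominant"         # no more costly, at least as effective
    if icer(cost_new, qaly_new, cost_ref, qaly_ref) <= wtp:
        return "cost-effective"
    return "not cost-effective"

WTP = 20466  # $/QALY threshold quoted in the abstract
# Hypothetical strategies compared with an ETV-like reference ($10,000; 9.5 QALYs):
a = classify(12000, 9.0, 10000, 9.5, WTP)   # costlier and less effective -> "dominated"
b = classify(15000, 10.0, 10000, 9.5, WTP)  # ICER = $10,000/QALY <= WTP -> "cost-effective"
```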

  11. Cross-Scale Intercomparison of Climate Change Impacts Simulated by Regional and Global Hydrological Models in Eleven Large River Basins

    NASA Technical Reports Server (NTRS)

    Hattermann, F. F.; Krysanova, V.; Gosling, S. N.; Dankers, R.; Daggupati, P.; Donnelly, C.; Florke, M.; Huang, S.; Motovilov, Y.; Buda, S.

    2017-01-01

    Ideally, the results from models operating at different scales should agree in trend direction and magnitude of impacts under climate change. However, this implies that the sensitivity to climate variability and climate change is comparable for impact models designed for either scale. In this study, we compare hydrological changes simulated by 9 global and 9 regional hydrological models (HM) for 11 large river basins in all continents under reference and scenario conditions. The foci are on model validation runs, sensitivity of annual discharge to climate variability in the reference period, and sensitivity of the long-term average monthly seasonal dynamics to climate change. One major result is that the global models, mostly not calibrated against observations, often show a considerable bias in mean monthly discharge, whereas regional models show a better reproduction of reference conditions. However, the sensitivity of the two HM ensembles to climate variability is in general similar. The simulated climate change impacts in terms of long-term average monthly dynamics evaluated for HM ensemble medians and spreads show that the medians are to a certain extent comparable in some cases, but have distinct differences in other cases, and the spreads related to global models are mostly notably larger. Summarizing, this implies that global HMs are useful tools when looking at large-scale impacts of climate change and variability. Whenever impacts for a specific river basin or region are of interest, e.g. for complex water management applications, the regional-scale models calibrated and validated against observed discharge should be used.

  12. Assessment of regional air quality resulting from emission control in the Pearl River Delta region, southern China.

    PubMed

    Wang, N; Lyu, X P; Deng, X J; Guo, H; Deng, T; Li, Y; Yin, C Q; Li, F; Wang, S Q

    2016-12-15

    To evaluate the impact of emission control measures on the air quality in the Pearl River Delta (PRD) region of South China, statistical data including atmospheric observations, emissions and energy consumption during 2006-2014 were analyzed, and a Weather Research and Forecasting - Community Multi-scale Air Quality (WRF-CMAQ) model was used for various scenario simulations. Although energy consumption doubled from 2004 to 2014 and the vehicle number significantly increased from 2006 to 2014, ambient SO2, NO2 and PM10 were reduced by 66%, 20% and 24%, respectively, mainly due to emission control efforts. In contrast, O3 increased by 19%. Model simulations of three emission control scenarios, including a baseline (a case in 2010), a CAP (a case in 2020 assuming control strength followed the past control tendency) and a REF (a case in 2020 reflecting the strict control measures in recent policies/plans), were conducted to investigate the response of air pollutants to changes in NOx, VOCs and NH3 emissions. Although the area mean concentrations of NOx, nitrate and PM2.5 decreased under both the NOx CAP (reduced by 1.8%, 0.7% and 0.2%, respectively) and the NOx REF (reduced by 7.2%, 1.8% and 0.3%, respectively), a rise in PM2.5 was found in certain areas because reducing NOx emissions elevated the atmospheric oxidizability. Furthermore, scenarios with NH3 emission reductions showed that nitrate was sensitive to NH3 emissions, with decreases of 0-10.6% and 0-48% under CAP and REF, respectively. Controlling VOC emissions reduced PM2.5 in the southwestern PRD, where severe photochemical pollution frequently occurred. It was also found that O3 formation in the PRD was generally VOC-limited but turned NOx-limited in the afternoon (13:00-17:00), suggesting that cutting VOC emissions would reduce overall O3 concentrations while mitigating NOx emissions in the afternoon could reduce peak O3 levels.
Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Impact of Idealized Stratospheric Aerosol Injection on the Future Ocean and Land Carbon Cycles

    NASA Astrophysics Data System (ADS)

    Tjiputra, J.; Lauvset, S.

    2017-12-01

    Using a state-of-the-art Earth system model, we simulate stratospheric aerosol injection (SAI) on top of the Representative Concentration Pathway 8.5 future scenario. Our idealized method prescribes the aerosol concentration, linearly increasing from 2020 to 2100 and thereafter remaining constant until 2200. In one of the scenarios, the model is able to keep projected warming below 2 degrees toward 2100, although greater warming persists in the high latitudes. When SAI is terminated in 2100, rapid global warming of 0.35 K yr-1 (as compared to 0.05 K yr-1 under RCP8.5) is simulated in the subsequent 10 years, and the global mean temperature rapidly returns to levels close to the reference state. In contrast to earlier findings, we show a weak response in the terrestrial carbon sink during SAI implementation in the 21st century, which we attribute to nitrogen limitation. The SAI increases the land carbon uptake in the temperate forest-, grassland-, and shrub-dominated regions. The resultant lower temperatures lead to a reduction in the heterotrophic respiration rate and increased soil carbon retention. Changes in precipitation patterns are key drivers of variability in vegetation carbon. Upon SAI termination, the level of vegetation carbon storage returns to the reference case, whereas the soil carbon remains high. The ocean absorbs nearly 10% more carbon in the geoengineered simulation than in the reference simulation, leading to a ~15 ppm lower atmospheric CO2 concentration in 2100. The largest enhancement in uptake occurs in the North Atlantic. In both hemispheres' polar regions, SAI delays sea ice melting and, consequently, export production remains low. Despite inducing little impact on surface acidification, in the deep water of the North Atlantic, SAI-induced circulation changes accelerate the ocean acidification rate and broaden the affected area. Since the deep ocean provides vital ecosystem functions and services, e.g., fish stocks, these accelerated changes could introduce broader negative impacts on human welfare.

  14. A Simple Case Study of a Grid Performance System

    NASA Technical Reports Server (NTRS)

    Aydt, Ruth; Gunter, Dan; Quesnel, Darcy; Smith, Warren; Taylor, Valerie; Biegel, Bryan (Technical Monitor)

    2001-01-01

    This document presents a simple case study of a Grid performance system based on the Grid Monitoring Architecture (GMA) being developed by the Grid Forum Performance Working Group. It describes how the various system components would interact for a very basic monitoring scenario, and is intended to introduce people to the terminology and concepts presented in greater detail in other Working Group documents. We believe that by focusing on the simple case first, working group members can familiarize themselves with terminology and concepts, and productively join in the ongoing discussions of the group. In addition, prototype implementations of this basic scenario can be built to explore the feasibility of the proposed architecture and to expose possible shortcomings. Once the simple case is understood and agreed upon, complexities can be added incrementally as warranted by cases not addressed in the most basic implementation described here. Following the basic performance monitoring scenario discussion, unresolved issues are introduced for future discussion.

  15. Key conclusions from AVOID Work Stream One

    NASA Astrophysics Data System (ADS)

    Warren, Rachel

    2010-05-01

    AVOID work stream one (WS1) has produced emission scenarios that simulate potential future global emission pathways for greenhouse gases during the 21st century. The study explored the influence of three key features of such pathways: (1) the year in which emissions peak globally, (2) the rate of emission reduction, and (3) the minimum level to which emissions are eventually reduced. It examined the resultant climate change, climate change impacts and economic implications using computer simulations. Avoided impacts, carbon taxes and GDP change increase throughout the 21st century in the models. AVOID-WS1 showed that in the absence of climate policy it is very likely that global mean temperatures would exceed 3 degrees, and there are even chances that the temperature would rise by 4 degrees relative to pre-industrial times. Scenarios that peak emissions in 2016 were more effective at constraining temperatures to below 3 degrees than those that peaked in 2030: one '2016' scenario achieved a 45% probability of avoiding a breach of the 2 degree threshold. Scenarios peaking in 2030 were inconsistent with constraining temperatures to below 2 degrees. Correspondingly, scenarios that peak in 2016 are more effective at avoiding climate impacts than scenarios that peak in 2030, for all sectors that we studied. Hence the date at which emissions peak is more important than the rate of subsequent emission reduction in determining the avoided impacts. Avoided impacts increase with time, being negligible in the 2030s, significant by the 2050s and large by the 2080s. Finally, the choice of GCM strongly influences the magnitude of the avoided impacts, so the uncertainties in our estimates of avoided impacts for each scenario are larger than the differences between the scenarios.
Our economic analysis is based on models which differ greatly in the assumptions they make, but they generally show that the date at which emissions peak is a stronger driver of induced GDP changes, and, with some exceptions, carbon taxes, than the rate at which emissions are subsequently reduced. In models which assume perfect rationality and foresight and/or assume the economy to be in equilibrium with full employment, mitigation could cause GDP to decrease. In models which do not make these assumptions, mitigation could cause GDP to increase. In either case the effects are small (a few percent of GDP lost or gained in 2100) and insignificant compared with the 600-1200% increase in global GDP forecast between 2000 and 2100 in the SRES A1B reference scenario used in this study. Estimates of the required carbon taxes differ widely between models.

  16. KB3D Reference Manual. Version 1.a

    NASA Technical Reports Server (NTRS)

    Munoz, Cesar; Siminiceanu, Radu; Carreno, Victor A.; Dowek, Gilles

    2005-01-01

    This paper is a reference manual describing the implementation of the KB3D conflict detection and resolution algorithm. The algorithm has been implemented in the Java and C++ programming languages. The reference manual gives a short overview of the detection and resolution functions, the structural implementation of the program, inputs and outputs to the program, and describes how the program is used. Inputs to the program can be rectangular coordinates or geodesic coordinates. The reference manual also gives examples of conflict scenarios and the resolution outputs the program produces.

  17. An Exploration of Kernel Equating Using SAT® Data: Equating to a Similar Population and to a Distant Population. Research Report. ETS RR-07-17

    ERIC Educational Resources Information Center

    Liu, Jinghua; Low, Albert C.

    2007-01-01

    This study applied kernel equating (KE) in two scenarios: equating to a very similar population and equating to a very different population, referred to as a distant population, using SAT® data. The KE results were compared to the results obtained from analogous classical equating methods in both scenarios. The results indicate that KE results are…

  18. Commercial Mobile Alert Service (CMAS) Scenarios

    DTIC Science & Technology

    2012-05-01

    Commercial Mobile Alert Service (CMAS) Scenarios. The WEA Project Team, May 2012. Special Report CMU/SEI-2012-SR-020, CERT® Division, Software Engineering Institute, Carnegie Mellon University. Work sponsored by the Department of Homeland Security under Contract No. FA8721-05-C-0003 with Carnegie Mellon University for the operation of the Software Engineering Institute, a federally funded research and development center.

  19. Assessing cost-effectiveness of bioretention on stormwater in response to climate change and urbanization for future scenarios

    NASA Astrophysics Data System (ADS)

    Wang, Mo; Zhang, Dongqing; Adhityan, Appan; Ng, Wun Jern; Dong, Jianwen; Tan, Soon Keat

    2016-12-01

    Bioretention, as a popular low impact development practice, has become increasingly important for mitigating the adverse impacts of urban stormwater. However, there is very limited information on ensuring the effectiveness of bioretention in response to uncertain future challenges, especially when taking into consideration climate change and urbanization. The main objective of this paper is to identify the cost-effectiveness of bioretention by assessing its hydrologic performance under future scenario modeling. First, a hydrology model was used to obtain peak runoff and TSS loads of bioretention at variable scales under different scenarios, i.e., different Representative Concentration Pathways (RCPs) and Shared Socio-economic Pathways (SSPs), for 2-year and 10-year design storms in Singapore. Then, life cycle costing (LCC) and life cycle assessment (LCA) were estimated for bioretention, and the cost-effectiveness was identified under different scenarios. Our findings showed that there were different degrees of response to 2-year and 10-year design storms, but the general patterns and insights deduced were similar. The performance of bioretention was more sensitive to urbanization than to climate change in the urban catchment. In addition, the methodology used in this study is generic, and the findings could serve as a reference for other LID practices in response to climate change and urbanization.

  20. Wealth distribution across communities of adaptive financial agents

    NASA Astrophysics Data System (ADS)

    DeLellis, Pietro; Garofalo, Franco; Lo Iudice, Francesco; Napoletano, Elena

    2015-08-01

    This paper studies the trading volumes and wealth distribution of a novel agent-based model of an artificial financial market. In this model, heterogeneous agents, behaving according to the Von Neumann and Morgenstern utility theory, may mutually interact. A Tobin-like tax (TT) on successful investments and a flat tax are compared to assess the effects on the agents’ wealth distribution. We carry out extensive numerical simulations in two alternative scenarios: (i) a reference scenario, where the agents keep their utility function fixed, and (ii) a focal scenario, where the agents are adaptive and self-organize in communities, emulating their neighbours by updating their own utility function. Specifically, the interactions among the agents are modelled through a directed scale-free network to account for the presence of community leaders, and the herding-like effect is tested against the reference scenario. We observe that our model is capable of replicating the benefits and drawbacks of the two taxation systems and that the interactions among the agents strongly affect the wealth distribution across the communities. Remarkably, the communities benefit from the presence of leaders with successful trading strategies, and are more likely to increase their average wealth. Moreover, this emulation mechanism mitigates the decrease in trading volumes, which is a typical drawback of TTs.

  1. Assessing multimedia/multipathway exposures to inorganic arsenic at population and individual level using MERLIN-Expo.

    PubMed

    Van Holderbeke, Mirja; Fierens, Tine; Standaert, Arnout; Cornelis, Christa; Brochot, Céline; Ciffroy, Philippe; Johansson, Erik; Bierkens, Johan

    2016-10-15

    In this study, we report on model simulations performed using the newly developed exposure tool, MERLIN-Expo, in order to assess inorganic arsenic (iAs) exposure in adults resulting from past emissions by non-ferrous smelters in Belgium (Northern Campine area). Exposure scenarios were constructed to estimate external iAs exposure as well as the toxicologically relevant As (tAs, i.e., iAs, MMA and DMA) body burden in adults living in the vicinity of the former industrial sites as compared to adults living in adjacent areas and a reference area. Two scenarios are discussed: a first scenario studying exposure to iAs at the aggregated population level and a second scenario studying exposure at the individual level for a random sub-sample of subjects in each of the three different study areas. These two scenarios only differ in the type of human related input data (i.e., time-activity data, ingestion rates and consumption patterns) that were used, namely averages (incl. probability density functions, PDFs) in the simulation at population level and subject-specific values in the simulation at individual level. The model predictions are shown to be lower than the corresponding biomonitoring data from the monitoring campaign. Urinary tAs levels in adults, irrespective of the area they lived in, were under-predicted by MERLIN-Expo by 40% on average. The model predictions for individual adults, by contrast, under-predict the biomonitoring data by 7% on average, but with larger under-predictions for subjects at the upper end of exposure. Still, average predicted urinary tAs levels from the simulations at population level and at individual level overlap, and, at least for the current case, lead to similar conclusions. These results constitute a first and partial verification of the model performance of MERLIN-Expo when dealing with iAs in a complex site-specific exposure scenario, and demonstrate the robustness of the modelling tool for these situations.

  2. Quantifying the impact of cross coverage on physician's workload and performance in radiation oncology.

    PubMed

    Mosaly, Prithima R; Mazur, Lukasz M; Jones, Ellen L; Hoyle, Lesley; Zagar, Timothy; Chera, Bhishamjit S; Marks, Lawrence B

    2013-01-01

    To quantitatively assess the difference in workload and performance of radiation oncology physicians during radiation therapy treatment planning tasks under the conditions of "cross coverage" versus planning a patient with whom they were familiar. Eight physicians (3 experienced faculty physicians and 5 physician residents) performed 2 cases. The first case represented a "cross-coverage" scenario where the physicians had no prior information about the case to be planned. The second case represented a "regular-coverage" scenario where the physicians were familiar with the patient case to be planned. Each case involved 3 tasks to be completed systematically. Workload was assessed both subjectively (perceived) using the National Aeronautics and Space Administration-Task Load Index (NASA-TLX), and objectively (physiologically) throughout the task using eye data (via monitoring pupil size and blink rate). Performance on each task and the case was measured using completion time. Subjective willingness to approve or disapprove the generated plan was obtained after completion of the case only. Forty-eight perceived and 48 physiological workload assessments were obtained. Overall, results revealed a significant increase in perceived workload (high NASA-TLX score) and decrease in performance (longer completion time and reduced approval rate) during cross coverage. There were nonsignificant increases in pupil diameter and decreases in blink rate during the cross-coverage versus regular-coverage scenario. In both cross-coverage and regular-coverage scenarios the level of experience did not affect workload and performance. The cross-coverage scenario significantly increases perceived workload and degrades performance versus regular coverage.
Hence, to improve patient safety, efforts must be made to develop policies, standard operating procedures, and usability improvements to electronic medical record and treatment planning systems for "easier" information processing to deal with cross coverage, while recognizing strengths and limitations of human performance. Published by Elsevier Inc.
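    NASA-TLX, the subjective instrument used in this study, aggregates six subscale ratings (0-100) into an overall workload score; in the weighted variant, each subscale's weight is the number of times it is chosen across 15 pairwise comparisons. A minimal sketch (the ratings and weights below are illustrative, not the study's data):

```python
SUBSCALES = ("mental", "physical", "temporal", "performance", "effort", "frustration")

def nasa_tlx(ratings, weights):
    """Weighted NASA-TLX: sum of rating*weight over subscales, divided by 15."""
    assert sum(weights.values()) == 15, "pairwise-comparison weights must sum to 15"
    return sum(ratings[s] * weights[s] for s in SUBSCALES) / 15

# Illustrative inputs (hypothetical rater)
ratings = {"mental": 80, "physical": 20, "temporal": 70,
           "performance": 40, "effort": 75, "frustration": 60}
weights = {"mental": 5, "physical": 0, "temporal": 3,
           "performance": 3, "effort": 2, "frustration": 2}
score = nasa_tlx(ratings, weights)
```

    A higher score corresponds to the "high NASA-TLX score" the abstract reports for the cross-coverage condition.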

  3. Juggling with Indianness in the Gestation of Translation with Special Reference to the English Translation of a Hindi Story

    ERIC Educational Resources Information Center

    Priya, K.

    2014-01-01

    This paper is an attempt to look closely at the process of translating dramas with special reference to the Hindi story Aadmi ka Baccha ("The Child of a Man") by Yashpal in India and the role and significance of prose transcreations in today's changing global scenario.

  4. Forecasts of health care utilization related to pandemic A(H1N1)2009 influenza in the Nord-Pas-de-Calais region, France.

    PubMed

    Giovannelli, J; Loury, P; Lainé, M; Spaccaferri, G; Hubert, B; Chaud, P

    2015-05-01

    To describe and evaluate forecasts of the load that pandemic A(H1N1)2009 influenza would place on the general practitioner (GP) and hospital care systems, especially during its peak, in the Nord-Pas-de-Calais (NPDC) region, France. Modelling study. The epidemic curve was modelled using an assumption of normally distributed cases. The values of the forecast parameters were estimated from a literature review of observed data from the Southern hemisphere and French Overseas Territories, where the pandemic had already occurred. Two scenarios were considered, one realistic, the other pessimistic, enabling the authors to evaluate the 'reasonable worst case'. Forecasts were then assessed by comparing them with observed data in the NPDC region, which has 4 million inhabitants. The realistic scenario's forecasts estimated 300,000 cases, 1500 hospitalizations and 225 intensive care unit (ICU) admissions for the pandemic wave; 115 hospital beds and 45 ICU beds would be required per day during the peak. The pessimistic scenario's forecasts were 2-3 times higher than the realistic scenario's. Observed data were: 235,000 cases, 1585 hospitalizations, 58 ICU admissions, and a maximum of 11.6 ICU beds per day. The realistic scenario correctly estimated the temporal distribution of GP and hospitalized cases but overestimated the number of cases admitted to ICU. Obtaining more robust data for parameter estimation, particularly the ICU admission rate in the population, may provide better forecasts. Copyright © 2015 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
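    The core modelling step above, an epidemic curve assumed to follow a normal distribution, amounts to spreading a total attack size over time with Gaussian weights. A minimal sketch; the peak week and spread below are illustrative assumptions, not the study's fitted parameters:

```python
import math

def weekly_cases(total_cases, peak_week, sd_weeks, n_weeks):
    """Distribute a total case count over weeks using Gaussian weights,
    i.e. an epidemic curve assumed to be normally distributed in time."""
    weights = [math.exp(-0.5 * ((w - peak_week) / sd_weeks) ** 2)
               for w in range(n_weeks)]
    norm = sum(weights)
    return [total_cases * w / norm for w in weights]

# Illustrative: 300,000 total cases (the realistic scenario's wave size),
# peak in week 6 of a 13-week wave, hypothetical spread of 2 weeks
curve = weekly_cases(300_000, peak_week=6, sd_weeks=2, n_weeks=13)
# Hospital load per week then follows by applying an assumed
# hospitalization rate, e.g. 1500 / 300000 = 0.5% of cases.
```

    The forecast peak-load figures (beds per day) follow from the weeks nearest the curve's maximum.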

  5. Development of an Assessment for Entrustable Professional Activity (EPA) 10: Emergent Patient Management.

    PubMed

    Thompson, Laura R; Leung, Cynthia G; Green, Brad; Lipps, Jonathan; Schaffernocker, Troy; Ledford, Cynthia; Davis, John; Way, David P; Kman, Nicholas E

    2017-01-01

    Medical schools in the United States are encouraged to prepare and certify the entrustment of medical students to perform 13 core entrustable professional activities (EPAs) prior to graduation. Entrustment is defined as the informed belief that the learner is qualified to autonomously perform specific patient-care activities. Core EPA-10 is the entrustment of a graduate to care for the emergent patient. The purpose of this project was to design a realistic performance assessment method for evaluating fourth-year medical students on EPA-10. First, we wrote five emergent patient case-scenarios that a medical trainee would likely confront in an acute care setting. Furthermore, we developed high-fidelity simulations to realistically portray these patient case scenarios. Finally, we designed a performance assessment instrument to evaluate the medical student's performance on executing critical actions related to EPA-10 competencies. Critical actions included the following: triage skills, mustering the medical team, identifying causes of patient decompensation, and initiating care. Up to four students were involved with each case scenario; however, only the team leader was evaluated using the assessment instruments developed for each case. A total of 114 students participated in the EPA-10 assessment during their final year of medical school. Most students demonstrated competence in recognizing unstable vital signs (97%), engaging the team (93%), and making appropriate dispositions (92%). Almost 87% of the students were rated as having reached entrustment to manage the care of an emergent patient (99 of 114). Inter-rater reliability varied by case scenario, ranging from moderate to near-perfect agreement. Three of five case-scenario assessment instruments contained items that were internally consistent at measuring student performance. Additionally, the individual item scores for these case scenarios were highly correlated with the global entrustment decision. 
High-fidelity simulation showed good potential for effective assessment of medical student entrustment of caring for the emergent patient. Preliminary evidence from this pilot project suggests content validity of most cases and associated checklist items. The assessments also demonstrated moderately strong faculty inter-rater reliability.

  6. Preliminary report of a Web-based instrument to assess and teach knowledge and clinical thinking to medical student

    PubMed Central

    Tokunaga, Hironobu; Ando, Hirotaka; Obika, Mikako; Miyoshi, Tomoko; Tokuda, Yasuharu; Bautista, Miho; Kataoka, Hitomi; Terasawa, Hidekazu

    2014-01-01

    Objectives We report the preliminary development of a unique Web-based instrument for assessing and teaching knowledge and developing clinical thinking called the “Sequential Questions and Answers” (SQA) test. Included in this feasibility report are physicians’ answers to the Sequential Questions and Answers pre- and posttests and their brief questionnaire replies. Methods The authors refined the SQA test case scenario for content, ease of modifications of case scenarios, test uploading and answer retrieval. Eleven geographically distant physicians evaluated the SQA test, taking the pretest and posttest within two weeks. These physicians completed a brief questionnaire about the SQA test. Results Eleven physicians completed the SQA pre- and posttest; all answers were downloaded for analysis. They reported the ease of website login and navigating within the test module together with many helpful suggestions. Their average posttest score gain was 53% (p=0.012). Conclusions We report the successful launch of a unique Web-based instrument referred to as the Sequential Questions and Answers test. This distinctive test combines teaching organization of the clinical narrative into an assessment tool that promotes acquiring medical knowledge and clinical thinking. We successfully demonstrated the feasibility of geographically distant physicians to access the SQA instrument. The physicians’ helpful suggestions will be added to future SQA test versions. Medical schools might explore the integration of this multi-language-capable SQA assessment and teaching instrument into their undergraduate medical curriculum. PMID:25341203

  7. Simulation of extreme reservoir level distribution with the SCHADEX method (EXTRAFLO project)

    NASA Astrophysics Data System (ADS)

    Paquet, Emmanuel; Penot, David; Garavaglia, Federico

    2013-04-01

    The standard practice for the design of dam spillway structures and gates is to consider the maximum reservoir level reached for a given hydrologic scenario. This scenario has several components: peak discharge, flood volumes on different durations, discharge gradients etc. Within a probabilistic analysis framework, several scenarios can be associated with different return times, although a reference return level (e.g. 1000 years) is often prescribed by the local regulation rules or usual practice. Using a continuous simulation method for extreme flood estimation is a convenient solution to provide a great variety of hydrological scenarios to feed a hydraulic model of dam operation: flood hydrographs are explicitly simulated by a rainfall-runoff model fed by a stochastic rainfall generator. The maximum reservoir level reached will be conditioned by the scale and the dynamics of the generated hydrograph, by the filling of the reservoir prior to the flood, and by the dam gates and spillway operation during the event. The simulation of a great number of floods will allow building a probabilistic distribution of maximum reservoir levels. A design value can be chosen at a definite return level. An alternative approach is proposed here, based on the SCHADEX method for extreme flood estimation, proposed by Paquet et al. (2006, 2013). SCHADEX is a so-called "semi-continuous" stochastic simulation method in that flood events are simulated on an event basis and are superimposed on a continuous simulation of the catchment saturation hazard using rainfall-runoff modelling. The SCHADEX process works at the study time-step (e.g. daily), and the peak flow distribution is deduced from the simulated daily flow distribution by a peak-to-volume ratio. A reference hydrograph relevant for extreme floods is proposed. In the standard version of the method, both the peak-to-volume ratio and the reference hydrograph are constant.
An enhancement of this method is presented, with variable peak-to-volume ratios and hydrographs applied to each simulated event. This allows accounting for different flood dynamics, depending on the season, the generating precipitation event, the soil saturation state, etc. In both cases, a hydraulic simulation of dam operation is performed, in order to compute the distribution of maximum reservoir levels. Results are detailed for an extreme return level, showing that a 1000-year return level reservoir level can be reached during flood events whose components (peaks, volumes) are not necessarily associated with such a return level. The presentation will be illustrated by the example of a fictitious dam on the Tech River at Reynes (South of France, 477 km²). This study has been carried out within the EXTRAFLO project, Task 8 (https://extraflo.cemagref.fr/). References: Paquet, E., Gailhard, J. and Garçon, R. (2006), Evolution of the GRADEX method: improvement by atmospheric circulation classification and hydrological modeling, La Houille Blanche, 5, 80-90. doi:10.1051/lhb:2006091. Paquet, E., Garavaglia, F., Garçon, R. and Gailhard, J. (2012), The SCHADEX method: a semi-continuous rainfall-runoff simulation for extreme flood estimation, Journal of Hydrology, under revision
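    The peak-to-volume step described above (a constant ratio in standard SCHADEX, event-specific ratios in the enhanced variant) can be sketched as follows; the ratios and daily flows are illustrative assumptions, not values from the method's calibration:

```python
import random

def peaks_from_daily(daily_flows, ratio=1.3):
    """Standard SCHADEX step: instantaneous peak = daily mean flow times a
    constant peak-to-volume ratio (the 1.3 here is an illustrative assumption)."""
    return [q * ratio for q in daily_flows]

def peaks_variable(daily_flows, ratio_sampler):
    """Enhanced variant: an event-specific ratio is drawn per simulated event,
    reflecting season, generating precipitation, soil saturation state, etc."""
    return [q * ratio_sampler() for q in daily_flows]

# Illustrative use: hypothetical simulated daily mean flows in m³/s
daily = [120.0, 450.0, 890.0]
peaks_const = peaks_from_daily(daily)
peaks_var = peaks_variable(daily, lambda: random.uniform(1.1, 1.6))
```

    Repeating this over a large ensemble of simulated events yields the peak flow distribution from which maximum reservoir levels are derived.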

  8. Simulation of Reclaimed-Water Injection and Pumping Scenarios and Particle-Tracking Analysis near Mount Pleasant, South Carolina

    USGS Publications Warehouse

    Petkewich, Matthew D.; Campbell, Bruce G.

    2009-01-01

    The effect of injecting reclaimed water into the Middendorf aquifer beneath Mount Pleasant, South Carolina, was simulated using a groundwater-flow model of the Coastal Plain Physiographic Province of South Carolina and parts of Georgia and North Carolina. Reclaimed water, also known as recycled water, is wastewater or stormwater that has been treated to an appropriate level so that the water can be reused. The scenarios were simulated to evaluate potential changes in groundwater flow and groundwater-level conditions caused by injecting reclaimed water into the Middendorf aquifer. Simulations included a Base Case and two injection scenarios. Maximum pumping rates were simulated as 6.65, 8.50, and 10.5 million gallons per day for the Base Case, Scenario 1, and Scenario 2, respectively. The Base Case simulation represents a non-injection estimate of the year 2050 groundwater levels for comparison purposes for the two injection scenarios. For Scenarios 1 and 2, the simulated injection of reclaimed water at 3 million gallons per day begins in 2012 and continues through 2050. The flow paths and time of travel for the injected reclaimed water were simulated using particle-tracking analysis. The simulations indicated a general decline of groundwater altitudes in the Middendorf aquifer in the Mount Pleasant, South Carolina, area between 2004 and 2050 for the Base Case and two injection scenarios. For the Base Case, groundwater altitudes generally declined about 90 feet from the 2004 groundwater levels. For Scenarios 1 and 2, although groundwater altitudes initially increased in the Mount Pleasant area because of the simulated injection, these higher groundwater levels declined as Mount Pleasant Waterworks pumping increased over time. 
When compared to the Base Case simulation, 2050 groundwater altitudes for Scenario 1 are between 15 feet lower and 23 feet higher for production wells, between 41 and 77 feet higher for the injection wells, and between 9 and 23 feet higher for observation wells in the Mount Pleasant area. When compared to the Base Case simulation, 2050 groundwater altitudes for Scenario 2 are between 2 and 106 feet lower for production wells and observation wells and between 11 and 27 feet higher for the injection wells in the Mount Pleasant area. Water budgets for the model area immediately surrounding the Mount Pleasant area were calculated for 2011 and for 2050. The largest flow component for the 2050 water budget in the Mount Pleasant area is discharge through wells at rates between 7.1 and 10.9 million gallons of water per day. This groundwater is replaced predominantly by between 6.0 and 7.8 million gallons per day of lateral groundwater flow within the Middendorf aquifer for the Base Case and two scenarios and through reclaimed-water injection of 3 million gallons per day for Scenarios 1 and 2. In addition, between 175,000 and 319,000 gallons of groundwater are removed from this area per day because of the regional hydraulic gradient. Additional sources of water to this area are groundwater storage releases at rates between 86,800 and 116,000 gallons per day and vertical flow from over- and underlying confining units at rates between 69,100 and 150,000 gallons per day. Reclaimed water injected into the Middendorf aquifer at three hypothetical injection wells moved to the Mount Pleasant Waterworks production wells in 18 to 256 years, as indicated by particle-tracking simulations. Time of travel varied from 18 to 179 years for simulated conditions of 20 percent uniform aquifer porosity and between 25 and 256 years for 30 percent uniform aquifer porosity.
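    The porosity sensitivity in these travel times follows directly from the advection relation: seepage velocity is the Darcy flux divided by effective porosity, so travel time grows linearly with porosity. A minimal sketch, with a hypothetical path length and Darcy flux (not values from the report):

```python
# Advective travel time along a uniform flow path:
# seepage velocity v = q / n (Darcy flux over effective porosity),
# so travel time t = L / v = L * n / q grows linearly with porosity.
# The path length and flux below are illustrative, not from the simulations.

def travel_time_years(path_length_ft, darcy_flux_ft_per_yr, porosity):
    """Time for an injected particle to traverse the path, in years."""
    seepage_velocity = darcy_flux_ft_per_yr / porosity
    return path_length_ft / seepage_velocity

t20 = travel_time_years(5000.0, 50.0, 0.20)   # 20 years
t30 = travel_time_years(5000.0, 50.0, 0.30)   # 30 years
```

    For a fixed path, moving from 20 to 30 percent porosity lengthens the travel time by exactly the porosity ratio, 1.5, broadly consistent with the two simulated ranges.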

  9. 78 FR 49831 - Endangered and Threatened Wildlife and Plants; Proposed Designation of Critical Habitat for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-15

    ... Service (NPS) for the Florida leafwing and the pine rockland ecosystem, in general. Sea Level Rise... habitat. In the best case scenario, which assumes low sea level rise, high financial resources, proactive... human population. In the worst case scenario, which assumes high sea level rise, low financial resources...

  10. Reducing Probabilistic Weather Forecasts to the Worst-Case Scenario: Anchoring Effects

    ERIC Educational Resources Information Center

    Joslyn, Susan; Savelli, Sonia; Nadav-Greenberg, Limor

    2011-01-01

    Many weather forecast providers believe that forecast uncertainty in the form of the worst-case scenario would be useful for general public end users. We tested this suggestion in 4 studies using realistic weather-related decision tasks involving high winds and low temperatures. College undergraduates, given the statistical equivalent of the…

  11. Reenacted Case Scenarios for Undergraduate Healthcare Students to Illustrate Person-Centered Care in Dementia

    ERIC Educational Resources Information Center

    Bradley, Sandra L.; De Bellis, Anita; Guerin, Pauline; Walters, Bonnie; Wotherspoon, Alison; Cecchin, Maggie; Paterson, Jan

    2010-01-01

    Healthcare practitioners have suggested that interpreting person-centered care for people who have dementia to undergraduate students requires guidance in practical application. This article describes the production of a written and digital interdisciplinary educational resource for tertiary students. It uses real-life case scenarios provided by…

  12. Air Pollutant Emissions Projections for the Cement and Steel Industry in China and the Impact of Emissions Control Technologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hasanbeigi, Ali; Khanna, Nina; Price, Lynn

China’s cement and steel industries account for approximately half of the world’s total cement and steel production. These two industries are among the most energy-intensive and highest carbon dioxide (CO2)-emitting industries and are two of the key industrial contributors to air pollution in China. For example, the cement industry is the largest source of particulate matter (PM) emissions in China, accounting for 40 percent of its industrial PM emissions and 27 percent of its total national PM emissions. The Chinese steel industry contributed approximately 20 percent of sulfur dioxide (SO2) emissions and 27 percent of PM emissions for all key manufacturing industries in China in 2013. In this study, we analyzed and projected the total PM and SO2 emissions from the Chinese cement and steel industries from 2010 to 2050 under three different scenarios: a Base Case scenario, an Advanced scenario, and an Advanced EOP (end-of-pipe) scenario. We used bottom-up emissions control technologies data and assumptions to project the emissions. In addition, we conducted an economic analysis to estimate the cost of PM emissions reductions in the Chinese cement industry using EOP control technologies, energy efficiency measures, and product change measures. The results of the emissions projection showed that there is not a substantial difference in PM emissions between the Base Case and Advanced scenarios, for either the cement or the steel industry. This is mainly because PM emissions in the cement industry are caused primarily by the production process rather than by fuel use. Since our cement production forecasts for the Base Case and Advanced scenarios differ only slightly, the PM emissions forecasts for these two scenarios also differ only slightly. In addition, we assumed a similar share and penetration rate of control technologies from 2010 to 2050 for these two scenarios for the cement and steel industries.
However, the Advanced EOP scenario showed significantly lower PM emissions for the cement industry, reaching 1.7 million tons of PM in 2050, which is less than half of that in the other two scenarios. The Advanced EOP scenario also has the lowest SO2 emissions for the cement industry in China, reaching 212,000 tons of SO2 in 2050, which is equal to 40 percent of the SO2 emissions in the Advanced scenario and 30 percent of the emissions in the Base Case scenario. SO2 emissions are caused mainly by fuel (coal) combustion in cement kilns and steel processes. For the steel industry, the SO2 emissions of the Advanced EOP scenario are significantly lower than those of the other scenarios, with emissions declining to 323,000 tons in 2050, which is equal to 21 percent and 17 percent of the emissions of the Advanced and Base Case scenarios in 2050, respectively. Results of the economic analysis show that for the Chinese cement industry, end-of-pipe PM control technologies have the lowest abatement cost per ton of PM reduced, followed by product change measures and energy efficiency measures, respectively. In summary, in order to meet Chinese national and regional air quality standards, best-practice end-of-pipe emissions control technologies must be installed in both the cement and steel industries, supplemented by the implementation of energy efficiency technologies and by reductions in cement and steel production through structural change in industry.

  13. Quantification and mapping of urban fluxes under climate change: Application of WRF-SUEWS model to Greater Porto area (Portugal).

    PubMed

    Rafael, S; Martins, H; Marta-Almeida, M; Sá, E; Coelho, S; Rocha, A; Borrego, C; Lopes, M

    2017-05-01

Climate change and the growth of urban populations are two of the main challenges facing Europe today. These issues are linked, as climate change results in serious challenges for cities. Recent attention has focused on how urban surface-atmosphere exchanges of heat and water will be affected by climate change and on the implications for urban planning and sustainability. In this study, energy fluxes for the Greater Porto area, Portugal, were estimated and the influence of projected climate change evaluated. To accomplish this, the Weather Research and Forecasting model (WRF) and the Surface Urban Energy and Water Balance Scheme (SUEWS) were applied for two climatological scenarios: a present (or reference, 1986-2005) scenario and a future scenario (2046-2065), in this case the Representative Concentration Pathway RCP8.5, which reflects the worst set of expectations (with the most onerous impacts). The results show that under future climate conditions the incoming shortwave radiation will increase by around 10%, the sensible heat flux by around 40% and the net storage heat flux by around 35%. In contrast, the latent heat flux will decrease by about 20%. The changes in the magnitude of the different fluxes result in an increase of the net all-wave radiation by 15%. The implications of the changes in the energy balance for meteorological variables are discussed, particularly in terms of temperature and precipitation. Copyright © 2017 Elsevier Inc. All rights reserved.
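    These component changes are tied together by the urban surface energy balance, Q* + QF = QH + QE + ΔQS, which schemes like SUEWS solve. A sketch with assumed baseline flux magnitudes, chosen only to illustrate how a 40% rise in QH, a 35% rise in ΔQS and a 20% fall in QE can combine into roughly a 15% rise in Q* (they are not the paper's values):

```python
# Urban surface energy balance (anthropogenic heat QF neglected here):
#   Q* = QH + QE + dQS
# Baseline fluxes in W/m^2 are assumptions for illustration only; the
# percentage changes are the ones reported for the RCP8.5 scenario.
baseline = {"QH": 100.0, "QE": 117.0, "dQS": 80.0}
change = {"QH": +0.40, "QE": -0.20, "dQS": +0.35}

future = {k: v * (1 + change[k]) for k, v in baseline.items()}

qstar_base = sum(baseline.values())       # 297 W/m^2
qstar_future = sum(future.values())       # ~341.6 W/m^2
increase = qstar_future / qstar_base - 1  # ~0.15, i.e. a ~15% rise in Q*
```

    Whether the component changes are mutually consistent with a given rise in Q* depends entirely on the relative flux magnitudes, which is why the latent-heat share matters.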

  14. Place illusion and plausibility can lead to realistic behaviour in immersive virtual environments

    PubMed Central

    Slater, Mel

    2009-01-01

    In this paper, I address the question as to why participants tend to respond realistically to situations and events portrayed within an immersive virtual reality system. The idea is put forward, based on the experience of a large number of experimental studies, that there are two orthogonal components that contribute to this realistic response. The first is ‘being there’, often called ‘presence’, the qualia of having a sensation of being in a real place. We call this place illusion (PI). Second, plausibility illusion (Psi) refers to the illusion that the scenario being depicted is actually occurring. In the case of both PI and Psi the participant knows for sure that they are not ‘there’ and that the events are not occurring. PI is constrained by the sensorimotor contingencies afforded by the virtual reality system. Psi is determined by the extent to which the system can produce events that directly relate to the participant, the overall credibility of the scenario being depicted in comparison with expectations. We argue that when both PI and Psi occur, participants will respond realistically to the virtual reality. PMID:19884149

  15. Physical factors that influence patients' privacy perception toward a psychiatric behavioral monitoring system: a qualitative study.

    PubMed

    Zakaria, Nasriah; Ramli, Rusyaizila

    2018-01-01

    Psychiatric patients have privacy concerns when it comes to technology intervention in the hospital setting. In this paper, we present scenarios for psychiatric behavioral monitoring systems to be placed in psychiatric wards to understand patients' perception regarding privacy. Psychiatric behavioral monitoring refers to systems that are deemed useful in measuring clinical outcomes, but little research has been done on how these systems will impact patients' privacy. We conducted a case study in one teaching hospital in Malaysia. We investigated the physical factors that influence patients' perceived privacy with respect to a psychiatric monitoring system. The eight physical factors identified from the information system development privacy model, a comprehensive model for designing a privacy-sensitive information system, were adapted in this research. Scenario-based interviews were conducted with 25 patients in a psychiatric ward for 3 months. Psychiatric patients were able to share how physical factors influence their perception of privacy. Results show how patients responded to each of these dimensions in the context of a psychiatric behavioral monitoring system. Some subfactors under physical privacy are modified to reflect the data obtained in the interviews. We were able to capture the different physical factors that influence patient privacy.

  16. Exposure caused by wireless technologies used for short-range indoor communication in homes and offices.

    PubMed

    Schmid, G; Lager, D; Preiner, P; Uberbacher, R; Cecil, S

    2007-01-01

In order to estimate typical radio frequency exposures from wireless communication technologies used indoors in homes and offices, WLAN, Bluetooth and Digital Enhanced Cordless Telecommunications systems, as well as baby surveillance devices and wireless headphones for indoor use, were investigated by measurements and numerical computations. Based on optimised measurement methods, field distributions and the resulting exposure were assessed for selected products and real exposure scenarios. Additionally, generic scenarios were investigated on the basis of numerical computations. The obtained results demonstrate that under usual conditions the resulting spatially (over body dimensions) averaged and 6-min time-averaged exposure for persons in the radio frequency fields of the considered applications is below approximately 0.1% of the reference level for power density according to the International Commission on Non-Ionizing Radiation Protection (ICNIRP) guidelines published in 1998. Spatial and temporal peak values can be considerably higher, by 2-3 orders of magnitude. In the case of some transmitting devices operated in close proximity to the body (e.g. WLAN transmitters), local exposure can reach the same order of magnitude as the basic restriction; however, none of the devices considered in this study exceeded the limits according to the ICNIRP guidelines.
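    The exposure metric used here, a 6-min time-averaged power density expressed as a fraction of the ICNIRP (1998) reference level, can be sketched as follows. The one-second samples are invented; 10 W/m² is the ICNIRP general-public reference level for power density in the relevant frequency range:

```python
# 6-minute time-averaged power density as a fraction of the ICNIRP (1998)
# general-public reference level (10 W/m^2 for the frequencies considered).
# The one-second samples below are illustrative, not measured data.

ICNIRP_REF_W_M2 = 10.0

def fraction_of_reference(samples_mw_m2):
    """Mean of power-density samples (mW/m^2) relative to the reference."""
    mean_w_m2 = (sum(samples_mw_m2) / len(samples_mw_m2)) / 1000.0
    return mean_w_m2 / ICNIRP_REF_W_M2

# A WLAN-like duty cycle: mostly idle, with short transmit bursts.
samples = [0.5] * 350 + [40.0] * 10     # 360 samples over 6 minutes
frac = fraction_of_reference(samples)
peak_frac = (max(samples) / 1000.0) / ICNIRP_REF_W_M2
```

    With this duty cycle the time average stays below 0.1% of the reference level even though the peak sample is about 25 times the mean, illustrating why peak and time-averaged exposures can differ by orders of magnitude.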

  17. Bias due to differential participation in case-control studies and review of available approaches for adjustment.

    PubMed

    Aigner, Annette; Grittner, Ulrike; Becher, Heiko

    2018-01-01

    Low response rates in epidemiologic research potentially lead to the recruitment of a non-representative sample of controls in case-control studies. Problems in the unbiased estimation of odds ratios arise when characteristics causing the probability of participation are associated with exposure and outcome. This is a specific setting of selection bias and a realistic hazard in many case-control studies. This paper formally describes the problem and shows its potential extent, reviews existing approaches for bias adjustment applicable under certain conditions, compares and applies them. We focus on two scenarios: a characteristic C causing differential participation of controls is linked to the outcome through its association with risk factor E (scenario I), and C is additionally a genuine risk factor itself (scenario II). We further assume external data sources are available which provide an unbiased estimate of C in the underlying population. Given these scenarios, we (i) review available approaches and their performance in the setting of bias due to differential participation; (ii) describe two existing approaches to correct for the bias in both scenarios in more detail; (iii) present the magnitude of the resulting bias by simulation if the selection of a non-representative sample is ignored; and (iv) demonstrate the approaches' application via data from a case-control study on stroke. The bias of the effect measure for variable E in scenario I and C in scenario II can be large and should therefore be adjusted for in any analysis. It is positively associated with the difference in response rates between groups of the characteristic causing differential participation, and inversely associated with the total response rate in the controls. Adjustment in a standard logistic regression framework is possible in both scenarios if the population distribution of the characteristic causing differential participation is known or can be approximated well.
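    For scenario I, the adjustment described amounts to weighting the recruited controls back to the known population distribution of C. A minimal numeric sketch, with all probabilities invented for illustration:

```python
# Scenario I: characteristic C is associated with exposure E and causes
# differential participation of controls. Weighting each recruited control
# stratum back to the known population share of C (from an external source)
# restores the control exposure prevalence. All numbers are illustrative.

p_c = {0: 0.6, 1: 0.4}            # population distribution of C (external data)
p_e = {0: 0.2, 1: 0.6}            # P(E=1 | C) in the source population
response = {0: 0.8, 1: 0.4}       # differential control response rates by C

# Composition of the recruited (biased) control sample:
n = {c: p_c[c] * response[c] for c in (0, 1)}
total = sum(n.values())
p_e_biased = sum(n[c] * p_e[c] for c in (0, 1)) / total          # 0.30

# Weights = population share / sample share of each C stratum:
w = {c: p_c[c] / (n[c] / total) for c in (0, 1)}
p_e_adjusted = (sum(n[c] * w[c] * p_e[c] for c in (0, 1))
                / sum(n[c] * w[c] for c in (0, 1)))              # 0.36
```

    The biased sample under-represents C=1 (and hence exposure), pulling the control exposure prevalence from 0.36 down to 0.30; the weights undo exactly that distortion, and the same weights can enter a weighted logistic regression as the paper's standard-framework adjustment suggests.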

  18. Prospective randomized comparison of standard didactic lecture versus high-fidelity simulation for radiology resident contrast reaction management training.

    PubMed

    Wang, Carolyn L; Schopp, Jennifer G; Petscavage, Jonelle M; Paladin, Angelisa M; Richardson, Michael L; Bush, William H

    2011-06-01

    The objective of our study was to assess whether high-fidelity simulation-based training is more effective than traditional didactic lecture to train radiology residents in the management of contrast reactions. This was a prospective study of 44 radiology residents randomized into a simulation group versus a lecture group. All residents attended a contrast reaction didactic lecture. Four months later, baseline knowledge was assessed with a written test, which we refer to as the "pretest." After the pretest, the 21 residents in the lecture group attended a repeat didactic lecture and the 23 residents in the simulation group underwent high-fidelity simulation-based training with five contrast reaction scenarios. Next, all residents took a second written test, which we refer to as the "posttest." Two months after the posttest, both groups took a third written test, which we refer to as the "delayed posttest," and underwent performance testing with a high-fidelity severe contrast reaction scenario graded on predefined critical actions. There was no statistically significant difference between the simulation and lecture group pretest, immediate posttest, or delayed posttest scores. The simulation group performed better than the lecture group on the severe contrast reaction simulation scenario (p = 0.001). The simulation group reported improved comfort in identifying and managing contrast reactions and administering medications after the simulation training (p ≤ 0.04) and was more comfortable than the control group (p = 0.03), which reported no change in comfort level after the repeat didactic lecture. When compared with didactic lecture, high-fidelity simulation-based training of contrast reaction management shows equal results on written test scores but improved performance during a high-fidelity severe contrast reaction simulation scenario.

  19. Legumes steam allergy in childhood: Update of the reported cases.

    PubMed

    Vitaliti, G; Pavone, P; Spataro, G; Giunta, L; Guglielmo, F; Falsaperla, R

    2015-01-01

    In the past few decades, the prevalence of allergic diseases has deeply increased, with a key role played by food allergies. Legumes seem to play a major role towards the overall increase in the scenario of food allergy, since they are an appreciated source, consumed worldwide, due to their high protein content, variable amounts of lipids and for the presence of vitamins. In literature there are numerous descriptions of adverse reactions after ingestion of uncooked and cooked legumes. Nevertheless, cases of allergic reactions induced by inhaling vapours from cooking legumes have rarely been described. Herein the authors report an update of the literature data on allergic reactions caused by legume steam inhalation, underlying the possible pathogenic mechanism of these atopic events and the knowledge of literature data in paediatric age. The importance of this review is the focus on the clinical aspects concerning legume vapour allergy, referring to literature data in childhood. Copyright © 2013 SEICAP. Published by Elsevier Espana. All rights reserved.

  20. Integrating remediation and resource recovery: On the economic conditions of landfill mining

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frändegård, Per, E-mail: per.frandegard@liu.se; Krook, Joakim; Svensson, Niclas

Highlights: • We compare two remediation scenarios: one with resource recovery and one without. • Economic analysis includes relevant direct costs and revenues for the landfill owner. • High degrees of metal and/or combustible contents are important economic factors. • Landfill tax and access to a CHP can have a large impact on the result. • Combining landfill mining and remediation may decrease the project cost. - Abstract: This article analyzes the economic potential of integrating material separation and resource recovery into a landfill remediation project, and discusses the result and the largest impact factors. The analysis is done using a direct costs/revenues approach and the stochastic uncertainties are handled using Monte Carlo simulation. Two remediation scenarios are applied to a hypothetical landfill. One scenario includes only remediation, while the second scenario adds resource recovery to the remediation project. Moreover, the second scenario is divided into two cases, case A and B. In case A, the landfill tax needs to be paid for re-deposited material and the landfill holder does not own a combined heat and power plant (CHP), which leads to disposal costs in the form of gate fees. In case B, the landfill tax is waived on the re-deposited material and the landfill holder owns its own CHP. Results show that the remediation-only scenario costs about €23/ton (a net result of −€23/ton). Adding resource recovery as in case A worsens the net result to −€36/ton, while for case B it improves to −€14/ton. This shows the importance of landfill tax and access to a CHP. Other important factors for the result are the material composition in the landfill, the efficiency of the separation technology used, and the price of the saleable material.
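    The stochastic cost comparison can be sketched as a small Monte Carlo loop; all distributions, tax levels and material shares below are invented stand-ins for the article's inputs:

```python
# Monte Carlo sketch of the direct costs/revenues comparison:
# remediation only vs. remediation plus resource recovery, with and without
# landfill tax and gate fees. All parameter ranges are illustrative.

import random
random.seed(42)

def net_result_per_ton(with_recovery, landfill_tax, gate_fee):
    """Net result in EUR per ton excavated (negative = cost)."""
    excavation = random.uniform(18.0, 28.0)
    if not with_recovery:
        return -excavation
    separation = random.uniform(5.0, 10.0)
    metal_share = random.uniform(0.02, 0.04)
    revenue = metal_share * random.uniform(150.0, 350.0)  # metal sales
    redeposit_tax = 0.5 * landfill_tax      # tax on the re-deposited half
    disposal = 0.2 * gate_fee               # gate fees for combustibles
    return revenue - excavation - separation - redeposit_tax - disposal

def mean_result(runs=20000, **kwargs):
    return sum(net_result_per_ton(**kwargs) for _ in range(runs)) / runs

base   = mean_result(with_recovery=False, landfill_tax=0.0, gate_fee=0.0)
case_a = mean_result(with_recovery=True, landfill_tax=30.0, gate_fee=20.0)
case_b = mean_result(with_recovery=True, landfill_tax=0.0, gate_fee=0.0)
# case_b beats case_a: waiving the tax and owning a CHP removes two cost terms
```

    The qualitative pattern matches the article's finding: resource recovery can improve or worsen the remediation-only result depending on whether the tax and gate-fee terms apply.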

  1. Additive Partial Least Squares for efficient modelling of independent variance sources demonstrated on practical case studies.

    PubMed

    Luoma, Pekka; Natschläger, Thomas; Malli, Birgit; Pawliczek, Marcin; Brandstetter, Markus

    2018-05-12

A model recalibration method based on additive Partial Least Squares (PLS) regression is generalized for multi-adjustment scenarios of independent variance sources (referred to as additive PLS, or aPLS). aPLS allows for effortless model readjustment under changing measurement conditions and the combination of independent variance sources with the initial model by means of additive modelling. We demonstrate these distinguishing features on two NIR spectroscopic case studies. In case study 1, aPLS was used as a readjustment method for an emerging offset. The achieved RMS error of prediction (1.91 a.u.) was of a similar level as before the offset occurred (2.11 a.u.). In case study 2, a calibration combining different variance sources was conducted. The achieved performance was of sufficient level, with an absolute error better than 0.8% of the mean concentration, thereby compensating for the negative effects of two independent variance sources. The presented results show the applicability of the aPLS approach. The main advantages of the method are that the original model stays unadjusted and that the modelling is conducted on concrete changes in the spectra, thus supporting efficient (in most cases straightforward) modelling. Additionally, the method is put into the context of existing machine learning algorithms. Copyright © 2018 Elsevier B.V. All rights reserved.
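    The core idea, keeping the original model frozen while an additive term absorbs the new variance source, can be sketched in one dimension; real aPLS works on multivariate spectra and PLS components, and the data here are invented:

```python
# Additive recalibration sketch: the original calibration model is frozen and
# an additive term absorbs a new, independent variance source (here a constant
# signal offset). Real aPLS operates on multivariate spectra and PLS
# components; this 1-D version and its data are illustrative only.

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Original calibration: concentration y predicted from signal x.
x0 = [1.0, 2.0, 3.0, 4.0]
y0 = [2.1, 3.9, 6.2, 7.8]
b, a = fit_line(x0, y0)

def predict(x):                      # the frozen original model
    return a + b * x

# A new measurement condition shifts every signal by +0.5; only an additive
# correction is fitted on the residuals, leaving (a, b) untouched.
x1 = [x + 0.5 for x in x0]
residuals = [y - predict(x) for x, y in zip(x1, y0)]
correction = sum(residuals) / len(residuals)

def adjusted(x):                     # original model + additive term
    return predict(x) + correction
```

    The correction exactly cancels the offset's effect on the predictions while the original coefficients stay unadjusted, which is the property the abstract highlights.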

  2. Multiple imputation strategies for zero-inflated cost data in economic evaluations: which method works best?

    PubMed

    MacNeil Vroomen, Janet; Eekhout, Iris; Dijkgraaf, Marcel G; van Hout, Hein; de Rooij, Sophia E; Heymans, Martijn W; Bosmans, Judith E

    2016-11-01

Cost and effect data often have missing values because economic evaluations are frequently added onto clinical studies where cost data are rarely the primary outcome. The objective of this article was to investigate which multiple imputation strategy is most appropriate to use for missing cost-effectiveness data in a randomized controlled trial. Three incomplete data sets were generated from a complete reference data set with 17%, 35% and 50% missing data in effects and costs. The strategies evaluated included complete case analysis (CCA), multiple imputation with predictive mean matching (MI-PMM), MI-PMM on log-transformed costs (log MI-PMM), and a two-step MI. Mean cost and effect estimates, standard errors and incremental net benefits were compared with the results of the analyses on the complete reference data set. The CCA, MI-PMM, and two-step MI strategies diverged from the results for the reference data set as the amount of missing data increased. In contrast, the estimates of the log MI-PMM strategy remained stable irrespective of the amount of missing data. MI provided better estimates than CCA in all scenarios. With low amounts of missing data the MI strategies appeared equivalent, but we recommend using log MI-PMM when missing data exceed 35%.
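    The recommended strategy, predictive mean matching on log-transformed costs, can be sketched as follows; a real MI procedure repeats this over several imputed data sets and pools the results, and the data here are invented:

```python
# Sketch of predictive mean matching on log-transformed costs (log MI-PMM):
# regress log-costs on a covariate, then replace each missing cost with the
# observed cost of the "donor" whose predicted mean is closest. A real MI
# procedure repeats this over several imputed data sets; data are illustrative.

import math

effects = [0.2, 0.5, 0.8, 1.1, 1.4, 1.7, 2.0, 2.3]
costs = [120.0, 300.0, 250.0, 900.0, None, 1500.0, None, 4000.0]  # None = missing

observed = [(e, c) for e, c in zip(effects, costs) if c is not None]

# Least-squares fit of log(cost) on effect using the observed pairs only.
xs = [e for e, _ in observed]
ys = [math.log(c) for _, c in observed]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
     / sum((x - mx) ** 2 for x in xs))
a = my - b * mx

def predicted_mean(e):
    return a + b * e

imputed = []
for e, c in zip(effects, costs):
    if c is None:
        # Donor: the observed record with the closest predicted mean.
        donor_e, donor_c = min(observed,
                               key=lambda oc: abs(predicted_mean(oc[0])
                                                  - predicted_mean(e)))
        c = donor_c     # impute an actually observed (skew-preserving) cost
    imputed.append(c)
```

    Because donors are actual observations, the imputed costs stay on the skewed, zero-inflated scale of the data, which is what makes the log-PMM variant robust as missingness grows.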

  3. All Information Is Not Equal: Using the Literature Databases PubMed and The Cochrane Library for Identifying the Evidence on Granulocyte Transfusion Therapy

    PubMed Central

    Metzendorf, Maria-Inti; Schulz, Manuela; Braun, Volker

    2014-01-01

Summary To be able to make well-informed decisions or carry out sound research, clinicians and researchers alike require specific information-seeking skills matching their respective information needs. Biomedical information is traditionally available via different literature databases. This article gives an introduction to two divergent sources, PubMed (23 million references) and The Cochrane Library (800,000 references), both of which offer sophisticated instruments for searching an increasing amount of medical publications of varied quality and ambition. Whereas PubMed, as an unfiltered source of primary literature, comprises all the different kinds of publication types occurring in academic journals, The Cochrane Library is a pre-filtered source which offers access either to synthesized publication types or to critically appraised and carefully selected references. A search has to be carried out deliberately and requires good knowledge of the scope and features of the databases as well as the ability to build a search strategy in a structured way. We present a specific and a sensitive search approach, making use of both databases within two application case scenarios in order to identify the evidence on granulocyte transfusions for infections in adult patients with neutropenia. PMID:25538539

  4. A Standard-Based and Context-Aware Architecture for Personal Healthcare Smart Gateways.

    PubMed

    Santos, Danilo F S; Gorgônio, Kyller C; Perkusich, Angelo; Almeida, Hyggo O

    2016-10-01

The rising availability of Personal Health Devices (PHDs) capable of Personal Area Network (PAN) communication and the desire to maintain a high quality of life are the ingredients of the Connected Health vision. In parallel, a growing number of personal and portable devices, like smartphones and tablet computers, are becoming capable of taking the role of a health gateway, that is, a data collector for the sensor PHDs. However, as the number of PHDs increases, the number of other peripherals connected in the PAN also increases. Therefore, PHDs are now competing for medium access with other devices, decreasing the Quality of Service (QoS) of health applications in the PAN. In this article we present a reference architecture to prioritize PHD connections based on their state and requirements, creating a healthcare Smart Gateway. Healthcare context information is extracted by observing the traffic through the gateway. A standard-based approach was used to identify health traffic based on the ISO/IEEE 11073 family of standards. A reference implementation was developed showing the relevance of the problem and how the proposed architecture can assist in the prioritization. The reference Smart Gateway solution was integrated with a Connected Health System for the Internet of Things, validating its use in a real-case scenario.
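    The traffic-identification step can be sketched as inspecting PAN frames for ISO/IEEE 11073-20601 exchanges and raising the scheduling priority of matching connections. The two-byte 0xE200 check below (the APDU choice of an association request) is a simplification of real APDU parsing, and the priority values and class are invented:

```python
# Toy Smart Gateway prioritization: connections whose first frame looks like an
# ISO/IEEE 11073-20601 association request (APDU choice 0xE200) are scheduled
# ahead of other PAN peripherals. Real gateways would parse full APDUs; this
# two-byte check and the priority values are illustrative simplifications.

from dataclasses import dataclass

@dataclass
class PanConnection:
    name: str
    first_frame: bytes
    priority: int = 0

def is_health_traffic(frame: bytes) -> bool:
    return frame[:2] == b"\xe2\x00"

def prioritize(connections):
    for conn in connections:
        conn.priority = 10 if is_health_traffic(conn.first_frame) else 1
    return sorted(connections, key=lambda c: c.priority, reverse=True)

conns = prioritize([
    PanConnection("bt-headset", b"\x01\x02\x03"),
    PanConnection("glucose-meter", b"\xe2\x00\x00\x32"),
])
# conns[0] is now the glucose-meter connection
```

    Observing traffic rather than requiring device self-declaration is what lets the gateway extract healthcare context transparently, as the abstract describes.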

  5. Cyber Signal/Noise Characteristics and Sensor Models for Early Cyber Indications and Warning

    DTIC Science & Technology

    2005-09-01

investigating and simulating attack scenarios. The sensors are, in effect, mathematical functions. These functions range from simple functions of... 8.1.2 Examine each attack scenario or case to derive the cause-effect network for the attack scenario... threat profiles... 8.1.4 Develop attack profiles by enlarging the cause-effect network of each attack scenario with

  6. Impact of one's own mobile phone in stand-by mode on personal radiofrequency electromagnetic field exposure.

    PubMed

    Urbinello, Damiano; Röösli, Martin

    2013-01-01

When moving around, mobile phones in stand-by mode periodically send data about their positions. The aim of this paper is to evaluate how personal radiofrequency electromagnetic field (RF-EMF) measurements are affected by such location updates. Exposure from a mobile phone handset (uplink) was measured during commuting by using a randomized cross-over study with three different scenarios: a disabled mobile phone (reference), an activated dual-band phone and a quad-band phone. In the reference scenario, uplink exposure was highest during train rides (1.19 mW/m(2)) and lowest during car rides in rural areas (0.001 mW/m(2)). In public transports, the impact of one's own mobile phone on personal RF-EMF measurements was not observable because of high background uplink radiation from other people's mobile phones. In a car, uplink exposure with an activated phone was orders of magnitude higher compared with the reference scenario. This study demonstrates that personal RF-EMF exposure is affected by one's own mobile phone in stand-by mode because of its regular location updates. Further dosimetric studies should quantify the contribution of location updates to the total RF-EMF exposure in order to clarify whether the duration of mobile phone use, the most common exposure surrogate in epidemiological RF-EMF research, is actually an adequate exposure proxy.

  7. Physical Model for the Evolution of the Genetic Code

    NASA Astrophysics Data System (ADS)

    Yamashita, Tatsuro; Narikiyo, Osamu

    2011-12-01

Using the shape space of codons and tRNAs, we give a physical description of the evolution of the genetic code on the basis of the codon capture and ambiguous intermediate scenarios in a consistent manner. In the lowest-dimensional version of our description, a physical quantity, the codon level, is introduced. In terms of codon levels, the two scenarios are classified into two different routes of the evolutionary process. In the case of the ambiguous intermediate scenario, we perform an evolutionary simulation implementing cost selection of amino acids and confirm a rapid transition of the code change. Such rapidity mitigates the problem of non-unique translation of the code at the intermediate state, which is the weakness of this scenario. In the case of the codon capture scenario, survival against mutations under mutational pressure minimizing GC content in genomes is simulated, and it is demonstrated that cells which experience only neutral mutations survive.

  8. Abortion: Strong's counterexamples fail.

    PubMed

    Di Nucci, E

    2009-05-01

    This paper shows that the counterexamples proposed by Strong in 2008 in the Journal of Medical Ethics to Marquis's argument against abortion fail. Strong's basic idea is that there are cases--for example, terminally ill patients--where killing an adult human being is prima facie seriously morally wrong even though that human being is not being deprived of a "valuable future". So Marquis would be wrong in thinking that what is essential about the wrongness of killing an adult human being is that they are being deprived of a valuable future. This paper shows that whichever way the concept of "valuable future" is interpreted, the proposed counterexamples fail: if it is interpreted as "future like ours", the proposed counterexamples have no bearing on Marquis's argument. If the concept is interpreted as referring to the patient's preferences, it must be either conceded that the patients in Strong's scenarios have some valuable future or admitted that killing them is not seriously morally wrong. Finally, if "valuable future" is interpreted as referring to objective standards, one ends up with implausible and unpalatable moral claims.

  9. Towards inverse modeling of turbidity currents: The inverse lock-exchange problem

    NASA Astrophysics Data System (ADS)

    Lesshafft, Lutz; Meiburg, Eckart; Kneller, Ben; Marsden, Alison

    2011-04-01

    A new approach is introduced for turbidite modeling, leveraging the potential of computational fluid dynamics methods to simulate the flow processes that led to turbidite formation. The practical use of numerical flow simulation for turbidite modeling has so far been hindered by the need to specify parameters and initial flow conditions that are a priori unknown. The present study proposes a method to determine optimal simulation parameters via an automated optimization process. An iterative procedure matches deposit predictions from successive flow simulations against available localized reference data, such as may in practice be obtained from well logs, and aims at convergence towards the best-fit scenario. The final result is a prediction of the entire deposit thickness and local grain size distribution. The optimization strategy is based on a derivative-free, surrogate-based technique. Direct numerical simulations are performed to compute the flow dynamics. A proof of concept is successfully conducted for the simple test case of a two-dimensional lock-exchange turbidity current. The optimization approach is demonstrated to accurately retrieve the initial conditions used in a reference calculation.

  10. Risk assessment for infants exposed to furan from ready-to-eat thermally processed food products in Poland.

    PubMed

    Minorczyk, Maria; Góralczyk, Katarzyna; Struciński, Paweł; Hernik, Agnieszka; Czaja, Katarzyna; Łyczewska, Monika; Korcz, Wojciech; Starski, Andrzej; Ludwicki, Jan K

    2012-01-01

    Thermal processing and long storage of food lead to reactions between reducing sugars and amino acids, or with ascorbic acid, carbohydrates or polyunsaturated fatty acids. These reactions create new compounds, one of which, furan, has an adverse effect on human health. The aim of this paper was to estimate infants' exposure to furan found in thermally processed jarred food products, and to characterize the risk by comparing the exposure to the reference dose (RfD) and calculating margins of exposure. The material consisted of 301 samples of thermally processed food for infants taken from the Polish market in the years 2008-2010. The samples included vegetable-meat, vegetable and fruit jarred meals for infants and young children, in which furan levels were analyzed by the GC/MS technique. Exposure to furan was assessed for 3-, 4-, 6-, 9- and 12-month-old infants using different consumption scenarios. The levels of furan ranged from <1 microg/kg (LOQ) to 166.9 microg/kg, and the average concentration across all samples was 40.2 microg/kg. The estimated exposures, calculated with different nutrition scenarios, ranged from 0.03 to 3.56 microg/kg bw/day and in some cases exceeded the RfD, set at 1 microg/kg bw/day. Margins of exposure (MOE) fell below 300 for scenarios assuming higher consumption of vegetable and vegetable-meat products. The magnitude of exposure to furan present in ready-to-eat meals among Polish infants is similar to data reported previously from other European countries but slightly higher than indicated in the recent EFSA report. As the estimated intake exceeds the RfD in some cases, and MOE values are much lower than 10 000, indicating a potential health concern, it is necessary to continue monitoring furan in jarred food and to estimate its intake by infants.

  11. An Integrated Cyberenvironment for Event-Driven Environmental Observatory Research and Education

    NASA Astrophysics Data System (ADS)

    Myers, J.; Minsker, B.; Butler, R.

    2006-12-01

    National environmental observatories will soon provide large-scale data from diverse sensor networks and community models. While much attention is focused on piping data from sensors to archives and users, truly integrating these resources into the everyday research activities of scientists and engineers across the community, and enabling their results and innovations to be brought back into the observatory, is often neglected, even though it is also critical to the long-term success of the observatories. This talk gives an overview of the Environmental Cyberinfrastructure Demonstrator (ECID) Cyberenvironment for observatory-centric environmental research and education, under development at the National Center for Supercomputing Applications (NCSA), which is designed to address these issues. Cyberenvironments incorporate collaboratory and grid technologies, web services, and other cyberinfrastructure into an overall framework that balances the need for efficient coordination with the ability to innovate. They are designed to support the full scientific lifecycle, both in terms of individual experiments moving from data to workflows to publication, and at the macro level, where new discoveries lead to additional data, models, tools, and conceptual frameworks that augment and evolve community-scale systems such as observatories. The ECID cyberenvironment currently integrates five major components: a collaborative portal, a workflow engine, an event manager, a metadata repository, and social network personalization capabilities. These components have novel features inspired by the Cyberenvironment concept and enable powerful environmental research scenarios. A summary of these components and the overall cyberenvironment will be given in this talk, while other posters will give details on several of the components.
The summary will be presented within the context of environmental use case scenarios created in collaboration with researchers from the WATERS (WATer and Environmental Research Systems) Network, a joint National Science Foundation-funded initiative of the hydrology and environmental engineering communities. The use case scenarios include identifying sensor anomalies in point- and streaming sensor data and notifying data managers in near-real time; and referring users of data or data products (e.g., workflows, publications) to related data or data products.

  12. Experimental Optimization of Exposure Index and Quality of Service in WLAN Networks.

    PubMed

    Plets, David; Vermeeren, Günter; Poorter, Eli De; Moerman, Ingrid; Goudos, Sotirios K; Martens, Luc; Joseph, Wout

    2017-07-01

    This paper presents the first real-life optimization of the Exposure Index (EI). A genetic optimization algorithm is developed and applied to three real-life Wireless Local Area Network scenarios in an experimental testbed. The optimization accounts for downlink, uplink and uplink of other users, for realistic duty cycles, and ensures a sufficient Quality of Service to all users. EI reductions up to 97.5% compared to a reference configuration can be achieved in a downlink-only scenario, in combination with an improved Quality of Service. Due to the dominance of uplink exposure and the lack of WiFi power control, no optimizations are possible in scenarios that also consider uplink traffic. However, future deployments that do implement WiFi power control can be successfully optimized, with EI reductions up to 86% compared to a reference configuration and an EI that is 278 times lower than optimized configurations under the absence of power control.
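The penalty-constrained genetic search the abstract describes can be illustrated with a minimal sketch. The Exposure Index and Quality-of-Service models below are invented placeholders, not the paper's testbed models, and the power bounds, QoS floor and GA settings are arbitrary assumptions.

```python
import random

random.seed(0)

N_APS = 3
P_MIN, P_MAX = 0.0, 1.0           # normalized transmit power per access point

def exposure_index(powers):       # toy EI: grows with total radiated power
    return sum(powers)

def qos(powers):                  # toy QoS: the strongest AP must be loud enough
    return max(powers)

def fitness(powers):              # lower is better; penalize QoS violations
    penalty = 0.0 if qos(powers) >= 0.4 else 100.0
    return exposure_index(powers) + penalty

def mutate(powers, sigma=0.1):    # Gaussian mutation, clamped to valid powers
    return [min(P_MAX, max(P_MIN, p + random.gauss(0, sigma))) for p in powers]

def crossover(a, b):              # uniform crossover, gene by gene
    return [random.choice(pair) for pair in zip(a, b)]

# Elitist GA: keep the 10 best parents unchanged, fill up with children.
pop = [[random.uniform(P_MIN, P_MAX) for _ in range(N_APS)] for _ in range(30)]
for _ in range(200):
    pop.sort(key=fitness)
    parents = pop[:10]
    pop = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                     for _ in range(20)]

best = min(pop, key=fitness)
print("best powers:", best, "EI:", exposure_index(best))
```

Under these assumptions the search drives one AP toward the QoS floor and the others toward zero power, which is the qualitative trade-off (lower EI at sufficient QoS) the abstract reports.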

  13. Evaluation of ecosystem service based on scenario simulation of land use in Yunnan Province

    NASA Astrophysics Data System (ADS)

    Zhang, Hong; Liao, Xiaoli; Zhai, Tianlin

    2018-04-01

    Climate change and rapid urbanization are important factors constraining future land use. Scenario analysis, as an important foundation for the optimization of land use, needs to account for the impact of climatic and socio-economic factors. In this paper, the Markov model and the DLS (Simulation of Land System Dynamics) model are combined for the first time, and the land use pattern in 2020 is simulated based on land use data from 2000 and 2010 as well as the climate, soil, topography and socio-economic factors of Yunnan Province. Taking Yunnan Province as the case study area, we selected 12 driving factors by logistic regression; the land use demand and layout of Yunnan Province in 2020 were then forecasted and simulated under a business-as-usual (BAU) scenario and a farmland protection (FP) scenario, and the changes in ecosystem service value were calculated. The results show that: (1) after regression analysis and the ROC (Relative Operating Characteristic) test, the 12 factors selected in this paper have a strong ability to explain land use change in Yunnan Province. (2) Under both scenarios, a significant reduction in arable land area is a common feature of future land use change in Yunnan Province, with construction land the main destination. Under the FP scenario, however, the current encroachment of construction land on arable land will be curbed. Compared with the change from 2000 to 2010, the trends for arable land, forest land, water area, construction land and unused land will be the same under the two scenarios, whereas the trend for grassland is the opposite. (3) From 2000 to 2020, the value of ecosystem services in Yunnan Province rises, and it is higher under the FP scenario than under the BAU scenario.
In general, land use in 2020 in Yunnan Province continues the pattern of 2010, but there are also significant spatial differences. Under the BAU scenario, new construction land is mainly in the south of Lijiang City and the northeastern part of Kunming. Under the FP scenario, the new construction land is concentrated near the Lashi dam in northern Yunnan Province, and the high-quality arable land in the valley will be better protected. The research results can provide a reference for the optimization of the land use pattern in Yunnan Province, and a scientific basis for land use management and planning. Based on the value of ecosystem services, the policy of strict protection of arable land should be implemented, both to ensure food supply and to promote the healthy development of the ecological environment.
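The Markov step at the core of such a projection can be sketched as follows. The land-use classes, transition probabilities and 2010 area shares are illustrative assumptions, not the study's calibrated values.

```python
# Hypothetical transition matrix: T[i][j] is the probability that a cell
# in class i in 2010 is in class j by 2020 (each row sums to 1).
classes = ["arable", "forest", "construction"]
T = [
    [0.90, 0.02, 0.08],   # arable: mostly persists, some is built over
    [0.03, 0.95, 0.02],   # forest
    [0.00, 0.00, 1.00],   # construction: assumed irreversible here
]
shares_2010 = [0.30, 0.55, 0.15]   # illustrative area shares in 2010

# One Markov step projects the 2010 shares to 2020 ("business as usual"):
shares_2020 = [
    sum(shares_2010[i] * T[i][j] for i in range(len(classes)))
    for j in range(len(classes))
]

print(dict(zip(classes, shares_2020)))  # arable falls, construction grows
```

A farmland-protection scenario would be expressed as a second matrix with a smaller arable-to-construction probability, exactly the kind of scenario contrast the abstract evaluates.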

  14. An analysis of the costs of treating schizophrenia in Spain: a hierarchical Bayesian approach.

    PubMed

    Vázquez-Polo, Francisco-Jose; Negrín, Miguel; Cabasés, Juan M; Sánchez, Eduardo; Haro, Joseph M; Salvador-Carulla, Luis

    2005-09-01

    Health care decisions should incorporate cost of illness and treatment data, particularly for disorders such as schizophrenia with a high morbidity rate and a disproportionately low allocation of resources. Previous cost of illness analyses may have disregarded geographical aspects relevant for resource consumption and unit cost calculation. To compare the utilisation of resources and the care costs of schizophrenic patients in four mental-health districts in Spain (in Madrid, Catalonia, Navarra and Andalusia), and to analyse factors that determine the costs and the differences between areas. A treated prevalence bottom-up three year follow-up design was used for obtaining data concerning socio-demography, clinical evolution and the utilisation of services. 1997 reference prices were updated for years 1998-2000 in euros. We propose two different scenarios, varying in the prices applied. In the first (Scenario 0) the reference prices are those obtained for a single geographic area, and so the cost variations are only due to differences in the use of resources. In the second situation (Scenario 1), we analyse the variations in resource utilisation at different levels, using the prices applicable to each healthcare area. Bayesian hierarchical models are used to discuss the factors that determine such costs and the differences between geographic areas. In scenario 0, the estimated mean cost was 4918.948 euros for the first year. In scenario 1 the highest cost was in Gava (Catalonia) and the lowest in Loja (Andalusia). Mean costs were respectively 4547.24 and 2473.98 euros. With respect to the evolution of costs over time, we observed an increase during the second year and a reduction during the third year. Geographical differences appeared in follow-up costs. The variables related to lower treatment costs were: residence in the family household, higher patient age and being in work. 
In contrast, the number of relapses is directly related to higher treatment costs. No differences were observed between health areas concerning resource utilisation. Calculating the costs of a given disease involves two principal factors: resource utilisation and prices. In most studies, emphasis is placed on the analysis of resource utilisation. Other evaluations, however, have recognized the implications of incorporating different prices into the final results. In this study we show both scenarios. The factors that determine the cost of schizophrenia in the Spanish case are similar to those encountered in studies carried out in other countries. Treatment costs may be reduced by the prevention of psychotic symptoms and relapse. The use of the same price data in multicentre studies may not be realistic. More effort should be made to obtain price data from all the centres or countries participating in a study. In the present study, only direct healthcare and social costs have been included. Future research should consider informal and indirect costs.

  15. Selection and Training of Field Artillery Forward Observers: Methodologies for Improving Target Acquisition Skills

    DTIC Science & Technology

    1979-07-01

    African scenario.) The training analysis revealed some discrepancies between the list of tasks taught in FAOBC and the list of tasks emerging from the... population density. (Refer to Figure 3-2). The African combat scenario, closely followed by the Middle Eastern scenario, was rated as being the most

  16. The KULTURisk Regional Risk Assessment methodology for water-related natural hazards - Part 2: Application to the Zurich case study

    NASA Astrophysics Data System (ADS)

    Ronco, P.; Bullo, M.; Torresan, S.; Critto, A.; Olschewski, R.; Zappa, M.; Marcomini, A.

    2014-07-01

    The main objective of this paper is the application of the KULTURisk Regional Risk Assessment (KR-RRA) methodology, presented in the companion paper (Part 1, Ronco et al., 2014), to the Sihl River valley in Switzerland. Through a tuning process of the methodology to the site-specific context and features, flood-related risks have been assessed for different receptors in the Sihl River valley, including the city of Zurich, which represents a typical case of river flooding in an urban area. After characterizing the peculiarities of the specific case study, risk maps have been developed under a 300-year return period scenario (selected as the baseline) for six relevant targets exposed to flood risk in the Sihl valley, namely: people, economic activities (including buildings, infrastructure and agriculture), natural and semi-natural systems, and cultural heritage. Finally, the total risk index map, which allows areas and hotspots at risk to be identified and ranked by means of Multi-Criteria Decision Analysis tools, has been produced to visualize the spatial pattern of flood risk within the study area. By means of a tailored participative approach, the total risk maps supplement the considerations of technical experts with the (essential) point of view of the relevant stakeholders in appraising the specific scores and weights related to the receptor-specific risks. The total risk maps obtained for the Sihl River case study fall into the lower risk classes. In general, higher relative risks are concentrated in the deeply urbanized area within and around the Zurich city centre and in areas lying directly behind the Sihl River course.
Here, forecasted injuries and potential fatalities are mainly due to high population density and the high share of elderly (vulnerable) people; inundated buildings are mainly classified as continuous and discontinuous urban fabric; and flooded roads, pathways and railways, most of them around Zurich's main train station (Hauptbahnhof), are at high risk of inundation, causing huge indirect damages. The analysis of flood risk to agriculture, natural and semi-natural systems and cultural heritage has pointed out that these receptors would be relatively less affected by the selected flood scenario, mainly because of their scattered presence. Finally, the application of the KR-RRA methodology to the Sihl River case study, as well as to several other sites across Europe (not presented here), has demonstrated its flexibility and possible adaptation to different geographical and socio-economic contexts, depending on data availability and the peculiarities of the sites, as well as to other hazard scenarios.

  17. Assessing the potential impact of artemisinin and partner drug resistance in sub-Saharan Africa.

    PubMed

    Slater, Hannah C; Griffin, Jamie T; Ghani, Azra C; Okell, Lucy C

    2016-01-06

    Artemisinin and partner drug resistant malaria parasites have emerged in Southeast Asia. If resistance were to emerge in Africa, it could have a devastating impact on malaria-related morbidity and mortality. This study estimates the potential impact of artemisinin and partner drug resistance on disease burden in Africa if it were to emerge. Using data from Asia and Africa, five possible artemisinin and partner drug resistance scenarios are characterized. An individual-based malaria transmission model is used to estimate the impact of each resistance scenario on clinical incidence and parasite prevalence across Africa. Artemisinin resistance is characterized by slow parasite clearance, and partner drug resistance is associated with late clinical failure or late parasitological failure. Scenarios with high levels of recrudescent infections resulted in far greater increases in clinical incidence than scenarios with high levels of slow parasite clearance. Across Africa, it is estimated that artemisinin and partner drug resistance at levels similar to those observed in Oddar Meanchey province in Cambodia could result in an additional 78 million cases over a 5-year period, a 7% increase compared to a scenario with no resistance. A scenario with high levels of slow clearance but no recrudescence resulted in an additional 10 million cases over the same period. Artemisinin resistance is potentially a more pressing concern than partner drug resistance due to the lack of viable alternatives. However, it is predicted that a failing partner drug will result in greater increases in malaria cases and morbidity than would be observed from artemisinin resistance alone.

  18. Scenarios and decisionmaking for complex environmental systems

    Treesearch

    Stephen R. Carpenter; Adena R. Rissman

    2012-01-01

    Scenarios are used for expanding the scope of imaginable outcomes considered by assessments, planning exercises, or research projects on social-ecological systems. We discuss a global case study, the Millennium Ecosystem Assessment, and a regional project for an urbanizing agricultural watershed. Qualitative and quantitative aspects of scenarios are complementary....

  19. Orientation and metacognition in virtual space.

    PubMed

    Tenbrink, Thora; Salwiczek, Lucie H

    2016-05-01

    Cognitive scientists increasingly use virtual reality scenarios to address spatial perception, orientation, and navigation. If based on desktops rather than mobile immersive environments, this involves a discrepancy between the physically experienced static position and the visually perceived dynamic scene, leading to cognitive challenges that users of virtual worlds may or may not be aware of. The frequently reported loss of orientation and worse performance in point-to-origin tasks relate to the difficulty of establishing a consistent reference system on an allocentric or egocentric basis. We address the verbalizability of spatial concepts relevant in this regard, along with the conscious strategies reported by participants. Behavioral and verbal data were collected using a perceptually sparse virtual tunnel scenario that has frequently been used to differentiate between humans' preferred reference systems. Surprisingly, the linguistic data we collected relate to reference system verbalizations known from the earlier literature only to a limited extent, but instead reveal complex cognitive mechanisms and strategies. Orientation in desktop virtual reality appears to pose considerable challenges, which participants react to by conceptualizing the task in individual ways that do not systematically relate to the generic concepts of egocentric and allocentric reference frames.

  20. Performance of low-power RFID tags based on modulated backscattering

    NASA Astrophysics Data System (ADS)

    Mhanna, Zeinab; Sibille, Alain; Contreras, Richard

    2017-02-01

    Ultra Wideband (UWB) modulated backscattering (MBS) passive Radio-Frequency IDentification (RFID) systems provide a promising solution to overcome many limitations of current narrowband RFID devices. This work addresses the performance of such systems from the point of view of the radio channel between the readers and the tags. Such systems will likely combine several readers, in order to provide both the detection and localization of tags operating in MBS. Two successive measurement campaigns have been carried out in an indoor reference scenario environment. The first is intended to verify the methods and serves as a way to validate the RFID backscattering measurement setup. The second represents a real use case for RFID applications and allows one to quantitatively analyze the path loss of the backscattering propagation channel.

  1. Analytical model for orbital debris environmental management

    NASA Technical Reports Server (NTRS)

    Talent, David L.

    1990-01-01

    A differential equation, also referred to as the PIB (particle-in-a-box) model, expressing the time rate of change of the number of objects in orbit, is developed, and its applicability is illustrated. The model can be used as a tool for the assessment of LEO environment stability, and as a starting point for the development of numerical evolutionary models. Within the context of the model, evolutionary scenarios are examined, and found to be sensitive to the growth rate. It is determined that the present environment is slightly unstable to catastrophic growth, and that the number of particles on orbit will continue to increase until approximately 2250-2350 AD, with a maximum of 2,000,000. The model is expandable to the more realistic (complex) case of multiple species in a multiple-tier system.
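A particle-in-a-box style rate equation of the kind the abstract describes can be sketched as below. The three terms (launch deposition, drag removal, collision-generated fragments) and every coefficient are illustrative assumptions, not the paper's fitted model.

```python
# Hypothetical PIB-style rate equation for the on-orbit object count N.
def dN_dt(N, deposition=300.0, drag=2e-4, collisions=1e-10):
    # deposition: objects added per year (launches, breakups)
    # drag * N: atmospheric-drag removal, proportional to N
    # collisions * N**2: fragment production from mutual collisions
    return deposition - drag * N + collisions * N ** 2

def project(N0, years, dt=0.5):
    """Forward-Euler projection of the object count over `years`."""
    N = N0
    for _ in range(int(years / dt)):
        N += dN_dt(N) * dt
    return N

# With deposition dominating at these illustrative rates, the population
# keeps growing, the qualitative instability the abstract describes.
print(project(7000.0, 100))
```

The quadratic collision term is what makes such models sensitive to the growth rate: past a critical population it outpaces removal, so the environment becomes unstable to catastrophic growth.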

  2. Importance of baseline specification in evaluating conservation interventions and achieving no net loss of biodiversity.

    PubMed

    Bull, J W; Gordon, A; Law, E A; Suttle, K B; Milner-Gulland, E J

    2014-06-01

    There is an urgent need to improve the evaluation of conservation interventions. This requires specifying an objective and a frame of reference from which to measure performance. Reference frames can be baselines (i.e., known biodiversity at a fixed point in history) or counterfactuals (i.e., a scenario that would have occurred without the intervention). Biodiversity offsets are interventions with the objective of no net loss of biodiversity (NNL). We used biodiversity offsets to analyze the effects of the choice of reference frame on whether interventions met stated objectives. We developed 2 models to investigate the implications of setting different frames of reference in regions subject to various biodiversity trends and anthropogenic impacts. First, a general analytic model evaluated offsets against a range of baseline and counterfactual specifications. Second, a simulation model then replicated these results with a complex real world case study: native grassland offsets in Melbourne, Australia. Both models showed that achieving NNL depended upon the interaction between reference frame and background biodiversity trends. With a baseline, offsets were less likely to achieve NNL where biodiversity was decreasing than where biodiversity was stable or increasing. With a no-development counterfactual, however, NNL was achievable only where biodiversity was declining. Otherwise, preventing development was better for biodiversity. Uncertainty about compliance was a stronger determinant of success than uncertainty in underlying biodiversity trends. When only development and offset locations were considered, offsets sometimes resulted in NNL, but not across an entire region. Choice of reference frame determined feasibility and effort required to attain objectives when designing and evaluating biodiversity offset schemes. We argue the choice is thus of fundamental importance for conservation policy. 
Our results shed light on situations in which biodiversity offsets may be an inappropriate policy instrument.
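The baseline-versus-counterfactual distinction can be made concrete with toy arithmetic; all quantities below are invented for illustration and carry no units from the study.

```python
# Region-wide biodiversity stock at a fixed historical baseline year,
# with a declining background trend independent of the project.
baseline = 100.0          # stock at the fixed reference year
background_trend = -2.0   # stock lost per year regardless of the project
years = 10
development_loss = 15.0   # biodiversity destroyed by the development
offset_gain = 15.0        # biodiversity created by the offset

# Outcome with the project (development + offset) on top of the trend:
with_project = baseline + background_trend * years - development_loss + offset_gain

# (a) Baseline frame: compare against the fixed historical stock.
nnl_vs_baseline = with_project >= baseline          # fails: the trend drags it down

# (b) Counterfactual frame: compare against no-development, same trend.
counterfactual = baseline + background_trend * years
nnl_vs_counterfactual = with_project >= counterfactual  # holds: gains match losses
```

Under a declining trend the same offset scheme fails NNL against the baseline but meets it against the no-development counterfactual, which is the interaction between reference frame and background trend the abstract reports.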

  3. Exact-Differential Large-Scale Traffic Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hanai, Masatoshi; Suzumura, Toyotaro; Theodoropoulos, Georgios

    2015-01-01

    Analyzing large-scale traffic by simulation requires repeated execution with various patterns of scenarios or parameters. Such repeated execution introduces substantial redundancy, because the change from one scenario to the next is usually minor, for example blocking only one road or changing the speed limit of several roads. In this paper, we propose a new redundancy reduction technique, called exact-differential simulation, which makes it possible to simulate only the changed scenarios in later executions while keeping exactly the same results as a whole simulation. The paper consists of two main efforts: (i) the key idea and algorithm of exact-differential simulation, and (ii) a method to build large-scale traffic simulation on top of exact-differential simulation. In experiments on a Tokyo traffic simulation, exact-differential simulation achieves a 7.26-fold improvement in elapsed time on average, and a 2.26-fold improvement even in the worst case, compared with whole simulation.
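The redundancy-reduction idea can be illustrated in miniature: when a new scenario differs from the previous one in only a few inputs, update the previous result by the difference instead of recomputing everything, and check that the answer matches a full re-run. The per-road "simulation" below is just a sum of travel times, a stand-in for the actual traffic model, and the road names and values are invented.

```python
# Full run: evaluate the whole scenario from scratch.
def full_run(travel_times):
    return sum(travel_times.values())

base = {"road_a": 5.0, "road_b": 7.5, "road_c": 3.0}
base_result = full_run(base)

# Scenario change: speed limit altered on a single road.
changed = dict(base, road_b=9.0)

# Exact-differential update: touch only the changed key, reusing the
# prior result for everything else.
diff_result = base_result - base["road_b"] + changed["road_b"]

assert diff_result == full_run(changed)   # exactly the same result
```

A real simulator needs to track which downstream state each change can reach, but the invariant is the same: the differential run must reproduce the whole-simulation result exactly, not approximately.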

  4. Natural hazards impact on the technosphere from the point of view of the stability and chaos theory

    NASA Astrophysics Data System (ADS)

    Kudin, Valery; Petrova, Elena

    2013-04-01

    Technological disasters occur when the technosphere enters the transition interval from its stable state to chaos. An unstable state of the system is one of the possible patterns in the scenario of a dynamic transition to a chaotic state through a cascade of bifurcations. According to stability theory, chaotic dynamics arise from a constant supply of energy into the system from outside. For the man-made technosphere, the role of the external energy source is played by environmental impacts such as natural hazards. A qualitative change in the state of the system depends on the scale and frequency of these natural impacts. Each major natural-technological catastrophe is associated with a long chain of triggers and effects arising from an unfavorable combination of many unlikely accidental circumstances and human factors. Under the classical Gaussian distribution, large deviations are so rare that they can be ignored. However, many accidents and disasters generate statistics with an exponential distribution. In this case rare events cannot be ignored; such cases are often referred to as "heavy-tailed distributions", and we should address them differently than the "usual" accidents that fit normal distributions. In the case of "an exponential disaster" we should expect the worst. This is a sphere in which elements of stability and chaos theory occupy a crucial position. Scientific research related to forecasting now focuses on the description and prediction of rare catastrophic events. It should be noted that the asymptotic behavior of such processes before the disaster is the so-called blow-up regime, in which one or more variables characterizing the system grow to infinity in finite time. Thus, in some cases we can refer to generic scenarios of disasters.
In some model problems, where a quantity evolves in a chaotic regime and occasionally makes giant leaps, we can identify precursors that signal danger.

  5. Evaluation of new alternatives in wastewater treatment plants based on dynamic modelling and life cycle assessment (DM-LCA).

    PubMed

    Bisinella de Faria, A B; Spérandio, M; Ahmadi, A; Tiruta-Barna, L

    2015-11-01

    With a view to quantifying the energy and environmental advantages of Urine Source-Separation (USS) combined with different treatment processes, five wastewater treatment plant (WWTP) scenarios were compared to a reference scenario using Dynamic Modelling (DM) and Life Cycle Assessment (LCA), and an integrated DM-LCA framework was thus developed. Dynamic simulations were carried out in BioWin(®) in order to obtain a realistic evaluation of the dynamic behaviour and performance of plants under perturbation. LCA calculations were performed within Umberto(®) using the Ecoinvent database. A Python™ interface was used to integrate and convert simulation data and to introduce them into Umberto(®) to achieve a complete LCA evaluation comprising foreground and background processes. Comparisons between steady-state and dynamic simulations revealed the importance of considering dynamic aspects such as nutrient and flow peaks. The results of the evaluation highlighted the potential of the USS scenario for nutrient recovery whereas the Enhanced Primary Clarification (EPC) scenario gave increased biogas production and also notably decreased aeration consumption, leading to a positive energy balance. Both USS and EPC scenarios also showed increased stability of plant operation, with smaller daily averages of total nitrogen and phosphorus. In this context, USS and EPC results demonstrated that the coupled USS + EPC scenario and its combinations with agricultural spreading of N-rich effluent and nitritation/anaerobic deammonification could present an energy-positive balance with respectively 27% and 33% lower energy requirements and an increase in biogas production of 23%, compared to the reference scenario. The coupled scenarios also presented lesser environmental impacts (reduction of 31% and 39% in total endpoint impacts) along with effluent quality well within the specified limits. 
The marked environmental performance (reduction of global warming) when nitrogen is used in agriculture shows the importance of future research on sustainable solutions for nitrogen recovery. The contribution analysis of midpoint impacts also showed hotspots that it will be important to optimize further, such as plant infrastructure and direct N2O emissions.

  6. Photobioreactor: Biotechnology for the Technology Education Classroom.

    ERIC Educational Resources Information Center

    Dunham, Trey; Wells, John; White, Karissa

    2002-01-01

    Describes a problem scenario involving photobioreactors and presents materials and resources, student project activities, and teaching and evaluation methods for use in the technology education classroom. (Contains 14 references.) (SK)

  7. Monte Carlo reference data sets for imaging research: Executive summary of the report of AAPM Research Committee Task Group 195.

    PubMed

    Sechopoulos, Ioannis; Ali, Elsayed S M; Badal, Andreu; Badano, Aldo; Boone, John M; Kyprianou, Iacovos S; Mainegra-Hing, Ernesto; McMillan, Kyle L; McNitt-Gray, Michael F; Rogers, D W O; Samei, Ehsan; Turner, Adam C

    2015-10-01

    The use of Monte Carlo simulations in diagnostic medical imaging research is widespread due to its flexibility and ability to estimate quantities that are challenging to measure empirically. However, any new Monte Carlo simulation code needs to be validated before it can be used reliably. The type and degree of validation required depends on the goals of the research project, but, typically, such validation involves either comparison of simulation results to physical measurements or to previously published results obtained with established Monte Carlo codes. The former is complicated due to nuances of experimental conditions and uncertainty, while the latter is challenging due to typical graphical presentation and lack of simulation details in previous publications. In addition, entering the field of Monte Carlo simulations in general involves a steep learning curve. It is not a simple task to learn how to program and interpret a Monte Carlo simulation, even when using one of the publicly available code packages. This Task Group report provides a common reference for benchmarking Monte Carlo simulations across a range of Monte Carlo codes and simulation scenarios. In the report, all simulation conditions are provided for six different Monte Carlo simulation cases that involve common x-ray based imaging research areas. The results obtained for the six cases using four publicly available Monte Carlo software packages are included in tabular form. In addition to a full description of all simulation conditions and results, a discussion and comparison of results among the Monte Carlo packages and the lessons learned during the compilation of these results are included. This abridged version of the report includes only an introductory description of the six cases and a brief example of the results of one of the cases. 
This work provides an investigator the necessary information to benchmark his/her Monte Carlo simulation software against the reference cases included here before performing his/her own novel research. In addition, an investigator entering the field of Monte Carlo simulations can use these descriptions and results as a self-teaching tool to ensure that he/she is able to perform a specific simulation correctly. Finally, educators can assign these cases as learning projects as part of course objectives or training programs.

  8. Using Supercomputers to Speed Execution of the CAISO/PLEXOS 33% RPS Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meyers, C; Streitz, F; Yao, Y

    2011-09-19

    The study's official title is 'ISO Study of Operational Requirements and Market Impacts at 33% Renewable Portfolio Standard (RPS).' The stated objectives are twofold: (1) identify operational requirements and resource options to reliably operate the ISO-controlled grid under a 33% RPS in 2020; and (2) inform market, planning, and policy/regulatory decisions by the ISO, state agencies, market participants, and other stakeholders. The first of these objectives requires hourly estimates of integration requirements, measured in terms of operational ramp, load-following and regulation capacity and ramp rates, as well as additional capacity to resolve operational violations. It also involves consideration of other variables that affect the results, such as the impact of different mixes of renewable technologies and the impact of forecasting error and variability. The second objective entails supporting the CPUC in identifying long-term procurement planning needs, costs, and options, as well as informing other decisions made by the CPUC and state agencies. For the ISO itself this includes informing state-wide transmission planning needs for renewables up to a 33% RPS, and informing the design of wholesale markets for energy and ancillary services to facilitate provision of integration capacities. The study is designed in two phases. The first (current) phase is focused on operational requirements and on addressing these requirements with existing and new conventional fossil generation, for instance gas turbines and/or combined-cycle units. The second (planned) phase will address the same operational requirements with a combination of conventional fossil generation resources, new non-generation resources, and a renewable resource dispatch. 
There are seven different scenarios considered in the current phase: a 20% RPS reference case; four 33% RPS cases (a reference case, a high out-of-state case, a high distributed generation case, and a low load case); an alternative 27.5% RPS case; and an all-gas case (no new renewables after 2008). In addition, the CPUC is planning a new set of cases that will alter the anticipated sets of runs.

  9. Evaluating uncertainty in environmental life-cycle assessment. A case study comparing two insulation options for a Dutch one-family dwelling.

    PubMed

    Huijbregts, Mark A J; Gilijamse, Wim; Ragas, Ad M J; Reijnders, Lucas

    2003-06-01

    The evaluation of uncertainty is relatively new in environmental life-cycle assessment (LCA). It provides useful information to assess the reliability of LCA-based decisions and to guide future research toward reducing uncertainty. Most uncertainty studies in LCA quantify only one type of uncertainty, i.e., uncertainty due to input data (parameter uncertainty). However, LCA outcomes can also be uncertain due to normative choices (scenario uncertainty) and the mathematical models involved (model uncertainty). The present paper outlines a new methodology that quantifies parameter, scenario, and model uncertainty simultaneously in environmental life-cycle assessment. The procedure is illustrated in a case study that compares two insulation options for a Dutch one-family dwelling. Parameter uncertainty was quantified by means of Monte Carlo simulation. Scenario and model uncertainty were quantified by resampling different decision scenarios and model formulations, respectively. Although scenario and model uncertainty were not quantified comprehensively, the results indicate that both types of uncertainty influence the case study outcomes. This stresses the importance of quantifying parameter, scenario, and model uncertainty simultaneously. The two insulation options studied were found to have significantly different impact scores for global warming, stratospheric ozone depletion, and eutrophication. The thickest insulation option has the lowest impact on global warming and eutrophication, and the highest impact on stratospheric ozone depletion.
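
    The parameter-uncertainty step described above can be sketched in a few lines. The production burdens, emission factors, and use-phase savings below are invented for illustration and are not taken from the case study; only the technique follows the abstract: sample the uncertain inputs many times and compare the two options run by run.

    ```python
    import random

    def impact_score(material_kg, emission_factor, use_phase_saving):
        """Toy impact indicator (kg CO2-eq): production burden minus use-phase saving."""
        return material_kg * emission_factor - use_phase_saving

    def monte_carlo(n, kg_mu, kg_sd, ef_mu, ef_sd, saving):
        """Sample the uncertain parameters n times and collect the impact scores."""
        return [impact_score(random.gauss(kg_mu, kg_sd),
                             random.gauss(ef_mu, ef_sd),
                             saving) for _ in range(n)]

    random.seed(1)
    thin  = monte_carlo(10_000, kg_mu=100, kg_sd=10, ef_mu=2.0, ef_sd=0.2, saving=300)
    thick = monte_carlo(10_000, kg_mu=160, kg_sd=16, ef_mu=2.0, ef_sd=0.2, saving=450)

    # Probability (over parameter uncertainty) that the thicker option scores lower:
    p_thick_better = sum(b < a for a, b in zip(thin, thick)) / len(thin)
    ```

    Scenario and model uncertainty would be layered on top by rerunning the same loop for each resampled decision scenario or model formulation, as the paper does.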

  10. Anticipatory Water Management in Phoenix using Advanced Scenario Planning and Analyses: WaterSim 5

    NASA Astrophysics Data System (ADS)

    Sampson, D. A.; Quay, R.; White, D. D.; Gober, P.; Kirkwood, C.

    2013-12-01

    Complexity, uncertainty, and variability are inherent properties of linked social and natural processes; sustainable resource management must somehow consider all three. Typically, a decision support tool (using scenario analysis) is used to examine management alternatives under suspected trajectories in driver variables (i.e., climate forcings, growth or economic projections, etc.). This traditional planning focuses on a small set of envisioned scenarios whose outputs are compared against one another in order to evaluate their differing impacts on desired metrics. Human cognition typically limits this to three to five scenarios. However, complex and highly uncertain issues may require more, often many more, than five scenarios. In this case, advanced scenario analysis provides quantitative or qualitative methods that can reveal patterns and associations among scenario metrics for a large ensemble of scenarios. From this analysis, a smaller set of heuristics that describe the complexity and uncertainty revealed provides a basis to guide planning in an anticipatory fashion. Our water policy and management model, termed WaterSim, permits advanced scenario planning and analysis for the Phoenix Metropolitan Area. In this contribution we examine the concepts of advanced scenario analysis on a large-scale ensemble of scenarios, using our work with WaterSim as a case study. For this case study we created a range of possible water futures with scenarios that encompass differences in water supplies (our surrogate for climate change, drought, and inherent variability in riverine flows), population growth, and per capita water consumption. We used IPCC estimates of plausible future alterations in riverine runoff, locally produced and vetted population growth projections, and empirical trends in per capita water consumption for metropolitan cities. This ensemble consisted of ~30,700 scenarios (~575,000 observations). 
We compared and contrasted two metropolitan communities with differing growth projections and water portfolios: moderate growth with a diverse portfolio versus high growth with a more restrictive portfolio. Results illustrate that both communities exhibited an expanding envelope of possible future water outcomes under rational water management trajectories. However, the more diverse portfolio resulted in a broad, time-insensitive decision space for management interventions. The reverse was true for the more restrictive water portfolio with high growth projections.
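
    An ensemble of that size is, mechanically, the Cartesian product of the driver ranges. The driver levels below are hypothetical placeholders, not the published WaterSim inputs; the sketch only shows how tens of thousands of scenarios arise from a handful of drivers once each is discretized finely enough.

    ```python
    from itertools import product

    # Placeholder driver levels (hypothetical, not the actual WaterSim inputs):
    supply_factors = [0.70 + 0.05 * i for i in range(7)]   # river supply: 70%-100% of historical
    growth_rates   = [0.005 * i for i in range(5)]         # population growth: 0%-2% per year
    gpcd_trends    = [-0.02 + 0.01 * i for i in range(4)]  # per-capita use trend: -2% to +1%/yr

    # Each tuple is one scenario to be run through the model; advanced scenario
    # analysis then mines the resulting metric time series for patterns.
    ensemble = list(product(supply_factors, growth_rates, gpcd_trends))
    ```

    With more levels per driver, and additional drivers such as policy triggers, the count quickly reaches the ~30,700 scenarios reported in the abstract.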

  11. Impacts to Dungeness Crab from the Southwest Washington Littoral Drift Restoration Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Greg D.; Kohn, Nancy P.; Pearson, Walter H.

    2005-11-09

    The Benson Beach littoral drift restoration project is a demonstration project that will replenish sand on Benson Beach, the public beach north of the North Jetty at the mouth of the Columbia River (MCR), using material dredged from the river during normal maintenance dredging of the navigational channel. A U.S. Army Corps of Engineers (Corps) proposal involves pumping the material from a sump area on the south side of the jetty to Benson Beach using a cutter suction dredge, also known as a pipeline dredge. If this one-time demonstration project proves feasible and successful, up to a million cubic yards of sediment could be used to replenish the outer coast littoral drift system in successive years by the same process. The primary goal of this study was to assess the potential risk of impacts to Dungeness crab from the proposed Benson Beach littoral drift restoration process of using the cutter suction dredge to move sediment from the proposed sump area on one side of the North Jetty to the beach on the other side of the jetty. Because there are no direct measurements of crab entrainment by pipeline dredge operating outside of the lower Columbia River navigation channel, dredge impacts for the proposed demonstration project were estimated using a modification of the dredge impact model (DIM) of Armstrong et al. (1987). The model estimates adult equivalent loss (AEL) of crabs using crab population density from trawl surveys, dredge project information (gear type, season, location, volume), and an entrainment function relating crab population density to entrainment by the dredge. The input used in applying the DIM to the Benson Beach littoral drift restoration included the specific dredging scenario provided by the Corps, existing data on crab density in previously proposed sump areas, and a series of entrainment functions. 
A total of fourteen scenarios were modeled and the outcomes compared with six reference scenarios intended to represent realistic to worst cases. Dungeness crab entrainment, and the subsequent loss of recruitment to adult age classes and to the crab fishery, estimated for the Benson Beach littoral drift restoration project varied widely (over three orders of magnitude) because of the range of assumptions about initial crab density, dredging scenarios, and entrainment functions. Although the comparison to reference scenarios helps put the results in perspective, losses to the crab fishery could still span two orders of magnitude. Reducing this uncertainty requires direct measurement of crab entrainment during the demonstration project; such measurement is recommended so that crab losses can be estimated more accurately and cumulative losses from successive replenishment efforts can be evaluated.
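
    The structure of such an adult-equivalent-loss estimate can be sketched as below. The function form and every input value are illustrative placeholders, not the Armstrong et al. (1987) parameterization; the point is to show why the outcome spans orders of magnitude when density and entrainment assumptions vary.

    ```python
    def adult_equivalent_loss(crab_density, dredge_volume, entrainment_rate, survival_to_adult):
        """Illustrative dredge-impact calculation (not the published DIM parameterization).
        crab_density      -- crabs per cubic yard of sediment (from trawl surveys)
        dredge_volume     -- cubic yards dredged
        entrainment_rate  -- fraction of crabs in the dredged prism that are entrained
        survival_to_adult -- probability an entrained crab would have reached adulthood
        """
        entrained = crab_density * dredge_volume * entrainment_rate
        return entrained * survival_to_adult

    # Hypothetical inputs spanning plausible ranges drive a wide spread of outcomes:
    low  = adult_equivalent_loss(0.001, 500_000, 0.1, 0.05)  # 2.5 adult equivalents
    high = adult_equivalent_loss(0.01,  500_000, 1.0, 0.05)  # 250 adult equivalents
    ```

    Here a tenfold change in density combined with a tenfold change in entrainment rate already produces the two-orders-of-magnitude spread discussed above.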

  12. Cost-comparison of different management policies for tuberculosis patients in Italy. AIPO TB Study Group.

    PubMed Central

    Migliori, G. B.; Ambrosetti, M.; Besozzi, G.; Farris, B.; Nutini, S.; Saini, L.; Casali, L.; Nardini, S.; Bugiani, M.; Neri, M.; Raviglione, M. C.

    1999-01-01

    Although in developing countries the treatment of tuberculosis (TB) cases is among the most cost-effective health interventions, few studies have evaluated the cost-effectiveness of TB control in low-prevalence countries. The aim of the present study was to carry out an economic analysis in Italy that takes into account both the perspective of the resource-allocating authority (i.e., the Ministry of Health) and the broader social perspective. It comprised a cost description based on current outcomes applied to a representative sample of TB patients nationwide (admission and directly observed treatment (DOT) during the initial intensive phase of treatment), and a cost-comparison analysis of two alternative programmes: the current policy based on available data (scenario 1) and a hypothetical policy oriented more towards outpatient care (scenario 2). Both scenarios included the options of providing or not providing DOT outside hospital admission, and of providing incentives, and were compared in terms of cost per case treated successfully. Indirect costs (such as loss of productivity) were included in the broader social perspective. The study was designed as a prospective monitoring activity based on the supervised collection of forms from a representative sample of Italian TB units. Individual data were collected and analysed to obtain a complete economic profile of the patients enrolled and to evaluate the effectiveness of the intervention. A separate analysis was done for each scenario to determine the end-point at different levels of cure rate (50-90%). The mean length of treatment was 6.6 months (patients were hospitalized during the intensive phase; length of stay was significantly higher in smear-positive patients and in human immunodeficiency virus (HIV) seropositive patients). Roughly six direct smear and culture examinations were performed during hospital admission and three during ambulatory treatment. 
The cost of a single bed day was US$186.90, whereas that of a single outpatient visit ranged, according to the different options, from US$2.50 to US$11. Scenario 2 was consistently less costly than scenario 1. The cost per case cured for smear-positive cases was US$16,703 in scenario 1 and US$5946 in scenario 2. The difference in cost between the cheapest option (no DOT) and the more expensive option (DOT, additional staff, incentives) ranged from US$1407 (scenario 1, smear-negative and extrapulmonary cases) to US$1814 (scenario 2, smear-positive cases). The additional cost to society including indirect costs ranged from US$1800 to US$4200. The possible savings at the national level were in the order of US$50 million per year. In conclusion, cost-comparison analysis showed that a relatively minor change in policy can result in significant savings and that the adoption of DOT will represent a relatively modest economic burden, although the real gain in effectiveness resulting from DOT in Italy requires further evaluation. PMID:10427931
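
    The end-point arithmetic behind these figures is simple: money spent on all patients divided by the number treated successfully. The per-patient cost below is a made-up figure, not one of the study's; it only illustrates how the 50-90% cure-rate range examined in the paper stretches the cost per case cured.

    ```python
    def cost_per_case_cured(cost_per_patient_usd, cure_rate):
        """Cost per case treated successfully = spend on all patients / cases cured."""
        return cost_per_patient_usd / cure_rate

    # Hypothetical US$5000 per patient, evaluated across the study's cure-rate range:
    end_points = {rate: cost_per_case_cured(5000.0, rate) for rate in (0.5, 0.7, 0.9)}
    # At a 50% cure rate the cost per case cured doubles to US$10 000.
    ```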

  13. Event-based knowledge elicitation of operating room management decision-making using scenarios adapted from information systems data

    PubMed Central

    2011-01-01

    Background No systematic process has previously been described for a needs assessment that identifies the operating room (OR) management decisions made by the anesthesiologists and nurse managers at a facility that do not maximize the efficiency of use of OR time. We evaluated whether event-based knowledge elicitation can be used practically for rapid assessment of OR management decision-making at facilities, whether scenarios can be adapted automatically from information systems data, and the usefulness of the approach. Methods A process of event-based knowledge elicitation was developed to assess OR management decision-making that may reduce the efficiency of use of OR time. Hypothetical scenarios addressing every OR management decision influencing OR efficiency were created from published examples. Scenarios are adapted, so that cues about conditions are accurate and appropriate for each facility (e.g., if OR 1 is used as an example in a scenario, the listed procedure is a type of procedure performed at the facility in OR 1). Adaptation is performed automatically using the facility's OR information system or anesthesia information management system (AIMS) data for most scenarios (43 of 45). Performing the needs assessment takes approximately 1 hour of local managers' time while they decide if their decisions are consistent with the described scenarios. A table of contents of the indexed scenarios is created automatically, providing a simple version of problem solving using case-based reasoning. For example, a new OR manager wanting to know the best way to decide whether to move a case can look in the chapter on "Moving Cases on the Day of Surgery" to find a scenario that describes the situation being encountered. Results Scenarios have been adapted and used at 22 hospitals. Few changes in decisions were needed to increase the efficiency of use of OR time. 
The few changes were heterogeneous among hospitals, showing the usefulness of individualized assessments. Conclusions Our technical advance is the development and use of automated event-based knowledge elicitation to identify suboptimal OR management decisions that decrease the efficiency of use of OR time. The adapted scenarios can be used in future decision-making. PMID:21214905
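
    Mechanically, adapting a scenario to a facility amounts to filling a template with fields pulled from the OR information system or AIMS. The record and template below are invented stand-ins, assumed for illustration, for whatever the actual systems export.

    ```python
    # Toy facility record standing in for OR information system / AIMS data:
    facility = {
        "or_name": "OR 1",
        "typical_procedure": "laparoscopic cholecystectomy",
        "turnover_minutes": 32,
    }

    # Hypothetical scenario template; fields are replaced per facility so cues
    # about conditions are accurate (e.g., a procedure actually done in OR 1).
    template = (
        "At {or_name}, a {typical_procedure} finishes early and the following case "
        "could be moved up. Average turnover at this facility is {turnover_minutes} min. "
        "Would you move the case?"
    )

    scenario = template.format(**facility)
    ```

    Indexing such adapted scenarios by topic then yields the chapter-style table of contents the authors describe for case-based look-up.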

  14. Event-based knowledge elicitation of operating room management decision-making using scenarios adapted from information systems data.

    PubMed

    Dexter, Franklin; Wachtel, Ruth E; Epstein, Richard H

    2011-01-07

    No systematic process has previously been described for a needs assessment that identifies the operating room (OR) management decisions made by the anesthesiologists and nurse managers at a facility that do not maximize the efficiency of use of OR time. We evaluated whether event-based knowledge elicitation can be used practically for rapid assessment of OR management decision-making at facilities, whether scenarios can be adapted automatically from information systems data, and the usefulness of the approach. A process of event-based knowledge elicitation was developed to assess OR management decision-making that may reduce the efficiency of use of OR time. Hypothetical scenarios addressing every OR management decision influencing OR efficiency were created from published examples. Scenarios are adapted, so that cues about conditions are accurate and appropriate for each facility (e.g., if OR 1 is used as an example in a scenario, the listed procedure is a type of procedure performed at the facility in OR 1). Adaptation is performed automatically using the facility's OR information system or anesthesia information management system (AIMS) data for most scenarios (43 of 45). Performing the needs assessment takes approximately 1 hour of local managers' time while they decide if their decisions are consistent with the described scenarios. A table of contents of the indexed scenarios is created automatically, providing a simple version of problem solving using case-based reasoning. For example, a new OR manager wanting to know the best way to decide whether to move a case can look in the chapter on "Moving Cases on the Day of Surgery" to find a scenario that describes the situation being encountered. Scenarios have been adapted and used at 22 hospitals. Few changes in decisions were needed to increase the efficiency of use of OR time. The few changes were heterogeneous among hospitals, showing the usefulness of individualized assessments. 
Our technical advance is the development and use of automated event-based knowledge elicitation to identify suboptimal OR management decisions that decrease the efficiency of use of OR time. The adapted scenarios can be used in future decision-making.

  15. The Effectiveness of Remotely Piloted Aircraft in a Permissive Hunter-Killer Scenario

    DTIC Science & Technology

    2014-01-01

    APY-8, Lynx Block 20. Specific Scenario: In our scenario, prior intelligence has indicated that the target is likely to emerge from hiding...an additional 5 sec. must be taken to transfer (digitally) the target geolocation data to the strike platform. (Unlike in Case 1, it was not

  16. Scenario-Based Case Study Method and the Functionality of the Section Called "From Production to Consumption" from the Perspective of Primary School Students

    ERIC Educational Resources Information Center

    Taneri, Ahu

    2018-01-01

    In this research, the aim was to present students' evaluations of the scenario-based case study method and to show the functionality of the studied section called "from production to consumption". Qualitative research methods and content analysis were used to reveal participants' experiences and meaningful relations regarding…

  17. Elementary Social Studies in 2005: Danger or Opportunity?--A Response to Jeff Passe

    ERIC Educational Resources Information Center

    Libresco, Andrea S.

    2006-01-01

    From the emphasis on lower-level test-prep materials to the disappearance of the subject altogether, elementary social studies is, in the best-case scenario, being tested and, thus, taught with a heavy emphasis on recall; and, in the worst-case scenario, not being taught at all. In this article, the author responds to Jeff Passe's views on…

  18. Climate balance of biogas upgrading systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pertl, A., E-mail: andreas.pertl@boku.ac.a; Mostbauer, P.; Obersteiner, G.

    2010-01-15

    One of the numerous applications of renewable energy is the use of upgraded biogas, fed into the gas grid where needed. The aim of the present study was to identify an upgrading scenario featuring minimum overall GHG emissions. The study was based on a life-cycle approach, also taking into account GHG emissions from plant cultivation through to energy conversion. For anaerobic digestion, two substrates were taken into account: (1) agricultural resources and (2) municipal organic waste. The study provides results for four different upgrading technologies, including the BABIU (Bottom Ash for Biogas Upgrading) method. As the transport of bottom ash is a critical factor in the BABIU method, different transport distances and means of conveyance (lorry, train) were considered. Furthermore, aspects including biogas compression and energy conversion in a combined heat and power plant were assessed. GHG emissions from a conventional energy supply system (natural gas) were estimated as the reference scenario. The main findings showed that the overall reduction of GHG emissions may be rather limited; in an agricultural context, for example, PSA scenarios emit only 10% less greenhouse gas than the reference scenario. The BABIU method is an efficient upgrading method capable of attaining a high reduction in GHG emissions by sequestration of CO2.

  19. The International Experimental Thermal Hydraulic Systems database – TIETHYS: A new NEA validation tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rohatgi, Upendra S.

    Nuclear reactor codes require validation with appropriate data representing the plant for specific scenarios. The thermal-hydraulic data are scattered across different locations and in different formats, and some of the data are in danger of being lost. A relational database is being developed to organize the international thermal-hydraulic test data for various reactor concepts and different scenarios. At the reactor system level, the data are organized to include separate-effect tests and integral-effect tests for specific scenarios and the corresponding phenomena. The database relies on the phenomena identification sections of expert-developed PIRTs. The database will provide a summary of appropriate data, a review of facility information, test descriptions, instrumentation, references for the experimental data, and some examples of application of the data for validation. The current database platform includes scenarios for PWR, BWR, and VVER reactors and specific benchmarks for CFD modelling data, and is to be expanded to include references for molten salt reactors. There are placeholders for high-temperature gas-cooled reactors, CANDU, and liquid metal reactors. This relational database is called The International Experimental Thermal Hydraulic Systems (TIETHYS) database; it currently resides at the Nuclear Energy Agency (NEA) of the OECD and is freely open to public access. Going forward, the database will be extended to include additional links and data as they become available. https://www.oecd-nea.org/tiethysweb/

  20. Forb, Insect, and Soil Response to Burning and Mowing Wyoming Big Sagebrush in Greater Sage-Grouse Breeding Habitat

    NASA Astrophysics Data System (ADS)

    Hess, Jennifer E.; Beck, Jeffrey L.

    2014-04-01

    Wyoming big sagebrush (Artemisia tridentata Nutt. ssp. wyomingensis Beetle and Young) communities provide the structure, forbs, and insects needed by greater sage-grouse (Centrocercus urophasianus) for growth and survival. We evaluated forb, insect, and soil responses at six mowed and 19 prescribed-burned sites compared to 25 paired, untreated reference sites. Sites were classified by treatment type, soil type, season, and decade of treatment (sites burned during 1990-1999 and sites burned or mowed during 2000-2006). Our objective was to evaluate differences in ten habitat attributes known to influence sage-grouse nesting and brood rearing and to compare responses among treatment scenarios. Contrary to desired outcomes, treating Wyoming big sagebrush through prescribed burning or mowing may not stimulate cover or increase nutrition in food forbs, or increase insect abundance or indicators of soil quality, compared with reference sites. In some cases, prescribed burning showed positive results compared with mowing, such as greater forb crude protein content (%), ant (Hymenoptera; no./trap), beetle (Coleoptera; no./trap), and grasshopper (Orthoptera; no./sweep) abundance, and total (%) soil carbon and nitrogen; but of these attributes, only grasshopper abundance was enhanced at burned sites compared with reference sites in 2008. Mowing did not promote a statistically significant increase in sage-grouse nesting or early brood-rearing habitat attributes such as cover or nutritional quality of food forbs, or counts of ants, beetles, or grasshoppers, compared with reference sites.

  1. Forb, insect, and soil response to burning and mowing Wyoming big sagebrush in greater sage-grouse breeding habitat.

    PubMed

    Hess, Jennifer E; Beck, Jeffrey L

    2014-04-01

    Wyoming big sagebrush (Artemisia tridentata Nutt. ssp. wyomingensis Beetle and Young) communities provide the structure, forbs, and insects needed by greater sage-grouse (Centrocercus urophasianus) for growth and survival. We evaluated forb, insect, and soil responses at six mowed and 19 prescribed-burned sites compared to 25 paired, untreated reference sites. Sites were classified by treatment type, soil type, season, and decade of treatment (sites burned during 1990-1999 and sites burned or mowed during 2000-2006). Our objective was to evaluate differences in ten habitat attributes known to influence sage-grouse nesting and brood rearing and to compare responses among treatment scenarios. Contrary to desired outcomes, treating Wyoming big sagebrush through prescribed burning or mowing may not stimulate cover or increase nutrition in food forbs, or increase insect abundance or indicators of soil quality, compared with reference sites. In some cases, prescribed burning showed positive results compared with mowing, such as greater forb crude protein content (%), ant (Hymenoptera; no./trap), beetle (Coleoptera; no./trap), and grasshopper (Orthoptera; no./sweep) abundance, and total (%) soil carbon and nitrogen; but of these attributes, only grasshopper abundance was enhanced at burned sites compared with reference sites in 2008. Mowing did not promote a statistically significant increase in sage-grouse nesting or early brood-rearing habitat attributes such as cover or nutritional quality of food forbs, or counts of ants, beetles, or grasshoppers, compared with reference sites.

  2. Decision support framework for evaluating the operational environment of forest bioenergy production and use: Case of four European countries.

    PubMed

    Pezdevšek Malovrh, Špela; Kurttila, Mikko; Hujala, Teppo; Kärkkäinen, Leena; Leban, Vasja; Lindstad, Berit H; Peters, Dörte Marie; Rhodius, Regina; Solberg, Birger; Wirth, Kristina; Zadnik Stirn, Lidija; Krč, Janez

    2016-09-15

    Complex policy-making situations around bioenergy production and use require examination of the operational environment of the society and a participatory approach. This paper presents and demonstrates a three-phase decision-making framework for analysing the operational environment of strategies related to increased forest bioenergy targets. The framework is based on SWOT (strengths, weaknesses, opportunities and threats) analysis and the Simple Multi-Attribute Rating Technique (SMART). Stakeholders of four case countries (Finland, Germany, Norway and Slovenia) defined the factors that affect the operational environments, classified in four pre-set categories (Forest Characteristics and Management, Policy Framework, Technology and Science, and Consumers and Society). The stakeholders participated in weighting the SWOT items for two future scenarios with the SMART technique. The first scenario reflected the current 2020 targets (the Business-as-Usual scenario), and the second contained a further increase in the targets (the Increase scenario). This framework can be applied to various problems of environmental management and also to other fields where public decision-making is combined with stakeholder engagement. The case results show that the greatest differences between the scenarios appear in Germany, indicating a notably negative outlook for the Increase scenario, while the smallest differences were found in Finland. Policy Framework was a highly rated category across the countries, mainly with respect to weaknesses and threats. Intensified forest bioenergy harvesting and utilization has potentially wide country-specific impacts which need to be anticipated and considered in national policies and public dialogue. Copyright © 2016 Elsevier Ltd. All rights reserved.
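
    The SMART step reduces to a weighted sum: normalize the elicited weights, multiply each item's rating by its weight, and add. The category weights and scenario ratings below are hypothetical, not the numbers elicited from the four countries' stakeholders.

    ```python
    def smart_score(weights, ratings):
        """Weighted-sum SMART value: normalize weights, then sum weight * rating."""
        total = sum(weights.values())
        return sum(weights[k] / total * ratings[k] for k in weights)

    # Hypothetical weights for the four pre-set categories, and 0-100 ratings of
    # each scenario (invented for illustration):
    weights  = {"forest": 20, "policy": 40, "technology": 15, "consumers": 25}
    bau      = {"forest": 70, "policy": 55, "technology": 60, "consumers": 65}
    increase = {"forest": 60, "policy": 35, "technology": 60, "consumers": 50}

    bau_score = smart_score(weights, bau)        # 61.25
    inc_score = smart_score(weights, increase)   # 47.5
    ```

    The gap between the two scores is the kind of scenario difference the paper compares across countries, with the heavily weighted Policy Framework category dominating the result.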

  3. A General Safety Assessment for Purified Food Ingredients Derived From Biotechnology Crops: Case Study of Brazilian Sugar and Beverages Produced From Insect-Protected Sugarcane.

    PubMed

    Kennedy, Reese D; Cheavegatti-Gianotto, Adriana; de Oliveira, Wladecir S; Lirette, Ronald P; Hjelle, Jerry J

    2018-01-01

    Insect-protected sugarcane that expresses Cry1Ab has been developed in Brazil. Analysis of trade information has shown that effectively all the sugarcane-derived Brazilian exports are raw or refined sugar and ethanol. The fact that raw and refined sugar are highly purified food ingredients, with no detectable transgenic protein, provides an interesting case study of a generalized safety assessment approach. In this study, both the theoretical protein intakes and safety assessments of Cry1Ab, Cry1Ac, NPTII, and Bar proteins used in insect-protected biotechnology crops were examined. The potential consumption of these proteins was examined using local market research data on average added sugar intakes in eight diverse and representative Brazilian raw and refined sugar export markets (Brazil, Canada, China, Indonesia, India, Japan, Russia, and the USA). The average sugar intakes, which ranged from 5.1 g of added sugar/person/day (India) to 126 g sugar/person/day (USA), were used to calculate possible human exposure. The theoretical protein intake estimates were carried out under two scenarios: the "Worst-case" scenario assumed that 1 μg of newly-expressed protein is detected per g of raw or refined sugar, and the "Reasonable-case" scenario assumed 1 ng protein/g sugar. The "Worst-case" scenario was based on results of detailed studies of sugarcane processing in Brazil showing that refined sugar contains less than 1 μg of total plant protein per g of refined sugar. The "Reasonable-case" scenario was based on the assumption that the expression levels of newly-expressed proteins in stalk were less than 0.1% of total stalk protein. Using these calculated protein intake values from the consumption of sugar, along with the accepted NOAEL levels of the four representative proteins, we concluded that safety margins ranged from 6.9 × 10⁵ to 5.9 × 10⁷ for the "Worst-case" scenario and from 6.9 × 10⁸ to 5.9 × 10¹⁰ for the "Reasonable-case" scenario. These safety margins are very high due to the extremely low possible exposures and the high NOAELs for these non-toxic proteins. This generalized approach to the safety assessment of highly purified food ingredients like sugar illustrates that sugar processed from Brazilian GM varieties is safe for consumption in representative markets globally.
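    The margin-of-exposure arithmetic described in this record can be sketched in a few lines. This is an illustrative reconstruction only: the body weight, NOAEL, and protein concentrations below are assumed placeholder values, not the study's actual inputs.

```python
# Hedged sketch of the generalized safety-margin calculation:
# intake (mg/kg bw/day) from added sugar, then margin = NOAEL / intake.
# Body weight (60 kg) and NOAEL (1000 mg/kg bw/day) are placeholders.

def protein_intake_mg_per_kg_day(sugar_g_day, protein_ug_per_g_sugar,
                                 body_weight_kg=60.0):
    """Daily intake of a newly-expressed protein from added sugar."""
    intake_ug_day = sugar_g_day * protein_ug_per_g_sugar
    return intake_ug_day / 1000.0 / body_weight_kg  # ug -> mg, per kg bw

def safety_margin(noael_mg_per_kg_day, intake_mg_per_kg_day):
    """Margin of exposure: NOAEL divided by the estimated intake."""
    return noael_mg_per_kg_day / intake_mg_per_kg_day

# "Worst case": 1 ug protein/g sugar; "Reasonable case": 1 ng/g (1000x lower).
usa_worst = protein_intake_mg_per_kg_day(126.0, 1.0)
usa_reasonable = protein_intake_mg_per_kg_day(126.0, 0.001)

m_worst = safety_margin(1000.0, usa_worst)
m_reasonable = safety_margin(1000.0, usa_reasonable)
```

Because the two scenarios differ only in the assumed protein concentration, their margins differ by exactly a factor of 1000, mirroring the 10⁵-10⁷ versus 10⁸-10¹⁰ spread quoted in the record.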

  4. Exposure of the general public due to wireless LAN applications in public places.

    PubMed

    Schmid, G; Preiner, P; Lager, D; Uberbacher, R; Georg, R

    2007-01-01

    The typical exposure caused by wireless LAN applications in public areas has been investigated in a variety of scenarios. Small-sized (internet café) and large-scale (airport) indoor scenarios, as well as outdoor scenarios in the environment of access points (AP) supplying residential areas and public places, were considered. The exposure assessment was carried out by numerical GTD/UTD computations based on optical wave propagation, as well as by frequency-selective measurements verifying the computations in the considered scenarios under real-life conditions. In the small-sized indoor scenario, the maximum temporal peak values of power density, spatially averaged over body dimensions, were found to be lower than 20 mW/m², corresponding to 0.2% of the reference level according to the European Council Recommendation 1999/519/EC. Local peak values of power density might be 1-2 orders of magnitude higher, while spatially and time-averaged values for usual data traffic conditions might be 2-3 orders of magnitude lower, depending on the actual data traffic. In the considered outdoor scenarios, exposure was several orders of magnitude lower than in indoor scenarios due to the usually larger distances to the AP antennas.
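    The headline figure in this record (20 mW/m² corresponding to 0.2% of the reference level) can be checked directly. The sketch below assumes the 1999/519/EC general-public reference level at WLAN frequencies is 10 W/m²; that value is an assumption of this example, not stated in the record itself.

```python
# Express a measured power density as a percentage of a reference level.
# Assumes a 10 W/m^2 general-public reference level (~2.4 GHz).

def percent_of_reference(power_density_w_m2, reference_w_m2=10.0):
    """Percentage of the exposure reference level reached."""
    return 100.0 * power_density_w_m2 / reference_w_m2

# 20 mW/m^2 spatial peak from the internet-cafe scenario:
share = percent_of_reference(0.020)  # -> 0.2 (percent)
```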

  5. AeroMACS Interference Simulations for Global Airports

    NASA Technical Reports Server (NTRS)

    Wilson, Jeffrey D.; Apaza, Rafael D.

    2012-01-01

    Eighteen scenarios were run with the Visualyse Professional interference software; the two most realistic are presented. In Scenario A, 85 large airports can transmit 1650 mW on each of 11 channels, 173 medium airports can transmit 825 mW on each of 6 channels, and 5951 small airports can transmit 275 mW on one channel. In Scenario B, reducing the power allowed for small airports increases the allowable power for large and medium airports, but this should not be necessary, as the Scenario A levels are more than adequate. These power limitations are conservative because a worst case with 100% duty cycle is assumed.

  6. Protocol Architecture Model Report

    NASA Technical Reports Server (NTRS)

    Dhas, Chris

    2000-01-01

    NASA's Glenn Research Center (GRC) defines and develops advanced technology for high priority national needs in communications technologies for application to aeronautics and space. GRC tasked Computer Networks and Software Inc. (CNS) to examine protocols and architectures for an In-Space Internet Node. CNS has developed a methodology for network reference models to support NASA's four mission areas: Earth Science, Space Science, Human Exploration and Development of Space (HEDS), and Aerospace Technology. This report applies the methodology to three space Internet-based communications scenarios for future missions. CNS has conceptualized, designed, and developed space Internet-based communications protocols and architectures for each of the independent scenarios. The scenarios are: Scenario 1: Unicast communications between a Low-Earth-Orbit (LEO) spacecraft in-space Internet node and a ground terminal Internet node via a Tracking and Data Relay Satellite (TDRS) transfer; Scenario 2: Unicast communications between a Low-Earth-Orbit (LEO) International Space Station and a ground terminal Internet node via a TDRS transfer; Scenario 3: Multicast Communications (or "Multicasting"), 1 Spacecraft to N Ground Receivers, N Ground Transmitters to 1 Ground Receiver via a Spacecraft.

  7. 'Weather Value at Risk': A uniform approach to describe and compare sectoral income risks from climate change.

    PubMed

    Prettenthaler, Franz; Köberl, Judith; Bird, David Neil

    2016-02-01

    We extend the concept of 'Weather Value at Risk' - initially introduced to measure the economic risks resulting from current weather fluctuations - to describe and compare sectoral income risks from climate change. This is illustrated using the examples of wheat cultivation and summer tourism in (parts of) Sardinia. Based on climate scenario data from four different regional climate models we study the change in the risk of weather-related income losses between a reference (1971-2000) and a future (2041-2070) period. Results from both examples suggest an increase in weather-related risks of income losses due to climate change, which is somewhat more pronounced for summer tourism. Nevertheless, income from wheat cultivation is at much higher risk of weather-related losses than income from summer tourism, both under reference and future climatic conditions. A weather-induced loss of at least 5% - compared to the income associated with average reference weather conditions - shows a 40% (80%) probability of occurrence in the case of wheat cultivation, but only a 0.4% (16%) probability of occurrence in the case of summer tourism, given reference (future) climatic conditions. Whereas in the agricultural example increases in the weather-related income risks mainly result from an overall decrease in average wheat yields, the heightened risk in the tourism example stems mostly from a change in the weather-induced variability of tourism incomes. With the extended 'Weather Value at Risk' concept being able to capture impacts from changes in both the mean and the variability of the climate, it is a powerful tool for presenting and disseminating the results of climate change impact assessments. Due to its flexibility, the concept can be applied to any economic sector and therefore provides a valuable tool for cross-sectoral comparisons of climate change impacts, but also for the assessment of the costs and benefits of adaptation measures. 
Copyright © 2015 Elsevier B.V. All rights reserved.
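    The loss probabilities quoted in this record (e.g., a 40% chance of a ≥5% income loss) are exceedance probabilities over a distribution of weather-driven incomes. A minimal sketch of that calculation, with a purely hypothetical income distribution standing in for the sectoral model output:

```python
import random

def loss_exceedance_prob(incomes, baseline, loss_threshold=0.05):
    """Share of simulated years whose income falls at least
    loss_threshold (e.g. 5%) below the mean-weather baseline."""
    losses = [(baseline - y) / baseline for y in incomes]
    return sum(1 for l in losses if l >= loss_threshold) / len(losses)

random.seed(42)
baseline = 100.0  # income under average reference weather (arbitrary units)
# Hypothetical weather-driven income draws (placeholder distribution):
incomes = [random.gauss(98.0, 6.0) for _ in range(100_000)]
p = loss_exceedance_prob(incomes, baseline)
```

Changing the mean of the income distribution mimics the agricultural case (lower average yields), while widening its spread mimics the tourism case (greater weather-induced variability); both raise the exceedance probability.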

  8. Historical trends and high-resolution future climate projections in northern Tuscany (Italy)

    NASA Astrophysics Data System (ADS)

    D'Oria, Marco; Ferraresi, Massimo; Tanda, Maria Giovanna

    2017-12-01

    This paper analyzes the historical precipitation and temperature trends and the future climate projections with reference to the northern part of Tuscany (Italy). The trends are identified and quantified at monthly and annual scale at gauging stations with data collected for long periods (60-90 years). An ensemble of 13 Regional Climate Models (RCMs), based on two Representative Concentration Pathways (RCP4.5 and RCP8.5), was then used to assess local scale future precipitation and temperature projections and to represent the uncertainty in the results. The historical data highlight a general decrease of the annual rainfall at a mean rate of 22 mm per decade but, in many cases, the tendencies are not statistically significant. Conversely, the annual mean temperature exhibits an upward trend, statistically significant in the majority of cases, with a warming rate of about 0.1 °C per decade. With reference to the model projections and the annual precipitation, the results are not concordant; the deviations between models in the same period are higher than the future changes at medium- (2031-2040) and long-term (2051-2060) and highlight that the model uncertainty and variability are high. According to the climate model projections, the warming of the study area is unequivocal; a mean positive increment of 0.8 °C at medium-term and 1.1 °C at long-term is expected with respect to the reference period (2003-2012) and the scenario RCP4.5; the increments grow to 0.9 °C and 1.9 °C for the RCP8.5. Finally, in order to check the observed climate change signals, the climate model projections were compared with the trends based on the historical data. A satisfactory agreement is obtained with reference to the precipitation; a systematic underestimation of the trend values with respect to the models, at medium- and long-term, is observed for the temperature data.

  9. Personalized cloud-based bioinformatics services for research and education: use cases and the elasticHPC package

    PubMed Central

    2012-01-01

    Background Bioinformatics services have been traditionally provided in the form of a web-server that is hosted at institutional infrastructure and serves multiple users. This model, however, is not flexible enough to cope with the increasing number of users, increasing data size, and new requirements in terms of speed and availability of service. The advent of cloud computing suggests a new service model that provides an efficient solution to these problems, based on the concepts of "resources-on-demand" and "pay-as-you-go". However, cloud computing has not yet been introduced within bioinformatics servers due to the lack of usage scenarios and software layers that address the requirements of the bioinformatics domain. Results In this paper, we provide different use case scenarios for providing cloud computing based services, considering both the technical and financial aspects of the cloud computing service model. These scenarios are for individual users seeking computational power as well as bioinformatics service providers aiming at provision of personalized bioinformatics services to their users. We also present elasticHPC, a software package and a library that facilitates the use of high performance cloud computing resources in general and the implementation of the suggested bioinformatics scenarios in particular. Concrete examples that demonstrate the suggested use case scenarios with whole bioinformatics servers and major sequence analysis tools like BLAST are presented. Experimental results with large datasets are also included to show the advantages of the cloud model. Conclusions Our use case scenarios and the elasticHPC package are steps towards the provision of cloud based bioinformatics services, which would help in overcoming the data challenge of recent biological research. All resources related to elasticHPC and its web-interface are available at http://www.elasticHPC.org. PMID:23281941

  10. Personalized cloud-based bioinformatics services for research and education: use cases and the elasticHPC package.

    PubMed

    El-Kalioby, Mohamed; Abouelhoda, Mohamed; Krüger, Jan; Giegerich, Robert; Sczyrba, Alexander; Wall, Dennis P; Tonellato, Peter

    2012-01-01

    Bioinformatics services have been traditionally provided in the form of a web-server that is hosted at institutional infrastructure and serves multiple users. This model, however, is not flexible enough to cope with the increasing number of users, increasing data size, and new requirements in terms of speed and availability of service. The advent of cloud computing suggests a new service model that provides an efficient solution to these problems, based on the concepts of "resources-on-demand" and "pay-as-you-go". However, cloud computing has not yet been introduced within bioinformatics servers due to the lack of usage scenarios and software layers that address the requirements of the bioinformatics domain. In this paper, we provide different use case scenarios for providing cloud computing based services, considering both the technical and financial aspects of the cloud computing service model. These scenarios are for individual users seeking computational power as well as bioinformatics service providers aiming at provision of personalized bioinformatics services to their users. We also present elasticHPC, a software package and a library that facilitates the use of high performance cloud computing resources in general and the implementation of the suggested bioinformatics scenarios in particular. Concrete examples that demonstrate the suggested use case scenarios with whole bioinformatics servers and major sequence analysis tools like BLAST are presented. Experimental results with large datasets are also included to show the advantages of the cloud model. Our use case scenarios and the elasticHPC package are steps towards the provision of cloud based bioinformatics services, which would help in overcoming the data challenge of recent biological research. All resources related to elasticHPC and its web-interface are available at http://www.elasticHPC.org.

  11. The Value of CCS under Current Policy Scenarios: NDCs and Beyond

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davidson, Casie L.; Dahowski, Robert T.; McJeon, Haewon C.

    This paper describes preliminary results of analysis using the Global Change Assessment Model (GCAM) to evaluate the potential role of CCS in addressing emissions reduction targets. Scenarios are modelled using the Paris-Increased Ambition (PIA) case developed by Fawcett et al. (2015), and a more aggressive Paris Two-Degree Ambition (P2A) case. Both cases are based upon nationally determined contributions (NDCs) agreed to at the UNFCCC Conference of Parties (COP-21) in December 2015, coupled with additional mitigation effort beyond the 2030 Paris timeframe, through the end of the century. Analysis of CCS deployment and abatement costs under both policy scenarios suggests that, as modelled, having CCS in the technological portfolio could reduce the global cost of addressing emissions reduction targets specified under the policy scenario by trillions of dollars, primarily by enabling a smoother and lower-cost transition to next-generation technologies. Through the end of the century, total global abatement costs associated with the PIA case – with five percent annual reduction in emission intensity and reaching 2.2 degrees by 2100 – are reduced by $15 trillion USD in the scenario where CCS is available to deploy by 2025 and remains available through 2100, reflecting a 47 percent savings in the cost of climate change abatement. Under the more ambitious P2A case, with 8 percent annual reduction in emission intensity and reaching 1.9 degrees by 2100, the availability of CCS reduces global abatement costs by $22 trillion USD through the end of the century, again nearly halving the costs of addressing the policy, relative to achieving the same target using an energy portfolio that does not include CCS. PIA and P2A scenarios with CCS result in 1,250 and 1,580 GtCO2 of global geologic storage by the end of the century, respectively.

  12. Assessing Discriminative Performance at External Validation of Clinical Prediction Models

    PubMed Central

    Nieboer, Daan; van der Ploeg, Tjeerd; Steyerberg, Ewout W.

    2016-01-01

    Introduction External validation studies are essential to study the generalizability of prediction models. Recently a permutation test, focusing on discrimination as quantified by the c-statistic, was proposed to judge whether a prediction model is transportable to a new setting. We aimed to evaluate this test and compare it to previously proposed procedures to judge any changes in c-statistic from development to external validation setting. Methods We compared the use of the permutation test to the use of benchmark values of the c-statistic following from a previously proposed framework to judge transportability of a prediction model. In a simulation study we developed a prediction model with logistic regression on a development set and validated it in the validation set. We concentrated on two scenarios: 1) the case-mix was more heterogeneous and predictor effects were weaker in the validation set compared to the development set, and 2) the case-mix was less heterogeneous in the validation set and predictor effects were identical in the validation and development set. Furthermore we illustrated the methods in a case study using 15 datasets of patients suffering from traumatic brain injury. Results The permutation test indicated that the validation and development set were homogeneous in scenario 1 (in almost all simulated samples) and heterogeneous in scenario 2 (in 17%-39% of simulated samples). Previously proposed benchmark values of the c-statistic and the standard deviation of the linear predictors correctly pointed at the more heterogeneous case-mix in scenario 1 and the less heterogeneous case-mix in scenario 2. Conclusion The recently proposed permutation test may provide misleading results when externally validating prediction models in the presence of case-mix differences between the development and validation population. 
To correctly interpret the c-statistic found at external validation it is crucial to disentangle case-mix differences from incorrect regression coefficients. PMID:26881753
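    The permutation test evaluated in this record can be sketched generically: pool the development and validation records (each a predicted score with its observed outcome), repeatedly reshuffle set membership, and compare the observed c-statistic difference against the permutation distribution. The code below is a simplified illustration of that scheme, not the authors' implementation.

```python
import random

def c_statistic(scores, outcomes):
    """Probability that a randomly chosen event outranks a randomly
    chosen non-event (area under the ROC curve); ties count as 0.5."""
    pos = [s for s, y in zip(scores, outcomes) if y == 1]
    neg = [s for s, y in zip(scores, outcomes) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def permutation_test(dev, val, n_perm=500, seed=0):
    """dev and val are lists of (score, outcome) pairs from one fixed
    model; returns the p-value for the observed |c_dev - c_val|."""
    rng = random.Random(seed)
    observed = abs(c_statistic(*zip(*dev)) - c_statistic(*zip(*val)))
    pooled = dev + val
    hits = valid = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        d, v = pooled[:len(dev)], pooled[len(dev):]
        # skip degenerate splits lacking events or non-events in either set
        if len({y for _, y in d}) < 2 or len({y for _, y in v}) < 2:
            continue
        valid += 1
        hits += abs(c_statistic(*zip(*d)) - c_statistic(*zip(*v))) >= observed
    return hits / valid
```

Note what this test cannot see: permuting set membership destroys any genuine case-mix difference between the sets, which is exactly why the record finds it misleading when such differences exist.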

  13. Assessing Discriminative Performance at External Validation of Clinical Prediction Models.

    PubMed

    Nieboer, Daan; van der Ploeg, Tjeerd; Steyerberg, Ewout W

    2016-01-01

    External validation studies are essential to study the generalizability of prediction models. Recently a permutation test, focusing on discrimination as quantified by the c-statistic, was proposed to judge whether a prediction model is transportable to a new setting. We aimed to evaluate this test and compare it to previously proposed procedures to judge any changes in c-statistic from development to external validation setting. We compared the use of the permutation test to the use of benchmark values of the c-statistic following from a previously proposed framework to judge transportability of a prediction model. In a simulation study we developed a prediction model with logistic regression on a development set and validated it in the validation set. We concentrated on two scenarios: 1) the case-mix was more heterogeneous and predictor effects were weaker in the validation set compared to the development set, and 2) the case-mix was less heterogeneous in the validation set and predictor effects were identical in the validation and development set. Furthermore we illustrated the methods in a case study using 15 datasets of patients suffering from traumatic brain injury. The permutation test indicated that the validation and development set were homogeneous in scenario 1 (in almost all simulated samples) and heterogeneous in scenario 2 (in 17%-39% of simulated samples). Previously proposed benchmark values of the c-statistic and the standard deviation of the linear predictors correctly pointed at the more heterogeneous case-mix in scenario 1 and the less heterogeneous case-mix in scenario 2. The recently proposed permutation test may provide misleading results when externally validating prediction models in the presence of case-mix differences between the development and validation population. To correctly interpret the c-statistic found at external validation it is crucial to disentangle case-mix differences from incorrect regression coefficients.

  14. Scenario-Based Case Study Analysis of Asteroid Mitigation in the Short Response Time Regime

    NASA Astrophysics Data System (ADS)

    Seery, B.; Greenaugh, K. C.

    2017-12-01

    Asteroid impact on Earth is a rare but inevitable occurrence, with potentially cataclysmic consequences. If a pending impact is discovered, mitigation options include civil-defense preparations as well as missions to deflect the asteroid and/or robustly disrupt and disperse it to an extent that only a negligible fraction remains on a threatening path (National Research Council's "Defending the Planet," 2010). If the threat is discovered with sufficient warning time, a kinetic impactor can deflect smaller objects, but response delays can rule out this option. If a body is too large to deflect by a kinetic impactor, or the time for response is insufficient, deflection or disruption can be achieved with a nuclear device. The use of nuclear ablation is considered within the context of current capabilities, requiring no need for nuclear testing. Existing, well-understood devices are sufficient for the largest known Potentially Hazardous Objects (PHOs). The National Aeronautics and Space Administration/Goddard Space Flight Center and the Department of Energy/National Nuclear Security Administration are collaborating to determine the critical characterization issues that define the boundaries for the asteroid-deflection options. Drawing from such work, we examine the timeline for a deflection mission, and how to provide the best opportunity for an impactor to suffice by minimizing the response time. This integrated problem considers the physical process of the deflection method (impact or ablation), along with the spacecraft, launch capability, risk analysis, and the available intercept flight trajectories. Our joint DOE/NASA team has conducted case study analysis of three distinctly different PHOs, on hypothetical Earth-impacting trajectories. The size of the design reference bodies ranges from 100 - 500 meters in diameter, with varying physical parameters such as composition, spin state, and metallicity, to name a few. 
We assemble the design reference of the small body in question using known values for key parameters and expert elicitation to make educated guesses on the unknown parameters, including an estimate of the overall uncertainties in those values. Our scenario-based systems approach includes 2-D and 3-D physics-based modeling and simulations.

  15. Vulnerability and impact assessment of extreme climatic event: A case study of southern Punjab, Pakistan.

    PubMed

    Aslam, Abdul Qayyum; Ahmad, Sajid R; Ahmad, Iftikhar; Hussain, Yawar; Hussain, Muhammad Sameem

    2017-02-15

    Understanding of the frequency, severity, damages and adaptation costs of climate extremes is crucial to managing their aftermath. Evaluation of PRECIS RCM modelled data under IPCC scenarios in Southern Punjab reveals that monthly mean temperature is 30°C under the A2 scenario, 2.4°C higher than under A1B, which is 27.6°C in the defined time lapses. Monthly mean precipitation under the A2 scenario ranges from 12 to 15 mm, and for the A1B scenario it ranges from 15 to 19 mm. Frequency modelling of floods and droughts via the Poisson distribution shows an increasing trend in upcoming decades, posing serious impacts on agriculture and livestock, food security, water resources, public health and economic status. Cumulative loss projected for frequent floods without adaptation will be in the range of USD 66.8-79.3 billion over a time lapse of 40 years from the 2010 base case. A drought damage function at 18% for the A2 scenario and at 13.5% for the A1B scenario was calculated; drought losses in the agriculture and livestock sectors were modelled. Cumulative loss projected for frequent droughts without adaptation under the A2 scenario will be in the range of USD 7.5-8.5 billion, while under the A1B scenario it will be in the range of USD 3.5-4.2 billion, over a time lapse of 60 years from the 1998-2002 base case. Severity analysis of extreme events shows that the situation gets worse unless adaptations are included not only in policy but also in the integrated development framework, with the required allocation of funds. This evaluation also highlights the results of cost-benefit analysis and the benefits of the adaptation options (mean and worst case) for floods and droughts in Southern Punjab. Additionally, the research highlights the role of an integrated extreme-events impact assessment methodology in performing vulnerability assessments and supporting adaptation decisions. This paper is an effort to highlight the importance of bottom-up approaches to dealing with climate change. Copyright © 2016 Elsevier B.V. All rights reserved.
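    The Poisson frequency modelling mentioned in this record reduces to a simple exceedance calculation: given a historical mean event rate, the probability of at least k floods or droughts in a planning period follows directly from the Poisson pmf. A sketch with an assumed, illustrative rate (not the study's fitted parameter):

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """P(exactly k events) for a Poisson process with mean lam."""
    return exp(-lam) * lam ** k / factorial(k)

def prob_at_least(k, lam):
    """P(at least k events in the period)."""
    return 1.0 - sum(poisson_pmf(i, lam) for i in range(k))

# If floods historically average 2 per decade (assumed rate),
# the chance of 3 or more in the next decade:
p = prob_at_least(3, 2.0)  # about 0.32
```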

  16. The NASA Hydrogen Energy Systems Technology study - A summary

    NASA Technical Reports Server (NTRS)

    Laumann, E. A.

    1976-01-01

    This study is concerned with: hydrogen use, alternatives and comparisons, hydrogen production, factors affecting application, and technology requirements. Two scenarios for future use are explained. One is called the reference hydrogen use scenario and assumes continued historic uses of hydrogen along with additional use for coal gasification and liquefaction, consistent with the Ford technical fix baseline (1974) projection. The expanded scenario relies on the nuclear electric economy (1973) energy projection and assumes the addition of limited new uses such as experimental hydrogen-fueled aircraft, some mixing with natural gas, and energy storage by utilities. Current uses and supply of hydrogen are described, and the technological requirements for developing new methods of hydrogen production are discussed.

  17. Evaluating United States and world consumption of neodymium, dysprosium, terbium, and praseodymium in final products

    NASA Astrophysics Data System (ADS)

    Hart, Matthew

    This paper develops scenarios of future rare-earth-magnet metal (neodymium, dysprosium, terbium, and praseodymium) consumption in the permanent magnets used in wind turbines and hybrid electric vehicles. The scenarios start with naive base-case scenarios for growth in wind-turbine and hybrid-electric-vehicle sales over the period 2011 to 2020, using historical data for each good. These naive scenarios assume that future growth follows time trends in historical data and does not depend on any exogenous variable. Specifically, growth of each technological market follows historical time trends, and the amount of rare earths used per unit of technology remains fixed. The chosen reference year is 2010. Implied consumptions of the rare earth magnet metals are calculated from these scenarios. Assumptions are made for the material composition of permanent magnets, the market share of permanent-magnet wind turbines and vehicles, and magnet weight per unit of technology. Different scenarios estimate how changes in factors like the material composition of magnets, growth of the economy, and the price of a substitute could affect future consumption. Each scenario presents a different method for reducing rare earth consumption and could be interpreted as potential policy choices. In 2010, the consumption (metric tons, rare-earth-oxide equivalent) of each rare-earth-magnet metal was as follows. Total neodymium consumption in the world for both technologies was 995 tons; dysprosium consumption was 133 tons; terbium consumption was 50 tons; praseodymium consumption was zero tons. The base scenario for wind turbines shows there could be strong, exponential growth in the global wind turbine market. New U.S. sales of hybrid vehicles would decline (in line with the current economic recession) while non-U.S. sales increase through 2020. There would be an overall increase in the total amount of magnetic rare earths consumed in the world. 
Total consumption of each rare earth in the short-term (2015) and mid-term (2020) scenarios could range from 1,984 to 6,475 tons (2015) and from 3,487 to 13,763 tons (2020) of neodymium; from 331 to 864 tons (2015) and 587 to 1,834 tons (2020) of dysprosium; from 123 to 325 tons (2015) and 219 to 687 tons (2020) of terbium; and, finally, from zero to 871 tons (2015) and zero to 1,493 tons (2020) of praseodymium. Hybrid vehicle sales in non-U.S. countries could account for a large portion of magnetic rare earth consumption. Wind turbine and related rare earth consumption growth will also be driven by non-U.S. countries, especially developing nations like China. Despite wind turbines using bigger magnets, the sheer volume of hybrids sold in non-U.S. markets could account for most future consumption of permanent magnets and their rare earths.
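    The naive scenario construction described in this record is straightforward to reproduce in outline: extrapolate unit sales along a fixed exponential time trend, then multiply through by magnet mass per unit and rare-earth content. All inputs below are hypothetical placeholders, not the thesis's data.

```python
def annual_consumption_t(units, magnet_kg_per_unit, re_fraction,
                         market_share=1.0):
    """Rare-earth tons implied by one technology in one year."""
    return units * market_share * magnet_kg_per_unit * re_fraction / 1000.0

def project(units0, growth, years):
    """Naive base case: sales follow a fixed exponential time trend,
    independent of any exogenous variable."""
    return [units0 * (1 + growth) ** t for t in range(years + 1)]

# Hypothetical inputs: 1,000,000 hybrids in the reference year, 10% annual
# sales growth, 1 kg of magnet per vehicle, 30% Nd content by mass.
hybrids = project(1_000_000, 0.10, 10)  # reference year through year 10
nd_tons = [annual_consumption_t(u, 1.0, 0.30) for u in hybrids]
```

Each policy-like scenario in the record corresponds to perturbing one of these inputs: the rare-earth fraction (material substitution), the growth rate (economic conditions), or the market share (substitute technologies).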

  18. Study on mitigation of pulsed heat load for ITER cryogenic system

    NASA Astrophysics Data System (ADS)

    Peng, N.; Xiong, L. Y.; Jiang, Y. C.; Tang, J. C.; Liu, L. Q.

    2015-03-01

    One of the key requirements for the ITER cryogenic system is the mitigation of the pulsed heat load deposited in the magnet system due to magnetic field variation and pulsed DT neutron production. As one of the control strategies, bypass valves of the Toroidal Field (TF) case helium loop would be adjusted to mitigate the pulsed heat load to the LHe plant. A quasi-3D time-dependent thermal-hydraulic analysis of the TF winding packs and TF case has been performed to study the behavior of the TF magnets during the reference plasma scenario, with pulses of 400 s burn and a repetition time of 1800 s. The model is based on a 1D helium flow and quasi-3D solid heat conduction model. The whole TF magnet is simulated taking into account thermal conduction between the winding pack and case, which are cooled separately. The heat loads are given as input information, and include AC losses in the conductor, eddy current losses in the structure, thermal radiation, thermal conduction and nuclear heating. The simulation results indicate that the temperature variation of the TF magnet stays within the allowable range when the smooth control strategy is active.

  19. Dynamic assessment of urban economy-environment-energy system using system dynamics model: A case study in Beijing.

    PubMed

    Wu, Desheng; Ning, Shuang

    2018-07-01

    Economic development, accompanied by environmental damage and energy depletion, has become an essential concern nowadays. There is a complicated and comprehensive interaction between economics, environment and energy. Understanding the operating mechanism of the Energy-Environment-Economy (3E) model and its key factors is an inherent part of dealing with the issue. In this paper, we combine a System Dynamics model and a Geographic Information System to analyze the energy-environment-economy (3E) system both temporally and spatially, which explicitly explores the interaction of economics, energy, and environment and the effects of the key influencing factors. Beijing is selected as a case study to verify our SD-GIS model. Alternative scenarios, e.g., current, technology, energy and environment scenarios, are explored and compared. Simulation results show that the current scenario is not sustainable; the technology scenario is favorable to economic growth; and the environment scenario maintains a balanced path of development for long-term stability. Policy-making insights are given based on our results and analysis. Copyright © 2018 Elsevier Inc. All rights reserved.

  20. MEGASTAR: The Meaning of Energy Growth: An Assessment of Systems, Technologies, and Requirements

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A methodology for the display and analysis of postulated energy futures for the United States is presented. A systems approach that includes the methodology of technology assessment is used to examine three energy scenarios: the Westinghouse Nuclear Electric Economy, the Ford Technical Fix Base Case, and a MEGASTAR-generated alternate to the Ford Technical Fix Base Case. The three scenarios represent different paths of energy consumption from the present to the year 2000. Associated with these paths are various mixes of fuels, conversion, distribution, conservation and end-use technologies. MEGASTAR presents the estimated times and unit requirements to supply the fuels, conversion and distribution systems for the postulated end uses of the three scenarios, and then estimates the aggregate manpower, materials, and capital requirements needed to develop the energy system described by each scenario. The total requirements and the energy subsystems for each scenario are assessed for their primary impacts on society, the environment, technology and the economy.

  1. Validation of software for calculating the likelihood ratio for parentage and kinship.

    PubMed

    Drábek, J

    2009-03-01

    Although the likelihood ratio is a well-known statistical technique, commercial off-the-shelf (COTS) software products for its calculation are not sufficiently validated to suit the general requirements for the competence of testing and calibration laboratories (the EN/ISO/IEC 17025:2005 norm) per se. The software in question can be considered critical, as it directly weighs the forensic evidence that allows judges to decide on guilt or innocence, or to identify a person or kin (e.g., in mass fatalities). For these reasons, accredited laboratories shall validate likelihood ratio software in accordance with the above norm. To validate software for calculating the likelihood ratio in parentage/kinship scenarios, I assessed available vendors, chose two programs (Paternity Index and familias) for testing, and finally validated them using tests derived from elaboration of the available guidelines in the fields of forensics, biomedicine, and software engineering. MS Excel calculations using known likelihood ratio formulas, or peer-reviewed results of difficult paternity cases, were used as a reference. Using seven test cases, it was found that both programs satisfied the requirements for basic paternity cases. However, only a combination of the two software programs fulfills the criteria needed for our purpose across the whole spectrum of functions under validation, with the exception of providing algebraic formulas in cases of mutation and/or silent alleles.
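
    For readers unfamiliar with the arithmetic being validated: the single-locus paternity index is the ratio X/Y, where X is the probability that the alleged father transmits the obligate paternal allele and Y the probability that a random man does (the allele frequency). A minimal sketch for the unambiguous-obligate-allele case, without mutation or silent alleles (this is the textbook formula, not the code of either validated program):

```python
def paternity_index(allele_freq, copies_in_alleged_father):
    """Single-locus PI for an unambiguous obligate paternal allele.

    copies_in_alleged_father: 0, 1 or 2 copies of the obligate allele,
    so X = copies/2 is the transmission probability; Y = allele_freq.
    """
    return (copies_in_alleged_father / 2.0) / allele_freq

def combined_pi(per_locus_pis):
    """Independent loci multiply into the combined paternity index."""
    cpi = 1.0
    for pi in per_locus_pis:
        cpi *= pi
    return cpi

def probability_of_paternity(cpi, prior=0.5):
    """Posterior probability of paternity under the conventional flat prior."""
    return cpi * prior / (cpi * prior + (1.0 - prior))
```

    With allele frequency 0.1, a homozygous carrier gives PI = 1/p = 10 and a heterozygous carrier PI = 1/(2p) = 5, the kind of hand-checkable value an MS Excel reference sheet would contain.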

  2. Restriction to period of interest improves informative value of death certificate only proportions in period analysis of cancer survival.

    PubMed

    Brenner, Hermann; Jansen, Lina

    2015-12-01

    The proportion of cases registered by death certificates only (DCO) is a widely used indicator for potential bias in cancer survival studies. Period analysis is increasingly used to derive up-to-date cancer survival estimates. We aimed to assess whether reported DCO proportions should be restricted to the specific recent calendar period ("restricted period") or refer to all diagnosis years of included patients ("full period"). We assessed correlations of bias in period survival estimates resulting from DCO cases with DCO proportions in the restricted and full period, respectively. We used cancer registry data to simulate bias and DCO proportions resulting from various patterns of underreporting of deceased cases. We show results for six common cancers with very different prognosis and five different age groups. In all scenarios, the expected bias was highly correlated with expected DCO proportions in both periods, but correlations were consistently higher with DCO proportions in the restricted period. In period analyses of cancer survival, DCO proportions for the restricted period of specific interest are a better indicator of potential bias due to underreporting of deceased cases than DCO proportions for all years of diagnosis of included patients. Copyright © 2015 Elsevier Inc. All rights reserved.
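
    The core computation, correlating simulated bias with DCO proportions, is a plain Pearson correlation. A self-contained sketch with invented illustrative numbers (not the registry data used in the study):

```python
import math

def pearson_r(x, y):
    """Sample Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

# invented numbers: simulated survival bias (% points) vs DCO proportions (%)
bias = [0.5, 1.1, 2.0, 3.2, 4.1]
dco_restricted = [1.0, 2.1, 3.9, 6.2, 8.0]
dco_full = [2.0, 2.5, 5.5, 6.0, 9.5]
r_restricted = pearson_r(bias, dco_restricted)
r_full = pearson_r(bias, dco_full)
```

    With these made-up numbers the restricted-period proportions track the bias more tightly, mirroring the qualitative pattern the paper reports.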

  3. A price-responsive dispatching strategy for Vehicle-to-Grid: An economic evaluation applied to the case of Singapore

    NASA Astrophysics Data System (ADS)

    Pelzer, Dominik; Ciechanowicz, David; Aydt, Heiko; Knoll, Alois

    2014-06-01

    Employing electric vehicles as short-term energy storage could improve power system stability and at the same time create a new income source for vehicle owners. In this paper, the economic viability of this concept, referred to as Vehicle-to-Grid, is investigated. For this purpose, a price-responsive charging and dispatching strategy built upon temporally resolved electricity market data is presented. This concept allows vehicle owners to maximize returns by restricting market participation to profitable time periods. As a case study, the strategy is applied to the example of Singapore. It is shown that an annual loss of S$1000 resulting from a non-price-responsive strategy, as employed in previous works, can be turned into a S$130 profit by applying the price-responsive approach. In addition to this scenario, realistic mobility patterns that restrict the temporal availability of vehicles are considered; in this case, profits in the range of S$21-S$121 are achievable. Returns of this order of magnitude are not expected to make Vehicle-to-Grid a viable business case; sensitivity analyses, however, show that improved technical parameters could increase profitability. It is further expected that applying the price-responsive strategy to other national markets may yield significantly greater returns.
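
    A price-responsive strategy of the kind described can be sketched as a simple quantile-threshold rule: buy energy in the cheapest hours, sell in the most expensive ones, subject to battery limits and round-trip losses. All parameters below (capacity, charger power, efficiency, prices) are invented placeholders, not the paper's Singapore market data:

```python
def price_responsive_profit(prices, capacity=20.0, power=3.0, eff=0.9):
    """Quantile-threshold dispatch: charge below the 25th price percentile,
    discharge above the 75th, within battery and charger limits."""
    ordered = sorted(prices)
    buy_thr = ordered[len(ordered) // 4]
    sell_thr = ordered[3 * len(ordered) // 4]
    soc, profit = 0.0, 0.0
    for price in prices:
        if price <= buy_thr and soc < capacity:
            e = min(power, capacity - soc)
            soc += e
            profit -= price * e / eff    # grid purchase incl. charging losses
        elif price >= sell_thr and soc > 0.0:
            e = min(power, soc)
            soc -= e
            profit += price * e * eff    # delivered energy after discharge losses
    return profit

# invented alternating cheap/expensive price series (currency units per kWh)
prices = [0.05, 0.30] * 12
profit = price_responsive_profit(prices)
```

    The profit is positive only because discharging is confined to high-price periods; a non-price-responsive strategy would also cycle the battery during the cheap hours and lose money on the spread.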

  4. Overview of the Hydrogen Financial Analysis Scenario Tool (H2FAST); NREL (National Renewable Energy Laboratory)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Melaina, Marc; Bush, Brian; Penev, Michael

    This presentation provides an introduction to the Hydrogen Financial Analysis Scenario Tool (H2FAST) and includes an overview of each of the three versions of H2FAST: the Web tool, the Excel spreadsheet version, and the beta version of the H2FAST Business Case Scenario tool.

  5. Designing, Developing and Implementing a Software Tool for Scenario Based Learning

    ERIC Educational Resources Information Center

    Norton, Geoff; Taylor, Mathew; Stewart, Terry; Blackburn, Greg; Jinks, Audrey; Razdar, Bahareh; Holmes, Paul; Marastoni, Enrique

    2012-01-01

    The pedagogical value of problem-based and inquiry-based learning activities has led to increased use of this approach in many courses. While scenarios or case studies were initially presented to learners as text-based material, the development of modern software technology provides the opportunity to deliver scenarios as e-learning modules,…

  6. Technical geothermal potential of urban subsurface influenced by land surface effects

    NASA Astrophysics Data System (ADS)

    Rivera, Jaime A.; Blum, Philipp; Bayer, Peter

    2016-04-01

    Changes in land use are among the most conspicuous anthropogenic perturbations in urban environments. They significantly change the coupled thermal regime at the ground surface, in most cases leading to increased ground surface temperatures (GST). The associated elevated vertical heat fluxes act at different scales and can influence thermal conditions down to depths of several tens of meters. The urban subsurface thus often stores a greater amount of heat than less affected rural surroundings. This stored heat is regarded as a potential source of low-enthalpy geothermal energy to supply the heating demands of urban areas. In this work, we explore the technical geothermal potential of the urban subsurface via ground-coupled heat pumps with borehole heat exchangers (BHE). This is tackled by semi-analytical line-source equations. The commonly used response factors, or g-functions, are modified to include transient land surface effects. By including this additional source of heat, the new formulation allows analysis of the effect of pre-existing urban warming as well as of different exploitation schemes fulfilling standard renewable and sustainable criteria. In our generalized reference scenario, it is demonstrated that energy gains for a single BHE may be up to 40 % when compared to non-urbanized conditions. For a scenario including the interaction of multiple BHEs, results indicate that it would be possible to supply between 6 % and 27 % of the heating demands of Central European urban settlements in a renewable way. The methodology is also applied to a case study of the city of Zurich, Switzerland, where the detailed evolution of land use is available.
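
    The classical building block behind such g-functions is the infinite line-source solution, ΔT(r,t) = q/(4πλ)·E1(r²/4αt). A minimal sketch with a series evaluation of the exponential integral and placeholder ground parameters (the paper's formulation additionally superimposes transient land-surface effects, which are not reproduced here):

```python
import math

def exp_int_e1(x, terms=30):
    """E1(x) via the convergent series -gamma - ln(x) + sum (-1)^(k+1) x^k/(k*k!)."""
    gamma = 0.5772156649015329
    s = -gamma - math.log(x)
    fact = 1.0
    for k in range(1, terms + 1):
        fact *= k
        s += (-1) ** (k + 1) * x ** k / (k * fact)
    return s

def ils_delta_t(q, lam, alpha, r, t):
    """Infinite line source: temperature change at radius r after time t
    of constant heat extraction/injection q (W/m)."""
    return q / (4.0 * math.pi * lam) * exp_int_e1(r * r / (4.0 * alpha * t))

# placeholder ground properties and a one-year extraction pulse
q, lam, alpha = 50.0, 2.0, 1.0e-6              # W/m, W/(m*K), m^2/s
t = 3.15e7                                     # s, about one year
dt_wall = ils_delta_t(q, lam, alpha, 0.06, t)  # at the borehole wall
dt_far = ils_delta_t(q, lam, alpha, 1.0, t)    # 1 m away
```

    The response decays with radius, which is why interacting multiple-BHE fields must superpose many such terms before urban-warming corrections are even considered.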

  7. Developing ecological scenarios for the prospective aquatic risk assessment of pesticides.

    PubMed

    Rico, Andreu; Van den Brink, Paul J; Gylstra, Ronald; Focks, Andreas; Brock, Theo Cm

    2016-07-01

    The prospective aquatic environmental risk assessment (ERA) of pesticides is generally based on the comparison of predicted environmental concentrations in edge-of-field surface waters with regulatory acceptable concentrations derived from laboratory and/or model ecosystem experiments with aquatic organisms. New improvements in mechanistic effect modeling have allowed a better characterization of the ecological risks of pesticides through the incorporation of biological trait information and landscape parameters to assess individual, population and/or community-level effects and recovery. Similarly to exposure models, ecological models require scenarios that describe the environmental context in which they are applied. In this article, we propose a conceptual framework for the development of ecological scenarios that, when merged with exposure scenarios, will constitute environmental scenarios for prospective aquatic ERA. These "unified" environmental scenarios are defined as the combination of the biotic and abiotic parameters that are required to characterize exposure, (direct and indirect) effects, and recovery of aquatic nontarget species under realistic worst-case conditions. Ideally, environmental scenarios aim to avoid a potential mismatch between the parameter values and the spatial-temporal scales currently used in aquatic exposure and effect modeling. This requires a deeper understanding of the ecological entities we intend to protect, which can be preliminarily addressed by the formulation of ecological scenarios. In this article we present a methodological approach for the development of ecological scenarios and illustrate this approach by a case-study for Dutch agricultural ditches and the example focal species Sialis lutaria. Finally, we discuss the applicability of ecological scenarios in ERA and propose research needs and recommendations for their development and integration with exposure scenarios. Integr Environ Assess Manag 2016;12:510-521. 
© 2015 SETAC.

  8. TCL2 Ocean Scenario Replay

    NASA Technical Reports Server (NTRS)

    Mohlenbrink, Christoph P.; Omar, Faisal Gamal; Homola, Jeffrey R.

    2017-01-01

    This is a video replay of system data generated from the UAS Traffic Management (UTM) Technical Capability Level (TCL) 2 flight demonstration in Nevada and rendered in Google Earth. Depicted in the replay is a particular set of flights conducted as part of what was referred to as the Ocean scenario. The test range and surrounding area are presented, followed by an overview of operational volumes. System messaging is also displayed, as well as a replay of all five test flights as they occurred.

  9. A protocol for the development of a critical thinking assessment tool for nurses using a Delphi technique.

    PubMed

    Jacob, Elisabeth; Duffield, Christine; Jacob, Darren

    2017-08-01

    The aim of this study was to develop an assessment tool to measure the critical thinking ability of nurses. As an increasing number of complex patients are admitted to hospitals, it becomes ever more important that nurses recognize changes in health status and detect deterioration early, which requires critical thinking skills. Registered Nurses are expected to commence their clinical careers with the critical thinking skills necessary to ensure safe nursing practice, yet there is currently no published tool to assess critical thinking skills that is context-specific to Australian nurses. A modified Delphi study will be used for the project. This study will develop a series of unfolding case scenarios, using national health data, with multiple-choice questions to assess critical thinking. Face validity of the scenarios will be determined by an expert reference group of clinical and academic nurses. A Delphi study will determine the answers to the scenario questions. Panel members will be expert clinicians and educators from two states in Australia. Rasch analysis of the questionnaire will assess the validity and reliability of the tool. Funding for the study and Research Ethics Committee approval were obtained in March and November 2016, respectively. Patient outcomes and safety are directly linked to nurses' critical thinking skills. This study will develop an assessment tool to provide a standardized method of measuring nurses' critical thinking skills across Australia, giving healthcare providers greater confidence in the critical thinking level of graduate Registered Nurses. © 2017 John Wiley & Sons Ltd.

  10. Experimental study of heat pump thermodynamic cycles using CO2 based mixtures - Methodology and first results

    NASA Astrophysics Data System (ADS)

    Bouteiller, Paul; Terrier, Marie-France; Tobaly, Pascal

    2017-02-01

    The aim of this work is to study heat pump cycles using CO2-based mixtures as working fluids. Since adding other chemicals to CO2 shifts the critical point and, more generally, the equilibrium lines, lower operating pressures as well as higher global efficiencies may be expected. A single-stage pure-CO2 cycle is used as a reference, with fixed external conditions. Two scenarios are considered: water is heated from 10 °C to 65 °C in the Domestic Hot Water scenario and from 30 °C to 35 °C in the Central Heating scenario. In both cases, water at the evaporator inlet is set at 7 °C to represent outdoor temperature conditions. In order to understand the dynamic behaviour of thermodynamic cycles with mixtures, it is essential to measure the circulating fluid composition. To this end, we have developed a non-intrusive method: online optical flow cells allow the recording of infrared spectra by means of a Fourier Transform Infra-Red spectrometer. A careful calibration is performed by measuring a statistically significant number of spectra for samples of known composition; a statistical model is then constructed to relate spectra to compositions. After calibration, compositions are obtained by recording a spectrum in a few seconds, thus allowing dynamic analysis. This article describes the experimental setup and the composition measurement techniques, and then gives a first account of results with pure CO2 and with the addition of propane or R-1234yf.
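
    The calibration step, relating spectra to compositions, is at heart a least-squares inversion. A toy sketch for a binary mixture using the 2x2 normal equations (the band intensities below are invented, not measured CO2/propane spectra, and the authors' actual statistical model is built from many calibration samples):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def composition_from_spectrum(spectrum, k1, k2):
    """Least-squares fit spectrum ~ c1*k1 + c2*k2 via 2x2 normal equations."""
    a11, a12, a22 = dot(k1, k1), dot(k1, k2), dot(k2, k2)
    b1, b2 = dot(k1, spectrum), dot(k2, spectrum)
    det = a11 * a22 - a12 * a12
    return (b1 * a22 - b2 * a12) / det, (a11 * b2 - a12 * b1) / det

# invented band absorbances for the two pure components (not real spectra)
k_co2 = [1.0, 0.2, 0.05, 0.6]
k_c3h8 = [0.1, 0.8, 0.9, 0.3]
mix = [0.7 * a + 0.3 * b for a, b in zip(k_co2, k_c3h8)]  # synthetic 70/30 mixture
c1, c2 = composition_from_spectrum(mix, k_co2, k_c3h8)
```

    Because the fit only needs one recorded spectrum per evaluation, composition can be tracked in near real time once the calibration basis is established, which is what enables the dynamic analysis described above.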

  11. Retrievable Inferior Vena Cava Filters in Trauma Patients: Prevalence and Management of Thrombus Within the Filter.

    PubMed

    Pan, Y; Zhao, J; Mei, J; Shao, M; Zhang, J; Wu, H

    2016-12-01

    The incidence of thrombus within retrievable filters placed in trauma patients with confirmed DVT was investigated at the time of retrieval, and the optimal treatment for this clinical scenario was assessed. A technique called "filter retrieval with manual negative pressure aspiration thrombectomy" for management of filter thrombus was introduced and assessed. The retrievable filters referred for retrieval between January 2008 and December 2015 were retrospectively reviewed to determine the incidence of filter thrombus on a pre-retrieval cavogram. The clinical outcomes of different managements of thrombus within filters were recorded and analyzed. During the study period, 764 patients with implanted Aegisy filters were referred for filter removal; in 236 of these cases (134 male patients, mean age 50.2 years), thrombus within the filter was observed on the initial pre-retrieval IVC venogram 12-39 days after insertion (mean 16.9 days). The incidence of infra-filter thrombus was 30.9%, and complete occlusion of the filter-bearing IVC was seen in 2.4% (18) of cases. Retrieval was attempted in all 121 cases with small clots using a regular snare and sheath technique, and was successful in 120. A total of 116 cases with massive thrombus or IVC occlusion by thrombus were treated by CDT and/or the new retrieval technique. Overall, 213 cases (90.3%) of thrombus in the filter were removed successfully without PE. A small thrombus within the filter can be safely removed without additional management. CDT for reduction of the clot burden in filters was effective and safe. Filter retrieval with manual negative pressure aspiration thrombectomy seems reasonable and valuable for management of massive thrombus within filters in some patients. Full assessment of the value and safety of this technique requires additional studies. Copyright © 2016 European Society for Vascular Surgery. Published by Elsevier Ltd. All rights reserved.

  12. Energy and environmental evaluation of combined cooling heating and power system

    NASA Astrophysics Data System (ADS)

    Bugaj, Andrzej

    2017-11-01

    The paper addresses the problems of implementing a combined cooling, heating and power (CCHP) system in an industrial facility with well-defined demand profiles for cooling, heating and electricity. The application of the CCHP system in this particular industrial facility is evaluated by comparison with a reference system that consists of three conventional methods of energy supply: (a) electricity from the external grid, (b) heat from gas-fired boilers and (c) cooling from vapour compression chillers run on electricity from the grid. The CCHP system scenario is based on a combined heat and power (CHP) plant with a gas turbine-compressor arrangement and a single-effect water/lithium bromide absorption chiller. The two scenarios are analysed in terms of annual primary energy usage as well as CO2 emissions. The results of the analysis quantify the primary energy savings of the CCHP system in comparison with the reference system. Furthermore, the environmental impact of CCHP usage, in the form of greenhouse gas emission reductions, compares quite favourably with the reference conventional option.

  13. RMP*Comp

    EPA Pesticide Factsheets

    You can use this free software program to complete the Off-site Consequence Analyses (both worst case scenarios and alternative scenarios) required under the Risk Management Program rule, so that you don't have to do calculations by hand.

  14. ED-WAVE tool design approach: Case of a textile wastewater treatment plant in Blantyre, Malawi

    NASA Astrophysics Data System (ADS)

    Chipofya, V.; Kraslawski, A.; Avramenko, Y.

    The ED-WAVE tool is a PC-based package for imparting training on wastewater treatment technologies. The system consists of four modules: Reference Library, Process Builder, Case Study Manager, and Treatment Adviser. The principles of case-based design and case-based reasoning as applied in the ED-WAVE tool are used in this paper to evaluate the design approach of the wastewater treatment plant at the Mapeto David Whitehead & Sons (MDW&S) textile and garments factory, Blantyre, Malawi. The case compared with MDW&S in the ED-WAVE tool is Textile Case 4 in Sri Lanka (2003). Equalisation, coagulation and rotating biological contactors form the sequence of treatment units at Textile Case 4 in Sri Lanka, whereas screening, oxidation ditches and sedimentation form the sequence at the MDW&S textile and garments factory. The study suggests that aerobic biological treatment is necessary in the treatment of wastewater from a textile and garments factory. MDW&S incorporates a sedimentation process, which is necessary for the removal of settleable matter before the effluent is discharged to the municipal wastewater treatment plant. The study confirmed the practical use of the ED-WAVE tool in the design of wastewater treatment systems, where, after a new situation is encountered, already collected decision scenarios (cases) are invoked and modified in order to arrive at a particular design alternative. What is necessary, however, is to appropriately modify the case arrived at through the Case Study Manager in order to produce a design appropriate to the local situation, taking into account technical, socio-economic and environmental aspects.

  15. Response prediction techniques and case studies of a path blocking system based on Global Transmissibility Direct Transmissibility method

    NASA Astrophysics Data System (ADS)

    Wang, Zengwei; Zhu, Ping; Zhao, Jianxuan

    2017-02-01

    In this paper, the prediction capabilities of the Global Transmissibility Direct Transmissibility (GTDT) method are further developed. Two path-blocking techniques that use only the easily measured variables of the original system to predict the response of a path-blocked system are generalized to finite element models of continuous systems. The proposed techniques are derived theoretically in a general form for the scenarios of setting the response of a subsystem to zero and of removing the link between two directly connected subsystems. The objective of this paper is to verify the reliability of the proposed techniques by finite element simulations. Two typical cases, structural vibration transmission and structure-borne sound, in two different configurations are employed to illustrate the validity of the proposed techniques. The points of attention for each case are discussed, and conclusions are given. It is shown that for the two cases of blocking a subsystem, the proposed techniques are able to predict the new response using measured variables of the original system, even though the operational forces are unknown. For the structural vibration transmission case of removing a connector between two components, the proposed techniques are applicable only when the rotational responses of the connector are very small. The proposed techniques offer relative path measures and provide an alternative way to deal with NVH problems. The work in this paper provides guidance and reference for the engineering application of the GTDT prediction techniques.

  16. Current and Future Urban Stormwater Flooding Scenarios in the Southeast Florida Coasts

    NASA Astrophysics Data System (ADS)

    Huq, E.; Abdul-Aziz, O. I.

    2016-12-01

    This study computed rainfall-driven stormwater flooding under historical and future reference scenarios for the Southeast Coasts Basin of Florida. A large-scale, mechanistic rainfall-runoff model was developed using the U.S. EPA Storm Water Management Model (SWMM 5.1). The model parameterized important processes of urban hydrology, groundwater, and sea level, while including hydroclimatological variables and land use features. The model was calibrated and validated with historical streamflow data. It was then used to estimate the sensitivity of stormwater runoff to reference changes in hydroclimatological variables (rainfall and evapotranspiration) and different land use/land cover features (imperviousness, roughness). Furthermore, historical (1970-2000) and potential 2050s stormwater budgets were estimated for the basin by incorporating climatic projections from different GCMs and RCMs, as well as relevant projections of sea level and land use/cover. Comparative synthesis of the historical and future scenarios, along with the results of the sensitivity analysis, can aid efficient management of stormwater flooding for the southeast Florida coasts and similar urban centers under a changing regime of climate, sea level, land use/cover and hydrology.

  17. Evaluation of Hybrid Power Plants using Biomass, Photovoltaics and Steam Electrolysis for Hydrogen and Power Generation

    NASA Astrophysics Data System (ADS)

    Petrakopoulou, F.; Sanz, J.

    2014-12-01

    Steam electrolysis is a promising process for large-scale centralized hydrogen production, and it is also considered an excellent option for the efficient use of renewable solar and geothermal energy resources. This work studies the operation of an intermediate-temperature steam electrolyzer (ITSE) and its incorporation into hybrid power plants that include biomass combustion and photovoltaic (PV) panels. The plants generate both electricity and hydrogen. The reference (biomass) power plant and four variants of a hybrid biomass-PV plant incorporating the reference biomass plant and the ITSE are simulated and evaluated using exergetic analysis. The variants of the hybrid power plant are associated with (1) air recirculation from the electrolyzer to the biomass power plant, (2) elimination of the sweep gas of the electrolyzer, (3) replacement of two electric heaters with gas/gas heat exchangers, and (4) replacement of two heat exchangers of the reference electrolyzer unit with one heat exchanger that uses steam from the biomass power plant. In all cases, 60% of the electricity required in the electrolyzer is covered by the biomass plant and 40% by the photovoltaic panels. When comparing the hybrid plants with the reference biomass power plant, which has identical operation and structure to that incorporated in the hybrid plants, we observe an efficiency decrease that varies depending on the scenario. The efficiency decrease stems mainly from the low effectiveness of the photovoltaic panels (14.4%). When comparing the hybrid scenarios, we see that the elimination of the sweep gas decreases the power consumption, owing to the elimination of the compressor used to cover the pressure losses of the filter, the heat exchangers and the electrolyzer. Nevertheless, if the sweep gas is used to preheat the air entering the boiler of the biomass power plant, the efficiency of the plant increases. When the electric heaters are replaced with gas/gas heat exchangers, the efficiency of the plant also increases, although the higher pressure losses of the flue-gas path increase the requirements of the air compressor. Finally, replacing the two heat exchangers of the electrolyzer unit with one that uses extracted steam from the biomass power plant can lead to an overall decrease in the operating and investment costs of the plant.

  18. 3D reconstruction optimization using imagery captured by unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Bassie, Abby L.; Meacham, Sean; Young, David; Turnage, Gray; Moorhead, Robert J.

    2017-05-01

    Because unmanned aerial vehicles (UAVs) are emerging as an indispensable image acquisition platform in precision agriculture, it is vitally important that researchers understand how to optimize UAV camera payloads for analysis of surveyed areas. In this study, imagery captured by a Nikon RGB camera attached to a Precision Hawk Lancaster was used to survey an agricultural field from six different altitudes ranging from 45.72 m (150 ft) to 121.92 m (400 ft). After collecting imagery, two different software packages (MeshLab and AgiSoft) were used to measure predetermined reference objects within six three-dimensional (3D) point clouds (one per altitude scenario). In-silico measurements were then compared to actual reference object measurements recorded with a tape measure. Deviations of in-silico measurements from actual measurements were recorded as Δx, Δy, and Δz, and the average measurement deviation in each coordinate direction was calculated for each of the six flight scenarios. Results from MeshLab vs. AgiSoft offered insight into the effectiveness of GPS-defined point cloud scaling in comparison to user-defined point cloud scaling. In three of the six flight scenarios, MeshLab's 3D imaging software (user-defined scale) was able to measure object dimensions of 50.8 to 76.2 cm (20-30 in.) with greater than 93% accuracy. The largest average deviation from actual measurements in any flight scenario was 14.77 cm (5.82 in.). Analysis of the point clouds in AgiSoft (GPS-defined scale) yielded even smaller Δx, Δy, and Δz than the MeshLab measurements in over 75% of the flight scenarios. The precision of these results is satisfactory for a wide variety of precision agriculture applications focused on differentiating and identifying objects using remote imagery.
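
    The deviation bookkeeping described above is straightforward to reproduce. A short sketch with invented sample measurements (not the study's data):

```python
def axis_deviations(measured, actual):
    """Mean absolute deviation per axis over paired (x, y, z) measurements."""
    n = len(measured)
    return tuple(
        sum(abs(m[k] - a[k]) for m, a in zip(measured, actual)) / n
        for k in range(3)
    )

def percent_accuracy(measured, actual):
    """Accuracy of one length measurement relative to the tape-measure value."""
    return 100.0 * (1.0 - abs(measured - actual) / actual)

# invented in-silico vs tape-measure values, in cm
measured = [(50.0, 75.5, 30.2), (51.5, 76.0, 29.0)]
actual = [(50.8, 76.2, 30.0), (50.8, 76.2, 30.0)]
dx, dy, dz = axis_deviations(measured, actual)
```

    Averaging per axis rather than per object is what lets the comparison isolate systematic scaling errors (e.g., GPS-defined vs user-defined scale) in each coordinate direction.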

  19. Modeling the Water - Quality Effects of Changes to the Klamath River Upstream of Keno Dam, Oregon

    USGS Publications Warehouse

    Sullivan, Annett B.; Sogutlugil, I. Ertugrul; Rounds, Stewart A.; Deas, Michael L.

    2013-01-01

    The Link River to Keno Dam (Link-Keno) reach of the Klamath River, Oregon, generally has periods of water-quality impairment during summer, including low dissolved oxygen, elevated concentrations of ammonia and algae, and high pH. Efforts are underway to improve water quality in this reach through a Total Maximum Daily Load (TMDL) program and other management and operational actions. To assist in planning, a hydrodynamic and water-quality model was used in this study to provide insight about how various actions could affect water quality in the reach. These model scenarios used a previously developed and calibrated CE-QUAL-W2 model of the Link-Keno reach developed by the U.S. Geological Survey (USGS), Watercourse Engineering Inc., and the Bureau of Reclamation for calendar years 2006-09 (referred to as the "USGS model" in this report). Another model of the same river reach was previously developed by Tetra Tech, Inc. and the Oregon Department of Environmental Quality for years 2000 and 2002 and was used in the TMDL process; that model is referred to as the "TMDL model" in this report. This report includes scenarios that (1) assess the effect of TMDL allocations on water quality, (2) provide insight on certain aspects of the TMDL model, (3) assess various methods to improve water quality in this reach, and (4) examine possible water-quality effects of a future warmer climate. Results presented in this report for the first 5 scenarios supersede or augment those that were previously published (scenarios 1 and 2 in Sullivan and others [2011], 3 through 5 in Sullivan and others [2012]); those previous results are still valid, but the results for those scenarios in this report are more current.

  20. Readings from Visibility Meters: Do They Really Mean the Maximum Distance of Observing A Black Object?

    NASA Astrophysics Data System (ADS)

    Li, M.; Zhang, S.; Garcia-Menendez, F.; Monier, E.; Selin, N. E.

    2016-12-01

    Climate change, favoring more heat waves and episodes of stagnant air, may deteriorate air quality by increasing ozone and fine particulate matter (PM2.5) concentrations and high-pollution episodes. This effect, termed the "climate penalty", has been quantified and explained by many earlier studies in the U.S. and Europe, but research efforts in Asian countries are limited. We evaluate the impact of climate change on air quality and human health in China and India using a modeling framework that links the Massachusetts Institute of Technology Integrated Global System Model to the Community Atmosphere Model (MIT IGSM-CAM). Future climate fields are projected under three climate scenarios: a no-policy reference scenario and two climate stabilization scenarios, with 2100 total radiative forcing targets of 9.7, 4.5 and 3.7 W m-2, respectively. Each climate scenario is run for five representations of climate variability to account for the role of natural variability. Thirty-year chemical transport simulations are conducted for 1981-2010 and 2086-2115 under the three climate scenarios with anthropogenic emissions fixed at year 2000 levels. We find that 2000-2100 climate change under the no-policy reference scenario would increase ozone concentrations in eastern China and northern India by up to 5 ppb through enhancing biogenic emissions and ozone production efficiency. Ozone extreme episodes also become more frequent in these regions, while climate policies can offset most of the increase in ozone episodes. Climate change between 2000 and 2100 would slightly increase anthropogenic PM2.5 concentrations in northern China and Sichuan province, but significantly reduce anthropogenic PM2.5 concentrations in southern China and northern India, primarily due to different chemical responses of sulfate-nitrate-ammonium aerosols to climate change in these regions. Our study also suggests that the mitigation costs of climate policies can be partially offset by health benefits from reduced climate-induced air pollution in China.

  1. Climate Penalty on Air Quality and Human Health in China and India

    NASA Astrophysics Data System (ADS)

    Li, M.; Zhang, S.; Garcia-Menendez, F.; Monier, E.; Selin, N. E.

    2017-12-01

Climate change, favoring more heat waves and episodes of stagnant air, may deteriorate air quality by increasing ozone and fine particulate matter (PM2.5) concentrations and the frequency of high-pollution episodes. This effect, termed the "climate penalty", has been quantified and explained by many earlier studies in the U.S. and Europe, but research efforts in Asian countries are limited. We evaluate the impact of climate change on air quality and human health in China and India using a modeling framework that links the Massachusetts Institute of Technology Integrated Global System Model to the Community Atmosphere Model (MIT IGSM-CAM). Future climate fields are projected under three climate scenarios: a no-policy reference scenario and two climate stabilization scenarios, with 2100 total radiative forcing targets of 9.7, 4.5 and 3.7 W m-2, respectively. Each climate scenario is run for five representations of climate variability to account for the role of natural variability. Thirty-year chemical transport simulations are conducted for 1981-2010 and 2086-2115 under the three climate scenarios, with anthropogenic emissions fixed at year 2000 levels. We find that 2000-2100 climate change under the no-policy reference scenario would increase ozone concentrations in eastern China and northern India by up to 5 ppb by enhancing biogenic emissions and ozone production efficiency. Extreme ozone episodes also become more frequent in these regions, while climate policies can offset most of this increase. Climate change between 2000 and 2100 would slightly increase anthropogenic PM2.5 concentrations in northern China and Sichuan province, but significantly reduce them in southern China and northern India, primarily due to different chemical responses of sulfate-nitrate-ammonium aerosols to climate change in these regions.
Our study also suggests that the mitigation costs of climate policies can be partially offset by health benefits from reduced climate-induced air pollution in China.

  2. Environment, Health and Climate: Impact of African aerosols

    NASA Astrophysics Data System (ADS)

    Liousse, C.; Doumbia, T.; Assamoi, E.; Galy-Lacaux, C.; Baeza, A.; Penner, J. E.; Val, S.; Cachier, H.; Xu, L.; Criqui, P.

    2012-12-01

Fossil fuel and biofuel emissions of particles in Africa are expected to increase significantly in the near future, particularly due to the rapid growth of African cities. In addition to the biomass burning emissions prevailing in these areas, air quality degradation is expected, with important consequences for population health and for climatic/radiative impact. Our group is constructing a new integrated methodology to study the relations between emissions, air quality and their impacts. This approach includes: (1) characterization of African combustion emissions; (2) joint experimental determination of aerosol chemistry from the ultrafine to the coarse fraction and of health effects (toxicology and epidemiology); and (3) integrated environmental, health and radiative modeling. In this work, we show results illustrating our first estimates of African anthropogenic emission impacts: - a new African anthropogenic emission inventory, adapted to regional specificities of traffic, biofuel and industrial emissions, was constructed for the years 2005 and 2030. Biomass burning inventories were also improved within the framework of the AMMA (African Monsoon Multidisciplinary Analysis) program. - the radiative impact of carbonaceous aerosol in Africa was modeled with the TM5 model and the Penner et al. (2011) radiative code for these inventories for 2005 and 2030, and for two emission scenarios: a reference scenario with no further emission controls beyond those achieved in 2003, and a ccc* scenario including planned policies under the Kyoto Protocol and regulations as applied to African emission specificities. We show that enhanced heating is expected with the ccc* scenario emissions, in which the OC fraction is relatively lower than in the reference scenario. - results of short-term POLCA intensive campaigns in Bamako and Dakar, in terms of aerosol chemical characterization linked to specific emission sources and of inflammatory impacts on the respiratory tract studied in vitro.
In these studies, organic carbon particles appeared quite biologically active. Importantly, the air quality improvements obtained through regulations in the ccc* scenario are accompanied by a stronger heating impact.

  3. The influence of taxon sampling on Bayesian divergence time inference under scenarios of rate heterogeneity among lineages.

    PubMed

    Soares, André E R; Schrago, Carlos G

    2015-01-07

Although taxon sampling is commonly considered an important issue in phylogenetic inference, it is rarely considered in the Bayesian estimation of divergence times. In fact, the studies conducted to date have presented ambiguous results, and the relevance of taxon sampling for molecular dating remains unclear. In this study, we developed a series of simulations that, after six hundred Bayesian molecular dating analyses, allowed us to evaluate the impact of taxon sampling on chronological estimates under three scenarios of among-lineage rate heterogeneity. The first scenario allowed us to examine the influence of the number of terminals on age estimates under a strict molecular clock. The second scenario imposed an extreme example of lineage-specific rate variation, and the third permitted extensive rate variation distributed along the branches. We also analyzed empirical data on selected mitochondrial genomes of mammals. Our results showed that in the strict molecular-clock scenario (Case I), taxon sampling had a minor impact on the accuracy of the time estimates, although the precision of the estimates was greater with an increased number of terminals. The effect was similar in the scenario based on rate variation distributed among the branches (Case III). Only under intensive rate variation among lineages (Case II) did taxon sampling result in biased estimates. The results of an empirical analysis corroborated the simulation findings. We demonstrate that taxon sampling affected divergence time inference, but that its impact was significant only when rates deviated from a strict molecular clock. Increased taxon sampling improved the precision and accuracy of the divergence time estimates, with a more pronounced effect on precision. On average, biased estimates were obtained only if lineage rate variation was pronounced. Copyright © 2014 Elsevier Ltd. All rights reserved.
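
The contrast between Case I and Case II above can be illustrated with a toy calculation. This is a hypothetical sketch, not the paper's simulation pipeline; the ages and rates are invented. Under a strict clock, dividing a branch length by the known rate recovers the true age exactly; with lineage-specific rates dated by a single assumed rate, the estimates scatter.

```python
import random

# Toy illustration: branch length = substitution rate * elapsed time,
# and dating divides observed branch lengths by an assumed clock rate.
rng = random.Random(42)
true_age = 10.0        # hypothetical divergence time (Myr)
clock_rate = 0.01      # hypothetical substitutions/site/Myr

# Case I: strict clock -- every lineage evolves at clock_rate,
# so dating recovers the true age exactly.
strict_estimates = [(clock_rate * true_age) / clock_rate for _ in range(5)]

# Case II: strong among-lineage rate variation, but dated with the
# same single assumed rate -- estimates scatter around the truth.
hetero_estimates = [(rng.uniform(0.005, 0.02) * true_age) / clock_rate
                    for _ in range(5)]

print(strict_estimates)                               # all exactly 10.0
print(max(hetero_estimates) - min(hetero_estimates))  # nonzero spread
```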

  4. Insensitivity of evapotranspiration to seasonal rainfall distribution directs climate change impacts at water yield

    NASA Astrophysics Data System (ADS)

    Montaldo, N.; Oren, R.

    2017-12-01

Over the past century, climate change has been altering precipitation regimes across the world. In the Mediterranean regions there is a persistent trend of decreasing precipitation and runoff, driving a desertification process. Given the past shifts in winter precipitation, the impacts on evapotranspiration (ET) need to be carefully evaluated, and the compelling question is: what will be the impact of future climate change scenarios (which predict changes in precipitation and vapor pressure deficit, VPD) on evapotranspiration and water yield? Looking for the key elements of climate change that affect annual ET, we investigate the main climate conditions (e.g., precipitation and VPD) and basin physiographic properties contributing to annual ET. We propose a simplified model for annual ET prediction that accounts for the strong meteorological seasonality typical of Mediterranean climates, using the steady-state assumption of the basin water balance at the mean annual scale. We investigate Sardinia as a case study because the island's position in the center of the western Mediterranean Sea basin, together with its low urbanization and human activity, makes it an ideal reference laboratory for Mediterranean hydrologic studies. Sardinian runoff decreased drastically over the 1975-2010 period, with mean yearly runoff reduced by more than 40% compared to the previous 1922-1974 period; most yearly runoff in the Sardinian basins (70% on average) is produced by winter precipitation, owing to the seasonality typical of the Mediterranean climate regime. The proposed model allows us to predict future ET and water yield under future climate scenarios. We use the future climate scenarios predicted by global climate models (GCMs) in the Fifth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC), and we select the most reliable models by testing past GCM predictions against historical data.
Contrasting precipitation shifts (both positive and negative) are predicted by the GCMs in the future scenarios, but these shifts will produce significant changes (level of significance > 90%) only in runoff, not in ET. Surprisingly, we show that ET is insensitive both to changes in the intra-annual rainfall distribution and to the VPD scenario changes.
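
The steady-state mean-annual water balance underlying the model can be sketched in a few lines. The numbers below are hypothetical, not Sardinian data; the point is that if ET is insensitive to a precipitation shift, the entire shift is transferred to runoff.

```python
# Steady-state basin water balance at the mean annual scale: Q = P - ET.
# All values are hypothetical, for illustration only.
def annual_runoff(precip_mm: float, et_mm: float) -> float:
    """Mean-annual runoff (mm/yr) under the steady-state assumption."""
    return precip_mm - et_mm

p_now, et = 800.0, 560.0
q_now = annual_runoff(p_now, et)        # 240.0 mm/yr

# If ET stays fixed while precipitation drops, runoff absorbs the change:
p_future = 700.0
q_future = annual_runoff(p_future, et)  # 140.0 mm/yr
print(q_now - q_future)                 # 100.0 -- equal to the P shift
```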

  5. In-Flight Suppression of an Unstable F/A-18 Structural Mode Using the Space Launch System Adaptive Augmenting Control System

    NASA Technical Reports Server (NTRS)

    VanZwieten, Tannen S.; Gilligan, Eric T.; Wall, John H.; Miller, Christopher J.; Hanson, Curtis E.; Orr, Jeb S.

    2015-01-01

NASA's Space Launch System (SLS) Flight Control System (FCS) includes an Adaptive Augmenting Control (AAC) component which employs a multiplicative gain update law to enhance the performance and robustness of the baseline control system in extreme off-nominal scenarios. The SLS FCS algorithm, including AAC, has been flight tested using a specially outfitted F/A-18 fighter jet in which pitch-axis control of the aircraft was performed by a Nonlinear Dynamic Inversion (NDI) controller, SLS reference models, and the SLS flight software prototype. This paper describes test cases from the research flight campaign in which the fundamental F/A-18 airframe structural mode was identified using post-flight frequency-domain reconstruction, amplified to produce closed-loop instability, and suppressed in-flight by the SLS adaptive control system.
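
The multiplicative structure of such a gain update law can be sketched generically. This is an illustration with invented constants and dynamics, not the flight-qualified SLS AAC algorithm: the total loop-gain multiplier rises when tracking error is large and leaks back toward its baseline value of 1 otherwise.

```python
# Generic sketch of a multiplicative adaptive gain update (hypothetical
# constants, NOT the SLS flight algorithm): the loop-gain multiplier k
# grows with squared tracking error and leaks back toward the baseline.
def update_gain(k, error, k_min=1.0, k_max=2.0, alpha=0.5, leak=0.1, dt=0.01):
    """One Euler step of dk/dt = alpha*error**2 - leak*(k - k_min), clipped."""
    k += dt * (alpha * error ** 2 - leak * (k - k_min))
    return min(max(k, k_min), k_max)

k = 1.0                                  # multiplier of 1 leaves the
for e in [0.0, 0.5, 1.0, 1.0, 0.2]:      # baseline control law unchanged;
    k = update_gain(k, e)                # hypothetical error history
print(k)  # slightly above 1.0 after the error transient
```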

  6. Robustness of the filamentation instability as shock mediator in arbitrarily oriented magnetic field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bret, A.; Alvaro, E. Perez

    2011-08-15

The filamentation instability (sometimes also referred to as "Weibel") is a key process in many astrophysical scenarios. In the fireball model for gamma-ray bursts, this instability is believed to mediate collisionless shock formation in the collision of two plasma shells. It has long been known that a flow-aligned magnetic field can completely cancel this instability. We show here that in the general case, where there is an angle between the field and the flow, the filamentation instability can never be stabilized, regardless of the field strength. The presented model analyzes the stability of two symmetric counter-streaming cold electron/proton plasma shells. Relativistic effects are accounted for, and various exact analytical results are derived. This result guarantees the occurrence of the instability in realistic settings fulfilling the cold approximation.

  7. Architectural Implications for Spatial Object Association Algorithms*

    PubMed Central

    Kumar, Vijay S.; Kurc, Tahsin; Saltz, Joel; Abdulla, Ghaleb; Kohn, Scott R.; Matarazzo, Celeste

    2013-01-01

    Spatial object association, also referred to as crossmatch of spatial datasets, is the problem of identifying and comparing objects in two or more datasets based on their positions in a common spatial coordinate system. In this work, we evaluate two crossmatch algorithms that are used for astronomical sky surveys, on the following database system architecture configurations: (1) Netezza Performance Server®, a parallel database system with active disk style processing capabilities, (2) MySQL Cluster, a high-throughput network database system, and (3) a hybrid configuration consisting of a collection of independent database system instances with data replication support. Our evaluation provides insights about how architectural characteristics of these systems affect the performance of the spatial crossmatch algorithms. We conducted our study using real use-case scenarios borrowed from a large-scale astronomy application known as the Large Synoptic Survey Telescope (LSST). PMID:25692244
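
A positional crossmatch of the kind these algorithms perform can be sketched in a few lines. This is a brute-force toy, not the evaluated production algorithms, which push the work into the database tier with spatial partitioning; the catalogs below are invented.

```python
import math

# Toy positional crossmatch: pair objects from two catalogs whose angular
# separation falls below a tolerance. Brute force O(n*m); real sky surveys
# use spatial indexing (zones, HEALPix, etc.) to scale.
def ang_sep_deg(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees (haversine formula)."""
    r1, d1, r2, d2 = map(math.radians, (ra1, dec1, ra2, dec2))
    a = (math.sin((d2 - d1) / 2) ** 2
         + math.cos(d1) * math.cos(d2) * math.sin((r2 - r1) / 2) ** 2)
    return math.degrees(2 * math.asin(math.sqrt(a)))

def crossmatch(cat_a, cat_b, tol_deg=1e-3):
    """Return (i, j) index pairs with separation below tol_deg."""
    return [(i, j) for i, (ra1, de1) in enumerate(cat_a)
                   for j, (ra2, de2) in enumerate(cat_b)
                   if ang_sep_deg(ra1, de1, ra2, de2) < tol_deg]

# Hypothetical catalogs of (RA, Dec) positions in degrees:
a = [(10.0, 20.0), (30.0, -5.0)]
b = [(10.0001, 20.0001), (100.0, 0.0)]
print(crossmatch(a, b))  # only the first objects match
```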

  8. Performance Evaluation of Evasion Maneuvers for Parallel Approach Collision Avoidance

    NASA Technical Reports Server (NTRS)

    Winder, Lee F.; Kuchar, James K.; Waller, Marvin (Technical Monitor)

    2000-01-01

Current plans for independent instrument approaches to closely spaced parallel runways call for an automated pilot alerting system to ensure separation of aircraft in the case of a "blunder," or unexpected deviation from the normal approach path. Resolution advisories by this system would require the pilot of an endangered aircraft to perform a trained evasion maneuver. The potential performance of two evasion maneuvers, referred to as the "turn-climb" and "climb-only," was estimated using an experimental NASA alerting logic (AILS) and a computer simulation of relative trajectory scenarios between two aircraft. One aircraft was equipped with the NASA alerting system and maneuvered accordingly. Observation of the rates of different types of alerting failure allowed judgement of evasion maneuver performance. System Operating Characteristic (SOC) curves were used to assess the benefit of alerting with each maneuver.

  9. RMP Guidance for Offsite Consequence Analysis

    EPA Pesticide Factsheets

    Offsite consequence analysis (OCA) consists of a worst-case release scenario and alternative release scenarios. OCA is required from facilities with chemicals above threshold quantities. RMP*Comp software can be used to perform calculations described here.

  10. Collaborative emitter tracking using Rao-Blackwellized random exchange diffusion particle filtering

    NASA Astrophysics Data System (ADS)

    Bruno, Marcelo G. S.; Dias, Stiven S.

    2014-12-01

    We introduce in this paper the fully distributed, random exchange diffusion particle filter (ReDif-PF) to track a moving emitter using multiple received signal strength (RSS) sensors. We consider scenarios with both known and unknown sensor model parameters. In the unknown parameter case, a Rao-Blackwellized (RB) version of the random exchange diffusion particle filter, referred to as the RB ReDif-PF, is introduced. In a simulated scenario with a partially connected network, the proposed ReDif-PF outperformed a PF tracker that assimilates local neighboring measurements only and also outperformed a linearized random exchange distributed extended Kalman filter (ReDif-EKF). Furthermore, the novel ReDif-PF matched the tracking error performance of alternative suboptimal distributed PFs based respectively on iterative Markov chain move steps and selective average gossiping with an inter-node communication cost that is roughly two orders of magnitude lower than the corresponding cost for the Markov chain and selective gossip filters. Compared to a broadcast-based filter which exactly mimics the optimal centralized tracker or its equivalent (exact) consensus-based implementations, ReDif-PF showed a degradation in steady-state error performance. However, compared to the optimal consensus-based trackers, ReDif-PF is better suited for real-time applications since it does not require iterative inter-node communication between measurement arrivals.
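
The "random exchange" idea can be sketched generically. This is an illustrative toy, not the authors' RB ReDif-PF: at each step every node swaps a few particles with one randomly chosen neighbor, so local posterior information diffuses through a partially connected network without broadcasts or iterative consensus between measurement arrivals.

```python
import random

# Illustrative random-exchange diffusion step (not the authors' filter):
# each node swaps k particles with one randomly chosen neighbor. Swapping
# permutes particles across nodes but never creates or destroys them.
def random_exchange_step(particles, neighbors, k, rng):
    """particles: {node: list of particles}; neighbors: {node: [nodes]}."""
    for n in list(particles):
        m = rng.choice(neighbors[n])
        for _ in range(k):
            i = rng.randrange(len(particles[n]))
            j = rng.randrange(len(particles[m]))
            particles[n][i], particles[m][j] = particles[m][j], particles[n][i]
    return particles

rng = random.Random(7)
parts = {0: [0.1, 0.2, 0.3], 1: [1.1, 1.2, 1.3], 2: [2.1, 2.2, 2.3]}
nbrs = {0: [1], 1: [0, 2], 2: [1]}  # a partially connected chain
before = sorted(v for vs in parts.values() for v in vs)
random_exchange_step(parts, nbrs, k=1, rng=rng)
after = sorted(v for vs in parts.values() for v in vs)
print(before == after)  # True: the global particle multiset is preserved
```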

  11. Discrimination of plant root zone water status in greenhouse production based on phenotyping and machine learning techniques.

    PubMed

    Guo, Doudou; Juan, Jiaxiang; Chang, Liying; Zhang, Jingjin; Huang, Danfeng

    2017-08-15

Plant-based sensing of water stress can provide a sensitive and direct reference for precision irrigation systems in greenhouses. However, the acquisition, interpretation, and systematic application of plant information remain insufficient. This study developed a discrimination method for plant root-zone water status in the greenhouse by integrating phenotyping and machine learning techniques. Pakchoi plants were treated with three root-zone moisture levels: 40%, 60%, and 80% relative water content. Three classification models, Random Forest (RF), Neural Network (NN), and Support Vector Machine (SVM), were developed and validated in different scenarios, with overall accuracy over 90% for all. The SVM model had the highest accuracy, but it required the longest training time. All models had accuracy over 85% in all scenarios, and the RF model showed the most stable performance. A simplified SVM model built from the five most contributing traits had the largest accuracy reduction, 29.5%, while the simplified RF and NN models still maintained approximately 80%. For real-world application, factors such as operation cost, precision requirements, and system reaction time should be considered together in model selection. Our work shows that it is promising to discriminate plant root-zone water status by implementing phenotyping and machine learning techniques for precision irrigation management.
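
The classification task can be sketched with a deliberately simple stand-in. The study used RF, NN, and SVM models; here a nearest-centroid rule and invented phenotypic features (leaf area, canopy temperature) serve purely to illustrate mapping plant traits to the three moisture classes.

```python
from statistics import mean

# Toy stand-in for the study's classifiers: assign root-zone water status
# (40/60/80% relative water content) from phenotypic feature vectors by
# nearest class centroid. Training data below is invented.
def fit_centroids(samples):
    """samples: {label: [feature_vectors]} -> {label: centroid}."""
    return {lab: [mean(col) for col in zip(*vecs)]
            for lab, vecs in samples.items()}

def classify(centroids, x):
    """Label of the nearest centroid (squared Euclidean distance)."""
    return min(centroids,
               key=lambda lab: sum((a - b) ** 2
                                   for a, b in zip(centroids[lab], x)))

# Hypothetical (leaf area, canopy temperature) samples per moisture class:
train = {
    "40%": [(1.0, 28.0), (1.2, 27.5)],
    "60%": [(2.0, 25.0), (2.2, 24.5)],
    "80%": [(3.0, 22.0), (3.1, 22.5)],
}
cents = fit_centroids(train)
print(classify(cents, (2.1, 24.8)))  # -> "60%"
```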

  12. Quantitative Analysis of Critical Factors for the Climate Impact of Landfill Mining.

    PubMed

    Laner, David; Cencic, Oliver; Svensson, Niclas; Krook, Joakim

    2016-07-05

    Landfill mining has been proposed as an innovative strategy to mitigate environmental risks associated with landfills, to recover secondary raw materials and energy from the deposited waste, and to enable high-valued land uses at the site. The present study quantitatively assesses the importance of specific factors and conditions for the net contribution of landfill mining to global warming using a novel, set-based modeling approach and provides policy recommendations for facilitating the development of projects contributing to global warming mitigation. Building on life-cycle assessment, scenario modeling and sensitivity analysis methods are used to identify critical factors for the climate impact of landfill mining. The net contributions to global warming of the scenarios range from -1550 (saving) to 640 (burden) kg CO2e per Mg of excavated waste. Nearly 90% of the results' total variation can be explained by changes in four factors, namely the landfill gas management in the reference case (i.e., alternative to mining the landfill), the background energy system, the composition of the excavated waste, and the applied waste-to-energy technology. Based on the analyses, circumstances under which landfill mining should be prioritized or not are identified and sensitive parameters for the climate impact assessment of landfill mining are highlighted.

  13. Physical factors that influence patients’ privacy perception toward a psychiatric behavioral monitoring system: a qualitative study

    PubMed Central

    Zakaria, Nasriah; Ramli, Rusyaizila

    2018-01-01

    Background Psychiatric patients have privacy concerns when it comes to technology intervention in the hospital setting. In this paper, we present scenarios for psychiatric behavioral monitoring systems to be placed in psychiatric wards to understand patients’ perception regarding privacy. Psychiatric behavioral monitoring refers to systems that are deemed useful in measuring clinical outcomes, but little research has been done on how these systems will impact patients’ privacy. Methods We conducted a case study in one teaching hospital in Malaysia. We investigated the physical factors that influence patients’ perceived privacy with respect to a psychiatric monitoring system. The eight physical factors identified from the information system development privacy model, a comprehensive model for designing a privacy-sensitive information system, were adapted in this research. Scenario-based interviews were conducted with 25 patients in a psychiatric ward for 3 months. Results Psychiatric patients were able to share how physical factors influence their perception of privacy. Results show how patients responded to each of these dimensions in the context of a psychiatric behavioral monitoring system. Conclusion Some subfactors under physical privacy are modified to reflect the data obtained in the interviews. We were able to capture the different physical factors that influence patient privacy. PMID:29343963

  14. Automated Construction of Molecular Active Spaces from Atomic Valence Orbitals.

    PubMed

    Sayfutyarova, Elvira R; Sun, Qiming; Chan, Garnet Kin-Lic; Knizia, Gerald

    2017-09-12

We introduce the atomic valence active space (AVAS), a simple and well-defined automated technique for constructing active orbital spaces for use in multiconfiguration and multireference (MR) electronic structure calculations. Concretely, the technique constructs active molecular orbitals capable of describing all relevant electronic configurations emerging from a targeted set of atomic valence orbitals (e.g., the metal d orbitals in a coordination complex). This is achieved via a linear transformation of the occupied and unoccupied orbital spaces from an easily obtainable single-reference wave function (such as from a Hartree-Fock or Kohn-Sham calculation) based on projectors onto the targeted atomic valence orbitals. We discuss the premises, theory, and implementation of the idea, and several of its variations are tested. To investigate the performance and accuracy, we calculate the excitation energies for various transition-metal complexes in typical application scenarios. Additionally, we follow the homolytic bond-breaking process of a Fenton reaction along its reaction coordinate. While the described AVAS technique is not a universal solution to the active space problem, its premises are fulfilled in many application scenarios of transition-metal chemistry and bond dissociation processes. In these cases the technique makes MR calculations easier to execute, easier to reproduce by any user, and simplifies the determination of the appropriate size of the active space required for accurate results.

  15. Diagnosis of human fascioliasis by stool and blood techniques: update for the present global scenario.

    PubMed

    Mas-Coma, S; Bargues, M D; Valero, M A

    2014-12-01

Before the 1990s, human fascioliasis diagnosis focused on individual patients in hospitals or health centres. Case reports were mainly from developed countries and usually concerned isolated human infection in animal endemic areas. From the mid-1990s onwards, due to the progressive description of human endemic areas and human infection reports in developing countries, but also new knowledge on clinical manifestations and pathology, new situations, hitherto neglected, entered the global scenario. Human fascioliasis has proved to be markedly more heterogeneous than previously thought, including different transmission patterns and epidemiological situations. Stool and blood techniques, the main tools for diagnosis in humans, have been improved for both patient and survey diagnosis. Presently available methods for human diagnosis are reviewed, focusing on advantages and weaknesses, sample management, egg differentiation, qualitative and quantitative diagnosis, antibody and antigen detection, post-treatment monitoring and post-control surveillance. The main conclusions refer to the pronounced difficulties of diagnosing fascioliasis in humans given the different infection phases and parasite migration capacities, clinical heterogeneity, immunological complexity, different epidemiological situations and transmission patterns, the lack of a diagnostic technique covering all needs and situations, and the advisability of combining different techniques, at least including one stool technique and one blood technique.

  16. Modeling of policies for reduction of GHG emissions in energy sector using ANN: case study-Croatia (EU).

    PubMed

    Bolanča, Tomislav; Strahovnik, Tomislav; Ukić, Šime; Stankov, Mirjana Novak; Rogošić, Marko

    2017-07-01

This study describes the development of a tool for testing different policies for the reduction of greenhouse gas (GHG) emissions in the energy sector using artificial neural networks (ANNs), elaborated for the case study of Croatia. Two energy consumption scenarios were used as the basis for calculating and predicting GHG emissions: a business-as-usual (BAU) scenario and a sustainable scenario. Both are based on predicted energy consumption using different growth rates; the growth rates in the second scenario result from the implementation of energy efficiency measures in final energy consumption and an increasing share of renewable energy sources. Both the ANN architecture and the training methodology were optimized to produce a network able to describe the existing data and to achieve reliable forward-in-time prediction of emissions. The BAU scenario was found to produce continuously increasing emissions of all GHGs. The sustainable scenario was found to decrease the emission levels of all GHGs with respect to BAU. The observed decrease was attributed to the group of measures termed the reduction of final energy consumption through energy efficiency measures.
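
The scenario mechanics can be sketched with invented numbers (the study's actual growth rates and emission factors are not reproduced here): consumption is projected under two compound growth rates and converted to emissions with a fixed factor.

```python
# Invented numbers for illustration only: project final energy consumption
# under BAU vs. sustainable growth rates, then convert to GHG emissions
# with a fixed (hypothetical) aggregate emission factor.
def project(consumption0, growth_rate, years):
    """Consumption trajectory under compound annual growth."""
    return [consumption0 * (1 + growth_rate) ** t for t in range(years + 1)]

EF = 0.25  # Mt CO2e per PJ -- hypothetical emission factor

bau_emissions = [c * EF for c in project(300.0, 0.020, 10)]  # business as usual
sus_emissions = [c * EF for c in project(300.0, 0.005, 10)]  # efficiency measures

print(round(bau_emissions[-1] - sus_emissions[-1], 2))  # BAU ends higher
```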

  17. Determining access to assisted reproductive technology: reactions of clinic directors to ethically complex case scenarios.

    PubMed

    Stern, J E; Cramer, C P; Green, R M; Garrod, A; DeVries, K O

    2003-06-01

    Our aim was to increase understanding of how patient selection is handled by assisted reproductive technology (ART) clinicians. Ethically complex case scenarios were evaluated by the directors of USA ART clinics. Scenarios included using a son as sperm donor for his father, sex selection without associated disease, treatment of morally irresponsible couples, and a dispute over embryo disposition. Respondents reviewed eight scenarios and gave their opinions on whether to offer treatment. Reasons given for these decisions were placed into one of 13 categories. Survey response rate was 57%. Between 3 and 50% of respondents would treat in each case. Of reasons given, 'conditional' responses (requiring counselling, blood tests or agreement to other 'conditions') were common (31.4%). Non-maleficence (risk) accounted for 29.4% of responses, philosophy of medicine 18.9%, respect for patient autonomy 5.9% and legal concerns 4.6%. Discrimination and threats were each significant in one case. Reasons evoking absolutist beliefs, personal discomfort, commitment to justice, religion and ethical relativism were rare. Clinicians felt conflict between a desire to respect patient autonomy and their discomfort over the risk associated with the procedure. They raised concerns about misuse of medical technology. Attempts to resolve complex issues through negotiation and compromise were common.

  18. Effects of the length of central cancer registry operations on identification of subsequent cancers and on survival estimates.

    PubMed

    Qiao, Baozhen; Schymura, Maria J; Kahn, Amy R

    2016-10-01

    Population-based cancer survival analyses have traditionally been based on the first primary cancer. Recent studies have brought this practice into question, arguing that varying registry reference dates affect the ability to identify earlier cancers, resulting in selection bias. We used a theoretical approach to evaluate the extent to which the length of registry operations affects the classification of first versus subsequent cancers and consequently survival estimates. Sequence number central was used to classify tumors from the New York State Cancer Registry, diagnosed 2001-2010, as either first primaries (value=0 or 1) or subsequent primaries (≥2). A set of three sequence numbers, each based on an assumed reference year (1976, 1986 or 1996), was assigned to each tumor. Percent of subsequent cancers was evaluated by reference year, cancer site and age. 5-year relative survival estimates were compared under four different selection scenarios. The percent of cancer cases classified as subsequent primaries was 15.3%, 14.3% and 11.2% for reference years 1976, 1986 and 1996, respectively; and varied by cancer site and age. When only the first primary was included, shorter registry operation time was associated with slightly lower 5-year survival estimates. When all primary cancers were included, survival estimates decreased, with the largest decreases seen for the earliest reference year. Registry operation length affected the identification of subsequent cancers, but the overall effect of this misclassification on survival estimates was small. Survival estimates based on all primary cancers were slightly lower, but might be more comparable across registries. Copyright © 2016 Elsevier Ltd. All rights reserved.
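
The classification step the study examines can be sketched with a toy function (hypothetical helper, not registry software): a registry that began operating at a given reference year only "sees" tumors diagnosed from that year onward, so whether a tumor counts as a first or subsequent primary depends on the reference year.

```python
# Toy illustration of sequence-number classification: tumors diagnosed
# before the registry's reference year are invisible, so an earlier
# primary can make a later tumor look like a first primary.
def classify_tumors(diagnosis_years, reference_year):
    """Return (year, 'first'|'subsequent') for tumors the registry sees."""
    visible = sorted(y for y in diagnosis_years if y >= reference_year)
    return [(y, "first" if i == 0 else "subsequent")
            for i, y in enumerate(visible)]

# A patient with tumors diagnosed in 1990 and 2005 (hypothetical):
print(classify_tumors([1990, 2005], reference_year=1976))  # 2005 subsequent
print(classify_tumors([1990, 2005], reference_year=1996))  # 2005 looks first
```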

  19. Agricultural policy and social returns to eradication programs: the case of Aujeszky's disease in Sweden.

    PubMed

    Andersson, H; Lexmon, A; Robertsson, J A; Lundeheim, N; Wierup, M

    1997-02-01

Economic-welfare analyses of animal disease prevention programs frequently ignore the constraints of the agricultural policy environment. Prevention programs affect producers, consumers and the government, and the policy environment to a large extent determines both the magnitude and the distribution of a program's benefits among these groups. The Swedish hog industry was exposed to three major policy changes during the 1990-1995 period; these scenarios involve various degrees of government intervention in the agricultural sector, including internal market deregulation and EU membership. Aujeszky's disease (AD) is a virus disease with swine as the natural infection reservoir. Piglets are the most fragile, and an outbreak of the disease results in symptoms such as shaking, cramps and convulsions, with an increase in the mortality rate. Slaughter hogs suffer from coughing and fever and reduce their feed consumption. During the last 20-25 years the incidence of AD has been increasing in Sweden, and in 1989 an eradication program was undertaken. A model is developed to analyze the social benefits of an eradication program given variations in agricultural policy. The model refers to the specifics of the AD program implemented in Sweden. The expected benefits of the program are evaluated using a welfare-economic analysis applying cost-benefit analysis. Total benefits of the program are evaluated across herd and size categories and different regions. Data concerning the frequency of the virus among various categories of herds prior to enacting the program were used (Wahlström et al., 1990). In addition, data from an agricultural insurance company were used to estimate the conditional probability of an outbreak given that a herd is infected. Biological and technical parameter values were collected from a variety of sources.
The results of the analysis indicate that the program is economically viable given a social rate of discount in the range of 3-5%, without considering non-monetary aspects such as animal ethics. A scenario in which the Swedish agricultural sector is deregulated provides the maximum benefits of the program; consumers obtain about 50% of the benefits excluding program costs. The deregulation scenario would correspond closely to a case in which a reformed Common Agricultural Policy (CAP) is applied across member countries. In the current case, where Sweden is a member of the EU, the benefits are reduced, mainly due to lower prices of inputs and pork.

  20. Assessment of health risks due to arsenic from iron ore lumps in a beach setting.

    PubMed

    Swartjes, Frank A; Janssen, Paul J C M

    2016-09-01

    In 2011, an artificial hook-shaped peninsula with a 128 ha beach area was created along the Dutch coast, containing thousands of iron ore lumps, which contain arsenic of natural origin. Elemental arsenic and inorganic arsenic induce a range of toxicological effects and have been classified as proven human carcinogens. The combination of easy access to the beach and the presence of arsenic raised concern among the local authorities about possible human health effects. The objective of this study is therefore to investigate human health risks from the presence of arsenic-containing iron ore lumps in a beach setting. The exposure scenarios underlying the human health-based risk limits for contaminated land in The Netherlands, based on soil material ingestion and a residential setting, are not appropriate here. Two specific exposure scenarios related to playing with iron ore lumps on the beach ('sandcastle building') are developed on the basis of expert judgement, relating to children aged 2 to 12 years: a worst-case exposure scenario and a precautionary scenario. Subsequently, exposure is calculated by quantifying the following factors: hand loading, soil-mouth transfer efficiency, hand-mouth contact frequency, contact surface, body weight, and the relative oral bioavailability factor. In the absence of consensus on a universal reference dose for arsenic for use in the risk characterization stage, three different types of assessments were evaluated: on the basis of the current Provisional Tolerable Weekly Intake (PTWI), on the basis of the Benchmark Dose Lower limit (BMDL), and by comparing exposure from the iron ore lumps with background exposure. It is concluded, certainly from the perspective of the conservative exposure assessment, that unacceptable human health risks due to exposure to arsenic from the iron ore lumps are unlikely and that there is no need for risk management actions. Copyright © 2016 Elsevier B.V. All rights reserved.
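    The factor-by-factor exposure quantification above amounts to a multiplicative dose estimate. A minimal sketch, assuming hypothetical parameter values throughout (none are taken from the study):

    ```python
    # Hedged sketch of hand-to-mouth exposure quantification. Every numeric
    # value below is hypothetical, chosen only to make the arithmetic concrete.

    def daily_dose_ug_per_kg(conc_mg_kg, hand_loading_mg_cm2, contact_cm2,
                             transfer_eff, contacts_per_day, bioavailability,
                             body_weight_kg):
        """Oral arsenic dose (ug per kg body weight per day) from soil ingestion."""
        soil_mg = hand_loading_mg_cm2 * contact_cm2 * transfer_eff * contacts_per_day
        dose_mg = soil_mg * conc_mg_kg / 1e6 * bioavailability  # mg As ingested
        return dose_mg * 1000 / body_weight_kg                  # -> ug/kg bw/day

    print(daily_dose_ug_per_kg(conc_mg_kg=50, hand_loading_mg_cm2=0.5,
                               contact_cm2=20, transfer_eff=0.3,
                               contacts_per_day=10, bioavailability=0.6,
                               body_weight_kg=15))
    ```

    In an assessment like the one described, such a dose would then be compared against a reference dose (PTWI- or BMDL-derived) or against background exposure.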

  1. RMP Guidance for Warehouses - Chapter 4: Offsite Consequence Analysis

    EPA Pesticide Factsheets

    Offsite consequence analysis (OCA) informs government and the public about potential consequences of an accidental toxic or flammable chemical release at your facility, and consists of a worst-case release scenario and alternative release scenarios.

  2. RMP Guidance for Chemical Distributors - Chapter 4: Offsite Consequence Analysis

    EPA Pesticide Factsheets

    How to perform the OCA for regulated substances, informing the government and the public about potential consequences of an accidental chemical release at your facility. Includes calculations for worst-case scenario, alternative scenarios, and endpoints.

  3. MEGASTAR: The meaning of growth. An assessment of systems, technologies, and requirements. [methodology for display and analysis of energy production and consumption

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A methodology for the display and analysis of postulated energy futures for the United States is presented. A systems approach methodology including the methodology of technology assessment is used to examine three energy scenarios--the Westinghouse Nuclear Electric Economy, the Ford Technical Fix Base Case and a MEGASTAR generated Alternate to the Ford Technical Fix Base Case. The three scenarios represent different paths of energy consumption from the present to the year 2000. Associated with these paths are various mixes of fuels, conversion, distribution, conservation and end-use technologies. MEGASTAR presents the estimated times and unit requirements to supply the fuels, conversion and distribution systems for the postulated end uses for the three scenarios and then estimates the aggregate manpower, materials, and capital requirements needed to develop the energy system described by the particular scenario.

  4. San Pedro River Basin Data Browser Report

    EPA Science Inventory

    Acquisition of primary spatial data and database development are initial features of any type of landscape assessment project. They provide contemporary land cover and the ancillary datasets necessary to establish reference condition and develop alternative future scenarios that ...

  5. New mechanisms of disease and parasite-host interactions.

    PubMed

    de Souza, Tiago Alves Jorge; de Carli, Gabriel Jose; Pereira, Tiago Campos

    2016-09-01

    An unconventional interaction between a patient and parasites was recently reported, in which parasitic cells invaded host's tissues, establishing several tumors. This finding raises various intriguing hypotheses on unpredicted forms of interplay between a patient and infecting parasites. Here we present four unusual hypothetical host-parasite scenarios with intriguing medical consequences. Relatively simple experimental designs are described in order to evaluate such hypotheses. The first one refers to the possibility of metabolic disorders in parasites intoxicating the host. The second one is on possibility of patients with inborn errors of metabolism (IEM) being more resistant to parasites (due to accumulation of toxic compounds in the bloodstream). The third one refers to a mirrored scenario: development of tumors in parasites due to ingestion of host's circulating cancer cells. The last one describes a complex relationship between parasites accumulating a metabolite and supplying it to a patient with an IEM. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Rationale, Scenarios, and Profiles for the Application of the Internet Protocol Suite (IPS) in Space Operations

    NASA Technical Reports Server (NTRS)

    Benbenek, Daniel B.; Walsh, William

    2010-01-01

    This greenbook captures some of the current, planned and possible future uses of the Internet Protocol (IP) as part of Space Operations. It attempts to describe how the Internet Protocol is used in specific scenarios. Of primary focus is low-earth-orbit space operations, which is referred to here as the design reference mission (DRM). This is because most of the program experience drawn upon derives from this type of mission. Application profiles are provided. This includes parameter settings programs have proposed for sending IP datagrams over CCSDS links, the minimal subsets and features of the IP protocol suite and applications expected for interoperability between projects, and the configuration, operations and maintenance of these IP functions. Of special interest is capturing the lessons learned from the Constellation Program in this area, since that program included a fairly ambitious use of the Internet Protocol.

  7. Logarithmic spiral trajectories generated by Solar sails

    NASA Astrophysics Data System (ADS)

    Bassetto, Marco; Niccolai, Lorenzo; Quarta, Alessandro A.; Mengali, Giovanni

    2018-02-01

    Analytic solutions to continuous thrust-propelled trajectories are available in a few cases only. An interesting case is offered by the logarithmic spiral, that is, a trajectory characterized by a constant flight path angle and a fixed thrust vector direction in an orbital reference frame. The logarithmic spiral is important from a practical point of view, because it may be passively maintained by a Solar sail-based spacecraft. The aim of this paper is to provide a systematic study concerning the possibility of inserting a Solar sail-based spacecraft into a heliocentric logarithmic spiral trajectory without using any impulsive maneuver. The required conditions to be met by the sail in terms of attitude angle, propulsive performance, parking orbit characteristics, and initial position are thoroughly investigated. The closed-form variations of the osculating orbital parameters are analyzed, and the obtained analytical results are used for investigating the phasing maneuver of a Solar sail along an elliptic heliocentric orbit. In this mission scenario, the phasing orbit is composed of two symmetric logarithmic spiral trajectories connected with a coasting arc.
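    A logarithmic spiral with constant flight path angle gamma satisfies r(theta) = r0 * exp(theta * tan(gamma)). A minimal numerical sketch; the 5-degree flight path angle and the 1 au starting radius are hypothetical, not values from the paper:

    ```python
    import math

    # Hedged sketch: radius growth along a heliocentric logarithmic spiral
    # with constant flight path angle, r(theta) = r0 * exp(theta * tan(gamma)).

    def spiral_radius(r0_au, gamma_deg, theta_rad):
        """Heliocentric distance after sweeping polar angle theta_rad."""
        return r0_au * math.exp(theta_rad * math.tan(math.radians(gamma_deg)))

    # After one full revolution with a (hypothetical) 5-degree flight path angle:
    print(spiral_radius(1.0, 5.0, 2 * math.pi))
    ```

    With gamma = 0 the trajectory degenerates into a circular orbit (constant radius), which is the limiting case of the spiral family.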

  8. Negative βhCG and Molar Pregnancy: The Hook Effect.

    PubMed

    Lobo Antunes, Isabel; Curado, Joana; Quintas, Ana; Pereira, Alcides

    2017-09-29

    Molar pregnancy, included in gestational trophoblastic disease, is a benign pathology with the ability to metastasize, usually occurring with excessively high βhCG levels. The clinical scenario is usually a woman at the extremes of reproductive age presenting with amenorrhoea, pain, and vaginal blood loss; signs derived from high βhCG levels may be present (hyperthyroidism, hyperemesis). Diagnosis is based on a positive pregnancy test - usually a qualitative urinary test. The limitation of this test results from its inability to become positive in the presence of markedly high levels of βhCG, which saturate the assay antibodies - known as the 'hook effect'. With the widespread use of gynaecological ultrasound, cases of molar pregnancy have been diagnosed in a timely fashion. We describe a case referred as a degenerating fibroid, with a negative urinary pregnancy test. Transvaginal ultrasound was highly suggestive of molar pregnancy, which was confirmed with a quantitative βhCG test, allowing for timely treatment. A high index of suspicion for this pathology is essential to avoid the devastating consequences of a delayed diagnosis.

  9. Systemic risk in multiplex networks with asymmetric coupling and threshold feedback

    NASA Astrophysics Data System (ADS)

    Burkholz, Rebekka; Leduc, Matt V.; Garas, Antonios; Schweitzer, Frank

    2016-06-01

    We study cascades on a two-layer multiplex network, with asymmetric feedback that depends on the coupling strength between the layers. Based on an analytical branching process approximation, we calculate the systemic risk measured by the final fraction of failed nodes on a reference layer. The results are compared with the case of a single layer network that is an aggregated representation of the two layers. We find that systemic risk in the two-layer network is smaller than in the aggregated one only if the coupling strength between the two layers is small. Above a critical coupling strength, systemic risk is increased because of the mutual amplification of cascades in the two layers. We even observe sharp phase transitions in the cascade size that are less pronounced on the aggregated layer. Our insights can be applied to a scenario where firms decide whether they want to split their business into a less risky core business and a more risky subsidiary business. In most cases, this may lead to a drastic increase of systemic risk, which is underestimated in an aggregated approach.
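    The systemic-risk measure used above, the final fraction of failed nodes, can be made concrete with a toy threshold cascade on a single layer (the two-layer coupling is omitted for brevity; the graph and the uniform 0.5 failure threshold are hypothetical):

    ```python
    # Hedged sketch: a fractional-threshold cascade. A node fails once the
    # failed fraction of its neighbors reaches its threshold; the returned
    # value is the final fraction of failed nodes (the systemic-risk measure).

    def cascade(neighbors, thresholds, seeds):
        failed = set(seeds)
        changed = True
        while changed:
            changed = False
            for node, nbrs in neighbors.items():
                if node in failed or not nbrs:
                    continue
                if sum(n in failed for n in nbrs) / len(nbrs) >= thresholds[node]:
                    failed.add(node)
                    changed = True
        return len(failed) / len(neighbors)

    # Tiny hypothetical network: one seed failure cascades through the system.
    neighbors = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1, 4], 4: [3]}
    thresholds = {n: 0.5 for n in neighbors}
    print(cascade(neighbors, thresholds, seeds={0}))
    ```

    In the multiplex setting of the paper, the analogous update would also weigh failures on the second layer according to the inter-layer coupling strength.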

  10. The patient-parent-pediatrician relationship: everyday ethics in the office.

    PubMed

    Lantos, John

    2015-01-01

    Pediatricians and parents generally try to do what is best for children, but they do not always agree about what that is. Mothers and fathers may disagree with each other. Parents may disagree with pediatricians. Disagreements can arise about the goals, nature, and value of communication with children about health information. Disagreements can arise over the value of particular medical interventions. Some disagreements are grounded in different religious beliefs. Some are about moral values. Some are disagreements about ends, others about the best means to an agreed on end. If there is an intractable disagreement and discussion has failed to resolve that disagreement, pediatricians must decide whether to compromise their own values to preserve a therapeutic relationship, sever that relationship, or try to override a parental choice by referring a case to child protection authorities. Most cases can be resolved and a consensus found. This article discusses some common scenarios in which disagreements arise, including home birth, refusal of vitamin K, vaccine hesitancy, birth control for teens, corporal punishment, and surreptitious drug testing. © American Academy of Pediatrics, 2015. All rights reserved.

  11. Computational investigation of feedback loop as a potential source of neuromechanical wave speed discrepancy in swimming animals

    NASA Astrophysics Data System (ADS)

    Patel, Namu; Patankar, Neelesh A.

    2017-11-01

    Aquatic locomotion relies on feedback loops to generate the flexural muscle moment needed to attain the reference shape. Experimentalists have consistently reported a difference between the electromyogram (EMG) and curvature wave speeds. The EMG wave speed has been found to correlate with the cross-sectional moment wave. The correlation, however, remains unexplained. Using feedback-dependent controller models, we demonstrate two scenarios - one at higher passive elastic stiffness and another at lower passive elastic stiffness of the body. The former case becomes equivalent to the penalty-type mathematical model for swimming used in prior literature and it does not reproduce the neuromechanical wave speed discrepancy. The latter case at lower elastic stiffness does reproduce the wave speed discrepancy and appears to be biologically most relevant. These findings are applied to develop testable hypotheses about control mechanisms that animals might be using during low and high Reynolds number swimming. This work is supported by NSF Grants DMS-1547394, CBET-1066575, ACI-1460334, and IOS-1456830. Travel for NP is supported by Institute for Defense Analyses.

  12. Synthetic Proxy Infrastructure for Task Evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Junghans, Christoph; Pavel, Robert

    The Synthetic Proxy Infrastructure for Task Evaluation is a proxy application designed to support application developers in gauging the performance of various task granularities when determining how best to utilize task-based programming models. The infrastructure is designed to provide examples of common communication patterns with a synthetic workload, intended to provide performance data for evaluating programming model and platform overheads for the purpose of determining task granularity for task decomposition. This is presented as a reference implementation of a proxy application with run-time configurable input and output task dependencies, ranging from an embarrassingly parallel scenario to patterns with stencil-like dependencies upon their nearest neighbors. Once all inputs, if any, are satisfied, each task will execute a synthetic workload (a simple DGEMM in this case) of varying size and pass all outputs, if any, to the next tasks. The intent is for this reference implementation to be implemented as a proxy app in different programming models so as to provide the same infrastructure and to allow application developers to simulate their own communication needs, assisting in task decomposition under various models on a given platform.
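    The dependency pattern described here - tasks fire once their inputs (possibly none) are satisfied, run a synthetic workload, and feed successors - can be sketched in a few lines. The tiny graph and trivial workloads below are hypothetical stand-ins, not the proxy app's actual configuration:

    ```python
    # Hedged sketch of input/output task dependencies: a task runs after all
    # of its input tasks (none, for embarrassingly parallel leaves) complete.

    def run(tasks):
        """tasks: {name: (input_task_names, workload_fn)} -> (results, order)."""
        done, order = {}, []
        def fire(name):
            if name in done:
                return
            inputs, work = tasks[name]
            for dep in inputs:            # satisfy inputs first (depth-first)
                fire(dep)
            done[name] = work([done[d] for d in inputs])
            order.append(name)
        for name in tasks:
            fire(name)
        return done, order

    tasks = {
        "a": ([], lambda ins: 1),                 # independent leaf task
        "b": ([], lambda ins: 2),
        "c": (["a", "b"], lambda ins: sum(ins)),  # stencil-like dependency
    }
    results, order = run(tasks)
    print(results["c"], order)
    ```

    A real proxy implementation would replace the lambdas with a configurable-size DGEMM and map the graph onto the target programming model's task constructs.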

  13. The development of an inherent safety approach to the prevention of domino accidents.

    PubMed

    Cozzani, Valerio; Tugnoli, Alessandro; Salzano, Ernesto

    2009-11-01

    The severity of industrial accidents in which a domino effect takes place is well known in the chemical and process industry. The application of an inherent safety approach for the prevention of escalation events leading to domino accidents was explored in the present study. Reference primary scenarios were analyzed and escalation vectors were defined. Inherent safety distances were defined and proposed as a metric to express the intensity of the escalation vectors. Simple rules of thumb were presented for a preliminary screening of these distances. Swift reference indices for layout screening with respect to escalation hazard were also defined. Two case studies derived from existing layouts of oil refineries were selected to illustrate the potential of applying the methodology. The results evidenced that the approach allows a first comparative assessment of the actual domino hazard in a layout, and the identification of critical primary units with respect to escalation events. The methodology developed also represents a useful screening tool to identify where to dedicate major efforts in the design of add-on measures, optimizing conventional passive and active measures for the prevention of severe domino accidents.

  14. CSI-Chocolate Science Investigation and the Case of the Recipe Rip-Off: Using an Extended Problem-Based Scenario to Enhance High School Students' Science Engagement

    ERIC Educational Resources Information Center

    Marle, Peter D.; Decker, Lisa; Taylor, Victoria; Fitzpatrick, Kathleen; Khaliqi, David; Owens, Janel E.; Henry, Renee M.

    2014-01-01

    This paper discusses a K-12/university collaboration in which students participated in a four-day scenario-based summer STEM (science, technology, engineering, and mathematics) camp aimed at making difficult scientific concepts salient. This scenario, Jumpstart STEM-CSI: Chocolate Science Investigation (JSCSI), used open- and guided-inquiry…

  15. The Energy Puzzle Between the United States and China

    DTIC Science & Technology

    2013-03-01

    ...development, energy growth and developments in energy technology. It concludes with the best case scenario of the two countries building a trust that will

  16. Techno-economic analysis for a sugarcane biorefinery: Colombian case.

    PubMed

    Moncada, Jonathan; El-Halwagi, Mahmoud M; Cardona, Carlos A

    2013-05-01

    In this paper a techno-economic analysis for a sugarcane biorefinery is presented for the Colombian case. Two scenarios are shown for different conversion pathways as a function of feedstock distribution and technologies for sugar, fuel ethanol, PHB, anthocyanins, and electricity production. These scenarios are compared with the Colombian base case, which simultaneously produces sugar, fuel ethanol, and electricity. A simulation procedure using Aspen Plus software was applied to evaluate biorefinery schemes for all the scenarios, including productivity analysis, energy calculations, and economic evaluation for each process configuration. The results showed that the configuration with the best economic, environmental, and social performance is the one that considers fuel ethanol and PHB production from combined cane bagasse and molasses. This result served as the basis for recommendations on technological and economic feasibility, as well as social aspects, for the implementation of this type of biorefinery in Colombia. Copyright © 2012 Elsevier Ltd. All rights reserved.

  17. TMS for Instantiating a Knowledge Base With Incomplete Data

    NASA Technical Reports Server (NTRS)

    James, Mark

    2007-01-01

    A computer program that belongs to the class known among software experts as output truth-maintenance systems (output TMSs) has been devised as one of a number of software tools for reducing the size of the knowledge base that must be searched during execution of artificial-intelligence software of the rule-based inference-engine type in a case in which data are missing. This program determines whether the consequences of activation of two or more rules can be combined without causing a logical inconsistency. For example, in a case involving hypothetical scenarios that could lead to turning a given device on or off, the program determines whether a scenario involving a given combination of rules could lead to turning the device both on and off at the same time, in which case that combination of rules would not be included in the scenario.
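    The consistency check described above can be sketched as a scan for a literal asserted together with its negation. A minimal sketch; the rule names and the '!x' negation encoding are hypothetical, not the program's actual representation:

    ```python
    # Hedged sketch: combining rules is disallowed when the union of their
    # consequences contains a literal and its negation (e.g. the device
    # turned both on and off). Rules and literals here are invented examples.

    def consistent(*rules):
        """True iff no literal appears together with its negation ('x' vs '!x')."""
        facts = set()
        for rule in rules:
            facts.update(rule["consequences"])
        return not any(f.startswith("!") and f[1:] in facts or
                       "!" + f in facts for f in facts)

    r1 = {"name": "overheat",   "consequences": ["fan_on"]}
    r2 = {"name": "power_save", "consequences": ["!fan_on"]}
    r3 = {"name": "log_event",  "consequences": ["event_logged"]}

    print(consistent(r1, r3))  # no conflicting literals
    print(consistent(r1, r2))  # fan_on vs !fan_on: inconsistent combination
    ```

    A combination failing this check would be excluded from the scenario, which is what prunes the search space when data are missing.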

  18. The fourth radiation transfer model intercomparison (RAMI-IV): Proficiency testing of canopy reflectance models with ISO-13528

    NASA Astrophysics Data System (ADS)

    Widlowski, J.-L.; Pinty, B.; Lopatka, M.; Atzberger, C.; Buzica, D.; Chelle, M.; Disney, M.; Gastellu-Etchegorry, J.-P.; Gerboles, M.; Gobron, N.; Grau, E.; Huang, H.; Kallel, A.; Kobayashi, H.; Lewis, P. E.; Qin, W.; Schlerf, M.; Stuckens, J.; Xie, D.

    2013-07-01

    The radiation transfer model intercomparison (RAMI) activity aims at assessing the reliability of physics-based radiative transfer (RT) models under controlled experimental conditions. RAMI focuses on computer simulation models that mimic the interactions of radiation with plant canopies. These models are increasingly used in the development of satellite retrieval algorithms for terrestrial essential climate variables (ECVs). Rather than applying ad hoc performance metrics, RAMI-IV makes use of existing ISO standards to enhance the rigor of its protocols evaluating the quality of RT models. ISO-13528 was developed "to determine the performance of individual laboratories for specific tests or measurements." More specifically, it aims to guarantee that measurement results fall within specified tolerance criteria from a known reference. Of particular interest to RAMI is that ISO-13528 provides guidelines for comparisons where the true value of the target quantity is unknown. In those cases, "truth" must be replaced by a reliable "conventional reference value" to enable absolute performance tests. This contribution will show, for the first time, how the ISO-13528 standard developed by the chemical and physical measurement communities can be applied to proficiency testing of computer simulation models. Step by step, the pre-screening of data, the identification of reference solutions, and the choice of proficiency statistics will be discussed and illustrated with simulation results from the RAMI-IV "abstract canopy" scenarios. Detailed performance statistics of the participating RT models will be provided and the role of the accuracy of the reference solutions as well as the choice of the tolerance criteria will be highlighted.
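    ISO-13528 proficiency testing commonly scores each participant against the assigned (here, conventional) reference value with a z-score relative to a tolerance standard deviation. The z-score form is standard; the model names, results, and reference values below are hypothetical:

    ```python
    # Hedged sketch of a z-score proficiency statistic against a
    # conventional reference value. All numeric values are invented.

    def z_score(result, reference, sigma_tol):
        """|z| <= 2 satisfactory; 2 < |z| < 3 questionable; |z| >= 3 unsatisfactory."""
        return (result - reference) / sigma_tol

    models = {"RT-A": 0.412, "RT-B": 0.396, "RT-C": 0.445}
    ref, sigma = 0.400, 0.010   # conventional reference value and tolerance

    for name, res in models.items():
        z = z_score(res, ref, sigma)
        print(name, round(z, 2), "OK" if abs(z) <= 2 else "flagged")
    ```

    This makes explicit how the accuracy of the reference solution and the choice of tolerance criteria, both highlighted above, drive the pass/fail outcome.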

  19. Spatial Multicriteria Decision Analysis of Flood Risks in Aging-Dam Management in China: A Framework and Case Study

    PubMed Central

    Yang, Meng; Qian, Xin; Zhang, Yuchao; Sheng, Jinbao; Shen, Dengle; Ge, Yi

    2011-01-01

    Approximately 30,000 dams in China are aging and are considered to be high-level risks. Developing a framework for analyzing spatial multicriteria flood risk is crucial to ranking management scenarios for these dams, especially in densely populated areas. Based on the theories of spatial multicriteria decision analysis, this report generalizes a framework consisting of scenario definition, problem structuring, criteria construction, spatial quantification of criteria, criteria weighting, decision rules, sensitivity analyses, and scenario appraisal. The framework is presented in detail by using a case study to rank dam rehabilitation, decommissioning and existing-condition scenarios. The results show that there was a serious inundation, and that a dam rehabilitation scenario could reduce the multicriteria flood risk by 0.25 in the most affected areas; this indicates a mean risk decrease of less than 23%. Although increased risk (<0.20) was found for some residential and commercial buildings, if the dam were to be decommissioned, the mean risk would not be greater than the current existing risk, indicating that the dam rehabilitation scenario had a higher rank for decreasing the flood risk than the decommissioning scenario, but that dam rehabilitation alone might be of little help in abating flood risk. With adjustments and improvement to the specific methods (according to the circumstances and available data) this framework may be applied to other sites. PMID:21655125
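    A common decision rule in spatial multicriteria analysis of this kind is a weighted sum of normalized criterion scores, used to rank the scenarios. A minimal sketch; the criteria, weights, and scores below are hypothetical, not the study's values:

    ```python
    # Hedged sketch of a weighted-sum decision rule ranking dam-management
    # scenarios. Lower weighted risk is better; all numbers are invented.

    weights = {"inundation_depth": 0.5, "population": 0.3, "property": 0.2}

    scenarios = {
        "rehabilitation":  {"inundation_depth": 0.3, "population": 0.4, "property": 0.5},
        "decommissioning": {"inundation_depth": 0.6, "population": 0.5, "property": 0.4},
        "status_quo":      {"inundation_depth": 0.7, "population": 0.7, "property": 0.6},
    }

    def risk(score):
        """Aggregate normalized criterion scores into one weighted risk value."""
        return sum(weights[c] * v for c, v in score.items())

    for name, s in sorted(scenarios.items(), key=lambda kv: risk(kv[1])):
        print(f"{name}: {risk(s):.2f}")
    ```

    In the study's framework this aggregation is applied per map cell, which is what yields spatially distributed risk differences between scenarios.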

  20. ICLUS v1.3 Population Projections

    EPA Pesticide Factsheets

    Climate and land-use change are major components of global environmental change with feedbacks between these components. The consequences of these interactions show that land use may exacerbate or alleviate climate change effects. Based on these findings it is important to use land-use scenarios that are consistent with the specific assumptions underlying climate-change scenarios. The Integrated Climate and Land-Use Scenarios (ICLUS) project developed land-use outputs that are based on a downscaled version of the Intergovernmental Panel on Climate Change (IPCC) Special Report on Emissions Scenarios (SRES) social, economic, and demographic storylines. ICLUS outputs are derived from a pair of models. A demographic model generates county-level population estimates that are distributed by a spatial allocation model (SERGoM v3) as housing density across the landscape. Land-use outputs were developed for the four main SRES storylines and a baseline (base case). The model is run for the conterminous USA and output is available for each scenario by decade to 2100. In addition to housing density at a 1 hectare spatial resolution, this project also generated estimates of impervious surface at a resolution of 1 square kilometer. This shapefile holds population data for all counties of the conterminous USA for all decades (2010-2100) and SRES population growth scenarios (A1, A2, B1, B2), as well as a 'base case' (BC) scenario, for use in the Integrated Climate and Land Use

  1. Application of Energy Integration Techniques to the Design of Advanced Life Support Systems

    NASA Technical Reports Server (NTRS)

    Levri, Julie; Finn, Cory

    2000-01-01

    Exchanging heat between hot and cold streams within an advanced life support system can save energy. These savings will reduce the equivalent system mass (ESM) of the system. Different system configurations are examined under steady-state conditions for various percentages of food growth and waste treatment. The scenarios investigated represent possible design options for a Mars reference mission. Reference mission definitions are drawn from the ALSS Modeling and Analysis Reference Missions Document, which includes definitions for space station evolution, Mars landers, and a Mars base. For each scenario, streams requiring heating or cooling are identified and characterized by mass flow, supply and target temperatures, and heat capacities. The Pinch technique is applied to identify good matches for energy exchange between the hot and cold streams and to calculate the minimum external heating and cooling requirements for the system. For each pair of hot and cold streams that are matched, there will be a reduction in the amount of external heating and cooling required, and the original heating and cooling equipment will be replaced with a heat exchanger. The net cost savings can be either positive or negative for each stream pairing, and the priority for implementing each pairing can be ranked according to its potential cost savings. Using the Pinch technique, a complete system heat exchange network is developed and heat exchangers are sized to allow for calculation of ESM. The energy-integrated design typically has a lower total ESM than the original design with no energy integration. A comparison of ESM savings in each of the scenarios is made to direct future Pinch analysis efforts.
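    The minimum external heating and cooling targets mentioned above come from the problem-table (heat cascade) step of pinch analysis. A compact sketch of that standard targeting calculation, using hypothetical stream data and a hypothetical minimum approach temperature (these are not the paper's streams):

    ```python
    # Hedged sketch of pinch-analysis energy targeting. Streams are given as
    # (CP [kW/K], supply T [C], target T [C]); all values are invented.

    DT_MIN = 10.0  # minimum approach temperature, K (hypothetical)

    hot =  [(2.0, 150.0, 40.0), (4.0, 90.0, 60.0)]   # streams to be cooled
    cold = [(3.0, 25.0, 110.0), (1.5, 50.0, 130.0)]  # streams to be heated

    # Shifted temperatures: hot down by DT_MIN/2, cold up by DT_MIN/2.
    bounds = sorted({t - DT_MIN / 2 for s in hot for t in s[1:]} |
                    {t + DT_MIN / 2 for s in cold for t in s[1:]}, reverse=True)

    def cp_in(hi, lo, streams, shift):
        """Total CP of streams whose shifted range covers the interval [lo, hi]."""
        total = 0.0
        for cp, t1, t2 in streams:
            s_lo, s_hi = sorted((t1 + shift, t2 + shift))
            if s_lo <= lo and s_hi >= hi:
                total += cp
        return total

    # Cascade surplus/deficit heat from the hottest interval downward.
    heat, cascade = 0.0, [0.0]
    for hi, lo in zip(bounds, bounds[1:]):
        net = (cp_in(hi, lo, hot, -DT_MIN / 2) -
               cp_in(hi, lo, cold, DT_MIN / 2)) * (hi - lo)
        heat += net
        cascade.append(heat)

    q_hot_min = max(0.0, -min(cascade))   # minimum external heating, kW
    q_cold_min = cascade[-1] + q_hot_min  # minimum external cooling, kW
    print(q_hot_min, q_cold_min)
    ```

    Any heat-exchanger network for these streams needs at least these external duties; the match-by-match pairing and exchanger sizing described above then determine the ESM impact.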

  2. Design Support of an Above Cap-rock Early Detection Monitoring System using Simulated Leakage Scenarios at the FutureGen2.0 Site

    DOE PAGES

    Williams, Mark D.; Vermuel, Vince R.; ...

    2014-12-31

    The FutureGen 2.0 Project will design and build a first-of-its-kind, near-zero emissions coal-fueled power plant with carbon capture and storage (CCS). To assess storage site performance and meet the regulatory requirements of the Class VI Underground Injection Control (UIC) Program for CO2 Geologic Sequestration, the FutureGen 2.0 project will implement a suite of monitoring technologies designed to evaluate CO2 mass balance and detect any unforeseen loss in CO2 containment. The monitoring program will include direct monitoring of the reservoir, and early-leak-detection monitoring directly above the primary confining zone. The preliminary modeling study described here focuses on hypothetical leakage scenarios into the first permeable unit above the primary confining zone (Ironton Sandstone) and is used to support assessment of early-leak-detection capabilities. Future updates of the model will be used to assess potential impacts on the lowermost underground source of drinking water (Saint Peter Sandstone) for a range of theoretical leakage scenarios. This preliminary modeling evaluation considers both pressure response and geochemical signals in the overlying Ironton Sandstone. This model is independent of the FutureGen 2.0 reservoir model in that it does not simulate caprock discontinuities, faults, or failure scenarios. Instead, this modeling effort is based on theoretical, volumetric-rate-based leakage scenarios. The scenarios include leakage of 1% of the total injected CO2 mass, but spread out over different time periods (20, 100, and 500 years), with each case yielding a different mass flux (i.e., smaller mass fluxes for longer-duration leakage cases). A brine leakage scenario with a volumetric leakage rate similar to the 20-year 1% CO2 case was also considered.
A framework for the comparison of the various cases was developed based on the exceedance of selected pressure and geochemical thresholds at different distances from the point of leakage and at different vertical positions within the Ironton Sandstone. These preliminary results, and results from updated models that incorporate additional site-specific characterization data, support development and refinement of the monitoring system design.
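    The leakage-rate arithmetic behind these scenarios is easy to make concrete: 1% of the injected mass, divided over each duration. A sketch with a hypothetical total injected mass (the 1% fraction and the 20/100/500-year durations come from the text; the tonnage is invented):

    ```python
    # Hedged sketch: annualized leakage mass flux for the 1%-of-injected-mass
    # scenarios. The total injected mass is a hypothetical placeholder.

    total_injected_mt = 22.0           # hypothetical total, million tonnes CO2
    leaked = 0.01 * total_injected_mt  # 1% of the injected mass

    for years in (20, 100, 500):
        print(f"{years} yr case: {leaked / years * 1000:.1f} kt/yr")
    ```

    Spreading the same leaked mass over a longer period lowers the annual flux, which is why the 500-year case produces the weakest pressure and geochemical signals to detect.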

  3. Head injury causation scenarios for belted, rear-seated children in frontal impacts.

    PubMed

    Bohman, Katarina; Arbogast, Kristy B; Bostrom, Ola

    2011-02-01

    Head injuries are the most common serious injuries sustained by children in motor vehicle crashes and are of critical importance with regard to long-term disability. There is a lack of understanding of how seat belt-restrained children sustain head injuries in frontal impacts. The aim of the study was to identify the AIS2+ head injury causation scenarios for rear-seated, belt-restrained children in frontal impacts, including the set of parameters contributing to the injury. In-depth crash investigations from two National Highway Traffic Safety Administration (NHTSA) databases, the National Automotive Sampling System-Crashworthiness Data System (NASS-CDS; 1997-2008) and the Crash Injury Research and Engineering Network (CIREN; 1996-2009), were collected and analyzed in detail. Selection criteria were all frontal impacts with a principal direction of force (PDOF) of 11, 12, and 1 o'clock involving rear-seated, three-point belt-restrained children, with or without booster cushion, aged 3 to 13 years with an AIS2+ head injury. Cases were analyzed using the BioTab method of injury causation assessment in order to systematically analyze the injury causation scenario for each case. There were 27 cases meeting the inclusion criteria: 19 cases with MAIS2 head injuries and 8 cases with MAIS3+ head injuries, including 2 fatalities. Three major injury causation scenarios were identified: head contact with the seatback (10 cases), head contact with the side interior (7 cases), and no evidence of head contact (9 cases). Head injuries with seatback or side interior contact typically involved a PDOF greater than 10 degrees (similar to the Insurance Institute for Highway Safety [IIHS] and EuroNCAP offset frontal testing) and vehicle maneuvers. For seatback contact, the vehicle's movements contributed to occupant kinematics inboard of the vehicle, causing less than optimal restraint of the torso and/or torso roll-out of the shoulder belt.
For side interior contact, the PDOF and/or maneuvers forced the occupant toward the side interior. The cases without evidence of head/face contact were characterized by high crash severity and accompanied by severe injuries to the thorax and spine. These data lead to increased understanding of the injury patterns and causation in this crash restraint scenario so that interventions to mitigate the burden of injury can be advanced.

  4. Coordinate references for the indoor/outdoor seamless positioning

    NASA Astrophysics Data System (ADS)

    Ruan, Ling; Zhang, Ling; Long, Yi; Cheng, Fei

    2018-05-01

Indoor positioning technologies are developing rapidly, and seamless positioning that connects indoor and outdoor space is a new trend. Indoor and outdoor positioning do not use the same coordinate system, and different indoor positioning scenes use different local coordinate reference systems. A specific, unified coordinate reference frame is needed as the spatial basis and premise of seamless positioning applications. Trajectory analysis that integrates indoor and outdoor movement likewise requires a uniform coordinate reference. However, a coordinate reference frame for seamless positioning that can be applied to various complex scenarios has long been lacking. In this paper, we propose a universal coordinate reference frame for indoor/outdoor seamless positioning. The research focuses on analyzing and classifying indoor positioning scenes and puts forward methods for establishing the coordinate reference system and for coordinate transformation in each scene. The feasibility of the calibration method was verified through experiments.

  5. Tailoring Green Infrastructure Implementation Scenarios based on Stormwater Management Objectives

    EPA Science Inventory

Green infrastructure (GI) refers to stormwater management practices that mimic nature by soaking up, storing, and controlling stormwater onsite. GI practices can contribute quantifiable benefits towards meeting stormwater management objectives, such as runoff peak shaving, volume reduction, f...

  6. International Multidisciplinary Artificial Gravity (IMAG) Project

    NASA Technical Reports Server (NTRS)

    Laurini, Kathy

    2007-01-01

    This viewgraph presentation reviews the efforts of the International Multidisciplinary Artificial Gravity Project. Specifically it reviews the NASA Exploration Planning Status, NASA Exploration Roadmap, Status of Planning for the Moon, Mars Planning, Reference health maintenance scenario, and The Human Research Program.

  7. Games for All Seasons.

    ERIC Educational Resources Information Center

    Jaques, David

    1981-01-01

    Argues that games with a simple communication structure and/or an abstract content have more virtues than games which introduce too many details into the roles and scenario. Four such "simple" games are described, one in detail, and four references are listed. (LLS)

  8. Joint Concept Development and Experimentation: A Force Development Perspective

    DTIC Science & Technology

    2012-02-01

Only fragments of this record survived extraction; the recoverable figure caption and references are: Figure 7: Hierarchy of force...; CORA TM 2012-036, References: [1] Palla, G., Barabasi, A.L., and Vicsek, T. Quantifying Social Group Evolution. Nature, Vol. 446, pp... The closing French fragment translates as: "...joint force development activities (capability-based planning, concept development and experimentation) are not well..."

  9. Offshore Wind Jobs and Economic Development Impacts in the United States: Four Regional Scenarios

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tegen, S.; Keyser, D.; Flores-Espino, F.

    This report uses the offshore wind Jobs and Economic Development Impacts (JEDI) model and provides four case studies of potential offshore deployment scenarios in different regions of the United States: the Southeast, the Great Lakes, the Gulf Coast, and the Mid-Atlantic. Researchers worked with developers and industry representatives in each region to create potential offshore wind deployment and supply chain growth scenarios, specific to their locations. These scenarios were used as inputs into the offshore JEDI model to estimate jobs and other gross economic impacts in each region.

  10. Improving Conflict Alert Performance Using Moving Target Detector Data.

    DTIC Science & Technology

    1982-06-01

Only scan fragments of this record survived (microcopy resolution test-chart residue removed); the recoverable report numbers and figure titles are: DOT/FAA/RD-82/47, DOT/FAA/CT-81...; Differences for Stochastic Case; Illustration of Scenarios for Warning Time Tests; Illustration of Scenarios Used for Nuisance Alert Area; Nuisance Alert Area Analysis of Scenario 3 with a Target Velocity of 480 Knots and SPMB = SPPB = 2.8 nmi.

  11. Impacts of future deforestation and climate change on the hydrology of the Amazon Basin: a multi-model analysis with a new set of land-cover change scenarios

    NASA Astrophysics Data System (ADS)

    Guimberteau, Matthieu; Ciais, Philippe; Ducharne, Agnès; Boisier, Juan Pablo; Dutra Aguiar, Ana Paula; Biemans, Hester; De Deurwaerder, Hannes; Galbraith, David; Kruijt, Bart; Langerwisch, Fanny; Poveda, German; Rammig, Anja; Andres Rodriguez, Daniel; Tejada, Graciela; Thonicke, Kirsten; Von Randow, Celso; Von Randow, Rita C. S.; Zhang, Ke; Verbeeck, Hans

    2017-03-01

Deforestation in the Amazon is expected to decrease evapotranspiration (ET) and to increase soil moisture and river discharge under prevailing energy-limited conditions. The magnitude and sign of the response of ET to deforestation depend both on the magnitude and regional patterns of land-cover change (LCC) and on climate change and CO2 levels. On the one hand, elevated CO2 decreases leaf-scale transpiration, but this effect could be offset by increased foliar area density. Using three regional LCC scenarios specifically established for the Brazilian and Bolivian Amazon, we investigate the impacts of climate change and deforestation on the surface hydrology of the Amazon Basin for this century, taking 2009 as a reference. For each LCC scenario, three land surface models (LSMs), LPJmL-DGVM, INLAND-DGVM and ORCHIDEE, are forced by bias-corrected climate simulated by three general circulation models (GCMs) of the IPCC 4th Assessment Report (AR4). On average over the Amazon Basin with no deforestation, the GCM results indicate a temperature increase of 3.3 °C by 2100, which drives up the evaporative demand, while precipitation increases by 8.5 %, with a large uncertainty across GCMs. In the case of no deforestation, we found that ET and runoff increase by 5.0 and 14 %, respectively. However, in south-east Amazonia, precipitation decreases by 10 % at the end of the dry season, and the three LSMs produce a 6 % decrease of ET, which is less than that of precipitation, so that runoff decreases by 22 %. For instance, the minimum river discharge of the Rio Tapajós is reduced by 31 % in 2100. To study the additional effect of deforestation, we prescribed to the LSMs three contrasting LCC scenarios, with a forest decline ranging from 7 to 34 % over this century. All three scenarios partly offset the climate-induced increase of ET, and runoff increases over the entire Amazon. 
In the south-east, however, deforestation amplifies the decrease of ET at the end of dry season, leading to a large increase of runoff (up to +27 % in the extreme deforestation case), offsetting the negative effect of climate change, thus balancing the decrease of low flows in the Rio Tapajós. These projections are associated with large uncertainties, which we attribute separately to the differences in LSMs, GCMs and to the uncertain range of deforestation. At the subcatchment scale, the uncertainty range on ET changes is shown to first depend on GCMs, while the uncertainty of runoff projections is predominantly induced by LSM structural differences. By contrast, we found that the uncertainty in both ET and runoff changes attributable to uncertain future deforestation is low.

  12. Modeling and Composing Scenario-Based Requirements with Aspects

    NASA Technical Reports Server (NTRS)

    Araujo, Joao; Whittle, Jon; Ki, Dae-Kyoo

    2004-01-01

There has been significant recent interest, within the Aspect-Oriented Software Development (AOSD) community, in representing crosscutting concerns at various stages of the software lifecycle. However, most of these efforts have concentrated on the design and implementation phases. We focus in this paper on representing aspects during use case modeling. In particular, we focus on scenario-based requirements and show how to compose aspectual and non-aspectual scenarios so that they can be simulated as a whole. Non-aspectual scenarios are modeled as UML sequence diagrams. Aspectual scenarios are modeled as Interaction Pattern Specifications (IPSs). In order to simulate them, the scenarios are transformed into a set of executable state machines using an existing state machine synthesis algorithm. Previous work composed aspectual and non-aspectual scenarios at the sequence diagram level. In this paper, the composition is done at the state machine level.

  13. 7 CFR 1.626 - What regulations apply to a case referred for a hearing?

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... referred for a hearing? (a) If NFS refers the case to OALJ, these regulations will continue to apply to the hearing process. (b) If NFS refers the case to the Department of the Interior's Office of Hearing and Appeals, the regulations at 43 CFR 45.1 et seq. will apply from that point. (c) If NFS refers the case to...

  14. 7 CFR 1.626 - What regulations apply to a case referred for a hearing?

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... referred for a hearing? (a) If NFS refers the case to OALJ, these regulations will continue to apply to the hearing process. (b) If NFS refers the case to the Department of the Interior's Office of Hearing and Appeals, the regulations at 43 CFR 45.1 et seq. will apply from that point. (c) If NFS refers the case to...

  15. 7 CFR 1.626 - What regulations apply to a case referred for a hearing?

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... referred for a hearing? (a) If NFS refers the case to OALJ, these regulations will continue to apply to the hearing process. (b) If NFS refers the case to the Department of the Interior's Office of Hearing and Appeals, the regulations at 43 CFR 45.1 et seq. will apply from that point. (c) If NFS refers the case to...

  16. 7 CFR 1.626 - What regulations apply to a case referred for a hearing?

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... referred for a hearing? (a) If NFS refers the case to OALJ, these regulations will continue to apply to the hearing process. (b) If NFS refers the case to the Department of the Interior's Office of Hearing and Appeals, the regulations at 43 CFR 45.1 et seq. will apply from that point. (c) If NFS refers the case to...

  17. 7 CFR 1.626 - What regulations apply to a case referred for a hearing?

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... referred for a hearing? (a) If NFS refers the case to OALJ, these regulations will continue to apply to the hearing process. (b) If NFS refers the case to the Department of the Interior's Office of Hearing and Appeals, the regulations at 43 CFR 45.1 et seq. will apply from that point. (c) If NFS refers the case to...

  18. Evaluating landfill aftercare strategies: A life cycle assessment approach.

    PubMed

    Turner, David A; Beaven, Richard P; Woodman, Nick D

    2017-05-01

This study investigates the potential impacts caused by the loss of active environmental control measures during the aftercare period of landfill management. A combined mechanistic solute flow model and life cycle assessment (LCA) approach was used to evaluate the potential impacts of leachate emissions over a 10,000-year time horizon. A continuum of control-loss possibilities, occurring at different times and for different durations, was investigated for four basic aftercare scenarios: a typical aftercare scenario involving a low-permeability cap and three accelerated aftercare scenarios involving higher initial infiltration rates. Assuming a 'best case' where control is never lost, the largest potential impacts resulted from the typical aftercare scenario. The maximum difference between potential impacts from the 'best case' and the 'worst case', where control fails at the earliest possible point and is never reinstated, was only a fourfold increase. This highlights potential deficiencies in standard life cycle impact assessment practice, which are discussed. Nevertheless, the results show how the influence of active control loss on the potential impacts of landfilling varies considerably depending on the aftercare strategy used, and they highlight the importance that leachate treatment efficiencies have upon impacts. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Under What Circumstances Do Wood Products from Native Forests Benefit Climate Change Mitigation?

    PubMed

    Keith, Heather; Lindenmayer, David; Macintosh, Andrew; Mackey, Brendan

    2015-01-01

Climate change mitigation benefits from the land sector are not being fully realised because of uncertainty and controversy about the role of native forest management. The dominant policy view, as stated in the IPCC's Fifth Assessment Report, is that sustainable forest harvesting yielding wood products generates the largest mitigation benefit. We demonstrate that changing native forest management from commercial harvesting to conservation can make an important contribution to mitigation. Conservation of native forests results in an immediate and substantial reduction in net emissions relative to a reference case of commercial harvesting. We calibrated models to simulate scenarios of native forest management for two Australian case studies: mixed-eucalypt in New South Wales and Mountain Ash in Victoria. Carbon stocks in the harvested forest included forest biomass, wood and paper products, waste in landfill, and bioenergy that substituted for fossil fuel energy. The conservation forest included forest biomass, and subtracted stocks for the foregone products that were substituted by non-wood products or plantation products. Total carbon stocks were lower in harvested forest than in conservation forest in both case studies over the 100-year simulation period. We tested a range of potential parameter values reported in the literature: none could increase the combined carbon stock in products, slash, landfill and substitution sufficiently to exceed the increase in carbon stock due to changing management of native forest to conservation. The key parameters determining carbon stock change under different forest management scenarios are those affecting accumulation of carbon in forest biomass, rather than parameters affecting transfers among wood products. This analysis helps prioritise mitigation activities to focus on maximising forest biomass. 
International forest-related policies, including negotiations under the UNFCCC, have failed to recognize fully the mitigation value of native forest conservation. Our analyses provide evidence for decision-making about the circumstances under which forest management provides mitigation benefits.

  20. Combined didactic and scenario-based education improves the ability of intensive care unit staff to recognize delirium at the bedside

    PubMed Central

    Devlin, John W; Marquis, Francois; Riker, Richard R; Robbins, Tracey; Garpestad, Erik; Fong, Jeffrey J; Didomenico, Dorothy; Skrobik, Yoanna

    2008-01-01

Background While nurses play a key role in identifying delirium, several authors have noted variability in their ability to recognize delirium. We sought to measure the impact of a simple educational intervention on the ability of intensive care unit (ICU) nurses to clinically identify delirium and to use a standardized delirium scale correctly. Methods Fifty ICU nurses from two different hospitals (university medical and community teaching) evaluated an ICU patient for pain, level of sedation and presence of delirium before and after an educational intervention. The same patient was concomitantly, but independently, evaluated by a validated judge (ρ = 0.98) who acted as the reference standard in all cases. The education consisted of two script concordance case scenarios, a slide presentation regarding scale-based delirium assessment, and two further cases. Results Nurses' clinical recognition of delirium was poor in the before-education period as only 24% of nurses reported the presence or absence of delirium and only 16% were correct compared with the judge. After education, the number of nurses able to evaluate delirium using any scale (12% vs 82%, P < 0.0005) and use it correctly (8% vs 62%, P < 0.0005) increased significantly. While judge-nurse agreement (Spearman ρ) for the presence of delirium was relatively high for both the before-education period (r = 0.74, P = 0.262) and after-education period (r = 0.71, P < 0.0005), the low number of nurses evaluating delirium before education led to statistical significance only after education. Education did not alter nurses' self-reported evaluation of delirium (before 76% vs after 100%, P = 0.125). Conclusion A simple composite educational intervention incorporating script concordance theory improves the capacity for ICU nurses to screen for delirium nearly as well as experts. Self-reporting by nurses of completion of delirium screening may not constitute an adequate quality assurance process. PMID:18291021

  1. Combined didactic and scenario-based education improves the ability of intensive care unit staff to recognize delirium at the bedside.

    PubMed

    Devlin, John W; Marquis, Francois; Riker, Richard R; Robbins, Tracey; Garpestad, Erik; Fong, Jeffrey J; Didomenico, Dorothy; Skrobik, Yoanna

    2008-01-01

While nurses play a key role in identifying delirium, several authors have noted variability in their ability to recognize delirium. We sought to measure the impact of a simple educational intervention on the ability of intensive care unit (ICU) nurses to clinically identify delirium and to use a standardized delirium scale correctly. Fifty ICU nurses from two different hospitals (university medical and community teaching) evaluated an ICU patient for pain, level of sedation and presence of delirium before and after an educational intervention. The same patient was concomitantly, but independently, evaluated by a validated judge (rho = 0.98) who acted as the reference standard in all cases. The education consisted of two script concordance case scenarios, a slide presentation regarding scale-based delirium assessment, and two further cases. Nurses' clinical recognition of delirium was poor in the before-education period as only 24% of nurses reported the presence or absence of delirium and only 16% were correct compared with the judge. After education, the number of nurses able to evaluate delirium using any scale (12% vs 82%, P < 0.0005) and use it correctly (8% vs 62%, P < 0.0005) increased significantly. While judge-nurse agreement (Spearman rho) for the presence of delirium was relatively high for both the before-education period (r = 0.74, P = 0.262) and after-education period (r = 0.71, P < 0.0005), the low number of nurses evaluating delirium before education led to statistical significance only after education. Education did not alter nurses' self-reported evaluation of delirium (before 76% vs after 100%, P = 0.125). A simple composite educational intervention incorporating script concordance theory improves the capacity for ICU nurses to screen for delirium nearly as well as experts. Self-reporting by nurses of completion of delirium screening may not constitute an adequate quality assurance process.

  2. Under What Circumstances Do Wood Products from Native Forests Benefit Climate Change Mitigation?

    PubMed Central

    Keith, Heather; Lindenmayer, David; Macintosh, Andrew; Mackey, Brendan

    2015-01-01

Climate change mitigation benefits from the land sector are not being fully realised because of uncertainty and controversy about the role of native forest management. The dominant policy view, as stated in the IPCC’s Fifth Assessment Report, is that sustainable forest harvesting yielding wood products generates the largest mitigation benefit. We demonstrate that changing native forest management from commercial harvesting to conservation can make an important contribution to mitigation. Conservation of native forests results in an immediate and substantial reduction in net emissions relative to a reference case of commercial harvesting. We calibrated models to simulate scenarios of native forest management for two Australian case studies: mixed-eucalypt in New South Wales and Mountain Ash in Victoria. Carbon stocks in the harvested forest included forest biomass, wood and paper products, waste in landfill, and bioenergy that substituted for fossil fuel energy. The conservation forest included forest biomass, and subtracted stocks for the foregone products that were substituted by non-wood products or plantation products. Total carbon stocks were lower in harvested forest than in conservation forest in both case studies over the 100-year simulation period. We tested a range of potential parameter values reported in the literature: none could increase the combined carbon stock in products, slash, landfill and substitution sufficiently to exceed the increase in carbon stock due to changing management of native forest to conservation. The key parameters determining carbon stock change under different forest management scenarios are those affecting accumulation of carbon in forest biomass, rather than parameters affecting transfers among wood products. This analysis helps prioritise mitigation activities to focus on maximising forest biomass. 
International forest-related policies, including negotiations under the UNFCCC, have failed to recognize fully the mitigation value of native forest conservation. Our analyses provide evidence for decision-making about the circumstances under which forest management provides mitigation benefits. PMID:26436916

  3. Health risk for children and adults consuming apples with pesticide residue.

    PubMed

    Lozowicka, Bozena

    2015-01-01

The presence of pesticide residues in apples raises serious health concerns, especially when the fresh fruits are consumed by children, who are particularly vulnerable to pesticide hazards. This study presents the results of nine years of investigation (2005-2013) of 696 samples of Polish apples for 182 pesticides using gas and liquid chromatography and spectrophotometric techniques. Only 33.5% of the samples did not contain residues above the limit of detection. In 66.5% of the samples, 34 pesticides were detected, of which the maximum residue level (MRL) was exceeded in 3%. Multiple residues were present in 35% of the samples with two to six pesticides, and one sample contained seven compounds. A study of the health risk for children, adults and the general population consuming apples with these pesticides was performed. The pesticide residue data were combined with apple consumption at the 97.5th percentile and in the mean diet. A deterministic model was used to assess the chronic and acute exposures based on the average and high concentrations of residues. Additionally, a "worst-case scenario" and an "optimistic case scenario" were used to assess the chronic risk. In certain cases, the total dietary pesticide intake calculated from the residue levels observed in apples exceeds the toxicological criteria. Children were the group most exposed to the pesticides, and the greatest short-term hazard stemmed from flusilazole at 624%, dimethoate at 312%, tebuconazole at 173%, and chlorpyrifos methyl and captan at 104% of the Acute Reference Dose (ARfD) each. In the cumulative chronic exposure, among the 17 groups of compounds studied, organophosphate insecticides constituted 99% of the acceptable daily intake (ADI). The results indicate that the occurrence of pesticide residues in apples could not be considered a serious public health problem. Nevertheless, continued monitoring and tighter regulation of pesticide residues are recommended. 
Copyright © 2014 Elsevier B.V. All rights reserved.
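The percentages of the Acute Reference Dose quoted in this record follow from the standard deterministic intake calculation: intake (mg/kg body weight) = residue × portion mass / body weight, expressed against the ARfD. A minimal sketch of that arithmetic, with wholly hypothetical input values (not the paper's measured residues or consumption data):

```python
def acute_intake_pct_arfd(residue_mg_per_kg, portion_g, body_weight_kg,
                          arfd_mg_per_kg_bw):
    """Simplified deterministic acute dietary intake as a percent of the ARfD.

    intake (mg/kg bw) = residue (mg/kg food) * portion (kg) / body weight (kg),
    then expressed relative to the Acute Reference Dose.
    """
    intake = residue_mg_per_kg * (portion_g / 1000.0) / body_weight_kg
    return 100.0 * intake / arfd_mg_per_kg_bw

# Hypothetical example: a child's large apple portion. All numbers are
# invented for illustration; values over 100% of ARfD flag a potential
# acute risk, as for the compounds listed in the abstract.
pct = acute_intake_pct_arfd(residue_mg_per_kg=0.12, portion_g=300,
                            body_weight_kg=16.15, arfd_mg_per_kg_bw=0.005)
print(f"acute intake: {pct:.0f}% of ARfD")
```

The same formula, applied per compound and summed within a mechanistic class, yields the cumulative figures such as the 99%-of-ADI result for organophosphates.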

  4. Hydrogen Financial Analysis Scenario Tool (H2FAST); NREL (National Renewable Energy Laboratory)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Melaina, Marc

    This presentation describes the Hydrogen Financial Analysis Scenario Tool, H2FAST, and provides an overview of each of the three H2FAST formats: the H2FAST web tool, the H2FAST Excel spreadsheet, and the H2FAST Business Case Scenario (BCS) tool. Examples are presented to illustrate the types of questions that H2FAST can help answer.

  5. Arctic Planning Scenarios: Scenario #2 - Safety and Security Scenario

    DTIC Science & Technology

    2011-07-01

the case within the security communities (DND, RCMP and CBSA) where the facility expansion programme fell behind the pace of socio-economic growth...became a political issue with many First Nations leaders protesting about the speed of progress not keeping pace with the socio-economic demands of...raised level of socio-political consciousness quickly gravitated across the Inuit bands of NWT and Nunavut. It also drew support from those Southern

  6. Quantitative assessment of building fire risk to life safety.

    PubMed

    Guanquan, Chu; Jinhua, Sun

    2008-06-01

    This article presents a quantitative risk assessment framework for evaluating fire risk to life safety. Fire risk is divided into two parts: probability and corresponding consequence of every fire scenario. The time-dependent event tree technique is used to analyze probable fire scenarios based on the effect of fire protection systems on fire spread and smoke movement. To obtain the variation of occurrence probability with time, Markov chain is combined with a time-dependent event tree for stochastic analysis on the occurrence probability of fire scenarios. To obtain consequences of every fire scenario, some uncertainties are considered in the risk analysis process. When calculating the onset time to untenable conditions, a range of fires are designed based on different fire growth rates, after which uncertainty of onset time to untenable conditions can be characterized by probability distribution. When calculating occupant evacuation time, occupant premovement time is considered as a probability distribution. Consequences of a fire scenario can be evaluated according to probability distribution of evacuation time and onset time of untenable conditions. Then, fire risk to life safety can be evaluated based on occurrence probability and consequences of every fire scenario. To express the risk assessment method in detail, a commercial building is presented as a case study. A discussion compares the assessment result of the case study with fire statistics.
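The framework this record describes reduces to summing, over fire scenarios, the occurrence probability times the consequence, where the consequence depends on whether evacuation time (pre-movement plus travel) exceeds the onset of untenable conditions. A minimal Monte Carlo sketch of that combination; the scenario probabilities and distribution parameters below are invented for illustration, not the case study's values:

```python
import random

random.seed(1)

def p_untenable_before_escape(aset_mu, aset_sigma, premove_lo, premove_hi,
                              travel_time, n=100_000):
    """Monte Carlo estimate of P(escape time > onset of untenable conditions).

    Onset time is drawn lognormal and pre-movement time uniform, standing in
    for the probability distributions the abstract describes; times in seconds.
    """
    failures = 0
    for _ in range(n):
        onset = random.lognormvariate(aset_mu, aset_sigma)
        escape = random.uniform(premove_lo, premove_hi) + travel_time
        if escape > onset:
            failures += 1
    return failures / n

# Each entry: (scenario occurrence probability, consequence-model parameters).
# Scenarios might correspond to event-tree branches such as "sprinklers
# operate / fail"; all numbers here are hypothetical.
scenarios = [
    (0.70, dict(aset_mu=6.0, aset_sigma=0.3, premove_lo=30, premove_hi=120, travel_time=60)),
    (0.25, dict(aset_mu=5.5, aset_sigma=0.3, premove_lo=30, premove_hi=180, travel_time=90)),
    (0.05, dict(aset_mu=5.0, aset_sigma=0.4, premove_lo=60, premove_hi=240, travel_time=120)),
]

risk = sum(p * p_untenable_before_escape(**params) for p, params in scenarios)
print(f"aggregate life-safety risk index: {risk:.4f}")
```

The paper additionally makes the branch probabilities time-dependent via a Markov chain; this sketch treats them as fixed for brevity.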

  7. HIV status: the prima facie right not to know the result.

    PubMed

    Chan, Tak Kwong

    2016-02-01

When a patient regains consciousness from Cryptococcus meningitis, the clinician may offer an HIV test (in case it has not already been done) (scenario 1) or offer to tell the patient his HIV status (in case the test has already been performed with a positive result while the patient was unconscious) (scenario 2). Youngs and Simmonds proposed that the patient has the prima facie right to refuse an HIV test in scenario 1 but not the prima facie right not to be told the HIV status in scenario 2. I submit that the claims to the right of refusal in both scenarios are similarly strong, as they should both be grounded in privacy, self-determination or dignity. But a conscientious agent should bear in mind that members of the public also have the right not to be harmed. When circumstances allow, a proper balance of the potential benefits and harm for all the competing parties should guide the clinical decision as to whose right should finally prevail. Where a full ethical analysis is not possible, the presumption should favour respecting the patient's right of refusal in both scenarios. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  8. The influence of climate change on Tanzania's hydropower sustainability

    NASA Astrophysics Data System (ADS)

    Sperna Weiland, Frederiek; Boehlert, Brent; Meijer, Karen; Schellekens, Jaap; Magnell, Jan-Petter; Helbrink, Jakob; Kassana, Leonard; Liden, Rikard

    2015-04-01

Economic costs induced by current climate variability are large for Tanzania and may further increase due to future climate change. The Tanzanian National Climate Change Strategy addressed the need for stabilization of hydropower generation and strengthening of water resources management. Increased hydropower generation can contribute to sustainable use of energy resources and stabilization of the national electricity grid. To support Tanzania, the World Bank financed this study, in which the impact of climate change on the water resources and related hydropower generation capacity of Tanzania is assessed. To this end, an ensemble of 78 GCM projections from both the CMIP3 and CMIP5 datasets was bias-corrected and downscaled to 0.5 degrees resolution following the BCSD technique, using the Princeton Global Meteorological Forcing Dataset as a reference. To quantify the hydrological impacts of climate change by 2035, the global hydrological model PCR-GLOBWB was set up for Tanzania at a resolution of 3 minutes and run with all 78 GCM datasets. From the full set of projections, a probable (median) and a worst case scenario (95th percentile) were selected based upon (1) the country-average Climate Moisture Index and (2) discharge statistics of relevance to hydropower generation. Although precipitation from the Princeton dataset shows deviations from local station measurements and the global hydrological model does not perfectly reproduce local-scale hydrographs, the main discharge characteristics and precipitation patterns are represented well. The modeled natural river flows were adjusted for water demand and irrigation within the water resources model RIBASIM (both historical values and future scenarios). Potential hydropower capacity was assessed with the power market simulation model PoMo-C, which considers both reservoir inflows obtained from RIBASIM and overall electricity generation costs. 
Results of the study show that climate change is unlikely to negatively affect the average potential of future hydropower production; it will likely make hydropower more profitable. Yet the uncertainty in climate change projections remains large and the risks are significant; adaptation strategies should ideally consider a worst case scenario to ensure robust power generation. Overall, a diversified power generation portfolio, anchored in hydropower and supported by other renewables and fossil fuel-based energy sources, is the best solution for Tanzania.

  9. An Adjoint Force-restore Model for Glacier Terminus Fluctuations

    NASA Astrophysics Data System (ADS)

    Ren, D.; Leslie, L.; Karoly, D.

    2006-12-01

A linear inverse formula forms the basis for an individual treatment of 7 central Asian (25-55°N; 70-95°E) glaciers. The linear forward model is based on first-order glacier dynamics and requires knowledge of the reference states of forcing and of the glacier perturbation magnitude. In this study, the adjoint-based 4D-var method was applied to optimally determine the reference states, making it possible to start the integration at an arbitrarily chosen time and thus to make use of coupled general circulation model (CGCM) predictions of future temperature scenarios. Two sensitive yet uncertain glacier parameters and the reference states at year 1900 are inferred from observed glacier length records distributed irregularly over the 20th century and from the regional mean annual temperature anomaly (against the 1961-1990 reference) time series. We rotated the temperature forcing among the Hadley Centre-Climatic Research Unit of the University of East Anglia (HadCRUT2) observations, the Global Historical Climatology Network (GHCN) observations, and the ensemble mean of multiple CGCM runs, and compared the retrieval results. Because of the high resemblance between the three data sources after 1960, it was deemed practicable to use the observed temperature as forcing in retrieving the model parameters and initial states and then to run an extended period with forcing from the ensemble mean CGCM temperature of the next century. The length fluctuation is estimated for the transient climate period with 9 CGCM simulations under SRES A2 (a strong emission scenario from the Special Report on Emissions Scenarios). For the 60-year period 2000-2060, all glaciers experienced marked shrinkage, especially those with gentle slopes. Although some small glaciers will lose nearly one-third of their year-2000 length, the very existence of the glaciers studied here is not threatened by year 2060. The differences in individual glacier responses are very large. 
No straightforward relationship is found between glacier size and the fractional change in glacier length.
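The first-order glacier dynamics underlying such a linear forward model can be sketched as a single relaxation equation, dL'/dt = -(L' - c·T')/tau, where L' is the length anomaly, T' the temperature anomaly, tau the response time and c the climate sensitivity. The parameter values below are illustrative placeholders, not the parameters retrieved in the study.

```python
# Minimal sketch of a first-order linear glacier length model:
#   dL'/dt = -(L' - c * T') / tau
# tau (response time, years) and c (length sensitivity, m per K)
# are illustrative, not the study's retrieved parameters.

def glacier_length_anomaly(temp_anomaly, tau=50.0, c=-2000.0, dt=1.0):
    """Forward-Euler integration of the length anomaly L' (metres)."""
    L, history = 0.0, []
    for T in temp_anomaly:
        L += dt * (-(L - c * T) / tau)
        history.append(L)
    return history

# A sustained +1 K anomaly drives L' toward its equilibrium retreat c*T'.
response = glacier_length_anomaly([1.0] * 500)
```

With constant forcing the anomaly relaxes exponentially toward c·T' (here -2000 m), which is why glaciers with different response times tau show the very different transient behaviours noted above.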

  10. Scenario planning.

    PubMed

    Enzmann, Dieter R; Beauchamp, Norman J; Norbash, Alexander

    2011-03-01

In facing future developments in health care, scenario planning offers a complementary approach to traditional strategic planning. Whereas traditional strategic planning typically consists of predicting the future at a single point on a chosen time horizon and mapping the preferred plans to address such a future, scenario planning creates stories about multiple likely potential futures on a given time horizon and maps the preferred plans to address each of the described potential futures. Each scenario is purposefully different and specifically not a consensus worst-case, average, or best-case forecast; nor is scenario planning a process of probabilistic prediction. Scenario planning focuses on high-impact, uncertain driving forces that in the authors' example affect the field of radiology. Uncertainty is the key concept: these forces are mapped onto axes of uncertainty, the poles of which have opposed effects on radiology. One chosen axis was "market focus," with poles of centralized health care (government control) vs a decentralized private market. Another axis was "radiology's business model," with poles of a unified, single specialty vs a splintered, disaggregated subspecialty. The third axis was "technology and science," with poles of technology enabling to radiology vs technology threatening to radiology. Selected poles of these axes were then combined to create 3 scenarios. One scenario, termed "entrepreneurialism," consisted of a decentralized private market, a disaggregated business model, and threatening technology and science. A second scenario, termed "socialized medicine," had a centralized market focus, a unified specialty business model, and enabling technology and science. A third scenario, termed "freefall," had a centralized market focus, a disaggregated business model, and threatening technology and science. 
These scenarios provide a range of futures that ultimately allow the identification of defined "signposts" that can suggest which basic features among the "possible futures" are playing out. Scenario planning provides for the implementation of appropriately constructed strategic responses. Scenarios make a pre-prepared game plan available for ready use as the future unfolds. They allow a deliberative response rather than a hastily constructed, urgent one. Copyright © 2011 American College of Radiology. Published by Elsevier Inc. All rights reserved.
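The axis-and-pole construction described above is mechanical enough to enumerate: combining one pole from each of the three axes yields eight candidate scenario frames, three of which the authors developed into narratives. A minimal sketch, with pole labels paraphrased from the abstract:

```python
from itertools import product

# The three uncertainty axes and their opposed poles, as described above.
axes = {
    "market focus": ("centralized health care", "decentralized private market"),
    "business model": ("unified single specialty", "disaggregated subspecialty"),
    "technology and science": ("enabling to radiology", "threatening to radiology"),
}

# Each combination of poles is a candidate scenario frame: 2^3 = 8 in total.
frames = list(product(*axes.values()))

# "Socialized medicine" corresponds to one such frame.
socialized_medicine = ("centralized health care",
                       "unified single specialty",
                       "enabling to radiology")
```

Enumerating all frames and then selecting a purposefully diverse subset mirrors how the three published scenarios were chosen from the full combination space.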

  11. Review of potential EGS sites and possible EGS demonstration scenarios

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    1999-09-01

Review of potential sites for Enhanced Geothermal Systems (EGS) and development of reference scenarios for EGS demonstration projects are two sub-tasks included in the FY 1999 EGS Research and Development (R&D) Management Task (DOE Task Order Number DE-AT07-99ID60365, included in the Appendix of this report). These sub-tasks are consistent with the EGS Strategic Plan, which includes milestones relating to EGS site selection (Milestone 4, to be completed in 2004) and development of a cost-shared, pilot-scale demonstration project (Milestone 5, to be completed in 2008). The purpose of the present work is to provide some reference points for discussing what type of EGS projects might be undertaken, where they might be located, and what the associated benefits are likely to be. The review of potential EGS sites is presented in Chapter 2 of this report. It draws upon site-selection criteria (and potential project sites that were identified using those criteria) developed at a mini-workshop held at the April 1998 DOE Geothermal Program Review to discuss EGS R&D issues. The criteria and the sites were the focus of a paper presented at the 4th International Hot Dry Rock Forum in Strasbourg in September 1998 (Sass and Robertson-Tait, 1998). The selection criteria, project sites and possible EGS developments discussed in the workshop and paper are described in more detail herein. Input from geothermal operators is incorporated, and water availability and transmission-line access are emphasized. The reference scenarios for EGS demonstration projects are presented in Chapter 3. Three alternative scenarios are discussed: (1) a stand-alone demonstration plant in an area with no existing geothermal development; (2) a separate generating facility adjacent to an existing geothermal development; and (3) an EGS project that supplies an existing geothermal power plant with additional generating capacity. 
Furthermore, information potentially useful to DOE in framing solicitations and selecting projects for funding is discussed objectively. Although defined as separate sub-tasks, the EGS site review and reference scenarios are closely related. The incremental approach to EGS development that has recently been adopted could logically be expected to yield proposals for studies that lead up to and include production-enhancement experiments in producing geothermal fields in the very near future. However, the strategic plan clearly calls for the development of a more comprehensive demonstration project that can generate up to perhaps 10 MW (gross). It is anticipated that a series of small-scale experiments will define what realistically may be achieved in the near future, thus setting the stage for a successful pilot demonstration. This report continues the process of presenting information on EGS sites and experiments, and begins the process of defining what a demonstration project might be.

  12. Intensity earthquake scenario (scenario event - a damaging earthquake with higher probability of occurrence) for the city of Sofia

    NASA Astrophysics Data System (ADS)

    Aleksandrova, Irena; Simeonova, Stela; Solakov, Dimcho; Popova, Maria

    2014-05-01

Among the many kinds of natural and man-made disasters, earthquakes dominate with regard to their social and economic impact on the urban environment. Global seismic risk is increasing steadily as urbanization and development occupy more areas that are prone to the effects of strong earthquakes. Additionally, the uncontrolled growth of mega cities in highly seismic areas around the world is often associated with the construction of seismically unsafe buildings and infrastructures, and undertaken with an insufficient knowledge of the regional seismicity peculiarities and seismic hazard. The assessment of seismic hazard and generation of earthquake scenarios is the first link in the prevention chain and the first step in the evaluation of the seismic risk. The earthquake scenarios are intended as a basic input for developing detailed earthquake damage scenarios for the cities and can be used in earthquake-safe town and infrastructure planning. The city of Sofia is the capital of Bulgaria. It is situated in the centre of the Sofia area, the most populated (more than 1.2 million inhabitants), industrial and cultural region of Bulgaria, which faces considerable earthquake risk. The available historical documents prove the occurrence of destructive earthquakes during the 15th-18th centuries in the Sofia zone. In the 19th century the city of Sofia experienced two strong earthquakes: the 1818 earthquake with epicentral intensity I0=8-9 MSK and the 1858 earthquake with I0=9-10 MSK. During the 20th century the strongest event that occurred in the vicinity of the city of Sofia was the 1917 earthquake with MS=5.3 (I0=7-8 MSK). Almost a century later (95 years), an earthquake of moment magnitude 5.6 (I0=7-8 MSK) hit the city of Sofia on May 22nd, 2012. 
In the present study, the deterministic scenario event considered is a damaging earthquake with a higher probability of occurrence that could affect the city with intensity less than or equal to VIII. Usable and realistic ground motion maps for urban areas are generated either from the assumption of a "reference earthquake", or directly, showing values of macroseismic intensity generated by a damaging, real earthquake. In the study, applying the deterministic approach, an earthquake scenario in macroseismic intensity (a "model" earthquake scenario) for the city of Sofia is generated. The deterministic "model" intensity scenario based on the assumption of a "reference earthquake" is compared with a scenario based on the observed macroseismic effects caused by the damaging 2012 earthquake (MW5.6). The difference between observed (Io) and predicted (Ip) intensity values is analyzed.
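For illustration only, a "reference earthquake" intensity scenario of the kind mentioned above can be generated with a generic macroseismic attenuation law in which predicted intensity decays with the logarithm of hypocentral distance. The functional form and coefficients below are hypothetical stand-ins, not the relation used in the study; only the comparison of observed (Io) against predicted (Ip) intensities mirrors the analysis.

```python
import math

# Hypothetical intensity attenuation: Ip = I0 - a * log10(r / h),
# with hypocentral distance r = sqrt(d^2 + h^2) for epicentral
# distance d and focal depth h. Coefficient a is illustrative.

def predicted_intensity(d_km, i0=8.0, h_km=10.0, a=3.0):
    r = math.hypot(d_km, h_km)
    return i0 - a * math.log10(r / h_km)

def intensity_residual(observed, d_km, **kwargs):
    """Io - Ip, the quantity analyzed in the study."""
    return observed - predicted_intensity(d_km, **kwargs)

# At the epicentre (d = 0, r = h) the predicted intensity equals I0.
ip_epicentre = predicted_intensity(0.0)
```

Mapping such predicted intensities over a grid of sites, then subtracting them from the observed 2012 macroseismic values, gives the residual field discussed in the abstract.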

  13. A comparative analysis of the density of the SNOMED CT conceptual content for semantic harmonization

    PubMed Central

    He, Zhe; Geller, James; Chen, Yan

    2015-01-01

Objectives Medical terminologies vary in the amount of concept information (the “density”) represented, even in the same sub-domains. This causes problems in terminology mapping, semantic harmonization and terminology integration. Moreover, complex clinical scenarios need to be encoded by a medical terminology with comprehensive content. SNOMED Clinical Terms (SNOMED CT), a leading clinical terminology, was reported to lack concepts and synonyms, problems that cannot be fully alleviated by using post-coordination. Therefore, a scalable solution is needed to enrich the conceptual content of SNOMED CT. We are developing a structure-based, algorithmic method to identify potential concepts for enriching the conceptual content of SNOMED CT and to support semantic harmonization of SNOMED CT with selected other Unified Medical Language System (UMLS) terminologies. Methods We first identified a subset of English terminologies in the UMLS that have ‘PAR’ relationships labeled ‘IS_A’ and over 10% overlap with one or more of the 19 hierarchies of SNOMED CT. We call these “reference terminologies,” noting that our use of this name differs from the standard use. Next, we defined a set of topological patterns across pairs of terminologies, with SNOMED CT being one terminology in each pair and the other being one of the reference terminologies. We then explored how often these topological patterns appear between SNOMED CT and each reference terminology, and how to interpret them. Results Four viable reference terminologies were identified. Large density differences between terminologies were found. Expected interpretations of these differences were indeed observed, as follows. A random sample of 299 instances of special topological patterns (“2:3 and 3:2 trapezoids”) showed that 39.1% and 59.5% of analyzed concepts in SNOMED CT and in a reference terminology, respectively, were deemed to be alternative classifications of the same conceptual content. 
In 30.5% and 17.6% of the cases, it was found that intermediate concepts could be imported into SNOMED CT or into the reference terminology, respectively, to enhance their conceptual content, if approved by a human curator. Other cases included synonymy and errors in one of the terminologies. Conclusion These results show that structure-based algorithmic methods can be used to identify potential concepts to enrich SNOMED CT and the four reference terminologies. The comparative analysis has the future potential of supporting terminology authoring by suggesting new content to improve content coverage and semantic harmonization between terminologies. PMID:25890688
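One of the pattern interpretations above, importing intermediate concepts, can be illustrated with a toy graph search: if concept B is a direct child of A in one terminology but the mapped pair B', A' in the other terminology is connected through extra concepts, those extras are import candidates. This sketch is a simplified stand-in for the topological patterns actually defined in the paper.

```python
from collections import defaultdict, deque

def ancestors(parents, node):
    """All nodes reachable upward from node via IS_A parent links."""
    seen, queue = set(), deque([node])
    while queue:
        for p in parents.get(queue.popleft(), ()):
            if p not in seen:
                seen.add(p)
                queue.append(p)
    return seen

def intermediates(parents, child, ancestor):
    """Concepts lying strictly between child and ancestor in the hierarchy."""
    children = defaultdict(set)
    for c, ps in parents.items():
        for p in ps:
            children[p].add(c)
    above_child = ancestors(parents, child)
    below_ancestor = ancestors(children, ancestor)  # walk reversed edges
    return (above_child & below_ancestor) - {child, ancestor}

# Toy terminology: B' IS_A X IS_A A'. X is then a candidate for import
# into the terminology where B is a direct child of A.
toy = {"B'": {"X"}, "X": {"A'"}}
found = intermediates(toy, "B'", "A'")
```

Such candidates would still require review by a human curator before being added, as the Conclusion above emphasizes.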

  14. A reliable simultaneous representation of seismic hazard and of ground shaking recurrence

    NASA Astrophysics Data System (ADS)

    Peresan, A.; Panza, G. F.; Magrin, A.; Vaccari, F.

    2015-12-01

Different earthquake hazard maps may be appropriate for different purposes, such as emergency management, insurance and engineering design. Accounting for the lower occurrence rate of larger sporadic earthquakes may make it possible to formulate cost-effective policies in some specific applications, provided that statistically sound recurrence estimates are used, which is typically not the case in PSHA (Probabilistic Seismic Hazard Assessment). We illustrate the procedure for associating the expected ground motions from Neo-deterministic Seismic Hazard Assessment (NDSHA) with an estimate of their recurrence. Neo-deterministic refers to a scenario-based approach, which allows for the construction of a broad range of earthquake scenarios via full-waveform modeling. From the synthetic seismograms, estimates of peak ground acceleration, velocity and displacement, or any other parameter relevant to seismic engineering, can be extracted. NDSHA, in its standard form, defines the hazard computed from a wide set of scenario earthquakes (including the largest deterministically or historically defined credible earthquake, MCE) and does not supply the frequency of occurrence of the expected ground shaking. A recent enhanced variant of NDSHA that reliably accounts for recurrence has been developed and is applied here to the Italian territory. The characterization of the frequency-magnitude relation can be performed by any statistically sound method supported by data (e.g. a multi-scale seismicity model), so that a recurrence estimate is associated with each of the pertinent sources. In this way a standard NDSHA map of ground shaking is obtained simultaneously with a map of the corresponding recurrences. The introduction of recurrence estimates in NDSHA naturally allows for the generation of ground shaking maps at specified return periods. This permits a straightforward comparison between NDSHA and PSHA maps.
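As a sketch of how a recurrence estimate can be attached to a scenario earthquake, the snippet below uses a Gutenberg-Richter frequency-magnitude relation, log10 N(>=m) = a - b*m. The a and b values are illustrative only, not the multi-scale seismicity model referred to above.

```python
# Gutenberg-Richter sketch: annual rate and return period of events
# at or above magnitude m. Values of a and b are illustrative only.

def annual_rate(m, a=4.0, b=1.0):
    """Events per year with magnitude >= m: N = 10**(a - b*m)."""
    return 10.0 ** (a - b * m)

def return_period_years(m, a=4.0, b=1.0):
    return 1.0 / annual_rate(m, a, b)

# Under these parameters, a magnitude-7 scenario event recurs on
# average once every 1000 years.
rp = return_period_years(7.0)
```

Pairing each NDSHA scenario source with such a return period is what allows the deterministic shaking maps to be re-expressed for specified return periods, as described above.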

  15. Age- and risk-related appropriateness of the use of available influenza vaccines in the Italian elderly population is advantageous: results from a budget impact analysis

    PubMed Central

    BARBIERI, M.; CAPRI, S.; WAURE, C. DE; BOCCALINI, S.; PANATTO, D.

    2017-01-01

Summary Introduction Nowadays, four different types of influenza vaccines are available in Italy: trivalent (TIV), quadrivalent (QIV), MF59-adjuvanted (aTIV) and intradermal TIV (idTIV) inactivated vaccines. Recently, a concept of the appropriateness (i.e. according to age and risk factors) of the use of the different vaccines has been established in Italy. We conducted a budget impact analysis of switching to a policy in which the Italian elderly (who carry the major disease burden) receive the available vaccines according to their age and risk profile. Methods A novel budget impact model was constructed with a time horizon of one influenza season. In the reference scenario, the cohort of Italian elderly individuals could receive any of the available vaccines according to the 2017/18 season market shares. The alternative scenario envisaged the administration of TIV/QIV to people aged 65-74 years and at low risk of developing influenza-related complications, while aTIV/idTIV were allocated to high-risk 65-74-year-olds and all subjects aged ≥ 75 years. Results Switching to the alternative scenario would result in both significant health benefits and net budget savings. In particular, it would be possible to prevent an additional 8201 cases of laboratory-confirmed influenza, 988 complications, 355 hospitalizations and 14 deaths. Although the alternative strategy is associated with slightly higher vaccination costs, the total savings derived from fewer influenza events completely offset this increase, yielding net budget savings of € 0.13 million. Conclusions An immunization policy in which influenza vaccines are administered according to the age and risk profile of Italian elderly individuals is advisable. PMID:29707658
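The budget impact logic reported above, added vaccination cost versus savings from averted influenza events, can be sketched generically. The averted-event counts below come from the abstract, but the unit costs and the extra vaccination cost are hypothetical placeholders, so the resulting number is not the study's €0.13 million figure.

```python
# Generic budget impact sketch: a negative result means the alternative
# vaccination scenario yields net savings. Unit costs are hypothetical.

def net_budget_impact(extra_vaccination_cost, averted, unit_costs):
    savings = sum(n * unit_costs[event] for event, n in averted.items())
    return extra_vaccination_cost - savings

averted_events = {          # counts from the abstract
    "influenza_case": 8201,
    "complication": 988,
    "hospitalization": 355,
}
unit_costs_eur = {          # illustrative placeholders
    "influenza_case": 40.0,
    "complication": 250.0,
    "hospitalization": 3000.0,
}
impact = net_budget_impact(400_000.0, averted_events, unit_costs_eur)
```

The design point is that a modest increase in vaccine acquisition cost can be dominated by avoided downstream treatment costs, which is exactly the trade-off the analysis quantifies.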

  16. Age- and risk-related appropriateness of the use of available influenza vaccines in the Italian elderly population is advantageous: results from a budget impact analysis.

    PubMed

    Barbieri, M; Capri, S; Waure, C DE; Boccalini, S; Panatto, D

    2017-12-01

Nowadays, four different types of influenza vaccines are available in Italy: trivalent (TIV), quadrivalent (QIV), MF59-adjuvanted (aTIV) and intradermal TIV (idTIV) inactivated vaccines. Recently, a concept of the appropriateness (i.e. according to age and risk factors) of the use of the different vaccines has been established in Italy. We conducted a budget impact analysis of switching to a policy in which the Italian elderly (who carry the major disease burden) receive the available vaccines according to their age and risk profile. A novel budget impact model was constructed with a time horizon of one influenza season. In the reference scenario, the cohort of Italian elderly individuals could receive any of the available vaccines according to the 2017/18 season market shares. The alternative scenario envisaged the administration of TIV/QIV to people aged 65-74 years and at low risk of developing influenza-related complications, while aTIV/idTIV were allocated to high-risk 65-74-year-olds and all subjects aged ≥ 75 years. Switching to the alternative scenario would result in both significant health benefits and net budget savings. In particular, it would be possible to prevent an additional 8201 cases of laboratory-confirmed influenza, 988 complications, 355 hospitalizations and 14 deaths. Although the alternative strategy is associated with slightly higher vaccination costs, the total savings derived from fewer influenza events completely offset this increase, yielding net budget savings of € 0.13 million. An immunization policy in which influenza vaccines are administered according to the age and risk profile of Italian elderly individuals is advisable.

  17. Ready-to-use pre-filled syringes of atropine for anaesthesia care in French hospitals - a budget impact analysis.

    PubMed

    Benhamou, Dan; Piriou, Vincent; De Vaumas, Cyrille; Albaladejo, Pierre; Malinovsky, Jean-Marc; Doz, Marianne; Lafuma, Antoine; Bouaziz, Hervé

    2017-04-01

Patient safety is improved by the use of labelled, ready-to-use, pre-filled syringes (PFS) when compared to conventional methods of syringe preparation (CMP) of the same product from an ampoule. However, the PFS presentation costs more than the CMP presentation. The aim of this study was to estimate the budget impact for French hospitals of switching from atropine in ampoules to atropine PFS for anaesthesia care. A model was constructed to simulate the financial consequences of the use of atropine PFS in operating theatres, taking into account wastage and medication errors. The model tested different scenarios and a sensitivity analysis was performed. In the reference scenario, the systematic use of atropine PFS rather than atropine CMP yielded a net one-year budget saving of €5,255,304. Medication errors outweighed the other cost factors relating to the use of atropine CMP (€9,425,448). Avoidance of wastage in the case of atropine CMP (prepared and unused) was a major source of savings (€1,167,323). The other scenarios examined also yielded significant savings. The sensitivity analysis suggests that the results obtained are robust and stable for a range of parameter estimates and assumptions. The financial model was based on data obtained from the literature and expert opinion. The budget impact analysis shows that even though atropine PFS is more expensive than atropine CMP, its use would lead to significant cost savings. Savings would mainly be due to fewer medication errors and their associated consequences, and to the absence of wastage when atropine syringes are prepared in advance. Copyright © 2016 Société française d'anesthésie et de réanimation (Sfar). Published by Elsevier Masson SAS. All rights reserved.

  18. Evaluating application of the National Healthcare Safety Network central line-associated bloodstream infection surveillance definition: a survey of pediatric intensive care and hematology/oncology units.

    PubMed

    Gaur, Aditya H; Miller, Marlene R; Gao, Cuilan; Rosenberg, Carol; Morrell, Gloria C; Coffin, Susan E; Huskins, W Charles

    2013-07-01

To evaluate the application of the National Healthcare Safety Network (NHSN) central line-associated bloodstream infection (CLABSI) definition in pediatric intensive care units (PICUs) and pediatric hematology/oncology units (PHOUs) participating in a multicenter quality improvement collaborative to reduce CLABSIs, and to identify sources of variability in the application of the definition. Online survey using 18 standardized case scenarios. Each described a positive blood culture in a patient and required a yes-or-no answer to the question "Is this a CLABSI?" NHSN staff responses were the reference standard. Sixty-five US PICUs and PHOUs. Staff who routinely adjudicate CLABSIs using NHSN definitions. Sixty responses were received from 58 (89%) of 65 institutions; 78% of respondents were infection preventionists, infection control officers, or infectious disease physicians. Responses matched those of NHSN staff for 78% of questions. The mean (SE) percentage of concurring answers did not differ for scenarios evaluating application of 1 of the 3 criteria ("known pathogen," 78% [1.7%]; "skin contaminant, >1 year of age," 76% [SE, 2.5%]; "skin contaminant, ≤1 year of age," 81% [3.8%]; [Formula: see text]). The mean percentage of concurring answers was lower for scenarios requiring respondents to determine whether a CLABSI was present or incubating on admission (64% [4.6%]; [Formula: see text]) or to distinguish between primary and secondary bacteremia (65% [2.5%]; [Formula: see text]). The accuracy of application of the CLABSI definition was suboptimal. Efforts to reduce variability in identifying CLABSIs that are present or incubating on admission and in distinguishing primary from secondary bloodstream infection are needed.

  19. Improvement of Bragg peak shift estimation using dimensionality reduction techniques and predictive linear modeling

    NASA Astrophysics Data System (ADS)

    Xing, Yafei; Macq, Benoit

    2017-11-01

With the emergence of clinical prototypes and first patient acquisitions for proton therapy, research on prompt gamma imaging aims at making the most of the prompt gamma data for in vivo estimation of any shift from the expected Bragg peak (BP). The simple problem of matching the measured prompt gamma profile of each pencil beam with a reference simulation from the treatment plan is actually made complex by uncertainties which can translate into distortions during treatment. We illustrate this challenge and demonstrate the robustness of a predictive linear model we proposed for BP shift estimation based on the principal component analysis (PCA) method. It considered the first clinical knife-edge slit camera design in use with anthropomorphic phantom CT data. In particular, 4115 error scenarios were simulated for the learning model. PCA was applied to the training input, randomly chosen from 500 scenarios, to eliminate data collinearities. A total variance of 99.95% was used for representing the testing input from 3615 scenarios. This model improved the BP shift estimation by an average of 63+/-19%, in a range between -2.5% and 86%, compared to our previous profile shift (PS) method. The robustness of our method was demonstrated by a comparative study conducted by applying Poisson noise 1000 times to each profile. 67% of the cases obtained by the learning model had lower prediction errors than those obtained by the PS method. The estimation accuracy ranged between 0.31 +/- 0.22 mm and 1.84 +/- 8.98 mm for the learning model, while for the PS method it ranged between 0.3 +/- 0.25 mm and 20.71 +/- 8.38 mm.
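The PCA-plus-linear-model pipeline can be sketched on synthetic data: build profiles from known shifts, reduce them with PCA, and fit a least-squares map back to the shifts. This is only the generic technique on fabricated inputs, not the paper's camera simulations or its 4115 error scenarios.

```python
import numpy as np

# PCA + predictive linear model on synthetic "profiles" (fabricated data).
rng = np.random.default_rng(0)
n, m = 200, 100
shifts = rng.uniform(-5.0, 5.0, size=n)      # known BP shifts (mm)
mode = rng.normal(size=m)                    # one latent profile mode
profiles = np.outer(shifts, mode) + rng.normal(scale=0.01, size=(n, m))

# PCA via SVD on centered profiles; keep k principal components.
mean = profiles.mean(axis=0)
_, _, Vt = np.linalg.svd(profiles - mean, full_matrices=False)
k = 2
scores = (profiles - mean) @ Vt[:k].T        # reduced, decorrelated input

# Least-squares linear model from PCA scores (plus intercept) to shift.
A = np.hstack([scores, np.ones((n, 1))])
coef, *_ = np.linalg.lstsq(A, shifts, rcond=None)
mean_abs_err = float(np.abs(A @ coef - shifts).mean())
```

Projecting onto a few principal components before the regression removes collinear directions in the profiles, which is the role PCA plays in the study's learning model.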

  20. Benefits of investing in ecosystem restoration.

    PubMed

    DE Groot, Rudolf S; Blignaut, James; VAN DER Ploeg, Sander; Aronson, James; Elmqvist, Thomas; Farley, Joshua

    2013-12-01

Measures aimed at conservation or restoration of ecosystems are often seen as net-cost projects by governments and businesses because they are based on incomplete and often faulty cost-benefit analyses. After screening over 200 studies, we examined the costs (94 studies) and benefits (225 studies) of ecosystem restoration projects that had sufficient reliable data in 9 different biomes ranging from coral reefs to tropical forests. Costs included capital investment and maintenance of the restoration project, and benefits were based on the monetary value of the total bundle of ecosystem services provided by the restored ecosystem. Assuming restoration is always imperfect and benefits attain only 75% of the maximum value of the reference systems over 20 years, we calculated the net present value at social discount rates of 2% and 8%. We also conducted 2 threshold-cum-sensitivity analyses. Benefit-cost ratios ranged from about 0.05:1 (coral reefs and coastal systems, worst-case scenario) to as much as 35:1 (grasslands, best-case scenario). Our results provide only partial estimates of benefits at one point in time and reflect the lower limit of the welfare benefits of ecosystem restoration, because both the scarcity of and the demand for ecosystem services are increasing and new benefits of natural ecosystems and biological diversity are being discovered. Nonetheless, when accounting for even the incomplete range of known benefits through the use of static estimates that fail to capture rising values, the majority of the restoration projects we analyzed provided net benefits and should be considered not only as profitable but also as high-yielding investments. Beneficios de Invertir en la Restauración de Ecosistemas. © 2013 Society for Conservation Biology.
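The net-present-value comparison described above can be sketched directly: discount 20 years of benefits, assumed to reach 75% of the reference system's annual service value, against capital plus discounted maintenance costs, at the 2% and 8% social discount rates. The monetary inputs are illustrative, not the study's data.

```python
# Benefit-cost sketch: NPV of restored ecosystem services vs costs.
# Monetary inputs are illustrative placeholders.

def npv(annual_value, years=20, rate=0.02):
    """Present value of a constant annual flow over `years` years."""
    return sum(annual_value / (1.0 + rate) ** t for t in range(1, years + 1))

def benefit_cost_ratio(ref_annual_benefit, capital_cost, annual_maintenance,
                       attainment=0.75, years=20, rate=0.02):
    benefits = npv(attainment * ref_annual_benefit, years, rate)
    costs = capital_cost + npv(annual_maintenance, years, rate)
    return benefits / costs

ratio_8pct = benefit_cost_ratio(1000.0, 5000.0, 100.0, rate=0.08)
ratio_2pct = benefit_cost_ratio(1000.0, 5000.0, 100.0, rate=0.02)
```

A lower discount rate weights future service flows more heavily, so for the same project the 2% ratio exceeds the 8% one, which is why the study reports both rates.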

  1. Extending Climate Analytics as a Service to the Earth System Grid Federation Progress Report on the Reanalysis Ensemble Service

    NASA Astrophysics Data System (ADS)

    Tamkin, G.; Schnase, J. L.; Duffy, D.; Li, J.; Strong, S.; Thompson, J. H.

    2016-12-01

We are extending climate analytics-as-a-service, including: (1) a high-performance Virtual Real-Time Analytics Testbed supporting six major reanalysis data sets using advanced technologies like Cloudera Impala-based SQL and Hadoop-based MapReduce analytics over native NetCDF files; (2) a Reanalysis Ensemble Service (RES) that offers a basic set of commonly used operations over the reanalysis collections, accessible through NASA's climate data analytics Web services and our client-side Climate Data Services Python library, CDSlib; and (3) an Open Geospatial Consortium (OGC) WPS-compliant Web service interface to CDSlib to accommodate ESGF's Web service endpoints. This presentation will report on the overall progress of this effort, with special attention to recent enhancements that have been made to the Reanalysis Ensemble Service, including the following:
- A CDSlib Python library that supports full temporal, spatial, and grid-based resolution services
- A new reanalysis collections reference model to enable operator design and implementation
- An enhanced library of sample queries to demonstrate and develop use case scenarios
- Extended operators that enable single- and multiple-reanalysis area averages, vertical averages, re-gridding, and trend, climatology, and anomaly computations
- Full support for the MERRA-2 reanalysis and the initial integration of two additional reanalyses
- A prototype Jupyter notebook-based distribution mechanism that combines CDSlib documentation with interactive use case scenarios and personalized project management
- Prototyped uncertainty quantification services that combine ensemble products with comparative observational products
- Convenient, one-stop shopping for commonly used data products from multiple reanalyses, including basic subsetting and arithmetic operations over the data and extractions of trends, climatologies, and anomalies
- The ability to compute and visualize multiple reanalysis intercomparisons

  2. On-line Meteorology-Chemistry/Aerosols Modelling and Integration for Risk Assessment: Case Studies

    NASA Astrophysics Data System (ADS)

    Bostanbekov, Kairat; Mahura, Alexander; Nuterman, Roman; Nurseitov, Daniyar; Zakarin, Edige; Baklanov, Alexander

    2016-04-01

At the regional level, and especially in areas with potentially diverse sources of industrial pollutants, the risk assessment of impacts on the environment and population is critically important. During normal operations, the risk is minimal. However, during accidental situations, the risk increases due to releases of harmful pollutants into different environments such as water, soil, and the atmosphere, where they undergo continuous transformation and transport. In this study, the Enviro-HIRLAM (Environment High Resolution Limited Area Model) was adapted and employed for the assessment of scenarios with accidental and continuous emissions of sulphur dioxide (SO2) for selected case studies during January 2010. The following scenarios were considered: (i) a control reference run; (ii) an accidental release (due to a short-term, 1-day fire at an oil storage facility) at the city of Atyrau (Kazakhstan) near the northern part of the Caspian Sea; and (iii) a doubling of the original continuous emissions from three locations of metallurgical enterprises on the Kola Peninsula (Russia). The implemented aerosol microphysics module M7 uses 5 aerosol types - sulphates, sea salt, dust, black carbon and organic carbon - distributed across 7 size modes. Removal processes for aerosols include gravitational settling and wet deposition. As Enviro-HIRLAM is an on-line integrated model, meteorological and chemical processes are modelled simultaneously at each time step. The modelled spatio-temporal variations of meteorological and chemical patterns are analyzed for both the European and Kazakhstan domains. The results of the evaluation of sulphur dioxide concentration and deposition over major cities, selected regions and countries are presented employing GIS tools. As an outcome, the results of Enviro-HIRLAM modelling for the accidental release near the Caspian Sea are integrated into the RANDOM (Risk Assessment of Nature Detriment due to Oil spill Migration) system.

  3. The Electronic Library Workstation--Today.

    ERIC Educational Resources Information Center

    Nolte, James

    1990-01-01

    Describes the components--hardware, software and applications, CD-ROM and online reference resources, and telecommunications links--of an electronic library workstation in use at Clarkson University (Potsdam, New York). Data manipulation, a hypothetical research scenario, and recommended workstation capabilities are also discussed. (MES)

  4. Biomass Scenario Model Documentation: Data and References

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Y.; Newes, E.; Bush, B.

    2013-05-01

The Biomass Scenario Model (BSM) is a system dynamics model that represents the entire biomass-to-biofuels supply chain, from feedstock to fuel use. The BSM is a complex model that has been used for extensive analyses; the model and its results can be better understood if the input data used for initialization and calibration are well-characterized. It has been carefully validated and calibrated against the available data, with data gaps filled in using expert opinion and internally consistent assumed values. Most of the main data sources that feed into the model are recognized as baseline values by the industry. This report documents data sources and references in Version 2 of the BSM (BSM2), which only contains the ethanol pathway, although subsequent versions of the BSM contain multiple conversion pathways. The BSM2 contains over 12,000 total input values, with 506 distinct variables. Many of the variables are opportunities for the user to define scenarios, while others are simply used to initialize a stock, such as the initial number of biorefineries. However, around 35% of the distinct variables are defined by external sources, such as models or reports. The focus of this report is to provide insight into which sources are most influential in each area of the supply chain.
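The stock-and-flow structure of a system dynamics model like the BSM can be illustrated with a single toy stock, installed biorefinery capacity, fed by an investment inflow and drained by retirement. Every rate below is an invented placeholder, not a BSM input.

```python
# Toy system dynamics stock:
#   capacity' = investment - retirement_rate * capacity
# All values are invented placeholders, not BSM inputs.

def simulate_capacity(initial=5.0, investment=2.0, retirement_rate=0.05,
                      years=10, dt=1.0):
    stock, trajectory = initial, [initial]
    for _ in range(int(years / dt)):
        stock += dt * (investment - retirement_rate * stock)
        trajectory.append(stock)
    return trajectory

# The stock grows toward its equilibrium investment / retirement_rate = 40.
traj = simulate_capacity()
```

A full model chains many such stocks (feedstock supply, conversion capacity, fuel demand) with feedback between them, which is why initialization values like the starting number of biorefineries matter so much.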

  5. Generating local scale land use/cover change scenarios: case studies of high-risk mountain areas

    NASA Astrophysics Data System (ADS)

    Malek, Žiga; Glade, Thomas; Boerboom, Luc

    2014-05-01

    The relationship between land use/cover changes and their consequences for human well-being is well acknowledged and has led to greater interest among both researchers and decision makers in the driving forces and consequences of such changes. For example, removal of natural vegetation cover, or urban expansion resulting in new elements at risk, can increase hydro-meteorological risk. This is why it is necessary to study how land use/cover could evolve in the future, with emphasis especially on areas experiencing, or expecting, high rates of socio-economic change. A suitable approach to address these changes is scenario development: it allows possible futures and their environmental consequences to be explored, and it aids decision-making by enabling the analysis of possible options. Scenarios provide a creative methodology for depicting possible futures resulting from existing decisions, based on different assumptions about future socio-economic development. They have been used in various disciplines and at various scales, for example for flood risk and soil erosion. Several studies have simulated future scenarios of land use/cover changes with high success rates; however, these approaches are usually tailor-made for specific case study areas and fitted to the available data. This study presents a multi-step scenario generation framework that is transferable to other local scale case study areas and takes into account the case-study-specific consequences of land use/cover changes. Through the use of experts' and decision-makers' knowledge, we aimed to develop a framework with the following characteristics: (1) it enables the development of plausible scenarios, (2) it can overcome data inaccessibility, (3) it can address intangible and external driving forces of land use/cover change, and (4) it is transferable to other local scale case study areas with different land use/cover change processes and consequences.
To achieve this, a set of different methods is applied: qualitative methods, such as interviews, group discussions, and fuzzy cognitive mapping, to identify land use/cover change processes, their driving forces and possible consequences, and to generate the final scenarios; and geospatial methods, such as GIS, geostatistics, and environmental modeling in an environment for geoprocessing objects (Dinamica EGO), for the spatial allocation of these scenarios. The methods were applied in the Italian Alps and the Romanian Carpathians. Both are mountainous areas; however, they differ in their past and most likely future socio-economic development, and therefore in the consequent land use/cover changes. Whereas we focused on urban expansion due to tourism development in the Alps, we focused on possible deforestation trajectories in the Carpathians. In both areas, the most significant recognized driving forces were either not covered by accessible data or were characterized as intangible. With the proposed framework we were able to generate future scenarios despite these shortcomings while enabling the transferability of the method.

  6. Use of simulated patients to assess the clinical and communication skills of community pharmacists.

    PubMed

    Weiss, Marjorie C; Booth, Anneka; Jones, Bethan; Ramjeet, Sarah; Wong, Eva

    2010-06-01

    To investigate the quality and appropriateness of Emergency Hormonal Contraception (EHC) supply from community pharmacies. Community pharmacies in the southwest of England during 2007. Two simulated patient ('mystery shopper') scenarios were presented to each participating pharmacy: one in which the supply of EHC would be appropriate (scenario 1) and one in which a drug interaction between EHC and St John's Wort made the supply inappropriate (scenario 2). Pharmacy consultations were rated using criteria developed from two focus groups: one with pharmacist academics and one with female university students. Feedback was provided to pharmacists to inform their continuing professional development. Scores on rating scales encompassing the clinical and communication skills of the participating community pharmacists were completed immediately after each mystery shopper visit. Forty pharmacist visits were completed: 21 for scenario 1 and 19 for scenario 2. Eighteen pharmacists were visited twice. Five pharmacists visited for scenario 2 supplied EHC against professional guidance, although other reference sources conflicted with this advice. Pharmacies that were part of the local PGD scheme scored higher overall in scenario 1 (P = 0.005) than those that were not. Overall, the communication skills of pharmacists were rated highly, although some pharmacists used jargon when explaining the interaction in scenario 2. Formatively assessing communication skills in an integrative manner alongside clinical skills has been identified as an important part of medical consultation skills training and can be incorporated into the routine assessment of, and feedback on, pharmacies' over-the-counter medicines advice.

  7. Preliminary identification of potentially disruptive scenarios at the Greater Confinement Disposal Facility, Area 5 of the Nevada Test Site

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guzowski, R.V.; Newman, G.

    1993-12-01

    The Greater Confinement Disposal location is being evaluated to determine whether defense-generated transuranic waste buried at this location complies with the Containment Requirements established by the US Environmental Protection Agency. One step in determining compliance is to identify those combinations of events and processes (scenarios) that define possible future states of the disposal system for which performance assessments must be performed. An established scenario-development procedure was used to identify a comprehensive set of mutually exclusive scenarios. To assure completeness, 761 features, events, processes, and other listings (FEPs) were compiled from 11 references. This number was reduced to 205, primarily through the elimination of duplications. The 205 FEPs were screened on the basis of site-specific, goal-specific, and regulatory criteria. Four events survived screening and were used in preliminary scenario development: (1) exploratory drilling penetrates a GCD borehole, (2) drilling of a withdrawal/injection well penetrates a GCD borehole, (3) subsidence occurs at the RWMS, and (4) irrigation occurs at the RWMS. A logic diagram was used to develop 16 scenarios from the four events. No screening of these scenarios was attempted at this time. Additional screening of the currently retained events and processes will be based on additional data and information from site-characterization activities. When screening of the events and processes is completed, a final set of scenarios will be developed and screened based on consequence and probability of occurrence.
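The logic-diagram step, four retained events combined into 16 mutually exclusive scenarios, is simply the enumeration of every occur/not-occur combination (2^4 = 16). A minimal sketch, with event names paraphrased from the list above:

```python
from itertools import product

# The four retained events (names paraphrased from the abstract)
events = ["drilling", "withdrawal_well", "subsidence", "irrigation"]

# Each scenario is one occur/not-occur assignment per event: 2**4 = 16 scenarios
scenarios = [
    {event: occurs for event, occurs in zip(events, combo)}
    for combo in product([False, True], repeat=len(events))
]

print(len(scenarios))   # 16
print(scenarios[0])     # the base case: no disruptive event occurs
```

The base case (all events absent) corresponds to the undisturbed-performance scenario; the remaining 15 combinations are the candidates for consequence and probability screening.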

  8. In-Flight Suppression of a Destabilized F/A-18 Structural Mode Using the Space Launch System Adaptive Augmenting Control System

    NASA Technical Reports Server (NTRS)

    Wall, John H.; VanZwieten, Tannen S.; Gilligan, Eric T.; Miller, Christopher J.; Hanson, Curtis E.; Orr, Jeb S.

    2015-01-01

    NASA's Space Launch System (SLS) Flight Control System (FCS) includes an Adaptive Augmenting Control (AAC) component which employs a multiplicative gain update law to enhance the performance and robustness of the baseline control system for extreme off-nominal scenarios. The SLS FCS algorithm, including AAC, has been flight tested using a specially outfitted F/A-18 fighter jet in which pitch-axis control of the aircraft was performed by a nonlinear dynamic inversion (NDI) controller, SLS reference models, and the SLS flight software prototype. This paper describes test cases from the research flight campaign in which the fundamental F/A-18 airframe structural mode was identified using frequency-domain reconstruction of flight data, amplified to produce closed-loop instability, and suppressed in flight by the SLS adaptive control system.
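The "multiplicative gain update law" mentioned above can be illustrated generically: the loop gain is scaled by an adaptive factor that grows with tracking-error power, is driven down by sensed structural-mode power, and leaks back toward unity. The sketch below is a toy model under those assumptions, not NASA's actual AAC algorithm; all parameter names and values are illustrative.

```python
def update_gain(k, err_power, spectral_power, dt,
                alpha=1.0, beta=0.1, k_min=0.5, k_max=2.0):
    """One Euler step of a generic multiplicative adaptive-gain law.

    Toy model only: the gain rises with tracking-error power, is pushed
    down by sensed high-frequency (structural-mode) power, and leaks
    back toward 1.0. All parameters here are illustrative assumptions.
    """
    dk = k * (alpha * err_power - beta * spectral_power) - (k - 1.0)
    k += dk * dt
    return min(max(k, k_min), k_max)  # saturate within allowed range

k = 1.0
for _ in range(100):  # sustained large error, no structural-mode power sensed
    k = update_gain(k, err_power=0.5, spectral_power=0.0, dt=0.1)
print(k)  # gain has adapted upward toward its saturation limit
```

The leak term `-(k - 1.0)` is what makes the augmentation temporary: once the error subsides, the gain decays back to the baseline value of 1.0.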

  9. Light Water Reactor Sustainability Program: Risk-Informed Safety Margins Characterization (RISMC) Pathway Technical Program Plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Curtis; Rabiti, Cristian; Martineau, Richard

    Safety is central to the design, licensing, operation, and economics of Nuclear Power Plants (NPPs). As the current Light Water Reactor (LWR) NPPs age beyond 60 years, there are possibilities for increased frequency of Systems, Structures, and Components (SSCs) degradations or failures that initiate safety-significant events, reduce existing accident mitigation capabilities, or create new failure modes. Plant designers commonly “over-design” portions of NPPs and provide robustness in the form of redundant and diverse engineered safety features to ensure that, even in the case of well-beyond-design-basis scenarios, public health and safety will be protected with a very high degree of assurance. This form of defense-in-depth is a reasoned response to uncertainties and is often referred to generically as “safety margin.” Historically, specific safety margin provisions have been formulated primarily based on “engineering judgment.”

  10. Architectural Implications for Spatial Object Association Algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, V S; Kurc, T; Saltz, J

    2009-01-29

    Spatial object association, also referred to as cross-match of spatial datasets, is the problem of identifying and comparing objects in two or more datasets based on their positions in a common spatial coordinate system. In this work, we evaluate two cross-match algorithms that are used for astronomical sky surveys on the following database system architecture configurations: (1) Netezza Performance Server, a parallel database system with active-disk-style processing capabilities; (2) MySQL Cluster, a high-throughput network database system; and (3) a hybrid configuration consisting of a collection of independent database system instances with data replication support. Our evaluation provides insights into how the architectural characteristics of these systems affect the performance of the spatial cross-match algorithms. We conducted our study using real use-case scenarios borrowed from a large-scale astronomy application known as the Large Synoptic Survey Telescope (LSST).
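Positional cross-match of the kind benchmarked here reduces to pairing objects whose angular separation falls below a tolerance. A naive all-pairs sketch (real survey pipelines use spatial indexing to avoid the O(n·m) cost); coordinates are (RA, Dec) in degrees:

```python
import math

def ang_sep_deg(ra1, dec1, ra2, dec2):
    """Angular separation in degrees (haversine formula on the sphere)."""
    r1, d1, r2, d2 = map(math.radians, (ra1, dec1, ra2, dec2))
    a = (math.sin((d2 - d1) / 2) ** 2
         + math.cos(d1) * math.cos(d2) * math.sin((r2 - r1) / 2) ** 2)
    return math.degrees(2 * math.asin(math.sqrt(a)))

def crossmatch(cat_a, cat_b, tol_deg=1.0 / 3600):  # default: 1 arcsecond
    """Naive all-pairs cross-match; returns matched (index_a, index_b) pairs."""
    return [(i, j)
            for i, (ra1, dec1) in enumerate(cat_a)
            for j, (ra2, dec2) in enumerate(cat_b)
            if ang_sep_deg(ra1, dec1, ra2, dec2) <= tol_deg]

a = [(10.0, 20.0), (150.0, -30.0)]        # toy catalog A
b = [(10.0001, 20.0001), (90.0, 0.0)]     # toy catalog B
print(crossmatch(a, b))  # only the first objects lie within 1 arcsec
```

The architectural question the paper studies is precisely how to avoid this quadratic scan: partitioned parallel scans, network-distributed lookups, or replicated independent instances.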

  11. Perceptions of nonhuman primates in human-wildlife conflict scenarios.

    PubMed

    Hill, Catherine M; Webber, Amanda D

    2010-09-01

    Nonhuman primates (referred to as primates in this study) are sometimes revered as gods, abhorred as evil spirits, killed for food because they damage crops, or butchered for sport. Primates' perceived similarity to humans places them in an anomalous position. While some human groups accept the idea that primates "straddle" the human-nonhuman boundary, for others this resemblance is a violation of the human-animal divide. In this study we use two case studies to explore how people's perceptions of primates are often influenced by these animals' apparent similarity to humans, creating expectations, founded within a "human morality" about how primates should interact with people. When animals transgress these social rules, they are measured against the same moral framework as humans. This has implications for how people view and respond to certain kinds of primate behaviors, their willingness to tolerate co-existence with primates and their likely support for primate conservation initiatives. 2010 Wiley-Liss, Inc.

  12. Urban Resources Selection and Allocation for Emergency Shelters: In a Multi-Hazard Environment.

    PubMed

    Chen, Wei; Zhai, Guofang; Ren, Chongqiang; Shi, Yijun; Zhang, Jianxin

    2018-06-14

    This study explores how emergency shelters can adapt to a multi-hazard environment using a geographic information system (GIS), taking Guangzhou as a case study. The physical suitability of the overall urban resources was first assessed with the aim of selecting suitable resources and safe locations for emergency shelters in the context of multiple disasters. Afterward, by analyzing the scale and spatial distribution of affected areas and populations under different types of disaster scenarios, the demand for different kinds of shelters was predicted. Lastly, taking into account the coverage of the affected population, shelters were allocated according to the different conditions in the districts. This work will hopefully provide a reference for the construction of emergency shelters and help shape emergency operations to mitigate the impact of hazards. The issues identified here need to be studied further in medium- and small-scale cities.

  13. Importance of Baseline Specification in Evaluating Conservation Interventions and Achieving No Net Loss of Biodiversity

    PubMed Central

    Bull, J W; Gordon, A; Law, E A; Suttle, K B; Milner-Gulland, E J

    2014-01-01

    There is an urgent need to improve the evaluation of conservation interventions. This requires specifying an objective and a frame of reference from which to measure performance. Reference frames can be baselines (i.e., known biodiversity at a fixed point in history) or counterfactuals (i.e., a scenario that would have occurred without the intervention). Biodiversity offsets are interventions with the objective of no net loss of biodiversity (NNL). We used biodiversity offsets to analyze the effects of the choice of reference frame on whether interventions met stated objectives. We developed 2 models to investigate the implications of setting different frames of reference in regions subject to various biodiversity trends and anthropogenic impacts. First, a general analytic model evaluated offsets against a range of baseline and counterfactual specifications. Second, a simulation model then replicated these results with a complex real world case study: native grassland offsets in Melbourne, Australia. Both models showed that achieving NNL depended upon the interaction between reference frame and background biodiversity trends. With a baseline, offsets were less likely to achieve NNL where biodiversity was decreasing than where biodiversity was stable or increasing. With a no-development counterfactual, however, NNL was achievable only where biodiversity was declining. Otherwise, preventing development was better for biodiversity. Uncertainty about compliance was a stronger determinant of success than uncertainty in underlying biodiversity trends. When only development and offset locations were considered, offsets sometimes resulted in NNL, but not across an entire region. Choice of reference frame determined feasibility and effort required to attain objectives when designing and evaluating biodiversity offset schemes. We argue the choice is thus of fundamental importance for conservation policy. 
Our results shed light on situations in which biodiversity offsets may be an inappropriate policy instrument. PMID:24945031

  14. Fossil-fueled development (SSP5): An energy and resource intensive scenario for the 21st century

    DOE PAGES

    Kriegler, Elmar; Bauer, Nico; Popp, Alexander; ...

    2016-08-18

    Here, this paper presents a set of energy and resource intensive scenarios based on the concept of Shared Socio-Economic Pathways (SSPs). The scenario family is characterized by rapid and fossil-fueled development with high socio-economic challenges to mitigation and low socio-economic challenges to adaptation (SSP5). A special focus is placed on the SSP5 marker scenario developed by the REMIND-MAgPIE integrated assessment modeling framework. The SSP5 scenarios exhibit very high levels of fossil fuel use, up to a doubling of global food demand, and up to a tripling of energy demand and greenhouse gas emissions over the course of the century, marking the upper end of the scenario literature in several dimensions. The SSP5 marker scenario results in a radiative forcing pathway close to the highest Representative Concentration Pathway (RCP8.5) and currently represents the only socio-economic scenario family that can be combined with climate model projections based on RCP8.5. This paper further investigates the direct impact of mitigation policies on the energy, land, and emissions dynamics, confirming high socio-economic challenges to mitigation in SSP5. Nonetheless, mitigation policies reaching climate forcing levels as low as in the lowest Representative Concentration Pathway (RCP2.6) are accessible in SSP5. Finally, the SSP5 scenarios presented in this paper aim to provide useful reference points for future climate change, climate impact, adaptation, and mitigation analysis, and for broader questions of sustainable development.

  15. GSM base stations: short-term effects on well-being.

    PubMed

    Augner, Christoph; Florian, Matthias; Pauser, Gernot; Oberfeld, Gerd; Hacker, Gerhard W

    2009-01-01

    The purpose of this study was to examine the effects of short-term GSM (Global System for Mobile Communications) cellular phone base station RF-EMF (radiofrequency electromagnetic field) exposure on psychological symptoms (good mood, alertness, calmness) as measured by a standardized well-being questionnaire. Fifty-seven participants were selected and randomly assigned to one of three exposure scenarios. Each scenario subjected participants to five 50-min exposure sessions, with only the first four relevant to the study of psychological symptoms. Three exposure levels were created by shielding devices in a field laboratory, which could be installed or removed during the breaks between sessions such that double-blinded conditions prevailed. The overall median power flux densities were 5.2 μW/m² during "low," 153.6 μW/m² during "medium," and 2126.8 μW/m² during "high" exposure sessions. For scenarios HM and MH, the first and third sessions were "low" exposure. The second session was "high" and the fourth "medium" in scenario HM, and vice versa in scenario MH. Scenario LL had four successive "low" exposure sessions, constituting the reference condition. Participants in scenarios HM and MH (high and medium exposure) were significantly calmer during those sessions than participants in scenario LL (low exposure throughout) (P = 0.042). However, no significant differences between exposure scenarios were obtained for the "good mood" or "alertness" factors. We conclude that short-term exposure to GSM base station signals may have an impact on well-being by reducing psychological arousal. (c) 2008 Wiley-Liss, Inc.

  16. Fossil-fueled development (SSP5): An energy and resource intensive scenario for the 21st century

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kriegler, Elmar; Bauer, Nico; Popp, Alexander

    Here, this paper presents a set of energy and resource intensive scenarios based on the concept of Shared Socio-Economic Pathways (SSPs). The scenario family is characterized by rapid and fossil-fueled development with high socio-economic challenges to mitigation and low socio-economic challenges to adaptation (SSP5). A special focus is placed on the SSP5 marker scenario developed by the REMIND-MAgPIE integrated assessment modeling framework. The SSP5 scenarios exhibit very high levels of fossil fuel use, up to a doubling of global food demand, and up to a tripling of energy demand and greenhouse gas emissions over the course of the century, marking the upper end of the scenario literature in several dimensions. The SSP5 marker scenario results in a radiative forcing pathway close to the highest Representative Concentration Pathway (RCP8.5) and currently represents the only socio-economic scenario family that can be combined with climate model projections based on RCP8.5. This paper further investigates the direct impact of mitigation policies on the energy, land, and emissions dynamics, confirming high socio-economic challenges to mitigation in SSP5. Nonetheless, mitigation policies reaching climate forcing levels as low as in the lowest Representative Concentration Pathway (RCP2.6) are accessible in SSP5. Finally, the SSP5 scenarios presented in this paper aim to provide useful reference points for future climate change, climate impact, adaptation, and mitigation analysis, and for broader questions of sustainable development.

  17. Future electricity: The challenge of reducing both carbon and water footprint.

    PubMed

    Mekonnen, Mesfin M; Gerbens-Leenes, P W; Hoekstra, Arjen Y

    2016-11-01

    We estimate the consumptive water footprint (WF) of electricity and heat in 2035 for the four energy scenarios of the International Energy Agency (IEA) and for a fifth scenario with a larger percentage of solar energy. Counter-intuitively, the 'greenest' IEA scenario (with the smallest carbon footprint) shows the largest WF increase over time: an increase by a factor of four over the period 2010-2035. In 2010, electricity from solar, wind, and geothermal sources contributed 1.8% of the total. The increase of this contribution to 19.6% in IEA's '450 scenario' contributes significantly to the decrease of the WF of the global electricity and heat sector, but is offset by the simultaneous increase in the use of firewood and hydropower. Only substantial growth in the fractions of energy sources with small WFs (solar, wind, and geothermal energy) can contribute to lowering the WF of the electricity and heat sector in the coming decades. The fifth energy scenario, adapted from the IEA 450 scenario but based on a quick transition to solar, wind, and geothermal energy and a minimum of bio-energy, is the only scenario that shows a strong decline in both carbon footprint (-66%) and consumptive WF (-12%) in 2035 compared to the reference year 2010. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
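The counter-intuitive result above comes down to simple accounting: each source's generation is weighted by its consumptive water-footprint intensity, and water-intensive 'green' sources (hydropower, firewood) can outweigh fossil ones. A minimal sketch with entirely made-up intensity values (the paper's actual figures are not reproduced here):

```python
# Consumptive WF intensity per unit of electricity.
# All numbers below are hypothetical, for illustration only.
wf_intensity = {  # m3 per MWh
    "hydropower": 60.0,
    "firewood": 50.0,
    "coal": 2.0,
    "solar_pv": 0.3,
    "wind": 0.1,
}

def scenario_wf(generation_mwh):
    """Total consumptive WF (m3) of a generation mix {source: MWh}."""
    return sum(mwh * wf_intensity[src] for src, mwh in generation_mwh.items())

green = {"hydropower": 400, "firewood": 200, "solar_pv": 300, "wind": 100}
fossil = {"coal": 900, "wind": 100}
print(scenario_wf(green) > scenario_wf(fossil))  # low-carbon mix, larger WF
```

With these illustrative intensities the low-carbon mix has a far larger water footprint than the coal-heavy one, which is exactly the trade-off between carbon and water footprints the abstract highlights.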

  18. A Statistical Bias Correction Tool for Generating Climate Change Scenarios in Indonesia based on CMIP5 Datasets

    NASA Astrophysics Data System (ADS)

    Faqih, A.

    2017-03-01

    Providing information on future climate scenarios is very important in climate change studies. Climate scenarios serve as basic information to support adaptation and mitigation studies. In order to deliver future climate scenarios for a specific region, baseline and projection data from the outputs of global climate models (GCMs) are needed. However, due to its coarse resolution, GCM output has to be downscaled and bias-corrected in order to obtain scenario data with better spatial resolution that match the characteristics of the observed data. Generating such downscaled data is often difficult for scientists who lack specific background, experience, and skill in dealing with the complexity of GCM outputs. In this regard, it is necessary to develop a tool that simplifies the downscaling process and helps scientists, especially in Indonesia, generate future climate scenario data for their climate change-related studies. In this paper, we introduce a tool called “Statistical Bias Correction for Climate Scenarios (SiBiaS)”. The tool is specially designed to facilitate the use of CMIP5 GCM data outputs and to perform statistical bias correction relative to reference observational data. It was prepared to support capacity building in climate modeling in Indonesia as part of the Indonesia 3rd National Communication (TNC) project activities.
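Statistical bias correction of this kind is commonly implemented as empirical quantile mapping: each model value is mapped to the observed value at the same quantile of the reference-period distribution. The abstract does not specify SiBiaS's exact method, so the following is a generic sketch of the technique:

```python
import numpy as np

def quantile_map(model_hist, obs_ref, model_future):
    """Empirical quantile mapping: transform model values so that their
    reference-period distribution matches the observed distribution."""
    # Quantile of each value within the model's reference-period distribution
    q = np.searchsorted(np.sort(model_hist), model_future) / len(model_hist)
    q = np.clip(q, 0.0, 1.0)
    # Map those quantiles onto the observed distribution
    return np.quantile(obs_ref, q)

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 3.0, size=1000)   # synthetic "observed" series
model = obs * 1.5 + 2.0                # synthetic model output with a wet bias
corrected = quantile_map(model, obs, model)
print(abs(corrected.mean() - obs.mean()) < 0.5)  # bias largely removed
```

Because the mapping is built from the reference period, the corrected series reproduces the observed distribution there while preserving the model's rank ordering, which is what allows the same mapping to be applied to projection data.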

  19. Scenarios for Ultrafast Gamma-Ray Variability in AGN

    NASA Astrophysics Data System (ADS)

    Aharonian, F. A.; Barkov, M. V.; Khangulyan, D.

    2017-05-01

    We analyze three scenarios to address the challenge of ultrafast gamma-ray variability reported from active galactic nuclei. We focus on the energy requirements imposed by these scenarios: (I) an external cloud in the jet, (II) a relativistic blob propagating through the jet material, and (III) production of high-energy gamma-rays in magnetospheric gaps. We show that while the first two scenarios are not constrained by the flare luminosity, there is a robust upper limit on the luminosity of flares generated in the black hole magnetosphere. This limit depends weakly on the mass of the central black hole and is determined by the accretion disk magnetization, viewing angle, and pair multiplicity. For the most favorable values of these parameters, the luminosity of 5-minute flares is limited to 2×10⁴³ erg s⁻¹, which excludes a black hole magnetosphere origin of the flare detected from IC 310. Within scenarios (I) and (II), the jet power required to explain the IC 310 flare exceeds the jet power estimated from the radio data. To resolve this discrepancy in the framework of scenario (II), it is sufficient to assume that the relativistic blobs are not distributed isotropically in the jet reference frame. A realization of scenario (I) demands that the jet power during the flare exceed, by a factor of 10², the power of the radio jet relevant to a timescale of 10⁸ years.

  20. Assessing Mechanisms of Climate Change Impact on the Upland Forest Water Balance of the Willamette River Basin

    NASA Astrophysics Data System (ADS)

    Turner, D. P.; Conklin, D. R.; Vache, K. B.; Schwartz, C.; Nolin, A. W.; Chang, H.; Watson, E.; John, B.

    2016-12-01

    Projected changes in air temperature, precipitation, and vapor pressure for the Willamette River Basin (Oregon, USA) over the next century will have significant impacts on the river basin water balance, notably on the amount of evapotranspiration (ET). Mechanisms of impact on ET will be both direct and indirect, but there is limited understanding of their absolute and relative magnitudes. Here we developed a spatially explicit, daily time-step modeling infrastructure to simulate the basin-wide water balance that accounts for meteorological influences, as well as effects mediated by changing vegetation cover type, leaf area, and ecophysiology. Three CMIP5 climate scenarios (LowClim, Reference, HighClim) were run for the 2010 to 2100 period. Besides warmer temperatures, the climate scenarios were characterized by wetter winters and increasing vapor pressure deficits. In the mid-range Reference scenario, our landscape simulation model (Envision) projected a continuation of forest cover on the uplands but a 3-fold increase in area burned per year. A decline (12-30%) in basin-wide mean leaf area index (LAI) in forests was projected in all scenarios. The lower LAIs drove a corresponding decline in ET. In a sensitivity test, the effect of increasing CO2 on stomatal conductance induced a further substantial decrease (11-18%) in basin-wide mean ET. The net effect of decreases in ET and increases in winter precipitation was an increase in annual streamflow. These results support the inclusion of changes in land cover, land use, LAI, and ecophysiology in efforts to anticipate impacts of climate change on basin-scale water balances.

  1. Multiple methods for multiple futures: Integrating qualitative scenario planning and quantitative simulation modeling for natural resource decision making

    USGS Publications Warehouse

    Symstad, Amy J.; Fisichelli, Nicholas A.; Miller, Brian W.; Rowland, Erika; Schuurman, Gregor W.

    2017-01-01

    Scenario planning helps managers incorporate climate change into their natural resource decision making through a structured “what-if” process of identifying key uncertainties and potential impacts and responses. Although qualitative scenarios, in which ecosystem responses to climate change are derived via expert opinion, often suffice for managers to begin addressing climate change in their planning, this approach may face limits in resolving the responses of complex systems to altered climate conditions. In addition, this approach may fall short of the scientific credibility managers often require to take actions that differ from current practice. Quantitative simulation modeling of ecosystem response to climate conditions and management actions can provide this credibility, but its utility is limited unless the modeling addresses the most impactful and management-relevant uncertainties and incorporates realistic management actions. We use a case study to compare and contrast management implications derived from qualitative scenario narratives and from scenarios supported by quantitative simulations. We then describe an analytical framework that refines the case study’s integrated approach in order to improve applicability of results to management decisions. The case study illustrates the value of an integrated approach for identifying counterintuitive system dynamics, refining understanding of complex relationships, clarifying the magnitude and timing of changes, identifying and checking the validity of assumptions about resource responses to climate, and refining management directions. Our proposed analytical framework retains qualitative scenario planning as a core element because its participatory approach builds understanding for both managers and scientists, lays the groundwork to focus quantitative simulations on key system dynamics, and clarifies the challenges that subsequent decision making must address.

  2. What do cardiovascular nurses know about the hematological management of patients with Eisenmenger syndrome?

    PubMed

    Moons, Philip; Fleck, Desiree; Jaarsma, Tiny; Norekval, Tone M; Smith, Karen; Stromberg, Anna; Thompson, David R; Budts, Werner

    2009-10-01

    We investigated the level of knowledge of the hematological management of patients with Eisenmenger syndrome among general cardiovascular nurses and nurses who specialize in congenital heart disease (CHD). We conducted a survey at two international conferences attended by cardiovascular nurses. Nurses were asked to complete a questionnaire comprising two questions and three clinical case scenarios. Overall, 89 nurses participated (response rate 90.8%), 43 of whom specialized in CHD. The level of knowledge displayed by nurses not specialized in CHD was poor: about one-third recognized the definition of Eisenmenger syndrome and knew what normal hematocrit levels are, and with respect to the cases presented, fewer than 10% could give a correct answer. The level of knowledge of specialized nurses was significantly higher, but important gaps were observed here as well. Fewer than two-thirds knew the reference values for hematocrit and the appropriate management in two of the cases, and fewer than half knew about the procedure of isovolumic phlebotomy. In conclusion, the level of knowledge displayed by cardiovascular nurses regarding the hematological management of patients with Eisenmenger syndrome is poor, and the knowledge of nurses specialized in CHD could also be improved.

  3. The scenario-based generalization of radiation therapy margins.

    PubMed

    Fredriksson, Albin; Bokrantz, Rasmus

    2016-03-07

    We give a scenario-based treatment plan optimization formulation that is equivalent to planning with geometric margins if the scenario doses are calculated using the static dose cloud approximation. If the scenario doses are instead calculated more accurately, then our formulation provides a novel robust planning method that overcomes many of the difficulties associated with previous scenario-based robust planning methods. In particular, our method protects only against uncertainties that can occur in practice, it gives a sharp dose fall-off outside high dose regions, and it avoids underdosage of the target in 'easy' scenarios. The method shares the benefits of the previous scenario-based robust planning methods over geometric margins for applications where the static dose cloud approximation is inaccurate, such as irradiation with few fields and irradiation with ion beams. These properties are demonstrated on a suite of phantom cases planned for treatment with scanned proton beams subject to systematic setup uncertainty.
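The core idea of scenario-based planning, optimizing against the worst case over a set of error scenarios rather than a single nominal case, can be illustrated with a toy one-dimensional minimax (this is a schematic of the general approach, not the authors' formulation):

```python
def penalty(x, shift):
    """Toy scenario objective: quadratic penalty under a setup shift."""
    return (x - shift) ** 2

setup_shifts = [-0.5, 0.0, 0.5]  # possible systematic setup errors (toy values)

def worst_case(x):
    """Worst-case penalty over all scenarios for plan parameter x."""
    return max(penalty(x, s) for s in setup_shifts)

# Grid search for the robust plan minimizing the worst-case penalty
grid = [i / 100 for i in range(-100, 101)]
x_robust = min(grid, key=worst_case)
print(x_robust)  # the plan that hedges against both extreme shifts
```

A geometric margin plays the same hedging role implicitly; the paper's contribution is that when scenario doses are computed accurately rather than via the static dose cloud approximation, the minimax formulation protects only against errors that can actually occur.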

  4. NOx emissions in China: historical trends and future perspectives

    NASA Astrophysics Data System (ADS)

    Zhao, B.; Wang, S. X.; Liu, H.; Xu, J. Y.; Fu, K.; Klimont, Z.; Hao, J. M.; He, K. B.; Cofala, J.; Amann, M.

    2013-10-01

Nitrogen oxides (NOx) are key pollutants for the improvement of ambient air quality. Within this study we estimated the historical NOx emissions in China for the period 1995-2010, and calculated future NOx emissions every five years until 2030 under six emission scenarios. Driven by the fast growth of energy consumption, we estimate that NOx emissions in China increased rapidly from 11.0 Mt in 1995 to 26.1 Mt in 2010. Power plants, industry and transportation were the major sources of NOx emissions, accounting for 28.4%, 34.0%, and 25.4% of total NOx emissions in 2010, respectively. Two energy scenarios, a business-as-usual scenario (BAU) and an alternative policy scenario (PC), were developed to project future energy consumption. In 2030, total energy consumption is projected to increase by 64% and 27% from the 2010 level, respectively. Three sets of end-of-pipe pollution control measures, comprising baseline, progressive, and stringent control cases, were developed for each energy scenario, thereby constituting six emission scenarios. By 2030, total NOx emissions are projected to increase (compared to 2010) by 36% in the baseline, while the policy cases result in reductions of up to 61% in the most ambitious case with stringent control measures. More than a third of the reduction achieved by 2030 between the least and most ambitious scenarios comes from the power sector, and more than half is distributed equally between the industry and transportation sectors. Selective catalytic reduction dominates the NOx emission reductions in power plants, while lifestyle changes and control measures for industrial boilers and cement production are the major contributors to reductions in industry. Timely enforcement of legislation on heavy-duty vehicles would contribute significantly to NOx emission reductions. About 30% of the NOx emission reduction in 2020 and 40% of the NOx emission reduction in 2030 could be treated as an ancillary benefit of energy conservation. 
Sensitivity analysis was conducted to explore the impact of key factors on future emissions.

  5. Cosmological singularity resolution from quantum gravity: The emergent-bouncing universe

    NASA Astrophysics Data System (ADS)

    Alesci, Emanuele; Botta, Gioele; Cianfrani, Francesco; Liberati, Stefano

    2017-08-01

Alternative scenarios to the big bang singularity have been the subject of intense research for several decades by now. Most popular in this sense have been frameworks where such a singularity is replaced by a bounce around some minimal cosmological volume or by some early quantum phase. This latter scenario was devised a long time ago and is referred to as an "emergent universe" (in the sense that our universe emerged from a constant-volume quantum phase). We show here that within an improved framework of canonical quantum gravity (the so-called quantum reduced loop gravity) the Friedmann equations for cosmology are modified in such a way as to replace the big bang singularity with a short bounce preceded by a metastable quantum phase in which the volume of the universe oscillates between a series of local maxima and minima. We call this hybrid scenario an "emergent-bouncing universe" since after a purely oscillating quantum phase the classical Friedmann spacetime emerges. Prospective developments and possible tests of this scenario are discussed at the end.

  6. Do environmental dynamics matter in fate models? Exploring scenario dynamics for a terrestrial and an aquatic system.

    PubMed

    Morselli, Melissa; Terzaghi, Elisa; Di Guardo, Antonio

    2018-01-24

Nowadays, there is growing interest in inserting more ecological realism into the risk assessment of chemicals. On the exposure evaluation side, this can be done by studying the complexity of exposure in the ecosystem and niche partitioning, e.g. variation of the exposure scenario. Current regulatory predictive approaches, to ensure simplicity and predictive ability, generally keep the scenario as static as possible. This could lead to under- or overprediction of chemical exposure depending on the chemical and the scenario simulated. To account for more realistic exposure conditions, varying temporally and spatially, additional scenario complexity should be included in currently used models to improve their predictive ability. This study presents two case studies (a terrestrial and an aquatic one) in which some polychlorinated biphenyls (PCBs) were simulated with the SoilPlusVeg and ChimERA models to show the importance of scenario variation in time (biotic and abiotic compartments). The results outlined the importance of accounting for planetary boundary layer variation and vegetation dynamics to accurately predict air concentration changes and the timing of chemical dispersion from the source in terrestrial systems. For the aquatic exercise, the results indicated the need to account for organic carbon forms (particulate and dissolved organic carbon) and vegetation biomass dynamics. In both cases the range of variation was up to two orders of magnitude depending on the congener and scenario, reinforcing the need to incorporate such knowledge into exposure assessment.

  7. Considering the worst-case metabolic scenario, but training to the typical-case competitive scenario: response to Amtmann (2012).

    PubMed

    Del Vecchio, Fabrício Boscolo; Franchini, Emerson

    2013-08-01

This response to Amtmann's letter emphasizes that knowledge of the typical time structure, as well as its variation, together with the main goal of mixed martial arts athletes--to win by knockout or submission--needs to be properly considered during training sessions. Examples from other combat sports are given and discussed, especially concerning the importance of adapting physical conditioning workouts to the technical-tactical profile of the athlete and not the opposite.

  8. Emitting electron spectra and acceleration processes in the jet of PKS 0447-439

    NASA Astrophysics Data System (ADS)

    Zhou, Yao; Yan, Dahai; Dai, Benzhong; Zhang, Li

    2014-02-01

We investigate the electron energy distributions (EEDs) and the corresponding acceleration processes in the jet of PKS 0447-439, and estimate its redshift through modeling its observed spectral energy distribution (SED) in the frame of a one-zone synchrotron self-Compton (SSC) model. Three EEDs formed in different acceleration scenarios are assumed: the power-law with exponential cut-off (PLC) EED (shock-acceleration scenario, or the case of the EED approaching equilibrium in the stochastic-acceleration scenario), the log-parabolic (LP) EED (stochastic-acceleration scenario with the acceleration dominating), and the broken power-law (BPL) EED (no-acceleration scenario). The corresponding fluxes of both synchrotron and SSC emission are then calculated. The model is applied to PKS 0447-439, and the modeled SEDs are compared to the observed SED of this object by using the Markov chain Monte Carlo method. The results show that the PLC model fails to fit the observed SED well, while the LP and BPL models give comparably good fits for the observed SED. The results indicate that it is possible that a stochastic acceleration process acts in the emitting region of PKS 0447-439 and the EED is far from equilibrium (acceleration dominating), or that no acceleration process works in the emitting region. The redshift of PKS 0447-439 is also estimated in our fitting: z = 0.16 ± 0.05 for the LP case and z = 0.17 ± 0.04 for the BPL case.

  9. Economic and Social Impact of Increasing Uptake of Cardiac Rehabilitation Services--A Cost Benefit Analysis.

    PubMed

    De Gruyter, Elaine; Ford, Greg; Stavreski, Bill

    2016-02-01

Cardiac rehabilitation can reduce mortality, improve cardiac risk factor profile and reduce readmissions; yet uptake remains low at 30%. This research aims to investigate the social and economic impact of increasing the uptake of cardiac rehabilitation in Victoria, Australia using cost benefit analysis (CBA). Cost benefit analysis has been undertaken over a 10-year period to analyse three scenarios: (1) Base Case: 30% uptake; (2) Scenario 1: 50% uptake; and (3) Scenario 2: 65% uptake. Impacts considered include cardiac rehabilitation program costs, direct inpatient costs, other healthcare costs, burden of disease, productivity losses, informal care costs and net deadweight loss. There is a net financial saving of $46.7-$86.7 million under the scenarios. Compared to the Base Case, an additional net benefit of $138.9-$227.2 million is expected. This results in Benefit Cost Ratios of 5.6 and 6.8 for Scenarios 1 and 2, respectively. Disability Adjusted Life Years were 21,117-37,565 years lower than in the Base Case. Greater uptake of cardiac rehabilitation can reduce the burden of disease, directly translating to benefits for society and the economy. This research supports the need for greater promotion, routine referral as standard practice, and implementation of reforms to boost uptake. Copyright © 2015 Australian and New Zealand Society of Cardiac and Thoracic Surgeons (ANZSCTS) and the Cardiac Society of Australia and New Zealand (CSANZ). Published by Elsevier B.V. All rights reserved.
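A benefit-cost ratio like the 5.6 and 6.8 reported above is, in general form, the present value of incremental benefits divided by the present value of incremental costs over the appraisal period. A minimal sketch of that calculation (the 5% discount rate and the two-year streams below are illustrative placeholders, not the study's inputs):

```python
def benefit_cost_ratio(benefits, costs, rate=0.05):
    """Discount two annual streams (year 0 undiscounted) and return
    the ratio of their present values."""
    pv = lambda stream: sum(x / (1 + rate) ** t for t, x in enumerate(stream))
    return pv(benefits) / pv(costs)

# Two-year illustrative streams in $million: a ratio above 1 means the
# discounted benefits outweigh the discounted costs.
bcr = benefit_cost_ratio([56.0, 56.0], [10.0, 10.0])
```

Because both streams here are proportional, the discounting cancels and the ratio is exactly 5.6; with uneven streams the discount rate matters.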

  10. Merger of Two Neutron Stars: Predictions from the Two-families Scenario

    NASA Astrophysics Data System (ADS)

    Drago, Alessandro; Pagliara, Giuseppe

    2018-01-01

If only one family of “neutron stars” exists, their maximum mass must be equal to or larger than 2 M⊙ and then, only in less than about 18% of cases, the outcome of the merger of two neutron stars is a prompt collapse to a black hole, since the newly formed system can avoid the collapse at least as long as differential rotation is present. In the so-called two-families scenario, stars made of hadrons are stable only up to about (1.5–1.6) M⊙, while the most massive compact stars are entirely made of strange quark matter. We show that in this scenario the outcome of the merger of two compact stars composed entirely of hadrons is a prompt collapse in at least 34% of the cases. It will therefore be easy to discriminate between the two scenarios once the gravitational waves emitted at the moment of the merger are detected. Finally, we briefly discuss the implications of GW170817/GRB 170817A.

  11. Mortality, greenhouse gas emissions and consumer cost impacts of combined diet and physical activity scenarios: a health impact assessment study

    PubMed Central

    Monsivais, Pablo; Jones, Nicholas RV; Brand, Christian; Woodcock, James

    2017-01-01

Objective To quantify changes in mortality, greenhouse gas (GHG) emissions and consumer costs for physical activity and diet scenarios. Design For the physical activity scenarios, all car trips from <1 to <8 miles long were progressively replaced with cycling. For the diet scenarios, the study population was assumed to increase fruit and vegetable (F&V) consumption by 1–5 portions of F&V per day, or to eat at least 5 portions per day. Health effects were modelled with the comparative risk assessment method. Consumer costs were based on fuel cost savings and average costs of F&V, and GHG emissions were based on fuel usage and F&V production. Setting Working age population for England. Participants Data from the Health Survey for England, National Travel Survey and National Diet and Nutrition Survey. Primary outcomes measured Changes in premature deaths, consumer costs and GHG emissions stratified by age, gender and socioeconomic status (SES). Results Premature deaths were reduced by between 75 and 7648 cases per year for the physical activity scenarios, and between 3255 and 6187 cases per year for the diet scenarios. Mortality reductions were greater among people of medium and high SES in the physical activity scenarios, whereas people with lower SES benefited more in the diet scenarios. Similarly, transport fuel costs fell more for people of high SES, whereas diet costs increased most for the lowest SES group. Net GHG emissions decreased by between 0.2 and 10.6 million tons of carbon dioxide equivalent (MtCO2e) per year for the physical activity scenarios and increased by between 1.3 and 6.3 MtCO2e/year for the diet scenarios. Conclusions Increasing F&V consumption offers the potential for large health benefits and reduces health inequalities. Replacing short car trips with cycling offers the potential for net benefits for health, GHG emissions and consumer costs. PMID:28399514

  12. Alternative approaches to the construction of a composite indicator of agricultural sustainability: An application to irrigated agriculture in the Duero basin in Spain.

    PubMed

    Gómez-Limón, José A; Riesgo, Laura

    2009-08-01

This paper describes a comparative analysis of alternative methods of constructing composite indicators to measure the sustainability of the agricultural sector. The three methods employed were Principal Component Analysis, the Analytic Hierarchy Process and a Multi-Criteria technique. The comparison focused on the irrigated agriculture of the Duero basin in Spain as a case study, using a dataset of indicators previously calculated for various farm types and policy scenarios. The results enabled us to establish a hierarchy of preferred policy scenarios on the basis of the level of sustainability achieved, and show that the most recent CAP reform is the most sustainable agricultural policy scenario. By analyzing the heterogeneity of different farm types in each scenario, we can also determine the main features of the most sustainable farms in each case. The analysis demonstrates that full-time farmers with small to medium-sized farms who sow profitable crops are the most sustainable farm types in all the policy scenarios. All of this information is useful for supporting agricultural policy design and implementation as we attempt to improve the sustainability of this sector.

  13. Incremental sanitation improvement strategy: comparison of options for Hanoi, Vietnam.

    PubMed

    Harada, H; Matsui, S; Dong, N T; Shimizu, Y; Fujii, S

    2010-01-01

Urban sanitation issues should be tackled strategically, and may be addressed effectively when sewage development is pursued in conjunction with complementary sanitation measures. Five sanitation improvement scenarios employing sewage, a night-soil collection-and-treatment (NSCT) system, and/or septic-tank improvement by annual desludging were analyzed from the perspective of COD loads, total nitrogen loads, and cost under the conditions found in Hanoi, Vietnam. Compared to the development of sewage alone, the scenario of developing NSCT systems in a complementary manner with sewage development was estimated to be the most effective for a rapid decrease of both COD and total nitrogen loads. However, it may be difficult in some cases to replace ordinary water-flush toilets with the micro-flush toilets that are used in NSCT systems. In this case, the scenario employing septic-tank improvement in conjunction with sewage development may be effective for a rapid decrease of COD in locations where septic tanks are widely used under poor maintenance conditions and nitrogen pollution is not serious compared to COD. It was calculated that the two scenarios above would require cost increases of 16 and 22%, respectively, over the sewage development scenario.

  14. Using the scenario method in the context of health and health care--a scoping review.

    PubMed

    Vollmar, Horst Christian; Ostermann, Thomas; Redaèlli, Marcus

    2015-10-16

The scenario technique is a method for futures research and strategic planning. Today, it includes both qualitative and quantitative elements. The aims of this scoping review are to give an overview of the application of the scenario method in the field of health care and to make suggestions for better reporting in future scenario projects. Between January 2013 and October 2013 we conducted a systematic search, from inception, in the databases Medline, Embase, PsycInfo, Eric, The Cochrane Library, Scopus, Web of Science, and Cinahl for the term 'scenario(s)' in combination with other terms, e.g. method, model, and technique. Our search was not restricted by date or language. In addition, we screened the reference lists of the included articles. A total of 576 bibliographical records were screened. After removing duplicates and three rounds of screening, 41 articles covering 38 different scenario projects were included in the final analysis. Nine of the included articles addressed disease-related issues, led by mental health and dementia (n = 4), followed by cancer (n = 3). Five scenario projects focused on public health issues at an organizational level and five focused on the labor market for different health care professionals. In addition, four projects dealt with health care 'in general', four with the field of biotechnology and personalized medicine, and a further four with other technology developments. Some of the scenario projects suffered from poor reporting of methodological aspects. Despite its potential, use of the scenario method seems to be published rarely in comparison to other methods such as the Delphi technique, at least in the field of health care. This might be due to the complexity of the methodological approach. Individual project methods and activities vary widely and are poorly reported. Improved criteria are required for the reporting of scenario project methods. 
With improved standards and greater transparency, the scenario method will be a good tool for scientific health care planning and strategic decision-making in public health.

  15. Downscaled climate change impacts on agricultural water resources in Puerto Rico

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harmsen, E.W.; Miller, N.L.; Schlegel, N.J.

    2009-04-01

The purpose of this study is to estimate reference evapotranspiration (ETo), rainfall deficit (rainfall - ETo) and relative crop yield reduction for a generic crop under climate change conditions for three locations in Puerto Rico: Adjuntas, Mayaguez, and Lajas. Reference evapotranspiration is estimated by the Penman-Monteith method. Rainfall and temperature data were statistically downscaled and evaluated using the DOE/NCAR PCM global circulation model projections for the B1 (low), A2 (mid-high) and A1fi (high) emission scenarios of the Intergovernmental Panel on Climate Change Special Report on Emission Scenarios. Relative crop yield reductions were estimated from a function dependent on a water stress factor, which is in turn a function of soil moisture content. Average soil moisture content for the three locations was determined by means of a simple water balance approach. Results from the analysis indicate that the rainy season will become wetter and the dry season will become drier. The 20-year mean September rainfall excess (i.e., rainfall - ETo > 0) increased for all scenarios and locations from 149.8 mm for 1990-2010 to 356.4 mm for 2080-2100. Similarly, the 20-year average February rainfall deficit (i.e., rainfall - ETo < 0) decreased from -26.1 mm for 1990-2010 to -72.1 mm for 2080-2100. The results suggest that additional water could be saved during the wet months to offset increased irrigation requirements during the dry months. Relative crop yield reduction did not change significantly under the B1 projected emissions scenario, but increased by approximately 20% during the summer months under the A1fi emissions scenario. Components of the annual water balance for the three climate change scenarios are rainfall, evapotranspiration (adjusted for soil moisture), surface runoff, aquifer recharge and change in soil moisture storage. 
Under the A1fi scenario, for all locations, annual evapotranspiration decreased owing to lower soil moisture, surface runoff decreased, and aquifer recharge increased. Aquifer recharge increased at all three locations because the majority of recharge occurs during the wet season and the wet season became wetter. This is good news from a groundwater production standpoint. Increasing aquifer recharge also suggests that groundwater levels may increase and this may help to minimize saltwater intrusion near the coasts as sea levels increase, provided that groundwater use is not over-subscribed.
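The monthly surplus and deficit terms used in this record are simply rainfall minus reference evapotranspiration, with positive values counted as excess and negative values as deficit. A minimal sketch of that bookkeeping (the monthly figures in the example are hypothetical, not the study's data):

```python
def rainfall_balance(rain_mm, eto_mm):
    """Split monthly (rainfall - ETo) into excess (>0) and deficit (<0) parts."""
    balance = [r - e for r, e in zip(rain_mm, eto_mm)]
    excess = [max(b, 0.0) for b in balance]
    deficit = [min(b, 0.0) for b in balance]
    return excess, deficit

# A wet month (300 mm rain vs. 120 mm ETo) and a dry month (50 mm vs. 122 mm).
excess, deficit = rainfall_balance([300.0, 50.0], [120.0, 122.0])
```

Summing the excess over wet months against the deficit over dry months is what motivates the study's suggestion that wet-season water could offset dry-season irrigation requirements.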

  16. Future Heat Waves in Paris Metropolitan Area

    NASA Astrophysics Data System (ADS)

    Beaulant, A.; Lemonsu, A.; Somot, S.; Masson, V.

    2010-12-01

Cities are particularly vulnerable to heat waves, firstly because they concentrate the majority of the population and, secondly, because the heat island that characterizes the urban climate exacerbates heat wave effects. This work is part of the interdisciplinary VURCA project (Vulnerability of cities to heat waves), which deals with the evolution of heat wave events in the context of global warming, urban vulnerability and adaptation strategies. The aim of this study is to analyse urban heat wave events in the present climate (1950-2009) and their evolution in an enhanced-greenhouse-gas future climate (2010-2100). We used daily observations of temperature from several stations covering the Paris metropolitan area and climate projections following three different IPCC-SRES scenarios (B1, A1B, A2) and issued from several ENSEMBLES regional climate models. The heat wave definition is based on the indexes of the operational French warning system. A heat wave is detected within observed or simulated time series by a heat wave peak, when the temperatures exceed the value of the 99.9th percentile. Its duration is determined by all days adjacent to this peak for which the temperatures are not durably smaller than the 99.9th percentile value minus 2 °C. The 99.9th percentile threshold is inferred from quantile-quantile plots produced for each climate model in comparison with observations for the reference period 1950-2000. Heat waves have been extracted from the observations and 12 climatic simulations. The number of heat wave events and cumulated HW days per year have been calculated, the maximum being seven heat waves cumulating more than 60 HW days in one year in the case of the A2 scenario, and up to 50 days in the case of the more moderate A1B scenario. From 2050, the occurrence of three or four HW events per year becomes the norm, all scenarios taken together. 
The evolution of heat wave features has been analysed, highlighting the large variability of the climatic simulations, but also an overall trend towards an increase in frequency and duration, and less significantly in intensity. Further work will be carried out to assess the sensitivity of the Paris urban climate to different future heat wave events. Synthetic HW events will be built from future HW features such as duration and intensity, and will be simulated using an urban-weather model. Then, the impacts in terms of energy consumption and bioclimatic comfort will be analysed and adaptation strategies will be proposed.
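The detection rule described in this record (a peak day above the 99.9th percentile, extended over adjacent days whose temperatures stay above that threshold minus 2 °C) can be sketched as follows. The contiguous-window reading of "not durably smaller" is a simplifying assumption, and this is an illustration of the rule, not the operational warning-system code:

```python
import numpy as np

def detect_heat_waves(temps, pct=99.9, margin=2.0):
    """Return (start, end) index pairs of heat wave events in a daily series.

    A peak is a day whose temperature exceeds the given percentile of the
    series; the event extends over adjacent days whose temperature stays
    at or above (peak threshold - margin).
    """
    temps = np.asarray(temps, dtype=float)
    peak_thr = np.percentile(temps, pct)
    keep_thr = peak_thr - margin
    events = []
    for i in np.where(temps > peak_thr)[0]:
        if events and i <= events[-1][1]:
            continue  # peak already absorbed into the previous event
        start = end = int(i)
        while start > 0 and temps[start - 1] >= keep_thr:
            start -= 1
        while end < len(temps) - 1 and temps[end + 1] >= keep_thr:
            end += 1
        events.append((start, end))
    return events

# Synthetic 2000-day series with one hot spell around day 1002.
series = [20.0] * 2000
series[1000:1005] = [30.0, 33.0, 36.0, 33.0, 30.0]
events = detect_heat_waves(series)
```

Counting `len(events)` per year and summing event lengths gives the two statistics reported above (HW events and cumulated HW days per year).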

  17. A Usability and Learnability Case Study of Glass Flight Deck Interfaces and Pilot Interactions through Scenario-based Training

    NASA Astrophysics Data System (ADS)

    De Cino, Thomas J., II

In the aviation industry, digitally produced and presented flight, navigation, and aircraft information is commonly referred to as a glass flight deck. Glass flight decks are driven by computer-based subsystems and have long been a part of the military and commercial aviation sectors. Over the past 15 years, the General Aviation (GA) sector of the aviation industry has become a recent beneficiary of the rapid advancement of computer-based glass flight deck (GFD) systems. While providing the GA pilot considerable enhancements in the quality of information about the status and operations of the aircraft, training pilots to use glass flight decks is often delivered with traditional methods (e.g. textbooks, PowerPoint presentations, user manuals, and limited computer-based training modules). These training methods have been reported as less than desirable for learning to use the glass flight deck interface. Difficulties in achieving a complete understanding of the functional and operational characteristics of GFD systems, acquiring a full understanding of the interrelationships of the varied subsystems, and handling the wealth of flight information provided have been reported. Documented pilot concerns about poor user experience and satisfaction, and problems with learning the complex and sophisticated interface of the GFD, are additional issues with current pilot training approaches. A case study was executed to explore ways to improve training on GFD systems at a Midwestern aviation university. The researcher investigated whether variations in instructional systems design and training methods for learning glass flight deck technology would affect pilots' perceptions and attitudes regarding the learnability (an attribute of usability) of the glass flight deck interface. 
Specifically, this study investigated the effectiveness of scenario-based training (SBT) methods to potentially improve pilot knowledge and understanding of a GFD system, and overall pilot user experience and satisfaction. Participants overwhelmingly reported positive learning experiences from scenario-based GFD systems flight training, noting that learning and knowledge construction were improved over other training received in the past. In contrast, participants rated the usability and learnability of the GFD training systems low, reporting various problems with the systems' interface, and the learnability (first-time use) of the complex GFD system. However, issues with usability of the GFD training systems did not reduce or change participant attitudes towards learning and mastering GFD systems; to the contrary, all participants requested additional coursework opportunities to train on GFD systems with the scenario-based flight training format.

  18. Computing the Risk of Postprandial Hypo- and Hyperglycemia in Type 1 Diabetes Mellitus Considering Intrapatient Variability and Other Sources of Uncertainty

    PubMed Central

    García-Jaramillo, Maira; Calm, Remei; Bondia, Jorge; Tarín, Cristina; Vehí, Josep

    2009-01-01

Objective The objective of this article was to develop a methodology to quantify the risk of suffering different grades of hypo- and hyperglycemia episodes in the postprandial state. Methods Interval predictions of patient postprandial glucose were performed during a 5-hour period after a meal for a set of 3315 scenarios. Uncertainty in the patient's insulin sensitivities and the carbohydrate (CHO) content of the planned meal was considered. A normalized area under the curve of the worst-case predicted glucose excursion for the severe and mild hypo- and hyperglycemia glucose ranges was obtained and weighted according to their importance. As a result, a comprehensive risk measure was obtained. A reference model of preprandial glucose values representing the behavior in different ranges was chosen by a χ2 test. The relationship between the computed risk index and the probability of occurrence of events was analyzed for these reference models through 19,500 Monte Carlo simulations. Results The obtained reference models for each preprandial glucose range were 100, 160, and 220 mg/dl. A relationship between the risk index ranges <10, 10–60, 60–120, and >120 and the probability of occurrence of mild and severe postprandial hyper- and hypoglycemia can be derived. Conclusions When intrapatient variability and uncertainty in the CHO content of the meal are considered, a safer prediction of possible hyper- and hypoglycemia episodes induced by the tested insulin therapy can be calculated. PMID:20144339

  19. Referral recommendations for osteoarthritis of the knee incorporating patients' preferences

    PubMed Central

    Musila, Nyokabi; Underwood, Martin; McCaskie, Andrew W; Black, Nick; Clarke, Aileen; van der Meulen, Jan H

    2011-01-01

Background. GPs have to respond to conflicting policy developments. As gatekeepers they are supposed to manage the growing demand for specialist services, and as patient advocates they should be responsive to patients' preferences. We used an innovative approach to develop a referral guideline for patients with chronic knee pain that explicitly incorporates patients' preferences. Methods. A guideline development group of 12 members including patients, GPs, orthopaedic surgeons and other health care professionals used formal consensus development informed by systematic evidence reviews. They rated the appropriateness of referral for 108 case scenarios describing patients according to symptom severity, age, body mass, co-morbidity and referral preference. Appropriateness was expressed on a scale from 1 (‘strongly disagree’) to 9 (‘strongly agree’). Results. Ratings of referral appropriateness were strongly influenced by symptom severity and patients' referral preferences. The influence of other patient characteristics was small. There was consensus that patients with severe knee symptoms who want to be referred should be referred and that patients with moderate or mild symptoms and a strong preference against referral should not be referred. Referral preference had a greater impact on the ratings of referral appropriateness when symptoms were moderate or severe than when symptoms were mild. Conclusions. Referral decisions for patients with osteoarthritis of the knee should only be guided by symptom severity and patients' referral preferences. The guideline development group seemed to have given priority to avoiding inefficient resource use in patients with mild symptoms and to respecting patient autonomy in patients with severe symptoms. PMID:20817791

  20. Cost-effectiveness of Lung Cancer Screening in Canada.

    PubMed

    Goffin, John R; Flanagan, William M; Miller, Anthony B; Fitzgerald, Natalie R; Memon, Saima; Wolfson, Michael C; Evans, William K

    2015-09-01

The US National Lung Screening Trial supports screening for lung cancer among smokers using low-dose computed tomographic (LDCT) scans. The cost-effectiveness of screening in a publicly funded health care system remains a concern. To assess the cost-effectiveness of LDCT scan screening for lung cancer within the Canadian health care system, the Cancer Risk Management Model (CRMM) simulated individual lives within the Canadian population from 2014 to 2034, incorporating cancer risk, disease management, outcome, and cost data. Smokers and former smokers eligible for lung cancer screening (30 pack-year smoking history, ages 55-74 years, for the reference scenario) were modeled, and performance parameters were calibrated to the National Lung Screening Trial (NLST). The reference screening scenario assumes annual scans to age 75 years, 60% participation by 10 years, 70% adherence to screening, and unchanged smoking rates. The CRMM outputs are aggregated, and costs (2008 Canadian dollars) and life-years are discounted 3% annually. The main outcome was the incremental cost-effectiveness ratio. Compared with no screening, the reference scenario saved 51,000 quality-adjusted life-years (QALYs) and had an incremental cost-effectiveness ratio of CaD $52,000/QALY. If smoking history is modeled for 20 or 40 pack-years, incremental cost-effectiveness ratios of CaD $62,000 and CaD $43,000/QALY, respectively, were generated. Changes in participation rates altered life-years saved but not the incremental cost-effectiveness ratio, while the incremental cost-effectiveness ratio is sensitive to changes in adherence. An adjunct smoking cessation program improving the quit rate by 22.5% improves the incremental cost-effectiveness ratio to CaD $24,000/QALY. Lung cancer screening with LDCT appears cost-effective in the publicly funded Canadian health care system. An adjunct smoking cessation program has the potential to improve outcomes.
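For reference, the incremental cost-effectiveness ratio reported in this record is, in general form, the difference in discounted costs divided by the difference in discounted QALYs between a screening scenario and no screening. A minimal sketch with the 3% annual discounting mentioned above (the two-year streams are placeholders, not CRMM outputs):

```python
def present_value(stream, rate=0.03):
    """Discount an annual stream (year 0 undiscounted) to present value."""
    return sum(x / (1 + rate) ** t for t, x in enumerate(stream))

def icer(costs_new, qalys_new, costs_ref, qalys_ref, rate=0.03):
    """Incremental cost per QALY gained, with both streams discounted."""
    d_cost = present_value(costs_new, rate) - present_value(costs_ref, rate)
    d_qaly = present_value(qalys_new, rate) - present_value(qalys_ref, rate)
    return d_cost / d_qaly

# Illustrative streams: an extra $0.8M per year buys 8 extra QALYs per year.
ratio = icer([1_000_000, 1_000_000], [10.0, 10.0],
             [200_000, 200_000], [2.0, 2.0])
```

Because the incremental cost and QALY streams here are proportional, the discounting cancels and the ratio is $100,000/QALY; with streams of different shapes the discount rate shifts the result, which is why discounting conventions are always stated alongside an ICER.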

  1. Experimental investigations of weak definite and weak indefinite noun phrases

    PubMed Central

    Klein, Natalie M.; Gegg-Harrison, Whitney M.; Carlson, Greg N.; Tanenhaus, Michael K.

    2013-01-01

    Definite noun phrases typically refer to entities that are uniquely identifiable in the speaker and addressee’s common ground. Some definite noun phrases (e.g. the hospital in Mary had to go to the hospital and John did too) seem to violate this uniqueness constraint. We report six experiments that were motivated by the hypothesis that these “weak definite” interpretations arise in “incorporated” constructions. Experiments 1-3 compared nouns that seem to allow for a weak definite interpretation (e.g. hospital, bank, bus, radio) with those that do not (e.g. farm, concert, car, book). Experiments 1 and 2 used an instruction-following task and picture-judgment task, respectively, to demonstrate that a weak definite need not uniquely refer. In Experiment 3 participants imagined scenarios described by sentences such as The Federal Express driver had to go to the hospital/farm. The imagined scenarios following weak definite noun phrases were more likely to include conventional activities associated with the object, whereas following regular nouns, participants were more likely to imagine scenarios that included typical activities associated with the subject; similar effects were observed with weak indefinites. Experiment 4 found that object-related activities were reduced when the same subject and object were used with a verb that does not license weak definite interpretations. In Experiment 5, a science fiction story introduced an artificial lexicon for novel concepts. Novel nouns that shared conceptual properties with English weak definite nouns were more likely to allow weak reference in a judgment task. Experiment 6 demonstrated that familiarity for definite articles and anti-familiarity for indefinite articles applies to the activity associated with the noun, consistent with predictions made by the incorporation analysis. PMID:23685208

  2. A Comparison of Educational Interventions to Enhance Cultural Competency in Pharmacy Students

    PubMed Central

    Jonkman, Lauren; Connor, Sharon; Hall, Deanne

    2013-01-01

    Objective. To determine the degree to which 3 different educational interventions enhance cultural competency in pharmacy students. Methods. Students were equally divided among a simulated-patient activity group, a written case-scenarios group, and a formal lecture group. Mean scores for pre- and post-intervention cultural self-assessment surveys were compared. Results. In the simulation group, there were significant positive changes in the cultural skills and cultural desire components; in the case-scenario group, there was a significant positive change in the cultural awareness component; and in the lecture group, there were significant positive changes in the cultural skills and cultural empathy components. With respect to the cultural skills component, there was greater post-intervention improvement in the simulation and lecture groups than in the case-scenario group. Conclusions. There were significant positive changes within each group, indicating that ideologies and behaviors may be altered based on the educational intervention received. However, a 1-hour practicum may not be sufficient to enhance cultural competency. PMID:23716744

  3. A Preliminary Performance Assessment for Salt Disposal of High-Level Nuclear Waste - 12173

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Joon H.; Clayton, Daniel; Jove-Colon, Carlos

    2012-07-01

    A salt repository is one of the four geologic media currently under study by the U.S. DOE Office of Nuclear Energy to support the development of a long-term strategy for geologic disposal of commercial used nuclear fuel (UNF) and high-level radioactive waste (HLW). The immediate goal of the generic salt repository study is to develop the necessary modeling tools to evaluate and improve the understanding of the repository system response and processes relevant to long-term disposal of UNF and HLW in a salt formation. The current phase of this study considers representative geologic settings and features adopted from previous studies for salt repository sites. A performance assessment model for a generic salt repository has been developed incorporating, where applicable, representative geologic settings and features adopted from literature data for salt repository sites. The conceptual model and scenario for radionuclide release and transport from a salt repository were developed utilizing literature data. The salt GDS model was developed in a probabilistic analysis framework. The preliminary performance analysis for demonstration of model capability assumes an isothermal condition at ambient temperature in the near field.
    The capability demonstration emphasizes key attributes of a salt repository that are potentially important to the long-term safe disposal of UNF and HLW. The analysis presents and discusses results showing repository responses to different radionuclide release scenarios (undisturbed and human intrusion). For the reference (or nominal, undisturbed) scenario, the brine flow rates in the repository and underlying interbeds are very low, and transport of radionuclides in the transport pathways is dominated by diffusion and greatly retarded by sorption on the interbed filling materials. I-129 (non-sorbing, with unlimited solubility and a very long half-life) is the dominant annual dose contributor at the hypothetical accessible environment, but the calculated mean annual dose is so negligibly small that there is no meaningful consequence for repository performance. For the human intrusion (or disturbed) scenario, the mean mass release rate and mean annual dose histories are very different from those for the reference scenario. Compared to the reference scenario, the relative annual dose contributions of soluble, non-sorbing fission products, particularly I-129, are much lower than those of actinides including Pu-239, Pu-242 and Np-237. The lower relative mean annual dose contributions of the fission-product radionuclides are due to their lower total inventory available for release (i.e., up to five affected waste packages), and the higher mean annual doses from the actinides are the outcome of the direct release of the radionuclides into the overlying aquifer, which has high water flow rates, resulting in an early arrival of higher concentrations of the radionuclides at the biosphere drinking water well prior to their significant decay. The salt GDS model analysis has also identified the following recommendations and knowledge gaps to improve and enhance confidence in future repository performance analyses:
    - Repository thermal loading by UNF and HLW, and the effect on the engineered barrier and near-field performance.
    - Closure and consolidation of salt rocks by creep deformation under the influence of thermal perturbation, and the effect on the engineered barrier and near-field performance.
    - Brine migration and radionuclide transport under the influence of thermal perturbation in a generic salt repository environment, and the effect on the engineered barrier, near-field, and far-field performance.
    - Near-field geochemistry and radionuclide mobility in a generic salt repository environment (high ionic strength brines, elevated temperatures, and chemically reducing conditions).
    - Degradation of engineered barrier components (waste package, waste canister, waste forms, etc.) in a generic salt repository environment (high ionic strength brines, elevated temperatures, and chemically reducing conditions).
    - Waste stream types and inventory estimates, particularly for reprocessing high-level waste. (authors)

  4. Plant distributions in the southwestern United States; a scenario assessment of the modern-day and future distribution ranges of 166 Species

    USGS Publications Warehouse

    Thomas, Kathryn A.; Guertin, Patricia P.; Gass, Leila

    2012-01-01

    We developed spatial models of the predicted modern-day suitable habitat (SH) of 166 dominant and indicator plant species of the southwestern United States (herein referred to as the Southwest) and then conducted a coarse assessment of potential future changes in the distribution of their suitable habitat under three climate-change scenarios for two time periods. We used Maxent-based spatial modeling to predict the modern-day and future scenarios of SH for each species in an over 342-million-acre area encompassing all or parts of six states in the Southwest: Arizona, California, Colorado, Nevada, New Mexico, and Utah. Modern-day SH models were predicted using 26 annual and monthly average temperature and precipitation variables, averaged for the years 1971-2000. Future SH models were predicted for each species using six climate models based on application of the average of 16 General Circulation Models to Intergovernmental Panel on Climate Change emission scenarios B1, A1B, and A2 for two time periods, 2040 to 2069 and 2070 to 2100, referred to respectively as the 2050 and 2100 time periods. The assessment examined each species' vulnerability to loss of modern-day SH under future climate scenarios, its potential to gain SH under future climate scenarios, and its estimated risk as a function of both vulnerability and potential gains. All 166 species were predicted to lose modern-day SH under the future climate-change scenarios. In the 2050 time period, nearly 30 percent of the species lost 75 percent or more of their modern-day suitable habitat, 21 species gained more new SH than their modern-day SH, and 30 species gained new SH amounting to less than 25 percent of their modern-day SH. In the 2100 time period, nearly half of the species lost 75 percent or more of their modern-day SH, 28 species gained more new SH than their modern-day SH, and 34 gained new SH amounting to less than 25 percent of their modern-day SH.
    Using nine risk categories, we found that only two species fell in the least-risk category, while 20 species fell in the highest-risk category. The assessment showed that species respond independently to predicted climate change, suggesting that current plant assemblages may disassemble under the predicted climate-change scenarios. This report presents the results for each species in tables (Appendix A) and maps (14 for each species) in Appendix B.
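    The nine risk categories combine vulnerability (loss of modern-day SH) with the potential to gain new SH. One plausible reading is a 3 × 3 grid of loss and gain levels; the thresholds and mapping below are hypothetical assumptions for illustration, not those used in the USGS assessment:

```python
def risk_category(loss_frac, gain_frac):
    """Map habitat loss (vulnerability) and gain (potential) to one of
    nine risk classes, 1 (least risk) .. 9 (highest risk).
    Thresholds are illustrative, not those of the report."""
    # Vulnerability: low / moderate / high loss of modern-day SH.
    loss_level = 0 if loss_frac < 0.25 else 1 if loss_frac < 0.75 else 2
    # Potential: gain exceeding modern-day SH / partial gain / little gain.
    gain_level = 2 if gain_frac >= 1.0 else 1 if gain_frac >= 0.25 else 0
    return 1 + loss_level * 3 + (2 - gain_level)

print(risk_category(0.10, 1.5))   # low loss, large gain -> 1 (least risk)
print(risk_category(0.90, 0.05))  # heavy loss, little gain -> 9 (highest risk)
```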

  5. Adaptive management of irrigation and crops' biodiversity: a case study on tomato

    NASA Astrophysics Data System (ADS)

    De Lorenzi, Francesca; Alfieri, Silvia Maria; Basile, Angelo; Bonfante, Antonello; Monaco, Eugenia; Riccardi, Maria; Menenti, Massimo

    2013-04-01

    We have assessed the impacts of climate change and evaluated options to adapt irrigation management in the face of predicted changes of agricultural water demand. We have evaluated irrigation scheduling and its effectiveness (versus crop transpiration), and cultivars' adaptability. The spatial and temporal variations of effectiveness and adaptability were studied in an irrigated district of Southern Italy. Two climate scenarios were considered: reference (1961-90) and future (2021-2050) climate, the former from climatic statistics and the latter from statistical downscaling of general circulation models (AOGCMs). Climatic data consist of daily time series of maximum and minimum temperature, and daily rainfall, on a grid with a spatial resolution of 35 km. The work was carried out in the Destra Sele irrigation scheme (18,000 ha). Twenty-five soil units were identified and their hydrological properties were determined (measured or estimated from texture through pedo-transfer functions). A tomato crop, in a rotation typical of the area, was considered. A mechanistic model of water flow in the soil-plant-atmosphere system (SWAP) was used to study crop water requirements and water consumption. The model was calibrated and validated in the same area for many different crops. Tomato crop input data and model parameters were estimated on the basis of the scientific literature and assumed to be generically representative of the species. Simulations were performed for reference and future climate, and for different irrigation scheduling options. In all soil units, six levels of irrigation volume were applied: full irrigation (100%), deficit irrigation (80%, 60%, 40%, 20%), and no irrigation. From the simulation runs, indicators of soil water availability were calculated; moreover, the marginal increases of transpiration per unit of irrigation volume, i.e., the effectiveness of irrigation (ΔT/I), were computed in both climate scenarios.
    Indicators and marginal increases were used to evaluate the adaptability of the tomato crop to future climate. To this purpose, for several tomato cultivars, threshold values of their yield responses to soil water availability were determined (data from the scientific literature). Cultivars' threshold values were evaluated, in all soil units, against the indicators' values, for irrigation levels with different ΔT/I. Less water-intensive cultivars and irrigation volumes that optimize transpiration (and yield) could thus be identified in both climate scenarios, and irrigation management scenarios were determined taking into account soils' hydrological properties, crop biodiversity, and efficient use of water resources. The work was carried out within the Italian national project AGROSCENARI, funded by the Ministry for Agricultural, Food and Forest Policies (MIPAAF, D.M. 8608/7303/2008). Keywords: climate change, adaptation, simulation models, deficit irrigation, water resource efficiency, SWAP
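    The effectiveness measure ΔT/I is simply the marginal gain in simulated seasonal transpiration per unit of added irrigation, computed between successive irrigation levels. A minimal sketch, with made-up seasonal totals standing in for SWAP output (`marginal_effectiveness` and all numeric values are illustrative assumptions):

```python
# Illustrative seasonal totals (mm) for one soil unit; real values
# would come from SWAP simulations, not from this sketch.
irrigation    = [0, 80, 160, 240, 320, 400]     # 0..100% of full volume
transpiration = [210, 290, 350, 390, 410, 418]  # simulated crop transpiration

def marginal_effectiveness(I, T):
    """Delta-T / Delta-I between successive irrigation levels."""
    return [(T[i + 1] - T[i]) / (I[i + 1] - I[i]) for i in range(len(I) - 1)]

print(marginal_effectiveness(irrigation, transpiration))
# -> [1.0, 0.75, 0.5, 0.25, 0.1]
```

    The declining marginal values illustrate why deficit irrigation can be attractive: the last increments of water buy very little extra transpiration (and hence yield).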

  6. The Risk of Termination Shock From Solar Geoengineering

    NASA Astrophysics Data System (ADS)

    Parker, Andy; Irvine, Peter J.

    2018-03-01

    If solar geoengineering were to be deployed so as to mask a high level of global warming, and then stopped suddenly, there would be a rapid and damaging rise in temperatures. This effect is often referred to as termination shock, and it is an influential concept. Based on studies of its potential impacts, commentators often cite termination shock as one of the greatest risks of solar geoengineering. However, there has been little consideration of the likelihood of termination shock, so that conclusions about its risk are premature. This paper explores the physical characteristics of termination shock, then uses simple scenario analysis to plot out the pathways by which different driver events (such as terrorist attacks, natural disasters, or political action) could lead to termination. It then considers where timely policies could intervene to avert termination shock. We conclude that some relatively simple policies could protect a solar geoengineering system against most of the plausible drivers. If backup deployment hardware were maintained and if solar geoengineering were implemented by agreement among just a few powerful countries, then the system should be resilient against all but the most extreme catastrophes. If this analysis is correct, then termination shock should be much less likely, and therefore much less of a risk, than has previously been assumed. Much more sophisticated scenario analysis—going beyond simulations purely of worst-case scenarios—will be needed to allow for more insightful policy conclusions.

  7. A case study predicting environmental impacts of urban transport planning in China.

    PubMed

    Chen, Chong; Shao, Li-guo; Xu, Ling; Shang, Jin-cheng

    2009-10-01

    Predicting environmental impacts is essential when performing an environmental assessment of urban transport planning. System dynamics (SD) is often used to solve complex nonlinear problems. In this study, we utilized SD to evaluate the environmental impacts associated with urban transport planning in Jilin City, China with respect to the local economy, society, transport, the environment, and resources. To accomplish this, we generated simulation models comprising interrelated subsystems designed to utilize changes in the economy, society, road construction, the number of vehicles, the capacity of the road network, nitrogen oxide emissions, traffic noise, land used for road construction, and traffic-related fuel consumption to estimate dynamic trends in the environmental impacts associated with Jilin's transport planning. Two simulation scenarios were then analyzed comparatively. The results of this study indicated that implementation of Jilin transport planning would improve the current urban traffic conditions and boost the local economy and development while benefiting the environment in Jilin City. In addition, comparative analysis of the two scenarios provided information that can be used to aid scientific decision-making regarding which aspects of the transport planning to implement in Jilin City. This study demonstrates that applying the SD method within strategic environmental assessment (SEA) is feasible for urban transport planning.

  8. [Study on strategies of pollution prevention in coastal city of Zhejiang Province based on scenario analysis].

    PubMed

    Tian, Jin-Ping; Chen, Lü-Jun; Du, Peng-Fei; Qian, Yi

    2013-01-01

    Scenario analysis was used to study the environmental burden in a coastal city of Zhejiang Province under different patterns of economic development. The aim of this research is to offer advice for decision-making by illustrating how emissions can be reduced by transforming the pattern of economic development in a developed coastal area that had reached a GDP per capita of 70 000 yuan. First, 18 heavy-pollution industries were screened out by reference to total emissions of chemical oxygen demand, ammonia-nitrogen, sulfur dioxide, and nitrogen oxide. Then, a scenario-analysis model and the corresponding back-calculation program were designed to study the sustainable development of the heavy-pollution industries. With 2008 and 2015 as the reference year and the target year, respectively, emissions of the four pollutants mentioned above in the 18 heavy-pollution industries in the city were analyzed under six scenarios. The requirement that total emissions of the four pollutants be reduced to an expected degree was set as the constraint of the scenario analysis. Finally, some suggestions for decision-making are put forward: maintaining a moderate GDP growth rate of around 7%, strengthening the adjustment of economic structure, controlling the growth rate of industrial added value of the heavy-pollution industries, optimizing the structure of the heavy-pollution industries, decreasing the intensity of waste emissions by implementing cleaner production to reduce emissions at the source, and strengthening regulation of the operation of waste treatment plants to further improve the efficiency of waste treatment. Only by implementing such measures can the total emissions of chemical oxygen demand, ammonia-nitrogen, sulfur dioxide, and nitrogen oxide from the 18 heavy-pollution industries in the city be reduced by 10%, 10%, 5%, and 15%, respectively, relative to the reference year.
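    The back-calculation step of such a scenario analysis can be sketched as follows; the normalised reference emissions and the way GDP growth is folded in are illustrative assumptions, not the study's actual program:

```python
# Normalised 2008 (reference-year) emissions and the stated 2015
# reduction targets for the 18 heavy-pollution industries.
reference  = {"COD": 1.0, "NH3-N": 1.0, "SO2": 1.0, "NOx": 1.0}
reductions = {"COD": 0.10, "NH3-N": 0.10, "SO2": 0.05, "NOx": 0.15}

def target_emissions(ref, red):
    """Back-calculate the target-year emission cap implied by each reduction."""
    return {p: ref[p] * (1.0 - red[p]) for p in ref}

# At ~7% annual GDP growth over 2008-2015, output grows by 1.07**7, so
# emission intensity (emissions per unit GDP) must fall much faster
# than total emissions for the caps to hold.
growth = 1.07 ** 7
caps = target_emissions(reference, reductions)
intensity_cut = {p: 1.0 - caps[p] / growth for p in caps}
```

    The `intensity_cut` figures make the policy point of the abstract concrete: under growth, even modest absolute reduction targets require large cuts in emission intensity.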

  9. Modeling the Car Crash Crisis Management System Using HiLA

    NASA Astrophysics Data System (ADS)

    Hölzl, Matthias; Knapp, Alexander; Zhang, Gefei

    An aspect-oriented modeling approach to the Car Crash Crisis Management System (CCCMS) using the High-Level Aspect (HiLA) language is described. HiLA is a language for expressing aspects for UML static structures and UML state machines. In particular, HiLA supports both a static graph transformational and a dynamic approach of applying aspects. Furthermore, it facilitates methodologically turning use case descriptions into state machines: for each main success scenario, a base state machine is developed; all extensions to this main success scenario are covered by aspects. Overall, the static structure of the CCCMS is modeled in 43 classes, the main success scenarios in 13 base machines, the use case extensions in 47 static and 31 dynamic aspects, most of which are instantiations of simple aspect templates.

  10. Changing risk of spring frost damage in grapevines due to climate change? A case study in the Swiss Rhone Valley.

    PubMed

    Meier, Michael; Fuhrer, Jürg; Holzkämper, Annelie

    2018-06-01

    Late spring frost is a severe risk during early plant development. It may cause important economic damage to grapevine production. In a warming climate, late frost risk either could decline due to the reduction in frost days and an advancement of the last day of frost or increase due to a more pronounced shift forward of the start of the active growing period of the plants. These possibilities were analyzed in a case study for two locations in the lower Swiss Rhone Valley (Sion, Aigle) where viticulture is an important part of agriculture. Twelve phenology models were calibrated for the developmental stage BBCH09 (bud burst) using measured or reconstructed temperature data for two vineyards in Changins (1958 to 2012) and Leytron (1977 to 2014) together with observed phenological data. The day of year (DOY) for BBCH09 was then modelled for the years 1951 to 2050 using the best performing phenology model in combination with ten downscaled and bias-corrected climate scenarios. A 100-day period starting with BBCH09 was defined, during which daily mean and minimum temperatures were used to calculate three frost risk indices in each year. These indices were compared between the periods 1961-1990 (reference) and 2021-2050 (climate change scenario). Based on the average of the ensemble of climate model chains, BBCH09 advanced by 9 (range 7-11) (Aigle) and 7 (range 5-8) (Sion) days between the two time periods, similar to the shift in the last day of frost. The separate results of the different model chains suggest that, in the near future, late spring frost risk may increase or decrease, depending on location and climate change projections. 
While for the reference, the risk is larger at the warmer site (Sion) compared to that at the cooler site (Aigle), for the period 2021-2050, small shifts in both phenology and occurrence of frost (i.e., days with daily minimum temperature below 0 °C) lead to a small decrease in frost risk at the warmer but an increase at the cooler site. However, considerable uncertainties remain that are mostly related to climate model chains. Consequently, shifts in frost risk remain uncertain for the time period considered and the two study locations.
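    A frost-risk index of the kind the study compares between periods can be sketched as a count of sub-zero days in the 100-day window after bud burst. The `frost_risk_index` function and the synthetic temperature series are illustrative assumptions; the paper defines three indices on daily mean and minimum temperatures:

```python
def frost_risk_index(tmin, budburst_doy, window=100, threshold=0.0):
    """Count days with minimum temperature below `threshold` within the
    `window` days starting at bud burst (BBCH09). `tmin` is a per-day
    series indexed by day of year (1-based)."""
    start = budburst_doy - 1
    return sum(1 for t in tmin[start:start + window] if t < threshold)

# Synthetic year: frost on the first five days only.
tmin = [-1.0] * 5 + [5.0] * 360
print(frost_risk_index(tmin, budburst_doy=3, window=10))  # -> 3
```

    Shifting `budburst_doy` earlier (as warming advances phenology) can raise or lower this count depending on how the last frost day shifts, which is exactly the trade-off the study quantifies.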

  11. Changing risk of spring frost damage in grapevines due to climate change? A case study in the Swiss Rhone Valley

    NASA Astrophysics Data System (ADS)

    Meier, Michael; Fuhrer, Jürg; Holzkämper, Annelie

    2018-06-01

    Late spring frost is a severe risk during early plant development. It may cause important economic damage to grapevine production. In a warming climate, late frost risk either could decline due to the reduction in frost days and an advancement of the last day of frost or increase due to a more pronounced shift forward of the start of the active growing period of the plants. These possibilities were analyzed in a case study for two locations in the lower Swiss Rhone Valley (Sion, Aigle) where viticulture is an important part of agriculture. Twelve phenology models were calibrated for the developmental stage BBCH09 (bud burst) using measured or reconstructed temperature data for two vineyards in Changins (1958 to 2012) and Leytron (1977 to 2014) together with observed phenological data. The day of year (DOY) for BBCH09 was then modelled for the years 1951 to 2050 using the best performing phenology model in combination with ten downscaled and bias-corrected climate scenarios. A 100-day period starting with BBCH09 was defined, during which daily mean and minimum temperatures were used to calculate three frost risk indices in each year. These indices were compared between the periods 1961-1990 (reference) and 2021-2050 (climate change scenario). Based on the average of the ensemble of climate model chains, BBCH09 advanced by 9 (range 7-11) (Aigle) and 7 (range 5-8) (Sion) days between the two time periods, similar to the shift in the last day of frost. The separate results of the different model chains suggest that, in the near future, late spring frost risk may increase or decrease, depending on location and climate change projections. 
While for the reference, the risk is larger at the warmer site (Sion) compared to that at the cooler site (Aigle), for the period 2021-2050, small shifts in both phenology and occurrence of frost (i.e., days with daily minimum temperature below 0 °C) lead to a small decrease in frost risk at the warmer but an increase at the cooler site. However, considerable uncertainties remain that are mostly related to climate model chains. Consequently, shifts in frost risk remain uncertain for the time period considered and the two study locations.

  12. Changing risk of spring frost damage in grapevines due to climate change? A case study in the Swiss Rhone Valley

    NASA Astrophysics Data System (ADS)

    Meier, Michael; Fuhrer, Jürg; Holzkämper, Annelie

    2018-01-01

    Late spring frost is a severe risk during early plant development. It may cause important economic damage to grapevine production. In a warming climate, late frost risk either could decline due to the reduction in frost days and an advancement of the last day of frost or increase due to a more pronounced shift forward of the start of the active growing period of the plants. These possibilities were analyzed in a case study for two locations in the lower Swiss Rhone Valley (Sion, Aigle) where viticulture is an important part of agriculture. Twelve phenology models were calibrated for the developmental stage BBCH09 (bud burst) using measured or reconstructed temperature data for two vineyards in Changins (1958 to 2012) and Leytron (1977 to 2014) together with observed phenological data. The day of year (DOY) for BBCH09 was then modelled for the years 1951 to 2050 using the best performing phenology model in combination with ten downscaled and bias-corrected climate scenarios. A 100-day period starting with BBCH09 was defined, during which daily mean and minimum temperatures were used to calculate three frost risk indices in each year. These indices were compared between the periods 1961-1990 (reference) and 2021-2050 (climate change scenario). Based on the average of the ensemble of climate model chains, BBCH09 advanced by 9 (range 7-11) (Aigle) and 7 (range 5-8) (Sion) days between the two time periods, similar to the shift in the last day of frost. The separate results of the different model chains suggest that, in the near future, late spring frost risk may increase or decrease, depending on location and climate change projections. 
While for the reference, the risk is larger at the warmer site (Sion) compared to that at the cooler site (Aigle), for the period 2021-2050, small shifts in both phenology and occurrence of frost (i.e., days with daily minimum temperature below 0 °C) lead to a small decrease in frost risk at the warmer but an increase at the cooler site. However, considerable uncertainties remain that are mostly related to climate model chains. Consequently, shifts in frost risk remain uncertain for the time period considered and the two study locations.

  13. Scenarios for gluino coannihilation

    DOE PAGES

    Ellis, John; Evans, Jason L.; Luo, Feng; ...

    2016-02-11

    In this article, we study supersymmetric scenarios in which the gluino is the next-to-lightest supersymmetric particle (NLSP), with a mass sufficiently close to that of the lightest supersymmetric particle (LSP) that gluino coannihilation becomes important. One of these scenarios is the MSSM with soft supersymmetry-breaking squark and slepton masses that are universal at an input GUT renormalization scale, but with non-universal gaugino masses. The other scenario is an extension of the MSSM to include vector-like supermultiplets. In both scenarios, we identify the regions of parameter space where gluino coannihilation is important, and discuss their relations to other regions of parameter space where other mechanisms bring the dark matter density into the range allowed by cosmology. In the case of the non-universal MSSM scenario, we find that the allowed range of parameter space is constrained by the requirement of electroweak symmetry breaking, the avoidance of a charged LSP and the measured mass of the Higgs boson, in particular, as well as by the appearance of other dark matter (co)annihilation processes. Nevertheless, LSP masses m_χ ≲ 8 TeV with the correct dark matter density are quite possible. In the case of pure gravity mediation with additional vector-like supermultiplets, changes to the anomaly-mediated gluino mass and the threshold effects associated with these states can make the gluino almost degenerate with the LSP, and we find a similar upper bound.

  14. Economic Evaluation of Endoscopic Sinus Surgery versus Continued Medical Therapy for Refractory Chronic Rhinosinusitis

    PubMed Central

    Rudmik, Luke; Soler, Zachary M.; Mace, Jess C.; Schlosser, Rodney J.; Smith, Timothy L.

    2014-01-01

    Objective. To evaluate the long-term cost-effectiveness of endoscopic sinus surgery (ESS) compared to continued medical therapy for patients with refractory chronic rhinosinusitis (CRS). Study Design. Cohort-style Markov decision tree economic evaluation. Methods. The economic perspective was the US third-party payer with a 30-year time horizon. The two comparative treatment strategies were: 1) ESS followed by appropriate postoperative medical therapy, and 2) continued medical therapy alone. The primary outcome was the incremental cost per quality-adjusted life year (QALY). Costs were discounted at a rate of 3.5% in the reference case. Multiple sensitivity analyses were performed, including differing time horizons, discounting scenarios, and a probabilistic sensitivity analysis (PSA). Results. The reference case demonstrated that the ESS strategy cost a total of $48,838.38 and produced a total of 20.50 QALYs. The medical therapy alone strategy cost a total of $28,948.98 and produced a total of 17.13 QALYs. The incremental cost-effectiveness ratio (ICER) for ESS versus medical therapy alone is $5,901.90 per QALY. The cost-effectiveness acceptability curve from the PSA demonstrated 74% certainty that the ESS strategy is the most cost-effective decision for any willingness-to-pay threshold greater than $25,000. The time-horizon analysis suggests that ESS becomes the cost-effective intervention within the third year after surgery. Conclusion. Results from this study suggest that an ESS treatment strategy is the most cost-effective intervention compared to continued medical therapy alone for the long-term management of patients with refractory CRS. PMID:25186499
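    The reported ICER can be checked directly from the cost and QALY totals given in the abstract. A minimal sketch (the `icer` helper is an illustrative assumption, not code from the study):

```python
def icer(cost_tx, cost_ref, qaly_tx, qaly_ref):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return (cost_tx - cost_ref) / (qaly_tx - qaly_ref)

# Totals reported in the abstract reproduce the stated ICER:
ratio = icer(48_838.38, 28_948.98, 20.50, 17.13)
print(round(ratio, 2))  # ~5901.9 per QALY, matching the reported $5,901.90
```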

  15. Hazard Evaluation in Valparaíso: the MAR VASTO Project

    NASA Astrophysics Data System (ADS)

    Indirli, Maurizio; Razafindrakoto, Hoby; Romanelli, Fabio; Puglisi, Claudio; Lanzoni, Luca; Milani, Enrico; Munari, Marco; Apablaza, Sotero

    2011-03-01

    The Project "MAR VASTO" (Risk Management in Valparaíso/Manejo de Riesgos en Valparaíso), funded by BID/IADB (Banco InterAmericano de Desarrollo/InterAmerican Development Bank), has been managed by ENEA with an Italian/Chilean joint partnership and the support of local institutions. Valparaíso tells the never-ending story of a tight interaction between society and environment, and the city was declared a UNESCO World Heritage Site in 2003. The main goals of the project have been to evaluate the impact of the main hazards (earthquake, tsunami, fire, and landslide) in the Valparaíso urban area, defining scenarios and maps on a geo-referenced GIS database. For earthquake hazard assessment in particular, realistic modelling of ground motion is a very important knowledge base for the preparation of ground-shaking scenarios, which serve as a valid and economical tool for civil engineers and are particularly powerful for the prevention aspects of Civil Defense. When numerical modelling is successfully compared with records (as in the case of the 1985 Valparaíso earthquake), the resulting synthetic seismograms permit the generation of ground-shaking maps based upon a set of possible scenario earthquakes. Where no recordings are available for the scenario event, synthetic signals can be used to estimate ground motion without having to wait for a strong earthquake to occur (pre-disaster microzonation). For the tsunami hazard, the available reports [e.g., SHOA (1999) Carta de Inundacion por Tsunami para la bahia de Valparaíso, Chile, http://www.shoa.cl/servicios/citsu/citsu.php] have been used as the reference documents for the hazard assessment for the Valparaíso site. 
The deep and detailed studies already carried out by SHOA have been complemented with (a) sets of parametric studies of the tsunamigenic potential of the 1985 and 1906 scenario earthquakes; and (b) analytical modelling of tsunami waveforms for different scenarios, in order to provide a complementary dataset to be used for the tsunami hazard assessment at Valparaíso. In addition, other targeted activities have been carried out, such as architectural/urban planning studies and vulnerability evaluation for a pilot building stock in a historic area, and a vulnerability analysis for three monumental churches. In this paper, a general description of the work is given, taking into account the in situ work that informed the suggested guidelines for mitigation actions.

  16. Analyzing Uncertainty and Risk in the Management of Water Resources in the State Of Texas

    NASA Astrophysics Data System (ADS)

    Singh, A.; Hauffpauir, R.; Mishra, S.; Lavenue, M.

    2010-12-01

    The State of Texas updates its state water plan every five years to project the water demand of its growing population. The plan compiles forecasts of water deficits from state-wide regional water planning groups as well as the water supply strategies to address these deficits. To date, the plan has adopted a deterministic framework, where reference values (e.g., best estimates, worst-case scenario) are used for key factors such as population growth, demand for water, severity of drought, water availability, etc. These key factors can, however, be affected by multiple sources of uncertainty, such as the impact of climate on surface water and groundwater availability, uncertainty in population projections, changes in the sectoral composition of the economy, variability in water usage, feasibility of the permitting process, cost of implementation, etc. The objective of this study was to develop a generalized and scalable methodology for addressing uncertainty and risk in water resources management both at the regional and the local water planning level. The study proposes a framework defining the elements of an end-to-end system model that captures the key components of demand, supply and planning modules along with their associated uncertainties. The framework preserves the fundamental elements of the well-established planning process in the State of Texas, promoting an incremental and stakeholder-driven approach to adding different levels of uncertainty (and risk) into the decision-making environment. The uncertainty in the water planning process is broken down into two primary categories: demand uncertainty and supply uncertainty. Uncertainty in demand is related to the uncertainty in population projections and per-capita usage rates. Uncertainty in supply, in turn, is dominated by the uncertainty in future climate conditions. 
Climate is represented in terms of time series of precipitation, temperature and/or surface evaporation flux for some future time period of interest, which can be obtained as outputs of global climate models (GCMs). These are then linked with hydrologic and water-availability models (WAMs) to estimate water availability for the worst drought conditions under each future climate scenario. Combining the demand scenarios with the water availability scenarios yields multiple scenarios for water shortage (or surplus). Given multiple shortage/surplus scenarios, various water management strategies can be assessed to evaluate the reliability of meeting projected deficits. These reliabilities are then used within a multi-criteria decision-framework to assess trade-offs between various water management objectives, thus helping to make more robust decisions while planning for the water needs of the future.
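The scenario-combination step described above (demand scenarios crossed with supply scenarios to yield shortage/surplus outcomes, then summarized as a reliability) can be illustrated with a minimal Monte Carlo sketch. The distributions and acre-feet figures below are purely hypothetical placeholders, not values from the study:

```python
import random

random.seed(1)

def shortage_reliability(n, demand_mean, demand_sd, supply_mean, supply_sd):
    """Sample paired demand/supply scenarios, compute shortage (+) or
    surplus (-), and return the fraction of scenarios in which projected
    demand is met (the reliability of the plan)."""
    met = 0
    for _ in range(n):
        demand = random.gauss(demand_mean, demand_sd)  # population x per-capita use
        supply = random.gauss(supply_mean, supply_sd)  # drought-year availability
        if demand - supply <= 0:                       # no shortage in this scenario
            met += 1
    return met / n

# Hypothetical planning figures (acre-feet/yr), for illustration only:
print(shortage_reliability(10_000,
                           demand_mean=1.2e6, demand_sd=0.1e6,
                           supply_mean=1.3e6, supply_sd=0.15e6))
```

In a fuller implementation the supply samples would come from WAM runs driven by GCM climate scenarios rather than a simple normal distribution, but the reliability summary feeding the multi-criteria decision framework has the same shape.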

  17. Comparison of the near field/far field model and the advanced reach tool (ART) model V1.5: exposure estimates to benzene during parts washing with mineral spirits.

    PubMed

    LeBlanc, Mallory; Allen, Joseph G; Herrick, Robert F; Stewart, James H

    2018-03-01

    The Advanced Reach Tool V1.5 (ART) is a mathematical model for occupational exposures conceptually based on, but implemented differently than, the "classic" Near Field/Far Field (NF/FF) exposure model. The NF/FF model conceptualizes two distinct exposure "zones": the near field, within approximately 1 m of the breathing zone, and the far field, consisting of the rest of the room in which the exposure occurs. ART has been reported to provide "realistic and reasonable worst case" estimates of the exposure distribution. In this study, benzene exposure during the use of a metal parts washer was modeled using ART V1.5 and compared to actual measured worker samples and to NF/FF model results from three previous studies. Next, the exposure concentrations expected to be exceeded 25%, 10% and 5% of the time for the exposure scenario were calculated using ART. Lastly, ART exposure estimates were compared with and without Bayesian adjustment. The modeled parts-washing benzene exposure scenario included distinct tasks, e.g., spraying, brushing, rinsing and soaking/drying. Because ART can directly incorporate specific types of tasks that are part of the exposure scenario, the present analysis identified each task's determinants of exposure and performance time, thus extending the work of the previous three studies, where the process of parts washing was modeled as one event. The ART 50th-percentile exposure estimate for benzene (0.425 ppm) more closely approximated the reported measured mean value of 0.50 ppm than the NF/FF model estimates of 0.33 ppm, 0.070 ppm or 0.2 ppm obtained from other modeling studies of this exposure scenario. The ART model with the Bayesian analysis provided the closest estimate to the measured value (0.50 ppm). 
ART (with Bayesian adjustment) was then used to assess the 75th, 90th and 95th percentile exposures, predicting that on randomly selected days during this parts-washing exposure scenario, 25% of the benzene exposures would be above 0.70 ppm; 10% above 0.95 ppm; and 5% above 1.15 ppm. These exposure estimates at the three different percentiles of the ART exposure distribution refer to the modeled exposure scenario, not a specific workplace or worker. This study provides a detailed comparison of modeling tools currently available to occupational hygienists and other exposure assessors. Possible applications are considered. Copyright © 2017 Elsevier GmbH. All rights reserved.
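The "classic" two-box NF/FF model referenced above has a widely used steady-state form: the far-field concentration is G/Q and the near-field concentration is G/Q + G/β, where G is the emission rate, Q the room ventilation rate, and β the near-field/far-field exchange airflow. A minimal sketch with illustrative inputs (not the parts-washer parameters from the studies compared here):

```python
def nf_ff_steady_state(G, Q, beta):
    """Steady-state two-box (near field / far field) concentrations.
    G    = contaminant emission rate (mg/min)
    Q    = room supply/exhaust airflow (m^3/min)
    beta = near-field/far-field exchange airflow (m^3/min)
    Returns (near-field, far-field) concentrations in mg/m^3."""
    c_ff = G / Q           # far field: dilution by room ventilation
    c_nf = c_ff + G / beta # near field: far field plus local buildup
    return c_nf, c_ff

# Illustrative values only:
c_nf, c_ff = nf_ff_steady_state(G=100.0, Q=60.0, beta=10.0)
print(round(c_nf, 2), round(c_ff, 2))  # near field well above far field
```

The small exchange flow β is what drives the near-field excess; ART layers task-specific modifying factors and (optionally) Bayesian updating with measurements on top of this kind of mechanistic core.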

  18. Traffic accident reconstruction and an approach for prediction of fault rates using artificial neural networks: A case study in Turkey.

    PubMed

    Can Yilmaz, Ali; Aci, Cigdem; Aydin, Kadir

    2016-08-17

    Currently, in Turkey, fault rates in traffic accidents are determined according to the initiative of accident experts (no speed analyses of vehicles are performed; only the accident type is considered), and there are no specific quantitative instructions on fault rates related to the progression of accidents beyond the type of collision (side impact, head to head, rear end, etc.) in the No. 2918 Turkish Highway Traffic Act (THTA 1983). The aim of this study is to introduce a scientific and systematic approach for determining fault rates in the most frequent property damage-only (PDO) traffic accidents in Turkey. In this study, data (police reports, skid marks, deformation, crush depth, etc.) collected from the most frequent and controversial accident types (4 sample vehicle-vehicle scenarios) involving PDO were inserted into reconstruction software called vCrash. Sample real-world scenarios were simulated in the software to generate different vehicle deformations that also correspond to energy-equivalent speed data just before the crash. These values were used to train a multilayer feedforward artificial neural network (MFANN), a function fitting neural network (FITNET, a specialized version of MFANN), and generalized regression neural network (GRNN) models within 10-fold cross-validation to predict fault rates without using the software. The performance of the artificial neural network (ANN) prediction models was evaluated using the mean square error (MSE) and the multiple correlation coefficient (R). It was shown that the MFANN model performed better for predicting fault rates (i.e., lower MSE and higher R) than the FITNET and GRNN models for accident scenarios 1, 2, and 3, whereas FITNET performed the best for scenario 4. The FITNET model showed the second-best prediction results for the first 3 scenarios. Because there is no training phase in GRNN, the GRNN model produced results much faster than the MFANN and FITNET models. However, the GRNN model had the worst prediction results. 
The R values for prediction of fault rates were close to 1 for all folds and scenarios. This study focuses on exhibiting new aspects and scientific approaches for determining fault rates in the most frequent PDO accidents occurring in Turkey, by discussing some deficiencies in the THTA and without relying on the initiative and/or experience of experts. This study supports judicious decisions, especially in forensic investigations and events involving insurance companies. Using this approach, injury/fatal and/or pedestrian-related accidents may be analyzed in future work by developing new scientific models.
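The two evaluation metrics used to compare the ANN models above, MSE and the correlation R between observed and predicted values, can be sketched with stdlib Python. The fault-rate values below are hypothetical placeholders for one cross-validation fold, not data from the study:

```python
import math

def mse(y_true, y_pred):
    """Mean square error between observed and predicted values."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def corr(y_true, y_pred):
    """Pearson correlation R between observed and predicted values."""
    n = len(y_true)
    mt = sum(y_true) / n
    mp = sum(y_pred) / n
    cov = sum((t - mt) * (p - mp) for t, p in zip(y_true, y_pred))
    st = math.sqrt(sum((t - mt) ** 2 for t in y_true))
    sp = math.sqrt(sum((p - mp) ** 2 for p in y_pred))
    return cov / (st * sp)

# Hypothetical fault-rate predictions (percent) for one fold:
y_true = [25.0, 50.0, 75.0, 100.0, 0.0]
y_pred = [27.0, 48.0, 74.0, 97.0, 4.0]
print(round(mse(y_true, y_pred), 2), round(corr(y_true, y_pred), 4))
```

Within 10-fold cross-validation these two statistics would be computed on each held-out fold and averaged, which is the basis on which the MFANN, FITNET and GRNN models were ranked.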

  19. Potential for reducing air-pollutants while achieving 2 °C global temperature change limit target.

    PubMed

    Hanaoka, Tatsuya; Akashi, Osamu; Fujiwara, Kazuya; Motoki, Yuko; Hibino, Go

    2014-12-01

    This study analyzes the potential to reduce air pollutants while achieving the 2 °C global temperature change limit target above pre-industrial levels, using the bottom-up optimization model AIM/Enduse[Global]. This study focuses on: 1) estimating mitigation potentials and costs for achieving the 2 °C, 2.5 °C, and 3 °C target scenarios; 2) assessing co-benefits of reducing air pollutants such as NOx, SO2, BC, and PM; and 3) analyzing features of sectoral attributions in the Annex I and Non-Annex I groups of countries. The carbon tax scenario at 50 US$/tCO2-eq in 2050 can reduce GHG emissions more than the 3 °C target scenario, but a higher carbon price, around 400 US$/tCO2-eq in 2050, is required to achieve the 2 °C target scenario. At the same time, there is a large co-benefit potential for reducing air pollutants, in the range of 60-80% reductions in 2050 from the reference scenario, while achieving the 2 °C target. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. NASA CEV Reference Entry GN&C System and Analysis

    NASA Technical Reports Server (NTRS)

    Munday, S.; Madsen, C.; Broome, J.; Gay, R.; Tigges, M.; Strahan, A.

    2007-01-01

    As part of its overall objectives, the Orion spacecraft will be required to perform entry and Earth landing functions for Low Earth Orbit (LEO) and Lunar missions. Both of these entry scenarios will begin with separation of the Service Module (SM), making them unique from other Orion mission phases in that only the Command Module (CM) portion of the Crew Exploration Vehicle (CEV) will be involved, requiring a CM-specific Guidance, Navigation and Control (GN&C) system. Also common to these mission scenarios will be the need for GN&C to safely return crew (or cargo) to Earth within the dynamic thermal and structural constraints of entry and within acceptable accelerations on the crew, utilizing the limited aerodynamic performance of the CM capsule. The lunar return mission could additionally require an initial atmospheric entry designed to support a precision skip and second entry, all to maximize downrange performance and ensure landing in the United States. This paper describes the Entry GN&C reference design, developed by the NASA-led team, that supports these entry scenarios and that was used to validate the Orion System requirements. Description of the reference design will include an overview of the GN&C functions, avionics, and effectors and will relate these to the specific design drivers of the entry scenarios, as well as the desire for commonality in vehicle systems to support the different missions. The discussion will also include the requirement for an Emergency Entry capability beyond that of the nominal performance of the multi-string GN&C system, intended to return the crew to Earth in a survivable but unguided manner. Finally, various analyses will be discussed, including those completed to support validation efforts of the current CEV requirements, along with those on-going and planned with the intention to further refine the requirements and to support design development work in conjunction with the prime contractor. 
Some of these ongoing analyses will include work to size effectors (jets) and fuel budgets, to refine skip entry concepts, to characterize navigation performance and uncertainties, to provide for SM disposal offshore and to identify requirements to support target site selection.
