Sample records for risk-based priority scoring

  1. What's wrong with hazard-ranking systems? An expository note.

    PubMed

    Cox, Louis Anthony (Tony)

    2009-07-01

    Two commonly recommended principles for allocating risk management resources to remediate uncertain hazards are: (1) select a subset to maximize risk-reduction benefits (e.g., maximize the von Neumann-Morgenstern expected utility of the selected risk-reducing activities), and (2) assign priorities to risk-reducing opportunities and then select activities from the top of the priority list down until no more can be afforded. When different activities create uncertain but correlated risk reductions, as is often the case in practice, then these principles are inconsistent: priority scoring and ranking fails to maximize risk-reduction benefits. Real-world risk priority scoring systems used in homeland security and terrorism risk assessment, environmental risk management, information system vulnerability rating, business risk matrices, and many other important applications do not exploit correlations among risk-reducing opportunities or optimally diversify risk-reducing investments. As a result, they generally make suboptimal risk management recommendations. Applying portfolio optimization methods instead of risk prioritization ranking, rating, or scoring methods can achieve greater risk-reduction value for resources spent.
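
    The inconsistency the note describes can be sketched numerically. In the hypothetical example below (the activities, scenario sets, and budget are invented for illustration), each activity mitigates a set of hazard scenarios, so correlated activities share scenarios and benefits are not additive; ranking by individual score then fails to pick the benefit-maximizing portfolio:

    ```python
    from itertools import combinations

    # Hypothetical data: each activity mitigates a set of hazard scenarios.
    # Portfolio benefit = number of distinct scenarios covered, so overlapping
    # (correlated) activities yield less than the sum of their individual benefits.
    activities = {
        "A": {1, 2, 3, 4},  # individual benefit 4
        "B": {1, 2, 3},     # individual benefit 3, but fully overlaps A
        "C": {5, 6},        # individual benefit 2, independent of A and B
    }

    def benefit(portfolio):
        covered = set()
        for a in portfolio:
            covered |= activities[a]
        return len(covered)

    budget = 2  # we can afford two activities

    # Principle (2): score each activity in isolation, take from the top down.
    ranked = sorted(activities, key=lambda a: len(activities[a]), reverse=True)
    priority_pick = ranked[:budget]  # ["A", "B"]: joint benefit only 4

    # Principle (1): choose the affordable subset that maximizes total benefit.
    optimal_pick = max(combinations(activities, budget), key=benefit)  # ("A", "C"): benefit 6

    print(priority_pick, benefit(priority_pick))
    print(list(optimal_pick), benefit(optimal_pick))
    ```

    The ranked list double-counts the shared scenarios, which is exactly the failure mode the note attributes to real-world scoring systems; a portfolio view diversifies across uncorrelated opportunities instead.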

  2. Higher mortality in registrants with sudden Model for End-Stage Liver Disease increase: disadvantaged by the current allocation policy.

    PubMed

    Massie, Allan B; Luo, Xun; Alejo, Jennifer L; Poon, Anna K; Cameron, Andrew M; Segev, Dorry L

    2015-05-01

    Liver allocation is based on current Model for End-Stage Liver Disease (MELD) scores, with priority in the case of a tie being given to those waiting the longest with a given MELD score. We hypothesized that this priority might not reflect risk: registrants whose MELD score has recently increased receive lower priority but might have higher wait-list mortality. We studied wait-list and posttransplant mortality in 69,643 adult registrants from 2002 to 2013. By likelihood maximization, we empirically defined a MELD spike as a MELD increase ≥ 30% over the previous 7 days. At any given time, only 0.6% of wait-list patients experienced a spike; however, these patients accounted for 25% of all wait-list deaths. Registrants who reached a given MELD score after a spike had higher wait-list mortality in the ensuing 7 days than those with the same resulting MELD score who did not spike, but they had no difference in posttransplant mortality. The spike-associated wait-list mortality increase was highest for registrants with medium MELD scores: specifically, 2.3-fold higher (spike versus no spike) for a MELD score of 10, 4.0-fold higher for a MELD score of 20, and 2.5-fold higher for a MELD score of 30. A model incorporating the MELD score and spikes predicted wait-list mortality risk much better than a model incorporating only the MELD score. Registrants with a sudden MELD increase have a higher risk of short-term wait-list mortality than is indicated by their current MELD score but have no increased risk of posttransplant mortality; allocation policy should be adjusted accordingly. © 2015 American Association for the Study of Liver Diseases.
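
    The spike definition is easy to operationalize; a minimal sketch follows (the 30%-over-7-days rule is from the abstract, while the example MELD series is invented):

    ```python
    # "MELD spike" per the abstract: an increase of at least 30% relative to
    # the registrant's score 7 days earlier. The history below is hypothetical.
    def is_spike(meld_now: float, meld_7d_ago: float) -> bool:
        return meld_now >= 1.3 * meld_7d_ago

    history = {0: 20, 7: 22, 14: 29}  # day -> MELD score

    spikes = {
        day: is_spike(history[day], history[day - 7])
        for day in history
        if day - 7 in history
    }
    print(spikes)  # day 7: +10%, no spike; day 14: +31.8%, spike
    ```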

  3. The priority group index: a proposed new method incorporating high risk and population burden to identify target populations for public health interventions.

    PubMed

    Zhang, Bo; Cohen, Joanna E; O'Connor, Shawn

    2014-01-01

    Selection of priority groups is important for health interventions; however, no quantitative method for this selection has been developed. To develop a quantitative method to support the process of selecting priority groups for public health interventions based on both high risk and population health burden. Secondary data analysis of the 2010 Canadian Community Health Survey. Canadian population. Survey respondents. We identified priority groups for 3 diseases: heart disease, stroke, and chronic lower respiratory diseases. Three measures, namely prevalence, population counts, and adjusted odds ratios (ORs), were calculated for subpopulations (sociodemographic characteristics and other risk factors). A Priority Group Index (PGI) was calculated by summing the rank scores of these 3 measures. Of the 30 priority groups identified by the PGI (10 for each of the 3 disease outcomes), 7 were identified on the basis of high prevalence only, 5 based on population count only, 3 based on high OR only, and the remainder based on combinations of these. The identified priority groups were all in line with the literature on risk factors for the 3 diseases, such as elderly people for heart disease and stroke and those with low income for chronic lower respiratory diseases. The PGI was thus able to balance high-risk and population-burden approaches in selecting priority groups, so it would address health inequities as well as disease burden in the overall population. The PGI is a quantitative method to select priority groups for public health interventions; it has the potential to enhance the effective use of limited public resources.
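
    A minimal sketch of the PGI construction as described (the subpopulations, their values, and the 1-to-n rank scoring below are invented for illustration): each group is ranked on the three measures, and the PGI is the sum of its rank scores:

    ```python
    # Hypothetical candidate groups with three measures each: prevalence (%),
    # population count, and adjusted odds ratio.
    groups = {
        "elderly":    {"prevalence": 12.0, "count": 500_000, "odds_ratio": 3.1},
        "low_income": {"prevalence":  8.0, "count": 900_000, "odds_ratio": 1.8},
        "smokers":    {"prevalence":  6.0, "count": 300_000, "odds_ratio": 2.4},
    }

    def rank_scores(measure):
        # Higher value -> higher rank score (1 = lowest).
        ordered = sorted(groups, key=lambda g: groups[g][measure])
        return {g: i + 1 for i, g in enumerate(ordered)}

    measures = ["prevalence", "count", "odds_ratio"]
    ranks = {m: rank_scores(m) for m in measures}
    pgi = {g: sum(ranks[m][g] for m in measures) for g in groups}

    for g in sorted(pgi, key=pgi.get, reverse=True):
        print(g, pgi[g])
    ```

    A group that is extreme on no single measure can still top the index, which is how the method balances high risk against population burden.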

  4. The robust corrective action priority-an improved approach for selecting competing corrective actions in FMEA based on principle of robust design

    NASA Astrophysics Data System (ADS)

    Sutrisno, Agung; Gunawan, Indra; Vanany, Iwan

    2017-11-01

    In spite of being an integral part of risk-based quality improvement efforts, studies improving the quality of corrective action prioritization using the FMEA technique are still limited in the literature, and none considers robustness and risk when selecting among competing improvement initiatives. This study proposed a theoretical model for selecting among competing risk-based corrective actions, considering their robustness and risk. We incorporated the principle of robust design in computing the preference score among corrective action candidates. Along with the cost and benefit of competing corrective actions, we also incorporated their risk and robustness. An example is provided to demonstrate the applicability of the proposed model.

  5. Global Conservation Priorities for Marine Turtles

    PubMed Central

    Wallace, Bryan P.; DiMatteo, Andrew D.; Bolten, Alan B.; Chaloupka, Milani Y.; Hutchinson, Brian J.; Abreu-Grobois, F. Alberto; Mortimer, Jeanne A.; Seminoff, Jeffrey A.; Amorocho, Diego; Bjorndal, Karen A.; Bourjea, Jérôme; Bowen, Brian W.; Briseño Dueñas, Raquel; Casale, Paolo; Choudhury, B. C.; Costa, Alice; Dutton, Peter H.; Fallabrino, Alejandro; Finkbeiner, Elena M.; Girard, Alexandre; Girondot, Marc; Hamann, Mark; Hurley, Brendan J.; López-Mendilaharsu, Milagros; Marcovaldi, Maria Angela; Musick, John A.; Nel, Ronel; Pilcher, Nicolas J.; Troëng, Sebastian; Witherington, Blair; Mast, Roderic B.

    2011-01-01

    Where conservation resources are limited and conservation targets are diverse, robust yet flexible priority-setting frameworks are vital. Priority-setting is especially important for geographically widespread species with distinct populations subject to multiple threats that operate on different spatial and temporal scales. Marine turtles are widely distributed and exhibit intra-specific variations in population sizes and trends, as well as reproduction and morphology. However, current global extinction risk assessment frameworks do not assess conservation status of spatially and biologically distinct marine turtle Regional Management Units (RMUs), and thus do not capture variations in population trends, impacts of threats, or necessary conservation actions across individual populations. To address this issue, we developed a new assessment framework that allowed us to evaluate, compare and organize marine turtle RMUs according to status and threats criteria. Because conservation priorities can vary widely (i.e. from avoiding imminent extinction to maintaining long-term monitoring efforts) we developed a “conservation priorities portfolio” system using categories of paired risk and threats scores for all RMUs (n = 58). We performed these assessments and rankings globally, by species, by ocean basin, and by recognized geopolitical bodies to identify patterns in risk, threats, and data gaps at different scales. This process resulted in characterization of risk and threats to all marine turtle RMUs, including identification of the world's 11 most endangered marine turtle RMUs based on highest risk and threats scores. This system also highlighted important gaps in available information that is crucial for accurate conservation assessments. 
Overall, this priority-setting framework can provide guidance for research and conservation priorities at multiple relevant scales, and should serve as a model for conservation status assessments and priority-setting for widespread, long-lived taxa. PMID:21969858

  6. Development of a novel scoring system for identifying emerging chemical risks in the food chain.

    PubMed

    Oltmanns, J; Licht, O; Bitsch, A; Bohlen, M-L; Escher, S E; Silano, V; MacLeod, M; Serafimova, R; Kass, G E N; Merten, C

    2018-02-21

    The European Food Safety Authority (EFSA) is responsible for risk assessment of all aspects of food safety, including the establishment of procedures aimed at the identification of emerging risks to food safety. Here, a scoring system was developed for identifying chemicals registered under the European REACH Regulation that could be of potential concern in the food chain using the following parameters: (i) environmental release based on maximum aggregated tonnages and environmental release categories; (ii) biodegradation in the environment; (iii) bioaccumulation and in vivo and in vitro toxicity. The screening approach was tested on 100 data-rich chemicals registered under the REACH Regulation at aggregated volumes of at least 1000 tonnes per annum. The results show that substance-specific data generated under the REACH Regulation can be used to identify potential emerging risks in the food chain. After application of the screening procedure, priority chemicals can be identified as potentially emerging risk chemicals through the integration of exposure, environmental fate and toxicity. The default approach is to generate a single total score for each substance using a predefined weighting scenario. However, it is also possible to use a pivot table approach to combine the individual scores in different ways that reflect user-defined priorities, which enables a very flexible, iterative definition of screening criteria. Possible applications of the approaches are discussed using illustrative examples. Either approach can then be followed by in-depth evaluation of priority substances to ensure the identification of substances that present a real emerging chemical risk in the food chain.
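
    The default single-score approach and the user-weighted alternative can be sketched as follows (the substances, parameter scores, and weights are all hypothetical; the real system scores environmental release, biodegradation, bioaccumulation, and toxicity from REACH data):

    ```python
    # Hypothetical per-substance parameter scores and weighting scenarios: the
    # total score is a weighted sum, and swapping in user-defined weights
    # re-ranks the same underlying scores (the "pivot table" idea).
    scores = {
        "substance_1": {"release": 4, "persistence": 2, "bioaccumulation": 3, "toxicity": 5},
        "substance_2": {"release": 2, "persistence": 5, "bioaccumulation": 4, "toxicity": 2},
    }
    default_weights = {"release": 0.3, "persistence": 0.2, "bioaccumulation": 0.2, "toxicity": 0.3}

    def total_score(substance, weights):
        return sum(weights[p] * scores[substance][p] for p in weights)

    ranking = sorted(scores, key=lambda s: total_score(s, default_weights), reverse=True)
    print(ranking)

    # A persistence-focused user scenario changes the priority order:
    persistence_weights = {"release": 0.1, "persistence": 0.6, "bioaccumulation": 0.2, "toxicity": 0.1}
    reranking = sorted(scores, key=lambda s: total_score(s, persistence_weights), reverse=True)
    print(reranking)
    ```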

  7. Evaluating the operational risks of biomedical waste using failure mode and effects analysis.

    PubMed

    Chen, Ying-Chu; Tsai, Pei-Yi

    2017-06-01

    The potential problems and risks of biomedical waste generation have become increasingly apparent in recent years. This study applied a failure mode and effects analysis to evaluate the operational problems and risks of biomedical waste. The microbiological contamination of biomedical waste seldom receives the attention of researchers. In this study, the biomedical waste lifecycle was divided into seven processes: Production, classification, packaging, sterilisation, weighing, storage, and transportation. Twenty main failure modes were identified in these phases and risks were assessed based on their risk priority numbers. The failure modes in the production phase accounted for the highest proportion of the risk priority number score (27.7%). In the packaging phase, the failure mode 'sharp articles not placed in solid containers' had the highest risk priority number score, mainly owing to its high severity rating. The sterilisation process is the main difference in the treatment of infectious and non-infectious biomedical waste. The failure modes in the sterilisation phase were mainly owing to human factors (mostly related to operators). This study increases the understanding of the potential problems and risks associated with biomedical waste, thereby increasing awareness of how to improve the management of biomedical waste to better protect workers, the public, and the environment.

  8. Threatened species and the potential loss of phylogenetic diversity: conservation scenarios based on estimated extinction probabilities and phylogenetic risk analysis.

    PubMed

    Faith, Daniel P

    2008-12-01

    New species conservation strategies, including the EDGE of Existence (EDGE) program, have expanded threatened species assessments by integrating information about species' phylogenetic distinctiveness. Distinctiveness has been measured through simple scores that assign shared credit among species for evolutionary heritage represented by the deeper phylogenetic branches. A species with a high score combined with a high extinction probability receives high priority for conservation efforts. Simple hypothetical scenarios for phylogenetic trees and extinction probabilities demonstrate how such scoring approaches can provide inefficient priorities for conservation. An existing probabilistic framework derived from the phylogenetic diversity measure (PD) properly captures the idea of shared responsibility for the persistence of evolutionary history. It avoids static scores, takes into account the status of close relatives through their extinction probabilities, and allows for the necessary updating of priorities in light of changes in species threat status. A hypothetical phylogenetic tree illustrates how changes in extinction probabilities of one or more species translate into changes in expected PD. The probabilistic PD framework provided a range of strategies that moved beyond expected PD to better consider worst-case PD losses. In another example, risk aversion gave higher priority to a conservation program that provided a smaller, but less risky, gain in expected PD. The EDGE program could continue to promote a list of top species conservation priorities through application of probabilistic PD and simple estimates of current extinction probability. The list might be a dynamic one, with all the priority scores updated as extinction probabilities change. 
Results of recent studies suggest that extinction probabilities derived from Red List criteria linked to changes in species range sizes may provide estimates for many different species. Probabilistic PD provides a framework for single-species assessment that is well integrated with a broader measurement of impacts on PD owing to climate change and other factors.
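
    The probabilistic PD framework can be sketched under standard assumptions (independent extinctions; the toy tree, branch lengths, and probabilities below are invented): each branch contributes its length times the probability that at least one descendant species survives, and priorities compare gains in expected PD:

    ```python
    from math import prod

    # Hypothetical tree: branch -> (length, descendant species).
    branches = {
        "root-AB": (2.0, ["A", "B"]),
        "A":       (1.0, ["A"]),
        "B":       (1.0, ["B"]),
        "C":       (3.0, ["C"]),
    }
    p_ext = {"A": 0.9, "B": 0.8, "C": 0.2}  # extinction probabilities

    def expected_pd(p_ext):
        # A branch persists iff at least one of its descendant species survives.
        return sum(
            length * (1 - prod(p_ext[s] for s in species))
            for length, species in branches.values()
        )

    base = expected_pd(p_ext)                         # ~3.26
    gain_a = expected_pd({**p_ext, "A": 0.0}) - base  # gain from securing A
    gain_c = expected_pd({**p_ext, "C": 0.0}) - base  # gain from securing C
    print(base, gain_a, gain_c)
    ```

    Securing A outranks securing C here even though C carries the longer terminal branch, because A's deep branch is otherwise likely lost along with its threatened relative B; a static per-species distinctiveness score misses exactly this interaction between relatives.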

  9. Revised Risk Priority Number in Failure Mode and Effects Analysis Model from the Perspective of Healthcare System

    PubMed Central

    Rezaei, Fatemeh; Yarmohammadian, Mohmmad H.; Haghshenas, Abbas; Fallah, Ali; Ferdosi, Masoud

    2018-01-01

    Background: The Failure Mode and Effects Analysis (FMEA) methodology is known as an important risk assessment tool and an accreditation requirement for many organizations. For prioritizing failures, the “risk priority number (RPN)” index is used, valued especially for its ease of use and its subjective evaluations of the occurrence, severity, and detectability of each failure. In this study, we tried to make the FMEA model more compatible with health-care systems by redefining the RPN index to be closer to reality. Methods: We used a combined quantitative and qualitative approach. In the qualitative domain, focus group discussions were used to collect data; a quantitative approach was used to calculate RPN scores. Results: We studied the patient's journey in the surgery ward from the holding area to the operating room. The highest-priority failures were determined by (1) defining inclusion criteria for the severity of an incident (clinical effect, claim consequence, waste of time, and financial loss), its occurrence (time-unit occurrence and degree of exposure to risk), and its preventability (degree of preventability and defensive barriers), and then (2) quantifying the risk priority criteria using the RPN index (361 for the highest-rated failure). Reassessment of the improved RPN scores by root cause analysis showed some variation. Conclusions: We concluded that standard criteria should be developed consistent with clinical terminology and the specific scientific field. Cooperation and partnership between technical and clinical groups are therefore necessary to modify these models. PMID:29441184

  10. Priority screening of toxic chemicals and industry sectors in the U.S. toxics release inventory: a comparison of the life cycle impact-based and risk-based assessment tools developed by U.S. EPA.

    PubMed

    Lim, Seong-Rin; Lam, Carl W; Schoenung, Julie M

    2011-09-01

    Life Cycle Impact Assessment (LCIA) and Risk Assessment (RA) employ different approaches to evaluate toxic impact potential for their own general applications. LCIA is often used to evaluate toxicity potentials for corporate environmental management and RA is often used to evaluate a risk score for environmental policy in government. This study evaluates the cancer, non-cancer, and ecotoxicity potentials and risk scores of chemicals and industry sectors in the United States on the basis of the LCIA- and RA-based tools developed by U.S. EPA, and compares the priority screening of toxic chemicals and industry sectors identified with each method to examine whether the LCIA- and RA-based results lead to the same prioritization schemes. The Tool for the Reduction and Assessment of Chemical and other environmental Impacts (TRACI) is applied as an LCIA-based screening approach with a focus on air and water emissions, and the Risk-Screening Environmental Indicator (RSEI) is applied in equivalent fashion as an RA-based screening approach. The U.S. Toxic Release Inventory is used as the dataset for this analysis, because of its general applicability to a comprehensive list of chemical substances and industry sectors. Overall, the TRACI and RSEI results do not agree with each other in part due to the unavailability of characterization factors and toxic scores for select substances, but primarily because of their different evaluation approaches. Therefore, TRACI and RSEI should be used together both to support a more comprehensive and robust approach to screening of chemicals for environmental management and policy and to highlight substances that are found to be of concern from both perspectives. Copyright © 2011 Elsevier Ltd. All rights reserved.

  11. Clinical risk analysis with failure mode and effect analysis (FMEA) model in a dialysis unit.

    PubMed

    Bonfant, Giovanna; Belfanti, Pietro; Paternoster, Giuseppe; Gabrielli, Danila; Gaiter, Alberto M; Manes, Massimo; Molino, Andrea; Pellu, Valentina; Ponzetti, Clemente; Farina, Massimo; Nebiolo, Pier E

    2010-01-01

    The aim of clinical risk management is to improve the quality of care provided by health care organizations and to assure patients' safety. Failure mode and effect analysis (FMEA) is a tool employed for clinical risk reduction. We applied FMEA to chronic hemodialysis outpatients. The FMEA steps were: (i) process study: we recorded phases and activities; (ii) hazard analysis: we listed activity-related failure modes and their effects, described control measures, assigned severity, occurrence, and detection scores for each failure mode, and calculated risk priority numbers (RPNs) by multiplying the 3 scores (the total RPN is the sum of the single failure mode RPNs); (iii) planning: we prioritized RPNs on a priority matrix taking into account the 3 scores, analyzed the causes of the failure modes, made recommendations, and planned new control measures; (iv) monitoring: after failure mode elimination or reduction, we compared the resulting RPN with the previous one. Our failure modes with the highest RPNs came from communication and organization problems. Two tools were created to improve information flow: "dialysis agenda" software and nursing datasheets. We scheduled nephrological examinations, and we changed both the medical and the nursing organization. The total RPN decreased from 892 to 815 (8.6%) after reorganization. Employing FMEA, we worked on a few critical activities and reduced patients' clinical risk. A priority matrix also takes into account the weight of the control measures: we believe this evaluation is quick, because of simple priority selection, and that it decreases action times.
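
    The scoring arithmetic in the hazard-analysis step is the standard RPN product; a minimal sketch (the failure modes and scores below are invented, not the unit's actual data):

    ```python
    # Each failure mode gets severity (S), occurrence (O), and detection (D)
    # scores, typically on 1-10 scales; RPN = S * O * D, and the total RPN
    # is the sum over all failure modes. Data below are hypothetical.
    failure_modes = {
        "wrong dialysate prescription":  (9, 2, 4),
        "missed nephrology exam":        (6, 5, 3),
        "mislabelled nursing datasheet": (4, 6, 5),
    }

    rpn = {name: s * o * d for name, (s, o, d) in failure_modes.items()}
    total_rpn = sum(rpn.values())

    # Prioritize from the highest RPN down.
    for name in sorted(rpn, key=rpn.get, reverse=True):
        print(name, rpn[name])
    print("total RPN:", total_rpn)
    ```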

  12. Setting priorities for surveillance, prevention, and control of zoonoses in Bogotá, Colombia.

    PubMed

    Cediel, Natalia; Villamil, Luis Carlos; Romero, Jaime; Renteria, Libardo; De Meneghi, Daniele

    2013-05-01

    To establish priorities for zoonoses surveillance, prevention, and control in Bogotá, Colombia. A Delphi panel of experts in veterinary and human medicine was conducted using a validated prioritization method to assess the importance of 32 selected zoonoses. This exercise was complemented by a questionnaire survey, using the knowledge, attitudes, and practices (KAP) methodology, administered in 19 districts of Bogotá from September 2009 to April 2010 to an at-risk population (workers at veterinary clinics; pet shops; butcher shops; and traditional food markets that sell poultry, meat, cheese, and eggs). A risk indicator based on level of knowledge about zoonoses was constructed using categorical principal component and logistic regression analyses. Twelve experts participated in the Delphi panel. The diseases scored as highest priority were influenza A(H1N1), salmonellosis, Escherichia coli infection, leptospirosis, and rabies; those scored as lowest priority were ancylostomiasis, scabies, ringworm, and trichinellosis. A total of 535 questionnaires were collected and analyzed. Respondents claimed to have had scabies (21%), fungal infections (8%), brucellosis (8%), and pulicosis (8%). The workers with the most limited knowledge of zoonoses, and therefore the highest health risk, were those who (1) did not have a professional education, (2) had limited or no zoonoses-prevention training, and (3) worked in the Usme, Bosa, or Ciudad Bolívar districts. According to the experts, influenza A(H1N1) was the most important zoonosis. Rabies, leptospirosis, brucellosis, and toxoplasmosis were identified as priority diseases by both the experts and the exposed workers. This is the first prioritization exercise focused on zoonoses surveillance, prevention, and control in Colombia. These results could be used to guide decision-making for resource allocation in public health.

  13. Risk assessment in the upstream crude oil supply chain: Leveraging analytic hierarchy process

    NASA Astrophysics Data System (ADS)

    Briggs, Charles Awoala

    For an organization to be successful, an effective strategy is required; implemented appropriately, the strategy will result in a sustainable competitive advantage. The importance of decision making in the oil industry is reflected in the magnitude and nature of the industry. Specific features of the oil industry supply chain, such as its length, the complexity of its transportation system, and its complex production and storage processes, pose challenges to its effective management. Hence, understanding the risks, their sources, and their potential impacts on the oil industry's operations is helpful in proposing a risk management model for the upstream oil supply chain. The risk-based model in this research uses a three-level analytic hierarchy process (AHP), a multiple-attribute decision-making technique, to underline the importance of risk analysis and risk management in the upstream crude oil supply chain. Level 1 represents the overall goal of risk management; Level 2 comprises the various risk factors; and Level 3 represents the alternative criteria of the decision maker, as indicated on the hierarchical structure of the crude oil supply chain. Several risk management experts from different oil companies around the world were surveyed, and six major types of supply chain risk were identified: (1) exploration and production, (2) environmental and regulatory compliance, (3) transportation, (4) availability of oil, (5) geopolitical, and (6) reputational. Also identified were the preferred methods of managing risk: (1) accept and control the risk, (2) avoid the risk by stopping the activity, or (3) transfer or share the risk with other companies or insurers. The survey results indicate that the most important risk to manage is transportation risk, with a priority of .263, followed by exploration/production, with a priority of .198, at an overall inconsistency of .03. With respect to the major objectives, the most preferred risk management policy option based on the composite score is to accept and control risk, with a priority of .446, followed by transfer or share risk, with a priority of .303. The least preferred option is to terminate or forgo the activity, with a priority of .251.
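
    The AHP machinery behind such priorities can be sketched in a few lines (the pairwise comparison matrix below is hypothetical, not the study's survey data): priorities are the normalized principal eigenvector of the comparison matrix, and the consistency ratio checks judgment coherence (the study reports an inconsistency of .03):

    ```python
    import numpy as np

    # Hypothetical 3x3 pairwise comparison matrix on Saaty's 1-9 scale
    # (entry [i, j] = how much more important criterion i is than criterion j).
    A = np.array([
        [1.0, 3.0, 5.0],
        [1 / 3, 1.0, 2.0],
        [1 / 5, 1 / 2, 1.0],
    ])

    eigvals, eigvecs = np.linalg.eig(A)
    k = int(np.argmax(eigvals.real))
    w = np.abs(eigvecs[:, k].real)
    priorities = w / w.sum()  # principal eigenvector, normalized to sum to 1

    n = A.shape[0]
    ci = (eigvals[k].real - n) / (n - 1)  # consistency index
    ri = 0.58                             # Saaty's random index for n = 3
    cr = ci / ri                          # consistency ratio; < 0.10 is acceptable
    print(priorities, round(cr, 3))
    ```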

  14. 7 CFR 4279.155 - Loan priorities.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... natural disaster or experiencing fundamental structural changes in its economic base (5 points). (iv... the maximum allowable for a loan of its size (5 points). (5) High impact business investment priorities. The priority score for high impact business investment will be the total score for the following...

  15. A macro environmental risk assessment methodology for establishing priorities among risks to human health and the environment in the Philippines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gernhofer, S.; Oliver, T.J.; Vasquez, R.

    1994-12-31

    A macro environmental risk assessment (ERA) methodology was developed for the Philippine Department of Environment and Natural Resources (DENR) as part of the US Agency for International Development Industrial Environmental Management Project. The DENR allocates its limited resources to mitigate those environmental problems that pose the greatest threat to human health and the environment. The National Regional Industry Prioritization Strategy (NRIPS) methodology was developed as a risk assessment tool to establish a national ranking of industrial facilities. The ranking establishes regional and national priorities, based on risk factors, that DENR can use to determine the most effective allocation of its limited resources. NRIPS is a systematic framework that examines the potential risk to human health and the environment from hazardous substances released from a facility, and, in doing so, generates a relative numerical score that represents that risk. More than 3,300 facilities throughout the Philippines were evaluated successfully with the NRIPS.

  16. Research priorities in the field of HIV and AIDS in Iran

    PubMed Central

    Haghdoost, AliAkbar; Sadeghi, Masoomeh; Nasirian, Maryam; Mirzazadeh, Ali; Navadeh, Soodabeh

    2012-01-01

    Background: HIV is a multidimensional problem; therefore, prioritization of research topics in this field is a serious challenge. We decided to prioritize the major areas of research on HIV/AIDS in Iran. Materials and Methods: In a brainstorming session with the main national and provincial stakeholders and experts from different relevant fields, the direct and indirect dimensions of HIV/AIDS and its related research issues were explored. Afterward, using the Delphi method, we sent questionnaires to 20 experts (13 respondents) from different sectors. In this electronic questionnaire, we asked the experts to evaluate the main topics and their subtopics; scores ranged from 0 to 100. Results: The priority scores of the main themes were preventive activities (43.2), large-scale planning (25.4), estimation of the HIV/AIDS burden (20.9), and basic scientific research (10.5). The most important priority within each main theme was, respectively, education, particularly of high-risk groups (52.5); developing a national strategy to address the epidemic (31.8); estimation of incidence and prevalence among high-risk groups (59.5); and developing new preventive methods (66.7). Conclusions: The most important research priorities on HIV/AIDS were preventive activities and developing a national strategy. As high-risk groups are the people most involved in the epidemic, and also the most hard-to-reach subpopulations, a well-designed comprehensive national strategy is essential. However, we believe that, with a very specific and directed scheme, special attention to research in basic sciences is also necessary, at least in a limited number of institutes. PMID:23626616

  17. Effects of gaps in priorities between ideal and real lives on psychological burnout among academic faculty members at a medical university in Japan: a cross-sectional study.

    PubMed

    Chatani, Yuki; Nomura, Kyoko; Horie, Saki; Takemoto, Keisuke; Takeuchi, Masumi; Sasamori, Yukifumi; Takenoshita, Shinichi; Murakami, Aya; Hiraike, Haruko; Okinaga, Hiroko; Smith, Derek

    2017-04-04

    Accumulating evidence from medical workforce research indicates that poor work/life balance and increased work/home conflict induce psychological distress. In this study we aim to examine the existence of a priority gap between ideal and real lives, and its association with psychological burnout among academic professionals. This cross-sectional survey, conducted in 2014, included faculty members (228 men, 102 women) at a single medical university in Tokyo, Japan. The outcome of interest was psychological burnout, measured with a validated inventory. Discordance between ideal- and real-life priorities, based on participants' responses (work, family, individual life, combinations thereof), was defined as a priority gap. The majority (64%) of participants chose "work" as the greatest priority in real life, but only 28% chose "work" as the greatest priority in their conception of an ideal life. Priority gaps were identified in 59.5% of respondents. A stepwise multivariable general linear model demonstrated that burnout scores were associated positively with respondents' current position (P < 0.0018) and the presence of a priority gap (P < 0.0001), and negatively with the presence of social support (P < 0.0001). Among participants reporting priority gaps, burnout scores were significantly lower in those with children than in those with no children (P for interaction = 0.011); no such trend was observed in participants with no priority gap. A gap in priorities between an ideal and real life was associated with an increased risk of burnout, and the presence of children, which is a type of "family" social support, had a mitigating effect on burnout among those reporting priority gaps.

  18. Tetrapods on the EDGE: Overcoming data limitations to identify phylogenetic conservation priorities

    PubMed Central

    Gray, Claudia L.; Wearn, Oliver R.; Owen, Nisha R.

    2018-01-01

    The scale of the ongoing biodiversity crisis requires both effective conservation prioritisation and urgent action. As extinction is non-random across the tree of life, it is important to prioritise threatened species which represent large amounts of evolutionary history. The EDGE metric prioritises species based on their Evolutionary Distinctiveness (ED), which measures the relative contribution of a species to the total evolutionary history of their taxonomic group, and Global Endangerment (GE), or extinction risk. EDGE prioritisations rely on adequate phylogenetic and extinction risk data to generate meaningful priorities for conservation. However, comprehensive phylogenetic trees of large taxonomic groups are extremely rare and, even when available, become quickly out-of-date due to the rapid rate of species descriptions and taxonomic revisions. Thus, it is important that conservationists can use the available data to incorporate evolutionary history into conservation prioritisation. We compared published and new methods to estimate missing ED scores for species absent from a phylogenetic tree whilst simultaneously correcting the ED scores of their close taxonomic relatives. We found that following artificial removal of species from a phylogenetic tree, the new method provided the closest estimates of their “true” ED score, differing from the true ED score by an average of less than 1%, compared to the 31% and 38% difference of the previous methods. The previous methods also substantially under- and over-estimated scores as more species were artificially removed from a phylogenetic tree. We therefore used the new method to estimate ED scores for all tetrapods. From these scores we updated EDGE prioritisation rankings for all tetrapod species with IUCN Red List assessments, including the first EDGE prioritisation for reptiles. 
Further, we developed criteria for identifying robust priority species, to inform conservation action whilst limiting uncertainty and anticipating future phylogenetic advances. PMID:29641585
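
The abstract does not state the EDGE formula itself; the commonly published formulation is EDGE = ln(1 + ED) + GE × ln(2), with GE mapped from IUCN Red List categories. A minimal sketch under that assumption (the species names and ED values below are invented):

```python
import math

# Red List categories mapped to Global Endangerment (GE) values, following
# the commonly used EDGE formulation (an assumption here; the abstract
# itself does not give the formula): LC=0, NT=1, VU=2, EN=3, CR=4.
GE = {"LC": 0, "NT": 1, "VU": 2, "EN": 3, "CR": 4}

def edge_score(ed, category):
    """EDGE = ln(1 + ED) + GE * ln(2): each step up in threat status
    adds ln(2), i.e. doubles the (1 + ED) term's weight."""
    return math.log(1 + ed) + GE[category] * math.log(2)

# Hypothetical species: (ED in millions of years, Red List category).
species = {"A": (45.6, "CR"), "B": (12.3, "EN"), "C": (60.1, "LC")}
ranked = sorted(species, key=lambda s: edge_score(*species[s]), reverse=True)
```

Note how a highly distinct but non-threatened species ("C") can still rank below less distinct, highly threatened ones: extinction risk and evolutionary distinctiveness trade off inside one score.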

  19. Health, safety, and environmental risk assessment of steel production complex in central Iran using TOPSIS.

    PubMed

    Jozi, S A; Majd, N Moradi

    2014-10-01

This research was carried out with the aim of presenting an environmental management plan for a steel production complex (SPC) in central Iran. Following precise identification of the plant activities as well as the study area, possible sources of environmental pollution and adverse impacts on air quality, water, soil, the biological environment, the socioeconomic and cultural environment, and the health and safety of employees were determined, considering the work processes of the steel complex. Afterwards, noise, wastewater, and air pollution sources were measured. Subsequently, the factors polluting the steel complex were identified by TOPSIS and then prioritized using Excel software. Based on the obtained results, the operation of the furnaces in the hot rolling process (score 1), the effluent derived from the hot rolling process (score 0.565), nonprincipled disposal and dumping of waste within the plant enclosure (score 0.335), and the walking beam process (score 1.483) received the highest priority for air, water, soil, and noise pollution, respectively. In terms of habitats, land cover, and the socioeconomic and cultural environment, closeness to the forest area together with the presence of four groups of wildlife (score 1.106) and the proximity of villages and residential areas to the plant (score 3.771) received the highest priorities, respectively, while vulnerability to occupational accidents (score 2.725) and cutting and welding operations (score 2.134) ranked highest among the health and safety criteria. Finally, strategies for the control of pollution sources were identified, and a training, monitoring, and environmental management plan for the SPC was prepared.
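
TOPSIS itself is a standard multi-criteria method: vector-normalize the decision matrix, apply criterion weights, and rank alternatives by relative closeness to the ideal solution. A minimal sketch, assuming benefit-type criteria only (the alternatives, criteria scores, and weights below are invented, not the paper's data):

```python
import math

def topsis(matrix, weights):
    """Rank alternatives by the TOPSIS closeness coefficient.
    matrix[i][j]: score of alternative i on (benefit) criterion j."""
    m, n = len(matrix), len(matrix[0])
    # Vector normalization of each criterion column, then weighting.
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    best = [max(col) for col in zip(*v)]   # ideal solution
    worst = [min(col) for col in zip(*v)]  # anti-ideal solution
    scores = []
    for row in v:
        d_best = math.sqrt(sum((x - b) ** 2 for x, b in zip(row, best)))
        d_worst = math.sqrt(sum((x - w) ** 2 for x, w in zip(row, worst)))
        scores.append(d_worst / (d_best + d_worst))  # closeness in [0, 1]
    return scores

# Three hypothetical pollution sources scored on three weighted criteria.
scores = topsis([[7, 9, 6], [4, 5, 8], [9, 3, 2]], [0.5, 0.3, 0.2])
```

An alternative that dominates on every criterion scores exactly 1, one dominated on every criterion scores exactly 0; everything else falls in between.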

  20. Does present use of cardiovascular medication reflect elevated cardiovascular risk scores estimated ten years ago? A population based longitudinal observational study

    PubMed Central

    2011-01-01

    Background It is desirable that those at highest risk of cardiovascular disease should have priority for preventive measures, e.g. treatment with prescription drugs to modify their risk. We wanted to investigate to what extent present use of cardiovascular medication (CVM) correlates with cardiovascular risk estimated by three different risk scores (Framingham, SCORE and NORRISK) ten years ago. Methods Prospective longitudinal observational study of 20 252 participants in The Hordaland Health Study born 1950-57, not using CVM in 1997-99. Prescription data obtained from The Norwegian Prescription Database in 2008. Results 26% of men and 22% of women aged 51-58 years had started to use some CVM during the previous decade. As a group, persons using CVM scored significantly higher on the risk algorithms Framingham, SCORE and NORRISK compared to those not treated. 16-20% of men and 20-22% of women with risk scores below the high-risk thresholds for the three risk scores were treated with CVM, while 60-65% of men and 25-45% of women with scores above the high-risk thresholds received no treatment. Among women using CVM, only 2.2% (NORRISK), 4.4% (SCORE) and 14.5% (Framingham) had risk scores above the high-risk values. Low education, poor self-reported general health, muscular pains, mental distress (in females only) and a family history of premature cardiovascular disease correlated with use of CVM. Elevated blood pressure was the single factor most strongly predictive of CVM treatment. Conclusion Prescription of CVM to middle-aged individuals by and large seems to occur independently of estimated total cardiovascular risk, and this applies especially to females. PMID:21366925

  1. Market segmentation using perceived constraints

    Treesearch

    Jinhee Jun; Gerard Kyle; Andrew Mowen

    2008-01-01

    We examined the practical utility of segmenting potential visitors to Cleveland Metroparks using their constraint profiles. Our analysis identified three segments based on their scores on the dimensions of constraints: Other priorities--visitors who scored the highest on 'other priorities' dimension; Highly Constrained--visitors who scored relatively high on...

  2. Failure Mode and Effect Analysis for Delivery of Lung Stereotactic Body Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perks, Julian R., E-mail: julian.perks@ucdmc.ucdavis.edu; Stanic, Sinisa; Stern, Robin L.

    2012-07-15

    Purpose: To improve the quality and safety of our practice of stereotactic body radiation therapy (SBRT), we analyzed the process following the failure mode and effects analysis (FMEA) method. Methods: The FMEA was performed by a multidisciplinary team. For each step in the SBRT delivery process, a potential failure occurrence was derived and three factors were assessed: the probability of each occurrence, the severity if the event occurs, and the probability of detection by the treatment team. A rank of 1 to 10 was assigned to each factor, and then the multiplied ranks yielded the relative risks (risk priority numbers). The failure modes with the highest risk priority numbers were then considered to implement process improvement measures. Results: A total of 28 occurrences were derived, of which nine events scored with significantly high risk priority numbers. The risk priority numbers of the highest ranked events ranged from 20 to 80. These included transcription errors of the stereotactic coordinates and machine failures. Conclusion: Several areas of our SBRT delivery were reconsidered in terms of process improvement, and safety measures, including treatment checklists and a surgical time-out, were added for our practice of gantry-based image-guided SBRT. This study serves as a guide for other users of SBRT to perform FMEA of their own practice.
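
The RPN calculation described above is simply the product of the three 1-10 ranks, followed by a ranking of failure modes. A minimal sketch (the failure modes and the review threshold of 20 are illustrative, loosely echoing the lowest high-risk score the paper reports):

```python
# Each failure mode is scored 1-10 on occurrence (O), severity (S),
# and detectability (D); RPN = O * S * D, as in the FMEA above.
# These failure modes and their ranks are invented for illustration.
failure_modes = {
    "coordinate transcription error": (3, 9, 3),
    "couch position misread":         (2, 8, 2),
    "machine interlock failure":      (1, 7, 4),
}

def rpn(o, s, d):
    """Risk priority number: higher means fix sooner."""
    return o * s * d

# Rank failure modes by RPN and flag those above a review threshold.
ranked = sorted(failure_modes.items(), key=lambda kv: rpn(*kv[1]), reverse=True)
high_risk = [name for name, f in ranked if rpn(*f) >= 20]
```

Because RPN is a product, a rare but severe, hard-to-detect failure can outrank a frequent but benign one, which is exactly the prioritization behavior FMEA is after.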

  3. Setting Priorities in Global Child Health Research Investments: Guidelines for Implementation of the CHNRI Method

    PubMed Central

    Rudan, Igor; Gibson, Jennifer L.; Ameratunga, Shanthi; El Arifeen, Shams; Bhutta, Zulfiqar A.; Black, Maureen; Black, Robert E.; Brown, Kenneth H.; Campbell, Harry; Carneiro, Ilona; Chan, Kit Yee; Chandramohan, Daniel; Chopra, Mickey; Cousens, Simon; Darmstadt, Gary L.; Gardner, Julie Meeks; Hess, Sonja Y.; Hyder, Adnan A.; Kapiriri, Lydia; Kosek, Margaret; Lanata, Claudio F.; Lansang, Mary Ann; Lawn, Joy; Tomlinson, Mark; Tsai, Alexander C.; Webster, Jayne

    2008-01-01

    This article provides detailed guidelines for the implementation of systematic method for setting priorities in health research investments that was recently developed by Child Health and Nutrition Research Initiative (CHNRI). The target audience for the proposed method are international agencies, large research funding donors, and national governments and policy-makers. The process has the following steps: (i) selecting the managers of the process; (ii) specifying the context and risk management preferences; (iii) discussing criteria for setting health research priorities; (iv) choosing a limited set of the most useful and important criteria; (v) developing means to assess the likelihood that proposed health research options will satisfy the selected criteria; (vi) systematic listing of a large number of proposed health research options; (vii) pre-scoring check of all competing health research options; (viii) scoring of health research options using the chosen set of criteria; (ix) calculating intermediate scores for each health research option; (x) obtaining further input from the stakeholders; (xi) adjusting intermediate scores taking into account the values of stakeholders; (xii) calculating overall priority scores and assigning ranks; (xiii) performing an analysis of agreement between the scorers; (xiv) linking computed research priority scores with investment decisions; (xv) feedback and revision. The CHNRI method is a flexible process that enables prioritizing health research investments at any level: institutional, regional, national, international, or global. PMID:19090596
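
Steps (viii)-(xii) above reduce, in the simplest reading of the CHNRI method, to averaging scorer answers per criterion and then combining the per-criterion intermediate scores with stakeholder weights. A minimal sketch of that interpretation (the criteria, weights, and answers are invented):

```python
# A sketch of CHNRI-style scoring: scorers answer each criterion with
# 1 (yes), 0 (no), or 0.5 (undecided); criterion weights represent
# stakeholder values. All numbers below are invented for illustration.
def chnri_priority(answers_by_criterion, weights):
    """answers_by_criterion: {criterion: [scorer answers]}.
    Returns the overall priority score as a weighted mean of
    per-criterion intermediate scores (each in [0, 1])."""
    total_w = sum(weights.values())
    overall = 0.0
    for criterion, answers in answers_by_criterion.items():
        intermediate = sum(answers) / len(answers)  # step (ix)
        overall += weights[criterion] * intermediate  # steps (xi)-(xii)
    return overall / total_w

option = {
    "answerability": [1, 1, 0.5, 1],
    "effectiveness": [1, 0, 1, 0.5],
    "equity":        [0.5, 0.5, 1, 0],
}
score = chnri_priority(option, {"answerability": 1.0, "effectiveness": 1.0, "equity": 1.0})
```

Research options are then ranked by this overall score, and agreement between scorers (step xiii) can be assessed on the raw answer lists.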

  4. Research priorities in medical education at Shiraz University of Medical Sciences: categories and subcategories in the Iranian context

    PubMed Central

    NABEIEI, PARISA; AMINI, MITRA; GHANAVATI, SHIRIN; MARHAMATI, SAADAT

    2016-01-01

    Introduction Research in education is a globally significant field with a relatively short history. Given its importance to Health System Development programs, this study aimed to determine research priorities in medical education, considering their details and functions, and, by identifying barriers to progress in educational research, to make those priorities more actionable through recommended strategies. Methods This is a qualitative-descriptive study conducted in two phases, whose goal was to determine subcategories of research priorities in medical education using the Nominal Group Technique (NGT) and two rounds of the Delphi method. In the first phase, subcategories of research priorities were determined using NGT under the supervision of medical education experts. A questionnaire was then constructed from these subcategories for the two Delphi rounds. Eventually, research priorities were determined by highest score (scores greater than 7 out of 10). Results In the first phase (NGT), 35 priorities in 5 major fields of medical education were identified. In the second phase, the priorities were scored using the Delphi method. Medical ethics and professionalism gained the highest score (7.63±1.26) and educational evaluation the lowest (7.28±1.52). At this stage, 7 items were omitted, but 2 of them were restored after expert revision in the third Delphi round. Conclusion According to the results of the present study and previous studies, the fields of “Learning and Teaching Approaches” and “Medical Ethics and Professionalism” appear to be the most important. Because of financial and resource limitations in our country and the importance of research priorities, it is recommended that universities regularly revisit their research priority determination programs. PMID:26793723
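
The Delphi selection rule described above (retain items whose mean panel score exceeds 7 out of 10, rank the rest out) can be sketched as follows; the items and panel scores below are invented:

```python
# Sketch of a Delphi retention round: each item is scored out of 10
# by a panel; items whose mean exceeds the threshold are retained as
# priorities, the rest are omitted (and may be revisited in a later
# round). Items and scores are illustrative, not the study's data.
def delphi_round(item_scores, threshold=7.0):
    """item_scores: {item: [panelist scores out of 10]}.
    Returns (retained, omitted); retained is sorted by mean, descending."""
    means = {item: sum(s) / len(s) for item, s in item_scores.items()}
    retained = sorted((i for i, m in means.items() if m > threshold),
                      key=means.get, reverse=True)
    omitted = [i for i in means if means[i] <= threshold]
    return retained, omitted

retained, omitted = delphi_round({
    "medical ethics and professionalism": [8, 7, 8, 9],
    "educational evaluation":             [7, 8, 7, 7],
    "low-priority topic":                 [5, 6, 6, 5],
})
```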

  5. National occupational health research priorities, agenda and strategy of Japan: invited report in NORA symposium 2001, USA.

    PubMed

    Araki, Shunichi; Tachi, Masatomo

    2003-01-01

    An invited report on national occupational health research priorities, agenda and strategy of Japan was delivered in the NORA (National Occupational Research Agenda) Symposium 2001, USA. The third NORA Symposium was held by the US National Institute for Occupational Safety and Health (NIOSH) in Washington DC on June 27, 2001. The national conference in Japan entitled "Conference on Occupational Health Research Strategies in the 21st Century" was organized by the Japanese Ministry of Labour (Currently, Ministry of Health, Labour and Welfare) in the years 1998-2001, and the national occupational health research agenda and strategy for the next decade in Japan was identified. A total of 50 Conference members, i.e., representatives from various fields of occupational health in Japan, ranked 58 comprehensive research topics, yielding short-term (5-year) and long-term (6-10 year) priority research topics. Overall (10-year) priority research topics were calculated by combining the short-term and long-term priority scores. Together with the ranking by 145 extramural occupational health specialists, it was identified that work stress (i.e., one of the 58 research topics) was the first overall priority research topic for the next 10 years in Japan. Three other topics, i.e., elderly workers, women workers and maternity protection, and mental health and quality of work and life, were the second group of priority topics; and hazard and risk assessment and biological effect index were the third priority group. Based on the scores for the short-term and long-term priority research topics, all 58 research topics were classified into three key research areas with 18 key research issues (National Occupational Health Research Agenda, NOHRA). Finally, eight implementation measures of national strategy for the Japanese Government to promote occupational health research were introduced.

  6. Prioritizing Avian Species for Their Risk of Population-Level Consequences from Wind Energy Development

    PubMed Central

    Beston, Julie A.; Diffendorfer, Jay E.; Loss, Scott R.; Johnson, Douglas H.

    2016-01-01

    Recent growth in the wind energy industry has increased concerns about its impacts on wildlife populations. Direct impacts of wind energy include bird and bat collisions with turbines whereas indirect impacts include changes in wildlife habitat and behavior. Although many species may withstand these effects, species that are long-lived with low rates of reproduction, have specialized habitat preferences, or are attracted to turbines may be more prone to declines in population abundance. We developed a prioritization system to identify the avian species most likely to experience population declines from wind facilities based on their current conservation status and their expected risk from turbines. We developed 3 metrics of turbine risk that incorporate data on collision fatalities at wind facilities, population size, life history, species’ distributions relative to turbine locations, number of suitable habitat types, and species’ conservation status. We calculated at least 1 measure of turbine risk for 428 avian species that breed in the United States. We then simulated 100,000 random sets of cutoff criteria (i.e., the metric values used to assign species to different priority categories) for each turbine risk metric and for conservation status. For each set of criteria, we assigned each species a priority score and calculated the average priority score across all sets of criteria. Our prioritization system highlights both species that could potentially experience population decline caused by wind energy and species at low risk of population decline. For instance, several birds of prey, such as the long-eared owl, ferruginous hawk, Swainson’s hawk, and golden eagle, were at relatively high risk of population decline across a wide variety of cutoff values, whereas many passerines were at relatively low risk of decline. 
This prioritization system is a first step that will help researchers, conservationists, managers, and industry target future study and management activity. PMID:26963254
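
The cutoff-simulation step above can be sketched as follows, assuming for illustration a single turbine-risk metric on [0, 1] and three priority categories; the species and metric values are invented:

```python
import random

# Sketch of the cutoff-simulation idea: draw many random sets of cutoff
# values, assign each species a priority category per draw, and average
# the resulting category scores. Species, metric values, and the use of
# a single metric with three categories are illustrative assumptions.
def average_priority(metric_by_species, n_sets=100_000, lo=0.0, hi=1.0, seed=0):
    rng = random.Random(seed)
    totals = {s: 0 for s in metric_by_species}
    for _ in range(n_sets):
        c1, c2 = sorted(rng.uniform(lo, hi) for _ in range(2))
        for species, value in metric_by_species.items():
            # 3 = highest priority category, 1 = lowest.
            totals[species] += 3 if value >= c2 else 2 if value >= c1 else 1
    return {s: t / n_sets for s, t in totals.items()}

avg = average_priority({"golden eagle": 0.9, "passerine sp.": 0.1}, n_sets=1000)
```

Averaging over many random cutoff sets makes the final priority score robust to any single, arbitrary choice of category boundaries, which is the point of the authors' 100,000-set simulation.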

  9. Use of a systematic risk analysis method (FMECA) to improve quality in a clinical laboratory procedure.

    PubMed

    Serafini, A; Troiano, G; Franceschini, E; Calzoni, P; Nante, N; Scapellato, C

    2016-01-01

    Risk management is a set of actions to recognize or identify risks and errors and their consequences, and to take steps to counter them. The aim of our study was to apply FMECA (Failure Mode, Effects and Criticality Analysis) to the Activated Protein C resistance (APCR) test in order to detect and avoid mistakes in this process. We created a team, and the process was divided into phases and subphases. For each phase we calculated the probability of occurrence (O) of an error, the detectability score (D) and the severity (S); the product of these three indexes yields the RPN (Risk Priority Number). Phases with a higher RPN need corrective actions with a higher priority. The calculation of the RPN showed that more than 20 activities have a score higher than 150 and need important preventive actions; 8 have a score between 100 and 150. Only 23 actions obtained an acceptable score lower than 100. This was one of the first experiences of applying FMECA analysis to a laboratory process, and the first to apply this technique to the identification of factor V Leiden; our results confirm that FMECA can be a simple, powerful and useful tool in risk management that helps to quickly identify criticalities in a laboratory process.

  10. A comparison of two prospective risk analysis methods: Traditional FMEA and a modified healthcare FMEA.

    PubMed

    Rah, Jeong-Eun; Manger, Ryan P; Yock, Adam D; Kim, Gwe-Ya

    2016-12-01

    To examine the abilities of traditional failure mode and effects analysis (FMEA) and modified healthcare FMEA (m-HFMEA) scoring methods by comparing their degree of congruence in identifying high-risk failures, the authors applied the two prospective quality management methods to surface image guided, linac-based radiosurgery (SIG-RS). For the traditional FMEA, decisions on how to improve an operation were based on the risk priority number (RPN), the product of three indices: occurrence, severity, and detectability. The m-HFMEA approach utilized two indices, severity and frequency, with a risk inventory matrix divided into four categories: very low, low, high, and very high. For high-risk events, an additional evaluation was performed: based upon the criticality of the process, it was decided whether additional safety measures were needed and what they should comprise. The two methods were independently compared to determine whether the results and rated risks matched. The authors' results showed an agreement of 85% between the FMEA and m-HFMEA approaches for the top 20 risks of SIG-RS-specific failure modes. The main differences between the two approaches were the distribution of the values and the observation that failure modes (52, 54, 154) with high m-HFMEA scores do not necessarily have high FMEA-RPN scores. In the m-HFMEA analysis, once a risk score is determined, either the established HFMEA Decision Tree™ is applied or the failure mode is investigated more thoroughly. m-HFMEA is inductive, because it requires the identification of consequences from causes, and semi-quantitative, since it allows the prioritization of high risks and mitigation measures. It is therefore a useful prospective risk analysis tool for radiotherapy.
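
Unlike the multiplicative RPN, the m-HFMEA combines its two indices through a risk matrix. A minimal sketch assuming 1-4 ranks and illustrative band boundaries (the abstract does not give the paper's actual matrix):

```python
# Sketch of two-index m-HFMEA scoring: severity and frequency are
# combined through a risk-inventory matrix with four categories.
# The 1-4 ranks and band boundaries below are assumptions, not the
# paper's actual matrix.
def mhfmea_category(severity, frequency):
    """severity, frequency: ranks 1 (lowest) to 4 (highest);
    returns one of the four risk-inventory categories."""
    product = severity * frequency
    if product >= 12:
        return "very high"
    if product >= 8:
        return "high"
    if product >= 4:
        return "low"
    return "very low"

# High-risk events then get the additional (decision-tree) evaluation.
needs_further_evaluation = mhfmea_category(4, 3) in ("high", "very high")
```

Because the matrix bands are coarse, two failure modes with very different RPNs can land in the same m-HFMEA category, which is one source of the score disagreements the paper reports.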

  11. Research priorities about stoma-related quality of life from the perspective of people with a stoma: A pilot survey.

    PubMed

    Hubbard, Gill; Taylor, Claire; Beeken, Becca; Campbell, Anna; Gracey, Jackie; Grimmett, Chloe; Fisher, Abi; Ozakinci, Gozde; Slater, Sarah; Gorely, Trish

    2017-12-01

    There is a recognized need to include patients in setting research priorities. Research priorities identified by people with a stoma are rarely elicited. To improve the quality of life of people with a stoma through use of evidence-based practice based on research priorities set by patients. Online pilot survey publicized in 2016 via United Kingdom stoma charities. People ranked nine stoma-related quality of life topics in order of research priority. People 16 years of age and over who currently have or have had a stoma for treatment for any medical condition. Distributions of the priority scores for each of the nine research topics were examined. Group differences were explored using either the Mann-Whitney U-test or the Kruskal-Wallis test depending on the number of groups. In total, 225 people completed the survey. The most important research priority was pouch leak problems and stoma bag/appliance problems followed by hernia risk. There were statistically significant differences in ranking research priorities between males and females, age, underlying disease that led to a stoma, stoma type and length of time with a stoma. People with a stoma are willing to engage in and set research priorities. The results should contribute towards future research about setting the research agenda for the study of stoma-related concerns that impact quality of life. © 2017 The Authors Health Expectations Published by John Wiley & Sons Ltd.

  12. Research priority setting for integrated early child development and violence prevention (ECD+) in low and middle income countries: An expert opinion exercise.

    PubMed

    Tomlinson, Mark; Jordans, Mark; MacMillan, Harriet; Betancourt, Theresa; Hunt, Xanthe; Mikton, Christopher

    2017-10-01

    Child development in low and middle income countries (LMIC) is compromised by multiple risk factors. Reducing children's exposure to harmful events is essential for early childhood development (ECD). In particular, preventing violence against children - a highly prevalent risk factor that negatively affects optimal child development - should be an intervention priority. We used the Child Health and Nutrition Initiative (CHNRI) method for the setting of research priorities in integrated Early Childhood Development and violence prevention programs (ECD+). An expert group was identified and invited to systematically list and score research questions. A total of 186 stakeholders were asked to contribute five research questions each, and contributions were received from 81 respondents. These were subsequently evaluated using a set of five criteria: answerability; effectiveness; feasibility and/or affordability; applicability and impact; and equity. Of the 400 questions generated, a composite group of 50 were scored by 55 respondents. The highest scoring research questions related to the training of Community Health Workers (CHWs) to deliver ECD+ interventions effectively and whether ECD+ interventions could be integrated within existing delivery platforms such as HIV, nutrition or mental health platforms. The priority research questions can direct new research initiatives, mainly in focusing on the effectiveness of an ECD+ approach, as well as on service delivery questions. To the best of our knowledge, this is the first systematic exercise of its kind in the field of ECD+. The findings from this research priority setting exercise can help guide donors and other development actors towards funding priorities for important future research related to ECD and violence prevention. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  13. Range-wide network of priority areas for greater sage-grouse - a design for conserving connected distributions or isolating individual zoos?

    USGS Publications Warehouse

    Crist, Michele R.; Knick, Steven T.; Hanser, Steven E.

    2015-09-08

    The network of areas delineated in 11 Western States for prioritizing management of greater sage-grouse (Centrocercus urophasianus) represents a grand experiment in conservation biology and reserve design. We used centrality metrics from social network theory to gain insights into how this priority area network might function. The network was highly centralized. Twenty of 188 priority areas accounted for 80 percent of the total centrality scores. These priority areas, characterized by large size and a central location in the range-wide distribution, are strongholds for greater sage-grouse populations and also might function as sources. Mid-ranking priority areas may serve as stepping stones because of their location between large central and smaller peripheral priority areas. The current network design and conservation strategy has risks. Almost one-half (n = 93) of the priority areas combined contributed less than 1 percent of the cumulative centrality scores for the network. These priority areas individually are likely too small to support viable sage-grouse populations within their boundary. Without habitat corridors to connect small priority areas either to larger priority areas or as a clustered group within the network, their isolation could lead to loss of sage-grouse within these regions of the network.
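
The concentration of centrality the authors report (20 of 188 areas holding 80 percent of the total score) can be illustrated with a simple degree-centrality calculation on a toy hub-and-spoke network; the network below is invented, and the paper may use other centrality metrics:

```python
# Sketch of centrality concentration in a priority-area network:
# compute degree centrality for each node and ask how few top-ranked
# nodes account for a given share of the total. The toy network and
# the choice of plain degree centrality are illustrative assumptions.
def degree_centrality(edges):
    deg = {}
    for a, b in edges:
        deg[a] = deg.get(a, 0) + 1
        deg[b] = deg.get(b, 0) + 1
    return deg

def nodes_for_share(centrality, share=0.8):
    """Smallest number of top-ranked nodes whose centrality
    sums to at least `share` of the network total."""
    total = sum(centrality.values())
    running, count = 0.0, 0
    for value in sorted(centrality.values(), reverse=True):
        running += value
        count += 1
        if running >= share * total:
            return count
    return count

# One hub connected to eight peripheral areas, plus two side links.
hub_edges = [("hub", x) for x in "abcdefgh"] + [("a", "b"), ("c", "d")]
deg = degree_centrality(hub_edges)
k = nodes_for_share(deg, 0.8)
```

In a centralized network like this toy one, a small minority of nodes carries most of the centrality, mirroring the 20-of-188 pattern in the sage-grouse network.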

  14. Quantifying C-17 Aircrew Training Priorities

    DTIC Science & Technology

    2015-06-19

    mentally process the situation. Skill-Probability-Risk (SPR) Score Once the data from the SS, SP, and SR has been calculated, an SPR Score can...discrepancies between the SPR Rank-Score and the Vol 1 Rank-Score. Figure 4.8 illustrates the differences in scores between the two processes . Ideally, the...tables. The extreme lower left and extreme upper right of the chart are areas of major discrepancy between the two processes and potentially provide the

  15. The Future of Software Engineering for High Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pope, G

    DOE ASCR requested that from May through mid-July 2015 a study group identify issues and recommend solutions from a software engineering perspective transitioning into the next generation of High Performance Computing. The approach used was to ask some of the DOE complex experts who will be responsible for doing this work to contribute to the study group. The technique used was to solicit elevator speeches: a short and concise write up done as if the author was a speaker with only a few minutes to convince a decision maker of their top issues. Pages 2-18 contain the original texts of the contributed elevator speeches and end notes identifying the 20 contributors. The study group also ranked the importance of each topic, and those scores are displayed with each topic heading. A perfect score (and highest priority) is three, two is medium priority, and one is lowest priority. The highest scoring topic areas were software engineering and testing resources; the lowest scoring area was compliance to DOE standards. The following two paragraphs are an elevator speech summarizing the contributed elevator speeches. Each sentence or phrase in the summary is hyperlinked to its source via a numeral embedded in the text. A risk one liner has also been added to each topic to allow future risk tracking and mitigation.

  16. Mortality determinants and prediction of outcome in high risk newborns.

    PubMed

    Dalvi, R; Dalvi, B V; Birewar, N; Chari, G; Fernandez, A R

    1990-06-01

    The aim of this study was to determine independent patient-related predictors of mortality in high risk newborns admitted at our centre. The study population comprised 100 consecutive newborns each, from the premature unit (PU) and sick baby care unit (SBCU), respectively. Thirteen high risk factors (variables) for each of the two units, were entered into a multivariate regression analysis. Variables with independent predictive value for poor outcome (i.e., death) in PU were, weight less than 1 kg, hyaline membrane disease, neurologic problems, and intravenous therapy. High risk factors in SBCU included, blood gas abnormality, bleeding phenomena, recurrent convulsions, apnea, and congenital anomalies. Identification of these factors guided us in defining priority areas for improvement in our system of neonatal care. Also, based on these variables a simple predictive score for outcome was constructed. The prediction equation and the score were cross-validated by applying them to a 'test-set' of 100 newborns each for PU and SBCU. Results showed a comparable sensitivity, specificity and error rate.
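
Turning multivariate regression results into a simple bedside score, as described above, typically means assigning points to each independent predictor and comparing the total to a cutoff. A sketch with invented factors, points, and cutoff (not the study's actual score):

```python
# Sketch of a simple additive predictive score built from independent
# predictors identified by multivariate regression. The factors, point
# values, and cutoff below are hypothetical, not the study's score.
PU_POINTS = {
    "weight_under_1kg":         2,
    "hyaline_membrane_disease": 2,
    "neurologic_problems":      1,
    "intravenous_therapy":      1,
}

def predictive_score(present_factors, points=PU_POINTS):
    """Sum the points for every risk factor present in the newborn."""
    return sum(points[f] for f in present_factors)

def poor_outcome_predicted(present_factors, cutoff=3):
    """Predict poor outcome when the total score reaches the cutoff."""
    return predictive_score(present_factors) >= cutoff
```

A score like this is cross-validated, as the authors did, by applying the fixed points and cutoff to a held-out "test set" and checking sensitivity, specificity, and error rate.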

  17. A consensus exercise identifying priorities for research into clinical effectiveness among children's orthopaedic surgeons in the United Kingdom.

    PubMed

    Perry, D C; Wright, J G; Cooke, S; Roposch, A; Gaston, M S; Nicolaou, N; Theologis, T

    2018-05-01

    Aims: High-quality clinical research in children's orthopaedic surgery has lagged behind other surgical subspecialties. This study used a consensus-based approach to identify research priorities for clinical trials in children's orthopaedics. Methods: A modified Delphi technique was used, involving an initial scoping survey, a two-round Delphi process and an expert panel formed of members of the British Society of Children's Orthopaedic Surgery. The survey was conducted amongst orthopaedic surgeons treating children in the United Kingdom and Ireland. Results: A total of 86 clinicians contributed to both rounds of the Delphi process, scoring priorities from one (low priority) to five (high priority). Elective topics were ranked higher than those relating to trauma, with the top ten elective research questions all scoring higher than the top trauma question. Ten elective and five trauma research priorities were identified, with the three highest-ranked questions relating to the treatment of slipped capital femoral epiphysis (mean score 4.6/5), Perthes' disease (4.5) and bone infection (4.5). Conclusion: This consensus-based research agenda will guide surgeons, academics and funders in improving the evidence in children's orthopaedic surgery and encourage the development of multicentre clinical trials. Cite this article: Bone Joint J 2018;100-B:680-4.

  18. Application of failure mode and effect analysis in managing catheter-related blood stream infection in intensive care unit

    PubMed Central

    Li, Xixi; He, Mei; Wang, Haiyan

    2017-01-01

    In this study, failure mode and effect analysis (FMEA), a proactive tool, was applied to reduce errors in the process that begins with assessment of the patient and ends with treatment of complications. The aim of this study was to assess whether FMEA implementation would significantly reduce the incidence of catheter-related bloodstream infections (CRBSIs) in the intensive care unit. An FMEA team of 15 medical staff from different departments was recruited and trained. Their main responsibility was to analyze and score all possible processes of central venous catheterization failure. Failure modes with a risk priority number (RPN) ≥ 100 (the top 10 RPN scores) were deemed high-priority risks, meaning that they needed immediate corrective action. After the modifications were implemented, the resulting RPNs were compared with the previous ones. A centralized nursing care system was designed. A total of 25 failure modes were identified. The high-priority risks were "Unqualified medical device sterilization" (RPN, 337), "leukopenia, very low immunity" (RPN, 222), and "Poor hand hygiene; basic diseases" (RPN, 160). The corrective measures that we took allowed a decrease in the RPNs, especially for the high-priority risks. The maximum reduction was approximately 80%, observed for the failure mode "Not creating the maximal barrier for patient." The average incidence of CRBSIs was reduced from 5.19% to 1.45%, with 3 months of a 0% infection rate. FMEA can effectively reduce the incidence of CRBSIs, improve the safety of central venous catheterization, decrease overall medical expenses, and improve nursing quality. PMID:29390515
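    The RPN arithmetic described in this record is simple enough to sketch in a few lines. The following Python snippet is a minimal illustration, not the authors' actual tool: the failure-mode names and the ≥ 100 action threshold come from the abstract, but the individual severity/occurrence/detectability ratings are invented for the example.

```python
# Minimal FMEA risk-priority-number (RPN) sketch.
# RPN = severity * occurrence * detectability; per the abstract, modes with
# RPN >= 100 needed immediate corrective action. Mode names follow the
# abstract; the individual S/O/D ratings are hypothetical.

failure_modes = {
    # name: (severity, occurrence, detectability), each rated 1-10
    "Unqualified medical device sterilization": (8, 6, 7),
    "Leukopenia, very low immunity": (9, 5, 5),
    "Poor hand hygiene": (8, 5, 4),
    "Catheter dressing loose": (4, 3, 3),
}

RPN_THRESHOLD = 100

def rpn(severity, occurrence, detectability):
    """Risk priority number: the product of the three 1-10 ratings."""
    return severity * occurrence * detectability

# Keep only the modes that cross the action threshold.
high_priority = {
    name: rpn(*ratings)
    for name, ratings in failure_modes.items()
    if rpn(*ratings) >= RPN_THRESHOLD
}

for name, score in sorted(high_priority.items(), key=lambda kv: -kv[1]):
    print(f"{name}: RPN={score}")
```

    Re-running the same calculation with post-mitigation ratings, as the study did, shows directly which corrective actions pushed a mode back under the threshold.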

  19. Perspectives of comparing risks of environmental carcinogens

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perera, F.; Boffetta, P.

    1988-10-19

    In 1987, investigators concluded that the risks of man-made industrial carcinogens and pesticides (outside of the workplace) are trivial compared with the risks of naturally occurring carcinogens found mostly in the diet. They used a ranking system based on human exposure and rodent potency (HERP) data to arrive at this conclusion. As a result, they recommended that regulatory agencies, such as the Environmental Protection Agency and the Food and Drug Administration, base their priorities in this area on the HERP system. We analyzed the assumptions and data set upon which the HERPs were based, concluding that such a simplified approach to setting public health policy is inappropriate given the underlying uncertainties. However, we note that when comparisons are consistently based on estimates of average daily exposure to common carcinogens, the HERP scores of many man-made pollutants are comparable to those of naturally occurring carcinogens in the diet. (158 references)

  20. Stakeholder value-linked sustainability assessment: Evaluating remedial alternatives for the Portland Harbor Superfund Site, Portland, Oregon, USA.

    PubMed

    Apitz, Sabine E; Fitzpatrick, Anne G; McNally, Amanda; Harrison, David; Coughlin, Conor; Edwards, Deborah A

    2018-01-01

    Regulatory decisions on remediation should consider affected communities' needs and values, and how these might be impacted by remedial options; this process requires that diverse stakeholders are able to engage in a transparent consideration of value trade-offs and of the distribution of risks and benefits associated with remedial actions and outcomes. The Stakeholder Values Assessment (SVA) tool was developed to evaluate remedial impacts on environmental quality, economic viability, and social equity in the context of stakeholder values and priorities. Stakeholder values were linked to the pillars of sustainability and also to a range of metrics to evaluate how sediment remediation affects these values. Sediment remedial alternatives proposed by the US Environmental Protection Agency (USEPA) for the Portland Harbor Superfund Site were scored for each metric, based upon data provided in published feasibility study (FS) documents. Metric scores were aggregated to generate scores for each value; these were then aggregated to generate scores for each pillar of sustainability. In parallel, the inferred priorities (in terms of regional remediation, restoration, planning, and development) of diverse stakeholder groups (SGs) were used to evaluate the sensitivity and robustness of the values-based sustainability assessment to diverse SG priorities. This approach, which addresses social indicators of impact and then integrates them with indicators of environmental and economic impacts, goes well beyond the Comprehensive Environmental Response, Compensation and Liability Act's (CERCLA) 9 criteria for evaluating remedial alternatives because it evaluates how remedial alternatives might be ranked in terms of the diverse values and priorities of stakeholders. This approach identified trade-offs and points of potential contention, providing a systematic, semiquantitative, transparent valuation tool that can be used in community engagement. 
Integr Environ Assess Manag 2018;14:43-62. © 2017 The Authors. Integrated Environmental Assessment and Management published by Wiley Periodicals, Inc. on behalf of Society of Environmental Toxicology & Chemistry (SETAC).

  1. Complementarity and Area-Efficiency in the Prioritization of the Global Protected Area Network.

    PubMed

    Kullberg, Peter; Toivonen, Tuuli; Montesino Pouzols, Federico; Lehtomäki, Joona; Di Minin, Enrico; Moilanen, Atte

    2015-01-01

    Complementarity and cost-efficiency are widely used principles for protected area network design. Despite their wide use and robust theoretical underpinnings, their effects on the performance and patterns of priority areas are rarely studied in detail. Here we compare two approaches for identifying management priority areas inside the global protected area network: 1) a scoring-based approach, used in a recently published analysis, and 2) a spatial prioritization method that accounts for complementarity and area-efficiency. Using the same IUCN species distribution data, the complementarity method found an equal-area set of priority areas covering, on average, double the species ranges covered by the scoring-based approach. The complementarity set also had 72% more species with their full ranges covered, and left entirely uncovered only half as many species as the scoring approach. Protected areas in our complementarity-based solution were on average smaller and geographically more scattered. The large difference between the two solutions highlights the need for critical thinking about the choice of prioritization method. According to our analysis, accounting for complementarity and area-efficiency can lead to considerable improvements when setting management priorities for the global protected area network.
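    The difference between the two prioritization approaches can be shown with a toy example. The sketch below, with entirely hypothetical sites and species, contrasts a scoring rule (pick the most species-rich sites) with a greedy complementarity rule (pick the site that adds the most not-yet-covered species); it is a simplification for illustration, not the spatial prioritization software used in the study.

```python
# Toy contrast of scoring-based vs complementarity-based site selection.
# Sites and species are hypothetical; the point is that a greedy
# complementarity rule can cover more species with the same number of sites.

sites = {
    "A": {"s1", "s2", "s3", "s4"},
    "B": {"s1", "s2", "s3"},   # rich, but redundant with A
    "C": {"s5", "s6"},         # poorer, but complementary
    "D": {"s7"},
}

def scoring_selection(sites, k):
    """Pick the k sites with the highest species richness (no complementarity)."""
    return sorted(sites, key=lambda s: len(sites[s]), reverse=True)[:k]

def complementarity_selection(sites, k):
    """Greedily pick the site adding the most not-yet-covered species."""
    chosen, covered = [], set()
    for _ in range(k):
        best = max(sites, key=lambda s: len(sites[s] - covered))
        chosen.append(best)
        covered |= sites[best]
    return chosen

k = 2
by_score = scoring_selection(sites, k)         # ["A", "B"]
by_comp = complementarity_selection(sites, k)  # ["A", "C"]
print(len(set().union(*(sites[s] for s in by_score))))  # 4 species covered
print(len(set().union(*(sites[s] for s in by_comp))))   # 6 species covered
```

    With the same budget of two sites, scoring covers 4 species while the complementarity rule covers 6, mirroring the direction of the study's result.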

  2. Supporting Risk Assessment: Accounting for Indirect Risk to Ecosystem Components

    PubMed Central

    Mach, Megan E.; Martone, Rebecca G.; Singh, Gerald G.; O, Miriam; Chan, Kai M. A.

    2016-01-01

    The multi-scalar complexity of social-ecological systems makes it challenging to quantify impacts from human activities on ecosystems, inspiring risk-based approaches to assessments of potential effects of human activities on valued ecosystem components. Risk assessments do not commonly include the risk from indirect effects as mediated via habitat and prey. In this case study from British Columbia, Canada, we illustrate how such “indirect risks” can be incorporated into risk assessments for seventeen ecosystem components. We ask whether (i) the addition of indirect risk changes the at-risk ranking of the seventeen ecosystem components and if (ii) risk scores correlate with trophic prey and habitat linkages in the food web. Even with conservative assumptions about the transfer of impacts or risks from prey species and habitats, the addition of indirect risks in the cumulative risk score changes the ranking of priorities for management. In particular, resident orca, Steller sea lion, and Pacific herring all increase in relative risk, more closely aligning these species with their “at-risk status” designations. Risk assessments are not a replacement for impact assessments, but—by considering the potential for indirect risks as we demonstrate here—they offer a crucial complementary perspective for the management of ecosystems and the organisms within. PMID:27632287

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mossahebi, S; Feigenberg, S; Nichols, E

    Purpose: GammaPod™, the first stereotactic radiotherapy device for early-stage breast cancer treatment, has recently been installed and commissioned at our institution. A multidisciplinary working group applied the failure mode and effects analysis (FMEA) approach to perform a risk analysis. Methods: FMEA was applied to the GammaPod™ treatment process by: 1) generating process maps for each stage of treatment; 2) identifying potential failure modes and outlining their causes and effects; and 3) scoring the potential failure modes using the risk priority number (RPN) system, based on the product of severity, frequency of occurrence, and detectability (each ranging 1-10). An RPN higher than 150 was set as the threshold above which a failure mode was considered of more than minimal concern. For these high-risk failure modes, quality assurance procedures and risk control techniques were proposed. A new set of severity, occurrence, and detectability values was then re-assessed in the presence of the suggested mitigation strategies. Results: In the single-day image-and-treat workflow, 19, 22, and 27 sub-processes were identified for the simulation, treatment planning, and delivery stages, respectively. During the simulation stage, 38 potential failure modes were found, with RPN scores ranging from 9 to 392. In treatment planning, 34 potential failure modes were analyzed, with scores ranging from 16 to 200. For the treatment delivery stage, 47 potential failure modes were found, with RPN scores ranging from 16 to 392. The most critical failure modes were breast-cup pressure loss and incorrect target localization due to patient upper-body alignment inaccuracies. After the recommended actions, the final RPN scores of these failure modes were assessed to be below 150. Conclusion: The FMEA risk analysis technique was applied to the treatment process of GammaPod™, a new stereotactic radiotherapy technology. 
Application of systematic risk analysis methods is projected to lead to improved quality of GammaPod™ treatments. Ying Niu and Cedric Yu are affiliated with Xcision Medical Systems.
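    As a rough illustration of the re-scoring step this record describes, the snippet below recomputes RPNs for the two most critical failure modes before and after mitigation. The mode names and the 150 action threshold come from the abstract; all severity/occurrence/detectability ratings are invented.

```python
# Hypothetical FMEA re-scoring after mitigation (ratings invented).
# Per the abstract, failure modes with RPN > 150 required mitigation,
# after which severity/occurrence/detectability were re-assessed.

ACTION_THRESHOLD = 150

def rpn(severity, occurrence, detectability):
    """Risk priority number: product of the three 1-10 ratings."""
    return severity * occurrence * detectability

# (S, O, D) before and after mitigation; mode names from the abstract,
# numbers invented for illustration.
modes = {
    "Breast-cup pressure loss":      {"before": (8, 7, 7), "after": (8, 3, 4)},
    "Incorrect target localization": {"before": (9, 6, 7), "after": (9, 3, 5)},
}

for name, ratings in modes.items():
    before = rpn(*ratings["before"])
    after = rpn(*ratings["after"])
    assert before > ACTION_THRESHOLD >= after  # mitigation brings RPN under 150
    print(f"{name}: {before} -> {after}")
```

    Note that severity typically cannot be mitigated away; the post-mitigation drop comes from lower occurrence and better detectability, which is why the "after" tuples keep the same severity rating.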

  4. Risk-Screening Environmental Indicators (RSEI)

    EPA Pesticide Factsheets

    EPA's Risk-Screening Environmental Indicators (RSEI) is a geographically based model that helps policy makers and communities explore data on releases of toxic substances from industrial facilities reporting to EPA's Toxics Release Inventory (TRI). By analyzing TRI information together with simplified risk factors, such as the amount of chemical released, its fate and transport through the environment, its relative toxicity, and the number of people potentially exposed, RSEI calculates a numeric score, which is designed to be compared only to other scores calculated by RSEI. Because it is designed as a screening-level model, RSEI uses worst-case assumptions about toxicity and potential exposure where data are lacking, and also uses simplifying assumptions to reduce the complexity of the calculations. A more refined assessment is required before any conclusions about health impacts can be drawn. RSEI is used to establish priorities for further investigation and to look at changes in potential impacts over time. Users can save resources by conducting preliminary analyses with RSEI.

  5. Application of failure mode and effect analysis in managing catheter-related blood stream infection in intensive care unit.

    PubMed

    Li, Xixi; He, Mei; Wang, Haiyan

    2017-12-01

    In this study, failure mode and effect analysis (FMEA), a proactive tool, was applied to reduce errors in the process that begins with assessment of the patient and ends with treatment of complications. The aim of this study was to assess whether FMEA implementation would significantly reduce the incidence of catheter-related bloodstream infections (CRBSIs) in the intensive care unit. An FMEA team of 15 medical staff from different departments was recruited and trained. Their main responsibility was to analyze and score all possible processes of central venous catheterization failure. Failure modes with a risk priority number (RPN) ≥ 100 (the top 10 RPN scores) were deemed high-priority risks, meaning that they needed immediate corrective action. After the modifications were implemented, the resulting RPNs were compared with the previous ones. A centralized nursing care system was designed. A total of 25 failure modes were identified. The high-priority risks were "Unqualified medical device sterilization" (RPN, 337), "leukopenia, very low immunity" (RPN, 222), and "Poor hand hygiene; basic diseases" (RPN, 160). The corrective measures that we took allowed a decrease in the RPNs, especially for the high-priority risks. The maximum reduction was approximately 80%, observed for the failure mode "Not creating the maximal barrier for patient." The average incidence of CRBSIs was reduced from 5.19% to 1.45%, with 3 months of a 0% infection rate. FMEA can effectively reduce the incidence of CRBSIs, improve the safety of central venous catheterization, decrease overall medical expenses, and improve nursing quality. Copyright © 2017 The Authors. Published by Wolters Kluwer Health, Inc. All rights reserved.

  6. Comparing the sustainability impacts of solar thermal and natural gas combined cycle for electricity production in Mexico: Accounting for decision makers' priorities

    NASA Astrophysics Data System (ADS)

    Rodríguez-Serrano, Irene; Caldés, Natalia; Oltra, Christian; Sala, Roser

    2017-06-01

    The aim of this paper is to conduct a comprehensive sustainability assessment of electricity generation with two alternative technologies by estimating their economic, environmental and social impacts through the "Framework for Integrated Sustainability Assessment" (FISA). Based on a Multiregional Input Output (MRIO) model linked to a social risk database (the Social Hotspot Database), the framework accounts for up to fifteen impacts across the three sustainability pillars along the supply chain of electricity production from Solar Thermal Electricity (STE) and Natural Gas Combined Cycle (NGCC) technologies in Mexico. Except for value creation, results show larger negative impacts for NGCC, particularly in the environmental pillar. Next, these impacts are transformed into "Aggregated Sustainability Endpoints" (ASE points) as a way to support decision making in selecting the most sustainable project. The ASE points obtained are then compared to the points weighted by the reported priorities of Mexican decision makers in the energy sector, obtained from a questionnaire survey. The comparison shows that NGCC receives a negative score 1.94 times worse than STE's; after incorporating decision makers' priorities, the ratio increases to 2.06, due to the relevance given to environmental impacts such as photochemical oxidant formation and climate change potential, as well as social risks such as human rights risks.
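    The effect of folding decision-maker priorities into the aggregated scores can be sketched numerically. The snippet below is a hypothetical illustration of weighted aggregation, not the FISA methodology itself; all impact values and weights are invented, chosen only so that the unweighted ratio lands near the reported 1.94.

```python
# Invented numbers illustrating priority-weighted aggregation of pillar
# scores (more negative = worse impact). Not the actual FISA data.

impacts = {
    "STE":  {"economic": -1.0, "environmental": -1.5, "social": -1.2},
    "NGCC": {"economic": -1.1, "environmental": -4.2, "social": -1.9},
}

equal_w  = {"economic": 1 / 3, "environmental": 1 / 3, "social": 1 / 3}
survey_w = {"economic": 0.25, "environmental": 0.45, "social": 0.30}  # env. emphasized

def aggregate(scores, weights):
    """Weighted sum of pillar scores (weights assumed to sum to 1)."""
    return sum(scores[pillar] * weights[pillar] for pillar in weights)

for weights, label in [(equal_w, "unweighted"), (survey_w, "priority-weighted")]:
    ratio = aggregate(impacts["NGCC"], weights) / aggregate(impacts["STE"], weights)
    print(f"{label}: NGCC/STE score ratio = {ratio:.2f}")
```

    Because the hypothetical survey weights emphasize the environmental pillar, where NGCC fares worst, the NGCC/STE ratio grows after weighting, which is the qualitative pattern the paper reports.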

  7. Perceived usability and use of custom-made footwear in diabetic patients at high risk for foot ulceration.

    PubMed

    Arts, Mark L J; de Haart, Mirjam; Bus, Sicco A; Bakker, Jan P J; Hacking, Hub G A; Nollet, Frans

    2014-04-01

    To assess the perceived usability and use of custom-made footwear in diabetic patients who are at high risk for foot ulceration, and to elucidate the determinants of usability and use. Survey. A total of 153 patients with diabetes, peripheral neuropathy, prior plantar foot ulceration and newly prescribed custom-made footwear, recruited from 10 Dutch multidisciplinary foot clinics. The Questionnaire of Usability Evaluation was used to assess the patients' perception of weight, appearance, comfort, durability, donning/doffing, stability, benefit and overall appreciation of their prescription footwear (all expressed as visual analogue scores). Data on priorities for usability and footwear use (in h/day) were obtained from patient reports. Multivariate logistic regression analysis was used to assess determinants of usability and use. Median (interquartile range) score for overall appreciation was 8.3 (7.1-9.1). Scores ranged from 6.5 (4.5-8.6) for weight to 9.6 (6.3-9.9) for donning/doffing. Footwear comfort was listed most often (33.3%) as the highest priority. Footwear use was <60% of daytime (where daytime was defined as 16 h out of bed) in 58% of patients. The only significant determinant of footwear use was the perceived benefit of the footwear (p = 0.045). Perceived usability of footwear was mostly positive, although individual scores and priorities varied considerably. Footwear use was low to moderate and dependent only on the perceived benefit of the footwear. Therefore, practitioners should focus on enhancing the patient's appreciation of the therapeutic benefit of custom-made footwear.

  8. Neonatal survival in complex humanitarian emergencies: setting an evidence-based research agenda

    PubMed Central

    2014-01-01

    Background: Over 40% of all deaths among children under 5 are neonatal deaths (0-28 days), and this proportion is increasing. In 2012, 2.9 million newborns died, with 99% of these deaths occurring in low- and middle-income countries. Many of the countries with the highest neonatal mortality rates globally are currently, or have recently been, affected by complex humanitarian emergencies. Despite the global burden of neonatal morbidity and mortality and the risks inherent in complex emergency situations, research investments are not commensurate with the burden, and little is known about the epidemiology of, or best practices for, neonatal survival in these settings. Methods: We used the Child Health and Nutrition Research Initiative (CHNRI) methodology to prioritize research questions on neonatal health in complex humanitarian emergencies. Experts evaluated 35 questions using four criteria (answerability, feasibility, relevance, equity) with three subcomponents per criterion. Using SAS 9.2, a research prioritization score (RPS) and average expert agreement score (AEA) were calculated for each question. Results: Twenty-eight experts evaluated all 35 questions. RPS values ranged from 0.679 to 0.846 and AEA values from 0.411 to 0.667. The top ten research priorities covered a range of issues but generally fell into two categories: epidemiologic and programmatic components of neonatal health. The highest-ranked question in this survey was “What strategies are effective in increasing demand for, and use of, skilled attendance?” Conclusions: In this study, a diverse group of experts used the CHNRI methodology to systematically identify and determine research priorities for neonatal health and survival in complex humanitarian emergencies. The priorities included the need to better understand the magnitude of the disease burden and interventions to improve neonatal health in complex humanitarian emergencies. 
The findings from this study will provide guidance to researchers and program implementers in neonatal and complex humanitarian fields to engage on the research priorities needed to save lives most at risk. PMID:24959198
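    For readers unfamiliar with CHNRI scoring, the snippet below sketches how an RPS and AEA might be computed for a single research question. It follows one common formulation of the CHNRI method (yes = 1, no = 0, undecided = 0.5; AEA as the average fraction of experts giving the modal answer per sub-question) and is not taken from this paper; the answer matrix is invented.

```python
# Sketch of CHNRI-style scores for one research question (invented answers).
# Each expert answers each criterion sub-question 1 (yes), 0 (no) or
# 0.5 (undecided). RPS is the mean answer; AEA is the average fraction of
# experts giving the most common answer per sub-question.
from collections import Counter

# rows: experts, columns: criterion sub-questions
answers = [
    [1, 1, 0.5, 1],
    [1, 0.5, 1, 1],
    [1, 1, 1, 0],
    [0.5, 1, 1, 1],
]

def rps(answers):
    """Research prioritization score: mean of all informative answers."""
    flat = [a for row in answers for a in row]
    return sum(flat) / len(flat)

def aea(answers):
    """Average expert agreement: mean modal-answer fraction across sub-questions."""
    n_experts = len(answers)
    fractions = []
    for column in zip(*answers):
        modal_count = Counter(column).most_common(1)[0][1]
        fractions.append(modal_count / n_experts)
    return sum(fractions) / len(fractions)

print(round(rps(answers), 3))
print(round(aea(answers), 3))
```

    With these invented answers the RPS comes out around 0.84 and the AEA around 0.75, i.e. within the ranges the study reports, which is coincidental but makes the scale of the scores concrete.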

  9. 7 CFR 4279.155 - Loan priorities.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... natural resource value-added product (2 points). (iii) Occupations. The priority score for occupations... natural disaster or experiencing fundamental structural changes in its economic base (5 points). (iv... for the following: (A) Business that offers high value, specialized products and services that command...

  10. 7 CFR 4279.155 - Loan priorities.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... natural resource value-added product (2 points). (iii) Occupations. The priority score for occupations... natural disaster or experiencing fundamental structural changes in its economic base (5 points). (iv... for the following: (A) Business that offers high value, specialized products and services that command...

  11. 7 CFR 4279.155 - Loan priorities.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... natural resource value-added product (2 points). (iii) Occupations. The priority score for occupations... natural disaster or experiencing fundamental structural changes in its economic base (5 points). (iv... for the following: (A) Business that offers high value, specialized products and services that command...

  12. 7 CFR 4279.155 - Loan priorities.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... natural resource value-added product (2 points). (iii) Occupations. The priority score for occupations... natural disaster or experiencing fundamental structural changes in its economic base (5 points). (iv... for the following: (A) Business that offers high value, specialized products and services that command...

  13. Setting priorities for research in medical nutrition education: an international approach.

    PubMed

    Ball, Lauren; Barnes, Katelyn; Laur, Celia; Crowley, Jennifer; Ray, Sumantra

    2016-12-14

    To identify the research priorities for medical nutrition education worldwide. A 5-step stakeholder engagement process based on methodological guidelines for identifying research priorities in health. 277 individuals were identified as representatives for 30 different stakeholder organisations across 86 countries. The stakeholder organisations represented the views of medical educators, medical students, doctors, patients and researchers in medical education. Each stakeholder representative was asked to provide up to three research questions that should be deemed as a priority for medical nutrition education. Research questions were critically appraised for answerability, sustainability, effectiveness, potential for translation and potential to impact on disease burden. A blinded scoring system was used to rank the appraised questions, with higher scores indicating higher priority (range of scores possible 36-108). 37 submissions were received, of which 25 were unique research questions. Submitted questions received a range of scores from 62 to 106 points. The highest scoring questions focused on (1) increasing the confidence of medical students and doctors in providing nutrition care to patients, (2) clarifying the essential nutrition skills doctors should acquire, (3) understanding the effectiveness of doctors at influencing dietary behaviours and (4) improving medical students' attitudes towards the importance of nutrition. These research questions can be used to ensure future projects in medical nutrition education directly align with the needs and preferences of research stakeholders. Funders should consider these priorities in their commissioning of research. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  14. An Analysis of Preliminary and Post-Discussion Priority Scores for Grant Applications Peer Reviewed by the Center for Scientific Review at the NIH

    PubMed Central

    Martin, Michael R.; Kopstein, Andrea; Janice, Joy M.

    2010-01-01

    There has been an impression amongst many observers that discussion of a grant application has little practical impact on its final priority score; rather, the final score is largely dictated by the range of preliminary scores given by the assigned reviewers. The implication is that the preliminary and final scores are the same and the discussion has little impact. The purpose of this examination of the peer review process at the National Institutes of Health is to describe the relationship between the preliminary priority scores of the assigned reviewers and the final priority score given by the scientific review group. This study also describes the practical importance of any differences in priority scores. Priority scores for a sample of standard (R01) research grant applications were used in this assessment. The results indicate that the preliminary meeting evaluation is positively correlated with the final meeting outcome but that the two are, on average, significantly different. The results demonstrate that discussion at the meeting has an important practical impact on over 13% of applications. PMID:21103331

  15. Evaluating Partnerships to Enhance Disaster Risk Management using Multi-Criteria Analysis: An Application at the Pan-European Level

    NASA Astrophysics Data System (ADS)

    Hochrainer-Stigler, Stefan; Lorant, Anna

    2018-01-01

    Disaster risk is increasingly recognized as a major development challenge. Recent calls emphasize the need to proactively engage in disaster risk reduction, as well as to establish new partnerships between private and public sector entities in order to decrease current and future risks. Very often such potential partnerships have to meet different objectives reflecting the priorities of the stakeholders involved. Consequently, potential partnerships need to be assessed on multiple criteria to determine the weakest links and greatest threats in collaboration. This paper takes a supranational multi-sector partnership perspective and considers possible ways to enhance disaster risk management in the European Union through better coordination between the European Union Solidarity Fund, risk reduction efforts, and insurance mechanisms. Based on flood risk estimates, we employ a risk-layer approach to determine a set of options for new partnerships and test them in a high-level workshop via a novel cardinal-ranking-based multi-criteria approach. Whilst transformative changes receive good overall scores, we also find that the incorporation of risk into budget planning is an essential condition for successful partnerships.

  16. Concordance of Motion Sensor and Clinician-Rated Fall Risk Scores in Older Adults.

    PubMed

    Elledge, Julie

    2017-12-01

    As the older adult population in the United States continues to grow, developing reliable, valid, and practical methods for identifying fall risk is a high priority. Falls are prevalent in older adults and contribute significantly to morbidity and mortality rates and rising health costs. Identifying at-risk older adults and intervening in a timely manner can reduce falls. Conventional fall risk assessment tools require a health professional trained in the use of each tool for administration and interpretation. Motion sensor technology, which uses three-dimensional cameras to measure patient movements, is promising for assessing older adults' fall risk because it could eliminate or reduce the need for provider oversight. The purpose of this study was to assess the concordance of fall risk scores as measured by a motion sensor device, the OmniVR Virtual Rehabilitation System, with clinician-rated fall risk scores in older adult outpatients undergoing physical rehabilitation. Three standardized fall risk assessments were administered by the OmniVR and by a clinician. Validity of the OmniVR was assessed by measuring the concordance between the two assessment methods. Stability of the OmniVR fall risk ratings was assessed by measuring test-retest reliability. The OmniVR scores showed high concordance with the clinician-rated scores and high stability over time, demonstrating comparability with provider measurements.

  17. Adapting Technological Interventions to Meet the Needs of Priority Populations.

    PubMed

    Linke, Sarah E; Larsen, Britta A; Marquez, Becky; Mendoza-Vasconez, Andrea; Marcus, Bess H

    2016-01-01

    Cardiovascular diseases (CVD) comprise the leading cause of mortality worldwide, accounting for 3 in 10 deaths. Individuals with certain risk factors, including tobacco use, obesity, low levels of physical activity, type 2 diabetes mellitus, racial/ethnic minority status and low socioeconomic status, experience higher rates of CVD and are, therefore, considered priority populations. Technological devices such as computers and smartphones are now routinely utilized in research studies aiming to prevent CVD and its risk factors, and they are also rampant in the public and private health sectors. Traditional health behavior interventions targeting these risk factors have been adapted for technology-based approaches. This review provides an overview of technology-based interventions conducted in these priority populations as well as the challenges and gaps to be addressed in future research. Researchers currently possess tremendous opportunities to engage in technology-based implementation and dissemination science to help spread evidence-based programs focusing on CVD risk factors in these and other priority populations. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bekelman, Justin E., E-mail: bekelman@uphs.upenn.edu; Department of Medical Ethics and Health Policy, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania; Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia, Pennsylvania

    Purpose: To present the principles and rationale of the Proton Priority System (PROPS), a priority points framework that assigns higher scores to patients thought more likely to benefit from proton therapy, and the distribution of PROPS scores by patient characteristics. Methods and Materials: We performed multivariable logistic regression to evaluate the association between PROPS scores and receipt of proton therapy, adjusted for insurance status, gender, race, geography, and the domains that inform the PROPS score. Results: Among 1529 adult patients considered for proton therapy prioritization during our Center's ramp-up phase of treatment availability, PROPS scores varied by age, diagnosis, site, and other PROPS domains. In adjusted analyses, receipt of proton therapy was lower for patients with non-Medicare relative to Medicare health insurance (commercial vs Medicare: adjusted odds ratio [OR] 0.47, 95% confidence interval [CI] 0.34-0.64; managed care vs Medicare: OR 0.40, 95% CI 0.28-0.56; Medicaid vs Medicare: OR 0.24, 95% CI 0.13-0.44). PROPS score and age were not significantly associated with receipt of proton therapy. Conclusions: The Proton Priority System is a rationally designed and transparent system for allocating proton therapy slots based on the best available evidence and expert opinion. Because the actual allocation of treatment slots depends mostly on insurance status, payers may consider incorporating PROPS, or its underlying principles, into proton therapy coverage policies.

  19. Risk analysis of analytical validations by probabilistic modification of FMEA.

    PubMed

    Barends, D M; Oldenhof, M T; Vredenbregt, M J; Nauta, M J

    2012-05-01

    Risk analysis is a valuable addition to the validation of an analytical chemistry process, enabling detection not only of technical risks but also of risks related to human failure. Failure Mode and Effect Analysis (FMEA) can be applied, using categorical risk scoring of the occurrence, detection, and severity of failure modes, and calculating the Risk Priority Number (RPN) to select failure modes for correction. We propose a probabilistic modification of FMEA, replacing the categorical scoring of occurrence and detection with their estimated relative frequencies while maintaining the categorical scoring of severity. In an example, the results of a traditional FMEA of a Near-Infrared (NIR) analytical procedure used for the screening of suspected counterfeit tablets are re-interpreted through this probabilistic modification. Using this approach, the frequency of occurrence of undetected failures can be estimated quantitatively for each individual failure mode, for a set of failure modes, and for the full analytical procedure. Copyright © 2012 Elsevier B.V. All rights reserved.
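    The probabilistic modification described above can be sketched in a few lines. This is an illustrative reading of the abstract, not the authors' code: occurrence and detection become relative frequencies, severity stays categorical, and the frequency of an undetected failure is taken as p_occurrence × (1 − p_detection). The failure-mode names and probabilities are hypothetical.

```python
def traditional_rpn(occurrence, detection, severity):
    """Classic FMEA: product of three categorical 1-10 scores."""
    return occurrence * detection * severity

def undetected_failure_frequency(p_occurrence, p_detection):
    """Probabilistic variant: estimated frequency of a failure mode
    occurring AND slipping past detection."""
    return p_occurrence * (1.0 - p_detection)

# Hypothetical failure modes for a NIR screening procedure: (p_occ, p_det).
modes = {
    "wrong reference spectrum": (0.02, 0.90),
    "operator mislabels sample": (0.01, 0.50),
    "instrument drift": (0.05, 0.95),
}

# Frequency of at least one undetected failure over the full procedure,
# assuming independent failure modes: 1 - prod(1 - f_i).
survival = 1.0
for p_occ, p_det in modes.values():
    survival *= 1.0 - undetected_failure_frequency(p_occ, p_det)
overall_undetected = 1.0 - survival
```

    The independence assumption in the last step is the simplest possible aggregation; the paper's exact combination rule for a set of failure modes may differ.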

  20. TU-FG-201-11: Evaluating the Validity of Prospective Risk Analysis Methods: A Comparison of Traditional FMEA and Modified Healthcare FMEA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lah, J; Manger, R; Kim, G

    Purpose: To examine the ability of traditional failure mode and effects analysis (FMEA) and a light version of Healthcare FMEA (HFMEA), called Scenario Analysis of FMEA (SAFER), by comparing their outputs in terms of the risks identified and their severity rankings. Methods: We applied the two prospective quality management methods to surface image guided, linac-based radiosurgery (SIG-RS). In traditional FMEA, decisions on how to improve an operation are based on the risk priority number (RPN), the product of three indices: occurrence, severity, and detectability. The SAFER approach used two indices, frequency and severity, which were defined by a multidisciplinary team. A criticality matrix was divided into four categories: very low, low, high, and very high. For high-risk events, an additional evaluation was performed: based on the criticality of the process, it was decided whether additional safety measures were needed and what they should comprise. Results: The two methods were compared independently to determine whether the rated risks matched. Our results showed 67% agreement between the FMEA and SAFER approaches for the 15 riskiest SIG-specific failure modes. The main differences between the two approaches were the distribution of the values and the fact that failure modes (No. 52, 54, 154) with high SAFER scores do not necessarily have high FMEA RPN scores. There were also risks identified by both methods with little correspondence. In SAFER, once a risk score is determined, the underlying decision tree or failure mode should be investigated further. Conclusion: The FMEA method takes into account the probability that an error passes without being detected. SAFER is inductive, because it requires identifying consequences from causes, and semi-quantitative, since it allows prioritization of risks and mitigation measures; it is therefore well suited to the clinical parts of radiotherapy.
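    A SAFER-style criticality matrix as described above can be sketched as a simple lookup. The four categories come from the abstract, but the matrix boundaries below are assumptions for illustration; the authors' actual thresholds are not given.

```python
def criticality(frequency, severity):
    """Map 1-4 frequency and severity indices to one of four risk
    categories via a matrix lookup (illustrative boundaries)."""
    matrix = [
        # severity:  1           2           3            4
        ["very low", "very low", "low",       "low"],        # frequency 1
        ["very low", "low",      "low",       "high"],       # frequency 2
        ["low",      "low",      "high",      "very high"],  # frequency 3
        ["low",      "high",     "very high", "very high"],  # frequency 4
    ]
    return matrix[frequency - 1][severity - 1]

def needs_additional_evaluation(frequency, severity):
    """High and very-high events trigger the additional evaluation
    step described in the abstract."""
    return criticality(frequency, severity) in ("high", "very high")
```

    The lookup makes explicit what distinguishes SAFER from RPN-based FMEA: the decision is a category, not a continuous product, so small index changes can flip the outcome across a threshold.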

  1. Using needs-based frameworks for evaluating new technologies: an application to genetic tests.

    PubMed

    Rogowski, Wolf H; Schleidgen, Sebastian

    2015-02-01

    Given the multitude of newly available genetic tests in the face of limited healthcare budgets, the European Society of Human Genetics assessed how genetic services can be prioritized fairly. Using (health) benefit-maximizing frameworks for this purpose has been criticized on the grounds that, rather than maximization, fairness requires meeting claims (e.g. based on medical need) equitably. This study develops a prioritization score for genetic tests to facilitate equitable allocation based on need-based claims. It includes attributes representing the health need associated with hereditary conditions (severity and progression), a genetic service's suitability to alleviate that need (evidence of benefit and likelihood of a positive result), and the costs of meeting the need. A case study for measuring the attributes is provided, and a suggestion is made as to how need-based claims can be quantified in a priority function. Attribute weights can be informed by data from discrete-choice experiments. Further work is needed to measure the attributes across the multitude of genetic tests and to determine appropriate weights. The priority score is most likely to be considered acceptable if it is developed within a decision process that meets criteria of procedural fairness and if it is interpreted as a "strength of recommendation" rather than a fixed cut-off value. Copyright © 2014. Published by Elsevier Ireland Ltd.
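    A priority function of the kind described above might look like the following sketch. The attribute list (severity, progression, evidence of benefit, likelihood of a positive result, cost) comes from the abstract; the normalization to [0, 1], the linear form, and the weight values are assumptions standing in for weights the paper suggests deriving from discrete-choice experiments.

```python
def priority_score(test, weights):
    """Weighted sum of normalized attribute values (each in [0, 1]);
    cost enters negatively so cheaper tests score higher."""
    return (weights["severity"] * test["severity"]
            + weights["progression"] * test["progression"]
            + weights["evidence"] * test["evidence"]
            + weights["pos_likelihood"] * test["pos_likelihood"]
            - weights["cost"] * test["cost"])

# Hypothetical weights and a hypothetical test profile.
weights = {"severity": 0.3, "progression": 0.2, "evidence": 0.25,
           "pos_likelihood": 0.15, "cost": 0.1}
test_a = {"severity": 0.9, "progression": 0.8, "evidence": 0.6,
          "pos_likelihood": 0.4, "cost": 0.7}
score_a = priority_score(test_a, weights)
```

    Consistent with the paper's caveat, such a score would be read as a strength of recommendation for ranking tests, not as a fixed funding cut-off.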

  2. 40 CFR 300.425 - Establishing remedial priorities.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... may submit HRS scoring packages to EPA anytime throughout the year. (2) EPA shall review lead agencies' HRS scoring packages and revise them as appropriate. EPA shall develop any additional HRS scoring packages on releases known to EPA. (3) EPA shall compile the NPL based on the methods identified in...

  3. 40 CFR 300.425 - Establishing remedial priorities.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... may submit HRS scoring packages to EPA anytime throughout the year. (2) EPA shall review lead agencies' HRS scoring packages and revise them as appropriate. EPA shall develop any additional HRS scoring packages on releases known to EPA. (3) EPA shall compile the NPL based on the methods identified in...

  4. 40 CFR 300.425 - Establishing remedial priorities.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... may submit HRS scoring packages to EPA anytime throughout the year. (2) EPA shall review lead agencies' HRS scoring packages and revise them as appropriate. EPA shall develop any additional HRS scoring packages on releases known to EPA. (3) EPA shall compile the NPL based on the methods identified in...

  5. 40 CFR 300.425 - Establishing remedial priorities.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... may submit HRS scoring packages to EPA anytime throughout the year. (2) EPA shall review lead agencies' HRS scoring packages and revise them as appropriate. EPA shall develop any additional HRS scoring packages on releases known to EPA. (3) EPA shall compile the NPL based on the methods identified in...

  6. 40 CFR 300.425 - Establishing remedial priorities.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... may submit HRS scoring packages to EPA anytime throughout the year. (2) EPA shall review lead agencies' HRS scoring packages and revise them as appropriate. EPA shall develop any additional HRS scoring packages on releases known to EPA. (3) EPA shall compile the NPL based on the methods identified in...

  7. The corneal transplant score: a simple corneal graft candidate calculator.

    PubMed

    Rosenfeld, Eldar; Varssano, David

    2013-07-01

    A shortage of corneas for transplantation has created long waiting lists in most countries. Transplant calculators are available for many organs. The purpose of this study is to describe a simple automatic scoring system for keratoplasty recipient candidates, based on several parameters that we consider most relevant for tissue allocation, and to compare the system's accuracy in predicting decisions made by a cornea specialist. Twenty pairs of candidate data were randomly created on an electronic spreadsheet. A single priority score was computed from the data of each candidate. A cornea surgeon and the automated system then decided independently which candidate in each pair should have surgery if only a single cornea was available. The scoring system can calculate values between 0 (lowest priority) and 18 (highest priority) for each candidate. The average score in our randomly created cohort was 6.35 ± 2.38 (mean ± SD), range 1.28 to 10.76. The average score difference between the candidates in each pair was 3.12 ± 2.10, range 0.08 to 8.45. The manual scoring process, although theoretical, was mentally and emotionally demanding for the surgeon. The human decision and the calculated value agreed in 19 of 20 pairs; the disagreement occurred in the pair with the lowest score difference (0.08). Given the worldwide donor cornea shortage, waits for transplantation can be long. Manual sorting of priority in a long waiting list is difficult, time-consuming, and prone to error. The suggested system may help achieve a fair distribution of available tissue.
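    The pairwise allocation step described above reduces to comparing two precomputed scores. The attribute weighting behind the 0-18 score is not detailed in the abstract, so the sketch below takes the scores as inputs; the values are hypothetical.

```python
def allocate(pair):
    """Given a pair of priority scores (0 = lowest, 18 = highest),
    return the index (0 or 1) of the candidate who should receive the
    single available cornea, plus the score difference driving the
    decision."""
    a, b = pair
    winner = 0 if a >= b else 1
    return winner, abs(a - b)

winner, diff = allocate((6.42, 9.54))
# Small differences (e.g. the 0.08 pair in the study) are exactly the
# cases where the calculator and the surgeon may disagree.
```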

  8. Data Analyses and Modelling for Risk Based Monitoring of Mycotoxins in Animal Feed

    PubMed Central

    van der Fels-Klerx, H.J. (Ine); Adamse, Paulien; Punt, Ans; van Asselt, Esther D.

    2018-01-01

    Following legislation, European Member States should have multi-annual control programs for contaminants, such as mycotoxins, in feed and food. These programs need to be risk-based, implying that checks are regular and proportional to the estimated risk to animal and human health. This study aimed to prioritize feed products in the Netherlands for deoxynivalenol and aflatoxin B1 monitoring. Historical mycotoxin monitoring results from the period 2007-2016 were combined with data from other sources. Based on occurrence, groundnuts had high priority for aflatoxin B1 monitoring; some feed materials (maize and maize products and several oil seed products) and complete/complementary feed, excluding that for dairy cattle and young animals, had medium priority; and all other animal feeds and feed materials had low priority. For deoxynivalenol, maize by-products had high priority, complete and complementary feed for pigs had medium priority, and all other feed and feed materials low priority. When health consequence estimates were also included, the feed materials ranking highest for aflatoxin B1 were sunflower seed and palm kernel expeller/extracts and maize. For deoxynivalenol, maize products ranked highest, followed by various small grain cereals (products); all other feed materials were of lower concern. The results of this study have proven useful in setting up the annual risk-based control program for mycotoxins in animal feed and feed materials. PMID:29373559

  9. 25 CFR Appendix A to Subpart C - IRR High Priority Project Scoring Matrix

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 25 Indians 1 2011-04-01 2011-04-01 false IRR High Priority Project Scoring Matrix A Appendix A to...—IRR High Priority Project Scoring Matrix Score 10 5 3 1 0 Accident and fatality rate for candidate...,000 or less 250,001-500,000 500,001-750,000 Over 750,000. Geographic isolation No external access to...

  10. 25 CFR Appendix A to Subpart C - IRR High Priority Project Scoring Matrix

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 25 Indians 1 2013-04-01 2013-04-01 false IRR High Priority Project Scoring Matrix A Appendix A to...—IRR High Priority Project Scoring Matrix Score 10 5 3 1 0 Accident and fatality rate for candidate...,000 or less 250,001-500,000 500,001-750,000 Over 750,000. Geographic isolation No external access to...

  11. 25 CFR Appendix A to Subpart C - IRR High Priority Project Scoring Matrix

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 25 Indians 1 2014-04-01 2014-04-01 false IRR High Priority Project Scoring Matrix A Appendix A to...—IRR High Priority Project Scoring Matrix Score 10 5 3 1 0 Accident and fatality rate for candidate...,000 or less 250,001-500,000 500,001-750,000 Over 750,000. Geographic isolation No external access to...

  12. 25 CFR Appendix A to Subpart C - IRR High Priority Project Scoring Matrix

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 25 Indians 1 2012-04-01 2011-04-01 true IRR High Priority Project Scoring Matrix A Appendix A to...—IRR High Priority Project Scoring Matrix Score 10 5 3 1 0 Accident and fatality rate for candidate...,000 or less 250,001-500,000 500,001-750,000 Over 750,000. Geographic isolation No external access to...

  13. 76 FR 21985 - Notice of Final Priorities, Requirements, Definitions, and Selection Criteria

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-19

    ... only after a research base has been established to support the use of the assessments for such purposes..., research-based assessment practices. Discussion: We agree that the selection criteria should address the... selection criterion, which addresses methods of scoring, to allow for self-scoring of student performance on...

  14. Research Priorities for the Intersection of Alcohol and HIV/AIDS in Low and Middle Income Countries: A Priority Setting Exercise.

    PubMed

    Gordon, Sara; Rotheram-Borus, Mary Jane; Skeen, Sarah; Perry, Charles; Bryant, Kendall; Tomlinson, Mark

    2017-11-01

    The harmful use of alcohol is a component cause of more than 200 diseases. The association between alcohol consumption, risk-taking behavior, and a range of infectious diseases such as HIV/AIDS is well established. The prevalence of both HIV/AIDS and harmful alcohol use in low and middle income countries (LMICs) is high. Alcohol has been identified as a modifiable risk factor in the prevention and treatment of HIV/AIDS. The objective of this paper is to define research priorities for the intersection of alcohol and HIV/AIDS in LMICs. The Child Health and Nutrition Research Initiative (CHNRI) priority setting methodology was applied to assess research priorities for the interaction of alcohol and HIV/AIDS. A group of 171 global and local experts in the field of alcohol- and/or HIV/AIDS-related research were identified and invited to generate research questions. This resulted in 205 research questions, which were categorized and refined by senior researchers into 48 research questions to be evaluated using five criteria: answerability, effectiveness, feasibility, applicability and impact, and equity. A total of 59 experts participated independently in the voluntary scoring exercise (a 34% response rate). There was substantial consensus among experts on priorities for research on alcohol and HIV. These tended to fall into two categories: those focused on better understanding the nexus between alcohol and HIV, and those directed towards informing practical interventions to reduce the impact of alcohol use on HIV treatment outcomes, replicating what Bryant (Subst Use Misuse 41:1465-1507, 2006) and Parry et al. (Addiction 108:1-2, 2012) found. Responses were stratified by location to determine any differences between groups; on average, experts in LMICs gave higher scores than experts in high-income countries.
Recent research has shown a causal link between alcohol consumption and the incidence of HIV/AIDS, including a better understanding of the pathways through which alcohol use affects ARV adherence (and adherence to other medications used to treat opportunistic infections) and CD4 counts. The results of this process clearly indicated that the important priorities for future research relate to the development and assessment of interventions addressing alcohol and HIV/AIDS, exploring the impact of HIV risk and comorbid alcohol use, and exploring risk and protective factors in the field of alcohol and HIV/AIDS. The findings from this priority setting exercise could guide international research agendas and make research funding more effective in addressing research on the intersection of alcohol and HIV/AIDS.

  15. Safe sleep practices in a New Zealand community and development of a Sudden Unexpected Death in Infancy (SUDI) risk assessment instrument.

    PubMed

    Galland, Barbara C; Gray, Andrew; Sayers, Rachel M; Heath, Anne-Louise M; Lawrence, Julie; Taylor, Rachael; Taylor, Barry J

    2014-10-13

    Interventions to prevent sudden unexpected death in infancy (SUDI) have generally been population-wide interventions instituted after case-control studies identified specific childcare practices associated with sudden death. While successful overall, rates in New Zealand (NZ) are still relatively high by international comparison. This study aims to describe childcare practices related to SUDI prevention messages in a New Zealand community, and to develop and explore the utility of a risk assessment instrument based on international guidelines and evidence. Prospective longitudinal study of 209 infants recruited antenatally. Participant characteristics and infant care data were collected by questionnaire at baseline (third trimester) and monthly from infant age 3 weeks through 23 weeks. Published meta-analysis data were used to estimate individual risk ratios for 6 important SUDI risk factors which, when combined, yielded a "SUDI risk score". Most infants were at low risk for SUDI, with 72% at the lowest or slightly elevated risk (combined risk ratio ≤1.5). There was a high prevalence of the safe practices: supine sleeping (86-89% over 3-19 weeks), mother not smoking (90-92% over 3-19 weeks), and not bed sharing at a young age (87% at 3 weeks). Five independent predictors of a high SUDI risk score were: higher parity (P = 0.028), younger age (P = 0.030), not working or caring for other children antenatally (P = 0.031), higher depression scores antenatally (P = 0.036), and lower education (P = 0.042). Groups within the community identified as priorities for education about safe sleep practices beyond standard care are mothers who are young, have high parity, have low educational levels, and have symptoms of depression antenatally. These findings emphasize the importance of addressing maternal depression as a modifiable risk factor in pregnancy.
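    A combined risk score of the kind described above is plausibly the product of per-factor risk ratios from meta-analyses, with an absent factor contributing a ratio of 1.0; that multiplicative combination is an assumption consistent with the text, and the factor names and RR values below are illustrative rather than the study's own.

```python
import math

def combined_risk_ratio(risk_ratios):
    """Combine per-factor risk ratios multiplicatively; an absent
    factor contributes RR = 1.0."""
    return math.prod(risk_ratios)

# Illustrative subset of the study's 6 risk factors for one infant.
infant = {
    "prone sleeping": 1.0,    # supine sleeper -> no excess risk
    "maternal smoking": 3.9,  # illustrative meta-analysis RR
    "bed sharing": 1.0,
}
score = combined_risk_ratio(infant.values())
low_risk = score <= 1.5       # the study's low/slightly-elevated band
```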

  16. Developing Consensus-Based Priority Outcome Domains for Trials in Kidney Transplantation: A Multinational Delphi Survey With Patients, Caregivers, and Health Professionals.

    PubMed

    Sautenet, Bénédicte; Tong, Allison; Manera, Karine E; Chapman, Jeremy R; Warrens, Anthony N; Rosenbloom, David; Wong, Germaine; Gill, John; Budde, Klemens; Rostaing, Lionel; Marson, Lorna; Josephson, Michelle A; Reese, Peter P; Pruett, Timothy L; Hanson, Camilla S; O'Donoghue, Donal; Tam-Tham, Helen; Halimi, Jean-Michel; Shen, Jenny I; Kanellis, John; Scandling, John D; Howard, Kirsten; Howell, Martin; Cross, Nick; Evangelidis, Nicole; Masson, Philip; Oberbauer, Rainer; Fung, Samuel; Jesudason, Shilpa; Knight, Simon; Mandayam, Sreedhar; McDonald, Stephen P; Chadban, Steve; Rajan, Tasleem; Craig, Jonathan C

    2017-08-01

    Inconsistencies in outcome reporting and frequent omission of patient-centered outcomes can diminish the value of trials in treatment decision making. We identified critically important outcome domains in kidney transplantation based on the shared priorities of patients/caregivers and health professionals. In a 3-round Delphi survey, patients/caregivers and health professionals rated the importance of outcome domains for trials in kidney transplantation on a 9-point Likert scale and provided comments. During rounds 2 and 3, participants rerated the outcomes after reviewing their own score, the distribution of the respondents' scores, and comments. We calculated the median, mean, and proportion rating 7 to 9 (critically important), and analyzed comments thematically. One thousand eighteen participants (461 [45%] patients/caregivers and 557 [55%] health professionals) from 79 countries completed round 1, and 779 (77%) completed round 3. The top 8 outcomes that met the consensus criteria in round 3 (mean, ≥7.5; median, ≥8; proportion, >85%) in both groups were graft loss, graft function, chronic rejection, acute rejection, mortality, infection, cancer (excluding skin), and cardiovascular disease. Compared with health professionals, patients/caregivers gave higher priority to 6 outcomes (mean difference of 0.5 or more): skin cancer, surgical complications, cognition, blood pressure, depression, and ability to work. We identified 5 themes: capacity to control and inevitability, personal relevance, debilitating repercussions, gaining awareness of risks, and addressing knowledge gaps. Graft complications and severe comorbidities were critically important for both stakeholder groups. These stakeholder-prioritized outcomes will inform the core outcome set to improve the consistency and relevance of trials in kidney transplantation.
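    The round-3 consensus rule stated above (mean ≥ 7.5, median ≥ 8, and more than 85% of respondents rating 7-9, in both stakeholder groups) is mechanical enough to sketch directly; the sample ratings below are hypothetical.

```python
from statistics import mean, median

def meets_consensus(ratings):
    """ratings: list of 1-9 Likert scores from one stakeholder group."""
    critical = sum(1 for r in ratings if 7 <= r <= 9) / len(ratings)
    return mean(ratings) >= 7.5 and median(ratings) >= 8 and critical > 0.85

def consensus_both_groups(patients, professionals):
    """An outcome domain makes the core set only if both groups agree."""
    return meets_consensus(patients) and meets_consensus(professionals)

# Illustrative ratings for a single outcome domain (e.g. graft loss).
patients = [9, 8, 9, 8, 7, 9, 8, 9, 8, 8]
professionals = [8, 9, 8, 8, 9, 7, 8, 9, 9, 8]
```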

  17. FMEA of manual and automated methods for commissioning a radiotherapy treatment planning system.

    PubMed

    Wexler, Amy; Gu, Bruce; Goddu, Sreekrishna; Mutic, Maya; Yaddanapudi, Sridhar; Olsen, Lindsey; Harry, Taylor; Noel, Camille; Pawlicki, Todd; Mutic, Sasa; Cai, Bin

    2017-09-01

    To evaluate the level of risk involved in treatment planning system (TPS) commissioning using a manual test procedure (MTP), and to compare the associated process-based risk to that of an automated commissioning process (ACP), by performing an in-depth failure modes and effects analysis (FMEA). The authors collaborated to determine the potential failure modes of the TPS commissioning process using (a) approaches involving manual data measurement, modeling, and validation tests and (b) an automated process utilizing application programming interface (API) scripting, preloaded and premodeled standard radiation beam data, a digital heterogeneous phantom, and an automated commissioning test suite (ACTS). The severity (S), occurrence (O), and detectability (D) were scored for each failure mode, and risk priority numbers (RPN) were derived based on the TG-100 scale. Failure modes were then analyzed and ranked by RPN. The total number of failure modes, the RPN scores, and the top 10 highest-risk failure modes were described and cross-compared between the two approaches. An RPN reduction analysis is also presented and used as another quantifiable metric to evaluate the proposed approach. The FMEA of the MTP resulted in 47 failure modes with an average RPN of 161 and an average severity of 6.7; the highest-risk process, "Measurement Equipment Selection," had a maximum RPN of 640. The FMEA of the ACP resulted in 36 failure modes with an average RPN of 73 and an average severity of 6.7; the highest-risk process, "EPID Calibration," had a maximum RPN of 576. An FMEA of treatment planning commissioning tests using automation and standardization via API scripting, preloaded and premodeled standard beam data, and digital phantoms suggests that errors and risks may be reduced through the use of an ACP. © 2017 American Association of Physicists in Medicine.
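    The TG-100-style scoring used above can be sketched directly: each failure mode gets occurrence (O), severity (S), and detectability (D) scores on a 1-10 scale, RPN = O × S × D, and modes are ranked by RPN. The first and third score triples below reproduce the maxima reported in the abstract (640 and 576); the middle entry is illustrative.

```python
def rpn(o, s, d):
    """TG-100-style risk priority number: O * S * D, each 1-10."""
    return o * s * d

failure_modes = {
    "measurement equipment selection": (8, 8, 10),  # -> RPN 640
    "beam data entry error":           (4, 9, 8),   # illustrative
    "EPID calibration":                (8, 9, 8),   # -> RPN 576
}

# Rank failure modes from highest to lowest risk.
ranked = sorted(failure_modes.items(),
                key=lambda kv: rpn(*kv[1]), reverse=True)
highest_risk, scores = ranked[0]
```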

  18. Setting health research priorities using the CHNRI method: IV. Key conceptual advances.

    PubMed

    Rudan, Igor

    2016-06-01

    Child Health and Nutrition Research Initiative (CHNRI) started as an initiative of the Global Forum for Health Research in Geneva, Switzerland. Its aim was to develop a method that could assist priority setting in health research investments. The first version of the CHNRI method was published in 2007-2008. The aim of this paper was to summarize the history of the development of the CHNRI method and its key conceptual advances. The guiding principle of the CHNRI method is to expose the potential of many competing health research ideas to reduce disease burden and inequities that exist in the population in a feasible and cost-effective way. The CHNRI method introduced three key conceptual advances that led to its increased popularity in comparison to other priority-setting methods and processes. First, it proposed a systematic approach to listing a large number of possible research ideas, using the "4D" framework (description, delivery, development and discovery research) and a well-defined "depth" of proposed research ideas (research instruments, avenues, options and questions). Second, it proposed a systematic approach for discriminating between many proposed research ideas based on a well-defined context and criteria. The five "standard" components of the context are the population of interest, the disease burden of interest, geographic limits, time scale and the preferred style of investing with respect to risk. The five "standard" criteria proposed for prioritization between research ideas are answerability, effectiveness, deliverability, maximum potential for disease burden reduction and the effect on equity. However, both the context and the criteria can be flexibly changed to meet the specific needs of each priority-setting exercise. Third, it facilitated consensus development through measuring collective optimism on each component of each research idea among a larger group of experts using a simple scoring system. 
This enabled the use of the knowledge of many experts in the field, "visualising" their collective opinion and presenting the list of many research ideas with their ranks, based on an intuitive score that ranges between 0 and 100. Two recent reviews showed that the CHNRI method, an approach essentially based on "crowdsourcing", has become the dominant approach to setting health research priorities in the global biomedical literature over the past decade. With more than 50 published examples of implementation to date, it is now widely used in many international organisations for collective decision-making on health research priorities. The applications have been helpful in promoting better balance between investments in fundamental research, translation research and implementation research.
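    The collective-optimism scoring described above can be sketched as follows. The commonly used CHNRI encoding (experts answer each criterion 1 for yes, 0 for no, 0.5 for undecided; criterion scores are averaged and scaled to 0-100) is assumed here; exact encodings vary between exercises, and the answers below are hypothetical.

```python
from statistics import mean

CRITERIA = ("answerability", "effectiveness", "deliverability",
            "burden reduction", "equity")

def chnri_score(answers_by_criterion):
    """answers_by_criterion: dict mapping criterion -> list of expert
    answers in {0, 0.5, 1}. Returns the overall priority score on the
    intuitive 0-100 scale."""
    criterion_scores = [mean(answers_by_criterion[c]) for c in CRITERIA]
    return 100.0 * mean(criterion_scores)

# Four hypothetical experts scoring one research idea.
idea = {
    "answerability":    [1, 1, 0.5, 1],
    "effectiveness":    [1, 0.5, 0.5, 1],
    "deliverability":   [1, 1, 1, 0.5],
    "burden reduction": [0.5, 1, 1, 1],
    "equity":           [1, 1, 0.5, 0.5],
}
score = chnri_score(idea)
```

    Ranking many ideas by this score is what "visualising" the collective opinion of a large expert pool amounts to in practice.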

  19. Failure mode and effects analysis drastically reduced potential risks in clinical trial conduct.

    PubMed

    Lee, Howard; Lee, Heechan; Baik, Jungmi; Kim, Hyunjung; Kim, Rachel

    2017-01-01

    Failure mode and effects analysis (FMEA) is a risk management tool used to proactively identify and assess the causes and effects of potential failures in a system, thereby preventing them from happening. The objective of this study was to evaluate the effectiveness of FMEA applied to an academic clinical trial center in a tertiary care setting. A multidisciplinary FMEA focus group at the Seoul National University Hospital Clinical Trials Center selected 6 core clinical trial processes, for which potential failure modes were identified and their risk priority number (RPN) assessed. Remedial action plans for high-risk failure modes (RPN >160) were devised, and a follow-up RPN scoring was conducted a year later. A total of 114 failure modes were identified, with RPN scores ranging from 3 to 378, driven mainly by the severity score. Fourteen failure modes were high risk, 11 of which were addressed by remedial actions. Rescoring showed a dramatic improvement, attributable to reductions in the occurrence and detection scores of >3 and >2 points, respectively. FMEA is a powerful tool for improving quality in clinical trials. The Seoul National University Hospital Clinical Trials Center is expanding its FMEA capability to other core clinical trial processes.

  20. Application of failure mode and effect analysis in an assisted reproduction technology laboratory.

    PubMed

    Intra, Giulia; Alteri, Alessandra; Corti, Laura; Rabellotti, Elisa; Papaleo, Enrico; Restelli, Liliana; Biondo, Stefania; Garancini, Maria Paola; Candiani, Massimo; Viganò, Paola

    2016-08-01

    Assisted reproduction technology laboratories have a very high degree of complexity. Mismatches of gametes or embryos can occur, with catastrophic consequences for patients. To minimize the risk of error, a multi-institutional working group applied failure mode and effects analysis (FMEA) to each critical activity/step as a method of risk assessment. This analysis led to the identification of potential failure modes, together with their causes and effects, using the risk priority number (RPN) scoring system. In total, 11 individual steps and 68 different potential failure modes were identified. The highest-ranked failure modes, with an RPN score of 25, encompassed 17 failures and pertained to "patient mismatch" and "biological sample mismatch". The maximum reduction in risk, with the RPN reduced from 25 to 5, was mostly related to the introduction of witnessing. The critical failure modes in sample processing were improved by 50% in RPN by focusing on staff training. Three indicators of FMEA success, based on technical skill, competence, and traceability, were evaluated after FMEA implementation. Witnessing by a second human operator should be introduced in the laboratory to avoid sample mix-ups. These findings confirm that FMEA can effectively reduce errors in assisted reproduction technology laboratories. Copyright © 2016 Reproductive Healthcare Ltd. Published by Elsevier Ltd. All rights reserved.

  1. Research priorities in occupational health in Italy

    PubMed Central

    Iavicoli, S; Marinaccio, A; Vonesch, N; Ursini, C; Grandi, C; Palmi, S

    2001-01-01

    OBJECTIVE—To find a broad consensus on research priorities and strategies in the field of occupational health and safety in Italy.
METHODS—A two-phase questionnaire survey was based on the Delphi technique previously described in other reports. 310 occupational safety and health specialists (from universities and local health units) were given an open questionnaire (to identify three priority research areas). The data obtained from respondents (175, 56.4%) were then used to draw up a list of 27 priority topics grouped into five macrosectors. Each of these was given a score ranging from 1 (of little importance) to 5 (extremely important). With the mean scores obtained from a total of 203 respondents (65.4%), it was possible to place the 27 topics in rank order according to a scale of priorities.
RESULTS—Among the macrosectors, first place was given to the question of methodological approach to research in this field, and for individual topics, occupational carcinogenesis and quality in occupational medicine were ranked first and second, respectively. The question of exposure to low doses of environmental pollutants and multiple exposures ranked third among the priorities; the development of adequate and effective approaches and methods for worker education and participation in prevention was also perceived as being an important issue (fourth place).
CONCLUSIONS—This study (the first of its kind in Italy) enabled us to achieve an adequate degree of consensus on research priorities related to the protection of occupational health and safety. Disparities in the mean scores of some of the issues identified overall as research priorities seem to be linked both to geographical area and to whether respondents worked in local health units or universities. This finding requires debate and further analysis.


Keywords: research priorities; occupational health; strategies PMID:11303082

  2. Risk analysis by FMEA as an element of analytical validation.

    PubMed

    van Leeuwen, J F; Nauta, M J; de Kaste, D; Odekerken-Rombouts, Y M C F; Oldenhof, M T; Vredenbregt, M J; Barends, D M

    2009-12-05

    We subjected a Near-Infrared (NIR) analytical procedure used for screening drugs for authenticity to a Failure Mode and Effects Analysis (FMEA), covering technical risks as well as risks related to human failure. An FMEA team broke down the NIR analytical method into process steps and identified possible failure modes for each step. Each failure mode was ranked on estimated frequency of occurrence (O), probability that the failure would remain undetected later in the process (D), and severity (S), each on a scale of 1-10. Human errors turned out to be the most common cause of failure modes. Failure risks were calculated as Risk Priority Numbers (RPN) = O × D × S. Failure modes with the highest RPN scores were subjected to corrective actions and the FMEA was repeated, showing reductions in RPN scores and resulting in improvement indices of up to 5.0. We recommend risk analysis as an addition to the usual analytical validation, as the FMEA enabled us to detect previously unidentified risks.
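    The rescoring step described above suggests an improvement index of RPN before corrective action divided by RPN after; that ratio interpretation is an assumption consistent with the reported "improvement indices up to 5.0", and the example scores below are illustrative.

```python
def rpn(o, d, s):
    """Risk priority number: occurrence * detection * severity."""
    return o * d * s

def improvement_index(before, after):
    """before/after: (O, D, S) tuples for the same failure mode,
    scored before and after a corrective action."""
    return rpn(*before) / rpn(*after)

# e.g. a human-error failure mode mitigated by a double-check step:
# occurrence 5 -> 2, detectability 8 -> 4, severity unchanged at 5.
index = improvement_index((5, 8, 5), (2, 4, 5))
```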

  3. Incorporating big data into treatment plan evaluation: Development of statistical DVH metrics and visualization dashboards.

    PubMed

    Mayo, Charles S; Yao, John; Eisbruch, Avraham; Balter, James M; Litzenberg, Dale W; Matuszak, Martha M; Kessler, Marc L; Weyburn, Grant; Anderson, Carlos J; Owen, Dawn; Jackson, William C; Haken, Randall Ten

    2017-01-01

    To develop statistical dose-volume histogram (DVH)-based metrics and a visualization method to quantify the comparison of treatment plans with historical experience and among different institutions. The descriptive statistical summary (ie, median, first and third quartiles, and 95% confidence intervals) of volume-normalized DVH curve sets of past experiences was visualized through the creation of statistical DVH plots. Detailed distribution parameters were calculated and stored in JavaScript Object Notation files to facilitate management, including transfer and potential multi-institutional comparisons. In the treatment plan evaluation, structure DVH curves were scored against computed statistical DVHs and weighted experience scores (WESs). Individual, clinically used, DVH-based metrics were integrated into a generalized evaluation metric (GEM) as a priority-weighted sum of normalized incomplete gamma functions. Historical treatment plans for 351 patients with head and neck cancer, 104 with prostate cancer who were treated with conventional fractionation, and 94 with liver cancer who were treated with stereotactic body radiation therapy were analyzed to demonstrate the usage of statistical DVH, WES, and GEM in a plan evaluation. A shareable dashboard plugin was created to display statistical DVHs and integrate GEM and WES scores into a clinical plan evaluation within the treatment planning system. Benchmarking with normal tissue complication probability scores was carried out to compare the behavior of GEM and WES scores. DVH curves from historical treatment plans were characterized and presented, with difficult-to-spare structures (ie, frequently compromised organs at risk) identified. Quantitative evaluations by GEM and/or WES compared favorably with the normal tissue complication probability Lyman-Kutcher-Burman model, transforming a set of discrete threshold-priority limits into a continuous model reflecting physician objectives and historical experience. 
Statistical DVH offers an easy-to-read, detailed, and comprehensive way to visualize the quantitative comparison with historical experiences and among institutions. WES and GEM metrics offer a flexible means of incorporating discrete threshold-prioritizations and historic context into a set of standardized scoring metrics. Together, they provide a practical approach for incorporating big data into clinical practice for treatment plan evaluations.
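    As a rough sketch of the GEM idea above (a priority-weighted sum of normalized incomplete gamma functions of DVH metrics), the following uses illustrative shape parameters, weights, thresholds, and dose values; it is not the paper's published calibration:

```python
import math

def gammainc_lower(a, x, terms=200):
    """Regularized lower incomplete gamma function P(a, x) via its
    standard power series: P(a,x) = x^a e^{-x} sum_n x^n / Gamma(a+n+1)."""
    if x <= 0:
        return 0.0
    total, term = 0.0, 1.0 / a
    for n in range(1, terms):
        total += term
        term *= x / (a + n)
    return total * math.exp(a * math.log(x) - x - math.lgamma(a))

def gem(metrics):
    """Priority-weighted sum of normalized incomplete gamma scores.

    metrics: list of (value, threshold, priority_weight, shape) tuples.
    Each metric maps to [0, 1), rising steeply as the achieved value
    crosses its threshold; the shape parameter controls the steepness.
    All parameter choices here are illustrative assumptions.
    """
    num = sum(w * gammainc_lower(a, a * value / threshold)
              for value, threshold, w, a in metrics)
    return num / sum(w for _, _, w, _ in metrics)

# Hypothetical plan: (achieved dose metric, threshold, priority, shape)
plan = [(45.0, 50.0, 3.0, 8.0),   # e.g. a max-dose limit, currently met
        (26.0, 24.0, 1.0, 8.0)]   # e.g. a mean-dose limit, exceeded
print(f"GEM = {gem(plan):.3f}")   # in [0, 1); higher = worse vs objectives
```

    The appeal of this construction is that a set of discrete threshold-priority limits becomes one continuous score: a metric just under its limit contributes little, and the contribution grows smoothly, weighted by clinical priority, as the limit is exceeded.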

  4. Comparative assessment of absolute cardiovascular disease risk characterization from non-laboratory-based risk assessment in South African populations

    PubMed Central

    2013-01-01

    Background All rigorous primary cardiovascular disease (CVD) prevention guidelines recommend absolute CVD risk scores to identify high- and low-risk patients, but laboratory testing can be impractical in low- and middle-income countries. The purpose of this study was to compare the ranking performance of a simple, non-laboratory-based risk score to laboratory-based scores in various South African populations. Methods We calculated and compared 10-year CVD (or coronary heart disease (CHD)) risk for 14,772 adults from thirteen cross-sectional South African populations (data collected from 1987 to 2009). Risk characterization performance for the non-laboratory-based score was assessed by comparing rankings of risk with six laboratory-based scores (three versions of Framingham risk, SCORE for high- and low-risk countries, and CUORE) using Spearman rank correlation and percent of population equivalently characterized as ‘high’ or ‘low’ risk. Total 10-year non-laboratory-based risk of CVD death was also calculated for a representative cross-section from the 1998 South African Demographic Health Survey (DHS, n = 9,379) to estimate the national burden of CVD mortality risk. Results Spearman correlation coefficients for the non-laboratory-based score with the laboratory-based scores ranged from 0.88 to 0.986. Using conventional thresholds for CVD risk (10% to 20% 10-year CVD risk), 90% to 92% of men and 94% to 97% of women were equivalently characterized as ‘high’ or ‘low’ risk using the non-laboratory-based and Framingham (2008) CVD risk score. These results were robust across the six risk scores evaluated and the thirteen cross-sectional datasets, with few exceptions (lower agreement between the non-laboratory-based and Framingham (1991) CHD risk scores). Approximately 18% of adults in the DHS population were characterized as ‘high CVD risk’ (10-year CVD death risk >20%) using the non-laboratory-based score. 
Conclusions We found a high level of correlation between a simple, non-laboratory-based CVD risk score and commonly-used laboratory-based risk scores. The burden of CVD mortality risk was high for men and women in South Africa. The policy and clinical implications are that fast, low-cost screening tools can lead to similar risk assessment results compared to time- and resource-intensive approaches. Until setting-specific cohort studies can derive and validate country-specific risk scores, non-laboratory-based CVD risk assessment could be an effective and efficient primary CVD screening approach in South Africa. PMID:23880010
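    The two comparison statistics used above, Spearman rank correlation and the share of a population equivalently characterized at a risk threshold, can be sketched as follows (the patient risk scores are made up, not the study data):

```python
# Sketch of comparing two CVD risk scores by how they *rank* patients
# (Spearman correlation) and by agreement at a 20% 10-year-risk threshold.
# Scores below are invented for illustration.

def ranks(values):
    """1-based average ranks; tied values share their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        mean_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = mean_rank
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

lab_score = [0.05, 0.12, 0.22, 0.31, 0.08, 0.25]     # lab-based 10-year risk
simple_score = [0.04, 0.15, 0.21, 0.35, 0.09, 0.19]  # non-laboratory-based
rho = spearman(lab_score, simple_score)
agree = sum((a > 0.20) == (b > 0.20)
            for a, b in zip(lab_score, simple_score)) / len(lab_score)
```

    A high rho with high threshold agreement is what justifies substituting the cheaper score for screening, since treatment decisions depend on the ranking and the high/low classification rather than the exact risk value.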

  5. SPIDERplan: A tool to support decision-making in radiation therapy treatment plan assessment.

    PubMed

    Ventura, Tiago; Lopes, Maria do Carmo; Ferreira, Brigida Costa; Khouri, Leila

    2016-01-01

In this work, a graphical method for radiotherapy treatment plan assessment and comparison, named SPIDERplan, is proposed. It aims to support plan approval by allowing independent and consistent comparisons of different treatment techniques, algorithms, or treatment planning systems. Optimized plans from modern radiotherapy are not easy to evaluate and compare because of their inherently multicriterial nature, so the clinical decision on the best treatment plan is largely based on subjective judgment. SPIDERplan combines a graphical analysis with a scoring index. Customized radar plots are generated based on the categorization of structures into groups and on the determination of individual structure scores. Each group and structure is assigned an angular amplitude expressing the clinical importance defined by the radiation oncologist. Completing the graphical evaluation, a global plan score is determined from the structure scores and their clinical weights. After a necessary clinical validation of the group weights, the efficacy of SPIDERplan in comparing and ranking different plans was tested through a planning exercise in which plans were generated for a nasal cavity case using different treatment planning systems. The SPIDERplan method was applied to the dose metrics achieved by the nasal cavity test plans. The generated diagrams and scores successfully ranked the plans according to the prescribed dose objectives and constraints and the radiation oncologist's priorities. SPIDERplan enables a fast and consistent evaluation of plan quality considering all targets and organs at risk.
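    A minimal sketch of a SPIDERplan-style global plan score follows. The structure scores, the convention that 1.0 means an objective exactly met, and the clinical weights are all illustrative assumptions, not the published definition:

```python
# Hedged sketch: a global plan score as the weight-normalized sum of
# per-structure scores. Convention assumed here (illustrative only):
# score 1.0 = dose objective exactly met, <1 better, >1 worse.
# Weights express the clinical importance assigned by the oncologist.

structures = {
    # name: (structure score, clinical weight)
    "PTV":          (0.95, 0.40),
    "optic chiasm": (1.10, 0.25),
    "brainstem":    (0.90, 0.20),
    "lenses":       (1.00, 0.15),
}

total_weight = sum(w for _, w in structures.values())
global_score = sum(s * w for s, w in structures.values()) / total_weight
print(f"global plan score = {global_score:.3f}")
```

    In the radar-plot view, each structure's angular amplitude would be proportional to its weight, so a plan's plot area conveys the same weighted aggregate visually.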

  6. Data mining identifies Digit Symbol Substitution Test score and serum cystatin C as dominant predictors of mortality in older men and women.

    PubMed

    Swindell, William R; Cummings, Steven R; Sanders, Jason L; Caserotti, Paolo; Rosano, Caterina; Satterfield, Suzanne; Strotmeyer, Elsa S; Harris, Tamara B; Simonsick, Eleanor M; Cawthon, Peggy M

    2012-08-01

Characterization of long-term health trajectory in older individuals is important for proactive health management. However, the relative prognostic value of information contained in clinical profiles of nonfrail older adults is often unclear. We screened 825 phenotypic and genetic measures evaluated during the Health, Aging, and Body Composition Study (Health ABC) baseline visit (3,067 men and women aged 70-79). Variables that best predicted mortality over 13 years of follow-up were identified using 10-fold cross-validation. Mortality was most strongly associated with low Digit Symbol Substitution Test (DSST) score (DSST<25; 21.9% of cohort; hazard ratio [HR]=1.87±0.06) and elevated serum cystatin C (≥1.30 mg/L; 12.1% of cohort; HR=2.25±0.07). These variables predicted mortality better than 823 other measures, including baseline age and a 45-variable health deficit index. Given elevated cystatin C (≥1.30 mg/L), mortality risk was further increased by high serum creatinine, high abdominal visceral fat density, and smoking history (2.52 ≤ HR ≤ 3.73). Given a low DSST score (<25) combined with low-to-moderate cystatin C (<1.30 mg/L), mortality risk was highest among those with elevated plasma resistin and smoking history (1.90 ≤ HR ≤ 2.02). DSST score and serum cystatin C warrant priority consideration for the evaluation of mortality risk in older individuals. Both variables, taken individually, predict mortality better than chronological age or a health deficit index in well-functioning older adults (ages 70-79). DSST score and serum cystatin C can thus provide evidence-based tools for geriatric assessment.

  7. Risk assessment and risk management at the Canadian Food Inspection Agency (CFIA): a perspective on the monitoring of foods for chemical residues.

    PubMed

    Bietlot, Henri P; Kolakowski, Beata

    2012-08-01

    The Canadian Food Inspection Agency (CFIA) uses 'Ranked Risk Assessment' (RRA) to prioritize chemical hazards for inclusion in monitoring programmes or method development projects based on their relative risk. The relative risk is calculated for a chemical by scoring toxicity and exposure in the 'risk model scoring system' of the Risk Priority Compound List (RPCL). The relative ranking and the risk management options are maintained and updated in the RPCL. The ranking may be refined by the data generated by the sampling and testing programs. The two principal sampling and testing programmes are the National Chemical Residue Monitoring Program (NCRMP) and the Food Safety Action Plan (FSAP). The NCRMP sampling plans focus on the analysis of federally registered products (dairy, eggs, honey, meat and poultry, fresh and processed fruit and vegetable commodities, and maple syrup) for residues of veterinary drugs, pesticides, environmental contaminants, mycotoxins, and metals. The NCRMP is complemented by the Food Safety Action Plan (FSAP) targeted surveys. These surveys focus on emerging chemical hazards associated with specific foods or geographical regions for which applicable maximum residue limits (MRLs) are not set. The data from the NCRMP and FSAP also influence the risk management (follow-up) options. Follow-up actions vary according to the magnitude of the health risk, all with the objective of preventing any repeat occurrence to minimize consumer exposure to a product representing a potential risk to human health. © Her Majesty the Queen in Right of Canada 2012. Drug Testing and Analysis © 2012 John Wiley & Sons, Ltd.

  8. Failure mode and effects analysis drastically reduced potential risks in clinical trial conduct

    PubMed Central

    Baik, Jungmi; Kim, Hyunjung; Kim, Rachel

    2017-01-01

Background Failure mode and effects analysis (FMEA) is a risk management tool to proactively identify and assess the causes and effects of potential failures in a system, thereby preventing them from happening. The objective of this study was to evaluate the effectiveness of FMEA applied to an academic clinical trial center in a tertiary care setting. Methods A multidisciplinary FMEA focus group at the Seoul National University Hospital Clinical Trials Center selected 6 core clinical trial processes, for which potential failure modes were identified and their risk priority number (RPN) was assessed. Remedial action plans for high-risk failure modes (RPN >160) were devised and a follow-up RPN scoring was conducted a year later. Results A total of 114 failure modes were identified, with RPN scores ranging from 3 to 378, driven mainly by the severity score. Fourteen failure modes were high risk, 11 of which were addressed by remedial actions. Rescoring showed a dramatic improvement, attributed to reductions in the occurrence and detection scores by >3 and >2 points, respectively. Conclusions FMEA is a powerful tool to improve quality in clinical trials. The Seoul National University Hospital Clinical Trials Center is expanding its FMEA capability to other core clinical trial processes. PMID:29089745

  9. Failure mode and effects analysis: a comparison of two common risk prioritisation methods.

    PubMed

    McElroy, Lisa M; Khorzad, Rebeca; Nannicelli, Anna P; Brown, Alexandra R; Ladner, Daniela P; Holl, Jane L

    2016-05-01

    Failure mode and effects analysis (FMEA) is a method of risk assessment increasingly used in healthcare over the past decade. The traditional method, however, can require substantial time and training resources. The goal of this study is to compare a simplified scoring method with the traditional scoring method to determine the degree of congruence in identifying high-risk failures. An FMEA of the operating room (OR) to intensive care unit (ICU) handoff was conducted. Failures were scored and ranked using both the traditional risk priority number (RPN) and criticality-based method, and a simplified method, which designates failures as 'high', 'medium' or 'low' risk. The degree of congruence was determined by first identifying those failures determined to be critical by the traditional method (RPN≥300), and then calculating the per cent congruence with those failures designated critical by the simplified methods (high risk). In total, 79 process failures among 37 individual steps in the OR to ICU handoff process were identified. The traditional method yielded Criticality Indices (CIs) ranging from 18 to 72 and RPNs ranging from 80 to 504. The simplified method ranked 11 failures as 'low risk', 30 as medium risk and 22 as high risk. The traditional method yielded 24 failures with an RPN ≥300, of which 22 were identified as high risk by the simplified method (92% agreement). The top 20% of CI (≥60) included 12 failures, of which six were designated as high risk by the simplified method (50% agreement). These results suggest that the simplified method of scoring and ranking failures identified by an FMEA can be a useful tool for healthcare organisations with limited access to FMEA expertise. However, the simplified method does not result in the same degree of discrimination in the ranking of failures offered by the traditional method. Published by the BMJ Publishing Group Limited. 
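    The congruence calculation described above reduces to set agreement between the two methods' "critical" designations. A minimal sketch with invented failure data:

```python
# Sketch of the congruence metric: the percentage of failures rated
# critical by the traditional method (RPN >= 300) that the simplified
# method also flags as 'high' risk. Failure data are hypothetical.

failures = [
    # (description, traditional RPN, simplified rating)
    ("missing ventilator settings", 504, "high"),
    ("unlabeled infusion line",     360, "high"),
    ("incomplete sign-out",         320, "medium"),
    ("delayed bed assignment",      240, "high"),
    ("illegible handoff note",       80, "low"),
]

critical = {f[0] for f in failures if f[1] >= 300}
high = {f[0] for f in failures if f[2] == "high"}
congruence = len(critical & high) / len(critical)
print(f"{congruence:.0%} of traditional-critical failures flagged high")
```

    The same set arithmetic with a criticality-index cutoff in place of the RPN cutoff gives the study's second comparison.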

  10. Evacuation of a Tertiary Neonatal Centre: Lessons from the 2016 Kumamoto Earthquakes

    PubMed Central

    Iwata, Osuke; Kawase, Akihiko; Iwai, Masanori; Wada, Kazuko

    2017-01-01

    Background Newborn infants hospitalised in the neonatal intensive care unit (NICU) are vulnerable to natural disasters. However, publications on evacuation from NICUs are sparse. The 2016 Kumamoto Earthquakes caused serious damage to Kumamoto City Hospital and its level III regional core NICU. Local/neighbour NICU teams and the disaster-communication team of a neonatal academic society cooperated to evacuate 38 newborn infants from the ward. Objective The aim of this paper was to highlight potential key factors to improve emergency NICU evacuation and coordination of hospital transportation following natural disasters. Methods Background variables including clinical risk scores and timing/destination of transportation were compared between infants, who subsequently were transferred to destinations outside of Kumamoto Prefecture, and their peers. Results All but 1 of the infants were successfully evacuated from their NICU within 8 h. One very-low-birth-weight infant developed moderate hypothermia following transportation. Fourteen infants were transferred to NICUs outside of Kumamoto Prefecture, which was associated with the diagnosis of congenital heart disease, dependence on respiratory support, higher risk scores, and longer elapsed time from the decision to departure. There was difficulty in arranging helicopter transportation because the coordination office of the Disaster Medical Assistance Team had requisitioned most air/ground ambulances and only helped arrange ground transportations for 13 low-risk infants. Transportation for all 10 high-risk infants (risk scores greater than or equal to the upper quartile) was arranged by local/neighbour NICUs. Conclusions Although the overall evacuation process was satisfactory, potential risks of relying on the adult-based emergency transportation system were highlighted. A better system needs to be developed urgently to put appropriate priority on vulnerable infants. PMID:28437783

  11. Evacuation of a Tertiary Neonatal Centre: Lessons from the 2016 Kumamoto Earthquakes.

    PubMed

    Iwata, Osuke; Kawase, Akihiko; Iwai, Masanori; Wada, Kazuko

    2017-01-01

    Newborn infants hospitalised in the neonatal intensive care unit (NICU) are vulnerable to natural disasters. However, publications on evacuation from NICUs are sparse. The 2016 Kumamoto Earthquakes caused serious damage to Kumamoto City Hospital and its level III regional core NICU. Local/neighbour NICU teams and the disaster-communication team of a neonatal academic society cooperated to evacuate 38 newborn infants from the ward. The aim of this paper was to highlight potential key factors to improve emergency NICU evacuation and coordination of hospital transportation following natural disasters. Background variables including clinical risk scores and timing/destination of transportation were compared between infants, who subsequently were transferred to destinations outside of Kumamoto Prefecture, and their peers. All but 1 of the infants were successfully evacuated from their NICU within 8 h. One very-low-birth-weight infant developed moderate hypothermia following transportation. Fourteen infants were transferred to NICUs outside of Kumamoto Prefecture, which was associated with the diagnosis of congenital heart disease, dependence on respiratory support, higher risk scores, and longer elapsed time from the decision to departure. There was difficulty in arranging helicopter transportation because the coordination office of the Disaster Medical Assistance Team had requisitioned most air/ground ambulances and only helped arrange ground transportations for 13 low-risk infants. Transportation for all 10 high-risk infants (risk scores greater than or equal to the upper quartile) was arranged by local/neighbour NICUs. Although the overall evacuation process was satisfactory, potential risks of relying on the adult-based emergency transportation system were highlighted. A better system needs to be developed urgently to put appropriate priority on vulnerable infants. © 2017 S. Karger AG, Basel.

  12. Hospital readmissions for COPD: a retrospective longitudinal study.

    PubMed

    Harries, Timothy H; Thornton, Hannah; Crichton, Siobhan; Schofield, Peter; Gilkes, Alexander; White, Patrick T

    2017-04-27

    Prevention of chronic obstructive pulmonary disease hospital readmissions is an international priority aimed to slow disease progression and limit costs. Evidence of the risk of readmission and of interventions that might prevent it is lacking. We aimed to determine readmission risk for chronic obstructive pulmonary disease, factors influencing that risk, and variation in readmission risk between hospitals across 7.5 million people in London. This retrospective longitudinal observational study included all chronic obstructive pulmonary disease admissions to any hospital in the United Kingdom among patients registered at London general practices who had emergency National Health Service chronic obstructive pulmonary disease hospital admissions between April 2006 and March 2010. Influence of patient characteristics, geographical deprivation score, length of stay, day of week of admission or of discharge, and admitting hospital, were assessed using multiple logistic regression. 38,894 chronic obstructive pulmonary disease admissions of 20,932 patients aged ≥ 45 years registered with London general practices were recorded. 6295 patients (32.2%) had at least one chronic obstructive pulmonary disease readmission within 1 year. 1993 patients (10.2%) were readmitted within 30 days and 3471 patients (17.8%) were readmitted within 90 days. Age and patient geographical deprivation score were very weak predictors of readmission. Rates of chronic obstructive pulmonary disease readmissions within 30 days and within 90 days did not vary among the majority of hospitals. The finding of lower chronic obstructive pulmonary disease readmission rates than was previously estimated and the limited variation in these rates between hospitals suggests that the opportunity to reduce chronic obstructive pulmonary disease readmission risk is small. 
LOWER RISK OF READMISSION FOR LONDON-BASED PATIENTS: A managed reduction of hospital readmissions for London-based chronic lung disease patients may not be needed. Preventing hospital readmissions for patients with chronic obstructive pulmonary disease (COPD) is a key priority to improve patient care and limit costs. However, few data are available to determine and ultimately reduce the risk of readmission. Timothy Harries at King's College London and co-workers conducted a longitudinal study incorporating all COPD admissions to UK hospitals for 20,932 patients registered with London general practices between 2006 and 2010. They found that 32% of patients were readmitted within a year, 17.8% within 90 days, and 10% within 30 days. Neither age nor geographical deprivation was a useful predictor of readmission. These readmission levels are lower than previous estimates, suggesting there may be fewer opportunities to further reduce the risk of readmission.

  13. White Paper: A Defect Prioritization Method Based on the Risk Priority Number

    DTIC Science & Technology

    2013-11-01

The Failure Modes and Effects Analysis (FMEA) method employs a measurement technique called the Risk Priority Number (RPN) to quantify the … In this adapted formulation, RPN is a product of the three categories. [Table 1, "Time Scaling Factors" (partial): Up to an hour, 16-60 min, factor 1.5; Brief Interrupt, 0-15 min, factor 1.]

  14. Priority Determination of Underwater Tourism Site Development in Gorontalo Province using Analytical Hierarchy Process (AHP)

    NASA Astrophysics Data System (ADS)

    Rohandi, M.; Tuloli, M. Y.; Jassin, R. T.

    2018-02-01

This research aims to determine the development priority of underwater tourism sites in Gorontalo Province using the Analytical Hierarchy Process (AHP), a decision support system (DSS) method for Multi-Attribute Decision Making (MADM). The method used 5 criteria and 28 alternatives to determine the best priority for underwater tourism site development in Gorontalo Province. Based on the AHP calculation, the top development priority among the underwater tourism sites is Pulau Cinta, with a total AHP score of 0.489 (48.9%). The DSS produced a reliable result quickly, saving time and cost for decision makers in selecting the best underwater tourism site to develop.
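    The core AHP step can be sketched briefly. The criteria, the pairwise judgments, and the use of the common normalized-column-average approximation of the principal eigenvector are illustrative assumptions, not the study's actual inputs:

```python
# Minimal AHP sketch (criteria and judgments invented for illustration):
# derive criterion weights from a pairwise comparison matrix using the
# normalized-column-average approximation of the principal eigenvector.

criteria = ["accessibility", "biodiversity", "infrastructure"]
# pairwise[i][j]: how much more important criterion i is than criterion j,
# on the usual 1-9 judgment scale; pairwise[j][i] is its reciprocal.
pairwise = [
    [1.0, 1 / 3, 3.0],
    [3.0, 1.0, 5.0],
    [1 / 3, 1 / 5, 1.0],
]

n = len(pairwise)
col_sums = [sum(row[j] for row in pairwise) for j in range(n)]
# Normalize each column to sum to 1, then average across each row.
weights = [sum(pairwise[i][j] / col_sums[j] for j in range(n)) / n
           for i in range(n)]
for name, w in zip(criteria, weights):
    print(f"{name}: {w:.3f}")
```

    In a full AHP, the 28 alternatives would each be scored against every criterion the same way, and an alternative's total score (such as Pulau Cinta's 0.489) is the criterion-weighted sum of those scores; a consistency ratio check on each judgment matrix normally accompanies this.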

  15. Prioritization of reproductive toxicants in unconventional oil and gas operations using a multi-country regulatory data-driven hazard assessment.

    PubMed

    Inayat-Hussain, Salmaan H; Fukumura, Masao; Muiz Aziz, A; Jin, Chai Meng; Jin, Low Wei; Garcia-Milian, Rolando; Vasiliou, Vasilis; Deziel, Nicole C

    2018-08-01

    Recent trends have witnessed the global growth of unconventional oil and gas (UOG) production. Epidemiologic studies have suggested associations between proximity to UOG operations with increased adverse birth outcomes and cancer, though specific potential etiologic agents have not yet been identified. To perform effective risk assessment of chemicals used in UOG production, the first step of hazard identification followed by prioritization specifically for reproductive toxicity, carcinogenicity and mutagenicity is crucial in an evidence-based risk assessment approach. To date, there is no single hazard classification list based on the United Nations Globally Harmonized System (GHS), with countries applying the GHS standards to generate their own chemical hazard classification lists. A current challenge for chemical prioritization, particularly for a multi-national industry, is inconsistent hazard classification which may result in misjudgment of the potential public health risks. We present a novel approach for hazard identification followed by prioritization of reproductive toxicants found in UOG operations using publicly available regulatory databases. GHS classification for reproductive toxicity of 157 UOG-related chemicals identified as potential reproductive or developmental toxicants in a previous publication was assessed using eleven governmental regulatory agency databases. If there was discordance in classifications across agencies, the most stringent classification was assigned. Chemicals in the category of known or presumed human reproductive toxicants were further evaluated for carcinogenicity and germ cell mutagenicity based on government classifications. A scoring system was utilized to assign numerical values for reproductive health, cancer and germ cell mutation hazard endpoints. 
Using a Cytoscape analysis, both qualitative and quantitative results were presented visually to readily identify high priority UOG chemicals with evidence of multiple adverse effects. We observed substantial inconsistencies in classification among the 11 databases. By adopting the most stringent classification within and across countries, 43 chemicals were classified as known or presumed human reproductive toxicants (GHS Category 1), while 31 chemicals were classified as suspected human reproductive toxicants (GHS Category 2). The 43 reproductive toxicants were further subjected to analysis for carcinogenic and mutagenic properties. Calculated hazard scores and Cytoscape visualization yielded several high priority chemicals including potassium dichromate, cadmium, benzene and ethylene oxide. Our findings reveal diverging GHS classification outcomes for UOG chemicals across regulatory agencies. Adoption of the most stringent classification with application of hazard scores provides a useful approach to prioritize reproductive toxicants in UOG and other industries for exposure assessments and selection of safer alternatives. Copyright © 2018 Elsevier Ltd. All rights reserved.
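    The "most stringent classification wins" rule described above can be sketched directly; the chemical names and agency calls below are hypothetical examples, not entries from the study's databases:

```python
# Sketch of harmonizing discordant GHS reproductive-toxicity calls across
# agencies by adopting the most stringent classification. Data invented.

# Lower number = more stringent concern under GHS.
STRINGENCY = {"Category 1": 1, "Category 2": 2, "Not classified": 3}

agency_calls = {
    "chemical A": ["Category 2", "Category 1", "Not classified"],
    "chemical B": ["Not classified", "Category 2"],
    "chemical C": ["Not classified", "Not classified"],
}

# Most stringent call per chemical across all agencies that classified it.
harmonized = {chem: min(calls, key=STRINGENCY.get)
              for chem, calls in agency_calls.items()}
known_or_presumed = [c for c, cls in harmonized.items()
                     if cls == "Category 1"]
```

    The Category 1 subset produced this way is then the shortlist carried forward to the carcinogenicity and germ cell mutagenicity evaluation and hazard scoring.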

  16. [Economic evaluation and rationale for human health risk management decisions].

    PubMed

    Fokin, S G; Bobkova, T E

    2011-01-01

The priority task of maintaining and improving human health is risk management based on new economic concepts: the assessment of potential and real human health risks from exposure to adverse environmental factors, and the estimation of cost-benefit and cost-effectiveness ratios. Applying economic tools to human risk management makes it possible to assess various measures both as a whole and by individual priority area, to rank different scenarios in terms of their effectiveness, and to estimate costs per unit of risk reduction and benefit increase (damage decrease).

  17. Prioritizing agricultural pesticides used in South Africa based on their environmental mobility and potential human health effects.

    PubMed

    Dabrowski, James Michael; Shadung, Justinus Madimetja; Wepener, Victor

    2014-01-01

South Africa is the largest user of pesticides in sub-Saharan Africa and many studies have highlighted the occurrence of pesticides in water resources. Poor management of water treatment facilities in combination with a relatively high dependency on untreated water from boreholes and rivers creates the potential for exposure of human communities to pesticides and their associated health effects. Pesticide use, physicochemical, and toxicity data were therefore used to prioritize pesticides in terms of their potential risk to human health. After eliminating pesticides used in very low quantities, four indices were used to prioritize active ingredients applied in excess of 1000 kg per annum: the quantity index (QI), which ranked pesticides in terms of the quantity of their use; the toxicity potential index (TP), which ranked pesticides according to scores derived for their potential to cause five health effects (endocrine disruption, carcinogenicity, teratogenicity, mutagenicity and neurotoxicity); the hazard potential index (HP), which multiplied the TP by an exposure potential score determined by the GUS index for each pesticide (to provide an indication of environmental hazard); and the weighted hazard potential (WHP), which multiplied the HP for a pesticide by the ratio of its use to the total use of all pesticides in the country. The top 25 pesticides occurring in each of these indices were identified as priority pesticides, resulting in a combined total of 69 priority pesticides. A principal component analysis identified the indices that were most important in determining why a specific pesticide was included in the final priority list. As crop-specific pesticide use data were available, it was possible to identify the crops to which priority pesticides were applied. 
Furthermore, it was possible to prioritize crops in terms of the specific pesticide applied to the crop (by expressing the WHP as a ratio of the total amount of pesticide applied to the crop to the total use of all pesticides applied in the country). This allows for an improved spatial assessment of the use of priority pesticides. The methodology applied here provides a first level of basic, important information that can be used to develop monitoring programmes, identify priority areas for management interventions, and investigate optimal mitigation strategies. © 2013.
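    The weighted hazard potential defined above, HP scaled by a pesticide's share of total national use, can be sketched with invented numbers (the HP values are taken as given here, rather than derived from TP and the GUS exposure score):

```python
# Sketch of the weighted hazard potential: WHP = HP * (use / total use).
# Pesticide names, use quantities, and HP values are invented.

pesticides = {
    # name: (annual use in kg, hazard potential HP)
    "pesticide X": (120_000, 14.0),
    "pesticide Y": (40_000, 30.0),
    "pesticide Z": (5_000, 45.0),
}

total_use = sum(use for use, _ in pesticides.values())
whp = {name: hp * use / total_use
       for name, (use, hp) in pesticides.items()}
# Rank by WHP: heavy use can outweigh a higher intrinsic hazard.
priority = sorted(whp, key=whp.get, reverse=True)
```

    Note how the use weighting reorders the list: pesticide Z has the highest intrinsic hazard but the lowest WHP, which is exactly the effect the index is designed to capture.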

  18. Trends and priorities in occupational health research and knowledge transfer in Italy.

    PubMed

    Rondinone, Bruna Maria; Boccuni, Fabio; Iavicoli, Sergio

    2010-06-01

    In 2000-2001, the Italian National Institute for Occupational Safety and Prevention (ISPESL) carried out a survey to identify the research priorities in the field of occupational safety and health (OSH). The present study, carried out in 2007-2008, was a follow-up designed to (i) review the themes identified earlier, (ii) detect emerging issues linked to new risks and forms of work, and (iii) look for any shifts in focus. The survey was extended to cover not only research but also the concept of knowledge transfer. In the first round, ISPESL distributed questionnaires to the heads of both university occupational medicine departments and prevention departments in local national health units (known as ASL in Italy) asking respondents to identify OSH priority themes. In the latest survey covering both research and the need for knowledge transfer, the same experts were asked to rank the importance of the earlier-identified topics and list any emerging issues in the OSH field. The two most important themes identified were "work accidents" and "occupational carcinogenesis". In the overall sample and among ASL experts, they received the 1st and 2nd highest mean scores. The university respondents also prioritized them but in reverse order. Some of the new priority topics included: risks associated with nanotechnologies; assessment of psychosocial and organizational risks; migration and work; and cost-benefit analysis of prevention. In light of the findings, efforts are urgently needed to identify research and knowledge transfer priorities related to workers' health and safety on an international scale using a standardized method in order to obtain comparable results, avoid wasteful duplication of resources, and reduce occupational accidents and illness.

  19. Irrigation, risk aversion, and water right priority under water supply uncertainty.

    PubMed

    Li, Man; Xu, Wenchao; Rosegrant, Mark W

    2017-09-01

    This paper explores the impact of a water right's allocative priority, as an indicator of farmers' risk-bearing ability, on land irrigation under water supply uncertainty. We develop and use an economic model to simulate farmers' land irrigation decisions and associated economic returns in eastern Idaho. Results indicate that the optimal acreage of land irrigated increases with water right priority when hydroclimate risk exhibits a negatively skewed or right-truncated distribution. Simulation results suggest that prior appropriation enables senior water rights holders to allocate a higher proportion of their land to irrigation, 6 times as much as junior rights holders do, creating a gap in annual expected net revenue of up to $141.4 per acre, or $55,800 per farm, between the two groups. The optimal irrigated acreage, expected net revenue, and shadow value of a water right's priority are subject to substantial changes under a changing climate in the future, where temporal variation in water supply risks significantly affects the profitability of agricultural land use under the priority-based water sharing mechanism.

  20. A failure modes and effects analysis study for gynecologic high-dose-rate brachytherapy.

    PubMed

    Mayadev, Jyoti; Dieterich, Sonja; Harse, Rick; Lentz, Susan; Mathai, Mathew; Boddu, Sunita; Kern, Marianne; Courquin, Jean; Stern, Robin L

    2015-01-01

    To improve the quality of our gynecologic brachytherapy practice and reduce reportable events, we performed a process analysis using failure modes and effects analysis (FMEA). The FMEA included a multidisciplinary team specifically targeting the tandem and ring brachytherapy procedure. The treatment process was divided into six subprocesses, and failure modes (FMs) were identified for each. A scoring guideline was developed based on published FMEA studies and assigned through team consensus. FMs were ranked according to overall and severity scores. FMs with scores above 5% of the highest risk priority number (RPN) were selected for in-depth analysis. The efficiency of each existing quality assurance measure in detecting each FM was analyzed. We identified 170 FMs, and 99 were scored. RPN scores ranged from 1 to 192. Of the 13 highest-ranking FMs with RPN scores >80, half had severity scores of 8 or 9, with no mode having a severity of 10. Of these FMs, the originating process steps were simulation (5), treatment planning (5), treatment delivery (2), and insertion (1). Our high-ranking FMs focused on communication and the potential for applicator movement. Evaluation of the efficiency and comprehensiveness of our quality assurance program showed coverage of all but three of the top 49 FMs ranked by RPN. This is the first reported FMEA process for a comprehensive gynecologic brachytherapy procedure overview. We were able to identify FMs that could potentially and severely impact the patient's treatment. We continue to adjust our quality assurance program based on the results of our FMEA. Published by Elsevier Inc.
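
    The risk priority number used here is conventionally the product of severity, occurrence, and detectability ratings, with the highest products analyzed first. A minimal sketch of that ranking step (the failure modes and 1-10 ratings below are hypothetical illustrations, not the study's data):

```python
# FMEA ranking sketch: RPN = severity x occurrence x detectability.
# Failure modes and their 1-10 ratings below are illustrative only.
failure_modes = {
    "applicator movement after imaging":  (9, 4, 5),
    "incomplete handoff between teams":   (6, 5, 4),
    "wrong source-position data entered": (8, 2, 3),
}

def rpn(severity, occurrence, detectability):
    """Risk priority number: 1-1000 for ratings on a 1-10 scale."""
    return severity * occurrence * detectability

# Rank failure modes from highest to lowest RPN.
ranked = sorted(failure_modes.items(),
                key=lambda kv: rpn(*kv[1]), reverse=True)
for name, ratings in ranked:
    print(f"{rpn(*ratings):4d}  {name}")
```

In this convention a hard-to-detect, frequent, severe failure dominates the list even when no single rating is maximal.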

  1. 25 CFR Appendix A to Subpart C - IRR High Priority Project Scoring Matrix

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ...—IRR High Priority Project Scoring Matrix Score 10 5 3 1 0 Accident and fatality rate for candidate route 1 Severe X Moderate Minimal No accidents. Years since last IRR construction project completed... elements Addresses 1 element. 1 National Highway Traffic Safety Board standards. 2 Total funds requested...

  2. Identification of priorities for medication safety in neonatal intensive care.

    PubMed

    Kunac, Desireé L; Reith, David M

    2005-01-01

    Although neonates are reported to be at greater risk of medication error than infants and older children, little is known about the causes and characteristics of error in this patient group. Failure mode and effects analysis (FMEA) is a technique used in industry to evaluate system safety and identify potential hazards in advance. The aim of this study was to identify and prioritize potential failures in the neonatal intensive care unit (NICU) medication use process through application of FMEA. Using the FMEA framework and a systems-based approach, an eight-member multidisciplinary panel worked as a team to create a flow diagram of the neonatal unit medication use process. Then, by brainstorming, the panel identified all potential failures, their causes and their effects at each step in the process. Each panel member independently rated failures based on occurrence, severity and likelihood of detection to allow calculation of a risk priority score (RPS). The panel identified 72 failures, with 193 associated causes and effects. Vulnerabilities were found to be distributed across the entire process, but multiple failures and associated causes were possible when prescribing the medication and when preparing the drug for administration. The top-ranking issue was a perceived lack of awareness of medication safety issues (RPS 273), due to a lack of medication safety training. The next highest-ranking issues were found to occur at the administration stage. Common potential failures related to errors in the dose, timing of administration, infusion pump settings and route of administration. Perceived causes were multiple, but were largely associated with unsafe systems for medication preparation and storage in the unit, variable staff skill level and lack of computerised technology.
Interventions to decrease medication-related adverse events in the NICU should aim to increase staff awareness of medication safety issues and focus on medication administration processes.

  3. Failure mode and effect analysis: improving intensive care unit risk management processes.

    PubMed

    Askari, Roohollah; Shafii, Milad; Rafiei, Sima; Abolhassani, Mohammad Sadegh; Salarikhah, Elaheh

    2017-04-18

    Purpose Failure modes and effects analysis (FMEA) is a practical tool to evaluate risks, discover failures in a proactive manner and propose corrective actions to reduce or eliminate potential risks. The purpose of this paper is to apply the FMEA technique to examine the hazards associated with the process of service delivery in the intensive care unit (ICU) of a tertiary hospital in Yazd, Iran. Design/methodology/approach This was a before-after study conducted between March 2013 and December 2014. An FMEA team was formed, and all potential hazards associated with ICU services, together with their frequency and severity, were identified. A risk priority number was then calculated for each activity as an indicator of high-priority areas needing special attention and resource allocation. Findings Eight failure modes with the highest priority scores, including endotracheal tube defect, wrong placement of endotracheal tube, EVD interface, aspiration failure during suctioning, chest tube failure, tissue injury and deep vein thrombosis, were selected for improvement. Findings affirmed that the improvement strategies were generally satisfactory and significantly decreased total failures. Practical implications Application of FMEA in ICUs proved effective in proactively decreasing the risk of failures and brought the control measures up to acceptable levels in all eight areas of function. Originality/value Using a prospective risk assessment approach such as FMEA can be beneficial in dealing with potential failures by proposing preventive actions in a proactive manner. The method can serve as a tool for continuous quality improvement in healthcare, as it identifies both systemic and human errors and offers practical advice for dealing with them effectively.

  4. Prioritizing human pharmaceuticals for ecological risks in the freshwater environment of Korea.

    PubMed

    Ji, Kyunghee; Han, Eun Jeong; Back, Sunhyoung; Park, Jeongim; Ryu, Jisung; Choi, Kyungho

    2016-04-01

    Pharmaceutical residues are potential threats to aquatic ecosystems. Because more than 3000 active pharmaceutical ingredients (APIs) are in use, identifying high-priority pharmaceuticals is important for developing appropriate management options. Priority pharmaceuticals may vary by geographical region, because their occurrence levels can be influenced by demographic, societal, and regional characteristics. In the present study, the authors prioritized human pharmaceuticals of potential ecological risk in the Korean water environment based on amount of use, biological activity, and regional hydrologic characteristics. For this purpose, the authors estimated the annual production amounts of 695 human APIs in Korea, and then derived predicted environmental concentrations, using 2 approaches, to develop an initial candidate list of target pharmaceuticals. Major antineoplastic drugs and hormones were added to the initial candidate list regardless of their production amount because of their high biological activity potential. Predicted no-effect concentrations were derived for those pharmaceuticals based on ecotoxicity information available in the literature or by model prediction. Priority lists of human pharmaceuticals were developed based on ecological risks and the availability of relevant information. The priority APIs identified include acetaminophen, clarithromycin, ciprofloxacin, ofloxacin, metformin, and norethisterone. Many of these pharmaceuticals have been neither adequately monitored nor assessed for risks in Korea. Further efforts are needed to improve these lists and to develop management decisions for these compounds in Korean water. © 2015 SETAC.

  5. Translation of oral care practice guidelines into clinical practice by intensive care unit nurses.

    PubMed

    Ganz, Freda DeKeyser; Ofra, Raanan; Khalaila, Rabia; Levy, Hadassa; Arad, Dana; Kolpak, Orly; Ben Nun, Maureen; Drori, Yardena; Benbenishty, Julie

    2013-12-01

    The purpose of this study was to determine whether there was a change in the oral care practices of intensive care unit (ICU) nurses for ventilated patients after a national effort to increase evidence-based oral care practices. Descriptive comparison of ICU nurses in 2004-2005 and 2012. Two convenience national surveys of ICU nurses were collected in 2004-2005 (n = 218) and 2012 (n = 233). After the results of the initial survey were reported, a national effort to increase awareness of evidence-based oral care practices was conducted that included in-service presentations; publication of an evidence-based protocol in a national nursing journal; publication of the survey findings in an international nursing journal; and reports to the local press. A repeat survey was conducted 7 to 8 years later. The same survey instrument was used for both periods of data collection. This questionnaire included questions about demographic and personal characteristics and a checklist of oral care practices. Nurses rated their perceived priority level concerning oral care on a scale from 0 to 100. An evidence-based practice (EBP) score was computed representing the sum of 14 items related to equipment, solutions, assessments, and techniques associated with the evidence. The EBP score, priority score, and oral care practices were compared between the two samples. A regression model was built based on those variables that were associated with the EBP score in 2012. There was a statistically significant increase in the use of EBPs as shown by the EBP score and in the perceived priority level of oral care. Increased EBPs were found in the areas of teeth brushing and oral assessment. Decreases were found in the use of non-evidence-based practices, such as the use of gauze pads, tongue depressors, lemon water, and sodium bicarbonate. No differences were found in the use of chlorhexidine, toothpaste, or the nursing documentation of oral care practices. 
A multiple regression model was found to be significant, with the time of participation (2004-2005 vs. 2012) and the priority level of oral care significantly contributing to the model. The national effort was partially successful in improving evidence-based oral care practices; however, increased awareness of EBP also might have come from other sources. Other strategies related to knowledge translation need to be attempted and researched in this clinical setting, such as the use of opinion leaders, audits and feedback, small group consensus, provider reminder systems, incentives, clinical information systems, and computer decision support systems. This national effort to improve EBP did reap some rewards; however, other knowledge translation strategies should be used to further improve clinical practice. © 2013 Sigma Theta Tau International.

  6. Measuring and managing progress in the establishment of basic health services: the Afghanistan health sector balanced scorecard.

    PubMed

    Hansen, Peter M; Peters, David H; Niayesh, Haseebullah; Singh, Lakhwinder P; Dwivedi, Vikas; Burnham, Gilbert

    2008-01-01

    The Ministry of Public Health (MOPH) of Afghanistan has adopted the Balanced Scorecard (BSC) as a tool to measure and manage performance in delivery of a Basic Package of Health Services. Based on results from the 2004 baseline round, the MOPH identified eight of the 29 indicators on the BSC as priority areas for improvement. Like the 2004 round, the 2005 and 2006 BSCs involved a random selection of more than 600 health facilities, 1700 health workers and 5800 patient-provider interactions. The 2005 and 2006 BSCs demonstrated substantial improvements in all eight of the priority areas compared to 2004 baseline levels, with increases in median provincial scores for presence of active village health councils, availability of essential drugs, functional laboratories, provider knowledge, health worker training, use of clinical guidelines, monitoring of tuberculosis treatment, and provision of delivery care. For three of the priority indicators (drug availability, health worker training, and provider knowledge), scores remained unchanged or decreased between 2005 and 2006. This highlights the need to ensure that early gains achieved in the establishment of health services in Afghanistan are maintained over time. The use of a coherent and balanced monitoring framework to identify priority areas for improvement and measure performance over time reflects an objectives-based approach to management of health services that is proving to be effective in a difficult environment. © 2007 John Wiley & Sons, Ltd.

  7. Landscape level reforestation priorities for forest breeding landbirds in the Mississippi Alluvial Valley

    USGS Publications Warehouse

    Twedt, D.J.; Uihlein, W.B.; Fredrickson, L.H.; King, S.L.; Kaminski, R.M.

    2005-01-01

    Thousands of ha of cleared wetlands are being reforested annually in the Mississippi Alluvial Valley (MAV). Despite the expansive and long-term impacts of reforestation on the biological communities of the MAV, there is generally a lack of landscape level planning in its implementation. To address this deficiency we used raster-based digital data to assess the value of forest restoration to migratory landbirds for each ha within the MAV. Raster themes were developed that reflected distance from 3 existing forest cover parameters: (1) extant forest, (2) contiguous forest patches between 1,012 and 40,000 ha, and (3) forest cores with contiguous area 1 km from an agricultural, urban, or pastoral edge. Two additional raster themes were developed that combined information on the proportion of forest cover and average size of forest patches, respectively, within landscapes of 50,000, 100,000, 150,000, and 200,000 ha. Data from these 5 themes were amalgamated into a single raster using a weighting system that gave increased emphasis to existing forest cores, larger forest patches, and moderately forested landscapes while deemphasizing reforestation near small or isolated forest fragments and within largely agricultural landscapes. This amalgamated raster was then modified by the geographic location of historical forest cover and the current extent of public land ownership to assign a reforestation priority score to each ha in the MAV. However, because reforestation is not required on areas with extant forest cover and because restoration is unlikely on areas of open water and urban communities, these lands were not assigned a reforestation priority score. These spatially explicit reforestation priority scores were used to simulate reforestation of 368,000 ha (5%) of the highest priority lands in the MAV. 
Targeting restoration to these high priority areas resulted in a 54% increase in forest core, an area of forest core that exceeded the area of simulated reforestation. Bird Conservation Regions, developed within the framework of the Partners in Flight: Mississippi Alluvial Valley Bird Conservation Plan, encompassed a large proportion (circa 70%) of the area with highest priority for reforestation. Similarly, lands with high reforestation priority often were enrolled in the Wetland Reserve Program.

  8. The conservation of native priority medicinal plants in a Caatinga area in Ceará, northeastern Brazil.

    PubMed

    Santos, Maria O; Almeida, Bianca V DE; Ribeiro, Daiany A; Macêdo, Delmacia G DE; Macêdo, Márcia J F; Macedo, Julimery G F; Sousa, Francisca F S DE; Oliveira, Liana G S DE; Saraiva, Manuele E; Araújo, Thatiane M S; Souza, Marta M A

    2017-01-01

    Much of the Brazilian semiarid region faces a considerable process of degradation of natural resources, and ethnobotanical studies have contributed important information about use and traditional knowledge, serving as a tool for designing conservation strategies for native plant species. Thus, this study aimed to determine which medicinal species merit conservation priority in a "Caatinga" area in northeastern Brazil. The ethnobotanical data were collected through semi-structured interviews with key subjects selected through the "snowball" technique. The availability and conservation priority of species were assessed by relative density, risk of collection, local use, and diversity of use in the sampled forest fragment. A total of 42 native medicinal plants were recorded, and conservation priority scores were calculated for seven species; among them, Mimosa tenuiflora, Hymenaea courbaril, Ximenia americana and Amburana cearensis need immediate conservation attention, since their collection does not occur in a sustainable way. To ensure the perpetuation of these species and the sustainability of traditional therapeutic practice, conservation practices for the remaining Caatinga need to be developed to better conserve the species of the biome.

  9. Occupational risk assessment in the construction industry in Iran.

    PubMed

    Seifi Azad Mard, Hamid Reza; Estiri, Ali; Hadadi, Parinaz; Seifi Azad Mard, Mahshid

    2017-12-01

    Occupational accidents in the construction industry are more common than in other fields, and in developing countries, especially Iran, these accidents are more severe than the global average. Studies that trace the sources of these accidents and suggest solutions for them are therefore valuable. In this study, a combination of the failure mode and effects analysis method and fuzzy theory is used as a semi-qualitative-quantitative approach for analyzing risks and failure modes. The main causes of occupational accidents in this field were identified and analyzed based on three factors: severity, detection and occurrence. Depending on whether risks were high or low priority, corrective actions were suggested to reduce the occupational risks. Finally, the results showed that these actions produced a 40% decrease in high-priority risks.

  10. Predicting suicide with the SAD PERSONS scale.

    PubMed

    Katz, Cara; Randall, Jason R; Sareen, Jitender; Chateau, Dan; Walld, Randy; Leslie, William D; Wang, JianLi; Bolton, James M

    2017-09-01

    Suicide is a major public health issue, and a priority requirement is accurately identifying high-risk individuals. The SAD PERSONS suicide risk assessment scale is widely implemented in clinical settings despite limited supporting evidence. This article aims to determine the ability of the SAD PERSONS scale (SPS) to predict future suicide in the emergency department. Five thousand four hundred sixty-two consecutive adults were seen by psychiatry consultation teams in two tertiary emergency departments, with linkage to population-based administrative data to determine suicide deaths within 6 months, 1 year, and 5 years. Seventy-seven (1.4%) individuals died by suicide during the study period. When predicting suicide at 12 months, medium- and high-risk scores on SPS had a sensitivity of 49% and a specificity of 60%; the positive and negative predictive values were 0.9% and 99%, respectively. Half of the suicides at both 6- and 12-month intervals were classified as low risk by SPS at index visit. The area under the curve at 12 months for the Modified SPS was 0.59 (95% confidence interval [CI] 0.51-0.67). High-risk scores (compared to low risk) were significantly associated with death by suicide over the 5-year study period using the SPS (hazard ratio 2.49; 95% CI 1.34-4.61) and modified version (hazard ratio 2.29; 95% CI 1.24-2.29). Although widely used in educational and clinical settings, these findings do not support the use of the SPS and Modified SPS to predict suicide in adults seen by psychiatric services in the emergency department. © 2017 Wiley Periodicals, Inc.
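
    The operating characteristics reported above all follow from a standard 2x2 confusion table. A small sketch with invented counts (chosen only to land near the reported sensitivity and specificity; they are not the study's data) shows how a scale can have a reassuring negative predictive value yet a tiny positive predictive value when the outcome is rare:

```python
# Screening-test operating characteristics from a 2x2 table.
# Counts below are invented for illustration, not the study's data.
tp, fn = 49, 51        # true cases: flagged medium/high vs. missed (low risk)
fp, tn = 3960, 5940    # non-cases: flagged vs. correctly rated low risk

sensitivity = tp / (tp + fn)   # share of true cases that were flagged
specificity = tn / (tn + fp)   # share of non-cases rated low risk
ppv = tp / (tp + fp)           # true cases among everyone flagged
npv = tn / (tn + fn)           # non-cases among everyone rated low risk

print(f"sensitivity={sensitivity:.0%} specificity={specificity:.0%} "
      f"ppv={ppv:.1%} npv={npv:.0%}")
```

With a rare outcome, almost everyone rated low risk is indeed a non-case (high NPV), while very few flagged patients are true cases (low PPV), which is why NPV alone overstates a scale's usefulness.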

  11. A summary risk score for the prediction of Alzheimer disease in elderly persons.

    PubMed

    Reitz, Christiane; Tang, Ming-Xin; Schupf, Nicole; Manly, Jennifer J; Mayeux, Richard; Luchsinger, José A

    2010-07-01

    To develop a simple summary risk score for the prediction of Alzheimer disease in elderly persons based on their vascular risk profiles. A longitudinal, community-based study in New York, New York. Participants were one thousand fifty-one Medicare recipients aged 65 years or older and residing in New York who were free of dementia or cognitive impairment at baseline. We separately explored the associations of several vascular risk factors with late-onset Alzheimer disease (LOAD) using Cox proportional hazards models to identify factors that would contribute to the risk score. We then estimated the score value of each factor based on its beta coefficient and created the LOAD vascular risk score by summing these individual scores. Risk factors contributing to the risk score were age, sex, education, ethnicity, APOE epsilon4 genotype, history of diabetes, hypertension or smoking, high-density lipoprotein levels, and waist-to-hip ratio. The resulting risk score predicted dementia well. Across the vascular risk score quintiles, the risk of developing probable LOAD was 1.0 for persons with a score of 0 to 14 and increased 3.7-fold for persons with a score of 15 to 18, 3.6-fold for persons with a score of 19 to 22, 12.6-fold for persons with a score of 23 to 28, and 20.5-fold for persons with a score higher than 28. While additional studies in other populations are needed to validate and further develop the score, our study suggests that this vascular risk score could be a valuable tool to identify elderly individuals who might be at risk of LOAD. This risk score could be used to identify persons at risk of LOAD, but can also be used to adjust for confounders in epidemiologic studies.
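
    The score construction described above (per-factor points derived from Cox-model beta coefficients, then summed) can be sketched as follows. The factor names and point values below are hypothetical, not the published weights:

```python
# Summary-risk-score sketch: per-factor points (e.g., Cox betas scaled
# and rounded to integers) are summed into one score. The factors and
# point values here are hypothetical, not the study's published weights.
POINTS = {
    "age_75_plus": 4,
    "apoe_e4": 5,
    "diabetes": 3,
    "current_smoker": 2,
    "high_waist_hip_ratio": 3,
}

def risk_score(profile):
    """Sum the points of every known risk factor in a patient's profile."""
    return sum(POINTS.get(factor, 0) for factor in profile)

print(risk_score(["apoe_e4", "diabetes", "current_smoker"]))  # → 10
```

Integer point scores trade a little statistical efficiency for something a clinician can tally without a model.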

  12. Priority of VHS Development Based in Potential Area using Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Meirawan, D.; Ana, A.; Saripudin, S.

    2018-02-01

    The current condition of VHS is still inadequate in quality, quantity and relevance. The purpose of this research is to analyse the development of VHS based on regional potential using principal component analysis (PCA) in Bandung, Indonesia. This descriptive study analysed secondary data using principal-component-based data reduction, with Minitab statistical software. The results indicate that the areas with the lowest scores are the priorities for VHS development, with programs of majors matched to the regional potential. Based on the PCA scores, the main priority for VHS development in Bandung is Saguling, which has the lowest PCA value (416.92) in area 1, followed by Cihampelas with the lowest PCA value in region 2 and Padalarang with the lowest PCA value.
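
    The scoring step can be sketched with a generic PCA: standardize a region-by-indicator matrix, project it onto the first principal component, and treat the lowest-scoring regions as development priorities. The region names echo the abstract, but the indicator values and the scoring convention below are invented (the study itself used Minitab):

```python
import numpy as np

# Hypothetical region-by-indicator matrix (rows: regions; columns could
# be, e.g., school capacity, industry demand, workforce indicators).
regions = ["Saguling", "Cihampelas", "Padalarang", "Lembang"]
X = np.array([[2.0, 1.0, 3.0],
              [4.0, 2.0, 5.0],
              [3.0, 3.0, 4.0],
              [8.0, 6.0, 9.0]])

# Standardize each indicator, then take the first principal component.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
_, _, vt = np.linalg.svd(Z, full_matrices=False)
v = vt[0] if vt[0].sum() > 0 else -vt[0]   # fix the arbitrary SVD sign
pc1_scores = Z @ v

# Lowest PC1 score = highest development priority in this convention.
priority_order = [regions[i] for i in np.argsort(pc1_scores)]
print(priority_order)
```

Because the indicators are positively correlated, the first component acts as an overall "development level" axis, so sorting by it ranks under-served regions first.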

  13. Lifetime substance use and HIV sexual risk behaviors predict treatment response to contingency management among homeless, substance-dependent MSM.

    PubMed

    Reback, Cathy J; Peck, James A; Fletcher, Jesse B; Nuno, Miriam; Dierst-Davies, Rhodri

    2012-01-01

    Homeless, substance-dependent men who have sex with men (MSM) continue to suffer health disparities, including high rates of HIV. One hundred thirty-one homeless, substance-dependent MSM were randomized into a contingency management (CM) intervention to increase substance abstinence and health-promoting behaviors. Participants were recruited from a community-based, health education/risk reduction HIV prevention program, and the research activities were also conducted at the community site. Secondary analyses were conducted to identify and characterize treatment responders (defined as participants in a contingency management intervention who scored at or above the median on three primary outcomes). Treatment responders were more likely to be Caucasian/White (p < .05), report fewer years of lifetime methamphetamine, cocaine, and polysubstance use (p ≤ .05), and report more recent sexual partners and high-risk sexual behaviors than nonresponders (p < .05). The application of evidence-based interventions continues to be a public health priority, especially in the effort to implement effective interventions for use in community settings. The identification of both treatment responders and nonresponders is important for intervention development tailored to specific populations, both in service programs and research studies, to optimize outcomes among highly impacted populations.

  14. Total knee arthroplasty: good agreement of clinical severity scores between patients and consultants.

    PubMed

    Ebinesan, Ananthan D; Sarai, Bhupinder S; Walley, Gayle; Bridgman, Stephen; Maffulli, Nicola

    2006-07-31

    Nearly 20,000 patients per year in the UK receive total knee arthroplasty (TKA). One of the problems faced by the health services of many developed countries is the length of time patients spend waiting for elective treatment. We therefore report the results of a study in which the Salisbury Priority Scoring System (SPSS) was used by both surgeons and their patients to ascertain whether there were differences between the surgeon-generated and patient-generated Salisbury Priority Scores. The SPSS was used to assign relative priority to patients with knee osteoarthritis as part of a randomised controlled trial comparing the standard medial parapatellar approach versus the sub-vastus approach in TKA. The operating surgeons and each patient completed the SPSS at the same pre-assessment clinic. The SPSS assesses four criteria, namely progression of disease, pain or distress, disability or dependence on others, and loss of usual occupation. Cross-tabulations and agreement measures (Cohen's kappa) were computed. Overall, the four SPSS criteria showed kappa values of 0.526, 0.796, 0.813, and 0.820, respectively, indicating moderate to very good agreement between the patient and the operating consultant. Male patients showed better agreement than female patients. The Salisbury Priority Scoring System is a good means of assessing patients' needs in relation to elective surgery, with high agreement between the patient and the operating surgeon.
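
    Cohen's kappa, used above to quantify patient-consultant agreement, corrects raw agreement for the agreement expected by chance: kappa = (p_o - p_e) / (1 - p_e). A small sketch on an invented 2x2 cross-tabulation (not the study's data):

```python
# Hypothetical patient-vs-consultant cross-tabulation for one SPSS
# criterion (rows: patient rating, columns: consultant rating).
table = [[40, 5],
         [10, 45]]

n = sum(sum(row) for row in table)
p_observed = sum(table[i][i] for i in range(2)) / n          # raw agreement
row_marg = [sum(row) / n for row in table]
col_marg = [sum(table[i][j] for i in range(2)) / n for j in range(2)]
p_expected = sum(r * c for r, c in zip(row_marg, col_marg))  # chance agreement

kappa = (p_observed - p_expected) / (1 - p_expected)
print(round(kappa, 3))  # → 0.7
```

Here 85% raw agreement shrinks to kappa = 0.7 once the 50% agreement expected by chance is removed, which is why kappa in the 0.5-0.8 range is read as "moderate to very good" rather than near-perfect.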

  15. Updated Priorities Among Effective Clinical Preventive Services

    PubMed Central

    Maciosek, Michael V.; LaFrance, Amy B.; Dehmer, Steven P.; McGree, Dana A.; Flottemesch, Thomas J.; Xu, Zack; Solberg, Leif I.

    2017-01-01

    PURPOSE The Patient Protection and Affordable Care Act’s provisions for first-dollar coverage of evidence-based preventive services have reduced an important barrier to receipt of preventive care. Safety-net providers, however, still serve a substantial uninsured population, and clinician and patient time remain limited in all primary care settings. As a consequence, decision makers continue to set priorities to help focus their efforts. This report updates estimates of relative health impact and cost-effectiveness for evidence-based preventive services. METHODS We assessed the potential impact of 28 evidence-based clinical preventive services in terms of their cost-effectiveness and clinically preventable burden, as measured by quality-adjusted life years (QALYs) saved. Each service received 1 to 5 points on each of the 2 measures—cost-effectiveness and clinically preventable burden—for a total score ranging from 2 to 10. New microsimulation models were used to provide updated estimates of 12 of these services. Priorities for improving delivery rates were established by comparing the ranking with what is known of current delivery rates nationally. RESULTS The 3 highest-ranking services, each with a total score of 10, are immunizing children, counseling to prevent tobacco initiation among youth, and tobacco-use screening and brief intervention to encourage cessation among adults. Greatest population health improvement could be obtained from increasing utilization of clinical preventive services that address tobacco use, obesity-related behaviors, and alcohol misuse, as well as colorectal cancer screening and influenza vaccinations. CONCLUSIONS This study identifies high-priority preventive services and should help decision makers select which services to emphasize in quality-improvement initiatives. PMID:28376457
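
    The two-axis scoring described above can be sketched simply: 1 to 5 points for clinically preventable burden plus 1 to 5 points for cost-effectiveness, for totals of 2 to 10. The services listed are drawn from the abstract, but the per-axis point splits below are illustrative (the real values come from the microsimulation estimates):

```python
# Priority-score sketch: (CPB points, CE points), each 1-5, summed to a
# 2-10 total. The per-axis point splits below are illustrative only.
services = {
    "childhood immunization":      (5, 5),
    "adult tobacco-use screening": (5, 5),
    "colorectal cancer screening": (4, 3),
    "influenza vaccination":       (3, 3),
}

def total_score(cpb, ce):
    """Total priority score from clinically preventable burden + cost-effectiveness points."""
    assert 1 <= cpb <= 5 and 1 <= ce <= 5
    return cpb + ce

# Rank services from highest to lowest total score.
for name, (cpb, ce) in sorted(services.items(),
                              key=lambda kv: -total_score(*kv[1])):
    print(f"{total_score(cpb, ce):2d}  {name}")
```

Summing two capped 1-5 axes deliberately keeps burden and cost-effectiveness on equal footing, so neither dimension can dominate the ranking.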

  16. Expanding Access to Non-Medicalized Community-Based Rapid Testing to Men Who Have Sex with Men: An Urgent HIV Prevention Intervention (The ANRS-DRAG Study)

    PubMed Central

    Lorente, Nicolas; Preau, Marie; Vernay-Vaisse, Chantal; Mora, Marion; Blanche, Jerome; Otis, Joanne; Passeron, Alain; Le Gall, Jean-Marie; Dhotte, Philippe; Carrieri, Maria Patrizia; Suzan-Monti, Marie; Spire, Bruno

    2013-01-01

    Background Little is known about the public health benefits of community-based, non-medicalized rapid HIV testing offers (CBOffer) specifically targeting men who have sex with men (MSM), compared with the standard medicalized HIV testing offer (SMOffer) in France. This study aimed to verify whether such a CBOffer, implemented in voluntary counselling and testing centres, could improve access to less recently HIV-tested MSM who present a risk behaviour profile similar to or higher than MSM tested with the SMOffer. Method This multisite study enrolled MSM attending voluntary counselling and testing centres during opening hours in the SMOffer. CBOffer enrolees voluntarily came to the centres outside of opening hours, following a communication campaign in gay venues. A self-administered questionnaire was used to investigate HIV testing history and sexual behaviours including inconsistent condom use and risk reduction behaviours (in particular, a score of “intentional avoidance” for various at-risk situations was calculated). A mixed logistic regression identified factors associated with access to the CBOffer. Results Among the 330 participants, 64% attended the CBOffer. Percentages of inconsistent condom use in both offers were similar (51% CBOffer, 50% SMOffer). In multivariate analyses, those attending the CBOffer had only one or no test in the previous two years, had a lower intentional avoidance score, and met more casual partners in saunas and backrooms than SMOffer enrolees. Conclusion This specific rapid CBOffer attracted MSM less recently HIV-tested, who presented similar inconsistent condom use rates to SMOffer enrolees but who exposed themselves more to HIV-associated risks. Increasing entry points for HIV testing using community and non-medicalized tests is a priority to reach MSM who are still excluded. PMID:23613817

  17. Expanding access to non-medicalized community-based rapid testing to men who have sex with men: an urgent HIV prevention intervention (the ANRS-DRAG study).

    PubMed

    Lorente, Nicolas; Preau, Marie; Vernay-Vaisse, Chantal; Mora, Marion; Blanche, Jerome; Otis, Joanne; Passeron, Alain; Le Gall, Jean-Marie; Dhotte, Philippe; Carrieri, Maria Patrizia; Suzan-Monti, Marie; Spire, Bruno

    2013-01-01

    Little is known about the public health benefits of community-based, non-medicalized rapid HIV testing offers (CBOffer) specifically targeting men who have sex with men (MSM), compared with the standard medicalized HIV testing offer (SMOffer) in France. This study aimed to verify whether such a CBOffer, implemented in voluntary counselling and testing centres, could improve access to less recently HIV-tested MSM who present a risk behaviour profile similar to or higher than that of MSM tested with the SMOffer. This multisite study enrolled MSM attending voluntary counselling and testing centres during opening hours in the SMOffer. CBOffer enrolees voluntarily came to the centres outside of opening hours, following a communication campaign in gay venues. A self-administered questionnaire was used to investigate HIV testing history and sexual behaviours including inconsistent condom use and risk reduction behaviours (in particular, a score of "intentional avoidance" for various at-risk situations was calculated). A mixed logistic regression identified factors associated with access to the CBOffer. Among the 330 participants, 64% attended the CBOffer. Percentages of inconsistent condom use in both offers were similar (51% CBOffer, 50% SMOffer). In multivariate analyses, those attending the CBOffer had only one or no test in the previous two years, had a lower intentional avoidance score, and met more casual partners in saunas and backrooms than SMOffer enrolees. This specific rapid CBOffer attracted MSM less recently HIV-tested, who presented similar inconsistent condom use rates to SMOffer enrolees but who exposed themselves more to HIV-associated risks. Increasing entry points for HIV testing using community and non-medicalized tests is a priority to reach MSM who are still excluded.

  18. Irrigation, risk aversion, and water right priority under water supply uncertainty

    NASA Astrophysics Data System (ADS)

    Li, Man; Xu, Wenchao; Rosegrant, Mark W.

    2017-09-01

    This paper explores the impacts of a water right's allocative priority—as an indicator of farmers' risk-bearing ability—on land irrigation under water supply uncertainty. We develop and use an economic model to simulate farmers' land irrigation decision and associated economic returns in eastern Idaho. Results indicate that the optimal acreage of land irrigated increases with water right priority when hydroclimate risk exhibits a negatively skewed or right-truncated distribution. Simulation results suggest that prior appropriation enables senior water rights holders to allocate a higher proportion of their land to irrigation, 6 times as much as junior rights holders do, creating a gap in the annual expected net revenue reaching up to $141.4 acre⁻¹ or $55,800 per farm between the two groups. The optimal irrigated acreage, expected net revenue, and shadow value of a water right's priority are subject to substantial changes under a changing climate in the future, where temporal variation in water supply risks significantly affects the profitability of agricultural land use under the priority-based water sharing mechanism.
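    The prior-appropriation logic described above can be illustrated with a small simulation. This is a hedged sketch, not the authors' model: the crop revenue, irrigation cost, per-acre water requirement, and two-point supply distribution below are all invented, and the only mechanism carried over is that a senior right is served before a junior one.

```python
import random

def expected_profit(acres, draws, senior_claim,
                    revenue=500.0, cost=180.0, req=2.0):
    """Mean profit over sampled water-supply draws (acre-feet) for a farm
    irrigating `acres` acres that each need `req` acre-feet; rights senior
    to this farm take `senior_claim` acre-feet off the top of every draw."""
    total = 0.0
    for supply in draws:
        available = max(supply - senior_claim, 0.0)  # residual after seniors
        watered = min(acres, available / req)        # acres actually irrigated
        total += revenue * watered - cost * acres    # cost paid on all planted acres
    return total / len(draws)

def best_acreage(draws, senior_claim):
    """Acreage (searched on a 0-100 grid) maximizing expected profit."""
    return max(range(0, 101, 5),
               key=lambda a: expected_profit(a, draws, senior_claim))

random.seed(7)
# Negatively skewed supply: usually ample (200 acre-feet), occasionally drought (40).
draws = [200.0 if random.random() > 0.2 else 40.0 for _ in range(5000)]
senior = best_acreage(draws, senior_claim=0.0)    # first call on the supply
junior = best_acreage(draws, senior_claim=120.0)  # served after 120 acre-feet of claims
```

    Under these invented numbers the senior holder's optimal irrigated acreage is well above the junior holder's, reproducing the qualitative gap described above.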

  19. Setting health research priorities using the CHNRI method: IV. Key conceptual advances

    PubMed Central

    Rudan, Igor

    2016-01-01

    Introduction Child Health and Nutrition Research Initiative (CHNRI) started as an initiative of the Global Forum for Health Research in Geneva, Switzerland. Its aim was to develop a method that could assist priority setting in health research investments. The first version of the CHNRI method was published in 2007–2008. The aim of this paper was to summarize the history of the development of the CHNRI method and its key conceptual advances. Methods The guiding principle of the CHNRI method is to expose the potential of many competing health research ideas to reduce disease burden and inequities that exist in the population in a feasible and cost-effective way. Results The CHNRI method introduced three key conceptual advances that led to its increased popularity in comparison to other priority-setting methods and processes. First, it proposed a systematic approach to listing a large number of possible research ideas, using the “4D” framework (description, delivery, development and discovery research) and a well-defined “depth” of proposed research ideas (research instruments, avenues, options and questions). Second, it proposed a systematic approach for discriminating between many proposed research ideas based on a well-defined context and criteria. The five “standard” components of the context are the population of interest, the disease burden of interest, geographic limits, time scale and the preferred style of investing with respect to risk. The five “standard” criteria proposed for prioritization between research ideas are answerability, effectiveness, deliverability, maximum potential for disease burden reduction and the effect on equity. However, both the context and the criteria can be flexibly changed to meet the specific needs of each priority-setting exercise. Third, it facilitated consensus development through measuring collective optimism on each component of each research idea among a larger group of experts using a simple scoring system.
This enabled the use of the knowledge of many experts in the field, “visualising” their collective opinion and presenting the list of many research ideas with their ranks, based on an intuitive score that ranges between 0 and 100. Conclusions Two recent reviews showed that the CHNRI method, an approach essentially based on “crowdsourcing”, has become the dominant approach to setting health research priorities in the global biomedical literature over the past decade. With more than 50 published examples of implementation to date, it is now widely used in many international organisations for collective decision-making on health research priorities. The applications have been helpful in promoting better balance between investments in fundamental research, translation research and implementation research. PMID:27418959
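    As a rough sketch of the scoring system summarized above (the 1 / 0.5 / 0 response coding and the example answers are illustrative assumptions, not data from the paper): each criterion's collective optimism is the mean of the experts' responses, and the overall research priority score rescales the mean across criteria to 0-100.

```python
def chnri_score(answers):
    """Research priority score (0-100) for one research idea.
    `answers` maps each criterion to the experts' responses, coded
    1 = yes, 0 = no, 0.5 = informed but undecided, None = declined."""
    criterion_means = []
    for responses in answers.values():
        valid = [r for r in responses if r is not None]
        criterion_means.append(sum(valid) / len(valid))  # collective optimism
    return 100 * sum(criterion_means) / len(criterion_means)

# Hypothetical answers from five experts on the five standard criteria.
idea = {
    "answerability":    [1, 1, 0.5, 1, None],
    "effectiveness":    [1, 0.5, 0.5, 0, 1],
    "deliverability":   [1, 1, 1, 0.5, 0.5],
    "burden reduction": [0.5, 0.5, 0, 0, 1],
    "effect on equity": [1, 1, 0.5, 1, 1],
}
score = chnri_score(idea)
```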

  20. Laboratory-based and office-based risk scores and charts to predict 10-year risk of cardiovascular disease in 182 countries: a pooled analysis of prospective cohorts and health surveys.

    PubMed

    Ueda, Peter; Woodward, Mark; Lu, Yuan; Hajifathalian, Kaveh; Al-Wotayan, Rihab; Aguilar-Salinas, Carlos A; Ahmadvand, Alireza; Azizi, Fereidoun; Bentham, James; Cifkova, Renata; Di Cesare, Mariachiara; Eriksen, Louise; Farzadfar, Farshad; Ferguson, Trevor S; Ikeda, Nayu; Khalili, Davood; Khang, Young-Ho; Lanska, Vera; León-Muñoz, Luz; Magliano, Dianna J; Margozzini, Paula; Msyamboza, Kelias P; Mutungi, Gerald; Oh, Kyungwon; Oum, Sophal; Rodríguez-Artalejo, Fernando; Rojas-Martinez, Rosalba; Valdivia, Gonzalo; Wilks, Rainford; Shaw, Jonathan E; Stevens, Gretchen A; Tolstrup, Janne S; Zhou, Bin; Salomon, Joshua A; Ezzati, Majid; Danaei, Goodarz

    2017-03-01

    Worldwide implementation of risk-based cardiovascular disease (CVD) prevention requires risk prediction tools that are contemporarily recalibrated for the target country and can be used where laboratory measurements are unavailable. We present two cardiovascular risk scores, with and without laboratory-based measurements, and the corresponding risk charts for 182 countries to predict 10-year risk of fatal and non-fatal CVD in adults aged 40-74 years. Based on our previous laboratory-based prediction model (Globorisk), we used data from eight prospective studies to estimate coefficients of the risk equations using proportional hazard regressions. The laboratory-based risk score included age, sex, smoking, blood pressure, diabetes, and total cholesterol; in the non-laboratory (office-based) risk score, we replaced diabetes and total cholesterol with BMI. We recalibrated risk scores for each sex and age group in each country using country-specific mean risk factor levels and CVD rates. We used recalibrated risk scores and data from national surveys (using data from adults aged 40-64 years) to estimate the proportion of the population at different levels of CVD risk for ten countries from different world regions as examples of the information the risk scores provide; we applied a risk threshold for high risk of at least 10% for high-income countries (HICs) and at least 20% for low-income and middle-income countries (LMICs) on the basis of national and international guidelines for CVD prevention. We estimated the proportion of men and women who were similarly categorised as high risk or low risk by the two risk scores. Predicted risks for the same risk factor profile were generally lower in HICs than in LMICs, with the highest risks in countries in central and southeast Asia and eastern Europe, including China and Russia. 
In HICs, the proportion of people aged 40-64 years at high risk of CVD ranged from 1% for South Korean women to 42% for Czech men (using a ≥10% risk threshold), and in low-income countries ranged from 2% in Uganda (men and women) to 13% in Iranian men (using a ≥20% risk threshold). More than 80% of adults were similarly classified as low or high risk by the laboratory-based and office-based risk scores. However, the office-based model substantially underestimated the risk among patients with diabetes. Our risk charts provide risk assessment tools that are recalibrated for each country and make the estimation of CVD risk possible without using laboratory-based measurements. National Institutes of Health. Copyright © 2017 Elsevier Ltd. All rights reserved.
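    The recalibration step can be sketched as follows: estimate hazard-ratio coefficients once, then tune the baseline survival so that the mean predicted risk in a national survey sample matches the country's observed 10-year CVD risk. All coefficients, risk-factor means and the toy survey below are hypothetical; this shows the general construction, not the Globorisk equation itself.

```python
import math

def ten_year_risk(x, beta, x_mean, s0):
    """Proportional-hazards risk: 1 - S0(10) ** exp(beta . (x - x_mean))."""
    lp = sum(b * (xi - m) for b, xi, m in zip(beta, x, x_mean))
    return 1 - s0 ** math.exp(lp)

def recalibrate_s0(survey, beta, x_mean, target_mean_risk):
    """Pick baseline survival S0 by bisection so that the mean predicted
    risk over the survey sample equals the country's observed risk; this
    is the core of country-specific recalibration."""
    lo, hi = 1e-9, 1.0 - 1e-9
    for _ in range(200):
        mid = (lo + hi) / 2
        mean_risk = sum(ten_year_risk(x, beta, x_mean, mid)
                        for x in survey) / len(survey)
        if mean_risk > target_mean_risk:
            lo = mid  # raise S0 to lower predicted risk
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical office-based profile: (age, systolic BP, smoker, BMI).
beta, x_mean = (0.07, 0.02, 0.6, 0.03), (58.0, 138.0, 0.5, 28.0)
survey = [(55, 140, 1, 27), (48, 120, 0, 24), (62, 160, 1, 31), (70, 135, 0, 29)]
s0 = recalibrate_s0(survey, beta, x_mean, target_mean_risk=0.10)
```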

  1. Laboratory-based and office-based risk scores and charts to predict 10-year risk of cardiovascular disease in 182 countries: a pooled analysis of prospective cohorts and health surveys

    PubMed Central

    Ueda, Peter; Woodward, Mark; Lu, Yuan; Hajifathalian, Kaveh; Al-Wotayan, Rihab; Aguilar-Salinas, Carlos A; Ahmadvand, Alireza; Azizi, Fereidoun; Bentham, James; Cifkova, Renata; Di Cesare, Mariachiara; Eriksen, Louise; Farzadfar, Farshad; Ferguson, Trevor S; Ikeda, Nayu; Khalili, Davood; Khang, Young-Ho; Lanska, Vera; León-Muñoz, Luz; Magliano, Dianna J; Margozzini, Paula; Msyamboza, Kelias P; Mutungi, Gerald; Oh, Kyungwon; Oum, Sophal; Rodríguez-Artalejo, Fernando; Rojas-Martinez, Rosalba; Valdivia, Gonzalo; Wilks, Rainford; Shaw, Jonathan E; Stevens, Gretchen A; Tolstrup, Janne S; Zhou, Bin; Salomon, Joshua A; Ezzati, Majid; Danaei, Goodarz

    2017-01-01

    Summary Background Worldwide implementation of risk-based cardiovascular disease (CVD) prevention requires risk prediction tools that are contemporarily recalibrated for the target country and can be used where laboratory measurements are unavailable. We present two cardiovascular risk scores, with and without laboratory-based measurements, and the corresponding risk charts for 182 countries to predict 10-year risk of fatal and non-fatal CVD in adults aged 40–74 years. Methods Based on our previous laboratory-based prediction model (Globorisk), we used data from eight prospective studies to estimate coefficients of the risk equations using proportional hazard regressions. The laboratory-based risk score included age, sex, smoking, blood pressure, diabetes, and total cholesterol; in the non-laboratory (office-based) risk score, we replaced diabetes and total cholesterol with BMI. We recalibrated risk scores for each sex and age group in each country using country-specific mean risk factor levels and CVD rates. We used recalibrated risk scores and data from national surveys (using data from adults aged 40–64 years) to estimate the proportion of the population at different levels of CVD risk for ten countries from different world regions as examples of the information the risk scores provide; we applied a risk threshold for high risk of at least 10% for high-income countries (HICs) and at least 20% for low-income and middle-income countries (LMICs) on the basis of national and international guidelines for CVD prevention. We estimated the proportion of men and women who were similarly categorised as high risk or low risk by the two risk scores. Findings Predicted risks for the same risk factor profile were generally lower in HICs than in LMICs, with the highest risks in countries in central and southeast Asia and eastern Europe, including China and Russia. 
In HICs, the proportion of people aged 40–64 years at high risk of CVD ranged from 1% for South Korean women to 42% for Czech men (using a ≥10% risk threshold), and in low-income countries ranged from 2% in Uganda (men and women) to 13% in Iranian men (using a ≥20% risk threshold). More than 80% of adults were similarly classified as low or high risk by the laboratory-based and office-based risk scores. However, the office-based model substantially underestimated the risk among patients with diabetes. Interpretation Our risk charts provide risk assessment tools that are recalibrated for each country and make the estimation of CVD risk possible without using laboratory-based measurements. PMID:28126460

  2. A review of soil heavy metal pollution from industrial and agricultural regions in China: Pollution and risk assessment.

    PubMed

    Yang, Qianqi; Li, Zhiyuan; Lu, Xiaoning; Duan, Qiannan; Huang, Lei; Bi, Jun

    2018-06-14

    Soil heavy metal pollution has become increasingly serious and widespread in China. To date, few studies have assessed nationwide soil heavy metal pollution induced by industrial and agricultural activities in China. This review compiled heavy metal concentrations in soils of 402 industrial sites and 1041 agricultural sites in China through document retrieval. Based on this database, the review assessed soil heavy metal concentrations and estimated the associated ecological and health risks on a national scale. The results revealed that heavy metal pollution and the associated risks posed by cadmium (Cd), lead (Pb) and arsenic (As) are the most serious. In addition, heavy metal pollution and associated risks in industrial regions are more severe than those in agricultural regions, and those in southeast China are more severe than those in northwest China. It is worth noting that children are more likely to be affected by heavy metal pollution than adults. Based on the assessment results, Cd, Pb and As are determined to be the priority control heavy metals; mining areas are the priority control areas within industrial regions; food crop plantations are the priority control areas in agricultural regions; and children are determined to be the priority protection population group. This paper provides a comprehensive ecological and health risk assessment of heavy metals in soils in Chinese industrial and agricultural regions and thus provides insights for policymakers regarding exposure reduction and management. Copyright © 2018. Published by Elsevier B.V.
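    Concentration data of this kind are commonly turned into an ecological risk estimate with a Hakanson-style potential ecological risk index; the abstract does not spell out its exact method, so the sketch below is an assumed construction. The toxic-response factors shown are values often used in this literature, and the background and measured concentrations are invented.

```python
def ecological_risk(soil, background, toxic_response):
    """Potential ecological risk: Er_i = Tr_i * (C_i / C_background_i),
    with the overall index RI being the sum of the single-metal Er_i."""
    er = {m: toxic_response[m] * soil[m] / background[m] for m in soil}
    return er, sum(er.values())

# Toxic-response factors commonly used in this literature (not from the paper).
tr = {"Cd": 30, "Pb": 5, "As": 10}
bg = {"Cd": 0.1, "Pb": 26.0, "As": 11.0}    # illustrative background values, mg/kg
soil = {"Cd": 0.6, "Pb": 52.0, "As": 22.0}  # illustrative measured values, mg/kg
er, ri = ecological_risk(soil, bg, tr)       # Cd dominates the index
```

    Because Cd's toxic-response factor is large and its background level is low, even modest Cd enrichment dominates the index, which is consistent with Cd being flagged as a priority control metal.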

  3. Developing points-based risk-scoring systems in the presence of competing risks.

    PubMed

    Austin, Peter C; Lee, Douglas S; D'Agostino, Ralph B; Fine, Jason P

    2016-09-30

    Predicting the occurrence of an adverse event over time is an important issue in clinical medicine. Clinical prediction models and associated points-based risk-scoring systems are popular statistical methods for summarizing the relationship between a multivariable set of patient risk factors and the risk of the occurrence of an adverse event. Points-based risk-scoring systems are popular amongst physicians as they permit a rapid assessment of patient risk without the use of computers or other electronic devices. The use of such points-based risk-scoring systems facilitates evidence-based clinical decision making. There is a growing interest in cause-specific mortality and in non-fatal outcomes. However, when considering these types of outcomes, one must account for competing risks whose occurrence precludes the occurrence of the event of interest. We describe how points-based risk-scoring systems can be developed in the presence of competing events. We illustrate the application of these methods by developing risk-scoring systems for predicting cardiovascular mortality in patients hospitalized with acute myocardial infarction. Code in the R statistical programming language is provided for the implementation of the described methods. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
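    The paper's R code targets competing-risks models specifically, but the generic points-construction step it builds on can be sketched in a few lines: divide each risk-factor level's regression coefficient by the amount of log-hazard deemed worth one point, and round. The coefficients below are hypothetical, not the paper's fitted values.

```python
def points_table(level_betas, b_constant):
    """Framingham-style points construction: each factor level gets
    round(beta / b_constant) points, where b_constant is the amount
    of log-hazard worth one point."""
    return {factor: {level: round(beta / b_constant)
                     for level, beta in levels.items()}
            for factor, levels in level_betas.items()}

# Hypothetical cause-specific log-hazard coefficients for cardiovascular
# death after myocardial infarction (invented for illustration).
betas = {
    "age":      {"<65": 0.0, "65-74": 0.45, ">=75": 0.95},
    "shock":    {"no": 0.0, "yes": 1.30},
    "diabetes": {"no": 0.0, "yes": 0.40},
}
pts = points_table(betas, b_constant=0.45)  # one point = beta for age 65-74
total = pts["age"][">=75"] + pts["shock"]["yes"] + pts["diabetes"]["no"]
```

    In a competing-risks setting the total points are then mapped to cumulative incidence from a cause-specific or subdistribution hazard model rather than to a single-event survival curve; only the point-assignment step is sketched here.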

  4. Failure modes and effects analysis (FMEA) for Gamma Knife radiosurgery.

    PubMed

    Xu, Andy Yuanguang; Bhatnagar, Jagdish; Bednarz, Greg; Flickinger, John; Arai, Yoshio; Vacsulka, Jonet; Feng, Wenzheng; Monaco, Edward; Niranjan, Ajay; Lunsford, L Dade; Huq, M Saiful

    2017-11-01

    Gamma Knife radiosurgery is a highly precise and accurate treatment technique for treating brain diseases with low risk of serious error that nevertheless could potentially be reduced. We applied the AAPM Task Group 100 recommended failure modes and effects analysis (FMEA) tool to develop a risk-based quality management program for Gamma Knife radiosurgery. A team consisting of medical physicists, radiation oncologists, neurosurgeons, radiation safety officers, nurses, operating room technologists, and schedulers at our institution and an external physicist expert on Gamma Knife was formed for the FMEA study. A process tree and a failure mode table were created for the Gamma Knife radiosurgery procedures using the Leksell Gamma Knife Perfexion and 4C units. Three scores for the probability of occurrence (O), the severity (S), and the probability of no detection for failure mode (D) were assigned to each failure mode by 8 professionals on a scale from 1 to 10. An overall risk priority number (RPN) for each failure mode was then calculated from the averaged O, S, and D scores. The coefficient of variation for each O, S, or D score was also calculated. The failure modes identified were prioritized in terms of both the RPN scores and the severity scores. The established process tree for Gamma Knife radiosurgery consists of 10 subprocesses and 53 steps, including a subprocess for frame placement and 11 steps that are directly related to the frame-based nature of the Gamma Knife radiosurgery. Out of the 86 failure modes identified, 40 Gamma Knife specific failure modes were caused by the potential for inappropriate use of the radiosurgery head frame, the imaging fiducial boxes, the Gamma Knife helmets and plugs, the skull definition tools as well as other features of the GammaPlan treatment planning system. 
The other 46 failure modes are associated with the registration, imaging, image transfer, and contouring processes that are common to all external beam radiation therapy techniques. The failure modes with the highest hazard scores are related to imperfect frame adaptor attachment, bad fiducial box assembly, unsecured plugs/inserts, overlooked target areas, and undetected machine mechanical failure during the morning QA process. The implementation of the FMEA approach for Gamma Knife radiosurgery enabled deeper understanding of the overall process among all professionals involved in the care of the patient and helped identify potential weaknesses in the overall process. The results of the present study give us a basis for the development of a risk-based quality management program for Gamma Knife radiosurgery. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
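    The scoring arithmetic described above, averaging each rater's O, S and D scores, multiplying the averages into an RPN, and using the coefficient of variation as a disagreement check, can be sketched as follows (the ratings are invented):

```python
from statistics import mean, stdev

def rpn_with_cv(ratings):
    """ratings: one (O, S, D) triple per rater, each on a 1-10 scale.
    Returns the RPN (product of the per-dimension means) and each
    dimension's coefficient of variation as a rater-agreement check."""
    dims = list(zip(*ratings))            # [(O...), (S...), (D...)]
    means = [mean(d) for d in dims]
    cvs = [stdev(d) / mean(d) for d in dims]
    o, s, d = means
    return o * s * d, cvs

# Invented scores from four raters for one hypothetical failure mode.
ratings = [(3, 9, 6), (2, 8, 7), (4, 9, 5), (3, 10, 6)]
score, cvs = rpn_with_cv(ratings)
```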

  5. Prioritizing health technologies in a Primary Care Trust.

    PubMed

    Wilson, Edward; Sussex, Jon; Macleod, Christine; Fordham, Richard

    2007-04-01

    In the English National Health Service (NHS), Primary Care Trusts (PCTs) are responsible for commissioning health-care services on behalf of their populations. As resources are finite, decisions are required as to which services best fulfil population needs. Evidence on effectiveness varies in quality and availability. Nevertheless, decisions still have to be made. We report the development and pilot application of a multi-criteria prioritization mechanism in an English PCT, capable of accommodating a wide variety of evidence to rank six service developments. The mechanism proved valuable in assisting prioritization decisions and feedback was positive. Two community-based interventions were expected to save money in the long term and were ranked at the top of the list. Based on weighted benefit score and cost, two preventive programmes were ranked third and fourth. Finally, two National Institute for Health and Clinical Excellence (NICE)-approved interventions were ranked fifth and sixth. Sensitivity analysis revealed overlap in benefit scores for some of the interventions, representing diversity of opinion among the scoring panel. The method appears to be a practical approach to prioritization for commissioners of health care, but the pilot also revealed interesting divergences in relative priority between nationally mandated service developments and local health-care priorities.
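    One way to combine a weighted benefit score with cost, in the spirit of the mechanism described above (the abstract does not give the exact scoring rule, so the criteria, weights and costs below are invented assumptions), is to rank service developments by weighted benefit per unit cost:

```python
def benefit_per_cost(interventions, weights):
    """Rank service developments by weighted benefit score per unit cost."""
    def score(attrs):
        benefit = sum(weights[c] * v for c, v in attrs["benefits"].items())
        return benefit / attrs["cost"]
    return sorted(interventions, reverse=True,
                  key=lambda name: score(interventions[name]))

interventions = {  # invented criterion scores (0-10) and annual costs
    "community falls prevention": {"benefits": {"health gain": 8, "equity": 6},
                                   "cost": 120_000},
    "NICE-approved drug":         {"benefits": {"health gain": 7, "equity": 4},
                                   "cost": 400_000},
}
ranking = benefit_per_cost(interventions, {"health gain": 0.7, "equity": 0.3})
```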

  6. Derivation and validation of a novel risk score for safe discharge after acute lower gastrointestinal bleeding: a modelling study.

    PubMed

    Oakland, Kathryn; Jairath, Vipul; Uberoi, Raman; Guy, Richard; Ayaru, Lakshmana; Mortensen, Neil; Murphy, Mike F; Collins, Gary S

    2017-09-01

    Acute lower gastrointestinal bleeding is a common reason for emergency hospital admission, and identification of patients at low risk of harm, who are therefore suitable for outpatient investigation, is a clinical and research priority. We aimed to develop and externally validate a simple risk score to identify patients with lower gastrointestinal bleeding who could safely avoid hospital admission. We undertook model development with data from the National Comparative Audit of Lower Gastrointestinal Bleeding from 143 hospitals in the UK in 2015. Multivariable logistic regression modelling was used to identify predictors of safe discharge, defined as the absence of rebleeding, blood transfusion, therapeutic intervention, 28 day readmission, or death. The model was converted into a simplified risk scoring system and was externally validated in 288 patients admitted with lower gastrointestinal bleeding (184 safely discharged) from two UK hospitals (Charing Cross Hospital, London, and Hammersmith Hospital, London) that had not contributed data to the development cohort. We calculated C statistics for the new model and did a comparative assessment with six previously developed risk scores. Of 2336 prospectively identified admissions in the development cohort, 1599 (68%) were safely discharged. Age, sex, previous admission for lower gastrointestinal bleeding, rectal examination findings, heart rate, systolic blood pressure, and haemoglobin concentration strongly discriminated safe discharge in the development cohort (C statistic 0·84, 95% CI 0·82-0·86) and in the validation cohort (0·79, 0·73-0·84). Calibration plots showed the new risk score to have good calibration in the validation cohort. The score was better than the Rockall, Blatchford, Strate, BLEED, AIMS65, and NOBLADS scores in predicting safe discharge. A score of 8 or less predicts a 95% probability of safe discharge. 
We developed and validated a novel clinical prediction model with good discriminative performance to identify patients with lower gastrointestinal bleeding who are suitable for safe outpatient management, which has important economic and resource implications. Bowel Disease Research Foundation and National Health Service Blood and Transplant. Copyright © 2017 Elsevier Ltd. All rights reserved.
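    The C statistics used above to compare risk scores measure concordance between score and outcome. A minimal sketch of the computation for a binary outcome, with invented scores:

```python
from itertools import combinations

def c_statistic(scores, safe):
    """Concordance of a risk score with a binary outcome (1 = safely
    discharged): among pairs with different outcomes, the fraction in
    which the safely discharged patient has the lower score; tied
    scores count half."""
    concordant = tied = pairs = 0
    for i, j in combinations(range(len(scores)), 2):
        if safe[i] == safe[j]:
            continue
        pairs += 1
        s, u = (i, j) if safe[i] == 1 else (j, i)  # s: safe, u: not safe
        if scores[s] < scores[u]:
            concordant += 1
        elif scores[s] == scores[u]:
            tied += 1
    return (concordant + 0.5 * tied) / pairs

scores = [2, 9, 4, 7, 1, 3]   # invented risk scores
safe   = [1, 0, 1, 0, 1, 0]   # 1 = safely discharged
c = c_statistic(scores, safe)
```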

  7. MO-D-213-02: Quality Improvement Through a Failure Mode and Effects Analysis of Pediatric External Beam Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gray, J; Lukose, R; Bronson, J

    2015-06-15

    Purpose: To conduct a failure mode and effects analysis (FMEA) as per AAPM Task Group 100 on clinical processes associated with teletherapy, and the development of mitigations for processes with identified high risk. Methods: An FMEA was conducted on clinical processes relating to teletherapy treatment plan development and delivery. Nine major processes were identified for analysis. These steps included CT simulation, data transfer, image registration and segmentation, treatment planning, plan approval and preparation, and initial and subsequent treatments. Process tree mapping was utilized to identify the steps contained within each process. Failure modes (FM) were identified and evaluated with a scale of 1–10 based upon three metrics: the severity of the effect, the probability of occurrence, and the detectability of the cause. The analyzed metrics were scored as follows: severity – no harm = 1, lethal = 10; probability – not likely = 1, certainty = 10; detectability – always detected = 1, undetectable = 10. The three metrics were combined multiplicatively to determine the risk priority number (RPN) which defined the overall score for each FM and the order in which process modifications should be deployed. Results: Eighty-nine procedural steps were identified with 186 FM accompanied by 193 failure effects with 213 potential causes. Eighty-one of the FM were scored with an RPN > 10, and mitigations were developed for FM with RPN values exceeding ten. The initial treatment had the most FM (16) requiring mitigation development followed closely by treatment planning, segmentation, and plan preparation with fourteen each. The maximum RPN was 400 and involved target delineation. Conclusion: The FMEA process proved extremely useful in identifying previously unforeseen risks. New methods were developed and implemented for risk mitigation and error prevention.
Similar to findings reported for adult patients, the process leading to the initial treatment has an associated high risk.
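    The triage step described above can be sketched directly: compute RPN = severity x occurrence x detectability for each failure mode, rank, and keep those above the mitigation threshold of 10. The failure modes and their scores below are invented; the 400-point target delineation entry simply echoes the maximum reported in the abstract.

```python
# (process, failure mode, severity, occurrence, detectability) -- invented
modes = [
    ("initial treatment", "wrong couch shift applied", 9, 3, 6),
    ("treatment planning", "target delineation error", 10, 5, 8),
    ("segmentation", "wrong image set registered", 8, 2, 4),
    ("CT simulation", "mislabeled scan", 6, 1, 1),
]

def triage(modes, threshold=10):
    """Rank failure modes by RPN = S * O * D, keeping only those whose
    RPN exceeds the mitigation threshold."""
    scored = sorted(((s * o * d, process, fm)
                     for process, fm, s, o, d in modes), reverse=True)
    return [row for row in scored if row[0] > threshold]

needs_mitigation = triage(modes)
```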

  8. D2R2: an evidence-based decision support tool to aid prioritisation of animal health issues for government funding.

    PubMed

    Gibbens, J C; Frost, A J; Houston, C W; Lester, H; Gauntlett, F A

    2016-11-26

    An evidence-based decision support tool, 'D2R2', has been developed by Defra. It contains a wide range of standardised information about exotic and endemic diseases held in 'disease profiles'. Each profile includes 40 criteria used for scoring, enabling D2R2 to provide relative priority rankings for every disease profiled. D2R2 also provides a range of reports for each disease and the functionality to explore the impact of changes in any criterion or weighting on a disease's ranking. These outputs aid the prioritisation and management of animal diseases by government. D2R2 was developed with wide stakeholder engagement and its design was guided by clear specifications. It uses the weighted scores of a limited number of criteria to generate impact and risk scores for each disease, and relies on evidence drawn from published material wherever possible and maintained up to date. It allows efficient use of expertise, as maintained disease profiles reduce the need for on call, reactive, expert input for policy development and enables rapid simultaneous access to the same information by multiple parties, for example during exotic disease outbreaks. The experience in developing D2R2 has been shared internationally to assist others with their development of disease prioritisation and categorisation systems. British Veterinary Association.
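    D2R2's weighted-criteria ranking, and its what-if exploration of changed weights, can be sketched generically. The criteria, weights and disease profiles below are invented placeholders (the real tool scores 40 criteria per disease profile):

```python
def weighted_score(profile, weights):
    """Weighted sum of a disease profile's criterion scores."""
    return sum(weights[c] * v for c, v in profile.items())

def rank(diseases, weights):
    """Relative priority ranking: highest weighted score first."""
    return sorted(diseases, reverse=True,
                  key=lambda d: weighted_score(diseases[d], weights))

diseases = {
    "disease A": {"impact": 8, "risk_of_entry": 2, "control_cost": 5},
    "disease B": {"impact": 5, "risk_of_entry": 7, "control_cost": 4},
}
baseline = rank(diseases, {"impact": 0.5, "risk_of_entry": 0.3, "control_cost": 0.2})
# What-if: shift weight from impact to risk of entry and observe the re-ranking.
shifted = rank(diseases, {"impact": 0.3, "risk_of_entry": 0.5, "control_cost": 0.2})
```

    The comparison of `baseline` and `shifted` is the kind of weight-sensitivity exploration the tool supports: a change in one weighting can reverse a disease's relative priority.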

  9. Fuzzy-based failure mode and effect analysis (FMEA) of a hybrid molten carbonate fuel cell (MCFC) and gas turbine system for marine propulsion

    NASA Astrophysics Data System (ADS)

    Ahn, Junkeon; Noh, Yeelyong; Park, Sung Ho; Choi, Byung Il; Chang, Daejun

    2017-10-01

    This study proposes a fuzzy-based FMEA (failure mode and effect analysis) for a hybrid molten carbonate fuel cell and gas turbine system for liquefied hydrogen tankers. An FMEA-based regulatory framework is adopted to analyze the non-conventional propulsion system and to understand the risk picture of the system. Since the participants of the FMEA rely on their subjective and qualitative experiences, the conventional FMEA used for identifying failures that affect system performance inevitably involves inherent uncertainties. A fuzzy-based FMEA is introduced to express such uncertainties appropriately and to provide flexible access to a risk picture for a new system using fuzzy modeling. The hybrid system comprises 35 components with 70 potential failure modes. Significant failure modes occur in the fuel cell stack and rotary machine. The fuzzy risk priority number is used to validate the crisp risk priority number in the FMEA.
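    The abstract does not give the authors' exact fuzzy formulation, so the following is one common construction, assumed here for illustration: represent each linguistic O, S, D rating as a triangular fuzzy number, multiply approximately, and defuzzify the result by centroid.

```python
def tfn_mul(x, y):
    """Approximate product of two triangular fuzzy numbers (a, b, c)."""
    return tuple(xi * yi for xi, yi in zip(x, y))

def centroid(t):
    """Defuzzify a triangular fuzzy number to a crisp value."""
    return sum(t) / 3.0

# Linguistic ratings mapped to triangular fuzzy numbers (hypothetical scale).
LOW, MODERATE, HIGH = (1, 2, 3), (4, 5, 6), (7, 8, 9)

def fuzzy_rpn(o, s, d):
    """Fuzzy O * S * D, defuzzified by centroid."""
    return centroid(tfn_mul(tfn_mul(o, s), d))

# Invented example: occurrence moderate, severity high, detectability low.
crisp = fuzzy_rpn(MODERATE, HIGH, LOW)
```

    For comparison, the crisp RPN from the modal values would be 5 * 8 * 2 = 80; the fuzzy centroid of 90 reflects the skew that multiplying the supports introduces.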

  10. Socioeconomic disadvantage but not remoteness affects short-term survival in prostate cancer: A population-based study using competing risks.

    PubMed

    Thomas, Audrey A; Pearce, Alison; Sharp, Linda; Gardiner, Robert Alexander; Chambers, Suzanne; Aitken, Joanne; Molcho, Michal; Baade, Peter

    2017-04-01

    We examined how sociodemographic, clinical and area-level factors are related to short-term prostate cancer mortality versus mortality from other causes, a crucial distinction for this disease that disproportionately affects men older than 60 years. We applied competing risk survival models to administrative data from the Queensland Cancer Registry (Australia) for men diagnosed with prostate cancer between January 2005 and July 2007, including stratification by Gleason score. The men (n = 7393) in the study cohort had a median follow-up of 5 years 3 months. After adjustment, remoteness and area-level disadvantage were not significantly associated with prostate cancer mortality. However, area-level disadvantage had a significant negative relationship with hazard of death from a cause other than prostate cancer within 7 years; compared with those living in the most advantaged areas, the likelihood of mortality was higher for those in the most disadvantaged (subhazard ratio [SHR] = 1.39; 95% CI, 1.01-1.90; P = 0.041), disadvantaged (SHR = 1.51; 95% CI, 1.14-2.00; P = 0.004), middle (SHR = 1.34; 95% CI, 1.02-1.75; P = 0.034) and advantaged areas (SHR = 1.44; 95% CI, 1.09-1.89; P = 0.009). Those with a Gleason score of 7 or higher had a lower hazard of prostate cancer mortality if they were living with a partner, whereas those with lower Gleason scores and living with a partner had lower hazards of other-cause mortality. Understanding why men living in more disadvantaged areas have higher risk of non-prostate cancer mortality should be a priority. © 2016 John Wiley & Sons Australia, Ltd.

  11. What to say and how to say it: effective communication for cardiovascular disease prevention.

    PubMed

    Navar, Ann Marie; Stone, Neil J; Martin, Seth S

    2016-09-01

    Current guidelines for cholesterol treatment emphasize the importance of engaging patients in a risk-benefit discussion prior to initiating statin therapy. Although current risk prediction algorithms are well defined, there is less data on how to communicate with patients about cardiovascular disease risk, benefits of treatment, and possible adverse effects. We propose a four-part model for effective shared decision-making: 1) Assessing patient priorities, perceived risk, and prior experience with cardiovascular risk reduction; 2) Arriving at a recommendation for therapy based on the patient's risk of disease, guideline recommendations, new clinical trial data, and patient preferences; 3) Communicating this recommendation along with risks, benefits, and alternatives to therapy following best practices for discussing numeric risk; and 4) Arriving at a shared decision with the patient with ongoing reassessment as risk factors and patient priorities change.

  12. [Perception of cardiovascular risk in an outpatient population of the Madrid Community].

    PubMed

    Pérez-Manchón, D; Álvarez-García, G M; González-López, E

    2015-01-01

    Cardiovascular diseases are responsible for the largest burden of global mortality. Studying how well the population knows its risk factors and cardiovascular risk is a priority preventive strategy. A cross-sectional study of 369 people was performed. Variables included sociodemographic characteristics, physical and anthropometric factors, and real and perceived cardiovascular risk. Risk was stratified with the SCORE table. A total of 49.6% were men and 50.4% were women. The proportion diagnosed was 23.8% for hypertension, 39% for hypercholesterolemia, 31.4% for smoking, 26.3% for obesity and 4.6% for diabetes. Concordance between perceived and actual cardiovascular risk was very weak. The population has good knowledge of diabetes and acceptable knowledge of hypertension and hypercholesterolemia, but knowledge of prediabetic states and perception of the associated cardiovascular risk are low. Copyright © 2014 SEHLELHA. Published by Elsevier Espana. All rights reserved.

  13. Based on Real Time Remote Health Monitoring Systems: A New Approach for Prioritization "Large Scales Data" Patients with Chronic Heart Diseases Using Body Sensors and Communication Technology.

    PubMed

    Kalid, Naser; Zaidan, A A; Zaidan, B B; Salman, Omar H; Hashim, M; Albahri, O S; Albahri, A S

    2018-03-02

    This paper presents a new approach to prioritize "Large-scale Data" of patients with chronic heart diseases by using body sensors and communication technology during disasters and peak seasons. An evaluation matrix is used for emergency evaluation and large-scale data scoring of patients with chronic heart diseases in a telemedicine environment. However, one major problem in the emergency evaluation of these patients is establishing a reasonable threshold for patients with the most and least critical conditions. This threshold can be used to detect the highest and lowest priority levels when all the scores of patients are identical during disasters and peak seasons. A practical study was performed on 500 patients with chronic heart diseases and different symptoms, and their emergency levels were evaluated based on four main measurements: electrocardiogram, oxygen saturation sensor, blood pressure monitoring, and a non-sensory measurement tool, namely, a text frame. Data alignment was conducted for the raw data and decision-making matrix by converting each extracted feature into an integer. This integer represents their state in the triage level based on medical guidelines to determine the features from different sources in a platform. The patients were then scored based on a decision matrix by using multi-criteria decision-making techniques, namely, integrated multi-layer for analytic hierarchy process (MLAHP) and technique for order preference by similarity to ideal solution (TOPSIS). For subjective validation, cardiologists were consulted to confirm the ranking results. For objective validation, mean ± standard deviation was computed to check the accuracy of the systematic ranking. This study provides scenarios and checklist benchmarking to evaluate the proposed and existing prioritization methods. Experimental results revealed the following. 
(1) The integration of TOPSIS and MLAHP effectively and systematically solved the patient settings on triage and prioritization problems. (2) In subjective validation, the first five patients assigned to the doctors were the most urgent cases that required the highest priority, whereas the last five patients were the least urgent cases and were given the lowest priority. In objective validation, scores significantly differed between the groups, indicating that the ranking results were identical. (3) For the first, second, and third scenarios, the proposed method exhibited an advantage over the benchmark method with percentages of 40%, 60%, and 100%, respectively. In conclusion, patients with the most and least urgent cases received the highest and lowest priority levels, respectively.
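    The TOPSIS step can be illustrated with a minimal, generic sketch (plain TOPSIS only, not the paper's integrated MLAHP pipeline; the triage scores and criterion weights below are hypothetical):

    ```python
    import numpy as np

    def topsis_rank(matrix, weights, benefit):
        """Score alternatives (rows) with TOPSIS.
        matrix: (n_alternatives, n_criteria); weights sum to 1;
        benefit[j] is True if a higher value is worse/more urgent on criterion j."""
        M = np.asarray(matrix, dtype=float)
        # Vector-normalize each criterion column, then apply the weights.
        V = M / np.linalg.norm(M, axis=0) * np.asarray(weights)
        # Ideal best/worst profiles depend on the criterion direction.
        best = np.where(benefit, V.max(axis=0), V.min(axis=0))
        worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
        d_best = np.linalg.norm(V - best, axis=1)
        d_worst = np.linalg.norm(V - worst, axis=1)
        return d_worst / (d_best + d_worst)  # closeness: 1 = most urgent

    # Hypothetical triage scores for 4 patients on 3 criteria (higher = worse).
    scores = [[3, 2, 1], [1, 1, 1], [3, 3, 3], [2, 1, 2]]
    c = topsis_rank(scores, weights=[0.5, 0.3, 0.2], benefit=[True, True, True])
    ```

    Patients would then be served in descending order of closeness, so ties arise only when whole score rows coincide.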

  14. Maternal and perinatal health research priorities beyond 2015: an international survey and prioritization exercise.

    PubMed

    Souza, Joao Paulo; Widmer, Mariana; Gülmezoglu, Ahmet Metin; Lawrie, Theresa Anne; Adejuyigbe, Ebunoluwa Aderonke; Carroli, Guillermo; Crowther, Caroline; Currie, Sheena M; Dowswell, Therese; Hofmeyr, Justus; Lavender, Tina; Lawn, Joy; Mader, Silke; Martinez, Francisco Eulógio; Mugerwa, Kidza; Qureshi, Zahida; Silvestre, Maria Asuncion; Soltani, Hora; Torloni, Maria Regina; Tsigas, Eleni Z; Vowles, Zoe; Ouedraogo, Léopold; Serruya, Suzanne; Al-Raiby, Jamela; Awin, Narimah; Obara, Hiromi; Mathai, Matthews; Bahl, Rajiv; Martines, José; Ganatra, Bela; Phillips, Sharon Jelena; Johnson, Brooke Ronald; Vogel, Joshua P; Oladapo, Olufemi T; Temmerman, Marleen

    2014-08-07

    Maternal mortality has declined by nearly half since 1990, but over a quarter million women still die every year of causes related to pregnancy and childbirth. Maternal health-related targets are falling short of the 2015 Millennium Development Goals and a post-2015 Development Agenda is emerging. In connection with this, setting global research priorities for the next decade is now required. We adapted the methods of the Child Health and Nutrition Research Initiative (CHNRI) to identify and set global research priorities for maternal and perinatal health for the period 2015 to 2025. Priority research questions were received from various international stakeholders constituting a large reference group, and consolidated into a final list of research questions by a technical working group. Questions on this list were then scored by the reference group according to five independent and equally weighted criteria. Normalized research priority scores (NRPS) were calculated, and research priority questions were ranked accordingly. A list of 190 priority research questions for improving maternal and perinatal health was scored by 140 stakeholders. Most priority research questions (89%) were concerned with the evaluation of implementation and delivery of existing interventions, with research subthemes frequently concerned with training and/or awareness interventions (11%), and access to interventions and/or services (14%). Twenty-one questions (11%) involved the discovery of new interventions or technologies. Key research priorities in maternal and perinatal health were identified. The resulting ranked list of research questions provides a valuable resource for health research investors, researchers and other stakeholders. 
We are hopeful that this exercise will inform the post-2015 Development Agenda and assist donors, research-policy decision makers and researchers to invest in research that will ultimately make the most significant difference in the lives of mothers and babies.
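    The CHNRI-style arithmetic can be sketched as follows. This is a minimal illustration under stated assumptions, not the study's exact computation: each stakeholder is assumed to give a 0/1 judgement per question per criterion, and the normalized research priority score is taken as the equally weighted mean across the five criteria, expressed as a percentage. The data here are randomly generated, not the survey's:

    ```python
    import numpy as np

    # scores[s, q, c] = stakeholder s's 0/1 judgement of question q on criterion c.
    rng = np.random.default_rng(0)
    scores = rng.integers(0, 2, size=(140, 190, 5))  # 140 scorers, 190 questions, 5 criteria

    crit_means = scores.mean(axis=0)       # agreement per question, per criterion
    nrps = crit_means.mean(axis=1) * 100   # equally weighted criteria, as a percentage
    ranking = np.argsort(nrps)[::-1]       # highest-priority questions first
    ```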

  15. Research priorities by professional background - A detailed analysis of the James Lind Alliance Priority Setting Partnership.

    PubMed

    Arulkumaran, Nishkantha; Reay, Hannah; Brett, Stephen J

    2016-05-01

    The Intensive Care Foundation, in partnership with the James Lind Alliance, has supported a national project to identify and prioritise unanswered questions about adult intensive care that are important to people who have been critically ill, their families, and the health professionals who care for them. We conducted a secondary analysis to explore differences in priorities determined by different respondent groups in order to identify different groups' perceptions of gaps in knowledge. There were two surveys conducted as part of the original project. Survey 1 comprised a single open question to identify important research topics; survey 2 aimed to prioritise these topics using a 10-point Likert scale. In survey 1, despite clear differences in suggestions amongst the respondent groups, themes of comfort/communication and post-ICU rehabilitation were within the top two suggestions across all groups. Patients and relatives suggested research topics to which they could easily relate, whereas there was a greater breadth of suggestions from clinicians. In survey 2, the number of research priorities that received a mode score of 10 varied from 1 to 36. Patients scored 36 out of the 37 topics with a mode score of 10. All other groups scored topics with more discrimination, with the number of topics with a mode score of 10 ranging from 1 to 20. Differences in the proportions of the representative groups are therefore unlikely to have biased the final prioritisation. Clinicians, patients, and family members have jointly identified the research priorities for UK ICM practice.

  16. Direct-to-consumer advertising for bleeding disorders: a content analysis and expert evaluation of advertising claims.

    PubMed

    Abel, G A; Neufeld, E J; Sorel, M; Weeks, J C

    2008-10-01

    In the United States, the Food and Drug Administration (FDA) requires that all direct-to-consumer advertising (DTCA) contain both an accurate statement of a medication's effects ('truth') and an even-handed discussion of its benefits and risks/adverse effects ('fair balance'). DTCA for medications to treat rare diseases such as bleeding disorders is unlikely to be given high priority for FDA review. We reviewed all DTCA for bleeding disorder products appearing in the patient-directed magazine HemeAware from January 2004 to June 2006. We categorized the information presented in each advertisement as benefit, risk/adverse effect, or neither, and assessed the amount of text and type size devoted to each. We also assessed the readability of each type of text using the Flesch Reading Ease Score (FRES, where a score of ≥65 is considered of average readability), and assessed the accuracy of the advertising claims utilizing a panel of five bleeding disorder experts. A total of 39 unique advertisements for 12 products were found. On average, approximately twice the amount of text was devoted to benefits as compared with risks/adverse effects, and the latter was more difficult to read [FRES of 32.0 for benefits vs. 20.5 for risks/adverse effects, a difference of 11.5 (95% CI: 4.5-18.5)]. Only about two-thirds of the advertising claims were considered by a majority of the experts to be based on at least low-quality evidence. As measured by our methods, print DTCA for bleeding disorders may not reach the FDA's standards of truth and fair balance.

  17. Prediction of 10-year coronary heart disease risk in Caribbean type 2 diabetic patients using the UKPDS risk engine.

    PubMed

    Ezenwaka, C E; Nwagbara, E; Seales, D; Okali, F; Hussaini, S; Raja, Bn; Jones-LeCointe, A; Sell, H; Avci, H; Eckel, J

    2009-03-06

    Primary prevention of Coronary Heart Disease (CHD) in diabetic patients should be based on absolute CHD risk calculation. This study aimed to determine the levels of 10-year CHD risk in Caribbean type 2 diabetic patients using the diabetes-specific United Kingdom Prospective Diabetes Study (UKPDS) risk engine calculator. Three hundred and twenty-five (106 males, 219 females) type 2 diabetic patients resident in the two Caribbean islands of Tobago and Trinidad met the UKPDS risk engine inclusion criteria. Records of their sex, age, ethnicity, smoking habit, diabetes duration, systolic blood pressure, total cholesterol, HDL-cholesterol and glycated haemoglobin were entered into the UKPDS risk engine calculator programme, and the absolute 10-year CHD and stroke risk levels were computed. The 10-year CHD and stroke risks were stratified into <15%, 15-30% and >30% CHD risk levels, and differences between patients of African and Asian-Indian origin were compared. In comparison with patients in Tobago, type 2 diabetic patients in Trinidad, irrespective of gender, had a higher proportion of elevated 10-year CHD risk (10.4 vs. 23.6%, P<0.001), whereas the overall 10-year stroke risk prediction was higher in patients resident in Tobago (16.9 vs. 11.4%, P<0.001). Ethnicity-based analysis revealed that, irrespective of gender, a higher proportion of patients of Indian origin scored >30% absolute 10-year CHD risk compared with patients of African descent (3.2 vs. 28.2%, P<0.001). The results of the study identified diabetic patients resident in Trinidad and patients of Indian origin as the most vulnerable groups for CHD. These groups of diabetic patients should have priority in primary or secondary prevention of coronary heart disease.

  18. Integrating the Ergonomics Techniques with Multi Criteria Decision Making as a New Approach for Risk Management: An Assessment of Repetitive Tasks -Entropy Case Study.

    PubMed

    Khandan, Mohammad; Nili, Majid; Koohpaei, Alireza; Mosaferchi, Saeedeh

    2016-01-01

    Nowadays, decision makers in occupational health need to analyze large amounts of data and weigh many conflicting evaluation criteria and sub-criteria. Ergonomic evaluation of the work environment in order to control occupational disorders can therefore be framed as a Multi Criteria Decision Making (MCDM) problem. In this study, ergonomic risk factors that may influence health were evaluated in a manufacturing company in 2014, and the entropy method was then applied to prioritize the different risk factors. The study used a descriptive-analytical approach; 13 tasks were included from the 240 employees working in the company's seven halls. Required information was gathered with a demographic questionnaire and the Assessment of Repetitive Tasks (ART) method for repetitive task assessment. In addition, entropy was used to prioritize the risk factors according to ergonomic control needs. The total exposure score calculated with the ART method was 30.07 ± 12.43. Data analysis showed that 179 cases (74.6% of tasks) were at a high level of risk and 13.8% at a medium level. The ART-entropy results revealed that, based on the weighted factors, the highest weight belonged to the grip factor, and the lowest to neck and hand posture and duration. Given limited financial resources, it seems that MCDM could be used successfully in many challenging situations such as control procedures and priority setting. Other MCDM methods for evaluating and prioritizing ergonomic problems are recommended.
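    Entropy weighting, the prioritization device used here, can be sketched generically: criteria whose scores vary more across tasks receive higher objective weights. The ART factor scores below are hypothetical, not the study's data:

    ```python
    import numpy as np

    def entropy_weights(matrix):
        """Objective criterion weights from Shannon entropy.
        Columns with more dispersion across tasks carry more weight."""
        M = np.asarray(matrix, dtype=float)
        P = M / M.sum(axis=0)                   # column-wise proportions
        n = M.shape[0]
        with np.errstate(divide="ignore", invalid="ignore"):
            logs = np.where(P > 0, np.log(P), 0.0)
        e = -(P * logs).sum(axis=0) / np.log(n) # normalized entropy per criterion
        d = 1.0 - e                             # degree of diversification
        return d / d.sum()

    # Hypothetical factor scores for 5 tasks on 3 risk factors.
    art = [[8, 4, 4], [6, 4, 4], [2, 4, 4], [9, 4, 4], [1, 4, 4]]
    w = entropy_weights(art)  # the varying first factor takes all the weight
    ```

    A constant column carries no information (entropy 1, weight ~0), which is why only discriminating factors drive the final ranking.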

  19. What is behind the priority heuristic? A mathematical analysis and comment on Brandstätter, Gigerenzer, and Hertwig (2006).

    PubMed

    Rieger, Marc Oliver; Wang, Mei

    2008-01-01

    Comments on the article by E. Brandstätter, G. Gigerenzer, and R. Hertwig. The authors discuss the priority heuristic, a recent model for decisions under risk. They reanalyze the experimental validity of this approach and discuss how these results compare with cumulative prospect theory, the currently most established model in behavioral economics. They also discuss how general models for decisions under risk based on a heuristic approach can be understood mathematically, to gain some insight into their limitations. They finally consider whether the priority heuristic model can lead to some understanding of the decision process of individuals or whether it is better seen as an as-if model. (c) 2008 APA, all rights reserved

  20. Preliminary Prioritization of California Oil and Gas Fields for Regional Groundwater Monitoring Based on Intensity of Petroleum Resource Development and Proximity to Groundwater Resources

    NASA Astrophysics Data System (ADS)

    Davis, T. A.; Landon, M. K.; Bennett, G.

    2016-12-01

    The California State Water Resources Control Board is collaborating with the U.S. Geological Survey to implement a Regional Monitoring Program (RMP) to assess where and to what degree groundwater resources may be at risk of contamination from oil and gas development activities including stimulation, well integrity issues, produced water ponds, and underground injection. A key issue in the implementation of the RMP is that the state has 487 onshore oil fields covering 8,785 square kilometers but detailed characterization work can only be done in a few oil fields annually. The first step in the RMP is to prioritize fields using available data that indicate potential risk to groundwater from oil and gas development, including vertical proximity of groundwater and oil/gas resources, density of petroleum and water wells, and volume of water injected in oil fields. This study compiled data for these factors, computed summary metrics for each oil field, analyzed statewide distributions of summary metrics, used those distributions to define relative categories of potential risk for each factor, and combined these into an overall priority ranking. Aggregated results categorized 22% (107 fields) of the total number of onshore oil and gas fields in California as high priority, 23% as moderate priority, and 55% as low priority. On an area-weighted basis, 41% of the fields ranked high, 30% moderate, and 29% low, highlighting that larger fields tend to have higher potential risk because of greater intensity of development, sometimes coupled with closer proximity to groundwater. More than half of the fields ranked as high priority were located in the southern Central Valley or the Los Angeles Basin. The prioritization does not represent an assessment of groundwater risk from oil and gas development; rather, such assessments are planned to follow based on detailed analysis of data from the RMP near the oil fields selected for study in the future.

  1. Physics First: Impact on SAT Math Scores

    ERIC Educational Resources Information Center

    Bouma, Craig E.

    2013-01-01

    Improving science, technology, engineering, and mathematics (STEM) education has become a national priority and the call to modernize secondary science has been heard. A Physics First (PF) program with the curriculum sequence of physics, chemistry, and biology (PCB) driven by inquiry- and project-based learning offers a viable alternative to the…

  2. Risk management of key issues of FPSO

    NASA Astrophysics Data System (ADS)

    Sun, Liping; Sun, Hai

    2012-12-01

    Risk analysis of key systems has become a growing topic of late because of the development of offshore structures. Equipment failures of the offloading system and fire accidents were analyzed based on the features of a floating production, storage and offloading (FPSO) unit. Fault tree analysis (FTA) and failure modes and effects analysis (FMEA) methods were examined based on information already available in modules of Relex Reliability Studio (RRS). Given the shortage of failure cases and statistical data, equipment failures were also analyzed qualitatively by establishing a fault tree and a Boolean structure function, and risk control measures were examined. Failure modes of fire accidents were classified according to the different areas of fire occurrence during the FMEA process, using risk priority number (RPN) methods to evaluate their severity rank. The qualitative FTA gave basic insight into how FPSO offloading failure modes form, and the fire FMEA gave priorities and suggested processes. The research has practical importance for the security analysis of FPSO units.
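    The RPN ranking used in the FMEA step is the product of severity, occurrence, and detection ratings (each commonly rated 1-10), with failure modes ranked by descending RPN. A minimal sketch; the fire-related failure modes and ratings below are illustrative, not the paper's data:

    ```python
    # Hypothetical failure modes: (severity, occurrence, detection), each 1-10.
    failure_modes = {
        "pump room fire":       (9, 3, 4),
        "offloading hose leak": (7, 5, 2),
        "galley fire":          (4, 6, 3),
    }

    def rpn(severity, occurrence, detection):
        """Risk priority number: higher means higher remediation priority."""
        return severity * occurrence * detection

    # Rank failure modes from highest to lowest RPN.
    ranked = sorted(failure_modes.items(), key=lambda kv: rpn(*kv[1]), reverse=True)
    # ranked[0] is the highest-priority failure mode
    ```

    Note that RPN treats the three ratings as interchangeable multipliers, which is one of the shortcomings that motivates the fuzzy and D-numbers alternatives in records 4 and 5 above.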

  3. Risk assessment [Chapter 9

    Treesearch

    Dennis S. Ojima; Louis R. Iverson; Brent L. Sohngen; James M. Vose; Christopher W. Woodall; Grant M. Domke; David L. Peterson; Jeremy S. Littell; Stephen N. Matthews; Anantha M. Prasad; Matthew P. Peters; Gary W. Yohe; Megan M. Friggens

    2014-01-01

    What is "risk" in the context of climate change? How can a "risk-based framework" help assess the effects of climate change and develop adaptation priorities? Risk can be described by the likelihood of an impact occurring and the magnitude of the consequences of the impact (Yohe 2010) (Fig. 9.1). High-magnitude impacts are always...

  4. Fuzzy Risk Evaluation in Failure Mode and Effects Analysis Using a D Numbers Based Multi-Sensor Information Fusion Method.

    PubMed

    Deng, Xinyang; Jiang, Wen

    2017-09-12

    Failure mode and effect analysis (FMEA) is a useful tool to define, identify, and eliminate potential failures or errors so as to improve the reliability of systems, designs, and products. Risk evaluation is an important issue in FMEA to determine the risk priorities of failure modes. There are some shortcomings in the traditional risk priority number (RPN) approach for risk evaluation in FMEA, and fuzzy risk evaluation has become an important research direction that attracts increasing attention. In this paper, the fuzzy risk evaluation in FMEA is studied from a perspective of multi-sensor information fusion. By considering the non-exclusiveness between the evaluations of fuzzy linguistic variables to failure modes, a novel model called D numbers is used to model the non-exclusive fuzzy evaluations. A D numbers based multi-sensor information fusion method is proposed to establish a new model for fuzzy risk evaluation in FMEA. An illustrative example is provided and examined using the proposed model and another existing method to show the effectiveness of the proposed model.

  5. Fuzzy Risk Evaluation in Failure Mode and Effects Analysis Using a D Numbers Based Multi-Sensor Information Fusion Method

    PubMed Central

    Deng, Xinyang

    2017-01-01

    Failure mode and effect analysis (FMEA) is a useful tool to define, identify, and eliminate potential failures or errors so as to improve the reliability of systems, designs, and products. Risk evaluation is an important issue in FMEA to determine the risk priorities of failure modes. There are some shortcomings in the traditional risk priority number (RPN) approach for risk evaluation in FMEA, and fuzzy risk evaluation has become an important research direction that attracts increasing attention. In this paper, the fuzzy risk evaluation in FMEA is studied from a perspective of multi-sensor information fusion. By considering the non-exclusiveness between the evaluations of fuzzy linguistic variables to failure modes, a novel model called D numbers is used to model the non-exclusive fuzzy evaluations. A D numbers based multi-sensor information fusion method is proposed to establish a new model for fuzzy risk evaluation in FMEA. An illustrative example is provided and examined using the proposed model and another existing method to show the effectiveness of the proposed model. PMID:28895905

  6. Using Expert Elicitation to Estimate the Impacts of Plastic Pollution on Marine Wildlife

    NASA Astrophysics Data System (ADS)

    Mallos, N. J.; Wilcox, C.; Leonard, G. H.; Rodriquez, A. G.; Hardesty, B. D.

    2016-02-01

    With the rapid increase in global plastics production and the resulting large volume of litter that enters the marine environment, determining the consequences of this debris for marine fauna and ocean health has now become a critical environmental priority, particularly for threatened and endangered species. However, there are limited data about the impacts of debris on marine species from which to draw conclusions about the population consequences of anthropogenic debris. To address this knowledge gap, information was elicited from experts on the ecological threat of entanglement, ingestion and chemical contamination for three major marine taxa: seabirds, sea turtles and marine mammals. The threat assessment focused on the most common types of litter that are found along the world's coastlines, based on data gathered during three decades of international coastal clean-up efforts. Fishing related gear, balloons and plastic bags were estimated to pose the greatest entanglement risk to marine fauna. In contrast, experts identified a broader suite of items of concern for ingestion, with plastic bags and plastic utensils ranked as the greatest threats. Entanglement and ingestion affected a similar range of taxa, although entanglement was slightly worse as it is more likely to be lethal. Contamination was scored the lowest in terms of its impact, affecting a smaller portion of the taxa and being rated as having solely non-lethal impacts. Research designed to better understand and quantify the impacts of chemical contamination on marine fauna at individual, population and species levels should be a priority for conservation biologists. This work points towards a number of opportunities for both policy-based and consumer-driven changes in plastics use that could have demonstrable effects for a range of taxa that are ecologically important and serve as indicators of marine ecosystem health. 
Based on threat rankings, entanglement and ingestion should be a similar priority for management action, as experts ranked them nearly identically in terms of expected population impacts. Fishing gear, plastic bags, and plastic utensils were all ranked as having substantial impacts on more than half of all three marine taxa, and should be priorities for management action.

  7. Adding an alcohol-related risk score to an existing categorical risk classification for older adults: sensitivity to group differences.

    PubMed

    Wilson, Sandra R; Fink, Arlene; Verghese, Shinu; Beck, John C; Nguyen, Khue; Lavori, Philip

    2007-03-01

    To evaluate a new alcohol-related risk score for research use. Using data from a previously reported trial of a screening and education system for older adults (Computerized Alcohol-Related Problems Survey), secondary analyses were conducted comparing the ability of two different measures of risk to detect post-intervention group differences: the original categorical outcome measure and a new, finely grained quantitative risk score based on the same research-based risk factors. Three primary care group practices in southern California. Six hundred sixty-five patients aged 65 and older. A previously calculated, three-level categorical classification of alcohol-related risk and a newly developed quantitative risk score. Mean post-intervention risk scores differed between the three experimental conditions: usual care, patient report, and combined report (P<.001). The difference between the combined report and usual care was significant (P<.001) and directly proportional to baseline risk. The three-level risk classification did not reveal approximately 57.3% of the intervention effect detected by the risk score. The risk score also was sufficiently sensitive to detect the intervention effect within the subset of hypertensive patients (n=112; P=.001). As an outcome measure in intervention trials, the finely grained risk score is more sensitive than the trinary risk classification. The additional clinical value of the risk score relative to the categorical measure needs to be determined.

  8. Comparison of traditional diabetes risk scores and HbA1c to predict type 2 diabetes mellitus in a population based cohort study.

    PubMed

    Krabbe, Christine Emma Maria; Schipf, Sabine; Ittermann, Till; Dörr, Marcus; Nauck, Matthias; Chenot, Jean-François; Markus, Marcello Ricardo Paulista; Völzke, Henry

    2017-11-01

    To compare the performance of diabetes risk scores and glycated hemoglobin (HbA1c) in estimating the risk of incident type 2 diabetes mellitus (T2DM) in Northeast Germany, we studied 2916 subjects (20 to 81 years) from the Study of Health in Pomerania (SHIP) over a 5-year follow-up period. Diabetes risk scores included the Cooperative Health Research in the Region of Augsburg (KORA) base model, the Danish diabetes risk score and the Data from the Epidemiological Study on the Insulin Resistance syndrome (D.E.S.I.R.) clinical risk score. We assessed the performance of each of the diabetes risk scores and the HbA1c for 5-year risk of T2DM by the area under the receiver-operating characteristic curve (AUC) and calibration plots. In SHIP, the incidence of T2DM was 5.4% (n=157) in the 5-year follow-up period. Diabetes risk scores and HbA1c achieved AUCs ranging from 0.76 for the D.E.S.I.R. clinical risk score to 0.82 for the KORA base model. For diabetes risk scores, the discriminative ability was lower for the age group 55 to 74 years. For HbA1c, the discriminative ability also decreased for the group 55 to 74 years while it was stable in the age group 30 to 64 years. All diabetes risk scores and the HbA1c showed a good prediction for the risk of T2DM in SHIP. Which model or biomarker should be used is driven by its context of use, e.g. the practicability, implementation of interventions and availability of measurement. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. Proposal for a new categorization of aseptic processing facilities based on risk assessment scores.

    PubMed

    Katayama, Hirohito; Toda, Atsushi; Tokunaga, Yuji; Katoh, Shigeo

    2008-01-01

    Risk assessment of aseptic processing facilities was performed using two published risk assessment tools. Calculated risk scores were compared with experimental test results, including environmental monitoring and media fill run results, in three different types of facilities. The two risk assessment tools used gave a generally similar outcome. However, depending on the tool used, variations were observed in the relative scores between the facilities. For the facility yielding the lowest risk scores, the corresponding experimental test results showed no contamination, indicating that these ordinal testing methods are insufficient to evaluate this kind of facility. A conventional facility having acceptable aseptic processing lines gave relatively high risk scores. The facility showing a rather high risk score demonstrated the usefulness of conventional microbiological test methods. Considering the significant gaps observed in calculated risk scores and in the ordinal microbiological test results between advanced and conventional facilities, we propose a facility categorization based on risk assessment. The most important risk factor in aseptic processing is human intervention. When human intervention is eliminated from the process by advanced hardware design, the aseptic processing facility can be classified into a new risk category that is better suited for assuring sterility based on a new set of criteria rather than on currently used microbiological analysis. To fully benefit from advanced technologies, we propose three risk categories for these aseptic facilities.

  10. Can streamlined multi-criteria decision analysis be used to implement shared decision making for colorectal cancer screening?

    PubMed Central

    Dolan, James G.; Boohaker, Emily; Allison, Jeroan; Imperiale, Thomas F.

    2013-01-01

    Background Current US colorectal cancer screening guidelines that call for shared decision making regarding the choice among several recommended screening options are difficult to implement. Multi-criteria decision analysis (MCDA) is an established methodology well suited for supporting shared decision making. Our study goal was to determine if a streamlined form of MCDA using rank order based judgments can accurately assess patients’ colorectal cancer screening priorities. Methods We converted priorities for four decision criteria and three sub-criteria regarding colorectal cancer screening obtained from 484 average risk patients using the Analytic Hierarchy Process (AHP) in a prior study into rank order-based priorities using rank order centroids. We compared the two sets of priorities using Spearman rank correlation and non-parametric Bland-Altman limits of agreement analysis. We assessed the differential impact of using the rank order-based versus the AHP-based priorities on the results of a full MCDA comparing three currently recommended colorectal cancer screening strategies. Generalizability of the results was assessed using Monte Carlo simulation. Results Correlations between the two sets of priorities for the seven criteria ranged from 0.55 to 0.92. The proportions of absolute differences between rank order-based and AHP-based priorities that were more than ± 0.15 ranged from 1% to 16%. Differences in the full MCDA results were minimal and the relative rankings of the three screening options were identical more than 88% of the time. The Monte Carlo simulation results were similar. Conclusion Rank order-based MCDA could be a simple, practical way to guide individual decisions and assess population decision priorities regarding colorectal cancer screening strategies. Additional research is warranted to further explore the use of these methods for promoting shared decision making. PMID:24300851
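    The rank order centroids used in this study have a standard closed form: for k criteria ranked 1 (most important) through k, the weight of rank i is w_i = (1/k) · Σ_{j=i..k} 1/j. A minimal sketch of that formula (the four-criterion example is illustrative, matching the study's count of top-level decision criteria):

    ```python
    def rank_order_centroid(k):
        """Rank-order-centroid weights for k criteria ranked 1 (most
        important) through k: w_i = (1/k) * sum_{j=i..k} 1/j."""
        return [sum(1.0 / j for j in range(i, k + 1)) / k for i in range(1, k + 1)]

    w = rank_order_centroid(4)
    # For four ranked criteria: approximately [0.521, 0.271, 0.146, 0.062]
    ```

    The weights always sum to 1 and decrease with rank, which is what lets a simple rank judgment stand in for the full AHP pairwise-comparison priorities.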

  11. Satiety mechanisms in genetic risk of obesity.

    PubMed

    Llewellyn, Clare Heidi; Trzaskowski, Maciej; van Jaarsveld, Cornelia Hendrika Maria; Plomin, Robert; Wardle, Jane

    2014-04-01

    A better understanding of the cause of obesity is a clinical priority. Obesity is highly heritable, and specific genes are being identified. Discovering the mechanisms through which obesity-related genes influence weight would help pinpoint novel targets for intervention. One potential mechanism is satiety responsiveness. Lack of satiety characterizes many monogenic obesity disorders, and lower satiety responsiveness is linked with weight gain in population samples. Our objective was to test the hypothesis that satiety responsiveness is an intermediate behavioral phenotype associated with genetic predisposition to obesity in children. We conducted a cross-sectional observational study of a population-based cohort of twins born January 1, 1994, to December 31, 1996 (Twins Early Development Study). Participants included 2258 unrelated children (53.3% female; mean [SD] age, 9.9 [0.8] years), one randomly selected from each twin pair. The exposure was genetic predisposition to obesity, indexed by a polygenic risk score (PRS) comprising 28 common obesity-related single-nucleotide polymorphisms identified in a meta-analysis of obesity-related genome-wide association studies. Satiety responsiveness was indexed with a standard psychometric scale (Child Eating Behavior Questionnaire). Using 1990 United Kingdom reference data, body mass index SD scores and waist SD scores were calculated from parent-reported anthropometric data for each child. Information on satiety responsiveness, anthropometrics, and genotype was available for all 2258 children. We examined associations among the PRS, adiposity, and satiety responsiveness. The PRS was negatively related to satiety responsiveness (β coefficient, -0.060; 95% CI, -0.019 to -0.101) and positively related to adiposity (β coefficient, 0.177; 95% CI, 0.136-0.218 for body mass index SD scores; β coefficient, 0.167; 95% CI, 0.126-0.208 for waist SD scores). 
More children in the top 25% of the PRS were overweight than in the lowest 25% (18.5% vs 7.2%; odds ratio, 2.90; 95% CI, 1.98-4.25). Associations between the PRS and adiposity were significantly mediated by satiety responsiveness (P = .006 for body mass index SD scores and P = .005 for waist SD scores). These results support the hypothesis that low satiety responsiveness is one of the mechanisms through which genetic predisposition leads to weight gain in an environment rich with food. Strategies to enhance satiety responsiveness could help prevent weight gain in genetically at-risk children.
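    A polygenic risk score of this kind combines per-SNP risk-allele counts into a single number. A generic sketch (the study's exact weighting is not reproduced here, and the genotype data are illustrative):

```python
def polygenic_risk_score(allele_counts, effect_sizes=None):
    """Sum risk-allele counts (0, 1, or 2 per SNP), optionally weighted.

    allele_counts: one risk-allele count per SNP (here, 28 SNPs).
    effect_sizes: optional per-SNP weights (e.g. GWAS effect estimates);
    if omitted, the score is a simple unweighted allele count.
    """
    if effect_sizes is None:
        effect_sizes = [1.0] * len(allele_counts)
    return sum(g * b for g, b in zip(allele_counts, effect_sizes))

# Illustrative genotype: risk alleles at 3 of the 28 loci.
print(polygenic_risk_score([2, 1, 1] + [0] * 25))  # -> 4.0
```

    Children are then compared across quartiles of this score, as in the overweight comparison above.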

  12. Emergency Medical Services Utilization in EMS Priority Conditions in Beirut, Lebanon.

    PubMed

    El Sayed, Mazen; Tamim, Hani; Chehadeh, Ahel Al-Hajj; Kazzi, Amin A

    2016-12-01

    Early activation and use of Emergency Medical Services (EMS) are associated with improved patient outcomes in EMS priority conditions in developed EMS systems. This study describes patterns of EMS use and identifies predictors of EMS utilization in EMS priority conditions in Lebanon. This was a cross-sectional study of a random sample of adult patients presenting to the emergency department (ED) of a tertiary care center in Beirut with the following EMS priority conditions: chest pain, major trauma, respiratory distress, cardiac arrest, respiratory arrest, and airway obstruction. A patient/proxy survey (20 questions) and a chart review were completed. Responses to the survey questions were "disagree," "neutral," or "agree," scored as one, two, or three, with three corresponding to a higher likelihood of EMS use. A total scale score ranging from 20 to 60 was created and transformed to a 0%-100% scale. Data were analyzed by mode of presentation (EMS vs other). Among the 481 patients enrolled, only 112 (23.3%) used EMS. Mean age of the study population was 63.7 years (SD=18.8 years), with 56.5% males. Mean clinical severity score (Emergency Severity Index [ESI]) was 2.5 (SD=0.7) and mean pain score was 3.1 (SD=3.5) at ED presentation. Over one-half (58.8%) needed hospital admission, 21.8% at an intensive care unit level of care, and the mortality rate was 7.3%. Significant associations were found between EMS use and the following variables: severity of illness, degree of pain, familiarity with EMS activation, previous EMS use, perceived EMS benefit, availability of EMS services, trust in EMS response times and treatment, advice from family, and unavailability of an immediate private mode of transport (P≤.05). Functional status requiring full assistance (OR=4.77; 95% CI, 1.85-12.29); acute symptom onset ≤ one hour (OR=2.14; 95% CI, 1.08-4.26); and higher scale scores (OR=2.99; 95% CI, 2.20-4.07) were significant predictors of EMS use. 
    Patients with lower clinical severity (OR=0.53; 95% CI, 0.35-0.81) and those with chest pain (OR=0.05; 95% CI, 0.02-0.12) or respiratory distress (OR=0.15; 95% CI, 0.07-0.31), with cardiac arrest as the reference, were less likely to use EMS. Emergency Medical Services use in EMS priority conditions in Lebanon is low. Several predictors of EMS use were identified. Emergency Medical Services initiatives addressing underutilization should build on this assessment of the perspective of the EMS system's end users. El Sayed M, Tamim H, Al-Hajj Chehadeh A, Kazzi AA. Emergency Medical Services utilization in EMS priority conditions in Beirut, Lebanon. Prehosp Disaster Med. 2016;31(6):621-627.
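    The transformation of the 20-60 total scale score to a 0-100% range described in the methods is a linear rescaling; a minimal sketch:

```python
def transform_scale(raw, low=20, high=60):
    """Linearly rescale a raw total score in [low, high] to a 0-100% scale."""
    return (raw - low) / (high - low) * 100.0

# Minimum, midpoint, and maximum of the 20-60 raw scale:
print(transform_scale(20), transform_scale(40), transform_scale(60))  # -> 0.0 50.0 100.0
```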

  13. Which family physician should I choose? The analytic hierarchy process approach for ranking of criteria in the selection of a family physician.

    PubMed

    Kuruoglu, Emel; Guldal, Dilek; Mevsim, Vildan; Gunvar, Tolga

    2015-08-05

    Choosing the most appropriate family physician (FP) for the individual plays a fundamental role in primary care. The aim of this study was to determine the criteria patients use in choosing their family doctors and the priority ranking of these criteria, using the multi-criteria decision-making method of the Analytic Hierarchy Process (AHP). The study was planned and conducted in two phases. In the first phase, factors affecting patients' decisions were revealed through qualitative research. In the next phase, the priorities of the FP selection criteria were determined using the AHP model, with criteria compared in pairs. 96 patients were asked to fill in information forms containing the comparison scores at Family Health Centres. According to the analysis of the focus group discussions, the FP selection criteria fell into five groups: individual characteristics, patient-doctor relationship, professional characteristics, the setting, and ethical characteristics. For each of the 96 participants, a comparison matrix was formed from the scores on their information forms. Of these, the models of only 5 participants (5.2%) were consistent; in other words, only these participants produced consistent rankings, with consistency ratios (CR) smaller than 0.10. A new comparison matrix, formed from the medians of the scores given by these 5 participants, was also consistent (CR = 0.06 < 0.10). According to the comparison results, the most important criterion for choosing a family physician, with a value weight of 0.467, is his/her professional characteristics. Selection criteria for choosing an FP were thus put in priority order using the AHP model. These criteria can be used as measures for selecting among alternative FPs in further research.
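    The consistency check referred to here is Saaty's consistency ratio, CR = CI/RI with CI = (λmax − n)/(n − 1). A sketch using power iteration to estimate λmax; the random-index values are Saaty's standard table:

```python
def consistency_ratio(matrix, iterations=200):
    """Saaty's AHP consistency ratio CR = CI / RI for an n x n pairwise
    comparison matrix, where CI = (lambda_max - n) / (n - 1), lambda_max is
    estimated by power iteration, and RI comes from Saaty's random-index table.
    """
    n = len(matrix)
    random_index = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}
    v = [1.0 / n] * n                      # initial priority vector
    for _ in range(iterations):
        av = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        total = sum(av)
        v = [x / total for x in av]        # renormalize so weights sum to 1
    av = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
    lambda_max = sum(av)                   # valid because sum(v) == 1
    ci = (lambda_max - n) / (n - 1)
    return ci / random_index[n]

# A perfectly consistent 3x3 matrix (a_ij = w_i / w_j) has CR = 0:
print(consistency_ratio([[1, 2, 4], [0.5, 1, 2], [0.25, 0.5, 1]]))
```

    A matrix passes the test, as in the study, when CR < 0.10.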

  14. A Toxicological Framework for the Prioritization of Children’s Safe Product Act Data

    PubMed Central

    Smith, Marissa N.; Grice, Joshua; Cullen, Alison; Faustman, Elaine M.

    2016-01-01

    In response to concerns over hazardous chemicals in children’s products, Washington State passed the Children’s Safe Product Act (CSPA). CSPA requires manufacturers to report the concentration of 66 chemicals in children’s products. We describe a framework for the toxicological prioritization of the ten chemical groups most frequently reported under CSPA. The framework scores lifestage, exposure duration, primary, secondary and tertiary exposure routes, toxicokinetics and chemical properties to calculate an exposure score. Four toxicological endpoints were assessed based on curated national and international databases: reproductive and developmental toxicity, endocrine disruption, neurotoxicity and carcinogenicity. A total priority index was calculated from the product of the toxicity and exposure scores. The three highest priority chemicals were formaldehyde, dibutyl phthalate and styrene. Elements of the framework were compared to existing prioritization tools, such as the United States Environmental Protection Agency’s (EPA) ExpoCast and Toxicological Prioritization Index (ToxPi). The CSPA framework allowed us to examine toxicity and exposure pathways in a lifestage-specific manner, providing a relatively high throughput approach to prioritizing hazardous chemicals found in children’s products. PMID:27104547
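    The total priority index described above is the product of the exposure and toxicity scores. A sketch with illustrative scores (not the values reported in the paper):

```python
def total_priority_index(exposure_score, toxicity_score):
    """Total priority index = exposure score x toxicity score."""
    return exposure_score * toxicity_score

# Illustrative scores only -- not taken from the CSPA framework's results.
chemicals = {"formaldehyde": (9, 8), "dibutyl phthalate": (8, 8), "styrene": (7, 8)}
ranked = sorted(chemicals, key=lambda c: total_priority_index(*chemicals[c]),
                reverse=True)
print(ranked)  # highest-priority chemical first
```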

  15. Concepts for risk-based surveillance in the field of veterinary medicine and veterinary public health: Review of current approaches

    PubMed Central

    Stärk, Katharina DC; Regula, Gertraud; Hernandez, Jorge; Knopf, Lea; Fuchs, Klemens; Morris, Roger S; Davies, Peter

    2006-01-01

    Background Emerging animal and zoonotic diseases and increasing international trade have resulted in an increased demand for veterinary surveillance systems. However, the human and financial resources available to support government veterinary services are becoming increasingly limited in many countries world-wide. Intuitively, issues that present higher risks merit higher priority for surveillance resources, as investments will yield higher benefit-cost ratios. The rapid rate of acceptance of this core concept of risk-based surveillance has outpaced the development of its theoretical and practical bases. Discussion The principal objectives of risk-based veterinary surveillance are to identify surveillance needs to protect the health of livestock and consumers, to set priorities, and to allocate resources effectively and efficiently. An important goal is to achieve a higher benefit-cost ratio with existing or reduced resources. We propose to define risk-based surveillance systems as those that apply risk assessment methods in different steps of traditional surveillance design for early detection and management of diseases or hazards. In risk-based designs, the public health, economic, and trade consequences of diseases play an important role in the selection of diseases or hazards. Furthermore, certain strata of the population of interest have a higher probability of being sampled for detection of diseases or hazards. Evaluation of risk-based surveillance systems should demonstrate that their efficacy is equal to or higher than that of traditional systems, while their efficiency (benefit-cost ratio) is higher. Summary Risk-based surveillance considerations are useful to support both strategic and operational decision making. This article highlights applications of risk-based surveillance systems in the veterinary field, including food safety. 
Examples are provided for risk-based hazard selection, risk-based selection of sampling strata as well as sample size calculation based on risk considerations. PMID:16507106

  16. Cardiac Society of Australia and New Zealand position statement executive summary: coronary artery calcium scoring.

    PubMed

    Hamilton-Craig, Christian R; Chow, Clara K; Younger, John F; Jelinek, V M; Chan, Jonathan; Liew, Gary Yh

    2017-10-16

    Introduction This article summarises the Cardiac Society of Australia and New Zealand position statement on coronary artery calcium (CAC) scoring. CAC scoring is a non-invasive method for quantifying coronary artery calcification using computed tomography. It is a marker of atherosclerotic plaque burden and the strongest independent predictor of future myocardial infarction and mortality. CAC scoring provides incremental risk information beyond traditional risk calculators such as the Framingham Risk Score. Its use for risk stratification is confined to primary prevention of cardiovascular events, and can be considered as individualised coronary risk scoring for intermediate risk patients, allowing reclassification to low or high risk based on the score. Medical practitioners should carefully counsel patients before CAC testing, which should only be undertaken if an alteration in therapy, including embarking on pharmacotherapy, is being considered based on the test result. Main recommendations CAC scoring should primarily be performed on individuals without coronary disease aged 45-75 years (absolute 5-year cardiovascular risk of 10-15%) who are asymptomatic. CAC scoring is also reasonable in lower risk groups (absolute 5-year cardiovascular risk, < 10%) where risk scores traditionally underestimate risk (eg, family history of premature CVD) and in patients with diabetes aged 40-60 years. We recommend aspirin and a high efficacy statin in high risk patients, defined as those with a CAC score ≥ 400, or a CAC score of 100-399 and above the 75th percentile for age and sex. It is reasonable to treat patients with CAC scores ≥ 100 with aspirin and a statin. It is reasonable not to treat asymptomatic patients with a CAC score of zero. Changes in management as a result of this statement Cardiovascular risk is reclassified according to CAC score. High risk patients are treated with a high efficacy statin and aspirin. 
Very low risk patients (ie, CAC score of zero) do not benefit from treatment.

  17. Climate Change Impacts and Adaptation on Southwestern DoD Facilities

    DTIC Science & Technology

    2017-03-03

    Subject terms: adaptation, baseline sensitivity, climate change, climate exposure. Abstract excerpts: "... integrating climate change risks into decision priorities ..."; "... four bases we found that integrating climate change risks into the current decision matrix, by linking projected risks to current or past impacts ..."; "... data and decision tools and methods. Bases have some capacity to integrate climate-related information, but they have limited resources to undertake ..."

  18. Association of Practice-Level Social and Medical Risk With Performance in the Medicare Physician Value-Based Payment Modifier Program

    PubMed Central

    Epstein, Arnold M.; Orav, E. John; Filice, Clara E.; Samson, Lok Wong; Joynt Maddox, Karen E.

    2017-01-01

    Importance Medicare recently launched the Physician Value-Based Payment Modifier (PVBM) Program, a mandatory pay-for-performance program for physician practices. Little is known about performance by practices that serve socially or medically high-risk patients. Objective To compare performance in the PVBM Program by practice characteristics. Design, Setting, and Participants Cross-sectional observational study using PVBM Program data for payments made in 2015 based on performance of large US physician practices caring for fee-for-service Medicare beneficiaries in 2013. Exposures High social risk (defined as practices in the top quartile of proportion of patients dually eligible for Medicare and Medicaid) and high medical risk (defined as practices in the top quartile of mean Hierarchical Condition Category risk score among fee-for-service beneficiaries). Main Outcomes and Measures Quality and cost z scores based on a composite of individual measures. Higher z scores reflect better performance on quality; lower scores, better performance on costs. Results Among 899 physician practices with 5 189 880 beneficiaries, 547 practices were categorized as low risk (neither high social nor high medical risk) (mean, 7909 beneficiaries; mean, 320 clinicians), 128 were high medical risk only (mean, 3675 beneficiaries; mean, 370 clinicians), 102 were high social risk only (mean, 1635 beneficiaries; mean, 284 clinicians), and 122 were high medical and social risk (mean, 1858 beneficiaries; mean, 269 clinicians). Practices categorized as low risk performed the best on the composite quality score (z score, 0.18 [95% CI, 0.09 to 0.28]) compared with each of the practices categorized as high risk (high medical risk only: z score, −0.55 [95% CI, −0.77 to −0.32]; high social risk only: z score, −0.86 [95% CI, −1.17 to −0.54]; and high medical and social risk: −0.78 [95% CI, −1.04 to −0.51]) (P < .001 across groups). 
Practices categorized as high social risk only performed the best on the composite cost score (z score, −0.52 [95% CI, −0.71 to −0.33]), low risk had the next best cost score (z score, −0.18 [95% CI, −0.25 to −0.10]), then high medical and social risk (z score, 0.40 [95% CI, 0.23 to 0.57]), and then high medical risk only (z score, 0.82 [95% CI, 0.65 to 0.99]) (P < .001 across groups). Total per capita costs were $9506 for practices categorized as low risk, $13 683 for high medical risk only, $8214 for high social risk only, and $11 692 for high medical and social risk. These patterns were associated with fewer bonuses and more penalties for high-risk practices. Conclusions and Relevance During the first year of the Medicare Physician Value-Based Payment Modifier Program, physician practices that served more socially high-risk patients had lower quality and lower costs, and practices that served more medically high-risk patients had lower quality and higher costs. PMID:28763549

  19. Communicable Diseases Prioritized for Surveillance and Epidemiological Research: Results of a Standardized Prioritization Procedure in Germany, 2011

    PubMed Central

    Balabanova, Yanina; Gilsdorf, Andreas; Buda, Silke; Burger, Reinhard; Eckmanns, Tim; Gärtner, Barbara; Groß, Uwe; Haas, Walter; Hamouda, Osamah; Hübner, Johannes; Jänisch, Thomas; Kist, Manfred; Kramer, Michael H.; Ledig, Thomas; Mielke, Martin; Pulz, Matthias; Stark, Klaus; Suttorp, Norbert; Ulbrich, Uta; Wichmann, Ole; Krause, Gérard

    2011-01-01

    Introduction To establish strategic priorities for the German national public health institute (RKI) and guide the institute's mid-term strategic decisions, we prioritized infectious pathogens in accordance with their importance for national surveillance and epidemiological research. Methods We used the Delphi process with internal (RKI) and external experts and a metric-consensus approach to score pathogens according to ten three-tiered criteria. Additional experts were invited to weight each criterion, leading to the calculation of a median weight by which each score was multiplied. We ranked the pathogens according to the total weighted score and divided them into four priority groups. Results 127 pathogens were scored. Eighty-six experts participated in the weighting; “Case fatality rate” was rated as the most important criterion. Twenty-six pathogens were ranked in the highest priority group; among those were pathogens with internationally recognised importance (e.g., Human Immunodeficiency Virus, Mycobacterium tuberculosis, Influenza virus, Hepatitis C virus, Neisseria meningitidis), pathogens frequently causing large outbreaks (e.g., Campylobacter spp.), and nosocomial pathogens associated with antimicrobial resistance. Other pathogens in the highest priority group included Helicobacter pylori, Respiratory Syncytial Virus, Varicella zoster virus and Hantavirus. Discussion While several pathogens from the highest priority group already have a high profile in national and international health policy documents, high scores for other pathogens (e.g., Helicobacter pylori, Respiratory syncytial virus or Hantavirus) indicate a possible under-recognised importance within the current German public health framework. A process to strengthen respective surveillance systems and research has been started. The prioritization methodology has worked well; its modular structure makes it potentially useful for other settings. PMID:21991334
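    The scoring rule described in the methods multiplies each criterion's three-tiered score by the median expert weight for that criterion, then sums over criteria. A sketch with illustrative numbers (the criteria names and weights are examples, not the study's values):

```python
from statistics import median

def weighted_total(criterion_scores, expert_weights):
    """Multiply each criterion's score by the median of the weights the
    experts assigned to that criterion, then sum over all criteria."""
    return sum(score * median(expert_weights[criterion])
               for criterion, score in criterion_scores.items())

# Illustrative: two criteria, each weighted by three experts.
weights = {"case fatality rate": [3, 3, 2], "incidence": [1, 2, 2]}
print(weighted_total({"case fatality rate": 2, "incidence": 1}, weights))  # -> 8
```

    Pathogens are then sorted by this total weighted score and cut into the four priority groups.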

  20. A survey of kidney disease and risk-factor information on the World Wide Web.

    PubMed

    Calderón, José Luis; Zadshir, Ashraf; Norris, Keith

    2004-11-11

    Chronic kidney disease (CKD) is epidemic, and informing those at risk is a national health priority. However, the discrepancy between the readability of health information and the literacy skills of those it targets is a recognized barrier to communicating health information that may promote good health outcomes. Because the World Wide Web has become one of the most important sources of health information, we sought to assess the readability of commonly available CKD information. Twelve highly cited English-language kidney disease Web sites were identified with 4 popular search engines. Each Web site was reviewed for the availability of 6 domains of information germane to CKD and risk-factor information. We estimated readability scores with the Flesch-Kincaid and Flesch Reading Ease Index methods. The deviation of readability scores for CKD information from the readability appropriate to average literacy skills, and to the limited literacy skills of vulnerable populations (low socioeconomic status, health disparities, and the elderly), was calculated. Eleven Web sites met the inclusion criteria. Six of 11 sites provided information on all 6 domains of CKD and risk-factor information. Mean readability scores for all 6 domains of CKD information exceeded national average literacy skills and far exceeded the fifth-grade-level readability desired for informing vulnerable populations. Information about CKD and diabetes consistently had higher readability scores. Information on the World Wide Web about CKD and its risk factors may not be readable for comprehension by the general public, especially by underserved minority populations with limited literacy skills. Barriers to health communication may be important contributors to the rising CKD epidemic and disparities in CKD health status experienced by minority populations.
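    The two readability measures used here have standard published formulas based on words per sentence and syllables per word. A sketch (syllable counting, the hard part in practice, is assumed to be done upstream):

```python
def flesch_reading_ease(words, sentences, syllables):
    """Flesch Reading Ease: roughly 60-70 is plain English; lower is harder."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words, sentences, syllables):
    """Flesch-Kincaid grade level: approximate US school grade required."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# A 100-word, 5-sentence, 150-syllable passage:
print(round(flesch_reading_ease(100, 5, 150), 2),
      round(flesch_kincaid_grade(100, 5, 150), 2))
```

    A grade level above about 5 would exceed the readability target the study cites for vulnerable populations.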

  1. Obesity and cardio-metabolic risk factors in urban adults of Benin: Relationship with socio-economic status, urbanisation, and lifestyle patterns

    PubMed Central

    Sodjinou, Roger; Agueh, Victoire; Fayomi, Benjamin; Delisle, Hélène

    2008-01-01

    Background There is a dearth of information on diet-related chronic diseases in West Africa. This cross-sectional study assessed the rate of obesity and other cardiovascular disease (CVD) risk factors in a random sample of 200 urban adults in Benin and explored the associations between these factors and socio-economic status (SES), urbanisation as well as lifestyle patterns. Methods Anthropometric parameters (height, weight and waist circumference), blood pressure, fasting plasma glucose, and serum lipids (HDL-cholesterol and triglycerides) were measured. WHO cut-offs were used to define CVD risk factors. Food intake and physical activity were assessed with three non-consecutive 24-hour recalls. Information on tobacco use and alcohol consumption was collected using a questionnaire. An overall lifestyle score (OLS) was created based on diet quality, alcohol consumption, smoking, and physical activity. A SES score was computed based on education, main occupation and household amenities (as proxy for income). Results The most prevalent CVD risk factors were overall obesity (18%), abdominal obesity (32%), hypertension (23%), and low HDL-cholesterol (13%). Diabetes and hypertriglyceridemia were uncommon. The prevalence of overall obesity was roughly four times higher in women than in men (28 vs. 8%). After controlling for age and sex, the odds of obesity increased significantly with SES, while a longer exposure to the urban environment was associated with higher odds of hypertension. Of the single lifestyle factors examined, physical activity was the most strongly associated with several CVD risk factors. Logistic regression analyses revealed that the likelihood of obesity and hypertension decreased significantly as the OLS improved, while controlling for potential confounding factors. 
Conclusion Our data show that obesity and cardio-metabolic risk factors are highly prevalent among urban adults in Benin, which calls for urgent measures to avert the rise of diet-related chronic diseases. People with higher SES and those with a longer exposure to the urban environment are priority target groups for interventions focusing on environmental risk factors that are amenable to change in this population. Lifestyle interventions would appear appropriate, with particular emphasis on physical activity. PMID:18318907

  2. [Investigation into the capacity for risk identification, assessment, and mitigation in managing public health emergencies in China].

    PubMed

    Hu, Guo-qing; Rao, Ke-qin; Sun, Zhen-qiu

    2007-08-01

    To investigate the capacity for risk identification, assessment, and mitigation in public health emergency management in China, four provinces were randomly selected using stratified sampling. All municipalities in these four provinces were assessed using the 3rd subscale (Risk Identification, Risk Assessment, and Risk Mitigation) of the Preparedness and Response Capacity Questionnaire for Public Health Emergencies Used in Provincial or Municipal Governments, developed by the Center for Health Statistics and Information, Ministry of Health of the People's Republic of China. Sixty of 66 questionnaires (90.91%) were collected. Among the 60 investigated municipalities, 35 (58%) had identified potential public health emergencies; 17 (28%) had assessed the risks of the identified emergencies; 5 (8%) had conducted risk assessments of locally accident-prone factories, mines, corporations, and large establishments; 6 (10%) had identified priorities in public health emergency management based on risk assessment; 6 (10%) had developed specific prevention strategies for the main public health emergencies; 3 (5%) had assessed the vulnerability of local residents to public health emergencies; and 34 (57%) had assessed or were assessing their preparedness and response capacity for public health emergencies in the past 2 years. The mean standardized total score for risk identification, assessment, and mitigation was 24.05 (95% CI: 18.32, 29.77). Risk identification, assessment, and mitigation still require further improvement in China, and both central and local authorities should implement more effective and efficient measures.

  3. Method ranks competing projects by priorities, risk. [A method to help prioritize oil and gas pipeline project goals]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moeckel, D.R.

    A practical, objective guide for ranking projects based on risk-based priorities has been developed by Sun Pipe Line Co. The deliberately simple system guides decisions on how to allocate scarce company resources because all managers employ the same criteria in weighing potential risks to the company versus benefits. Managers at all levels are continuously having to comply with an ever-growing number of legislative and regulatory requirements while at the same time trying to run their businesses effectively. The system primarily is designed for use as a compliance oversight and tracking process to document, categorize, and follow up on work concerning various issues or projects. That is, the system consists of an electronic database which is updated periodically and is used by various levels of management to monitor the progress of health, safety, environmental, and compliance-related projects. The criteria used in determining a risk factor and assigning a priority have also been adapted and found useful for evaluating other types of projects. The process enables management to better define the potential risks and/or loss of benefits that are being accepted when a project is rejected from an immediate work plan or budget. In times of financial austerity, it is extremely important that the right decisions are made at the right time.

  4. Predicting vascular complications in percutaneous coronary interventions.

    PubMed

    Piper, Winthrop D; Malenka, David J; Ryan, Thomas J; Shubrooks, Samuel J; O'Connor, Gerald T; Robb, John F; Farrell, Karen L; Corliss, Mary S; Hearne, Michael J; Kellett, Mirle A; Watkins, Matthew W; Bradley, William A; Hettleman, Bruce D; Silver, Theodore M; McGrath, Paul D; O'Mears, John R; Wennberg, David E

    2003-06-01

    Using a large, current, regional registry of percutaneous coronary interventions (PCI), we identified risk factors for postprocedure vascular complications and developed a scoring system to estimate individual patient risk. A vascular complication (access-site injury requiring treatment or bleeding requiring transfusion) is a potentially avoidable outcome of PCI. Data were collected on 18,137 consecutive patients undergoing PCI in northern New England from January 1997 to December 1999. Multivariate regression was used to identify characteristics associated with vascular complications and to develop a scoring system to predict risk. The rate of vascular complication was 2.98% (541 cases). Variables associated with increased risk in the multivariate analysis included age ≥70 (odds ratio [OR] 2.7), female sex (OR 2.4), body surface area <1.6 m² (OR 1.9), history of congestive heart failure (OR 1.4), chronic obstructive pulmonary disease (OR 1.5), renal failure (OR 1.9), lower extremity vascular disease (OR 1.4), bleeding disorder (OR 1.68), emergent priority (OR 2.3), myocardial infarction (OR 1.7), shock (OR 1.86), ≥1 type B2 (OR 1.32) or type C (OR 1.7) lesions, 3-vessel PCI (OR 1.5), and use of thienopyridines (OR 1.4) or glycoprotein IIb/IIIa receptor inhibitors (OR 1.9). The model performed well in tests for significance, discrimination, and calibration. The scoring system captured 75% of actual vascular complications in its highest quintiles of predicted risk. Predicting the risk of post-PCI vascular complications is feasible. This information may be useful for clinical decision-making and institutional efforts at quality improvement.
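    The paper's exact point assignments are not reproduced here, but additive risk scores of this kind are commonly built by rounding scaled logarithms of the model's odds ratios. A hypothetical sketch using three of the reported ORs:

```python
import math

def risk_points(odds_ratios, scale=10):
    """Integer points per risk factor, proportional to ln(odds ratio).

    A hypothetical scheme: the published scoring system's actual point
    values may differ from these rounded log-odds multiples.
    """
    return {factor: round(scale * math.log(odds_ratio))
            for factor, odds_ratio in odds_ratios.items()}

# Three of the reported odds ratios; a patient's score is the sum of
# the points for the factors present.
print(risk_points({"age >= 70": 2.7, "female sex": 2.4, "emergent priority": 2.3}))
```

    Points are additive on the log-odds scale, which is why summing them approximates the joint effect of multiple risk factors under the logistic model.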

  5. External Validation of Risk Scores for Major Bleeding in a Population-Based Cohort of Transient Ischemic Attack and Ischemic Stroke Patients.

    PubMed

    Hilkens, Nina A; Li, Linxin; Rothwell, Peter M; Algra, Ale; Greving, Jacoba P

    2018-03-01

    The S₂TOP-BLEED score may help to identify patients at high risk of bleeding on antiplatelet drugs after a transient ischemic attack or ischemic stroke. The score was derived on trial populations, and its performance in a real-world setting is unknown. We aimed to externally validate the S₂TOP-BLEED score for major bleeding in a population-based cohort and to compare its performance with other risk scores for bleeding. We studied risk of bleeding in 2072 patients with a transient ischemic attack or ischemic stroke on antiplatelet agents in the population-based OXVASC (Oxford Vascular Study) according to 3 scores: S₂TOP-BLEED, REACH, and Intracranial-B₂LEED₃S. Performance was assessed with C statistics and calibration plots. During 8302 patient-years of follow-up, 117 patients had a major bleed. The S₂TOP-BLEED score showed a C statistic of 0.69 (95% confidence interval [CI], 0.64-0.73) and accurate calibration for 3-year risk of major bleeding. The S₂TOP-BLEED score was much more predictive of fatal bleeding than nonmajor bleeding (C statistics, 0.77; 95% CI, 0.69-0.85 and 0.50; 95% CI, 0.44-0.58, respectively). The REACH score had a C statistic of 0.63 (95% CI, 0.58-0.69) for major bleeding and the Intracranial-B₂LEED₃S score a C statistic of 0.60 (95% CI, 0.51-0.70) for intracranial bleeding. The ratio of ischemic events versus bleeds decreased across risk groups of bleeding from 6.6:1 in the low-risk group to 1.8:1 in the high-risk group. The S₂TOP-BLEED score shows modest performance in a population-based cohort of patients with a transient ischemic attack or ischemic stroke. Although bleeding risks were associated with risks of ischemic events, risk stratification may still be useful to identify a subgroup of patients at particularly high risk of bleeding, in whom preventive measures are indicated. © 2018 The Authors.
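    The C statistic reported throughout is the probability that a randomly chosen patient who had the outcome received a higher predicted risk than one who did not, with ties counted as one-half. A minimal sketch (the scores are illustrative):

```python
from itertools import product

def c_statistic(event_scores, nonevent_scores):
    """C statistic (area under the ROC curve): the proportion of
    (event, non-event) pairs in which the event patient has the higher
    predicted risk, counting tied pairs as one-half."""
    pairs = list(product(event_scores, nonevent_scores))
    wins = sum(1.0 if e > n else 0.5 if e == n else 0.0 for e, n in pairs)
    return wins / len(pairs)

# Three bleeders vs three non-bleeders, with one tied pair:
print(round(c_statistic([4, 5, 6], [1, 2, 6]), 3))  # -> 0.722
```

    A value of 0.5 corresponds to no discrimination and 1.0 to perfect discrimination, which puts the reported 0.60-0.69 range in the "modest" territory the authors describe.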

  6. ICU nurses' oral-care practices and the current best evidence.

    PubMed

    DeKeyser Ganz, Freda; Fink, Naomi Farkash; Raanan, Ofra; Asher, Miriam; Bruttin, Madeline; Nun, Maureen Ben; Benbinishty, Julie

    2009-01-01

    The purpose of this study was to describe the oral-care practices of ICU nurses, to compare those practices with current evidence-based practice, and to determine if the use of evidence-based practice was associated with personal demographic or professional characteristics. A national survey of oral-care practices of ICU nurses was conducted using a convenience sample of 218 practicing ICU nurses in 2004-05. The survey instrument included questions about demographic and professional characteristics and a checklist of oral-care practices. Nurses rated their perceived level of priority concerning oral care on a scale from 0 to 100. A score was computed representing the sum of 14 items related to equipment, solutions, assessments, and techniques associated with the current best evidence. This score was then statistically analyzed using ANOVA to determine differences in evidence-based practice (EBP) based on demographic and professional characteristics. The most commonly used equipment was gauze pads (84%), followed by tongue depressors (55%) and toothbrushes (34%). Chlorhexidine was the most common solution used (75%). Less than half (44%) reported brushing their patients' teeth. The majority performed an oral assessment before beginning oral care (71%); however, none could describe what assessment tool was used. Only 57% of nurses reported documenting their oral care. Nurses rated oral care of intubated patients with a priority of 67 ± 27.1. Wide variations were noted within and between units in terms of which techniques, equipment, and solutions were used. No significant relationships were found between the use of an evidence-based protocol and demographic and professional characteristics, or with the priority given to oral care. While nurses ranked oral care a high priority, many did not implement the latest evidence into their current practice. The level of research utilization was not related to personal or professional characteristics. Therefore, attempts should be made to encourage all ICU nurses to introduce and use evidence-based oral-care protocols. Practicing ICU nurses in this survey were often not adhering to the latest evidence-based practice and therefore need to be educated and encouraged to do so in order to improve patient care.

  7. Energy in the Environment - Initiatives 2004-08

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paul Jehn

    Under the Energy and Environment Initiative, the GWPC/GWPRF will expand the oil and gas electronic commerce initiatives used to enhance the Risk Based Data Management System (RBDMS) and the Cost Effective Regulatory Approach (CERA). The GWPC/GWPRF has identified the following priorities for work efforts during this period; these priorities will serve as the base from which selections for each work period are proposed. Work tasks will be presented for each reporting period by the GWPC, drawn from areas selected from the general list of priorities.

  8. Active epidemiological surveillance of musculoskeletal disorders in a shoe factory

    PubMed Central

    Roquelaure, Y; Mariel, J; Fanello, S; Boissiere, J; Chiron, H; Dano, C; Bureau, D; Penneau-Fontbonne, D

    2002-01-01

    Aims: (1) To evaluate an active method of surveillance of musculoskeletal disorders (MSDs). (2) To compare different criteria for deciding whether or not a work situation could be considered at high risk of MSDs in a large, modern shoe factory. Methods: A total of 253 blue collar workers were interviewed and examined by the same physician in 1996; 191 of them were re-examined in 1997. Risk factors of MSDs were assessed for each worker by standardised job site work analysis. Prevalence and incidence rates of carpal tunnel syndrome, rotator cuff syndrome, and tension neck syndrome were calculated for each of the nine main types of work situation. Different criteria used to assess situations with high risk of MSDs were compared. Results: On the basis of prevalence data, three types of work situation were detected to be at high risk of MSDs: cutting, sewing, and assembly preparation. The three types of work situations identified on the basis of incidence data (sewing preparation, mechanised assembling, and finishing) were different from those identified by prevalence data. At least one recognised risk factor for MSDs was identified for all groups of work situations. The ergonomic risk could be considered as serious for the four types of work situation having the highest ergonomic scores (sewing, assembly preparation, pasting, and cutting). Conclusion: The results of the health surveillance method depend largely on the definition of the criteria used to define the risk of MSDs. The criteria based on incidence data are more valid than those based on prevalence data. Health and risk factor surveillance must be combined to predict the risk of MSDs in the company. However, exposure assessment plays a greater role in determining the priorities for ergonomic intervention. PMID:12107293

  9. The French National Nutrition and Health Program score is associated with nutritional status and risk of major chronic diseases.

    PubMed

    Estaquio, Carla; Castetbon, Katia; Kesse-Guyot, Emmanuelle; Bertrais, Sandrine; Deschamps, Valérie; Dauchet, Luc; Péneau, Sandrine; Galan, Pilar; Hercberg, Serge

    2008-05-01

    Few studies have found that adherence to dietary guidelines reduces the incidence of chronic disease. In 2001, a National Nutrition and Health Program (Programme National Nutrition Santé) was implemented in France and included 9 quantified priority nutritional goals involving fruit, vegetable, and nutrient intakes, nutritional status, and physical activity. We developed an index score that includes indicators of these public health objectives and examined the association between this score and the incidence of major chronic diseases in the Supplémentation en Vitamines et Minéraux AntioXydants cohort. Data from middle-aged adults free of major chronic diseases who provided at least 3 24-h dietary records during the first 2 y of follow-up were included in the present analysis (n = 4,976). Major chronic disease, documented during the 8-y follow-up period (n = 455), was defined as the combination of cardiovascular disease (n = 131), cancer (n = 261), or death (n = 63), whichever came first. In fully adjusted Cox models, men in the top tertile of the score, compared with those in the lowest one, had a 36% lower risk of major chronic diseases (hazard ratio = 0.64; 95% CI: 0.44-0.96). No association was found in women. Healthy diet and lifestyle were associated with a lower risk of chronic diseases, particularly in men, thereby underlining the relevance of the French nutritional recommendations.

  10. Advances in liver transplantation allocation systems.

    PubMed

    Schilsky, Michael L; Moini, Maryam

    2016-03-14

    With the growing number of patients in need of liver transplantation, there is a need to adopt new allocation policies, and to modify existing ones, that prioritize patients for liver transplantation. Policy should ensure fair allocation that is reproducible and strongly predictive of the best pre- and posttransplant outcomes while taking into account the natural history of the potential recipient's liver disease and its complications. There is wide acceptance of allocation policies based on urgency, in which the sickest patients on the waiting list, with the highest risk of mortality, receive priority. The Model for End-Stage Liver Disease and the Child-Turcotte-Pugh scoring systems, the two most universally applicable, are used in urgency-based prioritization. However, other factors must be considered to achieve optimal allocation. Factors affecting pretransplant patient survival and the quality of the donor organ also affect outcome. The optimal system should have allocation prioritization that accounts for both urgency and transplant outcome. We reviewed past and current liver allocation systems with the aim of generating further discussion about improvement of current policies.

  11. Communicable Diseases Prioritized According to Their Public Health Relevance, Sweden, 2013

    PubMed Central

    Dahl, Viktor; Tegnell, Anders; Wallensten, Anders

    2015-01-01

    To establish strategic priorities for the Public Health Agency of Sweden we prioritized pathogens according to their public health relevance in Sweden in order to guide resource allocation. We then compared the outcome to ongoing surveillance. We used a modified prioritization method developed at the Robert Koch Institute in Germany. In a Delphi process experts scored pathogens according to ten variables. We ranked the pathogens according to the total score and divided them into four priority groups. We then compared the priority groups to self-reported time spent on surveillance by epidemiologists and ongoing programmes for surveillance through mandatory and/or voluntary notifications and for surveillance of typing results. 106 pathogens were scored. The result of the prioritization process was similar to the outcome of the prioritization in Germany. Common pathogens such as calicivirus and Influenza virus as well as blood-borne pathogens such as human immunodeficiency virus, hepatitis B and C virus, gastro-intestinal infections such as Campylobacter and Salmonella and vector-borne pathogens such as Borrelia were all in the highest priority group. 63% of time spent by epidemiologists on surveillance was spent on pathogens in the highest priority group and all pathogens in the highest priority group, except for Borrelia and varicella-zoster virus, were under surveillance through notifications. Ten pathogens in the highest priority group (Borrelia, calicivirus, Campylobacter, Echinococcus multilocularis, hepatitis C virus, HIV, respiratory syncytial virus, SARS- and MERS coronavirus, tick-borne encephalitis virus and varicella-zoster virus) did not have any surveillance of typing results. 
We will evaluate the possibility of surveillance for pathogens in the highest priority group that currently lack any ongoing surveillance, and evaluate the need for surveillance of pathogens in the low-priority group that are under ongoing surveillance, in order to focus our work on the pathogens with the highest relevance. PMID:26397699

  12. Maximization of the usage of coronary CTA derived plaque information using a machine learning based algorithm to improve risk stratification; insights from the CONFIRM registry.

    PubMed

    van Rosendael, Alexander R; Maliakal, Gabriel; Kolli, Kranthi K; Beecy, Ashley; Al'Aref, Subhi J; Dwivedi, Aeshita; Singh, Gurpreet; Panday, Mohit; Kumar, Amit; Ma, Xiaoyue; Achenbach, Stephan; Al-Mallah, Mouaz H; Andreini, Daniele; Bax, Jeroen J; Berman, Daniel S; Budoff, Matthew J; Cademartiri, Filippo; Callister, Tracy Q; Chang, Hyuk-Jae; Chinnaiyan, Kavitha; Chow, Benjamin J W; Cury, Ricardo C; DeLago, Augustin; Feuchtner, Gudrun; Hadamitzky, Martin; Hausleiter, Joerg; Kaufmann, Philipp A; Kim, Yong-Jin; Leipsic, Jonathon A; Maffei, Erica; Marques, Hugo; Pontone, Gianluca; Raff, Gilbert L; Rubinshtein, Ronen; Shaw, Leslee J; Villines, Todd C; Gransar, Heidi; Lu, Yao; Jones, Erica C; Peña, Jessica M; Lin, Fay Y; Min, James K

    Machine learning (ML) is a field of computer science that has been shown to integrate clinical and imaging data effectively for the creation of prognostic scores. The current study investigated whether an ML score, incorporating only 16-segment coronary tree information derived from coronary computed tomography angiography (CCTA), provides enhanced risk stratification compared with current CCTA-based risk scores. From the multi-center CONFIRM registry, patients were included with complete CCTA risk score information and ≥3-year follow-up for myocardial infarction and death (primary endpoint). Patients with prior coronary artery disease were excluded. Conventional CCTA risk scores (conventional CCTA approach, segment involvement score, Duke prognostic index, segment stenosis score, and the Leaman risk score) and a score created using ML were compared for the area under the receiver operating characteristic curve (AUC). Only the 16-segment coronary stenosis grades (0%, 1-24%, 25-49%, 50-69%, 70-99% and 100%) and plaque composition (calcified, mixed and non-calcified) were provided to the ML model. A boosted ensemble algorithm (extreme gradient boosting; XGBoost) was used, and the entire dataset was randomly split into a training set (80%) and a testing set (20%). First, tuned hyperparameters were used to generate a trained model from the training data set (80% of data). Second, the performance of this trained model was independently tested on the unseen test set (20% of data). In total, 8844 patients (mean age 58.0 ± 11.5 years, 57.7% male) were included. During a mean follow-up time of 4.6 ± 1.5 years, 609 events occurred (6.9%). No CAD was observed in 48.7% (3.5% event rate), non-obstructive CAD in 31.8% (6.8% event rate), and obstructive CAD in 19.5% (15.6% event rate). Discrimination of events as expressed by AUC was significantly better for the ML-based approach (0.771) versus the other scores (ranging from 0.685 to 0.701), P < 0.001.
Net reclassification improvement analysis showed that the improved risk stratification was the result of down-classification of risk among patients who did not experience events (non-events). A risk score created by an ML-based algorithm that utilizes standard 16-segment coronary stenosis and composition information derived from detailed CCTA reading has greater prognostic accuracy than current CCTA-integrated risk scores. These findings indicate that an ML-based algorithm can improve the integration of CCTA-derived plaque information to improve risk stratification. Published by Elsevier Inc.
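    The validation protocol described above — a random 80/20 split, fitting on the training portion only, and reporting discrimination once on the unseen test portion — can be sketched as follows. This illustrates only the split protocol with stand-in data; it is not the XGBoost model or the CONFIRM data:

```python
import random

def train_test_split(rows, test_frac=0.2, seed=42):
    """Random split, as in the reported protocol: the model is tuned and
    fitted on the training portion only, and discrimination (AUC) is
    reported once on the held-out test portion."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)
    n_test = int(len(rows) * test_frac)
    return rows[n_test:], rows[:n_test]  # train, test

# Stand-in records: (16 per-segment stenosis grades 0-5, event flag).
rng = random.Random(0)
data = [([rng.randint(0, 5) for _ in range(16)], i % 15 == 0)
        for i in range(100)]
train, test = train_test_split(data)
print(len(train), len(test))  # → 80 20
```

    Keeping the test fold untouched during hyperparameter tuning is what makes the reported AUC an estimate of performance on unseen patients.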

  13. The ERICE-score: the new native cardiovascular score for the low-risk and aged Mediterranean population of Spain.

    PubMed

    Gabriel, Rafael; Brotons, Carlos; Tormo, M José; Segura, Antonio; Rigo, Fernando; Elosua, Roberto; Carbayo, Julio A; Gavrila, Diana; Moral, Irene; Tuomilehto, Jaakko; Muñiz, Javier

    2015-03-01

    In Spain, data based on large population-based cohorts adequate to provide an accurate prediction of cardiovascular risk have been scarce. Thus, calibration of the EuroSCORE and Framingham scores has been proposed and done for our population. The aim was to develop a native risk prediction score to accurately estimate the individual cardiovascular risk in the Spanish population. Seven Spanish population-based cohorts including middle-aged and elderly participants were assembled. There were 11,800 people (6,387 women) representing 107,915 person-years of follow-up. A total of 1,214 cardiovascular events were identified, of which 633 were fatal. Cox regression analyses were conducted to examine the contributions of the different variables to the 10-year total cardiovascular risk. Age was the strongest cardiovascular risk factor. High systolic blood pressure, diabetes mellitus and smoking were strong predictive factors. The contribution of serum total cholesterol was small. Antihypertensive treatment also had a significant impact on cardiovascular risk, greater in men than in women. The model showed good discriminative power (C statistic = 0.789 in men and 0.816 in women). Ten-year risk estimations are displayed graphically in risk charts separately for men and women. The ERICE is a new native cardiovascular risk score for the Spanish population, derived from the background and contemporaneous risk of several Spanish cohorts. The ERICE score offers a direct and reliable estimation of total cardiovascular risk, taking into consideration the effect of diabetes mellitus and cardiovascular risk factor management. The ERICE score is a practical and useful tool for clinicians to estimate the total individual cardiovascular risk in Spain. Copyright © 2014 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.

  14. Can we make reports of end-of-life care quality more consumer-focused? results of a nationwide quality measurement program.

    PubMed

    Smith, Dawn; Caragian, Nicole; Kazlo, Elena; Bernstein, Jennie; Richardson, Diane; Casarett, David

    2011-03-01

    The goal of this study was to define families' priorities for various aspects of end-of-life care, and to determine whether scores that reflect these priorities alter facilities' quality rankings. Nationwide telephone survey. 62 VA medical centers, including acute and long term care. For each patient who died in a participating facility, one family member was invited to participate. A survey included 14 items describing key aspects of the patient's care in his or her last month of life, and one global rating. A weighted score was calculated based on the association between each item and the global rating. Interviews were completed with family members for 3,897 of 7,110 patients (55%). Items showed an approximately 5-fold range of weights, indicating a wide variation in the importance that families placed on aspects of palliative care (low: pain management, weight = 0.54, 95% CI 0.38-0.70, P < 0.001; high: providers were "kind, caring, and respectful", weight = 2.46, 95% CI 2.24-2.68, P < 0.001). Weights were homogeneous across patient subgroups, and there were no significant changes in facilities' quality rankings when weights were used. Both weighted and unweighted scores showed similar evidence of the impact of process measures. There appears to be wide variation in the importance that families place on several aspects of end-of-life care. However, the impact of weighting was generally even across patient subgroups and facilities. Therefore, the use of weights to account for families' priorities is not likely to alter a facility's quality score.
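    A weighted facility score of the kind described — item scores multiplied by family-derived importance weights — might be computed as below. The normalization and the toy numbers are our assumptions; the paper's exact formula is not given in the abstract:

```python
def weighted_score(item_scores, weights):
    """Weighted quality score: each care item (here a 0-1 proportion of
    families giving the best response) is multiplied by its importance
    weight, derived from the item's association with the global rating.
    Normalizing by the weight sum to a 0-100 scale is our assumption,
    not the paper's published formula."""
    total = sum(w * s for w, s in zip(weights, item_scores))
    return 100 * total / sum(weights)

# Toy example with the two weights quoted in the abstract:
# pain management (0.54) and "kind, caring, and respectful" (2.46).
print(round(weighted_score([0.90, 0.60], [0.54, 2.46]), 1))  # → 65.4
```

    With equal weights this reduces to the unweighted mean, which is why near-homogeneous weights left the facility rankings essentially unchanged.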

  15. Mammography Decision Aid Reduces Decisional Conflict for Women in Their Forties Considering Screening.

    PubMed

    Eden, Karen B; Scariati, Paula; Klein, Krystal; Watson, Lindsey; Remiker, Mark; Hribar, Michelle; Forro, Vanessa; Michaels, LeAnn; Nelson, Heidi D

    2015-12-01

    Clinical guidelines recommend a personalized approach to mammography screening for women in their forties; however, methods to do so are lacking. An evidence-based mammography screening decision aid was developed as an electronic mobile application and evaluated in a before-after study. The decision aid (Mammopad) included modules on breast cancer, mammography, risk assessment, and priority setting about screening. Women aged 40-49 years who were patients of rural primary care clinics, had no major risk factors for breast cancer, and no mammography during the previous year were invited to use the decision aid. Twenty women participated in pretesting of the decision aid and 75 additional women completed the before-after study. The primary outcome was decisional conflict measured before and after using Mammopad. Secondary outcomes included decision self-efficacy and intention to begin or continue mammography screening. Differences comparing measures before versus after use were determined using Wilcoxon signed rank tests. After using Mammopad, women reported reduced decisional conflict based on mean Decisional Conflict Scale scores overall (46.33 versus 8.33; Z = -7.225; p < 0.001) and on all subscales (p < 0.001). Women also reported increased mean Decision Self-Efficacy Scale scores (79.67 versus 95.73; Z = 6.816, p < 0.001). Although 19% of women changed their screening intentions, this was not statistically significant. Women reported less conflict about their decisions for mammography screening, and felt more confident to make decisions after using Mammopad. This approach may help guide women through the decision making process to determine personalized screening choices that are appropriate for them.

  16. Establishing priorities for psychological interventions in pediatric settings: A decision-tree approach using the DISABKIDS-10 Index as a screening instrument

    PubMed Central

    Silva, Neuza; Moreira, Helena; Canavarro, Maria Cristina; Carona, Carlos

    2018-01-01

    Most children and adolescents with chronic health conditions have impaired health-related quality of life and are at high risk of internalizing and externalizing problems. However, few patients present clinically significant symptoms. Using a decision-tree approach, this study aimed to identify risk profiles for psychological problems based on measures that can be easily scored and interpreted by healthcare professionals in pediatric settings. The participants were 736 children and adolescents between 8 and 18 years of age with asthma, epilepsy, cerebral palsy, type-1 diabetes or obesity. The children and adolescents completed self-report measures of health-related quality of life (DISABKIDS-10) and psychological problems (Strengths and Difficulties Questionnaire). Sociodemographic and clinical data were collected from their parents/physicians. Children and adolescents were classified into the normal (78.5%) or borderline/clinical range (21.5%) according to the Strengths and Difficulties Questionnaire cut-off values for psychological problems. The overall accuracy of the decision-tree model was 78.1% (sensitivity = 71.5%; specificity = 79.9%), with 4 profiles predicting 71.5% of borderline/clinical cases. The strongest predictor of psychological problems was a health-related quality of life standardized score below the threshold of 57.5 for patients with cerebral palsy, epilepsy or obesity, and below 70.0 for patients with asthma or diabetes. Other significant predictors were low socio-economic status, single-parent household, medication intake and younger age. The model showed adequate validity (risk = .28, SE = .02) and accuracy (area under the Receiver Operating Characteristic curve = .84; CI = .80-.87). The identification of pediatric patients at high risk for psychological problems may contribute to a more efficient allocation of health resources, particularly with regard to their referral to specialized psychological assessment and intervention. PMID:29852026
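    The first split of the reported decision tree, a condition-specific health-related quality of life threshold, can be sketched as a simple rule. The thresholds come from the abstract; the function name and the omission of the deeper splits are ours:

```python
def hrql_risk_flag(condition, hrql_score):
    """First split of the reported decision tree: a DISABKIDS-10
    standardized score below a condition-specific threshold flags
    elevated risk of borderline/clinical psychological problems.
    Thresholds are those quoted in the abstract; the deeper splits
    (socio-economic status, household, medication, age) are omitted."""
    thresholds = {'cerebral palsy': 57.5, 'epilepsy': 57.5, 'obesity': 57.5,
                  'asthma': 70.0, 'diabetes': 70.0}
    return hrql_score < thresholds[condition]

print(hrql_risk_flag('asthma', 65.0))    # → True (below the 70.0 cut-off)
print(hrql_risk_flag('epilepsy', 65.0))  # → False (above the 57.5 cut-off)
```

    A rule of this shape is what makes the screen usable at the bedside: it needs only the condition and one questionnaire score.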

  17. Towards a harmonized approach for risk assessment of genotoxic carcinogens in the European Union.

    PubMed

    Crebelli, Riccardo

    2006-01-01

    The EU Scientific Committees have in the past considered the use of mathematical models for human cancer risk estimation to be not adequately supported by the available scientific knowledge. Therefore, the advice given to risk managers was to reduce exposure as far as possible, following the as-low-as-reasonably-achievable (ALARA) principle. However, ALARA does not allow priorities to be set for risk management, as it takes into consideration neither carcinogenic potency nor the level of human exposure. For this reason the European Food Safety Authority (EFSA) has identified as a priority task the development of a transparent, scientifically justifiable and harmonized approach for risk assessment of genotoxic carcinogens. This approach, proposed at the end of 2005, is based on the definition of the margin of exposure (MOE), i.e., the relationship between a given point of the dose-response curve in the animal and human exposure. As the point of comparison EFSA recommends the BMDL10, i.e., the lower limit of the confidence interval of the benchmark dose associated with an incidence of 10% of induced tumors. Based on current scientific knowledge, EFSA concluded that a MOE of 10,000 or greater is associated with a low risk and a low priority for risk management actions. The proposed approach does not replace ALARA. It should find application to food contaminants, process by-products, and other substances unintentionally present in food. On the other hand, it is not intended to provide a tool for the definition of tolerable intake levels for genotoxic carcinogens deliberately added to food.
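    The MOE calculation and the 10,000 decision threshold described above reduce to simple arithmetic; a sketch with illustrative numbers only (not taken from the paper):

```python
def margin_of_exposure(bmdl10, exposure):
    """EFSA margin of exposure: the BMDL10 (lower confidence bound of the
    benchmark dose giving a 10% incidence of induced tumors in animals)
    divided by the estimated human exposure (same dose units)."""
    return bmdl10 / exposure

def moe_priority(moe):
    """EFSA interpretation: an MOE of 10,000 or greater indicates low
    risk and low priority for risk-management action."""
    return 'low concern' if moe >= 10_000 else 'potential concern'

# Illustrative numbers only (mg/kg bw/day), not from the paper:
moe = margin_of_exposure(bmdl10=0.5, exposure=0.00001)
print(round(moe), moe_priority(moe))  # → 50000 low concern
```

    Unlike ALARA, the ratio explicitly combines potency (via the BMDL10) with exposure, which is what makes ranking across substances possible.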

  18. An efficient sampling strategy for selection of biobank samples using risk scores.

    PubMed

    Björk, Jonas; Malmqvist, Ebba; Rylander, Lars; Rignell-Hydbom, Anna

    2017-07-01

    The aim of this study was to suggest a new sample-selection strategy based on risk scores in case-control studies with biobank data. An ongoing Swedish case-control study on fetal exposure to endocrine disruptors and overweight in early childhood was used as the empirical example. Cases were defined as children with a body mass index (BMI) ⩾18 kg/m² (n=545) at four years of age, and controls as children with a BMI of ⩽17 kg/m² (n=4472 available). The risk of being overweight was modelled using logistic regression based on available covariates from the health examination and prior to selecting samples from the biobank. A risk score was estimated for each child and categorised as low (0-5%), medium (6-13%) or high (⩾14%) risk of being overweight. The final risk-score model, with smoking during pregnancy (p=0.001), birth weight (p<0.001), BMI of both parents (p<0.001 for both), type of residence (p=0.04) and economic situation (p=0.12), yielded an area under the receiver operating characteristic curve of 67% (n=3945 with complete data). The case group (n=416) had the following risk-score profile: low (12%), medium (46%) and high risk (43%). Twice as many controls were selected from each risk group, with further matching on sex. Computer simulations showed that the proposed selection strategy with stratification on risk scores yielded consistent improvements in statistical precision. Using risk scores based on available survey or register data as a basis for sample selection may improve possibilities to study heterogeneity of exposure effects in biobank-based studies.
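    The selection strategy — stratify children by modelled risk, then draw twice as many controls as cases per stratum — might look like the following sketch. The handling of the boundaries between the published percentage bands and the omission of sex matching are our simplifications:

```python
import random

def risk_group(p):
    """Assign a predicted probability of overweight to the study's
    strata: low (0-5%), medium (6-13%), high (>=14%). Handling of the
    gaps between the published percentage bands is our assumption."""
    if p < 0.06:
        return 'low'
    if p < 0.14:
        return 'medium'
    return 'high'

def select_controls(case_probs, control_probs, seed=1):
    """Within each risk stratum, draw twice as many controls as there
    are cases (the paper's additional matching on sex is omitted)."""
    rng = random.Random(seed)
    chosen = []
    for g in ('low', 'medium', 'high'):
        n = 2 * sum(1 for p in case_probs if risk_group(p) == g)
        pool = [p for p in control_probs if risk_group(p) == g]
        chosen += rng.sample(pool, min(n, len(pool)))
    return chosen

cases = [0.02, 0.10, 0.20, 0.30]                    # 1 low, 1 medium, 2 high
controls = [0.01] * 10 + [0.08] * 10 + [0.20] * 10  # ample pool per stratum
print(len(select_controls(cases, controls)))        # → 8
```

    Stratifying before drawing controls is what concentrates biobank assays on the risk strata where the cases actually sit, which is the source of the precision gain the simulations report.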

  19. Risk score for first-screening of prevalent undiagnosed chronic kidney disease in Peru: the CRONICAS-CKD risk score.

    PubMed

    Carrillo-Larco, Rodrigo M; Miranda, J Jaime; Gilman, Robert H; Medina-Lezama, Josefina; Chirinos-Pacheco, Julio A; Muñoz-Retamozo, Paola V; Smeeth, Liam; Checkley, William; Bernabe-Ortiz, Antonio

    2017-11-29

    Chronic Kidney Disease (CKD) represents a great burden for the patient and the health system, particularly if diagnosed at late stages. Consequently, tools to identify patients at high risk of having CKD are needed, particularly in limited-resources settings where laboratory facilities are scarce. This study aimed to develop a risk score for prevalent undiagnosed CKD using data from four settings in Peru: a complete risk score including all associated risk factors and another excluding laboratory-based variables. Cross-sectional study. We used two population-based studies: one for development and internal validation (CRONICAS), and another (PREVENCION) for external validation. Risk factors included clinical- and laboratory-based variables, among others: sex, age, hypertension and obesity; and lipid profile, anemia and glucose metabolism. The outcome was undiagnosed CKD: eGFR < 60 ml/min/1.73 m². We tested the performance of the risk scores using the area under the receiver operating characteristic (ROC) curve, sensitivity, specificity, positive/negative predictive values and positive/negative likelihood ratios. Participants in both studies averaged 57.7 years old, and over 50% were females. Age, hypertension and anemia were strongly associated with undiagnosed CKD. In the external validation, at a cut-off point of 2, the complete and laboratory-free risk scores performed similarly well, with ROC areas of 76.2% and 76.0%, respectively (P = 0.784). The best assessment parameter of these risk scores was their negative predictive value: 99.1% and 99.0% for the complete and laboratory-free scores, respectively. The developed risk scores showed a moderate performance as a screening test. People with a score of ≥ 2 points should undergo further testing to rule out CKD. Using the laboratory-free risk score is a practical approach in developing countries where laboratories are not readily available and undiagnosed CKD has significant morbidity and mortality.
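    The screening-test summaries used to evaluate the score at the ≥2-point cut-off all follow directly from a 2×2 table; a sketch with illustrative counts only:

```python
def screening_metrics(tp, fp, fn, tn):
    """Standard screening-test summaries from a 2x2 table of test result
    (positive/negative) against true disease status."""
    return {
        'sensitivity': tp / (tp + fn),  # cases detected among all cases
        'specificity': tn / (tn + fp),  # negatives among all non-cases
        'ppv': tp / (tp + fp),          # positive predictive value
        'npv': tn / (tn + fn),          # negative predictive value
    }

# Illustrative counts only, not the CRONICAS/PREVENCION data:
m = screening_metrics(tp=18, fp=300, fn=2, tn=680)
print(round(m['npv'], 3))  # → 0.997
```

    A high negative predictive value is what supports the rule-out use described above: a score below the cut-off makes undiagnosed CKD unlikely, so laboratory testing can be reserved for those who screen positive.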

  20. A Retrospective Analysis of Pressure Ulcer Incidence and Modified Braden Scale Score Risk Classifications.

    PubMed

    Chen, Hong-Lin; Cao, Ying-Juan; Wang, Jing; Huai, Bao-Sha

    2015-09-01

    The Braden Scale is the most widely used pressure ulcer risk assessment in the world, but the currently used 5 risk classification groups do not accurately discriminate among their risk categories. To optimize risk classification based on Braden Scale scores, a retrospective analysis of all consecutively admitted patients in an acute care facility who were at risk for pressure ulcer development was performed between January 2013 and December 2013. Predicted pressure ulcer incidence first was calculated by a logistic regression model based on the original Braden score. Risk classification then was modified based on the predicted pressure ulcer incidence and compared between different risk categories in the modified (3-group) classification and the traditional (5-group) classification using the chi-square test. Two thousand, six hundred, twenty-five (2,625) patients (mean age 59.8 ± 16.5, range 1 month to 98 years, 1,601 of whom were men) were included in the study; 81 patients (3.1%) developed a pressure ulcer. The predicted pressure ulcer incidence ranged from 0.1% to 49.7%. When the predicted pressure ulcer incidence was greater than 10.0% (high risk), the corresponding Braden scores were less than 11; when the predicted incidence ranged from 1.0% to 10.0% (moderate risk), the corresponding Braden scores ranged from 12 to 16; and when the predicted incidence was less than 1.0% (mild risk), the corresponding Braden scores were greater than 17. In the modified classification, observed pressure ulcer incidence was significantly different between each of the 3 risk categories (P less than 0.05). However, in the traditional classification, the observed incidence was not significantly different between the high-risk category and the moderate-risk category (P greater than 0.05) or between the mild-risk category and the no-risk category (P greater than 0.05).
If future studies confirm the validity of these findings, pressure ulcer prevention protocols of care based on Braden Scale scores can be simplified.
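    The modified three-group classification maps a Braden score directly to a risk category; a sketch (the assignment of the boundary scores 11 and 17 is our reading of the abstract):

```python
def braden_risk_category(score):
    """Modified three-group classification: Braden score <= 11 maps to
    high risk (predicted incidence > 10%), 12-16 to moderate (1-10%),
    and >= 17 to mild (< 1%). The abstract says 'less than 11' and
    'greater than 17', leaving 11 and 17 at the boundaries; assigning
    them to 'high' and 'mild' respectively is our assumption."""
    if score <= 11:
        return 'high'
    if score <= 16:
        return 'moderate'
    return 'mild'

print(braden_risk_category(9), braden_risk_category(14), braden_risk_category(20))
# → high moderate mild
```

    Collapsing five groups into three in this way only simplifies protocols if, as the study found, observed incidence genuinely differs between each of the three categories.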

  1. Delphi Study to Determine Rehabilitation Research Priorities for Older Adults With Cancer.

    PubMed

    Lyons, Kathleen Doyle; Radomski, Mary Vining; Alfano, Catherine M; Finkelstein, Marsha; Sleight, Alix G; Marshall, Timothy F; McKenna, Raymond; Fu, Jack B

    2017-05-01

    To solicit expert opinions and develop consensus around the research that is needed to improve cancer rehabilitation for older adults. Delphi methods provided a structured process to elicit and prioritize research questions from national experts. National, Web-based survey. Members (N=32) of the American Congress of Rehabilitation Medicine completed at least 1 of 3 investigator-developed surveys. Not applicable. In the first survey, participants identified up to 5 research questions that needed to be answered to improve cancer rehabilitation for older adults. In 2 subsequent surveys, participants viewed the compilation of questions, rated the importance of each question, and identified the 5 most important questions. This generated priority scores for each question. Consensus scores were created to describe the degree of agreement around the priority of each question. Highest priority research concerns the epidemiology and measurement of function and disability in older adult cancer survivors; the effects of cancer rehabilitation interventions on falls, disability, participation, survival, costs, quality of care, and health care utilization; and testing models of care that facilitate referrals from oncology to rehabilitation providers as part of coordinated, multicomponent care. A multipronged approach is needed to fill these gaps, including targeted funding opportunities developed with an advisory panel of cancer rehabilitation experts, development of a research network to facilitate novel collaborations and grant proposals, and coordinated efforts of clinical groups to advocate for funding, practice change, and policy change. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  2. Factors associated with the practice of nursing staff sharing information about patients' nutritional status with their colleagues in hospitals.

    PubMed

    Kawasaki, Y; Tamaura, Y; Akamatsu, R; Sakai, M; Fujiwara, K

    2018-01-01

    Nursing staff have an important role in patients' nutritional care. The aim of this study was to demonstrate how the practice of sharing a patient's nutritional status with colleagues was affected by the nursing staff's attitude, knowledge and their priority to provide nutritional care. The participants were 492 nursing staff. We obtained participants' demographic data, their practice of sharing patients' nutritional information, and information about their knowledge, attitude and priority of providing nutritional care using a questionnaire. We performed partial correlation analyses and linear regression analyses to describe the relationship between the total score for the practice of sharing patients' nutritional information and their knowledge, attitude and priority to provide nutritional care. Among the 492 participants, 396 nursing staff (80.5%) completed the questionnaire and were included in the analyses. The mean±s.d. total score of the 396 participants was 8.4±3.1. Nursing staff shared information when they had high nutritional knowledge (r=0.36, P<0.01) and attitude (r=0.13, P<0.05); however, the correlation coefficients were low. In the linear regression analyses, job category (β=-0.28, P<0.01), knowledge (β=0.33, P<0.01) and attitude (β=0.10, P<0.05) were independently associated with the practice of sharing information. Nursing staff's priority to provide nutritional care was not significantly associated with the practice of sharing information. Knowledge and attitude were independently associated with the practice of sharing patients' nutrition information with colleagues, regardless of their priority to provide nutritional care. An effective approach should be taken to improve the practice of providing nutritional care.

  3. Standardized error severity score (ESS) ratings to quantify risk associated with child restraint system (CRS) and booster seat misuse.

    PubMed

    Rudin-Brown, Christina M; Kramer, Chelsea; Langerak, Robin; Scipione, Andrea; Kelsey, Shelley

    2017-11-17

    Although numerous research studies have reported high levels of error and misuse of child restraint systems (CRS) and booster seats in experimental and real-world scenarios, conclusions are limited because they provide little information regarding which installation issues pose the highest risk and thus should be targeted for change. Beneficial to legislating bodies and researchers alike would be a standardized, globally relevant assessment of the potential injury risk associated with more common forms of CRS and booster seat misuse, which could be applied with observed error frequency-for example, in car seat clinics or during prototype user testing-to better identify and characterize the installation issues of greatest risk to safety. A group of 8 leading world experts in CRS and injury biomechanics, who were members of an international child safety project, estimated the potential injury severity associated with common forms of CRS and booster seat misuse. These injury risk error severity score (ESS) ratings were compiled and compared to scores from previous research that had used a similar procedure but with fewer respondents. To illustrate their application, and as part of a larger study examining CRS and booster seat labeling requirements, the new standardized ESS ratings were applied to objective installation performance data from 26 adult participants who installed a convertible (rear- vs. forward-facing) CRS and booster seat in a vehicle, and a child test dummy in the CRS and booster seat, using labels that only just met minimal regulatory requirements. The outcome measure, the risk priority number (RPN), represented the composite scores of injury risk and observed installation error frequency. Variability within the sample of ESS ratings in the present study was smaller than that generated in previous studies, indicating better agreement among experts on what constituted injury risk. 
Application of the new standardized ESS ratings to installation performance data revealed several areas of misuse of the CRS/booster seat associated with high potential injury risk. Collectively, findings indicate that standardized ESS ratings are useful for estimating injury risk potential associated with real-world CRS and booster seat installation errors.
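The risk priority number (RPN) in this record combines an expert severity rating with an observed error frequency. A minimal sketch follows; the misuse-mode names, their ESS values and counts, and the simple product rule are assumptions for illustration, since the abstract does not spell out the exact composite formula.

```python
def risk_priority_number(ess, error_frequency):
    """RPN as the product of the expert error severity score (ESS)
    and the observed frequency of that installation error.
    (Assumed composite rule for illustration.)"""
    return ess * error_frequency

# Hypothetical misuse modes as (ESS rating, observed error count)
# from an installation study such as the 26-participant one above.
errors = {
    "loose harness": (8, 14),
    "seat not anchored": (9, 6),
    "wrong recline angle": (5, 10),
}
ranked = sorted(errors, key=lambda e: risk_priority_number(*errors[e]), reverse=True)
print(ranked)  # highest-risk misuse modes first
```

Sorting by RPN is what makes the composite useful in practice: a moderately severe but very common error can outrank a rarer, more severe one.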

  4. The New York State risk score for predicting in-hospital/30-day mortality following percutaneous coronary intervention.

    PubMed

    Hannan, Edward L; Farrell, Louise Szypulski; Walford, Gary; Jacobs, Alice K; Berger, Peter B; Holmes, David R; Stamato, Nicholas J; Sharma, Samin; King, Spencer B

    2013-06-01

    This study sought to develop a percutaneous coronary intervention (PCI) risk score for in-hospital/30-day mortality. Risk scores are simplified linear scores that provide clinicians with quick estimates of patients' short-term mortality rates for informed consent and to determine the appropriate intervention. Earlier PCI risk scores were based on in-hospital mortality. However, for PCI, a substantial percentage of patients die within 30 days of the procedure after discharge. New York's Percutaneous Coronary Interventions Reporting System was used to develop an in-hospital/30-day logistic regression model for patients undergoing PCI in 2010, and this model was converted into a simple linear risk score that estimates mortality rates. The score was validated by applying it to 2009 New York PCI data. Subsequent analyses evaluated the ability of the score to predict complications and length of stay. A total of 54,223 patients were used to develop the risk score. There are 11 risk factors that make up the score, with risk factor scores ranging from 1 to 9, and the highest total score is 34. The score was validated based on patients undergoing PCI in the previous year, and accurately predicted mortality for all patients as well as patients who recently suffered a myocardial infarction (MI). The PCI risk score developed here enables clinicians to estimate in-hospital/30-day mortality very quickly and quite accurately. It accurately predicts mortality for patients undergoing PCI in the previous year and for MI patients, and is also moderately related to perioperative complications and length of stay. Copyright © 2013 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
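An additive risk score of this kind is applied by summing the point values of a patient's risk factors and looking up the corresponding mortality estimate. The factor names, point values, and mortality bands below are invented for illustration; the actual New York score comprises 11 factors with points from 1 to 9 and a maximum total of 34.

```python
# Hypothetical point values for a simplified additive risk score.
FACTOR_POINTS = {
    "age_over_75": 4,
    "shock": 9,
    "renal_failure": 5,
    "low_ejection_fraction": 3,
}

def total_score(patient_factors):
    """Sum the points for each risk factor the patient has."""
    return sum(FACTOR_POINTS[f] for f in patient_factors)

def estimated_mortality(score):
    """Map a total score to a mortality estimate via illustrative bands."""
    if score <= 5:
        return 0.005
    if score <= 12:
        return 0.02
    return 0.10

s = total_score(["age_over_75", "renal_failure"])
print(s, estimated_mortality(s))  # 9 0.02
```

The appeal of such scores, as the abstract notes, is exactly this: a clinician can compute the sum and read off a short-term mortality estimate at the bedside, without evaluating the underlying logistic regression.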

  5. Improving treatment intensification to reduce cardiovascular disease risk: a cluster randomized trial

    PubMed Central

    2012-01-01

    Background Blood pressure, lipid, and glycemic control are essential for reducing cardiovascular disease (CVD) risk. Many health care systems have successfully shifted aspects of chronic disease management, including population-based outreach programs designed to address CVD risk factor control, to non-physicians. The purpose of this study is to evaluate provision of new information to non-physician outreach teams on need for treatment intensification in patients with increased CVD risk. Methods Cluster randomized trial (July 1-December 31, 2008) in Kaiser Permanente Northern California registry of members with diabetes mellitus, prior CVD diagnoses and/or chronic kidney disease who were high-priority for treatment intensification (blood pressure ≥ 140 mmHg systolic, LDL-cholesterol ≥ 130 mg/dl, or hemoglobin A1c ≥ 9%; adherent to current medications; no recent treatment intensification). Randomization units were medical center-based outreach teams (4 intervention; 4 control). For intervention teams, priority flags for intensification were added monthly to the registry database with recommended next pharmacotherapeutic steps for each eligible patient. Control teams used the same database without this information. Outcomes included 3-month rates of treatment intensification and risk factor levels during follow-up. Results Baseline risk factor control rates were high (82-90%). In eligible patients, the intervention was associated with significantly greater 3-month intensification rates for blood pressure (34.1 vs. 30.6%) and LDL-cholesterol (28.0 vs 22.7%), but not A1c. No effects on risk factors were observed at 3 months or 12 months follow-up. Intervention teams initiated outreach for only 45-47% of high-priority patients, but also for 27-30% of lower-priority patients. Teams reported difficulties adapting prior outreach strategies to incorporate the new information.
Conclusions Information enhancement did not improve risk factor control compared to existing outreach strategies at control centers. Familiarity with prior, relatively successful strategies likely reduced uptake of the innovation and its potential for success at intervention centers. Trial registration ClinicalTrials.gov Identifier NCT00517686 PMID:22747998
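The trial's high-priority eligibility criteria can be expressed as a simple flagging rule. The threshold values are taken from the abstract; the function and parameter names are assumptions for illustration.

```python
def needs_intensification(sbp_mmhg, ldl_mg_dl, a1c_pct,
                          adherent, recently_intensified):
    """Flag a patient as high-priority for treatment intensification:
    at least one risk factor above threshold, adherent to current
    medications, and no recent intensification (per the trial abstract)."""
    above_threshold = (sbp_mmhg >= 140 or ldl_mg_dl >= 130 or a1c_pct >= 9.0)
    return above_threshold and adherent and not recently_intensified

print(needs_intensification(150, 110, 7.5, True, False))  # True: SBP >= 140
print(needs_intensification(120, 110, 7.5, True, False))  # False: all controlled
```

In a registry setting, a rule like this would be run monthly over the member database to generate the priority flags that intervention teams received.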

  6. The New York risk score for in-hospital and 30-day mortality for coronary artery bypass graft surgery.

    PubMed

    Hannan, Edward L; Farrell, Louise Szypulski; Wechsler, Andrew; Jordan, Desmond; Lahey, Stephen J; Culliford, Alfred T; Gold, Jeffrey P; Higgins, Robert S D; Smith, Craig R

    2013-01-01

    Simplified risk scores for coronary artery bypass graft surgery are frequently used in lieu of more complicated statistical models and are valuable for informed consent and choice of intervention. Previous risk scores have been based on in-hospital mortality, but a substantial number of patients die within 30 days of the procedure. These deaths should also be accounted for, so we have developed a risk score based on in-hospital and 30-day mortality. New York's Cardiac Surgery Reporting System was used to develop an in-hospital and 30-day logistic regression model for patients undergoing coronary artery bypass graft surgery in 2009, and this model was converted into a simple linear risk score that provides estimated in-hospital and 30-day mortality rates for different values of the score. The accuracy of the risk score in predicting mortality was tested. This score was also validated by applying it to 2008 New York coronary artery bypass graft data. Subsequent analyses evaluated the ability of the risk score to predict complications and length of stay. The overall in-hospital and 30-day mortality rate for the 10,148 patients in the study was 1.79%. There are seven risk factors comprising the score, with risk factor scores ranging from 1 to 5, and the highest possible total score is 23. The score accurately predicted mortality in 2009 as well as in 2008, and was strongly correlated with complications and length of stay. The risk score is a simple way of estimating short-term mortality that accurately predicts mortality in the year the model was developed as well as in the previous year. Perioperative complications and length of stay are also well predicted by the risk score. Copyright © 2013 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.

  7. A review of soil heavy metal pollution from mines in China: pollution and health risk assessment.

    PubMed

    Li, Zhiyuan; Ma, Zongwei; van der Kuijp, Tsering Jan; Yuan, Zengwei; Huang, Lei

    2014-01-15

    Heavy metal pollution has pervaded many parts of the world, especially developing countries such as China. This review summarizes available data in the literature (2005-2012) on heavy metal polluted soils originating from mining areas in China. Based on these obtained data, this paper then evaluates the soil pollution levels of these collected mines and quantifies the risks these pollutants pose to human health. To assess these potential threat levels, the geoaccumulation index was applied, along with the US Environmental Protection Agency (USEPA) recommended method for health risk assessment. The results demonstrate not only the severity of heavy metal pollution from the examined mines, but also the high carcinogenic and non-carcinogenic risks that soil heavy metal pollution poses to the public, especially to children and those living in the vicinity of heavily polluted mining areas. In order to provide key management targets for relevant government agencies, based on the results of the pollution and health risk assessments, Cd, Pb, Cu, Zn, Hg, As, and Ni are selected as the priority control heavy metals; tungsten, manganese, lead-zinc, and antimony mines are selected as the priority control mine categories; and southern provinces and Liaoning province are selected as the priority control provinces. This review, therefore, provides a comprehensive assessment of soil heavy metal pollution derived from mines in China, while identifying policy recommendations for pollution mitigation and environmental management of these mines. © 2013.
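The geoaccumulation index applied in this review is conventionally computed as Igeo = log2(C / (1.5 × B)), where C is the measured soil concentration and B the geochemical background value. A minimal sketch with illustrative numbers (the concentrations below are invented, not data from the review):

```python
import math

def geoaccumulation_index(measured, background):
    """Mueller geoaccumulation index: Igeo = log2(C / (1.5 * B)).
    The factor 1.5 allows for natural variation in background values."""
    return math.log2(measured / (1.5 * background))

# Illustrative: soil Cd measured at 3.0 mg/kg against a 0.25 mg/kg background.
igeo = geoaccumulation_index(3.0, 0.25)
print(round(igeo, 2))  # 3.0
```

Higher Igeo values place a site in progressively more polluted classes on Mueller's scale, which is how a review like this one can compare contamination levels across mines with different background geochemistry.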

  8. Factors influencing the priority of access to food and their effects on the carcass traits for Japanese Black (Wagyu) cattle.

    PubMed

    Takanishi, N; Oishi, K; Kumagai, H; Uemura, M; Hirooka, H

    2015-12-01

    The factors influencing the priority of access to food and the effects of the priority of access to food on their carcass traits were analyzed for Japanese Black (Wagyu) cattle in a semi-intensive fattening production system. The records of 96 clinically healthy steers and heifers were analyzed. The calves at ∼3 to 4 months of age were allocated to pens with four animals per pen; all four animals in the same pen were of the same sex and of similar body size. The ranking of the animals' priority of access to food (1st, 2nd, 3rd and 4th), which was determined by the farm manager, was used as an indicator of social dominance in the present study. Four models including sire line, maternal grandsire line and the difference in the animals' birth dates as fixed effects were used to analyze factors influencing the priority of access to food. Ranking was represented by ordinal scores (highest=4, lowest=1) in Model 1, and the binary scores were assigned in Model 2 (highest=1; 2nd, 3rd and 4th=0), Model 3 (1st and 2nd=1; 3rd and 4th=0) and Model 4 (1st, 2nd and 3rd=1; lowest=0). The results showed that the difference in the animals' birth dates had a significant effect on the establishment of the priority of access to food in Model 3 (P<0.05), suggesting that animals born earlier may become more dominant in the pen. The maternal grandsire line tended to affect the social rank score in Models 2 and 3 (P<0.10). Our results indicated that the maternal grandsire line may affect the temperament of calves through their mothers' genetic performance and thereby more aggressive calves may be more dominant and have higher priority of access to food. On the other hand, there was a significant effect of the priority of access to food on beef marbling score (BMS; P<0.05), and the priority of access to food also tended to influence the carcass weight (P=0.09). 
The highest BMS was observed for animals with the first rank of the priority of access to food (P<0.05), and the higher-ranking animals tended to have heavier carcasses than the lower-ranking animals. Our findings emphasized the importance of information about the priority of access to food, determined by farmers' own observation, for implementing best management practices in small-scale semi-intensive beef cattle production systems.

  9. Horizon scanning for invasive alien species with the potential to threaten biodiversity in Great Britain

    PubMed Central

    Roy, Helen E; Peyton, Jodey; Aldridge, David C; Bantock, Tristan; Blackburn, Tim M; Britton, Robert; Clark, Paul; Cook, Elizabeth; Dehnen-Schmutz, Katharina; Dines, Trevor; Dobson, Michael; Edwards, François; Harrower, Colin; Harvey, Martin C; Minchin, Dan; Noble, David G; Parrott, Dave; Pocock, Michael J O; Preston, Chris D; Roy, Sugoto; Salisbury, Andrew; Schönrogge, Karsten; Sewell, Jack; Shaw, Richard H; Stebbing, Paul; Stewart, Alan J A; Walker, Kevin J

    2014-01-01

    Invasive alien species (IAS) are considered one of the greatest threats to biodiversity, particularly through their interactions with other drivers of change. Horizon scanning, the systematic examination of future potential threats and opportunities, leading to prioritization of IAS threats is seen as an essential component of IAS management. Our aim was to consider IAS that were likely to impact on native biodiversity but were not yet established in the wild in Great Britain. To achieve this, we developed an approach which coupled consensus methods (which have previously been used for collaboratively identifying priorities in other contexts) with rapid risk assessment. The process involved two distinct phases: (1) preliminary consultation with experts within five groups (plants, terrestrial invertebrates, freshwater invertebrates, vertebrates and marine species) to derive ranked lists of potential IAS; and (2) consensus-building across expert groups to compile and rank the entire list of potential IAS. Five hundred and ninety-one species not native to Great Britain were considered. Ninety-three of these species were agreed to constitute at least a medium risk (based on score and consensus) with respect to them arriving, establishing and posing a threat to native biodiversity. The quagga mussel, Dreissena rostriformis bugensis, received maximum scores for risk of arrival, establishment and impact; following discussions the unanimous consensus was to rank it in the top position. A further 29 species were considered to constitute a high risk and were grouped according to their ranked risk. The remaining 63 species were considered as medium risk, and included in an unranked long list. The information collated through this novel extension of the consensus method for horizon scanning provides evidence for underpinning and prioritizing management both for the species and, perhaps more importantly, their pathways of arrival.
Although our study focused on Great Britain, we suggest that the methods adopted are applicable globally. PMID:24839235

  10. Risk assessment of combined exposure to alkenylbenzenes through consumption of plant food supplements containing parsley and dill.

    PubMed

    Alajlouni, Abdalmajeed M; Al-Malahmeh, Amer J; Wesseling, Sebastiaan; Kalli, Marina; Vervoort, Jacques; Rietjens, Ivonne M C M

    2017-12-01

    A risk assessment was performed of parsley- and dill-based plant food supplements (PFS) containing apiol and related alkenylbenzenes. First, the levels of the alkenylbenzenes in the PFS and the estimated daily intake (EDI) resulting from use of the PFS were quantified. Since most PFS appeared to contain more than one alkenylbenzene, a combined risk assessment was performed based on equal potency or using a so-called toxic equivalency (TEQ) approach based on toxic equivalency factors (TEFs) for the different alkenylbenzenes. The EDIs resulting from daily PFS consumption amount to 0.74-125 µg/kg bw for the individual alkenylbenzenes, 0.74-160 µg/kg bw for the sum of the alkenylbenzenes, and 0.47-64 µg/kg bw for the sum of alkenylbenzenes when expressed in safrole equivalents. The margins of exposure (MOEs) obtained were generally below 10,000, indicating a priority for risk management if the PFS were to be consumed on a daily basis. Considering short-term use of the PFS, MOEs would increase above 10,000, indicating low priority for risk management. It is concluded that alkenylbenzene intake through consumption of parsley- and dill-based PFS is only of concern when these PFS are used for long periods of time.

  11. Defining Priorities for Future Research: Results of the UK Kidney Transplant Priority Setting Partnership

    PubMed Central

    Metcalfe, Leanne; O’Donoghue, Katriona; Ball, Simon T.; Beale, Angela; Beale, William; Hilton, Rachel; Hodkinson, Keith; Lipkin, Graham W.; Loud, Fiona; Marson, Lorna P.; Morris, Peter J.

    2016-01-01

    Background It has been suggested that the research priorities of those funding and performing research in transplantation may differ from those of end service users such as patients, carers and healthcare professionals involved in day-to-day care. The Kidney Transplant Priority Setting Partnership (PSP) was established with the aim of involving all stakeholders in prioritising future research in the field. Methods The PSP methodology is as outlined by the James Lind Alliance. An initial survey collected unanswered research questions from patients, carers and clinicians. Duplicate and out-of-scope topics were excluded and the existing literature searched to identify topics answered by current evidence. An interim prioritisation survey asked patients and professionals to score the importance of the remaining questions to create a ranked long-list. These were considered at a final consensus workshop using a modified nominal group technique to agree a final top ten. Results The initial survey identified 497 questions from 183 respondents, covering all aspects of transplantation from assessment through to long-term follow-up. These were grouped into 90 unanswered “indicative” questions. The interim prioritisation survey received 256 responses (34.8% patients/carers, 10.9% donors and 54.3% professionals), resulting in a ranked list of 25 questions that were considered during the final workshop. Participants agreed on a top ten of priorities for future research that included optimisation of immunosuppression (improved monitoring, choice of regimen, personalisation), prevention of sensitisation and transplanting the sensitised patient, management of antibody-mediated rejection, long-term risks to live donors, methods of organ preservation, induction of tolerance and bioengineering of organs. There was evidence that patient and carer involvement had a significant impact on shaping the final priorities.
Conclusions The final list of priorities relates to all stages of the transplant process, including access to transplantation, living donation, organ preservation, post-transplant care and management of the failing transplant. This list of priorities will provide an invaluable resource for researchers and funders to direct future activity. PMID:27776143

  12. Defining Priorities for Future Research: Results of the UK Kidney Transplant Priority Setting Partnership.

    PubMed

    Knight, Simon R; Metcalfe, Leanne; O'Donoghue, Katriona; Ball, Simon T; Beale, Angela; Beale, William; Hilton, Rachel; Hodkinson, Keith; Lipkin, Graham W; Loud, Fiona; Marson, Lorna P; Morris, Peter J

    2016-01-01

    It has been suggested that the research priorities of those funding and performing research in transplantation may differ from those of end service users such as patients, carers and healthcare professionals involved in day-to-day care. The Kidney Transplant Priority Setting Partnership (PSP) was established with the aim of involving all stakeholders in prioritising future research in the field. The PSP methodology is as outlined by the James Lind Alliance. An initial survey collected unanswered research questions from patients, carers and clinicians. Duplicate and out-of-scope topics were excluded and the existing literature searched to identify topics answered by current evidence. An interim prioritisation survey asked patients and professionals to score the importance of the remaining questions to create a ranked long-list. These were considered at a final consensus workshop using a modified nominal group technique to agree a final top ten. The initial survey identified 497 questions from 183 respondents, covering all aspects of transplantation from assessment through to long-term follow-up. These were grouped into 90 unanswered "indicative" questions. The interim prioritisation survey received 256 responses (34.8% patients/carers, 10.9% donors and 54.3% professionals), resulting in a ranked list of 25 questions that were considered during the final workshop. Participants agreed on a top ten of priorities for future research that included optimisation of immunosuppression (improved monitoring, choice of regimen, personalisation), prevention of sensitisation and transplanting the sensitised patient, management of antibody-mediated rejection, long-term risks to live donors, methods of organ preservation, induction of tolerance and bioengineering of organs. There was evidence that patient and carer involvement had a significant impact on shaping the final priorities.
The final list of priorities relates to all stages of the transplant process, including access to transplantation, living donation, organ preservation, post-transplant care and management of the failing transplant. This list of priorities will provide an invaluable resource for researchers and funders to direct future activity.

  13. Comparing Service Priorities between Staff and Users in Association of Research Libraries (ARL) Member Libraries

    ERIC Educational Resources Information Center

    Jaggars, Damon E.; Jaggars, Shanna Smith; Duffy, Jocelyn S.

    2009-01-01

    Using the results for participating Association of Research Libraries from the 2006 LibQUAL+[R] library service quality survey, we examine the service priorities of library staff (for example, whether desired scores for each survey item are above or below average) and the extent to which they are aligned with the priorities of undergraduates,…

  14. A biomarker-based risk score to predict death in patients with atrial fibrillation: the ABC (age, biomarkers, clinical history) death risk score

    PubMed Central

    Hijazi, Ziad; Oldgren, Jonas; Lindbäck, Johan; Alexander, John H; Connolly, Stuart J; Eikelboom, John W; Ezekowitz, Michael D; Held, Claes; Hylek, Elaine M; Lopes, Renato D; Yusuf, Salim; Granger, Christopher B; Siegbahn, Agneta; Wallentin, Lars

    2018-01-01

    Abstract Aims In atrial fibrillation (AF), mortality remains high despite effective anticoagulation. A model predicting the risk of death in these patients is currently not available. We developed and validated a risk score for death in anticoagulated patients with AF including both clinical information and biomarkers. Methods and results The new risk score was developed and internally validated in 14 611 patients with AF randomized to apixaban vs. warfarin for a median of 1.9 years. External validation was performed in 8548 patients with AF randomized to dabigatran vs. warfarin for 2.0 years. Biomarker samples were obtained at study entry. Variables significantly contributing to the prediction of all-cause mortality were assessed by Cox-regression. Each variable obtained a weight proportional to the model coefficients. There were 1047 all-cause deaths in the derivation and 594 in the validation cohort. The most important predictors of death were N-terminal pro B-type natriuretic peptide, troponin-T, growth differentiation factor-15, age, and heart failure, and these were included in the ABC (Age, Biomarkers, Clinical history)-death risk score. The score was well-calibrated and yielded higher c-indices than a model based on all clinical variables in both the derivation (0.74 vs. 0.68) and validation cohorts (0.74 vs. 0.67). The reduction in mortality with apixaban was most pronounced in patients with a high ABC-death score. Conclusion A new biomarker-based score for predicting risk of death in anticoagulated AF patients was developed, internally and externally validated, and well-calibrated in two large cohorts. The ABC-death risk score performed well and may contribute to overall risk assessment in AF. ClinicalTrials.gov identifier NCT00412984 and NCT00262600 PMID:29069359
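The abstract states that each variable in the ABC-death score received a weight proportional to its model coefficient. That conversion step can be sketched as follows; the variable names echo the predictors listed above, but the coefficient values and the integer-scaling rule are illustrative assumptions, not the published model.

```python
def score_weights(coefficients, max_points=10):
    """Assign each variable an integer weight proportional to its
    model coefficient, scaling the largest coefficient to max_points.
    (Assumed scaling rule for illustration.)"""
    biggest = max(abs(c) for c in coefficients.values())
    return {name: round(abs(c) / biggest * max_points)
            for name, c in coefficients.items()}

# Hypothetical Cox-regression coefficients for the score's predictors.
coefs = {
    "nt_probnp": 0.9,
    "troponin_t": 0.75,
    "gdf15": 0.6,
    "age": 0.45,
    "heart_failure": 0.3,
}
print(score_weights(coefs))
```

Proportional weighting preserves the relative importance of predictors from the fitted model while keeping the final score simple enough to compute by hand.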

  15. The Management Standards Indicator Tool and evaluation of burnout.

    PubMed

    Ravalier, J M; McVicar, A; Munn-Giddings, C

    2013-03-01

    Psychosocial hazards in the workplace can impact upon employee health. The UK Health and Safety Executive's (HSE) Management Standards Indicator Tool (MSIT) appears to have utility in relation to health impacts but we were unable to find studies relating it to burnout. To explore the utility of the MSIT in evaluating risk of burnout assessed by the Maslach Burnout Inventory-General Survey (MBI-GS). This was a cross-sectional survey of 128 borough council employees. MSIT data were analysed according to MSIT and MBI-GS threshold scores and by using multivariate linear regression with MBI-GS factors as dependent variables. MSIT factor scores were gradated according to categories of risk of burnout according to published MBI-GS thresholds, and identified priority workplace concerns as demands, relationships, role and change. These factors also featured as significant independent variables, with control, in outcomes of the regression analysis. Exhaustion was associated with demands and control (adjusted R² = 0.331); cynicism was associated with change, role and demands (adjusted R² = 0.429); and professional efficacy was associated with managerial support, role, control and demands (adjusted R² = 0.413). MSIT analysis generally has congruence with MBI-GS assessment of burnout. The identification of control within regression models but not as a priority concern in the MSIT analysis could suggest an issue of the setting of the MSIT thresholds for this factor, but verification requires a much larger study. Incorporation of relationship, role and change into the MSIT, missing from other conventional tools, appeared to add to its validity.

  16. Failure mode and effect analysis in blood transfusion: a proactive tool to reduce risks.

    PubMed

    Lu, Yao; Teng, Fang; Zhou, Jie; Wen, Aiqing; Bi, Yutian

    2013-12-01

    The aim of blood transfusion risk management is to improve the quality of blood products and to assure patient safety. We utilize failure mode and effect analysis (FMEA), a tool employed for evaluating risks and identifying preventive measures to reduce the risks in blood transfusion. The failure modes and effects occurring throughout the whole process of blood transfusion were studied. Each failure mode was evaluated using three scores: severity of effect (S), likelihood of occurrence (O), and probability of detection (D). Risk priority numbers (RPNs) were calculated by multiplying the S, O, and D scores. The plan-do-check-act cycle was also used for continuous improvement. Analysis has showed that failure modes with the highest RPNs, and therefore the greatest risk, were insufficient preoperative assessment of the blood product requirement (RPN, 245), preparation time before infusion of more than 30 minutes (RPN, 240), blood transfusion reaction occurring during the transfusion process (RPN, 224), blood plasma abuse (RPN, 180), and insufficient and/or incorrect clinical information on request form (RPN, 126). After implementation of preventative measures and reassessment, a reduction in RPN was detected with each risk. The failure mode with the second highest RPN, namely, preparation time before infusion of more than 30 minutes, was shown in detail to prove the efficiency of this tool. FMEA evaluation model is a useful tool in proactively analyzing and reducing the risks associated with the blood transfusion procedure. © 2013 American Association of Blood Banks.
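The RPN arithmetic in this record is explicit: the product of severity (S), occurrence (O), and detection (D) scores. A small sketch follows; the individual S/O/D component values are assumed for illustration, and only the resulting products (245, 240) match the RPNs reported in the abstract.

```python
def rpn(severity, occurrence, detection):
    """FMEA risk priority number: RPN = S * O * D."""
    return severity * occurrence * detection

# Two failure modes from the abstract; S/O/D breakdowns are assumed,
# chosen only so that the products equal the reported RPNs.
failure_modes = {
    "insufficient preoperative assessment of blood product requirement": (7, 7, 5),
    "preparation time before infusion of more than 30 minutes": (8, 6, 5),
}
for name, sod in sorted(failure_modes.items(),
                        key=lambda kv: rpn(*kv[1]), reverse=True):
    print(name, rpn(*sod))
```

Ranking failure modes by RPN, then recomputing after preventive measures are in place, is the plan-do-check-act loop the study describes: a falling RPN is the quantitative evidence that a risk has been reduced.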

  17. Typology of end-of-life priorities in Saudi females: averaging analysis and Q-methodology.

    PubMed

    Hammami, Muhammad M; Hammami, Safa; Amer, Hala A; Khodr, Nesrine A

    2016-01-01

    Understanding culture- and sex-related end-of-life preferences is essential to provide quality end-of-life care. We have previously explored end-of-life choices in Saudi males and found important culture-related differences and that Q-methodology is useful in identifying intraculture, opinion-based groups. Here, we explore Saudi females' end-of-life choices. A volunteer sample of 68 females rank-ordered 47 opinion statements on end-of-life issues into a nine-category symmetrical distribution. The ranking scores of the statements were analyzed by averaging analysis and Q-methodology. The mean age of the females in the sample was 30.3 years (range, 19-55 years). Among them, 51% reported average religiosity, 78% reported very good health, 79% reported very good life quality, and 100% reported high-school education or more. The extreme five overall priorities were to be able to say the statement of faith, be at peace with God, die without having the body exposed, maintain dignity, and resolve all conflicts. The extreme five overall dis-priorities were to die in the hospital, die well dressed, be informed about impending death by family/friends rather than doctor, die at peak of life, and not know if one has a fatal illness. Q-methodology identified five opinion-based groups with qualitatively different characteristics: "physical and emotional privacy concerned, family caring" (younger, lower religiosity), "whole person" (higher religiosity), "pain and informational privacy concerned" (lower life quality), "decisional privacy concerned" (older, higher life quality), and "life quantity concerned, family dependent" (high life quality, low life satisfaction). Out of the extreme 14 priorities/dis-priorities for each group, 21%-50% were not represented among the extreme 20 priorities/dis-priorities for the entire sample.
Consistent with the previously reported findings in Saudi males, transcendence and dying in the hospital were the extreme end-of-life priority and dis-priority, respectively, in Saudi females. Body modesty was a major overall concern; however, concerns about pain, various types of privacy, and life quantity were variably emphasized by the five opinion-based groups but masked by averaging analysis.

  18. Typology of end-of-life priorities in Saudi females: averaging analysis and Q-methodology

    PubMed Central

    Hammami, Muhammad M; Hammami, Safa; Amer, Hala A; Khodr, Nesrine A

    2016-01-01

    Background Understanding culture- and sex-related end-of-life preferences is essential to provide quality end-of-life care. We have previously explored end-of-life choices in Saudi males and found important culture-related differences and that Q-methodology is useful in identifying intraculture, opinion-based groups. Here, we explore Saudi females’ end-of-life choices. Methods A volunteer sample of 68 females rank-ordered 47 opinion statements on end-of-life issues into a nine-category symmetrical distribution. The ranking scores of the statements were analyzed by averaging analysis and Q-methodology. Results The mean age of the females in the sample was 30.3 years (range, 19–55 years). Among them, 51% reported average religiosity, 78% reported very good health, 79% reported very good life quality, and 100% reported high-school education or more. The extreme five overall priorities were to be able to say the statement of faith, be at peace with God, die without having the body exposed, maintain dignity, and resolve all conflicts. The extreme five overall dis-priorities were to die in the hospital, die well dressed, be informed about impending death by family/friends rather than doctor, die at peak of life, and not know if one has a fatal illness. Q-methodology identified five opinion-based groups with qualitatively different characteristics: “physical and emotional privacy concerned, family caring” (younger, lower religiosity), “whole person” (higher religiosity), “pain and informational privacy concerned” (lower life quality), “decisional privacy concerned” (older, higher life quality), and “life quantity concerned, family dependent” (high life quality, low life satisfaction). Out of the extreme 14 priorities/dis-priorities for each group, 21%–50% were not represented among the extreme 20 priorities/dis-priorities for the entire sample.
Conclusion Consistent with the previously reported findings in Saudi males, transcendence and dying in the hospital were the extreme end-of-life priority and dis-priority, respectively, in Saudi females. Body modesty was a major overall concern; however, concerns about pain, various types of privacy, and life quantity were variably emphasized by the five opinion-based groups but masked by averaging analysis. PMID:27274205

  19. White Matter Hyperintensities Improve Ischemic Stroke Recurrence Prediction.

    PubMed

    Andersen, Søren Due; Larsen, Torben Bjerregaard; Gorst-Rasmussen, Anders; Yavarian, Yousef; Lip, Gregory Y H; Bach, Flemming W

    2017-01-01

    Nearly one in five patients with ischemic stroke will experience a second stroke within 5 years. Stroke risk stratification schemes based solely on clinical variables perform only modestly in non-atrial fibrillation (AF) patients, and improvement of these schemes will enhance their clinical utility. Cerebral white matter hyperintensities are associated with an increased risk of incident ischemic stroke in the general population, whereas their association with the risk of ischemic stroke recurrence is more ambiguous. In a non-AF stroke cohort, we investigated the association between cerebral white matter hyperintensities and the risk of recurrent ischemic stroke, and we evaluated the predictive performance of the CHA2DS2VASc score and the Essen Stroke Risk Score (clinical scores) when augmented with information on white matter hyperintensities. In a registry-based, observational cohort study, we included 832 patients (mean age 59.6 (SD 13.9); 42.0% females) with incident ischemic stroke and no AF. We assessed the severity of white matter hyperintensities using MRI. Hazard ratios stratified by the white matter hyperintensities score and adjusted for the components of the CHA2DS2VASc score were calculated based on Cox proportional hazards analysis. Recalibrated clinical scores were calculated by adding one point to the score for the presence of moderate to severe white matter hyperintensities. The discriminatory performance of the scores was assessed with the C-statistic. White matter hyperintensities were significantly associated with the risk of recurrent ischemic stroke after adjusting for clinical risk factors. The hazard ratios ranged from 1.65 (95% CI 0.70-3.86) for mild changes to 5.28 (95% CI 1.98-14.07) for the most severe changes. C-statistics for the prediction of recurrent ischemic stroke were 0.59 (95% CI 0.51-0.65) for the CHA2DS2VASc score and 0.60 (95% CI 0.53-0.68) for the Essen Stroke Risk Score.
The recalibrated clinical scores showed improved C-statistics: the recalibrated CHA2DS2VASc score 0.62 (95% CI 0.54-0.70; p = 0.024) and the recalibrated Essen Stroke Risk Score 0.63 (95% CI 0.56-0.71; p = 0.031). C-statistics of the white matter hyperintensities score were 0.62 (95% CI 0.52-0.68) to 0.65 (95% CI 0.58-0.73). An increasing burden of white matter hyperintensities was independently associated with recurrent ischemic stroke in a cohort of non-AF ischemic stroke patients. Recalibration of the CHA2DS2VASc score and the Essen Stroke Risk Score with one point for the presence of moderate to severe white matter hyperintensities led to improved discriminatory performance in ischemic stroke recurrence prediction. Risk scores based on white matter hyperintensities alone were at least as accurate as the established clinical risk scores in the prediction of ischemic stroke recurrence. © 2016 S. Karger AG, Basel.
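
    The one-point recalibration rule described in this abstract is simple enough to sketch. A minimal illustration, assuming Python and hypothetical grade names ("none", "mild", "moderate", "severe"); only the add-one-point rule comes from the abstract:

```python
# Hypothetical sketch of the recalibration described above: add one point
# to an existing clinical risk score (e.g. CHA2DS2-VASc or Essen) when MRI
# shows moderate to severe white matter hyperintensities (WMH).
# The grade names are assumptions, not the study's coding.

def recalibrate(clinical_score: int, wmh_grade: str) -> int:
    """Return the WMH-augmented risk score."""
    if wmh_grade in ("moderate", "severe"):
        return clinical_score + 1
    return clinical_score  # none or mild WMH: score unchanged

print(recalibrate(3, "none"))    # -> 3
print(recalibrate(3, "severe"))  # -> 4
```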

  20. Integrating economic costs and biological traits into global conservation priorities for carnivores.

    PubMed

    Loyola, Rafael Dias; Oliveira-Santos, Luiz Gustavo Rodrigues; Almeida-Neto, Mário; Nogueira, Denise Martins; Kubota, Umberto; Diniz-Filho, José Alexandre Felizola; Lewinsohn, Thomas Michael

    2009-08-27

    Prioritization schemes usually highlight species-rich areas, where many species are at imminent risk of extinction. To be ecologically relevant, these schemes should also incorporate species' biological traits into area-setting methods. Furthermore, in a world of limited funds for conservation, conservation action is constrained by land acquisition costs. Hence, including economic costs in conservation priorities can substantially improve their cost-effectiveness. We examined four global conservation scenarios for carnivores based on the joint mapping of economic costs and species' biological traits. These scenarios identify the most cost-effective priority sets of ecoregions, indicating the best investment opportunities for safeguarding every carnivore species, and also establish priority sets that can maximize species representation in areas harboring highly vulnerable species. We compared these results with a scenario that minimizes the total number of ecoregions required for conserving all species, irrespective of other factors. We found that cost-effective conservation investments should focus on the 41 ecoregions highlighted in the scenario that considers both ecoregion vulnerability and the economic costs of land acquisition simultaneously. Ecoregions included in priority sets under these criteria should yield the best return on investment, since they harbor species with high extinction risk and have lower mean land costs. Our study highlights ecoregions of particular importance for the conservation of the world's carnivores, defining global conservation priorities through analyses that encompass socioeconomic and life-history factors. We consider the identification of a comprehensive priority set of areas a first step towards an in-situ biodiversity maintenance strategy.

  1. Stakeholder Engagement to Identify Priorities for Improving the Quality and Value of Critical Care.

    PubMed

    Stelfox, Henry T; Niven, Daniel J; Clement, Fiona M; Bagshaw, Sean M; Cook, Deborah J; McKenzie, Emily; Potestio, Melissa L; Doig, Christopher J; O'Neill, Barbara; Zygun, David

    2015-01-01

    Large amounts of scientific evidence are generated but not implemented into patient care (the 'knowledge-to-care' gap). We identified and prioritized knowledge-to-care gaps in critical care as opportunities to improve the quality and value of healthcare. We used a multi-method, community-based participatory research approach to engage a Network of all adult (n = 14) and pediatric (n = 2) medical-surgical intensive care units (ICUs) in a fully integrated, geographically defined healthcare system serving 4 million residents. Participants included Network oversight committee members (n = 38) and frontline providers (n = 1,790). Network committee members used a modified RAND/University of California Appropriateness Methodology to serially propose, rate (validated 9-point scale) and revise potential knowledge-to-care gaps as priorities for improvement. The priorities were sent to frontline providers for evaluation. Results were relayed back to all frontline providers for feedback. Initially, 68 knowledge-to-care gaps were proposed, rated and revised by the committee (n = 32 participants) over 3 rounds of review, resulting in 13 proposed priorities for improvement. Then, 1,103 providers (62% response rate) evaluated the priorities and rated 9 as 'necessary' (median score 7-9). In multivariable logistic regression, several factors related to the provider (experience, teaching status of ICU) and the topic (strength of supporting evidence, potential to benefit the patient, potential to improve the patient/family experience, potential to decrease costs) were associated with rating priorities as necessary. A community-based participatory research approach engaged a diverse group of stakeholders to identify 9 priorities for improving the quality and value of critical care. The approach was time- and cost-efficient and could serve as a model to prioritize areas for research and quality improvement across other settings.

  2. An approach for setting evidence-based and stakeholder-informed research priorities in low- and middle-income countries.

    PubMed

    Rehfuess, Eva A; Durão, Solange; Kyamanywa, Patrick; Meerpohl, Joerg J; Young, Taryn; Rohwer, Anke

    2016-04-01

    To derive evidence-based and stakeholder-informed research priorities for implementation in African settings, the international research consortium Collaboration for Evidence-Based Healthcare and Public Health in Africa (CEBHA+) developed and applied a pragmatic approach. First, an online survey and face-to-face consultation between CEBHA+ partners and policy-makers generated priority research areas. Second, evidence maps for these priority research areas identified gaps and related priority research questions. Finally, study protocols were developed for inclusion within a grant proposal. Policy and practice representatives were involved throughout the process. Tuberculosis, diabetes, hypertension and road traffic injuries were selected as priority research areas. Evidence maps covered screening and models of care for diabetes and hypertension, population-level prevention of diabetes and hypertension and their risk factors, and prevention and management of road traffic injuries. Analysis of these maps yielded three priority research questions on hypertension and diabetes and one on road traffic injuries. The four resulting study protocols employ a broad range of primary and secondary research methods; a fifth promotes an integrated methodological approach across all research activities. The CEBHA+ approach, in particular evidence mapping, helped to formulate research questions and study protocols that would be owned by African partners, fill gaps in the evidence base, address policy and practice needs and be feasible given the existing research infrastructure and expertise. The consortium believes that the continuous involvement of decision-makers throughout the research process is an important means of ensuring that studies are relevant to the African context and that findings are rapidly implemented.

  3. Promoting At-Risk Preschool Children's Comprehension through Research-Based Strategy Instruction

    ERIC Educational Resources Information Center

    DeBruin-Parecki, Andrea; Squibb, Kathryn

    2011-01-01

    Young children living in poor urban neighborhoods are often at risk for reading difficulties, in part because developing listening comprehension strategies and vocabulary knowledge may not be a priority in their prekindergarten classrooms, whose curriculums typically focus heavily on phonological awareness and alphabet knowledge. Prereading…

  4. Source-oriented risk assessment of inhalation exposure to ambient polycyclic aromatic hydrocarbons and contributions of non-priority isomers in urban Nanjing, a megacity located in Yangtze River Delta, China.

    PubMed

    Zhuo, Shaojie; Shen, Guofeng; Zhu, Ying; Du, Wei; Pan, Xuelian; Li, Tongchao; Han, Yang; Li, Bengang; Liu, Junfeng; Cheng, Hefa; Xing, Baoshan; Tao, Shu

    2017-05-01

    Sixteen U.S. EPA priority polycyclic aromatic hydrocarbons (PAHs) and eleven non-priority isomers including some dibenzopyrenes were analyzed to evaluate the health risk attributable to inhalation exposure to ambient PAHs, and the contributions of the non-priority PAHs, in Nanjing, a megacity in east China. The annual average mass concentration of the total 16 EPA priority PAHs in air was 51.1 ± 29.8 ng/m³, comprising up to 93% of the mass concentration of all 27 PAHs; however, the estimated Incremental Lifetime Cancer Risk (ILCR) due to inhalation exposure would be underestimated by 63% on average if only the 16 EPA priority PAHs were counted. The risk would be underestimated by 13% if only particulate PAHs were considered, though gaseous PAHs made up about 70% of the total mass concentration. During the last fifteen years, ambient benzo[a]pyrene decreased significantly in the city, consistent with the declining trend of PAH emissions. Source contributions to the estimated ILCR were very different from the contributions to the total mass concentration, calling for the introduction of source-oriented risk assessment. Emissions from gasoline vehicles contributed 12% of the total mass concentration of the 27 PAHs analyzed but 45% of the calculated ILCR. Dibenzopyrenes were a group of non-priority isomers contributing largely to the calculated ILCR, and vehicle emissions were probably important sources of these high-molecular-weight isomers. Ambient dibenzo[a,l]pyrene positively correlated with the priority PAH benzo[g,h,i]perylene. The study indicates that inclusion of non-priority PAHs could be valuable for both PAH source apportionment and health risk assessment. Copyright © 2017 Elsevier Ltd. All rights reserved.
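
    The underestimation figures in this abstract follow from a simple ratio. A hedged sketch with illustrative numbers (the ILCR values below are invented, not the study's data):

```python
# Fraction of total inhalation cancer risk missed when only a subset of
# PAH isomers (e.g. the 16 EPA priority PAHs) is counted.

def underestimation(ilcr_subset: float, ilcr_all: float) -> float:
    return 1.0 - ilcr_subset / ilcr_all

# Illustrative only: if the 16 priority PAHs account for 37% of the total
# ILCR, restricting the assessment to them understates risk by 63%.
print(round(underestimation(0.37e-6, 1.0e-6), 2))  # -> 0.63
```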

  5. Youth Risk Behavior Survey 2003: Commonwealth of the Northern Mariana Islands, Republic of the Marshall Islands, Republic of Palau

    ERIC Educational Resources Information Center

    Balling, Allison; Grunbaum, Jo Anne; Speicher, Nancy; McManus, Tim; Kann, Laura

    2005-01-01

    To monitor priority health-risk behaviors among youth and young adults, the Centers for Disease Control and Prevention developed the Youth Risk Behavior Surveillance System (YRBSS). The YRBSS includes national, state, territory, and local school-based surveys of high school students in grades 9-12. In addition, some states, territories, and cities…

  6. Youth Risk Behavior Survey 2005: Commonwealth of the Northern Mariana Islands, Republic of Palau, Commonwealth of Puerto Rico

    ERIC Educational Resources Information Center

    Lippe, Jaclynn; Brener, Nancy D.; McManus, Tim; Kann, Laura; Speicher, Nancy

    2008-01-01

    To monitor priority health-risk behaviors among youth and young adults, the Centers for Disease Control and Prevention (CDC) developed the Youth Risk Behavior Surveillance System (YRBSS). The YRBSS includes national, state, territorial, and local school-based surveys of high school students in grades 9-12. In addition, some states, territories,…

  7. Fault Management in an Objectives-Based/Risk-Informed View of Safety and Mission Success

    NASA Technical Reports Server (NTRS)

    Groen, Frank

    2012-01-01

    Theme of this talk: (1) Net-benefit of activities and decisions derives from objectives (and their priority) -- similarly: need for integration, value of technology/capability. (2) Risk is a lack of confidence that objectives will be met. (2a) Risk-informed decision making requires objectives. (3) Consideration of objectives is central to recent guidance.

  8. An approach for environmental risk assessment of engineered nanomaterials using Analytical Hierarchy Process (AHP) and fuzzy inference rules.

    PubMed

    Topuz, Emel; van Gestel, Cornelis A M

    2016-01-01

    The use of engineered nanoparticles (ENPs) in consumer products is relatively new, and there is a need to conduct environmental risk assessment (ERA) to evaluate their impacts on the environment. However, alternative approaches are required for ERA of ENPs because of the huge gap in data and knowledge compared to conventional pollutants, and because of their unique properties, which make it difficult to apply existing approaches. This study proposes an ERA approach for ENPs that integrates the Analytical Hierarchy Process (AHP) and fuzzy inference models, which provide a systematic evaluation of risk factors and reduce uncertainty in the data and information, respectively. Risk is assumed to be the combination of occurrence likelihood, exposure potential and toxic effects in the environment. A hierarchy was established to evaluate the sub-factors of these components. Evaluation was made with fuzzy numbers to reduce uncertainty and incorporate expert judgements. The overall score of each component was combined through fuzzy inference rules by using expert judgements. The proposed approach reports the risk class and its membership degree, such as Minor (0.7); results are therefore precise and helpful in determining risk management strategies. Moreover, priority weights, calculated by comparing the risk factors based on their importance for the risk, enable users to understand which factors most influence the risk. The proposed approach was applied to Ag (two nanoparticles with different coatings) and TiO2 nanoparticles in different case studies. The results verified the proposed benefits of the approach. Copyright © 2016 Elsevier Ltd. All rights reserved.
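
    As a rough illustration of the combination this abstract describes (AHP-style priority weights plus fuzzy membership), the sketch below uses invented weights, sub-factor scores, and triangular class boundaries; it is not the authors' implementation:

```python
# Sketch, not the authors' model: AHP-style weights aggregate normalized
# sub-factor scores, and triangular membership functions assign the
# aggregate to a fuzzy risk class with a membership degree.

def aggregate(scores, weights):
    """Weighted sum of sub-factor scores (weights sum to 1)."""
    return sum(s * w for s, w in zip(scores, weights))

def triangular(x, a, b, c):
    """Membership degree of x in the triangular fuzzy set (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical risk classes on a 0-1 scale
classes = {
    "Minor": (0.0, 0.25, 0.5),
    "Moderate": (0.25, 0.5, 0.75),
    "Major": (0.5, 0.75, 1.0),
}

# occurrence likelihood, exposure potential, toxic effects (invented)
risk = aggregate([0.4, 0.3, 0.2], [0.5, 0.3, 0.2])
label, degree = max(
    ((k, triangular(risk, *v)) for k, v in classes.items()),
    key=lambda kv: kv[1],
)
print(label, round(degree, 2))  # -> Minor 0.68
```

    The output mirrors the "risk class plus membership degree" reporting format the abstract mentions, e.g. Minor (0.7).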

  9. Hormonal contraceptive use and women's risk of HIV acquisition: priorities emerging from recent data.

    PubMed

    Ralph, Lauren J; Gollub, Erica L; Jones, Heidi E

    2015-12-01

    Understanding whether hormonal contraception increases women's risk of HIV acquisition is a public health priority. This review summarizes recent epidemiologic and biologic data, and considers the implications of new evidence on research and programmatic efforts. Two secondary analyses of HIV prevention trials demonstrated increased HIV risk among depot medroxyprogesterone acetate (DMPA) users compared with nonhormonal/no method users and norethisterone enanthate (NET-EN) users. A study of women in serodiscordant partnerships found no significant association for DMPA or implants. Two meta-analyses found elevated risks of HIV among DMPA users compared with nonhormonal/no method users, with no association for NET-EN or combined oral contraceptive pills. In-vitro and animal model studies identified plausible biological mechanisms by which progestin exposure could increase risk of HIV, depending on the type and dose of progestin, but such mechanisms have not been definitively observed in humans. Recent epidemiologic and biologic evidence on hormonal contraception and HIV suggests a harmful profile for DMPA but not combined oral contraceptives. In limited data, NET-EN appears safer than DMPA. More research is needed on other progestin-based methods, especially implants and Sayana Press. Future priorities include updating modeling studies with new pooled estimates, continued basic science to understand biological mechanisms, expanding contraceptive choice, and identifying effective ways to promote dual method use.

  10. Prediction for Intravenous Immunoglobulin Resistance by Using Weighted Genetic Risk Score Identified From Genome-Wide Association Study in Kawasaki Disease.

    PubMed

    Kuo, Ho-Chang; Wong, Henry Sung-Ching; Chang, Wei-Pin; Chen, Ben-Kuen; Wu, Mei-Shin; Yang, Kuender D; Hsieh, Kai-Sheng; Hsu, Yu-Wen; Liu, Shih-Feng; Liu, Xiao; Chang, Wei-Chiao

    2017-10-01

    Intravenous immunoglobulin (IVIG) is the treatment of choice in Kawasaki disease (KD). IVIG is used to prevent cardiovascular complications related to KD. However, a proportion of KD patients have persistent fever after IVIG treatment and are defined as IVIG resistant. To develop a risk scoring system based on genetic markers to predict IVIG responsiveness in KD patients, a total of 150 KD patients (126 IVIG responders and 24 IVIG nonresponders) were recruited for this study. A genome-wide association analysis was performed to compare the 2 groups and identified risk alleles for IVIG resistance. A weighted genetic risk score was calculated by the natural log of the odds ratio multiplied by the number of risk alleles. Eleven single-nucleotide polymorphisms were identified by genome-wide association study. The KD patients were categorized into 4 groups based on their calculated weighted genetic risk score. Results indicated a significant association between weighted genetic risk score (groups 3 and 4 versus group 1) and the response to IVIG (Fisher's exact P value 4.518×10⁻³ and 8.224×10⁻¹⁰, respectively). This is the first weighted genetic risk score study based on a genome-wide association study in KD. The predictive model integrated the additive effects of all 11 single-nucleotide polymorphisms to provide a prediction of the responsiveness to IVIG. © 2017 The Authors.
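
    The scoring rule quoted in this abstract, ln(odds ratio) multiplied by the number of risk alleles and summed over SNPs, can be sketched directly; the odds ratios and allele counts below are invented for illustration, not the study's estimates:

```python
import math

# Weighted genetic risk score (wGRS): each SNP contributes
# ln(odds ratio) * carried risk-allele count (0, 1 or 2).

def weighted_grs(odds_ratios, allele_counts):
    return sum(math.log(or_) * n for or_, n in zip(odds_ratios, allele_counts))

# Three hypothetical SNPs; the study summed over eleven.
print(round(weighted_grs([2.0, 1.5, 3.0], [1, 2, 0]), 3))  # -> 1.504
```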

  11. Health care priority setting in Norway a multicriteria decision analysis

    PubMed Central

    2012-01-01

    Background Priority setting in population health is increasingly based on explicitly formulated values. The Patients Rights Act of the Norwegian tax-based health service guarantees all citizens health care in case of a severe illness, a proven health benefit, and proportionality between need and treatment. This study compares the values of the country's health policy makers with these three official principles. Methods In total, 34 policy makers participated in a discrete choice experiment, weighting the relative value of six policy criteria. We used multivariate logistic regression with selection as the dependent variable to derive odds ratios for each criterion. Next, we constructed a composite league table - based on the sum score for the probability of selection - to rank potential interventions in five major disease areas. Results The group considered cost effectiveness, large individual benefits and severity of disease the most important criteria in decision making. Priority interventions are those related to cardiovascular diseases and respiratory diseases. Interventions related to mental health rank as less attractive. Conclusions Norwegian policy makers' values are in agreement with the principles formulated in national health laws. Multi-criteria decision approaches may provide a tool to support explicit allocation decisions. PMID:22335815

  12. Health care priority setting in Norway a multicriteria decision analysis.

    PubMed

    Defechereux, Thierry; Paolucci, Francesco; Mirelman, Andrew; Youngkong, Sitaporn; Botten, Grete; Hagen, Terje P; Niessen, Louis W

    2012-02-15

    Priority setting in population health is increasingly based on explicitly formulated values. The Patients Rights Act of the Norwegian tax-based health service guarantees all citizens health care in case of a severe illness, a proven health benefit, and proportionality between need and treatment. This study compares the values of the country's health policy makers with these three official principles. In total, 34 policy makers participated in a discrete choice experiment, weighting the relative value of six policy criteria. We used multivariate logistic regression with selection as the dependent variable to derive odds ratios for each criterion. Next, we constructed a composite league table - based on the sum score for the probability of selection - to rank potential interventions in five major disease areas. The group considered cost effectiveness, large individual benefits and severity of disease the most important criteria in decision making. Priority interventions are those related to cardiovascular diseases and respiratory diseases. Interventions related to mental health rank as less attractive. Norwegian policy makers' values are in agreement with the principles formulated in national health laws. Multi-criteria decision approaches may provide a tool to support explicit allocation decisions.
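
    The composite league table this abstract describes can be sketched as a sum score per intervention; the criteria, odds-ratio weights, and intervention profiles below are all illustrative, not the study's estimates:

```python
# Sketch of a composite league table: each intervention gets a sum score
# over the criteria it satisfies, weighted by the odds ratio the choice
# experiment assigned to that criterion. All numbers are invented.

criteria_or = {"cost_effective": 2.5, "large_benefit": 2.0, "severe_disease": 1.8}

interventions = {
    "cardiovascular": {"cost_effective": 1, "large_benefit": 1, "severe_disease": 1},
    "respiratory":    {"cost_effective": 1, "large_benefit": 0, "severe_disease": 1},
    "mental_health":  {"cost_effective": 0, "large_benefit": 0, "severe_disease": 1},
}

def sum_score(profile):
    return sum(criteria_or[c] * v for c, v in profile.items())

league = sorted(interventions, key=lambda i: sum_score(interventions[i]), reverse=True)
print(league)  # -> ['cardiovascular', 'respiratory', 'mental_health']
```

    The toy ranking reproduces the qualitative pattern reported above: cardiovascular and respiratory interventions at the top, mental health at the bottom.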

  13. Multicentre prospective validation of a urinary peptidome-based classifier for the diagnosis of type 2 diabetic nephropathy

    PubMed Central

    Siwy, Justyna; Schanstra, Joost P.; Argiles, Angel; Bakker, Stephan J.L.; Beige, Joachim; Boucek, Petr; Brand, Korbinian; Delles, Christian; Duranton, Flore; Fernandez-Fernandez, Beatriz; Jankowski, Marie-Luise; Al Khatib, Mohammad; Kunt, Thomas; Lajer, Maria; Lichtinghagen, Ralf; Lindhardt, Morten; Maahs, David M; Mischak, Harald; Mullen, William; Navis, Gerjan; Noutsou, Marina; Ortiz, Alberto; Persson, Frederik; Petrie, John R.; Roob, Johannes M.; Rossing, Peter; Ruggenenti, Piero; Rychlik, Ivan; Serra, Andreas L.; Snell-Bergeon, Janet; Spasovski, Goce; Stojceva-Taneva, Olivera; Trillini, Matias; von der Leyen, Heiko; Winklhofer-Roob, Brigitte M.; Zürbig, Petra; Jankowski, Joachim

    2014-01-01

    Background Diabetic nephropathy (DN) is one of the major late complications of diabetes. Treatment aimed at slowing down the progression of DN is available but methods for early and definitive detection of DN progression are currently lacking. The ‘Proteomic prediction and Renin angiotensin aldosterone system Inhibition prevention Of early diabetic nephRopathy In TYpe 2 diabetic patients with normoalbuminuria trial’ (PRIORITY) aims to evaluate the early detection of DN in patients with type 2 diabetes (T2D) using a urinary proteome-based classifier (CKD273). Methods In this ancillary study of the recently initiated PRIORITY trial we aimed to validate for the first time the CKD273 classifier in a multicentre (9 different institutions providing samples from 165 T2D patients) prospective setting. In addition we also investigated the influence of sample containers, age and gender on the CKD273 classifier. Results We observed a high consistency of the CKD273 classification scores across the different centres with areas under the curves ranging from 0.95 to 1.00. The classifier was independent of age (range tested 16–89 years) and gender. Furthermore, the use of different urine storage containers did not affect the classification scores. Analysis of the distribution of the individual peptides of the classifier over the nine different centres showed that fragments of blood-derived and extracellular matrix proteins were the most consistently found. Conclusion We provide for the first time validation of this urinary proteome-based classifier in a multicentre prospective setting and show the suitability of the CKD273 classifier to be used in the PRIORITY trial. PMID:24589724

  14. The ability of the 2013 ACC/AHA cardiovascular risk score to identify rheumatoid arthritis patients with high coronary artery calcification scores

    PubMed Central

    Kawai, Vivian K.; Chung, Cecilia P.; Solus, Joseph F.; Oeser, Annette; Raggi, Paolo; Stein, C. Michael

    2014-01-01

    Objective Patients with rheumatoid arthritis (RA) have increased risk of atherosclerotic cardiovascular disease (ASCVD) that is underestimated by the Framingham risk score (FRS). We hypothesized that the 2013 ACC/AHA 10-year risk score would perform better than the FRS and the Reynolds risk score (RRS) in identifying RA patients known to have elevated cardiovascular risk based on high coronary artery calcification (CAC) scores. Methods Among 98 RA patients eligible for risk stratification using the ACC/AHA score we identified 34 patients with high CAC (≥ 300 Agatston units or ≥75th percentile) and compared the ability of the 10-year FRS, RRS and the ACC/AHA risk scores to correctly assign these patients to an elevated risk category. Results All three risk scores were higher in patients with high CAC (P values <0.05). The percentage of patients with high CAC correctly assigned to the elevated risk category was similar among the three scores (FRS 32%, RRS 32%, ACC/AHA 41%) (P=0.233). The c-statistics for the FRS, RRS and ACC/AHA risk scores predicting the presence of high CAC were 0.65, 0.66, and 0.65, respectively. Conclusions The ACC/AHA 10-year risk score does not offer any advantage compared to the traditional FRS and RRS in the identification of RA patients with elevated risk as determined by high CAC. The ACC/AHA risk score assigned almost 60% of patients with high CAC into a low risk category. Risk scores and standard risk prediction models used in the general population do not adequately identify many RA patients with elevated cardiovascular risk. PMID:25371313

  15. Research priorities to reduce the global burden of dementia by 2025.

    PubMed

    Shah, Hiral; Albanese, Emiliano; Duggan, Cynthia; Rudan, Igor; Langa, Kenneth M; Carrillo, Maria C; Chan, Kit Yee; Joanette, Yves; Prince, Martin; Rossor, Martin; Saxena, Shekhar; Snyder, Heather M; Sperling, Reisa; Varghese, Mathew; Wang, Huali; Wortmann, Marc; Dua, Tarun

    2016-11-01

    At the First WHO Ministerial Conference on Global Action Against Dementia in March, 2015, 160 delegates, including representatives from 80 WHO Member States and four UN agencies, agreed on a call for action to reduce the global burden of dementia by fostering a collective effort to advance research. To drive this effort, we completed a globally representative research prioritisation exercise using an adapted version of the Child Health and Nutrition Research Initiative method. We elicited 863 research questions from 201 participants and consolidated these questions into 59 thematic research avenues, which were scored anonymously by 162 researchers and stakeholders from 39 countries according to five criteria. Six of the top ten research priorities were focused on prevention, identification, and reduction of dementia risk, and on delivery and quality of care for people with dementia and their carers. Other priorities related to diagnosis, biomarkers, treatment development, basic research into disease mechanisms, and public awareness and understanding of dementia. Research priorities identified by this systematic international process should be mapped onto the global dementia research landscape to identify crucial gaps and inform and motivate policy makers, funders, and researchers to support and conduct research to reduce the global burden of dementia. Efforts are needed by all stakeholders, including WHO, WHO Member States, and civil society, to continuously monitor research investments and progress, through international platforms such as a Global Dementia Observatory. With established research priorities, an opportunity now exists to translate the call for action into a global dementia action plan to reduce the global burden of dementia. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Different hip and knee priority score systems: are they good for the same thing?

    PubMed

    Escobar, Antonio; Quintana, Jose Maria; Espallargues, Mireia; Allepuz, Alejandro; Ibañez, Berta

    2010-10-01

    The aim of the present study was to compare two priority tools, developed with different methods, that are used for joint replacement for patients on waiting lists. Two prioritization tools developed and validated by different methodologies were used on the same cohort of patients. The first, the IRYSS hip and knee priority score (IHKPS), developed by the RAND method, was applied while patients were on the waiting list. The other, the Catalonia hip-knee priority score (CHKPS), developed by conjoint analysis, was adapted and applied retrospectively. In addition, all patients completed the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) before the intervention. Correlation between the two tools was studied by the Pearson correlation coefficient (r). Agreement was analysed by means of the intra-class correlation coefficient (ICC), the Kendall coefficient, and Cohen's kappa. The relationship of IHKPS and CHKPS to baseline WOMAC scores was also studied by the r coefficient. The sample consisted of 774 consecutive patients. The Pearson correlation coefficient between IHKPS and CHKPS was 0.79. The agreement study showed that the ICC was 0.74, the Kendall coefficient 0.86, and kappa 0.66. Finally, correlation between CHKPS and baseline WOMAC ranged from 0.43 to 0.64, and correlation between IHKPS and WOMAC ranged from 0.50 to 0.74. The results support the hypothesis that, although the two prioritization tools use different methodologies, they yield similar results if the final objective is to organize and sort patients on the waiting list. © 2010 Blackwell Publishing Ltd.
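    The agreement statistics reported in this record (Pearson's r, Cohen's kappa) are straightforward to compute; a minimal pure-Python sketch, using illustrative data rather than the study's:

```python
def pearson_r(x, y):
    # Pearson correlation coefficient between two equal-length score lists
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def cohen_kappa(a, b):
    # chance-corrected agreement between two categorical ratings
    n = len(a)
    p_obs = sum(1 for u, v in zip(a, b) if u == v) / n
    cats = set(a) | set(b)
    p_exp = sum((a.count(c) / n) * (b.count(c) / n) for c in cats)
    return (p_obs - p_exp) / (1 - p_exp)
```

    For score-agreement studies like this one, the two priority tools' numeric outputs would feed `pearson_r`, and their categorised priority bands would feed `cohen_kappa`.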

  17. Irrigation, risk aversion, and water right priority under water supply uncertainty

    PubMed Central

    Xu, Wenchao; Rosegrant, Mark W.

    2017-01-01

    This paper explores the impacts of a water right's allocative priority—as an indicator of farmers' risk‐bearing ability—on land irrigation under water supply uncertainty. We develop and use an economic model to simulate farmers' land irrigation decision and associated economic returns in eastern Idaho. Results indicate that the optimal acreage of land irrigated increases with water right priority when hydroclimate risk exhibits a negatively skewed or right‐truncated distribution. Simulation results suggest that prior appropriation enables senior water rights holders to allocate a higher proportion of their land to irrigation, 6 times as much as junior rights holders do, creating a gap in the annual expected net revenue reaching up to $141.4 per acre or $55,800 per farm between the two groups. The optimal irrigated acreage, expected net revenue, and shadow value of a water right's priority are subject to substantial changes under a changing climate in the future, where temporal variation in water supply risks significantly affects the profitability of agricultural land use under the priority‐based water sharing mechanism. PMID:29200529
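    The priority effect this record describes can be illustrated with a toy Monte Carlo simulation (all parameters below are hypothetical, not the paper's calibration): a senior right is filled from the annual supply first, while a junior right only receives what remains after the senior claim.

```python
import random

def expected_net_revenue(acres, senior, n_years=20000):
    # Toy model: annual water supply follows a right-truncated skewed
    # distribution; a senior right draws first, a junior right gets the
    # remainder after a fixed senior claim. All numbers are illustrative.
    random.seed(0)  # identical supply sequence for senior and junior runs
    total = 0.0
    for _ in range(n_years):
        supply = min(random.expovariate(1 / 8.0), 10.0)   # right-truncated
        available = supply if senior else max(0.0, supply - 6.0)
        irrigated = min(acres, available)
        total += 100.0 * irrigated - 20.0 * acres          # revenue - land cost
    return total / n_years
```

    Because the seed is fixed, both runs face the same supply draws, so any revenue gap between `senior=True` and `senior=False` is attributable purely to allocative priority.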

  18. A point-based prediction model for cardiovascular risk in orthotopic liver transplantation: The CAR-OLT score.

    PubMed

    VanWagner, Lisa B; Ning, Hongyan; Whitsett, Maureen; Levitsky, Josh; Uttal, Sarah; Wilkins, John T; Abecassis, Michael M; Ladner, Daniela P; Skaro, Anton I; Lloyd-Jones, Donald M

    2017-12-01

    Cardiovascular disease (CVD) complications are important causes of morbidity and mortality after orthotopic liver transplantation (OLT). There is currently no preoperative risk-assessment tool that allows physicians to estimate the risk for CVD events following OLT. We sought to develop a point-based prediction model (risk score) for CVD complications after OLT, the Cardiovascular Risk in Orthotopic Liver Transplantation risk score, among a cohort of 1,024 consecutive patients aged 18-75 years who underwent first OLT in a tertiary-care teaching hospital (2002-2011). The main outcome measures were major 1-year CVD complications, defined as death from a CVD cause or hospitalization for a major CVD event (myocardial infarction, revascularization, heart failure, atrial fibrillation, cardiac arrest, pulmonary embolism, and/or stroke). The bootstrap method yielded bias-corrected 95% confidence intervals for the regression coefficients of the final model. Among 1,024 first OLT recipients, major CVD complications occurred in 329 (32.1%). Variables selected for inclusion in the model (using model optimization strategies) included preoperative recipient age, sex, race, employment status, education status, history of hepatocellular carcinoma, diabetes, heart failure, atrial fibrillation, pulmonary or systemic hypertension, and respiratory failure. The discriminative performance of the point-based score (C statistic = 0.78, bias-corrected C statistic = 0.77) was superior to other published risk models for postoperative CVD morbidity and mortality, and it had appropriate calibration (Hosmer-Lemeshow P = 0.33). The point-based risk score can identify patients at risk for CVD complications after OLT surgery (available at www.carolt.us); this score may be useful for identification of candidates for further risk stratification or other management strategies to improve CVD outcomes after OLT. (Hepatology 2017;66:1968-1979). 
© 2017 by the American Association for the Study of Liver Diseases.

  19. Setting conservation priorities.

    PubMed

    Wilson, Kerrie A; Carwardine, Josie; Possingham, Hugh P

    2009-04-01

    A generic framework for setting conservation priorities based on the principles of classic decision theory is provided. This framework encapsulates the key elements of any problem, including the objective, the constraints, and knowledge of the system. Within the context of this framework the broad array of approaches for setting conservation priorities are reviewed. While some approaches prioritize assets or locations for conservation investment, it is concluded here that prioritization is incomplete without consideration of the conservation actions required to conserve the assets at particular locations. The challenges associated with prioritizing investments through time in the face of threats (and also spatially and temporally heterogeneous costs) can be aided by proper problem definition. Using the authors' general framework for setting conservation priorities, multiple criteria can be rationally integrated and where, how, and when to invest conservation resources can be scheduled. Trade-offs are unavoidable in priority setting when there are multiple considerations, and budgets are almost always finite. The authors discuss how trade-offs, risks, uncertainty, feedbacks, and learning can be explicitly evaluated within their generic framework for setting conservation priorities. Finally, they suggest ways that current priority-setting approaches may be improved.

  20. Utility of genetic and non-genetic risk factors in prediction of type 2 diabetes: Whitehall II prospective cohort study.

    PubMed

    Talmud, Philippa J; Hingorani, Aroon D; Cooper, Jackie A; Marmot, Michael G; Brunner, Eric J; Kumari, Meena; Kivimäki, Mika; Humphries, Steve E

    2010-01-14

    To assess the performance of a panel of common single nucleotide polymorphisms (genotypes) associated with type 2 diabetes in distinguishing incident cases of future type 2 diabetes (discrimination), and to examine the effect of adding genetic information to previously validated non-genetic (phenotype based) models developed to estimate the absolute risk of type 2 diabetes. Workplace based prospective cohort study with three 5 yearly medical screenings. 5535 initially healthy people (mean age 49 years; 33% women), of whom 302 developed new onset type 2 diabetes over 10 years. Non-genetic variables included in two established risk models-the Cambridge type 2 diabetes risk score (age, sex, drug treatment, family history of type 2 diabetes, body mass index, smoking status) and the Framingham offspring study type 2 diabetes risk score (age, sex, parental history of type 2 diabetes, body mass index, high density lipoprotein cholesterol, triglycerides, fasting glucose)-and 20 single nucleotide polymorphisms associated with susceptibility to type 2 diabetes. Cases of incident type 2 diabetes were defined on the basis of a standard oral glucose tolerance test, self report of a doctor's diagnosis, or the use of anti-diabetic drugs. A genetic score based on the number of risk alleles carried (range 0-40; area under receiver operating characteristics curve 0.54, 95% confidence interval 0.50 to 0.58) and a genetic risk function in which carriage of risk alleles was weighted according to the summary odds ratios of their effect from meta-analyses of genetic studies (area under receiver operating characteristics curve 0.55, 0.51 to 0.59) did not effectively discriminate cases of diabetes. The Cambridge risk score (area under curve 0.72, 0.69 to 0.76) and the Framingham offspring risk score (area under curve 0.78, 0.75 to 0.82) led to better discrimination of cases than did genotype based tests. 
Adding genetic information to phenotype based risk models did not improve discrimination and provided only a small improvement in model calibration and a modest net reclassification improvement of about 5% when added to the Cambridge risk score but not when added to the Framingham offspring risk score. The phenotype based risk models provided greater discrimination for type 2 diabetes than did models based on 20 common independently inherited diabetes risk alleles. The addition of genotypes to phenotype based risk models produced only minimal improvement in accuracy of risk estimation assessed by recalibration and, at best, a minor net reclassification improvement. The major translational application of the currently known common, small effect genetic variants influencing susceptibility to type 2 diabetes is likely to come from the insight they provide on causes of disease and potential therapeutic targets.
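    The two genetic scores compared in this study are simple to construct; a sketch with hypothetical odds ratios, not the study's 20-SNP panel:

```python
import math

def allele_count_score(genotypes):
    # unweighted score: total number of risk alleles carried (0-2 per SNP)
    return sum(genotypes)

def weighted_genetic_score(genotypes, odds_ratios):
    # weighted score: each risk allele contributes the log of the summary
    # odds ratio reported for its SNP in meta-analyses
    return sum(g * math.log(r) for g, r in zip(genotypes, odds_ratios))
```

    With 20 SNPs and up to 2 risk alleles each, `allele_count_score` spans the 0-40 range mentioned in the abstract.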

  1. Cardiac Society of Australia and New Zealand Position Statement: Coronary Artery Calcium Scoring.

    PubMed

    Liew, Gary; Chow, Clara; van Pelt, Niels; Younger, John; Jelinek, Michael; Chan, Jonathan; Hamilton-Craig, Christian

    2017-12-01

    Coronary Artery Calcium Scoring (CAC) is a non-invasive quantitation of coronary artery calcification using computed tomography (CT). It is a marker of atherosclerotic plaque burden and an independent predictor of future myocardial infarction and mortality. CAC provides incremental risk information beyond traditional risk calculators (e.g., the Framingham Risk Score). Its use for risk stratification is confined to primary prevention of cardiovascular events, and it can be considered as "individualised coronary risk scoring" for those not considered to be at high or low risk. Medical practitioners should carefully counsel patients prior to CAC, and the scan should only be undertaken if an alteration in therapy, including embarking on pharmacotherapy, is being considered based on the test result.
    Interpretation of CAC:
    CAC = 0: very low risk of death, <1% at 10 years.
    CAC = 1-100: low risk, <10%.
    CAC = 101-400: intermediate risk, 10-20%.
    CAC = 101-400 and >75th centile: moderately high risk, 15-20%.
    CAC > 400: high risk, >20%.
    Management recommendations based on CAC: Optimal diet and lifestyle measures are encouraged in all risk groups and form the basis of primary prevention strategies. Patients at moderately high or high risk based on CAC score are recommended to receive preventive medical therapy such as aspirin and statins. The evidence for pharmacotherapy is less robust at intermediate levels of CAC (101-400), with modest benefit for aspirin use, though statins may be reasonable for those above the 75th centile. Aspirin and statins are generally not recommended in patients with CAC <100. Repeat CAC testing: In patients with a CAC of 0, a repeat scan may be considered in 5 years but not sooner. In patients with a positive calcium score, routine re-scanning is not currently recommended; however, an annual increase in CAC of >15% or an annual increase of >100 units is predictive of future myocardial infarction and mortality. Cost effectiveness of CAC-based primary prevention recommendations: There are currently no data in Australia and New Zealand showing that CAC is cost-effective in informing primary prevention decisions. Given that the cost of testing is currently borne entirely by the patient, discussion regarding the implications of CAC results should occur before CAC is recommended and undertaken. Copyright © 2017 Australian and New Zealand Society of Cardiac and Thoracic Surgeons (ANZSCTS) and the Cardiac Society of Australia and New Zealand (CSANZ). Published by Elsevier B.V. All rights reserved.
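    The interpretation bands in this position statement map directly onto a small lookup; a sketch of the categorisation logic (the returned label strings are ours, the thresholds are the statement's):

```python
def cac_risk_category(cac, above_75th_centile=False):
    # Risk bands from the CSANZ position statement; label text is illustrative.
    if cac == 0:
        return "very low"             # <1% 10-year risk of death
    if cac <= 100:
        return "low"                  # <10%
    if cac <= 400:
        if above_75th_centile:
            return "moderately high"  # 15-20%
        return "intermediate"         # 10-20%
    return "high"                     # >20%
```

    In the statement's management scheme, only the "moderately high" and "high" categories carry a clear recommendation for preventive pharmacotherapy.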

  2. Big data privacy protection model based on multi-level trusted system

    NASA Astrophysics Data System (ADS)

    Zhang, Nan; Liu, Zehua; Han, Hongfeng

    2018-05-01

    This paper builds on the multi-level trusted system model, which protects the privacy of user data against Trojan-horse attacks through encryption, and enforces the principle "no read up, no write down": a low-priority level may not read higher-priority data, and a higher-priority level may not write to lower-priority data. This ensures that a leak of low-priority data privacy does not lead to the disclosure of high-priority data privacy. The model divides user data into seven risk levels, with priority levels 1-7 representing increasing privacy value, and matches them to seven encryption algorithms of differing execution efficiency: the higher the priority, the greater the privacy value of the data, and the stronger (and slower) the encryption algorithm chosen to ensure data security. For enterprises, the price point is determined by how long each unit of equipment is used; the higher the risk subgroup, the longer the encryption time. The model assumes that users prefer lower-priority encryption algorithms when efficiency is the main concern. The paper proposes a privacy cost model for each of the seven risk subgroups: the higher the privacy cost, the higher the priority of the risk subgroup, and the higher the price the user must pay to keep the data private. Furthermore, by combining an existing economic pricing model with a customer-traffic model proposed in the paper, prices fluctuate with market demand: the unit price is raised when market demand is low, and lowered under government guidance when demand increases, so that enterprise profit is still guaranteed. The model is further optimized by introducing dynamic consumer factors such as mood and age.
    Seven algorithms are selected from among symmetric and asymmetric encryption algorithms to define enterprise costs at the different levels. The proposed model thus handles the knock-on effects of cascading events and ensures that disclosure of users' low-level data privacy does not affect their high-level data privacy, greatly improving the safety of users' private information.

  3. Development of risk-based trading farm scoring system to assist with the control of bovine tuberculosis in cattle in England and Wales.

    PubMed

    Adkin, A; Brouwer, A; Simons, R R L; Smith, R P; Arnold, M E; Broughan, J; Kosmider, R; Downs, S H

    2016-01-01

    Identifying and ranking cattle herds with a higher risk of being or becoming infected, based on known risk factors, can help target farm biosecurity and surveillance schemes and reduce spread through animal trading. This paper describes a quantitative approach to develop risk scores, based on the probability of infection in a herd with bovine tuberculosis (bTB), to be used in a risk-based trading (RBT) scheme in England and Wales. To produce a practical scoring system, the risk factors included need to be simple and quick to understand, sufficiently informative, and derived from centralised national databases to enable verification and assess compliance. A logistic regression identified herd history of bTB, local bTB prevalence, herd size, and movements of animals onto farms in batches from high-risk areas as being significantly associated with the probability of bTB infection on farm. Risk factors were assigned points using the estimated odds ratios to weight them. The farm risk score was defined as the sum of these individual points, yielding a range from 1 to 5, and was calculated for each cattle farm that was trading animals in England and Wales at the start of a year. Within 12 months, of those farms tested, 30.3% of score 5 farms had a breakdown (sensitivity). Of farms scoring 1-4, only 5.4% incurred a breakdown (1-specificity). The use of this risk scoring system within RBT has the potential to reduce infected cattle movements; however, there are cost implications in ensuring that the information underpinning any system is accurate and up to date. Crown Copyright © 2015. Published by Elsevier B.V. All rights reserved.
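    A farm score of this kind sums integer points assigned per risk factor (with weights derived from odds ratios), and its performance at a cutoff is summarised by sensitivity and specificity. A minimal sketch with made-up point values, not the published weighting:

```python
# Hypothetical point weights for illustration; the published scheme maps
# odds-ratio-weighted points onto a 1-5 farm score.
POINTS = {"bTB history": 2, "high local prevalence": 1,
          "large herd": 1, "batch moves from high-risk area": 1}

def farm_score(factors):
    # factors: set of risk-factor names present on the farm
    return sum(POINTS[f] for f in factors)

def sensitivity_specificity(scores, breakdowns, cutoff):
    # breakdowns: per-farm booleans for whether a breakdown occurred
    tp = sum(s >= cutoff and b for s, b in zip(scores, breakdowns))
    fn = sum(s < cutoff and b for s, b in zip(scores, breakdowns))
    fp = sum(s >= cutoff and not b for s, b in zip(scores, breakdowns))
    tn = sum(s < cutoff and not b for s, b in zip(scores, breakdowns))
    return tp / (tp + fn), tn / (tn + fp)
```

    A score-5 cutoff, as in the paper's evaluation, trades a modest sensitivity for a low false-positive rate among traded herds.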

  4. Development of a Traumatic Brain Injury Assessment Score using Novel Biomarkers Discovered Through Autoimmune Profiling

    DTIC Science & Technology

    2014-08-15

    TSNRP research priorities addressed — primary priority: Force Health Protection (fit and ready force). Principal Investigator: Buonora, John E. USU Project Number: N12-P12. Progress towards achievement of the specific aims of the project is reported.

  5. Identifying management and disease priorities of Canadian dairy industry stakeholders.

    PubMed

    Bauman, C A; Barkema, H W; Dubuc, J; Keefe, G P; Kelton, D F

    2016-12-01

    The objective of this study was to identify the key management and disease issues affecting the Canadian dairy industry. An online questionnaire (FluidSurveys, http://fluidsurveys.com/) was conducted between March 1 and May 31, 2014. A total of 1,025 responses were received from across Canada, of which 68% (n=698) were dairy producers; the remaining respondents represented veterinarians, university researchers, government personnel, and other allied industries. Participants were asked to identify their top 3 management and disease priorities from 2 lists offered. Topics were subsequently ranked from highest to lowest using 3 different point-based ranking methods: 5-3-1 (5 points for first priority, 3 for second, and 1 for third), 3-2-1, and 1-1-1 (equal ranking). The 5-3-1 point system was selected because it minimized the number of duplicate point scores. Stakeholder groups showed general agreement, with the top management issue identified as animal welfare and the number one health concern as lameness. Other areas identified as priorities were reproductive health, antibiotic use, bovine viral diarrhea, and Staphylococcus aureus mastitis, with these rankings influenced by region, herd size, and stakeholder group. This is the first national comprehensive assessment of priorities undertaken in the Canadian dairy industry and will assist researchers, policymakers, program developers, and funding agencies in making future decisions based on direct industry feedback. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
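    The 5-3-1 weighting described in this record is a straightforward weighted tally over ranked ballots; a sketch (topic names are illustrative):

```python
from collections import Counter

def rank_topics(ballots, weights=(5, 3, 1)):
    # ballots: iterable of (first, second, third) priority choices,
    # one tuple per respondent; returns topics sorted by total points
    tally = Counter()
    for ballot in ballots:
        for topic, w in zip(ballot, weights):
            tally[topic] += w
    return tally.most_common()
```

    Swapping `weights` for `(3, 2, 1)` or `(1, 1, 1)` reproduces the study's alternative ranking methods; unequal weights such as 5-3-1 produce fewer tied totals.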

  6. Identifying environmental health priorities in underserved populations: a study of rural versus urban communities

    PubMed Central

    Bernhard, M.C.; Evans, M.B.; Kent, S.T.; Johnson, E.; Threadgill, S.L.; Tyson, S.; Becker, S.M.; Gohlke, J.M.

    2013-01-01

    Objectives Understanding and effectively addressing persistent health disparities in minority communities requires a clear picture of members’ concerns and priorities. This study was intended to engage residents in urban and rural communities in order to identify environmental health priorities. Specific emphasis was placed on how the communities defined the term environment, their perceptions of environmental exposures as affecting their health, specific priorities in their communities, and differences in urban versus rural populations. Study design A community-engaged approach was used to develop and implement focus groups and compare environmental health priorities in urban versus rural communities. Methods A total of eight focus groups were conducted: four in rural and four in urban communities. Topics included defining the term environment, how the environment may affect health, and environmental priorities within their communities, using both open discussion and a predefined list. Data were analysed both qualitatively and quantitatively to identify patterns and trends. Results There were important areas of overlap in priorities between urban and rural communities; both emphasized the importance of the social environment and shared a concern over air pollution from industrial sources. In contrast, for urban focus groups, abandoned houses and their social and physical sequelae were a high priority while concerns about adequate sewer and water services and road maintenance were high priorities in rural communities. Conclusions This study was able to identify environmental health priorities in urban versus rural minority communities. In contrast to some previous risk perception research, the results of this study suggest prioritization of tangible, known risks in everyday life instead of rare, disaster-related events, even in communities that have recently experienced devastating damage from tornadoes. 
The findings can help inform future efforts to study, understand and effectively address environmental issues, and are particularly relevant to developing effective community-based strategies in vulnerable populations. PMID:24239281

  7. Development and use of role model stories in a community level HIV risk reduction intervention.

    PubMed Central

    Corby, N H; Enguídanos, S M; Kay, L S

    1996-01-01

    A theory-based HIV prevention intervention was implemented as part of a five-city AIDS Community Demonstration Project for the development and testing of a community-level intervention to reduce AIDS risk among historically underserved groups. This intervention employed written material containing stories of risk-reducing experiences of members of the priority populations, in this case, injecting drug users, their female sex partners, and female sex workers. These materials were distributed to members of these populations by their peers, volunteers from the population who were trained to deliver social reinforcement for interest in personal risk reduction and the materials. The participation of the priority populations in the development and implementation of the intervention was designed to increase the credibility of the intervention and the acceptance of the message. The techniques involved in developing role-model stories are described in this paper. PMID:8862158

  8. The Role of Space Medicine in Management of Risk in Spaceflight

    NASA Technical Reports Server (NTRS)

    Clark, Jonathan B.

    2001-01-01

    The purpose of Space Medicine is to ensure mission success by providing quality and comprehensive health care throughout all mission phases to optimize crew health and performance and to prevent negative long-term health consequences. Space flight presents additional hazards and associated risks to crew health, performance, and safety. With an extended human presence in space, it is expected that illness and injury will occur on orbit, which may present a significant threat to crew health and performance and to mission success. Maintaining crew health, safety, and performance and preventing illness and injury are high priorities necessary for mission success and agency goals. Space flight health care should meet the standards of practice of evidence-based clinical medicine. The function of Space Medicine is expected to meet the agency goals as stated in the 1998 NASA Strategic Plan and the priorities established by the Critical Path Roadmap Project. The Critical Path Roadmap Project is an integrated NASA cross-disciplinary strategy to assess, understand, mitigate, and manage the risks associated with long-term exposure to the space flight environment. The evidence-based approach to space medicine should be a standardized, objective process yielding expected results and establishing clinical practice standards while balancing individual risk with mission (programmatic) risk. The ability to methodically apply available knowledge and expertise to individual and mission health issues will ensure that appropriate priorities are assigned and resources are allocated. NASA's Space Medicine risk management process combines clinical and engineering approaches. Competition for weight, power, volume, cost, and crew time must be balanced in making decisions about the care of individual crew with competing agency resources.

  9. Dynamic TIMI Risk Score for STEMI

    PubMed Central

    Amin, Sameer T.; Morrow, David A.; Braunwald, Eugene; Sloan, Sarah; Contant, Charles; Murphy, Sabina; Antman, Elliott M.

    2013-01-01

    Background Although there are multiple methods of risk stratification for ST‐elevation myocardial infarction (STEMI), this study presents a prospectively validated method for reclassification of patients based on in‐hospital events. A dynamic risk score provides an initial risk stratification and reassessment at discharge. Methods and Results The dynamic TIMI risk score for STEMI was derived in ExTRACT‐TIMI 25 and validated in TRITON‐TIMI 38. Baseline variables were from the original TIMI risk score for STEMI. New variables were major clinical events occurring during the index hospitalization. Each variable was tested individually in a univariate Cox proportional hazards regression. Variables with P<0.05 were incorporated into a full multivariable Cox model to assess the risk of death at 1 year. Each variable was assigned an integer value based on the odds ratio, and the final score was the sum of these values. The dynamic score included the development of in‐hospital MI, arrhythmia, major bleed, stroke, congestive heart failure, recurrent ischemia, and renal failure. The C‐statistic produced by the dynamic score in the derivation database was 0.76, with a net reclassification improvement (NRI) of 0.33 (P<0.0001) from the inclusion of dynamic events to the original TIMI risk score. In the validation database, the C‐statistic was 0.81, with a NRI of 0.35 (P=0.01). Conclusions This score is a prospectively derived, validated means of estimating 1‐year mortality of STEMI at hospital discharge and can serve as a clinically useful tool. By incorporating events during the index hospitalization, it can better define risk and help to guide treatment decisions. PMID:23525425

  10. Dynamic TIMI risk score for STEMI.

    PubMed

    Amin, Sameer T; Morrow, David A; Braunwald, Eugene; Sloan, Sarah; Contant, Charles; Murphy, Sabina; Antman, Elliott M

    2013-01-29

    Although there are multiple methods of risk stratification for ST-elevation myocardial infarction (STEMI), this study presents a prospectively validated method for reclassification of patients based on in-hospital events. A dynamic risk score provides an initial risk stratification and reassessment at discharge. The dynamic TIMI risk score for STEMI was derived in ExTRACT-TIMI 25 and validated in TRITON-TIMI 38. Baseline variables were from the original TIMI risk score for STEMI. New variables were major clinical events occurring during the index hospitalization. Each variable was tested individually in a univariate Cox proportional hazards regression. Variables with P<0.05 were incorporated into a full multivariable Cox model to assess the risk of death at 1 year. Each variable was assigned an integer value based on the odds ratio, and the final score was the sum of these values. The dynamic score included the development of in-hospital MI, arrhythmia, major bleed, stroke, congestive heart failure, recurrent ischemia, and renal failure. The C-statistic produced by the dynamic score in the derivation database was 0.76, with a net reclassification improvement (NRI) of 0.33 (P<0.0001) from the inclusion of dynamic events to the original TIMI risk score. In the validation database, the C-statistic was 0.81, with a NRI of 0.35 (P=0.01). This score is a prospectively derived, validated means of estimating 1-year mortality of STEMI at hospital discharge and can serve as a clinically useful tool. By incorporating events during the index hospitalization, it can better define risk and help to guide treatment decisions.
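    The mechanics of a dynamic score of this kind are simple: an integer point value (derived from each event's odds ratio in the Cox model) is added to the baseline score as events accrue during the index hospitalization. A sketch with hypothetical point values, not the published weights:

```python
# Hypothetical integer weights for illustration only; the published score
# assigns each in-hospital event points based on its odds ratio.
EVENT_POINTS = {
    "in-hospital MI": 2,
    "arrhythmia": 1,
    "major bleed": 2,
    "stroke": 3,
    "congestive heart failure": 2,
    "recurrent ischemia": 1,
    "renal failure": 2,
}

def dynamic_score(baseline_timi_score, in_hospital_events):
    # discharge score = admission risk score plus points for each
    # major clinical event during the index hospitalization
    return baseline_timi_score + sum(EVENT_POINTS[e] for e in in_hospital_events)
```

    A patient whose stay is uneventful keeps their admission score, while accrued events reclassify them upward at discharge, which is the reclassification the NRI statistics in the abstract quantify.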

  11. A simple risk score for identifying individuals with impaired fasting glucose in the Southern Chinese population.

    PubMed

    Wang, Hui; Liu, Tao; Qiu, Quan; Ding, Peng; He, Yan-Hui; Chen, Wei-Qing

    2015-01-23

    This study aimed to develop and validate a simple risk score for detecting individuals with impaired fasting glucose (IFG) among the Southern Chinese population. A sample of participants aged ≥20 years and without known diabetes from the 2006-2007 Guangzhou diabetes cross-sectional survey was used to develop separate risk scores for men and women. The participants completed a self-administered structured questionnaire and underwent simple clinical measurements. The risk scores were developed by multiple logistic regression analysis. External validation was performed based on three other studies: the 2007 Zhuhai rural population-based study, the 2008-2010 Guangzhou diabetes cross-sectional study and the 2007 Tibet population-based study. Performance of the scores was measured with the Hosmer-Lemeshow goodness-of-fit test and ROC c-statistic. Age, waist circumference, body mass index and family history of diabetes were included in the risk score for both men and women, with the additional factor of hypertension for men. The ROC c-statistic was 0.70 for both men and women in the derivation samples. Risk scores of ≥28 for men and ≥18 for women showed respective sensitivity, specificity, positive predictive value and negative predictive value of 56.6%, 71.7%, 13.0% and 96.0% for men and 68.7%, 60.2%, 11% and 96.0% for women in the derivation population. The scores performed comparably with the Zhuhai rural sample and the 2008-2010 Guangzhou urban samples but poorly in the Tibet sample. The performance of pre-existing USA, Shanghai, and Chengdu risk scores was poorer in our population than in their original study populations. The results suggest that the developed simple IFG risk scores can be generalized in Guangzhou city and nearby rural regions and may help primary health care workers to identify individuals with IFG in their practice.
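    Applying such a score in practice reduces to a sex-specific cutoff plus the usual screening metrics. A sketch (the cutoffs of ≥28 for men and ≥18 for women are from the abstract; the helper names and example data are ours):

```python
def ifg_screen_positive(score, sex):
    # sex-specific cutoffs reported in the study: >=28 (men), >=18 (women)
    return score >= (28 if sex == "M" else 18)

def ppv_npv(flags, outcomes):
    # positive and negative predictive values of a screening rule
    tp = sum(f and o for f, o in zip(flags, outcomes))
    fp = sum(f and not o for f, o in zip(flags, outcomes))
    fn = sum(not f and o for f, o in zip(flags, outcomes))
    tn = sum(not f and not o for f, o in zip(flags, outcomes))
    return tp / (tp + fp), tn / (tn + fn)
```

    The high NPV (96%) and low PPV (11-13%) reported in the study are typical of a cheap rule-out screen: a negative result is reassuring, while a positive result mainly flags who should get a confirmatory glucose test.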

  12. A Simple Risk Score for Identifying Individuals with Impaired Fasting Glucose in the Southern Chinese Population

    PubMed Central

    Wang, Hui; Liu, Tao; Qiu, Quan; Ding, Peng; He, Yan-Hui; Chen, Wei-Qing

    2015-01-01

    This study aimed to develop and validate a simple risk score for detecting individuals with impaired fasting glucose (IFG) among the Southern Chinese population. A sample of participants aged ≥20 years and without known diabetes from the 2006–2007 Guangzhou diabetes cross-sectional survey was used to develop separate risk scores for men and women. The participants completed a self-administered structured questionnaire and underwent simple clinical measurements. The risk scores were developed by multiple logistic regression analysis. External validation was performed based on three other studies: the 2007 Zhuhai rural population-based study, the 2008–2010 Guangzhou diabetes cross-sectional study and the 2007 Tibet population-based study. Performance of the scores was measured with the Hosmer-Lemeshow goodness-of-fit test and ROC c-statistic. Age, waist circumference, body mass index and family history of diabetes were included in the risk score for both men and women, with the additional factor of hypertension for men. The ROC c-statistic was 0.70 for both men and women in the derivation samples. Risk scores of ≥28 for men and ≥18 for women showed respective sensitivity, specificity, positive predictive value and negative predictive value of 56.6%, 71.7%, 13.0% and 96.0% for men and 68.7%, 60.2%, 11% and 96.0% for women in the derivation population. The scores performed comparably with the Zhuhai rural sample and the 2008–2010 Guangzhou urban samples but poorly in the Tibet sample. The performance of pre-existing USA, Shanghai, and Chengdu risk scores was poorer in our population than in their original study populations. The results suggest that the developed simple IFG risk scores can be generalized in Guangzhou city and nearby rural regions and may help primary health care workers to identify individuals with IFG in their practice. PMID:25625405

  13. Integrating Economic Costs and Biological Traits into Global Conservation Priorities for Carnivores

    PubMed Central

    Loyola, Rafael Dias; Oliveira-Santos, Luiz Gustavo Rodrigues; Almeida-Neto, Mário; Nogueira, Denise Martins; Kubota, Umberto; Diniz-Filho, José Alexandre Felizola; Lewinsohn, Thomas Michael

    2009-01-01

    Background Prioritization schemes usually highlight species-rich areas, where many species are at imminent risk of extinction. To be ecologically relevant, these schemes should also include species' biological traits in area-setting methods. Furthermore, in a world of limited funds for conservation, conservation action is constrained by land acquisition costs. Hence, including economic costs in conservation priorities can substantially improve their cost-effectiveness. Methodology/Principal Findings We examined four global conservation scenarios for carnivores based on the joint mapping of economic costs and species' biological traits. These scenarios identify the most cost-effective priority sets of ecoregions, indicating the best investment opportunities for safeguarding every carnivore species, and also establish priority sets that maximize species representation in areas harboring highly vulnerable species. We compared these results with a scenario that minimizes the total number of ecoregions required for conserving all species, irrespective of other factors. We found that cost-effective conservation investments should focus on the 41 ecoregions highlighted in the scenario that considers both ecoregion vulnerability and the economic costs of land acquisition simultaneously. Ecoregions included in priority sets under these criteria should yield the best returns on investment, since they harbor species with high extinction risk and have lower mean land costs. Conclusions/Significance Our study highlights ecoregions of particular importance for the conservation of the world's carnivores, defining global conservation priorities in analyses that encompass socioeconomic and life-history factors. We consider the identification of a comprehensive priority set of areas a first step toward an in-situ biodiversity maintenance strategy. PMID:19710911

  14. Ethics education: a priority for general practitioners in occupational medicine.

    PubMed

    Alavi, S Shohreh; Makarem, Jalil; Mehrdad, Ramin

    2015-01-01

    General practitioners (GPs) who work in occupational medicine (OM) should be trained continuously. However, ethical issues seem to have been neglected. This cross-sectional study aimed to determine educational priorities for GPs working in OM. A total of 410 GPs who participated in OM seminars were asked to answer a number of questions related to items that they usually encounter in their work. The respondents scored 15 items on their frequency of experience in OM, their felt need for education in the field, and their knowledge and skills. Ethical issues were the most frequently encountered item and the area in which the felt need for education was the greatest, yet knowledge of and skills in ethical issues were the poorest. Ethical principles and confidentiality had the highest calculated educational priority scores. It is necessary to consider ethical issues as an educational priority for GPs working in the field of OM.

  15. CONTINUOUS FORMALDEHYDE MEASUREMENT SYSTEM BASED ON MODIFIED FOURIER TRANSFORM INFRARED SPECTROSCOPY

    EPA Science Inventory

    EPA is developing advanced open-path and cell-based optical techniques for time-resolved measurement of priority hazardous air pollutants such as formaldehyde (HCHO). Due to its high National Air Toxics Assessment risk factor, there is increasing interest in continuous measuremen...

  16. [Assessment of risk of contamination of drinking water for the health of children in the Tula region].

    PubMed

    Grigorev, Yu I; Lyapina, N V

    2014-01-01

    A hygienic analysis of the centralized drinking water supply in the Tula region was performed, and the priority contaminants of drinking water were identified. Using risk assessment methodology, the carcinogenic risk to children's health was calculated. A direct relationship between certain classes of diseases and chemical contamination of drinking water was established.

  17. Associations of genetic risk scores based on adult adiposity pathways with childhood growth and adiposity measures.

    PubMed

    Monnereau, Claire; Vogelezang, Suzanne; Kruithof, Claudia J; Jaddoe, Vincent W V; Felix, Janine F

    2016-08-18

    Results from genome-wide association studies (GWAS) identified many loci and biological pathways that influence adult body mass index (BMI). We aimed to determine whether biological pathways related to adult BMI also affect infant growth and childhood adiposity measures. We used data from a population-based prospective cohort study among 3,975 children with a mean age of 6 years. Genetic risk scores were constructed based on the 97 SNPs associated with adult BMI previously identified with GWAS and on 28 BMI-related biological pathways based on subsets of these 97 SNPs. Outcomes were infant peak weight velocity, BMI at adiposity peak and age at adiposity peak, and childhood BMI, total fat mass percentage, android/gynoid fat ratio, and preperitoneal fat area. Analyses were performed using linear regression models. A higher overall adult BMI risk score was associated with infant BMI at adiposity peak and childhood BMI, total fat mass, android/gynoid fat ratio, and preperitoneal fat area (all p-values < 0.05). Analyses focused on specific biological pathways showed that the membrane proteins genetic risk score was associated with infant peak weight velocity, and the genetic risk scores related to neuronal developmental processes, hypothalamic processes, cyclic AMP, WNT signaling, membrane proteins, monogenic obesity and/or energy homeostasis, glucose homeostasis, cell cycle, and muscle biology pathways were associated with childhood adiposity measures (all p-values < 0.05). None of the pathways were associated with childhood preperitoneal fat area. A genetic risk score based on 97 SNPs related to adult BMI was associated with peak weight velocity during infancy and general and abdominal fat measurements at the age of 6 years. Risk scores based on genetic variants linked to specific biological pathways, including central nervous system and hypothalamic processes, influence body fat development from early life onwards.

  18. Setting priorities for reducing risk and advancing patient safety.

    PubMed

    Gaffey, Ann D

    2016-04-01

    We set priorities every day in both our personal and professional lives. Some decisions are easy, while others require much more thought, participation, and resources. The difficult or less appealing priorities may not be popular, may receive push-back, and may be resource intensive. Whether personal or professional, the urgency that accompanies true priorities becomes a driving force. It is that urgency to ensure our patients' safety that brings many of us to work each day. This is not easy work. It requires us to be knowledgeable about the enterprise we are working in and to have the professional skills and competence to facilitate setting the priorities that allow our organizations to minimize risk and maximize value. © 2016 American Society for Healthcare Risk Management of the American Hospital Association.

  19. The Application of Failure Modes and Effects Analysis Methodology to Intrathecal Drug Delivery for Pain Management

    PubMed Central

    Patel, Teresa; Fisher, Stanley P.

    2016-01-01

    Objective This study aimed to utilize failure modes and effects analysis (FMEA) to transform clinical insights into a risk mitigation plan for intrathecal (IT) drug delivery in pain management. Methods The FMEA methodology, which has been used for quality improvement, was adapted to assess risks (i.e., failure modes) associated with IT therapy. Ten experienced pain physicians scored 37 failure modes in the following categories: patient selection for therapy initiation (efficacy and safety concerns), patient safety during IT therapy, and product selection for IT therapy. Participants assigned severity, probability, and detection scores for each failure mode, from which a risk priority number (RPN) was calculated. Failure modes with the highest RPNs (i.e., most problematic) were discussed, and strategies were proposed to mitigate risks. Results Strategic discussions focused on 17 failure modes with the most severe outcomes, the highest probabilities of occurrence, and the most challenging detection. The topic of the highest-ranked failure mode (RPN = 144) was manufactured monotherapy versus compounded combination products. Addressing failure modes associated with appropriate patient and product selection was predicted to be clinically important for the success of IT therapy. Conclusions The methodology of FMEA offers a systematic approach to prioritizing risks in a complex environment such as IT therapy. Unmet needs and information gaps are highlighted through the process. Risk mitigation and strategic planning to prevent and manage critical failure modes can contribute to therapeutic success. PMID:27477689
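    The RPN arithmetic this record relies on is standard FMEA practice: RPN = severity × probability of occurrence × detectability, each conventionally rated on a 1–10 scale, and failure modes are then ranked by RPN. A minimal sketch follows; the failure-mode names and ratings below are hypothetical placeholders, not items scored by the panel in the study.

```python
def rpn(severity: int, probability: int, detection: int) -> int:
    """Risk priority number; a higher value means a more urgent failure mode."""
    for rating in (severity, probability, detection):
        if not 1 <= rating <= 10:
            raise ValueError("FMEA ratings conventionally run from 1 to 10")
    return severity * probability * detection

# (name, severity, probability, detection) -- illustrative values only
failure_modes = [
    ("inappropriate product selection", 8, 6, 3),
    ("pump programming error", 9, 2, 7),
    ("granuloma formation undetected", 7, 4, 5),
]

# Rank failure modes from highest to lowest RPN for discussion.
ranked = sorted(
    ((name, rpn(s, p, d)) for name, s, p, d in failure_modes),
    key=lambda item: item[1],
    reverse=True,
)
```

    Note that RPN ranking shares the limitation raised in record 1 of this digest: it scores each failure mode independently and ignores correlations between risk-reducing actions.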

  20. Multi-criteria decision analysis of breast cancer control in low- and middle-income countries: development of a rating tool for policy makers.

    PubMed

    Venhorst, Kristie; Zelle, Sten G; Tromp, Noor; Lauer, Jeremy A

    2014-01-01

    The objective of this study was to develop a rating tool for policy makers to prioritize breast cancer interventions in low- and middle-income countries (LMICs), based on a simple multi-criteria decision analysis (MCDA) approach. The definition and identification of criteria play a key role in MCDA, and our rating tool could be used as part of a broader priority setting exercise in a local setting. This tool may contribute to a more transparent priority-setting process and fairer decision-making in future breast cancer policy development. First, an expert panel (n = 5) discussed key considerations for tool development. A literature review followed to inventory all relevant criteria and construct an initial set of criteria. A Delphi study was then performed and questionnaires used to discuss a final list of criteria with clear definitions and potential scoring scales. For this Delphi study, multiple breast cancer policy and priority-setting experts from different LMICs were selected and invited by the World Health Organization. Fifteen international experts participated in all three Delphi rounds to assess and evaluate each criterion. This study resulted in a preliminary rating tool for assessing breast cancer interventions in LMICs. The tool consists of 10 carefully crafted criteria (effectiveness, quality of the evidence, magnitude of individual health impact, acceptability, cost-effectiveness, technical complexity, affordability, safety, geographical coverage, and accessibility), with clear definitions and potential scoring scales. This study describes the development of a rating tool to assess breast cancer interventions in LMICs. Our tool can offer supporting knowledge for the use or development of rating tools as part of a broader (MCDA based) priority setting exercise in local settings. Further steps for improving the tool are proposed and should lead to its useful adoption in LMICs.

  1. Identifying acne treatment uncertainties via a James Lind Alliance Priority Setting Partnership

    PubMed Central

    Layton, Alison; Eady, E Anne; Peat, Maggie; Whitehouse, Heather; Levell, Nick; Ridd, Matthew; Cowdell, Fiona; Patel, Mahenda; Andrews, Stephen; Oxnard, Christine; Fenton, Mark; Firkins, Lester

    2015-01-01

    Objectives The Acne Priority Setting Partnership (PSP) was set up to identify and rank treatment uncertainties by bringing together people with acne, and professionals providing care within and beyond the National Health Service (NHS). Setting The UK with international participation. Participants Teenagers and adults with acne, parents, partners, nurses, clinicians, pharmacists, private practitioners. Methods Treatment uncertainties were collected via separate online harvesting surveys, embedded within the PSP website, for patients and professionals. A wide variety of approaches were used to promote the surveys to stakeholder groups with a particular emphasis on teenagers and young adults. Survey submissions were collated using keywords and verified as uncertainties by appraising existing evidence. The 30 most popular themes were ranked via weighted scores from an online vote. At a priority setting workshop, patients and professionals discussed the 18 highest-scoring questions from the vote, and reached consensus on the top 10. Results In the harvesting survey, 2310 people, including 652 professionals and 1456 patients (58% aged 24 years or younger), made submissions containing at least one research question. After checking for relevance and rephrasing, a total of 6255 questions were collated into themes. Valid votes ranking the 30 most common themes were obtained from 2807 participants. The top 10 uncertainties prioritised at the workshop were largely focused on management strategies, optimum use of common prescription medications and the role of non-drug based interventions. More female than male patients took part in the harvesting surveys and vote. A wider range of uncertainties was provided by patients than by professionals. Conclusions Engaging teenagers and young adults in priority setting is achievable using a variety of promotional methods. The top 10 uncertainties reveal an extensive knowledge gap about widely used interventions and the relative merits of drug versus non-drug based treatments in acne management. PMID:26187120

  2. Development of an Identification Procedure for a Large Urban School Corporation: "Identifying Culturally Diverse and Academically Gifted Elementary Students"

    ERIC Educational Resources Information Center

    Pierce, Rebecca L.; Adams, Cheryll M.; Neumeister, Kristie L. Speirs; Cassady, Jerrell C.; Dixon, Felicia A.; Cross, Tracy L.

    2006-01-01

    This paper describes the identification process of a Priority One Jacob K. Javits grant, Clustering Learners Unlocks Equity (Project CLUE), a university-school partnership. Project CLUE uses a "sift-down model" to cast the net widely as the talent pool of gifted second-grade students is formed. The model is based on standardized test scores, a…

  3. A Scoring System to Determine Risk of Delayed Bleeding After Endoscopic Mucosal Resection of Large Colorectal Lesions.

    PubMed

    Albéniz, Eduardo; Fraile, María; Ibáñez, Berta; Alonso-Aguirre, Pedro; Martínez-Ares, David; Soto, Santiago; Gargallo, Carla Jerusalén; Ramos Zabala, Felipe; Álvarez, Marco Antonio; Rodríguez-Sánchez, Joaquín; Múgica, Fernando; Nogales, Óscar; Herreros de Tejada, Alberto; Redondo, Eduardo; Guarner-Argente, Carlos; Pin, Noel; León-Brito, Helena; Pardeiro, Remedios; López-Roses, Leopoldo; Rodríguez-Téllez, Manuel; Jiménez, Alejandra; Martínez-Alcalá, Felipe; García, Orlando; de la Peña, Joaquín; Ono, Akiko; Alberca de Las Parras, Fernando; Pellisé, María; Rivero, Liseth; Saperas, Esteban; Pérez-Roldán, Francisco; Pueyo Royo, Antonio; Eguaras Ros, Javier; Zúñiga Ripa, Alba; Concepción-Martín, Mar; Huelin-Álvarez, Patricia; Colán-Hernández, Juan; Cubiella, Joaquín; Remedios, David; Bessa I Caserras, Xavier; López-Viedma, Bartolomé; Cobian, Julyssa; González-Haba, Mariano; Santiago, José; Martínez-Cara, Juan Gabriel; Valdivielso, Eduardo

    2016-08-01

    After endoscopic mucosal resection (EMR) of colorectal lesions, delayed bleeding is the most common serious complication, but there are no guidelines for its prevention. We aimed to identify risk factors associated with delayed bleeding that required medical attention after discharge until day 15 and develop a scoring system to identify patients at risk. We performed a prospective study of 1214 consecutive patients with nonpedunculated colorectal lesions 20 mm or larger treated by EMR (n = 1255) at 23 hospitals in Spain, from February 2013 through February 2015. Patients were examined 15 days after the procedure, and medical data were collected. We used the data to create a delayed bleeding scoring system, and assigned a weight to each risk factor based on the β parameter from multivariate logistic regression analysis. Patients were classified as being at low, average, or high risk for delayed bleeding. Delayed bleeding occurred in 46 cases (3.7%, 95% confidence interval, 2.7%-4.9%). In multivariate analysis, factors associated with delayed bleeding included age ≥75 years (odds ratio [OR], 2.36; P < .01), American Society of Anesthesiologists classification scores of III or IV (OR, 1.90; P ≤ .05), aspirin use during EMR (OR, 3.16; P < .05), right-sided lesions (OR, 4.86; P < .01), lesion size ≥40 mm (OR, 1.91; P ≤ .05), and a mucosal gap not closed by hemoclips (OR, 3.63; P ≤ .01). We developed a risk scoring system based on these 6 variables that assigned patients to the low-risk (score, 0-3), average-risk (score, 4-7), or high-risk (score, 8-10) categories with an area under the receiver operating characteristic curve of 0.77 (95% confidence interval, 0.70-0.83). In these groups, the probabilities of delayed bleeding were 0.6%, 5.5%, and 40%, respectively. The risk of delayed bleeding after EMR of large colorectal lesions is 3.7%.
    We developed a risk scoring system based on 6 factors that determined the risk for delayed bleeding (area under the receiver operating characteristic curve, 0.77). The factors most strongly associated with delayed bleeding were right-sided lesions, aspirin use, and mucosal defects not closed by hemoclips. Patients considered to be high risk (score, 8-10) had a 40% probability of delayed bleeding. Copyright © 2016 AGA Institute. Published by Elsevier Inc. All rights reserved.
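    The structure of such a points-based score is easy to sketch: sum the points for the risk factors present, then map the total to a band. The abstract reports the six factors, the band cut-offs (0-3 low, 4-7 average, 8-10 high), and the observed bleeding rates per band, but not the per-factor point weights derived from the β parameters, so the weights below are illustrative assumptions only.

```python
# Illustrative point weights (NOT the published ones) for the six factors.
ILLUSTRATIVE_POINTS = {
    "age_ge_75": 1,
    "asa_class_iii_iv": 1,
    "aspirin_during_emr": 2,
    "right_sided_lesion": 3,
    "size_ge_40mm": 1,
    "mucosal_gap_not_clipped": 2,
}

# Observed delayed-bleeding rates per band, as reported in the abstract.
OBSERVED_BLEEDING_RATE = {"low": 0.006, "average": 0.055, "high": 0.40}

def bleeding_risk_band(present_factors: set) -> str:
    """Map a patient's set of risk factors to a risk band via the total score."""
    score = sum(ILLUSTRATIVE_POINTS[f] for f in present_factors)
    if score <= 3:
        return "low"      # score 0-3
    if score <= 7:
        return "average"  # score 4-7
    return "high"         # score 8-10
```

    A design point worth noting: deriving integer weights from regression β coefficients trades a little discrimination for a score clinicians can tally at the bedside.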

  4. A risk score for predicting near-term incidence of hypertension: the Framingham Heart Study.

    PubMed

    Parikh, Nisha I; Pencina, Michael J; Wang, Thomas J; Benjamin, Emelia J; Lanier, Katherine J; Levy, Daniel; D'Agostino, Ralph B; Kannel, William B; Vasan, Ramachandran S

    2008-01-15

    Background: Studies suggest that targeting high-risk, nonhypertensive individuals for treatment may delay hypertension onset, thereby possibly mitigating vascular complications. Risk stratification may facilitate cost-effective approaches to management. Objective: To develop a simple risk score for predicting hypertension incidence by using measures readily obtained in the physician's office. Design: Longitudinal cohort study. Setting: Framingham Heart Study, Framingham, Massachusetts. Participants: 1717 nonhypertensive white individuals 20 to 69 years of age (mean age, 42 years; 54% women), without diabetes and with both parents in the original cohort of the Framingham Heart Study, contributed 5814 person-examinations. Measurements: Scores were developed for predicting the 1-, 2-, and 4-year risk for new-onset hypertension, and performance characteristics of the prediction algorithm were assessed by using calibration and discrimination measures. Parental hypertension was ascertained from examinations of the original cohort of the Framingham Heart Study. Results: During follow-up (median time over all person-examinations, 3.8 years), 796 persons (52% women) developed new-onset hypertension. In multivariable analyses, age, sex, systolic and diastolic blood pressure, body mass index, parental hypertension, and cigarette smoking were significant predictors of hypertension. According to the risk score based on these factors, the 4-year risk for incident hypertension was classified as low (<5%) in 34% of participants, medium (5% to 10%) in 19%, and high (>10%) in 47%. The c-statistic for the prediction model was 0.788, and calibration was very good. Limitations: The risk score findings may not be generalizable to persons of nonwhite race or ethnicity or to persons with diabetes. The risk score algorithm has not been validated in an independent cohort and is based on single measurements of risk factors and blood pressure. Conclusion: The hypertension risk prediction score can be used to estimate an individual's absolute risk for hypertension on short-term follow-up, and it represents a simple, office-based tool that may facilitate management of high-risk individuals with prehypertension.

  5. 99mTc MDP SPECT-CT-Based Modified Mirels Classification for Evaluation of Risk of Fracture in Skeletal Metastasis: A Pilot Study.

    PubMed

    Riaz, Saima; Bashir, Humayun; Niazi, Imran Khalid; Butt, Sumera; Qamar, Faisal

    2018-06-01

    Mirels' scoring system quantifies the risk of sustaining a pathologic fracture in osseous metastases of weight-bearing long bones. Conventional Mirels' scoring is based on radiographs. Our pilot study proposes a 99mTc MDP bone SPECT-CT-based modified Mirels' scoring system and compares it with conventional Mirels' scoring. Cortical lysis was noted in 8 patients (24%) by SPECT-CT versus 2 (6.3%) on radiographs. Additional SPECT-CT parameters were circumferential involvement [1/4 (31%), 1/2 (3%), 3/4 (37.5%), 4/4 (28%)] and extra-osseous soft tissue [3%]. Our pilot study suggests a potential role for SPECT-CT in predicting fracture risk in osseous metastases.
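    For context, the classic radiograph-based Mirels score that this study modifies sums four criteria (site, pain, lesion type, lesion size), each rated 1-3, giving a total of 4-12; a total of 9 or more is conventionally taken to warrant prophylactic fixation. The sketch below encodes that classic scheme only; the SPECT-CT modifications described in the abstract are not reproduced here.

```python
# Classic Mirels criteria; each component is rated 1-3.
MIRELS = {
    "site":   {"upper_limb": 1, "lower_limb": 2, "peritrochanteric": 3},
    "pain":   {"mild": 1, "moderate": 2, "functional": 3},
    "lesion": {"blastic": 1, "mixed": 2, "lytic": 3},
    # Size as a fraction of cortical diameter.
    "size":   {"under_one_third": 1, "one_to_two_thirds": 2, "over_two_thirds": 3},
}

def mirels_score(site: str, pain: str, lesion: str, size: str):
    """Return (total score 4-12, conventional management advice)."""
    total = (MIRELS["site"][site] + MIRELS["pain"][pain]
             + MIRELS["lesion"][lesion] + MIRELS["size"][size])
    # Conventional interpretation: <=7 observe, 8 borderline, >=9 fixation.
    if total >= 9:
        advice = "prophylactic fixation"
    elif total == 8:
        advice = "borderline; clinical judgement"
    else:
        advice = "observation"
    return total, advice
```

    The study's proposal can be read as replacing the radiographic inputs (lesion type, size) with SPECT-CT-derived ones such as cortical lysis and circumferential involvement.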

  6. Systematic derivation of an Australian standard for Tall Man lettering to distinguish similar drug names.

    PubMed

    Emmerton, Lynne; Rizk, Mariam F S; Bedford, Graham; Lalor, Daniel

    2015-02-01

    Confusion between similar drug names can cause harmful medication errors. Similar drug names can be visually differentiated using a typographical technique known as Tall Man lettering. While international conventions exist to derive Tall Man representation for drug names, no national standard has been developed in Australia. This paper describes the derivation of a risk-based, standardized approach to Tall Man lettering for Australia, known as National Tall Man Lettering. A three-stage approach was applied. First, an Australian list of similar drug names was systematically compiled from the literature and clinical error reports. Second, drug name pairs were prioritized using a risk matrix based on the likelihood of name confusion (a four-component score) vs. consensus ratings of the potential severity of the confusion by 31 expert reviewers. The mid-type Tall Man convention was then applied to derive the typography for the highest-priority drug name pairs. Of 250 pairs of confusable Australian drug names, comprising 341 discrete names, 35 pairs were identified by the matrix as an 'extreme' risk if confused. The mid-type Tall Man convention was successfully applied to the majority of the prioritized drugs; some adaptation of the convention was required. This systematic process for identifying confusable drug names and their associated risk, followed by application of a convention for Tall Man lettering, has produced a standard now endorsed for use in clinical settings in Australia. Periodic updating is recommended to accommodate new drug names and error reports. © 2014 John Wiley & Sons, Ltd.
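    A likelihood × severity risk matrix of the kind described reduces to a lookup from two ordinal bands to a rating. The band names, the additive cell rule, and the 'extreme' threshold below are illustrative assumptions; the published Australian matrix and its four-component likelihood score are not reproduced here.

```python
# Ordinal bands, lowest to highest -- illustrative labels only.
LIKELIHOOD_BANDS = ["rare", "possible", "likely", "almost_certain"]
SEVERITY_BANDS = ["minor", "moderate", "major", "catastrophic"]

def risk_rating(likelihood: str, severity: str) -> str:
    """Map a (likelihood, severity) cell of the matrix to a risk rating."""
    li = LIKELIHOOD_BANDS.index(likelihood)
    si = SEVERITY_BANDS.index(severity)
    total = li + si  # simple additive cell rule, an assumption
    if total >= 5:
        return "extreme"
    if total >= 3:
        return "high"
    if total >= 2:
        return "medium"
    return "low"
```

    In the study, pairs rated 'extreme' were the 35 name pairs prioritized for Tall Man typography.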

  7. Beef cattle welfare in the USA: identification of priorities for future research.

    PubMed

    Tucker, Cassandra B; Coetzee, Johann F; Stookey, Joseph M; Thomson, Daniel U; Grandin, Temple; Schwartzkopf-Genswein, Karen S

    2015-12-01

    This review identifies priorities for beef cattle welfare research in the USA. Based on our professional expertise and synthesis of existing literature, we identify two themes in intensive aspects of beef production: areas where policy-based actions are needed and those where additional research is required. For some topics, considerable research informs best practice, yet gaps remain between scientific knowledge and implementation. For example, many of the risk factors and management strategies to prevent respiratory disease are understood, but only used by a relatively small portion of the industry. This is an animal health issue that will require leadership and discussion to gain widespread adoption of practices that benefit cattle welfare. There is evidence of success when such actions are taken, as illustrated by the recent improvements in handling at US slaughter facilities. Our highest priorities for additional empirical evidence are: the effect of technologies used to either promote growth or manage cattle in feedlots, identification of management risk factors for disease in feedlots, and management decisions about transport (rest stops, feed/water deprivation, climatic conditions, stocking density). Additional research is needed to inform science-based recommendations about environmental features such as dry lying areas (mounds), shade, water and feed, as well as trailer design.

  8. Protocol of a feasibility study for cognitive assessment of an ageing cohort within the Southeast Asia Community Observatory (SEACO), Malaysia

    PubMed Central

    Mohan, Devi; Stephan, Blossom C M; Allotey, Pascale; Jagger, Carol; Pearce, Mark; Siervo, Mario; Reidpath, Daniel D

    2017-01-01

    Introduction There is a growing proportion of population aged 65 years and older in low-income and middle-income countries. In Malaysia, this proportion is predicted to increase from 5.1% in 2010 to more than 15.4% by 2050. Cognitive ageing and dementia are global health priorities. However, risk factors and disease associations in a multiethnic, middle-income country like Malaysia may not be consistent with those reported in other world regions. Knowing the burden of cognitive impairment and its risk factors in Malaysia is necessary for the development of management strategies and would provide valuable information for other transitional economies. Methods and analysis This is a community-based feasibility study focused on the assessment of cognition, embedded in the longitudinal study of health and demographic surveillance site of the South East Asia Community Observatory (SEACO), in Malaysia. In total, 200 adults aged ≥50 years are selected for an in-depth health and cognitive assessment including the Mini Mental State Examination, the Montreal Cognitive Assessment, blood pressure, anthropometry, gait speed, hand grip strength, Depression Anxiety Stress Score and dried blood spots. Discussion and conclusions The results will inform the feasibility, response rates and operational challenges for establishing an ageing study focused on cognitive function in similar middle-income country settings. Knowing the burden of cognitive impairment and dementia and risk factors for disease will inform local health priorities and management, and place these within the context of increasing life expectancy. Ethics and dissemination The study protocol is approved by the Monash University Human Research Ethics Committee. Informed consent is obtained from all the participants. The project's analysed data and findings will be made available through publications and conference presentations and a data sharing archive. 
Reports on key findings will be made available as community briefs on the SEACO website. PMID:28104710

  9. Setting research priorities for maternal, newborn, child health and nutrition in India by engaging experts from 256 indigenous institutions contributing over 4000 research ideas: a CHNRI exercise by ICMR and INCLEN.

    PubMed

    Arora, Narendra K; Mohapatra, Archisman; Gopalan, Hema S; Wazny, Kerri; Thavaraj, Vasantha; Rasaily, Reeta; Das, Manoj K; Maheshwari, Meenu; Bahl, Rajiv; Qazi, Shamim A; Black, Robert E; Rudan, Igor

    2017-06-01

    Health research in low- and middle-income countries (LMICs) is often driven by donor priorities rather than by the needs of the countries where the research takes place. This lack of alignment between donors' priorities and local research needs may be one of the reasons why countries fail to achieve set goals for population health and nutrition. India has a high burden of morbidity and mortality in women, children and infants. Looking ahead to the Sustainable Development Goals, the Indian Council of Medical Research (ICMR) and the INCLEN Trust International (INCLEN) employed the Child Health and Nutrition Research Initiative's (CHNRI) research priority setting method for maternal, neonatal, child health and nutrition with a timeline of 2016-2025. The exercise was the largest use of the CHNRI methodology to date, both in terms of participants and ideas generated, and it also expanded on the methodology. CHNRI is a crowdsourcing-based exercise that involves using the collective intelligence of a group of stakeholders, usually researchers, to generate and score research options against a set of criteria. This paper reports on a large umbrella CHNRI that was divided into four theme-specific CHNRIs (maternal, newborn, child health and nutrition). A National Steering Group oversaw the exercise, and four theme-specific Research Sub-Committees provided technical support in finalizing the scoring criteria and refining the research ideas for the respective thematic areas. The exercise engaged participants from 256 institutions across India - 4003 research ideas were generated by 498 experts and consolidated into 373 research options (maternal health: 122; newborn health: 56; child health: 101; nutrition: 94); 893 experts scored these against five criteria (answerability, relevance, equity, innovation and out-of-box thinking, investment on research). Relative weights for the criteria were assigned by 79 members of the Larger Reference Group.
    Given India's diversity, priorities were identified at the national level and at three regional levels: (i) the Empowered Action Group (EAG) and North-Eastern States; (ii) States and Union territories in Northern India (including West Bengal); and (iii) States and Union territories in Southern and Western parts of India. The exercise leveraged the inherent flexibility of the CHNRI method in multiple ways, expanding on the methodology to enable identification of research priorities at both national and regional levels. However, prioritized research options are only valuable if they are put to use, and we hope that donors will take advantage of this prioritized list of research options.

  10. MANUSCRIPT IN PRESS: DEMENTIA & GERIATRIC COGNITIVE DISORDERS

    PubMed Central

    O’Bryant, Sid E.; Xiao, Guanghua; Barber, Robert; Cullum, C. Munro; Weiner, Myron; Hall, James; Edwards, Melissa; Grammas, Paula; Wilhelmsen, Kirk; Doody, Rachelle; Diaz-Arrastia, Ramon

    2015-01-01

    Background Prior work on the link between blood-based biomarkers and cognitive status has largely been based on dichotomous classifications rather than detailed neuropsychological functioning. The current project was designed to create serum-based biomarker algorithms that predict neuropsychological test performance. Methods A battery of neuropsychological measures was administered. Random forest analyses were utilized to create neuropsychological test-specific biomarker risk scores in a training set that were entered into linear regression models predicting the respective test scores in the test set. Serum multiplex biomarker data were analyzed on 108 proteins from 395 participants (197 AD cases and 198 controls) from the Texas Alzheimer’s Research and Care Consortium. Results The biomarker risk scores were significant predictors (p<0.05) of scores on all neuropsychological tests. With the exception of premorbid intellectual status (6.6%), the biomarker risk scores alone accounted for a minimum of 12.9% of the variance in neuropsychological scores. Biomarker algorithms (biomarker risk scores + demographics) accounted for substantially more variance in scores. Review of the variable importance plots indicated differential patterns of biomarker significance for each test, suggesting the possibility of domain-specific biomarker algorithms. Conclusions Our findings provide proof-of-concept for a novel area of scientific discovery, which we term “molecular neuropsychology.” PMID:24107792

  11. Prognostic implications of serial risk score assessments in patients with pulmonary arterial hypertension: a Registry to Evaluate Early and Long-Term Pulmonary Arterial Hypertension Disease Management (REVEAL) analysis.

    PubMed

    Benza, Raymond L; Miller, Dave P; Foreman, Aimee J; Frost, Adaani E; Badesch, David B; Benton, Wade W; McGoon, Michael D

    2015-03-01

    Data from the Registry to Evaluate Early and Long-Term Pulmonary Arterial Hypertension Disease Management (REVEAL) were used previously to develop a risk score calculator to predict 1-year survival. We evaluated prognostic implications of changes in the risk score and individual risk-score parameters over 12 months. Patients were grouped by decreased, unchanged, or increased risk score from enrollment to 12 months. Kaplan-Meier estimates of subsequent 1-year survival were made based on change in the risk score during the initial 12 months of follow-up. Cox regression was used for multivariable analysis. Of 2,529 patients in the analysis cohort, the risk score was decreased in 800, unchanged in 959, and increased in 770 at 12 months post-enrollment. Six parameters (functional class, systolic blood pressure, heart rate, 6-minute walk distance, brain natriuretic peptide levels, and pericardial effusion) each changed sufficiently over time to improve or worsen risk scores in ≥5% of patients. One-year survival estimates in the subsequent year were 93.7%, 90.3%, and 84.6% in patients with a decreased, unchanged, and increased risk score at 12 months, respectively. Change in risk score significantly predicted future survival, adjusting for risk at enrollment. Considering follow-up risk concurrently with risk at enrollment, follow-up risk was a much stronger predictor, although risk at enrollment maintained a significant effect on future survival. Changes in REVEAL risk scores occur in most patients with pulmonary arterial hypertension over a 12-month period and are predictive of survival. Thus, serial risk score assessments can identify changes in disease trajectory that may warrant treatment modifications. Copyright © 2015 International Society for Heart and Lung Transplantation. All rights reserved.

  12. Demographic monitoring of wild muriqui populations: Criteria for defining priority areas and monitoring intensity.

    PubMed

    Strier, Karen B; Possamai, Carla B; Tabacow, Fernanda P; Pissinatti, Alcides; Lanna, Andre M; Rodrigues de Melo, Fabiano; Moreira, Leandro; Talebi, Maurício; Breves, Paula; Mendes, Sérgio L; Jerusalinsky, Leandro

    2017-01-01

    Demographic data are essential to assessments of the status of endangered species. However, establishing an integrated monitoring program to obtain useful data on contemporary and future population trends requires both the identification of priority areas and populations and realistic evaluations of the kinds of data that can be obtained under different monitoring regimes. We analyzed all known populations of a critically endangered primate, the muriqui (genus Brachyteles), using population size, genetic uniqueness, geographic importance (including potential importance in corridor programs), and implementability scores to define monitoring priorities. Our analyses revealed nine priority populations for the northern muriqui (B. hypoxanthus) and nine for the southern muriqui (B. arachnoides). In addition, we employed knowledge of muriqui developmental and life history characteristics to define the minimum monitoring intensity needed to evaluate demographic trends along a continuum ranging from simple descriptive changes in population size to predictions of population changes derived from individual-based life histories. Our study, stimulated by the Brazilian government's National Action Plan for the Conservation of Muriquis, is fundamental to meeting the conservation goals for this genus, and also provides a model for defining priorities and methods for the implementation of integrated demographic monitoring programs for other endangered and critically endangered species of primates.

  13. Evaluating the Laboratory Risk Indicator to Differentiate Cellulitis from Necrotizing Fasciitis in the Emergency Department

    PubMed Central

    Neeki, Michael M.; Dong, Fanglong; Au, Christine; Toy, Jake; Khoshab, Nima; Lee, Carol; Kwong, Eugene; Yuen, Ho Wang; Lee, Jonathan; Ayvazian, Arbi; Lux, Pamela; Borger, Rodney

    2017-01-01

    Introduction Necrotizing fasciitis (NF) is an uncommon but rapidly progressive infection that results in gross morbidity and mortality if not treated in its early stages. The Laboratory Risk Indicator for Necrotizing Fasciitis (LRINEC) score is used to distinguish NF from other soft tissue infections such as cellulitis or abscess. This study analyzed the ability of the LRINEC score to accurately rule out NF in patients who were confirmed to have cellulitis, as well as its capability to differentiate cellulitis from NF. Methods This was a 10-year retrospective chart-review study that included emergency department (ED) patients ≥18 years old with a diagnosis of cellulitis or NF. We calculated a LRINEC score ranging from 0 to 13 for each patient with all pertinent laboratory values. Three categories were developed per the original LRINEC score guidelines denoting NF risk stratification: high risk (LRINEC score ≥8), moderate risk (LRINEC score 6–7), and low risk (LRINEC score ≤5). All cases missing laboratory values were missing a C-reactive protein (CRP) value. Since a negative or positive CRP value contributes 0 or 4 points, respectively, to the LRINEC score, a score of 0 or 1 without a CRP value would still place the patient in the "low risk" group, and a score of 8 or greater without a CRP value would still place the patient in the "high risk" group. These patients with missing CRP values were added to the respective groups. Results Among the 948 ED patients with cellulitis, more than one-tenth (10.7%, n=102 of 948) were at moderate or high risk for NF based on the LRINEC score. Of the 135 ED patients with a diagnosis of NF, 22 patients had valid CRP laboratory values and LRINEC scores were calculated. Among the other 113 patients without CRP values, six patients had a LRINEC score ≥8, and 19 patients had a LRINEC score ≤1. Thus, a total of 47 NF patients could be classified by LRINEC score.
More than half of the NF group (63.8%, n=30 of 47) had a low risk based on LRINEC ≤5. Moreover, LRINEC appeared to perform better in the diabetes population than in the non-diabetes population. Conclusion The LRINEC score may not be an accurate tool for NF risk stratification and differentiation between cellulitis and NF in the ED setting. This decision instrument demonstrated a high false positive rate when determining NF risk stratification in confirmed cases of cellulitis and a high false negative rate in cases of confirmed NF. PMID:28611889
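    The three LRINEC tiers and the CRP logic described above can be captured in a short classifier. This is an illustrative sketch of the study's stratification rules, not code from the paper; the function name is invented:

```python
def lrinec_risk_tier(score, has_crp=True):
    """Map a LRINEC score (0-13) to the NF risk tiers used in the study."""
    if not has_crp:
        # CRP contributes 0 or 4 points, so a partial score is decidable
        # only at the extremes: <=1 stays low (1 + 4 = 5 <= 5) and >=8
        # stays high regardless of the missing CRP component.
        if score >= 8:
            return "high"
        if score <= 1:
            return "low"
        return "indeterminate"
    if score >= 8:
        return "high"
    if score >= 6:
        return "moderate"
    return "low"
```

    For example, `lrinec_risk_tier(4, has_crp=False)` returns `"indeterminate"`, which is why only 25 of the 113 CRP-less NF patients could be classified.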

  14. Risk-informed Management of Water Infrastructure in the United States: History, Development, and Best Practices

    NASA Astrophysics Data System (ADS)

    Wolfhope, J.

    2017-12-01

    This presentation will focus on the history, development, and best practices for evaluating the risks associated with the portfolio of water infrastructure in the United States. These practices have evolved from the early development of the Federal Guidelines for Dam Safety and the establishment of the National Dam Safety Program, to the most recent update of the Best Practices for Dam and Levee Risk Analysis jointly published by the U.S. Department of Interior Bureau of Reclamation and the U.S. Army Corps of Engineers. Since President Obama signed the Water Infrastructure Improvements for the Nation (WIIN) Act on December 16, 2016, adding a new grant program under FEMA's National Dam Safety Program, the focus has been on establishing a risk-based priority system for use in identifying eligible high hazard potential dams for which grants may be made. Finally, the presentation provides thoughts on the future direction and priorities for managing the risk of dams and levees in the United States.

  15. A genetic risk score based on direct associations with coronary heart disease improves coronary heart disease risk prediction in the Atherosclerosis Risk in Communities (ARIC), but not in the Rotterdam and Framingham Offspring, Studies

    PubMed Central

    Brautbar, Ariel; Pompeii, Lisa A.; Dehghan, Abbas; Ngwa, Julius S.; Nambi, Vijay; Virani, Salim S.; Rivadeneira, Fernando; Uitterlinden, André G.; Hofman, Albert; Witteman, Jacqueline C.M.; Pencina, Michael J.; Folsom, Aaron R.; Cupples, L. Adrienne; Ballantyne, Christie M.; Boerwinkle, Eric

    2013-01-01

    Objective Multiple studies have identified single-nucleotide polymorphisms (SNPs) that are associated with coronary heart disease (CHD). We examined whether SNPs selected based on predefined criteria will improve CHD risk prediction when added to traditional risk factors (TRFs). Methods SNPs were selected from the literature based on association with CHD, lack of association with a known CHD risk factor, and successful replication. A genetic risk score (GRS) was constructed based on these SNPs. Cox proportional hazards model was used to calculate CHD risk based on the Atherosclerosis Risk in Communities (ARIC) and Framingham CHD risk scores with and without the GRS. Results The GRS was associated with risk for CHD (hazard ratio [HR] = 1.10; 95% confidence interval [CI]: 1.07–1.13). Addition of the GRS to the ARIC risk score significantly improved discrimination, reclassification, and calibration beyond that afforded by TRFs alone in non-Hispanic whites in the ARIC study. The area under the receiver operating characteristic curve (AUC) increased from 0.742 to 0.749 (Δ= 0.007; 95% CI, 0.004–0.013), and the net reclassification index (NRI) was 6.3%. Although the risk estimates for CHD in the Framingham Offspring (HR = 1.12; 95% CI: 1.10–1.14) and Rotterdam (HR = 1.08; 95% CI: 1.02–1.14) Studies were significantly improved by adding the GRS to TRFs, improvements in AUC and NRI were modest. Conclusion Addition of a GRS based on direct associations with CHD to TRFs significantly improved discrimination and reclassification in white participants of the ARIC Study, with no significant improvement in the Rotterdam and Framingham Offspring Studies. PMID:22789513
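    A GRS of this kind is typically a count of risk alleles across the selected SNPs (optionally weighted by per-SNP effect size). The sketch below shows the unweighted allele-count construction; the SNP IDs and genotypes are hypothetical, and the study's exact weighting may differ:

```python
def genetic_risk_score(genotypes, risk_alleles):
    """Unweighted GRS: total number of risk alleles (0-2 per SNP)
    carried across the panel of selected SNPs."""
    return sum(genotypes.get(snp, "").count(allele)
               for snp, allele in risk_alleles.items())

# Hypothetical two-SNP panel for illustration.
risk_alleles = {"rs0000001": "C", "rs0000002": "G"}
genotypes = {"rs0000001": "CC", "rs0000002": "AG"}
score = genetic_risk_score(genotypes, risk_alleles)  # 2 + 1 = 3
```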

  16. Recalibration of the ACC/AHA Risk Score in Two Population-Based German Cohorts

    PubMed Central

    de las Heras Gala, Tonia; Geisel, Marie Henrike; Peters, Annette; Thorand, Barbara; Baumert, Jens; Lehmann, Nils; Jöckel, Karl-Heinz; Moebus, Susanne; Erbel, Raimund; Meisinger, Christine

    2016-01-01

    Background The 2013 ACC/AHA guidelines introduced an algorithm for risk assessment of atherosclerotic cardiovascular disease (ASCVD) within 10 years. In Germany, risk assessment with the ESC SCORE is limited to cardiovascular mortality. Applicability of the novel ACC/AHA risk score to the German population has not yet been assessed. We therefore sought to recalibrate and evaluate the ACC/AHA risk score in two German cohorts and to compare it to the ESC SCORE. Methods We studied 5,238 participants from the KORA surveys S3 (1994–1995) and S4 (1999–2001) and 4,208 subjects from the Heinz Nixdorf Recall (HNR) Study (2000–2003). There were 383 (7.3%) and 271 (6.4%) first non-fatal or fatal ASCVD events within 10 years in KORA and in HNR, respectively. Risk scores were evaluated in terms of calibration and discrimination performance. Results The original ACC/AHA risk score overestimated 10-year ASCVD rates by 37% in KORA and 66% in HNR. After recalibration, miscalibration diminished to 8% underestimation in KORA and 12% overestimation in HNR. Discrimination performance of the ACC/AHA risk score was not affected by the recalibration (KORA: C = 0.78, HNR: C = 0.74). The ESC SCORE overestimated by 5% in KORA and by 85% in HNR. The corresponding C-statistic was 0.82 in KORA and 0.76 in HNR. Conclusions The recalibrated ACC/AHA risk score showed strongly improved calibration compared to the original ACC/AHA risk score. Predicting only cardiovascular mortality, discrimination performance of the commonly used ESC SCORE remained somewhat superior to the ACC/AHA risk score. Nevertheless, the recalibrated ACC/AHA risk score may provide a meaningful tool for estimating 10-year risk of fatal and non-fatal cardiovascular disease in Germany. PMID:27732641

  17. Recalibration of the ACC/AHA Risk Score in Two Population-Based German Cohorts.

    PubMed

    de Las Heras Gala, Tonia; Geisel, Marie Henrike; Peters, Annette; Thorand, Barbara; Baumert, Jens; Lehmann, Nils; Jöckel, Karl-Heinz; Moebus, Susanne; Erbel, Raimund; Meisinger, Christine; Mahabadi, Amir Abbas; Koenig, Wolfgang

    2016-01-01

    The 2013 ACC/AHA guidelines introduced an algorithm for risk assessment of atherosclerotic cardiovascular disease (ASCVD) within 10 years. In Germany, risk assessment with the ESC SCORE is limited to cardiovascular mortality. Applicability of the novel ACC/AHA risk score to the German population has not yet been assessed. We therefore sought to recalibrate and evaluate the ACC/AHA risk score in two German cohorts and to compare it to the ESC SCORE. We studied 5,238 participants from the KORA surveys S3 (1994-1995) and S4 (1999-2001) and 4,208 subjects from the Heinz Nixdorf Recall (HNR) Study (2000-2003). There were 383 (7.3%) and 271 (6.4%) first non-fatal or fatal ASCVD events within 10 years in KORA and in HNR, respectively. Risk scores were evaluated in terms of calibration and discrimination performance. The original ACC/AHA risk score overestimated 10-year ASCVD rates by 37% in KORA and 66% in HNR. After recalibration, miscalibration diminished to 8% underestimation in KORA and 12% overestimation in HNR. Discrimination performance of the ACC/AHA risk score was not affected by the recalibration (KORA: C = 0.78, HNR: C = 0.74). The ESC SCORE overestimated by 5% in KORA and by 85% in HNR. The corresponding C-statistic was 0.82 in KORA and 0.76 in HNR. The recalibrated ACC/AHA risk score showed strongly improved calibration compared to the original ACC/AHA risk score. Predicting only cardiovascular mortality, discrimination performance of the commonly used ESC SCORE remained somewhat superior to the ACC/AHA risk score. Nevertheless, the recalibrated ACC/AHA risk score may provide a meaningful tool for estimating 10-year risk of fatal and non-fatal cardiovascular disease in Germany.

  18. The association between creatinine versus cystatin C-based eGFR and cardiovascular risk in children with chronic kidney disease using a modified PDAY risk score.

    PubMed

    Sharma, Sheena; Denburg, Michelle R; Furth, Susan L

    2017-08-01

    Children with chronic kidney disease (CKD) have a high prevalence of cardiovascular disease (CVD) risk factors, which may contribute to the development of cardiovascular events in adulthood. Among adults with CKD, cystatin C-based estimates of glomerular filtration rate (eGFR) demonstrate a stronger predictive value for cardiovascular events than creatinine-based eGFR. The PDAY (Pathobiological Determinants of Atherosclerosis in Youth) risk score is a validated tool used to estimate the probability of advanced coronary atherosclerotic lesions in young adults. Our aim was to assess the association between cystatin C-based versus creatinine-based eGFR (eGFR cystatin C and eGFR creatinine, respectively) and cardiovascular risk, using a modified PDAY risk score as a proxy for CVD in children and young adults. We performed a cross-sectional study of 71 participants with CKD [median age 15.5 years; inter-quartile range (IQR) 13, 17] and 33 healthy controls (median age 15.1 years; IQR 13, 17). eGFR was calculated using age-appropriate creatinine- and cystatin C-based formulas. Median eGFR creatinine and eGFR cystatin C for CKD participants were 50 (IQR 30, 75) and 53 (32, 74) mL/min/1.73 m², respectively. For the healthy controls, median eGFR creatinine and eGFR cystatin C were 112 (IQR 85, 128) and 106 (95, 123) mL/min/1.73 m², respectively. A modified PDAY risk score was calculated based on sex, age, serum lipoprotein concentrations, obesity, smoking status, hypertension, and hyperglycemia. Modified PDAY scores ranged from -2 to 20. The Spearman's correlations of eGFR creatinine and eGFR cystatin C with coronary artery PDAY scores were -0.23 (p = 0.02) and -0.28 (p = 0.004), respectively. Ordinal logistic regression also showed a similar association of higher eGFR creatinine and higher eGFR cystatin C with lower PDAY scores. When stratified by age <18 or ≥18 years, the correlations of eGFR creatinine and eGFR cystatin C with PDAY score were modest and similar in children [-0.29 (p = 0.008) vs. -0.32 (p = 0.004), respectively]. Despite a smaller sample size, the correlation in adults was stronger for eGFR cystatin C (-0.57; p = 0.006) than for eGFR creatinine (-0.40; p = 0.07). Overall, the correlation between cystatin C- or creatinine-based eGFR and the PDAY risk score was similar in children. Further studies in children with CKD should explore the association between cystatin C and cardiovascular risk.

  19. A comparative study of selected Georgia elementary principals' perceptions of environmental knowledge

    NASA Astrophysics Data System (ADS)

    Campbell, Joyce League

    This study sought to establish baseline data on environmental knowledge, opinions, and perceptions of elementary principals and to make comparisons based on academic success rankings of schools and to national results. The self-reported study looked at 200 elementary principals in the state of Georgia. The population selected for the study included principals from the 100 top and 100 bottom academically ranked elementary schools as reported in the Georgia Public Policy Foundation Report Card for Parents. Their scores on the NEETF/Roper Environmental Knowledge Survey were compared between these two Georgia groups and to a national sample. Georgia elementary principals' scores were compared to environmental programs evident in their schools. The two Georgia groups were also compared on environmental opinion and perception responses on mandates, programs in schools and time devoted to these, environmental education as a priority, and the impact of various factors on the strength of environmental studies in schools. Georgia elementary principals leading schools at the bottom of the academic performance scale achieved environmental knowledge scores comparable to the national sample. However, principals of academically successful schools scored significantly higher on environmental knowledge than their colleagues from low performing schools (p < .05) and higher than the national sample (p < .001). Both Georgia principal groups strongly support a mandated environmental education curriculum for Georgia. The two groups were comparable on distributions of time devoted to environmental education across grade levels; however, principals from the more successful schools reported significantly (p < .01) greater amounts of time allotted to environmental studies. Both groups reported the same variety of environmental programs and practices evident in their schools and similar participation in these activities at various grade levels. 
Most significant (p < .01) was the comparison of ratings each group gave to environmental education as an instructional priority in their schools; principals supervising successful school programs viewed environmental education as a higher priority. These successful principals also recognized the importance of both administrator and staff interest as influencing factors and ranked these two variables as strongly impacting the success or failure of environmental initiatives in schools. Comparison of principals' environmental knowledge scores to the number of programs showed no significant relationship. (Abstract shortened by UMI.)

  20. [Characterisation of thromboembolic risk in a Mexican population with non-valvular atrial fibrillation and its effect on anticoagulation (MAYA Study)].

    PubMed

    Vázquez-Acosta, Jorge A; Ramírez-Gutiérrez, Álvaro E; Cerecedo-Rosendo, Mario A; Olivera-Barrera, Francisco M; Tenorio-Sánchez, Salvador S; Nieto-Villarreal, Javier; González-Borjas, José M; Villanueva-Rodríguez, Estefanie

    2016-01-01

    To evaluate the risk of stroke and bleeding using the CHA2DS2-VASc and HAS-BLED scores in Mexican patients with atrial fibrillation, and to analyze whether the risk score obtained determined treatment decisions regarding antithrombotic therapy. This is an observational, retrospective study of Mexican patients recently diagnosed with atrial fibrillation. The risk of stroke was assessed using the CHA2DS2-VASc score; the bleeding risk was evaluated using the HAS-BLED score. The frequency of use of antithrombotic therapy was calculated according to the results of the risk score assessment. A total of 350 patients with non-valvular atrial fibrillation were analyzed. Overall, 92.9% of patients had a high risk of stroke (score ≥ 2) according to the CHA2DS2-VASc score, yet only 17.2% were treated with anticoagulants. A high proportion of patients with atrial fibrillation (72.5%) showed both a high risk of stroke and a high risk of bleeding based on the HAS-BLED score. In this group of patients with atrial fibrillation from Northeast Mexico, anticoagulation is remarkably underused despite these patients' high risk of stroke.
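    For reference, the CHA2DS2-VASc score used above is the standard sum of weighted clinical factors, with a score ≥ 2 taken as high stroke risk. A minimal sketch of the standard components (a generic illustration, not code from the study):

```python
def cha2ds2_vasc(age, female, chf, hypertension, diabetes,
                 stroke_tia, vascular_disease):
    """Standard CHA2DS2-VASc stroke-risk score (range 0-9)."""
    score = 0
    score += 1 if chf else 0               # C: congestive heart failure
    score += 1 if hypertension else 0      # H: hypertension
    score += 2 if age >= 75 else (1 if age >= 65 else 0)  # A2 / A
    score += 1 if diabetes else 0          # D: diabetes mellitus
    score += 2 if stroke_tia else 0        # S2: prior stroke/TIA/thromboembolism
    score += 1 if vascular_disease else 0  # V: vascular disease
    score += 1 if female else 0            # Sc: sex category (female)
    return score

# A 76-year-old woman with hypertension scores 2 + 1 + 1 = 4 (high risk).
```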

  1. Discovery, research, and development of new antibiotics: the WHO priority list of antibiotic-resistant bacteria and tuberculosis.

    PubMed

    Tacconelli, Evelina; Carrara, Elena; Savoldi, Alessia; Harbarth, Stephan; Mendelson, Marc; Monnet, Dominique L; Pulcini, Céline; Kahlmeter, Gunnar; Kluytmans, Jan; Carmeli, Yehuda; Ouellette, Marc; Outterson, Kevin; Patel, Jean; Cavaleri, Marco; Cox, Edward M; Houchens, Chris R; Grayson, M Lindsay; Hansen, Paul; Singh, Nalini; Theuretzbacher, Ursula; Magrini, Nicola

    2018-03-01

    The spread of antibiotic-resistant bacteria poses a substantial threat to morbidity and mortality worldwide. Due to its large public health and societal implications, multidrug-resistant tuberculosis has been long regarded by WHO as a global priority for investment in new drugs. In 2016, WHO was requested by member states to create a priority list of other antibiotic-resistant bacteria to support research and development of effective drugs. We used a multicriteria decision analysis method to prioritise antibiotic-resistant bacteria; this method involved the identification of relevant criteria to assess priority against which each antibiotic-resistant bacterium was rated. The final priority ranking of the antibiotic-resistant bacteria was established after a preference-based survey was used to obtain expert weighting of criteria. We selected 20 bacterial species with 25 patterns of acquired resistance and ten criteria to assess priority: mortality, health-care burden, community burden, prevalence of resistance, 10-year trend of resistance, transmissibility, preventability in the community setting, preventability in the health-care setting, treatability, and pipeline. We stratified the priority list into three tiers (critical, high, and medium priority), using the 33rd percentile of the bacterium's total scores as the cutoff. Critical-priority bacteria included carbapenem-resistant Acinetobacter baumannii and Pseudomonas aeruginosa, and carbapenem-resistant and third-generation cephalosporin-resistant Enterobacteriaceae. The highest ranked Gram-positive bacteria (high priority) were vancomycin-resistant Enterococcus faecium and meticillin-resistant Staphylococcus aureus. Of the bacteria typically responsible for community-acquired infections, clarithromycin-resistant Helicobacter pylori, and fluoroquinolone-resistant Campylobacter spp, Neisseria gonorrhoeae, and Salmonella typhi were included in the high-priority tier. 
Future development strategies should focus on antibiotics that are active against multidrug-resistant tuberculosis and Gram-negative bacteria. The global strategy should include antibiotic-resistant bacteria responsible for community-acquired infections such as Salmonella spp, Campylobacter spp, N gonorrhoeae, and H pylori. World Health Organization. Copyright © 2018 Elsevier Ltd. All rights reserved.

  2. The mortality risk score and the ADG score: two points-based scoring systems for the Johns Hopkins aggregated diagnosis groups to predict mortality in a general adult population cohort in Ontario, Canada.

    PubMed

    Austin, Peter C; Walraven, Carl van

    2011-10-01

    Logistic regression models that incorporate age, sex, and indicator variables for the Johns Hopkins Aggregated Diagnosis Group (ADG) categories have been shown to accurately predict all-cause mortality in adults. Our objective was to develop 2 different point-scoring systems using the ADGs. The Mortality Risk Score (MRS) collapses age, sex, and the ADGs into a single summary score that predicts the annual risk of all-cause death in adults; the ADG Score derives weights for the individual ADG diagnosis groups. We used a retrospective cohort constructed from population-based administrative data. All 10,498,413 residents of Ontario, Canada, between the ages of 20 and 100 years who were alive on their birthday in 2007 participated in this study. Participants were randomly divided into derivation and validation samples. The outcome was death within 1 year. In the derivation cohort, the MRS ranged from -21 to 139 (median value 29, IQR 17 to 44). In the validation sample, a logistic regression model with the MRS as the sole predictor significantly predicted the risk of 1-year mortality with a c-statistic of 0.917; a regression model with age, sex, and the ADG Score had similar performance. Both methods accurately predicted the risk of 1-year mortality across the 20 vigintiles of risk. The MRS combines values for a person's age, sex, and the Johns Hopkins ADGs to accurately predict 1-year mortality in adults. The ADG Score is a weighted score representing the presence or absence of the 32 ADG diagnosis groups. These scores will facilitate risk adjustment by health services researchers using administrative health care databases.
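    The usual recipe behind points-based systems like the MRS and the ADG Score is to scale each regression coefficient (log-odds) by a constant and round to an integer, so a patient's score becomes a simple sum of points. A generic sketch of that technique with hypothetical coefficients, not the published Ontario weights:

```python
def points_from_coefficients(coefs, points_per_unit=5):
    """Convert log-odds coefficients into integer points."""
    return {name: round(beta * points_per_unit) for name, beta in coefs.items()}

def total_score(points, present):
    """Sum the points for the predictors present for one person."""
    return sum(points[name] for name in present)

# Hypothetical coefficients for illustration only.
coefs = {"age_per_decade": 0.6, "male": 0.2, "ADG_malignancy": 1.4}
points = points_from_coefficients(coefs)  # {'age_per_decade': 3, 'male': 1, 'ADG_malignancy': 7}
score = total_score(points, ["male", "ADG_malignancy"])  # 8
```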

  3. Characteristics of Treatment Decisions to Address Challenging Behaviors in Children with Autism Spectrum Disorder.

    PubMed

    Anixt, Julia S; Meinzen-Derr, Jareen; Estridge, Halley; Smith, Laura; Brinkman, William B

    2018-05-01

    To describe the characteristics of treatment decisions to address challenging behaviors in children with autism spectrum disorder (ASD). Parents of children aged 4 to 15 years with ASD seen in a developmental behavioral pediatric (DBP) clinic completed validated measures to characterize their child's behaviors and their own level of stress. Parents reported their treatment priority before the visit. During the visit, we assessed shared decision making (SDM) using the Observing Patient Involvement (OPTION) scale and alignment of the clinician's treatment plan with the parent's priority. Before and after the visit, parents rated their uncertainty about the treatment plan using the Decisional Conflict Scale (DCS). We calculated descriptive statistics for the measures. Fifty-four families participated. Children were a mean (SD) age of 8.8 (3.3) years, and 87% were male. Children had a variety of behavioral challenges, and parents reported high levels of stress. Commonly reported parent treatment priorities were hyperactivity, tantrums, anxiety, and poor social skills. Levels of SDM were low, with a mean (SD) OPTION score of 24.5 (9.7). Parent priorities were addressed in 65% of treatment plans. Approximately 69% of parents had elevated DCS scores before the visit. Although levels of decisional conflict were lower after the visit compared with before the visit (p < 0.03), 46% of parents continued to report high scores on the DCS. Parents leave DBP visits with feelings of uncertainty about treatment decisions and with treatment plans that do not always address their priorities. SDM interventions hold promise to improve the quality of ASD treatment decisions.

  4. Factors that influence the efficiency of beef and dairy cattle recording system in Kenya: A SWOT-AHP analysis.

    PubMed

    Wasike, Chrilukovian B; Magothe, Thomas M; Kahi, Alexander K; Peters, Kurt J

    2011-01-01

    Animal recording in Kenya is characterised by erratic producer participation and high drop-out rates from the national recording scheme. This study evaluates the factors influencing the efficiency of the beef and dairy cattle recording system. Factors influencing the efficiency of animal identification and registration, pedigree and performance recording, and genetic evaluation and information utilisation were generated using qualitative and participatory methods. Pairwise comparison of factors was done by strengths, weaknesses, opportunities, and threats-analytical hierarchy process (SWOT-AHP) analysis, and priority scores for their relative importance to the system were calculated using the eigenvalue method. For identification and registration, and for evaluation and information utilisation, external factors had high priority scores. For pedigree and performance recording, threats and weaknesses had the highest priority scores. Strength factors could not sustain the required efficiency of the system. Weaknesses of the system predisposed it to threats. Available opportunities could be explored as interventions to restore efficiency in the system. Defensive strategies were feasible, such as reorienting the system to offer utility benefits for recording, forming symbiotic and binding collaboration between recording organisations and NARS, and developing institutions to support recording.
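    The eigenvalue step of SWOT-AHP takes a reciprocal pairwise-comparison matrix and returns the principal eigenvector, normalised so the priority scores sum to 1. A minimal numpy sketch with an invented 3-factor example (the matrix values are illustrative, not the study's judgments):

```python
import numpy as np

def ahp_priorities(pairwise):
    """Priority weights from an AHP pairwise-comparison matrix
    via the eigenvalue method (normalised principal eigenvector)."""
    A = np.asarray(pairwise, dtype=float)
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)          # index of the principal eigenvalue
    w = np.abs(eigvecs[:, k].real)       # principal eigenvector
    return w / w.sum()                   # normalise to sum to 1

# Example: factor 1 judged 3x as important as factor 2 and 5x as factor 3.
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
weights = ahp_priorities(A)  # approximately [0.65, 0.23, 0.12]
```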

  5. The Heinz Nixdorf Recall study and its potential impact on the adoption of atherosclerosis imaging in European primary prevention guidelines.

    PubMed

    Mahabadi, Amir A; Möhlenkamp, Stefan; Moebus, Susanne; Dragano, Nico; Kälsch, Hagen; Bauer, Marcus; Jöckel, Karl-Heinz; Erbel, Raimund

    2011-10-01

    Non-contrast-enhanced computed tomography (CT) imaging of the heart enables noninvasive quantification of coronary artery calcification (CAC), a surrogate marker of the atherosclerotic burden in the coronary artery tree. Multiple studies have underlined the value of the CAC score for individual risk stratification and, accordingly, the American Heart Association recommended cardiac CT for risk assessment in individuals at intermediate risk of cardiovascular events as measured by the Framingham Risk Score. However, limitations in translating risk stratification algorithms based on American cohort studies to European populations have been acknowledged in the past. Moreover, data on implications for reclassification into higher- or lower-risk groups based on CAC scores were lacking. The Heinz Nixdorf Recall (HNR) study is a population-based cohort study that investigated the ability of CAC scoring to predict major cardiovascular events above and beyond traditional cardiovascular risk factors. According to the HNR findings, CAC can be used for reclassification, especially of those in the intermediate-risk group: to advise on lifestyle changes for individuals reclassified as low risk, or to implement intensive treatments for those reclassified as high risk. This article discusses the present findings of the HNR study with respect to the current literature, risk stratification algorithms, and current European guidelines for risk prediction.

  6. A research agenda for gastrointestinal and endoscopic surgery.

    PubMed

    Urbach, D R; Horvath, K D; Baxter, N N; Jobe, B A; Madan, A K; Pryor, A D; Khaitan, L; Torquati, A; Brower, S T; Trus, T L; Schwaitzberg, S

    2007-09-01

    Development of a research agenda may help to inform researchers and research-granting agencies about the key research gaps in an area of research and clinical care. The authors sought to develop a list of research questions for which further research was likely to have a major impact on clinical care in the area of gastrointestinal and endoscopic surgery. A formal group process was used to conduct an iterative, anonymous Web-based survey of an expert panel including the general membership of the Society of American Gastrointestinal and Endoscopic Surgeons (SAGES). In round 1, research questions were solicited, which were categorized, collapsed, and rewritten in a common format. In round 2, the expert panel rated all the questions using a priority scale ranging from 1 (lowest) to 5 (highest). In round 3, the panel re-rated the 40 questions with the highest mean priority score in round 2. A total of 241 respondents to round 1 submitted 382 questions, which were reduced by a review panel to 106 unique questions encompassing 33 topics in gastrointestinal and endoscopic surgery. In the two successive rounds, respectively, 397 and 385 respondents ranked the questions by priority, then re-ranked the 40 questions with the highest mean priority score. High-priority questions related to antireflux surgery, the oncologic and immune effects of minimally invasive surgery, and morbid obesity. The question with the highest mean priority ranking was: "What is the best treatment (antireflux surgery, endoluminal therapy, or medication) for GERD?" The second highest-ranked question was: "Does minimally invasive surgery improve oncologic outcomes as compared with open surgery?" Other questions covered a broad range of research areas including clinical research, basic science research, education and evaluation, outcomes measurement, and health technology assessment. 
An iterative, anonymous group survey process was used to develop a research agenda for gastrointestinal and endoscopic surgery consisting of the 40 most important research questions in the field. This research agenda can be used by researchers and research-granting agencies to focus research activity in the areas most likely to have an impact on clinical care, and to appraise the relevance of scientific contributions.
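The round-2/round-3 selection step described above reduces to a mean-and-rank computation: average each question's 1-5 priority ratings and keep the top 40. A minimal sketch (the question labels, panel ratings, and a top-2 cutoff in place of 40 are all illustrative):

```python
def top_priority_questions(ratings, k):
    """Rank questions by mean priority score (1 = lowest, 5 = highest)
    and return the k highest-rated, as in the survey's later rounds."""
    means = {q: sum(r) / len(r) for q, r in ratings.items()}
    return sorted(means, key=means.get, reverse=True)[:k]

# Hypothetical panel ratings for three candidate questions.
ratings = {
    "GERD treatment": [5, 5, 4],
    "MIS oncologic outcomes": [5, 4, 4],
    "Training simulators": [3, 2, 4],
}
top2 = top_priority_questions(ratings, 2)
```

With these made-up ratings, `top2` contains the GERD and minimally invasive surgery questions, mirroring the two highest-ranked questions reported in the study.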

  7. Moderate Psoriasis: A Proposed Definition.

    PubMed

    Llamas-Velasco, M; de la Cueva, P; Notario, J; Martínez-Pilar, L; Martorell, A; Moreno-Ramírez, D

    2017-12-01

The Psoriasis Area and Severity Index (PASI) is the most widely used scale for assessing the severity of psoriasis and for therapeutic decision making. On the basis of the PASI score, patients have traditionally been stratified into 2 groups: mild disease and moderate-to-severe disease. To draft a proposal for the definition and characterization of moderate psoriasis based on PASI and Dermatology Life Quality Index (DLQI) scores, a group of 6 dermatologists with experience in the treatment of psoriasis undertook a critical review of the literature and a discussion of cases. Three conclusions were reached. First, in order of priority, PASI, DLQI, and body surface area (BSA) are the parameters to be used in daily practice to classify psoriasis as mild, moderate, or severe. Second, severity should be assessed on the basis of a combined evaluation and interpretation of the PASI and DLQI. Third, PASI and DLQI should carry equal weight in the determination of disease severity. On this basis, psoriasis severity was defined using the following criteria: mild, PASI<7 and DLQI<7; moderate, PASI=7-15 and DLQI=5-15 (classified as severe when difficult-to-treat sites are affected or when there is a significant psychosocial impact); severe, PASI >15, independently of the DLQI score. A more precise classification of psoriasis according to disease severity will improve the risk-benefit assessment essential to therapeutic decision making in these patients. Copyright © 2017 AEDV. Published by Elsevier España, S.L.U. All rights reserved.
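The proposed PASI/DLQI criteria translate directly into a classification rule. A minimal sketch of those criteria as stated; note the proposal leaves some score combinations uncovered (e.g. low PASI with high DLQI), so the `"unclassified"` branch is my assumption, not part of the proposal:

```python
def classify_psoriasis(pasi, dlqi, difficult_site=False, psychosocial_impact=False):
    """Classify psoriasis severity per the proposed criteria:
    mild:     PASI < 7  and DLQI < 7
    moderate: PASI 7-15 and DLQI 5-15, upgraded to severe if a
              difficult-to-treat site is affected or there is a
              significant psychosocial impact
    severe:   PASI > 15, independently of DLQI
    """
    if pasi > 15:
        return "severe"
    if 7 <= pasi <= 15 and 5 <= dlqi <= 15:
        return "severe" if (difficult_site or psychosocial_impact) else "moderate"
    if pasi < 7 and dlqi < 7:
        return "mild"
    return "unclassified"  # combination not covered by the proposal (assumption)
```

For example, a patient with PASI 10 and DLQI 8 is moderate, but the same scores with scalp involvement (a difficult-to-treat site) would be classified as severe.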

  8. Predicting risk of substantial weight gain in German adults-a multi-center cohort approach.

    PubMed

    Bachlechner, Ursula; Boeing, Heiner; Haftenberger, Marjolein; Schienkiewitz, Anja; Scheidt-Nave, Christa; Vogt, Susanne; Thorand, Barbara; Peters, Annette; Schipf, Sabine; Ittermann, Till; Völzke, Henry; Nöthlings, Ute; Neamat-Allah, Jasmine; Greiser, Karin-Halina; Kaaks, Rudolf; Steffen, Annika

    2017-08-01

    A risk-targeted prevention strategy may efficiently utilize limited resources available for prevention of overweight and obesity. Likewise, more efficient intervention trials could be designed if selection of subjects was based on risk. The aim of the study was to develop a risk score predicting substantial weight gain among German adults. We developed the risk score using information on 15 socio-demographic, dietary and lifestyle factors from 32 204 participants of five population-based German cohort studies. Substantial weight gain was defined as gaining ≥10% of weight between baseline and follow-up (>6 years apart). The cases were censored according to the theoretical point in time when the threshold of 10% baseline-based weight gain was crossed assuming linearity of weight gain. Beta coefficients derived from proportional hazards regression were used as weights to compute the risk score as a linear combination of the predictors. Cross-validation was used to evaluate the score's discriminatory accuracy. The cross-validated c index (95% CI) was 0.71 (0.67-0.75). A cutoff value of ≥475 score points yielded a sensitivity of 71% and a specificity of 63%. The corresponding positive and negative predictive values were 10.4% and 97.6%, respectively. The proposed risk score may support healthcare providers in decision making and referral and facilitate an efficient selection of subjects into intervention trials. © The Author 2016. Published by Oxford University Press on behalf of the European Public Health Association.
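The score construction described above, beta coefficients from proportional hazards regression used as weights in a linear combination of the predictors, with ≥475 points flagging high risk, can be sketched as follows. The predictor names, coefficient values, and the point-scaling factor are invented placeholders; only the linear-combination form and the 475-point cutoff come from the abstract:

```python
def risk_score(betas, x, scale=100):
    """Linear risk score: sum of beta-weighted predictor values,
    rescaled to integer points (the scale factor is illustrative)."""
    return round(scale * sum(betas[k] * v for k, v in x.items()))

# Hypothetical Cox coefficients and one subject's predictor values.
betas = {"age_inv": 1.8, "smoking": 0.9, "sugary_drinks": 0.7, "inactivity": 1.1}
subject = {"age_inv": 2.0, "smoking": 1.0, "sugary_drinks": 0.0, "inactivity": 0.5}

score = risk_score(betas, subject)
high_risk = score >= 475  # cutoff reported in the paper
```

At the reported cutoff, the paper found 71% sensitivity and 63% specificity; the code only shows how the per-subject points would be produced and thresholded.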

  9. [Priority pollutants ranking and screening of coke industry based on USEtox model].

    PubMed

    Hao, Tian; Du, Peng-Fei; Du, Bin; Zeng, Si-Yu

    2014-01-01

This study aims to evaluate and prioritize the human toxicity and ecotoxicity of coking pollutants. Field research and a sampling campaign were conducted at a coke plant in Shanxi to compile a coke emission inventory. The USEtox model, which represents recommended practice in LCIA characterization, was applied to the emission inventory to quantify the potential human-toxicity and ecotoxicity impacts of the emitted pollutants. Priority pollutants, production procedures, and the effect of plant siting on toxicity were analyzed. In conclusion, benzo(a)pyrene, benzene, Zn and As were identified as the priority pollutants for human toxicity, and pyrene and anthracene for ecotoxicity. Coal charging is the dominant procedure for organic toxicity, with priority pollutants including benzo(a)pyrene, benzene, naphthalene, etc., while coke quenching is the dominant procedure for metal toxicity, with priority pollutants including Zn, As, Ti, Hg, etc. Emission to a rural environment can reduce organic toxicity significantly compared with emission to an urban environment; however, the change of site has no effect on metal toxicity and might increase the risk of metal pollution to rural water and soil.
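USEtox-style characterization aggregates an emission inventory by multiplying each pollutant's emitted mass by a substance- and compartment-specific characterization factor and summing. A hedged sketch of that aggregation; the inventory masses and characterization-factor values below are invented, not taken from the study or the USEtox database:

```python
def impact_score(emissions, cf):
    """Potential impact = sum over pollutants of emitted mass times
    characterization factor (USEtox-style aggregation)."""
    return sum(mass * cf[p] for p, mass in emissions.items())

# Hypothetical inventory (kg/yr) and human-toxicity CFs (cases/kg).
inventory = {"benzo(a)pyrene": 0.4, "benzene": 120.0, "Zn": 15.0}
cfs = {"benzo(a)pyrene": 2.1e-3, "benzene": 1.3e-6, "Zn": 9.0e-5}

total = impact_score(inventory, cfs)
# Priority ranking: pollutants ordered by their individual contribution.
ranked = sorted(inventory, key=lambda p: inventory[p] * cfs[p], reverse=True)
```

The per-pollutant products, not the raw masses, drive the priority ranking, which is why a low-mass but high-CF substance such as benzo(a)pyrene can outrank a high-mass one.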

  10. PACE Continuous Innovation Indicators—a novel tool to measure progress in cancer treatments

    PubMed Central

    Paddock, Silvia; Brum, Lauren; Sorrow, Kathleen; Thomas, Samuel; Spence, Susan; Maulbecker-Armstrong, Catharina; Goodman, Clifford; Peake, Michael; McVie, Gordon; Geipel, Gary; Li, Rose

    2015-01-01

Concerns about rising health care costs and the often incremental nature of improvements in health outcomes continue to fuel intense debates about ‘progress’ and ‘value’ in cancer research. In times of tightening fiscal constraints, it is increasingly important for patients and their representatives to define what constitutes ‘value’ to them. It is clear that diverse stakeholders have different priorities. Harmonisation of values may be neither possible nor desirable. Stakeholders lack tools to visualise or otherwise express these differences and to track progress in cancer treatments based on variable sets of values. The Patient Access to Cancer care Excellence (PACE) Continuous Innovation Indicators are novel, scientifically rigorous progress trackers that employ a three-step process to quantify progress in cancer treatments: 1) mine the literature to determine the strength of the evidence supporting each treatment; 2) allow users to weight the analysis according to their priorities and values; and 3) calculate Evidence Scores (E-Scores), a novel measure to track progress, based on the strength of the evidence weighted by the assigned value. We herein introduce a novel, flexible value model, show how the values from the model can be used to weight the evidence from the scientific literature to obtain E-Scores, and illustrate how assigning different values to new treatments influences the E-Scores. The Indicators allow users to learn how differing values lead to differing assessments of progress in cancer research and to check whether current incentives for innovation are aligned with their value model. By comparing E-Scores generated by this tool, users are able to visualise the relative pace of innovation across areas of cancer research and how stepwise innovation can contribute to substantial progress against cancer over time. 
Learning from experience and mapping current unmet needs will help to support a broad audience of stakeholders in their efforts to accelerate and maximise progress against cancer. PMID:25624879
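The E-Score idea, strength of evidence weighted by user-assigned values, can be sketched as a weighted sum. The endpoint names, evidence strengths, and the two value models below are invented for illustration; the actual PACE scoring details are not reproduced here:

```python
def e_scores(treatments, weights):
    """E-Score sketch: evidence strength per endpoint, weighted by the
    user's value weights and summed across a treatment's evidence base."""
    return {
        name: sum(weights[endpoint] * strength
                  for endpoint, strength in evidence.items())
        for name, evidence in treatments.items()
    }

# Hypothetical evidence strengths and two stakeholder value models.
treatments = {"drug_A": {"survival": 3, "quality_of_life": 1},
              "drug_B": {"survival": 1, "quality_of_life": 3}}
survival_focused = {"survival": 1.0, "quality_of_life": 0.2}
qol_focused = {"survival": 0.2, "quality_of_life": 1.0}
```

Running `e_scores` under each value model ranks the two hypothetical drugs in opposite orders, which is exactly the point the authors make: differing values lead to differing assessments of progress.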

  12. Clinical practice guidelines within the Southern African development community: a descriptive study of the quality of guideline development and concordance with best evidence for five priority diseases

    PubMed Central

    2012-01-01

Background Reducing the burden of disease relies on the availability of evidence-based clinical practice guidelines (CPGs). There are limited data on the availability, quality and content of guidelines within the Southern African Development Community (SADC). This evaluation aims to address this gap in knowledge and provide recommendations for regional guideline development. Methods We prioritised five diseases: HIV in adults, malaria in children and adults, pre-eclampsia, diarrhoea in children and hypertension in primary care. A comprehensive electronic search to locate guidelines was conducted between June and October 2010 and augmented with email contact with SADC Ministries of Health. Independent reviewers used the AGREE II tool to score six quality domains reporting the guideline development process. Alignment of the evidence base of the guidelines was evaluated by comparing their content with key recommendations from accepted reference guidelines, identified with a content expert, and percentage scores were calculated. Findings We identified 30 guidelines from 13 countries, with publication dates ranging from 2003 to 2010. Overall, the 'scope and purpose' and 'clarity and presentation' domains of the AGREE II instrument scored highest, with medians of 58% (range: 19-92) and 83% (range: 17-100), respectively. 'Stakeholder involvement' followed with a median of 39% (range: 6-75). 'Applicability', 'rigour of development' and 'editorial independence' scored poorly, all below 25%. Alignment with evidence was variable across member states, with the lowest scores occurring in older guidelines or where the guideline being evaluated was part of a broader primary healthcare CPG rather than a disease-specific guideline. Conclusion This review identified quality gaps and variable alignment with best evidence in available guidelines within SADC for five priority diseases. 
Future guideline development processes within SADC should better adhere to global reporting norms requiring broader consultation of stakeholders and transparency of process. A regional guideline support committee could harness local capacity to support context appropriate guideline development. PMID:22221856
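The AGREE II domain percentages reported above are scaled scores: each appraiser rates every item in a domain on a 1-7 scale, and the domain score is the obtained total expressed as a fraction of the distance between the minimum and maximum possible totals. A minimal sketch (the example ratings are hypothetical):

```python
def agree_domain_percent(item_ratings):
    """AGREE II scaled domain score:
    100 * (obtained - minimum possible) / (maximum possible - minimum possible),
    where each item is rated 1-7 by each appraiser."""
    obtained = sum(sum(r) for r in item_ratings)
    n_items = len(item_ratings)
    n_appraisers = len(item_ratings[0])
    minimum = 1 * n_items * n_appraisers
    maximum = 7 * n_items * n_appraisers
    return 100 * (obtained - minimum) / (maximum - minimum)

# Two appraisers rating a three-item domain (ratings invented).
ratings = [(5, 6), (4, 4), (6, 7)]
percent = agree_domain_percent(ratings)
```

For these made-up ratings the scaled score is about 72%, which would sit near the top of the 'scope and purpose' range reported in the review.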

  13. Predicting 10-Year Risk of Fatal Cardiovascular Disease in Germany: An Update Based on the SCORE-Deutschland Risk Charts

    PubMed Central

    Rücker, Viktoria; Keil, Ulrich; Fitzgerald, Anthony P; Malzahn, Uwe; Prugger, Christof; Ertl, Georg; Heuschmann, Peter U; Neuhauser, Hannelore

    2016-01-01

Estimation of absolute risk of cardiovascular disease (CVD), preferably with population-specific risk charts, has become a cornerstone of CVD primary prevention. Regular recalibration of risk charts may be necessary due to decreasing CVD rates and CVD risk factor levels. The SCORE risk charts for fatal CVD risk assessment were first calibrated for Germany with 1998 risk factor level data and 1999 mortality statistics. We present an update of these risk charts based on the SCORE methodology, including estimates of relative risks from SCORE, risk factor levels from the German Health Interview and Examination Survey for Adults 2008–11 (DEGS1) and official mortality statistics from 2012. Competing risks methods were applied and estimates were independently validated. Updated risk charts were calculated based on cholesterol, smoking status and systolic blood pressure levels, sex, and 5-year age groups. The absolute 10-year risk estimates of fatal CVD were lower according to the updated risk charts compared with the first calibration for Germany. In a nationwide sample of 3062 adults aged 40–65 years free of major CVD from DEGS1, the mean 10-year risk of fatal CVD estimated by the updated charts was lower by 29%, and the estimated proportion of high-risk people (10-year risk ≥5%) by 50%, compared with the older risk charts. This recalibration shows a need for regular updates of risk charts according to changes in mortality and risk factor levels in order to sustain the identification of people with a high CVD risk. PMID:27612145
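The recalibration idea, keep the relative risks from SCORE but swap in a baseline hazard derived from current national mortality, can be illustrated with a generic proportional-hazards calculation. This is a simplified sketch, not the actual SCORE equations (which use Weibull baselines and combine coronary and non-coronary CVD as competing risks); all coefficient and hazard values are invented:

```python
import math

def ten_year_risk(baseline_cum_hazard, betas, x):
    """Generic proportional-hazards risk: 1 - exp(-H0 * relative risk).
    Recalibration replaces H0 with a value derived from current
    national mortality statistics (all numbers here are illustrative)."""
    relative_risk = math.exp(sum(betas[k] * v for k, v in x.items()))
    return 1.0 - math.exp(-baseline_cum_hazard * relative_risk)

# Hypothetical log-hazard coefficients and one risk-factor profile.
betas = {"sbp_per10": 0.08, "chol_per_mmol": 0.15, "smoker": 0.7}
profile = {"sbp_per10": 2.0, "chol_per_mmol": 1.5, "smoker": 1.0}

risk = ten_year_risk(0.01, betas, profile)
```

Lowering `baseline_cum_hazard` while holding `betas` fixed is the recalibration move described in the abstract: the same risk-factor profile maps to a lower absolute 10-year risk when national mortality falls.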

  14. Novel risk score of contrast-induced nephropathy after percutaneous coronary intervention.

    PubMed

    Ji, Ling; Su, XiaoFeng; Qin, Wei; Mi, XuHua; Liu, Fei; Tang, XiaoHong; Li, Zi; Yang, LiChuan

    2015-08-01

Contrast-induced nephropathy (CIN) after percutaneous coronary intervention (PCI) is a major cause of acute kidney injury. In this study, we established a comprehensive risk score model to assess the risk of CIN after the PCI procedure that can be easily used in a clinical environment. A total of 805 PCI patients, divided into an analysis cohort (70%) and a validation cohort (30%), were enrolled retrospectively in this study. Risk factors for CIN were identified using univariate analysis and multivariate logistic regression in the analysis cohort. The risk score model was developed based on the multiple regression coefficients. The sensitivity and specificity of the new risk score system were validated in the validation cohort, and comparisons between the new risk score model and previously reported models were made. The incidence of post-PCI CIN in the analysis cohort (n = 565) was 12%. A considerably higher CIN incidence (50%) was observed in patients with chronic kidney disease (CKD). Age >75 years, body mass index (BMI) >25, myoglobin level, cardiac function level, hypoalbuminaemia, history of CKD, intra-aortic balloon pump (IABP) use and peripheral vascular disease (PVD) were identified as independent risk factors for post-PCI CIN. A novel risk score model was established using the multivariate regression coefficients, which showed the highest sensitivity and specificity (0.917, 95% CI 0.877-0.957) compared with previous models. A new post-PCI CIN risk score model was developed based on a retrospective study of 805 patients. Application of this model might be helpful to predict CIN in patients undergoing the PCI procedure. © 2015 Asian Pacific Society of Nephrology.
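A common way to turn multivariate regression coefficients into a bedside score, and plausibly what "developed based on multiple regression coefficients" means here, is to scale each coefficient against the smallest one and round to integer points. The coefficients below are invented; only the list of risk factors comes from the abstract:

```python
def to_points(coefficients):
    """Convert regression coefficients into integer risk points by
    scaling against the smallest coefficient (a standard construction
    for bedside scores; not necessarily the authors' exact method)."""
    base = min(abs(c) for c in coefficients.values())
    return {k: round(c / base) for k, c in coefficients.items()}

# Hypothetical coefficients for some of the reported CIN risk factors.
coefs = {"age_gt_75": 0.45, "bmi_gt_25": 0.40, "ckd_history": 1.60, "iabp": 0.85}
points = to_points(coefs)

# A hypothetical patient who is >75 with a history of CKD.
patient_score = sum(points[f] for f in ("age_gt_75", "ckd_history"))
```

The resulting integer points preserve the relative weight of each factor (here CKD history counts four times as much as an elevated BMI) while staying easy to sum at the bedside.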

  15. Using Photovoice, Latina Transgender Women Identify Priorities in a New Immigrant-Destination State

    PubMed Central

    Rhodes, Scott D.; Alonzo, Jorge; Mann, Lilli; Simán, Florence; Garcia, Manuel; Abraham, Claire; Sun, Christina J.

    2016-01-01

    Little is known about the immigrant Latino/a transgender community in the southeastern United States. This study used photovoice, a methodology aligned with community-based participatory research, to explore needs, assets, and priorities of Latina transgender women in North Carolina. Nine immigrant Latina male-to-female transgender women documented their daily experiences through photography, engaged in empowerment-based photo-discussions, and organized a bilingual community forum to move knowledge to action. From the participants’ photographs and words, 11 themes emerged in three domains: daily challenges (e.g., health risks, uncertainty about the future, discrimination, and anxiety about family reactions); needs and priorities (e.g., health and social services, emotional support, and collective action); and community strengths and assets (e.g., supportive individuals and institutions, wisdom through lived experiences, and personal and professional goals). At the community forum, 60 influential advocates, including Latina transgender women, representatives from community-based organizations, health and social service providers, and law enforcement, reviewed findings and identified ten recommended actions. Overall, photovoice served to obtain rich qualitative insight into the lived experiences of Latina transgender women that was then shared with local leaders and agencies to help address priorities. PMID:27110226

  16. Use of mechanistic simulations as a quantitative risk-ranking tool within the quality by design framework.

    PubMed

    Stocker, Elena; Toschkoff, Gregor; Sacher, Stephan; Khinast, Johannes G

    2014-11-20

The purpose of this study is to evaluate the use of computer simulations for generating quantitative knowledge as a basis for risk ranking and mechanistic process understanding, as required by ICH Q9 on quality risk management systems. The main focus of this publication is the demonstration of a risk assessment workflow that includes a computer simulation for generating mechanistic understanding of active tablet coating in a pan coater. Process parameter screening studies are statistically planned in consideration of their impact on a potentially critical quality attribute, i.e., coating mass uniformity. Based on the computer simulation data, a process failure mode and effects analysis of the risk factors is performed. This results in a quantitative criticality assessment of the process parameters and a risk priority evaluation of the failure modes. The factor used for the quantitative reassessment of criticality and risk priority is the coefficient of variation, which represents the coating mass uniformity. The major conclusion drawn from this work is the successful demonstration of integrating computer simulation into the risk management workflow, leading to an objective and quantitative risk assessment. Copyright © 2014. Published by Elsevier B.V.
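In a conventional FMEA, the risk priority of a failure mode is the product of its severity, occurrence, and detectability ratings; this study's contribution is replacing subjective ratings with simulation-derived quantities such as the coefficient of variation. A minimal sketch of the conventional product, with invented failure modes and ratings for the pan-coating step:

```python
def rpn(severity, occurrence, detectability):
    """FMEA risk priority number: the product of severity, occurrence,
    and detectability ratings (each conventionally 1-10)."""
    return severity * occurrence * detectability

# Hypothetical failure modes and ratings for active tablet coating.
failure_modes = {
    "spray rate drift": rpn(7, 4, 3),
    "pan speed too low": rpn(5, 2, 2),
    "inlet air too hot": rpn(6, 3, 5),
}
prioritized = sorted(failure_modes, key=failure_modes.get, reverse=True)
```

The workflow in the paper would then re-derive the occurrence and severity inputs from simulated coating mass uniformity rather than from expert guesses, making the resulting ranking quantitative.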

  17. Building on IUCN regional red lists to produce lists of species of conservation priority: a model with Irish bees.

    PubMed

    Fitzpatrick, Una; Murray, Tomás E; Paxton, Robert J; Brown, Mark J F

    2007-10-01

    A World Conservation Union (IUCN) regional red list is an objective assessment of regional extinction risk and is not the same as a list of conservation priority species. Recent research reveals the widespread, but incorrect, assumption that IUCN Red List categories represent a hierarchical list of priorities for conservation action. We developed a simple eight-step priority-setting process and applied it to the conservation of bees in Ireland. Our model is based on the national red list but also considers the global significance of the national population; the conservation status at global, continental, and regional levels; key biological, economic, and societal factors; and is compatible with existing conservation agreements and legislation. Throughout Ireland, almost one-third of the bee fauna is threatened (30 of 100 species), but our methodology resulted in a reduced list of only 17 priority species. We did not use the priority species list to broadly categorize species to the conservation action required; instead, we indicated the individual action required for all threatened, near-threatened, and data-deficient species on the national red list based on the IUCN's conservation-actions template file. Priority species lists will strongly influence prioritization of conservation actions at national levels, but action should not be exclusive to listed species. In addition, all species on this list will not necessarily require immediate action. Our method is transparent, reproducible, and readily applicable to other taxa and regions.

  18. Risk Factors for Suicidal Ideation in People at Risk for Huntington's Disease.

    PubMed

    Anderson, Karen E; Eberly, Shirley; Groves, Mark; Kayson, Elise; Marder, Karen; Young, Anne B; Shoulson, Ira

    2016-12-15

    Suicidal ideation (SI) and attempts are increased in Huntington's disease (HD), making risk factor assessment a priority. To determine whether, hopelessness, irritability, aggression, anxiety, CAG expansion status, depression, and motor signs/symptoms were associated with Suicidal Ideation (SI) in those at risk for HD. Behavioral and neurological data were collected from subjects in an observational study. Subject characteristics were calculated by CAG status and SI. Logistic regression models were adjusted for demographics. Separate logistic regressions were used to compare SI and non-SI subjects. A combined logistic regression model, including 4 pre-specified predictors, (hopelessness, irritability, aggression, anxiety) was used to assess the relationship of SI to these predictors. 801 subjects were assessed, 40 were classified as having SI, 6.3% of CAG mutation expansion carriers had SI, compared with 4.3% of non- CAG mutation expansion carriers (p = 0.2275). SI subjects had significantly increased depression (p < 0.0001), hopelessness (p < 0.0001), irritability (p < 0.0001), aggression (p = 0.0089), and anxiety (p < 0.0001), and an elevated motor score (p = 0.0098). Impulsivity, assessed in a subgroup of subjects, was also associated with SI (p = 0.0267). Hopelessness and anxiety remained significant in combined model (p < 0.001; p < 0.0198, respectively) even when motor score was included. Behavioral symptoms were significantly higher in those reporting SI. Hopelessness and anxiety showed a particularly strong association with SI. Risk identification could assist in assessment of suicidality in this group.

  19. Let’s prevent diabetes: study protocol for a cluster randomised controlled trial of an educational intervention in a multi-ethnic UK population with screen detected impaired glucose regulation

    PubMed Central

    2012-01-01

    Background The prevention of type 2 diabetes is a globally recognised health care priority, but there is a lack of rigorous research investigating optimal methods of translating diabetes prevention programmes, based on the promotion of a healthy lifestyle, into routine primary care. The aim of the study is to establish whether a pragmatic structured education programme targeting lifestyle and behaviour change in conjunction with motivational maintenance via the telephone can reduce the incidence of type 2 diabetes in people with impaired glucose regulation (a composite of impaired glucose tolerance and/or impaired fasting glucose) identified through a validated risk score screening programme in primary care. Design Cluster randomised controlled trial undertaken at the level of primary care practices. Follow-up will be conducted at 12, 24 and 36 months. The primary outcome is the incidence of type 2 diabetes. Secondary outcomes include changes in HbA1c, blood glucose levels, cardiovascular risk, the presence of the Metabolic Syndrome and the cost-effectiveness of the intervention. Methods The study consists of screening and intervention phases within 44 general practices coordinated from a single academic research centre. Those at high risk of impaired glucose regulation or type 2 diabetes are identified using a risk score and invited for screening using a 75 g-oral glucose tolerance test. Those with screen detected impaired glucose regulation will be invited to take part in the trial. Practices will be randomised to standard care or the intensive arm. Participants from intensive arm practices will receive a structured education programme with motivational maintenance via the telephone and annual refresher sessions. The study will run from 2009–2014. Discussion This study will provide new evidence surrounding the long-term effectiveness of a diabetes prevention programme conducted within routine primary care in the United Kingdom. 
Trial registration Clinicaltrials.gov NCT00677937 PMID:22607160

  20. Emergency planning and management in health care: priority research topics.

    PubMed

    Boyd, Alan; Chambers, Naomi; French, Simon; Shaw, Duncan; King, Russell; Whitehead, Alison

    2014-06-01

    Many major incidents have significant impacts on people's health, placing additional demands on health-care organisations. The main aim of this paper is to suggest a prioritised agenda for organisational and management research on emergency planning and management relevant to U.K. health care, based on a scoping study. A secondary aim is to enhance knowledge and understanding of health-care emergency planning among the wider research community, by highlighting key issues and perspectives on the subject and presenting a conceptual model. The study findings have much in common with those of previous U.S.-focused scoping reviews, and with a recent U.K.-based review, confirming the relative paucity of U.K.-based research. No individual research topic scored highly on all of the key measures identified, with communities and organisations appearing to differ about which topics are the most important. Four broad research priorities are suggested: the affected public; inter- and intra-organisational collaboration; preparing responders and their organisations; and prioritisation and decision making.

  1. Evaluating Environment, Erosion and Sedimentation Aspects in Coastal Area to Determine Priority Handling (A Case Study in Jepara Regency, northern Central Java, Indonesia)

    NASA Astrophysics Data System (ADS)

    Wahyudi, S. I.; Adi, H. P.

    2018-04-01

Many coastal areas along the northern shore of Central Java, Indonesia, have suffered damage. One of these areas is Jepara, where 7.6 kilometres of the 72-kilometre coastline have been damaged. The damage is mostly caused by coastal erosion, sedimentation, environmental degradation and tidal flooding. Several efforts have been made, such as replanting mangroves and building revetments and groins, but they have not yet mitigated the coastal damage. The purposes of this study are to map the coastal damage, to analyze handling priority and to determine a coastal protection model. The method used is to identify and plot the coastal damage on a map, assign a score to each variable, and determine the handling priority and a suitable coastal protection model. Five levels of coastal damage are used in this study: light, medium, heavy, very heavy, and extremely heavy. Based on the priority assessment of coastal damage, follow-up is needed in the form of detailed design and implementation of soft structures, such as mangroves and sand nourishment, and hard structures, such as breakwaters, groins and revetments.
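The scoring-and-priority step above can be sketched as averaging each site's variable scores, binning the average into the five damage levels, and sorting sites by score. Everything concrete below, the site names, the 0-100 score range, and the equal-width bands, is an illustrative assumption; the paper does not publish its thresholds here:

```python
DAMAGE_LEVELS = ["light", "medium", "heavy", "very heavy", "extremely heavy"]

def damage_level(mean_score):
    """Map a site's mean variable score (assumed 0-100) onto the five
    damage classes; equal-width 20-point bands are an assumption."""
    return DAMAGE_LEVELS[min(int(mean_score // 20), 4)]

# Hypothetical scores per site for four assessed variables
# (erosion, sedimentation, environment, tidal flooding).
sites = {"site_A": [70, 85, 60, 90], "site_B": [30, 20, 25, 40]}

# Handling priority: worst-scoring site first.
ranked = sorted(sites, key=lambda s: sum(sites[s]) / len(sites[s]), reverse=True)
```

The ranked list is what feeds the follow-up decision, sites in the heavier classes get hard structures such as breakwaters, lighter ones soft measures such as mangrove replanting.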

  2. Tongues on the EDGE: language preservation priorities based on threat and lexical distinctiveness

    PubMed Central

    Davies, T. Jonathan

    2017-01-01

    Languages are being lost at rates exceeding the global loss of biodiversity. With the extinction of a language we lose irreplaceable dimensions of culture and the insight it provides on human history and the evolution of linguistic diversity. When setting conservation goals, biologists give higher priority to species likely to go extinct. Recent methods now integrate information on species evolutionary relationships to prioritize the conservation of those with a few close relatives. Advances in the construction of language trees allow us to use these methods to develop language preservation priorities that minimize loss of linguistic diversity. The evolutionarily distinct and globally endangered (EDGE) metric, used in conservation biology, accounts for a species’ originality (evolutionary distinctiveness—ED) and its likelihood of extinction (global endangerment—GE). Here, we use a similar framework to inform priorities for language preservation by generating rankings for 350 Austronesian languages. Kavalan, Tanibili, Waropen and Sengseng obtained the highest EDGE scores, while Xârâcùù (Canala), Nengone and Palauan are among the most linguistically distinct, but are not currently threatened. We further provide a way of dealing with incomplete trees, a common issue for both species and language trees. PMID:29308253
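The species formulation of the EDGE metric is usually given as EDGE = ln(1 + ED) + GE × ln(2), where GE is an ordinal endangerment category; the paper adapts that framework to languages. A hedged sketch using that published species formula (the language names, ED values, and GE categories below are invented):

```python
import math

def edge_score(ed, ge):
    """EDGE = ln(1 + ED) + GE * ln(2): originality (evolutionary or,
    here, lexical distinctiveness) combined with endangerment, where
    each GE step doubles the extinction-risk weighting."""
    return math.log(1 + ed) + ge * math.log(2)

# Hypothetical distinctiveness values and endangerment categories (0-4).
languages = {"lang_A": (12.0, 4), "lang_B": (25.0, 1), "lang_C": (3.0, 3)}
ranked = sorted(languages, key=lambda l: edge_score(*languages[l]), reverse=True)
```

Note how the log transform lets a highly endangered but moderately distinct language (`lang_A`) outrank a more distinctive but safer one (`lang_B`), which is the behaviour behind rankings like Kavalan's versus the distinct-but-unthreatened Palauan.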

  3. Aggregate risk score based on markers of inflammation, cell stress, and coagulation is an independent predictor of adverse cardiovascular outcomes.

    PubMed

    Eapen, Danny J; Manocha, Pankaj; Patel, Riyaz S; Hammadah, Muhammad; Veledar, Emir; Wassel, Christina; Nanjundappa, Ravi A; Sikora, Sergey; Malayter, Dylan; Wilson, Peter W F; Sperling, Laurence; Quyyumi, Arshed A; Epstein, Stephen E

    2013-07-23

    This study sought to determine an aggregate, pathway-specific risk score for enhanced prediction of death and myocardial infarction (MI). Activation of inflammatory, coagulation, and cellular stress pathways contributes to atherosclerotic plaque rupture. We hypothesized that an aggregate risk score composed of biomarkers involved in these different pathways-high-sensitivity C-reactive protein (CRP), fibrin degradation products (FDP), and heat shock protein 70 (HSP70) levels-would be a powerful predictor of death and MI. Serum levels of CRP, FDP, and HSP70 were measured in 3,415 consecutive patients with suspected or confirmed coronary artery disease (CAD) undergoing cardiac catheterization. Survival analyses were performed with models adjusted for established risk factors. Median follow-up was 2.3 years. Hazard ratios (HRs) for all-cause death and MI based on cutpoints were as follows: CRP ≥3.0 mg/l, HR: 1.61; HSP70 >0.625 ng/ml, HR: 2.26; and FDP ≥1.0 μg/ml, HR: 1.62 (p < 0.0001 for all). An aggregate biomarker score between 0 and 3 was calculated based on these cutpoints. Compared with the group with a score of 0, HRs for all-cause death and MI were 1.83, 3.46, and 4.99 for those with scores of 1, 2, and 3, respectively (p for each: <0.001). Annual event rates were 16.3% for the 4.2% of patients with a score of 3, compared with 2.4% in the 36.4% of patients with a score of 0. The C statistic and net reclassification improved (p < 0.0001) with the addition of the biomarker score. An aggregate score based on serum levels of CRP, FDP, and HSP70 is a predictor of future risk of death and MI in patients with suspected or known CAD. Copyright © 2013 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
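The 0-3 aggregate score is straightforward to reproduce from the cutpoints reported above, assuming (as the abstract implies) one point per biomarker beyond its cutpoint; the patient values in the example are invented:

```python
def biomarker_score(crp, hsp70, fdp):
    """Aggregate 0-3 score: one point per biomarker at or beyond its
    cutpoint, using the abstract's cutpoints (CRP >= 3.0 mg/l,
    HSP70 > 0.625 ng/ml, FDP >= 1.0 ug/ml)."""
    return int(crp >= 3.0) + int(hsp70 > 0.625) + int(fdp >= 1.0)

# Hypothetical patient: elevated CRP and FDP, normal HSP70 -> score 2
example = biomarker_score(crp=4.1, hsp70=0.40, fdp=1.8)
```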

  4. Characterization and assessment of potential environmental risk of tailings stored in seven impoundments in the Aries river basin, Western Romania

    PubMed Central

    2013-01-01

    Background The objective of this study was to examine the potential environmental risk of tailings resulting from the processing of precious and base metal ores, stored in seven impoundments located in the Aries river basin, Romania. The tailings were characterized by mineralogical and elemental composition, contamination indices, acid rock drainage generation potential and water leachability of hazardous/priority hazardous metals and ions. Multivariate statistical methods were used for data interpretation. Results Tailings were found to be highly contaminated with several hazardous/priority hazardous metals (As, Cu, Cd, Pb) and pose potential contamination risk for soil, sediments, surface water and groundwater. Two of the seven studied impoundments do not satisfy the criteria required for inert wastes and show acid rock drainage potential, and can thus contaminate surface water and groundwater. Three impoundments were found to be highly contaminated with As, Pb and Cd, two with As, and the other two with Cu. The tailings impoundments were grouped based on the enrichment factor, geoaccumulation index, contamination factor and contamination degree of 7 hazardous/priority hazardous metals (As, Cd, Cr, Cu, Ni, Pb, Zn) considered typical for the studied tailings. Principal component analysis showed that 47% of the elemental variability was attributable to alkaline silicate rocks, 31% to acidic S-containing minerals, 12% to carbonate minerals and 5% to biogenic elements. Leachability of metals and ions was ascribed in proportions of 61% to silicates, 11% to acidic minerals and 6% to organic matter. A variability of 18% was attributed to leachability of biogenic elements (Na, K, Cl-, NO3-) with no potential environmental risk. Pattern recognition by agglomerative hierarchical clustering emphasized the grouping of impoundments in agreement with their contamination degree and acid rock drainage generation potential. 
Conclusions Tailings stored in the studied impoundments were found to be contaminated with some hazardous/priority hazardous metals, fluoride and sulphate, and thus present different contamination risks for the environment. A long-term monitoring program for these tailings impoundments and an expansion of ecological restoration measures in the area are required. PMID:23311708
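The contamination indices named above have widely used standard formulations, sketched below; the paper may use variant definitions, and the concentrations in the usage example are illustrative:

```python
import math

def enrichment_factor(c_sample, ref_sample, c_background, ref_background):
    # EF: metal/reference-element ratio in the sample, normalized to the
    # same ratio in the geochemical background
    return (c_sample / ref_sample) / (c_background / ref_background)

def geoaccumulation_index(c_sample, c_background):
    # Igeo = log2(Cn / (1.5 * Bn)); the factor 1.5 buffers natural
    # variability of the background concentration
    return math.log2(c_sample / (1.5 * c_background))

def contamination_factor(c_sample, c_background):
    # CF: sample concentration over background concentration
    return c_sample / c_background

def contamination_degree(factors):
    # Degree of contamination: sum of the single-element CFs
    return sum(factors)

# Hypothetical As concentrations (mg/kg): sample 30, background 10
cf_as = contamination_factor(30.0, 10.0)
igeo_as = geoaccumulation_index(30.0, 10.0)
```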

  5. Vulnerability and fragility risk indices for non-renewable resources.

    PubMed

    Miller, Anne E; Steele, Nicholas; Tobin, Benjamin W

    2018-06-02

    Protected areas are tasked with mitigating impacts to a wide range of invaluable resources. These resources are often subject to a variety of potential natural and anthropogenic impacts that require monitoring efforts and management actions to minimize their degradation. However, due to insufficient funding and staff, managers often have to prioritize efforts, leaving some resources at higher risk of impact. Attempts to address this issue have resulted in numerous qualitative and semi-quantitative frameworks for prioritization based on resource vulnerability. Here, we add to those methods by modifying an internationally standardized vulnerability framework to quantify both resource vulnerability (susceptibility to human disturbance) and fragility (susceptibility to natural disturbance). This modified framework quantifies impacts through a six-step process: identifying the resource and management objectives, identifying exposure and sensitivity indicators, defining scoring criteria for each indicator, collecting and compiling data, calculating indices, and prioritizing sites for mitigation. We applied this methodology to two resource types in Grand Canyon National Park (GRCA): caves and fossil sites. Three hundred sixty-five cave sites and 127 fossil sites in GRCA were used for this analysis. The majority of cave and fossil sites scored moderate to low vulnerability (0-6 out of 10 points), and most fossil sites also scored moderate to low fragility. The percentage of sites that fell in the high-priority range was 5.5% for fossils and 21.9% for caves. These results are consistent with the known state of these resources, and present a tool that managers can use to prioritize monitoring and management needs.
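The "calculate indices, then prioritize sites" steps of the six-step process can be sketched as below. The equal weighting, the 0-1 indicator scale and the site data are assumptions for illustration; the paper's actual scoring criteria differ:

```python
def vulnerability_index(indicator_scores, weights=None):
    """Combine per-indicator exposure/sensitivity scores (assumed 0-1)
    into a single 0-10 index. Equal weights are an assumption, not the
    paper's scoring criteria."""
    if weights is None:
        weights = [1.0] * len(indicator_scores)
    total = sum(w * s for w, s in zip(weights, indicator_scores))
    return 10.0 * total / sum(weights)

# Hypothetical cave sites with three indicator scores each
sites = {"cave_01": [0.2, 0.5, 0.1], "cave_02": [0.9, 0.8, 0.7]}
# Step 6: rank sites for mitigation, most vulnerable first
priority = sorted(sites, key=lambda k: vulnerability_index(sites[k]), reverse=True)
```

A fragility index would be computed the same way over natural-disturbance indicators.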

  6. [Leather bags production: organization study, general identification of hazards, biomechanical overload risk pre-evaluation using an easily applied evaluation tool].

    PubMed

    Montomoli, Loretta; Coppola, Giuseppina; Sarrini, Daniela; Sartorelli, P

    2011-01-01

    Craft industries are the backbone of the Italian manufacturing system and in this sector the leather trade plays a crucial role. The aim of the study was to experiment with a risk pre-mapping data sheet in leather bag manufacture by analyzing the production cycle. The prevalence of biomechanical, organizational and physical factors was demonstrated in tanneries. With regard to chemical agents the lack of any priority of intervention could be due to the lack of information on the chemicals used. In the 2 enterprises that used mechanical processes the results showed different priorities for intervention and a different level of the extent of such intervention. In particular in the first enterprise biomechanical overload was a top priority, while in the second the results were very similar to those of the tannery. The analysis showed in both companies that there was a high prevalence of risk of upper limb biomechanical overload in leather bag manufacture. Chemical risk assessment was not shown as a priority because the list of chemicals used was neither complete nor sufficient. The risk pre-mapping data sheet allowed us to obtain a preliminary overview of all the major existing risks in the leather industry. Therefore the method can prove a useful tool for employers as it permits instant identification of priorities for intervention for the different risks.

  7. Does inclusion of education and marital status improve SCORE performance in Central and Eastern Europe and the former Soviet Union? Findings from MONICA and HAPIEE cohorts.

    PubMed

    Vikhireva, Olga; Broda, Grazyna; Kubinova, Ruzena; Malyutina, Sofia; Pająk, Andrzej; Tamosiunas, Abdonas; Skodova, Zdena; Simonova, Galina; Bobak, Martin; Pikhart, Hynek

    2014-01-01

    The SCORE scale predicts the 10-year risk of fatal atherosclerotic cardiovascular disease (CVD), based on conventional risk factors. The high-risk version of SCORE is recommended for Central and Eastern Europe and former Soviet Union (CEE/FSU), due to high CVD mortality rates in these countries. Given the pronounced social gradient in cardiovascular mortality in the region, it is important to consider social factors in the CVD risk prediction. We investigated whether adding education and marital status to SCORE benefits its prognostic performance in two sets of population-based CEE/FSU cohorts. The WHO MONICA (MONItoring of trends and determinants in CArdiovascular disease) cohorts from the Czech Republic, Poland (Warsaw and Tarnobrzeg), Lithuania (Kaunas), and Russia (Novosibirsk) were followed from the mid-1980s (577 atherosclerotic CVD deaths among 14,969 participants with non-missing data). The HAPIEE (Health, Alcohol, and Psychosocial factors In Eastern Europe) study follows Czech, Polish (Krakow), and Russian (Novosibirsk) cohorts from 2002-05 (395 atherosclerotic CVD deaths in 19,900 individuals with non-missing data). In MONICA and HAPIEE, the high-risk SCORE ≥5% at baseline strongly and significantly predicted fatal CVD both before and after adjustment for education and marital status. After controlling for SCORE, lower education and non-married status were significantly associated with CVD mortality in some samples. SCORE extension by these additional risk factors only slightly improved indices of calibration and discrimination (integrated discrimination improvement <5% in men and ≤1% in women). Extending SCORE by education and marital status failed to substantially improve its prognostic performance in population-based CEE/FSU cohorts.

  8. Incorporating public priorities in the Ocean Health Index: Canada as a case study.

    PubMed

    Daigle, Rémi M; Archambault, Philippe; Halpern, Benjamin S; Stewart Lowndes, Julia S; Côté, Isabelle M

    2017-01-01

    The Ocean Health Index (OHI) is a framework to assess ocean health by considering the many benefits (called 'goals') that the ocean provides to humans, such as food provision, tourism opportunities, and coastal protection. The OHI framework can be used to assess marine areas at global or regional scales, but how various OHI goals should be weighted to reflect priorities at those scales remains unclear. In this study, we adapted the framework in two ways for application to Canada as a case study. First, we customized the OHI goals to create a national Canadian Ocean Health Index (COHI). In particular, we altered the list of iconic species assessed, added methane clathrates and subsea permafrost as carbon storage habitats, and developed a new goal, 'Aboriginal Needs', to measure access of Aboriginal people to traditional marine hunting and fishing grounds. Second, we evaluated various goal weighting schemes based on preferences elicited from the general public in online surveys. We quantified these public preferences in three ways: using Likert scores, simple ranks from a best-worst choice experiment, and model coefficients from the analysis of the choice experiment. The latter provided the clearest statistical discrimination among goals, and we recommend their use because they can more accurately reflect both public opinion and the trade-offs faced by policy-makers. This initial iteration of the COHI can be used as a baseline against which future COHI scores can be compared, and could potentially be used as a management tool to prioritise actions on a national scale and predict public support for these actions, given that the goal weights are based on public priorities.
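Once goal weights are chosen, an OHI-style index is a weighted average of per-goal scores. A sketch, assuming goal scores on the standard 0-100 OHI scale; the goal names, scores and weights below are invented, not COHI values:

```python
def weighted_index(goal_scores, goal_weights):
    """Weighted mean of goal scores (0-100). Weights might come from
    Likert means, best-worst ranks, or choice-model coefficients; they
    are normalized here so their origin scale does not matter."""
    total_w = sum(goal_weights.values())
    return sum(goal_scores[g] * w for g, w in goal_weights.items()) / total_w

# Hypothetical goals with choice-model-style weights
scores = {"food_provision": 80.0, "tourism": 60.0, "aboriginal_needs": 90.0}
weights = {"food_provision": 2.0, "tourism": 1.0, "aboriginal_needs": 1.0}
cohi_like = weighted_index(scores, weights)
```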

  9. Incorporating public priorities in the Ocean Health Index: Canada as a case study

    PubMed Central

    Archambault, Philippe; Halpern, Benjamin S.; Stewart Lowndes, Julia S.; Côté, Isabelle M.

    2017-01-01

    The Ocean Health Index (OHI) is a framework to assess ocean health by considering the many benefits (called ‘goals’) that the ocean provides to humans, such as food provision, tourism opportunities, and coastal protection. The OHI framework can be used to assess marine areas at global or regional scales, but how various OHI goals should be weighted to reflect priorities at those scales remains unclear. In this study, we adapted the framework in two ways for application to Canada as a case study. First, we customized the OHI goals to create a national Canadian Ocean Health Index (COHI). In particular, we altered the list of iconic species assessed, added methane clathrates and subsea permafrost as carbon storage habitats, and developed a new goal, 'Aboriginal Needs', to measure access of Aboriginal people to traditional marine hunting and fishing grounds. Second, we evaluated various goal weighting schemes based on preferences elicited from the general public in online surveys. We quantified these public preferences in three ways: using Likert scores, simple ranks from a best-worst choice experiment, and model coefficients from the analysis of the choice experiment. The latter provided the clearest statistical discrimination among goals, and we recommend their use because they can more accurately reflect both public opinion and the trade-offs faced by policy-makers. This initial iteration of the COHI can be used as a baseline against which future COHI scores can be compared, and could potentially be used as a management tool to prioritise actions on a national scale and predict public support for these actions, given that the goal weights are based on public priorities. PMID:28542394

  10. [Interaction Between Occupational Vanadium Exposure and hsp70-hom on Neurobehavioral Function].

    PubMed

    Zhang, Qin; Liu, Yun-xing; Cui, Li; Li, Shun-pin; Gao, Wei; Hu, Gao-lin; Zhang, Zu-hui; Lan, Ya-jia

    2016-01-01

    To determine the effect of heat shock protein 70-hom gene (hsp70-hom) polymorphism on the neurobehavioral function of workers exposed to vanadium. Workers from the vanadium products and chemical industry were recruited by cluster sampling. Demographic data and exposure information were collected using a questionnaire. Neurobehavioral function was assessed by the Neurobehavioral Core Test Battery. The hsp70-hom genotype was detected by restriction fragment length polymorphism-polymerase chain reaction (RFLP-PCR). A neurobehavioral index (NBI) was formulated through principal component analysis. Workers with a T/C genotype had worse performance in average reaction time, visual retention, digit span (backward), Santa Ana aiming (non-habitual hand), pursuit aiming (right points, total points), digit symbol and NBI score than others (P < 0.05). The relative risk of an abnormal NBI score for workers with a T/C genotype was 1.748-fold that of those with a T/T genotype. The relative risk of an abnormal NBI score for workers exposed to vanadium was 3.048-fold that of controls (P < 0.05). However, after adjustment for age and education, only vanadium exposure had a significant effect on NBI score. When gene polymorphism and vanadium exposure coexisted, the effect of vanadium on neurobehavioral function was attenuated, but the influence of the T/C genotype increased [odds ratio (OR) = 4.577, P < 0.05]. After adjustment for age and education, the OR for the T/C genotype further increased to 7.777 (P < 0.05). Vanadium exposure and the T/C genotype had a biological interaction effect on NBI score [relative excess risk due to interaction (RERI) = 4.12, attributable proportion (AP) = 0.7, synergy index (S) = 6.45]. After adjustment for age and education, the RERI became 2.49 and the AP became 0.75, but no coefficient of interaction was produced. Priority in occupational protection should be given to vanadium-exposed workers with an hsp70-hom T/C genotype and low education level.
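The interaction measures quoted above (RERI, AP, S) follow standard epidemiological definitions for additive interaction between two exposures; the relative risks in the usage example are invented, not the study's estimates:

```python
def interaction_indices(rr_both, rr_exposure_only, rr_genotype_only):
    """Additive-interaction measures from relative risks, with the
    doubly-unexposed group as reference (RR = 1):
      RERI = RR11 - RR10 - RR01 + 1
      AP   = RERI / RR11
      S    = (RR11 - 1) / ((RR10 - 1) + (RR01 - 1))"""
    reri = rr_both - rr_exposure_only - rr_genotype_only + 1.0
    ap = reri / rr_both
    s = (rr_both - 1.0) / ((rr_exposure_only - 1.0) + (rr_genotype_only - 1.0))
    return reri, ap, s

# Hypothetical relative risks: joint exposure 4.0, each factor alone 1.5
reri, ap, s = interaction_indices(4.0, 1.5, 1.5)
```

RERI > 0, AP > 0 or S > 1 each indicate risk beyond additivity of the two factors.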

  11. Youth Risk Behavior Surveillance--United States, 2013. Morbidity and Mortality Weekly Report (MMWR). Surveillance Summaries. Volume 63, Number SS-4

    ERIC Educational Resources Information Center

    Kann, Laura; Kinchen, Steve; Shanklin, Shari L.; Flint, Katherine H.; Hawkins, Joseph; Harris, William A.; Lowry, Richard; Olsen, Emily O'Malley; McManus, Tim; Chyen, David; Whittle, Lisa; Taylor, Eboni; Demissie, Zewditu; Brener, Nancy; Thornton, Jemekia; Moore, John; Zaza, Stephanie

    2014-01-01

    Problem: Priority health-risk behaviors contribute to the leading causes of morbidity and mortality among youth and adults. Population-based data on these behaviors at the national, state, and local levels can help monitor the effectiveness of public health interventions designed to protect and promote the health of youth nationwide. Reporting…

  12. Youth Risk Behavior Surveillance--United States, 2015. Morbidity and Mortality Weekly Report. Surveillance Summaries. Volume 65, Number 6

    ERIC Educational Resources Information Center

    Kann, Laura; McManus, Tim; Harris, William A.; Shanklin, Shari L.; Flint, Katherine H.; Hawkins, Joseph; Queen, Barbara; Lowry, Richard; Olsen, Emily O'Malley; Chyen, David; Whittle, Lisa; Thornton, Jemekia; Lim, Connie; Yamakawa, Yoshimi; Brener, Nancy; Zaza, Stephanie

    2016-01-01

    Problem: Priority health-risk behaviors contribute to the leading causes of morbidity and mortality among youth and adults. Population-based data on these behaviors at the national, state, and local levels can help monitor the effectiveness of public health interventions designed to protect and promote the health of youth nationwide. Reporting…

  13. Illustrative case using the RISK21 roadmap and matrix: prioritization for evaluation of chemicals found in drinking water

    PubMed Central

    Wolf, Douglas C.; Bachman, Ammie; Barrett, Gordon; Bellin, Cheryl; Goodman, Jay I.; Jensen, Elke; Moretto, Angelo; McMullin, Tami; Pastoor, Timothy P.; Schoeny, Rita; Slezak, Brian; Wend, Korinna; Embry, Michelle R.

    2016-01-01

    The HESI-led RISK21 effort has developed a framework supporting the use of twenty-first century technology in obtaining and using information for chemical risk assessment. This framework represents a problem formulation-based, exposure-driven, tiered data acquisition approach that leads to an informed decision on human health safety to be made when sufficient evidence is available. It provides a transparent and consistent approach to evaluate information in order to maximize the ability of assessments to inform decisions and to optimize the use of resources. To demonstrate the application of the framework’s roadmap and matrix, this case study evaluates a large number of chemicals that could be present in drinking water. The focus is to prioritize which of these should be considered for human health risk as individual contaminants. The example evaluates 20 potential drinking water contaminants, using the tiered RISK21 approach in combination with graphical representation of information at each step, using the RISK21 matrix. Utilizing the framework, 11 of the 20 chemicals were assigned low priority based on available exposure data alone, which demonstrated that exposure was extremely low. The remaining nine chemicals were further evaluated, using refined estimates of toxicity based on readily available data, with three deemed high priority for further evaluation. In the present case study, it was determined that the greatest value of additional information would be from improved exposure models and not from additional hazard characterization. PMID:26451723

  14. Illustrative case using the RISK21 roadmap and matrix: prioritization for evaluation of chemicals found in drinking water.

    PubMed

    Wolf, Douglas C; Bachman, Ammie; Barrett, Gordon; Bellin, Cheryl; Goodman, Jay I; Jensen, Elke; Moretto, Angelo; McMullin, Tami; Pastoor, Timothy P; Schoeny, Rita; Slezak, Brian; Wend, Korinna; Embry, Michelle R

    2016-01-01

    The HESI-led RISK21 effort has developed a framework supporting the use of twenty-first century technology in obtaining and using information for chemical risk assessment. This framework represents a problem formulation-based, exposure-driven, tiered data acquisition approach that leads to an informed decision on human health safety to be made when sufficient evidence is available. It provides a transparent and consistent approach to evaluate information in order to maximize the ability of assessments to inform decisions and to optimize the use of resources. To demonstrate the application of the framework's roadmap and matrix, this case study evaluates a large number of chemicals that could be present in drinking water. The focus is to prioritize which of these should be considered for human health risk as individual contaminants. The example evaluates 20 potential drinking water contaminants, using the tiered RISK21 approach in combination with graphical representation of information at each step, using the RISK21 matrix. Utilizing the framework, 11 of the 20 chemicals were assigned low priority based on available exposure data alone, which demonstrated that exposure was extremely low. The remaining nine chemicals were further evaluated, using refined estimates of toxicity based on readily available data, with three deemed high priority for further evaluation. In the present case study, it was determined that the greatest value of additional information would be from improved exposure models and not from additional hazard characterization.

  15. A web-based study of bipolarity and impulsivity in athletes engaging in extreme and high-risk sports.

    PubMed

    Dudek, Dominika; Siwek, Marcin; Jaeschke, Rafał; Drozdowicz, Katarzyna; Styczeń, Krzysztof; Arciszewska, Aleksandra; Chrobak, Adrian A; Rybakowski, Janusz K

    2016-06-01

    We hypothesised that men and women who engage in extreme or high-risk sports would score higher on standardised measures of bipolarity and impulsivity compared to age- and gender-matched controls. Four hundred and eighty extreme or high-risk athletes (255 males and 225 females) and 235 age-matched control persons (107 males and 128 females) were enrolled in the web-based case-control study. The Mood Disorder Questionnaire (MDQ) and Barratt Impulsiveness Scale (BIS-11) were administered to screen for bipolarity and impulsive behaviours, respectively. Results indicated that extreme or high-risk athletes had significantly higher scores of bipolarity and impulsivity, and lower scores on the cognitive complexity subscale of the BIS-11, compared to controls. Further, there were positive correlations between MDQ and BIS-11 scores. These results showed greater rates of bipolarity and impulsivity in the extreme or high-risk athletes, suggesting these measures are sensitive to high-risk behaviours.

  16. A rainfall risk analysis using a GIS-based estimation of urban vulnerability

    NASA Astrophysics Data System (ADS)

    Renard, Florent; Pierre-Marie, Chapon

    2010-05-01

    The urban community of Lyon, situated in France at the north of the Rhône valley, comprises 1.2 million inhabitants within 515 km². With such a concentration of assets, policy makers and local elected officials attach great importance to the management of hydrological risks, particularly given the inherent characteristics of the territory. While the hazards associated with these risks in the territory of Lyon have been the subject of numerous analyses, studies on the vulnerability of Greater Lyon are rare and share common shortcomings that impair their validity. We recall that risk is classically seen as the relationship between the probability of occurrence of hazards and vulnerability. In this article, vulnerability comprises two components. The first is the sensitivity of the stakes exposed to hydrological hazards such as urban runoff, that is to say, their propensity to suffer damage during a flood (Gleize and Reghezza, 2007). The second is their relative importance in the functioning of the community. Indeed, not all stakes contribute equally to Greater Lyon: damage to urban furniture such as a bus shelter, for example, is less harmful to the activities of the urban area than damage to transport infrastructure (Renard and Chapon, 2010). This communication proposes to assess the vulnerability of the Lyon urban area to hydrological hazards. The territory comprises human, environmental and material stakes. The first part of this work is to identify all these stakes as exhaustively as possible. It is then necessary to build a "vulnerability index" (Tixier et al., 2006), using multicriteria decision-aid methods to evaluate the two components of vulnerability: sensitivity and contribution to the functioning of the community. 
The results of the overall vulnerability assessment are then presented and coupled with various water-related hazards, such as runoff associated with heavy rains, to locate areas of risk in the urban area. Targets that share the same rank on this vulnerability index do not possess the same importance, nor the same sensitivity to flood hazard. Therefore, the second part of this work is to define the priorities and sensitivities of the different targets based on expert judgment. Multicriteria decision methods are used to prioritize elements and are therefore suited to modelling the sensitivity of the stakes of Greater Lyon (Griot, 2008). The purpose of these methods is the assessment of priorities among the different components of the situation. Thomas Saaty's analytic hierarchy process (1980) is the most frequently used because of its many advantages. On this basis, formal calculations of the priorities and sensitivities of the elements were conducted, based on expert judgment: during semi-structured interviews, the 38 experts in our sample judged, by pairwise comparison, which stakes seemed relatively more important than others, and proceeded in the same manner to determine the stakes' sensitivity to flood hazard. Finally, the consistency of the answers given by the experts is validated by calculating a coherence ratio, and their results are aggregated to provide priority functions (based on the relative importance of each stake) and sensitivity functions (based on the relative sensitivity of each stake). From these priority and sensitivity functions, the general vulnerability function is obtained. The vulnerability functions make it possible to define the importance of the stakes of Greater Lyon and their sensitivity to hydrological hazards. The global vulnerability function, obtained from the sensitivity and priority functions, shows the great importance of human stakes (75%). 
The vulnerability of environmental targets represents 12% of the global vulnerability function, as does that of material stakes. However, environmental and material stakes do not carry the same weight in the priority and sensitivity functions: environmental stakes appear more important than material ones (17% versus 5% in the priority function) but less sensitive to a hydrological hazard (6% versus 20% in the sensitivity function). Similarly, priority and sensitivity functions are established for all stakes at all levels. The stakes are then converted to a 100-metre grid, which standardizes the collection framework and the heterogeneous nature of the data to allow comparison. The result is a detailed, consistent and objective vulnerability assessment of the territory of Greater Lyon. Finally, to obtain a direct reading of risk, the combination of hazard and vulnerability, the two maps are overlaid.
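Saaty's analytic hierarchy process, referenced above, derives priority weights from pairwise expert comparisons and validates them with a consistency ratio. A minimal sketch using the row geometric-mean approximation of the principal eigenvector; the 3×3 matrix comparing human, environmental and material stakes is illustrative, not the study's data, while the random-index table holds Saaty's standard values:

```python
import math

# Saaty's random consistency indices for matrices of size n
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}

def ahp_priorities(matrix):
    """Approximate AHP priority weights via the row geometric-mean method,
    then estimate lambda_max from (A w)_i / w_i and compute the
    consistency ratio CR = ((lambda_max - n) / (n - 1)) / RI[n];
    CR < 0.1 is conventionally acceptable."""
    n = len(matrix)
    gm = [math.prod(row) ** (1.0 / n) for row in matrix]
    w = [g / sum(gm) for g in gm]
    aw = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(aw[i] / w[i] for i in range(n)) / n
    cr = ((lam - n) / (n - 1)) / RI[n] if RI[n] else 0.0
    return w, cr

# Hypothetical pairwise judgments: human vs environmental vs material stakes
A = [[1.0,     3.0, 5.0],
     [1 / 3.0, 1.0, 2.0],
     [1 / 5.0, 1 / 2.0, 1.0]]
weights, cr = ahp_priorities(A)
```

Judgments from multiple experts, as in the study, would be aggregated (e.g. by geometric mean of their matrices) before this step.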

  17. Sleepiness and sleep-disordered breathing in truck drivers: risk analysis of road accidents.

    PubMed

    Catarino, Rosa; Spratley, Jorge; Catarino, Isabel; Lunet, Nuno; Pais-Clemente, Manuel

    2014-03-01

    Portugal has one of the highest road traffic fatality rates in Europe. A clear association between sleep-disordered breathing (SDB) and traffic accidents has been previously demonstrated. This study aimed to determine prevalence of excessive daytime sleepiness (EDS) and other sleep disorder symptoms among truck drivers and to identify which individual traits and work habits are associated to increased sleepiness and accident risk. We evaluated a sample of 714 truck drivers using a questionnaire (244 face-to-face interviews, 470 self-administered) that included sociodemographic data, personal habits, previous accidents, Epworth Sleepiness Scale (ESS), and the Berlin questionnaire (BQ). Twenty percent of drivers had EDS and 29 % were at high risk for having obstructive sleep apnea syndrome (OSAS). Two hundred sixty-one drivers (36.6 %) reported near-miss accidents (42.5 % sleep related) and 264 (37.0 %), a driving accident (16.3 % sleep related). ESS score ≥ 11 was a risk factor for both near-miss accidents (odds ratio (OR)=3.84, p<0.01) and accidents (OR=2.25, p<0.01). Antidepressant use was related to accidents (OR=3.30, p=0.03). We found an association between high Mallampati score (III-IV) and near misses (OR=1.89, p=0.04). In this sample of Portuguese truck drivers, we observed a high prevalence of EDS and other sleep disorder symptoms. Accident risk was related to sleepiness and antidepressant use. Identifying drivers at risk for OSAS should be a major priority of medical assessment centers, as a public safety policy.

  18. The ACTA PORT-score for predicting perioperative risk of blood transfusion for adult cardiac surgery.

    PubMed

    Klein, A A; Collier, T; Yeates, J; Miles, L F; Fletcher, S N; Evans, C; Richards, T

    2017-09-01

    A simple and accurate scoring system to predict the risk of transfusion for patients undergoing cardiac surgery is lacking. We identified independent risk factors associated with transfusion by performing univariate analysis, followed by logistic regression. We then simplified the score to an integer-based system and tested it using the area under the receiver operating characteristic curve (AUC) with a Hosmer-Lemeshow goodness-of-fit test. Finally, the scoring system was applied to the external validation dataset and the same statistical methods were applied to test the accuracy of the ACTA-PORT score. Several factors were independently associated with risk of transfusion, including age, sex, body surface area, logistic EuroSCORE, preoperative haemoglobin and creatinine, and type of surgery. In our primary dataset, the score accurately predicted risk of perioperative transfusion in cardiac surgery patients with an AUC of 0.76. The external validation confirmed the accuracy of the scoring method with an AUC of 0.84 and good agreement across all scores, with a minor tendency to underestimate transfusion risk in very high-risk patients. The ACTA-PORT score is a reliable, validated tool for predicting risk of transfusion for patients undergoing cardiac surgery. This and other scores can be used in research studies for risk adjustment when assessing outcomes, and might also be incorporated into a Patient Blood Management programme. © The Author 2017. Published by Oxford University Press on behalf of the British Journal of Anaesthesia. All rights reserved. For Permissions, please email: journals.permissions@oup.com
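One common way to simplify a fitted logistic model into an integer-based score, as described above, is to scale the coefficients by the smallest effect and round. The coefficients below are invented for illustration and are not the ACTA-PORT weights; the actual derivation may differ:

```python
def integer_points(coefficients):
    """Convert logistic-regression coefficients (log odds ratios) into
    integer points by dividing by the smallest absolute coefficient and
    rounding; one common simplification scheme, not necessarily the
    one used to derive ACTA-PORT."""
    base = min(abs(c) for c in coefficients.values())
    return {name: round(c / base) for name, c in coefficients.items()}

# Hypothetical coefficients, for illustration only
coefs = {"age_per_decade": 0.25, "female_sex": 0.50, "low_haemoglobin": 1.00}
points = integer_points(coefs)
# A patient's score is then the sum of points for the factors present
```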

  19. Polygenic Risk Score, Parental Socioeconomic Status, Family History of Psychiatric Disorders, and the Risk for Schizophrenia: A Danish Population-Based Study and Meta-analysis.

    PubMed

    Agerbo, Esben; Sullivan, Patrick F; Vilhjálmsson, Bjarni J; Pedersen, Carsten B; Mors, Ole; Børglum, Anders D; Hougaard, David M; Hollegaard, Mads V; Meier, Sandra; Mattheisen, Manuel; Ripke, Stephan; Wray, Naomi R; Mortensen, Preben B

    2015-07-01

    Schizophrenia has a complex etiology influenced both by genetic and nongenetic factors but disentangling these factors is difficult. To estimate (1) how strongly the risk for schizophrenia relates to the mutual effect of the polygenic risk score, parental socioeconomic status, and family history of psychiatric disorders; (2) the fraction of cases that could be prevented if no one was exposed to these factors; (3) whether family background interacts with an individual's genetic liability so that specific subgroups are particularly risk prone; and (4) to what extent a proband's genetic makeup mediates the risk associated with familial background. We conducted a nested case-control study based on Danish population-based registers. The study consisted of 866 patients diagnosed as having schizophrenia between January 1, 1994, and December 31, 2006, and 871 matched control individuals. Genome-wide data and family psychiatric and socioeconomic background information were obtained from neonatal biobanks and national registers. Results from a separate meta-analysis (34,600 cases and 45,968 control individuals) were applied to calculate polygenic risk scores. Polygenic risk scores, parental socioeconomic status, and family psychiatric history. Odds ratios (ORs), attributable risks, liability R2 values, and proportions mediated. Schizophrenia was associated with the polygenic risk score (OR, 8.01; 95% CI, 4.53-14.16 for highest vs lowest decile), socioeconomic status (OR, 8.10; 95% CI, 3.24-20.3 for 6 vs no exposures), and a history of schizophrenia/psychoses (OR, 4.18; 95% CI, 2.57-6.79). The R2 values were 3.4% (95% CI, 2.1-4.6) for the polygenic risk score, 3.1% (95% CI, 1.9-4.3) for parental socioeconomic status, and 3.4% (95% CI, 2.1-4.6) for family history. Socioeconomic status and psychiatric history accounted for 45.8% (95% CI, 36.1-55.5) and 25.8% (95% CI, 21.2-30.5) of cases, respectively. 
There was an interaction between the polygenic risk score and family history (P = .03). A total of 17.4% (95% CI, 9.1-26.6) of the effect associated with family history of schizophrenia/psychoses was mediated through the polygenic risk score. Schizophrenia was associated with the polygenic risk score, family psychiatric history, and socioeconomic status. Our study demonstrated that family history of schizophrenia/psychoses is partly mediated through the individual's genetic liability.
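
    For readers unfamiliar with the construction, a polygenic risk score is a weighted sum over variants: the GWAS effect size times the individual's risk-allele dosage. A minimal sketch, with made-up effect sizes and genotypes:

```python
# A polygenic risk score (PRS) is a weighted sum: the GWAS effect size
# (log odds per risk allele) times the individual's risk-allele dosage
# at each variant. Effect sizes and genotypes below are invented.

def polygenic_risk_score(effect_sizes, dosages):
    assert len(effect_sizes) == len(dosages)
    return sum(beta * d for beta, d in zip(effect_sizes, dosages))

betas = [0.12, -0.05, 0.08, 0.20]   # hypothetical per-allele log odds
person = [2, 1, 0, 2]               # risk-allele counts (0, 1, or 2)
print(polygenic_risk_score(betas, person))   # ≈ 0.59
```

    In the study, individuals were then binned by decile of the score distribution, and the top decile was compared with the bottom (OR, 8.01).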

  20. Personalized Risk Scoring for Critical Care Prognosis Using Mixtures of Gaussian Processes.

    PubMed

    Alaa, Ahmed M; Yoon, Jinsung; Hu, Scott; van der Schaar, Mihaela

    2018-01-01

    In this paper, we develop a personalized real-time risk scoring algorithm that provides timely and granular assessments for the clinical acuity of ward patients based on their (temporal) lab tests and vital signs; the proposed risk scoring system ensures timely intensive care unit admissions for clinically deteriorating patients. The risk scoring system is based on the idea of sequential hypothesis testing under an uncertain time horizon. The system learns a set of latent patient subtypes from the offline electronic health record data, and trains a mixture of Gaussian Process experts, where each expert models the physiological data streams associated with a specific patient subtype. Transfer learning techniques are used to learn the relationship between a patient's latent subtype and her static admission information (e.g., age, gender, transfer status, ICD-9 codes, etc.). Experiments conducted on data from a heterogeneous cohort of 6321 patients admitted to the Ronald Reagan UCLA Medical Center show that our score significantly outperforms the currently deployed risk scores, such as the Rothman index, MEWS, APACHE, and SOFA scores, in terms of timeliness, true positive rate, and positive predictive value. Our results reflect the importance of adopting the concepts of personalized medicine in critical care settings; significant accuracy and timeliness gains can be achieved by accounting for the patients' heterogeneity. The proposed risk scoring methodology can confer huge clinical and social benefits on a massive number of critically ill inpatients who exhibit adverse outcomes including, but not limited to, cardiac arrests, respiratory arrests, and septic shocks.
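
    A heavily simplified sketch of the mixture-of-experts idea may help: each latent subtype carries its own model of normal physiology, the static admission features yield subtype weights, and the risk score is the weighted deviation of the observed vitals from each subtype's model. The real system uses Gaussian process experts over temporal data streams; independent per-vital Gaussians with invented parameters stand in here.

```python
# Toy stand-in for the mixture-of-experts risk score: each hypothetical
# subtype has a (mean, sd) model per vital sign, and the score is the
# subtype-weighted mean absolute z-score of the observed vitals.

SUBTYPES = {
    "young_stable":  {"heart_rate": (75, 10), "resp_rate": (14, 2)},
    "elderly_frail": {"heart_rate": (85, 12), "resp_rate": (18, 3)},
}

def risk_score(vitals, subtype_weights):
    score = 0.0
    for name, weight in subtype_weights.items():
        model = SUBTYPES[name]
        # mean absolute z-score of the vitals under this subtype's model
        z = sum(abs(vitals[v] - mu) / sd for v, (mu, sd) in model.items())
        score += weight * z / len(model)
    return score

# Subtype weights would come from static admission features; these are made up.
weights = {"young_stable": 0.3, "elderly_frail": 0.7}
print(risk_score({"heart_rate": 120, "resp_rate": 28}, weights))  # deteriorating
print(risk_score({"heart_rate": 78, "resp_rate": 15}, weights))   # stable
```

    The deteriorating patient scores higher than the stable one, which is the behaviour an ICU-admission trigger needs; the published method additionally models temporal correlation, which this sketch ignores.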

  1. New scoring system for intra-abdominal injury diagnosis after blunt trauma.

    PubMed

    Shojaee, Majid; Faridaalaee, Gholamreza; Yousefifard, Mahmoud; Yaseri, Mehdi; Arhami Dolatabadi, Ali; Sabzghabaei, Anita; Malekirastekenari, Ali

    2014-01-01

    An accurate scoring system for intra-abdominal injury (IAI) based on clinical manifestation and examination may decrease unnecessary CT scans, save time, and reduce healthcare cost. This study is designed to provide a new scoring system for a better diagnosis of IAI after blunt trauma. This prospective observational study was performed from April 2011 to October 2012 on patients aged over 18 years with suspected blunt abdominal trauma (BAT) admitted to the emergency department (ED) of Imam Hussein Hospital and Shohadaye Hafte Tir Hospital. All patients were assessed and treated based on Advanced Trauma Life Support and ED protocol. Diagnosis was made according to CT scan findings, which were considered the gold standard. Data were gathered based on patient's history, physical exam, ultrasound and CT scan findings by a general practitioner who was not blind to this study. Chi-square tests and logistic regression were performed. Factors with a significant relationship with CT scan findings were entered into multivariate regression models, where a coefficient (β) was assigned based on the contribution of each of them. The scoring system was developed based on the obtained total β of each factor. Altogether 261 patients (80.1% male) were enrolled (48 cases of IAI). A 24-point blunt abdominal trauma scoring system (BATSS) was developed. Patients were divided into three groups: low risk (score<8), moderate risk (8≤score<12) and high risk (score≥12). In the high-risk group immediate laparotomy should be performed, the moderate-risk group needs further assessment, and the low-risk group should be kept under observation. Low-risk patients did not show positive CT scans (specificity 100%). Conversely, all high-risk patients had positive CT scan findings (sensitivity 100%). The receiver operating characteristic curve indicated a close relationship between the results of CT scan and BATSS (sensitivity=99.3%). 
The present scoring system provides a highly precise and reproducible diagnostic tool for BAT detection and has the potential to reduce unnecessary CT scans and cut unnecessary costs.
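
    The triage bands reported in the abstract (low <8, moderate 8-11, high ≥12 on the 24-point scale) map directly to code; only the band-to-action mapping below is taken from the abstract, the rest is illustrative scaffolding.

```python
# BATSS bands as stated in the abstract: low (<8) -> observation,
# moderate (8-11) -> further assessment, high (>=12) -> immediate laparotomy.

def batss_triage(score):
    if not 0 <= score <= 24:
        raise ValueError("BATSS is a 24-point score")
    if score < 8:
        return "low risk: observation"
    if score < 12:
        return "moderate risk: further assessment"
    return "high risk: immediate laparotomy"

print(batss_triage(5))    # low risk: observation
print(batss_triage(13))   # high risk: immediate laparotomy
```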

  2. HySafe research priorities workshop report: Summary of the workshop organized in cooperation with US DOE and supported by EC JRC in Washington, DC, November 10-11, 2014.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keller, Jay; Hill, Laura; Kiuru, Kristian

    The HySafe research priorities workshop is held in even years, between meetings of the International Conference on Hydrogen Safety (ICHS), which is held in odd years. The research priorities workshop is intended to identify the state of the art in understanding of the physical behavior of hydrogen and hydrogen systems, with a focus on safety. Typical issues addressed include the behavior of unintended hydrogen releases, transient combustion phenomena, effectiveness of mitigation measures, and hydrogen effects in materials. In the workshop, critical knowledge gaps are identified. Areas of research and coordinated actions for the near and medium term are derived and prioritized from these knowledge gaps. The stimulated research helps pave the way for the rapid and safe deployment of hydrogen technologies on a global scale. To support the idea of delivering globally accepted research priorities for hydrogen safety, the workshop is organized as an internationally open meeting. In attendance are stakeholders from the academic community (universities, national laboratories), funding agencies, and industry. The industry participation is critically important to ensure that the research priorities align with the current needs of the industry responsible for the deployment of hydrogen technologies. This report presents the results of the HySafe Research Priorities Workshop held in Washington, D.C. on November 10-11, 2014. At the workshop the participants presented updates (since the previous workshop, organized two years before in Berlin, Germany) of their research and development work on hydrogen safety. Following the workshop, participants were asked to provide feedback on high-priority topics for each of the research areas discussed and to rank research area categories and individual research topics within these categories. The research areas were ranked as follows (with the percentage of the vote in parentheses): (1) Quantitative Risk Assessment (QRA) Tools (23%); (2) Reduced Model Tools (15%); (3) Indoor (13%); (4) Unintended Release-Liquid (11%); (5) Unintended Release-Gas (8%); (6) Storage (8%); (7) Integration Platforms (7%); (8) Hydrogen Safety Training (7%); (9) Materials Compatibility/Sensors (7%); (10) Applications (2%). 
The workshop participants ranked the need for Quantitative Risk Assessment (QRA) tools as the top priority by a large margin. QRA tools enable an informed expert to quantify the risk associated with a particular hydrogen system in a particular scenario. With appropriate verification and validation, such tools will enable system designers to achieve a desired level of risk with suitable risk mitigation strategies; permitting officials to determine whether a particular system installation meets the desired risk level (performance-based Regulations, Codes, and Standards (RCS) rather than prescriptive RCS); and code developers to develop code language based on rigorous and validated physical models, statistics, and standardized QRA methodologies. Another important research topic identified is the development of validated reduced physical models for use in the QRA tools. Improvement of the understanding and modeling of specific release phenomena, in particular liquid releases, is also a highly ranked research topic. Acknowledgement: The International Association HySafe, represented here by the authors, would like to thank all participants of the workshop for their valuable contributions. Particularly appreciated are the active participation of the industry representatives and the steady support of the European Commission's Joint Research Centre (JRC). Deep gratitude is owed for the great support of the United States Department of Energy (DOE) Office of Energy Efficiency and Renewable Energy's Fuel Cell Technologies Office (EERE/FCTO) in organizing the 2014 hydrogen safety research priorities workshop.

  3. NATIONAL-SCALE ASSESSMENT OF AIR TOXICS RISKS ...

    EPA Pesticide Factsheets

    The national-scale assessment of air toxics risks is a modeling assessment which combines emission inventory development, atmospheric fate and transport modeling, exposure modeling, and risk assessment to characterize the risk associated with inhaling air toxics from outdoor sources. This national-scale effort will be initiated for the base year 1996 and repeated every three years thereafter to track trends and inform program development. Its goal is to provide a broad-scale understanding of inhalation risks for a subset of atmospherically emitted air toxics, informing further data-gathering efforts and priority setting for the EPA's Air Toxics Programs.

  4. Exploring Citizen Infrastructure and Environmental Priorities in Mumbai, India

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sperling, Joshua; Romero-Lankao, Patricia; Beig, Gufran

    Many cities worldwide seek to understand local policy priorities among their general populations. This study explores how differences in local conditions and among citizens within and across Mumbai, India, shape local infrastructure (e.g., energy, water, transport) and environmental (e.g., managing pollution, climate-related extreme weather events) policy priorities for change that may or may not be aligned with local government action or with global environmental sustainability concerns such as low-carbon development. In this rapidly urbanizing city, multiple issues compete for prominence, ranging from improved management of pollution and extreme weather to energy and other infrastructure services. To inform a broader perspective of policy priorities for urban development and risk mitigation, a survey was conducted among over 1200 citizens. The survey explored the state of local conditions, the challenges citizens face, and the ways in which differences in local conditions (socio-institutional, infrastructure, and health-related) demonstrate inequities and influence how citizens perceive risks and rank priorities for the future design and implementation of local planning, policy, and community-based efforts. With growing discussion and tensions surrounding the new urban sustainable development goal, announced by the UN in late September 2015, and a new global urban agenda document to be agreed upon at 'Habitat III', the question of whether sustainable urbanization priorities should be set at the international, national, or local level remains controversial. As such, this study aims first to understand determinants of and variations in local priorities across one city, with implications discussed for local-to-global urban sustainability. 
Survey findings indicate that differences in conditions such as age, assets, levels of participation in residential action groups, the health outcome of chronic asthma, and the infrastructure service of piped water provision to homes are significant in shaping the top infrastructure and environmental policy priorities, which include water supply and sanitation, air pollution, waste, and extreme heat.

  5. TU-FG-201-12: Designing a Risk-Based Quality Assurance Program for a Newly Implemented Y-90 Microspheres Procedure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vile, D; Zhang, L; Cuttino, L

    2016-06-15

    Purpose: To create a quality assurance program based upon a risk-based assessment of a newly implemented SirSpheres Y-90 procedure. Methods: A process map was created for a newly implemented SirSpheres procedure at a community hospital. The process map documented each step of this collaborative procedure, as well as the roles and responsibilities of each member. From the process map, different potential failure modes were determined, as well as any current controls in place. From this list, a full failure mode and effects analysis (FMEA) was performed by grading each failure mode's likelihood of occurrence, likelihood of detection, and potential severity. These numbers were then multiplied to compute the risk priority number (RPN) for each potential failure mode. Failure modes were then ranked based on their RPN. Additional controls were then added, with failure modes corresponding to the highest RPNs taking priority. Results: A process map was created that succinctly outlined each step in the SirSpheres procedure in its current implementation. From this, 72 potential failure modes were identified and ranked according to their associated RPN. Quality assurance controls and safety barriers were then added, with the failure modes associated with the highest risk addressed first. Conclusion: A quality assurance program was created from a risk-based assessment of the SirSpheres process. Process mapping and FMEA were effective in identifying potential high-risk failure modes for this new procedure, which were prioritized for new quality assurance controls. TG 100 recommends the fault tree analysis methodology to design a comprehensive and effective QC/QM program, yet we found that simply introducing additional safety barriers to address high-RPN failure modes makes the whole process simpler and safer.
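
    The RPN arithmetic at the heart of an FMEA is simple enough to sketch: grade each failure mode's occurrence, detection, and severity, multiply, and address failures from the highest RPN down. The failure modes and grades below are invented examples, not the 72 identified in the study.

```python
# FMEA risk priority number: the product of occurrence, detection, and
# severity grades (conventionally 1-10 each). Failure modes are then
# ranked highest-RPN first. All entries below are hypothetical.

def rpn(occurrence, detection, severity):
    return occurrence * detection * severity

failure_modes = [
    ("wrong activity ordered",  2, 7, 9),
    ("catheter misplacement",   3, 4, 8),
    ("dose calculation error",  2, 6, 10),
]

ranked = sorted(failure_modes, key=lambda fm: rpn(*fm[1:]), reverse=True)
for name, o, d, s in ranked:
    print(f"RPN {rpn(o, d, s):3d}  {name}")
```

    Safety barriers are then added for the top of this list first, which is the prioritization strategy the abstract describes.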

  6. Cardiovascular Disease Risk Score: Results from the Filipino-American Women Cardiovascular Study.

    PubMed

    Ancheta, Irma B; Battie, Cynthia A; Volgman, Annabelle S; Ancheta, Christine V; Palaniappan, Latha

    2017-02-01

    Although cardiovascular disease (CVD) is a leading cause of morbidity and mortality of Filipino-Americans, conventional CVD risk calculators may not be accurate for this population. CVD risk scores of a group of Filipino-American women (FAW) were measured using the major risk calculators. Secondly, the sensitivity of the various calculators to obesity was determined. This is a cross-sectional descriptive study that enrolled 40-65-year-old FAW (n = 236), during a community-based health screening study. Ten-year CVD risk was calculated using the Framingham Risk Score (FRS), Reynolds Risk Score (RRS), and Atherosclerotic Cardiovascular Disease (ASCVD) calculators. The 30-year risk FRS and the lifetime ASCVD calculators were also determined. Levels of predicted CVD risk varied as a function of the calculator. The 10-year ASCVD calculator classified 12 % of participants with ≥10 % risk, but the 10-year FRS and RRS calculators classified all participants with ≤10 % risk. The 30-year "Hard" Lipid and BMI FRS calculators classified 32 and 43 % of participants with high (≥20 %) risk, respectively, while 95 % of participants were classified with ≥20 % risk by the lifetime ASCVD calculator. The percent of participants with elevated CVD risk increased as a function of waist circumference for most risk score calculators. Differences in risk score as a function of the risk score calculator indicate the need for outcome studies in this population. Increased waist circumference was associated with increased CVD risk scores underscoring the need for obesity control as a primary prevention of CVD in FAW.

  7. A challenge for land and risk managers: different stakeholders, different definitions of the risks

    NASA Astrophysics Data System (ADS)

    Fernandez, M.; Ruegg, J.

    2012-04-01

    In developing countries, mountain populations and territories are subject to multiple risks and vulnerabilities. In addition, they face even greater challenges than developed countries due to lack of knowledge, resources, and technology. Many different types of actors in society manage risk at various scales and levels (i.e., engineers, geologists, administrators, land use planners, merchants, and local indigenous and non-indigenous people). Because of limited resources and limited possibilities to reduce all types of risk, these different actors, or 'risk managers', have to choose and compete to prioritize which types of risks to address. This paper addresses a case study from San Cristobal Altaverapaz, Guatemala, where a large landslide, "Los Chorros", a catastrophic collapse of 6 million cubic meters of rock, is affecting several communities and one of the country's main west-east access highways. In this case, the government established that the "primary" risk is the landslide, whereas other local stakeholders consider the primary risks to be economic. This paper, situated at the intersection of political science, geography, and disaster risk management, addresses the social conflict and competition over priorities and solutions for risk management among different groups of actors, drawing on the ongoing Los Chorros landslide mitigation process. This work is based on the analysis of practices, policies, and institutions in order to understand how the inclusion of multiple stakeholders in determining risk priorities can lead to more sustainable risk management in a given territory. The main objective of this investigation is first to identify and understand the juxtaposition of different readings of the risk equation, usually considered the interface between vulnerability, exposure, and hazards. Secondly, it is to analyze the mechanisms of actions taken by the various stakeholders, or risk managers. 
The analysis focuses on the various solutions proposed for reducing vulnerabilities (and consequently risks). To resolve a post-disaster situation, the actors prioritize one main type of vulnerability out of a set of vulnerabilities (in a multi-vulnerability context). With this choice, they define their own acceptable risk limits and the type of action that is most relevant. In doing so, they have to determine which elements can be changed and improved and which elements must be considered essential and preserved, i.e. the priority variables. These may include equipment, production facilities, networks, services, modes of production and organization, etc., or the various economic and social capitals upon which individuals and groups rely for recovering from a post-disaster situation. Depending on the actor, certain factors will be emphasized over others, and these may change over time. Linked with this political, institutional, and geographical analysis of risk management, this work also questions who the legitimate actors are, what the right criteria are for prioritizing risk reduction actions using public funds, and finally, which motivations are satisfied. In this sense, the challenge for managers of natural hazards is to move from risk management in the strict sense, which focuses mainly on hazards, to a broader management of risks, taking into consideration what is important for society and for the functioning of systems (what must not be vulnerable in a territorial system). In a context where risk and risk management are produced and managed by both formal and informal stakeholders, the main issue is how to engage the various stakeholders and weigh their different priorities in order to determine which actions are best suited for a more balanced approach to risk management. 
This case study demonstrates that reducing landslide risk is subject to interpretation and to competing priorities, depending on the role of each actor, their needs, and their range of action within a territory.

  8. The utility of diabetes risk score items as predictors of incident type 2 diabetes in Asian populations: An evidence-based review.

    PubMed

    Hu, Pei Lin; Koh, Yi Ling Eileen; Tan, Ngiap Chuan

    2016-12-01

    The prevalence of type 2 diabetes mellitus is rising, with many Asian countries among the top 10 countries with the highest numbers of persons with diabetes. Reliable diabetes risk scores enable the identification of individuals at risk of developing diabetes for early intervention. This article aims to identify the common risk factors in the risk scores with the highest discrimination and the factors with the most influence on the risk score in Asian populations, and to propose a set of factors translatable to the multi-ethnic Singapore population. A systematic search of the PubMed and EMBASE databases was conducted to identify studies published before August 2016 that developed risk prediction models for incident diabetes. Twelve studies were identified. Risk scores that included laboratory measurements had better discrimination. Coefficient analysis showed fasting glucose and HbA1c having the greatest impact on the risk score. A proposed Asian risk score would include: family history of diabetes, age, gender, smoking status, body mass index, waist circumference, hypertension, fasting plasma glucose, HbA1c, HDL-cholesterol and triglycerides. Future research is required on the influence of ethnicity in Singapore. The risk score may potentially be used to stratify individuals for enrolment into diabetes prevention programmes. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  9. Development and validation of multivariable predictive model for thromboembolic events in lymphoma patients.

    PubMed

    Antic, Darko; Milic, Natasa; Nikolovski, Srdjan; Todorovic, Milena; Bila, Jelena; Djurdjevic, Predrag; Andjelic, Bosko; Djurasinovic, Vladislava; Sretenovic, Aleksandra; Vukovic, Vojin; Jelicic, Jelena; Hayman, Suzanne; Mihaljevic, Biljana

    2016-10-01

    Lymphoma patients are at increased risk of thromboembolic events, but thromboprophylaxis in these patients is largely underused. We sought to develop and validate a simple model, based on individual clinical and laboratory patient characteristics, that would designate lymphoma patients at risk for thromboembolic events. The study population included 1,820 lymphoma patients who were treated in the Lymphoma Departments at the Clinics of Hematology, Clinical Center of Serbia and Clinical Center Kragujevac. The model was developed using data from a derivation cohort (n = 1,236), and further assessed in the validation cohort (n = 584). Sixty-five patients (5.3%) in the derivation cohort and 34 (5.8%) patients in the validation cohort developed thromboembolic events. The variables independently associated with risk for thromboembolism were: previous venous and/or arterial events, mediastinal involvement, BMI > 30 kg/m², reduced mobility, extranodal localization, development of neutropenia, and hemoglobin level < 100 g/L. Based on the risk model score, the population was divided into the following risk categories: low (score 0-1), intermediate (score 2-3), and high (score >3). For patients classified at risk (intermediate and high-risk scores), the model produced a negative predictive value of 98.5%, positive predictive value of 25.1%, sensitivity of 75.4%, and specificity of 87.5%. A high-risk score had a positive predictive value of 65.2%. The diagnostic performance measures retained similar values in the validation cohort. The developed prognostic Thrombosis Lymphoma (ThroLy) score is more specific for lymphoma patients than any other available score targeting thrombosis in cancer patients. Am. J. Hematol. 91:1014-1019, 2016. © 2016 Wiley Periodicals, Inc.
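
    All four performance measures quoted above derive from a single 2x2 confusion table. As a check on the arithmetic, the sketch below uses cell counts chosen so they roughly reproduce the derivation-cohort figures (65 events among 1,236 patients); the exact counts are a reconstruction for illustration, not published data.

```python
# Sensitivity, specificity, PPV, and NPV from a 2x2 confusion table.
# The counts are reverse-engineered to approximate the derivation cohort
# (n = 1236, 65 thromboembolic events); they are illustrative only.

def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

m = diagnostic_metrics(tp=49, fp=146, fn=16, tn=1025)
print({k: round(v, 3) for k, v in m.items()})
# sensitivity 0.754, specificity 0.875, ppv 0.251, npv 0.985
```

    The high NPV with a modest PPV is the profile one expects from a rule-out score applied to a low-prevalence outcome.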

  10. Research options for controlling zoonotic disease in India, 2010-2015.

    PubMed

    Sekar, Nitin; Shah, Naman K; Abbas, Syed Shahid; Kakkar, Manish

    2011-02-25

    Zoonotic infections pose a significant public health challenge for low- and middle-income countries and have traditionally been a neglected area of research. The Roadmap to Combat Zoonoses in India (RCZI) initiative conducted an exercise to systematically identify and prioritize research options needed to control zoonoses in India. Priority setting methods developed by the Child Health and Nutrition Research Initiative were adapted for the diversity of sectors, disciplines, diseases and populations relevant for zoonoses in India. A multidisciplinary group of experts identified priority zoonotic diseases and knowledge gaps and proposed research options to address key knowledge gaps within the next five years. Each option was scored using predefined criteria by another group of experts. The scores were weighted using relative ranks among the criteria based upon the feedback of a larger reference group. We categorized each research option by type of research, disease targeted, factorials, and level of collaboration required. We analysed the research options by tabulating them along these categories. Seventeen experts generated four universal research themes and 103 specific research options, the majority of which required a high to medium level of collaboration across sectors. Research options designated as pertaining to 'social, political and economic' factorials predominated and scored higher than options focussing on ecological, genetic and biological, or environmental factors. Research options related to 'health policy and systems' scored highest while those related to 'research for development of new interventions' scored the lowest. We methodically identified research themes and specific research options incorporating perspectives of a diverse group of stakeholders. These outputs reflect the diverse nature of challenges posed by zoonoses and should be acceptable across diseases, disciplines, and sectors. 
The identified research options capture the need for 'actionable research' for advancing the prevention and control of zoonoses in India.
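
    The CHNRI-style mechanics (score each option against predefined criteria, then combine the criterion scores with weights reflecting the criteria's relative ranks) can be sketched as follows; the criteria, weights, and options here are hypothetical, not those used by the RCZI exercise.

```python
# Weighted multi-criteria prioritization sketch. Criterion weights would
# come from the reference group's relative ranking of the criteria; all
# names and numbers below are invented for illustration.

CRITERIA_WEIGHTS = {"answerability": 0.35, "impact": 0.40, "equity": 0.25}

def weighted_score(scores):
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())

options = {
    "rabies vaccination delivery models": {"answerability": 8, "impact": 9, "equity": 7},
    "novel brucellosis diagnostics":      {"answerability": 6, "impact": 7, "equity": 5},
}

for name, scores in sorted(options.items(), key=lambda kv: weighted_score(kv[1]), reverse=True):
    print(f"{weighted_score(scores):.2f}  {name}")
```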

  11. Identification of men with low-risk biopsy-confirmed prostate cancer as candidates for active surveillance.

    PubMed

    Lin, Daniel W; Crawford, E David; Keane, Thomas; Evans, Brent; Reid, Julia; Rajamani, Saradha; Brown, Krystal; Gutin, Alexander; Tward, Jonathan; Scardino, Peter; Brawer, Michael; Stone, Steven; Cuzick, Jack

    2018-06-01

    A combined clinical cell-cycle risk (CCR) score that incorporates prognostic molecular and clinical information has been recently developed and validated to improve prostate cancer mortality (PCM) risk stratification over clinical features alone. As clinical features are currently used to select men for active surveillance (AS), we developed and validated a CCR score threshold to improve the identification of men with low-risk disease who are appropriate for AS. The score threshold was selected based on the 90th percentile of CCR scores among men who might typically be considered for AS based on NCCN low/favorable-intermediate risk criteria (CCR = 0.8). The threshold was validated using 10-year PCM in an unselected, conservatively managed cohort and in the subset of the same cohort after excluding men with high-risk features. The clinical effect was evaluated in a contemporary clinical cohort. In the unselected validation cohort, men with CCR scores below the threshold had a predicted mean 10-year PCM of 2.7%, and the threshold significantly dichotomized low- and high-risk disease (P = 1.2 × 10⁻⁵). After excluding high-risk men from the validation cohort, men with CCR scores below the threshold had a predicted mean 10-year PCM of 2.3%, and the threshold significantly dichotomized low- and high-risk disease (P = 0.020). There were no prostate cancer-specific deaths in men with CCR scores below the threshold in either analysis. The proportion of men in the clinical testing cohort identified as candidates for AS was substantially higher using the threshold (68.8%) compared to clinicopathologic features alone (42.6%), while mean 10-year predicted PCM risks remained essentially identical (1.9% vs. 2.0%, respectively). The CCR score threshold appropriately dichotomized patients into low- and high-risk groups for 10-year PCM, and may enable more appropriate selection of patients for AS. Copyright © 2018 Elsevier Inc. All rights reserved.
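
    Selecting a threshold at the 90th percentile of scores in an eligible subgroup is worth making concrete. The sketch below uses the nearest-rank percentile method on made-up CCR scores; that the toy threshold lands at 0.8 is a contrivance of the sample, not a derivation of the published cut-off.

```python
# Nearest-rank percentile threshold on a (hypothetical) sample of CCR
# scores among AS-eligible men; scores at or below the threshold flag
# candidates for active surveillance in this toy example.

def percentile_threshold(scores, pct):
    ranked = sorted(scores)
    k = max(0, int(round(pct / 100 * len(ranked))) - 1)
    return ranked[k]

ccr_scores = [-0.4, -0.1, 0.0, 0.2, 0.3, 0.5, 0.6, 0.7, 0.8, 1.1]
threshold = percentile_threshold(ccr_scores, 90)
print(threshold)   # 0.8 for this sample
candidates = [s for s in ccr_scores if s <= threshold]
```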

  12. Clinical predictors of risk for atrial fibrillation: implications for diagnosis and monitoring.

    PubMed

    Brunner, Kyle J; Bunch, T Jared; Mullin, Christopher M; May, Heidi T; Bair, Tami L; Elliot, David W; Anderson, Jeffrey L; Mahapatra, Srijoy

    2014-11-01

    To create a risk score using clinical factors to determine whom to screen and monitor for atrial fibrillation (AF). The AF risk score was developed based on the summed odds ratios (ORs) for AF development of 7 accepted clinical risk factors. The AF risk score is intended to assess the risk of AF much as the CHA2DS2-VASc score assesses stroke risk. Seven validated risk factors for AF were used to develop the AF risk score: age, coronary artery disease, diabetes mellitus, sex, heart failure, hypertension, and valvular disease. The AF risk score was tested within a random population sample of the Intermountain Healthcare outpatient database. Outcomes were stratified by AF risk score for OR and Kaplan-Meier analysis. A total of 100,000 patient records with an index follow-up from January 1, 2002, through December 31, 2007, were selected and followed up for the development of AF through the time of this analysis, May 13, 2013, through September 6, 2013. Mean ± SD follow-up time was 3106 ± 819 days. The ORs of subsequent AF diagnosis for patients with AF risk scores of 1, 2, 3, 4, and 5 or higher were 3.05, 12.9, 22.8, 34.0, and 48.0, respectively. The area under the curve statistic for the AF risk score was 0.812 (95% CI, 0.805-0.820). We developed a simple AF risk score made up of common clinical factors that may be useful for selecting patients for long-term monitoring for AF detection. Copyright © 2014 Mayo Foundation for Medical Education and Research. Published by Elsevier Inc. All rights reserved.
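
    The summed-points construction (one weight per clinical factor, totalled when the factor is present, in the spirit of CHA2DS2-VASc) can be sketched as follows; the point values are hypothetical stand-ins, not the published odds-ratio-derived weights.

```python
# Additive clinical risk score sketch: each of the seven factors named in
# the abstract contributes points when present. Point values are invented.

AF_FACTORS = {
    "advanced_age": 2, "coronary_artery_disease": 1, "diabetes": 1,
    "male_sex": 1, "heart_failure": 2, "hypertension": 1, "valvular_disease": 2,
}

def af_risk_score(present):
    unknown = set(present) - set(AF_FACTORS)
    if unknown:
        raise ValueError(f"unknown factors: {unknown}")
    return sum(AF_FACTORS[f] for f in present)

print(af_risk_score({"advanced_age", "hypertension", "heart_failure"}))  # 5
```

    In the study, each one-point increase in the score corresponds to a sharply higher OR of subsequent AF, which is what makes such a total useful for choosing whom to monitor.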

  13. Temporal trends of postinjury multiple-organ failure: Still resource intensive, morbid, and lethal

    PubMed Central

    Sauaia, Angela; Moore, Ernest E.; Johnson, Jeffrey L.; Chin, Theresa L.; Banerjee, Anirban; Sperry, Jason L.; Maier, Ronald V.; Burlew, C. Cothren

    2014-01-01

    BACKGROUND While the incidence of postinjury multiple-organ failure (MOF) has declined during the past decade, temporal trends of its morbidity, mortality, presentation patterns, and health care resource use have been inconsistent. The purpose of this study was to describe the evolving epidemiology of postinjury MOF from 2003 to 2010 in multiple trauma centers sharing standard treatment protocols. METHODS “Inflammation and Host Response to Injury Collaborative Program” institutions that enrolled more than 20 eligible patients per biennium during the 2003 to 2010 study period were included. The patients were aged 16 years to 90 years, sustained blunt torso trauma with hemorrhagic shock (systolic blood pressure < 90 mm Hg, base deficit ≥ 6 mEq/L, blood transfusion within the first 12 hours), but without severe head injury (motor Glasgow Coma Scale [GCS] score < 4). MOF temporal trends (Denver MOF score > 3) were adjusted for admission risk factors (age, sex, body mass index, Injury Severity Score [ISS], systolic blood pressure, and base deficit) using survival analysis. RESULTS A total of 1,643 patients from four institutions were evaluated. MOF incidence decreased over time (from 17% in 2003–2004 to 9.8% in 2009–2010). MOF-related death rate (33% in 2003–2004 to 36% in 2009–2010), intensive care unit stay, and mechanical ventilation duration did not change over the study period. Adjustment for admission risk factors confirmed the crude trends. MOF patients required much longer ventilation and intensive care unit stay, compared with non-MOF patients. Most of the MOF-related deaths occurred within 2 days of the MOF diagnosis. Lung and cardiac dysfunctions became less frequent (57.6% to 50.8%, 20.9% to 12.5%, respectively), but kidney and liver failure rates did not change (10.1% to 12.5%, 15.2% to 14.1%). CONCLUSION Postinjury MOF remains a resource-intensive, morbid, and lethal condition. Lung injury is an enduring challenge and should be a research priority. 
The lack of outcome improvements suggests that reversing MOF is difficult and prevention is still the best strategy. LEVEL OF EVIDENCE Epidemiologic study, level III. PMID:24553523

  14. Psoriasis and cardiovascular risk. Assessment by different cardiovascular risk scores.

    PubMed

    Fernández-Torres, R; Pita-Fernández, S; Fonseca, E

    2013-12-01

    Psoriasis is an inflammatory disease associated with an increased risk of cardiovascular morbidity and mortality. However, very few studies determine cardiovascular risk by means of the Framingham risk score or other indices more appropriate for countries with a lower prevalence of cardiovascular risk factors. Our aims were to determine multiple cardiovascular risk scores in psoriasis patients, to examine the relation between cardiovascular risk and psoriasis features, and to compare our results with those in the literature. We assessed demographic data, smoking status, psoriasis features, blood pressure and analytical data. Cardiovascular risk was determined by means of the Framingham, SCORE, DORICA and REGICOR scores. A total of 395 patients (59.7% men and 40.3% women) aged 18-86 years were included. The proportion of patients at intermediate and high risk of suffering a major cardiovascular event in the next 10 years was 30.5% and 11.4%, respectively, based on the Framingham risk score; 26.9% and 2.2% according to DORICA; and 6.8% and 0% using the REGICOR score. According to the SCORE index, 22.1% of patients had a high risk of death due to a cardiovascular event over the next 10 years. Cardiovascular risk was not related to psoriasis characteristics, except for the Framingham index, with higher risk in patients with more severe psoriasis (P = 0.032). A considerable proportion of patients had intermediate or high cardiovascular risk, with no relevant relationship to psoriasis characteristics or treatment schedules. Therefore, systematic evaluation of cardiovascular risk scores in all psoriasis patients could be useful to identify those with increased cardiovascular risk who may benefit from lifestyle changes or therapeutic interventions. © 2012 The Authors. Journal of the European Academy of Dermatology and Venereology © 2012 European Academy of Dermatology and Venereology.

  15. Risk assessment of Pakistani individuals for diabetes (RAPID).

    PubMed

    Riaz, Musarrat; Basit, Abdul; Hydrie, Muhammad Zafar Iqbal; Shaheen, Fariha; Hussain, Akhtar; Hakeem, Rubina; Shera, Abdus Samad

    2012-12-01

    To develop and evaluate a risk score to predict people at high risk of developing type 2 diabetes in Pakistan. Cross-sectional data regarding primary prevention of diabetes in Pakistan were used. The diabetes risk score was developed using simple parameters, namely age, waist circumference, and family history of diabetes. Odds ratios of the model were used to assign a score value for each variable, and the diabetes risk score was calculated as the sum of those scores. We externally validated the score using data from 1264 subjects and 856 subjects aged 25 years and above from two separate studies, respectively. Validating this score using the first data set, from the second screening study, gave an area under the receiver operating characteristic curve (AROC) of 0.758. A cut point of 4 had a sensitivity of 47.0% and a specificity of 88%; in the second data set, the AROC was 0.70, with 44% sensitivity and 89% specificity. A simple diabetes risk score, based on a set of variables, can be used for the identification of high-risk individuals for early intervention to delay or prevent type 2 diabetes in the Pakistani population. Copyright © 2012 Primary Care Diabetes Europe. Published by Elsevier Ltd. All rights reserved.
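
    The cut-point evaluation reported above (e.g., 47% sensitivity and 88% specificity at a cut point of 4) follows from the standard definitions. A minimal sketch, with invented scores and labels:

```python
# Sensitivity/specificity of a score cut point, as in the RAPID
# validation. The toy scores and outcome labels below are invented for
# illustration; only the formulas are standard.

def sens_spec(scores, labels, cutoff):
    """Evaluate the rule `score >= cutoff` against binary outcome labels."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= cutoff and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s < cutoff and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s < cutoff and y == 0)
    fp = sum(1 for s, y in zip(scores, labels) if s >= cutoff and y == 0)
    return tp / (tp + fn), tn / (tn + fp)

scores = [1, 3, 4, 6, 2, 5, 4, 0]
labels = [0, 0, 1, 1, 0, 1, 0, 0]
sens, spec = sens_spec(scores, labels, cutoff=4)
```

    Sweeping the cutoff over all observed score values and plotting sensitivity against 1 − specificity yields the ROC curve whose area (AROC) the abstract reports.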

  16. CRISP: Catheterization RISk score for Pediatrics: A Report from the Congenital Cardiac Interventional Study Consortium (CCISC).

    PubMed

    Nykanen, David G; Forbes, Thomas J; Du, Wei; Divekar, Abhay A; Reeves, Jaxk H; Hagler, Donald J; Fagan, Thomas E; Pedra, Carlos A C; Fleming, Gregory A; Khan, Danyal M; Javois, Alexander J; Gruenstein, Daniel H; Qureshi, Shakeel A; Moore, Phillip M; Wax, David H

    2016-02-01

    We sought to develop a scoring system that predicts the risk of serious adverse events (SAEs) for individual pediatric patients undergoing cardiac catheterization procedures. Systematic assessment of the risk of SAEs in pediatric catheterization can be challenging in view of a wide variation in procedure and patient complexity as well as rapidly evolving technology. A 10-component scoring system was originally developed based on expert consensus and review of the existing literature. Data from an international multi-institutional catheterization registry (CCISC) between 2008 and 2013 were used to validate this scoring system. In addition, we used multivariate methods to further refine the original risk score to improve its predictive power for SAEs. Univariate analysis confirmed the strong correlation of each of the 10 components of the original risk score with SAEs attributed to a pediatric cardiac catheterization (P < 0.001 for all variables). Multivariate analysis resulted in a modified risk score (CRISP) that corresponds to an increase in the value of the area under a receiver operating characteristic curve (AUC) from 0.715 to 0.741. The CRISP score predicts the risk of occurrence of an SAE for individual patients undergoing pediatric cardiac catheterization procedures. © 2015 Wiley Periodicals, Inc.
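
    The AUC figures quoted in records like this one can be computed nonparametrically: the AUC equals the probability that a randomly chosen patient who had an SAE scores higher than a randomly chosen patient who did not (the Mann-Whitney statistic). A sketch with toy data:

```python
# Nonparametric AUC: fraction of (positive, negative) pairs in which the
# positive case has the higher score; ties count half. Scores and labels
# below are toy values, not registry data.

def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

    An AUC of 0.5 means the score is no better than chance at separating outcomes; the reported rise from 0.715 to 0.741 reflects modestly better pairwise discrimination after the multivariate refinement.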

  17. Construction of an Exome-Wide Risk Score for Schizophrenia Based on a Weighted Burden Test.

    PubMed

    Curtis, David

    2018-01-01

    Polygenic risk scores obtained as a weighted sum of associated variants can be used to explore association in additional data sets and to assign risk scores to individuals. The methods used to derive polygenic risk scores from common SNPs are not suitable for variants detected in whole exome sequencing studies. Rare variants, which may have major effects, are seen too infrequently to judge whether they are associated and may not be shared between training and test subjects. A method is proposed whereby variants are weighted according to their frequency, their annotations and the genes they affect. A weighted sum across all variants provides an individual risk score. Scores constructed in this way are used in a weighted burden test and are shown to be significantly different between schizophrenia cases and controls using a five-way cross-validation procedure. This approach represents a first attempt to summarise exome sequence variation into a summary risk score, which could be combined with risk scores from common variants and from environmental factors. It is hoped that the method could be developed further. © 2017 John Wiley & Sons Ltd/University College London.
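
    A weighted burden score of the kind described can be sketched as below. The particular weighting, with rarer and more damaging variants in more relevant genes receiving larger weights, is an illustrative assumption, not the exact scheme of the cited method.

```python
# Sketch of a weighted burden score: each variant contributes its allele
# count times a weight built from its frequency, annotation and gene.
# The -log10(frequency) weighting is an illustrative assumption.
import math

def variant_weight(freq, annotation_w, gene_w):
    # Rarer variants get larger weights via -log10 of allele frequency.
    return -math.log10(freq) * annotation_w * gene_w

def burden_score(variants):
    """variants: iterable of (allele_count, freq, annotation_w, gene_w)."""
    return sum(ac * variant_weight(f, aw, gw) for ac, f, aw, gw in variants)
```

    Because the weights depend only on per-variant properties rather than on case-control counts, the score can be applied to variants never seen in the training subjects, which is the point the abstract makes about rare variants.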

  18. Predictive Factors for Developing Venous Thrombosis during Cisplatin-Based Chemotherapy in Testicular Cancer.

    PubMed

    Heidegger, Isabel; Porres, Daniel; Veek, Nica; Heidenreich, Axel; Pfister, David

    2017-01-01

    Malignancies and cisplatin-based chemotherapy are both known to correlate with a high risk of venous thrombotic events (VTT). In testicular cancer, the incidence and causes of VTT in patients undergoing cisplatin-based chemotherapy remain controversial. Moreover, no risk factors for developing a VTT during cisplatin-based chemotherapy have been elucidated so far. We retrospectively analyzed 153 patients with testicular cancer undergoing cisplatin-based chemotherapy at our institution for the development of a VTT during or after chemotherapy. Clinical and pathological parameters were analyzed to identify possible risk factors for VTT. The Khorana risk score was used to calculate the risk of VTT. The Student t test was applied to calculate the statistical significance of differences between the treatment groups. Twenty-six out of 153 patients (17%) developed a VTT during chemotherapy. When we analyzed the risk factors for developing a VTT, we found that Lugano stage ≥IIc was significantly (p = 0.0006) correlated with the risk of developing a VTT during chemotherapy. On calculating the VTT risk using the Khorana risk score model, we found that only 2 out of 26 patients (7.7%) were in the high-risk Khorana group (score ≥3). Patients with testicular cancer with a high tumor volume have a significant risk of developing a VTT with cisplatin-based chemotherapy. The Khorana risk score is not an accurate tool for predicting VTT in testicular cancer. © 2017 S. Karger AG, Basel.
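
    For context, the Khorana score referenced above is commonly described as a small additive score (cancer site contributes 1-2 points; elevated platelet count, low hemoglobin or ESA use, elevated leukocyte count, and high BMI contribute 1 point each), with a total ≥3 treated as high risk. The sketch below encodes that common description as an assumption; verify the exact rules against the primary reference before any real use.

```python
# Hedged sketch of the Khorana score as commonly described; the point
# assignments below are assumptions from general descriptions of the
# score, not taken from the abstract above.

def khorana_score(site_points, platelets, hemoglobin, leukocytes, bmi):
    score = site_points        # 2 very-high-risk site, 1 high-risk site, else 0
    score += platelets >= 350  # pre-chemotherapy platelet count, 10^9/L
    score += hemoglobin < 10   # g/dL (or use of erythropoiesis-stimulating agents)
    score += leukocytes > 11   # 10^9/L
    score += bmi >= 35         # kg/m^2
    return score

def is_high_risk(score):
    return score >= 3          # threshold cited in the abstract
```

    Testicular cancer is not a high-risk site under this scheme, which is consistent with the abstract's finding that only 2 of 26 patients with VTT reached the high-risk group.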

  19. A genetic risk score based on direct associations with coronary heart disease improves coronary heart disease risk prediction in the Atherosclerosis Risk in Communities (ARIC), but not in the Rotterdam and Framingham Offspring, Studies.

    PubMed

    Brautbar, Ariel; Pompeii, Lisa A; Dehghan, Abbas; Ngwa, Julius S; Nambi, Vijay; Virani, Salim S; Rivadeneira, Fernando; Uitterlinden, André G; Hofman, Albert; Witteman, Jacqueline C M; Pencina, Michael J; Folsom, Aaron R; Cupples, L Adrienne; Ballantyne, Christie M; Boerwinkle, Eric

    2012-08-01

    Multiple studies have identified single-nucleotide polymorphisms (SNPs) that are associated with coronary heart disease (CHD). We examined whether SNPs selected based on predefined criteria will improve CHD risk prediction when added to traditional risk factors (TRFs). SNPs were selected from the literature based on association with CHD, lack of association with a known CHD risk factor, and successful replication. A genetic risk score (GRS) was constructed based on these SNPs. Cox proportional hazards model was used to calculate CHD risk based on the Atherosclerosis Risk in Communities (ARIC) and Framingham CHD risk scores with and without the GRS. The GRS was associated with risk for CHD (hazard ratio [HR] = 1.10; 95% confidence interval [CI]: 1.07-1.13). Addition of the GRS to the ARIC risk score significantly improved discrimination, reclassification, and calibration beyond that afforded by TRFs alone in non-Hispanic whites in the ARIC study. The area under the receiver operating characteristic curve (AUC) increased from 0.742 to 0.749 (Δ = 0.007; 95% CI, 0.004-0.013), and the net reclassification index (NRI) was 6.3%. Although the risk estimates for CHD in the Framingham Offspring (HR = 1.12; 95% CI: 1.10-1.14) and Rotterdam (HR = 1.08; 95% CI: 1.02-1.14) Studies were significantly improved by adding the GRS to TRFs, improvements in AUC and NRI were modest. Addition of a GRS based on direct associations with CHD to TRFs significantly improved discrimination and reclassification in white participants of the ARIC Study, with no significant improvement in the Rotterdam and Framingham Offspring Studies. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
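
    The net reclassification index (NRI) reported above measures how often adding the GRS moves subjects into more appropriate risk categories: upward moves are credited for people who went on to have events, downward moves for those who did not. A minimal sketch of the categorical NRI with toy data:

```python
# Categorical net reclassification index (NRI). Risk category indices
# and the toy data are illustrative; the formula is the standard one.

def nri(old_cat, new_cat, events):
    """old_cat/new_cat: per-subject risk category index; events: 1/0."""
    up_e = down_e = up_ne = down_ne = n_e = n_ne = 0
    for o, n, e in zip(old_cat, new_cat, events):
        if e:
            n_e += 1
            up_e += n > o
            down_e += n < o
        else:
            n_ne += 1
            up_ne += n > o
            down_ne += n < o
    return (up_e - down_e) / n_e + (down_ne - up_ne) / n_ne
```

    A positive NRI (such as the 6.3% reported for ARIC) indicates net movement of events upward and non-events downward after adding the GRS to the traditional risk factors.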

  20. Genetic predisposition to coronary heart disease and stroke using an additive genetic risk score: a population-based study in Greece

    USDA-ARS's Scientific Manuscript database

    Objective: To determine the extent to which the risk for incident coronary heart disease (CHD) increases in relation to a genetic risk score (GRS) that additively integrates the influence of high-risk alleles in nine documented single nucleotide polymorphisms (SNPs) for CHD, and to examine whether t...

  1. A Climate Change Vulnerability Assessment of California's At-Risk Birds

    PubMed Central

    Gardali, Thomas; Seavy, Nathaniel E.; DiGaudio, Ryan T.; Comrack, Lyann A.

    2012-01-01

    Conservationists must develop new strategies and adapt existing tools to address the consequences of anthropogenic climate change. To support statewide climate change adaptation, we developed a framework for assessing climate change vulnerability of California's at-risk birds and integrating it into the existing California Bird Species of Special Concern list. We defined climate vulnerability as the amount of evidence that climate change will negatively impact a population. We quantified climate vulnerability by scoring sensitivity (intrinsic characteristics of an organism that make it vulnerable) and exposure (the magnitude of climate change expected) for each taxon. Using the combined sensitivity and exposure scores as an index, we ranked 358 avian taxa, and classified 128 as vulnerable to climate change. Birds associated with wetlands had the largest representation on the list relative to other habitat groups. Of the 29 state or federally listed taxa, 21 were also classified as climate vulnerable, further raising their conservation concern. Integrating climate vulnerability and California's Bird Species of Special Concern list resulted in the addition of five taxa and an increase in priority rank for ten. Our process illustrates a simple, immediate action that can be taken to inform climate change adaptation strategies for wildlife. PMID:22396726

  2. A climate change vulnerability assessment of California's at-risk birds.

    PubMed

    Gardali, Thomas; Seavy, Nathaniel E; DiGaudio, Ryan T; Comrack, Lyann A

    2012-01-01

    Conservationists must develop new strategies and adapt existing tools to address the consequences of anthropogenic climate change. To support statewide climate change adaptation, we developed a framework for assessing climate change vulnerability of California's at-risk birds and integrating it into the existing California Bird Species of Special Concern list. We defined climate vulnerability as the amount of evidence that climate change will negatively impact a population. We quantified climate vulnerability by scoring sensitivity (intrinsic characteristics of an organism that make it vulnerable) and exposure (the magnitude of climate change expected) for each taxon. Using the combined sensitivity and exposure scores as an index, we ranked 358 avian taxa, and classified 128 as vulnerable to climate change. Birds associated with wetlands had the largest representation on the list relative to other habitat groups. Of the 29 state or federally listed taxa, 21 were also classified as climate vulnerable, further raising their conservation concern. Integrating climate vulnerability and California's Bird Species of Special Concern list resulted in the addition of five taxa and an increase in priority rank for ten. Our process illustrates a simple, immediate action that can be taken to inform climate change adaptation strategies for wildlife.
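
    The index-and-rank step described in both records can be sketched as below. The scoring scale, the simple additive combination of sensitivity and exposure, and the vulnerability cutoff are illustrative assumptions, not the study's exact rules.

```python
# Sketch of a climate vulnerability index: combine per-taxon sensitivity
# and exposure scores, rank taxa, and classify those above a cutoff as
# vulnerable. Taxon names, scores and the cutoff are invented.

taxa = {  # taxon -> (sensitivity score, exposure score)
    "taxon_a": (3, 2),
    "taxon_b": (1, 1),
    "taxon_c": (2, 3),
}

index = {t: s + e for t, (s, e) in taxa.items()}
ranked = sorted(index, key=index.get, reverse=True)
vulnerable = [t for t in ranked if index[t] >= 4]  # assumed cutoff
```

    Keeping sensitivity and exposure as separate scores before combining them lets the assessment distinguish taxa that are intrinsically fragile from those simply located where climate change will be largest.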

  3. A Study of Correlation of Neck Circumference with Framingham Risk Score as a Predictor of Coronary Artery Disease.

    PubMed

    Koppad, Anand K; Kaulgud, Ram S; Arun, B S

    2017-09-01

    It has been observed that metabolic syndrome is a risk factor for Coronary Artery Disease (CAD) and exerts its effects through fat deposition and vascular aging. CAD has been acknowledged as a leading cause of death. In earlier studies, the metabolic risk has been estimated by the Framingham risk score. Recent studies have shown that Neck Circumference (NC) has a good correlation with other traditional anthropometric measurements and can be used as a marker of obesity. It also correlates with the Framingham risk score, which is a slightly more sophisticated measure of CAD risk. The aims were to assess the risk of CAD in a subject based on NC and to correlate NC with the Framingham risk score. The present cross-sectional study, done at Karnataka Institute of Medical Sciences, Hubli, Karnataka, India, included 100 subjects. The study duration was one year, from 1 January 2015 to 31 December 2015. The anthropometric indices body mass index (BMI) and NC were correlated with 10-year CAD risk as calculated by the Framingham risk score. The correlation between BMI, NC, vascular age and Framingham risk score was calculated using Karl Pearson's correlation method. NC has a strong correlation with 10-year CAD risk (p≤0.001). NC was significantly greater in males as compared to females (p≤0.001). Males had a greater risk of cardiovascular disease as reflected by a higher 10-year Framingham risk score (p≤0.0035). NC gives a simple and easy prediction of CAD risk and is more reliable than traditional risk markers like BMI. NC correlates positively with the 10-year Framingham risk score.
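
    Karl Pearson's correlation method, used above to relate NC to the Framingham risk score, can be implemented directly from its definition (the data values here are invented):

```python
# Pearson's correlation coefficient r = cov(x, y) / (sd(x) * sd(y)),
# computed from its definition. Toy data, not study measurements.
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

    Values near +1 indicate a strong positive linear relationship, which is what the study reports between NC and 10-year CAD risk.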

  4. Estimation of Soil Erosion Dynamics in the Koshi Basin Using GIS and Remote Sensing to Assess Priority Areas for Conservation

    PubMed Central

    Uddin, Kabir; Murthy, M. S. R.; Wahid, Shahriar M.; Matin, Mir A.

    2016-01-01

    High levels of water-induced erosion in the transboundary Himalayan river basins are contributing to substantial changes in basin hydrology and inundation. Basin-wide information on erosion dynamics is needed for conservation planning, but field-based studies are limited. This study used remote sensing (RS) data and a geographic information system (GIS) to estimate the spatial distribution of soil erosion across the entire Koshi basin, to identify changes between 1990 and 2010, and to develop a conservation priority map. The revised universal soil loss equation (RUSLE) was used in an ArcGIS environment with rainfall erosivity, soil erodibility, slope length and steepness, cover-management, and support practice factors as primary parameters. The estimated annual erosion from the basin was around 40 million tonnes (40 million tonnes in 1990 and 42 million tonnes in 2010). The results were within the range of reported levels derived from isolated plot measurements and model estimates. Erosion risk was divided into eight classes from very low to extremely high and mapped to show the spatial pattern of soil erosion risk in the basin in 1990 and 2010. The erosion risk class remained unchanged between 1990 and 2010 in close to 87% of the study area, but increased over 9.0% of the area and decreased over 3.8%, indicating an overall worsening of the situation. Areas with a high and increasing risk of erosion were identified as priority areas for conservation. The study provides the first assessment of erosion dynamics at the basin level and provides a basis for identifying conservation priorities across the Koshi basin. The model has a good potential for application in similar river basins in the Himalayan region. PMID:26964039

  5. Estimation of Soil Erosion Dynamics in the Koshi Basin Using GIS and Remote Sensing to Assess Priority Areas for Conservation.

    PubMed

    Uddin, Kabir; Murthy, M S R; Wahid, Shahriar M; Matin, Mir A

    2016-01-01

    High levels of water-induced erosion in the transboundary Himalayan river basins are contributing to substantial changes in basin hydrology and inundation. Basin-wide information on erosion dynamics is needed for conservation planning, but field-based studies are limited. This study used remote sensing (RS) data and a geographic information system (GIS) to estimate the spatial distribution of soil erosion across the entire Koshi basin, to identify changes between 1990 and 2010, and to develop a conservation priority map. The revised universal soil loss equation (RUSLE) was used in an ArcGIS environment with rainfall erosivity, soil erodibility, slope length and steepness, cover-management, and support practice factors as primary parameters. The estimated annual erosion from the basin was around 40 million tonnes (40 million tonnes in 1990 and 42 million tonnes in 2010). The results were within the range of reported levels derived from isolated plot measurements and model estimates. Erosion risk was divided into eight classes from very low to extremely high and mapped to show the spatial pattern of soil erosion risk in the basin in 1990 and 2010. The erosion risk class remained unchanged between 1990 and 2010 in close to 87% of the study area, but increased over 9.0% of the area and decreased over 3.8%, indicating an overall worsening of the situation. Areas with a high and increasing risk of erosion were identified as priority areas for conservation. The study provides the first assessment of erosion dynamics at the basin level and provides a basis for identifying conservation priorities across the Koshi basin. The model has a good potential for application in similar river basins in the Himalayan region.
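
    RUSLE, as applied in this study, multiplies the named factors cell by cell over raster layers: A = R × K × LS × C × P, giving soil loss per unit area. A sketch using tiny made-up rasters (numpy assumed available; the factor values are invented, not Koshi basin data):

```python
# Cell-by-cell RUSLE: A = R * K * LS * C * P over raster layers.
# The 2x2 "rasters" below are made-up numbers for illustration.
import numpy as np

R = np.array([[5000.0, 5200.0], [4800.0, 5100.0]])   # rainfall erosivity
K = np.array([[0.30, 0.25], [0.28, 0.30]])           # soil erodibility
LS = np.array([[1.2, 0.8], [2.5, 1.0]])              # slope length/steepness
C = np.array([[0.10, 0.05], [0.20, 0.10]])           # cover-management
P = np.array([[1.0, 1.0], [0.8, 1.0]])               # support practice

A = R * K * LS * C * P   # annual soil loss per cell
```

    Summing A over all cells (times cell area) yields a basin-wide total, which is how per-cell estimates roll up to figures like the ~40 million tonnes per year reported above.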

  6. Validation of the German Diabetes Risk Score within a population-based representative cohort.

    PubMed

    Hartwig, S; Kuss, O; Tiller, D; Greiser, K H; Schulze, M B; Dierkes, J; Werdan, K; Haerting, J; Kluttig, A

    2013-09-01

    To validate the German Diabetes Risk Score within the population-based cohort of the Cardiovascular Disease - Living and Ageing in Halle (CARLA) study. The sample included 582 women and 719 men, aged 45-83 years, who did not have diabetes at baseline. The individual risk of every participant was calculated using the German Diabetes Risk Score, which was modified for 4 years of follow-up. Predicted probabilities and observed outcomes were compared using Hosmer-Lemeshow goodness-of-fit tests and receiver operating characteristic analyses. Changes in prediction power were investigated by expanding the German Diabetes Risk Score to include metabolic variables and by subgroup analyses. We found 58 cases of incident diabetes. The median 4-year probability of developing diabetes based on the German Diabetes Risk Score was 6.5%. The observed and predicted probabilities of developing diabetes were similar, although estimation was imprecise owing to the small number of cases, and the Hosmer-Lemeshow test indicated a poor fit (chi-squared = 55.3; P = 5.8 × 10⁻¹²). The area under the receiver operating characteristic curve (AUC) was 0.70 (95% CI 0.64-0.77), and after excluding participants ≥66 years old, the AUC increased to 0.77 (95% CI 0.70-0.84). Consideration of glycaemic diagnostic variables, in addition to self-reported diabetes, reduced the AUC to 0.65 (95% CI 0.58-0.71). A new model that included the German Diabetes Risk Score and blood glucose concentration (AUC 0.81; 95% CI 0.76-0.86) or HbA(1c) concentration (AUC 0.84; 95% CI 0.80-0.91) was found to perform better. Application of the German Diabetes Risk Score in the CARLA cohort did not reproduce the findings in the European Prospective Investigation into Cancer and Nutrition (EPIC) Potsdam study, which may be explained by cohort differences and model overfit in the latter; however, a high score does provide an indication of increased risk of diabetes. © 2013 The Authors. Diabetic Medicine © 2013 Diabetes UK.
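
    The Hosmer-Lemeshow test used above bins subjects by predicted risk and compares observed with expected events per bin via a chi-square statistic. A simplified sketch of the statistic (no p-value; bins are assumed non-degenerate, i.e., expected counts are nonzero):

```python
# Simplified Hosmer-Lemeshow statistic: sort subjects by predicted
# probability, split into bins, and sum chi-square contributions for
# events and non-events in each bin. Toy inputs, equal-size bins.

def hosmer_lemeshow(pred, obs, n_bins=10):
    pairs = sorted(zip(pred, obs))
    size = len(pairs) // n_bins
    chi2 = 0.0
    for i in range(n_bins):
        # Last bin absorbs the remainder of the sorted subjects.
        chunk = pairs[i * size:(i + 1) * size] if i < n_bins - 1 else pairs[i * size:]
        n = len(chunk)
        exp = sum(p for p, _ in chunk)   # expected events in bin
        o = sum(y for _, y in chunk)     # observed events in bin
        chi2 += (o - exp) ** 2 / exp + ((n - o) - (n - exp)) ** 2 / (n - exp)
    return chi2
```

    A large statistic (such as the 55.3 reported above) means predicted and observed event counts diverge across the risk bins, i.e., the score is miscalibrated in this cohort even if it still discriminates.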

  7. Anticancer drugs in Portuguese surface waters - Estimation of concentrations and identification of potentially priority drugs.

    PubMed

    Santos, Mónica S F; Franquet-Griell, Helena; Lacorte, Silvia; Madeira, Luis M; Alves, Arminda

    2017-10-01

    Anticancer drugs, used in chemotherapy, have emerged as new water contaminants due to their increasing consumption trends and poor elimination efficiency in conventional water treatment processes. As a result, anticancer drugs have been reported in surface and even drinking waters, putting the environment and human health at risk. However, the occurrence and distribution of anticancer drugs depend on the area studied and the hydrological dynamics, which determine the risk to the environment. The main objective of the present study was to evaluate the risk of anticancer drugs in Portugal. This work includes an extensive analysis of the consumption trends of 171 anticancer drugs, sold or dispensed in Portugal between 2007 and 2015. The consumption data were processed to estimate predicted environmental loads of anticancer drugs, and 11 compounds were identified as potentially priority drugs based on an exposure-based approach (PECb > 10 ng L⁻¹ and/or PECc > 1 ng L⁻¹). From a national perspective, mycophenolic acid and mycophenolate mofetil are suspected to pose a high risk to aquatic biota. Moderate and low risk was also associated with cyclophosphamide and bicalutamide exposure, respectively. Although no evidence of risk exists yet for the other anticancer drugs, concerns may be associated with long-term effects. Copyright © 2017 Elsevier Ltd. All rights reserved.
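
    The exposure-based screening rule is a simple threshold test on predicted environmental concentrations. The drug names and concentration values below are invented for illustration; only the two thresholds come from the abstract.

```python
# Exposure-based priority screening: flag a drug if either predicted
# environmental concentration exceeds its threshold. Drug names and
# concentrations are invented; thresholds are the ones cited above.

PEC_B_LIMIT = 10.0  # ng/L
PEC_C_LIMIT = 1.0   # ng/L

def is_priority(pec_b, pec_c):
    return pec_b > PEC_B_LIMIT or pec_c > PEC_C_LIMIT

drugs = {"drug_a": (25.0, 3.0), "drug_b": (0.5, 0.2)}  # name -> (PECb, PECc)
priority = [d for d, (b, c) in drugs.items() if is_priority(b, c)]
```

    An "and/or" rule like this errs on the side of inclusion: a compound only needs to exceed one of the two exposure thresholds to enter the priority list for further risk characterization.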

  8. An update on risk factors for cartilage loss in knee osteoarthritis assessed using MRI-based semiquantitative grading methods.

    PubMed

    Alizai, Hamza; Roemer, Frank W; Hayashi, Daichi; Crema, Michel D; Felson, David T; Guermazi, Ali

    2015-03-01

    Arthroscopy-based semiquantitative scoring systems such as Outerbridge and Noyes' scores were the first to be developed for the purpose of grading cartilage defects. As magnetic resonance imaging (MRI) became available for evaluation of the osteoarthritic knee joint, these systems were adapted for use with MRI. Later on, grading methods such as the Whole Organ Magnetic Resonance Score, the Boston-Leeds Osteoarthritis Knee Score and the MRI Osteoarthritis Knee Score were designed specifically for performing whole-organ assessment of the knee joint structures, including cartilage. Cartilage grades on MRI obtained with these scoring systems represent optimal outcome measures for longitudinal studies, and are designed to enhance understanding of the knee osteoarthritis disease process. The purpose of this narrative review is to describe cartilage assessment in knee osteoarthritis using currently available MRI-based semiquantitative whole-organ scoring systems, and to provide an update on the risk factors for cartilage loss in knee osteoarthritis as assessed with these scoring systems.

  9. Setting research priorities for maternal, newborn, child health and nutrition in India by engaging experts from 256 indigenous institutions contributing over 4000 research ideas: a CHNRI exercise by ICMR and INCLEN

    PubMed Central

    Arora, Narendra K; Mohapatra, Archisman; Gopalan, Hema S; Wazny, Kerri; Thavaraj, Vasantha; Rasaily, Reeta; Das, Manoj K; Maheshwari, Meenu; Bahl, Rajiv; Qazi, Shamim A; Black, Robert E; Rudan, Igor

    2017-01-01

    Background Health research in low- and middle-income countries (LMICs) is often driven by donor priorities rather than by the needs of the countries where the research takes place. This lack of alignment of donors' priorities with local research needs may be one of the reasons why countries fail to achieve set goals for population health and nutrition. India has a high burden of morbidity and mortality in women, children and infants. In order to look forward toward the Sustainable Development Goals, the Indian Council of Medical Research (ICMR) and the INCLEN Trust International (INCLEN) employed the Child Health and Nutrition Research Initiative's (CHNRI) research priority setting method for maternal, neonatal, child health and nutrition with the timeline of 2016–2025. The exercise was the largest to-date use of the CHNRI methodology, both in terms of participants and ideas generated, and also expanded on the methodology. Methods CHNRI is a crowdsourcing-based exercise that involves using the collective intelligence of a group of stakeholders, usually researchers, to generate and score research options against a set of criteria. This paper reports on a large umbrella CHNRI that was divided into four theme-specific CHNRIs (maternal, newborn, child health and nutrition). A National Steering Group oversaw the exercise, and four theme-specific Research Sub-Committees provided technical support for finalizing the scoring criteria and refining the research ideas for the respective thematic areas. The exercise engaged participants from 256 institutions across India: 4003 research ideas were generated by 498 experts and consolidated into 373 research options (maternal health: 122; newborn health: 56; child health: 101; nutrition: 94); 893 experts scored these against five criteria (answerability, relevance, equity, innovation and out-of-box thinking, investment in research). Relative weights for the criteria were assigned by 79 members of the Larger Reference Group. 
    Given India's diversity, priorities were identified at national and three regional levels: (i) the Empowered Action Group (EAG) and North-Eastern States; (ii) States and Union territories in Northern India (including West Bengal); and (iii) States and Union territories in Southern and Western parts of India. Conclusions The exercise leveraged the inherent flexibility of the CHNRI method in multiple ways. It expanded on the CHNRI methodology, enabling identification of research priorities at both national and regional levels. However, prioritized research options are only valuable if they are put to use, and we hope that donors will take advantage of this prioritized list of research options. PMID:28686749

  10. Prioritizing research for integrated implementation of early childhood development and maternal, newborn, child and adolescent health and nutrition platforms.

    PubMed

    Sharma, Renee; Gaffey, Michelle F; Alderman, Harold; Bassani, Diego G; Bogard, Kimber; Darmstadt, Gary L; Das, Jai K; de Graft-Johnson, Joseph E; Hamadani, Jena D; Horton, Susan; Huicho, Luis; Hussein, Julia; Lye, Stephen; Pérez-Escamilla, Rafael; Proulx, Kerrie; Marfo, Kofi; Mathews-Hanna, Vanessa; Mclean, Mireille S; Rahman, Atif; Silver, Karlee L; Singla, Daisy R; Webb, Patrick; Bhutta, Zulfiqar A

    2017-06-01

    Existing health and nutrition services present potential platforms for scaling up delivery of early childhood development (ECD) interventions within sensitive windows across the life course, especially in the first 1000 days from conception to age 2 years. However, there is insufficient knowledge on how to optimize implementation for such strategies in an integrated manner. In light of this knowledge gap, we aimed to systematically identify a set of integrated implementation research priorities for health, nutrition and early child development within the 2015 to 2030 timeframe of the Sustainable Development Goals (SDGs). We applied the Child Health and Nutrition Research Initiative method, and consulted a diverse group of global health experts to develop and score 57 research questions against five criteria: answerability, effectiveness, deliverability, impact, and effect on equity. These questions were ranked using a research priority score, and the average expert agreement score was calculated for each question. The research priority scores ranged from 61.01 to 93.52, with a median of 82.87. The average expert agreement scores ranged from 0.50 to 0.90, with a median of 0.75. The top-ranked research questions were: i) "How can interventions and packages to reduce neonatal mortality be expanded to include ECD and stimulation interventions?"; ii) "How does the integration of ECD and MNCAH&N interventions affect human resource requirements and capacity development in resource-poor settings?"; and iii) "How can integrated interventions be tailored to vulnerable refugee and migrant populations to protect against poor ECD and MNCAH&N outcomes?". The most highly ranked research priorities varied across the life course and highlighted key aspects of scaling up coverage of integrated interventions in resource-limited settings, including: workforce and capacity development, cost-effectiveness and strategies to reduce financial barriers, and quality assessment of programs. 
Investing in ECD is critical to achieving several of the SDGs, including SDG 2 on ending all forms of malnutrition, SDG 3 on ensuring health and well-being for all, and SDG 4 on ensuring inclusive and equitable quality education and promotion of life-long learning opportunities for all. The generated research agenda is expected to drive action and investment on priority approaches to integrating ECD interventions within existing health and nutrition services.

  11. Prioritizing research for integrated implementation of early childhood development and maternal, newborn, child and adolescent health and nutrition platforms

    PubMed Central

    Sharma, Renee; Gaffey, Michelle F; Alderman, Harold; Bassani, Diego G; Bogard, Kimber; Darmstadt, Gary L; Das, Jai K; de Graft-Johnson, Joseph E; Hamadani, Jena D; Horton, Susan; Huicho, Luis; Hussein, Julia; Lye, Stephen; Pérez-Escamilla, Rafael; Proulx, Kerrie; Marfo, Kofi; Mathews-Hanna, Vanessa; Mclean, Mireille S; Rahman, Atif; Silver, Karlee L; Singla, Daisy R; Webb, Patrick; Bhutta, Zulfiqar A

    2017-01-01

    Background Existing health and nutrition services present potential platforms for scaling up delivery of early childhood development (ECD) interventions within sensitive windows across the life course, especially in the first 1000 days from conception to age 2 years. However, there is insufficient knowledge on how to optimize implementation for such strategies in an integrated manner. In light of this knowledge gap, we aimed to systematically identify a set of integrated implementation research priorities for health, nutrition and early child development within the 2015 to 2030 timeframe of the Sustainable Development Goals (SDGs). Methods We applied the Child Health and Nutrition Research Initiative method, and consulted a diverse group of global health experts to develop and score 57 research questions against five criteria: answerability, effectiveness, deliverability, impact, and effect on equity. These questions were ranked using a research priority score, and the average expert agreement score was calculated for each question. Findings The research priority scores ranged from 61.01 to 93.52, with a median of 82.87. The average expert agreement scores ranged from 0.50 to 0.90, with a median of 0.75. The top-ranked research questions were: i) "How can interventions and packages to reduce neonatal mortality be expanded to include ECD and stimulation interventions?"; ii) "How does the integration of ECD and MNCAH&N interventions affect human resource requirements and capacity development in resource-poor settings?"; and iii) "How can integrated interventions be tailored to vulnerable refugee and migrant populations to protect against poor ECD and MNCAH&N outcomes?". The most highly ranked research priorities varied across the life course and highlighted key aspects of scaling up coverage of integrated interventions in resource-limited settings, including: workforce and capacity development, cost-effectiveness and strategies to reduce financial barriers, and quality assessment of programs. Conclusions Investing in ECD is critical to achieving several of the SDGs, including SDG 2 on ending all forms of malnutrition, SDG 3 on ensuring health and well-being for all, and SDG 4 on ensuring inclusive and equitable quality education and promotion of life-long learning opportunities for all. The generated research agenda is expected to drive action and investment on priority approaches to integrating ECD interventions within existing health and nutrition services. PMID:28685048
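The CHNRI-style scoring described above can be sketched in a few lines. In this illustrative Python sketch, the expert answers and the 1/0.5/0 answer coding are hypothetical; the research priority score is the mean criterion score across experts expressed as a percentage, and the expert agreement score is taken as the average fraction of experts giving the modal answer per criterion:

```python
from statistics import mean, mode

# Hypothetical answers from five experts for one research question,
# one list per criterion (1 = yes, 0.5 = undecided, 0 = no).
answers = {
    "answerability":  [1, 1, 0.5, 1, 0],
    "effectiveness":  [1, 0.5, 1, 1, 1],
    "deliverability": [0.5, 1, 1, 0.5, 1],
    "impact":         [1, 1, 1, 1, 0.5],
    "equity":         [0.5, 0.5, 1, 1, 1],
}

def research_priority_score(answers):
    """Mean score per criterion, averaged across criteria, as a percentage."""
    return 100 * mean(mean(v) for v in answers.values())

def average_expert_agreement(answers):
    """Average fraction of experts giving the modal answer per criterion."""
    return mean(sum(a == mode(v) for a in v) / len(v) for v in answers.values())

rps = research_priority_score(answers)   # mid-range, near the reported median
aea = average_expert_agreement(answers)
```

Ranking questions by `rps` and reporting `aea` alongside reproduces the two summary statistics quoted in the abstract.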

  12. Clinical risk scoring system for predicting extended-spectrum β-lactamase-producing Escherichia coli infection in hospitalized patients.

    PubMed

    Kengkla, K; Charoensuk, N; Chaichana, M; Puangjan, S; Rattanapornsompong, T; Choorassamee, J; Wilairat, P; Saokaew, S

    2016-05-01

    Extended-spectrum β-lactamase-producing Escherichia coli (ESBL-EC) has important implications for infection control and empiric antibiotic prescribing. This study aimed to develop a risk scoring system for predicting ESBL-EC infection based on local epidemiology. The study retrospectively collected data on eligible patients with a positive culture for E. coli during 2011 to 2014. The risk scoring system was developed using variables independently associated with ESBL-EC infection through logistic regression-based prediction. The area under the receiver operating characteristic curve (AuROC) was determined to confirm the predictive power of the model. Predictors for ESBL-EC infection were male gender [odds ratio (OR): 1.53], age ≥55 years (OR: 1.50), healthcare-associated infection (OR: 3.21), hospital-acquired infection (OR: 2.28), sepsis (OR: 1.79), prolonged hospitalization (OR: 1.88), history of ESBL infection within one year (OR: 7.88), prior use of broad-spectrum cephalosporins within three months (OR: 12.92), and prior use of other antibiotics within three months (OR: 2.14). Points scored ranged from 0 to 47, and were divided into three groups based on diagnostic performance parameters: low risk (score: 0-8; 44.57%), moderate risk (score: 9-11; 21.85%) and high risk (score: ≥12; 33.58%). The model displayed moderate predictive power (AuROC: 0.773; 95% confidence interval: 0.742-0.805) and good calibration (Hosmer-Lemeshow χ² = 13.29; P = 0.065). This tool may optimize the prescribing of empirical antibiotic therapy, minimize the time needed to identify at-risk patients, and prevent the spread of ESBL-EC. Prior to adoption into routine clinical practice, further validation of the tool is needed. Copyright © 2016 The Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.
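An additive point score of this kind is simple to apply at the bedside. The Python sketch below uses hypothetical point weights standing in for the published ones (chosen only so the maximum total matches the reported 0-47 range); the risk bands (0-8, 9-11, ≥12) are the cut-offs given in the abstract:

```python
# Hypothetical point weights per risk factor; the real weights would come
# from the fitted logistic-regression coefficients.
POINTS = {
    "male": 2, "age_ge_55": 2, "healthcare_associated": 6,
    "hospital_acquired": 4, "sepsis": 3, "prolonged_stay": 3,
    "prior_esbl_within_1y": 10, "prior_broad_cephalosporin_3m": 13,
    "prior_other_antibiotic_3m": 4,
}

def esbl_risk(patient):
    """Sum the points for the risk factors present and map to a risk band."""
    score = sum(pts for factor, pts in POINTS.items() if patient.get(factor))
    band = "low" if score <= 8 else "moderate" if score <= 11 else "high"
    return score, band

score, band = esbl_risk({"age_ge_55": True, "hospital_acquired": True,
                         "sepsis": True})
```

A patient with those three factors totals 9 points and falls into the moderate band, which under the abstract's scheme would inform empiric antibiotic choice.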

  13. Predicting risk of substantial weight gain in German adults—a multi-center cohort approach

    PubMed Central

    Bachlechner, Ursula; Boeing, Heiner; Haftenberger, Marjolein; Schienkiewitz, Anja; Scheidt-Nave, Christa; Vogt, Susanne; Thorand, Barbara; Peters, Annette; Schipf, Sabine; Ittermann, Till; Völzke, Henry; Nöthlings, Ute; Neamat-Allah, Jasmine; Greiser, Karin-Halina; Kaaks, Rudolf

    2017-01-01

    Abstract Background A risk-targeted prevention strategy may efficiently utilize limited resources available for prevention of overweight and obesity. Likewise, more efficient intervention trials could be designed if selection of subjects was based on risk. The aim of the study was to develop a risk score predicting substantial weight gain among German adults. Methods We developed the risk score using information on 15 socio-demographic, dietary and lifestyle factors from 32 204 participants of five population-based German cohort studies. Substantial weight gain was defined as gaining ≥10% of weight between baseline and follow-up (>6 years apart). The cases were censored according to the theoretical point in time when the threshold of 10% baseline-based weight gain was crossed assuming linearity of weight gain. Beta coefficients derived from proportional hazards regression were used as weights to compute the risk score as a linear combination of the predictors. Cross-validation was used to evaluate the score’s discriminatory accuracy. Results The cross-validated c index (95% CI) was 0.71 (0.67–0.75). A cutoff value of ≥475 score points yielded a sensitivity of 71% and a specificity of 63%. The corresponding positive and negative predictive values were 10.4% and 97.6%, respectively. Conclusions The proposed risk score may support healthcare providers in decision making and referral and facilitate an efficient selection of subjects into intervention trials. PMID:28013243
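The score construction described above (beta coefficients as weights in a linear combination, then a cut-off) can be sketched as follows. The predictors, coefficients and scaling constant here are invented for illustration; only the ≥475-point cut-off comes from the abstract:

```python
# Illustrative Cox-regression coefficients used as weights; the published
# model uses 15 socio-demographic, dietary and lifestyle predictors.
BETAS = {
    "smoking_cessation": 0.9,
    "age_under_35": 0.7,
    "high_meat_intake": 0.4,
    "low_physical_activity": 0.5,
}

def weight_gain_score(profile, scale=250):
    """Linear combination of the predictors present, scaled to score points."""
    return scale * sum(b for k, b in BETAS.items() if profile.get(k))

def high_risk(profile, cutoff=475):
    """Flag profiles at or above the cut-off reported in the abstract."""
    return weight_gain_score(profile) >= cutoff

s = weight_gain_score({"smoking_cessation": True, "age_under_35": True,
                       "low_physical_activity": True})
```

Selecting trial participants with `high_risk(profile)` is the kind of risk-targeted recruitment the authors propose.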

  14. An analysis of key environmental and social risks in the development of concentrated solar power projects

    NASA Astrophysics Data System (ADS)

    Otieno, George A.; Loosen, Alexander E.

    2016-05-01

    Concentrated solar power (CSP) projects have impacts on local environmental and social conditions. This research set out to investigate the environmental and social risks in the development of such projects and to rank these risks from highest to lowest. The risks were analysed for parabolic trough and tower technologies only. A literature review was undertaken, identifying seventeen risks that were then proposed to six CSP experts for scoring. The risks were scored based on five factors on a five-tier scale. The scores from the experts were compiled to develop an overall ranking of the identified risks. Disruption of local water resources was found to represent the highest risk both before and after mitigation, with scores of moderate-high and moderate, respectively. This score reflects the importance of water in the water-scarce regions typified by the best sites for CSP. Risks to avian species, to worker health and safety, from noise, and to visual and recreational resources completed the top five risks after mitigation.
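Compiling expert scores into an overall ranking, as described above, amounts to averaging and sorting. In this sketch the risk names echo the abstract but the per-expert scores are illustrative (one score per expert, here already averaged over the five factors on the 1-5 scale):

```python
from statistics import mean

# Illustrative scores from six experts for three of the seventeen risks.
expert_scores = {
    "disruption of local water resources": [4.6, 4.2, 4.8, 4.0, 4.4, 4.5],
    "risk to avian species":               [3.8, 3.5, 4.0, 3.6, 3.9, 3.7],
    "noise impacts":                       [2.9, 3.1, 2.8, 3.0, 2.7, 3.2],
}

# Rank risks from highest to lowest mean expert score.
ranking = sorted(expert_scores, key=lambda r: mean(expert_scores[r]),
                 reverse=True)
```

With these numbers, water-resource disruption tops the ranking, mirroring the study's finding.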

  15. Explicit criteria for prioritization of cataract surgery

    PubMed Central

    Ma Quintana, José; Escobar, Antonio; Bilbao, Amaia

    2006-01-01

    Background Consensus techniques have been used previously to create explicit criteria to prioritize cataract extraction; however, the appropriateness of the intervention was not included explicitly in previous studies. We developed a prioritization tool for cataract extraction according to the RAND method. Methods Criteria were developed using a modified Delphi panel judgment process. A panel of 11 ophthalmologists was assembled. Ratings were analyzed regarding the level of agreement among panelists. We studied the effect of all variables on the final panel score using general linear and logistic regression models. Priority scoring systems were developed by means of optimal scaling and general linear models. The explicit criteria developed were summarized by means of regression tree analysis. Results Eight variables were considered to create the indications. Of the 310 indications that the panel evaluated, 22.6% were considered high priority, 52.3% intermediate priority, and 25.2% low priority. Agreement was reached for 31.9% of the indications and disagreement for 0.3%. Logistic regression and general linear models showed that the preoperative visual acuity of the cataractous eye, visual function, and anticipated visual acuity postoperatively were the most influential variables. Alternative and simple scoring systems were obtained by optimal scaling and general linear models where the previous variables were also the most important. The decision tree also shows the importance of the previous variables and the appropriateness of the intervention. Conclusion Our results showed acceptable validity as an evaluation and management tool for prioritizing cataract extraction. It also provides easy algorithms for use in clinical practice. PMID:16512893

  16. Public Perception of Extreme Cold Weather-Related Health Risk in a Cold Area of Northeast China.

    PubMed

    Ban, Jie; Lan, Li; Yang, Chao; Wang, Jian; Chen, Chen; Huang, Ganlin; Li, Tiantian

    2017-08-01

    A need exists for public health strategies regarding extreme weather disasters, which in recent years have become more frequent. This study aimed to understand the public's perception of extreme cold and its related health risks, which may provide detailed information for public health preparedness during an extreme cold weather event. To evaluate public perceptions of cold-related health risk and to identify vulnerable groups, we collected responses from 891 participants in a face-to-face survey in Harbin, China. Public perception was measured by calculating the score for each perception question. Locals perceived that extreme cold weather and related health risks were serious, but thought they could not avoid these risks. The significant difference in perceived acceptance level between age groups suggested that the elderly are a "high health risk, low risk perception" group, meaning that they are relatively more vulnerable owing to their high susceptibility and low awareness of the health risks associated with extreme cold weather. The elderly should be a priority in risk communication and health protective interventions. This study demonstrated that introducing risk perception into the public health field can identify vulnerable groups with greater needs, which may improve the decision-making of public health intervention strategies. (Disaster Med Public Health Preparedness. 2017;11:417-421).

  17. [FMEA applied to the radiotherapy patient care process].

    PubMed

    Meyrieux, C; Garcia, R; Pourel, N; Mège, A; Bodez, V

    2012-10-01

    Failure modes and effects analysis (FMEA) is a risk analysis method used at the radiotherapy department of the Institut Sainte-Catherine as part of a strategy seeking to continuously improve the quality and safety of treatments. The method comprises several steps: definition of the main processes; for each process, description of every step of prescription, treatment preparation, and treatment delivery; identification of the possible risks, their consequences, and their origins; identification of existing safety elements that may avert these risks; and grading of risks to assign a criticality score, resulting in a numerical ranking of the risks. Finally, the impact of proposed corrective actions was estimated by a new grading round. For each process studied, a detailed map of the risks was obtained, facilitating the identification of priority actions to be undertaken. For example, in patient treatment planning, five steps carried an unacceptable level of risk, 62 a moderate level, and 31 an acceptable level. The FMEA method, used in the industrial domain and applied here to health care, is an effective tool for the management of risks in patient care. However, the time and training requirements necessary to implement this method should not be underestimated. Copyright © 2012 Société française de radiothérapie oncologique (SFRO). Published by Elsevier SAS. All rights reserved.
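The grading step above follows the usual FMEA pattern: criticality as the product of severity, occurrence and non-detection ratings, then binning into acceptable/moderate/unacceptable. The rating scales and the two thresholds in this sketch are illustrative, not the department's actual grid:

```python
def criticality(severity, occurrence, detection):
    """FMEA criticality (risk priority number): product of the three ratings."""
    return severity * occurrence * detection

def risk_level(c, acceptable_below=20, unacceptable_from=60):
    """Bin a criticality score into the three levels used in the abstract."""
    if c < acceptable_below:
        return "acceptable"
    if c < unacceptable_from:
        return "moderate"
    return "unacceptable"

c = criticality(severity=5, occurrence=4, detection=4)
level = risk_level(c)
```

A second grading round after corrective actions would lower `occurrence` or `detection` and re-bin the step, which is how the estimated impact of each action is quantified.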

  18. Screening and syndromic approaches to identify gonorrhea and chlamydial infection among women.

    PubMed

    Sloan, N L; Winikoff, B; Haberland, N; Coggins, C; Elias, C

    2000-03-01

    The standard diagnostic tools to identify sexually transmitted infections are often expensive and have laboratory and infrastructure requirements that make them unavailable to family planning and primary health-care clinics in developing countries. Therefore, inexpensive, accessible tools that rely on symptoms, signs, and/or risk factors have been developed to identify and treat reproductive tract infections without the need for laboratory diagnostics. Studies were reviewed that used standard diagnostic tests to identify gonorrhea and cervical chlamydial infection among women and that provided adequate information about the usefulness of the tools for screening. Aggregation of the studies' results suggest that risk factors, algorithms, and risk scoring for syndromic management are poor indicators of gonorrhea and chlamydial infection in samples of both low and high prevalence and, consequently, are not effective mechanisms with which to identify or manage these conditions. The development and evaluation of other approaches to identify gonorrhea and chlamydial infections, including inexpensive and simple laboratory screening tools, periodic universal treatment, and other alternatives must be given priority.

  19. What makes children with cerebral palsy vulnerable to malnutrition? Findings from the Bangladesh cerebral palsy register (BCPR).

    PubMed

    Jahan, Israt; Muhit, Mohammad; Karim, Tasneem; Smithers-Sheedy, Hayley; Novak, Iona; Jones, Cheryl; Badawi, Nadia; Khandaker, Gulam

    2018-04-16

    To assess the nutritional status and underlying risk factors for malnutrition among children with cerebral palsy in rural Bangladesh. We used data from the Bangladesh Cerebral Palsy Register, a prospective population-based surveillance of children with cerebral palsy aged 0-18 years in a rural subdistrict of Bangladesh (i.e., Shahjadpur). Socio-demographic, clinical and anthropometric measurements were collected using the Bangladesh Cerebral Palsy Register record form. Z scores were calculated using World Health Organization Anthro and World Health Organization AnthroPlus software. A total of 726 children with cerebral palsy were registered into the Bangladesh Cerebral Palsy Register (mean age 7.6 years, standard deviation 4.5, 38.1% female) between January 2015 and December 2016. More than two-thirds of children were underweight (70.0%) and stunted (73.1%). Mean z scores for weight-for-age, height-for-age and weight-for-height were -2.8 (standard deviation 1.8), -3.1 (standard deviation 2.2) and -1.2 (standard deviation 2.3), respectively. Moderate to severe undernutrition (i.e., both underweight and stunting) was significantly associated with age, monthly family income, Gross Motor Function Classification System level and neurological type of cerebral palsy. The burden of undernutrition is high among children with cerebral palsy in rural Bangladesh and is augmented by both poverty and clinical severity. Enhancing clinical nutritional services for children with cerebral palsy should be a public health priority in Bangladesh. Implications for Rehabilitation: Population-based surveillance data on the nutritional status of children with cerebral palsy in Bangladesh indicate a substantially high burden of malnutrition among children with CP in rural Bangladesh. 
Children with severe forms of cerebral palsy (e.g., higher Gross Motor Function Classification System [GMFCS] levels, tri-/quadriplegic cerebral palsy) present the highest proportion of severe malnutrition; hence, these vulnerable groups should be the focus when designing nutrition intervention and rehabilitation programs. Disability-inclusive and focused nutrition intervention programmes need to be kept as a priority in national nutrition policies and nutrition action plans, especially in low- and middle-income countries. Community-based management of malnutrition has the potential to improve the poor nutritional status of children with disability (i.e., cerebral palsy). Global leaders such as the World Health Organization, together with national and international organizations, should take this into account and conduct further research to develop nutritional guidelines for this vulnerable group.
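The anthropometric indicators above rest on z-scores against a reference population. The sketch below shows the idea in simplified form; the WHO Anthro software mentioned above actually uses LMS (skewness-adjusted) growth references, and the reference median and standard deviation here are hypothetical. The < -2 cut-off for underweight/stunting is the conventional one:

```python
def z_score(measurement, ref_median, ref_sd):
    """Simplified anthropometric z-score against a reference population."""
    return (measurement - ref_median) / ref_sd

# Hypothetical child: 14.0 kg against a reference median of 24.0 kg (SD 3.6)
weight_for_age_z = z_score(14.0, ref_median=24.0, ref_sd=3.6)
underweight = weight_for_age_z < -2          # conventional cut-off

# Hypothetical height of 105 cm against a reference of 125 cm (SD 5.7)
stunted = z_score(105.0, ref_median=125.0, ref_sd=5.7) < -2
```

Mean z-scores of -2.8 (weight-for-age) and -3.1 (height-for-age), as reported above, mean the average registered child sits well below both cut-offs.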

  20. Formulary evaluation of third-generation cephalosporins using decision analysis.

    PubMed

    Cano, S B; Fujita, N K

    1988-03-01

    A structured, objective approach to formulary review of third-generation cephalosporins using the decision-analysis model is described. The pharmacy and therapeutics (P&T) committee approved the evaluation criteria for this drug class and assigned priority weights (as percentages of 100) to those drug characteristics deemed most important. Clinical data (spectrum of activity, pharmacokinetics, adverse effects, and stability) and financial data (cost of acquisition and cost of therapy per day) were used to determine ranking scores for each drug. Total scores were determined by multiplying ranking scores by the assigned priority weights for the criteria. The two highest-scoring drugs were selected for inclusion in the formulary. By this decision-analysis process, the P&T committee recommended that all current third-generation cephalosporins (cefotaxime, cefoperazone, and moxalactam) be removed from the institution's formulary and be replaced with ceftazidime and ceftriaxone. P&T committees at other institutions may structure their criteria differently, and different recommendations may result. Using decision analysis for formulary review may promote rational drug therapy and achieve cost savings.
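The weighted-scoring mechanics described above can be sketched directly: each criterion's ranking score is multiplied by its priority weight (a percentage of 100) and the products are summed. The weights and ranking scores below are illustrative, not the committee's actual values:

```python
# Illustrative priority weights (percentages of 100) per criterion.
WEIGHTS = {
    "spectrum_of_activity": 30, "pharmacokinetics": 15,
    "adverse_effects": 15, "stability": 10, "cost_of_therapy": 30,
}

def total_score(ranking_scores):
    """Weighted total: ranking score x priority weight, summed over criteria."""
    return sum(WEIGHTS[c] * r for c, r in ranking_scores.items()) / 100

drug_a = total_score({"spectrum_of_activity": 5, "pharmacokinetics": 4,
                      "adverse_effects": 3, "stability": 4, "cost_of_therapy": 2})
drug_b = total_score({"spectrum_of_activity": 4, "pharmacokinetics": 3,
                      "adverse_effects": 4, "stability": 3, "cost_of_therapy": 5})
```

Sorting all candidate drugs by `total_score` and keeping the top two reproduces the committee's selection rule.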

  1. Diversity efforts, admissions, and national rankings: can we align priorities?

    PubMed

    Heller, Caren A; Rúa, Sandra Hurtado; Mazumdar, Madhu; Moon, Jennifer E; Bardes, Charles; Gotto, Antonio M

    2014-01-01

    Increasing student body diversity is a priority for national health education and professional organizations and for many medical schools. However, national rankings of medical schools, such as those published by U.S. News & World Report, place a heavy emphasis on grade point average (GPA) and Medical College Admissions Test (MCAT) scores, without considering student body diversity. These rankings affect organizational reputation and admissions outcomes, even though there is considerable controversy surrounding the predictive value of GPA and MCAT scores. Our aim in this article was to explore the relationship between standard admissions practices, which typically aim to attract students with the highest academic scores, and student body diversity. We examined how changes in GPA and MCAT scores over 5 years correlated with the percentage of enrolled students who are underrepresented in medicine. In a majority of medical schools in the United States from 2005 to 2009, average GPA and MCAT scores of applicants increased, whereas the percentage of enrolled students who are underrepresented in medicine decreased. Our findings suggest that efforts to increase the diversity of medical school student bodies may be complicated by a desire to maintain high average GPA and MCAT scores. We propose that U.S. News revise its ranking methodology by incorporating a new diversity score into its student selectivity score and by reducing the weight placed on GPA and MCAT scores.

  2. Garden-Based Learning: An Experience with "At Risk" Secondary Education Students

    ERIC Educational Resources Information Center

    Ruiz-Gallardo, José-Reyes; Verde, Alonso; Valdés, Arturo

    2013-01-01

    The reengagement of disenchanted secondary students is one of the priorities of the educational system. Over a six-year period (2003-2004 to 2008-2009), 63 disruptive and low-performance secondary school students were integrated into a two-year garden-based learning program, which took place in southeastern Spain. This article intends to assess…

  3. Determination and risk assessment of naturally occurring genotoxic and carcinogenic alkenylbenzenes in nutmeg-based plant food supplements.

    PubMed

    Al-Malahmeh, Amer J; Alajlouni, Abdalmajeed M; Ning, Jia; Wesseling, Sebastiaan; Vervoort, Jacques; Rietjens, Ivonne M C M

    2017-10-01

    A risk assessment of nutmeg-based plant food supplements (PFS) containing different alkenylbenzenes was performed based on the alkenylbenzene levels quantified in a series of PFS collected via the online market. The estimated daily intake (EDI) of the alkenylbenzenes amounted to 0.3 to 312 μg/kg body weight (bw) for individual alkenylbenzenes, to 1.5 to 631 μg/kg bw when adding up the alkenylbenzene levels assuming equal potency, and to 0.4 to 295 μg/kg bw when expressed in safrole equivalents using toxic equivalency factors (TEFs). The margin of exposure (MOE) approach was used to evaluate the potential risks. Independent of the method used for the intake estimate, the MOE values obtained were generally lower than 10000, indicating a priority for risk management. When taking into account that PFS may be used for shorter periods of time and using Haber's rule to correct for shorter-than-lifetime exposure, it was shown that limiting exposure to only 1 or 2 weeks would result in MOE values that would be, at the presently determined levels of alkenylbenzenes and proposed uses of the PFS, of low priority for risk management (MOE > 10000). It is concluded that nutmeg-based PFS consumption following recommendations for daily intake, especially for longer periods of time, raises concern. Copyright © 2017 John Wiley & Sons, Ltd.
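The MOE arithmetic above, including the Haber's-rule correction for shorter-than-lifetime use, is easy to sketch. The BMDL10 of 1950 μg/kg bw per day below is illustrative; the EDI of 10 μg/kg bw per day sits inside the range reported in the abstract:

```python
def moe(bmdl10, edi):
    """Margin of exposure: BMDL10 divided by the estimated daily intake."""
    return bmdl10 / edi

def habers_rule_edi(edi, exposure_weeks, lifetime_weeks=75 * 52):
    """Haber's rule: scale intake by the exposed fraction of a 75-year lifetime."""
    return edi * exposure_weeks / lifetime_weeks

edi = 10.0                                   # ug/kg bw/day (illustrative)
lifetime_moe = moe(1950, edi)
two_week_moe = moe(1950, habers_rule_edi(edi, exposure_weeks=2))
```

With these numbers the lifetime MOE falls below 10000 (priority for risk management), while limiting use to two weeks pushes the MOE far above 10000, matching the qualitative conclusion above.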

  4. Prognostic discrimination for early chronic phase chronic myeloid leukemia in imatinib era: comparison of Sokal, Euro, and EUTOS scores in Korean population.

    PubMed

    Yahng, Seung-Ah; Jang, Eun-Jung; Choi, Soo-Young; Lee, Sung-Eun; Kim, Soo-Hyun; Kim, Dong-Wook

    2014-08-01

    Beyond the conventional Sokal and Euro scores, a new prognostic risk classification, based on the European Treatment Outcome Study (EUTOS), has been developed to predict the outcome of treatment with tyrosine kinase inhibitors (TKI) in chronic myeloid leukemia (CML). In the present study, each risk score was validated by various endpoints in 206 Korean patients with early chronic-phase CML treated with up-front standard dose imatinib. In our analysis, all three scores were found to be valid. The 5-year event-free survival (EFS) was significantly discriminated using Sokal (P = 0.002), Euro (P = 0.003), and EUTOS (P = 0.029), with the worst probability by Euro high-risk (62 vs. 49 vs. 67 %) and better EFS in Sokal low-risk (89 vs. 86 vs. 82 %). Combining all scores identified 6 % of all patients having homogeneous high-risk with distinctively worse outcomes (5-year EFS of 41 %, cumulative complete cytogenetic response rate of 56 %, and cumulative major molecular response rate of 27 %), whereas the group of discordance in risk scores (60 %) had similar results to those of intermediate-risk groups of Sokal and Euro scores. Combining all risk scores for baseline risk assessment may be useful in clinical practice for identifying groups of patients who may benefit from treatment initiation with a more potent TKI among the currently available first-line TKIs.

  5. Predictive value of CHADS2 and CHA2DS2-VASc scores for acute myocardial infarction in patients with atrial fibrillation.

    PubMed

    Pang, Hui; Han, Bing; Fu, Qiang; Zong, Zhenkun

    2017-07-05

    The presence of acute myocardial infarction (AMI) confers a poor prognosis in atrial fibrillation (AF) and is associated with dramatically increased mortality. This study aimed to evaluate the predictive value of the CHADS2 and CHA2DS2-VASc scores for AMI in patients with AF. This retrospective study enrolled 5140 consecutive nonvalvular AF patients: 300 patients with AMI and 4840 patients without AMI. We identified the optimal cut-off values of the CHADS2 and CHA2DS2-VASc scores, each based on receiver operating characteristic curves, to predict the risk of AMI. Both the CHADS2 score and the CHA2DS2-VASc score were associated with an increased odds ratio of the prevalence of AMI in patients with AF, after adjustment for hyperlipidaemia, hyperuricemia, hyperthyroidism, hypothyroidism and obstructive sleep apnea. The present results showed that the area under the curve (AUC) for the CHADS2 score was 0.787, with accuracy similar to that of the CHA2DS2-VASc score (AUC 0.750), in predicting "high-risk" AF patients who developed AMI. However, the predictive accuracy of both clinically based risk scores was only fair. The CHA2DS2-VASc score has fair predictive value for identifying high-risk patients with AF and is not significantly superior to CHADS2 in predicting patients who develop AMI.
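For readers unfamiliar with the score being evaluated, the CHA2DS2-VASc tally itself is a simple additive rule; a minimal implementation of the standard point assignments follows (the example patient is hypothetical):

```python
def cha2ds2_vasc(chf, hypertension, age, diabetes, stroke_tia, vascular, female):
    """Standard CHA2DS2-VASc tally: 1 point each for congestive heart failure,
    hypertension, diabetes, vascular disease and female sex; 2 points for
    prior stroke/TIA; 2 points for age >= 75, 1 point for age 65-74."""
    score = sum([chf, hypertension, diabetes, vascular, female])
    score += 2 if age >= 75 else 1 if age >= 65 else 0
    score += 2 if stroke_tia else 0
    return score

# Hypothetical 70-year-old woman with heart failure, hypertension and prior TIA.
s = cha2ds2_vasc(chf=True, hypertension=True, age=70, diabetes=False,
                 stroke_tia=True, vascular=False, female=True)
```

The study above asks whether thresholds on this tally (chosen via ROC analysis) also flag AF patients who go on to develop AMI.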

  6. Simple Scoring System to Predict In-Hospital Mortality After Surgery for Infective Endocarditis.

    PubMed

    Gatti, Giuseppe; Perrotti, Andrea; Obadia, Jean-François; Duval, Xavier; Iung, Bernard; Alla, François; Chirouze, Catherine; Selton-Suty, Christine; Hoen, Bruno; Sinagra, Gianfranco; Delahaye, François; Tattevin, Pierre; Le Moing, Vincent; Pappalardo, Aniello; Chocron, Sidney

    2017-07-20

    Nonspecific scoring systems are used to predict the risk of death postsurgery in patients with infective endocarditis (IE). The purpose of the present study was both to analyze the risk factors for in-hospital death, which complicates surgery for IE, and to create a mortality risk score based on the results of this analysis. Outcomes of 361 consecutive patients (mean age, 59.1±15.4 years) who had undergone surgery for IE in 8 European centers of cardiac surgery were recorded prospectively, and a risk factor analysis (multivariable logistic regression) for in-hospital death was performed. The discriminatory power of a new predictive scoring system was assessed with receiver operating characteristic curve analysis. Score validation procedures were carried out. Fifty-six (15.5%) patients died postsurgery. BMI >27 kg/m² (odds ratio [OR], 1.79; P = 0.049), estimated glomerular filtration rate <50 mL/min (OR, 3.52; P < 0.0001), New York Heart Association class IV (OR, 2.11; P = 0.024), systolic pulmonary artery pressure >55 mm Hg (OR, 1.78; P = 0.032), and critical state (OR, 2.37; P = 0.017) were independent predictors of in-hospital death. A scoring system was devised to predict in-hospital death postsurgery for IE (area under the receiver operating characteristic curve, 0.780; 95% CI, 0.734-0.822). The score performed better than 5 of 6 scoring systems for in-hospital death after cardiac surgery that were considered. A simple scoring system based on risk factors for in-hospital death was specifically created to predict mortality risk postsurgery in patients with IE. © 2017 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley.

  7. Development and evaluation of an office ergonomic risk checklist: ROSA--rapid office strain assessment.

    PubMed

    Sonne, Michael; Villalta, Dino L; Andrews, David M

    2012-01-01

    The Rapid Office Strain Assessment (ROSA) was designed to quickly quantify risks associated with computer work and to establish an action level for change based on reports of worker discomfort. Computer use risk factors were identified in previous research and standards on office design for the chair, monitor, telephone, keyboard and mouse. The risk factors were diagrammed and coded as increasing scores from 1 to 3. ROSA final scores ranged in magnitude from 1 to 10, with each successive score representing an increased presence of risk factors. Total body discomfort and ROSA final scores for 72 office workstations were significantly correlated (R = 0.384). ROSA final scores exhibited high inter- and intra-observer reliability (ICCs of 0.88 and 0.91, respectively). Mean discomfort increased with increasing ROSA scores, with a significant difference occurring between scores of 3 and 5 (out of 10). A ROSA final score of 5 might therefore be useful as an action level indicating when immediate change is necessary. ROSA proved to be an effective and reliable method for identifying computer use risk factors related to discomfort. Copyright © 2011 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  8. A new methodology for surcharge risk management in urban areas (case study: Gonbad-e-Kavus city).

    PubMed

    Hooshyaripor, Farhad; Yazdi, Jafar

    2017-02-01

    This research presents a simulation-optimization model for urban flood mitigation integrating the Non-dominated Sorting Genetic Algorithm (NSGA-II) with the Storm Water Management Model (SWMM) hydraulic model and a curve-number-based hydrologic model of low-impact development technologies in Gonbad-e-Kavus, a small city in the north of Iran. In the developed model, the best performance of the system relies on the optimal layout and capacity of retention ponds over the study area in order to reduce surcharge from the manholes under a set of storm event loads, while the available investment plays a restricting role. Thus, there is a multi-objective optimization problem with two conflicting objectives, solved successfully by NSGA-II to find a set of optimal solutions known as the Pareto front. In order to analyze the results, a new factor, the investment priority index (IPI), is defined, which shows the risk of surcharging over the network and the priority of mitigation actions. The IPI is calculated using the probability of pond selection for candidate locations and the average depth of the ponds in all Pareto-front solutions. The IPI can help decision makers to arrange a long-term progressive plan with priority given to high-risk areas once an optimal solution has been selected.
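The IPI combines two quantities named in the abstract: how often a candidate pond site is selected across Pareto-front solutions, and the average design depth when selected. Combining them as a product in this sketch is an assumption, and the solutions below are illustrative:

```python
from statistics import mean

# Illustrative Pareto-front solutions: pond depth (m) per candidate site,
# with 0 meaning no pond is built at that site in that solution.
pareto_solutions = [
    {"site_A": 1.5, "site_B": 0.0, "site_C": 2.0},
    {"site_A": 1.0, "site_B": 0.5, "site_C": 2.5},
    {"site_A": 0.0, "site_B": 0.0, "site_C": 1.5},
]

def ipi(site):
    """Investment priority index: selection probability x mean depth when built."""
    depths = [s[site] for s in pareto_solutions if s[site] > 0]
    if not depths:
        return 0.0
    selection_probability = len(depths) / len(pareto_solutions)
    return selection_probability * mean(depths)

priority = sorted(pareto_solutions[0], key=ipi, reverse=True)
```

Sites that appear in every Pareto solution with large ponds rank first, giving the progressive, high-risk-first investment plan the abstract describes.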

  9. Enhancing the Value of Population-Based Risk Scores for Institutional-Level Use.

    PubMed

    Raza, Sajjad; Sabik, Joseph F; Rajeswaran, Jeevanantham; Idrees, Jay J; Trezzi, Matteo; Riaz, Haris; Javadikasgari, Hoda; Nowicki, Edward R; Svensson, Lars G; Blackstone, Eugene H

    2016-07-01

    We hypothesized that factors associated with an institution's residual risk unaccounted for by population-based models may be identifiable and used to enhance the value of population-based risk scores for quality improvement. From January 2000 to January 2010, 4,971 patients underwent aortic valve replacement (AVR), either isolated (n = 2,660) or with concomitant coronary artery bypass grafting (AVR+CABG; n = 2,311). Operative mortality and major morbidity and mortality predicted by The Society of Thoracic Surgeons (STS) risk models were compared with observed values. After adjusting for patients' STS score, additional and refined risk factors were sought to explain residual risk. Differences between STS model coefficients (risk-factor strength) and those specific to our institution were calculated. Observed operative mortality was less than predicted for AVR (1.6% [42 of 2,660] vs 2.8%, p < 0.0001) and AVR+CABG (2.6% [59 of 2,311] vs 4.9%, p < 0.0001). Observed major morbidity and mortality was also lower than predicted for isolated AVR (14.6% [389 of 2,660] vs 17.5%, p < 0.0001) and AVR+CABG (20.0% [462 of 2,311] vs 25.8%, p < 0.0001). Shorter height, higher bilirubin, and lower albumin were identified as additional institution-specific risk factors, and body surface area, creatinine, glomerular filtration rate, blood urea nitrogen, and heart failure across all levels of functional class were identified as refined risk-factor variables associated with residual risk. In many instances, risk-factor strength differed substantially from that of STS models. Scores derived from population-based models can be enhanced for institutional-level use by adjusting for institution-specific additional and refined risk factors. Identifying these and measuring differences in institution-specific versus population-based risk-factor strength can identify areas to target for quality improvement initiatives. Copyright © 2016 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.

  10. Systemic Inflammation-Based Biomarkers and Survival in HIV-Positive Subjects With Solid Cancer in an Italian Multicenter Study.

    PubMed

    Raffetti, Elena; Donato, Francesco; Pezzoli, Chiara; Digiambenedetto, Simona; Bandera, Alessandra; Di Pietro, Massimo; Di Filippo, Elisa; Maggiolo, Franco; Sighinolfi, Laura; Fornabaio, Chiara; Castelnuovo, Filippo; Ladisa, Nicoletta; Castelli, Francesco; Quiros Roldan, Eugenia

    2015-08-15

    Recently, some systemic inflammation-based biomarkers have been demonstrated useful for predicting risk of death in patients with solid cancer independently of tumor characteristics. This study aimed to investigate the prognostic role of systemic inflammation-based biomarkers in HIV-infected patients with solid tumors and to propose a risk score for mortality in these subjects. Clinical and pathological data on solid AIDS-defining cancer (ADC) and non-AIDS-defining cancer (NADC), diagnosed between 1998 and 2012 in an Italian cohort, were analyzed. To evaluate the prognostic role of systemic inflammation- and nutrition-based markers, univariate and multivariable Cox regression models were applied. To compute the risk score equation, the patients were randomly assigned to a derivation and a validation sample. A total of 573 patients (76.3% males) with a mean age of 46.2 years (SD = 10.3) were enrolled. 178 patients died during a median of 3.2 years of follow-up. For solid NADCs, elevated Glasgow Prognostic Score, modified Glasgow Prognostic Score, neutrophil/lymphocyte ratio, platelet/lymphocyte ratio, and Prognostic Nutritional Index were independently associated with risk of death; for solid ADCs, none of these markers was associated with risk of death. For solid NADCs, we computed a mortality risk score on the basis of age at cancer diagnosis, intravenous drug use, and Prognostic Nutritional Index. The areas under the receiver operating characteristic curve were 0.67 (95% confidence interval: 0.58 to 0.75) in the derivation sample and 0.66 (95% confidence interval: 0.54 to 0.79) in the validation sample. Inflammatory biomarkers were associated with risk of death in HIV-infected patients with solid NADCs but not with ADCs.

  11. Joint use of cardio-embolic and bleeding risk scores in elderly patients with atrial fibrillation.

    PubMed

    Marcucci, Maura; Nobili, Alessandro; Tettamanti, Mauro; Iorio, Alfonso; Pasina, Luca; Djade, Codjo D; Franchi, Carlotta; Marengoni, Alessandra; Salerno, Francesco; Corrao, Salvatore; Violi, Francesco; Mannucci, Pier Mannuccio

    2013-12-01

    Scores for cardio-embolic and bleeding risk in patients with atrial fibrillation are described in the literature. However, it is not clear how they co-classify elderly patients with multimorbidity, nor whether and how they affect the physician's decision on thromboprophylaxis. Four scores for cardio-embolic and bleeding risks were retrospectively calculated for patients aged ≥65 years with atrial fibrillation enrolled in the REPOSI registry. The co-classification of patients according to risk categories based on different score combinations was described, and the relationship between risk categories was tested. The association between the antithrombotic therapy received and the scores was investigated by logistic regressions and CART analyses. At admission, among 543 patients the median scores (range) were: CHADS2 2 (0-6), CHA2DS2-VASc 4 (1-9), HEMORR2HAGES 3 (0-7), HAS-BLED 2 (1-6). Most of the patients were at high cardio-embolic/high-intermediate bleeding risk (70.5% combining CHADS2 and HEMORR2HAGES, 98.3% combining CHA2DS2-VASc and HAS-BLED). Overall, 50-60% of patients were classified in a cardio-embolic risk category higher than their bleeding risk category. In univariate and multivariable analyses, a higher bleeding score was negatively associated with warfarin prescription and positively associated with aspirin prescription. The cardio-embolic scores were associated with the therapeutic choice only after adjusting for bleeding score or age. REPOSI patients represented a population at high cardio-embolic and bleeding risks, but most of them were classified by the scores as having a higher cardio-embolic than bleeding risk. Yet, prescription and type of antithrombotic therapy appeared to be primarily dictated by the bleeding risk. © 2013.

  12. Glaucoma and quality of life: fall and driving risk.

    PubMed

    Montana, Cynthia L; Bhorade, Anjali M

    2018-03-01

    Numerous population-based studies suggest that glaucoma is an independent risk factor for falling and motor vehicle collisions, particularly for older adults. These adverse events lead to increased healthcare expenditures and decreased quality of life. Current research priorities, therefore, include identifying factors that predispose glaucoma patients to falling and unsafe driving, and developing screening strategies and targeted rehabilitation. The purpose of this article is to review recent studies that address these priorities. Studies continue to show that glaucoma patients, particularly those with advanced disease, have an increased risk of falling or unsafe driving. Risk factors, however, remain variable and include the severity and location of visual field defects, contrast sensitivity, and performance on divided attention tasks. Such variability likely reflects the multifactorial nature of ambulating and driving and the compensatory strategies used by patients. Falls and unsafe driving remain a serious public health issue for older adults with glaucoma. Ambulation and driving are complex tasks, and there is no consensus yet regarding the best methods for risk stratification and targeted interventions to increase safety. Therefore, comprehensive and individualized assessments are recommended to most effectively evaluate a patient's risk for falling or unsafe driving.

  13. Innovative Surveillance and Risk Reduction Systems for Family Maltreatment, Suicidality, and Substance Problems in the USAF

    DTIC Science & Technology

    2006-03-01

    suicidality, and alcohol/drug problems. Managing risk and increasing resilience in military human resources (i.e., “Force Health Protection”) is a top priority for DoD and Armed... Behavioral Health representatives, as well as at AF-IDS meetings. 4 Wave 1 Bases (Tyndall AFB, Barksdale AFB, Shaw AFB) The primary purpose of

  14. Prioritizing Populations for Conservation Using Phylogenetic Networks

    PubMed Central

    Volkmann, Logan; Martyn, Iain; Moulton, Vincent; Spillner, Andreas; Mooers, Arne O.

    2014-01-01

    In the face of inevitable future losses to biodiversity, ranking species by conservation priority seems more than prudent. Setting conservation priorities within species (i.e., at the population level) may be critical as species ranges become fragmented and connectivity declines. However, existing approaches to prioritization (e.g., scoring organisms by their expected genetic contribution) are based on phylogenetic trees, which may be poor representations of differentiation below the species level. In this paper we extend evolutionary isolation indices used in conservation planning from phylogenetic trees to phylogenetic networks. Such networks better represent population differentiation, and our extension allows populations to be ranked in order of their expected contribution to the set. We illustrate the approach using data from two imperiled species: the spotted owl Strix occidentalis in North America and the mountain pygmy-possum Burramys parvus in Australia. Using previously published mitochondrial and microsatellite data, we construct phylogenetic networks and score each population by its relative genetic distinctiveness. In both cases, our phylogenetic networks capture the geographic structure of each species: geographically peripheral populations harbor less-redundant genetic information, increasing their conservation rankings. We note that our approach can be used with all conservation-relevant distances (e.g., those based on whole-genome, ecological, or adaptive variation) and suggest it be added to the assortment of tools available to wildlife managers for allocating effort among threatened populations. PMID:24586451

  15. Predictive accuracy of combined genetic and environmental risk scores.

    PubMed

    Dudbridge, Frank; Pashayan, Nora; Yang, Jian

    2018-02-01

    The substantial heritability of most complex diseases suggests that genetic data could provide useful risk prediction. To date the performance of genetic risk scores has fallen short of the potential implied by heritability, but this can be explained by insufficient sample sizes for estimating highly polygenic models. When risk predictors already exist based on environment or lifestyle, two key questions arise: to what extent can they be improved by adding genetic information, and what is the ultimate potential of combined genetic and environmental risk scores? Here, we extend previous work on the predictive accuracy of polygenic scores to allow for an environmental score that may be correlated with the polygenic score, for example when the environmental factors mediate the genetic risk. We derive common measures of predictive accuracy and improvement as functions of the training sample size, chip heritabilities of disease and environmental score, and genetic correlation between disease and environmental risk factors. We consider simple addition of the two scores and a weighted sum that accounts for their correlation. Using examples from studies of cardiovascular disease and breast cancer, we show that improvements in discrimination are generally small but reasonable degrees of reclassification could be obtained with current sample sizes. Correlation between genetic and environmental scores has only minor effects on numerical results in realistic scenarios. In the longer term, as the accuracy of polygenic scores improves, they will come to dominate the predictive accuracy compared to environmental scores. © 2017 WILEY PERIODICALS, INC.
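    The weighted sum that accounts for correlation between the two scores can be sketched with the standard result for linearly combining two correlated standardized predictors; this is a sketch of the general technique, not the paper's exact derivation, and the inputs (each score's correlation with liability, and the correlation between the scores) are assumptions of the example:

```python
def combined_weights(c_g, c_e, r):
    """Weights (w_g, w_e) for w_g*G + w_e*E maximizing correlation with
    the outcome liability, where G and E are standardized genetic and
    environmental scores, c_g and c_e are their correlations with the
    liability, and r is the correlation between the two scores. This is
    the usual two-predictor least-squares solution w = Sigma^{-1} c."""
    det = 1.0 - r * r
    if det <= 0.0:
        raise ValueError("scores are perfectly correlated")
    return (c_g - r * c_e) / det, (c_e - r * c_g) / det
```

    With r = 0 the weights are just the marginal correlations (simple addition after scaling); if the environmental factors fully mediate the genetic risk (c_g = r * c_e), the genetic score gets zero weight, matching the intuition that it then adds no independent information.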

  16. Predictive accuracy of combined genetic and environmental risk scores

    PubMed Central

    Dudbridge, Frank; Pashayan, Nora; Yang, Jian

    2017-01-01

    The substantial heritability of most complex diseases suggests that genetic data could provide useful risk prediction. To date the performance of genetic risk scores has fallen short of the potential implied by heritability, but this can be explained by insufficient sample sizes for estimating highly polygenic models. When risk predictors already exist based on environment or lifestyle, two key questions arise: to what extent can they be improved by adding genetic information, and what is the ultimate potential of combined genetic and environmental risk scores? Here, we extend previous work on the predictive accuracy of polygenic scores to allow for an environmental score that may be correlated with the polygenic score, for example when the environmental factors mediate the genetic risk. We derive common measures of predictive accuracy and improvement as functions of the training sample size, chip heritabilities of disease and environmental score, and genetic correlation between disease and environmental risk factors. We consider simple addition of the two scores and a weighted sum that accounts for their correlation. Using examples from studies of cardiovascular disease and breast cancer, we show that improvements in discrimination are generally small but reasonable degrees of reclassification could be obtained with current sample sizes. Correlation between genetic and environmental scores has only minor effects on numerical results in realistic scenarios. In the longer term, as the accuracy of polygenic scores improves, they will come to dominate the predictive accuracy compared to environmental scores. PMID:29178508

  17. A Simple Model to Rank Shellfish Farming Areas Based on the Risk of Disease Introduction and Spread.

    PubMed

    Thrush, M A; Pearce, F M; Gubbins, M J; Oidtmann, B C; Peeler, E J

    2017-08-01

    The European Union Council Directive 2006/88/EC requires that risk-based surveillance (RBS) for listed aquatic animal diseases is applied to all aquaculture production businesses. The principle behind this is the efficient use of resources directed towards high-risk farm categories, animal types and geographic areas. To achieve this requirement, fish and shellfish farms must be ranked according to their risk of disease introduction and spread. We present a method to risk rank shellfish farming areas based on the risk of disease introduction and spread and demonstrate how the approach was applied in 45 shellfish farming areas in England and Wales. Ten parameters were used to inform the risk model, which were grouped into four risk themes based on related pathways for transmission of pathogens: (i) live animal movement, (ii) transmission via water, (iii) short distance mechanical spread (birds) and (iv) long distance mechanical spread (vessels). Weights (informed by expert knowledge) were applied both to individual parameters and to risk themes for introduction and spread to reflect their relative importance. A spreadsheet model was developed to determine quantitative scores for the risk of pathogen introduction and risk of pathogen spread for each shellfish farming area. These scores were used to independently rank areas for risk of introduction and for risk of spread. Thresholds were set to establish risk categories (low, medium and high) for introduction and spread based on risk scores. Risk categories for introduction and spread for each area were combined to provide overall risk categories to inform a risk-based surveillance programme directed at the area level. Applying the combined risk category designation framework for risk of introduction and spread suggested by European Commission guidance for risk-based surveillance, 4, 10 and 31 areas were classified as high, medium and low risk, respectively. © 2016 Crown copyright.
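    The scoring machinery described (parameter scores rolled up into weighted theme scores, then thresholded into low/medium/high bands for introduction and spread) can be sketched as follows; the weights, thresholds, and the rule for combining the two bands are invented for illustration, since the paper's expert-elicited values are not given in the abstract:

```python
# Hypothetical theme weights and thresholds, for illustration only.
WEIGHTS = {"live_movement": 0.4, "water": 0.3, "birds": 0.1, "vessels": 0.2}

def area_score(theme_scores, weights=WEIGHTS):
    """Quantitative risk score for one shellfish farming area:
    weighted sum of per-theme scores (introduction or spread)."""
    return sum(weights[t] * s for t, s in theme_scores.items())

def band(score, low_med=1.5, med_high=2.5):
    """Map a quantitative score to a low/medium/high risk category."""
    return "low" if score < low_med else "medium" if score < med_high else "high"

def surveillance_category(intro_scores, spread_scores):
    """Combine introduction and spread bands; taking the worse band is
    one simple (assumed) rule for directing risk-based surveillance."""
    order = ["low", "medium", "high"]
    a, b = band(area_score(intro_scores)), band(area_score(spread_scores))
    return max(a, b, key=order.index)
```

    Ranking areas by these scores, then cutting the ranking at the two thresholds, reproduces the high/medium/low classification the surveillance programme is directed at.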

  18. Major bleeding and intracranial hemorrhage risk prediction in patients with atrial fibrillation: Attention to modifiable bleeding risk factors or use of a bleeding risk stratification score? A nationwide cohort study.

    PubMed

    Chao, Tze-Fan; Lip, Gregory Y H; Lin, Yenn-Jiang; Chang, Shih-Lin; Lo, Li-Wei; Hu, Yu-Feng; Tuan, Ta-Chuan; Liao, Jo-Nan; Chung, Fa-Po; Chen, Tzeng-Ji; Chen, Shih-Ann

    2018-03-01

    While modifiable bleeding risks should be addressed in all patients with atrial fibrillation (AF), use of a bleeding risk score enables clinicians to 'flag up' those at risk of bleeding for more regular patient contact reviews. We compared a risk assessment strategy for major bleeding and intracranial hemorrhage (ICH) based on modifiable bleeding risk factors (referred to as a 'MBR factors' score) against established bleeding risk stratification scores (HEMORR2HAGES, HAS-BLED, ATRIA, ORBIT). A nationwide cohort study of 40,450 AF patients who received warfarin for stroke prevention was performed. The clinical endpoints included ICH and major bleeding. Bleeding scores were compared using receiver operating characteristic (ROC) curves (areas under the ROC curves [AUCs], or c-index) and the net reclassification index (NRI). During a follow-up of 4.60 ± 3.62 years, 1,581 (3.91%) patients sustained ICH and 6,889 (17.03%) patients sustained major bleeding events. All tested bleeding risk scores at baseline were higher in those sustaining major bleeds. When compared to no ICH, patients sustaining ICH had higher baseline HEMORR2HAGES (p=0.003), HAS-BLED (p<0.001) and MBR factors score (p=0.013) but not ATRIA and ORBIT scores. When HAS-BLED was compared to other bleeding scores, its c-indexes were significantly higher than those of the MBR factors (p<0.001) and ORBIT (p=0.05) scores for major bleeding. The c-index for the MBR factors score was significantly lower than for all other scores (DeLong test, all p<0.001). When NRI was performed, HAS-BLED outperformed all other bleeding risk scores for major bleeding (all p<0.001). C-indexes for ATRIA and ORBIT scores suggested no significant prediction for ICH. All contemporary bleeding risk scores had modest predictive value for predicting major bleeding, but the best predictive value and NRI were found for the HAS-BLED score. Simply depending on modifiable bleeding risk factors had suboptimal predictive value for the prediction of major bleeding in AF patients when compared to the HAS-BLED score. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
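    The c-index used throughout these comparisons has a simple interpretation for binary endpoints: the probability that a randomly chosen patient who bled carries a higher baseline score than one who did not, with ties counted as one half. A minimal sketch (not the study's code):

```python
from itertools import product

def c_index(scores, events):
    """Concordance index for a risk score against a binary endpoint:
    fraction of (event, non-event) pairs in which the event patient
    has the higher score, counting tied scores as 0.5."""
    cases = [s for s, e in zip(scores, events) if e]
    controls = [s for s, e in zip(scores, events) if not e]
    pairs = list(product(cases, controls))
    conc = sum(1.0 if c > n else 0.5 if c == n else 0.0 for c, n in pairs)
    return conc / len(pairs)
```

    A score that perfectly separates bleeders from non-bleeders gives 1.0, while a useless score hovers around 0.5, which is why c-indexes near 0.5 for ATRIA and ORBIT imply no significant ICH prediction.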

  19. Upper gastrointestinal bleeding risk scores: Who, when and why?

    PubMed Central

    Monteiro, Sara; Gonçalves, Tiago Cúrdia; Magalhães, Joana; Cotter, José

    2016-01-01

    Upper gastrointestinal bleeding (UGIB) remains a significant cause of hospital admission. In order to stratify patients according to the risk of complications, such as rebleeding or death, and to predict the need for clinical intervention, several risk scores have been proposed and their use consistently recommended by international guidelines. The use of risk scoring systems in the early assessment of patients suffering from UGIB may be useful to distinguish high-risk patients, who may need clinical intervention and hospitalization, from low-risk patients with a lower chance of developing complications, for whom outpatient management can be considered. Although several scores have been published and validated for predicting different outcomes, the most frequently cited ones are the Rockall score and the Glasgow Blatchford score (GBS). While the Rockall score, which incorporates clinical and endoscopic variables, has been validated to predict mortality, the GBS, which is based on clinical and laboratory parameters, has been studied to predict the need for clinical intervention. Despite the advantages previously reported, their use in clinical decisions is still limited. This review describes the different risk scores used in the UGIB setting, highlights the most important research, explains why and when their use may be helpful, reflects on the problems that remain unresolved and guides future research with practical impact. PMID:26909231

  20. 5 CFR 302.401 - Selection and appointment.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... reemployment, reemployment, or regular list on which candidates have not received numerical scores, an agency... candidates have received numerical scores, the agency must make its selection for each vacancy from not more... method, an agency is not required to— (1) Accord an applicant on its priority reemployment or...

  1. Exposure-Based Screening and Priority-Setting (WC10)

    EPA Science Inventory

    The U.S. National Academy of Sciences report “Using 21st Century Science to Improve Risk-Related Evaluations” recognized that high-throughput screening (HTS) and exposure prediction tools are necessary to prioritize thousands of chemicals with the potential to pose human health r...

  2. Genetic markers enhance coronary risk prediction in men: the MORGAM prospective cohorts.

    PubMed

    Hughes, Maria F; Saarela, Olli; Stritzke, Jan; Kee, Frank; Silander, Kaisa; Klopp, Norman; Kontto, Jukka; Karvanen, Juha; Willenborg, Christina; Salomaa, Veikko; Virtamo, Jarmo; Amouyel, Phillippe; Arveiler, Dominique; Ferrières, Jean; Wiklund, Per-Gunner; Baumert, Jens; Thorand, Barbara; Diemert, Patrick; Trégouët, David-Alexandre; Hengstenberg, Christian; Peters, Annette; Evans, Alun; Koenig, Wolfgang; Erdmann, Jeanette; Samani, Nilesh J; Kuulasmaa, Kari; Schunkert, Heribert

    2012-01-01

    More accurate coronary heart disease (CHD) prediction, specifically in middle-aged men, is needed to reduce the burden of disease more effectively. We hypothesised that a multilocus genetic risk score could refine CHD prediction beyond classic risk scores and obtain more precise risk estimates using a prospective cohort design. Using data from nine prospective European cohorts, including 26,221 men, we selected in a case-cohort setting 4,818 healthy men at baseline, and used Cox proportional hazards models to examine associations between CHD and risk scores based on genetic variants representing 13 genomic regions. Over follow-up (range: 5-18 years), 1,736 incident CHD events occurred. Genetic risk scores were validated in men with at least 10 years of follow-up (632 cases, 1,361 non-cases). Genetic risk score 1 (GRS1) combined 11 SNPs and two haplotypes, with effect estimates from previous genome-wide association studies. GRS2 combined 11 SNPs plus 4 SNPs from the haplotypes, with coefficients estimated from these prospective cohorts using 10-fold cross-validation. Scores were added to a model adjusted for classic risk factors comprising the Framingham risk score, and 10-year risks were derived. Both scores improved net reclassification (NRI) over the Framingham score (7.5%, p = 0.017, for GRS1; 6.5%, p = 0.044, for GRS2), but GRS2 also improved discrimination (c-index improvement 1.11%, p = 0.048). In subgroup analysis of men aged 50-59 (436 cases, 603 non-cases), net reclassification improved for GRS1 (13.8%) and GRS2 (12.5%). Net reclassification improvement remained significant for both scores when family history of CHD was added to the baseline model for this male subgroup, improving prediction of early-onset CHD events. Genetic risk scores add precision to risk estimates for CHD and improve prediction beyond classic risk factors, particularly for middle-aged men.
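    A multilocus score of the GRS1 type (externally estimated effect sizes) is just a weighted allele count; the SNP identifiers and log odds ratios below are made up for illustration and are not the 13 regions used in MORGAM:

```python
def genetic_risk_score(genotypes, log_odds_ratios):
    """Weighted allele-count genetic risk score: each SNP contributes
    its risk-allele count (0, 1 or 2) times its published log odds
    ratio. Identifiers and weights here are hypothetical."""
    return sum(log_odds_ratios[snp] * count for snp, count in genotypes.items())

weights = {"rs0001": 0.20, "rs0002": 0.10, "rs0003": 0.30}  # hypothetical effects
person = {"rs0001": 2, "rs0002": 0, "rs0003": 1}            # risk-allele counts
```

    In a GRS2-style score the weights would instead be re-estimated in the cohorts themselves under cross-validation, with the same summation.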

  3. Quantifying the impact of using Coronary Artery Calcium Score for risk categorization instead of Framingham Score or European Heart SCORE in lipid lowering algorithms in a Middle Eastern population.

    PubMed

    Isma'eel, Hussain A; Almedawar, Mohamad M; Harbieh, Bernard; Alajaji, Wissam; Al-Shaar, Laila; Hourani, Mukbil; El-Merhi, Fadi; Alam, Samir; Abchee, Antoine

    2015-10-01

    The use of the Coronary Artery Calcium Score (CACS) for risk categorization instead of the Framingham Risk Score (FRS) or European Heart SCORE (EHS) to improve classification of individuals is well documented. However, the impact of reclassifying individuals using CACS on initiating lipid lowering therapy is not well understood. We aimed to determine the percentage of individuals not requiring lipid lowering therapy as per the FRS and EHS models but who are found to require it using CACS, and vice versa; and to determine the level of agreement between CACS-, FRS- and EHS-based models. Data were collected for 500 consecutive patients who had already undergone CACS. However, only 242 patients met the inclusion criteria and were included in the analysis. Risk stratification comparisons were conducted according to CACS, FRS, and EHS, and the agreement (Kappa) between them was calculated. In accordance with the models, 79.7% to 81.5% of high-risk individuals were down-classified by CACS, while 6.8% to 7.6% of individuals at intermediate risk were up-classified to high risk by CACS, with slight to moderate agreement. Moreover, CACS recommended treatment to 5.7% and 5.8% of subjects untreated according to European and Canadian guidelines, respectively; whereas 75.2% to 81.2% of those treated in line with the guidelines would not be treated based on CACS. In this simulation, using CACS for risk categorization warrants lipid lowering treatment for 5-6% and spares 70-80% from treatment in accordance with the guidelines. Current strong evidence from double randomized clinical trials is in support of guideline recommendations. Our results call for a prospective trial to explore the benefits/risks of a CACS-based approach before any recommendations can be made.

  4. Quantifying the impact of using Coronary Artery Calcium Score for risk categorization instead of Framingham Score or European Heart SCORE in lipid lowering algorithms in a Middle Eastern population

    PubMed Central

    Isma’eel, Hussain A.; Almedawar, Mohamad M.; Harbieh, Bernard; Alajaji, Wissam; Al-Shaar, Laila; Hourani, Mukbil; El-Merhi, Fadi; Alam, Samir; Abchee, Antoine

    2015-01-01

    Background The use of the Coronary Artery Calcium Score (CACS) for risk categorization instead of the Framingham Risk Score (FRS) or European Heart SCORE (EHS) to improve classification of individuals is well documented. However, the impact of reclassifying individuals using CACS on initiating lipid lowering therapy is not well understood. We aimed to determine the percentage of individuals not requiring lipid lowering therapy as per the FRS and EHS models but who are found to require it using CACS, and vice versa; and to determine the level of agreement between CACS-, FRS- and EHS-based models. Methods Data were collected for 500 consecutive patients who had already undergone CACS. However, only 242 patients met the inclusion criteria and were included in the analysis. Risk stratification comparisons were conducted according to CACS, FRS, and EHS, and the agreement (Kappa) between them was calculated. Results In accordance with the models, 79.7% to 81.5% of high-risk individuals were down-classified by CACS, while 6.8% to 7.6% of individuals at intermediate risk were up-classified to high risk by CACS, with slight to moderate agreement. Moreover, CACS recommended treatment to 5.7% and 5.8% of subjects untreated according to European and Canadian guidelines, respectively; whereas 75.2% to 81.2% of those treated in line with the guidelines would not be treated based on CACS. Conclusion In this simulation, using CACS for risk categorization warrants lipid lowering treatment for 5–6% and spares 70–80% from treatment in accordance with the guidelines. Current strong evidence from double randomized clinical trials is in support of guideline recommendations. Our results call for a prospective trial to explore the benefits/risks of a CACS-based approach before any recommendations can be made. PMID:26557741

  5. A risk perception gap? Comparing expert, producer and consumer prioritization of food hazard controls.

    PubMed

    Hartmann, Christina; Hübner, Philipp; Siegrist, Michael

    2018-06-01

    Using a survey approach, the study examined how experts (i.e. food control representatives), producers (i.e. food industry representatives) and consumers prioritized control activities for 28 hazards related to food and other everyday items. The investigated hazards encompassed a wide range of safety issues, including health risks, consumer deception and poor food hygiene behaviour. The participants included 41 experts, 138 producers and 243 consumers from the German- and French-speaking parts of Switzerland. Based on detailed descriptions of the hazards, they were asked to rank these on a score sheet in terms of the perceived importance of monitoring by food control authorities. A between-group comparison of average rankings showed that consumers and experts differed significantly in relation to 17 of the 28 hazards. While the experts assigned higher priority to hazards related to everyday items such as nitrosamine in mascara and chromium VI in leather products, producers and consumers tended to prioritize products related to plant treatment and genetic modification of food and feeds. Producer and consumer rankings of the hazards were highly correlated (r = .96, p < .001). Rankings were also similar among participants from the two cultural regions (i.e. German and French-speaking parts of Switzerland). Copyright © 2018 Elsevier Ltd. All rights reserved.

  6. Supply Chain Vulnerability Analysis Using Scenario-Based Input-Output Modeling: Application to Port Operations.

    PubMed

    Thekdi, Shital A; Santos, Joost R

    2016-05-01

    Disruptive events such as natural disasters, loss or reduction of resources, work stoppages, and emergent conditions have potential to propagate economic losses across trade networks. In particular, disruptions to the operation of container port activity can be detrimental for international trade and commerce. Risk assessment should anticipate the impact of port operation disruptions with consideration of how priorities change due to uncertain scenarios and guide investments that are effective and feasible for implementation. Priorities for protective measures and continuity of operations planning must consider the economic impact of such disruptions across a variety of scenarios. This article introduces new performance metrics to characterize resiliency in interdependency modeling and also integrates scenario-based methods to measure economic sensitivity to sudden-onset disruptions. The methods will be demonstrated on a U.S. port responsible for handling $36.1 billion of cargo annually. The methods will be useful to port management, private industry supply chain planning, and transportation infrastructure management. © 2015 Society for Risk Analysis.

  7. Development of a self-assessment score for metabolic syndrome risk in non-obese Korean adults.

    PubMed

    Je, Youjin; Kim, Youngyo; Park, Taeyoung

    2017-03-01

    There is a need for simple risk scores that identify individuals at high risk for metabolic syndrome (MetS). Therefore, this study was performed to develop and validate a self-assessment score for MetS risk in non-obese Korean adults. Data from the fourth Korea National Health and Nutrition Examination Survey (KNHANES IV), 2007-2009, were used to develop a MetS risk score. We included a total of 5,508 non-obese participants aged 19-64 years who were free of a self-reported diagnosis of diabetes, hyperlipidemia, hypertension, stroke, angina, or cancer. Multivariable logistic regression model coefficients were used to assign each variable category a score. The validity of the score was assessed in an independent population survey performed in 2010 and 2011, KNHANES V (n=3,892). Age, BMI, physical activity, smoking, alcohol consumption, dairy consumption, a dietary habit of eating less salty food, and food insecurity were selected as categorical variables. The MetS risk score varied from 0 to 13, and a cut-point score of ≥7 was selected based on the highest Youden index. The cut-point provided a sensitivity of 81%, specificity of 61%, positive predictive value of 14%, and negative predictive value of 98%, with an area under the curve (AUC) of 0.78. Consistent results were obtained in the validation data sets. This simple risk score may be used to identify individuals at high risk for MetS without laboratory tests among non-obese Korean adults. Further studies are needed to verify the usefulness and feasibility of this score in various settings.
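    Selecting a cut-point by the highest Youden index, as done for the ≥7 threshold, amounts to scanning candidate thresholds and maximizing sensitivity + specificity - 1. A minimal sketch on toy data (not the KNHANES analysis):

```python
def best_cutpoint(scores, has_mets):
    """Return the cut-point c (classify positive when score >= c) that
    maximizes the Youden index J = sensitivity + specificity - 1."""
    best_cut, best_j = None, -1.0
    for cut in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, has_mets) if s >= cut and y)
        fn = sum(1 for s, y in zip(scores, has_mets) if s < cut and y)
        tn = sum(1 for s, y in zip(scores, has_mets) if s < cut and not y)
        fp = sum(1 for s, y in zip(scores, has_mets) if s >= cut and not y)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1.0
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut, best_j
```

    The Youden criterion weighs sensitivity and specificity equally; a screening tool that must not miss cases (the 98% negative predictive value reported here) may justify a different trade-off.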

  8. Factors predicting high estimated 10-year stroke risk: thai epidemiologic stroke study.

    PubMed

    Hanchaiphiboolkul, Suchat; Puthkhao, Pimchanok; Towanabut, Somchai; Tantirittisak, Tasanee; Wangphonphatthanasiri, Khwanrat; Termglinchan, Thanes; Nidhinandana, Samart; Suwanwela, Nijasri Charnnarong; Poungvarin, Niphon

    2014-08-01

    The purpose of the study was to determine the factors predicting high estimated 10-year stroke risk based on a risk score, and to determine which of the risk factors comprising the score had the greatest impact on the estimated risk. The Thai Epidemiologic Stroke study was a community-based cohort study that recruited participants from the general population in 5 regions of Thailand. Cross-sectional baseline data from 16,611 participants aged 45-69 years with no history of stroke were included in this analysis. Multiple logistic regression analysis was used to identify predictors of high estimated 10-year stroke risk based on the risk score of the Japan Public Health Center Study, which estimates the projected 10-year risk of incident stroke. Educational level, low personal income, occupation, geographic area, alcohol consumption, and hypercholesterolemia were significantly associated with high estimated 10-year stroke risk. Among these factors, the unemployed/housework class had the highest odds ratio (OR, 3.75; 95% confidence interval [CI], 2.47-5.69), followed by the illiterate class (OR, 2.30; 95% CI, 1.44-3.66). Among the risk factors comprising the risk score, the greatest impact as a stroke risk factor corresponded to age, followed by male sex, diabetes mellitus, systolic blood pressure, and current smoking. Socioeconomic status, in particular the unemployed/housework and illiterate classes, might be a good proxy for identifying individuals at higher risk of stroke. The most powerful risk factors were older age, male sex, diabetes mellitus, systolic blood pressure, and current smoking. Copyright © 2014 National Stroke Association. Published by Elsevier Inc. All rights reserved.

  9. The evaluation of the acute physiology and chronic health evaluation II score, poisoning severity score, and sequential organ failure assessment score combined with lactate to assess the prognosis of patients with acute organophosphate pesticide poisoning.

    PubMed

    Yuan, Shaoxin; Gao, Yusong; Ji, Wenqing; Song, Junshuai; Mei, Xue

    2018-05-01

    The aim of this study was to assess the ability of the acute physiology and chronic health evaluation II (APACHE II) score, poisoning severity score (PSS), and sequential organ failure assessment (SOFA) score, each combined with lactate (Lac), to predict mortality in Emergency Department (ED) patients poisoned with organophosphate. A retrospective review of 59 standards-compliant patients was carried out. Receiver operating characteristic (ROC) curves were constructed for the APACHE II score, PSS, and SOFA score with and without Lac, and the areas under the ROC curves (AUCs) were determined to assess predictive value. According to the SOFA-Lac (a combination of SOFA and Lac) classification standard, acute organophosphate pesticide poisoning (AOPP) patients were divided into low-risk and high-risk groups, and mortality rates were compared between risk levels. Between survivors and non-survivors, there were significant differences in the APACHE II score, PSS, SOFA score, and Lac (all P < .05). The AUCs of the APACHE II score, PSS, and SOFA score were 0.876, 0.811, and 0.837, respectively; after combining with Lac, the AUCs were 0.922, 0.878, and 0.956, respectively. According to SOFA-Lac, mortality in the high-risk group was significantly higher than in the low-risk group (P < .05), and all non-survivors were in the high-risk group. These data suggest that the APACHE II score, PSS, and SOFA score can all predict the prognosis of AOPP patients. For its simplicity and objectivity, the SOFA score is a superior predictor. Lac significantly improved the predictive abilities of the 3 scoring systems, especially the SOFA score. The SOFA-Lac system effectively distinguished the high-risk group from the low-risk group and is therefore significantly better at predicting mortality in AOPP patients.
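
    The AUC comparisons above rest on the concordance interpretation of the ROC area: the probability that a randomly chosen non-survivor scores higher than a randomly chosen survivor. A minimal sketch with invented scores (not the study data):

```python
# Concordance-based AUC (Mann-Whitney U interpretation): the fraction of
# (non-survivor, survivor) pairs ranked correctly, ties counted as half.
# All scores below are invented for illustration.

def auc(scores_pos, scores_neg):
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical SOFA scores, then SOFA augmented with a lactate term: the
# combined score separates the two groups more cleanly, so its AUC is higher.
sofa_dead,     sofa_alive     = [9, 10, 7, 12],   [4, 8, 5, 9, 3]
sofa_lac_dead, sofa_lac_alive = [14, 15, 12, 18], [4, 9, 6, 10, 3]

a_sofa = auc(sofa_dead, sofa_alive)
a_sofa_lac = auc(sofa_lac_dead, sofa_lac_alive)
```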

  10. Development and Validation of a Risk Scoring System for Severe Acute Lower Gastrointestinal Bleeding.

    PubMed

    Aoki, Tomonori; Nagata, Naoyoshi; Shimbo, Takuro; Niikura, Ryota; Sakurai, Toshiyuki; Moriyasu, Shiori; Okubo, Hidetaka; Sekine, Katsunori; Watanabe, Kazuhiro; Yokoi, Chizu; Yanase, Mikio; Akiyama, Junichi; Mizokami, Masashi; Uemura, Naomi

    2016-11-01

    We aimed to develop and validate a risk scoring system to determine the risk of severe lower gastrointestinal bleeding (LGIB) and predict patient outcomes. We first performed a retrospective analysis of data from 439 patients emergently hospitalized for acute LGIB at the National Center for Global Health and Medicine in Japan, from January 2009 through December 2013. We used data on comorbidities, medications, presenting symptoms, vital signs, and laboratory test results to develop a scoring system for severe LGIB (defined as continuous and/or recurrent bleeding). We validated the risk score in a prospective study of 161 patients with acute LGIB admitted to the same center from April 2014 through April 2015. We assessed the system's accuracy in predicting patient outcome using area under the receiver operating characteristic curve (AUC) analysis. All patients underwent colonoscopy. In the first study, 29% of the patients developed severe LGIB. We devised a risk scoring system based on nonsteroidal anti-inflammatory drug use, no diarrhea, no abdominal tenderness, blood pressure of 100 mm Hg or lower, antiplatelet drug use, albumin level less than 3.0 g/dL, disease scores of 2 or higher, and syncope (NOBLADS), all of which were independent correlates of severe LGIB. Severe LGIB developed in 75.7% of patients with scores of 5 or higher compared with 2% of patients without any of the factors correlated with severe LGIB (P < .001). The NOBLADS score determined the severity of LGIB with an AUC value of 0.77. In the validation (second) study, severe LGIB developed in 35% of patients; the NOBLADS score predicted the severity of LGIB with an AUC value of 0.76. Higher NOBLADS scores were associated with a requirement for blood transfusion, longer hospital stay, and intervention (P < .05 for trend). We developed and validated a scoring system for risk of severe LGIB based on 8 factors (NOBLADS score). 
The system also determined the risk for blood transfusion, longer hospital stay, and intervention. It might be used in decision making regarding intervention and management. Copyright © 2016 AGA Institute. Published by Elsevier Inc. All rights reserved.
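
    As described, NOBLADS is a count of eight risk factors, with a score of 5 or higher flagging high risk of severe LGIB. Below is a minimal sketch of how such a count-based score could be applied; the field names are our paraphrases of the abstract's factors, not the published instrument's exact definitions.

```python
# Toy application of a count-based score like NOBLADS: one point per factor
# present. Field names are paraphrased from the abstract, not the instrument.

NOBLADS_FACTORS = [
    "nsaid_use",           # nonsteroidal anti-inflammatory drug use
    "no_diarrhea",         # absence of diarrhea
    "no_tenderness",       # no abdominal tenderness
    "bp_le_100",           # blood pressure <= 100 mm Hg
    "antiplatelet_use",    # antiplatelet drug use
    "albumin_lt_3",        # albumin < 3.0 g/dL
    "disease_score_ge_2",  # disease (comorbidity) score >= 2
    "syncope",             # syncope
]

def noblads_score(patient):
    """Sum one point for each factor flagged True in the patient record."""
    return sum(1 for f in NOBLADS_FACTORS if patient.get(f, False))

patient = {"nsaid_use": True, "bp_le_100": True, "albumin_lt_3": True,
           "syncope": True, "antiplatelet_use": True}
score = noblads_score(patient)
high_risk = score >= 5   # abstract: scores of 5 or higher predicted severe LGIB
```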

  11. A Comparison of the Updated Diamond-Forrester, CAD Consortium, and CONFIRM History-Based Risk Scores for Predicting Obstructive Coronary Artery Disease in Patients With Stable Chest Pain: The SCOT-HEART Coronary CTA Cohort.

    PubMed

    Baskaran, Lohendran; Danad, Ibrahim; Gransar, Heidi; Ó Hartaigh, Bríain; Schulman-Marcus, Joshua; Lin, Fay Y; Peña, Jessica M; Hunter, Amanda; Newby, David E; Adamson, Philip D; Min, James K

    2018-04-13

    This study sought to compare the performance of history-based risk scores in predicting obstructive coronary artery disease (CAD) among patients with stable chest pain from the SCOT-HEART study. Risk scores for estimating pre-test probability of CAD are derived from referral-based populations with a high prevalence of disease. The generalizability of these scores to lower prevalence populations in the initial patient encounter for chest pain is uncertain. We compared 3 scores among patients with suspected CAD in the coronary computed tomographic angiography (CTA) randomized arm of the SCOT-HEART study for the outcome of obstructive CAD by coronary CTA: the updated Diamond-Forrester score (UDF), CAD Consortium clinical score (CAD2), and CONFIRM risk score (CRS). We tested calibration with goodness-of-fit, discrimination with area under the receiver-operating curve (AUC), and reclassification with net reclassification improvement (NRI) to identify low-risk patients. In 1,738 patients (58 ± 10 years and 44.0% women), overall calibration was best for UDF, with underestimation by CRS and CAD2. Discrimination by AUC was higher for CAD2 (0.79; 95% confidence interval [CI]: 0.77 to 0.81) than for UDF (0.77; 95% CI: 0.74 to 0.79) or CRS (0.75; 95% CI: 0.73 to 0.77) (p < 0.001 for both comparisons). Reclassification of low-risk patients at the 10% probability threshold was best for CAD2 (NRI 0.31, 95% CI: 0.27 to 0.35), followed by CRS (NRI 0.21, 95% CI: 0.17 to 0.25), compared with UDF (p < 0.001 for all comparisons), with a consistent trend at the 15% threshold. In this multicenter clinic-based cohort of patients with suspected CAD and uniform CAD evaluation by coronary CTA, CAD2 provided the best discrimination and classification, despite underestimation of obstructive CAD as evaluated by coronary CTA. CRS exhibited intermediate performance, followed by UDF, for discrimination and reclassification. Copyright © 2018. Published by Elsevier Inc.
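
    The reclassification metric used above can be illustrated with a two-category net reclassification improvement (NRI) at a single probability threshold. The probabilities below are invented toy data, not SCOT-HEART results.

```python
# Two-category NRI at one threshold:
#   NRI = (up_events - down_events) / n_events
#       + (down_nonevents - up_nonevents) / n_nonevents,
# where "up" means the new model moved a subject above the threshold.
# All probabilities below are invented for illustration.

def nri(p_old, p_new, events, threshold=0.10):
    up_e = down_e = up_n = down_n = 0
    n_e = sum(events)
    n_n = len(events) - n_e
    for old, new, y in zip(p_old, p_new, events):
        moved_up = old < threshold <= new
        moved_down = new < threshold <= old
        if y == 1:
            up_e += moved_up
            down_e += moved_down
        else:
            up_n += moved_up
            down_n += moved_down
    return (up_e - down_e) / n_e + (down_n - up_n) / n_n

p_old_model = [0.05, 0.08, 0.20, 0.12, 0.30, 0.04]  # e.g. a UDF-like score
p_new_model = [0.12, 0.04, 0.25, 0.06, 0.35, 0.03]  # e.g. a CAD2-like score
events      = [1,    0,    1,    0,    1,    0]
value = nri(p_old_model, p_new_model, events, threshold=0.10)
```

    Here one event is correctly moved above the threshold and one non-event correctly below it, so both terms are positive.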

  12. The East London glaucoma prediction score: web-based validation of glaucoma risk screening tool

    PubMed Central

    Stephen, Cook; Benjamin, Longo-Mbenza

    2013-01-01

    AIM It is difficult for optometrists and general practitioners to know which patients are at risk of glaucoma. The East London glaucoma prediction score (ELGPS) is a web-based risk calculator developed to determine glaucoma risk at the time of screening. Multiple risk factors that are available in a low-tech environment are assessed to provide a risk assessment. This is extremely useful in settings where access to specialist care is difficult. Use of the calculator is educational, it is a free web-based service, and data capture is user specific. METHOD The scoring system is a web-based questionnaire that captures and subsequently calculates the relative risk of the presence of glaucoma at the time of screening. Three categories of patient are described: unlikely to have glaucoma, glaucoma suspect, and glaucoma. A case review methodology of patients with known diagnoses is employed to validate the calculator's risk assessment. RESULTS Data from the records of 400 patients with an established diagnosis have been captured and used to validate the screening tool. The website reports that the calculated diagnosis correlates with the actual diagnosis 82% of the time. Biostatistical analysis showed: sensitivity = 88%; positive predictive value = 97%; specificity = 75%. CONCLUSION Analysis of the first 400 patients validates the web-based screening tool as a good method of screening the at-risk population. The validation is ongoing, and the web-based format will allow more widespread recruitment across different geographic, population and personnel variables. PMID:23550097
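
    The validation statistics reported here (sensitivity, specificity, predictive values, accuracy) all derive from a single 2×2 confusion matrix. A sketch with invented counts, chosen only to land near the abstract's sensitivity and specificity:

```python
# Screening-tool validation arithmetic from a 2x2 confusion matrix.
# The counts below are invented, not the 400-patient East London data.

def screening_metrics(tp, fp, tn, fn):
    """tp/fp/tn/fn: true/false positives and negatives from screening."""
    return {
        "sensitivity": tp / (tp + fn),   # cases correctly flagged
        "specificity": tn / (tn + fp),   # non-cases correctly cleared
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

m = screening_metrics(tp=88, fp=3, tn=9, fn=12)
```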

  13. Paediatric nutrition risk scores in clinical practice: children with inflammatory bowel disease.

    PubMed

    Wiskin, A E; Owens, D R; Cornelius, V R; Wootton, S A; Beattie, R M

    2012-08-01

    There has been increasing interest in the use of nutrition risk assessment tools in paediatrics to identify those who need nutrition support. Four non-disease specific screening tools have been developed, although there is a paucity of data on their application in clinical practice and the degree of inter-tool agreement. The concurrent validity of four nutrition screening tools [Screening Tool for the Assessment of Malnutrition in Paediatrics (STAMP), Screening Tool for Risk On Nutritional status and Growth (STRONGkids), Paediatric Yorkhill Malnutrition Score (PYMS) and Simple Paediatric Nutrition Risk Score (PNRS)] was examined in 46 children with inflammatory bowel disease. Degree of malnutrition was determined by anthropometry alone using World Health Organization International Classification of Diseases (ICD-10) criteria. There was good agreement between STAMP, STRONGkids and PNRS (kappa > 0.6) but only modest agreement between PYMS and the other scores (kappa = 0.3). No children scored low risk with STAMP, STRONGkids or PNRS; however, 23 children scored low risk with PYMS. There was no agreement between the risk tools and the degree of malnutrition based on anthropometric data (kappa < 0.1). Three children had anthropometry consistent with malnutrition, and all were scored as high risk. Four children had body mass index SD scores < -2, one of whom was scored at low nutrition risk. The relevance of nutrition screening tools for children with chronic disease is unclear. In addition, there is the potential to under-recognise nutritional impairment (and therefore nutritional risk) in children with inflammatory bowel disease. © 2012 The Authors. Journal of Human Nutrition and Dietetics © 2012 The British Dietetic Association Ltd.
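
    The inter-tool agreement figures above (kappa > 0.6, kappa = 0.3) are Cohen's kappa: observed agreement corrected for the agreement expected by chance. A toy sketch with invented binary risk classifications, not the study's ratings:

```python
# Cohen's kappa for agreement between two screening tools rating the same
# children (1 = at nutritional risk). Ratings below are invented toy data.

def cohens_kappa(a, b):
    """(observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(a)
    observed = sum(1 for x, y in zip(a, b) if x == y) / n
    categories = set(a) | set(b)
    chance = sum((a.count(c) / n) * (b.count(c) / n) for c in categories)
    return (observed - chance) / (1 - chance)

tool_a = [1, 1, 1, 0, 0, 1, 0, 1]   # e.g. STAMP-like ratings (hypothetical)
tool_b = [1, 1, 0, 0, 0, 1, 0, 1]   # e.g. STRONGkids-like ratings
kappa = cohens_kappa(tool_a, tool_b)
```

    Seven of eight ratings agree here, but half that agreement is expected by chance, so kappa lands at 0.75 rather than 0.875.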

  14. Patients' self-interested preferences: empirical evidence from a priority setting experiment.

    PubMed

    Alvarez, Begoña; Rodríguez-Míguez, Eva

    2011-04-01

    This paper explores whether patients act according to self-interest in priority setting experiments. The analysis is based on a ranking experiment, conducted in Galicia (Spain), to elicit preferences regarding the prioritization of patients on a waiting list for an elective surgical intervention (prostatectomy for benign prostatic hyperplasia). Participants were patients awaiting a similar intervention and members of the general population. All of them were asked to rank hypothetical patients on a waiting list. A rank-ordered logit was then applied to their responses in order to obtain a prioritization scoring system. Using these estimations, we first test for differences in preferences between patients and the general population. Second, we implement a procedure based on the similarity between respondents (true patients) and the hypothetical scenarios they evaluate (hypothetical patients) to analyze whether patients provide self-interested rankings. Our results show that patient preferences differ significantly from general population preferences. The findings also indicate that, when patients rank the hypothetical scenarios on the waiting list, they consider not only the explicit attributes but also the similarity of each scenario to their own situation. In particular, they assign a higher priority to scenarios that more closely match their own states. We also find that such a preference structure increases their likelihood of reporting "irrational" answers. Copyright © 2011 Elsevier Ltd. All rights reserved.

  15. Fall prevention: is the STRATIFY tool the right instrument in Italian Hospital inpatient? A retrospective observational study.

    PubMed

    Castellini, Greta; Demarchi, Antonia; Lanzoni, Monica; Castaldi, Silvana

    2017-09-15

    Although several risk assessment tools are in use, uncertainty about their accuracy in detecting fall risk exists. Choosing the most accurate tool for hospital inpatients is still a challenge for organizations. We aimed to retrospectively assess the appropriateness of a fall risk prevention program that used the STRATIFY assessment tool to detect acute-care inpatient fall risk. The number of falls and near falls occurring from January 2014 to March 2015 was collected through the incident-reporting web system implemented on the hospital's intranet. We recorded whether fall risk had been assessed with the STRATIFY tool and, if so, what the judgement was. The primary outcomes were the proportion of fallers who had been identified as at high risk of falling (True Positive Rate) and the proportion of fallers who had been identified as at low risk yet fell nonetheless (False Negative Rate). Characteristics of the population and of fall events were described for the low-risk and high-risk inpatient subgroups. We collected 365 incident reports from 40 hospital units; 349 (95.6%) were real falls and 16 (4.4%) were near falls. The fall risk assessment score at admission had been reported in 289 (79%) of the incident reports; thus, 74 (20.3%) fallers had not been assessed with the STRATIFY, even though the majority of them presented risks for which assessment is recommended. The True Positive Rate was 35.6% (n = 101, 95% CI 30%-41.1%). The False Negative Rate was 64.4% (n = 183, 95% CI 58.9%-70%): these patients had been judged at low risk but fell nevertheless. The STRATIFY mean score was 1.3 ± 1.4; the median was 1 (IQR 0-2). The prevention program using only the STRATIFY tool was found to be inadequate for screening our inpatient population. Incorrect identification of patients' needs leads to allocating resources to erroneous priorities and to untargeted interventions, decreasing healthcare performance and quality.

  16. Identifying Priority Areas for Conservation: A Global Assessment for Forest-Dependent Birds

    PubMed Central

    Buchanan, Graeme M.; Donald, Paul F.; Butchart, Stuart H. M.

    2011-01-01

    Limited resources are available to address the world's growing environmental problems, requiring conservationists to identify priority sites for action. Using new distribution maps for all of the world's forest-dependent birds (60.6% of all bird species), we quantify the contribution of remaining forest to conserving global avian biodiversity. For each of the world's partly or wholly forested 5-km cells, we estimated an impact score measuring the cell's contribution to the distributions of all forest bird species estimated to occur within it; the score is therefore proportional to the impact that loss of the cell's forest would have on the conservation status of the world's forest-dependent birds. The distribution of scores was highly skewed, with a very small proportion of cells having scores several orders of magnitude above the global mean. Ecoregions containing the highest values of this score included relatively species-poor islands such as Hawaii and Palau, the relatively species-rich islands of Indonesia and the Philippines, and the megadiverse Atlantic Forests and northern Andes of South America. Ecoregions with high impact scores and high deforestation rates (2000–2005) included montane forests in Cameroon and the Eastern Arc of Tanzania, although deforestation data were not available for all ecoregions. Ecoregions with high impact scores, high rates of recent deforestation and low coverage by the protected area network included Indonesia's Seram rain forests and the moist forests of Trinidad and Tobago. Key sites in these ecoregions represent some of the most urgent priorities for expansion of the global protected areas network to meet Convention on Biological Diversity targets to increase the proportion of land formally protected to 17% by 2020. 
Areas with high impact scores, rapid deforestation, low protection and high carbon storage values may represent significant opportunities for both biodiversity conservation and climate change mitigation, for example through Reducing Emissions from Deforestation and Forest Degradation (REDD+) initiatives. PMID:22205998

  17. Functional Movement Screen: Pain versus composite score and injury risk.

    PubMed

    Alemany, Joseph A; Bushman, Timothy T; Grier, Tyson; Anderson, Morgan K; Canham-Chervak, Michelle; North, William J; Jones, Bruce H

    2017-11-01

    The Functional Movement Screen (FMS™), a battery of seven movements, has been used as a screening tool to determine musculoskeletal injury risk using composite scores based on movement quality and/or pain. However, no direct comparisons between movement quality and pain have been quantified. Retrospective injury data analysis. Male Soldiers (n = 2154; 25.0 ± 1.3 years; 26.2 ± 0.7 kg/m2) completed the FMS (each movement scored from 0 points (pain) to 3 points (no pain and perfect movement quality)), and injury data were collected over the following six months. Sensitivity, specificity, receiver operator characteristics and positive and negative predictive values were calculated for pain occurrence and low (≤14 points) composite score. Risk, risk ratios (RR) and 95% confidence intervals were calculated for injury risk. Pain was associated with slightly higher injury risk (RR = 1.62) than a composite score of ≤14 points (RR = 1.58). When comparing injury risk between those who scored a 1, 2 or 3 on each individual movement, no differences were found (except for the deep squat). However, Soldiers who experienced pain on any movement had a greater injury risk than those who scored 3 points for that movement (p < 0.05). Injury risk increased progressively with the number of movements on which pain occurred (p < 0.01). Pain occurrence may be a stronger indicator of injury risk than a low composite score and provides a simpler method of evaluating injury risk compared with the full FMS. Published by Elsevier Ltd.
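
    A risk ratio such as RR = 1.62 is the cumulative incidence of injury in the exposed group (e.g. pain on the FMS) divided by that in the unexposed group, usually reported with a log-scale Wald confidence interval. The counts below are illustrative, chosen only to reproduce an RR of 1.62; they are not the study's actual data.

```python
# Risk ratio with a log-scale Wald 95% confidence interval.
# Counts are illustrative, not the FMS study's data.
import math

def risk_ratio(a, n1, b, n0, z=1.96):
    """a of n1 exposed were injured; b of n0 unexposed were injured.
    Returns (RR, ci_low, ci_high) using SE(log RR) = sqrt(1/a - 1/n1 + 1/b - 1/n0)."""
    rr = (a / n1) / (b / n0)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n0)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# e.g. 81 of 150 Soldiers with pain injured vs 270 of 810 without pain
rr, lo, hi = risk_ratio(81, 150, 270, 810)
```

    Since the interval excludes 1.0 in this toy example, the elevated risk would be statistically significant at the 5% level.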

  18. Reversal of Hartmann's procedure: a high-risk operation?

    PubMed

    Schmelzer, Thomas M; Mostafa, Gamal; Norton, H James; Newcomb, William L; Hope, William W; Lincourt, Amy E; Kercher, Kent W; Kuwada, Timothy S; Gersin, Keith S; Heniford, B Todd

    2007-10-01

    Patients who undergo Hartmann's procedure often do not have their colostomy closed because of the perceived risk of the operation. This study evaluated the outcome of reversal of Hartmann's procedure based on preoperative risk factors. We retrospectively reviewed adult patients who underwent reversal of Hartmann's procedure at our tertiary referral institution. Patient outcomes were compared based on identified risk factors (age >60 years, American Society of Anesthesiologists [ASA] score >2, and >2 preoperative comorbidities). One hundred thirteen patients were included. Forty-four patients (39%) had an ASA score of ≥3. The mean hospital length of stay (LOS) was 6.8 days. There were 28 (25%) postoperative complications and no mortality. Patients >60 years old had a significantly longer LOS than the rest of the group (P = .02). There were no differences in outcomes between groups based on ASA score or the presence of multiple preoperative comorbidities. An albumin level of <3.5 was the only significant predictor of postoperative complications (P = .04). Reversal of Hartmann's procedure appears to be a safe operation with acceptable morbidity rates and can be considered in patients, including those with significant operative risk factors.

  19. Competing priorities in treatment decision-making: a US national survey of individuals with depression and clinicians who treat depression.

    PubMed

    Barr, Paul J; Forcino, Rachel C; Mishra, Manish; Blitzer, Rachel; Elwyn, Glyn

    2016-01-08

    To identify information priorities for consumers and clinicians making depression treatment decisions and assess shared decision-making (SDM) in routine depression care. 20 questions related to common features of depression treatments were provided. Participants were initially asked to select which features were important, and in a second stage they were asked to rank their top 5 'important features' in order of importance. Clinicians were asked to provide rankings according to both consumer and clinician perspectives. Consumers completed CollaboRATE, a measure of SDM. Multiple logistic regression analysis identified consumer characteristics associated with CollaboRATE scores. Online cross-sectional surveys fielded in September to December 2014. We administered surveys to convenience samples of US adults with depression and clinicians who treat depression. Consumer sampling was targeted to reflect age, gender and educational attainment of adults with depression in the USA. Information priority rankings; CollaboRATE, a 3-item consumer-reported measure of SDM. 972 consumers and 244 clinicians completed the surveys. The highest ranked question for both consumers and clinicians was 'Will the treatment work?' Clinicians were aware of consumers' priorities, yet did not always prioritise that information themselves, particularly insurance coverage and cost of treatment. Only 18% of consumers reported high levels of SDM. Working with a psychiatrist (OR 1.87; 95% CI 1.07 to 3.26) and female gender (OR 2.04; 95% CI 1.25 to 3.34) were associated with top CollaboRATE scores. While clinicians know what information is important to consumers making depression treatment decisions, they do not always address these concerns. This mismatch, coupled with low SDM, adversely affects the quality of depression care. 
Development of a decision support intervention based on our findings can improve levels of SDM and provide clinicians and consumers with a tool to address the existing misalignment in information priorities. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  20. Management of heart failure in the new era: the role of scores.

    PubMed

    Mantegazza, Valentina; Badagliacca, Roberto; Nodari, Savina; Parati, Gianfranco; Lombardi, Carolina; Di Somma, Salvatore; Carluccio, Erberto; Dini, Frank Lloyd; Correale, Michele; Magrì, Damiano; Agostoni, Piergiuseppe

    2016-08-01

    Heart failure is a widespread syndrome involving several organs, still characterized by high mortality and morbidity, and with a clinical course that is heterogeneous and hard to predict. In this scenario, the assessment of heart failure prognosis represents a fundamental step in clinical practice. No single parameter can provide a very precise prognosis. Therefore, risk scores based on multiple parameters have been introduced, but their clinical utility is still modest. In this review, we evaluated several prognostic models for acute, right, chronic, and end-stage heart failure based on multiple parameters. In particular, for chronic heart failure we considered risk scores based essentially on clinical evaluation, comorbidity analysis, baroreflex sensitivity, heart rate variability, sleep disorders, laboratory tests, echocardiographic imaging, and cardiopulmonary exercise test parameters. What is established at present is that a single parameter is not sufficient for an accurate prediction of prognosis in heart failure because of the complex nature of the disease. However, none of the available scoring systems is widely used, being in some cases complex, not user-friendly, or based on expensive or not easily available parameters. We believe that multiparametric scores for risk assessment in heart failure are promising, but wider experience with their use is needed.

  1. Do People Taking Flu Vaccines Need Them the Most?

    PubMed Central

    Gu, Qian; Sood, Neeraj

    2011-01-01

    Background A well-targeted flu vaccine strategy can ensure that vaccines go to those who are at the highest risk of getting infected if unvaccinated. However, prior research has not explicitly examined the association between the risk of flu infection and vaccination rates. Purpose This study examines the relationship between the risk of flu infection and the probability of getting vaccinated. Methods Nationally representative data from the US and multivariate regression models were used to estimate which individual characteristics are associated with (1) the risk of flu infection when unvaccinated and (2) flu vaccination rates. These results were used to estimate the correlation between the probability of infection and the probability of getting vaccinated. Separate analyses were performed for the general population and the high-priority population that is at increased risk of flu-related complications. Results We find that the high-priority population was more likely to get vaccinated than the general population. However, within both the high-priority and general populations, the risk of flu infection when unvaccinated was negatively correlated with vaccination rates (r = −0.067, p<0.01). This negative association was stronger for the high-priority population (r = −0.361, p<0.01). Conclusions There is a poor match between those who get flu vaccines and those who have a high risk of flu infection, within both the high-priority and general populations. Targeting vaccination to people with low socioeconomic status, people engaged in unhealthy behaviors, working people, and families with kids will likely improve the effectiveness of flu vaccine policy. PMID:22164202

  2. Utility of genetic and non-genetic risk factors in predicting coronary heart disease in Singaporean Chinese.

    PubMed

    Chang, Xuling; Salim, Agus; Dorajoo, Rajkumar; Han, Yi; Khor, Chiea-Chuen; van Dam, Rob M; Yuan, Jian-Min; Koh, Woon-Puay; Liu, Jianjun; Goh, Daniel Yt; Wang, Xu; Teo, Yik-Ying; Friedlander, Yechiel; Heng, Chew-Kiat

    2017-01-01

    Background Although numerous phenotype-based equations for predicting risk of 'hard' coronary heart disease are available, data on the utility of genetic information for such risk prediction are lacking in Chinese populations. Design Case-control study nested within the Singapore Chinese Health Study. Methods A total of 1306 subjects comprising 836 men (267 incident cases and 569 controls) and 470 women (128 incident cases and 342 controls) were included. A Genetic Risk Score comprising 156 single nucleotide polymorphisms that have been robustly associated with coronary heart disease or its risk factors (p < 5 × 10⁻⁸) in at least two independent cohorts of genome-wide association studies was built. For each gender, three base models were used: the recalibrated Adult Treatment Panel III (ATP III) model (M1); the ATP III model fitted using Singapore Chinese Health Study data (M2); and M3: M2 + C-reactive protein + creatinine. Results The Genetic Risk Score was significantly associated with incident 'hard' coronary heart disease (p for men: 1.70 × 10⁻¹⁰ to 1.73 × 10⁻⁹; p for women: 0.001). The inclusion of the Genetic Risk Score in the prediction models improved discrimination in both genders (c-statistics: 0.706-0.722 vs. 0.663-0.695 from base models for men; 0.788-0.790 vs. 0.765-0.773 for women). In addition, the inclusion of the Genetic Risk Score also improved risk classification, with a net gain of cases reclassified to higher risk categories (men: 12.4%-16.5%; women: 10.2% (M3)), while not significantly reducing the classification accuracy in controls. Conclusions The Genetic Risk Score is an independent predictor of incident 'hard' coronary heart disease in our ethnic Chinese population. Inclusion of genetic factors into coronary heart disease prediction models could significantly improve risk prediction performance.
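
    A genetic risk score of this kind is typically constructed as a weighted count of risk alleles across the selected SNPs, with weights taken from published effect sizes. The abstract does not give its weighting scheme, so the SNP identifiers and weights below are purely hypothetical.

```python
# Sketch of a weighted genetic risk score (GRS): sum over SNPs of
# (per-allele weight) x (risk-allele count). SNP names and weights are
# hypothetical, not from the Singapore Chinese Health Study.

def genetic_risk_score(genotypes, weights):
    """genotypes: dict snp -> risk-allele count (0, 1 or 2);
    weights: dict snp -> per-allele weight (e.g. log odds ratio)."""
    return sum(weights[snp] * count for snp, count in genotypes.items())

weights = {"rs0001": 0.10, "rs0002": 0.25, "rs0003": 0.05}  # hypothetical
person  = {"rs0001": 2,    "rs0002": 1,    "rs0003": 0}     # allele counts
grs = genetic_risk_score(person, weights)
```

    The resulting score would then enter a regression model alongside the phenotype-based predictors, as the base models M1-M3 are extended in the abstract.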

  3. Progressive Band Selection

    NASA Technical Reports Server (NTRS)

    Fisher, Kevin; Chang, Chein-I

    2009-01-01

    Progressive band selection (PBS) reduces spectral redundancy without significant loss of information, thereby reducing hyperspectral image data volume and processing time. Used onboard a spacecraft, it can also reduce image downlink time. PBS prioritizes an image's spectral bands according to priority scores that measure their significance to a specific application. Then it uses one of three methods to select an appropriate number of the most useful bands. Key challenges for PBS include selecting an appropriate criterion to generate band priority scores, and determining how many bands should be retained in the reduced image. The image's Virtual Dimensionality (VD), once computed, is a reasonable estimate of the latter. We describe the major design details of PBS and test PBS in a land classification experiment.
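
    The selection step of PBS can be sketched as ranking bands by a priority score and keeping the top k, with k playing the role of the estimated virtual dimensionality. Here the score is per-band variance, one plausible significance criterion (our assumption, not necessarily the paper's), applied to a tiny synthetic cube.

```python
# Toy sketch of priority-score band selection: rank spectral bands by a
# score (here, per-band variance) and keep the top k. The "cube" is a tiny
# synthetic image, each band a flat list of pixel values.

def band_variance(band):
    n = len(band)
    mean = sum(band) / n
    return sum((x - mean) ** 2 for x in band) / n

def select_bands(cube, k):
    """Return indices of the k highest-variance bands, highest first."""
    ranked = sorted(range(len(cube)),
                    key=lambda i: band_variance(cube[i]),
                    reverse=True)
    return ranked[:k]

cube = [
    [1, 1, 1, 1],   # band 0: flat, carries no information
    [0, 5, 0, 5],   # band 1: high variance
    [2, 3, 2, 3],   # band 2: modest variance
]
kept = select_bands(cube, k=2)
```

    Because the ranking is computed once, the reduced image can be refined progressively: transmitting or processing bands in priority order and truncating at any k.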

  4. Priority of a Hesitant Fuzzy Linguistic Preference Relation with a Normal Distribution in Meteorological Disaster Risk Assessment.

    PubMed

    Wang, Lihong; Gong, Zaiwu

    2017-10-10

    As meteorological disaster systems are large complex systems, disaster reduction programs must be based on risk analysis. Consequently, judgment by an expert based on his or her experience (also known as qualitative evaluation) is an important link in meteorological disaster risk assessment. In some complex and non-procedural meteorological disaster risk assessments, a hesitant fuzzy linguistic preference relation (HFLPR) is often used to deal with a situation in which experts may be hesitant while providing preference information of a pairwise comparison of alternatives, that is, the degree of preference of one alternative over another. This study explores hesitation from the perspective of statistical distributions, and obtains an optimal ranking of an HFLPR based on chance-restricted programming, which provides a new approach for hesitant fuzzy optimisation of decision-making in meteorological disaster risk assessments.

  5. Climate Change Resilience Planning at the Department of Energy's Savannah River Site

    NASA Astrophysics Data System (ADS)

    Werth, D. W.; Johnson, A.

    2015-12-01

    The Savannah River National Laboratory (SRNL) is developing a site sustainability plan for the Department of Energy's Savannah River Site (SRS) in South Carolina in accordance with Executive Order 13693, which charges each DOE agency with "identifying and addressing projected impacts of climate change" and "calculating the potential cost and risk to mission associated with agency operations". The plan will comprise i) projections of climate change, ii) surveys of site managers to estimate the effects of climate change on site operations, and iii) a determination of adaptive actions. Climate change projections for SRS are obtained from multiple sources, including an online repository of downscaled global climate model (GCM) simulations of future climate and downscaled GCM simulations produced at SRNL. Taken together, we have projected data for temperature, precipitation, humidity, and wind - all variables with a strong influence on site operations. SRNL is working to engage site facility managers and facilitate a "bottom up" approach to climate change resilience planning, where the needs and priorities of stakeholders are addressed throughout the process. We make use of the Vulnerability Assessment Scoring Tool, an Excel-based program designed to accept as input various climate scenarios ('exposure'), the susceptibility of assets to climate change ('sensitivity'), and the ability of these assets to cope with climate change ('adaptive capacity'). These are combined to produce a series of scores that highlight vulnerabilities. Working with site managers, we have selected the most important assets, estimated their expected response to climate change, and prepared a report highlighting the most endangered facilities. Primary risks include increased energy consumption, decreased water availability, increased forest fire danger, natural resource degradation, and compromised outdoor worker safety in a warmer and more humid climate. 
Results of this study will aid in driving future management decisions and promoting sustainable practices at SRS.

  6. A risk score for identifying methicillin-resistant Staphylococcus aureus in patients presenting to the hospital with pneumonia

    PubMed Central

    2013-01-01

    Background Methicillin-resistant Staphylococcus aureus (MRSA) represents an important pathogen in healthcare-associated pneumonia (HCAP). The concept of HCAP, though, may not perform well as a screening test for MRSA and can lead to overuse of antibiotics. We developed a risk score to identify patients presenting to the hospital with pneumonia unlikely to have MRSA. Methods We identified patients admitted with pneumonia (Apr 2005 – Mar 2009) at 62 hospitals in the US. We only included patients with lab evidence of bacterial infection (e.g., positive respiratory secretions, blood, or pleural cultures or urinary antigen testing). We determined variables independently associated with the presence of MRSA based on logistic regression (two-thirds of cohort) and developed a risk prediction model based on these factors. We validated the model in the remaining population. Results The cohort included 5975 patients and MRSA was identified in 14%. The final risk score consisted of eight variables and a potential total score of 10. Points were assigned as follows: two for recent hospitalization or ICU admission; one each for age < 30 or > 79 years, prior IV antibiotic exposure, dementia, cerebrovascular disease, female with diabetes, or recent exposure to a nursing home/long term acute care facility/skilled nursing facility. The prevalence of MRSA rose with increasing score when scores were stratified into Low (0 to 1 points), Medium (2 to 5 points), and High (6 or more points) risk groups: when the score was 0 or 1, the prevalence of MRSA was < 10%, while it climbed to > 30% when the score was 6 or greater. Conclusions MRSA is an important cause of pneumonia in patients presenting to the hospital. This simple risk score identifies patients at low risk for MRSA, in whom anti-MRSA therapy might be withheld. PMID:23742753
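
    The published point assignments lend themselves to a direct implementation. The sketch below uses assumed field names and should not be taken as the authors' code; the published score should be consulted before any clinical use.

```python
# Sketch of the MRSA risk score's point assignments as described in the
# abstract: 2 points each for recent hospitalization and ICU admission,
# 1 point each for the remaining six variables (maximum 10 points).
# Field names are assumptions for illustration.

def mrsa_risk_score(p):
    score = 0
    score += 2 if p.get("recent_hospitalization") else 0
    score += 2 if p.get("icu_admission") else 0
    score += 1 if p.get("age", 50) < 30 or p.get("age", 50) > 79 else 0
    score += 1 if p.get("prior_iv_antibiotics") else 0
    score += 1 if p.get("dementia") else 0
    score += 1 if p.get("cerebrovascular_disease") else 0
    score += 1 if p.get("female") and p.get("diabetes") else 0
    score += 1 if p.get("nursing_facility_exposure") else 0
    return score

def risk_stratum(score):
    """Low (0-1), Medium (2-5), or High (6+), per the abstract."""
    if score <= 1:
        return "Low"
    return "Medium" if score <= 5 else "High"

example = {"recent_hospitalization": True, "age": 82, "dementia": True}
s = mrsa_risk_score(example)   # 2 (hospitalization) + 1 (age > 79) + 1 (dementia)
```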

  7. Developing risk-based priorities for reducing air pollution in urban settings in Ukraine.

    PubMed

    Brody, Michael; Caldwell, Jane; Golub, Alexander

    2007-02-01

    Ukraine, when part of the former Soviet Union, was responsible for about 25% of its overall industrial production. This aging industrial infrastructure continues to emit enormous volumes of air and water pollution and wastes. The National Report on the State of Environment in Ukraine 1999 (Ukraine Ministry of Environmental Protection [MEP], 2000) shows significant air pollution. There are numerous emissions that have been associated with developmental effects, chronic long-term health effects, and cancer. Ukraine also has been identified as a major source of transboundary air pollution for the eastern Mediterranean region. Ukraine's Environment Ministry is not currently able to strategically target high-priority emissions and lacks the resources to address all these problems. For these reasons, the U.S. Environmental Protection Agency set up a partnership with Ukraine's Ministry of Environmental Protection to strengthen its capacity to set environmental priorities through the use of comparative environmental risk assessment and economic analysis--the Capacity Building Project. The project is also addressing improvements in the efficiency and effectiveness of the use of its National Environmental Protection Fund. The project consists of a series of workshops with Ukrainian MEP officials in comparative risk assessment of air pollutant emissions in several heavily industrialized oblasts; cost-benefit and cost-effectiveness analysis; and environmental finance. Pilot risk assessment analyses have been completed. At the end of the Capacity Building Project it is expected that the use of the National Environmental Protection fund and the regional level oblast environmental protection funds will begin to target and identify the highest health and environmental risk emissions.

  8. Risk stratification on the basis of Deauville score on PET-CT and the presence of Epstein-Barr virus DNA after completion of primary treatment for extranodal natural killer/T-cell lymphoma, nasal type: a multicentre, retrospective analysis.

    PubMed

    Kim, Seok Jin; Choi, Joon Young; Hyun, Seung Hyup; Ki, Chang-Seok; Oh, Dongryul; Ahn, Yong Chan; Ko, Young Hyeh; Choi, Sunkyu; Jung, Sin-Ho; Khong, Pek-Lan; Tang, Tiffany; Yan, Xuexian; Lim, Soon Thye; Kwong, Yok-Lam; Kim, Won Seog

    2015-02-01

    Assessment of tumour viability after treatment is essential for prediction of treatment failure in patients with extranodal natural killer/T-cell lymphoma (ENKTL). We aimed to assess the use of the post-treatment Deauville score on PET-CT and Epstein-Barr virus DNA as a predictor of residual tumour, to establish the risk of treatment failure in patients with newly diagnosed ENKTL. In a retrospective analysis of patient data we assessed the prognostic relevance of the Deauville score (five-point scale) on PET-CT and circulating Epstein-Barr virus DNA after completion of treatment in consecutive patients with ENKTL who met eligibility criteria (newly diagnosed and received non-anthracycline-based chemotherapy, concurrent chemoradiotherapy, or both together) diagnosed at the Samsung Medical Center in Seoul, South Korea. The primary aim was to assess the association between progression-free survival and risk stratification based on post-treatment Deauville score and Epstein-Barr virus DNA. With an independent cohort from two different hospitals (Hong Kong and Singapore), we validated the prognostic value of our risk model. We included 102 patients diagnosed with ENKTL between Jan 6, 2005, and Nov 18, 2013, in the study cohort, and 38 patients diagnosed with ENKTL between Jan 7, 2009, and June 27, 2013, in the validation cohort. In the study cohort after a median follow-up of 47·2 months (IQR 30·0-65·5), 45 (44%) patients had treatment failure and 33 (32%) had died. Post-treatment Deauville score and Epstein-Barr virus DNA positivity were independently associated with progression-free and overall survival in the multivariable analysis (for post-treatment Deauville score of 3-4, progression-free survival hazard ratio [HR] 3·607, 95% CI 1·772-7·341, univariable p<0·0001; for post-treatment Epstein-Barr virus DNA positivity, progression-free survival HR 3·595, 95% CI 1·598-8·089, univariable p<0·0001). 
We stratified patients into three groups based on risk of treatment failure: a low-risk group (post-treatment Epstein-Barr virus negativity and a post-treatment Deauville score of 1-2), a high-risk group (post-treatment Epstein-Barr virus negativity with a Deauville score of 3-4, or post-treatment Epstein-Barr virus positivity with a Deauville score of 1-2), and a treatment-failure group (a Deauville score of 5, or post-treatment Epstein-Barr virus positivity with a Deauville score of 3-4). This risk model showed a significant association with progression-free survival (for low risk vs high risk, HR 7·761, 95% CI 2·592-23·233, p<0·0001; for low risk vs failure, HR 18·546, 95% CI 5·997-57·353, p<0·0001). The validation cohort showed the same associations (for low risk vs high risk, HR 22·909, 95% CI 2·850-184·162, p=0·003; for low risk vs failure, HR 50·652, 95% CI 6·114-419·610, p<0·0001). Post-treatment Deauville score on PET-CT scan and the presence of Epstein-Barr virus DNA can predict the risk of treatment failure in patients with ENKTL. Our results might be able to help guide clinical practice. Samsung Biomedical Research Institute. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Adherence index based on the AHA 2006 diet and lifestyle recommendations is associated with select cardiovascular disease risk factors in older Puerto Ricans.

    PubMed

    Bhupathiraju, Shilpa N; Lichtenstein, Alice H; Dawson-Hughes, Bess; Tucker, Katherine L

    2011-03-01

    In 2006, the AHA released diet and lifestyle recommendations (AHA-DLR) for cardiovascular disease (CVD) risk reduction. The effect of adherence to these recommendations on CVD risk is unknown. Our objective was to develop a unique diet and lifestyle score based on the AHA-DLR and to evaluate this score in relation to available CVD risk factors. In a cross-sectional study of Puerto Rican adults aged 45-75 y living in the greater Boston area, information was available for the following variables: diet (semiquantitative FFQ), blood pressure, waist circumference (WC), 10-y risk of coronary heart disease (CHD) (Framingham risk score), and fasting plasma lipids, serum glucose, insulin, and C-reactive protein (CRP) concentrations. We developed a diet and lifestyle score (AHA-DLS) based on the AHA-DLR. The AHA-DLS had both internal consistency and content validity. It was associated with plasma HDL cholesterol (P = 0.001), serum insulin (P = 0.0003), and CRP concentrations (P = 0.02), WC (P < 0.0001), and 10-y risk of CHD score (P = 0.01 in women). The AHA-DLS was inversely associated with serum glucose among those with a BMI < 25 (P = 0.01). Women and men in the highest quartile of the AHA-DLS had lower serum insulin (P-trend = 0.0003) and CRP concentrations (P-trend = 0.002), WC (P-trend = 0.0003), and higher HDL cholesterol (P-trend = 0.008). The AHA-DLS is a useful tool to measure adherence to the AHA-DLR and may be used to examine associations between diet and lifestyle behaviors and CVD risk.

  10. Adherence Index Based on the AHA 2006 Diet and Lifestyle Recommendations Is Associated with Select Cardiovascular Disease Risk Factors in Older Puerto Ricans123

    PubMed Central

    Bhupathiraju, Shilpa N.; Lichtenstein, Alice H.; Dawson-Hughes, Bess; Tucker, Katherine L.

    2011-01-01

    In 2006, the AHA released diet and lifestyle recommendations (AHA-DLR) for cardiovascular disease (CVD) risk reduction. The effect of adherence to these recommendations on CVD risk is unknown. Our objective was to develop a unique diet and lifestyle score based on the AHA-DLR and to evaluate this score in relation to available CVD risk factors. In a cross-sectional study of Puerto Rican adults aged 45–75 y living in the greater Boston area, information was available for the following variables: diet (semiquantitative FFQ), blood pressure, waist circumference (WC), 10-y risk of coronary heart disease (CHD) (Framingham risk score), and fasting plasma lipids, serum glucose, insulin, and C-reactive protein (CRP) concentrations. We developed a diet and lifestyle score (AHA-DLS) based on the AHA-DLR. The AHA-DLS had both internal consistency and content validity. It was associated with plasma HDL cholesterol (P = 0.001), serum insulin (P = 0.0003), and CRP concentrations (P = 0.02), WC (P < 0.0001), and 10-y risk of CHD score (P = 0.01 in women). The AHA-DLS was inversely associated with serum glucose among those with a BMI < 25 (P = 0.01). Women and men in the highest quartile of the AHA-DLS had lower serum insulin (P-trend = 0.0003) and CRP concentrations (P-trend = 0.002), WC (P-trend = 0.0003), and higher HDL cholesterol (P-trend = 0.008). The AHA-DLS is a useful tool to measure adherence to the AHA-DLR and may be used to examine associations between diet and lifestyle behaviors and CVD risk. PMID:21270369

  11. Low-carbohydrate diet and type 2 diabetes risk in Japanese men and women: the Japan Public Health Center-Based Prospective Study.

    PubMed

    Nanri, Akiko; Mizoue, Tetsuya; Kurotani, Kayo; Goto, Atsushi; Oba, Shino; Noda, Mitsuhiko; Sawada, Norie; Tsugane, Shoichiro

    2015-01-01

    Evidence is sparse and contradictory regarding the association between low-carbohydrate diet score and type 2 diabetes risk, and no prospective study has examined the association among Asians, who consume a greater amount of carbohydrate. We prospectively investigated the association of low-carbohydrate diet score with type 2 diabetes risk. Participants were 27,799 men and 36,875 women aged 45-75 years who participated in the second survey of the Japan Public Health Center-Based Prospective Study and who had no history of diabetes. Dietary intake was ascertained by using a validated food-frequency questionnaire, and low-carbohydrate diet score was calculated from total carbohydrate, fat, and protein intake. Scores for high animal protein and fat and for high plant protein and fat were also calculated. Odds ratios of self-reported, physician-diagnosed type 2 diabetes over 5 years were estimated by using logistic regression. During the 5-year period, 1191 new cases of type 2 diabetes were self-reported. The low-carbohydrate diet score for high total protein and fat was significantly associated with a decreased risk of type 2 diabetes in women (P for trend <0.001); the multivariable-adjusted odds ratio of type 2 diabetes for the highest quintile of the score was 0.63 (95% confidence interval 0.46-0.84), compared with that for the lowest quintile. Additional adjustment for dietary glycemic load attenuated the association (odds ratio 0.75, 95% confidence interval 0.45-1.25). When the score was calculated separately for animal and for plant protein and fat, the score for high animal protein and fat was inversely associated with type 2 diabetes in women, whereas the score for high plant protein and fat was not associated with risk in either men or women. A low-carbohydrate diet was associated with a decreased risk of type 2 diabetes in Japanese women, and this association may be partly attributable to high intake of white rice. The association for animal-based and plant-based low-carbohydrate diets warrants further investigation.

  12. Evolving biomarkers improve prediction of long-term mortality in patients with stable coronary artery disease: the BIO-VILCAD score.

    PubMed

    Kleber, M E; Goliasch, G; Grammer, T B; Pilz, S; Tomaschitz, A; Silbernagel, G; Maurer, G; März, W; Niessner, A

    2014-08-01

    Algorithms to predict the future long-term risk of patients with stable coronary artery disease (CAD) are rare. The VIenna and Ludwigshafen CAD (VILCAD) risk score was one of the first scores specifically tailored for this clinically important patient population. The aim of this study was to refine risk prediction in stable CAD by creating a new prediction model encompassing various pathophysiological pathways. Therefore, we assessed the predictive power of 135 novel biomarkers for long-term mortality in patients with stable CAD. We included 1275 patients with stable CAD from the LUdwigshafen RIsk and Cardiovascular health study with a median follow-up of 9.8 years to investigate whether the predictive power of the VILCAD score could be improved by the addition of novel biomarkers. Additional biomarkers were selected in a bootstrapping procedure based on Cox regression to determine the most informative predictors of mortality. The final multivariable model encompassed nine clinical and biochemical markers: age, sex, left ventricular ejection fraction (LVEF), heart rate, N-terminal pro-brain natriuretic peptide, cystatin C, renin, 25OH-vitamin D3 and haemoglobin A1c. The extended VILCAD biomarker score achieved a significantly improved C-statistic (0.78 vs. 0.73; P = 0.035) and net reclassification index (14.9%; P < 0.001) compared to the original VILCAD score. Omitting LVEF, which might not be readily measurable in clinical practice, slightly reduced the accuracy of the new BIO-VILCAD score but still significantly improved risk classification (net reclassification improvement 12.5%; P < 0.001). The VILCAD biomarker score based on routine parameters complemented by novel biomarkers outperforms previous risk algorithms and allows more accurate classification of patients with stable CAD, enabling physicians to choose more personalized treatment regimens for their patients.

  13. Prioritizing Sites for Protection and Restoration for Grizzly Bears (Ursus arctos) in Southwestern Alberta, Canada.

    PubMed

    Braid, Andrew C R; Nielsen, Scott E

    2015-01-01

    As the influence of human activities on natural systems continues to expand, there is a growing need to prioritize not only pristine sites for protection, but also degraded sites for restoration. We present an approach for simultaneously prioritizing sites for protection and restoration that considers landscape patterns for a threatened population of grizzly bears (Ursus arctos) in southwestern Alberta, Canada. We considered tradeoffs between bottom-up (food resource supply) and top-down (mortality risk from roads) factors affecting seasonal habitat quality for bears. Simulated annealing was used to prioritize source-like sites (high habitat productivity, low mortality risk) for protection, as well as sink-like sites (high habitat productivity, high mortality risk) for restoration. Priority source-like habitats identified key conservation areas where future developments should be limited, whereas priority sink-like habitats identified key areas for mitigating road-related mortality risk with access management. Systematic conservation planning methods can be used to complement traditional habitat-based methods for individual focal species by identifying habitats where conservation actions (both protection and restoration) have the highest potential utility.

  14. Vulnerability in Determining the Cost of Information System Project to Avoid Losses

    NASA Astrophysics Data System (ADS)

    Haryono, Kholid; Ikhsani, Zulfa Amalia

    2018-03-01

    Context: This study discusses the prioritization of the cost variables involved in software development projects. Objectives: To identify costing models and the variables they involve, and to show how practitioners assess and decide the priority of each variable. To strengthen this information, the risk of ignoring each variable was also confirmed. Method: Two approaches were used. First, a systematic literature review to find the models and variables used to decide the cost of software development. Second, confirmation and judgments from software developers about the level of importance of each variable and the risk of ignoring it. Result: About 54 variables appearing across the 10 models discussed were obtained. These were categorized into 15 groups based on similarity of meaning, with each group treated as a single variable. Confirmation with practitioners on the level of importance and risk showed two variables, duration and effort, to be considered very important and high risk if ignored. Conclusion: The correspondence between the variables found in the literature studies and the practitioners' confirmations can help software businesses consider project cost variables.

  15. Association between the Family Nutrition and Physical Activity screening tool and cardiovascular disease risk factors in 10-year old children

    NASA Astrophysics Data System (ADS)

    Yee, Kimbo Edward

    Purpose. To examine the association of the Family Nutrition and Physical Activity (FNPA) screening tool, a behaviorally based screening tool designed to assess the obesogenic family environment and behaviors, with cardiovascular disease (CVD) risk factors in 10-year old children. Methods. One hundred nineteen children were assessed for body mass index (BMI), percent body fat (%BF), waist circumference (WC), total cholesterol, HDL-cholesterol, and resting blood pressure. A continuous CVD risk score was created using total cholesterol to HDL-cholesterol ratio (TC:HDL), mean arterial pressure (MAP), and WC. The FNPA survey was completed by parents. The associations between the FNPA score and individual CVD risk factors and the continuous CVD risk score were examined using correlation analyses. Results. Approximately 35% of the sample were overweight (19%) or obese (16%). The mean FNPA score was 24.6 ± 2.5 (range 18 to 29). Significant correlations were found between the FNPA score and WC (r = -.35, p<.01), BMI percentile (r = -.38, p<.01), %BF (r = -.43, p<.01), and the continuous CVD risk score (r = -.22, p = .02). No significant association was found between the FNPA score and TC:HDL (r = .10, p = .88) or MAP (r = -.12, p = .20). Conclusion. Children from a high-risk, obesogenic family environment as indicated with a lower FNPA score have a higher CVD risk factor profile than children from a low-risk family environment.
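
    The abstract does not state how the continuous CVD risk score was constructed; a common approach in pediatric studies is to sum each child's sample z-scores for the component variables (TC:HDL, MAP, WC). This sketch assumes that approach, with made-up values:

```python
# Hypothetical sketch of a continuous CVD risk score built by summing sample
# z-scores of the component variables. All values are illustrative.
import statistics

def z_scores(values):
    mean, sd = statistics.mean(values), statistics.stdev(values)
    return [(v - mean) / sd for v in values]

# One entry per child (four illustrative children):
tc_hdl = [3.2, 4.1, 5.0, 3.6]
map_bp = [78.0, 85.0, 92.0, 80.0]
wc     = [58.0, 66.0, 74.0, 61.0]

risk_score = [sum(zs) for zs in
              zip(z_scores(tc_hdl), z_scores(map_bp), z_scores(wc))]
```

    By construction the scores center on zero across the sample, with higher values indicating a less favorable risk profile.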

  16. A Study on the Priority Selection of Sediment-related Disaster Evacuation Using Debris Flow Combination Degree of Risk

    NASA Astrophysics Data System (ADS)

    Woo, C.; Kang, M.; Seo, J.; Kim, D.; Lee, C.

    2017-12-01

    As mountainous urbanization has increased concern about landslides in residential areas, it is essential to develop technology that minimizes damage through quick identification and sharing of disaster information. In this study, to establish an effective alert and evacuation system for residents, we used the debris flow combined degree of risk to predict the risk of the disaster and the level of damage, and to select evacuation priorities. Based on GIS information, physical hazard and social vulnerability were determined following the debris flow combined risk formula, and the physical hazard ratings were classified through a normalization process. The resident population within the damage range of the debris flow damage prediction map was estimated from area and unit-size data. The estimated number of occupants was calculated by applying different weights to residents and users, and the result was classified into five classes, as was the physical hazard. The physical hazard and the social and psychological vulnerability classifications were then combined into a debris flow integrated risk map using a matrix technique. In addition, to supplement the integrated debris flow risk, extra weight was given to facilities for vulnerable people, which require considerable time and manpower to evacuate; the welfare-facility model was supplemented using basic data on population density, employment density, and GDP. Areas with a high integrated risk level are evacuated first; when classification is difficult because management areas have the same or similar grades, differences in physical hazard class are considered. When the physical hazard classes are also similar, the population of areas containing welfare facilities is considered first, and priority is then decided in order of age distribution, population density by time period, and class differences among residential facilities. The results of this study are expected to be used as basic data for establishing a landslide safety net through disaster evacuation systems. Keywords: Landslide, Debris flow, Early warning system, Evacuation
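
    The tie-breaking rules described in this record amount to a multi-key sort over the management areas. A minimal sketch with hypothetical area records and field names:

```python
# Sketch of the evacuation-priority ordering: highest integrated risk first,
# ties broken by higher physical hazard class, then by larger affected
# population (welfare facilities included). Records and fields are hypothetical.

areas = [
    {"name": "A", "integrated_risk": 5, "physical_class": 3, "population": 120},
    {"name": "B", "integrated_risk": 5, "physical_class": 4, "population": 90},
    {"name": "C", "integrated_risk": 4, "physical_class": 5, "population": 300},
]

evacuation_order = sorted(
    areas,
    key=lambda a: (-a["integrated_risk"], -a["physical_class"], -a["population"]),
)
order = [a["name"] for a in evacuation_order]
```

    Further tie-breakers from the record (age distribution, population density by time period, residential facility class) would simply extend the sort key.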

  17. Use and Customization of Risk Scores for Predicting Cardiovascular Events Using Electronic Health Record Data.

    PubMed

    Wolfson, Julian; Vock, David M; Bandyopadhyay, Sunayan; Kottke, Thomas; Vazquez-Benitez, Gabriela; Johnson, Paul; Adomavicius, Gediminas; O'Connor, Patrick J

    2017-04-24

    Clinicians who are using the Framingham Risk Score (FRS) or the American College of Cardiology/American Heart Association Pooled Cohort Equations (PCE) to estimate risk for their patients based on electronic health data (EHD) face 4 questions. (1) Do published risk scores applied to EHD yield accurate estimates of cardiovascular risk? (2) Are FRS risk estimates, which are based on data that are up to 45 years old, valid for a contemporary patient population seeking routine care? (3) Do the PCE make the FRS obsolete? (4) Does refitting the risk score using EHD improve the accuracy of risk estimates? Data were extracted from the EHD of 84 116 adults aged 40 to 79 years who received care at a large healthcare delivery and insurance organization between 2001 and 2011. We assessed calibration and discrimination for 4 risk scores: published versions of FRS and PCE and versions obtained by refitting models using a subset of the available EHD. The published FRS was well calibrated (calibration statistic K=9.1, miscalibration ranging from 0% to 17% across risk groups), but the PCE displayed modest evidence of miscalibration (calibration statistic K=43.7, miscalibration from 9% to 31%). Discrimination was similar in both models (C-index=0.740 for FRS, 0.747 for PCE). Refitting the published models using EHD did not substantially improve calibration or discrimination. We conclude that published cardiovascular risk models can be successfully applied to EHD to estimate cardiovascular risk; the FRS remains valid and is not obsolete; and model refitting does not meaningfully improve the accuracy of risk estimates. © 2017 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley.

  18. Point-of-Care Testing for Anemia, Diabetes, and Hypertension: A Pharmacy-Based Model in Lima, Peru.

    PubMed

    Saldarriaga, Enrique M; Vodicka, Elisabeth; La Rosa, Sayda; Valderrama, Maria; Garcia, Patricia J

    Prevention and control of chronic diseases is a high priority for many low- and middle-income countries. This study evaluated the feasibility and acceptability of training pharmacy workers to provide point-of-care testing for 3 chronic diseases-hypertension, diabetes, and anemia-to improve disease detection and awareness through private pharmacies. We developed a multiphase training curriculum for pharmacists and pharmacy technicians to build capacity for identification of risk factors, patient education, point-of-care testing, and referral for abnormal results. We conducted a pre-post evaluation with participants and evaluated results using Student t test for proportions. We conducted point-of-care testing with pharmacy clients and evaluated acceptability by patient characteristics (age, gender, and type of patient) using multiple logistic regression. In total, 72 pharmacy workers (66%) completed the full training curriculum. Pretest scores indicated that pharmacists had more knowledge and skills in chronic disease risk factors, patient education, and testing than pharmacy technicians. All participants improved their knowledge and skills after the training, and post-test scores indicated that pharmacy technicians achieved the same level of competency as pharmacists (P < .01). Additionally, 698 clients received at least 1 test during the study; 53% completed the acceptability survey. Nearly 100% thought the pharmacy could provide faster results, faster and better attention, and better access to basic screening for hypertension, diabetes, and anemia than a traditional health center. Fast service was very important: 41% ranked faster results and 30% ranked faster attention as the most important factor for receiving diagnostic testing in the pharmacy. We found that it is both feasible for pharmacies and acceptable to clients to train pharmacy workers to provide point-of-care testing for anemia, diabetes, and hypertension. 
This innovative approach holds potential to increase early detection of risk factors and bolster disease prevention and management efforts in Peru and other low- and middle-income settings. Copyright © 2017. Published by Elsevier Inc.

  19. SU-E-T-87: A TG-100 Approach for Quality Improvement of Associated Dosimetry Equipment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manger, R; Pawlicki, T; Kim, G

    2015-06-15

    Purpose: Dosimetry protocols devote so much time to the discussion of ionization chamber choice, use and performance that it is easy to forget about the importance of the associated dosimetry equipment (ADE) in radiation dosimetry - barometer, thermometer, electrometer, phantoms, triaxial cables, etc. Improper use and inaccuracy of these devices may significantly affect the accuracy of radiation dosimetry. The purpose of this study is to evaluate the risk factors in the monthly output dosimetry procedure and recommend corrective actions using a TG-100 approach. Methods: A failure mode and effects analysis (FMEA) of the monthly linac output check procedure was performed to determine which steps and failure modes carried the greatest risk. In addition, a fault tree analysis (FTA) was performed to expand the initial list of failure modes making sure that none were overlooked. After determining the failure modes with the highest risk priority numbers (RPNs), 11 physicists were asked to score corrective actions based on their ease of implementation and potential impact. The results were aggregated into an impact map to determine the implementable corrective actions. Results: Three of the top five failure modes were related to the thermometer and barometer. The two highest RPN-ranked failure modes were related to barometric pressure inaccuracy due to their high lack-of-detectability scores. Six corrective actions were proposed to address barometric pressure inaccuracy, and the survey results found the following two corrective actions to be implementable: 1) send the barometer for recalibration at a calibration laboratory and 2) check the barometer accuracy against the local airport and correct for elevation. Conclusion: An FMEA on monthly output measurements displayed the importance of ADE for accurate radiation dosimetry. When brainstorming for corrective actions, an impact map is helpful for visualizing the overall impact versus the ease of implementation.
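
    In standard FMEA practice (as in TG-100), each failure mode's risk priority number is the product of its severity, occurrence, and lack-of-detectability ratings. The sketch below uses invented failure modes and ratings:

```python
# FMEA risk ranking sketch: RPN = severity (S) x occurrence (O) x
# lack-of-detectability (D), each typically rated 1-10. Failure modes and
# ratings below are hypothetical, not taken from the study.

def rpn(severity, occurrence, detectability):
    return severity * occurrence * detectability

failure_modes = {
    "barometer out of calibration": (6, 4, 9),   # hard to detect -> high D
    "thermometer misread":          (5, 5, 4),
    "wrong phantom setup":          (7, 2, 3),
}

# Rank failure modes by RPN, highest risk first:
ranked = sorted(failure_modes.items(),
                key=lambda kv: rpn(*kv[1]), reverse=True)
top_mode = ranked[0][0]
```

    Note how a modest-severity mode can still top the ranking when its lack-of-detectability score is high, mirroring the barometric-pressure result reported above.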

  20. SCORE should be preferred to Framingham to predict cardiovascular death in French population.

    PubMed

    Marchant, Ivanny; Boissel, Jean-Pierre; Kassaï, Behrouz; Bejan, Theodora; Massol, Jacques; Vidal, Chrystelle; Amsallem, Emmanuel; Naudin, Florence; Galan, Pilar; Czernichow, Sébastien; Nony, Patrice; Gueyffier, François

    2009-10-01

    Numerous studies have examined the validity of available scores for predicting absolute cardiovascular risk. We developed a virtual population based on data representative of the French population and compared the performance of the two most popular risk equations for predicting cardiovascular death: Framingham and SCORE. A population was built based on official French demographic statistics and summarized data from representative observational studies. The 10-year coronary and cardiovascular death risks and their ratio were computed for each individual with the SCORE and Framingham equations. The resulting rates were compared with those derived from national vital statistics. Framingham overestimated French coronary deaths 2.8-fold in men and 1.9-fold in women, and cardiovascular deaths 1.5-fold in men and 1.3-fold in women. SCORE overestimated coronary death 1.6-fold in men and 1.7-fold in women, and underestimated cardiovascular death (ratios of 0.94 in men and 0.85 in women). Our results revealed an exaggerated representation of coronary deaths among the cardiovascular deaths predicted by Framingham, with predicted coronary death exceeding cardiovascular death for some individual profiles. Sensitivity analyses gave some insights into the internal inconsistency of the Framingham equations. The evidence indicates that SCORE should be preferred to Framingham for predicting cardiovascular death risk in the French population. This discrepancy between prediction scores is likely to be observed in other populations as well. To improve the validation of risk equations, specific guidelines should be issued to harmonize outcome definitions across epidemiologic studies. Prediction models should be calibrated for risk differences across space and time.

  1. Comparison of the Between the Flags calling criteria to the MEWS, NEWS and the electronic Cardiac Arrest Risk Triage (eCART) score for the identification of deteriorating ward patients.

    PubMed

    Green, Malcolm; Lander, Harvey; Snyder, Ashley; Hudson, Paul; Churpek, Matthew; Edelson, Dana

    2018-02-01

    Traditionally, paper-based observation charts have been used to identify deteriorating patients; emerging electronic medical records now allow electronic algorithms to risk stratify patients and help direct the response to deterioration. We sought to compare the Between the Flags (BTF) calling criteria to the Modified Early Warning Score (MEWS), National Early Warning Score (NEWS) and electronic Cardiac Arrest Risk Triage (eCART) score. Multicenter retrospective analysis of electronic health record data from all patients admitted to five US hospitals from November 2008 to August 2013. Cardiac arrest, ICU transfer or death within 24 h of a score served as the composite outcome. Overall accuracy was highest for eCART, with an AUC of 0.801 (95% CI 0.799-0.802), followed by NEWS, MEWS and BTF, respectively (0.718 [0.716-0.720]; 0.698 [0.696-0.700]; 0.663 [0.661-0.664]). BTF criteria had a high-risk (Red Zone) specificity of 95.0% and a moderate-risk (Yellow Zone) specificity of 27.5%, which corresponded to MEWS thresholds of ≥4 and ≥2, NEWS thresholds of ≥5 and ≥2, and eCART thresholds of ≥12 and ≥4, respectively. At those thresholds, eCART caught 22 more adverse events per 10,000 patients than BTF using the moderate-risk criteria and 13 more using the high-risk criteria, while MEWS and NEWS identified the same number or fewer. An electronically generated eCART score was more accurate than commonly used paper-based observation tools for predicting the composite outcome of in-hospital cardiac arrest, ICU transfer and death within 24 h of observation. These results lend weight to a move towards algorithm-based electronic risk identification tools for deteriorating patients, to ensure earlier detection and prevent adverse events in the hospital. Copyright © 2017 Elsevier B.V. All rights reserved.
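
    The comparison above rests on matching score cutoffs at a fixed specificity and then comparing sensitivities. A minimal sketch of that matching step, with invented function names and synthetic ward data (the real study used far larger cohorts and the actual eCART/MEWS/NEWS scores):

```python
# Sketch: find the integer cutoff whose specificity reaches a target,
# then read off the sensitivity at that cutoff. All data are synthetic.
def specificity(scores, labels, threshold):
    """Fraction of non-event patients scoring below the cutoff."""
    negatives = [s for s, y in zip(scores, labels) if y == 0]
    return sum(s < threshold for s in negatives) / len(negatives)

def sensitivity(scores, labels, threshold):
    """Fraction of event patients at or above the cutoff."""
    positives = [s for s, y in zip(scores, labels) if y == 1]
    return sum(s >= threshold for s in positives) / len(positives)

def threshold_at_specificity(scores, labels, target):
    """Smallest integer cutoff whose specificity reaches the target."""
    for t in range(min(scores), max(scores) + 2):
        if specificity(scores, labels, t) >= target:
            return t
    return None

scores = [1, 1, 2, 2, 3, 5, 3, 4, 5, 6]   # synthetic warning scores
labels = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1]   # 1 = adverse event within 24 h
cutoff = threshold_at_specificity(scores, labels, 0.60)
print(cutoff, sensitivity(scores, labels, cutoff))  # prints: 3 1.0
```

    Running the same matching against two scoring systems at the same target specificity makes their sensitivities directly comparable, which is the logic behind the "22 more adverse events per 10,000 patients" figure.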

  2. Incident Risk Factors and Major Bleeding in Patients with Atrial Fibrillation Treated with Oral Anticoagulants: A Comparison of Baseline, Follow-up and Delta HAS-BLED Scores with an Approach Focused on Modifiable Bleeding Risk Factors.

    PubMed

    Chao, Tze-Fan; Lip, Gregory Y H; Lin, Yenn-Jiang; Chang, Shih-Lin; Lo, Li-Wei; Hu, Yu-Feng; Tuan, Ta-Chuan; Liao, Jo-Nan; Chung, Fa-Po; Chen, Tzeng-Ji; Chen, Shih-Ann

    2018-04-01

    When assessing bleeding risk in patients with atrial fibrillation (AF), risk stratification is often based on baseline risks. We aimed to investigate changes in bleeding risk factors and alterations in the HAS-BLED score in AF patients. We hypothesized that a follow-up HAS-BLED score and the 'delta HAS-BLED score' (reflecting the change in score between baseline and follow-up) would be more predictive of major bleeding than the baseline HAS-BLED score. A total of 19,566 AF patients receiving warfarin with a baseline HAS-BLED score ≤2 were studied. Over a follow-up of 93,783 person-years, 3,032 major bleeds were observed. The accuracies of the baseline, follow-up and delta HAS-BLED scores, as well as the cumulative number of baseline modifiable bleeding risk factors, in predicting subsequent major bleeding were analysed and compared. The mean baseline HAS-BLED score was 1.43, which increased to 2.45 with a mean 'delta HAS-BLED score' of 1.03. The HAS-BLED score remained unchanged in 38.2% of patients. Of those patients experiencing major bleeding, 76.6% had a 'delta HAS-BLED' score ≥1, compared with only 59.0% of patients without major bleeding (p < 0.001). For prediction of major bleeding, the AUC was significantly higher for the follow-up HAS-BLED (0.63) and delta HAS-BLED (0.62) scores than for the baseline HAS-BLED score (0.54). The number of baseline modifiable risk factors was not significantly predictive of major bleeding (AUC = 0.49). In this 'real-world' nationwide AF cohort, the follow-up HAS-BLED and 'delta HAS-BLED' scores were more predictive of major bleeding than baseline HAS-BLED or the simple determination of 'modifiable bleeding risk factors'. Bleeding risk in AF is a dynamic process, and the HAS-BLED score should be used to 'flag up' patients potentially at risk for more regular review and follow-up, and to address modifiable bleeding risk factors during follow-up visits. Schattauer GmbH Stuttgart.
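
    A minimal sketch of the 'delta HAS-BLED' computation, assuming a simplified one-point-per-factor version of the score (the published score allows up to 2 points each for abnormal renal/liver function and for drugs/alcohol); the patient data below are invented:

```python
# Simplified HAS-BLED sketch: one point per present risk factor.
# Factor names follow the published acronym; this lumps the two-point
# renal/liver and drugs/alcohol items into single factors for brevity.
HAS_BLED_FACTORS = [
    "hypertension", "abnormal_renal_liver_function", "stroke",
    "bleeding_history", "labile_inr", "elderly", "drugs_or_alcohol",
]

def has_bled(points: dict) -> int:
    """Sum the per-factor points for the factors present."""
    return sum(points.get(f, 0) for f in HAS_BLED_FACTORS)

# Invented patient: two new risk factors appear during follow-up.
baseline = {"hypertension": 1, "elderly": 1}          # score 2
follow_up = {"hypertension": 1, "elderly": 1,
             "labile_inr": 1, "bleeding_history": 1}  # score 4

delta = has_bled(follow_up) - has_bled(baseline)
print(delta)  # prints: 2
```

    The study's point is that this delta, recomputed at each visit, carries prognostic information that the frozen baseline score misses.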

  3. [Subjective parental stress as indicator for child abuse risk: the role of emotional regulation and attachment].

    PubMed

    Spangler, Gottfried; Bovenschen, Ina; Globisch, Jutta; Krippl, Martin; Ast-Scheitenberger, Stephanie

    2009-01-01

    The Child Abuse Potential Inventory (CAPI) is an evidence-based procedure for assessing the risk of child abuse in parents. In this study, a German translation of the CAPI was applied to a normal sample of German parents (N = 944). Descriptive analysis of the CAPI scores in the German sample yields findings comparable to the original standardization sample. The subjects' child abuse risk score was associated with demographic characteristics such as education, marital status, occupation and gender. Long-term stability of the child abuse risk score and associations with individual differences in emotional regulation and attachment were investigated in a subsample of mothers with high and low child abuse risk scores (N = 69). The findings confirmed long-term stability. Furthermore, associations between the child abuse risk score and anger dispositions were found, which, however, were moderated by attachment differences. The findings suggest attachment security as a protective factor against child abuse.

  4. Getting offshoring right.

    PubMed

    Aron, Ravi; Singh, Jitendra V

    2005-12-01

    The prospect of offshoring and outsourcing business processes has captured the imagination of CEOs everywhere. In the past five years, a rising number of companies in North America and Europe have experimented with this strategy, hoping to reduce costs and gain strategic advantage. But many businesses have had mixed results. According to several studies, half the organizations that have shifted processes offshore have failed to generate the expected financial benefits. What's more, many of them have faced employee resistance and consumer dissatisfaction. Clearly, companies have to rethink how they formulate their offshoring strategies. A three-part methodology can help. First, companies need to prioritize their processes, ranking each based on two criteria: the value it creates for customers and the degree to which the company can capture some of that value. Companies will want to keep their core (highest-priority) processes in-house and consider outsourcing their commodity (low-priority) processes; critical (moderate-priority) processes are up for debate and must be considered carefully. Second, businesses should analyze all the risks that accompany offshoring and look systematically at their critical and commodity processes in terms of operational risk (the risk that processes won't operate smoothly after being offshored) and structural risk (the risk that relationships with service providers may not work as expected). Finally, companies should determine possible locations for their offshore efforts, as well as the organizational forms--such as captive centers and joint ventures--that those efforts might take. They can do so by examining each process's operational and structural risks side by side. This article outlines the tools that will help companies choose the right processes to offshore. 
It also describes a new organizational structure called the extended organization, in which companies specify the quality of services they want and work alongside providers to get that quality.
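
    The two-criteria prioritization step described above can be sketched as a toy classifier; the processes, scoring scales and thresholds below are invented and are not taken from the article.

```python
# Toy sketch of the prioritization: rank each process by customer value
# created and value captured (hypothetical 1-5 scales), then classify it
# as core (keep in-house), critical (debate) or commodity (outsourcing candidate).
def classify(value_created: int, value_captured: int) -> str:
    total = value_created + value_captured
    if total >= 8:
        return "core"        # highest priority: keep in-house
    if total >= 5:
        return "critical"    # moderate priority: consider carefully
    return "commodity"       # low priority: candidate for outsourcing

# Invented example processes and scores.
processes = {"product design": (5, 5), "claims handling": (3, 3), "payroll": (2, 1)}
for name, scores in processes.items():
    print(name, "->", classify(*scores))
```

    The article's second step then cross-examines the critical and commodity classes against operational and structural risk before choosing locations and organizational forms.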

  5. Associations between emotional intelligence, depression and suicide risk in nursing students.

    PubMed

    Aradilla-Herrero, Amor; Tomás-Sábado, Joaquín; Gómez-Benito, Juana

    2014-04-01

    The most important factor which predisposes young people to suicide is depression, although protective factors such as self-esteem, emotional adaptation and social support may reduce the probability of suicidal ideation and suicide attempts. Several studies have indicated an elevated risk of suicide for health-related professions. Little is known, however, about the relationship between perceived emotional intelligence and suicide risk among nursing students. The main goals were to determine the prevalence of suicide risk in a sample of nursing students, to examine the relationship between suicide risk and perceived emotional intelligence, depression, trait anxiety and self-esteem, and to identify any gender differences in relation to these variables. Cross-sectional study of nursing students (n=93) who completed self-report measures of perceived emotional intelligence (Trait Meta-Mood Scale, which evaluates three dimensions: emotional attention, clarity and repair), suicide risk (Plutchik Suicide Risk Scale), self-esteem (Rosenberg Self-esteem Scale), depression (Zung Self-Rating Depression Scale) and anxiety (Trait scale of the State-Trait Anxiety Inventory). Linear regression analysis confirmed that depression and emotional attention are significant predictors of suicidal ideation. Moreover, suicide risk showed a significant negative association with self-esteem and with emotional clarity and repair. Gender differences were only observed in relation to depression, on which women scored significantly higher. Overall, 14% of the students were considered to present a substantial suicide risk. The findings suggest that interventions to prevent suicidal ideation among nursing students should include strategies to detect mood disorders (especially depression) and to improve emotional coping skills. 
In line with previous research, the results indicate that high scores on emotional attention are linked to heightened emotional susceptibility and an increased risk of suicide. The identification and prevention of factors associated with suicidal behaviour in nursing students should be regarded as a priority. © 2013.

  6. Development of a common priority list of pharmaceuticals relevant for the water cycle.

    PubMed

    de Voogt, P; Janex-Habibi, M-L; Sacher, F; Puijker, L; Mons, M

    2009-01-01

    Pharmaceutically active compounds (PhACs), including prescription drugs, over-the-counter medications, drugs used in hospitals and veterinary drugs, have been found throughout the water cycle. A desk study was initiated by the Global Water Research Coalition to consolidate a uniform selection of such compounds in order to judge risks of PhACs for the water cycle. By identifying major existing prioritization efforts and evaluating the criteria they use, this study yields a representative and qualitative profile ('umbrella view') of priority pharmaceuticals based on an extensive set of criteria. This can then be used for further studies on analytical methods, occurrence, treatability and potential risks associated with exposure to PhACs in water supply, identifying compounds most likely to be encountered and that may have significant impact on human health. For practical reasons, the present study excludes veterinary drugs. The pragmatic approach adopted provides an efficient tool to manage risks related to pharmaceuticals and provides assistance for selecting compounds for future studies.

  7. Excess mortality in persons with severe mental disorders: a multilevel intervention framework and priorities for clinical practice, policy and research agendas

    PubMed Central

    Liu, Nancy H.; Daumit, Gail L.; Dua, Tarun; Aquila, Ralph; Charlson, Fiona; Cuijpers, Pim; Druss, Benjamin; Dudek, Kenn; Freeman, Melvyn; Fujii, Chiyo; Gaebel, Wolfgang; Hegerl, Ulrich; Levav, Itzhak; Munk Laursen, Thomas; Ma, Hong; Maj, Mario; Elena Medina‐Mora, Maria; Nordentoft, Merete; Prabhakaran, Dorairaj; Pratt, Karen; Prince, Martin; Rangaswamy, Thara; Shiers, David; Susser, Ezra; Thornicroft, Graham; Wahlbeck, Kristian; Fekadu Wassie, Abe; Whiteford, Harvey; Saxena, Shekhar

    2017-01-01

    Excess mortality in persons with severe mental disorders (SMD) is a major public health challenge that warrants action. The number and scope of truly tested interventions in this area remain limited, and strategies for implementation and scaling up of programmes with a strong evidence base are scarce. Furthermore, the majority of available interventions focus on a single or an otherwise limited number of risk factors. Here we present a multilevel model highlighting risk factors for excess mortality in persons with SMD at the individual, health system and socio‐environmental levels. Informed by that model, we describe a comprehensive framework that may be useful for designing, implementing and evaluating interventions and programmes to reduce excess mortality in persons with SMD. This framework includes individual‐focused, health system‐focused, and community level and policy‐focused interventions. Incorporating lessons learned from the multilevel model of risk and the comprehensive intervention framework, we identify priorities for clinical practice, policy and research agendas. PMID:28127922

  8. Failure mode and effects analysis: A community practice perspective.

    PubMed

    Schuller, Bradley W; Burns, Angi; Ceilley, Elizabeth A; King, Alan; LeTourneau, Joan; Markovic, Alexander; Sterkel, Lynda; Taplin, Brigid; Wanner, Jennifer; Albert, Jeffrey M

    2017-11-01

    To report our early experiences with failure mode and effects analysis (FMEA) in a community practice setting. The FMEA facilitator received extensive training at the AAPM Summer School. Early efforts focused on department education and emphasized the need for process evaluation in the context of high profile radiation therapy accidents. A multidisciplinary team was assembled with representation from each of the major department disciplines. Stereotactic radiosurgery (SRS) was identified as the most appropriate treatment technique for the first FMEA evaluation, as it is largely self-contained and has the potential to produce high impact failure modes. Process mapping was completed using breakout sessions, and then compiled into a simple electronic format. Weekly sessions were used to complete the FMEA evaluation. Risk priority number (RPN) values > 100 or severity scores of 9 or 10 were considered high risk. The overall time commitment was also tracked. The final SRS process map contained 15 major process steps and 183 subprocess steps. Splitting the process map into individual assignments was a successful strategy for our group. The process map was designed to contain enough detail such that another radiation oncology team would be able to perform our procedures. Continuous facilitator involvement helped maintain consistent scoring during FMEA. Practice changes were made responding to the highest RPN scores, and new resulting RPN scores were below our high-risk threshold. The estimated person-hour equivalent for project completion was 258 hr. This report provides important details on the initial steps we took to complete our first FMEA, providing guidance for community practices seeking to incorporate this process into their quality assurance (QA) program. 
Determining the feasibility of implementing complex QA processes into different practice settings will take on increasing significance as the field of radiation oncology transitions into the new TG-100 QA paradigm. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  9. Stroke prevention with oral anticoagulation in older people with atrial fibrillation - a pragmatic approach.

    PubMed

    Ali, Ali; Bailey, Claire; Abdelhafiz, Ahmed H

    2012-08-01

    With advancing age, the prevalence of both stroke and non valvular atrial fibrillation (NVAF) is increasing. NVAF in old age has a high embolic potential if not anticoagulated. Oral anticoagulation therapy is cost effective in older people with NVAF due to their high base line stroke risk. The current stroke and bleeding risk scoring schemes have been based on complex scoring systems that are difficult to apply in clinical practice. Both scoring schemes include similar risk factors for ischemic and bleeding events which may lead to confusion in clinical decision making to balance the risks of bleeding against the risks of stroke, thereby limiting the applicability of such schemes. The difficulty in application of such schemes combined with physicians' fear of inducing bleeding complications has resulted in under use of anticoagulation therapy in older people. As older people (≥75 years) with NVAF are all at high risk of stroke, we are suggesting a pragmatic approach based on a yes/no decision rather than a risk scoring stratification which involves an opt out rather an opt in approach unless there is a contraindication for oral anticoagulation. Antiplatelet agents should not be an alternative option for antithrombotic treatment in older people with NVAF due to lack of efficacy and the potential of being used as an excuse of not prescribing anticoagulation. Bleeding risk should be assessed on individual basis and the decision to anticoagulate should include patients' views.

  11. Protocol of a feasibility study for cognitive assessment of an ageing cohort within the Southeast Asia Community Observatory (SEACO), Malaysia.

    PubMed

    Mohan, Devi; Stephan, Blossom C M; Allotey, Pascale; Jagger, Carol; Pearce, Mark; Siervo, Mario; Reidpath, Daniel D

    2017-01-19

    There is a growing proportion of population aged 65 years and older in low-income and middle-income countries. In Malaysia, this proportion is predicted to increase from 5.1% in 2010 to more than 15.4% by 2050. Cognitive ageing and dementia are global health priorities. However, risk factors and disease associations in a multiethnic, middle-income country like Malaysia may not be consistent with those reported in other world regions. Knowing the burden of cognitive impairment and its risk factors in Malaysia is necessary for the development of management strategies and would provide valuable information for other transitional economies. This is a community-based feasibility study focused on the assessment of cognition, embedded in the longitudinal study of health and demographic surveillance site of the South East Asia Community Observatory (SEACO), in Malaysia. In total, 200 adults aged ≥50 years are selected for an in-depth health and cognitive assessment including the Mini Mental State Examination, the Montreal Cognitive Assessment, blood pressure, anthropometry, gait speed, hand grip strength, Depression Anxiety Stress Score and dried blood spots. The results will inform the feasibility, response rates and operational challenges for establishing an ageing study focused on cognitive function in similar middle-income country settings. Knowing the burden of cognitive impairment and dementia and risk factors for disease will inform local health priorities and management, and place these within the context of increasing life expectancy. The study protocol is approved by the Monash University Human Research Ethics Committee. Informed consent is obtained from all the participants. The project's analysed data and findings will be made available through publications and conference presentations and a data sharing archive. Reports on key findings will be made available as community briefs on the SEACO website. Published by the BMJ Publishing Group Limited. 

  12. Predicting stroke through genetic risk functions: the CHARGE Risk Score Project.

    PubMed

    Ibrahim-Verbaas, Carla A; Fornage, Myriam; Bis, Joshua C; Choi, Seung Hoan; Psaty, Bruce M; Meigs, James B; Rao, Madhu; Nalls, Mike; Fontes, Joao D; O'Donnell, Christopher J; Kathiresan, Sekar; Ehret, Georg B; Fox, Caroline S; Malik, Rainer; Dichgans, Martin; Schmidt, Helena; Lahti, Jari; Heckbert, Susan R; Lumley, Thomas; Rice, Kenneth; Rotter, Jerome I; Taylor, Kent D; Folsom, Aaron R; Boerwinkle, Eric; Rosamond, Wayne D; Shahar, Eyal; Gottesman, Rebecca F; Koudstaal, Peter J; Amin, Najaf; Wieberdink, Renske G; Dehghan, Abbas; Hofman, Albert; Uitterlinden, André G; Destefano, Anita L; Debette, Stephanie; Xue, Luting; Beiser, Alexa; Wolf, Philip A; Decarli, Charles; Ikram, M Arfan; Seshadri, Sudha; Mosley, Thomas H; Longstreth, W T; van Duijn, Cornelia M; Launer, Lenore J

    2014-02-01

    Beyond the Framingham Stroke Risk Score, prediction of future stroke may improve with a genetic risk score (GRS) based on single-nucleotide polymorphisms associated with stroke and its risk factors. The study includes 4 population-based cohorts with 2047 first incident strokes from 22,720 initially stroke-free European origin participants aged ≥55 years, who were followed for up to 20 years. GRSs were constructed with 324 single-nucleotide polymorphisms implicated in stroke and 9 risk factors. The association of the GRS to first incident stroke was tested using Cox regression; the GRS predictive properties were assessed with area under the curve statistics comparing the GRS with age and sex, Framingham Stroke Risk Score models, and reclassification statistics. These analyses were performed per cohort and in a meta-analysis of pooled data. Replication was sought in a case-control study of ischemic stroke. In the meta-analysis, adding the GRS to the Framingham Stroke Risk Score, age and sex model resulted in a significant improvement in discrimination (all stroke: Δjoint area under the curve=0.016, P=2.3×10(-6); ischemic stroke: Δjoint area under the curve=0.021, P=3.7×10(-7)), although the overall area under the curve remained low. In all the studies, there was a highly significantly improved net reclassification index (P<10(-4)). The single-nucleotide polymorphisms associated with stroke and its risk factors result only in a small improvement in prediction of future stroke compared with the classical epidemiological risk factors for stroke.
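
    The GRS construction described above is, at its core, a weighted allele count: for each SNP, the risk-allele dosage (0, 1 or 2) times a per-allele weight, summed over all SNPs. A minimal sketch with invented SNP IDs and weights (the study used 324 SNPs with published effect sizes):

```python
# Sketch of a weighted genetic risk score (GRS).
# SNP IDs and log-odds weights below are invented for illustration.
weights = {"rs0001": 0.12, "rs0002": 0.05, "rs0003": -0.08}

def grs(genotype: dict) -> float:
    """genotype maps SNP id -> risk-allele count in {0, 1, 2};
    missing SNPs contribute zero."""
    return sum(weights[snp] * genotype.get(snp, 0) for snp in weights)

person = {"rs0001": 2, "rs0002": 1, "rs0003": 0}
print(round(grs(person), 3))  # prints: 0.29
```

    The resulting scalar is then entered into a Cox model alongside age, sex and the Framingham Stroke Risk Score, as in the meta-analysis above.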

  13. Computer-based malnutrition risk calculation may enhance the ability to identify pediatric patients at malnutrition-related risk for unfavorable outcome.

    PubMed

    Karagiozoglou-Lampoudi, Thomais; Daskalou, Efstratia; Lampoudis, Dimitrios; Apostolou, Aggeliki; Agakidis, Charalampos

    2015-05-01

    The study aimed to test the hypothesis that computer-based calculation of malnutrition risk may enhance the ability to identify pediatric patients at malnutrition-related risk for an unfavorable outcome. The Pediatric Digital Scaled MAlnutrition Risk screening Tool (PeDiSMART), incorporating the World Health Organization (WHO) growth reference data and malnutrition-related parameters, was used. This was a prospective cohort study of 500 pediatric patients aged 1 month to 17 years. Upon admission, the PeDiSMART score was calculated and anthropometry was performed. The Pediatric Yorkhill Malnutrition Score (PYMS), Screening Tool Risk on Nutritional Status and Growth (STRONGkids), and Screening Tool for the Assessment of Malnutrition in Pediatrics (STAMP) malnutrition screening tools were also applied. PeDiSMART's association with the clinical outcome measures (weight loss/nutrition support and hospitalization duration) was assessed and compared with the other screening tools. The PeDiSMART score was inversely correlated with anthropometry and bioelectrical impedance phase angle (BIA PhA). The score's grading scale was based on BIA PhA quartiles. Weight loss/nutrition support during hospitalization was significantly and independently associated with malnutrition risk group allocation on admission, after controlling for anthropometric parameters and age. Receiver operating characteristic curve analysis showed a sensitivity of 87%, a specificity of 75% and a significant area under the curve, which differed significantly from that of STRONGkids and STAMP. In the subgroups of patients with PeDiSMART-based risk allocation different from that based on the other tools, PeDiSMART allocation was more closely related to outcome measures.
PeDiSMART, applicable to the full age range of patients hospitalized in pediatric departments, graded according to BIA PhA, and embeddable in medical electronic records, enhances efficacy and reproducibility in identifying pediatric patients at malnutrition-related risk for an unfavorable outcome. Patient allocation according to the PeDiSMART score on admission is associated with clinical outcome measures. © 2014 American Society for Parenteral and Enteral Nutrition.

  14. A health priority for developing countries: the prevention of chronic fetal malnutrition.

    PubMed

    Villar, J; Altobelli, L; Kestler, E; Belizán, J

    1986-01-01

    A prospective study of 3557 consecutively born neonates from a lower middle class district in Guatemala City documented a 23.8% incidence of intrauterine growth retardation due to fetal malnutrition. Those infants whose weights are below the 10th percentile of a sex- and race-specific birthweight and gestational age distribution, based on a developed country population, were considered to manifest intrauterine growth retardation. Ponderal index values were then used to further classify this population as having chronic fetal malnutrition (above the 10th percentile of the standard distribution) or subacute fetal malnutrition (below the 10th percentile); the incidences of these conditions were 79.1% and 20.8%, respectively. The results of numerous studies carried out in various populations suggest that developing countries have a higher incidence of chronically malnourished infants within the intrauterine growth retardation population, while subacute fetal malnutrition is more prevalent in developed countries. Moreover, it has been shown that chronically malnourished infants do not recover from their intrauterine damage and score the lowest in mental development tests even up to school age. They remain lighter, shorter, and with a smaller head circumference until at least 3 years of age. Based on the incidence rates ascertained in this study, it can be estimated that at least 2 million infants born each year in Latin America are at risk of chronic intrauterine growth retardation. Screening programs are needed to identify at-risk mothers early in pregnancy so that medical and nutritional interventions can be implemented.

  15. A scoring system based on artificial neural network for predicting 10-year survival in stage II A colon cancer patients after radical surgery.

    PubMed

    Peng, Jian-Hong; Fang, Yu-Jing; Li, Cai-Xia; Ou, Qing-Jian; Jiang, Wu; Lu, Shi-Xun; Lu, Zhen-Hai; Li, Pei-Xing; Yun, Jing-Ping; Zhang, Rong-Xin; Pan, Zhi-Zhong; Wan, De Sen

    2016-04-19

    Nearly 20% of patients with stage II A colon cancer will develop recurrent disease post-operatively. The present study aims to develop a scoring system based on an Artificial Neural Network (ANN) model for predicting 10-year survival outcome. The clinical and molecular data of 117 stage II A colon cancer patients from Sun Yat-sen University Cancer Center were used as the training and test sets. Poor pathological grading (score 49), reduced expression of TGFBR2 (score 33), over-expression of TGF-β (score 45), MAPK (score 32), pin1 (score 100) and β-catenin in tumor tissue (score 50), and reduced expression of TGF-β in normal mucosa (score 22) were selected as the prognostic risk predictors. According to the developed scoring system, the patients were divided into 3 subgroups with higher, moderate and lower risk levels. For the 3 subgroups, the 10-year overall survival (OS) rates were 16.7%, 62.9% and 100% (P < 0.001), and the 10-year disease-free survival (DFS) rates were 16.7%, 61.8% and 98.8% (P < 0.001), respectively. This scoring system for stage II A colon cancer could help to predict long-term survival and identify high-risk individuals for more vigorous treatment.
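
    The additive part of the scoring system can be sketched as follows. The per-marker point values come from the abstract; the subgroup cut-offs are hypothetical, since the abstract does not report them.

```python
# Sketch of the component-score idea: each adverse marker contributes fixed
# points; the total assigns a patient to a risk subgroup.
COMPONENT_SCORES = {  # point values taken from the abstract
    "poor_pathological_grading": 49,
    "reduced_TGFBR2": 33,
    "overexpressed_TGF_beta_tumor": 45,
    "overexpressed_MAPK": 32,
    "overexpressed_pin1": 100,
    "overexpressed_beta_catenin": 50,
    "reduced_TGF_beta_normal_mucosa": 22,
}

def total_score(markers):
    """Sum the points for the adverse markers present in this patient."""
    return sum(COMPONENT_SCORES[m] for m in markers)

def risk_group(score, low_cut=100, high_cut=200):  # cut-offs are invented
    if score < low_cut:
        return "lower"
    return "moderate" if score < high_cut else "higher"

patient = ["overexpressed_pin1", "reduced_TGFBR2"]  # invented example, total 133
print(risk_group(total_score(patient)))  # prints: moderate
```

    Note that in the study the weights themselves were derived from an ANN model; this sketch shows only how the resulting points combine into subgroup assignments.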

  16. Impact of risk factors on cardiovascular risk: a perspective on risk estimation in a Swiss population.

    PubMed

    Chrubasik, Sigrun A; Chrubasik, Cosima A; Piper, Jörg; Schulte-Moenting, Juergen; Erne, Paul

    2015-01-01

    In models and scores for estimating cardiovascular risk (CVR), the relative weightings given to blood pressure measurements (BPMs), and biometric and laboratory variables are such that even large differences in blood pressure lead to rather low differences in the resulting total risk when compared with other concurrent risk factors. We evaluated this phenomenon based on the PROCAM score, using BPMs made by volunteer subjects at home (HBPMs) and automated ambulatory BPMs (ABPMs) carried out in the same subjects. A total of 153 volunteers provided the data needed to estimate their CVR by means of the PROCAM formula. Differences (deltaCVR) between the risk estimated by entering the ABPM and that estimated with the HBPM were compared with the differences (deltaBPM) between the ABPM and the corresponding HBPM. In addition to the median values (= second quartile), the first and third quartiles of blood pressure profiles were also considered. PROCAM risk values were converted to European Society of Cardiology (ESC) risk values and all participants were assigned to the risk groups low, medium and high. Based on the PROCAM score, 132 participants had a low risk of suffering myocardial infarction, 16 a medium risk and 5 a high risk. The calculated ESC scores classified 125 participants into the low-risk group, 26 into the medium- and 2 into the high-risk group for death from a cardiovascular event. Mean ABPM tended to be higher than mean HBPM. Use of mean systolic ABPM or HBPM in the PROCAM formula had no major impact on the risk level. Our observations are in agreement with the rather low weighting of blood pressure as a risk determinant in the PROCAM score. BPMs assessed with different methods had relatively little impact on estimation of cardiovascular risk in the given context of other important determinants. The risk calculations in our unselected population reflect the given classification of Switzerland as a so-called cardiovascular "low risk country".

  17. Sensors vs. experts - a performance comparison of sensor-based fall risk assessment vs. conventional assessment in a sample of geriatric patients.

    PubMed

    Marschollek, Michael; Rehwald, Anja; Wolf, Klaus-Hendrik; Gietzelt, Matthias; Nemitz, Gerhard; zu Schwabedissen, Hubertus Meyer; Schulze, Mareike

    2011-06-28

    Fall events contribute significantly to mortality, morbidity and costs in our ageing population. In order to identify persons at risk and to target preventive measures, many scores and assessment tools have been developed. These often require expertise and are costly to implement. Recent research investigates the use of wearable inertial sensors to provide objective data on motion features which can be used to assess individual fall risk automatically. So far it is unknown how well this new method performs in comparison with conventional fall risk assessment tools. The aim of our research is to compare the predictive performance of our new sensor-based method with conventional and established methods, based on prospective data. In a first study phase, 119 inpatients of a geriatric clinic took part in motion measurements using a wireless triaxial accelerometer during a Timed Up&Go (TUG) test and a 20 m walk. Furthermore, the St. Thomas Risk Assessment Tool in Falling Elderly Inpatients (STRATIFY) was performed, and the multidisciplinary geriatric care team estimated the patients' fall risk. In a second follow-up phase of the study, 46 of the participants were interviewed after one year, including a fall and activity assessment. The predictive performances of the TUG, the STRATIFY and team scores are compared. Furthermore, two automatically induced logistic regression models based on conventional clinical and assessment data (CONV) as well as sensor data (SENSOR) are matched. Among the risk assessment scores, the geriatric team score (sensitivity 56%, specificity 80%) outperforms STRATIFY and TUG. The induced logistic regression models CONV and SENSOR achieve similar performance values (sensitivity 68%/58%, specificity 74%/78%, AUC 0.74/0.72, +LR 2.64/2.61). Both models are able to identify more persons at risk than the simple scores. 
Sensor-based objective measurements of motion parameters in geriatric patients can be used to assess individual fall risk, and our prediction model's performance matches that of a model based on conventional clinical and assessment data. Sensor-based measurements using a small wearable device may contribute significant information to conventional methods and are feasible in an unsupervised setting. More prospective research is needed to assess the cost-benefit relation of our approach.
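    The positive likelihood ratios (+LR) quoted above follow directly from sensitivity and specificity. A minimal sketch: with the rounded inputs from the abstract this gives roughly 2.62 (CONV) and 2.64 (SENSOR); the paper's 2.64/2.61 were presumably computed from unrounded estimates.

```python
def positive_lr(sensitivity, specificity):
    """Positive likelihood ratio: sensitivity / (1 - specificity)."""
    return sensitivity / (1.0 - specificity)

# CONV model: sensitivity 68%, specificity 74% (values from the abstract)
conv_lr = positive_lr(0.68, 0.74)
# SENSOR model: sensitivity 58%, specificity 78%
sensor_lr = positive_lr(0.58, 0.78)
```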

  18. Sensors vs. experts - A performance comparison of sensor-based fall risk assessment vs. conventional assessment in a sample of geriatric patients

    PubMed Central

    2011-01-01

    Background Fall events contribute significantly to mortality, morbidity and costs in our ageing population. In order to identify persons at risk and to target preventive measures, many scores and assessment tools have been developed. These often require expertise and are costly to implement. Recent research investigates the use of wearable inertial sensors to provide objective data on motion features which can be used to assess individual fall risk automatically. So far it is unknown how well this new method performs in comparison with conventional fall risk assessment tools. The aim of our research is to compare the predictive performance of our new sensor-based method with conventional and established methods, based on prospective data. Methods In a first study phase, 119 inpatients of a geriatric clinic took part in motion measurements using a wireless triaxial accelerometer during a Timed Up&Go (TUG) test and a 20 m walk. Furthermore, the St. Thomas Risk Assessment Tool in Falling Elderly Inpatients (STRATIFY) was performed, and the multidisciplinary geriatric care team estimated the patients' fall risk. In a second follow-up phase of the study, 46 of the participants were interviewed after one year, including a fall and activity assessment. The predictive performances of the TUG, the STRATIFY and team scores are compared. Furthermore, two automatically induced logistic regression models based on conventional clinical and assessment data (CONV) as well as sensor data (SENSOR) are matched. Results Among the risk assessment scores, the geriatric team score (sensitivity 56%, specificity 80%) outperforms STRATIFY and TUG. The induced logistic regression models CONV and SENSOR achieve similar performance values (sensitivity 68%/58%, specificity 74%/78%, AUC 0.74/0.72, +LR 2.64/2.61). Both models are able to identify more persons at risk than the simple scores. 
Conclusions Sensor-based objective measurements of motion parameters in geriatric patients can be used to assess individual fall risk, and our prediction model's performance matches that of a model based on conventional clinical and assessment data. Sensor-based measurements using a small wearable device may contribute significant information to conventional methods and are feasible in an unsupervised setting. More prospective research is needed to assess the cost-benefit relation of our approach. PMID:21711504

  19. Analysis of Nursing Clinical Decision Support Requests and Strategic Plan in a Large Academic Health System.

    PubMed

    Whalen, Kimberly; Bavuso, Karen; Bouyer-Ferullo, Sharon; Goldsmith, Denise; Fairbanks, Amanda; Gesner, Emily; Lagor, Charles; Collins, Sarah

    2016-01-01

    To understand requests for nursing Clinical Decision Support (CDS) interventions at a large integrated health system undergoing vendor-based EHR implementation. In addition, to establish a process to guide both short-term implementation and long-term strategic goals to meet nursing CDS needs. We conducted an environmental scan to understand the current state of nursing CDS over three months. The environmental scan consisted of a literature review and an analysis of CDS requests received from across our health system. We identified existing high priority CDS and paper-based tools used in nursing practice at our health system that guide decision-making. A total of 46 nursing CDS requests were received. Fifty-six percent (n=26) were specific to a clinical specialty; 22 percent (n=10) were focused on facilitating clinical consults in the inpatient setting. "Risk Assessments/Risk Reduction/Promotion of Healthy Habits" (n=23) was the most requested High Priority Category received for nursing CDS. A continuum of types of nursing CDS needs emerged using the Data-Information-Knowledge-Wisdom Conceptual Framework: 1) facilitating data capture, 2) meeting information needs, 3) guiding knowledge-based decision making, and 4) exposing analytics for wisdom-based clinical interpretation by the nurse. Identifying and prioritizing paper-based tools that can be modified into electronic CDS is a challenge. CDS strategy is an evolving process that relies on close collaboration and engagement with clinical sites for short-term implementation and should be incorporated into a long-term strategic plan that can be optimized and achieved over time. The Data-Information-Knowledge-Wisdom Conceptual Framework in conjunction with the High Priority Categories established may be a useful tool to guide a strategic approach for meeting short-term nursing CDS needs and aligning with the organizational strategic plan.

  20. Assessing the impact of a cattle risk-based trading scheme on the movement of bovine tuberculosis infected animals in England and Wales.

    PubMed

    Adkin, A; Brouwer, A; Downs, S H; Kelly, L

    2016-01-01

    The adoption of bovine tuberculosis (bTB) risk-based trading (RBT) schemes has the potential to reduce the risk of bTB spread. However, any scheme will have cost implications that need to be balanced against its likely success in reducing bTB. This paper describes the first stochastic quantitative model assessing the impact of the implementation of a cattle risk-based trading scheme to inform policy makers and contribute to cost-benefit analyses. A risk assessment for England and Wales was developed to estimate the number of infected cattle traded using historic movement data recorded between July 2010 and June 2011. Three scenarios were implemented: cattle traded with no RBT scheme in place, voluntary provision of the score and a compulsory, statutory scheme applying a bTB risk score to each farm. For each scenario, changes in trade were estimated due to provision of the risk score to potential purchasers. An estimated mean of 3981 bTB-infected animals was sold to purchasers with no RBT scheme in place in one year, with 90% confidence that the true value was between 2775 and 5288. This result is dependent on the estimated between-herd prevalence used in the risk assessment, which is uncertain. With the voluntary provision of the risk score by farmers, on average, 17% of movements were affected (purchaser did not wish to buy once the risk score was available), with a reduction of 23% in infected animals being purchased initially. The compulsory provision of the risk score in a statutory scheme resulted in an estimated mean change to 26% of movements, with a reduction of 37% in infected animals being purchased initially, increasing to a 53% reduction in infected movements from higher risk sellers (score 4 and 5). 
The estimated mean reduction in infected animals being purchased could be improved to 45% given a 10% reduction in risky purchase behaviour by farmers, which may be achieved through education programmes, or to an estimated mean of 49% if a rule was implemented preventing farmers from purchasing animals of higher risk than their own herd. Given the voluntary trials of a trading scheme currently taking place, recommendations for future work include the monitoring of initial uptake and changes in the purchase patterns of farmers. Such data could be used to update the risk assessment to reduce uncertainty associated with model estimates. Crown Copyright © 2015. Published by Elsevier B.V. All rights reserved.
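    The stochastic flavour of such an assessment can be illustrated with a small Monte Carlo sketch: draw an uncertain between-herd prevalence, then count infected animals among those traded. The trade volume and prevalence range below are invented for illustration and are not the study's inputs.

```python
import random
import statistics

def simulate_infected_trades(n_traded, prev_low, prev_high, n_iter=500, seed=42):
    """For each iteration, draw a prevalence uniformly from an uncertainty range,
    then count infected animals among the traded animals via per-animal Bernoulli
    draws (the stdlib has no binomial sampler). Returns the mean number of
    infected animals traded and an empirical 90% interval."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n_iter):
        p = rng.uniform(prev_low, prev_high)
        draws.append(sum(rng.random() < p for _ in range(n_traded)))
    draws.sort()
    mean = statistics.fmean(draws)
    lo, hi = draws[int(0.05 * n_iter)], draws[int(0.95 * n_iter) - 1]
    return mean, lo, hi

# Illustrative run: 5000 traded animals, prevalence uncertain between 0.2% and 0.4%
mean, lo, hi = simulate_infected_trades(5000, 0.002, 0.004)
```

    Feeding observed purchase-behaviour data back into such a simulation is exactly the kind of update the authors recommend for reducing uncertainty in the model estimates.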

  1. Developing a new case based computer-aided detection scheme and an adaptive cueing method to improve performance in detecting mammographic lesions

    PubMed Central

    Tan, Maxine; Aghaei, Faranak; Wang, Yunzhi; Zheng, Bin

    2017-01-01

    The purpose of this study is to evaluate a new method to improve performance of computer-aided detection (CAD) schemes of screening mammograms with two approaches. In the first approach, we developed a new case based CAD scheme using a set of optimally selected global mammographic density, texture, spiculation, and structural similarity features computed from all four full-field digital mammography (FFDM) images of the craniocaudal (CC) and mediolateral oblique (MLO) views by using a modified fast and accurate sequential floating forward selection feature selection algorithm. Selected features were then applied to a “scoring fusion” artificial neural network (ANN) classification scheme to produce a final case based risk score. In the second approach, we combined the case based risk score with the conventional lesion based scores of a conventional lesion based CAD scheme using a new adaptive cueing method that is integrated with the case based risk scores. We evaluated our methods using a ten-fold cross-validation scheme on 924 cases (476 cancer and 448 recalled or negative), whereby each case had all four images from the CC and MLO views. The area under the receiver operating characteristic curve was AUC = 0.793±0.015 and the odds ratio monotonically increased from 1 to 37.21 as CAD-generated case based detection scores increased. Using the new adaptive cueing method, the region based and case based sensitivities of the conventional CAD scheme at a false positive rate of 0.71 per image increased by 2.4% and 0.8%, respectively. The study demonstrated that supplementary information can be derived by computing global mammographic density image features to improve CAD-cueing performance on the suspicious mammographic lesions. PMID:27997380

  2. PREDICT-PD: An online approach to prospectively identify risk indicators of Parkinson's disease.

    PubMed

    Noyce, Alastair J; R'Bibo, Lea; Peress, Luisa; Bestwick, Jonathan P; Adams-Carr, Kerala L; Mencacci, Niccolo E; Hawkes, Christopher H; Masters, Joseph M; Wood, Nicholas; Hardy, John; Giovannoni, Gavin; Lees, Andrew J; Schrag, Anette

    2017-02-01

    A number of early features can precede the diagnosis of Parkinson's disease (PD). To test an online, evidence-based algorithm to identify risk indicators of PD in the UK population. Participants aged 60 to 80 years without PD completed an online survey and keyboard-tapping task annually over 3 years, and underwent smell tests and genotyping for glucocerebrosidase (GBA) and leucine-rich repeat kinase 2 (LRRK2) mutations. Risk scores were calculated based on the results of a systematic review of risk factors and early features of PD, and individuals were grouped into higher (above 15th centile), medium, and lower risk groups (below 85th centile). Previously defined indicators of increased risk of PD ("intermediate markers"), including smell loss, rapid eye movement-sleep behavior disorder, and finger-tapping speed, and incident PD were used as outcomes. The correlation of risk scores with intermediate markers and movement of individuals between risk groups was assessed each year and prospectively. Exploratory Cox regression analyses with incident PD as the dependent variable were performed. A total of 1323 participants were recruited at baseline and >79% completed assessments each year. Annual risk scores were correlated with intermediate markers of PD each year and baseline scores were correlated with intermediate markers during follow-up (all P values < 0.001). Incident PD diagnoses during follow-up were significantly associated with baseline risk score (hazard ratio = 4.39, P = .045). GBA variants or G2019S LRRK2 mutations were found in 47 participants, and the predictive power for incident PD was improved by the addition of genetic variants to risk scores. The online PREDICT-PD algorithm is a unique and simple method to identify indicators of PD risk. © 2017 The Authors. Movement Disorders published by Wiley Periodicals, Inc. on behalf of International Parkinson and Movement Disorder Society. © 2016 International Parkinson and Movement Disorder Society.

  3. Principal component analysis of dietary and lifestyle patterns in relation to risk of subtypes of esophageal and gastric cancer

    PubMed Central

    Silvera, Stephanie A. Navarro; Mayne, Susan T; Risch, Harvey A.; Gammon, Marilie D; Vaughan, Thomas; Chow, Wong-Ho; Dubin, Joel A; Dubrow, Robert; Schoenberg, Janet; Stanford, Janet L; West, A. Brian; Rotterdam, Heidrun; Blot, William J

    2011-01-01

    Purpose To perform pattern analyses of dietary and lifestyle factors in relation to risk of esophageal and gastric cancers. Methods We evaluated risk factors for esophageal adenocarcinoma (EA), esophageal squamous cell carcinoma (ESCC), gastric cardia adenocarcinoma (GCA), and other gastric cancers (OGA) using data from a population-based case-control study conducted in Connecticut, New Jersey, and western Washington state. Dietary/lifestyle patterns were created using principal component analysis (PCA). Impact of the resultant scores on cancer risk was estimated through logistic regression. Results PCA identified six patterns: meat/nitrite, fruit/vegetable, smoking/alcohol, legume/meat alternate, GERD/BMI, and fish/vitamin C. Risk of each cancer under study increased with rising meat/nitrite score. Risk of EA increased with increasing GERD/BMI score, and risk of ESCC rose with increasing smoking/alcohol score and decreasing GERD/BMI score. Fruit/vegetable scores were inversely associated with EA, ESCC, and GCA. Conclusions PCA may provide a useful approach for summarizing extensive dietary/lifestyle data into fewer interpretable combinations that discriminate between cancer cases and controls. The analyses suggest that meat/nitrite intake is associated with elevated risk of each cancer under study, while fruit/vegetable intake reduces risk of EA, ESCC, and GCA. GERD/obesity were confirmed as risk factors for EA and smoking/alcohol as risk factors for ESCC. PMID:21435900

  4. Risk factors for child maltreatment in an Australian population-based birth cohort.

    PubMed

    Doidge, James C; Higgins, Daryl J; Delfabbro, Paul; Segal, Leonie

    2017-02-01

    Child maltreatment and other adverse childhood experiences adversely influence population health and socioeconomic outcomes. Knowledge of the risk factors for child maltreatment can be used to identify children at risk and may represent opportunities for prevention. We examined a range of possible child, parent and family risk factors for child maltreatment in a prospective 27-year population-based birth cohort of 2443 Australians. Physical abuse, sexual abuse, emotional abuse, neglect and witnessing of domestic violence were recorded retrospectively in early adulthood. Potential risk factors were collected prospectively during childhood or reported retrospectively. Associations were estimated using bivariate and multivariate logistic regressions and combined into cumulative risk scores. Higher levels of economic disadvantage, poor parental mental health and substance use, and social instability were strongly associated with increased risk of child maltreatment. Indicators of child health displayed mixed associations and infant temperament was uncorrelated with maltreatment. Some differences were observed across types of maltreatment but risk profiles were generally similar. In multivariate analyses, nine independent risk factors were identified, including some that are potentially modifiable: economic disadvantage and parental substance use problems. Risk of maltreatment increased exponentially with the number of risk factors experienced, with prevalence of maltreatment in the highest risk groups exceeding 80%. A cumulative risk score based on the independent risk factors allowed identification of individuals at very high risk of maltreatment, while a score that incorporated all significant risk and protective factors provided better identification of low-risk individuals. Copyright © 2016 Elsevier Ltd. All rights reserved.
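    A cumulative risk score of this kind is simply a count of flagged risk factors. A minimal sketch, in which the factor names are illustrative labels drawn from the abstract rather than the study's exact variables:

```python
# Illustrative cumulative risk score: the count of flagged risk factors.
RISK_FACTORS = [
    "economic_disadvantage",
    "parental_substance_use_problems",
    "poor_parental_mental_health",
    "social_instability",
]

def cumulative_risk_score(profile):
    """Number of risk factors flagged True in `profile` (factor name -> bool).
    The abstract reports that maltreatment risk rose exponentially with this count."""
    return sum(bool(profile.get(factor, False)) for factor in RISK_FACTORS)
```

    For example, a profile flagging economic disadvantage and social instability scores 2 out of 4.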

  5. Gestational weight gain and the risk of offspring obesity at 10 and 16 years: a prospective cohort study in low-income women.

    PubMed

    Diesel, J C; Eckhardt, C L; Day, N L; Brooks, M M; Arslanian, S A; Bodnar, L M

    2015-09-01

    To study the association between gestational weight gain (GWG) and offspring obesity risk at ages chosen to approximate prepuberty (10 years) and postpuberty (16 years). Prospective pregnancy cohort. Pittsburgh, PA, USA. Low-income pregnant women (n = 514) receiving prenatal care at an obstetric residency clinic and their singleton offspring. Gestational weight gain was classified based on maternal GWG-for-gestational-age Z-score charts and was modelled using flexible spline terms in modified multivariable Poisson regression models. Obesity at 10 or 16 years, defined as body mass index (BMI) Z-scores ≥95th centile of the 2000 CDC references, based on measured height and weight. The prevalence of offspring obesity was 20% at 10 years and 22% at 16 years. In the overall sample, the risk of offspring obesity at 10 and 16 years increased when GWG exceeded a GWG Z-score of 0 SD (equivalent to 30 kg at 40 weeks); but for gains below a Z-score of 0 SD there was no relationship with child obesity risk. The association between GWG and offspring obesity varied by prepregnancy BMI. Among mothers with a pregravid BMI <25 kg/m², the risk of offspring obesity increased when GWG Z-score exceeded 0 SD, yet among overweight women (BMI ≥25 kg/m²), there was no association between GWG Z-scores and offspring obesity risk. Among lean women, higher GWG may have lasting effects on offspring obesity risk. © 2015 Royal College of Obstetricians and Gynaecologists.

  6. Antipsychotics and mortality: adjusting for mortality risk scores to address confounding by terminal illness.

    PubMed

    Park, Yoonyoung; Franklin, Jessica M; Schneeweiss, Sebastian; Levin, Raisa; Crystal, Stephen; Gerhard, Tobias; Huybrechts, Krista F

    2015-03-01

    To determine whether adjustment for prognostic indices specifically developed for nursing home (NH) populations affect the magnitude of previously observed associations between mortality and conventional and atypical antipsychotics. Cohort study. A merged data set of Medicaid, Medicare, Minimum Data Set (MDS), Online Survey Certification and Reporting system, and National Death Index for 2001 to 2005. Dual-eligible individuals aged 65 and older who initiated antipsychotic treatment in a NH (N=75,445). Three mortality risk scores (Mortality Risk Index Score, Revised MDS Mortality Risk Index, Advanced Dementia Prognostic Tool) were derived for each participant using baseline MDS data, and their performance was assessed using c-statistics and goodness-of-fit tests. The effect of adjusting for these indices in addition to propensity scores (PSs) on the association between antipsychotic medication and mortality was evaluated using Cox models with and without adjustment for risk scores. Each risk score showed moderate discrimination for 6-month mortality, with c-statistics ranging from 0.61 to 0.63. There was no evidence of lack of fit. Imbalances in risk scores between conventional and atypical antipsychotic users, suggesting potential confounding, were much lower within PS deciles than the imbalances in the full cohort. Accounting for each score in the Cox model did not change the relative risk estimates: 2.24 with PS-only adjustment versus 2.20, 2.20, and 2.22 after further adjustment for the three risk scores. Although causality cannot be proven based on nonrandomized studies, this study adds to the body of evidence rejecting explanations other than causality for the greater mortality risk associated with conventional antipsychotics than with atypical antipsychotics. © 2015, Copyright the Authors Journal compilation © 2015, The American Geriatrics Society.

  7. Comparative evaluation of Indian Diabetes Risk Score and Finnish Diabetes Risk Score for predicting risk of diabetes mellitus type II: A teaching hospital-based survey in Maharashtra.

    PubMed

    Pawar, Shivshakti D; Naik, Jayashri D; Prabhu, Priya; Jatti, Gajanan M; Jadhav, Sachin B; Radhe, B K

    2017-01-01

    India is fast becoming the diabetes capital of the world. This steadily increasing incidence of diabetes is putting an additional burden on health care in India. Unfortunately, half of all diabetic individuals are unaware of their diabetic status. Hence, there is an urgent need for an effective screening instrument to identify individuals at risk of diabetes. The aim was to evaluate and compare the diagnostic accuracy and clinical utility of the Indian Diabetes Risk Score (IDRS) and the Finnish Diabetes Risk Score (FINDRISC). This is a retrospective, record-based study of a diabetes detection camp organized by a teaching hospital. Of the 780 people who attended this camp voluntarily, 763 fulfilled the inclusion criteria of the study. The camp pro forma followed the World Health Organization STEPS guidelines for surveillance of noncommunicable diseases and covered basic sociodemographic characteristics, physical measurements, and clinical examination, followed by random blood glucose estimation for each individual. The diagnostic accuracy of IDRS and FINDRISC was compared using receiver operating characteristic (ROC) curves. Sensitivity, specificity, likelihood ratios, and positive and negative predictive values were compared, as was the clinical utility index (CUI) of each score. SPSS version 22, Stata 13, and R 3.2.9 were used. Of the 763 individuals, 38 were newly diagnosed diabetics. IDRS placed 347 people and FINDRISC 96 people in the high-risk category for diabetes. The odds ratio for developing diabetes among high-risk people was 10.70 for FINDRISC and 4.79 for IDRS. The areas under the ROC curves of the two scores did not differ (P = 0.98). The sensitivity and specificity of IDRS were 78.95% and 56.14%, whereas for FINDRISC they were 55.26% and 89.66%, respectively. The CUI was excellent (0.86) for FINDRISC, while for IDRS it was "satisfactory" (0.54). A Bland-Altman plot and Cohen's kappa suggested fair agreement between the two scores in measuring diabetes risk. 
The diagnostic accuracy and clinical utility of FINDRISC were better than those of IDRS.
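    The reported FINDRISC figures can be approximately reproduced from a 2×2 table. The cell counts below are back-calculated from the abstract's totals (38 diabetics, 96 high-risk) and sensitivity/specificity; they are reconstructions for illustration, not values reported directly by the study.

```python
# Approximate 2x2 table for FINDRISC, back-calculated from the abstract:
#                 diabetic   non-diabetic
# high-risk          tp=21        fp=75
# low-risk           fn=17       tn=650
tp, fn, fp, tn = 21, 17, 75, 650

sensitivity = tp / (tp + fn)          # fraction of diabetics flagged high-risk
specificity = tn / (tn + fp)          # fraction of non-diabetics flagged low-risk
odds_ratio = (tp * tn) / (fn * fp)    # cross-product odds ratio
```

    These counts give a sensitivity of 55.26%, a specificity of 89.66%, and an odds ratio of about 10.71, matching the abstract's 55.26%, 89.66%, and 10.70 up to rounding.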

  8. The Environmental Protection Agency's Community-Focused Exposure and Risk Screening Tool (C-FERST) and its potential use for environmental justice efforts.

    PubMed

    Zartarian, Valerie G; Schultz, Bradley D; Barzyk, Timothy M; Smuts, Marybeth; Hammond, Davyda M; Medina-Vera, Myriam; Geller, Andrew M

    2011-12-01

    Our primary objective was to provide higher quality, more accessible science to address challenges of characterizing local-scale exposures and risks for enhanced community-based assessments and environmental decision-making. After identifying community needs, priority environmental issues, and current tools, we designed and populated the Community-Focused Exposure and Risk Screening Tool (C-FERST) in collaboration with stakeholders, following a set of defined principles, and considered it in the context of environmental justice. C-FERST is a geographic information system and resource access Web tool under development for supporting multimedia community assessments. Community-level exposure and risk research is being conducted to address specific local issues through case studies. C-FERST can be applied to support environmental justice efforts. It incorporates research to develop community-level data and modeled estimates for priority environmental issues, and other relevant information identified by communities. Initial case studies are under way to refine and test the tool to expand its applicability and transferability. Opportunities exist for scientists to address the many research needs in characterizing local cumulative exposures and risks and for community partners to apply and refine C-FERST.

  9. Patient Ethnicity Affects Triage Assessments and Patient Prioritization in U.S. Department of Veterans Affairs Emergency Departments

    PubMed Central

    Vigil, Jacob M.; Coulombe, Patrick; Alcock, Joe; Kruger, Eric; Stith, Sarah S.; Strenth, Chance; Parshall, Mark; Cichowski, Sara B.

    2016-01-01

    Ethnic minority patients receive lower priority triage assignments in Veterans Affairs (VA) emergency departments (EDs) compared to White patients, but it is currently unknown whether this disparity arises from generalized biases across the triage assessment process or from differences in how objective and/or subjective institution-level or person-level information is incorporated into the triage assessment process, thus contributing to disparate treatment. The VA database of electronic medical records of patients who presented to the VA ED from 2008 to 2012 was used to measure patient ethnicity, self-reported pain intensity (PI) levels, heart rate (HR), respiratory rate (RR), and nurse-provided triage assignment, the Emergency Severity Index (ESI) score. Multilevel, random effects linear modeling was used to control for demographic and clinical characteristics of patients as well as age, gender, and experience of triage nurses. A total of 359,642 patient/provider encounters between 129,991 VA patients and 774 nurses were included in the study. Patients were 61% non-Hispanic White [NHW], 28% African-American, 7% Hispanic, 2% Asian-American, <1% American Indian/Alaska Native, and 1% mixed ethnicity. After controlling for demographic characteristics of nurses and patients, African-American, Hispanic, and mixed-ethnicity patients reported higher average PI scores but lower HRs and RRs than NHW patients. NHW patients received higher priority ESI ratings with lower PI when compared against African-American patients. NHW patients with low to moderate HRs also received higher priority ESI scoring than African-American, Hispanic, Asian-American, and Mixed-ethnicity patients; however, when HR was high NHWs received lower priority ESI ratings than each of the minority groups (except for African-Americans). This study provides evidence for systemic differences in how patients’ vital signs are applied for determining ESI scores for different ethnic groups. 
Additional prospective research will be needed to determine how this specific person-level mechanism affects healthcare quality and outcomes. PMID:27057847

  10. Aortic pulse wave velocity and HeartSCORE: improving cardiovascular risk stratification. a sub-analysis of the EDIVA (Estudo de DIstensibilidade VAscular) project.

    PubMed

    Pereira, T; Maldonado, J; Polónia, J; Silva, J A; Morais, J; Rodrigues, T; Marques, M

    2014-04-01

    HeartSCORE is a tool for assessing cardiovascular risk, basing its estimates on the relative weight of conventional cardiovascular risk factors. However, new markers of cardiovascular risk have been identified, such as aortic pulse wave velocity (PWV). The purpose of this study was to evaluate to what extent the incorporation of PWV in HeartSCORE increases its discriminative power for major cardiovascular events (MACE). This study is a sub-analysis of the EDIVA project, which is a prospective, multicenter, observational cohort study involving 2200 individuals of Portuguese nationality (1290 men and 910 women) aged between 18 and 91 years (mean 46.33 ± 13.76 years), with annual measurements of PWV (Complior). Only participants above 35 years old were included in the present re-analysis, resulting in a population of 1709 participants. All MACE - death, cerebrovascular accident, coronary accidents (coronary heart disease), peripheral arterial disease and renal failure - were recorded. During a mean follow-up period of 21.42 ± 10.76 months, there were 47 non-fatal MACE (2.1% of the sample). Cardiovascular risk was estimated in all patients based on the HeartSCORE risk factors. For the analysis, the refitted HeartSCORE and PWV were divided into three risk categories. The event-free survival at 2 years was 98.6%, 98.0% and 96.1%, respectively in the low-, intermediate- and high-risk categories of HeartSCORE (log-rank p < 0.001). The multi-adjusted hazard ratio (HR) for MACE per 1 standard deviation (SD) of PWV was 1.86 (95% CI 1.37-2.53, p < 0.001). The risk of MACE by tertiles of PWV and risk categories of the HeartSCORE increased linearly, and the risk was particularly more pronounced in the highest tertile of PWV for any category of the HeartSCORE, demonstrating an improvement in the prediction of cardiovascular risk. A high discriminative capacity of PWV was clearly evident even in groups of apparently intermediate cardiovascular risk. 
Measures of model fit, discrimination and calibration revealed an improvement in risk classification when PWV was added to the risk-factor model. The C statistics improved from 0.69 to 0.78 (adding PWV, p = 0.005). The net reclassification improvement (NRI) and integrated discrimination improvement (IDI) were also determined, and indicated further evidence of improvements in discrimination of the outcome when including PWV in the risk-factor model (NRI = 0.265; IDI = 0.012). The results clearly illustrate the benefits of integrating PWV in the risk assessment strategies, as advocated by HeartSCORE, insofar as it contributes to a better discriminative capacity of global cardiovascular risk, particularly in individuals with low or moderate cardiovascular risk.
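    The discrimination metric cited here (the C statistic) can be computed directly. A minimal pure-Python sketch on toy data (illustrative values, not the study's):

```python
from itertools import product

def c_statistic(scores, events):
    """C statistic (equivalently, AUC) for a binary outcome: the probability
    that a randomly chosen subject with the event has a higher risk score
    than a randomly chosen subject without it; ties count as 1/2."""
    cases = [s for s, e in zip(scores, events) if e]
    controls = [s for s, e in zip(scores, events) if not e]
    pairs = list(product(cases, controls))
    concordant = sum(1.0 if c > n else 0.5 if c == n else 0.0
                     for c, n in pairs)
    return concordant / len(pairs)

# Toy data: higher scores should track events (1 = event, 0 = no event).
print(c_statistic([0.9, 0.8, 0.7, 0.4, 0.3, 0.2], [1, 1, 0, 1, 0, 0]))  # ≈ 0.889
```

    A value of 0.5 indicates no discrimination and 1.0 perfect discrimination, which is why the reported move from 0.69 to 0.78 represents a meaningful gain.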

  11. The associations between a polygenic score, reproductive and menstrual risk factors and breast cancer risk.

    PubMed

    Warren Andersen, Shaneda; Trentham-Dietz, Amy; Gangnon, Ronald E; Hampton, John M; Figueroa, Jonine D; Skinner, Halcyon G; Engelman, Corinne D; Klein, Barbara E; Titus, Linda J; Newcomb, Polly A

    2013-07-01

    We evaluated whether 13 single nucleotide polymorphisms (SNPs) identified in genome-wide association studies interact with one another and with reproductive and menstrual risk factors in association with breast cancer risk. DNA samples and information on parity, breastfeeding, age at menarche, age at first birth, and age at menopause were collected through structured interviews from 1,484 breast cancer cases and 1,307 controls who participated in a population-based case-control study conducted in three US states. A polygenic score was created as the sum of risk allele copies multiplied by the corresponding log odds estimate. Logistic regression was used to test the associations between SNPs, the score, reproductive and menstrual factors, and breast cancer risk. Nonlinearity of the score was assessed by the inclusion of a quadratic term for the polygenic score. Interactions between the aforementioned variables were tested by including a cross-product term in models. We confirmed associations between rs13387042 (2q35), rs4973768 (SLC4A7), rs10941679 (5p12), rs2981582 (FGFR2), rs3817198 (LSP1), rs3803662 (TOX3), and rs6504950 (STXBP4) with breast cancer. Women in the score's highest quintile had a 2.2-fold increased risk compared with women in the lowest quintile (95 % confidence interval: 1.67-2.88). The quadratic polygenic score term was not significant in the model (p = 0.85), suggesting that the established breast cancer loci are not associated with risk beyond the sum of their risk alleles. Modification of the associations between menstrual and reproductive risk factors and breast cancer risk by the polygenic score was not observed. Our results suggest that interactions between breast cancer susceptibility loci and reproductive factors are not strong contributors to breast cancer risk.
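    The score construction described above (sum of risk-allele copies weighted by log odds estimates) is simple arithmetic. A minimal sketch, where the rs identifiers appear in the abstract but the odds ratios are hypothetical illustrative values, not the study's estimates:

```python
import math

# Hypothetical risk-allele odds ratios (illustrative values only).
log_or = {
    "rs13387042": math.log(1.12),
    "rs2981582": math.log(1.26),
    "rs3803662": math.log(1.20),
}

def polygenic_score(allele_counts, log_or):
    """Score = sum over SNPs of (risk-allele copies, 0/1/2) x log(OR)."""
    return sum(allele_counts[snp] * log_or[snp] for snp in log_or)

# One subject carrying 1, 2 and 0 copies of the three risk alleles:
subject = {"rs13387042": 1, "rs2981582": 2, "rs3803662": 0}
print(round(polygenic_score(subject, log_or), 3))  # ≈ 0.576
```

    Because the weights are log odds ratios, the score is additive on the log-odds scale, which is what makes the quadratic-term test above a natural check for departures from additivity.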

  12. Designing the Healthy Bodies, Healthy Souls Church-Based Diabetes Prevention Program through a Participatory Process

    ERIC Educational Resources Information Center

    Summers, Amber; Confair, Amy R.; Flamm, Laura; Goheer, Attia; Graham, Karlene; Muindi, Mwende; Gittelsohn, Joel

    2013-01-01

    Background: The Healthy Bodies, Healthy Souls (HBHS) program aims to reduce diabetes risk among urban African Americans by creating healthy food and physical activity environments within churches. Participant engagement supports the development of applicable intervention strategies by identifying priority concerns, resources, and opportunities.…

  13. International scientists' priorities for research on pharmaceutical and personal care products in the environment.

    PubMed

    Rudd, Murray A; Ankley, Gerald T; Boxall, Alistair B A; Brooks, Bryan W

    2014-10-01

    Pharmaceuticals and personal care products (PPCPs) are widely discharged into the environment via diverse pathways. The effects of PPCPs in the environment have potentially important human and ecosystem health implications, so credible, salient, and legitimate scientific evidence is needed to inform regulatory and policy responses that address potential risks. A recent "big questions" exercise with participants largely from North America identified 22 important research questions around the risks of PPCPs in the environment that would help address the most pressing knowledge gaps over the next decade. To expand that analysis, we developed a survey that was completed by 535 environmental scientists from 57 countries, of whom 49% identified environmental or analytical chemistry as their primary disciplinary background. They ranked the 22 original research questions and submitted 171 additional candidate research questions they felt were also of high priority. Of the original questions, the 3 perceived to be of highest importance related to: 1) the effects of long-term exposure to low concentrations of PPCP mixtures on nontarget organisms, 2) effluent treatment methods that can reduce the effects of PPCPs in the environment while not increasing the toxicity of whole effluents, and 3) the assessment of the environmental risks of metabolites and environmental transformation products of PPCPs. A question regarding the role of cultural perspectives in PPCP risk assessment was ranked as the lowest priority. There were significant differences in research orientation between scientists who completed English and Chinese language versions of the survey. We found that the Chinese respondents were strongly orientated to issues of managing risk profiles, effluent treatment, residue bioavailability, and regional assessment. Among English language respondents, further differences in research orientation were associated with respondents' level of consistency when ranking the survey's 15 comparisons. As internal decision-making consistency increased, there was increasing emphasis on the role of various other stressors relative to PPCPs and on risk prioritization. Respondents' consistency in their ranking choices was significantly and positively correlated with SETAC membership, number of publications, and longer survey completion times. Our research highlighted international scientists' research priorities and should help inform decisions about the type of hazard- and risk-based research needed to best inform decisions regarding PPCPs in the environment. The disciplinary training of a scientist or engineer appears to strongly influence preferences for research priorities for understanding PPCPs in the environment. The selection of participants and the depth and breadth of research prioritization efforts thus have potential effects on the outcomes of such exercises. Further elucidation of how patterns of research priority vary between academic and government scientists, and between scientists and other stakeholders, would be useful in the future and would provide information that helps focus scientific effort on socially relevant challenges relating to PPCPs in the environment. It also suggests the potential for future collaborative research between industry, government, and academia on environmental contaminants beyond PPCPs. © 2014 SETAC.

  14. Isocyanates and human health: multistakeholder information needs and research priorities.

    PubMed

    Lockey, James E; Redlich, Carrie A; Streicher, Robert; Pfahles-Hutchens, Andrea; Hakkinen, Pertti Bert J; Ellison, Gary L; Harber, Philip; Utell, Mark; Holland, John; Comai, Andrew; White, Marc

    2015-01-01

    To outline the knowledge gaps and research priorities identified by a broad base of stakeholders involved in the planning and participation of an international conference and research agenda workshop on isocyanates and human health held in Potomac, Maryland, in April 2013. A multimodal iterative approach was used for data collection including preconference surveys, review of a 2001 consensus conference on isocyanates, oral and poster presentations, focused break-out sessions, panel discussions, and postconference research agenda workshop. Participants included representatives of consumer and worker health, health professionals, regulatory agencies, academic and industry scientists, labor, and trade associations. Recommendations were summarized regarding knowledge gaps and research priorities in the following areas: worker and consumer exposures; toxicology, animal models, and biomarkers; human cancer risk; environmental exposure and monitoring; and respiratory epidemiology and disease, and occupational health surveillance.

  15. Family cumulative risk and at-risk kindergarteners' social competence: the mediating role of parent representations of the attachment relationship.

    PubMed

    Sparks, Lauren A; Trentacosta, Christopher J; Owusu, Erika; McLear, Caitlin; Smith-Darden, Joanne

    2018-08-01

    Secure attachment relationships have been linked to social competence in at-risk children. In the current study, we examined the role of parent secure base scripts in predicting at-risk kindergarteners' social competence. Parent representations of secure attachment were hypothesized to mediate the relationship between lower family cumulative risk and children's social competence. Participants included 106 kindergarteners and their primary caregivers recruited from three urban charter schools serving low-income families as a part of a longitudinal study. Lower levels of cumulative risk predicted greater secure attachment representations in parents, and scores on the secure base script assessment predicted children's social competence. An indirect relationship between lower cumulative risk and kindergarteners' social competence via parent secure base script scores was also supported. Parent script-based representations of the attachment relationship appear to be an important link between lower levels of cumulative risk and low-income kindergarteners' social competence. Implications of these findings for future interventions are discussed.

  16. Assuring safety without animal testing: Unilever's ongoing research programme to deliver novel ways to assure consumer safety.

    PubMed

    Westmoreland, Carl; Carmichael, Paul; Dent, Matt; Fentem, Julia; MacKay, Cameron; Maxwell, Gavin; Pease, Camilla; Reynolds, Fiona

    2010-01-01

    Assuring consumer safety without the generation of new animal data is currently a considerable challenge. However, through the application of new technologies and the further development of risk-based approaches for safety assessment, we remain confident it is ultimately achievable. For many complex, multi-organ consumer safety endpoints, the development, evaluation and application of new, non-animal approaches is hampered by a lack of biological understanding of the underlying mechanistic processes involved. The enormity of this scientific challenge should not be underestimated. To tackle this challenge a substantial research programme was initiated by Unilever in 2004 to critically evaluate the feasibility of a new conceptual approach based upon the following key components: (1) developing new, exposure-driven risk assessment approaches; (2) developing new biological (in vitro) and computer-based (in silico) predictive models; and (3) evaluating the applicability of new technologies for generating data (e.g. "omics", informatics) and for integrating new types of data (e.g. systems approaches) for risk-based safety assessment. Our research efforts are focussed in the priority areas of skin allergy, cancer and general toxicity (including inhaled toxicity). In all of these areas, a long-term investment is essential to increase the scientific understanding of the underlying biology and molecular mechanisms that we believe will ultimately form a sound basis for novel risk assessment approaches. Our research programme in these priority areas consists of in-house research as well as Unilever-sponsored academic research, involvement in EU-funded projects (e.g. Sens-it-iv, Carcinogenomics), participation in cross-industry collaborative research (e.g. Colipa, EPAA) and ongoing involvement with other scientific initiatives on non-animal approaches to risk assessment (e.g. UK NC3Rs, US "Human Toxicology Project" consortium).

  17. The TRIAGE-ProADM Score for an Early Risk Stratification of Medical Patients in the Emergency Department - Development Based on a Multi-National, Prospective, Observational Study

    PubMed Central

    Hausfater, Pierre; Amin, Devendra; Amin, Adina; Canavaggio, Pauline; Sauvin, Gabrielle; Bernard, Maguy; Conca, Antoinette; Haubitz, Sebastian; Struja, Tristan; Huber, Andreas; Mueller, Beat; Schuetz, Philipp

    2016-01-01

    Introduction: The inflammatory biomarker pro-adrenomedullin (ProADM) provides additional prognostic information for the risk stratification of general medical emergency department (ED) patients. The aim of this analysis was to develop a triage algorithm for improved prognostication and later use in an interventional trial. Methods: We used data from the multi-national, prospective, observational TRIAGE trial including consecutive medical ED patients from Switzerland, France and the United States. We investigated triage effects when adding ProADM at two established cut-offs to a five-level ED triage score with respect to adverse clinical outcome. Results: Mortality in the 6586 ED patients showed a step-wise, 25-fold increase from 0.6% to 4.5% and 15.4%, respectively, across the three ProADM groups (≤0.75 nmol/L, >0.75–1.5 nmol/L, >1.5 nmol/L; ANOVA p < 0.0001). Risk stratification by combining ProADM cut-off groups with the triage score identified 1662 patients (25.2% of the population) at very low risk of mortality (0.3%, n = 5) and 425 patients (6.5% of the population) at very high risk of mortality (19.3%, n = 82). Risk estimation using ProADM and the triage score in a logistic regression model allowed more accurate risk estimation in the whole population, classifying 3255 patients (49.4% of the population) into the low-risk group (0.3% mortality, n = 9) and 1673 (25.4% of the population) into the high-risk group (15.1% mortality, n = 252). Conclusions: Within this large international multicenter study, a combined triage score based on ProADM and established triage scores allowed more accurate mortality risk discrimination. The TRIAGE-ProADM score improved identification both of patients at the highest risk of mortality, who may benefit from early therapeutic interventions (rule in), and of low-risk patients, for whom deferred treatment may be possible without negatively affecting outcome (rule out). PMID:28005916
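    The two biomarker cut-offs define three groups whose in-cohort mortality the abstract reports. A minimal binning sketch (the group labels are ours, and the mortality figures are the observed cohort rates quoted above, not a fitted model):

```python
# Observed in-cohort mortality per ProADM group, as reported in the abstract.
MORTALITY = {"low": 0.006, "intermediate": 0.045, "high": 0.154}

def proadm_group(proadm_nmol_l):
    """Bin a ProADM value at the abstract's two cut-offs
    (<=0.75, >0.75-1.5, >1.5 nmol/L)."""
    if proadm_nmol_l <= 0.75:
        return "low"
    if proadm_nmol_l <= 1.5:
        return "intermediate"
    return "high"

for value in (0.6, 1.2, 1.9):
    group = proadm_group(value)
    print(value, group, MORTALITY[group])
```

    In the study itself these biomarker groups were combined with the five-level triage score (and later a logistic regression model) rather than used alone.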

  18. Concordance with World Cancer Research Fund/American Institute for Cancer Research (WCRF/AICR) guidelines for cancer prevention and obesity-related cancer risk in the Framingham Offspring cohort (1991-2008).

    PubMed

    Makarem, Nour; Lin, Yong; Bandera, Elisa V; Jacques, Paul F; Parekh, Niyati

    2015-02-01

    This prospective cohort study evaluates associations between healthful behaviors consistent with WCRF/AICR cancer prevention guidelines and obesity-related cancer risk, as a third of cancers are estimated to be preventable. The study sample consisted of adults from the Framingham Offspring cohort (n = 2,983). From 1991 to 2008, 480 incident doctor-diagnosed obesity-related cancers were identified. Data on diet (measured by a food frequency questionnaire), anthropometric measures, and self-reported physical activity, collected in 1991, were used to construct a 7-component score based on recommendations for body fatness, physical activity, foods that promote weight gain, plant foods, animal foods, alcohol, and food preservation, processing, and preparation. Multivariable Cox regression models were used to estimate associations between the computed score, its components, and subcomponents in relation to obesity-related cancer risk. The overall score was not associated with obesity-related cancer risk after adjusting for age, sex, smoking, energy, and preexisting conditions (HR 0.94, 95 % CI 0.86-1.02). When score components were evaluated separately, every unit increment in the alcohol score was associated with a 29 % lower risk of obesity-related cancers (HR 0.71, 95 % CI 0.51-0.99) and a 49-71 % reduced risk of breast, prostate, and colorectal cancers. Every unit increment in the subcomponent score for non-starchy plant foods (fruits, vegetables, and legumes) among participants who consume starchy vegetables was associated with a 66 % reduced risk of colorectal cancer (HR 0.44, 95 % CI 0.22-0.88). Lower alcohol consumption and a plant-based diet consistent with the cancer prevention guidelines were associated with reduced risk of obesity-related cancers in this population.

  19. Molecular Classification Substitutes for the Prognostic Variables Stage, Age, and MYCN Status in Neuroblastoma Risk Assessment.

    PubMed

    Rosswog, Carolina; Schmidt, Rene; Oberthuer, André; Juraeva, Dilafruz; Brors, Benedikt; Engesser, Anne; Kahlert, Yvonne; Volland, Ruth; Bartenhagen, Christoph; Simon, Thorsten; Berthold, Frank; Hero, Barbara; Faldum, Andreas; Fischer, Matthias

    2017-12-01

    Current risk stratification systems for neuroblastoma patients consider clinical, histopathological, and genetic variables, and additional prognostic markers have been proposed in recent years. We here sought to select highly informative covariates in a multistep strategy based on consecutive Cox regression models, resulting in a risk score that integrates hazard ratios of prognostic variables. A cohort of 695 neuroblastoma patients was divided into a discovery set (n=75) for multigene predictor generation, a training set (n=411) for risk score development, and a validation set (n=209). Relevant prognostic variables were identified by stepwise multivariable L1-penalized least absolute shrinkage and selection operator (LASSO) Cox regression, followed by backward selection in multivariable Cox regression, and then integrated into a novel risk score. The variables stage, age, MYCN status, and two multigene predictors, NB-th24 and NB-th44, were selected as independent prognostic markers by LASSO Cox regression analysis. Following backward selection, only the multigene predictors were retained in the final model. Integration of these classifiers in a risk scoring system distinguished three patient subgroups that differed substantially in their outcome. The scoring system discriminated patients with diverging outcome in the validation cohort (5-year event-free survival, 84.9 ± 3.4% vs 63.6 ± 14.5% vs 31.0 ± 5.4%; P<.001), and its prognostic value was validated by multivariable analysis. We here propose a translational strategy for developing risk assessment systems based on hazard ratios of relevant prognostic variables. Our final neuroblastoma risk score comprised two multigene predictors only, supporting the notion that molecular properties of the tumor cells strongly impact clinical courses of neuroblastoma patients. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  20. In vivo and in vitro methods for evaluating soil arsenic bioavailability: relevant to human health risk assessment

    EPA Science Inventory

    Arsenic (As) is the most frequently occurring contaminant on the priority list of hazardous substances, which lists substances of greatest public health concern to people living at or near U.S. National Priorities List sites. Accurate assessment of human health risks from exposure...

  1. Support Vector Hazards Machine: A Counting Process Framework for Learning Risk Scores for Censored Outcomes.

    PubMed

    Wang, Yuanjia; Chen, Tianle; Zeng, Donglin

    2016-01-01

    Learning risk scores to predict dichotomous or continuous outcomes using machine learning approaches has been studied extensively. However, how to learn risk scores for time-to-event outcomes subject to right censoring has received little attention until recently. Existing approaches rely on inverse probability weighting or rank-based regression, which may be inefficient. In this paper, we develop a new support vector hazards machine (SVHM) approach to predict censored outcomes. Our method is based on predicting the counting process associated with the time-to-event outcomes among subjects at risk via a series of support vector machines. Introducing counting processes to represent time-to-event data leads to a connection between support vector machines in supervised learning and hazards regression in standard survival analysis. To account for the different at-risk populations at observed event times, a time-varying offset is used in estimating risk scores. The resulting optimization is a convex quadratic programming problem that can easily incorporate non-linearity using the kernel trick. We demonstrate an interesting link from the profiled empirical risk function of SVHM to the Cox partial likelihood. We then formally show that SVHM is optimal in discriminating the covariate-specific hazard function from the population average hazard function, and establish the consistency and learning rate of the predicted risk using the estimated risk scores. Simulation studies show improved prediction accuracy of the event times using SVHM compared to existing machine learning methods and conventional approaches. Finally, we analyze data from two real-world biomedical studies in which we use clinical markers and neuroimaging biomarkers to predict the age at onset of a disease, and demonstrate the superiority of SVHM in distinguishing high-risk from low-risk subjects.

  2. Apparent diffusion coefficient on magnetic resonance imaging (MRI) in bladder cancer: relations with recurrence/progression risk

    PubMed Central

    Kikuchi, Ken; Shigihara, Takeshi; Hashimoto, Yuko; Miyajima, Masayuki; Haga, Nobuhiro; Kojima, Yoshiyuki; Shishido, Fumio

    2017-01-01

    AIMS: To evaluate the relationship between the apparent diffusion coefficient (ADC) value for bladder cancer and the risk of recurrence/progression after transurethral resection (TUR). METHODS: Forty-one patients with initial, non-muscle-invasive bladder cancer underwent MRI from 2009 to 2012. Two radiologists measured ADC values. A pathologist calculated the recurrence/progression scores, and risk was classified based on the scores. Pearson’s correlation was used to analyze the correlations of ADC value with each score and with each risk group, and the optimal cut-off value was established based on receiver operating characteristic (ROC) curve analysis. Furthermore, the relationship between actual recurrence/progression and ADC values was examined with an unpaired t-test. RESULTS: There were significant correlations between ADC value and the recurrence score as well as the progression score (P<0.01 for both). There were also significant correlations between ADC value and the recurrence risk group as well as the progression risk group (P=0.042 and P<0.01, respectively). The ADC cut-off value on ROC analysis was 1.365 (sensitivity 100%; specificity 97.4%) between the low and intermediate recurrence risk groups, 1.024 (sensitivity 47.4%; specificity 100%) between the intermediate and high recurrence risk groups, 1.252 (sensitivity 83.3%; specificity 81.3%) between the low and intermediate progression risk groups, and 0.955 (sensitivity 87.5%; specificity 63.2%) between the intermediate and high progression risk groups. The difference in ADC values between the recurrence and non-recurrence groups on the unpaired t-test was significant (P<0.05). CONCLUSION: ADC on MRI in bladder cancer could be a useful, non-invasive measurement for estimating the risks of recurrence and progression. PMID:28680010

  3. Oral Hygiene and Cardiometabolic Disease Risk in the Survey of the Health of Wisconsin

    PubMed Central

    VanWormer, Jeffrey J.; Acharya, Amit; Greenlee, Robert T.; Nieto, F. Javier

    2012-01-01

    Objectives Poor oral health is an increasingly recognized risk factor for cardiovascular disease (CVD) and type 2 diabetes (T2D), but little is known about the association between toothbrushing or flossing and cardiometabolic disease risk. The purpose of this study was to examine the degree to which an oral hygiene index was associated with CVD and T2D risk scores among disease-free adults in the Survey of the Health of Wisconsin. Methods All variables were measured in 2008–2010 in this cross-sectional design. Based on toothbrushing and flossing frequency, an oral hygiene index (poor, fair, good, excellent) was created as the primary predictor variable. The outcomes, CVD and T2D risk scores, were based on previous estimates from large cohort studies. There were 712 and 296 individuals with complete data available for linear regression analyses in the CVD and T2D samples, respectively. Results After covariate adjustment, the final model indicated that participants in the excellent (β±SE=−0.019±0.008, p=0.020) oral hygiene category had a significantly lower CVD risk score compared with participants in the poor oral hygiene category. Sensitivity analyses indicated that both toothbrushing and flossing were independently associated with the CVD risk score and various modifiable risk factors. Oral hygiene was not significantly associated with T2D risk score. Conclusions Regular toothbrushing and flossing are associated with a more favorable CVD risk profile, but more experimental research is needed in this area to precisely determine the effects of various oral self-care maintenance behaviors on the control of individual cardiometabolic risk factors. These findings may inform future joint medical-dental initiatives designed to close gaps in the primary prevention of oral and systemic diseases. PMID:23106415

  4. Improving Immunization Rates Using Lean Six Sigma Processes: Alliance of Independent Academic Medical Centers National Initiative III Project.

    PubMed

    Hina-Syeda, Hussaini; Kimbrough, Christina; Murdoch, William; Markova, Tsveti

    2013-01-01

    Quality improvement education and work in interdisciplinary teams is a healthcare priority. Healthcare systems are trying to meet core measures and provide excellent patient care, thus improving their Hospital Consumer Assessment of Healthcare Providers & Systems scores. Crittenton Hospital Medical Center in Rochester Hills, MI, aligned educational and clinical objectives, focusing on improving immunization rates against pneumonia and influenza prior to the rates being implemented as core measures. Improving immunization rates prevents infections, minimizes hospitalizations, and results in overall improved patient care. Teaching hospitals offer an effective way to work on clinical projects by bringing together the skill sets of residents, faculty, and hospital staff to achieve superior results. We designed and implemented a structured curriculum in which interdisciplinary teams acquired knowledge on quality improvement and teamwork while focusing on a specific clinical project: improving global immunization rates. We used the Lean Six Sigma process tools to quantify the initial process capability to immunize against pneumococcus and influenza. The hospital's process to vaccinate against pneumonia overall was operating at a Z score of 3.13, and the influenza vaccination Z score was 2.53. However, the process to vaccinate high-risk patients against pneumonia operated at a Z score of 1.96. Improvement in immunization rates of high-risk patients became the focus of the project. After the implementation of solutions, the process to vaccinate high-risk patients against pneumonia operated at a Z score of 3.9 with a defects-per-million-opportunities rate of 9,346 and a yield of 93.5%. Revisions to the adult assessment form fixed 80% of the problems identified. This process improvement project was not only beneficial in terms of improved quality of patient care but was also a positive learning experience for the interdisciplinary team, particularly for the residents. The hospital has completed quality improvement projects in the past; however, this project was the first in which residents were actively involved. The didactic components and experiential learning were powerfully synergistic. This and similar projects can have far-reaching implications in terms of promoting patient health and improving the quality of care delivered by healthcare systems and teaching hospitals.
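    The Z scores and defect rates quoted above are related through the conventional Six Sigma 1.5-sigma long-term shift (a common-practice assumption, not stated in the abstract). A short sketch recovers the reported post-improvement figure:

```python
from statistics import NormalDist

def sigma_level(dpmo, shift=1.5):
    """Convert defects per million opportunities (DPMO) to a short-term
    sigma (Z) level, applying the conventional 1.5-sigma long-term shift."""
    return NormalDist().inv_cdf(1 - dpmo / 1_000_000) + shift

# The abstract's post-improvement figure: 9,346 DPMO corresponds to Z ≈ 3.9.
print(round(sigma_level(9346), 1))  # → 3.9
```

    The same conversion run in reverse (sigma level to DPMO) is how the initial Z scores of 3.13, 2.53, and 1.96 would be translated back into defect rates.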

  5. Validation of an imaging-based cardiovascular risk score in a Scottish population.

    PubMed

    Kockelkoren, Remko; Jairam, Pushpa M; Murchison, John T; Debray, Thomas P A; Mirsadraee, Saeed; van der Graaf, Yolanda; Jong, Pim A de; van Beek, Edwin J R

    2018-01-01

    A radiological risk score that determines 5-year cardiovascular disease (CVD) risk using routine care CT and patient information readily available to radiologists was previously developed. External validation in a Scottish population was performed to assess the applicability and validity of the risk score in other populations. 2915 subjects aged ≥40 years who underwent routine clinical chest CT scanning for non-cardiovascular diagnostic indications were followed up until first diagnosis of, or death from, CVD. Using a case-cohort approach, all cases and a random sample of 20% of the participants' CT examinations were visually graded for cardiovascular calcifications, and cardiac diameter was measured. The radiological risk score was determined using imaging findings, age, gender, and CT indication. Performance on 5-year CVD risk prediction was assessed. 384 events occurred in 2124 subjects during a mean follow-up of 4.25 years (0-6.4 years). The risk score demonstrated reasonable performance in the studied population. Calibration showed good agreement between actual and 5-year predicted risk of CVD. The c-statistic was 0.71 (95% CI: 0.67-0.75). The radiological CVD risk score performed adequately in the Scottish population, offering a potential novel strategy for identifying patients at high risk of developing cardiovascular disease using routine care CT data. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Multilocus genetic risk scores for venous thromboembolism risk assessment.

    PubMed

    Soria, José Manuel; Morange, Pierre-Emmanuel; Vila, Joan; Souto, Juan Carlos; Moyano, Manel; Trégouët, David-Alexandre; Mateo, José; Saut, Noémi; Salas, Eduardo; Elosua, Roberto

    2014-10-23

    Genetics plays an important role in venous thromboembolism (VTE). Factor V Leiden (FVL or rs6025) and prothrombin gene G20210A (PT or rs1799963) are the genetic variants currently tested for VTE risk assessment. We hypothesized that primary VTE risk assessment can be improved by using genetic risk scores with more genetic markers than just FVL-rs6025 and prothrombin gene PT-rs1799963. To this end, we have designed a new genetic risk score called Thrombo inCode (TiC). TiC was evaluated in terms of discrimination (Δ of the area under the receiver operating characteristic curve) and reclassification (integrated discrimination improvement and net reclassification improvement). This evaluation was performed using 2 age- and sex-matched case-control populations: SANTPAU (248 cases, 249 controls) and the Marseille Thrombosis Association study (MARTHA; 477 cases, 477 controls). TiC was compared with other literature-based genetic risk scores. TiC including F5 rs6025/rs118203906/rs118203905, F2 rs1799963, F12 rs1801020, F13 rs5985, SERPINC1 rs121909548, and SERPINA10 rs2232698 plus the A1 blood group (rs8176719, rs7853989, rs8176743, rs8176750) improved the area under the curve compared with a model based only on F5-rs6025 and F2-rs1799963 in SANTPAU (0.677 versus 0.575, P<0.001) and MARTHA (0.605 versus 0.576, P=0.008). TiC showed good integrated discrimination improvement of 5.49 (P<0.001) for SANTPAU and 0.96 (P=0.045) for MARTHA. Among the genetic risk scores evaluated, the proportion of VTE risk variance explained by TiC was the highest. We conclude that TiC greatly improves prediction of VTE risk compared with other genetic risk scores. TiC should improve prevention, diagnosis, and treatment of VTE. © 2014 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.

  7. Multilocus Genetic Risk Scores for Venous Thromboembolism Risk Assessment

    PubMed Central

    Soria, José Manuel; Morange, Pierre‐Emmanuel; Vila, Joan; Souto, Juan Carlos; Moyano, Manel; Trégouët, David‐Alexandre; Mateo, José; Saut, Noémi; Salas, Eduardo; Elosua, Roberto

    2014-01-01

    Background Genetics plays an important role in venous thromboembolism (VTE). Factor V Leiden (FVL or rs6025) and prothrombin gene G20210A (PT or rs1799963) are the genetic variants currently tested for VTE risk assessment. We hypothesized that primary VTE risk assessment can be improved by using genetic risk scores with more genetic markers than just FVL‐rs6025 and prothrombin gene PT‐rs1799963. To this end, we have designed a new genetic risk score called Thrombo inCode (TiC). Methods and Results TiC was evaluated in terms of discrimination (Δ of the area under the receiver operating characteristic curve) and reclassification (integrated discrimination improvement and net reclassification improvement). This evaluation was performed using 2 age‐ and sex‐matched case–control populations: SANTPAU (248 cases, 249 controls) and the Marseille Thrombosis Association study (MARTHA; 477 cases, 477 controls). TiC was compared with other literature‐based genetic risk scores. TiC including F5 rs6025/rs118203906/rs118203905, F2 rs1799963, F12 rs1801020, F13 rs5985, SERPINC1 rs121909548, and SERPINA10 rs2232698 plus the A1 blood group (rs8176719, rs7853989, rs8176743, rs8176750) improved the area under the curve compared with a model based only on F5‐rs6025 and F2‐rs1799963 in SANTPAU (0.677 versus 0.575, P<0.001) and MARTHA (0.605 versus 0.576, P=0.008). TiC showed good integrated discrimination improvement of 5.49 (P<0.001) for SANTPAU and 0.96 (P=0.045) for MARTHA. Among the genetic risk scores evaluated, the proportion of VTE risk variance explained by TiC was the highest. Conclusions We conclude that TiC greatly improves prediction of VTE risk compared with other genetic risk scores. TiC should improve prevention, diagnosis, and treatment of VTE. PMID:25341889
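    The reclassification metric used in this record, the integrated discrimination improvement (IDI), has a simple closed form: the gain in discrimination slope (mean predicted risk among cases minus among controls) of the richer model over the simpler one. A minimal sketch; the probabilities below are invented toy values, not study data:

```python
def idi(new_cases, new_controls, old_cases, old_controls):
    """Integrated discrimination improvement: change in discrimination
    slope (mean predicted risk in cases minus controls) between the
    new and old models."""
    mean = lambda xs: sum(xs) / len(xs)
    slope_new = mean(new_cases) - mean(new_controls)
    slope_old = mean(old_cases) - mean(old_controls)
    return slope_new - slope_old

# Toy predicted probabilities (not study data):
delta = idi([0.8, 0.6], [0.2, 0.4], [0.6, 0.5], [0.4, 0.5])  # 0.4 - 0.1 = 0.3
```

    A positive IDI, as reported for TiC in both cohorts, means the new model pushes predicted risks for cases and controls further apart on average.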

  8. Identifying Aboriginal-specific AUDIT-C and AUDIT-3 cutoff scores for at-risk, high-risk, and likely dependent drinkers using measures of agreement with the 10-item Alcohol Use Disorders Identification Test.

    PubMed

    Calabria, Bianca; Clifford, Anton; Shakeshaft, Anthony P; Conigrave, Katherine M; Simpson, Lynette; Bliss, Donna; Allan, Julaine

    2014-09-01

    The Alcohol Use Disorders Identification Test (AUDIT) is a 10-item alcohol screener that has been recommended for use in Aboriginal primary health care settings. The time it takes respondents to complete AUDIT, however, has proven to be a barrier to its routine delivery. Two shorter versions, AUDIT-C and AUDIT-3, have been used as screening instruments in primary health care. This paper aims to identify the AUDIT-C and AUDIT-3 cutoff scores that most closely identify individuals classified as being at-risk drinkers, high-risk drinkers, or likely alcohol dependent by the 10-item AUDIT. Two cross-sectional surveys were conducted from June 2009 to May 2010 and from July 2010 to June 2011. Aboriginal Australian participants (N = 156) were recruited through an Aboriginal Community Controlled Health Service, and a community-based drug and alcohol treatment agency in rural New South Wales (NSW), and through community-based Aboriginal groups in Sydney NSW. Sensitivity, specificity, and positive and negative predictive values of each score on the AUDIT-C and AUDIT-3 were calculated, relative to cutoff scores on the 10-item AUDIT for at-risk, high-risk, and likely dependent drinkers. Receiver operating characteristic (ROC) curve analyses were conducted to measure the detection characteristics of AUDIT-C and AUDIT-3 for the three categories of risk. The areas under the receiver operating characteristic (AUROC) curves were high for drinkers classified as being at-risk, high-risk, and likely dependent. Recommended cutoff scores for Aboriginal Australians are as follows: at-risk drinkers AUDIT-C ≥ 5, AUDIT-3 ≥ 1; high-risk drinkers AUDIT-C ≥ 6, AUDIT-3 ≥ 2; and likely dependent drinkers AUDIT-C ≥ 9, AUDIT-3 ≥ 3. Adequate sensitivity and specificity were achieved for recommended cutoff scores. AUROC curves were above 0.90.
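    The recommended cutoffs translate directly into a screening rule. A minimal sketch using the study's cutoff scores (the function and category labels are illustrative, not from the paper):

```python
# Aboriginal-specific cutoffs recommended in the study:
# (at-risk, high-risk, likely dependent)
CUTOFFS = {"AUDIT-C": (5, 6, 9), "AUDIT-3": (1, 2, 3)}

def classify(instrument, score):
    """Map a short-form AUDIT score to the study's risk categories."""
    at_risk, high_risk, dependent = CUTOFFS[instrument]
    if score >= dependent:
        return "likely dependent"
    if score >= high_risk:
        return "high-risk"
    if score >= at_risk:
        return "at-risk"
    return "below at-risk cutoff"
```

    For example, an AUDIT-3 score of 2 meets the high-risk cutoff, while an AUDIT-C score of 4 falls below the at-risk cutoff.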

  9. Measuring coding intensity in the Medicare Advantage program.

    PubMed

    Kronick, Richard; Welch, W Pete

    2014-01-01

    In 2004, Medicare implemented a system of paying Medicare Advantage (MA) plans that gave them greater incentive than fee-for-service (FFS) providers to report diagnoses. The data were risk scores for all Medicare beneficiaries for 2004-2013 and Medicare Current Beneficiary Survey (MCBS) data for 2006-2011; the measures were the change in average risk score for all enrollees and for stayers (beneficiaries who were in either FFS or MA for two consecutive years), and prevalence rates by Hierarchical Condition Category (HCC). Each year the average MA risk score increased faster than the average FFS score. Using the risk adjustment model in place in 2004, the average MA score as a ratio of the average FFS score would have increased from 90% in 2004 to 109% in 2013. Using the model partially implemented in 2014, the ratio would have increased from 88% to 102%. The increase in relative MA scores appears largely to reflect changes in diagnostic coding, not real increases in the morbidity of MA enrollees. In survey-based data for 2006-2011, the MA-FFS ratio of risk scores remained roughly constant at 96%. Intensity of coding varies widely by contract, with some contracts coding very similarly to FFS and others coding much more intensely than the MA average. Underpinning this relative growth in scores is particularly rapid relative growth in a subset of HCCs. Medicare has taken significant steps to mitigate the effects of coding intensity in MA, including implementing a 3.4% coding intensity adjustment in 2010 and revising the risk adjustment model in 2013 and 2014. Given the continued relative increase in the average MA risk score, further policy changes will likely be necessary.

  10. The PER (Preoperative Esophagectomy Risk) Score: A Simple Risk Score to Predict Short-Term and Long-Term Outcome in Patients with Surgically Treated Esophageal Cancer.

    PubMed

    Reeh, Matthias; Metze, Johannes; Uzunoglu, Faik G; Nentwich, Michael; Ghadban, Tarik; Wellner, Ullrich; Bockhorn, Maximilian; Kluge, Stefan; Izbicki, Jakob R; Vashist, Yogesh K

    2016-02-01

    Esophageal resection in patients with esophageal cancer (EC) is still associated with high mortality and morbidity rates. We aimed to develop a simple preoperative risk score to predict short-term and long-term outcomes in patients with EC treated by esophageal resection. In total, 498 patients with esophageal carcinoma who underwent esophageal resection were included in this retrospective cohort study. Three preoperative esophagectomy risk (PER) groups were defined based on preoperative functional evaluation of different organ systems using validated tools (revised cardiac risk index, model for end-stage liver disease score, and pulmonary function test). Clinicopathological parameters, morbidity, and mortality, as well as disease-free survival (DFS) and overall survival (OS), were correlated with the PER score. The PER score significantly predicted the short-term outcome of patients with EC who underwent esophageal resection. PER 2 and PER 3 patients had at least double the risk of morbidity and mortality compared to PER 1 patients. Furthermore, a higher PER score was associated with shorter DFS (P < 0.001) and OS (P < 0.001). The PER score was identified as an independent predictor of tumor recurrence (hazard ratio [HR] 2.1; P < 0.001) and OS (HR 2.2; P < 0.001). The PER score allows preoperative objective allocation of patients with EC into different risk categories for morbidity, mortality, and long-term outcome. Multicenter studies are needed for independent validation of the PER score.

  11. Trace metals accumulation in soil irrigated with polluted water and assessment of human health risk from vegetable consumption in Bangladesh.

    PubMed

    Islam, Md Atikul; Romić, Davor; Akber, Md Ali; Romić, Marija

    2018-02-01

    Trace metal accumulation in soil irrigated with polluted water and the human health risk from vegetable consumption were assessed based on data available in the literature on metal pollution of water, soil, sediment, and vegetables from the cities of Bangladesh. The quantitative data on metal concentrations, their contamination levels, and their pollution sources have not been systematically gathered and studied so far. Data on metal concentrations, sources, contamination levels, sample collection, and the analytical tools used were collected, compared, and discussed. The USEPA-recommended method for health risk assessment was used to estimate human risk from vegetable consumption. Concentrations of metals in water were highly variable, and the mean concentrations of Cd, Cr, Cu, and As in water were higher than the FAO irrigation water quality standard. In most cases, mean concentrations of metals in soil were higher than the Bangladesh background values. Based on geoaccumulation index (Igeo) values, the soils of Dhaka city are considered highly contaminated. The Igeo values indicate Cd, As, Cu, Ni, Pb, and Cr contamination of agricultural soils and sediments in cities throughout Bangladesh. Polluted-water irrigation and agrochemicals are identified as the dominant sources of metals in agricultural soils. Vegetable contamination by metals poses both non-carcinogenic and carcinogenic risks to the public. Based on the results of the pollution and health risk assessments, Cd, As, Cr, Cu, Pb, and Ni are identified as the priority control metals, and Dhaka is recommended as the priority control city. This study provides quantitative evidence of the critical need for strengthened wastewater discharge regulations to protect residents from heavy metal discharges into the environment.
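    USEPA-style non-carcinogenic risk estimates of this kind are commonly expressed as a target hazard quotient (THQ). The sketch below uses the standard chronic-intake form; the parameter values are hypothetical placeholders, and the paper's actual exposure parameters are not reproduced here:

```python
def target_hazard_quotient(c, ir, ef, ed, rfd, bw, at):
    """THQ = (C * IR * EF * ED) / (RfD * BW * AT); values > 1 suggest
    potential non-carcinogenic risk.
    c:   metal concentration in the vegetable (mg/kg)
    ir:  vegetable intake rate (kg/day)
    ef:  exposure frequency (days/year)
    ed:  exposure duration (years)
    rfd: oral reference dose (mg/kg/day)
    bw:  body weight (kg)
    at:  averaging time (days)"""
    return (c * ir * ef * ed) / (rfd * bw * at)

# Hypothetical example: a metal at 0.2 mg/kg, 0.1 kg/day vegetable intake,
# 30 years of daily exposure, RfD 0.001 mg/kg/day, 60 kg body weight.
thq = target_hazard_quotient(0.2, 0.1, 365, 30, 0.001, 60, 365 * 30)
```

    Summing THQs across metals gives a hazard index, which is one common way such assessments flag priority control metals.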

  12. Pediatric Heart Donor Assessment Tool (PH-DAT): A novel donor risk scoring system to predict 1-year mortality in pediatric heart transplantation.

    PubMed

    Zafar, Farhan; Jaquiss, Robert D; Almond, Christopher S; Lorts, Angela; Chin, Clifford; Rizwan, Raheel; Bryant, Roosevelt; Tweddell, James S; Morales, David L S

    2018-03-01

    In this study we sought to quantify the hazards associated with various donor factors in a cumulative risk scoring system (the Pediatric Heart Donor Assessment Tool, or PH-DAT) to predict 1-year mortality after pediatric heart transplantation (PHT). PHT records with complete donor information (5,732) were randomly divided into a derivation cohort and a validation cohort (3:1). From the derivation cohort, donor-specific variables associated with 1-year mortality (exploratory p-value < 0.2) were incorporated into a multivariate logistic regression model. Scores were assigned to independent predictors (p < 0.05) based on relative odds ratios (ORs). The final model had an acceptable predictive value (c-statistic = 0.62). The 5 significant variables (ischemic time, stroke as the cause of death, donor-to-recipient height ratio, donor left ventricular ejection fraction, and glomerular filtration rate) were used for the scoring system. The validation cohort demonstrated a strong correlation between the observed and expected rates of 1-year mortality (r = 0.87). The risk of 1-year mortality increased by 11% (OR 1.11 [1.08 to 1.14]; p < 0.001) in the derivation cohort and 9% (OR 1.09 [1.04 to 1.14]; p = 0.001) in the validation cohort per 1-point increase in score. Mortality risk increased 5-fold from the lowest to the highest donor score in this cohort. Based on this model, a donor score range of 10 to 28 predicted 1-year recipient mortality of 11% to 31%. This novel pediatric-specific donor risk scoring system appears capable of predicting post-transplant mortality. Although the PH-DAT may benefit organ allocation and the assessment of recipient risk while controlling for donor risk, prospective validation of this model is warranted. Copyright © 2018 International Society for the Heart and Lung Transplantation. Published by Elsevier Inc. All rights reserved.
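    The reported per-point odds ratio implies a simple mapping from an additive donor score to predicted mortality. A hedged sketch, not the published model: the baseline odds below is a made-up calibration constant, and the real PH-DAT uses weighted points from a fitted logistic regression.

```python
def predicted_mortality(score, baseline_odds=0.02, or_per_point=1.11):
    """Predicted 1-year mortality from an additive donor score, assuming
    each 1-point increase multiplies the odds by ~1.11 (the derivation
    cohort's reported OR). baseline_odds (odds at score 0) is a
    hypothetical illustration value."""
    odds = baseline_odds * or_per_point ** score
    return odds / (1 + odds)
```

    Under any positive baseline, the mapping is monotone: a higher donor score always implies higher predicted mortality, consistent with the 5-fold risk gradient reported across the score range.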

  13. Assessment of fracture risk: value of random population-based samples--the Geelong Osteoporosis Study.

    PubMed

    Henry, M J; Pasco, J A; Seeman, E; Nicholson, G C; Sanders, K M; Kotowicz, M A

    2001-01-01

    Fracture risk is determined by bone mineral density (BMD). The T-score, a measure of fracture risk, is the position of an individual's BMD in relation to a reference range. The aim of this study was to determine the magnitude of change in the T-score when different sampling techniques were used to produce the reference range. Reference ranges were derived from three samples, drawn from the same region: (1) an age-stratified population-based random sample, (2) unselected volunteers, and (3) a selected healthy subset of the population-based sample with no diseases or drugs known to affect bone. T-scores were calculated using the three reference ranges for a cohort of women who had sustained a fracture and as a group had a low mean BMD (ages 35-72 yr; n = 484). For most comparisons, the T-scores for the fracture cohort were more negative using the population reference range. The difference in T-scores reached 1.0 SD. The proportion of the fracture cohort classified as having osteoporosis at the spine was 26, 14, and 23% when the population, volunteer, and healthy reference ranges were applied, respectively. The use of inappropriate reference ranges results in substantial changes to T-scores and may lead to inappropriate management.
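    The T-score at the heart of this comparison is simply the individual's BMD expressed in reference-range standard deviations, so the 1.0 SD shift reported above corresponds directly to a shift in the reference mean. A minimal sketch; the numbers are illustrative, not Geelong data:

```python
def t_score(bmd, ref_mean, ref_sd):
    """T-score: BMD in standard deviations from the reference-range mean.
    The WHO threshold for osteoporosis is T <= -2.5."""
    return (bmd - ref_mean) / ref_sd

# The same measured BMD yields different T-scores under different
# reference ranges (illustrative values in g/cm^2):
t_pop = t_score(0.80, 1.05, 0.10)  # population-based reference -> -2.5
t_vol = t_score(0.80, 1.00, 0.10)  # volunteer reference -> -2.0
```

    Here the volunteer-derived reference range would miss the osteoporosis classification that the population-based range assigns, which is the mechanism behind the 26% versus 14% figures above.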

  14. Nursing care complexity in a psychiatric setting: results of an observational study.

    PubMed

    Petrucci, C; Marcucci, G; Carpico, A; Lancia, L

    2014-02-01

    For nurses working in mental health service settings, it is a priority to assess patients using objective criteria in order to identify general and behavioural risks and nursing care complexity, to meet the demand for care, and to improve the quality of service by reducing conditions that threaten the health of patients or others (adverse events). This study highlights a relationship between the complexity of psychiatric patient care, which was assigned a numerical value after the nursing assessment, and the occurrence of psychiatric adverse events in the patients' recent histories. The results suggest that nursing supervision should be enhanced for patients with high care complexity scores. © 2013 John Wiley & Sons Ltd.

  15. [Prognostic scores for pulmonary embolism].

    PubMed

    Junod, Alain

    2016-03-23

    Nine prognostic scores for pulmonary embolism (PE), based on retrospective and prospective studies published between 2000 and 2014, were analyzed and compared. Most aim to identify low-risk PE cases in order to validate their ambulatory care. These scores differ in important ways: in the outcomes considered (overall mortality, PE-specific mortality, other complications) and in the sizes of the low-risk groups. The most popular score appears to be the PESI and its simplified version. Few good-quality studies have tested the applicability of these scores to outpatient PE care, although this approach is already becoming widespread in medical practice.

  16. Environmental and occupational health needs assessment in West Africa: opportunities for research and training.

    PubMed

    Sanyang, Edrisa; Butler-Dawson, Jaime; Mikulski, Marek A; Cook, Thomas; Kuye, Rex A; Venzke, Kristina; Fuortes, Laurence J

    2017-03-01

    Data are lacking on environmental and occupational health risks and resources available for the prevention of related diseases in the West African subregion. A needs assessment survey was conducted to identify environmental and occupational health concerns, and needs and strategies for skills training in the region. The survey was followed by a consensus-building workshop to discuss research and training priorities with representatives from countries participating in the study. Two hundred and two respondents from 12 countries participated in the survey. Vector-borne diseases, solid waste, deforestation, surface and ground water contamination together with work-related stress, occupational injury and pesticide toxicity were ranked as top environmental and occupational health priorities, respectively, in the region. Top training priorities included occupational health, environmental toxicology and analytic laboratory techniques with semester-long Africa-based courses as the preferred type of training for the majority of the courses. Major differences were found between the subregion's three official language groups, both in perceived health risks and training courses needed. The study results have implications for regional policies and practice in the area of environmental and occupational health research and training.

  17. Environmental and occupational health needs assessment in West Africa: opportunities for research and training

    PubMed Central

    Sanyang, Edrisa; Butler-Dawson, Jaime; Mikulski, Marek A.; Cook, Thomas; Kuye, Rex A.; Venzke, Kristina

    2016-01-01

    Objectives Data are lacking on environmental and occupational health risks and resources available for the prevention of related diseases in the West African subregion. Methods A needs assessment survey was conducted to identify environmental and occupational health concerns, and needs and strategies for skills training in the region. The survey was followed by a consensus-building workshop to discuss research and training priorities with representatives from countries participating in the study. Results Two hundred and two respondents from 12 countries participated in the survey. Vector-borne diseases, solid waste, deforestation, surface and ground water contamination together with work-related stress, occupational injury and pesticide toxicity were ranked as top environmental and occupational health priorities, respectively, in the region. Top training priorities included occupational health, environmental toxicology and analytic laboratory techniques with semester-long Africa-based courses as the preferred type of training for the majority of the courses. Major differences were found between the subregion’s three official language groups, both in perceived health risks and training courses needed. Conclusions The study results have implications for regional policies and practice in the area of environmental and occupational health research and training. PMID:27592360

  18. Clinical utility of metabolic syndrome severity scores: considerations for practitioners

    PubMed Central

    DeBoer, Mark D; Gurka, Matthew J

    2017-01-01

    The metabolic syndrome (MetS) is marked by abnormalities in central obesity, high blood pressure, high triglycerides, low high-density lipoprotein-cholesterol, and high fasting glucose and appears to be produced by underlying processes of inflammation, oxidative stress, and adipocyte dysfunction. MetS has traditionally been classified based on dichotomous criteria, which obscure the fact that MetS-related risk likely exists on a spectrum. Continuous MetS scores provide a way to track MetS-related risk over time. We generated MetS severity scores that are sex- and race/ethnicity-specific, acknowledging that the way MetS is manifested may differ by sex and racial/ethnic subgroup. These scores are correlated with long-term risk for type 2 diabetes mellitus and cardiovascular disease. Clinical use of scores like these provides a potential opportunity to identify patients at highest risk, motivate patients toward lifestyle change, and follow treatment progress over time. PMID:28255250

  19. Allometric considerations when assessing aortic aneurysms in Turner syndrome: Implications for activity recommendations and medical decision-making.

    PubMed

    Corbitt, Holly; Maslen, Cheryl; Prakash, Siddharth; Morris, Shaine A; Silberbach, Michael

    2018-02-01

    In Turner syndrome, the potential to form thoracic aortic aneurysms requires routine patient monitoring. However, the short stature that typically occurs complicates the assessment of severity and risk because the relationship of body size to aortic dimensions is different in Turner syndrome compared to the general population. Three allometric formulas have been proposed to adjust aortic dimensions, all employing body surface area: the aortic size index, Turner syndrome-specific Z-scores, and Z-scores based on a general pediatric and young adult population. To understand the differences between these formulas, we evaluated the relationship between age and aortic size index and compared Turner syndrome-specific Z-scores and pediatric/young adult-based Z-scores in a group of girls and women with Turner syndrome. Our results suggest that the aortic size index is highly age-dependent for those under 15 years and that Turner-specific Z-scores are significantly lower than Z-scores referenced to the general population. Higher Z-scores derived from the general reference population could result in stigmatization, inappropriate restriction from sports, and an increased risk of unneeded medical or operative treatments. We propose that, when estimating aortic dissection risk, clinicians use Turner syndrome-specific Z-scores for those under 15 years of age. © 2017 Wiley Periodicals, Inc.
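    The two normalizations being compared can be stated compactly. A sketch with placeholder reference values (real Turner-specific and general-population reference means and SDs differ, which is the paper's point):

```python
def aortic_size_index(diameter_cm, bsa_m2):
    """Aortic size index: aortic diameter divided by body surface area (cm/m^2)."""
    return diameter_cm / bsa_m2

def z_score(diameter_cm, ref_mean_cm, ref_sd_cm):
    """Z-score of an aortic diameter against a reference distribution
    (Turner-specific or general-population) for the same body surface area."""
    return (diameter_cm - ref_mean_cm) / ref_sd_cm

# The same aorta scores higher against a reference population with a
# smaller mean diameter (placeholder numbers, not published references):
z_general = z_score(3.2, 2.8, 0.2)  # general-population reference -> 2.0
z_turner = z_score(3.2, 3.0, 0.2)   # Turner-specific reference -> 1.0
```

    This illustrates the mechanism behind the finding: because Turner syndrome aortas run larger relative to body surface area, a general-population reference inflates the Z-score for the same measurement.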

  20. Subclinical cardiovascular disease assessment and its relationship with cardiovascular risk SCORE in a healthy adult population: A cross-sectional community-based study.

    PubMed

    Mitu, Ovidiu; Roca, Mihai; Floria, Mariana; Petris, Antoniu Octavian; Graur, Mariana; Mitu, Florin

    The aim of this study was to evaluate the relationship and accuracy of the SCORE (Systematic Coronary Risk Evaluation Project) risk estimate in relation to multiple methods for determining subclinical cardiovascular disease (CVD) in a healthy population. This cross-sectional study included 120 completely asymptomatic subjects, aged 35-75 years, randomly selected from the general population. The individuals were evaluated clinically and biochemically, and the SCORE risk was computed. Subclinical atherosclerosis was assessed by various methods: carotid ultrasound for intima-media thickness (cIMT) and plaque detection; aortic pulse wave velocity (aPWV); echocardiography for left ventricular mass index (LVMI) and aortic atheromatosis (AA); and ankle-brachial index (ABI). The mean SCORE value was 2.95±2.71, with 76% of subjects having SCORE <5. Sixty-four percent of all subjects had subclinical CVD changes, and the SCORE risk was correlated positively with all markers except the ABI. In the multivariate analysis, increased cIMT and aPWV were significantly associated with a high SCORE risk (OR 4.14, 95% CI: 1.42-12.15, p=0.009; and OR 1.41, 95% CI: 1.01-1.96, p=0.039, respectively). A positive linear relationship was observed between the number of territories with subclinical CVD (cIMT, LVMI, aPWV) and SCORE risk (p<0.0001). There was evidence of subclinical CVD in 60% of subjects with a SCORE value <5. As most subjects with a SCORE value <5 have subclinical CVD abnormalities, a more tailored subclinical CVD primary prevention program should be encouraged. Copyright © 2016 Sociedad Española de Arteriosclerosis. Publicado por Elsevier España, S.L.U. All rights reserved.

  1. Establishing research priorities for patient safety in emergency medicine: a multidisciplinary consensus panel.

    PubMed

    Plint, Amy C; Stang, Antonia S; Calder, Lisa A

    2015-01-01

    Patient safety in the context of emergency medicine is a relatively new field of study. To date, no broad research agenda for patient safety in emergency medicine has been established. The objective of this study was to establish patient safety-related research priorities for emergency medicine. These priorities would provide a foundation for high-quality research, important direction to both researchers and health-care funders, and an essential step in improving health-care safety and patient outcomes in the high-risk emergency department (ED) setting. A four-phase consensus procedure with a multidisciplinary expert panel was organized to identify, assess, and agree on research priorities for patient safety in emergency medicine. The 19-member panel consisted of clinicians, administrators, and researchers from adult and pediatric emergency medicine, patient safety, pharmacy, and mental health; as well as representatives from patient safety organizations. In phase 1, we developed an initial list of potential research priorities by electronically surveying a purposeful and convenience sample of patient safety experts, ED clinicians, administrators, and researchers from across North America using contact lists from multiple organizations. We used simple content analysis to remove duplication and categorize the research priorities identified by survey respondents. Our expert panel reached consensus on a final list of research priorities through an in-person meeting (phase 3) and two rounds of a modified Delphi process (phases 2 and 4). After phases 1 and 2, 66 unique research priorities were identified for expert panel review. At the end of phase 4, consensus was reached for 15 research priorities. These priorities represent four themes: (1) methods to identify patient safety issues (five priorities), (2) understanding human and environmental factors related to patient safety (four priorities), (3) the patient perspective (one priority), and (4) interventions for improving patient safety (five priorities). This study established expert, consensus-based research priorities for patient safety in emergency medicine. This framework could be used by researchers and health-care funders and represents an essential guiding step towards enhancing quality of care and patient safety in the ED.

  2. Value of the CHA2DS2-VASc score and Fabry-specific score for predicting new-onset or recurrent stroke/TIA in Fabry disease patients without atrial fibrillation.

    PubMed

    Liu, Dan; Hu, Kai; Schmidt, Marie; Müntze, Jonas; Maniuc, Octavian; Gensler, Daniel; Oder, Daniel; Salinger, Tim; Weidemann, Frank; Ertl, Georg; Frantz, Stefan; Wanner, Christoph; Nordbeck, Peter

    2018-05-24

    To evaluate potential risk factors for stroke or transient ischemic attack (TIA) and to test the feasibility and efficacy of a Fabry-specific stroke risk score in Fabry disease (FD) patients without atrial fibrillation (AF). FD patients often experience cerebrovascular events (stroke/TIA) at a young age. 159 genetically confirmed FD patients without AF (aged 40 ± 14 years, 42.1% male) were included, and risk factors for stroke/TIA events were determined. All patients were followed up over a median period of 60 (quartiles 35-90) months. The pre-defined primary outcomes included new-onset or recurrent stroke/TIA and all-cause death. Prior stroke/TIA (HR 19.97, P < .001), angiokeratoma (HR 4.06, P = .010), elevated creatinine (HR 3.74, P = .011), significant left ventricular hypertrophy (HR 4.07, P = .017), and reduced global longitudinal strain (GLS; HR 5.19, P = .002) remained independent risk predictors of new-onset or recurrent stroke/TIA in FD patients without AF. A Fabry-specific score was established based on the risk factors defined above and was numerically superior to the CHA2DS2-VASc score in predicting new-onset or recurrent stroke/TIA in this cohort (AUC 0.87 vs. 0.75, P = .199). Prior stroke/TIA, angiokeratoma, renal dysfunction, left ventricular hypertrophy, and global systolic dysfunction are independent risk factors for new-onset or recurrent stroke/TIA in FD patients without AF. It is feasible to predict new or recurrent cerebral events with the Fabry-specific score based on these risk factors. Future studies are warranted to test whether FD patients at high risk for new-onset or recurrent stroke/TIA, as defined by the Fabry-specific score (≥ 2 points), might benefit from antithrombotic therapy. Clinical trial registration: HEAL-FABRY (evaluation of HEArt invoLvement in patients with FABRY disease, NCT03362164).
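    The five independent predictors lend themselves to a point-based score. The sketch below assigns one point per factor present, which is an equal-weighting assumption for illustration (the published Fabry-specific score may weight factors differently), together with the abstract's ≥ 2-point high-risk threshold:

```python
def fabry_specific_score(prior_stroke_tia, angiokeratoma, elevated_creatinine,
                         significant_lvh, reduced_gls):
    """One point per independent risk factor present.
    Equal weighting is an illustrative assumption, not the published scheme."""
    return sum(map(bool, (prior_stroke_tia, angiokeratoma, elevated_creatinine,
                          significant_lvh, reduced_gls)))

def high_risk(score):
    """The abstract flags >= 2 points as high risk for new or recurrent stroke/TIA."""
    return score >= 2
```

    For example, a patient with elevated creatinine and reduced GLS but no other factors scores 2 points and crosses the high-risk threshold.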

  3. Using a genetic/clinical risk score to stop smoking (GeTSS): randomised controlled trial.

    PubMed

    Nichols, John A A; Grob, Paul; Kite, Wendy; Williams, Peter; de Lusignan, Simon

    2017-10-23

    As genetic tests become cheaper, the possibility of their widespread availability must be considered. This study involves a risk score for lung cancer in smokers that is based roughly half on genetic and half on clinical criteria. The risk score has been shown to be effective as a smoking cessation motivator in hospital-recruited subjects (not actively seeking cessation services). This was an RCT set in a United Kingdom National Health Service (NHS) smoking cessation clinic. Smokers were identified from medical records. Subjects who wanted to participate were randomised to a test group that was administered a gene-based risk test and given a lung cancer risk score, or to a control group in which no risk score was calculated. Each group had 8 weeks of weekly smoking cessation sessions involving group therapy and advice on smoking cessation pharmacotherapy, with follow-up at 6 months. The primary endpoint was smoking cessation at 6 months. Secondary outcomes included ranking of the risk score and other motivators. 67 subjects attended the smoking cessation clinic. The 6-month quit rates were 29.4% (10/34; 95% CI 14.1-44.7%) for the test group and 42.9% (12/28; 95% CI 24.6-61.2%) for the controls; the difference is not significant. However, the quit rate for test-group subjects with a "very high" risk score was 89% (8/9; 95% CI 68.4-100%), which was significant compared with the control group (p = 0.023), and test-group subjects with moderate risk scores had a 9.5% quit rate (2/21; 95% CI 2.7-28.9%), significantly lower than the 61.5% rate for those with above-moderate risk scores (8/13; 95% CI 35.5-82.3%; p = 0.03). Only the subgroup with the highest risk score showed an increased quit rate. Controls and test-group subjects with a moderate risk score were relatively unlikely to have achieved and maintained non-smoker status at 6 months. ClinicalTrials.gov ID NCT01176383 (date of registration: 3 August 2010).
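    The confidence intervals quoted above follow the normal-approximation (Wald) form for a proportion, which can be checked directly:

```python
import math

def wald_ci(successes, n, z=1.96):
    """Normal-approximation (Wald) 95% CI for a proportion,
    clipped to [0, 1]."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

# Reproduces the test-group quit rate above: 10/34 -> 29.4% (14.1-44.7%)
p, lo, hi = wald_ci(10, 34)
```

    Such wide intervals in small subgroups (e.g. 8/9 for the "very high" score group) are why the subgroup findings warrant cautious interpretation.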

  4. Impact of Replacing the Pooled Cohort Equation With Other Cardiovascular Disease Risk Scores on Atherosclerotic Cardiovascular Disease Risk Assessment (from the Multi-Ethnic Study of Atherosclerosis [MESA]).

    PubMed

    Qureshi, Waqas T; Michos, Erin D; Flueckiger, Peter; Blaha, Michael; Sandfort, Veit; Herrington, David M; Burke, Gregory; Yeboah, Joseph

    2016-09-01

    The increase in statin eligibility under the new cholesterol guidelines is mostly driven by the Pooled Cohort Equation (PCE) criterion (≥7.5% 10-year PCE risk). The impact of replacing the PCE with either the modified Framingham Risk Score (FRS) or the Systematic Coronary Risk Evaluation (SCORE) on atherosclerotic cardiovascular disease (ASCVD) risk assessment and statin eligibility remains unknown. We assessed the comparative benefits of using the PCE, FRS, and SCORE for ASCVD risk assessment in the Multi-Ethnic Study of Atherosclerosis. Of 6,815 participants, 654 (mean age 61.4 ± 10.3 years; 47.1% men; 37.1% whites; 27.2% blacks; 22.3% Hispanics; 12.0% Chinese-Americans) were included in the analysis. Area under the curve (AUC) and decision curve analysis were used to compare the risk scores. Decision curve analysis plots net benefit against probability thresholds, where net benefit = true positive rate - (false positive rate × weighting factor) and the weighting factor = threshold probability / (1 - threshold probability). After a median of 8.6 years, 342 (6.0%) ASCVD events (myocardial infarction, coronary heart disease death, fatal or nonfatal stroke) occurred. All 4 risk scores had acceptable discriminative ability for incident ASCVD events (AUC [95% CI]: PCE, 0.737 [0.713 to 0.762]; FRS, 0.717 [0.691 to 0.743]; SCORE (high risk), 0.722 [0.696 to 0.747]; and SCORE (low risk), 0.721 [0.696 to 0.746]). At the ASCVD risk threshold recommended for statin eligibility for primary prevention (≥7.5%), the PCE provides the best net benefit. Replacing the PCE with the SCORE (high), SCORE (low), and FRS results in a 2.9%, 8.9%, and 17.1% further increase in statin eligibility, respectively. The PCE has the best discrimination and net benefit for primary ASCVD risk assessment in a US-based multiethnic cohort compared with the SCORE or the FRS. Copyright © 2016 Elsevier Inc. All rights reserved.
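    The decision-curve quantity defined in this abstract translates directly to code. A minimal sketch using the abstract's own formula; the counts in the example are hypothetical:

```python
def net_benefit(true_pos, false_pos, n, threshold):
    """Net benefit at a probability threshold, per the abstract:
    TP/n - (FP/n) * threshold / (1 - threshold)."""
    w = threshold / (1 - threshold)
    return true_pos / n - (false_pos / n) * w

# At threshold 0.5 the weight is 1, so net benefit is simply (TP - FP) / n:
nb = net_benefit(30, 10, 100, 0.5)  # 0.2
```

    At the guideline threshold of 7.5%, the weight is 0.075/0.925 ≈ 0.081, so false positives are penalized lightly, reflecting that at low thresholds missing an event is costlier than overtreating.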

  5. Real-Time Risk Prediction on the Wards: A Feasibility Study.

    PubMed

    Kang, Michael A; Churpek, Matthew M; Zadravecz, Frank J; Adhikari, Richa; Twu, Nicole M; Edelson, Dana P

    2016-08-01

    Failure to detect clinical deterioration in the hospital is common and associated with poor patient outcomes and increased healthcare costs. Our objective was to evaluate the feasibility and accuracy of real-time risk stratification using the electronic Cardiac Arrest Risk Triage (eCART) score, an electronic health record-based early warning score. We conducted a prospective black-box validation study: data were transmitted via HL7 feed in real time to an integration engine and database server, where the scores were calculated and stored without visualization for clinical providers. The high-risk threshold was set a priori. Timing and sensitivity of eCART score activation were compared with standard-of-care Rapid Response Team activation for patients who experienced a ward cardiac arrest or ICU transfer. The setting was three general care wards at an academic medical center, with a total of 3,889 adult inpatients. The system generated 5,925 segments during 5,751 admissions. The area under the receiver operating characteristic curve for the eCART score was 0.88 for cardiac arrest and 0.80 for ICU transfer, consistent with previously published derivation results. During the study period, eight of 10 patients with a cardiac arrest had high-risk eCART scores, whereas the Rapid Response Team was activated for only two of these patients (p < 0.05). Furthermore, the eCART score identified 52% (n = 201) of the ICU transfers compared with 34% (n = 129) by the current system (p < 0.001). Patients met the high-risk eCART threshold a median of 30 hours prior to cardiac arrest or ICU transfer, versus 1.7 hours for standard Rapid Response Team activation. The eCART score identified significantly more cardiac arrests and ICU transfers than standard Rapid Response Team activation and did so many hours in advance.
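    The lead-time comparison described here (first threshold crossing versus time of event) can be sketched as follows. The score series and the high-risk cutoff are invented for illustration; they are not the published eCART values:

    ```python
    from datetime import datetime, timedelta

    THRESHOLD = 60  # a priori high-risk cutoff (illustrative, not the published eCART cutoff)

    def first_alert(scored_times, threshold=THRESHOLD):
        """Return the timestamp of the first score at or above the high-risk
        threshold, or None if the patient never crosses it."""
        for t, score in scored_times:
            if score >= threshold:
                return t
        return None

    # Hypothetical hourly scores for one patient who arrested at 18:00:
    event = datetime(2016, 3, 1, 18, 0)
    scores = [(event - timedelta(hours=h), s)
              for h, s in [(40, 12), (35, 55), (30, 63), (5, 80)]]
    alert = first_alert(scores)
    print((event - alert) / timedelta(hours=1))  # lead time in hours: 30.0
    ```

    Aggregating such per-patient lead times (here 30 hours) is how a median warning time like the study's 30 hours would be computed.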

  6. Quantifying the relative risk of sex offenders: risk ratios for static-99R.

    PubMed

    Hanson, R Karl; Babchishin, Kelly M; Helmus, Leslie; Thornton, David

    2013-10-01

    Given the widespread use of empirical actuarial risk tools in corrections and forensic mental health, it is important that evaluators and decision makers understand how scores relate to recidivism risk. In the current study, we found strong evidence for a relative risk interpretation of Static-99R scores using 8 samples from Canada, the United Kingdom, and Western Europe (N = 4,037 sex offenders). Each one-point increase in Static-99R score was associated with a stable and consistent increase in relative risk (an odds ratio or hazard ratio of approximately 1.4). Hazard ratios from Cox regression were used to calculate risk ratios that can be reported for Static-99R. We recommend that evaluators consider risk ratios as a useful, nonarbitrary metric for quantifying and communicating risk information. To avoid misinterpretation, however, risk ratios should be presented alongside recidivism base rates.
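    A constant per-point hazard ratio of about 1.4, as reported in the abstract, implies a simple multiplicative comparison between any two scores. A minimal sketch, with hypothetical example scores:

    ```python
    HAZARD_RATIO_PER_POINT = 1.4  # approximate per-point ratio reported in the abstract

    def relative_risk(score_a: int, score_b: int,
                      ratio: float = HAZARD_RATIO_PER_POINT) -> float:
        """Approximate relative recidivism risk of score_a versus score_b,
        assuming a constant per-point hazard ratio."""
        return ratio ** (score_a - score_b)

    # An offender scoring 6 versus one scoring 2 (illustrative scores):
    print(round(relative_risk(6, 2), 2))  # 1.4**4 = 3.84
    ```

    Note the abstract's caveat: such ratios are only interpretable alongside the base rate, since a 3.84-fold increase in a very low base rate can still be a small absolute risk.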

  7. Development and validation of a melanoma risk score based on pooled data from 16 case-control studies

    PubMed Central

    Davies, John R; Chang, Yu-mei; Bishop, D Timothy; Armstrong, Bruce K; Bataille, Veronique; Bergman, Wilma; Berwick, Marianne; Bracci, Paige M; Elwood, J Mark; Ernstoff, Marc S; Green, Adele; Gruis, Nelleke A; Holly, Elizabeth A; Ingvar, Christian; Kanetsky, Peter A; Karagas, Margaret R; Lee, Tim K; Le Marchand, Loïc; Mackie, Rona M; Olsson, Håkan; Østerlind, Anne; Rebbeck, Timothy R; Reich, Kristian; Sasieni, Peter; Siskind, Victor; Swerdlow, Anthony J; Titus, Linda; Zens, Michael S; Ziegler, Andreas; Gallagher, Richard P.; Barrett, Jennifer H; Newton-Bishop, Julia

    2015-01-01

    Background We report the development of a cutaneous melanoma risk algorithm based upon 7 factors: hair colour, skin type, family history, freckling, nevus count, number of large nevi, and history of sunburn. It is intended to form the basis of a self-assessment webtool for the general public. Methods Predicted odds of melanoma were estimated by analysing a pooled dataset from 16 case-control studies using logistic random coefficients models. Risk categories were defined based on the distribution of the predicted odds in the controls from these studies. Imputation was used to estimate missing data in the pooled datasets. The 30th, 60th and 90th centiles were used to distribute individuals into four risk groups for their age, sex and geographic location. Cross-validation was used to test the robustness of the thresholds for each group by leaving out each study one by one. Performance of the model was assessed in an independent UK case-control study dataset. Results Cross-validation confirmed the robustness of the threshold estimates. Cases and controls were well discriminated in the independent dataset (area under the curve 0.75, 95% CI 0.73-0.78). 29% of cases were in the highest risk group compared with 7% of controls, and 43% of controls were in the lowest risk group compared with 13% of cases. Conclusion We have identified a composite score representing an estimate of relative risk and successfully validated this score in an independent dataset. Impact This score may be a useful tool to inform members of the public about their melanoma risk. PMID:25713022
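    The centile-based grouping described in the Methods (cut points at the 30th, 60th and 90th centiles of predicted odds among controls) can be sketched as follows. The control odds below are fabricated for illustration:

    ```python
    from bisect import bisect_right
    import statistics

    def centile_thresholds(control_odds):
        """30th, 60th and 90th centiles of predicted odds among controls,
        used as cut points for four risk groups (as described in the abstract)."""
        qs = statistics.quantiles(control_odds, n=10)  # 10th..90th percentiles
        return [qs[2], qs[5], qs[8]]  # 30th, 60th, 90th

    def risk_group(odds: float, thresholds) -> int:
        """0 = lowest risk group ... 3 = highest risk group."""
        return bisect_right(thresholds, odds)

    # Hypothetical predicted odds for 100 controls:
    controls = [0.1 * i for i in range(1, 101)]
    th = centile_thresholds(controls)
    print(risk_group(9.5, th))  # 3 (above the 90th centile of controls)
    ```

    In the published tool the thresholds are computed separately by age, sex and geographic location; this sketch shows a single stratum.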

  8. Comprehensive Environmental Assessment Applied to Multiwalled Carbon Nanotube Flame-Retardant Coatings in Upholstery Textiles: A Case Study Presenting Priority Research Gaps for Future Risk Assessments (Final Report)

    EPA Science Inventory

    In September 2013, EPA announced the availability of the final report, Comprehensive Environmental Assessment Applied to Multiwalled Carbon Nanotube Flame-Retardant Coatings in Upholstery Textiles: A Case Study Presenting Priority Research Gaps for Future Risk Assessments...

  9. Associations of CAIDE Dementia Risk Score with MRI, PIB-PET measures, and cognition.

    PubMed

    Stephen, Ruth; Liu, Yawu; Ngandu, Tiia; Rinne, Juha O; Kemppainen, Nina; Parkkola, Riitta; Laatikainen, Tiina; Paajanen, Teemu; Hänninen, Tuomo; Strandberg, Timo; Antikainen, Riitta; Tuomilehto, Jaakko; Keinänen Kiukaanniemi, Sirkka; Vanninen, Ritva; Helisalmi, Seppo; Levälahti, Esko; Kivipelto, Miia; Soininen, Hilkka; Solomon, Alina

    2017-01-01

    The CAIDE Dementia Risk Score is the first validated tool for estimating dementia risk based on a midlife risk profile. This observational study investigated longitudinal associations of the CAIDE Dementia Risk Score with brain MRI, amyloid burden evaluated with PIB-PET, and detailed cognition measures. FINGER participants were at-risk elderly people without dementia. The CAIDE Risk Score was calculated using data from previous national surveys (mean age 52.4 years). In connection with the baseline FINGER visit (on average 17.6 years later, mean age 70.1 years), 132 participants underwent MRI scans and 48 underwent PIB-PET scans. All 1,260 participants were cognitively assessed (Neuropsychological Test Battery, NTB). Neuroimaging assessments included brain cortical thickness and volumes (FreeSurfer 5.0.3), visually rated medial temporal atrophy (MTA), white matter lesions (WML), and amyloid accumulation. A higher CAIDE Dementia Risk Score was related to more pronounced deep WML (OR 1.22, 95% CI 1.05-1.43), lower total gray matter (β-coefficient -0.29, p = 0.001) and hippocampal volume (β-coefficient -0.28, p = 0.003), lower cortical thickness (β-coefficient -0.19, p = 0.042), and poorer cognition (β-coefficients -0.31 for total NTB score, -0.25 for executive functioning, -0.33 for processing speed, and -0.20 for memory; all p < 0.001). A higher CAIDE Dementia Risk Score including APOE genotype was additionally related to more pronounced MTA (OR 1.15, 95% CI 1.00-1.30). No associations were found with periventricular WML or amyloid accumulation. The CAIDE Dementia Risk Score was related to indicators of cerebrovascular changes and neurodegeneration on MRI, and to cognition. The lack of association with brain amyloid accumulation needs to be verified in studies with larger sample sizes.

  10. Dietary inflammatory index and risk of reflux oesophagitis, Barrett's oesophagus and oesophageal adenocarcinoma: a population-based case-control study.

    PubMed

    Shivappa, Nitin; Hebert, James R; Anderson, Lesley A; Shrubsole, Martha J; Murray, Liam J; Getty, Lauren B; Coleman, Helen G

    2017-05-01

    The dietary inflammatory index (DII™) is a novel composite score based on a range of nutrients and foods known to be associated with inflammation. DII scores have been linked to the risk of a number of cancers, including oesophageal squamous cell cancer and oesophageal adenocarcinoma (OAC). Given that OAC stems from acid reflux and that the oesophageal epithelium undergoes a metaplasia-dysplasia transition from the resulting inflammation, it is plausible that a high DII score (indicating a pro-inflammatory diet) may exacerbate risk of OAC and its precursor conditions. The aim of this analytical study was to explore the association between the energy-adjusted dietary inflammatory index (E-DII™) and risk of reflux oesophagitis, Barrett's oesophagus and OAC. Between 2002 and 2005, reflux oesophagitis (n 219), Barrett's oesophagus (n 220) and OAC (n 224) patients, and population-based controls (n 256), were recruited to the Factors influencing the Barrett's Adenocarcinoma Relationship study in Northern Ireland and the Republic of Ireland. E-DII scores were derived from a 101-item FFQ. Unconditional logistic regression analysis was applied to determine odds of oesophageal lesions according to E-DII intakes, adjusting for potential confounders. High E-DII scores were associated with a borderline increase in odds of reflux oesophagitis (OR 1·87; 95 % CI 0·93, 3·73), and significantly increased odds of Barrett's oesophagus (OR 2·05; 95 % CI 1·22, 3·47) and OAC (OR 2·29; 95 % CI 1·32, 3·96), when comparing the highest with the lowest tertiles of E-DII scores. In conclusion, a pro-inflammatory diet may exacerbate the risk of the inflammation-metaplasia-adenocarcinoma pathway in oesophageal carcinogenesis.

  11. National survey of emergency physicians for transient ischemic attack (TIA) risk stratification consensus and appropriate treatment for a given level of risk.

    PubMed

    Perry, Jeffrey J; Losier, Justin H; Stiell, Ian G; Sharma, Mukul; Abdulaziz, Kasim

    2016-01-01

    Five percent of transient ischemic attack (TIA) patients have a subsequent stroke within 7 days. The Canadian TIA Score uses clinical findings to calculate the risk of subsequent stroke within 7 days. Our objectives were to assess 1) anticipated use; 2) component face validity; 3) risk strata for stroke within 7 days; and 4) actions required for a given risk of subsequent stroke. After a rigorous development process, a survey questionnaire was administered to a random sample of 300 emergency physicians selected from those registered in a national medical directory. The surveys were distributed using a modified Dillman technique. Of 271 eligible surveys, we received 131 (48.3%) completed surveys; 96.2% of emergency physicians would use a validated Canadian TIA Score, and 8 of the 13 components comprising the Canadian TIA Score were rated as Very Important or Important by survey respondents. Risk categories for subsequent stroke were defined, ranging from minimal risk up to a highest-risk category (>10% risk of subsequent stroke within 7 days). A validated Canadian TIA Score will likely be used by emergency physicians. Most components of the TIA Score have high face validity. Risk strata are definable, which may allow physicians to determine immediate actions in the emergency department based on subsequent stroke risk.

  12. Patient Safety Leadership WalkRounds.

    PubMed

    Frankel, Allan; Graydon-Baker, Erin; Neppl, Camilla; Simmonds, Terri; Gustafson, Michael; Gandhi, Tejal K

    2003-01-01

    In the WalkRounds concept, a core group, which includes the senior executives and/or vice presidents, conducts weekly visits to different areas of the hospital. The group, joined by one or two nurses in the area and other available staff, asks specific questions about adverse events or near misses and about the factors or systems issues that led to these events. ANALYSIS OF EVENTS: Events raised in the WalkRounds are entered into a database and classified according to their contributing factors. The data are aggregated by contributing factor and priority score to highlight the root issues, and the priority scores are used to select QI pilots and make best use of limited resources. Executives are surveyed quarterly about actions they have taken as a direct result of WalkRounds and about what they have learned from the rounds. As of September 2002, 47 Patient Safety Leadership WalkRounds had been conducted, visiting a total of 48 different areas of the hospital and generating 432 individual comments. The WalkRounds require not only knowledgeable and invested senior leadership but also a well-organized support structure: quality and safety personnel are needed to collect data and maintain a database of confidential information, evaluate the data from a systems approach, and delineate systems-based actions to improve care delivery. Comments from frontline clinicians and executives suggested that WalkRounds helps educate leadership and frontline staff in patient safety concepts and will lead to cultural change, manifested in more open discussion of adverse events and an improved rate of safety-based changes.

  13. Development of a semi-quantitative risk assessment model for evaluating environmental threat posed by the three first EU watch-list pharmaceuticals to urban wastewater treatment plants: An Irish case study.

    PubMed

    Tahar, Alexandre; Tiedeken, Erin Jo; Clifford, Eoghan; Cummins, Enda; Rowan, Neil

    2017-12-15

    Contamination of receiving waters with pharmaceutical compounds is of pressing concern. This constitutes the first study to report on the development of a semi-quantitative risk assessment (RA) model for evaluating the environmental threat posed by three EU watch-list pharmaceutical compounds, namely diclofenac, 17-beta-estradiol and 17-alpha-ethinylestradiol, to aquatic ecosystems, using Irish data as a case study. The RA model adopts the Irish Environmental Protection Agency's Source-Pathway-Receptor concept to define relevant parameters for calculating a low, medium or high risk score for each wastewater treatment plant (WWTP) agglomeration, taking into account catchment, treatment, operational and management factors. This RA model may potentially be used on a national scale to (i) identify WWTPs that pose a particular risk as regards releasing disproportionately high levels of these pharmaceutical compounds, and (ii) help identify priority locations for introducing or upgrading control measures (e.g. tertiary treatment, source reduction). To assess risks for these substances of emerging concern, the model was applied to 16 urban WWTPs located in different regions of Ireland, which were scored for the three different compounds and ranked as low, medium or high risk. As a validation proxy, this case study used limited monitoring data recorded in the receiving waters of some of these plants. It is envisaged that this semi-quantitative RA approach may aid other EU countries in investigating and screening for potential risks where limited measured or predicted environmental pollutant concentrations and/or hydrological data are available. The model remains semi-quantitative, as other factors, such as the influence of climate change and drug usage or prescription data, will need to be considered in future for estimating and predicting risks. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. A decision-making tool for exchange transfusions in infants with severe hyperbilirubinemia in resource-limited settings.

    PubMed

    Olusanya, B O; Iskander, I F; Slusher, T M; Wennberg, R P

    2016-05-01

    Late presentation and ineffective phototherapy account for excessive rates of avoidable exchange transfusions (ETs) in many low- and middle-income countries. Several system-based constraints sometimes limit the ability to provide timely ETs for all infants at risk of kernicterus, thus necessitating a treatment triage to optimize available resources. This article proposes a practical priority-setting model for term and near-term infants requiring ET after the first 48 h of life. The proposed model combines plasma/serum bilirubin estimation, clinical signs of acute bilirubin encephalopathy and neurotoxicity risk factors for predicting the risk of kernicterus based on available evidence in the literature.

  15. Development and validation of a predictive score for perioperative transfusion in patients with hepatocellular carcinoma undergoing liver resection.

    PubMed

    Wang, Hai-Qing; Yang, Jian; Yang, Jia-Yin; Wang, Wen-Tao; Yan, Lu-Nan

    2015-08-01

    Liver resection is a major surgery that often requires perioperative blood transfusion, so predicting the need for transfusion in patients undergoing liver resection is of great importance. The present study aimed to develop and validate a model for predicting transfusion requirement in HBV-related hepatocellular carcinoma patients undergoing liver resection. A total of 1543 consecutive liver resections were included in the study. A randomly selected sample of 1080 cases (70% of the study cohort) was used to develop a predictive score for transfusion requirement, and the remaining 30% (n = 463) was used to validate the score. Based on preoperative and predictable intraoperative parameters, logistic regression was used to identify risk factors and to create an integer score for the prediction of transfusion requirement. Extrahepatic procedure, major liver resection, hemoglobin level, and platelet count were identified as independent predictors of transfusion requirement by logistic regression analysis. A scoring system integrating these 4 factors was stratified into three groups that predicted the risk of transfusion, with transfusion rates of 11.4%, 24.7% and 57.4% for low, moderate and high risk, respectively. The prediction model appeared accurate, with good discriminatory ability, generating an area under the receiver operating characteristic curve of 0.736 in the development set and 0.709 in the validation set. We have developed and validated an integer-based risk score to predict perioperative transfusion for patients undergoing liver resection in a high-volume surgical center. This score allows identification of patients at high risk and may alter transfusion practices.
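    The general shape of such an integer score (points per factor, then cut points into low/moderate/high strata) can be sketched as below. The point weights and the hemoglobin/platelet cutoffs are invented for illustration; the abstract names the four factors but does not give the published weights:

    ```python
    def transfusion_score(extrahepatic: bool, major_resection: bool,
                          hemoglobin_g_dl: float, platelets_10e9_l: float) -> int:
        """Illustrative integer score over the 4 factors named in the abstract.
        All weights and cutoffs here are assumptions, not the published values."""
        score = 0
        score += 2 if extrahepatic else 0
        score += 2 if major_resection else 0
        score += 2 if hemoglobin_g_dl < 12.0 else 0   # low hemoglobin (assumed cutoff)
        score += 1 if platelets_10e9_l < 100 else 0   # low platelet count (assumed cutoff)
        return score

    def risk_stratum(score: int) -> str:
        """Map the integer score to three strata, mirroring the abstract's
        low/moderate/high groups (observed rates 11.4%, 24.7%, 57.4%)."""
        if score <= 1:
            return "low"
        if score <= 3:
            return "moderate"
        return "high"

    # A patient with an extrahepatic procedure, major resection, Hb 11.0 g/dL
    # and platelets 90 x 10^9/L (hypothetical):
    print(risk_stratum(transfusion_score(True, True, 11.0, 90)))  # high
    ```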

  16. The 5-minute Apgar score as a predictor of childhood cancer: a population-based cohort study in five million children.

    PubMed

    Li, Jiong; Cnattingus, Sven; Gissler, Mika; Vestergaard, Mogens; Obel, Carsten; Ahrensberg, Jette; Olsen, Jørn

    2012-01-01

    The aetiology of childhood cancer remains largely unknown, but recent research indicates that the uterine environment plays an important role. We aimed to examine the association between the Apgar score at 5 minutes after birth and the risk of childhood cancer. This was a nationwide population-based cohort study using nationwide register data in Denmark and Sweden. All live-born singletons born in Denmark from 1978 to 2006 (N=1 771 615) and in Sweden from 1973 to 2006 (N=3 319 573) were included, and children were followed up from birth to 14 years of age. Outcome measures were rates and HRs for all childhood cancers and for specific childhood cancers. A total of 8087 children received a cancer diagnosis (1.6 per 1000). Compared with children with a 5-minute Apgar score of 9-10, children with a score of 0-5 had a 46% higher risk of cancer (adjusted HR 1.46, 95% CI 1.15 to 1.89). The potential effect of a low Apgar score on overall cancer risk was mostly confined to children diagnosed before 6 months of age. Children with an Apgar score of 0-5 had higher risks for several specific childhood cancers, including Wilms' tumour (HR 4.33, 95% CI 2.42 to 7.73). A low 5-minute Apgar score was associated with a higher risk of childhood cancers diagnosed shortly after birth. Our data suggest that environmental factors operating before or during delivery may play a role in the development of several specific childhood cancers.

  17. The SAFARI Score to Assess the Risk of Convulsive Seizure During Admission for Aneurysmal Subarachnoid Hemorrhage.

    PubMed

    Jaja, Blessing N R; Schweizer, Tom A; Claassen, Jan; Le Roux, Peter; Mayer, Stephan A; Macdonald, R Loch

    2018-06-01

    Seizure is a significant complication in patients under acute admission for aneurysmal SAH and can result in poor outcomes. Treatment strategies to optimize management will benefit from methods to better identify at-risk patients. Our aim was to develop and validate a risk score for convulsive seizure during acute admission for SAH. A risk score was developed in 1500 patients from a single tertiary hospital and externally validated in 852 patients. Candidate predictors were identified by systematic review of the literature and were included in a backward stepwise logistic regression model with in-hospital seizure as the dependent variable. The risk score was assessed for discrimination using the area under the receiver operating characteristic curve (AUC) and for calibration using a goodness-of-fit test. The SAFARI score, based on 4 items (age ≥ 60 yr, seizure occurrence before hospitalization, ruptured aneurysm in the anterior circulation, and hydrocephalus requiring cerebrospinal fluid diversion), had an AUC of 0.77 (95% confidence interval [CI]: 0.73-0.82) in the development cohort; the validation cohort had an AUC of 0.65 (95% CI: 0.56-0.73). A calibrated increase in the risk of seizure was noted with increasing SAFARI score points. The SAFARI score is a simple tool that adequately stratified SAH patients according to their risk of seizure using a few readily derived predictor items. It may contribute to more individualized management of seizure following SAH.
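    A points-based score over the 4 SAFARI items can be sketched as below. Equal 1-point weights are an assumption for illustration; the abstract lists the items but not the published point values:

    ```python
    def safari_score(age_ge_60: bool, seizure_before_admission: bool,
                     anterior_circulation_aneurysm: bool, csf_diversion: bool) -> int:
        """Count of the 4 SAFARI items named in the abstract. Equal 1-point
        weights are assumed here; the published score may weight items differently."""
        return sum([age_ge_60, seizure_before_admission,
                    anterior_circulation_aneurysm, csf_diversion])

    # A hypothetical 65-year-old with an anterior-circulation aneurysm and
    # hydrocephalus requiring CSF diversion, no pre-admission seizure:
    print(safari_score(True, False, True, True))  # 3
    ```

    The abstract's "calibrated increase in the risk of seizure ... with increasing SAFARI score points" corresponds to reading off a higher predicted risk at each additional point.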

  18. WE-G-BRA-08: Failure Modes and Effects Analysis (FMEA) for Gamma Knife Radiosurgery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Y; Bhatnagar, J; Bednarz, G

    2015-06-15

    Purpose: To perform a failure modes and effects analysis (FMEA) study for Gamma Knife (GK) radiosurgery processes at our institution, based on our experience with the treatment of more than 13,000 patients. Methods: A team consisting of medical physicists, nurses, radiation oncologists, and neurosurgeons at the University of Pittsburgh Medical Center, together with an external physicist expert, was formed for the FMEA study. A process tree and a failure mode table were created for the GK procedures using the Leksell GK Perfexion and 4C units. Three scores, for the probability of occurrence (O), the severity (S), and the probability of non-detection (D), were assigned to each failure mode by each professional on a scale from 1 to 10. The risk priority number (RPN) for each failure mode was then calculated (RPN = O × S × D) from the average scores across all data sets collected. Results: The established process tree for GK radiosurgery consists of 10 sub-processes and 53 steps, including a sub-process for frame placement and 11 steps that are directly related to the frame-based nature of GK radiosurgery. Of the 86 failure modes identified, 40 are GK specific, caused by the potential for inappropriate use of the radiosurgery head frame, the imaging fiducial boxes, the GK helmets and plugs, and the GammaPlan treatment planning system. The other 46 failure modes are associated with the registration, imaging, image transfer, and contouring processes that are common to all radiation therapy techniques. The failure modes with the highest hazard scores are related to imperfect frame adaptor attachment, bad fiducial box assembly, overlooked target areas, inaccurate previous treatment information, and excessive patient movement during MRI scan. Conclusion: The implementation of the FMEA approach for Gamma Knife radiosurgery enabled deeper understanding of the overall process among all professionals involved in the care of the patient and helped identify potential weaknesses in the overall process.
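    The RPN calculation described in the Methods (per-rater 1-10 scores for O, S and D, averaged and multiplied) can be sketched directly. The failure-mode names and ratings below are illustrative, not the study's data:

    ```python
    from statistics import mean

    def rpn(scores):
        """Risk priority number for one failure mode: RPN = O x S x D, where
        each factor is the mean of the 1-10 ratings from all raters."""
        occurrence, severity, detection = (mean(r) for r in scores)
        return occurrence * severity * detection

    # Illustrative ratings from three raters, as ([O...], [S...], [D...]):
    modes = {
        "frame adaptor attachment": ([4, 5, 4], [9, 8, 9], [6, 7, 6]),
        "image transfer error":     ([2, 2, 3], [7, 7, 8], [3, 4, 3]),
    }
    ranked = sorted(modes, key=lambda m: rpn(modes[m]), reverse=True)
    print(ranked[0])  # frame adaptor attachment (highest-hazard mode first)
    ```

    Sorting failure modes by RPN is what surfaces the highest-hazard items, such as the frame adaptor attachment issues noted in the Results.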

  19. Complete remission and early death after intensive chemotherapy in patients aged 60 years or older with acute myeloid leukaemia: a web-based application for prediction of outcomes.

    PubMed

    Krug, Utz; Röllig, Christoph; Koschmieder, Anja; Heinecke, Achim; Sauerland, Maria Cristina; Schaich, Markus; Thiede, Christian; Kramer, Michael; Braess, Jan; Spiekermann, Karsten; Haferlach, Torsten; Haferlach, Claudia; Koschmieder, Steffen; Rohde, Christian; Serve, Hubert; Wörmann, Bernhard; Hiddemann, Wolfgang; Ehninger, Gerhard; Berdel, Wolfgang E; Büchner, Thomas; Müller-Tidow, Carsten

    2010-12-11

    About 50% of patients (age ≥60 years) who have acute myeloid leukaemia and are otherwise medically healthy (ie, able to undergo intensive chemotherapy) achieve a complete remission (CR) after intensive chemotherapy, but with a substantially increased risk of early death (ED) compared with younger patients. We verified the association of standard clinical and laboratory variables with CR and ED and developed a web-based application for risk assessment of intensive chemotherapy in these patients. Multivariate regression analysis was used to develop risk scores with or without knowledge of the cytogenetic and molecular risk profiles for a cohort of 1406 patients (aged ≥60 years) with acute myeloid leukaemia, but otherwise medically healthy, who were treated with two courses of intensive induction chemotherapy (tioguanine, standard-dose cytarabine, and daunorubicin followed by high-dose cytarabine and mitoxantrone; or with high-dose cytarabine and mitoxantrone in the first and second induction courses) in the German Acute Myeloid Leukaemia Cooperative Group 1999 study. Risk prediction was validated in an independent cohort of 801 patients (aged >60 years) with acute myeloid leukaemia who were given two courses of cytarabine and daunorubicin in the Acute Myeloid Leukaemia 1996 study. Body temperature, age, de-novo leukaemia versus leukaemia secondary to cytotoxic treatment or an antecedent haematological disease, haemoglobin, platelet count, fibrinogen, and serum concentration of lactate dehydrogenase were significantly associated with CR or ED. The probability of CR with knowledge of cytogenetic and molecular risk (score 1) was from 12% to 91%, and without knowledge (score 2) from 21% to 80%. The predicted risk of ED was from 6% to 69% for score 1 and from 7% to 63% for score 2. 
The predictive power of the risk scores was confirmed in the independent patient cohort (CR score 1, from 10% to 91%; CR score 2, from 16% to 80%; ED score 1, from 6% to 69%; and ED score 2, from 7% to 61%). The scores for acute myeloid leukaemia can be used to predict the probability of CR and the risk of ED in older patients with acute myeloid leukaemia, but otherwise medically healthy, for whom intensive induction chemotherapy is planned. This information can help physicians with difficult decisions for treatment of these patients. Deutsche Krebshilfe and Deutsche Forschungsgemeinschaft. Copyright © 2010 Elsevier Ltd. All rights reserved.

  20. The Veterans Affairs Cardiac Risk Score: Recalibrating the Atherosclerotic Cardiovascular Disease Score for Applied Use.

    PubMed

    Sussman, Jeremy B; Wiitala, Wyndy L; Zawistowski, Matthew; Hofer, Timothy P; Bentley, Douglas; Hayward, Rodney A

    2017-09-01

    Accurately estimating cardiovascular risk is fundamental to good decision-making in cardiovascular disease (CVD) prevention, but risk scores developed in one population often perform poorly in dissimilar populations. We sought to examine whether a large integrated health system can use its electronic health data to better predict individual patients' risk of developing CVD. We created a cohort of all patients ages 45-80 who used Department of Veterans Affairs (VA) ambulatory care services in 2006 and had no history of CVD or heart failure and no loop diuretic use. Our outcome variable was new-onset CVD in 2007-2011. We then developed a series of recalibrated scores, including a fully refit "VA Risk Score-CVD (VARS-CVD)," and tested the different scores using standard measures of prediction quality. For the 1,512,092 patients in the study, the Atherosclerotic Cardiovascular Disease (ASCVD) risk score had similar discrimination to the VARS-CVD (c-statistic of 0.66 in men and 0.73 in women), but the ASCVD model had poor calibration, predicting 63% more events than observed. Calibration was excellent in the fully recalibrated VARS-CVD tool, but the simpler techniques tested proved less reliable. We found that local electronic health record data can be used to estimate CVD risk better than an established risk score based on research populations. Recalibration improved estimates dramatically, and the type of recalibration was important. Such tools can also easily be integrated into a health system's electronic health record and can be readily updated.

  1. Research Options for Controlling Zoonotic Disease in India, 2010–2015

    PubMed Central

    Sekar, Nitin; Shah, Naman K.; Abbas, Syed Shahid; Kakkar, Manish

    2011-01-01

    Background Zoonotic infections pose a significant public health challenge for low- and middle-income countries and have traditionally been a neglected area of research. The Roadmap to Combat Zoonoses in India (RCZI) initiative conducted an exercise to systematically identify and prioritize research options needed to control zoonoses in India. Methods and Findings Priority setting methods developed by the Child Health and Nutrition Research Initiative were adapted for the diversity of sectors, disciplines, diseases and populations relevant for zoonoses in India. A multidisciplinary group of experts identified priority zoonotic diseases and knowledge gaps and proposed research options to address key knowledge gaps within the next five years. Each option was scored using predefined criteria by another group of experts. The scores were weighted using relative ranks among the criteria based upon the feedback of a larger reference group. We categorized each research option by type of research, disease targeted, factorials, and level of collaboration required. We analysed the research options by tabulating them along these categories. Seventeen experts generated four universal research themes and 103 specific research options, the majority of which required a high to medium level of collaboration across sectors. Research options designated as pertaining to ‘social, political and economic’ factorials predominated and scored higher than options focussing on ecological, genetic and biological, or environmental factors. Research options related to ‘health policy and systems’ scored highest while those related to ‘research for development of new interventions’ scored the lowest. Conclusions We methodically identified research themes and specific research options incorporating perspectives of a diverse group of stakeholders. These outputs reflect the diverse nature of challenges posed by zoonoses and should be acceptable across diseases, disciplines, and sectors. 
The identified research options capture the need for ‘actionable research’ for advancing the prevention and control of zoonoses in India. PMID:21364879
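    The scoring step described in the Methods (per-criterion expert scores, weighted by ranks elicited from the reference group) can be sketched as a weighted average. The criteria names, scores and weights below are hypothetical:

    ```python
    def weighted_score(criterion_scores, weights):
        """Weighted overall score for one research option: each criterion's
        expert score is multiplied by a weight derived from the reference
        group's ranking of the criteria, then normalized."""
        assert len(criterion_scores) == len(weights)
        return sum(s * w for s, w in zip(criterion_scores, weights)) / sum(weights)

    # Hypothetical: three criteria (e.g. answerability, impact, equity) whose
    # relative ranks yield weights 3, 2 and 1, and one option's expert scores:
    option_scores = [0.8, 0.6, 0.9]
    print(round(weighted_score(option_scores, [3, 2, 1]), 3))  # 0.75
    ```

    Repeating this for all 103 research options and sorting by the result is what produces the priority ordering discussed in the Findings.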

  2. [Study on the infectious risk model of AIDS among men who have sex with men in Guangzhou].

    PubMed

    Hu, Pei; Zhong, Fei; Cheng, Wei-Bin; Xu, Hui-Fang; Ling, Li

    2012-07-01

To develop a human immunodeficiency virus (HIV) infection risk appraisal model suitable for men who have sex with men (MSM) in Guangzhou, and to provide a tool for following up the outcomes of health education and behavioral intervention. A cross-sectional study was conducted in Guangzhou from 2008 to 2010. Based on HIV surveillance data, the main risk factors for HIV infection among MSM were screened by means of logistic regression. The degree of relative risk was transformed into risk scores using statistical models. Individual risk scores, group risk scores, and individual infection risk relative to the general MSM population could then be calculated from the rates of exposure to those risk factors in the surveillance data. Risk factors related to HIV infection among MSM, together with a quantitative assessment standard for those factors (risk scores and a risk-score table for population groups), were established by multiple logistic regression; the factors included age, location of registered residence, monthly income, main venue for finding sexual partners, HIV testing in the past year, age at first sexual intercourse, rate of condom use in the past six months, symptoms related to sexually transmitted diseases (STDs), and syphilis in particular. The average risk score of the population was 6.06, with risk scores for HIV positive and negative as 3.10 and 18.08 respectively (P < 0.001). The rates of HIV infection for the different score groups were 0.9%, 2.0%, 7.0%, 14.4%, and 33.3%, respectively. The sensitivity and specificity of the score-based prediction were 54.4% and 75.4%, respectively, with an accuracy of 74.2%. The HIV infection risk model can quantify and classify an individual's infection risk and related factors among MSM more directly and effectively, helping individuals identify their own high-risk behaviors and lifestyles. 
It could also serve as an important tool for personalized HIV health education and behavioral intervention programs.
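The score-construction step described above (logistic-regression coefficients transformed into additive risk scores) can be sketched as follows. This is a generic illustration of the approach, not the study's model: the factor names, coefficients, and scaling constant are invented.

```python
# Hedged sketch: turning logistic-regression coefficients (log odds
# ratios) into integer risk points that sum to an individual score.
# All factor names and coefficients below are hypothetical.

def coef_to_score(beta, scale=2.0):
    """Convert a coefficient to an integer point value proportional
    to the log odds ratio (scale is an illustrative choice)."""
    return round(beta * scale)

factors = {  # invented coefficients, not the study's fitted values
    "no HIV test in past year": 0.9,
    "inconsistent condom use":  1.4,
    "syphilis infection":       1.8,
}
scores = {name: coef_to_score(b) for name, b in factors.items()}

def individual_score(present_factors):
    """Sum the points for the risk factors an individual reports."""
    return sum(scores[f] for f in present_factors)
```

An individual's total can then be compared against score-group cutoffs to classify risk, as the abstract describes for its five score groups.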

  3. Ecological risk assessment of sedimentary hydrocarbons in a subtropical estuary as tools to select priority areas for environmental management.

    PubMed

    Dauner, Ana L L; Dias, Thais H; Ishii, Fernanda K; Libardoni, Bruno G; Parizzi, Rafael A; Martins, César C

    2018-06-23

    The concentration, distribution, and ecological risk of hydrocarbons, as well as bulk parameters, were determined in surface sediments of Babitonga Bay, a subtropical human-impacted estuary in the South Atlantic. Total aliphatic hydrocarbons and polycyclic aromatic hydrocarbons (PAHs) ranged from 0.8 to 201.2 μg g⁻¹ and from 8.7 to 5489 ng g⁻¹, respectively. Saguaçú Lagoon, the region near the ferry boat, and the vicinity of São Francisco harbour (SFH) presented high hydrocarbon concentrations. Despite the low accumulation trend in this region, the SFH and the city may act as a point source of hydrocarbons. The inner portion of the estuary had the finest sediment grains and the highest concentrations of carbon, nitrogen, and sulphur, indicating its importance as a depositional and accumulation area. The occurrence of an unresolved complex mixture suggested chronic oil contamination. Petrogenic (based on the high percentage of alkylated PAHs) and pyrolytic (according to the diagnostic ratios of PAH isomer pairs) sources were confirmed. Ecological risk was assessed using the risk quotient (RQ). All samples had at least one priority PAH present above its negligible concentration, including naphthalene, which was observed in all samples. Only the sites near the ferry boat and at Saguaçú Lagoon contained compounds at concentrations above their maximum permissible concentrations, while all other sampling sites were classified as "low risk." The spatial distribution of RQs coincided with the PAH distribution, indicating that the regions near the SFH, the ferry boat, and Saguaçú Lagoon should be considered priority areas in environmental monitoring policies. Copyright © 2018 Elsevier Ltd. All rights reserved.
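The risk-quotient classification used in the abstract compares each measured concentration against two reference values: a negligible concentration (NC) and a maximum permissible concentration (MPC). A minimal sketch of that logic, with illustrative numbers rather than the study's data:

```python
# Hedged sketch of RQ-based site classification. The reference values
# and measured concentrations in the usage example are invented; real
# NC/MPC values are compound-specific regulatory thresholds.

def risk_quotient(measured, reference):
    """RQ = measured concentration / reference concentration."""
    return measured / reference

def classify_site(conc, nc, mpc):
    """Classify one compound at one site by its risk quotients."""
    if risk_quotient(conc, mpc) >= 1:
        return "high risk (above MPC)"
    if risk_quotient(conc, nc) >= 1:
        return "moderate risk (above NC)"
    return "low risk"
```

A site where every compound falls below its NC would come out "low risk", matching the abstract's classification of most sampling sites.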

  4. [Clinical scores for the risk of bleeding with or without anticoagulation].

    PubMed

    Junod, Alain

    2016-09-14

    The assessment of hemorrhagic risk related to therapeutic anticoagulation is made difficult by the variety of existing drugs and the heterogeneity of treatment strategies and their durations. Six prognostic scores were analyzed. For three of them, external validations revealed a marked decrease in discriminative power. One British study, QBleed, based on data from more than 1 million ambulatory patients, repeatedly satisfied quality criteria. Two scores have also addressed the bleeding risk during hospital admission for acute medical illness. The development of new, effective anticoagulants with fewer side effects is more likely to solve this problem than the production of new clinical scores.

  5. Los Angeles County Department of Public Health's Health Hazard Assessment: putting the "health" into hazard assessment.

    PubMed

    Dean, Brandon; Bagwell, Dee Ann; Dora, Vinita; Khan, Sinan; Plough, Alonzo

    2013-01-01

    All communities, explicitly or implicitly, assess and prepare for the natural and manmade hazards that they know could impact their community. The commonality of hazard-based threats across nearly all communities does not usually result in standard or evidence-based preparedness practices and outcomes across those communities. Without specific efforts to build a shared perspective and prioritization, "all-hazards" preparedness can result in a random hodgepodge of priorities and preparedness strategies, resulting in diminished emergency response capabilities. Traditional risk assessments, with their focus on physical infrastructure, do not capture the potential health and medical impacts of specific hazards and threats. With the implementation of the Centers for Disease Control and Prevention's capability-based planning, there is broad recognition that a health-focused hazard assessment process--one that engages the "Whole of Community"--is needed. Los Angeles County's Health Hazard Assessment and Prioritization tool provides a practical and innovative approach to enhancing existing planning capacities. Successful use of this tool can help local and state health agencies and officials more effectively identify the health consequences of hazard-specific threats and risks, determine priorities, and develop improved and better-coordinated agency planning, including community engagement in prioritization.

  6. Risk Score for Detecting Dysglycemia: A Cross-Sectional Study of a Working-Age Population in an Oil Field in China.

    PubMed

    Tian, Xiubiao; Liu, Yan; Han, Ying; Shi, Jieli; Zhu, Tiehong

    2017-06-11

    BACKGROUND Dysglycemia (pre-diabetes or diabetes) in young adults has increased rapidly. However, the risk scores for detecting dysglycemia in oil field staff and workers in China are limited. This study developed a risk score for the early identification of dysglycemia based on epidemiological and health examination data in an oil field working-age population with increased risk of diabetes. MATERIAL AND METHODS Multivariable logistic regression was used to develop the risk score model in a population-based, cross-sectional study. All subjects completed the questionnaires and underwent physical examination and oral glucose tolerance tests. The performance of the risk score models was evaluated using the area under the receiver operating characteristic curve (AUC). RESULTS The study population consisted of 1995 participants, 20-64 years old (49.4% males), with undiagnosed diabetes or pre-diabetes who underwent periodic health examinations from March 2014 to June 2015 in Dagang oil field, Tianjin, China. Age, sex, body mass index, history of high blood glucose, smoking, triglyceride, and fasting plasma glucose (FPG) constituted the Dagang dysglycemia risk score (Dagang DRS) model. The performance of Dagang DRS was superior to m-FINDRISC (AUC: 0.791; 95% confidence interval (CI), 0.773-0.809 vs. 0.633; 95% CI, 0.611-0.654). At the cut-off value of 5.6 mmol/L, the Dagang DRS (AUC: 0.616; 95% CI, 0.592-0.641) was better than the FPG value alone (AUC: 0.571; 95% CI, 0.546-0.596) in participants with FPG <6.1 mmol/L (n=1545, P=0.028). CONCLUSIONS Dagang DRS is a valuable tool for detecting dysglycemia, especially when FPG <6.1 mmol/L, in oil field workers in China.

  7. Predictors of violent behavior among acute psychiatric patients: clinical study.

    PubMed

    Amore, Mario; Menchetti, Marco; Tonti, Cristina; Scarlatti, Fabiano; Lundgren, Eva; Esposito, William; Berardi, Domenico

    2008-06-01

    Violence risk prediction is a priority issue for clinicians working with mentally disordered offenders. The aim of the present study was to determine violence risk factors in acute psychiatric inpatients. The study was conducted in a locked, short-term psychiatric inpatient unit and involved 374 patients consecutively admitted over a 1-year period. Sociodemographic and clinical data were obtained through a review of the medical records and patient interviews. Psychiatric symptoms at admission were assessed using the Brief Psychiatric Rating Scale (BPRS). Psychiatric diagnosis was formulated using the Structured Clinical Interview for DSM-IV. Past aggressive behavior was evaluated by interviewing patients, caregivers, or other collateral informants. Aggressive behaviors in the ward were assessed using the Overt Aggression Scale. Patients who perpetrated verbal and against-object aggression or physical aggression in the month before admission were compared to non-aggressive patients; in addition, aggressive behavior during hospitalization and the persistence of physical violence after admission were evaluated. Violent behavior in the month before admission was associated with male sex, substance abuse, and positive symptoms. The most significant risk factor for physical violence was a past history of physically aggressive behavior. Persistent physical assaultiveness before and during hospitalization was related to higher BPRS total scores and to more severe thought disturbances. Higher hostility-suspiciousness BPRS scores predicted a change for the worse in violent behavior, from verbal to physical. A comprehensive evaluation of the history of past aggressive behavior and psychopathological variables has important implications for the prediction of violence in psychiatric settings.

  8. Practical Implementation of Failure Mode and Effects Analysis for Safety and Efficiency in Stereotactic Radiosurgery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Younge, Kelly Cooper, E-mail: kyounge@med.umich.edu; Wang, Yizhen; Thompson, John

    2015-04-01

    Purpose: To improve the safety and efficiency of a new stereotactic radiosurgery program with the application of failure mode and effects analysis (FMEA) performed by a multidisciplinary team of health care professionals. Methods and Materials: Representatives included physicists, therapists, dosimetrists, oncologists, and administrators. A detailed process tree was created from an initial high-level process tree to facilitate the identification of possible failure modes. Group members were asked to determine failure modes that they considered to be the highest risk before scoring failure modes. Risk priority numbers (RPNs) were determined by each group member individually and then averaged. Results: A total of 99 failure modes were identified. The 5 failure modes with an RPN above 150 were further analyzed to attempt to reduce these RPNs. Only 1 of the initial items that the group presumed to be high risk (magnetic resonance imaging laterality reversed) was ranked in these top 5 items. New process controls were put in place to reduce the severity, occurrence, and detectability scores for all of the top 5 failure modes. Conclusions: FMEA is a valuable team activity that can assist in the creation or restructuring of a quality assurance program with the aim of improved safety, quality, and efficiency. Performing the FMEA helped group members to see how they fit into the bigger picture of the program, and it served to reduce biases and preconceived notions about which elements of the program were the riskiest.
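The scoring workflow in this abstract (each rater assigns severity, occurrence, and detectability; the conventional RPN = S × O × D is computed per rater and averaged; modes above 150 are flagged) can be sketched as follows. The failure-mode names and ratings are invented for illustration, not the study's data.

```python
# Hedged sketch of per-rater RPN scoring and averaging, using the
# standard FMEA product RPN = severity * occurrence * detectability.
# Names and rating tuples below are hypothetical.

def average_rpn(ratings):
    """ratings: list of (severity, occurrence, detectability) tuples,
    one per rater on a 1-10 scale. Returns the mean RPN."""
    rpns = [s * o * d for s, o, d in ratings]
    return sum(rpns) / len(rpns)

failure_modes = {
    "MRI laterality reversed": [(9, 2, 8), (10, 3, 7), (9, 2, 9)],
    "Wrong couch angle":       [(4, 3, 3), (5, 2, 4), (4, 3, 3)],
}

# Flag failure modes whose averaged RPN exceeds the study's 150 cutoff.
high_risk = {name: average_rpn(r)
             for name, r in failure_modes.items()
             if average_rpn(r) > 150}
```

Averaging across raters, rather than having the group negotiate a single score, is one common way to dampen individual bias; the abstract's process is consistent with this.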

  9. Effects of bottom-up and top-down intervention principles in emergent literacy in children at risk of developmental dyslexia: a longitudinal study.

    PubMed

    Helland, Turid; Tjus, Tomas; Hovden, Marit; Ofte, Sonja; Heimann, Mikael

    2011-01-01

    This longitudinal study focused on the effects of two different principles of intervention in children at risk of developing dyslexia from 5 to 8 years old. The children were selected on the basis of a background questionnaire given to parents and preschool teachers, with cognitive and functional magnetic resonance imaging results substantiating group differences in neuropsychological processes associated with phonology, orthography, and phoneme-grapheme correspondence (i.e., alphabetic principle). The two principles of intervention were bottom-up (BU), "from sound to meaning", and top-down (TD), "from meaning to sound." Thus, four subgroups were established: risk/BU, risk/TD, control/BU, and control/TD. Computer-based training took place for 2 months every spring, and cognitive assessments were performed each fall of the project period. Measures of preliteracy skills for reading and spelling were phonological awareness, working memory, verbal learning, and letter knowledge. Literacy skills were assessed by word reading and spelling. At project end the control group scored significantly above age norm, whereas the risk group scored within the norm. In the at-risk group, training based on the BU principle had the strongest effects on phonological awareness and working memory scores, whereas training based on the TD principle had the strongest effects on verbal learning, letter knowledge, and literacy scores. It was concluded that appropriate, specific, data-based intervention starting in preschool can mitigate literacy impairment and that interventions should contain BU training for preliteracy skills and TD training for literacy training.

  10. A pilot study of an online workplace nutrition program: the value of participant input in program development.

    PubMed

    Cousineau, Tara; Houle, Brian; Bromberg, Jonas; Fernandez, Kathrine C; Kling, Whitney C

    2008-01-01

    Tailored nutrition Web programs constitute an emerging trend in obesity prevention. Initial investment in innovative technology necessitates that the target population be well understood. This pilot study's purpose was to determine the feasibility of a workplace nutrition Web program. Formative research was conducted with gaming industry employees and benefits managers to develop a consensus on workplace-specific nutrition needs. A demonstration Web program was piloted with stakeholders to determine feasibility. Indiana, Mississippi, Nevada, and New Jersey gaming establishments. 86 employees, 18 benefits managers. Prototype Web program. Concept mapping; 16-item nutrition knowledge test; satisfaction. Concept mapping was used to aggregate importance ratings on programmatic content, which informed Web program curriculum. Chi-square tests were performed postintervention to determine knowledge improvement. (1) Employees and benefits managers exhibited moderate agreement about content priorities for the program (r = 0.48). (2) There was a significant increase in employees' nutrition knowledge scores postintervention (t = 7.16, df = 36, P < .001); those with less knowledge exhibited the greatest gains in knowledge scores (r = -0.647, P < .001). Employees and benefit managers do not necessarily agree on the priority of nutrition-related content, suggesting a need for programs to appeal to various stakeholders. Computer-based approaches can address various stakeholder health concerns via tailored, customized programming.

  11. Weighted Fuzzy Risk Priority Number Evaluation of Turbine and Compressor Blades Considering Failure Mode Correlations

    NASA Astrophysics Data System (ADS)

    Gan, Luping; Li, Yan-Feng; Zhu, Shun-Peng; Yang, Yuan-Jian; Huang, Hong-Zhong

    2014-06-01

    Failure mode, effects and criticality analysis (FMECA) and fault tree analysis (FTA) are powerful tools for evaluating system reliability. Although single failure modes can be efficiently addressed by traditional FMECA, multiple failure modes and component correlations in complex systems cannot be effectively evaluated. In addition, correlated variables and parameters are often assumed to be precisely known in quantitative analysis. In fact, due to the lack of information, epistemic uncertainty commonly exists in engineering design. To solve these problems, the advantages of FMECA, FTA, fuzzy theory, and Copula theory are integrated into a unified hybrid method called the fuzzy probability weighted geometric mean (FPWGM) risk priority number (RPN) method. The epistemic uncertainty of risk variables and parameters is characterized by fuzzy numbers to obtain a fuzzy weighted geometric mean (FWGM) RPN for each single failure mode. Multiple failure modes are connected using minimum cut sets (MCS), and Boolean logic is used to combine the fuzzy risk priority number (FRPN) of each MCS. Moreover, Copula theory is applied to analyze the correlation of multiple failure modes in order to derive the failure probability of each MCS. Compared to the case where dependency among multiple failure modes is not considered, the Copula modeling approach eliminates error in the reliability analysis. Furthermore, for quantitative analysis, importance weights derived from the failure probabilities are assigned to the FWGM RPN to reassess the risk priority; this generalizes the definitions of probability weight and FRPN and yields a more accurate estimate than traditional models. Finally, a basic fatigue analysis case drawn from turbine and compressor blades in an aeroengine is used to demonstrate the effectiveness and robustness of the presented method. 
The results provide important insights into fatigue reliability analysis and risk priority assessment of structural systems under failure correlations.
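The crisp core of the weighted-geometric-mean RPN idea is to replace the product S × O × D with a geometric mean in which each risk factor carries a weight. The paper additionally fuzzifies the inputs; the sketch below shows only the underlying weighted geometric mean, with illustrative weights that are not the paper's values.

```python
# Hedged sketch of a weighted geometric mean RPN:
#   RPN = S^wS * O^wO * D^wD, with wS + wO + wD = 1.
# This is the crisp (non-fuzzy) form; the FPWGM method in the abstract
# further represents S, O, D as fuzzy numbers. Weights are assumptions.

def wgm_rpn(severity, occurrence, detectability, weights=(1/3, 1/3, 1/3)):
    """Weighted geometric mean RPN of three 1-10 risk factors."""
    ws, wo, wd = weights
    assert abs(ws + wo + wd - 1) < 1e-9, "weights must sum to 1"
    return (severity ** ws) * (occurrence ** wo) * (detectability ** wd)
```

With equal weights this stays on the same 1-10 scale as the inputs (unlike the 1-1000 product RPN), and unequal weights let severity dominate when that reflects the application.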

  12. Efficacy of functional movement screening for predicting injuries in coast guard cadets.

    PubMed

    Knapik, Joseph J; Cosio-Lima, Ludimila M; Reynolds, Katy L; Shumway, Richard S

    2015-05-01

    Functional movement screening (FMS) examines the ability of individuals to perform highly specific movements with the aim of identifying individuals who have functional limitations or asymmetries. It is assumed that individuals who can more effectively accomplish the required movements have a lower injury risk. This study determined the ability of FMS to predict injuries in United States Coast Guard (USCG) cadets. Seven hundred seventy male and 275 female USCG freshman cadets were administered the 7 FMS tests before the physically intense 8-week Summer Warfare Annual Basic (SWAB) training. Physical training-related injuries were recorded during SWAB training. Cumulative injury incidence was calculated at various FMS cutpoint scores. The ability of the FMS total score to predict injuries was examined by calculating sensitivity and specificity, and the FMS cutpoint that maximized sensitivity and specificity was determined from Youden's index (sensitivity + specificity - 1). For men, FMS scores ≤ 12 were associated with higher injury risk than scores >12; for women, FMS scores ≤ 15 were associated with higher injury risk than scores >15. Youden's index indicated that the optimal FMS cutpoint was ≤ 11 for men (22% sensitivity, 87% specificity) and ≤ 14 for women (60% sensitivity, 61% specificity). Functional movement screening demonstrated moderate prognostic accuracy for determining injury risk among female Coast Guard cadets but relatively low accuracy among male cadets. Attempting to predict injury risk based on the FMS test seems to have some limited promise based on the present and past investigations.
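The cutpoint selection described in this abstract can be sketched directly from the formula it gives: scan candidate cutpoints, compute sensitivity and specificity for "score ≤ cutpoint flags high risk", and keep the cutpoint with the largest Youden's index. The scores and injury labels below are made up for illustration.

```python
# Hedged sketch: choosing a screening cutpoint by Youden's index
# (sensitivity + specificity - 1). Assumes both injured and uninjured
# subjects are present in the data. All data here are hypothetical.

def youden_optimal_cutpoint(scores, injured):
    """scores: per-subject FMS-style totals; injured: parallel bools.
    Returns (cutpoint, index), where score <= cutpoint flags high risk."""
    best_cut, best_j = None, float("-inf")
    for cut in sorted(set(scores)):
        tp = sum(1 for s, inj in zip(scores, injured) if s <= cut and inj)
        fn = sum(1 for s, inj in zip(scores, injured) if s > cut and inj)
        tn = sum(1 for s, inj in zip(scores, injured) if s > cut and not inj)
        fp = sum(1 for s, inj in zip(scores, injured) if s <= cut and not inj)
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        j = sens + spec - 1
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut, best_j
```

Youden's index weights sensitivity and specificity equally; when missed injuries are costlier than false alarms, a weighted variant would shift the cutpoint accordingly.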

  13. Towards a Canadian Educational Research Policy. (Vers une Politique Canadienne de la Recherche Pedagogique.)

    ERIC Educational Resources Information Center

    Canadian Council for Research in Education, Ottawa (Ontario).

    Based on workshop discussions, this report attempts to lay down broad guidelines for Canadian educational research and development. The guidelines postulate that a comprehensive policy should include a pattern of priorities that (1) encourage the development of high quality scholarly institutions, (2) provide for risk capital for exploratory basic…

  14. Substance Use as a Longitudinal Predictor of the Perpetration of Teen Dating Violence

    ERIC Educational Resources Information Center

    Temple, Jeff R.; Shorey, Ryan C.; Fite, Paula; Stuart, Gregory L.; Le, Vi Donna

    2013-01-01

    The prevention of teen dating violence is a major public health priority. However, the dearth of longitudinal studies makes it difficult to develop programs that effectively target salient risk factors. Using a school-based sample of ethnically diverse adolescents, this longitudinal study examined whether substance use (alcohol, marijuana, and…

  15. America's Challenge: Effective Teachers for At-Risk Schools and Students

    ERIC Educational Resources Information Center

    Dwyer, Carol A., Ed.

    2007-01-01

    The National Comprehensive Center for Teacher Quality (NCCTQ) was launched in 2005 as part of a comprehensive system of content-based technical assistance to support states in implementing the priorities of the No Child Left Behind (NCLB) Act. NCCTQ's mission is to support Regional Comprehensive Centers (RCCs), states, and other education…

  16. Evaluation of the Prostate Cancer Prevention Trial Risk Calculator in a High-Risk Screening Population

    PubMed Central

    Kaplan, David J.; Boorjian, Stephen A.; Ruth, Karen; Egleston, Brian L.; Chen, David Y.T.; Viterbo, Rosalia; Uzzo, Robert G.; Buyyounouski, Mark K.; Raysor, Susan; Giri, Veda N.

    2009-01-01

    Introduction Clinical factors in addition to PSA have been evaluated to improve risk assessment for prostate cancer. The Prostate Cancer Prevention Trial (PCPT) risk calculator provides an assessment of prostate cancer risk based on age, PSA, race, prior biopsy, and family history. This study evaluated the risk calculator in a screening cohort of young, racially diverse, high-risk men with a low baseline PSA enrolled in the Prostate Cancer Risk Assessment Program (PRAP). Patients and Methods Eligible PRAP participants are men ages 35-69 who are African-American, have a family history of prostate cancer, or have a known BRCA1/2 mutation. PCPT risk scores were determined for PRAP participants and compared to observed prostate cancer rates. Results 624 participants were evaluated, including 382 (61.2%) African-American men and 375 (60%) men with a family history of prostate cancer. Median age was 49.0 years (range 34.0-69.0), and median PSA was 0.9 (range 0.1-27.2). PCPT risk score correlated with prostate cancer diagnosis: the median baseline risk score in patients diagnosed with prostate cancer was 31.3%, versus 14.2% in patients not diagnosed with prostate cancer (p<0.0001). The PCPT calculator similarly stratified the risk of diagnosis of Gleason score ≥7 disease, as the median risk score was 36.2% in patients diagnosed with Gleason ≥7 prostate cancer versus 15.2% in all other participants (p<0.0001). Conclusion The PCPT risk calculator score was found to stratify prostate cancer risk in a cohort of young, primarily African-American men with a low baseline PSA. These results support further evaluation of this predictive tool for prostate cancer risk assessment in high-risk men. PMID:19709072

  17. An overview of the effectiveness and efficiency of HIV prevention programs.

    PubMed Central

    Holtgrave, D R; Qualls, N L; Curran, J W; Valdiserri, R O; Guinan, M E; Parra, W C

    1995-01-01

    Because of the enormity of the HIV-AIDS epidemic and the urgency for preventing transmission, HIV prevention programs are a high priority for careful and timely evaluations. Information on program effectiveness and efficiency is needed for decision-making about future HIV prevention priorities. General characteristics of successful HIV prevention programs, programs empirically evaluated and found to change (or not change) high-risk behaviors or in need of further empirical study, and economic evaluations of certain programs are described and summarized with attention limited to programs that have a behavioral basis. HIV prevention programs have an impact on averting or reducing risk behaviors, particularly when they are delivered with sufficient resources, intensity, and cultural competency and are based on a firm foundation of behavioral and social science theory and past research. Economic evaluations have found that some of these behaviorally based programs yield net economic benefits to society, and others are likely cost-effective (even if not cost-saving) relative to other health programs. Still, specific improvements should be made in certain HIV prevention programs. PMID:7630989

  18. Health literacy is associated with healthy eating index scores and sugar-sweetened beverage intake: findings from the rural lower Mississippi delta

    USDA-ARS?s Scientific Manuscript database

    Although health literacy has been a public health priority area for more than a decade, the relationship between health literacy and dietary quality has not been thoroughly explored. This study evaluates health literacy skills in relation to Healthy Eating Index (HEI) scores and sugar-sweetened bev...

  19. The Fight's Not Always Fixed: Using Literary Response to Transcend Standardized Test Scores

    ERIC Educational Resources Information Center

    Avila, JuliAnna

    2012-01-01

    In 2004, the National Endowment for the Arts (NEA) concluded that "literature reading is fading as a meaningful activity, especially among younger people." How can educators continue to teach students about the power of literary response when the priority is for them to achieve proficiency on standardized tests, whose scores can only be narrowly…

  20. Individual traveller health priorities and the pre-travel health consultation.

    PubMed

    Flaherty, Gerard T; Chen, Bingling; Avalos, Gloria

    2017-09-01

    The purpose of this study was to examine the principal travel health priorities of travellers. The most frequently selected travel health concerns were accessing medical care abroad, dying abroad, insect bites, malaria, personal safety and travel security threats. The travel health risks of least concern were culture shock, fear of flying, jet lag and sexually transmitted infections. This study is the first to develop a hierarchy of self-declared travel health risk priorities among travellers. © International Society of Travel Medicine, 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
