Methodological quality assessment of paper-based systematic reviews published in oral health.
Wasiak, J; Shen, A Y; Tan, H B; Mahar, R; Kan, G; Khoo, W R; Faggion, C M
2016-04-01
This study aimed to conduct a methodological assessment of paper-based systematic reviews (SR) published in oral health using a validated checklist. A secondary objective was to explore temporal trends on methodological quality. Two electronic databases (OVID Medline and OVID EMBASE) were searched for paper-based SR of interventions published in oral health from inception to October 2014. Manual searches of the reference lists of paper-based SR were also conducted. Methodological quality of included paper-based SR was assessed using an 11-item questionnaire, Assessment of Multiple Systematic Reviews (AMSTAR) checklist. Methodological quality was summarized using the median and inter-quartile range (IQR) of the AMSTAR score over different categories and time periods. A total of 643 paper-based SR were included. The overall median AMSTAR score was 4 (IQR 2-6). The highest median score (5) was found in the pain dentistry and periodontology fields, while the lowest median score (3) was found in implant dentistry, restorative dentistry, oral medicine, and prosthodontics. The number of paper-based SR per year and the median AMSTAR score increased over time (median score in 1990s was 2 (IQR 2-3), 2000s was 4 (IQR 2-5), and 2010 onwards was 5 (IQR 3-6)). Although the methodological quality of paper-based SR published in oral health has improved in the last few years, there is still scope for improving quality in most evaluated dental specialties. Large-scale assessment of methodological quality of dental SR highlights areas of methodological strengths and weaknesses that can be targeted in future publications to encourage better quality review methodology.
Performance-based methodology for assessing seismic vulnerability and capacity of buildings
NASA Astrophysics Data System (ADS)
Shibin, Lin; Lili, Xie; Maosheng, Gong; Ming, Li
2010-06-01
This paper presents a performance-based methodology for the assessment of seismic vulnerability and capacity of buildings. The vulnerability assessment methodology is based on the HAZUS methodology and the improved capacitydemand-diagram method. The spectral displacement ( S d ) of performance points on a capacity curve is used to estimate the damage level of a building. The relationship between S d and peak ground acceleration (PGA) is established, and then a new vulnerability function is expressed in terms of PGA. Furthermore, the expected value of the seismic capacity index (SCev) is provided to estimate the seismic capacity of buildings based on the probability distribution of damage levels and the corresponding seismic capacity index. The results indicate that the proposed vulnerability methodology is able to assess seismic damage of a large number of building stock directly and quickly following an earthquake. The SCev provides an effective index to measure the seismic capacity of buildings and illustrate the relationship between the seismic capacity of buildings and seismic action. The estimated result is compared with damage surveys of the cities of Dujiangyan and Jiangyou in the M8.0 Wenchuan earthquake, revealing that the methodology is acceptable for seismic risk assessment and decision making. The primary reasons for discrepancies between the estimated results and the damage surveys are discussed.
Patrizi, Alfredo; Pennestrì, Ettore; Valentini, Pier Paolo
2016-01-01
The paper deals with the comparison between a high-end marker-based acquisition system and a low-cost marker-less methodology for the assessment of the human posture during working tasks. The low-cost methodology is based on the use of a single Microsoft Kinect V1 device. The high-end acquisition system is the BTS SMART that requires the use of reflective markers to be placed on the subject's body. Three practical working activities involving object lifting and displacement have been investigated. The operational risk has been evaluated according to the lifting equation proposed by the American National Institute for Occupational Safety and Health. The results of the study show that the risk multipliers computed from the two acquisition methodologies are very close for all the analysed activities. In agreement to this outcome, the marker-less methodology based on the Microsoft Kinect V1 device seems very promising to promote the dissemination of computer-aided assessment of ergonomics while maintaining good accuracy and affordable costs. PRACTITIONER’S SUMMARY: The study is motivated by the increasing interest for on-site working ergonomics assessment. We compared a low-cost marker-less methodology with a high-end marker-based system. We tested them on three different working tasks, assessing the working risk of lifting loads. The two methodologies showed comparable precision in all the investigations.
Charpentier, R.R.; Klett, T.R.
2005-01-01
During the last 30 years, the methodology for assessment of undiscovered conventional oil and gas resources used by the Geological Survey has undergone considerable change. This evolution has been based on five major principles. First, the U.S. Geological Survey has responsibility for a wide range of U.S. and world assessments and requires a robust methodology suitable for immaturely explored as well as maturely explored areas. Second, the assessments should be based on as comprehensive a set of geological and exploration history data as possible. Third, the perils of methods that solely use statistical methods without geological analysis are recognized. Fourth, the methodology and course of the assessment should be documented as transparently as possible, within the limits imposed by the inevitable use of subjective judgement. Fifth, the multiple uses of the assessments require a continuing effort to provide the documentation in such ways as to increase utility to the many types of users. Undiscovered conventional oil and gas resources are those recoverable volumes in undiscovered, discrete, conventional structural or stratigraphic traps. The USGS 2000 methodology for these resources is based on a framework of assessing numbers and sizes of undiscovered oil and gas accumulations and the associated risks. The input is standardized on a form termed the Seventh Approximation Data Form for Conventional Assessment Units. Volumes of resource are then calculated using a Monte Carlo program named Emc2, but an alternative analytic (non-Monte Carlo) program named ASSESS also can be used. The resource assessment methodology continues to change. Accumulation-size distributions are being examined to determine how sensitive the results are to size-distribution assumptions. The resource assessment output is changing to provide better applicability for economic analysis. The separate methodology for assessing continuous (unconventional) resources also has been evolving. 
Further studies of the relationship between geologic models of conventional and continuous resources will likely impact the respective resource assessment methodologies. ?? 2005 International Association for Mathematical Geology.
Methodological Issues in Curriculum-Based Reading Assessment.
ERIC Educational Resources Information Center
Fuchs, Lynn S.; And Others
1984-01-01
Three studies involving elementary students examined methodological issues in curriculum-based reading assessment. Results indicated that (1) whereas sample duration did not affect concurrent validity, increasing duration reduced performance instability and increased performance slopes and (2) domain size was related inversely to performance slope…
A Security Assessment Mechanism for Software-Defined Networking-Based Mobile Networks.
Luo, Shibo; Dong, Mianxiong; Ota, Kaoru; Wu, Jun; Li, Jianhua
2015-12-17
Software-Defined Networking-based Mobile Networks (SDN-MNs) are considered the future of 5G mobile network architecture. With the evolving cyber-attack threat, security assessments need to be performed in the network management. Due to the distinctive features of SDN-MNs, such as their dynamic nature and complexity, traditional network security assessment methodologies cannot be applied directly to SDN-MNs, and a novel security assessment methodology is needed. In this paper, an effective security assessment mechanism based on attack graphs and an Analytic Hierarchy Process (AHP) is proposed for SDN-MNs. Firstly, this paper discusses the security assessment problem of SDN-MNs and proposes a methodology using attack graphs and AHP. Secondly, to address the diversity and complexity of SDN-MNs, a novel attack graph definition and attack graph generation algorithm are proposed. In order to quantify security levels, the Node Minimal Effort (NME) is defined to quantify attack cost and derive system security levels based on NME. Thirdly, to calculate the NME of an attack graph that takes the dynamic factors of SDN-MN into consideration, we use AHP integrated with the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS) as the methodology. Finally, we offer a case study to validate the proposed methodology. The case study and evaluation show the advantages of the proposed security assessment mechanism.
A Security Assessment Mechanism for Software-Defined Networking-Based Mobile Networks
Luo, Shibo; Dong, Mianxiong; Ota, Kaoru; Wu, Jun; Li, Jianhua
2015-01-01
Software-Defined Networking-based Mobile Networks (SDN-MNs) are considered the future of 5G mobile network architecture. With the evolving cyber-attack threat, security assessments need to be performed in the network management. Due to the distinctive features of SDN-MNs, such as their dynamic nature and complexity, traditional network security assessment methodologies cannot be applied directly to SDN-MNs, and a novel security assessment methodology is needed. In this paper, an effective security assessment mechanism based on attack graphs and an Analytic Hierarchy Process (AHP) is proposed for SDN-MNs. Firstly, this paper discusses the security assessment problem of SDN-MNs and proposes a methodology using attack graphs and AHP. Secondly, to address the diversity and complexity of SDN-MNs, a novel attack graph definition and attack graph generation algorithm are proposed. In order to quantify security levels, the Node Minimal Effort (NME) is defined to quantify attack cost and derive system security levels based on NME. Thirdly, to calculate the NME of an attack graph that takes the dynamic factors of SDN-MN into consideration, we use AHP integrated with the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS) as the methodology. Finally, we offer a case study to validate the proposed methodology. The case study and evaluation show the advantages of the proposed security assessment mechanism. PMID:26694409
Assessment of methodologies for analysis of the dungeness B accidental aircraft crash risk.
DOE Office of Scientific and Technical Information (OSTI.GOV)
LaChance, Jeffrey L.; Hansen, Clifford W.
2010-09-01
The Health and Safety Executive (HSE) has requested Sandia National Laboratories (SNL) to review the aircraft crash methodology for nuclear facilities that are being used in the United Kingdom (UK). The scope of the work included a review of one method utilized in the UK for assessing the potential for accidental airplane crashes into nuclear facilities (Task 1) and a comparison of the UK methodology against similar International Atomic Energy Agency (IAEA), United States (US) Department of Energy (DOE), and the US Nuclear Regulatory Commission (NRC) methods (Task 2). Based on the conclusions from Tasks 1 and 2, an additionalmore » Task 3 would provide an assessment of a site-specific crash frequency for the Dungeness B facility using one of the other methodologies. This report documents the results of Task 2. The comparison of the different methods was performed for the three primary contributors to aircraft crash risk at the Dungeness B site: airfield related crashes, crashes below airways, and background crashes. The methods and data specified in each methodology were compared for each of these risk contributors, differences in the methodologies were identified, and the importance of these differences was qualitatively and quantitatively assessed. The bases for each of the methods and the data used were considered in this assessment process. A comparison of the treatment of the consequences of the aircraft crashes was not included in this assessment because the frequency of crashes into critical structures is currently low based on the existing Dungeness B assessment. Although the comparison found substantial differences between the UK and the three alternative methodologies (IAEA, NRC, and DOE) this assessment concludes that use of any of these alternative methodologies would not change the conclusions reached for the Dungeness B site. Performance of Task 3 is thus not recommended.« less
Charpentier, Ronald R.; Moore, Thomas E.; Gautier, D.L.
2017-11-15
The methodological procedures used in the geologic assessments of the 2008 Circum-Arctic Resource Appraisal (CARA) were based largely on the methodology developed for the 2000 U.S. Geological Survey World Petroleum Assessment. The main variables were probability distributions for numbers and sizes of undiscovered accumulations with an associated risk of occurrence. The CARA methodology expanded on the previous methodology in providing additional tools and procedures more applicable to the many Arctic basins that have little or no exploration history. Most importantly, geologic analogs from a database constructed for this study were used in many of the assessments to constrain numbers and sizes of undiscovered oil and gas accumulations.
USGS Methodology for Assessing Continuous Petroleum Resources
Charpentier, Ronald R.; Cook, Troy A.
2011-01-01
The U.S. Geological Survey (USGS) has developed a new quantitative methodology for assessing resources in continuous (unconventional) petroleum deposits. Continuous petroleum resources include shale gas, coalbed gas, and other oil and gas deposits in low-permeability ("tight") reservoirs. The methodology is based on an approach combining geologic understanding with well productivities. The methodology is probabilistic, with both input and output variables as probability distributions, and uses Monte Carlo simulation to calculate the estimates. The new methodology is an improvement of previous USGS methodologies in that it better accommodates the uncertainties in undrilled or minimally drilled deposits that must be assessed using analogs. The publication is a collection of PowerPoint slides with accompanying comments.
Wightman, Jade; Julio, Flávia; Virués-Ortega, Javier
2014-05-01
Experimental functional analysis is an assessment methodology to identify the environmental factors that maintain problem behavior in individuals with developmental disabilities and in other populations. Functional analysis provides the basis for the development of reinforcement-based approaches to treatment. This article reviews the procedures, validity, and clinical implementation of the methodological variations of functional analysis and function-based interventions. We present six variations of functional analysis methodology in addition to the typical functional analysis: brief functional analysis, single-function tests, latency-based functional analysis, functional analysis of precursors, and trial-based functional analysis. We also present the three general categories of function-based interventions: extinction, antecedent manipulation, and differential reinforcement. Functional analysis methodology is a valid and efficient approach to the assessment of problem behavior and the selection of treatment strategies.
Improving the Method of Roof Fall Susceptibility Assessment based on Fuzzy Approach
NASA Astrophysics Data System (ADS)
Ghasemi, Ebrahim; Ataei, Mohammad; Shahriar, Kourosh
2017-03-01
Retreat mining is always accompanied by a great amount of accidents and most of them are due to roof fall. Therefore, development of methodologies to evaluate the roof fall susceptibility (RFS) seems essential. Ghasemi et al. (2012) proposed a systematic methodology to assess the roof fall risk during retreat mining based on risk assessment classic approach. The main defect of this method is ignorance of subjective uncertainties due to linguistic input value of some factors, low resolution, fixed weighting, sharp class boundaries, etc. To remove this defection and improve the mentioned method, in this paper, a novel methodology is presented to assess the RFS using fuzzy approach. The application of fuzzy approach provides an effective tool to handle the subjective uncertainties. Furthermore, fuzzy analytical hierarchy process (AHP) is used to structure and prioritize various risk factors and sub-factors during development of this method. This methodology is applied to identify the susceptibility of roof fall occurrence in main panel of Tabas Central Mine (TCM), Iran. The results indicate that this methodology is effective and efficient in assessing RFS.
Hensel, Desiree
The use of a concept-based curriculum in nursing education is increasing, but assessing its impact remains challenging. This project discusses how Q methodology was used to evaluate our prelicensure program's outcome of creating practitioners who were ready to practice in diverse environments before and after a concept-based curricular revision. The successes and challenges of the revision are discussed.
1983-12-30
AD-Ri46 57? ARCHITECTURE DESIGN AND SYSTEM; PERFORMANCE ASSESSMENT i/i AND DEVELOPMENT ME..(U) NAVAL SURFACE WEAPONS CENTER SILYER SPRING MD J...AD-A 146 577 NSIWC TR 83-324 ARCHITECTURE , DESIGN , AND SYSTEM; PERFORMANCE ASSESSMENT AND DEVELOPMENT METHODOLOGY...REPORT NUMBER 12. GOVT ACCESSION NO.3. RECIPIENT’S CATALOG NUMBER NSWC TR 83-324 10- 1 1 51’ 4. ?ITLE (and subtitle) ARCHITECTURE , DESIGN , AND SYSTEM; S
USDA-ARS?s Scientific Manuscript database
We developed a cost-based methodology to assess the value of forested watersheds to improve water quality in public water supplies. The developed methodology is applicable to other source watersheds to determine ecosystem services for water quality. We assess the value of forest land for source wate...
KSC management training system project
NASA Technical Reports Server (NTRS)
Sepulveda, Jose A.
1993-01-01
The stated objectives for the summer of 1993 were: to review the Individual Development Plan Surveys for 1994 in order to automate the analysis of the Needs Assessment effort; and to develop and implement evaluation methodologies to perform ongoing program-wide course-to-course assessment. This includes the following: to propose a methodology to develop and implement objective, performance-based assessment instruments for each training effort; to mechanize course evaluation forms and develop software to facilitate the data gathering, analysis, and reporting processes; and to implement the methodology, forms, and software in at lease one training course or seminar selected among those normally offered in the summer at KSC. Section two of this report addresses the work done in regard to the Individual Development Plan Surveys for 1994. Section three presents the methodology proposed to develop and implement objective, performance-based assessment instruments for each training course offered at KSC.
Khan, F I; Iqbal, A; Ramesh, N; Abbasi, S A
2001-10-12
As it is conventionally done, strategies for incorporating accident--prevention measures in any hazardous chemical process industry are developed on the basis of input from risk assessment. However, the two steps-- risk assessment and hazard reduction (or safety) measures--are not linked interactively in the existing methodologies. This prevents a quantitative assessment of the impacts of safety measures on risk control. We have made an attempt to develop a methodology in which risk assessment steps are interactively linked with implementation of safety measures. The resultant system tells us the extent of reduction of risk by each successive safety measure. It also tells based on sophisticated maximum credible accident analysis (MCAA) and probabilistic fault tree analysis (PFTA) whether a given unit can ever be made 'safe'. The application of the methodology has been illustrated with a case study.
Assessing Personality and Mood With Adjective Check List Methodology: A Review
ERIC Educational Resources Information Center
Craig, Robert J.
2005-01-01
This article addresses the benefits and problems in using adjective check list methodology to assess personality. Recent developments in this assessment method are reviewed, emphasizing seminal adjective-based personality tests (Gough's Adjective Check List), mood tests (Lubin's Depressive Adjective Test, Multiple Affect Adjective Check List),…
A Methodology for Anatomic Ultrasound Image Diagnostic Quality Assessment.
Hemmsen, Martin Christian; Lange, Theis; Brandt, Andreas Hjelm; Nielsen, Michael Bachmann; Jensen, Jorgen Arendt
2017-01-01
This paper discusses the methods for the assessment of ultrasound image quality based on our experiences with evaluating new methods for anatomic imaging. It presents a methodology to ensure a fair assessment between competing imaging methods using clinically relevant evaluations. The methodology is valuable in the continuing process of method optimization and guided development of new imaging methods. It includes a three phased study plan covering from initial prototype development to clinical assessment. Recommendations to the clinical assessment protocol, software, and statistical analysis are presented. Earlier uses of the methodology has shown that it ensures validity of the assessment, as it separates the influences between developer, investigator, and assessor once a research protocol has been established. This separation reduces confounding influences on the result from the developer to properly reveal the clinical value. This paper exemplifies the methodology using recent studies of synthetic aperture sequential beamforming tissue harmonic imaging.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boissonnade, A; Hossain, Q; Kimball, J
Since the mid-l980's, assessment of the wind and tornado risks at the Department of Energy (DOE) high and moderate hazard facilities has been based on the straight wind/tornado hazard curves given in UCRL-53526 (Coats, 1985). These curves were developed using a methodology that utilized a model, developed by McDonald, for severe winds at sub-tornado wind speeds and a separate model, developed by Fujita, for tornado wind speeds. For DOE sites not covered in UCRL-53526, wind and tornado hazard assessments are based on the criteria outlined in DOE-STD-1023-95 (DOE, 1996), utilizing the methodology in UCRL-53526; Subsequent to the publication of UCRL53526,more » in a study sponsored by the Nuclear Regulatory Commission (NRC), the Pacific Northwest Laboratory developed tornado wind hazard curves for the contiguous United States, NUREG/CR-4461 (Ramsdell, 1986). Because of the different modeling assumptions and underlying data used to develop the tornado wind information, the wind speeds at specified exceedance levels, at a given location, based on the methodology in UCRL-53526, are different than those based on the methodology in NUREG/CR-4461. In 1997, Lawrence Livermore National Laboratory (LLNL) was funded by the DOE to review the current methodologies for characterizing tornado wind hazards and to develop a state-of-the-art wind/tornado characterization methodology based on probabilistic hazard assessment techniques and current historical wind data. This report describes the process of developing the methodology and the database of relevant tornado information needed to implement the methodology. It also presents the tornado wind hazard curves obtained from the application of the method to DOE sites throughout the contiguous United States.« less
Information security system quality assessment through the intelligent tools
NASA Astrophysics Data System (ADS)
Trapeznikov, E. V.
2018-04-01
The technology development has shown the automated system information security comprehensive analysis necessity. The subject area analysis indicates the study relevance. The research objective is to develop the information security system quality assessment methodology based on the intelligent tools. The basis of the methodology is the information security assessment model in the information system through the neural network. The paper presents the security assessment model, its algorithm. The methodology practical implementation results in the form of the software flow diagram are represented. The practical significance of the model being developed is noted in conclusions.
Management of the aging of critical safety-related concrete structures in light-water reactor plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Naus, D.J.; Oland, C.B.; Arndt, E.G.
1990-01-01
The Structural Aging Program has the overall objective of providing the USNRC with an improved basis for evaluating nuclear power plant safety-related structures for continued service. The program consists of a management task and three technical tasks: materials property data base, structural component assessment/repair technology, and quantitative methodology for continued-service determinations. Objectives, accomplishments, and planned activities under each of these tasks are presented. Major program accomplishments include development of a materials property data base for structural materials as well as an aging assessment methodology for concrete structures in nuclear power plants. Furthermore, a review and assessment of inservice inspection techniquesmore » for concrete materials and structures has been complete, and work on development of a methodology which can be used for performing current as well as reliability-based future condition assessment of concrete structures is well under way. 43 refs., 3 tabs.« less
Assessment methodology for computer-based instructional simulations.
Koenig, Alan; Iseli, Markus; Wainess, Richard; Lee, John J
2013-10-01
Computer-based instructional simulations are becoming more and more ubiquitous, particularly in military and medical domains. As the technology that drives these simulations grows ever more sophisticated, the underlying pedagogical models for how instruction, assessment, and feedback are implemented within these systems must evolve accordingly. In this article, we review some of the existing educational approaches to medical simulations, and present pedagogical methodologies that have been used in the design and development of games and simulations at the University of California, Los Angeles, Center for Research on Evaluation, Standards, and Student Testing. In particular, we present a methodology for how automated assessments of computer-based simulations can be implemented using ontologies and Bayesian networks, and discuss their advantages and design considerations for pedagogical use. Reprint & Copyright © 2013 Association of Military Surgeons of the U.S.
Galuschka, Katharina; Rothe, Josefine; Schulte-Körne, Gerd
2015-09-01
This article looks at a means of objectively evaluating the quality of psychometric tests. This approach enables users to evaluate psychometric tests based on their methodological characteristics, in order to decide which instrument should be used. Reading and spelling assessment tools serve as examples. The paper also provides a review of German psychometric tests for the assessment of reading and spelling skills. This method facilitates the identification of psychometric tests.of high methodological quality which can be used for the assessment of reading and spelling skills. Reading performance should ideally be assessed with the following instruments: ELFE 1-6, LGVT 6-12, LESEN 6-7, LESEN 8-9, or WLLP-R. The tests to be used for the evaluation of spelling skills are DERET 1-2+, DERET 3-4+, WRT 1+, WRT 2+, WRT 3+, WRT 4+ or HSP 1-10.
De Ambrogi, Francesco; Ratti, Elisabetta Ceppi
2011-01-01
Today the Italian national debate over the Work-Related Stress Risk Assessment methodology is rather heated. Several methodological proposals and guidelines have been published in recent months, not least those by the "Commissione Consultiva". But despite this wide range of proposals, it appears that there is still a lack of attention to some of the basic methodological issues that must be taken into account in order to correctly implement the above-mentioned guidelines. The aim of this paper is to outline these methodological issues. In order to achieve this, the most authoritative methodological proposals and guidelines have been reviewed. The study focuses in particular on the methodological issues that could lead to important biases if not considered properly. The study leads to some considerations about the methodological validity of a Work-Related Stress Risk Assessment based exclusively on the literal interpretation of the considered proposals. Furthermore, the study provides some hints and working hypotheses on how to overcome these methodological limits. This study should be considered as a starting point for further investigations and debate on the Work-Related Stress Risk Assessment methodology on a national level.
Passenger rail vehicle safety assessment methodology. Volume I, Summary of safe performance limits.
DOT National Transportation Integrated Search
2000-04-01
This report presents a methodology based on computer simulation that asseses the safe dyamic performance limits of commuter passenger vehicles. The methodology consists of determining the critical design parameters and characteristic properties of bo...
A new approach to subjectively assess quality of plenoptic content
NASA Astrophysics Data System (ADS)
Viola, Irene; Řeřábek, Martin; Ebrahimi, Touradj
2016-09-01
Plenoptic content is becoming increasingly popular thanks to the availability of acquisition and display devices. Thanks to image-based rendering techniques, a plenoptic content can be rendered in real time in an interactive manner allowing virtual navigation through the captured scenes. This way of content consumption enables new experiences, and therefore introduces several challenges in terms of plenoptic data processing, transmission and consequently visual quality evaluation. In this paper, we propose a new methodology to subjectively assess the visual quality of plenoptic content. We also introduce a prototype software to perform subjective quality assessment according to the proposed methodology. The proposed methodology is further applied to assess the visual quality of a light field compression algorithm. Results show that this methodology can be successfully used to assess the visual quality of plenoptic content.
Health economic assessment: a methodological primer.
Simoens, Steven
2009-12-01
This review article aims to provide an introduction to the methodology of health economic assessment of a health technology. Attention is paid to defining the fundamental concepts and terms that are relevant to health economic assessments. The article describes the methodology underlying a cost study (identification, measurement and valuation of resource use, calculation of costs), an economic evaluation (type of economic evaluation, the cost-effectiveness plane, trial- and model-based economic evaluation, discounting, sensitivity analysis, incremental analysis), and a budget impact analysis. Key references are provided for those readers who wish a more advanced understanding of health economic assessments.
Conceptual and methodological challenges to integrating SEA and cumulative effects assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gunn, Jill, E-mail: jill.gunn@usask.c; Noble, Bram F.
The constraints to assessing and managing cumulative environmental effects in the context of project-based environmental assessment are well documented, and the potential benefits of a more strategic approach to cumulative effects assessment (CEA) are well argued; however, such benefits have yet to be clearly demonstrated in practice. While it is widely assumed that cumulative effects are best addressed in a strategic context, there has been little investigation as to whether CEA and strategic environmental assessment (SEA) are a 'good fit', conceptually or methodologically. This paper identifies a number of conceptual and methodological challenges to the integration of CEA and SEA. Based on results of interviews with international experts and practitioners, this paper demonstrates that: definitions and conceptualizations of CEA are typically weak in practice; approaches to effects aggregation vary widely; a systems perspective is lacking in both SEA and CEA; the multifarious nature of SEA complicates CEA; tiering arrangements between SEA and project-based assessment are limited to non-existent; and the relationship of SEA to regional planning remains unclear.
NASA Astrophysics Data System (ADS)
Sakellariou, J. S.; Fassois, S. D.
2006-11-01
A stochastic output error (OE) vibration-based methodology for damage detection and assessment (localization and quantification) in structures under earthquake excitation is introduced. The methodology is intended for assessing the state of a structure following potential damage occurrence by exploiting vibration signal measurements produced by low-level earthquake excitations. It is based upon (a) stochastic OE model identification, (b) statistical hypothesis testing procedures for damage detection, and (c) a geometric method (GM) for damage assessment. The methodology's advantages include the effective use of the non-stationary and limited duration earthquake excitation, the handling of stochastic uncertainties, the tackling of the damage localization and quantification subproblems, the use of "small" size, simple and partial (in both the spatial and frequency bandwidth senses) identified OE-type models, and the use of a minimal number of measured vibration signals. Its feasibility and effectiveness are assessed via Monte Carlo experiments employing a simple simulation model of a 6 storey building. It is demonstrated that damage levels of 5% and 20% reduction in a storey's stiffness characteristics may be properly detected and assessed using noise-corrupted vibration signals.
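The detection step in schemes of this kind compares the residuals of the healthy-state model on fresh measurements against a healthy baseline via a statistical hypothesis test. A minimal sketch under assumptions: a plain variance-ratio test with a hypothetical threshold stands in for the paper's actual test statistic, and the model-residual sequences are simulated rather than identified from vibration data.

```python
import random
import statistics

def detect_damage(healthy_residuals, test_residuals, threshold=1.5):
    """Flag damage when the test-signal residual variance exceeds the
    healthy baseline variance by more than `threshold` (a hypothetical
    stand-in for a formal F-test critical value)."""
    f = statistics.pvariance(test_residuals) / statistics.pvariance(healthy_residuals)
    return f > threshold, f

random.seed(0)
healthy = [random.gauss(0, 1.0) for _ in range(500)]  # baseline model residuals
intact  = [random.gauss(0, 1.0) for _ in range(500)]  # new data, structure healthy
damaged = [random.gauss(0, 1.6) for _ in range(500)]  # damage inflates model error

print(detect_damage(healthy, intact)[0])
print(detect_damage(healthy, damaged)[0])
```

The paper's localization and quantification steps would then operate on which model, among a family of damage-state models, best explains the residual growth.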
Mhaskar, Rahul; Djulbegovic, Benjamin; Magazin, Anja; Soares, Heloisa P.; Kumar, Ambuj
2011-01-01
Objectives: To assess whether reported methodological quality of randomized controlled trials (RCTs) reflects the actual methodological quality, and to evaluate the association of effect size (ES) and sample size with methodological quality. Study design: Systematic review. Setting: Retrospective analysis of all consecutive phase III RCTs published by 8 National Cancer Institute Cooperative Groups until year 2006. Data were extracted from protocols (actual quality) and publications (reported quality) for each study. Results: 429 RCTs met the inclusion criteria. Overall reporting of methodological quality was poor and did not reflect the actual high methodological quality of RCTs. The results showed no association between sample size and actual methodological quality of a trial. Poor reporting of allocation concealment and blinding exaggerated the ES by 6% (ratio of hazard ratio [RHR]: 0.94, 95%CI: 0.88, 0.99) and 24% (RHR: 1.24, 95%CI: 1.05, 1.43), respectively. However, actual quality assessment showed no association between ES and methodological quality. Conclusion: The largest study to date shows poor quality of reporting does not reflect the actual high methodological quality. Assessment of the impact of quality on the ES based on reported quality can produce misleading results. PMID:22424985
Developing dementia prevention trials: baseline report of the Home-Based Assessment study.
Sano, Mary; Egelko, Susan; Donohue, Michael; Ferris, Steven; Kaye, Jeffrey; Hayes, Tamara L; Mundt, James C; Sun, Chung-Kai; Paparello, Silvia; Aisen, Paul S
2013-01-01
This report describes the baseline experience of the multicenter, Home-Based Assessment study, designed to develop methods for dementia prevention trials using novel technologies for test administration and data collection. Nondemented individuals of 75 years of age or more were recruited and evaluated in-person using established clinical trial outcomes of cognition and function, and randomized to one of 3 assessment methodologies: (1) mail-in questionnaire/live telephone interviews [mail-in/phone (MIP)]; (2) automated telephone with interactive voice recognition; and (3) internet-based computer Kiosk. Brief versions of cognitive and noncognitive outcomes were adapted to each methodology and administered at baseline and repeatedly over a 4-year period. "Efficiency" measures assessed the time from screening to baseline, and staff time required for each methodology. A total of 713 individuals signed consent and were screened; 640 met eligibility and were randomized to one of 3 assessment arms; and 581 completed baseline. Dropout, time from screening to baseline, and total staff time were highest among those assigned to internet-based computer Kiosk. However, efficiency measures were driven by nonrecurring start-up activities suggesting that differences may be mitigated over a long trial. Performance among Home-Based Assessment instruments collected through different technologies will be compared with established outcomes over this 4-year study.
MULTI-MEDIA MICROBIOLOGICAL RISK ASSESSMENT METHODOLOGY FOR MUNICIPAL WASTEWATER SLUDGES
In order to reduce the risk of municipal sludge to acceptable levels, the U.S. EPA has undertaken a regulatory program based on risk assessment and risk management. The key to such a program is the development of a methodology which allows the regulatory agency to quantify the re...
NASA Astrophysics Data System (ADS)
Papathoma-Köhle, Maria
2016-08-01
The assessment of the physical vulnerability of elements at risk as part of the risk analysis is an essential aspect for the development of strategies and structural measures for risk reduction. Understanding, analysing and, if possible, quantifying physical vulnerability is a prerequisite for designing strategies and adopting tools for its reduction. The most common methods for assessing physical vulnerability are vulnerability matrices, vulnerability curves and vulnerability indicators; however, in most cases, these methods are used in a conflicting way rather than in combination. The article focuses on two of these methods: vulnerability curves and vulnerability indicators. Vulnerability curves express physical vulnerability as a function of the intensity of the process and the degree of loss, considering, in individual cases only, some structural characteristics of the affected buildings. However, a considerable number of studies argue that vulnerability assessment should focus on the identification of the variables that influence the vulnerability of an element at risk (vulnerability indicators). In this study, an indicator-based methodology (IBM) for mountain hazards including debris flow (Kappes et al., 2012) is applied to a case study for debris flows in South Tyrol, where in the past a vulnerability curve has been developed. The relatively "new" indicator-based method is scrutinised and recommendations for its improvement are outlined. The comparison of the two methodological approaches and their results is challenging since the two approaches deal with vulnerability in different ways. However, it is still possible to highlight their weaknesses and strengths, show clearly that both methodologies are necessary for the assessment of physical vulnerability and provide a preliminary "holistic methodological framework" for physical vulnerability assessment showing how the two approaches may be used in combination in the future.
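A vulnerability curve of the kind described maps process intensity to an expected degree of loss in [0, 1]. A minimal sketch, assuming a hypothetical Weibull-type functional form with invented scale and shape parameters; this is not the South Tyrol curve.

```python
import math

def degree_of_loss(intensity, scale=1.5, shape=2.0):
    # Hypothetical Weibull-type vulnerability curve: process intensity
    # (e.g. debris-flow deposit height in metres) -> share of building
    # value lost, saturating towards total loss at high intensities.
    return 1.0 - math.exp(-((intensity / scale) ** shape))

for h in (0.0, 0.5, 1.5, 3.0):
    print(h, round(degree_of_loss(h), 3))
```

An indicator-based method would instead score structural characteristics (material, openings, surroundings) and combine them into an index, which is why the two approaches are complementary rather than interchangeable.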
Gu, Jing; Wang, Qi; Wang, Xiaogang; Li, Hailong; Gu, Mei; Ming, Haixia; Dong, Xiaoli; Yang, Kehu; Wu, Hongyan
2014-01-01
Background. This review provides the first assessment of the methodological information in protocols of acupuncture RCTs registered in the WHO International Clinical Trials Registry Platform (ICTRP). Methods. All records of acupuncture RCTs registered in the ICTRP were collected. The methodological design assessment examined whether the randomization methods, allocation concealment, and blinding were adequate, based on the information in the registration records (protocols of acupuncture RCTs). Results. A total of 453 records, found in 11 registries, were examined. Methodological details were insufficient in the registration records: 76.4%, 89.0%, and 21.4% of records provided no information on randomization methods, allocation concealment, and blinding, respectively. The proportions of adequate randomization methods, allocation concealment, and blinding were only 107 (23.6%), 48 (10.6%), and 210 (46.4%), respectively. The methodological design improved year by year, especially after 2007. Additionally, the methodology of RCTs with ethics approval was clearly superior to that of RCTs without ethics approval, and differed among registries. Conclusions. The overall methodological design based on registration records of acupuncture RCTs is not very good but has improved year by year. The insufficient information on randomization methods, allocation concealment, and blinding may be because such descriptions are not taken seriously in the registration of acupuncture RCTs. PMID:24688591
Functional-Based Assessment of Social Behavior: Introduction and Overview.
ERIC Educational Resources Information Center
Lewis, Timothy J.; Sugai, George
1994-01-01
This introduction to and overview of a special issue on social behavior assessment within schools discusses the impact of function-based methodologies on assessment and intervention practices in identification and remediation of challenging social behaviors. (JDD)
Sjögren, P; Ordell, S; Halling, A
2003-12-01
The aim was to describe and systematically review the methodology and reporting of validation in publications describing epidemiological registration methods for dental caries. BASIC RESEARCH METHODOLOGY: Literature searches were conducted in six scientific databases. All publications fulfilling the predetermined inclusion criteria were assessed for methodology and reporting of validation using a checklist including items described previously as well as new items. The frequency of endorsement of the assessed items was analysed. Moreover, the type and strength of evidence, was evaluated. Reporting of predetermined items relating to methodology of validation and the frequency of endorsement of the assessed items were of primary interest. Initially 588 publications were located. 74 eligible publications were obtained, 23 of which fulfilled the inclusion criteria and remained throughout the analyses. A majority of the studies reported the methodology of validation. The reported methodology of validation was generally inadequate, according to the recommendations of evidence-based medicine. The frequencies of reporting the assessed items (frequencies of endorsement) ranged from four to 84 per cent. A majority of the publications contributed to a low strength of evidence. There seems to be a need to improve the methodology and the reporting of validation in publications describing professionally registered caries epidemiology. Four of the items assessed in this study are potentially discriminative for quality assessments of reported validation.
ERIC Educational Resources Information Center
Zapata-Rivera, Diego; VanWinkle, Waverely; Doyle, Bryan; Buteux, Alyssa; Bauer, Malcolm
2009-01-01
Purpose: The purpose of this paper is to propose and demonstrate an evidence-based scenario design framework for assessment-based computer games. Design/methodology/approach: The evidence-based scenario design framework is presented and demonstrated by using BELLA, a new assessment-based gaming environment aimed at supporting student learning of…
NASA Astrophysics Data System (ADS)
Jia, Xiaodong; Jin, Chao; Buzza, Matt; Di, Yuan; Siegel, David; Lee, Jay
2018-01-01
Successful applications of Diffusion Map (DM) in machine failure detection and diagnosis have been reported in several recent studies. DM provides an efficient way to visualize high-dimensional, complex and nonlinear machine data, and thus reveals more about the condition of the machine under monitoring. In this paper, a DM-based methodology named DM-EVD is proposed for machine degradation assessment, abnormality detection and diagnosis in an online fashion. Several limitations and challenges of using DM for machine health monitoring are analyzed and addressed. Based on the proposed DM-EVD, a deviation-based methodology is then proposed to accommodate additional dimension reduction methods. In this work, the incorporation of Laplacian Eigenmap and Principal Component Analysis (PCA) is explored; the latter algorithm is named PCA-Dev and is validated in the case study. To demonstrate the proposed methodology, case studies from diverse fields are presented and investigated. Improved results are reported by benchmarking against other machine learning algorithms.
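A deviation-based health index of this general kind can be illustrated as a Mahalanobis-style distance from a healthy-operation baseline in a low-dimensional feature space. This is a generic stand-in for the paper's DM/PCA embeddings, not the DM-EVD algorithm itself; the two features and all data below are simulated.

```python
import random
import statistics

def baseline_stats(data):
    # data: (x, y) feature pairs from known-healthy operation
    xs, ys = zip(*data)
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sxx, syy = statistics.pvariance(xs), statistics.pvariance(ys)
    sxy = statistics.fmean((x - mx) * (y - my) for x, y in data)
    return (mx, my), (sxx, sxy, syy)

def deviation(point, mean, cov):
    # Squared Mahalanobis distance in 2-D: a hypothetical stand-in for
    # a deviation index built on a learned low-dimensional embedding.
    (mx, my), (sxx, sxy, syy) = mean, cov
    det = sxx * syy - sxy * sxy
    dx, dy = point[0] - mx, point[1] - my
    return (syy * dx * dx - 2 * sxy * dx * dy + sxx * dy * dy) / det

random.seed(0)
healthy = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(300)]
mean, cov = baseline_stats(healthy)
print(round(deviation((0.1, -0.2), mean, cov), 2))  # point near baseline
print(round(deviation((4.0, 4.0), mean, cov), 2))   # degraded machine
```

In an online setting, the index would be recomputed per sample and trended over time to assess degradation.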
Recommendations for benefit-risk assessment methodologies and visual representations.
Hughes, Diana; Waddingham, Ed; Mt-Isa, Shahrul; Goginsky, Alesia; Chan, Edmond; Downey, Gerald F; Hallgreen, Christine E; Hockley, Kimberley S; Juhaeri, Juhaeri; Lieftucht, Alfons; Metcalf, Marilyn A; Noel, Rebecca A; Phillips, Lawrence D; Ashby, Deborah; Micaleff, Alain
2016-03-01
The purpose of this study is to draw on the practical experience from the PROTECT BR case studies and make recommendations regarding the application of a number of methodologies and visual representations for benefit-risk assessment. Eight case studies based on the benefit-risk balance of real medicines were used to test various methodologies that had been identified from the literature as having potential applications in benefit-risk assessment. Recommendations were drawn up based on the results of the case studies. A general pathway through the case studies was evident, with various classes of methodologies having roles to play at different stages. Descriptive and quantitative frameworks were widely used throughout to structure problems, with other methods such as metrics, estimation techniques and elicitation techniques providing ways to incorporate technical or numerical data from various sources. Similarly, tree diagrams and effects tables were universally adopted, with other visualisations available to suit specific methodologies or tasks as required. Every assessment was found to follow five broad stages: (i) Planning, (ii) Evidence gathering and data preparation, (iii) Analysis, (iv) Exploration and (v) Conclusion and dissemination. Adopting formal, structured approaches to benefit-risk assessment was feasible in real-world problems and facilitated clear, transparent decision-making. Prior to this work, no extensive practical application and appraisal of methodologies had been conducted using real-world case examples, leaving users with limited knowledge of their usefulness in the real world. The practical guidance provided here takes us one step closer to a harmonised approach to benefit-risk assessment from multiple perspectives. Copyright © 2016 John Wiley & Sons, Ltd.
Faggion, Clovis M; Huda, Fahd; Wasiak, Jason
2014-06-01
To evaluate the methodological approaches used to assess the quality of studies included in systematic reviews (SRs) in periodontology and implant dentistry. Two electronic databases (PubMed and Cochrane Database of Systematic Reviews) were searched independently to identify SRs examining interventions published through 2 September 2013. The reference lists of included SRs and records of 10 specialty dental journals were searched manually. Methodological approaches were assessed using seven criteria based on the Cochrane Handbook for Systematic Reviews of Interventions. Temporal trends in methodological quality were also explored. Of the 159 SRs with meta-analyses included in the analysis, 44 (28%) reported the use of domain-based tools, 15 (9%) reported the use of checklists and 7 (4%) reported the use of scales. Forty-two (26%) SRs reported use of more than one tool. Criteria were met heterogeneously; authors of 15 (9%) publications incorporated the quality of evidence of primary studies into SRs, whereas 69% of SRs reported methodological approaches in the Materials/Methods section. Reporting of four criteria was significantly better in recent (2010-2013) than in previous publications. The analysis identified several methodological limitations of approaches used to assess evidence in studies included in SRs in periodontology and implant dentistry. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Methodological Quality Assessment of Meta-Analyses of Hyperthyroidism Treatment.
Qin, Yahong; Yao, Liang; Shao, Feifei; Yang, Kehu; Tian, Limin
2018-01-01
Hyperthyroidism is a common condition that is associated with increased morbidity and mortality. A number of meta-analyses (MAs) have assessed the therapeutic measures for hyperthyroidism, including antithyroid drugs, surgery, and radioiodine; however, their methodological quality has not been evaluated. This study evaluated the methodological quality and summarized the evidence obtained from MAs of hyperthyroidism treatments for radioiodine, antithyroid drugs, and surgery. We searched the PubMed, EMBASE, Cochrane Library, Web of Science, and Chinese Biomedical Literature Database databases. Two investigators independently assessed the meta-analysis titles and abstracts for inclusion. Methodological quality was assessed using the validated AMSTAR (Assessing the Methodological Quality of Systematic Reviews) tool. A total of 26 MAs fulfilled the inclusion criteria. Based on the AMSTAR scores, the average methodological quality was 8.31, with large variability ranging from 4 to 11. The methodological quality of English meta-analyses was better than that of Chinese meta-analyses. Cochrane reviews had better methodological quality than non-Cochrane reviews due to better study selection and data extraction, the inclusion of unpublished studies, and better reporting of study characteristics. The authors did not report conflicts of interest in 53.8% of meta-analyses, and 19.2% did not report the harmful effects of treatment. Publication bias was not assessed in 38.5% of meta-analyses, and 19.2% did not report the follow-up time. Large-scale assessment of methodological quality of meta-analyses of hyperthyroidism treatment highlighted methodological strengths and weaknesses. Consideration of scientific quality when formulating conclusions should be made explicit. Future meta-analyses should improve the reporting of conflicts of interest. © Georg Thieme Verlag KG Stuttgart · New York.
A Methodological Proposal for Learning Games Selection and Quality Assessment
ERIC Educational Resources Information Center
Dondi, Claudio; Moretti, Michela
2007-01-01
This paper presents a methodological proposal elaborated in the framework of two European projects dealing with game-based learning, both of which have focused on "quality" aspects in order to create suitable tools that support European educators, practitioners and lifelong learners in selecting and assessing learning games for use in…
Mhaskar, Rahul; Djulbegovic, Benjamin; Magazin, Anja; Soares, Heloisa P; Kumar, Ambuj
2012-06-01
To assess whether the reported methodological quality of randomized controlled trials (RCTs) reflects the actual methodological quality and to evaluate the association of effect size (ES) and sample size with methodological quality. Systematic review. This is a retrospective analysis of all consecutive phase III RCTs published by eight National Cancer Institute Cooperative Groups up to 2006. Data were extracted from protocols (actual quality) and publications (reported quality) for each study. Four hundred twenty-nine RCTs met the inclusion criteria. Overall reporting of methodological quality was poor and did not reflect the actual high methodological quality of RCTs. The results showed no association between sample size and actual methodological quality of a trial. Poor reporting of allocation concealment and blinding exaggerated the ES by 6% (ratio of hazard ratio [RHR]: 0.94; 95% confidence interval [CI]: 0.88, 0.99) and 24% (RHR: 1.24; 95% CI: 1.05, 1.43), respectively. However, actual quality assessment showed no association between ES and methodological quality. The largest study to date shows that poor quality of reporting does not reflect the actual high methodological quality. Assessment of the impact of quality on the ES based on reported quality can produce misleading results. Copyright © 2012 Elsevier Inc. All rights reserved.
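The 6% and 24% figures follow from the reported ratios of hazard ratios. A minimal arithmetic sketch, assuming the conventional reading in which the RHR's deviation from 1 is the percent distortion of the effect size:

```python
def exaggeration_pct(rhr):
    # |RHR - 1| read as the percent distortion of the effect size;
    # an RHR below 1 means poorly reported trials showed hazard ratios
    # that much smaller, i.e. a stronger apparent treatment effect.
    return abs(rhr - 1.0) * 100.0

print(f"allocation concealment: {exaggeration_pct(0.94):.0f}%")
print(f"blinding:               {exaggeration_pct(1.24):.0f}%")
```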
NASA Technical Reports Server (NTRS)
Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.
1992-01-01
An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design of failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.
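The core of such a probabilistic assessment can be sketched as a Monte Carlo stress-strength computation: sample the uncertain analysis parameters, run the engineering model, and count the fraction of samples in which failure occurs. This is a generic illustration, not the PFA methodology itself; all distributions and parameter values below are invented.

```python
import random

def failure_probability(n=100_000, seed=1):
    # Monte Carlo sketch: uncertain strength vs uncertain load-induced
    # stress; a sample fails when its stress exceeds its strength.
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        strength = rng.gauss(100.0, 10.0)  # hypothetical material strength
        stress = rng.gauss(70.0, 12.0)     # hypothetical operating stress
        if stress > strength:
            failures += 1
    return failures / n

p = failure_probability()
print(round(p, 4))
```

In the PFA framing, the resulting failure probability distribution would then be updated with test and flight experience rather than used directly.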
There is an urgent need for broad and integrated studies that address the risks of engineered nanomaterials (ENMs) along the different endpoints of the society, environment, and economy (SEE) complex adaptive system. This article presents an integrated science-based methodology ...
Development and application of a safety assessment methodology for waste disposals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Little, R.H.; Torres, C.; Schaller, K.H.
1996-12-31
As part of a European Commission funded research programme, QuantiSci (formerly the Environmental Division of Intera Information Technologies) and Instituto de Medio Ambiente of the Centro de Investigaciones Energeticas Medioambientales y Tecnologicas (IMA/CIEMAT) have developed and applied a comprehensive, yet practicable, assessment methodology for post-disposal safety assessment of land-based disposal facilities. This Safety Assessment Comparison (SACO) Methodology employs a systematic approach to the collection, evaluation and use of waste and disposal system data. It can be used to assess engineered barrier performance, the attenuating properties of host geological formations, and the long term impacts of a facility on the environment and human health, as well as allowing the comparison of different disposal options for radioactive, mixed and non-radioactive wastes. This paper describes the development of the methodology and illustrates its use.
Chen, Ching-Ho; Wu, Ray-Shyan; Liu, Wei-Lin; Su, Wen-Ray; Chang, Yu-Min
2009-01-01
Some countries, including Taiwan, have adopted strategic environmental assessment (SEA) to assess and modify proposed policies, plans, and programs (PPPs) in the planning phase for pursuing sustainable development. However, there were only some sketchy steps focusing on policy assessment in the system of Taiwan. This study aims to develop a methodology for SEA in Taiwan to enhance the effectiveness associated with PPPs. The proposed methodology comprises an SEA procedure involving PPP management and assessment in various phases, a sustainable assessment framework, and an SEA management system. The SEA procedure is devised based on the theoretical considerations by systems thinking and the regulative requirements in Taiwan. The positive and negative impacts on ecology, society, and economy are simultaneously considered in the planning (including policy generation and evaluation), implementation, and control phases of the procedure. This study used the analytic hierarchy process, Delphi technique, and systems analysis to develop a sustainable assessment framework. An SEA management system was built based on geographic information system software to process spatial, attribute, and satellite image data during the assessment procedure. The proposed methodology was applied in the SEA of golf course installation policy in 2001 as a case study, which was the first SEA in Taiwan. Most of the 82 existing golf courses in 2001 were installed on slope lands and caused a serious ecological impact. Assessment results indicated that 15 future golf courses installed on marginal lands (including buffer zones, remedied lands, and wastelands) were acceptable because the comprehensive environmental (ecological, social, and economic) assessment value was better based on environmental characteristics and management regulations of Taiwan. 
The SEA procedure in the planning phase for this policy was completed, but the implementation phase was not begun because the related legislation could not be scheduled owing to resistance from a few senators. A self-review of the control phase was carried out in 2006 using this methodology. Installation permits for 12 courses on slope lands were terminated after 2001, and 27 future courses could then be installed on marginal lands. The assessment value of this policy using data on ecological, social, and economic conditions from 2006 was higher than that using data from 2001. The analytical results illustrate that the proposed methodology can effectively and efficiently assist the relevant authorities with SEA.
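The analytic hierarchy process step mentioned above derives criterion weights from a pairwise comparison matrix via its principal eigenvector. A minimal sketch using power iteration; the ecology/society/economy judgements below are hypothetical, not the values used in the Taiwan case study.

```python
def ahp_weights(pairwise, iters=100):
    # Power iteration for the principal eigenvector of a pairwise
    # comparison matrix: the standard AHP weight derivation.
    n = len(pairwise)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(pairwise[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]  # renormalize so weights sum to 1
    return w

# Hypothetical judgements: ecology vs society vs economy, on Saaty's
# 1-9 scale (entry [i][j] = how much criterion i dominates criterion j).
m = [[1, 3, 5],
     [1 / 3, 1, 2],
     [1 / 5, 1 / 2, 1]]
w = ahp_weights(m)
print([round(x, 3) for x in w])
```

The resulting weights would feed the sustainable assessment framework, combining ecological, social, and economic indicator scores into one value per policy alternative.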
E-therapy for mental health problems: a systematic review.
Postel, Marloes G; de Haan, Hein A; De Jong, Cor A J
2008-09-01
The widespread availability of the Internet offers opportunities for improving access to therapy for people with mental health problems. There is a seemingly infinite supply of Internet-based interventions available on the World Wide Web. The aim of the present study is to systematically assess the methodological quality of randomized controlled trials (RCTs) concerning e-therapy for mental health problems. Two reviewers independently assessed the methodological quality of the RCTs, based on a list of criteria for methodological quality assessment recommended by the Cochrane Back Review Group. The search yielded 14 papers that reported RCTs concerning e-therapy for mental health problems. The methodological quality of the studies included in this review was generally low. It is concluded that e-therapy may turn out to be an appropriate therapeutic entity, but the evidence needs to be more convincing. Recommendations are made concerning the method of reporting RCTs and the need to add some content items to an e-therapy study.
Tolaymat, Thabet; El Badawy, Amro; Sequeira, Reynold; Genaidy, Ash
2015-11-15
There is an urgent need for broad and integrated studies that address the risks of engineered nanomaterials (ENMs) along the different endpoints of the society, environment, and economy (SEE) complex adaptive system. This article presents an integrated science-based methodology to assess the potential risks of engineered nanomaterials. To achieve the study objective, two major tasks are accomplished: knowledge synthesis and an algorithmic computational methodology. The knowledge synthesis task is designed to capture "what is known" and to outline the gaps in knowledge from an ENM risk perspective. The algorithmic computational methodology is geared toward providing decisions and an understanding of the risks of ENMs along different endpoints for the constituents of the SEE complex adaptive system. The approach presented herein allows for addressing the formidable task of assessing the implications and risks of exposure to ENMs, with the long-term goal of building a decision-support system that guides key stakeholders in the SEE system toward building sustainable ENMs and nano-enabled products. Published by Elsevier B.V.
Ojaveer, Henn; Eero, Margit
2011-04-29
Assessments of the environmental status of marine ecosystems are increasingly needed to inform management decisions and regulate human pressures to meet the objectives of environmental policies. This paper addresses some generic methodological challenges, and the related uncertainties, involved in marine ecosystem assessment, using the central Baltic Sea as a case study. The objectives of good environmental status of the Baltic Sea focus largely on biodiversity, eutrophication and hazardous substances. In this paper, we conduct comparative evaluations of the status of these three segments by applying different methodological approaches. Our analyses indicate that assessment results are sensitive to the selection of indicators for ecological quality objectives affected by a broad spectrum of human activities and natural processes (biodiversity), and less so for objectives influenced by a relatively narrow array of drivers (eutrophication, hazardous substances). The choice of indicator aggregation rule proved to be of essential importance for the assessment results of all three segments, whereas the hierarchical structure of indicators had only a minor influence. Trend-based assessment was shown to be a useful supplement to reference-based evaluation, being independent of the problems related to defining reference values and indicator aggregation methodologies. Results of this study will help in setting priorities for future efforts to improve environmental assessments in the Baltic Sea and elsewhere, and in ensuring the transparency of the assessment procedure.
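The sensitivity to the aggregation rule described in the abstract above can be illustrated with a minimal sketch. The indicator names and scores below are hypothetical, not the study's data; the point is that a compensatory averaging rule and a precautionary one-out-all-out rule can classify the same indicator set differently.

```python
# Sketch: how the choice of indicator aggregation rule can change an
# assessment outcome (hypothetical indicator scores, not data from the paper).

def arithmetic_mean(scores):
    """Averaging rule: compensatory - good indicators offset bad ones."""
    return sum(scores) / len(scores)

def one_out_all_out(scores, threshold=0.6):
    """Precautionary rule: status fails if any single indicator fails."""
    return min(scores) >= threshold

# Hypothetical normalized status scores for three assessment segments
indicators = {"biodiversity": 0.8, "eutrophication": 0.5, "hazardous": 0.7}
scores = list(indicators.values())

print(arithmetic_mean(scores))   # ~0.667 -> passes a 0.6 cutoff
print(one_out_all_out(scores))   # False  -> fails under one-out-all-out
```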
Introducing a methodology for estimating duration of surgery in health services research.
Redelmeier, Donald A; Thiruchelvam, Deva; Daneman, Nick
2008-09-01
The duration of surgery is an indicator of the quality, risks, and efficiency of surgical procedures. We introduce a new methodology for assessing the duration of surgery based on anesthesiology billing records, along with reviewing its fundamental logic and limitations. The validity of the methodology was assessed through a population-based cohort of patients (n=480,986) undergoing elective operations in 246 Ontario hospitals with 1,084 anesthesiologists between April 1, 1992 and March 31, 2002 (10 years). The weaknesses of the methodology relate to missing data, self-serving exaggerations by providers, imprecisions from clinical diversity, upper limits due to accounting regulations, fluctuations from updates over the years, national differences in reimbursement schedules, and the general failings of claims-based analyses. The strengths of the methodology are in providing data that match clinical experiences, correspond to chart review, are consistent over time, can detect differences where differences would be anticipated, and might have implications for examining patient outcomes after long surgical times. We suggest that an understanding and application of large studies of surgical duration may help scientists explore selected questions concerning postoperative complications.
Methodology for assessing laser-based equipment
NASA Astrophysics Data System (ADS)
Pelegrina-Bonilla, Gabriel; Hermsdorf, Jörg; Thombansen, Ulrich; Abels, Peter; Kaierle, Stefan; Neumann, Jörg
2017-10-01
Methodologies for the assessment of technology maturity are widely used in industry and research. Probably the best known are technology readiness levels (TRLs), initially pioneered by the National Aeronautics and Space Administration (NASA). At the beginning, only descriptively defined TRLs existed, but over time automated assessment techniques in the form of questionnaires emerged for determining TRLs. TRLs originally targeted equipment for space applications, but the demands on industrially relevant equipment differ in part, for example in overall costs, product quantities, or the presence of competitors. Therefore, we present a generally valid assessment methodology aimed at assessing laser-based equipment for industrial use. The assessment is carried out with the help of a questionnaire, which offers a user-friendly and easily accessible way to monitor progress from the lab-proven state to the application-ready product throughout the complete development period. The assessment result is presented in a multidimensional metric that reveals the current strengths and weaknesses of the equipment development process, which can be used to steer the remaining development of the equipment in the right direction.
Olea, R.A.; Houseknecht, D.W.; Garrity, C.P.; Cook, T.A.
2011-01-01
Shale gas is a form of continuous unconventional hydrocarbon accumulation whose resources cannot feasibly be estimated by inferring pore volume. Under these circumstances, the usual approach is to base the assessment on well productivity through estimated ultimate recovery (EUR). Unconventional resource assessments that consider uncertainty are typically done by applying analytical procedures based on classical statistical theory, which ignores geographical location, does not take into account spatial correlation, and assumes independence of EUR from other variables that may enter into the modeling. We formulate a new, more comprehensive approach based on sequential simulation to test methodologies known to be capable of more fully utilizing the data and overcoming unrealistic simplifications. Theoretical requirements demand modeling EUR as areal density instead of well EUR. The new experimental methodology is illustrated by evaluating a gas play in the Woodford Shale in the Arkoma Basin of Oklahoma. Unlike previous assessments, we used net thickness and vitrinite reflectance as secondary variables correlated to cell EUR. In addition to the traditional probability distribution for undiscovered resources, the new methodology provides maps of EUR density and maps of the probability of reaching any given cell EUR, which are useful for visualizing geographical variations in prospectivity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sebastiani, M.; Llambi, L.D.; Marquez, E.
1998-07-01
In Venezuela, the idea of tiering information between land-use ordering instruments and impact assessment is absent. In this article the authors explore a methodological alternative to bridge the information presented in land-use ordering instruments with the information requirements for impact assessment. The methodology is based on the steps carried out for an environmental impact assessment as well as on those considered to develop land-use ordering instruments. The methodology is applied to the territorial ordering plan and its proposal for the protection zone of the Cataniapo River basin. The purpose of the protection zone is to preserve the water quality and quantity of the river basin for human consumption.
ERIC Educational Resources Information Center
Gosselin, Julie; Gahagan, Sheila; Amiel-Tison, Claudine
2005-01-01
The Amiel-Tison Neurological Assessment at Term (ATNAT) is part of a set of three different instruments based on a neuro-maturative framework. By sharing the same methodology and a similar scoring system, these three assessments prevent any rupture in the follow-up of high-risk children from 32 weeks post-conception to 6 years of…
Manfredi, Simone; Cristobal, Jorge
2016-09-01
Trying to respond to the latest policy needs, the work presented in this article aims at developing a life-cycle-based framework methodology to quantitatively evaluate the environmental and economic sustainability of European food waste management options. The methodology is structured into six steps aimed at defining the boundaries and scope of the evaluation, evaluating environmental and economic impacts, and identifying the best-performing options. The methodology is able to accommodate additional assessment criteria, for example the social dimension of sustainability, thus moving towards a comprehensive sustainability assessment framework. A numerical case study is also developed to provide an example of application of the proposed methodology to an average European context. Different options for food waste treatment are compared, including landfilling, composting, anaerobic digestion and incineration. The environmental dimension is evaluated with the software EASETECH, while the economic assessment is conducted on the basis of different indicators expressing the costs associated with food waste management. Results show that the proposed methodology allows for a straightforward identification of the most sustainable options for food waste and thus can provide factual support to decision/policy making. However, it was also observed that results depend markedly on a number of user-defined assumptions, for example the choice of indicators to express the environmental and economic performance. © The Author(s) 2016.
Development of a Probabilistic Assessment Methodology for Evaluation of Carbon Dioxide Storage
Burruss, Robert A.; Brennan, Sean T.; Freeman, P.A.; Merrill, Matthew D.; Ruppert, Leslie F.; Becker, Mark F.; Herkelrath, William N.; Kharaka, Yousif K.; Neuzil, Christopher E.; Swanson, Sharon M.; Cook, Troy A.; Klett, Timothy R.; Nelson, Philip H.; Schenk, Christopher J.
2009-01-01
This report describes a probabilistic assessment methodology developed by the U.S. Geological Survey (USGS) for evaluation of the resource potential for storage of carbon dioxide (CO2) in the subsurface of the United States as authorized by the Energy Independence and Security Act (Public Law 110-140, 2007). The methodology is based on USGS assessment methodologies for oil and gas resources created and refined over the last 30 years. The resource that is evaluated is the volume of pore space in the subsurface in the depth range of 3,000 to 13,000 feet that can be described within a geologically defined storage assessment unit consisting of a storage formation and an enclosing seal formation. Storage assessment units are divided into physical traps (PTs), which in most cases are oil and gas reservoirs, and the surrounding saline formation (SF), which encompasses the remainder of the storage formation. The storage resource is determined separately for these two types of storage. Monte Carlo simulation methods are used to calculate a distribution of the potential storage size for individual PTs and the SF. To estimate the aggregate storage resource of all PTs, a second Monte Carlo simulation step is used to sample the size and number of PTs. The probability of successful storage for individual PTs or the entire SF, defined in this methodology by the likelihood that the amount of CO2 stored will be greater than a prescribed minimum, is based on an estimate of the probability of containment using present-day geologic knowledge. The report concludes with a brief discussion of needed research data that could be used to refine assessment methodologies for CO2 sequestration.
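The two-stage Monte Carlo aggregation described above, sampling the number of physical traps and then the storage size of each, can be sketched roughly as follows. The distributions and parameters here are illustrative stand-ins, not the USGS values.

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

def sample_trap_storage():
    """Draw one physical trap's storage volume from a lognormal-like
    distribution (illustrative parameters, not the report's inputs)."""
    return random.lognormvariate(2.0, 0.8)

def aggregate_storage(n_iterations=10_000):
    """Second-stage Monte Carlo: sample the uncertain number of traps,
    then the size of each, and accumulate a distribution of totals."""
    totals = []
    for _ in range(n_iterations):
        n_traps = random.randint(5, 20)  # uncertain trap count
        totals.append(sum(sample_trap_storage() for _ in range(n_traps)))
    totals.sort()
    return {
        "P95": totals[int(0.05 * n_iterations)],  # 95% chance of exceeding
        "P50": totals[int(0.50 * n_iterations)],
        "P5":  totals[int(0.95 * n_iterations)],
    }

print(aggregate_storage())
```

By construction the conservative P95 estimate is never larger than the P50, which is never larger than the optimistic P5.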
Teaching and assessing procedural skills using simulation: metrics and methodology.
Lammers, Richard L; Davenport, Moira; Korley, Frederick; Griswold-Theodorson, Sharon; Fitch, Michael T; Narang, Aneesh T; Evans, Leigh V; Gross, Amy; Rodriguez, Elliot; Dodge, Kelly L; Hamann, Cara J; Robey, Walter C
2008-11-01
Simulation allows educators to develop learner-focused training and outcomes-based assessments. However, the effectiveness and validity of simulation-based training in emergency medicine (EM) requires further investigation. Teaching and testing technical skills require methods and assessment instruments that are somewhat different than those used for cognitive or team skills. Drawing from work published by other medical disciplines as well as educational, behavioral, and human factors research, the authors developed six research themes: measurement of procedural skills; development of performance standards; assessment and validation of training methods, simulator models, and assessment tools; optimization of training methods; transfer of skills learned on simulator models to patients; and prevention of skill decay over time. The article reviews relevant and established educational research methodologies and identifies gaps in our knowledge of how physicians learn procedures. The authors present questions requiring further research that, once answered, will advance understanding of simulation-based procedural training and assessment in EM.
A methodology for the rigorous verification of plasma simulation codes
NASA Astrophysics Data System (ADS)
Riva, Fabio
2016-10-01
The methodology used to assess the reliability of numerical simulation codes constitutes the Verification and Validation (V&V) procedure. V&V is composed of two separate tasks: verification, a mathematical exercise that checks that the physical model is solved correctly, and validation, which determines the consistency of the code results, and therefore of the physical model, with experimental data. In the present talk we focus on verification, which in turn is composed of code verification, which checks that the physical model is correctly implemented in a simulation code, and solution verification, which quantifies the numerical error affecting a simulation. Bridging the gap between plasma physics and other scientific domains, we introduced for the first time in our domain a rigorous methodology for code verification, based on the method of manufactured solutions, as well as solution verification based on Richardson extrapolation. This methodology was applied to GBS, a three-dimensional fluid code based on a finite difference scheme, used to investigate plasma turbulence in basic plasma physics experiments and in the tokamak scrape-off layer. Overcoming the difficulty of dealing with a numerical method intrinsically affected by statistical noise, we have now generalized the rigorous verification methodology to simulation codes based on the particle-in-cell algorithm, which are employed to solve the Vlasov equation in the investigation of a number of plasma physics phenomena.
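Solution verification by Richardson extrapolation, as mentioned above, can be illustrated with a small self-contained sketch: estimate the observed order of accuracy from solutions on three grids, then extrapolate to the zero-grid-spacing answer. The grid function below is manufactured so the exact value (1.0) is known.

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed order of accuracy p from three grid solutions with a
    constant refinement ratio r."""
    return math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)

def richardson_extrapolate(f_medium, f_fine, p, r):
    """Extrapolate to the zero-grid-spacing solution and estimate the
    discretization error of the fine-grid result."""
    f_exact = f_fine + (f_fine - f_medium) / (r**p - 1)
    return f_exact, abs(f_exact - f_fine)

# Manufactured grid function f(h) = 1 + 0.5*h**2: exact value 1, order 2
r = 2.0
f1, f2, f3 = 1 + 0.5 * 0.4**2, 1 + 0.5 * 0.2**2, 1 + 0.5 * 0.1**2
p = observed_order(f1, f2, f3, r)            # ~2.0, as manufactured
f_exact, err = richardson_extrapolate(f2, f3, p, r)
print(p, f_exact, err)                       # recovers 1.0, error ~0.005
```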
Explosion/Blast Dynamics for Constellation Launch Vehicles Assessment
NASA Technical Reports Server (NTRS)
Baer, Mel; Crawford, Dave; Hickox, Charles; Kipp, Marlin; Hertel, Gene; Morgan, Hal; Ratzel, Arthur; Cragg, Clinton H.
2009-01-01
An assessment methodology is developed to guide quantitative predictions of adverse physical environments and the subsequent effects on the Ares-1 crew launch vehicle associated with the loss of containment of cryogenic liquid propellants from the upper stage during ascent. Development of the methodology is led by a team at Sandia National Laboratories (SNL) with guidance and support from a number of National Aeronautics and Space Administration (NASA) personnel. The methodology is based on the current Ares-1 design and feasible accident scenarios. These scenarios address containment failure from debris impact or structural response to pressure or blast loading from an external source. Once containment is breached, the envisioned assessment methodology includes predictions for the sequence of physical processes stemming from cryogenic tank failure. The investigative techniques, analysis paths, and numerical simulations that comprise the proposed methodology are summarized and appropriate simulation software is identified in this report.
ERIC Educational Resources Information Center
Vázquez-Alonso, Ángel; Manassero-Mas, María-Antonia; García-Carmona, Antonio; Montesano de Talavera, Marisa
2016-01-01
This study applies a new quantitative methodological approach to diagnose epistemology conceptions in a large sample. The analyses use seven multiple-rating items on the epistemology of science drawn from the item pool Views on Science-Technology-Society (VOSTS). The bases of the new methodological diagnostic approach are the empirical…
DOT National Transportation Integrated Search
1995-09-01
This report describes the development of a methodology designed to assure that a sufficiently high level of safety is achieved and maintained in computer-based systems which perform safety critical functions in high-speed rail or magnetic levitation ...
NASA Technical Reports Server (NTRS)
Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.
1992-01-01
An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design and failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.
A probabilistic seismic risk assessment procedure for nuclear power plants: (I) Methodology
Huang, Y.-N.; Whittaker, A.S.; Luco, N.
2011-01-01
A new procedure for probabilistic seismic risk assessment of nuclear power plants (NPPs) is proposed. This procedure modifies the current procedures using tools developed recently for performance-based earthquake engineering of buildings. The proposed procedure uses (a) response-based fragility curves to represent the capacity of structural and nonstructural components of NPPs, (b) nonlinear response-history analysis to characterize the demands on those components, and (c) Monte Carlo simulations to determine the damage state of the components. The use of response- rather than ground-motion-based fragility curves enables the curves to be independent of seismic hazard and closely related to component capacity. The use of the Monte Carlo procedure enables the correlation in the responses of components to be included directly in the risk assessment. An example of the methodology is presented in a companion paper to demonstrate its use and provide the technical basis for aspects of the methodology. © 2011 Published by Elsevier B.V.
Sewell, Fiona; Doe, John; Gellatly, Nichola; Ragan, Ian; Burden, Natalie
2017-10-01
The current animal-based paradigm for safety assessment must change. In September 2016, the UK National Centre for Replacement, Refinement and Reduction of Animals in Research (NC3Rs) brought together scientists from regulatory authorities, academia and industry to review progress in bringing new methodology into regulatory use, and to identify ways to expedite progress. Progress has been slow. Science is advancing to make this possible but changes are necessary. The new paradigm should allow new methodology to be adopted once it is developed rather than being based on a fixed set of studies. Regulatory authorities can help by developing Performance-Based Standards. The most pressing need is in repeat dose toxicology, although setting standards will be more complex than in areas such as sensitization. Performance standards should be aimed directly at human safety, not at reproducing the results of animal studies. Regulatory authorities can also aid progress towards the acceptance of non-animal based methodology by promoting "safe-haven" trials where traditional and new methodology data can be submitted in parallel to build up experience in the new methods. Industry can play its part in the acceptance of new methodology, by contributing to the setting of performance standards and by actively contributing to "safe-haven" trials. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
A methodology for the assessment of manned flight simulator fidelity
NASA Technical Reports Server (NTRS)
Hess, Ronald A.; Malsbury, Terry N.
1989-01-01
A relatively simple analytical methodology for assessing the fidelity of manned flight simulators for specific vehicles and tasks is offered. The methodology is based upon an application of a structural model of the human pilot, including motion cue effects. In particular, predicted pilot/vehicle dynamic characteristics are obtained with and without simulator limitations. A procedure for selecting model parameters can be implemented, given a probable pilot control strategy. In analyzing a pair of piloting tasks for which flight and simulation data are available, the methodology correctly predicted the existence of simulator fidelity problems. The methodology permitted the analytical evaluation of a change in simulator characteristics and indicated that a major source of the fidelity problems was a visual time delay in the simulation.
Hasan, Haroon; Muhammed, Taaha; Yu, Jennifer; Taguchi, Kelsi; Samargandi, Osama A; Howard, A Fuchsia; Lo, Andrea C; Olson, Robert; Goddard, Karen
2017-10-01
The objective of our study was to evaluate the methodological quality of systematic reviews and meta-analyses in Radiation Oncology. A systematic literature search was conducted for all eligible systematic reviews and meta-analyses in Radiation Oncology from 1966 to 2015. Methodological characteristics were abstracted from all works that satisfied the inclusion criteria, and quality was assessed using the critical appraisal tool AMSTAR. Regression analyses were performed to determine factors associated with a higher quality score. Following exclusion based on a priori criteria, 410 studies (157 systematic reviews and 253 meta-analyses) satisfied the inclusion criteria. Meta-analyses were found to be of fair to good quality, while systematic reviews were found to be of less than fair quality. Factors associated with higher quality scores in the multivariable analysis were including primary studies consisting of randomized controlled trials, performing a meta-analysis, and applying a recommended guideline related to establishing a systematic review protocol and/or reporting. Based on AMSTAR, systematic reviews and meta-analyses may introduce a high risk of bias if used to inform decision-making. We recommend that decision-makers in Radiation Oncology scrutinize the methodological quality of systematic reviews and meta-analyses before assessing their utility to inform evidence-based medicine, and that researchers adhere to the methodological standards outlined in validated guidelines when embarking on a systematic review. Copyright © 2017 Elsevier Ltd. All rights reserved.
Shea, Beverley J; Grimshaw, Jeremy M; Wells, George A; Boers, Maarten; Andersson, Neil; Hamel, Candyce; Porter, Ashley C; Tugwell, Peter; Moher, David; Bouter, Lex M
2007-02-15
Our objective was to develop an instrument to assess the methodological quality of systematic reviews, building upon previous tools, empirical evidence and expert consensus. A 37-item assessment tool was formed by combining 1) the enhanced Overview Quality Assessment Questionnaire (OQAQ), 2) a checklist created by Sacks, and 3) three additional items recently judged to be of methodological importance. This tool was applied to 99 paper-based and 52 electronic systematic reviews. Exploratory factor analysis was used to identify underlying components. The results were considered by methodological experts using a nominal group technique aimed at item reduction and design of an assessment tool with face and content validity. The factor analysis identified 11 components. From each component, one item was selected by the nominal group. The resulting instrument was judged to have face and content validity. A measurement tool for the 'assessment of multiple systematic reviews' (AMSTAR) was developed. The tool consists of 11 items and has good face and content validity for measuring the methodological quality of systematic reviews. Additional studies are needed with a focus on the reproducibility and construct validity of AMSTAR, before strong recommendations can be made on its use.
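As a rough illustration of how the resulting 11-item instrument is applied in practice, AMSTAR scoring and the median/IQR summary used in the oral-health assessment above can be sketched as follows. The review responses are hypothetical, not data from any of the studies in this collection.

```python
from statistics import median

def amstar_score(responses):
    """Score one review against the 11 AMSTAR items: one point per item
    answered 'yes'; 'no' and "can't answer" score zero."""
    assert len(responses) == 11, "AMSTAR has exactly 11 items"
    return sum(1 for r in responses if r == "yes")

def summarise(scores):
    """Median and inter-quartile range, the summary statistics used in
    large-scale methodological quality assessments."""
    s = sorted(scores)
    q1 = median(s[: len(s) // 2])          # lower half
    q3 = median(s[(len(s) + 1) // 2 :])    # upper half
    return median(s), (q1, q3)

# Hypothetical item responses for four systematic reviews
reviews = [
    ["yes"] * 4 + ["no"] * 7,
    ["yes"] * 6 + ["no"] * 5,
    ["yes"] * 2 + ["can't answer"] * 9,
    ["yes"] * 5 + ["no"] * 6,
]
scores = [amstar_score(r) for r in reviews]
print(scores, summarise(scores))   # [4, 6, 2, 5] with median 4.5
```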
2011-09-01
a quality evaluation with limited data, a model-based assessment must be... that affect system performance, a multistage approach to system validation, a modeling and experimental methodology for efficiently addressing a wide range
Assessment of Student Learning in Virtual Spaces, Using Orders of Complexity in Levels of Thinking
ERIC Educational Resources Information Center
Capacho, Jose
2017-01-01
This paper aims at showing a new methodology to assess student learning in virtual spaces supported by Information and Communications Technology (ICT). The methodology is based on the Conceptual Pedagogy Theory and is supported by both knowledge instruments (KI) and intellectual operations (IO). KI are made up of teaching materials embedded in the…
Perimal-Lewis, Lua; Teubner, David; Hakendorf, Paul; Horwood, Chris
2016-12-01
Effective and accurate use of routinely collected health data to produce Key Performance Indicator reporting is dependent on the underlying data quality. In this research, Process Mining methodology and tools were leveraged to assess the data quality of time-based Emergency Department data sourced from electronic health records. This research was done working closely with the domain experts to validate the process models. The hospital patient journey model was used to assess flow abnormalities which resulted from incorrect timestamp data used in time-based performance metrics. The research demonstrated process mining as a feasible methodology to assess data quality of time-based hospital performance metrics. The insight gained from this research enabled appropriate corrective actions to be put in place to address the data quality issues. © The Author(s) 2015.
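The kind of flow abnormality the study above describes, timestamps that violate the expected order of the patient journey, can be illustrated with a minimal check. The event names and the record below are hypothetical, not the study's data model.

```python
from datetime import datetime

# Expected order of events in a simplified ED patient journey
EXPECTED_ORDER = ["arrival", "triage", "seen_by_doctor", "departure"]

def flow_abnormalities(record):
    """Return the pairs of consecutive journey events whose timestamps
    are out of order - candidates for data-quality correction."""
    issues = []
    for a, b in zip(EXPECTED_ORDER, EXPECTED_ORDER[1:]):
        if record[a] > record[b]:
            issues.append((a, b))
    return issues

record = {
    "arrival":        datetime(2015, 3, 1, 10, 0),
    "triage":         datetime(2015, 3, 1, 10, 20),
    "seen_by_doctor": datetime(2015, 3, 1, 10, 5),   # entered incorrectly
    "departure":      datetime(2015, 3, 1, 13, 0),
}
print(flow_abnormalities(record))   # [('triage', 'seen_by_doctor')]
```

A process-mining tool discovers the ordering from the event log itself rather than taking it as given, but the corrective action, flagging impossible transitions before computing time-based metrics, is the same.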
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dandini, Vincent John; Duran, Felicia Angelica; Wyss, Gregory Dane
2003-09-01
This article describes how features of event tree analysis and Monte Carlo-based discrete event simulation can be combined with concepts from object-oriented analysis to develop a new risk assessment methodology, with some of the best features of each. The resultant object-based event scenario tree (OBEST) methodology enables an analyst to rapidly construct realistic models for scenarios for which an a priori discovery of event ordering is either cumbersome or impossible. Each scenario produced by OBEST is automatically associated with a likelihood estimate because probabilistic branching is integral to the object model definition. The OBEST methodology is then applied to an aviation safety problem that considers mechanisms by which an aircraft might become involved in a runway incursion incident. The resulting OBEST model demonstrates how a close link between human reliability analysis and probabilistic risk assessment methods can provide important insights into aviation safety phenomenology.
NASA Technical Reports Server (NTRS)
1974-01-01
A methodology for the display and analysis of postulated energy futures for the United States is presented. A systems approach methodology, including the methodology of technology assessment, is used to examine three energy scenarios: the Westinghouse Nuclear Electric Economy, the Ford Technical Fix Base Case, and a MEGASTAR-generated Alternate to the Ford Technical Fix Base Case. The three scenarios represent different paths of energy consumption from the present to the year 2000. Associated with these paths are various mixes of fuels, conversion, distribution, conservation, and end-use technologies. MEGASTAR presents the estimated times and unit requirements to supply the fuels, conversion, and distribution systems for the postulated end uses of the three scenarios and then estimates the aggregate manpower, materials, and capital requirements needed to develop the energy system described by each scenario.
Mutel, Christopher L; Pfister, Stephan; Hellweg, Stefanie
2012-01-17
We describe a new methodology for performing regionalized life cycle assessment and systematically choosing the spatial scale of regionalized impact assessment methods. We extend standard matrix-based calculations to include matrices that describe the mapping from inventory to impact assessment spatial supports. Uncertainty in inventory spatial data is modeled using a discrete spatial distribution function, which in a case study is derived from empirical data. The minimization of global spatial autocorrelation is used to choose the optimal spatial scale of impact assessment methods. We demonstrate these techniques on electricity production in the United States, using regionalized impact assessment methods for air emissions and freshwater consumption. Case study results show important differences between site-generic and regionalized calculations, and provide specific guidance for future improvements of inventory data sets and impact assessment methods.
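The matrix extension described above can be given a minimal numerical sketch. All numbers are illustrative, and the technosphere matrix is kept diagonal so the standard h = C B A^{-1} f calculation stays trivial; the regionalized step is the mapping matrix M that distributes the inventory flow over impact-assessment regions with different characterization factors.

```python
# Minimal sketch of a regionalized LCA calculation (illustrative numbers,
# not the paper's data sets or impact assessment methods).

def solve_diagonal(A_diag, f):
    """Scaling vector s = A^{-1} f for a diagonal technosphere matrix."""
    return [fi / ai for fi, ai in zip(f, A_diag)]

A_diag = [1.0, 1.0]      # two processes, unit production each
f      = [1.0, 0.5]      # functional-unit demand on each process
B      = [[2.0, 1.0]]    # one elementary flow (kg emitted) per unit process
M      = [[0.7], [0.3]]  # share of that flow occurring in region 1 / region 2
cf     = [0.5, 2.0]      # region-specific characterization factors

s = solve_diagonal(A_diag, f)                   # scaling vector [1.0, 0.5]
g = sum(B[0][j] * s[j] for j in range(2))       # site-generic inventory: 2.5
g_regional = [M[r][0] * g for r in range(2)]    # regionalized inventory
impact = sum(cf[r] * g_regional[r] for r in range(2))
print(g, g_regional, impact)
```

With a single average characterization factor the same inventory would give a different score, which is the site-generic vs regionalized difference the case study quantifies.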
NASA Astrophysics Data System (ADS)
Gorai, A. K.; Hasni, S. A.; Iqbal, Jawed
2016-11-01
Groundwater is the most important natural source of drinking water for many people around the world, especially in rural areas where treated water supplies are not available. Drinking water resources cannot be optimally used and sustained unless the quality of the water is properly assessed. To this end, an attempt has been made to develop a suitable methodology for the assessment of drinking water quality on the basis of 11 physico-chemical parameters. The present study adopts a fuzzy aggregation approach to estimate the water quality index of a sample and check its suitability for drinking purposes. Based on experts' opinions and the authors' judgement, 11 water quality (pollutant) variables (alkalinity, dissolved solids (DS), hardness, pH, Ca, Mg, Fe, fluoride, As, sulphate, nitrates) were selected for the quality assessment. The results of the proposed methodology are compared with the output of the widely used deterministic method (weighted arithmetic mean aggregation) to verify the suitability of the developed methodology.
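The deterministic baseline the fuzzy approach is compared against, weighted arithmetic mean aggregation, can be sketched as follows. The parameter subset, observed values, permissible limits and weights are hypothetical, not the paper's data.

```python
def weighted_wqi(values, standards, weights):
    """Weighted arithmetic water quality index: a quality rating per
    parameter (observed / permissible * 100) aggregated with weights
    that sum to one."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    ratings = [100.0 * v / s for v, s in zip(values, standards)]
    return sum(w * q for w, q in zip(weights, ratings))

# Three of the 11 parameters, with hypothetical values (mg/L)
params   = ["Fluoride", "Nitrate", "Iron"]
observed = [1.2, 30.0, 0.2]
limit    = [1.5, 45.0, 0.3]   # permissible limits
weights  = [0.5, 0.3, 0.2]

wqi = weighted_wqi(observed, limit, weights)
print(round(wqi, 1))   # 73.3 here; values below 100 are within limits overall
```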
Angelis, Aris; Kanavos, Panos
2016-05-01
In recent years, multiple criteria decision analysis (MCDA) has emerged as a likely alternative to address shortcomings in health technology assessment (HTA) by offering a more holistic perspective to value assessment and acting as an alternative priority setting tool. In this paper, we argue that MCDA needs to subscribe to robust methodological processes related to the selection of objectives, criteria and attributes in order to be meaningful in the context of healthcare decision making and fulfil its role in value-based assessment (VBA). We propose a methodological process, based on multi-attribute value theory (MAVT) methods comprising five distinct phases, outline the stages involved in each phase and discuss their relevance in the HTA process. Importantly, criteria and attributes need to satisfy a set of desired properties, otherwise the outcome of the analysis can produce spurious results and misleading recommendations. Assuming the methodological process we propose is adhered to, the application of MCDA presents three very distinct advantages to decision makers in the context of HTA and VBA: first, it acts as an instrument for eliciting preferences on the performance of alternative options across a wider set of explicit criteria, leading to a more complete assessment of value; second, it allows the elicitation of preferences across the criteria themselves to reflect differences in their relative importance; and, third, the entire process of preference elicitation can be informed by direct stakeholder engagement, and can therefore reflect their own preferences. All features are fully transparent and facilitate decision making.
Chauvin, Anthony; Truchot, Jennifer; Bafeta, Aida; Pateron, Dominique; Plaisance, Patrick; Yordanov, Youri
2018-04-01
The number of trials assessing Simulation-Based Medical Education (SBME) interventions has rapidly expanded. Many studies show that potential flaws in design, conduct and reporting of randomized controlled trials (RCTs) can bias their results. We conducted a methodological review of RCTs assessing a SBME in Emergency Medicine (EM) and examined their methodological characteristics. We searched MEDLINE via PubMed for RCT that assessed a simulation intervention in EM, published in 6 general and internal medicine and in the top 10 EM journals. The Cochrane Collaboration risk of Bias tool was used to assess risk of bias, intervention reporting was evaluated based on the "template for intervention description and replication" checklist, and methodological quality was evaluated by the Medical Education Research Study Quality Instrument. Reports selection and data extraction was done by 2 independents researchers. From 1394 RCTs screened, 68 trials assessed a SBME intervention. They represent one quarter of our sample. Cardiopulmonary resuscitation (CPR) is the most frequent topic (81%). Random sequence generation and allocation concealment were performed correctly in 66 and 49% of trials. Blinding of participants and assessors was performed correctly in 19 and 68%. Risk of attrition bias was low in three-quarters of the studies (n = 51). Risk of selective reporting bias was unclear in nearly all studies. The mean MERQSI score was of 13.4/18.4% of the reports provided a description allowing the intervention replication. Trials assessing simulation represent one quarter of RCTs in EM. Their quality remains unclear, and reproducing the interventions appears challenging due to reporting issues.
Assessment of health risks of policies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ádám, Balázs, E-mail: badam@cmss.sdu.dk; Department of Preventive Medicine, Faculty of Public Health, University of Debrecen, P.O. Box 9, H-4012 Debrecen; Molnár, Ágnes, E-mail: MolnarAg@smh.ca
The assessment of health risks of policies is an inevitable, although challenging prerequisite for the inclusion of health considerations in political decision making. The aim of our project was to develop a so far missing methodological guide for the assessment of the complex impact structure of policies. The guide was developed in a consensual way based on experiences gathered during the assessment of specific national policies selected by the partners of an EU project. Methodological considerations were discussed and summarized in workshops and pilot tested on the EU Health Strategy for finalization. The combined tool, which includes a textual guidancemore » and a checklist, follows the top-down approach, that is, it guides the analysis of causal chains from the policy through related health determinants and risk factors to health outcomes. The tool discusses the most important practical issues of assessment by impact level. It emphasises the transparent identification and prioritisation of factors, the consideration of the feasibility of exposure and outcome assessment with special focus on quantification. The developed guide provides useful methodological instructions for the comprehensive assessment of health risks of policies that can be effectively used in the health impact assessment of policy proposals. - Highlights: • Methodological guide for the assessment of health risks of policies is introduced. • The tool is developed based on the experiences from several case studies. • The combined tool consists of a textual guidance and a checklist. • The top-down approach is followed through the levels of the full impact chain. • The guide provides assistance for the health impact assessment of policy proposals.« less
Sustainability assessment in forest management based on individual preferences.
Martín-Fernández, Susana; Martinez-Falero, Eugenio
2018-01-15
This paper presents a methodology to elicit the preferences of any individual in the assessment of sustainable forest management at the stand level. The elicitation procedure was based on the comparison of the sustainability of pairs of forest locations. A sustainability map of the whole territory was obtained according to the individual's preferences. Three forest sustainability indicators were pre-calculated for each point in a study area in a Scots pine forest in the National Park of Sierra de Guadarrama in the Madrid Region in Spain to obtain the best management plan with the sustainability map. We followed a participatory process involving fifty people to assess the sustainability of the forest management and the methodology. The results highlighted the demand for conservative forest management, the usefulness of the methodology for managers, and the importance and necessity of incorporating stakeholders into forestry decision-making processes. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Sahoo, Madhumita; Sahoo, Satiprasad; Dhar, Anirban; Pradhan, Biswajeet
2016-10-01
Groundwater vulnerability assessment has been an accepted practice to identify the zones with relatively increased potential for groundwater contamination. DRASTIC is the most popular secondary information-based vulnerability assessment approach. Original DRASTIC approach considers relative importance of features/sub-features based on subjective weighting/rating values. However variability of features at a smaller scale is not reflected in this subjective vulnerability assessment process. In contrast to the subjective approach, the objective weighting-based methods provide flexibility in weight assignment depending on the variation of the local system. However experts' opinion is not directly considered in the objective weighting-based methods. Thus effectiveness of both subjective and objective weighting-based approaches needs to be evaluated. In the present study, three methods - Entropy information method (E-DRASTIC), Fuzzy pattern recognition method (F-DRASTIC) and Single parameter sensitivity analysis (SA-DRASTIC), were used to modify the weights of the original DRASTIC features to include local variability. Moreover, a grey incidence analysis was used to evaluate the relative performance of subjective (DRASTIC and SA-DRASTIC) and objective (E-DRASTIC and F-DRASTIC) weighting-based methods. The performance of the developed methodology was tested in an urban area of Kanpur City, India. Relative performance of the subjective and objective methods varies with the choice of water quality parameters. This methodology can be applied without/with suitable modification. These evaluations establish the potential applicability of the methodology for general vulnerability assessment in urban context.
ERIC Educational Resources Information Center
Gale, Jessica; Wind, Stefanie; Koval, Jayma; Dagosta, Joseph; Ryan, Mike; Usselman, Marion
2016-01-01
This paper illustrates the use of simulation-based performance assessment (PA) methodology in a recent study of eighth-grade students' understanding of physical science concepts. A set of four simulation-based PA tasks were iteratively developed to assess student understanding of an array of physical science concepts, including net force,…
Enviroplan—a summary methodology for comprehensive environmental planning and design
Robert Allen Jr.; George Nez; Fred Nicholson; Larry Sutphin
1979-01-01
This paper will discuss a comprehensive environmental assessment methodology that includes a numerical method for visual management and analysis. This methodology employs resource and human activity units as a means to produce a visual form unit which is the fundamental unit of the perceptual environment. The resource unit is based on the ecosystem as the fundamental...
Scrum Methodology in Higher Education: Innovation in Teaching, Learning and Assessment
ERIC Educational Resources Information Center
Jurado-Navas, Antonio; Munoz-Luna, Rosa
2017-01-01
The present paper aims to detail the experience developed in a classroom of English Studies from the Spanish University of Málaga, where an alternative project-based learning methodology has been implemented. Such methodology is inspired by scrum sessions widely extended in technological companies where staff members work in teams and are assigned…
Qualitative Assessment of Inquiry-Based Teaching Methods
ERIC Educational Resources Information Center
Briggs, Michael; Long, George; Owens, Katrina
2011-01-01
A new approach to teaching method assessment using student focused qualitative studies and the theoretical framework of mental models is proposed. The methodology is considered specifically for the advantages it offers when applied to the assessment of inquiry-based teaching methods. The theoretical foundation of mental models is discussed, and…
NASA Technical Reports Server (NTRS)
Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.
1992-01-01
An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with analytical modeling of failure phenomena to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in analytical modeling, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which analytical models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. State-of-the-art analytical models currently employed for designs failure prediction, or performance analysis are used in this methodology. The rationale for the statistical approach taken in the PFA methodology is discussed, the PFA methodology is described, and examples of its application to structural failure modes are presented. The engineering models and computer software used in fatigue crack growth and fatigue crack initiation applications are thoroughly documented.
Assessing value-based health care delivery for haemodialysis.
Parra, Eduardo; Arenas, María Dolores; Alonso, Manuel; Martínez, María Fernanda; Gamen, Ángel; Aguarón, Juan; Escobar, María Teresa; Moreno-Jiménez, José María; Alvarez-Ude, Fernando
2017-06-01
Disparities in haemodialysis outcomes among centres have been well-documented. Besides, attempts to assess haemodialysis results have been based on non-comprehensive methodologies. This study aimed to develop a comprehensive methodology for assessing haemodialysis centres, based on the value of health care. The value of health care is defined as the patient benefit from a specific medical intervention per monetary unit invested (Value = Patient Benefit/Cost). This study assessed the value of health care and ranked different haemodialysis centres. A nephrology quality management group identified the criteria for the assessment. An expert group composed of stakeholders (patients, clinicians and managers) agreed on the weighting of each variable, considering values and preferences. Multi-criteria methodology was used to analyse the data. Four criteria and their weights were identified: evidence-based clinical performance measures = 43 points; yearly mortality = 27 points; patient satisfaction = 13 points; and health-related quality of life = 17 points (100-point scale). Evidence-based clinical performance measures included five sub-criteria, with respective weights, including: dialysis adequacy; haemoglobin concentration; mineral and bone disorders; type of vascular access; and hospitalization rate. The patient benefit was determined from co-morbidity-adjusted results and corresponding weights. The cost of each centre was calculated as the average amount expended per patient per year. The study was conducted in five centres (1-5). After adjusting for co-morbidity, value of health care was calculated, and the centres were ranked. A multi-way sensitivity analysis that considered different weights (10-60% changes) and costs (changes of 10% in direct and 30% in allocated costs) showed that the methodology was robust. The rankings: 4-5-3-2-1 and 4-3-5-2-1 were observed in 62.21% and 21.55%, respectively, of simulations, when weights were varied by 60%. 
Value assessments may integrate divergent stakeholder perceptions, create a context for improvement and aid in policy-making decisions. © 2015 John Wiley & Sons, Ltd.
Tsipouras, Markos G; Giannakeas, Nikolaos; Tzallas, Alexandros T; Tsianou, Zoe E; Manousou, Pinelopi; Hall, Andrew; Tsoulos, Ioannis; Tsianos, Epameinondas
2017-03-01
Collagen proportional area (CPA) extraction in liver biopsy images provides the degree of fibrosis expansion in liver tissue, which is the most characteristic histological alteration in hepatitis C virus (HCV). Assessment of the fibrotic tissue is currently based on semiquantitative staging scores such as Ishak and Metavir. Since its introduction as a fibrotic tissue assessment technique, CPA calculation based on image analysis techniques has proven to be more accurate than semiquantitative scores. However, CPA has yet to reach everyday clinical practice, since the lack of standardized and robust methods for computerized image analysis for CPA assessment have proven to be a major limitation. The current work introduces a three-stage fully automated methodology for CPA extraction based on machine learning techniques. Specifically, clustering algorithms have been employed for background-tissue separation, as well as for fibrosis detection in liver tissue regions, in the first and the third stage of the methodology, respectively. Due to the existence of several types of tissue regions in the image (such as blood clots, muscle tissue, structural collagen, etc.), classification algorithms have been employed to identify liver tissue regions and exclude all other non-liver tissue regions from CPA computation. For the evaluation of the methodology, 79 liver biopsy images have been employed, obtaining 1.31% mean absolute CPA error, with 0.923 concordance correlation coefficient. The proposed methodology is designed to (i) avoid manual threshold-based and region selection processes, widely used in similar approaches presented in the literature, and (ii) minimize CPA calculation time. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Failure mode effect analysis and fault tree analysis as a combined methodology in risk management
NASA Astrophysics Data System (ADS)
Wessiani, N. A.; Yoshio, F.
2018-04-01
There have been many studies reported the implementation of Failure Mode Effect Analysis (FMEA) and Fault Tree Analysis (FTA) as a method in risk management. However, most of the studies usually only choose one of these two methods in their risk management methodology. On the other side, combining these two methods will reduce the drawbacks of each methods when implemented separately. This paper aims to combine the methodology of FMEA and FTA in assessing risk. A case study in the metal company will illustrate how this methodology can be implemented. In the case study, this combined methodology will assess the internal risks that occur in the production process. Further, those internal risks should be mitigated based on their level of risks.
Review of Natural Phenomena Hazard (NPH) Assessments for the Hanford 200 Areas (Non-Seismic)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snow, Robert L.; Ross, Steven B.; Sullivan, Robin S.
2010-09-24
The purpose of this review is to assess the need for updating Natural Phenomena Hazard (NPH) assessments for the Hanford 200 Areas, as required by DOE Order 420.1B Chapter IV, Natural Phenomena Hazards Mitigation, based on significant changes in state-of-the-art NPH assessment methodology or site-specific information. The review includes all natural phenomena hazards with the exception of seismic/earthquake hazards, which are being addressed under a separate effort. It was determined that existing non-seismic NPH assessments are consistent with current design methodology and site specific data.
Simoens, Steven
2013-01-01
Objectives This paper aims to assess the methodological quality of economic evaluations included in Belgian reimbursement applications for Class 1 drugs. Materials and Methods For 19 reimbursement applications submitted during 2011 and Spring 2012, a descriptive analysis assessed the methodological quality of the economic evaluation, evaluated the assessment of that economic evaluation by the Drug Reimbursement Committee and the response to that assessment by the company. Compliance with methodological guidelines issued by the Belgian Healthcare Knowledge Centre was assessed using a detailed checklist of 23 methodological items. The rate of compliance was calculated based on the number of economic evaluations for which the item was applicable. Results Economic evaluations tended to comply with guidelines regarding perspective, target population, subgroup analyses, comparator, use of comparative clinical data and final outcome measures, calculation of costs, incremental analysis, discounting and time horizon. However, more attention needs to be paid to the description of limitations of indirect comparisons, the choice of an appropriate analytic technique, the expression of unit costs in values for the current year, the estimation and valuation of outcomes, the presentation of results of sensitivity analyses, and testing the face validity of model inputs and outputs. Also, a large variation was observed in the scope and depth of the quality assessment by the Drug Reimbursement Committee. Conclusions Although general guidelines exist, pharmaceutical companies and the Drug Reimbursement Committee would benefit from the existence of a more detailed checklist of methodological items that need to be reported in an economic evaluation. PMID:24386474
Simoens, Steven
2013-01-01
This paper aims to assess the methodological quality of economic evaluations included in Belgian reimbursement applications for Class 1 drugs. For 19 reimbursement applications submitted during 2011 and Spring 2012, a descriptive analysis assessed the methodological quality of the economic evaluation, evaluated the assessment of that economic evaluation by the Drug Reimbursement Committee and the response to that assessment by the company. Compliance with methodological guidelines issued by the Belgian Healthcare Knowledge Centre was assessed using a detailed checklist of 23 methodological items. The rate of compliance was calculated based on the number of economic evaluations for which the item was applicable. Economic evaluations tended to comply with guidelines regarding perspective, target population, subgroup analyses, comparator, use of comparative clinical data and final outcome measures, calculation of costs, incremental analysis, discounting and time horizon. However, more attention needs to be paid to the description of limitations of indirect comparisons, the choice of an appropriate analytic technique, the expression of unit costs in values for the current year, the estimation and valuation of outcomes, the presentation of results of sensitivity analyses, and testing the face validity of model inputs and outputs. Also, a large variation was observed in the scope and depth of the quality assessment by the Drug Reimbursement Committee. Although general guidelines exist, pharmaceutical companies and the Drug Reimbursement Committee would benefit from the existence of a more detailed checklist of methodological items that need to be reported in an economic evaluation.
A methodology for dam inventory and inspection with remotely sensed data
NASA Technical Reports Server (NTRS)
Berger, J. P.; Philipson, W. R.; Liang, T.
1979-01-01
A methodology is presented to increase the efficiency and accuracy of dam inspection by incorporating remote sensing techniques into field-based monitoring programs. The methodology focuses on New York State and places emphasis on readily available remotely sensed data aerial photographs and Landsat data. Aerial photographs are employed in establishing a state-wide data base, referenced on county highway and U.S. Geological Survey 1:24,000 scale, topographic maps. Data base updates are conducted by county or region, using aerial photographs or Landsat as a primary source of information. Field investigations are generally limited to high-hazard or special problem dams, or to dams which cannot be assessed adequately with aerial photographs. Although emphasis is placed on available data, parameters for acquiring new aircraft data for assessing dam condition are outlined. Large scale (1:10,000) vertical, stereoscopic, color-infrared photography, flown during the spring or fall, is recommended.
Threat Assessment and Remediation Analysis (TARA)
2014-10-01
of countermeasure selection strategies that prescribe the application of countermeasures based on level of risk tolerance. This paper outlines the...catalog data, which are discussed later in this paper . The methodology can be described as conjoined trade studies, where the first trade identifies and...ranks vulnerabilities based on assessed risk, and the second identifies and selects countermeasures based on assessed utility and cost. This paper
Developing Dementia Prevention Trials: Baseline Report of the Home-Based Assessment Study
Sano, Mary; Egelko, Susan; Donohue, Michael; Ferris, Steven; Kaye, Jeffrey; Hayes, Tamara L.; Mundt, James C.; Sun, C.K.; Paparello, Silvia; Aisen, Paul S.
2014-01-01
This report describes the baseline experience of the multi-center, Home Based Assessment (HBA) study, designed to develop methods for dementia prevention trials using novel technologies for test administration and data collection. Non-demented individuals ≥ 75 years old were recruited and evaluated in-person using established clinical trial outcomes of cognition and function, and randomized to one of 3 assessment methodologies: 1) mail-in questionnaire/live telephone interviews (MIP); 2) automated telephone with interactive voice recognition (IVR); and 3) internet-based computer Kiosk (KIO). Brief versions of cognitive and non-cognitive outcomes, were adapted to each methodology and administered at baseline and repeatedly over a 4-year period. “Efficiency” measures assessed the time from screening to baseline, and staff time required for each methodology. 713 individuals signed consent and were screened; 640 met eligibility and were randomized to one of 3 assessment arms and 581 completed baseline. Drop out, time from screening to baseline and total staff time were highest among those assigned to KIO. However efficiency measures were driven by non-recurring start-up activities suggesting that differences may be mitigated over a long trial. Performance among HBA instruments collected via different technologies will be compared to established outcomes over this 4 year study. PMID:23151596
Matias, Carla; O'Connor, Thomas G; Futh, Annabel; Scott, Stephen
2014-01-01
Conceptually and methodologically distinct models exist for assessing quality of parent-child relationships, but few studies contrast competing models or assess their overlap in predicting developmental outcomes. Using observational methodology, the current study examined the distinctiveness of attachment theory-based and social learning theory-based measures of parenting in predicting two key measures of child adjustment: security of attachment narratives and social acceptance in peer nominations. A total of 113 5-6-year-old children from ethnically diverse families participated. Parent-child relationships were rated using standard paradigms. Measures derived from attachment theory included sensitive responding and mutuality; measures derived from social learning theory included positive attending, directives, and criticism. Child outcomes were independently-rated attachment narrative representations and peer nominations. Results indicated that Attachment theory-based and Social Learning theory-based measures were modestly correlated; nonetheless, parent-child mutuality predicted secure child attachment narratives independently of social learning theory-based measures; in contrast, criticism predicted peer-nominated fighting independently of attachment theory-based measures. In young children, there is some evidence that attachment theory-based measures may be particularly predictive of attachment narratives; however, no single model of measuring parent-child relationships is likely to best predict multiple developmental outcomes. Assessment in research and applied settings may benefit from integration of different theoretical and methodological paradigms.
Bridge condition assessment based on long-term strain monitoring
NASA Astrophysics Data System (ADS)
Sun, LiMin; Sun, Shouwang
2011-04-01
In consideration of the important role that bridges play as transportation infrastructures, their safety, durability and serviceability have always been deeply concerned. Structural Health Monitoring Systems (SHMS) have been installed to many long-span bridges to provide bridge engineers with the information needed in making rational decisions for maintenance. However, SHMS also confronted bridge engineers with the challenge of efficient use of monitoring data. Thus, methodologies which are robust to random disturbance and sensitive to damage become a subject on which many researches in structural condition assessment concentrate. In this study, an innovative probabilistic approach for condition assessment of bridge structures was proposed on the basis of long-term strain monitoring on steel girder of a cable-stayed bridge. First, the methodology of damage detection in the vicinity of monitoring point using strain-based indices was investigated. Then, the composition of strain response of bridge under operational loads was analyzed. Thirdly, the influence of temperature and wind on strains was eliminated and thus strain fluctuation under vehicle loads is obtained. Finally, damage evolution assessment was carried out based on the statistical characteristics of rain-flow cycles derived from the strain fluctuation under vehicle loads. The research conducted indicates that the methodology proposed is qualified for structural condition assessment so far as the following respects are concerned: (a) capability of revealing structural deterioration; (b) immunity to the influence of environmental variation; (c) adaptability to the random characteristic exhibited by long-term monitoring data. Further examination of the applicability of the proposed methodology in aging bridge may provide a more convincing validation.
A multicriteria decision making model for assessment and selection of an ERP in a logistics context
NASA Astrophysics Data System (ADS)
Pereira, Teresa; Ferreira, Fernanda A.
2017-07-01
The aim of this work is to apply a methodology of decision support based on a multicriteria decision analyses (MCDA) model that allows the assessment and selection of an Enterprise Resource Planning (ERP) in a Portuguese logistics company by Group Decision Maker (GDM). A Decision Support system (DSS) that implements a MCDA - Multicriteria Methodology for the Assessment and Selection of Information Systems / Information Technologies (MMASSI / IT) is used based on its features and facility to change and adapt the model to a given scope. Using this DSS it was obtained the information system that best suited to the decisional context, being this result evaluated through a sensitivity and robustness analysis.
Method and system for dynamic probabilistic risk assessment
NASA Technical Reports Server (NTRS)
Dugan, Joanne Bechta (Inventor); Xu, Hong (Inventor)
2013-01-01
The DEFT methodology, system and computer readable medium extends the applicability of the PRA (Probabilistic Risk Assessment) methodology to computer-based systems, by allowing DFT (Dynamic Fault Tree) nodes as pivot nodes in the Event Tree (ET) model. DEFT includes a mathematical model and solution algorithm, supports all common PRA analysis functions and cutsets. Additional capabilities enabled by the DFT include modularization, phased mission analysis, sequence dependencies, and imperfect coverage.
NASA Technical Reports Server (NTRS)
Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.
1992-01-01
An improved methodology for quantitatively evaluating failure risk of spaceflights systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with analytical modeling of failure phenomena to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in analytical modeling, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which analytical models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. State-of-the-art analytical models currently employed for design, failure prediction, or performance analysis are used in this methodology. The rationale for the statistical approach taken in the PFA methodology is discussed, the PFA methodology is described, and examples of its application to structural failure modes are presented. The engineering models and computer software used in fatigue crack growth and fatigue crack initiation applications are thoroughly documented.
Giansanti, Daniele; Morelli, Sandra; Maccioni, Giovanni; Guerriero, Lorenzo; Bedini, Remo; Pepe, Gennaro; Colombo, Cesare; Borghi, Gabriella; Macellari, Velio
2009-01-01
Due to major advances in information technology, telemedicine applications are ready for widespread use. Nonetheless, to allow their diffusion in National Health Care Systems (NHCSs), specific methodologies of health technology assessment (HTA) should be used to assess standardization, overall quality, interoperability, and legal, economic and cost-benefit aspects. One of the limits to the diffusion of digital tele-echocardiography (T-E) applications in the NHCS is the lack of a specific methodology for HTA. In the present study, a solution offering a structured HTA of T-E products was designed. The methodology also defined standardized quality levels for the application: the first level represents the minimum level of acceptance, while the other levels are accessory levels useful for a more accurate assessment of the product. The methodology proved useful in rationalizing the process of standardization and received a high degree of acceptance from the subjects involved in the study.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jetter, R. I.; Messner, M. C.; Sham, T. -L.
The goal of the proposed integrated Elastic Perfectly-Plastic (EPP) and Simplified Model Test (SMT) methodology is to incorporate an SMT data-based approach for creep-fatigue damage evaluation into the EPP methodology, to avoid the separate evaluation of creep and fatigue damage and to eliminate the requirement for stress classification in current methods, thus greatly simplifying the evaluation of elevated-temperature cyclic service. This methodology should minimize over-conservatism while properly accounting for localized defects and stress risers. To support the implementation of the proposed methodology and to verify the applicability of the code rules, analytical studies and evaluation of thermomechanical test results continued in FY17. This report presents the results of those studies. An EPP strain limits methodology assessment was based on recent two-bar thermal ratcheting test results on 316H stainless steel in the temperature range of 405 to 705°C. Strain range predictions from the EPP evaluation of the two-bar tests were also evaluated and compared with the experimental results. The role of sustained primary loading on cyclic life was assessed using the results of pressurized SMT data from tests on Alloy 617 at 950°C. A viscoplastic material model was used in an analytic simulation of two-bar tests to compare with EPP strain limits assessments using isochronous stress-strain curves that are consistent with the viscoplastic material model. A finite element model of a prior 304H stainless steel Oak Ridge National Laboratory (ORNL) nozzle-to-sphere test was developed and used for EPP strain limits and creep-fatigue code case damage evaluations. A theoretical treatment of a recurring issue with convergence criteria for plastic shakedown illustrated the role of computer machine precision in EPP calculations.
Ge, Long; Tian, Jin-hui; Li, Xiu-xia; Song, Fujian; Li, Lun; Zhang, Jun; Li, Ge; Pei, Gai-qin; Qiu, Xia; Yang, Ke-hu
2016-01-01
Because of the methodological complexity of network meta-analyses (NMAs), NMAs may be more vulnerable to methodological risks than conventional pair-wise meta-analyses. Our study aims to investigate epidemiological characteristics, conduct of the literature search, methodological quality and reporting of the statistical analysis process in the field of cancer, based on the PRISMA extension statement and a modified AMSTAR checklist. We identified and included 102 NMAs in the field of cancer; 61 NMAs were conducted using a Bayesian framework. Of them, more than half did not report assessment of convergence (60.66%). Inconsistency was assessed in 27.87% of NMAs. Assessment of heterogeneity in traditional meta-analyses was more common (42.62%) than in NMAs (6.56%). Most NMAs did not report assessment of similarity (86.89%) and did not use the GRADE tool to assess the quality of evidence (95.08%). 43 NMAs were adjusted indirect comparisons; the methods used were described in 53.49% of them. Only 4.65% of NMAs described the details of handling multi-group trials, and 6.98% described the methods of similarity assessment. The median total AMSTAR score was 8.00 (IQR: 6.00–8.25). Methodological quality and reporting of statistical analysis did not substantially differ by selected general characteristics. Overall, the quality of NMAs in the field of cancer was generally acceptable. PMID:27848997
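The summary statistic reported above (a median with an inter-quartile range) can be reproduced for any list of checklist scores with the standard library; the example scores below are made up for illustration, not data from the review:

```python
import statistics

def median_iqr(scores):
    """Median and quartiles (Q1, Q3) of a list of AMSTAR scores."""
    q1, q2, q3 = statistics.quantiles(sorted(scores), n=4)
    return q2, (q1, q3)
```

Note that quartile conventions differ (`statistics.quantiles` defaults to the "exclusive" method), so a published IQR may not be exactly reproducible from the raw scores.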
Simulation-Based Probabilistic Tsunami Hazard Analysis: Empirical and Robust Hazard Predictions
NASA Astrophysics Data System (ADS)
De Risi, Raffaele; Goda, Katsuichiro
2017-08-01
Probabilistic tsunami hazard analysis (PTHA) is the prerequisite for rigorous risk assessment and thus for decision-making regarding risk mitigation strategies. This paper proposes a new simulation-based methodology for tsunami hazard assessment for a specific site of an engineering project along the coast, or, more broadly, for a wider tsunami-prone region. The methodology incorporates numerous uncertain parameters that are related to geophysical processes by adopting new scaling relationships for tsunamigenic seismic regions. Through the proposed methodology it is possible to obtain either a tsunami hazard curve for a single location, that is the representation of a tsunami intensity measure (such as inundation depth) versus its mean annual rate of occurrence, or tsunami hazard maps, representing the expected tsunami intensity measures within a geographical area, for a specific probability of occurrence in a given time window. In addition to the conventional tsunami hazard curve that is based on an empirical statistical representation of the simulation-based PTHA results, this study presents a robust tsunami hazard curve, which is based on a Bayesian fitting methodology. The robust approach allows a significant reduction of the number of simulations and, therefore, a reduction of the computational effort. Both methods produce a central estimate of the hazard as well as a confidence interval, facilitating the rigorous quantification of the hazard uncertainties.
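The empirical hazard curve described here is, in essence, the fraction of simulated events exceeding each intensity level, scaled by the mean annual rate of tsunamigenic events. A minimal sketch (the function name and all numbers are illustrative; the Bayesian robust fitting is not shown):

```python
def hazard_curve(intensities, thresholds, annual_event_rate):
    """Empirical mean annual rate of exceedance for each intensity threshold.

    intensities: simulated intensity measures (e.g. inundation depths), one per event
    annual_event_rate: mean annual rate of the simulated event population
    """
    n = len(intensities)
    return [annual_event_rate * sum(1 for x in intensities if x > t) / n
            for t in thresholds]
```

Inverting such a curve at a target mean annual rate gives the design intensity for a chosen return period.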
ERIC Educational Resources Information Center
Faggella-Luby, Michael; Lombardi, Allison; Lalor, Adam R.; Dukes, Lyman, III
2014-01-01
In order to assess the status of the research base that informs "what works" for students with disabilities in higher education, it is necessary to conduct an examination of the methodologies used in the literature. The authors of the current study analyzed the methodological trends across the thirty-year lifespan of the "Journal of…
System Dynamics Aviation Readiness Modeling Demonstration
2005-08-31
requirements. It is recommended that the Naval Aviation Enterprise take a close look at the requirements, i.e., performance measures, methodology ... unit’s capability to perform specific Joint Mission Essential Task List (JMETL) requirements now and in the future. This assessment methodology must ... the time-associated costs. The new methodology must base decisions on currently available data and databases. A “useful” readiness model should be
The Benefits of Standards-Based Grading: A Critical Evaluation of Modern Grading Practices
ERIC Educational Resources Information Center
Iamarino, Danielle L.
2014-01-01
This paper explores the methodology and application of an assessment philosophy known as standards-based grading, via a critical comparison of standards-based grading to other assessment philosophies commonly employed at the elementary, secondary, and post-secondary levels of education. Evidenced by examples of increased student engagement and…
Verma, Mahendra K.; Warwick, Peter D.
2011-01-01
The Energy Independence and Security Act of 2007 (Public Law 110-140) authorized the U.S. Geological Survey (USGS) to conduct a national assessment of geologic storage resources for carbon dioxide (CO2) and requested that the USGS estimate the "potential volumes of oil and gas recoverable by injection and sequestration of industrial carbon dioxide in potential sequestration formations" (121 Stat. 1711). The USGS developed a noneconomic, probability-based methodology to assess the Nation's technically assessable geologic storage resources available for sequestration of CO2 (Brennan and others, 2010) and is currently using the methodology to assess the Nation's CO2 geologic storage resources. Because the USGS has not developed a methodology to assess the potential volumes of technically recoverable hydrocarbons that could be produced by injection and sequestration of CO2, the Geologic Carbon Sequestration project initiated an effort in 2010 to develop a methodology for the assessment of the technically recoverable hydrocarbon potential in the sedimentary basins of the United States using enhanced oil recovery (EOR) techniques with CO2 (CO2-EOR). In collaboration with Stanford University, the USGS hosted a 2-day CO2-EOR workshop in May 2011, attended by 28 experts from academia, natural resource agencies and laboratories of the Federal Government, State and international geologic surveys, and representatives from the oil and gas industry. The geologic and the reservoir engineering and operations working groups formed during the workshop discussed various aspects of geology, reservoir engineering, and operations to make recommendations for the methodology.
Code of Federal Regulations, 2014 CFR
2014-04-01
... justified by a newly created property-based needs assessment (a life-cycle physical needs assessments... calculated as the sum of total operating cost, modernization cost, and costs to address accrual needs. Costs... assist PHAs in completing the assessments. The spreadsheet calculator is designed to walk housing...
Code of Federal Regulations, 2013 CFR
2013-04-01
... justified by a newly created property-based needs assessment (a life-cycle physical needs assessments... calculated as the sum of total operating cost, modernization cost, and costs to address accrual needs. Costs... assist PHAs in completing the assessments. The spreadsheet calculator is designed to walk housing...
Using Risk Assessment Methodologies to Meet Management Objectives
NASA Technical Reports Server (NTRS)
DeMott, D. L.
2015-01-01
Current decision making involves numerous possible combinations of technology elements, safety and health issues, operational aspects, and process considerations to satisfy program goals. Identifying potential risk considerations as part of the management decision-making process provides additional tools for making more informed management decisions. Adapting and using risk assessment methodologies can generate new perspectives on various risk and safety concerns that are not immediately apparent. Safety and operational risks can be identified, and final decisions can balance these considerations against cost and schedule risks. Additional assessments can also show the likelihood of event occurrence and event consequence to provide a more informed basis for decision making, as well as cost-effective mitigation strategies. Methodologies available for performing risk assessments range from qualitative identification of risk potential to detailed assessments in which quantitative probabilities are calculated. The methodology used should be based on factors that include: 1) type of industry and industry standards, 2) tasks, tools, and environment, 3) type and availability of data, and 4) industry views and requirements regarding risk and reliability. Risk assessments are a tool for decision makers to understand potential consequences and be in a position to reduce, mitigate, or eliminate costly mistakes or catastrophic failures.
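At the qualitative end of the range described above, a risk assessment often reduces to a likelihood-consequence matrix. The category names and the scoring thresholds below are hypothetical, chosen only to illustrate the technique:

```python
# Ordinal scales for a simple qualitative risk matrix (illustrative values).
LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3}
CONSEQUENCE = {"minor": 1, "serious": 2, "catastrophic": 3}

def risk_level(likelihood, consequence):
    """Map a likelihood/consequence pair to a qualitative risk rank."""
    score = LIKELIHOOD[likelihood] * CONSEQUENCE[consequence]
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"
```

A quantitative assessment replaces these ordinal scores with event probabilities and modeled consequences, but the decision structure is the same.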
Mani, Suresh; Sharma, Shobha; Omar, Baharudin; Paungmali, Aatit; Joseph, Leonard
2017-04-01
Purpose: The purpose of this review is to systematically explore and summarise the validity and reliability of telerehabilitation (TR)-based physiotherapy assessment for musculoskeletal disorders. Method: A comprehensive systematic literature review was conducted using a number of electronic databases: PubMed, EMBASE, PsycINFO, Cochrane Library and CINAHL, published between January 2000 and May 2015. Studies that examined the validity and the inter- and intra-rater reliability of TR-based physiotherapy assessment for musculoskeletal conditions were included. Two independent reviewers used the Quality Appraisal Tool for studies of diagnostic Reliability (QAREL) and the Quality Assessment of Diagnostic Accuracy Studies (QUADAS) tool to assess the methodological quality of the reliability and validity studies, respectively. Results: A total of 898 hits were retrieved, of which 11 articles meeting the inclusion criteria were reviewed. Nine studies explored concurrent validity and inter- and intra-rater reliability, while two studies examined only concurrent validity. The reviewed studies were moderate to good in methodological quality. Physiotherapy assessments such as pain, swelling, range of motion, muscle strength, balance, gait and functional assessment demonstrated good concurrent validity. However, the reported concurrent validity of lumbar spine posture, special orthopaedic tests, neurodynamic tests and scar assessments ranged from low to moderate. Conclusion: TR-based physiotherapy assessment was technically feasible, with overall good concurrent validity and excellent reliability, except for lumbar spine posture, orthopaedic special tests, neurodynamic tests and scar assessment.
NASA Astrophysics Data System (ADS)
Camacho-Navarro, Jhonatan; Ruiz, Magda; Villamizar, Rodolfo; Mujica, Luis; Moreno-Beltrán, Gustavo; Quiroga, Jabid
2017-05-01
Continuous monitoring for damage detection in structural assessment requires low-cost equipment and efficient algorithms. This work describes the stages involved in the design of a methodology with high feasibility for use in continuous damage assessment. Specifically, an algorithm based on a data-driven approach using principal component analysis, with acquired signals pre-processed by means of cross-correlation functions, is discussed. A carbon steel pipe section and a laboratory tower were used as test structures in order to demonstrate the feasibility of the methodology to detect abrupt changes in the structural response when damage occurs. Two damage cases are studied: a crack and a leak, for the two structures respectively. Experimental results show that the methodology is promising for the continuous monitoring of real structures.
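The cross-correlation pre-processing step can be sketched as a normalized correlation between a baseline response and a newly acquired signal, yielding a simple novelty index that stays near zero for an unchanged structure and grows as the response decorrelates. This is a stdlib-only illustration of the idea, not the authors' PCA-based algorithm:

```python
def corr(x, y):
    """Normalized (Pearson) cross-correlation at zero lag."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def damage_index(baseline, current):
    # 0 for an unchanged structure; approaches 1 (or beyond) as signals decorrelate
    return 1.0 - corr(baseline, current)
```

In the published methodology these correlation features feed a PCA model of the healthy state, and damage is flagged when projection residuals exceed baseline statistics.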
Development of a methodology for assessing the safety of embedded software systems
NASA Technical Reports Server (NTRS)
Garrett, C. J.; Guarro, S. B.; Apostolakis, G. E.
1993-01-01
A Dynamic Flowgraph Methodology (DFM) based on an integrated approach to modeling and analyzing the behavior of software-driven embedded systems for assessing and verifying reliability and safety is discussed. DFM is based on an extension of the Logic Flowgraph Methodology to incorporate state transition models. System models which express the logic of the system in terms of causal relationships between physical variables and temporal characteristics of software modules are analyzed to determine how a certain state can be reached. This is done by developing timed fault trees which take the form of logical combinations of static trees relating the system parameters at different points in time. The resulting information concerning the hardware and software states can be used to eliminate unsafe execution paths and identify testing criteria for safety critical software functions.
Remedial Action Assessment System: A computer-based methodology for conducting feasibility studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, M.K.; Buelt, J.L.; Stottlemyre, J.A.
1991-02-01
Because of the complexity and number of potential waste sites facing the US Department of Energy (DOE) for potential cleanup, DOE is supporting the development of a computer-based methodology to streamline the remedial investigation/feasibility study process. The Remedial Action Assessment System (RAAS) can be used for screening, linking, and evaluating established technology processes in support of conducting feasibility studies. It is also intended to do the same in support of corrective measures studies. The user interface employs menus, windows, help features, and graphical information while RAAS is in operation. Object-oriented programming is used to link unit processes into sets of compatible processes that form appropriate remedial alternatives. Once the remedial alternatives are formed, the RAAS methodology can evaluate them in terms of effectiveness, implementability, and cost. RAAS will access a user-selected risk assessment code to determine the reduction of risk after remedial action by each recommended alternative. The methodology will also help determine the implementability of the remedial alternatives at a site and access cost-estimating tools to provide estimates of capital, operating, and maintenance costs. This paper presents the characteristics of two RAAS prototypes currently being developed. These include the RAAS Technology Information System, which accesses graphical, tabular and textual information about technologies, and the main RAAS methodology, which screens, links, and evaluates remedial technologies. 4 refs., 3 figs., 1 tab.
Vaidya, Anil; Joore, Manuela A; ten Cate-Hoek, Arina J; Kleinegris, Marie-Claire; ten Cate, Hugo; Severens, Johan L
2014-01-01
Lower extremity artery disease (LEAD) is a sign of widespread atherosclerosis also affecting coronary, cerebral and renal arteries and is associated with an increased risk of cardiovascular events. Many economic evaluations have been published for LEAD due to its clinical, social and economic importance. The aim of this systematic review was to assess modelling methods used in published economic evaluations in the field of LEAD. Our review appraised and compared the general characteristics, model structure and methodological quality of published models. Electronic databases MEDLINE and EMBASE were searched until February 2013 via the OVID interface. The Cochrane database of systematic reviews, the Health Technology Assessment database hosted by the National Institute for Health Research and the National Health Services Economic Evaluation Database (NHSEED) were also searched. The methodological quality of the included studies was assessed by using the Philips' checklist. Sixteen model-based economic evaluations were identified and included. Eleven models compared therapeutic health technologies; three models compared diagnostic tests and two models compared a combination of diagnostic and therapeutic options for LEAD. Results of this systematic review revealed acceptable to low methodological quality of the included studies. Methodological diversity and insufficient information posed a challenge for valid comparison of the included studies. In conclusion, there is a need for transparent, methodologically comparable and scientifically credible model-based economic evaluations in the field of LEAD. Future modelling studies should include clinically and economically important cardiovascular outcomes to reflect the wider impact of LEAD on individual patients and on society.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramuhalli, Pradeep; Hirt, Evelyn H.; Veeramany, Arun
This research report summarizes the development and evaluation of a prototypic enhanced risk monitor (ERM) methodology (framework) that includes alternative risk metrics and uncertainty analysis. This updated ERM methodology accounts for uncertainty in the equipment condition assessment (ECA), the prognostic result, and the probabilistic risk assessment (PRA) model. It is anticipated that the ability to characterize uncertainty in the estimated risk and update the risk estimates in real time based on ECA will provide a mechanism for optimizing plant performance while staying within specified safety margins. These results (based on impacting active component O&M using real-time equipment condition information) are a step towards ERMs that, if integrated with AR supervisory plant control systems, can help control O&M costs and improve the affordability of advanced reactors.
Spanish methodological approach for biosphere assessment of radioactive waste disposal.
Agüero, A; Pinedo, P; Cancio, D; Simón, I; Moraleda, M; Pérez-Sánchez, D; Trueba, C
2007-10-01
The development of radioactive waste disposal facilities requires implementation of measures that will afford protection of human health and the environment over a specific temporal frame that depends on the characteristics of the wastes. The repository design is based on a multi-barrier system: (i) the near-field or engineered barrier, (ii) far-field or geological barrier and (iii) the biosphere system. Here, the focus is on the analysis of this last system, the biosphere. A description is provided of conceptual developments, methodological aspects and software tools used to develop the Biosphere Assessment Methodology in the context of high-level waste (HLW) disposal facilities in Spain. This methodology is based on the BIOMASS "Reference Biospheres Methodology" and provides a logical and systematic approach with supplementary documentation that helps to support the decisions necessary for model development. It follows a five-stage approach, such that a coherent biosphere system description and the corresponding conceptual, mathematical and numerical models can be built. A discussion on the improvements implemented through application of the methodology to case studies in international and national projects is included. Some facets of this methodological approach still require further consideration, principally an enhanced integration of climatology, geography and ecology into models considering evolution of the environment, some aspects of the interface between the geosphere and biosphere, and an accurate quantification of environmental change processes and rates.
ERIC Educational Resources Information Center
Crisp, Victoria; Novakovic, Nadezda
2009-01-01
The consistency of assessment demands is important to validity. This research investigated the comparability of the demands of college-assessed units within a vocationally related qualification, drawing on methodological approaches that have previously been used to compare assessments. Assessment materials from five colleges were obtained. After…
Assessment of continuous gas resources in the Khorat Plateau Province, Thailand and Laos, 2016
Schenk, Christopher J.; Klett, Timothy R.; Mercier, Tracey J.; Finn, Thomas M.; Tennyson, Marilyn E.; Gaswirth, Stephanie B.; Marra, Kristen R.; Le, Phuong A.; Drake, Ronald M.
2017-05-25
Using a geology-based assessment methodology, the U.S. Geological Survey assessed mean undiscovered, technically recoverable resources of 2.3 trillion cubic feet of continuous gas in the Khorat Plateau Province of Thailand and Laos.
Stoffenmanager exposure model: company-specific exposure assessments using a Bayesian methodology.
van de Ven, Peter; Fransman, Wouter; Schinkel, Jody; Rubingh, Carina; Warren, Nicholas; Tielemans, Erik
2010-04-01
The web-based tool "Stoffenmanager" was initially developed to assist small- and medium-sized enterprises in the Netherlands to make qualitative risk assessments and to provide advice on control at the workplace. The tool uses a mechanistic model to arrive at a "Stoffenmanager score" for exposure. In a recent study it was shown that variability in exposure measurements given a certain Stoffenmanager score is still substantial. This article discusses an extension to the tool that uses a Bayesian methodology for quantitative workplace/scenario-specific exposure assessment. This methodology allows for real exposure data observed in the company of interest to be combined with the prior estimate (based on the Stoffenmanager model). The output of the tool is a company-specific assessment of exposure levels for a scenario for which data is available. The Bayesian approach provides a transparent way of synthesizing different types of information and is especially preferred in situations where available data is sparse, as is often the case in small- and medium-sized enterprises. Real-world examples as well as simulation studies were used to assess how different parameters such as sample size, difference between prior and data, uncertainty in prior, and variance in the data affect the eventual posterior distribution of a Bayesian exposure assessment.
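The Bayesian synthesis described here combines a model-based prior with measured exposures. On the log scale, and assuming a known data variance, this reduces to the conjugate normal-normal update; the function name and all parameter values below are illustrative assumptions, not the actual Stoffenmanager implementation:

```python
import math

def posterior_log_exposure(prior_mean, prior_sd, measurements, data_sd):
    """Conjugate normal-normal update on the natural-log exposure scale.

    prior_mean/prior_sd: prior for the log geometric-mean exposure (from the model)
    measurements: observed exposure levels (positive values)
    data_sd: assumed known standard deviation of log measurements
    """
    logs = [math.log(m) for m in measurements]
    n = len(logs)
    prec_prior = 1.0 / prior_sd ** 2          # prior precision
    prec_data = n / data_sd ** 2              # data precision
    post_var = 1.0 / (prec_prior + prec_data)
    post_mean = post_var * (prec_prior * prior_mean + prec_data * (sum(logs) / n))
    return post_mean, math.sqrt(post_var)
```

As more company-specific measurements accumulate, the posterior shifts from the model-based prior toward the data mean, exactly the behavior the article examines.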
Assessing Consequential Scenarios in a Complex Operational Environment Using Agent Based Simulation
2017-03-16
RWISE) 93 5.1.5 Conflict Modeling, Planning, and Outcomes Experimentation Program (COMPOEX) 94 5.1.6 Joint Non-Kinetic Effects Model (JNEM)/Athena... experimental design and testing. 4.3.8 Types and Attributes of Agent-Based Model Design Patterns Using the aforementioned ABM flowchart design methodology ... speed, or flexibility during tactical US Army wargaming. The report considers methodologies to improve analysis of the human domain, identifies
Incorporating hydrologic data and ecohydrologic relationships into ecological site descriptions
C. Jason Williams; Frederick B. Pierson; Kenneth E. Spaeth; Joel R. Brown; Osama Z. Al-Hamdan; Mark A. Weltz; Mark A. Nearing; Jeffrey E. Herrick; Jan Boll; Pete Robichaud; David C. Goodrich; Phillip Heilman; D. Phillip Guertin; Mariano Hernandez; Haiyan Wei; Stuart P. Hardegree; Eva K. Strand; Jonathan D. Bates; Loretta J. Metz; Mary H. Nichols
2016-01-01
The purpose of this paper is to recommend a framework and methodology for incorporating hydrologic data and ecohydrologic relationships in Ecological Site Descriptions (ESDs) and thereby enhance the utility of ESDs for assessing rangelands and guiding resilience-based management strategies. Resilience-based strategies assess and manage ecological state...
Teachers' Reactions towards Performance-Based Language Assessment
ERIC Educational Resources Information Center
Chinda, Bordin
2014-01-01
This research aims at examining the reactions of tertiary EFL teachers towards the use of performance-based language assessment. The study employed a mixed-method research methodology. For the quantitative method, 36 teachers responded to a questionnaire survey. In addition, four teachers participated in the in-depth interviews which were…
Using Tasks to Assess Spanish Language Learning
ERIC Educational Resources Information Center
Herrera Mosquera, Leonardo
2012-01-01
The methodology of Task-based teaching (TBT) has been positively regarded by many researchers and language teachers around the world. Yet, this language teaching methodology has been mainly implemented in English as a second language (ESL) classrooms and in English for specific purpose (ESP) courses; and more specifically with advanced-level…
In this paper, the methodological concept of landscape optimization presented by Seppelt and Voinov [Ecol. Model. 151 (2/3) (2002) 125] is analyzed. Two aspects are chosen for detailed study. First, we generalize the performance criterion to assess a vector of ecosystem functi...
Probabilistic assessment methodology for continuous-type petroleum accumulations
Crovelli, R.A.
2003-01-01
The analytic resource assessment method, called ACCESS (Analytic Cell-based Continuous Energy Spreadsheet System), was developed to calculate estimates of petroleum resources for the geologic assessment model, called FORSPAN, in continuous-type petroleum accumulations. The ACCESS method is based upon mathematical equations derived from probability theory in the form of a computer spreadsheet system. © 2003 Elsevier B.V. All rights reserved.
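A cell-based analytic method like ACCESS avoids Monte Carlo simulation by aggregating per-cell probability distributions in closed form. Under an independence assumption, means and variances of the per-cell resource estimates simply add; this sketch is a generic illustration of that principle, not the actual ACCESS equations:

```python
def aggregate_cells(cells):
    """Aggregate independent per-cell resource estimates.

    cells: list of (mean, variance) pairs, one per assessment cell.
    Returns the (mean, variance) of the total resource: both add under independence.
    """
    total_mean = sum(m for m, _ in cells)
    total_var = sum(v for _, v in cells)
    return total_mean, total_var
```

Correlation between cells would inflate the total variance, which is one reason a full analytic treatment is more involved than this sketch.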
Measurement-based auralization methodology for the assessment of noise mitigation measures
NASA Astrophysics Data System (ADS)
Thomas, Pieter; Wei, Weigang; Van Renterghem, Timothy; Botteldooren, Dick
2016-09-01
The effect of noise mitigation measures is generally expressed by noise levels only, neglecting the listener's perception. In this study, an auralization methodology is proposed that enables an auditive preview of noise abatement measures for road traffic noise, based on the direction-dependent attenuation of a priori recordings made with a dedicated 32-channel spherical microphone array. This measurement-based auralization has the advantage that all non-road-traffic sounds that create the listening context are present. The potential of this auralization methodology is evaluated through the assessment of the effect of an L-shaped mound. The angular insertion loss of the mound is estimated by using the ISO 9613-2 propagation model, the Pierce barrier diffraction model and the Harmonoise point-to-point model. The realism of the auralization technique is evaluated by listening tests, indicating that listeners had great difficulty in differentiating between a posteriori recordings and auralized samples, which shows the validity of the approaches followed.
A new scenario-based approach to damage detection using operational modal parameter estimates
NASA Astrophysics Data System (ADS)
Hansen, J. B.; Brincker, R.; López-Aenlle, M.; Overgaard, C. F.; Kloborg, K.
2017-09-01
In this paper a vibration-based damage localization and quantification method, based on natural frequencies and mode shapes, is presented. The proposed technique is inspired by a damage assessment methodology based solely on the sensitivity of mass-normalized, experimentally determined mode shapes. The present method differs by being based on modal data extracted by means of Operational Modal Analysis (OMA), combined with a reasonable Finite Element (FE) representation of the test structure, and implemented in a scenario-based framework. Besides a review of the basic methodology, this paper addresses fundamental theoretical as well as practical considerations which are crucial to the applicability of a given vibration-based damage assessment configuration. Lastly, the technique is demonstrated on an experimental test case using automated OMA. Both the numerical study and the experimental test case presented in this paper are restricted to perturbations involving mass change.
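The scenario-based framework can be illustrated by matching measured modal changes against FE-predicted damage scenarios: the scenario whose predicted relative frequency shifts best fit the measurements, in a least-squares sense, is selected. The scenario names and shift values below are hypothetical, invented only to show the matching step:

```python
def best_scenario(measured_shifts, scenarios):
    """Pick the damage scenario whose predicted shifts best match measurements.

    measured_shifts: relative natural-frequency changes, one per mode
    scenarios: dict mapping scenario name -> predicted relative shifts per mode
    """
    def residual(predicted):
        return sum((m - p) ** 2 for m, p in zip(measured_shifts, predicted))
    return min(scenarios, key=lambda name: residual(scenarios[name]))
```

A practical implementation would also weight modes by their estimation uncertainty from OMA and include mode-shape information, as the paper discusses.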
Cooney, Lewis; Loke, Yoon K; Golder, Su; Kirkham, Jamie; Jorgensen, Andrea; Sinha, Ian; Hawcutt, Daniel
2017-06-02
Many medicines are dosed to achieve a particular therapeutic range, and monitored using therapeutic drug monitoring (TDM). The evidence base for a therapeutic range can be evaluated using systematic reviews, to ensure it continues to reflect current indications, doses, routes and formulations, as well as updated adverse effect data. There is no consensus on the optimal methodology for systematic reviews of therapeutic ranges. An overview of systematic reviews of therapeutic ranges was undertaken. The following databases were used: Cochrane Database of Systematic Reviews (CDSR), Database of Abstracts and Reviews of Effects (DARE) and MEDLINE. The published methodologies used when systematically reviewing the therapeutic range of a drug were analyzed. Step-by-step recommendations to optimize such systematic reviews are proposed. Ten systematic reviews that investigated the correlation between serum concentrations and clinical outcomes, encompassing a variety of medicines and indications, were assessed. There were significant variations in the methodologies used (including the search terms used, data extraction methods, assessment of bias, and statistical analyses undertaken). Therapeutic ranges should be population- and indication-specific and based on clinically relevant outcomes. Recommendations for future systematic reviews based on these findings have been developed. Evidence-based therapeutic ranges have the potential to improve TDM practice. Current systematic reviews investigating therapeutic ranges have highly variable methodologies and there is no consensus on best practice for undertaking systematic reviews in this field. These recommendations meet a need not addressed by standard protocols.
Quality of systematic reviews in pediatric oncology--a systematic review.
Lundh, Andreas; Knijnenburg, Sebastiaan L; Jørgensen, Anders W; van Dalen, Elvira C; Kremer, Leontien C M
2009-12-01
To ensure evidence-based decision making in pediatric oncology, systematic reviews are necessary. The objective of our study was to evaluate the methodological quality of all currently existing systematic reviews in pediatric oncology. We identified eligible systematic reviews through a systematic search of the literature. Data on clinical and methodological characteristics of the included systematic reviews were extracted. The methodological quality of the included systematic reviews was assessed using the overview quality assessment questionnaire, a validated 10-item quality assessment tool. We compared the methodological quality of systematic reviews published in regular journals with that of Cochrane systematic reviews. We included 117 systematic reviews: 99 systematic reviews published in regular journals and 18 Cochrane systematic reviews. The average methodological quality of systematic reviews was low for all ten items, but the quality of Cochrane systematic reviews was significantly higher than that of systematic reviews published in regular journals. On a 1-7 scale, the median overall quality score for all systematic reviews was 2 (range 1-7), with a score of 1 (range 1-7) for systematic reviews in regular journals compared to 6 (range 3-7) for Cochrane systematic reviews (p<0.001). Most systematic reviews in the field of pediatric oncology seem to have serious methodological flaws leading to a high risk of bias. While Cochrane systematic reviews were of higher methodological quality than systematic reviews in regular journals, some of them also had methodological problems. Therefore, the methodology of each individual systematic review should be scrutinized before accepting its results.
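The median-with-range summary reported in the review above is straightforward to reproduce. A minimal Python sketch, using hypothetical score lists chosen to land on the reported medians (2 for regular journals, 6 for Cochrane reviews); the values are illustrative, not the study's data:

```python
from statistics import median

# Hypothetical overview quality scores (1-7 scale) for two journal groups
regular = [1, 1, 2, 2, 2, 3, 4, 7]
cochrane = [3, 5, 6, 6, 7, 7]

def summarize(scores):
    """Return (median, min, max), the summary style used in the abstract."""
    return median(scores), min(scores), max(scores)

for name, scores in [("regular", regular), ("cochrane", cochrane)]:
    med, lo, hi = summarize(scores)
    print(f"{name}: median {med} (range {lo}-{hi})")
```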
Assessment of unconventional tight-gas resources of the Magallanes Basin Province, Chile, 2015
Schenk, Christopher J.; Charpentier, Ronald R.; Pitman, Janet K.; Tennyson, Marilyn E.; Brownfield, Michael E.; Gaswirth, Stephanie B.; Le, Phuong A.; Leathers-Miller, Heidi M.; Marra, Kristen R.
2016-01-20
Using a geology-based assessment methodology, the U.S. Geological Survey assessed a technically recoverable mean resource of 8.3 trillion cubic feet of unconventional tight gas in the Zona Glauconitica of the Magallanes Basin Province, Chile.
Assessment of undiscovered gas resources of the Thrace Basin, Turkey, 2015
Schenk, Christopher J.; Klett, Timothy R.; Tennyson, Marilyn E.; Pitman, Janet K.; Gaswirth, Stephanie B.; Le, Phuong A.; Leathers-Miller, Heidi M.; Mercier, Tracey J.; Marra, Kristen R.; Hawkins, Sarah J.; Brownfield, Michael E.
2016-01-27
Using a geology-based assessment methodology, the U.S. Geological Survey assessed undiscovered, technically recoverable mean resources of 787 billion cubic feet of conventional gas and 1,630 billion cubic feet of unconventional gas in the Thrace Basin, Turkey.
Assessment of continuous oil and gas resources in the Perth Basin Province, Australia, 2017
Schenk, Christopher J.; Tennyson, Marilyn E.; Finn, Thomas M.; Mercier, Tracey J.; Hawkins, Sarah J.; Gaswirth, Stephanie B.; Marra, Kristen R.; Klett, Timothy R.; Le, Phuong A.; Leathers-Miller, Heidi M.; Woodall, Cheryl A.
2017-07-17
Using a geology-based assessment methodology, the U.S. Geological Survey assessed undiscovered, technically recoverable mean resources of 223 million barrels of oil and 14.5 trillion cubic feet of gas in the Perth Basin Province, Australia.
Assessment of undiscovered oil and gas resources of the Lusitanian Basin Province, Portugal, 2016
Schenk, Christopher J.; Tennyson, Marilyn E.; Klett, Timothy R.; Finn, Thomas M.; Mercier, Tracey J.; Gaswirth, Stephanie B.; Marra, Kristen R.; Le, Phuong A.; Hawkins, Sarah J.; Leathers-Miller, Heidi M.; Paxton, Stanley T.
2016-11-04
Using a geology-based assessment methodology, the U.S. Geological Survey assessed mean undiscovered, technically recoverable resources of 121 million barrels of oil and 212 billion cubic feet of gas in the Lusitanian Basin Province, Portugal.
MASQOT: a method for cDNA microarray spot quality control
Bylesjö, Max; Eriksson, Daniel; Sjödin, Andreas; Sjöström, Michael; Jansson, Stefan; Antti, Henrik; Trygg, Johan
2005-01-01
Background cDNA microarray technology has emerged as a major player in the parallel detection of biomolecules, but still suffers from fundamental technical problems. Identifying and removing unreliable data is crucial to prevent the risk of receiving illusory analysis results. Visual assessment of spot quality is still a common procedure, despite the time-consuming work of manually inspecting spots in the range of hundreds of thousands or more. Results A novel methodology for cDNA microarray spot quality control is outlined. Multivariate discriminant analysis was used to assess spot quality based on existing and novel descriptors. The presented methodology displays high reproducibility and was found superior in identifying unreliable data compared to other evaluated methodologies. Conclusion The proposed methodology for cDNA microarray spot quality control generates non-discrete values of spot quality which can be utilized as weights in subsequent analysis procedures as well as to discard spots of undesired quality using the suggested threshold values. The MASQOT approach provides a consistent assessment of spot quality and can be considered an alternative to the labor-intensive manual quality assessment process. PMID:16223442
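MASQOT itself trains a multivariate discriminant over specific spot descriptors; the sketch below shows only the general idea, a naive diagonal-covariance linear discriminant over two hypothetical features (the feature names, training spots, and weighting scheme are assumptions for illustration, not the MASQOT model):

```python
def diagonal_lda_weights(good, bad):
    """Per-feature weights for a naive (diagonal-covariance) linear discriminant:
    w_j = (mean_good_j - mean_bad_j) / (var_good_j + var_bad_j)."""
    n_feat = len(good[0])
    weights = []
    for j in range(n_feat):
        g = [x[j] for x in good]
        b = [x[j] for x in bad]
        mg, mb = sum(g) / len(g), sum(b) / len(b)
        vg = sum((x - mg) ** 2 for x in g) / len(g)
        vb = sum((x - mb) ** 2 for x in b) / len(b)
        weights.append((mg - mb) / (vg + vb + 1e-12))
    return weights

def quality_score(spot, weights):
    """Continuous quality score: higher means more 'good-like'."""
    return sum(w * x for w, x in zip(weights, spot))

# Hypothetical training spots, features like (circularity, signal-to-noise)
good = [(0.9, 8.0), (0.95, 9.0), (0.85, 7.5)]
bad = [(0.4, 2.0), (0.5, 1.5), (0.3, 2.5)]
w = diagonal_lda_weights(good, bad)
print(quality_score((0.9, 8.5), w) > quality_score((0.45, 2.0), w))
```

The continuous score mirrors the paper's point that spot quality need not be a discrete keep/discard decision: it can be thresholded, or carried forward as a weight in downstream analysis.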
Quality and methodological challenges in Internet-based mental health trials.
Ye, Xibiao; Bapuji, Sunita Bayyavarapu; Winters, Shannon; Metge, Colleen; Raynard, Mellissa
2014-08-01
To review the quality of Internet-based mental health intervention studies and their methodological challenges. We searched multiple literature databases to identify relevant studies according to the Population, Interventions, Comparators, Outcomes, and Study Design framework. Two reviewers independently assessed selection bias, allocation bias, confounding bias, blinding, data collection methods, and withdrawals/dropouts, using the Quality Assessment Tool for Quantitative Studies. We rated each component as strong, moderate, or weak and assigned a global rating (strong, moderate, or weak) to each study. We discussed methodological issues related to study quality. Of 122 studies included, 31 (25%), 44 (36%), and 47 (39%) were rated strong, moderate, and weak, respectively. Only five studies were rated strong for all six quality components (three of them were published by the same group). Lack of blinding, selection bias, and low adherence were the top three challenges in Internet-based mental health intervention studies. The overall quality of Internet-based mental health intervention studies needs to improve. In particular, studies need to improve sample selection, intervention allocation, and blinding.
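The component-to-global rating step described above can be sketched as a small function. The decision rule used here (no weak components gives strong, one gives moderate, two or more gives weak) is the rule commonly applied with this assessment tool; treat it as an assumption rather than a detail confirmed by the abstract:

```python
def global_rating(component_ratings):
    """Derive a global study rating from six component ratings
    ('strong' / 'moderate' / 'weak') by counting weak components."""
    weak = sum(1 for r in component_ratings if r == "weak")
    if weak == 0:
        return "strong"
    if weak == 1:
        return "moderate"
    return "weak"

print(global_rating(["strong"] * 6))
print(global_rating(["strong", "weak"] + ["moderate"] * 4))
```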
Diffendorfer, James E.; Beston, Julie A.; Merrill, Matthew D.; Stanton, Jessica C.; Corum, Margo D.; Loss, Scott R.; Thogmartin, Wayne E.; Johnson, Douglas H.; Erickson, Richard A.; Heist, Kevin W.
2015-01-01
Components of the methodology are based on simplifying assumptions and require information that, for many species, may be sparse or unreliable. These assumptions are presented in the report and should be carefully considered when using output from the methodology. In addition, this methodology can be used to recommend species for more intensive demographic modeling or highlight those species that may not require any additional protection because effects of wind energy development on their populations are projected to be small.
NASA Astrophysics Data System (ADS)
Czechowski, Piotr Oskar; Owczarek, Tomasz; Badyda, Artur; Majewski, Grzegorz; Rogulski, Mariusz; Ogrodnik, Paweł
2018-01-01
The paper presents selected key issues from the preliminary stage of a proposed extended equivalence assessment for new portable measurement devices: the comparability of hourly PM10 concentration series with reference-station measurements, evaluated with statistical methods. The article presents the technical aspects of the new portable meters. Emphasis is placed on a methodological concept for comparing the results using stochastic and exploratory methods. The concept is based on the observation that a simple comparison of result series in the time domain is insufficient; the comparison should instead be carried out in three complementary fields of statistical modeling: time, frequency, and space. The proposal is based on models of five annual series of measurement results from the new mobile devices and from the WIOS (Provincial Environmental Protection Inspectorate) reference station located in the city of Nowy Sacz. The obtained results indicate both the completeness of the comparison methodology and the high correspondence between the new devices' measurements and the reference.
Climate change vulnerability for species-Assessing the assessments.
Wheatley, Christopher J; Beale, Colin M; Bradbury, Richard B; Pearce-Higgins, James W; Critchlow, Rob; Thomas, Chris D
2017-09-01
Climate change vulnerability assessments are commonly used to identify species at risk from global climate change, but the wide range of methodologies available makes it difficult for end users, such as conservation practitioners or policymakers, to decide which method to use as a basis for decision-making. In this study, we evaluate whether different assessments consistently assign species to the same risk categories and whether any of the existing methodologies perform well at identifying climate-threatened species. We compare the outputs of 12 climate change vulnerability assessment methodologies, using both real and simulated species, and validate the methods using historic data for British birds and butterflies (i.e. using historical data to assign risks and more recent data for validation). Our results show that the different vulnerability assessment methods are not consistent with one another; different risk categories are assigned for both the real and simulated sets of species. Validation of the different vulnerability assessments suggests that methods incorporating historic trend data into the assessment perform best at predicting distribution trends in subsequent time periods. This study demonstrates that climate change vulnerability assessments should not be used interchangeably due to the poor overall agreement between methods when considering the same species. The results of our validation provide more support for the use of trend-based rather than purely trait-based approaches, although further validation will be required as data become available. © 2017 The Authors. Global Change Biology Published by John Wiley & Sons Ltd.
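The inconsistency finding above comes down to how often different methods place the same species in the same risk category. A minimal sketch of that pairwise-agreement check, with hypothetical method names and category assignments (not the study's data):

```python
from itertools import combinations

# Hypothetical risk categories assigned to five species by three assessment methods
assignments = {
    "method_A": ["high", "high", "low", "medium", "low"],
    "method_B": ["high", "medium", "low", "low", "low"],
    "method_C": ["medium", "high", "low", "medium", "high"],
}

def pairwise_agreement(a, b):
    """Fraction of species placed in the same risk category by two methods."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

for (na, a), (nb, b) in combinations(assignments.items(), 2):
    print(na, nb, pairwise_agreement(a, b))
```

Low pairwise agreement across methods, as in the study's comparison of 12 methodologies, is exactly the signal that the assessments cannot be used interchangeably.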
ERIC Educational Resources Information Center
Shroff, Ronnie H.; Deneen, Christopher
2011-01-01
This paper assesses textual feedback to support student intrinsic motivation using a collaborative text-based dialogue system. A research model is presented based on research into intrinsic motivation, and the specific construct of feedback provides a framework for the model. A qualitative research methodology is used to validate the model.…
2011-11-01
elastic range, and with some simple forms of progressing damage. However, a general physics-based methodology to assess the initial and lifetime... damage evolution in the RVE for all possible load histories. Microstructural data on initial configuration and damage progression in CMCs were... the damaged elements will have changed, hence a progressive damage model. The crack opening for each crack type in each element is stored as a
Schenk, Christopher J.; Charpentier, Ronald R.; Klett, Timothy R.; Mercier, Tracey J.; Tennyson, Marilyn E.; Pitman, Janet K.; Brownfield, Michael E.
2014-01-01
Using a geology-based assessment methodology, the U.S. Geological Survey assessed potential technically recoverable mean resources of 53 million barrels of shale oil and 320 billion cubic feet of shale gas in the Phitsanulok Basin, onshore Thailand.
NASA Technical Reports Server (NTRS)
Kumasaka, Henry A.; Martinez, Michael M.; Weir, Donald S.
1996-01-01
This report describes the methodology for assessing the impact of component noise reduction on total airplane system noise. The methodology is intended to be applied to the results of individual study elements of the NASA-Advanced Subsonic Technology (AST) Noise Reduction Program, which will address the development of noise reduction concepts for specific components. Program progress will be assessed in terms of noise reduction achieved, relative to baseline levels representative of 1992 technology airplane/engine design and performance. In this report, the 1992 technology reference levels are defined for assessment models based on four airplane sizes - an average business jet and three commercial transports: a small twin, a medium sized twin, and a large quad. Study results indicate that component changes defined as program final goals for nacelle treatment and engine/airframe source noise reduction would achieve from 6-7 EPNdB reduction of total airplane noise at FAR 36 Stage 3 noise certification conditions for all of the airplane noise assessment models.
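Assessing how component-level reductions roll up into total airplane noise depends on the fact that decibel levels combine by energy summation, not arithmetic addition. A sketch of that standard acoustic relation (this is the general formula, not the report's specific assessment procedure):

```python
import math

def combine_levels(levels_db):
    """Energy-sum of component noise levels in dB:
    L_total = 10 * log10( sum over i of 10^(L_i / 10) )."""
    return 10.0 * math.log10(sum(10.0 ** (l / 10.0) for l in levels_db))

# Two equal 90 dB sources combine to about 93 dB, not 180 dB
print(round(combine_levels([90.0, 90.0]), 1))
# A source 10 dB below the dominant one adds well under 1 dB to the total
print(round(combine_levels([90.0, 80.0]), 2))
```

This is why reducing a single component yields diminishing returns on total system noise once another source dominates, a central consideration when allocating noise-reduction effort across engine and airframe components.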
Development of a diaphragmatic motion-based elastography framework for assessment of liver stiffness
NASA Astrophysics Data System (ADS)
Weis, Jared A.; Johnsen, Allison M.; Wile, Geoffrey E.; Yankeelov, Thomas E.; Abramson, Richard G.; Miga, Michael I.
2015-03-01
Evaluation of mechanical stiffness imaging biomarkers, through magnetic resonance elastography (MRE), has shown considerable promise for non-invasive assessment of liver stiffness to monitor hepatic fibrosis. MRE typically requires specialized externally-applied vibratory excitation and scanner-specific motion-sensitive pulse sequences. In this work, we have developed an elasticity imaging approach that utilizes natural diaphragmatic respiratory motion to induce deformation and eliminates the need for external deformation excitation hardware and specialized pulse sequences. Our approach uses clinically-available standard-of-care volumetric imaging acquisitions, combined with offline model-based post-processing, to generate volumetric estimates of stiffness within the liver and surrounding tissue structures. We have previously developed a novel methodology for non-invasive elasticity imaging which utilizes a model-based elasticity reconstruction algorithm and MR image volumes acquired under different states of deformation. In prior work, deformation was externally applied through inflation of an air bladder placed within the MR radiofrequency coil. In this work, we extend the methodology with the goal of determining the feasibility of assessing liver mechanical stiffness using diaphragmatic respiratory motion between end-inspiration and end-expiration breath-holds as a source of deformation. We present initial investigations towards applying this methodology to assess liver stiffness in healthy volunteers and cirrhotic patients. Our preliminary results suggest that this method is capable of non-invasive image-based assessment of liver stiffness using natural diaphragmatic respiratory motion and provides considerable enthusiasm for extension of our approach towards monitoring liver stiffness in cirrhotic patients with limited impact to standard-of-care clinical imaging acquisition workflow.
Assessing Perceptions of Interpersonal Behavior with a Video-Based Situational Judgment Test
ERIC Educational Resources Information Center
Golubovich, Juliya; Seybert, Jacob; Martin-Raugh, Michelle; Naemi, Bobby; Vega, Ronald P.; Roberts, Richard D.
2017-01-01
Accurate appraisal of others' behavior is critical for the production of skilled interpersonal behavior. We used an ecologically valid methodology, a video-based situational judgment test with true-false items, to assess the accuracy with which students (N = 947) perceive the interpersonal behavior of actors involved in workplace situations.…
A Bootstrap Generalization of Modified Parallel Analysis for IRT Dimensionality Assessment
ERIC Educational Resources Information Center
Finch, Holmes; Monahan, Patrick
2008-01-01
This article introduces a bootstrap generalization to the Modified Parallel Analysis (MPA) method of test dimensionality assessment using factor analysis. This methodology, based on the use of Marginal Maximum Likelihood nonlinear factor analysis, provides for the calculation of a test statistic based on a parametric bootstrap using the MPA…
Principles of Assessment for Project and Research Based Learning
ERIC Educational Resources Information Center
Hunaiti, Ziad; Grimaldi, Silvia; Goven, Dharmendra; Mootanah, Rajshree; Martin, Louise
2010-01-01
Purpose: The purpose of this paper is to provide assessment guidelines which help to implement research-based education in science and technology areas, which would benefit from the quality of this type of education within this subject area. Design/methodology/approach: This paper is a reflection on, and analysis of, different aspects of…
ERIC Educational Resources Information Center
Kim, Jin-Young
2015-01-01
This study explores and describes different viewpoints on Computer Based Assessment (CBA) by using Q methodology to identify perspectives of students and instructors and classify these into perceptional typologies. Thirty undergraduate students taking CBA courses and fifteen instructors adopting CBA into their curriculum at a university in Korea,…
Experience and the Arts: An Examination of an Arts-Based Chemistry Class
ERIC Educational Resources Information Center
Wunsch, Patricia Ann
2013-01-01
Many high school students are either intimidated or unmotivated when faced with science courses taught with a traditional teaching methodology. The focus of this study was the integration of the arts, specifically the Creative Arts Laboratory (CAL) approach, into the teaching methodology and assessment of a high school chemistry class, with…
Technological Leverage in Higher Education: An Evolving Pedagogy
ERIC Educational Resources Information Center
Pillai, K. Rajasekharan; Prakash, Ashish Viswanath
2017-01-01
Purpose: The purpose of the study is to analyse the perception of students toward a computer-based exam on a custom-made digital device and their willingness to adopt the same for high-stake summative assessment. Design/methodology/approach: This study followed an analytical methodology using survey design. A modified version of students'…
An E-Assessment Approach for Evaluation in Engineering Overcrowded Groups
ERIC Educational Resources Information Center
Mora, M. C.; Sancho-Bru, J. L.; Iserte, J. L.; Sanchez, F. T.
2012-01-01
The construction of the European Higher Education Area has been an adaptation challenge for Spanish universities. New methodologies require a more active role on the students' part and come into conflict with the previous educational model characterised by a high student/professor ratio, a lecture-based teaching methodology and a summative…
A Protean Practice? Perspectives on the Practice of Action Learning
ERIC Educational Resources Information Center
Brook, Cheryl; Pedler, Mike; Burgoyne, John G
2013-01-01
Purpose: The purpose of the paper is to assess the extent to which these practitioners' perspectives and practices match Willis's conception of a Revans "gold standard" of action learning. Design/methodology/approach: This study adopts a qualitative design and methodology based on interviews and the collection of cases or accounts of…
Neuroethics and animals: methods and philosophy.
Takala, Tuija; Häyry, Matti
2014-04-01
This article provides an overview of the six other contributions in the Neuroethics and Animals special section. In addition, it discusses the methodological and theoretical problems of interdisciplinary fields. The article suggests that interdisciplinary approaches without established methodological and theoretical bases are difficult to assess scientifically. This might cause these fields to expand without actually advancing.
ERIC Educational Resources Information Center
Bloch, Barbara; Thomson, Peter
Between July and November 1993, a cross-section of Australia's school- and workplace-based vocational education and training programs was studied to identify programs using innovative assessment strategies and materials. As innovative strategies/materials were identified, the study methodology was revised and a case study approach was adopted. The…
Cömert, Musa; Zill, Jördis Maria; Christalle, Eva; Dirmaier, Jörg; Härter, Martin; Scholl, Isabelle
2016-01-01
Background Teaching and assessment of communication skills have become essential in medical education. The Objective Structured Clinical Examination (OSCE) has been found as an appropriate means to assess communication skills within medical education. Studies have demonstrated the importance of a valid assessment of medical students’ communication skills. Yet, the validity of the performance scores depends fundamentally on the quality of the rating scales used in an OSCE. Thus, this systematic review aimed at providing an overview of existing rating scales, describing their underlying definition of communication skills, determining the methodological quality of psychometric studies and the quality of psychometric properties of the identified rating scales. Methods We conducted a systematic review to identify psychometrically tested rating scales, which have been applied in OSCE settings to assess communication skills of medical students. Our search strategy comprised three databases (EMBASE, PsycINFO, and PubMed), reference tracking and consultation of experts. We included studies that reported psychometric properties of communication skills assessment rating scales used in OSCEs by examiners only. The methodological quality of included studies was assessed using the COnsensus based Standards for the selection of health status Measurement INstruments (COSMIN) checklist. The quality of psychometric properties was evaluated using the quality criteria of Terwee and colleagues. Results Data of twelve studies reporting on eight rating scales on communication skills assessment in OSCEs were included. Five of eight rating scales were explicitly developed based on a specific definition of communication skills. The methodological quality of studies was mainly poor. The psychometric quality of the eight rating scales was mainly intermediate. 
Discussion Our results reveal that future psychometric evaluation studies focusing on improving the methodological quality are needed in order to yield psychometrically sound results of the OSCEs assessing communication skills. This is especially important given that most OSCE rating scales are used for summative assessment, and thus have an impact on medical students’ academic success. PMID:27031506
The methodological quality of systematic reviews of animal studies in dentistry.
Faggion, C M; Listl, S; Giannakopoulos, N N
2012-05-01
Systematic reviews and meta-analyses of animal studies are important for improving estimates of the effects of treatment and for guiding future clinical studies on humans. The purpose of this systematic review was to assess the methodological quality of systematic reviews and meta-analyses of animal studies in dentistry through using a validated checklist. A literature search was conducted independently and in duplicate in the PubMed and LILACS databases. References in selected systematic reviews were assessed to identify other studies not captured by the electronic searches. The methodological quality of studies was assessed independently and in duplicate by using the AMSTAR checklist; the quality was scored as low, moderate, or high. The reviewers were calibrated before the assessment and agreement between them was assessed using Cohen's Kappa statistic. Of 444 studies retrieved, 54 systematic reviews were selected after full-text assessment. Agreement between the reviewers was regarded as excellent. Only two studies were scored as high quality; 17 and 35 studies were scored as medium and low quality, respectively. There is room for improvement in the methodological quality of systematic reviews of animal studies in dentistry. Checklists, such as AMSTAR, can guide researchers in planning and executing systematic reviews and meta-analyses. For determining the need for additional investigations in animals, and in order to provide good data for potential application in humans, such reviews should be based on animal experiments performed according to sound methodological principles. Copyright © 2011 Elsevier Ltd. All rights reserved.
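The inter-reviewer agreement statistic named above, Cohen's kappa, corrects observed agreement for the agreement expected by chance. A minimal implementation with hypothetical reviewer ratings (the rating lists are illustrative, not the study's data):

```python
def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' categorical judgements:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is chance agreement from the raters' marginal frequencies."""
    n = len(r1)
    p_o = sum(a == b for a, b in zip(r1, r2)) / n
    cats = set(r1) | set(r2)
    p_e = sum((r1.count(c) / n) * (r2.count(c) / n) for c in cats)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical quality ratings from two calibrated reviewers
rev1 = ["low", "low", "medium", "high", "low", "medium"]
rev2 = ["low", "low", "medium", "medium", "low", "medium"]
print(round(cohens_kappa(rev1, rev2), 2))
```

Kappa of 1.0 indicates perfect agreement; values above roughly 0.8 are conventionally read as "excellent", the level the review reports for its calibrated reviewers.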
Applications of Landsat data and the data base approach
Lauer, D.T.
1986-01-01
A generalized methodology for applying digital Landsat data to resource inventory and assessment tasks is currently being used by several bureaux and agencies within the US Department of the Interior. The methodology includes definition of project objectives and output, identification of source materials, construction of the digital data base, performance of computer-assisted analyses, and generation of output. The USGS, Bureau of Land Management, US Fish and Wildlife Service, Bureau of Indian Affairs, Bureau of Reclamation, and National Park Service have used this generalized methodology to assemble comprehensive digital data bases for resource management. Advanced information processing techniques have been applied to these data bases for making regional environmental surveys on millions of acres of public lands at costs ranging from $0.01 to $0.08 an acre.
A methodology for physically based rockfall hazard assessment
NASA Astrophysics Data System (ADS)
Crosta, G. B.; Agliardi, F.
Rockfall hazard assessment is not simple to achieve in practice, and sound, physically based assessment methodologies are still missing. The mobility of rockfalls implies a more difficult hazard definition with respect to other slope instabilities with minimal runout. Rockfall hazard assessment involves complex definitions for "occurrence probability" and "intensity". This paper is an attempt to evaluate rockfall hazard using the results of 3-D numerical modelling on a topography described by a DEM. Maps portraying the maximum frequency of passages, velocity and height of blocks at each model cell are easily combined in a GIS in order to produce physically based rockfall hazard maps. Different methods are suggested and discussed for rockfall hazard mapping at a regional and local scale, both along linear features and within exposed areas. An objective approach based on three-dimensional matrixes providing both a positional "Rockfall Hazard Index" and a "Rockfall Hazard Vector" is presented. The opportunity of combining different parameters in the 3-D matrixes has been evaluated to better express the relative increase in hazard. Furthermore, the sensitivity of the hazard index with respect to the included variables and their combinations is preliminarily discussed in order to constrain assessment criteria that are as objective as possible.
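The GIS combination step described above, turning per-cell rasters of passage frequency, velocity, and block height into a positional hazard index, can be sketched in a few lines. The tiny grids, class thresholds, and simple additive index below are illustrative assumptions, not the paper's calibrated matrixes:

```python
# Hypothetical per-cell rasters from a 3-D rockfall runout model:
# count of block passages, max velocity (m/s), max bounce height (m)
frequency = [[0, 2], [5, 12]]
velocity = [[0.0, 4.0], [9.0, 22.0]]
height = [[0.0, 0.5], [1.5, 4.0]]

def classify(value, thresholds):
    """Map a continuous value onto an ordinal class 0..len(thresholds)."""
    return sum(value >= t for t in thresholds)

def hazard_index(i, j):
    """Sum of the three ordinal classes as a simple positional hazard index;
    thresholds are illustrative, not calibrated."""
    return (classify(frequency[i][j], [1, 10])
            + classify(velocity[i][j], [5.0, 15.0])
            + classify(height[i][j], [1.0, 3.0]))

grid = [[hazard_index(i, j) for j in range(2)] for i in range(2)]
print(grid)
```

In a real workflow each raster would be a full DEM-sized array and the class combination would follow the paper's 3-D matrix scheme, but the per-cell classify-then-combine pattern is the same.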
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-08
... and Translation Webinar on the Assessment of Data Quality in Animal Studies; Notice of Public Webinar...- based meeting on the assessment of data quality in animal studies. The Office of Health Assessment and... meetings with a focus on methodological issues related to OHAT implementing systematic review. The first...
NASA Astrophysics Data System (ADS)
Tangen, Steven Anthony
Due to the complexities of modern military operations and the technologies employed on today's military systems, acquisition costs and development times are becoming increasingly large. Meanwhile, the transformation of the global security environment is driving the U.S. military's own transformation. In order to meet the required capabilities of the next generation without buying prohibitively costly new systems, it is necessary for the military to evolve across the spectrum of doctrine, organization, training, materiel, leadership and education, personnel, and facilities (DOTMLPF). However, the methods for analyzing DOTMLPF approaches within the early acquisition phase of a capability-based assessment (CBA) are not as well established as the traditional technology design techniques. This makes it difficult for decision makers to decide if investments should be made in materiel or non-materiel solutions. This research develops an agent-based constructive simulation to quantitatively assess doctrine alongside materiel approaches. Additionally, life-cycle cost techniques are provided to enable a cost-effectiveness trade. These techniques are wrapped together in a decision-making environment that brings crucial information forward so informed and appropriate acquisition choices can be made. The methodology is tested on a future unmanned aerial vehicle design problem. Through the implementation of this quantitative methodology on the proof-of-concept study, it is shown that doctrinal changes including fleet composition, asset allocation, and patrol pattern were capable of dramatic improvements in system effectiveness at a much lower cost than the incorporation of candidate technologies. Additionally, this methodology was able to quantify the precise nature of strong doctrine-doctrine and doctrine-technology interactions which have been observed only qualitatively throughout military history. 
This dissertation outlines the methodology and demonstrates how potential approaches to capability gaps can be identified with respect to effectiveness, cost, and time. When implemented, this methodology offers the opportunity to achieve system capabilities in a new way, improve the design of acquisition programs, and field the right combination of ways and means to address future challenges to national security.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Domínguez-Gómez, J. Andrés, E-mail: andres@uhu.es
In the last twenty years, both the increase in academic production and the expansion of professional involvement in Environmental Impact Assessment (EIA) and Social Impact Assessment (SIA) have evidenced growing scientific and business interest in risk and impact analysis. However, this growth has not brought with it parallel progress in addressing the main shortcomings of EIA/SIA, i.e. insufficient integration of environmental and social factors into development project analyses and, in cases where the social aspects are considered, technical-methodological failings in their analysis and assessment. It is clear that these weaknesses carry with them substantial threats to the sustainability (social, environmental and economic) of projects which impact on the environment, and consequently to the local contexts where they are carried out and to the delicate balance of the global ecosystem. This paper argues that, in a sociological context of complexity and dynamism, four conceptual elements should underpin approaches to socio-environmental risk and impact assessment in development projects: a theoretical base in actor–network theory; an ethical grounding in values which are internationally recognized (though not always fulfilled in practice); a (new) epistemological-scientific base; and a methodological foundation in social participation. - Highlights: • A theoretical foundation in actor–network theory • An ethical grounding in values which are internationally recognized, but rarely carried through into practice • A (new) epistemological-scientific base • A methodological foundation in social participation.
Assessment of tight-gas resources in Canyon sandstones of the Val Verde Basin, Texas, 2016
Schenk, Christopher J.; Tennyson, Marilyn E.; Klett, Timothy R.; Mercier, Tracey J.; Brownfield, Michael E.; Gaswirth, Stephanie B.; Hawkins, Sarah J.; Leathers-Miller, Heidi M.; Marra, Kristen R.; Finn, Thomas M.; Pitman, Janet K.
2016-07-08
Using a geology-based assessment methodology, the U.S. Geological Survey assessed mean resources of 5 trillion cubic feet of gas and 187 million barrels of natural gas liquids in tight-gas assessment units in the Canyon sandstones of the Val Verde Basin, Texas.
Li, Honghe; Ding, Ning; Zhang, Yuanyuan; Liu, Yang; Wen, Deliang
2017-01-01
Over the last three decades, various instruments were developed and employed to assess medical professionalism, but their measurement properties have yet to be fully evaluated. This study aimed to systematically evaluate these instruments' measurement properties and the methodological quality of their related studies within a universally acceptable standardized framework and then provide corresponding recommendations. A systematic search of the electronic databases PubMed, Web of Science, and PsycINFO was conducted to collect studies published from 1990-2015. After screening titles, abstracts, and full texts for eligibility, the articles included in this study were classified according to their respective instrument's usage. A two-phase assessment was conducted: 1) methodological quality was assessed by following the COnsensus-based Standards for the selection of health status Measurement INstruments (COSMIN) checklist; and 2) the quality of measurement properties was assessed according to Terwee's criteria. Results were integrated using best-evidence synthesis to look for recommendable instruments. After screening 2,959 records, 74 instruments from 80 existing studies were included. The overall methodological quality of these studies was unsatisfactory, with reasons including but not limited to unknown missing data, inadequate sample sizes, and vague hypotheses. Content validity, cross-cultural validity, and criterion validity were either unreported or received negative ratings in most studies. Based on best-evidence synthesis, three instruments were recommended: Hisar's instrument for nursing students, Nurse Practitioners' Roles and Competencies Scale, and Perceived Faculty Competency Inventory. Although instruments measuring medical professionalism are diverse, only a limited number of studies were methodologically sound. Future studies should give priority to systematically improving the performance of existing instruments and to longitudinal studies.
How effective is drug testing as a workplace safety strategy? A systematic review of the evidence.
Pidd, Ken; Roche, Ann M
2014-10-01
The growing prevalence of workplace drug testing and the narrow scope of previous reviews of the evidence base necessitate a comprehensive review of research concerning the efficacy of drug testing as a workplace strategy. A systematic qualitative review of relevant research published between January 1990 and January 2013 was undertaken. Inclusion criteria were studies that evaluated the effectiveness of drug testing in deterring employee drug use or reducing workplace accident or injury rates. Methodological adequacy was assessed using a published assessment tool specifically designed to assess the quality of intervention studies. A total of 23 studies were reviewed and assessed, six of which reported on the effectiveness of testing in reducing employee drug use and 17 of which reported on occupational accident or injury rates. No studies involved randomised controlled trials. Only one study was assessed as demonstrating strong methodological rigour. That study found random alcohol testing reduced fatal accidents in the transport industry. The majority of studies reviewed contained methodological weaknesses, including inappropriate study design, limited sample representativeness, the use of ecological data to evaluate individual behaviour change, and failure to adequately control for potentially confounding variables. This latter finding is consistent with previous reviews and indicates that the evidence base for the effectiveness of testing in improving workplace safety is at best tenuous. Better dissemination of the current evidence in relation to workplace drug testing is required to support evidence-informed policy and practice. There is also a pressing need for more methodologically rigorous research to evaluate the efficacy and utility of drug testing. Copyright © 2014 Elsevier Ltd. All rights reserved.
Solar energy program evaluation: an introduction
DOE Office of Scientific and Technical Information (OSTI.GOV)
deLeon, P.
The Program Evaluation Methodology provides an overview of the practice and methodology of program evaluation and defines more precisely the evaluation techniques and methodologies that would be most appropriate to government organizations which are actively involved in the research, development, and commercialization of solar energy systems. Formal evaluation cannot be treated as a single methodological approach for assessing a program. There are four basic types of evaluation designs: the pre-experimental design; the quasi-experimental design based on time series; the quasi-experimental design based on comparison groups; and the true experimental design. This report is organized to first introduce the role and issues of evaluation. This is to provide a set of issues to organize the subsequent sections detailing the national solar energy programs. Then, these two themes are integrated by examining the evaluation strategies and methodologies tailored to fit the particular needs of the various individual solar energy programs. (MCW)
Review and evaluation of innovative technologies for measuring diet in nutritional epidemiology.
Illner, A-K; Freisling, H; Boeing, H; Huybrechts, I; Crispim, S P; Slimani, N
2012-08-01
The use of innovative technologies is deemed to improve dietary assessment in various research settings. However, their relative merits in nutritional epidemiological studies, which require accurate quantitative estimates of the usual intake at individual level, still need to be evaluated. To report on the inventory of available innovative technologies for dietary assessment and to critically evaluate their strengths and weaknesses as compared with the conventional methodologies (i.e. Food Frequency Questionnaires, food records, 24-hour dietary recalls) used in epidemiological studies. A list of currently available technologies was identified from English-language journals, using PubMed and Web of Science. The search criteria were principally based on the date of publication (between 1995 and 2011) and pre-defined search keywords. Six main groups of innovative technologies were identified ('Personal Digital Assistant-', 'Mobile-phone-', 'Interactive computer-', 'Web-', 'Camera- and tape-recorder-' and 'Scan- and sensor-based' technologies). Compared with the conventional food records, Personal Digital Assistant and mobile phone devices seem to improve the recording through the possibility for 'real-time' recording at eating events, but their validity to estimate individual dietary intakes was low to moderate. In 24-hour dietary recalls, there is still limited knowledge regarding the accuracy of fully automated approaches; and methodological problems, such as the inaccuracy in self-reported portion sizes might be more critical than in interview-based applications. In contrast, measurement errors in innovative web-based and in conventional paper-based Food Frequency Questionnaires are most likely similar, suggesting that the underlying methodology is unchanged by the technology. Most of the new technologies in dietary assessment were seen to have overlapping methodological features with the conventional methods predominantly used for nutritional epidemiology. 
Their main potential to enhance dietary assessment is through more cost- and time-effective, less laborious ways of data collection and higher subject acceptance, though their integration in epidemiological studies would need additional considerations, such as the study objectives, the target population and the financial resources available. However, even in innovative technologies, the inherent individual bias related to self-reported dietary intake will not be resolved. More research is therefore crucial to investigate the validity of innovative dietary assessment technologies.
Geologic framework for the national assessment of carbon dioxide storage resources
Warwick, Peter D.; Corum, Margo D.
2012-01-01
The 2007 Energy Independence and Security Act (Public Law 110–140) directs the U.S. Geological Survey (USGS) to conduct a national assessment of potential geologic storage resources for carbon dioxide (CO2) and to consult with other Federal and State agencies to locate the pertinent geological data needed for the assessment. The geologic sequestration of CO2 is one possible way to mitigate its effects on climate change. The methodology used for the national CO2 assessment (Open-File Report 2010-1127; http://pubs.usgs.gov/of/2010/1127/) is based on previous USGS probabilistic oil and gas assessment methodologies. The methodology is non-economic and intended to be used at regional to subbasinal scales. The operational unit of the assessment is a storage assessment unit (SAU), composed of a porous storage formation with fluid flow and an overlying sealing unit with low permeability. Assessments are conducted at the SAU level and are aggregated to basinal and regional results. This report identifies and contains geologic descriptions of SAUs in separate packages of sedimentary rocks within the assessed basin and focuses on the particular characteristics, specified in the methodology, that influence the potential CO2 storage resource in those SAUs. Specific descriptions of the SAU boundaries as well as their sealing and reservoir units are included. Properties for each SAU such as depth to top, gross thickness, net porous thickness, porosity, permeability, groundwater quality, and structural reservoir traps are provided to illustrate geologic factors critical to the assessment. Although assessment results are not contained in this report, the geologic information included here will be employed, as specified in the methodology, to calculate a statistical Monte Carlo-based distribution of potential storage space in the various SAUs. Figures in this report show SAU boundaries and cell maps of well penetrations through the sealing unit into the top of the storage formation. 
Wells sharing the same well borehole are treated as a single penetration. Cell maps show the number of penetrating wells within one square mile and are derived from interpretations of incompletely attributed well data, a digital compilation that is known not to include all drilling. The USGS does not expect to know the location of all wells and cannot guarantee the amount of drilling through specific formations in any given cell shown on cell maps.
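The Monte Carlo calculation of potential storage space described above can be sketched volumetrically. The sketch below is a minimal illustration, not the USGS methodology itself: the triangular distributions, parameter ranges, and CO2 density are invented placeholders.

```python
import random

def simulate_storage(n_trials=100_000, seed=42):
    """Volumetric Monte Carlo sketch: sample geologic properties of a storage
    assessment unit (SAU) from triangular distributions (arguments are
    low, high, mode; all values illustrative) and return percentiles of the
    simulated CO2 storage mass in kilograms."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_trials):
        area_m2 = rng.triangular(1e9, 5e9, 3e9)          # SAU areal extent
        net_thickness_m = rng.triangular(10, 200, 60)    # net porous thickness
        porosity = rng.triangular(0.05, 0.30, 0.15)
        efficiency = rng.triangular(0.01, 0.15, 0.05)    # storage efficiency factor
        rho_co2 = 700.0                                  # kg/m3, supercritical CO2
        results.append(area_m2 * net_thickness_m * porosity * efficiency * rho_co2)
    results.sort()
    # report P5 / P50 / P95 of the simulated storage distribution
    return {p: results[int(len(results) * p / 100)] for p in (5, 50, 95)}
```

Reporting a P5/P50/P95 spread, rather than a single number, is what makes the assessment probabilistic.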
Calhoun, Aaron W; Rider, Elizabeth A; Peterson, Eleanor; Meyer, Elaine C
2010-09-01
Multi-rater assessment with gap analysis is a powerful method for assessing communication skills and self-insight, and enhancing self-reflection. We demonstrate the use of this methodology. The Program for the Approach to Complex Encounters (PACE) is an interdisciplinary simulation-based communication skills program. Encounters are assessed using an expanded Kalamazoo Consensus Statement Essential Elements Checklist adapted for multi-rater feedback and gap analysis. Data from a representative conversation were analyzed. Likert and forced-choice data with gap analysis are used to assess performance. Participants were strong in Demonstrating Empathy and Providing Closure, and needed to improve Relationship Building, Gathering Information, and understanding the Patient's/Family's Perspective. Participants under-appraised their abilities in Relationship Building, Providing Closure, and Demonstrating Empathy, as well as their overall performance. The conversion of these results into verbal feedback is discussed. We describe an evaluation methodology using multi-rater assessment with gap analysis to assess communication skills and self-insight. This methodology enables faculty to identify undervalued skills and perceptual blind spots, provide comprehensive, data-driven feedback, and encourage reflection. Implementation of graphical feedback forms coupled with one-on-one discussion using the above methodology has the potential to enhance trainee self-awareness and reflection, improving the impact of educational programs. Copyright (c) 2010 Elsevier Ireland Ltd. All rights reserved.
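The gap-analysis step can be sketched as a self-minus-observer difference per competency. This is a minimal illustration assuming 1-5 Likert ratings; the competency names below echo the checklist items, but the scores are invented:

```python
def gap_analysis(self_ratings, observer_ratings):
    """Compute self-observer gaps per communication competency.
    A negative gap means under-appraisal (self rates below observers);
    a positive gap means over-appraisal. Ratings are on a 1-5 Likert scale."""
    gaps = {}
    for skill, self_score in self_ratings.items():
        obs = observer_ratings[skill]
        mean_obs = sum(obs) / len(obs)          # average across raters
        gaps[skill] = round(self_score - mean_obs, 2)
    return gaps

# illustrative data: one trainee's self-ratings vs. three observers' ratings
self_r = {"Relationship Building": 2, "Demonstrating Empathy": 3}
obs_r = {"Relationship Building": [4, 3, 4], "Demonstrating Empathy": [5, 4, 5]}
# gaps below zero flag undervalued skills and perceptual blind spots
print(gap_analysis(self_r, obs_r))
```

Plotting these gaps per checklist item is one way to build the graphical feedback forms the abstract mentions.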
How Can You Support RIDM/CRM/RM Through the Use of PRA
NASA Technical Reports Server (NTRS)
DoVemto. Tpmu
2011-01-01
Probabilistic Risk Assessment (PRA) is one of the key Risk Informed Decision Making (RIDM) tools. It is a scenario-based methodology aimed at identifying and assessing Safety and Technical Performance risks in complex technological systems.
Risk analysis within environmental impact assessment of proposed construction activity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zeleňáková, Martina; Zvijáková, Lenka
Environmental impact assessment is an important process, prior to approval of the investment plan, providing a detailed examination of the likely and foreseeable impacts of proposed construction activity on the environment. The objective of this paper is to develop a specific methodology for the analysis and evaluation of environmental impacts of selected constructions – flood protection structures using risk analysis methods. The application of methodology designed for the process of environmental impact assessment will develop assumptions for further improvements or more effective implementation and performance of this process. The main objective of the paper is to improve the implementation of the environmental impact assessment process. Through the use of risk analysis methods in environmental impact assessment process, the set objective has been achieved. - Highlights: This paper is informed by an effort to develop research with the aim of: • Improving existing qualitative and quantitative methods for assessing the impacts • A better understanding of relations between probabilities and consequences • Methodology for the EIA of flood protection constructions based on risk analysis • Creative approaches in the search for environmentally friendly proposed activities.
Cramer, Robert J.; Johnson, Shara M.; McLaughlin, Jennifer; Rausch, Emilie M.; Conroy, Mary Alice
2014-01-01
Clinical and counseling psychology programs currently lack adequate evidence-based competency goals and training in suicide risk assessment. To begin to address this problem, this article proposes core competencies and an integrated training framework that can form the basis for training and research in this area. First, we evaluate the extent to which current training is effective in preparing trainees for suicide risk assessment. Within this discussion, sample and methodological issues are reviewed. Second, as an extension of these methodological training issues, we integrate empirically- and expert-derived suicide risk assessment competencies from several sources with the goal of streamlining core competencies for training purposes. Finally, a framework for suicide risk assessment training is outlined. The approach employs Objective Structured Clinical Examination (OSCE) methodology, an approach commonly utilized in medical competency training. The training modality also proposes the Suicide Competency Assessment Form (SCAF), a training tool evaluating self- and observer-ratings of trainee core competencies. The training framework and SCAF are ripe for empirical evaluation and potential training implementation. PMID:24672588
A Framework for Preliminary Design of Aircraft Structures Based on Process Information. Part 1
NASA Technical Reports Server (NTRS)
Rais-Rohani, Masoud
1998-01-01
This report discusses the general framework and development of a computational tool for preliminary design of aircraft structures based on process information. The described methodology is suitable for multidisciplinary design optimization (MDO) activities associated with integrated product and process development (IPPD). The framework consists of three parts: (1) product and process definitions; (2) engineering synthesis; and (3) optimization. The product and process definitions are part of input information provided by the design team. The backbone of the system is its ability to analyze a given structural design for performance as well as manufacturability and cost assessment. The system uses a database on material systems and manufacturing processes. Based on the identified set of design variables and an objective function, the system is capable of performing optimization subject to manufacturability, cost, and performance constraints. The accuracy of the manufacturability measures and cost models discussed here depends largely on the available data on specific methods of manufacture and assembly and associated labor requirements. As such, our focus in this research has been on the methodology itself and not so much on its accurate implementation in an industrial setting. A three-tier approach is presented for an IPPD-MDO based design of aircraft structures. The variable-complexity cost estimation methodology and an approach for integrating manufacturing cost assessment into the design process are also discussed. This report is presented in two parts. In the first part, the design methodology is presented, and the computational design tool is described. In the second part, a prototype model of the preliminary design Tool for Aircraft Structures based on Process Information (TASPI) is described. Part two also contains an example problem that applies the methodology described here for evaluation of six different design concepts for a wing spar.
A methodology to assess the economic impact of power storage technologies.
El-Ghandour, Laila; Johnson, Timothy C
2017-08-13
We present a methodology for assessing the economic impact of power storage technologies. The methodology is founded on classical approaches to the optimal stopping of stochastic processes but involves an innovation that circumvents the need to, ex ante, identify the form of a driving process and works directly on observed data, avoiding model risks. Power storage is regarded as a complement to the intermittent output of renewable energy generators and is therefore important in contributing to the reduction of carbon-intensive power generation. Our aim is to present a methodology suitable for use by policy makers that is simple to maintain, adaptable to different technologies and easy to interpret. The methodology has benefits over current techniques and is able to value, by identifying a viable optimal operational strategy, a conceived storage facility based on compressed air technology operating in the UK. This article is part of the themed issue 'Energy management: flexibility, risk and optimization'. © 2017 The Author(s).
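The idea of valuing a storage facility directly from observed data, by finding a viable operational strategy, can be illustrated with a much simpler stand-in than the authors' optimal-stopping formulation: search over (buy, sell) price thresholds on a historical price series for a one-unit store. Everything below, including the price series, is an invented toy example:

```python
def value_storage(prices, thresholds):
    """Evaluate a (buy, sell) threshold policy on an observed price series:
    charge one unit when the price falls to `buy` or below, discharge when it
    reaches `sell` or above. Returns the realised profit. A simplified
    stand-in for model-free valuation on observed data."""
    buy, sell = thresholds
    charged, profit = False, 0.0
    for p in prices:
        if not charged and p <= buy:
            charged, profit = True, profit - p    # pay to charge
        elif charged and p >= sell:
            charged, profit = False, profit + p   # earn by discharging
    return profit

def best_policy(prices):
    """Grid-search the threshold pair with the highest realised profit,
    taking candidate thresholds from the observed prices themselves."""
    grid = sorted(set(prices))
    return max(((value_storage(prices, (b, s)), (b, s))
                for b in grid for s in grid if s > b))

# toy half-hourly (or daily) price observations, in arbitrary currency units
prices = [30, 20, 45, 25, 60, 35, 70]
profit, policy = best_policy(prices)
```

The best threshold pair plays the role of the "viable optimal operational strategy"; the profit it realises on the data is the corresponding value estimate.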
Developing a Competency-Based Assessment Approach for Student Learning
ERIC Educational Resources Information Center
Dunning, Pamela T.
2014-01-01
Higher education accrediting bodies are increasing the emphasis on assessing student learning outcomes as opposed to teaching methodology. The purpose of this article is to describe the process used by Troy University's Master of Public Administration program to change their assessment approach from a course learning objective perspective to a…
Assessment of continuous oil and gas resources in the Neuquén Basin Province, Argentina, 2016
Schenk, Christopher J.; Klett, Timothy R.; Tennyson, Marilyn E.; Mercier, Tracey J.; Pitman, Janet K.; Gaswirth, Stephanie B.; Finn, Thomas M.; Brownfield, Michael E.; Le, Phuong A.; Leathers-Miller, Heidi M.; Marra, Kristen R.
2017-05-23
Using a geology-based assessment methodology, the U.S. Geological Survey assessed undiscovered, technically recoverable mean continuous resources of 14.4 billion barrels of oil and 38 trillion cubic feet of gas in the Neuquén Basin Province, Argentina.
BASE (Basin-Scale Assessments for Sustainable Ecosystems) is a research program developed by the Ecosystems Research Division of the National Exposure Research Laboratory to explore and formulate approaches for assessing the sustainability of ecological resources within watershed...
Toward a Linguistically Realistic Assessment of Language Vitality: The Case of Jejueo
ERIC Educational Resources Information Center
Yang, Changyong; O'Grady, William; Yang, Sejung
2017-01-01
The assessment of language endangerment requires accurate estimates of speaker populations, including information about the proficiency of different groups within those populations. Typically, this information is based on self-assessments, a methodology whose reliability is open to question. We outline an approach that seeks to improve the…
Assessing Quality of Critical Thought in Online Discussion
ERIC Educational Resources Information Center
Weltzer-Ward, Lisa; Baltes, Beate; Lynn, Laura Knight
2009-01-01
Purpose: The purpose of this paper is to describe a theoretically based coding framework for an integrated analysis and assessment of critical thinking in online discussion. Design/methodology/approach: The critical thinking assessment framework (TAF) is developed through review of theory and previous research, verified by comparing results to…
This report summarizes the methodologies and findings of three regional assessments and considers the role of decision support in assisting adaptation to climate change. Background. In conjunction with the US Global Change Research Program’s (USGCRP’s) National Assessment of ...
NASA Astrophysics Data System (ADS)
Dolly, Steven R.; Anastasio, Mark A.; Yu, Lifeng; Li, Hua
2017-03-01
In current radiation therapy practice, image quality is still assessed subjectively or by utilizing physically-based metrics. Recently, a methodology for objective task-based image quality (IQ) assessment in radiation therapy was proposed by Barrett et al. In this work, we present a comprehensive implementation and evaluation of this new IQ assessment methodology. A modular simulation framework was designed to perform an automated, computer-simulated end-to-end radiation therapy treatment. A fully simulated framework was created that utilizes new learning-based stochastic object models (SOM) to obtain known organ boundaries, generates a set of images directly from the numerical phantoms created with the SOM, and automates the image segmentation and treatment planning steps of a radiation therapy workflow. By use of this computational framework, therapeutic operating characteristic (TOC) curves can be computed and the area under the TOC curve (AUTOC) can be employed as a figure-of-merit to guide optimization of different components of the treatment planning process. The developed computational framework is employed to optimize X-ray CT pre-treatment imaging. We demonstrate that use of the radiation therapy-based IQ measures leads to different imaging parameters than those obtained by use of physically-based measures.
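Once the TOC curve has been traced out, the AUTOC figure-of-merit reduces to a numerical integration. The sketch below assumes one common axis convention (tumor control probability against normal-tissue complication probability, swept by prescribed dose); the cited work's exact convention is not specified here:

```python
import numpy as np

def autoc(ntcp, tcp):
    """Area under the therapeutic operating characteristic (TOC) curve via
    the trapezoidal rule. `ntcp` (x-axis, normal-tissue complication
    probability) and `tcp` (y-axis, tumor control probability) are point
    pairs obtained by sweeping the prescribed dose."""
    x = np.asarray(ntcp, dtype=float)
    y = np.asarray(tcp, dtype=float)
    order = np.argsort(x)            # ensure x is monotone for integration
    x, y = x[order], y[order]
    return float(np.sum(np.diff(x) * (y[1:] + y[:-1]) / 2.0))

# an ideal treatment (full control at zero complication) gives AUTOC = 1
print(autoc([0.0, 0.0, 1.0], [0.0, 1.0, 1.0]))  # → 1.0
```

Optimizing an imaging parameter then amounts to choosing the setting whose simulated pipeline yields the largest AUTOC.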
ERIC Educational Resources Information Center
Kaiser, Gabriele; Busse, Andreas; Hoth, Jessica; König, Johannes; Blömeke, Sigrid
2015-01-01
Research on the evaluation of the professional knowledge of mathematics teachers (comprising for example mathematical content knowledge, mathematics pedagogical content knowledge and general pedagogical knowledge) has become prominent in the last decade; however, the development of video-based assessment approaches is a more recent topic. This…
Taking Control: Stealth Assessment of Deterministic Behaviors within a Game-Based System
ERIC Educational Resources Information Center
Snow, Erica L.; Likens, Aaron D.; Allen, Laura K.; McNamara, Danielle S.
2016-01-01
Game-based environments frequently afford students the opportunity to exert agency over their learning paths by making various choices within the environment. The combination of log data from these systems and dynamic methodologies may serve as a stealth means to assess how students behave (i.e., deterministic or random) within these learning…
Knowledge Systems and Value Chain Integration: The Case of Linseed Production in Ethiopia
ERIC Educational Resources Information Center
Chagwiza, Clarietta; Muradian, Roldan; Ruben, Ruerd
2017-01-01
Purpose: This study uses data from a sample of 150 oilseed farming households from Arsi Robe, Ethiopia, to assess the impact of different knowledge bases (education, training and experience) and their interactions on linseed productivity. Methodology: A multiple regression analysis was employed to assess the combined effect of the knowledge bases,…
ERIC Educational Resources Information Center
Baltes, Kenneth G.; Hendrix, Vernon L.
Two recent developments in management information system technology and higher education administration have brought about the need for this study, designed to develop a methodology for revealing a relational model of the data base that administrators are operating from currently or would like to be able to operate from in the future.…
ERIC Educational Resources Information Center
Cano, M.-D.
2011-01-01
The creation of the new European Higher Education Area (EHEA), with the corresponding changes in the structure and content of university degrees, offers a great opportunity to review learning methodologies. This paper investigates the effect on students of moving from a traditional learning process, based on lectures and laboratory work, to an…
ERIC Educational Resources Information Center
McManus, Richard; Haddock-Fraser, Janet; Rands, Peter
2017-01-01
The need to understand how prospective students decide which higher education institution to attend is becoming of paramount importance as the policy context for higher education moves towards market-based systems in many countries. This paper provides a novel methodology by which student preferences between institutions can be assessed, using the…
NASA Astrophysics Data System (ADS)
Scaini, C.; Felpeto, A.; Martí, J.; Carniel, R.
2014-05-01
This paper presents a GIS-based methodology to estimate damages produced by volcanic eruptions. The methodology comprises four parts: definition and simulation of eruptive scenarios, exposure analysis, vulnerability assessment and estimation of expected damages. Multi-hazard eruptive scenarios are defined for the Teide-Pico Viejo active volcanic complex, and simulated through the VORIS tool. The exposure analysis identifies the elements exposed to the hazard at stake and focuses on the relevant assets for the study area. The vulnerability analysis is based on previous studies on the built environment and complemented with the analysis of transportation and urban infrastructures. Damage assessment is performed by associating a qualitative damage rating with each combination of hazard and vulnerability. This operation consists of a GIS-based overlay, performed for each hazardous phenomenon considered and for each exposed element. The methodology is then automated into a GIS-based tool using an ArcGIS® program. Given the eruptive scenarios and the characteristics of the exposed elements, the tool produces expected damage maps. The tool is applied to the Icod Valley (north of Tenerife Island), which is likely to be affected by volcanic phenomena in case of eruption from either the Teide-Pico Viejo volcanic complex or the North-West basaltic rift. Results are thematic maps of vulnerability and damage that can be displayed at different levels of detail, depending on the user's preferences. The aim of the tool is to facilitate territorial planning and risk management in active volcanic areas.
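The per-element overlay that assigns a qualitative damage rating can be sketched as a matrix lookup. The rating scale, matrix entries, and element names below are illustrative, not taken from the paper:

```python
# ordinal ratings: hazard intensity x element vulnerability, both low/medium/high
DAMAGE_MATRIX = {
    ("low", "low"): "slight",    ("low", "medium"): "slight",   ("low", "high"): "moderate",
    ("medium", "low"): "slight", ("medium", "medium"): "moderate", ("medium", "high"): "heavy",
    ("high", "low"): "moderate", ("high", "medium"): "heavy",   ("high", "high"): "destroyed",
}

def damage_rating(hazard, vulnerability):
    """Qualitative damage rating for one hazard/vulnerability combination."""
    return DAMAGE_MATRIX[(hazard, vulnerability)]

def damage_map(elements, hazard_field):
    """GIS-style overlay: rate every exposed element given the hazard
    intensity sampled at its location (here a simple per-element lookup)."""
    return {eid: damage_rating(hazard_field[eid], vuln)
            for eid, vuln in elements.items()}

# illustrative exposed elements and a lava-flow hazard field
buildings = {"school": "high", "road_1": "medium"}
lava_hazard = {"school": "medium", "road_1": "high"}
print(damage_map(buildings, lava_hazard))
```

Running one such overlay per hazardous phenomenon and mapping the resulting ratings yields the expected damage maps the tool produces.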
Delvosalle, Christian; Fievez, Cécile; Pipart, Aurore; Debray, Bruno
2006-03-31
In the framework of the Accidental Risk Assessment Methodology for Industries (ARAMIS) project, this paper presents the work carried out in the part of the project devoted to the definition of accident scenarios. This topic is a key point in risk assessment and serves as the basis for the whole risk quantification. The first result of the work is the building of a methodology for the identification of major accident hazards (MIMAH), which is carried out with the development of generic fault and event trees based on a typology of equipment and substances. The term "major accidents" must be understood as the worst accidents likely to occur on the equipment, assuming that no safety systems are installed. A second methodology, called the methodology for the identification of reference accident scenarios (MIRAS), takes into account the influence of safety systems on both the frequencies and possible consequences of accidents. This methodology leads to the identification of more realistic accident scenarios. The reference accident scenarios are chosen with the help of a tool called the "risk matrix", crossing the frequency and the consequences of accidents. This paper presents both methodologies and an application to an ethylene oxide storage facility.
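The effect of crediting safety systems, which is what distinguishes MIRAS from MIMAH, can be sketched as an event-tree frequency propagation. Barrier names and probabilities below are invented for illustration:

```python
def event_tree(initiating_frequency, barriers):
    """Propagate an initiating-event frequency (per year) through a chain of
    safety barriers. Each barrier is (name, probability_of_success); every
    failure branch continues toward the major-accident outcome. With an
    empty barrier list this reproduces the MIMAH-style worst case."""
    f_remaining = initiating_frequency
    outcomes = {}
    for name, p_success in barriers:
        outcomes[f"controlled by {name}"] = f_remaining * p_success
        f_remaining *= (1.0 - p_success)       # failure branch continues
    outcomes["major accident"] = f_remaining
    return outcomes

# illustrative ethylene oxide release: gas detection, then water-spray mitigation
tree = event_tree(1e-3, [("gas detection", 0.9), ("water spray", 0.8)])
```

Crossing each outcome's frequency with its consequence severity in a risk matrix is then what selects the reference accident scenarios.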
The use of bibliometrics to measure research quality in UK higher education institutions.
Adams, Jonathan
2009-01-01
Research assessment in the UK has evolved over a quarter of a century from a loosely structured, peer-review based process to one with a well understood data portfolio and assessment methodology. After 2008, the assessment process will shift again, to the use of indicators based largely on publication and citation data. These indicators will in part follow the format introduced in 2008, with a profiling of assessment outcomes at national and international levels. However, the shift from peer assessment to a quantitative methodology raises critical issues about which metrics are appropriate and informative and how such metrics should be managed to produce weighting factors for funding formulae. The link between publication metrics and other perceptions of research quality needs to be thoroughly tested and reviewed, and may be variable between disciplines. Many of the indicators that drop out of publication data are poorly linked to quality and should not be used at all. There are also issues about which publications are the correct base for assessment, which staff should be included in a review, how subjects should be structured and how the citation data should be normalised to account for discipline-dependent variables. Finally, it is vital to consider the effect that any assessment process will have on the behaviour of those to be assessed.
NASA Astrophysics Data System (ADS)
Shukla, Nagesh; Wickramasuriya, Rohan; Miller, Andrew; Perez, Pascal
2015-05-01
This paper proposes an integrated modelling process to assess future population accessibility to radiotherapy treatment services based on projected cancer incidence and road-network-based accessibility. Previous research has assessed travel distance/time barriers affecting access to cancer treatment services, and epidemiological studies have shown that cancer incidence rates vary with population demography. It is established that travel distances to treatment centres and the demographic profiles of the accessible regions greatly influence the demand for cancer radiotherapy (RT) services. However, an integrated service planning approach that combines spatially explicit cancer incidence projections with road-network-based accessibility of RT services has never been attempted. This research work presents this novel methodology for the accessibility assessment of RT services and demonstrates its viability by modelling New South Wales (NSW) cancer incidence rates for different age-sex groups based on observed cancer incidence trends; estimating road-network-based access to current NSW treatment centres; and projecting the demand for RT services in New South Wales, Australia, from 2011 to 2026.
Environmental Risk Assessment of dredging processes - application to Marin harbour (NW Spain)
NASA Astrophysics Data System (ADS)
Gómez, A. G.; García Alba, J.; Puente, A.; Juanes, J. A.
2014-04-01
A methodological procedure to estimate the environmental risk of dredging operations in aquatic systems has been developed. Environmental risk estimates are based on numerical model results, which provide an appropriate spatio-temporal analysis framework to guarantee an effective decision-making process. The methodological procedure has been applied to a real dredging operation in the port of Marin (NW Spain). Results from Marin harbour confirmed the suitability of the developed methodology and its conceptual approaches as a comprehensive and practical management tool.
Lewis, Cara C; Stanick, Cameo F; Martinez, Ruben G; Weiner, Bryan J; Kim, Mimi; Barwick, Melanie; Comtois, Katherine A
2015-01-08
Identification of psychometrically strong instruments for the field of implementation science is a high priority underscored in a recent National Institutes of Health working meeting (October 2013). Existing instrument reviews are limited in scope, methods, and findings. The Society for Implementation Research Collaboration Instrument Review Project's objectives address these limitations by identifying and applying a unique methodology to conduct a systematic and comprehensive review of quantitative instruments assessing constructs delineated in two of the field's most widely used frameworks, adopt a systematic search process (using standard search strings), and engage an international team of experts to assess the full range of psychometric criteria (reliability, construct and criterion validity). Although this work focuses on implementation of psychosocial interventions in mental health and health-care settings, the methodology and results will likely be useful across a broad spectrum of settings. This effort has culminated in a centralized online open-access repository of instruments depicting graphical head-to-head comparisons of their psychometric properties. This article describes the methodology and preliminary outcomes. The seven stages of the review, synthesis, and evaluation methodology include (1) setting the scope for the review, (2) identifying frameworks to organize and complete the review, (3) generating a search protocol for the literature review of constructs, (4) literature review of specific instruments, (5) development of an evidence-based assessment rating criteria, (6) data extraction and rating instrument quality by a task force of implementation experts to inform knowledge synthesis, and (7) the creation of a website repository. 
To date, this multi-faceted and collaborative search and synthesis methodology has identified over 420 instruments related to 34 constructs (total 48 including subconstructs) that are relevant to implementation science. Despite numerous constructs having greater than 20 available instruments, which implies saturation, preliminary results suggest that few instruments stem from gold standard development procedures. We anticipate identifying few high-quality, psychometrically sound instruments once our evidence-based assessment rating criteria have been applied. The results of this methodology may enhance the rigor of implementation science evaluations by systematically facilitating access to psychometrically validated instruments and identifying where further instrument development is needed.
Application of the Hardman methodology to the Army Remotely Piloted Vehicle (RPV)
NASA Technical Reports Server (NTRS)
1983-01-01
The application of the HARDMAN Methodology to the Remotely Piloted Vehicle (RPV) is described. The methodology was used to analyze the manpower, personnel, and training (MPT) requirements of the proposed RPV system design for a number of operating scenarios. The RPV system is defined as consisting of the equipment, personnel, and operational procedures needed to perform five basic artillery missions: reconnaissance, target acquisition, artillery adjustment, target designation and damage assessment. The RPV design evaluated includes an air vehicle (AV), a modular integrated communications and navigation system (MICNS), a ground control station (GCS), a launch subsystem (LS), a recovery subsystem (RS), and a number of ground support requirements. The HARDMAN Methodology is an integrated set of data base management techniques and analytic tools, designed to provide timely and fully documented assessments of the human resource requirements associated with an emerging system's design.
Analysis and methodology for aeronautical systems technology program planning
NASA Technical Reports Server (NTRS)
White, M. J.; Gershkoff, I.; Lamkin, S.
1983-01-01
A structured methodology was developed that allows the generation, analysis, and rank-ordering of system concepts by their benefits and costs, indicating the preferred order of implementation. The methodology is supported by a base of data on civil transport aircraft fleet growth projections and data on aircraft performance relating the contribution of each element of the aircraft to overall performance. The performance data are used to assess the benefits of proposed concepts. The methodology includes a computer program for performing the calculations needed to rank-order the concepts and compute their cumulative benefit-to-cost ratio. The use of the methodology and supporting data is illustrated through the analysis of actual system concepts from various sources.
Assessment of continuous oil and gas resources of the Cooper Basin, Australia, 2016
Schenk, Christopher J.; Tennyson, Marilyn E.; Mercier, Tracey J.; Klett, Timothy R.; Finn, Thomas M.; Le, Phuong A.; Brownfield, Michael E.; Gaswirth, Stephanie B.; Marra, Kristen R.; Hawkins, Sarah J.; Leathers-Miller, Heidi M.
2016-07-15
Using a geology-based assessment methodology, the U.S. Geological Survey estimated mean continuous resources of 482 million barrels of oil and 29.8 trillion cubic feet of gas in the Cooper Basin of Australia.
Assessment of Paleozoic shale gas resources in the Sichuan Basin of China, 2015
Potter, Christopher J.; Schenk, Christopher J.; Charpentier, Ronald R.; Gaswirth, Stephanie B.; Klett, Timothy R.; Leathers, Heidi M.; Brownfield, Michael E.; Mercier, Tracey J.; Tennyson, Marilyn E.; Pitman, Janet K.
2015-10-14
Using a geology-based assessment methodology, the U.S. Geological Survey estimated a mean of 23.9 trillion cubic feet of technically recoverable shale gas resources in Paleozoic formations in the Sichuan Basin of China.
Schenk, Christopher J.; Brownfield, Michael E.; Tennyson, Marilyn E.; Klett, Timothy R.; Mercier, Tracey J.; Hawkins, Sarah J.; Gaswirth, Stephanie B.; Marra, Kristen R.; Finn, Thomas M.; Le, Phuong A.; Leathers-Miller, Heidi M.
2017-02-21
Using a geology-based assessment methodology, the U.S. Geological Survey estimated undiscovered, technically recoverable mean resources of 5.27 trillion cubic feet of coalbed gas in the Karoo Basin Province.
Spatially resolved hazard and exposure assessments: an example of lead in soil at Lavrion, Greece.
Tristán, E; Demetriades, A; Ramsey, M H; Rosenbaum, M S; Stavrakis, P; Thornton, I; Vassiliades, E; Vergou, K
2000-01-01
Spatially resolved hazard assessment (SRHA) and spatially resolved exposure assessment (SREA) are methodologies that have been devised for assessing child exposure to soil containing environmental pollutants. These are based on either a quantitative or a semiquantitative approach. The feasibility of the methodologies has been demonstrated in a study assessing child exposure to Pb accessible in soil at the town of Lavrion in Greece. Using a quantitative approach, both measured and kriged concentrations of Pb in soil are compared with an "established" statutory threshold value. The probabilistic approach gives a refined classification of the contaminated land, since it takes into consideration the uncertainty in both the actual measurement and estimated kriged values. Two exposure assessment models (i.e., IEUBK and HESP) are used as the basis of the quantitative SREA methodologies. The significant correlation between the blood-Pb predictions, using the IEUBK model, and measured concentrations provides a partial validation of the method, because it allows for the uncertainty in the measurements and the lack of some site-specific measurements. The semiquantitative applications of SRHA and SREA incorporate both qualitative information (e.g., land use and dustiness of waste) and quantitative information (e.g., distance from wastes and distance from industry). The significant correlation between the results of these assessments and the measured blood-Pb levels confirms the robust nature of this approach. Successful application of these methodologies could reduce the cost of the assessment and allow areas to be prioritized for further investigation, remediation, or risk management.
Integrated assessment of urban drainage system under the framework of uncertainty analysis.
Dong, X; Chen, J; Zeng, S; Zhao, D
2008-01-01
Due to rapid urbanization and the large number of aging urban infrastructures in China, urban drainage systems are facing a dual pressure of construction and renovation nationwide. This leads to the need for an integrated assessment when an urban drainage system is being planned or redesigned. In this paper, an integrated assessment methodology is proposed based upon the approaches of analytic hierarchy process (AHP), uncertainty analysis, mathematical simulation of urban drainage systems and fuzzy assessment. To illustrate this methodology, a case study in Shenzhen City of south China has been implemented to evaluate and compare two different urban drainage system renovation plans, i.e., the distributed plan and the centralized plan. By comparing their water quality impacts, ecological impacts, technological feasibility and economic costs, the integrated performance of the distributed plan is found to be better and more robust. The proposed methodology is also found to be both effective and practical.
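The AHP component of such an assessment derives criterion weights from a pairwise-comparison matrix. A minimal sketch using the common row-geometric-mean approximation of the priority vector follows; the criteria and judgment values in the matrix are hypothetical, since the paper's actual comparisons are not given here.

```python
import math

def ahp_weights(pairwise):
    """Approximate AHP priority vector via row geometric means, normalized to sum to 1."""
    gms = [math.prod(row) ** (1.0 / len(row)) for row in pairwise]
    total = sum(gms)
    return [g / total for g in gms]

# Hypothetical judgments on Saaty's 1-9 scale:
# criteria = water quality, ecological impact, economic cost.
matrix = [
    [1,     3,     5],
    [1 / 3, 1,     3],
    [1 / 5, 1 / 3, 1],
]
w = ahp_weights(matrix)
print([round(x, 3) for x in w])  # weights sum to 1, in decreasing order
```

The resulting weights would then feed the fuzzy assessment that scores each renovation plan against the weighted criteria.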
Costa, Susana P F; Pinto, Paula C A G; Lapa, Rui A S; Saraiva, M Lúcia M F S
2015-03-02
A fully automated Vibrio fischeri methodology based on sequential injection analysis (SIA) has been developed. The methodology was based on the aspiration of 75 μL of bacteria and 50 μL of inhibitor followed by measurement of the bacterial luminescence. The assays were conducted for contact times of 5, 15, and 30 min, by means of three mixing chambers that ensured adequate mixing conditions. The optimized methodology provided a precise control of the reaction conditions, which is an asset for the analysis of a large number of samples. The developed methodology was applied to the evaluation of the impact of a set of ionic liquids (ILs) on V. fischeri and the results were compared with those provided by a conventional assay kit (Biotox®). The collected data evidenced the influence of different cation head groups and anion moieties on the toxicity of ILs. Generally, aromatic cations and fluorine-containing anions displayed a higher impact on V. fischeri, as evidenced by lower EC50 values. The proposed methodology was validated through statistical analysis, which demonstrated a strong positive correlation (P>0.98) between assays. It is expected that the automated methodology can be tested for more classes of compounds and used as an alternative to microplate-based V. fischeri assay kits.
Multi-scale landslide hazard assessment: Advances in global and regional methodologies
NASA Astrophysics Data System (ADS)
Kirschbaum, Dalia; Peters-Lidard, Christa; Adler, Robert; Hong, Yang
2010-05-01
The increasing availability of remotely sensed surface data and precipitation provides a unique opportunity to explore how smaller-scale landslide susceptibility and hazard assessment methodologies may be applicable at larger spatial scales. This research first considers an emerging satellite-based global algorithm framework, which evaluates how the landslide susceptibility and satellite derived rainfall estimates can forecast potential landslide conditions. An analysis of this algorithm using a newly developed global landslide inventory catalog suggests that forecasting errors are geographically variable due to improper weighting of surface observables, resolution of the current susceptibility map, and limitations in the availability of landslide inventory data. These methodological and data limitation issues can be more thoroughly assessed at the regional level, where available higher resolution landslide inventories can be applied to empirically derive relationships between surface variables and landslide occurrence. The regional empirical model shows improvement over the global framework in advancing near real-time landslide forecasting efforts; however, there are many uncertainties and assumptions surrounding such a methodology that decreases the functionality and utility of this system. This research seeks to improve upon this initial concept by exploring the potential opportunities and methodological structure needed to advance larger-scale landslide hazard forecasting and make it more of an operational reality. Sensitivity analysis of the surface and rainfall parameters in the preliminary algorithm indicates that surface data resolution and the interdependency of variables must be more appropriately quantified at local and regional scales. Additionally, integrating available surface parameters must be approached in a more theoretical, physically-based manner to better represent the physical processes underlying slope instability and landslide initiation. 
Several rainfall infiltration and hydrological flow models have been developed to model slope instability at small spatial scales. This research investigates the potential of applying a more quantitative hydrological model to larger spatial scales, utilizing satellite and surface data inputs that are obtainable over different geographic regions. Due to the significant role that data and methodological uncertainties play in the effectiveness of landslide hazard assessment outputs, the methodology and data inputs are considered within an ensemble uncertainty framework in order to better resolve the contribution and limitations of model inputs and to more effectively communicate the model skill for improved landslide hazard assessment.
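The global algorithm framework described above combines a static susceptibility map with satellite rainfall estimates checked against an intensity-duration threshold. A minimal sketch of that decision rule follows; the threshold coefficients and the susceptibility cut-off are illustrative assumptions, not those of the operational algorithm.

```python
def rainfall_threshold(duration_h, a=10.0, b=-0.6):
    """Illustrative intensity-duration threshold I = a * D**b (mm/h):
    longer-duration events trigger landslides at lower intensities."""
    return a * duration_h ** b

def landslide_nowcast(susceptibility, intensity_mm_h, duration_h, s_min=0.5):
    """Flag a grid cell when it is susceptible AND rainfall exceeds the threshold."""
    return susceptibility >= s_min and intensity_mm_h > rainfall_threshold(duration_h)

# A susceptible cell under 12 mm/h sustained for 6 h exceeds the assumed threshold.
print(landslide_nowcast(0.8, 12.0, 6.0))  # True
print(landslide_nowcast(0.2, 12.0, 6.0))  # False: low susceptibility masks the cell
```

The sensitivity issues raised in the abstract correspond to how strongly the output depends on choices like `a`, `b`, and `s_min` here.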
Hazmat transport: a methodological framework for the risk analysis of marshalling yards.
Cozzani, Valerio; Bonvicini, Sarah; Spadoni, Gigliola; Zanelli, Severino
2007-08-17
A methodological framework was outlined for the comprehensive risk assessment of marshalling yards in the context of quantified area risk analysis. Three accident typologies were considered for yards: (i) "in-transit-accident-induced" releases; (ii) "shunting-accident-induced" spills; and (iii) "non-accident-induced" leaks. A specific methodology was developed for the assessment of expected release frequencies and equivalent release diameters, based on the application of HazOp and Fault Tree techniques to reference schemes defined for the more common types of railcar vessels used for "hazmat" transportation. The approach was applied to the assessment of an extended case study. The results showed that "non-accident-induced" leaks in marshalling yards represent an important contribution to the overall risk associated with these zones. Furthermore, the results confirmed the considerable contribution of these fixed installations to the overall risk associated with "hazmat" transportation.
Tailoring a Human Reliability Analysis to Your Industry Needs
NASA Technical Reports Server (NTRS)
DeMott, D. L.
2016-01-01
Companies at risk of accidents caused by human error that result in catastrophic consequences include: airline industry mishaps, medical malpractice, medication mistakes, aerospace failures, major oil spills, transportation mishaps, power production failures and manufacturing facility incidents. Human Reliability Assessment (HRA) is used to analyze the inherent risk of human behavior or actions introducing errors into the operation of a system or process. These assessments can be used to identify where errors are most likely to arise and the potential risks involved if they do occur. Using the basic concepts of HRA, an evolving group of methodologies are used to meet various industry needs. Determining which methodology or combination of techniques will provide a quality human reliability assessment is a key element to developing effective strategies for understanding and dealing with risks caused by human errors. There are a number of concerns and difficulties in "tailoring" a Human Reliability Assessment (HRA) for different industries. Although a variety of HRA methodologies are available to analyze human error events, determining the most appropriate tools to provide the most useful results can depend on industry specific cultures and requirements. Methodology selection may be based on a variety of factors that include: 1) how people act and react in different industries, 2) expectations based on industry standards, 3) factors that influence how the human errors could occur such as tasks, tools, environment, workplace, support, training and procedure, 4) type and availability of data, 5) how the industry views risk & reliability, and 6) types of emergencies, contingencies and routine tasks. Other considerations for methodology selection should be based on what information is needed from the assessment. 
If the principal concern is determining the primary risk factors contributing to potential human error, a more detailed analysis method may be employed than when the requirement is simply to provide a numerical value as part of a probabilistic risk assessment. Industries involved with humans operating large equipment or transport systems (e.g., railroads or airlines) have more need to address the man-machine interface than medical workers administering medications. Human error occurs in every industry; in most cases the consequences are relatively benign and occasionally beneficial. In cases where the results can have disastrous consequences, the use of Human Reliability techniques to identify and classify the risk of human errors gives a company more opportunities to mitigate or eliminate these types of risks and prevent costly tragedies.
C. Alina Cansler; Donald McKenzie
2012-01-01
Remotely sensed indices of burn severity are now commonly used by researchers and land managers to assess fire effects, but their relationship to field-based assessments of burn severity has been evaluated only in a few ecosystems. This analysis illustrates two cases in which methodological refinements to field-based and remotely sensed indices of burn severity...
Dekant, Wolfgang; Bridges, James
2016-11-01
Quantitative weight of evidence (QWoE) methodology utilizes detailed scoring sheets to assess the quality/reliability of each publication on the toxicity of a chemical and gives numerical scores for quality and observed toxicity. This QWoE methodology was applied to the reproductive toxicity data on diisononylphthalate (DINP), di-n-hexylphthalate (DnHP), and dicyclohexylphthalate (DCHP) to determine if the scientific evidence for adverse effects meets the requirements for classification as reproductive toxicants. The scores for DINP were compared with those obtained when applying the methodology to DCHP and DnHP, which have harmonized classifications. Based on the quality/reliability scores, application of the QWoE shows that the three databases are of similar quality, but the effect scores differ widely. Application of QWoE to DINP studies resulted in an overall score well below the benchmark required to trigger classification. For DCHP, the QWoE also results in low scores. The high scores from the application of the QWoE methodology to the toxicological data for DnHP represent clear evidence for adverse effects and justify a classification of DnHP as category 1B for both development and fertility. The conclusions on classification based on the QWoE are well supported by a narrative assessment of consistency and biological plausibility.
An integrated science-based methodology to assess potential ...
There is an urgent need for broad and integrated studies that address the risks of engineered nanomaterials (ENMs) along the different endpoints of the society, environment, and economy (SEE) complex adaptive system. This article presents an integrated science-based methodology to assess the potential risks of engineered nanomaterials. To achieve the study objective, two major tasks are accomplished: knowledge synthesis and an algorithmic computational methodology. The knowledge synthesis task is designed to capture “what is known” and to outline the gaps in knowledge from an ENM risk perspective. The algorithmic computational methodology is geared toward the provision of decisions and an understanding of the risks of ENMs along different endpoints for the constituents of the SEE complex adaptive system. The approach presented herein allows for addressing the formidable task of assessing the implications and risks of exposure to ENMs, with the long-term goal to build a decision-support system to guide key stakeholders in the SEE system towards building sustainable ENMs and nano-enabled products. The following specific aims are formulated to achieve the study objective: (1) to propose a system of systems (SoS) architecture that builds a network management among the different entities in the large SEE system to track the flow of ENMs emission, fate and transport from the source to the receptor; (2) to establish a staged approach for knowledge synthesis methodology
Active learning on the ward: outcomes from a comparative trial with traditional methods.
Melo Prado, Hegla; Hannois Falbo, Gilliatt; Rodrigues Falbo, Ana; Natal Figueirôa, José
2011-03-01
Academic activity during internship is essentially practical and ward rounds are traditionally considered the cornerstone of clinical education. However, the efficacy and effectiveness of ward rounds for learning purposes have been under-investigated and it is necessary to assess alternative educational paradigms for this activity. This study aimed to compare the educational effectiveness of ward rounds conducted with two different learning methodologies. Student subjects were first tested on 30 true/false questions to assess their initial degree of knowledge on pneumonia and diarrhoea. Afterwards, they attended ward rounds conducted using an active and a traditional learning methodology. The participants were submitted to a second test 48 hours later in order to assess knowledge acquisition and were asked to answer two questions about self-directed learning and their opinions on the two learning methodologies used. Seventy-two medical students taking part in a paediatric clinic rotation were enrolled. The active methodology proved to be more effective than the traditional methodology for the three outcomes considered: knowledge acquisition (33 students [45.8%] versus 21 students [29.2%]; p=0.03); self-directed learning (38 students [52.8%] versus 11 students [15.3%]; p<0.001), and student opinion on the methods (61 students [84.7%] versus 38 students [52.8%]; p<0.001). The active methodology produced better results than the traditional methodology in a ward-based context. This study seems to be valuable in terms of the new evidence it demonstrates on learning methodologies in the context of the ward round.
Product environmental footprint in policy and market decisions: Applicability and impact assessment.
Lehmann, Annekatrin; Bach, Vanessa; Finkbeiner, Matthias
2015-07-01
In April 2013, the European Commission published the Product and Organisation Environmental Footprint (PEF/OEF) methodology, a life cycle-based multicriteria measure of the environmental performance of products, services, and organizations. With its approach of "comparability over flexibility," the PEF/OEF methodology aims at harmonizing existing methods, while decreasing the flexibility provided by the International Organization for Standardization (ISO) standards regarding methodological choices. Currently, a 3-year pilot phase is running, aiming at testing the methodology and developing product category and organization sector rules (PEFCR/OEFSR). Although a harmonized method is in theory a good idea, the PEF/OEF methodology presents challenges, including a risk of confusion and limitations in applicability to practice. The paper discusses the main differences between the PEF and ISO methodologies and highlights challenges regarding PEF applicability, with a focus on impact assessment. Some methodological aspects of the PEF and PEFCR Guides are found to contradict ISO 14044 (2006) and ISO 14025 (2006). Others, such as the prohibition of inventory cutoffs, are impractical. The evaluation of the impact assessment methods proposed in the PEF/OEF Guide showed that the predefined methods for water consumption, land use, and abiotic resources are not adequate because of modeling artefacts, missing inventory data, or incomplete characterization factors. However, the methods for global warming and ozone depletion perform very well. The results of this study are relevant for the PEF (and OEF) pilot phase, which aims at testing the PEF (OEF) methodology (and potentially adapting it) as well as addressing challenges and coping with them.
Solecki, Roland; Rauch, Martina; Gall, Andrea; Buschmann, Jochen; Clark, Ruth; Fuchs, Antje; Kan, Haidong; Heinrich, Verena; Kellner, Rupert; Knudsen, Thomas B; Li, Weihua; Makris, Susan L; Ooshima, Yojiro; Paumgartten, Francisco; Piersma, Aldert H; Schönfelder, Gilbert; Oelgeschläger, Michael; Schaefer, Christof; Shiota, Kohei; Ulbrich, Beate; Ding, Xuncheng; Chahoud, Ibrahim
2015-11-01
This article is a report of the 8th Berlin Workshop on Developmental Toxicity held in May 2014. The main aim of the workshop was the continuing harmonization of terminology and innovations for methodologies used in the assessment of embryo- and fetotoxic findings. The following main topics were discussed: harmonized categorization of external, skeletal, visceral and materno-fetal findings into malformations, variations and grey zone anomalies; aspects of developmental anomalies in humans and laboratory animals; and innovations for new methodologies in developmental toxicology. The application of Version 2 terminology in the DevTox database was considered a useful improvement in the categorization of developmental anomalies. Participants concluded that initiation of a project for comparative assessments of developmental anomalies in humans and laboratory animals could support regulatory risk assessment and university-based training. The development of new methodological approaches as alternatives to animal testing should be fostered for a better understanding of developmental outcomes.
Multi-Disciplinary Knowledge Synthesis for Human Health Assessment on Earth and in Space
NASA Astrophysics Data System (ADS)
Christakos, G.
We discuss methodological developments in multi-disciplinary knowledge synthesis (KS) of human health assessment. A theoretical KS framework can provide the rational means for the assimilation of various information bases (general, site-specific etc.) that are relevant to the life system of interest. KS-based techniques produce a realistic representation of the system, provide a rigorous assessment of the uncertainty sources, and generate informative health state predictions across space-time. The underlying epistemic cognition methodology is based on teleologic criteria and stochastic logic principles. The mathematics of KS involves a powerful and versatile spatiotemporal random field model that accounts rigorously for the uncertainty features of the life system and imposes no restriction on the shape of the probability distributions or the form of the predictors. KS theory is instrumental in understanding natural heterogeneities, assessing crucial human exposure correlations and laws of physical change, and explaining toxicokinetic mechanisms and dependencies in a spatiotemporal life system domain. It is hoped that a better understanding of KS fundamentals would generate multi-disciplinary models that are useful for the maintenance of human health on Earth and in Space.
2014-01-01
Background To ensure evidence-based decision-making in paediatric oral health, Cochrane systematic reviews that address topics pertinent to this field are necessary. We aimed to identify all systematic reviews of paediatric dentistry and oral health by the Cochrane Oral Health Group (COHG), summarize their characteristics and assess their methodological quality. Our second objective was to assess implications for practice in the review conclusions and provide an overview of clinical implications about the usefulness of paediatric oral health interventions in practice. Methods We conducted a methodological survey including all paediatric dentistry reviews from the COHG. We extracted data on characteristics of included reviews, then assessed the methodological quality using a validated 11-item quality assessment tool (AMSTAR). Finally, we coded each review to indicate whether its authors concluded that an intervention should be implemented in practice, was not supported or was refuted by the evidence, or should be used only in research (inconclusive evidence). Results We selected 37 reviews; most concerned the prevention of caries. The methodological quality was high, except for the assessment of reporting bias. In 7 reviews (19%), the research showed that benefits outweighed harms; in 1, the experimental intervention was found ineffective; and in 29 (78%), evidence was insufficient to assess benefits and harms. In the 7 reviews, topical fluoride treatments (with toothpaste, gel or varnish) were found effective for permanent and deciduous teeth in children and adolescents, and sealants for occlusal tooth surfaces of permanent molars. Conclusions Cochrane reviews of paediatric dentistry were of high quality. They provided strong evidence that topical fluoride treatments and sealants are effective for children and adolescents and thus should be implemented in practice. However, a substantial number of reviews yielded inconclusive evidence.
Bayesian experimental design for models with intractable likelihoods.
Drovandi, Christopher C; Pettitt, Anthony N
2013-12-01
In this paper we present a methodology for designing experiments for efficiently estimating the parameters of models with computationally intractable likelihoods. The approach combines a commonly used methodology for robust experimental design, based on Markov chain Monte Carlo sampling, with approximate Bayesian computation (ABC) to ensure that no likelihood evaluations are required. The utility function considered for precise parameter estimation is based upon the precision of the ABC posterior distribution, which we form efficiently via the ABC rejection algorithm based on pre-computed model simulations. Our focus is on stochastic models and, in particular, we investigate the methodology for Markov process models of epidemics and macroparasite population evolution. The macroparasite example involves a multivariate process and we assess the loss of information from not observing all variables. © 2013, The International Biometric Society.
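The ABC rejection step at the core of the utility evaluation can be sketched in a few lines; the Bernoulli simulator and uniform prior below are illustrative stand-ins, not the epidemic or macroparasite models of the paper.

```python
import random

def simulate(theta, n, rng):
    """Toy stochastic simulator: number of successes in n Bernoulli(theta)
    trials (a stand-in for an intractable-likelihood model)."""
    return sum(rng.random() < theta for _ in range(n))

def abc_rejection(observed, n, tol, draws, rng):
    """ABC rejection: draw theta from a Uniform(0,1) prior and keep it if
    the simulated summary statistic lies within `tol` of the observed one."""
    accepted = []
    for _ in range(draws):
        theta = rng.random()                     # prior draw
        if abs(simulate(theta, n, rng) - observed) <= tol:
            accepted.append(theta)
    return accepted

rng = random.Random(1)
post = abc_rejection(observed=30, n=100, tol=3, draws=2000, rng=rng)
est = sum(post) / len(post)                      # ABC posterior mean
print(round(est, 2))
```

The spread of the accepted `post` sample is exactly the kind of precision measure the paper's utility function is built on.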
C3I system modification and EMC (electromagnetic compatibility) methodology, volume 1
NASA Astrophysics Data System (ADS)
Wilson, J. L.; Jolly, M. B.
1984-01-01
A methodology (i.e., consistent set of procedures) for assessing the electromagnetic compatibility (EMC) of RF subsystem modifications on C3I aircraft was generated during this study (Volume 1). An IEMCAP (Intrasystem Electromagnetic Compatibility Analysis Program) database for the E-3A (AWACS) C3I aircraft RF subsystem was extracted to support the design of the EMC assessment methodology (Volume 2). Mock modifications were performed on the E-3A database to assess, using a preliminary form of the methodology, the resulting EMC impact. Application of the preliminary assessment methodology to modifications in the E-3A database served to fine tune the form of a final assessment methodology. The resulting final assessment methodology is documented in this report in conjunction with the overall study goals, procedures, and database. It is recommended that a similar EMC assessment methodology be developed for the power subsystem within C3I aircraft. It is further recommended that future EMC assessment methodologies be developed around expert systems (i.e., computer intelligent agents) to control both the excruciating detail and user requirement for transparency.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Longgao; Yang, Xiaoyan; School of Environmental Science and Spatial Informatics, China University of Mining and Technology, Xuzhou 221116
The implementation of land use planning (LUP) has a large impact on environmental quality, yet there is no widely accepted, consolidated approach to assessing the LUP environmental impact with Strategic Environmental Assessment (SEA). In this paper, we developed a state-impact-state (SIS) model for LUP environmental impact assessment (LUPEA). Using the Matter-element (ME) and Extenics methods, the methodology based on the SIS model was established and applied to the LUPEA of Zoucheng County, China. The results show that: (1) this methodology provides an intuitive and easily understood logical model for both the theoretical analysis and application of LUPEA; (2) the spatial multi-temporal assessment from the base year, through the near-future year, to the planning target year suggests a positive impact on environmental quality in the whole County despite certain environmental degradation in some towns; (3) besides the spatial assessment, other achievements were obtained, including the environmental elements influenced by land use and their weights, the identification of key indicators in LUPEA, and appropriate environmental mitigation measures; and (4) this methodology can be used to achieve multi-temporal assessment of LUP environmental impact at County or Town level in other areas. - Highlights: • A State-Impact-State model for Land Use Planning Environmental Assessment (LUPEA). • Matter-element (ME) and Extenics methods were embedded in the LUPEA. • The model was applied to the LUPEA of Zoucheng County. • The assessment shows improving environmental quality since 2000 in Zoucheng County. • The method provides a useful tool for LUPEA at the county level.
Assessment of Alternative Student Aid Delivery Systems: Assessment of the Current Delivery System.
ERIC Educational Resources Information Center
Advanced Technology, Inc., Reston, VA.
The effects of the current system for delivering federal financial assistance to students under the Pell Grant, Guaranteed Student Loan (GSL), and campus-based programs are analyzed. Information is included on the use of the assessment model, which combines program evaluation, systems research, and policy analysis methodologies.…
Improving Performance in Very Small Firms through Effective Assessment and Feedback
ERIC Educational Resources Information Center
Lorenzet, Steven J.; Cook, Ronald G.; Ozeki, Cynthia
2006-01-01
Purpose: The purpose of this paper is to improve assessment and feedback processes in the training practices of very small firms, thereby improving the firms' human capital. Design/methodology/approach: The paper reviews research and practice on effective assessment and feedback. Findings: Based on this paper, human resources are increasingly seen…
Assessment of shale-gas resources of the Karoo Province, South Africa and Lesotho, Africa, 2016
Brownfield, Michael E.; Schenk, Christopher J.; Klett, Timothy R.; Pitman, Janet K.; Tennyson, Marilyn E.; Gaswirth, Stephanie B.; Le, Phuong A.; Leathers-Miller, Heidi M.; Mercier, Tracey J.; Finn, Thomas M.
2016-07-08
Using a geology-based assessment methodology, the U.S. Geological Survey estimated a mean undiscovered, technically recoverable resource of 44.5 trillion cubic feet of shale gas in the Karoo Province of South Africa and Lesotho, Africa.
Assessment of potential shale-oil and shale-gas resources in Silurian shales of Jordan, 2014
Schenk, Christopher J.; Pitman, Janet K.; Charpentier, Ronald R.; Klett, Timothy R.; Tennyson, Marilyn E.; Mercier, Tracey J.; Nelson, Philip H.; Brownfield, Michael E.; Pawlewicz, Mark J.; Wandrey, Craig J.
2014-01-01
Using a geology-based assessment methodology, the U.S. Geological Survey estimated means of 11 million barrels of potential shale-oil and 320 billion cubic feet of shale-gas resources in Silurian shales of Jordan.
Assessment of undiscovered, technically recoverable oil and gas resources of Armenia, 2014
Klett, Timothy R.; Schenk, Christopher J.; Wandrey, Craig J.; Brownfield, Michael E.; Charpentier, Ronald R.; Tennyson, Marilyn E.; Gautier, Donald L.
2014-01-01
Using a geology-based assessment methodology, the U.S. Geological Survey estimated mean volumes of 1 million barrels of undiscovered, technically recoverable conventional oil and 6 billion cubic feet of undiscovered, technically recoverable conventional natural gas in Armenia.
Assessment of Undiscovered Oil and Gas Resources of the Red Sea Basin Province
2010-01-01
The U.S. Geological Survey estimated mean volumes of 5 billion barrels of undiscovered, technically recoverable oil and 112 trillion cubic feet of recoverable gas in the Red Sea Basin Province using a geology-based assessment methodology.
Assessment of undiscovered oil and gas resources in the Cuyo Basin Province, Argentina, 2017
Schenk, Christopher J.; Brownfield, Michael E.; Tennyson, Marilyn E.; Le, Phuong A.; Mercier, Tracey J.; Finn, Thomas M.; Hawkins, Sarah J.; Gaswirth, Stephanie B.; Marra, Kristen R.; Klett, Timothy R.; Leathers-Miller, Heidi M.; Woodall, Cheryl A.
2017-07-18
Using a geology-based assessment methodology, the U.S. Geological Survey estimated mean undiscovered, technically recoverable resources of 236 million barrels of oil and 112 billion cubic feet of associated gas in the Cuyo Basin Province, Argentina.
Assessment of undiscovered oil and gas resources in the Lower Indus Basin, Pakistan, 2017
Schenk, Christopher J.; Tennyson, Marilyn E.; Klett, Timothy R.; Finn, Thomas M.; Mercier, Tracey J.; Gaswirth, Stephanie B.; Marra, Kristen R.; Le, Phuong A.; Hawkins, Sarah J.; Leathers-Miller, Heidi M.
2017-09-19
Using a geology-based assessment methodology, the U.S. Geological Survey estimated mean undiscovered, technically recoverable resources of 164 million barrels of oil and 24.6 trillion cubic feet of gas in the Lower Indus Basin, Pakistan.
Assessment of undiscovered oil and gas resources in the North-Central Montana Province, 2017
Schenk, Christopher J.; Mercier, Tracey J.; Brownfield, Michael E.; Tennyson, Marilyn E.; Woodall, Cheryl A.; Le, Phuong A.; Klett, Timothy R.; Gaswirth, Stephanie B.; Finn, Thomas M.; Pitman, Janet K.; Marra, Kristen R.; Leathers-Miller, Heidi M.
2018-02-12
Using a geology-based assessment methodology, the U.S. Geological Survey estimated mean undiscovered, technically recoverable resources of 55 million barrels of oil and 846 billion cubic feet of gas in the North-Central Montana Province.
Assessment of undiscovered continuous oil and gas resources in the Hanoi Trough, Vietnam, 2017
Schenk, Christopher J.; Tennyson, Marilyn E.; Mercier, Tracey J.; Woodall, Cheryl A.; Le, Phuong A.; Klett, Timothy R.; Finn, Thomas M.; Leathers-Miller, Heidi M.; Gaswirth, Stephanie B.; Marra, Kristen R.
2018-02-13
Using a geology-based assessment methodology, the U.S. Geological Survey estimated mean undiscovered, technically recoverable continuous resources of 52 million barrels of oil and 591 billion cubic feet of gas in the Hanoi Trough of Vietnam.
Development of fuzzy air quality index using soft computing approach.
Mandal, T; Gorai, A K; Pathak, G
2012-10-01
Proper assessment of air quality status in an atmosphere based on limited observations is an essential task for meeting the goals of environmental management. A number of classification methods are available for estimating the changing status of air quality. However, a discrepancy frequently arises from the air quality criteria employed and the vagueness or fuzziness embedded in decision-making output values. Owing to this inherent imprecision, difficulties always exist in conventional methodologies such as the air quality index when describing integrated air quality conditions with respect to various pollutant parameters and times of exposure. In recent years, fuzzy logic-based methods have been shown to be appropriate for addressing uncertainty and subjectivity in environmental issues. In the present study, a methodology based on fuzzy inference systems (FIS) to assess air quality is proposed. This paper presents a comparative study assessing the status of air quality using the fuzzy logic technique against the conventional technique. The findings clearly indicate that the FIS may successfully harmonize inherent discrepancies and interpret complex conditions.
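A minimal Sugeno-style fuzzy inference sketch conveys the idea behind a fuzzy air quality index; the membership breakpoints and category scores below are invented for illustration and do not correspond to any regulatory index or to the paper's FIS.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_aqi(pm25):
    """Sugeno-style fuzzy index: rule strengths weight crisp category scores.
    Breakpoints are illustrative; assumes pm25 lies in the covered range [0, 226)."""
    good = tri(pm25, -1, 0, 35)
    moderate = tri(pm25, 25, 60, 95)
    poor = tri(pm25, 75, 150, 226)
    w = good + moderate + poor
    # crisp consequents: good -> 25, moderate -> 100, poor -> 175
    return (good * 25 + moderate * 100 + poor * 175) / w

print(round(fuzzy_aqi(30), 1))  # → 62.5
```

A reading of 30 sits between "good" and "moderate", so the defuzzified index lands between the two category scores rather than snapping to a single class, which is exactly the smoothing a crisp index lacks.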
ERIC Educational Resources Information Center
Emad, Gholamreza; Roth, Wolff Michael
2008-01-01
Purpose: The purpose of this paper is to highlight the contradictions in the current maritime education and training system (MET), which is based on competency-based education, training and assessment, and to theorize the failure to make the training useful. Design/methodology/approach: A case study of education and training in the international…
An Assessment of Resource Availability for Problem Based Learning in a Ghanaian University Setting
ERIC Educational Resources Information Center
Okyere, Gabriel Asare; Tawiah, Richard; Lamptey, Richard Bruce; Oduro, William; Thompson, Michael
2017-01-01
Purpose: The purpose of this paper is to assess the differences pertaining to the resources presently accessible for problem-based learning (PBL) among six colleges of Kwame Nkrumah University of Science and Technology (KNUST) in Ghana. Design/methodology/approach: Data for the study are of the cross-sectional type, drawn from 1,020 students. Poisson…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wanderer, Thomas, E-mail: thomas.wanderer@dlr.de; Herle, Stefan, E-mail: stefan.herle@rwth-aachen.de
2015-04-15
Owing to their spatially distributed nature, the profitability and impacts of renewable energy resources are highly correlated with the geographic locations of power plant deployments. A web-based Spatial Decision Support System (SDSS) built on a Multi-Criteria Decision Analysis (MCDA) approach has been implemented for identifying preferable locations for solar power plants based on user preferences. The designated areas found serve as input for scenario development in a subsequent integrated Environmental Impact Assessment. The capabilities of the SDSS service are showcased for Concentrated Solar Power (CSP) plants in the region of Andalusia, Spain. The resulting spatial patterns of possible power plant sites are an important input to the procedural chain of assessing impacts of renewable energies in an integrated effort. The applied methodology and the implemented SDSS are applicable to other renewable technologies as well. - Highlights: • The proposed tool facilitates well-founded CSP plant siting decisions. • Spatial MCDA methods are implemented in a WebGIS environment. • GIS-based SDSS can contribute to a modern integrated impact assessment workflow. • The conducted case study proves the suitability of the methodology.
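The MCDA core of such an SDSS often reduces to a weighted linear combination of normalized criterion scores per candidate location; the sketch below uses hypothetical sites, criteria and weights, not the Andalusia case study's.

```python
# Candidate sites scored on normalized criteria in [0, 1] (higher is better);
# "grid_distance" is already inverted so that near-grid sites score high.
sites = {
    "A": {"irradiance": 0.9, "slope": 0.8, "grid_distance": 0.4},
    "B": {"irradiance": 0.7, "slope": 0.9, "grid_distance": 0.9},
}
weights = {"irradiance": 0.5, "slope": 0.2, "grid_distance": 0.3}

def wlc(site):
    """Weighted linear combination: suitability = sum of weight_i * score_i."""
    return sum(weights[c] * v for c, v in site.items())

ranked = sorted(sites, key=lambda s: wlc(sites[s]), reverse=True)
print(ranked[0])  # → B
```

In a real SDSS the same combination runs per raster cell over GIS layers; the user-preference aspect enters through the weight vector.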
Li, Honghe; Liu, Yang; Wen, Deliang
2017-01-01
Background Over the last three decades, various instruments were developed and employed to assess medical professionalism, but their measurement properties have yet to be fully evaluated. This study aimed to systematically evaluate these instruments’ measurement properties and the methodological quality of their related studies within a universally acceptable standardized framework and then provide corresponding recommendations. Methods A systematic search of the electronic databases PubMed, Web of Science, and PsycINFO was conducted to collect studies published from 1990–2015. After screening titles, abstracts, and full texts for eligibility, the articles included in this study were classified according to their respective instrument’s usage. A two-phase assessment was conducted: 1) methodological quality was assessed by following the COnsensus-based Standards for the selection of health status Measurement INstruments (COSMIN) checklist; and 2) the quality of measurement properties was assessed according to Terwee’s criteria. Results were integrated using best-evidence synthesis to identify recommendable instruments. Results After screening 2,959 records, 74 instruments from 80 existing studies were included. The overall methodological quality of these studies was unsatisfactory, for reasons including but not limited to unknown missing data, inadequate sample sizes, and vague hypotheses. Content validity, cross-cultural validity, and criterion validity were either unreported or received negative ratings in most studies. Based on best-evidence synthesis, three instruments were recommended: Hisar’s instrument for nursing students, Nurse Practitioners’ Roles and Competencies Scale, and Perceived Faculty Competency Inventory. Conclusion Although instruments measuring medical professionalism are diverse, only a limited number of studies were methodologically sound.
Future studies should give priority to systematically improving the performance of existing instruments and to longitudinal studies. PMID:28498838
Probabilistic Based Modeling and Simulation Assessment
2010-06-01
…different crash and blast scenarios. With the integration of the high-fidelity neck and head model, a methodology to calculate the probability of injury… variability, correlation, and multiple (often competing) failure metrics. Important scenarios include vehicular collisions, blast/fragment impact, and… first area of focus is to develop a methodology to integrate probabilistic analysis into finite element analysis of vehicle collisions and blast.
ERIC Educational Resources Information Center
Restauri, Sherri L.; King, Franklin L.; Nelson, J. Gordon
Two of the most popular delivery formats in distance education are video conferencing and online methodologies. The first step in the processes of recognition and reorganization needed for both forms of distance education is to identify the differences between the traditional classroom environment and the classroom that is augmented or replaced by…
Methodology, Methods, and Metrics for Testing and Evaluating Augmented Cognition Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greitzer, Frank L.
The augmented cognition research community seeks cognitive neuroscience-based solutions to improve warfighter performance by applying and managing mitigation strategies to reduce workload and improve the throughput and quality of decisions. The focus of augmented cognition mitigation research is to define, demonstrate, and exploit neuroscience and behavioral measures that support inferences about the warfighter’s cognitive state that prescribe the nature and timing of mitigation. A research challenge is to develop valid evaluation methodologies, metrics and measures to assess the impact of augmented cognition mitigations. Two considerations are external validity, which is the extent to which the results apply to operational contexts; and internal validity, which reflects the reliability of performance measures and the conclusions based on analysis of results. The scientific rigor of the research methodology employed in conducting empirical investigations largely affects the validity of the findings. External validity requirements also compel us to demonstrate operational significance of mitigations. Thus it is important to demonstrate effectiveness of mitigations under specific conditions. This chapter reviews some cognitive science and methodological considerations in designing augmented cognition research studies and associated human performance metrics and analysis methods to assess the impact of augmented cognition mitigations.
NASA Astrophysics Data System (ADS)
McPhee, J.; William, Y. W.
2005-12-01
This work presents a methodology for pumping test design based on the reliability requirements of a groundwater model. Reliability requirements take into consideration the application of the model results in groundwater management, expressed in this case as a multiobjective management model. The pumping test design is formulated as a mixed-integer nonlinear programming (MINLP) problem and solved using a combination of a genetic algorithm (GA) and gradient-based optimization. Bayesian decision theory provides a formal framework for assessing the influence of parameter uncertainty on the reliability of the proposed pumping test. The proposed methodology is useful for selecting a robust design that will outperform all other candidate designs under most potential 'true' states of the system.
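The Bayesian flavor of the design problem, averaging estimation error over parameter values drawn from the prior, can be illustrated with a toy drawdown model. This is not the authors' MINLP/GA formulation; the simulator, prior, and candidate designs below are all invented for illustration.

```python
import random

def observe(K, wells, noise, rng):
    """Toy 'pumping test': drawdown at each observation well ~ 1/(K*d) + noise,
    where d is well distance and K a conductivity-like parameter."""
    return [1.0 / (K * d) + rng.gauss(0, noise) for d in wells]

def estimate_K(data, wells):
    """Invert the toy model by least squares over a coarse grid of K values."""
    grid = [k / 100 for k in range(10, 200)]
    def sse(K):
        return sum((y - 1.0 / (K * d)) ** 2 for y, d in zip(data, wells))
    return min(grid, key=sse)

def expected_error(wells, rng, trials=200):
    """Preposterior design criterion: mean squared estimation error of K,
    averaged over 'true' parameter values drawn from the prior."""
    err = 0.0
    for _ in range(trials):
        K = rng.uniform(0.5, 1.5)        # prior on the conductivity parameter
        data = observe(K, wells, noise=0.05, rng=rng)
        err += (estimate_K(data, wells) - K) ** 2
    return err / trials

rng = random.Random(0)
designs = {"near": [1, 2, 3], "spread": [1, 5, 10]}
best = min(designs, key=lambda d: expected_error(designs[d], rng))
print(best)
```

Replacing the brute-force minimum over two designs with a GA search over many candidate well configurations recovers the spirit of the paper's approach.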
Bare, Jane; Gloria, Thomas; Norris, Gregory
2006-08-15
Normalization is an optional step within Life Cycle Impact Assessment (LCIA) that may be used to assist in the interpretation of life cycle inventory data as well as life cycle impact assessment results. Normalization transforms the magnitude of LCI and LCIA results into relative contributions by substance and life cycle impact category. Normalization can thus significantly influence LCA-based decisions when tradeoffs exist. The U.S. Environmental Protection Agency (EPA) has developed a normalization database based on the spatial scale of the 48 continental U.S. states, Hawaii, Alaska, the District of Columbia, and Puerto Rico, with a one-year reference time frame. Data within the normalization database were compiled based on the impact methodologies and lists of stressors used in TRACI, the EPA's Tool for the Reduction and Assessment of Chemical and other environmental Impacts. The new normalization database published within this article may be used for LCIA case studies within the United States and can assist in the further development of a global normalization database. The underlying data analyzed for the development of this database are included to allow the development of normalization data consistent with other impact assessment methodologies as well.
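Mechanically, normalization divides each characterized impact result by a regional annual reference total, yielding dimensionless relative contributions; the sketch below uses hypothetical category names and totals, not values from the EPA database.

```python
# Hypothetical characterized LCIA results for one product system
impacts = {"global_warming_kgCO2e": 1.2e4, "acidification_kgSO2e": 35.0}

# Hypothetical annual reference totals for the normalization region
reference = {"global_warming_kgCO2e": 7.0e12, "acidification_kgSO2e": 2.0e10}

def normalize(impacts, reference):
    """Normalization step: divide each category result by the regional
    annual total, giving each category a comparable relative scale."""
    return {c: impacts[c] / reference[c] for c in impacts}

for cat, share in normalize(impacts, reference).items():
    print(cat, f"{share:.2e}")
```

Note how the two categories, incomparable in their native units, end up on a common relative scale; this is why the choice of reference totals can swing tradeoff decisions.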
O'Brien, Kelly K; Colquhoun, Heather; Levac, Danielle; Baxter, Larry; Tricco, Andrea C; Straus, Sharon; Wickerson, Lisa; Nayar, Ayesha; Moher, David; O'Malley, Lisa
2016-07-26
Scoping studies (or reviews) are a method used to comprehensively map evidence across a range of study designs in an area, with the aim of informing future research practice, programs and policy. However, no universal agreement exists on terminology, definition or methodological steps. Our aim was to understand the experiences of, and considerations for, conducting scoping studies from the perspective of academic and community partners. Primary objectives were to 1) describe experiences conducting scoping studies, including strengths and challenges; and 2) describe perspectives on terminology, definition, and methodological steps. We conducted a cross-sectional web-based survey with clinicians, educators, researchers, knowledge users, representatives from community-based organizations, graduate students, and policy stakeholders with experience and/or interest in conducting scoping studies to gain an understanding of experiences and perspectives on the conduct and reporting of scoping studies. We administered an electronic self-reported questionnaire comprising 22 items related to experiences with scoping studies, strengths and challenges, opinions on terminology, and methodological steps. We analyzed questionnaire data using descriptive statistics and content analytical techniques. Survey results were discussed during a multi-stakeholder consultation to identify key considerations in the conduct and reporting of scoping studies. Of the 83 invitations, 54 individuals (65%) completed the scoping questionnaire, and 48 (58%) attended the scoping study meeting from Canada, the United Kingdom and the United States. Many scoping study strengths were dually identified as challenges, including breadth of scope and the iterative process.
No consensus on terminology emerged; however, key defining features that comprised a working definition of scoping studies included the exploratory mapping of literature in a field; an iterative process; inclusion of grey literature; no quality assessment of included studies; and an optional consultation phase. We offer considerations for the conduct and reporting of scoping studies for researchers, clinicians and knowledge users engaging in this methodology. Lack of consensus on scoping terminology, definition and methodological steps persists. Reasons for this may be attributed to the diversity of disciplines adopting this methodology for differing purposes. Further work is needed to establish guidelines on the reporting and methodological quality assessment of scoping studies.
Ferragina, Paolo; Giancarlo, Raffaele; Greco, Valentina; Manzini, Giovanni; Valiente, Gabriel
2007-07-13
Similarity of sequences is a key mathematical notion for Classification and Phylogenetic studies in Biology. It is currently primarily handled using alignments. However, the alignment methods seem inadequate for post-genomic studies since they do not scale well with data set size and they seem to be confined only to genomic and proteomic sequences. Therefore, alignment-free similarity measures are actively pursued. Among those, USM (Universal Similarity Metric) has gained prominence. It is based on the deep theory of Kolmogorov Complexity and universality is its most novel striking feature. Since it can only be approximated via data compression, USM is a methodology rather than a formula quantifying the similarity of two strings. Three approximations of USM are available, namely UCD (Universal Compression Dissimilarity), NCD (Normalized Compression Dissimilarity) and CD (Compression Dissimilarity). Their applicability and robustness is tested on various data sets yielding a first massive quantitative estimate that the USM methodology and its approximations are of value. Despite the rich theory developed around USM, its experimental assessment has limitations: only a few data compressors have been tested in conjunction with USM and mostly at a qualitative level, no comparison among UCD, NCD and CD is available and no comparison of USM with existing methods, both based on alignments and not, seems to be available. We experimentally test the USM methodology by using 25 compressors, all three of its known approximations and six data sets of relevance to Molecular Biology. This offers the first systematic and quantitative experimental assessment of this methodology, that naturally complements the many theoretical and the preliminary experimental results available. Moreover, we compare the USM methodology both with methods based on alignments and not. We may group our experiments into two sets. 
The first one, performed via ROC (Receiver Operating Curve) analysis, aims at assessing the intrinsic ability of the methodology to discriminate and classify biological sequences and structures. A second set of experiments aims at assessing how well two commonly available classification algorithms, UPGMA (Unweighted Pair Group Method with Arithmetic Mean) and NJ (Neighbor Joining), can use the methodology to perform their task, their performance being evaluated against gold standards and with the use of well known statistical indexes, i.e., the F-measure and the partition distance. Based on the experiments, several conclusions can be drawn and, from them, novel valuable guidelines for the use of USM on biological data. The main ones are reported next. UCD and NCD are indistinguishable, i.e., they yield nearly the same values of the statistical indexes we have used, across experiments and data sets, while CD is almost always worse than both. UPGMA seems to yield better classification results with respect to NJ, i.e., better values of the statistical indexes (10% difference or above), on a substantial fraction of experiments, compressors and USM approximation choices. The compression program PPMd, based on PPM (Prediction by Partial Matching), for generic data, and Gencompress for DNA, are the best performers among the compression algorithms we have used, although the difference in performance, as measured by statistical indexes, between them and the other algorithms depends critically on the data set and may not be as large as expected. PPMd, used with UCD or NCD and UPGMA on sequence data, is very close in performance to the alignment methods, although worse (less than 2% difference on the F-measure). Yet, it scales well with data set size and it can work on data other than sequences.
In summary, our quantitative analysis naturally complements the rich theory behind USM and supports the conclusion that the methodology is worth using because of its robustness, flexibility, scalability, and competitiveness with existing techniques. In particular, the methodology applies to all biological data in textual format. The software and data sets are available under the GNU GPL at the supplementary material web page.
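The NCD approximation of USM is simple to sketch with a general-purpose compressor; the example below uses zlib (one of many possible compressors, not necessarily the paper's best performer) on synthetic byte strings.

```python
import random
import zlib

rng = random.Random(42)
a = bytes(rng.randrange(256) for _ in range(2000))
b = a[:1000] + bytes(rng.randrange(256) for _ in range(1000))  # shares half of a
r = bytes(rng.randrange(256) for _ in range(2000))             # unrelated string

def ncd(x, y):
    """Normalized Compression Distance:
    NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
    where C(s) is the compressed length of s."""
    cx, cy = len(zlib.compress(x, 9)), len(zlib.compress(y, 9))
    cxy = len(zlib.compress(x + y, 9))
    return (cxy - min(cx, cy)) / max(cx, cy)

print(ncd(a, b) < ncd(a, r))  # → True: the related pair is closer
```

The compressor's job is to exploit shared structure in the concatenation `x + y`; swapping zlib for a stronger compressor such as PPMd changes only the `C(s)` estimate, which is exactly the sense in which USM is a methodology rather than a single formula.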
Montecinos, P; Rodewald, A M
1994-06-01
The aim of this work was to assess and compare the achievements of medical students taught with a problem-based learning methodology. The information and comprehension categories of Bloom were tested in 17 medical students on four different occasions during the physiopathology course, using a multiple-choice knowledge test. There was a significant improvement in the number of correct answers towards the end of the course. It is concluded that these medical students obtained adequate learning achievements in the information subcategory of Bloom using a problem-based learning methodology during the physiopathology course.
A comprehensive risk assessment framework for offsite transportation of inflammable hazardous waste.
Das, Arup; Gupta, A K; Mazumder, T N
2012-08-15
A framework for risk assessment due to offsite transportation of hazardous wastes is designed based on the type of event that can be triggered by an accident of a hazardous waste carrier. The objective of this study is to design a framework for computing the risk to population associated with offsite transportation of inflammable and volatile wastes. The framework is based on the traditional definition of risk and is designed for conditions where accident databases are not available. The probability-based variable in the risk assessment framework is substituted by a composite accident index proposed in this study. The framework computes the impacts due to a volatile cloud explosion based on the TNO Multi-Energy model. The methodology also estimates the vulnerable population in terms of disability-adjusted life years (DALY), which takes into consideration the demographic profile of the population and the degree of mortality and morbidity sustained. The methodology is illustrated using a case study of a pharmaceutical industry in the Kolkata metropolitan area. Copyright © 2012 Elsevier B.V. All rights reserved.
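The DALY aggregation mentioned above can be sketched with the standard burden-of-disease decomposition, DALY = YLL + YLD. The numbers below are hypothetical; the study's actual demographic weighting is not reproduced.

```python
def daly(deaths, years_lost_per_death, injured, disability_weight, years_disabled):
    """DALY = years of life lost (YLL) + years lived with disability (YLD)."""
    yll = deaths * years_lost_per_death
    yld = injured * disability_weight * years_disabled
    return yll + yld

# Hypothetical accident outcome near the transport route:
# 2 fatalities (35 life-years lost each), 10 injured (weight 0.4, 5 years).
print(daly(2, 35, 10, 0.4, 5))  # 70 + 20 = 90.0
```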
NASA Technical Reports Server (NTRS)
Padula, Santo, II
2009-01-01
The ability to sufficiently measure orbiter window defects to allow for window recertification has been an ongoing challenge for the orbiter vehicle program. The recent Columbia accident has forced even tighter constraints on the criteria that must be met to recertify windows for flight. As a result, new techniques are being investigated to improve the reliability, accuracy, and resolution of the defect detection process. The methodology devised in this work, based on a vertical scanning interferometric (VSI) tool, shows great promise for meeting the ever-increasing requirements for defect detection. It offers potentially 10- to 100-fold greater resolution of the true defect depth than the currently employed micrometer-based methodology. An added benefit is that it also produces a digital elevation map of the defect, providing information about defect morphology that can be used to ascertain the type of debris that induced the damage. However, successfully implementing such a tool requires a greater understanding of its resolution capability and measurement repeatability. This work focused on assessing the variability of the VSI-based measurement methodology and revealed that the VSI tool was more repeatable and more precise than the current micrometer-based approach, even in situations where operator variation could affect the measurement. The analysis also showed that the VSI technique was relatively insensitive to the hardware and software settings employed, making the technique extremely robust and desirable.
Systematic review adherence to methodological or reporting quality.
Pussegoda, Kusala; Turner, Lucy; Garritty, Chantelle; Mayhew, Alain; Skidmore, Becky; Stevens, Adrienne; Boutron, Isabelle; Sarkis-Onofre, Rafael; Bjerre, Lise M; Hróbjartsson, Asbjørn; Altman, Douglas G; Moher, David
2017-07-19
Guidelines for assessing the methodological and reporting quality of systematic reviews (SRs) were developed to contribute to implementing evidence-based health care and reducing research waste. As SRs assessing cohorts of SRs are becoming more prevalent in the literature, and with the increased uptake of SR evidence for decision-making, the methodological quality and standard of reporting of SRs are of interest. The objective of this study is to evaluate SR adherence to the Quality of Reporting of Meta-analyses (QUOROM) and PRISMA reporting guidelines and to the A Measurement Tool to Assess Systematic Reviews (AMSTAR) and Overview Quality Assessment Questionnaire (OQAQ) quality assessment tools, as evaluated in methodological overviews. The Cochrane Library, MEDLINE®, and EMBASE® databases were searched from January 1990 to October 2014. Title and abstract screening and full-text screening were conducted independently by two reviewers. Reports assessing the quality or reporting of a cohort of SRs of interventions using PRISMA, QUOROM, OQAQ, or AMSTAR were included. All results are reported as frequencies and percentages of reports and SRs, respectively. Of the 20,765 independent records retrieved from electronic searching, 1189 reports were reviewed for eligibility at full text, of which 56 reports (5371 SRs in total) evaluating the PRISMA, QUOROM, AMSTAR, and/or OQAQ tools were included. Notable items include the following: of the SRs using PRISMA, over 85% (1532/1741) provided a rationale for the review and less than 6% (102/1741) provided protocol information. For reports using QUOROM, only 9% (40/449) of SRs provided a trial flow diagram; however, 90% (402/449) described the explicit clinical problem and review rationale in the introduction section. Of reports using AMSTAR, 30% (534/1794) used duplicate study selection and data extraction. Conversely, 80% (1439/1794) of SRs provided study characteristics of included studies.
In terms of OQAQ, 37% (499/1367) of the SRs assessed risk of bias (validity) in the included studies, while 80% (1112/1387) reported the criteria for study selection. Although reporting guidelines and quality assessment tools exist, reporting and methodological quality of SRs are inconsistent. Mechanisms to improve adherence to established reporting guidelines and methodological assessment tools are needed to improve the quality of SRs.
NASA Astrophysics Data System (ADS)
Guler Yigitoglu, Askin
In the context of long operation of nuclear power plants (NPPs) (i.e., 60-80 years, and beyond), investigation of the aging of passive systems, structures and components (SSCs) is important to assess safety margins and to decide on reactor life extension, as indicated within the U.S. Department of Energy (DOE) Light Water Reactor Sustainability (LWRS) Program. In the traditional probabilistic risk assessment (PRA) methodology, evaluating the potential significance of aging of passive SSCs on plant risk is challenging. Although passive SSC failure rates can be added as initiating event frequencies or basic event failure rates in the traditional event-tree/fault-tree methodology, these failure rates are generally based on generic plant failure data, which means that aging effects on the true state of a specific plant are not reflected in a realistic manner. Dynamic PRA methodologies have gained attention recently due to their capability to account for the plant state and thus address the difficulties in the traditional PRA modeling of aging effects of passive components using physics-based models (and also in the modeling of digital instrumentation and control systems). Physics-based models can capture the impact of complex aging processes (e.g., fatigue, stress corrosion cracking, flow-accelerated corrosion, etc.) on SSCs and can be utilized to estimate passive SSC failure rates using realistic NPP data from reactor simulation, as well as considering effects of surveillance and maintenance activities. The objectives of this dissertation are twofold: the development of a methodology for the incorporation of aging modeling of passive SSCs into a reactor simulation environment, to provide a framework for evaluation of their risk contribution in both dynamic and traditional PRA; and the demonstration of the methodology through its application to pressurizer surge line pipe welds and steam generator tubes in commercial nuclear power plants.
In the proposed methodology, a multi-state physics-based model is selected to represent the aging process. The model is modified via a sojourn-time approach to reflect the operational and maintenance history dependence of the transition rates. Thermal-hydraulic parameters of the model are calculated via the reactor simulation environment, and uncertainties associated with both parameters and the models are assessed via a two-loop Monte Carlo approach (Latin hypercube sampling) to propagate input probability distributions through the physical model. The effort documented in this thesis towards this overall objective consists of: i) defining a process for selecting critical passive components and related aging mechanisms, ii) aging model selection, iii) calculating the probability that aging would cause the component to fail, iv) uncertainty/sensitivity analyses, v) procedure development for modifying an existing PRA to accommodate consideration of passive component failures, and vi) including the calculated failure probability in the modified PRA. The proposed methodology is applied to pressurizer surge line pipe weld aging and steam generator tube degradation in pressurized water reactors.
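The Latin hypercube sampling step can be illustrated with a minimal stdlib-only sketch (not the dissertation's code): each input dimension is split into n equal-probability strata, one point is drawn per stratum, and the strata are shuffled independently per dimension before being combined into sample points.

```python
import random

def latin_hypercube(n_samples: int, n_dims: int, rng: random.Random):
    """Return n_samples points in [0, 1)^n_dims with exactly one point per
    equal-probability stratum in every dimension."""
    cols = []
    for _ in range(n_dims):
        # One draw inside each stratum [i/n, (i+1)/n) ...
        col = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(col)  # ... then decorrelate strata across dimensions.
        cols.append(col)
    return [tuple(col[i] for col in cols) for i in range(n_samples)]

rng = random.Random(7)
pts = latin_hypercube(10, 2, rng)
# Stratification check: each dimension hits every decile exactly once.
for d in range(2):
    assert sorted(int(p[d] * 10) for p in pts) == list(range(10))
```

Each point would then be mapped through the input distributions' inverse CDFs and run through the physical aging model.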
Enomoto, Catherine B.; Trippi, Michael H.; Higley, Debra K.; Rouse, William A.; Dulong, Frank T.; Klett, Timothy R.; Mercier, Tracey J.; Brownfield, Michael E.; Leathers-Miller, Heidi M.; Finn, Thomas M.; Marra, Kristen R.; Le, Phuong A.; Woodall, Cheryl A.; Schenk, Christopher J.
2018-04-19
Using a geology-based assessment methodology, the U.S. Geological Survey estimated mean undiscovered, technically recoverable continuous resources of 10.7 trillion cubic feet of natural gas in Upper Devonian shales of the Appalachian Basin Province.
Schenk, Christopher J.; Tennyson, Marilyn E.; Mercier, Tracey J.; Klett, Timothy R.; Finn, Thomas M.; Le, Phuong A.; Brownfield, Michael E.; Gaswirth, Stephanie B.; Marra, Kristen R.; Hawkins, Sarah J.; Leathers-Miller, Heidi M.; Pitman, Janet K.
2016-05-12
Using a geology-based assessment methodology, the U.S. Geological Survey estimated mean conventional resources of 68 million barrels of oil and 964 billion cubic feet of gas in the Cooper and Eromanga Basins of Australia.
Schenk, Christopher J.; Klett, Timothy R.; Tennyson, Marilyn E.; Mercier, Tracey J.; Brownfield, Michael E.; Pitman, Janet K.; Gaswirth, Stephanie B.; Finn, Thomas M.
2016-12-09
Using a geology-based assessment methodology, the U.S. Geological Survey estimated a mean of 20 trillion cubic feet of undiscovered, technically recoverable coalbed gas resource in the Central and South Sumatra Basin Provinces of Indonesia.
Schenk, Christopher J.; Brownfield, Michael E.; Charpentier, Ronald R.; Cook, Troy A.; Klett, Timothy R.; Pitman, Janet K.; Pollastro, Richard M.; Weaver, Jean N.
2012-01-01
Using a geology-based assessment methodology, the U.S. Geological Survey estimated means of 19 billion barrels of oil and 83 trillion cubic feet of undiscovered natural gas resources in 10 geologic provinces of Mexico, Guatemala, and Belize.
Schenk, Christopher J.; Brownfield, Michael E.; Charpentier, Ronald R.; Cook, Troy A.; Klett, Timothy R.; Kirschbaum, Mark A.; Pitman, Janet K.; Pollastro, Richard M.; Tennyson, Marilyn E.
2012-01-01
Using a geology-based assessment methodology, the U.S. Geological Survey estimated means of 126 billion barrels of oil and 679 trillion cubic feet of undiscovered natural gas in 31 geologic provinces of South America and the Caribbean.
Assessment of undiscovered conventional oil and gas resources of North Africa, 2012
Schenk, Christopher J.; Klett, Timothy R.; Whidden, Katherine J.; Kirschbaum, Mark A.; Charpentier, Ronald R.; Cook, Troy A.; Brownfield, Michael E.; Pitman, Janet K.
2013-01-01
Using a geology-based assessment methodology, the U.S. Geological Survey estimated means of 19 billion barrels of technically recoverable undiscovered conventional oil and 370 trillion cubic feet of undiscovered conventional natural gas resources in 8 geologic provinces of North Africa.
USEPA SHEDS MODEL: METHODOLOGY FOR EXPOSURE ASSESSMENT FOR WOOD PRESERVATIVES
A physically-based, Monte Carlo probabilistic model (SHEDS-Wood: Stochastic Human Exposure and Dose Simulation model for wood preservatives) has been applied to assess the exposure and dose of children to arsenic (As) and chromium (Cr) from contact with chromated copper arsenat...
Assessment of continuous oil and gas resources in the San Jorge Basin Province, Argentina, 2017
Schenk, Christopher J.; Mercier, Tracey J.; Hawkins, Sarah J.; Tennyson, Marilyn E.; Marra, Kristen R.; Finn, Thomas M.; Le, Phuong A.; Brownfield, Michael E.; Leathers-Miller, Heidi M.; Woodall, Cheryl A.
2017-07-18
Using a geology-based assessment methodology, the U.S. Geological Survey estimated mean undiscovered, technically recoverable resources of 78 million barrels of oil and 8.9 trillion cubic feet of gas in the San Jorge Basin Province, Argentina.
Assessing School Readiness for a Practice Arrangement Using Decision Tree Methodology.
ERIC Educational Resources Information Center
Barger, Sara E.
1998-01-01
Questions in a decision-tree address mission, faculty interest, administrative support, and practice plan as a way of assessing arrangements for nursing faculty's clinical practice. Decisions should be based on congruence between the human resource allocation and the reward systems. (SK)
Assessment of shale-oil resources of the Central Sumatra Basin, Indonesia, 2015
Schenk, Christopher J.; Charpentier, Ronald R.; Klett, Timothy R.; Tennyson, Marilyn E.; Mercier, Tracey J.; Brownfield, Michael E.; Pitman, Janet K.; Gaswirth, Stephanie B.; Leathers-Miller, Heidi M.
2015-11-12
Using a geology-based assessment methodology, the U.S. Geological Survey estimated means of 459 million barrels of shale oil, 275 billion cubic feet of associated gas, and 23 million barrels of natural gas liquids in the Central Sumatra Basin, Indonesia.
An Event-Based Approach to Design a Teamwork Training Scenario and Assessment Tool in Surgery.
Nguyen, Ngan; Watson, William D; Dominguez, Edward
2016-01-01
Simulation is a technique recommended for teaching and measuring teamwork, but few published methodologies are available on how best to design simulation for teamwork training in surgery and health care in general. The purpose of this article is to describe a general methodology, called the event-based approach to training (EBAT), to guide the design of simulation for teamwork training and to discuss its application to surgery. The EBAT methodology draws on the science of training by systematically introducing training exercise events that are linked to training requirements (i.e., the competencies being trained and learning objectives) and performance assessment. The EBAT process was applied as follows: of the 4 teamwork competencies endorsed by the Agency for Healthcare Research and Quality and the Department of Defense, "communication" was chosen to be the focus of our training efforts. A total of 5 learning objectives were defined based on 5 validated teamwork and communication techniques. Diagnostic laparoscopy was chosen as the clinical context to frame the training scenario, and 29 KSAs (knowledge, skills, and attitudes) were defined based on a review of the published literature on patient safety and input from subject matter experts. Critical events included those that correspond to a specific phase in the normal flow of a surgical procedure as well as clinical events that may occur when performing the operation. Similar to the targeted KSAs, targeted responses to the critical events were developed based on existing literature and input from content experts. Finally, a 29-item EBAT-derived checklist was created to assess communication performance. Like any instructional tool, simulation is only effective if it is designed and implemented appropriately.
It is recognized that the effectiveness of simulation depends on whether (1) it is built upon a theoretical framework, (2) it uses preplanned structured exercises or events to allow learners the opportunity to exhibit the targeted KSAs, (3) it assesses performance, and (4) it provides formative and constructive feedback to bridge the gap between the learners' KSAs and the targeted KSAs. The EBAT methodology guides the design of simulation that incorporates these 4 features and, thus, enhances training effectiveness with simulation. Copyright © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Ownsworth, Tamara; Haslam, Catherine
2016-01-01
To date, reviews of rehabilitation efficacy after traumatic brain injury (TBI) have overlooked the impact on sense of self, focusing instead on functional impairment and psychological distress. The present review sought to address this gap by critically appraising the methodology and efficacy of intervention studies that assess changes in self-concept. A systematic search of PsycINFO, Medline, CINAHL and PubMed was conducted from inception to September 2013 to identify studies reporting pre- and post-intervention changes on validated measures of self-esteem or self-concept in adults with TBI. Methodological quality of randomised controlled trials (RCTs) was examined using the Physiotherapy Evidence Database (PEDro) scale. A total of 17 studies (10 RCTs, 4 non-RCT group studies, 3 case studies) was identified, which examined the impact of psychotherapy, family-based support, cognitive rehabilitation or activity-based interventions on self-concept. The findings on the efficacy of these interventions were mixed, with only 10 studies showing some evidence of improvement in self-concept based on within-group or pre-post comparisons. Such findings highlight the need for greater focus on the impact of rehabilitation on self-understanding with improved assessment and intervention methodology. We draw upon theories of identity reconstruction and highlight implications for the design and evaluation of identity-oriented interventions that can supplement existing rehabilitation programmes for people with TBI.
Gómez-García, Francisco; Ruano, Juan; Aguilar-Luque, Macarena; Alcalde-Mellado, Patricia; Gay-Mimbrera, Jesús; Hernández-Romero, José Luis; Sanz-Cabanillas, Juan Luis; Maestre-López, Beatriz; González-Padilla, Marcelino; Carmona-Fernández, Pedro J; García-Nieto, Antonio Vélez; Isla-Tejera, Beatriz
2017-12-29
Article summaries' information and structure may influence researchers/clinicians' decisions to conduct deeper full-text analyses. Specifically, abstracts of systematic reviews (SRs) and meta-analyses (MA) should provide structured summaries for quick assessment. This study explored a method for determining the methodological quality and bias risk of full-text reviews using abstract information alone. Systematic literature searches for SRs and/or MA about psoriasis were undertaken on MEDLINE, EMBASE, and Cochrane database. For each review, quality, abstract-reporting completeness, full-text methodological quality, and bias risk were evaluated using Preferred Reporting Items for Systematic Reviews and Meta-analyses for abstracts (PRISMA-A), Assessing the Methodological Quality of Systematic Reviews (AMSTAR), and ROBIS tools, respectively. Article-, author-, and journal-derived metadata were systematically extracted from eligible studies using a piloted template, and explanatory variables concerning abstract-reporting quality were assessed using univariate and multivariate-regression models. Two classification models concerning SRs' methodological quality and bias risk were developed based on per-item and total PRISMA-A scores and decision-tree algorithms. This work was supported, in part, by project ICI1400136 (JR). No funding was received from any pharmaceutical company. This study analysed 139 SRs on psoriasis interventions. On average, they featured 56.7% of PRISMA-A items. The mean total PRISMA-A score was significantly higher for high-methodological-quality SRs than for moderate- and low-methodological-quality reviews. SRs with low-bias risk showed higher total PRISMA-A values than reviews with high-bias risk. In the final model, only 'authors per review > 6' (OR: 1.098; 95%CI: 1.012-1.194), 'academic source of funding' (OR: 3.630; 95%CI: 1.788-7.542), and 'PRISMA-endorsed journal' (OR: 4.370; 95%CI: 1.785-10.98) predicted PRISMA-A variability. 
Reviews with a total PRISMA-A score < 6 that lacked identification as an SR or MA in the title and lacked an explanation of the bias risk assessment method were classified as having low methodological quality. Abstracts with a total PRISMA-A score ≥ 9 that included main outcome results and an explanation of the bias risk assessment method were classified as having low bias risk. The methodological quality and bias risk of SRs may thus be determined by analysing the quality and completeness of their abstracts. Our proposal aims to facilitate the evaluation of evidence syntheses by clinical professionals lacking methodological skills. External validation is necessary.
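The published cut-offs can be restated as a toy rule set. This is a paraphrase of the abstract's criteria, not the authors' fitted decision-tree model, and the boolean item names are hypothetical.

```python
def methodological_quality(total_prisma_a: int, sr_or_ma_in_title: bool,
                           explains_bias_method: bool) -> str:
    """Low quality: total score < 6, no SR/MA identification in the title,
    and no explanation of the bias risk assessment method."""
    if total_prisma_a < 6 and not sr_or_ma_in_title and not explains_bias_method:
        return "low methodological quality"
    return "not classified as low"

def bias_risk(total_prisma_a: int, reports_main_outcomes: bool,
              explains_bias_method: bool) -> str:
    """Low bias risk: total score >= 9, main outcome results reported,
    and bias risk assessment method explained."""
    if total_prisma_a >= 9 and reports_main_outcomes and explains_bias_method:
        return "low bias risk"
    return "not classified as low"

print(methodological_quality(5, False, False))  # → low methodological quality
print(bias_risk(10, True, True))                # → low bias risk
```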
An Energy Storage Assessment: Using Optimal Control Strategies to Capture Multiple Services
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Di; Jin, Chunlian; Balducci, Patrick J.
2015-09-01
This paper presents a methodology for evaluating the benefits of battery storage for multiple grid applications, including energy arbitrage, balancing service, capacity value, distribution system equipment deferral, and outage mitigation. In the proposed method, at each hour a look-ahead optimization is first formulated and solved to determine the battery base operating point. A minute-by-minute simulation is then performed to simulate actual battery operation. This methodology is used to assess energy storage alternatives in the Puget Sound Energy system. Different battery storage candidates are simulated for a period of one year to assess different value streams and overall benefits, as part of a financial feasibility evaluation of battery storage projects.
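The look-ahead arbitrage idea can be caricatured with a greedy, one-cycle scheduling sketch over a price horizon. The prices, capacity, and power rating below are hypothetical, and the paper's actual hourly optimization and minute-by-minute simulation are not reproduced; round-trip efficiency and degradation are ignored.

```python
def arbitrage_schedule(prices, capacity_kwh, power_kw):
    """Greedy one-cycle sketch: charge in the cheapest hours, discharge in
    the dearest. Positive schedule entries = selling, negative = buying."""
    n = round(capacity_kwh / power_kw)  # hours needed for a full charge
    assert 2 * n <= len(prices), "horizon too short for a full cycle"
    order = sorted(range(len(prices)), key=lambda h: prices[h])
    charge_hours, discharge_hours = set(order[:n]), set(order[-n:])
    schedule = []
    for h in range(len(prices)):
        if h in charge_hours:
            schedule.append(-power_kw)
        elif h in discharge_hours:
            schedule.append(power_kw)
        else:
            schedule.append(0.0)
    revenue = sum(p * q for p, q in zip(prices, schedule))
    return schedule, revenue

prices = [20, 18, 15, 16, 25, 40, 55, 50, 35, 30, 28, 22]  # hypothetical $/MWh
sched, rev = arbitrage_schedule(prices, capacity_kwh=4, power_kw=2)
print(rev > 0)  # → True
```

A real assessment would replace the greedy pick with an optimization over all value streams simultaneously, which is the point of the paper's method.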
Rubin, Katrine Hass; Friis-Holmberg, Teresa; Hermann, Anne Pernille; Abrahamsen, Bo; Brixen, Kim
2013-08-01
A huge number of risk assessment tools have been developed. Far from all have been validated in external studies, many lack methodological and transparent evidence, and few are integrated into national guidelines. We therefore performed a systematic review to provide an overview of existing valid and reliable risk assessment tools for the prediction of osteoporotic fractures. Additionally, we aimed to determine whether the performance of each tool was sufficient for practical use and, last, to examine whether the complexity of the tools influenced their discriminative power. We searched PubMed, Embase, and Cochrane databases for papers and evaluated these with respect to methodological quality using the Quality Assessment Tool for Diagnostic Accuracy Studies (QUADAS) checklist. A total of 48 tools were identified; 20 had been externally validated, but only six had been tested more than once in a population-based setting with acceptable methodological quality. None of the tools performed consistently better than the others, and simple tools (i.e., the Osteoporosis Self-assessment Tool [OST], Osteoporosis Risk Assessment Instrument [ORAI], and Garvan Fracture Risk Calculator [Garvan]) often did as well or better than more complex tools (i.e., Simple Calculated Osteoporosis Risk Estimation [SCORE], WHO Fracture Risk Assessment Tool [FRAX], and QFracture). No studies determined the effectiveness of tools in selecting patients for therapy and thus improving fracture outcomes. High-quality studies of randomized design with population-based cohorts with different case mixes are needed. Copyright © 2013 American Society for Bone and Mineral Research.
Larsen, Camilla Marie; Juul-Kristensen, Birgit; Lund, Hans; Søgaard, Karen
2014-10-01
The aims were to compile a schematic overview of clinical scapular assessment methods and critically appraise the methodological quality of the involved studies. A systematic, computer-assisted literature search using Medline, CINAHL, SportDiscus and EMBASE was performed from inception to October 2013. Reference lists in articles were also screened for publications. From 50 articles, 54 method names were identified and categorized into three groups: (1) Static positioning assessment (n = 19); (2) Semi-dynamic (n = 13); and (3) Dynamic functional assessment (n = 22). Fifteen studies were excluded for evaluation due to no/few clinimetric results, leaving 35 studies for evaluation. Graded according to the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN checklist), the methodological quality in the reliability and validity domains was "fair" (57%) to "poor" (43%), with only one study rated as "good". The reliability domain was most often investigated. Few of the assessment methods in the included studies that had "fair" or "good" measurement property ratings demonstrated acceptable results for both reliability and validity. We found a substantially larger number of clinical scapular assessment methods than previously reported. Using the COSMIN checklist the methodological quality of the included measurement properties in the reliability and validity domains were in general "fair" to "poor". None were examined for all three domains: (1) reliability; (2) validity; and (3) responsiveness. Observational evaluation systems and assessment of scapular upward rotation seem suitably evidence-based for clinical use. Future studies should test and improve the clinimetric properties, and especially diagnostic accuracy and responsiveness, to increase utility for clinical practice.
2011-01-01
Background: Comprehensive "Total Pain" assessments of patients' end-of-life needs are critical for improved patient-clinician communication, needs assessment, and high-quality palliative care. However, patients' needs-based research methodologies and findings remain highly diverse, and their lack of consensus prevents optimum needs assessments and care planning. Mixed methods is an underused yet robust "patient-based" approach for mapping both the incidence and prevalence of what patients perceive as important end-of-life needs from their reported lived experiences. Methods: Findings often include methodological artifacts and their own selection bias; moving beyond diverse findings therefore requires revisiting methodological choices. A cross-sectional mixed-methods design is used to reduce limitations inherent in both qualitative and quantitative methodologies. Audio-taped phenomenological "thinking aloud" interviews of a purposive sample of 30 hospice patients are used to identify their vocabulary for communicating perceptions of end-of-life needs. Grounded theory procedures assisted by QSR-NVivo software are then used to discover domains of needs embedded in the interview narratives. Summary findings are translated into quantified format for presentation and analytical purposes. Results: Findings from this mixed-methods feasibility study indicate patients' narratives represent 7 core domains of end-of-life needs: (1) time, (2) social, (3) physiological, (4) death and dying, (5) safety, (6) spirituality, and (7) change and adaptation. The prevalence, rather than just the occurrence, of patients' reported needs provides further insight into their relative importance. Conclusion: Patients' perceptions of end-of-life needs are multidimensional, often ambiguous, and uncertain.
Mixed methodology appears to hold considerable promise for unpacking both the occurrence and prevalence of cognitive structures, represented by verbal encoding, that constitute patients' narratives. Communication is a key currency for delivering optimal palliative care. Therefore, understanding the domains of needs that emerge from patient-based vocabularies indicates potential for: (1) developing more comprehensive clinical-patient needs assessment tools; (2) improved patient-clinician communication; and (3) moving toward a theoretical model of human needs that can emerge at the end of life. PMID:21272318
NASA Astrophysics Data System (ADS)
Martellozzo, Federico; Hendrickson, Cary; Gozdowska, Iga; Groß, Helge; Henderson, Charles; Reusser, Dominik
2015-04-01
Active participation in Community Based Initiatives (CBIs) is a spreading phenomenon that has reached a significant magnitude, and in some cases CBIs are thought to have catalysed social and technological innovation, thus contributing to the global transition to a low-carbon economy. Generally speaking, CBIs are grassroots initiatives with broad sustainability foci that promote a plethora of activities such as alternative transportation, urban gardening, renewable energy implementation, and waste regeneration/reduction. Some advocate that such practices, fostered by bottom-up activities rather than top-down policies, represent an effective countermeasure to global environmental change and foster a societal transition towards sustainability. However, thus far most empirical research rests mainly on anecdotal evidence, and little work has been done to quantitatively assess CBIs' environmental impacts (EIs) or their carbon footprints using comparative methodologies. The main aim of this research is to frame a methodology for assessing CBIs' EIs unambiguously, which is crucial to understanding their role in the societal sustainability transition. To do so, three main caveats need to be addressed. First, some CBIs do not directly produce tangible, measurable outputs, nor do they have an intelligibly defined set of inputs (e.g., CBIs focusing on environmental education and awareness raising); calculating their "indirect" EI may therefore represent an intricate puzzle that is very much open to subjective interpretation. Second, CBIs' practices are heterogeneous, and therefore existing methodologies for comparing their EIs are neither straightforward nor proficient, also given the lack of available data. Third, an issue closely related to the previous one is a general lack of consensus among existing impact-assessment frameworks for certain practices (e.g., composting).
A potential way of estimating a CBI's EI is a standard carbon accounting assessment in which all possible sources and inputs are assessed in terms of reduced EI in the conversion and production of outputs. However, this is a very complex and time-consuming task for which data availability issues abound. Alternatively, the EI per unit of output of each CBI can be evaluated and compared with the equivalent from a standard counterfactual, in a Comparative Carbon Accounting fashion. This yields an EI assessment (EIA) that is not activity-specific and can reasonably be used for wide-spectrum comparison regardless of a CBI's predominant activity. This paper first theoretically frames the obstacles to be overcome in conceptualizing a meaningful EI assessment. Second, context variables such as conversion factors and counterfactuals are established for numerous European CBIs in various countries (the latter were mapped by the TESS-Transition FP7 Project). Third, an original EI indicator for CBIs based on a Comparative Carbon Accounting methodology is proposed and tested. Finally, some preliminary findings from the application of this methodology to the investigated CBIs are presented, and a potential comparison of these preliminary results with some of the planetary boundaries is discussed. While we are aware that several caveats still need to be further explored and addressed, this novel application of a comparative methodology offers much to the existing literature on CBIs' impact assessment.
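In its simplest form, the comparative approach reduces to comparing emission factors per unit of output against a counterfactual. A minimal sketch with hypothetical numbers (the actual conversion factors and counterfactuals established in the project are not reproduced here):

```python
def comparative_ei(output_units: float, ef_cbi: float,
                   ef_counterfactual: float) -> float:
    """Avoided environmental impact = units of output ×
    (counterfactual emission factor − CBI emission factor)."""
    return output_units * (ef_counterfactual - ef_cbi)

# Hypothetical community garden: 500 kg of vegetables at 0.3 kg CO2e/kg
# locally, vs. 1.1 kg CO2e/kg via the standard supply chain.
avoided = comparative_ei(500, ef_cbi=0.3, ef_counterfactual=1.1)
print(round(avoided, 1))  # → 400.0 (kg CO2e avoided)
```

Because the result is expressed per unit of output relative to a counterfactual, CBIs with different predominant activities can be placed on a common scale.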
Marzocchini, Manrico; Tatàno, Fabio; Moretti, Michela Simona; Antinori, Caterina; Orilisi, Stefano
2018-06-05
A possible approach for determining soil and groundwater quality criteria for contaminated sites is the comparative risk assessment. Originating from but not limited to Italian interest in a decentralised (regional) implementation of comparative risk assessment, this paper first addresses the proposal of an original methodology called CORIAN REG-M, which was created with initial attention to the context of potentially contaminated sites in the Marche Region (Central Italy). To deepen the technical-scientific knowledge and applicability of the comparative risk assessment, the following characteristics of the CORIAN REG-M methodology appear to be relevant: the simplified but logical assumption of three categories of factors (source and transfer/transport of potential contamination, and impacted receptors) within each exposure pathway; the adaptation to quality and quantity of data that are available or derivable at the given scale of concern; the attention to a reliable but unsophisticated modelling; the achievement of a conceptual linkage to the absolute risk assessment approach; and the potential for easy updating and/or refining of the methodology. Further, the application of the CORIAN REG-M methodology to some case-study sites located in the Marche Region indicated the following: a positive correlation can be expected between air and direct contact pathway scores, as well as between individual pathway scores and the overall site scores based on a root-mean-square algorithm; the exposure pathway, which presents the highest variability of scores, tends to be dominant at sites with the highest computed overall site scores; and the adoption of a root-mean-square algorithm can be expected to emphasise the overall site scoring.
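The root-mean-square aggregation of pathway scores noted above can be written in a few lines. This is a generic RMS sketch with hypothetical pathway scores, not the methodology's actual scoring tables.

```python
import math

def overall_site_score(pathway_scores):
    """Root-mean-square aggregation: high-scoring pathways pull the overall
    score up more strongly than under a plain arithmetic mean."""
    return math.sqrt(sum(s * s for s in pathway_scores) / len(pathway_scores))

# Hypothetical air / direct-contact / groundwater pathway scores.
scores = [20.0, 35.0, 80.0]
print(round(overall_site_score(scores), 2))
print(overall_site_score(scores) > sum(scores) / len(scores))  # RMS ≥ mean → True
```

This emphasis on the dominant pathway is exactly why the abstract notes that the most variable pathway tends to drive the highest overall site scores.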
Putilov, Arcady A
2017-01-01
Differences between the so-called larks and owls representing the opposite poles of the morningness-eveningness dimension are widely known. However, scientific consensus has not yet been reached on the methodology for ranking and typing people along other dimensions of individual variation in their sleep-wake pattern. This review focused on the history and state-of-the-art of the methodology for self-assessment of individual differences in more than one trait or adaptability of the human sleep-wake cycle. The differences between this and other methodologies for the self-assessment of trait- and state-like variation in the perceived characteristics of daily rhythms were discussed and the critical issues that remained to be addressed in future studies were highlighted. These issues include a) a failure to develop a unidimensional scale for scoring chronotypological differences, b) the inconclusive results of the long-lasting search for objective markers of chronotype, c) a disagreement on both the number and content of scales required for multidimensional self-assessment of chronobiological differences, d) a lack of evidence for the reliability and/or external validity of most of the proposed scales and e) an insufficient development of conceptualizations, models and model-based quantitative simulations linking the differences between people in their sleep-wake pattern with the differences in the basic parameters of underlying chronoregulatory processes. It seems that, in the near future, the wide implementation of portable actigraphic and somnographic devices might lead to the development of objective methodologies for multidimensional assessment and classification of sleep-wake traits and adaptabilities.
ERIC Educational Resources Information Center
Dickenson, Tammiee S.; Gilmore, Joanna A.; Price, Karen J.; Bennett, Heather L.
2013-01-01
This study evaluated the benefits of item enhancements applied to science-inquiry items for incorporation into an alternate assessment based on modified achievement standards for high school students. Six items were included in the cognitive lab sessions involving both students with and without disabilities. The enhancements (e.g., use of visuals,…
ERIC Educational Resources Information Center
Saunders, Rebecca
2012-01-01
The purpose of this article is to describe the use of the Concerns Based Adoption Model (Hall & Hord, 2006) as a conceptual lens and practical methodology for professional development program assessment in the vocational education and training (VET) sector. In this sequential mixed-methods study, findings from the first two phases (two of…
Möhler, Christian; Wohlfahrt, Patrick; Richter, Christian; Greilich, Steffen
2017-06-01
Electron density is the most important tissue property influencing photon and ion dose distributions in radiotherapy patients. Dual-energy computed tomography (DECT) enables the determination of electron density by combining the information on photon attenuation obtained at two different effective x-ray energy spectra. Most algorithms suggested so far use the CT numbers provided after image reconstruction as input parameters, i.e., they are image-based. To explore the accuracy that can be achieved with these approaches, we quantify the intrinsic methodological and calibration uncertainty of the seemingly simplest approach. In the studied approach, electron density is calculated with a one-parametric linear superposition ('alpha blending') of the two DECT images, which is shown to be equivalent to an affine relation between the photon attenuation cross sections of the two x-ray energy spectra. We propose to use the latter relation for empirical calibration of the spectrum-dependent blending parameter. For a conclusive assessment of the electron density uncertainty, we chose to isolate the purely methodological uncertainty component from CT-related effects such as noise and beam hardening. Analyzing calculated spectrally weighted attenuation coefficients, we find universal applicability of the investigated approach to arbitrary mixtures of human tissue with an upper limit of the methodological uncertainty component of 0.2%, excluding high-Z elements such as iodine. The proposed calibration procedure is bias-free and straightforward to perform using standard equipment. Testing the calibration on five published data sets, we obtain very small differences in the calibration result in spite of different experimental setups and CT protocols used. Employing a general calibration per scanner type and voltage combination is thus conceivable.
Given the high suitability for clinical application of the alpha-blending approach in combination with a very small methodological uncertainty, we conclude that further refinement of image-based DECT-algorithms for electron density assessment is not advisable. © 2017 American Association of Physicists in Medicine.
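As a rough illustration of the one-parametric 'alpha blending' described above, the sketch below assumes a common formulation (our assumption, not necessarily the paper's exact parametrisation) in which each CT number is first converted to attenuation relative to water, u = HU/1000 + 1, and the blending parameter is fitted by least squares to calibration inserts of known relative electron density; all numbers and names are hypothetical:

```python
def reduced_ct(hu):
    # CT number (HU) to attenuation relative to water: u = HU/1000 + 1
    return hu / 1000.0 + 1.0

def electron_density(hu_low, hu_high, c):
    """One-parametric 'alpha blending' of the low- and high-kVp images:
    relative electron density as a weighted sum of the two reduced CT numbers."""
    return c * reduced_ct(hu_high) + (1.0 - c) * reduced_ct(hu_low)

def calibrate_c(samples):
    """Least-squares fit of the blending parameter c from calibration inserts
    of known relative electron density rho.
    samples: list of (hu_low, hu_high, rho_true) tuples."""
    num = den = 0.0
    for hu_l, hu_h, rho in samples:
        d = reduced_ct(hu_h) - reduced_ct(hu_l)  # sensitivity of the model to c
        r = rho - reduced_ct(hu_l)               # residual of the model at c = 0
        num += d * r
        den += d * d
    return num / den

# Hypothetical calibration inserts (HU at low/high kVp, known rho):
inserts = [(-50.0, -30.0, 0.97), (60.0, 40.0, 1.04), (900.0, 600.0, 1.52)]
c = calibrate_c(inserts)
rho = electron_density(60.0, 40.0, c)
```

Note that for water (HU = 0 in both images) the blend returns exactly 1.0 for any c, which is one reason this form is convenient to calibrate.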
Gaswirth, Stephanie B.; Marra, Kristen R.; Lillis, Paul G.; Mercier, Tracey J.; Leathers-Miller, Heidi M.; Schenk, Christopher J.; Klett, Timothy R.; Le, Phuong A.; Tennyson, Marilyn E.; Hawkins, Sarah J.; Brownfield, Michael E.; Pitman, Janet K.; Finn, Thomas M.
2016-11-15
Using a geology-based assessment methodology, the U.S. Geological Survey assessed technically recoverable mean resources of 20 billion barrels of oil and 16 trillion cubic feet of gas in the Wolfcamp shale in the Midland Basin part of the Permian Basin Province, Texas.
Assessment--Enabling Participation in Academic Discourse and the Implications
ERIC Educational Resources Information Center
Bayaga, Anass; Wadesango, Newman
2013-01-01
The current study was an exploration of how to develop assessment resources and processes via in-depth interviews with 30 teachers. The focus was on how teachers use and apply different assessment situations. The methodology, which was a predominately qualitative approach and adopted case study design, sought to use a set of criteria based on…
ERIC Educational Resources Information Center
Matrundola, Deborah La Torre; Chang, Sandy; Herman, Joan
2012-01-01
The purpose of these case studies was to examine the ways technology and professional development supported the use of the SimScientists assessment systems. Qualitative research methodology was used to provide narrative descriptions of six classes implementing simulation-based assessments for either the topic of Ecosystems or Atoms and Molecules.…
Tennyson, Marilyn E.; Charpentier, Ronald R.; Klett, Timothy R.; Brownfield, Michael E.; Pitman, Janet K.; Gaswirth, Stephanie B.; Hawkins, Sarah J.; Le, Phuong A.; Lillis, Paul G.; Marra, Kristen R.; Mercier, Tracey J.; Leathers-Miller, Heidi M.; Schenk, Christopher J.
2016-07-08
Using a geology-based assessment methodology, the U.S. Geological Survey assessed technically recoverable mean resources of 13 million barrels of oil, 22 billion cubic feet of gas, and 1 million barrels of natural gas liquids in the Monterey Formation of the Los Angeles Basin Province, California.
Lassi, Zohra S; Salam, Rehana A; Das, Jai K; Bhutta, Zulfiqar A
2014-01-01
This paper describes the conceptual framework and the methodology used to guide the systematic reviews of community-based interventions (CBIs) for the prevention and control of infectious diseases of poverty (IDoP). We adapted the conceptual framework from the 3ie work on the 'Community-Based Intervention Packages for Preventing Maternal Morbidity and Mortality and Improving Neonatal Outcomes' to aid in the analysis of existing CBIs for IDoP. The conceptual framework revolves around objectives, inputs, processes, outputs, outcomes, and impacts, showing the theoretical linkages between the delivery of the interventions targeting these diseases through various community delivery platforms and the consequent health impacts. We also describe the methodology undertaken to conduct the systematic reviews and the meta-analyses.
Feedback Effects of Teaching Quality Assessment: Macro and Micro Evidence
ERIC Educational Resources Information Center
Bianchini, Stefano
2014-01-01
This study investigates the feedback effects of teaching quality assessment. Previous literature looked separately at the evolution of individual and aggregate scores to understand whether instructors and university performance depends on its past evaluation. I propose a new quantitative-based methodology, combining statistical distributions and…
Assessment of undiscovered oil and gas resources in the Canning Basin Province, Australia, 2017
Schenk, Christopher J.; Tennyson, Marilyn E.; Mercier, Tracey J.; Woodall, Cheryl A.; Finn, Thomas M.; Le, Phuong A.; Brownfield, Michael E.; Gaswirth, Stephanie B.; Marra, Kristen R.; Leathers-Miller, Heidi M.
2018-05-31
Using a geology-based assessment methodology, the U.S. Geological Survey estimated mean undiscovered, technically recoverable resources of 1.3 billion barrels of oil and 34.4 trillion cubic feet of gas in the Canning Basin Province of Australia.
Brownfield, Michael E.; Schenk, Christopher J.; Klett, Timothy R.; Tennyson, Marilyn E.; Mercier, Tracey J.; Gaswirth, Stephanie B.; Marra, Kristen R.; Hawkins, Sarah J.; Finn, Thomas M.; Le, Phuong A.; Leathers-Miller, Heidi M.
2017-02-24
Using a geology-based assessment methodology, the U.S. Geological Survey estimated undiscovered, technically recoverable mean resources of 4.5 trillion cubic feet of coalbed gas in the Kalahari Basin Province of Botswana, Zambia, and Zimbabwe, Africa.
Assessment of undiscovered continuous gas resources of the Ordos Basin Province, China, 2015
Charpentier, Ronald R.; Klett, Timothy R.; Schenk, Christopher J.; Brownfield, Michael E.; Gaswirth, Stephanie B.; Le, Phuong A.; Leathers-Miller, Heidi M.; Marra, Kristen R.; Mercier, Tracey J.
2016-01-11
Using a geology-based assessment methodology, the U.S. Geological Survey estimated mean resources of 28 trillion cubic feet of tight gas and 5.6 trillion cubic feet of coalbed gas in upper Paleozoic rocks in the Ordos Basin Province, China.
2012-01-01
Using a performance-based geologic assessment methodology, the U.S. Geological Survey estimated a technically recoverable mean volume of 6.1 trillion cubic feet of potential shale gas in the Bombay, Cauvery, and Krishna-Godavari Provinces of India.
Assessment of continuous oil and gas resources in the Pannonian Basin Province, Hungary, 2016
Schenk, Christopher J.; Klett, Timothy R.; Le, Phuong A.; Brownfield, Michael E.; Leathers-Miller, Heidi M.
2017-06-29
Using a geology-based assessment methodology, the U.S. Geological Survey estimated mean undiscovered, technically recoverable continuous resources of 119 million barrels of oil and 944 billion cubic feet of gas in the Hungarian part of the Pannonian Basin Province.
Schenk, Christopher J.; Tennyson, Marilyn E.; Mercier, Tracey J.; Woodall, Cheryl A.; Finn, Thomas M.; Brownfield, Michael E.; Le, Phuong A.; Klett, Timothy R.; Gaswirth, Stephanie B.; Marra, Kristen R.; Leathers-Miller, Heidi M.; Potter, Christopher J.
2018-02-07
Using a geology-based assessment methodology, the U.S. Geological Survey estimated mean undiscovered, technically recoverable continuous resources of 2.0 billion barrels of oil and 20.3 trillion cubic feet of gas in the Bohaiwan Basin Province, China.
Integrating industrial seminars within a graduate engineering programme
NASA Astrophysics Data System (ADS)
Ringwood, John V.
2013-05-01
The benefit of external, often industry-based, speakers for a seminar series associated with both undergraduate and graduate programmes is relatively unchallenged. However, the means by which such a seminar series can be encapsulated within a structured learning module, and the appropriate design of an accompanying assessment methodology, is not so obvious. This paper examines how such a learning module can be formulated and addresses the main issues involved in the design of such a module, namely the selection of speakers, format of seminars, method of delivery and assessment methodology, informed by the objectives of the module.
An Integrated Science-based methodology
The data are secondary in nature, meaning that no data were generated as part of this review effort; rather, data that were available in the peer-reviewed literature were used. This dataset is associated with the following publication: Tolaymat, T., A. El Badawy, R. Sequeira, and A. Genaidy. An integrated science-based methodology to assess potential risks and implications of engineered nanomaterials. Diana Aga, Wonyong Choi, Andrew Daugulis, Gianluca Li Puma, Gerasimos Lyberatos, and Joo Hwa Tay, JOURNAL OF HAZARDOUS MATERIALS. Elsevier Science Ltd, New York, NY, USA, 298: 270-281, (2015).
Assessment of Undiscovered Petroleum Resources of Southern and Western Afghanistan, 2009
Wandrey, C.J.; Kosti, Amir Zada; Selab, Amir Mohammad; Omari, Mohammad Karim; Muty, Salam Abdul; Nakshband, Haidari Gulam; Hosine, Abdul Aminulah; Wahab, Abdul; Hamidi, Abdul Wasy; Ahmadi, Nasim; Agena, Warren F.; Charpentier, Ronald R.; Cook, Troy; Drenth, B.J.
2009-01-01
Using a geology-based assessment methodology, the U.S. Geological Survey--Afghanistan Ministry of Mines Joint Oil and Gas Resource Assessment Team estimated mean undiscovered resource volumes of 21.55 million barrels of oil, 44.76 billion cubic feet of non-associated natural gas, and 0.91 million barrels of natural gas liquids in the western Afghanistan Tirpul Assessment Unit (AU) (80230101).
Regional risk assessment for contaminated sites part 2: ranking of potentially contaminated sites.
Pizzol, Lisa; Critto, Andrea; Agostini, Paola; Marcomini, Antonio
2011-11-01
Environmental risks are traditionally assessed and presented in non-spatial ways, although the heterogeneity of contaminant spatial distributions, the spatial positions of and relations between receptors and stressors, and the spatial distribution of the variables involved in the risk assessment strongly influence exposure estimations and hence risks. Taking spatial variability into account is increasingly recognized as a further and essential step in sound exposure and risk assessment. To address this issue, an innovative methodology that integrates spatial analysis and a relative risk approach was developed. The purpose of this methodology is to prioritize sites at regional scale where a preliminary site investigation may be required. The methodology, aimed at supporting the inventory of contaminated sites, was implemented within the spatial decision support sYstem for Regional rIsk Assessment of DEgraded land, SYRIADE, and was applied to the case study of the Upper Silesia region (Poland). The developed methodology and tool are both flexible and easy to adapt to different regional contexts, allowing the user to introduce the relevant regional parameters identified on the basis of user expertise and regional data availability. Moreover, the GIS functionalities used, integrated with mathematical approaches, make it possible to consider, all at once, the multiplicity of sources and impacted receptors within the region of concern, to assess the risks posed by all contaminated sites in the region and, finally, to provide a risk-based ranking of the potentially contaminated sites. Copyright © 2011. Published by Elsevier Ltd.
2014-01-01
Background: Tobacco smoke toxicity has traditionally been assessed using the particulate fraction under submerged culture conditions, which omits the vapour-phase elements from any subsequent analysis. Therefore, methodologies that assess the full interactions and complexities of tobacco smoke are required. Here we describe the adaptation of a modified BALB/c 3T3 neutral red uptake (NRU) cytotoxicity test methodology, which is based on the Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM) protocol for in vitro acute toxicity testing. The methodology described takes into account the synergies of both the particulate and vapour phases of tobacco smoke. This is of particular importance as both phases have been independently shown to induce in vitro cellular cytotoxicity. Findings: The findings from this study indicate that mainstream tobacco smoke and the gas vapour phase (GVP), generated using the Vitrocell® VC 10 smoke exposure system, have distinct and significantly different toxicity profiles. Within the system tested, mainstream tobacco smoke produced a dilution IC50 (the dilution, in L/min, at which 50% cytotoxicity is observed) of 6.02 L/min, whereas the GVP produced a dilution IC50 of 3.20 L/min. In addition, we demonstrated significant dose-for-dose differences between mainstream cigarette smoke and the GVP fraction (P < 0.05). This demonstrates the importance of testing the entire tobacco smoke aerosol and not just the particulate fraction, as has been the historical preference. Conclusions: We have adapted the NRU methodology based on the ICCVAM protocol to capture the full interactions and complexities of tobacco smoke. This methodology could also be used to assess the performance of traditional cigarettes, blend and filter technologies, tobacco smoke fractions and individual test aerosols. PMID:24935030
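A dilution IC50 such as the 6.02 and 3.20 L/min values above can be read off a measured viability curve by interpolation. A minimal sketch with hypothetical readings (a real analysis would fit a full dose-response model rather than interpolate linearly):

```python
def dilution_ic50(points):
    """Estimate the dilution IC50: the dilution airflow (L/min) at which cell
    viability falls to 50%, by linear interpolation between the two bracketing
    measurements. points: (dilution in L/min, % viability) pairs; more airflow
    means more diluted smoke and hence higher viability."""
    pts = sorted(points)
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if (y0 - 50.0) * (y1 - 50.0) <= 0:  # the 50% level is crossed here
            return x0 + (50.0 - y0) * (x1 - x0) / (y1 - y0)
    raise ValueError("50% viability is not bracketed by the data")

# Hypothetical neutral-red-uptake readings for a whole-smoke exposure:
curve = [(1.0, 10.0), (2.0, 25.0), (4.0, 60.0), (8.0, 90.0)]
print(dilution_ic50(curve))  # crosses 50% between 2 and 4 L/min
```

Under this convention a lower IC50 (as for the GVP above) means 50% cytotoxicity is still reached at stronger dilution-limiting airflows, i.e. the comparison is dilution-based rather than concentration-based.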
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mou, J.I.; King, C.
The focus of this study is to develop a sensor-fused process modeling and control methodology to model, assess, and then enhance the performance of a hexapod machine for precision product realization. A deterministic modeling technique was used to derive models for machine performance assessment and enhancement. A sensor fusion methodology was adopted to identify the parameters of the derived models. Empirical models and computational algorithms were also derived and implemented to model, assess, and then enhance the machine performance. The developed sensor fusion algorithms can be implemented on a PC-based open-architecture controller to receive information from various sensors, assess the status of the process, determine the proper action, and deliver commands to actuators for task execution. This will enhance a hexapod machine's capability to produce workpieces within the imposed dimensional tolerances.
Flight simulator fidelity assessment in a rotorcraft lateral translation maneuver
NASA Technical Reports Server (NTRS)
Hess, R. A.; Malsbury, T.; Atencio, A., Jr.
1992-01-01
A model-based methodology for assessing flight simulator fidelity in closed-loop fashion is exercised in analyzing a rotorcraft low-altitude maneuver for which flight test and simulation results were available. The addition of a handling qualities sensitivity function to a previously developed model-based assessment criterion allows an analytical comparison of both performance and handling qualities between simulation and flight test. Model predictions regarding the existence of simulator fidelity problems are corroborated by experiment. The modeling approach is used to assess analytically the effects of modifying simulator characteristics on simulator fidelity.
Agüero, A; Pinedo, P; Simón, I; Cancio, D; Moraleda, M; Trueba, C; Pérez-Sánchez, D
2008-09-15
A methodological approach that includes conceptual developments, methodological aspects and software tools has been developed in the Spanish context, based on the BIOMASS "Reference Biospheres Methodology". Biosphere assessments have to be undertaken with the aim of demonstrating compliance with principles and regulations established to limit the possible radiological impact of radioactive waste disposals on human health and on the environment, and to ensure that future generations will not be exposed to higher radiation levels than those that would be acceptable today. The biosphere in the context of high-level waste disposal is defined as the collection of various radionuclide transfer pathways that may result in releases into the surface environment, transport within and between the biosphere receptors, exposure of humans and biota, and the doses/risks associated with such exposures. The assessments need to take into account the complexity of the biosphere, the nature of the radionuclides released and the long timescales considered. It is also necessary to make assumptions related to the habits and lifestyle of the exposed population, human activities in the long term and possible modifications of the biosphere. A summary of the Spanish methodological approach for biosphere assessment is presented here, as well as its application in a Spanish generic case study. A reference scenario has been developed based on current conditions at a site located in Central-West Spain, to indicate the potential impact on the present-day population. In addition, environmental change has been considered qualitatively through the use of interaction matrices and transition diagrams. Unit source terms of (36)Cl, (79)Se, (99)Tc, (129)I, (135)Cs, (226)Ra, (231)Pa, (238)U, (237)Np and (239)Pu have been taken. Two exposure groups, infants and adults, have been chosen for dose calculations.
Results are presented and their robustness is evaluated through the use of uncertainty and sensitivity analyses.
NASA Technical Reports Server (NTRS)
Celaya, Jose; Kulkarni, Chetan; Biswas, Gautam; Saha, Sankalita; Goebel, Kai
2011-01-01
A remaining useful life prediction methodology for electrolytic capacitors is presented. This methodology is based on the Kalman filter framework and an empirical degradation model. Electrolytic capacitors are used in several applications ranging from power supplies on critical avionics equipment to power drivers for electro-mechanical actuators. These devices are known for their comparatively low reliability and, given their criticality in electronics subsystems, they are a good candidate for component-level prognostics and health management. Prognostics provides a way to assess the remaining useful life of a capacitor based on its current state of health and its anticipated future usage and operational conditions. We also present experimental results of an accelerated aging test under electrical stresses. The data obtained in this test form the basis for a remaining life prediction algorithm in which a model of the degradation process is suggested. This preliminary remaining life prediction algorithm serves as a demonstration of how prognostics methodologies could be used for electrolytic capacitors. In addition, the use of degradation progression data from accelerated aging provides an avenue for validation of the Kalman filter based prognostics methods typically used for remaining useful life predictions in other applications.
NASA Technical Reports Server (NTRS)
Celaya, Jose R.; Kulkarni, Chetan S.; Biswas, Gautam; Goebel, Kai
2012-01-01
A remaining useful life prediction methodology for electrolytic capacitors is presented. This methodology is based on the Kalman filter framework and an empirical degradation model. Electrolytic capacitors are used in several applications ranging from power supplies on critical avionics equipment to power drivers for electro-mechanical actuators. These devices are known for their comparatively low reliability and, given their criticality in electronics subsystems, they are a good candidate for component-level prognostics and health management. Prognostics provides a way to assess the remaining useful life of a capacitor based on its current state of health and its anticipated future usage and operational conditions. We also present experimental results of an accelerated aging test under electrical stresses. The data obtained in this test form the basis for a remaining life prediction algorithm in which a model of the degradation process is suggested. This preliminary remaining life prediction algorithm serves as a demonstration of how prognostics methodologies could be used for electrolytic capacitors. In addition, the use of degradation progression data from accelerated aging provides an avenue for validation of the Kalman filter based prognostics methods typically used for remaining useful life predictions in other applications.
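The Kalman-filter-plus-degradation-model idea in the abstract above can be sketched with a two-state filter (capacitance level and drift per aging cycle) whose filtered state is extrapolated to an end-of-life threshold. All parameter values and the linear degradation data below are hypothetical, not the study's:

```python
def kalman_rul(measurements, q=1e-4, r_noise=0.25, fail_at=80.0):
    """Minimal 2-state Kalman filter with a linear degradation model.
    State x = [capacitance %, drift per cycle]; only the capacitance is
    measured. RUL = cycles until the filtered state, extrapolated at the
    estimated drift, crosses the end-of-life threshold."""
    x = [measurements[0], 0.0]           # initial state: no drift assumed
    P = [[1.0, 0.0], [0.0, 1.0]]         # state covariance
    for z in measurements:
        # Predict with F = [[1, 1], [0, 1]] and process noise q on the diagonal
        x = [x[0] + x[1], x[1]]
        P = [[P[0][0] + P[1][0] + P[0][1] + P[1][1] + q, P[0][1] + P[1][1]],
             [P[1][0] + P[1][1], P[1][1] + q]]
        # Update with measurement z of the capacitance (H = [1, 0])
        s = P[0][0] + r_noise
        k = [P[0][0] / s, P[1][0] / s]   # Kalman gain
        y = z - x[0]                     # innovation
        x = [x[0] + k[0] * y, x[1] + k[1] * y]
        P = [[(1 - k[0]) * P[0][0], (1 - k[0]) * P[0][1]],
             [P[1][0] - k[1] * P[0][0], P[1][1] - k[1] * P[0][1]]]
    if x[1] >= 0:
        return float("inf")              # no degradation trend estimated yet
    return (fail_at - x[0]) / x[1]       # cycles until the threshold crossing

# Hypothetical accelerated-aging record: capacitance (% of nominal) per cycle,
# losing a steady 0.5% per cycle:
data = [100.0 - 0.5 * t for t in range(20)]
```

With the data above the filter converges toward a drift of about -0.5%/cycle, so the predicted RUL is on the order of twenty further cycles before the 80% threshold.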
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dolly, S; Chen, H; Mutic, S
Purpose: A persistent challenge for the quality assessment of radiation therapy treatments (e.g. contouring accuracy) is the absence of the known, ground truth for patient data. Moreover, assessment results are often patient-dependent. Computer simulation studies utilizing numerical phantoms can be performed for quality assessment with a known ground truth. However, previously reported numerical phantoms do not include the statistical properties of inter-patient variations, as their models are based on only one patient. In addition, these models do not incorporate tumor data. In this study, a methodology was developed for generating numerical phantoms which encapsulate the statistical variations of patients within radiation therapy, including tumors. Methods: Based on previous work in contouring assessment, geometric attribute distribution (GAD) models were employed to model both the deterministic and stochastic properties of individual organs via principal component analysis. Using pre-existing radiation therapy contour data, the GAD models are trained to model the shape and centroid distributions of each organ. Then, organs with different shapes and positions can be generated by assigning statistically sound weights to the GAD model parameters. Organ contour data from 20 retrospective prostate patient cases were manually extracted and utilized to train the GAD models. As a demonstration, computer-simulated CT images of generated numerical phantoms were calculated and assessed subjectively and objectively for realism. Results: A cohort of numerical phantoms of the male human pelvis was generated. CT images were deemed realistic both subjectively and objectively in terms of image noise power spectrum. Conclusion: A methodology has been developed to generate realistic numerical anthropomorphic phantoms using pre-existing radiation therapy data.
The GAD models guarantee that generated organs span the statistical distribution of observed radiation therapy patients, according to the training dataset. The methodology enables radiation therapy treatment assessment with multi-modality imaging and a known ground truth, and without patient-dependent bias.
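The PCA machinery behind geometric attribute distribution (GAD) models can be sketched as follows, assuming each training contour is flattened to a fixed-length landmark vector; the synthetic training data and function names are ours, not the study's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: each row is one patient's organ contour,
# flattened to a fixed-length vector of landmark coordinates, with more
# variation in the first coordinates than the last.
training = rng.normal(size=(20, 6)) * np.array([3.0, 3.0, 1.0, 1.0, 0.2, 0.2])

def fit_gad(shapes):
    """PCA of the training shapes: returns the mean shape, the principal
    axes, and the per-mode standard deviations."""
    mean = shapes.mean(axis=0)
    _, s, vt = np.linalg.svd(shapes - mean, full_matrices=False)
    std = s / np.sqrt(len(shapes) - 1)   # mode standard deviations
    return mean, vt, std

def generate_shape(mean, axes, std, n_modes=3):
    """Sample a new, statistically plausible organ shape by assigning random
    weights (in units of mode standard deviations) to the leading modes."""
    w = rng.normal(size=n_modes)
    return mean + (w * std[:n_modes]) @ axes[:n_modes]

mean, axes, std = fit_gad(training)
new_organ = generate_shape(mean, axes, std)
```

Because the weights are drawn in units of the observed mode standard deviations, generated organs stay within the statistical spread of the training cohort, which is the guarantee described above.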
Community-based physical activity interventions among women: a systematic review
Amiri Farahani, Leila; Asadi-Lari, Mohsen; Mohammadi, Eesa; Parvizy, Soroor; Haghdoost, Ali Akbar; Taghizadeh, Ziba
2015-01-01
Objective: To review and assess the effectiveness of community-based physical activity interventions among women aged 18–65 years. Design: Systematic review. Methods: To find relevant articles, the researcher selected reports published in English between 1 January 2000 and 31 March 2013. A systematic search was conducted for controlled-trial studies on the effect of community-based interventions to promote physical activity among women 18–65 years of age, in which physical activity was reported as one of the measured outcomes. The methodological quality assessment was performed using a critical appraisal sheet. Also, the levels of evidence were assessed for the types of interventions. Results: The literature search identified nine articles. Four of the studies were randomised and the other studies had high methodological quality. There was no evidence, on the basis of effectiveness, for social cognitive theory-based interventions and inconclusive evidence of effectiveness for the rest of the interventions. Conclusions: There is insufficient evidence to assess the effectiveness of community-based interventions for enhancing physical activity among women. There is a need for high-quality randomised clinical trials with adequate statistical power to determine whether multicomponent and community-based intervention programmes increase physical activity among women, as well as to determine what type of interventions have a more effective and sustainable impact on women's physical activity. PMID:25833668
Risk Based Inspection Methodology and Software Applied to Atmospheric Storage Tanks
NASA Astrophysics Data System (ADS)
Topalis, P.; Korneliussen, G.; Hermanrud, J.; Steo, Y.
2012-05-01
A new risk-based inspection (RBI) methodology and software is presented in this paper. The objective of this work is to allow management of the inspections of atmospheric storage tanks in the most efficient way, while, at the same time, accident risks are minimized. The software has been built on the new risk framework architecture, a generic platform facilitating efficient and integrated development of software applications using risk models. The framework includes a library of risk models and the user interface is automatically produced on the basis of editable schemas. This risk-framework-based RBI tool has been applied in the context of RBI for above-ground atmospheric storage tanks (AST) but it has been designed with the objective of being generic enough to allow extension to the process plants in general. This RBI methodology is an evolution of an approach and mathematical models developed for Det Norske Veritas (DNV) and the American Petroleum Institute (API). The methodology assesses damage mechanism potential, degradation rates, probability of failure (PoF), consequence of failure (CoF) in terms of environmental damage and financial loss, risk and inspection intervals and techniques. The scope includes assessment of the tank floor for soil-side external corrosion and product-side internal corrosion and the tank shell courses for atmospheric corrosion and internal thinning. It also includes preliminary assessment for brittle fracture and cracking. The data are structured according to an asset hierarchy including Plant, Production Unit, Process Unit, Tag, Part and Inspection levels and the data are inherited / defaulted seamlessly from a higher hierarchy level to a lower level. The user interface includes synchronized hierarchy tree browsing, dynamic editor and grid-view editing and active reports with drill-in capability.
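The PoF/CoF risk ranking at the heart of such an RBI scheme can be sketched in a few lines. The screening values below are invented for illustration; the DNV/API models referenced above compute PoF and CoF from degradation rates, inventory and consequence models rather than from fixed inputs like these:

```python
def risk(pof_per_year, cof_financial, cof_environmental):
    """Screening-level risk for one tank part: probability of failure per year
    times consequence (financial loss plus monetised environmental damage,
    expressed in the same currency)."""
    return pof_per_year * (cof_financial + cof_environmental)

def rank_for_inspection(parts):
    """Order tank parts (floor, shell courses, ...) by descending risk so that
    inspection effort goes to the highest-risk items first.
    parts: dict of name -> (PoF per year, financial CoF, environmental CoF)."""
    return sorted(parts, key=lambda p: risk(*parts[p]), reverse=True)

# Hypothetical screening values for one atmospheric storage tank:
tank = {
    "floor (soil-side corrosion)":  (2e-3, 5e6, 2e6),
    "shell course 1 (thinning)":    (5e-4, 3e6, 1e6),
    "shell course 2 (atmospheric)": (1e-4, 2e6, 5e5),
}
print(rank_for_inspection(tank))  # the floor is ranked first
```

Inspection intervals then follow by requiring that the risk accumulated before the next inspection stays below a target, which is why high-risk parts such as the tank floor get the shortest intervals.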
NASA Astrophysics Data System (ADS)
Abdenov, A. Zh; Trushin, V. A.; Abdenova, G. A.
2018-01-01
The paper considers how to populate the relevant SIEM nodes with calculated objective assessments in order to improve the reliability of subjective expert assessments. The proposed methodology is necessary for the most accurate security risk assessment of information systems. The technique is also intended to support real-time operational information protection in enterprise information systems. Risk calculations are based on objective estimates of the probabilities that adverse events are carried out, and on predictions of the magnitude of damage from information security violations. Calculations of objective assessments are necessary to increase the reliability of the proposed expert assessments.
Risk-informed selection of a highway trajectory in the neighborhood of an oil-refinery.
Papazoglou, I A; Nivolianitou, Z; Aneziris, O; Christou, M D; Bonanos, G
1999-06-11
A methodology for characterizing alternative trajectories of a new highway in the neighborhood of an oil-refinery with respect to the risk to public health is presented. The approach is based on a quantitative assessment of the risk that the storage facilities of flammable materials of the refinery pose to the users of the highway. Physical phenomena with a potential for detrimental consequences to public health such as BLEVE (Boiling Liquid Expanding Vapor Explosion), Unconfined Vapor Cloud Explosion, flash fire and pool fire are considered. Methodological and procedural steps for assessing the individual risk around the tank farm of the oil-refinery are presented. Based on the individual risk, group risk for each alternative highway trajectory is determined. Copyright 1999 Elsevier Science B.V.
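Aggregating individual risk into group risk for a candidate trajectory, as described above, amounts to weighting the individual risk along each highway segment by its average occupancy. A minimal sketch with invented numbers (the actual study derives individual risk from the BLEVE, vapour cloud explosion and fire models listed above):

```python
def group_risk_per_year(segments):
    """Expected fatalities per year for one highway trajectory: for each
    segment, the individual risk at that distance from the tank farm times
    the average number of people present on the segment.
    segments: list of (individual risk per year, average occupancy) pairs."""
    return sum(ir * occupants for ir, occupants in segments)

# Hypothetical trajectories discretised into segments near the refinery;
# the closer route passes through higher individual-risk contours:
close_route = [(1e-4, 2.0), (5e-5, 3.0), (1e-5, 3.0)]
far_route   = [(1e-5, 2.0), (5e-6, 3.0), (1e-6, 3.0)]
print(group_risk_per_year(close_route) > group_risk_per_year(far_route))  # True
```

Comparing the group risk of each alternative trajectory against the others is what makes the trajectory selection risk-informed.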
[Assessment of the methodological quality of theses submitted to the Faculty of Medicine Fez].
Boly, A; Tachfouti, N; Zohoungbogbo, I S S; Achhab, Y El; Nejjari, C
2014-06-09
A thesis in medicine is a scientific work which allows a medical student to acquire a Doctor of Medicine degree. It is therefore recommended that theses presented by students fulfill essential methodological criteria in order to obtain scientifically credible results and recommendations. The aim of this study was to assess the methodology of theses presented to the Faculty of Medicine in Fez in 2008. We developed an evaluation table containing questions on the different sections of the IMRAD structure on which these theses were based, and we estimated the proportion of theses that conformed to each criterion. There were 160 theses on various specialties presented in 2008. The majority of the theses (79.3%) were case series. Research questions were clearly expressed in 62.0%, but the primary objectives were pertinent in only 52.0%. Our study shows that there were important deficiencies in the methodological rigor of the theses and that very few of the theses were represented in publications.
Rodríguez-González, Alejandro; Torres-Niño, Javier; Valencia-Garcia, Rafael; Mayer, Miguel A; Alor-Hernandez, Giner
2013-09-01
This paper proposes a new methodology for assessing the efficiency of medical diagnostic systems and clinical decision support systems by using the feedback/opinions of medical experts. The methodology behind this work is based on a comparison between the expert feedback that has helped solve different clinical cases and the expert system that has evaluated these same cases. Once the results are returned, an arbitration process is carried out in order to ensure the correctness of the results provided by both methods. Once this process has been completed, the results are analyzed using Precision, Recall, Accuracy, Specificity and Matthews Correlation Coefficient (MCC) (PRAS-M) metrics. When the methodology is applied, the results obtained from a real diagnostic system allow researchers to establish the accuracy of the system based on objective facts. The methodology returns enough information to analyze the system's behavior for each disease in the knowledge base or across the entire knowledge base. It also returns data on the efficiency of the different assessors involved in the evaluation process, analyzing their behavior in the diagnostic process. The proposed work facilitates the evaluation of medical diagnostic systems, having a reliable process based on objective facts. The methodology presented in this research makes it possible to identify the main characteristics that define a medical diagnostic system and their values, allowing for system improvement. A good example of the results provided by the application of the methodology is shown in this paper. A diagnosis system was evaluated by means of this methodology, yielding positive results (statistically significant) when comparing the system with the assessors that participated in the evaluation process of the system through metrics such as recall (+27.54%) and MCC (+32.19%). These results demonstrate the real applicability of the methodology used. Copyright © 2013 Elsevier Ltd. All rights reserved.
Skills in Clinical Communication: Are We Correctly Assessing Them at Undergraduate Level?
ERIC Educational Resources Information Center
Zamora Cervantes, Alberto; Carrión Ribas, Carme; Cordón Granados, Ferran; Galí Pla, Bibiana; Balló Peña, Elisabet; Quesada Sabate, Miquel; Grau Martin, Armand; Castro Guardiola, Antoni; Torrent Goñi, Silvia; Vargas Vila, Susanna; Vilert Garrofa, Esther; Subirats Bayego, Enric; Coll de Tuero, Gabriel; Muñoz Ortiz, Laura; Cerezo Goyeneche, Carlos; Torán Monserrat, Pere
2014-01-01
Traditional learning and assessment systems are overwhelmed when it comes to addressing the complex and multi-dimensional problems of clinical communication and professional practice. This paper shows results of a training program in clinical communication under Problem Based Learning (PBL) methodology and correlation between student…
Assessing Sustainability in Real Urban Systems: The Greater Cincinnati Metropolitan Area in Ohio
The goal of this research article is to present a practical and general methodology for a sustainability assessment in real urban systems. The method is based on the computation and interpretation of Fisher Information (FI) as a sustainability metric using time series for 29 soci...
Risk assessments for mixtures: technical methods commonly used in the United States
A brief (20 minute) talk on the technical approaches used by EPA and other US agencies to assess risks posed by combined exposures to one or more chemicals. The talk systemically reviews the methodologies (whole-mixtures and component-based approaches) that are or have been used ...
Assessment of conventional oil resources of the East African Rift Province, East Africa, 2016
Brownfield, Michael E.; Schenk, Christopher J.; Klett, Timothy R.; Mercier, Tracey J.; Gaswirth, Stephanie B.; Marra, Kristen R.; Finn, Thomas M.; Le, Phuong A.; Leathers-Miller, Heidi M.
2017-03-27
Using a geology-based assessment methodology, the U.S. Geological Survey estimated undiscovered, technically recoverable mean conventional resources of 13.4 billion barrels of oil and 4.6 trillion cubic feet of gas in the East African Rift Province of east Africa.
Kirschbaum, Mark A.; Schenk, Christopher J.; Charpentier, Ronald R.; Klett, Timothy R.; Brownfield, Michael E.; Pitman, Janet K.; Cook, Troy A.; Tennyson, Marilyn E.
2010-01-01
The U.S. Geological Survey estimated means of 1.8 billion barrels of recoverable oil, 223 trillion cubic feet of recoverable gas, and 6 billion barrels of natural gas liquids in the Nile Delta Basin Province using a geology-based assessment methodology.
Johnson, Ronald C.; Mercier, Tracey J.; Brownfield, Michael E.; Self, Jesse G.
2010-01-01
Using a geology-based assessment methodology, the U.S. Geological Survey estimated a total of 1.32 trillion barrels of oil in place in 18 oil shale zones in the Eocene Green River Formation in the Uinta Basin, Utah and Colorado.
Assessment of potential shale gas and shale oil resources of the Norte Basin, Uruguay, 2011
Schenk, Christopher J.; Kirschbaum, Mark A.; Charpentier, Ronald R.; Cook, Troy; Klett, Timothy R.; Gautier, Donald L.; Pollastro, Richard M.; Weaver, Jean N.; Brownfield, Michael
2011-01-01
Using a geology-based assessment methodology, the U.S. Geological Survey estimated mean volumes of 13.4 trillion cubic feet of potential technically recoverable shale gas and 0.5 billion barrels of technically recoverable shale oil resources in the Norte Basin of Uruguay.
Schenk, Christopher J.; Tennyson, Marilyn E.; Mercier, Tracey J.; Gaswirth, Stephanie B.; Marra, Kristen R.; Le, Phuong A.; Pitman, Janet K.; Brownfield, Michael E.; Hawkins, Sarah J.; Leathers-Miller, Heidi M.; Finn, Thomas M.; Klett, Timothy R.
2017-03-27
Using a geology-based assessment methodology, the U.S. Geological Survey estimated undiscovered, technically recoverable mean continuous resources of 656 million barrels of oil and 5.7 trillion cubic feet of gas in the Maracaibo Basin Province, Venezuela and Colombia.
A main goal of ecotoxicology and risk assessment is to assess the impact on aquatic populations. However the most widely used bioassays measure the response of individuals to infer population effects. Bridging the gap between established individual-based methodology and the popul...
Schenk, Christopher J.; Brownfield, Michael E.; Tennyson, Marilyn E.; Le, Phuong A.; Mercier, Tracey J.; Finn, Thomas M.; Hawkins, Sarah J.; Gaswirth, Stephanie B.; Marra, Kristen R.; Klett, Timothy R.; Leathers-Miller, Heidi M.; Woodall, Cheryl A.
2017-09-22
Using a geology-based assessment methodology, the U.S. Geological Survey estimated mean undiscovered, technically recoverable continuous resources of 0.45 billion barrels of oil and 1.0 trillion cubic feet of gas in the Middle and Upper Magdalena Basins, Colombia.
Schenk, Christopher J.; Tennyson, Marilyn E.; Mercier, Tracey J.; Hawkins, Sarah J.; Gaswirth, Stephanie B.; Marra, Kristen R.; Klett, Timothy R.; Le, Phuong A.; Brownfield, Michael E.; Woodall, Cheryl A.
2017-08-17
Using a geology-based assessment methodology, the U.S. Geological Survey estimated mean undiscovered, technically recoverable continuous resources of 35.1 trillion cubic feet of gas in the Amu Darya Basin Province of Turkmenistan, Uzbekistan, Iran, and Afghanistan.
Methods of Quality Appraisal for Studies Reviewed by Evidence Clearinghouses
ERIC Educational Resources Information Center
Wilson, Sandra Jo; Tanner-Smith, Emily
2015-01-01
This presentation will discuss quality appraisal methods for assessing research studies used in systematic reviews, research syntheses, and evidence-based practice repositories such as the What Works Clearinghouse. The different ways that the methodological rigor and risk of bias of primary studies included in syntheses are assessed mean that…
A Mixed-Methods, Multiprofessional Approach to Needs Assessment for Designing Education
ERIC Educational Resources Information Center
Moore, Heidi K.; McKeithen, Tom M.; Holthusen, Amy E.
2011-01-01
Like most hospital units, neonatal intensive care units (NICUs) are multidisciplinary and team-based. As a result, providing optimal nutritional care to premature infants involves using the knowledge and skills of several types of professionals. Using traditional needs assessment methodologies to effectively understand the educational needs…
Assessment of continuous oil and gas resources of the South Sumatra Basin Province, Indonesia, 2016
Schenk, Christopher J.; Tennyson, Marilyn E.; Klett, Timothy R.; Finn, Thomas M.; Mercier, Tracey J.; Gaswirth, Stephanie B.; Marra, Kristen R.; Le, Phuong A.; Hawkins, Sarah J.
2016-12-09
Using a geology-based assessment methodology, the U.S. Geological Survey estimated undiscovered, technically recoverable mean resources of 689 million barrels of continuous shale oil and 3.9 trillion cubic feet of shale gas in the South Sumatra Basin Province in Indonesia.
Schenk, Christopher J.; Mercier, Tracey J.; Tennyson, Marilyn E.; Woodall, Cheryl A.; Brownfield, Michael E.; Le, Phuong A.; Klett, Timothy R.; Gaswirth, Stephanie B.; Finn, Thomas M.; Marra, Kristen R.; Leathers-Miller, Heidi M.
2018-02-16
Using a geology-based assessment methodology, the U.S. Geological Survey estimated mean undiscovered, technically recoverable resources of 26 million barrels of oil and 700 billion cubic feet of gas in the Wyoming Thrust Belt Province, Wyoming, Idaho, and Utah.
Embracing the Learning Paradigm to Foster Systems Thinking
ERIC Educational Resources Information Center
Habron, Geoffrey; Goralnik, Lissy; Thorp, Laurie
2012-01-01
Purpose: Michigan State University developed an undergraduate, academic specialization in sustainability based on the learning paradigm. The purpose of this paper is to share initial findings on assessment of systems thinking competency. Design/methodology/approach: The 15-week course served 14 mostly third and fourth-year students. Assessment of…
Teaching and Assessment for an Organisation-Centred Curriculum
ERIC Educational Resources Information Center
Choy, Sarojni
2009-01-01
Purpose: This paper aims to discuss the teaching and assessment strategies for an organisation-centred curriculum. Design/methodology/approach: The paper is based on a case study. Data were collected from interviews and a focus group with worker-learners enrolled in a Graduate Certificate in Education (Educational Leadership) course. Findings: The…
77 FR 24938 - National Assessment Governing Board; Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-26
... to individuals with disabilities. DATES: May 17-19, 2012. Times May 17 Committee Meetings Assessment... Dissemination Committee (R&D): Open Session: 10 a.m.- 12:30 p.m. Committee on Standards, Design and Methodology... May 18 will be a briefing on the NAEP mathematics special studies: the Mathematics Computer-based...
Assessment of undiscovered oil and gas resources of Libya and Tunisia, 2010
Whidden, Katherine J.; Lewan, Michael; Schenk, Christopher J.; Charpentier, Rondald R.; Cook, Troy A.; Klett, Timothy R.; Pitman, Janet K.
2011-01-01
Using a geology-based assessment methodology, the U.S. Geological Survey estimated means of 3.97 billion barrels of undiscovered oil, 38.5 trillion cubic feet of undiscovered natural gas, and 1.47 billion barrels of undiscovered natural gas liquids in two provinces of North Africa.
Płaszewski, Maciej; Bettany-Saltikov, Josette
2014-01-01
Background Non-surgical interventions for adolescents with idiopathic scoliosis remain highly controversial. Despite the publication of numerous reviews, no explicit methodological evaluation of papers labeled as, or having the layout of, a systematic review addressing this subject matter is available. Objectives Analysis and comparison of the content, methodology, and evidence base of systematic reviews regarding non-surgical interventions for adolescents with idiopathic scoliosis. Design Systematic overview of systematic reviews. Methods Articles meeting the minimal criteria for a systematic review, regarding any non-surgical intervention for adolescent idiopathic scoliosis, with any outcomes measured, were included. Multiple general and systematic-review-specific databases, guideline registries, reference lists and websites of institutions were searched. The AMSTAR tool was used to critically appraise the methodology, and the Oxford Centre for Evidence Based Medicine and the Joanna Briggs Institute's hierarchies were applied to analyze the levels of evidence of the included reviews. Results From 469 citations, twenty-one papers were included for analysis. Five reviews assessed the effectiveness of scoliosis-specific exercise treatments, four assessed manual therapies, five evaluated bracing, four assessed different combinations of interventions, and one evaluated usual physical activity. Two reviews addressed the adverse effects of bracing. Two papers were high-quality Cochrane reviews, three were of moderate quality, and the remaining sixteen were of low or very low methodological quality. The level of evidence of these reviews ranged from 1 or 1+ to 4, and in some reviews, due to their low methodological quality and/or poor reporting, this could not be established. Conclusions Higher-quality reviews indicate that there is generally insufficient evidence to make a judgment on whether non-surgical interventions in adolescent idiopathic scoliosis are effective.
Papers labeled as systematic reviews need to be considered in terms of their methodological rigor; otherwise they may be mistakenly regarded as high quality sources of evidence. Protocol registry number CRD42013003538, PROSPERO PMID:25353954
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-26
... Request; Methodological Studies for the Population Assessment of Tobacco and Health (PATH) Study SUMMARY... Collection: Title: Methodological Studies for Population Assessment of Tobacco and Health (PATH) Study. Type... methodological studies to improve the PATH study instrumentation and data collection procedures. These...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gernhofer, S.; Oliver, T.J.; Vasquez, R.
1994-12-31
A macro environmental risk assessment (ERA) methodology was developed for the Philippine Department of Environment and Natural Resources (DENR) as part of the US Agency for International Development Industrial Environmental Management Project. The DENR allocates its limited resources to mitigate those environmental problems that pose the greatest threat to human health and the environment. The National Regional Industry Prioritization Strategy (NRIPS) methodology was developed as a risk assessment tool to establish a national ranking of industrial facilities. The ranking establishes regional and national priorities, based on risk factors, that DENR can use to determine the most effective allocation of its limited resources. NRIPS is a systematic framework that examines the potential risk to human health and the environment from hazardous substances released from a facility, and, in doing so, generates a relative numerical score that represents that risk. More than 3,300 facilities throughout the Philippines were evaluated successfully with the NRIPS.
Munthe-Kaas, Heather; Bohren, Meghan A; Glenton, Claire; Lewin, Simon; Noyes, Jane; Tunçalp, Özge; Booth, Andrew; Garside, Ruth; Colvin, Christopher J; Wainwright, Megan; Rashidian, Arash; Flottorp, Signe; Carlsen, Benedicte
2018-01-25
The GRADE-CERQual (Confidence in Evidence from Reviews of Qualitative research) approach has been developed by the GRADE (Grading of Recommendations Assessment, Development and Evaluation) Working Group. The approach has been developed to support the use of findings from qualitative evidence syntheses in decision-making, including guideline development and policy formulation. CERQual includes four components for assessing how much confidence to place in findings from reviews of qualitative research (also referred to as qualitative evidence syntheses): (1) methodological limitations, (2) coherence, (3) adequacy of data and (4) relevance. This paper is part of a series providing guidance on how to apply CERQual and focuses on CERQual's methodological limitations component. We developed the methodological limitations component by searching the literature for definitions, gathering feedback from relevant research communities and developing consensus through project group meetings. We tested the CERQual methodological limitations component within several qualitative evidence syntheses before agreeing on the current definition and principles for application. When applying CERQual, we define methodological limitations as the extent to which there are concerns about the design or conduct of the primary studies that contributed evidence to an individual review finding. In this paper, we describe the methodological limitations component and its rationale and offer guidance on how to assess the methodological limitations of a review finding as part of the CERQual approach. This guidance outlines the information required to assess the methodological limitations component, the steps that need to be taken to assess the methodological limitations of data contributing to a review finding, and examples of methodological limitations assessments.
This paper provides guidance for review authors and others on undertaking an assessment of methodological limitations in the context of the CERQual approach. More work is needed to determine which criteria critical appraisal tools should include when assessing methodological limitations. We currently recommend that whichever tool is used, review authors provide a transparent description of their assessments of methodological limitations in a review finding. We expect the CERQual approach and its individual components to develop further as our experiences with the practical implementation of the approach increase.
Foundation stones for a real socio-environmental integration in projects' impact assessments
NASA Astrophysics Data System (ADS)
Andres Dominguez-Gomez, J.
2015-04-01
In the last twenty years, both the increase in academic production and the expansion of professional involvement in Environmental Impact Assessment (EIA) and Social Impact Assessment (SIA) have evidenced growing scientific and business interest in risk and impact analysis. However, this growth has not brought with it parallel progress in addressing their main shortcomings: insufficient integration of environmental and social features into development project analyses and, in cases where the social aspects are considered, technical-methodological failings in their diagnosis and assessment. It is clear that these weaknesses carry with them substantial threats to the sustainability (social, environmental and economic) of schemes which impact on the environment and, in consequence, to the local contexts where they are carried out and to the delicate balance of the global ecosystem. This paper argues that, in a sociological context of growing complexity, four foundation stones are required to underpin research methodologies (for both diagnosis and assessment) in the socio-environmental risks of development projects: a theoretical foundation in actor-network theory; an ethical grounding in values which are internationally recognized though not always carried through into practice; a (new) epistemological-scientific base; and a methodological foundation in social participation.
Doherty, Kathleen; Essajee, Shaffiq; Penazzato, Martina; Holmes, Charles; Resch, Stephen; Ciaranello, Andrea
2014-05-02
Pediatric antiretroviral therapy (ART) has been shown to substantially reduce morbidity and mortality in HIV-infected infants and children. To accurately project program costs, analysts need accurate estimations of antiretroviral drug (ARV) costs for children. However, the costing of pediatric antiretroviral therapy is complicated by weight-based dosing recommendations which change as children grow. We developed a step-by-step methodology for estimating the cost of pediatric ARV regimens for children ages 0-13 years old. The costing approach incorporates weight-based dosing recommendations to provide estimated ARV doses throughout childhood development. Published unit drug costs are then used to calculate average monthly drug costs. We compared our derived monthly ARV costs to published estimates to assess the accuracy of our methodology. The estimates of monthly ARV costs are provided for six commonly used first-line pediatric ARV regimens, considering three possible care scenarios. The costs derived in our analysis for children were fairly comparable to or slightly higher than available published ARV drug or regimen estimates. The methodology described here can be used to provide an accurate estimation of pediatric ARV regimen costs for cost-effectiveness analysts to project the optimum packages of care for HIV-infected children, as well as for program administrators and budget analysts who wish to assess the feasibility of increasing pediatric ART availability in constrained budget environments.
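The weight-band costing idea described here can be sketched as a simple lookup from weight band to daily dose, converted to a monthly cost. The bands, tablet counts, and unit price below are invented placeholders, not the published dosing recommendations or drug prices the authors used.

```python
# Hypothetical weight bands: (min_kg, max_kg, tablets_per_day)
WEIGHT_BANDS = [
    (3.0, 5.9, 1.0),
    (6.0, 9.9, 1.5),
    (10.0, 13.9, 2.0),
]
COST_PER_TABLET = 0.08  # illustrative unit cost in USD

def monthly_cost(weight_kg: float, days: int = 30) -> float:
    """Estimate monthly drug cost for a child of the given weight."""
    for lo, hi, tablets in WEIGHT_BANDS:
        if lo <= weight_kg <= hi:
            return tablets * days * COST_PER_TABLET
    raise ValueError("weight outside modelled bands")

print(round(monthly_cost(8.2), 2))
```

Projecting costs over childhood would then amount to stepping a growth curve (weight as a function of age) through this lookup month by month.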
Schünemann, Holger J
2013-01-01
In this brief article which summarises a presentation given at the "6. Diskussionsforum zur Nutzenbewertung im Gesundheitswesen" of the German Ministry of Education and Research "Gesundheitsforschungsrat (GFR)" and the Institute for Quality and Efficiency in Healthcare (IQWiG) I will analyse some methodological idiosyncrasies of studies evaluating non-pharmacological non-technical interventions (NPNTI). I will focus on how the methodological framework of the Grading of Recommendations Assessment, Development and Evaluation (GRADE) working group may support design and appraisal of NPNTI. Specific design features that may be of particular value in NPNTI research, such as expertise-based randomised controlled trials, will be briefly described. Finally, based on an example, I will argue that - despite the methodological idiosyncrasies - there is neither a sufficient reason to accept different standards for the assessment of the confidence in the evidence from NPNTI nor for using study designs that are less rigorous compared to "simpler" interventions but that special measures have to be taken to reduce the risk of bias. The example that will be used in this article will primarily come from the field of respiratory rehabilitation, a typical multi-component or complex intervention and by definition a complex NPNTI, which has been evaluated in many randomised controlled trials. (As supplied by publisher). Copyright © 2013. Published by Elsevier GmbH.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-22
...; Comment Request: Methodological Studies for the Population Assessment of Tobacco and Health (PATH) Study... approval from OMB for methodological studies to improve the PATH study instrumentation and data collection procedures. These methodological studies will support ongoing assessment and refinement of the PATH study's...
Parts and Components Reliability Assessment: A Cost Effective Approach
NASA Technical Reports Server (NTRS)
Lee, Lydia
2009-01-01
System reliability assessment is a methodology which incorporates reliability analyses performed at the parts and components level, such as Reliability Prediction, Failure Modes and Effects Analysis (FMEA) and Fault Tree Analysis (FTA), to assess risks, perform design tradeoffs, and therefore ensure effective productivity and/or mission success. The system reliability is used to optimize the product design to accommodate today's mandated budget, manpower, and schedule constraints. Standard-based reliability assessment is an effective approach consisting of reliability predictions together with other reliability analyses for electronic, electrical, and electro-mechanical (EEE) complex parts and components of large systems, based on failure rate estimates published by United States (U.S.) military or commercial standards and handbooks. Many of these standards are globally accepted and recognized. The reliability assessment is especially useful during the initial stages, when the system design is still in development and hard failure data are not yet available or manufacturers are not contractually obliged by their customers to publish the reliability estimates/predictions for their parts and components. This paper presents a methodology to assess system reliability using parts and components reliability estimates to ensure effective productivity and/or mission success efficiently, at low cost, and on a tight schedule.
Risk assessment for construction projects of transport infrastructure objects
NASA Astrophysics Data System (ADS)
Titarenko, Boris
2017-10-01
The paper analyzes and compares different methods of risk assessment for construction projects of transport objects. The management of such projects demands special probabilistic methods due to the high level of uncertainty in their implementation. Risk management in these projects requires the use of probabilistic and statistical methods. The aim of the work is to develop a methodology for using traditional methods in combination with robust methods that yield reliable risk assessments in projects. The robust approach is based on the principle of maximum likelihood and, in assessing risk, allows the researcher to obtain reliable results in situations of great uncertainty. The application of robust procedures makes it possible to carry out a quantitative assessment of the main risk indicators of projects when solving the tasks of managing innovation-investment projects. Any competent specialist can calculate the damage from the occurrence of a risk event, but assessing the probability of its occurrence requires special probabilistic methods based on the proposed robust approaches. Practice shows the effectiveness and reliability of the results. The methodology developed in the article can be used to create information technologies and apply them in automated control systems for complex projects.
DOT National Transportation Integrated Search
2010-07-01
The objective of this work was to develop a low-cost portable damage detection tool to assess and predict damage areas in highway bridges. The proposed tool was based on standard vibration-based damage identification (VBDI) techniques but...
NASA Astrophysics Data System (ADS)
de Vito, Rossella; Portoghese, Ivan; Pagano, Alessandro; Fratino, Umberto; Vurro, Michele
2017-12-01
Increasing pressure affects water resources, especially in the agricultural sector, with cascading impacts on energy consumption. This is particularly relevant in the Mediterranean area, showing significant water scarcity problems, further exacerbated by the crucial economic role of agricultural production. Assessing the sustainability of water resource use is thus essential to preserving ecosystems and maintaining high levels of agricultural productivity. This paper proposes an integrated methodology based on the Water-Energy-Food Nexus to evaluate the multi-dimensional implications of irrigation practices. Three different indices are introduced, based on an analysis of the most influential factors. The methodology is then implemented in a catchment located in Puglia (Italy) and a comparative analysis of the three indices is presented. The results mainly highlight that economic land productivity is a key driver of irrigated agriculture, and that groundwater is highly affordable compared to surface water, thus being often dangerously perceived as freely available.
Khan, F I; Abbasi, S A
2000-07-10
Fault tree analysis (FTA) is based on constructing a hypothetical tree of base events (initiating events) branching into numerous other sub-events, propagating the fault and eventually leading to the top event (accident). It has been a powerful technique used traditionally in identifying hazards in nuclear installations and power industries. As the systematic articulation of the fault tree is associated with assigning probabilities to each fault, the exercise is also sometimes called probabilistic risk assessment. But powerful as this technique is, it is also very cumbersome and costly, limiting its area of application. We have developed a new algorithm based on analytical simulation (named as AS-II), which makes the application of FTA simpler, quicker, and cheaper; thus opening up the possibility of its wider use in risk assessment in chemical process industries. Based on the methodology we have developed a computer-automated tool. The details are presented in this paper.
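The probability arithmetic underlying FTA can be illustrated with AND/OR gates over independent basic events. This is a generic sketch of gate propagation, not the AS-II algorithm (which the abstract describes only at a high level), and the event probabilities and event names are invented.

```python
def p_and(*probs: float) -> float:
    """AND gate: the output event occurs only if all inputs occur."""
    out = 1.0
    for p in probs:
        out *= p
    return out

def p_or(*probs: float) -> float:
    """OR gate: at least one input occurs (complement of none occurring)."""
    none = 1.0
    for p in probs:
        none *= (1.0 - p)
    return 1.0 - none

# Hypothetical top event: (valve fails AND alarm fails) OR operator error
top = p_or(p_and(1e-2, 5e-2), 1e-3)
print(top)
```

For a real tree, the same two functions are applied bottom-up from the basic events to the top event; the independence assumption is what makes this simple closed-form propagation valid.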
Kiluk, Brian D.; Sugarman, Dawn E.; Nich, Charla; Gibbons, Carly J.; Martino, Steve; Rounsaville, Bruce J.; Carroll, Kathleen M.
2013-01-01
Objective Computer-assisted therapies offer a novel, cost-effective strategy for providing evidence-based therapies to a broad range of individuals with psychiatric disorders. However, the extent to which the growing body of randomized trials evaluating computer-assisted therapies meets current standards of methodological rigor for evidence-based interventions is not clear. Method A methodological analysis of randomized clinical trials of computer-assisted therapies for adult psychiatric disorders, published between January 1990 and January 2010, was conducted. Seventy-five studies that examined computer-assisted therapies for a range of axis I disorders were evaluated using a 14-item methodological quality index. Results Results indicated marked heterogeneity in study quality. No study met all 14 basic quality standards, and three met 13 criteria. Consistent weaknesses were noted in evaluation of treatment exposure and adherence, rates of follow-up assessment, and conformity to intention-to-treat principles. Studies utilizing weaker comparison conditions (e.g., wait-list controls) had poorer methodological quality scores and were more likely to report effects favoring the computer-assisted condition. Conclusions While several well-conducted studies have indicated promising results for computer-assisted therapies, this emerging field has not yet achieved a level of methodological quality equivalent to those required for other evidence-based behavioral therapies or pharmacotherapies. Adoption of more consistent standards for methodological quality in this field, with greater attention to potential adverse events, is needed before computer-assisted therapies are widely disseminated or marketed as evidence based. PMID:21536689
DOE Office of Scientific and Technical Information (OSTI.GOV)
Litchfield, J.W.; Watts, R.L.; Gurwell, W.E.
A materials assessment methodology for identifying specific critical material requirements that could hinder the implementation of solar energy has been developed and demonstrated. The methodology involves an initial screening process, followed by a more detailed materials assessment. The detailed assessment considers such materials concerns and constraints as: process and production constraints, reserve and resource limitations, lack of alternative supply sources, geopolitical problems, environmental and energy concerns, time constraints, and economic constraints. Data for 55 bulk and 53 raw materials are currently available in the database. These materials are required in the example photovoltaic systems. One photovoltaic system and thirteen photovoltaic cells, ten solar heating and cooling systems, and two agricultural and industrial process heat systems have been characterized to define their engineering and bulk material requirements.
2009-12-01
standards for assessing the value of intangible assets or intellectual capital. Historically, a number of frameworks have evolved, each with a ... different focus and a different assessment methodology. In order to assess that knowledge management initiatives contributed to the fight against ... terrorism in Canada, a results-based framework was selected, customized and applied to CRTI (a networked science and technology program to counter
Fujinaga, Aiichiro; Uchiyama, Iwao; Morisawa, Shinsuke; Yoneda, Minoru; Sasamoto, Yuzuru
2012-01-01
In Japan, environmental standards for contaminants in groundwater and in leachate from soil are set with the assumption that they are used for drinking water over a human lifetime. Where there is neither a well nor groundwater used for drinking, the standard is thus too severe. Therefore, remediation based on these standards incurs excessive effort and cost. In contrast, the environmental-assessment procedure used in the United States and the Netherlands considers the site conditions (land use, existing wells, etc.); however, a risk assessment is required for each site. Therefore, this study proposes a new framework for judging contamination in Japan by considering the merits of the environmental standards used and a method for risk assessment. The framework involves setting risk-based concentrations that are attainable remediation goals for contaminants in soil and groundwater. The framework was then applied to a model contaminated site for risk management, and the results are discussed regarding the effectiveness and applicability of the new methodology. © 2011 Society for Risk Analysis.
NASA Astrophysics Data System (ADS)
Deligiorgi, Despina; Philippopoulos, Kostas; Thanou, Lelouda; Karvounis, Georgios
2010-01-01
Spatial interpolation in air pollution modeling is the procedure for estimating ambient air pollution concentrations at unmonitored locations based on available observations. The selection of the appropriate methodology is based on the nature and the quality of the interpolated data. In this paper, an assessment of three widely used interpolation methodologies is undertaken in order to estimate the errors involved. For this purpose, air quality data from January 2001 to December 2005, from a network of seventeen monitoring stations operating in the greater area of Athens in Greece, are used. The Nearest Neighbor and the Linear schemes were applied to the mean hourly observations, while the Inverse Distance Weighted (IDW) method was applied to the mean monthly concentrations. The discrepancies between the estimated and measured values are assessed for every station and pollutant, using the correlation coefficient, the scatter diagrams and the statistical residuals. The capability of the methods to estimate air quality data in an area with multiple land-use types and pollution sources, such as Athens, is discussed.
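Of the three schemes assessed above, the Inverse Distance Weighted method is the simplest to sketch: each station's observation is weighted by the inverse of its distance to the target point, raised to a power. The coordinates and concentration values below are made up for illustration; they are not data from the Athens network.

```python
# Inverse Distance Weighted (IDW) interpolation: estimate a concentration at an
# unmonitored point as a distance-weighted average of station observations.
import math

def idw(stations, target, power=2):
    """stations: list of ((x, y), value) pairs; target: (x, y) point."""
    num = den = 0.0
    for (x, y), value in stations:
        d = math.hypot(x - target[0], y - target[1])
        if d == 0:                       # target coincides with a station
            return value
        w = 1.0 / d ** power
        num += w * value
        den += w
    return num / den

# Hypothetical monthly mean concentrations at three stations (arbitrary units)
stations = [((0.0, 0.0), 40.0), ((4.0, 0.0), 60.0), ((0.0, 3.0), 50.0)]
estimate = idw(stations, (1.0, 1.0))     # pulled mostly toward the nearest station
```

The power parameter controls how local the estimate is: higher powers make nearby stations dominate, which is one reason IDW behaves differently on sparse monthly means than interpolation on dense hourly data.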
NASA Astrophysics Data System (ADS)
Ribeiro, J. B.; Silva, C.; Mendes, R.
2010-10-01
A real coded genetic algorithm methodology that has been developed for the estimation of the parameters of the reaction rate equation of the Lee-Tarver reactive flow model is described in detail. This methodology allows, in a single optimization procedure, using only one experimental result and, without the need of any starting solution, to seek the 15 parameters of the reaction rate equation that fit the numerical to the experimental results. Mass averaging and the plate-gap model have been used for the determination of the shock data used in the unreacted explosive JWL equation of state (EOS) assessment and the thermochemical code THOR retrieved the data used in the detonation products' JWL EOS assessments. The developed methodology was applied for the estimation of the referred parameters for an ammonium nitrate-based emulsion explosive using poly(methyl methacrylate) (PMMA)-embedded manganin gauge pressure-time data. The obtained parameters allow a reasonably good description of the experimental data and show some peculiarities arising from the intrinsic nature of this kind of composite explosive.
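A real-coded genetic algorithm of the general kind described above can be sketched compactly: a population of real-valued parameter vectors is evolved by selection, blend crossover, and Gaussian mutation to minimize the misfit between a model curve and an observed record. The example fits a deliberately simple two-parameter stand-in model to synthetic data; it is not the Lee-Tarver reaction rate equation, and the population size, mutation rate, and bounds are illustrative assumptions.

```python
# Minimal real-coded genetic algorithm: evolve a real-valued parameter vector
# to minimize squared misfit between a model curve and "experimental" data.
import random

random.seed(1)

def model(params, t):
    a, b = params
    return a * t + b                      # simple stand-in for a simulated trace

# synthetic "experimental" record generated with true parameters (2.0, 1.0)
times = [i * 0.1 for i in range(20)]
observed = [2.0 * t + 1.0 for t in times]

def misfit(params):
    return sum((model(params, t) - y) ** 2 for t, y in zip(times, observed))

def evolve(pop_size=40, generations=60, bounds=(-10.0, 10.0)):
    lo, hi = bounds
    pop = [[random.uniform(lo, hi), random.uniform(lo, hi)]
           for _ in range(pop_size)]      # no starting solution required
    for _ in range(generations):
        pop.sort(key=misfit)
        parents = pop[: pop_size // 2]    # truncation selection keeps the best half
        children = []
        while len(children) < pop_size - len(parents):
            p, q = random.sample(parents, 2)
            alpha = random.random()       # arithmetic (blend) crossover
            child = [alpha * x + (1 - alpha) * y for x, y in zip(p, q)]
            if random.random() < 0.2:     # occasional Gaussian mutation
                i = random.randrange(len(child))
                child[i] += random.gauss(0.0, 0.3)
            children.append(child)
        pop = parents + children
    return min(pop, key=misfit)

best = evolve()
```

The real 15-parameter problem replaces `model` with a hydrocode simulation driven by the candidate reaction-rate parameters, which is why each misfit evaluation is expensive and the single-procedure property matters.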
A probabilistic seismic risk assessment procedure for nuclear power plants: (II) Application
Huang, Y.-N.; Whittaker, A.S.; Luco, N.
2011-01-01
This paper presents the procedures and results of intensity- and time-based seismic risk assessments of a sample nuclear power plant (NPP) to demonstrate the risk-assessment methodology proposed in its companion paper. The intensity-based assessments include three sets of sensitivity studies to identify the impact of the following factors on the seismic vulnerability of the sample NPP, namely: (1) the description of fragility curves for primary and secondary components of NPPs, (2) the number of simulations of NPP response required for risk assessment, and (3) the correlation in responses between NPP components. The time-based assessment is performed as a series of intensity-based assessments. The studies illustrate the utility of the response-based fragility curves and the inclusion of the correlation in the responses of NPP components directly in the risk computation. © 2011 Published by Elsevier B.V.
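The idea of performing a time-based assessment as a series of intensity-based assessments can be sketched numerically: the annual frequency of failure is the fragility, evaluated intensity bin by intensity bin, weighted by the annual frequency the hazard curve assigns to each bin. Both curves below are hypothetical placeholders, not plant data.

```python
# Time-based risk as a sum of intensity-based assessments:
# lambda_F = sum over IM bins of P(failure | IM) * frequency of that IM bin.
import math

def lognormal_cdf(x, median, beta):
    """Lognormal fragility curve: P(failure | IM = x)."""
    return 0.5 * (1.0 + math.erf(math.log(x / median) / (beta * math.sqrt(2.0))))

def hazard(im):
    """Hypothetical hazard curve: annual frequency of exceeding IM (in g)."""
    return 1e-4 * im ** -2.5

# discretize intensity and accumulate P(F | im) * |d lambda| over each bin
ims = [0.1 + 0.01 * i for i in range(300)]        # 0.1 g up to ~3.1 g
lam_f = 0.0
for lo, hi in zip(ims, ims[1:]):
    mid = 0.5 * (lo + hi)
    d_lambda = hazard(lo) - hazard(hi)            # annual frequency in this bin
    lam_f += lognormal_cdf(mid, median=0.9, beta=0.4) * d_lambda
```

In the paper's framework the fragility term is itself the output of an intensity-based assessment over many correlated component responses, rather than a closed-form lognormal as assumed here.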
Bouça-Machado, Raquel; Rosário, Madalena; Alarcão, Joana; Correia-Guedes, Leonor; Abreu, Daisy; Ferreira, Joaquim J
2017-01-25
Over the past decades there has been a significant increase in the number of published clinical trials in palliative care. However, empirical evidence suggests that there are methodological problems in the design and conduct of studies, which raises questions about the validity and generalisability of the results and of the strength of the available evidence. We sought to evaluate the methodological characteristics and assess the quality of reporting of clinical trials in palliative care. We performed a systematic review of published clinical trials assessing therapeutic interventions in palliative care. Trials were identified using MEDLINE (from its inception to February 2015). We assessed methodological characteristics and describe the quality of reporting using the Cochrane Risk of Bias tool. We retrieved 107 studies. The most common medical field studied was oncology, and 43.9% of trials evaluated pharmacological interventions. Symptom control and physical dimensions (e.g. intervention on pain, breathlessness, nausea) were the palliative care-specific issues most studied. We found under-reporting of key information, in particular on random sequence generation, allocation concealment, and blinding. While the number of clinical trials in palliative care has increased over time, methodological quality remains suboptimal, which compromises the strength of the available evidence. Therefore, a greater effort is needed to enable the appropriate performance of future studies and increase the robustness of evidence-based medicine in this important field.
NASA Astrophysics Data System (ADS)
Neriani, Kelly E.; Herbranson, Travis J.; Reis, George A.; Pinkus, Alan R.; Goodyear, Charles D.
2006-05-01
While vast numbers of image enhancing algorithms have already been developed, the majority of these algorithms have not been assessed in terms of their visual performance-enhancing effects using militarily relevant scenarios. The goal of this research was to apply a visual performance-based assessment methodology to evaluate six algorithms that were specifically designed to enhance the contrast of digital images. The image enhancing algorithms used in this study included three different histogram equalization algorithms, the Autolevels function, the Recursive Rational Filter technique described in Marsi, Ramponi, and Carrato [1], and the multiscale Retinex algorithm described in Rahman, Jobson and Woodell [2]. The methodology used in the assessment has been developed to acquire objective human visual performance data as a means of evaluating the contrast enhancement algorithms. Objective performance metrics, response time and error rate, were used to compare algorithm enhanced images versus two baseline conditions, original non-enhanced images and contrast-degraded images. Observers completed a visual search task using a spatial forced-choice paradigm. Observers searched images for a target (a military vehicle) hidden among foliage and then indicated in which quadrant of the screen the target was located. Response time and percent correct were measured for each observer. Results of the study and future directions are discussed.
Rose, Micah; Rice, Stephen; Craig, Dawn
2018-06-01
Since 2004, National Institute for Health and Care Excellence (NICE) methodological guidance for technology appraisals has emphasised a strong preference for using the validated EuroQol 5-Dimensions (EQ-5D) quality-of-life instrument, measuring patient health status from patients or carers, and using the general public's preference-based valuation of different health states when assessing health benefits in economic evaluations. The aim of this study was to review all NICE single technology appraisals (STAs) for breast cancer treatments to explore consistency in the use of utility scores in light of NICE methodological guidance. A review of all published breast cancer STAs was undertaken using all publicly available STA documents for each included assessment. Utility scores were assessed for consistency with NICE-preferred methods and original data sources. Furthermore, academic assessment group work undertaken during the STA process was examined to evaluate the emphasis of NICE-preferred quality-of-life measurement methods. Twelve breast cancer STAs were identified, and many STAs used evidence that did not follow NICE's preferred utility score measurement methods. Recent STA submissions show companies using EQ-5D and mapping. Academic assessment groups rarely emphasized NICE-preferred methods, and queries about preferred methods were rare. While there appears to be a trend in recent STA submissions towards following NICE methodological guidance, historically STA guidance in breast cancer has generally not used NICE's preferred methods. Future STAs in breast cancer and reviews of older guidance should ensure that utility measurement methods are consistent with the NICE reference case to help produce consistent, equitable decision making.
Kobayashi, Leo; Gosbee, John W; Merck, Derek L
2017-07-01
(1) To develop a clinical microsystem simulation methodology for alarm fatigue research with a human factors engineering (HFE) assessment framework and (2) to explore its application to the comparative examination of different approaches to patient monitoring and provider notification. Problems with the design, implementation, and real-world use of patient monitoring systems result in alarm fatigue. A multidisciplinary team is developing an open-source tool kit to promote bedside informatics research and mitigate alarm fatigue. Simulation, HFE, and computer science experts created a novel simulation methodology to study alarm fatigue. Featuring multiple interconnected simulated patient scenarios with scripted timeline, "distractor" patient care tasks, and triggered true and false alarms, the methodology incorporated objective metrics to assess provider and system performance. Developed materials were implemented during institutional review board-approved study sessions that assessed and compared an experimental multiparametric alerting system with a standard monitor telemetry system for subject response, use characteristics, and end-user feedback. A four-patient simulation setup featuring objective metrics for participant task-related performance and response to alarms was developed along with accompanying structured HFE assessment (questionnaire and interview) for monitor systems use testing. Two pilot and four study sessions with individual nurse subjects elicited true alarm and false alarm responses (including diversion from assigned tasks) as well as nonresponses to true alarms. In-simulation observation and subject questionnaires were used to test the experimental system's approach to suppressing false alarms and alerting providers. A novel investigative methodology applied simulation and HFE techniques to replicate and study alarm fatigue in controlled settings for systems assessment and experimental research purposes.
Methodological quality and descriptive characteristics of prosthodontic-related systematic reviews.
Aziz, T; Compton, S; Nassar, U; Matthews, D; Ansari, K; Flores-Mir, C
2013-04-01
Ideally, healthcare systematic reviews (SRs) should be beneficial to practicing professionals in making evidence-based clinical decisions. However, the conclusions drawn from SRs are directly related to the quality of the SR and of the included studies. The aim was to investigate the methodological quality and key descriptive characteristics of SRs published in prosthodontics. Methodological quality was analysed using the Assessment of Multiple Reviews (AMSTAR) tool. Several electronic resources (MEDLINE, EMBASE, Web of Science and American Dental Association's Evidence-based Dentistry website) were searched. In total 106 SRs were located. Key descriptive characteristics and methodological quality features were gathered and assessed, and descriptive and inferential statistical testing performed. Most SRs in this sample originated from the European continent followed by North America. Two to five authors conducted most SRs; the majority were affiliated with academic institutions and had prior experience publishing SRs. The majority of SRs were published in specialty dentistry journals, with implant or implant-related topics the primary topics of interest for most. According to AMSTAR, most quality aspects were adequately fulfilled by less than half of the reviews. Publication bias and grey literature searches were the most poorly adhered-to components. Overall, the methodological quality of the prosthodontic-related SRs was deemed limited. Future recommendations include that authors receive prior training in conducting SRs and that journals adopt a universal checklist addressing all key characteristics of an unbiased SR process. © 2013 Blackwell Publishing Ltd.
Kwag, Koren Hyogene; González-Lorenzo, Marien; Banzi, Rita; Bonovas, Stefanos
2016-01-01
Background The complexity of modern practice requires health professionals to be active information-seekers. Objective Our aim was to review the quality and progress of point-of-care information summaries—Web-based medical compendia that are specifically designed to deliver pre-digested, rapidly accessible, comprehensive, and periodically updated information to health care providers. We aimed to evaluate product claims of being evidence-based. Methods We updated our previous evaluations by searching Medline, Google, librarian association websites, and conference proceedings from August 2012 to December 2014. We included Web-based, regularly updated point-of-care information summaries with claims of being evidence-based. We extracted data on the general characteristics and content presentation of products, and we quantitatively assessed their breadth of disease coverage, editorial quality, and evidence-based methodology. We assessed potential relationships between these dimensions and compared them with our 2008 assessment. Results We screened 58 products; 26 met our inclusion criteria. Nearly a quarter (6/26, 23%) were newly identified in 2014. We accessed and analyzed 23 products for content presentation and quantitative dimensions. Most summaries were developed by major publishers in the United States and the United Kingdom; no products derived from low- and middle-income countries. The main target audience remained physicians, although nurses and physiotherapists were increasingly represented. Best Practice, Dynamed, and UptoDate scored the highest across all dimensions. The majority of products did not excel across all dimensions: we found only a moderate positive correlation between editorial quality and evidence-based methodology (r=.41, P=.0496). However, all dimensions improved from 2008: editorial quality (P=.01), evidence-based methodology (P=.015), and volume of diseases and medical conditions (P<.001). 
Conclusions Medical and scientific publishers are investing substantial resources towards the development and maintenance of point-of-care summaries. The number of these products has increased since 2008 along with their quality. Best Practice, Dynamed, and UptoDate scored the highest across all dimensions, while others that were marketed as evidence-based were less reliable. Individuals and institutions should regularly assess the value of point-of-care summaries as their quality changes rapidly over time. PMID:26786976
Kwag, Koren Hyogene; González-Lorenzo, Marien; Banzi, Rita; Bonovas, Stefanos; Moja, Lorenzo
2016-01-19
The complexity of modern practice requires health professionals to be active information-seekers. Our aim was to review the quality and progress of point-of-care information summaries-Web-based medical compendia that are specifically designed to deliver pre-digested, rapidly accessible, comprehensive, and periodically updated information to health care providers. We aimed to evaluate product claims of being evidence-based. We updated our previous evaluations by searching Medline, Google, librarian association websites, and conference proceedings from August 2012 to December 2014. We included Web-based, regularly updated point-of-care information summaries with claims of being evidence-based. We extracted data on the general characteristics and content presentation of products, and we quantitatively assessed their breadth of disease coverage, editorial quality, and evidence-based methodology. We assessed potential relationships between these dimensions and compared them with our 2008 assessment. We screened 58 products; 26 met our inclusion criteria. Nearly a quarter (6/26, 23%) were newly identified in 2014. We accessed and analyzed 23 products for content presentation and quantitative dimensions. Most summaries were developed by major publishers in the United States and the United Kingdom; no products derived from low- and middle-income countries. The main target audience remained physicians, although nurses and physiotherapists were increasingly represented. Best Practice, Dynamed, and UptoDate scored the highest across all dimensions. The majority of products did not excel across all dimensions: we found only a moderate positive correlation between editorial quality and evidence-based methodology (r=.41, P=.0496). However, all dimensions improved from 2008: editorial quality (P=.01), evidence-based methodology (P=.015), and volume of diseases and medical conditions (P<.001). 
Medical and scientific publishers are investing substantial resources towards the development and maintenance of point-of-care summaries. The number of these products has increased since 2008 along with their quality. Best Practice, Dynamed, and UptoDate scored the highest across all dimensions, while others that were marketed as evidence-based were less reliable. Individuals and institutions should regularly assess the value of point-of-care summaries as their quality changes rapidly over time.
Horvath, Karl; Semlitsch, Thomas; Jeitler, Klaus; Abuzahra, Muna E; Posch, Nicole; Domke, Andreas; Siebenhofer, Andrea
2016-01-01
Objectives Identification of sufficiently trustworthy top 5 list recommendations from the US Choosing Wisely campaign. Setting Not applicable. Participants All top 5 list recommendations available from the American Board of Internal Medicine Foundation website. Main outcome measures/interventions Compilation of US top 5 lists and search for current German highly trustworthy (S3) guidelines. Extraction of guideline recommendations, including grade of recommendation (GoR), for suggestions comparable to top 5 list recommendations. For recommendations without guideline equivalents, the methodological quality of the top 5 list development process was assessed using criteria similar to that used to judge guidelines, and relevant meta-literature was identified in cited references. Judgement of sufficient trustworthiness of top 5 list recommendations was based either on an ‘A’ GoR of guideline equivalents or on high methodological quality and citation of relevant meta-literature. Results 412 top 5 list recommendations were identified. For 75 (18%), equivalents were found in current German S3 guidelines. 44 of these recommendations were associated with an ‘A’ GoR, or a strong recommendation based on strong evidence, and 26 had a ‘B’ or a ‘C’ GoR. No GoR was provided for 5 recommendations. 337 recommendations had no equivalent in the German S3 guidelines. The methodological quality of the development process was high and relevant meta-literature was cited for 87 top 5 list recommendations. For a further 36, either the methodological quality was high without any meta-literature citations or meta-literature citations existed but the methodological quality was lacking. For the remaining 214 recommendations, either the methodological quality was lacking and no literature was cited or the methodological quality was generally unsatisfactory. Conclusions 131 of current US top 5 list recommendations were found to be sufficiently trustworthy. 
For a substantial number of current US top 5 list recommendations, their trustworthiness remains unclear. Methodological requirements for developing top 5 lists are recommended. PMID:27855098
Quality Assessment of TPB-Based Questionnaires: A Systematic Review
Oluka, Obiageli Crystal; Nie, Shaofa; Sun, Yi
2014-01-01
Objective This review is aimed at assessing the quality of questionnaires and their development process based on the theory of planned behavior (TPB) change model. Methods A systematic literature search for studies with the primary aim of TPB-based questionnaire development was conducted in relevant databases between 2002 and 2012 using selected search terms. Ten of 1,034 screened abstracts met the inclusion criteria and were assessed for methodological quality using two different appraisal tools: one for the overall methodological quality of each study and the other developed for the appraisal of the questionnaire content and development process. Both appraisal tools consisted of items regarding the likelihood of bias in each study and were eventually combined to give the overall quality score for each included study. Results 8 of the 10 included studies showed low risk of bias in the overall quality assessment of each study, while 9 of the studies were of high quality based on the quality appraisal of questionnaire content and development process. Conclusion Quality appraisal of the questionnaires in the 10 reviewed studies was successfully conducted, highlighting the top problem areas (including: sample size estimation; inclusion of direct and indirect measures; and inclusion of questions on demographics) in the development of TPB-based questionnaires and the need for researchers to provide a more detailed account of their development process. PMID:24722323
Treves-Kagan, Sarah; Naidoo, Evasen; Gilvydis, Jennifer M; Raphela, Elsie; Barnhart, Scott; Lippman, Sheri A
2017-09-01
Successful HIV prevention programming requires engaging communities in the planning process and responding to the social environmental factors that shape health and behaviour in a specific local context. We conducted two community-based situational analyses to inform a large, comprehensive HIV prevention programme in two rural districts of North West Province South Africa in 2012. The methodology includes: initial partnership building, goal setting and background research; 1 week of field work; in-field and subsequent data analysis; and community dissemination and programmatic incorporation of results. We describe the methodology and a case study of the approach in rural South Africa; assess if the methodology generated data with sufficient saturation, breadth and utility for programming purposes; and evaluate if this process successfully engaged the community. Between the two sites, 87 men and 105 women consented to in-depth interviews; 17 focus groups were conducted; and 13 health facilities and 7 NGOs were assessed. The methodology succeeded in quickly collecting high-quality data relevant to tailoring a comprehensive HIV programme and created a strong foundation for community engagement and integration with local health services. This methodology can be an accessible tool in guiding community engagement and tailoring future combination HIV prevention and care programmes.
Ubago Pérez, Ruth; Castillo Muñoz, María Auxiliadora; Banqueri, Mercedes Galván; García Estepa, Raúl; Alfaro Lara, Eva Rocío; Vega Coca, María Dolores; Beltrán Calvo, Carmen; Molina López, Teresa
The European network for Health Technology Assessment (EUnetHTA) is the network of public health technology assessment (HTA) agencies and entities from across the EU. In this context, the HTA Core Model® has been developed. The Andalusian Agency for Health Technology Assessment (AETSA) is a member of the Spanish HTA Network and the EUnetHTA collaboration. In addition, AETSA participates in the new EUnetHTA Joint Action 3 (JA, 2016-2019). Furthermore, AETSA works on pharmaceutical assessments. Part of this work involves drafting therapeutic positioning reports (TPRs) on drugs that have recently been granted marketing authorisation, which is overseen by the Spanish Agency of Medicines and Medical Devices (AEMPS). AETSA contributes by drafting "Evidence synthesis reports: pharmaceuticals" in which a rapid comparative efficacy and safety assessment is performed for drugs for which a TPR will be created. To create this type of report, AETSA follows its own methodological guideline based on EUnetHTA guidelines and the HTA Core Model®. In this paper, the methodology that AETSA has developed to create the guideline for "Evidence synthesis reports: pharmaceuticals" is described. The structure of the report itself is also presented. Copyright © 2016 SESPAS. Publicado por Elsevier España, S.L.U. All rights reserved.
NASA Astrophysics Data System (ADS)
Belyaeva, Svetlana; Makeeva, Tatyana; Chugunov, Andrei; Andreeva, Peraskovya
2018-03-01
An important condition for the effective renovation of housing in a region through the realization of high-rise construction projects is the attraction of investment by forming a favorable investment climate, as well as the reduction of administrative barriers in construction and the renewal of the fixed assets of housing and communal services. The article proposes methodological bases for assessing the state of the investment climate in the region, as well as a methodology for the formation and evaluation of the investment program of a housing and communal services enterprise. The proposed methodologies are tested on the example of the Voronezh region. The authors also show the necessity and expediency of using a consulting mechanism in the development of state and non-state investment projects and programs.
NASA Astrophysics Data System (ADS)
Audebert, M.; Clément, R.; Touze-Foltz, N.; Günther, T.; Moreau, S.; Duquennoi, C.
2014-12-01
Leachate recirculation is a key process in municipal waste landfills functioning as bioreactors. To quantify the water content and to assess the leachate injection system, in-situ methods are required to obtain spatially distributed information, usually electrical resistivity tomography (ERT). This geophysical method is based on the inversion process, which presents two major problems in terms of delimiting the infiltration area. First, it is difficult for ERT users to choose an appropriate inversion parameter set. Indeed, it might not be sufficient to interpret only the optimum model (i.e. the model with the chosen regularisation strength) because it is not necessarily the model which best represents the physical process studied. Second, it is difficult to delineate the infiltration front based on resistivity models because of the smoothness of the inversion results. This paper proposes a new methodology called MICS (multiple inversions and clustering strategy), which allows ERT users to improve the delimitation of the infiltration area in leachate injection monitoring. The MICS methodology is based on (i) a multiple inversion step by varying the inversion parameter values to take a wide range of resistivity models into account and (ii) a clustering strategy to improve the delineation of the infiltration front. In this paper, MICS was assessed on two types of data. First, a numerical assessment allows us to optimise and test MICS for different infiltration area sizes, contrasts and shapes. Second, MICS was applied to a field data set gathered during leachate recirculation on a bioreactor.
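The two-step MICS idea, many inversions followed by clustering, can be illustrated with a toy stand-in: instead of thresholding one smooth resistivity model, each cell is characterized across several "inversion results" and then classified by a two-cluster k-means into infiltrated versus background. All numbers below are fabricated for illustration and do not come from the bioreactor data set.

```python
# Toy sketch of the MICS strategy: simulate several resistivity models per cell
# (standing in for inversions with different parameter sets), then cluster the
# per-cell values into two classes instead of picking a single threshold.
import random

random.seed(0)

# hypothetical grid: 30 cells x 8 "inversions"; the first 10 cells are inside
# the infiltration area and are consistently low-resistivity across inversions
models = [[random.gauss(20.0 if cell < 10 else 100.0, 10.0) for _ in range(8)]
          for cell in range(30)]

def kmeans_1d(points, iters=50):
    """Two-cluster 1-D k-means, initialized at the extremes of the data."""
    centers = [min(points), max(points)]
    for _ in range(iters):
        groups = ([], [])
        for p in points:
            groups[0 if abs(p - centers[0]) <= abs(p - centers[1]) else 1].append(p)
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return centers

cell_means = [sum(m) / len(m) for m in models]     # per-cell summary feature
centers = sorted(kmeans_1d(cell_means))
labels = [0 if abs(m - centers[0]) < abs(m - centers[1]) else 1
          for m in cell_means]                     # 0 = low resistivity (infiltrated)
```

The actual MICS workflow clusters full ERT models produced by varying the regularisation and other inversion parameters; the point retained here is that the class boundary emerges from the ensemble rather than from one hand-picked resistivity cutoff.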
Aldekhayel, Salah A; Alselaim, Nahar A; Magzoub, Mohi Eldin; Al-Qattan, Mohammad M; Al-Namlah, Abdullah M; Tamim, Hani; Al-Khayal, Abdullah; Al-Habdan, Sultan I; Zamakhshary, Mohammed F
2012-10-24
Script Concordance Test (SCT) is a new assessment tool that reliably assesses clinical reasoning skills. Previous descriptions of developing SCT-question banks were merely subjective. This study addresses two gaps in the literature: 1) conducting the first phase of a multistep validation process of SCT in Plastic Surgery, and 2) providing an objective methodology to construct a question bank based on SCT. After developing a test blueprint, 52 test items were written. Five validation questions were developed and a validation survey was established online. Seven reviewers were asked to answer this survey. They were recruited from two countries, Saudi Arabia and Canada, to improve the test's external validity. Their ratings were transformed into percentages. Analysis was performed to compare reviewers' ratings by looking at correlations, ranges, means, medians, and overall scores. Scores of reviewers' ratings were between 76% and 95% (mean 86% ± 5). We found poor correlations between reviewers (Pearson's: +0.38 to -0.22). Ratings of individual validation questions ranged between 0 and 4 (on a scale 1-5). Means and medians of these ranges were computed for each test item (mean: 0.8 to 2.4; median: 1 to 3). A subset of test items comprising 27 items was generated based on a set of inclusion and exclusion criteria. This study proposes an objective methodology for validation of SCT-question bank. Analysis of validation survey is done from all angles, i.e., reviewers, validation questions, and test items. Finally, a subset of test items is generated based on a set of criteria.
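The reviewer-agreement analysis described above amounts to computing Pearson correlations between reviewers and item-level rating ranges. A minimal sketch with hypothetical ratings follows (the study used seven reviewers and 52 items; two reviewers and six items here keep the example short):

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation coefficient between two rating vectors."""
    return float(np.corrcoef(np.asarray(x, float), np.asarray(y, float))[0, 1])

# Hypothetical ratings (scale 1-5) from two reviewers on six test items.
reviewer_a = [5, 4, 5, 3, 4, 2]
reviewer_b = [4, 5, 3, 4, 2, 5]

r = pearson(reviewer_a, reviewer_b)                            # agreement
ranges = [abs(a - b) for a, b in zip(reviewer_a, reviewer_b)]  # item spread
print(round(r, 2), max(ranges))
```

A low or negative correlation, as in this toy example, mirrors the poor inter-reviewer correlations reported in the study and motivates filtering items by a set of inclusion and exclusion criteria.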
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scolozzi, Rocco, E-mail: rocco.scolozzi@fmach.it; Geneletti, Davide, E-mail: geneletti@ing.unitn.it
Habitat loss and fragmentation often accompany land conversion and urbanization. Simple application of GIS-based landscape pattern indicators may not be sufficient to support meaningful biodiversity impact assessment. A review of the literature reveals that habitat definition and habitat fragmentation are frequently considered inadequately in environmental assessment, notwithstanding the increasing number of tools and approaches reported in the landscape ecology literature. This paper presents an approach for assessing impacts on habitats on a local scale, where availability of species data is often limited, developed for an alpine valley in northern Italy. The methodology is multiple-scale and species-oriented, and provides both qualitative and quantitative definitions of impact significance. A qualitative decision model is used to assess ecological values in order to support land-use decisions at the local level. Building on recent studies in the same region, the methodology integrates various approaches, such as landscape graphs, object-oriented rule-based habitat assessment and expert knowledge. The results provide insights into future habitat loss and fragmentation caused by land-use changes, and aim at supporting decision-making in planning and suggesting possible ecological compensation. Highlights: Many environmental assessments inadequately consider habitat loss and fragmentation. A species perspective for defining habitat quality and connectivity is needed. Species-based tools are difficult to apply when data availability is limited. We propose a species-oriented, multiple-scale qualitative approach. Advantages include being species-oriented and providing value-based information.
Assessment of undiscovered oil and gas resources of the Paris Basin, France, 2015
Schenk, Christopher J.; Charpentier, Ronald R.; Klett, Timothy R.; Tennyson, Marilyn E.; Mercier, Tracey J.; Le, Phoung A.; Brownfield, Michael E.; Pitman, Janet K.; Gaswirth, Stephanie B.; Marra, Kristen R.; Leathers, Heidi M.
2015-01-01
Using a geology-based assessment methodology, the U.S. Geological Survey estimated means of 222 million barrels of unconventional oil; 2,092 billion cubic feet of unconventional gas; 18 million barrels of conventional oil; and 47 billion cubic feet of conventional gas resources in the Paris Basin of France.
Schenk, Christopher J.; Brownfield, Michael E.; Charpentier, Ronald R.; Cook, Troy A.; Klett, Timothy R.; Pitman, Janet K.; Pollastro, Richard M.
2012-01-01
Using a geology-based assessment methodology, the U.S. Geological Survey estimated means of 5.8 billion barrels of oil and 115 trillion cubic feet of undiscovered natural gas in five geologic provinces in the areas of Papua New Guinea, eastern Indonesia, and East Timor.
Fifteen-Year-Old Pupils' Variable Handling Performance in the Context of Scientific Investigations.
ERIC Educational Resources Information Center
Donnelly, J. F.
1987-01-01
Reports findings on variable-handling aspects of pupil performance in investigatory tasks, using data from the British Assessment of Performance Unit (APU) national survey program. Discusses the significance of these findings for assessment methodology and for understanding of 15-year-olds' approaches to the variable-based logic of investigation.…
Klett, Timothy R.; Schenk, Christopher J.; Brownfield, Michael E.; Leathers-Miller, Heidi M.; Mercier, Tracey J.; Pitman, Janet K.; Tennyson, Marilyn E.
2016-11-10
Using a geology-based assessment methodology, the U.S. Geological Survey estimated mean continuous resources of 12 billion barrels of oil and 75 trillion cubic feet of gas in the Bazhenov Formation of the West Siberian Basin Province, Russia.
Marra, Kristen R.; Gaswirth, Stephanie B.; Schenk, Christopher J.; Leathers-Miller, Heidi M.; Klett, Timothy R.; Mercier, Tracey J.; Le, Phuong A.; Tennyson, Marilyn E.; Finn, Thomas M.; Hawkins, Sarah J.; Brownfield, Michael E.
2017-05-15
Using a geology-based assessment methodology, the U.S. Geological Survey estimated mean resources of 4.2 billion barrels of oil and 3.1 trillion cubic feet of gas in the Spraberry Formation of the Midland Basin, Permian Basin Province, Texas.
Assessment of Permian tight oil and gas resources in the Junggar basin of China, 2016
Potter, Christopher J.; Schenk, Christopher J.; Tennyson, Marilyn E.; Klett, Timothy R.; Gaswirth, Stephanie B.; Leathers-Miller, Heidi M.; Finn, Thomas M.; Brownfield, Michael E.; Pitman, Janet K.; Mercier, Tracey J.; Le, Phuong A.; Drake, Ronald M.
2017-04-05
Using a geology-based assessment methodology, the U.S. Geological Survey estimated undiscovered, technically recoverable mean resources of 764 million barrels of oil and 3.5 trillion cubic feet of gas in tight reservoirs in the Permian Lucaogou Formation in the Junggar basin of northwestern China.
An Evaluation of Diet and Physical Activity Messaging in African American Churches
ERIC Educational Resources Information Center
Harmon, Brook E.; Blake, Christine E.; Thrasher, James F.; Hébert, James R.
2014-01-01
The use of faith-based organizations as sites to deliver diet and physical activity interventions is increasing. Methods to assess the messaging environment within churches are limited. Our research aimed to develop and test an objective assessment methodology to characterize health messages, particularly those related to diet and physical…
Methodical Bases for the Regional Information Potential Estimation
ERIC Educational Resources Information Center
Ashmarina, Svetlana I.; Khasaev, Gabibulla R.; Mantulenko, Valentina V.; Kasarin, Stanislav V.; Dorozhkin, Evgenij M.
2016-01-01
The relevance of the investigated problem is caused by the need to assess the level of informatization of the region and the insufficient development of the theoretical, content-technological, scientific and methodological aspects of assessing the region's information potential. The aim of the research work is to develop a…
Schenk, Christopher J.; Mercier, Tracey J.; Tennyson, Marilyn E.; Woodall, Cheryl A.; Finn, Thomas M.; Pitman, Janet K.; Gaswirth, Stephanie B.; Marra, Kristen R.; Le, Phuong A.; Klett, Timothy R.; Leathers-Miller, Heidi M.
2018-04-13
Using a geology-based assessment methodology, the U.S. Geological Survey estimated mean undiscovered, technically recoverable resources of 198 billion cubic feet of continuous gas in the Phosphoria Formation of the Wyoming Thrust Belt Province, Wyoming, Idaho, and Utah.
Assessment of unconventional oil and gas resources in Northeast Mexico, 2014
Schenk, Christopher J.; Charpentier, Ronald R.; Klett, Timothy R.; Tennyson, Marilyn E.; Gaswirth, Stephanie B.; Brownfield, Michael E.; Pawlewicz, Mark J.; Weaver, Jean Noe
2014-01-01
Using a geology-based assessment methodology, the U.S. Geological Survey estimated means of 0.78 billion barrels of unconventional oil, 23.5 trillion cubic feet of unconventional gas, and 0.88 billion barrels of natural gas liquids in the Sabinas Basin, Burgos Basin, and Tampico-Misantla Basin provinces of northeast Mexico.
Assessment of undiscovered conventional oil and gas resources of six geologic provinces of China
Charpentier, Ronald R.; Schenk, Christopher J.; Brownfield, Michael E.; Cook, Troy A.; Klett, Timothy R.; Pitman, Janet K.; Pollastro, Richard M.
2012-01-01
Using a geology-based assessment methodology, the U.S. Geological Survey estimated mean volumes of undiscovered conventional petroleum resources in six geologic provinces of China at 14.9 billion barrels of oil, 87.6 trillion cubic feet of natural gas, and 1.4 billion barrels of natural-gas liquids.
Harder, Thomas; Takla, Anja; Eckmanns, Tim; Ellis, Simon; Forland, Frode; James, Roberta; Meerpohl, Joerg J; Morgan, Antony; Rehfuess, Eva; Schünemann, Holger; Zuiderent-Jerak, Teun; de Carvalho Gomes, Helena; Wichmann, Ole
2017-10-01
Decisions in public health should be based on the best available evidence, reviewed and appraised using a rigorous and transparent methodology. The Project on a Framework for Rating Evidence in Public Health (PRECEPT) defined a methodology for evaluating and grading evidence in infectious disease epidemiology, prevention and control that takes different domains and question types into consideration. The methodology rates evidence in four domains: disease burden, risk factors, diagnostics and intervention. The framework guiding it has four steps going from overarching questions to an evidence statement. In step 1, approaches for identifying relevant key areas and developing specific questions to guide systematic evidence searches are described. In step 2, methodological guidance for conducting systematic reviews is provided; 15 study quality appraisal tools are proposed and an algorithm is given for matching a given study design with a tool. In step 3, a standardised evidence-grading scheme using the Grading of Recommendations Assessment, Development and Evaluation Working Group (GRADE) methodology is provided, whereby findings are documented in evidence profiles. Step 4 consists of preparing a narrative evidence summary. Users of this framework should be able to evaluate and grade scientific evidence from the four domains in a transparent and reproducible way.
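Step 2's matching of a study design to one of the 15 appraisal tools can be pictured as a simple lookup. The mapping below is illustrative only: the tool names are real instruments, but the assignments are assumptions, not PRECEPT's actual algorithm:

```python
# Illustrative design-to-tool lookup in the spirit of PRECEPT step 2.
# The tools are real instruments, but these assignments are assumptions.
APPRAISAL_TOOLS = {
    "randomized controlled trial": "Cochrane risk-of-bias tool",
    "cohort study": "Newcastle-Ottawa Scale",
    "case-control study": "Newcastle-Ottawa Scale",
    "diagnostic accuracy study": "QUADAS-2",
    "systematic review": "AMSTAR",
}

def match_tool(study_design: str) -> str:
    """Return the appraisal tool for a study design, with a fallback."""
    return APPRAISAL_TOOLS.get(study_design.strip().lower(), "generic checklist")

print(match_tool("Cohort study"))
```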
Technical Feasibility Aspects of the Geothermal Resource Reporting Methodology (GRRM)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Badgett, Alex; Young, Katherine R; Dobson, Patrick F.
This paper reviews the technical assessment of the Geothermal Resource Reporting Methodology (GRRM, http://en.openei.org/wiki/GRRM) being developed for reporting geothermal resources and project progress. The goal of the methodology is to provide the U.S. Department of Energy's Geothermal Technologies Office (GTO) with a consistent and comprehensible means of evaluating the impacts of its funding programs. The GRRM is designed to provide uniform assessment criteria for geothermal resource grades and the developmental phases of geothermal resource exploration and development. This resource grade system provides information on twelve attributes of geothermal resource locations (e.g., temperature, permeability, land access) to indicate potential for geothermal development. The GTO plans to use these protocols to help quantitatively identify the greatest barriers to geothermal development, develop measurable program goals that will have the greatest impact on geothermal deployment, objectively evaluate proposals based (in part) on a project's ability to contribute to program goals, monitor project progress, and report on GTO portfolio performance. The GRRM assesses three areas of geothermal potential: geological, socio-economic, and technical. Previous work and publications have discussed the geological aspects of this methodology (Young et al. 2015c); this paper details the development of the technical assessment of the GRRM. Technical development attributes considered include reservoir management, drilling, logistics, and power conversion.
Costa Gondim, João José; de Oliveira Albuquerque, Robson; Clayton Alves Nascimento, Anderson; García Villalba, Luis Javier; Kim, Tai-Hoon
2016-01-01
Concerns about security on the Internet of Things (IoT) cover data privacy and integrity, access control, and availability. IoT abuse in distributed denial of service attacks is a major issue, as typical IoT devices' limited computing, communications, and power resources are prioritized for implementing functionality rather than security features. Incidents involving such attacks have been reported, but without clear characterization and evaluation of threats and impacts. The main purpose of this work is to methodically assess the possible impacts of a specific class of attack, amplified reflection distributed denial of service (AR-DDoS), against IoT. The novel approach used to empirically examine the threat, running the attack over a controlled environment with IoT devices, adopts the perspective of an attacker. The methodology used in the tests includes that perspective and actively prospects for vulnerabilities in computer systems. It defines standardized procedures for tool-independent, strategy-based vulnerability assessment and the decision flows during the execution of penetration tests (pentests). After validation in different scenarios, the methodology was applied in AR-DDoS attack threat assessment. Results show that, depending on attack intensity, AR-DDoS saturates the reflector infrastructure. Concerns about AR-DDoS are therefore well founded, and the expected impact on abused IoT infrastructure and devices may be as severe as on the final victims. PMID:27827931
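The severity of an AR-DDoS attack is commonly summarized by the bandwidth amplification factor (BAF): the ratio of reflected response bytes to attacker request bytes. A small sketch with hypothetical packet sizes follows (the protocol figures are illustrative, broadly in line with commonly cited ranges, not measurements from this study):

```python
def amplification_factor(request_bytes: float, response_bytes: float) -> float:
    """Bandwidth amplification factor: bytes reflected toward the victim
    per byte the attacker sends to the reflector."""
    return response_bytes / request_bytes

# Hypothetical request/response sizes for two commonly abused protocols.
dns_baf = amplification_factor(64, 3000)    # open DNS resolver, large answer
ntp_baf = amplification_factor(234, 48000)  # NTP monlist-style response

print(round(dns_baf, 1), round(ntp_baf, 1))
```

The larger the BAF, the smaller the attacker-side bandwidth needed to saturate the reflector infrastructure and, downstream, the victim.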
A multicriteria-based methodology for site prioritisation in sediment management.
Alvarez-Guerra, Manuel; Viguri, Javier R; Voulvoulis, Nikolaos
2009-08-01
Decision-making for sediment management is a complex task that incorporates the selection of areas for remediation and the assessment of options for any mitigation required. The application of Multicriteria Analysis (MCA) to rank different areas according to their need for sediment management provides a great opportunity for prioritisation, the first step in an integrated methodology that ultimately aims to assess and select suitable alternatives for managing the identified priority sites. This paper develops a methodology that starts with the delimitation of management units within the areas of study, followed by the application of MCA methods that allow these management units to be ranked according to their need for remediation. The proposed process considers not only scientific evidence on sediment quality, but also other relevant aspects such as the social and economic criteria associated with such decisions. The methodology is illustrated with its application to the case study area of the Bay of Santander in northern Spain, highlighting some of the implications of utilising different MCA methods in the process. It also uses site-specific data to assess the subjectivity in the decision-making process, mainly reflected through the assignment of criteria weights and uncertainties in the criteria scores. Analysis of the sensitivity of the results to these factors is used to assess the stability and robustness of the ranking as a first step of the sediment management decision-making process.
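The MCA ranking and its sensitivity to criteria weights can be sketched with a weighted-sum model; the management units, criteria, scores, and weights below are hypothetical, and the paper's actual MCA methods (e.g. outranking approaches) may differ:

```python
import numpy as np

# Hypothetical management units scored on three criteria (0-1, higher =
# greater need for remediation): contamination, social pressure, economics.
units = ["A", "B", "C"]
scores = np.array([
    [0.9, 0.4, 0.6],   # unit A
    [0.5, 0.8, 0.7],   # unit B
    [0.3, 0.3, 0.2],   # unit C
])

def rank_units(weights):
    """Weighted-sum MCA: rank units by aggregate score, highest need first."""
    w = np.asarray(weights, dtype=float)
    aggregate = scores @ (w / w.sum())     # normalise weights to sum to 1
    return [units[i] for i in np.argsort(-aggregate)]

# Sensitivity check: the priority ranking can change with the weighting.
print(rank_units([0.6, 0.2, 0.2]))   # contamination-dominated weighting
print(rank_units([0.2, 0.6, 0.2]))   # socially dominated weighting
```

That the top-priority unit flips between the two weightings illustrates why the paper treats weight assignment as a key source of subjectivity to be probed by sensitivity analysis.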
Approaches to Children’s Exposure Assessment: Case Study with Diethylhexylphthalate (DEHP)
Ginsberg, Gary; Ginsberg, Justine; Foos, Brenda
2016-01-01
Children’s exposure assessment is a key input into epidemiology studies, risk assessment and source apportionment. The goals of this article are to describe a methodology for children’s exposure assessment that can be used for these purposes and to apply the methodology to source apportionment for the case study chemical, diethylhexylphthalate (DEHP). A key feature is the comparison of total (aggregate) exposure calculated via a pathways approach to that derived from a biomonitoring approach. The 4-step methodology and its results for DEHP are: (1) Prioritization of life stages and exposure pathways, with pregnancy, breast-fed infants, and toddlers the focus of the case study and pathways selected that are relevant to these groups; (2) Estimation of pathway-specific exposures by life stage wherein diet was found to be the largest contributor for pregnant women, breast milk and mouthing behavior for the nursing infant and diet, house dust, and mouthing for toddlers; (3) Comparison of aggregate exposure by pathways vs biomonitoring-based approaches wherein good concordance was found for toddlers and pregnant women providing confidence in the exposure assessment; (4) Source apportionment in which DEHP presence in foods, children’s products, consumer products and the built environment are discussed with respect to early life mouthing, house dust and dietary exposure. A potential fifth step of the method involves the calculation of exposure doses for risk assessment which is described but outside the scope for the current case study. In summary, the methodology has been used to synthesize the available information to identify key sources of early life exposure to DEHP. PMID:27376320
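The pathway-based aggregate exposure of step 3 is essentially a body-weight-normalised sum of pathway-specific intakes. A minimal sketch with placeholder numbers follows (the intake values and body weight are hypothetical, not DEHP estimates from the study):

```python
def aggregate_intake(pathways_ug_per_day: dict, body_weight_kg: float) -> float:
    """Pathway-based aggregate exposure in ug/kg-bw/day: sum the
    pathway-specific intakes and normalise to body weight."""
    return sum(pathways_ug_per_day.values()) / body_weight_kg

# Hypothetical toddler pathway intakes (ug/day) and body weight.
toddler = {"diet": 30.0, "house dust": 8.0, "mouthing": 12.0}
total = aggregate_intake(toddler, body_weight_kg=12.0)
dominant = max(toddler, key=toddler.get)   # largest contributing pathway
print(round(total, 2), dominant)
```

Comparing such a pathway-summed total against a biomonitoring-derived estimate is the concordance check the methodology uses to build confidence in the exposure assessment.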
Methodology development for evaluation of selective-fidelity rotorcraft simulation
NASA Technical Reports Server (NTRS)
Lewis, William D.; Schrage, D. P.; Prasad, J. V. R.; Wolfe, Daniel
1992-01-01
This paper addresses the initial step toward the goal of establishing performance and handling qualities acceptance criteria for real-time rotorcraft simulators through a planned research effort to quantify the system capabilities of 'selective fidelity' simulators. Within this framework the simulator is classified based on the required task. The simulator is evaluated by separating the various subsystems (visual, motion, etc.) and applying corresponding fidelity constants based on the specific task. This methodology not only provides an assessment technique, but also a technique to determine the required levels of subsystem fidelity for a specific task.
Assessing the impact of modeling limits on intelligent systems
NASA Technical Reports Server (NTRS)
Rouse, William B.; Hammer, John M.
1990-01-01
The knowledge bases underlying intelligent systems are validated. A general conceptual framework is provided for considering the roles in intelligent systems of models of physical, behavioral, and operational phenomena. A methodology is described for identifying limits in particular intelligent systems, and the use of the methodology is illustrated via an experimental evaluation of the pilot-vehicle interface within the Pilot's Associate. The requirements and functionality are outlined for a computer based knowledge engineering environment which would embody the approach advocated and illustrated in earlier discussions. Issues considered include the specific benefits of this functionality, the potential breadth of applicability, and technical feasibility.
Analytical and simulator study of advanced transport
NASA Technical Reports Server (NTRS)
Levison, W. H.; Rickard, W. W.
1982-01-01
An analytic methodology, based on the optimal-control pilot model, was demonstrated for assessing longitudinal-axis handling qualities of transport aircraft in final approach. Calibration of the methodology is largely in terms of closed-loop performance requirements, rather than specific vehicle response characteristics, and is based on a combination of published criteria, pilot preferences, physical limitations, and engineering judgment. Six longitudinal-axis approach configurations were studied, covering a range of handling qualities problems, including the presence of flexible aircraft modes. The analytical procedure was used to obtain predictions of Cooper-Harper ratings, a scalar quadratic performance index, and rms excursions of important system variables.
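The scalar quadratic performance index of the optimal-control pilot model has the form J = E[x'Qx + u'Ru]. A sketch of evaluating it as a time average over a sampled trajectory follows; the states, inputs, and weighting matrices below are hypothetical, not the paper's calibrated values:

```python
import numpy as np

def quadratic_index(X, U, Q, R):
    """Scalar quadratic performance index: time-average of x'Qx + u'Ru over a
    sampled trajectory (rows of X and U are time samples of state and input)."""
    cost = np.einsum('ti,ij,tj->t', X, Q, X) + np.einsum('ti,ij,tj->t', U, R, U)
    return float(cost.mean())

# Hypothetical 2-state, 1-input approach trajectory (3 time samples).
X = np.array([[1.0, 0.0], [0.5, 0.2], [0.1, 0.1]])   # state excursions
U = np.array([[0.3], [0.2], [0.1]])                   # control activity
Q = np.diag([2.0, 1.0])                               # state weighting
R = np.array([[0.5]])                                 # control weighting
J = quadratic_index(X, U, Q, R)
print(round(J, 3))
```

In the methodology the weightings encode closed-loop performance requirements, so the index penalizes large state excursions and control activity jointly.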
Incorporating scenario-based simulation into a hospital nursing education program.
Nagle, Beth M; McHale, Jeanne M; Alexander, Gail A; French, Brian M
2009-01-01
Nurse educators are challenged to provide meaningful and effective learning opportunities for both new and experienced nurses. Simulation as a teaching and learning methodology is being embraced by nursing in academic and practice settings to provide innovative educational experiences to assess and develop clinical competency, promote teamwork, and improve care processes. This article provides an overview of the historical basis for using simulation in education, simulation methodologies, and perceived advantages and disadvantages. It also provides a description of the integration of scenario-based programs using a full-scale patient simulator into nursing education programming at a large academic medical center.
A systems-based food safety evaluation: an experimental approach.
Higgins, Charles L; Hartfield, Barry S
2004-11-01
Food establishments are complex systems with inputs, subsystems, underlying forces that affect the system, outputs, and feedback. Building on past exploration of the hazard analysis critical control point concept and Ludwig von Bertalanffy's General Systems Theory, the National Park Service (NPS) is attempting to translate these ideas into a realistic field assessment of food service establishments and to use the information gathered by these methods to improve food safety. Over the last two years, an experimental systems-based methodology has been drafted, developed, and tested by the NPS Public Health Program. This methodology is described in this paper.
MEGASTAR: The Meaning of Energy Growth: An Assessment of Systems, Technologies, and Requirements
NASA Technical Reports Server (NTRS)
1974-01-01
A methodology for the display and analysis of postulated energy futures for the United States is presented. A systems approach that includes the methodology of technology assessment is used to examine three energy scenarios: the Westinghouse Nuclear Electric Economy, the Ford Technical Fix Base Case, and a MEGASTAR-generated Alternate to the Ford Technical Fix Base Case. The three scenarios represent different paths of energy consumption from the present to the year 2000. Associated with these paths are various mixes of fuels, conversion, distribution, conservation and end-use technologies. MEGASTAR presents the estimated times and unit requirements to supply the fuels, conversion and distribution systems for the postulated end uses for the three scenarios, and then estimates the aggregate manpower, materials, and capital requirements needed to develop the energy system described by each scenario. The total requirements and the energy subsystems for each scenario are assessed for their primary impacts in the areas of society, the environment, technology and the economy.
Lirio, R B; Dondériz, I C; Pérez Abalo, M C
1992-08-01
The methodology of Receiver Operating Characteristic curves based on the signal detection model is extended to evaluate the accuracy of two-stage diagnostic strategies. A computer program is developed for the maximum likelihood estimation of parameters that characterize the sensitivity and specificity of two-stage classifiers according to this extended methodology. Its use is briefly illustrated with data collected in a two-stage screening for auditory defects.
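For a serial two-stage screen in which only stage-1 positives proceed to stage 2, overall sensitivity and specificity follow directly from the per-stage values. The sketch below uses hypothetical stage accuracies; the combination rule shown is the standard serial-testing arithmetic, not necessarily the exact model estimated in the paper:

```python
def two_stage_serial(se1, sp1, se2, sp2):
    """Overall accuracy of a serial two-stage screen: only stage-1 positives
    are referred to stage 2, and a case must be flagged by both stages."""
    sensitivity = se1 * se2                # must test positive at both stages
    specificity = sp1 + (1 - sp1) * sp2    # either stage can clear a non-case
    return sensitivity, specificity

# Hypothetical per-stage accuracies for an auditory screening programme.
se, sp = two_stage_serial(se1=0.95, sp1=0.80, se2=0.90, sp2=0.85)
print(round(se, 3), round(sp, 3))
```

Serial combination trades sensitivity for specificity, which is the trade-off an extended ROC analysis of two-stage classifiers has to characterize.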
2017-10-01
to patient safety by addressing key methodological and conceptual gaps in healthcare simulation-based team training. The investigators are developing… The primary outcome of Aim 1a is a conceptually and methodologically sound training design architecture that supports the development and integration of team training.
Does Maltreatment Beget Maltreatment? A Systematic Review of the Intergenerational Literature
Thornberry, Terence P.; Knight, Kelly E.; Lovegrove, Peter J.
2014-01-01
In this paper, we critically review the literature testing the cycle of maltreatment hypothesis which posits continuity in maltreatment across adjacent generations. That is, we examine whether a history of maltreatment victimization is a significant risk factor for the later perpetration of maltreatment. We begin by establishing 11 methodological criteria that studies testing this hypothesis should meet. They include such basic standards as using representative samples, valid and reliable measures, prospective designs, and different reporters for each generation. We identify 47 studies that investigated this issue and then evaluate them with regard to the 11 methodological criteria. Overall, most of these studies report findings consistent with the cycle of maltreatment hypothesis. Unfortunately, at the same time, few of them satisfy the basic methodological criteria that we established; indeed, even the stronger studies in this area only meet about half of them. Moreover, the methodologically stronger studies present mixed support for the hypothesis. As a result, the positive association often reported in the literature appears to be based largely on the methodologically weaker designs. Based on our systematic methodological review, we conclude that this small and methodologically weak body of literature does not provide a definitive test of the cycle of maltreatment hypothesis. We conclude that it is imperative to develop more robust and methodologically adequate assessments of this hypothesis to more accurately inform the development of prevention and treatment programs. PMID:22673145
Groundwater vulnerability and risk mapping using GIS, modeling and a fuzzy logic tool.
Nobre, R C M; Rotunno Filho, O C; Mansur, W J; Nobre, M M M; Cosenza, C A N
2007-12-07
A groundwater vulnerability and risk mapping assessment, based on a source-pathway-receptor approach, is presented for an urban coastal aquifer in northeastern Brazil. A modified version of the DRASTIC methodology was used to map the intrinsic and specific groundwater vulnerability of a 292 km² study area. A fuzzy hierarchy methodology was adopted to evaluate the potential contaminant source index, including diffuse and point sources. Numerical modeling was performed for delineation of well capture zones, using MODFLOW and MODPATH. The integration of these elements provided the mechanism to assess groundwater pollution risks and identify areas that must be prioritized in terms of groundwater monitoring and restriction on use. A groundwater quality index based on nitrate and chloride concentrations was calculated, which had a positive correlation with the specific vulnerability index.
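The intrinsic DRASTIC index underlying such maps is a weighted sum of seven hydrogeologic parameter ratings. The sketch below uses the standard DRASTIC weights; the cell ratings are hypothetical, and the paper's modified DRASTIC may weight parameters differently:

```python
# DRASTIC intrinsic vulnerability index: weighted sum of seven hydrogeologic
# parameter ratings (ratings are site-specific, 1-10; the weights shown are
# the standard DRASTIC weights).
DRASTIC_WEIGHTS = {
    "depth_to_water": 5, "net_recharge": 4, "aquifer_media": 3,
    "soil_media": 2, "topography": 1, "vadose_zone_impact": 5,
    "hydraulic_conductivity": 3,
}

def drastic_index(ratings: dict) -> int:
    """Sum of rating x weight over the seven DRASTIC parameters."""
    return sum(ratings[p] * w for p, w in DRASTIC_WEIGHTS.items())

# Hypothetical ratings for one cell of the study grid.
cell = {"depth_to_water": 9, "net_recharge": 8, "aquifer_media": 6,
        "soil_media": 5, "topography": 10, "vadose_zone_impact": 8,
        "hydraulic_conductivity": 6}
print(drastic_index(cell))
```

Higher index values flag cells whose hydrogeologic setting makes the aquifer intrinsically more vulnerable, before any specific contaminant loading is considered.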
RESIDUAL RISK ASSESSMENT: MAGNETIC TAPE ...
This document describes the residual risk assessment for the Magnetic Tape Manufacturing source category. For stationary sources, section 112(f) of the Clean Air Act requires EPA to assess risks to human health and the environment following implementation of technology-based control standards. If these technology-based control standards do not provide an ample margin of safety, then EPA is required to promulgate additional standards. This document describes the methodology and results of the residual risk assessment performed for the Magnetic Tape Manufacturing source category. The results of this analysis will assist EPA in determining whether a residual risk rule for this source category is appropriate.
ERIC Educational Resources Information Center
Schmidt, Michael D.; Blizzard, C. Leigh; Venn, Alison J.; Cochrane, Jennifer A.; Dwyer, Terence
2007-01-01
The aim of this study was to summarize both practical and methodological issues in using pedometers to assess physical activity in a large epidemiologic study. As part of a population-based survey of cardiovascular disease risk factors, physical activity was assessed using pedometers and activity diaries in 775 men and women ages 25-64 years who…
Sala, Serenella; Goralczyk, Malgorzata
2013-10-01
The development and use of footprint methodologies for environmental assessment are increasingly important for both the scientific and political communities. Starting from the ecological footprint, developed at the beginning of the 1990s, several other footprints were defined, e.g., the carbon and water footprints. These footprints, even though based on different meanings of "footprint", integrate life cycle thinking and focus on challenging environmental impacts, including resource consumption, CO2 emissions leading to climate change, and water consumption. However, they usually neglect relevant sources of impacts, such as those related to the production and use of chemicals. This article presents and discusses the need for and relevance of developing a methodology for assessing the chemical footprint, coupling a life cycle-based approach with methodologies developed in other contexts, such as ERA and sustainability science. Furthermore, different concepts underpin the existing footprints, and the same could hold for the chemical footprint. At least two different approaches and steps to the chemical footprint can be envisaged, applicable at the micro- as well as the meso- and macroscale. The first step (step 1) accounts for chemical use and emissions along the life cycle of a product, sector, or entire economy, to assess potential impacts on ecosystems and human health. The second step (step 2) assesses the extent to which actual emissions of chemicals harm ecosystems beyond their capability to recover (the carrying capacity of the system). The latter step might contribute to the wide discussion on planetary boundaries for chemical pollution: the thresholds that should not be surpassed to guarantee a sustainable use of chemicals from an environmental safety perspective. Defining what the planetary boundaries for chemical pollution are, and how they should be identified, is an ongoing scientific challenge for ecotoxicology and ecology.
In this article, we present a case study at the macroscale for the European Union, in which the chemical footprint according to step 1 is calculated for the year 2005. A proposal for extending this approach toward step 2 is presented and discussed, complemented by a discussion on the challenges and the use of appropriate methodologies for assessing chemical footprints to stimulate further research and discussion on the topic. © 2013 SETAC.
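The two steps reduce to a small numeric skeleton: step 1 multiplies each emitted mass by a substance characterization factor and sums, and step 2 compares that aggregate impact against a carrying-capacity boundary. The substances, factor values, and boundary below are placeholders, not figures from the article.

```python
# Step 1: aggregate life-cycle chemical emissions into a potential-impact
# score: mass (kg) x characterization factor (impact units per kg).
def chemical_footprint(emissions_kg, cf):
    return sum(mass * cf[substance] for substance, mass in emissions_kg.items())

# Step 2: compare the aggregate impact against a carrying-capacity boundary;
# a ratio above 1 means emissions exceed the system's capability to recover.
def boundary_exceedance(impact, carrying_capacity):
    return impact / carrying_capacity

cf = {"zinc": 2.5e-2, "atrazine": 1.1e1}        # placeholder factors
emissions = {"zinc": 4.0e3, "atrazine": 1.5e2}  # placeholder kg/yr
impact = chemical_footprint(emissions, cf)
print(boundary_exceedance(impact, carrying_capacity=2.0e3))
```

The step-2 ratio is the quantity that connects the footprint to the planetary-boundary discussion in the abstract.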
López-Roldán, Ramón; Rubalcaba, Alicia; Martin-Alonso, Jordi; González, Susana; Martí, Vicenç; Cortina, Jose Luis
2016-01-01
A methodology has been developed to evaluate the potential risk of drinking water to the health of consumers. The assessment considered systemic and carcinogenic effects caused by oral ingestion of water, based on the reference data developed by the World Health Organisation (WHO) and the Risk Assessment Information System (RAIS) for chemical contaminants. Exposure is modelled as a hypothetical dose received by drinking this water, calculated from the analysed contaminants. An assessment of the chemical quality improvement of water produced in the Drinking Water Treatment Plant (DWTP) after integration of membrane technologies was performed. Series of concentration values covering up to 261 chemical parameters over 5 years (2008-2012) of raw and treated water in the Sant Joan Despí DWTP, at the lower part of the Llobregat River basin (NE Spain), were used. After application of the methodology, the resulting global indexes were below the thresholds, except for carcinogenic risk in the output of the DWTP, where the index was slightly above the threshold during 2008 and 2009, before the upgrade of the treatment works with membrane technologies was executed. The annual evolution of global indexes showed a reduction in the global values for all situations: the HQ systemic index based on RAIS dropped from 0.64 to 0.42 for surface water and from 0.61 to 0.31 for drinking water; the R carcinogenic index based on RAIS was negligible for input water and varied between 4.2×10^-5 and 7.4×10^-6 for drinking water; the W systemic index based on the WHO data varied between 0.41 and 0.16 for surface water and between 0.61 and 0.31 for drinking water. A specific analysis of the indexes associated with trihalomethanes (THMs) showed the same pattern. Copyright © 2015 Elsevier B.V. All rights reserved.
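The systemic (HQ) and carcinogenic (R) indexes follow the standard oral-ingestion form: a chronic daily intake (CDI) divided by a reference dose for HQ, or multiplied by a cancer slope factor for R. The sketch below uses the textbook CDI equation with illustrative default exposure parameters; these are not the study's actual values.

```python
def chronic_daily_intake(conc_mg_per_L, intake_L_day=2.0, ef_days=350,
                         ed_years=30, bw_kg=70.0, at_days=None):
    """CDI in mg/(kg*day) = C * IR * EF * ED / (BW * AT).
    Default AT is the exposure duration (non-carcinogenic convention);
    carcinogenic assessments average over a lifetime instead."""
    if at_days is None:
        at_days = ed_years * 365
    return conc_mg_per_L * intake_L_day * ef_days * ed_years / (bw_kg * at_days)

def hazard_quotient(cdi, rfd):      # systemic index: CDI / reference dose
    return cdi / rfd

def carcinogenic_risk(cdi, slope):  # carcinogenic index: CDI * slope factor
    return cdi * slope

cdi_sys = chronic_daily_intake(0.05)                    # e.g. 0.05 mg/L
cdi_car = chronic_daily_intake(0.05, at_days=70 * 365)  # lifetime averaging
print(hazard_quotient(cdi_sys, rfd=0.003))   # HQ > 1 flags systemic concern
print(carcinogenic_risk(cdi_car, slope=0.5))
```

Summing HQ (or R) over all analysed contaminants gives a global index of the kind compared against thresholds in the abstract.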
Schiffman, Eric L.; Truelove, Edmond L.; Ohrbach, Richard; Anderson, Gary C.; John, Mike T.; List, Thomas; Look, John O.
2011-01-01
AIMS The purpose of the Research Diagnostic Criteria for Temporomandibular Disorders (RDC/TMD) Validation Project was to assess the diagnostic validity of this examination protocol. An overview is presented, including Axis I and II methodology and descriptive statistics for the study participant sample. This paper details the development of reliable methods to establish the reference standards for assessing criterion validity of the Axis I RDC/TMD diagnoses. Validity testing for the Axis II biobehavioral instruments was based on previously validated reference standards. METHODS The Axis I reference standards were based on the consensus of 2 criterion examiners independently performing a comprehensive history, clinical examination, and evaluation of imaging. Intersite reliability was assessed annually for criterion examiners and radiologists. Criterion exam reliability was also assessed within study sites. RESULTS Study participant demographics were comparable to those of participants in previous studies using the RDC/TMD. Diagnostic agreement of the criterion examiners with each other and with the consensus-based reference standards was excellent with all kappas ≥ 0.81, except for osteoarthrosis (moderate agreement, k = 0.53). Intrasite criterion exam agreement with reference standards was excellent (k ≥ 0.95). Intersite reliability of the radiologists for detecting computed tomography-disclosed osteoarthrosis and magnetic resonance imaging-disclosed disc displacement was good to excellent (k = 0.71 and 0.84, respectively). CONCLUSION The Validation Project study population was appropriate for assessing the reliability and validity of the RDC/TMD Axis I and II. The reference standards used to assess the validity of Axis I TMD were based on reliable and clinically credible methods. PMID:20213028
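The agreement figures above (k = 0.53, kappas ≥ 0.81) are Cohen's kappa, chance-corrected agreement between two raters. A minimal sketch, with invented diagnosis labels standing in for the examiners' calls:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """(p_o - p_e) / (1 - p_e): observed agreement corrected for the
    agreement expected by chance from each rater's label frequencies."""
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[label] * freq_b[label] for label in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Two examiners diagnosing the same 8 cases (labels are illustrative)
a = ["OA", "OA", "DD", "none", "OA", "DD", "none", "none"]
b = ["OA", "DD", "DD", "none", "OA", "DD", "none", "OA"]
print(cohens_kappa(a, b))  # ≈ 0.63, "substantial" on common benchmarks
```

Kappa of 1 is perfect agreement; 0 is agreement no better than chance.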
NASA Astrophysics Data System (ADS)
Ye, Su; Pontius, Robert Gilmore; Rakshit, Rahul
2018-07-01
Object-based image analysis (OBIA) has gained widespread popularity for creating maps from remotely sensed data. Researchers routinely claim that OBIA procedures outperform pixel-based procedures; however, it is not immediately obvious how to evaluate the degree to which an OBIA map compares to reference information in a manner that accounts for the fact that the OBIA map consists of objects that vary in size and shape. Our study reviews 209 journal articles concerning OBIA published between 2003 and 2017. We focus on the three stages of accuracy assessment: (1) sampling design, (2) response design and (3) accuracy analysis. First, we report the literature's overall characteristics concerning OBIA accuracy assessment. Simple random sampling was the most used method among probability sampling strategies, slightly more than stratified sampling. Office-interpreted remotely sensed data were the dominant reference source. The literature reported accuracies ranging from 42% to 96%, with an average of 85%. A third of the articles failed to give sufficient information concerning accuracy methodology, such as sampling scheme and sample size. We found few studies that focused specifically on the accuracy of the segmentation. Second, we identify a recent increase of OBIA articles in using per-polygon approaches compared to per-pixel approaches for accuracy assessment. We clarify the respective impacts of the per-pixel and per-polygon approaches on sampling, response design and accuracy analysis. Our review defines the technical and methodological needs in the current per-polygon approaches, such as polygon-based sampling, analysis of mixed polygons, matching of mapped with reference polygons and assessment of segmentation accuracy. Our review summarizes and discusses the current issues in object-based accuracy assessment to provide guidance for improved accuracy assessments for OBIA.
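One plausible per-polygon accuracy analysis weights each assessed object by its area, so segments of varying size contribute proportionally. This is a sketch of that one choice, not a summary of the reviewed articles' practice, which the review shows to be inconsistent:

```python
def overall_accuracy(assessed):
    """Area-weighted overall accuracy for an object-based map: each
    (area, mapped_label, reference_label) polygon counts as correct
    when the labels agree, weighted by its area."""
    total = sum(area for area, _, _ in assessed)
    correct = sum(area for area, mapped, ref in assessed if mapped == ref)
    return correct / total

# Three assessed polygons: (area in ha, mapped class, reference class)
sample = [(10.0, "forest", "forest"),
          (5.0, "water", "urban"),
          (25.0, "urban", "urban")]
print(overall_accuracy(sample))  # → 0.875
```

An unweighted per-polygon count would score the same sample 2/3; the gap between the two figures is exactly the size-and-shape issue the review raises.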
A Screening Method for Assessing Cumulative Impacts
Alexeeff, George V.; Faust, John B.; August, Laura Meehan; Milanes, Carmen; Randles, Karen; Zeise, Lauren; Denton, Joan
2012-01-01
The California Environmental Protection Agency (Cal/EPA) Environmental Justice Action Plan calls for guidelines for evaluating “cumulative impacts.” As a first step toward such guidelines, a screening methodology for assessing cumulative impacts in communities was developed. The method, presented here, is based on the working definition of cumulative impacts adopted by Cal/EPA [1]: “Cumulative impacts means exposures, public health or environmental effects from the combined emissions and discharges in a geographic area, including environmental pollution from all sources, whether single or multi-media, routinely, accidentally, or otherwise released. Impacts will take into account sensitive populations and socio-economic factors, where applicable and to the extent data are available.” The screening methodology is built on this definition as well as current scientific understanding of environmental pollution and its adverse impacts on health, including the influence of both intrinsic, biological factors and non-intrinsic socioeconomic factors in mediating the effects of pollutant exposures. It addresses disparities in the distribution of pollution and health outcomes. The methodology provides a science-based tool to screen places for relative cumulative impacts, incorporating both the pollution burden on a community (including exposures to pollutants and their public health and environmental effects) and community characteristics, specifically sensitivity and socioeconomic factors. The screening methodology provides relative rankings to distinguish more highly impacted communities from less impacted ones. It may also help identify which factors are the greatest contributors to a community’s cumulative impact. It is not designed to provide quantitative estimates of community-level health impacts. A pilot screening analysis is presented here to illustrate the application of this methodology.
Once guidelines are adopted, the methodology can serve as a screening tool to help Cal/EPA programs prioritize their activities and target those communities with the greatest cumulative impacts. PMID:22470315
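A screening score of this kind is often computed as a pollution-burden component multiplied by a community-characteristics component, each an average of indicator percentiles. The multiplicative form and the rescaling below are assumptions for illustration, not Cal/EPA's published formula:

```python
def screening_score(pollution_percentiles, community_percentiles):
    """Relative cumulative-impact score: mean pollution-burden percentile
    times mean community-characteristics percentile, rescaled to ~0-100.
    Useful only for ranking, not for estimating health impacts."""
    burden = sum(pollution_percentiles) / len(pollution_percentiles)
    community = sum(community_percentiles) / len(community_percentiles)
    return burden * community / 100.0

# Rank two hypothetical communities for prioritization
town_a = screening_score([90, 70, 80], [85, 75])  # exposures/effects; sensitivity/SES
town_b = screening_score([40, 30, 50], [45, 35])
print(town_a > town_b)  # → True: town_a is the more impacted community
```

The multiplicative structure mirrors the abstract's framing: community characteristics mediate (amplify or damp) the effect of the pollution burden rather than simply adding to it.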
Creating Needs-Based Tiered Models for Assisted Living Reimbursement
ERIC Educational Resources Information Center
Howell-White, Sandra; Gaboda, Dorothy; Rosato, Nancy Scotto; Lucas, Judith A.
2006-01-01
Purpose: This research provides state policy makers and others interested in developing needs-based reimbursement models for Medicaid-funded assisted living with an evaluation of different methodologies that affect the structure and outcomes of these models. Design and Methods: We used assessment data from Medicaid-enrolled assisted living…
Establishing Verbal Repertoires in Children with Autism Using Function-Based Video Modeling
ERIC Educational Resources Information Center
Plavnick, Joshua B.; Ferreri, Summer J.
2011-01-01
Previous research suggests that language-training procedures for children with autism might be enhanced following an assessment of conditions that evoke emerging verbal behavior. The present investigation examined a methodology to teach recognizable mands based on environmental variables known to evoke participants' idiosyncratic communicative…
Exploratory Evaluation of a School-Based Child Sexual Abuse Prevention Program
ERIC Educational Resources Information Center
Barron, Ian G.; Topping, Keith J.
2013-01-01
Internationally, efficacy studies of school-based child sexual abuse prevention programs display a series of methodological shortcomings. Few studies include adolescent participants, recording of disclosures has been inconsistent, and no studies to date have assessed presenter adherence to program protocols or summated the costs of program…
Assessing secondary soil salinization risk based on the PSR sustainability framework.
Zhou, De; Lin, Zhulu; Liu, Liming; Zimmermann, David
2013-10-15
Risk assessment of secondary soil salinization, which is caused in part by the way people manage the land, is an essential challenge to agricultural sustainability. The objective of our study was to develop a soil salinity risk assessment methodology by selecting a consistent set of risk factors based on the conceptual Pressure-State-Response (PSR) sustainability framework and incorporating the grey relational analysis and the Analytic Hierarchy Process methods. The proposed salinity risk assessment methodology was demonstrated through a case study of developing composite risk index maps for the Yinchuan Plain, a major irrigation agriculture district in northwest China. Fourteen risk factors were selected in terms of the three PSR criteria: pressure, state, and response. The results showed that the salinity risk in the Yinchuan Plain was strongly influenced by the subsoil and groundwater salinity, land use, distance to irrigation canals, and depth to groundwater. To maintain agricultural sustainability in the Yinchuan Plain, a suite of remedial and preventative actions were proposed to manage soil salinity risk in the regions that are affected by salinity at different levels and by different salinization processes. The weight sensitivity analysis results also showed that the overall salinity risk of the Yinchuan Plain would increase or decrease as the weights for pressure or response risk factors increased, signifying the importance of human activities on secondary soil salinization. Ideally, the proposed methodology will help us develop more consistent management tools for risk assessment and management and for control of secondary soil salinization. Copyright © 2013 Elsevier Ltd. All rights reserved.
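The Analytic Hierarchy Process step turns expert pairwise comparisons of risk factors into weights. The row-geometric-mean approximation below is a common stand-in for the exact principal-eigenvector solution; the 3-criterion matrix is illustrative, not the study's 14-factor hierarchy:

```python
import math

def ahp_weights(pairwise):
    """Priority weights from a reciprocal pairwise-comparison matrix
    (pairwise[i][j] = importance of criterion i over j), via the row
    geometric mean, normalized to sum to 1."""
    gm = [math.prod(row) ** (1.0 / len(row)) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]

# pressure vs state vs response, Saaty's 1-9 judgment scale
matrix = [[1.0, 3.0, 5.0],
          [1/3, 1.0, 3.0],
          [1/5, 1/3, 1.0]]
weights = ahp_weights(matrix)
print(weights)  # sums to 1; 'pressure' ranks highest in this example
```

The weight sensitivity analysis mentioned in the abstract amounts to perturbing these weights and re-ranking the composite risk index.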
Decision-theoretic methodology for reliability and risk allocation in nuclear power plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cho, N.Z.; Papazoglou, I.A.; Bari, R.A.
1985-01-01
This paper describes a methodology for allocating reliability and risk to various reactor systems, subsystems, components, operations, and structures in a consistent manner, based on a set of global safety criteria which are not rigid. The problem is formulated as a multiattribute decision analysis paradigm; the multiobjective optimization, which is performed on a PRA model and reliability cost functions, serves as the guiding principle for reliability and risk allocation. The concept of noninferiority is used in the multiobjective optimization problem. Finding the noninferior solution set is the main theme of the current approach. The assessment of the decision maker's preferences could then be performed more easily on the noninferior solution set. Some results of the methodology applications to a nontrivial risk model are provided, and several outstanding issues such as generic allocation and preference assessment are discussed.
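The noninferior (Pareto) set in a two-objective risk/cost allocation can be found with a direct dominance check; a small sketch with invented (risk, cost) alternatives, both objectives minimized:

```python
def noninferior(solutions):
    """Keep (risk, cost) points not dominated by any other: a point is
    dominated when some other point is no worse in both objectives and
    strictly better in at least one."""
    keep = []
    for s in solutions:
        dominated = any(
            o is not s and o[0] <= s[0] and o[1] <= s[1]
            and (o[0] < s[0] or o[1] < s[1])
            for o in solutions)
        if not dominated:
            keep.append(s)
    return keep

alternatives = [(1, 5), (2, 3), (3, 4), (4, 2), (5, 6)]
print(noninferior(alternatives))  # → [(1, 5), (2, 3), (4, 2)]
```

Presenting only this reduced set is what makes the subsequent preference assessment by the decision maker tractable, as the abstract notes.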
Chung, Ka-Fai; Chan, Man-Sum; Lam, Ying-Yin; Lai, Cindy Sin-Yee; Yeung, Wing-Fai
2017-06-01
Insufficient sleep among students is a major school health problem. School-based sleep education programs tailored to reach large numbers of students may be one of the solutions. A systematic review and meta-analysis was conducted to summarize the programs' effectiveness and current status. Electronic databases were searched up until May 2015. Randomized controlled trials of school-based sleep intervention among 10- to 19-year-old students with outcome on total sleep duration were included. Methodological quality of the studies was assessed using the Cochrane risk-of-bias assessment. Seven studies were included, involving 1876 students receiving sleep education programs and 2483 attending classes-as-usual. Four weekly 50-minute sleep education classes were most commonly provided. Methodological quality was only moderate, with a high or an uncertain risk of bias in several domains. Compared to classes-as-usual, sleep education programs produced significantly longer weekday and weekend total sleep time and better mood among students at immediate post-treatment, but the improvements were not maintained at follow-up. Limited by the small number of studies and methodological limitations, the preliminary data showed that school-based sleep education programs produced short-term benefits. Future studies should explore integrating sleep education with delayed school start time or other more effective approaches. © 2017, American School Health Association.
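The pooled effect on total sleep time in a meta-analysis like this is typically an inverse-variance weighted mean difference; a fixed-effect sketch with invented study values (real syntheses would also examine heterogeneity and possibly use a random-effects model):

```python
def pooled_mean_difference(effects, variances):
    """Fixed-effect inverse-variance pooling: weight each study's mean
    difference by 1/variance; returns the pooled effect and its SE."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = (1.0 / sum(weights)) ** 0.5
    return pooled, se

# Invented weekday total-sleep-time differences (program vs classes-as-usual)
effects = [12.0, 8.0, 15.0]    # mean differences in minutes
variances = [9.0, 16.0, 25.0]  # squared standard errors
pooled, se = pooled_mean_difference(effects, variances)
print(round(pooled, 2), round(se, 2))
```

More precise studies (smaller variances) pull the pooled estimate toward their own effects.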
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blaylock, B.P.; Legg, J.; Travis, C.C.
1995-06-01
This document describes a worker health risk evaluation methodology for assessing risks associated with Environmental Restoration (ER) and Waste Management (WM). The methodology is appropriate for estimating worker risks across the Department of Energy (DOE) Complex at both programmatic and site-specific levels. This document supports the worker health risk methodology used to perform the human health risk assessment portion of the DOE Programmatic Environmental Impact Statement (PEIS) although it has applications beyond the PEIS, such as installation-wide worker risk assessments, screening-level assessments, and site-specific assessments.
Burkle, Frederick M
2018-02-01
Triage management remains a major challenge, especially in resource-poor settings such as war, complex humanitarian emergencies, and public health emergencies in developing countries. In triage it is often the disruption of physiology, not anatomy, that is critical, supporting triage methodology based on clinician-assessed physiological parameters as well as anatomy and mechanism of injury. In recent times, too many clinicians from developed countries have deployed to humanitarian emergencies without the physical exam skills needed to assess patients without the benefit of remotely fed electronic monitoring, laboratory, and imaging studies. In triage, inclusion of the once-widely accepted and collectively taught "art of decoding vital signs" with attention to their character and meaning may provide clues to a patient's physiological state, improving triage sensitivity. Attention to decoding vital signs is not a triage methodology of its own or a scoring system, but rather a skill set that supports existing triage methodologies. With unique triage management challenges being raised by an ever-changing variety of humanitarian crises, these once useful skill sets need to be revisited, understood, taught, and utilized by triage planners, triage officers, and teams as a necessary adjunct to physiologically based triage decision-making. (Disaster Med Public Health Preparedness. 2018;12:76-85).
Rezapour, Aziz; Jafari, Abdosaleh; Mirmasoudi, Kosha; Talebianpour, Hamid
2017-09-01
Health economic evaluation research plays an important role in selecting cost-effective interventions. The purpose of this study was to assess the quality of published articles in Iranian journals related to economic evaluation in health care programs based on Drummond's checklist in terms of numbers, features, and quality. In the present review study, published articles (Persian and English) in Iranian journals related to economic evaluation in health care programs were searched using electronic databases. In addition, the methodological quality of articles' structure was analyzed by Drummond's standard checklist. Based on the inclusion criteria, the search of databases resulted in 27 articles that fully covered economic evaluation in health care programs. A review of articles in accordance with Drummond's criteria showed that the majority of studies had flaws. The most common methodological weakness in the articles was in terms of cost calculation and valuation. Considering such methodological faults in these studies, it is anticipated that these studies would not provide an appropriate feedback to policy makers to allocate health care resources correctly and select suitable cost-effective interventions. Therefore, researchers are required to comply with the standard guidelines in order to better execute and report on economic evaluation studies.
Methodological standards in single-case experimental design: Raising the bar.
Ganz, Jennifer B; Ayres, Kevin M
2018-04-12
Single-case experimental designs (SCEDs), or small-n experimental research, are frequently implemented to assess approaches to improving outcomes for people with disabilities, particularly those with low-incidence disabilities, such as some developmental disabilities. SCED has become increasingly accepted as a research design. As this literature base is needed to determine which interventions are evidence-based practices, the acceptance of SCED has resulted in increased critiques with regard to methodological quality. Recent trends include recommendations from a number of expert scholars and institutions. The purpose of this article is to summarize the recent history of methodological quality considerations, synthesize the recommendations found in the SCED literature, and provide recommendations to researchers designing SCEDs with regard to essential and aspirational standards for methodological quality. Conclusions include imploring SCED researchers to increase the quality of their experiments, with particular consideration of the applied nature of SCED research published in Research in Developmental Disabilities and beyond. Copyright © 2018 Elsevier Ltd. All rights reserved.
Horizon Mission Methodology - A tool for the study of technology innovation and new paradigms
NASA Technical Reports Server (NTRS)
Anderson, John L.
1993-01-01
The Horizon Mission (HM) methodology was developed to provide a means of identifying and evaluating highly innovative, breakthrough technology concepts (BTCs) and for assessing their potential impact on advanced space missions. The methodology is based on identifying new capabilities needed by hypothetical 'horizon' space missions having performance requirements that cannot be met even by extrapolating known space technologies. Normal human evaluation of new ideas such as BTCs appears to be governed (and limited) by 'inner models of reality' defined as paradigms. Thus, new ideas are evaluated by old models. This paper describes the use of the HM Methodology to define possible future paradigms that would provide alternatives to evaluation by current paradigms. The approach is to represent a future paradigm by a set of new BTC-based capabilities - called a paradigm abstract. The paper describes methods of constructing and using the abstracts for evaluating BTCs for space applications and for exploring the concept of paradigms and paradigm shifts as a representation of technology innovation.
Häuser, Winfried; Dobos, Gustav; Langhorst, Jost
2015-01-01
Objectives. This systematic overview of reviews aimed to summarize evidence and methodological quality from systematic reviews of complementary and alternative medicine (CAM) for the fibromyalgia syndrome (FMS). Methods. The PubMed/MEDLINE, Cochrane Library, and Scopus databases were screened from their inception to Sept 2013 to identify systematic reviews and meta-analyses of CAM interventions for FMS. Methodological quality of reviews was rated using the AMSTAR instrument. Results. Altogether 25 systematic reviews were found; they investigated the evidence of CAM in general, exercised-based CAM therapies, manipulative therapies, Mind/Body therapies, acupuncture, hydrotherapy, phytotherapy, and homeopathy. Methodological quality of reviews ranged from lowest to highest possible quality. Consistently positive results were found for tai chi, yoga, meditation and mindfulness-based interventions, hypnosis or guided imagery, electromyogram (EMG) biofeedback, and balneotherapy/hydrotherapy. Inconsistent results concerned qigong, acupuncture, chiropractic interventions, electroencephalogram (EEG) biofeedback, and nutritional supplements. Inconclusive results were found for homeopathy and phytotherapy. Major methodological flaws included missing details on data extraction process, included or excluded studies, study details, and adaption of conclusions based on quality assessment. Conclusions. Despite a growing body of scientific evidence of CAM therapies for the management of FMS systematic reviews still show methodological flaws limiting definite conclusions about their efficacy and safety. PMID:26246841
Environmental probabilistic quantitative assessment methodologies
Crovelli, R.A.
1995-01-01
In this paper, four petroleum resource assessment methodologies are presented as possible pollution assessment methodologies, even though petroleum as a resource is desirable, whereas pollution is undesirable. A methodology is defined in this paper to consist of a probability model and a probabilistic method, where the method is used to solve the model. The following four basic types of probability models are considered: 1) direct assessment, 2) accumulation size, 3) volumetric yield, and 4) reservoir engineering. Three of the four petroleum resource assessment methodologies were written as microcomputer systems, viz. TRIAGG for direct assessment, APRAS for accumulation size, and FASPU for reservoir engineering. A fourth microcomputer system termed PROBDIST supports the three assessment systems. The three assessment systems have different probability models but the same type of probabilistic method. The advantages of the analytic method are computational speed and flexibility, making it ideal for a microcomputer. -from Author
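The abstract does not specify TRIAGG's internals, but a common analytic building block for direct assessment (assumed here purely for illustration) is to elicit a (low, most-likely, high) triangular distribution per quantity and aggregate means and, under independence, variances in closed form; such closed-form steps are what make analytic methods fast on a microcomputer:

```python
def triangular_moments(low, mode, high):
    """Mean and variance of a triangular(low, mode, high) distribution."""
    mean = (low + mode + high) / 3.0
    var = (low**2 + mode**2 + high**2
           - low*mode - low*high - mode*high) / 18.0
    return mean, var

def aggregate_independent(assessments):
    """Closed-form aggregate of independently assessed quantities:
    sum of the means and, under independence, sum of the variances."""
    moments = [triangular_moments(*t) for t in assessments]
    return sum(m for m, _ in moments), sum(v for _, v in moments)

# Three independently assessed quantities, each (low, most-likely, high)
print(aggregate_independent([(0, 1, 2), (1, 3, 8), (2, 2, 5)]))
```

A Monte Carlo method would reach the same aggregate moments only approximately and far more slowly, which is the trade-off the abstract alludes to.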
Are validated outcome measures used in distal radial fractures truly valid?
Nienhuis, R. W.; Bhandari, M.; Goslings, J. C.; Poolman, R. W.; Scholtes, V. A. B.
2016-01-01
Objectives Patient-reported outcome measures (PROMs) are often used to evaluate the outcome of treatment in patients with distal radial fractures. Which PROM to select is often based on assessment of measurement properties, such as validity and reliability. Measurement properties are assessed in clinimetric studies, and results are often reviewed without considering the methodological quality of these studies. Our aim was to systematically review the methodological quality of clinimetric studies that evaluated measurement properties of PROMs used in patients with distal radial fractures, and to make recommendations for the selection of PROMs based on the level of evidence of each individual measurement property. Methods A systematic literature search was performed in PubMed, EMbase, CINAHL and PsycINFO databases to identify relevant clinimetric studies. Two reviewers independently assessed the methodological quality of the studies on measurement properties, using the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) checklist. Level of evidence (strong / moderate / limited / lacking) for each measurement property per PROM was determined by combining the methodological quality and the results of the different clinimetric studies. Results In all, 19 out of 1508 identified unique studies were included, in which 12 PROMs were rated. The Patient-Rated Wrist Evaluation (PRWE) and the Disabilities of Arm, Shoulder and Hand questionnaire (DASH) were evaluated on most measurement properties. There is moderate evidence that the reliability, validity (content and hypothesis testing), and responsiveness of the PRWE are good, and limited evidence that its internal consistency and cross-cultural validity are good and its measurement error acceptable. There is no evidence for its structural and criterion validity.
For the DASH, there is moderate evidence that its responsiveness is good, and limited evidence that its reliability and validity on hypothesis testing are good. There is no evidence for the other measurement properties. Conclusion According to this systematic review, there is, at best, moderate evidence that the responsiveness of the PRWE and DASH are good, as are the reliability and validity of the PRWE. We recommend these PROMs in clinical studies in patients with distal radial fractures; however, more clinimetric studies of higher methodological quality are needed to adequately determine the other measurement properties. Cite this article: Dr Y. V. Kleinlugtenbelt. Are validated outcome measures used in distal radial fractures truly valid?: A critical assessment using the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) checklist. Bone Joint Res 2016;5:153–161. DOI: 10.1302/2046-3758.54.2000462. PMID:27132246
The Animism Controversy Revisited: A Probability Analysis
ERIC Educational Resources Information Center
Smeets, Paul M.
1973-01-01
Considers methodological issues surrounding the Piaget-Huang controversy. A probability model, based on the difference between the expected and observed animistic and deanimistic responses is applied as an improved technique for the assessment of animism. (DP)
Measuring ITS deployment and integration
DOT National Transportation Integrated Search
1999-01-01
A consistent and simple methodology was developed to assess both the level of deployment of individual ITS elements and the level of integration between these elements. This method is based on the metropolitan ITS infrastructure, a blueprint defined ...
The quality of instruments to assess the process of shared decision making: A systematic review.
Gärtner, Fania R; Bomhof-Roordink, Hanna; Smith, Ian P; Scholl, Isabelle; Stiggelbout, Anne M; Pieterse, Arwen H
2018-01-01
To inventory instruments assessing the process of shared decision making and appraise their measurement quality, taking into account the methodological quality of their validation studies. In a systematic review we searched seven databases (PubMed, Embase, Emcare, Cochrane, PsycINFO, Web of Science, Academic Search Premier) for studies investigating instruments measuring the process of shared decision making. Per identified instrument, we assessed the level of evidence separately for 10 measurement properties following a three-step procedure: 1) appraisal of the methodological quality using the COnsensus-based Standards for the selection of health status Measurement INstruments (COSMIN) checklist, 2) appraisal of the psychometric quality of the measurement property using three possible quality scores, 3) best-evidence synthesis based on the number of studies, their methodological and psychometric quality, and the direction and consistency of the results. The study protocol was registered at PROSPERO: CRD42015023397. We included 51 articles describing the development and/or evaluation of 40 shared decision-making process instruments: 16 patient questionnaires, 4 provider questionnaires, 18 coding schemes and 2 instruments measuring multiple perspectives. There is an overall lack of evidence for their measurement quality, either because validation is missing or because methods are poor. The best-evidence synthesis indicated positive results for the majority of instruments for content validity (50%) and structural validity (53%) where these were evaluated, but negative results for the majority of instruments where inter-rater reliability (47%) and hypotheses testing (59%) were evaluated. Due to the lack of evidence on measurement quality, the choice of the most appropriate instrument can best be based on the instrument's content and characteristics, such as the perspective that it assesses. 
We recommend refinement and validation of existing instruments, and the use of COSMIN-guidelines to help guarantee high-quality evaluations.
A new approach for the assessment of temporal clustering of extratropical wind storms
NASA Astrophysics Data System (ADS)
Schuster, Mareike; Eddounia, Fadoua; Kuhnel, Ivan; Ulbrich, Uwe
2017-04-01
A widely used methodology to assess the clustering of storms in a region is based on the dispersion statistics of a simple homogeneous Poisson process. This clustering measure is the ratio of the variance to the mean of the local storm counts per grid point. Values larger than 1, i.e. when the variance exceeds the mean, indicate clustering, while values lower than 1 indicate a sequencing of storms that is more regular than a random process. However, a disadvantage of this methodology is that the characteristics are valid only for a pre-defined climatological time period, so temporal variability of clustering cannot be identified. Also, the absolute value of the dispersion statistic is not particularly intuitive. We have developed an approach to describe the temporal clustering of storms which offers a more intuitive interpretation and at the same time allows temporal variations to be assessed. The approach is based on the local distribution of waiting times between two consecutive storm events; the events are identified by post-processing individual windstorm tracks, which in turn are obtained from an objective tracking algorithm. Based on this distribution, a threshold can be set, either at the waiting time expected from a random process or at a quantile of the observed distribution, to determine whether two consecutive windstorm events count as part of a (temporal) cluster. We analyze extratropical wind storms in a reanalysis dataset and compare the results of the traditional clustering measure with our new methodology. We assess what range of clustering events (in terms of duration and frequency) is covered and identify whether the historically known clustered seasons are detectable by the new clustering measure in the reanalysis.
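The traditional clustering measure described above reduces to a variance-to-mean ratio of per-period storm counts. A minimal sketch in Python; the storm counts below are invented for illustration, not reanalysis data:

```python
from statistics import mean, pvariance

def dispersion_index(counts):
    """Ratio of variance to mean of per-period storm counts.

    Values > 1 suggest temporal clustering (over-dispersion relative to a
    homogeneous Poisson process); values < 1 suggest a sequence more
    regular than random.
    """
    m = mean(counts)
    return pvariance(counts, mu=m) / m

# Illustrative winter storm counts for one grid point over ten seasons
clustered = [0, 0, 5, 1, 0, 6, 0, 0, 4, 0]   # bursty seasons
regular   = [1, 2, 1, 2, 1, 2, 1, 2, 1, 2]   # near-uniform occurrence

print(dispersion_index(clustered) > 1)  # over-dispersed: clustering
print(dispersion_index(regular) < 1)    # under-dispersed: regular
```

Under a homogeneous Poisson process the index is 1 in expectation. The waiting-time approach proposed in the abstract instead thresholds the gaps between successive events, which preserves the timing information this single ratio discards.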
Amaratunga, Thelina; Dobranowski, Julian
2016-09-01
Preventable yet clinically significant rates of medical error remain systemic, while health care spending is at a historic high. Industry-based quality improvement (QI) methodologies show potential for utility in health care and radiology because they use an empirical approach to reduce variability and improve workflow. The aim of this review was to systematically assess the literature with regard to the use and efficacy of Lean and Six Sigma (the most popular of the industrial QI methodologies) within radiology. MEDLINE, the Allied & Complementary Medicine Database, Embase Classic + Embase, Health and Psychosocial Instruments, and the Ovid HealthStar database, alongside the Cochrane Library databases, were searched in June 2015. Empirical studies in peer-reviewed journals were included if they assessed the use of Lean, Six Sigma, or Lean Six Sigma with regard to their ability to improve a variety of quality metrics in a radiology-centered clinical setting. Of the 278 articles returned, 23 studies were suitable for inclusion. Of these, 10 assessed Six Sigma, 7 assessed Lean, and 6 assessed Lean Six Sigma. The diverse range of measured outcomes can be organized into 7 common aims: cost savings, reducing appointment wait time, reducing in-department wait time, increasing patient volume, reducing cycle time, reducing defects, and increasing staff and patient safety and satisfaction. All of the included studies demonstrated improvements across a variety of outcomes. However, there were high rates of systematic bias and imprecision as per the Grading of Recommendations Assessment, Development and Evaluation guidelines. Lean and Six Sigma QI methodologies have the potential to reduce error and costs and improve quality within radiology. However, there is a pressing need to conduct high-quality studies in order to realize the true potential of these QI methodologies in health care and radiology. Recommendations on how to improve the quality of the literature are proposed. 
Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.
Assessing human rights impacts in corporate development projects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salcito, Kendyl, E-mail: kendyl.salcito@unibas.ch; University of Basel, P.O. Box, CH-4003 Basel; NomoGaia, 1900 Wazee Street, Suite 303, Denver, CO 80202
Human rights impact assessment (HRIA) is a process for systematically identifying, predicting and responding to the potential impact on human rights of a business operation, capital project, government policy or trade agreement. Traditionally, it has been conducted as a desktop exercise to predict the effects of trade agreements and government policies on individuals and communities. In line with a growing call for multinational corporations to ensure they do not violate human rights in their activities, HRIA is increasingly incorporated into the standard suite of corporate development project impact assessments. In this context, the policy world's non-structured, desk-based approaches to HRIA are insufficient. Although a number of corporations have commissioned and conducted HRIA, no broadly accepted and validated assessment tool is currently available. The lack of standardisation has complicated efforts to evaluate the effectiveness of HRIA as a risk mitigation tool, and has caused confusion in the corporate world regarding company duties. Hence, clarification is needed. The objectives of this paper are (i) to describe an HRIA methodology, (ii) to provide a rationale for its components and design, and (iii) to illustrate implementation of HRIA using the methodology in two selected corporate development projects—a uranium mine in Malawi and a tree farm in Tanzania. We found that as a prognostic tool, HRIA could examine potential positive and negative human rights impacts and provide effective recommendations for mitigation. However, longer-term monitoring revealed that recommendations were unevenly implemented, dependent on market conditions and personnel movements. This instability in the approach to human rights suggests a need for on-going monitoring and surveillance. -- Highlights: • We developed a novel methodology for corporate human rights impact assessment. • We piloted the methodology on two corporate projects—a mine and a plantation. 
• Human rights impact assessment exposed impacts not foreseen in ESIA. • Corporations adopted the majority of findings, but not necessarily immediately. • Methodological advancements are expected for monitoring processes.
Risk-based Methodology for Validation of Pharmaceutical Batch Processes.
Wiles, Frederick
2013-01-01
In January 2011, the U.S. Food and Drug Administration published new process validation guidance for pharmaceutical processes. The new guidance debunks the long-held industry notion that three consecutive validation batches or runs are all that are required to demonstrate that a process is operating in a validated state. Instead, the new guidance now emphasizes that the level of monitoring and testing performed during process performance qualification (PPQ) studies must be sufficient to demonstrate statistical confidence both within and between batches. In some cases, three qualification runs may not be enough. Nearly two years after the guidance was first published, little has been written defining a statistical methodology for determining the number of samples and qualification runs required to satisfy Stage 2 requirements of the new guidance. This article proposes using a combination of risk assessment, control charting, and capability statistics to define the monitoring and testing scheme required to show that a pharmaceutical batch process is operating in a validated state. In this methodology, an assessment of process risk is performed through application of a process failure mode, effects, and criticality analysis (PFMECA). The output of PFMECA is used to select appropriate levels of statistical confidence and coverage which, in turn, are used in capability calculations to determine when significant Stage 2 (PPQ) milestones have been met. The achievement of Stage 2 milestones signals the release of batches for commercial distribution and the reduction of monitoring and testing to commercial production levels. Individuals, moving range, and range/sigma charts are used in conjunction with capability statistics to demonstrate that the commercial process is operating in a state of statistical control. The new process validation guidance published by the U.S. 
Food and Drug Administration in January of 2011 indicates that the number of process validation batches or runs required to demonstrate that a pharmaceutical process is operating in a validated state should be based on sound statistical principles. The old rule of "three consecutive batches and you're done" is no longer sufficient. The guidance, however, does not provide any specific methodology for determining the number of runs required, and little has been published to address this shortcoming. The paper titled "Risk-based Methodology for Validation of Pharmaceutical Batch Processes" describes a statistically sound methodology for determining when a statistically valid number of validation runs has been acquired, based on risk assessment and calculation of process capability.
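As a rough sketch of the capability statistics this methodology leans on, a process performance index (Ppk) compares the distance from the process mean to the nearest specification limit against the process spread. The specification limits and assay values below are invented for illustration and are not from the article:

```python
from statistics import mean, stdev

def ppk(data, lsl, usl):
    """Process performance index: distance from the mean to the nearest
    specification limit, in units of three standard deviations."""
    mu, sigma = mean(data), stdev(data)
    return min(usl - mu, mu - lsl) / (3 * sigma)

# Invented assay results (% of label claim) pooled from PPQ batches
assay = [99.1, 100.2, 99.8, 100.5, 99.6, 100.1, 99.9, 100.3]

# Hypothetical specification of 95-105% label claim
print(round(ppk(assay, lsl=95.0, usl=105.0), 2))
```

A commonly used acceptance threshold is Ppk ≥ 1.33 (roughly four standard deviations to the nearest limit); the article's point is that the statistical confidence and coverage behind such calculations should be chosen via risk assessment (PFMECA), not fixed by a three-batch convention.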
Human Reliability Assessments: Using the Past (Shuttle) to Predict the Future (ORION)
NASA Technical Reports Server (NTRS)
Mott, Diana L.; Bigler, Mark A.
2017-01-01
NASA uses two HRA methodologies. The first is a simplified method based on how much time is available to complete the action, with consideration of environmental and personal factors that could influence the human's reliability. This method is expected to provide a conservative value or placeholder as a preliminary estimate, which is then used to determine which placeholders need a more detailed assessment. The second methodology is used to develop a more detailed human reliability assessment of the performance of critical human actions. This assessment needs to consider more than the time available; factors include the importance of the action, the context, environmental factors, potential human stresses, previous experience, training, physical design interfaces, available procedures/checklists and internal human stresses. The more detailed assessment is expected to be more realistic than one based primarily on the time available. When performing an HRA on a system or process that has an operational history, we have information specific to the task based on this history and experience. In the case of a PRA model that is based on a new design and has no operational history, providing a "reasonable" assessment of potential crew actions becomes more problematic. To determine what is expected of future operational parameters, input from individuals with relevant experience who were familiar with the systems and processes previously implemented by NASA was used to provide the "best" available data. Personnel from Flight Operations, Flight Directors, Launch Test Directors, Control Room Console Operators and Astronauts were all interviewed to provide a comprehensive picture of previous NASA operations. 
Verification of the assumptions and expectations expressed in the assessments will be needed when the procedures, flight rules and operational requirements are developed and then finalized.
2014-01-01
Background Pediatric antiretroviral therapy (ART) has been shown to substantially reduce morbidity and mortality in HIV-infected infants and children. To accurately project program costs, analysts need accurate estimations of antiretroviral drug (ARV) costs for children. However, the costing of pediatric antiretroviral therapy is complicated by weight-based dosing recommendations which change as children grow. Methods We developed a step-by-step methodology for estimating the cost of pediatric ARV regimens for children ages 0–13 years old. The costing approach incorporates weight-based dosing recommendations to provide estimated ARV doses throughout childhood development. Published unit drug costs are then used to calculate average monthly drug costs. We compared our derived monthly ARV costs to published estimates to assess the accuracy of our methodology. Results The estimates of monthly ARV costs are provided for six commonly used first-line pediatric ARV regimens, considering three possible care scenarios. The costs derived in our analysis for children were fairly comparable to or slightly higher than available published ARV drug or regimen estimates. Conclusions The methodology described here can be used to provide an accurate estimation of pediatric ARV regimen costs for cost-effectiveness analysts to project the optimum packages of care for HIV-infected children, as well as for program administrators and budget analysts who wish to assess the feasibility of increasing pediatric ART availability in constrained budget environments. PMID:24885453
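The weight-band costing step described above can be sketched as follows. The weight bands, daily doses, and unit price are purely illustrative placeholders, not the study's actual dosing recommendations or published drug costs:

```python
# Hypothetical weight bands (kg) mapped to a daily dose (mg) for one ARV,
# in the weight-band dosing style the methodology describes.
WEIGHT_BANDS = [
    (3.0, 5.9, 100),    # (min_kg, max_kg, daily_dose_mg)
    (6.0, 9.9, 150),
    (10.0, 13.9, 200),
    (14.0, 19.9, 250),
]
PRICE_PER_MG = 0.002    # invented unit cost in USD per mg

def monthly_cost(weight_kg, days=30):
    """Monthly drug cost for a child of the given weight, found by
    looking up the weight band and multiplying dose by days and price."""
    for lo, hi, dose in WEIGHT_BANDS:
        if lo <= weight_kg <= hi:
            return dose * days * PRICE_PER_MG
    raise ValueError("weight outside dosing table")

print(monthly_cost(8.5))   # 150 mg/day * 30 days * 0.002 USD/mg -> 9.0
```

As a child grows across bands, the same lookup yields the rising monthly cost profile that the step-by-step methodology aggregates into regimen cost estimates.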
Tennyson, Marilyn E.; Charpentier, Ronald R.; Klett, Timothy R.; Brownfield, Michael E.; Pitman, Janet K.; Gaswirth, Stephanie B.; Hawkins, Sarah J.; Lillis, Paul G.; Marra, Kristen R.; Mercier, Tracey J.; Leathers, Heidi M.; Schenk, Christopher J.; Whidden, Katherine J.
2015-10-06
Using a geology-based assessment methodology, the U.S. Geological Survey assessed mean volumes of 21 million barrels of oil (MMBO), 27 billion cubic feet of gas, and 1 million barrels of natural gas liquids in two assessment units (AUs) that may contain continuous oil resources. Mean volumes of oil for the individual assessment units are 14 MMBO in the Monterey Buttonwillow AU and 7 MMBO in the Monterey Maricopa AU.
Review of Natural Phenomena Hazard (NPH) Assessments for the DOE Hanford Site
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snow, Robert L.; Ross, Steven B.
2011-09-15
The purpose of this review is to assess the need for updating Natural Phenomena Hazard (NPH) assessments for the DOE's Hanford Site, as required by DOE Order 420.1B Chapter IV, Natural Phenomena Hazards Mitigation, based on significant changes in state-of-the-art NPH assessment methodology or site-specific information. This review is an update and expansion to the September 2010 review of PNNL-19751, Review of Natural Phenomena Hazard (NPH) Assessments for the Hanford 200 Areas (Non-Seismic).
Edwards, Katherine; Jones, Natasha; Newton, Julia; Foster, Charlie; Judge, Andrew; Jackson, Kate; Arden, Nigel K; Pinedo-Villanueva, Rafael
2017-10-19
This descriptive review aimed to assess the characteristics and methodological quality of economic evaluations of cardiac rehabilitation (CR) programs according to updated economic guidelines for healthcare interventions. Recommendations will be made to inform future research addressing the impact of a physical exercise component on cost-effectiveness. Electronic databases were searched for economic evaluations of exercise-based CR programs published in English between 2000 and 2014. The Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement was used to review the methodological quality of included economic evaluations. Fifteen economic evaluations met the review inclusion criteria. Assessed study characteristics exhibited wide variability, particularly in their economic perspective, time horizon, setting, comparators and included costs, with significant heterogeneity in exercise dose across interventions. Ten evaluations were based on randomised controlled trials (RCTs) spanning 6-24 months but often with weak or inconclusive results; two were modelling studies; and the final three utilised longer time horizons of 3.5-5 years from which findings suggest that long-term exercise-based CR results in lower costs, reduced hospitalisations and a longer cumulative patient lifetime. None of the 15 articles met all the CHEERS quality criteria, with the majority either fully or partially meeting a selection of the assessed variables. Evidence exists supporting the cost-effectiveness of exercise-based CR for cardiovascular disease patients. However, variability in CR program delivery and weak consistency between study perspective and design limits study comparability and therefore the accumulation of evidence in support of a particular exercise regime. The generalisability of study findings was limited due to the exclusion of patients with comorbidities as would typically be found in a real-world setting. 
The use of longer time-horizons would be more comparable with a chronic condition and enable economic assessments of the long-term effects of CR. As none of the articles met recent reporting standards for the economic assessment of healthcare interventions, it is recommended that future studies adhere to such guidelines.
Methodology for the Assessment of the Ecotoxicological Potential of Construction Materials
Rodrigues, Patrícia; Silvestre, José D.; Flores-Colen, Inês; Viegas, Cristina A.; de Brito, Jorge; Kurad, Rawaz; Demertzi, Martha
2017-01-01
Innovation in construction materials (CM) implies changing their composition by incorporating raw materials, usually non-traditional ones, which confer the desired characteristics. However, this practice may have unknown risks. This paper discusses the ecotoxicological potential associated with raw and construction materials, and proposes and applies a methodology for the assessment of their ecotoxicological potential. This methodology is based on existing laws, such as Regulation (European Commission) No. 1907/2006 (REACH—Registration, Evaluation, Authorization and Restriction of Chemicals) and Regulation (European Commission) No. 1272/2008 (CLP—Classification, Labelling and Packaging). Its application and validation showed that a raw material without clear evidence of ecotoxicological potential, but with some ability to release chemicals, can lead to the formulation of a CM with a slightly lower hazardousness in terms of chemical characterization despite a slightly higher ecotoxicological potential than the raw materials. The proposed methodology can be a useful tool for product development and manufacturing and for selecting the most appropriate CM, aiming at the reduction of their environmental impact and contributing to construction sustainability. PMID:28773011
Molinos-Senante, María; Hernández-Sancho, Francesc; Sala-Garrido, Ramón
2012-01-01
The concept of sustainability involves the integration of economic, environmental, and social aspects and this also applies in the field of wastewater treatment. Economic feasibility studies are a key tool for selecting the most appropriate option from a set of technological proposals. Moreover, these studies are needed to assess the viability of transferring new technologies from pilot-scale to full-scale. In traditional economic feasibility studies, the benefits that have no market price, such as environmental benefits, are not considered and are therefore underestimated. To overcome this limitation, we propose a new methodology to assess the economic viability of wastewater treatment technologies that considers internal and external impacts. The estimation of the costs is based on the use of cost functions. To quantify the environmental benefits from wastewater treatment, the distance function methodology is proposed to estimate the shadow price of each pollutant removed in the wastewater treatment. The application of this methodological approach by decision makers enables the calculation of the true costs and benefits associated with each alternative technology. The proposed methodology is presented as a useful tool to support decision making.
Security Events and Vulnerability Data for Cybersecurity Risk Estimation.
Allodi, Luca; Massacci, Fabio
2017-08-01
Current industry standards for estimating cybersecurity risk are based on qualitative risk matrices as opposed to quantitative risk estimates. In contrast, risk assessment in most other industry sectors aims at deriving quantitative risk estimations (e.g., Basel II in finance). This article presents a model and methodology to leverage the large amount of data available from the IT infrastructure of an organization's security operations center to quantitatively estimate the probability of attack. Our methodology specifically addresses untargeted attacks delivered by automatic tools that make up the vast majority of attacks in the wild against users and organizations. We consider two-stage attacks whereby the attacker first breaches an Internet-facing system, and then escalates the attack to internal systems by exploiting local vulnerabilities in the target. Our methodology factors in the power of the attacker as the number of "weaponized" vulnerabilities he/she can exploit, and can be adjusted to match the risk appetite of the organization. We illustrate our methodology using data from a large financial institution, and discuss the significant mismatch between traditional qualitative risk assessments and our quantitative approach. © 2017 Society for Risk Analysis.
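The two-stage attack structure can be illustrated with a toy calculation, assuming independence between the perimeter and escalation stages. The per-attempt breach probability, attempt count, and escalation probability below are invented for illustration and are not taken from the article's data or model:

```python
def p_at_least_one(p_single, attempts):
    """Probability of at least one success over independent,
    untargeted attack attempts (complement of all failing)."""
    return 1 - (1 - p_single) ** attempts

def p_two_stage(p_perimeter, p_escalation):
    """Probability that both the perimeter breach and the escalation
    to an internal system succeed, assuming independence."""
    return p_perimeter * p_escalation

# Invented figures: 1% per-attempt perimeter breach probability,
# 200 automated attempts per year, and a 30% chance that a breach
# can be escalated via a local vulnerability.
p_perimeter_year = p_at_least_one(0.01, 200)
print(round(p_two_stage(p_perimeter_year, 0.30), 3))
```

The article's actual methodology conditions the escalation stage on the attacker's stock of weaponized vulnerabilities; this sketch only shows why a quantitative composition of stage probabilities can diverge sharply from a qualitative risk-matrix rating.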
Risk assessment of groundwater level variability using variable Kriging methods
NASA Astrophysics Data System (ADS)
Spanoudaki, Katerina; Kampanis, Nikolaos A.
2015-04-01
Assessment of the spatial variability of the water table level in aquifers provides useful information for optimal groundwater management. This information becomes more important in basins where the water table level has fallen significantly. The spatial variability of the water table level in this work is estimated based on hydraulic head measured during the wet period of the hydrological year 2007-2008, in a sparsely monitored basin in Crete, Greece, which is of high socioeconomic and agricultural interest. Three Kriging-based methodologies are implemented in a Matlab environment to estimate the spatial variability of the water table level in the basin. The first methodology is based on the Ordinary Kriging approach, the second involves auxiliary information from a Digital Elevation Model in terms of Residual Kriging, and the third calculates, by means of Indicator Kriging, the probability that the groundwater level will fall below a predefined minimum value that could cause significant problems in groundwater resources availability. The Box-Cox methodology is applied to normalize both the data and the residuals for improved prediction results. In addition, various classical variogram models are applied to determine the spatial dependence of the measurements. The Matérn model proves to be optimal; in combination with the Kriging methodologies it provides the most accurate cross-validation estimates. Groundwater level and probability maps are constructed to examine the spatial variability of the groundwater level in the basin and the associated risk that certain locations exhibit with regard to a predefined minimum value set for the sustainability of the basin's groundwater resources. 
Acknowledgement The work presented in this paper has been funded by the Greek State Scholarships Foundation (IKY), Fellowships of Excellence for Postdoctoral Studies (Siemens Program), 'A simulation-optimization model for assessing the best practices for the protection of surface water and groundwater in the coastal zone', (2013 - 2015). Varouchakis, E. A. and D. T. Hristopulos (2013). "Improvement of groundwater level prediction in sparsely gauged basins using physical laws and local geographic features as auxiliary variables." Advances in Water Resources 52: 34-49. Kitanidis, P. K. (1997). Introduction to geostatistics, Cambridge: University Press.
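The Box-Cox step mentioned in the abstract transforms skewed positive data toward normality before the variogram is fitted. A minimal sketch, fixing λ = 0 (a log transform) rather than estimating λ by maximum likelihood as a real implementation would; the water-table depths are invented for illustration:

```python
import math

def boxcox(x, lam):
    """One-parameter Box-Cox transform of a positive value:
    log(x) at lam = 0, otherwise (x**lam - 1) / lam."""
    return math.log(x) if lam == 0 else (x ** lam - 1) / lam

def skewness(data):
    """Sample skewness (biased moment estimator), used here only to
    gauge how symmetric the data are."""
    n = len(data)
    m = sum(data) / n
    s2 = sum((v - m) ** 2 for v in data) / n
    s3 = sum((v - m) ** 3 for v in data) / n
    return s3 / s2 ** 1.5

# Right-skewed illustrative water-table depths (m)
depths = [1.2, 1.5, 1.8, 2.0, 2.4, 3.1, 4.0, 5.5, 8.0, 12.0]
transformed = [boxcox(d, lam=0.0) for d in depths]

# The transform should pull the long right tail in,
# reducing skewness toward zero.
print(skewness(depths) > skewness(transformed))
```

Kriging's optimality arguments rest on approximately Gaussian fields, which is why both the data and the Residual Kriging residuals are normalized this way before prediction.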
Demougeot-Renard, Helene; De Fouquet, Chantal
2004-10-01
Assessing the volume of soil requiring remediation and the accuracy of this assessment constitutes an essential step in polluted site management. If this remediation volume is not properly assessed, misclassification may lead both to environmental risks (polluted soils may not be remediated) and financial risks (unexpected discovery of polluted soils may generate additional remediation costs). To minimize such risks, this paper proposes a geostatistical methodology based on stochastic simulations that allows the remediation volume and the uncertainty to be assessed using investigation data. The methodology thoroughly reproduces the conditions in which the soils are classified and extracted at the remediation stage. The validity of the approach is tested by applying it on the data collected during the investigation phase of a former lead smelting works and by comparing the results with the volume that has actually been remediated. This real remediated volume was composed of all the remediation units that were classified as polluted after systematic sampling and analysis during clean-up stage. The volume estimated from the 75 samples collected during site investigation slightly overestimates (5.3% relative error) the remediated volume deduced from 212 remediation units. Furthermore, the real volume falls within the range of uncertainty predicted using the proposed methodology.
Radiation Assurance for the Space Environment
NASA Technical Reports Server (NTRS)
Barth, Janet L.; LaBel, Kenneth A.; Poivey, Christian
2004-01-01
The space radiation environment can lead to extremely harsh operating conditions for spacecraft electronic systems. A hardness assurance methodology must be followed to assure that the space radiation environment does not compromise the functionality and performance of space-based systems during the mission lifetime. The methodology includes a definition of the radiation environment, assessment of the radiation sensitivity of parts, worst-case analysis of the impact of radiation effects, and part acceptance decisions which are likely to include mitigation measures.
López-Campos, Jose Luis; Abad Arranz, María; Calero Acuña, Carmen; Romero Valero, Fernando; Ayerbe García, Ruth; Hidalgo Molina, Antonio; Aguilar Pérez-Grovas, Ricardo Ismael; García Gil, Francisco; Casas Maldonado, Francisco; Caballero Ballesteros, Laura; Sánchez Palop, María; Pérez-Tejero, Dolores; Segado, Alejandro; Calvo Bonachera, Jose; Hernández Sierra, Bárbara; Doménech, Adolfo; Arroyo Varela, Macarena; González Vargas, Francisco; Cruz Rueda, Juan Jose
2015-01-01
Previous clinical audits for chronic obstructive pulmonary disease (COPD) have provided valuable information on the clinical care delivered to patients admitted to medical wards because of COPD exacerbations. However, clinical audits of COPD in an outpatient setting are scarce and no methodological guidelines are currently available. Based on our previous experience, herein we describe a clinical audit for COPD patients in specialized outpatient clinics with the overall goal of establishing a potential methodological workflow. A pilot clinical audit of COPD patients referred to respiratory outpatient clinics in the region of Andalusia, Spain (over 8 million inhabitants), was performed. The audit took place between October 2013 and September 2014, and 10 centers (20% of all public hospitals) were invited to participate. Cases with an established diagnosis of COPD based on risk factors, clinical symptoms, and a post-bronchodilator FEV1/FVC ratio of less than 0.70 were deemed eligible. The usefulness of formally scheduled regular follow-up visits was assessed. Two different databases (a resources database and a clinical database) were constructed. Assessments were planned over a year divided into four three-month periods, with the goal of determining season-related changes. Exacerbations and survival served as the main endpoints. This paper describes a methodological framework for conducting a clinical audit of COPD patients in an outpatient setting. Results from such audits can guide health information systems development and implementation in real-world settings.
Roughness Based Crossflow Transition Control: A Computational Assessment
NASA Technical Reports Server (NTRS)
Li, Fei; Choudhari, Meelan M.; Chang, Chau-Lyan; Streett, Craig L.; Carpenter, Mark H.
2009-01-01
A combination of parabolized stability equations and secondary instability theory has been applied to a low-speed swept airfoil model with a chord Reynolds number of 7.15 million, with the goals of (i) evaluating this methodology in the context of transition prediction for a known configuration for which roughness-based crossflow transition control has been demonstrated under flight conditions and (ii) analyzing the mechanism of transition delay via the introduction of discrete roughness elements (DRE). Roughness-based transition control involves controlled seeding of suitable, subdominant crossflow modes so as to weaken the growth of naturally occurring, linearly more unstable crossflow modes. Therefore, a synthesis of receptivity, linear and nonlinear growth of stationary crossflow disturbances, and the ensuing development of high-frequency secondary instabilities is desirable to understand the experimentally observed transition behavior. With further validation, such higher-fidelity prediction methodology could be utilized to assess the potential for crossflow transition control at even higher Reynolds numbers, where experimental data are currently unavailable.
ERIC Educational Resources Information Center
Rickard, Andrew
2006-01-01
Event tourism is accompanied by social, economic and environmental benefits and costs. The assessment of this form of tourism has however largely focused on the social and economic perspectives, while environmental assessments have been bound to a destination-based approach. The application of the Ecological Footprint methodology allows for these…
Integrated national-scale assessment of wildfire risk to human and ecological values
Matthew P. Thompson; David E. Calkin; Mark A. Finney; Alan A. Ager; Julie W. Gilbertson-Day
2011-01-01
The spatial, temporal, and social dimensions of wildfire risk are challenging U.S. federal land management agencies to meet societal needs while maintaining the health of the lands they manage. In this paper we present a quantitative, geospatial wildfire risk assessment tool, developed in response to demands for improved risk-based decision frameworks. The methodology...
Schenk, Christopher J.; Charpentier, Ronald R.; Klett, Timothy R.; Gaswirth, Stephanie B.; Pitman, Janet K.; Brownfield, Michael E.; Mercier, Tracey J.; Wandrey, Craig J.; Weaver, Jean N.
2013-01-01
Using a geology-based assessment methodology, the U.S. Geological Survey estimated means of 19 million barrels of undiscovered, technically recoverable oil and 244 billion cubic feet of undiscovered natural gas in the Puerto Rico–U.S. Virgin Islands Exclusive Economic Zone.
Schenk, Christopher J.; Tennyson, Marilyn E.; Klett, Timothy R.; Finn, Thomas M.; Mercier, Tracey J.; Hawkins, Sarah J.; Gaswirth, Stephanie B.; Marra, Kristen R.; Brownfield, Michael E.; Le, Phuong A.; Leathers-Miller, Heidi M.
2017-03-27
Using a geology-based assessment methodology, the U.S. Geological Survey estimated undiscovered, technically recoverable mean continuous resources of 5 billion barrels of oil and 47 trillion cubic feet of gas in the Paleozoic Solimões, Amazonas, and Parnaíba Basin Provinces, Brazil.
The traditional methodology for health risk assessment used by the U. S. Environmental Protection Agency (EPA) is based on the use of exposure assumptions (e.g. exposure duration, food ingestion rate, body weight, etc.) that represent the entire American population, either as a c...
Design of a Competency-Based Assessment Model in the Field of Accounting
ERIC Educational Resources Information Center
Ciudad-Gómez, Adelaida; Valverde-Berrocoso, Jesús
2012-01-01
This paper presents the phases involved in the design of a methodology to contribute both to the acquisition of competencies and to their assessment in the field of Financial Accounting, within the European Higher Education Area (EHEA) framework, which we call MANagement of COMpetence in the areas of Accounting (MANCOMA). Having selected and…
Drake, Ronald M.; Schenk, Christopher J.; Klett, Timothy R.; Le, Phuong A.; Leathers, Heidi M.; Brownfield, Michael E.; Finn, Thomas M.; Gaswirth, Stephanie B.; Marra, Kristen R.; Tennyson, Marilyn E.
2017-06-07
Using a geology-based assessment methodology, the U.S. Geological Survey estimated undiscovered, technically recoverable mean resources of 884 million barrels of oil and 106 billion cubic feet of gas in the North-Central Montana and Williston Basin Provinces of central Montana and western North Dakota.
Klett, Timothy R.; Brownfield, Michael E.; Finn, Thomas M.; Gaswirth, Stephanie B.; Le, Phuong A.; Leathers-Miller, Heidi M.; Marra, Kristen R.; Mercier, Tracey J.; Pitman, Janet K.; Schenk, Christopher J.; Tennyson, Marilyn E.; Woodall, Cheryl A.
2018-02-27
Using a geology-based assessment methodology, the U.S. Geological Survey estimated mean undiscovered, technically recoverable continuous resources of 2.8 billion barrels of oil and 34 trillion cubic feet of gas in the Domanik-type formations of the Volga-Ural Region Province, Russia.
Rossi, A B; Leyden, J J; Pappert, A S; Ramaswamy, A; Nkengne, A; Ramaswamy, R; Nighland, M
2011-04-01
Post-inflammatory hyperpigmentation (PIH) is a common occurrence in patients with acne vulgaris, particularly in those with skin of colour. A previous study has demonstrated the benefit of tretinoin (retinoic acid) in the treatment of PIH; however, there is currently no standard protocol to evaluate change in PIH following treatment. Based on these findings, we performed a pilot, exploratory, blinded, intraindividual-controlled methodology study that consisted of a photographic assessment protocol with facial mapping. The study was based on a secondary analysis of a phase 4, community-based trial of 544 acne patients who were treated with tretinoin gel microsphere 0.04% or 0.1%. Only patients with Fitzpatrick types III-V (skin of colour) were included in the study; subjects with Fitzpatrick skin type VI were excluded because the photographic assessment did not allow for proper evaluation. Despite the small number of subjects evaluated (n=25), the results revealed consistent assessment of improvement in PIH between two independent graders (weighted κ=0.84). Further study with a larger population is recommended to validate the accuracy of this method. © 2010 Johnson & Johnson Consumer Companies, Inc. Journal of the European Academy of Dermatology and Venereology © 2010 European Academy of Dermatology and Venereology.
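The abstract above reports inter-grader agreement as a weighted κ of 0.84. As an illustrative sketch only (not the study's analysis code), a quadratic-weighted Cohen's kappa for two raters on an ordinal grading scale can be computed as:

```python
import numpy as np

def weighted_kappa(rater1, rater2, n_categories, weights="quadratic"):
    """Cohen's weighted kappa for two raters on ordinal categories 0..n-1."""
    observed = np.zeros((n_categories, n_categories))
    for a, b in zip(rater1, rater2):
        observed[a, b] += 1
    observed /= observed.sum()
    # Expected (chance) agreement matrix from the marginal distributions.
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))
    i, j = np.indices((n_categories, n_categories))
    w = (i - j) ** 2 if weights == "quadratic" else np.abs(i - j)
    return 1.0 - (w * observed).sum() / (w * expected).sum()

# Perfect agreement yields kappa = 1; chance-level agreement yields 0.
print(weighted_kappa([0, 1, 2, 1], [0, 1, 2, 1], 3))  # 1.0
```

The quadratic weighting penalizes large ordinal disagreements more heavily than adjacent-category ones, which suits graded severity scales such as PIH assessments.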
Perspectives on prevention, assessment, and rehabilitation of low back pain in WORK.
Ravenek, Michael J; Bryson-Campbell, Mikelle M; Shaw, Lynn; Hughes, Ian D
2010-01-01
The aim of this review was to describe the low back pain (LBP) knowledge base developed in WORK and to discuss its relevance to current perspectives in the broader literature on LBP and employment. A scoping review of the literature in WORK on LBP and employment was conducted using published articles from 1990-2009. Articles were organized into geographical regions and summarized for contributions to the domains of WORK: prevention, assessment, and rehabilitation. Methodological accordance of the articles was also assessed. Fifty articles were extracted and organized into contributions from authors within North America (n=34) and outside North America (n=16). In total there were 26 prevention, 7 assessment, and 12 rehabilitation articles in this review. Five articles were also classified as 'understanding' articles. More than half of the articles retrieved employed quantitative methodology. WORK has contributed a broad realm of publications to the knowledge base on LBP and employment. Two thirds of the articles were contributed from authors within North America, with a greater emphasis on prevention. This article highlights the similarities and differences in the international knowledge base in the management of LBP in WORK. Future directions for research are elaborated drawing on current perspectives of two experts on the management of LBP.
Indirect assessment of bulk strain soliton velocity in opaque solids
NASA Astrophysics Data System (ADS)
Belashov, A. V.; Beltukov, Y. M.; Petrov, N. V.; Samsonov, A. M.; Semenova, I. V.
2018-03-01
This paper presents a methodology for determining strain soliton velocity in opaque solid materials. The methodology is based on the analysis of soliton evolution in a layer of a transparent material adhesively bonded to a layer of the material under study. It is shown that the resulting soliton velocity in the compound waveguide equals the arithmetic mean of the soliton velocities in the two component materials. The suggested methodology is best suited to the analysis of materials with relatively close elastic parameters and can be applied in research on nonlinear wave processes in opaque composites based on transparent matrices.
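The arithmetic-mean relation stated in the abstract lends itself to a one-line inversion: given the measured composite velocity and the known transparent-layer velocity, the opaque-layer velocity follows directly. A minimal sketch (all numeric values below are hypothetical placeholders, not measurements from the paper):

```python
def composite_soliton_velocity(v_transparent: float, v_opaque: float) -> float:
    """Velocity in the two-layer waveguide = mean of the component velocities."""
    return 0.5 * (v_transparent + v_opaque)

def infer_opaque_velocity(v_composite: float, v_transparent: float) -> float:
    """Invert the mean relation to recover the opaque-layer velocity."""
    return 2.0 * v_composite - v_transparent

v_transparent = 1800.0   # m/s, hypothetical transparent-layer value
v_measured = 1900.0      # m/s, hypothetical composite measurement
print(infer_opaque_velocity(v_measured, v_transparent))  # 2000.0
```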
Patorno, Elisabetta; Patrick, Amanda R; Garry, Elizabeth M; Schneeweiss, Sebastian; Gillet, Victoria G; Bartels, Dorothee B; Masso-Gonzalez, Elvira; Seeger, John D
2014-11-01
Recent years have witnessed a growing body of observational literature on the association between glucose-lowering treatments and cardiovascular disease. However, many of the studies are based on designs or analyses that inadequately address the methodological challenges involved. We reviewed recent observational literature on the association between glucose-lowering medications and cardiovascular outcomes and assessed the design and analysis methods used, with a focus on their ability to address specific methodological challenges. We describe and illustrate these methodological issues and their impact on observed associations, providing examples from the reviewed literature. We suggest approaches that may be employed to manage these methodological challenges. From the evaluation of 81 publications of observational investigations assessing the association between glucose-lowering treatments and cardiovascular outcomes, we identified the following methodological challenges: 1) handling of temporality in administrative databases; 2) handling of risks that vary with time and treatment duration; 3) definitions of the exposure risk window; 4) handling of exposures that change over time; and 5) handling of confounding by indication. Most of these methodological challenges may be suitably addressed through application of appropriate methods. Observational research plays an increasingly important role in the evaluation of the clinical effects of diabetes treatment. Implementation of appropriate research methods holds the promise of reducing the potential for spurious findings and the risk that the spurious findings will mislead the medical community about risks and benefits of diabetes medications.
Advanced reliability modeling of fault-tolerant computer-based systems
NASA Technical Reports Server (NTRS)
Bavuso, S. J.
1982-01-01
Two methodologies for the reliability assessment of fault-tolerant, digital-computer-based systems are discussed. Computer-aided reliability estimation 3 (CARE 3) and gate logic software simulation (GLOSS) are assessment technologies that were developed to mitigate a serious weakness in the design and evaluation process for ultrareliable digital systems: the unavailability of a sufficiently powerful modeling technique for comparing the stochastic attributes of one system against others. Some of the more interesting attributes are reliability, system survival, safety, and mission success.
Assessment of undiscovered oil and gas resources of the Sud Province, north-central Africa
Brownfield, M.E.; Klett, T.R.; Schenk, C.J.; Charpentier, R.R.; Cook, T.A.; Pollastro, R.M.; Tennyson, Marilyn E.
2011-01-01
The Sud Province located in north-central Africa recently was assessed for undiscovered, technically recoverable oil, natural gas, and natural gas liquids resources as part of the U.S. Geological Survey's (USGS) World Oil and Gas Assessment. Using a geology-based assessment methodology, the USGS estimated mean volumes of 7.31 billion barrels of oil, 13.42 trillion cubic feet of natural gas, and 353 million barrels of natural gas liquids.
Assessment of undiscovered oil and gas resources of the Chad Basin Province, North-Central Africa
Brownfield, Michael E.; Schenk, Christopher J.; Charpentier, Ronald R.; Klett, Timothy R.; Cook, Troy A.; Pollastro, Richard M.; Tennyson, Marilyn E.
2010-01-01
The Chad Basin Province located in north-central Africa recently was assessed for undiscovered, technically recoverable oil, natural gas, and natural gas liquids resources as part of the U.S. Geological Survey's (USGS) World Oil and Gas Assessment. Using a geology-based assessment methodology, the USGS estimated mean volumes of 2.32 billion barrels of oil, 14.65 trillion cubic feet of natural gas, and 391 million barrels of natural gas liquids.
Assessment of undiscovered oil and gas resources of four East Africa Geologic Provinces
Brownfield, Michael E.; Schenk, Christopher J.; Charpentier, Ronald R.; Klett, Timothy R.; Cook, Troy A.; Pollastro, Richard M.; Tennyson, Marilyn E.
2012-01-01
Four geologic provinces along the east coast of Africa recently were assessed for undiscovered, technically recoverable oil, natural gas, and natural gas liquids resources as part of the U.S. Geological Survey's (USGS) World Oil and Gas Assessment. Using a geology-based assessment methodology, the USGS estimated mean volumes of 27.6 billion barrels of oil, 441.1 trillion cubic feet of natural gas, and 13.77 billion barrels of natural gas liquids.
Analysis and Purification of Bioactive Natural Products: The AnaPurNa Study
2012-01-01
Based on a meta-analysis of data mined from almost 2000 publications on bioactive natural products (NPs) from >80 000 pages of 13 different journals published in 1998–1999, 2004–2005, and 2009–2010, the aim of this systematic review is to provide both a survey of the status quo and a perspective for analytical methodology used for isolation and purity assessment of bioactive NPs. The study provides numerical measures of the common means of sourcing NPs, the chromatographic methodology employed for NP purification, and the role of spectroscopy and purity assessment in NP characterization. A link is proposed between the observed use of various analytical methodologies, the challenges posed by the complexity of metabolomes, and the inescapable residual complexity of purified NPs and their biological assessment. The data provide inspiration for the development of innovative methods for NP analysis as a means of advancing the role of naturally occurring compounds as a viable source of biologically active agents with relevance for human health and global benefit. PMID:22620854
An approach to quantitative sustainability assessment in the early stages of process design.
Tugnoli, Alessandro; Santarelli, Francesco; Cozzani, Valerio
2008-06-15
A procedure was developed for the quantitative assessment of key performance indicators suitable for the sustainability analysis of alternative processes, mainly addressing the early stages of process design. The methodology was based on the calculation of a set of normalized impact indices allowing a direct comparison of the additional burden of each process alternative on a selected reference area. Innovative reference criteria were developed to compare and aggregate the impact indicators on the basis of the site-specific impact burden and sustainability policy. An aggregation procedure also allows the calculation of overall sustainability performance indicators and of an "impact fingerprint" of each process alternative. The final aim of the method is to support the decision making process during process development, providing a straightforward assessment of the expected sustainability performances. The application of the methodology to case studies concerning alternative waste disposal processes allowed a preliminary screening of the expected critical sustainability impacts of each process. The methodology was shown to provide useful results to address sustainability issues in the early stages of process design.
Huter, Kai; Dubas-Jakóbczyk, Katarzyna; Kocot, Ewa; Kissimova-Skarbek, Katarzyna; Rothgang, Heinz
2018-01-01
In light of demographic developments, health promotion interventions for older people are gaining importance. In addition to the methodological challenges arising from the economic evaluation of health promotion interventions in general, there are specific methodological problems for the particular target group of older people. Four main methodological challenges are discussed in the literature: the measurement and valuation of informal caregiving, accounting for productivity costs, the effects of unrelated costs in added life years, and the inclusion of 'beyond-health' benefits. This paper focuses on the question of whether, and to what extent, these specific methodological requirements are actually met in applied health economic evaluations. Following a systematic review of pertinent health economic evaluations, the included studies are analysed against four assessment criteria derived from methodological debates on the economic evaluation of health promotion interventions in general and of economic evaluations targeting older people in particular. Of the 37 studies included in the systematic review, only very few include cost and outcome categories discussed as being of specific relevance to the assessment of health promotion interventions for older people. The few studies that do consider these aspects use very heterogeneous methods, so there is no common methodological standard. There is a strong need for the development of guidelines to achieve better comparability and to include cost categories and outcomes that are relevant for older people. Disregarding these methodological obstacles could implicitly lead to discrimination against the elderly in health promotion and disease prevention and, hence, to age-based rationing of public health care.
Stieglitz, Rolf-Dieter; Haug, Achim; Fähndrich, Erdmann; Rösler, Michael; Trabert, Wolfgang
2017-01-01
The documentation of psychopathology is core to the clinical practice of the psychiatrist and clinical psychologist. However, both in initial training and in further specialization, this aspect of their work receives only scant attention. Yet for the past 50 years the Association for Methodology and Documentation in Psychiatry (AMDP) System has been available as a tool serving precisely this purpose: offering a systematic introduction to the terminology and documentation of psychopathology. Its development was motivated by the need for an assessment procedure for the reliable documentation of the effectiveness of newly developed psychopharmacological substances. Subsequently, the AMDP-System began to be applied to a number of methodological questions in psychiatry (e.g., the frequency and specificity of particular symptoms, the comparison of rating scales). The System then became increasingly important in clinical practice as well, and today it represents the most widely used instrument for the documentation of psychopathology in the German-speaking countries of Europe. This paper offers an overview of the AMDP-System: its origins, design, and functionality. After an initial account of the history and development of the AMDP-System, the discussion focuses in turn on the System's underlying methodological principles, the transfer of clinical skills and competencies in its practical application, and its use in research and clinical practice. Finally, potential future areas of development for the AMDP-System are explored.
NASA Astrophysics Data System (ADS)
Bosca, Ryan J.; Jackson, Edward F.
2016-01-01
Assessing and mitigating the various sources of bias and variance associated with image quantification algorithms is essential to the use of such algorithms in clinical research and practice. Assessment is usually accomplished with grid-based digital reference objects (DRO) or, more recently, digital anthropomorphic phantoms based on normal human anatomy. Publicly available digital anthropomorphic phantoms can provide a basis for generating realistic model-based DROs that incorporate the heterogeneity commonly found in pathology. Using a publicly available vascular input function (VIF) and digital anthropomorphic phantom of a normal human brain, a methodology was developed to generate a DRO based on the general kinetic model (GKM) that represented realistic and heterogeneously enhancing pathology. GKM parameters were estimated from a deidentified clinical dynamic contrast-enhanced (DCE) MRI exam. This clinical imaging volume was co-registered with a discrete tissue model, and model parameters estimated from clinical images were used to synthesize a DCE-MRI exam that consisted of normal brain tissues and a heterogeneously enhancing brain tumor. An example application of spatial smoothing was used to illustrate potential applications in assessing quantitative imaging algorithms. A voxel-wise Bland-Altman analysis demonstrated negligible differences between the parameters estimated with and without spatial smoothing (using a small radius Gaussian kernel). In this work, we reported an extensible methodology for generating model-based anthropomorphic DROs containing normal and pathological tissue that can be used to assess quantitative imaging algorithms.
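The abstract above synthesizes tissue enhancement curves from the general kinetic model (GKM). As a hedged illustration of how such a curve can be generated from a vascular input function under the standard Tofts form of the GKM (the toy input function and parameter values below are assumptions, not the study's data):

```python
import numpy as np

# Standard Tofts form of the GKM:
#   C_t(t) = Ktrans * integral_0^t C_p(tau) * exp(-kep * (t - tau)) dtau,
# with kep = Ktrans / ve, evaluated here by discrete convolution.
def gkm_tissue_curve(t, cp, ktrans, ve):
    dt = t[1] - t[0]
    kernel = np.exp(-(ktrans / ve) * t)
    return ktrans * np.convolve(cp, kernel)[: len(t)] * dt

t = np.arange(0.0, 300.0, 1.0)          # time axis in seconds
cp = np.exp(-t / 60.0) * (t > 10)       # toy vascular input function (a.u.)
ct = gkm_tissue_curve(t, cp, ktrans=0.25 / 60, ve=0.3)  # Ktrans per second
```

Voxel-wise maps of Ktrans and ve estimated from clinical data can then drive such curves to populate a heterogeneously enhancing digital reference object.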
[Demonstrating patient safety requires acceptance of a broader scientific palette].
Leistikow, I
2017-01-01
It is high time the medical community recognised that patient-safety research can be assessed using scientific methods other than the traditional medical ones. There is often a fundamental mismatch between the methodology of patient-safety research and the methodology used to assess the quality of that research. One example is research into the reliability and validity of record review as a method for detecting adverse events. This type of research is based on logical positivism, while record review itself is based on social constructivism. Record review does not lead to "one truth": adverse events are not measured from the records themselves, but by weighing the probability that certain situations can be classified as adverse events. Healthcare should welcome the behavioural and social sciences to its scientific palette. Restricting ourselves to the randomised controlled trial paradigm is short-sighted and dangerous; it deprives patients of much-needed improvements in safety.
NASA Astrophysics Data System (ADS)
Zolfaghari, Mohammad R.
2009-07-01
Recent achievements in computer and information technology have provided the necessary tools to extend the application of probabilistic seismic hazard mapping from its traditional engineering use to many other applications, such as risk mitigation, disaster management, post-disaster recovery planning, and catastrophe loss estimation and risk management. Due to incomplete knowledge of the factors controlling seismic hazards, uncertainties are associated with all steps involved in developing and using seismic hazard models. While some of these uncertainties can be reduced by more accurate and reliable input data, most of the data and assumptions used in seismic hazard studies remain highly uncertain and contribute to the uncertainty of the final results. This paper describes a new methodology for the assessment of seismic hazard. The proposed approach better captures spatial variations in seismological and tectonic characteristics, which allows better treatment of their uncertainties. GIS raster-based data models are used to represent geographical features in a cell-based system. The cell-based source model proposed in this paper provides a framework for incorporating many geographically referenced seismotectonic factors into seismic hazard modelling; examples include seismic source boundaries, rupture geometry, seismic activity rate, focal depth, and the choice of attenuation functions. The proposed methodology improves on several aspects of the standard analytical tools currently used for the assessment and mapping of regional seismic hazard, makes good use of recent advances in computer software and hardware, and is well structured for implementation with conventional GIS tools.
Lattimore, Vanessa L.; Pearson, John F.; Currie, Margaret J.; Spurdle, Amanda B.; Robinson, Bridget A.; Walker, Logan C.
2018-01-01
PCR-based RNA splicing assays are commonly used in diagnostic and research settings to assess the potential effects of variants of uncertain clinical significance in BRCA1 and BRCA2. The Evidence-based Network for the Interpretation of Germline Mutant Alleles (ENIGMA) consortium completed a multicentre investigation to evaluate differences in assay design and the integrity of published data, raising a number of methodological questions associated with cell culture conditions and PCR-based protocols. We utilized targeted RNA-seq to re-assess BRCA1 and BRCA2 mRNA isoform expression patterns in lymphoblastoid cell lines (LCLs) previously used in the multicentre ENIGMA study. Capture of the targeted cDNA sequences was carried out using 34 BRCA1 and 28 BRCA2 oligonucleotides from the Illumina Truseq Targeted RNA Expression platform. Our results show that targeted RNA-seq analysis of LCLs overcomes many of the methodology limitations associated with PCR-based assays leading us to make the following observations and recommendations: (1) technical replicates (n > 2) of variant carriers to capture methodology induced variability associated with RNA-seq assays, (2) LCLs can undergo multiple freeze/thaw cycles and can be cultured up to 2 weeks without noticeably influencing isoform expression levels, (3) nonsense-mediated decay inhibitors are essential prior to splicing assays for comprehensive mRNA isoform detection, (4) quantitative assessment of exon:exon junction levels across BRCA1 and BRCA2 can help distinguish between normal and aberrant isoform expression patterns. Experimentally derived recommendations from this study will facilitate the application of targeted RNA-seq platforms for the quantitation of BRCA1 and BRCA2 mRNA aberrations associated with sequence variants of uncertain clinical significance. PMID:29774201
NASA Astrophysics Data System (ADS)
Naldesi, Luciano; Buttol, Patrizia; Masoni, Paolo; Misceo, Monica; Sára, Balázs
2004-12-01
"eLCA" is a European Commission financed project aimed at realising "On line green tools and services for Small and Medium-sized Enterprises (SMEs)". Knowledge and use of Life Cycle Assessment (LCA) by SMEs are strategic to introduce the Integrated Product Policy (IPP) in Europe, but methodology simplification is needed. LCA requires a large amount of validated general and sector specific data. Since their availability and cost can be insuperable barriers for SMEs, pre-elaborated data/meta-data, use of standards and low cost solutions are required. Within the framework of the eLCA project an LCA software - eVerdEE - based on a simplified methodology and specialised for SMEs has been developed. eVerdEE is a web-based tool with some innovative features. Its main feature is the adaptation of ISO 14040 requirements to offer easy-to-handle functions with solid scientific bases. Complex methodological problems, such as the system boundaries definition, the data quality estimation and documentation, the choice of impact categories, are simplified according to the SMEs" needs. Predefined "Goal and Scope definition" and "Inventory" forms, a user-friendly and well structured procedure are time and cost-effective. The tool is supported by a database containing pre-elaborated environmental indicators of substances and processes for different impact categories. The impact assessment is calculated automatically by using the user"s input and the database values. The results have different levels of interpretation in order to identify the life cycle critical points and the improvement options. The use of a target plot allows the direct comparison of different design alternatives.
Sleep disturbances as an evidence-based suicide risk factor.
Bernert, Rebecca A; Kim, Joanne S; Iwata, Naomi G; Perlis, Michael L
2015-03-01
Increasing research indicates that sleep disturbances may confer increased risk for suicidal behaviors, including suicidal ideation, suicide attempts, and death by suicide. Despite increased investigation, a number of methodological problems present important limitations to the validity and generalizability of findings in this area, which warrant additional focus. To evaluate and delineate sleep disturbances as an evidence-based suicide risk factor, a systematic review of the extant literature was conducted with methodological considerations as a central focus. The following methodologic criteria were required for inclusion: the report (1) evaluated an index of sleep disturbance; (2) examined an outcome measure for suicidal behavior; (3) adjusted for presence of a depression diagnosis or depression severity, as a covariate; and (4) represented an original investigation as opposed to a chart review. Reports meeting inclusion criteria were further classified and reviewed according to: study design and timeframe; sample type and size; sleep disturbance, suicide risk, and depression covariate assessment measure(s); and presence of positive versus negative findings. Based on keyword search, the following search engines were used: PubMed and PsycINFO. Search criteria generated N = 82 articles representing original investigations focused on sleep disturbances and suicide outcomes. Of these, N = 18 met inclusion criteria for review based on systematic analysis. Of the reports identified, N = 18 evaluated insomnia or poor sleep quality symptoms, whereas N = 8 assessed nightmares in association with suicide risk. Despite considerable differences in study designs, samples, and assessment techniques, the comparison of such reports indicates preliminary, converging evidence for sleep disturbances as an empirical risk factor for suicidal behaviors, while highlighting important, future directions for increased investigation.
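The four inclusion criteria above form a simple conjunctive filter over candidate reports; a sketch of how such a screening step could be encoded (the field names are hypothetical):

```python
# The four methodologic inclusion criteria, as boolean fields (names hypothetical)
CRITERIA = ("sleep_disturbance_index", "suicide_outcome_measure",
            "depression_covariate_adjusted", "original_investigation")

def meets_inclusion(report):
    """A report enters the review only if it satisfies all four criteria."""
    return all(report.get(c, False) for c in CRITERIA)
```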
ERIC Educational Resources Information Center
Fujimoto, Kayo; Unger, Jennifer B.; Valente, Thomas W.
2012-01-01
Using a network analytic framework, this study introduces a new method to measure peer influence based on adolescents' affiliations or 2-mode social network data. Exposure based on affiliations is referred to as the "affiliation exposure model." This study demonstrates the methodology using data on young adolescent smoking being influenced by…
ERIC Educational Resources Information Center
Dow, Mirah J.; Boettcher, Carrie A.; Diego, Juana F.; Karch, Marziah E.; Todd-Diaz, Ashley; Woods, Kristine M.
2015-01-01
The purpose of this mixed methods study is to determine the effectiveness of case-based pedagogy in teaching basic principles of information ethics and ethical decision making. Study reports results of pre- and post-assessment completed by 49 library and information science (LIS) graduate students at a Midwestern university. Using Creswell's…
Assessing reservoir operations risk under climate change
Brekke, L.D.; Maurer, E.P.; Anderson, J.D.; Dettinger, M.D.; Townsley, E.S.; Harrison, A.; Pruitt, T.
2009-01-01
Risk-based planning offers a robust way to identify strategies that permit adaptive water resources management under climate change. This paper presents a flexible methodology for conducting climate change risk assessments involving reservoir operations. Decision makers can apply this methodology to their systems by selecting future periods and risk metrics relevant to their planning questions and by collectively evaluating system impacts relative to an ensemble of climate projection scenarios (weighted or not). This paper shows multiple applications of this methodology in a case study involving California's Central Valley Project and State Water Project systems. Multiple applications were conducted to show how choices made in conducting the risk assessment, choices known as analytical design decisions, can affect assessed risk. Specifically, risk was reanalyzed for every choice combination of two design decisions: (1) whether to assume climate change will influence flood-control constraints on water supply operations (and how), and (2) whether to weight climate change scenarios (and how). Results show that assessed risk would motivate different planning pathways depending on decision-maker attitudes toward risk (e.g., risk neutral versus risk averse). Results also show that assessed risk at a given risk attitude is sensitive to the analytical design choices listed above, with the choice of whether to adjust flood-control rules under climate change having considerably more influence than the choice on whether to weight climate scenarios. Copyright 2009 by the American Geophysical Union.
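The design decision on scenario weighting can be illustrated as a weighted expectation of a risk metric over the projection ensemble, with a quantile variant standing in for a risk-averse attitude. This is a sketch of the general idea, not the paper's implementation:

```python
def assessed_risk(metric_by_scenario, weights=None):
    """Risk-neutral view: (weighted) mean of a risk metric over the ensemble."""
    scenarios = list(metric_by_scenario)
    if weights is None:  # unweighted case: treat projections as equally likely
        weights = {s: 1.0 for s in scenarios}
    total = sum(weights[s] for s in scenarios)
    return sum(weights[s] * metric_by_scenario[s] for s in scenarios) / total

def risk_averse(metric_by_scenario, quantile=0.9):
    """Risk-averse view: an upper quantile of the ensemble instead of the mean."""
    vals = sorted(metric_by_scenario.values())
    idx = min(len(vals) - 1, int(quantile * len(vals)))
    return vals[idx]
```

Re-running both views under different weighting choices shows how the analytical design decision, not just the climate projections, shapes the assessed risk.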
Using MBTI for the Success Assessment of Engineering Teams in Project-Based Learning
ERIC Educational Resources Information Center
Rodríguez Montequín, V.; Mesa Fernández, J. M.; Balsera, J. Villanueva; García Nieto, A.
2013-01-01
Project-Based Learning (PBL) is a teaching and learning methodology that emphasizes student centered instruction by assigning projects. The students have to conduct significant projects and cope with realistic working conditions and scenarios. PBL is generally done by groups of students working together towards a common goal. Several factors play…
ERIC Educational Resources Information Center
Stirling, Keith
2000-01-01
Describes a session on information retrieval systems that planned to discuss relevance measures with Web-based information retrieval; retrieval system performance and evaluation; probabilistic independence of index terms; vector-based models; metalanguages and digital objects; how users assess the reliability, timeliness and bias of information;…
Evolution of Project-Based Learning in Small Groups in Environmental Engineering Courses
ERIC Educational Resources Information Center
Requies, Jesús M.; Agirre, Ion; Barrio, V. Laura; Graells, Moisès
2018-01-01
This work presents the assessment of the development and evolution of an active methodology (Project-Based Learning--PBL) implemented on the course "Unit Operations in Environmental Engineering", within the bachelor's degree in Environmental Engineering, with the purpose of decreasing the dropout rate in this course. After the initial…
PBL-SEE: An Authentic Assessment Model for PBL-Based Software Engineering Education
ERIC Educational Resources Information Center
dos Santos, Simone C.
2017-01-01
The problem-based learning (PBL) approach has been successfully applied to teaching software engineering thanks to its principles of group work, learning by solving real problems, and learning environments that match the market realities. However, the lack of well-defined methodologies and processes for implementing the PBL approach represents a…
Designing trials for pressure ulcer risk assessment research: methodological challenges.
Balzer, K; Köpke, S; Lühmann, D; Haastert, B; Kottner, J; Meyer, G
2013-08-01
For decades various pressure ulcer risk assessment scales (PURAS) have been developed and implemented into nursing practice despite uncertainty whether use of these tools helps to prevent pressure ulcers. According to current methodological standards, randomised controlled trials (RCTs) are required to conclusively determine the clinical efficacy and safety of this risk assessment strategy. In these trials, PURAS-aided risk assessment has to be compared to nurses' clinical judgement alone in terms of its impact on pressure ulcer incidence and adverse outcomes. However, RCTs evaluating diagnostic procedures are prone to specific risks of bias and threats to statistical power which may challenge their validity and feasibility. This discussion paper critically reflects on the rigour and feasibility of experimental research needed to substantiate the clinical efficacy of PURAS-aided risk assessment. Based on reflections on the methodological literature, a critical appraisal of available trials on this subject, and an analysis of a protocol developed for a methodologically robust cluster-RCT, this paper arrives at the following conclusions. First, available trials do not provide reliable estimates of the impact of PURAS-aided risk assessment on pressure ulcer incidence compared to nurses' clinical judgement alone, due to serious risks of bias and insufficient sample size. Second, it seems infeasible to assess this impact by means of rigorous experimental studies, since the required sample size would become extremely large if likely threats to validity and power are properly taken into account. Third, evidence-linkage approaches currently seem the most promising for evaluating the clinical efficacy and safety of PURAS-aided risk assessment. With this kind of secondary research, the downstream effect of the use of PURAS on pressure ulcer incidence could be modelled by combining the best available evidence for single parts of this pathway.
However, to yield reliable modelling results, more robust experimental research evaluating specific parts of the pressure ulcer risk assessment-prevention pathway is needed. Copyright © 2013 Elsevier Ltd. All rights reserved.
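The sample-size concern can be made concrete with a standard two-proportion calculation inflated by the design effect for cluster randomization. The incidence figures, cluster size, and intracluster correlation (ICC) below are illustrative assumptions, not values from the protocol:

```python
from math import ceil, sqrt

def n_per_arm(p1, p2, z_alpha=1.96, z_power=0.84):
    """Per-arm sample size for comparing two proportions
    (normal approximation, two-sided alpha = 0.05, power = 0.80)."""
    pbar = (p1 + p2) / 2
    num = (z_alpha * sqrt(2 * pbar * (1 - pbar))
           + z_power * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

def cluster_n_per_arm(p1, p2, cluster_size, icc):
    """Inflate by the design effect 1 + (m - 1) * ICC for cluster randomization."""
    deff = 1 + (cluster_size - 1) * icc
    return ceil(n_per_arm(p1, p2) * deff)
```

Detecting, say, a drop in incidence from 10% to 8% already needs several thousand patients per arm before clustering is accounted for, which illustrates why the paper judges a definitive cluster-RCT infeasible.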
Klasmeier, Jörg; Matthies, Michael; Macleod, Matthew; Fenner, Kathrin; Scheringer, Martin; Stroebe, Maximilian; Le Gall, Anne Christine; Mckone, Thomas; Van De Meent, Dik; Wania, Frank
2006-01-01
We propose a multimedia model-based methodology to evaluate whether a chemical substance qualifies as POP-like based on overall persistence (Pov) and potential for long-range transport (LRTP). It relies upon screening chemicals against the Pov and LRTP characteristics of selected reference chemicals with well-established environmental fates. Results indicate that chemicals of high and low concern in terms of persistence and long-range transport can be consistently identified by eight contemporary multimedia models using the proposed methodology. Model results for three hypothetical chemicals illustrate that the model-based classification of chemicals according to Pov and LRTP is not always consistent with the single-media half-life approach proposed by the UNEP Stockholm Convention and that the models provide additional insight into the likely long-term hazards associated with chemicals in the environment. We suggest this model-based classification method be adopted as a complement to screening against defined half-life criteria at the initial stages of tiered assessments designed to identify POP-like chemicals and to prioritize further environmental fate studies for new and existing chemicals.
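A sketch of the screening logic, in which a candidate chemical's modelled Pov and LRTP are compared against reference chemicals of well-established fate. The reference name and threshold values below are placeholders, not data from the study:

```python
# Hypothetical reference chemicals with well-established fates:
# name -> (overall persistence Pov in days, LRTP as travel distance in km)
REFERENCES = {"reference POP A": (200.0, 5000.0)}

def classify_pop_like(pov, lrtp, references=REFERENCES):
    """Flag a candidate as POP-like when both Pov and LRTP meet or exceed
    those of at least one reference chemical of known POP behaviour."""
    for name, (ref_pov, ref_lrtp) in references.items():
        if pov >= ref_pov and lrtp >= ref_lrtp:
            return "POP-like (comparable to %s)" % name
    return "not POP-like"
```

Because Pov and LRTP integrate partitioning across media, this two-metric screen can disagree with single-media half-life criteria, which is the paper's central point.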
Risk analysis based on hazards interactions
NASA Astrophysics Data System (ADS)
Rossi, Lauro; Rudari, Roberto; Trasforini, Eva; De Angeli, Silvia; Becker, Joost
2017-04-01
Despite an increasing need for open, transparent, and credible multi-hazard risk assessment methods, models, and tools, the availability of comprehensive risk information needed to inform disaster risk reduction is limited, and the level of interaction across hazards is not systematically analysed. Risk assessment methodologies for different hazards often produce risk metrics that are not comparable. Hazard interactions (the consecutive occurrence of two or more different events) are generally neglected, resulting in strongly underestimated risk in the most exposed areas. This study presents cases of interaction between different hazards, showing how subsidence can affect coastal and river flood risk (Jakarta and Bandung, Indonesia) or how flood risk is modified after a seismic event (Italy). The analysis of well-documented real study cases, based on a combination of Earth Observation and in-situ data, serves as a basis for the formalisation of a multi-hazard methodology, identifying gaps and research frontiers. Multi-hazard risk analysis is performed through the RASOR platform (Rapid Analysis and Spatialisation Of Risk). A scenario-driven query system allows users to simulate future scenarios based on existing and assumed conditions, to compare them with historical scenarios, and to model multi-hazard risk both before and during an event (www.rasor.eu).
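One crude way to illustrate why neglecting interactions underestimates risk is to add an amplification term for hazard pairs that occur consecutively. The loss figures and the interaction multiplier here are entirely hypothetical and much simpler than RASOR's modelling:

```python
def combined_annual_loss(losses, interactions=None):
    """Sum single-hazard expected annual losses, then add an amplification
    term for hazard pairs whose consecutive occurrence worsens damage."""
    total = sum(losses.values())
    for (h1, h2), factor in (interactions or {}).items():
        if h1 in losses and h2 in losses:
            total += factor * min(losses[h1], losses[h2])
    return total
```

Treating subsidence and flood independently gives the lower figure; accounting for subsidence deepening flood exposure gives the higher one.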
Kumar, Amit; Sokhansanj, Shahab; Flynn, Peter C
2006-01-01
This study details a multicriteria assessment methodology that integrates economic, social, environmental, and technical factors in order to rank alternatives for biomass collection and transportation systems. Ranking of biomass collection systems is based on the cost of delivered biomass, the quality of biomass supplied, emissions during collection, energy input to the chain operations, and the maturity of supply system technologies. The assessment methodology is used to evaluate alternatives for collecting 1.8 x 10^6 dry t/yr based on assumptions made about the performance of various assemblies of biomass collection systems. A proposed collection option using a loafer/stacker was shown to be the best option, followed by ensiling and baling. Ranking of biomass transport systems is based on the cost of biomass transport, emissions during transport, traffic congestion, and the maturity of different technologies. At a capacity of 4 x 10^6 dry t/yr, rail transport was shown to be the best option, followed by truck transport and pipeline transport, respectively. These rankings depend highly on the assumed maturity of technologies and the scale of utilization. They may change if technologies such as loafing or ensiling (wet storage) methods prove infeasible for large-scale collection systems.
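The ranking described above is, in essence, a weighted-sum multicriteria aggregation. A minimal sketch; the criterion scores and weights are illustrative, not the study's data:

```python
def rank_alternatives(scores, weights):
    """Weighted-sum aggregation over normalized (0-1, higher is better)
    criterion scores; returns alternatives ordered best first."""
    totals = {
        alt: sum(weights[criterion] * score for criterion, score in crit.items())
        for alt, crit in scores.items()
    }
    return sorted(totals, key=totals.get, reverse=True)
```

Sensitivity of the ranking to the weights mirrors the paper's caveat that results depend strongly on assumed technology maturity and scale.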
Assessment of perceptions of clinical management in courses oriented by competency.
Gomes, Romeu; Padilha, Roberto de Queiroz; Lima, Valéria Vernaschi; Silva, Cosme Marcelo Furtado Passos da
2018-01-01
The study aims to assess perceptions of mastery of abilities in clinical management among participants of courses oriented by competency and based on active methodologies of teaching and learning, before and after the training process offered. Three conceptual frameworks were utilized: clinical management, expectation of self-efficacy, and the holistic concept of competency. Methodologically, an electronic instrument using a Likert scale was made available to students of the training courses in two stages: before the courses were undertaken and after their completion. The group of subjects that participated in both stages comprised 825 trainees. Means, standard deviations, and the Wilcoxon test were used in the analysis. In terms of findings, the perception of mastery of abilities in clinical management generally increased after the courses, indicating a positive contribution of the training process. Among other aspects of the results, it is concluded that the educational initiatives studied, oriented by competency and based on active methodologies of teaching and learning, can increase participants' perception of their mastery of the abilities present in the competency profile, confirming the study's hypothesis.
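The pre/post comparison rests on the Wilcoxon signed-rank test for paired samples. A self-contained sketch of the test statistic on invented Likert responses (the study's own software and data are not specified here):

```python
def wilcoxon_signed_rank(before, after):
    """Wilcoxon signed-rank statistic for paired samples (zero diffs dropped).
    Returns (W, n): W is the smaller of the positive/negative rank sums over
    the n nonzero differences; tied |diff| values receive average ranks."""
    diffs = [b - a for a, b in zip(before, after) if b != a]
    ranked = sorted(diffs, key=abs)
    ranks = {}
    i = 0
    while i < len(ranked):
        j = i
        while j < len(ranked) and abs(ranked[j]) == abs(ranked[i]):
            j += 1
        avg = (i + 1 + j) / 2  # average rank for the tie block i+1..j
        for k in range(i, j):
            ranks[k] = avg
        i = j
    w_pos = sum(r for k, r in ranks.items() if ranked[k] > 0)
    w_neg = sum(r for k, r in ranks.items() if ranked[k] < 0)
    return min(w_pos, w_neg), len(ranked)
```

A small W relative to n signals a consistent directional shift between the two stages, as reported in the study.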
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buelt, J.L.; Stottlemyre, J.A.; White, M.K.
1991-09-01
Because of the great complexity and number of potential waste sites facing the US Department of Energy (DOE) for potential cleanup, the DOE is supporting the development of a computer-based methodology to streamline the remedial investigation/feasibility study process required for DOE operable units. DOE operable units are generally more complex in nature because of the existence of multiple waste sites within many of the operable units and the presence of mixed radioactive and hazardous chemical wastes. Consequently, Pacific Northwest Laboratory (PNL) is developing the Remedial Action Assessment System (RAAS), which is aimed at screening, linking, and evaluating established technology process options in support of conducting feasibility studies under the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA). It is also intended to do the same in support of corrective measures studies required by the Resource Conservation and Recovery Act (RCRA). This paper presents the characteristics of two RAAS prototypes currently being developed. These include the RAAS Technology Information System, which accesses information on technologies in a graphical and tabular manner, and the main RAAS methodology, which screens, links, and evaluates remedial technologies. 4 refs., 3 figs., 1 tab.
Pussegoda, Kusala; Turner, Lucy; Garritty, Chantelle; Mayhew, Alain; Skidmore, Becky; Stevens, Adrienne; Boutron, Isabelle; Sarkis-Onofre, Rafael; Bjerre, Lise M; Hróbjartsson, Asbjørn; Altman, Douglas G; Moher, David
2017-06-19
The methodological quality and completeness of reporting of systematic reviews (SRs) is fundamental to optimal implementation of evidence-based health care and the reduction of research waste. Methods exist to appraise SRs, yet little is known about how they are used in SRs or where there are potential gaps in research best-practice guidance materials. The aims of this study are to identify reports assessing the methodological quality (MQ) and/or reporting quality (RQ) of a cohort of SRs and to assess their number, general characteristics, and approaches to 'quality' assessment over time. The Cochrane Library, MEDLINE®, and EMBASE® were searched from January 1990 to October 16, 2014, for reports assessing MQ and/or RQ of SRs. Title, abstract, and full-text screening of all reports were conducted independently by two reviewers. Reports assessing the MQ and/or RQ of a cohort of ten or more SRs of interventions were included. All results are reported as frequencies and percentages of reports. Of 20,765 unique records retrieved, 1189 were assessed at full text, of which 76 reports were included. Eight previously published approaches to assessing MQ, or reporting guidelines used as a proxy to assess RQ, were used in 80% (61/76) of identified reports. These included two reporting guidelines (PRISMA and QUOROM), five quality assessment tools (AMSTAR, R-AMSTAR, OQAQ, Mulrow, Sacks), and the GRADE criteria. The remaining 24% (18/76) of reports developed their own criteria. PRISMA, OQAQ, and AMSTAR were the most commonly used published tools to assess MQ or RQ. In conjunction with other approaches, published tools were used in 29% (22/76) of reports, with 36% (8/22) assessing adherence to both PRISMA and AMSTAR criteria and 26% (6/22) using QUOROM and OQAQ. The methods used to assess the quality of SRs are diverse, and none has become universally accepted. 
As new tools and guidelines are developed to improve both the MQ and RQ of SRs, authors of methodological studies are encouraged to put thoughtful consideration into the use of appropriate tools to assess quality and reporting.
Carayon, Pascale; Li, Yaqiong; Kelly, Michelle M.; DuBenske, Lori L.; Xie, Anping; McCabe, Brenna; Orne, Jason; Cox, Elizabeth D.
2014-01-01
Human factors and ergonomics methods are needed to redesign healthcare processes and support patient-centered care, in particular for vulnerable patients such as hospitalized children. We implemented and evaluated a stimulated recall methodology for collective confrontation in the context of family-centered rounds. Five parents and five healthcare team members reviewed video records of their bedside rounds, and were then interviewed using the stimulated recall methodology to identify work system barriers and facilitators in family-centered rounds. The evaluation of the methodology was based on a survey of the participants, and a qualitative analysis of interview data in light of the work system model of Smith and Carayon (1989; 2000). Positive survey feedback from the participants was received. The stimulated recall methodology identified barriers and facilitators in all work system elements. Participatory ergonomics methods such as the stimulated recall methodology allow a range of participants, including parents and children, to participate in healthcare process improvement. PMID:24894378
RESIDUAL RISK ASSESSMENT: ETHYLENE OXIDE ...
This document describes the residual risk assessment for the Ethylene Oxide Commercial Sterilization source category. For stationary sources, section 112(f) of the Clean Air Act requires EPA to assess risks to human health and the environment following implementation of technology-based control standards. If these technology-based control standards do not provide an ample margin of safety, then EPA is required to promulgate additional standards. This document describes the methodology and results of the residual risk assessment performed for the Ethylene Oxide Commercial Sterilization source category. The results of this analysis will assist EPA in determining whether a residual risk rule for this source category is appropriate.
Embedded assessment algorithms within home-based cognitive computer game exercises for elders.
Jimison, Holly; Pavel, Misha
2006-01-01
With the recent consumer interest in computer-based activities designed to improve cognitive performance, there is a growing need for scientific assessment algorithms to validate the potential contributions of cognitive exercises. In this paper, we present a novel methodology for incorporating dynamic cognitive assessment algorithms within computer games designed to enhance cognitive performance. We describe how this approach works for a variety of computer applications and present cognitive monitoring results for one of the computer game exercises. The real-time cognitive assessments also provide a control signal for adapting the difficulty of the game exercises and providing tailored help for elders of varying abilities.
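The control loop in which real-time assessment drives game difficulty can be illustrated with a simple staircase rule. The target accuracy band and step size are assumptions for illustration, not the authors' parameters:

```python
def update_difficulty(level, accuracy, target=0.75, band=0.10, step=1,
                      lo=1, hi=10):
    """Staircase adaptation: raise difficulty when observed accuracy is above
    the target band, lower it when below, and clamp to [lo, hi]."""
    if accuracy > target + band:
        level = min(hi, level + step)
    elif accuracy < target - band:
        level = max(lo, level - step)
    return level
```

Run once per game round, this keeps each player near a fixed challenge level, which is what makes the performance signal usable for cognitive monitoring.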
NASA Astrophysics Data System (ADS)
Noble, Bram F.; Christmas, Lisa M.
2008-01-01
This article presents a methodological framework for strategic environmental assessment (SEA) application. The overall objective is to demonstrate SEA as a systematic and structured policy, plan, and program (PPP) decision support tool. In order to accomplish this objective, a stakeholder-based SEA application to greenhouse gas (GHG) mitigation policy options in Canadian agriculture is presented. Using a mail-out impact assessment exercise, agricultural producers and nonproducers from across the Canadian prairie region were asked to evaluate five competing GHG mitigation options against 13 valued environmental components (VECs). Data were analyzed using multi-criteria and exploratory analytical techniques. The results show considerable variation in perceived impacts and GHG mitigation policy preferences, indicating that a blanket policy approach to GHG mitigation will create gainers and losers based on soil type and associated cropping and on-farm management practices. It is possible to identify a series of regional greenhouse gas mitigation programs that are robust, socially meaningful, and operationally relevant to both agricultural producers and policy decision makers. The assessment demonstrates the ability of SEA to address, in an operational sense, environmental problems that are characterized by conflicting interests and competing objectives and alternatives. A structured and systematic SEA methodology provides the necessary decision support framework for the consideration of impacts, and allows PPPs to be assessed based on a much broader set of properties, objectives, criteria, and constraints while maintaining rigor and accountability in the assessment process.
The Development of a Checklist to Enhance Methodological Quality in Intervention Programs.
Chacón-Moscoso, Salvador; Sanduvete-Chaves, Susana; Sánchez-Martín, Milagrosa
2016-01-01
The methodological quality of primary studies is an important issue when performing meta-analyses or systematic reviews. Nevertheless, there are no clear criteria for how methodological quality should be analyzed. Controversies emerge when considering the various theoretical and empirical definitions, especially in relation to three interrelated problems: the lack of representativeness, utility, and feasibility. In this article, we (a) systematize and summarize the available literature about methodological quality in primary studies; (b) propose a specific, parsimonious, 12-item checklist to empirically define the methodological quality of primary studies based on a content validity study; and (c) present an inter-coder reliability study for the resulting 12 items. This paper provides a precise and rigorous description of the development of this checklist, highlighting the clearly specified criteria for the inclusion of items and a substantial inter-coder agreement on the different items. Rather than simply proposing another checklist, however, it then argues that the list constitutes an assessment tool with respect to the representativeness, utility, and feasibility of the most frequent methodological quality items in the literature, one that provides practitioners and researchers with clear criteria for choosing items that may be adequate to their needs. We propose individual methodological features as indicators of quality, arguing that these need to be taken into account when designing, implementing, or evaluating an intervention program. This enhances the methodological quality of intervention programs and fosters cumulative knowledge based on meta-analyses of these interventions. Future development of the checklist is discussed.
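Inter-coder reliability of the kind reported above is commonly quantified with a chance-corrected agreement statistic such as Cohen's kappa; a minimal implementation for two coders (the abstract does not specify which statistic the authors used):

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Inter-coder agreement corrected for chance: (po - pe) / (1 - pe)."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    pa, pb = Counter(coder_a), Counter(coder_b)
    expected = sum(pa[c] * pb[c] for c in pa) / (n * n)
    return (observed - expected) / (1 - expected)
```

Kappa of 1 means perfect agreement; values near 0 mean agreement no better than chance.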
NASA Technical Reports Server (NTRS)
1979-01-01
Information to identify viable coal gasification and utilization technologies is presented. Analysis capabilities required to support the design and implementation of coal-based synthetic fuels complexes are identified. The potential market in the Southeast United States for coal-based synthetic fuels is investigated. A requirements analysis to identify the types of modeling and analysis capabilities required to conduct and monitor coal gasification project designs is discussed. Models and methodologies to satisfy these requirements are identified and evaluated, and recommendations are developed. Requirements for the development of technology and data needed to improve gasification feasibility and economics are examined.
NASA Astrophysics Data System (ADS)
Elias, E.; Reyes, J. J.; Steele, C. M.; Rango, A.
2017-12-01
Assessing the vulnerability of agricultural systems to climate variability and change is vital to securing food systems and sustaining rural livelihoods. Farmers, ranchers, and forest landowners rely on science-based, decision-relevant, and localized information to maintain production, ecological viability, and economic returns. This contribution synthesizes a collection of research on the future of agricultural production in the American Southwest (SW). Research was based on a variety of geospatial methodologies and datasets to assess the vulnerability of rangelands and livestock, field crops, specialty crops, and forests in the SW to climate risk and change. This collection emerged from the development of regional vulnerability assessments for agricultural climate risk by the U.S. Department of Agriculture (USDA) Climate Hub Network, established to deliver science-based information and technologies to enable climate-informed decision-making. Authors defined vulnerability differently based on their agricultural system of interest, although each primarily focuses on biophysical systems. We found that no single, uniform framework for vulnerability and climate risk could adequately capture the diversity, variability, and heterogeneity of SW landscapes, peoples, and agriculture. Through the diversity of research questions and methodologies, this collection of articles provides valuable information on various aspects of SW vulnerability. All articles relied on geographic information systems technology, with highly variable levels of complexity. Agricultural articles used National Agricultural Statistics Service data, either as tabular county-level summaries or through the CropScape cropland raster datasets. Most relied on modeled historic and future climate information, but with differing assumptions regarding spatial resolution and temporal framework. 
We assert that it is essential to evaluate climate risk using a variety of complementary methodologies and perspectives. In addition, we found that spatial analysis supports informed adaptation, within and outside the SW United States. The persistence and adaptive capacity of agriculture in the water-limited Southwest serves as an instructive example and may offer solutions to reduce future climate risk.
Wandrey, Craig J.; Schenk, Christopher J.; Klett, Timothy R.; Brownfield, Michael E.; Charpentier, Ronald R.; Cook, Troy A.; Pollastro, Richard M.; Tennyson, Marilyn E.
2013-01-01
The Cretaceous-Tertiary Composite Total Petroleum System coincident Taranaki Basin Assessment Unit was recently assessed for undiscovered technically recoverable oil, natural gas, and natural gas liquids resources as part of the U.S. Geological Survey (USGS) World Energy Resources Project, World Oil and Gas Assessment. Using a geology-based assessment methodology, the USGS estimated mean volumes of 487 million barrels of oil, 9.8 trillion cubic feet of gas, and 408 million barrels of natural gas liquids.
Rapid evidence assessment: increasing the transparency of an emerging methodology.
Varker, Tracey; Forbes, David; Dell, Lisa; Weston, Adele; Merlin, Tracy; Hodson, Stephanie; O'Donnell, Meaghan
2015-12-01
Within the field of evidence-based practice, policy makers, health care professionals and consumers require timely reviews to inform decisions on efficacious health care and treatments. Rapid evidence assessment (REA), also known as rapid review, has emerged in recent years as a literature review methodology that fulfils this need. It highlights what is known in a clinical area to the target audience in a relatively short time frame. This article discusses the lack of transparency and limited critical appraisal that can occur in REA, and goes on to propose general principles for conducting a REA. The approach that we describe is consistent with the principles underlying systematic review methodology, but also makes allowances for the rapid delivery of information as required while utilizing explicit and reproducible methods at each stage. Our method for conducting REA includes: developing an explicit research question in consultation with the end-users; clear definition of the components of the research question; development of a thorough and reproducible search strategy; development of explicit evidence selection criteria; and quality assessments and transparent decisions about the level of information to be obtained from each study. In addition, the REA may also include an assessment of the quality of the total body of evidence. Transparent reporting of REA methodologies will provide greater clarity to end-users about how the information is obtained and about the trade-offs that are made between speed and rigour. © 2015 John Wiley & Sons, Ltd.
Reef Fish Survey Techniques: Assessing the Potential for Standardizing Methodologies.
Caldwell, Zachary R; Zgliczynski, Brian J; Williams, Gareth J; Sandin, Stuart A
2016-01-01
Dramatic changes in populations of fishes living on coral reefs have been documented globally and, in response, the research community has initiated efforts to assess and monitor reef fish assemblages. A variety of visual census techniques are employed; however, results are often incomparable due to differential methodological performance. Although comparability of data may promote improved assessment of fish populations, and thus management of often critically important nearshore fisheries, to date no standardized and agreed-upon survey method has emerged. This study describes the use of methods across the research community and identifies potential drivers of method selection. An online survey was distributed to researchers from academic, governmental, and non-governmental organizations internationally. Although many methods were identified, 89% of survey-based projects employed one of three methods: belt transect, stationary point count, or some variation of the timed swim method. The selection of survey method was independent of the research design (i.e., assessment goal) and region of study, but was related to the researcher's home institution. While some researchers expressed willingness to modify their current survey protocols to more standardized protocols (76%), their willingness decreased when methodologies were tied to long-term datasets spanning five or more years. Willingness to modify current methodologies was also less common among academic researchers than resource managers. By understanding both the current application of methods and the reported motivations for method selection, we hope to focus discussions towards increasing the comparability of quantitative reef fish survey data.
Sorgente, Angela; Manzoni, Gian Mauro; Re, Federica; Simpson, Susan; Perona, Sara; Rossi, Alessandro; Cattivelli, Roberto; Innamorati, Marco; Jackson, Jeffrey B; Castelnuovo, Gianluca
2017-01-01
Background Weight loss is challenging and maintenance of weight loss is problematic. Web-based programs offer good potential for delivery of interventions for weight loss or weight loss maintenance. However, the precise impact of Web-based weight management programs is still unclear. Objective The purpose of this meta-systematic review was to provide a comprehensive summary of the efficacy of Web-based interventions for weight loss and weight loss maintenance. Methods Electronic databases were searched for systematic reviews and meta-analyses that included at least one study investigating the effect of a Web-based intervention on weight loss and/or weight loss maintenance among samples of overweight and/or obese individuals. Twenty identified reviews met the inclusion criteria. The Revised Assessment of Multiple SysTemAtic Reviews (R-AMSTAR) was used to assess methodological quality of reviews. All included reviews were of sufficient methodological quality (R-AMSTAR score ≥22). Key methodological and outcome data were extracted from each review. Results Web-based interventions for both weight loss and weight loss maintenance were more effective than minimal or control conditions. However, when contrasted with comparable non-Web-based interventions, results were less consistent across reviews. Conclusions Overall, the efficacy of weight loss maintenance interventions was stronger than the efficacy of weight loss interventions, but further evidence is needed to more clearly understand the efficacy of both types of Web-based interventions. Trial Registration PROSPERO 2015: CRD42015029377; http://www.crd.york.ac.uk/PROSPERO/display_record.asp?ID=CRD42015029377 (Archived by WebCite at http://www.webcitation.org/6qkSafdCZ) PMID:28652225
An entropy-based method for determining the flow depth distribution in natural channels
NASA Astrophysics Data System (ADS)
Moramarco, Tommaso; Corato, Giovanni; Melone, Florisa; Singh, Vijay P.
2013-08-01
A methodology for determining the bathymetry of river cross-sections during floods from sampled surface flow velocities and existing low-flow hydraulic data is developed. Similar to Chiu (1988), who proposed an entropy-based velocity distribution, the flow depth distribution in a cross-section of a natural channel is derived by entropy maximization. The depth distribution depends on one parameter, whose estimate is straightforward, and on the maximum flow depth. Applied to velocity data sets from five river gage sites, the method reproduced the flow area observed during flow measurements and accurately assessed the corresponding discharge by coupling the flow depth distribution and the entropic relation between mean velocity and maximum velocity. The methodology unfolds a new perspective for flow monitoring by remote sensing, considering that the two main quantities on which the methodology is based, i.e., surface flow velocity and flow depth, might potentially be sensed by new sensors operating aboard an aircraft or satellite.
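For orientation, the entropic relations invoked here are standard in Chiu's framework; as a sketch (using generic notation, not necessarily the authors' exact formulation), the entropy-based velocity distribution and the linear relation between cross-sectional mean and maximum velocity read:

```latex
% Chiu-type entropy velocity distribution
% (\xi: dimensionless position in the cross-section, M: entropy parameter)
u(\xi) = \frac{u_{\max}}{M}\,\ln\!\left[\,1 + \left(e^{M}-1\right)\xi\,\right]

% Entropic relation between mean and maximum velocity
\frac{\bar{u}}{u_{\max}} = \Phi(M) = \frac{e^{M}}{e^{M}-1} - \frac{1}{M}
```

The flow depth distribution of the paper is the analogous entropy-maximization result: a one-parameter curve anchored to the maximum flow depth, which together with the $\bar{u}/u_{\max}$ relation yields discharge from sampled surface velocities.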
A systematic review of patient tracking systems for use in the pediatric emergency department.
Dobson, Ian; Doan, Quynh; Hung, Geoffrey
2013-01-01
Patient safety is of great importance in the pediatric emergency department (PED). The combination of acutely and critically ill patients and high patient volumes creates a need for systems to support physicians in making accurate and timely diagnoses. Electronic patient tracking systems can potentially improve PED safety by reducing overcrowding and enhancing security. The objective was to enhance our understanding of current electronic tracking technologies, how they are implemented in a clinical setting, and their resulting effects on patient care outcomes, including patient safety. Nine databases were searched. Two independent reviewers identified articles that contained reference to patient tracking technologies in pediatrics or emergency medicine. Quantitative studies were assessed independently for methodological strength by two reviewers using an external assessment tool. Of 2292 initial articles, 22 were deemed relevant. Seventeen were qualitative, and the remaining five quantitative articles were assessed as being methodologically weak. Existing patient tracking systems in the ED included: infant monitoring/abduction prevention; barcode identification; radiofrequency identification (RFID)- or infrared (IR)-based patient tracking. Twenty articles supported the use of tracking technology to enhance patient safety or improve efficiency. One article failed to support the use of IR patient sensors due to study design flaws. Support exists for the use of barcode-, IR-, and RFID-based patient tracking systems to improve ED patient safety and efficiency. A lack of methodologically strong studies indicates a need for further evidence-based support for the implementation of patient tracking technology in a clinical or research setting. Copyright © 2013 Elsevier Inc. All rights reserved.
Using the Simulated Patient Methodology to Assess Paracetamol-Related Counselling for Headache
Horvat, Nejc; Koder, Marko; Kos, Mitja
2012-01-01
Objectives Firstly, to assess paracetamol-related counselling. Secondly, to evaluate the patient’s approach as a determinant of counselling and to test the acceptability of the simulated patient method in Slovenian pharmacies. Methods The simulated patient methodology was used in 17 community pharmacies. Three scenarios related to self-medication for headaches were developed and used in all participating pharmacies. Two scenarios were direct product requests: scenario 1: a patient with an uncomplicated short-term headache; scenario 2: a patient with a severe, long-duration headache who takes paracetamol for too long and concurrently drinks alcohol. Scenario 3 was a symptom-based request: a patient asking for medicine for a headache. Pharmacy visits were audio recorded and scored according to predetermined criteria arranged in two categories: counselling content and manner of counselling. The acceptability of the methodology used was evaluated by surveying the participating pharmacists. Results The symptom-based request was scored significantly better (a mean of 2.17 out of a possible 4 points) than the direct product requests (means of 1.64 and 0.67 out of a possible 4 points for scenarios 1 and 2, respectively). The most common information provided was dosage and adverse effects. Only the symptom-based request stimulated spontaneous counselling. No statistically significant differences in the duration of the consultation between the scenarios were found. There were also no significant differences in the quality of counselling between Masters of Pharmacy and Pharmacy Technicians. The acceptability of the simulated patient method was not as high as in other countries. Conclusion The assessment of paracetamol-related counselling demonstrates room for practice improvement. PMID:23300691
On the Assessment of Global Terrestrial Reference Frame Temporal Variations
NASA Astrophysics Data System (ADS)
Ampatzidis, Dimitrios; Koenig, Rolf; Zhu, Shengyuan
2015-04-01
Global Terrestrial Reference Frames (GTRFs), such as the International Terrestrial Reference Frame (ITRF), provide reliable 4-D position information (3-D coordinates and their evolution through time). The given 3-D velocities play a significant role in precise position acquisition and are estimated from long-term coordinate time series from the space-geodetic techniques DORIS, GNSS, SLR, and VLBI. The temporal evolution of GTRFs is directly connected with their internal stability: the more intense and inhomogeneous the velocity field, the less stable the derived TRF. The quality of a GTRF is mainly assessed by comparing it to each individual technique's reference frame. For example, the comparison of GTRFs to an SLR-only TRF gives a sense of ITRF stability with respect to the geocenter and scale and their associated rates, while the comparison of the ITRF to a VLBI-only TRF can be used for scale validation. However, until now there has been no specified methodology for the total assessment (in terms of origin, orientation, and scale) of the temporal evolution of GTRFs and their associated accuracy. We present a new alternative diagnostic tool for the assessment of GTRF temporal evolution based on the well-known time-dependent Helmert transformation (three shift, three rotation, and one scale rate). The advantage of the new methodology relies on the fact that it uses the full velocity field of the TRF, and therefore all points, not just those common to different techniques. It also examines rates of origin, orientation, and scale simultaneously. The methodology is presented and implemented for the two existing GTRFs (the ITRF and the DTRF computed by DGFI), and the results are discussed. The results also allow a direct comparison of each GTRF's dynamic behavior. Furthermore, the correlations of the estimated parameters can provide useful information to the proposed GTRF assessment scheme.
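As a sketch of the kind of computation involved (not the authors' implementation), the seven rates of a time-dependent Helmert transformation can be estimated from a station velocity field by linear least squares; the parameter ordering, units, and function names below are illustrative assumptions:

```python
import numpy as np

def helmert_rate_design(positions):
    """Design matrix of the linearized 7-parameter Helmert rate
    transformation for station positions (n x 3 array).
    Parameter order assumed here: [tx, ty, tz, d, rx, ry, rz] rates,
    i.e. three translation rates, a scale rate, and three small
    rotation rates (standard small-angle convention)."""
    n = positions.shape[0]
    A = np.zeros((3 * n, 7))
    for i, (x, y, z) in enumerate(positions):
        # dvx = tx' + d'*x + ry'*z - rz'*y
        A[3 * i]     = [1, 0, 0, x,  0,  z, -y]
        # dvy = ty' + d'*y - rx'*z + rz'*x
        A[3 * i + 1] = [0, 1, 0, y, -z,  0,  x]
        # dvz = tz' + d'*z + rx'*y - ry'*x
        A[3 * i + 2] = [0, 0, 1, z,  y, -x,  0]
    return A

def estimate_helmert_rates(positions, velocity_diffs):
    """Least-squares estimate of the seven Helmert rates from the
    velocity differences (n x 3) between two frames, using the full
    velocity field rather than only co-located points."""
    A = helmert_rate_design(positions)
    params, *_ = np.linalg.lstsq(A, velocity_diffs.ravel(), rcond=None)
    return params
```

In practice the positions would be well scaled (e.g. in Earth radii) before inversion, and the parameter covariance from the normal equations supplies the correlations mentioned in the abstract.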
ASSESSMENT OF TOXICANT-INDUCED ALTERATIONS IN OVARIAN STEROIDOGENESIS: A METHODOLOGICAL OVERVIEW
RTD-03-035
Jerome M. Goldman, Susan C. Laws and Ralph L. Cooper
Abstract
A variety of methodological approaches have been used for the assessment of tox...
Advances in early fetal loss research: importance for risk assessment.
Sweeney, A M; LaPorte, R E
1991-01-01
The assessment of early fetal losses (EFLs) in relationship to environmental agents offers unique advantages compared to other end points for hazard assessment. There is a high incidence (greater than 20% of all pregnancies end in an EFL), and the interval between exposure and end point is the short duration between conception and event, i.e., approximately 12 weeks. In contrast, cancer, which is the primary end point evaluated in risk assessment models, occurs with much lower frequency, and the latency period is measured in years or decades. EFLs have not been used effectively for risk assessment because most of the events are not detected. Prospective studies provide the only approach whereby it is possible to link exposure to EFLs. Recent methodologic advancements have demonstrated that it is now possible to conduct population-based studies of EFLs. It is likely that EFLs could serve as sentinels to monitor adverse health effects of many potential environmental hazards. The methodology will be demonstrated using lead exposure in utero as an example. PMID:2050056
Ternik, Robert; Liu, Fang; Bartlett, Jeremy A; Khong, Yuet Mei; Thiam Tan, David Cheng; Dixit, Trupti; Wang, Siri; Galella, Elizabeth A; Gao, Zhihui; Klein, Sandra
2018-02-05
The acceptability of pediatric pharmaceutical products to patients and their caregivers can have a profound impact on the resulting therapeutic outcome. However, the existing methodologies and approaches used for acceptability assessments of pediatric products are fragmented, making robust and consistent product evaluations difficult. A pediatric formulation development workshop took place in Washington, DC in June 2016 through the University of Maryland's Center of Excellence in Regulatory Science and Innovation (M-CERSI). A session at the workshop was dedicated to acceptability assessments and focused on two major elements that affect the overall acceptability of oral medicines, namely swallowability and palatability. The session started with presentations to provide an overview of the literature, background and current state of swallowability and palatability assessments. Five parallel breakout discussions followed the presentations on each element, focusing on three overarching themes: risk-based approaches, methodology and product factors. This article reports the key outcomes of the workshop related to swallowability and palatability assessments. Copyright © 2017 Elsevier B.V. All rights reserved.
Hayes, J E; McGreevy, P D; Forbes, S L; Laing, G; Stuetz, R M
2018-08-01
Detection dogs serve a plethora of roles within modern society and are relied upon to identify threats such as explosives and narcotics. Despite their importance, research and training regarding detection dogs have involved considerable ambiguity. This is partly because the assessment of detection dog effectiveness remains entrenched in a traditional, non-scientific understanding. Furthermore, the capabilities of detection dogs depend on their olfactory physiology and training methodologies, both of which are hampered by knowledge gaps. Additionally, the future of detection dogs is strongly influenced by welfare and social implications. Most important, however, is the emergence of progressively inexpensive and efficacious analytical methodologies, including gas chromatography-related techniques, "e-noses", and capillary electrophoresis. These analytical methodologies provide both an alternative and a complement to the detection dog industry; however, the interrelationship between these two detection paradigms requires clarification. These factors, when considering their relative contributions, illustrate a need to address research gaps and formalise the detection dog industry and research process, as well as to take into consideration analytical methodologies and their influence on the future status of detection dogs. This review offers an integrated assessment of the factors involved in order to determine the current and future status of detection dogs. Copyright © 2018 Elsevier B.V. All rights reserved.
Werner, R N; Stockfleth, E; Connolly, S M; Correia, O; Erdmann, R; Foley, P; Gupta, A K; Jacobs, A; Kerl, H; Lim, H W; Martin, G; Paquet, M; Pariser, D M; Rosumeck, S; Röwert-Huber, H-J; Sahota, A; Sangueza, O P; Shumack, S; Sporbeck, B; Swanson, N A; Torezan, L; Nast, A
2015-11-01
Actinic keratosis (AK) is a frequent health condition attributable to chronic exposure to ultraviolet radiation. Several treatment options are available, but evidence-based guidelines have been lacking. The goal of these evidence- and consensus-based guidelines was the development of treatment recommendations appropriate for different subgroups of patients presenting with AK. A secondary aim of these guidelines was the implementation of knowledge relating to the clinical background of AK, including consensus-based recommendations for the histopathological definition, diagnosis and the assessment of patients. The guideline development followed a pre-defined and structured process. For the underlying systematic literature review of interventions for AK, the methodology suggested by the Cochrane Handbook for Systematic Reviews of Interventions, the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement and Grading of Recommendations Assessment, Development and Evaluation (GRADE) methodology was adapted. All recommendations were agreed upon during a consensus conference using a formal consensus methodology. Strength of recommendations was expressed based on the GRADE approach. If expert opinion without external evidence was incorporated into the reasoning for making a certain recommendation, the rationale was provided. The guidelines underwent open public review and approval by the commissioning societies. Various interventions for the treatment of AK have been assessed for their efficacy. The consensus procedure led to a treatment algorithm as shown in the guidelines document. Based on expert consensus, the present guidelines present recommendations on the classification of patients, diagnosis and histopathological definition of AK. Details on the methods and results of the systematic literature review and guideline development process have been published separately.
International guidelines are intended to be adapted to national or regional circumstances (regulatory approval, availability and reimbursement of treatments). © 2015 European Academy of Dermatology and Venereology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morris, Jeremy W.F., E-mail: jmorris@geosyntec.com; Crest, Marion, E-mail: marion.crest@suez-env.com; Barlaz, Morton A., E-mail: barlaz@ncsu.edu
Highlights: Performance-based evaluation of landfill gas control system. Analytical framework to evaluate transition from active to passive gas control. Focus on cover oxidation as an alternative means of passive gas control. Integrates research on long-term landfill behavior with practical guidance. Abstract: Municipal solid waste landfills represent the dominant option for waste disposal in many parts of the world. While some countries have greatly reduced their reliance on landfills, there remain thousands of landfills that require aftercare. The development of cost-effective strategies for landfill aftercare is in society's interest to protect human health and the environment and to prevent the emergence of landfills with exhausted aftercare funding. The Evaluation of Post-Closure Care (EPCC) methodology is a performance-based approach in which landfill performance is assessed in four modules: leachate, gas, groundwater, and final cover. In the methodology, the objective is to evaluate landfill performance to determine when aftercare monitoring and maintenance can be reduced or possibly eliminated. This study presents an improved gas module for the methodology. While the original version of the module focused narrowly on regulatory requirements for control of methane migration, the improved gas module also considers best available control technology for landfill gas in terms of greenhouse gas emissions, air quality, and emissions of odoriferous compounds. The improved module emphasizes the reduction or elimination of fugitive methane by considering the methane oxidation capacity of the cover system. The module also allows for the installation of biologically active covers or other features designed to enhance methane oxidation.
A methane emissions model, CALMIM, was used to assist with an assessment of the methane oxidation capacity of landfill covers.
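The core comparison behind the improved gas module, weighing the modeled residual methane flux against the cover's oxidation capacity before reducing active gas control, can be illustrated with a minimal sketch; the function name, units, and safety factor are hypothetical, and the actual EPCC criteria are considerably more detailed:

```python
def can_transition_to_passive(residual_flux_g_m2_d,
                              oxidation_capacity_g_m2_d,
                              safety_factor=1.5):
    """Screening check: passive gas control (relying on cover methane
    oxidation) is defensible only when the cover's oxidation capacity
    exceeds the modeled residual methane flux by a safety margin.
    All values in g CH4 per m^2 per day; the 1.5 margin is an
    illustrative assumption, not an EPCC-prescribed value."""
    return oxidation_capacity_g_m2_d >= safety_factor * residual_flux_g_m2_d
```

In an assessment, the residual flux would come from an emissions model such as CALMIM and the oxidation capacity from cover-specific measurements or literature values.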
U.S. Heat Demand by Sector for Potential Application of Direct Use Geothermal
Katherine Young
2016-06-23
This dataset includes heat demand for potential application of direct use geothermal broken down into 4 sectors: agricultural, commercial, manufacturing and residential. The data for each sector are organized by county, were disaggregated specifically to assess the market demand for geothermal direct use, and were derived using methodologies customized for each sector based on the availability of data and other sector-specific factors. This dataset also includes a paper containing a full explanation of the methodologies used.
2014-03-03
As part of this research and development effort, a number of products were developed that served to advance the research and provided a testing ground for our methodologies. Participants included U.S. Navy SEALs, brown-water Navy personnel, and Naval Reserve Officer Training Corps midshipmen. The base conducts research and tests of newly…
Methodologies For A Physically Based Rockfall Hazard Assessment
NASA Astrophysics Data System (ADS)
Agliardi, F.; Crosta, G. B.; Guzzetti, F.; Marian, M.
Rockfall hazard assessment is an important land planning tool in alpine areas, where settlements progressively expand across rockfall-prone areas, raising the vulnerability of the elements at risk, the worth of potential losses and the restoration costs. Nevertheless, hazard definition is not simple to achieve in practice, and sound, physically based assessment methodologies are still missing. In addition, the high mobility of rockfalls implies a more difficult hazard definition with respect to other slope instabilities for which runout is minimal. When coping with rockfalls, hazard assessment involves complex definitions for "occurrence probability" and "intensity". The local occurrence probability must derive from the combination of the triggering probability (related to the geomechanical susceptibility of rock masses to fail) and the transit or impact probability at a given location (related to the motion of falling blocks). The intensity (or magnitude) of a rockfall is a complex function of the mass, velocity and fly height of the involved blocks that can be defined in many different ways depending on the adopted physical description and "destructiveness" criterion. This work is an attempt to evaluate rockfall hazard using the results of numerical modelling performed by an original 3D rockfall simulation program. This is based on a kinematic algorithm and allows the spatially distributed simulation of rockfall motions on a three-dimensional topography described by a DTM. The code provides raster maps portraying the maximum frequency of transit, velocity and height of blocks at each model cell, easily combined in a GIS in order to produce physically based rockfall hazard maps.
The results of some three-dimensional rockfall models, performed at both regional and local scale in areas where rockfall-related problems are well known, have been used to assess rockfall hazard by adopting an objective approach based on three-dimensional matrixes providing a positional "hazard index". Different hazard maps have been obtained by combining and classifying variables in different ways. The performance of the different hazard maps has been evaluated on the basis of past rockfall events and compared to the results of existing methodologies. The sensitivity of the hazard index with respect to the included variables and their combinations is discussed in order to constrain assessment criteria that are as objective as possible.
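The raster-combination step described above (classifying per-cell transit frequency, velocity, and block height, then mapping the class triple through a three-dimensional matrix to a positional hazard index) can be sketched as follows; the thresholds and lookup matrix are hypothetical stand-ins, not the authors' calibrated values:

```python
import numpy as np

def classify(raster, thresholds):
    """Assign each cell a class 0..len(thresholds) by binning its
    continuous value against increasing thresholds."""
    return np.digitize(raster, thresholds)

def hazard_index(freq, vel, height, f_thr, v_thr, h_thr, matrix):
    """Per-cell hazard index: classify the three rockfall-model rasters
    (transit frequency, velocity, block height), then look up the class
    triple in a 3-D combination matrix. The matrix encodes the chosen
    'destructiveness' criterion and is an assumption here."""
    fc = classify(freq, f_thr)
    vc = classify(vel, v_thr)
    hc = classify(height, h_thr)
    return matrix[fc, vc, hc]
```

A simple illustrative matrix takes the worst of the three classes per cell, e.g. `matrix[i, j, k] = max(i, j, k)` built with `np.maximum.reduce(np.meshgrid(...))`; a calibrated scheme would instead weight the variables against observed past events.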
Mt-Isa, Shahrul; Hallgreen, Christine E; Wang, Nan; Callréus, Torbjörn; Genov, Georgy; Hirsch, Ian; Hobbiger, Stephen F; Hockley, Kimberley S; Luciani, Davide; Phillips, Lawrence D; Quartey, George; Sarac, Sinan B; Stoeckert, Isabelle; Tzoulaki, Ioanna; Micaleff, Alain; Ashby, Deborah
2014-07-01
The need for formal and structured approaches for benefit-risk assessment of medicines is increasing, as is the complexity of the scientific questions addressed before making decisions on the benefit-risk balance of medicines. We systematically collected, appraised and classified available benefit-risk methodologies to facilitate and inform their future use. A systematic review of publications identified benefit-risk assessment methodologies. Methodologies were appraised on their fundamental principles, features, graphical representations, assessability and accessibility. We created a taxonomy of methodologies to facilitate understanding and choice. We identified 49 methodologies, critically appraised and classified them into four categories: frameworks, metrics, estimation techniques and utility survey techniques. Eight frameworks describe qualitative steps in benefit-risk assessment and eight quantify benefit-risk balance. Nine metric indices include threshold indices to measure either benefit or risk; health indices measure quality-of-life over time; and trade-off indices integrate benefits and risks. Six estimation techniques support benefit-risk modelling and evidence synthesis. Four utility survey techniques elicit robust value preferences from relevant stakeholders to the benefit-risk decisions. Methodologies to help benefit-risk assessments of medicines are diverse and each is associated with different limitations and strengths. There is not a 'one-size-fits-all' method, and a combination of methods may be needed for each benefit-risk assessment. The taxonomy introduced herein may guide choice of adequate methodologies. Finally, we recommend 13 of 49 methodologies for further appraisal for use in the real-life benefit-risk assessment of medicines. Copyright © 2014 John Wiley & Sons, Ltd.
Tidal current energy potential of Nalón river estuary assessment using a high precision flow model
NASA Astrophysics Data System (ADS)
Badano, Nicolás; Valdés, Rodolfo Espina; Álvarez, Eduardo Álvarez
2018-05-01
Obtaining energy from tidal currents at onshore locations is of great interest because of their proximity to points of consumption, and it opens the door to new installations based on hydrokinetic microturbines even in zones of moderate flow speed. In this context, the accuracy of energy predictions based on hydrodynamic models is of paramount importance. This research presents a high-precision methodology based on a multidimensional hydrodynamic model that is used to study the energetic potential of estuaries. Moreover, it is able to estimate the flow variations caused by microturbine installations. The paper also shows the results obtained from the application of the methodology in a study of the Nalón river mouth (Asturias, Spain).
Towards environmental health equity in health impact assessment: innovations and opportunities.
Buse, Chris G; Lai, Valerie; Cornish, Katie; Parkes, Margot W
2018-06-18
As global environmental change drives inequitable health outcomes, novel health equity assessment methodologies are increasingly required. We review literatures on equity-focused HIA to clarify how equity is informing HIA practice, and to surface innovations for assessing health equity in relation to a range of exposures across geographic and temporal scales. A narrative review of the health equity and HIA literatures analysed English articles published between 2003 and 2017 across PubMed, PubMed Central, Biomed Central and Ovid Medline. Title and abstract reviews of 849 search results yielded 89 articles receiving full text review. Considerations of equity in HIA increased over the last 5 years, but equity continues to be conflated with health disparities rather than their root causes (i.e. inequities). Lessons from six literatures to inform future HIA practice are described: HIA for healthy cities, climate change vulnerability assessment, cumulative health risk assessment, intersectionality-based policy analysis, corporate health impact assessment and global health impact assessment. Academic reporting on incorporating equity in HIA practice has been limited. Nonetheless, significant methodological advancements are being made to examine the health equity implications of multiple environmental exposures.
Edwards, Mervyn; Nathanson, Andrew; Carroll, Jolyon; Wisch, Marcus; Zander, Oliver; Lubbe, Nils
2015-01-01
Autonomous emergency braking (AEB) systems fitted to cars for pedestrians have been predicted to offer substantial benefit. On this basis, consumer rating programs, for example the European New Car Assessment Programme (Euro NCAP), are developing rating schemes to encourage fitment of these systems. One of the questions that needs to be answered to do this fully is how the assessment of the speed reduction offered by the AEB is integrated with the current assessment of the passive safety for mitigation of pedestrian injury. Ideally, this should be done on a benefit-related basis. The objective of this research was to develop a benefit-based methodology for assessment of integrated pedestrian protection systems with AEB and passive safety components. The method should include weighting procedures to ensure that it represents injury patterns from accident data and replicates an independently estimated benefit of AEB. A methodology has been developed to calculate the expected societal cost of pedestrian injuries, assuming that all pedestrians in the target population (i.e., pedestrians impacted by the front of a passenger car) are impacted by the car being assessed, taking into account the impact speed reduction offered by the car's AEB (if fitted) and the passive safety protection offered by the car's frontal structure. For rating purposes, the cost for the assessed car is normalized by comparing it to the cost calculated for a reference car. The speed reductions measured in AEB tests are used to determine the speed at which each pedestrian in the target population will be impacted. Injury probabilities for each impact are then calculated using the results from Euro NCAP pedestrian impactor tests and injury risk curves. These injury probabilities are converted into cost using "harm"-type costs for the body regions tested. These costs are weighted and summed.
Weighting factors were determined using accident data from Germany and Great Britain and an independently estimated AEB benefit. German and Great Britain versions of the methodology are available. The methodology was used to assess cars with good, average, and poor Euro NCAP pedestrian ratings, in combination with a current AEB system. The fitment of a hypothetical A-pillar airbag was also investigated. It was found that the decrease in casualty injury cost achieved by fitting an AEB system was approximately equivalent to that achieved by increasing the passive safety rating from poor to average. Because the assessment was influenced strongly by the level of head protection offered in the scuttle and windscreen area, a hypothetical A-pillar airbag showed high potential to reduce overall casualty cost. A benefit-based methodology for assessment of integrated pedestrian protection systems with AEB has been developed and tested. It uses input from AEB tests and Euro NCAP passive safety tests to give an integrated assessment of the system performance, which includes consideration of effects such as the change in head impact location caused by the impact speed reduction given by the AEB.
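The cost-normalization scheme described above can be sketched in a few lines; the logistic risk curve, harm cost, and all parameter values below are illustrative assumptions, not the Euro NCAP body-region risk curves or harm costs:

```python
import math

def injury_probability(speed_kph, alpha=0.08, v50=40.0):
    """Hypothetical logistic injury-risk curve standing in for the
    body-region risk curves derived from impactor test results
    (alpha and v50 are illustrative parameters)."""
    return 1.0 / (1.0 + math.exp(-alpha * (speed_kph - v50)))

def expected_cost(impact_speeds, aeb_reduction_kph, harm_cost, weights):
    """Weighted societal injury cost over the target population:
    each pedestrian's impact speed is reduced by the AEB test result,
    converted to an injury probability, costed, and weighted."""
    total = 0.0
    for v, w in zip(impact_speeds, weights):
        v_post = max(0.0, v - aeb_reduction_kph)
        total += w * harm_cost * injury_probability(v_post)
    return total

def normalized_rating(cost_assessed, cost_reference):
    """Rating of the assessed car relative to a reference car,
    as in the methodology's normalization step."""
    return cost_assessed / cost_reference
```

In the full method the weights come from German and British accident data, and separate risk curves and harm costs are applied per body region and impact location.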