Sample records for evaluation method based

  1. A Multi-level Fuzzy Evaluation Method for Smart Distribution Network Based on Entropy Weight

    NASA Astrophysics Data System (ADS)

    Li, Jianfang; Song, Xiaohui; Gao, Fei; Zhang, Yu

    2017-05-01

    The smart distribution network is considered the future trend of distribution networks. In order to comprehensively evaluate the construction level of smart distribution networks and guide the practice of smart distribution construction, a multi-level fuzzy evaluation method based on entropy weight is proposed. Firstly, focusing on both the conventional characteristics of distribution networks and new characteristics of smart distribution networks such as self-healing and interaction, a multi-level evaluation index system covering power supply capability, power quality, economy, reliability and interaction is established. Then, a combination weighting method based on the Delphi method and the entropy weight method is put forward, which takes into account not only the importance of each evaluation index in the experts' subjective view, but also the objective, differentiating information contained in the index values. Thirdly, a multi-level evaluation method based on fuzzy theory is put forward. Lastly, a case study is conducted on the statistical data of several cities' distribution networks, and the evaluation method is shown to be effective and rational.
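    As an illustration of the entropy-weight step described above, a minimal numpy sketch of the standard entropy-weight calculation follows; the combination with Delphi-based subjective weights and the fuzzy multi-level aggregation are not reproduced, and the matrix values are hypothetical.

    ```python
    import numpy as np

    def entropy_weights(X):
        """Entropy-weight method: X is an (n_samples, n_indices) matrix of
        non-negative, already normalised index values (larger = better)."""
        P = X / X.sum(axis=0, keepdims=True)     # proportion of each sample per index
        n = X.shape[0]
        with np.errstate(divide="ignore", invalid="ignore"):
            logP = np.where(P > 0, np.log(P), 0.0)
        e = -(P * logP).sum(axis=0) / np.log(n)  # entropy of each index
        d = 1.0 - e                              # degree of divergence
        return d / d.sum()                       # entropy weights

    # toy example: 4 networks scored on 3 indices (hypothetical values)
    X = np.array([[0.8, 0.6, 0.9],
                  [0.7, 0.9, 0.4],
                  [0.9, 0.5, 0.7],
                  [0.6, 0.8, 0.8]])
    print(entropy_weights(X))
    ```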

  2. An evaluation method for nanoscale wrinkle

    NASA Astrophysics Data System (ADS)

    Liu, Y. P.; Wang, C. G.; Zhang, L. M.; Tan, H. F.

    2016-06-01

    In this paper, a spectrum-based wrinkling analysis method via two-dimensional Fourier transformation is proposed, aiming to solve the difficulty of nanoscale wrinkle evaluation. It evaluates wrinkle characteristics, including wrinkling wavelength and direction, using only a single wrinkling image. Based on this method, the evaluation results for nanoscale wrinkle characteristics agree with published experimental results within an error of 6%. The method is also verified to be appropriate for macro-scale wrinkle evaluation, without scale limitations. The spectrum-based wrinkling analysis is an effective method for nanoscale evaluation, which contributes to revealing the mechanism of nanoscale wrinkling.
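    A minimal sketch of the spectrum-based idea, assuming a single grayscale wrinkling image and a known pixel size: the dominant peak of the 2-D Fourier power spectrum gives an estimated wrinkle wavelength and direction. The paper's exact peak-selection and calibration steps are not reproduced.

    ```python
    import numpy as np

    def dominant_wrinkle(img, pixel_size):
        """Estimate dominant wrinkle wavelength and direction from a single
        grayscale wrinkling image via the 2-D Fourier power spectrum."""
        f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
        power = np.abs(f) ** 2
        cy, cx = np.array(power.shape) // 2
        power[cy, cx] = 0.0                            # suppress the DC term
        iy, ix = np.unravel_index(np.argmax(power), power.shape)
        fy = (iy - cy) / (img.shape[0] * pixel_size)   # cycles per unit length
        fx = (ix - cx) / (img.shape[1] * pixel_size)
        wavelength = 1.0 / np.hypot(fx, fy)
        direction = np.degrees(np.arctan2(fy, fx))     # normal to the wrinkle crests (convention assumed)
        return wavelength, direction
    ```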

  3. [Reconstituting evaluation methods based on both qualitative and quantitative paradigms].

    PubMed

    Miyata, Hiroaki; Okubo, Suguru; Yoshie, Satoru; Kai, Ichiro

    2011-01-01

    Debate about the relationship between quantitative and qualitative paradigms is often muddled and confusing, and the clutter of terms and arguments has made the concepts obscure and unrecognizable. In this study we conducted a content analysis of evaluation methods used in qualitative healthcare research. We extracted descriptions of four types of evaluation paradigm (validity/credibility, reliability/dependability, objectivity/confirmability, and generalizability/transferability), and classified them into subcategories. In quantitative research, there have been many evaluation methods based on qualitative paradigms, and vice versa. Thus, it may not be useful to treat evaluation methods of the qualitative paradigm as isolated from those of quantitative methods. Choosing practical evaluation methods based on the situation and prior conditions of each study is an important approach for researchers.

  4. Research on Sustainable Development Level Evaluation of Resource-based Cities Based on Shapley Entropy and Choquet Integral

    NASA Astrophysics Data System (ADS)

    Zhao, Hui; Qu, Weilu; Qiu, Weiting

    2018-03-01

    In order to evaluate the sustainable development level of resource-based cities, an evaluation method based on Shapley entropy and the Choquet integral is proposed. First of all, a systematic index system is constructed and the importance of each attribute is calculated based on the maximum Shapley entropy principle; then the Choquet integral is introduced to calculate the comprehensive evaluation value of each city from the bottom up; finally, the method is applied to 10 typical resource-based cities in China. The empirical results show that the evaluation method is scientific and reasonable, and provides theoretical support for the sustainable development path and reform direction of resource-based cities.
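    For reference, a small sketch of the discrete Choquet integral used as the aggregation operator; the fuzzy measure below is hypothetical, and the maximum-Shapley-entropy derivation of attribute importance is not shown.

    ```python
    def choquet_integral(x, mu):
        """Discrete Choquet integral of scores x = {attribute: value in [0, 1]}
        with respect to a fuzzy measure mu, given as a dict mapping frozensets
        of attributes to measure values (mu[empty set] = 0, mu[all] = 1)."""
        items = sorted(x.items(), key=lambda kv: kv[1])   # ascending values
        total, prev = 0.0, 0.0
        remaining = set(x)
        for attr, val in items:
            total += (val - prev) * mu[frozenset(remaining)]
            prev = val
            remaining.remove(attr)
        return total

    # toy example with three indices and a hypothetical fuzzy measure
    mu = {frozenset(): 0.0,
          frozenset({"a"}): 0.3, frozenset({"b"}): 0.4, frozenset({"c"}): 0.2,
          frozenset({"a", "b"}): 0.8, frozenset({"a", "c"}): 0.5,
          frozenset({"b", "c"}): 0.6, frozenset({"a", "b", "c"}): 1.0}
    print(choquet_integral({"a": 0.6, "b": 0.9, "c": 0.4}, mu))   # 0.68
    ```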

  5. Usability Evaluation of a Web-Based Learning System

    ERIC Educational Resources Information Center

    Nguyen, Thao

    2012-01-01

    The paper proposes a contingent, learner-centred usability evaluation method for web-based learning systems and a prototype tool. This is a new usability evaluation method that uses a set of empirically supported usability factors and can be applied effectively with limited resources. During the evaluation process, the method allows for…

  6. Drug exposure in register-based research—An expert-opinion based evaluation of methods

    PubMed Central

    Taipale, Heidi; Koponen, Marjaana; Tolppanen, Anna-Maija; Hartikainen, Sirpa; Ahonen, Riitta; Tiihonen, Jari

    2017-01-01

    Background In register-based pharmacoepidemiological studies, construction of drug exposure periods from drug purchases is a major methodological challenge. Various methods have been applied but their validity is rarely evaluated. Our objective was to conduct an expert-opinion based evaluation of the correctness of drug use periods produced by different methods. Methods Drug use periods were calculated with three fixed methods: time windows, assumption of one Defined Daily Dose (DDD) per day and one tablet per day, and with PRE2DUP, which is based on modelling of individual drug purchasing behavior. Expert-opinion based evaluation was conducted with 200 randomly selected purchase histories of warfarin, bisoprolol, simvastatin, risperidone and mirtazapine in the MEDALZ-2005 cohort (28,093 persons with Alzheimer’s disease). Two experts reviewed purchase histories and judged which methods had joined correct purchases and gave correct duration for each of 1000 drug exposure periods. Results The evaluated correctness of drug use periods was 70–94% for PRE2DUP, and depending on grace periods and time window lengths 0–73% for tablet methods, 0–41% for DDD methods and 0–11% for time window methods. The highest rate of evaluated correct solutions for each method class was observed for 1 tablet per day with 180 days grace period (TAB_1_180, 43–73%), and 1 DDD per day with 180 days grace period (1–41%). Time window methods produced at maximum only 11% correct solutions. The best performing fixed method, TAB_1_180, reached its highest correctness for simvastatin, 73% (95% CI 65–81%), whereas 89% (95% CI 84–94%) of PRE2DUP periods were judged as correct. Conclusions This study shows the inaccuracy of fixed methods and the urgent need for new data-driven methods. In the expert-opinion based evaluation, the lowest error rates were observed with the data-driven method PRE2DUP. PMID:28886089
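    A minimal sketch of one of the fixed methods evaluated above (one tablet per day with a 180-day grace period, cf. TAB_1_180), assuming purchases are given as (date, tablets dispensed) pairs sorted by date. This is not PRE2DUP, which models individual purchasing behaviour; the carry-over rule for overlapping supply is a simplifying assumption.

    ```python
    from datetime import date, timedelta

    def exposure_periods(purchases, grace_days=180):
        """Join purchases into drug use periods under a fixed 'one tablet per
        day' assumption: each purchase of n tablets covers n days; a gap larger
        than the grace period starts a new period."""
        periods = []
        start = end = None
        for day, n_tablets in purchases:
            if start is None or (day - end).days > grace_days:
                if start is not None:
                    periods.append((start, end))
                start = day
                end = day + timedelta(days=int(n_tablets))
            else:
                # leftover supply simply shifts the period end (assumption)
                end = max(end, day) + timedelta(days=int(n_tablets))
        if start is not None:
            periods.append((start, end))
        return periods

    print(exposure_periods([(date(2015, 1, 1), 100),
                            (date(2015, 5, 1), 100),
                            (date(2016, 3, 1), 100)]))
    ```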

  7. Developing a Self-Report-Based Sequential Analysis Method for Educational Technology Systems: A Process-Based Usability Evaluation

    ERIC Educational Resources Information Center

    Lin, Yi-Chun; Hsieh, Ya-Hui; Hou, Huei-Tse

    2015-01-01

    The development of a usability evaluation method for educational systems or applications, called the self-report-based sequential analysis, is described herein. The method aims to extend the current practice by proposing self-report-based sequential analysis as a new usability method, which integrates the advantages of self-report in survey…

  8. Dynamic Chest Image Analysis: Evaluation of Model-Based Pulmonary Perfusion Analysis With Pyramid Images

    DTIC Science & Technology

    2001-10-25

    Dynamic Chest Image Analysis aims to develop model-based computer analysis and visualization methods for showing focal and general abnormalities of lung ventilation and perfusion based on a sequence of digital chest fluoroscopy frames collected with the Dynamic Pulmonary Imaging technique [18, 5, 17, 6]. We have proposed and evaluated a multiresolutional method with an explicit ventilation model based on pyramid images for ventilation analysis. We have further extended the method for ventilation analysis to pulmonary perfusion. This paper focuses on the clinical evaluation of our method for…

  9. Advanced Cardiac Life Support Training by Problem-Based Method: Effect on the Trainee's Skills, Knowledge and Evaluation of Trainers.

    PubMed

    Hosseini, Seyed Kianoosh; Ghalamkari, Marziyeh; Yousefshahi, Fardin; Mireskandari, Seyed Mohammad; Rezaei Hamami, Mohsen

    2013-10-28

    Cardiopulmonary-cerebral resuscitation (CPCR) training is essential for all hospital workers, especially junior residents who might become the manager of the resuscitation team. In our center, the traditional CPCR knowledge training curriculum for junior residents up to 5 years ago was lecture-based and had some faults. This study aimed to evaluate the effect of a problem-based method on residents' CPCR knowledge and skills as well as their evaluation of their CPCR trainers. This study, conducted at Tehran University of Medical Sciences, included 290 first-year residents in 2009-2010, who were trained via a problem-based method (the problem-based group), and 160 first-year residents in 2003-2004, who were trained via a lecture-based method (the lecture-based group). Other educational techniques and facilities were similar. The participants self-evaluated their own CPCR knowledge and skills before and after the workshop and also assessed their trainers' efficacy after the workshop by completing special questionnaires. The problem-based group had higher self-assessment scores of CPCR knowledge and skills after the workshop: the mean scores in the problem-based and lecture-based groups were 32.36 ± 19.23 vs. 22.33 ± 20.35 for knowledge (p = 0.003) and 10.13 ± 7.17 vs. 8.19 ± 8.45 for skills (p = 0.043). The residents' evaluation of their trainers was similar between the two study groups (p = 0.193), with mean scores of 15.90 ± 2.59 and 15.46 ± 2.90 in the problem-based and lecture-based groups, respectively. The problem-based method increased our residents' self-evaluation score of their own CPCR knowledge and skills.

  10. The Usability Evaluation of Fakih Method Based on Technology for Students with Hearing Difficulties: The User's Retrospective

    ERIC Educational Resources Information Center

    Sabdan, Muhammad Sayuti Bin; Alias, Norlidah; Jomhari, Nazean; Jamaludin, Khairul Azhar; DeWitt, Dorothy

    2014-01-01

    The study aims to evaluate the technology-based FAKIH method for teaching al-Quran, based on the users' retrospective. The participants of this study were five students selected on the basis of their hearing difficulties. The study employed the user evaluation framework. Teachers' journals were used to determine the frequency and percentage of…

  11. Research on the Value Evaluation of Used Pure Electric Car Based on the Replacement Cost Method

    NASA Astrophysics Data System (ADS)

    Tan, zhengping; Cai, yun; Wang, yidong; Mao, pan

    2018-03-01

    In this paper, the value evaluation of used pure electric cars is carried out with the replacement cost method, filling a gap in the value evaluation of electric vehicles. Starting from the basic principle of the replacement cost method and the actual cost structure of pure electric cars, a calculation method for the percent-new (condition) rate of second-hand electric cars is put forward; the AHP method is used to construct the weight matrix of a comprehensive adjustment coefficient for the related factors, yielding an improved value evaluation system for second-hand electric cars.
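    A small sketch of the AHP weighting step mentioned above (principal-eigenvector weights from a pairwise comparison matrix); the comparison matrix and the factor set are hypothetical.

    ```python
    import numpy as np

    def ahp_weights(A):
        """Weights from an AHP pairwise-comparison matrix A (positive, reciprocal):
        the principal right eigenvector, normalised to sum to 1. Also returns the
        consistency index CI; dividing CI by Saaty's random index gives CR."""
        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)
        w = np.abs(eigvecs[:, k].real)
        w = w / w.sum()
        n = A.shape[0]
        ci = (eigvals[k].real - n) / (n - 1)
        return w, ci

    # hypothetical 3x3 comparison of factors affecting the percent-new rate
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])
    w, ci = ahp_weights(A)
    print(w, ci)
    ```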

  12. Subgrade evaluation based on theoretical concepts.

    DOT National Transportation Integrated Search

    1971-01-01

    Evaluations of pavement soil subgrades for the purpose of design are mostly based on empirical methods such as the CBR, California soil resistance method, etc. The need for the application of theory and the evaluation of subgrade strength in terms of...

  13. Compatibility of pedigree-based and marker-based relationship matrices for single-step genetic evaluation.

    PubMed

    Christensen, Ole F

    2012-12-03

    Single-step methods provide a coherent and conceptually simple approach to incorporate genomic information into genetic evaluations. An issue with single-step methods is compatibility between the marker-based relationship matrix for genotyped animals and the pedigree-based relationship matrix. Therefore, it is necessary to adjust the marker-based relationship matrix to the pedigree-based relationship matrix. Moreover, with data from routine evaluations, this adjustment should in principle be based on both observed marker genotypes and observed phenotypes, but until now this has been overlooked. In this paper, I propose a new method to address this issue by 1) adjusting the pedigree-based relationship matrix to be compatible with the marker-based relationship matrix instead of the reverse and 2) extending the single-step genetic evaluation using a joint likelihood of observed phenotypes and observed marker genotypes. The performance of this method is then evaluated using two simulated datasets. The method derived here is a single-step method in which the marker-based relationship matrix is constructed assuming all allele frequencies equal to 0.5 and the pedigree-based relationship matrix is constructed using the unusual assumption that animals in the base population are related and inbred with a relationship coefficient γ and an inbreeding coefficient γ / 2. Taken together, this γ parameter and a parameter that scales the marker-based relationship matrix can handle the issue of compatibility between marker-based and pedigree-based relationship matrices. The full log-likelihood function used for parameter inference contains two terms. The first term is the REML-log-likelihood for the phenotypes conditional on the observed marker genotypes, whereas the second term is the log-likelihood for the observed marker genotypes. Analyses of the two simulated datasets with this new method showed that 1) the parameters involved in adjusting marker-based and pedigree-based relationship matrices can depend on both observed phenotypes and observed marker genotypes and 2) a strong association between these two parameters exists. Finally, this method performed at least as well as a method based on adjusting the marker-based relationship matrix. Using the full log-likelihood and adjusting the pedigree-based relationship matrix to be compatible with the marker-based relationship matrix provides a new and interesting approach to handle the issue of compatibility between the two matrices in single-step genetic evaluation.
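    For illustration, a numpy sketch of the marker-based relationship matrix constructed with all allele frequencies set to 0.5, as described above; the γ parameter, the adjusted pedigree-based relationship matrix and the joint likelihood of the method are not reproduced, and the genotypes below are toy values.

    ```python
    import numpy as np

    def grm_half_freq(M):
        """Marker-based (genomic) relationship matrix with all allele frequencies
        set to 0.5. M is an (animals x markers) matrix of genotypes coded 0/1/2."""
        p = 0.5
        Z = M - 2 * p                      # centre genotypes at 2p = 1
        k = 2 * M.shape[1] * p * (1 - p)   # VanRaden-type scaling = m/2
        return Z @ Z.T / k

    # toy example: 3 animals, 5 markers
    M = np.array([[0, 1, 2, 1, 0],
                  [1, 1, 2, 0, 0],
                  [2, 2, 1, 1, 1]])
    print(grm_half_freq(M))
    ```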

  14. Two Methods of Automatic Evaluation of Speech Signal Enhancement Recorded in the Open-Air MRI Environment

    NASA Astrophysics Data System (ADS)

    Přibil, Jiří; Přibilová, Anna; Frollo, Ivan

    2017-12-01

    The paper focuses on two methods for evaluating the success of speech signal enhancement for recordings made in an open-air magnetic resonance imager during phonation for 3D human vocal tract modeling. The first approach enables a comparison based on statistical analysis with ANOVA and hypothesis tests. The second method is based on classification by Gaussian mixture models (GMM). The performed experiments confirm that the proposed ANOVA and GMM classifiers for automatic evaluation of speech quality are functional and produce results fully comparable with the standard evaluation based on the listening test method.
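    A minimal sketch of a two-class GMM evaluation in the spirit of the second method, using scikit-learn; the feature vectors here are synthetic placeholders rather than the spectral features used in the paper.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Hypothetical feature vectors for two classes: enhanced vs. unenhanced speech.
    rng = np.random.default_rng(0)
    X_clean = rng.normal(0.0, 1.0, size=(200, 13))
    X_noisy = rng.normal(1.5, 1.2, size=(200, 13))

    # One GMM per class, as in a simple GMM classifier
    gmm_clean = GaussianMixture(n_components=4, random_state=0).fit(X_clean)
    gmm_noisy = GaussianMixture(n_components=4, random_state=0).fit(X_noisy)

    def classify(frames):
        """Assign the class whose GMM gives the higher average log-likelihood."""
        return "clean" if gmm_clean.score(frames) > gmm_noisy.score(frames) else "noisy"

    print(classify(rng.normal(0.0, 1.0, size=(50, 13))))
    ```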

  15. Examining Teacher Evaluation Validity and Leadership Decision Making within a Standards-Based Evaluation System

    ERIC Educational Resources Information Center

    Kimball, Steven M.; Milanowski, Anthony

    2009-01-01

    Purpose: The article reports on a study of school leader decision making that examined variation in the validity of teacher evaluation ratings in a school district that has implemented a standards-based teacher evaluation system. Research Methods: Applying mixed methods, the study used teacher evaluation ratings and value-added student achievement…

  16. Grey Comprehensive Evaluation of Biomass Power Generation Project Based on Group Judgement

    NASA Astrophysics Data System (ADS)

    Xia, Huicong; Niu, Dongxiao

    2017-06-01

    The comprehensive evaluation of benefits is an important task that needs to be carried out at all stages of a biomass power generation project. This paper proposes an improved grey comprehensive evaluation method based on the triangular whitenization weight function. To improve the objectivity of the weights obtained from the reference-comparison judgement method alone, group judgement is introduced into the weighting process. In the grey comprehensive evaluation, a number of experts are invited to estimate the benefit level of the project, and the basic estimations are optimized based on the minimum variance principle to improve the accuracy of the evaluation result. Taking a biomass power generation project as an example, the grey comprehensive evaluation shows that the benefit level of the project is good. This example demonstrates the feasibility of the grey comprehensive evaluation method based on group judgement for benefit evaluation of biomass power generation projects.

  17. Evaluating the Sharing Stories youth theatre program: an interactive theatre and drama-based strategy for sexual health promotion among multicultural youth.

    PubMed

    Roberts, Meagan; Lobo, Roanna; Sorenson, Anne

    2017-03-01

    Issue addressed Rates of sexually transmissible infections among young people are high, and there is a need for innovative, youth-focused sexual health promotion programs. This study evaluated the effectiveness of the Sharing Stories youth theatre program, which uses interactive theatre and drama-based strategies to engage and educate multicultural youth on sexual health issues. The effectiveness of using drama-based evaluation methods is also discussed. Methods The youth theatre program participants were 18 multicultural youth from South East Asian, African and Middle Eastern backgrounds aged between 14 and 21 years. Four sexual health drama scenarios and a sexual health questionnaire were used to measure changes in knowledge and attitudes. Results Participants reported being confident talking to and supporting their friends with regards to safe sex messages, improved their sexual health knowledge and demonstrated a positive shift in their attitudes towards sexual health. Drama-based evaluation methods were effective in engaging multicultural youth and worked well across the cultures and age groups. Conclusions Theatre and drama-based sexual health promotion strategies are an effective method for up-skilling young people from multicultural backgrounds to be peer educators and good communicators of sexual health information. Drama-based evaluation methods are engaging for young people and an effective way of collecting data from culturally diverse youth. So what? This study recommends incorporating interactive and arts-based strategies into sexual health promotion programs for multicultural youth. It also provides guidance for health promotion practitioners evaluating an arts-based health promotion program using arts-based data collection methods.

  18. Evaluation of Methods for Decladding LWR Fuel for a Pyroprocessing-Based Reprocessing Plant

    DTIC Science & Technology

    1992-10-01

    Oak Ridge National Laboratory report (AD-A275 326; Dist. Category UC-526): Evaluation of Methods for Decladding LWR Fuel for a Pyroprocessing-Based Reprocessing Plant, by W. D. Bond, J. C. Mailen, G. E. … An evaluation of decladding technologies has been performed to identify candidate decladding processes suitable for LWR fuel and compatible with downstream pyroprocesses.

  19. Research on software behavior trust based on hierarchy evaluation

    NASA Astrophysics Data System (ADS)

    Long, Ke; Xu, Haishui

    2017-08-01

    In view of the correlation among software behaviors, we evaluate software behavior credibility at two levels: control flow and data flow. At the control-flow level, a method for evaluating software behavior traces based on a support vector machine (SVM) is proposed. At the data-flow level, a behavioral evidence evaluation based on a fuzzy decision analysis method is put forward.

  20. A KARAOKE System Singing Evaluation Method that More Closely Matches Human Evaluation

    NASA Astrophysics Data System (ADS)

    Takeuchi, Hideyo; Hoguro, Masahiro; Umezaki, Taizo

    KARAOKE is a popular amusement for old and young. Many KARAOKE machines have a singing evaluation function. However, it is often said that the scores given by KARAOKE machines do not match human evaluation. In this paper a KARAOKE scoring method strongly correlated with human evaluation is proposed. The paper proposes a way to evaluate songs based on the distance between the singing pitch and the musical scale, employing a vibrato extraction method based on template matching of the spectrum. The results show that the correlation coefficients between scores given by the proposed system and human evaluation are -0.76 to -0.89.
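    A small sketch of the core idea of scoring by distance between sung pitch and a musical scale, assuming an equal-tempered chromatic scale and an already-extracted F0 track; the template-matching vibrato extraction and the paper's exact scale reference are not reproduced.

    ```python
    import numpy as np

    def cents_off_scale(f0_hz, a4=440.0):
        """Distance (in cents) of each sung pitch from the nearest
        equal-tempered semitone, used as a simple per-frame pitch error."""
        f0_hz = np.asarray(f0_hz, dtype=float)
        semitones = 12.0 * np.log2(f0_hz / a4)        # continuous semitone scale
        return 100.0 * np.abs(semitones - np.round(semitones))

    # hypothetical F0 track (Hz); a larger mean distance implies a lower score
    f0 = [440.0, 452.0, 466.2, 493.9, 523.3]
    print(np.mean(cents_off_scale(f0)))
    ```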

  1. Evaluating the utility of two gestural discomfort evaluation methods

    PubMed Central

    Son, Minseok; Jung, Jaemoon; Park, Woojin

    2017-01-01

    Evaluating physical discomfort of designed gestures is important for creating safe and usable gesture-based interaction systems; yet, gestural discomfort evaluation has not been extensively studied in HCI, and few evaluation methods seem currently available whose utility has been experimentally confirmed. To address this, this study empirically demonstrated the utility of the subjective rating method after a small number of gesture repetitions (a maximum of four repetitions) in evaluating designed gestures in terms of physical discomfort resulting from prolonged, repetitive gesture use. The subjective rating method has been widely used in previous gesture studies but without empirical evidence on its utility. This study also proposed a gesture discomfort evaluation method based on an existing ergonomics posture evaluation tool (Rapid Upper Limb Assessment) and demonstrated its utility in evaluating designed gestures in terms of physical discomfort resulting from prolonged, repetitive gesture use. Rapid Upper Limb Assessment is an ergonomics postural analysis tool that quantifies the work-related musculoskeletal disorders risks for manual tasks, and has been hypothesized to be capable of correctly determining discomfort resulting from prolonged, repetitive gesture use. The two methods were evaluated through comparisons against a baseline method involving discomfort rating after actual prolonged, repetitive gesture use. Correlation analyses indicated that both methods were in good agreement with the baseline. The methods proposed in this study seem useful for predicting discomfort resulting from prolonged, repetitive gesture use, and are expected to help interaction designers create safe and usable gesture-based interaction systems. PMID:28423016

  2. Evaluation as institution: a contractarian argument for needs-based economic evaluation.

    PubMed

    Rogowski, Wolf H

    2018-06-13

    There is a gap between health economic evaluation methods and the value judgments of coverage decision makers, at least in Germany. Measuring preference satisfaction has been claimed to be inappropriate for allocating health care resources, e.g. because it disregards medical need. The existing methods oriented at medical need have been claimed to disregard non-consequentialist fairness concerns. The aim of this article is to propose a new, contractarian argument for justifying needs-based economic evaluation. It is based on consent rather than maximization of some impersonal unit of value to accommodate the fairness concerns. This conceptual paper draws upon contractarian ethics and constitution economics to show how economic evaluation can be viewed as an institution to overcome societal conflicts in the allocation of scarce health care resources. For this, the problem of allocating scarce health care resources in a society is reconstructed as a social dilemma. Both disadvantaged patients and affluent healthy individuals can be argued to share interests in a societal contract to provide technologies which ameliorate medical need, based on progressive funding. The use of needs-based economic evaluation methods for coverage determination can be interpreted as institutions for conflict resolution as far as they use consented criteria to ensure the social contract's sustainability and avoid implicit rationing or unaffordable contribution rates. This justifies the use of needs-based evaluation methods by Pareto-superiority and consent (rather than by some needs-based value function per se). The view of economic evaluation presented here may help account for fairness concerns in the further development of evaluation methods. This is because it directs the attention away from determining some unit of value to be maximized towards determining those persons who are most likely not to consent and meeting their concerns. Following this direction in methods development is likely to increase the acceptability of health economic evaluation by decision makers.

  3. Image quality evaluation of full reference algorithm

    NASA Astrophysics Data System (ADS)

    He, Nannan; Xie, Kai; Li, Tong; Ye, Yushan

    2018-03-01

    Image quality evaluation is a classic research topic; the goal is to design algorithms whose evaluation values are consistent with subjective human perception. This paper mainly introduces several typical full-reference objective evaluation methods: Mean Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR), the Structural Similarity Image Metric (SSIM) and Feature Similarity (FSIM). The different evaluation methods are tested in Matlab, and their advantages and disadvantages are obtained by analysing and comparing them. MSE and PSNR are simple, but they do not take human visual system (HVS) characteristics into account, so their evaluation results are not ideal. SSIM correlates well with subjective quality and is simple to compute because it brings the human visual effect into image quality evaluation; however, the SSIM method rests on a hypothesis, so its evaluation results are limited. The FSIM method can be used for both grayscale and color images, and its results are better. Experimental results show that the image quality evaluation algorithm based on FSIM is more accurate.
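    For reference, a small numpy sketch of the two simplest full-reference metrics discussed above (MSE and PSNR); SSIM and FSIM involve structural and feature terms and are not reproduced here.

    ```python
    import numpy as np

    def mse(ref, test):
        """Mean squared error between a reference and a test image."""
        ref = ref.astype(np.float64)
        test = test.astype(np.float64)
        return np.mean((ref - test) ** 2)

    def psnr(ref, test, max_val=255.0):
        """Peak signal-to-noise ratio in dB (full-reference metric)."""
        m = mse(ref, test)
        return float("inf") if m == 0 else 10.0 * np.log10(max_val ** 2 / m)

    # toy example: reference image vs. a noisy copy
    rng = np.random.default_rng(0)
    ref = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
    noisy = np.clip(ref + rng.normal(0, 5, size=ref.shape), 0, 255).astype(np.uint8)
    print(mse(ref, noisy), psnr(ref, noisy))
    # SSIM is available, e.g., as skimage.metrics.structural_similarity in scikit-image.
    ```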

  4. Effectiveness Evaluation Method of Anti-Radiation Missile against Active Decoy

    NASA Astrophysics Data System (ADS)

    Tang, Junyao; Cao, Fei; Li, Sijia

    2017-06-01

    In the problem of an anti-radiation missile (ARM) engaging an active decoy, whether the ARM can effectively kill the target radiation source in the presence of the decoy is an important index for evaluating the operational effectiveness of the missile. Aiming at this problem, this paper proposes a method to evaluate the effectiveness of an ARM against an active decoy. Based on the calculation of the ARM's ability to resist the decoy, the paper proposes an evaluation of decoy resistance based on hitting the key components of the radar. The method has the advantages of being scientific and reliable.

  5. Evaluating the Good Ontology Design Guideline (GoodOD) with the Ontology Quality Requirements and Evaluation Method and Metrics (OQuaRE)

    PubMed Central

    Duque-Ramos, Astrid; Boeker, Martin; Jansen, Ludger; Schulz, Stefan; Iniesta, Miguela; Fernández-Breis, Jesualdo Tomás

    2014-01-01

    Objective To (1) evaluate the GoodOD guideline for ontology development by applying the OQuaRE evaluation method and metrics to the ontology artefacts that were produced by students in a randomized controlled trial, and (2) informally compare the OQuaRE evaluation method with gold standard and competency questions based evaluation methods, respectively. Background In the last decades many methods for ontology construction and ontology evaluation have been proposed. However, none of them has become a standard and there is no empirical evidence of comparative evaluation of such methods. This paper brings together GoodOD and OQuaRE. GoodOD is a guideline for developing robust ontologies. It was previously evaluated in a randomized controlled trial employing metrics based on gold standard ontologies and competency questions as outcome parameters. OQuaRE is a method for ontology quality evaluation which adapts the SQuaRE standard for software product quality to ontologies and has been successfully used for evaluating the quality of ontologies. Methods In this paper, we evaluate the effect of training in ontology construction based on the GoodOD guideline within the OQuaRE quality evaluation framework and compare the results with those obtained for the previous studies based on the same data. Results Our results show a significant effect of the GoodOD training over developed ontologies by topics: (a) a highly significant effect was detected in three topics from the analysis of the ontologies of untrained and trained students; (b) both positive and negative training effects with respect to the gold standard were found for five topics. Conclusion The GoodOD guideline had a significant effect over the quality of the ontologies developed. Our results show that GoodOD ontologies can be effectively evaluated using OQuaRE and that OQuaRE is able to provide additional useful information about the quality of the GoodOD ontologies. PMID:25148262

  6. Evaluation of Deep Learning Based Stereo Matching Methods: from Ground to Aerial Images

    NASA Astrophysics Data System (ADS)

    Liu, J.; Ji, S.; Zhang, C.; Qin, Z.

    2018-05-01

    Dense stereo matching has been extensively studied in photogrammetry and computer vision. In this paper we evaluate the application of deep learning based stereo methods, which emerged in 2016 and spread rapidly, to aerial stereo pairs rather than the ground images commonly used in the computer vision community. Two popular methods are evaluated. One learns the matching cost with a convolutional neural network (known as MC-CNN); the other produces a disparity map in an end-to-end manner by utilizing both geometry and context (known as GC-Net). First, we evaluate the performance of the deep learning based methods on aerial stereo images by direct model reuse: models pre-trained on the KITTI 2012, KITTI 2015 and Driving datasets are applied directly to three aerial datasets. We also give the results of direct training on the target aerial datasets. Second, the deep learning based methods are compared to the classic stereo matching method, Semi-Global Matching (SGM), and a photogrammetric software package, SURE, on the same aerial datasets. Third, a transfer learning strategy is introduced to aerial image matching, based on the assumption that a few target samples are available for model fine-tuning. The experiments showed that the conventional methods and the deep learning based methods performed similarly, and that the latter have greater potential to be explored.
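    The deep methods (MC-CNN, GC-Net) require trained network weights, so as a runnable point of reference here is the classic SGM baseline mentioned above, via OpenCV's StereoSGBM; the file names and parameter values are placeholders, not the paper's configuration.

    ```python
    import cv2

    # Rectified stereo pair (placeholder file names)
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    matcher = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=128,      # must be divisible by 16
        blockSize=5,
        P1=8 * 5 * 5,            # smoothness penalty for small disparity changes
        P2=32 * 5 * 5,           # smoothness penalty for large disparity changes
        uniquenessRatio=10,
    )
    # compute() returns fixed-point disparities scaled by 16
    disparity = matcher.compute(left, right).astype("float32") / 16.0
    print(disparity.min(), disparity.max())
    ```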

  7. Evaluation of a QuECHERS-like extraction approach for the determination of PBDEs in mussels by immuno-assay-based screening methods

    USDA-ARS?s Scientific Manuscript database

    A sample preparation method was evaluated for the determination of polybrominated diphenyl ethers (PBDEs) in mussel samples, by using colorimetric and electrochemical immunoassay-based screening methods. A simple sample preparation in conjunction with a rapid screening method possesses the desired c...

  8. Performance of human fecal anaerobe-associated PCR-based assays in a multi-laboratory method evaluation study

    EPA Science Inventory

    A number of PCR-based methods for detecting human fecal material in environmental waters have been developed over the past decade, but these methods have rarely received independent comparative testing. Here, we evaluated ten of these methods (BacH, BacHum-UCD, B. thetaiotaomic...

  9. A new evaluation tool to obtain practice-based evidence of worksite health promotion programs.

    PubMed

    Dunet, Diane O; Sparling, Phillip B; Hersey, James; Williams-Piehota, Pamela; Hill, Mary D; Hanssen, Carl; Lawrenz, Frances; Reyes, Michele

    2008-10-01

    The Centers for Disease Control and Prevention developed the Swift Worksite Assessment and Translation (SWAT) evaluation method to identify promising practices in worksite health promotion programs. The new method complements research studies and evaluation studies of evidence-based practices that promote healthy weight in working adults. We used nationally recognized program evaluation standards of utility, feasibility, accuracy, and propriety as the foundation for our 5-step method: 1) site identification and selection, 2) site visit, 3) post-visit evaluation of promising practices, 4) evaluation capacity building, and 5) translation and dissemination. An independent, outside evaluation team conducted process and summative evaluations of SWAT to determine its efficacy in providing accurate, useful information and its compliance with evaluation standards. The SWAT evaluation approach is feasible in small and medium-sized workplace settings. The independent evaluation team judged SWAT favorably as an evaluation method, noting among its strengths its systematic and detailed procedures and service orientation. Experts in worksite health promotion evaluation concluded that the data obtained by using this evaluation method were sufficient to allow them to make judgments about promising practices. SWAT is a useful, business-friendly approach to systematic, yet rapid, evaluation that comports with program evaluation standards. The method provides a new tool to obtain practice-based evidence of worksite health promotion programs that help prevent obesity and, more broadly, may advance public health goals for chronic disease prevention and health promotion.

  10. The research on user behavior evaluation method for network state

    NASA Astrophysics Data System (ADS)

    Zhang, Chengyuan; Xu, Haishui

    2017-08-01

    Based on the correlation between user behavior and the network running state, this paper proposes a method of user behavior evaluation based on network state. Drawing on analysis and evaluation methods from other fields, we introduce data mining theory and tools. Using the network status information provided by the trusted network view, the user behavior data and the network state data are analysed. Finally, we construct user behavior evaluation indices and weights, on the basis of which the influence of specific behaviors of different users on changes in the network running state can be quantified accurately, providing a basis for user behavior control decisions.

  11. QoE collaborative evaluation method based on fuzzy clustering heuristic algorithm.

    PubMed

    Bao, Ying; Lei, Weimin; Zhang, Wei; Zhan, Yuzhuo

    2016-01-01

    At present, realizing or improving the quality of experience (QoE) is a major goal for network media transmission services, and QoE evaluation is the basis for adjusting the transmission control mechanism. Therefore, a QoE collaborative evaluation method based on a fuzzy clustering heuristic algorithm is proposed in this paper, concentrating on service score calculation at the server side. The server side collects network transmission quality of service (QoS) parameters, node location data, and user expectation values from client feedback information. It then manages the historical data in a database through a "big data" processing mode and predicts user scores according to heuristic rules. On this basis, it completes a fuzzy clustering analysis and generates the service QoE score and management messages, which are finally fed back to clients. Besides, this paper discusses service evaluation generative rules, heuristic evaluation rules and fuzzy clustering analysis methods, and presents the service-based QoE evaluation process. Simulation experiments have verified the effectiveness of the QoE collaborative evaluation method based on fuzzy clustering heuristic rules.
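    A minimal numpy sketch of the fuzzy clustering kernel (standard fuzzy c-means) that such a method could build on; the heuristic score-prediction rules and the server-side pipeline described in the paper are not reproduced, and the feature vectors are hypothetical.

    ```python
    import numpy as np

    def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, seed=0):
        """Minimal fuzzy c-means: returns cluster centres and the fuzzy
        membership matrix U of shape (n_samples, c)."""
        rng = np.random.default_rng(seed)
        U = rng.random((X.shape[0], c))
        U /= U.sum(axis=1, keepdims=True)
        for _ in range(n_iter):
            Um = U ** m
            centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
            d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
            U = 1.0 / (d ** (2 / (m - 1)) * np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
        return centres, U

    # hypothetical per-session features: [QoS score, user expectation, delay]
    X = np.array([[0.90, 0.80, 0.10], [0.85, 0.90, 0.15],
                  [0.40, 0.50, 0.60], [0.35, 0.45, 0.70],
                  [0.10, 0.20, 0.95]])
    centres, U = fuzzy_c_means(X, c=3)
    print(np.round(U, 2))
    ```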

  12. Usability Evaluation Methods for Gesture-Based Games: A Systematic Review

    PubMed Central

    Simor, Fernando Winckler; Brum, Manoela Rogofski; Schmidt, Jaison Dairon Ebertz; De Marchi, Ana Carolina Bertoletti

    2016-01-01

    Background Gestural interaction systems are increasingly being used, mainly in games, expanding the idea of entertainment and providing experiences with the purpose of promoting better physical and/or mental health. Therefore, it is necessary to establish mechanisms for evaluating the usability of these interfaces, which make gestures the basis of interaction, to achieve a balance between functionality and ease of use. Objective This study aims to present the results of a systematic review focused on usability evaluation methods for gesture-based games, considering devices with motion-sensing capability. We considered the usability methods used, the common interface issues, and the strategies adopted to build good gesture-based games. Methods The research was centered on four electronic databases: IEEE, Association for Computing Machinery (ACM), Springer, and Science Direct from September 4 to 21, 2015. Within 1427 studies evaluated, 10 matched the eligibility criteria. As a requirement, we considered studies about gesture-based games, Kinect and/or Wii as devices, and the use of a usability method to evaluate the user interface. Results In the 10 studies found, there was no standardization in the methods because they considered diverse analysis variables. Heterogeneously, authors used different instruments to evaluate gesture-based interfaces and no default approach was proposed. Questionnaires were the most used instruments (70%, 7/10), followed by interviews (30%, 3/10), and observation and video recording (20%, 2/10). Moreover, 60% (6/10) of the studies used gesture-based serious games to evaluate the performance of elderly participants in rehabilitation tasks. This highlights the need for creating an evaluation protocol for older adults to provide a user-friendly interface according to the user’s age and limitations. Conclusions Through this study, we conclude this field is in need of a usability evaluation method for serious games, especially games for older adults, and that the definition of a methodology and a test protocol may offer the user more comfort, welfare, and confidence. PMID:27702737

  13. Intelligent Evaluation Method of Tank Bottom Corrosion Status Based on Improved BP Artificial Neural Network

    NASA Astrophysics Data System (ADS)

    Qiu, Feng; Dai, Guang; Zhang, Ying

    According to the acoustic emission information and the appearance inspection information from online testing of tank bottoms, the external factors associated with tank bottom corrosion status are identified. Applying an artificial neural network intelligent evaluation method, three tank bottom corrosion status evaluation models are established, based on appearance inspection information, acoustic emission information, and combined online testing information, respectively. Compared with the results of acoustic emission online testing on the evaluation of test samples, the accuracy of the evaluation model based on online testing information is 94%. The evaluation model can evaluate tank bottom corrosion accurately and realize intelligent evaluation of acoustic emission online testing of tank bottoms.
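    A small scikit-learn sketch of a plain back-propagation (MLP) classifier of the kind described above; the paper's specific BP improvements are not reproduced, and the training data below are synthetic placeholders, so the printed accuracy is meaningless except as a demonstration of the workflow.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    # Hypothetical data: rows are tanks, columns are normalised online-testing
    # features (acoustic emission counts/energy plus appearance indicators);
    # labels are corrosion grades 0-3.
    rng = np.random.default_rng(0)
    X = rng.random((200, 8))
    y = rng.integers(0, 4, size=200)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

    # A plain back-propagation network (one hidden layer)
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    clf.fit(X_tr, y_tr)
    print("hold-out accuracy:", clf.score(X_te, y_te))
    ```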

  14. Evaluation method based on the image correlation for laser jamming image

    NASA Astrophysics Data System (ADS)

    Che, Jinxi; Li, Zhongmin; Gao, Bo

    2013-09-01

    The jamming effectiveness evaluation of infrared imaging systems is an important part of electro-optical countermeasures. Infrared imaging devices are widely used in the military for searching, tracking, guidance and many other tasks. At the same time, with the continuous development of laser technology, research on laser interference and damage effects has advanced, and lasers have been used to disturb infrared imaging devices. Therefore, evaluating the effect of laser jamming on infrared imaging systems has become a meaningful problem to be solved. The information that an infrared imaging system ultimately presents to the user is an image, so the evaluation of the jamming effect can be made from the standpoint of image quality assessment. An image contains two kinds of information, light amplitude and light phase, so image correlation can accurately capture the difference between the original image and the disturbed image. In this paper, the digital image correlation method, the image quality assessment method based on the Fourier transform, the image quality estimation method based on error statistics, and the evaluation method based on peak signal-to-noise ratio are analysed, along with their advantages and disadvantages. Moreover, the infrared jamming images from experiments in which a thermal infrared imager was interfered with by a laser were analysed using these methods. The results show that the methods can well reflect the laser jamming effects on the infrared imaging system. Furthermore, there is good consistency between the evaluation results of these methods and the results of subjective visual evaluation, with good repeatability and convenient quantitative analysis. The feasibility of the methods for evaluating the jamming effect was proved. This provides a useful reference for the study and development of electro-optical countermeasure equipment and effectiveness evaluation.
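    A minimal sketch of an image-correlation measure in the sense used above (zero-mean normalised cross-correlation between the original and the disturbed image); the paper's thresholds for judging jamming effectiveness are not specified here, and the "laser spot" below is a toy perturbation.

    ```python
    import numpy as np

    def image_correlation(original, disturbed):
        """Zero-mean normalised cross-correlation; 1.0 means identical images,
        values near 0 indicate strong jamming."""
        a = original.astype(np.float64).ravel()
        b = disturbed.astype(np.float64).ravel()
        a -= a.mean()
        b -= b.mean()
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    # toy example: the stronger the added saturated region, the lower the correlation
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(128, 128)).astype(float)
    jammed = img.copy()
    jammed[40:90, 40:90] = 255.0          # saturated region from the laser
    print(image_correlation(img, img), image_correlation(img, jammed))
    ```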

  15. Comparison of heuristic and cognitive walkthrough usability evaluation methods for evaluating health information systems.

    PubMed

    Khajouei, Reza; Zahiri Esfahani, Misagh; Jahani, Yunes

    2017-04-01

    There are several user-based and expert-based usability evaluation methods that may perform differently according to the context in which they are used. The objective of this study was to compare 2 expert-based methods, heuristic evaluation (HE) and cognitive walkthrough (CW), for evaluating usability of health care information systems. Five evaluators independently evaluated a medical office management system using HE and CW. We compared the 2 methods in terms of the number of identified usability problems, their severity, and the coverage of each method. In total, 156 problems were identified using the 2 methods. HE identified a significantly higher number of problems related to the "satisfaction" attribute (P = .002). The number of problems identified using CW concerning the "learnability" attribute was significantly higher than those identified using HE (P = .005). There was no significant difference between the number of problems identified by HE, based on different usability attributes (P = .232). Results of CW showed a significant difference between the number of problems related to usability attributes (P < .0001). The average severity of problems identified using CW was significantly higher than that of HE (P < .0001). This study showed that HE and CW do not differ significantly in terms of the number of usability problems identified, but they differ based on the severity of problems and the coverage of some usability attributes. The results suggest that CW would be the preferred method for evaluating systems intended for novice users and HE for users who have experience with similar systems. However, more studies are needed to support this finding.

  16. Quantifying the quality of medical x-ray images: An evaluation based on normal anatomy for lumbar spine and chest radiography

    NASA Astrophysics Data System (ADS)

    Tingberg, Anders Martin

    Optimisation in diagnostic radiology requires accurate methods for determination of patient absorbed dose and clinical image quality. Simple methods for evaluation of clinical image quality are at present scarce and this project aims at developing such methods. Two methods are used and further developed; fulfillment of image criteria (IC) and visual grading analysis (VGA). Clinical image quality descriptors are defined based on these two methods: image criteria score (ICS) and visual grading analysis score (VGAS), respectively. For both methods the basis is the Image Criteria of the "European Guidelines on Quality Criteria for Diagnostic Radiographic Images". Both methods have proved to be useful for evaluation of clinical image quality. The two methods complement each other: IC is an absolute method, which means that the quality of images of different patients and produced with different radiographic techniques can be compared with each other. The separating power of IC is, however, weaker than that of VGA. VGA is the best method for comparing images produced with different radiographic techniques and has strong separating power, but the results are relative, since the quality of an image is compared to the quality of a reference image. The usefulness of the two methods has been verified by comparing the results from both of them with results from a generally accepted method for evaluation of clinical image quality, receiver operating characteristics (ROC). The results of the comparison between the two methods based on visibility of anatomical structures and the method based on detection of pathological structures (free-response forced error) indicate that the former two methods can be used for evaluation of clinical image quality as efficiently as the method based on ROC. More studies are, however, needed for us to be able to draw a general conclusion, including studies of other organs, using other radiographic techniques, etc. The results of the experimental evaluation of clinical image quality are compared with physical quantities calculated with a theoretical model based on a voxel phantom, and correlations are found. The results demonstrate that the computer model can be a useful tool in planning further experimental studies.

  17. Did you have an impact? A theory-based method for planning and evaluating knowledge-transfer and exchange activities in occupational health and safety.

    PubMed

    Kramer, Desré M; Wells, Richard P; Carlan, Nicolette; Aversa, Theresa; Bigelow, Philip P; Dixon, Shane M; McMillan, Keith

    2013-01-01

    Few evaluation tools are available to assess knowledge-transfer and exchange interventions. The objective of this paper is to develop and demonstrate a theory-based knowledge-transfer and exchange method of evaluation (KEME) that synthesizes 3 theoretical frameworks: the Promoting Action on Research Implementation in Health Services (PARiHS) model, the transtheoretical model of change, and a model of knowledge use. It proposes a new term, keme, to mean a unit of evidence-based transferable knowledge. The usefulness of the evaluation method is demonstrated with 4 occupational health and safety knowledge transfer and exchange (KTE) implementation case studies that are based upon the analysis of over 50 pre-existing interviews. The usefulness of the evaluation model has enabled us to better understand stakeholder feedback, frame our interpretation, and perform a more comprehensive evaluation of the knowledge use outcomes of our KTE efforts.

  18. Evaluation of Alternative Altitude Scaling Methods for Thermal Ice Protection System in NASA Icing Research Tunnel

    NASA Technical Reports Server (NTRS)

    Lee, Sam; Addy, Harold E. Jr.; Broeren, Andy P.; Orchard, David M.

    2017-01-01

    A test was conducted at the NASA Icing Research Tunnel to evaluate altitude scaling methods for thermal ice protection systems. Two new scaling methods based on the Weber number were compared against a method based on the Reynolds number. The results generally agreed with the previous set of tests conducted in the NRCC Altitude Icing Wind Tunnel, where the three scaling methods were also tested and compared along with reference (altitude) icing conditions. In those tests, the Weber number-based scaling methods yielded results much closer to those observed at the reference icing conditions than the Reynolds number-based scaling did. The test in the NASA IRT used a much larger, asymmetric airfoil with an ice protection system that more closely resembled designs used in commercial aircraft. Following the trends observed during the AIWT tests, the Weber number-based scaling methods resulted in smaller runback ice than the Reynolds number-based scaling, and the ice formed farther upstream. The results show that the new Weber number-based scaling methods, particularly the Weber number with water loading scaling, continue to show promise for ice protection system development and evaluation in atmospheric icing tunnels.
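    For orientation, a small sketch of the two similarity parameters at issue; the property values, the droplet-diameter length scale and the exact Weber-number convention used by the NASA/NRCC scaling methods are illustrative assumptions only, not the definitions from the test report.

    ```python
    # Illustrative Weber and Reynolds numbers for icing scaling (conventions assumed)
    def weber_number(rho_water, velocity, length, sigma_water):
        """We = rho * V^2 * L / sigma (water density, airspeed, droplet or film
        length scale, water surface tension)."""
        return rho_water * velocity**2 * length / sigma_water

    def reynolds_number(rho_air, velocity, length, mu_air):
        """Re = rho * V * L / mu (air density, airspeed, length scale, air viscosity)."""
        return rho_air * velocity * length / mu_air

    V = 77.0      # m/s test-section speed (hypothetical)
    d = 20e-6     # m, median volumetric droplet diameter (hypothetical)
    print(weber_number(1000.0, V, d, 0.072))
    print(reynolds_number(1.225, V, d, 1.81e-5))
    ```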

  19. Facilitating children's views of therapy: an analysis of the use of play-based techniques to evaluate clinical practice.

    PubMed

    Jäger, Jessica

    2013-07-01

    This article reports on a follow-up study exploring the use of play-based evaluation methods to facilitate children's views of therapy. The development and piloting of these techniques, with 12 children in the author's own practice, was previously reported in this journal. It was argued that play-based evaluation methods reduce the power imbalance inherent in adult researcher/interviewer-child relationships and provide children with meaningful ways to share their views. In this article, follow-up research into play-based evaluations with 20 children and 7 different play therapists is drawn upon to explore in greater depth the strengths and weaknesses of these techniques. The study shows that play-based evaluation techniques are important and flexible methods for facilitating children's views of child therapy. It is argued that those play therapists who incorporate their therapeutic skills effectively, maintain flexibility and sensitively attune to the child during the evaluation session, enable the child to explore their views most fully.

  20. Evaluating the Good Ontology Design Guideline (GoodOD) with the ontology quality requirements and evaluation method and metrics (OQuaRE).

    PubMed

    Duque-Ramos, Astrid; Boeker, Martin; Jansen, Ludger; Schulz, Stefan; Iniesta, Miguela; Fernández-Breis, Jesualdo Tomás

    2014-01-01

    To (1) evaluate the GoodOD guideline for ontology development by applying the OQuaRE evaluation method and metrics to the ontology artefacts that were produced by students in a randomized controlled trial, and (2) informally compare the OQuaRE evaluation method with gold standard and competency questions based evaluation methods, respectively. In the last decades many methods for ontology construction and ontology evaluation have been proposed. However, none of them has become a standard and there is no empirical evidence of comparative evaluation of such methods. This paper brings together GoodOD and OQuaRE. GoodOD is a guideline for developing robust ontologies. It was previously evaluated in a randomized controlled trial employing metrics based on gold standard ontologies and competency questions as outcome parameters. OQuaRE is a method for ontology quality evaluation which adapts the SQuaRE standard for software product quality to ontologies and has been successfully used for evaluating the quality of ontologies. In this paper, we evaluate the effect of training in ontology construction based on the GoodOD guideline within the OQuaRE quality evaluation framework and compare the results with those obtained for the previous studies based on the same data. Our results show a significant effect of the GoodOD training over developed ontologies by topics: (a) a highly significant effect was detected in three topics from the analysis of the ontologies of untrained and trained students; (b) both positive and negative training effects with respect to the gold standard were found for five topics. The GoodOD guideline had a significant effect over the quality of the ontologies developed. Our results show that GoodOD ontologies can be effectively evaluated using OQuaRE and that OQuaRE is able to provide additional useful information about the quality of the GoodOD ontologies.

  1. Evaluation of Alternative Altitude Scaling Methods for Thermal Ice Protection System in NASA Icing Research Tunnel

    NASA Technical Reports Server (NTRS)

    Lee, Sam; Addy, Harold; Broeren, Andy P.; Orchard, David M.

    2017-01-01

    A test was conducted at the NASA Icing Research Tunnel to evaluate altitude scaling methods for thermal ice protection systems. Two scaling methods based on the Weber number were compared against a method based on the Reynolds number. The results generally agreed with the previous set of tests conducted in the NRCC Altitude Icing Wind Tunnel. The Weber number-based scaling methods resulted in smaller runback ice mass than the Reynolds number-based scaling method. The ice accretions from the Weber number-based scaling methods also formed farther upstream. However, there were large differences in the accreted ice mass between the two Weber number-based scaling methods. The difference became greater when the speed was increased. This indicates that there may be some Reynolds number effects that are not fully accounted for, which warrants further study.

  2. Evaluation Framework for NASA's Educational Outreach Programs

    NASA Technical Reports Server (NTRS)

    Berg, Rick; Booker, Angela; Linde, Charlotte; Preston, Connie

    1999-01-01

    The objective of the proposed work is to develop an evaluation framework for NASA's educational outreach efforts. We focus on public (rather than technical or scientific) dissemination efforts, specifically on Internet-based outreach sites for children. The outcome of this work is to propose both methods and criteria for evaluation, which would enable NASA to do a more analytic evaluation of its outreach efforts. The proposed framework is based on IRL's ethnographic and video-based observational methods, which allow us to analyze how these sites are actually used.

  3. Evaluation Methods for Assessing Users’ Psychological Experiences of Web-Based Psychosocial Interventions: A Systematic Review

    PubMed Central

    Howson, Moira; Ritchie, Linda; Carter, Philip D; Parry, David Tudor; Koziol-McLain, Jane

    2016-01-01

    Background The use of Web-based interventions to deliver mental health and behavior change programs is increasingly popular. They are cost-effective, accessible, and generally effective. Often these interventions concern psychologically sensitive and challenging issues, such as depression or anxiety. The process by which a person receives and experiences therapy is important to understanding therapeutic process and outcomes. While the experience of the patient or client in traditional face-to-face therapy has been evaluated in a number of ways, there appeared to be a gap in the evaluation of patient experiences of therapeutic interventions delivered online. Evaluation of Web-based artifacts has focused either on evaluation of experience from a computer Web-design perspective through usability testing or on evaluation of treatment effectiveness. Neither of these methods focuses on the psychological experience of the person while engaged in the therapeutic process. Objective This study aimed to investigate what methods, if any, have been used to evaluate the in situ psychological experience of users of Web-based self-help psychosocial interventions. Methods A systematic literature review was undertaken of interdisciplinary databases with a focus on health and computer sciences. Studies that met a predetermined search protocol were included. Results Among 21 studies identified that examined psychological experience of the user, only 1 study collected user experience in situ. The most common method of understanding users’ experience was through semistructured interviews conducted posttreatment or questionnaires administrated at the end of an intervention session. The questionnaires were usually based on standardized tools used to assess user experience with traditional face-to-face treatment. Conclusions There is a lack of methods specified in the literature to evaluate the interface between Web-based mental health or behavior change artifacts and users. Main limitations in the research were the nascency of the topic and cross-disciplinary nature of the field. There is a need to develop and deliver methods of understanding users’ psychological experiences while using an intervention. PMID:27363519

  4. Evaluation of contents-based image retrieval methods for a database of logos on drug tablets

    NASA Astrophysics Data System (ADS)

    Geradts, Zeno J.; Hardy, Huub; Poortman, Anneke; Bijhold, Jurrien

    2001-02-01

    In this research, an evaluation has been made of the different ways of contents based image retrieval of logos of drug tablets. On a database of 432 illicitly produced tablets (mostly containing MDMA), we have compared different retrieval methods. Two of these methods were available from commercial packages, QBIC and Imatch, where the implementation of the contents based image retrieval methods is not exactly known. We compared the results for this database with the MPEG-7 shape comparison methods, which are the contour-shape, bounding box and region-based shape methods. In addition, we have tested the log polar method that is available from our own research.

  5. The PBL-Evaluator: A Web-Based Tool for Assessment in Tutorials.

    ERIC Educational Resources Information Center

    Chaves, John F.; Chaves, John A.; Lantz, Marilyn S.

    1998-01-01

    Describes design and use of the PBL Evaluator, a computer-based method of evaluating dental students' clinical problem-solving skills. Analysis of Indiana University students' self-, peer, and tutor ratings for one iteration of a course in critical thinking and professional behavior shows differences in these ratings. The method is found useful…

  6. Study on the evaluation method for fault displacement based on characterized source model

    NASA Astrophysics Data System (ADS)

    Tonagi, M.; Takahama, T.; Matsumoto, Y.; Inoue, N.; Irikura, K.; Dalguer, L. A.

    2016-12-01

    IAEA Specific Safety Guide (SSG) 9 describes that probabilistic methods for evaluating fault displacement should be used if no sufficient basis is provided to decide conclusively, using the deterministic methodology, that the fault is not capable. In addition, the International Seismic Safety Centre has compiled an ANNEX on realizing the seismic hazard evaluation for nuclear facilities described in SSG-9, which shows the utility of deterministic and probabilistic evaluation methods for fault displacement. In Japan, it is required that important nuclear facilities be established on ground where fault displacement will not arise when earthquakes occur in the future. Under these circumstances, and based on these requirements, we need to develop evaluation methods for fault displacement to enhance the safety of nuclear facilities. We are studying deterministic and probabilistic methods with tentative analyses using observed records, such as surface fault displacement and near-fault strong ground motions, of inland crustal earthquakes in which fault displacements arose. In this study, we introduce the concept of the evaluation methods for fault displacement. We then show part of the tentative analysis results for the deterministic method as follows: (1) For the 1999 Chi-Chi earthquake, referring to the slip distribution estimated by waveform inversion, we construct a characterized source model (Miyake et al., 2003, BSSA) which can explain the observed near-fault broadband strong ground motions. (2) Referring to the characterized source model constructed in (1), we study an evaluation method for surface fault displacement using a hybrid method, which combines the particle method and the distinct element method. Finally, we suggest one deterministic method to evaluate fault displacement based on a characterized source model. This research was part of the 2015 research project `Development of evaluating method for fault displacement` by the Secretariat of the Nuclear Regulation Authority (S/NRA), Japan.

  7. Evaluation of Vacuum Blasting and Heat Guns as Methods for Abating Lead- Based Paint on Buildings

    DTIC Science & Technology

    1993-09-01

    ...investigating new technologies for lead-based paint abatement. This research evaluates the effectiveness, safety, and cost of vacuum abrasive units and heat guns as methods of removing lead-based paint.

  8. Comparing Methods for UAV-Based Autonomous Surveillance

    NASA Technical Reports Server (NTRS)

    Freed, Michael; Harris, Robert; Shafto, Michael

    2004-01-01

    We describe an approach to evaluating algorithmic and human performance in directing UAV-based surveillance. Its key elements are a decision-theoretic framework for measuring the utility of a surveillance schedule and an evaluation testbed consisting of 243 scenarios covering a well-defined space of possible missions. We apply this approach to two example UAV-based surveillance methods, a TSP-based algorithm and a human-directed approach, and then compare them to identify general strengths and weaknesses of each method.

  9. Risk Evaluation of Bogie System Based on Extension Theory and Entropy Weight Method

    PubMed Central

    Du, Yanping; Zhang, Yuan; Zhao, Xiaogang; Wang, Xiaohui

    2014-01-01

    A bogie system is the key equipment of railway vehicles. Rigorous practical evaluation of bogies is still a challenge. Presently, there is overreliance on part-specific experiments in practice. In the present work, a risk evaluation index system of a bogie system has been established based on the inspection data and experts' evaluation. Then, considering quantitative and qualitative aspects, the risk state of a bogie system has been evaluated using an extension theory and an entropy weight method. Finally, the method has been used to assess the bogie system of four different samples. Results show that this method can assess the risk state of a bogie system exactly. PMID:25574159
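
    The entropy weight step described above admits a compact implementation. The sketch below (in Python, chosen here since the record gives no code) shows the generic entropy weight calculation on a matrix of benefit-type index values; the sample numbers are illustrative and are not taken from the bogie study.

```python
import numpy as np

def entropy_weights(X):
    """Objective index weights via the entropy weight method.

    X: (n_samples, n_indices) matrix of benefit-type index values (larger is better).
    Returns a weight vector that sums to 1; indices with more dispersion
    (lower entropy) receive larger weights.
    """
    X = np.asarray(X, dtype=float)
    n, m = X.shape
    # Normalise each column to a probability distribution.
    P = X / X.sum(axis=0)
    # Shannon entropy of each index (0 * log 0 treated as 0).
    with np.errstate(divide="ignore", invalid="ignore"):
        logP = np.where(P > 0, np.log(P), 0.0)
    e = -(P * logP).sum(axis=0) / np.log(n)
    d = 1.0 - e                      # degree of diversification
    return d / d.sum()

# Toy example: four inspection samples scored on three risk indices.
scores = np.array([[0.8, 0.6, 0.9],
                   [0.7, 0.9, 0.4],
                   [0.9, 0.5, 0.6],
                   [0.6, 0.8, 0.7]])
print(entropy_weights(scores))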

  10. Risk evaluation of bogie system based on extension theory and entropy weight method.

    PubMed

    Du, Yanping; Zhang, Yuan; Zhao, Xiaogang; Wang, Xiaohui

    2014-01-01

    A bogie system is the key equipment of railway vehicles. Rigorous practical evaluation of bogies is still a challenge. Presently, there is overreliance on part-specific experiments in practice. In the present work, a risk evaluation index system of a bogie system has been established based on the inspection data and experts' evaluation. Then, considering quantitative and qualitative aspects, the risk state of a bogie system has been evaluated using an extension theory and an entropy weight method. Finally, the method has been used to assess the bogie system of four different samples. Results show that this method can assess the risk state of a bogie system exactly.

  11. Usability Evaluation Methods for Gesture-Based Games: A Systematic Review.

    PubMed

    Simor, Fernando Winckler; Brum, Manoela Rogofski; Schmidt, Jaison Dairon Ebertz; Rieder, Rafael; De Marchi, Ana Carolina Bertoletti

    2016-10-04

    Gestural interaction systems are increasingly being used, mainly in games, expanding the idea of entertainment and providing experiences with the purpose of promoting better physical and/or mental health. Therefore, it is necessary to establish mechanisms for evaluating the usability of these interfaces, which make gestures the basis of interaction, to achieve a balance between functionality and ease of use. This study aims to present the results of a systematic review focused on usability evaluation methods for gesture-based games, considering devices with motion-sensing capability. We considered the usability methods used, the common interface issues, and the strategies adopted to build good gesture-based games. The research was centered on four electronic databases: IEEE, Association for Computing Machinery (ACM), Springer, and Science Direct, from September 4 to 21, 2015. Of the 1427 studies evaluated, 10 matched the eligibility criteria. As a requirement, we considered studies about gesture-based games, Kinect and/or Wii as devices, and the use of a usability method to evaluate the user interface. In the 10 studies found, there was no standardization in the methods because they considered diverse analysis variables. Heterogeneously, authors used different instruments to evaluate gesture-based interfaces and no default approach was proposed. Questionnaires were the most used instruments (70%, 7/10), followed by interviews (30%, 3/10), and observation and video recording (20%, 2/10). Moreover, 60% (6/10) of the studies used gesture-based serious games to evaluate the performance of elderly participants in rehabilitation tasks. This highlights the need for creating an evaluation protocol for older adults to provide a user-friendly interface according to the user's age and limitations. Through this study, we conclude that this field is in need of a usability evaluation method for serious games, especially games for older adults, and that the definition of a methodology and a test protocol may offer the user more comfort, welfare, and confidence.

  12. A novel knowledge-based potential for RNA 3D structure evaluation

    NASA Astrophysics Data System (ADS)

    Yang, Yi; Gu, Qi; Zhang, Ben-Gong; Shi, Ya-Zhou; Shao, Zhi-Gang

    2018-03-01

    Ribonucleic acids (RNAs) play a vital role in biology, and knowledge of their three-dimensional (3D) structure is required to understand their biological functions. Recently structural prediction methods have been developed to address this issue, but a series of RNA 3D structures are generally predicted by most existing methods. Therefore, the evaluation of the predicted structures is generally indispensable. Although several methods have been proposed to assess RNA 3D structures, the existing methods are not precise enough. In this work, a new all-atom knowledge-based potential is developed for more accurately evaluating RNA 3D structures. The potential not only includes local and nonlocal interactions but also fully considers the specificity of each RNA by introducing a retraining mechanism. Based on extensive test sets generated from independent methods, the proposed potential correctly distinguished the native state and ranked near-native conformations to effectively select the best. Furthermore, the proposed potential precisely captured RNA structural features such as base-stacking and base-pairing. Comparisons with existing potential methods show that the proposed potential is very reliable and accurate in RNA 3D structure evaluation. Project supported by the National Science Foundation of China (Grants Nos. 11605125, 11105054, 11274124, and 11401448).
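
    As an illustration of the general idea behind all-atom knowledge-based potentials, the following sketch derives per-distance-bin pseudo-energies from observed and reference pair counts via the inverse Boltzmann relation and sums them to score a candidate structure. This is a generic sketch under that assumption, not the retrained RNA potential proposed in the paper; the function names and pseudo-count handling are illustrative.

```python
import numpy as np

def knowledge_based_potential(observed_counts, reference_counts, kT=1.0, pseudo=1e-6):
    """Distance-binned statistical potential from the inverse Boltzmann relation.

    observed_counts[i]  : counts of an atom-pair type in distance bin i over native-like structures
    reference_counts[i] : counts expected from a reference state in the same bins
    Returns per-bin energies E_i = -kT * ln(P_obs_i / P_ref_i).
    """
    observed_counts = np.asarray(observed_counts, dtype=float)
    reference_counts = np.asarray(reference_counts, dtype=float)
    p_obs = (observed_counts + pseudo) / (observed_counts.sum() + pseudo * len(observed_counts))
    p_ref = (reference_counts + pseudo) / (reference_counts.sum() + pseudo * len(reference_counts))
    return -kT * np.log(p_obs / p_ref)

def score_structure(pair_bins, energy_table):
    """Total pseudo-energy of a candidate structure: sum of bin energies over all atom pairs."""
    return float(np.sum(energy_table[np.asarray(pair_bins, dtype=int)]))
```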

  13. Retrieval evaluation and distance learning from perceived similarity between endomicroscopy videos.

    PubMed

    André, Barbara; Vercauteren, Tom; Buchner, Anna M; Wallace, Michael B; Ayache, Nicholas

    2011-01-01

    Evaluating content-based retrieval (CBR) is challenging because it requires an adequate ground-truth. When the available ground-truth is limited to textual metadata such as pathological classes, retrieval results can only be evaluated indirectly, for example in terms of classification performance. In this study we first present a tool to generate perceived similarity ground-truth that enables direct evaluation of endomicroscopic video retrieval. This tool uses a four-point Likert scale and collects subjective pairwise similarities perceived by multiple expert observers. We then evaluate against the generated ground-truth a previously developed dense bag-of-visual-words method for endomicroscopic video retrieval. Confirming the results of previous indirect evaluation based on classification, our direct evaluation shows that this method significantly outperforms several other state-of-the-art CBR methods. In a second step, we propose to improve the CBR method by learning an adjusted similarity metric from the perceived similarity ground-truth. By minimizing a margin-based cost function that differentiates similar and dissimilar video pairs, we learn a weight vector applied to the visual word signatures of videos. Using cross-validation, we demonstrate that the learned similarity distance is significantly better correlated with the perceived similarity than the original visual-word-based distance.

  14. A new evaluation method research for fusion quality of infrared and visible images

    NASA Astrophysics Data System (ADS)

    Ge, Xingguo; Ji, Yiguo; Tao, Zhongxiang; Tian, Chunyan; Ning, Chengda

    2017-03-01

    In order to objectively evaluate the fusion effect of infrared and visible images, a fusion evaluation method based on the energy-weighted average structure similarity and the edge information retention value is proposed to address the drawbacks of existing evaluation methods. The evaluation index of this method is given, and evaluation experiments are conducted on infrared and visible image fusion results obtained under different algorithms and environments on the basis of this index. The experimental results show that the objective evaluation index is consistent with the subjective evaluation results, which indicates that the method is a practical and effective means of evaluating fusion image quality.

  15. Properties of a Formal Method for Prediction of Emergent Behaviors in Swarm-based Systems

    NASA Technical Reports Server (NTRS)

    Rouff, Christopher; Vanderbilt, Amy; Hinchey, Mike; Truszkowski, Walt; Rash, James

    2004-01-01

    Autonomous intelligent swarms of satellites are being proposed for NASA missions that have complex behaviors and interactions. The emergent properties of swarms make these missions powerful, but at the same time more difficult to design and assure that proper behaviors will emerge. This paper gives the results of research into formal methods techniques for verification and validation of NASA swarm-based missions. Multiple formal methods were evaluated to determine their effectiveness in modeling and assuring the behavior of swarms of spacecraft. The NASA ANTS mission was used as an example of swarm intelligence for which to apply the formal methods. This paper will give the evaluation of these formal methods and give partial specifications of the ANTS mission using four selected methods. We then give an evaluation of the methods and the needed properties of a formal method for effective specification and prediction of emergent behavior in swarm-based systems.

  16. Comparative Evaluation of Two Methods to Estimate Natural Gas Production in Texas

    EIA Publications

    2003-01-01

    This report describes an evaluation conducted by the Energy Information Administration (EIA) in August 2003 of two methods that estimate natural gas production in Texas. The first method (parametric method) was used by EIA from February through August 2003 and the second method (multinomial method) replaced it starting in September 2003, based on the results of this evaluation.

  17. Evaluating IRT- and CTT-Based Methods of Estimating Classification Consistency and Accuracy Indices from Single Administrations

    ERIC Educational Resources Information Center

    Deng, Nina

    2011-01-01

    Three decision consistency and accuracy (DC/DA) methods, the Livingston and Lewis (LL) method, LEE method, and the Hambleton and Han (HH) method, were evaluated. The purposes of the study were: (1) to evaluate the accuracy and robustness of these methods, especially when their assumptions were not well satisfied, (2) to investigate the "true"…

  18. Evaluation of Low-Voltage Distribution Network Index Based on Improved Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Fan, Hanlu; Gao, Suzhou; Fan, Wenjie; Zhong, Yinfeng; Zhu, Lei

    2018-01-01

    In order to evaluate the development level of the low-voltage distribution network objectively and scientifically, the chromatography analysis method is utilized to construct an evaluation index model of the low-voltage distribution network. Based on the analysis of principal components and the characteristic logarithmic distribution of the index data, a logarithmic centralization method is adopted to improve the principal component analysis algorithm. The algorithm can decorrelate and reduce the dimensions of the evaluation model, and the comprehensive score has a better dispersion degree. Because the comprehensive scores of the courts are concentrated, a clustering method is adopted to analyse them, and the stratified evaluation of the courts is thereby realized. An example is given to verify the objectivity and scientificity of the evaluation method.
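
    A minimal sketch of the improved PCA step, assuming the logarithmic centralization simply means log-transforming the positive index values before centering and decomposition (the paper's exact preprocessing may differ); the function name and defaults are illustrative.

```python
import numpy as np

def log_centered_pca(X, n_components=2, eps=1e-9):
    """PCA after logarithmic centering of positively valued index data.

    X: (n_samples, n_indices) matrix of distribution-network index values.
    The log transform compresses the long right tail of log-normally
    distributed indices before the usual mean-centering and SVD.
    """
    Xlog = np.log(np.asarray(X, dtype=float) + eps)
    Xc = Xlog - Xlog.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T          # component scores per sample
    explained = (S**2) / (S**2).sum()          # variance ratio per component
    return scores, explained[:n_components]
```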

  19. Using evaluation to adapt health information outreach to the complex environments of community-based organizations.

    PubMed

    Olney, Cynthia A

    2005-10-01

    After arguing that most community-based organizations (CBOs) function as complex adaptive systems, this white paper describes the evaluation goals, questions, indicators, and methods most important at different stages of community-based health information outreach. This paper presents the basic characteristics of complex adaptive systems and argues that the typical CBO can be considered this type of system. It then presents evaluation as a tool for helping outreach teams adapt their outreach efforts to the CBO environment and thus maximize success. Finally, it describes the goals, questions, indicators, and methods most important or helpful at each stage of evaluation (community assessment, needs assessment and planning, process evaluation, and outcomes assessment). Literature from complex adaptive systems as applied to health care, business, and evaluation settings is presented. Evaluation models and applications, particularly those based on participatory approaches, are presented as methods for maximizing the effectiveness of evaluation in dynamic CBO environments. If one accepts that CBOs function as complex adaptive systems, characterized by dynamic relationships among many agents, influences, and forces, then effective evaluation at the stages of community assessment, needs assessment and planning, process evaluation, and outcomes assessment is critical to outreach success.

  20. 'Televaluation' of clinical information systems: an integrative approach to assessing Web-based systems.

    PubMed

    Kushniruk, A W; Patel, C; Patel, V L; Cimino, J J

    2001-04-01

    The World Wide Web provides an unprecedented opportunity for widespread access to health-care applications by both patients and providers. The development of new methods for assessing the effectiveness and usability of these systems is becoming a critical issue. This paper describes the distance evaluation (i.e. 'televaluation') of emerging Web-based information technologies. In health informatics evaluation, there is a need for application of new ideas and methods from the fields of cognitive science and usability engineering. A framework is presented for conducting evaluations of health-care information technologies that integrates a number of methods, ranging from deployment of on-line questionnaires (and Web-based forms) to remote video-based usability testing of user interactions with clinical information systems. Examples illustrating application of these techniques are presented for the assessment of a patient clinical information system (PatCIS), as well as an evaluation of use of Web-based clinical guidelines. Issues in designing, prototyping and iteratively refining evaluation components are discussed, along with description of a 'virtual' usability laboratory.

  1. Challenges of teacher-based clinical evaluation from nursing students' point of view: Qualitative content analysis.

    PubMed

    Sadeghi, Tabandeh; Seyed Bagheri, Seyed Hamid

    2017-01-01

    Clinical evaluation is very important in the educational system of nursing. One of the most common methods of clinical evaluation is evaluation by the teacher, but the challenges that students face in this evaluation method have not been examined. Thus, this study aimed to explore the experiences and views of nursing students about the challenges of teacher-based clinical evaluation. This study was a descriptive qualitative study with a qualitative content analysis approach. Data were gathered through semi-structured focus group sessions with undergraduate nursing students who were in their 8th semester at Rafsanjan University of Medical Sciences. Data were analyzed using Graneheim and Lundman's proposed method. Data collection and analysis were concurrent. According to the findings, "factitious evaluation" was the main theme of the study, which consisted of three categories: "personal preferences," "unfairness," and "shirking responsibility." These categories are explained using quotes derived from the data. According to the results of this study, teacher-based clinical evaluation would lead to factitious evaluation. Thus, changing this approach of evaluation toward modern methods of evaluation is suggested. The findings can help nursing instructors gain a better understanding of the nursing students' point of view toward this evaluation approach and, as a result, plan to change it.

  2. Reliability and performance evaluation of systems containing embedded rule-based expert systems

    NASA Technical Reports Server (NTRS)

    Beaton, Robert M.; Adams, Milton B.; Harrison, James V. A.

    1989-01-01

    A method for evaluating the reliability of real-time systems containing embedded rule-based expert systems is proposed and investigated. It is a three-stage technique that addresses the impact of knowledge-base uncertainties on the performance of expert systems. In the first stage, a Markov reliability model of the system is developed which identifies the key performance parameters of the expert system. In the second stage, the evaluation method is used to determine the values of the expert system's key performance parameters. The performance parameters can be evaluated directly by using a probabilistic model of uncertainties in the knowledge base or by using sensitivity analyses. In the third and final stage, the performance parameters of the expert system are combined with performance parameters for other system components and subsystems to evaluate the reliability and performance of the complete system. The evaluation method is demonstrated in the context of a simple expert system used to supervise the performance of an FDI algorithm associated with an aircraft longitudinal flight-control system.

  3. Effectiveness evaluation of objective and subjective weighting methods for aquifer vulnerability assessment in urban context

    NASA Astrophysics Data System (ADS)

    Sahoo, Madhumita; Sahoo, Satiprasad; Dhar, Anirban; Pradhan, Biswajeet

    2016-10-01

    Groundwater vulnerability assessment has been an accepted practice to identify the zones with relatively increased potential for groundwater contamination. DRASTIC is the most popular secondary-information-based vulnerability assessment approach. The original DRASTIC approach considers the relative importance of features/sub-features based on subjective weighting/rating values. However, the variability of features at a smaller scale is not reflected in this subjective vulnerability assessment process. In contrast to the subjective approach, objective weighting-based methods provide flexibility in weight assignment depending on the variation of the local system. However, experts' opinions are not directly considered in the objective weighting-based methods. Thus, the effectiveness of both subjective and objective weighting-based approaches needs to be evaluated. In the present study, three methods - the entropy information method (E-DRASTIC), the fuzzy pattern recognition method (F-DRASTIC) and single parameter sensitivity analysis (SA-DRASTIC) - were used to modify the weights of the original DRASTIC features to include local variability. Moreover, a grey incidence analysis was used to evaluate the relative performance of the subjective (DRASTIC and SA-DRASTIC) and objective (E-DRASTIC and F-DRASTIC) weighting-based methods. The performance of the developed methodology was tested in an urban area of Kanpur City, India. The relative performance of the subjective and objective methods varies with the choice of water quality parameters. This methodology can be applied with or without suitable modification. These evaluations establish the potential applicability of the methodology for general vulnerability assessment in urban contexts.

  4. Water Quality Evaluation of the Yellow River Basin Based on Gray Clustering Method

    NASA Astrophysics Data System (ADS)

    Fu, X. Q.; Zou, Z. H.

    2018-03-01

    The water quality of 12 monitoring sections in the Yellow River Basin was evaluated comprehensively by the grey clustering method, based on the water quality monitoring data published by the Ministry of Environmental Protection of China in May 2016 and the environmental quality standard for surface water. The results can reflect the water quality of the Yellow River Basin objectively. Furthermore, the evaluation results are basically the same as those obtained with the fuzzy comprehensive evaluation method. The results also show that the overall water quality of the Yellow River Basin is good, which coincides with the actual situation of the basin. Overall, the grey clustering method is reasonable and feasible for water quality evaluation, and it is also convenient to calculate.
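
    For readers unfamiliar with grey clustering, the sketch below shows the usual form of the calculation: triangular whitenization weight functions for each grey (quality) class and a weighted clustering coefficient per class, with the section assigned to the class of largest coefficient. The class boundaries, index weights, and function shapes are placeholders, not the values used in the study.

```python
import numpy as np

def triangular_whitenization(x, left, centre, right):
    """Whitenization weight of value x for a grey class centred at `centre`."""
    if x <= left or x >= right:
        return 0.0
    if x <= centre:
        return (x - left) / (centre - left)
    return (right - x) / (right - centre)

def grey_clustering(sample, class_params, index_weights):
    """Clustering coefficient of one monitoring section for each grey (quality) class.

    sample        : measured values of the evaluated indices (e.g., COD, NH3-N)
    class_params  : per class, a list of (left, centre, right) triples, one per index
    index_weights : weight of each index, summing to 1
    """
    coeffs = []
    for params in class_params:
        sigma = sum(w * triangular_whitenization(x, *p)
                    for x, p, w in zip(sample, params, index_weights))
        coeffs.append(sigma)
    return np.array(coeffs)
```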

  5. Evaluating user reputation in online rating systems via an iterative group-based ranking method

    NASA Astrophysics Data System (ADS)

    Gao, Jian; Zhou, Tao

    2017-05-01

    Reputation is a valuable asset in online social lives and it has drawn increased attention. Due to the existence of noisy ratings and spamming attacks, how to evaluate user reputation in online rating systems is especially significant. However, most of the previous ranking-based methods either follow a debatable assumption or have unsatisfactory robustness. In this paper, we propose an iterative group-based ranking method by introducing an iterative reputation-allocation process into the original group-based ranking method. More specifically, the reputation of users is calculated based on the weighted sizes of the user rating groups after grouping all users by their rating similarities, and the ratings of high-reputation users have larger weights in dominating the corresponding user rating groups. The reputation of users and the user rating group sizes are iteratively updated until they become stable. Results on two real data sets with artificial spammers suggest that the proposed method has better performance than the state-of-the-art methods and that its robustness is considerably improved compared with the original group-based ranking method. Our work highlights the positive role of considering users' grouping behaviors towards a better online user reputation evaluation.

  6. Developing and using a rubric for evaluating evidence-based medicine point-of-care tools

    PubMed Central

    Foster, Margaret J

    2011-01-01

    Objective: The research sought to establish a rubric for evaluating evidence-based medicine (EBM) point-of-care tools in a health sciences library. Methods: The authors searched the literature for EBM tool evaluations and found that most previous reviews were designed to evaluate the ability of an EBM tool to answer a clinical question. The researchers' goal was to develop and complete rubrics for assessing these tools based on criteria for a general evaluation of tools (reviewing content, search options, quality control, and grading) and criteria for an evaluation of clinical summaries (searching tools for treatments of common diagnoses and evaluating summaries for quality control). Results: Differences between EBM tools' options, content coverage, and usability were minimal. However, the products' methods for locating and grading evidence varied widely in transparency and process. Conclusions: As EBM tools are constantly updating and evolving, evaluation of these tools needs to be conducted frequently. Standards for evaluating EBM tools need to be established, with one method being the use of objective rubrics. In addition, EBM tools need to provide more information about authorship, reviewers, methods for evidence collection, and grading system employed. PMID:21753917

  7. Evaluation on Cost Overrun Risks of Long-distance Water Diversion Project Based on SPA-IAHP Method

    NASA Astrophysics Data System (ADS)

    Yuanyue, Yang; Huimin, Li

    2018-02-01

    Large investment, long routes, many change orders, and other factors are the main causes of cost overruns in long-distance water diversion projects. Based on existing research, this paper builds a full-process cost overrun risk evaluation index system for water diversion projects, applies the SPA-IAHP method to set up a cost overrun risk evaluation model, and calculates and ranks the weight of every risk evaluation index. Finally, the cost overrun risks are comprehensively evaluated by calculating the linkage measure, and the comprehensive risk level is obtained. The SPA-IAHP method can evaluate risks accurately and with high reliability. Case calculation and verification show that it can provide valid cost overrun decision-making information to construction companies.

  8. Damage evaluation by a guided wave-hidden Markov model based method

    NASA Astrophysics Data System (ADS)

    Mei, Hanfei; Yuan, Shenfang; Qiu, Lei; Zhang, Jinjin

    2016-02-01

    Guided wave based structural health monitoring has shown great potential in aerospace applications. However, one of the key challenges of practical engineering applications is the accurate interpretation of the guided wave signals under time-varying environmental and operational conditions. This paper presents a guided wave-hidden Markov model based method to improve the damage evaluation reliability of real aircraft structures under time-varying conditions. In the proposed approach, an HMM based unweighted moving average trend estimation method, which can capture the trend of damage propagation from the posterior probability obtained by HMM modeling is used to achieve a probabilistic evaluation of the structural damage. To validate the developed method, experiments are performed on a hole-edge crack specimen under fatigue loading condition and a real aircraft wing spar under changing structural boundary conditions. Experimental results show the advantage of the proposed method.
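
    The trend-estimation step can be pictured as a simple smoothing of the HMM posterior probability of the damaged state over successive monitoring steps. The sketch below is only a schematic of an unweighted moving average applied to such a posterior sequence; the paper's HMM modeling and the actual window length are not reproduced here.

```python
import numpy as np

def unweighted_moving_average(posteriors, window=5):
    """Smooth the HMM posterior probability of the damaged state over monitoring
    steps to expose the damage-propagation trend under time-varying conditions."""
    posteriors = np.asarray(posteriors, dtype=float)
    kernel = np.ones(window) / window
    return np.convolve(posteriors, kernel, mode="valid")
```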

  9. Benchmark data sets for structure-based computational target prediction.

    PubMed

    Schomburg, Karen T; Rarey, Matthias

    2014-08-25

    Structure-based computational target prediction methods identify potential targets for a bioactive compound. Methods based on protein-ligand docking so far face many challenges, of which the greatest is probably the ranking of true targets in a large data set of protein structures. Currently, no standard data sets for evaluation exist, rendering the comparison and demonstration of method improvements cumbersome. Therefore, we propose two data sets and evaluation strategies for a meaningful evaluation of new target prediction methods, i.e., a small data set consisting of three target classes for detailed proof-of-concept and selectivity studies and a large data set consisting of 7992 protein structures and 72 drug-like ligands allowing statistical evaluation with performance metrics on a drug-like chemical space. Both data sets are built from openly available resources, and any information needed to perform the described experiments is reported. We describe the composition of the data sets, the setup of screening experiments, and the evaluation strategy. Performance metrics capable of measuring the early recognition of enrichment, such as AUC, BEDROC, and NSLR, are proposed. We apply a sequence-based target prediction method to the large data set to analyze its content of nontrivial evaluation cases. The proposed data sets are used for method evaluation of our new inverse screening method iRAISE. The small data set reveals the method's capability and limitations to selectively distinguish between rather similar protein structures. The large data set simulates real target identification scenarios. iRAISE achieves excellent or good enrichment in 55% of cases, a median AUC of 0.67, and RMSDs below 2.0 Å for 74% of cases, and it was able to predict the first true target within the top 2% of the protein data set of about 8000 structures in 59 out of 72 cases.

  10. Overall Performance Evaluation of Tubular Scraper Conveyors Using a TOPSIS-Based Multiattribute Decision-Making Method

    PubMed Central

    Yao, Yanping; Kou, Ziming; Meng, Wenjun; Han, Gang

    2014-01-01

    Properly evaluating the overall performance of tubular scraper conveyors (TSCs) can increase their overall efficiency and reduce economic investments, but such methods have rarely been studied. This study evaluated the overall performance of TSCs based on the technique for order of preference by similarity to ideal solution (TOPSIS). Three conveyors of the same type produced in the same factory were investigated. Their scraper space, material filling coefficient, and vibration coefficient of the traction components were evaluated. A mathematical model of the multiattribute decision matrix was constructed; a weighted judgment matrix was obtained using the DELPHI method. The linguistic positive-ideal solution (LPIS), the linguistic negative-ideal solution (LNIS), and the distance from each solution to the LPIS and the LNIS, that is, the approximation degrees, were calculated. The optimal solution was determined by ordering the approximation degrees for each solution. The TOPSIS-based results were compared with the measurement results provided by the manufacturer. The ordering result based on the three evaluated parameters was highly consistent with the result provided by the manufacturer. The TOPSIS-based method serves as a suitable evaluation tool for the overall performance of TSCs. It facilitates the optimal deployment of TSCs for industrial purposes. PMID:24991646
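
    TOPSIS itself is a short, well-defined procedure; the sketch below shows a standard implementation (vector normalization, weighting, ideal and anti-ideal solutions, closeness coefficient). The three-conveyor example values and the criterion weights are illustrative only and are not the manufacturer's data.

```python
import numpy as np

def topsis(decision_matrix, weights, benefit_mask):
    """Rank alternatives with TOPSIS.

    decision_matrix : (n_alternatives, n_criteria) raw scores
    weights         : criterion weights summing to 1
    benefit_mask    : True for larger-is-better criteria, False for cost criteria
    Returns closeness coefficients in [0, 1]; larger means closer to the ideal solution.
    """
    X = np.asarray(decision_matrix, dtype=float)
    # Vector normalisation, then weighting.
    V = weights * X / np.linalg.norm(X, axis=0)
    ideal = np.where(benefit_mask, V.max(axis=0), V.min(axis=0))
    anti_ideal = np.where(benefit_mask, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti_ideal, axis=1)
    return d_neg / (d_pos + d_neg)

# Three conveyors scored on scraper space, filling coefficient, and vibration
# coefficient (vibration treated as a cost criterion) -- illustrative numbers only.
scores = np.array([[0.45, 0.62, 0.08],
                   [0.50, 0.58, 0.12],
                   [0.40, 0.66, 0.09]])
print(topsis(scores, weights=np.array([0.4, 0.4, 0.2]),
             benefit_mask=np.array([True, True, False])))
```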

  11. Using a fuzzy comprehensive evaluation method to determine product usability: A test case

    PubMed Central

    Zhou, Ronggang; Chan, Alan H. S.

    2016-01-01

    BACKGROUND: In order to take into account the inherent uncertainties during product usability evaluation, Zhou and Chan [1] proposed a comprehensive method of usability evaluation for products by combining the analytic hierarchy process (AHP) and fuzzy evaluation methods for synthesizing performance data and subjective response data. This method was designed to provide an integrated framework combining the inevitable vague judgments from the multiple stages of the product evaluation process. OBJECTIVE AND METHODS: In order to illustrate the effectiveness of the model, this study used a summative usability test case to assess the application and strength of the general fuzzy usability framework. To test the proposed fuzzy usability evaluation framework [1], a standard summative usability test was conducted to benchmark the overall usability of a specific network management software. Based on the test data, the fuzzy method was applied to incorporate both the usability scores and uncertainties involved in the multiple components of the evaluation. Then, with Monte Carlo simulation procedures, confidence intervals were used to compare the reliabilities among the fuzzy approach and two typical conventional methods combining metrics based on percentages. RESULTS AND CONCLUSIONS: This case study showed that the fuzzy evaluation technique can be applied successfully for combining summative usability testing data to achieve an overall usability quality for the network software evaluated. Greater differences of confidence interval widths between the method of averaging equally percentage and weighted evaluation method, including the method of weighted percentage averages, verified the strength of the fuzzy method. PMID:28035942
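
    The core synthesis step of a fuzzy comprehensive evaluation is a weighted combination of membership degrees. The sketch below shows this step with the common weighted-average operator; the component weights, grade labels, and membership values are made-up placeholders, and the AHP weighting and Monte Carlo confidence intervals used in the study are not reproduced.

```python
import numpy as np

def fuzzy_comprehensive_evaluation(weights, membership):
    """Weighted-average fuzzy synthesis.

    weights    : importance of each usability component (sums to 1), e.g. from AHP
    membership : (n_components, n_grades) degrees of membership of each component
                 in each evaluation grade (rows sum to 1)
    Returns the overall membership in each grade; a single score can be obtained
    by multiplying with grade values (defuzzification).
    """
    B = np.asarray(weights) @ np.asarray(membership)
    return B / B.sum()

weights = [0.5, 0.3, 0.2]                    # effectiveness, efficiency, satisfaction
membership = [[0.1, 0.3, 0.4, 0.2],          # grades: poor, fair, good, excellent
              [0.0, 0.2, 0.5, 0.3],
              [0.2, 0.3, 0.3, 0.2]]
grade_values = np.array([25, 50, 75, 100])
B = fuzzy_comprehensive_evaluation(weights, membership)
print(B, float(B @ grade_values))            # grade distribution and overall score
```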

  12. Quantitative assessment of tumour extraction from dermoscopy images and evaluation of computer-based extraction methods for an automatic melanoma diagnostic system.

    PubMed

    Iyatomi, Hitoshi; Oka, Hiroshi; Saito, Masataka; Miyake, Ayako; Kimoto, Masayuki; Yamagami, Jun; Kobayashi, Seiichiro; Tanikawa, Akiko; Hagiwara, Masafumi; Ogawa, Koichi; Argenziano, Giuseppe; Soyer, H Peter; Tanaka, Masaru

    2006-04-01

    The aims of this study were to provide a quantitative assessment of the tumour area extracted by dermatologists and to evaluate computer-based methods from dermoscopy images for refining a computer-based melanoma diagnostic system. Dermoscopic images of 188 Clark naevi, 56 Reed naevi and 75 melanomas were examined. Five dermatologists manually drew the border of each lesion with a tablet computer. The inter-observer variability was evaluated and the standard tumour area (STA) for each dermoscopy image was defined. Manual extractions by 10 non-medical individuals and by two computer-based methods were evaluated with STA-based assessment criteria: precision and recall. Our new computer-based method introduced the region-growing approach in order to yield results close to those obtained by dermatologists. The effectiveness of our extraction method with regard to diagnostic accuracy was evaluated. Two linear classifiers were built using the results of conventional and new computer-based tumour area extraction methods. The final diagnostic accuracy was evaluated by drawing the receiver operating curve (ROC) of each classifier, and the area under each ROC was evaluated. The standard deviations of the tumour area extracted by five dermatologists and 10 non-medical individuals were 8.9% and 10.7%, respectively. After assessment of the extraction results by dermatologists, the STA was defined as the area that was selected by more than two dermatologists. Dermatologists selected the melanoma area with statistically smaller divergence than that of Clark naevus or Reed naevus (P = 0.05). By contrast, non-medical individuals did not show this difference. Our new computer-based extraction algorithm showed superior performance (precision, 94.1%; recall, 95.3%) to the conventional thresholding method (precision, 99.5%; recall, 87.6%). These results indicate that our new algorithm extracted a tumour area close to that obtained by dermatologists and, in particular, the border part of the tumour was adequately extracted. With this refinement, the area under the ROC increased from 0.795 to 0.875 and the diagnostic accuracy showed an increase of approximately 20% in specificity when the sensitivity was 80%. It can be concluded that our computer-based tumour extraction algorithm extracted almost the same area as that obtained by dermatologists and provided improved computer-based diagnostic accuracy.

  13. Intelligent Condition Diagnosis Method Based on Adaptive Statistic Test Filter and Diagnostic Bayesian Network

    PubMed Central

    Li, Ke; Zhang, Qiuju; Wang, Kun; Chen, Peng; Wang, Huaqing

    2016-01-01

    A new fault diagnosis method for rotating machinery based on adaptive statistic test filter (ASTF) and Diagnostic Bayesian Network (DBN) is presented in this paper. ASTF is proposed to obtain weak fault features under background noise, ASTF is based on statistic hypothesis testing in the frequency domain to evaluate similarity between reference signal (noise signal) and original signal, and remove the component of high similarity. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, evaluation factor Ipq is also defined. In addition, a simulation experiment is designed to verify the effectiveness and robustness of ASTF. A sensitive evaluation method using principal component analysis (PCA) is proposed to evaluate the sensitiveness of symptom parameters (SPs) for condition diagnosis. By this way, the good SPs that have high sensitiveness for condition diagnosis can be selected. A three-layer DBN is developed to identify condition of rotation machinery based on the Bayesian Belief Network (BBN) theory. Condition diagnosis experiment for rolling element bearings demonstrates the effectiveness of the proposed method. PMID:26761006

  14. Intelligent Condition Diagnosis Method Based on Adaptive Statistic Test Filter and Diagnostic Bayesian Network.

    PubMed

    Li, Ke; Zhang, Qiuju; Wang, Kun; Chen, Peng; Wang, Huaqing

    2016-01-08

    A new fault diagnosis method for rotating machinery based on adaptive statistic test filter (ASTF) and Diagnostic Bayesian Network (DBN) is presented in this paper. ASTF is proposed to obtain weak fault features under background noise, ASTF is based on statistic hypothesis testing in the frequency domain to evaluate similarity between reference signal (noise signal) and original signal, and remove the component of high similarity. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, evaluation factor Ipq is also defined. In addition, a simulation experiment is designed to verify the effectiveness and robustness of ASTF. A sensitive evaluation method using principal component analysis (PCA) is proposed to evaluate the sensitiveness of symptom parameters (SPs) for condition diagnosis. By this way, the good SPs that have high sensitiveness for condition diagnosis can be selected. A three-layer DBN is developed to identify condition of rotation machinery based on the Bayesian Belief Network (BBN) theory. Condition diagnosis experiment for rolling element bearings demonstrates the effectiveness of the proposed method.

  15. Deterministic and fuzzy-based methods to evaluate community resilience

    NASA Astrophysics Data System (ADS)

    Kammouh, Omar; Noori, Ali Zamani; Taurino, Veronica; Mahin, Stephen A.; Cimellaro, Gian Paolo

    2018-04-01

    Community resilience is becoming a growing concern for authorities and decision makers. This paper introduces two indicator-based methods to evaluate the resilience of communities based on the PEOPLES framework. PEOPLES is a multi-layered framework that defines community resilience using seven dimensions. Each of the dimensions is described through a set of resilience indicators collected from literature and they are linked to a measure allowing the analytical computation of the indicator's performance. The first method proposed in this paper requires data on previous disasters as an input and returns as output a performance function for each indicator and a performance function for the whole community. The second method exploits a knowledge-based fuzzy modeling for its implementation. This method allows a quantitative evaluation of the PEOPLES indicators using descriptive knowledge rather than deterministic data including the uncertainty involved in the analysis. The output of the fuzzy-based method is a resilience index for each indicator as well as a resilience index for the community. The paper also introduces an open source online tool in which the first method is implemented. A case study illustrating the application of the first method and the usage of the tool is also provided in the paper.

  16. A probability-based multi-cycle sorting method for 4D-MRI: A simulation study.

    PubMed

    Liang, Xiao; Yin, Fang-Fang; Liu, Yilin; Cai, Jing

    2016-12-01

    To develop a novel probability-based sorting method capable of generating multiple breathing cycles of 4D-MRI images and to evaluate performance of this new method by comparing with conventional phase-based methods in terms of image quality and tumor motion measurement. Based on previous findings that breathing motion probability density function (PDF) of a single breathing cycle is dramatically different from true stabilized PDF that resulted from many breathing cycles, it is expected that a probability-based sorting method capable of generating multiple breathing cycles of 4D images may capture breathing variation information missing from conventional single-cycle sorting methods. The overall idea is to identify a few main breathing cycles (and their corresponding weightings) that can best represent the main breathing patterns of the patient and then reconstruct a set of 4D images for each of the identified main breathing cycles. This method is implemented in three steps: (1) The breathing signal is decomposed into individual breathing cycles, characterized by amplitude, and period; (2) individual breathing cycles are grouped based on amplitude and period to determine the main breathing cycles. If a group contains more than 10% of all breathing cycles in a breathing signal, it is determined as a main breathing pattern group and is represented by the average of individual breathing cycles in the group; (3) for each main breathing cycle, a set of 4D images is reconstructed using a result-driven sorting method adapted from our previous study. The probability-based sorting method was first tested on 26 patients' breathing signals to evaluate its feasibility of improving target motion PDF. The new method was subsequently tested for a sequential image acquisition scheme on the 4D digital extended cardiac torso (XCAT) phantom. Performance of the probability-based and conventional sorting methods was evaluated in terms of target volume precision and accuracy as measured by the 4D images, and also the accuracy of average intensity projection (AIP) of 4D images. Probability-based sorting showed improved similarity of breathing motion PDF from 4D images to reference PDF compared to single cycle sorting, indicated by the significant increase in Dice similarity coefficient (DSC) (probability-based sorting, DSC = 0.89 ± 0.03, and single cycle sorting, DSC = 0.83 ± 0.05, p-value <0.001). Based on the simulation study on XCAT, the probability-based method outperforms the conventional phase-based methods in qualitative evaluation on motion artifacts and quantitative evaluation on tumor volume precision and accuracy and accuracy of AIP of the 4D images. In this paper the authors demonstrated the feasibility of a novel probability-based multicycle 4D image sorting method. The authors' preliminary results showed that the new method can improve the accuracy of tumor motion PDF and the AIP of 4D images, presenting potential advantages over the conventional phase-based sorting method for radiation therapy motion management.
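
    Step (2) of the sorting method, grouping individual breathing cycles by amplitude and period and keeping groups that contain more than 10% of all cycles, can be sketched as below. The tolerance parameters and the greedy grouping rule are assumptions for illustration; the authors' exact grouping criterion is not reproduced here.

```python
import numpy as np

def main_breathing_cycles(amplitudes, periods, amp_tol=0.2, per_tol=0.2, min_fraction=0.1):
    """Group breathing cycles by amplitude and period and keep the main groups.

    A cycle joins an existing group when its amplitude and period are within the
    given relative tolerances of the group means; groups holding more than
    `min_fraction` of all cycles are returned as (mean_amplitude, mean_period,
    weight) tuples representing the main breathing patterns.
    """
    groups = []                                   # each group is a list of (amp, per)
    for a, p in zip(amplitudes, periods):
        for g in groups:
            ga = np.mean([c[0] for c in g])
            gp = np.mean([c[1] for c in g])
            if abs(a - ga) <= amp_tol * ga and abs(p - gp) <= per_tol * gp:
                g.append((a, p))
                break
        else:
            groups.append([(a, p)])
    n = len(amplitudes)
    return [(np.mean([c[0] for c in g]), np.mean([c[1] for c in g]), len(g) / n)
            for g in groups if len(g) / n > min_fraction]
```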

  17. Comparison of Methods for Evaluating Urban Transportation Alternatives

    DOT National Transportation Integrated Search

    1975-02-01

    The objective of the report was to compare five alternative methods for evaluating urban transportation improvement options: unaided judgmental evaluation, cost-benefit analysis, cost-effectiveness analysis based on a single measure of effectiveness, ...

  18. An online credit evaluation method based on AHP and SPA

    NASA Astrophysics Data System (ADS)

    Xu, Yingtao; Zhang, Ying

    2009-07-01

    Online credit evaluation is the foundation for the establishment of trust and for the management of risk between buyers and sellers in e-commerce. In this paper, a new credit evaluation method based on the analytic hierarchy process (AHP) and set pair analysis (SPA) is presented to determine the credibility of electronic commerce participants. It solves some of the drawbacks found in classical credit evaluation methods and broadens the scope of current approaches. Both qualitative and quantitative indicators are considered in the proposed method, and then an overall credit score is obtained from the optimal perspective. In the end, a case analysis of China Garment Network is provided for illustrative purposes.
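
    Set pair analysis reduces, at its core, to the connection degree mu = a + b*i + c*j over the "same", "uncertain", and "opposite" fractions of the indicators. The sketch below shows only that formula; how the AHP weights feed the three fractions, and the choice of i and j, follow the paper and are merely assumed here.

```python
def spa_connection_degree(same, uncertain, opposite, i=0.5, j=-1.0):
    """Set pair analysis connection degree mu = a + b*i + c*j.

    same/uncertain/opposite: weighted sums of indicators that are identical to,
    uncertain about, or contrary to the ideal credit profile. i in [-1, 1] and
    j = -1 are the usual uncertainty and opposition coefficients.
    """
    total = same + uncertain + opposite
    a, b, c = same / total, uncertain / total, opposite / total
    return a + b * i + c * j

# Illustrative seller profile: AHP weights decide how much each indicator
# contributes to the "same" / "uncertain" / "opposite" parts.
print(spa_connection_degree(same=6.2, uncertain=2.3, opposite=1.5))
```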

  19. Evaluation of design flood frequency methods for Iowa streams : final report, June 2009.

    DOT National Transportation Integrated Search

    2009-06-01

    The objective of this project was to assess the predictive accuracy of flood frequency estimation for small Iowa streams based on the Rational Method, the NRCS curve number approach, and the Iowa Runoff Chart. The evaluation was based on comparis...

  20. Using a fuzzy comprehensive evaluation method to determine product usability: A test case.

    PubMed

    Zhou, Ronggang; Chan, Alan H S

    2017-01-01

    In order to take into account the inherent uncertainties during product usability evaluation, Zhou and Chan [1] proposed a comprehensive method of usability evaluation for products by combining the analytic hierarchy process (AHP) and fuzzy evaluation methods for synthesizing performance data and subjective response data. This method was designed to provide an integrated framework combining the inevitable vague judgments from the multiple stages of the product evaluation process. In order to illustrate the effectiveness of the model, this study used a summative usability test case to assess the application and strength of the general fuzzy usability framework. To test the proposed fuzzy usability evaluation framework [1], a standard summative usability test was conducted to benchmark the overall usability of a specific network management software. Based on the test data, the fuzzy method was applied to incorporate both the usability scores and uncertainties involved in the multiple components of the evaluation. Then, with Monte Carlo simulation procedures, confidence intervals were used to compare the reliabilities among the fuzzy approach and two typical conventional methods combining metrics based on percentages. This case study showed that the fuzzy evaluation technique can be applied successfully for combining summative usability testing data to achieve an overall usability quality for the network software evaluated. Greater differences of confidence interval widths between the method of averaging equally percentage and weighted evaluation method, including the method of weighted percentage averages, verified the strength of the fuzzy method.

  1. Methodological Pluralism: The Gold Standard of STEM Evaluation

    ERIC Educational Resources Information Center

    Lawrenz, Frances; Huffman, Douglas

    2006-01-01

    Nationally, there is continuing debate about appropriate methods for conducting educational evaluations. The U.S. Department of Education has placed a priority on "scientifically" based evaluation methods and has advocated a "gold standard" of randomized controlled experimentation. The priority suggests that randomized control methods are best,…

  2. Performance evaluation method of electric energy data acquire system based on combination of subjective and objective weights

    NASA Astrophysics Data System (ADS)

    Gao, Chen; Ding, Zhongan; Deng, Bofa; Yan, Shengteng

    2017-10-01

    According to the characteristics of the electric energy data acquire system (EEDAS), and considering the availability of each index data item and the connections between indexes, a performance evaluation index system of the electric energy data acquire system is established from three aspects: the master station system, the communication channel, and the terminal equipment. The comprehensive weight of each index is determined based on the triangular fuzzy number analytic hierarchy process combined with the entropy weight method, so that both subjective preference and objective attributes are taken into consideration, which makes the comprehensive performance evaluation more reasonable and reliable. Example analysis shows that, by combining the analytic hierarchy process (AHP) and triangular fuzzy numbers (TFN) to establish a comprehensive index evaluation system based on the entropy method, the evaluation results are not only convenient and practical but also more objective and accurate.
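
    The combination of subjective (fuzzy-AHP) and objective (entropy) weights can be as simple as a normalized linear blend, sketched below. The blending coefficient, the aspect names, and the example weights are placeholders; the paper may use a different combination rule.

```python
import numpy as np

def combined_weights(subjective, objective, alpha=0.5):
    """Combine subjective (e.g., fuzzy-AHP) and objective (entropy) index weights.

    A simple linear combination is shown; multiplicative synthesis
    w_k proportional to subjective_k * objective_k is another common choice.
    """
    w = alpha * np.asarray(subjective) + (1 - alpha) * np.asarray(objective)
    return w / w.sum()

w_subj = [0.5, 0.3, 0.2]     # master station, communication channel, terminal equipment
w_obj = [0.4, 0.35, 0.25]    # e.g., from the entropy weight method on index data
print(combined_weights(w_subj, w_obj))
```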

  3. Research on Comprehensive Evaluation Method for Heating Project Based on Analytic Hierarchy Processing

    NASA Astrophysics Data System (ADS)

    Han, Shenchao; Yang, Yanchun; Liu, Yude; Zhang, Peng; Li, Siwei

    2018-01-01

    Changing the distributed heat supply system is an effective way to reduce haze in winter. Thus, studies on a comprehensive index system and a scientific evaluation method for distributed heat supply projects are essential. Firstly, the influencing factors of heating modes were studied, and a multi-dimensional index system covering economic, environmental, risk and flexibility aspects was built, with all indexes quantified. Secondly, a comprehensive evaluation method based on AHP was put forward to analyze the proposed comprehensive index system. Lastly, the case study suggested that supplying heat with electricity has great advantages and promotional value. The comprehensive index system for distributed heat supply projects and the evaluation method in this paper can evaluate distributed heat supply projects effectively and provide scientific support for choosing a distributed heating project.

  4. Method for evaluation of human induced pluripotent stem cell quality using image analysis based on the biological morphology of cells.

    PubMed

    Wakui, Takashi; Matsumoto, Tsuyoshi; Matsubara, Kenta; Kawasaki, Tomoyuki; Yamaguchi, Hiroshi; Akutsu, Hidenori

    2017-10-01

    We propose an image analysis method for quality evaluation of human pluripotent stem cells based on biologically interpretable features. It is important to maintain the undifferentiated state of induced pluripotent stem cells (iPSCs) while culturing the cells during propagation. Cell culture experts visually select good quality cells exhibiting the morphological features characteristic of undifferentiated cells. Experts have empirically determined that these features comprise prominent and abundant nucleoli, less intercellular spacing, and fewer differentiating cellular nuclei. We quantified these features based on experts' visual inspection of phase contrast images of iPSCs and found that these features are effective for evaluating iPSC quality. We then developed an iPSC quality evaluation method using an image analysis technique. The method allowed accurate classification, equivalent to visual inspection by experts, of three iPSC cell lines.

  5. Evaluation of the safety performance of highway alignments based on fault tree analysis and safety boundaries.

    PubMed

    Chen, Yikai; Wang, Kai; Xu, Chengcheng; Shi, Qin; He, Jie; Li, Peiqing; Shi, Ting

    2018-05-19

    To overcome the limitations of previous highway alignment safety evaluation methods, this article presents a highway alignment safety evaluation method based on fault tree analysis (FTA) and the characteristics of vehicle safety boundaries, within the framework of dynamic modeling of the driver-vehicle-road system. Approaches for categorizing the vehicle failure modes while driving on highways and the corresponding safety boundaries were comprehensively investigated based on vehicle system dynamics theory. Then, an overall crash probability model was formulated based on FTA considering the risks of 3 failure modes: losing steering capability, losing track-holding capability, and rear-end collision. The proposed method was implemented on a highway segment between Bengbu and Nanjing in China. A driver-vehicle-road multibody dynamics model was developed based on the 3D alignments of the Bengbu to Nanjing section of the Ning-Luo expressway using Carsim, and dynamics indices, such as the sideslip angle and yaw rate, were obtained. Then, the average crash probability of each road section was calculated with a fixed-length method. Finally, the average crash probability was validated against the crash frequency per kilometer to demonstrate the accuracy of the proposed method. The results of the regression analysis and correlation analysis indicated good consistency between the results of the safety evaluation and the crash data, and that the method outperformed the safety evaluation methods used in previous studies. The proposed method has the potential to be used in practical engineering applications to identify crash-prone locations and alignment deficiencies on highways in the planning and design phases, as well as those in service.
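
    Treating the three failure modes as independent basic events under an OR gate, the overall crash probability per road section takes the familiar form 1 - (1 - p1)(1 - p2)(1 - p3). The sketch below shows just that relation; the independence assumption and the example probabilities are illustrative, not values from the study.

```python
def overall_crash_probability(p_steering, p_track_holding, p_rear_end):
    """Top-event probability for three independent failure modes under an OR gate:
    losing steering capability, losing track-holding capability, rear-end collision."""
    return 1.0 - (1.0 - p_steering) * (1.0 - p_track_holding) * (1.0 - p_rear_end)

# Illustrative per-section probabilities only.
print(overall_crash_probability(1e-4, 5e-5, 2e-4))
```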

  6. A new state evaluation method of oil pump unit based on AHP and FCE

    NASA Astrophysics Data System (ADS)

    Lin, Yang; Liang, Wei; Qiu, Zeyang; Zhang, Meng; Lu, Wenqing

    2017-05-01

    In order to make an accurate state evaluation of an oil pump unit, a comprehensive evaluation index should be established. A multi-parameter state evaluation method for oil pump units is proposed in this paper. The oil pump unit is analyzed by Failure Mode and Effect Analysis (FMEA), so the evaluation index can be obtained based on the FMEA conclusions. The weights of different parameters in the evaluation index are discussed using the Analytic Hierarchy Process (AHP) with expert experience. According to the evaluation index and the weight of each parameter, the state evaluation is carried out by Fuzzy Comprehensive Evaluation (FCE), and the state is divided into five levels depending on the status value, an approach inspired by human body health. In order to verify the effectiveness and feasibility of the proposed method, a state evaluation of an oil pump used in a pump station is taken as an example.

  7. Performance Evaluation and Online Realization of Data-driven Normalization Methods Used in LC/MS based Untargeted Metabolomics Analysis.

    PubMed

    Li, Bo; Tang, Jing; Yang, Qingxia; Cui, Xuejiao; Li, Shuang; Chen, Sijie; Cao, Quanxing; Xue, Weiwei; Chen, Na; Zhu, Feng

    2016-12-13

    In untargeted metabolomics analysis, several factors (e.g., unwanted experimental & biological variations and technical errors) may hamper the identification of differential metabolic features, which requires data-driven normalization approaches before feature selection. So far, ≥16 normalization methods have been widely applied for processing LC/MS based metabolomics data. However, the performance and the sample size dependence of those methods have not yet been exhaustively compared, and no online tool for comparatively and comprehensively evaluating the performance of all 16 normalization methods has been provided. In this study, a comprehensive comparison of these methods was conducted. As a result, the 16 methods were categorized into three groups based on their normalization performances across various sample sizes. The VSN, the Log Transformation and the PQN were identified as the methods with the best normalization performance, while the Contrast method consistently underperformed across all sub-datasets of different benchmark data. Moreover, an interactive web tool comprehensively evaluating the performance of the 16 methods specifically for normalizing LC/MS based metabolomics data was constructed and hosted at http://server.idrb.cqu.edu.cn/MetaPre/. In summary, this study could serve as useful guidance for the selection of suitable normalization methods in analyzing LC/MS based metabolomics data.
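
    For one of the normalization families mentioned above, the sketch below implements probabilistic quotient normalization (PQN) in the form commonly described in the literature (reference spectrum from the across-sample median, per-sample dilution factor from the median quotient); the intensity matrix is synthetic placeholder data, and this is not claimed to be the exact implementation evaluated in the study.

      import numpy as np

      # PQN sketch for a feature-by-sample intensity matrix X
      # (rows = metabolic features, columns = samples); X below is random placeholder data.
      rng = np.random.default_rng(0)
      X = rng.lognormal(mean=2.0, sigma=0.5, size=(200, 8))

      def pqn_normalize(X):
          # 1) Integral (total-sum) normalization as a starting point.
          Xs = X / X.sum(axis=0, keepdims=True)
          # 2) Reference spectrum: median intensity of each feature across samples.
          ref = np.median(Xs, axis=1, keepdims=True)
          # 3) Per-sample dilution factor: median of the feature-wise quotients against the reference.
          quotients = Xs / ref
          dilution = np.median(quotients, axis=0, keepdims=True)
          # 4) Divide each sample by its dilution factor.
          return Xs / dilution

      Xn = pqn_normalize(X)
      print(Xn.shape)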

  8. Performance Evaluation and Online Realization of Data-driven Normalization Methods Used in LC/MS based Untargeted Metabolomics Analysis

    PubMed Central

    Li, Bo; Tang, Jing; Yang, Qingxia; Cui, Xuejiao; Li, Shuang; Chen, Sijie; Cao, Quanxing; Xue, Weiwei; Chen, Na; Zhu, Feng

    2016-01-01

    In untargeted metabolomics analysis, several factors (e.g., unwanted experimental & biological variations and technical errors) may hamper the identification of differential metabolic features, which requires data-driven normalization approaches before feature selection. So far, ≥16 normalization methods have been widely applied for processing LC/MS based metabolomics data. However, the performance and the sample size dependence of those methods have not yet been exhaustively compared, and no online tool for comparatively and comprehensively evaluating the performance of all 16 normalization methods has been provided. In this study, a comprehensive comparison of these methods was conducted. As a result, the 16 methods were categorized into three groups based on their normalization performances across various sample sizes. The VSN, the Log Transformation and the PQN were identified as the methods with the best normalization performance, while the Contrast method consistently underperformed across all sub-datasets of different benchmark data. Moreover, an interactive web tool comprehensively evaluating the performance of the 16 methods specifically for normalizing LC/MS based metabolomics data was constructed and hosted at http://server.idrb.cqu.edu.cn/MetaPre/. In summary, this study could serve as useful guidance for the selection of suitable normalization methods in analyzing LC/MS based metabolomics data. PMID:27958387

  9. Developing and using a rubric for evaluating evidence-based medicine point-of-care tools.

    PubMed

    Shurtz, Suzanne; Foster, Margaret J

    2011-07-01

    The research sought to establish a rubric for evaluating evidence-based medicine (EBM) point-of-care tools in a health sciences library. The authors searched the literature for EBM tool evaluations and found that most previous reviews were designed to evaluate the ability of an EBM tool to answer a clinical question. The researchers' goal was to develop and complete rubrics for assessing these tools based on criteria for a general evaluation of tools (reviewing content, search options, quality control, and grading) and criteria for an evaluation of clinical summaries (searching tools for treatments of common diagnoses and evaluating summaries for quality control). Differences between EBM tools' options, content coverage, and usability were minimal. However, the products' methods for locating and grading evidence varied widely in transparency and process. As EBM tools are constantly updating and evolving, evaluation of these tools needs to be conducted frequently. Standards for evaluating EBM tools need to be established, with one method being the use of objective rubrics. In addition, EBM tools need to provide more information about authorship, reviewers, methods for evidence collection, and grading system employed.

  10. Patch-based generation of a pseudo CT from conventional MRI sequences for MRI-only radiotherapy of the brain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andreasen, Daniel, E-mail: dana@dtu.dk; Van Leemput, Koen; Hansen, Rasmus H.

    Purpose: In radiotherapy (RT) based on magnetic resonance imaging (MRI) as the only modality, the information on electron density must be derived from the MRI scan by creating a so-called pseudo computed tomography (pCT). This is a nontrivial task, since the voxel-intensities in an MRI scan are not uniquely related to electron density. To solve the task, voxel-based or atlas-based models have typically been used. The voxel-based models require a specialized dual ultrashort echo time MRI sequence for bone visualization and the atlas-based models require deformable registrations of conventional MRI scans. In this study, we investigate the potential of a patch-based method for creating a pCT based on conventional T1-weighted MRI scans without using deformable registrations. We compare this method against two state-of-the-art methods within the voxel-based and atlas-based categories. Methods: The data consisted of CT and MRI scans of five cranial RT patients. To compare the performance of the different methods, a nested cross validation was done to find optimal model parameters for all the methods. Voxel-wise and geometric evaluations of the pCTs were done. Furthermore, a radiologic evaluation based on water equivalent path lengths was carried out, comparing the upper hemisphere of the head in the pCT and the real CT. Finally, the dosimetric accuracy was tested and compared for a photon treatment plan. Results: The pCTs produced with the patch-based method had the best voxel-wise, geometric, and radiologic agreement with the real CT, closely followed by the atlas-based method. In terms of the dosimetric accuracy, the patch-based method had average deviations of less than 0.5% in measures related to target coverage. Conclusions: We showed that a patch-based method could generate an accurate pCT based on conventional T1-weighted MRI sequences and without deformable registrations. In our evaluations, the method performed better than existing voxel-based and atlas-based methods and showed a promising potential for RT of the brain based only on MRI.

  11. A new wavelet transform to sparsely represent cortical current densities for EEG/MEG inverse problems.

    PubMed

    Liao, Ke; Zhu, Min; Ding, Lei

    2013-08-01

    The present study investigated the use of transform sparseness of cortical current density on the human brain surface to improve electroencephalography/magnetoencephalography (EEG/MEG) inverse solutions. Transform sparseness was assessed by evaluating the compressibility of cortical current densities in transform domains. To do that, a structure compression method from computer graphics was first adopted to compress cortical surface structure, either regular or irregular, into hierarchical multi-resolution meshes. Then, a new face-based wavelet method based on the generated multi-resolution meshes was proposed to compress current density functions defined on cortical surfaces. Twelve cortical surface models were built by three EEG/MEG software packages and their structural compressibility was evaluated and compared by the proposed method. Monte Carlo simulations were implemented to evaluate the performance of the proposed wavelet method in compressing various cortical current density distributions as compared to two other available vertex-based wavelet methods. The present results indicate that the face-based wavelet method can achieve higher transform sparseness than vertex-based wavelet methods. Furthermore, basis functions from the face-based wavelet method have lower coherence against typical EEG and MEG measurement systems than vertex-based wavelet methods. Both high transform sparseness and low-coherence measurements suggest that the proposed face-based wavelet method can improve the performance of L1-norm regularized EEG/MEG inverse solutions, which was further demonstrated in simulations and experimental setups using MEG data. Thus, this new transform on complicated cortical structure is promising to significantly advance EEG/MEG inverse source imaging technologies. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  12. Core Professionalism Education in Surgery: A Systematic Review.

    PubMed

    Sarıoğlu Büke, Akile; Karabilgin Öztürkçü, Özlem Sürel; Yılmaz, Yusuf; Sayek, İskender

    2018-03-15

    Background: Professionalism education is one of the major elements of surgical residency education. Aims: To evaluate the studies on core professionalism education programs in surgical professionalism education. Study Design: Systematic review. Methods: This systematic literature review was performed to analyze core professionalism programs for surgical residency education published in English with at least three of the following features: program developmental model/instructional design method, aims and competencies, methods of teaching, methods of assessment, and program evaluation model or method. A total of 27083 articles were retrieved using EBSCOHOST, PubMed, Science Direct, Web of Science, and manual search. Results: Eight articles met the selection criteria. The instructional design method was presented in only one article, which described the Analysis, Design, Development, Implementation, and Evaluation model. Six articles were based on the Accreditation Council for Graduate Medical Education criterion, although there was significant variability in content. The most common teaching method was role modeling with scenario- and case-based learning. A wide range of assessment methods for evaluating professionalism education were reported. The Kirkpatrick model was reported in one article as a method for program evaluation. Conclusion: It is suggested that for a core surgical professionalism education program, developmental/instructional design model, aims and competencies, content, teaching methods, assessment methods, and program evaluation methods/models should be well defined, and the content should be comparable.

  13. Evaluation of dysphagia in early stroke patients by bedside, endoscopic, and electrophysiological methods.

    PubMed

    Umay, Ebru Karaca; Unlu, Ece; Saylam, Guleser Kılıc; Cakci, Aytul; Korkmaz, Hakan

    2013-09-01

    We aimed in this study to evaluate dysphagia in early stroke patients using a bedside screening test and flexible fiberoptic endoscopic evaluation of swallowing (FFEES) and electrophysiological evaluation (EE) methods and to compare the effectiveness of these methods. Twenty-four patients who were hospitalized in our clinic within the first 3 months after stroke were included in this study. Patients were evaluated using a bedside screening test [including bedside dysphagia score (BDS), neurological examination dysphagia score (NEDS), and total dysphagia score (TDS)] and FFEES and EE methods. Patients were divided into normal-swallowing and dysphagia groups according to the results of the evaluation methods. Patients with dysphagia as determined by any of these methods were compared to the patients with normal swallowing based on the results of the other two methods. Based on the results of our study, a high BDS was positively correlated with dysphagia identified by FFEES and EE methods. Moreover, the FFEES and EE methods were positively correlated. There was no significant correlation between NEDS and TDS levels and either EE or FFEES method. Bedside screening tests should be used mainly as an initial screening test; then FFEES and EE methods should be combined in patients who show risks. This diagnostic algorithm may provide a practical and fast solution for selected stroke patients.

  14. Segmentation quality evaluation using region-based precision and recall measures for remote sensing images

    NASA Astrophysics Data System (ADS)

    Zhang, Xueliang; Feng, Xuezhi; Xiao, Pengfeng; He, Guangjun; Zhu, Liujun

    2015-04-01

    Segmentation of remote sensing images is a critical step in geographic object-based image analysis. Evaluating the performance of segmentation algorithms is essential to identify effective segmentation methods and optimize their parameters. In this study, we propose region-based precision and recall measures and use them to compare two image partitions for the purpose of evaluating segmentation quality. The two measures are calculated based on region overlapping and presented as a point or a curve in a precision-recall space, which can indicate segmentation quality in both geometric and arithmetic respects. Furthermore, the precision and recall measures are combined by using four different methods. We examine and compare the effectiveness of the combined indicators through geometric illustration, in an effort to reveal segmentation quality clearly and capture the trade-off between the two measures. In the experiments, we adopted the multiresolution segmentation (MRS) method for evaluation. The proposed measures are compared with four existing discrepancy measures to further confirm their capabilities. Finally, we suggest using a combination of the region-based precision-recall curve and the F-measure for supervised segmentation evaluation.
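
    The sketch below shows one plausible area-overlap formulation of region-based precision and recall for two label images, combined into an F-measure; it illustrates the idea of comparing partitions by region overlap but is not necessarily the exact definition used in the paper, and the toy label images are hypothetical.

      import numpy as np

      # One plausible area-overlap formulation of region-based precision/recall: each region in
      # one partition is matched to the region in the other partition with which it overlaps most,
      # and the matched overlap area is accumulated.
      def region_precision_recall(seg, ref):
          """seg, ref: 2D integer label images of the same shape (label 0 treated as background)."""
          def matched_overlap(a, b):
              total, matched = 0, 0
              for lab in np.unique(a):
                  if lab == 0:
                      continue
                  mask = (a == lab)
                  total += mask.sum()
                  labels_b, counts = np.unique(b[mask], return_counts=True)
                  keep = labels_b != 0
                  if keep.any():
                      matched += counts[keep].max()
              return matched, total

          m_p, t_p = matched_overlap(seg, ref)   # how much of each segment falls in its best reference region
          m_r, t_r = matched_overlap(ref, seg)   # how much of each reference region is covered by its best segment
          precision = m_p / t_p if t_p else 0.0
          recall = m_r / t_r if t_r else 0.0
          f = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
          return precision, recall, f

      seg = np.array([[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 3, 3], [3, 3, 3, 3]])
      ref = np.array([[1, 1, 1, 1], [1, 1, 1, 1], [2, 2, 3, 3], [2, 2, 3, 3]])
      print(region_precision_recall(seg, ref))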

  15. A probability-based multi-cycle sorting method for 4D-MRI: A simulation study

    PubMed Central

    Liang, Xiao; Yin, Fang-Fang; Liu, Yilin; Cai, Jing

    2016-01-01

    Purpose: To develop a novel probability-based sorting method capable of generating multiple breathing cycles of 4D-MRI images and to evaluate the performance of this new method by comparing it with conventional phase-based methods in terms of image quality and tumor motion measurement. Methods: Based on previous findings that the breathing motion probability density function (PDF) of a single breathing cycle is dramatically different from the true stabilized PDF that results from many breathing cycles, it is expected that a probability-based sorting method capable of generating multiple breathing cycles of 4D images may capture breathing variation information missing from conventional single-cycle sorting methods. The overall idea is to identify a few main breathing cycles (and their corresponding weightings) that can best represent the main breathing patterns of the patient and then reconstruct a set of 4D images for each of the identified main breathing cycles. This method is implemented in three steps: (1) the breathing signal is decomposed into individual breathing cycles, characterized by amplitude and period; (2) individual breathing cycles are grouped based on amplitude and period to determine the main breathing cycles. If a group contains more than 10% of all breathing cycles in a breathing signal, it is determined as a main breathing pattern group and is represented by the average of the individual breathing cycles in the group; (3) for each main breathing cycle, a set of 4D images is reconstructed using a result-driven sorting method adapted from our previous study. The probability-based sorting method was first tested on 26 patients’ breathing signals to evaluate its feasibility of improving the target motion PDF. The new method was subsequently tested for a sequential image acquisition scheme on the 4D digital extended cardiac torso (XCAT) phantom. Performance of the probability-based and conventional sorting methods was evaluated in terms of target volume precision and accuracy as measured by the 4D images, and also the accuracy of the average intensity projection (AIP) of the 4D images. Results: Probability-based sorting showed improved similarity of the breathing motion PDF from 4D images to the reference PDF compared to single-cycle sorting, indicated by the significant increase in Dice similarity coefficient (DSC) (probability-based sorting, DSC = 0.89 ± 0.03, and single-cycle sorting, DSC = 0.83 ± 0.05, p-value <0.001). Based on the simulation study on XCAT, the probability-based method outperforms the conventional phase-based methods in qualitative evaluation of motion artifacts and quantitative evaluation of tumor volume precision and accuracy, as well as the accuracy of the AIP of the 4D images. Conclusions: In this paper the authors demonstrated the feasibility of a novel probability-based multicycle 4D image sorting method. The authors’ preliminary results showed that the new method can improve the accuracy of the tumor motion PDF and the AIP of 4D images, presenting potential advantages over the conventional phase-based sorting method for radiation therapy motion management. PMID:27908178
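
    A minimal sketch of step (2) above, grouping individual breathing cycles by amplitude and period and keeping groups that contain more than 10% of all cycles as main breathing patterns; the bin widths and the synthetic cycle list are assumptions for illustration only.

      import numpy as np

      # Group individual breathing cycles by amplitude and period; groups holding more than 10%
      # of all cycles are treated as "main breathing cycles". Bin widths and data are hypothetical.
      rng = np.random.default_rng(1)
      amplitudes = rng.normal(10.0, 1.5, 200)   # mm, hypothetical
      periods = rng.normal(4.0, 0.5, 200)       # s, hypothetical

      amp_bin, per_bin = 1.0, 0.5
      keys = list(zip(np.floor(amplitudes / amp_bin).astype(int),
                      np.floor(periods / per_bin).astype(int)))

      groups = {}
      for i, k in enumerate(keys):
          groups.setdefault(k, []).append(i)

      n = len(keys)
      main_cycles = []
      for k, idx in groups.items():
          if len(idx) > 0.10 * n:               # main breathing pattern group
              main_cycles.append({
                  "weight": len(idx) / n,
                  "amplitude": float(np.mean(amplitudes[idx])),
                  "period": float(np.mean(periods[idx])),
              })

      for c in sorted(main_cycles, key=lambda c: -c["weight"]):
          print(c)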

  16. Learning predictive models that use pattern discovery--a bootstrap evaluative approach applied in organ functioning sequences.

    PubMed

    Toma, Tudor; Bosman, Robert-Jan; Siebes, Arno; Peek, Niels; Abu-Hanna, Ameen

    2010-08-01

    An important problem in the Intensive Care is how to predict, on a given day of stay, the eventual hospital mortality for a specific patient. A recent approach to solve this problem suggested the use of frequent temporal sequences (FTSs) as predictors. Methods following this approach were evaluated in the past by inducing a model from a training set and validating the prognostic performance on an independent test set. Although this evaluative approach addresses the validity of the specific models induced in an experiment, it falls short of evaluating the inductive method itself. To achieve this, one must account for the inherent sources of variation in the experimental design. The main aim of this work is to demonstrate a procedure based on bootstrapping, specifically the .632 bootstrap procedure, for evaluating inductive methods that discover patterns, such as FTSs. A second aim is to apply this approach to find out whether a recently suggested inductive method that discovers FTSs of organ functioning status is superior to a traditional method that does not use temporal sequences when compared on each successive day of stay at the Intensive Care Unit. The use of bootstrapping with logistic regression using pre-specified covariates is known in the statistical literature. Using inductive methods for prognostic models based on temporal sequence discovery within the bootstrap procedure is, however, novel, at least for predictive models in the Intensive Care. Our results of applying the bootstrap-based evaluative procedure demonstrate the superiority of the FTS-based inductive method over the traditional method in terms of discrimination as well as accuracy. In addition, we illustrate the insights gained by the analyst into the discovered FTSs from the bootstrap samples. Copyright 2010 Elsevier Inc. All rights reserved.
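
    A minimal sketch of the .632 bootstrap estimate used as the evaluation backbone above: the estimate mixes the apparent (resubstitution) error with the average out-of-bag error in a 0.368/0.632 ratio. The toy data and the stand-in induction procedure are hypothetical and do not represent the FTS-discovery method of the paper.

      import numpy as np

      # .632 bootstrap error estimate for an arbitrary induction procedure `fit`/`error`.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(300, 3))
      y = (X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.5, size=300) > 0).astype(int)

      def fit(Xtr, ytr):
          # Toy "model" (majority label where the first feature is positive); stands in for the
          # actual induction method being evaluated.
          return {"thr": 0.0, "pos": ytr[Xtr[:, 0] > 0].mean() >= 0.5}

      def error(model, Xte, yte):
          pred = np.where(Xte[:, 0] > model["thr"], int(model["pos"]), 1 - int(model["pos"]))
          return float(np.mean(pred != yte))

      B, n = 200, len(y)
      apparent = error(fit(X, y), X, y)             # resubstitution (training) error
      oob_errors = []
      for _ in range(B):
          idx = rng.integers(0, n, n)               # bootstrap sample with replacement
          oob = np.setdiff1d(np.arange(n), idx)     # out-of-bag cases
          if oob.size == 0:
              continue
          model = fit(X[idx], y[idx])
          oob_errors.append(error(model, X[oob], y[oob]))

      err_632 = 0.368 * apparent + 0.632 * np.mean(oob_errors)
      print(round(err_632, 3))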

  17. Economic evaluation of diagnostic methods used in dentistry. A systematic review.

    PubMed

    Christell, Helena; Birch, Stephen; Horner, Keith; Lindh, Christina; Rohlin, Madeleine

    2014-11-01

    To review the literature on economic evaluations of diagnostic methods used in dentistry. Four databases (MEDLINE, Web of Science, The Cochrane Library, the NHS Economic Evaluation Database) were searched for studies, complemented by hand searching, until February 2013. Two authors independently screened all titles or abstracts and then applied inclusion and exclusion criteria to select full-text publications published in English, which reported an economic evaluation comparing at least two alternative methods. Studies of diagnostic methods were assessed by four reviewers using a protocol based on the QUADAS tool regarding diagnostic methods and a checklist for economic evaluations. The results of the data extraction were summarized in a structured table and as a narrative description. From 476 identified full-text publications, 160 were considered to be economic evaluations. Only 12 studies (7%) were on diagnostic methods, whilst 78 studies (49%) were on prevention and 70 (40%) on treatment. Among studies on diagnostic methods, there was methodological between-study heterogeneity regarding the diagnostic method analysed and the type of economic evaluation addressed. Generally, the choice of economic evaluation method was not justified and the perspective of the study was not stated. The costing of diagnostic methods varied. A small body of literature addresses economic evaluation of diagnostic methods in dentistry. Thus, there is a need for studies from various perspectives with well-defined research questions and measures of cost and effectiveness. Economic resources in healthcare are finite. For diagnostic methods, an understanding of efficacy provides only part of the information needed for evidence-based practice. This study highlighted a paucity of economic evaluations of diagnostic methods used in dentistry, indicating that much of what we practise lacks sufficient evidence.

  18. Comparing team-based and mixed active-learning methods in an ambulatory care elective course.

    PubMed

    Zingone, Michelle M; Franks, Andrea S; Guirguis, Alexander B; George, Christa M; Howard-Thompson, Amanda; Heidel, Robert E

    2010-11-10

    To assess students' performance and perceptions of team-based and mixed active-learning methods in 2 ambulatory care elective courses, and to describe faculty members' perceptions of team-based learning. Using the 2 teaching methods, students' grades were compared. Students' perceptions were assessed through 2 anonymous course evaluation instruments. Faculty members who taught courses using the team-based learning method were surveyed regarding their impressions of team-based learning. The ambulatory care course was offered to 64 students using team-based learning (n = 37) and mixed active learning (n = 27) formats. The mean quality points earned were 3.7 (team-based learning) and 3.3 (mixed active learning), p < 0.001. Course evaluations for both courses were favorable. All faculty members who used the team-based learning method reported that they would consider using team-based learning in another course. Students were satisfied with both teaching methods; however, student grades were significantly higher in the team-based learning course. Faculty members recognized team-based learning as an effective teaching strategy for small-group active learning.

  19. Entrepreneur environment management behavior evaluation method derived from environmental economy.

    PubMed

    Zhang, Lili; Hou, Xilin; Xi, Fengru

    2013-12-01

    An evaluation system can encourage and guide entrepreneurs and impel them to perform well in environmental management. An evaluation method based on advantage structure is established and used to analyze entrepreneurs' environmental management behavior in China. An evaluation index system for entrepreneurs' environmental management behavior is constructed based on empirical research. An evaluation method for entrepreneurs is put forward from the perspective of objective programming theory, taking the minimized objective function as the comprehensive evaluation result and identifying the disadvantage structure pattern, in order to alert the entrepreneurs concerned. Application research shows that the overall environmental management behavior of Chinese entrepreneurs is good; specifically, environmental strategic behavior is the best, environmental management behavior is second, and cultural behavior ranks last. The application results show the efficiency and feasibility of this method. Copyright © 2013 The Research Centre for Eco-Environmental Sciences, Chinese Academy of Sciences. Published by Elsevier B.V. All rights reserved.

  20. Evaluation and integration of existing methods for computational prediction of allergens

    PubMed Central

    2013-01-01

    Background: Allergy involves a series of complex reactions and factors that contribute to the development of the disease and triggering of the symptoms, including rhinitis, asthma, atopic eczema, skin sensitivity, and even acute and fatal anaphylactic shock. Prediction and evaluation of potential allergenicity is of importance for the safety evaluation of foods and other environmental factors. Although several computational approaches for assessing the potential allergenicity of proteins have been developed, their performance and relative merits and shortcomings have not been compared systematically. Results: To evaluate and improve the existing methods for allergen prediction, we collected an up-to-date definitive dataset consisting of 989 known allergens and massive putative non-allergens. The three most widely used allergen computational prediction approaches, including sequence-, motif- and SVM-based (Support Vector Machine) methods, were systematically compared using the defined parameters, and we found that the SVM-based method outperformed the other two methods with higher accuracy and specificity. The sequence-based method with the criteria defined by FAO/WHO (FAO: Food and Agriculture Organization of the United Nations; WHO: World Health Organization) has a high sensitivity of over 98% but a low specificity. The advantage of the motif-based method is the ability to visualize the key motif within the allergen. Notably, the performances of the sequence-based method defined by FAO/WHO and the motif eliciting strategy could be improved by the optimization of parameters. To facilitate allergen prediction, we integrated these three methods in a web-based application, proAP, which provides a global search of the known allergens and a powerful tool for allergen prediction. Flexible parameter setting and batch prediction were also implemented. The proAP can be accessed at http://gmobl.sjtu.edu.cn/proAP/main.html. Conclusions: This study comprehensively evaluated sequence-, motif- and SVM-based computational prediction approaches for allergens and optimized their parameters to obtain better performance. These findings may provide helpful guidance for researchers in allergen prediction. Furthermore, we integrated these methods into a web application, proAP, greatly facilitating customizable allergen search and prediction for users. PMID:23514097

  1. Evaluation and integration of existing methods for computational prediction of allergens.

    PubMed

    Wang, Jing; Yu, Yabin; Zhao, Yunan; Zhang, Dabing; Li, Jing

    2013-01-01

    Allergy involves a series of complex reactions and factors that contribute to the development of the disease and triggering of the symptoms, including rhinitis, asthma, atopic eczema, skin sensitivity, and even acute and fatal anaphylactic shock. Prediction and evaluation of potential allergenicity is of importance for the safety evaluation of foods and other environmental factors. Although several computational approaches for assessing the potential allergenicity of proteins have been developed, their performance and relative merits and shortcomings have not been compared systematically. To evaluate and improve the existing methods for allergen prediction, we collected an up-to-date definitive dataset consisting of 989 known allergens and massive putative non-allergens. The three most widely used allergen computational prediction approaches, including sequence-, motif- and SVM-based (Support Vector Machine) methods, were systematically compared using the defined parameters, and we found that the SVM-based method outperformed the other two methods with higher accuracy and specificity. The sequence-based method with the criteria defined by FAO/WHO (FAO: Food and Agriculture Organization of the United Nations; WHO: World Health Organization) has a high sensitivity of over 98% but a low specificity. The advantage of the motif-based method is the ability to visualize the key motif within the allergen. Notably, the performances of the sequence-based method defined by FAO/WHO and the motif eliciting strategy could be improved by the optimization of parameters. To facilitate allergen prediction, we integrated these three methods in a web-based application, proAP, which provides a global search of the known allergens and a powerful tool for allergen prediction. Flexible parameter setting and batch prediction were also implemented. The proAP can be accessed at http://gmobl.sjtu.edu.cn/proAP/main.html. This study comprehensively evaluated sequence-, motif- and SVM-based computational prediction approaches for allergens and optimized their parameters to obtain better performance. These findings may provide helpful guidance for researchers in allergen prediction. Furthermore, we integrated these methods into a web application, proAP, greatly facilitating customizable allergen search and prediction for users.

  2. Evaluation of the user experience of "astronaut training device": an immersive, vr-based, motion-training system

    NASA Astrophysics Data System (ADS)

    Yue, Kang; Wang, Danli; Yang, Xinpan; Hu, Haichen; Liu, Yuqing; Zhu, Xiuqing

    2016-10-01

    To date, VR-based training systems have differed according to their application fields. Therefore, the characteristics of the application field should be taken into consideration and different evaluation methods adopted when evaluating the user experience of these training systems. In this paper, we propose a method to evaluate the user experience of a virtual astronaut training system, and we design an experiment based on the proposed method. The proposed method takes learning performance as one of the evaluation dimensions and combines it with other dimensions such as presence, immersion, pleasure, satisfaction, and fatigue to evaluate the user experience of the system. We collected subjective and objective data: the subjective data came mainly from a questionnaire designed according to the evaluation dimensions and from user interviews conducted before and after the experiment, while the objective data consisted of electrocardiogram (ECG) recordings, reaction time, number of reaction errors, and video recorded during the experiment. For data analysis, we calculated the integrated score of each evaluation dimension using factor analysis. To improve the credibility of the assessment, the ECG signals and reaction test data collected before and after the experiment were used to verify the changes in fatigue during the experiment, and typical behavioral features extracted from the experiment video were used to explain the results of the subjective questionnaire. Experimental results show that the system provides a good user experience and learning performance, although slight visual fatigue exists after the experiment.

  3. In-service teachers' perceptions of project-based learning.

    PubMed

    Habók, Anita; Nagy, Judit

    2016-01-01

    The study analyses teachers' perceptions of methods, teacher roles, success and evaluation in PBL and traditional classroom instruction. The analysis is based on empirical data collected in primary schools and vocational secondary schools. An analysis of 109 questionnaires revealed numerous differences based on degree of experience and type of school. In general, project-based methods were preferred among teachers, who mostly perceived themselves as facilitators and considered motivation and transmission of values central to their work. Teachers appeared not to capitalize on the use of ICT tools or emotions. Students actively participated in the evaluation process via oral evaluation.

  4. A conversation-based process tracing method for use with naturalistic decisions: an evaluation study.

    PubMed

    Williamson, J; Ranyard, R; Cuthbert, L

    2000-05-01

    This study is an evaluation of a process tracing method developed for naturalistic decisions, in this case a consumer choice task. The method is based on Huber et al.'s (1997) Active Information Search (AIS) technique, but develops it by providing spoken rather than written answers to respondents' questions, and by including think aloud instructions. The technique is used within a conversation-based situation, rather than the respondent thinking aloud 'into an empty space', as is conventionally the case in think aloud techniques. The method results in a concurrent verbal protocol as respondents make their decisions, and a retrospective report in the form of a post-decision summary. The method was found to be virtually non-reactive in relation to think aloud, although the variable of Preliminary Attribute Elicitation showed some evidence of reactivity. This was a methodological evaluation, and as such the data reported are essentially descriptive. Nevertheless, the data obtained indicate that the method is capable of producing information about decision processes which could have theoretical importance in terms of evaluating models of decision-making.

  5. Multi-criteria evaluation methods in the production scheduling

    NASA Astrophysics Data System (ADS)

    Kalinowski, K.; Krenczyk, D.; Paprocka, I.; Kempa, W.; Grabowik, C.

    2016-08-01

    The paper presents a discussion on the practical application of different methods of multi-criteria evaluation in the process of scheduling in manufacturing systems. Among the methods, two main groups are specified: methods based on a distance function (using a metacriterion) and methods that create a Pareto set of possible solutions. The basic criteria used for scheduling were also described. The overall procedure of the evaluation process in production scheduling was presented. It takes into account the actions in the whole scheduling process and the participation of the human decision maker (HDM). The specified HDM decisions are related to creating and editing a set of evaluation criteria, selecting the multi-criteria evaluation method, interacting in the searching process, using informal criteria, and making final changes in the schedule for implementation. Depending on the need, the scheduling process may be completely or partially automated. Full automation is possible in the case of a metacriterion-based objective function; if a Pareto set is selected, the final decision has to be made by the HDM.
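
    A minimal sketch of the distance-function (metacriterion) group mentioned above: criteria values for candidate schedules are normalized and a weighted distance to the ideal point is minimized. The criteria, weights, and schedule values are hypothetical placeholders.

      import numpy as np

      # Distance-function metacriterion for comparing schedules; all criteria are minimized here.
      criteria = ["makespan", "total_tardiness", "mean_flow_time"]   # hypothetical criteria
      weights = np.array([0.5, 0.3, 0.2])

      # Rows: candidate schedules, columns: criteria values (hypothetical).
      S = np.array([[120.0, 15.0, 40.0],
                    [110.0, 30.0, 42.0],
                    [130.0,  5.0, 38.0]])

      S_norm = (S - S.min(axis=0)) / (S.max(axis=0) - S.min(axis=0))   # 0 = ideal, 1 = worst
      metacriterion = np.sqrt((weights * S_norm ** 2).sum(axis=1))     # weighted distance to ideal

      best = int(np.argmin(metacriterion))
      print("metacriterion values:", np.round(metacriterion, 3), "best schedule index:", best)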

  6. Non-cultural methods of human microflora evaluation for the benefit of crew medical control in confined habitat

    NASA Astrophysics Data System (ADS)

    Viacheslav, Ilyin; Lana, Moukhamedieva; Georgy, Osipov; Aleksey, Batov; Zoya, Soloviova; Robert, Mardanov; Yana, Panina; Anna, Gegenava

    2011-05-01

    Ongoing monitoring of human microflora is a great problem not only for space medicine but also for practical health care. For many reasons, its realization by classical bacteriological methods is difficult or impossible in practical application. To evaluate non-culture-based methods of microbial control for crews in a confined habitat, we assessed two different methods. The first method is based on digital processing of microbial images obtained after Gram staining of microbial material from a natural sample. In this way, the ratio of Gram-positive to Gram-negative microbes can be obtained and rods can be differentiated from cocci, which is necessary for the primary evaluation of the human microbial cenosis in remote confined habitats. The other non-culture method of human microflora evaluation is gas chromatography-mass spectrometry (GC-MS) analysis of swabs gathered from different body sites. GC-MS testing of swabs allows the quantitative and species composition of the microflora to be assessed based on analysis of specific lipid markers.

  7. Study on process evaluation model of students' learning in practical course

    NASA Astrophysics Data System (ADS)

    Huang, Jie; Liang, Pei; Shen, Wei-min; Ye, Youxiang

    2017-08-01

    In practical course teaching based on the project object method, traditional evaluation methods, which include class attendance, assignments, and exams, fail to give undergraduate students incentives to learn innovatively and autonomously. In this paper, elements such as creative innovation, teamwork, documentation, and reporting were incorporated into the process evaluation method, and a process evaluation model was set up. Educational practice shows that the evaluation model makes the process evaluation of students' learning more comprehensive, accurate, and fair.

  8. Correlation of Simulation Examination to Written Test Scores for Advanced Cardiac Life Support Testing: Prospective Cohort Study.

    PubMed

    Strom, Suzanne L; Anderson, Craig L; Yang, Luanna; Canales, Cecilia; Amin, Alpesh; Lotfipour, Shahram; McCoy, C Eric; Osborn, Megan Boysen; Langdorf, Mark I

    2015-11-01

    Traditional Advanced Cardiac Life Support (ACLS) courses are evaluated using written multiple-choice tests. High-fidelity simulation is a widely used adjunct to didactic content, and has been used in many specialties as a training resource as well as an evaluative tool. There are no data to our knowledge that compare simulation examination scores with written test scores for ACLS courses. To compare and correlate a novel high-fidelity simulation-based evaluation with traditional written testing for senior medical students in an ACLS course. We performed a prospective cohort study to determine the correlation between simulation-based evaluation and traditional written testing in a medical school simulation center. Students were tested on a standard acute coronary syndrome/ventricular fibrillation cardiac arrest scenario. Our primary outcome measure was correlation of exam results for 19 volunteer fourth-year medical students after a 32-hour ACLS-based Resuscitation Boot Camp course. Our secondary outcome was comparison of simulation-based vs. written outcome scores. The composite average score on the written evaluation was substantially higher (93.6%) than the simulation performance score (81.3%, absolute difference 12.3%, 95% CI [10.6-14.0%], p<0.00005). We found a statistically significant moderate correlation between simulation scenario test performance and traditional written testing (Pearson r=0.48, p=0.04), validating the new evaluation method. Simulation-based ACLS evaluation methods correlate with traditional written testing and demonstrate resuscitation knowledge and skills. Simulation may be a more discriminating and challenging testing method, as students scored higher on written evaluation methods compared to simulation.

  9. Study on an Air Quality Evaluation Model for Beijing City Under Haze-Fog Pollution Based on New Ambient Air Quality Standards

    PubMed Central

    Li, Li; Liu, Dong-Jun

    2014-01-01

    Since 2012, China has been facing haze-fog weather conditions, and haze-fog pollution and PM2.5 have become hot topics. It is necessary to evaluate and analyze the ecological status of China's air environment, which is of great significance for environmental protection measures. In this study, the current situation of haze-fog pollution in China was analyzed first, and the new Ambient Air Quality Standards were introduced. For the issue of air quality evaluation, a comprehensive evaluation model based on an entropy weighting method and the nearest neighbor method was developed. The entropy weighting method was used to determine the weights of the indicators, and the nearest neighbor method was utilized to evaluate the air quality levels. The comprehensive evaluation model was then applied to practical air quality evaluation problems in Beijing to analyze the haze-fog pollution. Two simulation experiments were implemented in this study. One experiment included the indicator PM2.5 and was carried out based on the new Ambient Air Quality Standards (GB 3095-2012); the other experiment excluded PM2.5 and was carried out based on the old Ambient Air Quality Standards (GB 3095-1996). Their results were compared, and the simulation results showed that PM2.5 is an important indicator of air quality and that the evaluation results under the new Air Quality Standards were more scientific than those under the old ones. The haze-fog pollution situation in Beijing was also analyzed based on these results, and corresponding management measures were suggested. PMID:25170682
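
    A minimal sketch of the entropy weighting step described above, computing objective indicator weights from an alternatives-by-indicators matrix; the pollutant matrix is a hypothetical placeholder rather than the Beijing data.

      import numpy as np

      # Entropy weighting: objective indicator weights from an (alternatives x indicators) matrix.
      # Hypothetical concentrations, e.g. PM2.5, PM10, SO2, NO2 for several days.
      X = np.array([[35.0,  60.0, 10.0,  80.0],
                    [80.0, 120.0, 15.0,  95.0],
                    [150.0, 180.0, 25.0, 110.0],
                    [55.0,  90.0, 12.0,  85.0]])

      n, m = X.shape
      P = X / X.sum(axis=0, keepdims=True)            # proportion of each alternative per indicator
      k = 1.0 / np.log(n)
      with np.errstate(divide="ignore", invalid="ignore"):
          plogp = np.where(P > 0, P * np.log(P), 0.0)
      e = -k * plogp.sum(axis=0)                      # entropy of each indicator
      w = (1.0 - e) / (1.0 - e).sum()                 # entropy weights
      print("entropy:", np.round(e, 3), "weights:", np.round(w, 3))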

  10. A novel scene-based non-uniformity correction method for SWIR push-broom hyperspectral sensors

    NASA Astrophysics Data System (ADS)

    Hu, Bin-Lin; Hao, Shi-Jing; Sun, De-Xin; Liu, Yin-Nian

    2017-09-01

    A novel scene-based non-uniformity correction (NUC) method for short-wavelength infrared (SWIR) push-broom hyperspectral sensors is proposed and evaluated. This method relies on the assumption that for each band there will be ground objects with similar reflectance to form uniform regions when a sufficient number of scanning lines are acquired. The uniform regions are extracted automatically through a sorting algorithm, and are used to compute the corresponding NUC coefficients. SWIR hyperspectral data from airborne experiment are used to verify and evaluate the proposed method, and results show that stripes in the scenes have been well corrected without any significant information loss, and the non-uniformity is less than 0.5%. In addition, the proposed method is compared to two other regular methods, and they are evaluated based on their adaptability to the various scenes, non-uniformity, roughness and spectral fidelity. It turns out that the proposed method shows strong adaptability, high accuracy and efficiency.

  11. Indicators and Metrics for Evaluating the Sustainability of Chemical Processes

    EPA Science Inventory

    A metric-based method, called GREENSCOPE, has been developed for evaluating process sustainability. Using lab-scale information and engineering assumptions, the method evaluates full-scale representations of processes in environmental, efficiency, energy and economic areas. The m...

  12. [Discussion on Quality Evaluation Method of Medical Device During Life-Cycle in Operation Based on the Analytic Hierarchy Process].

    PubMed

    Zheng, Caixian; Zheng, Kun; Shen, Yunming; Wu, Yunyun

    2016-01-01

    The content related to quality during the life cycle in operation of a medical device includes daily use, repair volume, preventive maintenance, quality control and adverse event monitoring. In view of this, this article discusses a quality evaluation method for medical devices during their life cycle in operation based on the Analytic Hierarchy Process (AHP). The presented method is proved to be effective by evaluating patient monitors as an example. The presented method can promote and guide device quality control work, and it can provide valuable input to decisions about the purchase of new devices.

  13. Core Professionalism Education in Surgery: A Systematic Review

    PubMed Central

    Sarıoğlu Büke, Akile; Karabilgin Öztürkçü, Özlem Sürel; Yılmaz, Yusuf; Sayek, İskender

    2018-01-01

    Background: Professionalism education is one of the major elements of surgical residency education. Aims: To evaluate the studies on core professionalism education programs in surgical professionalism education. Study Design: Systematic review. Methods: This systematic literature review was performed to analyze core professionalism programs for surgical residency education published in English with at least three of the following features: program developmental model/instructional design method, aims and competencies, methods of teaching, methods of assessment, and program evaluation model or method. A total of 27083 articles were retrieved using EBSCOHOST, PubMed, Science Direct, Web of Science, and manual search. Results: Eight articles met the selection criteria. The instructional design method was presented in only one article, which described the Analysis, Design, Development, Implementation, and Evaluation model. Six articles were based on the Accreditation Council for Graduate Medical Education criterion, although there was significant variability in content. The most common teaching method was role modeling with scenario- and case-based learning. A wide range of assessment methods for evaluating professionalism education were reported. The Kirkpatrick model was reported in one article as a method for program evaluation. Conclusion: It is suggested that for a core surgical professionalism education program, developmental/instructional design model, aims and competencies, content, teaching methods, assessment methods, and program evaluation methods/models should be well defined, and the content should be comparable. PMID:29553464

  14. Quantitative evaluation methods of skin condition based on texture feature parameters.

    PubMed

    Pang, Hui; Chen, Tianhua; Wang, Xiaoyi; Chang, Zhineng; Shao, Siqi; Zhao, Jing

    2017-03-01

    In order to quantitatively evaluate the improvement of skin condition after using skin care products and beauty treatments, a convenient, fast, and non-destructive quantitative evaluation method for skin surface state and texture is presented. Human skin images were collected by image sensors. Firstly, a median filter with a 3 × 3 window is applied, and then the locations of hair pixels on the skin are accurately detected according to the gray mean value and color information. Bilinear interpolation is used to modify the gray values of the hair pixels in order to eliminate the negative effect of noise and fine hairs on the texture. After this preprocessing, the gray level co-occurrence matrix (GLCM) is calculated. On this basis, four characteristic parameters (the angular second moment, contrast, entropy, and correlation) and their mean values are calculated at 45° intervals. A quantitative evaluation model of skin texture based on the GLCM is established, which can calculate comprehensive parameters of skin condition. Experiments show that the evaluation results of this method are consistent both with biochemical skin evaluation indicators and with human visual assessment. This method overcomes the shortcomings of the biochemical evaluation method (skin damage and long waiting times) as well as the subjectivity and fuzziness of visual evaluation, achieving a non-destructive, rapid and quantitative evaluation of skin condition. It can be used for health assessment or classification of skin condition, and it can quantitatively evaluate subtle improvements in skin condition after using skin care products or beauty treatments.
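
    The sketch below illustrates the GLCM-based texture parameters named above (angular second moment, contrast, entropy, correlation), averaged over the four standard directions; the small random image and the 8-level quantization stand in for a preprocessed skin image and are not the paper's data.

      import numpy as np

      # Build a symmetric, normalized co-occurrence matrix for one pixel offset.
      def glcm(img, levels, dx, dy):
          M = np.zeros((levels, levels), dtype=float)
          h, w = img.shape
          for y in range(max(0, -dy), h - max(0, dy)):
              for x in range(max(0, -dx), w - max(0, dx)):
                  M[img[y, x], img[y + dy, x + dx]] += 1
          M += M.T                      # make the matrix symmetric
          return M / M.sum()

      # Angular second moment, contrast, entropy, and correlation from a normalized GLCM.
      def glcm_features(P):
          i, j = np.indices(P.shape)
          asm = (P ** 2).sum()
          contrast = (P * (i - j) ** 2).sum()
          entropy = -(P[P > 0] * np.log(P[P > 0])).sum()
          mu_i, mu_j = (i * P).sum(), (j * P).sum()
          sd_i = np.sqrt(((i - mu_i) ** 2 * P).sum())
          sd_j = np.sqrt(((j - mu_j) ** 2 * P).sum())
          correlation = ((i - mu_i) * (j - mu_j) * P).sum() / (sd_i * sd_j)
          return asm, contrast, entropy, correlation

      rng = np.random.default_rng(2)
      img = rng.integers(0, 8, size=(64, 64))            # 8-level quantized placeholder image
      offsets = [(1, 0), (1, 1), (0, 1), (-1, 1)]        # 0, 45, 90, 135 degree directions
      feats = np.mean([glcm_features(glcm(img, 8, dx, dy)) for dx, dy in offsets], axis=0)
      print("ASM, contrast, entropy, correlation:", np.round(feats, 4))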

  15. Develop a new testing and evaluation protocol to assess flexbase performance using strength of soil binder.

    DOT National Transportation Integrated Search

    2008-01-01

    This research involved a detailed laboratory study of a new test method for evaluating road base materials based on the strength of the soil binder. In this test method, small test specimens (5.0 in length and 0.75 in square cross section) of binde...

  16. Empirical research in service engineering based on AHP and fuzzy methods

    NASA Astrophysics Data System (ADS)

    Zhang, Yanrui; Cao, Wenfu; Zhang, Lina

    2015-12-01

    In recent years, the management consulting industry has been developing rapidly worldwide. Taking a large management consulting company as the research object, this paper establishes an index system for consulting service quality and, based on a customer satisfaction survey, evaluates the service quality of the consulting company using AHP and fuzzy comprehensive evaluation methods.

  17. A Strengths-Based Group Intervention for Women Who Experienced Child Sexual Abuse

    ERIC Educational Resources Information Center

    Walker-Williams, Hayley J.; Fouché, Ansie

    2017-01-01

    Purpose: This study evaluated the benefits of a "survivor to thriver" strengths-based group intervention program to facilitate posttraumatic growth in women survivors of child sexual abuse. Method: A quasi-experimental, one group, pretest, posttest, time-delay design was employed using qualitative methods to evaluate the benefits of the…

  18. A novel, image analysis-based method for the evaluation of in vitro antagonism.

    PubMed

    Szekeres, András; Leitgeb, Balázs; Kredics, László; Manczinger, László; Vágvölgyi, Csaba

    2006-06-01

    A novel method is proposed for the accurate evaluation of in vitro antagonism. Based on the measurement of areas of the fungal colonies, biocontrol indices were calculated, which are characteristic to the antagonistic Trichoderma strains. These indices provide a useful tool to describe the biocontrol abilities of fungi.

  19. COMPARISON OF TWO METHODS FOR DETECTION OF GIARDIA CYSTS AND CRYTOSPORIDIUM OOCYSTS IN WATER

    EPA Science Inventory

    The steps of two immunofluorescent-antibody-based detection methods were evaluated for their efficiencies in detecting Giardia cysts and Cryptosporidium oocysts. The two methods evaluated were the American Society for Testing and Materials proposed test method for Giardia cysts a...

  20. Evaluation and recommendation of sensitivity analysis methods for application to Stochastic Human Exposure and Dose Simulation models.

    PubMed

    Mokhtari, Amirhossein; Christopher Frey, H; Zheng, Junyu

    2006-11-01

    Sensitivity analyses of exposure or risk models can help identify the most significant factors to aid in risk management or to prioritize additional research to reduce uncertainty in the estimates. However, sensitivity analysis is challenged by non-linearity, interactions between inputs, and multiple days or time scales. Selected sensitivity analysis methods are evaluated with respect to their applicability to human exposure models with such features using a testbed. The testbed is a simplified version of a US Environmental Protection Agency Stochastic Human Exposure and Dose Simulation (SHEDS) model. The methods evaluated include Pearson and Spearman correlation, sample and rank regression, analysis of variance, the Fourier amplitude sensitivity test (FAST), and Sobol's method. The first five methods are known as "sampling-based" techniques, whereas the latter two methods are known as "variance-based" techniques. The main objective of the test cases was to identify the main and total contributions of individual inputs to the output variance. Sobol's method and FAST directly quantified these measures of sensitivity. Results show that the sensitivity of an input typically changed when evaluated under different time scales (e.g., daily versus monthly). All methods provided similar insights regarding less important inputs; however, Sobol's method and FAST provided more robust insights with respect to the sensitivity of important inputs compared to the sampling-based techniques. Thus, the sampling-based methods can be used in a screening step to identify unimportant inputs, followed by application of more computationally intensive refined methods to a smaller set of inputs. The implications of time variation in sensitivity results for risk management are briefly discussed.
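
    As an illustration of the sampling-based screening step suggested above, the sketch below computes Spearman rank correlations between each sampled input and the model output; the toy exposure model and inputs are hypothetical, and ties are ignored in the rank computation for simplicity.

      import numpy as np

      # Sampling-based screening: Spearman rank correlation of each input against the output,
      # used to flag clearly unimportant inputs before applying variance-based methods.
      def rank(a):
          order = np.argsort(a)
          r = np.empty_like(order, dtype=float)
          r[order] = np.arange(len(a))
          return r

      def spearman(x, y):
          rx, ry = rank(x), rank(y)
          rx -= rx.mean()
          ry -= ry.mean()
          return float((rx * ry).sum() / np.sqrt((rx ** 2).sum() * (ry ** 2).sum()))

      rng = np.random.default_rng(3)
      n = 2000
      X = rng.uniform(size=(n, 4))                    # four hypothetical exposure-model inputs
      y = 3 * X[:, 0] ** 2 + X[:, 1] + 0.1 * X[:, 2] + rng.normal(scale=0.05, size=n)  # toy model

      for i in range(X.shape[1]):
          print(f"input {i}: rho = {spearman(X[:, i], y):+.3f}")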

  1. Evaluation method on steering for the shape-shifting robot in different configurations

    NASA Astrophysics Data System (ADS)

    Chang, Jian; Li, Bin; Wang, Chong; Zheng, Huaibing; Li, Zhiqiang

    2016-01-01

    Existing methods for evaluating steering are qualitative, which makes the results inaccurate and fuzzy and reduces the efficiency of task execution. Therefore, a quantitative evaluation method for the shape-shifting robot in different configurations is proposed. Compared with the traditional evaluation method, the most important aspects that influence the steering ability of the robot in different configurations are investigated in detail, including energy, angular velocity, time, and space. In order to improve the robustness of the system, both ideal and slippage conditions are considered in the mathematical model. In contrast to traditional weight determination methods, the steering evaluation combines subjective and objective weighting methods. The subjective weighting method reflects the preferences of the experts and is based on a five-grade scale. The objective weighting method uses information entropy to determine the factor weights. Using sensors mounted on the robot, the contact force between the track grousers and the ground and the intrinsic motion characteristics of the robot are obtained, and experiments are carried out to verify the proposed algorithm with the robot in different common configurations. The proposed method resolves the fuzziness and inaccuracy of the existing evaluation approach, so operators can choose the most suitable robot configuration to fulfil different tasks more quickly and simply.

  2. A shape-based quality evaluation and reconstruction method for electrical impedance tomography.

    PubMed

    Antink, Christoph Hoog; Pikkemaat, Robert; Malmivuo, Jaakko; Leonhardt, Steffen

    2015-06-01

    Linear methods of reconstruction play an important role in medical electrical impedance tomography (EIT) and there is a wide variety of algorithms based on several assumptions. With the Graz consensus reconstruction algorithm for EIT (GREIT), a novel linear reconstruction algorithm as well as a standardized framework for evaluating and comparing methods of reconstruction were introduced that found widespread acceptance in the community. In this paper, we propose a two-sided extension of this concept by first introducing a novel method of evaluation. Instead of being based on point-shaped resistivity distributions, we use 2759 pairs of real lung shapes for evaluation that were automatically segmented from human CT data. Necessarily, the figures of merit defined in GREIT were adjusted. Second, a linear method of reconstruction that uses orthonormal eigenimages as training data and a tunable desired point spread function are proposed. Using our novel method of evaluation, this approach is compared to the classical point-shaped approach. Results show that most figures of merit improve with the use of eigenimages as training data. Moreover, the possibility of tuning the reconstruction by modifying the desired point spread function is shown. Finally, the reconstruction of real EIT data shows that higher contrasts and fewer artifacts can be achieved in ventilation- and perfusion-related images.

  3. Segmentation of images of abdominal organs.

    PubMed

    Wu, Jie; Kamath, Markad V; Noseworthy, Michael D; Boylan, Colm; Poehlman, Skip

    2008-01-01

    Abdominal organ segmentation, which is the delineation of organ areas in the abdomen, plays an important role in the process of radiological evaluation. Attempts to automate segmentation of abdominal organs will aid radiologists who are required to view thousands of images daily. This review outlines the current state-of-the-art semi-automated and automated methods used to segment abdominal organ regions from computed tomography (CT), magnetic resonance imaging (MRI), and ultrasound images. Segmentation methods generally fall into three categories: pixel based, region based and boundary tracing. While pixel-based methods classify each individual pixel, region-based methods identify regions with similar properties. Boundary tracing is accomplished by a model of the image boundary. This paper evaluates the effectiveness of the above algorithms with an emphasis on their advantages and disadvantages for abdominal organ segmentation. Several evaluation metrics that compare machine-based segmentation with that of an expert (radiologist) are identified and examined. Finally, features based on intensity as well as the texture of a small region around a pixel are explored. This review concludes with a discussion of possible future trends for abdominal organ segmentation.

  4. Evaluating Work-Based Learning: Insights from an Illuminative Evaluation Study of Work-Based Learning in a Vocational Qualification

    ERIC Educational Resources Information Center

    van Rensburg, Estelle

    2008-01-01

    This article outlines an illuminative evaluation study of the work-based module in a vocational qualification in Animal Health offered for the paraveterinary industry by a distance education institution in South Africa. In illuminative evaluation, a programme is studied by qualitative methods to gain an in-depth understanding of its…

  5. Initial draft of CSE-UCLA evaluation model based on weighted product in order to optimize digital library services in computer college in Bali

    NASA Astrophysics Data System (ADS)

    Divayana, D. G. H.; Adiarta, A.; Abadi, I. B. G. S.

    2018-01-01

The aim of this research was to create an initial design of the CSE-UCLA evaluation model modified with the Weighted Product method for evaluating digital library services at Computer Colleges in Bali. The research used a developmental research method following the Borg and Gall design. The result obtained from the work conducted earlier this month was a rough sketch of the Weighted Product based CSE-UCLA evaluation model; the design was already able to provide a general overview of the stages of the Weighted Product based CSE-UCLA evaluation model used to optimize digital library services at the Computer Colleges in Bali.

  6. BTC method for evaluation of remaining strength and service life of bridge cables.

    DOT National Transportation Integrated Search

    2011-09-01

    "This report presents the BTC method; a comprehensive state-of-the-art methodology for evaluation of remaining : strength and residual life of bridge cables. The BTC method is a probability-based, proprietary, patented, and peerreviewed : methodology...

  7. Reliability Evaluation of Machine Center Components Based on Cascading Failure Analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Ying-Zhi; Liu, Jin-Tong; Shen, Gui-Xiang; Long, Zhe; Sun, Shu-Guang

    2017-07-01

In order to rectify the problems that the component reliability model exhibits deviation and that the evaluation result is low because failure propagation is overlooked in the traditional reliability evaluation of machine center components, a new reliability evaluation method based on cascading failure analysis and failure influence degree assessment is proposed. A directed graph model of cascading failure among components is established according to cascading failure mechanism analysis and graph theory. The failure influence degrees of the system components are assessed by the adjacency matrix and its transposition, combined with the PageRank algorithm. Based on the comprehensive failure probability function and the total probability formula, the inherent failure probability function is determined to realize the reliability evaluation of the system components. Finally, the method is applied to a machine center, and the results show the following: 1) the reliability evaluation values of the proposed method are at least 2.5% higher than those of the traditional method; 2) the difference between the comprehensive and inherent reliability of a system component is positively correlated with its failure influence degree, which provides a theoretical basis for reliability allocation of the machine center system.
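    The failure-influence assessment described above combines the cascading-failure adjacency matrix with PageRank. Below is a minimal sketch of that step, assuming a small hypothetical adjacency matrix and a standard power-iteration PageRank; the damping factor, iteration limits and the example digraph are illustrative, not values from the paper.

```python
import numpy as np

def pagerank(adjacency, damping=0.85, tol=1e-10, max_iter=200):
    """Power-iteration PageRank on a directed adjacency matrix.

    adjacency[i, j] = 1 means a failure of component i propagates to j.
    """
    A = np.asarray(adjacency, dtype=float)
    n = A.shape[0]
    out_degree = A.sum(axis=1)
    # Row-stochastic transition matrix; dangling nodes jump uniformly.
    T = np.where(out_degree[:, None] > 0,
                 A / np.maximum(out_degree, 1)[:, None],
                 1.0 / n)
    rank = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        new_rank = (1 - damping) / n + damping * T.T @ rank
        if np.abs(new_rank - rank).sum() < tol:
            break
        rank = new_rank
    return rank

# Hypothetical 4-component cascading-failure digraph.
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]])

influence = pagerank(A)      # how strongly a component spreads failures
exposure = pagerank(A.T)     # how strongly a component is affected by others
print("failure-influence degree:", influence.round(3))
print("failure-exposure degree:", exposure.round(3))
```

    Running PageRank on both the adjacency matrix and its transpose mirrors the abstract's use of the matrix and its transposition to capture outgoing and incoming failure influence.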

  8. Course Evaluation: Reconfigurations for Learning with Learning Management Systems

    ERIC Educational Resources Information Center

    Park, Ji Yong

    2014-01-01

    The introduction of online delivery platforms such as learning management systems (LMS) in tertiary education has changed the methods and modes of curriculum delivery and communication. While course evaluation methods have also changed from paper-based in-class-administered methods to largely online-administered methods, the data collection…

  9. Quantitative Evaluation of the Total Magnetic Moments of Colloidal Magnetic Nanoparticles: A Kinetics-based Method.

    PubMed

    Liu, Haiyi; Sun, Jianfei; Wang, Haoyao; Wang, Peng; Song, Lina; Li, Yang; Chen, Bo; Zhang, Yu; Gu, Ning

    2015-06-08

A kinetics-based method is proposed to quantitatively characterize the collective magnetization of colloidal magnetic nanoparticles. The method is based on the relationship between the magnetic force on a colloidal droplet and the movement of the droplet under a gradient magnetic field. Through computational analysis of the kinetic parameters, such as displacement, velocity, and acceleration, the magnetization of colloidal magnetic nanoparticles can be calculated. In our experiments, the values measured by using our method exhibited a better linear correlation with magnetothermal heating than those obtained by using a vibrating sample magnetometer and magnetic balance. This finding indicates that this method may be more suitable for evaluating the collective magnetism of colloidal magnetic nanoparticles under low magnetic fields than the commonly used methods. Accurate evaluation of the magnetic properties of colloidal nanoparticles is of great importance for the standardization of magnetic nanomaterials and for their practical application in biomedicine. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Development of performance-based evaluation methods and specifications for roadside maintenance.

    DOT National Transportation Integrated Search

    2011-01-01

This report documents the work performed during Project 0-6387, Performance Based Roadside Maintenance Specifications. Quality assurance methods and specifications for roadside performance-based maintenance contracts (PBMCs) were developed ...

  11. Quantitative Evaluation of Heavy Duty Machine Tools Remanufacturing Based on Modified Catastrophe Progression Method

    NASA Astrophysics Data System (ADS)

    shunhe, Li; jianhua, Rao; lin, Gui; weimin, Zhang; degang, Liu

    2017-11-01

The result of remanufacturing evaluation is the basis for judging whether a heavy duty machine tool can be remanufactured at the end-of-life (EOL) stage of its lifecycle. The objectivity and accuracy of the evaluation are the key to the evaluation method. In this paper, the catastrophe progression method is introduced into the quantitative evaluation of heavy duty machine tools' remanufacturing, and the results are modified by a comprehensive adjustment method, which makes the evaluation results accord with conventional human judgement. A quantitative evaluation model based on the catastrophe progression method is established and used to evaluate the remanufacturing of a retired TK6916 CNC floor milling-boring machine. The evaluation process is simple and highly quantitative, and the result is objective.

  12. A Method of Evaluating Operation of Electric Energy Meter

    NASA Astrophysics Data System (ADS)

    Chen, Xiangqun; Li, Tianyang; Cao, Fei; Chu, Pengfei; Zhao, Xinwang; Huang, Rui; Liu, Liping; Zhang, Chenglin

    2018-05-01

The existing electric energy meter rotation maintenance strategy checks the electric energy meter at regular intervals and evaluates its state. It considers only the influence of time and neglects other factors, which makes the evaluation inaccurate and wastes resources. In order to evaluate the running state of the electric energy meter in a timely way, a method for the operation evaluation of the electric energy meter is proposed. The method extracts the factors affecting the state of energy meters from the existing data acquisition system, the marketing business system and the metrology production scheduling platform, and classifies them into error stability, operational reliability, potential risks and other categories, from which a basic test score, an inspection score, a monitoring score and a family defect detection score are derived. An evaluation model built on these scores is then used to assess the operating state of the electric energy meter, and a corresponding rotation maintenance strategy is proposed.

  13. Usage-Based Collection Evaluation with a Curricular Focus

    ERIC Educational Resources Information Center

    Kohn, Karen C.

    2013-01-01

    Systematic evaluation of a library's collection can be a useful tool for collection development. After reviewing three evaluation methods and their usefulness for our small academic library, I undertook a usage-based evaluation, focusing on narrow segments of our collection that served specific undergraduate courses. For each section, I collected…

  14. Insight into Evaluation Practice: A Content Analysis of Designs and Methods Used in Evaluation Studies Published in North American Evaluation-Focused Journals

    ERIC Educational Resources Information Center

    Christie, Christina A.; Fleischer, Dreolin Nesbitt

    2010-01-01

    To describe the recent practice of evaluation, specifically method and design choices, the authors performed a content analysis on 117 evaluation studies published in eight North American evaluation-focused journals for a 3-year period (2004-2006). The authors chose this time span because it follows the scientifically based research (SBR)…

  15. Evaluation of multiple precipitation products across Mainland China using the triple collocation method without ground truth

    NASA Astrophysics Data System (ADS)

    Tang, G.; Li, C.; Hong, Y.; Long, D.

    2017-12-01

Proliferation of satellite and reanalysis precipitation products underscores the need to evaluate their reliability, particularly over ungauged or poorly gauged regions. However, it is challenging to perform such evaluations over regions lacking ground truth data. Here, using the triple collocation (TC) method, which can evaluate relative uncertainties in different products without ground truth, we evaluate five satellite-based precipitation products and comparatively assess uncertainties in three types of independent precipitation products, i.e., satellite-based, ground-observed, and model reanalysis, over Mainland China. The products include a ground-based precipitation dataset (the gauge-based daily precipitation analysis, CGDPA), the ERA-Interim reanalysis product, and five satellite-based products (3B42V7 and 3B42RT of TMPA, IMERG, CMORPH-CRT, and PERSIANN-CDR) on a regular 0.25° grid at the daily timescale from 2013 to 2015. First, the effectiveness of the TC method is evaluated by comparison with traditional methods based on ground observations in a densely gauged region. Results show that the TC method is reliable because the correlation coefficient (CC) and root mean square error (RMSE) are close to those based on the traditional method, with maximum differences of only 0.08 and 0.71 mm/day for CC and RMSE, respectively. Then, the TC method is applied to Mainland China and the Tibetan Plateau (TP). Results indicate that: (1) the overall performance of IMERG is better than the other satellite products over Mainland China; (2) over grid cells without rain gauges in the TP, IMERG and ERA-Interim show better performance than CGDPA, indicating the potential of remote sensing and reanalysis data over these regions and the inherent uncertainty of CGDPA due to interpolation from sparsely gauged data; (3) both TMPA-3B42 and CMORPH-CRT have some unexpected CC values over certain grid cells that contain water bodies, reaffirming the overestimation of precipitation over inland water bodies. Overall, the TC method provides not only reliable cross-validation of precipitation estimates over Mainland China but also a new perspective for comprehensively assessing multi-source precipitation products, particularly over poorly gauged regions.
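    The covariance-based form of triple collocation is compact enough to sketch. The following minimal illustration uses synthetic daily rainfall, not the paper's data or processing chain; the noise levels, record length and the assumption of mutually independent errors are all illustrative.

```python
import numpy as np

def triple_collocation(x, y, z):
    """Classical covariance-based triple collocation.

    Returns estimated error standard deviations of the three products,
    assuming their errors are mutually independent and independent of truth.
    """
    C = np.cov(np.vstack([x, y, z]))
    err_var_x = C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2]
    err_var_y = C[1, 1] - C[0, 1] * C[1, 2] / C[0, 2]
    err_var_z = C[2, 2] - C[0, 2] * C[1, 2] / C[0, 1]
    return np.sqrt(np.maximum([err_var_x, err_var_y, err_var_z], 0.0))

# Synthetic daily rainfall "truth" plus independent noise for three products.
rng = np.random.default_rng(0)
truth = rng.gamma(shape=0.6, scale=5.0, size=1095)        # ~3 years of days
sat = truth + rng.normal(0, 2.0, truth.size)               # satellite product
gauge = truth + rng.normal(0, 0.8, truth.size)             # gauge analysis
reanalysis = truth + rng.normal(0, 3.0, truth.size)        # reanalysis product
print(triple_collocation(sat, gauge, reanalysis).round(2)) # roughly [2.0, 0.8, 3.0]
```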

  16. Program Evaluation of a Competency-Based Online Model in Higher Education

    ERIC Educational Resources Information Center

    DiGiacomo, Karen

    2017-01-01

    In order to serve its nontraditional students, a university piloted a competency-based program as alternative method for its students to earn college credit. The purpose of this mixed-methods study was to conduct a summative program evaluation to determine if the program was successful in order to make decisions about program revision and…

  17. A Controlled Evaluation of a High School Biomedical Pipeline Program: Design and Methods

    ERIC Educational Resources Information Center

    Winkleby, Marilyn A.; Ned, Judith; Ahn, David; Koehler, Alana; Fagliano, Kathleen; Crump, Casey

    2014-01-01

    Given limited funding for school-based science education, non-school-based programs have been developed at colleges and universities to increase the number of students entering science- and health-related careers and address critical workforce needs. However, few evaluations of such programs have been conducted. We report the design and methods of…

  18. Acoustics based assessment of respiratory diseases using GMM classification.

    PubMed

    Mayorga, P; Druzgalski, C; Morelos, R L; Gonzalez, O H; Vidales, J

    2010-01-01

The focus of this paper is to present a method utilizing lung sounds for a quantitative assessment of patient health as it relates to respiratory disorders. To accomplish this, traditional techniques from the speech processing domain were applied to lung sounds obtained with a digital stethoscope. Traditional methods used in the evaluation of asthma involve auscultation and spirometry, but the more sensitive electronic stethoscopes now available, combined with quantitative signal analysis, offer opportunities for improved diagnosis. In particular, we propose an acoustic evaluation methodology based on Gaussian Mixture Models (GMM), which should assist in broader analysis, identification, and diagnosis of asthma based on frequency-domain analysis of wheezing and crackles.
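    A minimal sketch of GMM-based classification of lung-sound recordings follows, assuming simple log-spectral band features as a stand-in for the acoustic features used in the paper; the synthetic "normal" and "wheeze-like" signals, sampling rate and model sizes are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def frame_features(signal, fs, frame_len=0.064):
    """Very simple per-frame log-spectral features (stand-in for MFCCs)."""
    n = int(frame_len * fs)
    frames = signal[: len(signal) // n * n].reshape(-1, n)
    spectra = np.abs(np.fft.rfft(frames * np.hanning(n), axis=1))
    # Pool the spectrum into 12 coarse bands and take the log energy.
    bands = np.array_split(spectra, 12, axis=1)
    return np.log(np.stack([b.sum(axis=1) for b in bands], axis=1) + 1e-9)

# Hypothetical training data: synthetic "normal" vs "wheeze-like" sounds.
rng = np.random.default_rng(1)
fs = 4000
t = np.arange(0, 10, 1 / fs)
normal = rng.normal(0, 1, t.size)                                   # broadband noise
wheeze = rng.normal(0, 0.3, t.size) + np.sin(2 * np.pi * 400 * t)   # tonal component

models = {
    "normal": GaussianMixture(n_components=4, covariance_type="diag",
                              random_state=0).fit(frame_features(normal, fs)),
    "wheeze": GaussianMixture(n_components=4, covariance_type="diag",
                              random_state=0).fit(frame_features(wheeze, fs)),
}

def classify(recording):
    """Pick the class whose GMM gives the highest average log-likelihood."""
    feats = frame_features(recording, fs)
    scores = {label: gmm.score(feats) for label, gmm in models.items()}
    return max(scores, key=scores.get)

test = rng.normal(0, 0.3, t.size) + np.sin(2 * np.pi * 400 * t)
print(classify(test))  # expected: "wheeze"
```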

  19. Do Toxicity Identification and Evaluation Laboratory-Based Methods Reflect Causes of Field Impairment?

    EPA Science Inventory

    Sediment Toxicity Identification and Evaluation (TIE) methods have been developed for both interstitial waters and whole sediments. These relatively simple laboratory methods are designed to identify specific toxicants or classes of toxicants in sediments; however, the question ...

  20. Integrating relationship- and research-based approaches in Australian health promotion practice.

    PubMed

    Klinner, Christiane; Carter, Stacy M; Rychetnik, Lucie; Li, Vincy; Daley, Michelle; Zask, Avigdor; Lloyd, Beverly

    2015-12-01

    We examine the perspectives of health promotion practitioners on their approaches to determining health promotion practice, in particular on the role of research and relationships in this process. Using Grounded Theory methods, we analysed 58 semi-structured interviews with 54 health promotion practitioners in New South Wales, Australia. Practitioners differentiated between relationship-based and research-based approaches as two sources of knowledge to guide health promotion practice. We identify several tensions in seeking to combine these approaches in practice and describe the strategies that participants adopted to manage these tensions. The strategies included working in an evidence-informed rather than evidence-based way, creating new evidence about relationship-based processes and outcomes, adopting 'relationship-based' research and evaluation methods, making research and evaluation useful for communities, building research and evaluation skills and improving collaboration between research and evaluation and programme implementation staff. We conclude by highlighting three systemic factors which could further support the integration of research-based and relationship-based health promotion practices: (i) expanding conceptions of health promotion evidence, (ii) developing 'relationship-based' research methods that enable practitioners to measure complex social processes and outcomes and to facilitate community participation and benefit, and (iii) developing organizational capacity. © The Author (2014). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  1. A comparison of moving object detection methods for real-time moving object detection

    NASA Astrophysics Data System (ADS)

    Roshan, Aditya; Zhang, Yun

    2014-06-01

Moving object detection has a wide variety of applications, from traffic monitoring, site monitoring, automatic theft identification and face detection to military surveillance. Many methods have been developed for moving object detection, but it is very difficult to find one that works in all situations and with different types of video. The purpose of this paper is to evaluate existing moving object detection methods that can be implemented in software on a desktop or laptop for real-time object detection. Several moving object detection methods are noted in the literature, but few of them are suitable for real-time detection, and most of those that are suitable are further limited by the number of objects and the scene complexity. This paper evaluates the four most commonly used moving object detection methods: the background subtraction technique, the Gaussian mixture model, wavelet-based and optical-flow-based methods. The four methods are evaluated using two different sets of cameras and two different scenes. The methods were implemented in MATLAB and the results are compared based on completeness of detected objects, noise, sensitivity to light change, processing time, etc. After comparison, it is observed that the optical-flow-based method took the least processing time and successfully detected the boundaries of moving objects, which also implies that it can be implemented for real-time moving object detection.
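    The paper's implementations are in MATLAB; as an illustration of the background-subtraction family it evaluates, here is a minimal OpenCV sketch. The video path, blob-area threshold and MOG2 parameters are placeholders, not values from the study.

```python
import cv2

# Background-subtraction variant applied to a generic video file;
# "traffic.mp4" is a placeholder path.
capture = cv2.VideoCapture("traffic.mp4")
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=True)

while True:
    ok, frame = capture.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                     # foreground mask
    mask = cv2.medianBlur(mask, 5)                     # suppress salt noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for contour in contours:
        if cv2.contourArea(contour) > 500:             # ignore tiny blobs
            x, y, w, h = cv2.boundingRect(contour)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("moving objects", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

capture.release()
cv2.destroyAllWindows()
```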

  2. A method to evaluate process performance by integrating time and resources

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Wei, Qingjie; Jin, Shuang

    2017-06-01

The purpose of process mining is to improve the existing processes of an enterprise, so how to measure process performance is particularly important. However, current research on performance evaluation methods is still insufficient: the main evaluation approaches use either time or resources alone, and these basic statistics cannot evaluate process performance well. In this paper, a method for evaluating process performance based on both the time dimension and the resource dimension is proposed. The method can be used to measure the utilization and redundancy of resources in a process. The paper introduces the design principle and formula of the evaluation algorithm, then describes the design and implementation of the evaluation method. Finally, the method is used to analyse an event log from a telephone maintenance process and an optimization plan is proposed.

  3. "Expectations to Change" ((E2C): A Participatory Method for Facilitating Stakeholder Engagement with Evaluation Findings

    ERIC Educational Resources Information Center

    Adams, Adrienne E.; Nnawulezi, Nkiru A.; Vandenberg, Lela

    2015-01-01

    From a utilization-focused evaluation perspective, the success of an evaluation is rooted in the extent to which the evaluation was used by stakeholders. This paper details the "Expectations to Change" (E2C) process, an interactive, workshop-based method designed to engage primary users with their evaluation findings as a means of…

  4. The performance evaluation model of mining project founded on the weight optimization entropy value method

    NASA Astrophysics Data System (ADS)

    Mao, Chao; Chen, Shou

    2017-01-01

Because the traditional entropy value method still has low evaluation accuracy when evaluating the performance of mining projects, a performance evaluation model for mining projects based on an improved entropy method is proposed. First, a new weight assignment model is established, founded on compatibility matrix analysis from the analytic hierarchy process (AHP) and the entropy value method: once the compatibility matrix analysis meets the consistency requirements, any difference between the subjective and objective weights is handled by moderately adjusting the proportion of each, and on this basis a fuzzy evaluation matrix is built for performance evaluation. Simulation experiments show that, compared with the traditional entropy and compatibility matrix analysis methods, the proposed performance evaluation model based on the improved entropy value method has higher assessment accuracy.
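    A minimal sketch of the entropy value method and a simple linear blend with subjective (AHP-style) weights follows, assuming a small hypothetical decision matrix of mining projects; the criteria, values and blending share are illustrative, and the paper's compatibility-matrix adjustment is not reproduced here.

```python
import numpy as np

def entropy_weights(X):
    """Objective weights from the entropy value method.

    X: decision matrix (alternatives x criteria), positive, larger-is-better.
    """
    P = X / X.sum(axis=0)                         # column-wise proportions
    k = 1.0 / np.log(X.shape[0])
    entropy = -k * np.sum(P * np.log(P + 1e-12), axis=0)
    diversity = 1.0 - entropy
    return diversity / diversity.sum()

def combined_weights(subjective, objective, alpha=0.5):
    """Blend AHP (subjective) and entropy (objective) weights; alpha is the
    share given to the subjective judgement."""
    w = alpha * np.asarray(subjective) + (1 - alpha) * np.asarray(objective)
    return w / w.sum()

# Hypothetical scores of 4 mining projects on 3 criteria.
X = np.array([[0.8, 120, 0.65],
              [0.6, 150, 0.70],
              [0.9,  90, 0.55],
              [0.7, 130, 0.60]], dtype=float)
ahp = np.array([0.5, 0.3, 0.2])                   # expert (AHP) weights
w = combined_weights(ahp, entropy_weights(X))
scores = (X / X.max(axis=0)) @ w                  # simple weighted performance
print("weights:", w.round(3), "ranking:", np.argsort(-scores) + 1)
```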

  5. Providing a Science Base for the Evaluation of Tobacco Products

    PubMed Central

    Berman, Micah L.; Connolly, Greg; Cummings, K. Michael; Djordjevic, Mirjana V.; Hatsukami, Dorothy K.; Henningfield, Jack E.; Myers, Matthew; O'Connor, Richard J.; Parascandola, Mark; Rees, Vaughan; Rice, Jerry M.

    2015-01-01

    Objective Evidence-based tobacco regulation requires a comprehensive scientific framework to guide the evaluation of new tobacco products and health-related claims made by product manufacturers. Methods The Tobacco Product Assessment Consortium (TobPRAC) employed an iterative process involving consortia investigators, consultants, a workshop of independent scientists and public health experts, and written reviews in order to develop a conceptual framework for evaluating tobacco products. Results The consortium developed a four-phased framework for the scientific evaluation of tobacco products. The four phases addressed by the framework are: (1) pre-market evaluation, (2) pre-claims evaluation, (3) post-market activities, and (4) monitoring and re-evaluation. For each phase, the framework proposes the use of validated testing procedures that will evaluate potential harms at both the individual and population level. Conclusions While the validation of methods for evaluating tobacco products is an ongoing and necessary process, the proposed framework need not wait for fully validated methods to be used in guiding tobacco product regulation today. PMID:26665160

  6. Knowledge management impact of information technology Web 2.0/3.0. The case study of agent software technology usability in knowledge management system

    NASA Astrophysics Data System (ADS)

    Sołtysik-Piorunkiewicz, Anna

    2015-02-01

How can we measure the impact of Web 2.0/3.0 internet technology on knowledge management? How can we use Web 2.0/3.0 technologies for generating, evaluating, sharing and organizing knowledge in a knowledge-based organization, and how can we evaluate them from a user-centred perspective? This article provides a method for evaluating the usability of web technologies to support knowledge management in knowledge-based organizations across the stages of the knowledge management cycle (generating knowledge, evaluating knowledge, sharing knowledge, etc.), illustrated for modern internet technologies with the example of agent technologies. The method focuses on five areas of evaluation: the GUI, the functional structure, the way content is published, the organizational aspect and the technological aspect. It is based on proposed indicators for assessing each evaluation area, taking into account the individual characteristics of the scoring: each feature identified in the evaluation is first scored point-wise, and this score is then verified and refined by means of the indicators for the given feature. The article proposes indicators to measure the impact of Web 2.0/3.0 technologies on knowledge management and verifies them on an example of agent technology usability in a knowledge management system.

  7. Lunar-base construction equipment and methods evaluation

    NASA Technical Reports Server (NTRS)

    Boles, Walter W.; Ashley, David B.; Tucker, Richard L.

    1993-01-01

    A process for evaluating lunar-base construction equipment and methods concepts is presented. The process is driven by the need for more quantitative, systematic, and logical methods for assessing further research and development requirements in an area where uncertainties are high, dependence upon terrestrial heuristics is questionable, and quantitative methods are seldom applied. Decision theory concepts are used in determining the value of accurate information and the process is structured as a construction-equipment-and-methods selection methodology. Total construction-related, earth-launch mass is the measure of merit chosen for mathematical modeling purposes. The work is based upon the scope of the lunar base as described in the National Aeronautics and Space Administration's Office of Exploration's 'Exploration Studies Technical Report, FY 1989 Status'. Nine sets of conceptually designed construction equipment are selected as alternative concepts. It is concluded that the evaluation process is well suited for assisting in the establishment of research agendas in an approach that is first broad, with a low level of detail, followed by more-detailed investigations into areas that are identified as critical due to high degrees of uncertainty and sensitivity.

  8. A rapid method for soil cement design : Louisiana slope value method : part II : evaluation.

    DOT National Transportation Integrated Search

    1966-05-01

This report is an evaluation of the recently developed "Louisiana Slope Value Method". The conclusions drawn are based on data from 637 separate samples representing nearly all major soil groups in Louisiana that are suitable for cement stabilizatio...

  9. Construction of an evaluation and selection system of emergency treatment technology based on dynamic fuzzy GRA method for phenol spill

    NASA Astrophysics Data System (ADS)

    Zhao, Jingjing; Yu, Lean; Li, Lian

    2017-05-01

Chemical contingency spills often involve a great deal of complexity, fuzziness and uncertainty. In order to obtain the optimum emergency disposal technology scheme as soon as a chemical pollution accident occurs, a technique evaluation system was developed based on a dynamic fuzzy grey relational analysis (GRA) method, and the feasibility of the proposed method was tested on an emergency phenol spill accident that occurred on a highway.
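    A minimal grey relational analysis sketch for ranking candidate disposal schemes is given below; the criteria and scores are hypothetical, and the dynamic and fuzzy extensions described in the paper are not included.

```python
import numpy as np

def grey_relational_grades(alternatives, reference=None, rho=0.5):
    """Grey relational analysis for ranking emergency-disposal schemes.

    alternatives: matrix (schemes x criteria), already normalised to [0, 1]
    with larger values meaning better performance.
    """
    X = np.asarray(alternatives, dtype=float)
    ref = X.max(axis=0) if reference is None else np.asarray(reference)
    delta = np.abs(X - ref)                               # deviation sequences
    d_min, d_max = delta.min(), delta.max()
    xi = (d_min + rho * d_max) / (delta + rho * d_max)    # relational coefficients
    return xi.mean(axis=1)                                # relational grades

# Hypothetical scores of three phenol-spill treatment schemes on four criteria
# (effectiveness, speed, cost-efficiency, secondary-pollution control).
schemes = np.array([[0.9, 0.6, 0.5, 0.8],
                    [0.7, 0.9, 0.7, 0.6],
                    [0.6, 0.7, 0.9, 0.7]])
grades = grey_relational_grades(schemes)
print("grades:", grades.round(3), "best scheme:", int(np.argmax(grades)) + 1)
```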

  10. Walsh-Hadamard transform kernel-based feature vector for shot boundary detection.

    PubMed

    Lakshmi, Priya G G; Domnic, S

    2014-12-01

Video shot boundary detection (SBD) is the first step of video analysis, summarization, indexing, and retrieval. In the SBD process, videos are segmented into basic units called shots. In this paper, a new SBD method is proposed using color, edge, texture, and motion strength as a vector of features (feature vector). Features are extracted by projecting the frames on selected basis vectors of the Walsh-Hadamard transform (WHT) kernel and the WHT matrix. After extracting the features, weights are calculated based on the significance of each feature. The weighted features are combined to form a single continuity signal, used as input to a procedure-based shot transition identification process (PBI). Using this procedure, shot transitions are classified into abrupt and gradual transitions. Experimental results are examined using the large-scale test sets provided by TRECVID 2007, which evaluated hard-cut and gradual transition detection. To evaluate the robustness of the proposed method, a system evaluation is performed. The proposed method yields an F1-score of 97.4% for cuts, 78% for gradual transitions, and 96.1% for all transitions. We have also evaluated the proposed feature vector with a support vector machine classifier. The results show that WHT-based features perform better than other existing methods. In addition, a few more video sequences are taken from the Open Video Project and the performance of the proposed method is compared with a recent existing SBD method.
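    A minimal sketch of WHT-based frame features and a continuity signal for abrupt-cut detection follows, assuming orthonormal Hadamard projection of resized grayscale frames; the synthetic frames, coefficient block size and threshold are illustrative, and the paper's multi-feature weighting and gradual-transition logic are not reproduced.

```python
import numpy as np
from scipy.linalg import hadamard

N = 64                         # frames assumed resized to N x N, N a power of 2
H = hadamard(N) / np.sqrt(N)   # orthonormal Walsh-Hadamard kernel

def wht_features(frame, keep=8):
    """Project a grayscale N x N frame onto the WHT kernel and keep the
    low-order keep x keep coefficients as a compact feature vector."""
    coeffs = H @ frame @ H.T
    return coeffs[:keep, :keep].ravel()

def continuity_signal(frames):
    """Frame-to-frame distance between WHT feature vectors."""
    feats = np.array([wht_features(f) for f in frames])
    return np.linalg.norm(np.diff(feats, axis=0), axis=1)

# Synthetic sequence with an abrupt shot change at frame 50.
rng = np.random.default_rng(0)
shot_a = rng.uniform(0, 1, (50, N, N)) * 0.1 + 0.3
shot_b = rng.uniform(0, 1, (50, N, N)) * 0.1 + 0.7
signal = continuity_signal(np.concatenate([shot_a, shot_b]))
cuts = np.where(signal > signal.mean() + 3 * signal.std())[0] + 1
print("detected cut at frame index:", cuts)
```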

  11. An Inter-Personal Information Sharing Model Based on Personalized Recommendations

    NASA Astrophysics Data System (ADS)

    Kamei, Koji; Funakoshi, Kaname; Akahani, Jun-Ichi; Satoh, Tetsuji

    In this paper, we propose an inter-personal information sharing model among individuals based on personalized recommendations. In the proposed model, we define an information resource as shared between people when both of them consider it important --- not merely when they both possess it. In other words, the model defines the importance of information resources based on personalized recommendations from identifiable acquaintances. The proposed method is based on a collaborative filtering system that focuses on evaluations from identifiable acquaintances. It utilizes both user evaluations for documents and their contents. In other words, each user profile is represented as a matrix of credibility to the other users' evaluations on each domain of interests. We extended the content-based collaborative filtering method to distinguish other users to whom the documents should be recommended. We also applied a concept-based vector space model to represent the domain of interests instead of the previous method which represented them by a term-based vector space model. We introduce a personalized concept-base compiled from each user's information repository to improve the information retrieval in the user's environment. Furthermore, the concept-spaces change from user to user since they reflect the personalities of the users. Because of different concept-spaces, the similarity between a document and a user's interest varies for each user. As a result, a user receives recommendations from other users who have different view points, achieving inter-personal information sharing based on personalized recommendations. This paper also describes an experimental simulation of our information sharing model. In our laboratory, five participants accumulated a personal repository of e-mails and web pages from which they built their own concept-base. Then we estimated the user profiles according to personalized concept-bases and sets of documents which others evaluated. We simulated inter-personal recommendation based on the user profiles and evaluated the performance of the recommendation method by comparing the recommended documents to the result of the content-based collaborative filtering.

  12. The use of portable 2D echocardiography and 'frame-based' bubble counting as a tool to evaluate diving decompression stress.

    PubMed

    Germonpré, Peter; Papadopoulou, Virginie; Hemelryck, Walter; Obeid, Georges; Lafère, Pierre; Eckersley, Robert J; Tang, Meng-Xing; Balestra, Costantino

    2014-03-01

    'Decompression stress' is commonly evaluated by scoring circulating bubble numbers post dive using Doppler or cardiac echography. This information may be used to develop safer decompression algorithms, assuming that the lower the numbers of venous gas emboli (VGE) observed post dive, the lower the statistical risk of decompression sickness (DCS). Current echocardiographic evaluation of VGE, using the Eftedal and Brubakk method, has some disadvantages as it is less well suited for large-scale evaluation of recreational diving profiles. We propose and validate a new 'frame-based' VGE-counting method which offers a continuous scale of measurement. Nine 'raters' of varying familiarity with echocardiography were asked to grade 20 echocardiograph recordings using both the Eftedal and Brubakk grading and the new 'frame-based' counting method. They were also asked to count the number of bubbles in 50 still-frame images, some of which were randomly repeated. A Wilcoxon Spearman ρ calculation was used to assess test-retest reliability of each rater for the repeated still frames. For the video images, weighted kappa statistics, with linear and quadratic weightings, were calculated to measure agreement between raters for the Eftedal and Brubakk method. Bland-Altman plots and intra-class correlation coefficients were used to measure agreement between raters for the frame-based counting method. Frame-based counting showed a better inter-rater agreement than the Eftedal and Brubakk grading, even with relatively inexperienced assessors, and has good intra- and inter-rater reliability. Frame-based bubble counting could be used to evaluate post-dive decompression stress, and offers possibilities for computer-automated algorithms to allow near-real-time counting.

  13. A Method to Calculate and Analyze Residents' Evaluations by Using a Microcomputer Data-Base Management System.

    ERIC Educational Resources Information Center

    Mills, Myron L.

    1988-01-01

    A system developed for more efficient evaluation of graduate medical students' progress uses numerical scoring and a microcomputer database management system as an alternative to manual methods to produce accurate, objective, and meaningful summaries of resident evaluations. (Author/MSE)

  14. Evaluation of methods for measuring particulate matter emissions from gas turbines.

    PubMed

    Petzold, Andreas; Marsh, Richard; Johnson, Mark; Miller, Michael; Sevcenco, Yura; Delhaye, David; Ibrahim, Amir; Williams, Paul; Bauer, Heidi; Crayford, Andrew; Bachalo, William D; Raper, David

    2011-04-15

    The project SAMPLE evaluated methods for measuring particle properties in the exhaust of aircraft engines with respect to the development of standardized operation procedures for particulate matter measurement in aviation industry. Filter-based off-line mass methods included gravimetry and chemical analysis of carbonaceous species by combustion methods. Online mass methods were based on light absorption measurement or used size distribution measurements obtained from an electrical mobility analyzer approach. Number concentrations were determined using different condensation particle counters (CPC). Total mass from filter-based methods balanced gravimetric mass within 8% error. Carbonaceous matter accounted for 70% of gravimetric mass while the remaining 30% were attributed to hydrated sulfate and noncarbonaceous organic matter fractions. Online methods were closely correlated over the entire range of emission levels studied in the tests. Elemental carbon from combustion methods and black carbon from optical methods deviated by maximum 5% with respect to mass for low to medium emission levels, whereas for high emission levels a systematic deviation between online methods and filter based methods was found which is attributed to sampling effects. CPC based instruments proved highly reproducible for number concentration measurements with a maximum interinstrument standard deviation of 7.5%.

  15. Automated glioblastoma segmentation based on a multiparametric structured unsupervised classification.

    PubMed

    Juan-Albarracín, Javier; Fuster-Garcia, Elies; Manjón, José V; Robles, Montserrat; Aparici, F; Martí-Bonmatí, L; García-Gómez, Juan M

    2015-01-01

Automatic brain tumour segmentation has become a key component for the future of brain tumour treatment. Currently, most brain tumour segmentation approaches arise from the supervised learning standpoint, which requires a labelled training dataset from which to infer the models of the classes. The performance of these models is directly determined by the size and quality of the training corpus, whose retrieval becomes a tedious and time-consuming task. On the other hand, unsupervised approaches avoid these limitations but often do not reach results comparable to those of supervised methods. In this context, we propose an automated unsupervised method for brain tumour segmentation based on anatomical Magnetic Resonance (MR) images. Four unsupervised classification algorithms, grouped by their structured or non-structured condition, were evaluated within our pipeline. As non-structured algorithms, we evaluated K-means, Fuzzy K-means and the Gaussian Mixture Model (GMM), whereas as a structured classification algorithm we evaluated the Gaussian Hidden Markov Random Field (GHMRF). An automated postprocess based on a statistical approach supported by tissue probability maps is proposed to automatically identify the tumour classes after segmentation. We evaluated our brain tumour segmentation method with the public BRAin Tumor Segmentation (BRATS) 2013 Test and Leaderboard datasets. Our approach based on the GMM model improves on the results obtained by most of the supervised methods evaluated with the Leaderboard set and reaches the second position in the ranking. Our variant based on the GHMRF achieves the first position in the Test ranking of the unsupervised approaches and the seventh position in the general Test ranking, which confirms the method as a viable alternative for brain tumour segmentation.
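    A minimal sketch of the unsupervised, GMM-based voxel clustering step on multiparametric intensities is shown below; the synthetic volumes, number of classes and normalisation are assumptions, and the paper's structured (GHMRF) variant and probability-map postprocessing are not included.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def unsupervised_tissue_clustering(volumes, n_classes=5, brain_mask=None):
    """Cluster multiparametric MR intensities voxel-wise with a GMM.

    volumes: dict of co-registered 3-D arrays, e.g. {"T1": ..., "T2": ...}.
    Returns a label volume; mapping labels to tissues (tumour, oedema,
    healthy tissue) would be decided afterwards, e.g. with probability maps.
    """
    stacked = np.stack(list(volumes.values()), axis=-1)
    mask = np.ones(stacked.shape[:-1], bool) if brain_mask is None else brain_mask
    features = stacked[mask].astype(float)
    features = (features - features.mean(0)) / (features.std(0) + 1e-9)
    gmm = GaussianMixture(n_components=n_classes, covariance_type="full",
                          random_state=0).fit(features)
    labels = np.full(mask.shape, -1, dtype=int)
    labels[mask] = gmm.predict(features)
    return labels

# Tiny synthetic "multiparametric" volume just to exercise the pipeline.
rng = np.random.default_rng(0)
shape = (32, 32, 16)
t1 = rng.normal(0, 1, shape)
t2 = rng.normal(0, 1, shape)
t1[10:20, 10:20, 5:10] += 4.0   # bright "lesion" in both channels
t2[10:20, 10:20, 5:10] += 4.0
labels = unsupervised_tissue_clustering({"T1": t1, "T2": t2}, n_classes=2)
print("voxels per class:", np.bincount(labels.ravel()))
```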

  16. Development and preliminary evaluation of a new anatomically based prosthetic alignment method for below-knee prosthesis.

    PubMed

    Tafti, Nahid; Karimlou, Masoud; Mardani, Mohammad Ali; Jafarpisheh, Amir Salar; Aminian, Gholam Reza; Safari, Reza

    2018-04-20

The objectives of the current study were to (a) assess similarities and relationships between anatomical landmark-based angles and distances of the lower limbs in unilateral transtibial amputees and (b) develop and evaluate a new anatomically based static prosthetic alignment method. The first sub-study assessed the anthropometric differences and relationships between the lower limbs in photographs taken of amputees. Data were analysed via paired t-tests and regression analysis. Results show no significant differences in the frontal and transverse planes. In the sagittal plane, the anthropometric parameters of the amputated limb were significantly correlated with the corresponding variables of the sound limb. These results served as the basis for the development of a new prosthetic alignment method, which was evaluated in a single-subject study: prosthetic alignment carried out by an experienced prosthetist was compared with alignment adjusted by an inexperienced prosthetist using the developed method. In the sagittal and frontal planes, the socket angle was tuned with respect to the shin angle, and the position of the prosthetic foot was tuned in relation to the pelvic landmarks. Further study is needed to assess the proposed method on a larger sample of amputees and prosthetists.

  17. On Some Methods in Safety Evaluation in Geotechnics

    NASA Astrophysics Data System (ADS)

    Puła, Wojciech; Zaskórski, Łukasz

    2015-06-01

The paper demonstrates how reliability methods can be utilised to evaluate safety in geotechnics. Special attention is paid to so-called reliability-based design, which can play a useful and complementary role to Eurocode 7. In the first part, a brief review of first- and second-order reliability methods is given. Next, two examples of reliability-based design are demonstrated: the first is focussed on bearing capacity calculation and is dedicated to comparison with EC7 requirements; the second analyses a rigid pile subjected to lateral load and is oriented towards the working stress design method. In the second part, applications of random fields to safety evaluations in geotechnics are addressed. After a short review of the theory, a random finite element algorithm for reliability-based design of a shallow strip foundation is given. Finally, two illustrative examples for cohesive and cohesionless soils are demonstrated.
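    A minimal Monte Carlo sketch of a reliability-based design check for a bearing-capacity limit state g = R - S follows; the distributions and their parameters are illustrative assumptions, not values from the paper or from Eurocode 7.

```python
import numpy as np
from scipy import stats

def monte_carlo_reliability(n_samples=1_000_000, seed=0):
    """Crude Monte Carlo estimate of the failure probability and reliability
    index for the limit state g = R - S (illustrative numbers only)."""
    rng = np.random.default_rng(seed)
    # Resistance: lognormal with mean 800 kPa and 20 % coefficient of variation.
    mean_r, cov_r = 800.0, 0.20
    sigma_ln = np.sqrt(np.log(1 + cov_r**2))
    mu_ln = np.log(mean_r) - 0.5 * sigma_ln**2
    resistance = rng.lognormal(mu_ln, sigma_ln, n_samples)
    # Load effect: normal with mean 400 kPa and 10 % coefficient of variation.
    load = rng.normal(400.0, 40.0, n_samples)
    p_failure = np.mean(resistance <= load)
    beta = -stats.norm.ppf(p_failure) if p_failure > 0 else np.inf
    return p_failure, beta

pf, beta = monte_carlo_reliability()
print(f"failure probability ≈ {pf:.2e}, reliability index β ≈ {beta:.2f}")
```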

  18. Comparison of Different Methods of Grading a Level Turn Task on a Flight Simulator

    NASA Technical Reports Server (NTRS)

    Heath, Bruce E.; Crier, tomyka

    2003-01-01

With the advancements in the computing power of personal computers, PC-based flight simulators and trainers have opened new avenues in the training of airplane pilots. It may be desirable to have the flight simulator make a quantitative evaluation of the progress of a pilot's training, thereby reducing the physical requirement on the flight instructor, who must otherwise watch every flight. In an experiment, university students conducted six different flights, each consisting of two level turns and lasting three minutes. By evaluating videotapes, two certified flight instructors provided separate letter grades for each turn. The level turns were also evaluated using two computer-based grading methods: one determined automated grades based on prescribed tolerances in bank angle, airspeed and altitude, and the other used deviations in altitude and bank angle to compute a performance index and performance grades.

  19. Dig into Learning: A Program Evaluation of an Agricultural Literacy Innovation

    ERIC Educational Resources Information Center

    Edwards, Erica Brown

    2016-01-01

    This study is a mixed-methods program evaluation of an agricultural literacy innovation in a local school district in rural eastern North Carolina. This evaluation describes the use of a theory-based framework, the Concerns-Based Adoption Model (CBAM), in accordance with Stufflebeam's Context, Input, Process, Product (CIPP) model by evaluating the…

  20. Evaluation of Turkish and Mathematics Curricula According to Value-Based Evaluation Model

    ERIC Educational Resources Information Center

    Duman, Serap Nur; Akbas, Oktay

    2017-01-01

    This study evaluated secondary school seventh-grade Turkish and mathematics programs using the Context-Input-Process-Product Evaluation Model based on student, teacher, and inspector views. The convergent parallel mixed method design was used in the study. Student values were identified using the scales for socio-level identification, traditional…

  1. On the evaluation of segmentation editing tools

    PubMed Central

    Heckel, Frank; Moltz, Jan H.; Meine, Hans; Geisler, Benjamin; Kießling, Andreas; D’Anastasi, Melvin; dos Santos, Daniel Pinto; Theruvath, Ashok Joseph; Hahn, Horst K.

    2014-01-01

Efficient segmentation editing tools are important components in the segmentation process, as no automatic methods exist that always generate sufficient results. Evaluating segmentation editing algorithms is challenging, because their quality depends on the user’s subjective impression. So far, no established methods for an objective, comprehensive evaluation of such tools exist and, particularly, intermediate segmentation results are not taken into account. We discuss the evaluation of editing algorithms in the context of tumor segmentation in computed tomography. We propose a rating scheme to qualitatively measure the accuracy and efficiency of editing tools in user studies. In order to objectively summarize the overall quality, we propose two scores based on the subjective rating and the quantified segmentation quality over time. Finally, a simulation-based evaluation approach is discussed, which allows a more reproducible evaluation without the need for human input. This automated evaluation complements user studies, allowing a more convincing evaluation, particularly during development, where frequent user studies are not possible. The proposed methods have been used to evaluate two dedicated editing algorithms on 131 representative tumor segmentations. We show how the comparison of editing algorithms benefits from the proposed methods. Our results also show the correlation of the suggested quality score with the qualitative ratings. PMID:26158063

  2. Application of Nemerow Index Method and Integrated Water Quality Index Method in Water Quality Assessment of Zhangze Reservoir

    NASA Astrophysics Data System (ADS)

    Zhang, Qian; Feng, Minquan; Hao, Xiaoyan

    2018-03-01

[Objective] Based on water quality data from the Zhangze Reservoir for the last five years, water quality was assessed using the integrated water quality identification index method and the Nemerow pollution index method. The results of the different evaluation methods were analyzed and compared, and the characteristics of each method were identified. [Methods] The suitability of the water quality assessment methods was compared and analyzed based on these results. [Results] Water quality tended to decrease over time, with 2016 being the year with the worst water quality; the sections with the worst water quality were the southern and northern sections. [Conclusion] The results produced by the traditional Nemerow index method fluctuated greatly across the water quality monitoring sections and therefore could not effectively reveal the trend in water quality at each section. Because it combines qualitative and quantitative measures, the comprehensive pollution index identification method could evaluate the degree of water pollution and determine whether the river water was black and odorous, although its evaluation results indicated relatively low pollution. The results from the improved Nemerow index evaluation were better, as the single indicators and the evaluation results are in strong agreement; this method objectively reflects the water quality of each monitoring section and is more suitable for the water quality evaluation of the reservoir.
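    The Nemerow comprehensive pollution index discussed above is straightforward to compute. A minimal sketch follows, with hypothetical indicator concentrations and standard limits rather than the Zhangze Reservoir data.

```python
import numpy as np

def nemerow_index(concentrations, standards):
    """Nemerow comprehensive pollution index for one monitoring section.

    concentrations and standards are aligned arrays for the evaluated
    indicators (e.g. COD, NH3-N, TP); single-factor index P_i = C_i / S_i.
    """
    single = np.asarray(concentrations, float) / np.asarray(standards, float)
    p_n = np.sqrt((single.max() ** 2 + single.mean() ** 2) / 2.0)
    return p_n, single

# Hypothetical section data (mg/L) against assumed surface-water limits.
conc = [18.0, 0.9, 0.18]       # COD, NH3-N, TP
limits = [20.0, 1.0, 0.20]
p_n, single = nemerow_index(conc, limits)
print("single-factor indices:", single.round(2), "Nemerow index:", round(p_n, 2))
```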

  3. Evaluation of a Cubature Kalman Filtering-Based Phase Unwrapping Method for Differential Interferograms with High Noise in Coal Mining Areas

    PubMed Central

    Liu, Wanli; Bian, Zhengfu; Liu, Zhenguo; Zhang, Qiuzhao

    2015-01-01

    Differential interferometric synthetic aperture radar has been shown to be effective for monitoring subsidence in coal mining areas. Phase unwrapping can have a dramatic influence on the monitoring result. In this paper, a filtering-based phase unwrapping algorithm in combination with path-following is introduced to unwrap differential interferograms with high noise in mining areas. It can perform simultaneous noise filtering and phase unwrapping so that the pre-filtering steps can be omitted, thus usually retaining more details and improving the detectable deformation. For the method, the nonlinear measurement model of phase unwrapping is processed using a simplified Cubature Kalman filtering, which is an effective and efficient tool used in many nonlinear fields. Three case studies are designed to evaluate the performance of the method. In Case 1, two tests are designed to evaluate the performance of the method under different factors including the number of multi-looks and path-guiding indexes. The result demonstrates that the unwrapped results are sensitive to the number of multi-looks and that the Fisher Distance is the most suitable path-guiding index for our study. Two case studies are then designed to evaluate the feasibility of the proposed phase unwrapping method based on Cubature Kalman filtering. The results indicate that, compared with the popular Minimum Cost Flow method, the Cubature Kalman filtering-based phase unwrapping can achieve promising results without pre-filtering and is an appropriate method for coal mining areas with high noise. PMID:26153776

  4. Towards standardized assessment of endoscope optical performance: geometric distortion

    NASA Astrophysics Data System (ADS)

    Wang, Quanzeng; Desai, Viraj N.; Ngo, Ying Z.; Cheng, Wei-Chung; Pfefer, Joshua

    2013-12-01

    Technological advances in endoscopes, such as capsule, ultrathin and disposable devices, promise significant improvements in safety, clinical effectiveness and patient acceptance. Unfortunately, the industry lacks test methods for preclinical evaluation of key optical performance characteristics (OPCs) of endoscopic devices that are quantitative, objective and well-validated. As a result, it is difficult for researchers and developers to compare image quality and evaluate equivalence to, or improvement upon, prior technologies. While endoscope OPCs include resolution, field of view, and depth of field, among others, our focus in this paper is geometric image distortion. We reviewed specific test methods for distortion and then developed an objective, quantitative test method based on well-defined experimental and data processing steps to evaluate radial distortion in the full field of view of an endoscopic imaging system. Our measurements and analyses showed that a second-degree polynomial equation could well describe the radial distortion curve of a traditional endoscope. The distortion evaluation method was effective for correcting the image and can be used to explain other widely accepted evaluation methods such as picture height distortion. Development of consensus standards based on promising test methods for image quality assessment, such as the method studied here, will facilitate clinical implementation of innovative endoscopic devices.
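    A minimal sketch of fitting a second-degree polynomial to a radial distortion curve, as the abstract describes, is shown below; the grid-target radii and distortion coefficient are synthetic, not measurements from the study.

```python
import numpy as np

# Radial positions of grid targets: r_ideal from the target geometry,
# r_observed from the endoscope image (illustrative values in pixels).
r_ideal = np.linspace(0, 300, 13)
r_observed = r_ideal * (1 - 1.8e-6 * r_ideal**2)      # barrel-type distortion

# Local radial distortion in percent as a function of ideal radius.
distortion_pct = np.zeros_like(r_ideal)
distortion_pct[1:] = 100 * (r_observed[1:] - r_ideal[1:]) / r_ideal[1:]

# A second-degree polynomial describes the radial distortion curve well.
coeffs = np.polyfit(r_ideal, distortion_pct, deg=2)
fit = np.polyval(coeffs, r_ideal)
print("fit coefficients (a2, a1, a0):", coeffs.round(6))
print("max residual (percentage points):", np.abs(fit - distortion_pct).max().round(3))

# The fitted curve can then be inverted to remap pixel radii and correct images.
```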

  5. Construction of Expert Knowledge Monitoring and Assessment System Based on Integral Method of Knowledge Evaluation

    ERIC Educational Resources Information Center

    Golovachyova, Viktoriya N.; Menlibekova, Gulbakhyt Zh.; Abayeva, Nella F.; Ten, Tatyana L.; Kogaya, Galina D.

    2016-01-01

    Using computer-based monitoring systems that rely on tests could be the most effective way of knowledge evaluation. The problem of objective knowledge assessment by means of testing takes on a new dimension in the context of new paradigms in education. The analysis of the existing test methods enabled us to conclude that tests with selected…

  6. Research on the comparison of performance-based concept and force-based concept

    NASA Astrophysics Data System (ADS)

    Wu, Zeyu; Wang, Dongwei

    2011-03-01

There are two ideologies in structural design: the force-based concept and the performance-based concept. Generally, if the structure operates in the elastic stage, the two philosophies reach the same results, but beyond that stage the shortcomings of the force-based method are exposed and the merits of the performance-based method are displayed. The pros and cons of each strategy are listed herein, and the structures to which each method is best suited are analyzed. Finally, a real structure is evaluated by an adaptive pushover method to verify that the performance-based method is better than the force-based method.

  7. Gold-standard evaluation of a folksonomy-based ontology learning model

    NASA Astrophysics Data System (ADS)

    Djuana, E.

    2018-03-01

Folksonomy, one result of the collaborative tagging process, has been acknowledged for its potential to improve the categorization and searching of web resources. However, folksonomy contains ambiguities such as synonymy and polysemy, as well as differing levels of abstraction (the generality problem). To maximize its potential, methods have been proposed for associating folksonomy tags with semantics and structural relationships, for example through ontology learning. This paper evaluates our previous work on ontology learning with a gold-standard evaluation approach, in comparison to a notable state-of-the-art work and several baselines. The results show that our method is comparable to the state-of-the-art work, which further validates our approach, previously validated using a task-based evaluation approach.

  8. Research on Operation Assessment Method for Energy Meter

    NASA Astrophysics Data System (ADS)

    Chen, Xiangqun; Huang, Rui; Shen, Liman; chen, Hao; Xiong, Dezhi; Xiao, Xiangqi; Liu, Mouhai; Xu, Renheng

    2018-03-01

The existing electric energy meter rotation maintenance strategy checks the electric energy meter at regular intervals and evaluates its state. It considers only the influence of time and neglects other factors, which makes the evaluation inaccurate and wastes resources. In order to evaluate the running state of the electric energy meter in a timely way, a method for the operation evaluation of the electric energy meter is proposed. The method extracts the factors affecting the state of energy meters from the existing data acquisition system, the marketing business system and the metrology production scheduling platform, and classifies them into error stability, operational reliability, potential risks and other categories, from which a basic test score, an inspection score, a monitoring score and a family defect detection score are derived. An evaluation model built on these scores is then used to assess the operating state of the electric energy meter, and a corresponding rotation maintenance strategy is proposed.

  9. A Synthetic Comparator Approach to Local Evaluation of School-Based Substance Use Prevention Programming.

    PubMed

    Hansen, William B; Derzon, James H; Reese, Eric L

    2014-06-01

    We propose a method for creating groups against which outcomes of local pretest-posttest evaluations of evidence-based programs can be judged. This involves assessing pretest markers for new and previously conducted evaluations to identify groups that have high pretest similarity. A database of 802 prior local evaluations provided six summary measures for analysis. The proximity of all groups using these variables is calculated as standardized proximities having values between 0 and 1. Five methods for creating standardized proximities are demonstrated. The approach allows proximity limits to be adjusted to find sufficient numbers of synthetic comparators. Several index cases are examined to assess the numbers of groups available to serve as comparators. Results show that most local evaluations would have sufficient numbers of comparators available for estimating program effects. This method holds promise as a tool for local evaluations to estimate relative effectiveness. © The Author(s) 2012.

  10. Uncertainty Modeling and Evaluation of CMM Task Oriented Measurement Based on SVCMM

    NASA Astrophysics Data System (ADS)

    Li, Hongli; Chen, Xiaohuai; Cheng, Yinbao; Liu, Houde; Wang, Hanbin; Cheng, Zhenying; Wang, Hongtao

    2017-10-01

Due to the variety of measurement tasks and the complexity of the errors of coordinate measuring machines (CMMs), it is very difficult to reasonably evaluate the uncertainty of CMM measurement results, which has limited the application of CMMs. Task-oriented uncertainty evaluation has therefore become a difficult problem to solve. Taking dimension measurement as an example, this paper puts forward a practical method for uncertainty modeling and evaluation of CMM task-oriented measurement (the SVCMM method). The method makes full use of the CMM acceptance or reinspection report and the Monte Carlo computer simulation method (MCM). An evaluation example is presented, and the results are evaluated by the traditional method given in the GUM and by the proposed method, respectively. The SVCMM method is verified to be feasible and practical, and it can help CMM users conveniently complete measurement uncertainty evaluation using a single measurement cycle.
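    A minimal Monte Carlo (MCM) sketch of task-oriented length-measurement uncertainty follows, in the spirit of propagating error terms taken from an acceptance or reinspection report; all error magnitudes and distributions below are illustrative assumptions, not the SVCMM model itself.

```python
import numpy as np

def mcm_length_uncertainty(n_trials=200_000, seed=0):
    """Monte Carlo evaluation of an illustrative 100 mm length measurement,
    propagating assumed error contributions (values in micrometres)."""
    rng = np.random.default_rng(seed)
    nominal = 100.0                                    # mm, measured length
    mpe_linear = rng.uniform(-1.5, 1.5, n_trials)      # length-dependent MPE band
    probing = rng.normal(0.0, 0.4, n_trials)           # probing error
    # Thermal expansion of a 100 mm steel length (11.5 µm/m/K), 0.3 K temp. std.
    thermal = 100.0 * 11.5e-3 * rng.normal(0.0, 0.3, n_trials)
    repeatability = rng.normal(0.0, 0.3, n_trials)
    total_um = mpe_linear + probing + thermal + repeatability
    measured = nominal + total_um / 1000.0
    u = measured.std(ddof=1) * 1000.0                  # standard uncertainty, µm
    lo, hi = np.percentile(measured, [2.5, 97.5])      # 95 % coverage interval
    return u, (lo, hi)

u, interval = mcm_length_uncertainty()
print(f"standard uncertainty ≈ {u:.2f} µm, 95 % interval ≈ {interval}")
```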

  11. Evaluating core technology capacity based on an improved catastrophe progression method: the case of automotive industry

    NASA Astrophysics Data System (ADS)

    Zhao, Shijia; Liu, Zongwei; Wang, Yue; Zhao, Fuquan

    2017-01-01

Subjectivity usually causes large fluctuations in evaluation results, and many scholars have attempted to establish new mathematical methods that make evaluation results consistent with actual conditions. An improved catastrophe progression method (ICPM) is constructed to overcome the defects of the original method. The improved method combines the information coherence of principal component analysis with the catastrophe progression method's freedom from index weights, giving a highly objective comprehensive evaluation. Through a systematic analysis of the factors influencing the automotive industry's core technology capacity, a comprehensive evaluation model with a hierarchical structure is established according to the different roles that different indices play in evaluating the overall goal. The ICPM is then used to evaluate the automotive industry's core technology capacity for seven typical countries, which demonstrates the effectiveness of the method.

  12. A novel orthoimage mosaic method using a weighted A∗ algorithm - Implementation and evaluation

    NASA Astrophysics Data System (ADS)

    Zheng, Maoteng; Xiong, Xiaodong; Zhu, Junfeng

    2018-04-01

The implementation and evaluation of a weighted A∗ algorithm for orthoimage mosaicking with UAV (unmanned aerial vehicle) imagery is proposed. The initial seam-line network is first generated by a standard Voronoi diagram algorithm; an edge diagram is generated based on DSM (digital surface model) data; the vertices (junction nodes of seam-lines) of the initial network are relocated if they lie on high objects (buildings, trees and other artificial structures); and the initial seam-lines are refined using the weighted A∗ algorithm based on the edge diagram and the relocated vertices. Our method was tested with three real UAV datasets, and two quantitative terms are introduced to evaluate the results. Preliminary results show that the method is suitable for regularly and irregularly aligned UAV images over most terrain types (flat or mountainous areas), and is better than the state-of-the-art method in both quality and efficiency on the test datasets.
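    A minimal weighted A* sketch for refining a seam-line over a per-pixel cost map, with the heuristic inflated by a weight factor, is given below; the grid, costs and weight are illustrative, and the Voronoi initialisation, DSM edge diagram and vertex relocation steps from the paper are not reproduced.

```python
import heapq
import itertools
import numpy as np

def weighted_a_star(cost_map, start, goal, weight=1.5):
    """Weighted A* over a per-pixel cost map (e.g. derived from an edge
    diagram): the seam-line is pushed away from high-cost pixels such as
    buildings. weight > 1 trades optimality for speed."""
    rows, cols = cost_map.shape
    heuristic = lambda p: weight * (abs(p[0] - goal[0]) + abs(p[1] - goal[1]))
    counter = itertools.count()            # tie-breaker for the heap
    open_heap = [(heuristic(start), next(counter), 0.0, start, None)]
    came_from, g_score = {}, {start: 0.0}
    while open_heap:
        _, _, g, current, parent = heapq.heappop(open_heap)
        if current in came_from:
            continue
        came_from[current] = parent
        if current == goal:
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        r, c = current
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (r + dr, c + dc)
            if 0 <= nb[0] < rows and 0 <= nb[1] < cols:
                ng = g + 1.0 + cost_map[nb]
                if ng < g_score.get(nb, float("inf")):
                    g_score[nb] = ng
                    heapq.heappush(open_heap,
                                   (ng + heuristic(nb), next(counter), ng, nb, current))
    return None

# Synthetic edge diagram: a high-cost "building" block the seam must avoid.
cost = np.zeros((60, 60))
cost[20:40, 25:35] = 50.0
seam = weighted_a_star(cost, start=(0, 30), goal=(59, 30))
print("seam length:", len(seam), "max cost crossed:", max(cost[p] for p in seam))
```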

  13. WE-AB-207A-12: HLCC Based Quantitative Evaluation Method of Image Artifact in Dental CBCT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Y; Wu, S; Qi, H

    Purpose: Image artifacts are usually evaluated qualitatively via visual observation of the reconstructed images, which is susceptible to subjective factors due to the lack of an objective evaluation criterion. In this work, we propose a Helgason-Ludwig consistency condition (HLCC) based evaluation method to quantify the severity of different image artifacts in dental CBCT. Methods: Our evaluation method consists of four steps: 1) acquire cone-beam CT (CBCT) projections; 2) convert the 3D CBCT projections to fan-beam projections by extracting the central-plane projection; 3) convert the fan-beam projections to parallel-beam projections using a sinogram-based or detail-based rebinning algorithm; 4) obtain the HLCC profile by integrating the parallel-beam projection per view, and calculate the wave percentage and variance of the HLCC profile, which can be used to describe the severity of image artifacts. Results: Several sets of dental CBCT projections containing only one type of artifact (i.e., geometry, scatter, beam hardening, lag and noise artifact) were simulated using gDRR, a GPU tool developed for efficient, accurate, and realistic simulation of CBCT projections. These simulated CBCT projections were used to test our proposed method. The HLCC profile wave percentage and variance induced by geometry distortion are about 3∼21 times and 16∼393 times as large as those of the artifact-free projection, respectively. The increase factors of wave percentage and variance are 6 and 133 times for beam hardening, 19 and 1184 times for scatter, and 4 and 16 times for lag artifacts, respectively. In contrast, for the noisy projection the wave percentage, variance and inconsistency level are almost the same as those of the noise-free one. Conclusion: We have proposed a quantitative evaluation method of image artifacts based on HLCC theory. According to our simulation results, the severity of the different artifact types is found to be in the following order: Scatter > Geometry > Beam hardening > Lag > Noise > Artifact-free in dental CBCT.
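
    A hedged sketch of the zeroth-order consistency idea behind step 4: for an ideal parallel-beam sinogram, the integral of each view over the detector is the same for every view, so the spread of that per-view profile (summarized here as a wave percentage and a variance, on synthetic data) grows with artifact severity. The scoring numbers are illustrative only.

    ```python
    import numpy as np

    def hlcc_consistency(parallel_sinogram):
        """Zeroth-order HLCC check: the integral of a parallel-beam projection over the
        detector should be identical for every view; deviations indicate inconsistency."""
        profile = parallel_sinogram.sum(axis=1)          # one value per view
        wave_percentage = (profile.max() - profile.min()) / profile.mean() * 100.0
        variance = profile.var()
        return profile, wave_percentage, variance

    # Synthetic consistent sinogram (identical view integrals) versus a perturbed copy
    rng = np.random.default_rng(1)
    clean = np.tile(rng.random(256), (180, 1))           # 180 views x 256 detector bins
    corrupt = clean * (1 + 0.05 * np.sin(np.linspace(0, np.pi, 180)))[:, None]
    for name, s in (("clean", clean), ("corrupt", corrupt)):
        _, wp, var = hlcc_consistency(s)
        print(f"{name}: wave% = {wp:.3f}, variance = {var:.5f}")
    ```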

  14. A Wireless Sensor Network-Based Portable Vehicle Detector Evaluation System

    PubMed Central

    Yoo, Seong-eun

    2013-01-01

    In an upcoming smart transportation environment, performance evaluations of existing Vehicle Detection Systems are crucial to maintain their accuracy. The existing evaluation method for Vehicle Detection Systems is based on a wired Vehicle Detection System reference and a video recorder, which must be operated and analyzed by capable traffic experts. However, this conventional evaluation system has many disadvantages. It is inconvenient to deploy, the evaluation takes a long time, and it lacks scalability and objectivity. To improve the evaluation procedure, this paper proposes a Portable Vehicle Detector Evaluation System based on wireless sensor networks. We describe both the architecture and design of a Vehicle Detector Evaluation System and the implementation results, focusing on the wireless sensor networks and methods for traffic information measurement. With the help of wireless sensor networks and automated analysis, our Vehicle Detector Evaluation System can evaluate a Vehicle Detection System conveniently and objectively. The extensive evaluations of our Vehicle Detector Evaluation System show that it can measure the traffic information such as volume counts and speed with over 98% accuracy. PMID:23344388

  15. A wireless sensor network-based portable vehicle detector evaluation system.

    PubMed

    Yoo, Seong-eun

    2013-01-17

    In an upcoming smart transportation environment, performance evaluations of existing Vehicle Detection Systems are crucial to maintain their accuracy. The existing evaluation method for Vehicle Detection Systems is based on a wired Vehicle Detection System reference and a video recorder, which must be operated and analyzed by capable traffic experts. However, this conventional evaluation system has many disadvantages. It is inconvenient to deploy, the evaluation takes a long time, and it lacks scalability and objectivity. To improve the evaluation procedure, this paper proposes a Portable Vehicle Detector Evaluation System based on wireless sensor networks. We describe both the architecture and design of a Vehicle Detector Evaluation System and the implementation results, focusing on the wireless sensor networks and methods for traffic information measurement. With the help of wireless sensor networks and automated analysis, our Vehicle Detector Evaluation System can evaluate a Vehicle Detection System conveniently and objectively. The extensive evaluations of our Vehicle Detector Evaluation System show that it can measure the traffic information such as volume counts and speed with over 98% accuracy.

  16. Evaluating Pillar Industry’s Transformation Capability: A Case Study of Two Chinese Steel-Based Cities

    PubMed Central

    Li, Zhidong; Marinova, Dora; Guo, Xiumei; Gao, Yuan

    2015-01-01

    Many steel-based cities in China were established between the 1950s and 1960s. After more than half a century of development and boom, these cities are starting to decline and industrial transformation is urgently needed. This paper focuses on evaluating the transformation capability of resource-based cities building an evaluation model. Using Text Mining and the Document Explorer technique as a way of extracting text features, the 200 most frequently used words are derived from 100 publications related to steel- and other resource-based cities. The Expert Evaluation Method (EEM) and Analytic Hierarchy Process (AHP) techniques are then applied to select 53 indicators, determine their weights and establish an index system for evaluating the transformation capability of the pillar industry of China’s steel-based cities. Using real data and expert reviews, the improved Fuzzy Relation Matrix (FRM) method is applied to two case studies in China, namely Panzhihua and Daye, and the evaluation model is developed using Fuzzy Comprehensive Evaluation (FCE). The cities’ abilities to carry out industrial transformation are evaluated with concerns expressed for the case of Daye. The findings have policy implications for the potential and required industrial transformation in the two selected cities and other resource-based towns. PMID:26422266

  17. Evaluating Pillar Industry's Transformation Capability: A Case Study of Two Chinese Steel-Based Cities.

    PubMed

    Li, Zhidong; Marinova, Dora; Guo, Xiumei; Gao, Yuan

    2015-01-01

    Many steel-based cities in China were established between the 1950s and 1960s. After more than half a century of development and boom, these cities are starting to decline and industrial transformation is urgently needed. This paper focuses on evaluating the transformation capability of resource-based cities building an evaluation model. Using Text Mining and the Document Explorer technique as a way of extracting text features, the 200 most frequently used words are derived from 100 publications related to steel- and other resource-based cities. The Expert Evaluation Method (EEM) and Analytic Hierarchy Process (AHP) techniques are then applied to select 53 indicators, determine their weights and establish an index system for evaluating the transformation capability of the pillar industry of China's steel-based cities. Using real data and expert reviews, the improved Fuzzy Relation Matrix (FRM) method is applied to two case studies in China, namely Panzhihua and Daye, and the evaluation model is developed using Fuzzy Comprehensive Evaluation (FCE). The cities' abilities to carry out industrial transformation are evaluated with concerns expressed for the case of Daye. The findings have policy implications for the potential and required industrial transformation in the two selected cities and other resource-based towns.
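
    As a hedged illustration of the fuzzy comprehensive evaluation step (not the authors' 53-indicator model), the sketch below combines a weight vector, such as one produced by EEM/AHP, with a small fuzzy relation matrix using the weighted-average composition operator; all numbers are invented.

    ```python
    import numpy as np

    # Hypothetical example: 4 indicators, 5 evaluation grades (very low .. very high)
    W = np.array([0.35, 0.25, 0.25, 0.15])          # indicator weights (e.g. from AHP/EEM)
    R = np.array([                                   # fuzzy relation matrix: membership of
        [0.1, 0.2, 0.4, 0.2, 0.1],                   # each indicator in each grade
        [0.0, 0.1, 0.3, 0.4, 0.2],
        [0.2, 0.3, 0.3, 0.2, 0.0],
        [0.1, 0.1, 0.2, 0.4, 0.2],
    ])

    B = W @ R                                        # weighted-average composition operator
    B /= B.sum()                                     # normalise the evaluation vector
    grades = ["very low", "low", "medium", "high", "very high"]
    print(dict(zip(grades, B.round(3))))
    print("overall grade:", grades[int(B.argmax())])
    ```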

  18. FE-ANN based modeling of 3D Simple Reinforced Concrete Girders for Objective Structural Health Evaluation : Tech Transfer Summary

    DOT National Transportation Integrated Search

    2017-06-01

    The objective of this study was to develop an objective, quantitative method for evaluating damage to bridge girders by using artificial neural networks (ANNs). This evaluation method, which is a supplement to visual inspection, requires only the res...

  19. A Survey of Model Evaluation Approaches with a Tutorial on Hierarchical Bayesian Methods

    ERIC Educational Resources Information Center

    Shiffrin, Richard M.; Lee, Michael D.; Kim, Woojae; Wagenmakers, Eric-Jan

    2008-01-01

    This article reviews current methods for evaluating models in the cognitive sciences, including theoretically based approaches, such as Bayes factors and minimum description length measures; simulation approaches, including model mimicry evaluations; and practical approaches, such as validation and generalization measures. This article argues…

  20. Evaluating Health Information Systems Using Ontologies

    PubMed Central

    Anderberg, Peter; Larsson, Tobias C; Fricker, Samuel A; Berglund, Johan

    2016-01-01

    Background There are several frameworks that attempt to address the challenges of evaluation of health information systems by offering models, methods, and guidelines about what to evaluate, how to evaluate, and how to report the evaluation results. Model-based evaluation frameworks usually suggest universally applicable evaluation aspects but do not consider case-specific aspects. On the other hand, evaluation frameworks that are case specific, by eliciting user requirements, limit their output to the evaluation aspects suggested by the users in the early phases of system development. In addition, these case-specific approaches extract different sets of evaluation aspects from each case, making it challenging to collectively compare, unify, or aggregate the evaluation of a set of heterogeneous health information systems. Objectives The aim of this paper is to find a method capable of suggesting evaluation aspects for a set of one or more health information systems—whether similar or heterogeneous—by organizing, unifying, and aggregating the quality attributes extracted from those systems and from an external evaluation framework. Methods On the basis of the available literature in semantic networks and ontologies, a method (called Unified eValuation using Ontology; UVON) was developed that can organize, unify, and aggregate the quality attributes of several health information systems into a tree-style ontology structure. The method was extended to integrate its generated ontology with the evaluation aspects suggested by model-based evaluation frameworks. An approach was developed to extract evaluation aspects from the ontology that also considers evaluation case practicalities such as the maximum number of evaluation aspects to be measured or their required degree of specificity. The method was applied and tested in Future Internet Social and Technological Alignment Research (FI-STAR), a project of 7 cloud-based eHealth applications that were developed and deployed across European Union countries. Results The relevance of the evaluation aspects created by the UVON method for the FI-STAR project was validated by the corresponding stakeholders of each case. These evaluation aspects were extracted from a UVON-generated ontology structure that reflects both the internally declared required quality attributes in the 7 eHealth applications of the FI-STAR project and the evaluation aspects recommended by the Model for ASsessment of Telemedicine applications (MAST) evaluation framework. The extracted evaluation aspects were used to create questionnaires (for the corresponding patients and health professionals) to evaluate each individual case and the whole of the FI-STAR project. Conclusions The UVON method can provide a relevant set of evaluation aspects for a heterogeneous set of health information systems by organizing, unifying, and aggregating the quality attributes through ontological structures. Those quality attributes can be either suggested by evaluation models or elicited from the stakeholders of those systems in the form of system requirements. The method continues to be systematic, context sensitive, and relevant across a heterogeneous set of health information systems. PMID:27311735

  1. Montessori education: a review of the evidence base

    NASA Astrophysics Data System (ADS)

    Marshall, Chloë

    2017-10-01

    The Montessori educational method has existed for over 100 years, but evaluations of its effectiveness are scarce. This review paper has three aims, namely to (1) identify some key elements of the method, (2) review existing evaluations of Montessori education, and (3) review studies that do not explicitly evaluate Montessori education but which evaluate the key elements identified in (1). The goal of the paper is therefore to provide a review of the evidence base for Montessori education, with the dual aspirations of stimulating future research and helping teachers to better understand whether and why Montessori education might be effective.

  2. Development of a specification for flexible base construction.

    DOT National Transportation Integrated Search

    2014-01-01

    The Texas Department of Transportation (TxDOT) currently uses Item 247 Flexible Base to specify a : pavement foundation course. The goal of this project was to evaluate the current method of base course : acceptance and investigate methods to r...

  3. Measurement System Analyses - Gauge Repeatability and Reproducibility Methods

    NASA Astrophysics Data System (ADS)

    Cepova, Lenka; Kovacikova, Andrea; Cep, Robert; Klaput, Pavel; Mizera, Ondrej

    2018-02-01

    The submitted article focuses on a detailed explanation of the average and range method (Automotive Industry Action Group, Measurement System Analysis approach) and of the honest Gauge Repeatability and Reproducibility method (Evaluating the Measurement Process approach). The measured data (thickness of plastic parts) were evaluated by both methods and their results were compared on the basis of numerical evaluation. Both methods were additionally compared and their advantages and disadvantages were discussed. One difference between the two methods is the calculation of variation components. The AIAG method calculates the variation components based on standard deviation (so the sum of variation components does not give 100 %), whereas the honest GRR study calculates the variation components based on variance, where the sum of all variation components (part-to-part variation, EV & AV) gives the total variation of 100 %. Acceptance of both methods in the professional community, future use, and acceptance by the manufacturing industry were also discussed. Nowadays, the AIAG approach is the leading method in the industry.
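
    A small illustration of the variance-component point made above, using invented numbers: shares computed from standard deviations (AIAG-style) do not sum to 100 %, while shares computed from variances do.

    ```python
    import numpy as np

    # Hypothetical variance components from a GRR study (all in squared units)
    components = {"part-to-part": 0.80, "repeatability (EV)": 0.15, "reproducibility (AV)": 0.05}
    total_var = sum(components.values())
    total_sd = np.sqrt(total_var)

    sd_shares = {k: np.sqrt(v) / total_sd * 100 for k, v in components.items()}   # AIAG-style
    var_shares = {k: v / total_var * 100 for k, v in components.items()}          # honest GRR

    for name in components:
        print(f"{name:22s}  %SD = {sd_shares[name]:5.1f}   %Var = {var_shares[name]:5.1f}")
    print(f"sum of %SD = {sum(sd_shares.values()):.1f}, sum of %Var = {sum(var_shares.values()):.1f}")
    ```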

  4. An Evaluation of Teaching Introductory Geomorphology Using Computer-based Tools.

    ERIC Educational Resources Information Center

    Wentz, Elizabeth A.; Vender, Joann C.; Brewer, Cynthia A.

    1999-01-01

    Compares student reactions to traditional teaching methods and an approach where computer-based tools (GEODe CD-ROM and GIS-based exercises) were either integrated with or replaced the traditional methods. Reveals that the students found both of these tools valuable forms of instruction when used in combination with the traditional methods. (CMK)

  5. Evidence-Based Indicators of Neuropsychological Change in the Individual Patient: Relevant Concepts and Methods

    PubMed Central

    Duff, Kevin

    2012-01-01

    Repeated assessments are a relatively common occurrence in clinical neuropsychology. The current paper will review some of the relevant concepts (e.g., reliability, practice effects, alternate forms) and methods (e.g., reliable change index, standardized regression-based change) that are used in repeated neuropsychological evaluations. The focus will be on the understanding and application of these concepts and methods in the evaluation of the individual patient through examples. Finally, some future directions for assessing change will be described. PMID:22382384
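
    A hedged worked example of one of the concepts named above, the reliable change index in its Jacobson-Truax form; the reliability, normative SD and test scores are invented.

    ```python
    import math

    def reliable_change_index(x1, x2, sd_baseline, test_retest_r):
        """Jacobson-Truax style RCI: change divided by the standard error of the difference."""
        se_measurement = sd_baseline * math.sqrt(1 - test_retest_r)
        se_difference = se_measurement * math.sqrt(2)
        return (x2 - x1) / se_difference

    # Illustrative numbers: baseline 45, retest 52, normative SD 10, reliability 0.85
    rci = reliable_change_index(45, 52, 10, 0.85)
    print(round(rci, 2), "-> reliable change" if abs(rci) > 1.96 else "-> within measurement error")
    ```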

  6. A simulation-based evaluation of methods for inferring linear barriers to gene flow

    Treesearch

    Christopher Blair; Dana E. Weigel; Matthew Balazik; Annika T. H. Keeley; Faith M. Walker; Erin Landguth; Sam Cushman; Melanie Murphy; Lisette Waits; Niko Balkenhol

    2012-01-01

    Different analytical techniques used on the same data set may lead to different conclusions about the existence and strength of genetic structure. Therefore, reliable interpretation of the results from different methods depends on the efficacy and reliability of different statistical methods. In this paper, we evaluated the performance of multiple analytical methods to...

  7. STATUS REPORT ON THE EVALUATION OF THE ALTERNATIVE ASBESTOS CONTROL METHOD – A COMPARISON TO THE NESHAP METHOD OF DEMOLITION OF ASBESTOS CONTAINING BUILDINGS

    EPA Science Inventory

    Status Report on the Evaluation of the Alternative Asbestos Control Method – A Comparison to the NESHAP Method of Demolition of Asbestos Containing Buildings. This abstract and presentation are based, at least in part, on preliminary data and conclusions. The Alternative Asbestos...

  8. Flexible methods for segmentation evaluation: results from CT-based luggage screening.

    PubMed

    Karimi, Seemeen; Jiang, Xiaoqian; Cosman, Pamela; Martz, Harry

    2014-01-01

    Imaging systems used in aviation security include segmentation algorithms in an automatic threat recognition pipeline. The segmentation algorithms evolve in response to emerging threats and changing performance requirements. Analysis of segmentation algorithms' behavior, including the nature of errors and feature recovery, facilitates their development. However, evaluation methods from the literature provide limited characterization of the segmentation algorithms. To develop segmentation evaluation methods that measure systematic errors such as oversegmentation and undersegmentation, outliers, and overall errors. The methods must measure feature recovery and allow us to prioritize segments. We developed two complementary evaluation methods using statistical techniques and information theory. We also created a semi-automatic method to define ground truth from 3D images. We applied our methods to evaluate five segmentation algorithms developed for CT luggage screening. We validated our methods with synthetic problems and an observer evaluation. Both methods selected the same best segmentation algorithm. Human evaluation confirmed the findings. The measurement of systematic errors and prioritization helped in understanding the behavior of each segmentation algorithm. Our evaluation methods allow us to measure and explain the accuracy of segmentation algorithms.

  9. Using Videos Derived from Simulations to Support the Analysis of Spatial Awareness in Synthetic Vision Displays

    NASA Technical Reports Server (NTRS)

    Boton, Matthew L.; Bass, Ellen J.; Comstock, James R., Jr.

    2006-01-01

    The evaluation of human-centered systems can be performed using a variety of different methodologies. This paper describes a human-centered systems evaluation methodology where participants watch 5-second non-interactive videos of a system in operation before supplying judgments and subjective measures based on the information conveyed in the videos. This methodology was used to evaluate the ability of different textures and fields of view to convey spatial awareness in synthetic vision systems (SVS) displays. It produced significant results for both judgment based and subjective measures. This method is compared to other methods commonly used to evaluate SVS displays based on cost, the amount of experimental time required, experimental flexibility, and the type of data provided.

  10. 78 FR 34664 - Prospective Grant of Start-up Exclusive Evaluation License: Portable Device and Method for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-10

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES National Institutes of Health Prospective Grant of Start-up Exclusive Evaluation License: Portable Device and Method for Detecting Hematomas AGENCY: National... device and method for detecting hematomas based on near infrared light emitted perpendicularly into a...

  11. A method to evaluate performance reliability of individual subjects in laboratory research applied to work settings.

    DOT National Transportation Integrated Search

    1978-10-01

    This report presents a method that may be used to evaluate the reliability of performance of individual subjects, particularly in applied laboratory research. The method is based on analysis of variance of a tasks-by-subjects data matrix, with all sc...

  12. INTEGRATION OF SPATIAL DATA: EVALUATION OF METHODS BASED ON DATA ISSUES AND ASSESSMENT QUESTIONS

    EPA Science Inventory

    EPA's Regional Vulnerability Assessment (ReVA) Program has focused initially on the synthesis of existing data. We have used the same set of spatial data and synthesized these data using a total of 11 existing and newly developed integration methods. These methods were evaluated ...

  13. MEASUREMENT OF VOLATILE ORGANIC COMPOUNDS BY THE US ENVIRONMENTAL PROTECTION AGENCY COMPENDIUM METHOD TO-17 - EVALUATION OF PERFORMANCE CRITERIA

    EPA Science Inventory

    An evaluation of performance criteria for US Environmental Protection Agency Compendium Method TO-17 for monitoring volatile organic compounds (VOCs) in air has been accomplished. The method is a solid adsorbent-based sampling and analytical procedure including performance crit...

  14. Automatic identification of the reference system based on the fourth ventricular landmarks in T1-weighted MR images.

    PubMed

    Fu, Yili; Gao, Wenpeng; Chen, Xiaoguang; Zhu, Minwei; Shen, Weigao; Wang, Shuguo

    2010-01-01

    The reference system based on the fourth ventricular landmarks (including the fastigial point and ventricular floor plane) is used in medical image analysis of the brain stem. The objective of this study was to develop a rapid, robust, and accurate method for the automatic identification of this reference system on T1-weighted magnetic resonance images. The fully automated method developed in this study consisted of four stages: preprocessing of the data set, expectation-maximization algorithm-based extraction of the fourth ventricle in the region of interest, a coarse-to-fine strategy for identifying the fastigial point, and localization of the base point. The method was evaluated qualitatively on 27 BrainWeb data sets and quantitatively on 18 Internet Brain Segmentation Repository data sets and 30 clinical scans. The results of the qualitative evaluation indicated that the method was robust to rotation, landmark variation, noise, and inhomogeneity. The results of the quantitative evaluation indicated that the method was able to identify the reference system with an accuracy of 0.7 +/- 0.2 mm for the fastigial point and 1.1 +/- 0.3 mm for the base point. It took <6 seconds for the method to identify the related landmarks on a personal computer with an Intel Core 2 6300 processor and 2 GB of random-access memory. The proposed method for the automatic identification of the reference system based on the fourth ventricular landmarks was shown to be rapid, robust, and accurate. The method has potential utility in image registration and computer-aided surgery.

  15. Hierarchical semi-numeric method for pairwise fuzzy group decision making.

    PubMed

    Marimin, M; Umano, M; Hatono, I; Tamura, H

    2002-01-01

    Gradual improvements to a single-level semi-numeric method, i.e., representation of linguistic label preferences by fuzzy set computation for pairwise fuzzy group decision making, are summarized. The method is extended to solve multiple-criteria, hierarchically structured pairwise fuzzy group decision-making problems. The problems are hierarchically structured into focus, criteria, and alternatives. Decision makers express their evaluations of criteria and of alternatives under each criterion by using linguistic labels. The labels are converted into and processed as triangular fuzzy numbers (TFNs). Evaluations of criteria yield relative criteria weights. Evaluations of the alternatives under each criterion yield a degree of preference for each alternative or a degree of satisfaction for each preference value. By using a neat ordered weighted average (OWA) or a fuzzy weighted average operator, solutions obtained for each criterion are aggregated into final solutions. The hierarchical semi-numeric method is suitable for solving larger and more complex pairwise fuzzy group decision-making problems. The proposed method has been verified and applied to some real cases and is compared to Saaty's (1996) analytic hierarchy process (AHP) method.
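
    A minimal sketch of the semi-numeric idea, assuming illustrative triangular-fuzzy-number anchors for three linguistic labels and crisp criteria weights; the labels are combined with a fuzzy weighted average and defuzzified by the centroid. This is a stand-in for the general approach, not the paper's full OWA-based procedure.

    ```python
    import numpy as np

    # Hypothetical linguistic labels as triangular fuzzy numbers (l, m, u)
    TFN = {"poor": (0.0, 0.0, 0.3), "fair": (0.2, 0.5, 0.8), "good": (0.7, 1.0, 1.0)}

    def fuzzy_weighted_average(labels, weights):
        """Fuzzy weighted average: crisp criteria weights applied to TFN-valued scores."""
        tfns = np.array([TFN[lab] for lab in labels])    # one TFN per criterion
        w = np.asarray(weights, dtype=float)
        w = w / w.sum()
        return tuple(w @ tfns)                           # weighted sum of (l, m, u) components

    def centroid(tfn):
        """Centroid defuzzification of a triangular fuzzy number."""
        return sum(tfn) / 3.0

    # One alternative rated on three criteria with invented weights
    alt_eval = fuzzy_weighted_average(["good", "fair", "good"], [0.5, 0.2, 0.3])
    print(tuple(round(v, 2) for v in alt_eval), "->", round(centroid(alt_eval), 2))
    ```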

  16. Evaluation of Automated Model Calibration Techniques for Residential Building Energy Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, J.; Polly, B.; Collis, J.

    2013-09-01

    This simulation study adapts and applies the general framework described in BESTEST-EX (Judkoff et al 2010) for self-testing residential building energy model calibration methods. BEopt/DOE-2.2 is used to evaluate four mathematical calibration methods in the context of monthly, daily, and hourly synthetic utility data for a 1960's-era existing home in a cooling-dominated climate. The home's model inputs are assigned probability distributions representing uncertainty ranges, random selections are made from the uncertainty ranges to define 'explicit' input values, and synthetic utility billing data are generated using the explicit input values. The four calibration methods evaluated in this study are: an ASHRAE 1051-RP-based approach (Reddy and Maor 2006), a simplified simulated annealing optimization approach, a regression metamodeling optimization approach, and a simple output ratio calibration approach. The calibration methods are evaluated for monthly, daily, and hourly cases; various retrofit measures are applied to the calibrated models and the methods are evaluated based on the accuracy of predicted savings, computational cost, repeatability, automation, and ease of implementation.

  17. Evaluation of Automated Model Calibration Techniques for Residential Building Energy Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, Joseph; Polly, Ben; Collis, Jon

    2013-09-01

    This simulation study adapts and applies the general framework described in BESTEST-EX (Judkoff et al 2010) for self-testing residential building energy model calibration methods. BEopt/DOE-2.2 is used to evaluate four mathematical calibration methods in the context of monthly, daily, and hourly synthetic utility data for a 1960's-era existing home in a cooling-dominated climate. The home's model inputs are assigned probability distributions representing uncertainty ranges, random selections are made from the uncertainty ranges to define "explicit" input values, and synthetic utility billing data are generated using the explicit input values. The four calibration methods evaluated in this study are: an ASHRAE 1051-RP-based approach (Reddy and Maor 2006), a simplified simulated annealing optimization approach, a regression metamodeling optimization approach, and a simple output ratio calibration approach. The calibration methods are evaluated for monthly, daily, and hourly cases; various retrofit measures are applied to the calibrated models and the methods are evaluated based on the accuracy of predicted savings, computational cost, repeatability, automation, and ease of implementation.
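
    A hedged illustration of the simplest of the four approaches named above, an output-ratio calibration: the simulated monthly energy use is scaled by the ratio of measured to simulated annual totals (all numbers invented).

    ```python
    # Measured and simulated monthly electricity use, kWh (illustrative values)
    measured_kwh  = [1120, 980, 890, 760, 880, 1310, 1540, 1500, 1210, 900, 940, 1100]
    simulated_kwh = [1000, 900, 850, 700, 820, 1200, 1400, 1380, 1100, 830, 880, 1010]

    ratio = sum(measured_kwh) / sum(simulated_kwh)          # single calibration factor
    calibrated_kwh = [round(ratio * s) for s in simulated_kwh]
    print(f"calibration ratio = {ratio:.3f}")
    print(calibrated_kwh)
    ```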

  18. Double-dictionary matching pursuit for fault extent evaluation of rolling bearing based on the Lempel-Ziv complexity

    NASA Astrophysics Data System (ADS)

    Cui, Lingli; Gong, Xiangyang; Zhang, Jianyu; Wang, Huaqing

    2016-12-01

    The quantitative diagnosis of rolling bearing fault severity is particularly crucial for making proper maintenance decisions. Aiming at the fault features of rolling bearings, a novel double-dictionary matching pursuit (DDMP) for fault extent evaluation of rolling bearings based on the Lempel-Ziv complexity (LZC) index is proposed in this paper. In order to match the features of rolling bearing faults, an impulse time-frequency dictionary and a modulation dictionary are constructed to form the double-dictionary using a parameterized function model. A novel matching pursuit method is then proposed based on the new double-dictionary. For rolling bearing vibration signals with different fault sizes, the signals are decomposed and reconstructed by the DDMP. After noise reduction and signal reconstruction, the LZC index is introduced to realize the fault extent evaluation. Applications of this method to experimental fault signals of the bearing outer race and inner race with different degrees of injury show that the proposed method can effectively realize fault extent evaluation.
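
    A hedged sketch of the LZC index used in the final step: the (denoised) signal is binarised around its median and its normalised Lempel-Ziv complexity is computed with the classic Kaspar-Schuster counting scheme. The synthetic signals below only illustrate that irregular signals score higher; the link between LZC values and specific fault sizes is established experimentally in the paper.

    ```python
    import numpy as np

    def lz_complexity(s):
        """Lempel-Ziv complexity (Kaspar-Schuster counting) of a 0/1 string."""
        n = len(s)
        c, l, i, k, k_max = 1, 1, 0, 1, 1
        while True:
            if s[i + k - 1] == s[l + k - 1]:
                k += 1
                if l + k > n:
                    c += 1
                    break
            else:
                if k > k_max:
                    k_max = k
                i += 1
                if i == l:
                    c += 1
                    l += k_max
                    if l + 1 > n:
                        break
                    i, k, k_max = 0, 1, 1
                else:
                    k = 1
        return c

    def lzc_index(signal):
        """Binarise a signal around its median and return the normalised LZC."""
        binary = "".join("1" if x > np.median(signal) else "0" for x in signal)
        n = len(binary)
        return lz_complexity(binary) * np.log2(n) / n

    t = np.linspace(0, 1, 4096)
    periodic = np.sin(2 * np.pi * 50 * t)                    # regular signal -> lower LZC
    noisy = periodic + 0.8 * np.random.default_rng(2).standard_normal(t.size)
    print(round(lzc_index(periodic), 3), round(lzc_index(noisy), 3))
    ```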

  19. A Framework for the Development of Automatic DFA Method to Minimize the Number of Components and Assembly Reorientations

    NASA Astrophysics Data System (ADS)

    Alfadhlani; Samadhi, T. M. A. Ari; Ma’ruf, Anas; Setiasyah Toha, Isa

    2018-03-01

    Assembly is a part of the manufacturing process that must be considered at the product design stage. Design for Assembly (DFA) is a method to evaluate product design in order to make products simpler, easier and quicker to assemble, so that assembly cost is reduced. This article discusses a framework for developing a computer-based DFA method. The method is expected to aid the product designer in extracting data, evaluating the assembly process, and providing recommendations for product design improvement. Ideally these three tasks are performed without an interactive process or user intervention, so that product design evaluation can be done automatically. Input for the proposed framework is a 3D solid engineering drawing. Product design evaluation is performed by: minimizing the number of components; generating assembly sequence alternatives; selecting the best assembly sequence based on the minimum number of assembly reorientations; and providing suggestions for design improvement.

  20. Grape colour phenotyping: development of a method based on the reflectance spectrum.

    PubMed

    Rustioni, Laura; Basilico, Roberto; Fiori, Simone; Leoni, Alessandra; Maghradze, David; Failla, Osvaldo

    2013-01-01

    The colour of fruit is an important quality factor for cultivar classification and phenotyping techniques. Besides subjective visual evaluation, new instruments and techniques can be used. This work aims at developing an objective, fast, easy and non-destructive method as a useful support for evaluating grape colour under different cultural and environmental conditions, as well as for breeding processes and germplasm evaluation, supporting plant characterization and biodiversity preservation. The colours of 120 grape varieties were studied using reflectance spectra. The classification was realized using cluster and discriminant analysis. Reflectance of the whole berry surface was also compared with the absorption properties of single skin extracts. A phenotyping method based on the reflectance spectra was developed, producing reliable colour classifications. A cultivar-independent index for pigment content evaluation has also been obtained. This work allowed the classification of berry colour using an objective method. Copyright © 2013 John Wiley & Sons, Ltd.

  1. Evaluating University-Industry Collaboration: The European Foundation of Quality Management Excellence Model-Based Evaluation of University-Industry Collaboration

    ERIC Educational Resources Information Center

    Kauppila, Osmo; Mursula, Anu; Harkonen, Janne; Kujala, Jaakko

    2015-01-01

    The growth in university-industry collaboration has resulted in an increasing demand for methods to evaluate it. This paper presents one way to evaluate an organization's collaborative activities based on the European Foundation of Quality Management excellence model. Success factors of collaboration are derived from literature and compared…

  2. Toward a Web Based Environment for Evaluation and Design of Pedagogical Hypermedia

    ERIC Educational Resources Information Center

    Trigano, Philippe C.; Pacurar-Giacomini, Ecaterina

    2004-01-01

    We are working on a method called CEPIAH. We propose a web-based system used to help teachers design multimedia documents and evaluate their prototypes. Our current research objective is to create a methodology to sustain educational hypermedia design and evaluation. A module is used to evaluate multimedia software applied in…

  3. Analysis and development of adjoint-based h-adaptive direct discontinuous Galerkin method for the compressible Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Cheng, Jian; Yue, Huiqiang; Yu, Shengjiao; Liu, Tiegang

    2018-06-01

    In this paper, an adjoint-based high-order h-adaptive direct discontinuous Galerkin method is developed and analyzed for the two dimensional steady state compressible Navier-Stokes equations. Particular emphasis is devoted to the analysis of the adjoint consistency for three different direct discontinuous Galerkin discretizations: including the original direct discontinuous Galerkin method (DDG), the direct discontinuous Galerkin method with interface correction (DDG(IC)) and the symmetric direct discontinuous Galerkin method (SDDG). Theoretical analysis shows the extra interface correction term adopted in the DDG(IC) method and the SDDG method plays a key role in preserving the adjoint consistency. To be specific, for the model problem considered in this work, we prove that the original DDG method is not adjoint consistent, while the DDG(IC) method and the SDDG method can be adjoint consistent with appropriate treatment of boundary conditions and correct modifications towards the underlying output functionals. The performance of those three DDG methods is carefully investigated and evaluated through typical test cases. Based on the theoretical analysis, an adjoint-based h-adaptive DDG(IC) method is further developed and evaluated, numerical experiment shows its potential in the applications of adjoint-based adaptation for simulating compressible flows.

  4. Video conference quality assessment based on cooperative sensing of video and audio

    NASA Astrophysics Data System (ADS)

    Wang, Junxi; Chen, Jialin; Tian, Xin; Zhou, Cheng; Zhou, Zheng; Ye, Lu

    2015-12-01

    This paper presents a method for video conference quality assessment based on cooperative sensing of video and audio. In this method, a proposed video quality evaluation method is used to assess video frame quality. Each video frame is divided into a noise image and a filtered image by bilateral filtering, which is similar to the behaviour of human vision and can also be seen as low-pass filtering. The audio frames are evaluated by the PEAQ algorithm. The two results are integrated to evaluate the video conference quality. A video conference database was built to test the performance of the proposed method. The objective results correlate well with MOS scores, so we conclude that the proposed method is effective in assessing video conference quality.
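
    A hedged sketch of the frame-splitting idea, assuming OpenCV's bilateral filter and an invented variance-ratio score that merely illustrates how the filtered and residual (noise) components could be turned into a per-frame number; the paper's actual scoring rule and its audio/video fusion are not reproduced here.

    ```python
    import cv2
    import numpy as np

    def frame_quality_score(frame_bgr):
        """Split a frame into a bilateral-filtered component and a residual 'noise'
        component, then score quality as 1 minus the residual-to-total variance ratio
        (illustrative scoring rule, not the paper's)."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
        filtered = cv2.bilateralFilter(gray, d=9, sigmaColor=75, sigmaSpace=75)
        noise = gray - filtered
        return float(1.0 - noise.var() / gray.var())

    # Example with a synthetic random frame (a real system would loop over decoded frames)
    rng = np.random.default_rng(5)
    frame = (rng.random((120, 160, 3)) * 255).astype(np.uint8)
    print(round(frame_quality_score(frame), 3))
    ```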

  5. An evaluation of rapid methods for monitoring vegetation characteristics of wetland bird habitat

    USGS Publications Warehouse

    Tavernia, Brian G.; Lyons, James E.; Loges, Brian W.; Wilson, Andrew; Collazo, Jaime A.; Runge, Michael C.

    2016-01-01

    Wetland managers benefit from monitoring data of sufficient precision and accuracy to assess wildlife habitat conditions and to evaluate and learn from past management decisions. For large-scale monitoring programs focused on waterbirds (waterfowl, wading birds, secretive marsh birds, and shorebirds), precision and accuracy of habitat measurements must be balanced with fiscal and logistic constraints. We evaluated a set of protocols for rapid, visual estimates of key waterbird habitat characteristics made from the wetland perimeter against estimates from (1) plots sampled within wetlands, and (2) cover maps made from aerial photographs. Estimated percent cover of annuals and perennials using a perimeter-based protocol fell within 10 percent of plot-based estimates, and percent cover estimates for seven vegetation height classes were within 20 % of plot-based estimates. Perimeter-based estimates of total emergent vegetation cover did not differ significantly from cover map estimates. Post-hoc analyses revealed evidence for observer effects in estimates of annual and perennial covers and vegetation height. Median time required to complete perimeter-based methods was less than 7 percent of the time needed for intensive plot-based methods. Our results show that rapid, perimeter-based assessments, which increase sample size and efficiency, provide vegetation estimates comparable to more intensive methods.

  6. Application of Entropy Method in River Health Evaluation Based on Aquatic Ecological Function Regionalization

    NASA Astrophysics Data System (ADS)

    Shi, Yan-ting; Liu, Jie; Wang, Peng; Zhang, Xu-nuo; Wang, Jun-qiang; Guo, Liang

    2017-05-01

    With the implementation of water environment management in key basins in China, basin monitoring and evaluation systems are in urgent need of innovation and upgrading. In view of the heavy workload of existing evaluation methods and the cumbersome calculation of multi-factor weighting methods, the idea of using the entropy method to assess river health based on aquatic ecological function regionalization was put forward. Based on monitoring data for the Songhua River from 2011 to 2015, the entropy weight method was used to calculate the weights of 9 evaluation factors at 29 monitoring sections, and the river health assessment was carried out. In the study area, the river health status of the biodiversity conservation functional area (4.111 points) was good, while the water conservation functional area (3.371 points), the habitat maintenance functional area (3.262 points), the agricultural production maintenance functional area (3.695 points) and the urban supporting functional area (3.399 points) showed light pollution.
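
    A minimal sketch of the entropy weight calculation referred to above, assuming an already normalised (larger-is-better, positive) indicator matrix; the 5×3 matrix of monitoring sections versus factors is invented.

    ```python
    import numpy as np

    def entropy_weights(X):
        """Entropy weight method: rows = monitoring sections, columns = evaluation factors.
        Assumes all indicators are positive and oriented so that larger means better."""
        P = X / X.sum(axis=0)                        # proportion of each section per factor
        m = X.shape[0]
        with np.errstate(divide="ignore", invalid="ignore"):
            plogp = np.where(P > 0, P * np.log(P), 0.0)
        e = -plogp.sum(axis=0) / np.log(m)           # entropy of each factor
        d = 1.0 - e                                  # degree of divergence
        return d / d.sum()                           # normalised factor weights

    # Hypothetical matrix: 5 monitoring sections x 3 factors
    X = np.array([[0.8, 30, 2.1],
                  [0.6, 45, 1.7],
                  [0.9, 28, 2.4],
                  [0.4, 60, 1.2],
                  [0.7, 33, 2.0]])
    print(entropy_weights(X).round(3))
    ```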

  7. Connecting Theory to Practice: Evaluating a Brain-Based Writing Curriculum

    ERIC Educational Resources Information Center

    Griffee, Dale T.

    2007-01-01

    This 10-week longitudinal evaluation study assessed a brain-based learning curriculum proposed by Smilkstein (2003) by comparing student performance in a traditional basic writing curriculum with an NHLP-oriented basic writing curriculum. The study included two classes each of experimental and traditional methods. Results of the data, gathered by…

  8. Item Difficulty in the Evaluation of Computer-Based Instruction: An Example from Neuroanatomy

    PubMed Central

    Chariker, Julia H.; Naaz, Farah; Pani, John R.

    2012-01-01

    This article reports large item effects in a study of computer-based learning of neuroanatomy. Outcome measures of the efficiency of learning, transfer of learning, and generalization of knowledge diverged by a wide margin across test items, with certain sets of items emerging as particularly difficult to master. In addition, the outcomes of comparisons between instructional methods changed with the difficulty of the items to be learned. More challenging items better differentiated between instructional methods. This set of results is important for two reasons. First, it suggests that instruction may be more efficient if sets of consistently difficult items are the targets of instructional methods particularly suited to them. Second, there is wide variation in the published literature regarding the outcomes of empirical evaluations of computer-based instruction. As a consequence, many questions arise as to the factors that may affect such evaluations. The present paper demonstrates that the level of challenge in the material that is presented to learners is an important factor to consider in the evaluation of a computer-based instructional system. PMID:22231801

  9. Item difficulty in the evaluation of computer-based instruction: an example from neuroanatomy.

    PubMed

    Chariker, Julia H; Naaz, Farah; Pani, John R

    2012-01-01

    This article reports large item effects in a study of computer-based learning of neuroanatomy. Outcome measures of the efficiency of learning, transfer of learning, and generalization of knowledge diverged by a wide margin across test items, with certain sets of items emerging as particularly difficult to master. In addition, the outcomes of comparisons between instructional methods changed with the difficulty of the items to be learned. More challenging items better differentiated between instructional methods. This set of results is important for two reasons. First, it suggests that instruction may be more efficient if sets of consistently difficult items are the targets of instructional methods particularly suited to them. Second, there is wide variation in the published literature regarding the outcomes of empirical evaluations of computer-based instruction. As a consequence, many questions arise as to the factors that may affect such evaluations. The present article demonstrates that the level of challenge in the material that is presented to learners is an important factor to consider in the evaluation of a computer-based instructional system. Copyright © 2011 American Association of Anatomists.

  10. Nonlinearity in Social Service Evaluation: A Primer on Agent-Based Modeling

    ERIC Educational Resources Information Center

    Israel, Nathaniel; Wolf-Branigin, Michael

    2011-01-01

    Measurement of nonlinearity in social service research and evaluation relies primarily on spatial analysis and, to a lesser extent, social network analysis. Recent advances in geographic methods and computing power, however, allow for the greater use of simulation methods. These advances now enable evaluators and researchers to simulate complex…

  11. A simple method used to evaluate phase-change materials based on focused-ion beam technique

    NASA Astrophysics Data System (ADS)

    Peng, Cheng; Wu, Liangcai; Rao, Feng; Song, Zhitang; Lv, Shilong; Zhou, Xilin; Du, Xiaofeng; Cheng, Yan; Yang, Pingxiong; Chu, Junhao

    2013-05-01

    A nanoscale phase-change line cell based on the focused-ion beam (FIB) technique has been proposed to evaluate the electrical properties of phase-change materials. Thanks to the FIB-deposited SiO2 hardmask, only one etching step is needed during the fabrication process of the cell. Reversible phase-change behavior is observed in line cells based on Al-Sb-Te and Ge-Sb-Te films. The low power consumption of the Al-Sb-Te based cell is explained by theoretical calculation accompanied by thermal simulation. This line cell is considered a simple and reliable means of evaluating the application prospects of a given phase-change material.

  12. Color image definition evaluation method based on deep learning method

    NASA Astrophysics Data System (ADS)

    Liu, Di; Li, YingChun

    2018-01-01

    In order to evaluate different blurring levels of color images and improve image definition evaluation, this paper proposes a no-reference color image definition evaluation method based on a deep learning framework and a BP neural network classification model. Firstly, the VGG16 network is used as the feature extractor to extract 4,096-dimensional features from the images; the extracted features and image labels are then used to train a BP neural network, which finally performs the color image definition evaluation. The method is tested with images from the CSIQ database, blurred at different levels, giving 4,000 images after processing. The 4,000 images are divided into three categories, each representing a blur level. Out of every 400 high-dimensional feature samples, 300 are used for training with the VGG16 network and BP neural network, and the remaining 100 samples are used for testing. The experimental results show that the method takes full advantage of the learning and representation capability of deep learning. In contrast to the main existing image clarity evaluation methods, which rely on manually designed and extracted features, the method in this paper extracts image features automatically and achieves excellent image quality classification accuracy on the test data set, with an accuracy rate of 96%. Moreover, the predicted quality levels of the original color images are similar to the perception of the human visual system.
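
    A hedged PyTorch sketch of the feature-extraction-plus-classifier idea described above: frozen 4,096-dimensional VGG16 features feed a small trainable classification head with three outputs, one per blur level. The head's layer sizes, the preprocessing, and the training setup are assumptions, not the paper's reported configuration.

    ```python
    import torch
    import torch.nn as nn
    from torchvision import models, transforms

    # Frozen VGG16 backbone up to the penultimate fully connected layer (4096-D output)
    vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
    vgg.eval()
    feature_extractor = nn.Sequential(vgg.features, vgg.avgpool, nn.Flatten(),
                                      *list(vgg.classifier.children())[:-1])

    # Small trainable head standing in for the BP neural network (sizes are illustrative)
    classifier = nn.Sequential(nn.Linear(4096, 256), nn.ReLU(), nn.Linear(256, 3))

    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    def predict_blur_level(pil_image):
        """Return the predicted blur-level class (0-2) for a PIL image; the head must be
        trained on labelled blurred images before the prediction is meaningful."""
        x = preprocess(pil_image).unsqueeze(0)
        with torch.no_grad():
            features = feature_extractor(x)          # frozen deep features
            logits = classifier(features)            # trainable blur-level head
        return int(logits.argmax(dim=1))
    ```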

  13. Evaluation of the efficacy of twelve mitochondrial protein-coding genes as barcodes for mollusk DNA barcoding.

    PubMed

    Yu, Hong; Kong, Lingfeng; Li, Qi

    2016-01-01

    In this study, we evaluated the efficacy of 12 mitochondrial protein-coding genes from 238 mitochondrial genomes of 140 molluscan species as potential DNA barcodes for mollusks. Three barcoding methods (distance-, monophyly- and character-based) were used for species identification. The species recovery rates based on genetic distances for the 12 genes ranged from 70.83 to 83.33%. There were no significant differences in intra- or interspecific variability among the 12 genes. The monophyly and character-based methods provided higher resolution than the distance-based method in species delimitation; especially in closely related taxa, the character-based method showed some advantages. The results suggested that, besides the standard COI barcode, the other 11 mitochondrial protein-coding genes could also potentially be used as molecular diagnostics for molluscan species discrimination. Our results also showed that combining mitochondrial genes did not enhance the efficacy of species identification, and a single mitochondrial gene would be fully competent.
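
    A hedged toy example of the distance-based identification criterion, the simplest of the three methods named above: a query sequence is assigned to the species of its nearest reference when the uncorrected p-distance falls under a threshold. Sequences, species assignments and the 2 % cut-off are illustrative only.

    ```python
    def p_distance(a, b):
        """Proportion of differing sites between two aligned sequences (ignoring gaps)."""
        pairs = [(x, y) for x, y in zip(a, b) if x != "-" and y != "-"]
        return sum(x != y for x, y in pairs) / len(pairs)

    def identify(query, references, threshold=0.02):
        """Assign the query to the species of its nearest reference if within the threshold."""
        best_species, best_d = None, float("inf")
        for species, seq in references:
            d = p_distance(query, seq)
            if d < best_d:
                best_species, best_d = species, d
        return (best_species if best_d <= threshold else "no match"), best_d

    refs = [("Crassostrea gigas", "ATGGCATTAGCCTTAGCA"),
            ("Mytilus edulis",    "ATGGCGTTGGCTTTGGCT")]
    print(identify("ATGGCATTAGCCTTAGCA", refs))   # -> ('Crassostrea gigas', 0.0)
    ```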

  14. Assessment of ecological passages along road networks within the Mediterranean forest using GIS-based multi criteria evaluation approach.

    PubMed

    Gülci, Sercan; Akay, Abdullah Emin

    2015-12-01

    Major roads cause a barrier effect and fragmentation of wildlife habitats that are suitable places for feeding, mating, socializing, and hiding. Due to wildlife collisions (Wc), human-wildlife conflicts result in lost lives and loss of biodiversity. Geographical information system (GIS)-based multi-criteria evaluation (MCE) methods have been successfully used in short-term planning of road networks considering wild animals. Recently, wildlife passages have been effectively utilized as road engineering structures that provide quick and reliable solutions for traffic safety and wildlife conservation problems. GIS-based MCE methods provide decision makers with optimum locations for ecological passages based on habitat suitability models (HSMs) that classify areas according to the ecological requirements of target species. In this study, ecological passages along Motorway 52 within forested areas of the Mediterranean city of Osmaniye in Turkey were evaluated. Firstly, an HSM coupled with nine eco-geographic decision variables was developed based on the ecological requirements of roe deer (Capreolus capreolus), chosen as the target species. The specified decision variables were then evaluated using the GIS-based weighted linear combination (WLC) method to estimate movement corridors and mitigation points along the motorway. In the solution process, two linkage nodes were evaluated for eco-passages, determined from the least-cost movement corridor intersecting the motorway. One of the passages was identified as a natural wildlife overpass, while the other was proposed as an underpass construction. The results indicate that computer-based models provide accurate and quick solutions for positioning ecological passages to reduce the environmental effects of road networks on wild animals.
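
    A minimal sketch of a weighted linear combination over standardised criterion rasters; the criteria, weights and grid values below are invented, whereas in the study the variables and their weights come from the roe deer habitat suitability model.

    ```python
    import numpy as np

    # Three standardised criterion rasters (0 = unsuitable, 1 = ideal), illustrative values
    rng = np.random.default_rng(3)
    distance_to_road  = rng.random((4, 5))
    forest_cover      = rng.random((4, 5))
    distance_to_water = rng.random((4, 5))

    criteria = np.stack([distance_to_road, forest_cover, distance_to_water])
    weights = np.array([0.5, 0.3, 0.2])                      # e.g. from pairwise comparison

    suitability = np.tensordot(weights, criteria, axes=1)    # per-cell weighted sum
    best_cell = np.unravel_index(suitability.argmax(), suitability.shape)
    print("most suitable cell (row, col):", best_cell)
    ```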

  15. Study on evaluation of construction reliability for engineering project based on fuzzy language operator

    NASA Astrophysics Data System (ADS)

    Shi, Yu-Fang; Ma, Yi-Yi; Song, Ping-Ping

    2018-03-01

    System reliability theory has been a research hotspot of management science and systems engineering in recent years, and construction reliability is useful for the quantitative evaluation of the project management level. Based on reliability theory and the target system of engineering project management, the definition of construction reliability is introduced. Using fuzzy mathematics theory and the language operator, the value space of construction reliability is divided into seven fuzzy subsets; correspondingly, seven membership functions and fuzzy evaluation intervals are obtained through the operation of the language operator, which provides the method and parameter basis for the evaluation of construction reliability. This method is shown to be scientific and reasonable for construction conditions and a useful contribution to the theory and methods of engineering project system reliability research.

  16. Nondestructive equipment study

    NASA Technical Reports Server (NTRS)

    1985-01-01

    Identification of existing Nondestructive Evaluation (NDE) methods that could be used in a low Earth orbit environment; evaluation of each method with respect to the set of criteria called out in the statement of work; selection of the most promising NDE methods for further evaluation; use of selected NDE methods to test samples of pressure vessel materials in a vacuum; pressure testing of a complex monolithic pressure vessel with known flaws using acoustic emissions in a vacuum; and recommendations for further studies based on analysis and testing are covered.

  17. Automated Glioblastoma Segmentation Based on a Multiparametric Structured Unsupervised Classification

    PubMed Central

    Juan-Albarracín, Javier; Fuster-Garcia, Elies; Manjón, José V.; Robles, Montserrat; Aparici, F.; Martí-Bonmatí, L.; García-Gómez, Juan M.

    2015-01-01

    Automatic brain tumour segmentation has become a key component of future brain tumour treatment. Currently, most brain tumour segmentation approaches arise from the supervised learning standpoint, which requires a labelled training dataset from which to infer the models of the classes. The performance of these models is directly determined by the size and quality of the training corpus, whose retrieval becomes a tedious and time-consuming task. Unsupervised approaches, on the other hand, avoid these limitations but often do not reach results comparable to the supervised methods. In this sense, we propose an automated unsupervised method for brain tumour segmentation based on anatomical Magnetic Resonance (MR) images. Four unsupervised classification algorithms, grouped by their structured or non-structured condition, were evaluated within our pipeline. As non-structured algorithms we evaluated K-means, Fuzzy K-means and the Gaussian Mixture Model (GMM), whereas as a structured classification algorithm we evaluated the Gaussian Hidden Markov Random Field (GHMRF). An automated postprocessing step based on a statistical approach supported by tissue probability maps is proposed to automatically identify the tumour classes after segmentation. We evaluated our brain tumour segmentation method with the public BRAin Tumor Segmentation (BRATS) 2013 Test and Leaderboard datasets. Our approach based on the GMM model improves on the results obtained by most of the supervised methods evaluated on the Leaderboard set and reaches second position in the ranking. Our variant based on the GHMRF achieves first position in the Test ranking of the unsupervised approaches and seventh position in the general Test ranking, which confirms the method as a viable alternative for brain tumour segmentation. PMID:25978453
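
    A hedged sketch of the non-structured GMM variant only: multiparametric voxel intensities are clustered into K classes with a Gaussian mixture. The synthetic feature matrix stands in for real MR voxels, and the preprocessing, MRF structure and probability-map postprocessing of the paper are not included.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(4)
    n_voxels, n_channels, K = 5000, 4, 5          # e.g. T1, T1c, T2, FLAIR per voxel; 5 classes

    # Synthetic stand-in for multiparametric voxel features drawn from K clusters
    features = np.vstack([rng.normal(loc=c, scale=0.4, size=(n_voxels // K, n_channels))
                          for c in range(K)])

    gmm = GaussianMixture(n_components=K, covariance_type="full", random_state=0)
    labels = gmm.fit_predict(features)            # unsupervised tissue/tumour class per voxel
    print(np.bincount(labels))                    # number of voxels assigned to each class
    ```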

  18. Type-2 fuzzy set extension of DEMATEL method combined with perceptual computing for decision making

    NASA Astrophysics Data System (ADS)

    Hosseini, Mitra Bokaei; Tarokh, Mohammad Jafar

    2013-05-01

    Most decision making methods used to evaluate a system or demonstrate its weak and strong points are based on fuzzy sets and evaluate the criteria with words that are modeled as fuzzy sets. The ambiguity and vagueness of words, and the different perceptions of a word, are not considered in these methods. For this reason, decision making methods that consider the perceptions of decision makers are desirable. Perceptual computing is a subjective judgment method built on the observation that words mean different things to different people. This method models words with interval type-2 fuzzy sets, which account for the uncertainty of words. In addition, there are interrelations and dependencies between decision making criteria in the real world; therefore, methods that cannot consider these relations are not feasible in some situations. The Decision-Making Trial and Evaluation Laboratory (DEMATEL) method considers the interrelations between decision making criteria. The current study combined DEMATEL and perceptual computing in order to improve decision making methods. For this purpose, the fuzzy DEMATEL method was extended to type-2 fuzzy sets in order to obtain the weights of dependent criteria based on words. The application of the proposed method is presented for knowledge management evaluation criteria.

  19. Discovering the Unknown: Improving Detection of Novel Species and Genera from Short Reads

    DOE PAGES

    Rosen, Gail L.; Polikar, Robi; Caseiro, Diamantino A.; ...

    2011-01-01

    High-throughput sequencing technologies enable metagenome profiling, the simultaneous sequencing of multiple microbial species present within an environmental sample. Since metagenomic data include sequence fragments (“reads”) from organisms that are absent from any database, new algorithms must be developed for the identification and annotation of novel sequence fragments. Homology-based techniques have been modified to detect novel species and genera, but composition-based methods have not been adapted. We develop a detection technique that can discriminate between “known” and “unknown” taxa, which can be used with composition-based methods, as well as a hybrid method. Unlike previous studies, we rigorously evaluate all algorithms for their ability to detect novel taxa. First, we show that the integration of a detector with a composition-based method performs significantly better than homology-based methods for the detection of novel species and genera, with best performance at finer taxonomic resolutions. Most importantly, we evaluate all the algorithms by introducing an “unknown” class and show that the modified version of PhymmBL has similar or better overall classification performance than the other modified algorithms, especially at the species level and for ultrashort reads. Finally, we evaluate the performance of several algorithms on a real acid mine drainage dataset.

  20. Reproducibility measurements of three methods for calculating in vivo MR-based knee kinematics.

    PubMed

    Lansdown, Drew A; Zaid, Musa; Pedoia, Valentina; Subburaj, Karupppasamy; Souza, Richard; Benjamin, C; Li, Xiaojuan

    2015-08-01

    To describe three quantification methods for magnetic resonance imaging (MRI)-based knee kinematic evaluation and to report on the reproducibility of these algorithms. T2 -weighted, fast-spin echo images were obtained of the bilateral knees in six healthy volunteers. Scans were repeated for each knee after repositioning to evaluate protocol reproducibility. Semiautomatic segmentation defined regions of interest for the tibia and femur. The posterior femoral condyles and diaphyseal axes were defined using the previously defined tibia and femur. All segmentation was performed twice to evaluate segmentation reliability. Anterior tibial translation (ATT) and internal tibial rotation (ITR) were calculated using three methods: a tibial-based registration system, a combined tibiofemoral-based registration method with all manual segmentation, and a combined tibiofemoral-based registration method with automatic definition of condyles and axes. Intraclass correlation coefficients and standard deviations across multiple measures were determined. Reproducibility of segmentation was excellent (ATT = 0.98; ITR = 0.99) for both combined methods. ATT and ITR measurements were also reproducible across multiple scans in the combined registration measurements with manual (ATT = 0.94; ITR = 0.94) or automatic (ATT = 0.95; ITR = 0.94) condyles and axes. The combined tibiofemoral registration with automatic definition of the posterior femoral condyle and diaphyseal axes allows for improved knee kinematics quantification with excellent in vivo reproducibility. © 2014 Wiley Periodicals, Inc.

  1. Evaluation of the environmental impact of Brownfield remediation options: comparison of two life cycle assessment-based evaluation tools.

    PubMed

    Cappuyns, Valérie; Kessen, Bram

    2012-01-01

    The choice between different options for the remediation of a contaminated site traditionally relies on economic, technical and regulatory criteria without consideration of the environmental impact of the soil remediation process itself. In the present study, the environmental impact assessment of two potential soil remediation techniques (excavation and off-site cleaning, and in situ steam extraction) was performed using two life cycle assessment (LCA)-based evaluation tools, namely the REC (risk reduction, environmental merit and cost) method and the ReCiPe method. The comparison and evaluation of the different tools used to estimate the environmental impact of Brownfield remediation were based on a case study which consisted of the remediation of a former oil and fat processing plant. For the environmental impact assessment, both the REC and ReCiPe methods result in a single score for the environmental impact of the soil remediation process and allow the same conclusion to be drawn: excavation and off-site cleaning has a more pronounced environmental impact than in situ soil remediation by means of steam extraction. The ReCiPe method takes into account more impact categories, but is also more complex to work with and needs more input data. Within the routine evaluation of soil remediation alternatives, a detailed LCA evaluation will often be too time consuming and costly, and the estimation of the environmental impact with the REC method will in most cases be sufficient. The case study worked out in this paper is intended to provide a basis for a better-founded selection of soil remediation technologies based on a more detailed assessment of the secondary impact of soil remediation.

  2. Caught Ya! A School-Based Practical Activity to Evaluate the Capture-Mark-Release-Recapture Method

    ERIC Educational Resources Information Center

    Kingsnorth, Crawford; Cruickshank, Chae; Paterson, David; Diston, Stephen

    2017-01-01

    The capture-mark-release-recapture method provides a simple way to estimate population size. However, when used as part of ecological sampling, this method does not easily allow an opportunity to evaluate the accuracy of the calculation because the actual population size is unknown. Here, we describe a method that can be used to measure the…
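
    The population estimate this activity evaluates is the standard Lincoln-Petersen calculation; a minimal sketch with made-up classroom numbers (the bead counts are purely illustrative):

    ```python
    def lincoln_petersen(marked_first, caught_second, recaptured):
        """Classic Lincoln-Petersen population estimate: N = M * C / R."""
        if recaptured == 0:
            raise ValueError("No recaptures: the estimate is undefined.")
        return marked_first * caught_second / recaptured

    # Hypothetical classroom run: 50 beads marked, 40 drawn later, 8 of them marked.
    print(lincoln_petersen(marked_first=50, caught_second=40, recaptured=8))  # -> 250.0
    ```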

  3. Development of the local magnification method for quantitative evaluation of endoscope geometric distortion

    NASA Astrophysics Data System (ADS)

    Wang, Quanzeng; Cheng, Wei-Chung; Suresh, Nitin; Hua, Hong

    2016-05-01

    With improved diagnostic capabilities and complex optical designs, endoscopic technologies are advancing. As one of several important optical performance characteristics, geometric distortion can negatively affect size estimation and feature identification during diagnosis. Therefore, a quantitative and simple distortion evaluation method is imperative for both the endoscopic industry and medical device regulatory agencies; however, no such method is available yet. While image correction techniques are rather mature, they depend heavily on computational power to process multidimensional image data with complex mathematical models, which makes them difficult to understand. Some commonly used distortion evaluation methods, such as the picture height distortion (DPH) or radial distortion (DRAD), are either too simple to describe the distortion accurately or subject to the error of deriving a reference image. We developed the basic local magnification (ML) method to evaluate endoscope distortion and, based on it, ways to calculate DPH and DRAD. The method overcomes the aforementioned limitations, has clear physical meaning over the whole field of view, and can facilitate lesion size estimation during diagnosis. Most importantly, the method can help bring endoscopic technology to market and could potentially be adopted in an international endoscope standard.
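
    As a rough illustration of the quantities discussed above, the sketch below computes a conventional radial-distortion percentage and a crude local-magnification estimate from hypothetical grid-target data; the paper's exact ML formulation may differ.

    ```python
    import numpy as np

    def radial_distortion_percent(r_imaged, r_reference):
        """Relative deviation of the imaged radial position from the
        distortion-free reference position, in percent."""
        return 100.0 * (r_imaged - r_reference) / r_reference

    def local_magnification(obj_points, img_points, index, step=1):
        """Crude local magnification: ratio of the imaged distance between two
        neighbouring grid points to their true object-side distance."""
        d_img = np.linalg.norm(img_points[index + step] - img_points[index])
        d_obj = np.linalg.norm(obj_points[index + step] - obj_points[index])
        return d_img / d_obj

    # Hypothetical data: equally spaced target dots along one radius (mm) and
    # their barrel-distorted image positions (pixels).
    obj = np.array([[i, 0.0] for i in range(6)])                         # mm
    img = np.array([[14.9 * i - 0.12 * i**2, 0.0] for i in range(6)])    # pixels

    print([round(local_magnification(obj, img, i), 2) for i in range(5)])
    print(round(radial_distortion_percent(r_imaged=68.0, r_reference=75.0), 1))
    ```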

  4. Evaluating Health Information Systems Using Ontologies.

    PubMed

    Eivazzadeh, Shahryar; Anderberg, Peter; Larsson, Tobias C; Fricker, Samuel A; Berglund, Johan

    2016-06-16

    There are several frameworks that attempt to address the challenges of evaluation of health information systems by offering models, methods, and guidelines about what to evaluate, how to evaluate, and how to report the evaluation results. Model-based evaluation frameworks usually suggest universally applicable evaluation aspects but do not consider case-specific aspects. On the other hand, evaluation frameworks that are case specific, by eliciting user requirements, limit their output to the evaluation aspects suggested by the users in the early phases of system development. In addition, these case-specific approaches extract different sets of evaluation aspects from each case, making it challenging to collectively compare, unify, or aggregate the evaluation of a set of heterogeneous health information systems. The aim of this paper is to find a method capable of suggesting evaluation aspects for a set of one or more health information systems-whether similar or heterogeneous-by organizing, unifying, and aggregating the quality attributes extracted from those systems and from an external evaluation framework. On the basis of the available literature in semantic networks and ontologies, a method (called Unified eValuation using Ontology; UVON) was developed that can organize, unify, and aggregate the quality attributes of several health information systems into a tree-style ontology structure. The method was extended to integrate its generated ontology with the evaluation aspects suggested by model-based evaluation frameworks. An approach was developed to extract evaluation aspects from the ontology that also considers evaluation case practicalities such as the maximum number of evaluation aspects to be measured or their required degree of specificity. The method was applied and tested in Future Internet Social and Technological Alignment Research (FI-STAR), a project of 7 cloud-based eHealth applications that were developed and deployed across European Union countries. The relevance of the evaluation aspects created by the UVON method for the FI-STAR project was validated by the corresponding stakeholders of each case. These evaluation aspects were extracted from a UVON-generated ontology structure that reflects both the internally declared required quality attributes in the 7 eHealth applications of the FI-STAR project and the evaluation aspects recommended by the Model for ASsessment of Telemedicine applications (MAST) evaluation framework. The extracted evaluation aspects were used to create questionnaires (for the corresponding patients and health professionals) to evaluate each individual case and the whole of the FI-STAR project. The UVON method can provide a relevant set of evaluation aspects for a heterogeneous set of health information systems by organizing, unifying, and aggregating the quality attributes through ontological structures. Those quality attributes can be either suggested by evaluation models or elicited from the stakeholders of those systems in the form of system requirements. The method continues to be systematic, context sensitive, and relevant across a heterogeneous set of health information systems.

  5. Application of image recognition-based automatic hyphae detection in fungal keratitis.

    PubMed

    Wu, Xuelian; Tao, Yuan; Qiu, Qingchen; Wu, Xinyi

    2018-03-01

    The purpose of this study is to evaluate the accuracy of two methods for diagnosing fungal keratitis: automatic hyphae detection based on image recognition and corneal smear examination. We evaluate the sensitivity and specificity of the image recognition-based automatic hyphae detection method, analyze the consistency between clinical symptoms and hyphal density, and quantify hyphal density with the method. Our study included 56 cases of fungal keratitis (single eye) and 23 cases of bacterial keratitis. All cases underwent routine slit lamp biomicroscopy, corneal smear examination, microorganism culture and assessment of in vivo confocal microscopy images before starting medical treatment. The in vivo confocal microscopy images were then analyzed with automatic hyphae detection based on image recognition to evaluate its sensitivity and specificity and to compare it with corneal smear examination. A density index was then used to assess the severity of infection, and its correlation and consistency with the patients' clinical symptoms were evaluated. The accuracy of this technology was superior to corneal smear examination (p < 0.05). The sensitivity of automatic hyphae detection based on image recognition was 89.29%, and the specificity was 95.65%. The area under the ROC curve was 0.946. The correlation coefficient between the severity grading of fungal keratitis by automatic hyphae detection and the clinical grading was 0.87. Automatic hyphae detection based on image recognition identified fungal keratitis with high sensitivity and specificity, outperforming corneal smear examination. Compared with conventional manual identification of confocal microscopy corneal images, the technology is accurate, stable and does not rely on human expertise, making it most useful to clinicians who are not familiar with fungal keratitis. It can quantify and grade hyphal density and, being noninvasive, can provide an evaluation criterion for fungal keratitis in a timely, accurate, objective and quantitative manner.
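
    The reported figures are standard confusion-matrix statistics; a minimal sketch of how sensitivity, specificity and the area under the ROC curve are computed, using synthetic detector scores rather than the study's data:

    ```python
    import numpy as np
    from sklearn.metrics import confusion_matrix, roc_auc_score

    # Hypothetical labels (1 = fungal keratitis, 0 = other) and detector outputs.
    y_true = np.array([1] * 56 + [0] * 23)
    rng = np.random.default_rng(0)
    scores = np.concatenate([rng.normal(0.8, 0.15, 56), rng.normal(0.3, 0.15, 23)])
    y_pred = (scores >= 0.5).astype(int)

    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    print(f"sensitivity={sensitivity:.2%}, specificity={specificity:.2%}, "
          f"AUC={roc_auc_score(y_true, scores):.3f}")
    ```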

  6. Correction for FDG PET dose extravasations: Monte Carlo validation and quantitative evaluation of patient studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Silva-Rodríguez, Jesús, E-mail: jesus.silva.rodriguez@sergas.es; Aguiar, Pablo, E-mail: pablo.aguiar.fernandez@sergas.es; Servicio de Medicina Nuclear, Complexo Hospitalario Universidade de Santiago de Compostela

    Purpose: Current procedure guidelines for whole body [18F]fluoro-2-deoxy-D-glucose (FDG)-positron emission tomography (PET) state that studies with visible dose extravasations should be rejected for quantification protocols. Our work is focused on the development and validation of methods for estimating extravasated doses in order to correct standardized uptake values (SUV) for this effect in clinical routine. Methods: One thousand three hundred sixty-seven consecutive whole body FDG-PET studies were visually inspected looking for extravasation cases. Two methods for estimating the extravasated dose were proposed and validated in different scenarios using Monte Carlo simulations. All visible extravasations were retrospectively evaluated using a manual ROI based method. In addition, the 50 patients with the highest extravasated doses were also evaluated using a threshold-based method. Results: Simulation studies showed that the proposed methods for estimating extravasated doses allow us to compensate for the impact of extravasations on SUV values with an error below 5%. The quantitative evaluation of patient studies revealed that paravenous injection is a relatively frequent effect (18%), with a small fraction of patients presenting considerable extravasations ranging from 1% to a maximum of 22% of the injected dose. A criterion based on the extravasated volume and maximum concentration was established in order to identify the fraction of patients that might be corrected for the paravenous injection effect. Conclusions: The authors propose the use of a manual ROI based method for estimating the effectively administered FDG dose and then correcting SUV quantification in those patients fulfilling the proposed criterion.
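
    The correction idea can be illustrated with the usual body-weight SUV definition: if part of the dose stays at the injection site, the SUV can be rescaled to the dose that actually reached the circulation. The sketch below uses hypothetical numbers and is not the authors' ROI or threshold procedure.

    ```python
    def suv(tissue_conc_bq_ml, injected_dose_bq, body_weight_g):
        """Standard body-weight SUV: tissue concentration / (dose / weight)."""
        return tissue_conc_bq_ml / (injected_dose_bq / body_weight_g)

    def suv_corrected_for_extravasation(suv_measured, injected_dose_bq, extravasated_dose_bq):
        """Rescale the SUV so it refers to the effectively administered dose
        (injected dose minus the paravenous fraction)."""
        effective_dose = injected_dose_bq - extravasated_dose_bq
        return suv_measured * injected_dose_bq / effective_dose

    # Hypothetical example: 5% of a 300 MBq injection stayed at the injection site.
    injected = 300e6
    extravasated = 0.05 * injected
    suv_meas = suv(tissue_conc_bq_ml=12_000, injected_dose_bq=injected, body_weight_g=75_000)
    print(round(suv_meas, 2),
          round(suv_corrected_for_extravasation(suv_meas, injected, extravasated), 2))
    ```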

  7. [The methods within the evaluation of disease management programmes in control-group designs using the example of diabetes mellitus - a systematic literature review].

    PubMed

    Drabik, A; Sawicki, P T; Müller, D; Passon, A; Stock, S

    2012-08-01

    Disease management programmes (DMPs) were implemented in Germany in 2002. Their evaluation is required by law. Beyond the mandatory evaluation, a growing number of published studies evaluate the DMP for diabetes mellitus type 2 in a control-group design. As patients opt into the programme on a voluntary basis it is necessary to adjust the inherent selection bias between groups. The aim of this study is to review published studies which evaluate the diabetes DMP using a control-group design with respect to the methods used. A systematic literature review of electronic databases (PUBMED, Cochrane Library, EMBASE, MEDPILOT) and a hand search of reference lists of the relevant publications was conducted to identify studies evaluating the DMP diabetes mellitus in a control-group design. 8 studies were included in the systematic literature review. 4 studies gathered retrospective claims data from sickness funds, one from physician's records, one study used prospective data from ambulatory care, and 2 studies were based on one patient survey. Methods used for adjustment of selection bias included exact matching, matching using propensity score methods, age-adjusted and sex-separated analysis, and adjustment in a regression model/analysis of covariance. One study did not apply adjustment methods. The intervention period ranged from 1 day to 4 years. Considered outcomes of studies (surrogate parameter, diabetes complications, mortality, quality of life, and claim data) depended on the database. In the evaluation of the DMP diabetes mellitus based on a control-group design neither the database nor the methods used for selection bias adjustment were consistent in the available studies. Effectiveness of DMPs cannot be judged based on this review due to heterogeneity of study designs. To allow for a comprehensive programme evaluation standardised minimum requirements for the evaluation of DMPs in the control group design are required. © Georg Thieme Verlag KG Stuttgart · New York.

  8. Improving the Repair Planning System for Mining Equipment on the Basis of Non-destructive Evaluation Data

    NASA Astrophysics Data System (ADS)

    Drygin, Michael; Kuryshkin, Nicholas

    2017-11-01

    The article describes the formation of a new concept of a scheduled preventive repair system for equipment at coal mining enterprises, based on the use of modern non-destructive evaluation methods. The approach to this task is based on a system-oriented analysis of the regulatory documentation, non-destructive evaluation methods and means, and experimental studies with compilation of statistics and subsequent grapho-analytical analysis. The main result of the work is a feasible justification for using non-destructive evaluation methods within the current scheduled preventive repair system, their high efficiency, and the potential of a gradual transition to condition-based maintenance. In practice, wide use of non-destructive evaluation means will allow the number of equipment failures to be reduced significantly and only the nodes in pre-accident condition to be repaired. Considering the import phase-out policy, the solution of this task will allow the SPR system to be adapted to Russian market economy conditions and provide a commercial benefit by reducing the expenses for maintenance of Russian-made and imported equipment.

  9. Fuzzy-logic based strategy for validation of multiplex methods: example with qualitative GMO assays.

    PubMed

    Bellocchi, Gianni; Bertholet, Vincent; Hamels, Sandrine; Moens, W; Remacle, José; Van den Eede, Guy

    2010-02-01

    This paper illustrates the advantages that a fuzzy-based aggregation method can bring to the validation of a multiplex method for GMO detection (DualChip GMO kit, Eppendorf). Guidelines for the validation of chemical, biochemical, pharmaceutical and genetic methods have been developed, and ad hoc validation statistics are available and routinely used for in-house and inter-laboratory testing and decision-making. Fuzzy logic allows the information obtained from independent validation statistics to be summarised into one synthetic indicator of overall method performance. Microarray technology, introduced for the simultaneous identification of multiple GMOs, poses specific validation issues (patterns of performance for a variety of GMOs at different concentrations). A fuzzy-based indicator for overall evaluation is illustrated in this paper and applied to validation data for different genetically modified elements, and conclusions are drawn from the analytical results. The fuzzy-logic based rules were shown to improve the interpretation of results and facilitate overall evaluation of the multiplex method.

  10. Evaluation of new flux attribution methods for mapping N2O emissions at the landscape scale from EC measurements

    NASA Astrophysics Data System (ADS)

    Grossel, Agnes; Bureau, Jordan; Loubet, Benjamin; Laville, Patricia; Massad, Raia; Haas, Edwin; Butterbach-Bahl, Klaus; Guimbaud, Christophe; Hénault, Catherine

    2017-04-01

    The objective of this study was to develop and evaluate an attribution method based on a combination of Eddy Covariance (EC) and chamber measurements to map N2O emissions over a 3-km2 area of croplands and forests in France. During 2 months of spring 2015, N2O fluxes were measured (i) by EC at 15 m height and (ii) at discrete points with a mobile chamber at 16 locations within 1 km of the EC mast. The attribution method was based on coupling the EC measurements, footprint information (Loubet et al., 2010) and emission ratios for the different crops and fertilizations derived from the chamber measurements. The results were evaluated against an independent flux dataset measured by automatic chambers in a wheat field within the area. At the landscape scale, the method estimated a total emission of 114-271 kg N-N2O during the campaign. This new approach allows continuous estimation of N2O emissions and better accounts for the spatial variability of N2O emissions at the landscape scale.

  11. A method for environmental acoustic analysis improvement based on individual evaluation of common sources in urban areas.

    PubMed

    López-Pacheco, María G; Sánchez-Fernández, Luis P; Molina-Lozano, Herón

    2014-01-15

    Noise levels of common sources such as vehicles, whistles, sirens, car horns and crowd sounds are mixed in urban soundscapes. Nowadays, environmental acoustic analysis is performed on mixture signals recorded by monitoring systems. These mixed signals make individual analysis difficult, although such analysis is useful for taking actions to reduce and control environmental noise. This paper aims to separate the individual noise sources from recorded mixtures in order to evaluate the noise level of each estimated source. A method based on blind deconvolution and blind source separation in the wavelet domain is proposed. This approach provides a basis for improving the results obtained in the monitoring and analysis of common noise sources in urban areas. The method is validated through experiments based on knowledge of the predominant noise sources in urban soundscapes. Actual recordings of common noise sources are used to acquire mixture signals with a microphone array in semi-controlled environments. The developed method demonstrated substantial performance improvements in the identification, analysis and evaluation of common urban sources. © 2013 Elsevier B.V. All rights reserved.

  12. Optimization evaluation of cutting technology based on mechanical parts

    NASA Astrophysics Data System (ADS)

    Wang, Yu

    2018-04-01

    The relationship between the mechanical manufacturing process and carbon emissions is studied on the basis of an analysis of the machining process flow. A carbon emission calculation formula suitable for the mechanical manufacturing process is derived. Based on this, a green evaluation method for the cold machining of mechanical parts is proposed. The proposed evaluation method is verified and its data analyzed through an example application. The results show that there is a strong relationship between mechanical manufacturing process data and carbon emissions.

  13. Multi-Role Project (MRP): A New Project-Based Learning Method for STEM

    ERIC Educational Resources Information Center

    Warin, Bruno; Talbi, Omar; Kolski, Christophe; Hoogstoel, Frédéric

    2016-01-01

    This paper presents the "Multi-Role Project" method (MRP), a broadly applicable project-based learning method, and describes its implementation and evaluation in the context of a Science, Technology, Engineering, and Mathematics (STEM) course. The MRP method is designed around a meta-principle that considers the project learning activity…

  14. Comparisons of Four Methods for Estimating a Dynamic Factor Model

    ERIC Educational Resources Information Center

    Zhang, Zhiyong; Hamaker, Ellen L.; Nesselroade, John R.

    2008-01-01

    Four methods for estimating a dynamic factor model, the direct autoregressive factor score (DAFS) model, are evaluated and compared. The first method estimates the DAFS model using a Kalman filter algorithm based on its state space model representation. The second one employs the maximum likelihood estimation method based on the construction of a…

  15. Flexible methods for segmentation evaluation: Results from CT-based luggage screening

    PubMed Central

    Karimi, Seemeen; Jiang, Xiaoqian; Cosman, Pamela; Martz, Harry

    2017-01-01

    BACKGROUND Imaging systems used in aviation security include segmentation algorithms in an automatic threat recognition pipeline. The segmentation algorithms evolve in response to emerging threats and changing performance requirements. Analysis of segmentation algorithms’ behavior, including the nature of errors and feature recovery, facilitates their development. However, evaluation methods from the literature provide limited characterization of the segmentation algorithms. OBJECTIVE To develop segmentation evaluation methods that measure systematic errors such as oversegmentation and undersegmentation, outliers, and overall errors. The methods must measure feature recovery and allow us to prioritize segments. METHODS We developed two complementary evaluation methods using statistical techniques and information theory. We also created a semi-automatic method to define ground truth from 3D images. We applied our methods to evaluate five segmentation algorithms developed for CT luggage screening. We validated our methods with synthetic problems and an observer evaluation. RESULTS Both methods selected the same best segmentation algorithm. Human evaluation confirmed the findings. The measurement of systematic errors and prioritization helped in understanding the behavior of each segmentation algorithm. CONCLUSIONS Our evaluation methods allow us to measure and explain the accuracy of segmentation algorithms. PMID:24699346

  16. Evaluation of the light scattering and the turbidity microtiter plate-based methods for the detection of the excipient-mediated drug precipitation inhibition.

    PubMed

    Petruševska, Marija; Urleb, Uroš; Peternel, Luka

    2013-11-01

    Excipient-mediated precipitation inhibition is classically determined by quantifying the dissolved compound in solution. In this study, two alternative approaches were evaluated: a light scattering (nephelometer) method and a turbidity (plate reader) microtiter plate-based method, both based on quantification of the compound precipitate. Following optimization of the nephelometer settings (beam focus, laser gain) and the experimental conditions, 23 excipients were screened for their inhibition of the precipitation of the poorly soluble compounds fenofibrate and dipyridamole. The light scattering method resulted in excellent correlation (r>0.91) between the calculated precipitation inhibitor parameters (PIPs) and the precipitation inhibition index (PI(classical)) obtained by the classical approach for fenofibrate and dipyridamole. Among the evaluated PIPs, AUC100 (nephelometer) resulted in only four false positives and no false negatives. With the turbidity-based method, a good correlation with PI(classical) was obtained for the PIP maximal optical density (OD(max), r=0.91), but only for fenofibrate; for OD(max) (plate reader), five false positives and two false negatives were identified. In conclusion, the light scattering-based method outperformed the turbidity-based one and could be reliably used for identification of novel precipitation inhibitors. Copyright © 2013 Elsevier B.V. All rights reserved.

  17. Optimizing taxonomic classification of marker-gene amplicon sequences with QIIME 2's q2-feature-classifier plugin.

    PubMed

    Bokulich, Nicholas A; Kaehler, Benjamin D; Rideout, Jai Ram; Dillon, Matthew; Bolyen, Evan; Knight, Rob; Huttley, Gavin A; Gregory Caporaso, J

    2018-05-17

    Taxonomic classification of marker-gene sequences is an important step in microbiome analysis. We present q2-feature-classifier ( https://github.com/qiime2/q2-feature-classifier ), a QIIME 2 plugin containing several novel machine-learning and alignment-based methods for taxonomy classification. We evaluated and optimized several commonly used classification methods implemented in QIIME 1 (RDP, BLAST, UCLUST, and SortMeRNA) and several new methods implemented in QIIME 2 (a scikit-learn naive Bayes machine-learning classifier, and alignment-based taxonomy consensus methods based on VSEARCH, and BLAST+) for classification of bacterial 16S rRNA and fungal ITS marker-gene amplicon sequence data. The naive-Bayes, BLAST+-based, and VSEARCH-based classifiers implemented in QIIME 2 meet or exceed the species-level accuracy of other commonly used methods designed for classification of marker gene sequences that were evaluated in this work. These evaluations, based on 19 mock communities and error-free sequence simulations, including classification of simulated "novel" marker-gene sequences, are available in our extensible benchmarking framework, tax-credit ( https://github.com/caporaso-lab/tax-credit-data ). Our results illustrate the importance of parameter tuning for optimizing classifier performance, and we make recommendations regarding parameter choices for these classifiers under a range of standard operating conditions. q2-feature-classifier and tax-credit are both free, open-source, BSD-licensed packages available on GitHub.
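
    The scikit-learn naive Bayes approach mentioned above can be illustrated, in spirit, by classifying k-mer count vectors with MultinomialNB; this toy sketch is not the q2-feature-classifier code itself and uses made-up reference sequences and taxon labels.

    ```python
    from itertools import product
    import numpy as np
    from sklearn.naive_bayes import MultinomialNB

    K = 4
    KMERS = ["".join(p) for p in product("ACGT", repeat=K)]
    INDEX = {kmer: i for i, kmer in enumerate(KMERS)}

    def kmer_counts(seq):
        """Count occurrences of every length-K substring of the sequence."""
        v = np.zeros(len(KMERS))
        for i in range(len(seq) - K + 1):
            j = INDEX.get(seq[i:i + K])
            if j is not None:
                v[j] += 1
        return v

    # Hypothetical toy reference sequences with known taxonomy labels.
    refs = {
        "Taxon_A": "ACGTACGTTAGCACGTACGTGGCCACGTACGT",
        "Taxon_B": "TTTTGGGGCCCCAAAATTTTGGGGCCCCAAAA",
    }
    X = np.array([kmer_counts(s) for s in refs.values()])
    y = list(refs.keys())

    clf = MultinomialNB().fit(X, y)
    query = "ACGTACGTGGCCACGT"
    print(clf.predict([kmer_counts(query)])[0])
    ```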

  18. Method of evaluation of process of red blood cell sedimentation based on photometry of droplet samples.

    PubMed

    Aristov, Alexander; Nosova, Ekaterina

    2017-04-01

    The paper focuses on research aimed at creating and testing a new approach to evaluating the aggregation and sedimentation of red blood cells for use in clinical laboratory diagnostics. The proposed method is based on photometric analysis of a blood sample formed as a sessile drop. The results of clinical approbation of this method are given in the paper. The processes occurring in the sessile-drop sample during blood cell sedimentation are analyzed and described. The results of experimental studies evaluating the effect of the droplet sample's focusing properties on light radiation transmittance are presented. It is shown that this method significantly reduces the sample volume and provides sufficiently high sensitivity to the studied processes.

  19. A GIS-BASED METHOD FOR MULTI-OBJECTIVE EVALUATION OF PARK VEGETATION. (R824766)

    EPA Science Inventory

    Abstract

    In this paper we describe a method for evaluating the concordance between a set of mapped landscape attributes and a set of quantitatively expressed management priorities. The method has proved to be useful in planning urban green areas, allowing objectively d...

  20. An investigation of density measurement method for yarn-dyed woven fabrics based on dual-side fusion technique

    NASA Astrophysics Data System (ADS)

    Zhang, Rui; Xin, Binjie

    2016-08-01

    Yarn density is always considered as the fundamental structural parameter used for the quality evaluation of woven fabrics. The conventional yarn density measurement method is based on one-side analysis. In this paper, a novel density measurement method is developed for yarn-dyed woven fabrics based on a dual-side fusion technique. Firstly, a lab-used dual-side imaging system is established to acquire both face-side and back-side images of woven fabric and the affine transform is used for the alignment and fusion of the dual-side images. Then, the color images of the woven fabrics are transferred from the RGB to the CIE-Lab color space, and the intensity information of the image extracted from the L component is used for texture fusion and analysis. Subsequently, three image fusion methods are developed and utilized to merge the dual-side images: the weighted average method, wavelet transform method and Laplacian pyramid blending method. The fusion efficacy of each method is evaluated by three evaluation indicators and the best of them is selected to do the reconstruction of the complete fabric texture. Finally, the yarn density of the fused image is measured based on the fast Fourier transform, and the yarn alignment image could be reconstructed using the inverse fast Fourier transform. Our experimental results show that the accuracy of density measurement by using the proposed method is close to 99.44% compared with the traditional method and the robustness of this new proposed method is better than that of conventional analysis methods.
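
    The Fourier-based density measurement rests on finding the dominant spatial frequency of the periodic yarn pattern; a minimal one-dimensional sketch with a synthetic intensity profile (the dual-side imaging and fusion steps are omitted):

    ```python
    import numpy as np

    def yarn_density_from_profile(profile, pixels_per_cm):
        """Estimate yarns per centimetre from a 1-D intensity profile taken
        perpendicular to the yarn direction, via the dominant FFT peak."""
        profile = profile - profile.mean()
        spectrum = np.abs(np.fft.rfft(profile))
        freqs = np.fft.rfftfreq(len(profile), d=1.0)   # cycles per pixel
        peak = freqs[np.argmax(spectrum[1:]) + 1]      # skip the DC bin
        return peak * pixels_per_cm                    # cycles (yarns) per cm

    # Hypothetical synthetic profile: 30 yarns/cm imaged at 600 pixels/cm.
    pixels_per_cm = 600
    x = np.arange(3000)                                # a 5 cm wide strip
    profile = 100 + 40 * np.cos(2 * np.pi * (30 / pixels_per_cm) * x)
    print(round(yarn_density_from_profile(profile, pixels_per_cm), 1))  # ~30.0
    ```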

  1. Performance evaluation of the multiple-image optical compression and encryption method by increasing the number of target images

    NASA Astrophysics Data System (ADS)

    Aldossari, M.; Alfalou, A.; Brosseau, C.

    2017-08-01

    In an earlier study [Opt. Express 22, 22349-22368 (2014)], a compression and encryption method that simultaneously compresses and encrypts closely resembling images was proposed and validated. This multiple-image optical compression and encryption (MIOCE) method is based on a special fusion of the spectra of the different target images in the spectral domain. To assess the capacity of the MIOCE method, we evaluate and determine the influence of the number of target images. This analysis allows us to evaluate the performance limitation of the method. To achieve this goal, we use a criterion based on the root-mean-square (RMS) [Opt. Lett. 35, 1914-1916 (2010)] and the compression ratio to determine the spectral plane area. Then, the different spectral areas are merged in a single spectral plane. By choosing specific areas, we can compress together 38 images instead of 26 with the classical MIOCE method. The quality of the reconstructed image is evaluated using the mean-square-error (MSE) criterion.

  2. PET and MRI image fusion based on combination of 2-D Hilbert transform and IHS method.

    PubMed

    Haddadpour, Mozhdeh; Daneshvar, Sabalan; Seyedarabi, Hadi

    2017-08-01

    Medical image fusion combines two or more medical images, such as a Magnetic Resonance Image (MRI) and a Positron Emission Tomography (PET) image, and maps them to a single fused image. The purpose of our study is to assist physicians in diagnosing and treating diseases in as little time as possible. We used MRI and PET images as inputs and fused them based on a combination of the two-dimensional Hilbert transform (2-D HT) and the Intensity Hue Saturation (IHS) method. The evaluation metrics we apply are the Discrepancy (D_k), which assesses spectral features, the Average Gradient (AG_k), which assesses spatial features, and the Overall Performance (O.P), which verifies the soundness of the proposed method. Simulated and numerical results demonstrate the desired performance of the proposed method. Since the main purpose of medical image fusion is to preserve both the spatial and spectral features of the input images, the numerical results for the evaluation metrics (AG_k, D_k and O.P) and the simulated results indicate that the proposed method preserves both. Copyright © 2017 Chang Gung University. Published by Elsevier B.V. All rights reserved.

  3. [Exploring a new method for superimposition of pre-treatment and post-treatment mandibular digital dental casts in adults].

    PubMed

    Dai, F F; Liu, Y; Xu, T M; Chen, G

    2018-04-18

    To explore a cone beam computed tomography (CBCT)-independent method for mandibular digital dental cast superimposition to evaluate three-dimensional (3D) mandibular tooth movement after orthodontic treatment in adults, and to evaluate the accuracy of this method. Fifteen post-extraction orthodontic treatment adults from the Department of Orthodontics, Peking University School and Hospital of Stomatology were included. All the patients had four first premolars extracted, and were treated with straight wire appliance. The pre- and post-treatment plaster dental casts and craniofacial CBCT scans were obtained. The plaster dental casts were transferred to digital dental casts by 3D laser scanning, and lateral cephalograms were created from the craniofacial CBCT scans by orthogonal projection. The lateral cephalogram-based mandibular digital dental cast superimposition was achieved by sequential maxillary dental cast superimposition registered on the palatal stable region, occlusal transfer, and adjustment of mandibular rotation and translation obtained from lateral cephalogram superimposition. The accuracy of the lateral cephalogram-based mandibular digital dental cast superimposition method was evaluated with the CBCT-based mandibular digital dental cast superimposition method as the standard reference. After mandibular digital dental cast superimposition using both methods, 3D coordinate system was established, and 3D displacements of the lower bilateral first molars, canines and central incisors were measured. Differences between the two superimposition methods in tooth displacement measurements were assessed using the paired t-test with the level of statistical significance set at P<0.05. No significant differences were found between the lateral cephalogram-based and CBCT-based mandibular digital dental cast superimposition methods in 3D displacements of the lower first molars, and sagittal and vertical displacements of the canines and central incisors; transverse displacements of the canines and central incisors differed by (0.3±0.5) mm with statistical significance. The lateral cephalogram-based mandibular digital dental cast superimposition method has the similar accuracy as the CBCT-based mandibular digital dental cast superimposition method in 3D evaluation of mandibular orthodontic tooth displacement, except for minor differences for the transverse displacements of anterior teeth. This method is applicable to adult patients with conventional orthodontic treatment records, especially the previous precious orthodontic data in the absence of CBCT scans.

  4. Construction risk assessment of deep foundation pit in metro station based on G-COWA method

    NASA Astrophysics Data System (ADS)

    You, Weibao; Wang, Jianbo; Zhang, Wei; Liu, Fangmeng; Yang, Diying

    2018-05-01

    In order to gain an accurate understanding of the construction safety of deep foundation pits in metro stations and to reduce the probability and loss of risk occurrence, a risk assessment method based on G-COWA is proposed. Firstly, drawing on specific engineering examples and the construction characteristics of deep foundation pits, an evaluation index system based on the five factors of “human, management, technology, material and environment” is established. Secondly, the C-OWA operator is introduced to weight the evaluation indexes and weaken the negative influence of experts' subjective preferences. Grey cluster analysis and the fuzzy comprehensive evaluation method are then combined to construct a construction risk assessment model for deep foundation pits that can effectively handle the uncertainties involved. Finally, the model is applied to the deep foundation pit of Qingdao Metro North Station; its construction risk rating is determined to be “medium”, the model is shown to be feasible and reasonable, and corresponding control measures and useful references are provided.
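
    One common formulation of the C-OWA operator weights the descending-sorted expert scores with normalized binomial coefficients; the sketch below derives index weights that way from hypothetical ratings, leaving out the grey clustering and fuzzy comprehensive evaluation steps.

    ```python
    from math import comb

    def c_owa_aggregate(scores):
        """C-OWA aggregation: sort expert scores in descending order and weight
        them with normalized binomial coefficients C(n-1, j) / 2^(n-1)."""
        s = sorted(scores, reverse=True)
        n = len(s)
        weights = [comb(n - 1, j) / 2 ** (n - 1) for j in range(n)]
        return sum(w * x for w, x in zip(weights, s))

    # Hypothetical expert ratings (0-10) of three risk indexes.
    ratings = {
        "human":      [7, 8, 6, 9, 7],
        "management": [5, 6, 6, 7, 5],
        "technology": [8, 9, 7, 8, 8],
    }
    absolute = {k: c_owa_aggregate(v) for k, v in ratings.items()}
    total = sum(absolute.values())
    weights = {k: round(v / total, 3) for k, v in absolute.items()}
    print(weights)
    ```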

  5. Image feature extraction based on the camouflage effectiveness evaluation

    NASA Astrophysics Data System (ADS)

    Yuan, Xin; Lv, Xuliang; Li, Ling; Wang, Xinzhu; Zhang, Zhi

    2018-04-01

    The key step in camouflage effectiveness evaluation is combining human visual physiological and psychological features to select effective evaluation indexes. Building on earlier comprehensive camouflage evaluation methods, this paper chooses suitable indexes in combination with image quality awareness and optimizes those indexes against human subjective perception, thereby refining the theory of index extraction.

  6. Balanced scorecard-based performance evaluation of Chinese county hospitals in underdeveloped areas.

    PubMed

    Gao, Hongda; Chen, He; Feng, Jun; Qin, Xianjing; Wang, Xuan; Liang, Shenglin; Zhao, Jinmin; Feng, Qiming

    2018-05-01

    Objective Since the Guangxi government implemented public county hospital reform in 2009, there have been no studies of county hospitals in this underdeveloped area of China. This study aimed to establish an evaluation indicator system for Guangxi county hospitals and to generate recommendations for hospital development and policymaking. Methods A performance evaluation indicator system was developed based on balanced scorecard theory. Opinions were elicited from 25 experts from administrative units, universities and hospitals and the Delphi method was used to modify the performance indicators. The indicator system and the Topsis method were used to evaluate the performance of five county hospitals randomly selected from the same batch of 2015 Guangxi reform pilots. Results There were 4 first-level indicators, 9 second-level indicators and 36 third-level indicators in the final performance evaluation indicator system that showed good consistency, validity and reliability. The performance rank of the hospitals was B > E > A > C > D. Conclusions The performance evaluation indicator system established using the balanced scorecard is practical and scientific. Analysis of the results based on this indicator system identified several factors affecting hospital performance, such as resource utilisation efficiency, medical service price, personnel structure and doctor-patient relationships.
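
    The Topsis ranking step can be sketched as follows with hypothetical indicator scores and weights (the study's actual indicator system and data are not reproduced):

    ```python
    import numpy as np

    def topsis(matrix, weights, benefit):
        """Rank alternatives with the TOPSIS closeness coefficient.
        matrix: alternatives x criteria; weights: criterion weights;
        benefit: True for 'larger is better' criteria, False for cost criteria."""
        m = matrix / np.linalg.norm(matrix, axis=0)        # vector normalization
        v = m * weights
        ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
        anti  = np.where(benefit, v.min(axis=0), v.max(axis=0))
        d_pos = np.linalg.norm(v - ideal, axis=1)
        d_neg = np.linalg.norm(v - anti, axis=1)
        return d_neg / (d_pos + d_neg)

    # Hypothetical scores of 5 hospitals on 4 indicators (all benefit-type).
    scores = np.array([
        [0.72, 0.65, 0.80, 0.60],
        [0.85, 0.70, 0.75, 0.72],
        [0.60, 0.55, 0.65, 0.58],
        [0.55, 0.50, 0.60, 0.52],
        [0.78, 0.68, 0.82, 0.66],
    ])
    w = np.array([0.3, 0.2, 0.3, 0.2])
    cc = topsis(scores, w, benefit=np.array([True, True, True, True]))
    print(np.argsort(-cc) + 1)   # hospital ranking, best first
    ```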

  7. A training image evaluation and selection method based on minimum data event distance for multiple-point geostatistics

    NASA Astrophysics Data System (ADS)

    Feng, Wenjie; Wu, Shenghe; Yin, Yanshu; Zhang, Jiajia; Zhang, Ke

    2017-07-01

    A training image (TI) can be regarded as a database of spatial structures and their low to higher order statistics used in multiple-point geostatistics (MPS) simulation. Presently, there are a number of methods to construct a series of candidate TIs (CTIs) for MPS simulation based on a modeler's subjective criteria. The spatial structures of TIs are often various, meaning that the compatibilities of different CTIs with the conditioning data are different. Therefore, evaluation and optimal selection of CTIs before MPS simulation is essential. This paper proposes a CTI evaluation and optimal selection method based on minimum data event distance (MDevD). In the proposed method, a set of MDevD properties are established through calculation of the MDevD of conditioning data events in each CTI. Then, CTIs are evaluated and ranked according to the mean value and variance of the MDevD properties. The smaller the mean value and variance of an MDevD property are, the more compatible the corresponding CTI is with the conditioning data. In addition, data events with low compatibility in the conditioning data grid can be located to help modelers select a set of complementary CTIs for MPS simulation. The MDevD property can also help to narrow the range of the distance threshold for MPS simulation. The proposed method was evaluated using three examples: a 2D categorical example, a 2D continuous example, and an actual 3D oil reservoir case study. To illustrate the method, a C++ implementation of the method is attached to the paper.

  8. Objective evaluation of fatigue by EEG spectral analysis in steady-state visual evoked potential-based brain-computer interfaces

    PubMed Central

    2014-01-01

    Background The fatigue that users suffer when using steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs) can cause a number of serious problems such as signal quality degradation and system performance deterioration, users’ discomfort and even risk of photosensitive epileptic seizures, posing heavy restrictions on the applications of SSVEP-based BCIs. Towards alleviating the fatigue, a fundamental step is to measure and evaluate it but most existing works adopt self-reported questionnaire methods which are subjective, offline and memory dependent. This paper proposes an objective and real-time approach based on electroencephalography (EEG) spectral analysis to evaluate the fatigue in SSVEP-based BCIs. Methods How the EEG indices (amplitudes in δ, θ, α and β frequency bands), the selected ratio indices (θ/α and (θ + α)/β), and SSVEP properties (amplitude and signal-to-noise ratio (SNR)) changes with the increasing fatigue level are investigated through two elaborate SSVEP-based BCI experiments, one validates mainly the effectiveness and another considers more practical situations. Meanwhile, a self-reported fatigue questionnaire is used to provide a subjective reference. ANOVA is employed to test the significance of the difference between the alert state and the fatigue state for each index. Results Consistent results are obtained in two experiments: the significant increases in α and (θ + α)/β, as well as the decrease in θ/α are found associated with the increasing fatigue level, indicating that EEG spectral analysis can provide robust objective evaluation of the fatigue in SSVEP-based BCIs. Moreover, the results show that the amplitude and SNR of the elicited SSVEP are significantly affected by users’ fatigue. Conclusions The experiment results demonstrate the feasibility and effectiveness of the proposed method as an objective and real-time evaluation of the fatigue in SSVEP-based BCIs. This method would be helpful in understanding the fatigue problem and optimizing the system design to alleviate the fatigue in SSVEP-based BCIs. PMID:24621009
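
    The spectral indices used above are band powers and their ratios; a minimal sketch of computing θ/α and (θ + α)/β from a single synthetic EEG channel with Welch's method:

    ```python
    import numpy as np
    from scipy.signal import welch

    def band_power(freqs, psd, lo, hi):
        """Integrate the power spectral density over a frequency band."""
        mask = (freqs >= lo) & (freqs < hi)
        return np.trapz(psd[mask], freqs[mask])

    # Hypothetical 60 s of single-channel EEG sampled at 250 Hz (noise + 10 Hz alpha).
    fs = 250
    rng = np.random.default_rng(1)
    t = np.arange(fs * 60) / fs
    eeg = rng.normal(size=t.size) + 2.0 * np.sin(2 * np.pi * 10 * t)

    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 4)
    theta = band_power(freqs, psd, 4, 8)
    alpha = band_power(freqs, psd, 8, 13)
    beta  = band_power(freqs, psd, 13, 30)

    print(f"theta/alpha = {theta / alpha:.2f}, "
          f"(theta+alpha)/beta = {(theta + alpha) / beta:.2f}")
    ```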

  9. Reciprocal Questioning and Computer-based Instruction in Introductory Auditing: Student Perceptions.

    ERIC Educational Resources Information Center

    Watters, Mike

    2000-01-01

    An auditing course used reciprocal questioning (Socratic method) and computer-based instruction. Separate evaluations by 67 students revealed a strong aversion to the Socratic method; students expected professors to lecture. They showed a strong preference for the computer-based assignment. (SK)

  10. Fuzzy Risk Evaluation in Failure Mode and Effects Analysis Using a D Numbers Based Multi-Sensor Information Fusion Method.

    PubMed

    Deng, Xinyang; Jiang, Wen

    2017-09-12

    Failure mode and effect analysis (FMEA) is a useful tool to define, identify, and eliminate potential failures or errors so as to improve the reliability of systems, designs, and products. Risk evaluation is an important issue in FMEA to determine the risk priorities of failure modes. There are some shortcomings in the traditional risk priority number (RPN) approach for risk evaluation in FMEA, and fuzzy risk evaluation has become an important research direction that attracts increasing attention. In this paper, the fuzzy risk evaluation in FMEA is studied from a perspective of multi-sensor information fusion. By considering the non-exclusiveness between the evaluations of fuzzy linguistic variables to failure modes, a novel model called D numbers is used to model the non-exclusive fuzzy evaluations. A D numbers based multi-sensor information fusion method is proposed to establish a new model for fuzzy risk evaluation in FMEA. An illustrative example is provided and examined using the proposed model and other existing method to show the effectiveness of the proposed model.

  11. Fuzzy Risk Evaluation in Failure Mode and Effects Analysis Using a D Numbers Based Multi-Sensor Information Fusion Method

    PubMed Central

    Deng, Xinyang

    2017-01-01

    Failure mode and effect analysis (FMEA) is a useful tool to define, identify, and eliminate potential failures or errors so as to improve the reliability of systems, designs, and products. Risk evaluation is an important issue in FMEA to determine the risk priorities of failure modes. There are some shortcomings in the traditional risk priority number (RPN) approach for risk evaluation in FMEA, and fuzzy risk evaluation has become an important research direction that attracts increasing attention. In this paper, the fuzzy risk evaluation in FMEA is studied from a perspective of multi-sensor information fusion. By considering the non-exclusiveness between the evaluations of fuzzy linguistic variables to failure modes, a novel model called D numbers is used to model the non-exclusive fuzzy evaluations. A D numbers based multi-sensor information fusion method is proposed to establish a new model for fuzzy risk evaluation in FMEA. An illustrative example is provided and examined using the proposed model and other existing method to show the effectiveness of the proposed model. PMID:28895905

  12. Two-signal electrochemical method for evaluation suppression and proliferation of MCF-7 cells based on intracellular purine.

    PubMed

    Li, Jinlian; Lin, Runxian; Wang, Qian; Gao, Guanggang; Cui, Jiwen; Liu, Jiguang; Wu, Dongmei

    2014-07-01

    Two electrochemical signals, ascribed to xanthine/guanine and hypoxanthine/adenine in MCF-7 cells, were detected at 0.726 and 1.053 V, respectively. Based on the intensity of these signals, the genistein-induced proliferation and suppression of MCF-7 cells could be evaluated. The results showed that with increasing genistein dose over the range of 10⁻⁹ to 10⁻⁶ M, the two electrochemical signals of the MCF-7 cell suspension increased due to proliferation, whereas the signals decreased at doses above 10⁻⁵ M. The proliferation and cytotoxicity obtained by the electrochemical method were in agreement with those obtained by cell counting and the MTT [3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium] method. Thus, the two-signal electrochemical method is an effective way to evaluate the effect of drugs on cell activity based on purine metabolism. Copyright © 2014 Elsevier Inc. All rights reserved.

  13. Estimation of Comfort/Discomfort Based on EEG in Massage by Use of Clustering according to Correlation and Incremental Learning type NN

    NASA Astrophysics Data System (ADS)

    Teramae, Tatsuya; Kushida, Daisuke; Takemori, Fumiaki; Kitamura, Akira

    The authors previously proposed an estimation method combining the k-means algorithm and a neural network (NN) for evaluating massage. However, this estimation method has the problem that the discrimination ratio decreases for new users. There are two causes of this problem: the generalization of the NN is poor, and the classes produced by the k-means clustering do not have high within-class correlation coefficients. This research therefore proposes a k-means algorithm based on the correlation coefficient and incremental learning for the NN. The proposed k-means algorithm includes an evaluation function based on the correlation coefficient. In the incremental learning scheme, the NN is trained on new data with weights initialized from the existing data. The effect of the proposed methods is verified by estimation results using EEG data recorded while subjects receive massage.
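
    A minimal sketch of a k-means variant that clusters by correlation rather than Euclidean distance, in the spirit of the correlation-based evaluation function described above (the incremental NN learning part is omitted and the feature vectors are synthetic):

    ```python
    import numpy as np

    def correlation_distance(a, b):
        """1 - Pearson correlation, so strongly correlated profiles are 'close'."""
        a, b = a - a.mean(), b - b.mean()
        return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

    def kmeans_correlation(X, k=2, iters=50):
        # Farthest-point initialization, then standard assign/update iterations.
        centers = [X[0]]
        while len(centers) < k:
            d = [min(correlation_distance(x, c) for c in centers) for x in X]
            centers.append(X[int(np.argmax(d))])
        centers = np.array(centers)
        for _ in range(iters):
            dist = np.array([[correlation_distance(x, c) for c in centers] for x in X])
            labels = dist.argmin(axis=1)
            centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        return labels

    # Hypothetical per-trial EEG feature vectors from two response patterns.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal([1, 2, 3, 4, 5], 0.3, size=(20, 5)),
                   rng.normal([5, 4, 3, 2, 1], 0.3, size=(20, 5))])
    print(np.bincount(kmeans_correlation(X, k=2)))   # expect two groups of 20
    ```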

  14. Assessment of public health impact of work-related asthma.

    PubMed

    Jaakkola, Maritta S; Jaakkola, Jouni J K

    2012-03-05

    Asthma is among the most common chronic diseases in working-aged populations and occupational exposures are important causal agents. Our aims were to evaluate the best methods to assess occurrence, public health impact, and burden to society related to occupational or work-related asthma and to achieve comparable estimates for different populations. We addressed three central questions: 1: What is the best method to assess the occurrence of occupational asthma? We evaluated: 1) assessment of the occurrence of occupational asthma per se, and 2) assessment of adult-onset asthma and the population attributable fractions due to specific occupational exposures. 2: What are the best methods to assess public health impact and burden to society related to occupational or work-related asthma? We evaluated methods based on assessment of excess burden of disease due to specific occupational exposures. 3: How to achieve comparable estimates for different populations? We evaluated comparability of estimates of occurrence and burden attributable to occupational asthma based on different methods. Assessment of the occurrence of occupational asthma per se can be used in countries with good coverage of the identification system for occupational asthma, i.e. countries with well-functioning occupational health services. Assessment based on adult-onset asthma and population attributable fractions due to specific occupational exposures is a good approach to estimate the occurrence of occupational asthma at the population level. For assessment of public health impact from work-related asthma we recommend assessing excess burden of disease due to specific occupational exposures, including excess incidence of asthma complemented by an assessment of disability from it. International comparability of estimates can be best achieved by methods based on population attributable fractions. Public health impact assessment for occupational asthma is central in prevention and health policy planning and could be improved by purposeful development of methods for assessing health benefits from preventive actions. Registry-based methods are suitable for evaluating time-trends of occurrence at a given population but for international comparisons they face serious limitations. Assessment of excess burden of disease due to specific occupational exposure is a useful measure, when there is valid information on population exposure and attributable fractions.
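
    The population attributable fraction referred to above is commonly computed with Levin's formula; a minimal sketch with hypothetical exposure prevalence, relative risk and case counts:

    ```python
    def population_attributable_fraction(exposure_prevalence, relative_risk):
        """Levin's formula: PAF = p(RR - 1) / (1 + p(RR - 1))."""
        excess = exposure_prevalence * (relative_risk - 1.0)
        return excess / (1.0 + excess)

    def attributable_cases(incident_cases, exposure_prevalence, relative_risk):
        """Number of incident cases attributable to the exposure."""
        return incident_cases * population_attributable_fraction(
            exposure_prevalence, relative_risk)

    # Hypothetical example: 15% of workers exposed to a sensitizer, relative risk
    # of adult-onset asthma = 1.6, 2000 new adult asthma cases per year.
    paf = population_attributable_fraction(0.15, 1.6)
    print(round(paf, 3), round(attributable_cases(2000, 0.15, 1.6)))
    ```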

  15. Dynamic spiking studies using the DNPH sampling train

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steger, J.L.; Knoll, J.E.

    1996-12-31

    The proposed aldehyde and ketone sampling method using aqueous 2,4-dinitrophenylhydrazine (DNPH) was evaluated in the laboratory and in the field. The sampling trains studied were based on the train described in SW 846 Method 0011. Nine compounds were evaluated: formaldehyde, acetaldehyde, quinone, acrolein, propionaldehyde, methyl isobutyl ketone, methyl ethyl ketone, acetophenone, and isophorone. In the laboratory, the trains were spiked both statically and dynamically. Laboratory studies also investigated potential interferences to the method. Based on their potential to hydrolyze in acid solution to form formaldehyde, dimethylolurea, saligenin, s-trioxane, hexamethylenetetramine, and paraformaldehyde were investigated. Ten runs were performed using quadruplicate sampling trains. Two of the four trains were dynamically spiked with the nine aldehydes and ketones. The test results were evaluated using the EPA Method 301 criteria for method precision (relative standard deviation of 50% or less) and bias (correction factor of 1.00 ± 0.30).

  16. A Recommended Engineering Application of the Method for Evaluating the Visual Significance of Reflected Glare.

    ERIC Educational Resources Information Center

    Blackwell, H. Richard

    1963-01-01

    An application method for evaluating the visual significance of reflected glare is described, based upon a number of decisions with respect to the relative importance of various aspects of visual performance. A standardized procedure for evaluating the overall effectiveness of lighting from photometric data on materials or installations is needed…

  17. Reliability of Smartphone-Based Instant Messaging Application for Diagnosis, Classification, and Decision-making in Pediatric Orthopedic Trauma.

    PubMed

    Stahl, Ido; Katsman, Alexander; Zaidman, Michael; Keshet, Doron; Sigal, Amit; Eidelman, Mark

    2017-07-11

    Smartphones have the ability to capture and send images, and their use has become common in the emergency setting for transmitting radiographic images with the intent to consult an off-site specialist. Our objective was to evaluate the reliability of smartphone-based instant messaging applications for the evaluation of various pediatric limb traumas, as compared with the standard method of viewing images on a workstation-based picture archiving and communication system (PACS). X-ray images of 73 representative cases of pediatric limb trauma were captured and transmitted to 5 pediatric orthopedic surgeons via the WhatsApp instant messaging application on an iPhone 6 smartphone. Evaluators were asked to diagnose, classify, and determine the course of treatment for each case over their personal smartphones. Following a 4-week interval, re-evaluation was conducted using the PACS. Intraobserver agreement was calculated for overall agreement and per fracture site. The overall results indicate "near perfect agreement" between interpretations of the radiographs on smartphones compared with computer-based PACS, with κ of 0.84, 0.82, and 0.89 for diagnosis, classification, and treatment planning, respectively. Looking at the results per fracture site, we also found substantial to near perfect agreement. Smartphone-based instant messaging applications are reliable for evaluation of a wide range of pediatric limb fractures. This method of obtaining an expert opinion from an off-site specialist is immediately accessible and inexpensive, making smartphones a powerful tool for doctors in the emergency department, primary care clinics, or remote medical centers, enabling timely and appropriate treatment for the injured child. This method is not a substitute for evaluation of the images by the standard method on a workstation-based PACS, which should be performed before final decision-making.
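
    The agreement statistic reported above is Cohen's kappa; a minimal sketch of computing it for one rater's paired smartphone and PACS readings, with made-up classification labels:

    ```python
    from sklearn.metrics import cohen_kappa_score

    # Hypothetical ratings of 20 radiographs by one surgeon on the two platforms:
    # 0 = no fracture, 1 = buckle, 2 = complete, 3 = physeal.
    smartphone = [0, 1, 1, 2, 3, 0, 2, 2, 1, 3, 0, 1, 2, 3, 3, 0, 1, 2, 2, 1]
    pacs       = [0, 1, 1, 2, 3, 0, 2, 1, 1, 3, 0, 1, 2, 3, 3, 0, 1, 2, 2, 1]

    kappa = cohen_kappa_score(smartphone, pacs)
    print(round(kappa, 2))   # 1.0 is perfect agreement; >0.8 is "near perfect"
    ```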

  18. Direct evaluation of free energy for large system through structure integration approach.

    PubMed

    Takeuchi, Kazuhito; Tanaka, Ryohei; Yuge, Koretaka

    2015-09-30

    We propose a new approach, 'structure integration', enabling direct evaluation of configurational free energy for large systems. The present approach is based on the statistical information of lattice. Through first-principles-based simulation, we find that the present method evaluates configurational free energy accurately in disorder states above critical temperature.

  19. Evaluation of the relationship between the Adenosine Triphosphate (ATP) bioluminescence assay and the presence of Bacillus anthracis spores and vegetative cells.

    PubMed

    Gibbs, Shawn G; Sayles, Harlan; Colbert, Erica M; Hewlett, Angela; Chaika, Oleg; Smith, Philip W

    2014-05-28

    The adenosine triphosphate (ATP) bioluminescence assay was utilized in laboratory evaluations to determine the presence and concentration of vegetative and spore forms of Bacillus anthracis Sterne 34F2. Seventeen surfaces from the healthcare environment were selected for evaluation. Surfaces were inoculated with 50 µL of organism suspensions at three concentrations of 10⁴, 10⁶ and 10⁸ colony forming units per surface (CFU/surface) of B. anthracis. Culture-based methods and ATP-based methods were utilized to determine concentrations. When all concentrations were evaluated together, a positive correlation between log-adjusted CFU and Relative Light Units (RLU) was established for both endospores and vegetative cells. When concentrations were evaluated separately, a significant correlation was not demonstrated. This study demonstrated a positive correlation between the ATP and culture-based methods for the vegetative cells of B. anthracis. For the endospores, and when both metabolic states were combined, the ATP measurements and recovered CFU did not correspond to the initial concentrations on the evaluated surfaces. The results of our study show that the low ATP signal, which does not correlate well with the CFU results, would not make ATP measuring devices effective in confirming residual contamination from a bioterrorist event.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hohimer, J.P.

    The use of laser-based analytical methods in nuclear-fuel processing plants is considered. The species and locations for accountability, process control, and effluent control measurements in the Coprocessing, Thorex, and reference Purex fuel processing operations are identified and the conventional analytical methods used for these measurements are summarized. The laser analytical methods based upon Raman, absorption, fluorescence, and nonlinear spectroscopy are reviewed and evaluated for their use in fuel processing plants. After a comparison of the capabilities of the laser-based and conventional analytical methods, the promising areas of application of the laser-based methods in fuel processing plants are identified.

  1. Evaluating Student Achievement in Discipline-Based Art Programs.

    ERIC Educational Resources Information Center

    Day, Michael D.

    1985-01-01

    The discipline-based view of art education requires that students progress in all of the four domains of art learning: art history, art criticism, aesthetic appreciation, and creative production. Evaluation methods in each of these domains are discussed. (RM)

  2. Unified method to integrate and blend several, potentially related, sources of information for genetic evaluation.

    PubMed

    Vandenplas, Jérémie; Colinet, Frederic G; Gengler, Nicolas

    2014-09-30

    A condition to predict unbiased estimated breeding values by best linear unbiased prediction is to use simultaneously all available data. However, this condition is not often fully met. For example, in dairy cattle, internal (i.e. local) populations lead to evaluations based only on internal records while widely used foreign sires have been selected using internally unavailable external records. In such cases, internal genetic evaluations may be less accurate and biased. Because external records are unavailable, methods were developed to combine external information that summarizes these records, i.e. external estimated breeding values and associated reliabilities, with internal records to improve accuracy of internal genetic evaluations. Two issues of these methods concern double-counting of contributions due to relationships and due to records. These issues could be worse if external information came from several evaluations, at least partially based on the same records, and combined into a single internal evaluation. Based on a Bayesian approach, the aim of this research was to develop a unified method to integrate and blend simultaneously several sources of information into an internal genetic evaluation by avoiding double-counting of contributions due to relationships and due to records. This research resulted in equations that integrate and blend simultaneously several sources of information and avoid double-counting of contributions due to relationships and due to records. The performance of the developed equations was evaluated using simulated and real datasets. The results showed that the developed equations integrated and blended several sources of information well into a genetic evaluation. The developed equations also avoided double-counting of contributions due to relationships and due to records. Furthermore, because all available external sources of information were correctly propagated, relatives of external animals benefited from the integrated information and, therefore, more reliable estimated breeding values were obtained. The proposed unified method integrated and blended several sources of information well into a genetic evaluation by avoiding double-counting of contributions due to relationships and due to records. The unified method can also be extended to other types of situations such as single-step genomic or multi-trait evaluations, combining information across different traits.

  3. Presenting an Evaluation Model for the Cancer Registry Software.

    PubMed

    Moghaddasi, Hamid; Asadi, Farkhondeh; Rabiei, Reza; Rahimi, Farough; Shahbodaghi, Reihaneh

    2017-12-01

    As cancer is increasingly growing, cancer registry is of great importance as the main core of cancer control programs, and many different software packages have been designed for this purpose. Therefore, establishing a comprehensive evaluation model is essential to evaluate and compare a wide range of such software. In this study, the criteria of the cancer registry software were determined by studying the documents and two functional software packages of this field. The evaluation tool was a checklist, and in order to validate the model, this checklist was presented to experts in the form of a questionnaire. To analyze the results of validation, an agreement coefficient of 75% was set as the threshold for applying changes. Finally, when the model was approved, the final version of the evaluation model for the cancer registry software was presented. The evaluation model of this study contains a tool and a method of evaluation. The evaluation tool is a checklist including the general and specific criteria of the cancer registry software along with their sub-criteria. The evaluation method of this study was chosen as a criteria-based evaluation method based on the findings. The model of this study encompasses various dimensions of cancer registry software and a proper method for evaluating it. The strong point of this evaluation model is the separation between the general criteria and the specific ones, while trying to fulfill the comprehensiveness of the criteria. Since this model has been validated, it can be used as a standard to evaluate the cancer registry software.

  4. Assessing and evaluating multidisciplinary translational teams: a mixed methods approach.

    PubMed

    Wooten, Kevin C; Rose, Robert M; Ostir, Glenn V; Calhoun, William J; Ameredes, Bill T; Brasier, Allan R

    2014-03-01

    A case report illustrates how multidisciplinary translational teams can be assessed using outcome, process, and developmental types of evaluation using a mixed-methods approach. Types of evaluation appropriate for teams are considered in relation to relevant research questions and assessment methods. Logic models are applied to scientific projects and team development to inform choices between methods within a mixed-methods design. Use of an expert panel is reviewed, culminating in consensus ratings of 11 multidisciplinary teams and a final evaluation within a team-type taxonomy. Based on team maturation and scientific progress, teams were designated as (a) early in development, (b) traditional, (c) process focused, or (d) exemplary. Lessons learned from data reduction, use of mixed methods, and use of expert panels are explored.

  5. PRELIMINARY RESULTS: EVALUATIONS OF THE ALTERNATIVE ASBESTOS CONTROL METHOD FOR BUILDING DEMOLITION

    EPA Science Inventory

    This presentation describes the preliminary results of the evaluations of the alternative asbestos control method for demolishing buildings containing asbestos, which are covered under the regulatory requirements of the Asbestos NESHAP. This abstract and presentation are based, at ...

  6. EVALUATION OF FUGITIVE EMISSIONS USING GROUND-BASED OPTICAL REMOTE SENSING TECHNOLOGY

    EPA Science Inventory

    EPA has developed and evaluated a method for characterizing fugitive emissions from large area sources. The method, known as radial plume mapping (RPM) uses multiple-beam, scanning, optical remote sensing (ORS) instrumentation such as open-path Fourier transform infrared spectro...

  7. Evaluating the predictive performance of empirical estimators of natural mortality rate using information on over 200 fish species

    USGS Publications Warehouse

    Then, Amy Y.; Hoenig, John M; Hall, Norman G.; Hewitt, David A.

    2015-01-01

    Many methods have been developed in the last 70 years to predict the natural mortality rate, M, of a stock based on empirical evidence from comparative life history studies. These indirect or empirical methods are used in most stock assessments to (i) obtain estimates of M in the absence of direct information, (ii) check on the reasonableness of a direct estimate of M, (iii) examine the range of plausible M estimates for the stock under consideration, and (iv) define prior distributions for Bayesian analyses. The two most cited empirical methods have appeared in the literature over 2500 times to date. Despite the importance of these methods, there is no consensus in the literature on how well these methods work in terms of prediction error or how their performance may be ranked. We evaluate estimators based on various combinations of maximum age (tmax), growth parameters, and water temperature by seeing how well they reproduce >200 independent, direct estimates of M. We use tenfold cross-validation to estimate the prediction error of the estimators and to rank their performance. With updated and carefully reviewed data, we conclude that a tmax-based estimator performs the best among all estimators evaluated. The tmax-based estimators in turn perform better than the Alverson–Carney method based on tmax and the von Bertalanffy K coefficient, Pauly's method based on growth parameters and water temperature and methods based just on K. It is possible to combine two independent methods by computing a weighted mean but the improvement over the tmax-based methods is slight. Based on cross-validation prediction error, model residual patterns, model parsimony, and biological considerations, we recommend the use of a tmax-based estimator (M = 4.899 tmax^-0.916, prediction error = 0.32) when possible and a growth-based method (M = 4.118 K^0.73 L∞^-0.33, prediction error = 0.6, length in cm) otherwise.
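
    The two recommended estimators reduce to one-line formulas, so they are easy to apply directly. A minimal sketch follows, using the coefficients quoted above; the life-history inputs in the example are hypothetical.

      def natural_mortality_tmax(t_max):
          """Maximum-age-based estimator: M = 4.899 * t_max ** -0.916."""
          return 4.899 * t_max ** -0.916

      def natural_mortality_growth(k, l_inf):
          """Growth-based estimator: M = 4.118 * K**0.73 * L_inf**-0.33 (L_inf in cm)."""
          return 4.118 * k ** 0.73 * l_inf ** -0.33

      # Hypothetical life-history inputs
      print(round(natural_mortality_tmax(20), 3))         # e.g. maximum age of 20 years
      print(round(natural_mortality_growth(0.2, 80), 3))  # e.g. K = 0.2 per year, L_inf = 80 cm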

  8. A study of active learning methods for named entity recognition in clinical text.

    PubMed

    Chen, Yukun; Lasko, Thomas A; Mei, Qiaozhu; Denny, Joshua C; Xu, Hua

    2015-12-01

    Named entity recognition (NER), a sequential labeling task, is one of the fundamental tasks for building clinical natural language processing (NLP) systems. Machine learning (ML) based approaches can achieve good performance, but they often require large amounts of annotated samples, which are expensive to build due to the requirement of domain experts in annotation. Active learning (AL), a sample selection approach integrated with supervised ML, aims to minimize the annotation cost while maximizing the performance of ML-based models. In this study, our goal was to develop and evaluate both existing and new AL methods for a clinical NER task to identify concepts of medical problems, treatments, and lab tests from the clinical notes. Using the annotated NER corpus from the 2010 i2b2/VA NLP challenge that contained 349 clinical documents with 20,423 unique sentences, we simulated AL experiments using a number of existing and novel algorithms in three different categories including uncertainty-based, diversity-based, and baseline sampling strategies. They were compared with the passive learning that uses random sampling. Learning curves that plot performance of the NER model against the estimated annotation cost (based on number of sentences or words in the training set) were generated to evaluate different active learning and the passive learning methods and the area under the learning curve (ALC) score was computed. Based on the learning curves of F-measure vs. number of sentences, uncertainty sampling algorithms outperformed all other methods in ALC. Most diversity-based methods also performed better than random sampling in ALC. To achieve an F-measure of 0.80, the best method based on uncertainty sampling could save 66% annotations in sentences, as compared to random sampling. For the learning curves of F-measure vs. number of words, uncertainty sampling methods again outperformed all other methods in ALC. To achieve 0.80 in F-measure, in comparison to random sampling, the best uncertainty based method saved 42% annotations in words. But the best diversity based method reduced only 7% annotation effort. In the simulated setting, AL methods, particularly uncertainty-sampling based approaches, seemed to significantly save annotation cost for the clinical NER task. The actual benefit of active learning in clinical NER should be further evaluated in a real-time setting. Copyright © 2015 Elsevier Inc. All rights reserved.
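
    As a rough illustration of the uncertainty-sampling idea evaluated above, the sketch below ranks unlabeled sentences by a least-confidence score and returns the most uncertain ones for annotation. It is a simplified sentence-level sketch with a toy probability function, not the algorithms or data of the study.

      def least_confidence_batch(unlabeled, predict_proba, batch_size=10):
          """Select the sentences the current model is least confident about."""
          scored = []
          for sentence in unlabeled:
              probs = predict_proba(sentence)       # posterior over candidate labels
              scored.append((1.0 - max(probs), sentence))
          scored.sort(key=lambda pair: pair[0], reverse=True)  # most uncertain first
          return [sentence for _, sentence in scored[:batch_size]]

      # Toy usage: a fake model that is unsure whenever a sentence mentions "pain"
      toy_model = lambda s: [0.5, 0.3, 0.2] if "pain" in s else [0.9, 0.05, 0.05]
      pool = ["patient reports pain", "no acute distress", "pain on palpation"]
      print(least_confidence_batch(pool, toy_model, batch_size=2))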

  9. Automatic quality assessment and peak identification of auditory brainstem responses with fitted parametric peaks.

    PubMed

    Valderrama, Joaquin T; de la Torre, Angel; Alvarez, Isaac; Segura, Jose Carlos; Thornton, A Roger D; Sainz, Manuel; Vargas, Jose Luis

    2014-05-01

    The recording of the auditory brainstem response (ABR) is used worldwide for hearing screening purposes. In this process, a precise estimation of the most relevant components is essential for an accurate interpretation of these signals. This evaluation is usually carried out subjectively by an audiologist. However, the use of automatic methods for this purpose is being encouraged nowadays in order to reduce human evaluation biases and ensure uniformity among test conditions, patients, and screening personnel. This article describes a new method that performs automatic quality assessment and identification of the peaks, the fitted parametric peaks (FPP). This method is based on the use of synthesized peaks that are adjusted to the ABR response. The FPP is validated, on one hand, by an analysis of amplitudes and latencies measured manually by an audiologist and automatically by the FPP method in ABR signals recorded at different stimulation rates; and on the other hand, by contrasting the performance of the FPP method with the automatic evaluation techniques based on the correlation coefficient, FSP, and cross correlation with a predefined template waveform, comparing the automatic quality evaluations of these methods with subjective evaluations provided by five experienced evaluators on a set of ABR signals of different quality. The results of this study suggest (a) that the FPP method can be used to provide an accurate parameterization of the peaks in terms of amplitude, latency, and width, and (b) that the FPP remains the method that best approaches the averaged subjective quality evaluation, as well as provides the best results in terms of sensitivity and specificity in ABR signal validation. The significance of these findings and the clinical value of the FPP method are highlighted in this paper. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  10. Evaluation of the repeatability and reproducibility of a suite of qPCR based microbial source tracking methods

    EPA Science Inventory

    Many PCR-based methods for microbial source tracking (MST) have been developed and validated within individual research laboratories. Inter-laboratory validation of these methods, however, has been minimal, and the effects of protocol standardization regimes have not been thor...

  11. Probabilistic Scenario-based Seismic Risk Analysis for Critical Infrastructures Method and Application for a Nuclear Power Plant

    NASA Astrophysics Data System (ADS)

    Klügel, J.

    2006-12-01

    Deterministic scenario-based seismic hazard analysis has a long tradition in earthquake engineering for developing the design basis of critical infrastructures like dams, transport infrastructures, chemical plants and nuclear power plants. For many applications besides the design of infrastructures, it is of interest to assess the efficiency of the design measures taken. These applications require a method that allows a meaningful quantitative risk analysis to be performed. A new method for a probabilistic scenario-based seismic risk analysis has been developed based on a probabilistic extension of proven deterministic methods like the MCE methodology. The input data required for the method are entirely based on the information which is necessary to perform any meaningful seismic hazard analysis. The method is based on the probabilistic risk analysis approach common for applications in nuclear technology, developed originally by Kaplan & Garrick (1981). It is based on (1) a classification of earthquake events into different size classes (by magnitude), (2) the evaluation of the frequency of occurrence of events assigned to the different classes (frequency of initiating events), (3) the development of bounding critical scenarios assigned to each class based on the solution of an optimization problem, and (4) the evaluation of the conditional probability of exceedance of critical design parameters (vulnerability analysis). The advantages of the method in comparison with traditional PSHA consist in (1) its flexibility, allowing the use of different probabilistic models for earthquake occurrence as well as the incorporation of advanced physical models into the analysis, (2) the mathematically consistent treatment of uncertainties, and (3) the explicit consideration of the lifetime of the critical structure as a criterion to formulate different risk goals. The method was applied for the evaluation of the risk of production interruption losses of a nuclear power plant during its residual lifetime.
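
    A minimal sketch of how steps (2) and (4) combine into a risk figure is shown below: the annual frequency of exceeding a critical design parameter is approximated by summing, over the magnitude classes, the class occurrence frequency times the conditional exceedance probability of its bounding scenario. All numbers are hypothetical placeholders.

      # Hypothetical magnitude classes: annual occurrence frequency (per year) and
      # conditional probability of exceeding a critical design parameter,
      # obtained from the bounding scenario of each class (vulnerability analysis).
      classes = [
          {"magnitude": "5.0-5.9", "freq": 1e-2, "p_exceed": 0.001},
          {"magnitude": "6.0-6.9", "freq": 2e-3, "p_exceed": 0.05},
          {"magnitude": "7.0+",    "freq": 1e-4, "p_exceed": 0.40},
      ]

      annual_exceedance = sum(c["freq"] * c["p_exceed"] for c in classes)
      lifetime_years = 20  # residual lifetime considered in the risk goal
      lifetime_prob = 1 - (1 - annual_exceedance) ** lifetime_years  # assumes independent years
      print(f"annual frequency: {annual_exceedance:.2e}, "
            f"{lifetime_years}-year exceedance probability: {lifetime_prob:.2e}")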

  12. Comparison of Sensor Selection Mechanisms for an ERP-Based Brain-Computer Interface

    PubMed Central

    Metzen, Jan H.

    2013-01-01

    A major barrier for a broad applicability of brain-computer interfaces (BCIs) based on electroencephalography (EEG) is the large number of EEG sensor electrodes typically used. The necessity for this results from the fact that the relevant information for the BCI is often spread over the scalp in complex patterns that differ depending on subjects and application scenarios. Recently, a number of methods have been proposed to determine an individual optimal sensor selection. These methods have, however, rarely been compared against each other or against any type of baseline. In this paper, we review several selection approaches and propose one additional selection criterion based on the evaluation of the performance of a BCI system using a reduced set of sensors. We evaluate the methods in the context of a passive BCI system that is designed to detect a P300 event-related potential and compare the performance of the methods against randomly generated sensor constellations. For a realistic estimation of the reduced system's performance we transfer sensor constellations found on one experimental session to a different session for evaluation. We identified notable (and unanticipated) differences among the methods and could demonstrate that the best method in our setup is able to reduce the required number of sensors considerably. Though our application focuses on EEG data, all presented algorithms and evaluation schemes can be transferred to any binary classification task on sensor arrays. PMID:23844021

  13. Feature selection from hyperspectral imaging for guava fruit defects detection

    NASA Astrophysics Data System (ADS)

    Mat Jafri, Mohd. Zubir; Tan, Sou Ching

    2017-06-01

    Development of technology has made hyperspectral imaging commonly used for defect detection. In this research, a hyperspectral imaging system was set up in the lab to target guava fruit defect detection. Guava fruit was selected as the object because, to our knowledge, fewer attempts have been made at guava defect detection based on hyperspectral imaging. A common fluorescent light source was used to represent an uncontrolled lighting condition in the lab, and analysis was carried out in a specific wavelength range due to the inefficiency of this particular light source. Based on the data, the reflectance intensity of this specific setup could be categorized into two groups. Sequential feature selection with linear discriminant (LD) and quadratic discriminant (QD) functions was used to select features that could potentially be used in defect detection. Besides the ordinary training method, the training dataset for the discriminant was separated in two to cater for the uncontrolled lighting condition. The two parts were separated based on the brighter and dimmer areas. Four configurations were evaluated: LD with the common training method, QD with the common training method, LD with the two-part training method, and QD with the two-part training method. These configurations were evaluated using the F1-score over a total of 48 defect areas. The experiment showed that the F1-score of the linear discriminant with the two-part (compensated) training method reached 0.8, the highest score among all.
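
    The F1-score used as the evaluation measure above is the harmonic mean of precision and recall over the detected defect areas. A minimal sketch with hypothetical counts follows.

      def f1_score(true_positives, false_positives, false_negatives):
          """F1 = harmonic mean of precision and recall."""
          precision = true_positives / (true_positives + false_positives)
          recall = true_positives / (true_positives + false_negatives)
          return 2 * precision * recall / (precision + recall)

      # Hypothetical detection counts over 48 defect areas for one configuration
      print(round(f1_score(true_positives=40, false_positives=10, false_negatives=8), 2))  # 0.82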

  14. Evaluation of the Technical Adequacy of Three Methods for Identifying Specific Learning Disabilities Based on Cognitive Discrepancies

    ERIC Educational Resources Information Center

    Stuebing, Karla K.; Fletcher, Jack M.; Branum-Martin, Lee; Francis, David J.

    2012-01-01

    This study used simulation techniques to evaluate the technical adequacy of three methods for the identification of specific learning disabilities via patterns of strengths and weaknesses in cognitive processing. Latent and observed data were generated and the decision-making process of each method was applied to assess concordance in…

  15. Performance and Specificity of the Covalently Linked Immunomagnetic Separation-ATP Method for Rapid Detection and Enumeration of Enterococci in Coastal Environments

    PubMed Central

    Zimmer-Faust, Amity G.; Thulsiraj, Vanessa; Ferguson, Donna

    2014-01-01

    The performance and specificity of the covalently linked immunomagnetic separation-ATP (Cov-IMS/ATP) method for the detection and enumeration of enterococci was evaluated in recreational waters. Cov-IMS/ATP performance was compared with standard methods: defined substrate technology (Enterolert; IDEXX Laboratories), membrane filtration (EPA Method 1600), and an Enterococcus-specific quantitative PCR (qPCR) assay (EPA Method A). We extend previous studies by (i) analyzing the stability of the relationship between the Cov-IMS/ATP method and culture-based methods at different field sites, (ii) evaluating specificity of the assay for seven ATCC Enterococcus species, (iii) identifying cross-reacting organisms binding the antibody-bead complexes with 16S rRNA gene sequencing and evaluating specificity of the assay to five nonenterococcus species, and (iv) conducting preliminary tests of preabsorption as a means of improving the assay. Cov-IMS/ATP was found to perform consistently and with strong agreement rates (based on exceedance/compliance with regulatory limits) of between 83% and 100% compared to the culture-based Enterolert method at a variety of sites with complex inputs. The Cov-IMS/ATP method is specific to five of seven different Enterococcus spp. tested. However, there is potential for nontarget bacteria to bind the antibody, which may be reduced by purification of the IgG serum with preabsorption at problematic sites. The findings of this study help to validate the Cov-IMS/ATP method, suggesting a predictable relationship between the Cov-IMS/ATP method and traditional culture-based methods, which will allow for more widespread application of this rapid and field-portable method for coastal water quality assessment. PMID:24561583

  16. An Analysis of the Bases Used By Library Evaluators in the Accrediting Process of the Southern Association of Colleges and Schools.

    ERIC Educational Resources Information Center

    Yates, Dudley V.

    Seventy-seven of ninety library evaluators of the Southern Association of Colleges and Schools (SACS) responded to a 1973 questionnaire to determine: (1) whether the evaluative criteria used are based on an authority other than SACS; and (2) whether certain methods, procedures, and techniques employed by evaluators could be used to construct an ideal…

  17. Development and Evaluation of the Method with an Affective Interface for Promoting Employees' Morale

    NASA Astrophysics Data System (ADS)

    Fujino, Hidenori; Ishii, Hirotake; Shimoda, Hiroshi; Yoshikawa, Hidekazu

    For a sustainable society, organization management is required that is not based on mass production and mass consumption but has the flexibility to meet various social needs precisely. Realizing such management requires the employees' work morale. Recently, however, employees' work morale has tended to decrease. Therefore, in this study, the authors developed a model of a method for promoting and keeping employees' work morale effectively and efficiently. In particular, the authors regarded "work morale" as an "attitude toward the work". Based on this idea, the theory of persuasion psychology and various persuasion techniques could be applied. Therefore, a model of the method applying a character agent was developed based on forced compliance, which is one of the persuasion techniques grounded in the theory of cognitive dissonance. An evaluation experiment using human subjects confirmed that the developed method could improve workers' work morale effectively.

  18. Evaluation of aortic contractility based on analysis of CT images of the heart

    NASA Astrophysics Data System (ADS)

    Dzierżak, Róża; Maciejewski, Ryszard; Uhlig, Sebastian

    2017-08-01

    The paper presents a method to assess aortic contractility based on the analysis of CT images of the heart. This is an alternative method that can be used for patients who cannot be examined using echocardiography. Use of a medical imaging application for DICOM file processing allows the aortic cross section to be evaluated during systole and diastole. This makes it possible to assess the level of aortic contractility.

  19. A human reliability based usability evaluation method for safety-critical software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boring, R. L.; Tran, T. Q.; Gertman, D. I.

    2006-07-01

    Boring and Gertman (2005) introduced a novel method that augments heuristic usability evaluation methods with that of the human reliability analysis method of SPAR-H. By assigning probabilistic modifiers to individual heuristics, it is possible to arrive at the usability error probability (UEP). Although this UEP is not a literal probability of error, it nonetheless provides a quantitative basis to heuristic evaluation. This method allows one to seamlessly prioritize and identify usability issues (i.e., a higher UEP requires more immediate fixes). However, the original version of this method required the usability evaluator to assign priority weights to the final UEP, thus allowing the priority of a usability issue to differ among usability evaluators. The purpose of this paper is to explore an alternative approach to standardize the priority weighting of the UEP in an effort to improve the method's reliability. (authors)

  20. Connecting clinical and actuarial prediction with rule-based methods.

    PubMed

    Fokkema, Marjolein; Smits, Niels; Kelderman, Henk; Penninx, Brenda W J H

    2015-06-01

    Meta-analyses comparing the accuracy of clinical versus actuarial prediction have shown actuarial methods to outperform clinical methods, on average. However, actuarial methods are still not widely used in clinical practice, and there has been a call for the development of actuarial prediction methods for clinical practice. We argue that rule-based methods may be more useful than the linear main effect models usually employed in prediction studies, from a data and decision analytic as well as a practical perspective. In addition, decision rules derived with rule-based methods can be represented as fast and frugal trees, which, unlike main effects models, can be used in a sequential fashion, reducing the number of cues that have to be evaluated before making a prediction. We illustrate the usability of rule-based methods by applying RuleFit, an algorithm for deriving decision rules for classification and regression problems, to a dataset on prediction of the course of depressive and anxiety disorders from Penninx et al. (2011). The RuleFit algorithm provided a model consisting of 2 simple decision rules, requiring evaluation of only 2 to 4 cues. Predictive accuracy of the 2-rule model was very similar to that of a logistic regression model incorporating 20 predictor variables, originally applied to the dataset. In addition, the 2-rule model required, on average, evaluation of only 3 cues. Therefore, the RuleFit algorithm appears to be a promising method for creating decision tools that are less time consuming and easier to apply in psychological practice, and with accuracy comparable to traditional actuarial methods. (c) 2015 APA, all rights reserved.
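
    The appeal of the 2-rule model is that it can be read as a fast and frugal tree: cues are checked one at a time and a prediction is made as soon as a rule fires. The sketch below illustrates that sequential structure with hypothetical cues and cut-offs; it is not the model fitted by RuleFit in the study.

      def predict_course(severity_score, duration_months):
          """Illustrative two-rule decision list in fast-and-frugal style."""
          if severity_score >= 30:           # cue 1: high baseline symptom severity
              return "chronic course likely"
          if duration_months >= 24:          # cue 2: long prior symptom duration
              return "chronic course likely"
          return "favourable course likely"  # no further cues need to be evaluated

      print(predict_course(severity_score=18, duration_months=30))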

  1. Research on image complexity evaluation method based on color information

    NASA Astrophysics Data System (ADS)

    Wang, Hao; Duan, Jin; Han, Xue-hui; Xiao, Bo

    2017-11-01

    In order to evaluate the complexity of a color image more effectively and find the connection between image complexity and image information, this paper presents a method to compute image complexity based on color information. The theoretical analysis first divides complexity at the subjective level into three classes: low complexity, medium complexity and high complexity. It then carries out image feature extraction and finally establishes the function between the complexity value and the color characteristic model. The experimental results show that this kind of evaluation method can objectively reconstruct the complexity of the image from the image features. The results obtained by the method of this paper are in good agreement with the complexity perceived by human vision, so the color image complexity has a certain reference value.

  2. A hybrid method for evaluating enterprise architecture implementation.

    PubMed

    Nikpay, Fatemeh; Ahmad, Rodina; Yin Kia, Chiam

    2017-02-01

    Enterprise Architecture (EA) implementation evaluation provides a set of methods and practices for evaluating the EA implementation artefacts within an EA implementation project. There are insufficient practices in existing EA evaluation models in terms of considering all EA functions and processes, using structured methods in developing EA implementation, employing matured practices, and using appropriate metrics to achieve proper evaluation. The aim of this research is to develop a hybrid evaluation method that supports achieving the objectives of EA implementation. To attain this aim, the first step is to identify EA implementation evaluation practices. To this end, a Systematic Literature Review (SLR) was conducted. Second, the proposed hybrid method was developed based on the foundation and information extracted from the SLR, semi-structured interviews with EA practitioners, program theory evaluation and Information Systems (ISs) evaluation. Finally, the proposed method was validated by means of a case study and expert reviews. This research provides a suitable foundation for researchers who wish to extend and continue this research topic with further analysis and exploration, and for practitioners who would like to employ an effective and lightweight evaluation method for EA projects. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. A method for evaluating discoverability and navigability of recommendation algorithms.

    PubMed

    Lamprecht, Daniel; Strohmaier, Markus; Helic, Denis

    2017-01-01

    Recommendations are increasingly used to support and enable discovery, browsing, and exploration of items. This is especially true for entertainment platforms such as Netflix or YouTube, where frequently, no clear categorization of items exists. Yet, the suitability of a recommendation algorithm to support these use cases cannot be comprehensively evaluated by any recommendation evaluation measures proposed so far. In this paper, we propose a method to expand the repertoire of existing recommendation evaluation techniques with a method to evaluate the discoverability and navigability of recommendation algorithms. The proposed method tackles this by means of first evaluating the discoverability of recommendation algorithms by investigating structural properties of the resulting recommender systems in terms of bow tie structure, and path lengths. Second, the method evaluates navigability by simulating three different models of information seeking scenarios and measuring the success rates. We show the feasibility of our method by applying it to four non-personalized recommendation algorithms on three data sets and also illustrate its applicability to personalized algorithms. Our work expands the arsenal of evaluation techniques for recommendation algorithms, extends from a one-click-based evaluation towards multi-click analysis, and presents a general, comprehensive method to evaluating navigability of arbitrary recommendation algorithms.
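
    A minimal sketch of the navigability part of such an evaluation is shown below: an information-seeking session is simulated as a walk over the recommendation graph, and the success rate of reaching a target item within a click budget is measured. The graph, targets and click model are hypothetical stand-ins for the models described above.

      import random

      def navigation_success_rate(recommendations, targets, start_items,
                                  max_clicks=5, trials=1000, seed=0):
          """Fraction of simulated sessions that reach a target within the click budget."""
          rng = random.Random(seed)
          successes = 0
          for _ in range(trials):
              item = rng.choice(start_items)
              clicks = 0
              while item not in targets and clicks < max_clicks:
                  neighbours = recommendations.get(item, [])
                  if not neighbours:
                      break                     # dead end in the recommendation graph
                  item = rng.choice(neighbours) # naive click model: pick uniformly
                  clicks += 1
              successes += item in targets
          return successes / trials

      # Hypothetical recommendation graph: item -> list of recommended items
      graph = {"a": ["b", "c"], "b": ["d"], "c": ["a"], "d": []}
      print(navigation_success_rate(graph, targets={"d"}, start_items=["a", "c"]))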

  4. Correction for FDG PET dose extravasations: Monte Carlo validation and quantitative evaluation of patient studies.

    PubMed

    Silva-Rodríguez, Jesús; Aguiar, Pablo; Sánchez, Manuel; Mosquera, Javier; Luna-Vega, Víctor; Cortés, Julia; Garrido, Miguel; Pombar, Miguel; Ruibal, Alvaro

    2014-05-01

    Current procedure guidelines for whole body [18F]fluoro-2-deoxy-D-glucose (FDG)-positron emission tomography (PET) state that studies with visible dose extravasations should be rejected for quantification protocols. Our work is focused on the development and validation of methods for estimating extravasated doses in order to correct standard uptake value (SUV) values for this effect in clinical routine. One thousand three hundred sixty-seven consecutive whole body FDG-PET studies were visually inspected looking for extravasation cases. Two methods for estimating the extravasated dose were proposed and validated in different scenarios using Monte Carlo simulations. All visible extravasations were retrospectively evaluated using a manual ROI based method. In addition, the 50 patients with higher extravasated doses were also evaluated using a threshold-based method. Simulation studies showed that the proposed methods for estimating extravasated doses allow us to compensate the impact of extravasations on SUV values with an error below 5%. The quantitative evaluation of patient studies revealed that paravenous injection is a relatively frequent effect (18%) with a small fraction of patients presenting considerable extravasations ranging from 1% to a maximum of 22% of the injected dose. A criterion based on the extravasated volume and maximum concentration was established in order to identify this fraction of patients that might be corrected for paravenous injection effect. The authors propose the use of a manual ROI based method for estimating the effectively administered FDG dose and then correct SUV quantification in those patients fulfilling the proposed criterion.

  5. Evaluation of different classification methods for the diagnosis of schizophrenia based on functional near-infrared spectroscopy.

    PubMed

    Li, Zhaohua; Wang, Yuduo; Quan, Wenxiang; Wu, Tongning; Lv, Bin

    2015-02-15

    Based on near-infrared spectroscopy (NIRS), recent converging evidence has been observed that patients with schizophrenia exhibit abnormal functional activities in the prefrontal cortex during a verbal fluency task (VFT). Therefore, some studies have attempted to employ NIRS measurements to differentiate schizophrenia patients from healthy controls with different classification methods. However, no systematic evaluation was conducted to compare their respective classification performances on the same study population. In this study, we evaluated the classification performance of four classification methods (including linear discriminant analysis, k-nearest neighbors, Gaussian process classifier, and support vector machines) on an NIRS-aided schizophrenia diagnosis. We recruited a large sample of 120 schizophrenia patients and 120 healthy controls and measured the hemoglobin response in the prefrontal cortex during the VFT using a multichannel NIRS system. Features for classification were extracted from three types of NIRS data in each channel. We subsequently performed a principal component analysis (PCA) for feature selection prior to comparison of the different classification methods. We achieved a maximum accuracy of 85.83% and an overall mean accuracy of 83.37% using a PCA-based feature selection on oxygenated hemoglobin signals and support vector machine classifier. This is the first comprehensive evaluation of different classification methods for the diagnosis of schizophrenia based on different types of NIRS signals. Our results suggested that, using the appropriate classification method, NIRS has the potential capacity to be an effective objective biomarker for the diagnosis of schizophrenia. Copyright © 2014 Elsevier B.V. All rights reserved.
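
    A minimal sketch of one of the evaluated pipelines (PCA-based feature selection followed by a support vector machine, scored by cross-validation) is shown below, assuming scikit-learn is available. The data are synthetic and the dimensions and parameters are illustrative only.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      X = rng.normal(size=(240, 52 * 3))   # e.g. 240 subjects, 52 channels x 3 NIRS signal types
      y = np.repeat([0, 1], 120)           # 0 = healthy control, 1 = patient (synthetic labels)

      model = make_pipeline(StandardScaler(), PCA(n_components=20), SVC(kernel="linear"))
      scores = cross_val_score(model, X, y, cv=10)
      print("mean cross-validated accuracy:", scores.mean().round(3))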

  6. Including α s1 casein gene information in genomic evaluations of French dairy goats.

    PubMed

    Carillier-Jacquin, Céline; Larroque, Hélène; Robert-Granié, Christèle

    2016-08-04

    Genomic best linear unbiased prediction methods assume that all markers explain the same fraction of the genetic variance and do not account effectively for genes with major effects such as the α s1 casein polymorphism in dairy goats. In this study, we investigated methods to include the available α s1 casein genotype effect in genomic evaluations of French dairy goats. First, the α s1 casein genotype was included as a fixed effect in genomic evaluation models based only on bucks that were genotyped at the α s1 casein locus. Less than 1 % of the females with phenotypes were genotyped at the α s1 casein gene. Thus, to incorporate these female phenotypes in the genomic evaluation, two methods that allowed for this large number of missing α s1 casein genotypes were investigated. Probabilities for each possible α s1 casein genotype were first estimated for each female of unknown genotype based on iterative peeling equations. The second method is based on a multiallelic gene content approach. For each model tested, we used three datasets each divided into a training and a validation set: (1) two-breed population (Alpine + Saanen), (2) Alpine population, and (3) Saanen population. The α s1 casein genotype had a significant effect on milk yield, fat content and protein content. Including an α s1 casein effect in genetic and genomic evaluations based only on male known α s1 casein genotypes improved accuracies (from 6 to 27 %). In genomic evaluations based on all female phenotypes, the gene content approach performed better than the other tested methods but the improvement in accuracy was only slightly better (from 1 to 14 %) than that of a genomic model without the α s1 casein effect. Including the α s1 casein effect in a genomic evaluation model for French dairy goats is possible and useful to improve accuracy. Difficulties in predicting the genotypes for ungenotyped animals limited the improvement in accuracy of the obtained estimated breeding values.

  7. [Artistic anatomy of the nose: proposals for a simplified project of rhinoplasty].

    PubMed

    Polselli, R; Saban, Y

    2007-01-01

    The authors developed an original and simple method for evaluating the aesthetic lines of the nose adapted to the harmony of the face. Initially based on their experience, the authors propose an evaluation of the nose in 2 stages and 5 sequences based on the construction of single-circuit lines according to various incidences. They then checked the validity of this method on the operative plan and on the appreciation of the results of rhinoplasties. Tested on several types of faces, the method suggested by the authors proved to be reliable, simple and reproducible. The authors propose a method for evaluating the aesthetic lines of the nose integrated with the harmony of the face. This method relies on the construction, in 5 stages, of single-circuit lines not requiring any particular material. The artistic method of evaluating the nose proposed by the authors is very simple. Rapid and immediately usable, it makes it possible to plan a rhinoplasty in a few minutes. The evaluation of the aesthetic results of rhinoplasties is also very simple and reproducible. It moreover has the merit of proposing a teaching model that allows the rhinoplastician to critique his results and thus progress in his technical training and operative indications.

  8. Comparing conventional Descriptive Analysis and Napping®-UFP against physiochemical measurements: a case study using apples.

    PubMed

    Pickup, William; Bremer, Phil; Peng, Mei

    2018-03-01

    The extensive time and cost associated with conventional sensory profiling methods has spurred sensory researchers to develop rapid method alternatives, such as Napping® with Ultra-Flash Profiling (UFP). Napping®-UFP generates sensory maps by requiring untrained panellists to separate samples based on perceived sensory similarities. Evaluations of this method have been restricted to manufactured/formulated food models, and predominantly structured on comparisons against the conventional descriptive method. The present study aims to extend the validation of Napping®-UFP (N = 72) to natural biological products, and to evaluate this method against Descriptive Analysis (DA; N = 8) with physiochemical measurements as an additional evaluative criterion. The results revealed that sample configurations generated by DA and Napping®-UFP were not significantly correlated (RV = 0.425, P = 0.077); however, they were both correlated with the product map generated based on the instrumental measures (P < 0.05). The findings also noted that sample characterisations from DA and Napping®-UFP were driven by different sensory attributes, indicating potential structural differences between these two methods in configuring samples. Overall, these findings lent support for the extended use of Napping®-UFP for evaluations of natural biological products. Although DA was shown to be a better method for establishing sensory-instrumental relationships, Napping®-UFP exhibited strengths in generating informative sample configurations based on holistic perception of products. © 2017 Society of Chemical Industry.

  9. Estimation of interfacial heat transfer coefficient in inverse heat conduction problems based on artificial fish swarm algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Xiaowei; Li, Huiping; Li, Zhichao

    2018-04-01

    The interfacial heat transfer coefficient (IHTC) is one of the most important thermophysical parameters, with significant effects on the calculation accuracy of the physical fields in numerical simulation. In this study, the artificial fish swarm algorithm (AFSA) was used to evaluate the IHTC between a heated sample and the quenchant in a one-dimensional heat conduction problem. AFSA is a global optimization method. In order to speed up convergence, a hybrid method combining AFSA with a normal distribution method (ZAFSA) was presented. The IHTC values evaluated by ZAFSA were compared with those obtained by AFSA and by the advance-retreat method and the golden section method. The results show that a reasonable IHTC is obtained by using ZAFSA and that the hybrid method converges well. The algorithm based on ZAFSA can not only accelerate the convergence speed, but also reduce the numerical oscillation in the evaluation of the IHTC.

  10. Practicable group testing method to evaluate weight/weight GMO content in maize grains.

    PubMed

    Mano, Junichi; Yanaka, Yuka; Ikezu, Yoko; Onishi, Mari; Futo, Satoshi; Minegishi, Yasutaka; Ninomiya, Kenji; Yotsuyanagi, Yuichi; Spiegelhalter, Frank; Akiyama, Hiroshi; Teshima, Reiko; Hino, Akihiro; Naito, Shigehiro; Koiwa, Tomohiro; Takabatake, Reona; Furui, Satoshi; Kitta, Kazumi

    2011-07-13

    Because of the increasing use of maize hybrids with genetically modified (GM) stacked events, the established and commonly used bulk sample methods for PCR quantification of GM maize in non-GM maize are prone to overestimate the GM organism (GMO) content, compared to the actual weight/weight percentage of GM maize in the grain sample. As an alternative method, we designed and assessed a group testing strategy in which the GMO content is statistically evaluated based on qualitative analyses of multiple small pools, consisting of 20 maize kernels each. This approach enables the GMO content evaluation on a weight/weight basis, irrespective of the presence of stacked-event kernels. To enhance the method's user-friendliness in routine application, we devised an easy-to-use PCR-based qualitative analytical method comprising a sample preparation step in which 20 maize kernels are ground in a lysis buffer and a subsequent PCR assay in which the lysate is directly used as a DNA template. This method was validated in a multilaboratory collaborative trial.
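
    Under the usual binomial group-testing assumptions (kernels independent, a 20-kernel pool testing positive whenever it contains at least one GM kernel), the per-kernel GM proportion can be back-calculated from the number of positive pools. The sketch below shows that standard point estimate with hypothetical pool results; it is not necessarily the exact statistical procedure of the cited trial, and it treats all kernels as equal in weight.

      def gm_proportion_estimate(positive_pools, total_pools, kernels_per_pool=20):
          """Point estimate of the per-kernel GM proportion from qualitative pool results."""
          negative_fraction = 1 - positive_pools / total_pools
          return 1 - negative_fraction ** (1 / kernels_per_pool)

      # Hypothetical result: 4 of 30 twenty-kernel pools test positive by qualitative PCR
      estimate = gm_proportion_estimate(positive_pools=4, total_pools=30)
      print(f"approx. {estimate * 100:.2f}% GM kernels")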

  11. 75 FR 2523 - Office of Innovation and Improvement; Overview Information; Arts in Education Model Development...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-15

    ... that is based on rigorous scientifically based research methods to assess the effectiveness of a...) Relies on measurements or observational methods that provide reliable and valid data across evaluators... of innovative, cohesive models that are based on research and have demonstrated that they effectively...

  12. Evaluation Criteria for Competency-Based Syllabi: A Chilean Case Study Applying Mixed Methods

    ERIC Educational Resources Information Center

    Jerez, Oscar; Valenzuela, Leslier; Pizarro, Veronica; Hasbun, Beatriz; Valenzuela, Gabriela; Orsini, Cesar

    2016-01-01

    In recent decades, higher education institutions worldwide have been moving from knowledge-based to competence-based curricula. One of the greatest challenges in this transition is the difficulty in changing the knowledge-oriented practices of teachers. This study evaluates the consistency between syllabus design and the requirements imposed by a…

  13. Evaluation of Smoking Prevention Television Messages Based on the Elaboration Likelihood Model

    ERIC Educational Resources Information Center

    Flynn, Brian S.; Worden, John K.; Bunn, Janice Yanushka; Connolly, Scott W.; Dorwaldt, Anne L.

    2011-01-01

    Progress in reducing youth smoking may depend on developing improved methods to communicate with higher risk youth. This study explored the potential of smoking prevention messages based on the Elaboration Likelihood Model (ELM) to address these needs. Structured evaluations of 12 smoking prevention messages based on three strategies derived from…

  14. LABORATORY TOXICITY TESTS FOR EVALUATING POTENTIAL EFFECTS OF ENDOCRINE-DISRUPTING COMPOUNDS

    EPA Science Inventory

    The scope of the Laboratory Testing Work Group was to evaluate methods for testing aquatic and terrestrial invertebrates in the laboratory. Specifically, discussions focused on the following objectives: 1) assess the extent to which consensus-based standard methods and other pub...

  15. The design and implementation of urban earthquake disaster loss evaluation and emergency response decision support systems based on GIS

    NASA Astrophysics Data System (ADS)

    Yang, Kun; Xu, Quan-li; Peng, Shuang-yun; Cao, Yan-bo

    2008-10-01

    Based on an analysis of the necessity of GIS applications in earthquake disaster prevention, this paper discusses in depth a spatial integration scheme for urban earthquake disaster loss evaluation models and visualization technologies, using network development methods such as COM/DCOM, ActiveX and ASP, as well as spatial database development methods such as OO4O and ArcSDE based on the ArcGIS software packages. Meanwhile, in accordance with software engineering principles, a solution for urban earthquake emergency response decision support systems based on GIS technologies is also proposed, covering the system's logical structure, technical routes, realization methods and functional structure. Finally, the user interfaces of the test system are also presented in the paper.

  16. Follow-up of solar lentigo depigmentation with a retinaldehyde-based cream by clinical evaluation and calibrated colour imaging.

    PubMed

    Questel, E; Durbise, E; Bardy, A-L; Schmitt, A-M; Josse, G

    2015-05-01

    To assess an objective method for evaluating the effects of a retinaldehyde-based cream (RA-cream) on solar lentigines, 29 women randomly applied RA-cream to lentigines on one hand and a control cream on the other, once daily for 3 months. A specific method enabling reliable visualisation of the lesions was proposed, using high-magnification colour-calibrated camera imaging. Assessment was performed using clinical evaluation by Physician Global Assessment score and image analysis. Luminance determination on the digital images was performed either on the basis of consensus borders from 5 independent experts or by probability map analysis via an algorithm automatically detecting the pigmented area. Both image analysis methods showed a similar lightening of ΔL* = 2 after a 3-month treatment with RA-cream, in agreement with the single-blind clinical evaluation. High-magnification colour-calibrated camera imaging combined with probability map analysis is a fast and precise method to follow lentigo depigmentation. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  17. Nondestructive methods of evaluating quality of wood in preservative-treated piles

    Treesearch

    Xiping Wang; Robert J. Ross; John R. Erickson; John W. Forsman; Gary D. McGinnis; Rodney C. De Groot

    2000-01-01

    Stress-wave-based nondestructive evaluation methods were used to evaluate the potential quality and modulus of elasticity (MOE) of wood in used preservative-treated Douglas-fir and southern pine piles. Stress wave measurements were conducted on each pile section. Stress wave propagation speeds in the piles were then obtained to estimate their MOE. This was followed by...

  18. Evaluation of parameters of color profile models of LCD and LED screens

    NASA Astrophysics Data System (ADS)

    Zharinov, I. O.; Zharinov, O. O.

    2017-12-01

    The purpose of the research relates to the problem of parametric identification of the color profile model of LCD (liquid crystal display) and LED (light emitting diode) screens. The color profile model of a screen is based on Grassmann's law of additive color mixture. Mathematically, the problem is to evaluate the unknown parameters (numerical coefficients) of the matrix transformation between different color spaces. Several methods for evaluating these screen profile coefficients were developed. These methods are based either on processing colorimetric measurements or on processing technical documentation data.
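
    A minimal sketch of the measurement-based route is shown below: if the screen model is a 3x3 linear (Grassmann-type) map from linearised RGB drive values to CIE XYZ, the matrix coefficients can be identified by least squares from a handful of displayed patches and their colorimeter readings. The readings below are fabricated, exactly-linear placeholders, not data from a real screen.

      import numpy as np

      # Linearised RGB stimuli shown on the screen (one row per test patch)
      rgb = np.array([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0],
                      [0.0, 0.0, 1.0],
                      [1.0, 1.0, 1.0],
                      [0.5, 0.5, 0.0]])

      # Corresponding colorimeter readings (X, Y, Z) -- hypothetical numbers
      xyz = np.array([[0.41, 0.21, 0.02],
                      [0.36, 0.72, 0.12],
                      [0.18, 0.07, 0.95],
                      [0.95, 1.00, 1.09],
                      [0.385, 0.465, 0.07]])

      # Least-squares fit of A such that xyz ≈ rgb @ A; the conventional profile
      # matrix mapping an (R, G, B) column vector to (X, Y, Z) is then A.T
      A, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)
      print(np.round(A.T, 3))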

  19. The Practical Concept of an Evaluator and Its Use in the Design of Training Systems.

    ERIC Educational Resources Information Center

    Gibbons, Andrew S.; Rogers, Dwayne H.

    1991-01-01

    The evaluator is an instructional system product that provides practice, testing capability, and feedback in a way not yet seen in computer-assisted instruction. Training methods using an evaluator contain scenario-based simulation exercises, followed by a critique of performance. A focus on competency-based education and performance makes the…

  20. Modelling in Evaluating a Working Life Project in Higher Education

    ERIC Educational Resources Information Center

    Sarja, Anneli; Janhonen, Sirpa; Havukainen, Pirjo; Vesterinen, Anne

    2012-01-01

    This article describes an evaluation method based on collaboration between a higher education institution, a care home and a university in an R&D project. The aim of the project was to elaborate modelling as a tool of developmental evaluation for innovation and competence in project cooperation. The approach was based on activity theory. Modelling enabled a…

  1. Evaluating the Accessibility of Web-Based Instruction for Students with Disabilities.

    ERIC Educational Resources Information Center

    Hinn, D. Michelle

    This paper presents the methods and results of a year-long evaluation study, conducted for the purpose of determining disability accessibility barriers and potential solutions for those barriers found in four World Wide Web-based learning environments. The primary questions used to frame the evaluation study were: (1) Are there any features of the…

  2. Classroom Teacher's Performance-Based Evaluation Form (CTPBEF) for Public Education Schools in the State of Kuwait: A Framework

    ERIC Educational Resources Information Center

    Al-Shammari, Zaid; Yawkey, Thomas D.

    2008-01-01

    This investigation using Grounded Theory focuses on developing, designing and testing out an evaluation method used as a framework for this study. This framework evolved into the instrument entitled, "Classroom Teacher's Performance Based Evaluation Form (CTPBEF)". This study shows the processes and procedures used in CTPBEF's…

  3. Selection and Evaluation of Priority Domains in Global Energy Internet Standard Development Based on Technology Foresight

    NASA Astrophysics Data System (ADS)

    Jin, Yang; Ciwei, Gao; Jing, Zhang; Min, Sun; Jie, Yu

    2017-05-01

    The selection and evaluation of priority domains in Global Energy Internet standard development will help to overcome the limits of national investment, so that priority can be given to standardizing the technical areas with the highest urgency and feasibility. Therefore, in this paper, a Delphi survey process based on technology foresight is put forward, an evaluation index system for priority domains is established, and the index calculation method is determined. Afterwards, statistical methods are used to evaluate the alternative domains. Finally the top four priority domains are determined as follows: Interconnected Network Planning and Simulation Analysis, Interconnected Network Safety Control and Protection, Intelligent Power Transmission and Transformation, and Internet of Things.

  4. Projection pursuit water quality evaluation model based on chicken swarm algorithm

    NASA Astrophysics Data System (ADS)

    Hu, Zhe

    2018-03-01

    In view of the uncertainty and ambiguity of each index in water quality evaluation, and in order to resolve the incompatibility of evaluation results across individual water quality indexes, a projection pursuit model based on the chicken swarm algorithm is proposed. A projection index function which can reflect the water quality condition is constructed, the chicken swarm algorithm (CSA) is introduced to optimize the projection index function and seek its best projection direction, and the best projection values are obtained to realize the water quality evaluation. The comparison between this method and other methods shows that it is reasonable and feasible, and provides a decision-making basis for water pollution control in the basin.

  5. Inter-rater Agreement of End-of-shift Evaluations Based on a Single Encounter

    PubMed Central

    Warrington, Steven; Beeson, Michael; Bradford, Amber

    2017-01-01

    Introduction End-of-shift evaluation (ESE) forms, also known as daily encounter cards, represent a subset of encounter-based assessment forms. Encounter cards have become prevalent for formative evaluation, with some suggesting a potential for summative evaluation. Our objective was to evaluate the inter-rater agreement of ESE forms using a single scripted encounter at a conference of emergency medicine (EM) educators. Methods Following institutional review board exemption, we created a scripted video simulating an encounter between an intern and a patient with an ankle injury. That video was shown during a lecture at the Council of EM Residency Directors' Academic Assembly, with attendees asked to evaluate the "resident" using one of eight possible ESE forms distributed at random. Descriptive statistics were used to analyze the results, with Fleiss' kappa used to evaluate inter-rater agreement. Results Most of the 324 respondents held leadership roles in residency programs (66%), with a range of 29–47 responses per evaluation form. Few individuals (5%) felt they were experts in assessing residents based on EM milestones. Fleiss' kappa ranged from 0.157 to 0.308 and did not improve much in two post-hoc subgroup analyses. Conclusion The kappa range found shows only slight to fair inter-rater agreement and raises concerns about the use of ESE forms in the assessment of EM residents. Despite the limitations of this study, these results and the lack of other studies on inter-rater agreement of encounter cards should prompt further studies of such methods of assessment. Additionally, EM educators should focus research on methods to improve the inter-rater agreement of ESE forms or on evaluating other methods of assessment of EM residents. PMID:28435505
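
    Fleiss' kappa, the agreement statistic used above, can be computed directly from a table of how many raters assigned each case to each category. A minimal sketch with hypothetical ratings follows; the numbers are illustrative, not the study data.

      def fleiss_kappa(counts):
          """Fleiss' kappa for a table with one row per case, one column per category,
          each cell holding the number of raters who chose that category."""
          n_cases = len(counts)
          n_raters = sum(counts[0])            # assumed constant across cases
          n_categories = len(counts[0])
          p_j = [sum(row[j] for row in counts) / (n_cases * n_raters)
                 for j in range(n_categories)]
          p_i = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
                 for row in counts]
          p_bar = sum(p_i) / n_cases
          p_expected = sum(p * p for p in p_j)
          return (p_bar - p_expected) / (1 - p_expected)

      # Hypothetical table: 4 cases rated by 10 evaluators into 3 performance levels
      ratings = [[8, 2, 0],
                 [3, 5, 2],
                 [1, 6, 3],
                 [0, 2, 8]]
      print(round(fleiss_kappa(ratings), 3))   # about 0.25: fair agreement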

  6. Research on the evaluation method of rural hollowing based on RS and GIS technology: a case study of the Ningxia Hui autonomous region in China

    NASA Astrophysics Data System (ADS)

    Yin, Kai; Wen, MeiPing; Zhang, FeiFei; Yuan, Chao; Chen, Qiang; Zhang, Xiupeng

    2016-10-01

    With the acceleration of urbanization in China, a widespread phenomenon has emerged in most rural areas: impoverished villages, loss of the labor population, land abandonment and rural hollowing, which together constitute the hollow village problem unique to China. The governance of hollow villages is an objective need of rural economic and social development for the Chinese government, and research on evaluation methods for rural hollowing is the premise and basis of hollow village governance. In this paper, several evaluation methods were used to assess rural hollowing based on survey data, land use data, and social and economic development data. The evaluation indexes were the transition of homesteads, the development intensity of rural residential areas, the per capita housing construction area, the residential population proportion in rural areas, and the average annual electricity consumption, which reflect the degree of rural hollowing from the land, population, and economy points of view, respectively. After that, GIS spatial analysis was used to analyze the evaluation result for each index. Based on spatial raster data generated by Kriging interpolation, we re-classified all the results. Using the fuzzy clustering method, the rural hollowing degree in the Ningxia area was reclassified on the two spatial scales of county and village. The results showed that the rural hollowing pattern in the Ningxia Hui Autonomous Region had a spatial distribution in which the degree of hollowing was markedly high in the middle of the study area and low around its periphery. On the county scale, serious rural hollowing manifested as a higher degree of extensive land use and lower levels of rural economic development and population transfer concentration. On the village scale, the main manifestations of rural hollowing were rural population loss and idle land. The evaluation method of rural hollowing constructed in this paper can effectively carry out comprehensive zoning of the degree of rural hollowing, which can support orderly decision-making plans for hollow village governance by the government.

  7. A primitive study of voxel feature generation by multiple stacked denoising autoencoders for detecting cerebral aneurysms on MRA

    NASA Astrophysics Data System (ADS)

    Nemoto, Mitsutaka; Hayashi, Naoto; Hanaoka, Shouhei; Nomura, Yukihiro; Miki, Soichiro; Yoshikawa, Takeharu; Ohtomo, Kuni

    2016-03-01

    The purpose of this study is to evaluate the feasibility of a novel feature generation method, which is based on multiple deep neural networks (DNNs) with boosting, for computer-assisted detection (CADe). It is hard and time-consuming to optimize the hyperparameters for DNNs such as the stacked denoising autoencoder (SdA). The proposed method allows using SdA-based features without the burden of hyperparameter setting. The proposed method was evaluated through an application for detecting cerebral aneurysms on magnetic resonance angiograms (MRA). A baseline CADe process included four components: scaling, candidate area limitation, candidate detection, and candidate classification. The proposed feature generation method was applied to extract the optimal features for candidate classification and only required setting the range of the hyperparameters for the SdA. The optimal feature set was selected from a large quantity of SdA-based features generated by multiple SdAs, each of which was trained using a different hyperparameter set. The feature selection was performed through the AdaBoost ensemble learning method. Training of the baseline CADe process and the proposed feature generation was performed with 200 MRA cases, and the evaluation was performed with 100 MRA cases. The proposed method successfully provided SdA-based features by just setting the range of some hyperparameters for the SdA. The CADe process using both previous voxel features and SdA-based features had the best performance, with an area under the ROC curve of 0.838 and an ANODE score of 0.312. The results showed that the proposed method was effective in the application for detecting cerebral aneurysms on MRA.

  8. Testing For EM Upsets In Aircraft Control Computers

    NASA Technical Reports Server (NTRS)

    Belcastro, Celeste M.

    1994-01-01

    Effects of transient electrical signals evaluated in laboratory tests. Method of evaluating nominally fault-tolerant, aircraft-type digital-computer-based control system devised. Provides for evaluation of susceptibility of system to upset and evaluation of integrity of control when system subjected to transient electrical signals like those induced by electromagnetic (EM) source, in this case lightning. Beyond aerospace applications, fault-tolerant control systems becoming more widespread in industry, such as in automobiles. Method supports practical, systematic tests for evaluation of designs of fault-tolerant control systems.

  9. The accuracy of a designed software for automated localization of craniofacial landmarks on CBCT images.

    PubMed

    Shahidi, Shoaleh; Bahrampour, Ehsan; Soltanimehr, Elham; Zamani, Ali; Oshagh, Morteza; Moattari, Marzieh; Mehdizadeh, Alireza

    2014-09-16

    Two-dimensional projection radiographs have been traditionally considered the modality of choice for cephalometric analysis. To overcome the shortcomings of two-dimensional images, three-dimensional computed tomography (CT) has been used to evaluate craniofacial structures. However, manual landmark detection depends on medical expertise, and the process is time-consuming. The present study was designed to produce software capable of automated localization of craniofacial landmarks on cone beam (CB) CT images based on image registration and to evaluate its accuracy. The software was designed using MATLAB programming language. The technique was a combination of feature-based (principal axes registration) and voxel similarity-based methods for image registration. A total of 8 CBCT images were selected as our reference images for creating a head atlas. Then, 20 CBCT images were randomly selected as the test images for evaluating the method. Three experts twice located 14 landmarks in all 28 CBCT images during two examinations set 6 weeks apart. The differences in the distances of coordinates of each landmark on each image between manual and automated detection methods were calculated and reported as mean errors. The combined intraclass correlation coefficient for intraobserver reliability was 0.89 and for interobserver reliability 0.87 (95% confidence interval, 0.82 to 0.93). The mean errors of all 14 landmarks were <4 mm. Additionally, 63.57% of landmarks had a mean error of <3 mm compared with manual detection (gold standard method). The accuracy of our approach for automated localization of craniofacial landmarks, which was based on combining feature-based and voxel similarity-based methods for image registration, was acceptable. Nevertheless we recommend repetition of this study using other techniques, such as intensity-based methods.
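
    The registration approach above combines a feature-based (principal axes) step with a voxel-similarity step. The sketch below illustrates only the principal-axes idea, on 3-D point clouds rather than CBCT volumes, which is a simplifying assumption; the voxel-similarity refinement is omitted.

```python
# A minimal sketch of principal-axes registration on point clouds.
# Assumption: eigenvector sign ambiguity (possible reflections) is ignored here.
import numpy as np

def principal_axes(points):
    """Centroid and covariance eigenvectors (principal axes) of an N x 3 cloud."""
    c = points.mean(axis=0)
    _, vecs = np.linalg.eigh(np.cov((points - c).T))
    return c, vecs                              # columns ordered by eigenvalue

def principal_axes_register(moving, fixed):
    """Rigidly map 'moving' onto 'fixed' by aligning centroids and principal axes."""
    cm, vm = principal_axes(moving)
    cf, vf = principal_axes(fixed)
    R = vf @ vm.T                               # rotation taking moving axes to fixed axes
    return (moving - cm) @ R.T + cf

rng = np.random.default_rng(1)
fixed = rng.normal(size=(200, 3)) * [3.0, 2.0, 1.0]   # anisotropic toy cloud
moving = fixed @ np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]]).T + [5.0, -2.0, 1.0]
print(principal_axes_register(moving, fixed).shape)
```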

  10. Performance evaluation of structure based and ligand based virtual screening methods on ten selected anti-cancer targets.

    PubMed

    Ramasamy, Thilagavathi; Selvam, Chelliah

    2015-10-15

    Virtual screening has become an important tool in the drug discovery process. Structure-based and ligand-based approaches are generally used in virtual screening. To date, several benchmark sets for evaluating the performance of virtual screening tools are available. In this study, our aim was to compare the performance of structure-based and ligand-based virtual screening methods. Ten anti-cancer targets and their corresponding benchmark sets from the 'Demanding Evaluation Kits for Objective In silico Screening' (DEKOIS) library were selected. X-ray crystal structures of protein-ligand complexes were selected based on their resolution. OpenEye tools such as FRED and vROCS were used, and the results were carefully analyzed. At EF1%, vROCS produced better results, but at EF5% and EF10% both FRED and ROCS produced similar results. It was noticed that the enrichment factor values decreased when going from EF1% to EF5% and EF10% in many cases. Published by Elsevier Ltd.
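
    The enrichment factor (EF) quoted at 1%, 5% and 10% has a standard definition: the hit rate in the top fraction of the ranked list divided by the hit rate expected by chance. A minimal sketch with made-up numbers:

```python
# Minimal sketch of the enrichment factor metric for a ranked screening list.
def enrichment_factor(ranked_is_active, fraction):
    """ranked_is_active: sequence of booleans sorted by screening score, best first."""
    n_total = len(ranked_is_active)
    n_actives = sum(ranked_is_active)
    n_top = max(1, round(fraction * n_total))
    hits_top = sum(ranked_is_active[:n_top])
    return (hits_top / n_top) / (n_actives / n_total)

# Toy example: 1000 ranked compounds, 50 actives, 8 of them among the top 10 (top 1%).
ranking = [True] * 8 + [False] * 2 + [True] * 42 + [False] * 948
print(enrichment_factor(ranking, 0.01))   # (8/10) / (50/1000) = 16.0
```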

  11. An Approach to the Evaluation of Hypermedia.

    ERIC Educational Resources Information Center

    Knussen, Christina; And Others

    1991-01-01

    Discusses methods that may be applied to the evaluation of hypermedia, based on six models described by Lawton. Techniques described include observation, self-report measures, interviews, automated measures, psychometric tests, checklists and criterion-based techniques, process models, Experimentally Measuring Usability (EMU), and a naturalistic…

  12. Evaluation of Visibility Sensors at the Eglin Air Force Base Climatic Chamber

    DOT National Transportation Integrated Search

    1983-10-01

    Three transmissometers and five forward-scatter meters were evaluated for measuring fog, haze, rain and snow in the large test chamber of the Eglin Air Force Base Climatic Laboratory. Methods were developed for generating moderately uniform and stabl...

  13. Selection of Polynomial Chaos Bases via Bayesian Model Uncertainty Methods with Applications to Sparse Approximation of PDEs with Stochastic Inputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karagiannis, Georgios; Lin, Guang

    2014-02-15

    Generalized polynomial chaos (gPC) expansions allow the representation of the solution of a stochastic system as a series of polynomial terms. The number of gPC terms increases dramatically with the dimension of the random input variables. When the number of gPC terms is larger than that of the available samples, a scenario that often occurs if the evaluations of the system are expensive, the evaluation of the gPC expansion can be inaccurate due to over-fitting. We propose a fully Bayesian approach that allows for global recovery of the stochastic solution, in both the spatial and random domains, by coupling Bayesian model uncertainty and regularization regression methods. It allows the evaluation of the PC coefficients on a grid of spatial points via (1) Bayesian model averaging or (2) the median probability model, and their construction as functions on the spatial domain via spline interpolation. The former accounts for model uncertainty and provides Bayes-optimal predictions, while the latter additionally provides a sparse representation of the solution by evaluating the expansion on a subset of dominating gPC bases. Moreover, the method quantifies the importance of the gPC bases through inclusion probabilities. We design an MCMC sampler that evaluates all the unknown quantities without the need for ad hoc techniques. The proposed method is suitable for, but not restricted to, problems whose stochastic solution is sparse at the stochastic level with respect to the gPC bases while the deterministic solver involved is expensive. We demonstrate the good performance of the proposed method and make comparisons with others on 1D, 14D and 40D elliptic stochastic partial differential equations in random space.

  14. Protein contact prediction by integrating deep multiple sequence alignments, coevolution and machine learning.

    PubMed

    Adhikari, Badri; Hou, Jie; Cheng, Jianlin

    2018-03-01

    In this study, we report the evaluation of the residue-residue contacts predicted by our three different methods in the CASP12 experiment, focusing on the impact of multiple sequence alignment, residue coevolution, and machine learning on contact prediction. The first method (MULTICOM-NOVEL) uses only traditional features (sequence profile, secondary structure, and solvent accessibility) with deep learning to predict contacts and serves as a baseline. The second method (MULTICOM-CONSTRUCT) uses our new alignment algorithm to generate deep multiple sequence alignments from which coevolution-based features are derived; these are integrated by a neural network method to predict contacts. The third method (MULTICOM-CLUSTER) is a consensus combination of the predictions of the first two methods. We evaluated our methods on 94 CASP12 domains. On a subset of 38 free-modeling domains, our methods achieved an average precision of up to 41.7% for top L/5 long-range contact predictions. The comparison of the three methods shows that the quality and effective depth of the multiple sequence alignments, the coevolution-based features, and the machine-learning integration of coevolution-based and traditional features drive the quality of predicted protein contacts. On the full CASP12 dataset, the coevolution-based features alone improve the average precision from 28.4% to 41.6%, and the machine-learning integration of all the features further raises the precision to 56.3%, when the top L/5 predicted long-range contacts are evaluated. The correlation between the precision of contact prediction and the logarithm of the number of effective sequences in the alignments is 0.66. © 2017 Wiley Periodicals, Inc.
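
    The headline metric above, precision of the top L/5 predicted long-range contacts, can be computed from a predicted contact score map and a true contact map. The sketch below uses synthetic maps and a 24-residue separation cutoff, a common convention assumed here rather than taken from the paper.

```python
# Minimal sketch of top-L/5 long-range contact precision on toy contact maps.
import numpy as np

def top_l5_longrange_precision(pred_scores, true_contacts, seq_len, min_sep=24):
    """pred_scores, true_contacts: L x L arrays (scores and 0/1 contacts)."""
    pairs = [(i, j) for i in range(seq_len) for j in range(i + min_sep, seq_len)]
    pairs.sort(key=lambda p: pred_scores[p], reverse=True)   # highest confidence first
    top = pairs[:max(1, seq_len // 5)]
    return float(np.mean([true_contacts[p] for p in top]))

rng = np.random.default_rng(0)
L = 100
truth = (rng.random((L, L)) < 0.02).astype(float)
truth = np.triu(truth) + np.triu(truth, 1).T                  # symmetric toy contact map
scores = 0.7 * truth + 0.3 * rng.random((L, L))               # noisy toy predictions
print(top_l5_longrange_precision(scores, truth, L))
```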

  15. Validation of a Smartphone Image-Based Dietary Assessment Method for Pregnant Women

    PubMed Central

    Ashman, Amy M.; Collins, Clare E.; Brown, Leanne J.; Rae, Kym M.; Rollo, Megan E.

    2017-01-01

    Image-based dietary records could lower participant burden associated with traditional prospective methods of dietary assessment. They have been used in children, adolescents and adults, but have not been evaluated in pregnant women. The current study evaluated relative validity of the DietBytes image-based dietary assessment method for assessing energy and nutrient intakes. Pregnant women collected image-based dietary records (via a smartphone application) of all food, drinks and supplements consumed over three non-consecutive days. Intakes from the image-based method were compared to intakes collected from three 24-h recalls, taken on random days; once per week, in the weeks following the image-based record. Data were analyzed using nutrient analysis software. Agreement between methods was ascertained using Pearson correlations and Bland-Altman plots. Twenty-five women (27 recruited, one withdrew, one incomplete), median age 29 years, 15 primiparas, eight Aboriginal Australians, completed image-based records for analysis. Significant correlations between the two methods were observed for energy, macronutrients and fiber (r = 0.58–0.84, all p < 0.05), and for micronutrients both including (r = 0.47–0.94, all p < 0.05) and excluding (r = 0.40–0.85, all p < 0.05) supplements in the analysis. Bland-Altman plots confirmed acceptable agreement with no systematic bias. The DietBytes method demonstrated acceptable relative validity for assessment of nutrient intakes of pregnant women. PMID:28106758
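
    The agreement analysis described above pairs Pearson correlations with Bland-Altman bias and limits of agreement. The sketch below shows both on toy intake values; the numbers are illustrative only, not the study data.

```python
# Minimal sketch of Pearson correlation plus Bland-Altman agreement statistics.
import numpy as np

def bland_altman(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% limits of agreement

image_based = [8.2, 9.1, 7.5, 10.3, 8.8]      # e.g. energy intake, MJ/day (toy values)
recall_based = [8.0, 9.5, 7.9, 10.0, 8.5]
r = np.corrcoef(image_based, recall_based)[0, 1]
bias, loa = bland_altman(image_based, recall_based)
print(f"r = {r:.2f}, bias = {bias:.2f}, LoA = ({loa[0]:.2f}, {loa[1]:.2f})")
```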

  16. Evaluation and Ranking of Researchers – Bh Index

    PubMed Central

    Bharathi, D. Gnana

    2013-01-01

    Evaluation and ranking of authors is crucial, as it is widely used to assess researcher performance. This article proposes a new method, called the Bh-Index, to evaluate researchers based on their publications and citations. The method is built on the h-Index, and only the h-core articles are taken into consideration. It assigns additional value to those articles that receive significantly more citations than the researcher's h-Index. It provides a wide range of values for a given h-Index and effective evaluation even over a short period. Using the Bh-Index along with the h-Index gives a powerful tool for evaluating researchers. PMID:24349183
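
    The Bh-Index builds on the h-Index and its h-core. The sketch below computes the h-index and h-core and adds a hypothetical bonus for h-core papers cited well above h; the abstract does not give the exact Bh-Index formula, so the bonus rule here is only an assumption.

```python
# Minimal sketch of the h-index, the h-core, and a hypothetical value-addition rule.
def h_index(citations):
    ranked = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(ranked, start=1) if c >= i)

def h_core(citations):
    return sorted(citations, reverse=True)[:h_index(citations)]

def bh_like_score(citations, factor=2.0):
    """Hypothetical: +1 for each h-core paper cited at least factor * h times."""
    h = h_index(citations)
    return h + sum(1 for c in h_core(citations) if c >= factor * h)

cites = [55, 30, 18, 12, 9, 6, 4, 2, 1]
print(h_index(cites), bh_like_score(cites))   # h = 6; four h-core papers have >= 12 citations, so 10
```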

  17. Economic evaluation of environmental epidemiological projects in national industrial complexes.

    PubMed

    Shin, Youngchul

    2017-01-01

    In this economic evaluation of environmental epidemiological monitoring projects, we analyzed the economic feasibility of these projects by determining the social cost and benefit of these projects and conducting a cost/benefit analysis. Here, the social cost was evaluated by converting annual budgets for these research and survey projects into present values. Meanwhile, the societal benefit of these projects was evaluated by using the contingent valuation method to estimate the willingness-to-pay of residents living in or near industrial complexes. In addition, the extent to which these projects reduced negative health effects (i.e., excess disease and premature death) was evaluated through expert surveys, and the analysis was conducted to reflect the unit of economic value, based on the cost of illness and benefit transfer method. The results were then used to calculate the benefit of these projects in terms of the decrease in negative health effects. For residents living near industrial complexes, the benefit/cost ratio was 1.44 in the analysis based on resident surveys and 5.17 in the analysis based on expert surveys. Thus, whichever method was used for the economic analysis, the economic feasibility of these projects was confirmed.

  18. A framework for the evaluation of patient information leaflets

    PubMed Central

    Garner, Mark; Ning, Zhenye; Francis, Jill

    2011-01-01

    Background: The provision of patient information leaflets (PILs) is an important part of health care. PILs require evaluation, but the frameworks used for evaluation are largely under-informed by theory. Most evaluation to date has been based on indices of readability, yet several writers argue that readability is not enough. We propose a framework for evaluating PILs that reflects the central role of the patient perspective in communication and uses methods for evaluation based on simple linguistic principles. The proposed framework: The framework has three elements that give rise to three approaches to evaluation. Each element is a necessary but not sufficient condition for effective communication. Readability (focusing on the text) may be assessed using existing well-established procedures. Comprehensibility (focusing on reader and text) may be assessed using multiple-choice questions based on the lexical and semantic features of the text. Communicative effectiveness (focusing on the reader) explores the relationship between the emotional, cognitive and behavioural responses of the reader and the objectives of the PIL. Suggested methods for assessment are described, based on our preliminary empirical investigations. Conclusions: The tripartite model of communicative effectiveness is a patient-centred framework for evaluating PILs. It may assist the field in moving beyond readability to broader indicators of the quality and appropriateness of printed information provided to patients. PMID:21332620

  19. Evaluation of passenger health risk assessment of sustainable indoor air quality monitoring in metro systems based on a non-Gaussian dynamic sensor validation method.

    PubMed

    Kim, MinJeong; Liu, Hongbin; Kim, Jeong Tai; Yoo, ChangKyoo

    2014-08-15

    Sensor faults in metro systems provide incorrect information to indoor air quality (IAQ) ventilation systems, resulting in the mis-operation of the ventilation systems and adverse effects on passenger health. In this study, a new sensor validation method is proposed to (1) detect, identify and repair sensor faults and (2) evaluate the influence of sensor reliability on passenger health risk. To address the dynamic non-Gaussianity of IAQ data, dynamic independent component analysis (DICA) is used. To detect and identify sensor faults, the DICA-based squared prediction error and sensor validity index are used, respectively. To restore the faulty readings to normal measurements, a DICA-based iterative reconstruction algorithm is proposed. The comprehensive indoor air-quality index (CIAI), which evaluates the influence of the current IAQ on passenger health, is then compared between the faulty and reconstructed IAQ data sets. Experimental results from a metro station showed that the DICA-based method can produce an improved IAQ level in the metro station and reduce passenger health risk, since it validates sensor faults more accurately than conventional methods. Copyright © 2014 Elsevier B.V. All rights reserved.
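
    The fault-detection step described above relies on a dynamic ICA model and a squared prediction error (SPE) statistic. The sketch below is a much-simplified stand-in: a one-step lagged data matrix, scikit-learn's FastICA in place of the authors' DICA, and an empirical SPE control limit; the data are synthetic.

```python
# Minimal sketch of ICA-based sensor fault detection via the squared prediction error.
# Assumptions: FastICA replaces the paper's DICA, the "dynamic" part is a one-step lag,
# and the control limit is a simple empirical percentile.
import numpy as np
from sklearn.decomposition import FastICA

def lagged(X, lag=1):
    """Augment each row with the previous 'lag' rows (a simple dynamic data matrix)."""
    return np.hstack([X[lag - k: len(X) - k] for k in range(lag + 1)])

rng = np.random.default_rng(0)
normal = rng.normal(size=(500, 6))          # normal-operation data from 6 IAQ sensors (toy)
faulty = normal.copy()
faulty[300:, 2] += 4.0                      # injected bias fault on sensor 2

ica = FastICA(n_components=3, random_state=0)
ica.fit(lagged(normal))

def spe(X):
    Z = lagged(X)
    return np.sum((Z - ica.inverse_transform(ica.transform(Z))) ** 2, axis=1)

limit = np.percentile(spe(normal), 99)      # empirical control limit from normal data
print("flagged fraction:", (spe(faulty) > limit).mean())
```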

  20. Deviation-based spam-filtering method via stochastic approach

    NASA Astrophysics Data System (ADS)

    Lee, Daekyung; Lee, Mi Jin; Kim, Beom Jun

    2018-03-01

    In the presence of a huge number of possible purchase choices, ranks or ratings of items by others often play a very important role for a buyer making a final purchase decision. Perfectly objective rating is impossible to achieve, and we often rely on an average rating built from how previous buyers estimated the quality of the product. The problem with using a simple average rating is that it can easily be polluted by careless users whose evaluation of products cannot be trusted, and by malicious spammers who try to bias the rating result on purpose. In this letter we suggest how the trustworthiness of individual users can be systematically and quantitatively reflected to build a more reliable rating system. We compute a suitably defined reliability of each user based on the user's rating pattern across all products she evaluated. We call our proposed method the deviation-based ranking, since the statistical significance of each user's rating pattern with respect to the average rating pattern is the key ingredient. We find that our deviation-based ranking method outperforms existing methods in filtering out careless random evaluators as well as malicious spammers.
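
    A deviation-based weighting of raters can be sketched as follows: each user's weight shrinks with the average deviation of their ratings from the item means, and item scores are recomputed as weighted means. The weighting function used here is an assumption for illustration, not the formula from the letter.

```python
# Minimal sketch of deviation-based reliability weighting for a toy rating matrix.
import numpy as np

ratings = np.array([            # rows = users, columns = items, 0 = no rating
    [5, 4, 0, 3],
    [4, 4, 5, 3],
    [1, 1, 1, 5],               # a spammer-like rating pattern
    [5, 5, 4, 0],
], dtype=float)
mask = ratings > 0

item_mean = ratings.sum(0) / np.maximum(mask.sum(0), 1)
dev = np.where(mask, np.abs(ratings - item_mean), 0).sum(1) / mask.sum(1)
reliability = 1.0 / (1.0 + dev)             # assumed weighting function, not the paper's

weighted = (reliability[:, None] * ratings * mask).sum(0) / \
           (reliability[:, None] * mask).sum(0)
print(np.round(item_mean, 2), np.round(weighted, 2))
```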

  1. Development and evaluation of a method for calculating the Healthy Eating Index-2005 using the Nutrition Data System for Research

    USDA-ARS?s Scientific Manuscript database

    Objective: To develop and evaluate a method for calculating the Healthy Eating Index-2005 (HEI-2005) with the widely used Nutrition Data System for Research (NDSR) based on the method developed for use with the US Department of Agriculture’s (USDA) Food and Nutrient Dietary Data System (FNDDS) and M...

  2. Assessing Change in the Teaching Practice of Faculty in a Faculty Development Program for Primary Care Physicians: Toward a Mixed Method Evaluation Approach.

    ERIC Educational Resources Information Center

    Pinheiro, Sandro O.; Rohrer, Jonathan D.; Heimann, C. F. Larry

    This paper describes a mixed method evaluation study that was developed to assess faculty teaching behavior change in a faculty development fellowship program for community-based hospital faculty. Principles of adult learning were taught to faculty participants over the fellowship period. These included instruction in teaching methods, group…

  3. Reliability evaluation of microgrid considering incentive-based demand response

    NASA Astrophysics Data System (ADS)

    Huang, Ting-Cheng; Zhang, Yong-Jun

    2017-07-01

    Incentive-based demand response (IBDR) can guide customers to adjust their electricity consumption behaviour and curtail load actively. Meanwhile, distributed generation (DG) and energy storage systems (ESS) can provide time for the implementation of IBDR. The paper focuses on the reliability evaluation of a microgrid considering IBDR. Firstly, the mechanism of IBDR and its impact on power supply reliability are analysed. Secondly, the IBDR dispatch model considering the customer's comprehensive assessment and the customer response model are developed. Thirdly, a reliability evaluation method considering IBDR based on Monte Carlo simulation is proposed. Finally, the validity of the above models and method is studied through numerical tests on the modified RBTS Bus6 test system. Simulation results demonstrate that IBDR can improve the reliability of the microgrid.
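
    A sequential Monte Carlo reliability evaluation of the kind used above can be sketched by sampling failure and repair cycles over many simulated years and accumulating interruption indices. The component data and the simple demand-response effect below are illustrative assumptions, not the RBTS Bus6 test system.

```python
# Minimal sketch of sequential Monte Carlo reliability evaluation with a crude
# incentive-based demand response (DR) effect. All parameters are assumed toy values.
import numpy as np

rng = np.random.default_rng(0)
FAILURE_RATE = 0.5      # failures per year for the supply feeder (assumed)
REPAIR_HOURS = 4.0      # mean time to repair, hours (assumed)
DR_HOURS = 1.5          # hours of outage that DR can bridge (assumed)

def simulate(years=2000, dr=False):
    interruptions, outage_hours = 0, 0.0
    for _ in range(years):
        t = 0.0
        while True:
            t += rng.exponential(1.0 / FAILURE_RATE)   # time to next failure (years)
            if t >= 1.0:
                break
            down = rng.exponential(REPAIR_HOURS)
            if dr:
                down = max(0.0, down - DR_HOURS)       # DR covers part of the outage
            if down > 0:
                interruptions += 1
                outage_hours += down
    return interruptions / years, outage_hours / years  # SAIFI-like, SAIDI-like indices

print("without DR:", simulate())
print("with DR:   ", simulate(dr=True))
```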

  4. An Improved Image Ringing Evaluation Method with Weighted Sum of Gray Extreme Value

    NASA Astrophysics Data System (ADS)

    Yang, Ling; Meng, Yanhua; Wang, Bo; Bai, Xu

    2018-03-01

    Blind image restoration algorithms usually produce ringing that is most obvious at edges. The ringing phenomenon is mainly affected by noise, the type of restoration algorithm, and the blur kernel estimation error during restoration. Based on the physical mechanism of ringing, a method for evaluating the ringing in blindly restored images is proposed. The method extracts the overshoot and ripple regions of the ringing image and computes weighted statistics of the regional gradient values. The weights are set according to multiple experiments, and edge information is used to characterize the edge details, determine the weights, and quantify the severity of the ringing effect, yielding an evaluation method for the ringing caused by blind restoration. The experimental results show that the method can effectively evaluate the ringing effect in images restored by different restoration algorithms with different restoration parameters, and the evaluation results are consistent with visual evaluation.

  5. Field-based evaluation of a male-specific (F+) RNA coliphage concentration method

    EPA Science Inventory

    Fecal contamination of water poses a significant risk to public health due to the potential presence of pathogens, including enteric viruses. Thus, sensitive, reliable and easy to use methods for the detection of microorganisms are needed to evaluate water quality. In this stud...

  6. Surrogate Based Uni/Multi-Objective Optimization and Distribution Estimation Methods

    NASA Astrophysics Data System (ADS)

    Gong, W.; Duan, Q.; Huo, X.

    2017-12-01

    Parameter calibration has been demonstrated to be an effective way to improve the performance of dynamic models, such as hydrological models, land surface models, and weather and climate models. Traditional optimization algorithms usually require a huge number of model evaluations, making dynamic model calibration very difficult, or even computationally prohibitive. With the help of a series of recently developed adaptive surrogate-modelling based optimization methods (the uni-objective optimization method ASMO, the multi-objective optimization method MO-ASMO, and the probability distribution estimation method ASMO-PODE), the number of model evaluations can be significantly reduced to several hundred, making it possible to calibrate very expensive dynamic models, such as regional high-resolution land surface models, weather forecast models such as WRF, and intermediate-complexity earth system models such as LOVECLIM. This presentation provides a brief introduction to the common framework of the adaptive surrogate-based optimization algorithms ASMO, MO-ASMO and ASMO-PODE, a case study of Common Land Model (CoLM) calibration in the Heihe river basin in Northwest China, and an outlook on the potential applications of surrogate-based optimization methods.
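
    The common framework mentioned above alternates between fitting a surrogate to the evaluated samples and optimizing the surrogate to propose the next expensive model run. The sketch below follows that loop with a Gaussian-process surrogate and a toy objective; it is a generic ASMO-style loop under those assumptions, not the ASMO code itself.

```python
# Minimal sketch of an adaptive surrogate-based optimization loop.
# Assumptions: Gaussian-process surrogate, toy objective, random initial design.
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.gaussian_process import GaussianProcessRegressor

def expensive_model(x):                       # stand-in for one dynamic model run
    return float(np.sum((x - 0.3) ** 2) + 0.1 * np.sin(10 * x).sum())

bounds = [(0.0, 1.0)] * 3
rng = np.random.default_rng(0)
X = rng.random((15, 3))                       # initial design (e.g. Latin hypercube in practice)
y = np.array([expensive_model(x) for x in X])

for _ in range(20):                           # adaptive refinement iterations
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    res = differential_evolution(lambda x: gp.predict(x.reshape(1, -1))[0],
                                 bounds, seed=0, maxiter=50, polish=False)
    X = np.vstack([X, res.x])                 # evaluate the expensive model at the proposal
    y = np.append(y, expensive_model(res.x))

print("best parameters:", X[np.argmin(y)], "objective:", y.min())
```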

  7. Influence of inner circular sealing area impression method on the retention of complete dentures.

    PubMed

    Wang, Cun-Wei; Shao, Qi; Sun, Hui-Qiang; Mao, Meng-Yun; Zhang, Xin-Wei; Gong, Qi; Xiao, Guo-Ning

    2015-01-01

    The aims of the present study were to describe an impression method of "inner circular sealing area" and to evaluate the effect of the method on retention, aesthetics and comfort of complete dentures, which lack labial base for patients with maxillary protrusions. Three patients were subjected to the experiment, and two sets of complete maxillary dentures were made for each patient; the first set was made without labial base via an inner circular sealing area method (experimental group) and the second had an intact base that was made with conventional methods (control group). Retention force tests were implemented with a tensile strength assessment device to assess the retention and a visual analogue scale (VAS) was used to evaluate the comfort between the two groups. Results showed larger retention force, better aesthetics and more comfort in the experimental group. The improved two-step impression method formed an inner circular sealing area that prevented damage to the peripheral border seal effect of the denture caused by incomplete bases and obtained better denture retention.

  8. Assessing and Evaluating Multidisciplinary Translational Teams: A Mixed Methods Approach

    PubMed Central

    Wooten, Kevin C.; Rose, Robert M.; Ostir, Glenn V.; Calhoun, William J.; Ameredes, Bill T.; Brasier, Allan R.

    2014-01-01

    A case report illustrates how multidisciplinary translational teams can be assessed using outcome, process, and developmental types of evaluation using a mixed methods approach. Types of evaluation appropriate for teams are considered in relation to relevant research questions and assessment methods. Logic models are applied to scientific projects and team development to inform choices between methods within a mixed methods design. Use of an expert panel is reviewed, culminating in consensus ratings of 11 multidisciplinary teams and a final evaluation within a team type taxonomy. Based on team maturation and scientific progress, teams were designated as: a) early in development, b) traditional, c) process focused, or d) exemplary. Lessons learned from data reduction, use of mixed methods, and use of expert panels are explored. PMID:24064432

  9. Task-based evaluation of segmentation algorithms for diffusion-weighted MRI without using a gold standard

    PubMed Central

    Jha, Abhinav K.; Kupinski, Matthew A.; Rodríguez, Jeffrey J.; Stephen, Renu M.; Stopeck, Alison T.

    2012-01-01

    In many studies, the estimation of the apparent diffusion coefficient (ADC) of lesions in visceral organs in diffusion-weighted (DW) magnetic resonance images requires an accurate lesion-segmentation algorithm. To evaluate these lesion-segmentation algorithms, region-overlap measures are used currently. However, the end task from the DW images is accurate ADC estimation, and the region-overlap measures do not evaluate the segmentation algorithms on this task. Moreover, these measures rely on the existence of gold-standard segmentation of the lesion, which is typically unavailable. In this paper, we study the problem of task-based evaluation of segmentation algorithms in DW imaging in the absence of a gold standard. We first show that using manual segmentations instead of gold-standard segmentations for this task-based evaluation is unreliable. We then propose a method to compare the segmentation algorithms that does not require gold-standard or manual segmentation results. The no-gold-standard method estimates the bias and the variance of the error between the true ADC values and the ADC values estimated using the automated segmentation algorithm. The method can be used to rank the segmentation algorithms on the basis of both accuracy and precision. We also propose consistency checks for this evaluation technique. PMID:22713231

  10. Community Currency Trading Method through Partial Transaction Intermediary Process

    NASA Astrophysics Data System (ADS)

    Kido, Kunihiko; Hasegawa, Seiichi; Komoda, Norihisa

    A community currency is local money issued by local governments or Non-Profit Organizations (NPOs) to support social services. The purpose of introducing community currencies is to regenerate communities by fostering mutual aid among community members. In this paper, we propose a community currency trading method with a partial intermediary process, for operational environments that do not have coordinators available all the time. In this method, coordinators mediate between service users and service providers during the first several months of transactions. After this coordination period, participants spontaneously make transactions based on their trust area and a trust evaluation method based on the number of provided services and complaint information. This method is especially effective for communities with close social networks and low trustworthiness. The proposed method is evaluated through multi-agent simulation.

  11. Potential applicability of stress wave velocity method on pavement base materials as a non-destructive testing technique

    NASA Astrophysics Data System (ADS)

    Mahedi, Masrur

    Aggregates derived from natural sources have traditionally been used as pavement base materials. In recent times, however, the extraction of these natural aggregates has become more labor intensive and costly due to resource depletion and environmental concerns. Thus, the use of recycled aggregates to supplement natural aggregates is increasing considerably in pavement construction. The use of recycled aggregates such as recycled crushed concrete (RCA) and recycled asphalt pavement (RAP) reduces the rate of natural resource depletion, construction debris and cost. Although recycled aggregates could be a viable alternative to conventional base materials, strength characteristics and product variability limit their utility to a great extent. Hence, their applicability needs to be evaluated extensively based on strength, stiffness and cost factors. For such extensive evaluation, the traditionally practiced test methods have proven unreasonable in terms of time, cost, reliability and applicability. Rapid non-destructive methods, on the other hand, have the potential to be less time consuming and inexpensive, with lower variability of test results, thereby improving the reliability of the estimated pavement performance. In this research work, the experimental program was designed to assess the potential application of the stress wave velocity method as a non-destructive test for evaluating recycled base materials. Different combinations of cement-treated recycled asphalt pavement (RAP) and recycled crushed concrete (RCA) were used to evaluate the applicability of the stress wave velocity method. It was found that the stress wave velocity method is excellent for characterizing the strength and stiffness properties of cement-treated base materials. Statistical models based on P-wave velocity were derived for predicting the modulus of elasticity and compressive strength of different combinations of cement-treated RAP, Grade-1 and Grade-2 materials. Two-, three- and four-parameter modeling was also performed to characterize the resilient modulus response. It is anticipated that the derived correlations can be useful in estimating the strength and stiffness response of cement-treated base materials with a satisfactory level of confidence, provided the P-wave velocity remains within the range of 500 ft/sec to 1500 ft/sec.

  12. Retinal status analysis method based on feature extraction and quantitative grading in OCT images.

    PubMed

    Fu, Dongmei; Tong, Hejun; Zheng, Shuang; Luo, Ling; Gao, Fulin; Minar, Jiri

    2016-07-22

    Optical coherence tomography (OCT) is widely used in ophthalmology for viewing the morphology of the retina, which is important for disease detection and assessing therapeutic effect. The diagnosis of retinal diseases is based primarily on the subjective analysis of OCT images by trained ophthalmologists. This paper describes an automatic OCT image analysis method for computer-aided disease diagnosis, a critical part of eye fundus diagnosis. The study analyzed 300 OCT images acquired by an Optovue Avanti RTVue XR (Optovue Corp., Fremont, CA). Firstly, a normal retinal reference model based on retinal boundaries was presented. Subsequently, two kinds of quantitative methods, based on geometric features and morphological features, were proposed. The paper puts forward a retinal abnormality grading decision-making method that was used in the actual analysis and evaluation of multiple OCT images, and shows the detailed analysis process for four retinal OCT images with different degrees of abnormality. The final grading results verified that the analysis method can distinguish abnormal severity and lesion regions. In a simulation on 150 test images, the analysis of retinal status achieved a sensitivity of 0.94 and a specificity of 0.92. The proposed method can speed up the diagnostic process and objectively evaluate retinal status. This paper focuses on a retinal status automatic analysis method based on feature extraction and quantitative grading in OCT images. The proposed method can obtain the parameters and features associated with retinal morphology; quantitative analysis and evaluation of these features, combined with the reference model, realize abnormality judgment for the target image and provide a reference for disease diagnosis.

  13. Evaluation of finite difference and FFT-based solutions of the transport of intensity equation.

    PubMed

    Zhang, Hongbo; Zhou, Wen-Jing; Liu, Ying; Leber, Donald; Banerjee, Partha; Basunia, Mahmudunnabi; Poon, Ting-Chung

    2018-01-01

    A finite difference method is proposed for solving the transport of intensity equation. Simulation results show that although slower than fast Fourier transform (FFT)-based methods, finite difference methods are able to reconstruct the phase with better accuracy due to relaxed assumptions for solving the transport of intensity equation relative to FFT methods. Finite difference methods are also more flexible than FFT methods in dealing with different boundary conditions.
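
    An FFT-based TIE solution, the faster but more assumption-laden of the two approaches compared above, inverts the Laplacian in Fourier space under a uniform-intensity approximation and periodic boundary conditions. A minimal sketch, with illustrative parameters:

```python
# Minimal sketch of an FFT-based transport-of-intensity (TIE) phase reconstruction.
# Assumptions: nearly uniform in-focus intensity I0 and periodic boundaries.
import numpy as np

def tie_fft(dI_dz, I0, wavelength, pixel, reg=1e-9):
    """Recover phase from the axial intensity derivative dI/dz (2-D array)."""
    k = 2 * np.pi / wavelength
    ny, nx = dI_dz.shape
    fx = np.fft.fftfreq(nx, d=pixel) * 2 * np.pi
    fy = np.fft.fftfreq(ny, d=pixel) * 2 * np.pi
    KX, KY = np.meshgrid(fx, fy)
    lap = -(KX ** 2 + KY ** 2)                       # Fourier symbol of the Laplacian
    rhs = -k * dI_dz / I0                            # uniform-intensity TIE: lap(phi) = rhs
    phi_hat = np.fft.fft2(rhs) / (lap - reg)         # regularized inverse Laplacian
    phi_hat[0, 0] = 0.0                              # drop the undetermined mean phase
    return np.real(np.fft.ifft2(phi_hat))

# Toy usage with a synthetic derivative image (values are illustrative only).
dI = np.random.default_rng(0).normal(size=(128, 128)) * 1e-3
print(tie_fft(dI, I0=1.0, wavelength=0.633e-6, pixel=5e-6).shape)
```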

  14. Application of genetic algorithm in the evaluation of the profile error of archimedes helicoid surface

    NASA Astrophysics Data System (ADS)

    Zhu, Lianqing; Chen, Yunfang; Chen, Qingshan; Meng, Hao

    2011-05-01

    According to the minimum zone condition, a method for evaluating the profile error of an Archimedes helicoid surface based on a Genetic Algorithm (GA) is proposed. The mathematical model of the surface is provided, and the unknown parameters in the surface equation are acquired through the least squares method. The principle of the GA is explained. Then, the profile error of the Archimedes helicoid surface is obtained through the GA optimization method. To validate the proposed method, the profile error of an Archimedes helicoid surface, an Archimedes cylindrical worm (ZA worm) surface, is evaluated. The results show that the proposed method correctly evaluates the profile error of Archimedes helicoid surfaces and satisfies the evaluation standard of the Minimum Zone Method. It can be applied to the measured profile error data of complex surfaces obtained by three-coordinate measuring machines (CMMs).
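
    The minimum zone idea above can be sketched with an evolutionary optimizer: search for the reference-surface parameters that minimize the peak-to-valley deviation of the measured points. For brevity the sketch below uses a 2-D line profile instead of an Archimedes helicoid and SciPy's differential evolution in place of a GA; both substitutions are assumptions for illustration.

```python
# Minimal sketch of minimum-zone profile error evaluation with an evolutionary optimizer.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 200)
z = 0.05 * x + 0.002 * rng.standard_normal(x.size)   # measured profile points (toy CMM data)

def zone_width(params):
    a, b = params                                     # reference line z = a*x + b
    res = z - (a * x + b)
    return res.max() - res.min()                      # minimum-zone objective (peak-to-valley)

result = differential_evolution(zone_width, bounds=[(-1, 1), (-1, 1)], seed=0)
print("profile error (minimum zone):", round(result.fun, 5))
```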

  15. Comparative evaluation of tensile bond strength of a polyvinyl acetate-based resilient liner following various denture base surface pre-treatment methods and immersion in artificial salivary medium: An in vitro study.

    PubMed

    Philip, Jacob M; Ganapathy, Dhanraj M; Ariga, Padma

    2012-07-01

    This study was formulated to evaluate the influence of various denture base resin surface pre-treatments (chemical, mechanical, and combinations) on the tensile bond strength between a polyvinyl acetate-based denture liner and a denture base resin. A universal testing machine was used to determine the bond strength of the liner to surface pre-treated acrylic resin blocks. The data were analyzed by one-way analysis of variance and the t-test (α = .05). The results infer that denture base surface pre-treatment can improve the adhesive tensile bond strength between the liner and denture base specimens, and that chemical, mechanical, and mechano-chemical pre-treatments have different effects on the bond strength of the acrylic soft resilient liner to the denture base. Among the various pre-treatment methods, the mechano-chemical pre-treatment with air-borne particle abrasion followed by monomer application exhibited superior bond strength with the resilient liner compared to the other methods. Hence, this method could be effectively used to improve the bond strength between the liner and the denture base and thus minimize delamination of the liner from the denture base during function.

  16. Research on Livable Community Evaluation Based on GIS

    NASA Astrophysics Data System (ADS)

    Yin, Zhangcai; Wu, Yang; Jin, Zhanghaonan; Zhang, Xu

    2018-01-01

    The community is the basic unit of the city. Research on livable communities can provide a bottom-up research path toward the realization of a livable city. Livability is the sum of the factors affecting the quality of community life. In this paper, livable community evaluation indexes are assessed based on GIS and the fuzzy comprehensive evaluation method, and the sum-index and sub-indexes of community livability are calculated. A community livability evaluation index system is then constructed on a GIS platform. This study provides theoretical support for the construction and management of livable communities, so as to guide the development and optimization of the city.
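
    The fuzzy comprehensive evaluation step referred to above combines index weights with a membership matrix and assigns the grade of maximum membership. The sketch below uses made-up indexes, weights, and membership values purely for illustration.

```python
# Minimal sketch of fuzzy comprehensive evaluation: B = W x R, then max membership.
import numpy as np

grades = ["highly livable", "livable", "barely livable", "not livable"]
# Membership of each index (rows) in each livability grade (columns),
# e.g. from GIS-derived indicators such as green space, transit access, noise (assumed).
R = np.array([
    [0.6, 0.3, 0.1, 0.0],
    [0.2, 0.5, 0.2, 0.1],
    [0.4, 0.4, 0.1, 0.1],
])
W = np.array([0.5, 0.3, 0.2])            # index weights (sum to 1, assumed)

B = W @ R                                 # composite membership vector
print(dict(zip(grades, np.round(B, 2))), "->", grades[int(np.argmax(B))])
```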

  17. Multiple and mixed methods in formative evaluation: Is more better? Reflections from a South African study.

    PubMed

    Odendaal, Willem; Atkins, Salla; Lewin, Simon

    2016-12-15

    Formative programme evaluations assess intervention implementation processes, and are seen widely as a way of unlocking the 'black box' of any programme in order to explore and understand why a programme functions as it does. However, few critical assessments of the methods used in such evaluations are available, and there are especially few that reflect on how well the evaluation achieved its objectives. This paper describes a formative evaluation of a community-based lay health worker programme for TB and HIV/AIDS clients across three low-income communities in South Africa. It assesses each of the methods used in relation to the evaluation objectives, and offers suggestions on ways of optimising the use of multiple, mixed-methods within formative evaluations of complex health system interventions. The evaluation's qualitative methods comprised interviews, focus groups, observations and diary keeping. Quantitative methods included a time-and-motion study of the lay health workers' scope of practice and a client survey. The authors conceptualised and conducted the evaluation, and through iterative discussions, assessed the methods used and their results. Overall, the evaluation highlighted programme issues and insights beyond the reach of traditional single methods evaluations. The strengths of the multiple, mixed-methods in this evaluation included a detailed description and nuanced understanding of the programme and its implementation, and triangulation of the perspectives and experiences of clients, lay health workers, and programme managers. However, the use of multiple methods needs to be carefully planned and implemented as this approach can overstretch the logistic and analytic resources of an evaluation. For complex interventions, formative evaluation designs including multiple qualitative and quantitative methods hold distinct advantages over single method evaluations. However, their value is not in the number of methods used, but in how each method matches the evaluation questions and the scientific integrity with which the methods are selected and implemented.

  18. Single Wall Carbon Nanotube Alignment Mechanisms for Non-Destructive Evaluation

    NASA Technical Reports Server (NTRS)

    Hong, Seunghun

    2002-01-01

    As proposed in our original proposal, we developed a new innovative method to assemble millions of single wall carbon nanotube (SWCNT)-based circuit components as fast as conventional microfabrication processes. This method is based on surface template assembly strategy. The new method solves one of the major bottlenecks in carbon nanotube based electrical applications and, potentially, may allow us to mass produce a large number of SWCNT-based integrated devices of critical interests to NASA.

  19. A rule-based named-entity recognition method for knowledge extraction of evidence-based dietary recommendations

    PubMed Central

    2017-01-01

    Evidence-based dietary information represented as unstructured text is crucial information that needs to be accessed in order to help dietitians follow the new knowledge that arrives daily in newly published scientific reports. Different named-entity recognition (NER) methods have been introduced previously to extract useful information from the biomedical literature. They have focused, for example, on extracting gene mentions, protein mentions, relationships between genes and proteins, chemical concepts, and relationships between drugs and diseases. In this paper, we present a novel NER method, called drNER, for knowledge extraction of evidence-based dietary information. To the best of our knowledge this is the first attempt at extracting dietary concepts. DrNER is a rule-based NER that consists of two phases. The first involves the detection and determination of entity mentions, and the second involves the selection and extraction of the entities. We evaluate the method using text corpora from heterogeneous sources, including text from several scientifically validated web sites and text from scientific publications. The evaluation showed that drNER gives good results and can be used for knowledge extraction of evidence-based dietary recommendations. PMID:28644863
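
    A two-phase rule-based NER pass in the spirit of drNER can be sketched as mention detection with lexicon and regular-expression rules followed by a selection step. The lexicon and patterns below are hypothetical illustrations, not the published drNER rules.

```python
# Minimal sketch of a two-phase rule-based NER pass for dietary text.
# Assumptions: the unit list, the food lexicon, and the selection rule are hypothetical.
import re

UNITS = r"(?:mg|g|mcg|µg|IU|ml)"
FOOD_LEXICON = {"whole grains", "fish", "vegetables", "fruit", "red meat"}

def detect(text):
    candidates = []
    for m in re.finditer(rf"\b\d+(?:\.\d+)?\s*{UNITS}\b", text):        # quantities
        candidates.append(("QUANTITY", m.group(), m.span()))
    for food in FOOD_LEXICON:                                           # food items
        for m in re.finditer(rf"\b{re.escape(food)}\b", text, re.I):
            candidates.append(("FOOD", m.group(), m.span()))
    return candidates

def select(candidates):
    # Keep the longest non-overlapping mentions (a simple selection rule).
    chosen, taken = [], set()
    for ent in sorted(candidates, key=lambda c: c[2][0] - c[2][1]):
        span = set(range(*ent[2]))
        if not span & taken:
            chosen.append(ent)
            taken |= span
    return sorted(chosen, key=lambda c: c[2])

text = "Adults should eat fish twice a week and limit red meat; aim for 30 g of fibre."
print(select(detect(text)))
```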

  20. Shortening the Miles to the Milestones: Connecting EPA-Based Evaluations to ACGME Milestone Reports for Internal Medicine Residency Programs.

    PubMed

    Choe, John H; Knight, Christopher L; Stiling, Rebekah; Corning, Kelli; Lock, Keli; Steinberg, Kenneth P

    2016-07-01

    The Next Accreditation System requires internal medicine training programs to provide the Accreditation Council for Graduate Medical Education (ACGME) with semiannual information about each resident's progress in 22 subcompetency domains. Evaluation of resident "trustworthiness" in performing entrustable professional activities (EPAs) may offer a more tangible assessment construct than evaluations based on expectations of usual progression toward competence. However, translating results from EPA-based evaluations into ACGME milestone progress reports has proven to be challenging because the constructs that underlay these two systems differ.The authors describe a process to bridge the gap between rotation-specific EPA-based evaluations and ACGME milestone reporting. Developed at the University of Washington in 2012 and 2013, this method involves mapping EPA-based evaluation responses to "milestone elements," the narrative descriptions within the columns of each of the 22 internal medicine subcompetencies. As faculty members complete EPA-based evaluations, the mapped milestone elements are automatically marked as "confirmed." Programs can maintain a database that tallies the number of times each milestone element is confirmed for a resident; these data can be used to produce graphical displays of resident progress along the internal medicine milestones.Using this count of milestone elements allows programs to bridge the gap between faculty assessments of residents based on rotation-specific observed activities and semiannual ACGME reports based on the internal medicine milestones. Although potentially useful for all programs, this method is especially beneficial to large programs where clinical competency committee members may not have the opportunity for direct observation of all residents.

  1. Research Methods Tutor: evaluation of a dialogue-based tutoring system in the classroom.

    PubMed

    Arnott, Elizabeth; Hastings, Peter; Allbritton, David

    2008-08-01

    Research Methods Tutor (RMT) is a dialogue-based intelligent tutoring system for use in conjunction with undergraduate psychology research methods courses. RMT includes five topics that correspond to the curriculum of introductory research methods courses: ethics, variables, reliability, validity, and experimental design. We evaluated the effectiveness of the RMT system in the classroom using a nonequivalent control group design. Students in three classes (n = 83) used RMT, and students in two classes (n = 53) did not use RMT. Results indicated that the use of RMT yielded strong learning gains of 0.75 standard deviations above classroom instruction alone. Further, the dialogue-based tutoring condition of the system resulted in higher gains than did the textbook-style condition (CAI version) of the system. Future directions for RMT include the addition of new topics and tutoring elements.

  2. A modular method for evaluating the performance of picture archiving and communication systems.

    PubMed

    Sanders, W H; Kant, L A; Kudrimoti, A

    1993-08-01

    Modeling can be used to predict the performance of picture archiving and communication system (PACS) configurations under various load conditions at an early design stage. This is important because choices made early in the design of a system can have a significant impact on the performance of the resulting implementation. Because PACS consist of many types of components, it is important to do such evaluations in a modular manner, so that alternative configurations and designs can be easily investigated. Stochastic activity networks (SANs) and reduced base model construction methods can aid in doing this. SANs are a model type particularly suited to the evaluation of systems in which several activities may be in progress concurrently, and each activity may affect the others through the results of its completion. Together with SANs, reduced base model construction methods provide a means to build highly modular models, in which models of particular components can be easily reused. In this article, we investigate the use of SANs and reduced base model construction techniques in evaluating PACS. Construction and solution of the models is done using UltraSAN, a graphic-oriented software tool for model specification, analysis, and simulation. The method is illustrated via the evaluation of a realistically sized PACS for a typical United States hospital of 300 to 400 beds, and the derivation of system response times and component utilizations.

  3. [Evaluation of inflammatory cells (tumor infiltrating lymphocytes - TIL) in malignant melanoma].

    PubMed

    Dundr, Pavel; Němejcová, Kristýna; Bártů, Michaela; Tichá, Ivana; Jakša, Radek

    2018-01-01

    The evaluation of inflammatory infiltrate (tumor infiltrating lymphocytes - TIL) should be a standard part of biopsy examination for malignant melanoma. Currently, the most commonly used assessment method according to Clark is not optimal and there have been attempts to find an alternative system. Here we present an overview of possible approaches involving five different evaluation methods based on hematoxylin-eosin staining, including the recent suggestion of unified TIL evaluation method for all solid tumors. The issue of methodology, prognostic and predictive significance of TIL determination as well as the importance of immunohistochemical subtyping of inflammatory infiltrate is discussed.

  4. 23 CFR 636.302 - Are there any limitations on the selection and use of proposal evaluation factors?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... evaluation of proposals and award of the contract may be based on qualitative considerations; (iv) If the... project funded with Federal-aid highway funds shall be based on at least one of the following methods: (A...

  5. Surface defects evaluation system based on electromagnetic model simulation and inverse-recognition calibration method

    NASA Astrophysics Data System (ADS)

    Yang, Yongying; Chai, Huiting; Li, Chen; Zhang, Yihui; Wu, Fan; Bai, Jian; Shen, Yibing

    2017-05-01

    Digitized evaluation of micro sparse defects on large fine optical surfaces is one of the challenges in the field of optical manufacturing and inspection. The surface defects evaluation system (SDES) for large fine optical surfaces is developed based on our previously reported work. In this paper, an electromagnetic simulation model based on the Finite-Difference Time-Domain (FDTD) method for vector diffraction theory is first established to study the law of microscopic scattering dark-field imaging. Given the aberration in actual optical systems, a point spread function (PSF) approximated by a Gaussian function is introduced in the extrapolation from the near field to the far field, and the scattering intensity distribution in the image plane is deduced. Analysis shows that both diffraction-broadened imaging and geometrical imaging should be considered in the precise size evaluation of defects. Thus, a novel inverse-recognition calibration method is put forward to avoid confusion caused by the diffraction-broadening effect. The evaluation method is applied to the quantitative evaluation of defect information. The evaluation results for samples of many materials obtained by the SDES are compared with those from an OLYMPUS microscope to verify the micron-scale resolution and precision. The established system has been applied to inspect defects on large fine optical surfaces and can achieve defect inspection of surfaces as large as 850 mm × 500 mm with a resolution of 0.5 μm.

  6. Initial Assessment of a Rapid Method of Calculating CEV Environmental Heating

    NASA Technical Reports Server (NTRS)

    Pickney, John T.; Milliken, Andrew H.

    2010-01-01

    An innovative method for rapidly calculating spacecraft environmental absorbed heats in planetary orbit is described. The method reads a database of pre-calculated orbital absorbed heats and adjusts those heats for the desired orbit parameters. The approach differs from traditional Monte Carlo methods, which are orbit based with a planet-centered coordinate system. The database is based on a spacecraft-centered coordinate system in which the range of all possible sun and planet look angles is evaluated. In an example case, 37,044 orbit configurations were analyzed for average orbital heats on selected spacecraft surfaces. Calculation time was under 2 minutes, while a comparable Monte Carlo evaluation would have taken an estimated 26 hours.

  7. Assessing QuADEM: Preliminary Notes on a New Method for Evaluating Online Language Learning Courseware

    ERIC Educational Resources Information Center

    Strobl, Carola; Jacobs, Geert

    2011-01-01

    In this article, we set out to assess QuADEM (Quality Assessment of Digital Educational Material), one of the latest methods for evaluating online language learning courseware. What is special about QuADEM is that the evaluation is based on observing the actual usage of the online courseware and that, from a checklist of 12 different components,…

  8. Identifying Audiences of E-Infrastructures - Tools for Measuring Impact

    PubMed Central

    van den Besselaar, Peter

    2012-01-01

    Research evaluation should take into account the intended scholarly and non-scholarly audiences of the research output. This holds too for research infrastructures, which often aim at serving a large variety of audiences. With research and research infrastructures moving to the web, new possibilities are emerging for evaluation metrics. This paper proposes a feasible indicator for measuring the scope of audiences who use web-based e-infrastructures, as well as the frequency of use. In order to apply this indicator, a method is needed for classifying visitors to e-infrastructures into relevant user categories. The paper proposes such a method, based on an inductive logic program and a Bayesian classifier. The method is tested, showing that the visitors are efficiently classified with 90% accuracy into the selected categories. Consequently, the method can be used to evaluate the use of the e-infrastructure within and outside academia. PMID:23239995

  9. Model-based segmentation in orbital volume measurement with cone beam computed tomography and evaluation against current concepts.

    PubMed

    Wagner, Maximilian E H; Gellrich, Nils-Claudius; Friese, Karl-Ingo; Becker, Matthias; Wolter, Franz-Erich; Lichtenstein, Juergen T; Stoetzer, Marcus; Rana, Majeed; Essig, Harald

    2016-01-01

    Objective determination of the orbital volume is important in the diagnostic process and in evaluating the efficacy of medical and/or surgical treatment of orbital diseases. Tools designed to measure orbital volume with computed tomography (CT) often cannot be used with cone beam CT (CBCT) because of inferior tissue representation, although CBCT has the benefit of greater availability and lower patient radiation exposure. Therefore, a model-based segmentation technique is presented as a new method for measuring orbital volume and compared to alternative techniques. Both eyes from thirty subjects with no known orbital pathology who had undergone CBCT as a part of routine care were evaluated (n = 60 eyes). Orbital volume was measured with manual, atlas-based, and model-based segmentation methods. Volume measurements, volume determination time, and usability were compared between the three methods. Differences in means were tested for statistical significance using two-tailed Student's t-tests. Neither atlas-based (26.63 ± 3.15 cm³) nor model-based (26.87 ± 2.99 cm³) measurements were significantly different from manual volume measurements (26.65 ± 4.0 cm³). However, the time required to determine orbital volume was significantly longer for manual measurements (10.24 ± 1.21 min) than for atlas-based (6.96 ± 2.62 min, p < 0.001) or model-based (5.73 ± 1.12 min, p < 0.001) measurements. All three orbital volume measurement methods examined can accurately measure orbital volume, although the atlas-based and model-based methods seem to be more user-friendly and less time-consuming. The new model-based technique achieves fully automated segmentation results, whereas all atlas-based segmentations at least required manipulations to the anterior closing. Additionally, model-based segmentation can provide reliable orbital volume measurements when CT image quality is poor.

  10. An Evaluation of High School Curricula Employing Using the Element-Based Curriculum Development Model

    ERIC Educational Resources Information Center

    Aslan, Dolgun; Günay, Rafet

    2016-01-01

    This study was conducted with the aim of evaluating the curricula that constitute the basis of education provision at high schools in Turkey from the perspective of the teachers involved. A descriptive survey model, a quantitative research method, was employed in this study. An item-based curriculum evaluation model was employed as part of the…

  11. The Formative Evaluation of a Web-based Course-Management System within a University Setting.

    ERIC Educational Resources Information Center

    Maslowski, Ralf; Visscher, Adrie J.; Collis, Betty; Bloemen, Paul P. M.

    2000-01-01

    Discussion of Web-based course management systems (W-CMSs) in higher education focuses on formative evaluation and its contribution in the design and development of high-quality W-CMSs. Reviews methods and techniques that can be applied in formative evaluation and examines TeLeTOP, a W-CMS produced at the University of Twente (Netherlands). (LRW)

  12. Quality of life in children with epilepsy and cognitive impairment: a review and a pilot study.

    PubMed

    Soria, Carmen; El Sabbagh, Sandra; Escolano, Sylvie; Bobet, René; Bulteau, Christine; Dellatolas, Georges

    2007-01-01

    Various methods have recently been proposed to assess the physical, psychological or social dimensions of quality of life (QoL) in children with epilepsy (CwE) and their families. Some methods are based exclusively on parental report and others emphasize the importance of an interview with the patient himself. In children with epilepsy and severe cognitive deficit, only parental report is possible in practice; however, some parental-based methods to evaluate QoL in CwE have excluded children with cognitive deficit. The present pilot study explores which items are suitable for a parental-based QoL evaluation in CwE with special educational needs, and the most frequently reported parental concerns in this special population of children.

  13. Design and Test of Pseudorandom Number Generator Using a Star Network of Lorenz Oscillators

    NASA Astrophysics Data System (ADS)

    Cho, Kenichiro; Miyano, Takaya

    We have recently developed a chaos-based stream cipher based on augmented Lorenz equations as a star network of Lorenz subsystems. In our method, the augmented Lorenz equations are used as a pseudorandom number generator. In this study, we propose a new method based on the augmented Lorenz equations for generating binary pseudorandom numbers and evaluate its security using the statistical tests of SP800-22 published by the National Institute of Standards and Technology in comparison with the performances of other chaotic dynamical models used as binary pseudorandom number generators. We further propose a faster version of the proposed method and evaluate its security using the statistical tests of TestU01 published by L’Ecuyer and Simard.
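
    The abstract does not reproduce the augmented Lorenz equations themselves; as a rough illustration of the general idea, the sketch below integrates the classical Lorenz system with Euler steps and thresholds one state variable to emit bits. It is a simplified stand-in, not the authors' generator, and such a naive construction would not be expected to pass SP800-22 without further post-processing.

```python
import numpy as np

def lorenz_bits(n_bits, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0,
                state=(1.0, 1.0, 1.0), burn_in=10_000, stride=50):
    """Generate bits by integrating the classical Lorenz system (Euler steps)
    and thresholding the x variable. A simplified stand-in for the paper's
    augmented Lorenz network; not a vetted cryptographic generator."""
    x, y, z = state
    bits = []
    step = 0
    while len(bits) < n_bits:
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        step += 1
        if step > burn_in and step % stride == 0:
            bits.append(1 if x > 0 else 0)
    return np.array(bits, dtype=np.uint8)

stream = lorenz_bits(1024)
print(stream[:32], "ones fraction:", stream.mean())
```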

  14. Edge gradients evaluation for 2D hybrid finite volume method model

    USDA-ARS?s Scientific Manuscript database

    In this study, a two-dimensional depth-integrated hydrodynamic model was developed using FVM on a hybrid unstructured collocated mesh system. To alleviate the negative effects of mesh irregularity and non-uniformity, a conservative evaluation method for edge gradients based on the second-order Tayl...

  15. Reform of the Method for Evaluating the Teaching of Medical Linguistics to Medical Students

    ERIC Educational Resources Information Center

    Zhang, Hongkui; Wang, Bo; Zhang, Longlu

    2014-01-01

    Exploring reform of the teaching evaluation method for vocational competency-based education (CBE) curricula for medical students is a very important process in following international medical education standards, intensifying education and teaching reforms, enhancing teaching management, and improving the quality of medical education. This…

  16. EVALUATION OF A TEST METHOD FOR MEASURING INDOOR AIR EMISSIONS FROM DRY-PROCESS PHOTOCOPIERS

    EPA Science Inventory

    A large chamber test method for measuring indoor air emissions from office equipment was developed, evaluated, and revised based on the initial testing of four dry-process photocopiers. Because all chambers may not necessarily produce similar results (e.g., due to differences in ...

  17. Algorithms and applications of aberration correction and American standard-based digital evaluation in surface defects evaluating system

    NASA Astrophysics Data System (ADS)

    Wu, Fan; Cao, Pin; Yang, Yongying; Li, Chen; Chai, Huiting; Zhang, Yihui; Xiong, Haoliang; Xu, Wenlin; Yan, Kai; Zhou, Lin; Liu, Dong; Bai, Jian; Shen, Yibing

    2016-11-01

    The inspection of surface defects is a significant part of optical surface quality evaluation. Based on microscopic scattering dark-field imaging, sub-aperture scanning and stitching, the Surface Defects Evaluating System (SDES) can acquire a full-aperture image of defects on an optical element's surface and then extract the geometric size and position of defects with image processing such as feature recognition. However, optical distortion in the SDES adversely affects the inspection precision of surface defects. In this paper, a distortion correction algorithm based on a standard lattice pattern is proposed. Feature extraction, polynomial fitting and bilinear interpolation, in combination with adjacent sub-aperture stitching, are employed to correct the optical distortion of the SDES automatically and with high accuracy. Subsequently, in order to digitally evaluate surface defects against the American military standard MIL-PRF-13830B using the defect information obtained from the SDES, an American standard-based digital evaluation algorithm is proposed, which mainly includes a judgment method for surface defect concentration. The judgment method establishes a weight region for each defect and calculates defect concentration from the overlap of weight regions. The algorithm takes full advantage of the convenience of matrix operations and has the merits of low complexity and fast execution, which makes it well suited for high-efficiency inspection of surface defects. Finally, various experiments are conducted and the correctness of the algorithms is verified. At present, these algorithms are in use in the SDES.

  18. Propulsion Diagnostic Method Evaluation Strategy (ProDiMES) User's Guide

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.

    2010-01-01

    This report is a User's Guide for the Propulsion Diagnostic Method Evaluation Strategy (ProDiMES). ProDiMES is a standard benchmarking problem and a set of evaluation metrics to enable the comparison of candidate aircraft engine gas path diagnostic methods. This Matlab (The MathWorks, Inc.)-based software tool enables users to independently develop and evaluate diagnostic methods. Additionally, a set of blind test case data is distributed as part of the software. This will enable the side-by-side comparison of diagnostic approaches developed by multiple users. The User's Guide describes the various components of ProDiMES, and provides instructions for the installation and operation of the tool.

  19. Comparison of Climatological Planetary Boundary Layer Depth Estimates Using the GEOS-5 AGCM

    NASA Technical Reports Server (NTRS)

    Mcgrath-Spangler, Erica Lynn; Molod, Andrea M.

    2014-01-01

    Planetary boundary layer (PBL) processes, including those influencing the PBL depth, control many aspects of weather and climate, and accurate models of these processes are important for forecasting changes in the future. However, evaluation of model estimates of PBL depth is difficult because no consensus on the PBL depth definition currently exists and various methods for estimating this parameter can give results that differ by hundreds of meters or more. In order to facilitate comparisons between the Goddard Earth Observation System (GEOS-5) and other modeling and observational systems, seven PBL depth estimation methods are used to produce PBL depth climatologies and are evaluated and compared here. All seven methods evaluate the same atmosphere so all differences are related solely to the definition chosen. These methods depend on the scalar diffusivity, bulk and local Richardson numbers, and the diagnosed horizontal turbulent kinetic energy (TKE). Results are aggregated by climate class in order to allow broad generalizations. The various PBL depth estimations give similar midday results with some exceptions. One method based on horizontal turbulent kinetic energy produces deeper PBL depths in winter, associated with winter storms. In warm, moist conditions, the method based on a bulk Richardson number gives results that are shallower than those given by the methods based on the scalar diffusivity. The impact of turbulence driven by radiative cooling at cloud top is most significant during the evening transition and along several regions across the oceans, and methods sensitive to this cooling produce deeper PBL depths where it is most active. Additionally, Richardson number-based methods collapse better at night than methods that depend on the scalar diffusivity. This feature potentially affects tracer transport.
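
    A minimal sketch of one of the definitions mentioned above, the bulk-Richardson-number method: the PBL top is taken as the lowest level where the bulk Richardson number, computed from the surface upward, first exceeds a critical value (0.25 here; the threshold actually used in GEOS-5 may differ). The profile below is synthetic and only illustrative.

```python
import numpy as np

def pbl_depth_bulk_richardson(z, theta_v, u, v, ri_crit=0.25, g=9.81):
    """Estimate the PBL depth as the lowest height at which the bulk
    Richardson number relative to the surface exceeds a critical value.
    z: heights AGL [m]; theta_v: virtual potential temperature [K];
    u, v: horizontal wind components [m/s]."""
    wind_sq = np.maximum(u**2 + v**2, 1e-6)          # avoid division by zero
    rib = g * z * (theta_v - theta_v[0]) / (theta_v[0] * wind_sq)
    above = np.where(rib > ri_crit)[0]
    return z[above[0]] if above.size else z[-1]

# Illustrative convective profile capped near 1200 m.
z = np.arange(10.0, 3000.0, 10.0)
theta_v = 300.0 + np.where(z < 1200.0, 0.0, 0.005 * (z - 1200.0))
u = np.full_like(z, 5.0)
v = np.full_like(z, 2.0)
print("Estimated PBL depth:", pbl_depth_bulk_richardson(z, theta_v, u, v), "m")
```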

  20. Towards evidence-based computational statistics: lessons from clinical research on the role and design of real-data benchmark studies.

    PubMed

    Boulesteix, Anne-Laure; Wilson, Rory; Hapfelmeier, Alexander

    2017-09-09

    The goal of medical research is to develop interventions that are in some sense superior, with respect to patient outcome, to interventions currently in use. Similarly, the goal of research in methodological computational statistics is to develop data analysis tools that are themselves superior to the existing tools. The methodology of the evaluation of medical interventions continues to be discussed extensively in the literature and it is now well accepted that medicine should be at least partly "evidence-based". Although we statisticians are convinced of the importance of unbiased, well-thought-out study designs and evidence-based approaches in the context of clinical research, we tend to ignore these principles when designing our own studies for evaluating statistical methods in the context of our methodological research. In this paper, we draw an analogy between clinical trials and real-data-based benchmarking experiments in methodological statistical science, with datasets playing the role of patients and methods playing the role of medical interventions. Through this analogy, we suggest directions for improvement in the design and interpretation of studies which use real data to evaluate statistical methods, in particular with respect to dataset inclusion criteria and the reduction of various forms of bias. More generally, we discuss the concept of "evidence-based" statistical research, its limitations and its impact on the design and interpretation of real-data-based benchmark experiments. We suggest that benchmark studies, a method of assessing statistical methods using real-world datasets, might benefit from adopting (some) concepts from evidence-based medicine towards the goal of more evidence-based statistical research.

  1. An EGR performance evaluation and decision-making approach based on grey theory and grey entropy analysis

    PubMed Central

    2018-01-01

    Exhaust gas recirculation (EGR) is one of the main methods of reducing NOx emissions and has been widely used in marine diesel engines. This paper proposes an optimized comprehensive assessment method based on multi-objective grey situation decision theory, grey relation theory and grey entropy analysis to evaluate EGR performance and determine the optimal EGR rate, tasks that currently lack clear theoretical guidance. First, multi-objective grey situation decision theory is used to establish the initial decision-making model according to the main EGR parameters. The optimal compromise between diesel engine combustion and emission performance is transformed into a decision-making target weight problem. After establishing the initial model and considering the characteristics of EGR under different conditions, an optimized target weight algorithm based on grey relation theory and grey entropy analysis is applied to generate the comprehensive evaluation and decision-making model. Finally, the proposed method is successfully applied to a TBD234V12 turbocharged diesel engine, and the results clearly illustrate the feasibility of the proposed method for providing theoretical support and a reference for further EGR optimization. PMID:29377956
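
    The paper's grey entropy weighting is not reproduced in the abstract; as a hedged sketch of the general entropy-weighting idea behind the target-weight step, the code below applies the standard entropy weight method to a hypothetical decision matrix of EGR operating points versus evaluation targets (the target names and scores are illustrative, not the study's data).

```python
import numpy as np

def entropy_weights(decision_matrix):
    """Entropy-based target weights for a decision matrix whose rows are
    alternatives (e.g. candidate EGR rates) and whose columns are evaluation
    targets (e.g. NOx, soot, fuel consumption benefit scores).
    A simplified stand-in for the paper's grey entropy weighting."""
    x = np.asarray(decision_matrix, dtype=float)
    p = x / x.sum(axis=0)                          # column-wise proportions
    n = x.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(p > 0, p * np.log(p), 0.0)
    entropy = -plogp.sum(axis=0) / np.log(n)       # entropy per target in [0, 1]
    divergence = 1.0 - entropy                     # more spread -> more weight
    return divergence / divergence.sum()

# Hypothetical benefit scores for three candidate EGR rates (rows) on three targets.
scores = np.array([[0.9, 0.4, 0.7],
                   [0.6, 0.7, 0.8],
                   [0.3, 0.9, 0.6]])
print("target weights:", entropy_weights(scores))
```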

  2. Quality evaluation of Yin Chen Hao Tang extract based on fingerprint chromatogram and simultaneous determination of five bioactive constituents.

    PubMed

    Wang, Xijun; Lv, Haitao; Sun, Hui; Jiang, Xingang; Wu, Zeming; Sun, Wenjun; Wang, Ping; Liu, Lian; Bi, Kaishun

    2008-01-01

    A completely validated method based on HPLC coupled with a photodiode array detector (HPLC-UV) was described for evaluating and controlling the quality of Yin Chen Hao Tang extract (YCHTE). First, an HPLC-UV fingerprint chromatogram of YCHTE was established to preliminarily elucidate the amount and chromatographic trajectory of chemical constituents in YCHTE. Second, for the first time, five mainly bioactive constituents in YCHTE were simultaneously determined based on the fingerprint chromatogram to further control the quality of YCHTE quantitatively. The developed method was applied to analyze 12 batches of YCHTE samples, which consisted of herbal drugs from different places of production, and showed acceptable linearity, intraday precision (RSD <5%), interday precision (RSD <4.80%), and accuracy (RSD <2.80%). As a result, the fingerprint chromatogram identified 15 representative general fingerprint peaks, and the fingerprint chromatogram resemblances are all better than 0.9996. The contents of the five analytes in different batches of YCHTE samples do not show significant differences. It is therefore concluded that the developed HPLC-UV method is a more fully validated and complete method for evaluating and controlling the quality of YCHTE.

  3. An EGR performance evaluation and decision-making approach based on grey theory and grey entropy analysis.

    PubMed

    Zu, Xianghuan; Yang, Chuanlei; Wang, Hechun; Wang, Yinyan

    2018-01-01

    Exhaust gas recirculation (EGR) is one of the main methods of reducing NOx emissions and has been widely used in marine diesel engines. This paper proposes an optimized comprehensive assessment method based on multi-objective grey situation decision theory, grey relation theory and grey entropy analysis to evaluate EGR performance and determine the optimal EGR rate, tasks that currently lack clear theoretical guidance. First, multi-objective grey situation decision theory is used to establish the initial decision-making model according to the main EGR parameters. The optimal compromise between diesel engine combustion and emission performance is transformed into a decision-making target weight problem. After establishing the initial model and considering the characteristics of EGR under different conditions, an optimized target weight algorithm based on grey relation theory and grey entropy analysis is applied to generate the comprehensive evaluation and decision-making model. Finally, the proposed method is successfully applied to a TBD234V12 turbocharged diesel engine, and the results clearly illustrate the feasibility of the proposed method for providing theoretical support and a reference for further EGR optimization.

  4. An evaluation method of power quality about electrified railways connected to power grid based on PSCAD/EMTDC

    NASA Astrophysics Data System (ADS)

    Liang, Weibin; Ouyang, Sen; Huang, Xiang; Su, Weijian

    2017-05-01

    The existing modeling process for the power quality of electrified railways connected to the power grid is complicated and the simulation scenarios are incomplete, so this paper puts forward a novel evaluation method of power quality based on PSCAD/EMTDC. Firstly, a model of power quality for electrified railways connected to the power grid is established, based on testing reports or measured data. The equivalent model of the electrified locomotive contains power and harmonic characteristics, which are represented by a load and a harmonic source. Secondly, in order to make the evaluation more complete, an analysis scheme is put forward. The scheme combines three dimensions of the electrified locomotive: type, working condition and quantity. Finally, the Shenmao Railway is taken as an example to evaluate the power quality under different scenarios, and the results show that electrified railways connected to the power grid have a significant effect on power quality.

  5. Ratio-based vs. model-based methods to correct for urinary creatinine concentrations.

    PubMed

    Jain, Ram B

    2016-08-01

    Creatinine-corrected urinary analyte concentration is usually computed as the ratio of the observed level of analyte concentration divided by the observed level of the urinary creatinine concentration (UCR). This ratio-based method is flawed since it implicitly assumes that hydration is the only factor that affects urinary creatinine concentrations. On the contrary, it has been shown in the literature that age, gender, race/ethnicity, and other factors also affect UCR. Consequently, an optimal method to correct for UCR should correct for hydration as well as other factors like age, gender, and race/ethnicity that affect UCR. Model-based creatinine correction in which observed UCRs are used as an independent variable in regression models has been proposed. This study was conducted to evaluate the performance of ratio-based and model-based creatinine correction methods when the effects of gender, age, and race/ethnicity are evaluated one factor at a time for selected urinary analytes and metabolites. It was observed that the ratio-based method leads to statistically significant pairwise differences, for example, between males and females or between non-Hispanic whites (NHW) and non-Hispanic blacks (NHB), more often than the model-based method. However, depending upon the analyte of interest, the reverse is also possible. The estimated ratios of geometric means (GM), for example, male to female or NHW to NHB, were also compared for the two methods. When estimated UCRs were higher for the group (for example, males) in the numerator of this ratio, these ratios were higher for the model-based method, for example, male to female ratio of GMs. When estimated UCRs were lower for the group (for example, NHW) in the numerator of this ratio, these ratios were higher for the ratio-based method, for example, NHW to NHB ratio of GMs. The model-based method is the method of choice if all factors that affect UCR are to be accounted for.
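
    A hedged sketch of the two correction strategies contrasted above, on simulated data (variable names, coefficients and the gender covariate are illustrative, not survey values): the ratio-based correction divides the analyte by creatinine, while the model-based correction enters log creatinine and additional covariates into a regression.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
# Hypothetical data: log urinary creatinine (log_ucr) and log analyte,
# with a gender indicator that also affects creatinine.
male = rng.integers(0, 2, n)
log_ucr = 4.5 + 0.3 * male + rng.normal(0, 0.4, n)
log_analyte = 1.0 + 0.8 * log_ucr + 0.1 * male + rng.normal(0, 0.5, n)

# Ratio-based correction: analyte divided by creatinine
# (valid only if hydration were the sole driver of creatinine).
ratio_corrected = np.exp(log_analyte) / np.exp(log_ucr)

# Model-based correction: regress log(analyte) on log(creatinine) plus covariates,
# then compare groups via the adjusted coefficient rather than the raw ratio.
X = np.column_stack([np.ones(n), log_ucr, male])
beta, *_ = np.linalg.lstsq(X, log_analyte, rcond=None)

gm_ratio = np.exp(np.log(ratio_corrected[male == 1]).mean()
                  - np.log(ratio_corrected[male == 0]).mean())
print("ratio-based male/female GM ratio:", round(float(gm_ratio), 3))
print("model-based male coefficient (log scale):", round(float(beta[2]), 3))
```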

  6. Development of representative magnetic resonance imaging-based atlases of the canine brain and evaluation of three methods for atlas-based segmentation.

    PubMed

    Milne, Marjorie E; Steward, Christopher; Firestone, Simon M; Long, Sam N; O'Brien, Terrence J; Moffat, Bradford A

    2016-04-01

    To develop representative MRI atlases of the canine brain and to evaluate 3 methods of atlas-based segmentation (ABS). 62 dogs without clinical signs of epilepsy and without MRI evidence of structural brain disease. The MRI scans from 44 dogs were used to develop 4 templates on the basis of brain shape (brachycephalic, mesaticephalic, dolichocephalic, and combined mesaticephalic and dolichocephalic). Atlas labels were generated by segmenting the brain, ventricular system, hippocampal formation, and caudate nuclei. The MRI scans from the remaining 18 dogs were used to evaluate 3 methods of ABS (manual brain extraction and application of a brain shape-specific template [A], automatic brain extraction and application of a brain shape-specific template [B], and manual brain extraction and application of a combined template [C]). The performance of each ABS method was compared by calculation of the Dice and Jaccard coefficients, with manual segmentation used as the gold standard. Method A had the highest mean Jaccard coefficient and was the most accurate ABS method assessed. Measures of overlap for ABS methods that used manual brain extraction (A and C) ranged from 0.75 to 0.95 and compared favorably with repeated measures of overlap for manual extraction, which ranged from 0.88 to 0.97. Atlas-based segmentation was an accurate and repeatable method for segmentation of canine brain structures. It could be performed more rapidly than manual segmentation, which should allow the application of computer-assisted volumetry to large data sets and clinical cases and facilitate neuroimaging research and disease diagnosis.
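
    The Dice and Jaccard overlap coefficients used above are straightforward to compute from binary label masks; a minimal sketch follows (toy 2D masks standing in for the study's 3D label volumes).

```python
import numpy as np

def dice_jaccard(mask_a, mask_b):
    """Dice and Jaccard overlap coefficients between two binary segmentation
    masks (e.g. atlas-based vs. manual labels of a brain structure)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    dice = 2.0 * intersection / (a.sum() + b.sum())
    jaccard = intersection / union
    return dice, jaccard

# Toy example.
manual = np.zeros((10, 10), dtype=bool); manual[2:8, 2:8] = True
atlas = np.zeros((10, 10), dtype=bool); atlas[3:9, 2:8] = True
print("Dice = %.3f, Jaccard = %.3f" % dice_jaccard(manual, atlas))
```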

  7. Development of Weeds Density Evaluation System Based on RGB Sensor

    NASA Astrophysics Data System (ADS)

    Solahudin, M.; Slamet, W.; Wahyu, W.

    2018-05-01

    Weeds are plant competitors which potentially reduce yields due to competition for sunlight, water and soil nutrients. For chemical-based weed control, site-specific weed management, which accommodates the spatial and temporal diversity of weed infestation when determining the appropriate herbicide dose using Variable Rate Technology (VRT), is preferable to the traditional approach of single-dose herbicide application. In such applications, determination of the level of weed density is an important task. Several methods have been studied to evaluate weed density. The objective of this study is to develop a system that is able to evaluate weed density based on RGB (Red, Green, and Blue) sensors. An RGB sensor was used to acquire the RGB values of the field surface. An artificial neural network (ANN) model was then used to determine the weed density. In this study the ANN model was trained with 280 training data (70%), 60 validation data (15%), and 60 testing data (15%). Based on the field test, using the proposed method the weed density could be evaluated with an accuracy of 83.75%.

  8. Prospects for Public Library Evaluation.

    ERIC Educational Resources Information Center

    Van House, Nancy A.; Childers, Thomas

    1991-01-01

    Discusses methods of evaluation that can be used to measure public library effectiveness, based on a conference sponsored by the Council on Library Resources. Topics discussed include the Public Library Effectiveness Study (PLES), quantitative and qualitative evaluation, using evaluative information for resource acquisition and resource…

  9. An evaluation method of computer usability based on human-to-computer information transmission model.

    PubMed

    Ogawa, K

    1992-01-01

    This paper proposes a new evaluation and prediction method for computer usability. This method is based on our two previously proposed information transmission measures created from a human-to-computer information transmission model. The model has three information transmission levels: the device, software, and task content levels. Two measures, called the device independent information measure (DI) and the computer independent information measure (CI), defined on the software and task content levels respectively, are given as the amount of information transmitted. Two information transmission rates are defined as DI/T and CI/T, where T is the task completion time: the device independent information transmission rate (RDI), and the computer independent information transmission rate (RCI). The method utilizes the RDI and RCI rates to comparatively evaluate the usability of software and device operations on different computer systems. Experiments on three different systems, here using a graphical information input task, confirm that the method offers an efficient way of determining computer usability.
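
    The two rates defined above follow directly from the abstract (RDI = DI/T and RCI = CI/T); a trivial sketch with hypothetical bit counts and task times is shown below.

```python
def transmission_rates(di_bits, ci_bits, task_time_s):
    """Device-independent and computer-independent information transmission
    rates as defined in the abstract: RDI = DI/T and RCI = CI/T.
    Inputs are the transmitted information (bits) and task completion time (s)."""
    return di_bits / task_time_s, ci_bits / task_time_s

# Hypothetical comparison of the same task on two systems.
rdi_a, rci_a = transmission_rates(di_bits=120.0, ci_bits=80.0, task_time_s=30.0)
rdi_b, rci_b = transmission_rates(di_bits=120.0, ci_bits=80.0, task_time_s=45.0)
print(f"System A: RDI={rdi_a:.2f} bit/s, RCI={rci_a:.2f} bit/s")
print(f"System B: RDI={rdi_b:.2f} bit/s, RCI={rci_b:.2f} bit/s")
```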

  10. Breast histopathology image segmentation using spatio-colour-texture based graph partition method.

    PubMed

    Belsare, A D; Mushrif, M M; Pangarkar, M A; Meshram, N

    2016-06-01

    This paper proposes a novel integrated spatio-colour-texture based graph partitioning method for segmentation of nuclear arrangement in tubules with a lumen or in solid islands without a lumen from digitized Hematoxylin-Eosin stained breast histology images, in order to automate the process of breast histology image analysis and assist pathologists. We propose a new similarity-based superpixel generation method and integrate it with texton representation to form a spatio-colour-texture map of the breast histology image. A new weighted-distance-based similarity measure is then used for graph generation, and the final segmentation is obtained using the normalized cuts method. Extensive experiments show that the proposed algorithm can segment nuclear arrangement in normal as well as malignant ducts in breast histology tissue images. For evaluation of the proposed method, a ground-truth image database of 100 malignant and nonmalignant breast histology images was created with the help of two expert pathologists, and a quantitative evaluation of the proposed breast histology image segmentation was performed. The results show that the proposed method outperforms other methods.

  11. Performance evaluation of infrared imaging system in field test

    NASA Astrophysics Data System (ADS)

    Wang, Chensheng; Guo, Xiaodong; Ren, Tingting; Zhang, Zhi-jie

    2014-11-01

    Infrared imaging systems have been applied widely in both military and civilian fields. Since infrared imagers come in various types with different parameters, system manufacturers and customers have a strong demand for evaluating the performance of IR imaging systems with a standard tool or platform. Since the first-generation IR imager was developed, the standard method of assessing performance has been the MRTD or related improved methods, which are not well suited to current linear-scanning imagers or 2D staring imagers based on FPA detectors. To address this problem, this paper describes an evaluation method based on the triangular orientation discrimination (TOD) metric, which is considered an effective and emerging method for evaluating the overall performance of EO systems. To realize the evaluation in field tests, an experimental instrument was developed. Considering the importance of the operational environment, the field test was carried out in a practical atmospheric environment. The tested imagers include a panoramic imaging system and staring imaging systems with different optics and detector parameters (both cooled and uncooled). After describing the instrument and experimental setup, the experimental results are presented. The target range performance is analyzed and discussed. In the data analysis part, the article gives the range prediction values obtained from the TOD method, the MRTD method and the practical experiment, and presents the analysis and discussion of the results. The experimental results prove the effectiveness of this evaluation tool, and it can be taken as a platform to give a uniform performance prediction reference.

  12. A New Method for the Evaluation and Prediction of Base Stealing Performance.

    PubMed

    Bricker, Joshua C; Bailey, Christopher A; Driggers, Austin R; McInnis, Timothy C; Alami, Arya

    2016-11-01

    Bricker, JC, Bailey, CA, Driggers, AR, McInnis, TC, and Alami, A. A new method for the evaluation and prediction of base stealing performance. J Strength Cond Res 30(11): 3044-3050, 2016-The purposes of this study were to evaluate a new method using electronic timing gates to monitor base stealing performance in terms of reliability, differences between it and traditional stopwatch-collected times, and its ability to predict base stealing performance. Twenty-five healthy collegiate baseball players performed maximal-effort base stealing trials with a right- and a left-handed pitcher. An infrared electronic timing system was used to calculate the reaction time (RT) and total time (TT), whereas coaches' times (CT) were recorded with digital stopwatches. Reliability of the timing gate method (TGM) was evaluated with intraclass correlation coefficients (ICCs) and coefficients of variation (CV). Differences between the TGM and traditional CT were calculated with paired-samples t tests and Cohen's d effect size estimates. Base stealing performance predictability of the TGM was evaluated with Pearson's bivariate correlations. Acceptable relative reliability was observed (ICCs 0.74-0.84). Absolute reliability measures were acceptable for TT (CVs = 4.4-4.8%), but measures were elevated for RT (CVs = 32.3-35.5%). Statistical and practical differences were found between TT and CT (right p = 0.00, d = 1.28 and left p = 0.00, d = 1.49). The TGM TT seems to be a decent predictor of base stealing performance (r = -0.49 to -0.61). The authors recommend the TGM used in this investigation for athlete monitoring because it was found to be reliable, seems to be more precise than traditional CT measured with a stopwatch, provides an additional variable of value (RT), and may predict future performance.
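
    As a hedged sketch of the main comparison above, the code below runs a paired-samples t test, a simple paired-difference form of Cohen's d (which may differ from the authors' exact calculation), and a coefficient of variation on simulated timing-gate (TT) and stopwatch (CT) times.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical base-stealing total times for 25 athletes.
tt = rng.normal(3.45, 0.12, 25)                        # timing-gate total time (s)
ct = tt + rng.normal(0.15, 0.08, 25)                   # stopwatch times, assumed slower here

t_stat, p_val = stats.ttest_rel(tt, ct)                # paired-samples t test
diff = tt - ct
cohens_d = diff.mean() / diff.std(ddof=1)              # effect size of the TT-CT difference
cv_tt = 100.0 * tt.std(ddof=1) / tt.mean()             # coefficient of variation (%)

print(f"t = {t_stat:.2f}, p = {p_val:.4f}, d = {cohens_d:.2f}, CV(TT) = {cv_tt:.1f}%")
```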

  13. Reference-free ground truth metric for metal artifact evaluation in CT images.

    PubMed

    Kratz, Bärbel; Ens, Svitlana; Müller, Jan; Buzug, Thorsten M

    2011-07-01

    In computed tomography (CT), metal objects in the region of interest introduce data inconsistencies during acquisition. Reconstructing these data results in an image with star-shaped artifacts induced by the metal inconsistencies. To enhance image quality, the influence of the metal objects can be reduced by different metal artifact reduction (MAR) strategies. For an adequate evaluation of new MAR approaches a ground truth reference data set is needed. In technical evaluations, where phantoms can be measured with and without metal inserts, ground truth data can easily be obtained by a second reference data acquisition. Obviously, this is not possible for clinical data. Here, an alternative evaluation method is presented without the need for an additionally acquired reference data set. The proposed metric provides an inherent ground truth for evaluating metal artifacts and comparing MAR methods, with no reference information in the form of a second acquisition needed. The method is based on the forward projection of a reconstructed image, which is compared to the actually measured projection data. The new evaluation technique is performed on phantom and on clinical CT data with and without MAR. The metric results are then compared with methods using a reference data set as well as an expert-based classification. It is shown that the new approach is an adequate quantification technique for artifact strength in reconstructed metal or MAR CT images. The presented method works solely on the original projection data itself, which yields some advantages compared to distance measures in the image domain using two data sets. Besides this, no parameters have to be manually chosen. The new metric is a useful evaluation alternative when no reference data are available.
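
    A rough sketch of the reference-free idea described above: forward-project the reconstructed image and measure its discrepancy against the originally measured projection data. The sketch uses scikit-image's radon/iradon on a synthetic phantom rather than clinical CT data, and a simple normalized RMSE rather than the paper's exact metric; all of these choices are illustrative assumptions.

```python
import numpy as np
from skimage.transform import radon, iradon

# Synthetic phantom: a disk with a dense insert, standing in for a CT slice.
n = 128
yy, xx = np.mgrid[:n, :n]
phantom = ((xx - n / 2) ** 2 + (yy - n / 2) ** 2 < (0.4 * n) ** 2).astype(float)
phantom[56:72, 56:72] += 1.0

theta = np.linspace(0.0, 180.0, 180, endpoint=False)
measured = radon(phantom, theta=theta, circle=False)   # stands in for acquired projections

# Any reconstruction (or MAR output) can be checked against the measured data
# by forward-projecting it and measuring the discrepancy in projection space.
recon = iradon(measured, theta=theta, circle=False)
reprojected = radon(recon, theta=theta, circle=False)
nrmse = np.sqrt(np.mean((reprojected - measured) ** 2)) / np.ptp(measured)
print(f"projection-domain NRMSE: {nrmse:.4f}")
```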

  14. An Evaluation of Automotive Interior Packages Based on Human Ocular and Joint Motor Properties

    NASA Astrophysics Data System (ADS)

    Tanaka, Yoshiyuki; Rakumatsu, Takeshi; Horiue, Masayoshi; Miyazaki, Tooru; Nishikawa, Kazuo; Nouzawa, Takahide; Tsuji, Toshio

    This paper proposes a new evaluation method for automotive interior packages based on human oculomotor and joint-motor properties. Assuming a long-term driving situation on an expressway, three evaluation indices were designed: i) the ratio of head motion when gazing at the driving items; ii) the load torque for maintaining the standard driving posture; and iii) the human force manipulability at the end-point of the human extremities. Experiments were carried out for two different interior packages with four subjects who have specialist knowledge of automobile development. The evaluation results demonstrate that the proposed method can quantitatively analyze the driving interior, in good agreement with the generally accepted subjective opinion in the automobile industry.

  15. Mediating the Cognitive Walkthrough with Patient Groups to achieve Personalized Health in Chronic Disease Self-Management System Evaluation.

    PubMed

    Georgsson, Mattias; Kushniruk, Andre

    2016-01-01

    The cognitive walkthrough (CW) is a task-based, expert-inspection usability evaluation method with benefits such as cost effectiveness and efficiency. A drawback of the method is that it doesn't involve the perspective of real users but instead is based on experts' predictions about the usability of the system and how users interact with it. In this paper, we propose a way of involving the user in an expert evaluation method by modifying the CW with patient groups as mediators. This, along with other modifications, includes a dual-domain session facilitator, specific patient groups and three different phases: 1) a preparation phase where suitable tasks are developed by a panel of experts and patients and validated through the content validity index; 2) a patient user evaluation phase including an individual and a collaborative process part; 3) an analysis and coding phase where all data are digitalized and synthesized, making use of Qualitative Data Analysis Software (QDAS) to determine usability deficiencies. We predict that this way of evaluating will retain the benefits of expert methods while also providing a way of including the patient users of these self-management systems. Results from this prospective study should provide evidence of the usefulness of this method modification.

  16. Methodology Evaluation Framework for Component-Based System Development.

    ERIC Educational Resources Information Center

    Dahanayake, Ajantha; Sol, Henk; Stojanovic, Zoran

    2003-01-01

    Explains component-based development (CBD) for distributed information systems and presents an evaluation framework, which highlights the extent to which a methodology is component oriented. Compares prominent CBD methods, discusses ways of modeling, and suggests that this is a first step towards a components-oriented systems development…

  17. Evaluation of Two PCR-based Swine-specific Fecal Source Tracking Assays (Abstract)

    EPA Science Inventory

    Several PCR-based methods have been proposed to identify swine fecal pollution in environmental waters. However, the utility of these assays in identifying swine fecal contamination on a broad geographic scale is largely unknown. In this study, we evaluated the specificity, distr...

  18. The Evaluation of School-Based Violence Prevention Programs: A Meta-Analysis

    ERIC Educational Resources Information Center

    Park-Higgerson, Hyoun-Kyoung; Perumean-Chaney, Suzanne E.; Bartolucci, Alfred A.; Grimley, Diane M.; Singh, Karan P.

    2008-01-01

    Background: Youth violence and related aggressive behaviors have become serious public health issues with physical, economic, social, and psychological impacts and consequences. This study identified and evaluated the characteristics of successful school-based violence prevention programs. Methods: Twenty-six randomized controlled trial (RCT),…

  19. Influence on Learning of a Collaborative Learning Method Comprising the Jigsaw Method and Problem-based Learning (PBL).

    PubMed

    Takeda, Kayoko; Takahashi, Kiyoshi; Masukawa, Hiroyuki; Shimamori, Yoshimitsu

    2017-01-01

    Recently, the practice of active learning has spread, increasingly recognized as an essential component of academic studies. Classes incorporating small group discussion (SGD) are conducted at many universities. At present, assessments of the effectiveness of SGD have mostly involved evaluation by questionnaires conducted by teachers, by peer assessment, and by self-evaluation of students. However, qualitative data, such as open-ended descriptions by students, have not been widely evaluated. As a result, we have been unable to analyze the processes and methods involved in how students acquire knowledge in SGD. In recent years, due to advances in information and communication technology (ICT), text mining has enabled the analysis of qualitative data. We therefore investigated whether the introduction of a learning system comprising the jigsaw method and problem-based learning (PBL) would improve student attitudes toward learning; we did this by text mining analysis of the content of student reports. We found that by applying the jigsaw method before PBL, we were able to improve student attitudes toward learning and increase the depth of their understanding of the area of study as a result of working with others. The use of text mining to analyze qualitative data also allowed us to understand the processes and methods by which students acquired knowledge in SGD and also changes in students' understanding and performance based on improvements to the class. This finding suggests that the use of text mining to analyze qualitative data could enable teachers to evaluate the effectiveness of various methods employed to improve learning.

  20. Using Program Theory-Driven Evaluation Science to Crack the Da Vinci Code

    ERIC Educational Resources Information Center

    Donaldson, Stewart I.

    2005-01-01

    Program theory-driven evaluation science uses substantive knowledge, as opposed to method proclivities, to guide program evaluations. It aspires to update, clarify, simplify, and make more accessible the evolving theory of evaluation practice commonly referred to as theory-driven or theory-based evaluation. The evaluator in this chapter provides a…

  1. Defining a staged-based process for economic and financial evaluations of mHealth programs.

    PubMed

    LeFevre, Amnesty E; Shillcutt, Samuel D; Broomhead, Sean; Labrique, Alain B; Jones, Tom

    2017-01-01

    Mobile and wireless technology for health (mHealth) has the potential to improve health outcomes by addressing critical health systems constraints that impede coverage, utilization, and effectiveness of health services. To date, few mHealth programs have been implemented at scale and there remains a paucity of evidence on their effectiveness and value for money. This paper aims to improve understanding among mHealth program managers and key stakeholders of how to select methods for economic evaluation (comparative analysis for determining value for money) and financial evaluation (determination of the cost of implementing an intervention, estimation of costs for sustaining or expanding an intervention, and assessment of its affordability). We outline a 6 stage-based process for selecting and integrating economic and financial evaluation methods into the monitoring and evaluation of mHealth solutions including (1) defining the program strategy and linkages with key outcomes, (2) assessment of effectiveness, (3) full economic evaluation or partial evaluation, (4) sub-group analyses, (5) estimating resource requirements for expansion, (6) affordability assessment and identification of models for financial sustainability. While application of these stages optimally occurs linearly, finite resources, limited technical expertise, and the timing of evaluation initiation may impede this. We recommend that analysts prioritize economic and financial evaluation methods based on programmatic linkages with health outcomes; alignment with an mHealth solution's broader stage of maturity and stage of evaluation; overarching monitoring and evaluation activities; stakeholder evidence needs; time point of initiation; and available resources for evaluations.

  2. A Learner-Centered Grading Method Focused on Reaching Proficiency with Course Learning Outcomes

    ERIC Educational Resources Information Center

    Toledo, Santiago; Dubas, Justin M.

    2017-01-01

    Getting students to use grading feedback as a tool for learning is a continual challenge for educators. This work proposes a method for evaluating student performance that provides feedback to students based on standards of learning dictated by clearly delineated course learning outcomes. This method combines elements of standards-based grading…

  3. Comparison of SVM RBF-NN and DT for crop and weed identification based on spectral measurement over corn fields

    USDA-ARS?s Scientific Manuscript database

    It is important to find an appropriate pattern-recognition method for in-field plant identification based on spectral measurement in order to classify the crop and weeds accurately. In this study, the method of Support Vector Machine (SVM) was evaluated and compared with two other methods, Decision ...

  4. RNA-Based Methods Increase the Detection of Fecal Bacteria and Fecal Identifiers in Environmental Waters

    EPA Science Inventory

    We evaluated the use of qPCR RNA-based methods in the detection of fecal bacteria in environmental waters. We showed that RNA methods can increase the detection of fecal bacteria in multiple water matrices. The data suggest that this is a viable alternative for the detection of a...

  5. Cross-evaluation of ground-based, multi-satellite and reanalysis precipitation products: Applicability of the Triple Collocation method across Mainland China

    NASA Astrophysics Data System (ADS)

    Li, Changming; Tang, Guoqiang; Hong, Yang

    2018-07-01

    Evaluating the reliability of satellite and reanalysis precipitation products is critical but challenging over ungauged or poorly gauged regions. The Triple Collocation (TC) method is a reliable approach to estimate the accuracy of any three independent inputs in the absence of truth values. This study assesses the uncertainty of three types of independent precipitation products, i.e., satellite-based, ground-based and model reanalysis, over Mainland China using the TC method. The ground-based data set is the Gauge-Based Daily Precipitation Analysis (CGDPA). The reanalysis data set is the ERA-Interim reanalysis product. The satellite-based products include five mainstream satellite products. The comparison and evaluation are conducted at 0.25° and daily resolutions from 2013 to 2015. First, the effectiveness of the TC method is evaluated in South China with a dense gauge network. The results demonstrate that the TC method is reliable because the correlation coefficient (CC) and root mean square error (RMSE) derived from TC are close to those derived from ground observations, with only 9% and 7% mean relative differences, respectively. Then, the TC method is applied in Mainland China, with special attention paid to the Tibetan Plateau (TP), known as the Earth's third pole, with few ground stations. Results indicate that (1) the overall performance of IMERG is better than the other satellite products over Mainland China, followed by 3B42V7, CMORPH-CRT and PERSIANN-CDR; (2) in the TP, CGDPA shows the best overall performance over gauged grid cells; however, over ungauged regions, IMERG and ERA-Interim slightly outperform CGDPA with similar RMSE but higher mean CC (0.63, 0.61, and 0.58, respectively). This highlights the strengths and potential of remote sensing and reanalysis data over the TP and reconfirms the inherent uncertainty of CGDPA due to interpolation from sparse gauge data. The study concludes that the TC method provides not only reliable cross-validation results over Mainland China but also a new perspective for comparatively assessing multi-source precipitation products, particularly over poorly gauged regions such as the TP.
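
    A minimal sketch of the covariance-based form of triple collocation used to estimate per-product error variances without ground truth; the precipitation series are synthetic, and the study's implementation details (e.g. any rescaling or the CC variant of TC) are not reproduced here.

```python
import numpy as np

def triple_collocation_errors(x, y, z):
    """Covariance-based triple collocation: estimate the error variance of
    three collocated products with independent errors (e.g. gauge, satellite,
    reanalysis precipitation) without a ground truth. RMSE is the square root
    of each returned variance."""
    c = np.cov(np.vstack([x, y, z]))
    err_x = c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2]
    err_y = c[1, 1] - c[0, 1] * c[1, 2] / c[0, 2]
    err_z = c[2, 2] - c[0, 2] * c[1, 2] / c[0, 1]
    return err_x, err_y, err_z

# Synthetic daily precipitation with product-specific noise levels.
rng = np.random.default_rng(3)
truth = rng.gamma(shape=0.8, scale=5.0, size=3000)
gauge = truth + rng.normal(0, 1.0, truth.size)
satellite = truth + rng.normal(0, 2.0, truth.size)
reanalysis = truth + rng.normal(0, 3.0, truth.size)
print([round(v, 2) for v in triple_collocation_errors(gauge, satellite, reanalysis)])
```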

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, L; Yin, F; Cai, J

    Purpose: To develop a methodology of constructing physiological-based virtual thorax phantom based on hyperpolarized (HP) gas tagging MRI for evaluating deformable image registration (DIR). Methods: Three healthy subjects were imaged at both the end-of-inhalation (EOI) and the end-of-exhalation (EOE) phases using a high-resolution (2.5mm isovoxel) 3D proton MRI, as well as a hybrid MRI which combines HP gas tagging MRI and a low-resolution (4.5mm isovoxel) proton MRI. A sparse tagging displacement vector field (tDVF) was derived from the HP gas tagging MRI by tracking the displacement of tagging grids between EOI and EOE. Using the tDVF and the high-resolution MR images, we determined the motion model of the entire thorax in the following two steps: 1) the DVF inside of lungs was estimated based on the sparse tDVF using a novel multi-step natural neighbor interpolation method; 2) the DVF outside of lungs was estimated from the DIR between the EOI and EOE images (Velocity AI). The derived motion model was then applied to the high-resolution EOI image to create a deformed EOE image, forming the virtual phantom where the motion model provides the ground truth of deformation. Five DIR methods were evaluated using the developed virtual phantom. Errors in DVF magnitude (Em) and angle (Ea) were determined and compared for each DIR method. Results: Among the five DIR methods, free form deformation produced DVF results that are most closely resembling the ground truth (Em=1.04mm, Ea=6.63°). The two DIR methods based on B-spline produced comparable results (Em=2.04mm, Ea=13.66°; and Em=2.62mm, Ea=17.67°), and the two optical-flow methods produced least accurate results (Em=7.8mm; Ea=53.04°; Em=4.45mm, Ea=31.02°). Conclusion: A methodology for constructing physiological-based virtual thorax phantom based on HP gas tagging MRI has been developed. Initial evaluation demonstrated its potential as an effective tool for robust evaluation of DIR in the lung.
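
    The error metrics Em and Ea reported above can be illustrated as mean magnitude and angular differences between an estimated and a ground-truth displacement vector field; a hedged sketch on toy arrays follows (the study's exact definitions of Em and Ea may differ).

```python
import numpy as np

def dvf_errors(dvf_est, dvf_true):
    """Mean magnitude error Em (mm) and mean angular error Ea (degrees) between
    an estimated and a ground-truth displacement vector field.
    Both arrays have shape (..., 3) with displacements in mm."""
    est = np.asarray(dvf_est, dtype=float)
    true = np.asarray(dvf_true, dtype=float)
    em = np.linalg.norm(est - true, axis=-1)
    dot = np.sum(est * true, axis=-1)
    norms = np.linalg.norm(est, axis=-1) * np.linalg.norm(true, axis=-1)
    cos = np.clip(dot / np.maximum(norms, 1e-12), -1.0, 1.0)
    ea = np.degrees(np.arccos(cos))
    return em.mean(), ea.mean()

# Toy 4x4x4 field with a small perturbation of the ground truth.
rng = np.random.default_rng(5)
truth = rng.normal(0.0, 3.0, (4, 4, 4, 3))
estimate = truth + rng.normal(0.0, 1.0, truth.shape)
em, ea = dvf_errors(estimate, truth)
print(f"Em = {em:.2f} mm, Ea = {ea:.2f} deg")
```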

  7. Model‐based economic evaluations in smoking cessation and their transferability to new contexts: a systematic review

    PubMed Central

    Berg, Marrit L.; Cheung, Kei Long; Hiligsmann, Mickaël; Evers, Silvia; de Kinderen, Reina J. A.; Kulchaitanaroaj, Puttarin

    2017-01-01

    Aims To identify different types of models used in economic evaluations of smoking cessation, analyse the quality of the included models by examining their attributes and ascertain their transferability to a new context. Methods A systematic review of the literature on the economic evaluation of smoking cessation interventions published between 1996 and April 2015, identified via Medline, EMBASE, National Health Service (NHS) Economic Evaluation Database (NHS EED), Health Technology Assessment (HTA). The checklist-based quality of the included studies and transferability scores were based on the European Network of Health Economic Evaluation Databases (EURONHEED) criteria. Studies that were not in smoking cessation, not original research, not a model-based economic evaluation, that did not consider an adult population and not from a high-income country were excluded. Findings Among the 64 economic evaluations included in the review, the state-transition Markov model was the most frequently used method (n = 30/64), with quality-adjusted life years (QALY) being the most frequently used outcome measure over a life-time horizon. A small number of the included studies (13 of 64) were eligible for the EURONHEED transferability checklist. The overall transferability scores ranged from 0.50 to 0.97, with an average score of 0.75. The average score per section was 0.69 (range = 0.35-0.92). The relative transferability of the studies could not be established due to a limitation present in the EURONHEED method. Conclusion All existing economic evaluations in smoking cessation lack one or more key study attributes necessary to be fully transferable to a new context. PMID:28060453

  8. Word sense disambiguation in the clinical domain: a comparison of knowledge-rich and knowledge-poor unsupervised methods

    PubMed Central

    Chasin, Rachel; Rumshisky, Anna; Uzuner, Ozlem; Szolovits, Peter

    2014-01-01

    Objective To evaluate state-of-the-art unsupervised methods on the word sense disambiguation (WSD) task in the clinical domain. In particular, to compare graph-based approaches relying on a clinical knowledge base with bottom-up topic-modeling-based approaches. We investigate several enhancements to the topic-modeling techniques that use domain-specific knowledge sources. Materials and methods The graph-based methods use variations of PageRank and distance-based similarity metrics, operating over the Unified Medical Language System (UMLS). Topic-modeling methods use unlabeled data from the Multiparameter Intelligent Monitoring in Intensive Care (MIMIC II) database to derive models for each ambiguous word. We investigate the impact of using different linguistic features for topic models, including UMLS-based and syntactic features. We use a sense-tagged clinical dataset from the Mayo Clinic for evaluation. Results The topic-modeling methods achieve 66.9% accuracy on a subset of the Mayo Clinic's data, while the graph-based methods only reach the 40–50% range, with a most-frequent-sense baseline of 56.5%. Features derived from the UMLS semantic type and concept hierarchies do not produce a gain over bag-of-words features in the topic models, but identifying phrases from UMLS and using syntax does help. Discussion Although topic models outperform graph-based methods, semantic features derived from the UMLS prove too noisy to improve performance beyond bag-of-words. Conclusions Topic modeling for WSD provides superior results in the clinical domain; however, integration of knowledge remains to be effectively exploited. PMID:24441986

  9. Multifamily determination of pesticide residues in soya-based nutraceutical products by GC/MS-MS.

    PubMed

    Páleníková, Agneša; Martínez-Domínguez, Gerardo; Arrebola, Francisco Javier; Romero-González, Roberto; Hrouzková, Svetlana; Frenich, Antonia Garrido

    2015-04-15

    An analytical method based on a modified QuEChERS extraction coupled with gas chromatography-tandem mass spectrometry (GC-MS/MS) was evaluated for the determination of 177 pesticides in soya-based nutraceutical products. The QuEChERS method was optimised and different extraction solvents and clean-up approaches were tested, obtaining the most efficient conditions with a mixture of sorbents (PSA, C18, GBC and Zr-Sep(+)). Recoveries were evaluated at 10, 50 and 100 μg/kg and ranged between 70% and 120%. Precision was expressed as relative standard deviation (RSD), and it was evaluated for more than 160 pesticides as intra and inter-day precision, with values always below 20% and 25%, respectively. Limits of detection (LODs) ranged from 0.1 to 10 μg/kg, whereas limits of quantification (LOQs) from 0.5 to 20 μg/kg. The applicability of the method was proved by analysing soya-based nutraceuticals. Two pesticides were found in these samples, malathion and pyriproxyfen, at 11.1 and 1.5 μg/kg respectively.

  10. Research on the recycling industry development model for typical exterior plastic components of end-of-life passenger vehicle based on the SWOT method.

    PubMed

    Zhang, Hongshen; Chen, Ming

    2013-11-01

    In-depth studies on the recycling of typical automotive exterior plastic parts are significant and beneficial for environmental protection, energy conservation, and the sustainable development of China. In the current study, several methods were used to analyze the recycling industry model for typical exterior parts of passenger vehicles in China. The strengths, weaknesses, opportunities, and challenges of the current recycling industry for typical exterior parts of passenger vehicles were analyzed comprehensively based on the SWOT method. The internal factor evaluation matrix and external factor evaluation matrix were used to evaluate the internal and external factors of the recycling industry. The recycling industry was found to respond well to all the factors and to face good development opportunities. Then, a cross-linked strategy analysis for typical exterior parts in China's passenger car industry was conducted based on the SWOT analysis strategies and the established SWOT matrix. Finally, based on the aforementioned research, the recycling industry model led by automobile manufacturers was promoted.

  11. Validation of a Quantitative Single-Subject Based Evaluation for Rehabilitation-Induced Improvement Assessment.

    PubMed

    Gandolla, Marta; Molteni, Franco; Ward, Nick S; Guanziroli, Eleonora; Ferrigno, Giancarlo; Pedrocchi, Alessandra

    2015-11-01

    The foreseen outcome of a rehabilitation treatment is a stable improvement in functional outcomes, which can be longitudinally assessed through multiple measures to help clinicians in functional evaluation. In this study, we propose an automatic, comprehensive method of combining multiple measures in order to assess a functional improvement. As a test bed, a functional electrical stimulation based treatment for foot drop correction performed with chronic post-stroke participants is presented. Patients were assessed on five relevant outcome measures before and after the intervention, and at a follow-up time point. A novel algorithm based on the variables' minimum detectable change is proposed and implemented in custom-made software, combining the outcome measures into a single parameter: the capacity score. The difference between capacity scores at different time points is thresholded to obtain the improvement evaluation. Ten clinicians evaluated patients on the Improvement Clinical Global Impression scale. Eleven patients underwent the treatment, and five were found to achieve a stable functional improvement, as assessed by the proposed algorithm. A statistically significant agreement between intra-clinician and algorithm-clinician evaluations was demonstrated. The proposed method evaluates functional improvement on a single-subject yes/no basis by merging different measures (e.g., kinematic, muscular), and it is validated against clinical evaluation.

  12. Pulse Transit Time Measurement Using Seismocardiogram, Photoplethysmogram, and Acoustic Recordings: Evaluation and Comparison.

    PubMed

    Yang, Chenxi; Tavassolian, Negar

    2018-05-01

    This work proposes a novel method of pulse transit time (PTT) measurement. The proximal arterial location data are collected from seismocardiogram (SCG) recordings by placing a micro-electromechanical accelerometer on the chest wall. The distal arterial location data are recorded using an acoustic sensor placed inside the ear. The performance of distal location recordings is evaluated by comparing SCG-acoustic and SCG-photoplethysmogram (PPG) measurements. PPG and acoustic performances under motion noise are also compared. Experimental results suggest comparable performances for the acoustic-based and PPG-based devices. The feasibility of each PTT measurement method is validated for blood pressure evaluations and its limitations are analyzed.

  13. Evaluation of retrieval methods of daytime convective boundary layer height based on lidar data

    NASA Astrophysics Data System (ADS)

    Li, Hong; Yang, Yi; Hu, Xiao-Ming; Huang, Zhongwei; Wang, Guoyin; Zhang, Beidou; Zhang, Tiejun

    2017-04-01

    The atmospheric boundary layer height is a basic parameter in describing the structure of the lower atmosphere. Because of their high temporal resolution, ground-based lidar data are widely used to determine the daytime convective boundary layer height (CBLH), but the currently available retrieval methods have their advantages and drawbacks. In this paper, four methods of retrieving the CBLH (i.e., the gradient method, the idealized backscatter method, and two forms of the wavelet covariance transform method) from lidar normalized relative backscatter are evaluated, using two artificial cases (an idealized profile and a case similar to a real profile), to test their stability and accuracy. The results show that the gradient method is suitable for high signal-to-noise ratio conditions. The idealized backscatter method is less sensitive to the first estimate of the CBLH; however, it is computationally expensive. The results obtained from the two forms of the wavelet covariance transform method are influenced by the selection of the initial input value of the wavelet amplitude. Further sensitivity analysis using real profiles under different orders of magnitude of background counts shows that when different initial input values are set, the idealized backscatter method always obtains consistent CBLHs. For the two wavelet methods, different CBLHs are obtained as the wavelet amplitude increases when noise is significant. Finally, the CBLHs measured by three lidar-based methods are evaluated against those derived from L-band soundings. The boundary layer heights from the two instruments coincide within ±200 m in most situations.
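
    A minimal sketch of the Haar wavelet covariance transform variant mentioned above, applied to a synthetic range-corrected backscatter profile: the CBLH is taken at the dilation centre where the transform peaks. The dilation value, whose choice the abstract flags as influential, is set arbitrarily here.

```python
import numpy as np

def haar_wct(z, signal, a):
    """Haar wavelet covariance transform of a lidar backscatter profile.
    z: equally spaced range gates [m]; signal: range-corrected backscatter;
    a: dilation [m]. The boundary layer top is commonly taken where the
    transform is maximal (strongest backscatter decrease)."""
    dz = z[1] - z[0]
    half = int(round(a / (2 * dz)))
    w = np.full(signal.shape, np.nan, dtype=float)
    for i in range(half, len(z) - half):
        lower = signal[i - half:i].sum()
        upper = signal[i:i + half].sum()
        w[i] = (lower - upper) * dz / a
    return w

# Synthetic profile: high backscatter in the mixed layer, sharp drop near 1200 m.
z = np.arange(0.0, 3000.0, 15.0)
profile = 1.0 / (1.0 + np.exp((z - 1200.0) / 60.0))
profile += 0.02 * np.random.default_rng(4).normal(size=z.size)
wct = haar_wct(z, profile, a=300.0)
print("CBLH estimate:", z[np.nanargmax(wct)], "m")
```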

  14. A comparison of web-based and paper-based survey methods: testing assumptions of survey mode and response cost.

    PubMed

    Greenlaw, Corey; Brown-Welty, Sharon

    2009-10-01

    Web-based surveys have become more prevalent in areas such as evaluation, research, and marketing research, to name a few. The proliferation of these online surveys raises the question of how their response rates and costs compare with those of traditional surveys. This research explored response rates and costs for Web-based surveys, paper surveys, and mixed-mode surveys. The participants included evaluators from the American Evaluation Association (AEA). Results showed that the mixed-mode approach, while more expensive, had higher response rates.

  15. Comparative Evaluation of Vacuum-based Surface Sampling ...

    EPA Pesticide Factsheets

    Journal Article Following a biological contamination incident, collection of surface samples is necessary to determine the extent and level of contamination, and to deem an area safe for reentry upon decontamination. Current sampling strategies targeting Bacillus anthracis spores prescribe vacuum-based methods for rough and/or porous surfaces. In this study, four commonly-used B. anthracis spore sampling devices (vacuum socks, 37 mm 0.8 µm MCE filter cassettes, 37 mm 0.3 µm PTFE filter cassettes, and 3MTM forensic filters) were comparatively evaluated for their ability to recover surface-associated spores. The vacuum sock device was evaluated at two sampling speeds (slow and fast), resulting in five total methods evaluated. Aerosolized spores (~105 cm-2) of a surrogate Bacillus species (Bacillus atrophaeus) were allowed to settle onto three material types (concrete, carpet, and upholstery). Ten replicate samples were collected using each vacuum method, from each of the three material types. In addition, stainless steel (i.e., nonporous) surfaces inoculated simultaneously were sampled with pre-moistened wipes. Recoveries from wipes of steel surfaces were utilized to verify the inoculum, and to normalize vacuum-based recoveries across trials. Recovery (CFU cm-2) and relative recovery (vacuum recovery/wipe recovery) were determined for each method and material type. Relative recoveries were compared by one-way and three-way ANOVA. Data analysis by one-

  16. Statistical method evaluation for differentially methylated CpGs in base resolution next-generation DNA sequencing data.

    PubMed

    Zhang, Yun; Baheti, Saurabh; Sun, Zhifu

    2018-05-01

    High-throughput bisulfite methylation sequencing such as reduced representation bisulfite sequencing (RRBS), Agilent SureSelect Human Methyl-Seq (Methyl-seq) or whole-genome bisulfite sequencing is commonly used for base resolution methylome research. These data are represented either by the ratio of methylated cytosines versus total coverage at a CpG site or by the numbers of methylated and unmethylated cytosines. Multiple statistical methods can be used to detect differentially methylated CpGs (DMCs) between conditions, and these methods are often the basis for the next step of differentially methylated region identification. The ratio data offer the flexibility of fitting many linear models, whereas the raw count data take coverage information into account. There is an array of options in each data type for DMC detection; however, it is not clear which statistical method is optimal. In this study, we systematically evaluated four statistical methods on methylation ratio data and four methods on count-based data and compared their performances with regard to type I error control, sensitivity and specificity of DMC detection, and computational resource demands, using real RRBS data along with simulation. Our results show that the ratio-based tests are generally more conservative (less sensitive) than the count-based tests. However, some count-based methods have high false-positive rates and should be avoided. The beta-binomial model gives a good balance between sensitivity and specificity and is the preferred method. Selection of methods in different settings, signal versus noise, and sample size estimation are also discussed.
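    To make the beta-binomial idea concrete, the toy sketch below models methylated counts at one CpG with a beta-binomial distribution per group and compares a shared versus group-specific fit with a likelihood-ratio test. It illustrates the modeling principle only, under assumed data; it is not the code of any specific package evaluated in the study.

```python
# Toy beta-binomial likelihood-ratio test for one CpG site (illustration only).
import numpy as np
from scipy.stats import betabinom, chi2
from scipy.optimize import minimize

def neg_loglik(params, meth, cov):
    a, b = np.exp(params)               # keep alpha, beta positive
    return -betabinom.logpmf(meth, cov, a, b).sum()

def fit(meth, cov):
    res = minimize(neg_loglik, x0=[0.0, 0.0], args=(meth, cov), method="Nelder-Mead")
    return -res.fun                     # maximised log-likelihood

def dmc_pvalue(meth1, cov1, meth2, cov2):
    ll_alt = fit(meth1, cov1) + fit(meth2, cov2)            # separate models
    ll_null = fit(np.r_[meth1, meth2], np.r_[cov1, cov2])   # shared model
    lr = 2 * (ll_alt - ll_null)
    return chi2.sf(lr, df=2)

# Example: methylated / total read counts per sample at one CpG (made-up data)
print(dmc_pvalue(np.array([18, 20, 15]), np.array([20, 22, 18]),
                 np.array([5, 7, 6]),    np.array([21, 19, 20])))
```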

  17. Process service quality evaluation based on Dempster-Shafer theory and support vector machine.

    PubMed

    Pei, Feng-Que; Li, Dong-Bo; Tong, Yi-Fei; He, Fei

    2017-01-01

    Human involvement influences traditional service quality evaluations, leading to low accuracy, poor reliability and weak predictability. This paper proposes a method that employs a support vector machine (SVM) and Dempster-Shafer evidence theory to evaluate the service quality of a production process by handling a large number of input features with a small sampling data set; the method is called SVMs-DS. Features that can affect production quality are extracted by a large number of sensors. Preprocessing steps such as feature simplification and normalization are reduced. Based on three individual SVM models, the basic probability assignments (BPAs) are constructed, which support the evaluation in both a qualitative and a quantitative way. The process service quality evaluation results are validated by Dempster's rules; the decision threshold used to resolve conflicting results is generated from the three SVM models. A case study is presented to demonstrate the effectiveness of the SVMs-DS method.
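    Dempster's rule of combination, the evidence-fusion step named above, merges two basic probability assignments by multiplying masses of intersecting hypotheses and renormalizing by the non-conflicting mass. The sketch below shows that rule on made-up "good"/"bad" quality classes; the class labels and mass values are illustrative assumptions, not the paper's setup.

```python
# Sketch of Dempster's rule of combination for two BPAs over a small frame.

def dempster_combine(m1, m2):
    """Combine two BPAs given as dicts {frozenset(hypotheses): mass}."""
    combined, conflict = {}, 0.0
    for b, mb in m1.items():
        for c, mc in m2.items():
            inter = b & c
            if inter:
                combined[inter] = combined.get(inter, 0.0) + mb * mc
            else:
                conflict += mb * mc
    if conflict >= 1.0:
        raise ValueError("total conflict; rule undefined")
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

good, bad = frozenset({"good"}), frozenset({"bad"})
either = good | bad
m_svm1 = {good: 0.7, bad: 0.2, either: 0.1}   # hypothetical BPA from one SVM model
m_svm2 = {good: 0.6, bad: 0.1, either: 0.3}   # hypothetical BPA from another SVM model
print(dempster_combine(m_svm1, m_svm2))
```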

  18. Evaluation of Fear Using Nonintrusive Measurement of Multimodal Sensors

    PubMed Central

    Choi, Jong-Suk; Bang, Jae Won; Heo, Hwan; Park, Kang Ryoung

    2015-01-01

    Most previous research into emotion recognition used either a single modality or multiple modalities of physiological signals. However, the former approach allows only limited improvement in accuracy, while the latter has the disadvantage that its performance can be affected by head or body movements. Further, the latter causes inconvenience to the user due to the sensors attached to the body. Among various emotions, the accurate evaluation of fear is crucial in many applications, such as criminal psychology, intelligent surveillance systems and the objective evaluation of horror movies. Therefore, we propose a new method for evaluating fear based on nonintrusive measurements obtained using multiple sensors. Experimental results based on the t-test, the effect size and the sum of all of the correlation values with other modalities showed that facial temperature and subjective evaluation are more reliable than electroencephalogram (EEG) and eye blinking rate for the evaluation of fear. PMID:26205268

  19. ANTONIA perfusion and stroke. A software tool for the multi-purpose analysis of MR perfusion-weighted datasets and quantitative ischemic stroke assessment.

    PubMed

    Forkert, N D; Cheng, B; Kemmling, A; Thomalla, G; Fiehler, J

    2014-01-01

    The objective of this work is to present the software tool ANTONIA, which has been developed to facilitate a quantitative analysis of perfusion-weighted MRI (PWI) datasets in general as well as the subsequent multi-parametric analysis of additional datasets for the specific purpose of acute ischemic stroke patient dataset evaluation. Three different methods for the analysis of DSC or DCE PWI datasets are currently implemented in ANTONIA, which can be case-specifically selected based on the study protocol. These methods comprise a curve fitting method as well as a deconvolution-based and deconvolution-free method integrating a previously defined arterial input function. The perfusion analysis is extended for the purpose of acute ischemic stroke analysis by additional methods that enable an automatic atlas-based selection of the arterial input function, an analysis of the perfusion-diffusion and DWI-FLAIR mismatch as well as segmentation-based volumetric analyses. For reliability evaluation, the described software tool was used by two observers for quantitative analysis of 15 datasets from acute ischemic stroke patients to extract the acute lesion core volume, FLAIR ratio, perfusion-diffusion mismatch volume with manually as well as automatically selected arterial input functions, and follow-up lesion volume. The results of this evaluation revealed that the described software tool leads to highly reproducible results for all parameters if the automatic arterial input function selection method is used. Due to the broad selection of processing methods that are available in the software tool, ANTONIA is especially helpful to support image-based perfusion and acute ischemic stroke research projects.

  20. A new background distribution-based active contour model for three-dimensional lesion segmentation in breast DCE-MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Hui; Liu, Yiping; Qiu, Tianshuang

    2014-08-15

    Purpose: To develop and evaluate a computerized semiautomatic segmentation method for accurate extraction of three-dimensional lesions from dynamic contrast-enhanced magnetic resonance images (DCE-MRIs) of the breast. Methods: The authors propose a new background distribution-based active contour model using level set (BDACMLS) to segment lesions in breast DCE-MRIs. The method starts with manual selection of a region of interest (ROI) that contains the entire lesion in a single slice where the lesion is enhanced. The lesion volume is then separated from the volume data of interest, which is captured automatically. The core idea of BDACMLS is a new signed pressure function which is based solely on the intensity distribution combined with a pathophysiological basis. To compare the algorithm results, two experienced radiologists delineated all lesions jointly to obtain the ground truth. In addition, results generated by other level set (LS) based methods are also compared with the authors' method. Finally, the performance of the proposed method is evaluated by several region-based metrics such as the overlap ratio. Results: Forty-two studies with 46 lesions, comprising 29 benign and 17 malignant lesions, are evaluated. The dataset includes various typical pathologies of the breast such as invasive ductal carcinoma, ductal carcinoma in situ, scar carcinoma, phyllodes tumor, breast cysts, fibroadenoma, etc. The overlap ratio for BDACMLS with respect to manual segmentation is 79.55% ± 12.60% (mean ± s.d.). Conclusions: A new active contour model method has been developed and shown to successfully segment breast DCE-MRI three-dimensional lesions. The results from this model correspond more closely to manual segmentation, solve the weak-edge-passed problem, and improve the robustness in segmenting different lesions.

  1. Combining heuristic and statistical techniques in landslide hazard assessments

    NASA Astrophysics Data System (ADS)

    Cepeda, Jose; Schwendtner, Barbara; Quan, Byron; Nadim, Farrokh; Diaz, Manuel; Molina, Giovanni

    2014-05-01

    As a contribution to the Global Assessment Report 2013 - GAR2013, coordinated by the United Nations International Strategy for Disaster Reduction - UNISDR, a drill-down exercise for landslide hazard assessment was carried out by entering the results of both heuristic and statistical techniques into a new but simple combination rule. The data available for this evaluation included landslide inventories, both historical and event-based. In addition to the application of a heuristic method used in the previous editions of GAR, the availability of inventories motivated the use of statistical methods. The heuristic technique is largely based on the Mora & Vahrson method, which estimates hazard as the product of susceptibility and triggering factors, where classes are weighted based on expert judgment and experience. Two statistical methods were also applied: the landslide index method, which estimates weights of the classes for the susceptibility and triggering factors based on the evidence provided by the density of landslides in each class of the factors; and the weights of evidence method, which extends the previous technique to include both positive and negative evidence of landslide occurrence in the estimation of weights for the classes. One key aspect during the hazard evaluation was the decision on the methodology to be chosen for the final assessment. Instead of opting for a single methodology, it was decided to combine the results of the three implemented techniques using a combination rule based on a normalization of the results of each method. The hazard evaluation was performed for both earthquake- and rainfall-induced landslides. The country chosen for the drill-down exercise was El Salvador. The results indicate that highest hazard levels are concentrated along the central volcanic chain and at the centre of the northern mountains.

  2. Medical Literature Evaluation Education at US Schools of Pharmacy

    PubMed Central

    Phillips, Jennifer; Demaris, Kendra

    2016-01-01

    Objective. To determine how medical literature evaluation (MLE) is being taught across the United States and to summarize methods for teaching and assessing MLE. Methods. An 18-question survey was administered to faculty members whose primary responsibility was teaching MLE at schools and colleges of pharmacy. Results. Responses were received from 90 (71%) US schools of pharmacy. The most common method of integrating MLE into the curriculum was as a stand-alone course (49%). The most common placement was during the second professional year (43%) or integrated throughout the curriculum (25%). The majority (77%) of schools used a team-based approach. The use of active-learning strategies was common as was the use of multiple methods of evaluation. Responses varied regarding what role the course director played in incorporating MLE into advanced pharmacy practice experiences (APPEs). Conclusion. There is a trend toward incorporating MLE education components throughout the pre-APPE curriculum and placement of literature review/evaluation exercises into therapeutics practice skills laboratories to help students see how this skill integrates into other patient care skills. Several pre-APPE educational standards for MLE education exist, including journal club activities, a team-based approach to teaching and evaluation, and use of active-learning techniques. PMID:26941431

  3. Mirage events & driver haptic steering alerts in a motion-base driving simulator: A method for selecting an optimal HMI.

    PubMed

    Talamonti, Walter; Tijerina, Louis; Blommer, Mike; Swaminathan, Radhakrishnan; Curry, Reates; Ellis, R Darin

    2017-11-01

    This paper describes a new method, a 'mirage scenario,' to support formative evaluation of driver alerting or warning displays for manual and automated driving. This method provides driving contexts (e.g., various Times-To-Collision (TTCs) to a lead vehicle) briefly presented and then removed. In the present study, during each mirage event, a haptic steering display was evaluated. This haptic display indicated a steering response may be initiated to drive around an obstacle ahead. A motion-base simulator was used in a 32-participant study to present vehicle motion cues similar to the actual application. Surprise was neither present nor of concern, as it would be for a summative evaluation of a forward collision warning system. Furthermore, no collision avoidance maneuvers were performed, thereby reducing the risk of simulator sickness. This paper illustrates the mirage scenario procedures, the rating methods and definitions used with the mirage scenario, and analysis of the ratings obtained, together with a multi-attribute utility theory (MAUT) approach to evaluate and select among alternative designs for future summative evaluation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Initial Usability and Feasibility Evaluation of a Personal Health Record-Based Self-Management System for Older Adults.

    PubMed

    Sheehan, Barbara; Lucero, Robert J

    2015-01-01

    Electronic personal health record-based (ePHR-based) self-management systems can improve patient engagement and have an impact on health outcomes. In order to realize the benefits of these systems, there is a need to develop and evaluate health information technology from the same theoretical underpinnings. Using an innovative usability approach based on human-centered distributed information design (HCDID), we tested an ePHR-based falls-prevention self-management system, Self-Assessment via a Personal Health Record (SAPHeR), designed using HCDID principles in a laboratory. We later evaluated SAPHeR's use by community-dwelling older adults at home. The innovative approach used in this study supported the analysis of four components: tasks, users, representations, and functions. Tasks were easily learned, and features such as text-associated images facilitated task completion. Task performance times were slow; however, user satisfaction was high. Nearly seven out of every ten features desired by design participants were evaluated in our usability testing of the SAPHeR system. The in vivo evaluation suggests that older adults could improve their confidence in performing indoor and outdoor activities after using the SAPHeR system. We have applied an innovative consumer-usability evaluation. Our approach addresses the limitations of other usability testing methods that do not utilize consistent theoretically based methods for designing and testing technology. We have successfully demonstrated the utility of testing consumer technology use across multiple components (i.e., task, user, representational, functional) to evaluate the usefulness, usability, and satisfaction of an ePHR-based self-management system.

  5. A Consistency Evaluation and Calibration Method for Piezoelectric Transmitters.

    PubMed

    Zhang, Kai; Tan, Baohai; Liu, Xianping

    2017-04-28

    Array transducer and transducer combination technologies are evolving rapidly. When adopting transmitter combination technologies, parameter consistency between the individual transmitters is extremely important because it directly determines the combined effect. This study presents a consistency evaluation and calibration method for piezoelectric transmitters that uses impedance analyzers. Firstly, the electronic parameters of transmitters that can be measured by impedance analyzers are introduced. The variations in transmitter acoustic energy caused by differences in these parameters are then analyzed and verified, and transmitter consistency is evaluated on this basis. Lastly, based on the evaluations, consistency can be calibrated by changing the corresponding excitation voltage. Acoustic experiments show that this method accurately evaluates and calibrates transducer consistency and is easy to realize.

  6. Improved volumetric measurement of brain structure with a distortion correction procedure using an ADNI phantom.

    PubMed

    Maikusa, Norihide; Yamashita, Fumio; Tanaka, Kenichiro; Abe, Osamu; Kawaguchi, Atsushi; Kabasawa, Hiroyuki; Chiba, Shoma; Kasahara, Akihiro; Kobayashi, Nobuhisa; Yuasa, Tetsuya; Sato, Noriko; Matsuda, Hiroshi; Iwatsubo, Takeshi

    2013-06-01

    Serial magnetic resonance imaging (MRI) images acquired from multisite and multivendor MRI scanners are widely used in measuring longitudinal structural changes in the brain. Precise and accurate measurements are important in understanding the natural progression of neurodegenerative disorders such as Alzheimer's disease. However, geometric distortions in MRI images decrease the accuracy and precision of volumetric or morphometric measurements. To solve this problem, the authors suggest a commercially available phantom-based distortion correction method that accommodates the variation in geometric distortion within MRI images obtained with multivendor MRI scanners. The authors' method is based on image warping using a polynomial function. The method detects fiducial points within a phantom image using phantom analysis software developed by the Mayo Clinic and calculates warping functions for distortion correction. To quantify the effectiveness of the authors' method, the authors corrected phantom images obtained from multivendor MRI scanners and calculated the root-mean-square (RMS) of fiducial errors and the circularity ratio as evaluation values. The authors also compared the performance of the authors' method with that of a distortion correction method based on a spherical harmonics description of the generic gradient design parameters. Moreover, the authors evaluated whether this correction improves the test-retest reproducibility of voxel-based morphometry in human studies. A Wilcoxon signed-rank test with uncorrected and corrected images was performed. The root-mean-square errors and circularity ratios for all slices significantly improved (p < 0.0001) after the authors' distortion correction. Additionally, the authors' method was significantly better than a distortion correction method based on a description of spherical harmonics in improving the distortion of root-mean-square errors (p < 0.001 and 0.0337, respectively). Moreover, the authors' method reduced the RMS error arising from gradient nonlinearity more than gradwarp methods. In human studies, the coefficient of variation of voxel-based morphometry analysis of the whole brain improved significantly from 3.46% to 2.70% after distortion correction of the whole gray matter using the authors' method (Wilcoxon signed-rank test, p < 0.05). The authors proposed a phantom-based distortion correction method to improve reproducibility in longitudinal structural brain analysis using multivendor MRI. The authors evaluated the authors' method for phantom images in terms of two geometrical values and for human images in terms of test-retest reproducibility. The results showed that distortion was corrected significantly using the authors' method. In human studies, the reproducibility of voxel-based morphometry analysis for the whole gray matter significantly improved after distortion correction using the authors' method.
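    The polynomial-warping idea behind this kind of correction can be illustrated in two dimensions: fit a polynomial mapping from the measured (distorted) fiducial positions to their known true positions, then apply that mapping to arbitrary coordinates. The polynomial degree, fiducial grid, and toy distortion below are assumptions for illustration, not the authors' implementation.

```python
# Simplified 2-D sketch of phantom-fiducial-based polynomial distortion correction.
import numpy as np

def poly_terms(x, y):
    """Second-order polynomial basis in two coordinates."""
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

def fit_warp(measured, true):
    """Least-squares coefficients mapping measured (N,2) -> true (N,2)."""
    A = poly_terms(measured[:, 0], measured[:, 1])
    coeffs, *_ = np.linalg.lstsq(A, true, rcond=None)
    return coeffs

def apply_warp(coeffs, pts):
    return poly_terms(pts[:, 0], pts[:, 1]) @ coeffs

# Toy example: true fiducial grid and a synthetically distorted copy
gx, gy = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
true_pts = np.column_stack([gx.ravel(), gy.ravel()])
distorted = true_pts + 0.05 * true_pts**2          # mild nonlinear distortion
coeffs = fit_warp(distorted, true_pts)
print(np.abs(apply_warp(coeffs, distorted) - true_pts).max())  # small residual
```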

  7. MR Imaging-based Semi-quantitative Methods for Knee Osteoarthritis

    PubMed Central

    JARRAYA, Mohamed; HAYASHI, Daichi; ROEMER, Frank Wolfgang; GUERMAZI, Ali

    2016-01-01

    Magnetic resonance imaging (MRI)-based semi-quantitative (SQ) methods applied to knee osteoarthritis (OA) were introduced during the last decade and have since fundamentally changed our understanding of knee OA pathology. Several epidemiological studies and clinical trials have used MRI-based SQ methods to evaluate different outcome measures. Interest in MRI-based SQ scoring systems has led to their continuous update and refinement. This article reviews the different SQ approaches for MRI-based whole-organ assessment of knee OA and also discusses practical aspects of whole-joint assessment. PMID:26632537

  8. Development and validation of methods for man-machine interface evaluation. [for shuttles and shuttle payloads

    NASA Technical Reports Server (NTRS)

    Malone, T. B.; Micocci, A.

    1975-01-01

    The alternative methods of conducting a man-machine interface evaluation are classified as static and dynamic, and are evaluated. A dynamic evaluation tool is presented to provide for a determination of the effectiveness of the man-machine interface in terms of the sequence of operations (tasks and task sequences) and in terms of the physical characteristics of the interface. This dynamic checklist approach is recommended for shuttle and shuttle payload man-machine interface evaluations based on reduced preparation time, reduced data, and increased sensitivity to critical problems.

  9. Automated Text Analysis Based on Skip-Gram Model for Food Evaluation in Predicting Consumer Acceptance

    PubMed Central

    Kim, Augustine Yongwhi; Choi, Hoduk

    2018-01-01

    The purpose of this paper is to evaluate food taste, smell, and characteristics from consumers' online reviews. Several studies in food sensory evaluation have been presented for consumer acceptance. However, these studies require a taste-descriptive word lexicon, and they are not suitable for analyzing a large number of evaluators to predict consumer acceptance. In this paper, an automated text analysis method for food evaluation is presented to analyze and compare two recently introduced jjampong ramen types (mixed seafood noodles). To avoid building a sensory word lexicon, consumers' reviews are collected from SNS. Then, by training a word embedding model with the acquired reviews, words in the large amount of review text are converted into vectors. Based on these words represented as vectors, inference is performed to evaluate the taste and smell of the two jjampong ramen types. Finally, the reliability and merits of the proposed food evaluation method are confirmed by a comparison with the results from an actual consumer preference taste evaluation. PMID:29606960

  10. Automated Text Analysis Based on Skip-Gram Model for Food Evaluation in Predicting Consumer Acceptance.

    PubMed

    Kim, Augustine Yongwhi; Ha, Jin Gwan; Choi, Hoduk; Moon, Hyeonjoon

    2018-01-01

    The purpose of this paper is to evaluate food taste, smell, and characteristics from consumers' online reviews. Several studies in food sensory evaluation have been presented for consumer acceptance. However, these studies require a taste-descriptive word lexicon, and they are not suitable for analyzing a large number of evaluators to predict consumer acceptance. In this paper, an automated text analysis method for food evaluation is presented to analyze and compare two recently introduced jjampong ramen types (mixed seafood noodles). To avoid building a sensory word lexicon, consumers' reviews are collected from SNS. Then, by training a word embedding model with the acquired reviews, words in the large amount of review text are converted into vectors. Based on these words represented as vectors, inference is performed to evaluate the taste and smell of the two jjampong ramen types. Finally, the reliability and merits of the proposed food evaluation method are confirmed by a comparison with the results from an actual consumer preference taste evaluation.
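    The skip-gram embedding step named above can be illustrated with gensim's Word2Vec, where sg=1 selects the skip-gram architecture. The tiny tokenized "review" corpus, vector dimensions, and probe word below are placeholders, not the paper's data or hyperparameters.

```python
# Illustrative skip-gram training on tokenized review text (placeholder corpus).
from gensim.models import Word2Vec

reviews = [
    ["spicy", "broth", "seafood", "noodles", "rich"],
    ["mild", "broth", "bland", "noodles"],
    ["spicy", "smell", "strong", "seafood"],
]

model = Word2Vec(sentences=reviews, vector_size=50, window=3,
                 min_count=1, sg=1, epochs=50, seed=1)

# Words close to "spicy" in the learned vector space
print(model.wv.most_similar("spicy", topn=3))
```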

  11. Hopes and Cautions for Instrument-Based Evaluation of Consent Capacity: Results of a Construct Validity Study of Three Instruments

    PubMed Central

    Moye, Jennifer; Azar, Annin R.; Karel, Michele J.; Gurrera, Ronald J.

    2016-01-01

    Does instrument based evaluation of consent capacity increase the precision and validity of competency assessment or does ostensible precision provide a false sense of confidence without in fact improving validity? In this paper we critically examine the evidence for construct validity of three instruments for measuring four functional abilities important in consent capacity: understanding, appreciation, reasoning, and expressing a choice. Instrument based assessment of these abilities is compared through investigation of a multi-trait multi-method matrix in 88 older adults with mild to moderate dementia. Results find variable support for validity. There appears to be strong evidence for good hetero-method validity for the measurement of understanding, mixed evidence for validity in the measurement of reasoning, and strong evidence for poor hetero-method validity for the concepts of appreciation and expressing a choice, although the latter is likely due to extreme range restrictions. The development of empirically based tools for use in capacity evaluation should ultimately enhance the reliability and validity of assessment, yet clearly more research is needed to define and measure the constructs of decisional capacity. We would also emphasize that instrument based assessment of capacity is only one part of a comprehensive evaluation of competency which includes consideration of diagnosis, psychiatric and/or cognitive symptomatology, risk involved in the situation, and individual and cultural differences. PMID:27330455

  12. Merit Evaluation Of Competitors In Debate And Recitation Competitions By Fuzzy Approach

    NASA Astrophysics Data System (ADS)

    Mukherjee, Supratim; Bhattacharyya, Rupak; Chatterjee, Amitava; Kar, Samarjit

    2010-10-01

    Co-curricular activities have great importance in students' lives, especially for developing their personality and communication skills. In the various processes of evaluating competitors in such competitions, crisp techniques are generally used. In this paper, we introduce a new fuzzy-set-theory-based method for evaluating competitors in co-curricular activities such as debate and recitation competitions. The proposed method is illustrated by two examples.

  13. Annoyance rate evaluation method on ride comfort of vehicle suspension system

    NASA Astrophysics Data System (ADS)

    Tang, Chuanyin; Zhang, Yimin; Zhao, Guangyao; Ma, Yan

    2014-03-01

    Existing research on the evaluation of vehicle ride comfort mainly focuses on the level of human sensitivity to vibration. This sensitivity is influenced by many factors, yet ride comfort assessed with conventional probability and statistics and simple binary logic cannot reflect these uncertainties. In this paper, a random fuzzy evaluation model of people's subjective response to vibration is adopted, and these uncertainties are analyzed from the perspective of psychophysics. After discussing the traditional evaluation of ride comfort during vehicle vibration, a fuzzy random evaluation model based on the annoyance rate is proposed for the human body's subjective response to vibration, with the relevant fuzzy membership function and probability distribution given. A half-car, four-degree-of-freedom suspension vibration model subject to irregular excitation from the road surface is described with the aid of Matlab/Simulink. A new evaluation method for vehicle ride comfort is thus proposed: the annoyance rate evaluation method. Genetic algorithms and neural network control theory are used to control the system. Simulation results are obtained, including comparisons of the comfort reaction to the vibration environment before and after control, and the relationship of annoyance rate to vibration frequency and weighted acceleration, based on ISO 2631/1 (1982), ISO 2631-1 (1997) and the annoyance rate evaluation method, respectively. The simulated assessment results indicate that the proposed active suspension system is effective in isolating suspension vibration and that the subjective response of the human body can be improved from very uncomfortable to a little uncomfortable. Furthermore, the annoyance rate-based evaluation method can quantitatively estimate the number of passengers who feel discomfort due to vibration. In sum, a new method for analyzing vehicle comfort is presented.
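    One generic way to read the annoyance-rate idea is as the expected degree of discomfort over the spread of individual subjective responses at a given weighted acceleration. The sketch below integrates an assumed fuzzy membership curve against an assumed lognormal response distribution; the membership anchors, spread, and grid are illustrative assumptions, not the paper's calibrated model.

```python
# Loosely hedged annoyance-rate style calculation (all parameters assumed).
import numpy as np
from scipy.stats import lognorm

def membership_uncomfortable(u, u_low=0.315, u_high=2.0):
    """Fuzzy degree of discomfort rising from 0 to 1 between two anchors
    (anchors loosely inspired by ISO 2631 comfort bands)."""
    return np.clip((u - u_low) / (u_high - u_low), 0.0, 1.0)

def annoyance_rate(a_w, sigma=0.3, n=2000):
    """Expected membership over individual responses scattered around a_w."""
    dist = lognorm(s=sigma, scale=a_w)        # median response equals a_w
    u = np.linspace(1e-3, 5 * a_w, n)
    return np.trapz(membership_uncomfortable(u) * dist.pdf(u), u)

for a in (0.3, 0.6, 1.0, 1.6):                # weighted r.m.s. accelerations (m/s^2)
    print(a, round(annoyance_rate(a), 3))
```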

  14. Qualitative Methods in Field Research: An Indonesian Experience in Community Based Practice.

    ERIC Educational Resources Information Center

    Lysack, Catherine L.; Krefting, Laura

    1994-01-01

    Cross-cultural evaluation of a community-based rehabilitation project in Indonesia used three methods: focus groups, questionnaires, and key informant interviews. A continuous cyclical approach to data collection and concern for cultural sensitivity increased the rigor of the research. (SK)

  15. Research on simulated infrared image utility evaluation using deep representation

    NASA Astrophysics Data System (ADS)

    Zhang, Ruiheng; Mu, Chengpo; Yang, Yu; Xu, Lixin

    2018-01-01

    Infrared (IR) image simulation is an important data source for various target recognition systems. However, whether simulated IR images can be used as training data for classifiers depends on the fidelity and authenticity of the simulated images' features. For the evaluation of IR image features, a deep-representation-based algorithm is proposed. Unlike conventional methods, which usually adopt a priori knowledge or manually designed features, the proposed method can extract essential features and quantitatively evaluate the utility of simulated IR images. First, for data preparation, we employ our IR image simulation system to generate large numbers of IR images. Then, we present the evaluation model for simulated IR images, for which an end-to-end IR feature extraction and target detection model based on a deep convolutional neural network is designed. Finally, the experiments illustrate that our proposed method outperforms other verification algorithms in evaluating simulated IR images. Cross-validation, variable-proportion mixed-data validation, and simulation process contrast experiments are carried out to evaluate the utility and objectivity of the images generated by our simulation system. The optimum mixing ratio between simulated and real data is 0.2≤γ≤0.3, which provides an effective data augmentation method for real IR images.

  16. Knowledge based word-concept model estimation and refinement for biomedical text mining.

    PubMed

    Jimeno Yepes, Antonio; Berlanga, Rafael

    2015-02-01

    Text mining of scientific literature has been essential for setting up large public biomedical databases, which are being widely used by the research community. In the biomedical domain, the existence of a large number of terminological resources and knowledge bases (KB) has enabled a myriad of machine learning methods for different text mining related tasks. Unfortunately, KBs have not been devised for text mining tasks but for human interpretation, thus performance of KB-based methods is usually lower when compared to supervised machine learning methods. The disadvantage of supervised methods though is they require labeled training data and therefore not useful for large scale biomedical text mining systems. KB-based methods do not have this limitation. In this paper, we describe a novel method to generate word-concept probabilities from a KB, which can serve as a basis for several text mining tasks. This method not only takes into account the underlying patterns within the descriptions contained in the KB but also those in texts available from large unlabeled corpora such as MEDLINE. The parameters of the model have been estimated without training data. Patterns from MEDLINE have been built using MetaMap for entity recognition and related using co-occurrences. The word-concept probabilities were evaluated on the task of word sense disambiguation (WSD). The results showed that our method obtained a higher degree of accuracy than other state-of-the-art approaches when evaluated on the MSH WSD data set. We also evaluated our method on the task of document ranking using MEDLINE citations. These results also showed an increase in performance over existing baseline retrieval approaches. Copyright © 2014 Elsevier Inc. All rights reserved.

  17. Improving the clinical assessment of consciousness with advances in electrophysiological and neuroimaging techniques

    PubMed Central

    2010-01-01

    In clinical neurology, a comprehensive understanding of consciousness has been regarded as an abstract concept - best left to philosophers. However, times are changing and the need to clinically assess consciousness is increasingly becoming a real-world, practical challenge. Current methods for evaluating altered levels of consciousness are highly reliant on either behavioural measures or anatomical imaging. While these methods have some utility, estimates of misdiagnosis are worrisome (as high as 43%) - clearly this is a major clinical problem. The solution must involve objective, physiologically based measures that do not rely on behaviour. This paper reviews recent advances in physiologically based measures that enable better evaluation of consciousness states (coma, vegetative state, minimally conscious state, and locked in syndrome). Based on the evidence to-date, electroencephalographic and neuroimaging based assessments of consciousness provide valuable information for evaluation of residual function, formation of differential diagnoses, and estimation of prognosis. PMID:20113490

  18. Evaluation of a physically based quasi-linear and a conceptually based nonlinear Muskingum methods

    NASA Astrophysics Data System (ADS)

    Perumal, Muthiah; Tayfur, Gokmen; Rao, C. Madhusudana; Gurarslan, Gurhan

    2017-03-01

    Two variants of the Muskingum flood routing method formulated to account for the nonlinearity of the channel routing process are investigated in this study. These variant methods are: (1) the three-parameter conceptual Nonlinear Muskingum (NLM) method advocated by Gill in 1978, and (2) the Variable Parameter McCarthy-Muskingum (VPMM) method recently proposed by Perumal and Price in 2013. The VPMM method does not require the rigorous calibration and validation procedures needed for the NLM method, because its parameters are related to flow and channel characteristics through established relationships based on hydrodynamic principles. The parameters of the conceptual nonlinear storage equation used in the NLM method were calibrated using Artificial Intelligence Application (AIA) techniques, such as the Genetic Algorithm (GA), Differential Evolution (DE), Particle Swarm Optimization (PSO) and Harmony Search (HS). The calibration was carried out on a given set of hypothetical flood events obtained by routing a given inflow hydrograph through a set of 40 km long prismatic channel reaches using the Saint-Venant (SV) equations. The validation of the calibrated NLM method was investigated using a different set of hypothetical flood hydrographs obtained in the same set of channel reaches used in the calibration studies. Both sets of solutions obtained in the calibration and validation cases using the NLM method were compared with the corresponding solutions of the VPMM method based on pertinent evaluation measures. The results of the study reveal that the physically based VPMM method accounts for the nonlinear characteristics of flood wave movement better than the conceptually based NLM method, which requires tedious calibration and validation procedures.
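    The conceptual NLM storage relation is commonly written as S = K[xI + (1-x)O]^m, routed together with the continuity equation dS/dt = I - O. The sketch below steps that pair forward with simple Euler integration; the parameter values and inflow hydrograph are made up, and in the study K, x, and m were calibrated with GA/DE/PSO/HS rather than fixed as here.

```python
# Hedged sketch of three-parameter nonlinear Muskingum routing (illustrative values).
import numpy as np

def nlm_route(inflow, dt, K, x, m):
    outflow = np.zeros_like(inflow, dtype=float)
    outflow[0] = inflow[0]                        # assume initial steady state
    S = K * (x * inflow[0] + (1 - x) * outflow[0]) ** m
    for t in range(len(inflow) - 1):
        S += dt * (inflow[t] - outflow[t])        # continuity: dS/dt = I - O
        outflow[t + 1] = ((S / K) ** (1.0 / m) - x * inflow[t + 1]) / (1 - x)
    return outflow

inflow = np.array([10, 30, 80, 120, 90, 60, 35, 20, 12, 10], dtype=float)
print(nlm_route(inflow, dt=3600.0, K=3000.0, x=0.2, m=1.2))
```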

  19. Plant species classification using flower images—A comparative study of local feature representations

    PubMed Central

    Seeland, Marco; Rzanny, Michael; Alaqraa, Nedal; Wäldchen, Jana; Mäder, Patrick

    2017-01-01

    Steady improvements of image description methods induced a growing interest in image-based plant species classification, a task vital to the study of biodiversity and ecological sensitivity. Various techniques have been proposed for general object classification over the past years and several of them have already been studied for plant species classification. However, results of these studies are selective in the evaluated steps of a classification pipeline, in the utilized datasets for evaluation, and in the compared baseline methods. No study is available that evaluates the main competing methods for building an image representation on the same datasets allowing for generalized findings regarding flower-based plant species classification. The aim of this paper is to comparatively evaluate methods, method combinations, and their parameters towards classification accuracy. The investigated methods span from detection, extraction, fusion, pooling, to encoding of local features for quantifying shape and color information of flower images. We selected the flower image datasets Oxford Flower 17 and Oxford Flower 102 as well as our own Jena Flower 30 dataset for our experiments. Findings show large differences among the various studied techniques and that their wisely chosen orchestration allows for high accuracies in species classification. We further found that true local feature detectors in combination with advanced encoding methods yield higher classification results at lower computational costs compared to commonly used dense sampling and spatial pooling methods. Color was found to be an indispensable feature for high classification results, especially while preserving spatial correspondence to gray-level features. In result, our study provides a comprehensive overview of competing techniques and the implications of their main parameters for flower-based plant species classification. PMID:28234999

  20. Evaluation of a Web-Based Training in Smoking Cessation Counseling Targeting U.S. Eye-Care Professionals

    ERIC Educational Resources Information Center

    Asfar, Taghrid; Lee, David J.; Lam, Byron L.; Murchison, Ann P.; Mayro, Eileen L.; Owsley, Cynthia; McGwin, Gerald; Gower, Emily W.; Friedman, David S.; Saaddine, Jinan

    2018-01-01

    Background: Smoking causes blindness-related diseases. Eye-care providers are uniquely positioned to help their patients quit smoking. Aims: Using a pre-/postevaluation design, this study evaluated a web-based training in smoking cessation counseling targeting eye-care providers. Method: The training was developed based on the 3A1R protocol:…

  1. Is Performance Feedback for Educators an Evidence-Based Practice? A Systematic Review and Evaluation Based on Single-Case Research

    ERIC Educational Resources Information Center

    Fallon, Lindsay M.; Collier-Meek, Melissa A.; Maggin, Daniel M.; Sanetti, Lisa M. H.; Johnson, Austin H.

    2015-01-01

    Optimal levels of treatment fidelity, a critical moderator of intervention effectiveness, are often difficult to sustain in applied settings. It is unknown whether performance feedback, a widely researched method for increasing educators' treatment fidelity, is an evidence-based practice. The purpose of this review was to evaluate the current…

  2. The Application of FIA-based Data to Wildlife Habitat Modeling: A Comparative Study

    Treesearch

    Thomas C., Jr. Edwards; Gretchen G. Moisen; Tracey S. Frescino; Randall J. Schultz

    2005-01-01

    We evaluated the capability of two types of models, one based on spatially explicit variables derived from FIA data and one using so-called traditional habitat evaluation methods, for predicting the presence of cavity-nesting bird habitat in Fishlake National Forest, Utah. Both models performed equally well, in measures of predictive accuracy, with the FIA-based model...

  3. Evaluation of a Theory-Based Farm to School Pilot Intervention

    ERIC Educational Resources Information Center

    Landry, Alicia S.; Butz, Rebecca; Connell, Carol L.; Yadrick, Kathy

    2017-01-01

    Purpose/Objectives: The purpose of this study was to evaluate behaviors related to fruit and vegetable intake before and after implementation of a theory-based Farm to School pilot intervention in a rural school. Methods: Students in fifth grade at a rural elementary school were asked to complete pre- and post-test measures based on the Theory of…

  4. Identifying Effective Methods of Instruction for Adult Emergent Readers through Community-Based Research

    ERIC Educational Resources Information Center

    Blackmer, Rachel; Hayes-Harb, Rachel

    2016-01-01

    We present a community-based research project aimed at identifying effective methods and materials for teaching English literacy skills to adult English as a second language emergent readers. We conducted a quasi-experimental study whereby we evaluated the efficacy of two approaches, one based on current practices at the English Skills Learning…

  5. Evaluation of Intelligent Grouping Based on Learners' Collaboration Competence Level in Online Collaborative Learning Environment

    ERIC Educational Resources Information Center

    Muuro, Maina Elizaphan; Oboko, Robert; Wagacha, Waiganjo Peter

    2016-01-01

    In this paper we explore the impact of an intelligent grouping algorithm based on learners' collaborative competency, compared with (a) an instructor-based Grade Point Average (GPA) grouping method and (b) a random method, on group outcomes and group collaboration problems in an online collaborative learning environment. An intelligent grouping…

  6. The Development of Online Tutorial Program Design Using Problem-Based Learning in Open Distance Learning System

    ERIC Educational Resources Information Center

    Said, Asnah; Syarif, Edy

    2016-01-01

    This research aimed to evaluate the design of an online tutorial program applying problem-based learning to the Research Methods course currently implemented in the Open Distance Learning (ODL) system. Students must take a Research Methods course to prepare themselves for academic writing projects. Problem-based learning basically emphasizes the process of…

  7. The Aristotle method: a new concept to evaluate quality of care based on complexity.

    PubMed

    Lacour-Gayet, François; Clarke, David R

    2005-06-01

    Evaluation of quality of care is a duty of modern medical practice. A reliable method of quality evaluation, able to compare institutions fairly and to inform a patient and his family of the potential risk of a procedure, is clearly needed. It is now well recognized that any method that purports to evaluate quality of care should include a case-mix/risk stratification method. Until recently, no such method was available in pediatric cardiac surgery. The Aristotle method is a new concept for evaluating quality of care in congenital heart surgery based on the complexity of the surgical procedures. Involving a panel of expert surgeons, the project started in 1999 and included 50 pediatric surgeons from 23 countries. The basic score adjusts for the complexity of a given procedure and is calculated as the sum of the potential for mortality, the potential for morbidity and the anticipated technical difficulty. The Comprehensive Score further adjusts the complexity according to the specific patient characteristics (anatomy, associated procedures, co-morbidity, etc.). The Aristotle method is original in that it introduces several new concepts: the calculated complexity is a constant for a given patient all over the world; complexity is an independent value and risk is a variable depending on performance; and Performance = Complexity x Outcome. The Aristotle score is a good vector of communication between patients, doctors and insurance companies and may stimulate the quality and the organization of health care in our field and in others.

  8. Fast algorithms for evaluating the stress field of dislocation lines in anisotropic elastic media

    NASA Astrophysics Data System (ADS)

    Chen, C.; Aubry, S.; Oppelstrup, T.; Arsenlis, A.; Darve, E.

    2018-06-01

    In dislocation dynamics (DD) simulations, the most computationally intensive step is the evaluation of the elastic interaction forces among dislocation ensembles. Because the pair-wise interaction between dislocations is long-range, this force calculation step can be significantly accelerated by the fast multipole method (FMM). We implemented and compared four different methods in isotropic and anisotropic elastic media: one based on the Taylor series expansion (Taylor FMM), one based on the spherical harmonics expansion (Spherical FMM), one kernel-independent method based on the Chebyshev interpolation (Chebyshev FMM), and a new kernel-independent method that we call the Lagrange FMM. The Taylor FMM is an existing method, used in ParaDiS, one of the most popular DD simulation softwares. The Spherical FMM employs a more compact multipole representation than the Taylor FMM does and is thus more efficient. However, both the Taylor FMM and the Spherical FMM are difficult to derive in anisotropic elastic media because the interaction force is complex and has no closed analytical formula. The Chebyshev FMM requires only being able to evaluate the interaction between dislocations and thus can be applied easily in anisotropic elastic media. But it has a relatively large memory footprint, which limits its usage. The Lagrange FMM was designed to be a memory-efficient black-box method. Various numerical experiments are presented to demonstrate the convergence and the scalability of the four methods.

  9. Automatic and user-centric approaches to video summary evaluation

    NASA Astrophysics Data System (ADS)

    Taskiran, Cuneyt M.; Bentley, Frank

    2007-01-01

    Automatic video summarization has become an active research topic in content-based video processing. However, not much emphasis has been placed on developing rigorous summary evaluation methods or on developing summarization systems based on a clear understanding of user needs obtained through user-centered design. In this paper we address these two topics and propose an automatic video summary evaluation algorithm adapted from the text summarization domain.

  10. Compressed sparse tensor based quadrature for vibrational quantum mechanics integrals

    DOE PAGES

    Rai, Prashant; Sargsyan, Khachik; Najm, Habib N.

    2018-03-20

    A new method for fast evaluation of high dimensional integrals arising in quantum mechanics is proposed. The method is based on sparse approximation of a high dimensional function followed by a low-rank compression. In the first step, we interpret the high dimensional integrand as a tensor in a suitable tensor product space and determine its entries by a compressed sensing based algorithm using only a few function evaluations. Secondly, we implement a rank reduction strategy to compress this tensor in a suitable low-rank tensor format using standard tensor compression tools. This allows representing a high dimensional integrand function as a small sum of products of low dimensional functions. Finally, a low dimensional Gauss–Hermite quadrature rule is used to integrate this low-rank representation, thus alleviating the curse of dimensionality. Numerical tests on synthetic functions, as well as on energy correction integrals for water and formaldehyde molecules, demonstrate the efficiency of this method using very few function evaluations as compared to other integration strategies.
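    The payoff of the low-rank representation is the final quadrature step: once the integrand is a short sum of products of one-dimensional functions, each dimension can be integrated with a one-dimensional Gauss-Hermite rule. The toy rank-2, three-dimensional integrand below is synthetic and only illustrates that last step, not the sparse recovery or compression stages.

```python
# Toy illustration: Gauss-Hermite quadrature applied to a separable (low-rank) integrand.
import numpy as np

nodes, weights = np.polynomial.hermite.hermgauss(20)   # rule for weight e^{-x^2}

# f(x1, x2, x3) = cos(x1)*x2^2*exp(-x3) + sin(x1)*1*x3^2   (rank 2, d = 3)
terms = [
    [np.cos, lambda x: x**2, lambda x: np.exp(-x)],
    [np.sin, lambda x: np.ones_like(x), lambda x: x**2],
]

integral = 0.0
for factors in terms:
    prod = 1.0
    for g in factors:                          # one 1-D quadrature per dimension
        prod *= np.sum(weights * g(nodes))
    integral += prod
print(integral)   # approximates the Gaussian-weighted integral of f over R^3
```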

  11. Student Analysis of Handout Development based on Guided Discovery Method in Process Evaluation and Learning Outcomes of Biology

    NASA Astrophysics Data System (ADS)

    Nerita, S.; Maizeli, A.; Afza, A.

    2017-09-01

    The Process Evaluation and Learning Outcomes of Biology course discusses the evaluation process in learning and the application of designed and processed learning outcomes. Some problems found during this course were that students had difficulty understanding the subject and that no learning resources were available to guide them and support independent study. It is therefore necessary to develop a learning resource that makes students think actively and make decisions with the guidance of the lecturer. The purpose of this study is to produce a handout based on the guided discovery method that matches the needs of students. The research was done using the 4-D model and was limited to the define phase, i.e., the analysis of student requirements. Data were obtained from a questionnaire and analyzed descriptively. The results showed that the average student requirement was 91.43%. It can be concluded that students need a handout based on the guided discovery method in the learning process.

  12. Compressed sparse tensor based quadrature for vibrational quantum mechanics integrals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rai, Prashant; Sargsyan, Khachik; Najm, Habib N.

    A new method for fast evaluation of high dimensional integrals arising in quantum mechanics is proposed. The method is based on sparse approximation of a high dimensional function followed by a low-rank compression. In the first step, we interpret the high dimensional integrand as a tensor in a suitable tensor product space and determine its entries by a compressed sensing based algorithm using only a few function evaluations. Secondly, we implement a rank reduction strategy to compress this tensor in a suitable low-rank tensor format using standard tensor compression tools. This allows representing a high dimensional integrand function as a small sum of products of low dimensional functions. Finally, a low dimensional Gauss–Hermite quadrature rule is used to integrate this low-rank representation, thus alleviating the curse of dimensionality. Numerical tests on synthetic functions, as well as on energy correction integrals for water and formaldehyde molecules, demonstrate the efficiency of this method using very few function evaluations as compared to other integration strategies.

  13. Non-Contact Laser Based Ultrasound Evaluation of Canned Foods

    NASA Astrophysics Data System (ADS)

    Shelton, David

    2005-03-01

    Laser-Based Ultrasound detection was used to measure the velocity of compression waves transmitted through canned foods. Condensed broth, canned pasta, and non-condensed soup were evaluated in these experiments. Homodyne adaptive optics resulted in measurements that were more accurate than the traditional heterodyne method, as well as yielding a 10 dB gain in signal to noise. A-Scans measured the velocity of ultrasound sent through the center of the can and were able to distinguish the quantity of food stuff in its path, as well as distinguish between meat and potato. B-Scans investigated the heterogeneity of the sample’s contents. The evaluation of canned foods was completely non-contact and would be suitable for continuous monitoring in production. These results were verified by conducting the same experiments with a contact piezo transducer. Although the contact method yields a higher signal to noise ratio than the non-contact method, Laser-Based Ultrasound was able to detect surface waves the contact transducer could not.

  14. A novel method for extraction of neural response from single channel cochlear implant auditory evoked potentials.

    PubMed

    Sinkiewicz, Daniel; Friesen, Lendra; Ghoraani, Behnaz

    2017-02-01

    Cortical auditory evoked potentials (CAEPs) are used to evaluate the auditory pathways of cochlear implant (CI) patients, but the CI device produces an electrical artifact that obscures the relevant information in the neural response. Multiple methods currently attempt to recover the neural response from the contaminated CAEP, but there is no gold standard that can quantitatively confirm their effectiveness. To address this crucial shortcoming, we develop a wavelet-based method to quantify the amount of artifact energy in the neural response. In addition, a novel technique for extracting the neural response from single-channel CAEPs is proposed. The new method uses matching pursuit (MP) based feature extraction to represent the contaminated CAEP in a feature space, and support vector machines (SVM) to classify the components as normal hearing (NH) or artifact. The NH components are combined to recover the neural response without artifact energy, as verified using the evaluation tool. Although it needs further evaluation, this approach is a promising method for removing electrical artifacts from CAEPs. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
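    The pipeline shape described above, sparse decomposition coefficients used as features for an SVM classifier, can be sketched as follows. Orthogonal matching pursuit from scikit-learn stands in here for the paper's matching-pursuit step, and the dictionary, "epochs", and labels are random placeholders rather than real CAEP data.

```python
# Rough sketch: sparse decomposition features + SVM component classification.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit
from sklearn.svm import SVC

rng = np.random.default_rng(0)
dictionary = rng.standard_normal((256, 40))          # 40 atoms, 256 time samples each

def sparse_features(signal, n_atoms=8):
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_atoms)
    omp.fit(dictionary, signal)
    return omp.coef_                                 # length-40 coefficient vector

# Placeholder "contaminated CAEP" epochs and component labels (0 = artifact, 1 = neural)
signals = rng.standard_normal((60, 256))
labels = rng.integers(0, 2, size=60)

X = np.array([sparse_features(s) for s in signals])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:5]))
```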

  15. An accurate and efficient reliability-based design optimization using the second order reliability method and improved stability transformation method

    NASA Astrophysics Data System (ADS)

    Meng, Zeng; Yang, Dixiong; Zhou, Huanlin; Yu, Bo

    2018-05-01

    The first order reliability method has been extensively adopted for reliability-based design optimization (RBDO), but it is inaccurate in calculating the failure probability for highly nonlinear performance functions. Thus, the second order reliability method is required to evaluate the reliability accurately. However, its application to RBDO is quite challenging owing to the high computational cost incurred by the repeated reliability evaluations and Hessian calculations of the probabilistic constraints. In this article, a new improved stability transformation method is proposed to search for the most probable point efficiently, and the Hessian matrix is calculated by the symmetric rank-one update. The computational capability of the proposed method is illustrated and compared to existing RBDO approaches through three mathematical and two engineering examples. The comparison results indicate that the proposed method is very efficient and accurate, providing an alternative tool for the RBDO of engineering structures.
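    The symmetric rank-one (SR1) update mentioned above builds a Hessian approximation from gradient differences, avoiding explicit second derivatives. The sketch below applies the standard SR1 formula to a quadratic test function chosen only for illustration; the skipping safeguard and step choices are assumptions of this sketch, not details of the paper.

```python
# Minimal sketch of the symmetric rank-one (SR1) quasi-Newton Hessian update.
import numpy as np

def sr1_update(B, s, y, eps=1e-8):
    """B: current Hessian approximation; s = x_new - x_old; y = grad_new - grad_old."""
    r = y - B @ s
    denom = r @ s
    if abs(denom) <= eps * max(1.0, np.linalg.norm(r) * np.linalg.norm(s)):
        return B                      # skip the update when the denominator is unsafe
    return B + np.outer(r, r) / denom

# Example: recover the Hessian of g(x) = 0.5 * x^T A x from gradient differences
A = np.array([[4.0, 1.0], [1.0, 3.0]])
grad = lambda x: A @ x
B = np.eye(2)
x = np.zeros(2)
for step in [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]:
    x_new = x + step
    B = sr1_update(B, step, grad(x_new) - grad(x))
    x = x_new
print(B)   # approaches A for a quadratic function
```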

  16. Evaluation of Performance and Perceptions of Electronic vs. Paper Multiple-Choice Exams

    ERIC Educational Resources Information Center

    Washburn, Shannon; Herman, James; Stewart, Randolph

    2017-01-01

    In the veterinary professional curriculum, methods of examination in many courses are transitioning from the traditional paper-based exams to electronic-based exams. Therefore, a controlled trial to evaluate the impact of testing methodology on examination performance in a veterinary physiology course was designed and implemented. Formalized…

  17. Computer-Based Training for Library Staff: From Demonstration to Continuing Program.

    ERIC Educational Resources Information Center

    Bayne, Pauline S.

    1993-01-01

    Describes a demonstration project developed at the University of Tennessee (Knoxville) libraries to train nonprofessional library staff with computer-based training using HyperCard that was created by librarians rather than by computer programmers. Evaluation methods are discussed, including formative and summative evaluation; and modifications…

  18. Effects of Computer-Based Training on Procedural Modifications to Standard Functional Analyses

    ERIC Educational Resources Information Center

    Schnell, Lauren K.; Sidener, Tina M.; DeBar, Ruth M.; Vladescu, Jason C.; Kahng, SungWoo

    2018-01-01

    Few studies have evaluated methods for training decision-making when functional analysis data are undifferentiated. The current study evaluated computer-based training to teach 20 graduate students to arrange functional analysis conditions, analyze functional analysis data, and implement procedural modifications. Participants were exposed to…

  19. Diet and Colorectal Cancer Risk: Evaluation of a Nutrition Education Leaflet

    ERIC Educational Resources Information Center

    Dyer, K. J.; Fearon, K. C. H.; Buckner, K.; Richardson, R. A.

    2005-01-01

    Objective: To evaluate the effect of a needs-based, nutrition education leaflet on nutritional knowledge. Design: Comparison of nutritional knowledge levels before and after exposure to a nutrition education leaflet. Setting: A regional colorectal out-patient clinic in Edinburgh. Method: A nutrition education leaflet, based on an earlier…

  20. A Performance-Based Method of Student Evaluation

    ERIC Educational Resources Information Center

    Nelson, G. E.; And Others

    1976-01-01

    The Problem Oriented Medical Record (which allows practical definition of the behavioral terms thoroughness, reliability, sound analytical sense, and efficiency as they apply to the identification and management of patient problems) provides a vehicle for performance-based evaluation. A test-run use of the record is reported. (JT)

  1. Evaluation of the Laplace Integral. Classroom Notes

    ERIC Educational Resources Information Center

    Chen, Hongwei

    2004-01-01

    Based on the dominated convergence theorem and parametric differentiation, two different evaluations of the Laplace integral are displayed. This article presents two different proofs of (1) which may be of interest since they are based on principles within the realm of real analysis. The first method applies the dominated convergence theorem to…

  2. GoActive: a protocol for the mixed methods process evaluation of a school-based physical activity promotion programme for 13-14year old adolescents.

    PubMed

    Jong, Stephanie T; Brown, Helen Elizabeth; Croxson, Caroline H D; Wilkinson, Paul; Corder, Kirsten L; van Sluijs, Esther M F

    2018-05-21

    Process evaluations are critical for interpreting and understanding outcome trial results. By understanding how interventions function across different settings, process evaluations have the capacity to inform future dissemination of interventions. The complexity of Get others Active (GoActive), a 12-week, school-based physical activity intervention implemented in eight schools, highlights the need to investigate how implementation is achieved across a variety of school settings. This paper describes the mixed methods GoActive process evaluation protocol that is embedded within the outcome evaluation. In this detailed process evaluation protocol, we describe the flexible and pragmatic methods that will be used for capturing the process evaluation data. A mixed methods design will be used for the process evaluation, including quantitative data collected in both the control and intervention arms of the GoActive trial, and qualitative data collected in the intervention arm. Data collection methods will include purposively sampled, semi-structured interviews and focus group interviews, direct observation, and participant questionnaires (completed by students, teachers, older adolescent mentors, and local authority-funded facilitators). Data will be analysed thematically within and across datasets. Overall synthesis of findings will address the process of GoActive implementation and how this process affects outcomes, with careful attention to the context of the school environment. This process evaluation will explore the experience of participating in GoActive from the perspectives of key groups, providing a greater understanding of the acceptability and process of implementation of the intervention across the eight intervention schools. This will allow for appraisal of the intervention's conceptual base, inform potential dissemination, and help optimise post-trial sustainability. The process evaluation will also assist in contextualising the trial effectiveness results with respect to how the intervention may or may not have worked and, if it was found to be effective, what might be required for it to be sustained in the 'real world'. Furthermore, it will offer suggestions for the development and implementation of future initiatives to promote physical activity within schools. ISRCTN, ISRCTN31583496. Registered on 18 February 2014.

  3. A Novel Unsupervised Segmentation Quality Evaluation Method for Remote Sensing Images

    PubMed Central

    Tang, Yunwei; Jing, Linhai; Ding, Haifeng

    2017-01-01

    The segmentation of a high spatial resolution remote sensing image is a critical step in geographic object-based image analysis (GEOBIA). Evaluating the performance of segmentation without ground truth data, i.e., unsupervised evaluation, is important for the comparison of segmentation algorithms and the automatic selection of optimal parameters. This unsupervised strategy currently faces several challenges in practice, such as difficulties in designing effective indicators and limitations of the spectral values in the feature representation. This study proposes a novel unsupervised evaluation method to quantitatively measure the quality of segmentation results to overcome these problems. In this method, multiple spectral and spatial features of images are first extracted simultaneously and then integrated into a feature set to improve the quality of the feature representation of ground objects. The indicators designed for spatial stratified heterogeneity and spatial autocorrelation are included to estimate the properties of the segments in this integrated feature set. These two indicators are then combined into a global assessment metric as the final quality score. The trade-offs of the combined indicators are accounted for using a strategy based on the Mahalanobis distance, which can be exhibited geometrically. The method is tested on two segmentation algorithms and three testing images. The proposed method is compared with two existing unsupervised methods and a supervised method to confirm its capabilities. Through comparison and visual analysis, the results verified the effectiveness of the proposed method and demonstrated the reliability and improvements of this method with respect to other methods. PMID:29064416
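
    As a loose sketch of how two segment-level indicators (here, spatial stratified heterogeneity and spatial autocorrelation) could be combined into a single quality score through a Mahalanobis-type distance, consider the example below; the indicator values, the ideal reference point and the covariance estimated across candidate segmentations are assumptions for illustration, not the paper's exact weighting scheme.

```python
import numpy as np

# Hypothetical indicator values for four candidate segmentations:
# column 0 = spatial stratified heterogeneity, column 1 = spatial autocorrelation
indicators = np.array([
    [0.82, 0.35],
    [0.64, 0.52],
    [0.91, 0.40],
    [0.55, 0.20],
])

# Covariance of the indicators estimated over the candidate segmentations
cov_inv = np.linalg.inv(np.cov(indicators, rowvar=False))

# Hypothetical ideal reference point (high heterogeneity, low autocorrelation)
ideal = np.array([1.0, 0.0])

def mahalanobis_score(x, ref, cov_inv):
    """Mahalanobis distance to the ideal point; smaller means better quality."""
    d = x - ref
    return float(np.sqrt(d @ cov_inv @ d))

scores = [mahalanobis_score(x, ideal, cov_inv) for x in indicators]
best = int(np.argmin(scores))        # index of the best-scoring segmentation
print(best, scores)
```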

  4. Efficacy of evaluation of rooster sperm morphology using different staining methods.

    PubMed

    Lukaszewicz, E; Jerysz, A; Partyka, A; Siudzińska, A

    2008-12-01

    This work focused on inexpensive methods of evaluating fowl sperm morphology, based on eosin-nigrosin smears, which can reveal disorders in spermatogenesis and can be recommended for evaluating fertilising potency and selecting males in flocks reproduced by artificial insemination. Four fowl breeds (Black Minorca, Italian Partridge, Forwerk and Greenleg Partridge) were used to determine the efficacy of sperm morphology evaluation using four eosin-nigrosin staining methods (according to Blom, Bakst and Cecil, Morisson, and Jaśkowski) and three examiners of different experience (high, medium, novice). There were significant (P ≤ 0.01) differences in sperm morphology between Blom's staining method and those of Bakst and Cecil, Morisson or Jaśkowski, irrespective of fowl breed and examiner experience. Blom's stain caused sperm head swelling and showed a drastic reduction in the proportion of live spermatozoa with normal morphology. The staining method had a greater influence on sperm morphology evaluation than the experience of the examiners.

  5. [Multifactorial method for assessing the physical work capacity of mice].

    PubMed

    Dubovik, B V; Bogomazov, S D

    1987-01-01

    Based on Kiplinger's swimming test, criteria were developed in experiments on (CBA X C57BL)F1 mice for evaluating animal performance during repeated swimming of a standard distance, measuring power, volume of work, and rate of fatigue development in relative units. From a study of the effects of sydnocarb, bemethyl and phenazepam on various parameters of the physical performance of mice, it was concluded that the proposed method provides a more informative evaluation of pharmacological effects on the physical performance of animals than methods based on recording the time taken to perform the load.

  6. Results from the VALUE perfect predictor experiment: process-based evaluation

    NASA Astrophysics Data System (ADS)

    Maraun, Douglas; Soares, Pedro; Hertig, Elke; Brands, Swen; Huth, Radan; Cardoso, Rita; Kotlarski, Sven; Casado, Maria; Pongracz, Rita; Bartholy, Judit

    2016-04-01

    Until recently, the evaluation of downscaled climate model simulations has typically been limited to surface climatologies, including long term means, spatial variability and extremes. But these aspects are often, at least partly, tuned in regional climate models to match observed climate. The tuning issue is of course particularly relevant for bias corrected regional climate models. In general, a good performance of a model for these aspects in present climate does therefore not imply a good performance in simulating climate change. It is now widely accepted that, to increase our confidence in climate change simulations, it is necessary to evaluate how climate models simulate relevant underlying processes. In other words, it is important to assess whether downscaling does the right thing for the right reason. Therefore, VALUE has carried out a broad process-based evaluation study based on its perfect predictor experiment simulations: the downscaling methods are driven by ERA-Interim data over the period 1979-2008, and reference observations are given by a network of 85 meteorological stations covering all European climates. More than 30 methods participated in the evaluation. In order to compare statistical and dynamical methods, only variables provided by both types of approaches could be considered. This limited the analysis to conditioning local surface variables on variables from driving processes that are simulated by ERA-Interim. We considered the following types of processes: at the continental scale, we evaluated the performance of downscaling methods for positive and negative North Atlantic Oscillation, Atlantic ridge and blocking situations. At synoptic scales, we considered Lamb weather types for selected European regions such as Scandinavia, the United Kingdom, the Iberian Peninsula or the Alps. At regional scales we considered phenomena such as the Mistral, the Bora or the Iberian coastal jet. Such process-based evaluation helps to attribute biases in surface variables to underlying processes and ultimately to improve climate models.

  7. DEVELOPMENT OF CRITERIA AND METHODS FOR EVALUATING TRAINER AIRCRAFT EFFECTIVENESS.

    ERIC Educational Resources Information Center

    KUSEWITT, J.B.

    The purpose of this study was to develop a method for determining objective measures of trainer aircraft effectiveness to evaluate program alternatives for training pilots for fleet fighter and attack-type aircraft. The training syllabus was based on average student ability. The basic problem was to establish quantitative time-difficulty…

  8. Efficacy of 4-allylanisole-based products for protecting individual loblolly pines from Dendroctonus frontalis Zimmermann (Coleoptera: Scolytidae)

    Treesearch

    Brian L. Strom; S.R. Clarke; P.J. Shea

    2004-01-01

    Abstract: We evaluated the effectiveness of 4-allylanisole (4AA) as a protective treatment for loblolly pines threatened by the southern pine beetle, Dendroctonus frontalis Zimmermann. Three products were evaluated in combination with two methods that promoted attack of trees by D. frontalis. One method used...

  9. Student Teachers' Views about Assessment and Evaluation Methods in Mathematics

    ERIC Educational Resources Information Center

    Dogan, Mustafa

    2011-01-01

    This study aimed to find out assessment and evaluation approaches in a Mathematics Teacher Training Department based on the views and experiences of student teachers. The study used a descriptive survey method, with the research sample consisting of 150 third- and fourth-year Primary Mathematics student teachers. Data were collected using a…

  10. Beyond Instrumentation: Redesigning Measures and Methods for Evaluating the Graduate College Experience

    ERIC Educational Resources Information Center

    Hardré, Patricia L.; Hackett, Shannon

    2015-01-01

    This manuscript chronicles the process and products of a redesign for evaluation of the graduate college experience (GCE) which was initiated by a university graduate college, based on its observed need to reconsider and update its measures and methods for assessing graduate students' experiences. We examined the existing instrumentation and…

  11. Managing for resilience: an information theory-based ...

    EPA Pesticide Factsheets

    Ecosystems are complex and multivariate; hence, methods to assess the dynamics of ecosystems should have the capacity to evaluate multiple indicators simultaneously. Most research on identifying leading indicators of regime shifts has focused on univariate methods and simple models, which have limited utility when evaluating real ecosystems, particularly because drivers are often unknown. We discuss some common univariate and multivariate approaches for detecting critical transitions in ecosystems and demonstrate their capabilities via case studies. Synthesis and applications: We illustrate the utility of an information theory-based index for assessing ecosystem dynamics. Trends in this index also provide a sentinel of both abrupt and gradual transitions in ecosystems. In response to the need to identify leading indicators of regime shifts in ecosystems, our research compares traditional indicators and Fisher information, an information theory-based method, by examining four case study systems. The results demonstrate the utility of these methods and offer great promise for quantifying and managing for resilience.

  12. Evaluation of Method-Specific Extraction Variability for the Measurement of Fatty Acids in a Candidate Infant/Adult Nutritional Formula Reference Material.

    PubMed

    Place, Benjamin J

    2017-05-01

    To address community needs, the National Institute of Standards and Technology has developed a candidate Standard Reference Material (SRM) for infant/adult nutritional formula based on milk and whey protein concentrates with isolated soy protein called SRM 1869 Infant/Adult Nutritional Formula. One major component of this candidate SRM is the fatty acid content. In this study, multiple extraction techniques were evaluated to quantify the fatty acids in this new material. Extraction methods that were based on lipid extraction followed by transesterification resulted in lower mass fraction values for all fatty acids than the values measured by methods utilizing in situ transesterification followed by fatty acid methyl ester extraction (ISTE). An ISTE method, based on the identified optimal parameters, was used to determine the fatty acid content of the new infant/adult nutritional formula reference material.

  13. A novel hybrid MCDM model for performance evaluation of research and technology organizations based on BSC approach.

    PubMed

    Varmazyar, Mohsen; Dehghanbaghi, Maryam; Afkhami, Mehdi

    2016-10-01

    Balanced Scorecard (BSC) is a strategic evaluation tool using both financial and non-financial indicators to determine the business performance of organizations or companies. In this paper, a new integrated approach based on the Balanced Scorecard (BSC) and multi-criteria decision making (MCDM) methods is proposed to evaluate the performance of the research centers of a research and technology organization (RTO) in Iran. The Decision-Making Trial and Evaluation Laboratory (DEMATEL) method is employed to reflect the interdependencies among BSC perspectives. Then, the Analytic Network Process (ANP) is utilized to weight the indices influencing the considered problem. In the next step, we apply four MCDM methods, including Additive Ratio Assessment (ARAS), Complex Proportional Assessment (COPRAS), Multi-Objective Optimization by Ratio Analysis (MOORA), and Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), for ranking the alternatives. Finally, the utility interval technique is applied to combine the ranking results of the MCDM methods. Weighted utility intervals are computed by constructing a correlation matrix between the ranking methods. A real case is presented to show the efficacy of the proposed approach. Copyright © 2016 Elsevier Ltd. All rights reserved.
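
    The following is a minimal sketch of one of the rankings listed above, TOPSIS, applied to a hypothetical decision matrix; the scores, weights and criterion directions are invented for illustration, and the paper's full hybrid DEMATEL-ANP-utility-interval model is not reproduced.

```python
import numpy as np

def topsis(decision_matrix, weights, benefit_mask):
    """Rank alternatives with TOPSIS.

    decision_matrix : alternatives x criteria scores
    weights         : criterion weights summing to 1 (e.g. from ANP)
    benefit_mask    : True for benefit criteria, False for cost criteria
    """
    X = np.asarray(decision_matrix, dtype=float)
    # Vector-normalize each criterion column, then apply the weights
    V = X / np.linalg.norm(X, axis=0) * weights
    # Ideal and anti-ideal solutions depend on criterion direction
    ideal = np.where(benefit_mask, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit_mask, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)   # closeness coefficient: higher = better

# Hypothetical scores of three research centres on four BSC-style criteria
scores = [[7, 0.8, 120, 3], [9, 0.6, 150, 5], [6, 0.9, 100, 2]]
weights = np.array([0.4, 0.3, 0.2, 0.1])
benefit = np.array([True, True, True, False])   # last criterion is a cost
print(np.argsort(-topsis(scores, weights, benefit)) + 1)  # ranking (best first)
```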

  14. MOCC: A Fast and Robust Correlation-Based Method for Interest Point Matching under Large Scale Changes

    NASA Astrophysics Data System (ADS)

    Zhao, Feng; Huang, Qingming; Wang, Hao; Gao, Wen

    2010-12-01

    Similarity measures based on correlation have been used extensively for matching tasks. However, traditional correlation-based image matching methods are sensitive to rotation and scale changes. This paper presents a fast correlation-based method for matching two images with large rotation and significant scale changes. Multiscale oriented corner correlation (MOCC) is used to evaluate the degree of similarity between the feature points. The method is rotation invariant and capable of matching image pairs with scale changes up to a factor of 7. Moreover, MOCC is much faster in comparison with the state-of-the-art matching methods. Experimental results on real images show the robustness and effectiveness of the proposed method.
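
    The basic building block of correlation-based matching is a normalized similarity score between two image patches; the sketch below shows plain zero-mean normalized cross-correlation, not the multiscale oriented corner correlation (MOCC) descriptor itself, and the patches are synthetic.

```python
import numpy as np

def normalized_cross_correlation(patch_a, patch_b):
    """Zero-mean normalized cross-correlation between two equally sized patches.

    Returns a value in [-1, 1]; values near 1 indicate a strong match.
    """
    a = patch_a.astype(float).ravel()
    b = patch_b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0:          # flat (textureless) patch: correlation undefined
        return 0.0
    return float(a @ b / denom)

# Usage: compare a patch around one interest point with a candidate patch
rng = np.random.default_rng(0)
p = rng.random((11, 11))
print(normalized_cross_correlation(p, p + 0.01 * rng.random((11, 11))))
```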

  15. Flight-Test Evaluation of Flutter-Prediction Methods

    NASA Technical Reports Server (NTRS)

    Lind, Rick; Brenner, Marty

    2003-01-01

    The flight-test community routinely spends considerable time and money to determine a range of flight conditions, called a flight envelope, within which an aircraft is safe to fly. The cost of determining a flight envelope could be greatly reduced if there were a method of safely and accurately predicting the speed associated with the onset of an instability called flutter. Several methods have been developed with the goal of predicting flutter speeds to improve the efficiency of flight testing. These methods include (1) data-based methods, in which one relies entirely on information obtained from the flight tests and (2) model-based approaches, in which one relies on a combination of flight data and theoretical models. The data-driven methods include one based on extrapolation of damping trends, one that involves an envelope function, one that involves the Zimmerman-Weissenburger flutter margin, and one that involves a discrete-time auto-regressive model. An example of a model-based approach is that of the flutterometer. These methods have all been shown to be theoretically valid and have been demonstrated on simple test cases; however, until now, they have not been thoroughly evaluated in flight tests. An experimental apparatus called the Aerostructures Test Wing (ATW) was developed to test these prediction methods.

  16. Evaluating a Pivot-Based Approach for Bilingual Lexicon Extraction

    PubMed Central

    Kim, Jae-Hoon; Kwon, Hong-Seok; Seo, Hyeong-Won

    2015-01-01

    A pivot-based approach for bilingual lexicon extraction is based on the similarity of context vectors represented by words in a pivot language like English. In this paper, in order to show the validity and usability of the pivot-based approach, we evaluate the approach together with two different methods for estimating context vectors: one estimates them from two parallel corpora based on word association between source words (resp., target words) and pivot words, and the other estimates them from two parallel corpora based on word alignment tools for statistical machine translation. Empirical results on two language pairs (e.g., Korean-Spanish and Korean-French) have shown that the pivot-based approach is very promising for resource-poor languages, confirming its validity and usability. Furthermore, our method also performs well for words with low frequency. PMID:25983745
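
    The core comparison step in context-vector approaches of this kind is a similarity measure between vectors indexed by pivot-language words; the sketch below uses cosine similarity with an invented pivot vocabulary and invented co-occurrence weights, purely to illustrate the idea.

```python
import numpy as np

# Hypothetical pivot-language (English) context vocabulary
pivot_vocab = ["bank", "river", "money", "water"]

# Hypothetical context vectors: co-occurrence weights of a source word and
# of two candidate target words with the pivot vocabulary
source_vec = np.array([0.9, 0.1, 0.8, 0.0])      # e.g. a Korean word
candidate_vec_1 = np.array([0.8, 0.0, 0.9, 0.1])  # target-language candidate A
candidate_vec_2 = np.array([0.1, 0.9, 0.0, 0.8])  # target-language candidate B

def cosine(u, v):
    """Cosine similarity between two context vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# The candidate whose context vector is most similar is proposed as translation
sims = {"candidate_1": cosine(source_vec, candidate_vec_1),
        "candidate_2": cosine(source_vec, candidate_vec_2)}
print(max(sims, key=sims.get))
```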

  17. A new automated NaCl based robust method for routine production of gallium-68 labeled peptides

    PubMed Central

    Schultz, Michael K.; Mueller, Dirk; Baum, Richard P.; Watkins, G. Leonard; Breeman, Wouter A. P.

    2017-01-01

    A new NaCl based method for preparation of gallium-68 labeled radiopharmaceuticals has been adapted for use with an automated gallium-68 generator system. The method was evaluated based on 56 preparations of [68Ga]DOTATOC and compared to a similar acetone-based approach. Advantages of the new NaCl approach include reduced preparation time (< 15 min) and removal of organic solvents. The method produces high peptide-bound % (> 97%), and specific activity (> 40 MBq nmole−1 [68Ga]DOTATOC) and is well-suited for clinical production of radiopharmaceuticals. PMID:23026223

  18. Research on Radar Importance with Decision Matrix

    NASA Astrophysics Data System (ADS)

    Meng, Lingjie; Du, Yu; Wang, Liuheng

    2017-12-01

    Considering the characteristics of radar, an evaluation index system of radar importance was constructed, and a comprehensive evaluation model based on a decision matrix was established. Finally, an example demonstrates that this method of evaluating radar importance is correct and feasible.

  19. Formalizing the Role of Agent-Based Modeling in Causal Inference and Epidemiology

    PubMed Central

    Marshall, Brandon D. L.; Galea, Sandro

    2015-01-01

    Calls for the adoption of complex systems approaches, including agent-based modeling, in the field of epidemiology have largely centered on the potential for such methods to examine complex disease etiologies, which are characterized by feedback behavior, interference, threshold dynamics, and multiple interacting causal effects. However, considerable theoretical and practical issues impede the capacity of agent-based methods to examine and evaluate causal effects and thus illuminate new areas for intervention. We build on this work by describing how agent-based models can be used to simulate counterfactual outcomes in the presence of complexity. We show that these models are of particular utility when the hypothesized causal mechanisms exhibit a high degree of interdependence between multiple causal effects and when interference (i.e., one person's exposure affects the outcome of others) is present and of intrinsic scientific interest. Although not without challenges, agent-based modeling (and complex systems methods broadly) represent a promising novel approach to identify and evaluate complex causal effects, and they are thus well suited to complement other modern epidemiologic methods of etiologic inquiry. PMID:25480821

  20. Performance comparison of LUR and OK in PM2.5 concentration mapping: a multidimensional perspective

    PubMed Central

    Zou, Bin; Luo, Yanqing; Wan, Neng; Zheng, Zhong; Sternberg, Troy; Liao, Yilan

    2015-01-01

    Methods of Land Use Regression (LUR) modeling and Ordinary Kriging (OK) interpolation have been widely used to offset the shortcomings of PM2.5 data observed at sparse monitoring sites. However, traditional point-based performance evaluation strategy for these methods remains stagnant, which could cause unreasonable mapping results. To address this challenge, this study employs ‘information entropy’, an area-based statistic, along with traditional point-based statistics (e.g. error rate, RMSE) to evaluate the performance of LUR model and OK interpolation in mapping PM2.5 concentrations in Houston from a multidimensional perspective. The point-based validation reveals significant differences between LUR and OK at different test sites despite the similar end-result accuracy (e.g. error rate 6.13% vs. 7.01%). Meanwhile, the area-based validation demonstrates that the PM2.5 concentrations simulated by the LUR model exhibits more detailed variations than those interpolated by the OK method (i.e. information entropy, 7.79 vs. 3.63). Results suggest that LUR modeling could better refine the spatial distribution scenario of PM2.5 concentrations compared to OK interpolation. The significance of this study primarily lies in promoting the integration of point- and area-based statistics for model performance evaluation in air pollution mapping. PMID:25731103
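
    As a simple illustration of an area-based statistic of this kind, the sketch below computes the Shannon entropy of a binned concentration surface; the bin count and the synthetic maps are assumptions, and this is not necessarily the exact entropy formulation used in the study.

```python
import numpy as np

def map_information_entropy(surface, n_bins=32):
    """Shannon entropy (bits) of a mapped concentration surface.

    Higher entropy indicates more detailed spatial variation in the map.
    """
    values = np.asarray(surface, dtype=float).ravel()
    counts, _ = np.histogram(values, bins=n_bins)
    p = counts / counts.sum()
    p = p[p > 0]                      # ignore empty bins
    return float(-(p * np.log2(p)).sum())

# Hypothetical PM2.5 surfaces: a smooth interpolated map vs. a more detailed one
rng = np.random.default_rng(1)
smooth_map = np.full((100, 100), 12.0) + rng.normal(0, 0.2, (100, 100))
detailed_map = 12.0 + 3.0 * rng.random((100, 100))
print(map_information_entropy(smooth_map), map_information_entropy(detailed_map))
```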

  1. Fluorescence-based methods for detecting caries lesions: systematic review, meta-analysis and sources of heterogeneity.

    PubMed

    Gimenez, Thais; Braga, Mariana Minatel; Raggio, Daniela Procida; Deery, Chris; Ricketts, David N; Mendes, Fausto Medeiros

    2013-01-01

    Fluorescence-based methods have been proposed to aid caries lesion detection. Summarizing and analysing the findings of studies about fluorescence-based methods could clarify their real benefits. We aimed to perform a comprehensive systematic review and meta-analysis to evaluate the accuracy of fluorescence-based methods in detecting caries lesions. Two independent reviewers searched PubMed, Embase and Scopus through June 2012 to identify published papers/articles. Other sources were checked to identify non-published literature. Study eligibility criteria, participants and diagnostic methods: the eligibility criteria were studies that (1) assessed the accuracy of fluorescence-based methods of detecting caries lesions on occlusal, approximal or smooth surfaces, in both primary or permanent human teeth, in the laboratory or clinical setting; (2) used a reference standard; and (3) reported sufficient data relating to the sample size and the accuracy of the methods. A diagnostic 2×2 table was extracted from the included studies to calculate the pooled sensitivity, specificity and overall accuracy parameters (Diagnostic Odds Ratio and Summary Receiver-Operating curve). The analyses were performed separately for each method and for different characteristics of the studies. The quality of the studies and heterogeneity were also evaluated. Seventy-five studies met the inclusion criteria from the 434 articles initially identified. The search of the grey or non-published literature did not identify any further studies. In general, the analysis demonstrated that the fluorescence-based methods tend to have similar accuracy for all types of teeth, dental surfaces or settings. There was a trend of better performance of fluorescence methods in detecting more advanced caries lesions. We also observed moderate to high heterogeneity and evidence of publication bias. Fluorescence-based devices have similar overall performance; however, better accuracy in detecting more advanced caries lesions has been observed.

  2. Evaluation of knowledge-based reconstruction for magnetic resonance volumetry of the right ventricle after arterial switch operation for dextro-transposition of the great arteries.

    PubMed

    Nyns, Emile C A; Dragulescu, Andreea; Yoo, Shi-Joon; Grosse-Wortmann, Lars

    2016-09-01

    Right ventricular (RV) volume and function evaluation is essential in the follow-up of patients after arterial switch operation (ASO) for dextro-transposition of the great arteries (d-TGA). Cardiac magnetic resonance (CMR) imaging using the Simpson's method is the gold-standard for measuring these parameters. However, this method can be challenging and time-consuming, especially in congenital heart disease. Knowledge-based reconstruction (KBR) is an alternative method to derive volumes from CMR datasets. It is based on the identification of a finite number of anatomical RV landmarks in various planes, followed by computer-based reconstruction of the endocardial contours by matching these landmarks with a reference library of representative RV shapes. The purpose of this study was to evaluate the feasibility, accuracy, reproducibility and labor intensity of KBR for RV volumetry in patients after ASO for d-TGA. The CMR datasets of 17 children and adolescents (males 11, median age 15) were studied for RV volumetry using both KBR and Simpson's method. The intraobserver, interobserver and intermethod variabilities were assessed using Bland-Altman analyses. Good correlation between KBR and Simpson's method was noted. Intraobserver and interobserver variability for KBR showed excellent agreement. Volume and function assessment using KBR was faster when compared with the Simpson's method (5.1 ± 0.6 vs. 6.7 ± 0.9 min, p < 0.001). KBR is a feasible, accurate, reproducible and fast method for measuring RV volumes and function derived from CMR in patients after ASO for d-TGA.

  3. Improving the performances of autofocus based on adaptive retina-like sampling model

    NASA Astrophysics Data System (ADS)

    Hao, Qun; Xiao, Yuqing; Cao, Jie; Cheng, Yang; Sun, Ce

    2018-03-01

    An adaptive retina-like sampling model (ARSM) is proposed to balance autofocusing accuracy and efficiency. Based on the model, we carry out comparative experiments between the proposed method and the traditional method in terms of accuracy, full width at half maximum (FWHM) and time consumption. The results show that the performance of our method is better than that of the traditional method. Meanwhile, typical autofocus functions, including the sum-modified-Laplacian (SML), Laplacian (LAP), mid-frequency DCT (MDCT) and Absolute Tenengrad (ATEN), are compared through comparative experiments. The smallest FWHM is obtained by the use of LAP, which is therefore more suitable for evaluating accuracy than the other autofocus functions. The MDCT autofocus function is most suitable for evaluating real-time capability.
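
    A minimal sketch of one of the focus measures named above, the sum-modified-Laplacian (SML), is given below; the window step, threshold and synthetic frames are assumptions for illustration, and the retina-like sampling model itself is not reproduced.

```python
import numpy as np

def sum_modified_laplacian(image, step=1, threshold=0.0):
    """Sum-modified-Laplacian (SML) sharpness score of a grayscale image.

    A larger score indicates a better-focused image.
    """
    img = np.asarray(image, dtype=float)
    s = step
    # Modified Laplacian: |2I - I_left - I_right| + |2I - I_up - I_down|
    centre = img[s:-s, s:-s]
    ml = (np.abs(2 * centre - img[s:-s, :-2 * s] - img[s:-s, 2 * s:])
          + np.abs(2 * centre - img[:-2 * s, s:-s] - img[2 * s:, s:-s]))
    return float(ml[ml >= threshold].sum())

# Usage: score a stack of synthetic frames and keep the sharpest one
rng = np.random.default_rng(2)
frames = [rng.random((64, 64)) * contrast for contrast in (0.2, 1.0, 0.5)]
best_frame = int(np.argmax([sum_modified_laplacian(f) for f in frames]))
```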

  4. Efficient Testing Combining Design of Experiment and Learn-to-Fly Strategies

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.; Brandon, Jay M.

    2017-01-01

    Rapid modeling and efficient testing methods are important in a number of aerospace applications. In this study efficient testing strategies were evaluated in a wind tunnel test environment and combined to suggest a promising approach for both ground-based and flight-based experiments. Benefits of using Design of Experiment techniques, well established in scientific, military, and manufacturing applications are evaluated in combination with newly developing methods for global nonlinear modeling. The nonlinear modeling methods, referred to as Learn-to-Fly methods, utilize fuzzy logic and multivariate orthogonal function techniques that have been successfully demonstrated in flight test. The blended approach presented has a focus on experiment design and identifies a sequential testing process with clearly defined completion metrics that produce increased testing efficiency.

  5. Nondestructive evaluation of the preservation state of stone columns in the Hospital Real of Granada

    NASA Astrophysics Data System (ADS)

    Moreno de Jong van Coevorden, C.; Cobos Sánchez, C.; Rubio Bretones, A.; Fernández Pantoja, M.; García, Salvador G.; Gómez Martín, R.

    2012-12-01

    This paper describes the results of applying two nondestructive evaluation methods to diagnose the preservation state of stone elements. The first method is based on ultrasonic (US) pulses, while the second uses short electromagnetic pulses. Specifically, these methods were applied to several columns, some of them previously restored. These columns are part of the architectural heritage of the University of Granada; in particular, they are located in the patio de la capilla of the Hospital Real of Granada. The objective of this work was the application of systems based on US pulses (in transmission mode) and ground-penetrating radar systems (electromagnetic tomography) to the diagnosis and detection of possible faults in the interior of the columns.

  6. Experiments of the selection of a method evaluating the fire resistance of some materials based on macromolecular compounds

    NASA Technical Reports Server (NTRS)

    Stoica, Steln; Sebe, Mircea Octavian

    1987-01-01

    A comparative experimental study on the application of various tests for evaluating the fire-resistant properties of plastic materials is presented. On the basis of the results obtained, conclusions are drawn on the advantages and disadvantages of the methods used, and a preferred test method is selected, i.e., the introduction of fire-retardant materials into the polymers.

  7. Multidisciplinary eHealth Survey Evaluation Methods

    ERIC Educational Resources Information Center

    Karras, Bryant T.; Tufano, James T.

    2006-01-01

    This paper describes the development process of an evaluation framework for describing and comparing web survey tools. We believe that this approach will help shape the design, development, deployment, and evaluation of population-based health interventions. A conceptual framework for describing and evaluating web survey systems will enable the…

  8. Muscle fatigue evaluation of astronaut upper limb based on sEMG and subjective assessment

    NASA Astrophysics Data System (ADS)

    Zu, Xiaoqi; Zhou, Qianxiang; Li, Yun

    2012-07-01

    All movements are driven by muscle contraction, which can easily lead to muscle fatigue. The evaluation of muscle fatigue is a hot topic in the area of astronaut life support training and rehabilitation. If a muscle becomes fatigued, work efficiency may be reduced and psychological performance may be affected. Therefore, it is necessary to develop an accurate and usable method for evaluating muscle fatigue of the astronaut upper limb. In this study, we developed a method based on surface electromyography (sEMG) and subjective assessment (Borg scale) to evaluate local muscle fatigue. Fifteen healthy young male subjects participated in the experiment. They performed isometric muscle contractions of the upper limb. sEMG of the biceps brachii was recorded during the entire process of muscle contraction, and Borg scale ratings of muscle fatigue were collected at certain times. The sEMG signals were divided into several parts, and the mean energy of each part was calculated by the one-twelfth octave band method. Equations were derived based on the relationship between the mean energy of the sEMG and the Borg scale. The results showed that a cubic curve could describe the degree of local muscle fatigue, and could be used to evaluate and monitor local muscle fatigue during the entire process.
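
    As a rough sketch of the band-energy step described above, the code below computes mean spectral energy in one-twelfth-octave bands from an sEMG segment via the FFT; the sampling rate, band range and synthetic signal are assumptions, not the authors' exact processing chain.

```python
import numpy as np

def twelfth_octave_band_energy(signal, fs, f_lo=20.0, f_hi=450.0):
    """Mean spectral energy of a signal in one-twelfth-octave bands.

    Returns (band_center_frequencies, mean_energy_per_band).
    """
    x = np.asarray(signal, dtype=float)
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)

    centers, energies = [], []
    fc = f_lo
    while fc <= f_hi:
        lo, hi = fc * 2 ** (-1 / 24), fc * 2 ** (1 / 24)   # band edges
        mask = (freqs >= lo) & (freqs < hi)
        if mask.any():
            centers.append(fc)
            energies.append(spectrum[mask].mean())
        fc *= 2 ** (1 / 12)                                # next band center
    return np.array(centers), np.array(energies)

# Hypothetical 1-second sEMG segment sampled at 1 kHz
rng = np.random.default_rng(3)
emg = rng.normal(0, 1, 1000)
centers, energy = twelfth_octave_band_energy(emg, fs=1000)
```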

  9. Assessment method of digital Chinese dance movements based on virtual reality technology

    NASA Astrophysics Data System (ADS)

    Feng, Wei; Shao, Shuyuan; Wang, Shumin

    2008-03-01

    Virtual reality has played an increasing role in such areas as medicine, architecture, aviation, engineering science and advertising. However, in the art fields, virtual reality is still in its infancy with respect to the representation of human movements. Based on the techniques of motion capture and the reuse of motion capture data in a virtual reality environment, this paper presents an assessment method for evaluating and quantifying dancers' basic Arm Position movements in Chinese traditional dance. In this paper, the data for quantifying traits of dance motions are defined and measured on dances performed by an expert and two beginners, with the results indicating that they are beneficial for evaluating dance skills and distinctiveness, and that the assessment method of digital Chinese dance movements based on virtual reality technology is valid and feasible.

  10. Ensemble-based prediction of RNA secondary structures.

    PubMed

    Aghaeepour, Nima; Hoos, Holger H

    2013-04-24

    Accurate structure prediction methods play an important role for the understanding of RNA function. Energy-based, pseudoknot-free secondary structure prediction is one of the most widely used and versatile approaches, and improved methods for this task have received much attention over the past five years. Despite the impressive progress that has been achieved in this area, existing evaluations of the prediction accuracy achieved by various algorithms do not provide a comprehensive, statistically sound assessment. Furthermore, while there is increasing evidence that no prediction algorithm consistently outperforms all others, no work has been done to exploit the complementary strengths of multiple approaches. In this work, we present two contributions to the area of RNA secondary structure prediction. Firstly, we use state-of-the-art, resampling-based statistical methods together with a previously published and increasingly widely used dataset of high-quality RNA structures to conduct a comprehensive evaluation of existing RNA secondary structure prediction procedures. The results from this evaluation clarify the performance relationship between ten well-known existing energy-based pseudoknot-free RNA secondary structure prediction methods and clearly demonstrate the progress that has been achieved in recent years. Secondly, we introduce AveRNA, a generic and powerful method for combining a set of existing secondary structure prediction procedures into an ensemble-based method that achieves significantly higher prediction accuracies than obtained from any of its component procedures. Our new, ensemble-based method, AveRNA, improves the state of the art for energy-based, pseudoknot-free RNA secondary structure prediction by exploiting the complementary strengths of multiple existing prediction procedures, as demonstrated using a state-of-the-art statistical resampling approach. In addition, AveRNA allows an intuitive and effective control of the trade-off between false negative and false positive base pair predictions. Finally, AveRNA can make use of arbitrary sets of secondary structure prediction procedures and can therefore be used to leverage improvements in prediction accuracy offered by algorithms and energy models developed in the future. Our data, MATLAB software and a web-based version of AveRNA are publicly available at http://www.cs.ubc.ca/labs/beta/Software/AveRNA.

  11. Mass Median Plume Angle: A novel approach to characterize plume geometry in solution based pMDIs.

    PubMed

    Moraga-Espinoza, Daniel; Eshaghian, Eli; Smyth, Hugh D C

    2018-05-30

    High-speed laser imaging (HSLI) is the preferred technique for characterizing the plume geometry of pressurized metered dose inhalers (pMDIs). However, current methods do not allow for simulation of inhalation airflow and do not use drug mass quantification to determine plume angles. To address these limitations, a Plume Induction Port Evaluator (PIPE) was designed to characterize the plume geometry based on mass deposition patterns. The method is easily adaptable to current pMDI characterization methodologies, uses similar calculation methods, and can be used under airflow. The effects of airflow and formulation on the plume geometry were evaluated using PIPE and HSLI. Deposition patterns in PIPE were highly reproducible and log-normally distributed. The Mass Median Plume Angle (MMPA) is a new characterization parameter describing the effective angle of the droplets deposited in the induction port. Plume angles determined by mass showed a significant decrease in size as ethanol content increases, which correlates with the decrease in vapor pressure of the formulation. Additionally, airflow significantly decreased the angle of the plumes when the cascade impactor was operated under flow. PIPE is an alternative to laser-based characterization methods for evaluating the plume angle of pMDIs based on reliable drug quantification while simulating patient inhalation. Copyright © 2018. Published by Elsevier B.V.

  12. Evaluating current automatic de-identification methods with Veteran's health administration clinical documents.

    PubMed

    Ferrández, Oscar; South, Brett R; Shen, Shuying; Friedlin, F Jeffrey; Samore, Matthew H; Meystre, Stéphane M

    2012-07-27

    The increased use and adoption of Electronic Health Records (EHR) causes a tremendous growth in digital information useful for clinicians, researchers and many other operational purposes. However, this information is rich in Protected Health Information (PHI), which severely restricts its access and possible uses. A number of investigators have developed methods for automatically de-identifying EHR documents by removing PHI, as specified in the Health Insurance Portability and Accountability Act "Safe Harbor" method. This study focuses on the evaluation of existing automated text de-identification methods and tools, as applied to Veterans Health Administration (VHA) clinical documents, to assess which methods perform better with each category of PHI found in our clinical notes, and when new methods are needed to improve performance. We installed and evaluated five text de-identification systems "out-of-the-box" using a corpus of VHA clinical documents. The systems based on machine learning methods were trained with the 2006 i2b2 de-identification corpora and evaluated with our VHA corpus, and also evaluated with a ten-fold cross-validation experiment using our VHA corpus. We counted exact, partial, and fully contained matches with reference annotations, considering each PHI type separately, or only one unique 'PHI' category. Performance of the systems was assessed using recall (equivalent to sensitivity) and precision (equivalent to positive predictive value) metrics, as well as the F(2)-measure. Overall, systems based on rules and pattern matching achieved better recall, and precision was always better with systems based on machine learning approaches. The highest "out-of-the-box" F(2)-measure was 67% for partial matches; the best precision and recall were 95% and 78%, respectively. Finally, the ten-fold cross-validation experiment allowed for an increase of the F(2)-measure to 79% with partial matches. The "out-of-the-box" evaluation of text de-identification systems provided us with compelling insight about the best methods for de-identification of VHA clinical documents. The error analysis demonstrated an important need for customization to PHI formats specific to VHA documents. This study informed the planning and development of a "best-of-breed" automatic de-identification application for VHA clinical text.
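
    The recall, precision and F2-measure reported above follow the standard definitions; the sketch below computes them from hypothetical match counts and is not the study's scoring code.

```python
def precision_recall_f2(true_positives, false_positives, false_negatives):
    """Precision, recall, and F2-measure (recall weighted higher than precision)."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    beta = 2.0  # F2: weights recall more heavily than precision
    f2 = (1 + beta**2) * precision * recall / (beta**2 * precision + recall)
    return precision, recall, f2

# Hypothetical counts of PHI annotations matched by a de-identification system
print(precision_recall_f2(true_positives=780, false_positives=40, false_negatives=220))
```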

  13. Novel methods to estimate antiretroviral adherence: protocol for a longitudinal study

    PubMed Central

    Saberi, Parya; Ming, Kristin; Legnitto, Dominique; Neilands, Torsten B; Gandhi, Monica; Johnson, Mallory O

    2018-01-01

    Background There is currently no gold standard for assessing antiretroviral (ARV) adherence, so researchers often resort to the most feasible and cost-effective methods possible (eg, self-report), which may be biased or inaccurate. The goal of our study was to evaluate the feasibility and acceptability of innovative and remote methods to estimate ARV adherence, which can potentially be conducted with less time and financial resources in a wide range of clinic and research settings. Here, we describe the research protocol for studying these novel methods and some lessons learned. Methods The 6-month pilot study aimed to examine the feasibility and acceptability of a remotely conducted study to evaluate the correlation between: 1) text-messaged photographs of pharmacy refill dates for refill-based adherence; 2) text-messaged photographs of pills for pill count-based adherence; and 3) home-collected hair sample measures of ARV concentration for pharmacologic-based adherence. Participants were sent monthly automated text messages to collect refill dates and pill counts that were taken and sent via mobile telephone photographs, and hair collection kits every 2 months by mail. At the study end, feasibility was calculated by specific metrics, such as the receipt of hair samples and responses to text messages. Participants completed a quantitative survey and qualitative exit interviews to examine the acceptability of these adherence evaluation methods. The relationship between the 3 novel metrics of adherence and self-reported adherence will be assessed. Discussion Investigators conducting adherence research are often limited to using either self-reported adherence, which is subjective, biased, and often overestimated, or other more complex methods. Here, we describe the protocol for evaluating the feasibility and acceptability of 3 novel and remote methods of estimating adherence, with the aim of evaluating the relationships between them. Additionally, we note the lessons learned from the protocol implementation to date. We expect that these novel measures will be feasible and acceptable. The implications of this research will be the identification and evaluation of innovative and accurate metrics of ARV adherence for future implementation. PMID:29950816

  14. Research on the Optimization Method of Arm Movement in the Assembly Workshop Based on Ergonomics

    NASA Astrophysics Data System (ADS)

    Hu, X. M.; Qu, H. W.; Xu, H. J.; Yang, L.; Yu, C. C.

    2017-12-01

    In order to improve work efficiency and comfort, ergonomics is used to study the work of operators in the assembly workshop. An optimization algorithm for arm movement in the assembly workshop is proposed. In the algorithm, a mathematical model of arm movement is established based on a multi-rigid-body movement model and the Denavit-Hartenberg (D-H) method. The solution of the inverse kinematics equations of arm movement is obtained through kinematics theory. Evaluation functions for each joint movement and for the whole arm movement are given based on the comfort of the human body joints. The solution method for the optimal arm movement posture based on these evaluation functions is described. The software CATIA is used to verify that the optimal arm movement posture is valid in an example, and the experimental results show the effectiveness of the algorithm.
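
    A minimal sketch of the standard Denavit-Hartenberg (D-H) homogeneous transform underlying such arm models is shown below; the two-link parameters and joint angles are hypothetical, and the paper's evaluation functions are not reproduced.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform from link i-1 to link i."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

# Forward kinematics of a simple 2-link planar arm (hypothetical D-H parameters)
joint_angles = [np.deg2rad(30), np.deg2rad(45)]
links = [dict(d=0.0, a=0.30, alpha=0.0), dict(d=0.0, a=0.25, alpha=0.0)]
T = np.eye(4)
for theta, p in zip(joint_angles, links):
    T = T @ dh_transform(theta, **p)
print(T[:3, 3])   # end-effector position in the base (shoulder) frame
```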

  15. Comparison of methods for the prediction of human clearance from hepatocyte intrinsic clearance for a set of reference compounds and an external evaluation set.

    PubMed

    Yamagata, Tetsuo; Zanelli, Ugo; Gallemann, Dieter; Perrin, Dominique; Dolgos, Hugues; Petersson, Carl

    2017-09-01

    1. We compared the direct scaling, regression model equation and so-called "Poulin et al." methods to scale clearance (CL) from in vitro intrinsic clearance (CLint) measured in human hepatocytes using two sets of compounds: one reference set comprising 20 compounds with known elimination pathways and one external evaluation set of 17 compounds in development at Merck (MS). 2. A 90% prospective confidence interval was calculated using the reference set. This interval was found relevant for the regression equation method. The three outliers identified were justified on the basis of their elimination mechanism. 3. The direct scaling method showed a systematic underestimation of clearance in both the reference and evaluation sets. The "Poulin et al." and the regression equation methods showed no obvious bias in either the reference or evaluation sets. 4. The regression model equation was slightly superior to the "Poulin et al." method in the reference set, showing a better absolute average fold error (AAFE) of 1.3 compared to 1.6. A larger difference was observed in the evaluation set, where the regression method and "Poulin et al." resulted in an AAFE of 1.7 and 2.6, respectively (removing the three compounds with known issues mentioned above). A similar pattern was observed for the correlation coefficient. Based on these data, we suggest the regression equation method combined with a prospective confidence interval as the first choice for the extrapolation of human in vivo hepatic metabolic clearance from in vitro systems.

  16. Measuring Symmetry in Children With Unrepaired Cleft Lip: Defining a Standard for the Three-Dimensional Midfacial Reference Plane.

    PubMed

    Wu, Jia; Heike, Carrie; Birgfeld, Craig; Evans, Kelly; Maga, Murat; Morrison, Clinton; Saltzman, Babette; Shapiro, Linda; Tse, Raymond

    2016-11-01

      Quantitative measures of facial form to evaluate treatment outcomes for cleft lip (CL) are currently limited. Computer-based analysis of three-dimensional (3D) images provides an opportunity for efficient and objective analysis. The purpose of this study was to define a computer-based standard of identifying the 3D midfacial reference plane of the face in children with unrepaired cleft lip for measurement of facial symmetry.   The 3D images of 50 subjects (35 with unilateral CL, 10 with bilateral CL, five controls) were included in this study.   Five methods of defining a midfacial plane were applied to each image, including two human-based (Direct Placement, Manual Landmark) and three computer-based (Mirror, Deformation, Learning) methods.   Six blinded raters (three cleft surgeons, two craniofacial pediatricians, and one craniofacial researcher) independently ranked and rated the accuracy of the defined planes.   Among computer-based methods, the Deformation method performed significantly better than the others. Although human-based methods performed best, there was no significant difference compared with the Deformation method. The average correlation coefficient among raters was .4; however, it was .7 and .9 when the angular difference between planes was greater than 6° and 8°, respectively.   Raters can agree on the 3D midfacial reference plane in children with unrepaired CL using digital surface mesh. The Deformation method performed best among computer-based methods evaluated and can be considered a useful tool to carry out automated measurements of facial symmetry in children with unrepaired cleft lip.

  17. Scenario-Based Case Study Method and the Functionality of the Section Called "From Production to Consumption" from the Perspective of Primary School Students

    ERIC Educational Resources Information Center

    Taneri, Ahu

    2018-01-01

    In this research, the aim was showing the evaluation of students on scenario-based case study method and showing the functionality of the studied section called "from production to consumption". Qualitative research method and content analysis were used to reveal participants' experiences and reveal meaningful relations regarding…

  18. A Hybrid Wavelet-Based Method for the Peak Detection of Photoplethysmography Signals.

    PubMed

    Li, Suyi; Jiang, Shanqing; Jiang, Shan; Wu, Jiang; Xiong, Wenji; Diao, Shu

    2017-01-01

    The noninvasive peripheral oxygen saturation (SpO2) and the pulse rate can be extracted from photoplethysmography (PPG) signals. However, the accuracy of the extraction is directly affected by the quality of the signal obtained and the peak of the signal identified; therefore, a hybrid wavelet-based method is proposed in this study. Firstly, we suppressed the partial motion artifacts and corrected the baseline drift by using a wavelet method based on the principle of wavelet multiresolution. And then, we designed a quadratic spline wavelet modulus maximum algorithm to identify the PPG peaks automatically. To evaluate this hybrid method, a reflective pulse oximeter was used to acquire ten subjects' PPG signals under sitting, raising hand, and gently walking postures, and the peak recognition results on the raw signal and on the corrected signal were compared, respectively. The results showed that the hybrid method not only corrected the morphologies of the signal well but also optimized the peaks identification quality, subsequently elevating the measurement accuracy of SpO2 and the pulse rate. As a result, our hybrid wavelet-based method profoundly optimized the evaluation of respiratory function and heart rate variability analysis.
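
    As a rough sketch of wavelet-multiresolution baseline correction of the kind described above, the code below zeroes the coarsest approximation coefficients using PyWavelets; the wavelet family, decomposition depth and synthetic PPG signal are assumptions, not the authors' exact settings.

```python
import numpy as np
import pywt

def remove_baseline_drift(signal, wavelet="db8", level=None):
    """Suppress baseline drift by zeroing the coarsest approximation coefficients."""
    if level is None:
        level = pywt.dwt_max_level(len(signal), pywt.Wavelet(wavelet).dec_len)
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    coeffs[0] = np.zeros_like(coeffs[0])        # drop the slow baseline component
    corrected = pywt.waverec(coeffs, wavelet)
    return corrected[: len(signal)]             # waverec may pad by one sample

# Hypothetical PPG segment: pulse-like oscillation plus a slow baseline drift
fs = 100.0
t = np.arange(0, 10, 1 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.8 * np.sin(2 * np.pi * 0.05 * t)
clean = remove_baseline_drift(ppg)
```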

  19. A low-cost rapid upper limb assessment method in manual assembly line based on somatosensory interaction technology

    NASA Astrophysics Data System (ADS)

    Jiang, Shengqian; Liu, Peng; Fu, Danni; Xue, Yiming; Luo, Wentao; Wang, Mingjie

    2017-04-01

    As an effective survey method for upper limb disorders, rapid upper limb assessment (RULA) is widely applied in industry. However, it is very difficult to rapidly evaluate operators' postures in real, complex workplaces. In this paper, a real-time RULA method is proposed to accurately assess the potential risk of operators' postures based on somatosensory data collected from the Kinect sensor, a line of motion sensing input devices by Microsoft. First, the static position information of each skeletal point is collected to obtain the effective angles of body parts using calculation methods based on joint angles. Second, an overall RULA score of the body is obtained to assess the risk level of the current posture in real time. Third, these RULA scores are compared with the results provided by a group of ergonomic practitioners who were asked to observe the same static postures. All the experiments were carried out in an ergonomics lab. The results show that the proposed method can detect operators' postures more accurately. Moreover, this method runs in real time, which improves evaluation efficiency.
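
    A generic sketch of extracting one body-part angle (e.g. elbow flexion) from three Kinect skeleton points via the vector dot product is shown below; the joint coordinates are hypothetical, and the full RULA scoring is not reproduced.

```python
import numpy as np

def joint_angle(proximal, joint, distal):
    """Angle (degrees) at `joint` formed by the segments to `proximal` and `distal`."""
    u = np.asarray(proximal, dtype=float) - np.asarray(joint, dtype=float)
    v = np.asarray(distal, dtype=float) - np.asarray(joint, dtype=float)
    cos_angle = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))

# Hypothetical Kinect coordinates (metres) of shoulder, elbow and wrist joints
shoulder, elbow, wrist = [0.0, 1.4, 2.0], [0.05, 1.15, 2.0], [0.30, 1.05, 1.9]
elbow_flexion = joint_angle(shoulder, elbow, wrist)
print(round(elbow_flexion, 1))
```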

  20. A Hybrid Wavelet-Based Method for the Peak Detection of Photoplethysmography Signals

    PubMed Central

    Jiang, Shanqing; Jiang, Shan; Wu, Jiang; Xiong, Wenji

    2017-01-01

    The noninvasive peripheral oxygen saturation (SpO2) and the pulse rate can be extracted from photoplethysmography (PPG) signals. However, the accuracy of the extraction is directly affected by the quality of the signal obtained and the peak of the signal identified; therefore, a hybrid wavelet-based method is proposed in this study. Firstly, we suppressed the partial motion artifacts and corrected the baseline drift by using a wavelet method based on the principle of wavelet multiresolution. And then, we designed a quadratic spline wavelet modulus maximum algorithm to identify the PPG peaks automatically. To evaluate this hybrid method, a reflective pulse oximeter was used to acquire ten subjects' PPG signals under sitting, raising hand, and gently walking postures, and the peak recognition results on the raw signal and on the corrected signal were compared, respectively. The results showed that the hybrid method not only corrected the morphologies of the signal well but also optimized the peaks identification quality, subsequently elevating the measurement accuracy of SpO2 and the pulse rate. As a result, our hybrid wavelet-based method profoundly optimized the evaluation of respiratory function and heart rate variability analysis. PMID:29250135

  1. Using multimodal information for the segmentation of fluorescent micrographs with application to virology and microbiology.

    PubMed

    Held, Christian; Wenzel, Jens; Webel, Rike; Marschall, Manfred; Lang, Roland; Palmisano, Ralf; Wittenberg, Thomas

    2011-01-01

    In order to improve the reproducibility and objectivity of fluorescence microscopy based experiments and to enable the evaluation of large datasets, flexible segmentation methods are required which are able to adapt to different stainings and cell types. This adaptation is usually achieved by manually adjusting the segmentation methods' parameters, which is time consuming and challenging for biologists with no knowledge of image processing. To avoid this, the parameters of the presented methods automatically adapt to user-generated ground truth to determine the best method and the optimal parameter setup. These settings can then be used for segmentation of the remaining images. As robust segmentation methods form the core of such a system, the currently used watershed transform based segmentation routine is replaced by a fast marching level set based segmentation routine which incorporates knowledge of the cell nuclei. Our evaluations reveal that incorporation of multimodal information improves segmentation quality for the presented fluorescent datasets.

  2. [Evidence-based Chinese medicine:theory and practice].

    PubMed

    Zhang, Jun-Hua; Li, You-Ping; Zhang, Bo-Li

    2018-01-01

    The introduction and popularization of evidence-based medicine has opened up a new research field of clinical efficacy evaluation in traditional Chinese medicine (TCM), produced new research ideas and methods, and promoted the progress of clinical research on TCM. After about 20 years of assiduous study and earnest practice, evidence-based evaluation methods and techniques that conform to the characteristics of TCM theory and practice have developed continuously. Evidence-based Chinese medicine (EBCM) has gradually taken shape and become an important branch of evidence-based medicine. The basic concept of EBCM: it is an applied discipline that follows the theory and methodology of evidence-based medicine to collect, evaluate, produce and transform evidence on the effectiveness, safety and economy of TCM, to reveal the features and regular patterns of how TCM takes effect, and to guide the development of clinical guidelines, clinical pathways and health decisions. The effects and achievements of EBCM development: secondary studies, mainly systematic reviews and meta-analyses, have been carried out extensively; clinical efficacy studies, mainly randomized controlled trials, have grown rapidly; clinical safety evaluations based on real-world studies have been conducted; methodological research focused on study quality control has deepened gradually; internationalization research, mainly on reporting specifications, has achieved some breakthroughs; standardization research based on treatment specifications has been strengthened gradually; and interdisciplinary research teams and talent have steadily increased. A number of high-quality research findings have been published in well-known international journals; the clinical efficacy and safety evidence of TCM has increased; the level of rational clinical use of TCM has improved; and a large number of Chinese patent medicines with large markets have been cultivated. The future missions of EBCM mainly consist of four categories (scientific research, methodology and standards, platform construction, and personnel training) with nine tasks. ①Carry out systematic reviews to systematically collect clinical trial reports of TCM and establish a database of clinical evidence of TCM; ②Carry out evidence transformation research to lay the foundation for the development of clinical diagnosis and treatment guidelines and clinical pathways of TCM, for the screening of the basic drug list and the medical insurance list, and for policy-making relevant to TCM; ③Conduct research to evaluate the advantages and effective regular patterns of TCM and form the evidence chain of TCM efficacy; ④Carry out research on the safety evaluation of TCM, and provide evidence supporting the rational and safe use of TCM in clinical practice; ⑤Conduct research on the methodology of EBCM and provide methods for developing high-quality evidence; ⑥Carry out research to develop standards and norms of TCM, and to form methods, standards, specifications and technical systems; ⑦Establish a data management platform for evidence-based evaluation of TCM, and promote data sharing; ⑧Build an international academic exchange platform to promote international cooperation and mutual recognition of EBCM research; ⑨Carry out education and popularization activities on evidence-based evaluation methods, and train undergraduate students, graduate students, clinical healthcare providers and practitioners of TCM. The development of EBCM has not only promoted the transformation of the clinical research and decision-making mode of TCM and contributed to the modernization and internationalization of TCM, but also enriched the connotation of evidence-based medicine. Copyright© by the Chinese Pharmaceutical Association.

  3. Work organization in hospital wards and nurses' emotional exhaustion: A multi-method study of observation-based assessment and nurses' self-reports.

    PubMed

    Stab, Nicole; Hacker, Winfried; Weigl, Matthias

    2016-09-01

    Ward organization is a major determinant of nurses' well-being on the job. The majority of previous research on this relationship is based on single-source methods, which have been criticized for producing skewed estimates, mainly because of the subjectivity of the ratings and common-source bias. To investigate the association between ward organization characteristics and nurses' exhaustion, we combined observation-based assessments with nurses' self-reports. Cross-sectional study on 25 wards of four hospitals with 245 nurses. Our multi-method approach to evaluating hospital ward organization consisted of on-site observations with a standardized assessment tool and of questionnaires capturing nurses' self-reports and exhaustion. After establishing the reliability of our measures, we applied multi-level regression analyses to determine associations between determinant and outcome variables. We found substantial convergence between the observation-based assessments of ward organization and nurses' self-reports, which supports the validity of our external assessments. Furthermore, two observation-based characteristics, namely participation and patient-focused care, were significantly associated with lower emotional exhaustion among the nurses. Our results suggest that observation-based assessments are a valid and feasible way to assess ward organization in hospitals. Both nurses' self-reported and observation-based ratings of ward organization were associated with nurses' emotional exhaustion. This is of interest mainly for identifying alternative measures for evaluating nurses' work environments, informing health promotion activities and evaluating job redesign interventions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Community-based interventions to promote increased physical activity: a primer.

    PubMed

    Bopp, Melissa; Fallon, Elizabeth

    2008-01-01

    Current recommendations, based on an abundance of empirical data documenting the impact of physical activity (PA) on preventing morbidity and mortality associated with common chronic diseases, indicate that adults should accumulate 30 minutes of moderate-intensity PA ≥5 days per week. However, worldwide rates of PA remain low, indicating a great need for large-scale implementation of evidence-based PA interventions. We briefly present practical aspects of intervention planning, implementation and evaluation within common community settings. The first stage of intervention planning is formative research, which allows for a better understanding of the elements needed for a successful intervention. Partnering with community settings (schools, worksites, faith-based organizations and healthcare organizations) offers many benefits and the opportunity to reach specific populations. Setting-based approaches allow for multilevel strategies, ranging from individual-based programmes and educational initiatives to physical and social environmental changes. Various settings such as healthcare, worksite, and school- and community-based settings are discussed. Intervention delivery methods and strategies can range, depending on the population and setting targeted, from small-group approaches to mediated methods (e.g. print, telephone, electronic). The final phase of intervention planning and implementation is evaluation. Several objective and subjective methods of PA assessment are available to determine the effectiveness of the intervention. We have highlighted the need for process evaluation of intervention implementation to provide valuable information for the dissemination and sustainability of successful interventions. Although there are numerous considerations for the design, implementation, assessment and evaluation of PA interventions, the potential for positive impact on the overall health of the public indicates the necessity for programmes designed to increase PA.

  5. Effect of three decellularisation protocols on the mechanical behaviour and structural properties of sheep aortic valve conduits.

    PubMed

    Khorramirouz, Reza; Sabetkish, Shabnam; Akbarzadeh, Aram; Muhammadnejad, Ahad; Heidari, Reza; Kajbafzadeh, Abdol-Mohammad

    2014-09-01

    To determine the best method for decellularisation of aortic valve conduits (AVCs) that efficiently removes the cells while preserving the extracellular matrix (ECM) by examining the valvular and conduit sections separately. Sheep AVCs were decellularised by using three different protocols: detergent-based (1% SDS+1% SDC), detergent and enzyme-based (Triton+EDTA+RNase and DNase), and enzyme-based (Trypsin+RNase and DNase) methods. The efficacy of the decellularisation methods to completely remove the cells while preserving the ECM was evaluated by histological evaluation, scanning electron microscopy (SEM), hydroxyproline analysis, tensile test, and DAPI staining. The detergent-based method completely removed the cells and left the ECM and collagen content in the valve and conduit sections relatively well preserved. The detergent and enzyme-based protocol did not completely remove the cells, but left the collagen content in both sections well preserved. ECM deterioration was observed in the aortic valves (AVs), but the ultrastructure of the conduits was well preserved, with no media distortion. The enzyme-based protocol removed the cells relatively well; however, mild structural distortion and poor collagen content was observed in the AVs. Incomplete cell removal (better than that observed with the detergent and enzyme-based protocol), poor collagen preservation, and mild structural distortion were observed in conduits treated with the enzyme-based method. The results suggested that the detergent-based methods are the most effective protocols for cell removal and ECM preservation of AVCs. The AVCs treated with this detergent-based method may be excellent scaffolds for recellularisation. Copyright © 2014 Medical University of Bialystok. Published by Elsevier Urban & Partner Sp. z o.o. All rights reserved.

  6. Evaluation of microplate immunocapture method for detection of Vibrio cholerae, Salmonella Typhi and Shigella flexneri from food.

    PubMed

    Fakruddin, Md; Hossain, Md Nur; Ahmed, Monzur Morshed

    2017-08-29

    Improved methods with better separation and concentration ability for the detection of foodborne pathogens are in constant need. The aim of this study was to evaluate a microplate immunocapture (IC) method for the detection of Salmonella Typhi, Shigella flexneri and Vibrio cholerae from food samples, to provide a better alternative to conventional culture-based methods. The IC method was optimized for incubation time, bacterial concentration, and capture efficiency; a 6 h incubation and a cell concentration of 6 log CFU/ml provided optimal results. The method was shown to be highly specific for the pathogens concerned. Capture efficiency (CE) was around 100% for the target pathogens, whereas CE was either zero or very low for non-target pathogens. The IC method also showed better pathogen detection ability at different cell concentrations in artificially contaminated food samples in comparison with culture-based methods. The performance parameters of the method were also comparable (detection limit: 25 CFU/25 g; sensitivity: 100%; specificity: 96.8%; accuracy: 96.7%), and even better than those of culture-based methods (detection limit: 125 CFU/25 g; sensitivity: 95.9%; specificity: 97%; accuracy: 96.2%). The IC method has the potential to be used as a method of choice for the detection of foodborne pathogens in routine laboratory practice after proper validation.
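
    For reference, the reported sensitivity, specificity and accuracy are the usual confusion-matrix quantities; the counts in the snippet below are hypothetical and only illustrate the calculation.

        def performance(tp, fp, tn, fn):
            """Sensitivity, specificity and accuracy from confusion-matrix counts."""
            sensitivity = tp / (tp + fn)
            specificity = tn / (tn + fp)
            accuracy = (tp + tn) / (tp + fp + tn + fn)
            return sensitivity, specificity, accuracy

        # Hypothetical counts, not taken from the study
        print(performance(tp=48, fp=1, tn=30, fn=0))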

  7. OPTIMIZING USABILITY OF AN ECONOMIC DECISION SUPPORT TOOL: PROTOTYPE OF THE EQUIPT TOOL.

    PubMed

    Cheung, Kei Long; Hiligsmann, Mickaël; Präger, Maximilian; Jones, Teresa; Józwiak-Hagymásy, Judit; Muñoz, Celia; Lester-George, Adam; Pokhrel, Subhash; López-Nicolás, Ángel; Trapero-Bertran, Marta; Evers, Silvia M A A; de Vries, Hein

    2018-01-01

    Economic decision-support tools can provide valuable information for tobacco control stakeholders, but their usability may impact the adoption of such tools. This study aims to illustrate a mixed-method usability evaluation of an economic decision-support tool for tobacco control, using the EQUIPT ROI tool prototype as a case study. A cross-sectional mixed methods design was used, including a heuristic evaluation, a thinking aloud approach, and a questionnaire testing and exploring the usability of the Return on Investment (ROI) tool. A total of sixty-six users evaluated the tool (thinking aloud) and completed the questionnaire. For the heuristic evaluation, four experts evaluated the interface. In total, twenty-one percent of the respondents perceived good usability. A total of 118 usability problems were identified, of which twenty-six were categorized as most severe, indicating high priority to fix them before implementation. Combining user-based and expert-based evaluation methods is recommended as these were shown to identify unique usability problems. The evaluation provides input to optimize usability of a decision-support tool, and may serve as a vantage point for other developers to conduct usability evaluations to refine similar tools before wide-scale implementation. Such studies could reduce implementation gaps by optimizing usability, enhancing in turn the research impact of such interventions.

  8. Evaluating the evaluation of cancer driver genes

    PubMed Central

    Tokheim, Collin J.; Papadopoulos, Nickolas; Kinzler, Kenneth W.; Vogelstein, Bert; Karchin, Rachel

    2016-01-01

    Sequencing has identified millions of somatic mutations in human cancers, but distinguishing cancer driver genes remains a major challenge. Numerous methods have been developed to identify driver genes, but evaluation of the performance of these methods is hindered by the lack of a gold standard, that is, bona fide driver gene mutations. Here, we establish an evaluation framework that can be applied to driver gene prediction methods. We used this framework to compare the performance of eight such methods. One of these methods, described here, incorporated a machine-learning–based ratiometric approach. We show that the driver genes predicted by each of the eight methods vary widely. Moreover, the P values reported by several of the methods were inconsistent with the uniform values expected, thus calling into question the assumptions that were used to generate them. Finally, we evaluated the potential effects of unexplained variability in mutation rates on false-positive driver gene predictions. Our analysis points to the strengths and weaknesses of each of the currently available methods and offers guidance for improving them in the future. PMID:27911828
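
    One of the checks described, comparing reported P values with the uniform distribution expected under the null, can be illustrated with a short script; the Kolmogorov-Smirnov test and the inflation factor below are generic choices, not necessarily the statistics used by the authors.

        import numpy as np
        from scipy import stats

        def check_p_value_calibration(p_values):
            """Compare a set of P values with the uniform distribution expected under the null.
            Returns the KS statistic and p-value plus an inflation factor (~1 if well calibrated)."""
            p = np.asarray(p_values, dtype=float)
            ks_stat, ks_p = stats.kstest(p, "uniform")
            chi2 = stats.chi2.isf(p, df=1)                      # convert P values to chi-square statistics
            lam = np.median(chi2) / stats.chi2.ppf(0.5, df=1)   # genomic-inflation-style factor
            return ks_stat, ks_p, lam

        # Hypothetical example: well-calibrated vs. anti-conservative P values
        rng = np.random.default_rng(0)
        print(check_p_value_calibration(rng.uniform(size=5000)))
        print(check_p_value_calibration(rng.uniform(size=5000) ** 2))  # too many small P values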

  9. Using Corporate-Based Methods To Assess Technical Communication Programs.

    ERIC Educational Resources Information Center

    Faber, Brenton; Bekins, Linn; Karis, Bill

    2002-01-01

    Investigates methods of program assessment used by corporate learning sites and profiles value added methods as a way to both construct and evaluate academic programs in technical communication. Examines and critiques assessment methods from corporate training environments including methods employed by corporate universities and value added…

  10. Discussion on accuracy degree evaluation of accident velocity reconstruction model

    NASA Astrophysics Data System (ADS)

    Zou, Tiefang; Dai, Yingbiao; Cai, Ming; Liu, Jike

    In order to investigate the applicability of accident velocity reconstruction models in different cases, a method for evaluating the accuracy degree of such models is given. Based on the theoretical and calculated pre-crash velocities, an accuracy degree evaluation formula is obtained. In a numerical simulation case, the accuracy degrees and applicability of two accident velocity reconstruction models are analyzed; the results show that the method is feasible in practice.
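
    The abstract does not state the formula itself; one plausible form, used purely for illustration, expresses the accuracy degree as one minus the relative error between the calculated and the theoretical pre-crash velocity.

        def accuracy_degree(v_theory, v_calculated):
            """Illustrative accuracy-degree measure: 1 minus the relative error of the
            reconstructed pre-crash velocity (the abstract does not give the exact formula)."""
            return 1.0 - abs(v_calculated - v_theory) / abs(v_theory)

        # Hypothetical simulation values (km/h)
        print(round(accuracy_degree(v_theory=62.0, v_calculated=58.5), 3))  # 0.944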

  11. Evaluation of health promotion in schools: a realistic evaluation approach using mixed methods

    PubMed Central

    2010-01-01

    Background Schools are key settings for health promotion (HP) but the development of suitable approaches for evaluating HP in schools is still a major topic of discussion. This article presents a research protocol of a program developed to evaluate HP. After reviewing HP evaluation issues, the various possible approaches are analyzed and the importance of a realistic evaluation framework and a mixed methods (MM) design are demonstrated. Methods/Design The design is based on a systemic approach to evaluation, taking into account the mechanisms, context and outcomes, as defined in realistic evaluation, adjusted to our own French context using an MM approach. The characteristics of the design are illustrated through the evaluation of a nationwide HP program in French primary schools designed to enhance children's social, emotional and physical health by improving teachers' HP practices and promoting a healthy school environment. An embedded MM design is used in which a qualitative data set plays a supportive, secondary role in a study based primarily on a different quantitative data set. The way the qualitative and quantitative approaches are combined through the entire evaluation framework is detailed. Discussion This study is a contribution towards the development of suitable approaches for evaluating HP programs in schools. The systemic approach of the evaluation carried out in this research is appropriate since it takes account of the limitations of traditional evaluation approaches and considers suggestions made by the HP research community. PMID:20109202

  12. Benchmarking of Methods for Genomic Taxonomy

    DOE PAGES

    Larsen, Mette V.; Cosentino, Salvatore; Lukjancenko, Oksana; ...

    2014-02-26

    One of the first issues that emerges when a prokaryotic organism of interest is encountered is the question of what it is—that is, which species it is. The 16S rRNA gene formed the basis of the first method for sequence-based taxonomy and has had a tremendous impact on the field of microbiology. Nevertheless, the method has been found to have a number of shortcomings. In this paper, we trained and benchmarked five methods for whole-genome sequence-based prokaryotic species identification on a common data set of complete genomes: (i) SpeciesFinder, which is based on the complete 16S rRNA gene; (ii) Reads2Type, which searches for species-specific 50-mers in either the 16S rRNA gene or the gyrB gene (for the Enterobacteriaceae family); (iii) the ribosomal multilocus sequence typing (rMLST) method, which samples up to 53 ribosomal genes; (iv) TaxonomyFinder, which is based on species-specific functional protein domain profiles; and finally (v) KmerFinder, which examines the number of co-occurring k-mers (substrings of k nucleotides in DNA sequence data). The performances of the methods were subsequently evaluated on three data sets of short sequence reads or draft genomes from public databases. In total, the evaluation sets constituted sequence data from more than 11,000 isolates covering 159 genera and 243 species. Our results indicate that methods that sample only chromosomal, core genes have difficulties distinguishing closely related species which only recently diverged. Overall, the KmerFinder method had the highest accuracy and correctly identified from 93% to 97% of the isolates in the evaluation sets.
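
    The KmerFinder idea of scoring genomes by shared k-mers can be sketched in a few lines; the toy sequences and the choice of k below are illustrative only and do not reflect the actual implementation.

        def kmers(seq, k=16):
            """Set of all overlapping k-mers in a sequence."""
            seq = seq.upper()
            return {seq[i:i + k] for i in range(len(seq) - k + 1)}

        def best_match(query, references, k=16):
            """Rank reference genomes by the number of k-mers they share with the query."""
            q = kmers(query, k)
            scores = {name: len(q & kmers(seq, k)) for name, seq in references.items()}
            return max(scores, key=scores.get), scores

        # Toy sequences for illustration only
        refs = {
            "species_A": "ATGCGTACGTTAGCATGCGTACGTTAGCATGCGTACGTTAGC",
            "species_B": "TTTTGGGGCCCCAAAATTTTGGGGCCCCAAAATTTTGGGGCC",
        }
        print(best_match("ATGCGTACGTTAGCATGCGTACG", refs, k=8))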

  13. Wastewater treatment evaluation for enterprises based on fuzzy-AHP comprehensive evaluation: a case study in industrial park in Taihu Basin, China.

    PubMed

    Hu, Wei; Liu, Guangbing; Tu, Yong

    2016-01-01

    This paper applies the fuzzy comprehensive evaluation (FCE) technique and the analytic hierarchy process (AHP) to evaluate wastewater treatment by enterprises. Based on the characteristics of wastewater treatment by enterprises in the Taihu basin, an evaluation index system was established and the AHP method was applied to determine the index weights. The AHP and FCE methods were then combined to assess the wastewater treatment level of three representative enterprises. The results show that the evaluation grades of enterprise 1, enterprise 2 and enterprise 3 were middle, good and excellent, respectively. Finally, the scores of the three enterprises were calculated on a hundred-mark scale; enterprise 3 had the highest wastewater treatment level, followed by enterprise 2 and enterprise 1. The application of this work can make the evaluation results more scientific and accurate. It is expected that this work may serve as an assistance tool for enterprise managers in improving their wastewater treatment level.
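
    The combination of AHP weighting and fuzzy comprehensive evaluation follows a standard recipe: derive index weights from a pairwise comparison matrix, then combine them with a membership matrix over the evaluation grades. The pairwise judgments and membership values below are hypothetical.

        import numpy as np

        def ahp_weights(pairwise):
            """Index weights as the normalised principal eigenvector of a pairwise comparison matrix."""
            vals, vecs = np.linalg.eig(np.asarray(pairwise, dtype=float))
            principal = np.real(vecs[:, np.argmax(np.real(vals))])
            return principal / principal.sum()

        def fuzzy_evaluation(weights, membership):
            """Fuzzy comprehensive evaluation: weighted combination of the membership matrix,
            one row per index, one column per grade (e.g. poor/middle/good/excellent)."""
            scores = np.asarray(weights) @ np.asarray(membership)
            return scores / scores.sum()

        # Hypothetical three-index example
        pairwise = [[1, 3, 5],
                    [1/3, 1, 2],
                    [1/5, 1/2, 1]]
        membership = [[0.1, 0.3, 0.4, 0.2],    # grades: poor, middle, good, excellent
                      [0.0, 0.2, 0.5, 0.3],
                      [0.2, 0.4, 0.3, 0.1]]
        w = ahp_weights(pairwise)
        print(np.round(w, 3), np.round(fuzzy_evaluation(w, membership), 3))

    The grade with the largest component of the resulting vector is taken as the overall evaluation grade.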

  14. Pansharpening on the Narrow Vnir and SWIR Spectral Bands of SENTINEL-2

    NASA Astrophysics Data System (ADS)

    Vaiopoulos, A. D.; Karantzalos, K.

    2016-06-01

    In this paper, results from the evaluation of several state-of-the-art pansharpening techniques are presented for the VNIR and SWIR bands of Sentinel-2. A pansharpening procedure is also proposed which aims to respect the closest spectral similarities between the higher and lower resolution bands. The evaluation included 21 different fusion algorithms and three evaluation frameworks based both on standard quantitative image similarity indexes and on qualitative evaluation by remote sensing experts. The overall analysis indicated that the remote sensing experts disagreed with the outcomes and method ranking of the quantitative assessment. The employed image quality similarity indexes and the quantitative evaluation framework, based on both high- and reduced-resolution data from the literature, failed to adequately capture the spatial information that was injected into the lower resolution images. Regarding the SWIR bands, none of the methods delivered significantly better results than a standard bicubic interpolation of the original low resolution bands.
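
    As an example of the kind of quantitative similarity index used in such evaluations, the sketch below computes the spectral angle mapper (SAM) between a fused image and a reference; the random test patches are placeholders.

        import numpy as np

        def spectral_angle_mapper(fused, reference):
            """Mean spectral angle (degrees) between fused and reference images,
            both shaped (rows, cols, bands); 0 means identical spectra."""
            f = fused.reshape(-1, fused.shape[-1]).astype(float)
            r = reference.reshape(-1, reference.shape[-1]).astype(float)
            num = np.sum(f * r, axis=1)
            den = np.linalg.norm(f, axis=1) * np.linalg.norm(r, axis=1) + 1e-12
            angles = np.arccos(np.clip(num / den, -1.0, 1.0))
            return np.degrees(angles.mean())

        # Hypothetical 4-band image patches
        rng = np.random.default_rng(1)
        ref = rng.random((32, 32, 4))
        fused = ref + rng.normal(scale=0.02, size=ref.shape)
        print(round(spectral_angle_mapper(fused, ref), 2))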

  15. Comparison of Control Group Generating Methods.

    PubMed

    Szekér, Szabolcs; Fogarassy, György; Vathy-Fogarassy, Ágnes

    2017-01-01

    Retrospective studies suffer from drawbacks such as selection bias. As the selection of the control group has a significant impact on the evaluation of the results, it is very important to find the proper method to generate the most appropriate control group. In this paper we suggest two nearest neighbors based control group selection methods that aim to achieve good matching between the individuals of case and control groups. The effectiveness of the proposed methods is evaluated by runtime and accuracy tests and the results are compared to the classical stratified sampling method.
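
    The paper's two algorithms are not reproduced here; the sketch below only illustrates generic greedy 1:1 nearest-neighbour matching of controls to cases on standardised covariates, assuming scikit-learn and hypothetical covariate data.

        import numpy as np
        from sklearn.neighbors import NearestNeighbors
        from sklearn.preprocessing import StandardScaler

        def select_controls(case_covariates, pool_covariates):
            """Match each case to its nearest unused candidate in the control pool
            (greedy 1:1 nearest-neighbour matching on standardised covariates)."""
            scaler = StandardScaler().fit(np.vstack([case_covariates, pool_covariates]))
            cases, pool = scaler.transform(case_covariates), scaler.transform(pool_covariates)
            nn = NearestNeighbors(n_neighbors=len(pool)).fit(pool)
            _, order = nn.kneighbors(cases)
            chosen, used = [], set()
            for ranked in order:                       # ranked candidate indices for one case
                match = next(i for i in ranked if i not in used)
                used.add(match)
                chosen.append(match)
            return chosen                              # indices into the control pool

        # Hypothetical covariates: age and BMI
        rng = np.random.default_rng(2)
        cases = rng.normal([60, 28], [5, 3], size=(10, 2))
        pool = rng.normal([50, 25], [12, 5], size=(200, 2))
        print(select_controls(cases, pool))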

  16. [Evaluating the Significance of Odor Gas Released During the Directly Drying Process of Sludge: Based on the Multi-index Integrated Assessment Method].

    PubMed

    Ding, Wen-jie; Chen, Wen-he; Deng, Ming-jia; Luo, Hui; Li, Lin; Liu, Jun-xin

    2016-02-15

    Co-processing of sewage sludge in cement kilns can achieve harmless treatment, volume reduction, stabilization and reutilization of the sludge. The moisture content should be reduced to below 30% to meet the requirements of combustion, and thermal drying is an effective way to desiccate the sludge. Odors and volatile organic compounds are generated and released during the sludge drying process, which can lead to odor pollution. The main odor pollutants were selected by a multi-index integrated assessment method, with concentration, olfactory threshold, threshold limit value, smell security level and saturated vapor pressure used as indexes based on the relevant regulations in China and abroad. Taking the pollution potential as the evaluation target, and the risk index and odor emission intensity as evaluation indexes, a rated evaluation model of the pollutants' odor pollution potential was built according to the Weber-Fechner law. The aim of the present study is to establish a rating evaluation method for odor pollution potential suitable for the direct drying process of sludge.
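
    The Weber-Fechner law relates perceived odor intensity to the logarithm of the stimulus; the study's fitted coefficients are not given in the abstract, so the scaling constant and the example pollutant below are assumptions for illustration.

        import math

        def odor_emission_intensity(concentration, olfactory_threshold, k=2.0):
            """Weber-Fechner-style intensity: perceived strength grows with the log of the
            concentration relative to the olfactory threshold. k is an assumed scaling constant."""
            return k * math.log10(concentration / olfactory_threshold)

        # Hypothetical pollutant: concentration 0.5 mg/m3, olfactory threshold 0.001 mg/m3
        print(round(odor_emission_intensity(0.5, 0.001), 2))   # about 5.4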

  17. Unmanned aircraft system sense and avoid integrity and continuity

    NASA Astrophysics Data System (ADS)

    Jamoom, Michael B.

    This thesis describes new methods to guarantee safety of sense and avoid (SAA) functions for Unmanned Aircraft Systems (UAS) by evaluating integrity and continuity risks. Previous SAA efforts focused on relative safety metrics, such as risk ratios, comparing the risk of using an SAA system versus not using it. The methods in this thesis evaluate integrity and continuity risks as absolute measures of safety, as is the established practice in commercial aircraft terminal area navigation applications. The main contribution of this thesis is a derivation of a new method, based on a standard intruder relative constant velocity assumption, that uses hazard state estimates and estimate error covariances to establish (1) the integrity risk of the SAA system not detecting imminent loss of "well clear," which is the time and distance required to maintain safe separation from intruder aircraft, and (2) the probability of false alert, the continuity risk. Another contribution is applying these integrity and continuity risk evaluation methods to set quantifiable and certifiable safety requirements on sensors. A sensitivity analysis uses this methodology to evaluate the impact of sensor errors on integrity and continuity risks. The penultimate contribution is an integrity and continuity risk evaluation where the estimation model is refined to address realistic intruder relative linear accelerations, which goes beyond the current constant velocity standard. The final contribution is an integrity and continuity risk evaluation addressing multiple intruders. This evaluation is a new innovation-based method to determine the risk of mis-associating intruder measurements. A mis-association occurs when the SAA system incorrectly associates a measurement to the wrong intruder, causing large errors in the estimated intruder trajectories. The new methods described in this thesis can help ensure safe encounters between aircraft and enable SAA sensor certification for UAS integration into the National Airspace System.
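
    As a heavily simplified, one-dimensional stand-in for the covariance-based integrity evaluation described above, the snippet below computes the probability that the true miss distance lies inside the well-clear radius given a Gaussian estimate error; the scalar model and the numbers are assumptions, not the thesis's actual formulation.

        from scipy.stats import norm

        def missed_detection_risk(estimated_miss_distance, sigma, well_clear_radius):
            """Probability that the true miss distance is inside the well-clear radius
            although the estimate suggests a safe pass, assuming a Gaussian estimate error."""
            return norm.cdf(well_clear_radius, loc=estimated_miss_distance, scale=sigma)

        # Hypothetical encounter: estimated miss distance 2.0 NM, 1-sigma error 0.3 NM, well clear 0.66 NM
        print(missed_detection_risk(2.0, 0.3, 0.66))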

  18. Development of nondestructive methods for measurement of slab thickness and modulus of rupture in concrete pavements.

    DOT National Transportation Integrated Search

    2005-01-01

    This report describes work to develop non-destructive testing methods for concrete pavements. Two methods, for pavement thickness and in-place strength estimation, respectively, were developed and evaluated. The thickness estimation method is based o...

  19. Scientific and Humanistic Evaluations of Follow Through.

    ERIC Educational Resources Information Center

    House, Ernest R.

    The thesis of this paper is that the humanistic mode of inquiry is underemployed in evaluation studies and the future evaluation of Follow Through could profitably use humanistic approaches. The original Follow Through evaluation was based on the assumption that the world consists of a single system explainable by appropriate methods; the…

  20. Broadening the Educational Evaluation Lens with Communicative Evaluation

    ERIC Educational Resources Information Center

    Brooks-LaRaviere, Margaret; Ryan, Katherine; Miron, Luis; Samuels, Maurice

    2009-01-01

    Outcomes-based accountability in the form of test scores and performance indicators are a primary lever for improving student achievement in the current educational landscape. The article presents communicative evaluation as a complementary evaluation approach that may be used along with the primary methods of school accountability to provide a…
