Evaluation of background parenchymal enhancement on breast MRI: a systematic review
Signori, Alessio; Valdora, Francesca; Rossi, Federica; Calabrese, Massimo; Durando, Manuela; Mariscotto, Giovanna; Tagliafico, Alberto
2017-01-01
Objective: To perform a systematic review of the methods used for background parenchymal enhancement (BPE) evaluation on breast MRI. Methods: Studies dealing with BPE assessment on breast MRI were retrieved from major medical libraries independently by four reviewers up to 6 October 2015. The keywords used for database searching were “background parenchymal enhancement”, “parenchymal enhancement”, “MRI” and “breast”. The studies were included if qualitative and/or quantitative methods for BPE assessment were described. Results: Of the 420 studies identified, a total of 52 articles were included in the systematic review. Twenty-eight studies performed only a qualitative assessment of BPE, 13 studies performed only a quantitative assessment and 11 studies performed both qualitative and quantitative assessments. Wide heterogeneity was found in the MRI sequences and in the quantitative methods used for BPE assessment. Conclusion: A wide variability exists in the quantitative evaluation of BPE on breast MRI. More studies focused on a reliable and comparable method for quantitative BPE assessment are needed. Advances in knowledge: More studies focused on a quantitative BPE assessment are needed. PMID:27925480
Comparative Evaluation of Quantitative Test Methods for Gases on a Hard Surface
2017-02-01
ECBC-TR-1426; Vipin Rastogi. Experimental Design: Each quantitative method was performed three times on three consecutive days. For the CD runs, three…
Park, Jiyoon; Woo, Ok Hee; Shin, Hye Seon; Cho, Kyu Ran; Seo, Bo Kyoung; Kang, Eun Young
2015-10-01
The purpose of this study was to evaluate the diagnostic performance of shear wave elastography (SWE) for palpable breast masses and to compare the SWE color overlay pattern with conventional ultrasound (US) and with quantitative SWE for assessing palpable breast masses. SWE and conventional breast US were performed in 133 women with 156 palpable breast lesions (81 benign, 75 malignant) between August 2013 and June 2014. Either pathology or periodic imaging surveillance for more than 2 years served as the reference standard. The performing radiologists were blinded to any previous imaging. The US BI-RADS final assessment and qualitative and quantitative SWE measurements were evaluated. The diagnostic performances of grayscale US, SWE, and US combined with SWE were calculated and compared. The correlation between pattern classification and quantitative SWE was evaluated. Both the color overlay pattern and quantitative SWE improved the specificity of conventional US from 81.48% to 96.30% (p=0.0005), without improvement in sensitivity. The color overlay pattern was significantly related to all quantitative SWE parameters and to the malignancy rate (p<0.0001). The optimal cutoff of the color overlay pattern was between 2 and 3. Emax with an optimal cutoff of 45.1 kPa showed the highest Az value, sensitivity, specificity and accuracy among the quantitative SWE parameters (p<0.0001). An echogenic halo on grayscale US showed significant correlation with the color overlay pattern and with pathology (p<0.0001). In the evaluation of palpable breast masses, conventional US combined with SWE improves specificity and reduces the number of biopsies that ultimately yield a benign result. Color overlay pattern classification is quicker and easier and may represent quantitative SWE measurements with similar diagnostic performance. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
ERIC Educational Resources Information Center
Harlow, Lisa L.; Burkholder, Gary J.; Morrow, Jennifer A.
2002-01-01
Used a structural modeling approach to evaluate relations among attitudes, initial skills, and performance in a Quantitative Methods course that involved students in active learning. Results largely confirmed hypotheses offering support for educational reform efforts that propose actively involving students in the learning process, especially in…
NASA Astrophysics Data System (ADS)
Gupta, Arun; Kim, Kyeong Yun; Hwang, Donghwi; Lee, Min Sun; Lee, Dong Soo; Lee, Jae Sung
2018-06-01
SPECT plays an important role in peptide receptor targeted radionuclide therapy using theranostic radionuclides such as Lu-177 for the treatment of various cancers. However, SPECT studies must be quantitatively accurate because reliable assessment of tumor uptake and tumor-to-normal tissue ratios can only be performed using quantitatively accurate images. Hence, it is important to evaluate the performance parameters and quantitative accuracy of preclinical SPECT systems for therapeutic radioisotopes before conducting pre- and post-therapy SPECT imaging or dosimetry studies. In this study, we evaluated the system performance and quantitative accuracy of the NanoSPECT/CT scanner for Lu-177 imaging using point source and uniform phantom studies. We measured the recovery coefficient, uniformity, spatial resolution, system sensitivity and calibration factor for the mouse whole-body standard aperture. We also performed the experiments using Tc-99m to compare the results with those of Lu-177. We found a recovery coefficient of more than 70% for Lu-177 at the optimum noise level when nine iterations were used. The spatial resolution of Lu-177 with and without a uniform background was comparable to that of Tc-99m in the axial, radial and tangential directions. The system sensitivity measured for Lu-177 was almost three times lower than that of Tc-99m.
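As a rough illustration of how the phantom-derived figures named in this abstract are typically obtained, the sketch below computes a calibration factor (count rate per unit activity) and a recovery coefficient (image-derived versus true activity concentration). All numbers are hypothetical placeholders, not measurements from the study.

```python
# Illustrative only: calibration factor and recovery coefficient from phantom data.
# All values are hypothetical placeholders, not results from the cited study.

def calibration_factor(count_rate_cps: float, true_activity_mbq: float) -> float:
    """Counts per second registered per MBq of activity in the field of view."""
    return count_rate_cps / true_activity_mbq

def recovery_coefficient(measured_conc: float, true_conc: float) -> float:
    """Ratio of image-derived to true activity concentration in a small object."""
    return measured_conc / true_conc

cf = calibration_factor(count_rate_cps=1250.0, true_activity_mbq=50.0)  # cps/MBq
rc = recovery_coefficient(measured_conc=0.72, true_conc=1.0)            # concentration ratio
print(f"calibration factor = {cf:.1f} cps/MBq, recovery coefficient = {rc:.0%}")
```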
Nixon, Gavin J; Svenstrup, Helle F; Donald, Carol E; Carder, Caroline; Stephenson, Judith M; Morris-Jones, Stephen; Huggett, Jim F; Foy, Carole A
2014-12-01
Molecular diagnostic measurements are currently underpinned by the polymerase chain reaction (PCR). There are also a number of alternative nucleic acid amplification technologies which, unlike PCR, work at a single temperature. These 'isothermal' methods reportedly offer potential advantages over PCR, such as simplicity, speed and resistance to inhibitors, and could also be used for quantitative molecular analysis. However, there are currently limited mechanisms to evaluate their quantitative performance, which would assist assay development and study comparisons. This study uses a sexually transmitted infection diagnostic model in combination with an adapted metric termed isothermal doubling time (IDT), akin to PCR efficiency, to compare quantitative PCR and quantitative loop-mediated isothermal amplification (qLAMP) assays, and to quantify the impact of matrix interference. The performance metric described here facilitates the comparison of qLAMP assays and could assist assay development and validation activities.
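The exact definition of the isothermal doubling time used in the study is not reproduced in this abstract; the sketch below assumes one plausible formulation in which IDT is derived from the slope of a time-to-positivity versus log10 input-copy standard curve, analogous to estimating efficiency from a qPCR standard curve. The dilution-series values are hypothetical.

```python
# Hedged sketch: estimating an isothermal doubling time (IDT) from a dilution series.
# Assumption (not taken verbatim from the study): time-to-positivity is regressed
# against log10(input copies); IDT is the change in time per doubling of input.
import numpy as np

log10_copies = np.array([5.0, 4.0, 3.0, 2.0])       # hypothetical standard curve inputs
tt_minutes   = np.array([8.1, 11.4, 14.6, 18.0])    # hypothetical times to threshold

slope, intercept = np.polyfit(log10_copies, tt_minutes, 1)  # minutes per log10 copies
idt = -slope * np.log10(2)                                   # minutes per doubling
print(f"slope = {slope:.2f} min/log10, IDT = {idt:.2f} min")
```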
Statistical shape analysis using 3D Poisson equation--A quantitatively validated approach.
Gao, Yi; Bouix, Sylvain
2016-05-01
Statistical shape analysis has been an important area of research with applications in biology, anatomy, neuroscience, agriculture, paleontology, etc. Unfortunately, the proposed methods are rarely quantitatively evaluated, and as shown in recent studies, when they are evaluated, significant discrepancies exist in their outputs. In this work, we concentrate on the problem of finding the consistent location of deformation between two populations of shapes. We propose a new shape analysis algorithm along with a framework to perform a quantitative evaluation of its performance. Specifically, the algorithm constructs a Signed Poisson Map (SPoM) by solving two Poisson equations on volumetric shapes of arbitrary topology, and statistical analysis is then carried out on the SPoMs. The method is quantitatively evaluated on synthetic shapes and applied to real shape data sets of brain structures. Copyright © 2016 Elsevier B.V. All rights reserved.
Pi, Shan; Cao, Rong; Qiang, Jin Wei; Guo, Yan Hui
2018-01-01
Background Diffusion-weighted imaging (DWI) and quantitative apparent diffusion coefficient (ADC) values are widely used in the differential diagnosis of ovarian tumors. Purpose To assess the diagnostic performance of quantitative ADC values in ovarian tumors. Material and Methods PubMed, Embase, the Cochrane Library, and local databases were searched for studies assessing ovarian tumors using quantitative ADC values. We quantitatively analyzed the diagnostic performances for two clinical problems: benign vs. malignant tumors and borderline vs. malignant tumors. We evaluated diagnostic performances by the pooled sensitivity and specificity values and by summary receiver operating characteristic (SROC) curves. Subgroup analyses were used to analyze study heterogeneity. Results From the 742 studies identified in the search results, 16 studies met our inclusion criteria. A total of ten studies evaluated malignant vs. benign ovarian tumors and six studies assessed malignant vs. borderline ovarian tumors. Regarding the diagnostic accuracy of quantitative ADC values for distinguishing between malignant and benign ovarian tumors, the pooled sensitivity and specificity values were 0.91 and 0.91, respectively. The area under the SROC curve (AUC) was 0.96. For differentiating borderline from malignant tumors, the pooled sensitivity and specificity values were 0.89 and 0.79, and the AUC was 0.91. The methodological quality of the included studies was moderate. Conclusion Quantitative ADC values could serve as useful preoperative markers for predicting the nature of ovarian tumors. Nevertheless, prospective trials focused on standardized imaging parameters are needed to evaluate the clinical value of quantitative ADC values in ovarian tumors.
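The bivariate random-effects model typically used for such pooling is not detailed in this abstract; as a simplified, purely illustrative sketch, per-study sensitivities can be combined with an inverse-variance weighted mean on the logit scale. The study counts below are hypothetical.

```python
# Simplified illustration of pooling sensitivity across studies on the logit scale.
# Univariate fixed-effect sketch with hypothetical counts, not the bivariate
# random-effects model usually preferred for diagnostic meta-analysis.
import numpy as np

tp = np.array([45, 30, 60])          # true positives per study (hypothetical)
fn = np.array([5, 4, 6])             # false negatives per study (hypothetical)

sens = tp / (tp + fn)
logit = np.log(sens / (1 - sens))
var = 1.0 / tp + 1.0 / fn            # approximate variance of each logit
w = 1.0 / var
pooled_logit = np.sum(w * logit) / np.sum(w)
pooled_sens = 1.0 / (1.0 + np.exp(-pooled_logit))
print(f"pooled sensitivity = {pooled_sens:.2f}")
```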
Jha, Abhinav K; Song, Na; Caffo, Brian; Frey, Eric C
2015-04-13
Quantitative single-photon emission computed tomography (SPECT) imaging is emerging as an important tool in clinical studies and biomedical research. There is thus a need for optimization and evaluation of systems and algorithms that are being developed for quantitative SPECT imaging. An appropriate objective method to evaluate these systems is by comparing their performance in the end task that is required in quantitative SPECT imaging, such as estimating the mean activity concentration in a volume of interest (VOI) in a patient image. This objective evaluation can be performed if the true value of the estimated parameter is known, i.e. we have a gold standard. However, very rarely is this gold standard known in human studies. Thus, no-gold-standard techniques to optimize and evaluate systems and algorithms in the absence of gold standard are required. In this work, we developed a no-gold-standard technique to objectively evaluate reconstruction methods used in quantitative SPECT when the parameter to be estimated is the mean activity concentration in a VOI. We studied the performance of the technique with realistic simulated image data generated from an object database consisting of five phantom anatomies with all possible combinations of five sets of organ uptakes, where each anatomy consisted of eight different organ VOIs. Results indicate that the method provided accurate ranking of the reconstruction methods. We also demonstrated the application of consistency checks to test the no-gold-standard output.
Sun, Zeyu; Hamilton, Karyn L.; Reardon, Kenneth F.
2014-01-01
We evaluated a sequential elution protocol from immobilized metal affinity chromatography (SIMAC) employing gallium-based immobilized metal affinity chromatography (IMAC) in conjunction with titanium-dioxide-based metal oxide affinity chromatography (MOAC). The quantitative performance of this SIMAC enrichment approach, assessed in terms of repeatability, dynamic range, and linearity, was evaluated using a mixture composed of tryptic peptides from caseins, bovine serum albumin, and phosphopeptide standards. While our data demonstrate the overall consistent performance of the SIMAC approach under various loading conditions, the results also revealed that the method had limited repeatability and linearity for most phosphopeptides tested, and different phosphopeptides were found to have different linear ranges. These data suggest that, unless additional strategies are used, SIMAC should be regarded as a semi-quantitative method when used in large-scale phosphoproteomics studies in complex backgrounds. PMID:24096195
Fosbøl, M; Reving, S; Petersen, E H; Rossing, P; Lajer, M; Zerahn, B
2017-01-01
To investigate whether the inclusion of quantitative data on blood flow distribution, compared with visual qualitative evaluation, improves the reliability and diagnostic performance of 99mTc-hydroxymethylene diphosphate three-phase bone scintigraphy (TPBS) in patients suspected of Charcot neuropathic osteoarthropathy (CNO) of the foot. A retrospective cohort study of TPBS performed on 148 patients with suspected acute CNO referred from a single specialized diabetes care centre. The quantitative blood flow distribution was calculated based on the method described by Deutsch et al. All scintigraphies were re-evaluated twice by independent, blinded observers, with and without quantitative data on blood flow distribution at the ankle and focus level, respectively. The diagnostic validity of TPBS was determined by subsequent review of clinical data and radiological examinations. A total of 90 patients (61%) had a confirmed diagnosis of CNO. The sensitivity, specificity and accuracy of three-phase bone scintigraphy without/with quantitative data were 89%/88%, 58%/62% and 77%/78%, respectively. Intra-observer agreement improved significantly when quantitative data were added to the evaluation (kappa 0.79/0.94). Interobserver agreement was not significantly improved. Adding quantitative data on blood flow distribution to the interpretation of TPBS thus reduces intra-observer variation, whereas no difference in interobserver variation was observed. The sensitivity of TPBS in the diagnosis of CNO is high, but its specificity is limited. Diagnostic performance does not improve using quantitative data in the evaluation. This may be due to the reference intervals applied in the study or the absence of a proper gold standard diagnostic procedure for comparison. © 2015 Scandinavian Society of Clinical Physiology and Nuclear Medicine. Published by John Wiley & Sons Ltd.
Metrics for Performance Evaluation of Patient Exercises during Physical Therapy.
Vakanski, Aleksandar; Ferguson, Jake M; Lee, Stephen
2017-06-01
The article proposes a set of metrics for the evaluation of patient performance in physical therapy exercises. A taxonomy is employed that classifies the metrics into quantitative and qualitative categories, based on the level of abstraction of the captured motion sequences. Further, the quantitative metrics are classified into model-less and model-based metrics, in reference to whether the evaluation employs the raw measurements of patient-performed motions, or whether the evaluation is based on a mathematical model of the motions. The reviewed metrics include root-mean-square distance, Kullback-Leibler divergence, log-likelihood, heuristic consistency, the Fugl-Meyer Assessment, and similar measures. The metrics are evaluated for a set of five human motions captured with a Kinect sensor. The metrics can potentially be integrated into a system that employs machine learning for modelling and assessment of the consistency of patient performance in a home-based therapy setting. Automated performance evaluation can overcome the inherent subjectivity of human-performed therapy assessment, increase adherence to prescribed therapy plans, and reduce healthcare costs.
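For readers unfamiliar with the model-less metrics listed above, the sketch below computes two of them, root-mean-square distance and a histogram-based Kullback-Leibler divergence, between a reference motion and a patient repetition. The trajectories and the histogram discretization are hypothetical choices, not taken from the article.

```python
# Hedged sketch of two model-less metrics computed between a reference (therapist)
# motion sequence and a patient sequence of equal length. Data are hypothetical.
import numpy as np

def rms_distance(ref: np.ndarray, patient: np.ndarray) -> float:
    """Root-mean-square distance between two (frames x joints) trajectories."""
    return float(np.sqrt(np.mean((ref - patient) ** 2)))

def kl_divergence(ref: np.ndarray, patient: np.ndarray, bins: int = 20) -> float:
    """KL divergence between histograms of the two sequences' values."""
    lo, hi = min(ref.min(), patient.min()), max(ref.max(), patient.max())
    p, _ = np.histogram(ref, bins=bins, range=(lo, hi), density=True)
    q, _ = np.histogram(patient, bins=bins, range=(lo, hi), density=True)
    p, q = p + 1e-9, q + 1e-9                      # avoid log(0) for empty bins
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(0)
reference = np.sin(np.linspace(0, 2 * np.pi, 100))[:, None]     # one idealized joint angle
attempt = reference + rng.normal(0, 0.1, size=reference.shape)  # noisy patient repetition
print(rms_distance(reference, attempt), kl_divergence(reference, attempt))
```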
Huang, Erich P; Wang, Xiao-Feng; Choudhury, Kingshuk Roy; McShane, Lisa M; Gönen, Mithat; Ye, Jingjing; Buckler, Andrew J; Kinahan, Paul E; Reeves, Anthony P; Jackson, Edward F; Guimaraes, Alexander R; Zahlmann, Gudrun
2015-02-01
Medical imaging serves many roles in patient care and the drug approval process, including assessing treatment response and guiding treatment decisions. These roles often involve a quantitative imaging biomarker, an objectively measured characteristic of the underlying anatomic structure or biochemical process derived from medical images. Before a quantitative imaging biomarker is accepted for use in such roles, the imaging procedure to acquire it must undergo evaluation of its technical performance, which entails assessment of performance metrics such as repeatability and reproducibility of the quantitative imaging biomarker. Ideally, this evaluation will involve quantitative summaries of results from multiple studies to overcome limitations due to the typically small sample sizes of technical performance studies and/or to include a broader range of clinical settings and patient populations. This paper is a review of meta-analysis procedures for such an evaluation, including identification of suitable studies, statistical methodology to evaluate and summarize the performance metrics, and complete and transparent reporting of the results. This review addresses challenges typical of meta-analyses of technical performance, particularly small study sizes, which often cause violations of assumptions underlying standard meta-analysis techniques. Alternative approaches to address these difficulties are also presented; simulation studies indicate that they outperform standard techniques when some studies are small. The meta-analysis procedures presented are also applied to actual [18F]-fluorodeoxyglucose positron emission tomography (FDG-PET) test-retest repeatability data for illustrative purposes. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
Ren, Wei-Wei; Li, Xiao-Long; Wang, Dan; Liu, Bo-Ji; Zhao, Chong-Ke; Xu, Hui-Xiong
2018-04-13
To evaluate a special kind of ultrasound (US) shear wave elastography for the differential diagnosis of breast lesions, using a new qualitative analysis (i.e. the elasticity score in the travel time map) compared with conventional quantitative analysis. From June 2014 to July 2015, 266 pathologically proven breast lesions were enrolled in this study. The maximum, mean, median, minimum, and standard deviation of shear wave speed (SWS) values (m/s) were assessed. The elasticity score, a new qualitative feature, was evaluated in the travel time map. Receiver operating characteristic curves were plotted and the areas under the curve (AUROC) were used to evaluate the diagnostic performance of the qualitative and quantitative analyses for the differentiation of breast lesions. Among the quantitative parameters, SWS-max showed the highest AUROC (0.805; 95% CI: 0.752, 0.851) compared with SWS-mean (0.786; 95% CI: 0.732, 0.834; P = 0.094), SWS-median (0.775; 95% CI: 0.720, 0.824; P = 0.046), SWS-min (0.675; 95% CI: 0.615, 0.731; P = 0.000), and SWS-SD (0.768; 95% CI: 0.712, 0.817; P = 0.074). The qualitative analysis achieved the best diagnostic performance in this study (AUROC 0.871; 95% CI: 0.825, 0.909; P = 0.011 compared with SWS-max, the best quantitative parameter). The new qualitative analysis of shear wave travel time therefore showed superior diagnostic performance for the differentiation of breast lesions compared with conventional quantitative analysis.
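As an illustration of how an AUROC for a single quantitative parameter such as SWS-max can be computed, the sketch below uses the rank-sum (Mann-Whitney) formulation; the lesion values and labels are hypothetical and ties are not handled.

```python
# Hedged sketch: AUROC for one quantitative parameter via the rank-sum formulation.
# Values below are hypothetical, not the study's measurements.
import numpy as np

def auroc(scores: np.ndarray, labels: np.ndarray) -> float:
    """Probability that a randomly chosen malignant lesion scores higher than a benign one."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    ranks = np.argsort(np.argsort(np.concatenate([pos, neg]))) + 1  # 1-based ranks (no tie handling)
    rank_sum_pos = ranks[: len(pos)].sum()
    u = rank_sum_pos - len(pos) * (len(pos) + 1) / 2
    return u / (len(pos) * len(neg))

sws_max = np.array([6.2, 8.9, 2.1, 3.4, 7.5, 1.8, 9.3, 2.9])   # m/s, hypothetical
is_malignant = np.array([1, 1, 0, 0, 1, 0, 1, 0])
print(f"AUROC = {auroc(sws_max, is_malignant):.2f}")
```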
Reliability and Validity of the Professional Counseling Performance Evaluation
ERIC Educational Resources Information Center
Shepherd, J. Brad; Britton, Paula J.; Kress, Victoria E.
2008-01-01
The definition and measurement of counsellor trainee competency is an issue that has received increased attention yet lacks quantitative study. This research evaluates item responses, scale reliability and intercorrelations, interrater agreement, and criterion-related validity of the Professional Performance Fitness Evaluation/Professional…
Quantitative Evaluation of Performance during Robot-assisted Treatment.
Peri, E; Biffi, E; Maghini, C; Servodio Iammarrone, F; Gagliardi, C; Germiniasi, C; Pedrocchi, A; Turconi, A C; Reni, G
2016-01-01
This article is part of the Focus Theme of Methods of Information in Medicine on "Methodologies, Models and Algorithms for Patients Rehabilitation". The great potential of robots in extracting quantitative and meaningful data is not always exploited in clinical practice. The aim of the present work is to describe a simple parameter to assess the performance of subjects during upper limb robotic training, exploiting data automatically recorded by the robot with no additional effort for patients and clinicians. Fourteen children affected by cerebral palsy (CP) performed training with the Armeo®Spring. Each session was evaluated with P, a simple parameter that depends on the overall performance recorded, and median and interquartile values were computed to perform a group analysis. Median (interquartile) values of P significantly increased from 0.27 (0.21) at T0 to 0.55 (0.27) at T1. This improvement was functionally validated by a significant increase of the Melbourne Assessment of Unilateral Upper Limb Function. The parameter described here was able to show variations in performance over time and enabled a quantitative evaluation of motion abilities in a way that is consistent with a well-known clinical scale.
Fuzzy Performance between Surface Fitting and Energy Distribution in Turbulence Runner
Liang, Zhongwei; Liu, Xiaochu; Ye, Bangyan; Brauwer, Richard Kars
2012-01-01
Because the application of surface fitting algorithms exerts a considerable fuzzy influence on the mathematical features of the kinetic energy distribution, the mechanism relating the two under different external conditional parameters must be quantitatively analyzed. After determining the kinetic energy value at each selected representative position coordinate point by calculating kinetic energy parameters, several typical complicated-surface fitting algorithms are applied to construct micro kinetic energy distribution surface models of the objective turbulence runner from the obtained kinetic energy values. On the basis of the newly proposed mathematical features, we construct a fuzzy evaluation data sequence and present a new three-dimensional fuzzy quantitative evaluation method; the value change tendencies of the kinetic energy distribution surface features can then be clearly quantified, and the fuzzy performance mechanism linking the results of the surface fitting algorithms, the spatial features of the turbulence kinetic energy distribution surface, and their respective environmental parameter conditions can be analyzed quantitatively in detail. This yields final conclusions concerning the inherent turbulence kinetic energy distribution performance mechanism and its mathematical relation, and provides a basis for further quantitative study of turbulence energy. PMID:23213287
Gender Differences in Motor Skills of the Overarm Throw
Gromeier, Michael; Koester, Dirk; Schack, Thomas
2017-01-01
In this cross-sectional study, the qualitative and quantitative throwing performance of male and female athletes (6 to 16 years of age) was analyzed. The goal of this study was to assess whether there were gender based qualitative and quantitative differences in throwing performance of young athletes, throughout three different age bands (childhood, pubescence, and adolescence). Furthermore, we explored whether all components of the throwing movement are equally affected by gender differences. Focus was placed on five essential components of action: trunk, forearm, humerus, stepping, and backswing. Therefore, children and adolescents (N = 96) were invited to throw three times from three different distances, while aiming at a target placed at shoulder height. The participants were aspiring athletes, competitive in the sport handball. For analyzing the quality of movement the component approach of Halverson and Roberton (1984) was used. The throwing accuracy was noted and used to evaluate the quantitative performance of the throwing movement. Throughout three different age bands, no statistically significant difference was found between genders in throwing accuracy, i.e., quantitative performance. Regarding the qualitative evaluation of the throwing movement, male and female athletes differed significantly. The component approach yielded higher scores for male than for female participants. As expected, with increasing age qualitative and quantitative performance of male and female athletes improved. These results suggest that there are gender-specific differences in qualitative throwing performance, but not necessarily in quantitative throwing performance. Exploration shows that differences in the qualitative throwing performance were seen in specific components of action. Male and female athletes demonstrated similar movement patterns in humerus and forearm actions, but differed in trunk, stepping, and backswing actions. PMID:28261142
Kosasih, Agus Susanto; Sugiarto, Christine; Hayuanta, Hubertus Hosti; Juhaendi, Runingsih; Setiawan, Lyana
2017-08-08
Measurement of viral load in human immunodeficiency virus type 1 (HIV-1) infected patients is essential for the establishment of a therapeutic strategy. Several assays based on qPCR are available for the measurement of viral load; they differ in sample volume, technology applied, target gene, sensitivity and dynamic range. The Bioneer AccuPower® HIV-1 Quantitative RT-PCR is a novel commercial kit that has not been evaluated for its performance. This study aimed to evaluate the performance of the Bioneer AccuPower® HIV-1 Quantitative RT-PCR kit. In total, 288 EDTA plasma samples from the Dharmais Cancer Hospital were analyzed with the Bioneer AccuPower® HIV-1 Quantitative RT-PCR kit and the Roche COBAS® AmpliPrep/COBAS® TaqMan® HIV-1 version 2.0 (CAP/CTM v2.0). The performance of the Bioneer assay was then evaluated against the Roche CAP/CTM v2.0. Overall, there was good agreement between the two assays. The Bioneer assay showed significant linear correlation with CAP/CTM v2.0 (R2=0.963, p<0.001) for all samples (N=118) which were quantified by both assays, with high agreement (94.9%, 112/118) according to the Bland-Altman model. The mean difference between the quantitative values measured by Bioneer assay and CAP/CTM v2.0 was 0.11 Log10 IU/mL (SD=0.26). Based on these results, the Bioneer assay can be used to quantify HIV-1 RNA in clinical laboratories.
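The Bland-Altman agreement analysis mentioned above can be illustrated on log10-transformed viral loads as in the sketch below; the paired values are hypothetical placeholders rather than study data.

```python
# Hedged sketch of a Bland-Altman comparison of two viral-load assays on the log10 scale.
# The paired values are hypothetical placeholders, not measurements from the study.
import numpy as np

assay_a = np.array([3.1, 4.5, 5.2, 2.8, 6.0])       # log10 IU/mL, hypothetical
assay_b = np.array([3.0, 4.3, 5.1, 2.9, 5.8])       # log10 IU/mL, hypothetical

diff = assay_a - assay_b
bias = diff.mean()                                   # mean difference between assays
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)           # 95% limits of agreement
within = np.mean((diff >= loa[0]) & (diff <= loa[1]))
print(f"bias = {bias:.2f} log10 IU/mL, LoA = [{loa[0]:.2f}, {loa[1]:.2f}], "
      f"{within:.0%} of pairs within LoA")
```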
Promoting the safety performance of industrial radiography using a quantitative assessment system.
Kardan, M R; Mianji, F A; Rastkhah, N; Babakhani, A; Azad, S Borhan
2006-12-01
The increasing number of industrial radiographers and their considerable occupational exposure has been one of the main concerns of the Iran Nuclear Regulatory Authority (INRA) in recent years. In 2002, a quantitative system of evaluating the safety performance of licensees and a complementary enforcement system was introduced by the National Radiation Protection Department (NRPD). Each parameter of the practice is given a weighting factor according to its importance to safety. Assessment of the licensees is done quantitatively by summing up their scores using prepared tables. Implementing this system of evaluation showed a considerable decrease in deficiencies in the various centres. Tables are updated regularly as a result of findings during the inspections. This system is used in addition to enforcement to promote safety performance and to increase the culture of safety in industrial radiography.
ERIC Educational Resources Information Center
Jenkins, Craig
2015-01-01
This paper is a comparative quantitative evaluation of an approach to teaching poetry in the subject domain of English that employs a "guided discovery" pedagogy using computer-based microworlds. It uses a quasi-experimental design in order to measure performance gains in computational thinking and poetic thinking following a…
Chinese Middle School Teachers' Preferences Regarding Performance Evaluation Measures
ERIC Educational Resources Information Center
Liu, Shujie; Xu, Xianxuan; Stronge, James H.
2016-01-01
Teacher performance evaluation currently is receiving unprecedented attention from policy makers, scholars, and practitioners worldwide. This study is one of the few studies of teacher perceptions regarding teacher performance measures that focus on China. We employed a quantitative dominant mixed research design to investigate Chinese teachers'…
Aims: To determine the performance of a rapid, real-time polymerase chain reaction (PCR) method for the detection and quantitative analysis of Helicobacter pylori at low concentrations in drinking water.
Methods and Results: A rapid DNA extraction and quantitative PCR (QPCR)...
Devonshire, Alison S; O'Sullivan, Denise M; Honeyborne, Isobella; Jones, Gerwyn; Karczmarczyk, Maria; Pavšič, Jernej; Gutteridge, Alice; Milavec, Mojca; Mendoza, Pablo; Schimmel, Heinz; Van Heuverswyn, Fran; Gorton, Rebecca; Cirillo, Daniela Maria; Borroni, Emanuele; Harris, Kathryn; Barnard, Marinus; Heydenrych, Anthenette; Ndusilo, Norah; Wallis, Carole L; Pillay, Keshree; Barry, Thomas; Reddington, Kate; Richter, Elvira; Mozioğlu, Erkan; Akyürek, Sema; Yalçınkaya, Burhanettin; Akgoz, Muslum; Žel, Jana; Foy, Carole A; McHugh, Timothy D; Huggett, Jim F
2016-08-03
Real-time PCR (qPCR) based methods, such as the Xpert MTB/RIF, are increasingly being used to diagnose tuberculosis (TB). While qualitative methods are adequate for diagnosis, the therapeutic monitoring of TB patients requires quantitative methods currently performed using smear microscopy. The potential use of quantitative molecular measurements for therapeutic monitoring has been investigated, but findings have been variable and inconclusive. The lack of an adequate reference method and reference materials is a barrier to understanding the source of such disagreement. Digital PCR (dPCR) offers the potential for an accurate method for quantification of specific DNA sequences in reference materials, which can be used to evaluate quantitative molecular methods for TB treatment monitoring. To assess a novel approach for the development of quality assurance materials, we used dPCR to quantify specific DNA sequences in a range of prototype reference materials and evaluated accuracy between different laboratories and instruments. The materials were then also used to evaluate the quantitative performance of qPCR and Xpert MTB/RIF in eight clinical testing laboratories. dPCR was found to provide results in good agreement with the other methods tested and to be highly reproducible between laboratories without calibration, even when using different instruments. When the reference materials were analysed with qPCR and Xpert MTB/RIF by clinical laboratories, all laboratories were able to correctly rank the reference materials according to concentration; however, there was a marked difference in the measured magnitude. TB is a disease where the quantification of the pathogen could lead to better patient management, and qPCR methods offer the potential to rapidly perform such analysis. However, our findings suggest that when precisely characterised materials are used to evaluate qPCR methods, the measurement result variation is too high to determine whether molecular quantification of Mycobacterium tuberculosis would provide a clinically useful readout. The methods described in this study provide a means by which the technical performance of quantitative molecular methods can be evaluated independently of clinical variability to improve accuracy of measurement results. These will assist in ultimately increasing the likelihood that such approaches could be used to improve patient management of TB.
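The abstract does not spell out how dPCR arrives at an absolute copy number; the standard route, shown in the sketch below, is a Poisson correction of the fraction of positive partitions. Partition counts and volume are hypothetical and instrument-dependent.

```python
# Hedged sketch of the usual digital PCR quantification: Poisson correction of the
# fraction of positive partitions. All numbers are hypothetical, not study data.
import math

positive_partitions = 4200
total_partitions = 20000
partition_volume_ul = 0.00085          # ~0.85 nL per partition (instrument dependent)

p = positive_partitions / total_partitions
mean_copies_per_partition = -math.log(1.0 - p)           # Poisson correction
copies_per_ul = mean_copies_per_partition / partition_volume_ul
print(f"{copies_per_ul:.0f} copies/µL of reaction")
```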
Design and Implementation of Performance Metrics for Evaluation of Assessments Data
ERIC Educational Resources Information Center
Ahmed, Irfan; Bhatti, Arif
2016-01-01
Evocative evaluation of assessment data is essential to quantify the achievements at course and program levels. The objective of this paper is to design performance metrics and respective formulas to quantitatively evaluate the achievement of set objectives and expected outcomes at the course levels for program accreditation. Even though…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scrucca, Flavio; Severi, Claudio; Galvan, Nicola
Public and private agencies are paying increasing attention to the sustainability performance of events, since it is recognized as a key issue in the context of sustainable development. Assessing the sustainability performance of events involves environmental, social and economic aspects; their impacts are complex and a quantitative assessment is often difficult. This paper presents a new quali-quantitative method developed to measure the sustainability of events, taking into account all their potential impacts. The 2014 World Orienteering Championship, held in Italy, was selected to test the proposed evaluation methodology. The total carbon footprint of the event was 165.34 tCO2eq and the avoided emissions were estimated at 46 tCO2eq. The adopted quali-quantitative method proved efficient in assessing the sustainability impacts and can be applied to the evaluation of similar events. - Highlights: • A quali-quantitative method to assess events' sustainability is presented. • All the methodological issues related to the method are explained. • The method is used to evaluate the sustainability of an international sports event. • The method proved valid for assessing the event's sustainability level. • The carbon footprint of the event has been calculated.
Winkler, Peter; Zurl, Brigitte; Guss, Helmuth; Kindl, Peter; Stuecklschweiger, Georg
2005-02-21
A system for dosimetric verification of intensity-modulated radiotherapy (IMRT) treatment plans using absolute calibrated radiographic films is presented. At our institution this verification procedure is performed for all IMRT treatment plans prior to patient irradiation. Therefore, clinical treatment plans are transferred to a phantom and recalculated. Composite treatment plans are irradiated to a single film. Film density to absolute dose conversion is performed automatically based on a single calibration film. A software application encompassing film calibration, 2D registration of measured and calculated dose distributions, image fusion, and a number of visual and quantitative evaluation utilities was developed. The main topic of this paper is a performance analysis for this quality assurance procedure, with regard to the specification of tolerance levels for quantitative evaluations. Spatial and dosimetric precision and accuracy were determined for the entire procedure, comprising all possible sources of error. The overall dosimetric and spatial measurement uncertainties obtained thereby were 1.9% and 0.8 mm, respectively. Based on these results, we specified a 5% dose difference and 3 mm distance-to-agreement as our tolerance levels for patient-specific quality assurance for IMRT treatments.
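One common way to combine the 5% dose-difference and 3 mm distance-to-agreement tolerances into a single per-pixel test is a gamma index; the brute-force 2D sketch below is an illustration under that assumption, not the in-house verification software described in the abstract.

```python
# Hedged sketch: brute-force 2D gamma-index check combining dose-difference (5%) and
# distance-to-agreement (3 mm) criteria. Illustrative only; the grid and noise level
# are synthetic and the normalization choice (global maximum) is an assumption.
import numpy as np

def gamma_pass_rate(measured, calculated, pixel_mm=1.0, dd=0.05, dta_mm=3.0):
    ny, nx = measured.shape
    ys, xs = np.mgrid[0:ny, 0:nx]
    norm = calculated.max()                       # global normalization dose
    passed = 0
    for iy in range(ny):
        for ix in range(nx):
            dist2 = ((ys - iy) ** 2 + (xs - ix) ** 2) * pixel_mm ** 2
            dose2 = (calculated - measured[iy, ix]) ** 2
            gamma2 = dist2 / dta_mm ** 2 + dose2 / (dd * norm) ** 2
            if gamma2.min() <= 1.0:               # a nearby calc point agrees
                passed += 1
    return passed / (ny * nx)

rng = np.random.default_rng(1)
calc = np.outer(np.hanning(40), np.hanning(40)) * 2.0          # Gy, synthetic plan
meas = calc * (1 + rng.normal(0, 0.01, calc.shape))             # 1% measurement noise
print(f"gamma pass rate (5%/3 mm): {gamma_pass_rate(meas, calc):.1%}")
```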
Metrology Standards for Quantitative Imaging Biomarkers
Obuchowski, Nancy A.; Kessler, Larry G.; Raunig, David L.; Gatsonis, Constantine; Huang, Erich P.; Kondratovich, Marina; McShane, Lisa M.; Reeves, Anthony P.; Barboriak, Daniel P.; Guimaraes, Alexander R.; Wahl, Richard L.
2015-01-01
Although investigators in the imaging community have been active in developing and evaluating quantitative imaging biomarkers (QIBs), the development and implementation of QIBs have been hampered by the inconsistent or incorrect use of terminology or methods for technical performance and statistical concepts. Technical performance is an assessment of how a test performs in reference objects or subjects under controlled conditions. In this article, some of the relevant statistical concepts are reviewed, methods that can be used for evaluating and comparing QIBs are described, and some of the technical performance issues related to imaging biomarkers are discussed. More consistent and correct use of terminology and study design principles will improve clinical research, advance regulatory science, and foster better care for patients who undergo imaging studies. © RSNA, 2015 PMID:26267831
Congenital hypothyroidism: diagnostic scintigraphic evaluation of an organification defect
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cone, L.; Oates, E.; Vazquez, R.
1988-06-01
Quantitative Tc-99m pertechnetate thyroid imaging was performed on a hypothyroid neonate. The image revealed markedly increased trapping in an enlarged, bilobed, eutopic gland. A perchlorate washout test using quantitative imaging with I-123 confirmed an organification problem.
Analysis of physical exercises and exercise protocols for space transportation system operation
NASA Technical Reports Server (NTRS)
Coleman, A. E.
1982-01-01
A quantitative evaluation of the Thornton-Whitmore treadmill was made so that informed management decisions regarding the role of this treadmill in operational flight crew exercise programs could be made. The specific tasks were: evaluation of the Thornton-Whitmore passive treadmill as an exercise device at one g; establishment of hardware, harness and restraint systems for use with the Thornton-Whitmore treadmill in the laboratory and in Shuttle flights; evaluation of the quantitative and qualitative performance of human subjects on the Thornton-Whitmore treadmill with forces in excess of one g; and determination of the performance of human subjects on the Thornton-Whitmore treadmill in weightlessness (onboard Shuttle flights).
Raunig, David L; McShane, Lisa M; Pennello, Gene; Gatsonis, Constantine; Carson, Paul L; Voyvodic, James T; Wahl, Richard L; Kurland, Brenda F; Schwarz, Adam J; Gönen, Mithat; Zahlmann, Gudrun; Kondratovich, Marina V; O'Donnell, Kevin; Petrick, Nicholas; Cole, Patricia E; Garra, Brian; Sullivan, Daniel C
2015-02-01
Technological developments and greater rigor in the quantitative measurement of biological features in medical images have given rise to an increased interest in using quantitative imaging biomarkers to measure changes in these features. Critical to the performance of a quantitative imaging biomarker in preclinical or clinical settings are three primary metrology areas of interest: measurement linearity and bias, repeatability, and the ability to consistently reproduce equivalent results when conditions change, as would be expected in any clinical trial. Unfortunately, performance studies to date differ greatly in designs, analysis method, and metrics used to assess a quantitative imaging biomarker for clinical use. It is therefore difficult or not possible to integrate results from different studies or to use reported results to design studies. The Radiological Society of North America and the Quantitative Imaging Biomarker Alliance with technical, radiological, and statistical experts developed a set of technical performance analysis methods, metrics, and study designs that provide terminology, metrics, and methods consistent with widely accepted metrological standards. This document provides a consistent framework for the conduct and evaluation of quantitative imaging biomarker performance studies so that results from multiple studies can be compared, contrasted, or combined. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
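As a small illustration of the metrology areas named above, the sketch below computes a within-subject standard deviation and repeatability coefficient from test-retest pairs, plus bias and a linearity slope against reference values; all measurements are hypothetical.

```python
# Hedged sketch of basic technical-performance metrics for a quantitative imaging
# biomarker, computed from hypothetical test-retest and reference values.
import numpy as np

test   = np.array([2.1, 3.4, 1.8, 4.0, 2.7])    # first measurement per subject
retest = np.array([2.3, 3.1, 1.9, 4.2, 2.6])    # repeat measurement per subject
truth  = np.array([2.0, 3.3, 1.7, 4.1, 2.5])    # reference values (for bias/linearity)

wsd = np.sqrt(np.mean((test - retest) ** 2) / 2.0)   # within-subject SD from paired repeats
rc = 2.77 * wsd                                       # repeatability coefficient (95%)
bias = np.mean(test - truth)
slope, intercept = np.polyfit(truth, test, 1)         # linearity of measured vs true values
print(f"wSD={wsd:.3f}, RC={rc:.3f}, bias={bias:.3f}, slope={slope:.2f}")
```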
A Stereological Method for the Quantitative Evaluation of Cartilage Repair Tissue.
Foldager, Casper Bindzus; Nyengaard, Jens Randel; Lind, Martin; Spector, Myron
2015-04-01
To implement stereological principles to develop an easy applicable algorithm for unbiased and quantitative evaluation of cartilage repair. Design-unbiased sampling was performed by systematically sectioning the defect perpendicular to the joint surface in parallel planes providing 7 to 10 hematoxylin-eosin stained histological sections. Counting windows were systematically selected and converted into image files (40-50 per defect). The quantification was performed by two-step point counting: (1) calculation of defect volume and (2) quantitative analysis of tissue composition. Step 2 was performed by assigning each point to one of the following categories based on validated and easy distinguishable morphological characteristics: (1) hyaline cartilage (rounded cells in lacunae in hyaline matrix), (2) fibrocartilage (rounded cells in lacunae in fibrous matrix), (3) fibrous tissue (elongated cells in fibrous tissue), (4) bone, (5) scaffold material, and (6) others. The ability to discriminate between the tissue types was determined using conventional or polarized light microscopy, and the interobserver variability was evaluated. We describe the application of the stereological method. In the example, we assessed the defect repair tissue volume to be 4.4 mm(3) (CE = 0.01). The tissue fractions were subsequently evaluated. Polarized light illumination of the slides improved discrimination between hyaline cartilage and fibrocartilage and increased the interobserver agreement compared with conventional transmitted light. We have applied a design-unbiased method for quantitative evaluation of cartilage repair, and we propose this algorithm as a natural supplement to existing descriptive semiquantitative scoring systems. We also propose that polarized light is effective for discrimination between hyaline cartilage and fibrocartilage.
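The two-step point counting can be illustrated numerically: a Cavalieri-style volume estimate from the points hitting the defect, followed by tissue fractions from category counts. The grid spacing, section separation and counts in the sketch below are hypothetical, not the study's data.

```python
# Hedged sketch of two-step point counting for cartilage repair evaluation.
# Section separation, grid area per point and all counts are hypothetical.

section_separation_mm = 0.4          # distance between analysed parallel sections
area_per_point_mm2 = 0.05            # area associated with each grid point, a(p)

points_on_defect_per_section = [32, 41, 45, 38, 27]          # points hitting the defect
defect_volume = section_separation_mm * area_per_point_mm2 * sum(points_on_defect_per_section)

category_counts = {"hyaline cartilage": 210, "fibrocartilage": 95,
                   "fibrous tissue": 60, "bone": 20, "scaffold": 10, "other": 5}
total = sum(category_counts.values())
fractions = {k: v / total for k, v in category_counts.items()}   # tissue composition

print(f"defect volume = {defect_volume:.2f} mm^3")
print({k: f"{f:.1%}" for k, f in fractions.items()})
```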
Ernst, Marielle; Kriston, Levente; Romero, Javier M; Frölich, Andreas M; Jansen, Olav; Fiehler, Jens; Buhk, Jan-Hendrik
2016-01-01
We sought to develop a standardized curriculum capable of assessing key competencies in Interventional Neuroradiology by the use of models and simulators in an objective, quantitative, and efficient way. In this evaluation we analyzed the associations between the practical experience, theoretical knowledge, and the skills lab performance of interventionalists. We evaluated the endovascular skills of 26 participants of the Advanced Course in Endovascular Interventional Neuroradiology of the European Society of Neuroradiology with a set of three tasks (aneurysm coiling and thrombectomy in a virtual simulator and placement of an intra-aneurysmal flow disruptor in a flow model). Practical experience was assessed by a survey. Participants completed a written and oral examination to evaluate theoretical knowledge. Bivariate and multivariate analyses were performed. In multivariate analysis knowledge of materials and techniques in Interventional Neuroradiology was moderately associated with skills in aneurysm coiling and thrombectomy. Experience in mechanical thrombectomy was moderately associated with thrombectomy skills, while age was negatively associated with thrombectomy skills. We found no significant association between age, sex, or work experience and skills in aneurysm coiling. Our study gives an example of how an integrated curriculum for reasonable and cost-effective assessment of key competences of an interventional neuroradiologist could look. In addition to traditional assessment of theoretical knowledge practical skills are measured by the use of endovascular simulators yielding objective, quantitative, and constructive data for the evaluation of the current performance status of participants as well as the evolution of their technical competency over time.
Metzger, Gregory J; Kalavagunta, Chaitanya; Spilseth, Benjamin; Bolan, Patrick J; Li, Xiufeng; Hutter, Diane; Nam, Jung W; Johnson, Andrew D; Henriksen, Jonathan C; Moench, Laura; Konety, Badrinath; Warlick, Christopher A; Schmechel, Stephen C; Koopmeiners, Joseph S
2016-06-01
Purpose To develop multiparametric magnetic resonance (MR) imaging models to generate a quantitative, user-independent, voxel-wise composite biomarker score (CBS) for detection of prostate cancer by using coregistered correlative histopathologic results, and to compare performance of CBS-based detection with that of single quantitative MR imaging parameters. Materials and Methods Institutional review board approval and informed consent were obtained. Patients with a diagnosis of prostate cancer underwent multiparametric MR imaging before surgery for treatment. All MR imaging voxels in the prostate were classified as cancer or noncancer on the basis of coregistered histopathologic data. Predictive models were developed by using more than one quantitative MR imaging parameter to generate CBS maps. Model development and evaluation of quantitative MR imaging parameters and CBS were performed separately for the peripheral zone and the whole gland. Model accuracy was evaluated by using the area under the receiver operating characteristic curve (AUC), and confidence intervals were calculated with the bootstrap procedure. The improvement in classification accuracy was evaluated by comparing the AUC for the multiparametric model and the single best-performing quantitative MR imaging parameter at the individual level and in aggregate. Results Quantitative T2, apparent diffusion coefficient (ADC), volume transfer constant (K(trans)), reflux rate constant (kep), and area under the gadolinium concentration curve at 90 seconds (AUGC90) were significantly different between cancer and noncancer voxels (P < .001), with ADC showing the best accuracy (peripheral zone AUC, 0.82; whole gland AUC, 0.74). Four-parameter models demonstrated the best performance in both the peripheral zone (AUC, 0.85; P = .010 vs ADC alone) and whole gland (AUC, 0.77; P = .043 vs ADC alone). Individual-level analysis showed statistically significant improvement in AUC in 82% (23 of 28) and 71% (24 of 34) of patients with peripheral-zone and whole-gland models, respectively, compared with ADC alone. Model-based CBS maps for cancer detection showed improved visualization of cancer location and extent. Conclusion Quantitative multiparametric MR imaging models developed by using coregistered correlative histopathologic data yielded a voxel-wise CBS that outperformed single quantitative MR imaging parameters for detection of prostate cancer, especially when the models were assessed at the individual level. (©) RSNA, 2016 Online supplemental material is available for this article.
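The abstract does not state the exact form of the predictive models; the sketch below illustrates one plausible choice, a voxel-wise logistic regression that combines several quantitative MR parameters into a composite biomarker score and compares its AUC with a single parameter. Features, labels and coefficients are synthetic.

```python
# Hedged sketch: a voxel-wise composite biomarker score from multiple quantitative MR
# parameters using logistic regression. Synthetic data; not the study's actual model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 4))                      # hypothetical per-voxel ADC, T2, Ktrans, kep
logit = -1.5 * X[:, 0] + 1.0 * X[:, 2] - 0.5     # synthetic truth: low ADC, high Ktrans -> cancer
y = rng.random(n) < 1 / (1 + np.exp(-logit))     # synthetic cancer / noncancer labels

model = LogisticRegression().fit(X, y)
cbs = model.predict_proba(X)[:, 1]               # composite biomarker score per voxel
print(f"AUC of CBS: {roc_auc_score(y, cbs):.2f}, AUC of ADC alone: "
      f"{roc_auc_score(y, -X[:, 0]):.2f}")
```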
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-08
... Coordinator; (2) applies research methodologies to perform evaluation studies of health information technology grant programs; and, (3) applies advanced mathematical or quantitative modeling to the U.S. health care... remaining items in the paragraph accordingly: ``(1) Applying research methodologies to perform evaluation...
Misyura, Maksym; Sukhai, Mahadeo A; Kulasignam, Vathany; Zhang, Tong; Kamel-Reid, Suzanne; Stockley, Tracy L
2018-01-01
Aims A standard approach in test evaluation is to compare results of the assay in validation to results from previously validated methods. For quantitative molecular diagnostic assays, comparison of test values is often performed using simple linear regression and the coefficient of determination (R2), using R2 as the primary metric of assay agreement. However, the use of R2 alone does not adequately quantify constant or proportional errors required for optimal test evaluation. More extensive statistical approaches, such as Bland-Altman and expanded interpretation of linear regression methods, can be used to more thoroughly compare data from quantitative molecular assays. Methods We present the application of Bland-Altman and linear regression statistical methods to evaluate quantitative outputs from next-generation sequencing assays (NGS). NGS-derived data sets from assay validation experiments were used to demonstrate the utility of the statistical methods. Results Both Bland-Altman and linear regression were able to detect the presence and magnitude of constant and proportional error in quantitative values of NGS data. Deming linear regression was used in the context of assay comparison studies, while simple linear regression was used to analyse serial dilution data. Bland-Altman statistical approach was also adapted to quantify assay accuracy, including constant and proportional errors, and precision where theoretical and empirical values were known. Conclusions The complementary application of the statistical methods described in this manuscript enables more extensive evaluation of performance characteristics of quantitative molecular assays, prior to implementation in the clinical molecular laboratory. PMID:28747393
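The complementary statistics described above can be sketched as follows: a Deming regression to separate proportional from constant error, and a Bland-Altman bias estimate. The paired variant-allele-frequency values and the assumed error-variance ratio are hypothetical.

```python
# Hedged sketch: Deming regression (proportional/constant error) plus Bland-Altman bias
# for comparing a candidate quantitative assay against a validated method.
# Paired values and the error-variance ratio are hypothetical placeholders.
import numpy as np

reference = np.array([0.05, 0.12, 0.22, 0.35, 0.48, 0.60])   # validated method
candidate = np.array([0.07, 0.14, 0.25, 0.37, 0.52, 0.63])   # assay under validation

def deming(x, y, lam=1.0):
    """Deming regression slope/intercept assuming error-variance ratio lam."""
    mx, my = x.mean(), y.mean()
    sxx, syy = np.var(x, ddof=1), np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    slope = (syy - lam * sxx + np.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2)) / (2 * sxy)
    return slope, my - slope * mx

slope, intercept = deming(reference, candidate)      # proportional / constant error
diff = candidate - reference
bias, sd = diff.mean(), diff.std(ddof=1)             # Bland-Altman bias and spread
print(f"Deming slope={slope:.2f}, intercept={intercept:.3f}; "
      f"bias={bias:.3f} ± {1.96 * sd:.3f}")
```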
Ramus, Claire; Hovasse, Agnès; Marcellin, Marlène; Hesse, Anne-Marie; Mouton-Barbosa, Emmanuelle; Bouyssié, David; Vaca, Sebastian; Carapito, Christine; Chaoui, Karima; Bruley, Christophe; Garin, Jérôme; Cianférani, Sarah; Ferro, Myriam; Van Dorssaeler, Alain; Burlet-Schiltz, Odile; Schaeffer, Christine; Couté, Yohann; Gonzalez de Peredo, Anne
2016-01-30
Proteomic workflows based on nanoLC-MS/MS data-dependent-acquisition analysis have progressed tremendously in recent years. High-resolution and fast sequencing instruments have enabled the use of label-free quantitative methods, based either on spectral counting or on MS signal analysis, which appear as an attractive way to analyze differential protein expression in complex biological samples. However, the computational processing of the data for label-free quantification still remains a challenge. Here, we used a proteomic standard composed of an equimolar mixture of 48 human proteins (Sigma UPS1) spiked at different concentrations into a background of yeast cell lysate to benchmark several label-free quantitative workflows, involving different software packages developed in recent years. This experimental design allowed to finely assess their performances in terms of sensitivity and false discovery rate, by measuring the number of true and false-positive (respectively UPS1 or yeast background proteins found as differential). The spiked standard dataset has been deposited to the ProteomeXchange repository with the identifier PXD001819 and can be used to benchmark other label-free workflows, adjust software parameter settings, improve algorithms for extraction of the quantitative metrics from raw MS data, or evaluate downstream statistical methods. Bioinformatic pipelines for label-free quantitative analysis must be objectively evaluated in their ability to detect variant proteins with good sensitivity and low false discovery rate in large-scale proteomic studies. This can be done through the use of complex spiked samples, for which the "ground truth" of variant proteins is known, allowing a statistical evaluation of the performances of the data processing workflow. We provide here such a controlled standard dataset and used it to evaluate the performances of several label-free bioinformatics tools (including MaxQuant, Skyline, MFPaQ, IRMa-hEIDI and Scaffold) in different workflows, for detection of variant proteins with different absolute expression levels and fold change values. The dataset presented here can be useful for tuning software tool parameters, and also testing new algorithms for label-free quantitative analysis, or for evaluation of downstream statistical methods. Copyright © 2015 Elsevier B.V. All rights reserved.
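Because the spiked UPS1 proteins define the ground truth, sensitivity and false discovery rate can be scored directly from a tool's list of differential proteins, as in the hedged sketch below; the protein identifiers and calls are hypothetical.

```python
# Hedged sketch: scoring a label-free workflow against a spiked ground truth.
# UPS1 accessions are the expected variants; yeast background proteins should not change.
# All identifiers and calls below are hypothetical examples.
differential_calls = {"P01112ups", "P02768ups", "YGR192C", "P06732ups"}   # reported as changed
ups1_proteins = {"P01112ups", "P02768ups", "P06732ups", "P10636ups"}       # spiked ground truth

tp = len(differential_calls & ups1_proteins)           # spiked proteins correctly detected
fp = len(differential_calls - ups1_proteins)           # background proteins wrongly flagged
fn = len(ups1_proteins - differential_calls)           # spiked proteins missed

sensitivity = tp / (tp + fn)
fdr = fp / max(tp + fp, 1)
print(f"sensitivity={sensitivity:.2f}, FDR={fdr:.2f}")
```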
A multicenter study benchmarks software tools for label-free proteome quantification.
Navarro, Pedro; Kuharev, Jörg; Gillet, Ludovic C; Bernhardt, Oliver M; MacLean, Brendan; Röst, Hannes L; Tate, Stephen A; Tsou, Chih-Chiang; Reiter, Lukas; Distler, Ute; Rosenberger, George; Perez-Riverol, Yasset; Nesvizhskii, Alexey I; Aebersold, Ruedi; Tenzer, Stefan
2016-11-01
Consistent and accurate quantification of proteins by mass spectrometry (MS)-based proteomics depends on the performance of instruments, acquisition methods and data analysis software. In collaboration with the software developers, we evaluated OpenSWATH, SWATH 2.0, Skyline, Spectronaut and DIA-Umpire, five of the most widely used software methods for processing data from sequential window acquisition of all theoretical fragment-ion spectra (SWATH)-MS, which uses data-independent acquisition (DIA) for label-free protein quantification. We analyzed high-complexity test data sets from hybrid proteome samples of defined quantitative composition acquired on two different MS instruments using different SWATH isolation-window setups. For consistent evaluation, we developed LFQbench, an R package, to calculate metrics of precision and accuracy in label-free quantitative MS and report the identification performance, robustness and specificity of each software tool. Our reference data sets enabled developers to improve their software tools. After optimization, all tools provided highly convergent identification and reliable quantification performance, underscoring their robustness for label-free quantitative proteomics.
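To illustrate the kind of precision and accuracy metrics an LFQbench-style evaluation reports, the sketch below compares measured protein ratios between two hybrid-proteome samples against known per-species mixing ratios. The species labels, measured ratios, and expected values are assumptions for illustration, not values from the study.

```python
import numpy as np

# Hypothetical measured log2 ratios (sample A vs sample B) grouped by species,
# and the expected log2 ratio for each species given an assumed mixing scheme.
measured = {"HUMAN": np.array([0.02, -0.05, 0.10]),
            "YEAST": np.array([0.95, 1.10, 1.02]),
            "ECOLI": np.array([-1.90, -2.05, -2.10])}
expected = {"HUMAN": 0.0, "YEAST": 1.0, "ECOLI": -2.0}

for species, ratios in measured.items():
    accuracy = np.median(ratios) - expected[species]   # deviation from the expected ratio
    precision = np.std(ratios, ddof=1)                 # spread of the measured ratios
    print(f"{species}: accuracy={accuracy:+.2f} log2, precision={precision:.2f} log2")
```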
Zhang, Yin; Wang, Lei; Diao, Tianxi
2013-12-01
The Clinical and Translational Science Awards (CTSA) program is one of the most important initiatives in translational medical funding. Quantitative evaluation of the efficiency and performance of the CTSA program provides an important reference for decision making in translational medical funding worldwide. Using science mapping and scientometric analytic tools, this study quantitatively analyzed the scientific articles funded by the CTSA program. The results showed that the productivity of the CTSA program has increased steadily since 2008. In addition, the emerging trends of the research funded by the CTSA program covered both clinical and basic medical research fields. The academic benefits of the CTSA program included helping its members build a robust academic home for clinical and translational science and attract additional financial support. This study provided a quantitative evaluation of the CTSA program based on science mapping and scientometric analysis. Further research is required to compare and optimize other quantitative methods and to integrate the various research results. © 2013 Wiley Periodicals, Inc.
Evaluation of board performance in Iran’s universities of medical sciences
Sajadi, Haniye Sadat; Maleki, Mohammadreza; Ravaghi, Hamid; Farzan, Homayoun; Aminlou, Hasan; Hadi, Mohammad
2014-01-01
Background: The critical role that boards play in the governance of universities makes evaluating their performance a necessity. This study aimed to evaluate the performance of the boards of medical universities and to provide solutions for enhancing it. Methods: The first phase of the present study was a qualitative study in which data were collected through face-to-face semi-structured interviews and analyzed using a thematic approach. The second phase was a mixed qualitative and quantitative study, with the quantitative part in a cross-sectional format and the qualitative part in a content-analysis format. In the quantitative part, data were collected through the Ministry of Health and Medical Education (MoHME). In the qualitative part, the content of 2,148 resolutions selected using a stratified sampling method was analyzed. Results: Participants believed that the boards had not performed acceptably for a long time. Results also indicated an increasing number of board meetings and resolutions over these 21 years. The boards' resolutions were mostly operational in domain and administrative in nature. The share of specific resolutions was greater than that of general ones. Conclusion: Given the current pace of change and development and the need to respond to them in a timely manner, it is recommended that the slow improvement process of the boards be accelerated. More delegation and strengthening of the boards' position appear to be effective strategies for speeding up this process. PMID:25337597
High Resolution Qualitative and Quantitative MR Evaluation of the Glenoid Labrum
Iwasaki, Kenyu; Tafur, Monica; Chang, Eric Y.; Statum, Sheronda; Biswas, Reni; Tran, Betty; Bae, Won C.; Du, Jiang; Bydder, Graeme M.; Chung, Christine B.
2015-01-01
Objective: To implement qualitative and quantitative MR sequences for the evaluation of labral pathology. Methods: Six glenoid labra were dissected and the anterior and posterior portions were divided into normal, mildly degenerated, or severely degenerated groups using gross and MR findings. Qualitative evaluation was performed using T1-weighted, proton density-weighted (PD), spoiled gradient echo (SPGR) and ultra-short echo time (UTE) sequences. Quantitative evaluation included T2 and T1rho measurements as well as T1, T2*, and T1rho measurements acquired with UTE techniques. Results: SPGR and UTE sequences best demonstrated labral fiber structure. Degenerated labra had a tendency towards decreased T1 values, increased T2/T2* values and increased T1rho values. T2* values obtained with the UTE sequence allowed for delineation between normal, mildly degenerated and severely degenerated groups (p<0.001). Conclusion: Quantitative T2* measurements acquired with the UTE technique are useful for distinguishing between normal, mildly degenerated and severely degenerated labra. PMID:26359581
A Comparative Assessment of Greek Universities' Efficiency Using Quantitative Analysis
ERIC Educational Resources Information Center
Katharaki, Maria; Katharakis, George
2010-01-01
In part due to the increased demand for higher education, typical evaluation frameworks for universities often address the key issue of available resource utilisation. This study seeks to estimate the efficiency of 20 public universities in Greece through quantitative analysis (including performance indicators, data envelopment analysis (DEA) and…
Evaluating Inquiry-Based Learning as a Means to Advance Individual Student Achievement
ERIC Educational Resources Information Center
Ziemer, Cherilyn G.
2013-01-01
Although inquiry-based learning has been debated throughout the greater educational community and demonstrated with some effect in modern classrooms, little quantitative analysis has been performed to empirically validate sustained benefits. This quantitative study focused on whether inquiry-based pedagogy actually brought about sustained and…
NASA Astrophysics Data System (ADS)
Muldoon, Timothy J.; Thekkek, Nadhi; Roblyer, Darren; Maru, Dipen; Harpaz, Noam; Potack, Jonathan; Anandasabapathy, Sharmila; Richards-Kortum, Rebecca
2010-03-01
Early detection of neoplasia in patients with Barrett's esophagus is essential to improve outcomes. The aim of this ex vivo study was to evaluate the ability of high-resolution microendoscopic imaging and quantitative image analysis to identify neoplastic lesions in patients with Barrett's esophagus. Nine patients with pathologically confirmed Barrett's esophagus underwent endoscopic examination with biopsies or endoscopic mucosal resection. Resected fresh tissue was imaged with fiber bundle microendoscopy; images were analyzed by visual interpretation or by quantitative image analysis to predict whether the imaged sites were non-neoplastic or neoplastic. The best performing pair of quantitative features were chosen based on their ability to correctly classify the data into the two groups. Predictions were compared to the gold standard of histopathology. Subjective analysis of the images by expert clinicians achieved average sensitivity and specificity of 87% and 61%, respectively. The best performing quantitative classification algorithm relied on two image textural features and achieved a sensitivity and specificity of 87% and 85%, respectively. This ex vivo pilot trial demonstrates that quantitative analysis of images obtained with a simple microendoscope system can distinguish neoplasia in Barrett's esophagus with good sensitivity and specificity when compared to histopathology and to subjective image interpretation.
A Standard Psychophysiological Preparation (SPP) for the evaluation of the physiological and biomechanical mechanisms responsible for performance...consumption, biomechanical parameters and performance level. The results of pilot tests (including 6-hours exposure to vibration while performing a...tracking task) verify the applicability and utility of the SPP and demonstrate that high quality, quantitative physiological and biomechanical data can
Ernst, Marielle; Kriston, Levente; Romero, Javier M.; Frölich, Andreas M.; Jansen, Olav; Fiehler, Jens; Buhk, Jan-Hendrik
2016-01-01
Purpose: We sought to develop a standardized curriculum capable of assessing key competencies in Interventional Neuroradiology through the use of models and simulators in an objective, quantitative, and efficient way. In this evaluation we analyzed the associations between the practical experience, theoretical knowledge, and skills lab performance of interventionalists. Materials and Methods: We evaluated the endovascular skills of 26 participants of the Advanced Course in Endovascular Interventional Neuroradiology of the European Society of Neuroradiology with a set of three tasks (aneurysm coiling and thrombectomy in a virtual simulator and placement of an intra-aneurysmal flow disruptor in a flow model). Practical experience was assessed by a survey. Participants completed a written and oral examination to evaluate theoretical knowledge. Bivariate and multivariate analyses were performed. Results: In multivariate analysis, knowledge of materials and techniques in Interventional Neuroradiology was moderately associated with skills in aneurysm coiling and thrombectomy. Experience in mechanical thrombectomy was moderately associated with thrombectomy skills, while age was negatively associated with thrombectomy skills. We found no significant association between age, sex, or work experience and skills in aneurysm coiling. Conclusion: Our study gives an example of what an integrated curriculum for a sound and cost-effective assessment of an interventional neuroradiologist's key competences could look like. In addition to traditional assessment of theoretical knowledge, practical skills are measured with endovascular simulators, yielding objective, quantitative, and constructive data for evaluating both the current performance of participants and the evolution of their technical competency over time. PMID:26848840
Misyura, Maksym; Sukhai, Mahadeo A; Kulasignam, Vathany; Zhang, Tong; Kamel-Reid, Suzanne; Stockley, Tracy L
2018-02-01
A standard approach in test evaluation is to compare results of the assay in validation to results from previously validated methods. For quantitative molecular diagnostic assays, comparison of test values is often performed using simple linear regression and the coefficient of determination (R²), using R² as the primary metric of assay agreement. However, the use of R² alone does not adequately quantify the constant or proportional errors required for optimal test evaluation. More extensive statistical approaches, such as Bland-Altman and expanded interpretation of linear regression methods, can be used to more thoroughly compare data from quantitative molecular assays. We present the application of Bland-Altman and linear regression statistical methods to evaluate quantitative outputs from next-generation sequencing (NGS) assays. NGS-derived data sets from assay validation experiments were used to demonstrate the utility of the statistical methods. Both Bland-Altman and linear regression were able to detect the presence and magnitude of constant and proportional error in quantitative values of NGS data. Deming linear regression was used in the context of assay comparison studies, while simple linear regression was used to analyse serial dilution data. The Bland-Altman statistical approach was also adapted to quantify assay accuracy, including constant and proportional errors, and precision where theoretical and empirical values were known. The complementary application of the statistical methods described in this manuscript enables more extensive evaluation of the performance characteristics of quantitative molecular assays prior to implementation in the clinical molecular laboratory. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
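Below is a minimal sketch of the Bland-Altman computation discussed here (bias and 95% limits of agreement between two quantitative assays), assuming paired measurements from a reference method and the assay under validation; the values are invented, not data from the study.

```python
import numpy as np

# Paired quantitative results from a reference method and the assay under validation
# (hypothetical variant allele fractions, in %).
reference = np.array([5.1, 12.3, 24.8, 39.5, 51.2, 68.0])
candidate = np.array([5.9, 13.1, 26.0, 41.2, 52.0, 70.5])

diff = candidate - reference
bias = diff.mean()                          # constant (systematic) error
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement

print(f"bias = {bias:.2f}, limits of agreement = {loa[0]:.2f} to {loa[1]:.2f}")
```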
A Database Evaluation Based on Information Needs of Academic Social Scientists.
ERIC Educational Resources Information Center
Buterbaugh, Nancy Toth
This study evaluates two databases, "Historical Abstracts" and REESWeb, to determine their effectiveness in supporting academic social science research. While many performance evaluations gather quantitative data from isolated query and response transactions, this study is a qualitative evaluation of the databases in the context of…
Evaluation Processes Used to Assess the Effectiveness of Vocational-Technical Programs.
ERIC Educational Resources Information Center
Bruhns, Arthur E.
Evaluation is quantitative or qualitative, the criteria determined by or given to the student. The criteria show how close he has come to the program's objectives and the ranking of individual performance. Vocational education programs susceptible to evaluation are listed and relevant evaluative techniques discussed. Graduate interviews concerning…
Rota, Cristina; Biondi, Marco; Trenti, Tommaso
2011-09-26
Aution Max AX-4030, a test strip analyzer recently introduced to the market, represents an upgrade of the Aution Max AX-4280 widely employed for urinalysis. This new instrument model can allocate two different test strips at the same time. In the present study the two instruments were compared using Uriflet 9UB and the recently introduced Aution Sticks 10PA urine strips, the latter presenting an additional test area for the measurement of urinary creatinine. Imprecision and correlation between instruments and strips were evaluated for chemical-physical parameters. Accuracy was evaluated for protein, glucose and creatinine by comparing the semi-quantitative results to those obtained by quantitative methods. The well-known interference of high ascorbic acid levels on urine glucose test strip determination was evaluated; the influence of ascorbic acid on protein and creatinine determination was also assessed. The two instruments demonstrated comparable performances: precision and correlation between instruments and strips, evaluated for chemical-physical parameters, were always good. Furthermore, accuracy was very good: semi-quantitative measurements of protein and glucose were highly correlated with those obtained by quantitative methods. Moreover, the semi-quantitative measurements of creatinine, employing Aution Sticks 10PA urine strips, were highly comparable with quantitative results. 10PA urine strips are therefore suitable for urine creatinine determination, making it possible to correct urinalysis results for urinary creatinine concentration whenever necessary and to calculate the protein-creatinine ratio. Further studies should be carried out to evaluate the effectiveness and appropriateness of semi-quantitative creatinine analysis.
van Zadelhoff, Claudia; Ehrle, Anna; Merle, Roswitha; Jahn, Werner; Lischer, Christoph
2018-05-09
Scintigraphy is a standard diagnostic method for evaluating horses with back pain due to suspected thoracic processus spinosus pathology. Lesion detection is based on subjective or semi-quantitative assessments of increased uptake. This retrospective, analytical study aimed to compare semi-quantitative and subjective methods in the evaluation of scintigraphic images of the processi spinosi in the equine thoracic spine. Scintigraphic images of 20 Warmblood horses, presented for assessment of orthopedic conditions between 2014 and 2016, were included in the study. Randomized, blinded image evaluation was performed by 11 veterinarians using subjective and semi-quantitative methods. Subjective grading was performed for the analysis of red-green-blue and grayscale scintigraphic images, which were presented in full size or as masked images. For the semi-quantitative assessment, observers placed regions of interest over each processus spinosus. The uptake ratio of each processus spinosus relative to a reference region of interest was determined. Subsequently, a modified semi-quantitative calculation was developed whereby only the highest counts per pixel for a specified number of pixels were processed. Inter- and intraobserver agreement was calculated using intraclass correlation coefficients. Inter- and intraobserver intraclass correlation coefficients were 41.65% and 71.39%, respectively, for the subjective image assessment. Additionally, a correlation between intraobserver agreement, experience, and grayscale images was identified. The inter- and intraobserver agreement increased significantly when using semi-quantitative analysis (97.35% and 98.36%, respectively) or the modified semi-quantitative calculation (98.61% and 98.82%, respectively). The proposed modified semi-quantitative technique showed higher inter- and intraobserver agreement than the other methods, which makes it a useful tool for the analysis of scintigraphic images. The association of these findings with clinical and radiological examinations requires further investigation. © 2018 American College of Veterinary Radiology.
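To make the region-of-interest analysis concrete, the following sketch computes an uptake ratio for a processus spinosus region against a reference region, together with the modified variant that keeps only the highest counts-per-pixel values; the pixel arrays and the number of retained pixels are assumptions for illustration only.

```python
import numpy as np

def uptake_ratio(roi_counts, reference_counts):
    """Standard semi-quantitative ratio: mean counts in the ROI over the reference ROI."""
    return roi_counts.mean() / reference_counts.mean()

def modified_uptake_ratio(roi_counts, reference_counts, n_pixels=20):
    """Modified calculation: use only the n highest counts-per-pixel values in each region."""
    top_roi = np.sort(roi_counts, axis=None)[-n_pixels:]
    top_ref = np.sort(reference_counts, axis=None)[-n_pixels:]
    return top_roi.mean() / top_ref.mean()

# Hypothetical pixel count arrays for one processus spinosus and a reference region.
rng = np.random.default_rng(0)
roi = rng.poisson(120, size=(15, 15))
ref = rng.poisson(100, size=(15, 15))
print(uptake_ratio(roi, ref), modified_uptake_ratio(roi, ref))
```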
HuMOVE: a low-invasive wearable monitoring platform in sexual medicine.
Ciuti, Gastone; Nardi, Matteo; Valdastri, Pietro; Menciassi, Arianna; Basile Fasolo, Ciro; Dario, Paolo
2014-10-01
To investigate an accelerometer-based wearable system, named the Human Movement (HuMOVE) platform, designed to enable quantitative and continuous measurement of sexual performance with minimal invasiveness and inconvenience for users. Design, implementation, and development of HuMOVE, a wearable platform equipped with an accelerometer sensor for monitoring inertial parameters for sexual performance assessment and diagnosis, were performed. The system enables quantitative measurement of movement parameters during sexual intercourse, meeting the requirements of wearability, data storage, sampling rate, and interfacing methods that are fundamental for the analysis of human sexual intercourse performance. HuMOVE was validated through characterization on a controlled experimental test bench and evaluated in a human model under simulated sexual intercourse conditions. HuMOVE proved to be a robust quantitative monitoring platform and a reliable candidate for sexual performance evaluation and diagnosis. Characterization on the controlled experimental test bench demonstrated an accurate correlation between the HuMOVE system and data from a reference displacement sensor. Experimental tests in the human model under simulated intercourse conditions confirmed the accuracy of the sexual performance evaluation platform and the effectiveness of the selected and derived parameters. The outcomes also met the project expectations in terms of usability and comfort, as evidenced by questionnaires highlighting the low invasiveness and high acceptance of the device. To the best of our knowledge, the HuMOVE platform is the first device for human sexual performance analysis compatible with sexual intercourse; the system has the potential to be a helpful tool for physicians to accurately classify sexual disorders, such as premature or delayed ejaculation. Copyright © 2014 Elsevier Inc. All rights reserved.
Fukae, Jun; Kon, Yujiro; Henmi, Mihoko; Sakamoto, Fumihiko; Narita, Akihiro; Shimizu, Masato; Tanimura, Kazuhide; Matsuhashi, Megumi; Kamishima, Tamotsu; Atsumi, Tatsuya; Koike, Takao
2010-05-01
To investigate the relationship between synovial vascularity assessed by quantitative power Doppler sonography (PDS) and progression of structural bone damage in a single finger joint in patients with rheumatoid arthritis (RA). We studied 190 metacarpophalangeal (MCP) joints and 190 proximal interphalangeal (PIP) joints of 19 patients with active RA who had initial treatment with disease-modifying antirheumatic drugs (DMARDs). Patients were examined by clinical and laboratory assessments throughout the study. Hand and foot radiography was performed at baseline and the twentieth week. Magnetic resonance imaging (MRI) was performed at baseline. PDS was performed at baseline and the eighth week. Synovial vascularity was evaluated according to both quantitative and semiquantitative methods. Quantitative PDS was significantly correlated with the enhancement rate of MRI in each single finger joint. Comparing quantitative synovial vascularity and radiographic change in single MCP or PIP joints, the level of vascularity at baseline showed a significant positive correlation with radiographic progression at the twentieth week. The change of vascularity in response to DMARDs, defined as the percentage change in vascularity by the eighth week from baseline, was inversely correlated with radiographic progression in each MCP joint. The quantitative PDS method was more useful than the semiquantitative method for the evaluation of synovial vascularity in a single finger joint. The change of synovial vascularity in a single finger joint determined by quantitative PDS could numerically predict its radiographic progression. Using vascularity as a guide to consider a therapeutic approach would have benefits for patients with active RA.
Quantitative methods in assessment of neurologic function.
Potvin, A R; Tourtellotte, W W; Syndulko, K; Potvin, J
1981-01-01
Traditionally, neurologists have emphasized qualitative techniques for assessing results of clinical trials. However, in recent years qualitative evaluations have been increasingly augmented by quantitative tests for measuring neurologic functions pertaining to mental state, strength, steadiness, reactions, speed, coordination, sensation, fatigue, gait, station, and simulated activities of daily living. Quantitative tests have long been used by psychologists for evaluating asymptomatic function, assessing human information processing, and predicting proficiency in skilled tasks; however, their methodology has never been directly assessed for validity in a clinical environment. In this report, relevant contributions from the literature on asymptomatic human performance and that on clinical quantitative neurologic function are reviewed and assessed. While emphasis is focused on tests appropriate for evaluating clinical neurologic trials, evaluations of tests for reproducibility, reliability, validity, and examiner training procedures, and for effects of motivation, learning, handedness, age, and sex are also reported and interpreted. Examples of statistical strategies for data analysis, scoring systems, data reduction methods, and data display concepts are presented. Although investigative work still remains to be done, it appears that carefully selected and evaluated tests of sensory and motor function should be an essential factor for evaluating clinical trials in an objective manner.
Quantitative analysis of the anti-noise performance of an m-sequence in an electromagnetic method
NASA Astrophysics Data System (ADS)
Yuan, Zhe; Zhang, Yiming; Zheng, Qijia
2018-02-01
An electromagnetic method with a transmitted waveform coded by an m-sequence achieved better anti-noise performance than the conventional approach using a square wave. The anti-noise performance of the m-sequence varied with multiple coding parameters; hence, a quantitative analysis of the anti-noise performance of m-sequences with different coding parameters was required to optimize them. This paper proposes the concept of an identification system in which the Earth impulse response is identified from the measured system output, with the voltage response serving as the input. A quantitative analysis of the anti-noise performance of the m-sequence was achieved by analyzing the amplitude-frequency response of the corresponding identification system. The effects of the coding parameters on the anti-noise performance are summarized by numerical simulation, and their optimization is further discussed in our conclusions; the validity of the conclusions is verified by a field experiment. The quantitative analysis method proposed in this paper provides new insight into the anti-noise mechanism of the m-sequence and could be used to evaluate the anti-noise performance of artificial sources in other time-domain exploration methods, such as the seismic method.
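As a rough illustration of why an m-sequence is attractive here, the sketch below generates a maximal-length sequence with a linear-feedback shift register and shows its nearly impulse-like periodic autocorrelation; the register length and feedback taps are an arbitrary textbook choice, not the coding parameters analyzed in the paper.

```python
import numpy as np

def m_sequence(taps, n_bits):
    """Generate one period of a maximal-length sequence from LFSR feedback taps."""
    state = [1] * n_bits                  # any nonzero initial state works
    seq = []
    for _ in range(2**n_bits - 1):
        seq.append(state[-1])
        feedback = 0
        for t in taps:
            feedback ^= state[t - 1]
        state = [feedback] + state[:-1]
    return np.array(seq)

seq = 2 * m_sequence(taps=[5, 3], n_bits=5) - 1   # map {0,1} -> {-1,+1}
N = len(seq)                                      # 31 chips per period
# Periodic autocorrelation: N at zero lag, -1 at every other lag (impulse-like),
# which is what gives the coded waveform its noise-rejection advantage.
acf = np.array([np.dot(seq, np.roll(seq, k)) for k in range(N)])
print(acf[:5])   # e.g. [31 -1 -1 -1 -1]
```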
Shi, Yin; Zong, Min; Xu, Xiaoquan; Zou, Yuefen; Feng, Yang; Liu, Wei; Wang, Chuanbing; Wang, Dehang
2015-04-01
To quantitatively evaluate nerve roots by measuring fractional anisotropy (FA) values in healthy volunteers and sciatica patients, visualize nerve roots by tractography, and compare the diagnostic efficacy between conventional magnetic resonance imaging (MRI) and DTI. Seventy-five sciatica patients and thirty-six healthy volunteers underwent MR imaging using DTI. FA values for L5-S1 lumbar nerve roots were calculated at three levels from DTI images. Tractography was performed on L3-S1 nerve roots. ROC analysis was performed for FA values. The lumbar nerve roots were visualized and FA values were calculated in all subjects. FA values decreased in compressed nerve roots and declined from proximal to distal along the compressed nerve tracts. Mean FA values were more sensitive and specific than MR imaging for differentiating compressed nerve roots, especially in the far lateral zone at distal nerves. DTI can quantitatively evaluate compressed nerve roots, and DTT enables visualization of abnormal nerve tracts, providing vivid anatomic information and localization of probable nerve compression. DTI has great potential utility for evaluating lumbar nerve compression in sciatica. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
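Below is a minimal sketch of the kind of ROC analysis described for FA values, assuming FA measurements labeled by whether the nerve root was compressed; the numbers are invented and scikit-learn is assumed to be available.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical mean FA values; label 1 = compressed nerve root, 0 = normal.
fa = np.array([0.21, 0.24, 0.26, 0.28, 0.31, 0.33, 0.35, 0.37, 0.40, 0.42])
compressed = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])

# Lower FA suggests compression, so use -fa as the decision score.
auc = roc_auc_score(compressed, -fa)
fpr, tpr, thresholds = roc_curve(compressed, -fa)
best = np.argmax(tpr - fpr)                    # Youden index for an optimal cutoff
print(f"AUC = {auc:.2f}, optimal FA cutoff = {-thresholds[best]:.2f}")
```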
NASA Astrophysics Data System (ADS)
Wörz, Stefan; Hoegen, Philipp; Liao, Wei; Müller-Eschner, Matthias; Kauczor, Hans-Ulrich; von Tengg-Kobligk, Hendrik; Rohr, Karl
2016-03-01
We introduce a framework for quantitative evaluation of 3D vessel segmentation approaches using vascular phantoms. Phantoms are designed using a CAD system and created with a 3D printer, and comprise realistic shapes including branches and pathologies such as abdominal aortic aneurysms (AAA). To transfer ground truth information to the 3D image coordinate system, we use a landmark-based registration scheme utilizing fiducial markers integrated in the phantom design. For accurate 3D localization of the markers we developed a novel 3D parametric intensity model that is directly fitted to the markers in the images. We also performed a quantitative evaluation of different vessel segmentation approaches for a phantom of an AAA.
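To illustrate the landmark-based registration step used to transfer phantom ground truth into image coordinates, here is a small least-squares rigid-alignment sketch (the classic Kabsch/Procrustes solution); the fiducial coordinates are invented and do not reflect the phantom's actual marker layout.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (rotation R, translation t) mapping src -> dst."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

# Hypothetical fiducial marker positions in CAD (design) and image coordinates.
cad = np.array([[0, 0, 0], [40, 0, 0], [0, 40, 0], [0, 0, 40]], dtype=float)
angle = np.deg2rad(10)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
image = cad @ R_true.T + np.array([5.0, -2.0, 1.5])

R, t = rigid_register(cad, image)
residual = np.linalg.norm(cad @ R.T + t - image, axis=1).max()
print(f"max registration residual: {residual:.2e} mm")
```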
Performance Assessment of Human and Cattle Associated Quantitative Real-time PCR Assays - slides
The presentation covers (1) a single-laboratory performance assessment of human- and cattle-associated PCR assays and (2) a field study evaluating two human fecal waste management practices in an Ohio watershed.
Exploring the gender gap in the conceptual survey of electricity and magnetism
NASA Astrophysics Data System (ADS)
Henderson, Rachel; Stewart, Gay; Stewart, John; Michaluk, Lynnette; Traxler, Adrienne
2017-12-01
The "gender gap" on various physics conceptual evaluations has been extensively studied. Men's average pretest scores on the Force Concept Inventory and Force and Motion Conceptual Evaluation are 13% higher than women's, and post-test scores are on average 12% higher than women's. This study analyzed the gender differences within the Conceptual Survey of Electricity and Magnetism (CSEM) in which the gender gap has been less well studied and is less consistent. In the current study, data collected from 1407 students (77% men, 23% women) in a calculus-based physics course over ten semesters showed that male students outperformed female students on the CSEM pretest (5%) and post-test (6%). Separate analyses were conducted for qualitative and quantitative problems on lab quizzes and course exams and showed that male students outperformed female students by 3% on qualitative quiz and exam problems. Male and female students performed equally on the quantitative course exam problems. The gender gaps within CSEM post-test scores, qualitative lab quiz scores, and qualitative exam scores were insignificant for students with a CSEM pretest score of 25% or less but grew as pretest scores increased. Structural equation modeling demonstrated that a latent variable, called Conceptual Physics Performance/Non-Quantitative (CPP/NonQnt), orthogonal to quantitative test performance was useful in explaining the differences observed in qualitative performance; this variable was most strongly related to CSEM post-test scores. The CPP/NonQnt of male students was 0.44 standard deviations higher than female students. The CSEM pretest measured CPP/NonQnt much less accurately for women (R² = 4%) than for men (R² = 17%). The failure to detect a gender gap for students scoring 25% or less on the pretest suggests that the CSEM instrument itself is not gender biased. The failure to find a performance difference in quantitative test performance while detecting a gap in qualitative performance suggests the qualitative differences do not result from psychological factors such as science anxiety or stereotype threat.
Designing automation for human use: empirical studies and quantitative models.
Parasuraman, R
2000-07-01
An emerging knowledge base of human performance research can provide guidelines for designing automation that can be used effectively by human operators of complex systems. Which functions should be automated and to what extent in a given system? A model for types and levels of automation that provides a framework and an objective basis for making such choices is described. The human performance consequences of particular types and levels of automation constitute primary evaluative criteria for automation design when using the model. Four human performance areas are considered--mental workload, situation awareness, complacency and skill degradation. Secondary evaluative criteria include such factors as automation reliability, the risks of decision/action consequences and the ease of systems integration. In addition to this qualitative approach, quantitative models can inform design. Several computational and formal models of human interaction with automation that have been proposed by various researchers are reviewed. An important future research need is the integration of qualitative and quantitative approaches. Application of these models provides an objective basis for designing automation for effective human use.
Data Driven Performance Evaluation of Wireless Sensor Networks
Frery, Alejandro C.; Ramos, Heitor S.; Alencar-Neto, José; Nakamura, Eduardo; Loureiro, Antonio A. F.
2010-01-01
Wireless Sensor Networks are presented as devices for signal sampling and reconstruction. Within this framework, the qualitative and quantitative influence of (i) signal granularity, (ii) spatial distribution of sensors, (iii) sensors clustering, and (iv) signal reconstruction procedure are assessed. This is done by defining an error metric and performing a Monte Carlo experiment. It is shown that all these factors have significant impact on the quality of the reconstructed signal. The extent of such impact is quantitatively assessed. PMID:22294920
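A toy version of the Monte Carlo experiment outlined in this abstract: sample a known signal at randomly placed sensor locations, reconstruct it by interpolation, and average an error metric over many runs. The signal, sensor counts, and the RMSE error metric are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(0, 1, 500)
signal = np.sin(2 * np.pi * 3 * x) + 0.5 * np.sin(2 * np.pi * 7 * x)  # "true" field

def reconstruction_error(n_sensors, n_runs=200):
    errors = []
    for _ in range(n_runs):
        pos = np.sort(rng.uniform(0, 1, n_sensors))       # random sensor placement
        samples = np.interp(pos, x, signal)               # noiseless sensor readings
        recon = np.interp(x, pos, samples)                # linear reconstruction
        errors.append(np.sqrt(np.mean((recon - signal) ** 2)))
    return np.mean(errors)

for n in (10, 20, 40, 80):
    print(n, "sensors -> mean RMSE", round(reconstruction_error(n), 3))
```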
NASA Astrophysics Data System (ADS)
Nuraeni, E.; Rahmat, A.
2018-05-01
Forming cognitive schemes of plant anatomy concepts requires processing the qualitative and quantitative data obtained from microscopic observations. To enhance students' quantitative literacy, the strategy of the plant anatomy course was modified by adding a task in which students analyze quantitative data produced by quantitative measurements of plant anatomy, guided by the course material. Participants in this study were 24 biology students and 35 biology education students. A quantitative literacy test, a test of complex thinking in plant anatomy, and a questionnaire were used to evaluate the course. Quantitative literacy data were collected with a quantitative literacy test scored using the rubric from the Association of American Colleges and Universities; complex thinking in plant anatomy was assessed with a test based on Marzano's framework and a questionnaire. Quantitative literacy data were categorized according to modified Rhodes and Finley categories. The results showed that the quantitative literacy of biology education students is better than that of biology students.
ERIC Educational Resources Information Center
Winters, Michael Joseph
2017-01-01
This quantitative correlational study examined the relationship between teacher pre-hiring effectiveness evaluation and their performance evaluation scores at the end of the first year of teaching. Prior to this study, it was not known if and to what degree teachers' scores on the Ventures for Excellence B-22 screening interview correlated to…
Technology efficacy in active prosthetic knees for transfemoral amputees: a quantitative evaluation.
El-Sayed, Amr M; Hamzaid, Nur Azah; Abu Osman, Noor Azuan
2014-01-01
Several studies have presented technological ensembles of active knee systems for transfemoral prostheses. Other studies have examined amputees' gait performance while wearing a specific active prosthesis. This paper combines both insights, that is, a technical examination of the components used with an evaluation of how these improved the gait of the respective users. This study aims to offer a quantitative understanding of the potential enhancement derived from strategic integration of core elements in developing an effective device. The study systematically discusses the current technology in active transfemoral prostheses with respect to functional walking performance among above-knee amputee users, to evaluate each system's efficacy in producing close-to-normal user performance. The performance of the actuator, sensory system, and control technique incorporated in each reported system was evaluated separately, and numerical comparisons were conducted based on the percentage deviation of amputees' gait from normal gait profile points. The results identified particular components that contributed closest to normal gait parameters. However, the conclusions are only extendable to a limited degree owing to the small number of studies. Thus, more clinical validation of active prosthetic knee technology is needed to better understand the extent of each component's contribution to the most functional development.
Tu, Chengjian; Shen, Shichen; Sheng, Quanhu; Shyr, Yu; Qu, Jun
2017-01-30
Reliable quantification of low-abundance proteins in complex proteomes is challenging, largely owing to the limited number of spectra/peptides identified. In this study we developed a straightforward method to improve the quantitative accuracy and precision of proteins by strategically retrieving the less confident peptides that were previously filtered out using the standard target-decoy search strategy. The filtered-out MS/MS spectra matched to confidently identified proteins were recovered, and the peptide-spectrum-match FDR was recalculated and controlled at a confident level of FDR≤1%, while the protein FDR was maintained at ~1%. We evaluated the performance of this strategy in both spectral count- and ion current-based methods. Increases of >60% in total quantified spectra/peptides were achieved for a spike-in sample set and a public dataset from CPTAC, respectively. Incorporating the peptide retrieval strategy significantly improved quantitative accuracy and precision, especially for low-abundance proteins (e.g. one-hit proteins). Moreover, the capacity to confidently discover significantly altered proteins was also enhanced substantially, as demonstrated with two spike-in datasets. In summary, improved quantitative performance was achieved by this peptide recovery strategy without compromising the confidence of protein identification, and the strategy can be readily implemented in a broad range of quantitative proteomics techniques, including label-free and labeling approaches. We hypothesized that more quantifiable spectra and peptides per protein, even including less confident peptides, could help reduce variation and improve protein quantification. Hence the peptide retrieval strategy was developed and evaluated in two spike-in sample sets with different LC-MS/MS variations using both MS1- and MS2-based quantitative approaches. The list of confidently identified proteins from the standard target-decoy search strategy was fixed, and additional, less confident spectra/peptides matched to these confident proteins were retrieved. The total peptide-spectrum-match false discovery rate (PSM FDR) after retrieval was still controlled at a confident level of FDR≤1%. As expected, the penalty for occasionally incorporating incorrect peptide identifications is negligible compared with the improvements in quantitative performance. This simple strategy yielded more quantifiable peptides, a lower missing-value rate, and better quantitative accuracy and precision for the same protein identifications. The strategy is theoretically applicable to any quantitative approach in proteomics and thereby provides more quantitative information, especially on low-abundance proteins. Published by Elsevier B.V.
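A simplified sketch of the retrieval idea described here: fix the protein list from a standard search, derive a score cutoff at which the global target-decoy PSM FDR stays at or below 1%, and rescue target PSMs above that cutoff that map to the confident proteins. The PSM tuples and the threshold logic are illustrative assumptions, not the authors' exact implementation.

```python
# Each PSM: (score, is_decoy, protein). Confident proteins come from the standard search.
def fdr_threshold(psms, fdr_limit=0.01):
    """Lowest score at which the global target-decoy PSM FDR is still <= fdr_limit."""
    decoys = targets = 0
    threshold = None
    for score, is_decoy, _ in sorted(psms, key=lambda p: p[0], reverse=True):
        decoys += is_decoy
        targets += not is_decoy
        if targets and decoys / targets <= fdr_limit:
            threshold = score
    return threshold

def retrieve_psms(psms, confident_proteins, fdr_limit=0.01):
    """Rescue target PSMs above the FDR-controlled cutoff that map to confident proteins."""
    cutoff = fdr_threshold(psms, fdr_limit)
    if cutoff is None:
        return []
    return [(s, prot) for s, is_decoy, prot in psms
            if not is_decoy and s >= cutoff and prot in confident_proteins]

psms = [(95, False, "P1"), (90, False, "P2"), (70, False, "P1"),
        (65, True, "rev_P9"), (60, False, "P2"), (55, False, "P3")]
print(retrieve_psms(psms, confident_proteins={"P1", "P2"}))
```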
NASA Astrophysics Data System (ADS)
Chen, Xiaogang; Hu, Xizhen; Huang, Dexiu
2014-09-01
The transmission performance of a single sideband (SSB) radio over fiber (RoF) system is evaluated by tuning the modulation index of the Mach-Zehnder modulator; two different data modulation schemes and the influence of fiber dispersion are considered. The quantitative simulation results confirm that there exists an optimum modulation index and that the system performance can be improved if the data signal is modulated onto only the optical carrier or the sidebands.
Watanabe, Jun-ichiro; Ishibashi, Nozomu; Yano, Kazuo
2014-01-01
Quantitative analyses of human-generated data collected in various fields have uncovered many patterns of complex human behaviors. However, thus far the quantitative evaluation of the relationship between the physical behaviors of employees and their performance has been inadequate. Here, we present findings demonstrating the significant relationship between the physical behaviors of employees and their performance via experiments we conducted in inbound call centers while the employees wore sensor badges. There were two main findings. First, we found that face-to-face interaction among telecommunicators and the frequency of their bodily movements caused by the face-to-face interaction had a significant correlation with the entire call center performance, which we measured as “Calls per Hour.” Second, our trial to activate face-to-face interaction on the basis of data collected by the wearable sensor badges the employees wore significantly increased their performance. These results demonstrate quantitatively that human-human interaction in the physical world plays an important role in team performance. PMID:25501748
Chapiro, Julius; Wood, Laura D.; Lin, MingDe; Duran, Rafael; Cornish, Toby; Lesage, David; Charu, Vivek; Schernthaner, Rüdiger; Wang, Zhijun; Tacher, Vania; Savic, Lynn Jeanette; Kamel, Ihab R.
2014-01-01
Purpose: To evaluate the diagnostic performance of three-dimensional (3D) quantitative enhancement-based and diffusion-weighted volumetric magnetic resonance (MR) imaging assessment of hepatocellular carcinoma (HCC) lesions in determining the extent of pathologic tumor necrosis after transarterial chemoembolization (TACE). Materials and Methods: This institutional review board-approved retrospective study included 17 patients with HCC who underwent TACE before surgery. Semiautomatic 3D volumetric segmentation of target lesions was performed at the last MR examination before orthotopic liver transplantation or surgical resection. The amount of necrotic tumor tissue on contrast material-enhanced arterial phase MR images and the amount of diffusion-restricted tumor tissue on apparent diffusion coefficient (ADC) maps were expressed as a percentage of the total tumor volume. Visual assessment of the extent of tumor necrosis and tumor response according to European Association for the Study of the Liver (EASL) criteria was performed. Pathologic tumor necrosis was quantified by using slide-by-slide segmentation. Correlation analysis was performed to evaluate the predictive values of the radiologic techniques. Results: At histopathologic examination, the mean percentage of tumor necrosis was 70% (range, 10%-100%). Both 3D quantitative techniques demonstrated a strong correlation with tumor necrosis at pathologic examination (R² = 0.9657 and R² = 0.9662 for quantitative EASL and quantitative ADC, respectively) and a strong intermethod agreement (R² = 0.9585). Both methods showed a significantly lower discrepancy with pathologically measured necrosis (residual standard error [RSE] = 6.38 and 6.33 for quantitative EASL and quantitative ADC, respectively) when compared with non-3D techniques (RSE = 12.18 for visual assessment). Conclusion: This radiologic-pathologic correlation study demonstrates the diagnostic accuracy of 3D quantitative MR imaging techniques in identifying pathologically measured tumor necrosis in HCC lesions treated with TACE. © RSNA, 2014. Online supplemental material is available for this article. PMID:25028783
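To make the correlation analysis concrete, the sketch below regresses a radiologic necrosis estimate against pathologic necrosis and reports R² and the residual standard error (RSE); the percentage values are invented, not the study's data.

```python
import numpy as np

# Hypothetical paired measurements: % necrosis from 3D quantitative MR vs pathology.
mr_necrosis = np.array([15, 30, 45, 60, 75, 85, 95, 100], dtype=float)
path_necrosis = np.array([10, 28, 50, 58, 72, 88, 97, 100], dtype=float)

slope, intercept = np.polyfit(mr_necrosis, path_necrosis, 1)
predicted = slope * mr_necrosis + intercept
residuals = path_necrosis - predicted

ss_res = np.sum(residuals**2)
ss_tot = np.sum((path_necrosis - path_necrosis.mean())**2)
r_squared = 1 - ss_res / ss_tot
rse = np.sqrt(ss_res / (len(path_necrosis) - 2))   # residual standard error, 2 fitted params

print(f"R² = {r_squared:.4f}, RSE = {rse:.2f} percentage points")
```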
New technologies are creating the potential for using nucleic acid sequence detection to perform routine microbiological analyses of environmental samples. Our laboratory has recently reported on the development of a method for the quantitative detection of Stachybotrys chartarum...
Tomasino, Stephen F; Hamilton, Martin A
2007-01-01
Two quantitative carrier-based test methods for determining the efficacy of liquid sporicides and sterilants on a hard surface, the Standard Quantitative Carrier Test Method-ASTM E 2111-00 and an adaptation of a quantitative micro-method as reported by Sagripanti and Bonifacino, were compared in this study. The methods were selected based on their desirable characteristics (e.g., well-developed protocol, previous use with spores, fully quantitative, and use of readily available equipment) for testing liquid sporicides and sterilants on a hard surface. In this paper, the Sagripanti-Bonifacino procedure is referred to as the Three Step Method (TSM). AOAC Official Method 966.04 was included in this study as a reference method. Three laboratories participated in the evaluation. Three chemical treatments were tested: (1) 3000 ppm sodium hypochlorite with pH adjusted to 7.0, (2) a hydrogen peroxide/peroxyacetic acid product, and (3) 3000 ppm sodium hypochlorite with pH unadjusted (pH of approximately 10.0). A fourth treatment, 6000 ppm sodium hypochlorite solution with pH adjusted to 7.0, was included only for Method 966.04 as a positive control (high level of efficacy). The contact time was 10 min for all chemical treatments except the 6000 ppm sodium hypochlorite treatment which was tested at 30 min. Each chemical treatment was tested 3 times using each of the methods. Only 2 of the laboratories performed the AOAC method. Method performance was assessed by the within-laboratory variance, between-laboratory variance, and total variance associated with the log reduction (LR) estimates generated by each quantitative method. The quantitative methods performed similarly, and the LR values generated by each method were not statistically different for the 3 treatments evaluated. Based on feedback from the participating laboratories, compared to the TSM, ASTM E 2111-00 was more resource demanding and required more set-up time. The logistical and resource concerns identified for ASTM E 2111-00 were largely associated with the filtration process and counting bacterial colonies on filters. Thus, the TSM was determined to be the most suitable method.
Quantitative electromyography in ambulatory boys with Duchenne muscular dystrophy.
Verma, Sumit; Lin, Jenny; Travers, Curtis; McCracken, Courtney; Shah, Durga
2017-12-01
This study's objective was to evaluate quantitative electromyography (QEMG) using multiple-motor-unit (multi-MUP) analysis in Duchenne muscular dystrophy (DMD). Ambulatory DMD boys, aged 5-15 years, were evaluated with QEMG at 6-month intervals over 14 months. EMG was performed in the right biceps brachii (BB) and tibialis anterior (TA) muscles. Normative QEMG data were obtained from age-matched healthy boys. Wilcoxon signed-rank tests were performed. Eighteen DMD subjects were enrolled, with a median age of 7 (interquartile range 7-10) years. Six-month evaluations were performed on 14 subjects. QEMG showed significantly abnormal mean MUP duration in BB and TA muscles, with no significant change over 6 months. QEMG is a sensitive electrophysiological marker of myopathy in DMD. Preliminary data do not reflect a significant change in MUP parameters over a 6-month interval; long-term follow-up QEMG studies are needed to understand its role as a biomarker for disease progression. Muscle Nerve 56: 1361-1364, 2017. © 2017 Wiley Periodicals, Inc.
Wolak, Arik; Slomka, Piotr J; Fish, Mathews B; Lorenzo, Santiago; Berman, Daniel S; Germano, Guido
2008-06-01
Attenuation correction (AC) for myocardial perfusion SPECT (MPS) had not been evaluated separately in women despite specific considerations in this group because of breast photon attenuation. We aimed to evaluate the performance of AC in women by using automated quantitative analysis of MPS to avoid any bias. Consecutive female patients--134 with a low likelihood (LLk) of coronary artery disease (CAD) and 114 with coronary angiography performed within less than 3 mo of MPS--who were referred for rest-stress electrocardiography-gated 99mTc-sestamibi MPS with AC were considered. Imaging data were evaluated for contour quality control. An additional 50 LLk studies in women were used to create equivalent normal limits for studies with AC and with no correction (NC). An experienced technologist unaware of the angiography and other results performed the contour quality control. All other processing was performed in a fully automated manner. Quantitative analysis was performed with the Cedars-Sinai myocardial perfusion analysis package. All automated segmental analyses were performed with the 17-segment, 5-point American Heart Association model. Summed stress scores (SSS) of > or =3 were considered abnormal. CAD (> or =70% stenosis) was present in 69 of 114 patients (60%). The normalcy rates were 93% for both NC and AC studies. The SSS for patients with CAD and without CAD for NC versus AC were 10.0 +/- 9.0 (mean +/- SD) versus 10.2 +/- 8.5 and 1.6 +/- 2.3 versus 1.8 +/- 2.5, respectively; P was not significant (NS) for all comparisons of NC versus AC. The SSS for LLk patients for NC versus AC were 0.51 +/- 1.0 versus 0.6 +/- 1.1, respectively; P was NS. The specificity for both NC and AC was 73%. The sensitivities for NC and AC were 80% and 81%, respectively, and the accuracies for NC and AC were 77% and 78%, respectively; P was NS for both comparisons. There are no significant diagnostic differences between automated quantitative MPS analyses performed in studies processed with and without AC in women.
Glover, William A; Atienza, Ederlyn E; Nesbitt, Shannon; Kim, Woo J; Castor, Jared; Cook, Linda; Jerome, Keith R
2016-01-01
Quantitative DNA detection of cytomegalovirus (CMV) and BK virus (BKV) is critical in the management of transplant patients. Quantitative laboratory-developed procedures for CMV and BKV have been described in which much of the processing is automated, resulting in rapid, reproducible, and high-throughput testing of transplant patients. To increase the efficiency of such assays, the performance and stability of four commercial preassembled frozen fast qPCR master mixes (Roche FastStart Universal Probe Master Mix with Rox, Bio-Rad SsoFast Probes Supermix with Rox, Life Technologies TaqMan FastAdvanced Master Mix, and Life Technologies Fast Universal PCR Master Mix), in combination with in-house designed primers and probes, was evaluated using controls and standards from standard CMV and BK assays. A subsequent parallel evaluation using patient samples was performed comparing the performance of freshly prepared assay mixes versus aliquoted frozen master mixes made with two of the fast qPCR mixes (Life Technologies TaqMan FastAdvanced Master Mix, and Bio-Rad SsoFast Probes Supermix with Rox), chosen based on their performance and compatibility with existing PCR cycling conditions. The data demonstrate that the frozen master mixes retain excellent performance over a period of at least 10 weeks. During the parallel testing using clinical specimens, no difference in quantitative results was observed between the preassembled frozen master mixes and freshly prepared master mixes. Preassembled fast real-time qPCR frozen master mixes perform well and represent an additional strategy laboratories can implement to reduce assay preparation times, and to minimize technical errors and effort necessary to perform clinical PCR. © 2015 Wiley Periodicals, Inc.
An Examination of the (Un)Intended Consequences of Performance Funding in Higher Education
ERIC Educational Resources Information Center
Umbricht, Mark R.; Fernandez, Frank; Ortagus, Justin C.
2017-01-01
Previous studies have shown that state performance funding policies do not increase baccalaureate degree production, but higher education scholarship lacks a rigorous, quantitative analysis of the unintended consequences of performance funding. In this article, we use difference-in-differences estimation with fixed effects to evaluate performance…
Nondestructive Evaluation for Aerospace Composites
NASA Technical Reports Server (NTRS)
Leckey, Cara; Cramer, Elliott; Perey, Daniel
2015-01-01
Nondestructive evaluation (NDE) techniques are important for enabling NASA's missions in space exploration and aeronautics. The expanded and continued use of composite materials for aerospace components and vehicles leads to a need for advanced NDE techniques capable of quantitatively characterizing damage in composites. Quantitative damage detection techniques help to ensure safety, reliability and durability of space and aeronautic vehicles. This presentation will give a broad outline of NASA's range of technical work and an overview of the NDE research performed in the Nondestructive Evaluation Sciences Branch at NASA Langley Research Center. The presentation will focus on ongoing research in the development of NDE techniques for composite materials and structures, including development of automated data processing tools to turn NDE data into quantitative location and sizing results. Composites focused NDE research in the areas of ultrasonics, thermography, X-ray computed tomography, and NDE modeling will be discussed.
Assessing deep and shallow learning methods for quantitative prediction of acute chemical toxicity.
Liu, Ruifeng; Madore, Michael; Glover, Kyle P; Feasel, Michael G; Wallqvist, Anders
2018-05-02
Animal-based methods for assessing chemical toxicity are struggling to meet testing demands. In silico approaches, including machine-learning methods, are promising alternatives. Recently, deep neural networks (DNNs) were evaluated and reported to outperform other machine-learning methods for quantitative structure-activity relationship modeling of molecular properties. However, most of the reported performance evaluations relied on global performance metrics, such as the root mean squared error (RMSE) between the predicted and experimental values of all samples, without considering the impact of sample distribution across the activity spectrum. Here, we carried out an in-depth analysis of DNN performance for quantitative prediction of acute chemical toxicity using several datasets. We found that the overall performance of DNN models on datasets of up to 30,000 compounds was similar to that of random forest (RF) models, as measured by the RMSE and correlation coefficients between the predicted and experimental results. However, our detailed analyses demonstrated that global performance metrics are inappropriate for datasets with a highly uneven sample distribution, because they show a strong bias for the most populous compounds along the toxicity spectrum. For highly toxic compounds, DNN and RF models trained on all samples performed much worse than the global performance metrics indicated. Surprisingly, our variable nearest neighbor method, which utilizes only structurally similar compounds to make predictions, performed reasonably well, suggesting that information of close near neighbors in the training sets is a key determinant of acute toxicity predictions.
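As a sketch of why a global RMSE can mask poor performance on the rare, highly toxic end of the activity spectrum, the code below compares an overall RMSE with per-stratum RMSEs on an imbalanced toy dataset; the toxicity values and strata are fabricated for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fabricated toxicity values: many low-toxicity compounds, few highly toxic ones,
# with larger, systematic prediction errors on the toxic tail.
y_low = rng.normal(2.0, 0.3, size=950)                 # abundant, low-toxicity compounds
y_high = rng.normal(5.0, 0.3, size=50)                 # rare, highly toxic compounds
pred_low = y_low + rng.normal(0.0, 0.3, size=950)
pred_high = y_high + rng.normal(-1.0, 0.6, size=50)    # systematic underprediction

y = np.concatenate([y_low, y_high])
pred = np.concatenate([pred_low, pred_high])

def rmse(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

print("global RMSE:       ", round(rmse(y, pred), 2))
print("low-toxicity RMSE: ", round(rmse(y_low, pred_low), 2))
print("high-toxicity RMSE:", round(rmse(y_high, pred_high), 2))
```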
Wavelet-Based Visible and Infrared Image Fusion: A Comparative Study
Sappa, Angel D.; Carvajal, Juan A.; Aguilera, Cristhian A.; Oliveira, Miguel; Romero, Dennis; Vintimilla, Boris X.
2016-01-01
This paper evaluates different wavelet-based cross-spectral image fusion strategies adopted to merge visible and infrared images. The objective is to find the best setup independently of the evaluation metric used to measure the performance. Quantitative performance results are obtained with state-of-the-art approaches together with adaptations proposed in the current work. The options evaluated in the current work result from the combination of different setups in the wavelet image decomposition stage together with different fusion strategies for the final merging stage that generates the resulting representation. Most approaches evaluate results according to the application for which they are intended; sometimes a human observer is selected to judge the quality of the obtained results. In the current work, quantitative values are considered in order to find correlations between setups and the performance of the obtained results; these correlations can be used to define a criterion for selecting the best fusion strategy for a given pair of cross-spectral images. The whole procedure is evaluated with a large set of correctly registered visible and infrared image pairs, including both Near InfraRed (NIR) and Long Wave InfraRed (LWIR). PMID:27294938
Wu, Cheng-Ching; Lin, Hung-Yu; Wang, Chao-Ping; Lu, Li-Fen; Yu, Teng-Hung; Hung, Wei-Chin; Houng, Jer-Yiing; Chung, Fu-Mei; Lee, Yau-Jiunn; Hu, Jin-Jia
2015-11-03
Prostate cancer remains the most common cancer in men. Qualitative or semi-quantitative immunochromatographic measurements of prostate-specific antigen (PSA) have been shown to be simple, noninvasive and feasible. The aim of this study was to evaluate an optimized gold immunochromatographic strip device for the detection of PSA, in which the results can be analysed using a Chromogenic Rapid Test Reader to quantitatively assess the test results. This reader measures the reflectance of the signal line via a charge-coupled device camera. For quantitative analysis, PSA concentration was computed via a calibration equation. Capillary blood samples from 305 men were evaluated, and two independent observers interpreted the test results after 12 min. Blood samples were also collected and tested with a conventional quantitative assay. Sensitivity, specificity, positive and negative predictive values, and accuracy of the PSA rapid quantitative test system were 100%, 96.6%, 89.5%, 100%, and 97.4%, respectively. Reproducibility of the test was 99.2%, and interobserver variation was 8%, with a false-positive rate of 3.4%. The correlation coefficient between the ordinary quantitative assay and the rapid quantitative test was 0.960. The PSA rapid quantitative test system provided results quickly and was easy to use, so tests using this system can be easily performed at outpatient clinics or elsewhere. This system may also be useful for initial cancer screening and for point-of-care testing, because results can be obtained within 12 min and at a cost lower than that of conventional quantitative assays.
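For readers wanting to reproduce this style of summary, the following minimal sketch shows how sensitivity, specificity, predictive values and accuracy follow from a 2x2 confusion matrix; the counts are illustrative only (the abstract reports the derived percentages, not the underlying table).

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, NPV and accuracy from a 2x2 table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    acc = (tp + tn) / (tp + fp + fn + tn)
    return sens, spec, ppv, npv, acc

# Illustrative counts only; the abstract reports percentages, not the raw table.
metrics = diagnostic_metrics(tp=34, fp=4, fn=0, tn=112)
print([round(100 * v, 1) for v in metrics])  # [100.0, 96.6, 89.5, 100.0, 97.3]
```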
48 CFR 15.305 - Proposal evaluation.
Code of Federal Regulations, 2011 CFR
2011-10-01
... comparative assessment of past performance information is separate from the responsibility determination... requirements; and (ii) A summary, matrix, or quantitative ranking, along with appropriate supporting narrative...
48 CFR 15.305 - Proposal evaluation.
Code of Federal Regulations, 2010 CFR
2010-10-01
... comparative assessment of past performance information is separate from the responsibility determination... requirements; and (ii) A summary, matrix, or quantitative ranking, along with appropriate supporting narrative...
48 CFR 15.305 - Proposal evaluation.
Code of Federal Regulations, 2014 CFR
2014-10-01
... comparative assessment of past performance information is separate from the responsibility determination... requirements; and (ii) A summary, matrix, or quantitative ranking, along with appropriate supporting narrative...
48 CFR 15.305 - Proposal evaluation.
Code of Federal Regulations, 2012 CFR
2012-10-01
... comparative assessment of past performance information is separate from the responsibility determination... requirements; and (ii) A summary, matrix, or quantitative ranking, along with appropriate supporting narrative...
48 CFR 15.305 - Proposal evaluation.
Code of Federal Regulations, 2013 CFR
2013-10-01
... comparative assessment of past performance information is separate from the responsibility determination... requirements; and (ii) A summary, matrix, or quantitative ranking, along with appropriate supporting narrative...
Tao, Lingyan; Zhang, Qing; Wu, Yongjiang; Liu, Xuesong
2016-12-01
In this study, a fast and effective high-performance liquid chromatography method was developed to obtain a fingerprint chromatogram and simultaneous quantitative analysis of four index components (gallic acid, chlorogenic acid, albiflorin and paeoniflorin) of the traditional Chinese medicine Moluodan Concentrated Pill. The method was performed on a Waters X-bridge C18 reversed-phase column on an Agilent 1200S high-performance liquid chromatography system coupled with diode array detection. The mobile phase was composed of 20 mmol/L phosphate solution and acetonitrile at a flow rate of 1 mL/min, with a detection temperature of 30°C and a UV detection wavelength of 254 nm. After the methodology validation, 16 batches of Moluodan Concentrated Pill were analyzed by this high-performance liquid chromatography method, and both qualitative and quantitative evaluation results were achieved by similarity analysis, principal component analysis and hierarchical cluster analysis. The results of these three chemometric methods were in good agreement and all indicated that batch 10 and batch 16 showed significant differences from the other 14 batches. This suggested that the developed high-performance liquid chromatography method could be applied in the quality evaluation of Moluodan Concentrated Pill. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Liu, L; Kan, A; Leckie, C; Hodgkin, P D
2017-04-01
Time-lapse fluorescence microscopy is a valuable technology in cell biology, but it suffers from the inherent problem of intensity inhomogeneity due to uneven illumination or camera nonlinearity, known as shading artefacts. This will lead to inaccurate estimates of single-cell features such as average and total intensity. Numerous shading correction methods have been proposed to remove this effect. In order to compare the performance of different methods, many quantitative performance measures have been developed. However, there is little discussion about which performance measure should be generally applied for evaluation on real data, where the ground truth is absent. In this paper, the state-of-the-art shading correction methods and performance evaluation methods are reviewed. We implement 10 popular shading correction methods on two artificial datasets and four real ones. In order to make an objective comparison between those methods, we employ a number of quantitative performance measures. Extensive validation demonstrates that the coefficient of joint variation (CJV) is the most applicable measure in time-lapse fluorescence images. Based on this measure, we have proposed a novel shading correction method that performs better compared to well-established methods for a range of real data tested. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
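As a concrete illustration of the recommended measure, the sketch below computes the coefficient of joint variation as it is commonly defined for inhomogeneity assessment, CJV = (sigma_A + sigma_B) / |mu_A - mu_B| for two reference regions; the image and region masks are synthetic placeholders, not the datasets used in the paper.

```python
import numpy as np

def cjv(image, mask_a, mask_b):
    """Coefficient of joint variation between two reference regions."""
    a, b = image[mask_a], image[mask_b]
    return (a.std() + b.std()) / abs(a.mean() - b.mean())

rng = np.random.default_rng(1)
img = rng.normal(100.0, 5.0, (256, 256))      # synthetic background
img[64:128, 64:128] += 50.0                   # bright "foreground" patch
fg = np.zeros(img.shape, dtype=bool)
fg[64:128, 64:128] = True
print("CJV:", round(float(cjv(img, fg, ~fg)), 3))
```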
Wu, Suo-Wei; Chen, Tong; Pan, Qi; Wei, Liang-Yu; Wang, Qin; Li, Chao; Song, Jing-Chen; Luo, Ji
2018-06-05
The development and application of medical technologies reflect the medical quality and clinical capacity of a hospital, and are an effective means of upgrading medical services and core competitiveness among medical institutions. This study aimed to build a quantitative medical technology evaluation system, based on a questionnaire survey within medical institutions, to assess medical technologies more objectively and accurately, improve the management of medical technologies and quality, and ensure the safety of clinical operations in hospitals. A two-level quantitative medical technology evaluation system was built through a two-round questionnaire survey of chosen experts. The Delphi method was applied to identify the structure of the evaluation system and its indicators. The experts' judgments on the indicators were used to build the judgment matrix, from which the weight coefficients, maximum eigenvalue (λmax), consistency index (CI), and random consistency ratio (CR) were obtained. The results were verified through consistency tests, and the weight coefficient of each indicator was calculated through the analytical hierarchy process. Twenty-six experts from different medical fields were involved in the questionnaire survey, 25 of whom completed both rounds. Altogether, 4 primary indicators (safety, effectiveness, innovativeness, and benefits) and 13 secondary indicators were included in the evaluation system. A matrix was built to obtain the λmax, CI, and CR for each expert in the survey; the weight coefficients of the primary indicators were 0.33, 0.28, 0.27, and 0.12, respectively, and the weight coefficients of the secondary indicators were calculated accordingly. With the two-round expert survey and statistical analysis performed, and the credibility of the results verified through consistency tests, the study established a quantitative medical technology evaluation system model and assessment indicators for medical institutions based on the Delphi method and the analytical hierarchy process. Further verification, adjustment, and optimization of the system and indicators will be performed in follow-up studies.
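The consistency check described above can be sketched as follows; the pairwise comparison matrix is invented for illustration and Saaty's standard random indices are assumed, so this is not the experts' actual data.

```python
import numpy as np

RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}  # Saaty's random consistency indices

def ahp_consistency(A):
    """Priority weights, lambda_max, CI and CR from a pairwise comparison matrix."""
    n = A.shape[0]
    vals, vecs = np.linalg.eig(A)
    k = int(np.argmax(vals.real))
    w = np.abs(vecs[:, k].real)
    w = w / w.sum()                     # priority weights (principal eigenvector)
    lam_max = float(vals[k].real)
    ci = (lam_max - n) / (n - 1)        # consistency index
    cr = ci / RI[n]                     # consistency ratio (acceptable if < 0.1)
    return w, lam_max, ci, cr

A = np.array([[1,   2,   2,   3],
              [1/2, 1,   1,   3],
              [1/2, 1,   1,   2],
              [1/3, 1/3, 1/2, 1]], dtype=float)   # illustrative judgments, 4 primary indicators
w, lam, ci, cr = ahp_consistency(A)
print("weights:", w.round(3), " lambda_max:", round(lam, 3), " CI:", round(ci, 3), " CR:", round(cr, 3))
```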
Buelow, Daelynn; Sun, Yilun; Tang, Li; Gu, Zhengming; Pounds, Stanley; Hayden, Randall
2016-07-01
Monitoring of Epstein-Barr virus (EBV) load in immunocompromised patients has become integral to their care. An increasing number of reagents are available for quantitative detection of EBV; however, there are few published comparative data. Four real-time PCR systems (one using laboratory-developed reagents and three using analyte-specific reagents) were compared with one another for detection of EBV from whole blood. Whole blood specimens seeded with EBV were used to determine quantitative linearity, analytical measurement range, lower limit of detection, and coefficient of variation (CV) for each assay. Retrospective testing of 198 clinical samples was performed in parallel with all methods; results were compared to determine relative quantitative and qualitative performance. All assays showed similar performance. No significant difference was found in limit of detection (3.12-3.49 log10 copies/mL; P = 0.37). Strong qualitative agreement was seen among all assays on clinical samples (positive detection rates of 89.5%-95.8%). Quantitative correlation of clinical samples across assays was also seen in pairwise regression analysis, with R(2) ranging from 0.83 to 0.95. Normalizing clinical sample results to IU/mL did not alter the quantitative correlation between assays. Quantitative EBV detection by real-time PCR can be performed over a wide linear dynamic range, using three different commercially available reagents and laboratory-developed methods. EBV was detected with comparable sensitivity and quantitative correlation for all assays. Copyright © 2016 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.
Image processing system performance prediction and product quality evaluation
NASA Technical Reports Server (NTRS)
Stein, E. K.; Hammill, H. B. (Principal Investigator)
1976-01-01
The author has identified the following significant results. A new technique for image processing system performance prediction and product quality evaluation was developed. It was entirely objective, quantitative, and general, and should prove useful in system design and quality control. The technique and its application to determination of quality control procedures for the Earth Resources Technology Satellite NASA Data Processing Facility are described.
2017-01-01
Technological developments and greater rigor in the quantitative measurement of biological features in medical images have given rise to an increased interest in using quantitative imaging biomarkers (QIBs) to measure changes in these features. Critical to the performance of a QIB in preclinical or clinical settings are three primary metrology areas of interest: measurement linearity and bias, repeatability, and the ability to consistently reproduce equivalent results when conditions change, as would be expected in any clinical trial. Unfortunately, performance studies to date differ greatly in design, analysis methods, and the metrics used to assess a QIB for clinical use. It is therefore difficult or impossible to integrate results from different studies or to use reported results to design studies. The Radiological Society of North America (RSNA) and the Quantitative Imaging Biomarkers Alliance (QIBA), with technical, radiological, and statistical experts, developed a set of technical performance analysis methods, metrics, and study designs that provide terminology, metrics, and methods consistent with widely accepted metrological standards. This document provides a consistent framework for the conduct and evaluation of QIB performance studies so that results from multiple studies can be compared, contrasted, or combined. PMID:24919831
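As one example of the kind of metric such a framework standardizes, the sketch below computes test-retest repeatability statistics (within-subject SD, within-subject CV, and the repeatability coefficient RC = 2.77 * wSD) under the usual metrology conventions; the paired measurements are synthetic.

```python
import numpy as np

def repeatability(m1, m2):
    """wSD, wCV and repeatability coefficient from paired test-retest measurements."""
    m1, m2 = np.asarray(m1, float), np.asarray(m2, float)
    d = m1 - m2
    wsd = np.sqrt(np.mean(d ** 2) / 2.0)       # within-subject SD from replicate pairs
    wcv = wsd / np.mean(np.concatenate([m1, m2]))
    rc = 2.77 * wsd                            # difference exceeded by only ~5% of replicate pairs
    return wsd, wcv, rc

test   = [10.2, 14.8, 9.5, 21.0, 12.3]   # synthetic QIB values, first measurement
retest = [10.9, 14.1, 10.2, 19.8, 12.0]  # second measurement under identical conditions
wsd, wcv, rc = repeatability(test, retest)
print(f"wSD = {wsd:.2f}   wCV = {wcv:.1%}   RC = {rc:.2f}")
```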
Leithner, Doris; Mahmoudi, Scherwin; Wichmann, Julian L; Martin, Simon S; Lenga, Lukas; Albrecht, Moritz H; Booz, Christian; Arendt, Christophe T; Beeres, Martin; D'Angelo, Tommaso; Bodelle, Boris; Vogl, Thomas J; Scholtz, Jan-Erik
2018-02-01
To investigate the impact of traditional (VMI) and noise-optimized virtual monoenergetic imaging (VMI+) algorithms on quantitative and qualitative image quality, and on the assessment of stenosis in carotid and intracranial dual-energy CTA (DE-CTA). DE-CTA studies of 40 patients performed on a third-generation 192-slice dual-source CT scanner were included in this retrospective study. 120-kVp image-equivalent linearly-blended, VMI and VMI+ series were reconstructed. Quantitative analysis included evaluation of contrast-to-noise ratios (CNR) of the aorta, common carotid artery, internal carotid artery, middle cerebral artery, and basilar artery. VMI and VMI+ with highest CNR, and linearly-blended series were rated qualitatively. Three radiologists assessed artefacts and suitability for evaluation at shoulder height, carotid bifurcation, siphon, and intracranial levels using 5-point Likert scales. Detection and grading of stenosis were performed at the carotid bifurcation and siphon. Highest CNR values were observed for 40-keV VMI+ compared to 65-keV VMI and linearly-blended images (P < 0.001). Artefacts were low in all qualitatively assessed series, with excellent suitability for supraaortic artery evaluation at shoulder and bifurcation height. Suitability was significantly higher in VMI+ and VMI compared to linearly-blended images for intracranial and ICA assessment (P < 0.002). VMI and VMI+ showed excellent agreement for detection and grading of stenosis at the carotid bifurcation and siphon, with no differences in diagnostic performance. 40-keV VMI+ showed improved quantitative image quality compared to 65-keV VMI and linearly-blended series in supraaortic DE-CTA. VMI and VMI+ provided increased suitability for carotid and intracranial artery evaluation with excellent assessment of stenosis, but this did not translate into increased diagnostic performance. Copyright © 2017 Elsevier B.V. All rights reserved.
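A minimal sketch of the contrast-to-noise ratio used in such quantitative comparisons, under one common definition (ROI mean minus background mean, divided by background noise SD); the attenuation values are invented, not taken from the study.

```python
def cnr(roi_mean, background_mean, noise_sd):
    """Contrast-to-noise ratio under one common definition."""
    return (roi_mean - background_mean) / noise_sd

# Hypothetical attenuation values (HU) for two reconstructions of the same vessel.
print("40-keV VMI+      :", round(cnr(roi_mean=950.0, background_mean=70.0, noise_sd=30.0), 1))
print("linearly blended :", round(cnr(roi_mean=420.0, background_mean=60.0, noise_sd=18.0), 1))
```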
Sánchez Gómez, Serafín; Ostos, Elisa María Cabot; Solano, Juan Manuel Maza; Salado, Tomás Francisco Herrero
2013-05-06
We evaluated a newly designed electronic portfolio (e-Portfolio) that provided quantitative evaluation of surgical skills. Medical students at the University of Seville used the e-Portfolio on a voluntary basis for evaluation of their performance in undergraduate surgical subjects. Our new web-based e-Portfolio was designed to evaluate surgical practical knowledge and skills targets. Students recorded each activity on a form, attached evidence, and added their reflections. Students self-assessed their practical knowledge using qualitative criteria (yes/no), and graded their skills according to complexity (basic/advanced) and participation (observer/assistant/independent). A numerical value was assigned to each activity, and the values of all activities were summated to obtain the total score. The application automatically displayed quantitative feedback. We performed qualitative evaluation of the perceived usefulness of the e-Portfolio and quantitative evaluation of the targets achieved. Thirty-seven of 112 students (33%) used the e-Portfolio, of whom 87% reported that they understood the methodology of the portfolio. All students reported an improved understanding of their learning objectives resulting from the numerical visualization of progress, all students reported that the quantitative feedback encouraged their learning, and 79% of students felt that their teachers were more available because they were using the e-Portfolio. Only 51.3% of students reported that the reflective aspects of learning were useful. Individual students achieved a maximum of 65% of the total targets and 87% of the skills targets. The mean total score was 345 ± 38 points. For basic skills, 92% of students achieved the maximum score for participation as an independent operator, and all achieved the maximum scores for participation as an observer and assistant. For complex skills, 62% of students achieved the maximum score for participation as an independent operator, and 98% achieved the maximum scores for participation as an observer or assistant. Medical students reported that use of an electronic portfolio that provided quantitative feedback on their progress was useful when the number and complexity of targets were appropriate, but not when the portfolio offered only formative evaluations based on reflection. Students felt that use of the e-Portfolio guided their learning process by indicating knowledge gaps to themselves and teachers.
The BEACH Act of 2000 directed the U.S. EPA to establish more expeditious methods for the detection of pathogen indicators in coastal waters, as well as new water quality criteria based on these methods. Progress has been made in developing a quantitative PCR (qPCR) method for en...
Tendency for interlaboratory precision in the GMO analysis method based on real-time PCR.
Kodama, Takashi; Kurosawa, Yasunori; Kitta, Kazumi; Naito, Shigehiro
2010-01-01
The Horwitz curve estimates interlaboratory precision as a function only of concentration, and is frequently used as a method performance criterion in food analysis with chemical methods. Quantitative biochemical methods based on real-time PCR require an analogous criterion to progressively promote method validation. We analyzed the tendency of precision using a simplex real-time PCR technique in 53 collaborative studies of seven genetically modified (GM) crops. The reproducibility standard deviation (SR) and repeatability standard deviation (Sr) of the genetically modified organism (GMO) amount (%) were more or less independent of the GM crop (i.e., maize, soybean, cotton, oilseed rape, potato, sugar beet, and rice) and of the evaluation procedure steps. Some studies evaluated whole steps consisting of DNA extraction and PCR quantitation, whereas others focused only on the PCR quantitation step by using DNA extraction solutions. Therefore, SR and Sr for the GMO amount (%) are functions only of concentration, similar to the Horwitz curve. We proposed SR = 0.1971C^0.8685 and Sr = 0.1478C^0.8424, where C is the GMO amount (%). We also proposed a method performance index for GMO quantitative methods that is analogous to the Horwitz Ratio.
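Using the precision functions proposed in the abstract, a HorRat-style performance index can be sketched as the ratio of an observed reproducibility SD to the predicted one; the observed value below is illustrative.

```python
def predicted_SR(c):
    """Predicted reproducibility SD for a GMO amount c (%), per the proposed function."""
    return 0.1971 * c ** 0.8685

def predicted_Sr(c):
    """Predicted repeatability SD for a GMO amount c (%), per the proposed function."""
    return 0.1478 * c ** 0.8424

def performance_index(observed_SR, c):
    """HorRat-style index: observed reproducibility SD over predicted SR."""
    return observed_SR / predicted_SR(c)

c = 5.0  # GMO amount (%)
print("predicted SR:", round(predicted_SR(c), 3), " predicted Sr:", round(predicted_Sr(c), 3))
print("performance index (observed SR = 0.95):", round(performance_index(0.95, c), 2))
```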
Gomes, Perpétua; Palma, Ana Carolina; Cabanas, Joaquim; Abecasis, Ana; Carvalho, Ana Patrícia; Ziermann, Rainer; Diogo, Isabel; Gonçalves, Fátima; Lobo, Céu Sousa; Camacho, Ricardo
2006-08-01
Quantitation of HIV-1 RNA levels in plasma has undisputed prognostic value and is extremely important for evaluating response to antiretroviral therapy. The purpose of this study was to evaluate the performance of the real-time PCR COBAS TaqMan 48 analyser, comparing it to the existing VERSANT 3.0 (bDNA) assay for HIV-1 RNA quantitation in plasma of individuals infected with different HIV-1 subtypes (104 blood samples). A positive linear correlation between the two tests (r2 = 0.88) was found. Quantitation by the COBAS TaqMan assay was approximately 0.32 log10 higher than by bDNA. The relationship between the two assays was similar across all subtypes, with a Deming regression slope of <1 and mean differences of <0 on the Bland-Altman plots. Overall, no significant differences were found in plasma viral load quantitation of the different HIV-1 subtypes between the two assays; therefore, both assays are suitable for viral load quantitation of highly genetically diverse HIV-1 plasma samples.
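A rough sketch of this kind of between-assay comparison on log10-transformed viral loads, using ordinary least-squares regression and a Bland-Altman analysis (the paper itself used Deming regression); the paired measurements are simulated.

```python
import numpy as np

rng = np.random.default_rng(2)
bdna = rng.uniform(2.0, 6.0, 50)                       # log10 copies/mL, assay 1
taqman = bdna + 0.32 + rng.normal(0, 0.15, 50)         # assay 2, with a ~0.32 log10 offset

slope, intercept = np.polyfit(bdna, taqman, 1)         # ordinary least squares, not Deming
diff = taqman - bdna
bias, sd = diff.mean(), diff.std(ddof=1)
print(f"slope = {slope:.2f}, intercept = {intercept:.2f}")
print(f"bias = {bias:.2f} log10; 95% limits of agreement: "
      f"{bias - 1.96 * sd:.2f} to {bias + 1.96 * sd:.2f}")
```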
Doblas, Ana; Sánchez-Ortiga, Emilio; Martínez-Corral, Manuel; Saavedra, Genaro; Garcia-Sucerquia, Jorge
2014-04-01
The advantages of using a telecentric imaging system in digital holographic microscopy (DHM) to study biological specimens are highlighted. To this end, the performances of nontelecentric DHM and telecentric DHM are evaluated from the quantitative phase imaging (QPI) point of view. The evaluated stability of the microscope allows single-shot QPI in DHM by using telecentric imaging systems. Quantitative phase maps of a section of the head of the Drosophila melanogaster fly and of red blood cells are obtained via single-shot DHM with no numerical postprocessing. With these maps we show that the use of telecentric DHM provides a larger field of view for a given magnification and permits more accurate QPI measurements with fewer computational operations.
How to Perform a Systematic Review and Meta-analysis of Diagnostic Imaging Studies.
Cronin, Paul; Kelly, Aine Marie; Altaee, Duaa; Foerster, Bradley; Petrou, Myria; Dwamena, Ben A
2018-05-01
A systematic review is a comprehensive search, critical evaluation, and synthesis of all the relevant studies on a specific (clinical) topic that can be applied to the evaluation of diagnostic and screening imaging studies. It can be a qualitative or a quantitative (meta-analysis) review of available literature. A meta-analysis uses statistical methods to combine and summarize the results of several studies. In this review, a 12-step approach to performing a systematic review (and meta-analysis) is outlined under the four domains: (1) Problem Formulation and Data Acquisition, (2) Quality Appraisal of Eligible Studies, (3) Statistical Analysis of Quantitative Data, and (4) Clinical Interpretation of the Evidence. This review is specifically geared toward the performance of a systematic review and meta-analysis of diagnostic test accuracy (imaging) studies. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
Performance evaluation soil samples utilizing encapsulation technology
Dahlgran, J.R.
1999-08-17
Performance evaluation soil samples and method of their preparation uses encapsulation technology to encapsulate analytes which are introduced into a soil matrix for analysis and evaluation by analytical laboratories. Target analytes are mixed in an appropriate solvent at predetermined concentrations. The mixture is emulsified in a solution of polymeric film forming material. The emulsified solution is polymerized to form microcapsules. The microcapsules are recovered, quantitated and introduced into a soil matrix in a predetermined ratio to form soil samples with the desired analyte concentration. 1 fig.
Performance evaluation soil samples utilizing encapsulation technology
Dahlgran, James R.
1999-01-01
Performance evaluation soil samples and method of their preparation using encapsulation technology to encapsulate analytes which are introduced into a soil matrix for analysis and evaluation by analytical laboratories. Target analytes are mixed in an appropriate solvent at predetermined concentrations. The mixture is emulsified in a solution of polymeric film forming material. The emulsified solution is polymerized to form microcapsules. The microcapsules are recovered, quantitated and introduced into a soil matrix in a predetermined ratio to form soil samples with the desired analyte concentration.
Bashiri, Azadeh; Shahmoradi, Leila; Beigy, Hamid; Savareh, Behrouz A; Nosratabadi, Masood; N Kalhori, Sharareh R; Ghazisaeedi, Marjan
2018-06-01
Quantitative EEG gives valuable information in the clinical evaluation of psychological disorders. The purpose of the present study is to identify the most prominent features of quantitative electroencephalography (QEEG) that affect attention and response control parameters in children with attention deficit hyperactivity disorder. The QEEG features and the Integrated Visual and Auditory Continuous Performance Test (IVA-CPT) results of 95 attention deficit hyperactivity disorder subjects were preprocessed by the Independent Evaluation Criterion for Binary Classification. Then, the importance of the selected features in the classification of the desired outputs was evaluated using an artificial neural network. The findings uncovered the highest-ranking QEEG features for each IVA-CPT parameter related to attention and response control. The designed model could help therapists determine the presence or absence of defects in attention and response control based on QEEG.
Designing Electronic Performance Support Systems To Facilitate Learning.
ERIC Educational Resources Information Center
van Schaik, Paul; Pearson, Robert; Barker, Philip
2002-01-01
Discussion of electronic performance support systems (EPSS) focuses on an application within the domain of quantitative research methods as taught in a university psychology course. Describes the design and implementation of the system and outlines results of an evaluative study of the EPSS. (Author/LRW)
Wang, Lin; Wan, Cai-Feng; Du, Jing; Li, Feng-Hua
2018-04-15
The purpose of this study was to evaluate the application of a new elastographic technique, acoustic radiation force impulse (ARFI) imaging, and its diagnostic performance for characterizing breast lesions. One hundred consecutive female patients with 126 breast lesions were enrolled in our study. After routine breast ultrasound examinations, the patients underwent ARFI elasticity imaging. Virtual Touch tissue imaging (VTI) and Virtual Touch tissue quantification (Siemens Medical Solutions, Mountain View, CA) were used to qualitatively and quantitatively analyze the elasticity and hardness of tumors. A receiver operating characteristic curve analysis was performed to evaluate the diagnostic performance of ARFI for discrimination between benign and malignant breast lesions. Pathologic analysis revealed 40 lesions in the malignant group and 86 lesions in the benign group. Different VTI patterns were observed in benign and malignant breast lesions. Eighty (93.0%) of the benign lesions had pattern 1, 2, or 3, whereas all pattern 4b lesions (n = 20 [50.0%]) were malignant. Regarding the quantitative analysis, the mean VTI-to-B-mode area ratio, internal shear wave velocity, and marginal shear wave velocity of benign lesions were statistically significantly lower than those of malignant lesions (all P < .001). The cutoff point for a scoring system constructed to evaluate the diagnostic performance of ARFI was estimated to be between 3 and 4 points for malignancy, with sensitivity of 77.5%, specificity of 96.5%, accuracy of 90.5%, and an area under the curve of 0.933. The application of ARFI technology has shown promising results by noninvasively providing substantial complementary information and could potentially serve as an effective diagnostic tool for differentiation between benign and malignant breast lesions. © 2018 by the American Institute of Ultrasound in Medicine.
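The ROC-based choice of an operating point can be sketched as follows with synthetic scores: compute the AUC and pick the cutoff maximizing Youden's J (sensitivity + specificity - 1); scikit-learn is assumed to be available.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(3)
scores_benign = rng.normal(2.0, 1.0, 86)     # hypothetical scores for benign lesions
scores_malig = rng.normal(5.0, 1.5, 40)      # malignant lesions tend to score higher
y = np.r_[np.zeros(86), np.ones(40)]
s = np.r_[scores_benign, scores_malig]

fpr, tpr, thresholds = roc_curve(y, s)
j = tpr - fpr                                # Youden's J at each threshold
best = int(np.argmax(j))
print("AUC:", round(roc_auc_score(y, s), 3))
print("optimal cutoff:", round(float(thresholds[best]), 2),
      " sensitivity:", round(float(tpr[best]), 2),
      " specificity:", round(float(1 - fpr[best]), 2))
```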
Forecast and Warning Services of the National Weather Service
Introduction Quantitative precipitation future which is an active area of research currently. 2) Evaluate HPN performance for forecast periods
NASA Astrophysics Data System (ADS)
Medjoubi, K.; Dawiec, A.
2017-12-01
A simple method is proposed in this work for quantitative evaluation of the quality of the threshold adjustment and the flat-field correction of Hybrid Photon Counting (HPC) pixel detectors. This approach is based on the Photon Transfer Curve (PTC), i.e. the measurement of the standard deviation of the signal in flat-field images. Fixed-pattern noise (FPN), easily identifiable in the curve, is linked to the residual threshold dispersion, sensor inhomogeneity and the remnant errors of flat-fielding techniques. The analytical expression of the signal-to-noise ratio curve is developed for HPC detectors and successfully used as a fit function applied to experimental data obtained with the XPAD detector. The quantitative evaluation of the FPN, described by the photon response non-uniformity (PRNU), is measured for different configurations (threshold adjustment method and flat-fielding technique) and is used to evaluate the setting that gives the best image quality from a commercial or R&D detector.
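A hedged sketch of how fixed-pattern noise can be extracted from a photon transfer curve: total variance at each flat-field level is modeled as read noise squared, plus shot noise (equal to the signal in electrons), plus (PRNU * signal) squared, and PRNU is recovered by a linear least-squares fit; the flat-field statistics are simulated and this is not the analytical fit function derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
signal = np.linspace(100, 50000, 30)              # mean flat-field signal (e-)
read_var_true, prnu_true = 25.0, 0.01
var = read_var_true + signal + (prnu_true * signal) ** 2
var = var * rng.normal(1.0, 0.02, var.size)       # measurement scatter

# Fit read_noise^2 and PRNU^2 by linear least squares after subtracting the
# shot-noise term (equal to the mean signal in electrons).
A = np.vstack([np.ones_like(signal), signal ** 2]).T
coef, *_ = np.linalg.lstsq(A, var - signal, rcond=None)
print("fitted read noise (e-):", round(float(np.sqrt(max(coef[0], 0.0))), 2))
print("fitted PRNU          :", round(float(np.sqrt(max(coef[1], 0.0))), 4))
```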
Potyrailo, Radislav A; Chisholm, Bret J; Morris, William G; Cawse, James N; Flanagan, William P; Hassib, Lamyaa; Molaison, Chris A; Ezbiansky, Karin; Medford, George; Reitz, Hariklia
2003-01-01
Coupling of combinatorial chemistry methods with high-throughput (HT) performance testing and measurements of resulting properties has provided a powerful set of tools for the 10-fold accelerated discovery of new high-performance coating materials for automotive applications. Our approach replaces labor-intensive steps with automated systems for evaluation of adhesion of 8 x 6 arrays of coating elements that are discretely deposited on a single 9 x 12 cm plastic substrate. Performance of coatings is evaluated with respect to their resistance to adhesion loss, because this parameter is one of the primary considerations in end-use automotive applications. Our HT adhesion evaluation provides previously unavailable capabilities: high speed and reproducibility of testing through robotic automation, an expanded range of types of tested coatings through a coating tagging strategy, and improved quantitation through high signal-to-noise automatic imaging. Upon testing, the coatings undergo changes that are impossible to predict quantitatively using existing knowledge. Using our HT methodology, we have developed several coating leads. These HT screening results for the best coating compositions have been validated on the traditional scales of coating formulation and adhesion-loss testing. These validation results have confirmed the superb performance of combinatorially developed coatings over conventional coatings on the traditional scale.
Technology Efficacy in Active Prosthetic Knees for Transfemoral Amputees: A Quantitative Evaluation
El-Sayed, Amr M.; Abu Osman, Noor Azuan
2014-01-01
Several studies have presented technological ensembles of active knee systems for transfemoral prostheses. Other studies have examined amputees' gait performance while wearing a specific active prosthesis. This paper combines both insights, that is, a technical examination of the components used with an evaluation of how these improved the gait of their respective users. This study aims to offer a quantitative understanding of the potential enhancement derived from strategic integration of core elements in developing an effective device. The study systematically discusses the current technology in active transfemoral prostheses with respect to functional walking performance amongst above-knee amputee users, to evaluate each system's efficacy in producing close-to-normal user performance. The performances of the actuator, sensory system, and control technique incorporated in each reported system were evaluated separately, and numerical comparisons were conducted based on the percentage of amputees' gait deviation from normal gait profile points. The results identified particular components that contributed closest to normal gait parameters. However, the conclusions are of limited generalizability due to the small number of studies. Thus, more clinical validation of active prosthetic knee technology is needed to better understand the extent of contribution of each component to the most functional development. PMID:25110727
One of the top research needs identified at the September 2008 U.S. EPA International Technology Forum on Rehabilitation of Water and Wastewater Systems is to undertake a broad and quantitative retrospective evaluation of the performance of previously rehabilitated systems. Curr...
NASA Astrophysics Data System (ADS)
Matsui, Daichi; Ishii, Katsunori; Awazu, Kunio
2015-07-01
Atherosclerosis is a primary cause of critical ischemic diseases like heart infarction or stroke. A method that can provide detailed information about the stability of atherosclerotic plaques is required. We focused on spectroscopic techniques that could evaluate the chemical composition of lipid in plaques. A novel angioscope using multispectral imaging at wavelengths around 1200 nm for quantitative evaluation of atherosclerotic plaques was developed. The angioscope consists of a halogen lamp, an indium gallium arsenide (InGaAs) camera, 3 optical band pass filters transmitting wavelengths of 1150, 1200, and 1300 nm, an image fiber having 0.7 mm outer diameter, and an irradiation fiber which consists of 7 multimode fibers. Atherosclerotic plaque phantoms with 100, 60, 20 vol.% of lipid were prepared and measured by the multispectral angioscope. The acquired datasets were processed by spectral angle mapper (SAM) method. As a result, simulated plaque areas in atherosclerotic plaque phantoms that could not be detected by an angioscopic visible image could be clearly enhanced. In addition, quantitative evaluation of atherosclerotic plaque phantoms based on the lipid volume fractions was performed up to 20 vol.%. These results show the potential of a multispectral angioscope at wavelengths around 1200 nm for quantitative evaluation of the stability of atherosclerotic plaques.
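The spectral angle mapper step mentioned above can be illustrated with a small sketch: each pixel spectrum is compared with a reference lipid spectrum via the angle arccos(x.r / (|x||r|)) and classified by a threshold; the three-band reflectance values and the threshold are made-up placeholders.

```python
import numpy as np

def spectral_angle(x, r):
    """Angle between a pixel spectrum x and a reference spectrum r (radians)."""
    x, r = np.asarray(x, float), np.asarray(r, float)
    cos = np.dot(x, r) / (np.linalg.norm(x) * np.linalg.norm(r))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

reference_lipid = [0.62, 0.35, 0.58]    # hypothetical reflectance at 1150/1200/1300 nm
pixels = [[0.60, 0.36, 0.57],           # lipid-like pixel
          [0.55, 0.54, 0.56]]           # vessel-wall-like pixel
for p in pixels:
    ang = spectral_angle(p, reference_lipid)
    print(f"angle = {ang:.3f} rad ->", "plaque" if ang < 0.1 else "non-plaque")
```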
NASA Technical Reports Server (NTRS)
Takallu, M. A.; Wong, D. T.; Uenking, M. D.
2002-01-01
An experimental investigation was conducted to study the effectiveness of modern flight displays in general aviation cockpits for mitigating Low Visibility Loss of Control and Controlled Flight Into Terrain accidents. A total of 18 General Aviation (GA) pilots holding a private pilot, single-engine land rating, with no additional instrument training beyond private pilot license requirements, were recruited to evaluate three different display concepts in a fixed-base flight simulator at the NASA Langley Research Center's General Aviation Work Station. Evaluation pilots were asked to continue flight from Visual Meteorological Conditions (VMC) into Instrument Meteorological Conditions (IMC) while performing a series of 4 basic precision maneuvers. During the experiment, relevant pilot/vehicle performance variables, pilot control inputs and physiological data were recorded. Human factors questionnaires and interviews were administered after each scenario. Qualitative and quantitative data have been analyzed and the results are presented here. Pilot performance deviations from the established target values (errors) were computed and compared with the FAA Practical Test Standards. Results of the quantitative data indicate that evaluation pilots committed substantially fewer errors when using the Synthetic Vision Systems (SVS) displays than when they were using conventional instruments. Results of the qualitative data indicate that evaluation pilots perceived themselves to have a much higher level of situation awareness while using the SVS display concept.
Kennedy, Jacob J.; Whiteaker, Jeffrey R.; Schoenherr, Regine M.; Yan, Ping; Allison, Kimberly; Shipley, Melissa; Lerch, Melissa; Hoofnagle, Andrew N.; Baird, Geoffrey Stuart; Paulovich, Amanda G.
2016-01-01
Despite a clinical, economic, and regulatory imperative to develop companion diagnostics, precious few new biomarkers have been successfully translated into clinical use, due in part to inadequate protein assay technologies to support large-scale testing of hundreds of candidate biomarkers in formalin-fixed paraffin embedded (FFPE) tissues. While the feasibility of using targeted, multiple reaction monitoring-mass spectrometry (MRM-MS) for quantitative analyses of FFPE tissues has been demonstrated, protocols have not been systematically optimized for robust quantification across a large number of analytes, nor has the performance of peptide immuno-MRM been evaluated. To address this gap, we used a test battery approach coupled to MRM-MS with the addition of stable isotope labeled standard peptides (targeting 512 analytes) to quantitatively evaluate the performance of three extraction protocols in combination with three trypsin digestion protocols (i.e. 9 processes). A process based on RapiGest buffer extraction and urea-based digestion was identified to enable similar quantitation results from FFPE and frozen tissues. Using the optimized protocols for MRM-based analysis of FFPE tissues, median precision was 11.4% (across 249 analytes). There was excellent correlation between measurements made on matched FFPE and frozen tissues, both for direct MRM analysis (R2 = 0.94) and immuno-MRM (R2 = 0.89). The optimized process enables highly reproducible, multiplex, standardizable, quantitative MRM in archival tissue specimens. PMID:27462933
A model for evaluating the social performance of construction waste management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yuan Hongping, E-mail: hpyuan2005@gmail.com
Highlights: Scant attention is paid to the social performance of construction waste management (CWM). We develop a model for assessing the social performance of CWM. With the model, the social performance of CWM can be quantitatively simulated. - Abstract: Existing literature shows that much research effort has been devoted to the economic performance of construction waste management (CWM), but less attention is paid to investigation of the social performance of CWM. This study therefore attempts to develop a model for quantitatively evaluating the social performance of CWM by using a system dynamics (SD) approach. Firstly, major variables affecting the social performance of CWM are identified and a holistic system for assessing the social performance of CWM is formulated in line with feedback relationships underlying these variables. The developed system is then converted into an SD model through the software iThink. An empirical case study is finally conducted to demonstrate application of the model. Results of model validation indicate that the model is robust and reasonable in reflecting the situation of the real system under study. Findings of the case study offer helpful insights into effectively promoting the social performance of CWM of the project investigated. Furthermore, the model exhibits great potential to function as an experimental platform for dynamically evaluating the effects of management measures on improving the social performance of CWM in construction projects.
Student evaluations of teaching: teaching quantitative courses can be hazardous to one’s career
Uttl, Bob; Smibert, Dylan
2017-01-01
Anonymous student evaluations of teaching (SETs) are used by colleges and universities to measure teaching effectiveness and to make decisions about faculty hiring, firing, re-appointment, promotion, tenure, and merit pay. Although numerous studies have found that SETs correlate with various teaching effectiveness irrelevant factors (TEIFs) such as subject, class size, and grading standards, it has been argued that such correlations are small and do not undermine the validity of SETs as measures of professors’ teaching effectiveness. However, previous research has generally used inappropriate parametric statistics and effect sizes to examine and to evaluate the significance of TEIFs on personnel decisions. Accordingly, we examined the influence of quantitative vs. non-quantitative courses on SET ratings and SET-based personnel decisions using 14,872 publicly posted class evaluations where each evaluation represents a summary of SET ratings provided by individual students responding in each class. In total, 325,538 individual student evaluations from a US mid-size university contributed to these class evaluations. The results demonstrate that class subject (math vs. English) is strongly associated with SET ratings, has a substantial impact on professors being labeled satisfactory vs. unsatisfactory and excellent vs. non-excellent, and the impact varies substantially depending on the criteria used to classify professors as satisfactory vs. unsatisfactory. Professors teaching quantitative courses are far more likely not to receive tenure, promotion, and/or merit pay when their performance is evaluated against common standards. PMID:28503380
Souto Filho, João Tadeu D; Loureiro, Monique M; Pulcheri, Wolmar; Morais, José Carlos; Nucci, Marcio; Portugal, Rodrigo D
2015-07-25
Early assessment of response to chemotherapy in acute myeloid leukemia may be performed by examining bone marrow aspirate (BMA) or biopsy (BMB); a hypocellular bone marrow sample indicates adequate anti-leukemic activity. We sought to evaluate the quantitative and qualitative assessment of BMA performed on day 14 (D14) of chemotherapy, to verify the inter-observer agreement, to compare the results of BMA and BMB, and to evaluate the impact of D14 blast clearance on the overall survival (OS). A total of 107 patients who received standard induction chemotherapy and had bone marrow samples were included. BMA evaluation was performed by two observers using two methods: quantitative assessment and a qualitative (Likert) scale. ROC curves were obtained correlating the BMA quantification of blasts and the qualitative scale, by both observers, with BMB result as gold-standard. There was a significant agreement between the two observers in both the qualitative and quantitative assessments (Kw = 0.737, p < 0.001, and rs = 0.798, p < 0.001; ICC = 0.836, p < 0.001, respectively). The areas under the curve (AUC) were 0.924 and 0.946 for observer 1 and 0.867 and 0.870 for observer 2 for assessments of the percentage of blasts and qualitative scale, respectively. The best cutoff for blast percentage in BMA was 6% and 7% for observers 1 and 2, respectively. A similar analysis for the qualitative scale showed the best cutoff as "probably infiltrated". Patients who attained higher grades of cytoreduction on D14 had better OS. Evaluation of D14 BMA using both methods had a significant agreement with BMB and between observers, identifying a population of patients with poor outcome.
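A minimal sketch of the agreement statistics used here, with invented ratings: Cohen's linearly weighted kappa for the ordinal qualitative scale and Spearman correlation for the blast percentages; scikit-learn and SciPy are assumed to be available.

```python
from sklearn.metrics import cohen_kappa_score
from scipy.stats import spearmanr

# Invented ordinal (Likert) ratings of the same D14 aspirates by two observers.
obs1_likert = [1, 2, 2, 4, 5, 3, 1, 5, 4, 2]
obs2_likert = [1, 2, 3, 4, 5, 3, 2, 5, 4, 2]
print("weighted kappa:", round(cohen_kappa_score(obs1_likert, obs2_likert, weights="linear"), 3))

# Invented blast percentages estimated by the same two observers.
obs1_blasts = [2, 5, 6, 30, 60, 10, 3, 55, 25, 7]
obs2_blasts = [3, 5, 8, 28, 65, 12, 4, 50, 27, 6]
rho, _ = spearmanr(obs1_blasts, obs2_blasts)
print("Spearman rho:", round(float(rho), 3))
```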
48 CFR 245.603-70 - Contractor performance of plant clearance duties.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Government-furnished property from inventory schedules; (v) Evaluate physical, quantitative, and technical allocability of contractor inventory prior to disposal using Standard Form 1423, Inventory Verification Survey...
Developing medical educators – a mixed method evaluation of a teaching education program
Roos, Marco; Kadmon, Martina; Kirschfink, Michael; Koch, Eginhard; Jünger, Jana; Strittmatter-Haubold, Veronika; Steiner, Thorsten
2014-01-01
Background It is well accepted that medical faculty teaching staff require an understanding of educational theory and pedagogical methods for effective medical teaching. The purpose of this study was to evaluate the effectiveness of a 5-day teaching education program. Methods An open prospective interventional study using quantitative and qualitative instruments was performed, covering all four levels of the Kirkpatrick model: Evaluation of 1) ‘Reaction’ on a professional and emotional level using standardized questionnaires; 2) ‘Learning’ applying a multiple choice test; 3) ‘Behavior’ by self-, peer-, and expert assessment of teaching sessions with semistructured interviews; and 4) ‘Results’ from student evaluations. Results Our data indicate the success of the educational intervention at all observed levels. 1) Reaction: The participants showed a high acceptance of the instructional content. 2) Learning: There was a significant increase in knowledge (P<0.001) as deduced from a pre-post multiple-choice questionnaire, which was retained at 6 months (P<0.001). 3) Behavior: Peer-, self-, and expert-assessment indicated a transfer of learning into teaching performance. Semistructured interviews reflected a higher level of professionalism in medical teaching by the participants. 4) Results: Teaching performance ratings improved in students’ evaluations. Conclusions Our results demonstrate the success of a 5-day education program in embedding knowledge and skills to improve performance of medical educators. This multimethodological approach, using both qualitative and quantitative measures, may serve as a model to evaluate effectiveness of comparable interventions in other settings. PMID:24679671
ERIC Educational Resources Information Center
Yan, Xun
2014-01-01
This paper reports on a mixed-methods approach to evaluate rater performance on a local oral English proficiency test. Three types of reliability estimates were reported to examine rater performance from different perspectives. Quantitative results were also triangulated with qualitative rater comments to arrive at a more representative picture of…
Weber, Charles N; Poff, Jason A; Lev-Toaff, Anna S; Levine, Marc S; Zafar, Hanna M
To explore quantitative differences between genders in morphologic colonic metrics and to determine metric reproducibility. Quantitative colonic metrics from 20 male and 20 female CTC datasets were evaluated twice by two readers; all exams were performed after incomplete optical colonoscopy. Intra-/inter-reader reliability was measured with the intraclass correlation coefficient (ICC) and the concordance correlation coefficient (CCC). Women had overall decreased colonic volume, increased tortuosity and compactness, and lower sigmoid apex height on CTC compared to men (p < 0.0001, all). Quantitative measurements of colonic metrics were highly reproducible (ICC = 0.9989 and 0.9970; CCC = 0.9945). Quantitative morphologic differences between genders can be reproducibly measured. Copyright © 2017 Elsevier Inc. All rights reserved.
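For reference, Lin's concordance correlation coefficient used for the reproducibility analysis can be sketched as CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2); the paired readings below are synthetic.

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient for paired measurements."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return 2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

read1 = [1820, 1540, 2010, 1335, 1710, 1950]   # e.g. colonic volumes, first reading
read2 = [1800, 1560, 2005, 1350, 1695, 1940]   # second reading of the same datasets
print("CCC:", round(float(ccc(read1, read2)), 4))
```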
Near-infrared fluorescence image quality test methods for standardized performance evaluation
NASA Astrophysics Data System (ADS)
Kanniyappan, Udayakumar; Wang, Bohan; Yang, Charles; Ghassemi, Pejhman; Wang, Quanzeng; Chen, Yu; Pfefer, Joshua
2017-03-01
Near-infrared fluorescence (NIRF) imaging has gained much attention as a clinical method for enhancing visualization of cancers, perfusion and biological structures in surgical applications where a fluorescent dye is monitored by an imaging system. In order to address the emerging need for standardization of this innovative technology, it is necessary to develop and validate test methods suitable for objective, quantitative assessment of device performance. Towards this goal, we develop target-based test methods and investigate best practices for key NIRF imaging system performance characteristics including spatial resolution, depth of field and sensitivity. Characterization of fluorescence properties was performed by generating excitation-emission matrix properties of indocyanine green and quantum dots in biological solutions and matrix materials. A turbid, fluorophore-doped target was used, along with a resolution target for assessing image sharpness. Multi-well plates filled with either liquid or solid targets were generated to explore best practices for evaluating detection sensitivity. Overall, our results demonstrate the utility of objective, quantitative, target-based testing approaches as well as the need to consider a wide range of factors in establishing standardized approaches for NIRF imaging system performance.
Predictive Heterosis in Multibreed Evaluations Using Quantitative and Molecular Approaches
USDA-ARS?s Scientific Manuscript database
Heterosis is the extra genetic boost in performance obtained by crossing two cattle breeds. It is an important tool for increasing the efficiency of beef production. It is also important to adjust data used to calculate genetic evaluations for differences in heterosis. Good estimates of heterosis...
The Interpretation of Student Performance on Evaluative Tests.
ERIC Educational Resources Information Center
Aikenhead, Glen S.
Reported is a study on the use of quantitative data in evaluating a science course for the purpose of introducing an alternative form of information presentation capable of supplying qualitative feedback valuable to students, teachers, and curriculum developers. Fifty-five teachers, randomly selected during the 1967-68 Project Physics (PP)…
The Use and Evaluation of Videodiscs in the Chemistry Laboratory.
ERIC Educational Resources Information Center
Russell, Arlene A.; And Others
1985-01-01
Describes a quantitative evaluation of an interactive videodisc program in which students measure the temperature dependence of the solubility product of lead chloride by titration of chloride with silver nitrate using a Mohr titration. Student reaction (based on responses made using the program, quiz answers, and laboratory performance) was…
Kennedy, Joseph H; Wiseman, Justin M
2010-02-01
The present work describes the methodology and investigates the performance of desorption electrospray ionization (DESI) combined with a triple quadrupole mass spectrometer for the quantitation of small drug molecules in human plasma. Amoxepine, atenolol, carbamazepine, clozapine, prazosin, propranolol and verapamil were selected as target analytes, while terfenadine was selected as the internal standard common to each of the analytes. Protein precipitation of human plasma using acetonitrile was utilized for all samples. Limits of detection were determined for all analytes in plasma and shown to be in the range 0.2-40 ng/mL. Quantitative analysis of amoxepine, prazosin and verapamil was performed over the range 20-7400 ng/mL and shown to be linear in all cases with R(2) > 0.99. In most cases, the precision (relative standard deviation) and accuracy (relative error) of each method were less than or equal to 20%. The performance of the combined techniques made it possible to analyze each sample in 15 s, illustrating DESI tandem mass spectrometry (MS/MS) as a powerful tool for the quantitation of analytes in deproteinized human plasma. Copyright 2010 John Wiley & Sons, Ltd.
Method and platform standardization in MRM-based quantitative plasma proteomics.
Percy, Andrew J; Chambers, Andrew G; Yang, Juncong; Jackson, Angela M; Domanski, Dominik; Burkhart, Julia; Sickmann, Albert; Borchers, Christoph H
2013-12-16
There exists a growing demand in the proteomics community to standardize experimental methods and liquid chromatography-mass spectrometry (LC/MS) platforms in order to enable the acquisition of more precise and accurate quantitative data. This necessity is heightened by the evolving trend of verifying and validating candidate disease biomarkers in complex biofluids, such as blood plasma, through targeted multiple reaction monitoring (MRM)-based approaches with stable isotope-labeled standards (SIS). Considering the lack of performance standards for quantitative plasma proteomics, we previously developed two reference kits to evaluate the MRM with SIS peptide approach using undepleted and non-enriched human plasma. The first kit tests the effectiveness of the LC/MRM-MS platform (kit #1), while the second evaluates the performance of an entire analytical workflow (kit #2). Here, these kits have been refined for practical use and then evaluated through intra- and inter-laboratory testing on 6 common LC/MS platforms. For an identical panel of 22 plasma proteins, similar concentrations were determined, regardless of the kit, instrument platform, and laboratory of analysis. These results demonstrate the value of the kit and reinforce the utility of standardized methods and protocols. The proteomics community needs standardized experimental protocols and quality control methods in order to improve the reproducibility of MS-based quantitative data. This need is heightened by the evolving trend for MRM-based validation of proposed disease biomarkers in complex biofluids such as blood plasma. We have developed two kits to assist in the inter- and intra-laboratory quality control of MRM experiments: the first kit tests the effectiveness of the LC/MRM-MS platform (kit #1), while the second evaluates the performance of an entire analytical workflow (kit #2). In this paper, we report the use of these kits in intra- and inter-laboratory testing on 6 common LC/MS platforms. This article is part of a Special Issue entitled: Standardization and Quality Control in Proteomics. © 2013.
Performance evaluation methodology for historical document image binarization.
Ntirogiannis, Konstantinos; Gatos, Basilis; Pratikakis, Ioannis
2013-02-01
Document image binarization is of great importance in the document image analysis and recognition pipeline since it affects further stages of the recognition process. The evaluation of a binarization method aids in studying its algorithmic behavior, as well as verifying its effectiveness, by providing qualitative and quantitative indication of its performance. This paper addresses a pixel-based binarization evaluation methodology for historical handwritten/machine-printed document images. In the proposed evaluation scheme, the recall and precision evaluation measures are properly modified using a weighting scheme that diminishes any potential evaluation bias. Additional performance metrics of the proposed evaluation scheme consist of the percentage rates of broken and missed text, false alarms, background noise, character enlargement, and merging. Several experiments conducted in comparison with other pixel-based evaluation measures demonstrate the validity of the proposed evaluation scheme.
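A minimal sketch of the underlying pixel-based recall and precision computation is given below. It omits the weighting scheme described above and uses tiny hypothetical binary images, so it should be read as an unweighted baseline rather than the proposed evaluation measure itself.

```python
import numpy as np

def pixel_recall_precision(result, ground_truth):
    """Unweighted pixel-based recall and precision for a binarization result.

    Both inputs are boolean arrays where True marks foreground (text) pixels.
    The published scheme additionally weights pixels to reduce evaluation bias;
    that weighting is not reproduced here.
    """
    tp = np.logical_and(result, ground_truth).sum()
    fp = np.logical_and(result, ~ground_truth).sum()
    fn = np.logical_and(~result, ground_truth).sum()
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    return recall, precision

# Tiny illustrative example
gt  = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
res = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
print(pixel_recall_precision(res, gt))   # (0.666..., 0.666...)
```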
NASA Astrophysics Data System (ADS)
Sugiyama, S.; Hong, Y.-J.; Kasaragod, D.; Makita, S.; Miura, M.; Ikuno, Y.; Yasuno, Y.
2016-03-01
Quantitative evaluation of the optical properties of the choroid and sclera is performed by multifunctional optical coherence tomography. Five normal eyes, five glaucoma eyes and one choroidal atrophy eye are examined. Among normal eyes, the refractive error was found to be correlated with choroidal birefringence, polarization uniformity, and flow, in addition to scleral birefringence. Significant differences in choroidal polarization uniformity, choroidal flow and scleral birefringence were observed between the normal and the glaucoma eyes. An automatic segmentation algorithm for the retinal pigment epithelium and chorioscleral interface based on multifunctional signals is also presented.
Wengert, G J; Helbich, T H; Woitek, R; Kapetas, P; Clauser, P; Baltzer, P A; Vogl, W-D; Weber, M; Meyer-Baese, A; Pinker, Katja
2016-11-01
To evaluate the inter-/intra-observer agreement of BI-RADS-based subjective visual estimation of the amount of fibroglandular tissue (FGT) with magnetic resonance imaging (MRI), and to investigate whether FGT assessment benefits from an automated, observer-independent, quantitative MRI measurement by comparing both approaches. Eighty women with no imaging abnormalities (BI-RADS 1 and 2) were included in this institutional review board (IRB)-approved prospective study. All women underwent un-enhanced breast MRI. Four radiologists independently assessed FGT with MRI by subjective visual estimation according to BI-RADS. Automated observer-independent quantitative measurement of FGT with MRI was performed using a previously described measurement system. Inter-/intra-observer agreements of qualitative and quantitative FGT measurements were assessed using Cohen's kappa (k). Inexperienced readers achieved moderate inter-/intra-observer agreement and experienced readers a substantial inter- and perfect intra-observer agreement for subjective visual estimation of FGT. Practice and experience reduced observer-dependency. Automated observer-independent quantitative measurement of FGT was successfully performed and revealed only fair to moderate agreement (k = 0.209-0.497) with subjective visual estimations of FGT. Subjective visual estimation of FGT with MRI shows moderate intra-/inter-observer agreement, which can be improved by practice and experience. Automated observer-independent quantitative measurements of FGT are necessary to allow a standardized risk evaluation. • Subjective FGT estimation with MRI shows moderate intra-/inter-observer agreement in inexperienced readers. • Inter-observer agreement can be improved by practice and experience. • Automated observer-independent quantitative measurements can provide reliable and standardized assessment of FGT with MRI.
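Inter- and intra-observer agreement of this kind is typically summarized with Cohen's kappa. A minimal, unweighted sketch with hypothetical reader assignments is shown below; the study may have used a weighted variant, which is not reproduced here.

```python
import numpy as np

def cohens_kappa(rater_a, rater_b, categories):
    """Unweighted Cohen's kappa for two raters assigning categories (e.g. FGT grades a-d)."""
    a = np.asarray(rater_a)
    b = np.asarray(rater_b)
    p_o = np.mean(a == b)                               # observed agreement
    p_e = sum(np.mean(a == c) * np.mean(b == c)         # chance agreement
              for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical FGT categories assigned by two readers for ten women
reader1 = ['a', 'b', 'b', 'c', 'd', 'a', 'b', 'c', 'c', 'd']
reader2 = ['a', 'b', 'c', 'c', 'd', 'a', 'b', 'b', 'c', 'd']
print(round(cohens_kappa(reader1, reader2, ['a', 'b', 'c', 'd']), 3))   # ~0.73
```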
ERIC Educational Resources Information Center
Wrigley, William J.; Emmerson, Stephen B.
2013-01-01
This study investigated ways to improve the quality of music performance evaluation in an effort to address the accountability imperative in tertiary music education. An enhanced scientific methodology was employed incorporating ecological validity and using recognized qualitative methods involving grounded theory and quantitative methods…
Farzianpour, Fereshteh; Mohamadi, Efat; Najafpour, Zhila; Yousefinezhadi, Taraneh; Forootan, Sara; Foroushani, Abbas Rahimi
2016-09-01
The existence of high-performing doctors is one of the necessary conditions for providing high-quality services, and different motivations can affect their performance. Recognizing the factors that affect the performance of doctors, as an effective force in health care centers, is therefore necessary. The aim of this article was to evaluate the factors that influence the clinical performance of general surgeons at Tehran University of Medical Sciences in 2015. This is a cross-sectional qualitative-quantitative study conducted in three phases: phase I, use of library studies and databases to collect data; phase II, localization of the factors detected in the first phase using the Delphi technique; and phase III, prioritization of the factors affecting doctors' performance using qualitative interviews. Twelve articles were analyzed from 300 abstracts during the evaluation process. The output of this assessment, 23 identified factors, was sent to surgeons and their assistants to obtain their opinions. Quantitative analysis of the findings showed that "work qualification" (86.1%) and "managers' and supervisors' style" (50%) had, respectively, the greatest and the least impact on the performance of doctors. Finally, 18 effective factors in the performance of general surgeons were identified and prioritized. The results showed that motivation and performance do not depend on a single operating parameter but on several factors that vary with cultural background. It is therefore necessary to design, implement and monitor interventions based on these key determinants, taking cultural background into account.
Ninomiya, Shinji; Tokumine, Asako; Yasuda, Toru; Tomizawa, Yasuko
2007-01-01
A training system with quantitative evaluation of performance for training perfusionists is valuable for preparation for rare but critical situations. A simulator system, ECCSIM-Lite, for extracorporeal circulation (ECC) training of perfusionists was developed. This system consists of a computer system containing a simulation program of the hemodynamic conditions and the training scenario with instructions, a flow sensor unit, a reservoir with a built-in water level sensor, and an ECC circuit with a soft bag representing the human body. This system is relatively simple, easy to handle, compact, and reasonably inexpensive. Quantitative information is recorded, including the changes in arterial flow by the manipulation of a knob, the changes in venous drainage by handling a clamp, and the change in reservoir level; the time courses of the above parameters are presented graphically. To increase the realism of the training, a numerical-hydraulic circulatory model was applied. Following the instruction and explanation of the scenario in the form of audio and video captions, it is possible for a trainee to undertake self-study without an instructor or a computer operator. To validate the system, a training session was given to three beginners using a simple training scenario; it was possible to record the performance of the perfusion sessions quantitatively. In conclusion, the ECCSIM-Lite system is expected to be useful for perfusion training, since quantitative information about the trainee's performance is recorded and it is possible to use the data for assessment and comparison.
Quantitative analysis of peel-off degree for printed electronics
NASA Astrophysics Data System (ADS)
Park, Janghoon; Lee, Jongsu; Sung, Ki-Hak; Shin, Kee-Hyun; Kang, Hyunkyoo
2018-02-01
We suggest a facile methodology for evaluating the peel-off degree of printed electronics by image processing. The peeled and printed areas were quantified using open-source programs. To verify the accuracy of the method, we manually removed areas from the printed circuit and measured them, obtaining 96.3% accuracy. The sintered pattern area showed a decreasing tendency as the energy density of the infrared lamp increased, while the peel-off degree increased, and a comparison between the two results is presented. Finally, the correlation between performance characteristics was determined by quantitative analysis.
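One plausible way to quantify a peel-off degree from thresholded before/after images is sketched below. The abstract does not give the exact definition used in the study, so this ratio-of-areas formulation and the toy masks are assumptions for illustration.

```python
import numpy as np

def peel_off_degree(printed_mask, peeled_mask):
    """Peel-off degree as the percentage of the printed area that was removed.

    Both masks are boolean arrays obtained by thresholding images of the pattern
    before and after the peel test (e.g. with ImageJ or scikit-image)."""
    printed_px = printed_mask.sum()
    peeled_px = np.logical_and(peeled_mask, printed_mask).sum()
    return 100.0 * peeled_px / printed_px if printed_px else 0.0

# Illustrative 4x4 pattern: 8 printed pixels, 2 of them peeled off -> 25%
printed = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [1, 1, 0, 0], [1, 1, 0, 0]], dtype=bool)
peeled  = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]], dtype=bool)
print(peel_off_degree(printed, peeled))   # 25.0
```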
Wu, Hsiu; Cohen, Stephanie E; Westheimer, Emily; Gay, Cynthia L; Hall, Laura; Rose, Charles; Hightow-Weidman, Lisa B; Gose, Severin; Fu, Jie; Peters, Philip J
2017-08-01
New recommendations for laboratory diagnosis of HIV infection in the United States were published in 2014. The updated testing algorithm includes a qualitative HIV-1 RNA assay to resolve discordant immunoassay results and to identify acute HIV-1 infection (AHI). The qualitative HIV-1 RNA assay is not widely available; therefore, we evaluated the performance of a more widely available quantitative HIV-1 RNA assay, viral load, for diagnosing AHI. We determined that quantitative viral loads consistently distinguished AHI from a false-positive immunoassay result. Among 100 study participants with AHI and a viral load result, the estimated geometric mean viral load was 1,377,793 copies/mL. Copyright © 2017 Elsevier B.V. All rights reserved.
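For reference, a geometric mean viral load of this kind is simply the exponential of the mean log viral load. A minimal sketch with hypothetical values (not the study data) follows.

```python
import numpy as np

# Hypothetical HIV-1 RNA viral loads (copies/mL) from participants with acute infection
viral_loads = np.array([250_000, 1_200_000, 3_500_000, 800_000, 9_000_000], dtype=float)

# Geometric mean: exponential of the mean of the natural logarithms
geo_mean = np.exp(np.mean(np.log(viral_loads)))
print(f"geometric mean viral load = {geo_mean:,.0f} copies/mL")
```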
Becton-Dickson Model 420 Fluorescence-Activated Cell Sorter (FACS).
1986-05-01
respectively) have been associated with certain autoimmune or immunodeficient diseases. The effects of UDMH on Lyt antigens were previously evaluated...measured in cells from feline leukemia virus (FeLV)-infected cats and normal cat cells. The measurements are performed using the calcium-specific dye...used as a stimulator, which allows for quantitation of phagocytosis activity of the cells. c) Quantitation of IL-2 receptor sites on feline and murine
Implementing a Quantitative Analysis Design Tool for Future Generation Interfaces
2012-03-01
with Remotely Piloted Aircraft (RPA) has resulted in the need for a platform to evaluate interface design. The Vigilant Spirit Control Station (VSCS) ...Spirit interface. A modified version of the HCI Index was successfully applied to perform a quantitative analysis of the baseline VSCS interface and...time of the original VSCS interface. These results revealed the effectiveness of the tool and demonstrated in the design of future generation
Yaripour, Saeid; Mohammadi, Ali; Esfanjani, Isa; Walker, Roderick B.; Nojavan, Saeed
2018-01-01
In this study, for the first time, an electro-driven microextraction method named electromembrane extraction combined with a simple high performance liquid chromatography and ultraviolet detection was developed and validated for the quantitation of zolpidem in biological samples. Parameters influencing electromembrane extraction were evaluated and optimized. The membrane consisted of 2-ethylhexanol immobilized in the pores of a hollow fiber. As a driving force, a 150 V electric field was applied to facilitate the analyte migration from the sample matrix to an acceptor solution through a supported liquid membrane. The pHs of donor and acceptor solutions were optimized to 6.0 and 2.0, respectively. The enrichment factor was obtained >75 within 15 minutes. The effect of carbon nanotubes (as solid nano-sorbents) on the membrane performance and EME efficiency was evaluated. The method was linear over the range of 10-1000 ng/mL for zolpidem (R2 >0.9991) with repeatability ( %RSD) between 0.3 % and 7.3 % (n = 3). The limits of detection and quantitation were 3 and 10 ng/mL, respectively. The sensitivity of HPLC-UV for the determination of zolpidem was enhanced by electromembrane extraction. Finally, the method was employed for the quantitation of zolpidem in biological samples with relative recoveries in the range of 60-79 %. PMID:29805344
Popelka, Stanislav; Stachoň, Zdeněk; Šašinka, Čeněk; Doležalová, Jitka
2016-01-01
The mixed research design is a progressive methodological discourse that combines the advantages of quantitative and qualitative methods. Its possibilities of application are, however, dependent on the efficiency with which the particular research techniques are used and combined. The aim of the paper is to introduce the possible combination of Hypothesis with EyeTribe tracker. The Hypothesis is intended for quantitative data acquisition and the EyeTribe is intended for qualitative (eye-tracking) data recording. In the first part of the paper, Hypothesis software is described. The Hypothesis platform provides an environment for web-based computerized experiment design and mass data collection. Then, evaluation of the accuracy of data recorded by EyeTribe tracker was performed with the use of concurrent recording together with the SMI RED 250 eye-tracker. Both qualitative and quantitative results showed that data accuracy is sufficient for cartographic research. In the third part of the paper, a system for connecting EyeTribe tracker and Hypothesis software is presented. The interconnection was performed with the help of developed web application HypOgama. The created system uses open-source software OGAMA for recording the eye-movements of participants together with quantitative data from Hypothesis. The final part of the paper describes the integrated research system combining Hypothesis and EyeTribe.
Zheng, Xiujuan; Wei, Wentao; Huang, Qiu; Song, Shaoli; Wan, Jieqing; Huang, Gang
2017-01-01
Objective and quantitative analysis of longitudinal single photon emission computed tomography (SPECT) images is important for treatment monitoring in brain disorders. Therefore, a computer-aided analysis (CAA) method is introduced to extract a change-rate map (CRM) as a parametric image quantifying the changes in regional cerebral blood flow (rCBF) in longitudinal SPECT brain images. The performance of the CAA-CRM approach in treatment monitoring is evaluated by computer simulations and clinical applications. The simulation results show that the derived CRMs have high similarity with their ground truths when the lesion size is larger than the system spatial resolution and the change rate is higher than 20%. In clinical applications, the CAA-CRM approach is used to assess the treatment of 50 patients with brain ischemia. The results demonstrate that the CAA-CRM approach localizes recovered regions with 93.4% accuracy. Moreover, the quantitative indexes of recovered regions derived from the CRM all differ significantly among the groups and correlate highly with the experienced clinical diagnosis. In conclusion, the proposed CAA-CRM approach provides a convenient solution for generating a parametric image and deriving quantitative indexes from longitudinal SPECT brain images for treatment monitoring.
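The core change-rate computation can be illustrated with a generic voxel-wise formula. The sketch below assumes a simple (follow-up − baseline)/baseline definition and toy arrays; it is not necessarily the exact CAA-CRM implementation described above.

```python
import numpy as np

def change_rate_map(baseline, follow_up, mask=None, eps=1e-6):
    """Voxel-wise change-rate map (%) between two co-registered SPECT volumes.

    Generic formulation: 100 * (follow-up - baseline) / baseline, evaluated only
    where an optional brain mask is True."""
    baseline = baseline.astype(float)
    follow_up = follow_up.astype(float)
    crm = 100.0 * (follow_up - baseline) / np.maximum(baseline, eps)
    if mask is not None:
        crm = np.where(mask, crm, 0.0)
    return crm

# Illustrative 2x2 "volumes": one region recovers by 50%, the rest is unchanged
base = np.array([[10.0, 20.0], [30.0, 40.0]])
post = np.array([[15.0, 20.0], [30.0, 40.0]])
print(change_rate_map(base, post))
```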
TECHNOLOGICAL INNOVATION IN NEUROSURGERY: A QUANTITATIVE STUDY
Marcus, Hani J; Hughes-Hallett, Archie; Kwasnicki, Richard M; Darzi, Ara; Yang, Guang-Zhong; Nandi, Dipankar
2015-01-01
Object Technological innovation within healthcare may be defined as the introduction of a new technology that initiates a change in clinical practice. Neurosurgery is a particularly technologically intensive surgical discipline, and new technologies have preceded many of the major advances in operative neurosurgical technique. The aim of the present study was to quantitatively evaluate technological innovation in neurosurgery using patents and peer-reviewed publications as metrics of technology development and clinical translation respectively. Methods A patent database was searched between 1960 and 2010 using the search terms “neurosurgeon” OR “neurosurgical” OR “neurosurgery”. The top 50 performing patent codes were then grouped into technology clusters. Patent and publication growth curves were then generated for these technology clusters. A top performing technology cluster was then selected as an exemplar for more detailed analysis of individual patents. Results In all, 11,672 patents and 208,203 publications relating to neurosurgery were identified. The top performing technology clusters over the 50 years were: image guidance devices, clinical neurophysiology devices, neuromodulation devices, operating microscopes and endoscopes. Image guidance and neuromodulation devices demonstrated a highly correlated rapid rise in patents and publications, suggesting they are areas of technology expansion. In-depth analysis of neuromodulation patents revealed that the majority of high performing patents were related to Deep Brain Stimulation (DBS). Conclusions Patent and publication data may be used to quantitatively evaluate technological innovation in neurosurgery. PMID:25699414
An evaluation of performance-arts based HIV-prevention events in London with 13- 16-year-olds.
Campbell, Tomás; Bath, Michael; Bradbear, Rachel; Cottle, Justine; Parrett, Neil
2009-09-01
The London borough of Newham is ethnically diverse and is one of the poorest regions in the UK. Rates of teenage pregnancy, sexually transmitted infections (STIs) and HIV are high compared to the rest of the country. One strand of the local school-based HIV-prevention programme for young people utilizes performance arts as a tool for HIV education and prevention. This study evaluated HIV knowledge, confidence and intention to use a condom in two groups of 13- 16-year-olds who had participated in performance-based events. Group 1 (n = 14) participated in a six-week programme of performance arts-based HIV education and prevention workshops, which culminated in a theatre-based performance. Group 2 (n = 65) were audience members who attended the performance. Participants completed a short questionnaire containing both qualitative and quantitative items. Qualitative data suggested that the participants had learned about condoms and their efficacy in preventing acquisition of HIV and sexually transmitted diseases. Quantitative results indicated that after participation in the events, respondents had more information about HIV and condom use; were more confident that they could insist on condom use with partners; and planned to use condoms in the future. There was a statistically significant difference between Groups 1 and 2 but because of the small numbers in Group 1 this result should be interpreted cautiously. Performance-based HIV-prevention activities may be a useful way to deliver HIV-prevention messages to young people. This evaluation will form the basis of a more systematic and robust evaluation of future events.
NASA Astrophysics Data System (ADS)
Pandey, Gavendra; Sharan, Maithili
2018-01-01
Application of atmospheric dispersion models in air quality analysis requires a proper representation of the vertical and horizontal growth of the plume. For this purpose, various schemes for the parameterization of the dispersion parameters (σs) are described for both stable and unstable conditions. These schemes differ in their use of (i) on-site measurements, to the extent available, (ii) formulations developed for other sites and (iii) empirical relations. The performance of these schemes is evaluated in an earlier developed IIT (Indian Institute of Technology) dispersion model with the data set from single and multiple releases conducted at the Fusion Field Trials, Dugway Proving Ground, Utah, 2007. Qualitative and quantitative evaluation of the relative performance of all the schemes is carried out in both stable and unstable conditions in the light of (i) peak/maximum concentrations and (ii) the overall concentration distribution. The blocked bootstrap resampling technique is adopted to investigate the statistical significance of the differences in performance of the schemes by computing 95% confidence limits on the parameters FB and NMSE. The various analyses based on selected statistical measures indicated consistency between the qualitative and quantitative performances of the σ schemes. The scheme based on the standard deviation of wind velocity fluctuations and Lagrangian time scales exhibits relatively better performance in predicting the peak as well as the lateral spread.
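The evaluation statistics named above have standard definitions in dispersion-model evaluation. The sketch below computes the fractional bias (FB) and normalized mean square error (NMSE) for hypothetical observed/predicted pairs and attaches a plain percentile-bootstrap confidence interval; the study used a blocked bootstrap, which is not reproduced here.

```python
import numpy as np

def fb_nmse(obs, pred):
    """Fractional bias and normalized mean square error (standard definitions)."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    fb = 2.0 * (obs.mean() - pred.mean()) / (obs.mean() + pred.mean())
    nmse = np.mean((obs - pred) ** 2) / (obs.mean() * pred.mean())
    return fb, nmse

def bootstrap_ci(obs, pred, stat_index, n_boot=2000, alpha=0.05, rng=None):
    """Plain percentile-bootstrap 95% CI (not the blocked bootstrap of the study)."""
    rng = rng or np.random.default_rng(0)
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    n = len(obs)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)          # resample paired values with replacement
        stats.append(fb_nmse(obs[idx], pred[idx])[stat_index])
    return np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# Hypothetical paired observed/predicted concentrations (arbitrary units)
obs  = [1.2, 0.8, 2.5, 3.1, 0.4, 1.9, 2.2, 0.9]
pred = [1.0, 1.1, 2.0, 2.8, 0.6, 2.3, 1.8, 1.2]
print(fb_nmse(obs, pred))
print("95% CI for FB:", bootstrap_ci(obs, pred, stat_index=0))
```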
Quantitative aspects of inductively coupled plasma mass spectrometry
NASA Astrophysics Data System (ADS)
Bulska, Ewa; Wagner, Barbara
2016-10-01
Accurate determination of elements in various kinds of samples is essential for many areas, including environmental science, medicine, as well as industry. Inductively coupled plasma mass spectrometry (ICP-MS) is a powerful tool enabling multi-elemental analysis of numerous matrices with high sensitivity and good precision. Various calibration approaches can be used to perform accurate quantitative measurements by ICP-MS. They include the use of pure standards, matrix-matched standards, or relevant certified reference materials, assuring traceability of the reported results. This review critically evaluates the advantages and limitations of different calibration approaches, which are used in quantitative analyses by ICP-MS. Examples of such analyses are provided. This article is part of the themed issue 'Quantitative mass spectrometry'.
ERIC Educational Resources Information Center
Khan, Huma Imran
2015-01-01
The prime purpose of this study was to explore a correlation between bilingual instruction in an ESL class and the class performance of the ESL learners at the secondary level. Quantitative research method was used to evaluate the test performance of 60 ESL learners divided into two groups: One was the controlled group (which was given…
Ayaz, Shirazi Muhammad; Kim, Min Young
2018-01-01
In this article, a multi-view registration approach for a 3D handheld profiling system based on the multiple-shot structured light technique is proposed. The multi-view registration approach is divided into coarse registration and point cloud refinement using the iterative closest point (ICP) algorithm. Coarse registration of multiple point clouds was performed using relative orientation and translation parameters estimated via homography-based visual navigation. The proposed system was evaluated using an artificial human skull and a paper box object. For the quantitative evaluation of the accuracy of a single 3D scan, a paper box was reconstructed, and the mean errors in its height and breadth were found to be 9.4 μm and 23 μm, respectively. A comprehensive quantitative evaluation and comparison of the proposed algorithm against other variants of ICP was performed. The root mean square error of the ICP algorithm in registering a pair of point clouds of the skull object was also found to be less than 1 mm. PMID:29642552
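Registration accuracy of this kind is commonly reported as the root mean square of nearest-neighbour distances between the transformed source cloud and the target cloud. A minimal brute-force sketch with toy coordinates is given below; the study's exact error metric and ICP implementation are not reproduced.

```python
import numpy as np

def registration_rmse(source, target, R=np.eye(3), t=np.zeros(3)):
    """RMSE between a transformed source cloud and its nearest neighbours in the target.

    A brute-force nearest-neighbour search is used for clarity; real point clouds
    would use a k-d tree. R, t are the rotation/translation from coarse registration."""
    src = source @ R.T + t                              # apply the rigid transform
    d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(axis=2)
    nn_dist2 = d2.min(axis=1)                           # squared distance to the closest target point
    return float(np.sqrt(nn_dist2.mean()))

# Illustrative example: target is the source shifted by 1 mm along x
source = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
target = source + np.array([1.0, 0, 0])
print(registration_rmse(source, target))                           # misaligned
print(registration_rmse(source, target, t=np.array([1.0, 0, 0])))  # after translation: 0.0
```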
Performance Evaluation and Benchmarking of Intelligent Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Madhavan, Raj; Messina, Elena; Tunstel, Edward
To design and develop capable, dependable, and affordable intelligent systems, their performance must be measurable. Scientific methodologies for standardization and benchmarking are crucial for quantitatively evaluating the performance of emerging robotic and intelligent systems technologies. There is currently no accepted standard for quantitatively measuring the performance of these systems against user-defined requirements; and furthermore, there is no consensus on what objective evaluation procedures need to be followed to understand the performance of these systems. The lack of reproducible and repeatable test methods has precluded researchers working towards a common goal from exchanging and communicating results, inter-comparing system performance, and leveraging previous work that could otherwise avoid duplication and expedite technology transfer. Currently, this lack of cohesion in the community hinders progress in many domains, such as manufacturing, service, healthcare, and security. By providing the research community with access to standardized tools, reference data sets, and open source libraries of solutions, researchers and consumers will be able to evaluate the cost and benefits associated with intelligent systems and associated technologies. In this vein, the edited book volume addresses performance evaluation and metrics for intelligent systems, in general, while emphasizing the need and solutions for standardized methods. To the knowledge of the editors, there is not a single book on the market that is solely dedicated to the subject of performance evaluation and benchmarking of intelligent systems. Even books that address this topic do so only marginally or are out of date. The research work presented in this volume fills this void by drawing from the experiences and insights of experts gained both through theoretical development and practical implementation of intelligent systems in a variety of diverse application domains. The book presents a detailed and coherent picture of state-of-the-art, recent developments, and further research areas in intelligent systems.
Volozhin, A I; Grigor'ian, A S; Desiatnichenko, K S; Ozhelevskaia, S A; Doktorov, A A; Kurdiumov, S G; Fionova, E V; Gurin, A N; Karakov, K G
2008-01-01
In rat experiments, the ability of noncollagen bone proteins (NCBP) in the composition of the modified osteoplastic material Gapkol (not tanned in formalin and subjected to vacuum extraction) to enhance bone repair in comparison with traditional Gapkol was studied. Quantitative evaluation was performed on rat parietal bone and qualitative evaluation was performed on rat mandible. It was shown that Gapkol with NCBP (not tanned in formalin and subjected to vacuum extraction) increased reparative osteogenesis.
Using hybrid method to evaluate the green performance in uncertainty.
Tseng, Ming-Lang; Lan, Lawrence W; Wang, Ray; Chiu, Anthony; Cheng, Hui-Ping
2011-04-01
Measuring green performance is vital for enterprises making continuous improvements to maintain sustainable competitive advantages. Evaluation of green performance, however, is a challenging task because of the interdependence of the aspects and criteria and the linguistic vagueness of some of the qualitative information mixed with quantitative data. To deal with this issue, this study proposes a novel approach to evaluating the interdependent aspects and criteria of a firm's green performance. The rationale of the proposed approach, named the green network balanced scorecard, is to use the balanced scorecard to combine fuzzy set theory with the analytical network process (ANP) and importance-performance analysis (IPA), wherein fuzzy set theory accounts for the linguistic vagueness of qualitative criteria and ANP converts the relations among the interdependent aspects and criteria into an intelligible structural model that is then used in the IPA. For the empirical case study, four interdependent aspects and 34 green performance criteria for PCB firms in Taiwan were evaluated. The managerial implications are discussed.
Crausman, R S; Ferguson, G; Irvin, C G; Make, B; Newell, J D
1995-06-01
We assessed the value of quantitative high-resolution computed tomography (CT) as a diagnostic and prognostic tool in smoking-related emphysema. We performed an inception cohort study of 14 patients referred with emphysema. The diagnosis of emphysema was based on a compatible history, physical examination, chest radiograph, CT scan of the lung, and pulmonary physiologic evaluation. As a group, those who underwent exercise testing were hyperinflated (percentage predicted total lung capacity ± standard error of the mean = 133 ± 9%), and there was evidence of air trapping (percentage predicted respiratory volume = 318 ± 31%) and airflow limitation (forced expiratory volume in 1 sec [FEV1] = 40 ± 7%). The exercise performance of the group was severely limited (maximum achievable workload = 43 ± 6%) and was characterized by prominent ventilatory, gas exchange, and pulmonary vascular abnormalities. The quantitative CT index was markedly elevated in all patients (76 ± 9; n = 14; normal < 4). There were correlations between this quantitative CT index and measures of airflow limitation (FEV1 r2 = .34, p = .09; FEV1/forced vital capacity r2 = .46, p = .04), and between the index and maximum workload achieved (r2 = .93, p = .0001) and maximum oxygen utilization (r2 = .83, p = .0007). Quantitative chest CT assessment of disease severity is correlated with the degree of airflow limitation and exercise impairment in pulmonary emphysema.
CCTV Coverage Index Based on Surveillance Resolution and Its Evaluation Using 3D Spatial Analysis
Choi, Kyoungah; Lee, Impyeong
2015-01-01
We propose a novel approach to evaluating how effectively a closed circuit television (CCTV) system can monitor a targeted area. With 3D models of the target area and the camera parameters of the CCTV system, the approach produces a surveillance coverage index, which is newly defined in this study as a quantitative measure of surveillance performance. This index indicates the proportion of the space being monitored with sufficient resolution relative to the entire space of the target area. It is determined by computing the surveillance resolution at every position and orientation, which indicates how closely a specific object can be monitored with a CCTV system. We present a full mathematical derivation for the resolution, which depends on the location and orientation of the object as well as the geometric model of a camera. With the proposed approach, we quantitatively evaluated the surveillance coverage of a CCTV system in an underground parking area. Our evaluation process provided various quantitative-analysis results, enabling us to examine the design of the CCTV system prior to its installation and to understand the surveillance capability of an existing CCTV system. PMID:26389909
Exploring the Epistemic Impacts of Academic Performance Indicators in the Life Sciences
ERIC Educational Resources Information Center
Müller, Ruth; de Rijcke, Sarah
2017-01-01
While quantitative performance indicators are widely used by organizations and individuals for evaluative purposes, little is known about their impacts on the epistemic processes of academic knowledge production. In this article we bring together three qualitative research projects undertaken in the Netherlands and Austria to contribute to filling…
Safeguards Technology Development Program 1st Quarter FY 2018 Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prasad, Manoj K.
LLNL will evaluate the performance of a stilbene-based scintillation detector array for IAEA neutron multiplicity counting (NMC) applications. This effort will combine newly developed modeling methodologies and recently acquired high-efficiency stilbene detector units to quantitatively compare the prototype system performance with the conventional He-3 counters and liquid scintillator alternatives.
Geiger, Daniel; Bae, Won C.; Statum, Sheronda; Du, Jiang; Chung, Christine B.
2014-01-01
Objective Temporomandibular dysfunction involves osteoarthritis of the TMJ, including degeneration and morphologic changes of the mandibular condyle. The purpose of this study was to determine the accuracy of novel 3D-UTE MRI versus micro-CT (μCT) for quantitative evaluation of mandibular condyle morphology. Material & Methods Nine TMJ condyle specimens were harvested from cadavers (2M, 3F; age 85 ± 10 yrs, mean ± SD). 3D-UTE MRI (TR = 50 ms, TE = 0.05 ms, 104 μm isotropic voxel) was performed using a 3-T MR scanner, and μCT (18 μm isotropic voxel) was performed. The MR datasets were spatially registered with the μCT dataset. Two observers segmented the bony contours of the condyles. Fibrocartilage was segmented on the MR dataset. Using a custom program, bone and fibrocartilage surface coordinates, Gaussian curvature, the volume of segmented regions and fibrocartilage thickness were determined for quantitative evaluation of joint morphology. Agreement between techniques (MRI vs. μCT) and observers (MRI vs. MRI) for Gaussian curvature, mean curvature and segmented bone volume was determined using intraclass correlation coefficient (ICC) analyses. Results Between MRI and μCT, the average deviation of surface coordinates was 0.19 ± 0.15 mm, slightly higher than the spatial resolution of MRI. The average deviation of the Gaussian curvature and volume of segmented regions, from MRI to μCT, was 5.7 ± 6.5% and 6.6 ± 6.2%, respectively. ICC values (MRI vs. μCT) for Gaussian curvature, mean curvature and segmented volumes were 0.892, 0.893 and 0.972, respectively. Between observers (MRI vs. MRI), the ICC values were 0.998, 0.999 and 0.997, respectively. Fibrocartilage thickness was 0.55 ± 0.11 mm, as previously described in the literature for grossly normal TMJ samples. Conclusion 3D-UTE MR quantitative evaluation of TMJ condyle morphology ex vivo, including surface, curvature and segmented volume, shows high correlation with μCT and between observers. In addition, UTE MRI allows quantitative evaluation of the fibrocartilaginous condylar component. PMID:24092237
A grid for a precise analysis of daily activities.
Wojtasik, V; Olivier, C; Lekeu, F; Quittre, A; Adam, S; Salmon, E
2010-01-01
Assessment of daily living activities is essential in patients with Alzheimer's disease. Most current tools quantitatively assess overall ability but provide little qualitative information on individual difficulties. Only a few tools allow therapists to evaluate stereotyped activities and record different types of errors. We capitalised on the Kitchen Activity Assessment to design a widely applicable analysis grid that provides both qualitative and quantitative data on activity performance. A cooking activity was videotaped in 15 patients with dementia and assessed according to the different steps in the execution of the task. The evaluations obtained with our grid showed good correlations between raters, between versions of the grid and between sessions. Moreover, the degree of independence obtained with our analysis of the task correlated with the Kitchen Activity Assessment score and with a global score of cognitive functioning. We conclude that assessment of a daily living activity with this analysis grid is reproducible and relatively independent of the therapist, and thus provides quantitative and qualitative information useful for both evaluating and caring for demented patients.
21 CFR 314.126 - Adequate and well-controlled studies.
Code of Federal Regulations, 2010 CFR
2010-04-01
... permits a valid comparison with a control to provide a quantitative assessment of drug effect. The... data analyses performed. (c) The Director of the Center for Drug Evaluation and Research may, on the...
21 CFR 314.126 - Adequate and well-controlled studies.
Code of Federal Regulations, 2012 CFR
2012-04-01
... permits a valid comparison with a control to provide a quantitative assessment of drug effect. The... data analyses performed. (c) The Director of the Center for Drug Evaluation and Research may, on the...
21 CFR 314.126 - Adequate and well-controlled studies.
Code of Federal Regulations, 2013 CFR
2013-04-01
... permits a valid comparison with a control to provide a quantitative assessment of drug effect. The... data analyses performed. (c) The Director of the Center for Drug Evaluation and Research may, on the...
21 CFR 314.126 - Adequate and well-controlled studies.
Code of Federal Regulations, 2011 CFR
2011-04-01
... permits a valid comparison with a control to provide a quantitative assessment of drug effect. The... data analyses performed. (c) The Director of the Center for Drug Evaluation and Research may, on the...
21 CFR 314.126 - Adequate and well-controlled studies.
Code of Federal Regulations, 2014 CFR
2014-04-01
... permits a valid comparison with a control to provide a quantitative assessment of drug effect. The... data analyses performed. (c) The Director of the Center for Drug Evaluation and Research may, on the...
Comparison of methods for quantitative evaluation of endoscopic distortion
NASA Astrophysics Data System (ADS)
Wang, Quanzeng; Castro, Kurt; Desai, Viraj N.; Cheng, Wei-Chung; Pfefer, Joshua
2015-03-01
Endoscopy is a well-established paradigm in medical imaging, and emerging endoscopic technologies such as high resolution, capsule and disposable endoscopes promise significant improvements in effectiveness, as well as patient safety and acceptance of endoscopy. However, the field lacks practical standardized test methods to evaluate key optical performance characteristics (OPCs), in particular the geometric distortion caused by fisheye lens effects in clinical endoscopic systems. As a result, it has been difficult to evaluate an endoscope's image quality or assess its changes over time. The goal of this work was to identify optimal techniques for objective, quantitative characterization of distortion that are effective and not burdensome. Specifically, distortion measurements from a commercially available distortion evaluation/correction software package were compared with a custom algorithm based on a local magnification (ML) approach. Measurements were performed using a clinical gastroscope to image square grid targets. Recorded images were analyzed with the ML approach and the commercial software where the results were used to obtain corrected images. Corrected images based on the ML approach and the software were compared. The study showed that the ML method could assess distortion patterns more accurately than the commercial software. Overall, the development of standardized test methods for characterizing distortion and other OPCs will facilitate development, clinical translation, manufacturing quality and assurance of performance during clinical use of endoscopic technologies.
Towards standardized assessment of endoscope optical performance: geometric distortion
NASA Astrophysics Data System (ADS)
Wang, Quanzeng; Desai, Viraj N.; Ngo, Ying Z.; Cheng, Wei-Chung; Pfefer, Joshua
2013-12-01
Technological advances in endoscopes, such as capsule, ultrathin and disposable devices, promise significant improvements in safety, clinical effectiveness and patient acceptance. Unfortunately, the industry lacks test methods for preclinical evaluation of key optical performance characteristics (OPCs) of endoscopic devices that are quantitative, objective and well-validated. As a result, it is difficult for researchers and developers to compare image quality and evaluate equivalence to, or improvement upon, prior technologies. While endoscope OPCs include resolution, field of view, and depth of field, among others, our focus in this paper is geometric image distortion. We reviewed specific test methods for distortion and then developed an objective, quantitative test method based on well-defined experimental and data processing steps to evaluate radial distortion in the full field of view of an endoscopic imaging system. Our measurements and analyses showed that a second-degree polynomial equation could well describe the radial distortion curve of a traditional endoscope. The distortion evaluation method was effective for correcting the image and can be used to explain other widely accepted evaluation methods such as picture height distortion. Development of consensus standards based on promising test methods for image quality assessment, such as the method studied here, will facilitate clinical implementation of innovative endoscopic devices.
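The reported finding that a second-degree polynomial describes the radial distortion curve can be illustrated with a simple fit of local distortion versus radial position. The paired radii below are hypothetical values for illustration, not measurements from the study.

```python
import numpy as np

# Hypothetical paired radial positions of grid intersections: undistorted (from the
# known grid geometry) and distorted (measured in the endoscopic image), in mm.
r_undist = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
r_dist   = np.array([0.49, 0.96, 1.40, 1.81, 2.19, 2.53])

# Local radial distortion in percent at each grid point
distortion_pct = 100.0 * (r_dist - r_undist) / r_undist

# Second-degree polynomial fit of the radial distortion curve (as reported above)
coeffs = np.polyfit(r_undist, distortion_pct, 2)
fit = np.poly1d(coeffs)
print("fit coefficients:", np.round(coeffs, 3))
print("predicted distortion at r = 2.8 mm: %.1f %%" % fit(2.8))
```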
Evaluative procedures to detect, characterize, and assess the severity of diabetic neuropathy.
Dyck, P J
1991-01-01
Minimal criteria for diabetic neuropathy need to be defined and universally applied. Standardized evaluative procedures need to be agreed and normal ranges determined from healthy volunteers. Types and stages of neuropathy should be established and assessments performed on representative populations of both Type 1 and Type 2 diabetic patients. Potential minimal criteria include absent ankle reflexes and vibratory sensation, and abnormalities of nerve conduction. However, the preferred criterion is the identification of more than two statistically defined abnormalities among symptoms and deficits, nerve conduction, quantitative sensory examination or quantitative autonomic examination. Various evaluative procedures are available. Symptoms should be assessed and scores can be assigned to neurological deficits. However, assessments of nerve conduction provide the most specific, objective, sensitive, and repeatable procedures, although these may be the least meaningful. Many techniques are available for quantitative sensory examination, but are poorly standardized and normal values are not available. For quantitative autonomic examination, tests are available for the adequacy of cardiovascular and peripheral vascular reflexes and increasingly for other autonomic functions. In any assessment of nerve function the conditions should be optimized and standardized, and stimuli defined. Specific instructions should be given and normal ranges established in healthy volunteers.
[Clinical evaluation of the optic disc in glaucoma].
Greslechner, R; Spiegel, D
2016-10-01
Glaucoma is defined as a progressive neuropathy of the optic nerve, characterized by specific changes of the optic disc, parapapillary region, and retinal nerve fiber layer. Characteristic glaucomatous changes of the optic disc, parapapillary region, and retinal nerve fiber layer are discussed and their ophthalmoscopic examination is described. A literature search in the PubMed database was conducted. A systematic step-by-step approach to a qualitative and quantitative ophthalmoscopic evaluation of the optic disc regarding glaucomatous damage is presented. A systematic, clinical, qualitative, and quantitative assessment of the optic disc can be performed with little effort and forms the basis for diagnosis and treatment of glaucoma.
NASA Technical Reports Server (NTRS)
Natesh, R.; Smith, J. M.; Qidwai, H. A.; Bruce, T.
1979-01-01
The evaluation and prediction of the conversion efficiency for a variety of silicon samples with differences in structural defects, such as grain boundaries, twin boundaries, precipitate particles, dislocations, etc. are discussed. Quantitative characterization of these structural defects, which were revealed by etching the surface of silicon samples, is performed by using an image analyzer. Due to different crystal growth and fabrication techniques the various types of silicon contain a variety of trace impurity elements and structural defects. The two most important criteria in evaluating the various silicon types for solar cell applications are cost and conversion efficiency.
Liu, Sifei; Zhang, Guangrui; Qiu, Ying; Wang, Xiaobo; Guo, Lihan; Zhao, Yanxin; Tong, Meng; Wei, Lan; Sun, Lixin
2016-12-01
In this study, we aimed to establish a comprehensive and practical quality evaluation system for Shenmaidihuang pills. A simple and reliable high-performance liquid chromatography coupled with photodiode array detection method was developed both for fingerprint analysis and quantitative determination. In fingerprint analysis, relative retention time and relative peak area were used to identify the common peaks in 18 samples for investigation. Twenty one peaks were selected as the common peaks to evaluate the similarities of 18 Shenmaidihuang pills samples with different manufacture dates. Furthermore, similarity analysis was applied to evaluate the similarity of samples. Hierarchical cluster analysis and principal component analysis were also performed to evaluate the variation of Shenmaidihuang pills. In quantitative analysis, linear regressions, injection precisions, recovery, repeatability and sample stability were all tested and good results were obtained to simultaneously determine the seven identified compounds, namely, 5-hydroxymethylfurfural, morroniside, loganin, paeonol, paeoniflorin, psoralen, isopsoralen in Shenmaidihuang pills. The contents of some analytes in different batches of samples indicated significant difference, especially for 5-hydroxymethylfurfural. So, it was concluded that the chromatographic fingerprint method obtained by high-performance liquid chromatography coupled with photodiode array detection associated with multiple compounds determination is a powerful and meaningful tool to comprehensively conduct the quality control of Shenmaidihuang pills. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
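Fingerprint similarity of the kind evaluated here is often computed as the cosine (congruence) coefficient between the common-peak vectors of a sample and a reference chromatogram. A minimal sketch with synthetic peak areas follows; the similarity software used in the study may apply a different weighting.

```python
import numpy as np

def fingerprint_similarity(sample_areas, reference_areas):
    """Cosine similarity between a sample fingerprint and a reference fingerprint,
    each represented by the areas of the common chromatographic peaks."""
    a = np.asarray(sample_areas, float)
    b = np.asarray(reference_areas, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical areas of the 21 common peaks for one batch vs. a reference fingerprint
rng = np.random.default_rng(0)
reference = rng.uniform(0.5, 10.0, 21)
sample = reference * rng.normal(1.0, 0.05, 21)     # a batch close to the reference
print(round(fingerprint_similarity(sample, reference), 4))
```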
NASA Astrophysics Data System (ADS)
Jeon, Hosang; Kim, Hyunduk; Cha, Bo Kyung; Kim, Jong Yul; Cho, Gyuseong; Chung, Yong Hyun; Yun, Jong-Il
2009-06-01
Presently, the gamma camera system is widely used in various medical diagnostic, industrial and environmental fields. Hence, quantitative and effective evaluation of its imaging performance is essential for design and quality assurance. The National Electrical Manufacturers Association (NEMA) standards for gamma camera evaluation are insufficient for sensitive evaluation. In this study, the modulation transfer function (MTF) and normalized noise power spectrum (NNPS) are suggested for evaluating the performance of a small gamma camera with changeable pinhole collimators using Monte Carlo simulation. We simulated the system with a cylinder and a disk source and seven different lead pinhole collimators with pinhole diameters from 1 to 4 mm. The MTF and NNPS data were obtained from the output images and were compared with the full-width at half-maximum (FWHM), sensitivity and differential uniformity. In the results, we found that MTF and NNPS are effective and novel measures for evaluating the imaging performance of gamma cameras in place of the conventional NEMA standards.
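One common way to obtain an MTF of the kind discussed above is as the normalized magnitude of the Fourier transform of a line spread function. The sketch below uses a synthetic Gaussian LSF; the study's exact MTF and NNPS estimation procedure is not reproduced.

```python
import numpy as np

def mtf_from_lsf(lsf, pixel_size_mm):
    """MTF as the normalized magnitude of the Fourier transform of a line spread
    function (one common approach; details differ between studies)."""
    lsf = np.asarray(lsf, float)
    lsf = lsf / lsf.sum()                                # normalize area to 1
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                                        # force MTF(0) = 1
    freqs = np.fft.rfftfreq(len(lsf), d=pixel_size_mm)   # spatial frequencies in cycles/mm
    return freqs, mtf

# Illustrative Gaussian LSF sampled on a 1 mm pixel grid
x = np.arange(-32, 32)
lsf = np.exp(-0.5 * (x / 3.0) ** 2)
freqs, mtf = mtf_from_lsf(lsf, pixel_size_mm=1.0)
print(np.round(mtf[:5], 3))
```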
A Model of Risk Analysis in Analytical Methodology for Biopharmaceutical Quality Control.
Andrade, Cleyton Lage; Herrera, Miguel Angel De La O; Lemes, Elezer Monte Blanco
2018-01-01
One key quality control parameter for biopharmaceutical products is the analysis of residual cellular DNA. To determine small amounts of DNA (around 100 pg) that may be in a biologically derived drug substance, an analytical method should be sensitive, robust, reliable, and accurate. In principle, three techniques have the ability to measure residual cellular DNA: radioactive dot-blot, a type of hybridization; threshold analysis; and quantitative polymerase chain reaction. Quality risk management is a systematic process for evaluating, controlling, and reporting risks that may affect method capabilities, and it supports a scientific and practical approach to decision making. This paper evaluates, by quality risk management, an alternative approach to assessing the performance risks associated with quality control methods used with biopharmaceuticals, using the tool hazard analysis and critical control points. This tool makes it possible to identify the steps in an analytical procedure with the highest impact on method performance. By applying these principles to DNA analysis methods, we conclude that the radioactive dot-blot assay has the largest number of critical control points, followed by quantitative polymerase chain reaction and threshold analysis. From the analysis of hazards (i.e., points of method failure) and the associated critical control points of the method procedures, we conclude that the analytical methodology with the lowest risk of performance failure for residual cellular DNA testing is quantitative polymerase chain reaction. LAY ABSTRACT: In order to mitigate the risk of adverse events caused by residual cellular DNA that is not completely cleared from downstream production processes, regulatory agencies have required the industry to guarantee a very low level of DNA in biologically derived pharmaceutical products. The technique historically used was radioactive blot hybridization. However, this technique is challenging to implement in a quality control laboratory: it is laborious, time consuming, semi-quantitative, and requires a radioisotope. Along with dot-blot hybridization, two alternative techniques were evaluated: threshold analysis and quantitative polymerase chain reaction. Quality risk management tools were applied to compare the techniques, taking into account the uncertainties, the possibility of circumstances or future events, and their effects upon method performance. By illustrating the application of these tools with DNA methods, we provide an example of how they can be used to support a scientific and practical approach to decision making and to assess and manage method performance risk. This paper discusses, considering the principles of quality risk management, an additional approach to the development and selection of analytical quality control methods using the risk analysis tool hazard analysis and critical control points. This tool makes it possible to identify the procedural steps with the highest impact on method reliability (called critical control points). Our model concluded that the radioactive dot-blot assay has the largest number of critical control points, followed by quantitative polymerase chain reaction and threshold analysis. Quantitative polymerase chain reaction is shown to be the preferable alternative analytical methodology for residual cellular DNA analysis. © PDA, Inc. 2018.
Comparison of quantitative and qualitative tests for glucose-6-phosphate dehydrogenase deficiency.
LaRue, Nicole; Kahn, Maria; Murray, Marjorie; Leader, Brandon T; Bansil, Pooja; McGray, Sarah; Kalnoky, Michael; Zhang, Hao; Huang, Huiqiang; Jiang, Hui; Domingo, Gonzalo J
2014-10-01
A barrier to eliminating Plasmodium vivax malaria is inadequate treatment of infected patients. 8-Aminoquinoline-based drugs clear the parasite; however, people with glucose-6-phosphate dehydrogenase (G6PD) deficiency are at risk for hemolysis from these drugs. Understanding the performance of G6PD deficiency tests is critical for patient safety. Two quantitative assays and two qualitative tests were evaluated. The comparison of quantitative assays gave a Pearson correlation coefficient of 0.7585 with significant difference in mean G6PD activity, highlighting the need to adhere to a single reference assay. Both qualitative tests had high sensitivity and negative predictive value at a cutoff G6PD value of 40% of normal activity if interpreted conservatively and performed under laboratory conditions. The performance of both tests dropped at a cutoff level of 45%. Cytochemical staining of specimens confirmed that heterozygous females with > 50% G6PD-deficient cells can seem normal by phenotypic tests. © The American Society of Tropical Medicine and Hygiene.
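Diagnostic performance at a fixed activity cutoff can be computed from a simple 2x2 cross-tabulation. The sketch below uses hypothetical qualitative test readings and quantitative activities (not the study data), with deficiency defined as activity below 40% of normal.

```python
import numpy as np

def screening_performance(qual_test_deficient, quant_activity_pct, cutoff=40.0):
    """Sensitivity, specificity and NPV of a qualitative G6PD test against a
    quantitative reference assay, where 'true deficiency' is defined as measured
    activity below a cutoff expressed as % of normal activity."""
    test_pos = np.asarray(qual_test_deficient, bool)
    truth = np.asarray(quant_activity_pct, float) < cutoff
    tp = np.sum(test_pos & truth)
    tn = np.sum(~test_pos & ~truth)
    fp = np.sum(test_pos & ~truth)
    fn = np.sum(~test_pos & truth)
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    npv = tn / (tn + fn)
    return sens, spec, npv

# Hypothetical data: qualitative test readings (1 = read as deficient) and
# quantitative activities (% of normal) for ten specimens
qual  = [1, 1, 0, 0, 0, 0, 1, 0, 0, 1]
quant = [10, 25, 35, 55, 80, 95, 30, 120, 60, 15]
print(screening_performance(qual, quant, cutoff=40))
```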
A Privacy-Preserving Platform for User-Centric Quantitative Benchmarking
NASA Astrophysics Data System (ADS)
Herrmann, Dominik; Scheuer, Florian; Feustel, Philipp; Nowey, Thomas; Federrath, Hannes
We propose a centralised platform for quantitative benchmarking of key performance indicators (KPI) among mutually distrustful organisations. Our platform offers users the opportunity to request an ad-hoc benchmarking for a specific KPI within a peer group of their choice. Architecture and protocol are designed to provide anonymity to its users and to hide the sensitive KPI values from other clients and the central server. To this end, we integrate user-centric peer group formation, exchangeable secure multi-party computation protocols, short-lived ephemeral key pairs as pseudonyms, and attribute certificates. We show by empirical evaluation of a prototype that the performance is acceptable for reasonably sized peer groups.
Barousse, Rafael; Socolovsky, Mariano; Luna, Antonio
2017-01-01
Traumatic conditions of peripheral nerves and plexus have classically been evaluated by morphological imaging techniques and electrophysiological tests. New magnetic resonance imaging (MRI) studies based on 3D fat-suppressed techniques are providing high accuracy for peripheral nerve injury evaluation from a qualitative point of view. However, these techniques do not provide quantitative information. Diffusion weighted imaging (DWI) and diffusion tensor imaging (DTI) are functional MRI techniques that are able to evaluate and quantify the movement of water molecules within different biological structures. These techniques have been successfully applied in other anatomical areas, especially in the assessment of the central nervous system, and are now being adopted, with promising results, for peripheral nerve and plexus evaluation. DWI and DTI allow qualitative and quantitative peripheral nerve analysis, providing valuable pathophysiological information about the functional integrity of these structures. In the field of trauma and peripheral nerve or plexus injury, several parameters derived from DWI and DTI studies, such as the apparent diffusion coefficient (ADC) and fractional anisotropy (FA), can be used as potential biomarkers of neural damage, providing information about fiber organization, axonal flow or myelin integrity. A proper knowledge of the physical basis of these techniques and their limitations is important for an optimal interpretation of the imaging findings and derived data. In this paper, a comprehensive review of the potential applications of DWI and DTI neurographic studies is performed with a focus on traumatic conditions, including the main nerve entrapment syndromes of both peripheral nerves and the brachial or lumbar plexus. PMID:28932698
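The two DTI parameters named above have standard closed-form definitions in terms of the diffusion-tensor eigenvalues. A minimal sketch with illustrative eigenvalues (not patient data) is shown below.

```python
import numpy as np

def md_fa(eigenvalues):
    """Mean diffusivity (ADC) and fractional anisotropy from the three eigenvalues
    of a diffusion tensor (standard definitions; diffusivities in mm^2/s)."""
    lam = np.asarray(eigenvalues, float)
    md = lam.mean()                                               # mean diffusivity (ADC)
    fa = np.sqrt(1.5 * np.sum((lam - md) ** 2) / np.sum(lam ** 2))  # fractional anisotropy
    return md, fa

# Illustrative eigenvalues for a healthy peripheral nerve voxel (in mm^2/s)
print(md_fa([1.6e-3, 0.5e-3, 0.4e-3]))
```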
Global scaling for semi-quantitative analysis in FP-CIT SPECT.
Kupitz, D; Apostolova, I; Lange, C; Ulrich, G; Amthauer, H; Brenner, W; Buchert, R
2014-01-01
Semi-quantitative characterization of dopamine transporter availability from single photon emission computed tomography (SPECT) with 123I-ioflupane (FP-CIT) is based on uptake ratios relative to a reference region. The aim of this study was to evaluate the whole brain as reference region for semi-quantitative analysis of FP-CIT SPECT. The rationale was that this might reduce statistical noise associated with the estimation of non-displaceable FP-CIT uptake. 150 FP-CIT SPECTs were categorized as neurodegenerative or non-neurodegenerative by an expert. Semi-quantitative analysis of specific binding ratios (SBR) was performed with a custom-made tool based on the Statistical Parametric Mapping software package using predefined regions of interest (ROIs) in the anatomical space of the Montreal Neurological Institute. The following reference regions were compared: predefined ROIs for frontal and occipital lobe and whole brain (without striata, thalamus and brainstem). Tracer uptake in the reference region was characterized by the mean, median or 75th percentile of its voxel intensities. The area (AUC) under the receiver operating characteristic curve was used as performance measure. The highest AUC of 0.973 was achieved by the SBR of the putamen with the 75th percentile in the whole brain as reference. The lowest AUC for the putamen SBR of 0.937 was obtained with the mean in the frontal lobe as reference. We recommend the 75th percentile in the whole brain as reference for semi-quantitative analysis in FP-CIT SPECT. This combination provided the best agreement of the semi-quantitative analysis with visual evaluation of the SPECT images by an expert and, therefore, is appropriate to support less experienced physicians.
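The specific binding ratio described here has the generic form (target − reference)/reference. The sketch below assumes that form with the 75th percentile of whole-brain voxel intensities as the reference summary, as recommended above, and uses synthetic voxel values rather than patient data.

```python
import numpy as np

def specific_binding_ratio(striatal_voxels, reference_voxels, ref_stat="p75"):
    """SBR = (striatal uptake - nondisplaceable uptake) / nondisplaceable uptake.

    The nondisplaceable (reference) uptake is summarized by the 75th percentile of
    whole-brain voxel intensities by default; the mean or median can be substituted."""
    striatal = np.mean(striatal_voxels)
    if ref_stat == "p75":
        ref = np.percentile(reference_voxels, 75)
    elif ref_stat == "median":
        ref = np.median(reference_voxels)
    else:
        ref = np.mean(reference_voxels)
    return (striatal - ref) / ref

# Illustrative voxel intensities (arbitrary counts)
putamen_voxels = np.random.default_rng(1).normal(90, 5, 500)
whole_brain_voxels = np.random.default_rng(2).normal(30, 6, 50000)
print(round(specific_binding_ratio(putamen_voxels, whole_brain_voxels), 2))
```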
Ammitzbøll-Danielsen, Mads; Østergaard, Mikkel; Naredo, Esperanza; Terslev, Lene
2016-12-01
The aim was to evaluate the metric properties of the semi-quantitative OMERACT US scoring system vs a novel quantitative US scoring system for tenosynovitis, by testing its intra- and inter-reader reliability, sensitivity to change and comparison with clinical tenosynovitis scoring in a 6-month follow-up study. US and clinical assessments of the tendon sheaths of the clinically most affected hand and foot were performed at baseline, 3 and 6 months in 51 patients with RA. Tenosynovitis was assessed using the semi-quantitative scoring system (0-3) proposed by the OMERACT US group and a new quantitative US evaluation (0-100). A sum for US grey scale (GS), colour Doppler (CD) and pixel index (PI), respectively, was calculated for each patient. In 20 patients, intra- and inter-observer agreement was established between two independent investigators. A binary clinical tenosynovitis score was performed, calculating a sum score per patient. The intra- and inter-observer agreements for US tenosynovitis assessments were very good at baseline and for change for GS and CD, but less good for PI. The smallest detectable change was 0.97 for GS, 0.93 for CD and 30.1 for PI. The sensitivity to change from month 0 to 6 was high for GS and CD, and slightly higher than for clinical tenosynovitis score and PI. This study demonstrated an excellent intra- and inter-reader agreement between two investigators for the OMERACT US scoring system for tenosynovitis and a high ability to detect changes over time. Quantitative assessment by PI did not add further information. © The Author 2016. Published by Oxford University Press on behalf of the British Society for Rheumatology. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Wykrzykowska, Joanna J.; Arbab-Zadeh, Armin; Godoy, Gustavo; Miller, Julie M.; Lin, Shezhang; Vavere, Andrea; Paul, Narinder; Niinuma, Hiroyuki; Hoe, John; Brinker, Jeffrey; Khosa, Faisal; Sarwar, Sheryar; Lima, Joao; Clouse, Melvin E.
2012-01-01
OBJECTIVE Evaluations of stents by MDCT from studies performed at single centers have yielded variable results with a high proportion of unassessable stents. The purpose of this study was to evaluate the accuracy of 64-MDCT angiography (MDCTA) in identifying in-stent restenosis in a multicenter trial. MATERIALS AND METHODS The Coronary Evaluation Using Multidetector Spiral Computed Tomography Angiography Using 64 Detectors (CORE-64) Multicenter Trial and Registry evaluated the accuracy of 64-MDCTA in assessing 405 patients referred for coronary angiography. A total of 75 stents in 52 patients were assessed: 48 of 75 stents (64%) in 36 of 52 patients (69%) could be evaluated. The prevalence of in-stent restenosis by quantitative coronary angiography (QCA) in this subgroup was 23% (17/75). Eighty percent of the stents were ≤ 3.0 mm in diameter. RESULTS The overall sensitivity, specificity, positive predictive value, and negative predictive value to detect ≥ 50% in-stent stenosis visually using MDCT compared with QCA were 33.3%, 91.7%, 57.1%, and 80.5%, respectively, with an overall accuracy of 77.1% for the 48 assessable stents. The ability to evaluate stents on MDCTA varied by stent type: Thick-strut stents such as Bx Velocity were assessable in 50% of the cases; Cypher, 62.5% of the cases; and thinner-strut stents such as Taxus, 75% of the cases. We performed quantitative assessment of in-stent contrast attenuation in Hounsfield units and correlated that value with the quantitative percentage of stenosis by QCA. The correlation coefficient between the average attenuation decrease and ≥ 50% stenosis by QCA was 0.25 (p = 0.073). Quantitative assessment failed to improve the accuracy of MDCT over qualitative assessment. CONCLUSION The results of our study showed that 64-MDCT has poor ability to detect in-stent restenosis in small-diameter stents. Evaluability and negative predictive value were better in large-diameter stents. Thus, 64-MDCT may be appropriate for stent assessment in only selected patients. PMID:20028909
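The accuracy figures quoted above follow from standard 2x2 contingency-table arithmetic. The sketch below (Python) shows the calculation with hypothetical counts; these are not the CORE-64 data.

    def diagnostic_accuracy(tp, fp, tn, fn):
        """Standard 2x2 diagnostic accuracy measures, as used to compare MDCTA with QCA."""
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        ppv = tp / (tp + fp)                      # positive predictive value
        npv = tn / (tn + fn)                      # negative predictive value
        accuracy = (tp + tn) / (tp + fp + tn + fn)
        return dict(sensitivity=sensitivity, specificity=specificity,
                    ppv=ppv, npv=npv, accuracy=accuracy)

    # hypothetical counts for illustration only
    print(diagnostic_accuracy(tp=4, fp=3, tn=33, fn=8))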
Quantitative aspects of inductively coupled plasma mass spectrometry
Wagner, Barbara
2016-01-01
Accurate determination of elements in various kinds of samples is essential for many areas, including environmental science, medicine, as well as industry. Inductively coupled plasma mass spectrometry (ICP-MS) is a powerful tool enabling multi-elemental analysis of numerous matrices with high sensitivity and good precision. Various calibration approaches can be used to perform accurate quantitative measurements by ICP-MS. They include the use of pure standards, matrix-matched standards, or relevant certified reference materials, assuring traceability of the reported results. This review critically evaluates the advantages and limitations of different calibration approaches, which are used in quantitative analyses by ICP-MS. Examples of such analyses are provided. This article is part of the themed issue ‘Quantitative mass spectrometry’. PMID:27644971
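As a minimal illustration of the simplest calibration approach mentioned above (pure external standards), the sketch below fits a straight line to assumed standard intensities and inverts it to quantify an unknown; matrix-matched standards or internal-standard correction would apply the same arithmetic to corrected signals. All numbers are assumed.

    import numpy as np

    # assumed external calibration data for one isotope
    std_conc = np.array([0.0, 1.0, 5.0, 10.0, 50.0])                  # ng/mL
    std_signal = np.array([120.0, 980.0, 4770.0, 9540.0, 47200.0])    # counts/s

    slope, intercept = np.polyfit(std_conc, std_signal, deg=1)        # linear calibration
    sample_signal = 15300.0                                           # counts/s for the unknown
    sample_conc = (sample_signal - intercept) / slope                 # invert the calibration
    print(f"estimated concentration: {sample_conc:.2f} ng/mL")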
A Novel Health Evaluation Strategy for Multifunctional Self-Validating Sensors
Shen, Zhengguang; Wang, Qi
2013-01-01
The performance evaluation of sensors is very important in practical applications. In this paper, a theory based on multi-variable information fusion is studied to evaluate the health level of multifunctional sensors. A novel concept of health reliability degree (HRD) is defined to indicate a quantitative health level, which differs from traditional qualitative fault diagnosis. To evaluate the health condition from both local and global perspectives, the HRD of a single sensitive component at multiple time points and the HRD of the overall multifunctional sensor at a single time point are defined, respectively. The HRD methodology is emphasized by using multi-variable data fusion technology coupled with a grey comprehensive evaluation method. In this method, to acquire the distinct importance of each sensitive unit and the sensitivity of different time points, the information entropy and analytic hierarchy process method are used, respectively. In order to verify the feasibility of the proposed strategy, a health evaluating experimental system for multifunctional self-validating sensors was designed. Five different health-level situations were discussed. Results show that the proposed method is feasible: the HRD can be used to quantitatively indicate the health level, and it responds quickly to performance changes of multifunctional sensors. PMID:23291576
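The abstract names information entropy as the means of weighting the sensitive units. The following sketch shows a standard entropy-weighting calculation under assumed readings; it covers only that step, not the AHP weighting of time points or the grey comprehensive evaluation that the HRD method couples it with.

    import numpy as np

    def entropy_weights(X):
        """Entropy-based objective weights for m indicators observed over n samples.

        X : (n, m) array of non-negative indicator values (e.g. normalized deviations
            of each sensitive unit from its reference output).  A larger weight means
            the indicator carries more discriminating information.
        """
        X = np.asarray(X, dtype=float)
        n, m = X.shape
        p = X / X.sum(axis=0, keepdims=True)              # column-wise proportions
        with np.errstate(divide="ignore", invalid="ignore"):
            plogp = np.where(p > 0, p * np.log(p), 0.0)   # treat 0*log(0) as 0
        e = -plogp.sum(axis=0) / np.log(n)                # entropy of each indicator
        d = 1.0 - e                                       # degree of diversification
        return d / d.sum()

    # hypothetical readings: 5 time points x 3 sensitive units
    readings = np.array([[0.9, 0.1, 0.5],
                         [0.8, 0.2, 0.5],
                         [0.7, 0.1, 0.6],
                         [0.9, 0.3, 0.5],
                         [0.2, 0.1, 0.5]])
    print(entropy_weights(readings))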
In a recent Cancer Discovery report, CTD2 researchers at the University of California in San Francisco developed a new quantitative chemical-genetic interaction mapping approach to evaluate drug sensitivity or resistance in isogenic cell lines. Performing a high-throughput screen with isogenic cell lines allowed the researchers to explore the impact of a panel of emerging and established drugs on cells overexpressing a single cancer-associated gene in isolation.
ERIC Educational Resources Information Center
Lee, Jeong W.
Quantitative financial measures were applied to evaluate the performance of the North Dakota Public Employee Retirement System (NDPERS) pension fund portfolios and the Teachers Insurance and Annuity Association (TIAA)/College Retirement Equities Fund (CREF) portfolios, thus providing a relative performance assessment. Ten years of data were…
A Quantitative Evaluation of the Flipped Classroom in a Large Lecture Principles of Economics Course
ERIC Educational Resources Information Center
Balaban, Rita A.; Gilleskie, Donna B.; Tran, Uyen
2016-01-01
This research provides evidence that the flipped classroom instructional format increases student final exam performance, relative to the traditional instructional format, in a large lecture principles of economics course. The authors find that the flipped classroom directly improves performance by 0.2 to 0.7 standard deviations, depending on…
ERIC Educational Resources Information Center
Ngoma, Muhammed; Ntale, Peter Dithan; Abaho, Earnest
2017-01-01
This article evaluates the relationship between social-economic factors, students' factors, student academic goals and performance of students. The study adopts a cross-sectional survey, with largely quantitative approaches. A sample of 950 students was randomly and proportionately drawn from undergraduates in four institutions of higher learning.…
The Long-Term Effects of Florida's Third Grade Retention Policy
ERIC Educational Resources Information Center
Smith, Andre K.
2016-01-01
The purpose of this quantitative causal-comparative study was to evaluate the long-term effects of Florida's Third-Grade Retention policy on low performing students' subsequent academic performance as measured by FCAT reading scores. The study included a random stratified sample of 1500 retained third graders for failure to meet Florida's…
Technological innovation in neurosurgery: a quantitative study.
Marcus, Hani J; Hughes-Hallett, Archie; Kwasnicki, Richard M; Darzi, Ara; Yang, Guang-Zhong; Nandi, Dipankar
2015-07-01
Technological innovation within health care may be defined as the introduction of a new technology that initiates a change in clinical practice. Neurosurgery is a particularly technology-intensive surgical discipline, and new technologies have preceded many of the major advances in operative neurosurgical techniques. The aim of the present study was to quantitatively evaluate technological innovation in neurosurgery using patents and peer-reviewed publications as metrics of technology development and clinical translation, respectively. The authors searched a patent database for articles published between 1960 and 2010 using the Boolean search term "neurosurgeon OR neurosurgical OR neurosurgery." The top 50 performing patent codes were then grouped into technology clusters. Patent and publication growth curves were then generated for these technology clusters. A top-performing technology cluster was then selected as an exemplar for a more detailed analysis of individual patents. In all, 11,672 patents and 208,203 publications related to neurosurgery were identified. The top-performing technology clusters during these 50 years were image-guidance devices, clinical neurophysiology devices, neuromodulation devices, operating microscopes, and endoscopes. In relation to image-guidance and neuromodulation devices, the authors found a highly correlated rapid rise in the numbers of patents and publications, which suggests that these are areas of technology expansion. An in-depth analysis of neuromodulation-device patents revealed that the majority of well-performing patents were related to deep brain stimulation. Patent and publication data may be used to quantitatively evaluate technological innovation in neurosurgery.
van Dijk, R; van Assen, M; Vliegenthart, R; de Bock, G H; van der Harst, P; Oudkerk, M
2017-11-27
Stress cardiovascular magnetic resonance (CMR) perfusion imaging is a promising modality for the evaluation of coronary artery disease (CAD) due to high spatial resolution and absence of radiation. Semi-quantitative and quantitative analyses of CMR perfusion are based on signal-intensity curves produced during the first pass of gadolinium contrast. Multiple semi-quantitative and quantitative parameters have been introduced. Diagnostic performance of these parameters varies extensively among studies, and standardized protocols are lacking. This study aims to determine the diagnostic accuracy of semi-quantitative and quantitative CMR perfusion parameters, compared to multiple reference standards. Pubmed, WebOfScience, and Embase were systematically searched using predefined criteria (3272 articles). A check for duplicates was performed (1967 articles). Eligibility and relevance of the articles were determined by two reviewers using pre-defined criteria. The primary data extraction was performed independently by two researchers with the use of a predefined template. Differences in extracted data were resolved by discussion between the two researchers. The quality of the included studies was assessed using the 'Quality Assessment of Diagnostic Accuracy Studies Tool' (QUADAS-2). True positives, false positives, true negatives, and false negatives were extracted or calculated from the articles. The principal summary measures used to assess diagnostic accuracy were sensitivity, specificity, and area under the receiver operating characteristic curve (AUC). Data were pooled according to analysis territory, reference standard and perfusion parameter. Twenty-two articles were eligible based on the predefined study eligibility criteria. The pooled diagnostic accuracy for segment-, territory- and patient-based analyses showed good diagnostic performance with sensitivity of 0.88, 0.82, and 0.83, specificity of 0.72, 0.83, and 0.76, and AUC of 0.90, 0.84, and 0.87, respectively. In the per-territory analysis, our results show similar diagnostic accuracy comparing anatomical (AUC 0.86(0.83-0.89)) and functional reference standards (AUC 0.88(0.84-0.90)). Only the per-territory analysis sensitivity did not show significant heterogeneity. None of the groups showed signs of publication bias. The clinical value of semi-quantitative and quantitative CMR perfusion analysis remains uncertain due to extensive inter-study heterogeneity and large differences in CMR perfusion acquisition protocols, reference standards, and methods of assessment of myocardial perfusion parameters. For widespread implementation, standardization of CMR perfusion techniques is essential. CRD42016040176.
Rudnick, Paul A.; Clauser, Karl R.; Kilpatrick, Lisa E.; Tchekhovskoi, Dmitrii V.; Neta, Pedatsur; Blonder, Nikša; Billheimer, Dean D.; Blackman, Ronald K.; Bunk, David M.; Cardasis, Helene L.; Ham, Amy-Joan L.; Jaffe, Jacob D.; Kinsinger, Christopher R.; Mesri, Mehdi; Neubert, Thomas A.; Schilling, Birgit; Tabb, David L.; Tegeler, Tony J.; Vega-Montoto, Lorenzo; Variyath, Asokan Mulayath; Wang, Mu; Wang, Pei; Whiteaker, Jeffrey R.; Zimmerman, Lisa J.; Carr, Steven A.; Fisher, Susan J.; Gibson, Bradford W.; Paulovich, Amanda G.; Regnier, Fred E.; Rodriguez, Henry; Spiegelman, Cliff; Tempst, Paul; Liebler, Daniel C.; Stein, Stephen E.
2010-01-01
A major unmet need in LC-MS/MS-based proteomics analyses is a set of tools for quantitative assessment of system performance and evaluation of technical variability. Here we describe 46 system performance metrics for monitoring chromatographic performance, electrospray source stability, MS1 and MS2 signals, dynamic sampling of ions for MS/MS, and peptide identification. Applied to data sets from replicate LC-MS/MS analyses, these metrics displayed consistent, reasonable responses to controlled perturbations. The metrics typically displayed variations less than 10% and thus can reveal even subtle differences in performance of system components. Analyses of data from interlaboratory studies conducted under a common standard operating procedure identified outlier data and provided clues to specific causes. Moreover, interlaboratory variation reflected by the metrics indicates which system components vary the most between laboratories. Application of these metrics enables rational, quantitative quality assessment for proteomics and other LC-MS/MS analytical applications. PMID:19837981
Models and techniques for evaluating the effectiveness of aircraft computing systems
NASA Technical Reports Server (NTRS)
Meyer, J. F.
1978-01-01
The development of system models that can provide a basis for the formulation and evaluation of aircraft computer system effectiveness, the formulation of quantitative measures of system effectiveness, and the development of analytic and simulation techniques for evaluating the effectiveness of a proposed or existing aircraft computer are described. Specific topics covered include: system models; performability evaluation; capability and functional dependence; computation of trajectory set probabilities; and hierarchical modeling of an air transport mission.
Quantitative reactive modeling and verification.
Henzinger, Thomas A
Formal verification aims to improve the quality of software by detecting errors before they do harm. At the basis of formal verification is the logical notion of correctness , which purports to capture whether or not a program behaves as desired. We suggest that the boolean partition of software into correct and incorrect programs falls short of the practical need to assess the behavior of software in a more nuanced fashion against multiple criteria. We therefore propose to introduce quantitative fitness measures for programs, specifically for measuring the function, performance, and robustness of reactive programs such as concurrent processes. This article describes the goals of the ERC Advanced Investigator Project QUAREM. The project aims to build and evaluate a theory of quantitative fitness measures for reactive models. Such a theory must strive to obtain quantitative generalizations of the paradigms that have been success stories in qualitative reactive modeling, such as compositionality, property-preserving abstraction and abstraction refinement, model checking, and synthesis. The theory will be evaluated not only in the context of software and hardware engineering, but also in the context of systems biology. In particular, we will use the quantitative reactive models and fitness measures developed in this project for testing hypotheses about the mechanisms behind data from biological experiments.
Stocker, Elena; Toschkoff, Gregor; Sacher, Stephan; Khinast, Johannes G
2014-11-20
The purpose of this study is to evaluate the use of computer simulations for generating quantitative knowledge as a basis for risk ranking and mechanistic process understanding, as required by ICH Q9 on quality risk management systems. In this specific publication, the main focus is the demonstration of a risk assessment workflow, including a computer simulation for the generation of mechanistic understanding of active tablet coating in a pan coater. Process parameter screening studies are statistically planned under consideration of impacts on a potentially critical quality attribute, i.e., coating mass uniformity. Based on computer simulation data the process failure mode and effects analysis of the risk factors is performed. This results in a quantitative criticality assessment of process parameters and the risk priority evaluation of failure modes. The factor for a quantitative reassessment of the criticality and risk priority is the coefficient of variation, which represents the coating mass uniformity. The major conclusion drawn from this work is a successful demonstration of the integration of computer simulation in the risk management workflow leading to an objective and quantitative risk assessment. Copyright © 2014. Published by Elsevier B.V.
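The reassessment factor named above, the coefficient of variation of the coating mass, is straightforward to compute once per-tablet coating masses are available from the simulation. A minimal sketch with assumed values:

    import numpy as np

    # per-tablet coating masses taken as hypothetical simulation output (mg)
    coating_mass_mg = np.array([5.1, 4.8, 5.4, 4.6, 5.0, 5.3, 4.9, 5.2])

    cv = coating_mass_mg.std(ddof=1) / coating_mass_mg.mean()
    print(f"coating mass CV = {cv:.1%}")   # lower CV = better inter-tablet uniformity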
Sub-band denoising and spline curve fitting method for hemodynamic measurement in perfusion MRI
NASA Astrophysics Data System (ADS)
Lin, Hong-Dun; Huang, Hsiao-Ling; Hsu, Yuan-Yu; Chen, Chi-Chen; Chen, Ing-Yi; Wu, Liang-Chi; Liu, Ren-Shyan; Lin, Kang-Ping
2003-05-01
In clinical research, non-invasive MR perfusion imaging is capable of investigating brain perfusion phenomena via various hemodynamic measurements, such as cerebral blood volume (CBV), cerebral blood flow (CBF), and mean transit time (MTT). These hemodynamic parameters are useful in diagnosing brain disorders such as stroke, infarction and peri-infarct ischemia by further semi-quantitative analysis. However, the accuracy of quantitative analysis is usually limited by poor signal-to-noise ratio in the images. In this paper, we propose a hemodynamic measurement method based upon sub-band denoising and spline curve fitting processes to improve image quality and thereby obtain better hemodynamic quantitative analysis results. Ten sets of perfusion MRI data and corresponding PET images were used to validate the performance. For quantitative comparison, we evaluated the gray/white matter CBF ratio. As a result, the mean gray-to-white matter CBF ratio from the semi-quantitative analysis was 2.10 +/- 0.34. The ratio evaluated from perfusion MRI is comparable to that from the PET technique, with less than 1% difference on average. Furthermore, the method features excellent noise reduction and boundary preservation in image processing, and a short hemodynamic measurement time.
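A minimal sketch of the curve-fitting step is given below: a noisy synthetic first-pass curve is smoothed with a spline before a semi-quantitative index (here a relative CBV taken as the area under the fitted curve) is computed. The sub-band (wavelet-domain) denoising stage of the proposed method is not shown, and all curve values are synthetic.

    import numpy as np
    from scipy.interpolate import UnivariateSpline

    t = np.arange(0, 60, 1.5)                                   # seconds
    true_curve = 8.0 * np.exp(-0.5 * ((t - 25.0) / 6.0) ** 2)   # idealized bolus passage
    noisy = true_curve + np.random.default_rng(1).normal(0, 0.8, t.size)

    spline = UnivariateSpline(t, noisy, s=len(t) * 0.6)         # smoothing spline fit
    fitted = spline(t)

    # relative CBV as the area under the fitted curve (trapezoidal rule)
    rel_cbv = np.sum(0.5 * (fitted[1:] + fitted[:-1]) * np.diff(t))
    print(f"relative CBV (arbitrary units): {rel_cbv:.1f}")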
Fluorescent proteins for quantitative microscopy: important properties and practical evaluation.
Shaner, Nathan Christopher
2014-01-01
More than 20 years after their discovery, fluorescent proteins (FPs) continue to be the subject of massive engineering efforts yielding continued improvements. Among these efforts are many aspects that should be of great interest to quantitative imaging users. With new variants frequently introduced into the research community, "tried and true" FPs that have been relied on for many years may now be due for upgrades to more modern variants. However, the dizzying array of FPs now available can make the initial act of narrowing down the potential choices an intimidating prospect. This chapter describes the FP properties that most strongly impact their performance in quantitative imaging experiments, along with their physical origins as they are currently understood. A workflow for evaluating a given FP in the researcher's chosen experimental system (e.g., a specific cell line) is described. © 2014 Elsevier Inc. All rights reserved.
Quantitative assessment of upper extremities motor function in multiple sclerosis.
Daunoraviciene, Kristina; Ziziene, Jurgita; Griskevicius, Julius; Pauk, Jolanta; Ovcinikova, Agne; Kizlaitiene, Rasa; Kaubrys, Gintaras
2018-05-18
Upper extremity (UE) motor function deficits are commonly noted in multiple sclerosis (MS) patients, and assessing them is challenging because of the lack of consensus regarding their definition. Instrumented biomechanical analysis of upper extremity movements can quantify coordination with different spatiotemporal measures and facilitate disability rating in MS patients. The aim was to identify objective quantitative parameters for more accurate evaluation of UE disability and to relate them to existing clinical scores. Thirty-four MS patients and 24 healthy controls (CG) performed a finger-to-nose test as fast as possible; in addition to clinical evaluation, kinematic parameters of the UE were measured using inertial sensors. Generally, a higher disability score was associated with increases in several temporal parameters, such as slower task performance. The time taken to touch the nose was longer when the task was performed with eyes closed. Time to peak angular velocity changed significantly in MS patients (EDSS > 5.0). Inter-joint coordination decreased significantly in MS patients (EDSS 3.0-5.5). Spatial parameters indicated that the maximal ROM changes were in elbow flexion. Our findings reveal that spatiotemporal parameters are related to UE motor function and MS disability level. Moreover, they facilitate clinical rating by supporting clinical decisions with quantitative data.
Bae, Won C; Ruangchaijatuporn, Thumanoon; Chang, Eric Y; Biswas, Reni; Du, Jiang; Statum, Sheronda; Chung, Christine B
2016-04-01
To evaluate pathology of the triangular fibrocartilage complex (TFCC) using high-resolution morphologic magnetic resonance (MR) imaging, and compare with quantitative MR and biomechanical properties. Five cadaveric wrists (22-70 years) were imaged at 3 T using morphologic (proton density weighted spin echo, PD FS, and 3D spoiled gradient echo, 3D SPGR) and quantitative MR sequences to determine T2 and T1rho properties. In eight geographic regions, morphology of TFC disc and laminae were evaluated for pathology and quantitative MR values. Samples were disarticulated and biomechanical indentation testing was performed on the distal surface of the TFC disc. On morphologic PD SE images, TFC disc pathology included degeneration and tears, while that of the laminae included degeneration, degeneration with superimposed tear, mucinous transformation, and globular calcification. Punctate calcifications were highly visible on 3D SPGR images and found only in pathologic regions. Disc pathology occurred more frequently in proximal regions of the disc than distal regions. Quantitative MR values were lowest in normal samples, and generally higher in pathologic regions. Biomechanical testing demonstrated an inverse relationship, with indentation modulus being high in normal regions with low MR values. The laminae studied were mostly pathologic, and additional normal samples are needed to discern quantitative changes. These results show technical feasibility of morphologic MR, quantitative MR, and biomechanical techniques to characterize pathology of the TFCC. Quantitative MRI may be a suitable surrogate marker of soft tissue mechanical properties, and a useful adjunct to conventional morphologic MR techniques.
Agent-based modeling as a tool for program design and evaluation.
Lawlor, Jennifer A; McGirr, Sara
2017-12-01
Recently, systems thinking and systems science approaches have gained popularity in the field of evaluation; however, there has been relatively little exploration of how evaluators could use quantitative tools to assist in the implementation of systems approaches therein. The purpose of this paper is to explore potential uses of one such quantitative tool, agent-based modeling, in evaluation practice. To this end, we define agent-based modeling and offer potential uses for it in typical evaluation activities, including: engaging stakeholders, selecting an intervention, modeling program theory, setting performance targets, and interpreting evaluation results. We provide demonstrative examples from published agent-based modeling efforts both inside and outside the field of evaluation for each of the evaluative activities discussed. We further describe potential pitfalls of this tool and offer cautions for evaluators who may choose to implement it in their practice. Finally, the article concludes with a discussion of the future of agent-based modeling in evaluation practice and a call for more formal exploration of this tool as well as other approaches to simulation modeling in the field. Copyright © 2017 Elsevier Ltd. All rights reserved.
Evaluating drywells for stormwater management and enhanced aquifer recharge
USDA-ARS's Scientific Manuscript database
Drywells are increasingly used for stormwater management and enhanced aquifer recharge, but only limited research has quantitatively determined drywell performance. Numerical and field experiments were therefore conducted to improve our understanding and ability to characterize drywell behavior. I...
From striving to thriving: systems thinking, strategy, and the performance of safety net hospitals.
Clark, Jonathan; Singer, Sara; Kane, Nancy; Valentine, Melissa
2013-01-01
Safety net hospitals (SNH) have, on average, experienced declining financial margins and faced an elevated risk of closure over the past decade. Despite these challenges, not all SNHs are weakening and some are prospering. These higher-performing SNHs provide substantial care to safety net populations and produce sustainable financial returns. Drawing on the alternative structural positioning and resource-based views, we explore strategic management as a source of performance differences across SNHs. We employ a mixed-method design, blending quantitative and qualitative data and analysis. We measure financial performance using hospital operating margin and quantitatively evaluate its relationship with a limited set of well-defined structural positions. We further evaluate these structures and also explore the internal resources of SNHs based on nine in-depth case studies developed from site visits and extensive interviews. Quantitative results suggest that structural positions alone are not related to performance. Comparative case studies suggest that higher-performing SNH differ in four respects: (1) coordinating patient flow across the care continuum, (2) engaging in partnerships with other providers, (3) managing scope of services, and (4) investing in human capital. On the basis of these findings, we propose a model of strategic action related to systems thinking--the ability to see wholes and interrelationships rather than individual parts alone. Our exploratory findings suggest the need to move beyond generic strategies alone and acknowledge the importance of underlying managerial capabilities. Specifically, our findings suggest that effective strategy is a function of both the internal resources (e.g., managers' systems-thinking capability) and structural positions (e.g., partnerships) of organizations. From this perspective, framing resources and positioning as distinct alternatives misses the nuances of how strategic advantage is actually achieved.
Bo, Xiao W; Li, Xiao L; Guo, Le H; Li, Dan D; Liu, Bo J; Wang, Dan; He, Ya P; Xu, Xiao H
2016-01-01
Objective: To evaluate the usefulness of two-dimensional quantitative ultrasound shear-wave elastography (2D-SWE) [i.e. virtual touch imaging quantification (VTIQ)] in assessing the ablation zone after radiofrequency ablation (RFA) for ex vivo swine livers. Methods: RFA was performed in 10 pieces of fresh ex vivo swine livers with a T20 electrode needle and 20-W output power. Conventional ultrasound, conventional strain elastography (SE) and VTIQ were performed to depict the ablation zone 0 min, 10 min, 30 min and 60 min after ablation. On VTIQ, the ablation zones were evaluated qualitatively by evaluating the shear-wave velocity (SWV) map and quantitatively by measuring the SWV. The ultrasound, SE and VTIQ results were compared against gross pathological and histopathological specimens. Results: VTIQ SWV maps gave more details about the ablation zone, the central necrotic zone appeared as red, lateral necrotic zone as green and transitional zone as light green, from inner to exterior, while the peripheral unablated liver appeared as blue. Conventional ultrasound and SE, however, only marginally depicted the whole ablation zone. The volumes of the whole ablation zone (central necrotic zone + lateral necrotic zone + transitional zone) and necrotic zone (central necrotic zone + lateral necrotic zone) measured by VTIQ showed excellent correlation (r = 0.915, p < 0.001, and 0.856, p = 0.002, respectively) with those by gross pathological specimen, whereas both conventional ultrasound and SE underestimated the volume of the whole ablation zone. The SWV values of the central necrotic zone, lateral necrotic zone, transitional zone and unablated liver parenchyma were 7.54–8.03 m s−1, 5.13–5.28 m s−1, 3.31–3.53 m s−1 and 2.11–2.21 m s−1, respectively (p < 0.001 for all the comparisons). The SWV value for each ablation zone did not change significantly at different observation times within an hour after RFA (all p > 0.05). Conclusion: The quantitative 2D-SWE of VTIQ is useful for the depiction of the ablation zone after RFA and it facilitates discrimination of different areas in the ablation zone qualitatively and quantitatively. This elastography technique might be useful for the therapeutic response evaluation instantly after RFA. Advances in knowledge: A new quantitative 2D-SWE (i.e. VTIQ) for evaluation treatment response after RFA is demonstrated. It facilitates discrimination of the different areas in the ablation zone qualitatively and quantitatively and may be useful for the therapeutic response evaluation instantly after RFA in the future. PMID:26933911
Laparoscopic training using a quantitative assessment and instructional system.
Yamaguchi, T; Nakamura, R
2018-04-28
Laparoscopic surgery requires complex surgical skills; hence, surgeons require regular training to improve their surgical techniques. The quantitative assessment of a surgeon's skills and the provision of feedback are important processes for conducting effective training. The aim of this study was to develop an inexpensive training system that provides automatic technique evaluation and feedback. We detected the instrument using image processing of commercial web camera images and calculated the motion analysis parameters (MAPs) of the instrument to quantify performance features. Upon receiving the results, we developed a method of evaluating the surgeon's skill level. The feedback system was developed using MAPs-based radar charts and scores for determining the skill level. These methods were evaluated using the videos of 38 surgeons performing a suturing task. There were significant differences in MAPs among surgeons; therefore, MAPs can be effectively used to quantify a surgeon's performance features. The results of skill evaluation and feedback differed greatly between skilled and unskilled surgeons, and it was possible to indicate points of improvement for the procedure performed in this study. Furthermore, the results obtained for certain novice surgeons were similar to those obtained for skilled surgeons. This system can be used to assess the skill level of surgeons, independent of the years of experience, and provide an understanding of the individual's current surgical skill level effectively. We conclude that our system is useful as an inexpensive laparoscopic training system that might aid in skill improvement.
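The abstract does not specify the exact MAP definitions, so the sketch below uses generic, illustrative motion analysis parameters (path length, velocity statistics, and a crude smoothness proxy) computed from a hypothetical tracked instrument-tip trajectory.

    import numpy as np

    def motion_analysis_parameters(xy, fs):
        """Toy motion analysis parameters (MAPs) from a tracked instrument-tip trajectory.

        xy : (N, 2) array of tip positions in image coordinates (e.g. obtained from
             web-camera image processing); fs : frame rate in Hz.
        """
        dt = 1.0 / fs
        step = np.diff(xy, axis=0)
        dist = np.linalg.norm(step, axis=1)
        velocity = dist / dt
        accel = np.diff(velocity) / dt
        return {
            "path_length": dist.sum(),                      # total distance travelled by the tip
            "mean_velocity": velocity.mean(),
            "velocity_sd": velocity.std(ddof=1),
            "mean_abs_acceleration": np.abs(accel).mean(),  # crude smoothness proxy
        }

    # hypothetical trajectory: a slightly jittery arc
    theta = np.linspace(0, np.pi / 2, 200)
    jitter = np.random.default_rng(2).normal(0, 0.5, (200, 2))
    track = np.c_[100 * np.cos(theta), 100 * np.sin(theta)] + jitter
    print(motion_analysis_parameters(track, fs=30.0))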
2013-01-01
Numerous quantitative PCR assays for microbial fecal source tracking (MST) have been developed and evaluated in recent years. Widespread application has been hindered by a lack of knowledge regarding the geographical stability and hence applicability of such methods beyond the regional level. This study assessed the performance of five previously reported quantitative PCR assays targeting human-, cattle-, or ruminant-associated Bacteroidetes populations on 280 human and animal fecal samples from 16 countries across six continents. The tested cattle-associated markers were shown to be ruminant-associated. The quantitative distributions of marker concentrations in target and nontarget samples proved to be essential for the assessment of assay performance and were used to establish a new metric for quantitative source-specificity. In general, this study demonstrates that stable target populations required for marker-based MST occur around the globe. Ruminant-associated marker concentrations were strongly correlated with total intestinal Bacteroidetes populations and with each other, indicating that the detected ruminant-associated populations seem to be part of the intestinal core microbiome of ruminants worldwide. Consequently tested ruminant-targeted assays appear to be suitable quantitative MST tools beyond the regional level while the targeted human-associated populations seem to be less prevalent and stable, suggesting potential for improvements in human-targeted methods. PMID:23755882
NASA Technical Reports Server (NTRS)
Bement, Laurence J.; Schimmel, Morry L.
1989-01-01
To overcome serious weaknesses in determining the performance of initiating devices, a novel 'ignitability test method', representing actual design interfaces and ignition materials, has been developed. Ignition device output consists of heat, light, gas, and burning particles. Past research methods have evaluated these parameters individually. This paper describes the development and demonstration of an ignitability test method combining all these parameters, and the quantitative assessment of the ignition performance of two widely used percussion primers, the M42C1-PA101 and the M42C2-793. The ignition materials used for this evaluation were several powder, granule and pellet sizes of black powder and boron-potassium nitrate. This test method should be useful for performance evaluation of all initiator types, quality assurance, evaluation of ignition interfaces, and service life studies of initiators and ignition materials.
Barthassat, Emilienne; Afifi, Faik; Konala, Praveen; Rasch, Helmut; Hirschmann, Michael T
2017-05-08
It was the primary purpose of our study to evaluate the inter- and intra-observer reliability of a standardized SPECT/CT algorithm for evaluating patients with painful primary total hip arthroplasty (THA). The secondary purpose was a comparison of semi-quantitative and 3D volumetric quantification methods for assessment of bone tracer uptake (BTU) in those patients. A novel SPECT/CT localization scheme consisting of 14 femoral and 4 acetabular regions on standardized axial and coronal slices was introduced and evaluated in terms of inter- and intra-observer reliability in 37 consecutive patients with hip pain after THA. BTU for each anatomical region was assessed semi-quantitatively using a color-coded Likert type scale (0-10) and volumetrically quantified using validated software. Two observers interpreted the SPECT/CT findings in all patients twice, in random order, with a six-week interval between interpretations. Semi-quantitative and quantitative measurements were compared in terms of reliability. In addition, the values were correlated using Pearson's correlation. A factorial cluster analysis of BTU was performed to identify clinically relevant regions, which should be grouped and analysed together. The localization scheme showed high inter- and intra-observer reliabilities for all femoral and acetabular regions independent of the measurement method used (semi-quantitative versus 3D volumetric quantitative measurements). A high to moderate correlation between both measurement methods was shown for the distal femur, the proximal femur and the acetabular cup. The factorial cluster analysis showed that the anatomical regions might be summarized into three distinct anatomical regions. These were the proximal femur, the distal femur and the acetabular cup region. The SPECT/CT algorithm for assessment of patients with pain after THA is highly reliable independent of the measurement method used. Three clinically relevant anatomical regions (proximal femoral, distal femoral, acetabular) were identified.
Kaneko, Miki; Yamashita, Yushiro; Iramina, Keiji
2016-01-18
Attention deficit hyperactivity disorder (ADHD) is a neurodevelopmental disorder characterized by symptoms of inattention, hyperactivity, and impulsivity. Soft neurological signs (SNS) are minor neurological abnormalities in motor performance, and are used as one evaluation method for neurodevelopmental delays in children with ADHD. Our aim is to establish a quantitative evaluation system for children with ADHD. We focused on the arm movement called pronation and supination, which is one such soft neurological sign. Thirty-three children with ADHD aged 7-11 years (27 males, six females) and 25 adult participants aged 21-29 years (19 males, six females) participated in our experiments. Our results suggested that the pronation and supination function in children with ADHD tends to lag behind that of typically developing children by several years. From these results, our system has the potential to objectively evaluate the neurodevelopmental delay of children with ADHD.
ERIC Educational Resources Information Center
Blank, Jason M.; McGaughey, Karen J.; Keeling, Elena L.; Thorp, Kristen L.; Shannon, Conor C.; Scaramozzino, Jeanine M.
2016-01-01
Expertise in searching and evaluating scientific literature is a requisite skill of trained scientists and science students, yet information literacy instruction varies greatly among institutions and programs. To ensure that science students acquire information literacy skills, robust methods of assessment are needed. Here, we describe a novel…
ERIC Educational Resources Information Center
Plough, India C.; Briggs, Sarah L.; Van Bonn, Sarah
2010-01-01
The study reported here examined the evaluation criteria used to assess the proficiency and effectiveness of the language produced in an oral performance test of English conducted in an American university context. Empirical methods were used to analyze qualitatively and quantitatively transcriptions of the Oral English Tests (OET) of 44…
ERIC Educational Resources Information Center
Catano, Nancy; Stronge, James H.
2007-01-01
This study used both quantitative and qualitative methods of content analysis to examine principal evaluation instruments and state and professional standards for principals in school districts located in a mid-Atlantic state in the USA. The purposes of this study were to (a) determine the degrees of emphasis that are placed upon leadership and…
Generating One Biometric Feature from Another: Faces from Fingerprints
Ozkaya, Necla; Sagiroglu, Seref
2010-01-01
This study presents a new approach based on artificial neural networks for generating one biometric feature (faces) from another (only fingerprints). An automatic and intelligent system was designed and developed to analyze the relationships among fingerprints and faces and also to model and to improve the existence of the relationships. The new proposed system is the first study that generates all parts of the face including eyebrows, eyes, nose, mouth, ears and face border from only fingerprints. It is also unique and different from similar studies recently presented in the literature with some superior features. The parameter settings of the system were achieved with the help of Taguchi experimental design technique. The performance and accuracy of the system have been evaluated with 10-fold cross validation technique using qualitative evaluation metrics in addition to the expanded quantitative evaluation metrics. Consequently, the results were presented on the basis of the combination of these objective and subjective metrics for illustrating the qualitative properties of the proposed methods as well as a quantitative evaluation of their performances. Experimental results have shown that one biometric feature can be determined from another. These results have once more indicated that there is a strong relationship between fingerprints and faces. PMID:22399877
A Computational Framework for Quantitative Evaluation of Movement during Rehabilitation
NASA Astrophysics Data System (ADS)
Chen, Yinpeng; Duff, Margaret; Lehrer, Nicole; Sundaram, Hari; He, Jiping; Wolf, Steven L.; Rikakis, Thanassis
2011-06-01
This paper presents a novel generalized computational framework for quantitative kinematic evaluation of movement in a rehabilitation clinic setting. The framework integrates clinical knowledge and computational data-driven analysis together in a systematic manner. The framework provides three key benefits to rehabilitation: (a) the resulting continuous normalized measure allows the clinician to monitor movement quality on a fine scale and easily compare impairments across participants, (b) the framework reveals the effect of individual movement components on the composite movement performance, helping the clinician decide the training foci, and (c) the evaluation runs in real-time, which allows the clinician to constantly track a patient's progress and make appropriate adaptations to the therapy protocol. The creation of such an evaluation is difficult because of the sparsity of recorded clinical observations, the high dimensionality of movement, and the high variation in subjects' performance. We address these issues by modeling the evaluation function as a linear combination of multiple normalized kinematic attributes, y = Σ_i w_i φ_i(x_i), and estimating each attribute normalization function φ_i(·) by integrating the distributions of idealized and deviated movement. The weights w_i are derived from a therapist's pairwise comparisons using a modified RankSVM algorithm. We have applied this framework to evaluate upper limb movement in stroke survivors with excellent results: the evaluation results are highly correlated with the therapist's observations.
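A toy version of the composite evaluation y = Σ_i w_i φ_i(x_i) is sketched below. The attribute names, logistic-style normalizations and weights are illustrative placeholders; in the framework itself, φ_i is estimated from distributions of idealized and deviated movement and w_i from therapist pairwise comparisons via a modified RankSVM.

    # Minimal sketch of the composite score y = sum_i w_i * phi_i(x_i).
    # Attributes, ideal values, tolerances and weights below are hypothetical.

    def phi(x, ideal, tolerance):
        """Map a raw attribute to [0, 1]: 1 near the idealized value, falling off smoothly."""
        return 1.0 / (1.0 + ((x - ideal) / tolerance) ** 2)

    attributes = {"trajectory_error_cm": 3.2, "peak_speed_m_s": 0.45, "shoulder_comp_deg": 12.0}
    ideal      = {"trajectory_error_cm": 0.0, "peak_speed_m_s": 0.60, "shoulder_comp_deg": 0.0}
    tol        = {"trajectory_error_cm": 4.0, "peak_speed_m_s": 0.20, "shoulder_comp_deg": 15.0}
    weights    = {"trajectory_error_cm": 0.5, "peak_speed_m_s": 0.2,  "shoulder_comp_deg": 0.3}

    y = sum(weights[k] * phi(attributes[k], ideal[k], tol[k]) for k in attributes)
    print(f"composite movement quality score: {y:.2f}")   # 1.0 would be idealized movement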
Wei, Zheng-mao; Du, Xiang-ke; Huo, Tian-long; Li, Xu-bin; Quan, Guang-nan; Li, Tian-ran; Cheng, Jin; Zhang, Wei-tao
2012-03-01
Quantitative T2 mapping has been a widely used method for the evaluation of pathological cartilage properties, and the histological assessment system of osteoarthritis in the rabbit has been published recently. The aim of the study was to investigate the effectiveness of quantitative T2 mapping evaluation for articular cartilage lesions of a rabbit model of anterior cruciate ligament transection (ACLT) osteoarthritis. Twenty New Zealand White (NZW) rabbits were divided into ACLT surgical group and sham operated group equally. The anterior cruciate ligaments of the rabbits in ACLT group were transected, while the joints were closed intactly in sham operated group. Magnetic resonance (MR) examinations were performed on 3.0T MR unit at week 0, week 6, and week 12. T2 values were computed on GE ADW4.3 workstation. All rabbits were killed at week 13, and left knees were stained with Haematoxylin and Eosin. Semiquantitative histological grading was obtained according to the osteoarthritis cartilage histopathology assessment system. Computerized image analysis was performed to quantitate the immunostained collagen type II. The average MR T2 value of whole left knee cartilage in ACLT surgical group ((29.05±12.01) ms) was significantly higher than that in sham operated group ((24.52±7.97) ms) (P=0.024) at week 6. The average T2 value increased to (32.18±12.79) ms in ACLT group at week 12, but remained near the baseline level ((27.66±8.08) ms) in the sham operated group (P=0.03). The cartilage lesion level of left knee in ACLT group was significantly increased at week 6 (P=0.005) and week 12 (P<0.001). T2 values had positive correlation with histological grading scores, but inverse correlation with optical densities (OD) of type II collagen. This study demonstrated the reliability and practicability of quantitative T2 mapping for the cartilage injury of rabbit ACLT osteoarthritis model.
Sahiner, Ilgin; Akdemir, Umit O; Kocaman, Sinan A; Sahinarslan, Asife; Timurkaynak, Timur; Unlu, Mustafa
2013-02-01
Myocardial perfusion SPECT (MPS) is a noninvasive method commonly used for assessment of the hemodynamic significance of intermediate coronary stenoses. Fractional flow reserve (FFR) measurement is a well-validated invasive method used for the evaluation of intermediate stenoses. We aimed to determine the association between MPS and FFR findings in intermediate degree stenoses and evaluate the added value of quantification in MPS. Fifty-eight patients who underwent intracoronary pressure measurement in the catheterization laboratory to assess the physiological significance of intermediate (40-70%) left anterior descending (LAD) artery lesions, and who also underwent stress myocardial perfusion SPECT either for the assessment of an intermediate stenosis or for suspected coronary artery disease were analyzed retrospectively in the study. Quantitative analysis was performed using the 4DMSPECT program, with visual assessment performed by two experienced nuclear medicine physicians blinded to the angiographic findings. Summed stress scores (SSS) and summed difference scores (SDS) in the LAD artery territory according to the 20 segment model were calculated. A summed stress score of ≥ 3 and an SDS of ≥ 2 were assumed as pathologic, indicating significance of the lesion; a cutoff value of 0.75 was used to define abnormal FFR. Both visual and quantitative assessment results were compared with FFR using Chi-square (χ²) test. The mean time interval between two studies was 13 ± 11 days. FFR was normal in 45 and abnormal in 13 patients. Considering the FFR results as the gold standard method for assessing the significance of the lesion, the sensitivity and specificity of quantitative analysis determining the abnormal flow reserve were 85 and 84%, respectively, while visual analysis had a sensitivity of 77% and a specificity of 51%. There was a good agreement between the observers (κ = 0.856). Summed stress and difference scores demonstrated moderate inverse correlations with FFR values (r = -0.542, p < 0.001 and r = -0.506, p < 0.001, respectively). Quantitative analysis of the myocardial perfusion SPECT increases the specificity in evaluating the significance of intermediate degree coronary lesions.
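The summed-score arithmetic behind SSS and SDS is simple; the sketch below applies it to hypothetical LAD-territory segment scores using the cutoffs quoted above (SSS of at least 3 or SDS of at least 2 considered abnormal). Segment scores are assumed values, not study data.

    import numpy as np

    # per-segment perfusion scores (0 = normal ... 4 = absent uptake), LAD territory only
    stress_scores = np.array([2, 1, 1, 0, 0, 0, 0])   # hypothetical stress scores
    rest_scores   = np.array([1, 0, 0, 0, 0, 0, 0])   # hypothetical rest scores

    sss = stress_scores.sum()          # summed stress score
    srs = rest_scores.sum()            # summed rest score
    sds = sss - srs                    # summed difference score
    print(f"SSS = {sss}, SDS = {sds}, abnormal = {sss >= 3 or sds >= 2}")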
Mano, Junichi; Masubuchi, Tomoko; Hatano, Shuko; Futo, Satoshi; Koiwa, Tomohiro; Minegishi, Yasutaka; Noguchi, Akio; Kondo, Kazunari; Akiyama, Hiroshi; Teshima, Reiko; Kurashima, Takeyo; Takabatake, Reona; Kitta, Kazumi
2013-01-01
In this article, we report a novel real-time PCR-based analytical method for quantitation of the GM maize event LY038. We designed LY038-specific and maize endogenous reference DNA-specific PCR amplifications. After confirming the specificity and linearity of the LY038-specific PCR amplification, we determined the conversion factor required to calculate the weight-based content of GM organism (GMO) in a multilaboratory evaluation. Finally, in order to validate the developed method, an interlaboratory collaborative trial according to the internationally harmonized guidelines was performed with blind DNA samples containing LY038 at the mixing levels of 0, 0.5, 1.0, 5.0 and 10.0%. The precision of the method was evaluated as the RSD of reproducibility (RSDR), and the values obtained were all less than 25%. The limit of quantitation of the method was judged to be 0.5% based on the definition of ISO 24276 guideline. The results from the collaborative trial suggested that the developed quantitative method would be suitable for practical testing of LY038 maize.
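The weight-based quantitation reduces to a copy-number ratio scaled by the conversion factor. The sketch below uses assumed copy numbers and an assumed conversion factor purely for illustration; the actual factor is the one determined in the multilaboratory evaluation.

    # Sketch of the weight-based GMO quantitation: the event-specific and endogenous
    # reference-gene copy numbers are read off their calibration curves, their ratio is
    # divided by the conversion factor Cf, and the result is expressed in percent.
    event_copies = 1.2e3        # LY038-specific copies from the standard curve (assumed)
    endogenous_copies = 4.0e4   # maize endogenous reference gene copies (assumed)
    cf = 0.45                   # conversion factor, assumed value for illustration

    gmo_percent = (event_copies / endogenous_copies) / cf * 100.0
    print(f"LY038 content: {gmo_percent:.2f} % (w/w)")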
Analysis of Radio Frequency Surveillance Systems for Air Traffic Control : Volume 1. Text.
DOT National Transportation Integrated Search
1976-02-01
Performance criteria that afford quantitative evaluation of a variety of current and proposed configurations of the Air Traffic Control Radar Beacon System (ATCRBS) are described in detail. Two analytic system models are developed to allow applicatio...
Evaluating drywells for stormwater management and enhanced aquifer recharge
USDA-ARS's Scientific Manuscript database
Drywells are increasingly used for stormwater management and enhanced aquifer recharge, but only limited research has quantitatively determined the performance of drywells. Numerical and field scale experiments were, therefore, conducted to improve our understanding and ability to characterize the d...
Evaluating drywells for stormwater management and enhanced aquifer recharge
Drywells are increasingly used for stormwater management and enhanced aquifer recharge, but only limited research has quantitatively determined drywells' performance. Numerical and field scale experiments were conducted to characterize the drywell behavior. HYDRUS (2D/3D) was mod...
Fisher, Elliott S; Shortell, Stephen M; Kreindler, Sara A; Van Citters, Aricca D; Larson, Bridget K
2012-11-01
The implementation of accountable care organizations (ACOs), a new health care payment and delivery model designed to improve care and lower costs, is proceeding rapidly. We build on our experience tracking early ACOs to identify the major factors (contract characteristics; structure, capabilities, and activities; and local context) that would be likely to influence ACO formation, implementation, and performance. We then propose how an ACO evaluation program could be structured to guide policy makers and payers in improving the design of ACO contracts, while providing insights for providers on approaches to care transformation that are most likely to be successful in different contexts. We also propose key activities to support evaluation of ACOs in the near term, including tracking their formation, developing a set of performance measures across all ACOs and payers, aggregating those performance data, conducting qualitative and quantitative research, and coordinating different evaluation activities.
Feng, Tao; Wang, Jizhe; Tsui, Benjamin M W
2018-04-01
The goal of this study was to develop and evaluate four post-reconstruction respiratory and cardiac (R&C) motion vector field (MVF) estimation methods for cardiac 4D PET data. In Method 1, the dual R&C motions were estimated directly from the dual R&C gated images. In Method 2, respiratory motion (RM) and cardiac motion (CM) were separately estimated from the respiratory-gated-only and cardiac-gated-only images. The effects of RM on CM estimation were modeled in Method 3 by applying an image-based RM correction on the cardiac gated images before CM estimation; the effects of CM on RM estimation were neglected. Method 4 iteratively models the mutual effects of RM and CM during dual R&C motion estimations. Realistic simulation data were generated for quantitative evaluation of the four methods. Almost noise-free PET projection data were generated from the 4D XCAT phantom with realistic R&C MVF using Monte Carlo simulation. Poisson noise was added to the scaled projection data to generate additional datasets at two further noise levels. All the projection data were reconstructed using a 4D image reconstruction method to obtain dual R&C gated images. The four dual R&C MVF estimation methods were applied to the dual R&C gated images and the accuracy of motion estimation was quantitatively evaluated using the root mean square error (RMSE) of the estimated MVFs. Results show that among the four estimation methods, Method 2 performed the worst for the noise-free case while Method 1 performed the worst for the noisy cases in terms of quantitative accuracy of the estimated MVF. Methods 4 and 3 showed comparable results and achieved RMSEs up to 35% lower than Method 1 for the noisy cases. In conclusion, we have developed and evaluated four different post-reconstruction R&C MVF estimation methods for use in 4D PET imaging. Comparison of the performance of the four methods on simulated data indicates separate R&C estimation with modeling of RM before CM estimation (Method 3) to be the best option for accurate estimation of dual R&C motion in clinical situations. © 2018 American Association of Physicists in Medicine.
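The accuracy metric used above, the RMSE of an estimated motion vector field against the XCAT ground truth, can be computed as in the following sketch; the arrays are random stand-ins for real MVFs.

    import numpy as np

    def mvf_rmse(estimated, truth):
        """Root mean square error between two motion vector fields.

        Both arrays have shape (X, Y, Z, 3): one 3D displacement vector per voxel.
        The error is the Euclidean length of the per-voxel difference vector,
        combined as an RMS over all voxels.
        """
        diff = estimated - truth
        return np.sqrt(np.mean(np.sum(diff ** 2, axis=-1)))

    # hypothetical fields standing in for the ground truth and an estimate
    rng = np.random.default_rng(3)
    truth = rng.normal(0, 2.0, size=(16, 16, 16, 3))          # mm
    estimate = truth + rng.normal(0, 0.5, size=truth.shape)   # estimate with residual error
    print(f"MVF RMSE: {mvf_rmse(estimate, truth):.2f} mm")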
Domestic violence against women in India: A systematic review of a decade of quantitative studies.
Kalokhe, Ameeta; Del Rio, Carlos; Dunkle, Kristin; Stephenson, Rob; Metheny, Nicholas; Paranjape, Anuradha; Sahay, Seema
2017-04-01
Domestic violence (DV) is prevalent among women in India and has been associated with poor mental and physical health. We performed a systematic review of 137 quantitative studies published in the prior decade that directly evaluated the DV experiences of Indian women to summarise the breadth of recent work and identify gaps in the literature. Among studies surveying at least two forms of abuse, a median 41% of women reported experiencing DV during their lifetime and 30% in the past year. We noted substantial inter-study variance in DV prevalence estimates, attributable in part to different study populations and settings, but also to a lack of standardisation, validation, and cultural adaptation of DV survey instruments. There was a paucity of studies evaluating the DV experiences of women over age 50, residing in live-in relationships, same-sex relationships, tribal villages, and of women from the northern regions of India. Additionally, our review highlighted a gap in research evaluating the impact of DV on physical health. We conclude with a research agenda calling for additional qualitative and longitudinal quantitative studies to explore the DV correlates proposed by this quantitative literature to inform the development of a culturally tailored DV scale and prevention strategies.
ERIC Educational Resources Information Center
De Lisle, Jerome
2015-01-01
This article explores the challenge of setting performance standards in a non-Western context. The study is centered on standard-setting practice in the national learning assessments of Trinidad and Tobago. Quantitative and qualitative data from annual evaluations between 2005 and 2009 were compiled, analyzed, and deconstructed. In the mixed…
Gören, Ahmet C; Bilsel, Gökhan; Şimşek, Adnan; Bilsel, Mine; Akçadağ, Fatma; Topal, Kevser; Ozgen, Hasan
2015-05-15
High-performance liquid chromatography with ultraviolet detection (HPLC-UV) and LC-MS/MS methods were developed and validated for quantitative analyses of sodium benzoate and potassium sorbate in foods and beverages. The HPLC-UV and LC-MS/MS methods were compared for quantitative analyses of sodium benzoate and potassium sorbate in a representative ketchup sample. Optimisation of the methods enabled the chromatographic separation of the analytes in less than 4 min. A correlation coefficient of 0.999 was achieved over the measured calibration range for both compounds and methods (HPLC and LC-MS/MS). The uncertainty values of sodium benzoate and potassium sorbate were found to be 0.199 and 0.150 mg/L by HPLC and 0.072 and 0.044 mg/L by LC-MS/MS, respectively. The proficiency testing performance of Turkish accredited laboratories between the years 2005 and 2013 was evaluated and is reported herein. The aim of the proficiency testing scheme was to evaluate the performance of the laboratories in analysing benzoate and sorbate in tomato ketchup. Copyright © 2014 Elsevier Ltd. All rights reserved.
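For readers unfamiliar with how calibration linearity and detection limits are typically derived in such validations, the following sketch fits a straight-line calibration and computes ICH-style detection and quantitation limits from the residual standard deviation. The concentration levels and peak areas are hypothetical, and the paper's own uncertainty budget is more involved than this.

```python
import numpy as np

# Hypothetical calibration standards (mg/L) and peak areas; the paper's
# actual calibration levels are not reproduced here.
conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
area = np.array([51.0, 99.0, 202.0, 498.0, 1003.0, 1995.0])

slope, intercept = np.polyfit(conc, area, 1)
r = np.corrcoef(conc, area)[0, 1]

# Residual standard deviation of the linear fit
residuals = area - (slope * conc + intercept)
s_res = residuals.std(ddof=2)

# ICH-style limits of detection and quantitation
lod = 3.3 * s_res / slope
loq = 10.0 * s_res / slope
print(f"r = {r:.4f}, LOD ~ {lod:.3f} mg/L, LOQ ~ {loq:.3f} mg/L")
```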
An importance-performance analysis of hospital information system attributes: A nurses' perspective.
Cohen, Jason F; Coleman, Emma; Kangethe, Matheri J
2016-02-01
Health workers have numerous concerns about hospital IS (HIS) usage. Addressing these concerns requires understanding the system attributes most important to their satisfaction and productivity. Following a recent HIS implementation, our objective was to identify priorities for managerial intervention based on user evaluations of the performance of the HIS attributes as well as the relative importance of these attributes to user satisfaction and productivity outcomes. We collected data along a set of attributes representing system quality, data quality, information quality, and service quality from 154 nurse users. Their quantitative responses were analysed using the partial least squares approach followed by an importance-performance analysis. Qualitative responses were analysed using thematic analysis to triangulate and supplement the quantitative findings. Two system quality attributes (responsiveness and ease of learning), one information quality attribute (detail), one service quality attribute (sufficient support), and three data quality attributes (records complete, accurate and never missing) were identified as high priorities for intervention. Our application of importance-performance analysis is unique in HIS evaluation and we have illustrated its utility for identifying those system attributes for which underperformance is not acceptable to users and therefore should be high priorities for intervention. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
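A minimal illustration of the importance-performance logic described above follows: attributes with above-average importance but below-average performance are flagged as priorities for intervention. The attribute names and scores are hypothetical, and the study derived importance from a partial least squares model rather than from raw ratings as here.

```python
# Minimal importance-performance analysis (IPA) sketch. Attribute names and
# scores are illustrative only.
attributes = {
    "responsiveness":      {"importance": 0.82, "performance": 3.1},
    "ease_of_learning":    {"importance": 0.78, "performance": 3.3},
    "record_completeness": {"importance": 0.74, "performance": 3.0},
    "screen_layout":       {"importance": 0.35, "performance": 4.2},
}

imp_mean = sum(a["importance"] for a in attributes.values()) / len(attributes)
perf_mean = sum(a["performance"] for a in attributes.values()) / len(attributes)

for name, a in attributes.items():
    if a["importance"] >= imp_mean and a["performance"] < perf_mean:
        quadrant = "concentrate here (high priority)"
    elif a["importance"] >= imp_mean:
        quadrant = "keep up the good work"
    elif a["performance"] < perf_mean:
        quadrant = "low priority"
    else:
        quadrant = "possible overkill"
    print(f"{name}: {quadrant}")
```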
Generalized PSF modeling for optimized quantitation in PET imaging.
Ashrafinia, Saeed; Mohy-Ud-Din, Hassan; Karakatsanis, Nicolas A; Jha, Abhinav K; Casey, Michael E; Kadrmas, Dan J; Rahmim, Arman
2017-06-21
Point-spread function (PSF) modeling offers the ability to account for resolution degrading phenomena within the PET image generation framework. PSF modeling improves resolution and enhances contrast, but at the same time significantly alters image noise properties and induces edge overshoot effects. Thus, studying the effect of PSF modeling on quantitation task performance can be very important. Frameworks explored in the past involved a dichotomy of PSF versus no-PSF modeling. By contrast, the present work focuses on quantitative performance evaluation of standard uptake value (SUV) PET images, while incorporating a wide spectrum of PSF models, including those that under- and over-estimate the true PSF, for the potential of enhanced quantitation of SUVs. The developed framework first analytically models the true PSF, considering a range of resolution degradation phenomena (including photon non-collinearity, inter-crystal penetration and scattering) as present in data acquisitions with modern commercial PET systems. In the context of oncologic liver FDG PET imaging, we generated 200 noisy datasets per image-set (with clinically realistic noise levels) using an XCAT anthropomorphic phantom with liver tumours of varying sizes. These were subsequently reconstructed using the OS-EM algorithm with varying PSF-modelled kernels. We focused on quantitation of both SUVmean and SUVmax, including assessment of contrast recovery coefficients, as well as noise-bias characteristics (including both image roughness and coefficient of variability), for different tumours/iterations/PSF kernels. It was observed that overestimated PSF yielded more accurate contrast recovery for a range of tumours, and typically improved quantitative performance. For a clinically reasonable number of iterations, edge enhancement due to PSF modeling (especially due to over-estimated PSF) was in fact seen to lower SUVmean bias in small tumours. Overall, the results indicate that exactly matched PSF modeling does not offer optimized PET quantitation, and that PSF overestimation may provide enhanced SUV quantitation. Furthermore, generalized PSF modeling may provide a valuable approach for quantitative tasks such as treatment-response assessment and prognostication.
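The contrast recovery coefficient mentioned above is a standard figure of merit; the sketch below shows one common definition (measured contrast relative to true contrast). The numbers are illustrative only and do not come from the study.

```python
def contrast_recovery_coefficient(measured_lesion, measured_background,
                                  true_lesion, true_background):
    """CRC = (measured contrast - 1) / (true contrast - 1): the fraction of
    the true lesion-to-background contrast recovered after reconstruction."""
    measured_contrast = measured_lesion / measured_background
    true_contrast = true_lesion / true_background
    return (measured_contrast - 1.0) / (true_contrast - 1.0)

# Example: a lesion reconstructed at 3.2:1 against a true 4:1 contrast
print(f"CRC = {contrast_recovery_coefficient(3.2, 1.0, 4.0, 1.0):.2f}")  # ~0.73
```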
Quantitative framework for prospective motion correction evaluation.
Pannetier, Nicolas A; Stavrinos, Theano; Ng, Peter; Herbst, Michael; Zaitsev, Maxim; Young, Karl; Matson, Gerald; Schuff, Norbert
2016-02-01
Establishing a framework to evaluate performances of prospective motion correction (PMC) MRI considering motion variability between MRI scans. A framework was developed to obtain quantitative comparisons between different motion correction setups, considering that varying intrinsic motion patterns between acquisitions can induce bias. Intrinsic motion was considered by replaying in a phantom experiment the recorded motion trajectories from subjects. T1-weighted MRI on five volunteers and two different marker fixations (mouth guard and nose bridge fixations) were used to test the framework. Two metrics were investigated to quantify the improvement of the image quality with PMC. Motion patterns vary between subjects as well as between repeated scans within a subject. This variability can be approximated by replaying the motion in a distinct phantom experiment and used as a covariate in models comparing motion corrections. We show that considering the intrinsic motion alters the statistical significance in comparing marker fixations. As an example, two marker fixations, a mouth guard and a nose bridge, were evaluated in terms of their effectiveness for PMC. A mouth guard achieved better PMC performance. Intrinsic motion patterns can bias comparisons between PMC configurations and must be considered for robust evaluations. A framework for evaluating intrinsic motion patterns in PMC is presented. © 2015 Wiley Periodicals, Inc.
Assessment of hospital daily cleaning practices using ATP bioluminescence in a developing country.
Zambrano, Alejandra A; Jones, Alex; Otero, Paula; Ajenjo, Maria Cristina; Labarca, Jaime A
2014-01-01
Visual assessment of surfaces may not be enough to document the level of cleanliness in the hospital setting. It is necessary to introduce quantitative methods to document the results of this practice. The aim was to evaluate the efficacy of hospital terminal cleaning procedures using an adenosine triphosphate (ATP) bioluminescence method in a teaching hospital. During 2008, we evaluated the external and internal housekeeping services using the ATP bioluminescence LIGHTNING MVP™ system (Arquimed). After conducting an initial evaluation, we implemented education on cleaning practices and finally performed a post-intervention evaluation. Using the chi-square method, we compared results before versus after cleaning, the quality of cleaning performed by external versus internal personnel, single versus double terminal cleaning procedures, and results before versus after the intervention. A finding of three RLU or less was considered a clean surface. We performed 198 evaluations in 33 patient units and nine operating rooms. Internal personnel achieved clean surfaces in 25.37% of evaluations before and 80% after the education intervention (p=0.01). In contrast, external personnel obtained 68.8% before and 73.33% after the intervention (p=0.3). This study suggests that visual assessment is not enough to ensure the quality of the process and that it is necessary to document the level of cleanliness by quantitative methods. Copyright © 2014 Elsevier Editora Ltda. All rights reserved.
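The before/after comparisons above rest on a chi-square test of proportions. The sketch below shows one way to run such a test; the counts are hypothetical values chosen only to mirror the reported internal-personnel percentages, since the abstract does not give the raw numbers.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 counts for internal personnel (clean vs. not clean),
# chosen only to approximate the reported 25% vs. 80% clean-surface rates.
clean_before, total_before = 17, 67
clean_after, total_after = 24, 30

table = [
    [clean_before, total_before - clean_before],
    [clean_after, total_after - clean_after],
]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```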
Deng, Ning; Li, Zhenye; Pan, Chao; Duan, Huilong
2015-01-01
The study of complex proteomes places greater demands on quantification methods based on mass spectrometry. In this paper, we present a mass spectrometry label-free quantification tool for complex proteomes, called freeQuant, which effectively integrates quantification with functional analysis. freeQuant consists of two well-integrated modules: label-free quantification and functional analysis with biomedical knowledge. freeQuant supports label-free quantitative analysis that makes full use of tandem mass spectrometry (MS/MS) spectral counts, protein sequence length, shared peptides, and ion intensity. It adopts spectral counting for quantitative analysis and builds a new method for shared peptides to accurately evaluate the abundance of isoforms. For proteins with low abundance, the MS/MS total ion count coupled with the spectral count is included to ensure accurate protein quantification. Furthermore, freeQuant supports large-scale functional annotation for complex proteomes. Mitochondrial proteomes from the mouse heart, the mouse liver, and the human heart were used to evaluate the usability and performance of freeQuant. The evaluation showed that the quantitative algorithms implemented in freeQuant can improve the accuracy of quantification with a better dynamic range.
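As context for the spectral-count quantification mentioned above, the sketch below computes the normalized spectral abundance factor (NSAF), a common length-corrected spectral-count measure. This is a generic illustration, not freeQuant's algorithm, which additionally handles shared peptides and ion intensities.

```python
def nsaf(spectral_counts, lengths):
    """Normalized spectral abundance factor: spectral counts divided by
    protein length, normalized to sum to one across the protein list."""
    saf = [c / l for c, l in zip(spectral_counts, lengths)]
    total = sum(saf)
    return [s / total for s in saf]

# Three hypothetical proteins: spectral counts and sequence lengths
print(nsaf([120, 45, 10], [450, 300, 150]))
```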
A quantitative evaluation of the high elbow technique in front crawl.
Suito, Hiroshi; Nunome, Hiroyuki; Ikegami, Yasuo
2017-07-01
Coaches often instruct swimmers to keep the elbow in a high position (high elbow position) during the early phase of the underwater stroke motion (pull phase) in front crawl; however, the high elbow position has never been quantitatively evaluated. The aims of this study were (1) to quantitatively evaluate the "high elbow" position, (2) to clarify the relationship between the high elbow position and the required upper limb configuration and (3) to examine the efficacy of the high elbow position on the resultant swimming velocity. Sixteen highly skilled and 6 novice male swimmers performed 25 m front crawl with maximal effort and their 3-dimensional arm stroke motion was captured at 60 Hz. An attempt was made to develop a new index to evaluate the high elbow position (Ihe: high elbow index) using 3-dimensional coordinates of the shoulder, elbow and wrist joints. Ihe of skilled swimmers moderately correlated with the average shoulder internal rotation angle (r = -0.652, P < 0.01) and swimming velocity (r = -0.683, P < 0.01) during the pull phase. These results indicate that Ihe is a useful index for evaluating high elbow arm stroke technique during the pull phase in front crawl.
Assessment of calcium scoring performance in cardiac computed tomography.
Ulzheimer, Stefan; Kalender, Willi A
2003-03-01
Electron beam tomography (EBT) has been used for cardiac diagnosis and the quantitative assessment of coronary calcium since the late 1980s. The introduction of mechanical multi-slice spiral CT (MSCT) scanners with shorter rotation times opened new possibilities of cardiac imaging with conventional CT scanners. The purpose of this work was to qualitatively and quantitatively evaluate the performance of EBT and MSCT for the task of coronary artery calcium imaging as a function of acquisition protocol, heart rate, spiral reconstruction algorithm (where applicable) and calcium scoring method. A cardiac CT semi-anthropomorphic phantom was designed and manufactured for the investigation of all relevant image quality parameters in cardiac CT. This phantom includes various test objects, some of which can be moved within the anthropomorphic phantom in a manner that mimics realistic heart motion. These tools were used to qualitatively and quantitatively demonstrate the accuracy of coronary calcium imaging using typical protocols for an electron beam scanner (Evolution C-150XP, Imatron, South San Francisco, Calif.) and a 0.5-s four-slice spiral CT scanner (Sensation 4, Siemens, Erlangen, Germany). A special focus was put on the method of quantifying coronary calcium, and three scoring systems were evaluated (Agatston, volume, and mass scoring). Good reproducibility in coronary calcium scoring is always the result of a combination of high temporal and spatial resolution; consequently, thin-slice protocols in combination with retrospective gating on MSCT scanners yielded the best results. The Agatston score was found to be the least reproducible scoring method. The hydroxyapatite mass, being more reproducible and comparable across different scanners and being a physical quantitative measure, appears to be the method of choice for future clinical studies. The hydroxyapatite mass is highly correlated with the Agatston score. The introduced phantoms can be used to quantitatively assess the performance characteristics of, for example, different scanners, reconstruction algorithms, and quantification methods in cardiac CT. This is especially important for quantitative tasks, such as the determination of the amount of calcium in the coronary arteries, to achieve high and constant quality in this field.
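For orientation, the following sketch implements the conventional per-slice Agatston scoring rule (lesion area times a weight set by peak attenuation). It is a simplified illustration assuming a 130 HU threshold and ignoring minimum-lesion-area and slice-thickness conventions; it is not the phantom study's own implementation.

```python
import numpy as np
from scipy import ndimage

def agatston_slice_score(hu_slice, pixel_area_mm2, threshold=130):
    """Agatston contribution of one axial slice: each connected lesion above
    130 HU contributes its area (mm^2) times a weight set by its peak HU."""
    mask = hu_slice >= threshold
    labels, n = ndimage.label(mask)
    score = 0.0
    for lesion in range(1, n + 1):
        lesion_mask = labels == lesion
        area = lesion_mask.sum() * pixel_area_mm2
        peak = hu_slice[lesion_mask].max()
        if peak < 200:
            weight = 1
        elif peak < 300:
            weight = 2
        elif peak < 400:
            weight = 3
        else:
            weight = 4
        score += area * weight
    return score

# Tiny synthetic example: one 4-pixel lesion peaking at 320 HU
slice_hu = np.zeros((10, 10))
slice_hu[4:6, 4:6] = [[150, 200], [320, 180]]
print(agatston_slice_score(slice_hu, pixel_area_mm2=0.25))  # 1.0 mm^2 * weight 3
```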
Qualitative and quantitative ultrasound attributes of maternal-foetal structures in pregnant ewes.
da Silva, Pda; Uscategui, Rar; Santos, Vjc; Taira, A R; Mariano, Rsg; Rodrigues, Mgk; Simões, Apr; Maronezi, M C; Avante, M L; Vicente, Wrr; Feliciano, Mar
2018-06-01
The aim of this study was to examine foetal organs and placental tissue to establish a correlation between the changes in the composition of these structures associated with their maturation and the ultrasonographic characteristics of the images. Twenty-four pregnant ewes were included in the study. Ultrasonographic assessments were performed in B-mode from the ninth gestational week until parturition. The lungs, liver and kidneys of foetuses and placentomes were located in transverse and longitudinal sections to evaluate the echogenicity (hypoechoic, isoechoic, hyperechoic or mixed) and echotexture (homogeneous or heterogeneous) of the tissues of interest. For quantitative evaluation of the ultrasonographic characteristics, a computerized image analysis was performed using commercial software (Image ProPlus®). Mean numerical pixel values (NPVs), pixel heterogeneity (standard deviation of NPVs) and minimum and maximum pixel values were measured by selecting five circular regions of interest in each assessed tissue. All evaluated tissues presented significant variations in the NPVs, except for the liver. Pulmonary NPVmean, NPVmin and NPVmax decreased gradually through the gestational weeks. The renal parameters gradually decreased with the advancement of the gestational weeks until the 17th week and later stabilized. The placentome NPVmean, NPVmin and NPVmax decreased gradually over the course of the weeks. The hepatic tissue did not show echogenicity or echotexture variations and presented medium echogenicity and homogeneous echotexture throughout the experimental period. It was concluded that numerical pixel evaluation of maternal-foetal tissues was feasible and allowed the identification of quantitative ultrasonographic characteristics showing changes in echogenicity related to gestational age. © 2018 Blackwell Verlag GmbH.
The Quantitative Science of Evaluating Imaging Evidence.
Genders, Tessa S S; Ferket, Bart S; Hunink, M G Myriam
2017-03-01
Cardiovascular diagnostic imaging tests are increasingly used in everyday clinical practice, but are often imperfect, just like any other diagnostic test. The performance of a cardiovascular diagnostic imaging test is usually expressed in terms of sensitivity and specificity compared with the reference standard (gold standard) for diagnosing the disease. However, evidence-based application of a diagnostic test also requires knowledge about the pre-test probability of disease, the benefit of making a correct diagnosis, the harm caused by false-positive imaging test results, and potential adverse effects of performing the test itself. To assist in clinical decision making regarding appropriate use of cardiovascular diagnostic imaging tests, we reviewed quantitative concepts related to diagnostic performance (e.g., sensitivity, specificity, predictive values, likelihood ratios), as well as possible biases and solutions in diagnostic performance studies, Bayesian principles, and the threshold approach to decision making. Copyright © 2017 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
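The Bayesian reasoning summarized above can be made concrete with a short sketch converting a pre-test probability into a post-test probability via likelihood ratios; the sensitivity, specificity, and prevalence values are illustrative only.

```python
def post_test_probability(pre_test_prob, sensitivity, specificity, positive=True):
    """Update disease probability after a test using likelihood ratios
    (Bayes' rule on the odds scale)."""
    if positive:
        lr = sensitivity / (1 - specificity)          # LR+
    else:
        lr = (1 - sensitivity) / specificity          # LR-
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

# Example: 30% pre-test probability, test with 90% sensitivity / 80% specificity
print(f"{post_test_probability(0.30, 0.90, 0.80, positive=True):.2f}")  # ~0.66
```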
Quantitative criteria for assessment of gamma-ray imager performance
NASA Astrophysics Data System (ADS)
Gottesman, Steve; Keller, Kristi; Malik, Hans
2015-08-01
In recent years, gamma-ray imagers such as the GammaCam™ and Polaris have demonstrated good imaging performance in the field. Imager performance is often summarized as "resolution", either angular or spatial at some distance from the imager; however, the definition of resolution is not always related to the ability to image an object. It is difficult to quantitatively compare imagers without a common definition of image quality. This paper examines three categories of definition: point source, line source, and area source. It discusses the details of those definitions and which ones are more relevant for different situations. Metrics such as full width at half maximum (FWHM), variations on the Rayleigh criterion, and some analogous to the National Imagery Interpretability Rating Scale (NIIRS) are discussed. The performance against these metrics is evaluated for a high-resolution coded aperture imager modeled using Monte Carlo N-Particle (MCNP), and for a medium-resolution imager measured in the lab.
Lasnon, Charline; Quak, Elske; Briand, Mélanie; Gu, Zheng; Louis, Marie-Hélène; Aide, Nicolas
2013-01-17
The use of iodinated contrast media in small-animal positron emission tomography (PET)/computed tomography (CT) could improve anatomic referencing and tumor delineation but may introduce inaccuracies in the attenuation correction of the PET images. This study evaluated the diagnostic performance and accuracy of quantitative values in contrast-enhanced small-animal PET/CT (CEPET/CT) as compared to unenhanced small animal PET/CT (UEPET/CT). Firstly, a NEMA NU 4-2008 phantom (filled with 18F-FDG or 18F-FDG plus contrast media) and a homemade phantom, mimicking an abdominal tumor surrounded by water or contrast media, were used to evaluate the impact of iodinated contrast media on the image quality parameters and accuracy of quantitative values for a pertinent-sized target. Secondly, two studies in 22 abdominal tumor-bearing mice and rats were performed. The first animal experiment studied the impact of a dual-contrast media protocol, comprising the intravenous injection of a long-lasting contrast agent mixed with 18F-FDG and the intraperitoneal injection of contrast media, on tumor delineation and the accuracy of quantitative values. The second animal experiment compared the diagnostic performance and quantitative values of CEPET/CT versus UEPET/CT by sacrificing the animals after the tracer uptake period and imaging them before and after intraperitoneal injection of contrast media. There was minimal impact on IQ parameters (%SDunif and spillover ratios in air and water) when the NEMA NU 4-2008 phantom was filled with 18F-FDG plus contrast media. In the homemade phantom, measured activity was similar to true activity (-0.02%) and overestimated by 10.30% when vials were surrounded by water or by an iodine solution, respectively. The first animal experiment showed excellent tumor delineation and a good correlation between small-animal (SA)-PET and ex vivo quantification (r2 = 0.87, P < 0.0001). The second animal experiment showed a good correlation between CEPET/CT and UEPET/CT quantitative values (r2 = 0.99, P < 0.0001). Receiver operating characteristic analysis demonstrated better diagnostic accuracy of CEPET/CT versus UEPET/CT (senior researcher, area under the curve (AUC) 0.96 versus 0.77, P = 0.004; junior researcher, AUC 0.78 versus 0.58, P = 0.004). The use of iodinated contrast media for small-animal PET imaging significantly improves tumor delineation and diagnostic performance, without significant alteration of SA-PET quantitative accuracy and NEMA NU 4-2008 IQ parameters.
42 CFR 431.424 - Evaluation requirements.
Code of Federal Regulations, 2012 CFR
2012-10-01
... evaluations. Demonstration evaluations will include the following: (1) Quantitative research methods. (i... of appropriate evaluation strategies (including experimental and other quantitative and qualitative... demonstration. (ii) CMS will consider alternative evaluation designs when quantitative designs are technically...
42 CFR 431.424 - Evaluation requirements.
Code of Federal Regulations, 2013 CFR
2013-10-01
... evaluations. Demonstration evaluations will include the following: (1) Quantitative research methods. (i... of appropriate evaluation strategies (including experimental and other quantitative and qualitative... demonstration. (ii) CMS will consider alternative evaluation designs when quantitative designs are technically...
42 CFR 431.424 - Evaluation requirements.
Code of Federal Regulations, 2014 CFR
2014-10-01
... evaluations. Demonstration evaluations will include the following: (1) Quantitative research methods. (i... of appropriate evaluation strategies (including experimental and other quantitative and qualitative... demonstration. (ii) CMS will consider alternative evaluation designs when quantitative designs are technically...
Nyflot, Matthew J; Yang, Fei; Byrd, Darrin; Bowen, Stephen R; Sandison, George A; Kinahan, Paul E
2015-10-01
Image heterogeneity metrics such as textural features are an active area of research for evaluating clinical outcomes with positron emission tomography (PET) imaging and other modalities. However, the effects of stochastic image acquisition noise on these metrics are poorly understood. We performed a simulation study by generating 50 statistically independent PET images of the NEMA IQ phantom with realistic noise and resolution properties. Heterogeneity metrics based on gray-level intensity histograms, co-occurrence matrices, neighborhood difference matrices, and zone size matrices were evaluated within regions of interest surrounding the lesions. The impact of stochastic variability was evaluated with percent difference from the mean of the 50 realizations, coefficient of variation and estimated sample size for clinical trials. Additionally, sensitivity studies were performed to simulate the effects of patient size and image reconstruction method on the quantitative performance of these metrics. Complex trends in variability were revealed as a function of textural feature, lesion size, patient size, and reconstruction parameters. In conclusion, the sensitivity of PET textural features to normal stochastic image variation and imaging parameters can be large and is feature-dependent. Standards are needed to ensure that prospective studies that incorporate textural features are properly designed to measure true effects that may impact clinical outcomes.
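Two of the variability measures named above, the coefficient of variation across realizations and a sample-size estimate for a clinical trial, can be sketched as follows. The feature values are simulated stand-ins, and the sample-size formula is the standard two-sample normal approximation rather than the paper's specific calculation.

```python
import numpy as np
from scipy.stats import norm

# Simulated values of one textural feature over 50 independent noise
# realizations (stand-ins for the study's data).
rng = np.random.default_rng(1)
feature = rng.normal(loc=5.0, scale=0.5, size=50)

cov = feature.std(ddof=1) / feature.mean()   # coefficient of variation

def n_per_arm(sigma, delta, alpha=0.05, power=0.80):
    """Two-sample sample size per arm (normal approximation) to detect a
    mean difference delta given standard deviation sigma."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return int(np.ceil(2 * (z_a + z_b) ** 2 * sigma ** 2 / delta ** 2))

# e.g. sample size to detect a 10% change in the feature mean
print(f"CoV = {cov:.3f}, n per arm = "
      f"{n_per_arm(feature.std(ddof=1), 0.1 * feature.mean())}")
```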
An analysis of radio frequency surveillance systems for air traffic control volume II: appendixes
DOT National Transportation Integrated Search
1976-02-01
Performance criteria that afford quantitative evaluation of a variety of current and proposed configurations of the Air Traffic Control Radar Beacon System (ATCRBS) are described in detail. Two analytic system models are developed to allow applicatio...
EVALUATION OF METHODS FOR SAMPLING, RECOVERY, AND ENUMERATION OF BACTERIA APPLIED TO THE PHYLLOPANE
Determining the fate and survival of genetically engineered microorganisms released into the environment requires the development and application of accurate and practical methods of detection and enumeration. Several experiments were performed to examine quantitative recovery met...
Classification of cassava genotypes based on qualitative and quantitative data.
Oliveira, E J; Oliveira Filho, O S; Santos, V S
2015-02-02
We evaluated the genetic variation of cassava accessions based on qualitative (binomial and multicategorical) and quantitative traits (continuous). We characterized 95 accessions obtained from the Cassava Germplasm Bank of Embrapa Mandioca e Fruticultura; we evaluated these accessions for 13 continuous, 10 binary, and 25 multicategorical traits. First, we analyzed the accessions based only on quantitative traits; next, we conducted joint analysis (qualitative and quantitative traits) based on the Ward-MLM method, which performs clustering in two stages. According to the pseudo-F, pseudo-t2, and maximum likelihood criteria, we identified five and four groups based on quantitative trait and joint analysis, respectively. The smaller number of groups identified based on joint analysis may be related to the nature of the data. On the other hand, quantitative data are more subject to environmental effects in the phenotype expression; this results in the absence of genetic differences, thereby contributing to greater differentiation among accessions. For most of the accessions, the maximum probability of classification was >0.90, independent of the trait analyzed, indicating a good fit of the clustering method. Differences in clustering according to the type of data implied that analysis of quantitative and qualitative traits in cassava germplasm might explore different genomic regions. On the other hand, when joint analysis was used, the means and ranges of genetic distances were high, indicating that the Ward-MLM method is very useful for clustering genotypes when there are several phenotypic traits, such as in the case of genetic resources and breeding programs.
Geith, Tobias; Schmidt, Gerwin; Biffar, Andreas; Dietrich, Olaf; Dürr, Hans Roland; Reiser, Maximilian; Baur-Melnyk, Andrea
2012-11-01
The objective of our study was to compare the diagnostic value of qualitative diffusion-weighted imaging (DWI), quantitative DWI, and chemical-shift imaging in a single prospective cohort of patients with acute osteoporotic and malignant vertebral fractures. The study group was composed of patients with 26 osteoporotic vertebral fractures (18 women, eight men; mean age, 69 years; age range, 31 years 6 months to 86 years 2 months) and 20 malignant vertebral fractures (nine women, 11 men; mean age, 63.4 years; age range, 24 years 8 months to 86 years 4 months). T1-weighted, STIR, and T2-weighted sequences were acquired at 1.5 T. A DW reverse fast imaging with steady-state free precession (PSIF) sequence at different delta values was evaluated qualitatively. A DW echo-planar imaging (EPI) sequence and a DW single-shot turbo spin-echo (TSE) sequence at different b values were evaluated qualitatively and quantitatively using the apparent diffusion coefficient. Opposed-phase sequences were used to assess signal intensity qualitatively. The signal loss between in- and opposed-phase images was determined quantitatively. Two-tailed Fisher exact test, Mann-Whitney test, and receiver operating characteristic analysis were performed. Sensitivities, specificities, and accuracies were determined. Qualitative DW-PSIF imaging (delta = 3 ms) showed the best performance for distinguishing between benign and malignant fractures (sensitivity, 100%; specificity, 88.5%; accuracy, 93.5%). Qualitative DW-EPI (b = 50 s/mm² [p = 1.00]; b = 250 s/mm² [p = 0.50]) and DW single-shot TSE imaging (b = 100 s/mm² [p = 1.00]; b = 250 s/mm² [p = 0.18]; b = 400 s/mm² [p = 0.18]; b = 600 s/mm² [p = 0.39]) did not indicate significant differences between benign and malignant fractures. DW-EPI using a b value of 500 s/mm² (p = 0.01) indicated significant differences between benign and malignant vertebral fractures. Quantitative DW-EPI (p = 0.09) and qualitative opposed-phase imaging (p = 0.06) did not exhibit significant differences, whereas quantitative DW single-shot TSE imaging (p = 0.002) and quantitative chemical-shift imaging (p = 0.01) showed significant differences between benign and malignant fractures. The DW-PSIF sequence (delta = 3 ms) had the highest accuracy in differentiating benign from malignant vertebral fractures. Quantitative chemical-shift imaging and quantitative DW single-shot TSE imaging had a lower accuracy than DW-PSIF imaging because of a large overlap. Qualitative assessment of opposed-phase, DW-EPI, and DW single-shot TSE sequences and quantitative assessment of the DW-EPI sequence were not suitable for distinguishing between benign and malignant vertebral fractures.
NASA Astrophysics Data System (ADS)
Bindschadler, Michael; Modgil, Dimple; Branch, Kelley R.; La Riviere, Patrick J.; Alessio, Adam M.
2014-04-01
Myocardial blood flow (MBF) can be estimated from dynamic contrast enhanced (DCE) cardiac CT acquisitions, leading to quantitative assessment of regional perfusion. The need for low radiation dose and the lack of consensus on MBF estimation methods motivate this study to refine the selection of acquisition protocols and models for CT-derived MBF. DCE cardiac CT acquisitions were simulated for a range of flow states (MBF = 0.5, 1, 2, 3 ml (min g)⁻¹, cardiac output = 3, 5, 8 L min⁻¹). Patient kinetics were generated by a mathematical model of iodine exchange incorporating numerous physiological features including heterogeneous microvascular flow, permeability and capillary contrast gradients. CT acquisitions were simulated for multiple realizations of realistic x-ray flux levels. CT acquisitions that reduce radiation exposure were implemented by varying both temporal sampling (1, 2, and 3 s sampling intervals) and tube currents (140, 70, and 25 mAs). For all acquisitions, we compared three quantitative MBF estimation methods (two-compartment model, an axially-distributed model, and the adiabatic approximation to the tissue homogeneous model) and a qualitative slope-based method. In total, over 11 000 time-attenuation curves were used to evaluate MBF estimation in multiple patient and imaging scenarios. After iodine-based beam hardening correction, the slope method consistently underestimated flow by on average 47.5%, and the quantitative models provided estimates with less than 6.5% average bias and increasing variance with increasing dose reductions. The three quantitative models performed equally well, offering estimates with essentially identical root mean squared error (RMSE) for matched acquisitions. MBF estimates using the qualitative slope method were inferior in terms of bias and RMSE compared to the quantitative methods. MBF estimate error was equal at matched dose reductions for all quantitative methods and the range of techniques evaluated. This suggests that there is no particular advantage of one quantitative estimation method over another, nor of performing dose reduction via tube current reduction rather than via reduced temporal sampling. These data are important for optimizing implementation of cardiac dynamic CT in clinical practice and in prospective CT MBF trials.
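As a rough illustration of the qualitative slope-based method contrasted with the compartmental models above, the sketch below computes a flow index as the maximum tissue upslope divided by the arterial peak. This is one common form of the slope method; the study's exact implementation, units, and beam-hardening handling are not reproduced, and the curves are synthetic.

```python
import numpy as np

def upslope_flow_index(tissue_tac, aif, dt):
    """Slope-method flow index: maximum upslope of the tissue time-attenuation
    curve divided by the peak of the arterial input function."""
    tissue_slope = np.max(np.diff(tissue_tac) / dt)
    return tissue_slope / np.max(aif)

# Synthetic curves (HU vs. time), purely for illustration
t = np.arange(0, 30, 1.0)                        # 1 s sampling
aif = 300 * np.exp(-0.5 * (t - 10) ** 2 / 9)     # arterial bolus
tissue = 30 * (1 - np.exp(-np.clip(t - 8, 0, None) / 6))
print(f"flow index = {upslope_flow_index(tissue, aif, dt=1.0):.4f} 1/s")
```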
Bae, Won C.; Ruangchaijatuporn, Thumanoon; Chang, Eric Y; Biswas, Reni; Du, Jiang; Statum, Sheronda
2016-01-01
Objective: To evaluate pathology of the triangular fibrocartilage complex (TFCC) using high-resolution morphologic magnetic resonance (MR) imaging, and to compare it with quantitative MR and biomechanical properties. Materials and Methods: Five cadaveric wrists (22 to 70 yrs) were imaged at 3T using morphologic (proton density weighted spin echo, PD FS, and 3D spoiled gradient echo, 3D SPGR) and quantitative MR sequences to determine T2 and T1rho properties. In eight geographic regions, the morphology of the TFC disc and laminae was evaluated for pathology and quantitative MR values. Samples were disarticulated and biomechanical indentation testing was performed on the distal surface of the TFC disc. Results: On morphologic PD SE images, TFC disc pathology included degeneration and tears, while that of the laminae included degeneration, degeneration with superimposed tear, mucinous transformation, and globular calcification. Punctate calcifications were highly visible on 3D SPGR images and found only in pathologic regions. Disc pathology occurred more frequently in proximal regions of the disc than distal regions. Quantitative MR values were lowest in normal samples, and generally higher in pathologic regions. Biomechanical testing demonstrated an inverse relationship, with indentation modulus being high in normal regions with low MR values. The laminae studied were mostly pathologic, and additional normal samples are needed to discern quantitative changes. Conclusion: These results show the technical feasibility of morphologic MR, quantitative MR, and biomechanical techniques to characterize pathology of the TFCC. Quantitative MRI may be a suitable surrogate marker of soft tissue mechanical properties, and a useful adjunct to conventional morphologic MR techniques. PMID:26691643
Evaluation of an interactive, case-based review session in teaching medical microbiology.
Blewett, Earl L; Kisamore, Jennifer L
2009-08-27
Oklahoma State University-Center for Health Sciences (OSU-CHS) has replaced its microbiology wet laboratory with a variety of tutorials including a case-based interactive session called Microbial Jeopardy!. The question remains whether the time spent by students and faculty on the interactive, case-based tutorial is worthwhile. This study was designed to address this question by analyzing both student performance data and assessing students' perceptions regarding the tutorial. Both quantitative and qualitative data were used in the current study. Part One of the study involved assessing student performance using archival records of seven case-based exam questions used in the 2004, 2005, 2006, and 2007 OSU-CHS Medical Microbiology course. Two-sample t-tests for proportions were used to test for significant differences related to tutorial usage. Part Two used both quantitative and qualitative means to assess students' perceptions of the Microbial Jeopardy! session. First, a retrospective survey was administered to students who were enrolled in Medical Microbiology in 2006 or 2007. Second, responses to open-ended items from the 2008 course evaluations were reviewed for comments regarding the Microbial Jeopardy! session. Both student performance and student perception data support continued use of the tutorials. Quantitative and qualitative data converge to suggest that students like and learn from the interactive, case-based session. The case-based tutorial appears to improve student performance on case-based exam questions. Additionally, students perceived the tutorial as helpful in preparing for exam questions and reviewing the course material. The time commitment for use of the case-based tutorial appears to be justified.
Libong, Danielle; Bouchonnet, Stéphane; Ricordel, Ivan
2003-01-01
A gas chromatography-ion trap tandem mass spectrometry (GC-ion trap MS-MS) method for detection and quantitation of LSD in whole blood is presented. The sample preparation process, including a solid-phase extraction step with Bond Elut cartridges, was performed with 2 mL of whole blood. Eight microliters of the purified extract was injected with a cold on-column injection method. Positive chemical ionization was performed using acetonitrile as reagent gas; LSD was detected in the MS-MS mode. The chromatograms obtained from blood extracts showed the great selectivity of the method. GC-MS quantitation was performed using lysergic acid methylpropylamide as the internal standard. The response of the MS was linear for concentrations ranging from 0.02 ng/mL (detection threshold) to 10.0 ng/mL. Several parameters such as the choice of the capillary column, the choice of the internal standard and that of the ionization mode (positive CI vs. EI) were rationalized. Decomposition pathways under both ionization modes were studied. Within-day and between-day stability were evaluated.
Colagiorgio, P; Romano, F; Sardi, F; Moraschini, M; Sozzi, A; Bejor, M; Ricevuti, G; Buizza, A; Ramat, S
2014-01-01
The problem of a correct fall risk assessment is becoming more and more critical with the ageing of the population. In spite of the available approaches allowing a quantitative analysis of the human movement control system's performance, the clinical assessment and diagnostic approach to fall risk assessment still relies mostly on non-quantitative exams, such as clinical scales. This work documents our current effort to develop a novel method to assess balance control abilities through a system implementing an automatic evaluation of exercises drawn from balance assessment scales. Our aim is to overcome the classical limits characterizing these scales i.e. limited granularity and inter-/intra-examiner reliability, to obtain objective scores and more detailed information allowing to predict fall risk. We used Microsoft Kinect to record subjects' movements while performing challenging exercises drawn from clinical balance scales. We then computed a set of parameters quantifying the execution of the exercises and fed them to a supervised classifier to perform a classification based on the clinical score. We obtained a good accuracy (~82%) and especially a high sensitivity (~83%).
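The classification step described above can be sketched with a generic supervised learner applied to Kinect-derived movement parameters, as below. The feature matrix, labels, and choice of a random forest are all hypothetical, since the abstract does not specify the classifier.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical data: 100 subjects x 12 movement parameters extracted from
# Kinect recordings, labelled by a clinical-scale cutoff (1 = impaired balance).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 12))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=100) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```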
Hildon, Zoe; Allwood, Dominique; Black, Nick
2012-02-01
Displays comparing the performance of healthcare providers are largely based on common sense. The aim was to review the literature on the impact of the compositional format and content of quantitative data displays on people's comprehension, choice and preference. Data sources were Ovid databases, expert recommendations and snowballing techniques. Included studies were evaluations of the impact of different formats (bar charts, tables and pictographs) and content (ordering, explanatory visual cues, etc.) of quantitative data displays meeting defined quality criteria. Data extraction covered the type of decision; decision-making domains; audiences; formats; content; methodology; and findings. Most of the 30 studies used quantitative (n = 26) methods with patients or public groups (n = 28) rather than with professionals (n = 2). Bar charts were the most frequent format, followed by pictographs and tables. As regards format, tables and pictographs appeared better understood than bar charts despite the latter being preferred. Although accessible to less numerate and older populations, pictographs tended to lead to more risk avoidance. Tables appeared accessible to all. Aspects of content enhancing the impact of data displays included giving visual explanatory cues and contextual information while still attempting simplicity ('less is more'); ordering data; and consistency. Icons rather than numbers were more user-friendly but could lead to over-estimation of risk. Uncertainty was not widely understood, nor well represented. Though heterogeneous and limited in scope, there is sufficient research evidence to inform the presentation of quantitative data that compares the performance of healthcare providers. The impact of new formats, such as funnel plots, needs to be evaluated.
Jha, Abhinav K; Caffo, Brian; Frey, Eric C
2016-01-01
The objective optimization and evaluation of nuclear-medicine quantitative imaging methods using patient data is highly desirable but often hindered by the lack of a gold standard. Previously, a regression-without-truth (RWT) approach has been proposed for evaluating quantitative imaging methods in the absence of a gold standard, but this approach implicitly assumes that bounds on the distribution of true values are known. Several quantitative imaging methods in nuclear-medicine imaging measure parameters where these bounds are not known, such as the activity concentration in an organ or the volume of a tumor. We extended upon the RWT approach to develop a no-gold-standard (NGS) technique for objectively evaluating such quantitative nuclear-medicine imaging methods with patient data in the absence of any ground truth. Using the parameters estimated with the NGS technique, a figure of merit, the noise-to-slope ratio (NSR), can be computed, which can rank the methods on the basis of precision. An issue with NGS evaluation techniques is the requirement of a large number of patient studies. To reduce this requirement, the proposed method explored the use of multiple quantitative measurements from the same patient, such as the activity concentration values from different organs in the same patient. The proposed technique was evaluated using rigorous numerical experiments and using data from realistic simulation studies. The numerical experiments demonstrated that the NSR was estimated accurately using the proposed NGS technique when the bounds on the distribution of true values were not precisely known, thus serving as a very reliable metric for ranking the methods on the basis of precision. In the realistic simulation study, the NGS technique was used to rank reconstruction methods for quantitative single-photon emission computed tomography (SPECT) based on their performance on the task of estimating the mean activity concentration within a known volume of interest. Results showed that the proposed technique provided accurate ranking of the reconstruction methods for 97.5% of the 50 noise realizations. Further, the technique was robust to the choice of evaluated reconstruction methods. The simulation study pointed to possible violations of the assumptions made in the NGS technique under clinical scenarios. However, numerical experiments indicated that the NGS technique was robust in ranking methods even when there was some degree of such violation. PMID:26982626
Zotti, Alessandro; Banzato, Tommaso; Gelain, Maria Elena; Centelleghe, Cinzia; Vaccaro, Calogero; Aresu, Luca
2015-04-25
Increased cortical or cortical and medullary echogenicity is one of the most common signs of chronic or acute kidney disease in dogs and cats. Subjective evaluation of the echogenicity is reported to be unreliable. Patient and technical-related factors affect in-vivo quantitative evaluation of the echogenicity of parenchymal organs. The aim of the present study is to investigate the relationship between histopathology and ex-vivo renal cortical echogenicity in dogs and cats devoid of any patient and technical-related biases. Kidney samples were collected from 68 dog and 32 cat cadavers donated by the owners to the Veterinary Teaching Hospital of the University of Padua and standardized ultrasonographic images of each sample were collected. The echogenicity of the renal cortex was quantitatively assessed by means of mean gray value (MGV), and then histopathological analysis was performed. Statistical analysis to evaluate the influence of histological lesions on MGV was performed. The differentiation efficiency of MGV to detect pathological changes in the kidneys was calculated for dogs and cats. Statistical analysis revealed that only glomerulosclerosis was an independent determinant of echogenicity in dogs whereas interstitial nephritis, interstitial necrosis and fibrosis were independent determinants of echogenicity in cats. The global influence of histological lesions on renal echogenicity was higher in cats (23%) than in dogs (12%). Different histopathological lesions influence the echogenicity of the kidneys in dogs and cats. Moreover, MGV is a poor test for distinguishing between normal and pathological kidneys in the dog with a sensitivity of 58.3% and specificity of 59.8%. Instead, it seems to perform globally better in the cat, resulting in a fair test, with a sensitivity of 80.6% and a specificity of 56%.
Choi, Moon Hyung; Oh, Soon Nam; Rha, Sung Eun; Choi, Joon-Il; Lee, Sung Hak; Jang, Hong Seok; Kim, Jun-Gi; Grimm, Robert; Son, Yohan
2016-07-01
To investigate the usefulness of apparent diffusion coefficient (ADC) values derived from histogram analysis of the whole rectal cancer as a quantitative parameter to evaluate pathologic complete response (pCR) on preoperative magnetic resonance imaging (MRI). We enrolled a total of 86 consecutive patients who had undergone surgery for rectal cancer after neoadjuvant chemoradiotherapy (CRT) at our institution between July 2012 and November 2014. Two radiologists who were blinded to the final pathological results reviewed post-CRT MRI to evaluate tumor stage. Quantitative image analysis was performed using T2-weighted and diffusion-weighted images independently by two radiologists using dedicated software that performed histogram analysis to assess the distribution of ADC in the whole tumor. After surgery, 16 patients were confirmed to have achieved pCR (18.6%). All parameters from the pre- and post-CRT ADC histograms showed good or excellent agreement between the two readers. The minimum, 10th, 25th, 50th, and 75th percentile and mean ADC from the post-CRT ADC histogram were significantly higher in the pCR group than in the non-pCR group for both readers. The 25th percentile value from the ADC histogram in post-CRT MRI had the best diagnostic performance for detecting pCR, with an area under the receiver operating characteristic curve of 0.796. Low percentile values derived from the ADC histogram analysis of rectal cancer on MRI after CRT showed a significant difference between pCR and non-pCR groups, demonstrating the utility of the ADC value as a quantitative and objective marker to evaluate complete pathologic response to preoperative CRT in rectal cancer. J. Magn. Reson. Imaging 2016;44:212-220. © 2015 Wiley Periodicals, Inc.
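The histogram analysis above reduces to computing percentiles of ADC values within a whole-tumour mask; a minimal sketch follows, with a synthetic ADC map and mask standing in for the clinical data.

```python
import numpy as np

def adc_histogram_features(adc_map, tumor_mask):
    """Percentile and mean features of the ADC distribution over a
    whole-tumour mask, as used in histogram analysis of DWI."""
    values = adc_map[tumor_mask > 0]
    pct = np.percentile(values, [0, 10, 25, 50, 75])
    return {
        "min": pct[0], "p10": pct[1], "p25": pct[2],
        "p50": pct[3], "p75": pct[4], "mean": values.mean(),
    }

# Synthetic ADC map (x10^-6 mm^2/s) and a circular tumour mask for illustration
rng = np.random.default_rng(0)
adc = rng.normal(loc=1100, scale=200, size=(128, 128))
yy, xx = np.mgrid[:128, :128]
mask = (yy - 64) ** 2 + (xx - 64) ** 2 < 20 ** 2
print(adc_histogram_features(adc, mask))
```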
NASA Astrophysics Data System (ADS)
Weiss, Brian A.; Fronczek, Lisa; Morse, Emile; Kootbally, Zeid; Schlenoff, Craig
2013-05-01
Transformative Apps (TransApps) is a Defense Advanced Research Projects Agency (DARPA) funded program whose goal is to develop a range of militarily-relevant software applications ("apps") to enhance the operational-effectiveness of military personnel on (and off) the battlefield. TransApps is also developing a military apps marketplace to facilitate rapid development and dissemination of applications to address user needs by connecting engaged communities of endusers with development groups. The National Institute of Standards and Technology's (NIST) role in the TransApps program is to design and implement evaluation procedures to assess the performance of: 1) the various software applications, 2) software-hardware interactions, and 3) the supporting online application marketplace. Specifically, NIST is responsible for evaluating 50+ tactically-relevant applications operating on numerous Android™-powered platforms. NIST efforts include functional regression testing and quantitative performance testing. This paper discusses the evaluation methodologies employed to assess the performance of three key program elements: 1) handheld-based applications and their integration with various hardware platforms, 2) client-based applications and 3) network technologies operating on both the handheld and client systems along with their integration into the application marketplace. Handheld-based applications are assessed using a combination of utility and usability-based checklists and quantitative performance tests. Client-based applications are assessed to replicate current overseas disconnected (i.e. no network connectivity between handhelds) operations and to assess connected operations envisioned for later use. Finally, networked applications are assessed on handhelds to establish baselines of performance for when connectivity will be common usage.
NASA Astrophysics Data System (ADS)
Qiu, Yuchen; Tan, Maxine; McMeekin, Scott; Thai, Theresa; Moore, Kathleen; Ding, Kai; Liu, Hong; Zheng, Bin
2015-03-01
The purpose of this study was to identify and apply quantitative image biomarkers for early prediction of tumor response to chemotherapy among ovarian cancer patients who participated in clinical trials testing new drugs. In the experiment, we retrospectively selected 30 cases from patients who participated in Phase I clinical trials of new drugs or drug agents for ovarian cancer treatment. Each case is composed of two sets of CT images acquired pre- and post-treatment (4-6 weeks after starting treatment). A computer-aided detection (CAD) scheme was developed to extract and analyze the quantitative image features of the metastatic tumors previously tracked by the radiologists using the standard Response Evaluation Criteria in Solid Tumors (RECIST) guideline. The CAD scheme first segmented 3-D tumor volumes from the background using a hybrid tumor segmentation scheme. Then, for each segmented tumor, CAD computed three quantitative image features including the change of tumor volume, tumor CT number (density) and density variance. The feature changes were calculated between the matched tumors tracked on the CT images acquired pre- and post-treatment. Finally, CAD predicted each patient's 6-month progression-free survival (PFS) using a decision-tree-based classifier. The performance of the CAD scheme was compared with the RECIST category. The result shows that the CAD scheme achieved a prediction accuracy of 76.7% (23/30 cases) with a Kappa coefficient of 0.493, which is significantly higher than the performance of RECIST prediction with a prediction accuracy and Kappa coefficient of 60% (17/30) and 0.062, respectively. This study demonstrated the feasibility of analyzing quantitative image features to improve the accuracy of early prediction of tumor response to new drugs or therapeutic methods for ovarian cancer patients.
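The agreement statistic quoted above is Cohen's kappa; the sketch below shows how accuracy and kappa are computed for predicted versus observed 6-month PFS labels. The label vectors are hypothetical and are not the study's data.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical predicted vs. observed 6-month PFS labels for 30 patients
# (1 = progression-free, 0 = progressed); not the study's actual data.
observed  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1,
             0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0]
predicted = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1,
             0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0]

accuracy = sum(o == p for o, p in zip(observed, predicted)) / len(observed)
kappa = cohen_kappa_score(observed, predicted)
print(f"accuracy = {accuracy:.2f}, Cohen's kappa = {kappa:.2f}")
```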
Dong, Yi; Wang, Wen-Ping; Lin, Pan; Fan, Peili; Mao, Feng
2016-01-01
We performed a prospective study to evaluate the value of contrast-enhanced ultrasound (CEUS) in the quantitative evaluation of renal cortical perfusion in patients suspected of early diabetic nephropathy (DN), with the estimated GFR (MDRD equation) as the gold standard. The study protocol was approved by the hospital review board; each patient gave written informed consent. Our study included 46 patients (21 males and 25 females, mean age 55.6 ± 4.14 years) with clinically confirmed early DN. After intravenous bolus injection of 1 ml of sulfur hexafluoride microbubble ultrasound contrast agent, real-time CEUS of the renal cortex was performed successively using a 2-5 MHz convex probe. Time-intensity curves (TICs) and quantitative indexes were created with Qlab software. Receiver operating characteristic (ROC) curves were used to determine the diagnostic criteria for the CEUS quantitative indexes, and their diagnostic efficiencies were compared with the resistance index (RI) and peak systolic velocity (PSV) of renal segmental arteries by chi-square test. Our control group included forty-five healthy volunteers. Differences were considered statistically significant at P < 0.05. Changes in area under the curve (AUC) and derived peak intensity (DPI) were statistically significant (P < 0.05). DPI less than 12 and AUC greater than 1400 had high utility in DN, with sensitivities of 71.7% and 67.3% and specificities of 77.8% and 80.0%, respectively. These results were significantly better than those obtained with RI and PSV, which showed no significant differences in the early stage of DN (P > 0.05). CEUS might help improve the early diagnosis of DN through quantitative analyses. AUC and DPI might be valuable quantitative indexes.
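The TIC-derived indexes above can be approximated from a sampled curve as shown below: the area under the curve by trapezoidal integration and the peak intensity as a crude stand-in for the derived peak intensity. The curve is synthetic, and the vendor software (Qlab) applies its own curve fitting, which is not reproduced here.

```python
import numpy as np

def tic_indexes(time_s, intensity):
    """Area under the time-intensity curve (trapezoidal rule) and peak
    intensity, as simple stand-ins for the software-derived AUC and DPI."""
    auc = np.trapz(intensity, time_s)
    dpi = intensity.max()
    return auc, dpi

# Synthetic renal-cortex TIC shaped like a bolus wash-in/wash-out
t = np.linspace(0, 120, 240)                     # seconds
intensity = 15 * (t / 20) * np.exp(1 - t / 20)   # gamma-variate-like curve
auc, dpi = tic_indexes(t, intensity)
print(f"AUC ~ {auc:.0f}, DPI ~ {dpi:.1f}")
```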
2015-01-01
Changes in glycosylation have been shown to have a profound correlation with development/malignancy in many cancer types. Currently, two major enrichment techniques have been widely applied in glycoproteomics, namely, lectin affinity chromatography (LAC)-based and hydrazide chemistry (HC)-based enrichment. Here we report LC–MS/MS quantitative analyses of human blood serum glycoproteins and glycopeptides associated with esophageal diseases using LAC- and HC-based enrichment. Separate and complementary qualitative and quantitative analyses of protein glycosylation were performed with both enrichment techniques. Chemometric and statistical evaluations (PCA plots and ANOVA tests, respectively) were employed to determine and confirm candidate cancer-associated glycoprotein/glycopeptide biomarkers. Of the 139 glycoproteins identified, 59 (42% overlap) were observed with both enrichment techniques, an overlap very similar to previously published studies. The quantitation and evaluation of significantly changed glycoproteins/glycopeptides are complementary between LAC and HC enrichment. LC–ESI–MS/MS analyses indicated that 7 glycoproteins enriched by LAC and 11 glycoproteins enriched by HC showed significantly different abundances between disease-free and disease cohorts. Multiple reaction monitoring quantitation found 13 glycopeptides by LAC enrichment and 10 glycosylation sites by HC enrichment to be statistically different among disease cohorts. PMID:25134008
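The abstract names PCA plots and ANOVA tests for candidate-biomarker selection without implementation details; a generic sketch of that kind of workflow, using an entirely hypothetical abundance matrix and cohort labels, could look like this.

```python
# Hypothetical sketch: PCA of glycoprotein abundances plus per-protein one-way
# ANOVA across cohorts to flag candidate biomarkers.
import numpy as np
from sklearn.decomposition import PCA
from scipy import stats

rng = np.random.default_rng(2)
n_per_group, n_proteins = 10, 59
groups = ["disease_free", "cohort_A", "cohort_B"]        # hypothetical cohorts

# Synthetic log-abundance matrix: rows = serum samples, columns = glycoproteins.
X = np.vstack([rng.normal(loc=i * 0.3, size=(n_per_group, n_proteins))
               for i in range(len(groups))])
labels = np.repeat(groups, n_per_group)

pcs = PCA(n_components=2).fit_transform(X)               # scores for a 2-D PCA plot
print("PCA score matrix:", pcs.shape)

# One-way ANOVA per glycoprotein; small p-values mark candidate biomarkers.
pvals = [stats.f_oneway(*(X[labels == g, j] for g in groups)).pvalue
         for j in range(n_proteins)]
print("candidates (p < 0.05):", int(np.sum(np.array(pvals) < 0.05)))
```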
Chen, Tao; Fan, Jun; Gao, Ruiqi; Wang, Tai; Yu, Ying; Zhang, Weiguang
2016-10-07
Chiral stationary phase high-performance liquid chromatography coupled with various detectors has been one of the most commonly used methods for the analysis and separation of chiral compounds over the past decades. Different detectors exhibit different characteristics in qualitative and quantitative studies under different chromatographic conditions. Herein, a comparative evaluation of HPLC coupled with ultraviolet, optical rotation, refractive index, and evaporative light scattering detectors has been conducted for qualitative and quantitative analyses of metalaxyl racemate. Effects of separation conditions on the peak area ratio between the two enantiomers, including sample concentration, column temperature, mobile phase composition, and flow rate, have been investigated in detail. In addition, the limits of detection, limits of quantitation, quantitative ranges, and precision for the two enantiomers with the four detectors have also been studied. The chromatographic separation conditions had only slight effects on ultraviolet and refractive index detection, with the peak area ratio between the two enantiomers remaining almost unchanged, whereas evaporative light scattering detection was significantly affected by the above-mentioned chromatographic conditions and the corresponding peak area ratios varied greatly. Moreover, the limits of detection, limits of quantitation, and quantitative ranges of the two enantiomers with UV detection were lower by 1-2 orders of magnitude than with the other detectors. Copyright © 2016 Elsevier B.V. All rights reserved.
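The abstract compares limits of detection and quantitation across detectors but does not state how they were derived; one conventional estimate uses the standard deviation of the calibration residuals and the calibration slope (LOD ≈ 3.3·σ/s, LOQ ≈ 10·σ/s). The sketch below uses purely illustrative calibration data and is not taken from the paper.

```python
# Hypothetical sketch: LOD/LOQ of one metalaxyl enantiomer from a linear
# calibration curve, using LOD = 3.3*sigma/slope and LOQ = 10*sigma/slope.
import numpy as np

conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0])          # mg/L, synthetic standards
area = np.array([12.1, 24.5, 48.3, 121.0, 243.8])    # synthetic peak areas

slope, intercept = np.polyfit(conc, area, 1)
residuals = area - (slope * conc + intercept)
sigma = residuals.std(ddof=2)                         # sd of regression residuals

print("LOD:", 3.3 * sigma / slope, "mg/L")
print("LOQ:", 10.0 * sigma / slope, "mg/L")
```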
Sturla, Francesco; Onorati, Francesco; Puppini, Giovanni; Pappalardo, Omar A; Selmi, Matteo; Votta, Emiliano; Faggian, Giuseppe; Redaelli, Alberto
2017-04-01
Accurate quantification of mitral valve (MV) morphology and dynamic behavior over the cardiac cycle is crucial to understand the mechanisms of degenerative MV dysfunction and to guide surgical intervention. Cardiac magnetic resonance (CMR) imaging has progressively been adopted to evaluate MV pathophysiology, although a dedicated framework is required to perform a quantitative assessment of functional MV anatomy. We investigated MV dynamic behavior in subjects with normal MV anatomy (n=10) and patients referred to surgery due to degenerative MV prolapse, classified as fibro-elastic deficiency (FED, n=9) and Barlow's disease (BD, n=10). A CMR-dedicated framework was adopted to evaluate prolapse height and volume and quantitatively assess valvular morphology and papillary muscle (PAPs) function over the cardiac cycle. Multiple comparisons were used to investigate the hallmarks associated with MV degenerative prolapse and evaluate the feasibility of anatomical and functional distinction between FED and BD phenotypes. On average, annular dimensions were significantly (P<0.05) larger in BD than in FED and normal subjects, while no significant differences were noticed between FED and normal subjects. MV eccentricity progressively decreased passing from normal to FED and BD, with the latter exhibiting a rounder annulus shape. Over the cardiac cycle, we noticed significant differences for BD during systole, with an abnormal annular enlargement between mid and late systole (LS) (P<0.001 vs. normal); PAPs dynamics remained comparable in the three groups. Prolapse height and volume highlighted significant differences among normal, FED and BD valves. Our CMR-dedicated framework allows for quantitative and dynamic evaluation of the MV apparatus, with quantifiable annular alterations representing the primary hallmark of severe MV degeneration. This may aid surgeons in evaluating the severity of MV dysfunction and selecting the appropriate MV treatment.
Beug, M W; Bigwood, J
1981-03-27
Rapid quantification of psilocybin and psilocin in extracts of wild mushrooms is accomplished by reversed-phase high-performance liquid chromatography with paired-ion reagents. Nine solvent systems and three solid supports are evaluated for their efficiency in separating psilocybin, psilocin and other components of crude mushroom extracts by thin-layer chromatography.
Assis, Sofia; Costa, Pedro; Rosas, Maria Jose; Vaz, Rui; Silva Cunha, Joao Paulo
2016-08-01
Intraoperative evaluation of the efficacy of deep brain stimulation (DBS) includes evaluation of its effect on rigidity. A subjective semi-quantitative scale is used, dependent on the examiner's perception and experience. A system was previously proposed to tackle this subjectivity, using quantitative data and providing real-time feedback of the computed rigidity reduction, hence supporting the physician's decision. This system comprised a gyroscope-based motion sensor in a textile band, placed on the patient's hand, which communicated its measurements to a laptop. The latter computed a signal descriptor from the angular velocity of the hand during wrist flexion in DBS surgery. The first approach relied on a general rigidity-reduction model, regardless of the initial severity of the symptom. Thus, to enhance the performance of the previously presented system, we aimed to develop separate models for high and low baseline rigidity, according to the examiner's assessment before any stimulation, allowing a more patient-oriented approach. Additionally, usability was improved by performing in situ processing on a smartphone instead of a computer. The system has been shown to be reliable, presenting an accuracy of 82.0% and a mean error of 3.4%. Relative to previous results, the performance was similar, further supporting the importance of considering cogwheel rigidity to better infer the reduction in rigidity. Overall, we present a simple, wearable, mobile system, suitable for intraoperative conditions during DBS, that supports the physician in decision-making when setting stimulation parameters.
Polarization variations in installed fibers and their influence on quantum key distribution systems.
Ding, Yu-Yang; Chen, Hua; Wang, Shuang; He, De-Yong; Yin, Zhen-Qiang; Chen, Wei; Zhou, Zheng; Guo, Guang-Can; Han, Zheng-Fu
2017-10-30
Polarization variations in installed fibers are complex and volatile, and can severely affect the performance of polarization-sensitive quantum key distribution (QKD) systems. Based on recorded data about polarization variations of different installed fibers, we establish an analytical methodology to quantitatively evaluate the influence of polarization variations on polarization-sensitive QKD systems. Using the increased quantum bit error rate induced by polarization variations as a key criterion, we propose two parameters, polarization drift time and required tracking speed, to characterize polarization variations. For field buried and aerial fibers of different lengths, we quantitatively evaluate the influence of polarization variations, and also provide requirements and suggestions for the polarization basis alignment modules of QKD systems deployed in different kinds of fibers.
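The paper's quantitative criterion is the extra quantum bit error rate (QBER) induced by polarization drift. The authors' exact model is not reproduced here, but for an idealized polarization-encoded system a misalignment angle θ between the prepared and measured polarization bases contributes an error probability of roughly sin²θ, which gives a back-of-the-envelope feel for the required tracking speed. The sketch below rests on that simplifying assumption.

```python
# Hypothetical sketch: extra QBER contributed by polarization misalignment,
# assuming an ideal polarization-encoded link where a state rotated by theta
# lands in the wrong output port with probability sin^2(theta).
import numpy as np

def qber_from_misalignment(theta_rad, intrinsic_qber=0.01):
    """Total QBER given a polarization misalignment angle (radians)."""
    return intrinsic_qber + np.sin(theta_rad) ** 2

# Example: how much drift can accumulate between basis re-alignments?
for deg in (1, 3, 5, 10):
    print(f"{deg:2d} deg drift -> QBER ~ {qber_from_misalignment(np.radians(deg)):.3f}")
```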
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, C.; et al.
We describe the concept and procedure of drifted-charge extraction developed in the MicroBooNE experiment, a single-phase liquid argon time projection chamber (LArTPC). This technique converts the raw digitized TPC waveform to the number of ionization electrons passing through a wire plane at a given time. A robust recovery of the number of ionization electrons from both induction and collection anode wire planes will augment the 3D reconstruction, and is particularly important for tomographic reconstruction algorithms. A number of building blocks of the overall procedure are described. The performance of the signal processing is quantitatively evaluated by comparing extracted charge with the true charge through a detailed TPC detector simulation taking into account position-dependent induced current inside a single wire region and across multiple wires. Some areas for further improvement of the performance of the charge extraction procedure are also discussed.
Yang, Yuan-Gui; Zhang, Ji; Zhao, Yan-Li; Zhang, Jin-Yu; Wang, Yuan-Zhong
2017-07-01
A rapid method was developed and validated by ultra-performance liquid chromatography-triple quadrupole mass spectrometry with ultraviolet detection (UPLC-UV-MS) for the simultaneous determination of paris saponin I, paris saponin II, paris saponin VI and paris saponin VII. Partial least squares discriminant analysis (PLS-DA) based on UPLC and Fourier transform infrared (FT-IR) spectroscopy was employed to evaluate Paris polyphylla var. yunnanensis (PPY) harvested at different times. Quantitative determination implied that the varying contents of bioactive compounds at different harvesting times may lead to different pharmacological effects; the average content of total saponins for PPY harvested at 8 years was higher than that of the other samples. PLS-DA of the FT-IR spectra performed better than that of UPLC for discriminating PPY from different harvesting times. Copyright © 2016 John Wiley & Sons, Ltd.
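PLS-DA as used here is commonly implemented by regressing one-hot class labels on the spectra with partial least squares and classifying by the largest predicted response; the sketch below illustrates that generic approach on synthetic FT-IR-like data. It is not the authors' code, and the harvesting-time labels other than 8 years are hypothetical.

```python
# Hypothetical sketch: PLS-DA on synthetic FT-IR-like spectra to discriminate
# harvesting times, implemented as PLS regression on one-hot class labels.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.preprocessing import LabelBinarizer
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n_per_class, n_wavenumbers = 20, 400
years = ["5y", "8y", "10y"]                      # hypothetical harvesting times

X = np.vstack([rng.normal(loc=i * 0.1, size=(n_per_class, n_wavenumbers))
               for i in range(len(years))])
y = np.repeat(years, n_per_class)

lb = LabelBinarizer()
Y = lb.fit_transform(y)                          # one-hot class matrix

X_tr, X_te, Y_tr, Y_te, y_tr, y_te = train_test_split(
    X, Y, y, test_size=0.3, random_state=0, stratify=y)

pls = PLSRegression(n_components=5).fit(X_tr, Y_tr)
pred = lb.classes_[np.argmax(pls.predict(X_te), axis=1)]
print("hold-out accuracy:", np.mean(pred == y_te))
```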
Chapelain, Pascal; Morineau, Thierry; Gautier, Claudie
2015-11-01
To explore how nursing performance is affected by different forms of team communication, including a message transmitted through an earpiece that triggers reflective thinking, in the simulation of a deteriorating patient situation. Communication can either support team performance or produce interruptions potentially leading to error. Today, technology offers the opportunity to use devices that permit such communication. An experimental protocol was used with quantitative and qualitative analyses. Pairs of nursing students (N = 26) were assigned either to an experimental group wearing an earpiece priming reflective thinking, or to a control group. The study was conducted between October 2013 and April 2014. The number of spontaneous information exchanges between pairs of participants was positively correlated with overall performance (actions performed and physician call) and with actions performed at the right moment. The number of questions in the team was positively correlated with overall action performance. No quantitative effect of the earpiece message on the performance indicators was found, but qualitative observation showed that this message can allow for error avoidance. Subjective evaluation of the earpiece as an aid was negatively correlated with overall action performance. Its evaluation as a disturbance was also negatively correlated with the measurement of actions performed at the right moment. The ability to exchange information and to ask questions seems to contribute to performance in care delivery. The use of communication devices to trigger reflective thinking must be studied in more depth to assess their capacity to improve performance. © 2015 John Wiley & Sons Ltd.
Au, Frederick Wing-Fai; Ghai, Sandeep; Moshonov, Hadas; Kahn, Harriette; Brennan, Cressida; Dua, Hemi; Crystal, Pavel
2014-09-01
The purpose of this article is to assess the diagnostic performance of quantitative shear wave elastography in the evaluation of solid breast masses and to determine the most discriminatory parameter. B-mode ultrasound and shear wave elastography were performed before core biopsy of 123 masses in 112 women. The diagnostic performances of ultrasound and quantitative shear wave elastography parameters (mean elasticity, maximum elasticity, and elasticity ratio) were compared. The added effect of shear wave elastography on the performance of ultrasound was determined. The mean elasticity, maximum elasticity, and elasticity ratio were 24.8 kPa, 30.3 kPa, and 1.90, respectively, for 79 benign masses and 130.7 kPa, 154.9 kPa, and 11.52, respectively, for 44 malignant masses (p < 0.001). The optimal cutoff values were 42.5 kPa, 46.7 kPa, and 3.56, respectively. The AUC of each shear wave elastography parameter was higher than that of ultrasound (p < 0.001); the AUC value for the elasticity ratio (0.943) was the highest. By adding shear wave elastography parameters to the evaluation of BI-RADS category 4a masses, about 90% of masses could be downgraded to BI-RADS category 3. The numbers of downgraded masses were 40 of 44 (91%) for mean elasticity, 39 of 44 (89%) for maximum elasticity, and 42 of 44 (95%) for elasticity ratio. The numbers of correctly downgraded masses were 39 of 40 (98%) for mean elasticity, 38 of 39 (97%) for maximum elasticity, and 41 of 42 (98%) for elasticity ratio. The diagnostic performance of ultrasound mass assessment improved when shear wave elastography parameters were added to BI-RADS category 4a masses, compared with ultrasound alone. Combined ultrasound and elasticity ratio showed the largest improvement, from 35.44% to 87.34% in specificity, from 45.74% to 80.77% in positive predictive value, and from 57.72% to 90.24% in accuracy (p < 0.0001). The AUC of combined ultrasound and elasticity ratio (0.914) was the highest among the combined parameters. There was a statistically significant difference in the values of the quantitative shear wave elastography parameters of benign and malignant solid breast masses. By adding shear wave elastography parameters to BI-RADS category 4a masses, we found that about 90% of them could be correctly downgraded to BI-RADS category 3, thereby avoiding biopsy. The elasticity ratio (cutoff, 3.56) appeared to be the most discriminatory parameter.
NASA Astrophysics Data System (ADS)
Siddiqui, Khan M.; Siegel, Eliot L.; Reiner, Bruce I.; Johnson, Jeffrey P.
2005-04-01
The authors identify a fundamental disconnect between the ways in which industry and radiologists assess and even discuss product performance. What is needed is a quantitative methodology that can assess both subjective image quality and observer task performance. In this study, we propose and evaluate the use of a visual discrimination model (VDM) that assesses just-noticeable differences (JNDs) to serve this purpose. The study compares radiologists' subjective perceptions of the image quality of computed tomography (CT) and computed radiography (CR) images with quantitative measures of peak signal-to-noise ratio (PSNR) and JNDs as measured by a VDM. The study included 4 CT and 6 CR studies with compression ratios ranging from lossless to 90:1 (a total of 80 sets of images was generated [n = 1,200]). Eleven radiologists reviewed the images and rated them in terms of overall quality and readability and identified images not acceptable for interpretation. Normalized reader scores were correlated with compression, objective PSNR, and mean JND values. Results indicated a significantly higher correlation between observer performance and JND values than with PSNR methods. These results support the use of the VDM as a metric not only for the threshold discriminations for which it was calibrated, but also as a general image quality metric. The VDM is a highly promising, reproducible, and reliable adjunct or even alternative to human observer studies for research or for establishing clinical guidelines for image compression, dose reduction, and evaluation of various display technologies.
Layton, Blythe A.; Cao, Yiping; Ebentier, Darcy L.; Hanley, Kaitlyn; Ballesté, Elisenda; Brandão, João; Byappanahalli, Muruleedhara N.; Converse, Reagan; Farnleitner, Andreas H.; Gentry-Shields, Jennifer; Gourmelon, Michèle; Lee, Chang Soo; Lee, Jiyoung; Lozach, Solen; Madi, Tania; Meijer, Wim G.; Noble, Rachel; Peed, Lindsay; Reischer, Georg H.; Rodrigues, Raquel; Rose, Joan B.; Schriewer, Alexander; Sinigalliano, Chris; Srinivasan, Sangeetha; Stewart, Jill; Laurie, C.; Wang, Dan; Whitman, Richard; Wuertz, Stefan; Jay, Jenny; Holden, Patricia A.; Boehm, Alexandria B.; Shanks, Orin; Griffith, John F.
2013-01-01
A number of PCR-based methods for detecting human fecal material in environmental waters have been developed over the past decade, but these methods have rarely received independent comparative testing in large multi-laboratory studies. Here, we evaluated ten of these methods (BacH, BacHum-UCD, Bacteroides thetaiotaomicron (BtH), BsteriF1, gyrB, HF183 endpoint, HF183 SYBR, HF183 Taqman®, HumM2, and Methanobrevibacter smithii nifH (Mnif)) using 64 blind samples prepared in one laboratory. The blind samples contained either one or two fecal sources from human, wastewater or non-human sources. The assay results were assessed for presence/absence of the human markers and also quantitatively while varying the following: 1) classification of samples that were detected but not quantifiable (DNQ) as positive or negative; 2) reference fecal sample concentration unit of measure (such as culturable indicator bacteria, wet mass, total DNA, etc); and 3) human fecal source type (stool, sewage or septage). Assay performance using presence/absence metrics was found to depend on the classification of DNQ samples. The assays that performed best quantitatively varied based on the fecal concentration unit of measure and laboratory protocol. All methods were consistently more sensitive to human stools compared to sewage or septage in both the presence/absence and quantitative analysis. Overall, HF183 Taqman® was found to be the most effective marker of human fecal contamination in this California-based study.
Breast cancer Ki67 expression preoperative discrimination by DCE-MRI radiomics features
NASA Astrophysics Data System (ADS)
Ma, Wenjuan; Ji, Yu; Qin, Zhuanping; Guo, Xinpeng; Jian, Xiqi; Liu, Peifang
2018-02-01
To investigate whether quantitative radiomics features extracted from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) are associated with Ki67 expression of breast cancer. In this institutional review board approved retrospective study, we collected 377 cases of Chinese women who were diagnosed with invasive breast cancer in 2015. This cohort included 53 cases with low Ki67 expression (Ki67 proliferation index less than 14%) and 324 cases with high Ki67 expression (Ki67 proliferation index more than 14%). A binary classification of low- vs. high-Ki67 expression was performed. A set of 52 quantitative radiomics features, including morphological, gray-scale statistic, and texture features, were extracted from the segmented lesion area. Three common machine learning classification methods, Naive Bayes, k-nearest neighbor and support vector machine with Gaussian kernel, were employed for classification, and the least absolute shrinkage and selection operator (LASSO) method was used to select the most predictive feature set for the classifiers. Classification performance was evaluated by the area under the receiver operating characteristic curve (AUC), accuracy, sensitivity and specificity. The model using the Naive Bayes classification method achieved better performance than the other two methods, yielding an AUC of 0.773, accuracy of 0.757, sensitivity of 0.777 and specificity of 0.769. Our study showed that quantitative radiomics imaging features of breast tumors extracted from DCE-MRI are associated with breast cancer Ki67 expression. Future larger studies are needed to further evaluate these findings.
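A minimal sketch of the pipeline described above, LASSO-driven feature selection followed by a Gaussian Naive Bayes classifier evaluated by cross-validated AUC, is given below. The feature matrix, labels, and tuning values (alpha, number of retained features, fold count) are hypothetical placeholders, not the study's settings.

```python
# Hypothetical sketch: LASSO feature selection + Gaussian Naive Bayes for
# low- vs. high-Ki67 classification from DCE-MRI radiomics features.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.feature_selection import SelectFromModel
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_cases, n_features = 377, 52                      # counts as in the abstract
X = rng.normal(size=(n_cases, n_features))         # synthetic radiomics features
y = (rng.random(n_cases) < 324 / 377).astype(int)  # 1 = high Ki67 (synthetic labels)

model = make_pipeline(
    StandardScaler(),
    # Keep the 10 features with the largest |LASSO coefficient| (illustrative).
    SelectFromModel(Lasso(alpha=0.01), threshold=-np.inf, max_features=10),
    GaussianNB(),
)
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print("cross-validated AUC: %.3f +/- %.3f" % (auc.mean(), auc.std()))
```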
Performance of biometric quality measures.
Grother, Patrick; Tabassi, Elham
2007-04-01
We document methods for the quantitative evaluation of systems that produce a scalar summary of a biometric sample's quality. We are motivated by a need to test claims that quality measures are predictive of matching performance. We regard a quality measurement algorithm as a black box that converts an input sample to an output scalar, and we evaluate it by quantifying the association between those values and observed matching results. We advance detection error trade-off and error-versus-reject characteristics as metrics for the comparative evaluation of sample quality measurement algorithms. We precede this with a definition of sample quality and a description of the operational use of quality measures. We emphasize the performance goal by including a procedure for annotating the samples of a reference corpus with quality values derived from empirical recognition scores.
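The error-versus-reject characteristic advanced here plots the matching error remaining after rejecting the lowest-quality fraction of samples; a quality measure is predictive if the error falls as rejection increases. The authors' implementation is not reproduced, but a minimal sketch of the idea on synthetic scores is:

```python
# Hypothetical sketch: error-versus-reject characteristic for a biometric
# quality measure. Samples are rejected in order of increasing quality and the
# false-non-match rate is recomputed on the retained genuine comparisons.
import numpy as np

rng = np.random.default_rng(5)
n = 2000
quality = rng.uniform(0, 100, n)                          # per-sample quality values
# Synthetic genuine match scores that improve with sample quality.
match_scores = 0.02 * quality + rng.normal(0, 1, n)
threshold = 0.5                                           # illustrative operating threshold

order = np.argsort(quality)                               # worst quality first
for reject_frac in (0.0, 0.05, 0.10, 0.20):
    kept = order[int(reject_frac * n):]
    fnmr = np.mean(match_scores[kept] < threshold)        # false non-match rate
    print(f"reject {reject_frac:.0%} -> FNMR {fnmr:.3f}")
```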
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Thomas Martin; Patton, Bruce W.; Weber, Charles F.
The primary goal of this project is to evaluate x-ray spectra generated within a scanning electron microscope (SEM) to determine the elemental composition of small samples. This will be accomplished by performing Monte Carlo simulations of the electron and photon interactions in the sample and in the x-ray detector. The elemental inventories will be determined by an inverse process that progressively reduces the difference between the measured and simulated x-ray spectra by iteratively adjusting composition and geometric variables in the computational model. The intended benefit of this work will be to develop a method to perform quantitative analysis on substandard samples (heterogeneous phases, rough surfaces, small sizes, etc.) without involving standard elemental samples or empirical matrix corrections (i.e., true standardless quantitative analysis).
Imaging Performance of Quantitative Transmission Ultrasound
Lenox, Mark W.; Wiskin, James; Lewis, Matthew A.; Darrouzet, Stephen; Borup, David; Hsieh, Scott
2015-01-01
Quantitative Transmission Ultrasound (QTUS) is a tomographic transmission ultrasound modality that is capable of generating 3D speed-of-sound maps of objects in the field of view. It performs this measurement by propagating a plane wave through the medium from a transmitter on one side of a water tank to a high resolution receiver on the opposite side. This information is then used via inverse scattering to compute a speed map. In addition, the presence of reflection transducers allows the creation of a high resolution, spatially compounded reflection map that is natively coregistered to the speed map. A prototype QTUS system was evaluated for measurement and geometric accuracy as well as for the ability to correctly determine speed of sound. PMID:26604918
Supplemental Hazard Analysis and Risk Assessment - Hydrotreater
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lowry, Peter P.; Wagner, Katie A.
A supplemental hazard analysis was conducted and a quantitative risk assessment performed in response to an independent review comment received by the Pacific Northwest National Laboratory (PNNL) from the U.S. Department of Energy Pacific Northwest Field Office (PNSO) against the Hydrotreater/Distillation Column Hazard Analysis Report issued in April 2013. The supplemental analysis used the hazardous conditions documented by the previous April 2013 report as a basis. The conditions were screened and grouped for the purpose of identifying whether additional prudent, practical hazard controls could be identified, using a quantitative risk evaluation to assess the adequacy of the controls and establish a lower level of concern for the likelihood of potential serious accidents. Calculations were performed to support conclusions where necessary.
NASA Astrophysics Data System (ADS)
Wu, Tao; Cheung, Tak-Hong; Yim, So-Fan; Qu, Jianan Y.
2010-03-01
A quantitative colposcopic imaging system for the diagnosis of early cervical cancer is evaluated in a clinical study. This imaging technology based on 3-D active stereo vision and motion tracking extracts diagnostic information from the kinetics of acetowhitening process measured from the cervix of human subjects in vivo. Acetowhitening kinetics measured from 137 cervical sites of 57 subjects are analyzed and classified using multivariate statistical algorithms. Cross-validation methods are used to evaluate the performance of the diagnostic algorithms. The results show that an algorithm for screening precancer produced 95% sensitivity (SE) and 96% specificity (SP) for discriminating normal and human papillomavirus (HPV)-infected tissues from cervical intraepithelial neoplasia (CIN) lesions. For a diagnostic algorithm, 91% SE and 90% SP are achieved for discriminating normal tissue, HPV infected tissue, and low-grade CIN lesions from high-grade CIN lesions. The results demonstrate that the quantitative colposcopic imaging system could provide objective screening and diagnostic information for early detection of cervical cancer.
Reiffsteck, A; Dehennin, L; Scholler, R
1982-11-01
Estrone, 2-methoxyestrone and estradiol-17 beta have been definitively identified in the seminal plasma of man, bull, boar and stallion by high-resolution gas chromatography associated with selective monitoring of characteristic ions of suitable derivatives. Quantitative estimations were performed by isotope dilution with deuterated analogues and by monitoring molecular ions of the trimethylsilyl ethers of labelled and unlabelled compounds. Concentrations of unconjugated and total estrogens are reported together with a statistical evaluation of accuracy and precision.
National Security Technology Incubator Evaluation Process
DOE Office of Scientific and Technical Information (OSTI.GOV)
None, None
This report describes the process by which the National Security Technology Incubator (NSTI) will be evaluated. The technology incubator is being developed as part of the National Security Preparedness Project (NSPP), funded by a Department of Energy (DOE)/National Nuclear Security Administration (NNSA) grant. This report includes a brief description of the components, steps, and measures of the proposed evaluation process. The purpose of the NSPP is to promote national security technologies through business incubation, technology demonstration and validation, and workforce development. The NSTI will focus on serving businesses with national security technology applications by nurturing them through critical stages of early development. An effective evaluation process of the NSTI is an important step, as it can provide qualitative and quantitative information on incubator performance over a given period. The vision of the NSTI is to be a successful incubator of technologies and private enterprise that assist the NNSA in meeting new challenges in national safety and security. The mission of the NSTI is to identify, incubate, and accelerate technologies with national security applications at various stages of development by providing hands-on mentoring and business assistance to small businesses and emerging or growing companies. To achieve success for both incubator businesses and the NSTI program, an evaluation process is essential to effectively measure results and implement corrective processes in the incubation design if needed. The evaluation process design will collect and analyze qualitative and quantitative data through a performance evaluation system.
Path selection system simulation and evaluation for a Martian roving vehicle
NASA Technical Reports Server (NTRS)
Boheim, S. L.; Prudon, W. C.
1972-01-01
A package for the simulation and evaluation of proposed path selection systems for an autonomous Martian roving vehicle was developed. The package incorporates a number of realistic features, such as the simulation of random effects due to vehicle bounce and sensor-reading uncertainty, to increase the reliability of the results. Qualitative and quantitative evaluation criteria were established. The performance of three different path selection systems was evaluated to determine the effectiveness of the simulation package and to form some preliminary conclusions regarding the tradeoffs involved in path selection system design.
The use of a battery of tracking tests in the quantitative evaluation of neurological function
NASA Technical Reports Server (NTRS)
Repa, B. S.; Albers, J. W.; Potvin, A. R.; Tourtellotte, W. W.
1972-01-01
A tracking test battery was applied in a drug trial designed to compare the efficacy of L-DOPA and amantadine with that of L-DOPA and placebo in the treatment of 28 patients with Parkinson's disease. The drug trial provided an ideal opportunity for objectively evaluating the usefulness of tracking tests in assessing changes in neurologic function. Evaluating changes in patient performance resulting from disease progression and controlled clinical trials is of great importance in establishing effective treatment programs.
ERIC Educational Resources Information Center
Branum-Martin, Lee; Patchan, Melissa M.
2016-01-01
Peer learning is often used in classrooms to help and support knowledge and skill acquisition. One form of peer learning, peer assessment, involves the quantitative (i.e., peer ratings) or qualitative (i.e., peer feedback) evaluation of a learner's performance by another learner among students. While we might be concerned about the quality of the…
A new method to evaluate human-robot system performance
NASA Technical Reports Server (NTRS)
Rodriguez, G.; Weisbin, C. R.
2003-01-01
One of the key issues in space exploration is that of deciding what space tasks are best done with humans, with robots, or a suitable combination of each. In general, human and robot skills are complementary. Humans provide as yet unmatched capabilities to perceive, think, and act when faced with anomalies and unforeseen events, but there can be huge potential risks to human safety in getting these benefits. Robots provide complementary skills in being able to work in extremely risky environments, but their ability to perceive, think, and act by themselves is currently not error-free, although these capabilities are continually improving with the emergence of new technologies. Substantial past experience validates these generally qualitative notions. However, there is a need for more rigorously systematic evaluation of human and robot roles, in order to optimize the design and performance of human-robot system architectures using well-defined performance evaluation metrics. This article summarizes a new analytical method to conduct such quantitative evaluations. While the article focuses on evaluating human-robot systems, the method is generally applicable to a much broader class of systems whose performance needs to be evaluated.
Feedback Effects of Teaching Quality Assessment: Macro and Micro Evidence
ERIC Educational Resources Information Center
Bianchini, Stefano
2014-01-01
This study investigates the feedback effects of teaching quality assessment. Previous literature looked separately at the evolution of individual and aggregate scores to understand whether instructor and university performance depends on past evaluations. I propose a new quantitative-based methodology, combining statistical distributions and…
DOT National Transportation Integrated Search
1981-10-01
The objectives of the Systems Operation Studies (SOS) for automated guideway transit (AGT) systems are to develop models for the analysis of system operations, to evaluate performance and cost, and to establish guidelines for the design and operation...
Saad, Hisham A; Terry, Mark A; Shamie, Neda; Chen, Edwin S; Friend, Daniel F; Holiman, Jeffrey D; Stoeger, Christopher
2008-08-01
We developed a simple, practical, and inexpensive technique to analyze areas of endothelial cell loss and/or damage over the entire corneal area after vital dye staining by using a readily available, off-the-shelf, consumer software program, Adobe Photoshop. The purpose of this article is to convey a method of quantifying areas of cell loss and/or damage. Descemet-stripping automated endothelial keratoplasty corneal transplant surgery was performed by using 5 precut corneas on a human cadaver eye. Corneas were removed and stained with trypan blue and alizarin red S and subsequently photographed. Quantitative assessment of endothelial damage was performed by using Adobe Photoshop 7.0 software. The average difference for cell area damage for analyses performed by 1 observer twice was 1.41%. For analyses performed by 2 observers, the average difference was 1.71%. Three masked observers were 100% successful in matching the randomized stained corneas to their randomized processed Adobe images. Vital dye staining of corneal endothelial cells can be combined with Adobe Photoshop software to yield a quantitative assessment of areas of acute endothelial cell loss and/or damage. This described technique holds promise for a more consistent and accurate method to evaluate the surgical trauma to the endothelial cell layer in laboratory models. This method of quantitative analysis can probably be generalized to any area of research that involves areas that are differentiated by color or contrast.
Suh, Chong Hyun; Yun, Seong Jong; Jin, Wook; Lee, Sun Hwa; Park, So Young; Ryu, Chang-Woo
2018-07-01
To assess the sensitivity and specificity of quantitative assessment of the apparent diffusion coefficient (ADC) for differentiating benign and malignant vertebral bone marrow lesions (BMLs) and compression fractures (CFs). An electronic literature search of MEDLINE and EMBASE was conducted. Bivariate modelling and hierarchical summary receiver operating characteristic modelling were performed to evaluate the diagnostic performance of ADC for differentiating vertebral BMLs. Subgroup analysis was performed for differentiating benign and malignant vertebral CFs. Meta-regression analyses according to subject, study and diffusion-weighted imaging (DWI) characteristics were performed. Twelve eligible studies (748 lesions, 661 patients) were included. The ADC exhibited a pooled sensitivity of 0.89 (95% confidence interval [CI] 0.80-0.94) and a pooled specificity of 0.87 (95% CI 0.78-0.93) for differentiating benign and malignant vertebral BMLs. In addition, the pooled sensitivity and specificity for differentiating benign and malignant CFs were 0.92 (95% CI 0.82-0.97) and 0.91 (95% CI 0.87-0.94), respectively. In the meta-regression analysis, the DWI slice thickness was a significant factor affecting heterogeneity (p < 0.01); thinner slice thickness (< 5 mm) showed higher specificity (95%) than thicker slice thickness (81%). Quantitative assessment of ADC is a useful diagnostic tool for differentiating benign and malignant vertebral BMLs and CFs. • Quantitative assessment of ADC is useful in differentiating vertebral BMLs. • Quantitative ADC assessment for BMLs had sensitivity of 89%, specificity of 87%. • Quantitative ADC assessment for CFs had sensitivity of 92%, specificity of 91%. • The specificity is highest (95%) with thinner (< 5 mm) DWI slice thickness.
Koenigkam-Santos, Marcel; Optazaite, Elzbieta; Sommer, Gregor; Safi, Seyer; Heussel, Claus Peter; Kauczor, Hans-Ulrich; Puderbach, Michael
2015-01-01
To propose a technique for the evaluation of pulmonary lesions using contrast-enhanced MRI, to assess morphological patterns of enhancement, and to correlate quantitative analysis with histopathology. Thirty-six patients were prospectively studied. Volumetric-interpolated T1W images were obtained during consecutive breath holds after bolus-triggered contrast injection. Volume coverage of the first three acquisitions was limited (higher temporal resolution) and the last acquisition was obtained at the 4th minute. Two radiologists individually evaluated the patterns of enhancement. Region-of-interest-based signal intensity (SI)-time curves were created to assess quantitative parameters. Readers agreed moderately to substantially on the lesions' enhancement pattern. SI-time curves could be created for all lesions. In comparison with benign lesions, malignant lesions showed higher values of maximum enhancement, early peak, slope and 4th-minute enhancement. An early peak >15% showed 100% sensitivity for detecting malignancy; maximum enhancement >40% showed 100% specificity. The proposed technique is robust, simple to perform and can be applied in the clinical scenario. It allows visual evaluation of enhancement pattern/progression together with creation of SI-time curves and assessment of derived quantitative parameters. Perfusion analysis was highly sensitive for detecting malignancy, in accordance with what is recommended by the most recent guidelines on imaging evaluation of pulmonary lesions. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Friedman, Karen A; Raimo, John; Spielmann, Kelly; Chaudhry, Saima
2016-01-01
Introduction Under the Next Accreditation System, programs need to find ways to collect and assess meaningful reportable information on its residents to assist the program director regarding resident milestone progression. This paper discusses the process that one large Internal Medicine Residency Program used to provide both quantitative and qualitative data to its clinical competency committee (CCC) through the creation of a resident dashboard. Methods Program leadership at a large university-based program developed four new end of rotation evaluations based on the American Board of Internal Medicine (ABIM) and Accreditation Council of Graduated Medical Education's (ACGME) 22 reportable milestones. A resident dashboard was then created to pull together both milestone- and non-milestone-based quantitative data and qualitative data compiled from faculty, nurses, peers, staff, and patients. Results Dashboards were distributed to the members of the CCC in preparation for the semiannual CCC meeting. CCC members adjudicated quantitative and qualitative data to present their cohort of residents at the CCC meeting. Based on the committee's response, evaluation scores remained the same or were adjusted. Final milestone scores were then entered into the accreditation data system (ADS) on the ACGME website. Conclusions The process of resident assessment is complex and should comprise both quantitative and qualitative data. The dashboard is a valuable tool for program leadership to use both when evaluating house staff on a semiannual basis at the CCC and to the resident in person.
Cohen, Deborah J; Balasubramanian, Bijal A; Gordon, Leah; Marino, Miguel; Ono, Sarah; Solberg, Leif I; Crabtree, Benjamin F; Stange, Kurt C; Davis, Melinda; Miller, William L; Damschroder, Laura J; McConnell, K John; Creswell, John
2016-06-29
The Agency for Healthcare Research and Quality (AHRQ) launched the EvidenceNOW Initiative to rapidly disseminate and implement evidence-based cardiovascular disease (CVD) preventive care in smaller primary care practices. AHRQ funded eight grantees (seven regional Cooperatives and one independent national evaluation) to participate in EvidenceNOW. The national evaluation examines quality improvement efforts and outcomes for more than 1500 small primary care practices (restricted to those with fewer than ten physicians per clinic). Examples of external support include practice facilitation, expert consultation, performance feedback, and educational materials and activities. This paper describes the study protocol for the EvidenceNOW national evaluation, which is called Evaluating System Change to Advance Learning and Take Evidence to Scale (ESCALATES). This prospective observational study will examine the portfolio of EvidenceNOW Cooperatives using both qualitative and quantitative data. Qualitative data include: online implementation diaries, observation and interviews at Cooperatives and practices, and systematic assessment of context from the perspective of Cooperative team members. Quantitative data include: practice-level performance on clinical quality measures (aspirin prescribing, blood pressure and cholesterol control, and smoking cessation; ABCS) collected by Cooperatives from electronic health records (EHRs); practice and practice member surveys to assess practice capacity and other organizational and structural characteristics; and systematic tracking of intervention delivery. Quantitative, qualitative, and mixed methods analyses will be conducted to examine how Cooperatives organize to provide external support to practices, to compare effectiveness of the dissemination and implementation approaches they implement, and to examine how regional variations and other organization and contextual factors influence implementation and effectiveness. ESCALATES is a national evaluation of an ambitious large-scale dissemination and implementation effort focused on transforming smaller primary care practices. Insights will help to inform the design of national health care practice extension systems aimed at supporting practice transformation efforts in the USA. NCT02560428 (09/21/15).
A numerical algorithm with preference statements to evaluate the performance of scientists.
Ricker, Martin
Academic evaluation committees have been increasingly receptive for using the number of published indexed articles, as well as citations, to evaluate the performance of scientists. It is, however, impossible to develop a stand-alone, objective numerical algorithm for the evaluation of academic activities, because any evaluation necessarily includes subjective preference statements. In a market, the market prices represent preference statements, but scientists work largely in a non-market context. I propose a numerical algorithm that serves to determine the distribution of reward money in Mexico's evaluation system, which uses relative prices of scientific goods and services as input. The relative prices would be determined by an evaluation committee. In this way, large evaluation systems (like Mexico's Sistema Nacional de Investigadores ) could work semi-automatically, but not arbitrarily or superficially, to determine quantitatively the academic performance of scientists every few years. Data of 73 scientists from the Biology Institute of Mexico's National University are analyzed, and it is shown that the reward assignation and academic priorities depend heavily on those preferences. A maximum number of products or activities to be evaluated is recommended, to encourage quality over quantity.
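The algorithm itself is not reproduced in the abstract. As a loose illustration of the core idea (committee-set relative prices of academic goods and services multiplied by each scientist's quantities, then used to share a fixed reward pool), a hypothetical sketch could be the following; the product categories, prices, counts, and pool size are all invented for illustration.

```python
# Hypothetical sketch: sharing a reward pool in proportion to scores computed
# from committee-set relative prices of academic products and activities.
prices = {"indexed_article": 1.0, "graduated_student": 0.8, "course_taught": 0.3}

scientists = {                                   # hypothetical output counts
    "A": {"indexed_article": 6, "graduated_student": 2, "course_taught": 4},
    "B": {"indexed_article": 2, "graduated_student": 4, "course_taught": 8},
}
pool = 100_000                                   # hypothetical reward money

scores = {name: sum(prices[k] * v for k, v in outputs.items())
          for name, outputs in scientists.items()}
total = sum(scores.values())
for name, s in scores.items():
    print(f"{name}: score {s:.1f}, reward {pool * s / total:,.0f}")
```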
Effects of normalization on quantitative traits in association test
2009-01-01
Background Quantitative trait loci analysis assumes that the trait is normally distributed. In reality, this is often not observed, and one strategy is to transform the trait. However, it is not clear how much normality is required and which transformation works best in association studies. Results We performed simulations on four types of common quantitative traits to evaluate the effects of normalization using the logarithm, Box-Cox, and rank-based transformations. The impact of sample size and genetic effects on normalization was also investigated. Our results show that rank-based transformation generally gives the best and most consistent performance in identifying the causal polymorphism and ranking it highly in association tests, with a slight increase in false positive rate. Conclusion For small sample sizes or genetic effects, the improvement in sensitivity for rank transformation outweighs the slight increase in false positive rate. However, for large sample sizes and genetic effects, normalization may not be necessary since the increase in sensitivity is relatively modest. PMID:20003414
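The rank-based transformation evaluated here is typically implemented as a rank-based inverse normal transform, which maps ranks to normal quantiles. Since the abstract does not specify the exact variant, the sketch below assumes a Blom-type offset and is only illustrative.

```python
# Hypothetical sketch: rank-based inverse normal transformation of a skewed
# quantitative trait before association testing (Blom offset c = 3/8 assumed).
import numpy as np
from scipy import stats

def rank_inverse_normal(x, c=3.0 / 8.0):
    """Map values to normal quantiles via their ranks."""
    ranks = stats.rankdata(x)
    return stats.norm.ppf((ranks - c) / (len(x) - 2 * c + 1))

trait = np.random.default_rng(6).lognormal(size=1000)   # skewed synthetic trait
transformed = rank_inverse_normal(trait)
print("skewness before: %.2f, after: %.2f"
      % (stats.skew(trait), stats.skew(transformed)))
```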
Noge, Sachiko; Ohishi, Tatsuo; Yoshida, Takuya; Kumagai, Hiromichi
2017-01-01
[Purpose] Locomotive syndrome (LS) is a condition by which older people may require care service because of problems with locomotive organs. This study examined whether the loco-check, a 7-item questionnaire, is useful for quantitatively assessing the severity of LS. [Subjects and Methods] Seventy-one community dwelling Japanese females aged 64–96 years (81.7 ± 8.0 years) participated in this study. The associations of the loco-check with thigh muscle mass measured by X-ray CT, physical performance, nutritional status, and quality of life (QOL) were investigated. [Results] The results showed that the number of times that “yes” was selected in the loco-check was significantly correlated with thigh muscle mass, major measures of physical performance, nutritional status, and QOL. This number was also significantly larger in the participants experiencing falling, fracture, and lumbar pain than in those without these episodes. [Conclusion] These results suggest that the loco-check might be useful for quantitatively evaluating LS. PMID:28932003
NASA Astrophysics Data System (ADS)
Hermawan, Hermawan; Prianto, Eddy
2017-12-01
A building can be considered to have good thermal performance if it makes the occupants comfortable. Thermal comfort can be seen from the occupants' response to the architectural elements and the environment, such as lighting, room crowding, air temperature, humidity, oxygen level, and occupant behaviour. The objective of this research is to analyse the thermal performance of houses with four different orientations in a mountainous area. The research was conducted on four exposed-stone houses with four different orientations on the slope of Sindoro Mountain, which has relatively cool temperatures of about 26°C. The measurement of the elements above was done quantitatively and qualitatively for 24 hours. The results are as follows. First, the most comfortable house is the west-oriented house. Second, there is no significant difference (±5%) between the quantitative and qualitative observations. Third, the occupants' behaviours (caring and genen) are also factors influencing occupant comfort.
Tian, Hua; Wang, Xueying; Shu, Gequn; Wu, Mingqiang; Yan, Nanhua; Ma, Xiaonan
2017-09-15
Mixtures of hydrocarbon and carbon dioxide show excellent cycle performance in the Organic Rankine Cycle (ORC) used for engine waste heat recovery, but unavoidable leakage in practical application is a safety threat due to their flammability. In this work, a quantitative risk assessment system (QR-AS) is established to provide a general method of risk assessment for flammable working fluid leakage. The QR-AS covers three main aspects: analysis of concentration distribution based on CFD simulations, explosive risk assessment based on the TNT equivalent method, and risk mitigation based on the evaluation results. A typical case of a propane/carbon dioxide mixture leaking from an ORC is investigated to illustrate the application of the QR-AS. Based on the assessment results, a proper ventilation speed, safe mixture ratio and location of gas-detecting devices are proposed to guarantee security in case of leakage. The results revealed that the presented QR-AS is reliable for practical application and that the evaluation results can provide valuable guidance for the design of mitigation measures to improve the safety performance of the ORC system. Copyright © 2017 Elsevier B.V. All rights reserved.
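The explosive-risk step relies on the TNT equivalent method. The paper's parameter values are not given in the abstract, so the sketch below uses the standard form of the calculation (equivalent mass from heats of combustion with an assumed yield factor) with illustrative numbers only.

```python
# Hypothetical sketch: TNT equivalent mass of a leaked flammable working-fluid
# cloud, W_TNT = eta * m_fuel * dHc_fuel / dHc_TNT, with illustrative values.
def tnt_equivalent_mass(m_fuel_kg, dhc_fuel_mj_per_kg, eta=0.03,
                        dhc_tnt_mj_per_kg=4.68):
    """Equivalent TNT mass; eta is an assumed explosion yield factor."""
    return eta * m_fuel_kg * dhc_fuel_mj_per_kg / dhc_tnt_mj_per_kg

leaked_propane_kg = 2.0                    # hypothetical leaked mass
dhc_propane = 46.3                         # MJ/kg, approximate heat of combustion
print("TNT equivalent: %.2f kg" % tnt_equivalent_mass(leaked_propane_kg, dhc_propane))
```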
Mullan, Barbara A; Kothe, Emily J
2010-11-01
Effective communication is a vital component of nursing care, however, nurses often lack the skills to communicate with patients, carers and other health care professionals. Communication skills training programs are frequently used to develop these skills. However, there is a paucity of data on how best to evaluate such courses. The aim of the current study was to evaluate the relationship between student self rating of their own ability and their satisfaction with a nurse training course as compared with an objective measure of communication skills. 209 first year nursing students completed a communication skills program. Both qualitative and quantitative data were collected and associations between measures were investigated. Paired samples t-tests showed significant improvement in self-rated ability over the course of the program. Students generally were very satisfied with the course which was reflected in both qualitative and quantitative measures. However, neither self-rated ability nor satisfaction was significantly correlated with the objective measure of performance, but self-rated ability and satisfaction were highly correlated with one another. The importance of these findings is discussed and implications for nurse education are proposed. Copyright © 2010 Elsevier Ltd. All rights reserved.
2009-08-01
…assess the performance of remedial efforts. These techniques are expensive and, by themselves, are effectively random samples guided by the training… technology should be further explored and developed for use in pre-amendment tracer tests and quantitative remedial assessments. …and flow of injectate. Site assessment following groundwater remediation efforts typically involves discrete point sampling using wells or…
Lien, W P; Lee, Y S; Chang, F Z; Chen, J J; Shieh, W B
1978-01-01
Quantitative one-plane cineangiocardiography in the right anterior oblique position for evaluation of LV performance was carried out in 62 patients with various heart diseases and in 13 subjects with normal LV. Parameters for evaluating both pump and muscle performance were derived from volume and pressure measurements. Of 31 patients with either systolic hypertension or LV myocardial disease (coronary artery disease or idiopathic cardiomyopathy), 14 had clinical evidence of LV failure before the study. It was found that mean VCF and EF were the most sensitive indicators of impaired LV performance among the various parameters. There was a close correlation between mean VCF and EF, yet discordant changes of both parameters were noted in some patients. Furthermore, wall motion abnormalities were not infrequently observed in patients with coronary artery disease or primary cardiomyopathy. Therefore, assessment of at least three ejection properties (EF, mean VCF and wall motion abnormalities) is considered essential for a full understanding of the derangement of LV function in heart disease. This is especially true of patients with coronary artery disease. LV behavior in relation to different pathological stresses or lesions, such as chronic pressure or volume load, myocardial disease and mitral stenosis, was also studied, and the possible cause of impaired LV myocardial function in mitral stenosis was discussed.
Blew, Robert M; Lee, Vinson R; Farr, Joshua N; Schiferl, Daniel J; Going, Scott B
2014-02-01
Peripheral quantitative computed tomography (pQCT) is an essential tool for assessing bone parameters of the limbs, but subject movement and its impact on image quality remains a challenge to manage. The current approach to determine image viability is by visual inspection, but pQCT lacks a quantitative evaluation. Therefore, the aims of this study were to (1) examine the reliability of a qualitative visual inspection scale and (2) establish a quantitative motion assessment methodology. Scans were performed on 506 healthy girls (9-13 years) at diaphyseal regions of the femur and tibia. Scans were rated for movement independently by three technicians using a linear, nominal scale. Quantitatively, a ratio of movement to limb size (%Move) provided a measure of movement artifact. A repeat-scan subsample (n = 46) was examined to determine %Move's impact on bone parameters. Agreement between measurers was strong (intraclass correlation coefficient = 0.732 for tibia, 0.812 for femur), but greater variability was observed in scans rated 3 or 4, the delineation between repeat and no repeat. The quantitative approach found ≥95% of subjects had %Move <25 %. Comparison of initial and repeat scans by groups above and below 25% initial movement showed significant differences in the >25 % grouping. A pQCT visual inspection scale can be a reliable metric of image quality, but technicians may periodically mischaracterize subject motion. The presented quantitative methodology yields more consistent movement assessment and could unify procedure across laboratories. Data suggest a delineation of 25% movement for determining whether a diaphyseal scan is viable or requires repeat.
Development of iPad application "Postima" for quantitative analysis of the effects of manual therapy
NASA Astrophysics Data System (ADS)
Sugiyama, Naruhisa; Shirakawa, Tomohiro
2017-07-01
The technical difficulty of diagnosing joint misalignment and/or dysfunction by quantitative evaluation is commonly acknowledged among manual therapists. Usually, manual therapists make a diagnosis based on a combination of observing patient symptoms and performing physical examinations, both of which rely on subjective criteria and thus contain some uncertainty. We thus sought to investigate the correlations among posture, skeletal misalignment, and pain severity over the course of manual therapy treatment, and to explore the possibility of establishing objective criteria for diagnosis. For this purpose, we developed an iPad application that realizes the measurement of patients' postures and analyzes them quantitatively. We also discuss the results and effectiveness of the measurement and analysis.
Sachpekidis, Christos; Hillengass, Jens; Goldschmidt, Hartmut; Anwar, Hoda; Haberkorn, Uwe; Dimitrakopoulou-Strauss, Antonia
2017-01-01
Renewed interest has recently developed in the highly sensitive bone-seeking radiopharmaceutical 18F-NaF. The aim of the present study is to evaluate the potential utility of quantitative analysis of 18F-NaF dynamic PET/CT data in differentiating malignant from benign degenerative lesions in multiple myeloma (MM). 80 MM patients underwent whole-body PET/CT and dynamic PET/CT scanning of the pelvis with 18F-NaF. PET/CT data evaluation was based on visual (qualitative) assessment, semi-quantitative (SUV) calculations, and absolute quantitative estimations after application of a 2-tissue compartment model and a non-compartmental approach leading to the extraction of fractal dimension (FD). In total, 263 MM lesions were demonstrated on 18F-NaF PET/CT. Semi-quantitative and quantitative evaluations were performed for 25 MM lesions as well as for 25 benign, degenerative and traumatic lesions. Mean SUVaverage for MM lesions was 11.9 and mean SUVmax was 23.2. Respectively, SUVaverage and SUVmax for degenerative lesions were 13.5 and 20.2. Kinetic analysis of 18F-NaF revealed the following mean values for MM lesions: K1 = 0.248 (1/min), k3 = 0.359 (1/min), influx (Ki) = 0.107 (1/min), FD = 1.382, while the respective values for degenerative lesions were: K1 = 0.169 (1/min), k3 = 0.422 (1/min), influx (Ki) = 0.095 (1/min), FD = 1.411. No statistically significant differences between MM and benign degenerative disease regarding SUVaverage, SUVmax, K1, k3 and influx (Ki) were demonstrated. FD was significantly higher in degenerative than in malignant lesions. The present findings show that quantitative analysis of 18F-NaF PET data cannot differentiate malignant from benign degenerative lesions in MM patients, supporting previously published results, which reflect the limited role of 18F-NaF PET/CT in the diagnostic workup of MM. PMID:28913153
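In an irreversible two-tissue compartment model of the kind applied above, the influx constant is commonly expressed as Ki = K1·k3/(k2 + k3). The sketch below illustrates that relation using the mean K1 and k3 reported for MM lesions; k2 is not given in the abstract, so the value used here is an assumption for illustration only.

```python
# Influx constant of an irreversible two-tissue compartment model:
# Ki = K1 * k3 / (k2 + k3).
def influx_ki(k1: float, k2: float, k3: float) -> float:
    return k1 * k3 / (k2 + k3)

k1, k3 = 0.248, 0.359   # mean MM-lesion values from the abstract (1/min)
k2_assumed = 0.60       # hypothetical value (1/min); k2 is not reported in the abstract
print(round(influx_ki(k1, k2_assumed, k3), 3))  # ~0.093 (1/min)
```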
NASA Astrophysics Data System (ADS)
Green, David M.; Dallaire, Joel D.; Reaper, Jerome H.
2004-08-01
The Joint Battlespace Infosphere (JBI) program is performing a technology investigation into global communications, data mining and warehousing, and data fusion technologies by focusing on techniques and methodologies that support twenty-first-century military distributed collaboration. Advancement of these technologies is vitally important if military decision makers are to have the right data, in the right format, at the right time and place to support making the right decisions within available timelines. A quantitative understanding of individual and combinational effects arising from the application of technologies within a framework is presently far too complex to evaluate at more than a cursory depth. In order to facilitate quantitative analysis under these circumstances, the Distributed Information Enterprise Modeling and Simulation (DIEMS) team was formed to apply modeling and simulation (M&S) techniques to help in addressing JBI analysis challenges. The DIEMS team has been tasked with utilizing collaborative distributed M&S architectures to quantitatively evaluate JBI technologies and tradeoffs. This paper first presents a high-level view of the DIEMS project. Once this approach has been established, a more concentrated view of the detailed communications simulation techniques used in generating the underlying support data sets is presented.
The role of quantitative safety evaluation in regulatory decision making of drugs.
Chakravarty, Aloka G; Izem, Rima; Keeton, Stephine; Kim, Clara Y; Levenson, Mark S; Soukup, Mat
2016-01-01
Evaluation of safety is a critical component of drug review at the US Food and Drug Administration (FDA). Statisticians are playing an increasingly visible role in quantitative safety evaluation and regulatory decision-making. This article reviews the history and the recent events relating to quantitative drug safety evaluation at the FDA. The article then focuses on five active areas of quantitative drug safety evaluation and the role Division of Biometrics VII (DBVII) plays in these areas, namely meta-analysis for safety evaluation, large safety outcome trials, post-marketing requirements (PMRs), the Sentinel Initiative, and the evaluation of risk from extended/long-acting opioids. This article will focus chiefly on developments related to quantitative drug safety evaluation and not on the many additional developments in drug safety in general.
Evaluation of thresholding techniques for segmenting scaffold images in tissue engineering
NASA Astrophysics Data System (ADS)
Rajagopalan, Srinivasan; Yaszemski, Michael J.; Robb, Richard A.
2004-05-01
Tissue engineering attempts to address the ever-widening gap between the demand and supply of organ and tissue transplants using natural and biomimetic scaffolds. The regeneration of specific tissues aided by synthetic materials is dependent on the structural and morphometric properties of the scaffold. These properties can be derived non-destructively using quantitative analysis of high-resolution microCT scans of scaffolds. Thresholding of the scanned images into polymeric and porous phases is central to the outcome of the subsequent structural and morphometric analysis. Visual thresholding of scaffolds produced using stochastic processes is inaccurate. Depending on the algorithmic assumptions made, automatic thresholding might also be inaccurate. Hence there is a need to analyze the performance of different techniques and propose alternate ones, if needed. This paper provides a quantitative comparison of different thresholding techniques for segmenting scaffold images. The thresholding algorithms examined include those that exploit spatial information, locally adaptive characteristics, histogram entropy information, histogram shape information, and clustering of gray-level information. The performance of different techniques was evaluated using established criteria, including misclassification error, edge mismatch, relative foreground error, and region non-uniformity. Algorithms that exploit local image characteristics seem to perform much better than those using global information.
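Among the evaluation criteria listed, misclassification error compares the foreground/background partition produced by a thresholding algorithm against a reference partition. A minimal numpy sketch, assuming binary masks of equal size, is shown below.

```python
import numpy as np

def misclassification_error(ref_mask: np.ndarray, test_mask: np.ndarray) -> float:
    """ME = 1 - (|B_ref ∩ B_test| + |F_ref ∩ F_test|) / (|B_ref| + |F_ref|),
    where F is foreground (True) and B is background (False)."""
    ref = ref_mask.astype(bool)
    test = test_mask.astype(bool)
    agree_fg = np.logical_and(ref, test).sum()
    agree_bg = np.logical_and(~ref, ~test).sum()
    return 1.0 - (agree_fg + agree_bg) / ref.size

# Toy example: a 2x2 image with one misclassified pixel -> ME = 0.25
ref = np.array([[1, 0], [0, 1]], dtype=bool)
test = np.array([[1, 0], [1, 1]], dtype=bool)
print(misclassification_error(ref, test))
```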
Gaass, Thomas; Schneider, Moritz Jörg; Dietrich, Olaf; Ingrisch, Michael; Dinkel, Julien
2017-04-01
Variability across devices, patients, and time still hinders widespread recognition of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) as a quantitative biomarker. The purpose of this work was to introduce and characterize a dedicated microchannel phantom as a model for quantitative DCE-MRI measurements. A perfusable, MR-compatible microchannel network was constructed on the basis of sacrificial melt-spun sugar fibers embedded in a block of epoxy resin. Structural analysis was performed on the basis of light microscopy images before the DCE-MRI experiments. During dynamic acquisition the capillary network was perfused with a standard contrast agent injection system. Flow dependency, as well as inter- and intrascanner reproducibility of the computed DCE parameters, was evaluated using a 3.0 T whole-body MRI. Semi-quantitative and quantitative flow-related parameters exhibited the expected proportionality to the set flow rate (mean Pearson correlation coefficient: 0.991, P < 2.5e-5). The volume fraction was approximately independent of changes in the applied flow rate through the phantom. Repeatability and reproducibility experiments yielded maximum intrascanner coefficients of variation (CV) of 4.6% for quantitative parameters. All evaluated parameters were well within the range of known in vivo results for the applied flow rates. The constructed phantom enables reproducible, flow-dependent, contrast-enhanced MR measurements with the potential to facilitate standardization and comparability of DCE-MRI examinations. © 2017 American Association of Physicists in Medicine.
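Repeatability above is summarized as a coefficient of variation. A short sketch of the percent CV computation, using hypothetical repeated measurements of a single DCE parameter, is given below.

```python
import statistics

def coefficient_of_variation(repeats):
    """Percent CV of repeated measurements of a single DCE-MRI parameter."""
    mean = statistics.mean(repeats)
    sd = statistics.stdev(repeats)
    return 100.0 * sd / mean

# Hypothetical repeated flow-related estimates from the same phantom setup.
print(round(coefficient_of_variation([0.52, 0.55, 0.50, 0.53]), 1))  # ~4.0
```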
Quantitation of polymethoxylated flavones in orange juice by high-performance liquid chromatography.
Rouseff, R L; Ting, S V
1979-08-01
A quantitative high-performance liquid chromatographic (HPLC) procedure for the determination of the five major polymethoxylated flavones (PMFs) in orange juice has been developed. It employs a unique ternary solvent system with coupled UV-fluorescence detection. The dual detectors were employed to determine the presence of interfering substances and served as a cross check on quantitation. Stop-flow UV and fluorescence scanning was used to identify peaks and determine the presence of impurities. Although all five citrus PMFs fluoresce, some HPLC fluorescence peaks were too small to be of much practical use. All five citrus PMFs could be quantitated satisfactorily with the fixed-wavelength UV (313 nm) detector. The HPLC procedure has been used to evaluate each step in the preparation. The optimum extracting solvent was selected and one time-consuming step was eliminated, as it was found to be unnecessary. HPLC values for nobiletin and sinensetin are in good agreement with the thin-layer chromatographic (TLC) values in the literature. HPLC values for the other three flavones were considerably lower than those reported in the literature. The HPLC procedure is considerably faster than the TLC procedure, with equal or superior precision and accuracy.
Shin, Sung Yul; Kim, Jung Yoon; Lee, Sanghyeop; Lee, Junwon; Kim, Seung-Jong; Kim, ChangHwan
2013-06-01
The purpose of this paper is to propose a new assessment method for evaluating the motor function of patients who are suffering from physical weakness after stroke, incomplete spinal cord injury (iSCI) or other diseases. In this work, we use a robotic device to obtain information on the interaction occurring between patient and robot, and use it as a measure for assessing the patients. The Intentional Movement Performance Ability (IMPA) is defined by the root mean square of the interactive torque while the subject performs a given periodic movement with the robot. IMPA is proposed to quantitatively determine the level of the subject's impaired motor function. The method is indirectly tested by asking healthy subjects to lift a barbell to disturb their motor function. The experimental result shows that the IMPA has potential for providing proper information on the subject's motor function level.
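IMPA is defined above as the root mean square of the interaction torque recorded while the subject performs the periodic movement; a minimal sketch of that computation is given below, using a hypothetical torque trace.

```python
import math

def impa(torque_samples):
    """Root mean square of the human-robot interaction torque (Nm)."""
    return math.sqrt(sum(t * t for t in torque_samples) / len(torque_samples))

# Hypothetical torque samples recorded during one periodic movement.
print(round(impa([0.2, -0.5, 0.9, -1.1, 0.4]), 3))  # ~0.703
```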
An automated system for chromosome analysis. Volume 1: Goals, system design, and performance
NASA Technical Reports Server (NTRS)
Castleman, K. R.; Melnyk, J. H.
1975-01-01
The design, construction, and testing of a complete system to produce karyotypes and chromosome measurement data from human blood samples, along with a basis for statistical analysis of quantitative chromosome measurement data, are described. The prototype was assembled, tested, and evaluated on clinical material and thoroughly documented.
Post Advanced Technology Implementation Effects on School Psychologist Job Performance
ERIC Educational Resources Information Center
Hobson, Rana Dirice
2017-01-01
The technology acceptance model (TAM) has been widely used to assess technology adoption in business, education, and health care. The New York City Department of Education (NYCDOE) launched a web-based Individualized Educational Program (IEP) system for school psychologists to use in conducting evaluations and reviews. This quantitative study…
Peer Instruction: An Evaluation of Its Theory, Application, and Contribution
ERIC Educational Resources Information Center
Gok, Tolga; Gok, Ozge
2017-01-01
Many qualitative and quantitative studies performed on peer instruction based on interactive engagement method used in many different disciplines and courses were reviewed in the present study. The researchers examined the effects of peer instruction on students' cognitive skills (conceptual learning, problem solving, reasoning ability, etc.) and…
ERIC Educational Resources Information Center
Burton, Hilary D.
TIS (Technology Information System) is an intelligent gateway system capable of performing quantitative evaluation and analysis of bibliographic citations using a set of Process functions. Originally developed by Lawrence Livermore National Laboratory (LLNL) to analyze information retrieved from three major federal databases, DOE/RECON,…
Developing and Assessing E-Learning Techniques for Teaching Forecasting
ERIC Educational Resources Information Center
Gel, Yulia R.; O'Hara Hines, R. Jeanette; Chen, He; Noguchi, Kimihiro; Schoner, Vivian
2014-01-01
In the modern business environment, managers are increasingly required to perform decision making and evaluate related risks based on quantitative information in the face of uncertainty, which in turn increases demand for business professionals with sound skills and hands-on experience with statistical data analysis. Computer-based training…
Design and application of an array extended blackbody
NASA Astrophysics Data System (ADS)
Zhang, Ya-zhou; Fan, Xiao-li; Lei, Hao; Zhou, Zhi-yuan
2018-02-01
An array extended blackbody is designed to quantitatively measure and evaluate the performance of infrared imaging systems. The theory, structure, control software and application of the blackbody are introduced. Parameters of infrared imaging systems such as the maximum detectable range, detection sensitivity, spatial resolution and temperature resolution can be measured.
An Investigation of Basic Design Capacity Performance in Different Background Students
ERIC Educational Resources Information Center
Cheng, Chu-Yu; Ou, Yang-Kun
2017-01-01
The technological and vocational higher education system in Taiwan is offering an undergraduate degree for design-based vocational high school students and general high school students whose qualitative and quantitative abilities are evaluated through a student selection examination. This study focused on the conceptual understandings of 64…
The contribution of fecal pollution from dogs in urbanized areas can be significant and is an often underestimated problem. Microbial source tracking methods (MST) utilizing quantitative PCR of dog-associated gene sequences encoding 16S rRNA of Bacteroidales are a useful tool to ...
Alvarez Del Arco, D; Rodríguez Rieiro, C; Sanchidrián De Blás, C; Alejos, B; Plá Mestre, R
2012-01-01
We examined the usefulness of incorporating a qualitative phase into the evaluation of the quality of care in a high-resolution medical service carried out with quantitative methods. Quantitative research was performed using a structured questionnaire and selecting interviewees by systematic randomized sampling methods (n=320). In addition, qualitative research was carried out through semi-structured interviews with patients selected by convenience criteria (n=11), observations in the care assistance circuit, and a group interview with health professionals working in the service. A multidisciplinary research team conducted an individual analysis of the information collected in both the quantitative and qualitative phases. Subsequently, three meetings based on group brainstorming techniques were held to identify the diverse contributions of each of the methodologies employed in the research, using affinity graphs to analyse the different results obtained in both phases and to evaluate possible bias arising from the use of qualitative methods. The qualitative research allowed examining specific aspects of the health care service that had been collected in the quantitative phase, harmonizing the results obtained in the previous phase, giving in-depth data on the reasons for patient dissatisfaction with specific aspects, such as waiting times and available infrastructure, and identifying emerging issues of the service which had not been previously assessed. Overall, the qualitative phase enriched the results of the research. It is appropriate and recommendable to incorporate this methodological approach in research aimed at evaluating the quality of service in specific health care settings, since the information is provided first hand, by the voice of the customer. Copyright © 2011 SECA. Published by Elsevier Espana. All rights reserved.
NASA Astrophysics Data System (ADS)
Brekhna, Brekhna; Mahmood, Arif; Zhou, Yuanfeng; Zhang, Caiming
2017-11-01
Superpixels have gradually become popular in computer vision and image processing applications. However, no comprehensive study has been performed to evaluate the robustness of superpixel algorithms in regard to common forms of noise in natural images. We evaluated the robustness of 11 recently proposed algorithms to different types of noise. The images were corrupted with various degrees of Gaussian blur, additive white Gaussian noise, and impulse noise that either made the object boundaries weak or added extra information to them. We performed a robustness analysis of simple linear iterative clustering (SLIC), Voronoi Cells (VCells), flooding-based superpixel generation (FCCS), bilateral geodesic distance (Bilateral-G), superpixel via geodesic distance (SSS-G), manifold SLIC (M-SLIC), Turbopixels, superpixels extracted via energy-driven sampling (SEEDS), lazy random walk (LRW), real-time superpixel segmentation by DBSCAN clustering, and video supervoxels using partially absorbing random walks (PARW) algorithms. The evaluation process was carried out both qualitatively and quantitatively. For quantitative performance comparison, we used achievable segmentation accuracy (ASA), compactness, under-segmentation error (USE), and boundary recall (BR) on the Berkeley image database. The results demonstrated that all algorithms suffered performance degradation due to noise. For Gaussian blur, Bilateral-G exhibited optimal results for the ASA and USE measures, SLIC yielded optimal compactness, whereas FCCS and DBSCAN remained optimal for BR. For the case of additive Gaussian and impulse noises, FCCS exhibited optimal results for ASA, USE, and BR, whereas Bilateral-G remained a close competitor in ASA and USE for Gaussian noise only. Additionally, Turbopixel demonstrated optimal performance for compactness for both types of noise. Thus, no single algorithm was able to yield optimal results for all three types of noise across all performance measures. In conclusion, to solve real-world problems effectively, more robust superpixel algorithms must be developed.
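Among the quantitative measures used, achievable segmentation accuracy (ASA) assigns each superpixel to the ground-truth segment with which it overlaps most and reports the resulting fraction of correctly covered pixels. A numpy sketch, assuming integer label maps of equal shape, is given below.

```python
import numpy as np

def achievable_segmentation_accuracy(superpixels: np.ndarray, ground_truth: np.ndarray) -> float:
    """ASA: fraction of pixels correctly labelled when every superpixel is
    assigned to the ground-truth segment with which it overlaps most."""
    correct = 0
    for sp_label in np.unique(superpixels):
        gt_under_sp = ground_truth[superpixels == sp_label]
        correct += np.bincount(gt_under_sp).max()
    return correct / ground_truth.size

# Toy 2x4 example: two superpixels against a two-segment ground truth.
sp = np.array([[0, 0, 1, 1], [0, 0, 1, 1]])
gt = np.array([[0, 0, 0, 1], [0, 0, 1, 1]])
print(achievable_segmentation_accuracy(sp, gt))  # 0.875
```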
Casartelli, Nicola C; Maffiuletti, Nicola A; Brunner, Romana; Büchi, Marcel; Sutter, Reto; Pfirrmann, Christian W; Naal, Florian D; Leunig, Michael; Bizzini, Mario
2018-04-01
Study Design Cross-sectional study. Objectives To evaluate intrarater and interrater agreement among physical therapists with different clinical experience in performing a visual rating of movement-pattern quality of patients with femoroacetabular impingement (FAI) syndrome using a semi-quantitative scale. Background Visual rating of movement patterns in patients with FAI syndrome is of interest, because poor control of dynamic hip motion is frequently noted. Methods A video camera was used to record the performance of 34 patients with FAI syndrome performing single-limb standing, squat, frontal lunge, hop lunge, bridge, and plank. Visual rating of movement, as recorded on video, was performed by a highly experienced, a moderately experienced, and a novice physical therapist on 2 occasions using a semi-quantitative scale. Hip abductor strength was assessed using dynamometry, and hip pain and function were assessed with a patient-reported questionnaire. Intrarater and interrater agreement among physical therapists was evaluated using Gwet's agreement coefficient 1. Construct validity was evaluated as the association between physical therapists' rating and patients' hip abductor strength, pain, and function. Results Good intrarater and interrater agreement was observed in the highly experienced and moderately experienced physical therapists when rating single-limb standing, bridge, and plank. Poor to moderate intrarater and interrater agreement was found when they rated squat, frontal lunge, and hop lunge. Poor performers, as rated by the highly experienced physical therapist only, demonstrated lower hip abductor strength (P<.05), and similar hip pain and hip function compared to those of good performers. Conclusion Movement-pattern quality of patients with FAI syndrome should be rated by a highly experienced physical therapist. J Orthop Sports Phys Ther 2018;48(4):260-269. doi:10.2519/jospt.2018.7840.
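Agreement above is quantified with Gwet's agreement coefficient 1. The sketch below shows the two-rater, binary-rating form of AC1, which is a simplification relative to the multi-rater, multi-category scale used in the study.

```python
def gwet_ac1_binary(ratings_a, ratings_b):
    """Gwet's AC1 for two raters and binary ratings (1 = good movement quality,
    0 = poor). Simplified relative to the semi-quantitative scale in the study."""
    n = len(ratings_a)
    pa = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n   # observed agreement
    pi = (sum(ratings_a) / n + sum(ratings_b) / n) / 2           # mean prevalence of "good"
    pe = 2 * pi * (1 - pi)                                       # chance agreement
    return (pa - pe) / (1 - pe)

# Hypothetical ratings of 10 patients by two physical therapists.
print(round(gwet_ac1_binary([1, 1, 0, 1, 0, 1, 1, 0, 1, 1],
                            [1, 1, 0, 1, 1, 1, 1, 0, 1, 0]), 3))  # ~0.655
```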
NASA Astrophysics Data System (ADS)
Zhang, Min; Katsumata, Akitoshi; Muramatsu, Chisako; Hara, Takeshi; Suzuki, Hiroki; Fujita, Hiroshi
2014-03-01
Periodontal disease is a common dental disease that affects many adults. The presence of alveolar bone resorption, which can be observed on dental panoramic radiographs, is one of the most important signs of the progression of periodontal disease. Automatically evaluating alveolar bone resorption is of important clinical significance in dental radiology. The purpose of this study was to propose a novel system for automated alveolar-bone-resorption evaluation from digital dental panoramic radiographs for the first time. The proposed system enables visualization and quantitative evaluation of the degree of alveolar bone resorption surrounding the teeth. It has the following procedures: (1) pre-processing of a test image; (2) detection of tooth root apices with a Gabor filter and curve fitting for the root apex line; (3) detection of features related to the alveolar bone by using an image phase congruency map and template matching, and curve fitting for the alveolar line; (4) detection of the occlusion line with a selected Gabor filter; (5) finally, evaluation of the quantitative alveolar-bone-resorption degree in the area surrounding the teeth by simply computing the average ratio of the height of the alveolar bone to the height of the teeth. The proposed scheme was applied to 30 patient cases of digital panoramic radiographs, with alveolar bone resorption of different stages. Our initial trial on these test cases indicates that the quantitative evaluation results are correlated with the alveolar-bone-resorption degree, although the performance still needs further improvement. Therefore, the system has potential clinical practicability.
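The resorption measure described in step (5) reduces to the average, over the evaluated teeth, of the ratio of alveolar-bone height to tooth height. A minimal sketch with hypothetical per-tooth heights is shown below.

```python
def resorption_index(bone_heights, tooth_heights):
    """Average ratio of alveolar-bone height to tooth height over the evaluated
    teeth (heights in the same units, e.g. pixels along the tooth axis)."""
    ratios = [b / t for b, t in zip(bone_heights, tooth_heights)]
    return sum(ratios) / len(ratios)

# Hypothetical per-tooth heights measured between the detected alveolar and occlusion lines.
print(round(resorption_index([10.5, 9.8, 7.2], [21.0, 20.5, 19.8]), 2))  # ~0.45
```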
Stelzeneder, David; Welsch, Goetz Hannes; Kovács, Balázs Krisztián; Goed, Sabine; Paternostro-Sluga, Tatjana; Vlychou, Marianna; Friedrich, Klaus; Mamisch, Tallal Charles; Trattnig, Siegfried
2012-02-01
The purpose of our investigation was to compare quantitative T2 relaxation time measurement evaluation of lumbar intervertebral discs with morphological grading in young to middle-aged patients with low back pain, using a standardized region-of-interest evaluation approach. Three hundred thirty lumbar discs from 66 patients (mean age, 39 years) with low back pain were examined on a 3.0T MR unit. Sagittal T1-FSE, sagittal, coronal, and axial T2-weighted FSE for morphological MRI, as well as a multi-echo spin-echo sequence for T2 mapping, were performed. Morphologically, all discs were classified according to Pfirrmann et al. Equally sized rectangular regions of interest (ROIs) for the annulus fibrosus were selected anteriorly and posteriorly in the outermost 20% of the disc. The space between was defined as the nucleus pulposus. To assess the reproducibility of this evaluation, inter- and intraobserver statistics were performed. The Pfirrmann scoring of 330 discs showed the following results: grade I: six discs (1.8%); grade II: 189 (57.3%); grade III: 96 (29.1%); grade IV: 38 (11.5%); and grade V: one (0.3%). The mean T2 values (in milliseconds) for the anterior and the posterior annulus, and the nucleus pulposus for the respective Pfirrmann groups were: I: 57/30/239; II: 44/67/129; III: 42/51/82; and IV: 42/44/56. The nucleus pulposus T2 values showed a stepwise decrease from Pfirrmann grade I to IV. The posterior annulus showed the highest T2 values in Pfirrmann group II, while the anterior annulus showed relatively constant T2 values in all Pfirrmann groups. The inter- and intraobserver analysis yielded intraclass correlation coefficients (ICC) for average measures in a range from 0.82 (anterior annulus) to 0.99 (nucleus). Our standardized method of region-specific quantitative T2 relaxation time evaluation seems to be able to characterize different degrees of disc degeneration quantitatively. The reproducibility of our ROI measurements is sufficient to encourage the use of this method in future investigations, particularly for longitudinal studies. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
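The T2 values above come from a multi-echo spin-echo acquisition. A common way to estimate T2 in an ROI is a log-linear least-squares fit of the mono-exponential decay S(TE) = S0·exp(-TE/T2); the sketch below works under that assumption and is not a description of the specific fitting routine used in the study.

```python
import numpy as np

def fit_t2(echo_times_ms, signals):
    """Estimate T2 (ms) from a log-linear least-squares fit of S = S0*exp(-TE/T2)."""
    te = np.asarray(echo_times_ms, dtype=float)
    s = np.asarray(signals, dtype=float)
    slope, _intercept = np.polyfit(te, np.log(s), 1)
    return -1.0 / slope

# Synthetic decay with T2 = 80 ms sampled at typical echo times.
te = [10, 20, 30, 40, 50, 60]
sig = [1000 * np.exp(-t / 80.0) for t in te]
print(round(fit_t2(te, sig), 1))  # ~80.0
```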
A comparison of cosegregation analysis methods for the clinical setting.
Rañola, John Michael O; Liu, Quanhui; Rosenthal, Elisabeth A; Shirts, Brian H
2018-04-01
Quantitative cosegregation analysis can help evaluate the pathogenicity of genetic variants. However, genetics professionals without statistical training often use simple methods, reporting only qualitative findings. We evaluate the potential utility of quantitative cosegregation in the clinical setting by comparing three methods. One thousand pedigrees each were simulated for benign and pathogenic variants in BRCA1 and MLH1 using United States historical demographic data to produce pedigrees similar to those seen in the clinic. These pedigrees were analyzed using two robust methods, full likelihood Bayes factors (FLB) and cosegregation likelihood ratios (CSLR), and a simpler method, counting meioses. Both FLB and CSLR outperform counting meioses when dealing with pathogenic variants, though counting meioses is not far behind. For benign variants, FLB and CSLR greatly outperform counting meioses, which is unable to generate evidence for benign variants. Comparing FLB and CSLR, we find that the two methods perform similarly, indicating that quantitative results from either of these methods could be combined in multifactorial calculations. Combining quantitative information will be important, as isolated use of cosegregation in single families will yield classification for less than 1% of variants. To encourage wider use of robust cosegregation analysis, we present a website (http://www.analyze.myvariant.org), which implements the CSLR, FLB, and counting meioses methods for ATM, BRCA1, BRCA2, CHEK2, MEN1, MLH1, MSH2, MSH6, and PMS2. We also present an R package, CoSeg, which performs the CSLR analysis on any gene with user-supplied parameters. Future variant classification guidelines should allow nuanced inclusion of cosegregation evidence against pathogenicity.
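The simplest of the three methods, counting meioses, treats each informative meiosis in which the variant cosegregates with disease as an event with probability 1/2 under the benign (no-linkage) hypothesis, so the evidence is roughly a likelihood ratio of 2^N. The sketch below illustrates that simple method only; it is not a reimplementation of the FLB or CSLR approaches.

```python
def counting_meioses_lr(n_informative_meioses: int) -> float:
    """Likelihood ratio from simple meiosis counting: each informative meiosis in
    which the variant cosegregates with disease has probability 1/2 under the
    benign hypothesis, so LR ~= 2**N."""
    return 2.0 ** n_informative_meioses

print(counting_meioses_lr(4))  # 16.0 -> modest evidence toward pathogenicity
```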
Masci, Ilaria; Vannozzi, Giuseppe; Bergamini, Elena; Pesce, Caterina; Getchell, Nancy; Cappozzo, Aurelio
2013-04-01
Objective quantitative evaluation of motor skill development is of increasing importance for carefully guiding physical exercise programs in childhood. Running is a fundamental motor skill humans adopt to accomplish locomotion, and it is linked to physical activity levels, although its assessment is traditionally carried out using qualitative evaluation tests. The present study aimed at investigating the feasibility of using inertial sensors to quantify developmental differences in the running pattern of young children. Qualitative and quantitative assessment tools were adopted to identify a skill-sensitive set of biomechanical parameters for running and to further our understanding of the factors that determine progression to skilled running performance. Running performances of 54 children between the ages of 2 and 12 years were submitted to both qualitative and quantitative analysis, the former using sequences of developmental level, the latter estimating temporal and kinematic parameters from inertial sensor measurements. Discriminant analysis with running developmental level as the dependent variable identified a set of temporal and kinematic parameters, among those obtained with the sensor, that best classified children into the qualitative developmental levels (accuracy higher than 67%). Multivariate analysis of variance with the quantitative parameters as dependent variables identified whether, and which, specific parameters or parameter subsets were differentially sensitive to specific transitions between contiguous developmental levels. The findings showed that different sets of temporal and kinematic parameters are able to tap all steps of the transitional process in running skill described through qualitative observation and can prospectively be used for applied diagnostic and sport training purposes. Copyright © 2012 Elsevier B.V. All rights reserved.
Lecoeur, Marie; Decaudin, Bertrand; Guillotin, Yoann; Sautou, Valérie; Vaccher, Claude
2015-10-23
Recently, interest in supercritical fluid chromatography (SFC) has increased due to its high throughput and the development of new systems improving chromatographic performance. However, most papers have dealt with fundamental studies and chiral applications, and only a few works have described the validation process of SFC methods. Likewise, evaporative light scattering detection (ELSD) has been widely employed in liquid chromatography, but only a few recent works have presented its quantitative performance when hyphenated with an SFC apparatus. The present paper discusses the quantitative performance of SFC-ELSD compared with HPLC-ELSD for the determination of plasticizers (ATBC, DEHA, DEHT and TOTM) in PVC tubing used as medical devices. After the development of HPLC-ELSD, both methods were evaluated based on the total error approach using accuracy profiles. The results show that HPLC-ELSD was more precise than SFC-ELSD, but lower limits of quantitation were obtained by SFC. Hence, HPLC was validated within the ±10% acceptance limits, whereas SFC lacked the accuracy to quantify the plasticizers. Finally, both methods were used to determine the composition of plasticized-PVC medical devices. The results demonstrated that SFC and HPLC, both hyphenated with ELSD, provided similar results. Copyright © 2015 Elsevier B.V. All rights reserved.
Prognostic Value of Quantitative Stress Perfusion Cardiac Magnetic Resonance.
Sammut, Eva C; Villa, Adriana D M; Di Giovine, Gabriella; Dancy, Luke; Bosio, Filippo; Gibbs, Thomas; Jeyabraba, Swarna; Schwenke, Susanne; Williams, Steven E; Marber, Michael; Alfakih, Khaled; Ismail, Tevfik F; Razavi, Reza; Chiribiri, Amedeo
2018-05-01
This study sought to evaluate the prognostic usefulness of visual and quantitative perfusion cardiac magnetic resonance (CMR) ischemic burden in an unselected group of patients and to assess the validity of consensus-based ischemic burden thresholds extrapolated from nuclear studies. There are limited data on the prognostic value of assessing myocardial ischemic burden by CMR, and there are none using quantitative perfusion analysis. Patients with suspected coronary artery disease referred for adenosine-stress perfusion CMR were included (n = 395; 70% male; age 58 ± 13 years). The primary endpoint was a composite of cardiovascular death, nonfatal myocardial infarction, aborted sudden death, and revascularization after 90 days. Perfusion scans were assessed visually and with quantitative analysis. Cross-validated Cox regression analysis and net reclassification improvement were used to assess the incremental prognostic value of visual or quantitative perfusion analysis over a baseline clinical model, initially as continuous covariates, then using accepted thresholds of ≥2 segments or ≥10% myocardium. After a median 460 days (interquartile range: 190 to 869 days) follow-up, 52 patients reached the primary endpoint. At 2 years, the addition of ischemic burden was found to increase prognostic value over a baseline model of age, sex, and late gadolinium enhancement (baseline model area under the curve [AUC]: 0.75; visual AUC: 0.84; quantitative AUC: 0.85). Dichotomized quantitative ischemic burden performed better than visual assessment (net reclassification improvement 0.043 vs. 0.003 against baseline model). This study was the first to address the prognostic benefit of quantitative analysis of perfusion CMR and to support the use of consensus-based ischemic burden thresholds by perfusion CMR for prognostic evaluation of patients with suspected coronary artery disease. Quantitative analysis provided incremental prognostic value to visual assessment and established risk factors, potentially representing an important step forward in the translation of quantitative CMR perfusion analysis to the clinical setting. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Rădulescu, Marius; Holban, Alina Maria; Mogoantă, Laurențiu; Bălşeanu, Tudor-Adrian; Mogoșanu, George Dan; Savu, Diana; Popescu, Roxana Cristina; Fufă, Oana; Grumezescu, Alexandru Mihai; Bezirtzoglou, Eugenia; Lazar, Veronica; Chifiriuc, Mariana Carmen
2016-06-10
The aim of our research activity was to obtain a biocompatible nanostructured composite based on naturally derived biopolymers (chitin and sodium alginate) loaded with commercial antibiotics (either Cefuroxime or Cefepime) with dual functions, namely promoting wound healing and assuring the local delivery of the loaded antibiotic. Compositional, structural, and morphological evaluations were performed by using thermogravimetric analysis (TGA), scanning electron microscopy (SEM), and Fourier transform infrared spectroscopy (FTIR) analytical techniques. In order to quantitatively and qualitatively evaluate the biocompatibility of the obtained composites, we performed the tetrazolium-salt (MTT) and agar diffusion in vitro assays on the L929 cell line. The antimicrobial potential was evaluated by the viable cell count assay on strains belonging to two clinically relevant bacterial species (i.e., Escherichia coli and Staphylococcus aureus).
Kim, Paul; Lee, Ju Kang; Lim, Oh Kyung; Park, Heung Kyu; Park, Ki Deok
2017-12-01
We investigated the ability of quantitative lymphoscintigraphic assessment to predict the probability of lymphedema development in breast cancer patients in the early postoperative stage. This retrospective study included 201 patients without lymphedema after unilateral breast cancer surgery. Lymphoscintigraphy was performed between 4 and 8 weeks after surgery to evaluate the lymphatic system in the early postoperative stage. Quantitative lymphoscintigraphy was performed using four methods: the ratio of the radiopharmaceutical clearance rate of the affected to the normal hand; the ratio of the radioactivity of the affected to the normal hand; the ratio of the radiopharmaceutical uptake rate of the affected to the normal axilla (RUA); and the ratio of the radioactivity of the affected to the normal axilla (RRA). During a 1-year follow-up, patients with a circumferential interlimb difference of 2 cm at any measurement location and a 200-mL interlimb volume difference were diagnosed with lymphedema. We investigated the difference in quantitative lymphoscintigraphic assessment between the non-lymphedema and lymphedema groups. Quantitative lymphoscintigraphic assessment revealed that the RUA and RRA were significantly lower in the lymphedema group than in the non-lymphedema group. After adjusting the model for all significant variables (body mass index, N-stage, T-stage, type of surgery, and type of lymph node surgery), RRA was associated with lymphedema (odds ratio=0.14; 95% confidence interval, 0.04-0.46; p=0.001). In patients in the early postoperative stage after unilateral breast cancer surgery, quantitative lymphoscintigraphic assessment can be used to predict the probability of developing lymphedema.
Yoon, Jung Hyun; Ko, Kyung Hee; Jung, Hae Kyoung; Lee, Jong Tae
2013-12-01
To determine the correlation of qualitative shear wave elastography (SWE) pattern classification with quantitative SWE measurements and whether it is representative of quantitative SWE values with similar performance. From October 2012 to January 2013, 267 breast masses of 236 women (mean age: 45.12 ± 10.54 years, range: 21-88 years) who had undergone ultrasonography (US), SWE, and subsequent biopsy were included. US BI-RADS final assessment and qualitative and quantitative SWE measurements were recorded. The correlation between pattern classification and mean elasticity, maximum elasticity, elasticity ratio and standard deviation was evaluated. Diagnostic performances of grayscale US, SWE parameters, and US combined with SWE were calculated and compared. Of the 267 breast masses, 208 (77.9%) were benign and 59 (22.1%) were malignant. Pattern classifications significantly correlated with all quantitative SWE measurements, showing the highest correlation with maximum elasticity, r = 0.721 (P<0.001). Sensitivity was significantly decreased for US combined with SWE measurements compared with grayscale US (69.5-89.8% vs. 100.0%), while specificity was significantly improved (62.5-81.7% vs. 13.9%) (P<0.001). The area under the ROC curve (Az) did not show significant differences between grayscale US and US combined with SWE (P>0.05). Pattern classification shows high correlation with maximum stiffness and may be representative of quantitative SWE values. When combined with grayscale US, SWE improves the specificity of US. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Jungmann, Pia M; Baum, Thomas; Schaeffeler, Christoph; Sauerschnig, Martin; Brucker, Peter U; Mann, Alexander; Ganter, Carl; Bieri, Oliver; Rummeny, Ernst J; Woertler, Klaus; Bauer, Jan S
2015-08-01
To determine the impact of axial traction during high-resolution 3.0T MR imaging of the ankle on morphological assessment of articular cartilage and quantitative cartilage imaging parameters. MR images of n=25 asymptomatic ankles were acquired with and without axial traction (6 kg). Coronal and sagittal T1-weighted (w) turbo spin echo (TSE) sequences with a driven equilibrium pulse and sagittal fat-saturated intermediate-w (IMfs) TSE sequences were acquired for morphological evaluation on a four-point scale (1=best, 4=worst). For quantitative assessment of cartilage degradation, segmentation was performed on 2D multislice-multiecho (MSME) SE T2, steady-state free-precession (SSFP; n=8) T2 and SSFP diffusion-weighted imaging (DWI; n=8) images. Wilcoxon tests and paired t-tests were used for statistical analysis. With axial traction, joint space width increased significantly and delineation of cartilage surfaces was rated superior (P<0.05). Cartilage surfaces were best visualized on coronal T1-w images (P<0.05). Differences for cartilage matrix evaluation were smaller. Subchondral bone evaluation, motion artifacts and image quality were not significantly different between the acquisition methods (P>0.05). T2 values were lower at the tibia than at the talus (P<0.001). Reproducibility was better for images with axial traction. Axial traction increased the joint space width, allowed for better visualization of cartilage surfaces, and improved compartment discrimination and the reproducibility of quantitative cartilage parameters. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Using normalization 3D model for automatic clinical brain quantitative analysis and evaluation
NASA Astrophysics Data System (ADS)
Lin, Hong-Dun; Yao, Wei-Jen; Hwang, Wen-Ju; Chung, Being-Tau; Lin, Kang-Ping
2003-05-01
Functional medical imaging, such as PET or SPECT, is capable of revealing physiological functions of the brain and has been broadly used in diagnosing brain disorders through clinical quantitative analysis for many years. In routine procedures, physicians manually select desired ROIs from structural MR images and then obtain physiological information from the corresponding functional PET or SPECT images. The accuracy of quantitative analysis thus relies on that of the subjectively selected ROIs. Therefore, standardizing the analysis procedure is fundamental and important in improving the analysis outcome. In this paper, we propose and evaluate a normalization procedure with a standard 3D-brain model to achieve precise quantitative analysis. In the normalization process, the mutual information registration technique was applied to realign functional medical images to standard structural medical images. Then, the standard 3D-brain model, which shows well-defined brain regions, was used to replace the manual ROIs in the objective clinical analysis. To validate the performance, twenty cases of I-123 IBZM SPECT images were used in practical clinical evaluation. The results show that the quantitative analysis outcomes obtained from this automated method are in agreement with the clinical diagnosis evaluation scores, with less than 3% error on average. In summary, the method obtains precise VOI information automatically from the well-defined standard 3D-brain model, sparing the manual drawing of ROIs slice by slice from structural medical images required in the traditional procedure. That is, the method not only provides precise analysis results, but also improves the processing rate for large volumes of medical images in clinical practice.
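Mutual information registration, mentioned above, aligns the functional and structural images by maximizing the mutual information of their joint intensity histogram. The numpy sketch below computes the MI metric itself from two equally sized images; it does not implement the full registration optimization.

```python
import numpy as np

def mutual_information(img_a: np.ndarray, img_b: np.ndarray, bins: int = 32) -> float:
    """Mutual information of two equally sized images from their joint histogram:
    MI = sum p(a,b) * log(p(a,b) / (p(a) * p(b)))."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)
    p_b = p_ab.sum(axis=0, keepdims=True)
    nz = p_ab > 0
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])))

# An image compared with itself gives large MI; an unrelated image gives a much smaller value.
rng = np.random.default_rng(0)
a = rng.random((64, 64))
print(round(mutual_information(a, a), 3), round(mutual_information(a, rng.random((64, 64))), 3))
```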
Tian, Jie; Liu, Qianqi; Wang, Xi; Xing, Ping; Yang, Zhuowen; Wu, Changjun
2017-01-20
As breast cancer tissues are stiffer than normal tissues, shear wave elastography (SWE) can locally quantify tissue stiffness and provide histological information. Moreover, tissue stiffness can be observed on three-dimensional (3D) colour-coded elasticity maps. Our objective was to evaluate the diagnostic performances of quantitative features in differentiating breast masses by two-dimensional (2D) and 3D SWE. Two hundred ten consecutive women with 210 breast masses were examined with B-mode ultrasound (US) and SWE. Quantitative features of 3D and 2D SWE were assessed, including the elastic modulus standard deviation measured on SWE-mode images (ESDE) and on B-mode images (ESDU), as well as maximum elasticity (Emax). Adding quantitative features to B-mode US improved the diagnostic performance (p < 0.05) and reduced false-positive biopsies (p < 0.0001). The area under the receiver operating characteristic curve (AUC) of 3D SWE was similar to that of 2D SWE for ESDE (p = 0.026) and ESDU (p = 0.159) but inferior to that of 2D SWE for Emax (p = 0.002). Compared with ESDU, ESDE showed a higher AUC on 2D (p = 0.0038) and 3D SWE (p = 0.0057). Our study indicates that quantitative features of 3D and 2D SWE can significantly improve the diagnostic performance of B-mode US, especially the 3D SWE ESDE, which shows considerable clinical value.
Quantiprot - a Python package for quantitative analysis of protein sequences.
Konopka, Bogumił M; Marciniak, Marta; Dyrka, Witold
2017-07-17
The field of protein sequence analysis is dominated by tools rooted in substitution matrices and alignments. A complementary approach is provided by methods of quantitative characterization. A major advantage of the approach is that quantitative properties define a multidimensional solution space, where sequences can be related to each other and differences can be meaningfully interpreted. Quantiprot is a software package in Python which provides a simple and consistent interface to multiple methods for quantitative characterization of protein sequences. The package can be used to calculate dozens of characteristics directly from sequences or using physico-chemical properties of amino acids. Besides basic measures, Quantiprot performs quantitative analysis of recurrence and determinism in the sequence, calculates the distribution of n-grams and computes the Zipf's law coefficient. We propose three main fields of application of the Quantiprot package. First, quantitative characteristics can be used in alignment-free similarity searches and in clustering of large and/or divergent sequence sets. Second, a feature space defined by quantitative properties can be used in comparative studies of protein families and organisms. Third, the feature space can be used for evaluating generative models, where a large number of sequences generated by the model can be compared to actually observed sequences.
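Among the characteristics mentioned are the n-gram distribution and a Zipf's law coefficient. The sketch below shows plain-Python versions of those two quantities; it deliberately does not use the Quantiprot API, whose function names are not given in the abstract, and the Zipf coefficient is estimated here simply as the slope of log-frequency versus log-rank.

```python
import math
from collections import Counter

def ngram_counts(sequence: str, n: int = 2) -> Counter:
    """Frequency distribution of overlapping n-grams in a protein sequence."""
    return Counter(sequence[i:i + n] for i in range(len(sequence) - n + 1))

def zipf_coefficient(counts: Counter) -> float:
    """Slope of log(frequency) vs. log(rank); the slope is negative and its
    magnitude is one common estimate of the Zipf's law exponent."""
    freqs = sorted(counts.values(), reverse=True)
    xs = [math.log(rank) for rank in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

seq = "MKVLAAGLLLLAAGLLLAAKVLM"  # hypothetical toy sequence
print(round(zipf_coefficient(ngram_counts(seq, 2)), 3))
```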
Scoring severity in trauma: comparison of prehospital scoring systems in trauma ICU patients.
Llompart-Pou, J A; Chico-Fernández, M; Sánchez-Casado, M; Salaberria-Udabe, R; Carbayo-Górriz, C; Guerrero-López, F; González-Robledo, J; Ballesteros-Sanz, M Á; Herrán-Monge, R; Servià-Goixart, L; León-López, R; Val-Jordán, E
2017-06-01
We evaluated the predictive ability of the mechanism, Glasgow coma scale, age and arterial pressure (MGAP) score, the Glasgow coma scale, age and systolic blood pressure (GAP) score, and the Triage-Revised Trauma Score (T-RTS) in patients from the Spanish trauma ICU registry, using the trauma and injury severity score (TRISS) as a reference standard. Patients admitted for traumatic disease to the participating ICUs were included. Quantitative data were reported as median [interquartile range (IQR)] and categorical data as number (percentage). Comparisons between groups for quantitative and categorical variables were performed using Student's t test and the chi-square test, respectively. We constructed receiver operating characteristic (ROC) curves and evaluated the area under the curve (AUC) with its 95% confidence interval (CI). Sensitivity, specificity, positive and negative predictive values and accuracy were evaluated for all the scores. A value of p < 0.05 was considered significant. The final sample included 1361 trauma ICU patients. Median age was 45 (30-61) years. 1092 patients (80.3%) were male. Median ISS was 18 (13-26) and median T-RTS was 11 (10-12). Median GAP was 20 (15-22) and median MGAP 24 (20-27). Observed mortality was 17.7%, whilst predicted mortality using TRISS was 16.9%. The AUC of the scores evaluated was: TRISS 0.897 (95% CI 0.876-0.918), MGAP 0.860 (95% CI 0.835-0.886), GAP 0.849 (95% CI 0.823-0.876) and T-RTS 0.796 (95% CI 0.762-0.830). Both the MGAP and GAP scores performed better than the T-RTS in the prediction of hospital mortality in Spanish trauma ICU patients. Since these are easy-to-perform scores, they should be incorporated into clinical practice as a triaging tool.
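Discrimination above is reported as the area under the ROC curve. The sketch below computes AUC through its rank-sum (Mann-Whitney) interpretation, assuming a score oriented so that higher values mean higher predicted mortality; the scores shown are hypothetical, not registry data.

```python
def roc_auc(scores_positive, scores_negative):
    """AUC as the probability that a randomly chosen positive case (death) scores
    higher than a randomly chosen negative case (survivor); ties count 0.5."""
    wins = 0.0
    for p in scores_positive:
        for n in scores_negative:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_positive) * len(scores_negative))

# Hypothetical predicted mortality scores for 4 deaths and 6 survivors.
print(roc_auc([0.9, 0.7, 0.8, 0.4], [0.2, 0.3, 0.5, 0.1, 0.4, 0.6]))  # ~0.896
```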
McCance, Tanya; Wilson, Val; Kornman, Kelly
2016-07-01
The aim of the Paediatric International Nursing Study was to explore the utility of key performance indicators in developing person-centred practice across a range of services provided to sick children. The objective addressed in this paper was to evaluate the use of these indicators to benchmark services internationally. This study builds on primary research, which produced indicators that were considered novel both in terms of their positive orientation and their use in generating data that privileges the patient voice. This study extends that research through wider testing on an international platform within paediatrics. The overall methodological approach was a realistic evaluation of the implementation of the key performance indicators, which combined an integrated development and evaluation methodology. The study involved children's wards/hospitals in Australia (six sites across three states) and Europe (seven sites across four countries). Qualitative and quantitative methods were used during the implementation process; however, this paper reports the quantitative data only, which were collected using surveys, observations and documentary review. The findings demonstrate the quality of care being delivered to children and their families across different international sites. The benchmarking does, however, highlight some differences between paediatric and general hospitals, and between the different key performance indicators across all the sites. The findings support the use of the key performance indicators as a novel method to benchmark services internationally. Whilst the data collected across 20 paediatric sites suggest services are more similar than different, benchmarking illuminates variations that encourage a critical dialogue about what works and why. The transferability of the key performance indicators and measurement framework across different settings has significant implications for practice. The findings offer an approach to benchmarking and celebrating the successes within practice, while learning from partners across the globe in further developing person-centred cultures. © 2016 John Wiley & Sons Ltd.
Barrow, Emma; Evans, D Gareth; McMahon, Ray; Hill, James; Byers, Richard
2011-03-01
Lynch Syndrome is caused by mutations in DNA mismatch repair (MMR) genes. Mutation carrier identification is facilitated by immunohistochemical detection of the MMR proteins MLH1 and MSH2 in tumour tissue and is desirable as colonoscopic screening reduces mortality. However, protein detection by conventional immunohistochemistry (IHC) is subjective, and quantitative techniques are required. Quantum dots (QDs) are novel fluorescent labels that enable quantitative multiplex staining. This study compared their use with quantitative 3,3'-diaminobenzidine (DAB) IHC for the diagnosis of Lynch Syndrome. Tumour sections from 36 mutation carriers and six controls were obtained. These were stained with DAB on an automated platform using antibodies against MLH1 and MSH2. Multiplex QD immunofluorescent staining of the sections was performed using antibodies against MLH1, MSH2 and smooth muscle actin (SMA). Multispectral analysis of the slides was performed. The staining intensity of DAB and QDs was measured in multiple colonic crypts, and the mean intensity scores were calculated. Receiver operating characteristic (ROC) curves of staining performance for the identification of mutation carriers were evaluated. For quantitative DAB IHC, the area under the MLH1 ROC curve was 0.872 (95% CI 0.763 to 0.981), and the area under the MSH2 ROC curve was 0.832 (95% CI 0.704 to 0.960). For quantitative QD IHC, the area under the MLH1 ROC curve was 0.812 (95% CI 0.681 to 0.943), and the area under the MSH2 ROC curve was 0.598 (95% CI 0.418 to 0.777). Despite the advantage that QD staining enables several markers to be measured simultaneously, it is of lower utility than DAB IHC for the identification of MMR mutation carriers. Automated DAB IHC staining and quantitative slide analysis may enable high-throughput IHC.
NASA Astrophysics Data System (ADS)
Favicchio, Rosy; Psycharakis, Stylianos; Schönig, Kai; Bartsch, Dusan; Mamalaki, Clio; Papamatheakis, Joseph; Ripoll, Jorge; Zacharakis, Giannis
2016-02-01
Fluorescent proteins and dyes are routine tools for biological research to describe the behavior of genes, proteins, and cells, as well as more complex physiological dynamics such as vessel permeability and pharmacokinetics. The use of these probes in whole body in vivo imaging would allow extending the range and scope of current biomedical applications and would be of great interest. In order to comply with a wide variety of application demands, in vivo imaging platform requirements span from wide spectral coverage to precise quantification capabilities. Fluorescence molecular tomography (FMT) detects and reconstructs in three dimensions the distribution of a fluorophore in vivo. Noncontact FMT allows fast scanning of an excitation source and noninvasive measurement of emitted fluorescent light using a virtual array detector operating in free space. Here, a rigorous process is defined that fully characterizes the performance of a custom-built horizontal noncontact FMT setup. Dynamic range, sensitivity, and quantitative accuracy across the visible spectrum were evaluated using fluorophores with emissions between 520 and 660 nm. These results demonstrate that high-performance quantitative three-dimensional visible light FMT allowed the detection of challenging mesenteric lymph nodes in vivo and the comparison of spectrally distinct fluorescent reporters in cell culture.
Docx, Lise; Sabbe, Bernard; Provinciael, Pieter; Merckx, Niel; Morrens, Manuel
2013-01-01
The aim of the present study was to investigate the predictive value of qualitative psychomotor performance levels and subaspects of the negative syndrome for quantitative motor activity levels in patients with schizophrenia. Twenty-seven stabilized patients with schizophrenia and 22 age- and sex-matched healthy controls were included in the study. An extensive battery of psychomotor performance tests (Finger Tapping Test, Purdue Pegboard Test, Line Copying Test, Neurological Evaluation Scale, Salpêtrière Retardation Rating Scale), clinical rating scales (Positive and Negative Syndrome Scale) and 24-hour actigraphy was administered to all participants. Correlational analyses showed that motor activity levels were associated with avolition as well as with clinically assessed psychomotor slowing. However, in a regression model, only avolition was found to be a significant predictor of motor activity levels in patients with schizophrenia; neither the psychomotor performance tests nor the severity of emotional expressivity deficits contributed to the model. Qualitative and quantitative psychomotor deficits seem to be independent phenomena in stabilized patients with schizophrenia. The decrease in motor activity in patients with schizophrenia is related to a loss of drive and not to problems in the quality of movement execution. © 2013 S. Karger AG, Basel.
Building-integrated photovoltaics: A case study
NASA Astrophysics Data System (ADS)
Kiss, G.; Kinkead, J.; Raman, M.
1995-03-01
In 1992, Kiss Cathcart Anders Architects performed a study for NREL on Building-Integrated Photovoltaics (BIPV) issues as seen from the perspective of the building community. In general, the purpose of the study was to list major issues and potential applications; by its nature it asked more questions than it answered. This second-phase study was intended to produce quantitative data on the performance of specific BIPV systems. Only roof systems are evaluated. The energy performance, construction cost and simple payback for five different BIPV roof options are evaluated in six different locations: Oakland, New York, Miami, Phoenix, Chicago, and Cincinnati. The roof options evaluated include the following: a single-glazed PV roof using glass-substrate PVs; a double-glazed PV roof with insulating PV modules; a ballasted roof-mounted system; a sawtooth light-monitor roof with indirect north daylighting; and a sawtooth roof with north light and active heat recovery.
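Simple payback, the economic measure used above, is just the installed cost divided by the annual energy cost savings. A minimal sketch with hypothetical figures (not values from the study) is shown below.

```python
def simple_payback_years(installed_cost_usd: float,
                         annual_energy_kwh: float,
                         electricity_price_usd_per_kwh: float) -> float:
    """Simple payback = net installed cost / annual energy cost savings."""
    annual_savings = annual_energy_kwh * electricity_price_usd_per_kwh
    return installed_cost_usd / annual_savings

# Hypothetical BIPV roof: illustrative numbers only.
print(round(simple_payback_years(60000.0, 12000.0, 0.12), 1))  # ~41.7 years
```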
Crouch, Edmund A; Labarre, David; Golden, Neal J; Kause, Janell R; Dearfield, Kerry L
2009-10-01
The U.S. Department of Agriculture, Food Safety and Inspection Service is exploring quantitative risk assessment methodologies to incorporate the use of the Codex Alimentarius' newly adopted risk management metrics (e.g., food safety objectives and performance objectives). It is suggested that use of these metrics would more closely tie the results of quantitative microbial risk assessments (QMRAs) to public health outcomes. By estimating the food safety objective (the maximum frequency and/or concentration of a hazard in a food at the time of consumption) and the performance objective (the maximum frequency and/or concentration of a hazard in a food at a specified step in the food chain before the time of consumption), risk managers will have a better understanding of the appropriate level of protection (ALOP) from microbial hazards for public health protection. We here demonstrate a general methodology that allows identification of an ALOP and evaluation of corresponding metrics at appropriate points in the food chain. It requires a two-dimensional probabilistic risk assessment, the example used being the Monte Carlo QMRA for Clostridium perfringens in ready-to-eat and partially cooked meat and poultry products, with minor modifications to evaluate and abstract required measures. For demonstration purposes, the QMRA model was applied specifically to hot dogs produced and consumed in the United States. Evaluation of the cumulative uncertainty distribution for illness rate allows a specification of an ALOP that, with defined confidence, corresponds to current industry practices.
A whole-cell bioreporter assay for quantitative genotoxicity evaluation of environmental samples.
Jiang, Bo; Li, Guanghe; Xing, Yi; Zhang, Dayi; Jia, Jianli; Cui, Zhisong; Luan, Xiao; Tang, Hui
2017-10-01
Whole-cell bioreporters have emerged as promising tools for genotoxicity evaluation, due to their rapidity, cost-effectiveness, sensitivity and selectivity. In this study, a method for detecting genotoxicity in environmental samples was developed using the bioluminescent whole-cell bioreporter Escherichia coli recA::luxCDABE. To further test its performance in a real-world scenario, the E. coli bioreporter was applied in two cases: i) soil samples collected from chromium(VI)-contaminated sites; ii) crude oil-contaminated seawater collected after the Jiaozhou Bay oil spill, which occurred in 2013. The chromium(VI)-contaminated soils were pretreated by water extraction and directly exposed to the bioreporter in two phases: aqueous soil extraction (water phase) and soil supernatant (solid phase). The results indicated that both extractable and soil-particle-fixed chromium(VI) were bioavailable to the bioreporter, and the solid-phase contact bioreporter assay provided a more precise evaluation of soil genotoxicity. For crude oil-contaminated seawater, the response of the bioreporter clearly illustrated the spatial and temporal changes in genotoxicity surrounding the spill site, suggesting that the crude oil degradation process decreased the genotoxic risk to the ecosystem. In addition, the performance of the bioreporter was simulated by a modified cross-regulation gene expression model, which quantitatively described the DNA damage response of the E. coli bioreporter. Accordingly, the bioluminescent response of the bioreporter was calculated as the mitomycin C equivalent, enabling quantitative comparison of genotoxicities between different environmental samples. This bioreporter assay provides a rapid and sensitive screening tool for direct genotoxicity assessment of environmental samples. Copyright © 2017. Published by Elsevier Ltd.
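Expressing a sample's genotoxicity as a mitomycin C equivalent, as described above, is commonly done by interpolating the sample's bioluminescent induction against a mitomycin C calibration curve. The sketch below assumes that interpolation approach and uses hypothetical calibration points; it is not the cross-regulation model mentioned in the abstract.

```python
import numpy as np

def mitomycin_c_equivalent(sample_induction: float,
                           calib_conc_ug_per_l, calib_induction) -> float:
    """Interpolate a sample's induction ratio on a mitomycin C dose-response
    curve (assumed monotonic over the calibrated range)."""
    return float(np.interp(sample_induction, calib_induction, calib_conc_ug_per_l))

# Hypothetical calibration: induction ratios measured at known MMC concentrations.
conc = [0.0, 5.0, 10.0, 20.0, 40.0]
induction = [1.0, 1.8, 2.7, 4.5, 8.0]
print(round(mitomycin_c_equivalent(3.5, conc, induction), 1))  # ~14.4 ug/L MMC equivalent
```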
Metrics and the effective computational scientist: process, quality and communication.
Baldwin, Eric T
2012-09-01
Recent treatments of computational knowledge worker productivity have focused upon the value the discipline brings to drug discovery using positive anecdotes. While this big picture approach provides important validation of the contributions of these knowledge workers, the impact accounts do not provide the granular detail that can help individuals and teams perform better. I suggest balancing the impact-focus with quantitative measures that can inform the development of scientists. Measuring the quality of work, analyzing and improving processes, and the critical evaluation of communication can provide immediate performance feedback. The introduction of quantitative measures can complement the longer term reporting of impacts on drug discovery. These metric data can document effectiveness trends and can provide a stronger foundation for the impact dialogue. Copyright © 2012 Elsevier Ltd. All rights reserved.
Gimbel, Ronald W; Cruess, David F; Schor, Kenneth; Hooper, Tomoko I; Barbour, Galen L
2008-10-01
To provide baseline data on evaluation of faculty performance in U.S. schools and programs of public health. The authors administered an anonymous Internet-based questionnaire using PHP Surveyor. The invited sample consisted of individuals listed in the Council on Education for Public Health (CEPH) Directory of Accredited Schools and Programs of Public Health. The authors explored performance measures in teaching, research, and service, and assessed how faculty performance measures are used. A total of 64 individuals (60.4%) responded to the survey, with 26 (40.6%) reporting accreditation/reaccreditation by CEPH within the preceding 24 months. Although all schools and programs employ faculty performance evaluations, a significant difference exists between schools and programs in the use of results for merit pay increases and mentoring purposes. Thirty-one (48.4%) of the organizations published minimum performance expectations. Fifty-nine (92.2%) of the respondents counted number of publications, but only 22 (34.4%) formally evaluated their quality. Sixty-two (96.9%) evaluated teaching through student course evaluations, and only 29 (45.3%) engaged in peer assessment. Although aggregate results of teaching evaluation are available to faculty and administrators, this information is often unavailable to students and the public. Most schools and programs documented faculty service activities qualitatively but neither assessed it quantitatively nor evaluated its impact. This study provides insight into how schools and programs of public health evaluate faculty performance. Results suggest that although schools and programs do evaluate faculty performance on a basic level, many do not devote substantial attention to this process.
Van Hecke, L L; Hermans, K; Haspeslagh, M; Chiers, K; Pint, E; Boyen, F; Martens, A M
2017-07-01
The aim of this study was to evaluate different techniques for diagnosing wound infection in wounds healing by second intention in horses and to assess the effect of a vortex and sonication protocol on quantitative bacteriology in specimens with a histologically confirmed biofilm. In 50 wounds healing by second intention, a clinical assessment, a quantitative swab, a semi-quantitative swab, and a swab for cytology were compared to a quantitative tissue biopsy (reference standard). Part of the biopsy specimen was examined histologically for evidence of a biofilm. There was a significant, high correlation (P<0.001; r=0.747) between the outcome of the quantitative swabs and the quantitative biopsies. The semi-quantitative swabs showed a significant, moderate correlation with the quantitative biopsies (P<0.001; ρ=0.524). Higher white blood cell counts for cytology were significantly associated with lower log10 colony-forming units (CFU) in the wounds (P=0.02). Wounds with black granulation tissue showed significantly higher log10 CFU (P=0.003). Specimens with biofilms did not yield higher bacteriological counts after a vortex and sonication protocol was performed to release bacteria from the biofilm. Based on these findings, a quantitative swab is an acceptable non-invasive alternative to a quantitative biopsy for quantifying bacterial load in equine wounds healing by second intention. Copyright © 2017 Elsevier Ltd. All rights reserved.
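The correlation statistics reported above (Pearson r for the quantitative swab and Spearman ρ for the semi-quantitative swab, each against the biopsy reference) can be computed as in this sketch; the paired counts are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical paired measurements from the same wounds.
log_cfu_biopsy = np.array([2.1, 3.4, 4.0, 5.2, 3.8, 4.9, 2.7, 5.5])
log_cfu_swab   = np.array([2.4, 3.1, 4.3, 5.0, 4.1, 5.3, 2.5, 5.8])
semiquant_swab = np.array([1, 2, 3, 4, 3, 4, 1, 4])   # ordinal growth categories

r, p_r = stats.pearsonr(log_cfu_swab, log_cfu_biopsy)        # quantitative swab vs biopsy
rho, p_rho = stats.spearmanr(semiquant_swab, log_cfu_biopsy) # semi-quantitative swab vs biopsy
print(f"Pearson r = {r:.3f} (P = {p_r:.3f}); Spearman rho = {rho:.3f} (P = {p_rho:.3f})")
```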
NASA Technical Reports Server (NTRS)
Dennehy, Cornelius J.
2013-01-01
The NASA Engineering and Safety Center (NESC) received a request from the NASA Associate Administrator (AA) for Human Exploration and Operations Mission Directorate (HEOMD) to quantitatively evaluate the individual performance of three light detection and ranging (LIDAR) rendezvous sensors flown as an orbiter development test objective on Space Transportation System (STS)-127, STS-133, STS-134, and STS-135. This document contains the outcome of the NESC assessment.
Simbol-X Background Minimization: Mirror Spacecraft Passive Shielding Trade-off Study
NASA Astrophysics Data System (ADS)
Fioretti, V.; Malaguti, G.; Bulgarelli, A.; Palumbo, G. G. C.; Ferri, A.; Attinà, P.
2009-05-01
The present work shows a quantitative trade-off analysis of the Simbol-X Mirror Spacecraft (MSC) passive shielding, in the phase space of the various parameters: mass budget, dimension, geometry and composition. A simplified physical (and geometrical) model of the sky screen, implemented by means of a GEANT4 simulation, has been developed to perform a performance-driven mass optimization and evaluate the residual background level on Simbol-X focal plane.
NASA Astrophysics Data System (ADS)
Lee, S. S.; Kim, H. J.; Kim, M. O.; Lee, K.; Lee, K. K.
2016-12-01
A study was performed to find evidence of remediation, as represented in monitoring data collected before and after intensive in-situ remedial action, using various quantitative evaluation methods such as mass discharge analysis, tracer data, statistical trend analysis, and analytical solutions at a DNAPL-contaminated site in Wonju, Korea. Remediation technologies such as soil vapor extraction, soil flushing, biostimulation, and pump-and-treat have been applied to eliminate the contaminant sources of trichloroethylene (TCE) and to prevent the migration of the TCE plume from remediation target zones. Prior to the remedial action, the concentrations and mass discharges of TCE at all transects were affected by seasonal recharge variation and residual DNAPL sources. After the remediation, the effect of remediation appeared clearly at the main source zone and the industrial complex. By tracing a time-series of plume evolution, a greater variation in the TCE concentrations was detected at the plumes near the source zones compared to the relatively stable plumes downstream. The amount of residual source mass removed during the intensive remedial action was estimated with an analytical solution in order to evaluate the efficiency of the action. The quantitative evaluation using the analytical solution indicates that the intensive remedial action was effective, with a removal efficiency of 70% for the residual source mass during the remediation period. Analytical solutions that can consider and quantify the impacts of partial mass reduction have proven to be useful tools for quantifying unknown contaminant source mass, verifying dissolved concentrations at the DNAPL-contaminated site, and evaluating the efficiency of remediation using long-term monitoring data. Acknowledgement: This subject was supported by the Korea Ministry of Environment under the "GAIA project (173-092-009) and (201400540010)" and the "R&D Project on Environmental Management of Geologic CO2 Storage" from the KEITI (Project number: 2014001810003).
Howe, A
1998-01-01
BACKGROUND: General practitioners (GPs) should be able to detect psychological distress in their patients. However, there is much evidence of underperformance in this area. The principle of clinical audit is the identification of underperformance and amelioration of its causes, but there appear to be few evaluated models of audit in this area of clinical practice. AIM: To evaluate the feasibility of auditing GPs' performance as detectors of psychological distress. Specific objectives were to test a model of the audit cycle in the detection of psychological distress by GPs; to research GP perceptions of prior audit activity in this area and the validity of the instruments used to measure GP performance; and to research GP perceptions of the value of this specific approach to the audit of their performance and the particular value of different aspects of the model in terms of its impact on clinician behaviour. METHOD: Prospective controlled study of an audit cycle of GP detection of psychological distress. Nineteen GP principals used a self-directed educational intervention involving measurement of their performance, followed by data feedback and review of selected videotaped consultations. Qualitative data on GP views of audit in this area of clinical activity were collected before and after the quantitative data collection. RESULTS: The study shows that the GP cohort had not previously considered auditing their performance as detectors of psychological distress. They found the instruments of measurement and the model of audit acceptable. However, they also suggested modifications that might be educationally more effective and make the audit more practical. These included smaller patient numbers and more peer contact. The implications of the study for a definitive model of audit in this area are discussed. CONCLUSION: Effective audit of GP performance in detection of psychological distress is possible using validated instruments, and GP performance can be improved by educational intervention. GPs in this study appear more motivated by individual case studies and reflection through video analysis on undiagnosed patients than by quantitative data feedback on their performance. This study therefore supports other evidence that clinical audit has most impact when quantitative data is coupled with clinical examples derived from patient review. PMID:9604413
Evaluation of the long-term performance of six alternative disposal methods for LLRW
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kossik, R.; Sharp, G.; Chau, T.
1995-12-31
The State of New York has carried out a comparison of six alternative disposal methods for low-level radioactive waste (LLRW). An important part of these evaluations involved quantitatively analyzing the long-term (10,000 yr) performance of the methods with respect to dose to humans, radionuclide concentrations in the environment, and cumulative release from the facility. Four near-surface methods (covered above-grade vault, uncovered above-grade vault, below-grade vault, augered holes) and two mine methods (vertical shaft mine and drift mine) were evaluated. Each method was analyzed for several generic site conditions applicable for the state. The evaluations were carried out using RIP (Repository Integration Program), an integrated, total system performance assessment computer code which has been applied to radioactive waste disposal facilities both in the U.S. (Yucca Mountain, WIPP) and worldwide. The evaluations indicate that mines in intact low-permeability rock and near-surface facilities with engineered covers generally have a high potential to perform well (within regulatory limits). Uncovered above-grade vaults and mines in highly fractured crystalline rock, however, have a high potential to perform poorly, exceeding regulatory limits.
Li, Ning; Ziegemeier, Daisy; Bass, Laura; Wang, Wei
2008-12-15
In this study, size exclusion high performance liquid chromatography was evaluated for its application in separation and quantitation of free polyethylene glycol (PEG) and its PEGylated-protein-conjugate (PEG-conjugate). Although the large mass of the free PEG (2-fold greater than the protein) made separation difficult, chromatographic conditions were identified enabling resolution and quantitation of the free PEG, PEG-conjugate and non-PEGylated protein with Shodex Protein KW803 and KW804 columns in series and refractive index detection. The optimum resolution of 1.7 and 2.0 was achieved for the free PEG and PEG-conjugate as well as the free PEG and non-PEGylated protein using 20 mM HEPES buffer at pH 6.5. Under this condition, the plot of log10 MW of all the pertinent analytes against retention time showed a linear relationship with a correlation coefficient of 1. Limited assay performance evaluation demonstrated that the method was linear in the concentration range of 10 to 250 microg/mL of free PEG with correlation coefficients of ≥0.99. When free PEG in this concentration range was spiked into PEG-conjugate samples at 1 mg/mL, the recovery was in the range of 78%-120%. Detection and quantitation limits were determined to be, respectively, 10 and 25 microg/mL for free PEG. The R.S.D. for intra- and inter-day precision was 0.09% or less for retention time measurements and 2.9% or less for area count measurements. Robustness testing was performed by deliberately deviating ±0.2 pH units away from the desired pH as well as by increasing the flow rate. These deviations resulted in no significant impact on area percent distribution of all species. However, separation was found to be sensitive to high ionic strength and buffer species.
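A sketch of the two calculations described above, the log10 MW versus retention time calibration and the free-PEG spike recovery, is given below; all retention times, peak areas and concentrations are placeholders, not the published data.

```python
import numpy as np

# Hypothetical molecular-weight standards: log10(MW) versus retention time (min).
log_mw   = np.array([5.3, 4.9, 4.5, 4.0, 3.6])
ret_time = np.array([14.2, 15.6, 17.1, 18.9, 20.4])
slope, intercept = np.polyfit(ret_time, log_mw, 1)
r = np.corrcoef(ret_time, log_mw)[0, 1]
print(f"log10(MW) = {slope:.3f}*t + {intercept:.3f}, r = {r:.3f}")

# Free-PEG calibration (peak area vs concentration) and a spike-recovery check.
conc = np.array([10, 25, 50, 100, 250])                 # microg/mL
area = np.array([1.1e4, 2.6e4, 5.3e4, 1.05e5, 2.61e5])  # placeholder peak areas
a, b = np.polyfit(conc, area, 1)
spiked, measured_area = 100.0, 9.4e4                    # microg/mL spiked into 1 mg/mL conjugate
recovery = ((measured_area - b) / a) / spiked * 100
print(f"Free-PEG spike recovery: {recovery:.0f}%")
```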
NASA Technical Reports Server (NTRS)
Mulhall, B. D. L.
1980-01-01
The performance, costs, organization and other characteristics of both the manual system and AIDS 2 were used to establish a baseline case. The results of the evaluation are to be used to determine the feasibility of the AIDS 3 System, as well as provide a basis for ranking alternative systems during the second phase of the JPL study. The results of the study were tabulated by subject, scope and methods, providing a descriptive, quantitative and qualitative analysis of the current operating systems employed by the FBI Identification Division.
NASA Astrophysics Data System (ADS)
Sumiya, H.; Hamaki, K.; Harano, K.
2018-05-01
Ultra-hard and high-strength spherical indenters with high precision and sphericity were successfully prepared from nanopolycrystalline diamond (NPD) synthesized by direct conversion sintering from graphite under high pressure and high temperature. It was shown that highly accurate and stable microfracture strength tests can be performed on various super-hard diamond materials by using the NPD spherical indenters. It was also verified that this technique enables quantitative evaluation of the strength characteristics of single crystal diamonds and NPDs which have been quite difficult to evaluate.
Reproducibility of sonographic measurement of thickness and echogenicity of the plantar fascia.
Cheng, Ju-Wen; Tsai, Wen-Chung; Yu, Tung-Yang; Huang, Kuo-Yao
2012-01-01
To evaluate the intra- and interrater reliability of ultrasonographic measurements of the thickness and echogenicity of the plantar fascia. Eleven patients (20 feet), who complained of inferior heel pain, and 26 volunteers (52 feet) were enrolled. Two sonographers independently imaged the plantar fascia in both longitudinal and transverse planes. Volunteers were assessed twice to evaluate intrarater reliability. Quantitative evaluation of the echogenicity of the plantar fascia was performed by measuring the mean gray level of the region of interest using Digital Imaging and Communications in Medicine viewer software. Sonographic evaluation of the thickness of the plantar fascia showed high reliability. Sonographic evaluations of the presence or absence of hypoechoic change in the plantar fascia showed low agreement. The reliability of gray-scale evaluations appears to be much better than subjective judgments in the evaluation of echogenicity. Transverse scanning did not show any advantage in sonographic evaluation of the plantar fascia. The reliability of sonographic examination of the thickness of the plantar fascia is high. Mean gray-level analysis of quantitative sonography can be used for the evaluation of echogenicity, which could reduce discrepancies in the interpretation of echogenicity by different sonographers. Longitudinal instead of transverse scanning is recommended for imaging the plantar fascia. Copyright © 2011 Wiley Periodicals, Inc.
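Mean gray-level measurement of a region of interest, as used above for echogenicity, amounts to averaging pixel values inside the ROI; a minimal sketch with a synthetic frame and placeholder ROI bounds follows.

```python
import numpy as np

# In practice the B-mode frame would be loaded from the scanner export, e.g. with
# pydicom (ds = pydicom.dcmread(...); frame = ds.pixel_array); here a synthetic
# grayscale frame stands in so the sketch is self-contained.
rng = np.random.default_rng(0)
frame = rng.normal(90, 10, size=(480, 640))

# Rectangular ROI placed over the plantar fascia (row/column bounds are placeholders).
r0, r1, c0, c1 = 200, 240, 280, 420
roi = frame[r0:r1, c0:c1]
print(f"Mean gray level: {roi.mean():.1f} (SD {roi.std():.1f})")
```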
Establishing Content Validity for a Literacy Coach Performance Appraisal Instrument
ERIC Educational Resources Information Center
Lane, Mae; Robbins, Mary; Price, Debra
2013-01-01
This study's purpose was to determine whether or not the Literacy Coach Appraisal Instrument developed for use in evaluating literacy coaches had content validity. The study, a fully mixed concurrent equal status design conducted from a pragmatist philosophy, collected qualitative and quantitative data from literacy experts about the elements of…
Peer Assessment of Student-Produced Mechanics Lab Report Videos
ERIC Educational Resources Information Center
Douglas, Scott S.; Aiken, John M.; Lin, Shih-Yin; Greco, Edwin F.; Alicea-Muñoz, Emily; Schatz, Michael F.
2017-01-01
We examine changes in students' rating behavior during a semester-long sequence of peer evaluation laboratory exercises in an introductory mechanics course. We perform a quantitative analysis of the ratings given by students to peers' physics lab reports, and conduct interviews with students. We find that peers persistently assign higher ratings…
Equipment
The Grafton and Boston Tufts laboratories are impressively equipped to perform this work. They were able to work with the company Haemonetics to make a new automated centrifuge, with remote control and offline capabilities. Sadly, while...
Evaluating Serious Games through User Experience and Performance Assessment
ERIC Educational Resources Information Center
Barton, Irving Gary, Jr.
2017-01-01
Military education is held to high standards for our servicemen and women. The purpose of this quantitative, comparative study was to determine if significant differences existed in learning styles relative to military experience as determined by learning in a serious game environment. Study results are expected to advance the state of research in…
It is widely accepted that global climate change will impact the regional and local climate and alter some aspects of the hydrologic cycle, which in turn can affect the performance of the urban water supply, wastewater and storm water infrastructure. How the urban water infrastr...
Sorg, Heiko; Knobloch, Karsten
2012-01-01
First quantitative evaluation of the requirements for promotion to associate professor (AP) at German medical faculties. Analysis of the AP regulations of German medical faculties according to a validated scoring system, which was adapted for this study. The overall score for the AP requirements at 35 German medical faculties was 13.5±0.6 of 20 possible scoring points (95% confidence interval 12.2-14.7). More than 88% of the AP regulations demand sufficient performance in teaching and research with adequate scientific publication. Furthermore, 83% of the faculties expect an expert review of the candidate's performance. Conference presentations required as an assistant professor, as well as a reduction of the minimum time as an assistant professor, play only minor roles. The requirements for assistant professors to be nominated as associate professor at German medical faculties are high, with only a small range. In detail, however, large heterogeneity still exists, which hinders equal opportunities and career possibilities. These data might be used for the ongoing objective discussion.
Iuppariello, Luigi; D'Addio, Giovanni; Romano, Maria; Bifulco, Paolo; Lanzillo, Bernardo; Pappone, Nicola; Cesarelli, Mario
2016-01-01
Robot-mediated therapy (RMT) has been a very dynamic area of research in recent years. Robotic devices can in fact quantify the performance of a rehabilitation task in the treatment of several disorders of the arm and the shoulder of various central and peripheral etiologies. Different systems for robot-aided neuro-rehabilitation are available for upper limb rehabilitation, but the biomechanical parameters proposed to date to evaluate the quality of the movement are related to the specific robot used and to the type of exercise performed. Moreover, no study has indicated a standardized quantitative evaluation of robot-assisted upper arm reaching movements, so RMT is still far from being considered a standardised tool. In this paper a quantitative kinematic assessment of robot-assisted upper arm reaching movements, which also considers the effect of gravity on the quality of the movements, is proposed. We studied a group of 10 healthy subjects, and the results indicate that the proposed protocol can be useful for characterising normal patterns in reaching movements.
Martelli, S; Zaffagnini, S; Bignozzi, S; Lopomo, N F; Iacono, F; Marcacci, M
2007-10-01
In this paper a new navigation system, KIN-Nav, developed for research and used during 80 anterior cruciate ligament (ACL) reconstructions is described. KIN-Nav is a user-friendly navigation system for flexible intraoperative acquisitions of anatomical and kinematic data, suitable for validation of biomechanical hypotheses. It performs real-time quantitative evaluation of antero-posterior, internal-external, and varus-valgus knee laxity at any degree of flexion and provides a new interface for this task, suitable also for comparison of pre-operative and post-operative knee laxity and surgical documentation. In this paper the concept and features of KIN-Nav, which represents a new approach to navigation and allows the investigation of new quantitative measurements in ACL reconstruction, are described. Two clinical studies are reported, as examples of clinical potentiality and correct use of this methodology. In this paper a preliminary analysis of KIN-Nav's reliability and clinical efficacy, performed during blinded repeated measures by three independent examiners, is also given. This analysis is the first assessment of the potential of navigation systems for evaluating knee kinematics.
Early motor development and cognitive abilities among Mexican preschoolers.
Osorio-Valencia, Erika; Torres-Sánchez, Luisa; López-Carrillo, Lizbeth; Rothenberg, Stephen J; Schnaas, Lourdes
2017-07-18
Psychomotricity plays a very important role in children's development, especially for learning involving reading-writing and mathematical calculations. Evaluate motor development in children 3 years old and its relationship with their cognitive abilities at the age of 5 years. Based on a cohort study, we analyzed the information about motor performance evaluated at 3 years old by the Peabody Motor Scale and cognitive abilities at 5 years old. The association was estimated using linear regression models adjusted by mother's intelligence quotient, sex, Bayley mental development index at 18 months, and quality of the environment at home (HOME scale). 148 children whose motor performance was determined at age 3 and who were later evaluated at age 5 to determine their cognitive abilities. Cognitive abilities (verbal, quantitative, and memory) measured by McCarthy Scales. Significant positive associations were observed between stationary balance at age 3 with verbal abilities (β = 0.67, p = .04) and memory (β = 0.81, p = .02) at 5 years. Grasping and visual-motor integration were significantly and positively associated with quantitative abilities (β = 0.74, p = .005; β = 0.61, p = .01) and memory (β = 2.11, p = .001; β = 1.74, p = .004). The results suggest that early motor performance contributes to the establishment of cognitive abilities at 5 years. Evaluation and early motor stimulation before the child is faced with formal learning likely helps to create neuronal networks that facilitate the acquisition of academic knowledge.
Kaneko, Hiromasa; Funatsu, Kimito
2013-09-23
We propose predictive performance criteria for nonlinear regression models without cross-validation. The proposed criteria are the determination coefficient and the root-mean-square error for the midpoints between k-nearest-neighbor data points. These criteria can be used to evaluate predictive ability after the regression models are updated, whereas cross-validation cannot be performed in such a situation. The proposed method is effective and helpful in handling big data when cross-validation cannot be applied. By analyzing data from numerical simulations and quantitative structural relationships, we confirm that the proposed criteria enable the predictive ability of the nonlinear regression models to be appropriately quantified.
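A sketch of the proposed midpoint criteria is given below; it assumes that the 'observed' value at each k-nearest-neighbour midpoint is approximated by the mean of the two neighbours' measured values, which is an interpretation of the abstract rather than the authors' exact formulation.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVR

def midpoint_criteria(model, X, y, k=3):
    """r^2 and RMSE evaluated at midpoints between k-nearest-neighbour pairs."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)
    x_mid, y_mid = [], []
    for i, neighbours in enumerate(idx):
        for j in neighbours[1:]:               # skip the point itself
            x_mid.append((X[i] + X[j]) / 2)
            y_mid.append((y[i] + y[j]) / 2)    # assumed reference value at the midpoint
    x_mid, y_mid = np.asarray(x_mid), np.asarray(y_mid)
    y_pred = model.predict(x_mid)
    ss_res = np.sum((y_mid - y_pred) ** 2)
    ss_tot = np.sum((y_mid - y_mid.mean()) ** 2)
    return 1 - ss_res / ss_tot, np.sqrt(ss_res / len(y_mid))

# Toy nonlinear regression problem to exercise the criteria.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
model = SVR(C=10.0, gamma=0.5).fit(X, y)
r2_mid, rmse_mid = midpoint_criteria(model, X, y)
print(f"midpoint r^2 = {r2_mid:.3f}, midpoint RMSE = {rmse_mid:.3f}")
```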
Ceol, M; Forino, M; Gambaro, G; Sauer, U; Schleicher, E D; D'Angelo, A; Anglani, F
2001-01-01
Gene expression can be examined with different techniques including ribonuclease protection assay (RPA), in situ hybridisation (ISH), and quantitative reverse transcription-polymerase chain reaction (RT/PCR). These methods differ considerably in their sensitivity and precision in detecting and quantifying low abundance mRNA. Although there is evidence that RT/PCR can be performed in a quantitative manner, the quantitative capacity of this method is generally underestimated. To demonstrate that the comparative kinetic RT/PCR strategy, which uses a housekeeping gene as internal standard, is a quantitative method to detect significant differences in mRNA levels between different samples, the inhibitory effect of heparin on phorbol 12-myristate 13-acetate (PMA)-induced TGF-beta1 mRNA expression was evaluated by RT/PCR and RPA, the standard method of mRNA quantification, and the results were compared. The reproducibility of RT/PCR amplification was calculated by comparing the quantity of G3PDH and TGF-beta1 PCR products, generated during the exponential phases, estimated from two different RT/PCR (G3PDH, r = 0.968, P = 0.0000; TGF-beta1, r = 0.966, P = 0.0000). The quantitative capacity of comparative kinetic RT/PCR was demonstrated by comparing the results obtained from RPA and RT/PCR using linear regression analysis. Starting from the same RNA extraction, but using only 1% of the RNA for the RT/PCR compared to RPA, significant correlation was observed (r = 0.984, P = 0.0004). Moreover, the morphometric analysis of the ISH signal was applied for the semi-quantitative evaluation of the expression and localisation of TGF-beta1 mRNA in the entire cell population. Our results demonstrate the close similarity of the RT/PCR and RPA methods in giving quantitative information on mRNA expression and indicate the possibility of adopting comparative kinetic RT/PCR as a reliable quantitative method of mRNA analysis. Copyright 2001 Wiley-Liss, Inc.
Chen, Chia-Lin; Wang, Yuchuan; Lee, Jason J. S.; Tsui, Benjamin M. W.
2011-01-01
Purpose We assessed the quantitation accuracy of small animal pinhole single photon emission computed tomography (SPECT) under the current preclinical settings, where image compensations are not routinely applied. Procedures The effects of several common image-degrading factors and imaging parameters on quantitation accuracy were evaluated using Monte-Carlo simulation methods. Typical preclinical imaging configurations were modeled, and quantitative analyses were performed based on image reconstructions without compensating for attenuation, scatter, and limited system resolution. Results Using mouse-sized phantom studies as examples, attenuation effects alone degraded quantitation accuracy by up to −18% (Tc-99m or In-111) or −41% (I-125). The inclusion of scatter effects changed the above numbers to −12% (Tc-99m or In-111) and −21% (I-125), respectively, indicating the significance of scatter in quantitative I-125 imaging. Region-of-interest (ROI) definitions have greater impacts on regional quantitation accuracy for small sphere sources as compared to attenuation and scatter effects. For the same ROI, SPECT acquisitions using pinhole apertures of different sizes could significantly affect the outcome, whereas the use of different radii-of-rotation yielded negligible differences in quantitation accuracy for the imaging configurations simulated. Conclusions We have systematically quantified the influence of several factors affecting the quantitation accuracy of small animal pinhole SPECT. In order to consistently achieve accurate quantitation within 5% of the truth, comprehensive image compensation methods are needed. PMID:19048346
NASA Astrophysics Data System (ADS)
Xia, H.; Shen, X. M.; Yang, X. C.; Xiong, Y.; Jiang, G. L.
2018-01-01
Deterministic electroplating repair is a novel method for rapidly repairing worn parts. By qualitative contrast and quantitative comparison, the influences of current density on the performance of the chrome-plated layer were determined in this study. Chrome-plated layers were fabricated under different current densities while the other parameters were kept constant. The hardness, thickness and composition, surface morphology and roughness, and wearability of the chrome-plated layers were measured with a Vickers hardness tester, a scanning electron microscope with an energy-dispersive X-ray detector, a digital microscope in 3D imaging mode, and a ball-milling instrument with a profilograph, respectively. In order to evaluate each factor scientifically, the experimental data were normalized. A comprehensive evaluation model was established to quantitatively analyse the influence of current density, based on the analytic hierarchy process method and the weighted evaluation method. The calculated comprehensive evaluation indexes corresponding to current densities of 40 A/dm2, 45 A/dm2, 50 A/dm2, 55 A/dm2, 60 A/dm2, and 65 A/dm2 were 0.2246, 0.4850, 0.4799, 0.4922, 0.8672, and 0.1381, respectively. The experimental results indicate that the optimal option was 60 A/dm2, and the priority order was 60 A/dm2, 55 A/dm2, 45 A/dm2, 50 A/dm2, 40 A/dm2, and 65 A/dm2.
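The comprehensive evaluation index can be illustrated as a min-max normalization followed by a weighted sum; the property values and weights in this sketch are placeholders, not the paper's measurements.

```python
import numpy as np

# Rows: current densities (40, 45, 50, 55, 60, 65 A/dm2); columns: measured properties.
# "Benefit" columns (higher is better): hardness, thickness; "cost" columns: roughness, wear loss.
data = np.array([
    [780, 18, 0.42, 3.1],
    [820, 22, 0.38, 2.6],
    [815, 21, 0.39, 2.7],
    [830, 23, 0.37, 2.5],
    [900, 28, 0.30, 1.9],
    [700, 15, 0.55, 3.8],
], dtype=float)                                 # placeholder values
benefit = np.array([True, True, False, False])
weights = np.array([0.35, 0.25, 0.15, 0.25])    # e.g. derived from an AHP pairwise comparison

# Min-max normalization, inverting "cost" criteria so that larger is always better.
lo, hi = data.min(axis=0), data.max(axis=0)
norm = (data - lo) / (hi - lo)
norm[:, ~benefit] = 1.0 - norm[:, ~benefit]

index = norm @ weights                          # comprehensive evaluation index per current density
for j, score in zip([40, 45, 50, 55, 60, 65], index):
    print(f"{j} A/dm2: {score:.3f}")
```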
Optical Fiber Sensors for Advanced Civil Structures
NASA Astrophysics Data System (ADS)
de Vries, Marten Johannes Cornelius
1995-01-01
The objective of this dissertation is to develop, analyze, and implement optical fiber-based sensors for the nondestructive quantitative evaluation of advanced civil structures. Based on a comparative evaluation of optical fiber sensors that may be used to obtain quantitative information related to physical perturbations in the civil structure, the extrinsic Fabry-Perot interferometric (EFPI) optical fiber sensor is selected as the most attractive sensor. The operation of the EFPI sensor is explained using the Kirchhoff diffraction approach. As is shown in this dissertation, this approach better predicts the signal-to-noise ratio as a function of gap length than methods employed previously. The performance of the optical fiber sensor is demonstrated in three different implementations. In the first implementation, performed with researchers in the Civil Engineering Department at the University of Southern California in Los Angeles, optical fiber sensors were used to obtain quantitative strain information from reinforced concrete interior and exterior column-to-beam connections. The second implementation, performed in cooperation with researchers at the United States Bureau of Mines in Spokane, Washington, used optical fiber sensors to monitor the performance of roof bolts used in mines. The last implementation, performed in cooperation with researchers at the Turner-Fairbanks Federal Highway Administration Research Center in McLean, Virginia, used optical fiber sensors, attached to composite prestressing strands used for reinforcing concrete, to obtain absolute strain information. Multiplexing techniques including time, frequency and wavelength division multiplexing are briefly discussed, whereas the principles of operation of spread spectrum and optical time domain reflectometry (OTDR) are discussed in greater detail. Results demonstrating that spread spectrum and OTDR techniques can be used to multiplex optical fiber sensors are presented. Finally, practical considerations that have to be taken into account when implementing optical fiber sensors into a civil structure environment are discussed, and possible solutions to some of these problems are proposed.
An evaluation of the ecological and environmental security on China's terrestrial ecosystems.
Zhang, Hongqi; Xu, Erqi
2017-04-11
With rapid economic growth, industrialization, and urbanization, various ecological and environmental problems occur, which threaten and undermine the sustainable development and domestic survival of China. On the national scale, evaluation progress remains at the qualitative or semi-quantitative stage, lacking a quantitative evaluation and a spatial visualization of ecological and environmental security. This study collected 14 indicators of water, land, air, and biodiversity securities to compile a spatial evaluation of ecological and environmental security in terrestrial ecosystems of China. With area-weighted normalization and scaling transformations, the veto aggregation (focusing on the limiting indicator) and balanced aggregation (measuring balanced performance among different indicators) methods were used to aggregate security evaluation indicators. Results showed that water, land, air, and biodiversity securities presented different spatial distributions. A relatively serious ecological and environmental security crisis was found in China, but with obvious spatial variation in the security evaluation scores. Hotspot areas at the danger level, which are scattered throughout the entirety of the country, were identified. The spatial diversities and causes of ecological and environmental problems in different regions were analyzed. Spatial integration of regional development and proposals for improving ecological and environmental security were put forward.
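The two aggregation rules can be sketched as follows, reading veto aggregation as the worst (limiting) indicator and balanced aggregation as a weighted mean; the indicator scores and weights are hypothetical.

```python
import numpy as np

# Normalized security scores (0 = danger, 1 = secure) for water, land, air, biodiversity
# in a handful of hypothetical grid cells.
scores = np.array([
    [0.8, 0.6, 0.7, 0.9],
    [0.3, 0.7, 0.8, 0.6],
    [0.9, 0.9, 0.2, 0.8],
])
weights = np.array([0.3, 0.3, 0.2, 0.2])

veto = scores.min(axis=1)            # limited by the single worst indicator
balanced = scores @ weights          # rewards balanced performance across indicators
for v, b in zip(veto, balanced):
    print(f"veto = {v:.2f}, balanced = {b:.2f}")
```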
Reference condition approach to restoration planning
Nestler, J.M.; Theiling, C.H.; Lubinski, S.J.; Smith, D.L.
2010-01-01
Ecosystem restoration planning requires quantitative rigor to evaluate alternatives, define end states, report progress and perform environmental benefits analysis (EBA). Unfortunately, existing planning frameworks are, at best, semi-quantitative. In this paper, we: (1) describe a quantitative restoration planning approach based on a comprehensive, but simple mathematical framework that can be used to effectively apply knowledge and evaluate alternatives, (2) use the approach to derive a simple but precisely defined lexicon based on the reference condition concept and allied terms and (3) illustrate the approach with an example from the Upper Mississippi River System (UMRS) using hydrologic indicators. The approach supports the development of a scaleable restoration strategy that, in theory, can be expanded to ecosystem characteristics such as hydraulics, geomorphology, habitat and biodiversity. We identify three reference condition types, best achievable condition (ABAC), measured magnitude (MMi, which can be determined at one or many times and places) and desired future condition (ADFC) that, when used with the mathematical framework, provide a complete system of accounts useful for goal-oriented system-level management and restoration. Published in 2010 by John Wiley & Sons, Ltd.
A Systematic Approach for Quantitative Analysis of Multidisciplinary Design Optimization Framework
NASA Astrophysics Data System (ADS)
Kim, Sangho; Park, Jungkeun; Lee, Jeong-Oog; Lee, Jae-Woo
An efficient Multidisciplinary Design and Optimization (MDO) framework for an aerospace engineering system should use and integrate distributed resources such as various analysis codes, optimization codes, Computer Aided Design (CAD) tools, Data Base Management Systems (DBMS), etc. in a heterogeneous environment, and needs to provide user-friendly graphical user interfaces. In this paper, we propose a systematic approach for determining a reference MDO framework and for evaluating MDO frameworks. The proposed approach incorporates two well-known methods, Analytic Hierarchy Process (AHP) and Quality Function Deployment (QFD), in order to provide a quantitative analysis of the qualitative criteria of MDO frameworks. Identification and hierarchy of the framework requirements and the corresponding solutions for the reference MDO frameworks, the general one and the aircraft-oriented one, were carefully investigated. The reference frameworks were also quantitatively identified using AHP and QFD. An assessment of three in-house frameworks was then performed. The results produced clear and useful guidelines for improvement of the in-house MDO frameworks and showed the feasibility of the proposed approach for evaluating an MDO framework without human interference.
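A minimal sketch of the AHP step, deriving criterion weights from a pairwise-comparison matrix via its principal eigenvector and checking consistency, is shown below; the comparison matrix is illustrative, not the authors' data.

```python
import numpy as np

# Pairwise comparisons of three framework criteria on Saaty's 1-9 scale (illustrative).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                      # priority vector

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)          # consistency index
cr = ci / 0.58                                # Saaty's random index for n = 3
print(f"weights = {np.round(weights, 3)}, CR = {cr:.3f}")
```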
Hematocrit Measurement with R2* and Quantitative Susceptibility Mapping in Postmortem Brain.
Walsh, A J; Sun, H; Emery, D J; Wilman, A H
2018-05-24
Noninvasive venous oxygenation quantification with MR imaging will improve the neurophysiologic investigation and the understanding of the pathophysiology in neurologic diseases. Available MR imaging methods are limited by sensitivity to flow and often require assumptions of the hematocrit level. In situ postmortem imaging enables evaluation of methods in a fully deoxygenated environment without flow artifacts, allowing direct calculation of hematocrit. This study compares 2 venous oxygenation quantification methods in in situ postmortem subjects. Transverse relaxation (R2*) mapping and quantitative susceptibility mapping were performed on a whole-body 4.7T MR imaging system. Intravenous measurements in major draining intracranial veins were compared between the 2 methods in 3 postmortem subjects. The quantitative susceptibility mapping technique was also applied in 10 healthy control subjects and compared with reference venous oxygenation values. In 2 early postmortem subjects, R2* mapping and quantitative susceptibility mapping measurements within intracranial veins had a significant and strong correlation (R2 = 0.805, P = .004 and R2 = 0.836, P = .02). Higher R2* and susceptibility values were consistently demonstrated within gravitationally dependent venous segments during the early postmortem period. Hematocrit ranged from 0.102 to 0.580 in postmortem subjects, with R2* and susceptibility as large as 291 s-1 and 1.75 ppm, respectively. Measurements of R2* and quantitative susceptibility mapping within large intracranial draining veins have a high correlation in early postmortem subjects. This study supports the use of quantitative susceptibility mapping for evaluation of in vivo venous oxygenation and postmortem hematocrit concentrations. © 2018 by American Journal of Neuroradiology.
de Certaines, J D; Henriksen, O; Spisni, A; Cortsen, M; Ring, P B
1993-01-01
Quantitative magnetic resonance imaging may offer unique potential for tissue characterization in vivo. In this connection texture analysis of quantitative MR images may be of special importance. Because evaluation of texture analysis needs large data material, multicenter approaches become mandatory. Within the framework of the BME Concerted Action on Tissue Characterization by MRI and MRS, a pilot multicenter study was launched in order to evaluate the technical problems including comparability of relaxation time measurements carried out in the individual sites. Human brain, skeletal muscle, and liver were used as models. A total of 218 healthy volunteers were studied. Fifteen MRI scanners with field strength ranging from 0.08 T to 1.5 T were included. Measurement accuracy was tested on the Eurospin relaxation time test object (TO5) and the obtained calibration curve was used for correction of the in vivo data. The results established that, by following a standardized procedure, comparable quantitative measurements can be obtained in vivo from a number of MR sites. The overall variation coefficient in vivo was in the same order of magnitude as ex vivo relaxometry. Thus, it is possible to carry out international multicenter studies on quantitative imaging, provided that quality control with respect to measurement accuracy and calibration of the MR equipment are performed.
Onorati, Francesco; Puppini, Giovanni; Pappalardo, Omar A.; Selmi, Matteo; Votta, Emiliano; Faggian, Giuseppe; Redaelli, Alberto
2017-01-01
Background Accurate quantification of mitral valve (MV) morphology and dynamic behavior over the cardiac cycle is crucial to understand the mechanisms of degenerative MV dysfunction and to guide the surgical intervention. Cardiac magnetic resonance (CMR) imaging has progressively been adopted to evaluate MV pathophysiology, although a dedicated framework is required to perform a quantitative assessment of the functional MV anatomy. Methods We investigated MV dynamic behavior in subjects with normal MV anatomy (n=10) and patients referred to surgery due to degenerative MV prolapse, classified as fibro-elastic deficiency (FED, n=9) and Barlow’s disease (BD, n=10). A CMR-dedicated framework was adopted to evaluate prolapse height and volume and quantitatively assess valvular morphology and papillary muscles (PAPs) function over the cardiac cycle. Multiple comparison was used to investigate the hallmarks associated to MV degenerative prolapse and evaluate the feasibility of anatomical and functional distinction between FED and BD phenotypes. Results On average, annular dimensions were significantly (P<0.05) larger in BD than in FED and normal subjects while no significant differences were noticed between FED and normal. MV eccentricity progressively decreased passing from normal to FED and BD, with the latter exhibiting a rounder annulus shape. Over the cardiac cycle, we noticed significant differences for BD during systole with an abnormal annular enlargement between mid and late systole (LS) (P<0.001 vs. normal); the PAPs dynamics remained comparable in the three groups. Prolapse height and volume highlighted significant differences among normal, FED and BD valves. Conclusions Our CMR-dedicated framework allows for the quantitative and dynamic evaluation of MV apparatus, with quantifiable annular alterations representing the primary hallmark of severe MV degeneration. This may aid surgeons in the evaluation of the severity of MV dysfunction and the selection of the appropriate MV treatment. PMID:28540065
Endo, Hiroshi
2015-01-01
This study examined whether manipulability during smartphone thumb-based touch operations could be predicted by the following robotic manipulability indices: the volume and direction of the 'manipulability ellipsoid' (MEd), both of which evaluate the influence of kinematics on manipulability. Limits of the thumb's range of motion were considered in the MEd to improve predictability. Thumb postures at 25 key target locations were measured in 16 subjects. Though there was no correlation between subjective evaluation and the volume of the MEd, high correlation was obtained when motion range limits were taken into account. These limits changed the size of the MEd and improved the accuracy of the manipulability evaluation. Movement directions associated with higher performance could also be predicted. In conclusion, robotic manipulability indices with motion range limits were considered to be useful measures for quantitatively evaluating human hand operations.
Jaafaripooyan, Ebrahim
2014-01-01
Background: Performance evaluation is essential to quality improvement in healthcare. The current study has identified the potential pros and cons of external healthcare evaluation programs, utilizing them subsequently to look into the merits of a similar case in a developing country. Methods: A mixed method study employing both qualitative and quantitative data collection and analysis techniques was adopted to achieve the study end. Subject Matter Experts (SMEs) and professionals were approached for two-stage process of data collection. Results: Potential advantages included greater attractiveness of high accreditation rank healthcare organizations to their customers/purchasers and boosted morale of their personnel. Downsides, as such, comprised the programs’ over-reliance on value judgment of surveyors, routinization and incurring undue cost on the organizations. In addition, the improved, standardized care processes as well as the judgmental nature of program survey were associated, as pros and cons, to the program investigated by the professionals. Conclusion: Besides rendering a tentative assessment of Iranian hospital evaluation program, the study provides those running external performance evaluations with a lens to scrutinize the virtues of their own evaluation systems through identifying the potential advantages and drawbacks of such programs. Moreover, the approach followed could be utilized for performance assessment of similar evaluation programs. PMID:25279381
NASA Technical Reports Server (NTRS)
Edeen, Marybeth; Henninger, Donald
1991-01-01
By growing higher plants for food, lunar and Martian manned habitats will not only reduce resupply requirements but also provide CO2 removal and meet both oxygen-production and water-reclamation requirements. Plants have been grown in the RLSS at NASA-Johnson in order to quantitatively evaluate plant CO2 accumulation, O2 generation, evapotranspiration, trace-contaminant generation, and biomass productivity. Attention is presently given to test conditions and anomalies in these RLSS trials; areas where performance must be improved have been identified.
Abdeldaim, G; Herrmann, B; Korsgaard, J; Olcén, P; Blomberg, J; Strålin, K
2009-06-01
The pneumolysin (ply) gene is widely used as a target in PCR assays for Streptococcus pneumoniae in respiratory secretions. However, false-positive results with conventional ply-based PCR have been reported. The aim here was to study the performance of a quantitative ply-based PCR for the identification of pneumococcal lower respiratory tract infection (LRTI). In a prospective study, fibreoptic bronchoscopy was performed in 156 hospitalized adult patients with LRTI and 31 controls who underwent bronchoscopy because of suspicion of malignancy. Among the LRTI patients and controls, the quantitative ply-based PCR applied to bronchoalveolar lavage (BAL) fluid was positive at ≥10^3 genome copies/mL in 61% and 71% of the subjects, at ≥10^5 genome copies/mL in 40% and 58% of the subjects, and at ≥10^7 genome copies/mL in 15% and 3.2% of the subjects, respectively. Using BAL fluid culture, blood culture, and/or a urinary antigen test, S. pneumoniae was identified in 19 LRTI patients. As compared with these diagnostic methods used in combination, quantitative ply-based PCR showed sensitivities and specificities of 89% and 43% at a cut-off of 10^3 genome copies/mL, of 84% and 66% at a cut-off of 10^5 genome copies/mL, and of 53% and 90% at a cut-off of 10^7 genome copies/mL, respectively. In conclusion, a high cut-off with the quantitative ply-based PCR was required to reach acceptable specificity. However, as a high cut-off resulted in low sensitivity, quantitative ply-based PCR does not appear to be clinically useful. Quantitative PCR methods for S. pneumoniae using alternative gene targets should be evaluated.
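Sensitivity and specificity at each genome-copy cut-off follow directly from cross-tabulating the PCR result against the combined reference standard, as in this sketch with hypothetical patient data.

```python
import numpy as np

# Hypothetical per-patient data: log10 genome copies/mL in BAL fluid and the
# reference-standard call (True = pneumococcal LRTI by culture/urinary antigen).
log_copies = np.array([7.5, 3.2, 5.1, 2.0, 6.8, 4.4, 8.1, 1.5, 5.9, 3.9])
reference  = np.array([True, False, True, False, True, False, True, False, True, False])

for cutoff in (3, 5, 7):                       # log10 genome copies/mL
    positive = log_copies >= cutoff
    sens = (positive & reference).sum() / reference.sum()
    spec = (~positive & ~reference).sum() / (~reference).sum()
    print(f"cut-off 10^{cutoff}: sensitivity {sens:.0%}, specificity {spec:.0%}")
```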
Kuo, Jane Z.; Zangwill, Linda M.; Medeiros, Felipe A.; Liebmann, Jeffery M.; Girkin, Christopher A.; Hammel, Na’ama; Rotter, Jerome I.; Weinreb, Robert N.
2015-01-01
Purpose To perform a quantitative trait locus (QTL) analysis and evaluate whether a locus between SIX1 and SIX6 is associated with retinal nerve fiber layer (RNFL) thickness in individuals of European descent. Design Observational, multi-center, cross-sectional study. Methods 231 participants were recruited from the Diagnostic Innovations in Glaucoma Study and the African Descent and Glaucoma Evaluation Study. Association of rs10483727 in SIX1-SIX6 with global and sectoral RNFL thickness was performed. Quantitative trait analysis with the additive model of inheritance was analyzed using linear regression. Trend analysis was performed to evaluate the mean global and sectoral RNFL thickness with 3 genotypes of interest (T/T, C/T, C/C). All models were adjusted for age and gender. Results Direction of association between T allele and RNFL thickness was consistent in the global and different sectoral RNFL regions. Each copy of the T risk allele in rs10483727 was associated with a 0.16 μm thinner global RNFL (β=−0.16, 95% CI: −0.28 to −0.03; P=0.01). Similar patterns were found for the sectoral regions, including inferior (P=0.03), inferior-nasal (P=0.017), superior-nasal (P=0.0025), superior (P=0.002) and superior-temporal (P=0.008). The greatest differences were observed in the superior and inferior quadrants, supporting clinical observations for RNFL thinning in glaucoma. Thinner global RNFL was found in subjects with T/T genotypes compared to subjects with C/T and C/C genotypes (P=0.044). Conclusions Each copy of the T risk allele has an additive effect and was associated with thinner global and sectoral RNFL. Findings from this QTL analysis further support a genetic contribution to glaucoma pathophysiology. PMID:25849520
NASA Astrophysics Data System (ADS)
Li, Bo; Cai Ren, Fa; Tang, Xiao Ying
2018-03-01
The manufacture of pressure vessels with austenitic stainless steel strain-strengthening technology has become an important technical means of reducing the weight of cryogenic pressure vessels. In the process of strengthening austenitic stainless steel, strain can induce martensitic transformation of the austenite phase. There is a quantitative relationship between the amount of transformed martensite phase and the basic mechanical properties. The martensite phase content can therefore be obtained by measurement, and the mechanical properties and safety performance can be evaluated and calculated from it. On this basis, the quantitative relationship between strain hardening and deformation-induced martensite phase content is studied in this paper, and the mechanism of deformation-induced martensitic transformation of austenitic stainless steel is detailed.
Development and evaluation of a study design typology for human research.
Carini, Simona; Pollock, Brad H; Lehmann, Harold P; Bakken, Suzanne; Barbour, Edward M; Gabriel, Davera; Hagler, Herbert K; Harper, Caryn R; Mollah, Shamim A; Nahm, Meredith; Nguyen, Hien H; Scheuermann, Richard H; Sim, Ida
2009-11-14
A systematic classification of study designs would be useful for researchers, systematic reviewers, readers, and research administrators, among others. As part of the Human Studies Database Project, we developed the Study Design Typology to standardize the classification of study designs in human research. We then performed a multiple observer masked evaluation of active research protocols in four institutions according to a standardized protocol. Thirty-five protocols were classified by three reviewers each into one of nine high-level study designs for interventional and observational research (e.g., N-of-1, Parallel Group, Case Crossover). Rater classification agreement was moderately high for the 35 protocols (Fleiss' kappa = 0.442) and higher still for the 23 quantitative studies (Fleiss' kappa = 0.463). We conclude that our typology shows initial promise for reliably distinguishing study design types for quantitative human research.
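Fleiss' kappa for multiple raters assigning protocols to design categories can be computed as in the following sketch; the rating table is hypothetical.

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa from an (items x categories) matrix of rating counts."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum(axis=1)[0]                 # raters per item (assumed constant)
    p_j = counts.sum(axis=0) / counts.sum()   # overall category proportions
    P_i = (np.sum(counts ** 2, axis=1) - n) / (n * (n - 1))
    P_bar, P_e = P_i.mean(), np.sum(p_j ** 2)
    return (P_bar - P_e) / (1 - P_e)

# Hypothetical example: 6 protocols, 3 raters each, 4 design categories.
ratings = [
    [3, 0, 0, 0],
    [2, 1, 0, 0],
    [0, 3, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 2, 1],
    [0, 0, 0, 3],
]
print(f"Fleiss' kappa = {fleiss_kappa(ratings):.3f}")
```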
Advancing Usability Evaluation through Human Reliability Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ronald L. Boring; David I. Gertman
2005-07-01
This paper introduces a novel augmentation to the current heuristic usability evaluation methodology. The SPAR-H human reliability analysis method was developed for categorizing human performance in nuclear power plants. Despite the specialized use of SPAR-H for safety critical scenarios, the method also holds promise for use in commercial off-the-shelf software usability evaluations. The SPAR-H method shares task analysis underpinnings with human-computer interaction, and it can be easily adapted to incorporate usability heuristics as performance shaping factors. By assigning probabilistic modifiers to heuristics, it is possible to arrive at the usability error probability (UEP). This UEP is not a literal probability of error but nonetheless provides a quantitative basis to heuristic evaluation. When combined with a consequence matrix for usability errors, this method affords ready prioritization of usability issues.
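In the spirit of the SPAR-H adaptation described above, a nominal error probability can be scaled by multipliers attached to violated heuristics acting as performance shaping factors; the nominal value and multipliers in this sketch are assumptions, not SPAR-H's published values.

```python
# Nominal error probability for the task type (illustrative placeholder).
NOMINAL_UEP = 0.01

# Hypothetical multipliers attached to usability heuristics, treated as
# performance shaping factors (PSFs); 1.0 means no violation observed.
PSF_MULTIPLIERS = {
    "visibility_of_system_status": 2.0,
    "match_with_real_world": 1.0,
    "error_prevention": 5.0,
    "recognition_over_recall": 2.0,
}

uep = NOMINAL_UEP
for heuristic, multiplier in PSF_MULTIPLIERS.items():
    uep *= multiplier
uep = min(uep, 1.0)                    # probabilities cannot exceed 1
print(f"Usability error probability (UEP): {uep:.3f}")
```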
Nielsen, Karsten H.; Karlsson, Stefan; Limbach, Rene; Wondraczek, Lothar
2015-01-01
The abrasion resistance of coated glass surfaces is an important parameter for judging lifetime performance, but practical testing procedures remain overly simplistic and do often not allow for direct conclusions on real-world degradation. Here, we combine quantitative two-dimensional image analysis and mechanical abrasion into a facile tool for probing the abrasion resistance of anti-reflective (AR) coatings. We determine variations in the average coated area, during and after controlled abrasion. Through comparison with other experimental techniques, we show that this method provides a practical, rapid and versatile tool for the evaluation of the abrasion resistance of sol-gel-derived thin films on glass. The method yields informative data, which correlates with measurements of diffuse reflectance and is further supported by qualitative investigations through scanning electron microscopy. In particular, the method directly addresses degradation of coating performance, i.e., the gradual areal loss of antireflective functionality. As an exemplary subject, we studied the abrasion resistance of state-of-the-art nanoporous SiO2 thin films which were derived from 5–6 wt% aqueous solutions of potassium silicates, or from colloidal suspensions of SiO2 nanoparticles. It is shown how abrasion resistance is governed by coating density and film adhesion, defining the trade-off between optimal AR performance and acceptable mechanical performance. PMID:26656260
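The areal-loss measurement can be pictured as thresholding a micrograph and taking the fraction of pixels above the threshold; this sketch uses Otsu thresholding on a synthetic stand-in image rather than the authors' protocol.

```python
import numpy as np
from skimage.filters import threshold_otsu

rng = np.random.default_rng(1)

# Synthetic stand-in for a grayscale micrograph of a partially abraded coating:
# bright pixels = intact AR film, dark pixels = exposed glass (placeholder data;
# in practice the frames would be loaded from the microscope images).
image = rng.normal(0.8, 0.05, size=(256, 256))
image[60:140, 80:200] = rng.normal(0.3, 0.05, size=(80, 120))   # abraded patch

threshold = threshold_otsu(image)
coated_fraction = (image > threshold).mean()
print(f"Coated area remaining: {coated_fraction:.1%}")
```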
Reslan, Summar; Axelrod, Bradley N
2017-01-01
The purpose of the current study was to compare three potential profiles of the Medical Symptom Validity Test (MSVT; Pass, Genuine Memory Impairment Profile [GMIP], and Fail) on other freestanding and embedded performance validity tests (PVTs). Notably, a quantitatively computed version of the GMIP was utilized in this investigation. Data obtained from veterans referred for a neuropsychological evaluation in a metropolitan Veterans Affairs medical center were included (N = 494). Individuals aged 65 and older were excluded in order to keep individuals with dementia out of this investigation. The sample revealed 222 (45%) in the Pass group. Of the 272 who failed the easy subtests of the MSVT, 221 (81%) met quantitative criteria for the GMIP and 51 (19%) were classified as Fail. The Pass group failed fewer freestanding and embedded PVTs and obtained higher raw scores on all PVTs than both the GMIP and Fail groups. The differences in performance between the GMIP and Fail groups were minimal. Specifically, GMIP protocols failed fewer freestanding PVTs than the Fail group; failure on embedded PVTs did not differ between GMIP and Fail. The MSVT GMIP incorporates the presence of clinical correlates of disability to assist with this distinction, but future research should consider performance on other freestanding measures of performance validity to differentiate cognitive impairment from invalidity.
Chung, Cheng-Shiu; Wang, Hongwu; Cooper, Rory A
2013-07-01
The user interface development of assistive robotic manipulators can be traced back to the 1960s. Studies include kinematic designs, cost-efficiency, user experience involvement, and performance evaluation. This paper reviews studies conducted as clinical trials that used activities of daily living (ADL) tasks to evaluate performance, categorized using the International Classification of Functioning, Disability, and Health (ICF) framework, in order to outline the scope of current research and provide suggestions for future studies. We conducted a literature search of assistive robotic manipulators from 1970 to 2012 in PubMed, Google Scholar, and the University of Pittsburgh Library System - PITTCat. Twenty relevant studies were identified. Studies were separated into two broad categories: user task preferences and user-interface performance measurements of commercialized and developing assistive robotic manipulators. The outcome measures and ICF codes associated with the performance evaluations are reported. Suggestions for future studies include (1) standardized ADL tasks for the quantitative and qualitative evaluation of task efficiency and performance to build comparable measures between research groups, (2) studies relevant to the tasks from user priority lists and ICF codes, and (3) appropriate clinical functional assessment tests with consideration of constraints in assistive robotic manipulator user interfaces. In addition, these outcome measures will help physicians and therapists build standardized tools while prescribing and assessing assistive robotic manipulators.
An evaluative case study of online learning for healthcare professionals.
Pullen, Darren L
2006-01-01
This evaluation study assessed the pedagogical and instructional design (e-pedagogy) effectiveness of online continuing professional education (CPE) courses offered by a large Australian CPE provider. A naturalistic theory approach and a multilevel evaluation were used to examine the impact of web-based learning on more than 300 healthcare professionals. Participant satisfaction, learning achievement, self-reported practice performance change, and e-pedagogical courseware characteristics were assessed by various qualitative and quantitative data collection methods. Findings revealed that learning online was an effective means for increasing CPE knowledge (p < .05) and improving self-reported practice performance change (p < .05). Courses containing a clinical tool resulted in an increased self-reported practice performance change over courses that did not (Zobs = 3.757). Online CPE offers a convenient format for healthcare professionals from educationally and geographically diverse populations to update their knowledge and view best practice.
Evaluating conflation methods using uncertainty modeling
NASA Astrophysics Data System (ADS)
Doucette, Peter; Dolloff, John; Canavosio-Zuzelski, Roberto; Lenihan, Michael; Motsko, Dennis
2013-05-01
The classic problem of computer-assisted conflation involves the matching of individual features (e.g., point, polyline, or polygon vectors) as stored in a geographic information system (GIS), between two different sets (layers) of features. The classical goal of conflation is the transfer of feature metadata (attributes) from one layer to another. The age of free public and open source geospatial feature data has significantly increased the opportunity to conflate such data to create enhanced products. There are currently several spatial conflation tools in the marketplace with varying degrees of automation. An ability to evaluate conflation tool performance quantitatively is of operational value, although manual truthing of matched features is laborious and costly. In this paper, we present a novel methodology that uses spatial uncertainty modeling to simulate realistic feature layers to streamline evaluation of feature matching performance for conflation methods. Performance results are compiled for DCGIS street centerline features.
Building-integrated photovoltaics: A case study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kiss, G.; Kinkead, J.; Raman, M.
1995-03-01
In 1992, Kiss Cathcart Anders Architects performed a study for NREL on Building-Integrated Photovoltaics (BIPV) issues as seen from the perspective of the building community. In general, the purpose of the study was to list major issues and potential applications; by its nature it asked more questions than it answered. This second-phase study was intended to produce quantitative data on the performance of specific BIPV systems. Only roof systems are evaluated. The energy performance, construction cost and simple payback for five different BIPV roof options are evaluated in six different locations: Oakland, New York, Miami, Phoenix, Chicago, and Cincinnati. The roof options evaluated include the following: single-glazed PV roof using glass-substrate PVs; double-glazed PV roof with insulating PV modules; ballasted roof-mounted system; sawtooth light monitor roof with indirect north daylighting; sawtooth roof with north light and active heat recovery.
Performance Evaluation and Analysis for Gravity Matching Aided Navigation
Wu, Lin; Wang, Hubiao; Chai, Hua; Zhang, Lu; Hsu, Houtse; Wang, Yong
2017-01-01
Simulation tests were performed in this paper to evaluate the performance of gravity matching aided navigation (GMAN). This study focused on four essential factors to quantitatively evaluate the performance: gravity database (DB) resolution, fitting degree of gravity measurements, number of samples in matching, and gravity changes in the matching area. A marine gravity anomaly DB derived from satellite altimetry was employed. Actual dynamic gravimetry accuracy and operating conditions were referenced to design the simulation parameters. The results verified that improvement of DB resolution, gravimetry accuracy, number of measurement samples, or gravity changes in the matching area generally led to higher positioning accuracies, while their effects were different and interrelated. Moreover, three typical positioning accuracy targets of GMAN were proposed, and the conditions to achieve these targets were determined based on the analysis of several different system requirements. Finally, various approaches were provided to improve the positioning accuracy of GMAN. PMID:28379178
NASA Astrophysics Data System (ADS)
Jiang, Q. F.; Zhuang, M.; Zhu, Z. G.; Y Zhang, Q.; Sheng, L. H.
2017-12-01
Counter-flow plate-fin heat exchangers are commonly utilized in cryogenic applications due to their high effectiveness and compact size. For cryogenic heat exchangers in helium liquefaction/refrigeration systems, conventional design theory is no longer applicable and they are usually sensitive to longitudinal heat conduction, heat in-leak from surroundings and variable fluid properties. Governing equations based on distributed parameter method are developed to evaluate performance deterioration caused by these effects. The numerical model could also be applied in many other recuperators with different structures and, hence, available experimental data are used to validate it. For a specific case of the multi-stream heat exchanger in the EAST helium refrigerator, quantitative effects of these heat losses are further discussed, in comparison with design results obtained by the common commercial software. The numerical model could be useful to evaluate and rate the heat exchanger performance under the actual cryogenic environment.
From big data to rich data: The key features of athlete wheelchair mobility performance.
van der Slikke, R M A; Berger, M A M; Bregman, D J J; Veeger, H E J
2016-10-03
Quantitative assessment of an athlete's individual wheelchair mobility performance is one prerequisite needed to evaluate game performance, improve wheelchair settings and optimize training routines. Inertial Measurement Unit (IMU) based methods can be used to perform such quantitative assessment, providing a large number of kinematic data. The goal of this research was to reduce that large amount of data to a set of key features best describing wheelchair mobility performance in match play and to present them in a meaningful way for both scientists and athletes. To test the discriminative power, wheelchair mobility characteristics of athletes with different performance levels were compared. The wheelchair kinematics of 29 (inter-)national level athletes were measured during a match using three inertial sensors mounted on the wheelchair. Principal component analysis was used to reduce 22 kinematic outcomes to a set of six outcomes regarding linear and rotational movement; speed and acceleration; average and best performance. In addition, it was explored whether groups of athletes with known performance differences based on their impairment classification also differed with respect to these key outcomes, using univariate general linear models. For all six key outcomes, classification proved to be a significant factor (p<0.05). We composed a set of six key kinematic outcomes that accurately describe wheelchair mobility performance in match play. The key kinematic outcomes were displayed in an easy-to-interpret way, usable for athletes, coaches and scientists. This standardized representation enables comparison of different wheelchair sports regarding wheelchair mobility, but also evaluation at the level of an individual athlete. By this means, the tool could enhance further development of wheelchair sports in general. Copyright © 2016 Elsevier Ltd. All rights reserved.
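The dimensionality-reduction step described above can be sketched in a few lines. The array shape mirrors the study (29 athletes, 22 kinematic outcomes), but the data below are random placeholders, and the study's actual key outcomes combined PCA with domain interpretation rather than using raw component scores.

```python
# Sketch of reducing many kinematic outcomes to a few principal components.
# Placeholder data; not the study's measurements or exact procedure.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(29, 22))         # 29 athletes x 22 kinematic outcomes (illustrative)

pca = PCA(n_components=6)             # retain six components, echoing the six key outcomes
scores = pca.fit_transform(X)         # per-athlete scores on the six components
print(pca.explained_variance_ratio_)  # variance explained by each component
```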
Sunwoo, Leonard; Yun, Tae Jin; You, Sung-Hye; Yoo, Roh-Eul; Kang, Koung Mi; Choi, Seung Hong; Kim, Ji-Hoon; Sohn, Chul-Ho; Park, Sun-Won; Jung, Cheolkyu; Park, Chul-Kee
2016-01-01
To evaluate the diagnostic performance of cerebral blood flow (CBF) by using arterial spin labeling (ASL) perfusion magnetic resonance (MR) imaging to differentiate glioblastoma (GBM) from brain metastasis. The institutional review board of our hospital approved this retrospective study. The study population consisted of 128 consecutive patients who underwent surgical resection and were diagnosed as either GBM (n = 89) or brain metastasis (n = 39). All participants underwent preoperative MR imaging including ASL. For qualitative analysis, the tumors were visually graded into five categories based on ASL-CBF maps by two blinded reviewers. For quantitative analysis, the reviewers drew regions of interest (ROIs) on ASL-CBF maps upon the most hyperperfused portion within the tumor and upon peritumoral T2 hyperintensity area. Signal intensities of intratumoral and peritumoral ROIs for each subject were normalized by dividing the values by those of contralateral normal gray matter (nCBFintratumoral and nCBFperitumoral, respectively). Visual grading scales and quantitative parameters between GBM and brain metastasis were compared. In addition, the area under the receiver-operating characteristic curve was used to evaluate the diagnostic performance of ASL-driven CBF to differentiate GBM from brain metastasis. For qualitative analysis, GBM group showed significantly higher grade compared to metastasis group (p = 0.001). For quantitative analysis, both nCBFintratumoral and nCBFperitumoral in GBM were significantly higher than those in metastasis (both p < 0.001). The areas under the curve were 0.677, 0.714, and 0.835 for visual grading, nCBFintratumoral, and nCBFperitumoral, respectively (all p < 0.001). ASL perfusion MR imaging can aid in the differentiation of GBM from brain metastasis.
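A minimal numeric sketch of the normalization and ROC analysis described above is given below. The ROI means and labels are hypothetical; in the study the ROIs are drawn on the ASL-CBF maps and normalized to contralateral normal gray matter.

```python
# Sketch, assuming hypothetical ROI mean values; labels mark GBM (1) vs. metastasis (0).
import numpy as np
from sklearn.metrics import roc_auc_score

def normalized_cbf(roi_cbf_mean, contralateral_gm_mean):
    """nCBF: tumor ROI signal divided by contralateral normal gray matter signal."""
    return roi_cbf_mean / contralateral_gm_mean

# Hypothetical nCBF values for four patients (ROI mean, contralateral GM mean):
ncbf = np.array([normalized_cbf(t, g) for t, g in
                 [(80, 55), (42, 50), (95, 60), (38, 52)]])
labels = np.array([1, 0, 1, 0])
print(roc_auc_score(labels, ncbf))   # area under the ROC curve for this toy sample
```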
Big fish in a big pond: a study of academic self concept in first year medical students.
Jackman, Kirsty; Wilson, Ian G; Seaton, Marjorie; Craven, Rhonda G
2011-07-27
Big-fish-little-pond effect (BFLPE) research has demonstrated that students in high-ability environments have lower academic self-concepts than equally able students in low-ability settings. Research has shown low academic self-concepts to be associated with negative educational outcomes. Social comparison processes have been implicated as fundamental to the BFLPE. Twenty first-year students in an Australian medical school completed a survey that included academic self-concept and social comparison measures, before and after their first written assessments. Focus groups were also conducted with a separate group of students to explore students' perceptions of competence, the medical school environment, and social comparison processes. The quantitative study did not reveal any changes in academic self-concept or self-evaluation. The qualitative study suggested that the attributions that students used when discussing performance were those that have been demonstrated to negatively affect self-concept. Students reported that the environment was slightly competitive and they used social comparison to evaluate their performance. Although the BFLPE was not evident in the quantitative study, results from the qualitative study suggest that the BFLPE might be operating, in that students were using attributions that are associated with lower self-concepts, the environment was slightly competitive, and social comparisons were used for evaluation.
Magnuson, Matthew L; Speth, Thomas F
2005-10-01
Granular activated carbon is a frequently explored technology for removing synthetic organic contaminants from drinking water sources. The success of this technology relies on a number of factors based not only on the adsorptive properties of the contaminant but also on properties of the water itself, notably the presence of substances in the water which compete for adsorption sites. Because it is impractical to perform field-scale evaluations for all possible contaminants, the pore surface diffusion model (PSDM) has been developed and used to predict activated carbon column performance using single-solute isotherm data as inputs. Many assumptions are built into this model to account for kinetics of adsorption and competition for adsorption sites. This work further evaluates and expands this model through the use of quantitative structure-property relationships (QSPRs) to predict the effect of natural organic matter fouling on activated carbon adsorption of specific contaminants. The QSPRs developed are based on a combination of calculated topographical indices and quantum chemical parameters. The QSPRs were evaluated in terms of their statistical predictive ability, the physical significance of the descriptors, and by comparison with field data. The QSPR-enhanced PSDM was judged to give results better than those previously obtainable.
NASA Astrophysics Data System (ADS)
Wang, Jen-Chieh; Zhou, Yufeng
2017-03-01
Extracorporeal shock wave lithotripsy (ESWL) has been used widely in the noninvasive treatment of kidney calculi. Fine fragments less than 2 mm in size can be discharged by urination, which determines the success of ESWL. Although ultrasonic and fluorescent imaging are used to localize the calculi, it is challenging to monitor the stone comminution progress, especially at the late stage of ESWL when fragments spread out as a cloud. The lack of real-time and quantitative evaluation makes this procedure semi-blind, resulting in either under- or over-treatment after the legal number of pulses required by the FDA. The time reversal operator (TRO) method has the ability to detect point-like scatterers, and the number of non-zero eigenvalues of the TRO is equal to that of the scatterers. In this study, the validity of the TRO method for identifying stones was illustrated with both numerical and experimental results for one to two stones with various sizes and locations. Furthermore, the parameters affecting the performance of the TRO method have also been investigated. Overall, the TRO method is effective in identifying the fragments in a stone cluster in real time. Further development of a detection system and evaluation of its performance both in vitro and in vivo during ESWL is necessary for application.
The EyeHarp: A Gaze-Controlled Digital Musical Instrument
Vamvakousis, Zacharias; Ramirez, Rafael
2016-01-01
We present and evaluate the EyeHarp, a new gaze-controlled Digital Musical Instrument, which aims to enable people with severe motor disabilities to learn, perform, and compose music using only their gaze as the control mechanism. It consists of (1) a step-sequencer layer, which serves for constructing chords/arpeggios, and (2) a melody layer, for playing melodies and changing the chords/arpeggios. We have conducted a pilot evaluation of the EyeHarp involving 39 participants with no disabilities, from both a performer and an audience perspective. In the first case, eight people with normal vision and no motor disability participated in a music-playing session in which both quantitative and qualitative data were collected. In the second case, 31 people qualitatively evaluated the EyeHarp in a concert setting consisting of two parts: a solo performance part, and an ensemble (EyeHarp, two guitars, and flute) performance part. The obtained results indicate that, similar to traditional musical instruments, the proposed digital musical instrument has a steep learning curve and allows expressive performances to be produced, from both the performer and audience perspectives. PMID:27445885
Comparison of 18F-FDG PET/CT and PET/MRI in patients with multiple myeloma
Sachpekidis, Christos; Hillengass, Jens; Goldschmidt, Hartmut; Mosebach, Jennifer; Pan, Leyun; Schlemmer, Heinz-Peter; Haberkorn, Uwe; Dimitrakopoulou-Strauss, Antonia
2015-01-01
PET/MRI represents a promising hybrid imaging modality with several potential clinical applications. Although PET/MRI seems highly attractive in the diagnostic approach of multiple myeloma (MM), its role has not yet been evaluated. The aims of this prospective study are to evaluate the feasibility of 18F-FDG PET/MRI in detection of MM lesions, and to investigate the reproducibility of bone marrow lesions detection and quantitative data of 18F-FDG uptake between the functional (PET) component of PET/CT and PET/MRI in MM patients. The study includes 30 MM patients. All patients initially underwent 18F-FDG PET/CT (60 min p.i.), followed by PET/MRI (120 min p.i.). PET/CT and PET/MRI data were assessed and compared based on qualitative (lesion detection) and quantitative (SUV) evaluation. The hybrid PET/MRI system provided good image quality in all cases without artefacts. PET/MRI identified 65 of the 69 lesions, which were detectable with PET/CT (94.2%). Quantitative PET evaluations showed the following mean values in MM lesions: SUVaverage=5.5 and SUVmax=7.9 for PET/CT; SUVaverage=3.9 and SUVmax=5.8 for PET/MRI. Both SUVaverage and SUVmax were significantly higher on PET/CT than on PET/MRI. Spearman correlation analysis demonstrated a strong correlation between both lesional SUVaverage (r=0.744) and lesional SUVmax (r=0.855) values derived from PET/CT and PET/MRI. Regarding detection of myeloma skeletal lesions, PET/MRI exhibited equivalent performance to PET/CT. In terms of tracer uptake quantitation, a significant correlation between the two techniques was demonstrated, despite the statistically significant differences in lesional SUVs between PET/CT and PET/MRI. PMID:26550538
Ding, Shujing; Dudley, Ed; Plummer, Sue; Tang, Jiandong; Newton, Russell P; Brenton, A Gareth
2006-01-01
A reversed-phase high-performance liquid chromatography/electrospray ionisation mass spectrometry (RP-HPLC/ESI-MS) method was developed and validated for the simultaneous determination of ten major active components in Ginkgo biloba extract (bilobalide, ginkgolides A, B, C, quercetin, kaempferol, isorhamnetin, rutin hydrate, quercetin-3-beta-D-glucoside and quercitrin hydrate) which have not been previously reported to be quantified in a single analysis. The ten components exhibit baseline separation in 50 min by C18 chromatography using a water/1:1 (v/v) methanol/acetonitrile gradient. Quantitation was performed using negative ESI-MS in selected ion monitoring (SIM) mode. Good reproducibility and recovery were obtained by this method. The sensitivity of both UV and different mass spectrometry modes (full scan, selected ion monitoring (SIM), and selected reaction monitoring (SRM)) were compared and both quantitation with and without internal standard were evaluated. The analysis of Ginkgo biloba commercial products showed remarkable variations in the rutin and quercetin content as well as the terpene lactone contents although all the products satisfy the conventional quality control method. Copyright 2006 John Wiley & Sons, Ltd.
Simulating realistic predator signatures in quantitative fatty acid signature analysis
Bromaghin, Jeffrey F.
2015-01-01
Diet estimation is an important field within quantitative ecology, providing critical insights into many aspects of ecology and community dynamics. Quantitative fatty acid signature analysis (QFASA) is a prominent method of diet estimation, particularly for marine mammal and bird species. Investigators using QFASA commonly use computer simulation to evaluate statistical characteristics of diet estimators for the populations they study. Similar computer simulations have been used to explore and compare the performance of different variations of the original QFASA diet estimator. In both cases, computer simulations involve bootstrap sampling prey signature data to construct pseudo-predator signatures with known properties. However, bootstrap sample sizes have been selected arbitrarily and pseudo-predator signatures therefore may not have realistic properties. I develop an algorithm to objectively establish bootstrap sample sizes that generates pseudo-predator signatures with realistic properties, thereby enhancing the utility of computer simulation for assessing QFASA estimator performance. The algorithm also appears to be computationally efficient, resulting in bootstrap sample sizes that are smaller than those commonly used. I illustrate the algorithm with an example using data from Chukchi Sea polar bears (Ursus maritimus) and their marine mammal prey. The concepts underlying the approach may have value in other areas of quantitative ecology in which bootstrap samples are post-processed prior to their use.
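A toy illustration of the pseudo-predator construction discussed above is sketched below: bootstrap-resample prey signatures within each prey type, average them, and mix the averages according to a known diet. The data, prey types, diet proportions, and bootstrap size are placeholders, and calibration coefficients used in full QFASA simulations are omitted, so this is not the paper's algorithm for choosing sample sizes.

```python
# Toy sketch of building a pseudo-predator fatty acid signature from
# bootstrapped prey signatures mixed by a known diet. Placeholder data only.
import numpy as np

rng = np.random.default_rng(1)

# prey_sigs[type] is an (n_samples x n_fatty_acids) array of proportions.
prey_sigs = {"seal": rng.dirichlet(np.ones(5), size=30),
             "fish": rng.dirichlet(np.ones(5), size=40)}
true_diet = {"seal": 0.7, "fish": 0.3}      # known diet defining the pseudo-predator

def pseudo_predator(prey_sigs, diet, n_boot=20):
    """Mix bootstrap means of prey signatures according to the diet proportions."""
    sig = np.zeros(next(iter(prey_sigs.values())).shape[1])
    for prey, prop in diet.items():
        samples = prey_sigs[prey]
        idx = rng.integers(0, len(samples), size=n_boot)  # bootstrap sample of size n_boot
        sig += prop * samples[idx].mean(axis=0)
    return sig / sig.sum()                                # renormalize to a signature

print(pseudo_predator(prey_sigs, true_diet))
```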
NASA Astrophysics Data System (ADS)
Yamaguchi, Hideshi; Soeda, Takeshi
2015-03-01
A practical framework for an electron beam induced current (EBIC) technique has been established for conductive materials based on a numerical optimization approach. Although the conventional EBIC technique is useful for evaluating the distributions of dopants or crystal defects in semiconductor transistors, issues related to the reproducibility and quantitative capability of measurements using this technique persist. For instance, it is difficult to acquire high-quality EBIC images throughout continuous tests due to variation in operator skill or test environment. Recently, due to the evaluation of EBIC equipment performance and the numerical optimization of equipment items, the constant acquisition of high contrast images has become possible, improving the reproducibility as well as yield regardless of operator skill or test environment. The technique proposed herein is even more sensitive and quantitative than scanning probe microscopy, an imaging technique that can possibly damage the sample. The new technique is expected to benefit the electrical evaluation of fragile or soft materials along with LSI materials.
[Clinical research XXIII. From clinical judgment to meta-analyses].
Rivas-Ruiz, Rodolfo; Castelán-Martínez, Osvaldo D; Pérez-Rodríguez, Marcela; Palacios-Cruz, Lino; Noyola-Castillo, Maura E; Talavera, Juan O
2014-01-01
Systematic reviews (SR) are studies designed to answer clinical questions on the basis of original articles. A meta-analysis (MTA) is the mathematical analysis of an SR. These analyses are divided into two groups: those which evaluate the measured results of quantitative variables (for example, the body mass index, BMI) and those which evaluate qualitative variables (for example, whether a patient is alive or dead, or whether or not he is healing). Quantitative variables are generally analyzed using the mean difference, and qualitative variables can be analyzed using several measures: the odds ratio (OR), relative risk (RR), absolute risk reduction (ARR) and hazard ratio (HR). These analyses are represented through forest plots, which allow the evaluation of each individual study, as well as the heterogeneity between studies and the overall effect of the intervention. These analyses are mainly based on Student's t test and chi-squared. To take appropriate decisions based on the MTA, it is important to understand the characteristics of the statistical methods in order to avoid misinterpretations.
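For a standard 2×2 table with a events and b non-events in the exposed (intervention) group and c events and d non-events in the control group, the qualitative-outcome measures named above take the textbook forms below (the hazard ratio is omitted because it requires time-to-event data). These are general definitions, not formulas quoted from the article.

```latex
\mathrm{OR}=\frac{a/b}{c/d}=\frac{ad}{bc},\qquad
\mathrm{RR}=\frac{a/(a+b)}{c/(c+d)},\qquad
\mathrm{ARR}=\frac{c}{c+d}-\frac{a}{a+b}
```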
Study On The Application Of CBERS-02B To Quantitative Soil Erosion Monitoring
NASA Astrophysics Data System (ADS)
Shi, Mingchang; Xu, Jing; Wang, Lei; Wang, Xiaoyun; Mu, Jing
2010-10-01
Currently, the reduction of soil erosion is an important prerequisite for achieving ecological security. Since real-time and quantitative evaluation of regional soil erosion plays a significant role in reducing soil erosion, soil erosion models are more and more widely used. Based on the RUSLE model, this paper carries out quantitative soil erosion monitoring in the Xi River Basin and its surrounding areas by using CBERS-02B CCD, DEM, TRMM and other data. It also validates the monitoring results against the 2005 remote sensing investigation results. The monitoring results show that in 2009, the total amount of soil erosion in the study area was 1.94×10⁶ t, the erosion area was 2055.2 km² (54.06% of the total area), and the average soil erosion modulus was 509.7 t·km⁻²·a⁻¹. As a case using CBERS-02B data for quantitative soil erosion monitoring, this study provides experience on the application of CBERS-02B data in the field of quantitative soil erosion monitoring and also for local soil erosion management.
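For reference, the standard RUSLE formulation estimates average annual soil loss A as the product of the rainfall erosivity factor R, the soil erodibility factor K, the slope length and steepness factors L and S, the cover-management factor C, and the support-practice factor P (generic form; the study's factor values are not given in the abstract):

```latex
A = R \cdot K \cdot L \cdot S \cdot C \cdot P
```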
Evaluation of Multiclass Model Observers in PET LROC Studies
NASA Astrophysics Data System (ADS)
Gifford, H. C.; Kinahan, P. E.; Lartizien, C.; King, M. A.
2007-02-01
A localization ROC (LROC) study was conducted to evaluate nonprewhitening matched-filter (NPW) and channelized NPW (CNPW) versions of a multiclass model observer as predictors of human tumor-detection performance with PET images. Target localization is explicitly performed by these model observers. Tumors were placed in the liver, lungs, and background soft tissue of a mathematical phantom, and the data simulation modeled a full-3D acquisition mode. Reconstructions were performed with the FORE+AWOSEM algorithm. The LROC study measured observer performance with 2D images consisting of either coronal, sagittal, or transverse views of the same set of cases. Versions of the CNPW observer based on two previously published difference-of-Gaussian channel models demonstrated good quantitative agreement with human observers. One interpretation of these results treats the CNPW observer as a channelized Hotelling observer with implicit internal noise.
NASA Astrophysics Data System (ADS)
Mercer, Gary J.
This quantitative study examined the relationship between math anxiety and physics performance among secondary students in an inquiry-based constructivist classroom. The Revised Math Anxiety Rating Scale was used to evaluate math anxiety levels. The results were then compared to performance on a standardized physics final examination. A simple correlation was performed, followed by a multivariate regression analysis to examine effects based on gender and prior math background. The correlation showed statistical significance between math anxiety and physics performance. The regression analysis showed statistical significance for math anxiety, physics performance, and prior math background, but did not show statistical significance for math anxiety, physics performance, and gender.
Fouad, Anthony; Pfefer, T. Joshua; Chen, Chao-Wei; Gong, Wei; Agrawal, Anant; Tomlins, Peter H.; Woolliams, Peter D.; Drezek, Rebekah A.; Chen, Yu
2014-01-01
Point spread function (PSF) phantoms based on unstructured distributions of sub-resolution particles in a transparent matrix have been demonstrated as a useful tool for evaluating resolution and its spatial variation across image volumes in optical coherence tomography (OCT) systems. Measurements based on PSF phantoms have the potential to become a standard test method for consistent, objective and quantitative inter-comparison of OCT system performance. Towards this end, we have evaluated three PSF phantoms and investigated their ability to compare the performance of four OCT systems. The phantoms are based on 260-nm-diameter gold nanoshells, 400-nm-diameter iron oxide particles and 1.5-micron-diameter silica particles. The OCT systems included spectral-domain and swept source systems in free-beam geometries as well as a time-domain system in both free-beam and fiberoptic probe geometries. Results indicated that iron oxide particles and gold nanoshells were most effective for measuring spatial variations in the magnitude and shape of PSFs across the image volume. The intensity of individual particles was also used to evaluate spatial variations in signal intensity uniformity. Significant system-to-system differences in resolution and signal intensity and their spatial variation were readily quantified. The phantoms proved useful for identification and characterization of irregularities such as astigmatism. Our multi-system results provide evidence of the practical utility of PSF-phantom-based test methods for quantitative inter-comparison of OCT system resolution and signal uniformity. PMID:25071949
Brenner, Stephan; Muula, Adamson S; Robyn, Paul Jacob; Bärnighausen, Till; Sarker, Malabika; Mathanga, Don P; Bossert, Thomas; De Allegri, Manuela
2014-04-22
In this article we present a study design to evaluate the causal impact of providing supply-side performance-based financing incentives in combination with a demand-side cash transfer component on equitable access to and quality of maternal and neonatal healthcare services. This intervention is introduced to selected emergency obstetric care facilities and catchment area populations in four districts in Malawi. We here describe and discuss our study protocol with regard to the research aims, the local implementation context, and our rationale for selecting a mixed methods explanatory design with a quasi-experimental quantitative component. The quantitative research component consists of a controlled pre- and post-test design with multiple post-test measurements. This allows us to quantitatively measure 'equitable access to healthcare services' at the community level and 'healthcare quality' at the health facility level. Guided by a theoretical framework of causal relationships, we determined a number of input, process, and output indicators to evaluate both intended and unintended effects of the intervention. Overall causal impact estimates will result from a difference-in-difference analysis comparing selected indicators across intervention and control facilities/catchment populations over time. To further explain heterogeneity of quantitatively observed effects and to understand the experiential dimensions of financial incentives on clients and providers, we designed a qualitative component in line with the overall explanatory mixed methods approach. This component consists of in-depth interviews and focus group discussions with providers, service users, non-users, and policy stakeholders. In this explanatory design, comprehensive understanding of expected and unexpected effects of the intervention on both access and quality will emerge through careful triangulation at two levels: across multiple quantitative elements and across quantitative and qualitative elements. Combining a traditional quasi-experimental controlled pre- and post-test design with an explanatory mixed methods model permits an additional assessment of organizational and behavioral changes affecting complex processes. Through this impact evaluation approach, our design will not only create robust evidence measures for the outcome of interest, but also generate insights on how and why the investigated interventions produce certain intended and unintended effects, allowing for a more in-depth evaluation approach.
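The overall causal effect from a controlled pre/post design of this kind is the standard difference-in-differences contrast between group means (generic form, not the study's specific estimating equation):

```latex
\widehat{\mathrm{DiD}}
  = \left(\bar{Y}^{\,\mathrm{treat}}_{\mathrm{post}}-\bar{Y}^{\,\mathrm{treat}}_{\mathrm{pre}}\right)
  - \left(\bar{Y}^{\,\mathrm{control}}_{\mathrm{post}}-\bar{Y}^{\,\mathrm{control}}_{\mathrm{pre}}\right)
```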
Azam, Philippe; Peiffer, Jean-Luc; Ourlin, Jean-Claude; Bonnet, Pierre-Antoine; Tissier, Marie-Hélène; Vian, Laurence; Fabre, Isabelle
2005-01-15
The local lymph node assay (LLNA) is a standard method for the detection of sensitizing chemicals in mice which measures the incorporation of tritiated thymidine in lymph node cells. We have evaluated an alternative to this method based on the interleukin-2 (IL-2) production of lymph node cells. At the mRNA level, no change in the IL-2 gene expression level was detected by real-time PCR analysis. At the protein level, various experimental conditions were checked in order to improve the irritant versus sensitizer discrimination with a restricted set of prototypic compounds. In particular, the use of phytohemagglutinin A (PHA) in an ex vivo cell culture step showed an improvement of both signal and discrimination. In these optimised conditions, a panel of irritants and potency-graded sensitizers was used to assess the performance of the modified method. IFN-gamma production was used as a positive control. For each compound, a dose-response was performed and stimulation indexes (SI) were determined. Effective concentrations (EC) for each sensitizer were then extracted and compared to the literature data of the standard LLNA. The IL-2-based LLNA showed similar performance at both qualitative and quantitative levels compared to the standard LLNA.
Buh Gasparic, Meti; Tengs, Torstein; La Paz, Jose Luis; Holst-Jensen, Arne; Pla, Maria; Esteve, Teresa; Zel, Jana; Gruden, Kristina
2010-03-01
Several techniques have been developed for the detection and quantification of genetically modified organisms, but quantitative real-time PCR is by far the most popular approach. Among the most commonly used real-time PCR chemistries are TaqMan probes and SYBR green, but many other detection chemistries have also been developed. Because their performance has never been compared systematically, here we present an extensive evaluation of some promising chemistries: sequence-unspecific DNA labeling dyes (SYBR green), primer-based technologies (AmpliFluor, Plexor, Lux primers), and techniques involving double-labeled probes, comprising hybridization (molecular beacon) and hydrolysis (TaqMan, CPT, LNA, and MGB) probes, based on recently published experimental data. For each detection chemistry, assays targeting selected loci were included. The real-time PCR chemistries were subsequently compared for their efficiency in PCR amplification and their limits of detection and quantification. The overall applicability of the chemistries was evaluated, adding practicability and cost issues to the performance characteristics. None of the chemistries seemed to be significantly better than any other, but certain features favor LNA and MGB technology as good alternatives to TaqMan in quantification assays. SYBR green and molecular beacon assays can perform equally well but may need more optimization prior to use.
Zhang, Tisheng; Niu, Xiaoji; Ban, Yalong; Zhang, Hongping; Shi, Chuang; Liu, Jingnan
2015-01-01
A GNSS/INS deeply-coupled system can improve satellite signal tracking performance through INS aiding of the tracking loops under dynamics. However, no literature was available on the complete modeling of the INS branch in the INS-aided tracking loop, which left no theoretical tool to guide the selection of inertial sensors, parameter optimization, and quantitative analysis of INS-aided PLLs. This paper addresses the modeling of the INS branch and the parameter optimization of phase-locked loops (PLLs) in a scalar-based GNSS/INS deeply-coupled system. It establishes the transfer function between all known error sources and the PLL tracking error, which can be used to quantitatively evaluate how a candidate inertial measurement unit (IMU) affects the carrier phase tracking error. Based on that, a steady-state error model is proposed to design INS-aided PLLs and to analyze their tracking performance. Based on the modeling and error analysis, an integrated deeply-coupled hardware prototype is developed, with optimization of the aiding information. Finally, the performance of the INS-aided PLLs designed based on the proposed steady-state error model is evaluated through simulation and road tests of the hardware prototype. PMID:25569751
Yang, Pan; Peng, Yulan; Zhao, Haina; Luo, Honghao; Jin, Ya; He, Yushuang
2015-01-01
Static shear wave elastography (SWE) is used to detect breast lesions, but slice and plane selection result in discrepancies. The aim was to evaluate the intraobserver reproducibility of continuous SWE, and whether quantitative elasticities in orthogonal planes perform better in the differential diagnosis of breast lesions. One hundred and twenty-two breast lesions scheduled for ultrasound-guided biopsy were recruited. Continuous SWE scans were conducted in orthogonal planes separately. Quantitative elasticities and histopathology results were collected. Reproducibility in the same plane and diagnostic performance in different planes were evaluated. The maximum and mean elasticities of the hardest portion, and the standard deviation of the whole lesion, had high intraclass correlation coefficients (0.87 to 0.95) and large areas under the receiver operating characteristic curve (0.887 to 0.899). Without loss of accuracy, sensitivities increased in orthogonal planes compared with a single plane (from 73.17% up to 82.93% at most). The mean elasticity of the whole lesion and the lesion-to-parenchyma ratio were significantly less reproducible and less accurate. Continuous SWE is highly reproducible for the same observer. The maximum and mean elasticities of the hardest portion and the standard deviation of the whole lesion are most reliable. Furthermore, the sensitivities of the three parameters are improved in orthogonal planes without loss of accuracy.
Performance evaluation model of a pilot food waste collection system in Suzhou City, China.
Wen, Zongguo; Wang, Yuanjia; De Clercq, Djavan
2015-05-01
This paper analyses the food waste collection and transportation (C&T) system in a pilot project in Suzhou by using a novel performance evaluation method. The method employed to conduct this analysis involves a unified performance evaluation index containing qualitative and quantitative indicators applied to data from Suzhou City. Two major inefficiencies were identified: a) low system efficiency due to insufficient processing capacity of commercial food waste facilities; and b) low waste resource utilization due to low efficiency of manual sorting. The performance evaluation indicated that the pilot project collection system's strong points included strong economics, low environmental impact and low social impact. This study also shows that Suzhou's integrated system has developed a comprehensive body of laws and clarified regulatory responsibilities for each of the various government departments to solve the problems of commercial food waste management. Based on Suzhou's experience, perspectives and lessons can be drawn for other cities and areas where food waste management systems are in the planning stage, or are encountering operational problems. Copyright © 2015 Elsevier Ltd. All rights reserved.
Hruska, Carrie B; Geske, Jennifer R; Swanson, Tiffinee N; Mammel, Alyssa N; Lake, David S; Manduca, Armando; Conners, Amy Lynn; Whaley, Dana H; Scott, Christopher G; Carter, Rickey E; Rhodes, Deborah J; O'Connor, Michael K; Vachon, Celine M
2018-06-05
Background parenchymal uptake (BPU), which refers to the level of Tc-99m sestamibi uptake within normal fibroglandular tissue on molecular breast imaging (MBI), has been identified as a breast cancer risk factor, independent of mammographic density. Prior analyses have used subjective categories to describe BPU. We evaluate a new quantitative method for assessing BPU by testing its reproducibility, comparing quantitative results with previously established subjective BPU categories, and determining the association of quantitative BPU with breast cancer risk. Two nonradiologist operators independently performed region-of-interest analysis on MBI images viewed in conjunction with corresponding digital mammograms. Quantitative BPU was defined as a unitless ratio of the average pixel intensity (counts/pixel) within the fibroglandular tissue versus the average pixel intensity in fat. Operator agreement and the correlation of quantitative BPU measures with subjective BPU categories assessed by expert radiologists were determined. Percent density on mammograms was estimated using Cumulus. The association of quantitative BPU with breast cancer (per one unit BPU) was examined within an established case-control study of 62 incident breast cancer cases and 177 matched controls. Quantitative BPU ranged from 0.4 to 3.2 across all subjects and was on average higher in cases compared to controls (1.4 versus 1.2, p < 0.007 for both operators). Quantitative BPU was strongly correlated with subjective BPU categories (Spearman's r = 0.59 to 0.69, p < 0.0001, for each paired combination of two operators and two radiologists). Interoperator and intraoperator agreement in the quantitative BPU measure, assessed by intraclass correlation, was 0.92 and 0.98, respectively. Quantitative BPU measures showed either no correlation or weak negative correlation with mammographic percent density. In a model adjusted for body mass index and percent density, higher quantitative BPU was associated with increased risk of breast cancer for both operators (OR = 4.0, 95% confidence interval (CI) 1.6-10.1, and 2.4, 95% CI 1.2-4.7). Quantitative measurement of BPU, defined as the ratio of average counts in fibroglandular tissue relative to that in fat, can be reliably performed by nonradiologist operators with a simple region-of-interest analysis tool. Similar to results obtained with subjective BPU categories, quantitative BPU is a functional imaging biomarker of breast cancer risk, independent of mammographic density and hormonal factors.
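The quantitative BPU measure described above reduces to a simple ratio of mean counts per pixel in the two regions of interest. The sketch below illustrates that arithmetic only; the pixel values are placeholders, and the actual ROI delineation in the study is done on MBI images viewed alongside the corresponding mammograms.

```python
# Sketch of the quantitative BPU ratio: mean counts/pixel in the fibroglandular
# ROI divided by mean counts/pixel in the fat ROI. Placeholder values only.
import numpy as np

def quantitative_bpu(fibroglandular_pixels, fat_pixels):
    """Unitless BPU ratio as described in the abstract."""
    return float(np.mean(fibroglandular_pixels) / np.mean(fat_pixels))

fibro_roi = np.array([52.0, 61.0, 58.0, 64.0])   # counts/pixel, illustrative
fat_roi   = np.array([40.0, 37.0, 44.0, 39.0])
print(quantitative_bpu(fibro_roi, fat_roi))       # ~1.47, within the reported 0.4-3.2 range
```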
A Performance Comparison of Feature Detectors for Planetary Rover Mapping and Localization
NASA Astrophysics Data System (ADS)
Wan, W.; Peng, M.; Xing, Y.; Wang, Y.; Liu, Z.; Di, K.; Teng, B.; Mao, X.; Zhao, Q.; Xin, X.; Jia, M.
2017-07-01
Feature detection and matching are key techniques in computer vision and robotics, and have been successfully implemented in many fields. So far there has been no performance comparison of feature detectors and matching methods for planetary mapping and rover localization using rover stereo images. In this research, we present a comprehensive evaluation and comparison of six feature detectors, including Moravec, Förstner, Harris, FAST, SIFT and SURF, aiming for optimal implementation of feature-based matching in a planetary surface environment. To facilitate quantitative analysis, a series of evaluation criteria, including distribution evenness of matched points, coverage of detected points, and feature matching accuracy, were developed in the research. In order to perform an exhaustive evaluation, stereo images, simulated under different baselines, pitch angles, and intervals between adjacent rover locations, were taken as the experimental data source. The comparison results show that SIFT offers the best overall performance; in particular, it is less sensitive to changes between images taken at adjacent locations.
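As an illustration of the kind of detection-and-matching step being compared, the sketch below runs SIFT with a Lowe ratio test on a stereo pair using OpenCV. The image paths are hypothetical, and the paper's evaluation criteria (point distribution evenness, coverage, matching accuracy) are not reproduced here.

```python
# Illustrative SIFT detection and ratio-test matching on a stereo pair.
# File names are placeholders; requires OpenCV >= 4.4 for cv2.SIFT_create.
import cv2

left = cv2.imread("rover_left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical image paths
right = cv2.imread("rover_right.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp_l, des_l = sift.detectAndCompute(left, None)
kp_r, des_r = sift.detectAndCompute(right, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des_l, des_r, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]   # Lowe ratio test
print(f"{len(kp_l)} / {len(kp_r)} keypoints, {len(good)} ratio-test matches")
```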
Kelay, Tanika; Chan, Kah Leong; Ako, Emmanuel; Yasin, Mohammad; Costopoulos, Charis; Gold, Matthew; Kneebone, Roger K; Malik, Iqbal S; Bello, Fernando
2017-01-01
Distributed Simulation is the concept of portable, high-fidelity immersive simulation. Here, it is used for the development of a simulation-based training programme for cardiovascular specialities. We present an evidence base for how accessible, portable and self-contained simulated environments can be effectively utilised for the modelling, development and testing of a complex training framework and assessment methodology. Iterative user feedback through mixed-methods evaluation techniques resulted in the implementation of the training programme. Four phases were involved in the development of our immersive simulation-based training programme: (1) initial conceptual stage for mapping structural criteria and parameters of the simulation training framework and scenario development (n = 16), (2) training facility design using Distributed Simulation, (3) test cases with clinicians (n = 8) and collaborative design, where evaluation and user feedback involved a mixed-methods approach featuring (a) quantitative surveys to evaluate the realism and perceived educational relevance of the simulation format and framework for training and (b) qualitative semi-structured interviews to capture detailed feedback including changes and scope for development. Refinements were made iteratively to the simulation framework based on user feedback, resulting in (4) transition towards implementation of the simulation training framework, involving consistent quantitative evaluation techniques for clinicians (n = 62). For comparative purposes, clinicians' initial quantitative mean evaluation scores for realism of the simulation training framework, realism of the training facility and relevance for training (n = 8) are presented longitudinally, alongside feedback throughout the development stages from concept to delivery, including the implementation stage (n = 62). Initially, mean evaluation scores fluctuated from low to average, rising incrementally. This corresponded with the qualitative component, which augmented the quantitative findings; trainees' user feedback was used to perform iterative refinements to the simulation design and components (collaborative design), resulting in higher mean evaluation scores leading up to the implementation phase. Through application of innovative Distributed Simulation techniques, collaborative design, and consistent evaluation techniques from conceptual, development, and implementation stages, fully immersive simulation techniques for cardiovascular specialities are achievable and have the potential to be implemented more broadly.
Tan, Peng; Zhang, Hai-Zhu; Zhang, Ding-Kun; Wu, Shan-Na; Niu, Ming; Wang, Jia-Bo; Xiao, Xiao-He
2017-07-01
This study attempts to evaluate the quality of Chinese formula granules by combined use of multi-component simultaneous quantitative analysis and bioassay. Rhubarb dispensing granules were used as the model drug for this demonstrative study. An ultra-high performance liquid chromatography (UPLC) method was adopted for simultaneous quantitative determination of 10 anthraquinone derivatives (such as aloe emodin-8-O-β-D-glucoside) in rhubarb dispensing granules; the purgative biopotency of different batches of rhubarb dispensing granules was determined based on a compound diphenoxylate tablets-induced mouse constipation model; the blood-activating biopotency of different batches of rhubarb dispensing granules was determined based on an in vitro rat antiplatelet aggregation model; SPSS 22.0 statistical software was used for correlation analysis between the 10 anthraquinone derivatives and purgative biopotency and blood-activating biopotency. The results of the multi-component simultaneous quantitative analysis showed that there was a great difference in chemical characterization and certain differences in purgative biopotency and blood-activating biopotency among the 10 batches of rhubarb dispensing granules. The correlation analysis showed that the intensity of purgative biopotency was significantly correlated with the content of conjugated anthraquinone glycosides (P<0.01), and the intensity of blood-activating biopotency was significantly correlated with the content of free anthraquinone (P<0.01). In summary, the combined use of multi-component simultaneous quantitative analysis and bioassay can achieve objective quantification and a more comprehensive reflection of the overall quality difference among different batches of rhubarb dispensing granules. Copyright© by the Chinese Pharmaceutical Association.
NASA Astrophysics Data System (ADS)
Bhave, Ajay; Dessai, Suraje; Conway, Declan; Stainforth, David
2016-04-01
Deep uncertainty in future climate change and socio-economic conditions necessitates the use of assess-risk-of-policy approaches over predict-then-act approaches for adaptation decision making. Robust Decision Making (RDM) approaches embody this principle and help evaluate the ability of adaptation options to satisfy stakeholder preferences under wide-ranging future conditions. This study involves the simultaneous application of two RDM approaches, qualitative and quantitative, in the Cauvery River Basin in Karnataka (population ~23 million), India. The study aims to (a) determine robust water resources adaptation options for the 2030s and 2050s and (b) compare the usefulness of a qualitative stakeholder-driven approach with a quantitative modelling approach. To develop a large set of future scenarios, a combination of climatic and socio-economic narratives was used. Climatic narratives were developed through structured elicitation with a group of experts on the Indian Summer Monsoon. Socio-economic narratives were developed to reflect potential future urban and agricultural water demand. In the qualitative RDM approach, a stakeholder workshop helped elicit key vulnerabilities, water resources adaptation options and performance criteria for evaluating options. During a second workshop, stakeholders discussed and evaluated adaptation options against the performance criteria for a large number of scenarios of climatic and socio-economic change in the basin. In the quantitative RDM approach, a Water Evaluation And Planning (WEAP) model was forced by precipitation and evapotranspiration data coherent with the climatic narratives, together with water demand data based on the socio-economic narratives. We find that, compared to business-as-usual conditions, options addressing urban water demand satisfy the performance criteria across scenarios and provide co-benefits such as energy savings and reduced groundwater depletion, while options reducing agricultural water demand significantly affect downstream water availability. Water demand options demonstrate potential to improve environmental flow conditions and satisfy legal water supply requirements for downstream riparian states. On the other hand, currently planned large-scale infrastructural projects demonstrate reduced value in certain scenarios, illustrating the lock-in effects of large-scale infrastructure. From a methodological perspective, we find that while the stakeholder-driven approach revealed robust options in a resource-light manner and helped initiate much-needed interaction amongst stakeholders, the modelling approach provides complementary quantitative information. The study reveals robust adaptation options for this important basin and provides a strong methodological basis for future studies that support adaptation decision making.
Sedaghat, Ahmad R; Kieff, David A; Bergmark, Regan W; Cunnane, Mary E; Busaba, Nicolas Y
2015-03-01
Performance of septoplasty is dependent on objective evidence of nasal septal deviation. Although physical examination, including anterior rhinoscopy and endoscopic examination, is the gold standard for evaluation of septal deviation, third-party payors' reviews of septoplasty claims are often made on computed tomography (CT) findings. However, the correlation between radiographic evaluation of septal deviation and physical examination findings is unknown. A retrospective, blinded, independent evaluation of septal deviation in 39 consecutive patients was performed from physical examination, including anterior rhinoscopy and endoscopic examination, by an otolaryngologist, and from radiographic evaluation of sinus CT scans by a neuroradiologist. Four distinct septal locations (nasal valve, cartilaginous, inferior/maxillary crest and osseous septum) were evaluated on a 4-point scale representing (1) 0% to 25%, (2) >25% to 50%, (3) >50% to 75%, and (4) >75% obstruction. Correlation between physical examination and radiographic evaluations was assessed by Pearson's correlation, and quantitative agreement was assessed by Krippendorff's alpha. Statistically significant correlation was detected between physical examination including nasal endoscopy and radiographic assessment of septal deviation only at the osseous septum (p = 0.007, r = 0.425), with low quantitative agreement (α = 0.290). No significant correlation was detected at the cartilaginous septum (p = 0.286, r = 0.175), inferior septum (p = 0.117, r = 0.255), or nasal valve (p = 0.174, r = 0.222). Quantitative agreement at the nasal valve suggested a bias in CT to underestimate physical exam findings (α = -0.490). CT is a poor substitute for physical examination, the gold standard, in the assessment of septal deviation. Clinical decisions about pursuit of septoplasty or third-party payors' decisions to approve septoplasty should not be made on radiographic evidence. © 2014 ARS-AAOA, LLC.
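The correlation step described above can be sketched as a simple paired comparison of the 4-point grades at one septal site. The grades below are illustrative placeholders, not the study's data, and only the Pearson correlation is shown; Krippendorff's alpha requires a dedicated agreement calculation that is omitted here.

```python
# Sketch of correlating paired 4-point obstruction grades from physical
# examination and CT at one septal site. Placeholder values only.
from scipy.stats import pearsonr

exam_grades = [1, 2, 3, 2, 4, 1, 3, 2]   # hypothetical physical-exam grades
ct_grades   = [1, 1, 3, 2, 3, 2, 2, 2]   # hypothetical CT grades at the same site

r, p = pearsonr(exam_grades, ct_grades)
print(f"r = {r:.3f}, p = {p:.3f}")
```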
Tabrizi, Jafar Sadegh; Gholipour, Kamal; Iezadi, Shabnam; Farahbakhsh, Mostafa; Ghiasi, Akbar
2018-01-01
The aim was to design a district health management performance framework for Iran's healthcare system. This mixed-methods study was conducted between September 2015 and May 2016 in Tabriz, Iran. The indicators of district health management performance were obtained by analyzing 45 semi-structured surveys of experts in the public health system. The content validity of the performance indicators generated in the qualitative part was reviewed and confirmed based on the content validity index (CVI). The content validity ratio (CVR) was also calculated using data acquired from a survey of 21 experts in the quantitative part. The results indicated that, initially, 81 indicators were considered in the framework of district health management performance, and 53 indicators were ultimately validated and confirmed. These indicators were classified into 11 categories: human resources and organizational creativity, management and leadership, rules and ethics, planning and evaluation, district managing, health resources management and economics, community participation, quality improvement, research in the health system, health information management, and epidemiology and situation analysis. The designed framework can be used to assess district health management and facilitates performance improvement at the district level.
Using Perturbation Theory to Reduce Noise in Diffusion Tensor Fields
Bansal, Ravi; Staib, Lawrence H.; Xu, Dongrong; Laine, Andrew F.; Liu, Jun; Peterson, Bradley S.
2009-01-01
We propose the use of Perturbation theory to reduce noise in Diffusion Tensor (DT) fields. Diffusion Tensor Imaging (DTI) encodes the diffusion of water molecules along different spatial directions in a positive-definite, 3 × 3 symmetric tensor. Eigenvectors and eigenvalues of DTs allow the in vivo visualization and quantitative analysis of white matter fiber bundles across the brain. The validity and reliability of these analyses are limited, however, by the low spatial resolution and low Signal-to-Noise Ratio (SNR) in DTI datasets. Our procedures can be applied to improve the validity and reliability of these quantitative analyses by reducing noise in the tensor fields. We model a tensor field as a three-dimensional Markov Random Field and then compute the likelihood and the prior terms of this model using Perturbation theory. The prior term constrains the tensor field to be smooth, whereas the likelihood term constrains the smoothed tensor field to be similar to the original field. Thus, the proposed method generates a smoothed field that is close in structure to the original tensor field. We evaluate the performance of our method both visually and quantitatively using synthetic and real-world datasets. We quantitatively assess the performance of our method by computing the SNR for eigenvalues and the coherence measures for eigenvectors of DTs across tensor fields. In addition, we quantitatively compare the performance of our procedures with the performance of one method that uses a Riemannian distance to compute the similarity between two tensors, and with another method that reduces noise in tensor fields by anisotropically filtering the diffusion weighted images that are used to estimate diffusion tensors. These experiments demonstrate that our method significantly increases the coherence of the eigenvectors and the SNR of the eigenvalues, while simultaneously preserving the fine structure and boundaries between homogeneous regions, in the smoothed tensor field. PMID:19540791
Shaimova, V A; Shaimov, T B; Shaimov, R B; Galin, A Yu; Goloshchapova, Zh A; Ryzhkov, P K; Fomin, A V
2018-01-01
To develop methods for evaluating the effectiveness of YAG-laser vitreolysis of vitreous floaters. The study included 144 patients (173 eyes) who had undergone YAG-laser vitreolysis and were under observation from 01.09.16 to 31.01.18. The patients were 34 to 86 years old (mean age 62.7±10.2 years); 28 patients (19.4%) were male and 116 (80.6%) were female. All patients underwent standard and additional examination: ultrasonography (Accutome B-scan plus, U.S.A.), optical biometry (Lenstar 900, Haag-Streit, Switzerland), spectral optical coherence tomography using the RTVue XR Avanti scanner (Optovue, U.S.A.) in the Enhanced HD Line, 3D Retina, 3D Widefield MCT, Cross Line and Angio Retina modes, and scanning laser ophthalmoscopy (SLO) using the Navilas 577s system. Laser vitreolysis was performed using the Ultra Q Reflex laser (Ellex, Australia). This paper presents methods of objective quantitative and qualitative assessment of artifactual shadows of vitreous floaters with the spectral optical coherence tomography scanner RTVue XR Avanti, employing an algorithm for automatic detection of non-perfusion zones in the Angio Retina and HD Angio Retina modes, as well as foveal avascular zone (FAZ) measurement with Angio Analytics® software. SLO performed with the Navilas 577s was used as a method of visualizing floaters and artifactual shadows in the retinal surface layers prior to surgical treatment and after YAG-laser vitreolysis. The suggested methods of quantitative and qualitative assessment of artifactual shadows of floaters in the retinal layers are promising and may prove highly relevant for clinical monitoring of patients, optimization of treatment indications and evaluation of the effectiveness of YAG-laser vitreolysis. Further research on the effectiveness of laser vitreolysis in patients with vitreous floaters is necessary.
Methods for the field evaluation of quantitative G6PD diagnostics: a review.
Ley, Benedikt; Bancone, Germana; von Seidlein, Lorenz; Thriemer, Kamala; Richards, Jack S; Domingo, Gonzalo J; Price, Ric N
2017-09-11
Individuals with glucose-6-phosphate dehydrogenase (G6PD) deficiency are at risk of severe haemolysis following the administration of 8-aminoquinoline compounds. Primaquine is the only widely available 8-aminoquinoline for the radical cure of Plasmodium vivax. Tafenoquine is under development with the potential to simplify treatment regimens, but point-of-care (PoC) tests will be needed to provide quantitative measurement of G6PD activity prior to its administration. There is currently a lack of appropriate G6PD PoC tests, but a number of new tests are in development and are likely to enter the market in the coming years. As these are implemented, they will need to be validated in field studies. This article outlines the technical details for the field evaluation of novel quantitative G6PD diagnostics, such as sample handling, reference testing and statistical analysis. Field evaluation is based on the comparison of paired samples, including one sample tested by the new assay at point of care and one sample tested by the gold-standard reference method, UV spectrophotometry, in an established laboratory. Samples can be collected as capillary or venous blood; the existing literature suggests that potential differences between capillary and venous blood are unlikely to affect results substantially. The collection and storage of samples is critical to ensure preservation of enzyme activity; it is recommended that samples are stored at 4 °C and tested within 4 days of collection. Test results can be visually presented as a scatter plot, a Bland-Altman plot, and a histogram of the G6PD activity distribution of the study population. Calculating the adjusted male median allows results to be categorized according to G6PD activity, from which standard performance indicators can be calculated and receiver operating characteristic (ROC) analysis performed.
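A small sketch of the adjusted-male-median calculation and the activity categorization it supports; the 10% exclusion rule and the 30%/70% cut-offs are the conventions commonly used in this literature and are assumptions here, as are the example readings.

```python
import numpy as np

def adjusted_male_median(male_activity):
    """Median of male G6PD activity after excluding severely deficient males
    (commonly those below 10% of the unadjusted male median)."""
    male_activity = np.asarray(male_activity, dtype=float)
    crude_median = np.median(male_activity)
    retained = male_activity[male_activity >= 0.10 * crude_median]
    return np.median(retained)

def categorize(activity, amm):
    """Categorize activity as a fraction of the adjusted male median (AMM);
    the 30% and 70% cut-offs are the ones typically used, an assumption here."""
    frac = activity / amm
    if frac < 0.30:
        return "deficient"
    if frac < 0.70:
        return "intermediate"
    return "normal"

# Hypothetical spectrophotometry readings (U/g Hb)
males = [11.2, 0.4, 9.8, 12.5, 1.1, 10.7, 8.9]
amm = adjusted_male_median(males)
print(amm, categorize(3.0, amm), categorize(6.5, amm), categorize(9.0, amm))
```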
Le Pogam, Adrien; Hatt, Mathieu; Descourt, Patrice; Boussion, Nicolas; Tsoumpas, Charalampos; Turkheimer, Federico E; Prunier-Aesch, Caroline; Baulieu, Jean-Louis; Guilloteau, Denis; Visvikis, Dimitris
2011-09-01
Partial volume effects (PVEs) are consequences of the limited spatial resolution in emission tomography leading to underestimation of uptake in tissues of size similar to the point spread function (PSF) of the scanner as well as activity spillover between adjacent structures. Among PVE correction methodologies, a voxel-wise mutual multiresolution analysis (MMA) was recently introduced. MMA is based on the extraction and transformation of high resolution details from an anatomical image (MR/CT) and their subsequent incorporation into a low-resolution PET image using wavelet decompositions. Although this method allows creating PVE corrected images, it is based on a 2D global correlation model, which may introduce artifacts in regions where no significant correlation exists between anatomical and functional details. A new model was designed to overcome these two issues (2D only and global correlation) using a 3D wavelet decomposition process combined with a local analysis. The algorithm was evaluated on synthetic, simulated and patient images, and its performance was compared to the original approach as well as the geometric transfer matrix (GTM) method. Quantitative performance was similar to the 2D global model and GTM in correlated cases. In cases where mismatches between anatomical and functional information were present, the new model outperformed the 2D global approach, avoiding artifacts and significantly improving quality of the corrected images and their quantitative accuracy. A new 3D local model was proposed for a voxel-wise PVE correction based on the original mutual multiresolution analysis approach. Its evaluation demonstrated an improved and more robust qualitative and quantitative accuracy compared to the original MMA methodology, particularly in the absence of full correlation between anatomical and functional information.
Nimpagaritse, Manassé; Korachais, Catherine; Roberfroid, Dominique; Kolsteren, Patrick; Zine Eddine El Idrissi, Moulay Driss; Meessen, Bruno
2016-06-14
Malnutrition is a huge problem in Burundi. In order to improve the provision of services at hospital, health centre and community levels, the Ministry of Health is piloting the introduction of malnutrition prevention and care indicators within its performance-based financing (PBF) scheme. Paying for units of services and for qualitative indicators is expected to enhance the provision and quality of these nutrition services, as PBF has done, in Burundi and elsewhere, for several other services. This paper presents the protocol for the impact evaluation of the PBF scheme applied to malnutrition. The research design consists of a mixed-methods model adopting a sequential explanatory design. The quantitative component is a cluster-randomized controlled evaluation design: among the 90 health centres selected for the study, half receive payment related to their results in malnutrition activities, while the other half get a budget allocation. Qualitative research will be carried out both during the intervention period and at the end of the quantitative evaluation. Data are collected from 1) baseline and follow-up surveys of 90 health centres and 6,480 households with children aged 6 to 23 months, 2) logbooks filled in weekly in health centres, and 3) in-depth interviews and focus group discussions. The evaluation aims to provide the best estimate of the impact of the project on malnutrition outcomes in the community as well as outputs at the health centre level (malnutrition care outputs) and to describe quantitatively and qualitatively the changes that took place (or did not take place) within health centres as a result of the program. Although PBF schemes are proliferating in low-income countries, there is still a need for evidence, especially on the impact of revising the list of remunerated indicators. It is expected that this impact evaluation will be helpful for the national policy dialogue in Burundi, and it will also provide key evidence for countries with an existing PBF scheme that are confronted with malnutrition problems, regarding the appropriateness of extending the strategy to nutrition services. ClinicalTrials.gov PRS Identifier: NCT02721160; registered March 2016.
2009-09-01
Legionella pneumophila (ATCC 33152) were acquired from the American Type Culture Collection (ATCC; Manassas, VA, USA). Conventional PCR testing The primers... Legionella pneumophila – One nanogram of genomic DNA was used as a DNA template in each assay. Assays were performed in triplicate. Samples were negative at
Nuclear Scintigraphy in Practice: Gastrointestinal Motility.
Solnes, Lilja B; Sheikhbahaei, Sara; Ziessman, Harvey A
2018-05-29
The purpose of this article is to describe the clinical utility of state-of-the-art gastrointestinal transit scintigraphy, including the standardized esophageal transit, solid and liquid gastric emptying, small-bowel transit, colon transit, and whole-gut transit scintigraphy, with an emphasis on procedure performance. Radionuclide gastrointestinal motility studies are noninvasive, quantitative, and physiologic diagnostic tools for evaluating patients with gastrointestinal complaints.
Application of Simulation to Individualized Self-Paced Training. Final Report. TAEG Report No. 11-2.
ERIC Educational Resources Information Center
Lindahl, William H.; Gardner, James H.
Computer simulation is recognized as a valuable systems analysis research tool which enables the detailed examination, evaluation, and manipulation, under stated conditions, of a system without direct action on the system. This technique provides management with quantitative data on system performance and capabilities which can be used to compare…
ERIC Educational Resources Information Center
Rehberg, Robb S.; Gazzillo Diaz, Linda; Middlemas, David A.
2009-01-01
Objective: The objective of this study was to determine whether computer-based CPR training is comparable to traditional classroom training. Design and Setting: This study was quantitative in design. Data was gathered from a standardized examination and skill performance evaluation which yielded numerical scores. Subjects: The subjects were 64…
ERIC Educational Resources Information Center
Palmer, Phyllis M.; Jaffe, Debra M.; McCulloch, Timothy M.; Finnegan, Eileen M.; Van Daele, Douglas J.; Luschei, Erich S.
2008-01-01
Purpose: The purpose of this investigation was to evaluate the relationship between tongue-to-palate pressure and the electromyography (EMG) measured from the mylohyoid, anterior belly of the digastric, geniohyoid, medial pterygoid, velum, genioglossus, and intrinsic tongue muscles. Methods: Seven healthy adults performed tongue-to-palate pressure…
A model for evaluating the social performance of construction waste management.
Yuan, Hongping
2012-06-01
Existing literature shows that considerable research effort has been devoted to the economic performance of construction waste management (CWM), but less attention has been paid to investigating the social performance of CWM. This study therefore attempts to develop a model for quantitatively evaluating the social performance of CWM by using a system dynamics (SD) approach. Firstly, major variables affecting the social performance of CWM are identified and a holistic system for assessing the social performance of CWM is formulated in line with the feedback relationships underlying these variables. The developed system is then converted into an SD model through the software iThink. An empirical case study is finally conducted to demonstrate application of the model. Results of model validation indicate that the model is robust and reasonably reflects the behaviour of the real system under study. Findings of the case study offer helpful insights into effectively promoting the social performance of CWM for the project investigated. Furthermore, the model exhibits great potential to function as an experimental platform for dynamically evaluating the effects of management measures on improving the social performance of CWM in construction projects. Copyright © 2012 Elsevier Ltd. All rights reserved.
Georgsson, Mattias; Staggers, Nancy
2016-01-01
Mobile health (mHealth) systems are becoming more common for chronic disease management, but usability studies are still needed on patients' perspectives and mHealth interaction performance. This deficiency is addressed by our quantitative usability study of a mHealth diabetes system evaluating patients' task performance, satisfaction, and the relationship of these measures to user characteristics. We used metrics in the International Organization for Standardization (ISO) 9241-11 standard. After standardized training, 10 patients performed representative tasks and were assessed on individual task success, errors, efficiency (time on task), satisfaction (System Usability Scale [SUS]) and user characteristics. Tasks of exporting and correcting values proved the most difficult, had the most errors, the lowest task success rates, and consumed the longest times on task. The average SUS satisfaction score was 80.5, indicating good but not excellent system usability. Data trends showed males were more successful in task completion, and younger participants had higher performance scores. Educational level did not influence performance, but a more recent diabetes diagnosis did. Patients with more experience in information technology (IT) also had higher performance rates. Difficult task performance indicated areas for redesign. Our methods can assist others in identifying areas in need of improvement. Data about user background and IT skills also showed how user characteristics influence performance and can provide future considerations for targeted mHealth designs. Using the ISO 9241-11 usability standard, the SUS instrument for satisfaction and measuring user characteristics provided objective measures of patients' experienced usability. These could serve as an exemplar for standardized, quantitative methods for usability studies on mHealth systems. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.
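For reference, the System Usability Scale satisfaction score reported above is computed from ten 1-5 item responses as follows; the response set in the example is hypothetical.

```python
def sus_score(responses):
    """System Usability Scale: ten items rated 1-5; odd items are positively
    worded (score - 1), even items negatively worded (5 - score); the sum of
    contributions is scaled by 2.5 to give a 0-100 score."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Hypothetical single-participant response set
print(sus_score([4, 2, 5, 1, 4, 2, 4, 1, 5, 2]))  # -> 85.0
```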
Alves, Antoine; Attik, Nina; Bayon, Yves; Royet, Elodie; Wirth, Carine; Bourges, Xavier; Piat, Alexis; Dolmazon, Gaëlle; Clermont, Gaëlle; Boutrand, Jean-Pierre; Grosgogeat, Brigitte; Gritsch, Kerstin
2018-03-14
The paradigm shift brought about by the expansion of tissue engineering and regenerative medicine away from the use of biomaterials currently questions the value of histopathologic methods in the evaluation of biological changes. To date, the available tools of evaluation are not fully consistent and satisfactory for these advanced therapies. We have developed a new, simple and inexpensive quantitative digital approach that provides key metrics for structural and compositional characterization of the regenerated tissues. For example, metrics provide the tissue ingrowth rate (TIR), which integrates two separate indicators: the cell ingrowth rate (CIR) and the total collagen content (TCC), as featured in the equation TIR% = CIR% + TCC%. Moreover, a subset of quantitative indicators describing the directional organization of the collagen (relating structure and mechanical function of tissues), the ratio of collagen I to collagen III (remodeling quality) and the optical anisotropy property of the collagen (maturity indicator) was automatically assessed as well. Using an image analyzer, all metrics were extracted from only two serial sections stained with either Feulgen & Rossenbeck (cell specific) or Picrosirius Red F3BA (collagen specific). To validate this new procedure, three-dimensional (3D) scaffolds were intraperitoneally implanted in healthy and in diabetic rats. It was hypothesized that, quantitatively, the healing tissue would be significantly delayed and of poor quality in diabetic rats in comparison to healthy rats. In addition, a chemically modified 3D scaffold was similarly implanted in a third group of healthy rats with the assumption that modulation of the ingrown tissue would be quantitatively present in comparison to the 3D scaffold-healthy group. After 21 days of implantation, both hypotheses were verified by use of this novel computerized approach. When the two methods were run in parallel, the quantitative results revealed fine details and differences not detected by the semi-quantitative assessment, demonstrating the importance of quantitative analysis in the performance evaluation of soft tissue healing. This automated and supervised method reduced operator dependency and proved to be simple, sensitive, cost-effective and time-effective. It supports objective therapeutic comparisons and helps to elucidate regeneration and the dynamics of a functional tissue.
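A minimal sketch of the headline metric, assuming CIR and TCC are expressed as percentages of the available ingrowth area measured on the two stained serial sections; the pixel counts are hypothetical.

```python
def tissue_ingrowth_rate(cell_area_px, collagen_area_px, ingrowth_area_px):
    """TIR% = CIR% + TCC%, with CIR and TCC expressed as percentages of the
    total ingrowth area available in the scaffold section (hypothetical inputs:
    pixel counts from the cell-specific and collagen-specific segmentations)."""
    cir = 100.0 * cell_area_px / ingrowth_area_px
    tcc = 100.0 * collagen_area_px / ingrowth_area_px
    return cir, tcc, cir + tcc

print(tissue_ingrowth_rate(cell_area_px=120_000, collagen_area_px=260_000,
                           ingrowth_area_px=800_000))
```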
Sachpekidis, Christos; Hillengass, Jens; Goldschmidt, Hartmut; Anwar, Hoda; Haberkorn, Uwe; Dimitrakopoulou-Strauss, Antonia
2017-01-01
A renewed interest has recently developed in the highly sensitive bone-seeking radiopharmaceutical 18F-NaF. The aim of the present study was to evaluate the potential utility of quantitative analysis of 18F-NaF dynamic PET/CT data in differentiating malignant from benign degenerative lesions in multiple myeloma (MM). 80 MM patients underwent whole-body PET/CT and dynamic PET/CT scanning of the pelvis with 18F-NaF. PET/CT data evaluation was based on visual (qualitative) assessment, semi-quantitative (SUV) calculations, and absolute quantitative estimations after application of a two-tissue compartment model and a non-compartmental approach leading to the extraction of the fractal dimension (FD). In total, 263 MM lesions were demonstrated on 18F-NaF PET/CT. Semi-quantitative and quantitative evaluations were performed for 25 MM lesions as well as for 25 benign, degenerative and traumatic lesions. Mean SUVaverage for MM lesions was 11.9 and mean SUVmax was 23.2; the respective SUVaverage and SUVmax for degenerative lesions were 13.5 and 20.2. Kinetic analysis of 18F-NaF revealed the following mean values for MM lesions: K1 = 0.248 1/min, k3 = 0.359 1/min, influx (Ki) = 0.107 1/min, FD = 1.382, while the respective values for degenerative lesions were K1 = 0.169 1/min, k3 = 0.422 1/min, influx (Ki) = 0.095 1/min, FD = 1.411. No statistically significant differences between MM and benign degenerative disease regarding SUVaverage, SUVmax, K1, k3 and influx (Ki) were demonstrated. FD was significantly higher in degenerative than in malignant lesions. The present findings show that quantitative analysis of 18F-NaF PET data cannot differentiate malignant from benign degenerative lesions in MM patients, supporting previously published results, which reflect the limited role of 18F-NaF PET/CT in the diagnostic workup of MM.
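For context, the influx constant of an irreversible two-tissue compartment model follows from the rate constants as Ki = K1·k3/(k2 + k3); the sketch below uses the reported K1 and k3 for the myeloma lesions with a hypothetical k2 (not given in the abstract), chosen so the result lands near the reported influx.

```python
def net_influx_rate(K1, k2, k3):
    """Net influx constant for an irreversible two-tissue compartment model
    (k4 assumed 0): Ki = K1 * k3 / (k2 + k3)."""
    return K1 * k3 / (k2 + k3)

# K1 and k3 as reported for the myeloma lesions; k2 is a hypothetical value,
# since it is not listed in the abstract.
print(net_influx_rate(K1=0.248, k2=0.47, k3=0.359))  # ~0.107 1/min
```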
Development and evaluation of a quantitative PCR assay for detection of Hepatozoon sp.
Criado-Fornelio, A; Buling, A; Cunha-Filho, N A; Ruas, J L; Farias, N A R; Rey-Valeiron, C; Pingret, J L; Etievant, M; Barba-Carretero, J C
2007-12-25
With the aim of improving current molecular diagnostic techniques for Hepatozoon sp. in carnivore mammals, we developed a quantitative PCR (qPCR) assay with SYBR Green I®. The method, consisting of amplification of a 235 bp fragment of the 18S rRNA gene, is able to detect at least 0.1 fg of parasite DNA. Reproducible quantitative results were obtained over a range of 0.1 ng-0.1 fg of Hepatozoon sp. DNA. To assess the performance of the qPCR assay, DNA samples from dogs (140) and cats (50) were tested with either standard PCR or qPCR. Positive samples were always confirmed by partial sequencing of the 18S rRNA gene. Quantitative PCR was 15.8% more sensitive than standard PCR in detecting H. canis in dogs. In cats, no infections were detected by standard PCR, compared to two positives by qPCR (which were infected by H. canis as shown by sequencing).
Napolitano, Assunta; Akay, Seref; Mari, Angela; Bedir, Erdal; Pizza, Cosimo; Piacente, Sonia
2013-11-01
Astragalus species are widely used as health foods and dietary supplements, as well as drugs in traditional medicine. To rapidly evaluate metabolite similarities and differences among the EtOH extracts of the roots of eight commercial Astragalus spp., an approach based on direct analyses by ESI-MS followed by PCA of the ESI-MS data was carried out. Subsequently, quali-quantitative analyses of cycloartane derivatives in the eight Astragalus spp. by LC-ESI-MS(n) and PCA of the LC-ESI-MS data were performed. This approach allowed metabolite similarities and differences among the various Astragalus spp. to be promptly highlighted. PCA results from LC-ESI-MS data of the Astragalus samples were in reasonable agreement with both the PCA results of the ESI-MS data and the quantitative results. This study affords an analytical method for the quali-quantitative determination of cycloartane derivatives in herbal preparations used as health and food supplements. Copyright © 2013 Elsevier B.V. All rights reserved.
Kim, Hyo Jin; Kim, Sun Mi; Kim, Bohyoung; La Yun, Bo; Jang, Mijung; Ko, Yousun; Lee, Soo Hyun; Jeong, Heeyeong; Chang, Jung Min; Cho, Nariya
2018-04-18
We investigated addition of strain and shear wave elastography to conventional ultrasonography for the qualitative and quantitative assessment of breast masses; cut-off points were determined for strain ratio, elasticity ratio, and visual score for differentiating between benign and malignant masses. In all, 108 masses from 94 patients were evaluated with strain and shear wave elastography and scored for suspicion of malignancy, visual score, strain ratio, and elasticity ratio. The diagnostic performance between ultrasonography alone and ultrasonography combined with either type of elastography was compared; cut-off points were determined for strain ratio, elasticity ratio, and visual score. Of the 108 masses, 44 were malignant and 64 were benign. The areas under the curves were significantly higher for strain and shear wave elastography-supplemented ultrasonography (0.839 and 0.826, respectively; P = 0.656) than for ultrasonography alone (0.764; P = 0.018 and 0.035, respectively). The diagnostic performances of strain and elasticity ratios were similar when differentiating benign from malignant masses. Cut-off values for strain ratio, elasticity ratio, and visual scores for strain and shear wave elastography were 2.93, 4, 3, and 2, respectively. Both forms of elastography similarly improved the diagnostic performance of conventional ultrasonography in the qualitative and quantitative assessment of breast masses.
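A sketch of how such cut-off points can be derived from ROC analysis (here via the Youden index) using scikit-learn; the strain-ratio values and labels are hypothetical, and the Youden criterion is an assumption since the abstract does not state the cut-off rule.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical data: 1 = malignant, 0 = benign; scores = strain ratios.
y_true = np.array([0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1])
strain = np.array([1.2, 2.1, 3.2, 1.6, 3.5, 4.8, 2.5, 3.1, 6.2, 2.9, 1.9, 5.0])

auc = roc_auc_score(y_true, strain)
fpr, tpr, thresholds = roc_curve(y_true, strain)
youden = tpr - fpr                       # Youden's J at each candidate threshold
cutoff = thresholds[np.argmax(youden)]
print(f"AUC = {auc:.3f}, Youden-optimal strain-ratio cut-off = {cutoff:.2f}")
```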
Determination of emamectin benzoate in medicated fish feed: a multisite study.
Farer, Leslie J
2005-01-01
A new method was developed for the quantitation of emamectin benzoate in medicated fish feed at levels between 1 and 30 ppm. The new procedure, based on a previously reported assay, consists of a wet methanolic extraction of ground feed, followed by solid-phase extraction and injection onto a gradient liquid chromatographic system. A multisite study involving 3 laboratories (the developing laboratory and 2 independent laboratories) was performed to evaluate precision, recovery, linearity, and sensitivity. Mean recoveries for triplicate analyses at 3 levels, performed by 2 analysts per laboratory, were between 89 and 97%, with coefficients of variation ranging from 1.6 to 8.6%. Coefficients of determination (r2) obtained from the plotted data were ≥0.993. The precision of the method, determined from 6 replicate preparations from the same batch of medicated feed assayed in 3 separate trials per laboratory, was between 0.6 and 5.8%. The quantitation limit was established at 0.5 ppm. Specificity and robustness studies were performed by the developing laboratory.
Del Grande, Filippo; Aro, Michael; Jalali Farahani, Sahar; Cosgarea, Andrew; Wilckens, John; Carrino, John A
2016-01-01
To retrospectively assess the qualitative and quantitative high-resolution 3-T magnetic resonance imaging (MRI) findings in asymptomatic baseball pitcher draft picks. Institutional review board-approved and HIPAA-compliant study. Three-Tesla MRI of the dominant shoulder of 19 asymptomatic baseball pitcher draft picks and a detailed clinical examination were performed before contract signing. Two radiologists independently performed qualitative and quantitative evaluation of shoulder structures. Descriptive statistics were performed. Sixty-eight percent (13/19), 32% (6/19), and 21% (4/19) of the baseball pitcher draft picks showed tendinopathy, partial-thickness tendon tear of the supraspinatus, and acromioclavicular joint osteoarthritis, respectively. Glenohumeral subluxation, glenoid remodeling, and Bennett lesion were present in 53% (10/19), 79% (15/19), and 21% (4/19) of the subjects, respectively. The intraclass correlation coefficient ranged from 0.633 to 0.863 and κ from 0.27 to 1. In asymptomatic baseball pitcher draft picks, 3-T MRI frequently shows abnormalities involving the rotator cuff tendons, the coracohumeral and inferior glenohumeral ligaments, the labrum, and osseous structures.
Uncertainty Analysis of Radar and Gauge Rainfall Estimates in the Russian River Basin
NASA Astrophysics Data System (ADS)
Cifelli, R.; Chen, H.; Willie, D.; Reynolds, D.; Campbell, C.; Sukovich, E.
2013-12-01
Radar Quantitative Precipitation Estimation (QPE) has been a very important application of weather radar since it was introduced and made widely available after World War II. Although great progress has been made over the last two decades, it is still a challenging process especially in regions of complex terrain such as the western U.S. It is also extremely difficult to make direct use of radar precipitation data in quantitative hydrologic forecasting models. To improve the understanding of rainfall estimation and distributions in the NOAA Hydrometeorology Testbed in northern California (HMT-West), extensive evaluation of radar and gauge QPE products has been performed using a set of independent rain gauge data. This study focuses on the rainfall evaluation in the Russian River Basin. The statistical properties of the different gridded QPE products will be compared quantitatively. The main emphasis of this study will be on the analysis of uncertainties of the radar and gauge rainfall products that are subject to various sources of error. The spatial variation analysis of the radar estimates is performed by measuring the statistical distribution of the radar base data such as reflectivity and by the comparison with a rain gauge cluster. The application of mean field bias values to the radar rainfall data will also be described. The uncertainty analysis of the gauge rainfall will be focused on the comparison of traditional kriging and conditional bias penalized kriging (Seo 2012) methods. This comparison is performed with the retrospective Multisensor Precipitation Estimator (MPE) system installed at the NOAA Earth System Research Laboratory. The independent gauge set will again be used as the verification tool for the newly generated rainfall products.
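A minimal sketch of a multiplicative mean field bias adjustment of the kind mentioned above, assuming the bias is defined as the ratio of summed gauge to summed radar accumulations at collocated points; the accumulation values are hypothetical.

```python
import numpy as np

def mean_field_bias(gauge, radar):
    """Multiplicative mean field bias from collocated gauge/radar accumulation
    pairs: the ratio of summed gauge totals to summed radar totals."""
    gauge, radar = np.asarray(gauge, float), np.asarray(radar, float)
    return gauge.sum() / radar.sum()

# Hypothetical 6-h accumulations (mm) at gauge locations and the radar pixels above them
gauge = [12.4, 8.1, 15.0, 6.3, 9.7]
radar = [9.8, 6.5, 12.1, 5.0, 8.2]
mfb = mean_field_bias(gauge, radar)
corrected_field = mfb * np.array(radar)   # bias adjustment applied to the radar estimates
print(f"mean field bias = {mfb:.2f}")
```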
Zang, Qingce; Gao, Yang; Huang, Luojiao; He, Jiuming; Lin, Sheng; Jin, Hongtao; Zhang, Ruiping; Abliz, Zeper
2018-03-01
With the rapid development and wide application of traditional Chinese medicine injection (TCMI), a number of adverse events associated with some TCMIs have been repeatedly reported and have drawn broad attention in recent years. Establishing effective and practical analytical methods for the safety evaluation and quality control of TCMIs can help to improve the safety of TCMIs in clinical applications. In this study, a sensitive and rapid high-performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS) method has been developed and validated for the quantitative determination of the potentially harmful substance 5,5'-oxydimethylenebis(2-furfural) (OMBF) in TCMI samples. Chromatographic separation was performed on a C18 reversed-phase column (150 mm × 2.1 mm, 5 µm) by gradient elution, using methanol-water containing 0.1% formic acid as the mobile phase at a flow rate of 0.3 mL/min. MS/MS detection was performed on a triple quadrupole mass spectrometer with positive electrospray ionization in the multiple reaction monitoring mode. The method was sensitive, with a limit of quantification of 0.3 ng/mL, and linear over the range of 0.3-30 ng/mL (r = 0.9998). Intra- and inter-day precision for the analyte was <9.52% RSD, with recoveries in the range 88.0-109.67% at three concentration levels. The validated method was successfully applied to quantitatively determine the compound OMBF in TCMIs and glucose injections. Our study indicates that this method is simple, sensitive, practicable and reliable, and could be applied for the safety evaluation and quality control of TCMIs and glucose injections.
Blind Source Parameters for Performance Evaluation of Despeckling Filters.
Biradar, Nagashettappa; Dewal, M L; Rohit, ManojKumar; Gowre, Sanjaykumar; Gundge, Yogesh
2016-01-01
The speckle noise is inherent to transthoracic echocardiographic images. A standard noise-free reference echocardiographic image does not exist. The evaluation of filters based on the traditional parameters such as peak signal-to-noise ratio, mean square error, and structural similarity index may not reflect the true filter performance on echocardiographic images. Therefore, the performance of despeckling can be evaluated using blind assessment metrics like the speckle suppression index, speckle suppression and mean preservation index (SMPI), and beta metric. The need for noise-free reference image is overcome using these three parameters. This paper presents a comprehensive analysis and evaluation of eleven types of despeckling filters for echocardiographic images in terms of blind and traditional performance parameters along with clinical validation. The noise is effectively suppressed using the logarithmic neighborhood shrinkage (NeighShrink) embedded with Stein's unbiased risk estimation (SURE). The SMPI is three times more effective compared to the wavelet based generalized likelihood estimation approach. The quantitative evaluation and clinical validation reveal that the filters such as the nonlocal mean, posterior sampling based Bayesian estimation, hybrid median, and probabilistic patch based filters are acceptable whereas median, anisotropic diffusion, fuzzy, and Ripplet nonlinear approximation filters have limited applications for echocardiographic images.
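A small sketch of one of the blind metrics, assuming the speckle suppression index is the ratio of the filtered image's coefficient of variation to the original's (values below 1 indicating suppression); the test image and the simple mean filter are illustrative stand-ins, not the filters evaluated in the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_suppression_index(original, filtered):
    """SSI: coefficient of variation of the filtered image divided by that of
    the original; values below 1 indicate stronger speckle suppression."""
    original, filtered = np.asarray(original, float), np.asarray(filtered, float)
    cv_orig = original.std() / original.mean()
    cv_filt = filtered.std() / filtered.mean()
    return cv_filt / cv_orig

# Hypothetical demonstration: multiplicative speckle on a constant patch,
# smoothed with a simple 3x3 mean filter.
rng = np.random.default_rng(0)
clean = np.full((64, 64), 100.0)
noisy = clean * rng.gamma(shape=4.0, scale=0.25, size=clean.shape)
smoothed = uniform_filter(noisy, size=3)
print(f"SSI = {speckle_suppression_index(noisy, smoothed):.2f}")
```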
Kessler, Larry G; Barnhart, Huiman X; Buckler, Andrew J; Choudhury, Kingshuk Roy; Kondratovich, Marina V; Toledano, Alicia; Guimaraes, Alexander R; Filice, Ross; Zhang, Zheng; Sullivan, Daniel C
2015-02-01
The development and implementation of quantitative imaging biomarkers has been hampered by the inconsistent and often incorrect use of terminology related to these markers. Sponsored by the Radiological Society of North America, an interdisciplinary group of radiologists, statisticians, physicists, and other researchers worked to develop a comprehensive terminology to serve as a foundation for quantitative imaging biomarker claims. Where possible, this working group adapted existing definitions derived from national or international standards bodies rather than invent new definitions for these terms. This terminology also serves as a foundation for the design of studies that evaluate the technical performance of quantitative imaging biomarkers and for studies of algorithms that generate the quantitative imaging biomarkers from clinical scans. This paper provides examples of research studies and quantitative imaging biomarker claims that use terminology consistent with these definitions as well as examples of the rampant confusion in this emerging field. We provide recommendations for appropriate use of quantitative imaging biomarker terminological concepts. It is hoped that this document will assist researchers and regulatory reviewers who examine quantitative imaging biomarkers and will also inform regulatory guidance. More consistent and correct use of terminology could advance regulatory science, improve clinical research, and provide better care for patients who undergo imaging studies. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
Houts, Carrie R; Edwards, Michael C; Wirth, R J; Deal, Linda S
2016-11-01
There has been a notable increase in the advocacy of using small-sample designs as an initial quantitative assessment of item and scale performance during the scale development process. This is particularly true in the development of clinical outcome assessments (COAs), where Rasch analysis has been advanced as an appropriate statistical tool for evaluating the developing COAs using a small sample. We review the benefits such methods are purported to offer from both a practical and statistical standpoint and detail several problematic areas, including both practical and statistical theory concerns, with respect to the use of quantitative methods, including Rasch-consistent methods, with small samples. The feasibility of obtaining accurate information and the potential negative impacts of misusing large-sample statistical methods with small samples during COA development are discussed.
Quantitative analysis of microtubule orientation in interdigitated leaf pavement cells.
Akita, Kae; Higaki, Takumi; Kutsuna, Natsumaro; Hasezawa, Seiichiro
2015-01-01
Leaf pavement cells are shaped like a jigsaw puzzle in most dicotyledon species. Molecular genetic studies have identified several genes required for pavement cells morphogenesis and proposed that microtubules play crucial roles in the interdigitation of pavement cells. In this study, we performed quantitative analysis of cortical microtubule orientation in leaf pavement cells in Arabidopsis thaliana. We captured confocal images of cortical microtubules in cotyledon leaf epidermis expressing GFP-tubulinβ and quantitatively evaluated the microtubule orientations relative to the pavement cell growth axis using original image processing techniques. Our results showed that microtubules kept parallel orientations to the growth axis during pavement cell growth. In addition, we showed that immersion treatment of seed cotyledons in solutions containing tubulin polymerization and depolymerization inhibitors decreased pavement cell complexity. Treatment with oryzalin and colchicine inhibited the symmetric division of guard mother cells.
Evaluating biomarkers for prognostic enrichment of clinical trials.
Kerr, Kathleen F; Roth, Jeremy; Zhu, Kehao; Thiessen-Philbrook, Heather; Meisner, Allison; Wilson, Francis Perry; Coca, Steven; Parikh, Chirag R
2017-12-01
A potential use of biomarkers is to assist in prognostic enrichment of clinical trials, where only patients at relatively higher risk for an outcome of interest are eligible for the trial. We investigated methods for evaluating biomarkers for prognostic enrichment. We identified five key considerations when evaluating a biomarker and a screening threshold for prognostic enrichment: (1) clinical trial sample size, (2) calendar time to enroll the trial, (3) total patient screening costs and total per-patient trial costs, (4) generalizability of trial results, and (5) ethical evaluation of trial eligibility criteria. Items (1)-(3) are amenable to quantitative analysis. We developed the Biomarker Prognostic Enrichment Tool for evaluating biomarkers for prognostic enrichment at varying levels of screening stringency. We demonstrate that both modestly prognostic and strongly prognostic biomarkers can improve trial metrics using the Biomarker Prognostic Enrichment Tool. The Biomarker Prognostic Enrichment Tool is available as a webtool at http://prognosticenrichment.com and as a package for the R statistical computing platform. In some clinical settings, even biomarkers with modest prognostic performance can be useful for prognostic enrichment. In addition to the quantitative analysis provided by the Biomarker Prognostic Enrichment Tool, investigators must consider the generalizability of trial results and evaluate the ethics of trial eligibility criteria.
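A rough sketch of consideration (1), trial sample size, under enrichment: a higher event rate among biomarker-selected patients shrinks the required sample size while increasing the number screened per enrolled patient. The two-proportion formula, event rates, effect size and screening fraction below are illustrative assumptions, not outputs of the Biomarker Prognostic Enrichment Tool.

```python
from scipy.stats import norm

def n_per_arm(p_control, rel_risk_reduction, alpha=0.05, power=0.8):
    """Two-proportion sample size per arm (normal approximation)."""
    p_treat = p_control * (1 - rel_risk_reduction)
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    num = (z_a + z_b) ** 2 * (p_control * (1 - p_control) + p_treat * (1 - p_treat))
    return num / (p_control - p_treat) ** 2

# Hypothetical trial: 25% relative risk reduction for the outcome.
# Unenriched event rate 10%; biomarker-enriched subgroup (top 30% of risk) event rate 20%.
n_all = n_per_arm(0.10, 0.25)
n_enr = n_per_arm(0.20, 0.25)
screened_per_enrolled = 1 / 0.30    # patients screened for every eligible patient
print(f"unenriched: {n_all:.0f}/arm; enriched: {n_enr:.0f}/arm "
      f"but ~{screened_per_enrolled:.1f} screened per enrolled patient")
```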
Pinches, Mark D G; Helps, Christopher R; Gruffydd-Jones, Tim J; Egan, Kathy; Jarrett, Oswald; Tasker, Séverine
2007-02-01
In this paper the design and use of a semi-quantitative real-time polymerase chain reaction assay (RT-PCR) for feline leukaemia virus (FeLV) provirus is described. Its performance is evaluated against established methods of FeLV diagnosis, including virus isolation and enzyme-linked immunoassay (ELISA) in a population of naturally infected cats. The RT-PCR assay is found to have both a high sensitivity (0.92) and specificity (0.99) when examined by expectation maximisation methods and is also able to detect a large number of cats with low FeLV proviral loads that were negative by other conventional test methods.
Investment appraisal using quantitative risk analysis.
Johansson, Henrik
2002-07-01
Investment appraisal concerned with investments in fire safety systems is discussed. Particular attention is directed at evaluating, in terms of the Bayesian decision theory, the risk reduction that investment in a fire safety system involves. It is shown how the monetary value of the change from a building design without any specific fire protection system to one including such a system can be estimated by use of quantitative risk analysis, the results of which are expressed in terms of a Risk-adjusted net present value. This represents the intrinsic monetary value of investing in the fire safety system. The method suggested is exemplified by a case study performed in an Avesta Sheffield factory.
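A minimal sketch of a risk-adjusted net present value calculation of the kind described, assuming the quantitative risk analysis yields an expected annual fire-loss reduction that is discounted over the system's life; the monetary figures, horizon and discount rate are hypothetical.

```python
def risk_adjusted_npv(investment, annual_risk_reduction, years, discount_rate):
    """Risk-adjusted net present value: the discounted stream of expected annual
    fire-loss reductions (from a quantitative risk analysis) minus the investment."""
    pv = sum(annual_risk_reduction / (1 + discount_rate) ** t
             for t in range(1, years + 1))
    return pv - investment

# Hypothetical sprinkler investment: expected annual loss falls from 60 to 18 kEUR.
print(risk_adjusted_npv(investment=250.0, annual_risk_reduction=42.0,
                        years=15, discount_rate=0.05))
```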
Ma, Gao; Xu, Xiao-Quan; Hu, Hao; Su, Guo-Yi; Shen, Jie; Shi, Hai-Bin; Wu, Fei-Yun
2018-01-01
To compare the diagnostic performance of readout-segmented echo-planar imaging (RS-EPI)-based diffusion kurtosis imaging (DKI) and that of diffusion-weighted imaging (DWI) for differentiating malignant from benign masses in the head and neck region. Between December 2014 and April 2016, we retrospectively enrolled 72 consecutive patients with head and neck masses who had undergone an RS-EPI-based DKI scan (b-values of 0, 500, 1000, and 1500 s/mm²) for pretreatment evaluation. Imaging data were post-processed using monoexponential and diffusion kurtosis (DK) models for quantitation of the apparent diffusion coefficient (ADC), the apparent diffusion for a Gaussian distribution (Dapp), and the apparent kurtosis coefficient (Kapp). Unpaired t tests and Mann-Whitney U tests were used to compare differences in quantitative parameters between the malignant and benign groups. Receiver operating characteristic curve analyses were performed to determine and compare the diagnostic ability of the quantitative parameters in predicting malignancy. The malignant group demonstrated significantly lower ADC (0.754 ± 0.167 vs. 1.222 ± 0.420, p < 0.001) and Dapp (1.029 ± 0.226 vs. 1.640 ± 0.445, p < 0.001) and higher Kapp (1.344 ± 0.309 vs. 0.715 ± 0.249, p < 0.001) than the benign group. Using a combination of Dapp and Kapp as the diagnostic index, significantly better differentiating performance was achieved than using ADC alone (area under the curve: 0.956 vs. 0.876, p = 0.042). Compared to DWI, DKI could provide additional data related to tumor heterogeneity with significantly better differentiating performance. Its derived quantitative metrics could serve as a promising imaging biomarker for differentiating malignant from benign masses in the head and neck region.
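A sketch of how the two signal models could be fitted at the study's b-values, assuming the standard DKI form S(b) = S0·exp(-b·Dapp + b²·Dapp²·Kapp/6) and a monoexponential ADC; the voxel signal values are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def dki_signal(b, S0, D, K):
    """Diffusion kurtosis signal model: S(b) = S0 * exp(-b*D + (1/6)*b^2*D^2*K)."""
    return S0 * np.exp(-b * D + (b ** 2) * (D ** 2) * K / 6.0)

# Hypothetical voxel signals at the study's b-values (0, 500, 1000, 1500 s/mm^2)
b = np.array([0.0, 500.0, 1000.0, 1500.0])
S = np.array([1000.0, 620.0, 420.0, 310.0])

(p_S0, D_app, K_app), _ = curve_fit(dki_signal, b, S,
                                    p0=[S[0], 1.0e-3, 1.0],
                                    bounds=([0, 1e-5, 0], [np.inf, 4e-3, 5]))
adc = np.log(S[0] / S[2]) / b[2]          # monoexponential ADC from b = 0 and 1000
print(f"ADC = {adc:.2e} mm^2/s, Dapp = {D_app:.2e} mm^2/s, Kapp = {K_app:.2f}")
```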
Embedded Resistors and Capacitors in Organic and Inorganic Substrates
NASA Technical Reports Server (NTRS)
Gerke, Robert David; Ator, Danielle
2006-01-01
Embedded resistors and capacitors were purchased in two technologies: organic printed wiring board (PWB) and inorganic low-temperature co-fired ceramic (LTCC). Small groups of each substrate were exposed to four environmental tests and several characterization tests to evaluate their performance and reliability. Even though all passive components maintained electrical performance throughout environmental testing, differences between the two technologies were observed. Environmental testing was taken beyond the manufacturers' reported testing, but generally not taken to failure. When possible, data were quantitatively compared to the manufacturers' data.
Reduction of bias and variance for evaluation of computer-aided diagnostic schemes.
Li, Qiang; Doi, Kunio
2006-04-01
Computer-aided diagnostic (CAD) schemes have been developed to assist radiologists in detecting various lesions in medical images. In addition to the development, an equally important problem is the reliable evaluation of the performance levels of various CAD schemes. It is good to see that more and more investigators are employing more reliable evaluation methods such as leave-one-out and cross validation, instead of less reliable methods such as resubstitution, for assessing their CAD schemes. However, the common applications of leave-one-out and cross-validation evaluation methods do not necessarily imply that the estimated performance levels are accurate and precise. Pitfalls often occur in the use of leave-one-out and cross-validation evaluation methods, and they lead to unreliable estimation of performance levels. In this study, we first identified a number of typical pitfalls for the evaluation of CAD schemes, and conducted a Monte Carlo simulation experiment for each of the pitfalls to demonstrate quantitatively the extent of bias and/or variance caused by the pitfall. Our experimental results indicate that considerable bias and variance may exist in the estimated performance levels of CAD schemes if one employs various flawed leave-one-out and cross-validation evaluation methods. In addition, for promoting and utilizing a high standard for reliable evaluation of CAD schemes, we attempt to make recommendations, whenever possible, for overcoming these pitfalls. We believe that, with the recommended evaluation methods, we can considerably reduce the bias and variance in the estimated performance levels of CAD schemes.
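A small Monte Carlo illustration in the spirit of the study (not a reproduction of its experiments): with pure-noise features, resubstitution grossly overestimates classification accuracy while leave-one-out stays near chance. The classifier, dimensions and trial count are arbitrary choices.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(42)
n_cases, n_features, n_trials = 40, 20, 20
resub, loo = [], []

for _ in range(n_trials):
    # Pure-noise features: the true discrimination accuracy is 0.5.
    X = rng.normal(size=(n_cases, n_features))
    y = np.repeat([0, 1], n_cases // 2)
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    resub.append(clf.score(X, y))                                   # resubstitution
    loo.append(cross_val_score(LogisticRegression(max_iter=1000),
                               X, y, cv=LeaveOneOut()).mean())      # leave-one-out

print(f"resubstitution accuracy ~ {np.mean(resub):.2f} (optimistically biased)")
print(f"leave-one-out accuracy  ~ {np.mean(loo):.2f} (near the chance level 0.50)")
```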
OTEC modular experiment cold water pipe concept evaluation. Volume III. Appendices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1979-04-01
The Cold Water Pipe System Design Study was undertaken to evaluate the diverse CWP concepts, recommend the most viable alternatives for a 1984 deployment of the 10 to 40 MWe MEP, and carry out preliminary designs of three concepts. The concept evaluation phase reported involved a systems analysis of design alternatives in the broad categories of rigid walled (with hinges), compliant walled, stockade and bottom mounted buoyant. Quantitative evaluations were made of concept performance, availability, deployment schedule, technical feasibility and cost. CWP concepts were analyzed to determine if they met or could be made to meet established system requirements and could be deployed by 1984. Fabrication, construction and installation plans were developed for successful concepts, and costs were determined in a WBS format. Evaluations were performed on the basis of technical and cost risk. This volume includes the following appendices: (A) materials and associated design criteria; (B) summary of results of dynamic flow and transportation analysis; (C) CWP sizing analysis; (D) CWP thermal performance; and (E) investigation of the APL/ABAM CWP design. (WHK)
Gómez-Bruton, A; Gonzalez-Agüero, A; Casajus, J A; Vicente-Rodriguez, German
2014-08-01
Environmental factors such as exercise participation and nutrition have often been linked to bone improvements. However, not all sports have the same effects: non-osteogenic sports such as swimming have been described by some authors as negative or neutral for bone mass, and the exercise-diet interaction in specific groups is still not clear. To present the methodology of the RENACIMIENTO project, which aims to evaluate body composition, and more specifically bone mass, by several techniques in adolescent swimmers and to observe the effects and perdurability of whole-body vibration (WBV) and a jumping intervention (JIN) on body composition and fitness in this population, as well as to explore possible diet interactions. Randomized controlled trial. 78 swimmers (12-17 y) and 26 sex- and age-matched controls will participate in this study. Dual-energy X-ray absorptiometry, peripheral quantitative computed tomography, quantitative ultrasound, bioelectrical impedance analysis, and anthropometry measurements will be performed in order to evaluate body composition. Physical activity, nutrition, pubertal development and socio-economic status may act as confounders of body composition and will therefore also be registered. Several fitness factors regarding strength, endurance, performance and others will also be registered to evaluate differences with controls and act as confounders. A 7-month WBV therapy will be performed by 26 swimmers, consisting of training of 15 minutes 3 times per week. An 8-month JIN will also be performed by 26 swimmers 3 times per week. The remaining 26 swimmers will continue their normal swimming training. Four evaluations will be performed: the first to describe differences between swimmers and controls, the second to describe the effects of the interventions, and the third and fourth to describe the perdurability of the effects of the WBV and JIN. The RENACIMIENTO project will make it possible to answer several questions regarding the body composition, fitness, bone mass and diet interaction of adolescent swimmers, to describe swimming as a positive, negative or neutral sport to practice regarding these parameters, and to elucidate the effects and perdurability of WBV and JIN on body composition. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.
Analytic Guided-Search Model of Human Performance Accuracy in Target-Localization Search Tasks
NASA Technical Reports Server (NTRS)
Eckstein, Miguel P.; Beutter, Brent R.; Stone, Leland S.
2000-01-01
Current models of human visual search have extended the traditional serial/parallel search dichotomy. Two successful models for predicting human visual search are the Guided Search model and the Signal Detection Theory model. Although these models are inherently different, it has been difficult to compare them because the Guided Search model is designed to predict response time, while Signal Detection Theory models are designed to predict performance accuracy. Moreover, current implementations of the Guided Search model require the use of Monte-Carlo simulations, a method that makes fitting the model's performance quantitatively to human data more computationally time consuming. We have extended the Guided Search model to predict human accuracy in target-localization search tasks. We have also developed analytic expressions that simplify simulation of the model to the evaluation of a small set of equations using only three free parameters. This new implementation and extension of the Guided Search model will enable direct quantitative comparisons with human performance in target-localization search experiments and with the predictions of Signal Detection Theory and other search accuracy models.
The use of a tracking test battery in the quantitative evaluation of neurological function
NASA Technical Reports Server (NTRS)
Repa, B. S.
1973-01-01
A number of tracking tasks that have proven useful to control engineers and psychologists measuring skilled performance have been evaluated for clinical use. Normal subjects as well as patients with previous diagnoses of Parkinson's disease, multiple sclerosis, and cerebral palsy were used in the evaluation. The tests that were studied included step tracking, random tracking, and critical tracking. The results of the present experiments encourage the continued use of tracking tasks as assessment procedures in a clinical environment. They have proven to be reliable, valid, and sensitive measures of neurological function.
Sandoval, Ruben M; Garcia-Sanchez, Antonio-Javier; Garcia-Sanchez, Felipe; Garcia-Haro, Joan
2016-12-31
IoT has begun to be employed pervasively in industrial environments and critical infrastructures thanks to its positive impact on performance and efficiency. Among these environments, the Smart Grid (SG) excels as the perfect host for this technology, mainly due to its potential to become the motor of the rest of electrically-dependent infrastructures. To make this SG-oriented IoT cost-effective, most deployments employ unlicensed ISM bands, specifically the 2400 MHz one, due to its extended communication bandwidth in comparison with lower bands. This band has been extensively used for years by Wireless Sensor Networks (WSN) and Mobile Ad-hoc Networks (MANET), from which the IoT technologically inherits. However, this work questions and evaluates the suitability of such a "default" communication band in SG environments, compared with the 915 MHz ISM band. A comprehensive quantitative comparison of these bands has been accomplished in terms of: power consumption, average network delay, and packet reception rate. To allow such a study, a dual-band propagation model specifically designed for the SG has been derived, tested, and incorporated into the well-known TOSSIM simulator. Simulation results reveal that only in the absence of other 2400 MHz interfering devices (such as WiFi or Bluetooth) or in small networks, is the 2400 MHz band the best option. In any other case, SG-oriented IoT quantitatively perform better if operating in the 915 MHz band.
Rapid quantification of plant-powdery mildew interactions by qPCR and conidiospore counts.
Weßling, Ralf; Panstruga, Ralph
2012-08-31
The powdery mildew disease represents a valuable patho-system to study the interaction between plant hosts and obligate biotrophic fungal pathogens. Numerous discoveries have been made on the basis of the quantitative evaluation of plant-powdery mildew interactions, especially in the context of hyper-susceptible and/or resistant plant mutants. However, the presently available methods to score the pathogenic success of powdery mildew fungi are laborious and thus not well suited for medium- to high-throughput analysis. Here we present two new protocols that allow the rapid quantitative assessment of powdery mildew disease development. One procedure depends on quantitative polymerase chain reaction (qPCR)-based evaluation of fungal biomass, while the other relies on the quantification of fungal conidiospores. We validated both techniques using the powdery mildew pathogen Golovinomyces orontii on a set of hyper-susceptible and resistant Arabidopsis thaliana mutants and found that both cover a wide dynamic range of one to two (qPCR) and four to five (quantification of conidia) orders of magnitude, respectively. The two approaches yield reproducible results and are easy to perform without specialized equipment. The qPCR and spore count assays rapidly and reproducibly quantify powdery mildew pathogenesis. Our methods are performed at later stages of infection and discern mutant phenotypes accurately. The assays therefore complement currently used procedures of powdery mildew quantification and can overcome some of their limitations. In addition, they can easily be adapted to other plant-powdery mildew patho-systems.
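A minimal sketch of the kind of qPCR-based biomass estimate described above, assuming the standard 2^-ΔΔCt calculation with a plant reference gene and near-100% primer efficiencies. The Ct values are hypothetical, and the published protocol may normalize differently.

```python
def relative_fungal_biomass(ct_fungus_sample: float, ct_plant_sample: float,
                            ct_fungus_ref: float, ct_plant_ref: float) -> float:
    """Fungal biomass in a sample relative to a reference genotype,
    using the 2^-ddCt method with a plant gene as normalizer."""
    d_ct_sample = ct_fungus_sample - ct_plant_sample
    d_ct_ref = ct_fungus_ref - ct_plant_ref
    return 2.0 ** -(d_ct_sample - d_ct_ref)

# Example: hyper-susceptible mutant vs. wild type (hypothetical Ct values)
print(round(relative_fungal_biomass(18.2, 18.0, 21.5, 18.1), 2))
```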
A method for normalizing pathology images to improve feature extraction for quantitative pathology.
Tam, Allison; Barker, Jocelyn; Rubin, Daniel
2016-01-01
With the advent of digital slide scanning technologies and the potential proliferation of large repositories of digital pathology images, many research studies can leverage these data for biomedical discovery and to develop clinical applications. However, quantitative analysis of digital pathology images is impeded by batch effects generated by varied staining protocols and staining conditions of pathological slides. To overcome this problem, this paper proposes a novel, fully automated stain normalization method to reduce batch effects and thus aid research in digital pathology applications. The proposed method, intensity centering and histogram equalization (ICHE), normalizes a diverse set of pathology images by first scaling the centroids of the intensity histograms to a common point and then applying a modified version of contrast-limited adaptive histogram equalization. Normalization was performed on two datasets of digitized hematoxylin and eosin (H&E) slides of different tissue slices from the same lung tumor, and one immunohistochemistry dataset of digitized slides created by restaining one of the H&E datasets. The ICHE method was evaluated based on image intensity values, quantitative features, and the effect on downstream applications, such as computer-aided diagnosis. For comparison, three methods from the literature were reimplemented and evaluated using the same criteria. The authors found that ICHE not only improved performance compared with un-normalized images, but in most cases showed improvement compared with previous methods for correcting batch effects in the literature. ICHE may be a useful preprocessing step in a digital pathology image processing pipeline.
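The following is a loose sketch of the two stages named by the ICHE acronym (intensity centering, then adaptive histogram equalization) for a single grayscale channel, using scikit-image's standard CLAHE rather than the authors' modified version. The target centroid value is an assumption.

```python
import numpy as np
from skimage import exposure

def center_and_equalize(image: np.ndarray, target_centroid: float = 0.5) -> np.ndarray:
    """Shift the image intensity histogram so its centroid matches a common
    target, then apply contrast-limited adaptive histogram equalization."""
    img = image.astype(np.float64)
    img = (img - img.min()) / (img.max() - img.min() + 1e-12)    # rescale to [0, 1]
    centroid = img.mean()                                        # histogram centroid
    img = np.clip(img + (target_centroid - centroid), 0.0, 1.0)  # intensity centering
    return exposure.equalize_adapthist(img, clip_limit=0.01)     # CLAHE
```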
Taira, Chiaki; Matsuda, Kazuyuki; Yamaguchi, Akemi; Uehara, Masayuki; Sugano, Mitsutoshi; Okumura, Nobuo; Honda, Takayuki
2015-05-20
Chimerism analysis is important for the evaluation of engraftment and predicting relapse following hematopoietic stem cell transplantation (HSCT). We developed a chimerism analysis for single nucleotide polymorphisms (SNPs), including rapid screening of the discriminable donor/recipient alleles using droplet allele-specific PCR (droplet-AS-PCR) pre-HSCT and quantitation of recipient DNA using AS-quantitative PCR (AS-qPCR) following HSCT. SNP genotyping of 20 donor/recipient pairs via droplet-AS-PCR and the evaluation of the informativity of 5 SNP markers for chimerism analysis were performed. Samples from six follow-up patients were analyzed to assess chimerism via AS-qPCR. These results were compared with those determined by short tandem repeat PCR (STR-PCR). Droplet-AS-PCR could determine genotypes within 8 min. The total informativity using all 5 loci was 95% (19/20). AS-qPCR provided the percentage of recipient DNA in all 6 follow-up patients without the influence of stutter peaks or amplification efficiency, which affected the STR-PCR results. The droplet-AS-PCR had an advantage over STR-PCR in terms of rapidity and simplicity for screening before HSCT. Furthermore, AS-qPCR had better accuracy than STR-PCR for quantification of recipient DNA following HSCT. The present chimerism assay compensates for the disadvantages of STR-PCR and is readily performable in clinical laboratories. Copyright © 2015 Elsevier B.V. All rights reserved.
Rautiainen, Jari; Nissi, Mikko J.; Salo, Elli-Noora; Tiitu, Virpi; Finnilä, Mikko A.J.; Aho, Olli-Matti; Saarakkala, Simo; Lehenkari, Petri; Ellermann, Jutta; Nieminen, Miika T.
2014-01-01
Purpose: To evaluate the sensitivity of quantitative MRI techniques (T1, T1,Gd, T2, continuous wave (CW) T1ρ dispersion, adiabatic T1ρ, adiabatic T2ρ, RAFF and inversion-prepared magnetization transfer (MT)) for assessment of human articular cartilage with varying degrees of natural degeneration. Methods: Osteochondral samples (n = 14) were obtained from the tibial plateaus of patients undergoing total knee replacement. MRI of the specimens was performed at 9.4 T and the relaxation time maps were evaluated in the cartilage zones. For reference, quantitative histology, OARSI grading and biomechanical measurements were performed and correlated with MRI findings. Results: All MRI parameters, except T1,Gd, showed statistically significant differences in tangential and full-thickness ROIs between early and advanced osteoarthritis (OA) groups, as classified by OARSI grading. CW-T1ρ showed significant dispersion in all ROIs and featured the classical laminar structure of cartilage with spin-lock powers below 1000 Hz. Adiabatic T1ρ, T2ρ, CW-T1ρ, MT and RAFF correlated strongly with OARSI grade and biomechanical parameters. Conclusion: MRI parameters were able to differentiate between early and advanced OA. Furthermore, rotating frame methods, namely adiabatic T1ρ, adiabatic T2ρ, CW-T1ρ and RAFF, as well as the MT experiment, correlated strongly with biomechanical parameters and OARSI grade, suggesting high sensitivity of the parameters for cartilage degeneration. PMID:25104181
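As a generic illustration of one of the relaxation-time maps used in such studies, the sketch below fits a mono-exponential T2 decay to multi-echo signals. The echo times and noise level are made up, and the rotating-frame parameters in the study (adiabatic T1ρ, T2ρ, RAFF) require different acquisition and fitting models.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_t2(echo_times_ms: np.ndarray, signal: np.ndarray) -> float:
    """Fit S(TE) = S0 * exp(-TE / T2) to a multi-echo decay; return T2 in ms."""
    model = lambda te, s0, t2: s0 * np.exp(-te / t2)
    p0 = (signal[0], echo_times_ms[len(echo_times_ms) // 2])   # crude initial guess
    (s0, t2), _ = curve_fit(model, echo_times_ms, signal, p0=p0)
    return t2

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    te = np.array([10.0, 20.0, 40.0, 80.0])                    # hypothetical echo times
    s = 1000.0 * np.exp(-te / 35.0) + rng.normal(0, 5, te.size)
    print(f"estimated T2 = {fit_t2(te, s):.1f} ms")
```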
Quantitative evaluation of toothbrush and arm-joint motion during tooth brushing.
Inada, Emi; Saitoh, Issei; Yu, Yong; Tomiyama, Daisuke; Murakami, Daisuke; Takemoto, Yoshihiko; Morizono, Ken; Iwasaki, Tomonori; Iwase, Yoko; Yamasaki, Youichi
2015-07-01
It is very difficult for dental professionals to objectively assess the tooth brushing skill of patients, because an obvious index to assess the brushing motion of patients has not been established. The purpose of this study was to quantitatively evaluate toothbrush and arm-joint motion during tooth brushing. Tooth brushing motion, performed by dental hygienists for 15 s, was captured using a motion-capture system that continuously calculates the three-dimensional coordinates of an object's motion relative to the floor. The dental hygienists performed the tooth brushing on the buccal and palatal sides of their right and left upper molars. The frequencies and power spectra of toothbrush motion and joint angles of the shoulder, elbow, and wrist were calculated and analyzed statistically. The frequency of toothbrush motion was higher on the left side (both buccal and palatal areas) than on the right side. There were no significant differences among joint angle frequencies within each brushing area. The inter- and intra-individual variations of the power spectrum of the elbow flexion angle when brushing were smaller than for any of the other angles. This study quantitatively confirmed that dental hygienists have individual distinctive rhythms during tooth brushing. All arm joints moved synchronously during brushing, and tooth brushing motion was controlled by coordinated movement of the joints. The elbow generated an individual's frequency through a stabilizing movement. The shoulder and wrist control the hand motion, and the elbow generates the cyclic rhythm during tooth brushing.
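A minimal sketch of extracting a dominant brushing frequency from one motion-capture coordinate via its power spectrum. The 100 Hz sampling rate, 15 s duration, and 4 Hz brushing rhythm are illustrative assumptions, not the study's actual processing pipeline.

```python
import numpy as np

def dominant_frequency(position: np.ndarray, sample_rate_hz: float) -> float:
    """Dominant frequency (Hz) of a 1-D motion trace, taken as the peak of
    its power spectrum after removing the mean (DC) component."""
    x = position - position.mean()
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, d=1.0 / sample_rate_hz)
    return freqs[np.argmax(power[1:]) + 1]        # skip the DC bin

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    fs, t = 100.0, np.arange(0, 15, 1 / 100.0)    # 15 s captured at 100 Hz
    trace = 5.0 * np.sin(2 * np.pi * 4.0 * t) + rng.normal(0, 0.5, t.size)
    print(f"dominant brushing frequency = {dominant_frequency(trace, fs):.2f} Hz")
```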
Robust human machine interface based on head movements applied to assistive robotics.
Perez, Elisa; López, Natalia; Orosco, Eugenio; Soria, Carlos; Mut, Vicente; Freire-Bastos, Teodiano
2013-01-01
This paper presents an interface that uses two different sensing techniques and combines both results through a fusion process to obtain the minimum-variance estimator of the orientation of the user's head. The sensing techniques of the interface are based on an inertial sensor and artificial vision. The orientation of the user's head is used to steer the navigation of a robotic wheelchair. Also, a control algorithm for an assistive technology system is presented. The system is evaluated by four individuals with severe motor disabilities, and a quantitative index was developed in order to objectively evaluate the performance. The results obtained are promising since most users could perform the proposed tasks with the robotic wheelchair.
Teuho, Jarmo; Saunavaara, Virva; Tolvanen, Tuula; Tuokkola, Terhi; Karlsson, Antti; Tuisku, Jouni; Teräs, Mika
2017-10-01
In PET, corrections for photon scatter and attenuation are essential for visual and quantitative consistency. MR attenuation correction (MRAC) is generally conducted by image segmentation and assignment of discrete attenuation coefficients, which offer limited accuracy compared with CT attenuation correction. Potential inaccuracies in MRAC may affect scatter correction, because the attenuation image (μ-map) is used in single scatter simulation (SSS) to calculate the scatter estimate. We assessed the impact of MRAC on scatter correction using 2 scatter-correction techniques and 3 μ-maps for MRAC. Methods: The tail-fitted SSS (TF-SSS) and a Monte Carlo-based single scatter simulation (MC-SSS) algorithm implementation on the Philips Ingenuity TF PET/MR were used with 1 CT-based and 2 MR-based μ-maps. Data from 7 subjects were used in the clinical evaluation, and a phantom study using an anatomic brain phantom was conducted. Scatter-correction sinograms were evaluated for each scatter correction method and μ-map. Absolute image quantification was investigated with the phantom data. Quantitative assessment of PET images was performed by volume-of-interest and ratio image analysis. Results: MRAC did not result in large differences in scatter algorithm performance, especially with TF-SSS. Scatter sinograms and scatter fractions did not reveal large differences regardless of the μ-map used. TF-SSS showed slightly higher absolute quantification. The differences in volume-of-interest analysis between TF-SSS and MC-SSS were 3% at maximum in the phantom and 4% in the patient study. Both algorithms showed excellent correlation with each other with no visual differences between PET images. MC-SSS showed a slight dependency on the μ-map used, with a difference of 2% on average and 4% at maximum when a μ-map without bone was used. Conclusion: The effect of different MR-based μ-maps on the performance of scatter correction was minimal in non-time-of-flight 18F-FDG PET/MR brain imaging. The SSS algorithm was not affected significantly by MRAC. The performance of the MC-SSS algorithm is comparable but not superior to that of TF-SSS, warranting further investigations of algorithm optimization and performance with different radiotracers and time-of-flight imaging. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
Rutkove, Seward B; Geisbush, Tom R; Mijailovic, Aleksandar; Shklyar, Irina; Pasternak, Amy; Visyak, Nicole; Wu, Jim S; Zaidman, Craig; Darras, Basil T
2014-07-01
Electrical impedance myography and quantitative ultrasound are two noninvasive, painless, and effort-independent approaches for assessing neuromuscular disease. Both techniques have potential to serve as useful biomarkers in clinical trials in Duchenne muscular dystrophy. However, their comparative sensitivity to disease status and how they relate to one another are unknown. We performed a cross-sectional analysis of electrical impedance myography and quantitative ultrasound in 24 healthy boys and 24 with Duchenne muscular dystrophy, aged 2 to 14 years with trained research assistants performing all measurements. Three upper and three lower extremity muscles were studied unilaterally in each child, and the data averaged for each individual. Both electrical impedance myography and quantitative ultrasound differentiated healthy boys from those with Duchenne muscular dystrophy (P < 0.001 for both). Quantitative ultrasound values correlated with age in Duchenne muscular dystrophy boys (rho = 0.45; P = 0.029), whereas electrical impedance myography did not (rho = -0.31; P = 0.14). However, electrical impedance myography phase correlated with age in healthy boys (rho = 0.51; P = 0.012), whereas quantitative ultrasound did not (rho = -0.021; P = 0.92). In Duchenne muscular dystrophy boys, electrical impedance myography phase correlated with the North Star Ambulatory Assessment (rho = 0.65; P = 0.022); quantitative ultrasound revealed a near-significant association (rho = -0.56; P = 0.060). The two technologies trended toward a moderate correlation with one another in the Duchenne muscular dystrophy cohort but not in the healthy group (rho = -0.40; P = 0.054 and rho = -0.32; P = 0.13, respectively). Electrical impedance myography and quantitative ultrasound are complementary modalities for the assessment of boys with Duchenne muscular dystrophy; further study and application of these two modalities alone or in combination in a longitudinal fashion are warranted. Copyright © 2014 Elsevier Inc. All rights reserved.
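For readers unfamiliar with the statistic reported throughout this abstract, the snippet below computes a Spearman rank correlation on synthetic age/ultrasound values. The numbers are invented and bear no relation to the study data; it only illustrates how the reported rho and p values are obtained.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
age_years = rng.uniform(2, 14, 24)                               # hypothetical cohort ages
qus_echogenicity = 40 + 2.5 * age_years + rng.normal(0, 8, 24)   # hypothetical QUS values

rho, p = spearmanr(age_years, qus_echogenicity)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```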
[Reconstituting evaluation methods based on both qualitative and quantitative paradigms].
Miyata, Hiroaki; Okubo, Suguru; Yoshie, Satoru; Kai, Ichiro
2011-01-01
Debate about the relationship between quantitative and qualitative paradigms is often muddled and confusing, and the clutter of terms and arguments has resulted in the concepts becoming obscure and unrecognizable. In this study we conducted a content analysis of the evaluation methods used in qualitative healthcare research. We extracted descriptions of four types of evaluation paradigm (validity/credibility, reliability/credibility, objectivity/confirmability, and generalizability/transferability), and classified them into subcategories. In quantitative research, there have been many evaluation methods based on qualitative paradigms, and vice versa. Thus, it might not be useful to regard evaluation methods of the qualitative paradigm as isolated from those of quantitative methods. Choosing practical evaluation methods based on the situation and prior conditions of each study is an important approach for researchers.
Deller, Timothy W; Khalighi, Mohammad Mehdi; Jansen, Floris P; Glover, Gary H
2018-01-01
The recent introduction of simultaneous whole-body PET/MR scanners has enabled new research taking advantage of the complementary information obtainable with PET and MRI. One such application is kinetic modeling, which requires high levels of PET quantitative stability. To accomplish the required PET stability levels, the PET subsystem must be sufficiently isolated from the effects of MR activity. Performance measurements have previously been published, demonstrating sufficient PET stability in the presence of MR pulsing for typical clinical use; however, PET stability during radiofrequency (RF)-intensive and gradient-intensive sequences has not previously been evaluated for a clinical whole-body scanner. In this work, PET stability of the GE SIGNA PET/MR was examined during simultaneous scanning of aggressive MR pulse sequences. Methods: PET performance tests were acquired with MR idle and during simultaneous MR pulsing. Recent system improvements mitigating RF interference and gain variation were used. A fast recovery fast spin echo MR sequence was selected for high RF power, and an echo planar imaging sequence was selected for its high heat-inducing gradients. Measurements were performed to determine PET stability under varying MR conditions using the following metrics: sensitivity, scatter fraction, contrast recovery, uniformity, count rate performance, and image quantitation. A final PET quantitative stability assessment for simultaneous PET scanning during functional MRI studies was performed with a spiral in-and-out gradient echo sequence. Results: Quantitation stability of a 68Ge flood phantom was demonstrated within 0.34%. Normalized sensitivity was stable during simultaneous scanning within 0.3%. Scatter fraction measured with a 68Ge line source in the scatter phantom was stable within the range of 40.4%-40.6%. Contrast recovery and uniformity were comparable for PET images acquired simultaneously with multiple MR conditions. Peak noise equivalent count rate was 224 kcps at an effective activity concentration of 18.6 kBq/mL, and the count rate curves and scatter fraction curve were consistent for the alternating MR pulsing states. A final test demonstrated quantitative stability during a spiral functional MRI sequence. Conclusion: PET stability metrics demonstrated that PET quantitation was not affected during simultaneous aggressive MRI. This stability enables demanding applications such as kinetic modeling. © 2018 by the Society of Nuclear Medicine and Molecular Imaging.
Ferreira, Fabiano G; Nouer, Darcy F; Silva, Nelson P; Garbui, Ivana U; Correr-Sobrinho, Lourenço; Nouer, Paulo R A
2014-09-01
The aim of this study was to undertake a qualitative and quantitative evaluation of changes on enamel surfaces after debonding of brackets followed by finishing procedures, using a high-resolution three-dimensional optical profiler and to investigate the accuracy of the technique. The labial surfaces of 36 extracted upper central incisors were examined. Before bonding, the enamel surfaces were subjected to profilometry, recording four amplitude parameters. Brackets were then bonded using two types of light-cured orthodontic adhesive: composite resin and resin-modified glass ionomer cement. Finishing was performed by three different methods: pumice on a rubber cup, fine and ultrafine aluminum oxide discs, and microfine diamond cups followed by silicon carbide brushes. The samples were subsequently re-analyzed by profilometry. Wilcoxon signed-rank test, Kruskal-Wallis test (p < 0.05) and a posteriori Mann-Whitney U test with Bonferroni correction (p < 0.0167) revealed a significant reduction of enamel roughness when diamond cups followed by silicon carbide brushes were used to finish surfaces that had remnants of resin-modified glass ionomer adhesive and when pumice was used to finish surfaces that had traces of composite resin. Enamel loss was minimal. The 3D optical profilometry technique was able to provide accurate qualitative and quantitative assessment of changes on the enamel surface after debonding. Morphological changes in the topography of dental surfaces, especially if related to enamel loss and roughness, are of considerable clinical importance. The quantitative evaluation method used herein enables a more comprehensive understanding of the effects of orthodontic bonding on teeth.
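A sketch of the nonparametric comparison strategy described above (Kruskal-Wallis followed by pairwise Mann-Whitney U tests at a Bonferroni-corrected threshold of 0.05/3 ≈ 0.0167), run on synthetic roughness values. The group means and sample sizes are hypothetical and unrelated to the study's profilometry data.

```python
import numpy as np
from scipy.stats import kruskal, mannwhitneyu

rng = np.random.default_rng(3)
# Hypothetical post-finishing roughness values (Sa, um) for three finishing methods
pumice = rng.normal(0.35, 0.05, 12)
discs = rng.normal(0.30, 0.05, 12)
diamond_brush = rng.normal(0.25, 0.05, 12)

h, p = kruskal(pumice, discs, diamond_brush)
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4f}")

alpha_corrected = 0.05 / 3          # Bonferroni correction for three pairwise tests
pairs = {"pumice vs discs": (pumice, discs),
         "pumice vs diamond": (pumice, diamond_brush),
         "discs vs diamond": (discs, diamond_brush)}
for name, (a, b) in pairs.items():
    u, p_pair = mannwhitneyu(a, b, alternative="two-sided")
    print(f"{name}: U = {u:.0f}, p = {p_pair:.4f}, "
          f"significant = {p_pair < alpha_corrected}")
```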
Engineered Barrier System performance requirements systems study report. Revision 02
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balady, M.A.
This study evaluates the current design concept for the Engineered Barrier System (EBS), in concert with the current understanding of the geologic setting, to assess whether enhancements to the required performance of the EBS are necessary. The performance assessment calculations are performed by coupling the EBS with the geologic setting based on the models (some of which were updated for this study) and assumptions used for the 1995 Total System Performance Assessment (TSPA). The need for enhancements is determined by comparing the performance assessment results against the EBS-related performance requirements. Subsystem quantitative performance requirements related to the EBS include the requirement to allow no more than 1% of the waste packages (WPs) to fail before 1,000 years after permanent closure of the repository, as well as a requirement to control the release rate of radionuclides from the EBS. The EBS performance enhancements considered included additional engineered components as well as evaluating additional performance available from existing design features but for which no performance credit is currently being taken.
2015-01-01
The rapidly expanding availability of high-resolution mass spectrometry has substantially enhanced ion-current-based relative quantification techniques. Despite the increasing interest in ion-current-based methods, quantitative sensitivity, accuracy, and false discovery rate remain the major concerns; consequently, comprehensive evaluation and development in these regards are urgently needed. Here we describe an integrated, new procedure for data normalization and protein ratio estimation, termed ICan, for improved ion-current-based analysis of data generated by high-resolution mass spectrometry (MS). ICan achieved significantly better accuracy and precision, and a lower false-positive rate for discovering altered proteins, than current popular pipelines. A spiked-in experiment was used to evaluate the performance of ICan to detect small changes. In this study E. coli extracts were spiked with moderate-abundance proteins from human plasma (MAP, enriched by the IgY14-SuperMix procedure) at two different levels to set a small change of 1.5-fold. Forty-five (92%, with an average ratio of 1.71 ± 0.13) of 49 identified MAP proteins (i.e., the true positives) and none of the reference proteins (1.0-fold) were determined as significantly altered proteins, with cutoff thresholds of ≥1.3-fold change and p ≤ 0.05. This is the first study to evaluate and prove competitive performance of the ion-current-based approach for assigning significance to proteins with small changes. By comparison, other methods showed remarkably inferior performance. ICan can be broadly applied to reliable and sensitive proteomic surveys of multiple biological samples with the use of high-resolution MS. Moreover, many key features evaluated and optimized here, such as normalization, protein ratio determination, and statistical analyses, are also valuable for data analysis by isotope-labeling methods. PMID:25285707
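The sketch below shows only the final significance-calling step implied by the stated cutoffs (fold change ≥ 1.3 and p ≤ 0.05) applied to per-protein replicate intensities; ICan's own normalization and ratio-estimation procedure is more involved and is not reproduced here. The function name and data layout are assumptions for illustration.

```python
import numpy as np
from scipy.stats import ttest_ind

def altered_proteins(intensities_a: dict, intensities_b: dict,
                     fold_cutoff: float = 1.3, p_cutoff: float = 0.05) -> dict:
    """Flag proteins whose mean intensity ratio between two conditions exceeds the
    fold-change cutoff with a two-sample t-test p-value at or below the p cutoff.
    intensities_a/b map protein name -> array of replicate intensities."""
    hits = {}
    for protein in intensities_a:
        a = np.asarray(intensities_a[protein], dtype=float)
        b = np.asarray(intensities_b[protein], dtype=float)
        ratio = b.mean() / a.mean()
        _, p = ttest_ind(a, b)
        if max(ratio, 1.0 / ratio) >= fold_cutoff and p <= p_cutoff:
            hits[protein] = (ratio, p)
    return hits

# Tiny synthetic example: one spiked protein, one unchanged reference protein
cond_a = {"spiked": [100, 105, 98], "reference": [200, 195, 205]}
cond_b = {"spiked": [150, 160, 155], "reference": [201, 199, 203]}
print(altered_proteins(cond_a, cond_b))
```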
Kuwabara, Cleuza Catsue Takeda; Evora, Yolanda Dora Martinez; de Oliveira, Márcio Mattos Borges
2010-01-01
With the continuous incorporation of health technologies, hospital risk management should be implemented to systematize the monitoring of adverse effects, performing actions to control and eliminate their damage. As part of these actions, Technovigilance is active in the procedures of acquisition, use and quality control of health products and equipment. This study aimed to construct and validate an instrument to evaluate medical-hospital products. This is a quantitative, exploratory, longitudinal and methodological development study, based on the Six Sigma quality management model, which has as its principal basis the component stages of the DMAIC Cycle. For data collection and content validation, the Delphi technique was used with professionals from the Brazilian Sentinel Hospital Network. It was concluded that the instrument developed permitted the evaluation of the product, differentiating between the results of the tested brands, in line with the initial study goal of qualifying the evaluations performed.
A Resource of Quantitative Functional Annotation for Homo sapiens Genes.
Taşan, Murat; Drabkin, Harold J; Beaver, John E; Chua, Hon Nian; Dunham, Julie; Tian, Weidong; Blake, Judith A; Roth, Frederick P
2012-02-01
The body of human genomic and proteomic evidence continues to grow at ever-increasing rates, while annotation efforts struggle to keep pace. A surprisingly small fraction of human genes have clear, documented associations with specific functions, and new functions continue to be found for characterized genes. Here we assembled an integrated collection of diverse genomic and proteomic data for 21,341 human genes and make quantitative associations of each to 4333 Gene Ontology terms. We combined guilt-by-profiling and guilt-by-association approaches to exploit features unique to the data types. Performance was evaluated by cross-validation, prospective validation, and by manual evaluation with the biological literature. Functional-linkage networks were also constructed, and their utility was demonstrated by identifying candidate genes related to a glioma FLN using a seed network from genome-wide association studies. Our annotations are presented, alongside existing validated annotations, in a publicly accessible and searchable web interface.
A quantitative approach to evaluating caring in nursing simulation.
Eggenberger, Terry L; Keller, Kathryn B; Chase, Susan K; Payne, Linda
2012-01-01
This study was designed to test a quantitative method of measuring caring in the simulated environment. Since competency in caring is central to nursing practice, ways of including caring concepts in designing scenarios and in evaluation of performance need to be developed. Coates' Caring Efficacy scales were adapted for simulation and named the Caring Efficacy Scale-Simulation Student Version (CES-SSV) and Caring Efficacy Scale-Simulation Faculty Version (CES-SFV). A correlational study was designed to compare student self-ratings with faculty ratings on caring efficacy during an adult acute simulation experience with traditional and accelerated baccalaureate students in a nursing program grounded in caring theory. Student self-ratings were significantly correlated with objective ratings (r = 0.345, 0.356). Both the CES-SSV and the CES-SFV were found to have excellent internal consistency and significantly correlated interrater reliability. They were useful in measuring caring in the simulated learning environment.
Giger, Maryellen L.; Chan, Heang-Ping; Boone, John
2008-01-01
The roles of physicists in medical imaging have expanded over the years, from the study of imaging systems (sources and detectors) and dose to the assessment of image quality and perception, the development of image processing techniques, and the development of image analysis methods to assist in detection and diagnosis. The latter is a natural extension of medical physicists’ goals in developing imaging techniques to help physicians acquire diagnostic information and improve clinical decisions. Studies indicate that radiologists do not detect all abnormalities on images that are visible on retrospective review, and they do not always correctly characterize abnormalities that are found. Since the 1950s, the potential use of computers had been considered for analysis of radiographic abnormalities. In the mid-1980s, however, medical physicists and radiologists began major research efforts for computer-aided detection or computer-aided diagnosis (CAD), that is, using the computer output as an aid to radiologists—as opposed to a completely automatic computer interpretation—focusing initially on methods for the detection of lesions on chest radiographs and mammograms. Since then, extensive investigations of computerized image analysis for detection or diagnosis of abnormalities in a variety of 2D and 3D medical images have been conducted. The growth of CAD over the past 20 years has been tremendous—from the early days of time-consuming film digitization and CPU-intensive computations on a limited number of cases to its current status in which developed CAD approaches are evaluated rigorously on large clinically relevant databases. CAD research by medical physicists includes many aspects—collecting relevant normal and pathological cases; developing computer algorithms appropriate for the medical interpretation task including those for segmentation, feature extraction, and classifier design; developing methodology for assessing CAD performance; validating the algorithms using appropriate cases to measure performance and robustness; conducting observer studies with which to evaluate radiologists in the diagnostic task without and with the use of the computer aid; and ultimately assessing performance with a clinical trial. Medical physicists also have an important role in quantitative imaging, by validating the quantitative integrity of scanners and developing imaging techniques, and image analysis tools that extract quantitative data in a more accurate and automated fashion. As imaging systems become more complex and the need for better quantitative information from images grows, the future includes the combined research efforts from physicists working in CAD with those working on quantitative imaging systems to readily yield information on morphology, function, molecular structure, and more—from animal imaging research to clinical patient care. A historical review of CAD and a discussion of challenges for the future are presented here, along with the extension to quantitative image analysis. PMID:19175137
Evaluation of colonoscopy technical skill levels by use of an objective kinematic-based system.
Obstein, Keith L; Patil, Vaibhav D; Jayender, Jagadeesan; San José Estépar, Raúl; Spofford, Inbar S; Lengyel, Balazs I; Vosburgh, Kirby G; Thompson, Christopher C
2011-02-01
Colonoscopy requires training and experience to ensure accuracy and safety. Currently, no objective, validated process exists to determine when an endoscopist has attained technical competence. Kinematics data describing movements of laparoscopic instruments have been used in surgical skill assessment to define expert surgical technique. We have developed a novel system to record kinematics data during colonoscopy and quantitatively assess colonoscopist performance. To use kinematic analysis of colonoscopy to quantitatively assess endoscopic technical performance. Prospective cohort study. Tertiary-care academic medical center. This study involved physicians who perform colonoscopy. Application of a kinematics data collection system to colonoscopy evaluation. Kinematics data, validated task load assessment instrument, and technical difficulty visual analog scale. All 13 participants completed the colonoscopy to the terminal ileum on the standard colon model. Attending physicians reached the terminal ileum quicker than fellows (median time, 150.19 seconds vs 299.86 seconds; p<.01) with reduced path lengths for all 4 sensors, decreased flex (1.75 m vs 3.14 m; P=.03), smaller tip angulation, reduced absolute roll, and lower curvature of the endoscope. With performance of attending physicians serving as the expert reference standard, the mean kinematic score increased by 19.89 for each decrease in postgraduate year (P<.01). Overall, fellows experienced greater mental, physical, and temporal demand than did attending physicians. Small cohort size. Kinematic data and score calculation appear useful in the evaluation of colonoscopy technical skill levels. The kinematic score appears to consistently vary by year of training. Because this assessment is nonsubjective, it may be an improvement over current methods for determination of competence. Ongoing studies are establishing benchmarks and characteristic profiles of skill groups based on kinematics data. Copyright © 2011 American Society for Gastrointestinal Endoscopy. Published by Mosby, Inc. All rights reserved.
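A minimal sketch of one of the kinematic metrics named above: total path length of a tracked sensor from its sampled 3-D positions. The track below is synthetic; the study's kinematic score combined several such metrics (path length, flex, tip angulation, roll, curvature).

```python
import numpy as np

def path_length(positions: np.ndarray) -> float:
    """Total path length of a tracked sensor from an (N, 3) array of x, y, z positions."""
    steps = np.diff(positions, axis=0)
    return float(np.linalg.norm(steps, axis=1).sum())

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    track = np.cumsum(rng.normal(0, 0.002, size=(1500, 3)), axis=0)  # hypothetical track (m)
    print(f"path length = {path_length(track):.2f} m")
```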
Haggerty, Jeannie L; Levesque, Jean-Frédéric
2017-04-01
Patients are the most valid source for evaluating the accessibility of services, but a previous study observed differential psychometric performance of instruments in rural and urban respondents. To validate a measure of organizational accessibility free of differential rural-urban performance that predicts consequences of difficult access for patient-initiated care. Sequential qualitative-quantitative study. Qualitative findings used to adapt or develop evaluative and reporting items. Quantitative validation study. Primary data by telephone from 750 urban, rural and remote respondents in Quebec, Canada; follow-up mailed questionnaire to a subset of 316. Items were developed for barriers along the care trajectory. We used common factor and confirmatory factor analysis to identify constructs and compare models. We used item response theory analysis to test for differential rural-urban performance; examine individual item performance; adjust response options; and exclude redundant or non-discriminatory items. We used logistic regression to examine predictive validity of the subscale on access difficulty (outcome). Initial factor resolution suggested geographic and organizational dimensions, plus consequences of access difficulty. After second administration, organizational accommodation and geographic indicators were integrated into a 6-item subscale of Effective Availability and Accommodation, which demonstrates good variability and internal consistency (α = 0.84) and no differential functioning by geographic area. Each unit increase predicts decreased likelihood of consequences of access difficulties (unmet need and problem aggravation). The new subscale is a practical, valid and reliable measure for patients to evaluate first-contact health services accessibility, yielding valid comparisons between urban and rural contexts. © 2016 The Authors. Health Expectations published by John Wiley & Sons Ltd.
Powers, Mary K; Boonjindasup, Aaron; Pinsky, Michael; Dorsey, Philip; Maddox, Michael; Su, Li-Ming; Gettman, Matthew; Sundaram, Chandru P; Castle, Erik P; Lee, Jason Y; Lee, Benjamin R
2016-04-01
We sought to describe a methodology of crowdsourcing for obtaining quantitative performance ratings of surgeons performing renal artery and vein dissection of robotic partial nephrectomy (RPN). We sought to compare assessment of technical performance obtained from the crowdsourcers with that of surgical content experts (CE). Our hypothesis is that the crowd can score performances of renal hilar dissection comparably to surgical CE using the Global Evaluative Assessment of Robotic Skills (GEARS). A group of resident and attending robotic surgeons submitted a total of 14 video clips of RPN during hilar dissection. These videos were rated by both crowd and CE for technical skills performance using GEARS. A minimum of 3 CE and 30 Amazon Mechanical Turk crowdworkers evaluated each video with the GEARS scale. Within 13 days, we received ratings of all videos from all CE, and within 11.5 hours, we received 548 GEARS ratings from crowdworkers. Even though CE were exposed to a training module, internal consistency across videos of CE GEARS ratings remained low (ICC = 0.38). Despite this, we found that crowdworker GEARS ratings of videos were highly correlated with CE ratings at both the video level (R = 0.82, p < 0.001) and surgeon level (R = 0.84, p < 0.001). Similarly, crowdworker ratings of the renal artery dissection were highly correlated with expert assessments (R = 0.83, p < 0.001) for the unique surgery-specific assessment question. We conclude that crowdsourced assessment of qualitative performance ratings may be an alternative and/or adjunct to surgical experts' ratings and would provide a rapid scalable solution to triage technical skills.
Sunderland, John J; Christian, Paul E
2015-01-01
The Clinical Trials Network (CTN) of the Society of Nuclear Medicine and Molecular Imaging (SNMMI) operates a PET/CT phantom imaging program using the CTN's oncology clinical simulator phantom, designed to validate scanners at sites that wish to participate in oncology clinical trials. Since its inception in 2008, the CTN has collected 406 well-characterized phantom datasets from 237 scanners at 170 imaging sites covering the spectrum of commercially available PET/CT systems. The combined and collated phantom data describe a global profile of quantitative performance and variability of PET/CT data used in both clinical practice and clinical trials. Individual sites filled and imaged the CTN oncology PET phantom according to detailed instructions. Standard clinical reconstructions were requested and submitted. The phantom itself contains uniform regions suitable for scanner calibration assessment, lung fields, and 6 hot spheric lesions with diameters ranging from 7 to 20 mm at a 4:1 contrast ratio with primary background. The CTN Phantom Imaging Core evaluated the quality of the phantom fill and imaging and measured background standardized uptake values to assess scanner calibration and maximum standardized uptake values of all 6 lesions to review quantitative performance. Scanner make-and-model-specific measurements were pooled and then subdivided by reconstruction to create scanner-specific quantitative profiles. Different makes and models of scanners predictably demonstrated different quantitative performance profiles including, in some cases, small calibration bias. Differences in site-specific reconstruction parameters increased the quantitative variability among similar scanners, with postreconstruction smoothing filters being the most influential parameter. Quantitative assessment of this intrascanner variability over this large collection of phantom data gives, for the first time, estimates of reconstruction variance introduced into trials from allowing trial sites to use their preferred reconstruction methodologies. Predictably, time-of-flight-enabled scanners exhibited less size-based partial-volume bias than non-time-of-flight scanners. The CTN scanner validation experience over the past 5 y has generated a rich, well-curated phantom dataset from which PET/CT make-and-model and reconstruction-dependent quantitative behaviors were characterized for the purposes of understanding and estimating scanner-based variances in clinical trials. These results should make it possible to identify and recommend make-and-model-specific reconstruction strategies to minimize measurement variability in cancer clinical trials. © 2015 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
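For context on the standardized uptake values referenced above, a body-weight-normalized SUV can be computed as below. Decay correction and the phantom program's specific background and lesion region definitions are omitted, and the example numbers are illustrative only.

```python
def suv(activity_conc_bq_per_ml: float, injected_dose_bq: float,
        body_weight_g: float) -> float:
    """Body-weight-normalized standardized uptake value: tissue activity
    concentration divided by injected dose per gram of body weight."""
    return activity_conc_bq_per_ml / (injected_dose_bq / body_weight_g)

# Example: 18.6 kBq/mL in a region, 370 MBq injected, 70 kg subject (hypothetical)
print(round(suv(18_600.0, 370e6, 70_000.0), 2))
```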
Harada, Taiyo L; Saito, Kazuhiro; Araki, Yoichi; Matsubayashi, Jun; Nagao, Toshitaka; Sugimoto, Katsutoshi; Tokuuye, Koichi
2018-05-01
Background: Recently, diffusion-weighted imaging (DWI) and the quantitative enhancement ratio measured at the hepatobiliary phase (HBP) of Gd-EOB-DTPA-enhanced magnetic resonance imaging (MRI) have been established as effective methods for evaluating liver fibrosis. Purpose: To evaluate which is the more favorable surrogate marker for predicting high-stage liver fibrosis: the apparent diffusion coefficient (ADC) value or the quantitative enhancement ratio measured on HBP. Material and Methods: Eighty-three patients with 99 surgically resected hepatic lesions were enrolled in this study. DWI was performed with b-values of 100 and 800 s/mm2. Regions of interest were set on the ADC map and the HBP of Gd-EOB-DTPA-enhanced MRI to calculate the ADC value, liver-to-muscle ratio (LMR), liver-to-spleen ratio (LSR), and contrast enhancement index (CEI) of the liver. We compared these parameters between low-stage fibrosis (F0, F1, and F2) and high-stage fibrosis (F3 and F4). Receiver operating characteristic analysis was performed to compare the diagnostic performance in distinguishing low-stage fibrosis from high-stage fibrosis. Results: LMR and CEI were significantly lower at high-stage fibrosis than at the low stage (P < 0.01 and P = 0.04, respectively), whereas LSR did not show a significant difference (P = 0.053). No significant difference was observed in diagnostic performance between LMR and CEI (P = 0.185). The best sensitivity and specificity, when an LMR of 2.80 or higher was considered to be low-stage fibrosis, were 82.4% and 75.6%, respectively. The ADC value showed no significant differences among fibrosis grades (P = 0.320). Conclusion: LMR and CEI were both adequate surrogate parameters to distinguish high-stage fibrosis from low-stage fibrosis.
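A sketch of the hepatobiliary-phase ratios discussed above, using one common definition of the contrast enhancement index (hepatobiliary-phase LMR divided by pre-contrast LMR). The signal intensities are hypothetical, and the study's exact ROI placement and index definitions may differ.

```python
def liver_to_muscle_ratio(si_liver: float, si_muscle: float) -> float:
    """Liver-to-muscle signal intensity ratio on a given image."""
    return si_liver / si_muscle

def contrast_enhancement_index(lmr_hbp: float, lmr_precontrast: float) -> float:
    """Contrast enhancement index: hepatobiliary-phase LMR normalized by the
    pre-contrast LMR (one common definition, assumed here)."""
    return lmr_hbp / lmr_precontrast

lmr_pre = liver_to_muscle_ratio(310.0, 250.0)   # hypothetical pre-contrast intensities
lmr_hbp = liver_to_muscle_ratio(720.0, 240.0)   # hypothetical hepatobiliary-phase intensities
print(round(lmr_hbp, 2), round(contrast_enhancement_index(lmr_hbp, lmr_pre), 2))
# In the study, an LMR of 2.80 or higher suggested low-stage fibrosis.
```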
Coyne, Karin S; Sexton, Chris C; Thompson, Christine; Bavendam, Tamara; Brubaker, Linda
2015-03-01
Urinary urgency is the cardinal symptom of overactive bladder (OAB). However, there is no single instrument that assesses the context, severity, intensity, and daily life impact of urinary urgency. The purpose of this manuscript is to describe the methods and results of the qualitative and quantitative research conducted to develop a new tool for this purpose, the Urgency Questionnaire (UQ). Qualitative data from interviews with patients with urinary urgency were used to develop and refine the items and response options of the UQ. Three studies were used to evaluate psychometric properties: a clinical trial of tolterodine (Detrol; n = 974); a psychometric validation study (n = 163); and a test-retest validation study (n = 47). Item and exploratory factor analysis (EFA) were performed to assess the subscale structure, and the psychometric performance of the resulting scales was evaluated. Fifteen Likert-scale items and four VAS questions were retained. A four-factor solution was shown to best fit the data, with the subscales: Impact on Daily Activities, Time to Control Urgency, Nocturia, and Fear of Incontinence. All subscales and VAS items demonstrated good reliability (Cronbach's α 0.79-0.94), convergent and discriminant validity, and responsiveness to change. The UQ differentiated between OAB patients and controls. The results provide quantitative evidence that urinary urgency, as assessed by the UQ, is a pathological sensation distinctive from the normal urge to void and suggest that the UQ might be a reliable, valid, and responsive instrument for evaluating the severity and HRQL impact of urinary urgency in OAB.
NASA Astrophysics Data System (ADS)
Saruwatari, Shunsuke; Suzuki, Makoto; Morikawa, Hiroyuki
The paper presents a compact hard real-time operating system for wireless sensor nodes called PAVENET OS. PAVENET OS provides hybrid multithreading: preemptive multithreading and cooperative multithreading. Both forms of multithreading are optimized for the two kinds of tasks found on wireless sensor networks: real-time tasks and best-effort tasks. PAVENET OS can efficiently perform hard real-time tasks that cannot be performed by TinyOS. The paper demonstrates through quantitative evaluation that the hybrid multithreading achieves compactness and low overheads comparable to those of TinyOS. The evaluation results show that PAVENET OS performs 100 Hz sensor sampling with 0.01% jitter while performing wireless communication tasks, whereas optimized TinyOS has 0.62% jitter. In addition, PAVENET OS has a small footprint and low overheads (minimum RAM size: 29 bytes, minimum ROM size: 490 bytes, minimum task switch time: 23 cycles).
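Assuming jitter is expressed as the spread of measured inter-sample intervals relative to the nominal sampling period (one common definition, not necessarily the paper's), the reported figure can be reproduced from timestamps as follows. The simulated timing error is arbitrary.

```python
import numpy as np

def sampling_jitter_percent(timestamps_s, nominal_period_s: float) -> float:
    """Jitter of a periodic sampling task: standard deviation of the measured
    inter-sample intervals, expressed as a percentage of the nominal period."""
    intervals = np.diff(np.asarray(timestamps_s, dtype=float))
    return 100.0 * float(intervals.std()) / nominal_period_s

rng = np.random.default_rng(5)
nominal = 0.01                                              # 100 Hz -> 10 ms period
timestamps = np.cumsum(rng.normal(nominal, 1e-6, 1500))     # hypothetical sample times
print(f"jitter = {sampling_jitter_percent(timestamps, nominal):.3f} % of the period")
```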
Ju, Hyunjin; Lee, Deuck Hang; Cho, Hae-Chang; Kim, Kang Su; Yoon, Seyoon; Seo, Soo-Yeon
2014-01-01
In this study, hydrophilic chemical grout using silanol (HCGS) was adopted to overcome the performance limitations of epoxy materials used for strengthening existing buildings and civil engineering structures. The enhanced material performances of HCGS were introduced and applied to the section enlargement method, which is one of the typical structural strengthening methods used in practice. To evaluate the excellent structural strengthening performance of the HCGS, structural tests were conducted on reinforced concrete beams, and analyses of the flexural behaviors of the test specimens were performed by modified partial interaction theory (PIT). In particular, to improve the constructability of the section enlargement method, an advanced strengthening method was proposed, in which the precast panel was directly attached to the bottom of the damaged structural member by HCGS, and the degree of connection of the test specimens strengthened by the section enlargement method was quantitatively evaluated by PIT-based analysis. PMID:28788708
NASA Astrophysics Data System (ADS)
Habib, Akka; Abdelhamid, Bouzidi; Said, Housni
2018-05-01
In the absence of regulations and specific national norms, the performance indicators applied unilaterally to water distribution management services are insufficient and do not provide clear visibility into water resources. The indicators are also so heterogeneous that they are not in equilibrium with the applied management patterns. In particular: (1) the performance indicators of drinking water networks (yield and linear loss index) show discrepancies between operators and a lack of homogeneity in the parameters entering their equations, so these indicators lose efficiency and reliability; (2) the liquid sanitation service has to go beyond the quantitative evaluation target in order to consider the qualitative aspects of water. To reach this aim, a reasonable enlargement of the set of performance indicators is of paramount importance in order to better manage a water resource that is becoming scarce and insufficient.
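As a sketch of the two drinking-water indicators named above, under their usual definitions (yield as billed volume over produced volume; linear loss index as losses per kilometre of network per day). The annual volumes and network length are invented, and operators may use slightly different conventions.

```python
def network_yield(volume_billed_m3: float, volume_produced_m3: float) -> float:
    """Network yield: fraction of produced water that reaches billed consumption."""
    return volume_billed_m3 / volume_produced_m3

def linear_loss_index(volume_produced_m3: float, volume_billed_m3: float,
                      network_length_km: float, days: int = 365) -> float:
    """Linear loss index in m3 per km of network per day."""
    return (volume_produced_m3 - volume_billed_m3) / network_length_km / days

produced, billed, length_km = 1_200_000.0, 870_000.0, 450.0   # hypothetical annual figures
print(f"yield = {network_yield(billed, produced):.1%}, "
      f"linear loss index = {linear_loss_index(produced, billed, length_km):.2f} m3/km/day")
```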
NASA Technical Reports Server (NTRS)
Larson, T. J.; Schweikhard, W. G.
1974-01-01
A method for evaluating aircraft takeoff performance from brake release to air-phase height, which requires fewer tests than are conventionally required, is evaluated with data for the XB-70 airplane. The method defines the effects of pilot technique on takeoff performance quantitatively, including the decrease in acceleration from drag due to lift. For a given takeoff weight and throttle setting, a single takeoff provides enough data to establish a standardizing relationship for the distance from brake release to any point where velocity is appropriate to rotation. The lower rotation rates penalized takeoff performance in terms of ground roll distance; the lowest observed rotation rate required a ground roll distance that was 19 percent longer than the highest. Rotations at the minimum rate also resulted in lift-off velocities that were approximately 5 knots lower than those at the highest rotation rate at any given lift-off distance.
Quantitative prediction of drug side effects based on drug-related features.
Niu, Yanqing; Zhang, Wen
2017-09-01
Unexpected side effects of drugs are a great concern in drug development, and the identification of side effects is an important task. Recently, machine learning methods have been proposed to predict the presence or absence of side effects of interest for drugs, but it is difficult to make accurate predictions for all of them. In this paper, we transform the side effect profiles of drugs into quantitative scores by summing up their side effects with weights. The quantitative scores may measure the dangers of drugs and thus help to compare the risk of different drugs. Here, we attempt to predict the quantitative scores of drugs, namely quantitative prediction. Specifically, we explore a variety of drug-related features and evaluate their discriminative powers for the quantitative prediction. Then, we consider several feature combination strategies (direct combination, average scoring ensemble combination) to integrate three informative features: chemical substructures, targets, and treatment indications. Finally, the average scoring ensemble model, which produces the better performance, is used as the final quantitative prediction model. Since weights for side effects are empirical values, we randomly generate different weights in the simulation experiments. The experimental results show that the quantitative method is robust to different weights and produces satisfying results. Although other state-of-the-art methods cannot make the quantitative prediction directly, their prediction results can be transformed into quantitative scores. By indirect comparison, the proposed method produces much better results than benchmark methods in the quantitative prediction. In conclusion, the proposed method is promising for the quantitative prediction of side effects and may work cooperatively with existing state-of-the-art methods to reveal the dangers of drugs.
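A minimal sketch of the quantitative score defined above, i.e. a weighted sum over a drug's side-effect profile, with randomly generated weights in the spirit of the paper's simulation setup. The profile itself is synthetic, and the prediction models that estimate this score from drug features are not shown.

```python
import numpy as np

def quantitative_side_effect_score(profile: np.ndarray, weights: np.ndarray) -> float:
    """Quantitative score of a drug: weighted sum of its binary side-effect profile."""
    return float(np.dot(profile, weights))

rng = np.random.default_rng(6)
n_side_effects = 1000
weights = rng.uniform(0, 1, n_side_effects)      # empirical weights (randomized here)
profile = rng.integers(0, 2, n_side_effects)     # 0/1 presence of each side effect
print(round(quantitative_side_effect_score(profile, weights), 1))
```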
Chen, Jing; Wang, Shu-Mei; Meng, Jiang; Sun, Fei; Liang, Sheng-Wang
2013-05-01
To establish a new method for quality evaluation and validate its feasibility by the simultaneous quantitative assay of five alkaloids in Sophora flavescens. The new quality evaluation method, quantitative analysis of multi-components by single marker (QAMS), was established and validated with S. flavescens. Five main alkaloids, oxymatrine, sophocarpine, matrine, oxysophocarpine and sophoridine, were selected as analytes to evaluate the quality of the rhizome of S. flavescens, and the relative correction factors showed good repeatability. Their contents in 21 batches of samples, collected from different areas, were determined by both the external standard method and QAMS. The method was evaluated by comparing the quantitative results obtained by the external standard method and by QAMS. No significant differences were found in the quantitative results for the five alkaloids in the 21 batches of S. flavescens determined by the external standard method and QAMS. It is feasible and suitable to evaluate the quality of the rhizome of S. flavescens by QAMS.
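A simplified single-point sketch of the QAMS idea, assuming a relative correction factor f_i = (A_s/C_s)/(A_i/C_i) obtained from a mixed reference standard and quantification of each analyte through the single marker's response factor. The published method builds calibration curves and validates factor repeatability; the peak areas and concentrations below are hypothetical.

```python
def relative_correction_factor(area_marker_std: float, conc_marker_std: float,
                               area_analyte_std: float, conc_analyte_std: float) -> float:
    """f_i = (A_s / C_s) / (A_i / C_i), from a mixed reference standard."""
    return (area_marker_std / conc_marker_std) / (area_analyte_std / conc_analyte_std)

def analyte_concentration(area_analyte_sample: float, f_i: float,
                          area_marker_std: float, conc_marker_std: float) -> float:
    """Concentration of analyte i in a sample using only the single marker's
    response factor (single-point calibration, simplified)."""
    response_marker = area_marker_std / conc_marker_std
    return area_analyte_sample * f_i / response_marker

# Hypothetical numbers: matrine quantified against oxymatrine as the single marker
f_matrine = relative_correction_factor(1520.0, 50.0, 980.0, 50.0)
print(round(analyte_concentration(450.0, f_matrine, 1520.0, 50.0), 2))  # illustrative mg/L
```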
Mathematics Competency for Beginning Chemistry Students Through Dimensional Analysis.
Pursell, David P; Forlemu, Neville Y; Anagho, Leonard E
2017-01-01
Mathematics competency in nursing education and practice may be addressed by an instructional variation of the traditional dimensional analysis technique typically presented in beginning chemistry courses. The authors studied 73 beginning chemistry students using the typical dimensional analysis technique and the variation technique. Student quantitative problem-solving performance was evaluated. Students using the variation technique scored significantly better (18.3 of 20 points, p < .0001) on the final examination quantitative titration problem than those who used the typical technique (10.9 of 20 points). American Chemical Society examination scores and in-house assessment indicate that better performing beginning chemistry students were more likely to use the variation technique rather than the typical technique. The variation technique may be useful as an alternative instructional approach to enhance beginning chemistry students' mathematics competency and problem-solving ability in both education and practice. [J Nurs Educ. 2017;56(1):22-26.]. Copyright 2017, SLACK Incorporated.
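For readers outside chemistry, a worked example of the dimensional-analysis style of titration calculation referenced above (not a problem from the study's examinations); the concentrations and volumes are arbitrary.

```python
# Worked titration by chained conversion factors (dimensional analysis):
# 25.0 mL HCl x 0.0850 mmol HCl/mL x 1 mmol NaOH/1 mmol HCl x 1 mL NaOH/0.100 mmol NaOH
volume_hcl_ml = 25.0
molarity_hcl = 0.0850          # mmol HCl per mL
mole_ratio = 1.0               # mmol NaOH per mmol HCl (1:1 neutralization)
molarity_naoh = 0.100          # mmol NaOH per mL

volume_naoh_ml = volume_hcl_ml * molarity_hcl * mole_ratio / molarity_naoh
print(f"{volume_naoh_ml:.2f} mL of NaOH")   # 21.25 mL
```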
Spalenza, Veronica; Girolami, Flavia; Bevilacqua, Claudia; Riondato, Fulvio; Rasero, Roberto; Nebbia, Carlo; Sacchi, Paola; Martin, Patrice
2011-09-01
Gene expression studies in blood cells, particularly lymphocytes, are useful for monitoring potential exposure to toxicants or environmental pollutants in humans and livestock species. Quantitative PCR is the method of choice for obtaining accurate quantification of mRNA transcripts, although variations in the amount of starting material, enzymatic efficiency, and the presence of inhibitors can lead to evaluation errors. As a result, normalization of the data is of crucial importance. The most common approach is the use of endogenous reference genes as an internal control, whose expression should ideally not vary among individuals or under different experimental conditions. The accurate selection of reference genes is therefore an important step in interpreting quantitative PCR studies. Since no systematic investigation in bovine lymphocytes had been performed, the aim of the present study was to assess the expression stability of seven candidate reference genes in circulating lymphocytes collected from 15 dairy cows. Following characterization by flow cytometric analysis of the cell populations obtained from blood through a density gradient procedure, three popular software packages were used to evaluate the gene expression data. The results showed that two genes are sufficient for normalization of quantitative PCR studies in cattle lymphocytes and that YWHAZ, S24 and PPIA are the most stable genes. Copyright © 2010 Elsevier Ltd. All rights reserved.
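A hedged sketch of the normalization step that such reference genes support, assuming roughly 100% amplification efficiency and the 2^-ΔCq convention; it is not the stability analysis performed by the software packages cited in the study, and the Cq values are hypothetical:

```python
import numpy as np

def relative_expression(cq_target, cq_reference_genes):
    """Relative expression of a target transcript normalized to stable reference genes.

    cq_reference_genes: Cq values of the chosen reference genes in the same sample.
    Averaging Cq values corresponds to the geometric mean of their linear quantities.
    """
    cq_ref = np.mean(cq_reference_genes)
    return 2.0 ** -(cq_target - cq_ref)

# Example with hypothetical Cq values for one sample and two reference genes.
print(relative_expression(27.4, [21.9, 22.3]))
```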
Quantitative measurement of feline colonic transit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krevsky, B.; Somers, M.B.; Maurer, A.H.
1988-10-01
Colonic transit scintigraphy, a method for quantitatively evaluating the movement of the fecal stream in vivo, was employed to evaluate colonic transit in the cat. Scintigraphy was performed in duplicate in five cats and repeated four times in one cat. After instillation of an 111In marker into the cecum through a surgically implanted silicone cecostomy tube, colonic movement of the instillate was quantitated for 24 h using gamma scintigraphy. Antegrade and retrograde motion of radionuclide was observed. The cecum and ascending colon emptied rapidly, with a half-emptying time of 1.68 +/- 0.56 h (mean +/- SE). After 24 h, 25.1 +/- 5.2% of the activity remained in the transverse colon. The progression of the geometric center was initially rapid, followed later by a delayed phase. Geometric center reproducibility was found to be high when analyzed using simple linear regression (slope = 0.92; r = 0.73; P less than 0.01). Atropine (0.1 mg/kg im) was found to delay cecum and ascending colon emptying and to delay progression of the geometric center. These results demonstrate both 1) the ability of colonic transit scintigraphy to detect changes in transit induced by pharmacological manipulation and 2) the fact that muscarinic blockade inhibits antegrade transit of the fecal stream. We conclude that feline colonic transit may be studied in a quantitative and reproducible manner with colonic transit scintigraphy.
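A minimal sketch of the two quantities reported above, under the common convention that colonic regions of interest are numbered sequentially and the geometric center is the count-weighted mean region; the region numbering, example values, and interpolation scheme are assumptions, not details from the abstract:

```python
import numpy as np

def geometric_center(region_counts):
    """Count-weighted mean region number (regions numbered 1..N from the cecum onward)."""
    counts = np.asarray(region_counts, dtype=float)
    regions = np.arange(1, counts.size + 1)
    return float(np.dot(counts / counts.sum(), regions))

def half_emptying_time(times_h, retained_fraction):
    """Time at which a region retains 50% of its initial activity (linear interpolation).

    retained_fraction must decrease monotonically and cross 0.5.
    """
    rf = np.asarray(retained_fraction, dtype=float)
    t = np.asarray(times_h, dtype=float)
    return float(np.interp(0.5, rf[::-1], t[::-1]))

# Example: five regions, and cecum/ascending-colon retention sampled over time.
print(geometric_center([10, 30, 40, 15, 5]))                  # ~2.75
print(half_emptying_time([0, 1, 2, 4], [1.0, 0.7, 0.45, 0.2]))  # ~1.8 h
```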
Chung, Cheng-Shiu; Wang, Hongwu; Cooper, Rory A.
2013-01-01
Context: The user interface development of assistive robotic manipulators can be traced back to the 1960s. Studies include kinematic designs, cost-efficiency, user experience involvement, and performance evaluation. This paper reviews studies that conducted clinical trials using activities of daily living (ADL) tasks to evaluate performance, categorized using the International Classification of Functioning, Disability and Health (ICF) framework, in order to outline the scope of current research and provide suggestions for future studies. Methods: We conducted a literature search on assistive robotic manipulators from 1970 to 2012 in PubMed, Google Scholar, and the University of Pittsburgh Library System (PITTCat). Results: Twenty relevant studies were identified. Conclusion: Studies were separated into two broad categories: user task preferences and user-interface performance measurements of commercialized and developing assistive robotic manipulators. The outcome measures and ICF codes associated with the performance evaluations are reported. Suggestions for future studies include (1) standardized ADL tasks for the quantitative and qualitative evaluation of task efficiency and performance, to build comparable measures between research groups; (2) studies relevant to tasks from user priority lists and ICF codes; and (3) appropriate clinical functional assessment tests with consideration of the constraints of assistive robotic manipulator user interfaces. In addition, these outcome measures will help physicians and therapists build standardized tools for prescribing and assessing assistive robotic manipulators. PMID:23820143
Validity of the Medical College Admission Test for Predicting MD-PhD Student Outcomes
ERIC Educational Resources Information Center
Bills, James L.; VanHouten, Jacob; Grundy, Michelle M.; Chalkley, Roger; Dermody, Terence S.
2016-01-01
The Medical College Admission Test (MCAT) is a quantitative metric used by MD and MD-PhD programs to evaluate applicants for admission. This study assessed the validity of the MCAT in predicting training performance measures and career outcomes for MD-PhD students at a single institution. The study population consisted of 153 graduates of the…
ERIC Educational Resources Information Center
Trice, Toni M.
2017-01-01
Research shows a math achievement gap for at-risk and economically-disadvantaged students in the United States. To address this issue, a Texas school district implemented a 90-minute math block-scheduling program with 8th grade students. Shaped by the academic learning time and social justice theories, the purpose of this quantitative program…
ERIC Educational Resources Information Center
Ryan, Timothy K.
2010-01-01
During the period from 1978 to 2009, more than 200 commercial airlines were forced to merge, cease operations, or file for bankruptcy protection. The purpose of this quantitative study is to evaluate the global commercial airline industry from an IT-business alignment perspective and correlate the alignment maturity level of each airline with…
Gao, Bin; Li, Xiaoqing; Woo, Wai Lok; Tian, Gui Yun
2018-05-01
Thermographic inspection has been widely applied to non-destructive testing and evaluation, owing to its capability for rapid, contactless, large-area detection. Image segmentation is considered essential for identifying and sizing defects. To attain high-level performance, specific physics-based models that describe defect generation and enable precise extraction of the target region are of crucial importance. In this paper, an effective genetic first-order statistical image segmentation algorithm is proposed for quantitative crack detection. The proposed method automatically extracts valuable spatial-temporal patterns through an unsupervised feature extraction algorithm and avoids a range of issues associated with human intervention in the laborious manual selection of specific thermal video frames for processing. An internal genetic functionality is built into the proposed algorithm to automatically control the segmentation threshold and render enhanced accuracy in sizing the cracks. Eddy current pulsed thermography is implemented as a platform to demonstrate surface crack detection. Experimental tests and comparisons have been conducted to verify the efficacy of the proposed method. In addition, a global quantitative assessment index, the F-score, has been adopted to objectively evaluate the performance of different segmentation algorithms.
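As a point of reference for the evaluation step, a minimal pixel-wise F-score between a predicted binary segmentation and ground truth; this is the generic definition, not the authors' exact evaluation pipeline:

```python
import numpy as np

def f_score(pred_mask, true_mask, beta=1.0):
    """Pixel-wise F-score of a binary segmentation against a ground-truth mask."""
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    tp = np.logical_and(pred, true).sum()
    precision = tp / pred.sum() if pred.sum() else 0.0
    recall = tp / true.sum() if true.sum() else 0.0
    if precision + recall == 0:
        return 0.0
    return (1 + beta**2) * precision * recall / (beta**2 * precision + recall)

# Example on tiny masks.
pred = np.array([[0, 1, 1], [0, 1, 0]])
true = np.array([[0, 1, 0], [0, 1, 1]])
print(f_score(pred, true))  # 0.666...
```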
Oeck, Sebastian; Malewicz, Nathalie M; Hurst, Sebastian; Al-Refae, Klaudia; Krysztofiak, Adam; Jendrossek, Verena
2017-07-01
The quantitative analysis of foci plays an important role in various cell biological methods. In the fields of radiation biology and experimental oncology, the effect of ionizing radiation, chemotherapy or molecularly targeted drugs on DNA damage induction and repair is frequently assessed by the analysis of protein clusters or phosphorylated proteins recruited to so-called repair foci at DNA damage sites, involving for example γ-H2A.X, 53BP1 or RAD51. We recently developed "The Focinator" as a reliable and fast tool for automated quantitative and qualitative analysis of nuclei and DNA damage foci. The refined software is now even more user-friendly owing to a graphical interface and further features. We included an R-script-based mode for automated image opening, file naming, progress monitoring and error reporting, so that the evaluation no longer requires the attendance of the operator after initial parameter definition. Moreover, the Focinator v2-0 is now able to perform multi-channel analysis of four channels and to evaluate protein-protein colocalization by comparison of up to three foci channels. This enables, for example, the quantification of foci in cells of a specific cell cycle phase.
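For orientation, a hedged sketch of the kind of per-nucleus foci counting and colocalization such tools perform, using simple intensity thresholding and connected-component labeling; the Focinator itself is an ImageJ/R-based tool, and the threshold, masks, and function names here are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

def count_foci(channel, nucleus_mask, threshold):
    """Count connected bright spots (foci) inside one nucleus mask."""
    foci_mask = (channel > threshold) & nucleus_mask
    _, n_foci = ndimage.label(foci_mask)
    return n_foci

def colocalized_foci(foci_mask_a, foci_mask_b):
    """Number of foci in channel A that overlap at least one focus in channel B."""
    labels_a, _ = ndimage.label(foci_mask_a)
    overlapping_labels = np.unique(labels_a[foci_mask_b])
    return int(np.count_nonzero(overlapping_labels))  # label 0 is background
```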
Inertial Sensor-Based Motion Analysis of Lower Limbs for Rehabilitation Treatments
Sun, Tongyang; Duan, Lihong; Wang, Yulong
2017-01-01
Diagnosis of the rehabilitation state of hemiplegic patients by therapists can be biased by subjective experience, which may deteriorate the rehabilitation effect. In order to improve this situation, a quantitative evaluation is proposed. Though many motion analysis systems are available, they are too complicated for practical application by therapists. In this paper, a method for detecting the motion of the human lower limbs, including all degrees of freedom (DOFs), via inertial sensors is proposed, which permits analysis of the patient's motion ability. This method is applicable to arbitrary walking directions and tracks of the persons under study, and its results are unbiased compared with qualitative estimations by therapists. Using a simplified mathematical model of the human body, the rotation angles for each lower limb joint are calculated from the input signals acquired by the inertial sensors. Finally, rotation angle versus joint displacement curves are constructed, and estimated values of joint motion angle and motion ability are obtained. Experimental verification of the proposed motion detection and analysis method was performed, which proved that it can efficiently detect the differences between the motion behaviors of disabled and healthy persons and provide a reliable quantitative evaluation of the rehabilitation state. PMID:29065575
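A minimal sketch of one standard way to estimate a single segment or joint angle from inertial sensor data, fusing integrated gyroscope rate with an accelerometer-derived tilt via a complementary filter; this is a generic illustration, not the paper's specific multi-DOF model, and the filter coefficient is an assumed value:

```python
import numpy as np

def complementary_filter(gyro_rate_deg_s, accel_angle_deg, dt, alpha=0.98):
    """Estimate a segment angle over time from gyroscope and accelerometer signals.

    gyro_rate_deg_s: angular rate samples (deg/s)
    accel_angle_deg: tilt angle derived from the accelerometer (deg)
    dt:              sampling period (s)
    alpha:           weight of the integrated gyroscope term (assumed value)
    """
    gyro = np.asarray(gyro_rate_deg_s, dtype=float)
    accel = np.asarray(accel_angle_deg, dtype=float)
    angle = np.zeros_like(gyro)
    angle[0] = accel[0]
    for k in range(1, len(angle)):
        gyro_estimate = angle[k - 1] + gyro[k] * dt   # short-term gyro integration
        angle[k] = alpha * gyro_estimate + (1 - alpha) * accel[k]  # drift correction
    return angle
```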
Soncin, Rafael; Mezêncio, Bruno; Ferreira, Jacielle Carolina; Rodrigues, Sara Andrade; Huebner, Rudolf; Serrão, Julio Cerca; Szmuchrowski, Leszek
2017-06-01
The aim of this study was to propose a new force parameter associated with swimmers' technique and performance. Twelve swimmers performed five repetitions of a 25 m sprint crawl and a tethered swimming test with maximal effort. The parameters calculated were the mean swimming velocity for the crawl sprint, the mean propulsive force of the tethered swimming test, and an oscillation parameter calculated from the force fluctuation. The oscillation parameter evaluates the force variation around the mean force during the tethered test as a measure of swimming technique. Two parameters showed significant correlations with swimming velocity: the mean force during tethered swimming (r = 0.85) and the product of the square root of the mean force and the oscillation (r = 0.86). However, the intercept coefficient was significantly different from zero only for the mean force, suggesting that, although the correlation coefficients of the two parameters were similar, the part of the mean velocity magnitude not associated with the mean force was associated with the product of the square root of the mean force and the oscillation. Thus, force fluctuation during tethered swimming can be used as a quantitative index of swimmers' technique.
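A hedged sketch of how these indices could be computed from a tethered-swimming force trace; the abstract does not define the oscillation parameter exactly, so the coefficient of variation of force around its mean is used here as a stand-in assumption, and the example trace is synthetic:

```python
import numpy as np

def tethered_swimming_indices(force_trace_n):
    """Mean force, an oscillation index (assumed: coefficient of variation),
    and the composite parameter sqrt(mean force) * oscillation."""
    force = np.asarray(force_trace_n, dtype=float)
    mean_force = force.mean()
    oscillation = force.std() / mean_force        # assumption: normalized fluctuation
    composite = np.sqrt(mean_force) * oscillation
    return mean_force, oscillation, composite

# Example with a synthetic periodic force trace (Newtons).
t = np.linspace(0, 10, 1000)
force = 120 + 40 * np.abs(np.sin(2 * np.pi * 0.8 * t))
print(tethered_swimming_indices(force))
```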