Sample records for quantitative performance evaluation

  1. Quantitative Evaluation of Performance during Robot-assisted Treatment.

    PubMed

    Peri, E; Biffi, E; Maghini, C; Servodio Iammarrone, F; Gagliardi, C; Germiniasi, C; Pedrocchi, A; Turconi, A C; Reni, G

    2016-01-01

    This article is part of the Focus Theme of Methods of Information in Medicine on "Methodologies, Models and Algorithms for Patients Rehabilitation". The great potential of robots for extracting quantitative and meaningful data is not always exploited in clinical practice. The aim of the present work is to describe a simple parameter for assessing the performance of subjects during upper limb robotic training, exploiting data automatically recorded by the robot with no additional effort for patients and clinicians. Fourteen children affected by cerebral palsy (CP) underwent training with the Armeo®Spring. Each session was evaluated with P, a simple parameter that depends on the overall performance recorded, and median and interquartile values were computed for a group analysis. Median (interquartile) values of P significantly increased from 0.27 (0.21) at T0 to 0.55 (0.27) at T1. This improvement was functionally validated by a significant increase in the Melbourne Assessment of Unilateral Upper Limb Function. The parameter described here was able to show variations in performance over time and enabled a quantitative evaluation of motion abilities that is consistent with a well-known clinical scale.

  2. Evaluation of Quantitative Performance of Sequential Immobilized Metal Affinity Chromatographic Enrichment for Phosphopeptides

    PubMed Central

    Sun, Zeyu; Hamilton, Karyn L.; Reardon, Kenneth F.

    2014-01-01

    We evaluated a sequential elution protocol from immobilized metal affinity chromatography (SIMAC) employing gallium-based immobilized metal affinity chromatography (IMAC) in conjunction with titanium-dioxide-based metal oxide affinity chromatography (MOAC). The quantitative performance of this SIMAC enrichment approach, assessed in terms of repeatability, dynamic range, and linearity, was evaluated using a mixture composed of tryptic peptides from caseins, bovine serum albumin, and phosphopeptide standards. While our data demonstrate the overall consistent performance of the SIMAC approach under various loading conditions, the results also revealed that the method had limited repeatability and linearity for most phosphopeptides tested, and different phosphopeptides were found to have different linear ranges. These data suggest that, unless additional strategies are used, SIMAC should be regarded as a semi-quantitative method when used in large-scale phosphoproteomics studies in complex backgrounds. PMID:24096195

  3. Performance Evaluation and Quantitative Accuracy of Multipinhole NanoSPECT/CT Scanner for Theranostic Lu-177 Imaging

    NASA Astrophysics Data System (ADS)

    Gupta, Arun; Kim, Kyeong Yun; Hwang, Donghwi; Lee, Min Sun; Lee, Dong Soo; Lee, Jae Sung

    2018-06-01

    SPECT plays an important role in peptide receptor targeted radionuclide therapy using theranostic radionuclides such as Lu-177 for the treatment of various cancers. However, SPECT studies must be quantitatively accurate, because reliable assessment of tumor uptake and tumor-to-normal tissue ratios can only be performed using quantitatively accurate images. Hence, it is important to evaluate the performance parameters and quantitative accuracy of preclinical SPECT systems for therapeutic radioisotopes before conducting pre- and post-therapy SPECT imaging or dosimetry studies. In this study, we evaluated the system performance and quantitative accuracy of the NanoSPECT/CT scanner for Lu-177 imaging using point source and uniform phantom studies. We measured the recovery coefficient, uniformity, spatial resolution, system sensitivity, and calibration factor for the mouse whole-body standard aperture. We also performed the experiments using Tc-99m to compare the results with those of Lu-177. We found a recovery coefficient of more than 70% for Lu-177 at the optimum noise level when nine iterations were used. The spatial resolution of Lu-177, with and without a uniform background, was comparable to that of Tc-99m in the axial, radial, and tangential directions. The system sensitivity measured for Lu-177 was almost three times lower than that of Tc-99m.

  4. A novel approach for evaluating the performance of real time quantitative loop-mediated isothermal amplification-based methods.

    PubMed

    Nixon, Gavin J; Svenstrup, Helle F; Donald, Carol E; Carder, Caroline; Stephenson, Judith M; Morris-Jones, Stephen; Huggett, Jim F; Foy, Carole A

    2014-12-01

    Molecular diagnostic measurements are currently underpinned by the polymerase chain reaction (PCR). There are also a number of alternative nucleic acid amplification technologies which, unlike PCR, work at a single temperature. These 'isothermal' methods reportedly offer potential advantages over PCR, such as simplicity, speed, and resistance to inhibitors, and could also be used for quantitative molecular analysis. However, there are currently limited mechanisms to evaluate their quantitative performance, which would assist assay development and study comparisons. This study uses a sexually transmitted infection diagnostic model, in combination with an adapted metric termed the isothermal doubling time (IDT), akin to PCR efficiency, to compare quantitative PCR and quantitative loop-mediated isothermal amplification (qLAMP) assays and to quantify the impact of matrix interference. The performance metric described here facilitates the comparison of qLAMP assays and could assist assay development and validation activities.
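A note on the IDT metric above: the record gives only its name and its analogy to PCR efficiency. As a rough illustration, a doubling time can be derived from a qLAMP dilution series by fitting time-to-threshold against log10 input concentration and converting the slope (minutes per tenfold change) into minutes per doubling. The function below is a sketch under that assumption, not the authors' published formula.

```python
import math

def isothermal_doubling_time(log10_conc, time_to_threshold):
    """Estimate an isothermal doubling time (IDT) from a dilution series.

    Fits time-to-threshold (minutes) against log10(input copies) by
    ordinary least squares and converts the slope (minutes per tenfold
    change) to minutes per doubling. Illustrative assumption only.
    """
    n = len(log10_conc)
    mx = sum(log10_conc) / n
    my = sum(time_to_threshold) / n
    sxx = sum((x - mx) ** 2 for x in log10_conc)
    sxy = sum((x - mx) * (y - my)
              for x, y in zip(log10_conc, time_to_threshold))
    slope = sxy / sxx                   # minutes per decade of input
    return abs(slope) * math.log10(2)   # minutes per doubling
```

With a perfectly efficient reaction, one tenfold dilution costs log2(10) ≈ 3.32 doublings, so a standard-curve slope of -3.32 min/decade corresponds to an IDT of one minute.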

  5. Quantitative Evaluation of Performance in Interventional Neuroradiology: An Integrated Curriculum Featuring Theoretical and Practical Challenges.

    PubMed

    Ernst, Marielle; Kriston, Levente; Romero, Javier M; Frölich, Andreas M; Jansen, Olav; Fiehler, Jens; Buhk, Jan-Hendrik

    2016-01-01

    We sought to develop a standardized curriculum capable of assessing key competencies in Interventional Neuroradiology by the use of models and simulators in an objective, quantitative, and efficient way. In this evaluation we analyzed the associations between the practical experience, theoretical knowledge, and skills lab performance of interventionalists. We evaluated the endovascular skills of 26 participants of the Advanced Course in Endovascular Interventional Neuroradiology of the European Society of Neuroradiology with a set of three tasks (aneurysm coiling and thrombectomy in a virtual simulator, and placement of an intra-aneurysmal flow disruptor in a flow model). Practical experience was assessed by a survey. Participants completed a written and an oral examination to evaluate theoretical knowledge. Bivariate and multivariate analyses were performed. In multivariate analysis, knowledge of materials and techniques in Interventional Neuroradiology was moderately associated with skills in aneurysm coiling and thrombectomy. Experience in mechanical thrombectomy was moderately associated with thrombectomy skills, while age was negatively associated with thrombectomy skills. We found no significant association between age, sex, or work experience and skills in aneurysm coiling. Our study gives an example of what an integrated curriculum for reasonable and cost-effective assessment of the key competencies of an interventional neuroradiologist could look like. In addition to traditional assessment of theoretical knowledge, practical skills are measured by the use of endovascular simulators, yielding objective, quantitative, and constructive data for evaluating both the current performance status of participants and the evolution of their technical competency over time.

  6. Evaluating Attitudes, Skill, and Performance in a Learning-Enhanced Quantitative Methods Course: A Structural Modeling Approach.

    ERIC Educational Resources Information Center

    Harlow, Lisa L.; Burkholder, Gary J.; Morrow, Jennifer A.

    2002-01-01

    Used a structural modeling approach to evaluate relations among attitudes, initial skills, and performance in a Quantitative Methods course that involved students in active learning. Results largely confirmed hypotheses offering support for educational reform efforts that propose actively involving students in the learning process, especially in…

  7. Quantitative Evaluation of Performance in Interventional Neuroradiology: An Integrated Curriculum Featuring Theoretical and Practical Challenges

    PubMed Central

    Ernst, Marielle; Kriston, Levente; Romero, Javier M.; Frölich, Andreas M.; Jansen, Olav; Fiehler, Jens; Buhk, Jan-Hendrik

    2016-01-01

    Purpose We sought to develop a standardized curriculum capable of assessing key competencies in Interventional Neuroradiology by the use of models and simulators in an objective, quantitative, and efficient way. In this evaluation we analyzed the associations between the practical experience, theoretical knowledge, and skills lab performance of interventionalists. Materials and Methods We evaluated the endovascular skills of 26 participants of the Advanced Course in Endovascular Interventional Neuroradiology of the European Society of Neuroradiology with a set of three tasks (aneurysm coiling and thrombectomy in a virtual simulator, and placement of an intra-aneurysmal flow disruptor in a flow model). Practical experience was assessed by a survey. Participants completed a written and an oral examination to evaluate theoretical knowledge. Bivariate and multivariate analyses were performed. Results In multivariate analysis, knowledge of materials and techniques in Interventional Neuroradiology was moderately associated with skills in aneurysm coiling and thrombectomy. Experience in mechanical thrombectomy was moderately associated with thrombectomy skills, while age was negatively associated with thrombectomy skills. We found no significant association between age, sex, or work experience and skills in aneurysm coiling. Conclusion Our study gives an example of what an integrated curriculum for reasonable and cost-effective assessment of the key competencies of an interventional neuroradiologist could look like. In addition to traditional assessment of theoretical knowledge, practical skills are measured by the use of endovascular simulators, yielding objective, quantitative, and constructive data for evaluating both the current performance status of participants and the evolution of their technical competency over time. PMID:26848840

  8. Comparative Evaluation of Quantitative Test Methods for Gases on a Hard Surface

    DTIC Science & Technology

    2017-02-01

    COMPARATIVE EVALUATION OF QUANTITATIVE TEST METHODS FOR GASES ON A HARD SURFACE (report ECBC-TR-1426; Vipin Rastogi). Only fragments of this record are recoverable. From the experimental design section: "Each quantitative method was performed three times on three consecutive days. For the CD runs, three …"

  9. Performance analysis of a film dosimetric quality assurance procedure for IMRT with regard to the employment of quantitative evaluation methods.

    PubMed

    Winkler, Peter; Zurl, Brigitte; Guss, Helmuth; Kindl, Peter; Stuecklschweiger, Georg

    2005-02-21

    A system for dosimetric verification of intensity-modulated radiotherapy (IMRT) treatment plans using absolute calibrated radiographic films is presented. At our institution this verification procedure is performed for all IMRT treatment plans prior to patient irradiation. Therefore, clinical treatment plans are transferred to a phantom and recalculated. Composite treatment plans are irradiated to a single film. Film density to absolute dose conversion is performed automatically based on a single calibration film. A software application encompassing film calibration, 2D registration of measured and calculated distributions, image fusion, and a number of visual and quantitative evaluation utilities was developed. The main topic of this paper is a performance analysis of this quality assurance procedure, with regard to the specification of tolerance levels for quantitative evaluations. Spatial and dosimetric precision and accuracy were determined for the entire procedure, comprising all possible sources of error. The overall dosimetric and spatial measurement uncertainties obtained thereby were 1.9% and 0.8 mm, respectively. Based on these results, we specified a 5% dose difference and 3 mm distance-to-agreement as our tolerance levels for patient-specific quality assurance for IMRT treatments.
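The 5% dose-difference / 3 mm distance-to-agreement tolerance used in the film QA procedure above can be illustrated with a toy check on 1D dose profiles. Clinical film QA uses 2D gamma-style analysis, so this is only an illustrative sketch, not the authors' software.

```python
def profile_pass_rate(meas, ref, spacing_mm, dose_tol=0.05, dta_mm=3.0):
    """Fraction of measured points passing a dose-difference /
    distance-to-agreement test on 1D dose profiles.

    A point passes if its dose matches the reference at the same position
    within dose_tol of the reference maximum, or if some reference point
    within dta_mm matches its dose to the same tolerance. A toy stand-in
    for clinical 2D gamma analysis.
    """
    ref_max = max(ref)
    reach = int(dta_mm // spacing_mm)   # neighbouring samples within DTA
    passed = 0
    for i, m in enumerate(meas):
        lo, hi = max(0, i - reach), min(len(ref), i + reach + 1)
        if any(abs(m - ref[j]) / ref_max <= dose_tol for j in range(lo, hi)):
            passed += 1
    return passed / len(meas)
```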

  10. Diagnostic performance of semi-quantitative and quantitative stress CMR perfusion analysis: a meta-analysis.

    PubMed

    van Dijk, R; van Assen, M; Vliegenthart, R; de Bock, G H; van der Harst, P; Oudkerk, M

    2017-11-27

    Stress cardiovascular magnetic resonance (CMR) perfusion imaging is a promising modality for the evaluation of coronary artery disease (CAD) due to its high spatial resolution and absence of radiation. Semi-quantitative and quantitative analyses of CMR perfusion are based on signal-intensity curves produced during the first pass of gadolinium contrast. Multiple semi-quantitative and quantitative parameters have been introduced. The diagnostic performance of these parameters varies extensively among studies, and standardized protocols are lacking. This study aims to determine the diagnostic accuracy of semi-quantitative and quantitative CMR perfusion parameters, compared to multiple reference standards. Pubmed, WebOfScience, and Embase were systematically searched using predefined criteria (3272 articles). A check for duplicates was performed (1967 articles). Eligibility and relevance of the articles were determined by two reviewers using predefined criteria. The primary data extraction was performed independently by two researchers with the use of a predefined template. Differences in extracted data were resolved by discussion between the two researchers. The quality of the included studies was assessed using the 'Quality Assessment of Diagnostic Accuracy Studies Tool' (QUADAS-2). True positives, false positives, true negatives, and false negatives were extracted or calculated from the articles. The principal summary measures used to assess diagnostic accuracy were sensitivity, specificity, and area under the receiver operating curve (AUC). Data were pooled according to analysis territory, reference standard, and perfusion parameter. Twenty-two articles were eligible based on the predefined study eligibility criteria. The pooled diagnostic accuracy for segment-, territory-, and patient-based analyses showed good diagnostic performance, with sensitivities of 0.88, 0.82, and 0.83, specificities of 0.72, 0.83, and 0.76, and AUCs of 0.90, 0.84, and 0.87, respectively. In per territory

  11. Quantitative imaging biomarkers: a review of statistical methods for technical performance assessment.

    PubMed

    Raunig, David L; McShane, Lisa M; Pennello, Gene; Gatsonis, Constantine; Carson, Paul L; Voyvodic, James T; Wahl, Richard L; Kurland, Brenda F; Schwarz, Adam J; Gönen, Mithat; Zahlmann, Gudrun; Kondratovich, Marina V; O'Donnell, Kevin; Petrick, Nicholas; Cole, Patricia E; Garra, Brian; Sullivan, Daniel C

    2015-02-01

    Technological developments and greater rigor in the quantitative measurement of biological features in medical images have given rise to an increased interest in using quantitative imaging biomarkers to measure changes in these features. Critical to the performance of a quantitative imaging biomarker in preclinical or clinical settings are three primary metrology areas of interest: measurement linearity and bias, repeatability, and the ability to consistently reproduce equivalent results when conditions change, as would be expected in any clinical trial. Unfortunately, performance studies to date differ greatly in the designs, analysis methods, and metrics used to assess a quantitative imaging biomarker for clinical use. It is therefore difficult or impossible to integrate results from different studies or to use reported results to design studies. The Radiological Society of North America and the Quantitative Imaging Biomarker Alliance, with technical, radiological, and statistical experts, developed a set of technical performance analysis methods, metrics, and study designs that provide terminology, metrics, and methods consistent with widely accepted metrological standards. This document provides a consistent framework for the conduct and evaluation of quantitative imaging biomarker performance studies so that results from multiple studies can be compared, contrasted, or combined.
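One of the metrology areas named above, repeatability, is commonly summarized by the repeatability coefficient RC = 1.96·√2·wSD: the value below which the absolute difference between two repeated measurements on the same subject is expected to fall about 95% of the time. A minimal sketch of that standard calculation from test-retest pairs (not code from the reviewed paper):

```python
import math

def repeatability_coefficient(test, retest):
    """Repeatability coefficient RC = 1.96 * sqrt(2) * wSD.

    wSD is the within-subject standard deviation estimated from
    test-retest pairs: wSD^2 = sum(d_i^2) / (2n) over paired differences.
    """
    diffs = [a - b for a, b in zip(test, retest)]
    wsd = math.sqrt(sum(d * d for d in diffs) / (2 * len(diffs)))
    return 1.96 * math.sqrt(2) * wsd
```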

  12. Promoting the safety performance of industrial radiography using a quantitative assessment system.

    PubMed

    Kardan, M R; Mianji, F A; Rastkhah, N; Babakhani, A; Azad, S Borhan

    2006-12-01

    The increasing number of industrial radiographers and their considerable occupational exposure has been one of the main concerns of the Iran Nuclear Regulatory Authority (INRA) in recent years. In 2002, a quantitative system of evaluating the safety performance of licensees and a complementary enforcement system was introduced by the National Radiation Protection Department (NRPD). Each parameter of the practice is given a weighting factor according to its importance to safety. Assessment of the licensees is done quantitatively by summing up their scores using prepared tables. Implementing this system of evaluation showed a considerable decrease in deficiencies in the various centres. Tables are updated regularly as a result of findings during the inspections. This system is used in addition to enforcement to promote safety performance and to increase the culture of safety in industrial radiography.
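The scoring scheme above, weighting each practice parameter by its importance to safety and summing the scores, could be sketched generically as follows. The actual NRPD tables and weights are not given in the record, so the item names and the normalization used here are illustrative assumptions.

```python
def safety_score(item_scores, weights):
    """Weighted safety-performance score: each inspected item's score is
    scaled by its safety-importance weighting factor, summed, and
    normalised by the total weight. Items and weights are illustrative."""
    total_weight = sum(weights.values())
    return sum(weights[k] * item_scores[k] for k in weights) / total_weight
```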

  13. A Stereological Method for the Quantitative Evaluation of Cartilage Repair Tissue.

    PubMed

    Foldager, Casper Bindzus; Nyengaard, Jens Randel; Lind, Martin; Spector, Myron

    2015-04-01

    To implement stereological principles to develop an easily applicable algorithm for unbiased and quantitative evaluation of cartilage repair. Design-unbiased sampling was performed by systematically sectioning the defect perpendicular to the joint surface in parallel planes, providing 7 to 10 hematoxylin-eosin stained histological sections. Counting windows were systematically selected and converted into image files (40-50 per defect). The quantification was performed by two-step point counting: (1) calculation of defect volume and (2) quantitative analysis of tissue composition. Step 2 was performed by assigning each point to one of the following categories based on validated and easily distinguishable morphological characteristics: (1) hyaline cartilage (rounded cells in lacunae in hyaline matrix), (2) fibrocartilage (rounded cells in lacunae in fibrous matrix), (3) fibrous tissue (elongated cells in fibrous tissue), (4) bone, (5) scaffold material, and (6) others. The ability to discriminate between the tissue types was determined using conventional or polarized light microscopy, and the interobserver variability was evaluated. We describe the application of the stereological method. In the example, we assessed the defect repair tissue volume to be 4.4 mm³ (CE = 0.01). The tissue fractions were subsequently evaluated. Polarized light illumination of the slides improved discrimination between hyaline cartilage and fibrocartilage and increased the interobserver agreement compared with conventional transmitted light. We have applied a design-unbiased method for quantitative evaluation of cartilage repair, and we propose this algorithm as a natural supplement to existing descriptive semiquantitative scoring systems. We also propose that polarized light is effective for discrimination between hyaline cartilage and fibrocartilage.
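The two-step point counting described above follows standard stereology: a Cavalieri estimate of defect volume from point counts over parallel sections, then category fractions from a second count. A minimal sketch of both steps (section spacing and grid constants are illustrative, not taken from the paper):

```python
def cavalieri_volume(points_per_section, spacing_mm, area_per_point_mm2):
    """Cavalieri estimator: V = section spacing x area represented by one
    grid point x total number of points counted over all sections."""
    return spacing_mm * area_per_point_mm2 * sum(points_per_section)

def tissue_fractions(category_counts):
    """Second counting step: fraction of grid points hitting each tissue
    category (hyaline cartilage, fibrocartilage, fibrous tissue, ...)."""
    total = sum(category_counts.values())
    return {name: n / total for name, n in category_counts.items()}
```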

  14. Imaging Performance of Quantitative Transmission Ultrasound

    PubMed Central

    Lenox, Mark W.; Wiskin, James; Lewis, Matthew A.; Darrouzet, Stephen; Borup, David; Hsieh, Scott

    2015-01-01

    Quantitative Transmission Ultrasound (QTUS) is a tomographic transmission ultrasound modality that is capable of generating 3D speed-of-sound maps of objects in the field of view. It performs this measurement by propagating a plane wave through the medium from a transmitter on one side of a water tank to a high resolution receiver on the opposite side. This information is then used via inverse scattering to compute a speed map. In addition, the presence of reflection transducers allows the creation of a high resolution, spatially compounded reflection map that is natively coregistered to the speed map. A prototype QTUS system was evaluated for measurement and geometric accuracy as well as for the ability to correctly determine speed of sound. PMID:26604918

  15. A Computer-Aided Analysis Method of SPECT Brain Images for Quantitative Treatment Monitoring: Performance Evaluations and Clinical Applications.

    PubMed

    Zheng, Xiujuan; Wei, Wentao; Huang, Qiu; Song, Shaoli; Wan, Jieqing; Huang, Gang

    2017-01-01

    Objective and quantitative analysis of longitudinal single photon emission computed tomography (SPECT) images is important for the treatment monitoring of brain disorders. Therefore, a computer-aided analysis (CAA) method is introduced to extract a change-rate map (CRM) as a parametric image for quantifying the changes of regional cerebral blood flow (rCBF) in longitudinal SPECT brain images. The performance of the CAA-CRM approach in treatment monitoring is evaluated by computer simulations and clinical applications. The results of the computer simulations show that the derived CRMs have high similarity with their ground truths when the lesion size is larger than the system spatial resolution and the change rate is higher than 20%. In the clinical applications, the CAA-CRM approach is used to assess the treatment of 50 patients with brain ischemia. The results demonstrate that the CAA-CRM approach achieves 93.4% accuracy in localizing recovered regions. Moreover, the quantitative indexes of recovered regions derived from the CRM are all significantly different among the groups and highly correlated with experienced clinical diagnosis. In conclusion, the proposed CAA-CRM approach provides a convenient solution for generating a parametric image and deriving quantitative indexes from longitudinal SPECT brain images for treatment monitoring.
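The record does not give the exact definition of the change-rate map. The obvious reading is a voxel-wise relative change between baseline and follow-up scans, sketched below as an assumption rather than the paper's formula:

```python
def change_rate_map(baseline, followup, eps=1e-6):
    """Voxel-wise change-rate image: (follow-up - baseline) / baseline,
    with eps guarding near-zero baseline voxels. An assumed definition,
    not necessarily the paper's exact CRM formula."""
    return [(f - b) / b if abs(b) > eps else 0.0
            for b, f in zip(baseline, followup)]
```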

  16. A Stereological Method for the Quantitative Evaluation of Cartilage Repair Tissue

    PubMed Central

    Nyengaard, Jens Randel; Lind, Martin; Spector, Myron

    2015-01-01

    Objective To implement stereological principles to develop an easily applicable algorithm for unbiased and quantitative evaluation of cartilage repair. Design Design-unbiased sampling was performed by systematically sectioning the defect perpendicular to the joint surface in parallel planes, providing 7 to 10 hematoxylin–eosin stained histological sections. Counting windows were systematically selected and converted into image files (40-50 per defect). The quantification was performed by two-step point counting: (1) calculation of defect volume and (2) quantitative analysis of tissue composition. Step 2 was performed by assigning each point to one of the following categories based on validated and easily distinguishable morphological characteristics: (1) hyaline cartilage (rounded cells in lacunae in hyaline matrix), (2) fibrocartilage (rounded cells in lacunae in fibrous matrix), (3) fibrous tissue (elongated cells in fibrous tissue), (4) bone, (5) scaffold material, and (6) others. The ability to discriminate between the tissue types was determined using conventional or polarized light microscopy, and the interobserver variability was evaluated. Results We describe the application of the stereological method. In the example, we assessed the defect repair tissue volume to be 4.4 mm³ (CE = 0.01). The tissue fractions were subsequently evaluated. Polarized light illumination of the slides improved discrimination between hyaline cartilage and fibrocartilage and increased the interobserver agreement compared with conventional transmitted light. Conclusion We have applied a design-unbiased method for quantitative evaluation of cartilage repair, and we propose this algorithm as a natural supplement to existing descriptive semiquantitative scoring systems. We also propose that polarized light is effective for discrimination between hyaline cartilage and fibrocartilage. PMID:26069715

  17. Quantitative diagnostic performance of myocardial perfusion SPECT with attenuation correction in women.

    PubMed

    Wolak, Arik; Slomka, Piotr J; Fish, Mathews B; Lorenzo, Santiago; Berman, Daniel S; Germano, Guido

    2008-06-01

    Attenuation correction (AC) for myocardial perfusion SPECT (MPS) has not been evaluated separately in women, despite specific considerations in this group due to breast photon attenuation. We aimed to evaluate the performance of AC in women by using automated quantitative analysis of MPS to avoid any bias. Consecutive female patients were considered: 134 with a low likelihood (LLk) of coronary artery disease (CAD) and 114 with coronary angiography performed within less than 3 mo of MPS, all referred for rest-stress electrocardiography-gated 99mTc-sestamibi MPS with AC. Imaging data were evaluated for contour quality control. An additional 50 LLk studies in women were used to create equivalent normal limits for studies with AC and with no correction (NC). An experienced technologist unaware of the angiography and other results performed the contour quality control. All other processing was performed in a fully automated manner. Quantitative analysis was performed with the Cedars-Sinai myocardial perfusion analysis package. All automated segmental analyses were performed with the 17-segment, 5-point American Heart Association model. Summed stress scores (SSS) ≥3 were considered abnormal. CAD (≥70% stenosis) was present in 69 of 114 patients (60%). The normalcy rates were 93% for both NC and AC studies. The SSS for patients with CAD and without CAD for NC versus AC were 10.0 ± 9.0 (mean ± SD) versus 10.2 ± 8.5 and 1.6 ± 2.3 versus 1.8 ± 2.5, respectively; P was not significant (NS) for all comparisons of NC versus AC. The SSS for LLk patients for NC versus AC were 0.51 ± 1.0 versus 0.6 ± 1.1, respectively; P was NS. The specificity for both NC and AC was 73%. The sensitivities for NC and AC were 80% and 81%, respectively, and the accuracies for NC and AC were 77% and 78%, respectively; P was NS for both comparisons. There are no significant diagnostic differences between automated quantitative MPS analyses performed in studies

  18. Objective evaluation of reconstruction methods for quantitative SPECT imaging in the absence of ground truth.

    PubMed

    Jha, Abhinav K; Song, Na; Caffo, Brian; Frey, Eric C

    2015-04-13

    Quantitative single-photon emission computed tomography (SPECT) imaging is emerging as an important tool in clinical studies and biomedical research. There is thus a need for optimization and evaluation of systems and algorithms that are being developed for quantitative SPECT imaging. An appropriate objective method to evaluate these systems is by comparing their performance in the end task that is required in quantitative SPECT imaging, such as estimating the mean activity concentration in a volume of interest (VOI) in a patient image. This objective evaluation can be performed if the true value of the estimated parameter is known, i.e. we have a gold standard. However, very rarely is this gold standard known in human studies. Thus, no-gold-standard techniques to optimize and evaluate systems and algorithms in the absence of a gold standard are required. In this work, we developed a no-gold-standard technique to objectively evaluate reconstruction methods used in quantitative SPECT when the parameter to be estimated is the mean activity concentration in a VOI. We studied the performance of the technique with realistic simulated image data generated from an object database consisting of five phantom anatomies with all possible combinations of five sets of organ uptakes, where each anatomy consisted of eight different organ VOIs. Results indicate that the method provided accurate ranking of the reconstruction methods. We also demonstrated the application of consistency checks to test the no-gold-standard output.

  19. High Resolution Qualitative and Quantitative MR Evaluation of the Glenoid Labrum

    PubMed Central

    Iwasaki, Kenyu; Tafur, Monica; Chang, Eric Y.; Statum, Sheronda; Biswas, Reni; Tran, Betty; Bae, Won C.; Du, Jiang; Bydder, Graeme M.; Chung, Christine B.

    2015-01-01

    Objective To implement qualitative and quantitative MR sequences for the evaluation of labral pathology. Methods Six glenoid labra were dissected, and the anterior and posterior portions were divided into normal, mildly degenerated, and severely degenerated groups using gross and MR findings. Qualitative evaluation was performed using T1-weighted, proton density-weighted (PD), spoiled gradient echo (SPGR), and ultrashort echo time (UTE) sequences. Quantitative evaluation included T2 and T1rho measurements as well as T1, T2*, and T1rho measurements acquired with UTE techniques. Results SPGR and UTE sequences best demonstrated labral fiber structure. Degenerated labra had a tendency toward decreased T1 values, increased T2/T2* values, and increased T1rho values. T2* values obtained with the UTE sequence allowed delineation between the normal, mildly degenerated, and severely degenerated groups (p < 0.001). Conclusion Quantitative T2* measurements acquired with the UTE technique are useful for distinguishing between normal, mildly degenerated, and severely degenerated labra. PMID:26359581

  20. Utility of DWI with quantitative ADC values in ovarian tumors: a meta-analysis of diagnostic test performance.

    PubMed

    Pi, Shan; Cao, Rong; Qiang, Jin Wei; Guo, Yan Hui

    2018-01-01

    Background Diffusion-weighted imaging (DWI) and quantitative apparent diffusion coefficient (ADC) values are widely used in the differential diagnosis of ovarian tumors. Purpose To assess the diagnostic performance of quantitative ADC values in ovarian tumors. Material and Methods PubMed, Embase, the Cochrane Library, and local databases were searched for studies assessing ovarian tumors using quantitative ADC values. We quantitatively analyzed the diagnostic performances for two clinical problems: benign vs. malignant tumors and borderline vs. malignant tumors. We evaluated diagnostic performances by the pooled sensitivity and specificity values and by summary receiver operating characteristic (SROC) curves. Subgroup analyses were used to analyze study heterogeneity. Results From the 742 studies identified in the search results, 16 studies met our inclusion criteria. A total of ten studies evaluated malignant vs. benign ovarian tumors and six studies assessed malignant vs. borderline ovarian tumors. Regarding the diagnostic accuracy of quantitative ADC values for distinguishing between malignant and benign ovarian tumors, the pooled sensitivity and specificity values were 0.91 and 0.91, respectively. The area under the SROC curve (AUC) was 0.96. For differentiating borderline from malignant tumors, the pooled sensitivity and specificity values were 0.89 and 0.79, and the AUC was 0.91. The methodological quality of the included studies was moderate. Conclusion Quantitative ADC values could serve as useful preoperative markers for predicting the nature of ovarian tumors. Nevertheless, prospective trials focused on standardized imaging parameters are needed to evaluate the clinical value of quantitative ADC values in ovarian tumors.
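The pooled sensitivities and specificities reported in the meta-analyses above are derived from per-study 2x2 diagnostic counts. As a first approximation one can pool raw counts across studies, although formal meta-analyses (presumably including this one) use bivariate random-effects models rather than the straight summation sketched here:

```python
def pooled_diagnostic_accuracy(studies):
    """Pool 2x2 diagnostic counts across studies by simple summation and
    return (sensitivity, specificity). Each study is a (tp, fp, tn, fn)
    tuple. Straight count pooling is only a first approximation to the
    bivariate random-effects models used in formal meta-analyses."""
    tp = sum(s[0] for s in studies)
    fp = sum(s[1] for s in studies)
    tn = sum(s[2] for s in studies)
    fn = sum(s[3] for s in studies)
    return tp / (tp + fn), tn / (tn + fp)
```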

  1. Quantitative Imaging Biomarkers: A Review of Statistical Methods for Technical Performance Assessment

    PubMed Central

    2017-01-01

    Technological developments and greater rigor in the quantitative measurement of biological features in medical images have given rise to an increased interest in using quantitative imaging biomarkers (QIBs) to measure changes in these features. Critical to the performance of a QIB in preclinical or clinical settings are three primary metrology areas of interest: measurement linearity and bias, repeatability, and the ability to consistently reproduce equivalent results when conditions change, as would be expected in any clinical trial. Unfortunately, performance studies to date differ greatly in design, analysis methods, and the metrics used to assess a QIB for clinical use. It is therefore difficult or impossible to integrate results from different studies or to use reported results to design new studies. The Radiological Society of North America (RSNA) and the Quantitative Imaging Biomarker Alliance (QIBA), together with technical, radiological, and statistical experts, developed a set of technical performance analysis methods, metrics, and study designs that provide terminology, metrics, and methods consistent with widely accepted metrological standards. This document provides a consistent framework for the conduct and evaluation of QIB performance studies so that results from multiple studies can be compared, contrasted, or combined. PMID:24919831
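    Two of these metrology quantities reduce to compact formulas. The sketch below (a simplified reading of standard metrology definitions, not the full QIBA statistical framework) computes bias against a reference standard and the repeatability coefficient RC = 2.77 × wSD from replicate measurements; all values are toy data.

```python
# Bias: mean of (measured - true) over cases.
# Repeatability coefficient: RC = 2.77 * within-subject SD, where
# wSD is pooled over per-subject replicate measurements.
import math

def bias(measured, truth):
    return sum(m - t for m, t in zip(measured, truth)) / len(measured)

def repeatability_coefficient(replicates):
    """replicates: list of per-subject lists of repeated measurements."""
    ss, dof = 0.0, 0
    for reps in replicates:
        mean = sum(reps) / len(reps)
        ss += sum((r - mean) ** 2 for r in reps)
        dof += len(reps) - 1
    wsd = math.sqrt(ss / dof)      # pooled within-subject SD
    return 2.77 * wsd

rc = repeatability_coefficient([[10, 12], [20, 22], [30, 28]])
b = bias([11, 12], [10, 10])
```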

  2. Quantitative framework for prospective motion correction evaluation.

    PubMed

    Pannetier, Nicolas A; Stavrinos, Theano; Ng, Peter; Herbst, Michael; Zaitsev, Maxim; Young, Karl; Matson, Gerald; Schuff, Norbert

    2016-02-01

    The aim was to establish a framework for evaluating the performance of prospective motion correction (PMC) in MRI while accounting for motion variability between scans. A framework was developed to obtain quantitative comparisons between different motion correction setups, considering that varying intrinsic motion patterns between acquisitions can induce bias. Intrinsic motion was accounted for by replaying, in a phantom experiment, the motion trajectories recorded from subjects. T1-weighted MRI on five volunteers and two different marker fixations (mouth guard and nose bridge fixations) were used to test the framework. Two metrics were investigated to quantify the improvement of image quality with PMC. Motion patterns vary between subjects as well as between repeated scans within a subject. This variability can be approximated by replaying the motion in a distinct phantom experiment and used as a covariate in models comparing motion corrections. We show that considering the intrinsic motion alters the statistical significance of comparisons between marker fixations. As an example, two marker fixations, a mouth guard and a nose bridge, were evaluated in terms of their effectiveness for PMC; the mouth guard achieved better PMC performance. Intrinsic motion patterns can bias comparisons between PMC configurations and must be considered for robust evaluations. A framework for evaluating intrinsic motion patterns in PMC is presented. © 2015 Wiley Periodicals, Inc.

  3. Metrics for Performance Evaluation of Patient Exercises during Physical Therapy.

    PubMed

    Vakanski, Aleksandar; Ferguson, Jake M; Lee, Stephen

    2017-06-01

    The article proposes a set of metrics for evaluating patient performance in physical therapy exercises. A taxonomy is employed that classifies the metrics into quantitative and qualitative categories, based on the level of abstraction of the captured motion sequences. Further, the quantitative metrics are classified into model-less and model-based metrics, according to whether the evaluation employs the raw measurements of patient-performed motions or is based on a mathematical model of the motions. The reviewed metrics include root-mean-square distance, Kullback-Leibler divergence, log-likelihood, heuristic consistency, Fugl-Meyer Assessment, and similar measures. The metrics are evaluated on a set of five human motions captured with a Kinect sensor. The metrics can potentially be integrated into a system that employs machine learning for modelling and assessment of the consistency of patient performance in a home-based therapy setting. Automated performance evaluation can overcome the inherent subjectivity of human-performed therapy assessment, increase adherence to prescribed therapy plans, and reduce healthcare costs.
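    Two of the model-less metrics named above have one-line definitions. The sketch below implements them for 1-D motion signals and discrete pose distributions; the inputs are toy data, not the Kinect recordings used in the article.

```python
# Root-mean-square distance between a performed motion sequence and
# a reference, and Kullback-Leibler divergence between two discrete
# probability distributions (e.g. over quantized poses).
import math

def rms_distance(seq, ref):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(seq, ref)) / len(seq))

def kl_divergence(p, q):
    # D(p || q); terms with p_i = 0 contribute zero by convention.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```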

  4. Performance Evaluation and Benchmarking of Intelligent Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Madhavan, Raj; Messina, Elena; Tunstel, Edward

    To design and develop capable, dependable, and affordable intelligent systems, their performance must be measurable. Scientific methodologies for standardization and benchmarking are crucial for quantitatively evaluating the performance of emerging robotic and intelligent systems technologies. There is currently no accepted standard for quantitatively measuring the performance of these systems against user-defined requirements; and furthermore, there is no consensus on what objective evaluation procedures need to be followed to understand the performance of these systems. The lack of reproducible and repeatable test methods has precluded researchers working towards a common goal from exchanging and communicating results, inter-comparing system performance, and leveraging previous work that could otherwise avoid duplication and expedite technology transfer. Currently, this lack of cohesion in the community hinders progress in many domains, such as manufacturing, service, healthcare, and security. By providing the research community with access to standardized tools, reference data sets, and open source libraries of solutions, researchers and consumers will be able to evaluate the cost and benefits associated with intelligent systems and associated technologies. In this vein, the edited book volume addresses performance evaluation and metrics for intelligent systems, in general, while emphasizing the need and solutions for standardized methods. To the knowledge of the editors, there is not a single book on the market that is solely dedicated to the subject of performance evaluation and benchmarking of intelligent systems. Even books that address this topic do so only marginally or are out of date. The research work presented in this volume fills this void by drawing from the experiences and insights of experts gained both through theoretical development and practical implementation of intelligent systems in a variety of diverse application domains. The book

  5. Quantitative method of medication system interface evaluation.

    PubMed

    Pingenot, Alleene Anne; Shanteau, James; Pingenot, James D F

    2007-01-01

    The objective of this study was to develop a quantitative method of evaluating the user interface for medication system software. A detailed task analysis provided a description of user goals and essential activity. A structural fault analysis was used to develop a detailed description of the system interface. Nurses experienced with use of the system under evaluation provided estimates of failure rates for each point in this simplified fault tree. Means of the estimated failure rates provided quantitative data for the fault analysis. The authors note that, although failures of steps in the program were frequent, participants reported numerous methods of working around these failures, so that overall system failure was rare. However, frequent process failure can affect the time required for processing medications, making a system inefficient. This method of interface analysis, called the Software Efficiency Evaluation and Fault Identification Method, provides quantitative information with which prototypes can be compared and problems within an interface identified.
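    The roll-up from step-level estimates to an overall failure probability can be sketched as below. This is a generic fault-tree OR-gate calculation assuming independent steps and a uniform workaround success rate, not the exact SEEFIM computation, and all rates are invented.

```python
# Mean the elicited failure-rate estimates per step, then combine
# independent steps through an OR gate: P(fail) = 1 - prod(1 - p_i).
# Workarounds are modeled as a recovery probability applied to each
# step failure (hypothetical modeling choice, not from the paper).

def step_rate(estimates):
    return sum(estimates) / len(estimates)

def or_gate(step_rates, workaround_success=0.0):
    effective = [p * (1 - workaround_success) for p in step_rates]
    prob_ok = 1.0
    for p in effective:
        prob_ok *= (1 - p)
    return 1 - prob_ok
```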

  6. Reliability and Validity of the Professional Counseling Performance Evaluation

    ERIC Educational Resources Information Center

    Shepherd, J. Brad; Britton, Paula J.; Kress, Victoria E.

    2008-01-01

    The definition and measurement of counsellor trainee competency is an issue that has received increased attention yet lacks quantitative study. This research evaluates item responses, scale reliability and intercorrelations, interrater agreement, and criterion-related validity of the Professional Performance Fitness Evaluation/Professional…

  7. Diagnostic performance of quantitative shear wave elastography in the evaluation of solid breast masses: determination of the most discriminatory parameter.

    PubMed

    Au, Frederick Wing-Fai; Ghai, Sandeep; Moshonov, Hadas; Kahn, Harriette; Brennan, Cressida; Dua, Hemi; Crystal, Pavel

    2014-09-01

    The purpose of this article is to assess the diagnostic performance of quantitative shear wave elastography in the evaluation of solid breast masses and to determine the most discriminatory parameter. B-mode ultrasound and shear wave elastography were performed before core biopsy of 123 masses in 112 women. The diagnostic performance of ultrasound and quantitative shear wave elastography parameters (mean elasticity, maximum elasticity, and elasticity ratio) were compared. The added effect of shear wave elastography on the performance of ultrasound was determined. The mean elasticity, maximum elasticity, and elasticity ratio were 24.8 kPa, 30.3 kPa, and 1.90, respectively, for 79 benign masses and 130.7 kPa, 154.9 kPa, and 11.52, respectively, for 44 malignant masses (p < 0.001). The optimal cutoff value for each parameter was determined to be 42.5 kPa, 46.7 kPa, and 3.56, respectively. The AUC of each shear wave elastography parameter was higher than that of ultrasound (p < 0.001); the AUC value for the elasticity ratio (0.943) was the highest. By adding shear wave elastography parameters to the evaluation of BI-RADS category 4a masses, about 90% of masses could be downgraded to BI-RADS category 3. The numbers of downgraded masses were 40 of 44 (91%) for mean elasticity, 39 of 44 (89%) for maximum elasticity, and 42 of 44 (95%) for elasticity ratio. The numbers of correctly downgraded masses were 39 of 40 (98%) for mean elasticity, 38 of 39 (97%) for maximum elasticity, and 41 of 42 (98%) for elasticity ratio. There was improvement in the diagnostic performance of ultrasound of mass assessment with shear wave elastography parameters added to BI-RADS category 4a masses compared with ultrasound alone. Combined ultrasound and elasticity ratio had the highest improvement, from 35.44% to 87.34% for specificity, from 45.74% to 80.77% for positive predictive value, and from 57.72% to 90.24% for accuracy (p < 0.0001). The AUC of combined ultrasound and elasticity ratio (0
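    The AUC of a single continuous parameter such as the elasticity ratio can be obtained nonparametrically as the Mann-Whitney concordance probability. The sketch below uses invented elasticity values, not the study's measurements.

```python
# AUC as the probability that a randomly chosen malignant case
# scores higher than a randomly chosen benign case (ties count 0.5).

def auc(benign, malignant):
    pairs = concordant = 0
    for m in malignant:
        for b in benign:
            pairs += 1
            if m > b:
                concordant += 1
            elif m == b:
                concordant += 0.5
    return concordant / pairs

benign = [1.2, 1.8, 2.1, 3.0]       # hypothetical elasticity ratios
malignant = [5.5, 8.0, 11.5, 14.2]
print(auc(benign, malignant))       # → 1.0 (perfect separation)
```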

  8. Quantitative evaluation of performance of three-dimensional printed lenses

    NASA Astrophysics Data System (ADS)

    Gawedzinski, John; Pawlowski, Michal E.; Tkaczyk, Tomasz S.

    2017-08-01

    We present an analysis of the shape, surface quality, and imaging capabilities of custom three-dimensional (3-D) printed lenses. 3-D printing technology enables lens prototypes to be fabricated without restrictions on surface geometry. Thus, spherical, aspherical, and rotationally nonsymmetric lenses can be manufactured in an integrated production process. This technique serves as a noteworthy alternative to multistage, labor-intensive, abrasive processes, such as grinding, polishing, and diamond turning. Here, we evaluate the quality of lenses fabricated by Luxexcel using patented Printoptical© technology that is based on an inkjet printing technique by comparing them to lenses made with traditional glass processing technologies (grinding, polishing, etc.). The surface geometry and roughness of the lenses were evaluated using white-light and Fizeau interferometers. We have compared peak-to-valley wavefront deviation, root mean square (RMS) wavefront error, radii of curvature, and the arithmetic roughness average (Ra) profile of plastic and glass lenses. In addition, the imaging performance of selected pairs of lenses was tested using a 1951 USAF resolution target. The results indicate performance of 3-D printed optics that could be manufactured with surface roughness comparable to that of injection molded lenses (Ra<20 nm). The RMS wavefront error of 3-D printed prototypes was at a minimum 18.8 times larger than equivalent glass prototypes for a lens with a 12.7 mm clear aperture, but, when measured within 63% of its clear aperture, the 3-D printed components' RMS wavefront error was comparable to glass lenses.
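    The two surface figures of merit compared above reduce to simple formulas. The sketch below computes the RMS deviation (about the mean) of a wavefront map and the arithmetic roughness average Ra of a surface profile, using toy height values rather than interferometer data.

```python
# RMS wavefront error: standard deviation of the wavefront map
# about its mean. Ra: mean absolute deviation of a surface profile
# from its mean line.
import math

def rms_wavefront_error(deviations):
    mean = sum(deviations) / len(deviations)
    return math.sqrt(sum((d - mean) ** 2 for d in deviations) / len(deviations))

def roughness_ra(profile):
    mean = sum(profile) / len(profile)
    return sum(abs(p - mean) for p in profile) / len(profile)
```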

  9. Quantitative evaluation of performance of 3D printed lenses

    PubMed Central

    Gawedzinski, John; Pawlowski, Michal E.; Tkaczyk, Tomasz S.

    2017-01-01

    We present an analysis of the shape, surface quality, and imaging capabilities of custom 3D printed lenses. 3D printing technology enables lens prototypes to be fabricated without restrictions on surface geometry. Thus, spherical, aspherical and rotationally non-symmetric lenses can be manufactured in an integrated production process. This technique serves as a noteworthy alternative to multistage, labor-intensive, abrasive processes such as grinding, polishing and diamond turning. Here, we evaluate the quality of lenses fabricated by Luxexcel using patented Printoptical© technology that is based on an inkjet printing technique by comparing them to lenses made with traditional glass processing technologies (grinding, polishing etc.). The surface geometry and roughness of the lenses were evaluated using white-light and Fizeau interferometers. We have compared peak-to-valley wavefront deviation, root-mean-squared wavefront error, radii of curvature and the arithmetic average of the roughness profile (Ra) of plastic and glass lenses. Additionally, the imaging performance of selected pairs of lenses was tested using a 1951 USAF resolution target. The results indicate performance of 3D printed optics that could be manufactured with surface roughness comparable to that of injection molded lenses (Ra < 20 nm). The RMS wavefront error of 3D printed prototypes was at a minimum 18.8 times larger than equivalent glass prototypes for a lens with a 12.7 mm clear aperture, but when measured within 63% of its clear aperture, 3D printed components' RMS wavefront error was comparable to glass lenses. PMID:29238114

  10. Chinese Middle School Teachers' Preferences Regarding Performance Evaluation Measures

    ERIC Educational Resources Information Center

    Liu, Shujie; Xu, Xianxuan; Stronge, James H.

    2016-01-01

    Teacher performance evaluation currently is receiving unprecedented attention from policy makers, scholars, and practitioners worldwide. This study is one of the few studies of teacher perceptions regarding teacher performance measures that focus on China. We employed a quantitative dominant mixed research design to investigate Chinese teachers'…

  11. Quantitative analysis of the anti-noise performance of an m-sequence in an electromagnetic method

    NASA Astrophysics Data System (ADS)

    Yuan, Zhe; Zhang, Yiming; Zheng, Qijia

    2018-02-01

    An electromagnetic method with a transmitted waveform coded by an m-sequence achieved better anti-noise performance compared to the conventional manner with a square-wave. The anti-noise performance of the m-sequence varied with multiple coding parameters; hence, a quantitative analysis of the anti-noise performance for m-sequences with different coding parameters was required to optimize them. This paper proposes the concept of an identification system, with the identified Earth impulse response obtained by measuring the system output with the input of the voltage response. A quantitative analysis of the anti-noise performance of the m-sequence was achieved by analyzing the amplitude-frequency response of the corresponding identification system. The effects of the coding parameters on the anti-noise performance are summarized by numerical simulation, and their optimization is further discussed in our conclusions; the validity of the conclusions is further verified by field experiment. The quantitative analysis method proposed in this paper provides a new insight into the anti-noise mechanism of the m-sequence, and could be used to evaluate the anti-noise performance of artificial sources in other time-domain exploration methods, such as the seismic method.
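    An m-sequence of the kind used as the transmitted waveform can be generated with a linear-feedback shift register. The minimal sketch below uses the primitive polynomial x⁴ + x³ + 1 (taps 4 and 3) purely as an example of the construction; the coding parameters studied in the paper are not reproduced here.

```python
# Fibonacci LFSR generator for a maximal-length (m-) sequence.
# With a primitive polynomial of degree n, the period is 2^n - 1
# and the sequence is balanced (2^(n-1) ones, 2^(n-1) - 1 zeros).

def m_sequence(taps, n):
    state = [1] * n                  # any nonzero seed works
    out = []
    for _ in range(2 ** n - 1):
        out.append(state[-1])        # output the last register bit
        fb = 0
        for t in taps:               # feedback = XOR of tapped bits
            fb ^= state[t - 1]
        state = [fb] + state[:-1]    # shift right, insert feedback
    return out

seq = m_sequence((4, 3), 4)
print(len(seq), sum(seq))            # → 15 8  (period 15, 8 ones)
```

A useful check of the construction is the two-valued autocorrelation of m-sequences: mapped to ±1, the circular autocorrelation at any nonzero lag is exactly -1.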

  12. Establishment of a Quantitative Medical Technology Evaluation System and Indicators within Medical Institutions.

    PubMed

    Wu, Suo-Wei; Chen, Tong; Pan, Qi; Wei, Liang-Yu; Wang, Qin; Li, Chao; Song, Jing-Chen; Luo, Ji

    2018-06-05

    The development and application of medical technologies reflect the medical quality and clinical capacity of a hospital. They are also an effective approach to upgrading medical services and core competitiveness among medical institutions. This study aimed to build a quantitative medical technology evaluation system, through a questionnaire survey within medical institutions, to assess medical technologies more objectively and accurately, promote the management of medical technology quality, and ensure the safety of medical operations in hospitals. A two-level quantitative medical technology evaluation system was built through a two-round questionnaire survey of chosen experts. The Delphi method was applied in identifying the structure of the evaluation system and its indicators. The experts' judgments on the indicators were adopted in building the matrix so that the weight coefficient and maximum eigenvalue (λ max), consistency index (CI), and random consistency ratio (CR) could be obtained and collected. The results were verified through consistency tests, and the index weight coefficient of each indicator was calculated through the analytic hierarchy process. Twenty-six experts of different medical fields were involved in the questionnaire survey, 25 of whom successfully responded to the two-round research. Altogether, 4 primary indicators (safety, effectiveness, innovativeness, and benefits), as well as 13 secondary indicators, were included in the evaluation system. The matrix was built to conduct the λ max, CI, and CR of each expert in the survey; the index weight coefficients of the primary indicators were 0.33, 0.28, 0.27, and 0.12, respectively, and the index weight coefficients of the secondary indicators were calculated accordingly. As the two-round questionnaire survey of experts and statistical analysis were performed and credibility of the results was verified through consistency evaluation test, the
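    The eigenvalue arithmetic described above (weights, λ max, CI, CR) can be sketched in a few lines. The implementation below uses plain power iteration and a short random-index table; the comparison matrix is a made-up, perfectly consistent example, not the survey data.

```python
# Analytic hierarchy process: weights = principal eigenvector of the
# pairwise comparison matrix A; lambda_max via the Rayleigh quotient;
# CI = (lambda_max - n) / (n - 1); CR = CI / RI (Saaty's random index).

def ahp(A, iters=200):
    n = len(A)
    v = [1.0 / n] * n
    for _ in range(iters):                       # power iteration
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        v = [x / s for x in w]
    Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    lam = sum(Av[i] / v[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]          # random consistency index
    return v, lam, ci, ci / ri

true_w = [0.4, 0.3, 0.2, 0.1]                    # hypothetical weights
A = [[wi / wj for wj in true_w] for wi in true_w]  # consistent matrix
weights, lam, ci, cr = ahp(A)
```

For a perfectly consistent matrix, λ max equals n, so CI and CR are zero; survey-derived matrices are accepted when CR < 0.1.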

  13. [Reconstituting evaluation methods based on both qualitative and quantitative paradigms].

    PubMed

    Miyata, Hiroaki; Okubo, Suguru; Yoshie, Satoru; Kai, Ichiro

    2011-01-01

    Debate about the relationship between quantitative and qualitative paradigms is often muddled and confusing, and the clutter of terms and arguments has left the concepts obscure and unrecognizable. In this study we conducted a content analysis of evaluation methods for qualitative healthcare research. We extracted descriptions of four types of evaluation paradigm (validity/credibility, reliability/dependability, objectivity/confirmability, and generalizability/transferability), and classified them into subcategories. In quantitative research, there have been many evaluation methods based on qualitative paradigms, and vice versa. Thus, it may not be useful to treat evaluation methods of the qualitative paradigm as isolated from those of quantitative methods. Choosing practical evaluation methods based on the situation and prior conditions of each study is an important approach for researchers.

  14. Quantitative criteria for assessment of gamma-ray imager performance

    NASA Astrophysics Data System (ADS)

    Gottesman, Steve; Keller, Kristi; Malik, Hans

    2015-08-01

    In recent years, gamma-ray imagers such as the GammaCamTM and Polaris have demonstrated good imaging performance in the field. Imager performance is often summarized as "resolution", either angular or spatial at some distance from the imager; however, the definition of resolution is not always related to the ability to image an object. It is difficult to quantitatively compare imagers without a common definition of image quality. This paper examines three categories of definition: point source, line source, and area source. It discusses the details of those definitions and which ones are more relevant for different situations. Metrics such as full width at half maximum (FWHM), variations on the Rayleigh criterion, and metrics analogous to the National Imagery Interpretability Rating Scale (NIIRS) are discussed. Performance against these metrics is evaluated for a high-resolution coded aperture imager modeled using Monte Carlo N-Particle (MCNP), and for a medium-resolution imager measured in the lab.

  15. Student evaluations of teaching: teaching quantitative courses can be hazardous to one's career.

    PubMed

    Uttl, Bob; Smibert, Dylan

    2017-01-01

    Anonymous student evaluations of teaching (SETs) are used by colleges and universities to measure teaching effectiveness and to make decisions about faculty hiring, firing, re-appointment, promotion, tenure, and merit pay. Although numerous studies have found that SETs correlate with various teaching effectiveness irrelevant factors (TEIFs) such as subject, class size, and grading standards, it has been argued that such correlations are small and do not undermine the validity of SETs as measures of professors' teaching effectiveness. However, previous research has generally used inappropriate parametric statistics and effect sizes to examine and to evaluate the significance of TEIFs on personnel decisions. Accordingly, we examined the influence of quantitative vs. non-quantitative courses on SET ratings and SET-based personnel decisions using 14,872 publicly posted class evaluations, where each evaluation represents a summary of SET ratings provided by individual students responding in each class. In total, 325,538 individual student evaluations from a US mid-size university contributed to these class evaluations. The results demonstrate that class subject (math vs. English) is strongly associated with SET ratings, has a substantial impact on professors being labeled satisfactory vs. unsatisfactory and excellent vs. non-excellent, and the impact varies substantially depending on the criteria used to classify professors as satisfactory vs. unsatisfactory. Professors teaching quantitative courses are far more likely not to receive tenure, promotion, and/or merit pay when their performance is evaluated against common standards.

  16. Quantitative performance evaluation of 124I PET/MRI lesion dosimetry in differentiated thyroid cancer

    NASA Astrophysics Data System (ADS)

    Wierts, R.; Jentzen, W.; Quick, H. H.; Wisselink, H. J.; Pooters, I. N. A.; Wildberger, J. E.; Herrmann, K.; Kemerink, G. J.; Backes, W. H.; Mottaghy, F. M.

    2018-01-01

    The aim was to investigate the quantitative performance of 124I PET/MRI for pre-therapy lesion dosimetry in differentiated thyroid cancer (DTC). Phantom measurements were performed on a PET/MRI system (Biograph mMR, Siemens Healthcare) using 124I and 18F. The PET calibration factor and the influence of radiofrequency coil attenuation were determined using a cylindrical phantom homogeneously filled with radioactivity. The calibration factor was 1.00  ±  0.02 for 18F and 0.88  ±  0.02 for 124I. Near the radiofrequency surface coil an underestimation of less than 5% in radioactivity concentration was observed. Soft-tissue sphere recovery coefficients were determined using the NEMA IEC body phantom. Recovery coefficients were systematically higher for 18F than for 124I. In addition, the six spheres of the phantom were segmented using a PET-based iterative segmentation algorithm. For all 124I measurements, the deviations in segmented lesion volume and mean radioactivity concentration relative to the actual values were smaller than 15% and 25%, respectively. The effect of MR-based attenuation correction (three- and four-segment µ-maps) on bone lesion quantification was assessed using radioactive spheres filled with a K2HPO4 solution mimicking bone lesions. The four-segment µ-map resulted in an underestimation of the imaged radioactivity concentration of up to 15%, whereas the three-segment µ-map resulted in an overestimation of up to 10%. For twenty lesions identified in six patients, a comparison of 124I PET/MRI to PET/CT was performed with respect to segmented lesion volume and radioactivity concentration. The interclass correlation coefficients showed excellent agreement in segmented lesion volume and radioactivity concentration (0.999 and 0.95, respectively). In conclusion, it is feasible that accurate quantitative 124I PET/MRI could be used to perform radioiodine pre-therapy lesion dosimetry in DTC.

  17. Performance evaluation methodology for historical document image binarization.

    PubMed

    Ntirogiannis, Konstantinos; Gatos, Basilis; Pratikakis, Ioannis

    2013-02-01

    Document image binarization is of great importance in the document image analysis and recognition pipeline since it affects further stages of the recognition process. The evaluation of a binarization method aids in studying its algorithmic behavior, as well as verifying its effectiveness, by providing qualitative and quantitative indication of its performance. This paper addresses a pixel-based binarization evaluation methodology for historical handwritten/machine-printed document images. In the proposed evaluation scheme, the recall and precision evaluation measures are properly modified using a weighting scheme that diminishes any potential evaluation bias. Additional performance metrics of the proposed evaluation scheme consist of the percentage rates of broken and missed text, false alarms, background noise, character enlargement, and merging. Several experiments conducted in comparison with other pixel-based evaluation measures demonstrate the validity of the proposed evaluation scheme.
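    The unweighted versions of the core measures are straightforward. The sketch below computes pixel-based recall, precision, and F-measure for a binarization result against ground truth, leaving out the paper's weighting scheme; the tiny flattened images are invented.

```python
# Pixel-based scores for binarization (1 = text pixel, 0 = background),
# computed over flattened images of equal size.

def binarization_scores(truth, result):
    tp = sum(t and r for t, r in zip(truth, result))
    fp = sum((not t) and r for t, r in zip(truth, result))
    fn = sum(t and (not r) for t, r in zip(truth, result))
    recall = tp / (tp + fn)          # fraction of text pixels found
    precision = tp / (tp + fp)       # fraction of found pixels correct
    f_measure = 2 * recall * precision / (recall + precision)
    return recall, precision, f_measure

truth  = [1, 1, 1, 0, 0]
result = [1, 1, 0, 1, 0]             # one missed pixel, one false alarm
r, p, f = binarization_scores(truth, result)
```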

  18. A peptide-retrieval strategy enables significant improvement of quantitative performance without compromising confidence of identification.

    PubMed

    Tu, Chengjian; Shen, Shichen; Sheng, Quanhu; Shyr, Yu; Qu, Jun

    2017-01-30

    Reliable quantification of low-abundance proteins in complex proteomes is challenging, largely owing to the limited number of spectra/peptides identified. In this study we developed a straightforward method to improve the quantitative accuracy and precision of proteins by strategically retrieving the less confident peptides that were previously filtered out using the standard target-decoy search strategy. The filtered-out MS/MS spectra matched to confidently identified proteins were recovered, and the peptide-spectrum-match FDR was recalculated and controlled at a confident level of FDR ≤ 1%, while protein FDR was maintained at ~1%. We evaluated the performance of this strategy in both spectral count- and ion current-based methods. A >60% increase in total quantified spectra/peptides was achieved for a spike-in sample set and a public dataset from CPTAC, respectively. Incorporating the peptide-retrieval strategy significantly improved quantitative accuracy and precision, especially for low-abundance proteins (e.g. one-hit proteins). Moreover, the capacity to confidently discover significantly altered proteins was also enhanced substantially, as demonstrated with two spike-in datasets. In summary, improved quantitative performance was achieved by this peptide-recovery strategy without compromising confidence of protein identification, and it can be readily implemented in a broad range of quantitative proteomics techniques, including label-free and labeling approaches. We hypothesize that more quantifiable spectra and peptides in a protein, even including less confident peptides, could help reduce variations and improve protein quantification. Hence the peptide-retrieval strategy was developed and evaluated in two spike-in sample sets with different LC-MS/MS variations using both MS1- and MS2-based quantitative approaches. The list of confidently identified proteins using the standard target-decoy search strategy was fixed and more spectra/peptides with less
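    The FDR estimate underlying the target-decoy strategy is simple counting: at a given score threshold, FDR ≈ decoys/targets among the passing matches. The sketch below uses synthetic PSMs, not the study's data; the recovery step described above re-controls this quantity at ≤1% for the retrieved spectra.

```python
# Target-decoy FDR at a score threshold: count decoy and target PSMs
# passing the threshold; their ratio estimates the false discovery rate.

def fdr_at_threshold(psms, threshold):
    """psms: list of (score, is_decoy) tuples; higher score is better."""
    kept = [(s, d) for s, d in psms if s >= threshold]
    decoys = sum(1 for _, d in kept if d)
    targets = len(kept) - decoys
    return decoys / max(targets, 1)

psms = [(10, False), (9, False), (8, True), (7, False), (6, True)]
```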

  19. Evaluation and performance of desorption electrospray ionization using a triple quadrupole mass spectrometer for quantitation of pharmaceuticals in plasma.

    PubMed

    Kennedy, Joseph H; Wiseman, Justin M

    2010-02-01

    The present work describes the methodology and investigates the performance of desorption electrospray ionization (DESI) combined with a triple quadrupole mass spectrometer for the quantitation of small drug molecules in human plasma. Amoxapine, atenolol, carbamazepine, clozapine, prazosin, propranolol, and verapamil were selected as target analytes, while terfenadine was selected as the internal standard common to each of the analytes. Protein precipitation of human plasma using acetonitrile was utilized for all samples. Limits of detection were determined for all analytes in plasma and shown to be in the range 0.2-40 ng/mL. Quantitative analysis of amoxapine, prazosin, and verapamil was performed over the range 20-7400 ng/mL and shown to be linear in all cases with R² > 0.99. In most cases, the precision (relative standard deviation) and accuracy (relative error) of each method were ≤20%. The performance of the combined techniques made it possible to analyze each sample in 15 s, illustrating DESI tandem mass spectrometry (MS/MS) as a powerful tool for the quantitation of analytes in deproteinized human plasma. Copyright 2010 John Wiley & Sons, Ltd.
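    The linearity claim (R² > 0.99 over 20-7400 ng/mL) rests on an ordinary least-squares calibration fit. The sketch below computes slope, intercept, and R² for a calibration series; the response values are invented and exactly linear, so the fit is trivially good.

```python
# Ordinary least-squares line fit and coefficient of determination
# for a calibration curve (response vs. concentration).

def linear_fit_r2(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((b - (slope * a + intercept)) ** 2 for a, b in zip(x, y))
    ss_tot = sum((b - my) ** 2 for b in y)
    return slope, intercept, 1 - ss_res / ss_tot

conc = [20, 100, 1000, 7400]            # ng/mL, hypothetical calibrators
resp = [2 * c + 5 for c in conc]        # toy detector response
slope, intercept, r2 = linear_fit_r2(conc, resp)
```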

  20. The role of quantitative safety evaluation in regulatory decision making of drugs.

    PubMed

    Chakravarty, Aloka G; Izem, Rima; Keeton, Stephine; Kim, Clara Y; Levenson, Mark S; Soukup, Mat

    2016-01-01

    Evaluation of safety is a critical component of drug review at the US Food and Drug Administration (FDA). Statisticians are playing an increasingly visible role in quantitative safety evaluation and regulatory decision-making. This article reviews the history and the recent events relating to quantitative drug safety evaluation at the FDA. The article then focuses on five active areas of quantitative drug safety evaluation and the role Division of Biometrics VII (DBVII) plays in these areas, namely meta-analysis for safety evaluation, large safety outcome trials, post-marketing requirements (PMRs), the Sentinel Initiative, and the evaluation of risk from extended/long-acting opioids. This article will focus chiefly on developments related to quantitative drug safety evaluation and not on the many additional developments in drug safety in general.

  1. Quantitative evaluations of ankle spasticity and stiffness in neurological disorders using manual spasticity evaluator.

    PubMed

    Peng, Qiyu; Park, Hyung-Soon; Shah, Parag; Wilson, Nicole; Ren, Yupeng; Wu, Yi-Ning; Liu, Jie; Gaebler-Spira, Deborah J; Zhang, Li-Qun

    2011-01-01

    Spasticity and contracture are major sources of disability in people with neurological impairments and have been evaluated using various instruments: the Modified Ashworth Scale, tendon reflex scale, pendulum test, mechanical perturbations, and passive joint range of motion (ROM). These measures generally are either convenient to use in clinics but not quantitative, or quantitative but difficult to use conveniently in clinics. We have developed a manual spasticity evaluator (MSE) to evaluate spasticity/contracture quantitatively and conveniently, with ankle ROM and stiffness measured at a controlled low velocity, and joint resistance and Tardieu catch angle measured at several higher velocities. We found that the Tardieu catch angle was linearly related to velocity, indicating that increased resistance at higher velocities was felt at further, stiffer positions and, thus, that the velocity dependence of spasticity may also be position dependent. This finding indicates the need to control velocity in spasticity evaluation, which is achieved with the MSE. Quantitative measurements of spasticity, stiffness, and ROM can lead to more accurate characterization of pathological conditions and outcome evaluations of interventions, potentially contributing to better healthcare services for patients with neurological disorders such as cerebral palsy, spinal cord injury, traumatic brain injury, and stroke.

  2. Evaluation of a rapid quantitative determination method of PSA concentration with gold immunochromatographic strips.

    PubMed

    Wu, Cheng-Ching; Lin, Hung-Yu; Wang, Chao-Ping; Lu, Li-Fen; Yu, Teng-Hung; Hung, Wei-Chin; Houng, Jer-Yiing; Chung, Fu-Mei; Lee, Yau-Jiunn; Hu, Jin-Jia

    2015-11-03

Prostate cancer remains the most common cancer in men. Qualitative or semi-quantitative immunochromatographic measurements of prostate specific antigen (PSA) have been shown to be simple, noninvasive and feasible. The aim of this study was to evaluate an optimized gold immunochromatographic strip device for the detection of PSA, in which the results can be analysed using a Chromogenic Rapid Test Reader to quantitatively assess the test results. This reader measures the reflectance of the signal line via a charge-coupled device camera. For quantitative analysis, PSA concentration was computed via a calibration equation. Capillary blood samples from 305 men were evaluated, and two independent observers interpreted the test results after 12 min. Blood samples were also collected and tested with a conventional quantitative assay. Sensitivity, specificity, positive and negative predictive values, and accuracy of the PSA rapid quantitative test system were 100, 96.6, 89.5, 100, and 97.4 %, respectively. Reproducibility of the test was 99.2 %, and interobserver variation was 8 % with a false positive rate of 3.4 %. The correlation coefficient between the ordinary quantitative assay and the rapid quantitative test was 0.960. The PSA rapid quantitative test system provided results quickly and was easy to use, so that tests using this system can be easily performed at outpatient clinics or elsewhere. This system may also be useful for initial cancer screening and for point-of-care testing, because results can be obtained within 12 min and at a cost lower than that of conventional quantitative assays.
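The reported rates all follow from a 2 × 2 confusion matrix. A minimal sketch of the standard definitions (Python; the counts below are an illustrative reconstruction consistent with the abstract's 305-subject figures, not data taken from the paper):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic accuracy measures from a 2x2 confusion matrix."""
    sensitivity = tp / (tp + fn)                   # true positive rate
    specificity = tn / (tn + fp)                   # true negative rate
    ppv = tp / (tp + fp)                           # positive predictive value
    npv = tn / (tn + fn)                           # negative predictive value
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, ppv, npv, accuracy

# Counts reconstructed to match the reported rates (illustrative only):
# 68 true positives, 8 false positives, 229 true negatives, 0 false negatives
print([round(100 * m, 1) for m in diagnostic_metrics(68, 8, 229, 0)])
# -> [100.0, 96.6, 89.5, 100.0, 97.4]
```

With zero false negatives, both sensitivity and negative predictive value are exactly 100 %, matching the abstract.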

  3. A quantitative evaluation of the high elbow technique in front crawl.

    PubMed

    Suito, Hiroshi; Nunome, Hiroyuki; Ikegami, Yasuo

    2017-07-01

Coaches often instruct swimmers to keep the elbow in a high position (high elbow position) during the early phase of the underwater stroke motion (pull phase) in front crawl; however, the high elbow position has never been quantitatively evaluated. The aims of this study were (1) to quantitatively evaluate the "high elbow" position, (2) to clarify the relationship between the high elbow position and the required upper limb configuration and (3) to examine the efficacy of the high elbow position on the resultant swimming velocity. Sixteen highly skilled and 6 novice male swimmers performed 25 m front crawl with maximal effort and their 3-dimensional arm stroke motion was captured at 60 Hz. An attempt was made to develop a new index to evaluate the high elbow position (I_he: high elbow index) using the 3-dimensional coordinates of the shoulder, elbow and wrist joints. I_he of skilled swimmers moderately correlated with the average shoulder internal rotation angle (r = -0.652, P < 0.01) and swimming velocity (r = -0.683, P < 0.01) during the pull phase. These results indicate that I_he is a useful index for evaluating high elbow arm stroke technique during the pull phase in front crawl.

  4. Quantitative evaluation method of the threshold adjustment and the flat field correction performances of hybrid photon counting pixel detectors

    NASA Astrophysics Data System (ADS)

    Medjoubi, K.; Dawiec, A.

    2017-12-01

A simple method is proposed in this work for quantitative evaluation of the quality of the threshold adjustment and the flat-field correction of Hybrid Photon Counting (HPC) pixel detectors. This approach is based on the Photon Transfer Curve (PTC), corresponding to the measurement of the standard deviation of the signal in flat-field images. Fixed pattern noise (FPN), easily identifiable in the curve, is linked to the residual threshold dispersion, sensor inhomogeneity and the remnant errors of flat-fielding techniques. The analytical expression of the signal-to-noise ratio curve is developed for HPC detectors and successfully used as a fit function applied to experimental data obtained with the XPAD detector. The FPN, described by the photon response non-uniformity (PRNU), is quantified for different configurations (threshold adjustment method and flat-fielding technique) and is shown to identify the settings that yield the best image quality from a commercial or R&D detector.
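The PTC reasoning can be illustrated with a toy simulation (Python; the detector size, flux levels and 2 % PRNU are arbitrary assumptions, not XPAD values). For an ideal photon-counting detector the flat-field variance follows var = μ + (PRNU·μ)², so the fixed-pattern term dominates at high flux and PRNU can be read off there:

```python
import numpy as np

rng = np.random.default_rng(0)
PRNU_TRUE = 0.02                                          # assumed 2 % dispersion
gain = 1.0 + PRNU_TRUE * rng.standard_normal((256, 256))  # fixed pattern map

def flat_field(mean_counts):
    """One flat-field frame: Poisson shot noise times the fixed gain map."""
    return rng.poisson(mean_counts * gain).astype(float)

# Photon transfer curve: spatial variance of a flat frame vs. its mean.
# Photon-counting model: var = mu + (PRNU * mu)^2, so at high flux
# PRNU is estimated as sqrt(var - mean) / mean.
for mu in (100, 1000, 10000):
    frame = flat_field(mu)
    m, v = frame.mean(), frame.var()
    prnu_est = np.sqrt(max(v - m, 0.0)) / m
    print(f"mean={m:8.1f}  var={v:10.1f}  PRNU estimate={prnu_est:.4f}")
```

A residual threshold dispersion or imperfect flat fielding enlarges the gain map's spread, which shows up directly as a higher variance plateau in the PTC.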

  5. Quantitative performance characterization of three-dimensional noncontact fluorescence molecular tomography

    NASA Astrophysics Data System (ADS)

    Favicchio, Rosy; Psycharakis, Stylianos; Schönig, Kai; Bartsch, Dusan; Mamalaki, Clio; Papamatheakis, Joseph; Ripoll, Jorge; Zacharakis, Giannis

    2016-02-01

    Fluorescent proteins and dyes are routine tools for biological research to describe the behavior of genes, proteins, and cells, as well as more complex physiological dynamics such as vessel permeability and pharmacokinetics. The use of these probes in whole body in vivo imaging would allow extending the range and scope of current biomedical applications and would be of great interest. In order to comply with a wide variety of application demands, in vivo imaging platform requirements span from wide spectral coverage to precise quantification capabilities. Fluorescence molecular tomography (FMT) detects and reconstructs in three dimensions the distribution of a fluorophore in vivo. Noncontact FMT allows fast scanning of an excitation source and noninvasive measurement of emitted fluorescent light using a virtual array detector operating in free space. Here, a rigorous process is defined that fully characterizes the performance of a custom-built horizontal noncontact FMT setup. Dynamic range, sensitivity, and quantitative accuracy across the visible spectrum were evaluated using fluorophores with emissions between 520 and 660 nm. These results demonstrate that high-performance quantitative three-dimensional visible light FMT allowed the detection of challenging mesenteric lymph nodes in vivo and the comparison of spectrally distinct fluorescent reporters in cell culture.

  6. Performance evaluation soil samples utilizing encapsulation technology

    DOEpatents

    Dahlgran, J.R.

    1999-08-17

    Performance evaluation soil samples and method of their preparation uses encapsulation technology to encapsulate analytes which are introduced into a soil matrix for analysis and evaluation by analytical laboratories. Target analytes are mixed in an appropriate solvent at predetermined concentrations. The mixture is emulsified in a solution of polymeric film forming material. The emulsified solution is polymerized to form microcapsules. The microcapsules are recovered, quantitated and introduced into a soil matrix in a predetermined ratio to form soil samples with the desired analyte concentration. 1 fig.

  7. Performance evaluation soil samples utilizing encapsulation technology

    DOEpatents

    Dahlgran, James R.

    1999-01-01

    Performance evaluation soil samples and method of their preparation using encapsulation technology to encapsulate analytes which are introduced into a soil matrix for analysis and evaluation by analytical laboratories. Target analytes are mixed in an appropriate solvent at predetermined concentrations. The mixture is emulsified in a solution of polymeric film forming material. The emulsified solution is polymerized to form microcapsules. The microcapsules are recovered, quantitated and introduced into a soil matrix in a predetermined ratio to form soil samples with the desired analyte concentration.

  8. Comparison of methods for quantitative evaluation of endoscopic distortion

    NASA Astrophysics Data System (ADS)

    Wang, Quanzeng; Castro, Kurt; Desai, Viraj N.; Cheng, Wei-Chung; Pfefer, Joshua

    2015-03-01

Endoscopy is a well-established paradigm in medical imaging, and emerging endoscopic technologies such as high resolution, capsule and disposable endoscopes promise significant improvements in effectiveness, as well as patient safety and acceptance of endoscopy. However, the field lacks practical standardized test methods to evaluate key optical performance characteristics (OPCs), in particular the geometric distortion caused by fisheye lens effects in clinical endoscopic systems. As a result, it has been difficult to evaluate an endoscope's image quality or assess its changes over time. The goal of this work was to identify optimal techniques for objective, quantitative characterization of distortion that are effective and not burdensome. Specifically, distortion measurements from a commercially available distortion evaluation/correction software package were compared with a custom algorithm based on a local magnification (ML) approach. Measurements were performed using a clinical gastroscope to image square grid targets. Recorded images were analyzed with the ML approach and with the commercial software, and the results were used to obtain corrected images. Corrected images based on the ML approach and on the software were then compared. The study showed that the ML method could assess distortion patterns more accurately than the commercial software. Overall, the development of standardized test methods for characterizing distortion and other OPCs will facilitate development, clinical translation, manufacturing quality assurance, and assurance of performance during clinical use of endoscopic technologies.
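The local magnification (ML) idea can be illustrated with a toy radial distortion model (Python; the cubic model, the coefficient K and the grid radii are invented for illustration, not measured gastroscope values):

```python
import numpy as np

# Hypothetical barrel (fisheye-like) distortion: a grid point at normalized
# object radius r images at r * (1 + K * r^2), with K < 0 for barrel.
K = -0.18

def imaged_radius(r):
    return r * (1.0 + K * r ** 2)

# Local magnification ML(r): ratio of an image-side radial step to the
# matching object-side step, estimated by finite differences on grid points.
r = np.linspace(0.05, 1.0, 20)         # grid intersections along one radius
rd = imaged_radius(r)
ml = np.gradient(rd, r)                # d(r_image) / d(r_object)

# Radial distortion in percent, relative to the near-center magnification
distortion_pct = 100.0 * (rd / r - ml[0]) / ml[0]
print(f"ML near center ~{ml[0]:.3f}, at the edge ~{ml[-1]:.3f}")
print(f"edge distortion ~{distortion_pct[-1]:.1f} %")
```

The falling ML toward the image periphery is the quantitative signature of barrel distortion; inverting the fitted radius mapping yields the corrected image.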

  9. Student evaluations of teaching: teaching quantitative courses can be hazardous to one’s career

    PubMed Central

    Smibert, Dylan

    2017-01-01

Anonymous student evaluations of teaching (SETs) are used by colleges and universities to measure teaching effectiveness and to make decisions about faculty hiring, firing, re-appointment, promotion, tenure, and merit pay. Although numerous studies have found that SETs correlate with various teaching effectiveness irrelevant factors (TEIFs) such as subject, class size, and grading standards, it has been argued that such correlations are small and do not undermine the validity of SETs as measures of professors' teaching effectiveness. However, previous research has generally used inappropriate parametric statistics and effect sizes to examine and to evaluate the significance of TEIFs on personnel decisions. Accordingly, we examined the influence of quantitative vs. non-quantitative courses on SET ratings and SET-based personnel decisions using 14,872 publicly posted class evaluations, where each evaluation represents a summary of SET ratings provided by individual students responding in each class. In total, 325,538 individual student evaluations from a US mid-size university contributed to these class evaluations. The results demonstrate that class subject (math vs. English) is strongly associated with SET ratings, has a substantial impact on professors being labeled satisfactory vs. unsatisfactory and excellent vs. non-excellent, and that the impact varies substantially depending on the criteria used to classify professors as satisfactory vs. unsatisfactory. Professors teaching quantitative courses are far more likely not to receive tenure, promotion, and/or merit pay when their performance is evaluated against common standards. PMID:28503380

  10. Evaluation of board performance in Iran's universities of medical sciences.

    PubMed

    Sajadi, Haniye Sadat; Maleki, Mohammadreza; Ravaghi, Hamid; Farzan, Homayoun; Aminlou, Hasan; Hadi, Mohammad

    2014-10-01

The critical role that the board plays in the governance of universities clarifies the necessity of evaluating its performance. This study aimed to evaluate the performance of the boards of medical universities and provide solutions to enhance it. The first phase of the present study was a qualitative research in which data were collected through face-to-face semi-structured interviews. Data were analyzed by a thematic approach. The second phase was a mixed qualitative and quantitative study, with the quantitative part in cross-sectional format and the qualitative part in content analysis format. In the quantitative part, data were collected through the Ministry of Health and Medical Education (MoHME). In the qualitative part, the content of 2,148 resolutions selected by a stratified sampling method was analyzed. Participants believed that the boards had not performed acceptably for a long time. Results also indicated an increasing number of meetings and resolutions of the boards over these 21 years. The boards' resolutions were mostly operational in domain and administrative in nature. The share of specific resolutions was greater than that of general ones. Given the current pace of change and development and the need to respond to them in a timely manner, it is recommended to accelerate the slow improvement process of the boards. It appears that more delegation and strengthening of the position of the boards are effective strategies to speed up this process.

  11. Fluorescent proteins for quantitative microscopy: important properties and practical evaluation.

    PubMed

    Shaner, Nathan Christopher

    2014-01-01

    More than 20 years after their discovery, fluorescent proteins (FPs) continue to be the subject of massive engineering efforts yielding continued improvements. Among these efforts are many aspects that should be of great interest to quantitative imaging users. With new variants frequently introduced into the research community, "tried and true" FPs that have been relied on for many years may now be due for upgrades to more modern variants. However, the dizzying array of FPs now available can make the initial act of narrowing down the potential choices an intimidating prospect. This chapter describes the FP properties that most strongly impact their performance in quantitative imaging experiments, along with their physical origins as they are currently understood. A workflow for evaluating a given FP in the researcher's chosen experimental system (e.g., a specific cell line) is described. © 2014 Elsevier Inc. All rights reserved.

  12. A Computational Framework for Quantitative Evaluation of Movement during Rehabilitation

    NASA Astrophysics Data System (ADS)

    Chen, Yinpeng; Duff, Margaret; Lehrer, Nicole; Sundaram, Hari; He, Jiping; Wolf, Steven L.; Rikakis, Thanassis

    2011-06-01

This paper presents a novel generalized computational framework for quantitative kinematic evaluation of movement in a rehabilitation clinic setting. The framework integrates clinical knowledge and computational data-driven analysis together in a systematic manner. The framework provides three key benefits to rehabilitation: (a) the resulting continuous normalized measure allows the clinician to monitor movement quality on a fine scale and easily compare impairments across participants, (b) the framework reveals the effect of individual movement components on the composite movement performance, helping the clinician decide the training foci, and (c) the evaluation runs in real-time, which allows the clinician to constantly track a patient's progress and make appropriate adaptations to the therapy protocol. The creation of such an evaluation is difficult because of the sparsity of recorded clinical observations, the high dimensionality of movement and the high variation in subjects' performance. We address these issues by modeling the evaluation function as a linear combination of normalized kinematic attributes, y = Σ_i w_i φ_i(x_i), and estimating each attribute normalization function φ_i(·) by integrating distributions of idealized movement and deviated movement. The weights w_i are derived from a therapist's pairwise comparisons using a modified RankSVM algorithm. We have applied this framework to evaluate upper limb movement for stroke survivors with excellent results: the evaluation results are highly correlated with the therapist's observations.
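The composite score can be sketched as follows (Python; the three attributes, their group means and the weights are invented for illustration, and a logistic φ_i stands in for the paper's distribution-integral normalization and RankSVM-derived weights):

```python
import numpy as np

def phi(x, mu_ideal, mu_deviated):
    """Hypothetical normalization: ~1 near the idealized distribution's
    mean, ~0 near the deviated one (a logistic bridge between the two)."""
    center = 0.5 * (mu_ideal + mu_deviated)
    scale = abs(mu_ideal - mu_deviated) / 4.0
    s = np.sign(mu_deviated - mu_ideal)        # orient so ideal maps high
    return 1.0 / (1.0 + np.exp(s * (x - center) / scale))

def evaluate(x, mu_ideal, mu_deviated, w):
    """Composite movement score y = sum_i w_i * phi_i(x_i)."""
    w = np.asarray(w, dtype=float) / np.sum(w)
    scores = [phi(xi, mi, md) for xi, mi, md in zip(x, mu_ideal, mu_deviated)]
    return float(np.dot(w, scores))

# Three invented attributes: endpoint error (cm), jerkiness, trunk lean (deg)
mu_ideal = [1.0, 0.2, 2.0]
mu_deviated = [8.0, 1.5, 15.0]
w = [0.5, 0.3, 0.2]                    # stand-in for RankSVM-derived weights
print(evaluate([1.0, 0.2, 2.0], mu_ideal, mu_deviated, w))   # near-ideal, close to 1
print(evaluate([8.0, 1.5, 15.0], mu_ideal, mu_deviated, w))  # deviated, close to 0
```

Because each φ_i is normalized to [0, 1] and the weights sum to one, the composite score stays on a single continuous scale across attributes and participants.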

  13. Design and Implementation of Performance Metrics for Evaluation of Assessments Data

    ERIC Educational Resources Information Center

    Ahmed, Irfan; Bhatti, Arif

    2016-01-01

    Evocative evaluation of assessment data is essential to quantify the achievements at course and program levels. The objective of this paper is to design performance metrics and respective formulas to quantitatively evaluate the achievement of set objectives and expected outcomes at the course levels for program accreditation. Even though…

  14. Quantitative evaluation methods of skin condition based on texture feature parameters.

    PubMed

    Pang, Hui; Chen, Tianhua; Wang, Xiaoyi; Chang, Zhineng; Shao, Siqi; Zhao, Jing

    2017-03-01

In order to quantitatively evaluate the improvement of skin condition after using skin care products and beauty treatments, a quantitative evaluation method for skin surface state and texture is presented, which is convenient, fast and non-destructive. Human skin images were collected by image sensors. Firstly, a median filter with a 3 × 3 window is applied, and the locations of hairy pixels on the skin are accurately detected according to the gray mean value and color information. Bilinear interpolation is used to modify the gray values of the hairy pixels in order to eliminate the negative effect of noise and tiny hairs on the texture. After this pretreatment, the gray level co-occurrence matrix (GLCM) is calculated. On this basis, four characteristic parameters (second moment, contrast, entropy and correlation) and their mean values are calculated at 45° intervals. A quantitative evaluation model of skin texture based on the GLCM is established, which can calculate comprehensive parameters of skin condition. Experiments show that evaluations of skin condition made with this method agree with biochemical skin-evaluation indicators and are fully consistent with human visual assessment. The method avoids the skin damage and long waiting times of biochemical evaluation, as well as the subjectivity and fuzziness of visual evaluation, achieving non-destructive, rapid and quantitative evaluation of skin condition. It can be used for health assessment or classification of skin condition, and can quantitatively evaluate subtle improvements in skin condition after using skin care products or undergoing beauty treatments.
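The texture step after preprocessing can be sketched with a hand-rolled GLCM (Python; the random stand-in image and the 8 gray levels are arbitrary, and production code might use an optimized library instead):

```python
import numpy as np

def glcm(img, dx, dy, levels):
    """Symmetric, normalized gray level co-occurrence matrix for one offset."""
    h, w = img.shape
    P = np.zeros((levels, levels))
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            P[img[y, x], img[y + dy, x + dx]] += 1
    P = P + P.T                        # count each pair in both directions
    return P / P.sum()

def texture_features(P):
    """Second (angular) moment, contrast, entropy and correlation of a GLCM."""
    n = P.shape[0]
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    asm = np.sum(P ** 2)
    contrast = np.sum(P * (i - j) ** 2)
    entropy = -np.sum(P[P > 0] * np.log2(P[P > 0]))
    mu_i, mu_j = np.sum(i * P), np.sum(j * P)
    sd_i = np.sqrt(np.sum(P * (i - mu_i) ** 2))
    sd_j = np.sqrt(np.sum(P * (j - mu_j) ** 2))
    correlation = np.sum(P * (i - mu_i) * (j - mu_j)) / (sd_i * sd_j)
    return asm, contrast, entropy, correlation

# Offsets at 0, 45, 90 and 135 degrees; features averaged over directions
rng = np.random.default_rng(1)
skin = rng.integers(0, 8, size=(64, 64))          # stand-in for a skin patch
offsets = [(1, 0), (1, -1), (0, -1), (-1, -1)]    # (dx, dy) pairs
features = np.mean([texture_features(glcm(skin, dx, dy, 8))
                    for dx, dy in offsets], axis=0)
print(dict(zip(["ASM", "contrast", "entropy", "correlation"], features)))
```

Smoother skin yields a GLCM concentrated near the diagonal (higher second moment, lower contrast and entropy), which is what makes these four parameters usable as a condition score.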

  15. Evaluation of shear wave elastography for differential diagnosis of breast lesions: A new qualitative analysis versus conventional quantitative analysis.

    PubMed

    Ren, Wei-Wei; Li, Xiao-Long; Wang, Dan; Liu, Bo-Ji; Zhao, Chong-Ke; Xu, Hui-Xiong

    2018-04-13

To evaluate a special kind of ultrasound (US) shear wave elastography for differential diagnosis of breast lesions, using a new qualitative analysis (i.e. the elasticity score in the travel time map) compared with conventional quantitative analysis. From June 2014 to July 2015, 266 pathologically proven breast lesions were enrolled in this study. The maximum, mean, median, minimum, and standard deviation of shear wave speed (SWS) values (m/s) were assessed. The elasticity score, a new qualitative feature, was evaluated in the travel time map. Areas under the receiver operating characteristic curve (AUROC) were computed to evaluate the diagnostic performance of both qualitative and quantitative analyses for differentiation of breast lesions. Among all quantitative parameters, SWS-max showed the highest AUROC (0.805; 95% CI: 0.752, 0.851) compared with SWS-mean (0.786; 95% CI: 0.732, 0.834; P = 0.094), SWS-median (0.775; 95% CI: 0.720, 0.824; P = 0.046), SWS-min (0.675; 95% CI: 0.615, 0.731; P = 0.000), and SWS-SD (0.768; 95% CI: 0.712, 0.817; P = 0.074). The qualitative analysis achieved the best diagnostic performance in this study (AUROC 0.871; 95% CI: 0.825, 0.909; P = 0.011 compared with SWS-max, the best quantitative parameter). The new qualitative analysis of shear wave travel time thus showed superior diagnostic performance in the differentiation of breast lesions compared with conventional quantitative analysis.
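The AUROC comparison rests on the equivalence between the area under the ROC curve and the Mann-Whitney U statistic, which gives a simple way to compute it from raw scores. A minimal sketch (Python; the SWS-max values below are invented, not the study's data):

```python
def auroc(pos_scores, neg_scores):
    """Area under the ROC curve via the Mann-Whitney U statistic: the
    probability that a random positive case scores above a random
    negative case, with ties counting one half."""
    pairs = len(pos_scores) * len(neg_scores)
    wins = sum((p > q) + 0.5 * (p == q) for p in pos_scores for q in neg_scores)
    return wins / pairs

# Invented SWS-max values (m/s): stiffer malignant vs. softer benign lesions
malignant = [6.1, 5.4, 7.2, 4.9, 6.6]
benign = [3.2, 4.1, 2.8, 5.0, 3.6]
print(auroc(malignant, benign))   # -> 0.96
```

An AUROC of 0.5 means the scores carry no discriminating information; 1.0 means the two groups are perfectly separated.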

  16. Quantitative chest computed tomography as a means of predicting exercise performance in severe emphysema.

    PubMed

    Crausman, R S; Ferguson, G; Irvin, C G; Make, B; Newell, J D

    1995-06-01

We assessed the value of quantitative high-resolution computed tomography (CT) as a diagnostic and prognostic tool in smoking-related emphysema. We performed an inception cohort study of 14 patients referred with emphysema. The diagnosis of emphysema was based on a compatible history, physical examination, chest radiograph, CT scan of the lung, and pulmonary physiologic evaluation. As a group, those who underwent exercise testing were hyperinflated (percentage predicted total lung capacity +/- standard error of the mean = 133 +/- 9%), and there was evidence of air trapping (percentage predicted residual volume = 318 +/- 31%) and airflow limitation (forced expiratory volume in 1 sec [FEV1] = 40 +/- 7%). The exercise performance of the group was severely limited (maximum achievable workload = 43 +/- 6%) and was characterized by prominent ventilatory, gas exchange, and pulmonary vascular abnormalities. The quantitative CT index was markedly elevated in all patients (76 +/- 9; n = 14; normal < 4). There were correlations between this quantitative CT index and measures of airflow limitation (FEV1: r2 = .34, p = .09; FEV1/forced vital capacity: r2 = .46, p = .04), and between the index and both maximum workload achieved (r2 = .93, p = .0001) and maximum oxygen utilization (r2 = .83, p = .0007). Quantitative chest CT assessment of disease severity is correlated with the degree of airflow limitation and exercise impairment in pulmonary emphysema.

  17. Poem Generator: A Comparative Quantitative Evaluation of a Microworlds-Based Learning Approach for Teaching English

    ERIC Educational Resources Information Center

    Jenkins, Craig

    2015-01-01

    This paper is a comparative quantitative evaluation of an approach to teaching poetry in the subject domain of English that employs a "guided discovery" pedagogy using computer-based microworlds. It uses a quasi-experimental design in order to measure performance gains in computational thinking and poetic thinking following a…

  18. Nuclear medicine and quantitative imaging research (instrumentation and quantitative methods of evaluation)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beck, R.N.; Cooper, M.D.

    1990-09-01

This report summarizes goals and accomplishments of the research program supported under DOE Grant No. FG02-86ER60418, entitled Instrumentation and Quantitative Methods of Evaluation, with R. Beck, P.I. and M. Cooper, Co-P.I., during the period January 15, 1990 through September 1, 1990. This program addresses the problems involving the basic science and technology underlying the physical and conceptual tools of radioactive tracer methodology as they relate to the measurement of structural and functional parameters of physiologic importance in health and disease. The principal tool is quantitative radionuclide imaging. The overall objective of this program is to further the development and transfer of radiotracer methodology from basic theory to routine clinical practice in order that individual patients and society as a whole will receive the maximum net benefit from the new knowledge gained. The focus of the research is on the development of new instruments and radiopharmaceuticals, and the evaluation of these through the phase of clinical feasibility. 7 figs.

  19. Diffusion tensor imaging with quantitative evaluation and fiber tractography of lumbar nerve roots in sciatica.

    PubMed

    Shi, Yin; Zong, Min; Xu, Xiaoquan; Zou, Yuefen; Feng, Yang; Liu, Wei; Wang, Chuanbing; Wang, Dehang

    2015-04-01

To quantitatively evaluate nerve roots by measuring fractional anisotropy (FA) values in healthy volunteers and sciatica patients, visualize nerve roots by tractography, and compare the diagnostic efficacy of conventional magnetic resonance imaging (MRI) and DTI. Seventy-five sciatica patients and thirty-six healthy volunteers underwent MR imaging using DTI. FA values for the L5-S1 lumbar nerve roots were calculated at three levels from the DTI images. Tractography was performed on the L3-S1 nerve roots. ROC analysis was performed for the FA values. The lumbar nerve roots were visualized and FA values were calculated in all subjects. FA values decreased in compressed nerve roots and declined from proximal to distal along the compressed nerve tracts. Mean FA values were more sensitive and specific than MR imaging for differentiating compressed nerve roots, especially in the far lateral zone at distal nerves. DTI can quantitatively evaluate compressed nerve roots, and diffusion tensor tractography (DTT) enables visualization of abnormal nerve tracts, providing vivid anatomic information and localization of probable nerve compression. DTI has great potential utility for evaluating lumbar nerve compression in sciatica. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  20. Quantitation of polymethoxylated flavones in orange juice by high-performance liquid chromatography.

    PubMed

    Rouseff, R L; Ting, S V

    1979-08-01

    A quantitative high-performance liquid chromatographic (HPLC) procedure for the determination of the five major polymethoxylated flavones (PMFs) in orange juice has been developed. It employs a unique ternary solvent system with coupled UV-fluorescence detection. The dual detectors were employed to determine the presence of interfering substances and served as a cross check on quantitation. Stop flow UV and fluorescence scanning was used to identify peaks and determine the presence of impurities. Although all five citrus PMFs fluoresce, some HPLC fluorescence peaks were too small to be of much practical use. All five citrus PMFs could be quantitated satisfactorily with the fixed wavelength UV (313 nm) detector. The HPLC procedure has been used to evaluate each step in the preparation. The optimum extracting solvent was selected and one time consuming step was eliminated, as it was found to be unnecessary. HPLC values for nobiletin and sinensetin are in good agreement with the thin-layer chromatographic (TLC) values in the literature. HPLC values for the other three flavones were considerably lower than those reported in the literature. The HPLC procedure is considerably faster than the TLC procedure with equal or superior precision and accuracy.

  1. Quantitative evaluation of Alzheimer's disease

    NASA Astrophysics Data System (ADS)

    Duchesne, S.; Frisoni, G. B.

    2009-02-01

We propose a single, quantitative metric called the disease evaluation factor (DEF) and assess its efficiency at estimating disease burden in normal control subjects (CTRL) and probable Alzheimer's disease (AD) patients. The study group consisted of 75 patients with a diagnosis of probable AD and 75 age-matched normal CTRL without neurological or neuropsychological deficit. We calculated a reference eigenspace of MRI appearance from reference data, into which our CTRL and probable AD subjects were projected. We then calculated the multi-dimensional hyperplane separating the CTRL and probable AD groups. The DEF was estimated via a multidimensional weighted distance between a given subject's eigencoordinates and the CTRL group mean, along the salient principal components forming the separating hyperplane. We used quantile plots, Kolmogorov-Smirnov and χ2 tests to compare the DEF values and test that their distribution was normal. We used a linear discriminant test to separate CTRL from probable AD based on the DEF, and reached an accuracy of 87%. A quantitative biomarker in AD would act as an important surrogate marker of disease status and progression.
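The DEF idea, a weighted distance from the CTRL mean along discriminating directions, can be sketched on synthetic eigencoordinates (Python; the group sizes match the abstract, but the data, the 10-component eigenspace and the diagonal-covariance discriminant weights are assumptions, not the paper's pipeline):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in: eigencoordinates of 75 CTRL and 75 AD subjects after
# projection into an assumed 10-component reference appearance eigenspace.
ctrl = rng.normal(0.0, 1.0, size=(75, 10))
ad = rng.normal(0.0, 1.0, size=(75, 10)) + np.linspace(1.5, 0.0, 10)

# Discriminant direction from group statistics (diagonal-covariance LDA)
mu_c, mu_a = ctrl.mean(axis=0), ad.mean(axis=0)
pooled_var = 0.5 * (ctrl.var(axis=0) + ad.var(axis=0))
w = (mu_a - mu_c) / pooled_var

def disease_evaluation_factor(coords):
    """Weighted distance of a subject's eigencoordinates from the CTRL
    mean along the discriminating direction (a sketch of the DEF idea)."""
    return float(np.dot(coords - mu_c, w))

# Separate the groups by thresholding the DEF halfway between group means
def_ctrl = [disease_evaluation_factor(x) for x in ctrl]
def_ad = [disease_evaluation_factor(x) for x in ad]
thr = 0.5 * (np.mean(def_ctrl) + np.mean(def_ad))
correct = sum(d > thr for d in def_ad) + sum(d <= thr for d in def_ctrl)
acc = correct / (len(def_ctrl) + len(def_ad))
print(f"separation accuracy on the synthetic groups: {acc:.2f}")
```

Collapsing the eigencoordinates onto one discriminating axis is what turns a high-dimensional appearance model into a single scalar disease burden score.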

  2. Quantitative magnetic resonance (MR) neurography for evaluation of peripheral nerves and plexus injuries

    PubMed Central

    Barousse, Rafael; Socolovsky, Mariano; Luna, Antonio

    2017-01-01

Traumatic conditions of peripheral nerves and plexus have been classically evaluated by morphological imaging techniques and electrophysiological tests. New magnetic resonance imaging (MRI) studies based on 3D fat-suppressed techniques are providing high accuracy for peripheral nerve injury evaluation from a qualitative point of view. However, these techniques do not provide quantitative information. Diffusion weighted imaging (DWI) and diffusion tensor imaging (DTI) are functional MRI techniques that are able to evaluate and quantify the movement of water molecules within different biological structures. These techniques have been successfully applied in other anatomical areas, especially in the assessment of the central nervous system, and are now being imported, with promising results, for peripheral nerve and plexus evaluation. DWI and DTI allow performing a qualitative and quantitative peripheral nerve analysis, providing valuable pathophysiological information about the functional integrity of these structures. In the field of trauma and peripheral nerve or plexus injury, several parameters derived from DWI and DTI studies, such as the apparent diffusion coefficient (ADC) or fractional anisotropy (FA) among others, can be used as potential biomarkers of neural damage, providing information about fiber organization, axonal flow or myelin integrity. A proper knowledge of the physical basis of these techniques and their limitations is important for an optimal interpretation of the imaging findings and derived data. In this paper, a comprehensive review of the potential applications of DWI and DTI neurographic studies is performed with a focus on traumatic conditions, including the main nerve entrapment syndromes in both peripheral nerves and the brachial or lumbar plexus. PMID:28932698

  3. Technology Efficacy in Active Prosthetic Knees for Transfemoral Amputees: A Quantitative Evaluation

    PubMed Central

    El-Sayed, Amr M.; Abu Osman, Noor Azuan

    2014-01-01

Several studies have presented technological ensembles of active knee systems for transfemoral prosthesis. Other studies have examined the amputees' gait performance while wearing a specific active prosthesis. This paper combined both insights, that is, a technical examination of the components used, with an evaluation of how these improved the gait of respective users. This study aims to offer a quantitative understanding of the potential enhancement derived from strategic integration of core elements in developing an effective device. The study systematically discussed the current technology in active transfemoral prosthesis with respect to its functional walking performance amongst above-knee amputee users, to evaluate the system's efficacy in producing close-to-normal user performance. The performances of the actuator, sensory system, and control technique incorporated in each reported system were evaluated separately, and numerical comparisons were conducted based on the percentage of amputees' gait deviation from normal gait profile points. The results identified particular components that contributed closest to normal gait parameters. However, these conclusions have limited generalizability because of the small number of studies. Thus, more clinical validation of the active prosthetic knee technology is needed to better understand the extent of contribution of each component to the most functional development. PMID:25110727

  4. Technology efficacy in active prosthetic knees for transfemoral amputees: a quantitative evaluation.

    PubMed

    El-Sayed, Amr M; Hamzaid, Nur Azah; Abu Osman, Noor Azuan

    2014-01-01

Several studies have presented technological ensembles of active knee systems for transfemoral prosthesis. Other studies have examined the amputees' gait performance while wearing a specific active prosthesis. This paper combined both insights, that is, a technical examination of the components used, with an evaluation of how these improved the gait of respective users. This study aims to offer a quantitative understanding of the potential enhancement derived from strategic integration of core elements in developing an effective device. The study systematically discussed the current technology in active transfemoral prosthesis with respect to its functional walking performance amongst above-knee amputee users, to evaluate the system's efficacy in producing close-to-normal user performance. The performances of the actuator, sensory system, and control technique incorporated in each reported system were evaluated separately, and numerical comparisons were conducted based on the percentage of amputees' gait deviation from normal gait profile points. The results identified particular components that contributed closest to normal gait parameters. However, these conclusions have limited generalizability because of the small number of studies. Thus, more clinical validation of the active prosthetic knee technology is needed to better understand the extent of contribution of each component to the most functional development.

  5. Quantitative phase-contrast digital holographic microscopy for cell dynamic evaluation

    NASA Astrophysics Data System (ADS)

    Yu, Lingfeng; Mohanty, Samarendra; Berns, Michael W.; Chen, Zhongping

    2009-02-01

    The laser microbeam uses lasers to alter and/or to ablate intracellular organelles and cellular and tissue samples, and, today, has become an important tool for cell biologists to study the molecular mechanism of complex biological systems by removing individual cells or sub-cellular organelles. However, absolute quantitation of the localized alteration/damage to transparent phase objects, such as the cell membrane or chromosomes, was not possible using conventional phase-contrast or differential interference contrast microscopy. We report the development of phase-contrast digital holographic microscopy for quantitative evaluation of cell dynamic changes in real time during laser microsurgery. Quantitative phase images are recorded during the process of laser microsurgery and thus, the dynamic change in phase can be continuously evaluated. Out-of-focus organelles are re-focused by numerical reconstruction algorithms.

  6. Evaluation of board performance in Iran’s universities of medical sciences

    PubMed Central

    Sajadi, Haniye Sadat; Maleki, Mohammadreza; Ravaghi, Hamid; Farzan, Homayoun; Aminlou, Hasan; Hadi, Mohammad

    2014-01-01

    Background: The critical role that the board plays in the governance of universities clarifies the necessity of evaluating its performance. This study aimed to evaluate the performance of the boards of medical universities and provide solutions to enhance it. Methods: The first phase of the present study was a qualitative one in which data were collected through face-to-face semi-structured interviews and analyzed using a thematic approach. The second phase was a mixed qualitative and quantitative study, with the quantitative part in a cross-sectional format and the qualitative part in a content-analysis format. In the quantitative part, data were collected through the Ministry of Health and Medical Education (MoHME). In the qualitative part, the content of 2,148 resolutions, selected using a stratified sampling method, was analyzed. Results: Participants believed that the boards had not performed acceptably for a long time. The results also indicated an increasing number of board meetings and resolutions over these 21 years. The boards' resolutions were mostly operational in domain and administrative in nature. The share of specific resolutions exceeded that of general ones. Conclusion: Given the current pace of change and development and the need to respond to them in a timely manner, it is recommended to accelerate the slow improvement process of the boards. More delegation and strengthening of the boards' position appear to be effective strategies to speed up this process. PMID:25337597

  7. Image processing system performance prediction and product quality evaluation

    NASA Technical Reports Server (NTRS)

    Stein, E. K.; Hammill, H. B. (Principal Investigator)

    1976-01-01

    The author has identified the following significant results. A new technique for image processing system performance prediction and product quality evaluation was developed. It was entirely objective, quantitative, and general, and should prove useful in system design and quality control. The technique and its application to determination of quality control procedures for the Earth Resources Technology Satellite NASA Data Processing Facility are described.

  8. Dual respiratory and cardiac motion estimation in PET imaging: Methods design and quantitative evaluation.

    PubMed

    Feng, Tao; Wang, Jizhe; Tsui, Benjamin M W

    2018-04-01

    The goal of this study was to develop and evaluate four post-reconstruction respiratory and cardiac (R&C) motion vector field (MVF) estimation methods for cardiac 4D PET data. In Method 1, the dual R&C motions were estimated directly from the dual R&C gated images. In Method 2, respiratory motion (RM) and cardiac motion (CM) were estimated separately from the respiratory-gated-only and cardiac-gated-only images. In Method 3, the effects of RM on CM estimation were modeled by applying an image-based RM correction on the cardiac gated images before CM estimation, while the effects of CM on RM estimation were neglected. Method 4 iteratively models the mutual effects of RM and CM during dual R&C motion estimation. Realistic simulation data were generated for quantitative evaluation of the four methods. Almost noise-free PET projection data were generated from the 4D XCAT phantom with realistic R&C MVFs using Monte Carlo simulation. Poisson noise was added to the scaled projection data to generate additional datasets at two more noise levels. All projection data were reconstructed using a 4D image reconstruction method to obtain dual R&C gated images. The four dual R&C MVF estimation methods were applied to the dual R&C gated images, and the accuracy of motion estimation was quantitatively evaluated using the root mean square error (RMSE) of the estimated MVFs. Results show that, among the four estimation methods, Method 2 performed the worst for the noise-free case while Method 1 performed the worst for noisy cases in terms of the quantitative accuracy of the estimated MVF. Methods 4 and 3 showed comparable results and achieved RMSEs up to 35% lower than Method 1 for noisy cases. In conclusion, we have developed and evaluated four different post-reconstruction R&C MVF estimation methods for use in 4D PET imaging. Comparison of the performance of the four methods on simulated data indicates separate R&C estimation with modeling of RM before CM estimation (Method 3) to be
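    The RMSE criterion used in the evaluation above can be sketched as follows; the field shapes and values are illustrative assumptions, not the study's data.

```python
import numpy as np

# Minimal sketch of RMSE between an estimated motion vector field
# (MVF) and a ground-truth field. Shapes and values are illustrative.

def mvf_rmse(estimated: np.ndarray, truth: np.ndarray) -> float:
    """RMSE of motion vectors over all voxels; fields shaped (..., 3)."""
    if estimated.shape != truth.shape:
        raise ValueError("fields must have identical shapes")
    err = estimated - truth
    # Squared magnitude of the per-voxel vector error, then RMS over voxels.
    return float(np.sqrt(np.mean(np.sum(err ** 2, axis=-1))))

rng = np.random.default_rng(0)
truth = rng.normal(size=(8, 8, 8, 3))          # synthetic "true" MVF
estimate = truth + rng.normal(scale=0.1, size=truth.shape)
print(f"RMSE: {mvf_rmse(estimate, truth):.3f}")
```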

  9. Using hybrid method to evaluate the green performance in uncertainty.

    PubMed

    Tseng, Ming-Lang; Lan, Lawrence W; Wang, Ray; Chiu, Anthony; Cheng, Hui-Ping

    2011-04-01

    Green performance measurement is vital for enterprises in making continuous improvements to maintain sustainable competitive advantages. Evaluating green performance, however, is a challenging task because of the interdependence among aspects and criteria and the linguistic vagueness of some qualitative information mixed with quantitative data. To deal with this issue, this study proposes a novel approach to evaluate the interdependent aspects and criteria of a firm's green performance. The rationale of the proposed approach, named the green network balanced scorecard, is to use the balanced scorecard to combine fuzzy set theory with the analytical network process (ANP) and importance-performance analysis (IPA), wherein fuzzy set theory accounts for the linguistic vagueness of qualitative criteria and ANP converts the relations among the interdependent aspects and criteria into an intelligible structural model used by IPA. For the empirical case study, four interdependent aspects and 34 green performance criteria for PCB firms in Taiwan were evaluated. The managerial implications are discussed.

  10. A model for evaluating the social performance of construction waste management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yuan Hongping, E-mail: hpyuan2005@gmail.com

    Highlights: ► Scant attention is paid to the social performance of construction waste management (CWM). ► We develop a model for assessing the social performance of CWM. ► With the model, the social performance of CWM can be quantitatively simulated. - Abstract: Existing literature shows that considerable research effort has been devoted to the economic performance of construction waste management (CWM), but less attention has been paid to its social performance. This study therefore attempts to develop a model for quantitatively evaluating the social performance of CWM by using a system dynamics (SD) approach. Firstly, major variables affecting the social performance of CWM are identified, and a holistic system for assessing it is formulated in line with the feedback relationships underlying these variables. The developed system is then converted into an SD model using the software iThink. An empirical case study is finally conducted to demonstrate application of the model. Results of model validation indicate that the model is robust and reasonable in reflecting the situation of the real system under study. Findings of the case study offer helpful insights into effectively promoting the social performance of the CWM of the project investigated. Furthermore, the model exhibits great potential to function as an experimental platform for dynamically evaluating the effects of management measures on improving the social performance of CWM in construction projects.

  11. The Quantitative Evaluation of the Clinical and Translational Science Awards (CTSA) Program Based on Science Mapping and Scientometric Analysis

    PubMed Central

    Zhang, Yin; Wang, Lei

    2013-01-01

    Abstract The Clinical and Translational Science Awards (CTSA) program is one of the most important initiatives in translational medical funding. The quantitative evaluation of the efficiency and performance of the CTSA program has a significant referential meaning for the decision making of global translational medical funding. Using science mapping and scientometric analytic tools, this study quantitatively analyzed the scientific articles funded by the CTSA program. The results of the study showed that the quantitative productivities of the CTSA program had a stable increase since 2008. In addition, the emerging trends of the research funded by the CTSA program covered clinical and basic medical research fields. The academic benefits from the CTSA program were assisting its members to build a robust academic home for the Clinical and Translational Science and to attract other financial support. This study provided a quantitative evaluation of the CTSA program based on science mapping and scientometric analysis. Further research is required to compare and optimize other quantitative methods and to integrate various research results. PMID:24330689

  12. The quantitative evaluation of the Clinical and Translational Science Awards (CTSA) program based on science mapping and scientometric analysis.

    PubMed

    Zhang, Yin; Wang, Lei; Diao, Tianxi

    2013-12-01

    The Clinical and Translational Science Awards (CTSA) program is one of the most important initiatives in translational medical funding. The quantitative evaluation of the efficiency and performance of the CTSA program has a significant referential meaning for the decision making of global translational medical funding. Using science mapping and scientometric analytic tools, this study quantitatively analyzed the scientific articles funded by the CTSA program. The results of the study showed that the quantitative productivities of the CTSA program had a stable increase since 2008. In addition, the emerging trends of the research funded by the CTSA program covered clinical and basic medical research fields. The academic benefits from the CTSA program were assisting its members to build a robust academic home for the Clinical and Translational Science and to attract other financial support. This study provided a quantitative evaluation of the CTSA program based on science mapping and scientometric analysis. Further research is required to compare and optimize other quantitative methods and to integrate various research results. © 2013 Wiley Periodicals, Inc.

  13. CONFOCAL MICROSCOPY SYSTEM PERFORMANCE: QA TESTS, QUANTITATION AND SPECTROSCOPY

    EPA Science Inventory

    Confocal Microscopy System Performance: QA tests, Quantitation and Spectroscopy.

    Robert M. Zucker 1 and Jeremy M. Lerner 2,
    1Reproductive Toxicology Division, National Health and Environmental Effects Research Laboratory, Office of Research Development, U.S. Environmen...

  14. A new method to evaluate human-robot system performance

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.; Weisbin, C. R.

    2003-01-01

    One of the key issues in space exploration is that of deciding what space tasks are best done with humans, with robots, or a suitable combination of each. In general, human and robot skills are complementary. Humans provide as yet unmatched capabilities to perceive, think, and act when faced with anomalies and unforeseen events, but there can be huge potential risks to human safety in getting these benefits. Robots provide complementary skills in being able to work in extremely risky environments, but their ability to perceive, think, and act by themselves is currently not error-free, although these capabilities are continually improving with the emergence of new technologies. Substantial past experience validates these generally qualitative notions. However, there is a need for more rigorously systematic evaluation of human and robot roles, in order to optimize the design and performance of human-robot system architectures using well-defined performance evaluation metrics. This article summarizes a new analytical method to conduct such quantitative evaluations. While the article focuses on evaluating human-robot systems, the method is generally applicable to a much broader class of systems whose performance needs to be evaluated.

  15. A quantitative method for evaluating alternatives. [aid to decision making

    NASA Technical Reports Server (NTRS)

    Forthofer, M. J.

    1981-01-01

    When faced with choosing between alternatives, people tend to use a number of criteria (often subjective rather than objective) to decide which alternative is best for them in their unique situation. The subjectivity inherent in the decision-making process can be reduced by defining and using a quantitative method for evaluating alternatives. Such a method can help decision makers achieve a degree of uniformity and completeness in the evaluation process, as well as increased sensitivity to the factors involved. Additional benefits are better documentation and visibility of the rationale behind the resulting decisions. General guidelines for defining a quantitative method are presented, and a particular method (called the 'hierarchical weighted average') is defined and applied to the evaluation of design alternatives for a hypothetical computer system capability.
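    A minimal sketch of a hierarchical weighted average of the kind described above; the categories, criteria, weights, and scores are invented for illustration, not taken from the report.

```python
# Two-level 'hierarchical weighted average': criteria are scored and
# averaged within weighted categories, then categories are averaged
# with their own weights. All names and numbers are illustrative.

def weighted_average(scores: dict, weights: dict) -> float:
    """Weighted average of scores; weights need not sum to 1."""
    total_weight = sum(weights.values())
    return sum(scores[c] * weights[c] for c in weights) / total_weight

criteria_weights = {"cost": {"purchase": 2, "upkeep": 1},
                    "capability": {"speed": 3, "capacity": 1}}
category_weights = {"cost": 1, "capability": 2}

def hierarchical_score(alternative: dict) -> float:
    category_scores = {cat: weighted_average(alternative[cat], w)
                       for cat, w in criteria_weights.items()}
    return weighted_average(category_scores, category_weights)

design_a = {"cost": {"purchase": 7, "upkeep": 9},
            "capability": {"speed": 6, "capacity": 8}}
print(f"design A: {hierarchical_score(design_a):.2f}")
```

Ranking several alternatives then reduces to sorting them by `hierarchical_score`, which makes the rationale behind the decision explicit and documentable.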

  16. Evaluation of quantitative image analysis criteria for the high-resolution microendoscopic detection of neoplasia in Barrett's esophagus

    NASA Astrophysics Data System (ADS)

    Muldoon, Timothy J.; Thekkek, Nadhi; Roblyer, Darren; Maru, Dipen; Harpaz, Noam; Potack, Jonathan; Anandasabapathy, Sharmila; Richards-Kortum, Rebecca

    2010-03-01

    Early detection of neoplasia in patients with Barrett's esophagus is essential to improve outcomes. The aim of this ex vivo study was to evaluate the ability of high-resolution microendoscopic imaging and quantitative image analysis to identify neoplastic lesions in patients with Barrett's esophagus. Nine patients with pathologically confirmed Barrett's esophagus underwent endoscopic examination with biopsies or endoscopic mucosal resection. Resected fresh tissue was imaged with fiber bundle microendoscopy; images were analyzed by visual interpretation or by quantitative image analysis to predict whether the imaged sites were non-neoplastic or neoplastic. The best performing pair of quantitative features was chosen based on its ability to correctly classify the data into the two groups. Predictions were compared to the gold standard of histopathology. Subjective analysis of the images by expert clinicians achieved average sensitivity and specificity of 87% and 61%, respectively. The best performing quantitative classification algorithm relied on two image textural features and achieved a sensitivity and specificity of 87% and 85%, respectively. This ex vivo pilot trial demonstrates that quantitative analysis of images obtained with a simple microendoscope system can distinguish neoplasia in Barrett's esophagus with good sensitivity and specificity when compared to histopathology and to subjective image interpretation.
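    The sensitivity and specificity figures quoted above come from standard confusion-matrix arithmetic, which can be sketched as follows; the labels below are illustrative, not the trial's data.

```python
# Sensitivity and specificity of binary predictions against a gold
# standard (1 = neoplastic, 0 = non-neoplastic). Data are illustrative.

def sensitivity_specificity(predicted, actual):
    """Return (sensitivity, specificity) for paired 0/1 sequences."""
    tp = sum(1 for p, a in zip(predicted, actual) if p == 1 and a == 1)
    tn = sum(1 for p, a in zip(predicted, actual) if p == 0 and a == 0)
    fp = sum(1 for p, a in zip(predicted, actual) if p == 1 and a == 0)
    fn = sum(1 for p, a in zip(predicted, actual) if p == 0 and a == 1)
    return tp / (tp + fn), tn / (tn + fp)

actual    = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]   # histopathology gold standard
predicted = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]   # classifier output
sens, spec = sensitivity_specificity(predicted, actual)
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}")
```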

  17. Metrics for Offline Evaluation of Prognostic Performance

    NASA Technical Reports Server (NTRS)

    Saxena, Abhinav; Celaya, Jose; Saha, Bhaskar; Saha, Sankalita; Goebel, Kai

    2010-01-01

    Prognostic performance evaluation has gained significant attention in the past few years. Currently, prognostics concepts lack standard definitions and suffer from ambiguous and inconsistent interpretations. This lack of standards is due in part to varied end-user requirements for different applications, time scales, available information, and domain dynamics. The research community has used a variety of metrics chosen largely for convenience and their respective requirements. Very little attention has been focused on establishing a standardized approach for comparing different efforts. This paper presents several recently introduced evaluation metrics tailored for prognostics, which were shown to evaluate various algorithms more effectively than conventional metrics. Specifically, this paper presents a detailed discussion of how these metrics should be interpreted and used. These metrics can incorporate probabilistic uncertainty estimates from prognostic algorithms. In addition to quantitative assessment, they also offer a comprehensive visual perspective that can be used in designing the prognostic system. Several methods are suggested to customize these metrics for different applications, and guidelines are provided to help choose one method over another based on distribution characteristics. Various issues faced by prognostics and its performance evaluation are discussed, followed by a formal notational framework to help standardize subsequent developments.

  18. A Quantitative Assessment of Student Performance and Examination Format

    ERIC Educational Resources Information Center

    Davison, Christopher B.; Dustova, Gandzhina

    2017-01-01

    This research study describes the correlations between student performance and examination format in a higher education teaching and research institution. The researchers employed a quantitative, correlational methodology utilizing linear regression analysis. The data was obtained from undergraduate student test scores over a three-year time span.…

  19. Connecting qualitative observation and quantitative measurement for enhancing quantitative literacy in plant anatomy course

    NASA Astrophysics Data System (ADS)

    Nuraeni, E.; Rahmat, A.

    2018-05-01

    Cognitive schemes of plant anatomy concepts are formed by processing qualitative and quantitative data obtained from microscopic observations. To enhance students' quantitative literacy, the strategy of the plant anatomy course was modified by adding a task to analyze quantitative data produced by quantitative measurement of plant anatomy, guided by the course material. Participants in this study were 24 biology students and 35 biology education students. A quantitative literacy test, a test of complex thinking in plant anatomy, and a questionnaire were used to evaluate the course. Quantitative literacy data were collected with a quantitative literacy test using the rubric from the Association of American Colleges and Universities, complex thinking in plant anatomy was tested according to Marzano, and a questionnaire was administered. Quantitative literacy data were categorized according to modified Rhodes and Finley categories. The results showed that the quantitative literacy of the biology education students was better than that of the biology students.

  20. Facial asymmetry quantitative evaluation in oculoauriculovertebral spectrum.

    PubMed

    Manara, Renzo; Schifano, Giovanni; Brotto, Davide; Mardari, Rodica; Ghiselli, Sara; Gerunda, Antonio; Ghirotto, Cristina; Fusetti, Stefano; Piacentile, Katherine; Scienza, Renato; Ermani, Mario; Martini, Alessandro

    2016-03-01

    Facial asymmetries in oculoauriculovertebral spectrum (OAVS) patients might require surgical corrections that are mostly based on a qualitative approach and the surgeon's experience. The present study aimed to develop a quantitative 3D CT imaging-based procedure suitable for maxillo-facial surgery planning in OAVS patients. Thirteen OAVS patients (mean age 3.5 ± 4.0 years; range 0.2-14.2, 6 females) and 13 controls (mean age 7.1 ± 5.3 years; range 0.6-15.7, 5 females) who underwent head CT examination were retrospectively enrolled. Eight bilateral anatomical facial landmarks were defined on 3D CT images (porion, orbitale, most anterior point of the frontozygomatic suture, most superior point of the temporozygomatic suture, most posterior-lateral point of the maxilla, gonion, condylion, mental foramen), and distance from orthogonal planes (in millimeters) was used to evaluate the asymmetry on each axis and to calculate a global asymmetry index for each anatomical landmark. Mean asymmetry values and relative confidence intervals were obtained from the control group. OAVS patients showed 2.5 ± 1.8 landmarks above the confidence interval when considering the global asymmetry values; 12 patients (92%) showed at least one pathologically asymmetric landmark. Considering each axis, the mean number of pathologically asymmetric landmarks increased to 5.5 ± 2.6 (p = 0.002), and all patients presented at least one significant landmark asymmetry. Modern CT-based 3D reconstructions allow accurate assessment of facial bone asymmetries in patients affected by OAVS. Evaluation as a global score and along different orthogonal axes provides precise quantitative data suitable for maxillo-facial surgical planning. CT-based 3D reconstruction might allow a quantitative approach for planning and following up maxillo-facial surgery in OAVS patients.
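    The abstract does not give the exact formula for the global asymmetry index; one plausible sketch, assuming per-axis asymmetries are the left/right differences in landmark distance from the orthogonal planes and are combined in a Euclidean sense, is:

```python
import math

# Hedged sketch of landmark asymmetry indices: per-axis asymmetry as
# the difference of mirrored left/right distances (mm) from the
# reference planes, combined into one global value per landmark pair.
# The Euclidean combination and all coordinates are assumptions.

def axis_asymmetry(left: float, right: float) -> float:
    """Asymmetry on one axis from mirrored landmark distances (mm)."""
    return abs(left - right)

def global_asymmetry(left_xyz, right_xyz) -> float:
    """Combine per-axis asymmetries of one landmark pair (mm)."""
    return math.sqrt(sum(axis_asymmetry(l, r) ** 2
                         for l, r in zip(left_xyz, right_xyz)))

# Illustrative gonion distances from the three orthogonal planes (mm).
left, right = (42.0, 55.5, 30.2), (44.5, 54.0, 28.2)
print(f"global asymmetry: {global_asymmetry(left, right):.2f} mm")
```

A patient's landmark would then be flagged when its global value exceeds the control group's confidence interval, mirroring the screening logic described above.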

  1. A quantitative approach to evaluating caring in nursing simulation.

    PubMed

    Eggenberger, Terry L; Keller, Kathryn B; Chase, Susan K; Payne, Linda

    2012-01-01

    This study was designed to test a quantitative method of measuring caring in the simulated environment. Since competency in caring is central to nursing practice, ways of including caring concepts in designing scenarios and in evaluation of performance need to be developed. Coates' Caring Efficacy scales were adapted for simulation and named the Caring Efficacy Scale-Simulation Student Version (CES-SSV) and Caring Efficacy Scale-Simulation Faculty Version (CES-SFV). A correlational study was designed to compare student self-ratings with faculty ratings on caring efficacy during an adult acute simulation experience with traditional and accelerated baccalaureate students in a nursing program grounded in caring theory. Student self-ratings were significantly correlated with objective ratings (r = 0.345, 0.356). Both the CES-SSV and the CES-SFV were found to have excellent internal consistency and significantly correlated interrater reliability. They were useful in measuring caring in the simulated learning environment.

  2. Performance Evaluation of the Bioneer AccuPower® HIV-1 Quantitative RT-PCR kit: Comparison with the Roche COBAS® AmpliPrep/COBAS TaqMan® HIV-1 Test Ver.2.0 for Quantification of HIV-1 Viral Load in Indonesia.

    PubMed

    Kosasih, Agus Susanto; Sugiarto, Christine; Hayuanta, Hubertus Hosti; Juhaendi, Runingsih; Setiawan, Lyana

    2017-08-08

    Measurement of viral load in human immunodeficiency virus type 1 (HIV-1) infected patients is essential for the establishment of a therapeutic strategy. Several assays based on qPCR are available for the measurement of viral load; they differ in sample volume, technology applied, target gene, sensitivity and dynamic range. The Bioneer AccuPower® HIV-1 Quantitative RT-PCR is a novel commercial kit whose performance has not been evaluated. This study aimed to evaluate the performance of the Bioneer AccuPower® HIV-1 Quantitative RT-PCR kit. In total, 288 EDTA plasma samples from the Dharmais Cancer Hospital were analyzed with the Bioneer AccuPower® HIV-1 Quantitative RT-PCR kit and the Roche COBAS® AmpliPrep/COBAS® TaqMan® HIV-1 version 2.0 (CAP/CTM v2.0). The performance of the Bioneer assay was then evaluated against the Roche CAP/CTM v2.0. Overall, there was good agreement between the two assays. The Bioneer assay showed significant linear correlation with CAP/CTM v2.0 (R2=0.963, p<0.001) for all samples (N=118) that were quantified by both assays, with high agreement (94.9%, 112/118) according to the Bland-Altman model. The mean difference between the quantitative values measured by the Bioneer assay and CAP/CTM v2.0 was 0.11 Log10 IU/mL (SD=0.26). Based on these results, the Bioneer assay can be used to quantify HIV-1 RNA in clinical laboratories.
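    The Bland-Altman agreement analysis referred to above (mean difference plus 95% limits of agreement) can be sketched as follows; the paired log10 viral loads are illustrative values, not the study's data.

```python
import statistics

# Bland-Altman sketch: bias (mean difference) and 95% limits of
# agreement between two assays on paired log10 viral loads.
# The paired values are illustrative.

def bland_altman(a, b):
    """Return (mean difference, (lower LoA, upper LoA)) for pairs."""
    diffs = [x - y for x, y in zip(a, b)]
    mean = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return mean, (mean - 1.96 * sd, mean + 1.96 * sd)

bioneer = [2.1, 3.4, 4.8, 5.2, 3.9]   # log10 IU/mL (illustrative)
roche   = [2.0, 3.2, 4.9, 5.0, 3.7]
bias, (lo, hi) = bland_altman(bioneer, roche)
print(f"bias {bias:.2f} log10 IU/mL, LoA [{lo:.2f}, {hi:.2f}]")
```

Points falling inside the limits of agreement count toward the percentage-agreement figure (94.9% in the study above).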

  3. Comparative evaluation of performance measures for shading correction in time-lapse fluorescence microscopy.

    PubMed

    Liu, L; Kan, A; Leckie, C; Hodgkin, P D

    2017-04-01

    Time-lapse fluorescence microscopy is a valuable technology in cell biology, but it suffers from the inherent problem of intensity inhomogeneity due to uneven illumination or camera nonlinearity, known as shading artefacts. This will lead to inaccurate estimates of single-cell features such as average and total intensity. Numerous shading correction methods have been proposed to remove this effect. In order to compare the performance of different methods, many quantitative performance measures have been developed. However, there is little discussion about which performance measure should be generally applied for evaluation on real data, where the ground truth is absent. In this paper, the state-of-the-art shading correction methods and performance evaluation methods are reviewed. We implement 10 popular shading correction methods on two artificial datasets and four real ones. In order to make an objective comparison between those methods, we employ a number of quantitative performance measures. Extensive validation demonstrates that the coefficient of joint variation (CJV) is the most applicable measure in time-lapse fluorescence images. Based on this measure, we have proposed a novel shading correction method that performs better compared to well-established methods for a range of real data tested. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
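    The coefficient of joint variation (CJV) recommended above is commonly defined between two intensity classes as (sigma_1 + sigma_2) / |mu_1 - mu_2|; a minimal sketch under that assumption, with illustrative foreground/background pixel values:

```python
import statistics

# CJV sketch between foreground (cell) and background intensities:
# (sd_f + sd_b) / |mean_f - mean_b|. Lower values indicate better
# shading correction. The class assignment and pixel values are
# illustrative assumptions.

def cjv(foreground, background):
    mu_f, mu_b = statistics.mean(foreground), statistics.mean(background)
    sd_f, sd_b = statistics.stdev(foreground), statistics.stdev(background)
    return (sd_f + sd_b) / abs(mu_f - mu_b)

cells = [200, 210, 190, 205, 195]   # foreground intensities
bg = [50, 55, 45, 52, 48]           # background intensities
print(f"CJV: {cjv(cells, bg):.3f}")
```

Comparing CJV before and after correction then gives a ground-truth-free measure of how much a shading-correction method improved class separability.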

  4. A whole-cell bioreporter assay for quantitative genotoxicity evaluation of environmental samples.

    PubMed

    Jiang, Bo; Li, Guanghe; Xing, Yi; Zhang, Dayi; Jia, Jianli; Cui, Zhisong; Luan, Xiao; Tang, Hui

    2017-10-01

    Whole-cell bioreporters have emerged as promising tools for genotoxicity evaluation, due to their rapidity, cost-effectiveness, sensitivity and selectivity. In this study, a method for detecting genotoxicity in environmental samples was developed using the bioluminescent whole-cell bioreporter Escherichia coli recA::luxCDABE. To further test its performance in a real-world scenario, the E. coli bioreporter was applied in two cases: i) soil samples collected from chromium(VI) contaminated sites; ii) crude oil contaminated seawater collected after the Jiaozhou Bay oil spill that occurred in 2013. The chromium(VI) contaminated soils were pretreated by water extraction and directly exposed to the bioreporter in two phases: aqueous soil extraction (water phase) and soil supernatant (solid phase). The results indicated that both extractable and soil-particle-fixed chromium(VI) were bioavailable to the bioreporter, and the solid-phase contact bioreporter assay provided a more precise evaluation of soil genotoxicity. For crude oil contaminated seawater, the response of the bioreporter clearly illustrated the spatial and temporal changes in genotoxicity surrounding the spill site, suggesting that the crude oil degradation process decreased the genotoxic risk to the ecosystem. In addition, the performance of the bioreporter was simulated by a modified cross-regulation gene expression model, which quantitatively described the DNA damage response of the E. coli bioreporter. Accordingly, the bioluminescent response of the bioreporter was calculated as the mitomycin C equivalent, enabling quantitative comparison of genotoxicities between different environmental samples. This bioreporter assay provides a rapid and sensitive screening tool for direct genotoxicity assessment of environmental samples. Copyright © 2017. Published by Elsevier Ltd.

  5. Quantitative evaluation of toothbrush and arm-joint motion during tooth brushing.

    PubMed

    Inada, Emi; Saitoh, Issei; Yu, Yong; Tomiyama, Daisuke; Murakami, Daisuke; Takemoto, Yoshihiko; Morizono, Ken; Iwasaki, Tomonori; Iwase, Yoko; Yamasaki, Youichi

    2015-07-01

    It is very difficult for dental professionals to objectively assess the tooth brushing skill of patients, because an obvious index for assessing the brushing motion of patients has not been established. The purpose of this study was to quantitatively evaluate toothbrush and arm-joint motion during tooth brushing. Tooth brushing motion, performed by dental hygienists for 15 s, was captured using a motion-capture system that continuously calculates the three-dimensional coordinates of an object's motion relative to the floor. The dental hygienists performed the tooth brushing on the buccal and palatal sides of their right and left upper molars. The frequencies and power spectra of the toothbrush motion and the joint angles of the shoulder, elbow, and wrist were calculated and analyzed statistically. The frequency of toothbrush motion was higher on the left side (both buccal and palatal areas) than on the right side. There were no significant differences among joint angle frequencies within each brushing area. The inter- and intra-individual variations of the power spectrum of the elbow flexion angle when brushing were smaller than for any of the other angles. This study quantitatively confirmed that dental hygienists have individually distinctive rhythms during tooth brushing. All arm joints moved synchronously during brushing, and tooth brushing motion was controlled by coordinated movement of the joints. The elbow generated an individual's frequency through a stabilizing movement, while the shoulder and wrist controlled the hand motion and the elbow generated the cyclic rhythm during tooth brushing.

  6. Online versus Paper Evaluations: Differences in Both Quantitative and Qualitative Data

    ERIC Educational Resources Information Center

    Burton, William B.; Civitano, Adele; Steiner-Grossman, Penny

    2012-01-01

    This study sought to determine if differences exist in the quantitative and qualitative data collected with paper and online versions of a medical school clerkship evaluation form. Data from six-and-a-half years of clerkship evaluations were used, some collected before and some after the conversion from a paper to an online evaluation system. The…

  7. A new method to evaluate image quality of CBCT images quantitatively without observers

    PubMed Central

    Shimizu, Mayumi; Okamura, Kazutoshi; Yoshida, Shoko; Weerawanich, Warangkana; Tokumori, Kenji; Jasa, Gainer R; Yoshiura, Kazunori

    2017-01-01

    Objectives: To develop an observer-free method for quantitatively evaluating the image quality of CBCT images by applying just-noticeable difference (JND). Methods: We used two test objects: (1) a Teflon (polytetrafluoroethylene) plate phantom attached to a dry human mandible; and (2) a block phantom consisting of a Teflon step phantom and an aluminium step phantom. These phantoms had holes with different depths. They were immersed in water and scanned with a CB MercuRay (Hitachi Medical Corporation, Tokyo, Japan) at tube voltages of 120 kV, 100 kV, 80 kV and 60 kV. Superimposed images of the phantoms with holes were used for evaluation. The number of detectable holes was used as an index of image quality. In detecting holes quantitatively, the threshold grey value (ΔG), which differentiated holes from the background, was calculated using a specific threshold (the JND), and we extracted the holes with grey values above ΔG. The indices obtained by this quantitative method (the extracted hole values) were compared with the observer evaluations (the observed hole values). In addition, the contrast-to-noise ratio (CNR) of the shallowest detectable holes and the deepest undetectable holes were measured to evaluate the contribution of CNR to detectability. Results: The results of this evaluation method corresponded almost exactly with the evaluations made by observers. The extracted hole values reflected the influence of different tube voltages. All extracted holes had an area with a CNR of ≥1.5. Conclusions: This quantitative method of evaluating CBCT image quality may be more useful and less time-consuming than evaluation by observation. PMID:28045343
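    The contrast-to-noise ratio (CNR) check described above (detectable holes had CNR >= 1.5) can be sketched with a common definition, CNR = |mu_roi - mu_bg| / sd_bg; both the formula and the grey values are assumptions for illustration.

```python
import statistics

# CNR sketch for hole detectability in CBCT grey values:
# contrast between a region of interest and its background,
# normalized by the background noise. Values are illustrative.

def cnr(roi, background):
    contrast = abs(statistics.mean(roi) - statistics.mean(background))
    return contrast / statistics.stdev(background)

hole = [118, 121, 120, 119, 122]   # grey values inside a hole
bg = [100, 104, 96, 102, 98]       # surrounding background
value = cnr(hole, bg)
print(f"CNR: {value:.2f} (detectable at threshold 1.5: {value >= 1.5})")
```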

  8. Blind Source Parameters for Performance Evaluation of Despeckling Filters.

    PubMed

    Biradar, Nagashettappa; Dewal, M L; Rohit, ManojKumar; Gowre, Sanjaykumar; Gundge, Yogesh

    2016-01-01

The speckle noise is inherent to transthoracic echocardiographic images. A standard noise-free reference echocardiographic image does not exist. The evaluation of filters based on traditional parameters such as peak signal-to-noise ratio, mean square error, and the structural similarity index may not reflect the true filter performance on echocardiographic images. Therefore, the performance of despeckling can be evaluated using blind assessment metrics such as the speckle suppression index (SSI), the speckle suppression and mean preservation index (SMPI), and the beta metric. These three parameters overcome the need for a noise-free reference image. This paper presents a comprehensive analysis and evaluation of eleven types of despeckling filters for echocardiographic images in terms of blind and traditional performance parameters, along with clinical validation. The noise is effectively suppressed using logarithmic neighborhood shrinkage (NeighShrink) embedded with Stein's unbiased risk estimation (SURE). The SMPI is three times more effective compared with the wavelet-based generalized likelihood estimation approach. The quantitative evaluation and clinical validation reveal that filters such as the nonlocal mean, posterior-sampling-based Bayesian estimation, hybrid median, and probabilistic patch-based filters are acceptable, whereas the median, anisotropic diffusion, fuzzy, and Ripplet nonlinear approximation filters have limited applications for echocardiographic images.
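Of the three blind metrics, the speckle suppression index is the simplest. A minimal sketch, assuming the common definition as the ratio of the coefficients of variation of the filtered and original images (the SMPI and beta metric used in the paper are more involved):

```python
import numpy as np

def speckle_suppression_index(original, filtered):
    """SSI: coefficient of variation of the filtered image divided by
    that of the original. Values below 1 indicate speckle suppression;
    no noise-free reference image is required."""
    cv_orig = original.std() / original.mean()
    cv_filt = filtered.std() / filtered.mean()
    return cv_filt / cv_orig
```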

  9. Blind Source Parameters for Performance Evaluation of Despeckling Filters

    PubMed Central

    Biradar, Nagashettappa; Dewal, M. L.; Rohit, ManojKumar; Gowre, Sanjaykumar; Gundge, Yogesh

    2016-01-01

The speckle noise is inherent to transthoracic echocardiographic images. A standard noise-free reference echocardiographic image does not exist. The evaluation of filters based on traditional parameters such as peak signal-to-noise ratio, mean square error, and the structural similarity index may not reflect the true filter performance on echocardiographic images. Therefore, the performance of despeckling can be evaluated using blind assessment metrics such as the speckle suppression index (SSI), the speckle suppression and mean preservation index (SMPI), and the beta metric. These three parameters overcome the need for a noise-free reference image. This paper presents a comprehensive analysis and evaluation of eleven types of despeckling filters for echocardiographic images in terms of blind and traditional performance parameters, along with clinical validation. The noise is effectively suppressed using logarithmic neighborhood shrinkage (NeighShrink) embedded with Stein's unbiased risk estimation (SURE). The SMPI is three times more effective compared with the wavelet-based generalized likelihood estimation approach. The quantitative evaluation and clinical validation reveal that filters such as the nonlocal mean, posterior-sampling-based Bayesian estimation, hybrid median, and probabilistic patch-based filters are acceptable, whereas the median, anisotropic diffusion, fuzzy, and Ripplet nonlinear approximation filters have limited applications for echocardiographic images. PMID:27298618

  10. Quantitative and Qualitative Change in Children's Mental Rotation Performance

    ERIC Educational Resources Information Center

    Geiser, Christian; Lehmann, Wolfgang; Corth, Martin; Eid, Michael

    2008-01-01

    This study investigated quantitative and qualitative changes in mental rotation performance and solution strategies with a focus on sex differences. German children (N = 519) completed the Mental Rotations Test (MRT) in the 5th and 6th grades (interval: one year; age range at time 1: 10-11 years). Boys on average outperformed girls on both…

  11. An adaptive model approach for quantitative wrist rigidity evaluation during deep brain stimulation surgery.

    PubMed

    Assis, Sofia; Costa, Pedro; Rosas, Maria Jose; Vaz, Rui; Silva Cunha, Joao Paulo

    2016-08-01

Intraoperative evaluation of the efficacy of Deep Brain Stimulation includes evaluation of its effect on rigidity. A subjective semi-quantitative scale is used, dependent on the examiner's perception and experience. A system was proposed previously to tackle this subjectivity, using quantitative data and providing real-time feedback of the computed rigidity reduction, hence supporting the physician's decision. This system comprised a gyroscope-based motion sensor in a textile band, placed on the patient's hand, which communicated its measurements to a laptop. The laptop computed a signal descriptor from the angular velocity of the hand during wrist flexion in DBS surgery. The first approach relied on a general rigidity-reduction model, regardless of the initial severity of the symptom. Thus, to enhance the performance of the previously presented system, we aimed to develop separate models for high and low baseline rigidity, according to the examiner's assessment before any stimulation, allowing a more patient-oriented approach. Additionally, usability was improved by performing the processing in situ on a smartphone instead of a computer. The system has proven reliable, presenting an accuracy of 82.0% and a mean error of 3.4%. Relative to previous results, the performance was similar, further supporting the importance of considering cogwheel rigidity to better infer the reduction in rigidity. Overall, we present a simple, wearable, mobile system, suitable for intraoperative conditions during DBS, that supports the physician's decision-making when setting stimulation parameters.

  12. A Systematic Quantitative-Qualitative Model: How To Evaluate Professional Services

    ERIC Educational Resources Information Center

    Yoda, Koji

    1973-01-01

    The proposed evaluation model provides for the assignment of relative weights to each criterion, and establishes a weighting system for calculating a quantitative-qualitative raw score for each service activity of a faculty member being reviewed. (Author)

  13. Quantitative evaluation of lipid concentration in atherosclerotic plaque phantom by near-infrared multispectral angioscope at wavelengths around 1200 nm

    NASA Astrophysics Data System (ADS)

    Matsui, Daichi; Ishii, Katsunori; Awazu, Kunio

    2015-07-01

Atherosclerosis is a primary cause of critical ischemic diseases such as myocardial infarction and stroke. A method that can provide detailed information about the stability of atherosclerotic plaques is required. We focused on spectroscopic techniques that could evaluate the chemical composition of lipid in plaques. A novel angioscope using multispectral imaging at wavelengths around 1200 nm was developed for quantitative evaluation of atherosclerotic plaques. The angioscope consists of a halogen lamp, an indium gallium arsenide (InGaAs) camera, 3 optical band-pass filters transmitting wavelengths of 1150, 1200, and 1300 nm, an image fiber with a 0.7 mm outer diameter, and an irradiation fiber consisting of 7 multimode fibers. Atherosclerotic plaque phantoms with 100, 60, and 20 vol.% of lipid were prepared and measured by the multispectral angioscope. The acquired datasets were processed by the spectral angle mapper (SAM) method. As a result, simulated plaque areas in atherosclerotic plaque phantoms that could not be detected in an angioscopic visible image could be clearly enhanced. In addition, quantitative evaluation of atherosclerotic plaque phantoms based on the lipid volume fractions was performed down to 20 vol.%. These results show the potential of a multispectral angioscope at wavelengths around 1200 nm for quantitative evaluation of the stability of atherosclerotic plaques.
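The spectral angle mapper used to process the multispectral datasets compares each pixel spectrum against reference spectra by the angle between them, which makes classification insensitive to overall brightness. A minimal sketch (hypothetical helper names; the band count and threshold are illustrative, not the paper's calibration):

```python
import numpy as np

def spectral_angle(pixel, reference):
    """SAM angle in radians between a pixel spectrum and a reference
    spectrum; smaller angles indicate a closer spectral match."""
    cos_theta = np.dot(pixel, reference) / (
        np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

def classify(pixel, references, threshold):
    """Assign the pixel to the reference (e.g. a lipid spectrum) with
    the smallest angle, if that angle is below the threshold."""
    angles = {name: spectral_angle(pixel, ref) for name, ref in references.items()}
    best = min(angles, key=angles.get)
    return best if angles[best] <= threshold else None
```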

  14. Applying Quantitative Approaches to the Formative Evaluation of Antismoking Campaign Messages

    PubMed Central

    Parvanta, Sarah; Gibson, Laura; Forquer, Heather; Shapiro-Luft, Dina; Dean, Lorraine; Freres, Derek; Lerman, Caryn; Mallya, Giridhar; Moldovan-Johnson, Mihaela; Tan, Andy; Cappella, Joseph; Hornik, Robert

    2014-01-01

    This article shares an in-depth summary of a formative evaluation that used quantitative data to inform the development and selection of promotional ads for the antismoking communication component of a social marketing campaign. A foundational survey provided cross-sectional data to identify beliefs about quitting smoking that campaign messages should target, as well as beliefs to avoid. Pretesting draft ads against quantitative indicators of message effectiveness further facilitated the selection and rejection of final campaign ads. Finally, we consider lessons learned from the process of balancing quantitative methods and judgment to make formative decisions about more and less promising persuasive messages for campaigns. PMID:24817829

  15. Style-independent document labeling: design and performance evaluation

    NASA Astrophysics Data System (ADS)

    Mao, Song; Kim, Jong Woo; Thoma, George R.

    2003-12-01

The Medical Article Records System or MARS has been developed at the U.S. National Library of Medicine (NLM) for automated data entry of bibliographical information from medical journals into MEDLINE, the premier bibliographic citation database at NLM. Currently, a rule-based algorithm (called ZoneCzar) is used for labeling important bibliographical fields (title, author, affiliation, and abstract) on medical journal article page images. While rules have been created for medical journals with regular layout types, new rules have to be manually created for any input journals with arbitrary or new layout types. Therefore, it is of interest to label any journal articles independent of their layout styles. In this paper, we first describe a system (called ZoneMatch) for automated generation of crucial geometric and non-geometric features of important bibliographical fields based on string-matching and clustering techniques. The rule-based algorithm is then modified to use these features to perform style-independent labeling. We then describe a performance evaluation method for quantitatively evaluating our algorithm and characterizing its error distributions. Experimental results show that the labeling performance of the rule-based algorithm is significantly improved when the generated features are used.

  16. Performance Evaluation of Single Sideband Radio over Fiber System through Modulation Index Enhancement

    NASA Astrophysics Data System (ADS)

    Chen, Xiaogang; Hu, Xizhen; Huang, Dexiu

    2014-09-01

The transmission performance of a single sideband (SSB) radio over fiber (RoF) system is evaluated by tuning the modulation index of the Mach-Zehnder modulator; two different data modulation schemes and the influence of fiber dispersion are considered. The quantitative simulation results validate that there exists an optimum modulation index and that the system performance can be improved if the data signal is modulated onto only the optical carrier or the sidebands.

  17. Quantitative Evaluation of Heavy Duty Machine Tools Remanufacturing Based on Modified Catastrophe Progression Method

    NASA Astrophysics Data System (ADS)

    shunhe, Li; jianhua, Rao; lin, Gui; weimin, Zhang; degang, Liu

    2017-11-01

The result of remanufacturing evaluation is the basis for judging whether a heavy duty machine tool can be remanufactured in the end-of-life (EOL) stage of the machine tool lifecycle. The objectivity and accuracy of the evaluation is the key to the evaluation method. In this paper, the catastrophe progression method is introduced into the quantitative evaluation of heavy duty machine tools' remanufacturing, and the results are modified by the comprehensive adjustment method, which makes the evaluation results accord with conventional human judgment. The catastrophe progression method is used to establish a quantitative evaluation model for heavy duty machine tools' remanufacturing and to evaluate the remanufacturing of a retired TK6916 type CNC floor milling-boring machine. The evaluation process is simple and highly quantitative, and the result is objective.

  18. Evaluating the Quantitative Capabilities of Metagenomic Analysis Software.

    PubMed

    Kerepesi, Csaba; Grolmusz, Vince

    2016-05-01

DNA sequencing technologies are applied widely and frequently today to describe metagenomes, i.e., microbial communities in environmental or clinical samples, without the need for culturing them. These technologies usually return short (100-300 base-pair) DNA reads, and these reads are processed by metagenomic analysis software that assigns phylogenetic composition information to the dataset. Here we evaluate three metagenomic analysis tools (AmphoraNet, a webserver implementation of AMPHORA2; MG-RAST; and MEGAN5) for their capability of assigning quantitative phylogenetic information to the data, describing the frequency of appearance of microorganisms of the same taxa in the sample. The difficulty of the task arises from the fact that longer genomes produce more reads from the same organism than shorter genomes, and some tools assign higher frequencies to species with longer genomes than to those with shorter ones. This phenomenon is called the "genome length bias." Dozens of complex artificial metagenome benchmarks can be found in the literature. Because of the complexity of those benchmarks, it is usually difficult to judge the resistance of a metagenomic tool to this genome length bias. Therefore, we have made a simple benchmark for evaluating the "taxon counting" of a metagenomic sample: we took the same number of copies of three full bacterial genomes of different lengths, broke them up randomly into short reads with an average length of 150 bp, and mixed the reads to create our simple benchmark. Because of its simplicity, the benchmark is not supposed to serve as a mock metagenome, but if a tool fails on this simple task, it will surely fail on most real metagenomes. We applied the three tools to the benchmark; the ideal quantitative solution would assign the same proportion to the three bacterial taxa. We found that AMPHORA2/AmphoraNet gave the most accurate results, while the other two tools under-performed.
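The benchmark construction described above, equal copies of each genome shredded into random 150 bp reads, can be sketched as follows (hypothetical helper names; real reads would carry sequencing errors and length variation that this sketch omits):

```python
import random

def shred(genome, n_reads, read_len=150, seed=None):
    """Break a genome string into n_reads random substrings of read_len,
    mimicking the paper's simple fragmentation step."""
    rng = random.Random(seed)
    starts = [rng.randrange(len(genome) - read_len + 1) for _ in range(n_reads)]
    return [genome[s:s + read_len] for s in starts]

def equal_copy_benchmark(genomes, copies=1, read_len=150, seed=0):
    """Take the same number of copies of each genome. Longer genomes
    yield proportionally more reads (roughly 1x coverage per copy),
    reproducing the genome length bias an ideal taxon-counter must
    correct for."""
    rng = random.Random(seed)
    reads = []
    for genome in genomes:
        # one read per read_len of sequence, per copy
        n = copies * (len(genome) // read_len)
        reads += shred(genome, n, read_len, seed=rng.random())
    rng.shuffle(reads)
    return reads
```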

  19. Quantitative Guidance for Stove Usage and Performance to Achieve Health and Environmental Targets

    PubMed Central

    Chiang, Ranyee A.

    2015-01-01

Background: Displacing the use of polluting and inefficient cookstoves in developing countries is necessary to achieve the potential health and environmental benefits sought through clean cooking solutions. Yet little quantitative context has been provided on how much displacement of traditional technologies is needed to achieve targets for household air pollutant concentrations or fuel savings. Objectives: This paper provides instructive guidance on the usage of cooking technologies required to achieve health and environmental improvements. Methods: We evaluated different scenarios of displacement of traditional stoves with use of higher performing technologies. The air quality and fuel consumption impacts were estimated for these scenarios using a single-zone box model of indoor air quality and ratios of thermal efficiency. Results: Stove performance and usage should be considered together, as lower performing stoves can result in similar or greater benefits than a higher performing stove if the lower performing stove has considerably higher displacement of the baseline stove. Based on the indoor air quality model, there are multiple performance–usage scenarios for achieving modest indoor air quality improvements. To meet World Health Organization guidance levels, however, three-stone fire and basic charcoal stove usage must be nearly eliminated to achieve the particulate matter target (< 1–3 hr/week), and substantially limited to meet the carbon monoxide guideline (< 7–9 hr/week). Conclusions: Moderate health gains may be achieved with various performance–usage scenarios. The greatest benefits are estimated to be achieved by near-complete displacement of traditional stoves with clean technologies, emphasizing the need to shift in the long term to near-exclusive use of clean fuels and stoves. The performance–usage scenarios are also provided as a tool to guide technology selection and prioritize behavior change opportunities to maximize impact.
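The single-zone box model underlying the air-quality estimates can be illustrated in drastically simplified form (steady state, ignoring outdoor background, deposition, and time variation; function names and numbers are illustrative, not the paper's):

```python
def steady_state_concentration(emission_mg_per_h, volume_m3, ach):
    """Simplified single-zone box model at steady state:
    C = E / (V * a), where E is the emission rate, V the room volume,
    and a the air changes per hour. Returns mg/m^3."""
    return emission_mg_per_h / (volume_m3 * ach)

def blended_emission(stove_emissions, usage_fractions):
    """Usage-weighted emission rate for a stove mix, e.g. partial
    displacement of a three-stone fire by a cleaner stove."""
    return sum(e * f for e, f in zip(stove_emissions, usage_fractions))
```

Varying the usage fractions in `blended_emission` mirrors the paper's performance–usage scenarios: a cleaner stove helps only to the extent it actually displaces the baseline stove.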

  20. A Framework for Quantitative Evaluation of Care Coordination Effectiveness

    ERIC Educational Resources Information Center

    Liu, Wei

    2017-01-01

    The U.S. healthcare system lacks incentives and quantitative evaluation tools to assess coordination in a patient's care transition process. This is needed because poor care coordination has been identified by many studies as one of the major root causes for the U.S. health system's inefficiency, for poor outcomes, and for high cost. Despite…

  1. Apprentice Performance Evaluation.

    ERIC Educational Resources Information Center

    Gast, Clyde W.

    The Granite City (Illinois) Steel apprentices are under a performance evaluation from entry to graduation. Federally approved, the program is guided by joint apprenticeship committees whose monthly meetings include performance evaluation from three information sources: journeymen, supervisors, and instructors. Journeymen's evaluations are made…

  2. [Evaluation of YAG-laser vitreolysis effectiveness based on quantitative characterization of vitreous floaters].

    PubMed

    Shaimova, V A; Shaimov, T B; Shaimov, R B; Galin, A Yu; Goloshchapova, Zh A; Ryzhkov, P K; Fomin, A V

    2018-01-01

To develop methods for evaluating the effectiveness of YAG-laser vitreolysis of vitreous floaters. The study included 144 patients (173 eyes) who had undergone YAG-laser vitreolysis and were under observation from 01.09.16 to 31.01.18. The patients were 34 to 86 years old (mean age 62.7±10.2 years); 28 (19.4%) patients were male and 116 (80.6%) were female. All patients underwent standard and additional examination: ultrasonography (Accutome B-scan plus, U.S.A.), optical biometry (Lenstar 900, Haag-Streit, Switzerland), spectral optical coherence tomography using the RTVue XR Avanti scanner (Optovue, U.S.A.) in the Enhanced HD Line, 3D Retina, 3D Widefield MCT, Cross Line, and Angio Retina modes, and scanning laser ophthalmoscopy (SLO) using the Navilas 577s system. Laser vitreolysis was performed using the Ultra Q Reflex laser (Ellex, Australia). This paper presents methods of objective quantitative and qualitative assessment of artifactual shadows of vitreous floaters with the spectral optical coherence tomography scanner RTVue XR Avanti, employing an algorithm of automatic detection of non-perfusion zones in the Angio Retina and HD Angio Retina modes, as well as foveal avascular zone (FAZ) measurement with Angio Analytics® software. SLO performed with the Navilas 577s was used to visualize floaters and artifactual shadows in the retinal surface layers before surgical treatment and after YAG-laser vitreolysis. The suggested methods of quantitative and qualitative assessment of the artifactual shadows of floaters in the retinal layers are promising and may prove highly relevant for clinical monitoring of patients, optimization of treatment indications and evaluation of the effectiveness of YAG-laser vitreolysis. Further research of laser vitreolysis effectiveness in patients with vitreous floaters is necessary.

  3. Gender, Math Confidence, and Grit: Relationships with Quantitative Skills and Performance in an Undergraduate Biology Course

    PubMed Central

    Flanagan, K. M.; Einarson, J.

    2017-01-01

    In a world filled with big data, mathematical models, and statistics, the development of strong quantitative skills is becoming increasingly critical for modern biologists. Teachers in this field must understand how students acquire quantitative skills and explore barriers experienced by students when developing these skills. In this study, we examine the interrelationships among gender, grit, and math confidence for student performance on a pre–post quantitative skills assessment and overall performance in an undergraduate biology course. Here, we show that females significantly underperformed relative to males on a quantitative skills assessment at the start of term. However, females showed significantly higher gains over the semester, such that the gender gap in performance was nearly eliminated by the end of the semester. Math confidence plays an important role in the performance on both the pre and post quantitative skills assessments and overall performance in the course. The effect of grit on student performance, however, is mediated by a student’s math confidence; as math confidence increases, the positive effect of grit decreases. Consequently, the positive impact of a student’s grittiness is observed most strongly for those students with low math confidence. We also found grit to be positively associated with the midterm score and the final grade in the course. Given the relationships established in this study among gender, grit, and math confidence, we provide “instructor actions” from the literature that can be applied in the classroom to promote the development of quantitative skills in light of our findings. PMID:28798209
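The reported pattern, grit's benefit shrinking as math confidence rises, corresponds to a regression with a grit × confidence interaction term whose coefficient is negative. A minimal sketch with synthetic data (not the study's data, scales, or exact model):

```python
import numpy as np

def fit_interaction(y, grit, conf):
    """OLS fit of y = b0 + b1*grit + b2*conf + b3*(grit*conf).
    A negative b3 matches the reported moderation: the positive
    effect of grit decreases as math confidence increases."""
    X = np.column_stack([np.ones_like(grit), grit, conf, grit * conf])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta
```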

  4. Near-infrared fluorescence image quality test methods for standardized performance evaluation

    NASA Astrophysics Data System (ADS)

    Kanniyappan, Udayakumar; Wang, Bohan; Yang, Charles; Ghassemi, Pejhman; Wang, Quanzeng; Chen, Yu; Pfefer, Joshua

    2017-03-01

Near-infrared fluorescence (NIRF) imaging has gained much attention as a clinical method for enhancing visualization of cancers, perfusion and biological structures in surgical applications where a fluorescent dye is monitored by an imaging system. In order to address the emerging need for standardization of this innovative technology, it is necessary to develop and validate test methods suitable for objective, quantitative assessment of device performance. Towards this goal, we develop target-based test methods and investigate best practices for key NIRF imaging system performance characteristics including spatial resolution, depth of field and sensitivity. Characterization of fluorescence properties was performed by generating excitation-emission matrices of indocyanine green and quantum dots in biological solutions and matrix materials. A turbid, fluorophore-doped target was used, along with a resolution target for assessing image sharpness. Multi-well plates filled with either liquid or solid targets were generated to explore best practices for evaluating detection sensitivity. Overall, our results demonstrate the utility of objective, quantitative, target-based testing approaches as well as the need to consider a wide range of factors in establishing standardized approaches for NIRF imaging system performance.

  5. Performance Evaluation and Analysis for Gravity Matching Aided Navigation.

    PubMed

    Wu, Lin; Wang, Hubiao; Chai, Hua; Zhang, Lu; Hsu, Houtse; Wang, Yong

    2017-04-05

Simulation tests were performed in this paper to evaluate the performance of gravity matching aided navigation (GMAN). This study focused on four essential factors to quantitatively evaluate the performance: gravity database (DB) resolution, fitting degree of gravity measurements, number of samples in matching, and gravity changes in the matching area. A marine gravity anomaly DB derived from satellite altimetry was employed. Actual dynamic gravimetry accuracy and operating conditions were referenced to design the simulation parameters. The results verified that improving the DB resolution, gravimetry accuracy, number of measurement samples, or gravity changes in the matching area generally led to higher positioning accuracies, while their effects were different and interrelated. Moreover, three typical positioning accuracy targets of GMAN were proposed, and the conditions to achieve these targets were derived based on the analysis of several different system requirements. Finally, various approaches were provided to improve the positioning accuracy of GMAN.

  6. Performance Evaluation and Analysis for Gravity Matching Aided Navigation

    PubMed Central

    Wu, Lin; Wang, Hubiao; Chai, Hua; Zhang, Lu; Hsu, Houtse; Wang, Yong

    2017-01-01

Simulation tests were performed in this paper to evaluate the performance of gravity matching aided navigation (GMAN). This study focused on four essential factors to quantitatively evaluate the performance: gravity database (DB) resolution, fitting degree of gravity measurements, number of samples in matching, and gravity changes in the matching area. A marine gravity anomaly DB derived from satellite altimetry was employed. Actual dynamic gravimetry accuracy and operating conditions were referenced to design the simulation parameters. The results verified that improving the DB resolution, gravimetry accuracy, number of measurement samples, or gravity changes in the matching area generally led to higher positioning accuracies, while their effects were different and interrelated. Moreover, three typical positioning accuracy targets of GMAN were proposed, and the conditions to achieve these targets were derived based on the analysis of several different system requirements. Finally, various approaches were provided to improve the positioning accuracy of GMAN. PMID:28379178

  7. Quantitative Evaluation System of Soft Neurological Signs for Children with Attention Deficit Hyperactivity Disorder.

    PubMed

    Kaneko, Miki; Yamashita, Yushiro; Iramina, Keiji

    2016-01-18

Attention deficit hyperactivity disorder (ADHD) is a neurodevelopmental disorder characterized by symptoms of inattention, hyperactivity, and impulsivity. Soft neurological signs (SNS) are minor neurological abnormalities in motor performance, and are used as one evaluation method for neurodevelopmental delays in children with ADHD. Our aim is to establish a quantitative evaluation system for children with ADHD. We focused on the arm movement called pronation and supination, which is one such soft neurological sign. Thirty-three children with ADHD aged 7-11 years (27 males, six females) and twenty-five adult participants aged 21-29 years (19 males, six females) participated in our experiments. Our results suggest that the pronation and supination function in children with ADHD tends to lag behind that of typically developing children by several years. These results indicate that our system may be able to objectively evaluate the neurodevelopmental delay of children with ADHD.

  8. Chemical Fingerprint and Quantitative Analysis for the Quality Evaluation of Platycladi cacumen by Ultra-performance Liquid Chromatography Coupled with Hierarchical Cluster Analysis.

    PubMed

    Shan, Mingqiu; Li, Sam Fong Yau; Yu, Sheng; Qian, Yan; Guo, Shuchen; Zhang, Li; Ding, Anwei

    2018-01-01

Platycladi cacumen (the dried twigs and leaves of Platycladus orientalis (L.) Franco) is a frequently utilized Chinese medicinal herb. To evaluate the quality of the phytomedicine, an ultra-performance liquid chromatographic method with diode array detection was established for chemical fingerprinting and quantitative analysis. In this study, 27 batches of P. cacumen from different regions were collected for analysis. A chemical fingerprint with 20 common peaks was obtained using the Similarity Evaluation System for Chromatographic Fingerprint of Traditional Chinese Medicine (Version 2004A). Among these 20 components, seven flavonoids (myricitrin, isoquercitrin, quercitrin, afzelin, cupressuflavone, amentoflavone and hinokiflavone) were identified and determined simultaneously. In the method validation, the seven analytes showed good linearity (R ≥ 0.9995) within their linear ranges and good recoveries from 96.4% to 103.3%. Furthermore, using the contents of these seven flavonoids, hierarchical clustering analysis was applied to divide the 27 batches into five groups. The chemometric results showed that these groups were almost consistent with the geographical positions and climatic conditions of the production regions. Integrating fingerprint analysis, simultaneous determination and hierarchical clustering analysis, the established method is rapid, sensitive, accurate and readily applicable, and also provides a significant foundation for efficient quality control of P. cacumen. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  9. The Evaluation of Foam Performance and Flooding Efficiency

    NASA Astrophysics Data System (ADS)

    Keliang, Wang; Yuhao, Chen; Gang, Wang; Gen, Li

    2017-12-01

The Ross-Miles method and a spinning-drop interfacial tensiometer are used to select a suitable foam system based on the foam composite index (FCI) and interfacial tension (IT). The selected foam system is then subjected to further tests: evaluating its resistance to adsorption with a multi-round core-flooding dynamic adsorption test, evaluating its performance over four different transport distances, and quantitatively analyzing its effective distance after dynamic adsorption. The results show that the foaming ability and the mobilizing ability of the foam system decrease with each additional round of dynamic adsorption, and that both also decrease as the transport distance increases. These results further reveal the flooding characteristics of nitrogen foam flooding and provide a reference for the implementation of nitrogen foam flooding technology.

  10. Quantitative evaluation of the voice range profile in patients with voice disorder.

    PubMed

    Ikeda, Y; Masuda, T; Manako, H; Yamashita, H; Yamamoto, T; Komiyama, S

    1999-01-01

    In 1953, Calvet first displayed the fundamental frequency (pitch) and sound pressure level (intensity) of a voice on a two-dimensional plane and created a voice range profile. This profile has been used to evaluate clinically various vocal disorders, although such evaluations to date have been subjective without quantitative assessment. In the present study, a quantitative system was developed to evaluate the voice range profile utilizing a personal computer. The area of the voice range profile was defined as the voice volume. This volume was analyzed in 137 males and 175 females who were treated for various dysphonias at Kyushu University between 1984 and 1990. Ten normal subjects served as controls. The voice volume in cases with voice disorders significantly decreased irrespective of the disease and sex. Furthermore, cases having better improvement after treatment showed a tendency for the voice volume to increase. These findings illustrated the voice volume as a useful clinical test for evaluating voice control in cases with vocal disorders.
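The voice volume, the area of the voice range profile on the pitch–intensity plane, can be approximated by counting the occupied cells of a grid over recorded phonation points. A rough sketch (hypothetical helper; grid resolution and units are illustrative, not those of the original system):

```python
def voice_volume(phonation_points, pitch_step=1.0, level_step=1.0):
    """Approximate the voice range profile area by counting occupied
    cells on the pitch (e.g. semitones) x sound-pressure-level (dB)
    grid, then scaling by the cell area."""
    cells = {(p // pitch_step, l // level_step) for p, l in phonation_points}
    return len(cells) * pitch_step * level_step
```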

  11. Quantitative evaluation of dermatological antiseptics.

    PubMed

    Leitch, C S; Leitch, A E; Tidman, M J

    2015-12-01

    Topical antiseptics are frequently used in dermatological management, yet evidence for the efficacy of traditional generic formulations is often largely anecdotal. We tested the in vitro bactericidal activity of four commonly used topical antiseptics against Staphylococcus aureus, using a modified version of the European Standard EN 1276, a quantitative suspension test for evaluation of the bactericidal activity of chemical disinfectants and antiseptics. To meet the standard for antiseptic effectiveness of EN 1276, at least a 5 log10 reduction in bacterial count within 5 minutes of exposure is required. While 1% benzalkonium chloride and 6% hydrogen peroxide both achieved a 5 log10 reduction in S. aureus count, neither 2% aqueous eosin nor 1 : 10 000 potassium permanganate showed significant bactericidal activity compared with control at exposure periods of up to 1 h. Aqueous eosin and potassium permanganate may have desirable astringent properties, but these results suggest they lack effective antiseptic activity, at least against S. aureus. © 2015 British Association of Dermatologists.
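The EN 1276 pass criterion used above is a straightforward log10-reduction calculation on viable counts. A minimal sketch (hypothetical helper names; EN 1276 also specifies exposure time, interfering substances, and neutralization steps not modeled here):

```python
import math

def log10_reduction(count_before, count_after):
    """Log10 reduction in viable bacterial count after exposure."""
    return math.log10(count_before / count_after)

def meets_en1276(count_before, count_after, required_log_reduction=5.0):
    """EN 1276-style pass criterion: at least a 5 log10 reduction
    within the specified exposure time."""
    return log10_reduction(count_before, count_after) >= required_log_reduction
```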

  12. Quality evaluation of moluodan concentrated pill using high-performance liquid chromatography fingerprinting coupled with chemometrics.

    PubMed

    Tao, Lingyan; Zhang, Qing; Wu, Yongjiang; Liu, Xuesong

    2016-12-01

    In this study, a fast and effective high-performance liquid chromatography method was developed to obtain a fingerprint chromatogram and, simultaneously, a quantitative analysis of four index components (gallic acid, chlorogenic acid, albiflorin, and paeoniflorin) of the traditional Chinese medicine Moluodan Concentrated Pill. The method used a Waters X-Bridge C18 reversed-phase column on an Agilent 1200S high-performance liquid chromatography system with diode array detection. The mobile phase consisted of 20 mmol/L phosphate solution and acetonitrile at a flow rate of 1 mL/min, with a column temperature of 30°C and a UV detection wavelength of 254 nm. After method validation, 16 batches of Moluodan Concentrated Pill were analyzed by this high-performance liquid chromatography method, and both qualitative and quantitative evaluations were obtained by similarity analysis, principal component analysis, and hierarchical cluster analysis. The results of these three chemometric methods were in good agreement and all indicated that batches 10 and 16 differed significantly from the other 14 batches. This suggests that the developed high-performance liquid chromatography method can be applied to the quality evaluation of Moluodan Concentrated Pill. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
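Similarity analysis of chromatographic fingerprints is typically a vector comparison of peak areas at matched retention times. A minimal sketch under that assumption (the cosine measure and names are illustrative; the paper does not specify its exact similarity software):

```python
def fingerprint_similarity(peaks_a, peaks_b):
    """Cosine similarity between two chromatographic fingerprints,
    each a vector of peak areas at matched retention times.
    Returns 1.0 for identical profiles, 0.0 for orthogonal ones."""
    dot = sum(x * y for x, y in zip(peaks_a, peaks_b))
    norm_a = sum(x * x for x in peaks_a) ** 0.5
    norm_b = sum(y * y for y in peaks_b) ** 0.5
    return dot / (norm_a * norm_b)

# Batches with near-identical peak-area profiles score close to 1.0;
# outlier batches (e.g. batches 10 and 16 above) would score lower.
```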

  13. Functional assessment and performance evaluation for assistive robotic manipulators: Literature review.

    PubMed

    Chung, Cheng-Shiu; Wang, Hongwu; Cooper, Rory A

    2013-07-01

    The user interface development of assistive robotic manipulators can be traced back to the 1960s. Studies include kinematic designs, cost-efficiency, user experience, and performance evaluation. This paper reviews clinical-trial studies that used activities of daily living (ADL) tasks to evaluate performance, categorized using the International Classification of Functioning, Disability and Health (ICF) framework, in order to outline the scope of current research and provide suggestions for future studies. We conducted a literature search on assistive robotic manipulators from 1970 to 2012 in PubMed, Google Scholar, and the University of Pittsburgh Library System - PITTCat. Twenty relevant studies were identified. Studies were separated into two broad categories: user task preferences and user-interface performance measurements of commercialized and developing assistive robotic manipulators. The outcome measures and ICF codes associated with the performance evaluations are reported. Suggestions for future studies include (1) standardized ADL tasks for quantitative and qualitative evaluation of task efficiency and performance, to build comparable measures between research groups; (2) studies relevant to tasks from user priority lists and ICF codes; and (3) appropriate clinical functional assessment tests that consider the constraints of assistive robotic manipulator user interfaces. In addition, these outcome measures will help physicians and therapists build standardized tools for prescribing and assessing assistive robotic manipulators.

  14. Functional assessment and performance evaluation for assistive robotic manipulators: Literature review

    PubMed Central

    Chung, Cheng-Shiu; Wang, Hongwu; Cooper, Rory A.

    2013-01-01

    Context The user interface development of assistive robotic manipulators can be traced back to the 1960s. Studies include kinematic designs, cost-efficiency, user experience, and performance evaluation. This paper reviews clinical-trial studies that used activities of daily living (ADL) tasks to evaluate performance, categorized using the International Classification of Functioning, Disability and Health (ICF) framework, in order to outline the scope of current research and provide suggestions for future studies. Methods We conducted a literature search on assistive robotic manipulators from 1970 to 2012 in PubMed, Google Scholar, and the University of Pittsburgh Library System – PITTCat. Results Twenty relevant studies were identified. Conclusion Studies were separated into two broad categories: user task preferences and user-interface performance measurements of commercialized and developing assistive robotic manipulators. The outcome measures and ICF codes associated with the performance evaluations are reported. Suggestions for future studies include (1) standardized ADL tasks for quantitative and qualitative evaluation of task efficiency and performance, to build comparable measures between research groups; (2) studies relevant to tasks from user priority lists and ICF codes; and (3) appropriate clinical functional assessment tests that consider the constraints of assistive robotic manipulator user interfaces. In addition, these outcome measures will help physicians and therapists build standardized tools for prescribing and assessing assistive robotic manipulators. PMID:23820143

  15. New Performance Metrics for Quantitative Polymerase Chain Reaction-Based Microbial Source Tracking Methods

    EPA Science Inventory

    Binary sensitivity and specificity metrics are not adequate to describe the performance of quantitative microbial source tracking methods because the estimates depend on the amount of material tested and limit of detection. We introduce a new framework to compare the performance ...

  16. Methods for the field evaluation of quantitative G6PD diagnostics: a review.

    PubMed

    Ley, Benedikt; Bancone, Germana; von Seidlein, Lorenz; Thriemer, Kamala; Richards, Jack S; Domingo, Gonzalo J; Price, Ric N

    2017-09-11

    Individuals with glucose-6-phosphate dehydrogenase (G6PD) deficiency are at risk of severe haemolysis following the administration of 8-aminoquinoline compounds. Primaquine is the only widely available 8-aminoquinoline for the radical cure of Plasmodium vivax. Tafenoquine is under development with the potential to simplify treatment regimens, but point-of-care (PoC) tests will be needed to provide quantitative measurement of G6PD activity prior to its administration. There is currently a lack of appropriate G6PD PoC tests, but a number of new tests are in development and are likely to enter the market in the coming years. As these are implemented, they will need to be validated in field studies. This article outlines the technical details for the field evaluation of novel quantitative G6PD diagnostics, such as sample handling, reference testing, and statistical analysis. Field evaluation is based on the comparison of paired samples: one sample tested by the new assay at point of care and one sample tested by the gold-standard reference method, UV spectrophotometry, in an established laboratory. Samples can be collected as capillary or venous blood; the existing literature suggests that potential differences between capillary and venous blood are unlikely to affect results substantially. The collection and storage of samples is critical to ensure preservation of enzyme activity; it is recommended that samples be stored at 4 °C and tested within 4 days of collection. Test results can be visually presented as a scatter plot, a Bland-Altman plot, and a histogram of the G6PD activity distribution of the study population. Calculating the adjusted male median allows results to be categorized according to G6PD activity, standard performance indicators to be calculated, and receiver operating characteristic (ROC) analysis to be performed.
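The adjusted male median and Bland-Altman comparison described above can be sketched as follows. This is a minimal illustration: the 10%-of-median exclusion cutoff and the function names are our assumptions based on common practice, not prescriptions from this review.

```python
from statistics import median

def adjusted_male_median(male_activities):
    """Adjusted male median (AMM): median male G6PD activity after
    excluding presumed-deficient samples (here, below 10% of the overall
    male median), used as the 100%-activity reference for categorising
    results. The 10% cutoff is an illustrative assumption."""
    overall = median(male_activities)
    retained = [a for a in male_activities if a >= 0.1 * overall]
    return median(retained)

def bland_altman(poc_values, reference_values):
    """Bias and 95% limits of agreement between a point-of-care assay
    and the spectrophotometric reference, from paired measurements."""
    diffs = [p - r for p, r in zip(poc_values, reference_values)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

The bias and limits of agreement are the quantities plotted as horizontal lines on the Bland-Altman plot mentioned in the abstract.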

  17. Performing Repeated Quantitative Small-Animal PET with an Arterial Input Function Is Routinely Feasible in Rats.

    PubMed

    Huang, Chi-Cheng; Wu, Chun-Hu; Huang, Ya-Yao; Tzen, Kai-Yuan; Chen, Szu-Fu; Tsai, Miao-Ling; Wu, Hsiao-Ming

    2017-04-01

    Performing quantitative small-animal PET with an arterial input function has been considered technically challenging. Here, we introduce a catheterization procedure that keeps a rat physiologically stable for 1.5 mo. We demonstrated the feasibility of quantitative small-animal 18F-FDG PET in rats by performing it repeatedly to monitor the time course of variations in the cerebral metabolic rate of glucose (CMRglc). Methods: Aseptic surgery was performed on 2 rats. Each rat underwent catheterization of the right femoral artery and left femoral vein. The catheters were sealed with microinjection ports and then implanted subcutaneously. Over the next 3 wk, each rat underwent 18F-FDG quantitative small-animal PET 6 times. The CMRglc of each brain region was calculated using a 3-compartment model and an operational equation that included a k*4. Results: On 6 mornings, we completed 12 18F-FDG quantitative small-animal PET studies on 2 rats. The rats grew steadily before and after the 6 quantitative small-animal PET studies. The CMRglc of the conscious brain (e.g., right parietal region, 99.6 ± 10.2 μmol/100 g/min; n = 6) was comparable to that obtained with 14C-deoxyglucose autoradiographic methods. Conclusion: Maintaining good blood patency in catheterized rats is not difficult. Longitudinal quantitative small-animal PET imaging with an arterial input function can be performed routinely. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.

  18. Gender, Math Confidence, and Grit: Relationships with Quantitative Skills and Performance in an Undergraduate Biology Course.

    PubMed

    Flanagan, K M; Einarson, J

    2017-01-01

    In a world filled with big data, mathematical models, and statistics, the development of strong quantitative skills is becoming increasingly critical for modern biologists. Teachers in this field must understand how students acquire quantitative skills and explore barriers experienced by students when developing these skills. In this study, we examine the interrelationships among gender, grit, and math confidence for student performance on a pre-post quantitative skills assessment and overall performance in an undergraduate biology course. Here, we show that females significantly underperformed relative to males on a quantitative skills assessment at the start of term. However, females showed significantly higher gains over the semester, such that the gender gap in performance was nearly eliminated by the end of the semester. Math confidence plays an important role in the performance on both the pre and post quantitative skills assessments and overall performance in the course. The effect of grit on student performance, however, is mediated by a student's math confidence; as math confidence increases, the positive effect of grit decreases. Consequently, the positive impact of a student's grittiness is observed most strongly for those students with low math confidence. We also found grit to be positively associated with the midterm score and the final grade in the course. Given the relationships established in this study among gender, grit, and math confidence, we provide "instructor actions" from the literature that can be applied in the classroom to promote the development of quantitative skills in light of our findings. © 2017 K. M. Flanagan and J. Einarson. CBE—Life Sciences Education © 2017 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http

  19. Evaluation of patients with painful total hip arthroplasty using combined single photon emission tomography and conventional computerized tomography (SPECT/CT) - a comparison of semi-quantitative versus 3D volumetric quantitative measurements.

    PubMed

    Barthassat, Emilienne; Afifi, Faik; Konala, Praveen; Rasch, Helmut; Hirschmann, Michael T

    2017-05-08

    It was the primary purpose of our study to evaluate the inter- and intra-observer reliability of a standardized SPECT/CT algorithm for evaluating patients with painful primary total hip arthroplasty (THA). The secondary purpose was a comparison of semi-quantitative and 3D volumetric quantification methods for assessment of bone tracer uptake (BTU) in those patients. A novel SPECT/CT localization scheme consisting of 14 femoral and 4 acetabular regions on standardized axial and coronal slices was introduced and evaluated in terms of inter- and intra-observer reliability in 37 consecutive patients with hip pain after THA. BTU for each anatomical region was assessed semi-quantitatively using a color-coded Likert-type scale (0-10) and volumetrically quantified using validated software. Two observers interpreted the SPECT/CT findings in all patients twice, in random order, with a six-week interval between interpretations. Semi-quantitative and quantitative measurements were compared in terms of reliability, and the values were correlated using Pearson's correlation. A factorial cluster analysis of BTU was performed to identify clinically relevant regions, which should be grouped and analysed together. The localization scheme showed high inter- and intra-observer reliability for all femoral and acetabular regions, independent of the measurement method used (semi-quantitative versus 3D volumetric quantitative measurements). A high to moderate correlation between the two measurement methods was shown for the distal femur, the proximal femur, and the acetabular cup. The factorial cluster analysis showed that the anatomical regions might be summarized into three distinct anatomical regions: the proximal femur, the distal femur, and the acetabular cup region. The SPECT/CT algorithm for assessment of patients with pain after THA is highly reliable, independent of the measurement method used. Three clinically relevant anatomical regions (proximal femoral

  20. A framework for evaluating the formation, implementation, and performance of accountable care organizations.

    PubMed

    Fisher, Elliott S; Shortell, Stephen M; Kreindler, Sara A; Van Citters, Aricca D; Larson, Bridget K

    2012-11-01

    The implementation of accountable care organizations (ACOs), a new health care payment and delivery model designed to improve care and lower costs, is proceeding rapidly. We build on our experience tracking early ACOs to identify the major factors (contract characteristics; structure, capabilities, and activities; and local context) that are likely to influence ACO formation, implementation, and performance. We then propose how an ACO evaluation program could be structured to guide policy makers and payers in improving the design of ACO contracts, while providing insights for providers on approaches to care transformation that are most likely to be successful in different contexts. We also propose key activities to support evaluation of ACOs in the near term, including tracking their formation, developing a set of performance measures across all ACOs and payers, aggregating those performance data, conducting qualitative and quantitative research, and coordinating different evaluation activities.

  1. Framework for quantitative evaluation of 3D vessel segmentation approaches using vascular phantoms in conjunction with 3D landmark localization and registration

    NASA Astrophysics Data System (ADS)

    Wörz, Stefan; Hoegen, Philipp; Liao, Wei; Müller-Eschner, Matthias; Kauczor, Hans-Ulrich; von Tengg-Kobligk, Hendrik; Rohr, Karl

    2016-03-01

    We introduce a framework for quantitative evaluation of 3D vessel segmentation approaches using vascular phantoms. Phantoms are designed using a CAD system and created with a 3D printer, and comprise realistic shapes including branches and pathologies such as abdominal aortic aneurysms (AAA). To transfer ground truth information to the 3D image coordinate system, we use a landmark-based registration scheme utilizing fiducial markers integrated in the phantom design. For accurate 3D localization of the markers we developed a novel 3D parametric intensity model that is directly fitted to the markers in the images. We also performed a quantitative evaluation of different vessel segmentation approaches for a phantom of an AAA.

  2. Quantitative evaluation of 3D dosimetry for stereotactic volumetric‐modulated arc delivery using COMPASS

    PubMed Central

    Manigandan, Durai; Karrthick, Karukkupalayam Palaniappan; Sambasivaselli, Raju; Senniandavar, Vellaingiri; Ramu, Mahendran; Rajesh, Thiyagarajan; Lutz, Muller; Muthukumaran, Manavalan; Karthikeyan, Nithyanantham; Tejinder, Kataria

    2014-01-01

    The purpose of this study was to quantitatively evaluate the patient-specific 3D dosimetry tool COMPASS, with a 2D MatriXX detector array, for stereotactic volumetric-modulated arc delivery. CT images and RT structures of twenty-five patients treated at different sites (brain, head & neck, thorax, abdomen, and spine) were taken from the CyberKnife Multiplan planning system for this study. All of these patients underwent radical stereotactic treatment with CyberKnife. For each patient, linac-based volumetric-modulated arc therapy (VMAT) stereotactic plans were generated in Monaco TPS v3.1 using the Elekta Beam Modulator MLC. Dose prescriptions were in the range of 5-20 Gy per fraction. Target prescriptions and critical organ constraints were matched to the delivered treatment plans as closely as possible. Plan quality was analyzed using the conformity index (CI), conformity number (CN), gradient index (GI), target coverage (TC), and dose to 95% of volume (D95). The delivery accuracy of Monaco Monte Carlo (MC)-calculated treatment plans was quantitatively evaluated against the COMPASS-calculated (CCA) dose and the COMPASS indirectly measured (CME) dose based on dose-volume histogram metrics. To ascertain the potential of COMPASS 3D dosimetry for stereotactic plan delivery, 2D fluence verification was performed with MatriXX using a MultiCube phantom. Routine quality assurance of absolute point dose verification was performed to check overall delivery accuracy. Quantitative analyses of dose delivery verification were compared against pass/fail criteria of 3 mm distance to agreement and 3% dose difference. The gamma passing rate was compared with 2D fluence verification from MatriXX with MultiCube. The COMPASS dose reconstructed from measured fluence and the COMPASS computed dose showed very good agreement with the TPS-calculated dose. Each plan was evaluated based on dose-volume parameters for target volumes, such as dose at 95% of volume (D95) and average dose. For critical organs dose at 20% of

  3. Quantitative evaluation research of glare from automotive headlamps

    NASA Astrophysics Data System (ADS)

    Wang, Tiecheng; Qian, Rui; Cao, Ye; Gao, Mingqiu

    2018-01-01

    This study concerns the quantitative evaluation of glare from automotive headlamps. In current regulations, only one point on the test screen is used to judge whether a driver can tolerate the light from the headlamps of an oncoming vehicle. To evaluate the practical effect of glare, we adopt a glare zone that incorporates the probability distribution of the oncoming driver's eye position. In this focus area, the glare level of a headlamp is represented by a weighted luminous flux. To determine the most comfortable illuminance value for human eyes at 50 m, we used test point B50L as the observation position and collected 1,000 subjective evaluation data points from 20 test subjects of different ages over two months. Based on the assessment results, we calculated 0.60 lx as the recommended value for a standardized testing procedure at 25 m. We then derived 0.38 lm as the optimum value, and 0.25 / 1.20 lm as the limiting values according to the regulations. We tested 40 sample vehicles of different classes to verify the sectional nonlinear quantitative evaluation method we designed, and analyzed the typical test results.
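The weighted-flux idea can be sketched as a probability-weighted sum of illuminance over a discretized glare zone. This is a hypothetical formulation: the grid discretization, the weighting, and the function name are our assumptions, as the abstract does not give the paper's exact formula.

```python
def weighted_glare_flux(illuminance_lx, eye_position_prob, cell_area_m2):
    """Illustrative glare metric: illuminance at each grid cell of the
    glare zone, weighted by the probability that the oncoming driver's
    eyes occupy that cell, summed over the zone (units: lm-weighted)."""
    assert len(illuminance_lx) == len(eye_position_prob)
    return sum(e * p * cell_area_m2
               for e, p in zip(illuminance_lx, eye_position_prob))

# Two cells of 1 m² at 1.0 lx and 2.0 lx, equally likely eye positions:
# weighted flux = 1.0*0.5 + 2.0*0.5 = 1.5
```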

  4. 'Stories' or 'snapshots'? A study directed at comparing qualitative and quantitative approaches to curriculum evaluation.

    PubMed

    Pateman, B; Jinks, A M

    1999-01-01

    The focus of this paper is a study designed to explore the validity of quantitative approaches to student evaluation in a pre-registration degree programme. As managers of the students' education, we were concerned that the quantitative method, which used lecturer-devised criteria, might not fully represent students' views. The approach taken is a process-type strategy for curriculum evaluation, as described by Parlett and Hamilton (1972). The aim of the study is to produce illuminative data, or students' 'stories' of their educational experiences, through semi-structured interviews. The results are then compared to the current quantitative measurement tools, which are designed to obtain 'snapshots' of the educational effectiveness of the curriculum and use Likert-scale measurements of teacher-devised criterion statements. The study yields a rich source of qualitative data that can be used to inform future curriculum development. However, complete validation of the current quantitative instruments was not achieved in this study. Student and teacher agendas were found to differ with respect to important issues pertaining to the course programme. Limitations of the study are given, and the options open to the management team for future development of curriculum evaluation systems are discussed.

  5. Cartilage Repair Surgery: Outcome Evaluation by Using Noninvasive Cartilage Biomarkers Based on Quantitative MRI Techniques?

    PubMed Central

    Jungmann, Pia M.; Baum, Thomas; Bauer, Jan S.; Karampinos, Dimitrios C.; Link, Thomas M.; Li, Xiaojuan; Trattnig, Siegfried; Rummeny, Ernst J.; Woertler, Klaus; Welsch, Goetz H.

    2014-01-01

    Background. New quantitative magnetic resonance imaging (MRI) techniques are increasingly applied as outcome measures after cartilage repair. Objective. To review the current literature on the use of quantitative MRI biomarkers for evaluation of cartilage repair at the knee and ankle. Methods. Using PubMed literature research, studies on biochemical, quantitative MR imaging of cartilage repair were identified and reviewed. Results. Quantitative MR biomarkers detect early degeneration of articular cartilage, mainly represented by an increasing water content, collagen disruption, and proteoglycan loss. Recently, feasibility of biochemical MR imaging of cartilage repair tissue and surrounding cartilage was demonstrated. Ultrastructural properties of the tissue after different repair procedures resulted in differences in imaging characteristics. T2 mapping, T1rho mapping, delayed gadolinium-enhanced MRI of cartilage (dGEMRIC), and diffusion weighted imaging (DWI) are applicable on most clinical 1.5 T and 3 T MR scanners. Currently, a standard of reference is difficult to define and knowledge is limited concerning correlation of clinical and MR findings. The lack of histological correlations complicates the identification of the exact tissue composition. Conclusions. A multimodal approach combining several quantitative MRI techniques in addition to morphological and clinical evaluation might be promising. Further investigations are required to demonstrate the potential for outcome evaluation after cartilage repair. PMID:24877139

  6. Diagnosing acute HIV infection: The performance of quantitative HIV-1 RNA testing (viral load) in the 2014 laboratory testing algorithm.

    PubMed

    Wu, Hsiu; Cohen, Stephanie E; Westheimer, Emily; Gay, Cynthia L; Hall, Laura; Rose, Charles; Hightow-Weidman, Lisa B; Gose, Severin; Fu, Jie; Peters, Philip J

    2017-08-01

    New recommendations for laboratory diagnosis of HIV infection in the United States were published in 2014. The updated testing algorithm includes a qualitative HIV-1 RNA assay to resolve discordant immunoassay results and to identify acute HIV-1 infection (AHI). The qualitative HIV-1 RNA assay is not widely available; therefore, we evaluated the performance of a more widely available quantitative HIV-1 RNA assay, viral load, for diagnosing AHI. We determined that quantitative viral loads consistently distinguished AHI from a false-positive immunoassay result. Among 100 study participants with AHI and a viral load result, the estimated geometric mean viral load was 1,377,793 copies/mL. Copyright © 2017 Elsevier B.V. All rights reserved.
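The geometric mean reported above is the standard summary for right-skewed viral-load data: average the log10 values, then back-transform. A minimal sketch (the function name is illustrative):

```python
import math

def geometric_mean(viral_loads):
    """Geometric mean of viral loads (copies/mL): the mean of the log10
    values transformed back to the linear scale, which down-weights the
    extreme high values typical of HIV-1 RNA measurements."""
    logs = [math.log10(v) for v in viral_loads]
    return 10 ** (sum(logs) / len(logs))

# Loads of 1e4 and 1e8 copies/mL give a geometric mean of 1e6,
# whereas the arithmetic mean would be dominated by the 1e8 value.
```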

  7. A numerical algorithm with preference statements to evaluate the performance of scientists.

    PubMed

    Ricker, Martin

    Academic evaluation committees have become increasingly receptive to using the number of published indexed articles, as well as citations, to evaluate the performance of scientists. It is, however, impossible to develop a stand-alone, objective numerical algorithm for the evaluation of academic activities, because any evaluation necessarily includes subjective preference statements. In a market, prices represent preference statements, but scientists work largely in a non-market context. I propose a numerical algorithm that serves to determine the distribution of reward money in Mexico's evaluation system, using relative prices of scientific goods and services as input. The relative prices would be determined by an evaluation committee. In this way, large evaluation systems (like Mexico's Sistema Nacional de Investigadores) could work semi-automatically, but not arbitrarily or superficially, to determine quantitatively the academic performance of scientists every few years. Data for 73 scientists from the Biology Institute of Mexico's National University are analyzed, and it is shown that reward assignment and academic priorities depend heavily on those preferences. A maximum number of products or activities to be evaluated is recommended, to encourage quality over quantity.
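The pricing idea can be sketched as a weighted sum of each scientist's outputs at committee-set relative prices, with the reward budget distributed in proportion to the totals. This is a simplified illustration of the approach; the names, categories, and numbers are ours, not Ricker's actual algorithm.

```python
def reward_shares(relative_prices, outputs, budget):
    """Distribute a fixed reward budget in proportion to each scientist's
    priced output, where committee-set relative prices play the role of
    market prices. `outputs` maps scientist -> {product_type: count}."""
    scores = {
        name: sum(relative_prices[kind] * count for kind, count in produced.items())
        for name, produced in outputs.items()
    }
    total = sum(scores.values())
    return {name: budget * score / total for name, score in scores.items()}

# With an article priced at twice a book chapter, one article earns the
# same share as two book chapters.
```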

  8. The Quantitative Science of Evaluating Imaging Evidence.

    PubMed

    Genders, Tessa S S; Ferket, Bart S; Hunink, M G Myriam

    2017-03-01

    Cardiovascular diagnostic imaging tests are increasingly used in everyday clinical practice, but are often imperfect, just like any other diagnostic test. The performance of a cardiovascular diagnostic imaging test is usually expressed in terms of sensitivity and specificity compared with the reference standard (gold standard) for diagnosing the disease. However, evidence-based application of a diagnostic test also requires knowledge about the pre-test probability of disease, the benefit of making a correct diagnosis, the harm caused by false-positive imaging test results, and potential adverse effects of performing the test itself. To assist in clinical decision making regarding appropriate use of cardiovascular diagnostic imaging tests, we reviewed quantitative concepts related to diagnostic performance (e.g., sensitivity, specificity, predictive values, likelihood ratios), as well as possible biases and solutions in diagnostic performance studies, Bayesian principles, and the threshold approach to decision making. Copyright © 2017 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
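The quantitative concepts listed above combine as follows: sensitivity and specificity yield likelihood ratios, which update the pre-test odds (Bayes' rule in odds form). A minimal sketch of this standard calculation:

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios from test characteristics."""
    lr_pos = sensitivity / (1 - specificity)
    lr_neg = (1 - sensitivity) / specificity
    return lr_pos, lr_neg

def post_test_probability(pre_test_prob, likelihood_ratio):
    """Bayes' rule in odds form: post-test odds = pre-test odds * LR."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# A test with sensitivity 0.90 and specificity 0.80 has LR+ = 4.5 and
# LR- = 0.125; a positive result moves a 20% pre-test probability to
# roughly 53%, illustrating why pre-test probability matters.
```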

  9. Genetic toxicology at the crossroads-from qualitative hazard evaluation to quantitative risk assessment.

    PubMed

    White, Paul A; Johnson, George E

    2016-05-01

    Applied genetic toxicology is undergoing a transition from qualitative hazard identification to quantitative dose-response analysis and risk assessment. To facilitate this change, the Health and Environmental Sciences Institute (HESI) Genetic Toxicology Technical Committee (GTTC) sponsored a workshop held in Lancaster, UK on July 10-11, 2014. The event included invited speakers from several institutions, and the content was divided into three themes: (1) point-of-departure metrics for quantitative dose-response analysis in genetic toxicology; (2) measurement and estimation of exposures for better extrapolation to humans; and (3) the use of quantitative approaches in genetic toxicology for human health risk assessment (HHRA). A host of pertinent issues were discussed relating to the use of in vitro and in vivo dose-response data, the development of methods for in vitro to in vivo extrapolation, and approaches to using in vivo dose-response data to determine human exposure limits for regulatory evaluations and decision-making. This Special Issue, which was inspired by the workshop, contains a series of papers that collectively address topics related to the aforementioned themes. The Issue includes contributions that collectively evaluate, describe, and discuss in silico, in vitro, in vivo, and statistical approaches that are facilitating the shift from qualitative hazard evaluation to quantitative risk assessment. The use and application of the benchmark dose approach was a central theme in many of the workshop presentations and discussions, and the Special Issue includes several contributions that outline novel applications for the analysis and interpretation of genetic toxicity data. Although the contents of the Special Issue constitute an important step towards the adoption of quantitative methods for regulatory assessment of genetic toxicity, formal acceptance of quantitative methods for HHRA and regulatory decision-making will require consensus regarding the

  10. Quantitative evaluation of photoplethysmographic artifact reduction for pulse oximetry

    NASA Astrophysics Data System (ADS)

    Hayes, Matthew J.; Smith, Peter R.

    1999-01-01

    Motion artefact corruption of pulse oximeter output, causing both measurement inaccuracies and false-alarm conditions, is a primary restriction in current clinical practice and in future applications of this useful technique. Artefact reduction in photoplethysmography (PPG), and therefore by extension in pulse oximetry, is demonstrated using a novel non-linear methodology recently proposed by the authors. The significance of these processed PPG signals for pulse oximetry measurement is discussed, with particular attention to the normalization inherent in the artefact reduction process. Quantitative experimental investigation of the performance of PPG artefact reduction is then used to evaluate this technology for application to pulse oximetry. While the successfully demonstrated reduction of severe artefacts may widen the applicability of all PPG technologies and decrease the occurrence of pulse oximeter false alarms, the observed reduction of slight artefacts suggests that many such effects may go unnoticed in clinical practice. The signal processing and output averaging used in most commercial oximeters can incorporate these artefact errors into the output while masking the true PPG signal corruption. It is therefore suggested that PPG artefact reduction be incorporated into conventional pulse oximetry measurement, even in the absence of end-user artefact problems.

  11. [Quality evaluation of rhubarb dispensing granules based on multi-component simultaneous quantitative analysis and bioassay].

    PubMed

    Tan, Peng; Zhang, Hai-Zhu; Zhang, Ding-Kun; Wu, Shan-Na; Niu, Ming; Wang, Jia-Bo; Xiao, Xiao-He

    2017-07-01

    This study attempts to evaluate the quality of Chinese formula granules through the combined use of multi-component simultaneous quantitative analysis and bioassay. Rhubarb dispensing granules were used as the model drug for this demonstrative study. An ultra-high performance liquid chromatography (UPLC) method was adopted for simultaneous quantitative determination of 10 anthraquinone derivatives (such as aloe emodin-8-O-β-D-glucoside) in rhubarb dispensing granules; the purgative biopotency of different batches of rhubarb dispensing granules was determined using a compound diphenoxylate tablets-induced mouse constipation model; the blood-activating biopotency of different batches was determined using an in vitro rat antiplatelet aggregation model; and SPSS 22.0 statistical software was used for correlation analysis between the 10 anthraquinone derivatives and the purgative and blood-activating biopotencies. The results of the multi-component simultaneous quantitative analysis showed a great difference in chemical characterization, and certain differences in purgative and blood-activating biopotency, among the 10 batches of rhubarb dispensing granules. The correlation analysis showed that the intensity of purgative biopotency was significantly correlated with the content of conjugated anthraquinone glycosides (P<0.01), and the intensity of blood-activating biopotency was significantly correlated with the content of free anthraquinone (P<0.01). In summary, the combined use of multi-component simultaneous quantitative analysis and bioassay can achieve objective quantification and a more comprehensive reflection of the overall quality differences among batches of rhubarb dispensing granules. Copyright© by the Chinese Pharmaceutical Association.

  12. Quantitative polarization and flow evaluation of choroid and sclera by multifunctional Jones matrix optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Sugiyama, S.; Hong, Y.-J.; Kasaragod, D.; Makita, S.; Miura, M.; Ikuno, Y.; Yasuno, Y.

    2016-03-01

Quantitative evaluation of optical properties of the choroid and sclera is performed by multifunctional optical coherence tomography. Five normal eyes, five glaucoma eyes and one choroidal atrophy eye are examined. The refractive error was found to be correlated with choroidal birefringence, polarization uniformity, and flow, in addition to scleral birefringence, among normal eyes. Significant differences were observed between the normal and glaucoma eyes in choroidal polarization uniformity, flow and scleral birefringence. An automatic segmentation algorithm for the retinal pigment epithelium and chorioscleral interface based on multifunctional signals is also presented.

  13. Quantitative evaluation of morphological changes in activated platelets in vitro using digital holographic microscopy.

    PubMed

    Kitamura, Yutaka; Isobe, Kazushige; Kawabata, Hideo; Tsujino, Tetsuhiro; Watanabe, Taisuke; Nakamura, Masayuki; Toyoda, Toshihisa; Okudera, Hajime; Okuda, Kazuhiro; Nakata, Koh; Kawase, Tomoyuki

    2018-06-18

Platelet activation and aggregation have been conventionally evaluated using an aggregometer. However, this method is suitable for short-term but not long-term quantitative evaluation of platelet aggregation, morphological changes, and/or adhesion to specific materials. The recently developed digital holographic microscopy (DHM) has enabled the quantitative evaluation of cell size and morphology without labeling or destruction. Thus, we aimed to validate its applicability in quantitatively evaluating changes in cell morphology, especially in the aggregation and spreading of activated platelets, modifying typical image analysis procedures to suit aggregated platelets. Freshly prepared platelet-rich plasma was washed with phosphate-buffered saline and treated with 0.1% CaCl2. Platelets were then fixed and subjected to DHM, scanning electron microscopy (SEM), atomic force microscopy, optical microscopy, and flow cytometry (FCM). Tightly aggregated platelets were identified as single cells. Data obtained from time-course experiments were plotted two-dimensionally according to average optical thickness versus attachment area and divided into four regions. The majority of the control platelets, which supposedly contained small and round platelets, were distributed in the lower left region. As activation time increased, however, this population dispersed toward the upper right region. The distribution shift demonstrated by DHM was essentially consistent with data obtained from SEM and FCM. Therefore, DHM was validated as a promising device for testing platelet function, given that it allows for the quantitative evaluation of activation-dependent morphological changes in platelets. DHM technology will be applicable to the quality assurance of platelet concentrates, as well as diagnosis and drug discovery related to platelet functions. Copyright © 2018 Elsevier Ltd. All rights reserved.
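    The four-region scatter analysis described above (average optical thickness versus attachment area) can be sketched as a simple quadrant assignment; the cut-off values and units below are assumptions for illustration, not taken from the paper:

    ```python
    def quadrant(avg_optical_thickness, attachment_area, t_cut, a_cut):
        """Assign a platelet event to one of the four regions of the
        thickness-vs-area scatter plot. t_cut and a_cut are the (assumed)
        cut-off values separating the regions."""
        row = "upper" if avg_optical_thickness > t_cut else "lower"
        col = "right" if attachment_area > a_cut else "left"
        return row + " " + col
    ```

    Resting platelets would then cluster in the "lower left" region and activated, spread platelets would drift toward "upper right", matching the distribution shift reported above.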

  14. Comparative evaluation of two quantitative test methods for determining the efficacy of liquid sporicides and sterilants on a hard surface: a precollaborative study.

    PubMed

    Tomasino, Stephen F; Hamilton, Martin A

    2007-01-01

    Two quantitative carrier-based test methods for determining the efficacy of liquid sporicides and sterilants on a hard surface, the Standard Quantitative Carrier Test Method-ASTM E 2111-00 and an adaptation of a quantitative micro-method as reported by Sagripanti and Bonifacino, were compared in this study. The methods were selected based on their desirable characteristics (e.g., well-developed protocol, previous use with spores, fully quantitative, and use of readily available equipment) for testing liquid sporicides and sterilants on a hard surface. In this paper, the Sagripanti-Bonifacino procedure is referred to as the Three Step Method (TSM). AOAC Official Method 966.04 was included in this study as a reference method. Three laboratories participated in the evaluation. Three chemical treatments were tested: (1) 3000 ppm sodium hypochlorite with pH adjusted to 7.0, (2) a hydrogen peroxide/peroxyacetic acid product, and (3) 3000 ppm sodium hypochlorite with pH unadjusted (pH of approximately 10.0). A fourth treatment, 6000 ppm sodium hypochlorite solution with pH adjusted to 7.0, was included only for Method 966.04 as a positive control (high level of efficacy). The contact time was 10 min for all chemical treatments except the 6000 ppm sodium hypochlorite treatment which was tested at 30 min. Each chemical treatment was tested 3 times using each of the methods. Only 2 of the laboratories performed the AOAC method. Method performance was assessed by the within-laboratory variance, between-laboratory variance, and total variance associated with the log reduction (LR) estimates generated by each quantitative method. The quantitative methods performed similarly, and the LR values generated by each method were not statistically different for the 3 treatments evaluated. Based on feedback from the participating laboratories, compared to the TSM, ASTM E 2111-00 was more resource demanding and required more set-up time. The logistical and resource concerns identified

  15. Evaluation of the long-term performance of six alternative disposal methods for LLRW

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kossik, R.; Sharp, G.; Chau, T.

    1995-12-31

The State of New York has carried out a comparison of six alternative disposal methods for low-level radioactive waste (LLRW). An important part of these evaluations involved quantitatively analyzing the long-term (10,000 yr) performance of the methods with respect to dose to humans, radionuclide concentrations in the environment, and cumulative release from the facility. Four near-surface methods (covered above-grade vault, uncovered above-grade vault, below-grade vault, augered holes) and two mine methods (vertical shaft mine and drift mine) were evaluated. Each method was analyzed for several generic site conditions applicable for the state. The evaluations were carried out using RIP (Repository Integration Program), an integrated, total system performance assessment computer code which has been applied to radioactive waste disposal facilities both in the U.S. (Yucca Mountain, WIPP) and worldwide. The evaluations indicate that mines in intact low-permeability rock and near-surface facilities with engineered covers generally have a high potential to perform well (within regulatory limits). Uncovered above-grade vaults and mines in highly fractured crystalline rock, however, have a high potential to perform poorly, exceeding regulatory limits.

  16. Statistical shape analysis using 3D Poisson equation--A quantitatively validated approach.

    PubMed

    Gao, Yi; Bouix, Sylvain

    2016-05-01

Statistical shape analysis has been an important area of research with applications in biology, anatomy, neuroscience, agriculture, paleontology, etc. Unfortunately, the proposed methods are rarely quantitatively evaluated, and as shown in recent studies, when they are evaluated, significant discrepancies exist in their outputs. In this work, we concentrate on the problem of finding the consistent location of deformation between two populations of shapes. We propose a new shape analysis algorithm along with a framework to perform a quantitative evaluation of its performance. Specifically, the algorithm constructs a Signed Poisson Map (SPoM) by solving two Poisson equations on the volumetric shapes of arbitrary topology, and statistical analysis is then carried out on the SPoMs. The method is quantitatively evaluated on synthetic shapes and applied to real shape data sets of brain structures. Copyright © 2016 Elsevier B.V. All rights reserved.
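    A minimal 2-D toy analogue of one of the Poisson solves underlying the SPoM construction is sketched below; the paper works on 3-D volumetric shapes and solves two such equations (how the two solutions are combined into the signed map is not detailed in the abstract), and the grid size, iteration count, and unit source term here are assumptions:

    ```python
    def poisson_inside(mask, iters=100):
        """Solve the discrete Poisson equation laplacian(u) = -1 on the True
        cells of a 2-D binary mask, with u = 0 on and outside the boundary,
        by plain Jacobi iteration (grid spacing h = 1)."""
        h, w = len(mask), len(mask[0])
        u = [[0.0] * w for _ in range(h)]
        for _ in range(iters):
            nu = [[0.0] * w for _ in range(h)]
            for i in range(1, h - 1):
                for j in range(1, w - 1):
                    if mask[i][j]:
                        # 5-point stencil: u = average of neighbours + h^2 * source
                        nu[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j]
                                           + u[i][j - 1] + u[i][j + 1] + 1.0)
            u = nu
        return u
    ```

    The solution grows with distance from the boundary, which is what makes it usable as a depth-like coordinate for subsequent statistical analysis.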

  17. Intentional Movement Performance Ability (IMPA): a method for robot-aided quantitative assessment of motor function.

    PubMed

    Shin, Sung Yul; Kim, Jung Yoon; Lee, Sanghyeop; Lee, Junwon; Kim, Seung-Jong; Kim, ChangHwan

    2013-06-01

The purpose of this paper is to propose a new assessment method for evaluating the motor function of patients suffering from physical weakness after stroke, incomplete spinal cord injury (iSCI) or other diseases. In this work, we use a robotic device to obtain information on the interaction that occurs between the patient and the robot, and use it as a measure for assessing the patients. The Intentional Movement Performance Ability (IMPA) is defined as the root mean square of the interactive torque while the subject performs a given periodic movement with the robot. IMPA is proposed to quantitatively determine the level of a subject's impaired motor function. The method is indirectly tested by asking healthy subjects to lift a barbell to disturb their motor function. The experimental results show that the IMPA has the potential to provide proper information on the subject's motor function level.
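    The definition above can be sketched directly: IMPA is the root mean square of the interactive torque samples recorded over a trial (sampling rate and units are assumptions not given in the abstract):

    ```python
    import math

    def impa(interactive_torques):
        """IMPA: root mean square of the interactive torque samples recorded
        while the subject tracks a periodic movement with the robot.
        interactive_torques is a sequence of torque samples (e.g. in N*m)."""
        n = len(interactive_torques)
        return math.sqrt(sum(t * t for t in interactive_torques) / n)
    ```

    A larger RMS interactive torque means the robot had to exchange more corrective effort with the subject, i.e. a lower level of intentional movement performance.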

  18. A model for evaluating the social performance of construction waste management.

    PubMed

    Yuan, Hongping

    2012-06-01

Existing literature shows that considerable research effort has been devoted to the economic performance of construction waste management (CWM), but less attention has been paid to the social performance of CWM. This study therefore attempts to develop a model for quantitatively evaluating the social performance of CWM by using a system dynamics (SD) approach. Firstly, major variables affecting the social performance of CWM are identified and a holistic system for assessing the social performance of CWM is formulated in line with the feedback relationships underlying these variables. The developed system is then converted into an SD model through the software iThink. An empirical case study is finally conducted to demonstrate application of the model. Results of model validation indicate that the model is robust and reasonable for reflecting the situation of the real system under study. Findings of the case study offer helpful insights into effectively promoting the social performance of CWM for the project investigated. Furthermore, the model exhibits great potential to function as an experimental platform for dynamically evaluating the effects of management measures on improving the social performance of CWM in construction projects. Copyright © 2012 Elsevier Ltd. All rights reserved.

  19. Evaluation of airway protection: Quantitative timing measures versus penetration/aspiration score.

    PubMed

    Kendall, Katherine A

    2017-10-01

    Quantitative measures of swallowing function may improve the reliability and accuracy of modified barium swallow (MBS) study interpretation. Quantitative study analysis has not been widely instituted, however, secondary to concerns about the time required to make measures and a lack of research demonstrating impact on MBS interpretation. This study compares the accuracy of the penetration/aspiration (PEN/ASP) scale (an observational visual-perceptual assessment tool) to quantitative measures of airway closure timing relative to the arrival of the bolus at the upper esophageal sphincter in identifying a failure of airway protection during deglutition. Retrospective review of clinical swallowing data from a university-based outpatient clinic. Swallowing data from 426 patients were reviewed. Patients with normal PEN/ASP scores were identified, and the results of quantitative airway closure timing measures for three liquid bolus sizes were evaluated. The incidence of significant airway closure delay with and without a normal PEN/ASP score was determined. Inter-rater reliability for the quantitative measures was calculated. In patients with a normal PEN/ASP score, 33% demonstrated a delay in airway closure on at least one swallow during the MBS study. There was no correlation between PEN/ASP score and airway closure delay. Inter-rater reliability for the quantitative measure of airway closure timing was nearly perfect (intraclass correlation coefficient = 0.973). The use of quantitative measures of swallowing function, in conjunction with traditional visual perceptual methods of MBS study interpretation, improves the identification of airway closure delay, and hence, potential aspiration risk, even when no penetration or aspiration is apparent on the MBS study. 4. Laryngoscope, 127:2314-2318, 2017. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.
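    The timing measure described above compares airway closure with bolus arrival at the upper esophageal sphincter (UES). A minimal sketch, assuming times in seconds and treating any positive delay as a failure of airway protection (the study's exact delay criterion is not given in the abstract):

    ```python
    def airway_closure_delay(airway_closure_time_s, bolus_arrival_ues_s):
        """Airway closure timing relative to bolus arrival at the UES.
        Positive values mean the airway closed after the bolus arrived."""
        return airway_closure_time_s - bolus_arrival_ues_s

    def has_closure_delay(delays_s, threshold_s=0.0):
        """True if any swallow in the study shows a delay above threshold.
        The 0.0 s cut-off is an assumption, not the paper's criterion."""
        return any(d > threshold_s for d in delays_s)
    ```

    Applying this per swallow across the three bolus sizes mirrors the study's finding that a delay can be present even when the PEN/ASP score is normal.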

  20. Dynamic and quantitative evaluation of degenerative mitral valve disease: a dedicated framework based on cardiac magnetic resonance imaging

    PubMed Central

    Onorati, Francesco; Puppini, Giovanni; Pappalardo, Omar A.; Selmi, Matteo; Votta, Emiliano; Faggian, Giuseppe; Redaelli, Alberto

    2017-01-01

Background Accurate quantification of mitral valve (MV) morphology and dynamic behavior over the cardiac cycle is crucial to understand the mechanisms of degenerative MV dysfunction and to guide the surgical intervention. Cardiac magnetic resonance (CMR) imaging has progressively been adopted to evaluate MV pathophysiology, although a dedicated framework is required to perform a quantitative assessment of the functional MV anatomy. Methods We investigated MV dynamic behavior in subjects with normal MV anatomy (n=10) and patients referred to surgery due to degenerative MV prolapse, classified as fibro-elastic deficiency (FED, n=9) and Barlow’s disease (BD, n=10). A CMR-dedicated framework was adopted to evaluate prolapse height and volume and quantitatively assess valvular morphology and papillary muscles (PAPs) function over the cardiac cycle. Multiple comparison was used to investigate the hallmarks associated with MV degenerative prolapse and evaluate the feasibility of anatomical and functional distinction between FED and BD phenotypes. Results On average, annular dimensions were significantly (P<0.05) larger in BD than in FED and normal subjects while no significant differences were noticed between FED and normal. MV eccentricity progressively decreased passing from normal to FED and BD, with the latter exhibiting a rounder annulus shape. Over the cardiac cycle, we noticed significant differences for BD during systole with an abnormal annular enlargement between mid and late systole (LS) (P<0.001 vs. normal); the PAPs dynamics remained comparable in the three groups. Prolapse height and volume highlighted significant differences among normal, FED and BD valves. Conclusions Our CMR-dedicated framework allows for the quantitative and dynamic evaluation of MV apparatus, with quantifiable annular alterations representing the primary hallmark of severe MV degeneration. This may aid surgeons in the evaluation of the severity of MV dysfunction and the selection of the

  1. Dynamic and quantitative evaluation of degenerative mitral valve disease: a dedicated framework based on cardiac magnetic resonance imaging.

    PubMed

    Sturla, Francesco; Onorati, Francesco; Puppini, Giovanni; Pappalardo, Omar A; Selmi, Matteo; Votta, Emiliano; Faggian, Giuseppe; Redaelli, Alberto

    2017-04-01

Accurate quantification of mitral valve (MV) morphology and dynamic behavior over the cardiac cycle is crucial to understand the mechanisms of degenerative MV dysfunction and to guide the surgical intervention. Cardiac magnetic resonance (CMR) imaging has progressively been adopted to evaluate MV pathophysiology, although a dedicated framework is required to perform a quantitative assessment of the functional MV anatomy. We investigated MV dynamic behavior in subjects with normal MV anatomy (n=10) and patients referred to surgery due to degenerative MV prolapse, classified as fibro-elastic deficiency (FED, n=9) and Barlow's disease (BD, n=10). A CMR-dedicated framework was adopted to evaluate prolapse height and volume and quantitatively assess valvular morphology and papillary muscles (PAPs) function over the cardiac cycle. Multiple comparison was used to investigate the hallmarks associated with MV degenerative prolapse and evaluate the feasibility of anatomical and functional distinction between FED and BD phenotypes. On average, annular dimensions were significantly (P<0.05) larger in BD than in FED and normal subjects while no significant differences were noticed between FED and normal. MV eccentricity progressively decreased passing from normal to FED and BD, with the latter exhibiting a rounder annulus shape. Over the cardiac cycle, we noticed significant differences for BD during systole with an abnormal annular enlargement between mid and late systole (LS) (P<0.001 vs. normal); the PAPs dynamics remained comparable in the three groups. Prolapse height and volume highlighted significant differences among normal, FED and BD valves. Our CMR-dedicated framework allows for the quantitative and dynamic evaluation of MV apparatus, with quantifiable annular alterations representing the primary hallmark of severe MV degeneration. This may aid surgeons in the evaluation of the severity of MV dysfunction and the selection of the appropriate MV treatment.

  2. Ground truth and benchmarks for performance evaluation

    NASA Astrophysics Data System (ADS)

    Takeuchi, Ayako; Shneier, Michael; Hong, Tsai Hong; Chang, Tommy; Scrapper, Christopher; Cheok, Geraldine S.

    2003-09-01

Progress in algorithm development and transfer of results to practical applications such as military robotics requires the setup of standard tasks and of standard qualitative and quantitative measurements for performance evaluation and validation. Although the evaluation and validation of algorithms have been discussed for over a decade, the research community still faces a lack of well-defined and standardized methodology. The fundamental problems include a lack of quantifiable measures of performance, a lack of data from state-of-the-art sensors in calibrated real-world environments, and a lack of facilities for conducting realistic experiments. In this research, we propose three methods for creating ground truth databases and benchmarks using multiple sensors. The databases and benchmarks will provide researchers with high quality data from suites of sensors operating in complex environments representing real problems of great relevance to the development of autonomous driving systems. At NIST, we have prototyped a High Mobility Multi-purpose Wheeled Vehicle (HMMWV) system with a suite of sensors including a Riegl ladar, GDRS ladar, stereo CCD, several color cameras, Global Position System (GPS), Inertial Navigation System (INS), pan/tilt encoders, and odometry. All sensors are calibrated with respect to each other in space and time. This allows a database of features and terrain elevation to be built. Ground truth for each sensor can then be extracted from the database. The main goal of this research is to provide ground truth databases for researchers and engineers to evaluate algorithms for effectiveness, efficiency, reliability, and robustness, thus advancing the development of algorithms.

  3. Effects of Computer Programming on Students' Cognitive Performance: A Quantitative Synthesis.

    ERIC Educational Resources Information Center

    Liao, Yuen-Kuang Cliff

A meta-analysis was performed to synthesize existing data concerning the effects of computer programming on cognitive outcomes of students. Sixty-five studies were located from three sources, and their quantitative data were transformed into a common scale--Effect Size (ES). The analysis showed that 58 (89%) of the study-weighted ESs were positive…
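    The common-scale Effect Size (ES) used in such syntheses is typically a standardized mean difference. A sketch using Cohen's d with a pooled standard deviation (whether this meta-analysis used d, Glass's delta, or another variant is not stated in the abstract):

    ```python
    import math

    def cohens_d(exp, ctrl):
        """Standardized mean difference between an experimental and a control
        group, using the pooled sample standard deviation (Cohen's d)."""
        n1, n2 = len(exp), len(ctrl)
        m1 = sum(exp) / n1
        m2 = sum(ctrl) / n2
        v1 = sum((x - m1) ** 2 for x in exp) / (n1 - 1)
        v2 = sum((x - m2) ** 2 for x in ctrl) / (n2 - 1)
        sp = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
        return (m1 - m2) / sp
    ```

    A positive ES means the programming group outperformed the comparison group, which is the direction reported for 89% of the study-weighted ESs above.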

  4. Evaluation of chemotherapy response in ovarian cancer treatment using quantitative CT image biomarkers: a preliminary study

    NASA Astrophysics Data System (ADS)

    Qiu, Yuchen; Tan, Maxine; McMeekin, Scott; Thai, Theresa; Moore, Kathleen; Ding, Kai; Liu, Hong; Zheng, Bin

    2015-03-01

The purpose of this study is to identify and apply quantitative image biomarkers for early prediction of the tumor response to chemotherapy among ovarian cancer patients participating in clinical trials testing new drugs. In the experiment, we retrospectively selected 30 cases from the patients who participated in Phase I clinical trials of new drugs or drug agents for ovarian cancer treatment. Each case is composed of two sets of CT images acquired pre- and post-treatment (4-6 weeks after starting treatment). A computer-aided detection (CAD) scheme was developed to extract and analyze the quantitative image features of the metastatic tumors previously tracked by the radiologists using the standard Response Evaluation Criteria in Solid Tumors (RECIST) guideline. The CAD scheme first segmented 3-D tumor volumes from the background using a hybrid tumor segmentation scheme. Then, for each segmented tumor, CAD computed three quantitative image features: the change in tumor volume, tumor CT number (density), and density variance. The feature changes were calculated between the matched tumors tracked on the CT images acquired pre- and post-treatment. Finally, CAD predicted each patient's 6-month progression-free survival (PFS) using a decision-tree based classifier. The performance of the CAD scheme was compared with the RECIST category. The results show that the CAD scheme achieved a prediction accuracy of 76.7% (23/30 cases) with a Kappa coefficient of 0.493, which is significantly higher than the performance of RECIST prediction, with a prediction accuracy and Kappa coefficient of 60% (17/30) and 0.062, respectively. This study demonstrated the feasibility of analyzing quantitative image features to improve the accuracy of early prediction of the tumor response to new drugs or therapeutic methods for ovarian cancer patients.
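    The per-tumor feature changes computed between matched pre- and post-treatment scans can be sketched as below; the field names, the relative-change formulation, the volume cut-off, and the single-threshold classifier (a one-node stand-in for the paper's decision-tree classifier, whose structure is not given in the abstract) are all assumptions:

    ```python
    def tumor_feature_changes(pre, post):
        """Relative change in the three quantitative features between the
        matched pre- and post-treatment tumors. The dictionary keys
        ('volume', 'density', 'density_var') are hypothetical names."""
        return {k: (post[k] - pre[k]) / pre[k]
                for k in ("volume", "density", "density_var")}

    def predict_pfs_6mo(changes, volume_cut=-0.3):
        """Toy stand-in for the decision-tree classifier: predict 6-month
        progression-free survival when the tumor volume shrinks past an
        (assumed) -30% cut-off."""
        return changes["volume"] <= volume_cut
    ```

    The real classifier would branch on all three feature changes rather than volume alone; the sketch only illustrates the pre/post feature-change step.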

  5. Standardizing evaluation of pQCT image quality in the presence of subject movement: qualitative versus quantitative assessment.

    PubMed

    Blew, Robert M; Lee, Vinson R; Farr, Joshua N; Schiferl, Daniel J; Going, Scott B

    2014-02-01

Peripheral quantitative computed tomography (pQCT) is an essential tool for assessing bone parameters of the limbs, but subject movement and its impact on image quality remains a challenge to manage. The current approach to determine image viability is by visual inspection, but pQCT lacks a quantitative evaluation. Therefore, the aims of this study were to (1) examine the reliability of a qualitative visual inspection scale and (2) establish a quantitative motion assessment methodology. Scans were performed on 506 healthy girls (9-13 years) at diaphyseal regions of the femur and tibia. Scans were rated for movement independently by three technicians using a linear, nominal scale. Quantitatively, a ratio of movement to limb size (%Move) provided a measure of movement artifact. A repeat-scan subsample (n = 46) was examined to determine %Move's impact on bone parameters. Agreement between measurers was strong (intraclass correlation coefficient = 0.732 for tibia, 0.812 for femur), but greater variability was observed in scans rated 3 or 4, the delineation between repeat and no repeat. The quantitative approach found ≥95% of subjects had %Move <25%. Comparison of initial and repeat scans by groups above and below 25% initial movement showed significant differences in the >25% grouping. A pQCT visual inspection scale can be a reliable metric of image quality, but technicians may periodically mischaracterize subject motion. The presented quantitative methodology yields more consistent movement assessment and could unify procedure across laboratories. Data suggest a delineation of 25% movement for determining whether a diaphyseal scan is viable or requires repeat.
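    The %Move ratio and the suggested 25% delineation can be sketched as follows; how the movement-artifact extent is actually measured on the pQCT image is not specified in the abstract, so the inputs here are assumptions:

    ```python
    def percent_move(movement_extent_mm, limb_size_mm):
        """%Move: movement-artifact extent expressed as a percentage of limb
        size. Both inputs are assumed to be in the same units (e.g. mm)."""
        return 100.0 * movement_extent_mm / limb_size_mm

    def needs_repeat(pmove, cutoff=25.0):
        """Apply the study's suggested delineation: repeat the scan when
        %Move exceeds 25%."""
        return pmove > cutoff
    ```

    Encoding the rule numerically is what lets the assessment be applied consistently across technicians and laboratories, which is the paper's stated motivation.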

  6. Standardizing Evaluation of pQCT Image Quality in the Presence of Subject Movement: Qualitative vs. Quantitative Assessment

    PubMed Central

    Blew, Robert M.; Lee, Vinson R.; Farr, Joshua N.; Schiferl, Daniel J.; Going, Scott B.

    2013-01-01

Purpose Peripheral quantitative computed tomography (pQCT) is an essential tool for assessing bone parameters of the limbs, but subject movement and its impact on image quality remains a challenge to manage. The current approach to determine image viability is by visual inspection, but pQCT lacks a quantitative evaluation. Therefore, the aims of this study were to (1) examine the reliability of a qualitative visual inspection scale, and (2) establish a quantitative motion assessment methodology. Methods Scans were performed on 506 healthy girls (9–13 yr) at diaphyseal regions of the femur and tibia. Scans were rated for movement independently by three technicians using a linear, nominal scale. Quantitatively, a ratio of movement to limb size (%Move) provided a measure of movement artifact. A repeat-scan subsample (n=46) was examined to determine %Move’s impact on bone parameters. Results Agreement between measurers was strong (ICC = .732 for tibia, .812 for femur), but greater variability was observed in scans rated 3 or 4, the delineation between repeat and no repeat. The quantitative approach found ≥95% of subjects had %Move <25%. Comparison of initial and repeat scans by groups above and below 25% initial movement showed significant differences in the >25% grouping. Conclusions A pQCT visual inspection scale can be a reliable metric of image quality but technicians may periodically mischaracterize subject motion. The presented quantitative methodology yields more consistent movement assessment and could unify procedure across laboratories. Data suggest a delineation of 25% movement for determining whether a diaphyseal scan is viable or requires repeat. PMID:24077875

  7. Performance evaluation of dispersion parameterization schemes in the plume simulation of FFT-07 diffusion experiment

    NASA Astrophysics Data System (ADS)

    Pandey, Gavendra; Sharan, Maithili

    2018-01-01

Application of atmospheric dispersion models in air quality analysis requires a proper representation of the vertical and horizontal growth of the plume. For this purpose, various schemes for the parameterization of the dispersion parameters σ's are described in both stable and unstable conditions. These schemes differ in (i) the extent of availability of on-site measurements, (ii) formulations developed for other sites, and (iii) empirical relations. The performance of these schemes is evaluated in an earlier developed IIT (Indian Institute of Technology) dispersion model with the data sets from single and multiple releases conducted at the Fusion Field Trials, Dugway Proving Ground, Utah, 2007. Qualitative and quantitative evaluation of the relative performance of all the schemes is carried out in both stable and unstable conditions in the light of (i) peak/maximum concentrations and (ii) the overall concentration distribution. The blocked bootstrap resampling technique is adopted to investigate the statistical significance of the differences in performance of the schemes by computing 95% confidence limits on the parameters FB and NMSE. The various analyses based on selected statistical measures indicated consistency in the qualitative and quantitative performances of the σ schemes. The scheme based on the standard deviation of wind velocity fluctuations and Lagrangian time scales exhibits a relatively better performance in predicting the peak as well as the lateral spread.
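    The parameters FB (fractional bias) and NMSE (normalized mean square error), with blocked-bootstrap confidence limits, can be sketched using their standard air-quality-model definitions; the paper's exact block length and resampling scheme are assumptions here:

    ```python
    import random

    def fb(obs, pred):
        """Fractional bias: (mean_obs - mean_pred) / (0.5 * (mean_obs + mean_pred))."""
        mo = sum(obs) / len(obs)
        mp = sum(pred) / len(pred)
        return (mo - mp) / (0.5 * (mo + mp))

    def nmse(obs, pred):
        """Normalized mean square error: mean((obs - pred)^2) / (mean_obs * mean_pred)."""
        mo = sum(obs) / len(obs)
        mp = sum(pred) / len(pred)
        return sum((o - p) ** 2 for o, p in zip(obs, pred)) / (len(obs) * mo * mp)

    def block_bootstrap_ci(obs, pred, stat, block=5, reps=2000, alpha=0.05, seed=0):
        """95% confidence limits for a statistic by resampling contiguous
        blocks of paired (observed, predicted) concentrations. The block
        length of 5 is an assumed default."""
        rng = random.Random(seed)
        n = len(obs)
        starts = list(range(n - block + 1))
        vals = []
        for _ in range(reps):
            o, p = [], []
            while len(o) < n:
                s = rng.choice(starts)
                o.extend(obs[s:s + block])
                p.extend(pred[s:s + block])
            vals.append(stat(o[:n], p[:n]))
        vals.sort()
        return vals[int(alpha / 2 * reps)], vals[int((1 - alpha / 2) * reps) - 1]
    ```

    A scheme whose FB and NMSE confidence intervals exclude another scheme's point estimates can then be said to perform significantly differently, which is how the 95% limits are used above.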

  8. Development of the local magnification method for quantitative evaluation of endoscope geometric distortion

    NASA Astrophysics Data System (ADS)

    Wang, Quanzeng; Cheng, Wei-Chung; Suresh, Nitin; Hua, Hong

    2016-05-01

With improved diagnostic capabilities and complex optical designs, endoscopic technologies are advancing. As one of several important optical performance characteristics, geometric distortion can negatively affect size estimation and feature-identification-related diagnosis. Therefore, a quantitative and simple distortion evaluation method is imperative for both the endoscope industry and medical device regulatory agencies. However, no such method is available yet. While image correction techniques are rather mature, they depend heavily on computational power to process multidimensional image data based on complex mathematical models, which are difficult to understand. Some commonly used distortion evaluation methods, such as the picture height distortion (DPH) or radial distortion (DRAD), are either too simple to accurately describe the distortion or subject to the error of deriving a reference image. We developed the basic local magnification (ML) method to evaluate endoscope distortion. Based on the method, we also developed ways to calculate DPH and DRAD. The method overcomes the aforementioned limitations, has clear physical meaning over the whole field of view, and can facilitate lesion size estimation during diagnosis. Most importantly, the method can facilitate bringing endoscopic technologies to market and could potentially be adopted in an international endoscope standard.
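    The local magnification idea can be sketched for a 1-D row of grid-target dots; the method's full formulation over the 2-D field of view is not given in the abstract, so the pairwise ratio and the centre-referenced distortion below are assumptions for illustration:

    ```python
    def local_magnification(image_pts_mm, object_pts_mm):
        """Local magnification between adjacent grid dots: the ratio of the
        distance measured in the image to the true distance on the target.
        A single row of dots is assumed for simplicity."""
        mags = []
        for i in range(len(image_pts_mm) - 1):
            di = image_pts_mm[i + 1] - image_pts_mm[i]
            do = object_pts_mm[i + 1] - object_pts_mm[i]
            mags.append(di / do)
        return mags

    def relative_distortion(mags):
        """Distortion of each local magnification relative to the centre
        value, one (assumed) way to express ML-based distortion across
        the field of view."""
        m0 = mags[len(mags) // 2]
        return [(m - m0) / m0 for m in mags]
    ```

    A distortion-free endoscope would give a flat magnification profile; barrel distortion shows up as magnification falling off toward the edge of the field.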

  9. Quantitative methods in assessment of neurologic function.

    PubMed

    Potvin, A R; Tourtellotte, W W; Syndulko, K; Potvin, J

    1981-01-01

    Traditionally, neurologists have emphasized qualitative techniques for assessing results of clinical trials. However, in recent years qualitative evaluations have been increasingly augmented by quantitative tests for measuring neurologic functions pertaining to mental state, strength, steadiness, reactions, speed, coordination, sensation, fatigue, gait, station, and simulated activities of daily living. Quantitative tests have long been used by psychologists for evaluating asymptomatic function, assessing human information processing, and predicting proficiency in skilled tasks; however, their methodology has never been directly assessed for validity in a clinical environment. In this report, relevant contributions from the literature on asymptomatic human performance and that on clinical quantitative neurologic function are reviewed and assessed. While emphasis is focused on tests appropriate for evaluating clinical neurologic trials, evaluations of tests for reproducibility, reliability, validity, and examiner training procedures, and for effects of motivation, learning, handedness, age, and sex are also reported and interpreted. Examples of statistical strategies for data analysis, scoring systems, data reduction methods, and data display concepts are presented. Although investigative work still remains to be done, it appears that carefully selected and evaluated tests of sensory and motor function should be an essential factor for evaluating clinical trials in an objective manner.

  10. StreamQRE: Modular Specification and Efficient Evaluation of Quantitative Queries over Streaming Data.

    PubMed

    Mamouras, Konstantinos; Raghothaman, Mukund; Alur, Rajeev; Ives, Zachary G; Khanna, Sanjeev

    2017-06-01

    Real-time decision making in emerging IoT applications typically relies on computing quantitative summaries of large data streams in an efficient and incremental manner. To simplify the task of programming the desired logic, we propose StreamQRE, which provides natural and high-level constructs for processing streaming data. Our language has a novel integration of linguistic constructs from two distinct programming paradigms: streaming extensions of relational query languages and quantitative extensions of regular expressions. The former allows the programmer to employ relational constructs to partition the input data by keys and to integrate data streams from different sources, while the latter can be used to exploit the logical hierarchy in the input stream for modular specifications. We first present the core language with a small set of combinators, formal semantics, and a decidable type system. We then show how to express a number of common patterns with illustrative examples. Our compilation algorithm translates the high-level query into a streaming algorithm with precise complexity bounds on per-item processing time and total memory footprint. We also show how to integrate approximation algorithms into our framework. We report on an implementation in Java, and evaluate it with respect to existing high-performance engines for processing streaming data. Our experimental evaluation shows that (1) StreamQRE allows more natural and succinct specification of queries compared to existing frameworks, (2) the throughput of our implementation is higher than comparable systems (for example, two-to-four times greater than RxJava), and (3) the approximation algorithms supported by our implementation can lead to substantial memory savings.

  11. Preliminary Results of Acoustic Radiation Force Impulse Imaging by Combined Qualitative and Quantitative Analyses for Evaluation of Breast Lesions.

    PubMed

    Wang, Lin; Wan, Cai-Feng; Du, Jing; Li, Feng-Hua

    2018-04-15

The purpose of this study was to evaluate the application of a new elastographic technique, acoustic radiation force impulse (ARFI) imaging, and its diagnostic performance for characterizing breast lesions. One hundred consecutive female patients with 126 breast lesions were enrolled in our study. After routine breast ultrasound examinations, the patients underwent ARFI elasticity imaging. Virtual Touch tissue imaging (VTI) and Virtual Touch tissue quantification (Siemens Medical Solutions, Mountain View, CA) were used to qualitatively and quantitatively analyze the elasticity and hardness of tumors. A receiver operating characteristic curve analysis was performed to evaluate the diagnostic performance of ARFI for discrimination between benign and malignant breast lesions. Pathologic analysis revealed 40 lesions in the malignant group and 86 lesions in the benign group. Different VTI patterns were observed in benign and malignant breast lesions. Eighty lesions (93.0%) in the benign group had pattern 1, 2, or 3, whereas all pattern 4b lesions (n = 20 [50.0%]) were malignant. Regarding the quantitative analysis, the mean VTI-to-B-mode area ratio, internal shear wave velocity, and marginal shear wave velocity of benign lesions were statistically significantly lower than those of malignant lesions (all P < .001). The cutoff point for a scoring system constructed to evaluate the diagnostic performance of ARFI was estimated to be between 3 and 4 points for malignancy, with sensitivity of 77.5%, specificity of 96.5%, accuracy of 90.5%, and an area under the curve of 0.933. The application of ARFI technology has shown promising results by noninvasively providing substantial complementary information and could potentially serve as an effective diagnostic tool for differentiation between benign and malignant breast lesions. © 2018 by the American Institute of Ultrasound in Medicine.
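    The reported diagnostic metrics follow from a standard 2x2 confusion table at the chosen cutoff. A minimal sketch, assuming a hypothetical split of the 126 lesions into true/false positives and negatives chosen to be consistent with the reported sensitivity (77.5%), specificity (96.5%), and accuracy (90.5%):

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and accuracy from 2x2 confusion-table counts."""
    sensitivity = tp / (tp + fn)   # fraction of malignant lesions detected
    specificity = tn / (tn + fp)   # fraction of benign lesions correctly cleared
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Hypothetical split of the 126 lesions (40 malignant, 86 benign) chosen to
# reproduce the reported operating point: 31 true positives, 83 true negatives.
sens, spec, acc = diagnostic_metrics(tp=31, fn=9, tn=83, fp=3)
```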

  12. Quantitative Measures for Evaluation of Ultrasound Therapies of the Prostate

    NASA Astrophysics Data System (ADS)

    Kobelevskiy, Ilya; Burtnyk, Mathieu; Bronskill, Michael; Chopra, Rajiv

    2010-03-01

Development of non-invasive techniques for prostate cancer treatment requires implementation of quantitative measures for evaluation of the treatment results. In this paper, we introduce measures that estimate spatial targeting accuracy and potential thermal damage to the structures surrounding the prostate. The measures were developed for the technique of treating prostate cancer with a transurethral ultrasound heating applicator guided by active MR temperature feedback. Variations of ultrasound element length and related MR imaging parameters such as MR slice thickness and update time were investigated by performing numerical simulations of the treatment on a database of ten patient prostate geometries segmented from clinical MR images. Susceptibility of each parameter configuration to uncertainty in MR temperature measurements was studied by adding noise to the temperature measurements. Gaussian noise with zero mean and standard deviation of 0, 1, 3 and 5°C was used to model different levels of uncertainty in MR temperature measurements. Results of simulations for each parameter configuration were averaged over the database of the ten prostate patient geometries studied. Results have shown that for an update time of 5 seconds both 3- and 5-mm elements achieve appropriate performance for temperature uncertainty up to 3°C, while temperature uncertainty of 5°C leads to noticeable reduction in spatial accuracy and increased risk of damaging the rectal wall. Ten-mm elements lacked spatial accuracy and had a higher risk of damaging the rectal wall compared to 3- and 5-mm elements, but were less sensitive to the level of temperature uncertainty. The effect of changing update time was studied for 5-mm elements. Simulations showed that update time had minor effects on all aspects of treatment for temperature uncertainty of 0°C and 1°C, while temperature uncertainties of 3°C and 5°C led to reduced spatial accuracy, increased potential damage to the rectal wall, and
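    The noise model described above — zero-mean Gaussian perturbation of the MR temperature measurements at several standard deviations — can be sketched as below. This is a minimal illustration, not the authors' simulation code; the function name and the example voxel temperatures are hypothetical.

```python
import random

def noisy_temperature_maps(true_map, sigmas=(0.0, 1.0, 3.0, 5.0), seed=1):
    """Return one noisy copy of the temperature map per noise level, adding
    zero-mean Gaussian noise (standard deviation sigma, in deg C) per voxel."""
    rng = random.Random(seed)  # seeded for reproducible simulations
    return {sigma: [t + rng.gauss(0.0, sigma) for t in true_map]
            for sigma in sigmas}

# Hypothetical voxel temperatures (deg C) from a simulated heating pattern.
true_map = [37.0, 45.0, 55.0, 60.0]
maps = noisy_temperature_maps(true_map)
# With sigma = 0 the "measured" map equals the true map exactly.
```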

  13. Quantitative Motor Performance and Sleep Benefit in Parkinson Disease

    PubMed Central

    van Gilst, Merel M.; van Mierlo, Petra; Bloem, Bastiaan R.; Overeem, Sebastiaan

    2015-01-01

Study Objectives: Many people with Parkinson disease experience “sleep benefit”: temporarily improved mobility upon awakening. Here we used quantitative motor tasks to assess the influence of sleep on motor functioning in Parkinson disease. Design: Eighteen Parkinson patients with and 20 without subjective sleep benefit and 20 healthy controls participated. Before and directly after a regular night sleep and an afternoon nap, subjects performed the timed pegboard dexterity task and quantified finger tapping task. Subjective ratings of motor functioning and mood/vigilance were included. Sleep was monitored using polysomnography. Results: On both tasks, patients were overall slower than healthy controls (night: F2,55 = 16.938, P < 0.001; nap: F2,55 = 15.331, P < 0.001). On the pegboard task, there was a small overall effect of night sleep (F1,55 = 9.695, P = 0.003); both patients and controls were on average slightly slower in the morning. However, in both tasks there was no sleep*group interaction for either nighttime sleep or the afternoon nap. There was a modest correlation between the score on the pegboard task and self-rated motor symptoms among patients (rho = 0.233, P = 0.004). No correlations between task performance and mood/vigilance or sleep time/efficiency were found. Conclusions: A positive effect of sleep on motor function is commonly reported by Parkinson patients. Here we show that the subjective experience of sleep benefit is not paralleled by an actual improvement in motor functioning. Sleep benefit therefore appears to be a subjective phenomenon and not a Parkinson-specific reduction in symptoms. Citation: van Gilst MM, van Mierlo P, Bloem BR, Overeem S. Quantitative Motor Performance and Sleep Benefit in Parkinson Disease. SLEEP 2015;38(10):1567–1573. PMID:25902811

  14. Chemical Fingerprint and Quantitative Analysis for the Quality Evaluation of Docynia dcne Leaves by High-Performance Liquid Chromatography Coupled with Chemometrics Analysis.

    PubMed

    Zhang, Xiaoyu; Mei, Xueran; Wang, Zhanguo; Wu, Jing; Liu, Gang; Hu, Huiling; Li, Qijuan

    2018-05-24

Docynia dcne leaf, from the genus Docynia Dcne (comprising the three species Docynia delavayi, Docynia indica and Docynia longiunguis), is an important raw material of local ethnic minority tea, ethnomedicines and food supplements in southwestern areas of China. However, D. dcne leaves from these three species are often confused with one another, which could affect their therapeutic effect. A rapid and effective method for chemical fingerprinting and quantitative analysis to evaluate the quality of D. dcne leaves was established. Chemometric methods, including similarity analysis, hierarchical cluster analysis and partial least-squares discrimination analysis, were applied to distinguish 30 batches of D. dcne leaf samples from these three species. These results validated each other and successfully grouped the samples into three categories closely related to the species of D. dcne leaves. Moreover, isoquercitrin and phlorizin were screened as the chemical markers to evaluate the quality of D. dcne leaves from different species. The contents of isoquercitrin and phlorizin varied markedly among these samples, with ranges of 6.41-38.84 and 95.73-217.76 mg/g, respectively. All the results indicated that an integrated method of chemical fingerprinting coupled with chemometric analysis and quantitative assessment is a powerful and beneficial tool for quality control of D. dcne leaves, and could also be applied for differentiation and quality control of other herbal preparations.

  15. Quantitative Skills as a Graduate Learning Outcome: Exploring Students' Evaluative Expertise

    ERIC Educational Resources Information Center

    Matthews, Kelly E.; Adams, Peter; Goos, Merrilyn

    2017-01-01

    In the biosciences, quantitative skills are an essential graduate learning outcome. Efforts to evidence student attainment at the whole of degree programme level are rare and making sense of such data is complex. We draw on assessment theories from Sadler (evaluative expertise) and Boud (sustainable assessment) to interpret final-year bioscience…

  16. Raman spectral imaging for quantitative contaminant evaluation in skim milk powder

    USDA-ARS?s Scientific Manuscript database

    This study uses a point-scan Raman spectral imaging system for quantitative detection of melamine in milk powder. A sample depth of 2 mm and corresponding laser intensity of 200 mW were selected after evaluating the penetration of a 785 nm laser through milk powder. Horizontal and vertical spatial r...

17. An evaluation of performance-arts based HIV-prevention events in London with 13-16-year-olds.

    PubMed

    Campbell, Tomás; Bath, Michael; Bradbear, Rachel; Cottle, Justine; Parrett, Neil

    2009-09-01

The London borough of Newham is ethnically diverse and is one of the poorest regions in the UK. Rates of teenage pregnancy, sexually transmitted infections (STIs) and HIV are high compared to the rest of the country. One strand of the local school-based HIV-prevention programme for young people utilizes performance arts as a tool for HIV education and prevention. This study evaluated HIV knowledge, confidence and intention to use a condom in two groups of 13-16-year-olds who had participated in performance-based events. Group 1 (n = 14) participated in a six-week programme of performance arts-based HIV education and prevention workshops, which culminated in a theatre-based performance. Group 2 (n = 65) were audience members who attended the performance. Participants completed a short questionnaire containing both qualitative and quantitative items. Qualitative data suggested that the participants had learned about condoms and their efficacy in preventing acquisition of HIV and sexually transmitted diseases. Quantitative results indicated that after participation in the events, respondents had more information about HIV and condom use; were more confident that they could insist on condom use with partners; and planned to use condoms in the future. There was a statistically significant difference between Groups 1 and 2 but because of the small numbers in Group 1 this result should be interpreted cautiously. Performance-based HIV-prevention activities may be a useful way to deliver HIV-prevention messages to young people. This evaluation will form the basis of a more systematic and robust evaluation of future events.

  18. Quantitative nondestructive evaluation: Requirements for tomorrow's reliability

    NASA Technical Reports Server (NTRS)

    Heyman, Joseph S.

    1991-01-01

    Quantitative Nondestructive Evaluation (QNDE) is the technology of measurement, analysis, and prediction of the state of material/structural systems for safety, reliability, and mission assurance. QNDE has impact on everyday life from the cars we drive, the planes we fly, the buildings we work or live in, literally to the infrastructure of our world. Here, researchers highlight some of the new sciences and technologies that are part of a safer, cost effective tomorrow. Specific technologies that are discussed are thermal QNDE of aircraft structural integrity, ultrasonic QNDE for materials characterization, and technology spinoffs from aerospace to the medical sector. In each case, examples are given of how new requirements result in enabling measurement technologies, which in turn change the boundaries of design/practice.

  19. Quantitative 3D Ultrashort Time-to-Echo (UTE) MRI and Micro-CT (μCT) Evaluation of the Temporomandibular Joint (TMJ) Condylar Morphology

    PubMed Central

    Geiger, Daniel; Bae, Won C.; Statum, Sheronda; Du, Jiang; Chung, Christine B.

    2014-01-01

Objective Temporomandibular dysfunction involves osteoarthritis of the TMJ, including degeneration and morphologic changes of the mandibular condyle. Purpose of this study was to determine accuracy of novel 3D-UTE MRI versus micro-CT (μCT) for quantitative evaluation of mandibular condyle morphology. Material & Methods Nine TMJ condyle specimens were harvested from cadavers (2M, 3F; Age 85 ± 10 yrs., mean±SD). 3D-UTE MRI (TR=50 ms, TE=0.05 ms, 104 μm isotropic-voxel) was performed using a 3-T MR scanner and μCT (18 μm isotropic-voxel) was performed. MR datasets were spatially registered with the μCT dataset. Two observers segmented bony contours of the condyles. Fibrocartilage was segmented on the MR dataset. Using a custom program, bone and fibrocartilage surface coordinates, Gaussian curvature, volume of segmented regions and fibrocartilage thickness were determined for quantitative evaluation of joint morphology. Agreement between techniques (MRI vs. μCT) and observers (MRI vs. MRI) for Gaussian curvature, mean curvature and segmented volume of the bone was determined using intraclass correlation coefficient (ICC) analyses. Results Between MRI and μCT, the average deviation of surface coordinates was 0.19±0.15 mm, slightly higher than the spatial resolution of MRI. Average deviation of the Gaussian curvature and volume of segmented regions, from MRI to μCT, was 5.7±6.5% and 6.6±6.2%, respectively. ICC coefficients (MRI vs. μCT) for Gaussian curvature, mean curvature and segmented volumes were respectively 0.892, 0.893 and 0.972. Between observers (MRI vs. MRI), the ICC coefficients were 0.998, 0.999 and 0.997, respectively. Fibrocartilage thickness was 0.55±0.11 mm, as previously described in the literature for grossly normal TMJ samples. Conclusion 3D-UTE MR quantitative evaluation of TMJ condyle morphology ex vivo, including surface, curvature and segmented volume, shows high correlation against μCT and between observers. In addition, UTE MRI allows
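    The "average deviation" summaries reported above (e.g., 6.6±6.2% for segmented volume) correspond to the mean and standard deviation of per-specimen percent deviations between paired MRI and μCT measurements. A minimal sketch with hypothetical volumes:

```python
from statistics import mean, stdev

def percent_deviation(reference, measured):
    """Mean +/- SD of per-specimen percent deviations of a measurement
    (e.g., MRI-segmented volume) from its reference (e.g., microCT)."""
    devs = [abs(m - r) / r * 100.0 for r, m in zip(reference, measured)]
    return mean(devs), stdev(devs)

# Hypothetical segmented condyle volumes (mm^3) for three specimens.
uct_volumes = [820.0, 910.0, 760.0]
mri_volumes = [860.0, 880.0, 800.0]
m, s = percent_deviation(uct_volumes, mri_volumes)
```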

  20. Quantitative nondestructive evaluation of ceramic matrix composite by the resonance method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watanabe, T.; Aizawa, T.; Kihara, J.

The resonance method was developed to make quantitative nondestructive evaluations of mechanical properties without any troublesome procedure. Since the present method is indifferent to the geometry of the specimen, both monolithic and ceramic matrix composite materials in process can be evaluated in a nondestructive manner. Al₂O₃, Si₃N₄, SiC/Si₃N₄, and various C/C composite materials are employed to demonstrate the validity and effectiveness of the present method.

  1. Potential pros and cons of external healthcare performance evaluation systems: real-life perspectives on Iranian hospital evaluation and accreditation program

    PubMed Central

    Jaafaripooyan, Ebrahim

    2014-01-01

    Background: Performance evaluation is essential to quality improvement in healthcare. The current study has identified the potential pros and cons of external healthcare evaluation programs, utilizing them subsequently to look into the merits of a similar case in a developing country. Methods: A mixed method study employing both qualitative and quantitative data collection and analysis techniques was adopted to achieve the study end. Subject Matter Experts (SMEs) and professionals were approached for two-stage process of data collection. Results: Potential advantages included greater attractiveness of high accreditation rank healthcare organizations to their customers/purchasers and boosted morale of their personnel. Downsides, as such, comprised the programs’ over-reliance on value judgment of surveyors, routinization and incurring undue cost on the organizations. In addition, the improved, standardized care processes as well as the judgmental nature of program survey were associated, as pros and cons, to the program investigated by the professionals. Conclusion: Besides rendering a tentative assessment of Iranian hospital evaluation program, the study provides those running external performance evaluations with a lens to scrutinize the virtues of their own evaluation systems through identifying the potential advantages and drawbacks of such programs. Moreover, the approach followed could be utilized for performance assessment of similar evaluation programs. PMID:25279381

  2. Potential pros and cons of external healthcare performance evaluation systems: real-life perspectives on Iranian hospital evaluation and accreditation program.

    PubMed

    Jaafaripooyan, Ebrahim

    2014-09-01

    Performance evaluation is essential to quality improvement in healthcare. The current study has identified the potential pros and cons of external healthcare evaluation programs, utilizing them subsequently to look into the merits of a similar case in a developing country. A mixed method study employing both qualitative and quantitative data collection and analysis techniques was adopted to achieve the study end. Subject Matter Experts (SMEs) and professionals were approached for two-stage process of data collection. Potential advantages included greater attractiveness of high accreditation rank healthcare organizations to their customers/purchasers and boosted morale of their personnel. Downsides, as such, comprised the programs' over-reliance on value judgment of surveyors, routinization and incurring undue cost on the organizations. In addition, the improved, standardized care processes as well as the judgmental nature of program survey were associated, as pros and cons, to the program investigated by the professionals. Besides rendering a tentative assessment of Iranian hospital evaluation program, the study provides those running external performance evaluations with a lens to scrutinize the virtues of their own evaluation systems through identifying the potential advantages and drawbacks of such programs. Moreover, the approach followed could be utilized for performance assessment of similar evaluation programs.

  3. Design, implementation and multisite evaluation of a system suitability protocol for the quantitative assessment of instrument performance in liquid chromatography-multiple reaction monitoring-MS (LC-MRM-MS).

    PubMed

    Abbatiello, Susan E; Mani, D R; Schilling, Birgit; Maclean, Brendan; Zimmerman, Lisa J; Feng, Xingdong; Cusack, Michael P; Sedransk, Nell; Hall, Steven C; Addona, Terri; Allen, Simon; Dodder, Nathan G; Ghosh, Mousumi; Held, Jason M; Hedrick, Victoria; Inerowicz, H Dorota; Jackson, Angela; Keshishian, Hasmik; Kim, Jong Won; Lyssand, John S; Riley, C Paige; Rudnick, Paul; Sadowski, Pawel; Shaddox, Kent; Smith, Derek; Tomazela, Daniela; Wahlander, Asa; Waldemarson, Sofia; Whitwell, Corbin A; You, Jinsam; Zhang, Shucha; Kinsinger, Christopher R; Mesri, Mehdi; Rodriguez, Henry; Borchers, Christoph H; Buck, Charles; Fisher, Susan J; Gibson, Bradford W; Liebler, Daniel; Maccoss, Michael; Neubert, Thomas A; Paulovich, Amanda; Regnier, Fred; Skates, Steven J; Tempst, Paul; Wang, Mu; Carr, Steven A

    2013-09-01

Multiple reaction monitoring (MRM) mass spectrometry coupled with stable isotope dilution (SID) and liquid chromatography (LC) is increasingly used in biological and clinical studies for precise and reproducible quantification of peptides and proteins in complex sample matrices. Robust LC-SID-MRM-MS-based assays that can be replicated across laboratories and ultimately in clinical laboratory settings require standardized protocols to demonstrate that the analysis platforms are performing adequately. We developed a system suitability protocol (SSP), which employs a predigested mixture of six proteins, to facilitate performance evaluation of LC-SID-MRM-MS instrument platforms, configured with nanoflow-LC systems interfaced to triple quadrupole mass spectrometers. The SSP was designed for use with low multiplex analyses as well as high multiplex approaches when software-driven scheduling of data acquisition is required. Performance was assessed by monitoring of a range of chromatographic and mass spectrometric metrics including peak width, chromatographic resolution, peak capacity, and the variability in peak area and analyte retention time (RT) stability. The SSP, which was evaluated in 11 laboratories on a total of 15 different instruments, enabled early diagnoses of LC and MS anomalies that indicated suboptimal LC-MRM-MS performance. The observed range in variation of each of the metrics scrutinized serves to define the criteria for optimized LC-SID-MRM-MS platforms for routine use, with pass/fail criteria for system suitability performance measures defined as peak area coefficient of variation <0.15, peak width coefficient of variation <0.15, standard deviation of RT <0.15 min (9 s), and the RT drift <0.5 min (30 s). The deleterious effect of a marginally performing LC-SID-MRM-MS system on the limit of quantification (LOQ) in targeted quantitative assays illustrates the use and need for a SSP to establish robust and reliable system performance.
Use of a SSP helps
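    The pass/fail criteria quoted above lend themselves to a simple automated check on replicate injections. A minimal sketch (not part of the published SSP software; the function and the replicate values are hypothetical):

```python
from statistics import mean, stdev

def system_suitability(peak_areas, peak_widths, rts):
    """Check replicate-injection metrics against the pass/fail thresholds
    quoted above: peak-area CV < 0.15, peak-width CV < 0.15, RT standard
    deviation < 0.15 min, RT drift (max minus min retention time) < 0.5 min."""
    cv = lambda xs: stdev(xs) / mean(xs)
    checks = {
        "peak_area_cv": cv(peak_areas) < 0.15,
        "peak_width_cv": cv(peak_widths) < 0.15,
        "rt_sd_min": stdev(rts) < 0.15,
        "rt_drift_min": (max(rts) - min(rts)) < 0.5,
    }
    return all(checks.values()), checks

# Hypothetical replicate injections of a single peptide.
ok, detail = system_suitability(
    peak_areas=[1.00e6, 1.05e6, 0.98e6],
    peak_widths=[0.20, 0.21, 0.19],
    rts=[12.30, 12.35, 12.32],
)
```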

  4. Design, Implementation and Multisite Evaluation of a System Suitability Protocol for the Quantitative Assessment of Instrument Performance in Liquid Chromatography-Multiple Reaction Monitoring-MS (LC-MRM-MS)*

    PubMed Central

    Abbatiello, Susan E.; Mani, D. R.; Schilling, Birgit; MacLean, Brendan; Zimmerman, Lisa J.; Feng, Xingdong; Cusack, Michael P.; Sedransk, Nell; Hall, Steven C.; Addona, Terri; Allen, Simon; Dodder, Nathan G.; Ghosh, Mousumi; Held, Jason M.; Hedrick, Victoria; Inerowicz, H. Dorota; Jackson, Angela; Keshishian, Hasmik; Kim, Jong Won; Lyssand, John S.; Riley, C. Paige; Rudnick, Paul; Sadowski, Pawel; Shaddox, Kent; Smith, Derek; Tomazela, Daniela; Wahlander, Asa; Waldemarson, Sofia; Whitwell, Corbin A.; You, Jinsam; Zhang, Shucha; Kinsinger, Christopher R.; Mesri, Mehdi; Rodriguez, Henry; Borchers, Christoph H.; Buck, Charles; Fisher, Susan J.; Gibson, Bradford W.; Liebler, Daniel; MacCoss, Michael; Neubert, Thomas A.; Paulovich, Amanda; Regnier, Fred; Skates, Steven J.; Tempst, Paul; Wang, Mu; Carr, Steven A.

    2013-01-01

Multiple reaction monitoring (MRM) mass spectrometry coupled with stable isotope dilution (SID) and liquid chromatography (LC) is increasingly used in biological and clinical studies for precise and reproducible quantification of peptides and proteins in complex sample matrices. Robust LC-SID-MRM-MS-based assays that can be replicated across laboratories and ultimately in clinical laboratory settings require standardized protocols to demonstrate that the analysis platforms are performing adequately. We developed a system suitability protocol (SSP), which employs a predigested mixture of six proteins, to facilitate performance evaluation of LC-SID-MRM-MS instrument platforms, configured with nanoflow-LC systems interfaced to triple quadrupole mass spectrometers. The SSP was designed for use with low multiplex analyses as well as high multiplex approaches when software-driven scheduling of data acquisition is required. Performance was assessed by monitoring of a range of chromatographic and mass spectrometric metrics including peak width, chromatographic resolution, peak capacity, and the variability in peak area and analyte retention time (RT) stability. The SSP, which was evaluated in 11 laboratories on a total of 15 different instruments, enabled early diagnoses of LC and MS anomalies that indicated suboptimal LC-MRM-MS performance. The observed range in variation of each of the metrics scrutinized serves to define the criteria for optimized LC-SID-MRM-MS platforms for routine use, with pass/fail criteria for system suitability performance measures defined as peak area coefficient of variation <0.15, peak width coefficient of variation <0.15, standard deviation of RT <0.15 min (9 s), and the RT drift <0.5 min (30 s). The deleterious effect of a marginally performing LC-SID-MRM-MS system on the limit of quantification (LOQ) in targeted quantitative assays illustrates the use and need for a SSP to establish robust and reliable system performance.
Use of a SSP helps

  5. Development and evaluation of a model-based downscatter compensation method for quantitative I-131 SPECT

    PubMed Central

    Song, Na; Du, Yong; He, Bin; Frey, Eric C.

    2011-01-01

    Purpose: The radionuclide 131I has found widespread use in targeted radionuclide therapy (TRT), partly due to the fact that it emits photons that can be imaged to perform treatment planning or posttherapy dose verification as well as beta rays that are suitable for therapy. In both the treatment planning and dose verification applications, it is necessary to estimate the activity distribution in organs or tumors at several time points. In vivo estimates of the 131I activity distribution at each time point can be obtained from quantitative single-photon emission computed tomography (QSPECT) images and organ activity estimates can be obtained either from QSPECT images or quantification of planar projection data. However, in addition to the photon used for imaging, 131I decay results in emission of a number of other higher-energy photons with significant abundances. These higher-energy photons can scatter in the body, collimator, or detector and be counted in the 364 keV photopeak energy window, resulting in reduced image contrast and degraded quantitative accuracy; these photons are referred to as downscatter. The goal of this study was to develop and evaluate a model-based downscatter compensation method specifically designed for the compensation of high-energy photons emitted by 131I and detected in the imaging energy window. Methods: In the evaluation study, we used a Monte Carlo simulation (MCS) code that had previously been validated for other radionuclides. Thus, in preparation for the evaluation study, we first validated the code for 131I imaging simulation by comparison with experimental data. Next, we assessed the accuracy of the downscatter model by comparing downscatter estimates with MCS results. Finally, we combined the downscatter model with iterative reconstruction-based compensation for attenuation (A) and scatter (S) and the full (D) collimator-detector response of the 364 keV photons to form a comprehensive compensation method. We evaluated this

  6. Qualitative and quantitative evaluation of avian demineralized bone matrix in heterotopic beds.

    PubMed

    Reza Sanaei, M; Abu, Jalila; Nazari, Mojgan; A B, Mohd Zuki; Allaudin, Zeenathul N

    2013-11-01

To evaluate the osteogenic potential of avian demineralized bone matrix (DBM) in the context of implant geometry. Experimental. Rock pigeons (n = 24). Tubular and chipped forms of DBM were prepared by acid demineralization of long bones from healthy allogeneic donors and implanted bilaterally into the pectoral region of 24 pigeons. After euthanasia at 1, 4, 6, 8, 10, and 12 weeks, explants were evaluated histologically and compared by means of quantitative (bone area) and semiquantitative measures (scores). All explants had new bone at retrieval with the exception of tubular implants at the end of week 1. The most reactive part in both implants was the interior region between the periosteal and endosteal surfaces, followed by the area at the implant-muscle interface. Quantitative measurements demonstrated a significantly (P = .012) greater percentage of new bone formation induced by tubular implants (80.28 ± 8.94) compared with chip implants (57.64 ± 3.12). There was minimal inflammation. Avian DBM initiates heterotopic bone formation in allogeneic recipients with low grades of immunogenicity. Implant geometry affects this phenomenon, as osteoconduction appeared to augment the magnitude of the effects in larger tubular implants. © Copyright 2013 by The American College of Veterinary Surgeons.

  7. Quantitative analysis of regional myocardial performance in coronary artery disease

    NASA Technical Reports Server (NTRS)

    Stewart, D. K.; Dodge, H. T.; Frimer, M.

    1975-01-01

Findings are presented from a group of subjects with significant coronary artery stenosis and from a group of controls, studied with a quantitative method for the assessment of regional myocardial performance based on the frame-by-frame analysis of biplane left ventricular angiograms. Particular emphasis was placed upon the analysis of wall motion in terms of normalized segment dimensions, timing and velocity of contraction. The results were compared with the method of subjective assessment used clinically.

  8. A quantitative evaluation of cell migration by the phagokinetic track motility assay.

    PubMed

    Nogalski, Maciej T; Chan, Gary C T; Stevenson, Emily V; Collins-McMillen, Donna K; Yurochko, Andrew D

    2012-12-04

    Cellular motility is an important biological process for both unicellular and multicellular organisms. It is essential for movement of unicellular organisms towards a source of nutrients or away from unsuitable conditions, as well as in multicellular organisms for tissue development, immune surveillance and wound healing, just to mention a few roles(1,2,3). Deregulation of this process can lead to serious neurological, cardiovascular and immunological diseases, as well as exacerbated tumor formation and spread(4,5). Molecularly, actin polymerization and receptor recycling have been shown to play important roles in creating cellular extensions (lamellipodia), that drive the forward movement of the cell(6,7,8). However, many biological questions about cell migration remain unanswered. The central role for cellular motility in human health and disease underlines the importance of understanding the specific mechanisms involved in this process and makes accurate methods for evaluating cell motility particularly important. Microscopes are usually used to visualize the movement of cells. However, cells move rather slowly, making the quantitative measurement of cell migration a resource-consuming process requiring expensive cameras and software to create quantitative time-lapsed movies of motile cells. Therefore, the ability to perform a quantitative measurement of cell migration that is cost-effective, non-laborious, and that utilizes common laboratory equipment is a great need for many researchers. The phagokinetic track motility assay utilizes the ability of a moving cell to clear gold particles from its path to create a measurable track on a colloidal gold-coated glass coverslip(9,10). With the use of freely available software, multiple tracks can be evaluated for each treatment to accomplish statistical requirements. 
The assay can be utilized to assess motility of many cell types, such as cancer cells(11,12), fibroblasts(9), neutrophils(13), skeletal muscle cells(14

  9. An anthropomorphic phantom for quantitative evaluation of breast MRI.

    PubMed

    Freed, Melanie; de Zwart, Jacco A; Loud, Jennifer T; El Khouli, Riham H; Myers, Kyle J; Greene, Mark H; Duyn, Jeff H; Badano, Aldo

    2011-02-01

    In this study, the authors aim to develop a physical, tissue-mimicking phantom for quantitative evaluation of breast MRI protocols. The objective of this phantom is to address the need for improved standardization in breast MRI and provide a platform for evaluating the influence of image protocol parameters on lesion detection and discrimination. Quantitative comparisons between patient and phantom image properties are presented. The phantom is constructed using a mixture of lard and egg whites, resulting in a random structure with separate adipose- and glandular-mimicking components. T1 and T2 relaxation times of the lard and egg components of the phantom were estimated at 1.5 T from inversion recovery and spin-echo scans, respectively, using maximum-likelihood methods. The image structure was examined quantitatively by calculating and comparing spatial covariance matrices of phantom and patient images. A static, enhancing lesion was introduced by creating a hollow mold with stereolithography and filling it with a gadolinium-doped water solution. Measured phantom relaxation values fall within 2 standard errors of human values from the literature and are reasonably stable over 9 months of testing. Comparison of the covariance matrices of phantom and patient data demonstrates that the phantom and patient data have similar image structure. Their covariance matrices are the same to within error bars in the anterior-posterior direction and to within about two error bars in the right-left direction. The signal from the phantom's adipose-mimicking material can be suppressed using active fat-suppression protocols. A static, enhancing lesion can also be included with the ability to change morphology and contrast agent concentration. The authors have constructed a phantom and demonstrated its ability to mimic human breast images in terms of key physical properties that are relevant to breast MRI. 
This phantom provides a platform for the optimization and standardization of breast MRI protocols.

  10. Are Teacher Course Evaluations Biased against Faculty That Teach Quantitative Methods Courses?

    ERIC Educational Resources Information Center

    Royal, Kenneth D.; Stockdale, Myrah R.

    2015-01-01

    The present study investigated graduate students' responses to teacher/course evaluations (TCE) to determine if students' responses were inherently biased against faculty who teach quantitative methods courses. Item response theory (IRT) and Differential Item Functioning (DIF) techniques were utilized for data analysis. Results indicate students…

  11. The Nuclear Renaissance — Implications on Quantitative Nondestructive Evaluations

    NASA Astrophysics Data System (ADS)

    Matzie, Regis A.

    2007-03-01

The world demand for energy is growing rapidly, particularly in developing countries that are trying to raise the standard of living for billions of people, many of whom do not even have access to electricity. With this increased energy demand and the high and volatile price of fossil fuels, nuclear energy is experiencing a resurgence. This so-called nuclear renaissance is broad-based, reaching across Asia, the United States, and Europe, as well as selected countries in Africa and South America. Some countries, such as Italy, that had actually turned away from nuclear energy are reconsidering the advisability of this decision. This renaissance provides the opportunity to deploy more advanced reactor designs than those operating today, with improved safety, economy, and operations. In this keynote address, I will briefly present three such advanced reactor designs in whose development Westinghouse is participating. These designs include the advanced passive PWR, AP1000, which recently received design certification from the US Nuclear Regulatory Commission; the Pebble Bed Modular Reactor (PBMR), which is being demonstrated in South Africa; and the International Reactor Innovative and Secure (IRIS), which was showcased in the US Department of Energy's recently announced Global Nuclear Energy Partnership (GNEP) program. The salient features of these designs that impact future requirements on quantitative nondestructive evaluations will be discussed. Features such as reactor vessel materials, operating temperature regimes, and new geometric configurations will be described, and mention will be made of the impact on quantitative nondestructive evaluation (NDE) approaches.

  12. Quantitative reactive modeling and verification.

    PubMed

    Henzinger, Thomas A

Formal verification aims to improve the quality of software by detecting errors before they do harm. At the basis of formal verification is the logical notion of correctness, which purports to capture whether or not a program behaves as desired. We suggest that the boolean partition of software into correct and incorrect programs falls short of the practical need to assess the behavior of software in a more nuanced fashion against multiple criteria. We therefore propose to introduce quantitative fitness measures for programs, specifically for measuring the function, performance, and robustness of reactive programs such as concurrent processes. This article describes the goals of the ERC Advanced Investigator Project QUAREM. The project aims to build and evaluate a theory of quantitative fitness measures for reactive models. Such a theory must strive to obtain quantitative generalizations of the paradigms that have been success stories in qualitative reactive modeling, such as compositionality, property-preserving abstraction and abstraction refinement, model checking, and synthesis. The theory will be evaluated not only in the context of software and hardware engineering, but also in the context of systems biology. In particular, we will use the quantitative reactive models and fitness measures developed in this project for testing hypotheses about the mechanisms behind data from biological experiments.

  13. StreamQRE: Modular Specification and Efficient Evaluation of Quantitative Queries over Streaming Data*

    PubMed Central

    Mamouras, Konstantinos; Raghothaman, Mukund; Alur, Rajeev; Ives, Zachary G.; Khanna, Sanjeev

    2017-01-01

    Real-time decision making in emerging IoT applications typically relies on computing quantitative summaries of large data streams in an efficient and incremental manner. To simplify the task of programming the desired logic, we propose StreamQRE, which provides natural and high-level constructs for processing streaming data. Our language has a novel integration of linguistic constructs from two distinct programming paradigms: streaming extensions of relational query languages and quantitative extensions of regular expressions. The former allows the programmer to employ relational constructs to partition the input data by keys and to integrate data streams from different sources, while the latter can be used to exploit the logical hierarchy in the input stream for modular specifications. We first present the core language with a small set of combinators, formal semantics, and a decidable type system. We then show how to express a number of common patterns with illustrative examples. Our compilation algorithm translates the high-level query into a streaming algorithm with precise complexity bounds on per-item processing time and total memory footprint. We also show how to integrate approximation algorithms into our framework. We report on an implementation in Java, and evaluate it with respect to existing high-performance engines for processing streaming data. Our experimental evaluation shows that (1) StreamQRE allows more natural and succinct specification of queries compared to existing frameworks, (2) the throughput of our implementation is higher than comparable systems (for example, two-to-four times greater than RxJava), and (3) the approximation algorithms supported by our implementation can lead to substantial memory savings. PMID:29151821
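StreamQRE itself is a Java library, and the abstract does not show its API. As a language-neutral sketch of the kind of incremental, key-partitioned quantitative query it targets (constant per-item processing time, memory proportional to the number of keys), here is a hypothetical Python analogue; the names are mine, not StreamQRE's:

```python
from collections import defaultdict

def keyed_running_average(stream, key_fn, value_fn):
    """Incrementally maintain a per-key running average over a data stream.

    Each item costs O(1) work, and memory grows only with the number of
    distinct keys -- the kind of per-item complexity bound the StreamQRE
    compiler guarantees for its combinators (illustration only).
    """
    count = defaultdict(int)
    total = defaultdict(float)
    for item in stream:
        k = key_fn(item)
        count[k] += 1
        total[k] += value_fn(item)
        yield k, total[k] / count[k]

# Usage: average reading per sensor, emitted after every event.
events = [("s1", 10.0), ("s2", 4.0), ("s1", 14.0)]
out = list(keyed_running_average(events,
                                 key_fn=lambda e: e[0],
                                 value_fn=lambda e: e[1]))
# out[-1] == ("s1", 12.0)
```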

  14. Quantitative T2 mapping evaluation for articular cartilage lesions in a rabbit model of anterior cruciate ligament transection osteoarthritis.

    PubMed

    Wei, Zheng-mao; Du, Xiang-ke; Huo, Tian-long; Li, Xu-bin; Quan, Guang-nan; Li, Tian-ran; Cheng, Jin; Zhang, Wei-tao

    2012-03-01

Quantitative T2 mapping has been a widely used method for the evaluation of pathological cartilage properties, and a histological assessment system for osteoarthritis in the rabbit has been published recently. The aim of the study was to investigate the effectiveness of quantitative T2 mapping evaluation for articular cartilage lesions in a rabbit model of anterior cruciate ligament transection (ACLT) osteoarthritis. Twenty New Zealand White (NZW) rabbits were divided equally into an ACLT surgical group and a sham-operated group. The anterior cruciate ligaments of the rabbits in the ACLT group were transected, while in the sham-operated group the joints were closed with the ligament left intact. Magnetic resonance (MR) examinations were performed on a 3.0T MR unit at week 0, week 6, and week 12. T2 values were computed on a GE ADW4.3 workstation. All rabbits were killed at week 13, and the left knees were stained with haematoxylin and eosin. Semiquantitative histological grading was obtained according to the osteoarthritis cartilage histopathology assessment system. Computerized image analysis was performed to quantitate the immunostained type II collagen. The average MR T2 value of whole left knee cartilage in the ACLT surgical group ((29.05±12.01) ms) was significantly higher than that in the sham-operated group ((24.52±7.97) ms) (P=0.024) at week 6. The average T2 value increased to (32.18±12.79) ms in the ACLT group at week 12, but remained near the baseline level ((27.66±8.08) ms) in the sham-operated group (P=0.03). The cartilage lesion level of the left knee in the ACLT group was significantly increased at week 6 (P=0.005) and week 12 (P<0.001). T2 values correlated positively with histological grading scores, but inversely with the optical densities (OD) of type II collagen. This study demonstrated the reliability and practicability of quantitative T2 mapping for cartilage injury in the rabbit ACLT osteoarthritis model.

  15. 48 CFR 436.604 - Performance evaluation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 4 2012-10-01 2012-10-01 false Performance evaluation... Performance evaluation. Preparation of performance evaluation reports. (a) In addition to the requirements of FAR 36.604, performance evaluation reports shall be prepared for indefinite-delivery type contracts...

  16. 48 CFR 436.604 - Performance evaluation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 4 2014-10-01 2014-10-01 false Performance evaluation... Performance evaluation. Preparation of performance evaluation reports. (a) In addition to the requirements of FAR 36.604, performance evaluation reports shall be prepared for indefinite-delivery type contracts...

  17. 48 CFR 436.604 - Performance evaluation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 4 2011-10-01 2011-10-01 false Performance evaluation... Performance evaluation. Preparation of performance evaluation reports. (a) In addition to the requirements of FAR 36.604, performance evaluation reports shall be prepared for indefinite-delivery type contracts...

  18. 48 CFR 436.604 - Performance evaluation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 4 2013-10-01 2013-10-01 false Performance evaluation... Performance evaluation. Preparation of performance evaluation reports. (a) In addition to the requirements of FAR 36.604, performance evaluation reports shall be prepared for indefinite-delivery type contracts...

  19. 48 CFR 436.604 - Performance evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Performance evaluation... Performance evaluation. Preparation of performance evaluation reports. (a) In addition to the requirements of FAR 36.604, performance evaluation reports shall be prepared for indefinite-delivery type contracts...

  20. Quantitative Guidance for Stove Usage and Performance to Achieve Health and Environmental Targets.

    PubMed

    Johnson, Michael A; Chiang, Ranyee A

    2015-08-01

    Displacing the use of polluting and inefficient cookstoves in developing countries is necessary to achieve the potential health and environmental benefits sought through clean cooking solutions. Yet little quantitative context has been provided on how much displacement of traditional technologies is needed to achieve targets for household air pollutant concentrations or fuel savings. This paper provides instructive guidance on the usage of cooking technologies required to achieve health and environmental improvements. We evaluated different scenarios of displacement of traditional stoves with use of higher performing technologies. The air quality and fuel consumption impacts were estimated for these scenarios using a single-zone box model of indoor air quality and ratios of thermal efficiency. Stove performance and usage should be considered together, as lower performing stoves can result in similar or greater benefits than a higher performing stove if the lower performing stove has considerably higher displacement of the baseline stove. Based on the indoor air quality model, there are multiple performance-usage scenarios for achieving modest indoor air quality improvements. To meet World Health Organization guidance levels, however, three-stone fire and basic charcoal stove usage must be nearly eliminated to achieve the particulate matter target (< 1-3 hr/week), and substantially limited to meet the carbon monoxide guideline (< 7-9 hr/week). Moderate health gains may be achieved with various performance-usage scenarios. The greatest benefits are estimated to be achieved by near-complete displacement of traditional stoves with clean technologies, emphasizing the need to shift in the long term to near exclusive use of clean fuels and stoves. The performance-usage scenarios are also provided as a tool to guide technology selection and prioritize behavior change opportunities to maximize impact.
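The single-zone box model behind the paper's air-quality scenarios can be sketched as follows; the formula is the standard steady-state well-mixed-room balance, and the numbers in the usage example are illustrative assumptions, not values from the paper:

```python
def box_model_concentration(emission_mg_per_hr, volume_m3, ach_per_hr,
                            background_mg_m3=0.0):
    """Steady-state single-zone (well-mixed) box model of indoor air quality.

    At steady state, mass emitted per hour equals mass removed by air
    exchange: E = ACH * V * (C - C_bg), so C = C_bg + E / (ACH * V).
    """
    return background_mg_m3 + emission_mg_per_hr / (ach_per_hr * volume_m3)

# Illustrative numbers (not from the paper): a stove emitting 100 mg/h of
# PM2.5 into a 30 m^3 kitchen with 15 air changes per hour.
c = box_model_concentration(100.0, 30.0, 15.0)  # ~0.222 mg/m^3
```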

  1. Quantitative lung perfusion evaluation using Fourier decomposition perfusion MRI.

    PubMed

    Kjørstad, Åsmund; Corteville, Dominique M R; Fischer, Andre; Henzler, Thomas; Schmid-Bindert, Gerald; Zöllner, Frank G; Schad, Lothar R

    2014-08-01

    To quantitatively evaluate lung perfusion using Fourier decomposition perfusion MRI. The Fourier decomposition (FD) method is a noninvasive method for assessing ventilation- and perfusion-related information in the lungs, where the perfusion maps in particular have shown promise for clinical use. However, the perfusion maps are nonquantitative and dimensionless, making follow-ups and direct comparisons between patients difficult. We present an approach to obtain physically meaningful and quantifiable perfusion maps using the FD method. The standard FD perfusion images are quantified by comparing the partially blood-filled pixels in the lung parenchyma with the fully blood-filled pixels in the aorta. The percentage of blood in a pixel is then combined with the temporal information, yielding quantitative blood flow values. The values of 10 healthy volunteers are compared with SEEPAGE measurements which have shown high consistency with dynamic contrast enhanced-MRI. All pulmonary blood flow (PBF) values are within the expected range. The two methods are in good agreement (mean difference = 0.2 mL/min/100 mL, mean absolute difference = 11 mL/min/100 mL, mean PBF-FD = 150 mL/min/100 mL, mean PBF-SEEPAGE = 151 mL/min/100 mL). The Bland-Altman plot shows a good spread of values, indicating no systematic bias between the methods. Quantitative lung perfusion can be obtained using the Fourier Decomposition method combined with a small amount of postprocessing. Copyright © 2013 Wiley Periodicals, Inc.
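One plausible reading of the quantification step, assuming the pixel blood fraction is the ratio of cardiac-frequency amplitudes and that this blood volume is refreshed once per heartbeat (my interpretation of the abstract, not the authors' published formula):

```python
import numpy as np

def quantitative_perfusion_map(perf_amplitude, aorta_amplitude, heart_rate_bpm):
    """Scale a dimensionless Fourier-decomposition perfusion map to mL/min/100 mL.

    Assumption: the cardiac-frequency amplitude of a parenchymal pixel divided
    by the amplitude in the fully blood-filled aorta gives that pixel's
    fractional blood volume; multiplying by the cardiac frequency (fills per
    minute) and by 100 yields flow per 100 mL of tissue.
    """
    blood_fraction = perf_amplitude / aorta_amplitude  # mL blood / mL tissue
    return blood_fraction * heart_rate_bpm * 100.0     # mL/min/100 mL

# Illustrative: a pixel at 2.5% of the aortic amplitude, 60 bpm.
pbf = quantitative_perfusion_map(np.array([0.025]), 1.0, 60.0)
```

The illustrative pixel comes out at 150 mL/min/100 mL, the same order as the mean PBF-FD reported in the abstract.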

  2. Quantitative research on critical thinking and predicting nursing students' NCLEX-RN performance.

    PubMed

    Romeo, Elizabeth M

    2010-07-01

    The concept of critical thinking has been influential in several disciplines. Both education and nursing in general have been attempting to define, teach, and measure this concept for decades. Nurse educators realize that critical thinking is the cornerstone of the objectives and goals for nursing students. The purpose of this article is to review and analyze quantitative research findings relevant to the measurement of critical thinking abilities and skills in undergraduate nursing students and the usefulness of critical thinking as a predictor of National Council Licensure Examination-Registered Nurse (NCLEX-RN) performance. The specific issues that this integrative review examined include assessment and analysis of the theoretical and operational definitions of critical thinking, theoretical frameworks used to guide the studies, instruments used to evaluate critical thinking skills and abilities, and the role of critical thinking as a predictor of NCLEX-RN outcomes. A list of key assumptions related to critical thinking was formulated. The limitations and gaps in the literature were identified, as well as the types of future research needed in this arena. Copyright 2010, SLACK Incorporated.

  3. Quantitative aspects of the clinical performance of transverse tripolar spinal cord stimulation.

    PubMed

    Wesselink, W A; Holsheimer, J; King, G W; Torgerson, N A; Boom, H B

    1999-01-01

    A multicenter study was initiated to evaluate the performance of the transverse tripolar system for spinal cord stimulation. Computer modeling had predicted steering of paresthesia with a dual channel stimulator to be the main benefit of the system. The quantitative analysis presented here includes the results of 484 tests in 30 patients. For each test, paresthesia coverage as a function of voltage levels was stored in a computerized database, including a body map which enabled calculation of the degree of paresthesia coverage of separate body areas, as well as the overlap with the painful areas. The results show that with the transverse tripolar system steering of the paresthesia is possible, although optimal steering requires proper placement of the electrode with respect to the spinal cord. Therefore, with this steering ability as well as a larger therapeutic stimulation window as compared to conventional systems, we expect an increase of the long-term efficacy of spinal cord stimulation. Moreover, in view of the stimulation-induced paresthesia patterns, the system allows selective stimulation of the medial dorsal columns.

  4. Using Performance Tasks to Improve Quantitative Reasoning in an Introductory Mathematics Course

    ERIC Educational Resources Information Center

    Kruse, Gerald; Drews, David

    2013-01-01

    A full-cycle assessment of our efforts to improve quantitative reasoning in an introductory math course is described. Our initial iteration substituted more open-ended performance tasks for the active learning projects than had been used. Using a quasi-experimental design, we compared multiple sections of the same course and found non-significant…

  5. Performance Evaluation of the microPET®—FOCUS-F120

    NASA Astrophysics Data System (ADS)

    Laforest, Richard; Longford, Desmond; Siegel, Stefan; Newport, Danny F.; Yap, Jeffrey

    2007-02-01

The microPET®-Focus-F120 is the latest model of dedicated small animal PET scanners from CTI Concorde Microsystems LLC (Knoxville, TN). This scanner, based on the geometry of the microPET-R4, takes advantage of several modifications to the detectors and coincidence processing electronics that improve the image resolution, sensitivity, and counting rate performance as compared to the predecessor models. This work evaluates the performance of the Focus-F120 system and shows its improvement over the earlier models. In particular, the spatial resolution is shown to improve from 2.32 to 1.69 mm at 5 mm radial distance, and the peak absolute sensitivity increases from 4.1% to 7.1% compared to the microPET-R4. The counting rate capability, expressed as noise equivalent count rate (NEC-1R), was shown to peak at over 800 kcps at 88 MBq for both systems using a mouse phantom. For this small phantom, the NEC counting rate is limited by the data transmission bandwidth between the scanner and the acquisition console. The rat-like phantom showed a peak NEC-1R value of 300 kcps at 140 MBq. Evaluation of image quality and quantitation accuracy was also performed using specially designed phantoms and animal experiments.
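The NEC-1R figure of merit quoted above is, under the usual NEMA convention, the noise-equivalent count rate with randoms counted once; a minimal sketch, with illustrative counting rates rather than measured Focus-F120 values:

```python
def necr_1r(trues, scatters, randoms):
    """NEMA noise-equivalent count rate, NEC-1R variant: T^2 / (T + S + R).

    The '1R' indicates randoms are counted once, as opposed to the 2R variant
    T^2 / (T + S + 2R) used when randoms are estimated from a delayed window.
    """
    total = trues + scatters + randoms
    return trues * trues / total if total > 0 else 0.0

# Illustrative counting rates in kcps (not measured Focus-F120 values):
nec = necr_1r(trues=900.0, scatters=150.0, randoms=300.0)  # 600.0 kcps
```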

  6. Reduced short term memory in congenital adrenal hyperplasia (CAH) and its relationship to spatial and quantitative performance.

    PubMed

    Collaer, Marcia L; Hindmarsh, Peter C; Pasterski, Vickie; Fane, Briony A; Hines, Melissa

    2016-02-01

    Girls and women with classical congenital adrenal hyperplasia (CAH) experience elevated androgens prenatally and show increased male-typical development for certain behaviors. Further, individuals with CAH receive glucocorticoid (GC) treatment postnatally, and this GC treatment could have negative cognitive consequences. We investigated two alternative hypotheses, that: (a) early androgen exposure in females with CAH masculinizes (improves) spatial perception and quantitative abilities at which males typically outperform females, or (b) CAH is associated with performance decrements in these domains, perhaps due to reduced short-term-memory (STM). Adolescent and adult individuals with CAH (40 female and 29 male) were compared with relative controls (29 female and 30 male) on spatial perception and quantitative abilities as well as on Digit Span (DS) to assess STM and on Vocabulary to assess general intelligence. Females with CAH did not perform better (more male-typical) on spatial perception or quantitative abilities than control females, failing to support the hypothesis of cognitive masculinization. Rather, in the sample as a whole individuals with CAH scored lower on spatial perception (p ≤ .009), a quantitative composite (p ≤ .036), and DS (p ≤ .001), despite no differences in general intelligence. Separate analyses of adolescent and adult participants suggested the spatial and quantitative effects might be present only in adult patients with CAH; however, reduced DS performance was found in patients with CAH regardless of age group. Separate regression analyses showed that DS predicted both spatial perception and quantitative performance (both p ≤ .001), when age, sex, and diagnosis status were controlled. Thus, reduced STM in CAH patients versus controls may have more general cognitive consequences, potentially reducing spatial perception and quantitative skills. Although hyponatremia or other aspects of salt-wasting crises or additional hormone

  7. Quantitative analysis of benzodiazepines in vitreous humor by high-performance liquid chromatography

    PubMed Central

    Bazmi, Elham; Behnoush, Behnam; Akhgari, Maryam; Bahmanabadi, Leila

    2016-01-01

Objective: Benzodiazepines are frequently screened drugs in emergency toxicology, drugs-of-abuse testing, and forensic cases. Because benzodiazepine concentrations in biological samples can vary with bleeding, postmortem changes, and redistribution, potentially biasing forensic medicine examinations, selecting a suitable sample and a validated, accurate method is essential for the quantitative analysis of this major drug category. The aim of this study was to develop a valid method for the determination of four benzodiazepines (flurazepam, lorazepam, alprazolam, and diazepam) in vitreous humor using liquid–liquid extraction and high-performance liquid chromatography. Methods: Sample preparation was carried out using liquid–liquid extraction with n-hexane:ethyl acetate and subsequent detection by a high-performance liquid chromatography method coupled to a diode array detector. This method was applied to quantify benzodiazepines in 21 authentic vitreous humor samples. The calibration curve for each drug was linear within the range of 30–3000 ng/mL, with coefficients of correlation higher than 0.99. Results: The limits of detection and quantitation were 30 and 100 ng/mL, respectively, for all four drugs. The method showed appropriate intra- and inter-day precision (coefficient of variation < 10%). Benzodiazepine recoveries were estimated to be over 80%. The method showed high selectivity; no additional peaks due to interfering substances were observed in the samples. Conclusion: The present method was selective, sensitive, accurate, and precise for the quantitative analysis of benzodiazepines in vitreous humor samples in the forensic toxicology laboratory. PMID:27635251
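The abstract does not state how the detection and quantitation limits were derived; one standard option is the ICH-style estimate from the calibration regression, sketched here with hypothetical data:

```python
import numpy as np

def calibration_lod_loq(conc, response):
    """Estimate LOD and LOQ from a linear calibration curve (ICH Q2 approach).

    LOD = 3.3 * sigma / slope and LOQ = 10 * sigma / slope, where sigma is
    the residual standard deviation of the regression. This is one standard
    estimation approach; the paper's actual procedure is not specified.
    """
    conc = np.asarray(conc, dtype=float)
    slope, intercept = np.polyfit(conc, response, 1)
    residuals = np.asarray(response, dtype=float) - (slope * conc + intercept)
    sigma = residuals.std(ddof=2)  # two fitted parameters
    return 3.3 * sigma / slope, 10.0 * sigma / slope

# Hypothetical calibration data (roughly 2x conc, with noise), ng/mL:
conc = [30.0, 300.0, 1000.0, 3000.0]
resp = [62.0, 598.0, 2005.0, 5997.0]
lod, loq = calibration_lod_loq(conc, resp)
```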

  8. Toward Quantitative Small Animal Pinhole SPECT: Assessment of Quantitation Accuracy Prior to Image Compensations

    PubMed Central

    Chen, Chia-Lin; Wang, Yuchuan; Lee, Jason J. S.; Tsui, Benjamin M. W.

    2011-01-01

    Purpose We assessed the quantitation accuracy of small animal pinhole single photon emission computed tomography (SPECT) under the current preclinical settings, where image compensations are not routinely applied. Procedures The effects of several common image-degrading factors and imaging parameters on quantitation accuracy were evaluated using Monte-Carlo simulation methods. Typical preclinical imaging configurations were modeled, and quantitative analyses were performed based on image reconstructions without compensating for attenuation, scatter, and limited system resolution. Results Using mouse-sized phantom studies as examples, attenuation effects alone degraded quantitation accuracy by up to −18% (Tc-99m or In-111) or −41% (I-125). The inclusion of scatter effects changed the above numbers to −12% (Tc-99m or In-111) and −21% (I-125), respectively, indicating the significance of scatter in quantitative I-125 imaging. Region-of-interest (ROI) definitions have greater impacts on regional quantitation accuracy for small sphere sources as compared to attenuation and scatter effects. For the same ROI, SPECT acquisitions using pinhole apertures of different sizes could significantly affect the outcome, whereas the use of different radii-of-rotation yielded negligible differences in quantitation accuracy for the imaging configurations simulated. Conclusions We have systematically quantified the influence of several factors affecting the quantitation accuracy of small animal pinhole SPECT. In order to consistently achieve accurate quantitation within 5% of the truth, comprehensive image compensation methods are needed. PMID:19048346

  9. Evaluation of empirical rule of linearly correlated peptide selection (ERLPS) for proteotypic peptide-based quantitative proteomics.

    PubMed

    Liu, Kehui; Zhang, Jiyang; Fu, Bin; Xie, Hongwei; Wang, Yingchun; Qian, Xiaohong

    2014-07-01

Precise protein quantification is essential in comparative proteomics. Currently, quantification bias is inevitable when using a proteotypic peptide-based quantitative proteomics strategy, owing to differences in peptide measurability. To improve quantification accuracy, we proposed an "empirical rule for linearly correlated peptide selection (ERLPS)" in quantitative proteomics in our previous work. However, a systematic evaluation of the general application of ERLPS in quantitative proteomics under diverse experimental conditions needed to be conducted. In this study, the practical workflow of ERLPS was explicitly illustrated; different experimental variables, such as MS systems, sample complexities, sample preparations, elution gradients, matrix effects, loading amounts, and other factors, were comprehensively investigated to evaluate the applicability, reproducibility, and transferability of ERLPS. The results demonstrated that ERLPS was highly reproducible and transferable within appropriate loading amounts, and that linearly correlated response peptides should be selected for each specific experiment. ERLPS was applied to proteome samples from yeast, mouse, and human, and in quantitative methods from label-free to 18O/16O-labeled and SILAC analysis, and enabled accurate measurements for all proteotypic peptide-based quantitative proteomics over a large dynamic range. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Avoiding full corrections in dynamic SPECT images impacts the performance of SPECT myocardial blood flow quantitation.

    PubMed

    Wang, Lei; Wu, Dayong; Yang, Yong; Chen, Ing-Jou; Lin, Chih-Yuan; Hsu, Bailing; Fang, Wei; Tang, Yi-Da

    2017-08-01

    This study investigated the performance of SPECT myocardial blood flow (MBF) quantitation lacking full physical corrections (All Corr) in dynamic SPECT (DySPECT) images. Eleven healthy normal volunteers (HVT) and twenty-four patients with angiography-documented CAD were assessed. All Corr in 99m Tc-sestamibi DySPECT encompassed noise reduction (NR), resolution recovery (RR), and corrections for scatter (SC) and attenuation (AC), otherwise no correction (NC) or only partial corrections. The performance was evaluated by quality index (R 2 ) and blood-pool spillover index (FBV) in kinetic modeling, and by rest flow (RMBF) and stress flow (SMBF) compared with those of All Corr. In HVT group, NC diminished 2-fold flow uniformity with the most degraded quality (15%-18% reduced R 2 ) and elevated spillover effect (45%-50% increased FBV). Consistently higher RMBF and SMBF were discovered in both groups (HVT 1.54/2.31 higher; CAD 1.60/1.72; all P < .0001). Bland-Altman analysis revealed positive flow bias (HVT 0.9-2.6 mL/min/g; CAD 0.7-1.3) with wide ranges of 95% CI of agreement (HVT NC -1.9-7.1; NR -0.4-4.4; NR + SC -1.1-4.3; NR + SC + RR -0.7-2.5) (CAD NC -1.2-3.8; NR -1.0-2.8; NR + SC -1.0-2.5; NR + SC + RR -1.1-2.6). Uncorrected physical interference in DySPECT images can extensively impact the performance of MBF quantitation. Full physical corrections should be considered to warrant this tool for clinical utilization.
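The abstract names a blood-pool spillover index (FBV) in the kinetic modeling but does not spell out the model; a common form, shown here as an assumption, is a spillover-corrected one-tissue-compartment time-activity curve:

```python
import numpy as np

def model_tac(k1, k2, fbv, blood_tac, dt):
    """Spillover-corrected one-tissue-compartment time-activity curve.

    C_meas(t) = (1 - FBV) * K1 * [C_blood conv exp(-k2 t)] + FBV * C_blood(t).
    The compartment structure is a standard assumption on my part; the
    abstract only names FBV as the blood-pool spillover index.
    """
    blood_tac = np.asarray(blood_tac, dtype=float)
    t = np.arange(len(blood_tac)) * dt
    tissue = k1 * np.convolve(blood_tac, np.exp(-k2 * t))[: len(blood_tac)] * dt
    return (1.0 - fbv) * tissue + fbv * blood_tac

# Illustrative check: with K1 = 0 the measured curve is pure spillover.
blood = np.array([0.0, 10.0, 8.0, 5.0])
tac = model_tac(k1=0.0, k2=0.1, fbv=0.4, blood_tac=blood, dt=1.0)
```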

  11. Performance evaluation for pinhole collimators of small gamma camera by MTF and NNPS analysis: Monte Carlo simulation study

    NASA Astrophysics Data System (ADS)

    Jeon, Hosang; Kim, Hyunduk; Cha, Bo Kyung; Kim, Jong Yul; Cho, Gyuseong; Chung, Yong Hyun; Yun, Jong-Il

    2009-06-01

Presently, gamma camera systems are widely used in various medical diagnostic, industrial, and environmental fields. Hence, a quantitative and effective evaluation of their imaging performance is essential for design and quality assurance. The National Electrical Manufacturers Association (NEMA) standards for gamma camera evaluation are insufficient for sensitive evaluation. In this study, the modulation transfer function (MTF) and normalized noise power spectrum (NNPS) are proposed to evaluate the performance of a small gamma camera with changeable pinhole collimators using Monte Carlo simulation. We simulated the system with a cylinder and a disk source, and seven different lead pinhole collimators with pinhole diameters from 1 to 4 mm. The MTF and NNPS data were obtained from output images and were compared with full-width at half-maximum (FWHM), sensitivity, and differential uniformity. The results showed that MTF and NNPS are effective and novel standards for evaluating the imaging performance of gamma cameras in place of the conventional NEMA standards.
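As a minimal sketch of the NNPS computation described above (single flat-field realization; practical implementations average many detrended ROIs):

```python
import numpy as np

def nnps_2d(noise_image, pixel_size):
    """Normalized noise power spectrum of a uniform (flat-field) noise image.

    NPS(u, v) = |FFT2(I - mean(I))|^2 * (dx * dy) / (Nx * Ny); dividing by
    the squared mean signal gives the normalized NPS.
    """
    img = noise_image - noise_image.mean()
    ny, nx = img.shape
    nps = np.abs(np.fft.fft2(img)) ** 2 * (pixel_size ** 2) / (nx * ny)
    return np.fft.fftshift(nps) / (noise_image.mean() ** 2)

# Tiny worked example; by Parseval's theorem the NNPS integrates to
# pixel_area * N_pixels * variance / mean^2.
img = np.array([[1.0, 2.0], [3.0, 4.0]])
nnps = nnps_2d(img, pixel_size=1.0)
```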

  12. Quantitative T2 evaluation at 3.0T compared to morphological grading of the lumbar intervertebral disc: a standardized evaluation approach in patients with low back pain.

    PubMed

    Stelzeneder, David; Welsch, Goetz Hannes; Kovács, Balázs Krisztián; Goed, Sabine; Paternostro-Sluga, Tatjana; Vlychou, Marianna; Friedrich, Klaus; Mamisch, Tallal Charles; Trattnig, Siegfried

    2012-02-01

    The purpose of our investigation was to compare quantitative T2 relaxation time measurement evaluation of lumbar intervertebral discs with morphological grading in young to middle-aged patients with low back pain, using a standardized region-of-interest evaluation approach. Three hundred thirty lumbar discs from 66 patients (mean age, 39 years) with low back pain were examined on a 3.0T MR unit. Sagittal T1-FSE, sagittal, coronal, and axial T2-weighted FSE for morphological MRI, as well as a multi-echo spin-echo sequence for T2 mapping, were performed. Morphologically, all discs were classified according to Pfirrmann et al. Equally sized rectangular regions of interest (ROIs) for the annulus fibrosus were selected anteriorly and posteriorly in the outermost 20% of the disc. The space between was defined as the nucleus pulposus. To assess the reproducibility of this evaluation, inter- and intraobserver statistics were performed. The Pfirrmann scoring of 330 discs showed the following results: grade I: six discs (1.8%); grade II: 189 (57.3%); grade III: 96 (29.1%); grade IV: 38 (11.5%); and grade V: one (0.3%). The mean T2 values (in milliseconds) for the anterior and the posterior annulus, and the nucleus pulposus for the respective Pfirrmann groups were: I: 57/30/239; II: 44/67/129; III: 42/51/82; and IV: 42/44/56. The nucleus pulposus T2 values showed a stepwise decrease from Pfirrmann grade I to IV. The posterior annulus showed the highest T2 values in Pfirrmann group II, while the anterior annulus showed relatively constant T2 values in all Pfirrmann groups. The inter- and intraobserver analysis yielded intraclass correlation coefficients (ICC) for average measures in a range from 0.82 (anterior annulus) to 0.99 (nucleus). Our standardized method of region-specific quantitative T2 relaxation time evaluation seems to be able to characterize different degrees of disc degeneration quantitatively. The reproducibility of our ROI measurements is sufficient to
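The T2 values from the multi-echo spin-echo data are presumably obtained by a mono-exponential fit per pixel or region; a minimal log-linear sketch (the actual fitting method used on the scanner is not stated in the abstract):

```python
import numpy as np

def fit_t2(te_ms, signal):
    """Mono-exponential T2 fit, S(TE) = S0 * exp(-TE / T2), via log-linearization.

    A minimal sketch of the multi-echo spin-echo evaluation; clinical
    pipelines typically use nonlinear fitting and may discard the first echo.
    """
    slope, log_s0 = np.polyfit(te_ms, np.log(signal), 1)
    return -1.0 / slope, np.exp(log_s0)  # (T2 in ms, S0)

# Synthetic echo train with a known T2 of 80 ms:
te = np.array([10.0, 20.0, 40.0, 80.0])
sig = 1000.0 * np.exp(-te / 80.0)
t2, s0 = fit_t2(te, sig)  # t2 ~= 80.0
```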

  13. Quantitative determination of reserpine, ajmaline, and ajmalicine in Rauvolfia serpentina by reversed-phase high-performance liquid chromatography.

    PubMed

    Srivastava, A; Tripathi, A K; Pandey, R; Verma, R K; Gupta, M M

    2006-10-01

    A sensitive and reproducible reversed-phase high-performance liquid chromatography (HPLC) method using photodiode array detection is established for the simultaneous quantitation of important root alkaloids of Rauvolfia serpentina, namely, reserpine, ajmaline, and ajmalicine. A Chromolith Performance RP-18e column (100 x 4.6-mm i.d.) and a binary gradient mobile phase composed of 0.01 M (pH 3.5) phosphate buffer (NaH2PO4) containing 0.5% glacial acetic acid and acetonitrile are used. Analysis is run at a flow rate of 1.0 mL/min with the detector operated at a wavelength of 254 nm. The calibration curves are linear over a concentration range of 1-20 µg/mL (r = 1.000) for all the alkaloids. The various other aspects of analysis (i.e., peak purity, similarity, recovery, and repeatability) are also validated. For the three alkaloids, the recoveries are found to be 98.27%, 97.03%, and 98.38%, respectively. The limits of detection are 6, 4, and 8 µg/mL for ajmaline, ajmalicine, and reserpine, respectively, and the limits of quantitation are 19, 12, and 23 µg/mL for ajmaline, ajmalicine, and reserpine, respectively. The developed method is simple, reproducible, and easy to operate. It is useful for the evaluation of R. serpentina.
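As a hedged illustration of how limits of detection and quantitation relate to a calibration curve, the sketch below applies the common ICH convention (LOD = 3.3σ/slope, LOQ = 10σ/slope, with σ the residual standard deviation of the regression). The calibration data are synthetic, and the convention is an assumption, not necessarily the authors' exact procedure:

```python
import numpy as np

# Synthetic calibration line: detector response vs. concentration.
conc = np.array([1.0, 5.0, 10.0, 15.0, 20.0])      # concentration (ug/mL)
area = np.array([10.2, 50.1, 99.8, 150.3, 199.9])  # peak area (arbitrary units)

slope, intercept = np.polyfit(conc, area, 1)
residuals = area - (slope * conc + intercept)
sigma = np.sqrt(np.sum(residuals**2) / (len(conc) - 2))  # residual SD (n - 2 dof)

lod = 3.3 * sigma / slope   # limit of detection (ICH convention, assumed)
loq = 10.0 * sigma / slope  # limit of quantitation
print(lod < loq)            # LOQ always sits above LOD by construction
```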

  14. Localization of Broca's Area Using Functional MR Imaging: Quantitative Evaluation of Paradigms.

    PubMed

    Kim, Chi Heon; Kim, Jae-Hun; Chung, Chun Kee; Kim, June Sic; Lee, Jong-Min; Lee, Sang Kun

    2009-04-01

    Functional magnetic resonance imaging (fMRI) is frequently used to localize language areas in a non-invasive manner. Various paradigms for presurgical localization of language areas have been developed, but a systematic quantitative evaluation of the efficiency of those paradigms has not been performed. In the present study, the authors analyzed different language paradigms to determine which paradigm is most efficient in localizing frontal language areas. Five men and five women with no neurological deficits (mean age, 24 years) participated in this study. All volunteers were right-handed. Each subject performed 4 tasks: fixation (Fix), sentence reading (SR), pseudoword reading (PR), and word generation (WG). Fixation and pseudoword reading were used as contrasts. The functional area was defined as the area(s) with a t-value of more than 3.92 in fMRI with the different tasks. To apply an anatomical constraint, we used a brain atlas mapping system, available in AFNI, to define the anatomical frontal language area. The number of voxels in the area of overlap between the anatomical and functional areas was counted individually in the frontal expressive language area. Of the various combinations, the word generation task was most effective in delineating the frontal expressive language area when fixation was used as a contrast (p<0.05). The sensitivity of this test for localizing Broca's area was 81% and its specificity was 70%. Word generation versus fixation could effectively and reliably delineate the frontal language area. A customized effective paradigm should be analyzed in order to evaluate various language functions.
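The sensitivity and specificity figures above come from comparing thresholded activation against the anatomically defined language area. A minimal sketch with invented voxel counts (chosen only to produce values in the 81%/70% style reported) is:

```python
# Hypothetical illustration: "positive" voxels are those with t > 3.92, and the
# anatomical frontal language area supplies the ground truth. All counts are
# invented for illustration, not taken from the study's data.

tp, fn = 81, 19   # activated / silent voxels inside the anatomical language area
tn, fp = 70, 30   # correctly silent / falsely activated voxels outside it

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(sensitivity, specificity)  # 0.81 0.7 with these invented counts
```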

  15. Determining quantitative immunophenotypes and evaluating their implications

    NASA Astrophysics Data System (ADS)

    Redelman, Douglas; Hudig, Dorothy; Berner, Dave; Castell, Linda M.; Roberts, Don; Ensign, Wayne

    2002-05-01

    Quantitative immunophenotypes varied widely among > 100 healthy young males but were maintained at characteristic levels within individuals. The initial results (SPIE Proceedings 4260:226), which examined cell numbers and the quantitative expression of adhesion and lineage-specific molecules, e.g., CD2 and CD14, have now been confirmed and extended to include the quantitative expression of inducible molecules such as HLA-DR and perforin (Pf). Some properties, such as the ratio of T helper (Th) to T cytotoxic/suppressor (Tc/s) cells, are known to be genetically determined. Other properties, e.g., the T:B cell ratio, the amount of CD19 per B cell, etc., behaved similarly and may also be inherited traits. Since some patterns observed in these healthy individuals resembled those found in pathological situations, we tested whether the patterns could be associated with the occurrence of disease. The current study shows that there were associations between quantitative immunophenotypes and the subsequent incidence and severity of disease. For example, individuals with characteristically low levels of HLA-DR or B cells or reduced numbers of Pf+ Tc/s cells had more frequent and/or more severe upper respiratory infections. Quantitative immunophenotypes will be more widely measured if the necessary standards are available and if appropriate procedures are made more accessible.

  16. Measurement and Evaluation of Quantitative Performance of PET/CT Images before a Multicenter Clinical Trial.

    PubMed

    Zhu, Yanjia; Geng, Caizheng; Huang, Jia; Liu, Juzhen; Wu, Ning; Xin, Jun; Xu, Hao; Yu, Lijuan; Geng, Jianhua

    2018-06-13

    To ensure the reliability of the planned multi-center clinical trial, we assessed the consistency and comparability of the quantitative parameters of the eight PET/CT units that will be used in this trial. PET/CT images were scanned using a PET NEMA image quality phantom (Biodex) on the eight units of Discovery PET/CT 690 from GE Healthcare. The scanning parameters were the same as those to be used in the planned trial. The 18F-NaF concentration in the background was 5.3 kBq/ml; the concentrations in the spheres of diameter 37 mm, 22 mm, 17 mm and 10 mm were 8:1 relative to the background, and those in the spheres of diameter 28 mm and 13 mm were 0 kBq/ml. The consistency of the hot sphere recovery coefficient (HRC), cold sphere recovery coefficient (CRC), hot sphere contrast (QH) and cold sphere contrast (QC) among these 8 PET/CT units was analyzed. The variation of the main quantitative parameters of the eight PET/CT systems was within 10%, which is acceptable for the clinical trial.
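A sketch of how hot-sphere recovery coefficients and cross-unit consistency might be computed: the recovery coefficient is the measured activity concentration divided by the true filled concentration, and consistency across scanners is summarized here by the coefficient of variation. The per-scanner measurements are invented, and the CV summary is an assumption, not the trial's stated method:

```python
import numpy as np

true_conc = 8 * 5.3   # hot spheres filled at 8:1 of the 5.3 kBq/ml background

# Invented measured concentrations (kBq/ml) in the 37 mm sphere, one per scanner.
measured = np.array([39.1, 40.5, 38.7, 41.0, 39.9, 40.2, 38.9, 40.8])

hrc = measured / true_conc                       # hot-sphere recovery coefficients
cv = np.std(measured, ddof=1) / np.mean(measured)  # cross-scanner variation
print(round(float(hrc.mean()), 2), round(cv * 100, 1))
```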

  17. [Clinical evaluation of a novel HBsAg quantitative assay].

    PubMed

    Takagi, Kazumi; Tanaka, Yasuhito; Naganuma, Hatsue; Hiramatsu, Kumiko; Iida, Takayasu; Takasaka, Yoshimitsu; Mizokami, Masashi

    2007-07-01

    The clinical implication of hepatitis B surface antigen (HBsAg) concentrations in HBV-infected individuals remains unclear. The aim of this study was to evaluate a novel fully automated chemiluminescence enzyme immunoassay (Sysmex HBsAg quantitative assay) by comparative measurements of reference serum samples versus two independent commercial assays (Lumipulse f or Architect HBsAg QT). Furthermore, its clinical usefulness was assessed for monitoring serum HBsAg levels during antiviral therapy. A dilution test using 5 reference serum samples showed a linear correlation in the range from 0.03 to 2,360 IU/ml. HBsAg was measured in a total of 400 serum samples, and 99.8% had consistent results between Sysmex and Lumipulse f. Additionally, a positive linear correlation was observed between Sysmex and Architect. To compare the Architect and Sysmex assays, both methods were applied to quantify HBsAg in serum samples with different HBV genotypes/subgenotypes, as well as in serum containing HBV vaccine escape mutants (126S, 145R). Correlation between the methods was observed in the results for the escape mutants and for the genotypes common in Japan (A, B, C). During lamivudine therapy, an increase in HBsAg and HBV DNA concentrations preceded the alanine aminotransferase (ALT) elevation associated with the emergence of drug-resistant HBV variants (breakthrough hepatitis). In conclusion, the reliability of the Sysmex HBsAg quantitative assay was confirmed for all HBV genetic variants common in Japan. Monitoring of serum HBsAg concentrations, in addition to HBV DNA quantification, is helpful in evaluating the response to lamivudine treatment and in diagnosing breakthrough hepatitis.

  18. 48 CFR 236.604 - Performance evaluation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 3 2014-10-01 2014-10-01 false Performance evaluation... Architect-Engineer Services 236.604 Performance evaluation. Prepare a separate performance evaluation after... familiar with the architect-engineer contractor's performance. [76 FR 58155, Sept. 20, 2011] ...

  19. 48 CFR 236.604 - Performance evaluation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 3 2013-10-01 2013-10-01 false Performance evaluation... Architect-Engineer Services 236.604 Performance evaluation. Prepare a separate performance evaluation after... familiar with the architect-engineer contractor's performance. [76 FR 58155, Sept. 20, 2011] ...

  20. 48 CFR 236.604 - Performance evaluation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 3 2012-10-01 2012-10-01 false Performance evaluation... Architect-Engineer Services 236.604 Performance evaluation. Prepare a separate performance evaluation after... familiar with the architect-engineer contractor's performance. [76 FR 58155, Sept. 20, 2011] ...

  1. 48 CFR 236.604 - Performance evaluation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 3 2011-10-01 2011-10-01 false Performance evaluation... Architect-Engineer Services 236.604 Performance evaluation. Prepare a separate performance evaluation after... familiar with the architect-engineer contractor's performance. [76 FR 58155, Sept. 20, 2011] ...

  2. 48 CFR 236.604 - Performance evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Performance evaluation... Architect-Engineer Services 236.604 Performance evaluation. (a) Preparation of performance reports. Use DD Form 2631, Performance Evaluation (Architect-Engineer), instead of SF 1421. (2) Prepare a separate...

  3. Quantitative evaluation of the fetal cerebellar vermis using the median view on three-dimensional ultrasound.

    PubMed

    Zhao, Dan; Liu, Wei; Cai, Ailu; Li, Jingyu; Chen, Lizhu; Wang, Bing

    2013-02-01

    The purpose of this study was to investigate the effectiveness of quantitative evaluation of the cerebellar vermis using three-dimensional (3D) ultrasound and to establish a nomogram for Chinese fetal vermis measurements during gestation. Sonographic examinations were performed in normal fetuses and in cases with a suspected diagnosis of vermian rotation. 3D median planes were obtained with both OMNIVIEW and tomographic ultrasound imaging. Measurements of the cerebellar vermis were highly correlated between two-dimensional and 3D median planes. The diameter of the cerebellar vermis follows a growth pattern approximately predicted by a quadratic regression equation. The normal vermis was almost parallel to the brain stem, with an average angle of <2° in normal fetuses. The average angle in the 9 cases of vermian rotation was >5°. Three-dimensional median planes are obtained more easily than two-dimensional ones, and allow accurate measurements of the cerebellar vermis. The 3D approach may enable rapid assessment of fetal cerebral anatomy in standard examinations. Measurements of the cerebellar vermis may provide a quantitative index for the prenatal diagnosis of posterior fossa malformations. © 2012 John Wiley & Sons, Ltd.

  4. Factors Influencing Academic Performance in Quantitative Courses among Undergraduate Business Students of a Public Higher Education Institution

    ERIC Educational Resources Information Center

    Yousef, Darwish Abdulrahamn

    2017-01-01

    Purpose: This paper aims to investigate the impacts of teaching style, English language and communication and assessment methods on the academic performance of undergraduate business students in introductory quantitative courses such as Statistics for Business 1 and 2, Quantitative Methods for Business, Operations and Production Management and…

  5. Performance evaluation model of a pilot food waste collection system in Suzhou City, China.

    PubMed

    Wen, Zongguo; Wang, Yuanjia; De Clercq, Djavan

    2015-05-01

    This paper analyses the food waste collection and transportation (C&T) system in a pilot project in Suzhou by using a novel performance evaluation method. The method employed to conduct this analysis involves a unified performance evaluation index containing qualitative and quantitative indicators applied to data from Suzhou City. Two major inefficiencies were identified: a) low system efficiency due to insufficient processing capacity of commercial food waste facilities; and b) low waste resource utilization due to low efficiency of manual sorting. The performance evaluation indicated that the pilot project collection system's strong points included strong economics, low environmental impact and low social impact. This study also shows that Suzhou's integrated system has developed a comprehensive body of laws and clarified regulatory responsibilities for each of the various government departments to solve the problems of commercial food waste management. Based on Suzhou's experience, perspectives and lessons can be drawn for other cities and areas where food waste management systems are in the planning stage, or are encountering operational problems. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. Faculty performance evaluation in accredited U.S. public health graduate schools and programs: a national study.

    PubMed

    Gimbel, Ronald W; Cruess, David F; Schor, Kenneth; Hooper, Tomoko I; Barbour, Galen L

    2008-10-01

    To provide baseline data on the evaluation of faculty performance in U.S. schools and programs of public health, the authors administered an anonymous Internet-based questionnaire using PHP Surveyor. The invited sample consisted of individuals listed in the Council on Education for Public Health (CEPH) Directory of Accredited Schools and Programs of Public Health. The authors explored performance measures in teaching, research, and service, and assessed how faculty performance measures are used. A total of 64 individuals (60.4%) responded to the survey, with 26 (40.6%) reporting accreditation/reaccreditation by CEPH within the preceding 24 months. Although all schools and programs employ faculty performance evaluations, a significant difference exists between schools and programs in the use of results for merit pay increases and mentoring purposes. Thirty-one (48.4%) of the organizations published minimum performance expectations. Fifty-nine (92.2%) of the respondents counted the number of publications, but only 22 (34.4%) formally evaluated their quality. Sixty-two (96.9%) evaluated teaching through student course evaluations, and only 29 (45.3%) engaged in peer assessment. Although aggregate results of teaching evaluations are available to faculty and administrators, this information is often unavailable to students and the public. Most schools and programs documented faculty service activities qualitatively but neither assessed them quantitatively nor evaluated their impact. This study provides insight into how schools and programs of public health evaluate faculty performance. Results suggest that although schools and programs do evaluate faculty performance on a basic level, many do not devote substantial attention to this process.

  7. 48 CFR 2936.604 - Performance evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 7 2010-10-01 2010-10-01 false Performance evaluation... Performance evaluation. (a) The HCA must establish procedures to evaluate architect-engineer contractor performance as required in FAR 36.604. Normally, the performance report must be prepared by the contracting...

  8. 48 CFR 2936.604 - Performance evaluation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 7 2011-10-01 2011-10-01 false Performance evaluation... Performance evaluation. (a) The HCA must establish procedures to evaluate architect-engineer contractor performance as required in FAR 36.604. Normally, the performance report must be prepared by the contracting...

  9. 48 CFR 2936.604 - Performance evaluation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 7 2014-10-01 2014-10-01 false Performance evaluation... Performance evaluation. (a) The HCA must establish procedures to evaluate architect-engineer contractor performance as required in FAR 36.604. Normally, the performance report must be prepared by the contracting...

  10. 48 CFR 2936.604 - Performance evaluation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 7 2012-10-01 2012-10-01 false Performance evaluation... Performance evaluation. (a) The HCA must establish procedures to evaluate architect-engineer contractor performance as required in FAR 36.604. Normally, the performance report must be prepared by the contracting...

  11. 48 CFR 2936.604 - Performance evaluation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 7 2013-10-01 2012-10-01 true Performance evaluation... Performance evaluation. (a) The HCA must establish procedures to evaluate architect-engineer contractor performance as required in FAR 36.604. Normally, the performance report must be prepared by the contracting...

  12. Susceptibility constants of airborne bacteria to dielectric barrier discharge for antibacterial performance evaluation.

    PubMed

    Park, Chul Woo; Hwang, Jungho

    2013-01-15

    Dielectric barrier discharge (DBD) is a promising method for removing contaminant bioaerosols. The collection efficiency of a DBD reactor is an important factor in determining the reactor's removal efficiency. Without considering collection, simply defining the inactivation efficiency based on colony counts with the DBD switched on and off may lead to overestimation of the inactivation efficiency of the DBD reactor. One-pass removal tests of bioaerosols were carried out to deduce the inactivation efficiency of the DBD reactor using both aerosol- and colony-counting methods. Our DBD reactor showed good performance in removing test bioaerosols at an applied voltage of 7.5 kV and a residence time of 0.24 s, with η(CFU), η(Number), and η(Inactivation) values of 94%, 64%, and 83%, respectively. Additionally, we introduce the susceptibility constant of bioaerosols to DBD as a quantitative parameter for the performance evaluation of a DBD reactor. The modified susceptibility constant, which is the ratio of the susceptibility constant to the volume of the plasma reactor, has been successfully demonstrated for the performance evaluation of different sized DBD reactors under different DBD operating conditions. Our methodology can be used for design optimization, performance evaluation, and prediction of the power consumption of DBD for industrial applications. Copyright © 2012 Elsevier B.V. All rights reserved.
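One consistent reading of the three reported efficiencies is that colony-count removal combines physical collection with plasma inactivation of the particles that penetrate the reactor. Under that assumed decomposition (an interpretation, not a formula quoted from the paper), the abstract's numbers fit together as follows:

```python
# eta_CFU: removal efficiency by colony counting (collection + inactivation).
# eta_Number: physical collection efficiency by aerosol counting.
# The inactivation efficiency of the penetrating bioaerosols then follows from
# the fraction of penetrating particles that remain culturable.

eta_cfu = 0.94
eta_number = 0.64

eta_inactivation = 1.0 - (1.0 - eta_cfu) / (1.0 - eta_number)
print(round(eta_inactivation, 2))  # 0.83, matching the reported 83%
```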

  13. Performance index: A method for quantitative evaluation of filters used in clinical SPECT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Contino, J.; Touya, J.J.; Corbus, H.F.

    1984-01-01

    The purpose of this study was to design a method for optimal filter selection during the reconstruction of clinical SPECT images. Hamming, Bartlett, Parzen and Butterworth filters were evaluated at different cutoff frequencies when applied to reconstruction of the Jaszczak phantom and liver SPECTs. The phantom, filled with 6 mCi of Tc-99m, was imaged following 4 different protocols which varied in matrix size (128 x 128 or 64 x 64) and in number of steps (128 or 64). Total imaging time in the 4 protocols was 24 minutes. A total of 160 reconstructions were analyzed. Liver SPECTs from 2 patients with small metastatic lesions from colon Ca were similarly studied. An ECT Performance Index (ECT PI) was defined as the product of the contrast efficiency function (ECT C) and uniformity (ECT U). ECT C as a function of the radius was measured following Rollo's approach. ECT U was measured as the ratio between minimum and maximum counts per pixel in a known uniform region. ECT PI was computed on a slice through the void-spheres region of the phantom. In liver SPECTs the ECT U was measured over the spleen. The most favorable ECT PI (0.35, radius 7.9 mm) was obtained with images in 128 x 128 matrices, 128 steps, processed with a Butterworth filter of cutoff frequency 0.19, filter order 4. When images were acquired in 64 x 64 matrices using 64 steps, the ECT PI was lower and influenced to a lesser degree by both choice of filter and cutoff frequency. Results in the two liver SPECT examinations were parallel to those found in the phantom studies, confirming the clinical usefulness of the ECT PI in the evaluation of filters for reconstruction of SPECT images.
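A minimal sketch of the performance index as defined in the abstract, ECT PI = ECT C × ECT U, with ECT U taken as the minimum-to-maximum count ratio in a uniform region. The pixel counts and the contrast efficiency value are invented for illustration:

```python
import numpy as np

# Counts per pixel in a known uniform region (invented values).
uniform_region = np.array([[98, 102, 100],
                           [101, 99, 103],
                           [100, 97, 102]])

ect_u = uniform_region.min() / uniform_region.max()  # uniformity (min/max ratio)
ect_c = 0.37                                         # assumed contrast efficiency
ect_pi = ect_c * ect_u                               # performance index
print(round(float(ect_pi), 3))
```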

  14. How to quantitatively evaluate safety of driver behavior upon accident? A biomechanical methodology

    PubMed Central

    Zhang, Wen; Cao, Jieer

    2017-01-01

    How to evaluate driver spontaneous reactions in various collision patterns in a quantitative way is one of the most important topics in vehicle safety. Firstly, this paper constructs representative numerical crash scenarios described by impact velocity, impact angle and contact position based on a finite element (FE) computation platform. Secondly, a driver cabin model is extracted and described in the well-validated multi-rigid-body (MB) model to compute the value of a weighted injury criterion to quantitatively assess drivers’ overall injury under certain circumstances. Furthermore, based on the coupling of FE and MB, parametric studies on various crash scenarios are conducted. It is revealed that the WIC (Weighted Injury Criteria) value variation law at high impact velocities is quite distinct compared with that at low impact velocities. In addition, the coupling effect can be elucidated by the fact that the difference in WIC value among the three impact velocities at smaller impact angles tends to be distinctly higher than that at larger impact angles. Meanwhile, high impact velocity also increases the sensitivity of WIC under different collision positions and impact angles. Results may provide a new methodology to quantitatively evaluate driving behaviors and serve as a significant guiding step towards collision avoidance for autonomous driving vehicles. PMID:29240789

  15. How to quantitatively evaluate safety of driver behavior upon accident? A biomechanical methodology.

    PubMed

    Zhang, Wen; Cao, Jieer; Xu, Jun

    2017-01-01

    How to evaluate driver spontaneous reactions in various collision patterns in a quantitative way is one of the most important topics in vehicle safety. Firstly, this paper constructs representative numerical crash scenarios described by impact velocity, impact angle and contact position based on a finite element (FE) computation platform. Secondly, a driver cabin model is extracted and described in the well-validated multi-rigid-body (MB) model to compute the value of a weighted injury criterion to quantitatively assess drivers' overall injury under certain circumstances. Furthermore, based on the coupling of FE and MB, parametric studies on various crash scenarios are conducted. It is revealed that the WIC (Weighted Injury Criteria) value variation law at high impact velocities is quite distinct compared with that at low impact velocities. In addition, the coupling effect can be elucidated by the fact that the difference in WIC value among the three impact velocities at smaller impact angles tends to be distinctly higher than that at larger impact angles. Meanwhile, high impact velocity also increases the sensitivity of WIC under different collision positions and impact angles. Results may provide a new methodology to quantitatively evaluate driving behaviors and serve as a significant guiding step towards collision avoidance for autonomous driving vehicles.

  16. Application of robotic manipulability indices to evaluate thumb performance during smartphone touch operations.

    PubMed

    Endo, Hiroshi

    2015-01-01

    This study examined whether manipulability during smartphone thumb-based touch operations could be predicted by the following robotic manipulability indices: the volume and direction of the 'manipulability ellipsoid' (MEd), both of which evaluate the influence of kinematics on manipulability. Limits of the thumb's range of motion were considered in the MEd to improve predictability. Thumb postures at 25 key target locations were measured in 16 subjects. Though there was no correlation between subjective evaluation and the volume of the MEd, high correlation was obtained when motion range limits were taken into account. These limits changed the size of the MEd and improved the accuracy of the manipulability evaluation. Movement directions associated with higher performance could also be predicted. In conclusion, robotic manipulability indices with motion range limits were considered to be useful measures for quantitatively evaluating human hand operations.
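For readers unfamiliar with manipulability ellipsoids, the standard robotics definitions can be sketched as follows (these are the textbook quantities, not necessarily the exact indices used in this study, and the thumb Jacobian is invented for illustration):

```python
import numpy as np

# For a Jacobian J mapping joint rates to fingertip velocity, the velocity
# manipulability ellipsoid has axis directions given by the left singular
# vectors of J and axis lengths given by the singular values; Yoshikawa's
# manipulability measure is w = sqrt(det(J @ J.T)).
# The 2x3 planar-thumb Jacobian below is invented.

J = np.array([[ 0.04, 0.03, 0.01],
              [-0.02, 0.05, 0.02]])          # fingertip velocity per unit joint rate

w = float(np.sqrt(np.linalg.det(J @ J.T)))   # ellipsoid "volume" measure
u, s, _ = np.linalg.svd(J)
major_dir = u[:, 0]                          # easiest movement direction (task space)

# w equals the product of the singular values (the ellipsoid semi-axis lengths).
print(round(w, 6), bool(np.allclose(w, np.prod(s))))
```

Range-of-motion limits, as the abstract notes, would additionally clip or reshape this ellipsoid before it predicts subjective manipulability.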

  17. Quantitative evaluation of translational medicine based on scientometric analysis and information extraction.

    PubMed

    Zhang, Yin; Diao, Tianxi; Wang, Lei

    2014-12-01

    Designed to advance the two-way translational process between basic research and clinical practice, translational medicine has become one of the most important areas in biomedicine. The quantitative evaluation of translational medicine is valuable for decision making in global translational medical research and funding. Using scientometric analysis and information extraction techniques, this study quantitatively analyzed the scientific articles on translational medicine. The results showed that translational medicine had significant scientific output and impact, specific core fields and institutes, and outstanding academic status and benefit. Although not considered in this study, patent data are another important indicator that should be integrated in future research. © 2014 Wiley Periodicals, Inc.

  18. [Reconsidering evaluation criteria regarding health care research: toward an integrative framework of quantitative and qualitative criteria].

    PubMed

    Miyata, Hiroaki; Kai, Ichiro

    2006-05-01

    Debate about the relationship between quantitative and qualitative paradigms is often muddled and confused, and the clutter of terms and arguments has resulted in the concepts becoming obscure and unrecognizable. It is therefore very important to reconsider evaluation criteria regarding rigor in social science. As Lincoln & Guba have already compared quantitative paradigms (validity, reliability, neutrality, generalizability) with qualitative paradigms (credibility, dependability, confirmability, transferability), we discuss the use of these evaluation criteria from a pragmatic perspective. Validity/Credibility concerns the observational framework, while Reliability/Dependability refers to the range of stability in observations, Neutrality/Confirmability reflects influences between observers and subjects, and Generalizability/Transferability differ epistemologically in the way findings are applied. Qualitative studies, however, do not always choose the qualitative paradigms. If some degree of stability can be assumed, it is better to use the quantitative paradigm (reliability). Moreover, as a quantitative study cannot always guarantee a perfect observational framework with stability in all phases of observation, it is useful to use qualitative paradigms to enhance the rigor of the study.

  19. Quantitative light-induced fluorescence technology for quantitative evaluation of tooth wear

    NASA Astrophysics Data System (ADS)

    Kim, Sang-Kyeom; Lee, Hyung-Suk; Park, Seok-Woo; Lee, Eun-Song; de Josselin de Jong, Elbert; Jung, Hoi-In; Kim, Baek-Il

    2017-12-01

    Various technologies used to objectively determine enamel thickness or dentin exposure have been suggested. However, most methods have clinical limitations. This study was conducted to confirm the potential of quantitative light-induced fluorescence (QLF), using the autofluorescence intensity of the occlusal surfaces of worn teeth according to enamel grinding depth in vitro. Sixteen permanent premolars were used. Each tooth was gradationally ground down at the occlusal surface in the apical direction. QLF-digital and swept-source optical coherence tomography images were acquired at each grinding depth (in steps of 100 μm). All QLF images were converted to 8-bit grayscale images to calculate the fluorescence intensity. The maximum brightness (MB) values of the same sound regions in the grayscale images were calculated before grinding and at each phase of the grinding process. Finally, 13 samples were evaluated. MB increased over the grinding depth range with a strong correlation (r=0.994, P<0.001). In conclusion, the fluorescence intensity of the teeth and the grinding depth were strongly correlated in the QLF images. Therefore, QLF technology may be a useful noninvasive tool to monitor the progression of tooth wear and to conveniently estimate enamel thickness.

  20. Metrology Standards for Quantitative Imaging Biomarkers

    PubMed Central

    Obuchowski, Nancy A.; Kessler, Larry G.; Raunig, David L.; Gatsonis, Constantine; Huang, Erich P.; Kondratovich, Marina; McShane, Lisa M.; Reeves, Anthony P.; Barboriak, Daniel P.; Guimaraes, Alexander R.; Wahl, Richard L.

    2015-01-01

    Although investigators in the imaging community have been active in developing and evaluating quantitative imaging biomarkers (QIBs), the development and implementation of QIBs have been hampered by the inconsistent or incorrect use of terminology or methods for technical performance and statistical concepts. Technical performance is an assessment of how a test performs in reference objects or subjects under controlled conditions. In this article, some of the relevant statistical concepts are reviewed, methods that can be used for evaluating and comparing QIBs are described, and some of the technical performance issues related to imaging biomarkers are discussed. More consistent and correct use of terminology and study design principles will improve clinical research, advance regulatory science, and foster better care for patients who undergo imaging studies. © RSNA, 2015 PMID:26267831

  1. Performance evaluation and clinical applications of 3D plenoptic cameras

    NASA Astrophysics Data System (ADS)

    Decker, Ryan; Shademan, Azad; Opfermann, Justin; Leonard, Simon; Kim, Peter C. W.; Krieger, Axel

    2015-06-01

    The observation and 3D quantification of arbitrary scenes using optical imaging systems is challenging, but increasingly necessary in many fields. This paper provides a technical basis for the application of plenoptic cameras in medical and medical robotics applications, and rigorously evaluates camera integration and performance in the clinical setting. It discusses plenoptic camera calibration and setup, assesses plenoptic imaging in a clinically relevant context, and in the context of other quantitative imaging technologies. We report the methods used for camera calibration, precision and accuracy results in an ideal and simulated surgical setting. Afterwards, we report performance during a surgical task. Test results showed the average precision of the plenoptic camera to be 0.90 mm, increasing to 1.37 mm for tissue across the calibrated FOV. The ideal accuracy was 1.14 mm. The camera showed submillimeter error during a simulated surgical task.

  2. Data Driven Performance Evaluation of Wireless Sensor Networks

    PubMed Central

    Frery, Alejandro C.; Ramos, Heitor S.; Alencar-Neto, José; Nakamura, Eduardo; Loureiro, Antonio A. F.

    2010-01-01

    Wireless Sensor Networks are presented as devices for signal sampling and reconstruction. Within this framework, the qualitative and quantitative influence of (i) signal granularity, (ii) spatial distribution of sensors, (iii) sensors clustering, and (iv) signal reconstruction procedure are assessed. This is done by defining an error metric and performing a Monte Carlo experiment. It is shown that all these factors have significant impact on the quality of the reconstructed signal. The extent of such impact is quantitatively assessed. PMID:22294920
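The abstract's error metric and reconstruction procedure are not specified here, so the following sketch assumes a simple RMSE metric and nearest-sensor reconstruction to illustrate the Monte Carlo experiment over random sensor placements:

```python
import math
import random

def rmse(original, reconstructed):
    """Root-mean-square error between the field and its reconstruction."""
    return math.sqrt(sum((o - r) ** 2 for o, r in zip(original, reconstructed))
                     / len(original))

def monte_carlo_error(signal, n_sensors, trials, seed=0):
    """Average RMSE over random sensor placements, nearest-sample reconstruction."""
    rng = random.Random(seed)
    n = len(signal)
    total = 0.0
    for _ in range(trials):
        sensors = rng.sample(range(n), n_sensors)
        # Reconstruct each position from the nearest sampled sensor reading
        recon = [signal[min(sensors, key=lambda s: abs(s - i))] for i in range(n)]
        total += rmse(signal, recon)
    return total / trials

field = [math.sin(2 * math.pi * i / 64) for i in range(64)]
e_sparse = monte_carlo_error(field, 4, 20)   # few sensors: coarse reconstruction
e_dense = monte_carlo_error(field, 32, 20)   # dense deployment: lower error
```

As in the study, the quantitative impact of a deployment factor (here, sensor density) shows up directly in the averaged error.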

  3. Mechanical Model Analysis for Quantitative Evaluation of Liver Fibrosis Based on Ultrasound Tissue Elasticity Imaging

    NASA Astrophysics Data System (ADS)

    Shiina, Tsuyoshi; Maki, Tomonori; Yamakawa, Makoto; Mitake, Tsuyoshi; Kudo, Masatoshi; Fujimoto, Kenji

    2012-07-01

    Precise evaluation of the stage of chronic hepatitis C with respect to fibrosis has become an important issue to prevent the occurrence of cirrhosis and to initiate appropriate therapeutic intervention such as viral eradication using interferon. Ultrasound tissue elasticity imaging, i.e., elastography can visualize tissue hardness/softness, and its clinical usefulness has been studied to detect and evaluate tumors. We have recently reported that the texture of elasticity image changes as fibrosis progresses. To evaluate fibrosis progression quantitatively on the basis of ultrasound tissue elasticity imaging, we introduced a mechanical model of fibrosis progression and simulated the process by which hepatic fibrosis affects elasticity images and compared the results with those clinical data analysis. As a result, it was confirmed that even in diffuse diseases like chronic hepatitis, the patterns of elasticity images are related to fibrous structural changes caused by hepatic disease and can be used to derive features for quantitative evaluation of fibrosis stage.

  4. Using qualitative and quantitative methods to evaluate small-scale disease management pilot programs.

    PubMed

    Esposito, Dominick; Taylor, Erin Fries; Gold, Marsha

    2009-02-01

    Interest in disease management programs continues to grow as managed care plans, the federal and state governments, and other organizations consider such efforts as a means to improve health care quality and reduce costs. These efforts vary in size, scope, and target population. While large-scale programs provide the means to measure impacts, evaluation of smaller interventions remains valuable as they often represent the early planning stages of larger initiatives. This paper describes a multi-method approach for evaluating small interventions that sought to improve the quality of care for Medicaid beneficiaries with multiple chronic conditions. Our approach relied on quantitative and qualitative methods to develop a complete understanding of each intervention. Quantitative data in the form of both process measures, such as case manager contacts, and outcome measures, such as hospital use, were reported and analyzed. Qualitative information was collected through interviews and the development of logic models to document the flow of intervention activities and how they were intended to affect outcomes. The logic models helped us to understand the underlying reasons for the success or lack thereof of each intervention. The analysis provides useful information on several fronts. First, qualitative data provided valuable information about implementation. Second, process measures helped determine whether implementation occurred as anticipated. Third, outcome measures indicated the potential for favorable results later, possibly suggesting further study. Finally, the evaluation of qualitative and quantitative data in combination helped us assess the potential promise of each intervention and identify common themes and challenges across all interventions.

  5. Quantitative Evaluation of PET Respiratory Motion Correction Using MR Derived Simulated Data

    NASA Astrophysics Data System (ADS)

    Polycarpou, Irene; Tsoumpas, Charalampos; King, Andrew P.; Marsden, Paul K.

    2015-12-01

The impact of respiratory motion correction on quantitative accuracy in PET imaging is evaluated using simulations for variable patient specific characteristics such as tumor uptake and respiratory pattern. Respiratory patterns from real patients were acquired, with long quiescent motion periods (type-1) as commonly observed in most patients and with long-term amplitude variability as is expected under conditions of difficult breathing (type-2). The respiratory patterns were combined with an MR-derived motion model to simulate real-time 4-D PET-MR datasets. Lung and liver tumors were simulated with diameters of 10 and 12 mm and tumor-to-background ratio ranging from 3:1 to 6:1. Projection data for 6- and 3-mm PET resolution were generated for the Philips Gemini scanner and reconstructed without and with motion correction using OSEM (2 iterations, 23 subsets). Motion correction was incorporated into the reconstruction process based on MR-derived motion fields. Tumor peak standardized uptake values (SUVpeak) were calculated from 30 noise realizations. Respiratory motion correction improves the quantitative performance, with the greatest benefit observed for patients of breathing type-2. For breathing type-1, after applying motion correction the SUVpeak of a 12-mm liver tumor with 6:1 contrast was increased by 46% for a current PET resolution (i.e., 6 mm) and by 47% for a higher PET resolution (i.e., 3 mm). Furthermore, the results of this study indicate that the benefit of higher scanner resolution is small unless motion correction is applied. In particular, for a large liver tumor (12 mm) with low contrast (3:1), after motion correction the SUVpeak was increased by 34% for 6-mm resolution and by 50% for a higher PET resolution (i.e., 3-mm resolution). This investigation indicates that respiratory motion correction has a high impact on tumor quantitative accuracy and that motion correction is important in order to benefit from the increased resolution of future PET scanners.

  6. Quantitative evaluation of stone fragments in extracorporeal shock wave lithotripsy using a time reversal operator

    NASA Astrophysics Data System (ADS)

    Wang, Jen-Chieh; Zhou, Yufeng

    2017-03-01

Extracorporeal shock wave lithotripsy (ESWL) has been used widely in the noninvasive treatment of kidney calculi. Fine fragments less than 2 mm in size can be discharged by urination, which determines the success of ESWL. Although ultrasonic and fluoroscopic imaging are used to localize the calculi, it is challenging to monitor the progress of stone comminution, especially at the late stage of ESWL when fragments spread out as a cloud. The lack of real-time, quantitative evaluation makes this procedure semi-blind, resulting in either under- or over-treatment relative to the number of pulses permitted by the FDA. The time reversal operator (TRO) method has the ability to detect point-like scatterers, and the number of non-zero eigenvalues of the TRO is equal to the number of scatterers. In this study, the validity of the TRO method for identifying stones was illustrated by both numerical and experimental results for one or two stones of various sizes and locations. Furthermore, the parameters affecting the performance of the TRO method have also been investigated. Overall, the TRO method is effective in identifying the fragments in a stone cluster in real time. Further development of a detection system and evaluation of its performance both in vitro and in vivo during ESWL are necessary for clinical application.
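The TRO property the abstract relies on, that the number of significant eigenvalues equals the number of point scatterers, can be illustrated with a toy multistatic response matrix. The linear array geometry and the real-valued Green's-function surrogate below are illustrative assumptions, not the study's actual setup:

```python
import math

def response_matrix(transducers, scatterers, k=2.0):
    """Multistatic response K[i][j] = sum_s g(i,s) * g(j,s) for point scatterers
    (Born approximation; g = cos(k*d)/d is a real-valued Green's-function stand-in)."""
    def g(t, s):
        d = math.hypot(t[0] - s[0], t[1] - s[1])
        return math.cos(k * d) / d
    n = len(transducers)
    return [[sum(g(transducers[i], s) * g(transducers[j], s) for s in scatterers)
             for j in range(n)] for i in range(n)]

def numerical_rank(matrix, tol=1e-9):
    """Numerical rank via Gaussian elimination with partial pivoting."""
    a = [row[:] for row in matrix]
    n, rank = len(a), 0
    for col in range(n):
        pivot = max(range(rank, n), key=lambda r: abs(a[r][col]))
        if abs(a[pivot][col]) < tol:
            continue
        a[rank], a[pivot] = a[pivot], a[rank]
        for r in range(rank + 1, n):
            f = a[r][col] / a[rank][col]
            for c in range(col, n):
                a[r][c] -= f * a[rank][c]
        rank += 1
    return rank

array = [(float(i), 0.0) for i in range(6)]   # 6-element linear array
stones = [(2.0, 5.0), (3.5, 7.0)]             # two point-like fragments
K = response_matrix(array, stones)
```

For well-separated point scatterers, the rank of K (the count of non-negligible eigenvalues) matches the number of fragments, which is the counting principle the study exploits.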

  7. Quantitative methods for evaluating the efficacy of thalamic deep brain stimulation in patients with essential tremor.

    PubMed

    Wastensson, Gunilla; Holmberg, Björn; Johnels, Bo; Barregard, Lars

    2013-01-01

Deep brain stimulation (DBS) of the thalamus is a safe and efficient method for treatment of disabling tremor in patients with essential tremor (ET). However, successful tremor suppression after surgery requires careful selection of stimulus parameters. Our aim was to examine the possible use of certain quantitative methods for evaluating the efficacy of thalamic DBS in ET patients in clinical practice, and to compare these methods with traditional clinical tests. We examined 22 patients using the Essential Tremor Rating Scale (ETRS) and quantitative assessment of tremor with the stimulator both activated and deactivated. We used an accelerometer (CATSYS tremor Pen) for quantitative measurement of postural tremor, and a eurythmokinesimeter (EKM) to evaluate kinetic tremor in a rapid pointing task. The efficacy of DBS on tremor suppression was prominent irrespective of the method used. The agreement between clinical rating of postural tremor and tremor intensity as measured by the CATSYS tremor pen was relatively high (rs = 0.74). The agreement between kinetic tremor as assessed by the ETRS and the main outcome variable from the EKM test was low (rs = 0.34). The lack of agreement indicates that the EKM test is not comparable with the clinical test. Quantitative methods, such as the CATSYS tremor pen, could be a useful complement to clinical tremor assessment in evaluating the efficacy of DBS in clinical practice. Future studies should evaluate the precision of these methods and the long-term impact on tremor suppression, activities of daily living (ADL) function and quality of life.
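The agreement values rs quoted above are Spearman rank correlations. A minimal pure-Python implementation (Pearson correlation applied to average ranks, with tie handling) shows how such agreement between a clinical rating and an instrument reading is computed:

```python
def _ranks(values):
    """Average ranks (ties share the mean of their 1-based positions)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rs(x, y):
    """Spearman rank correlation: Pearson correlation of the rank vectors."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)
```

An rs near 1 (as for the CATSYS pen, 0.74) indicates the two measures rank patients similarly; a low rs (as for the EKM, 0.34) indicates they do not.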

  8. Evaluation of Effective Factors on the Clinical Performance of General Surgeons in Tehran University of Medical Science, 2015.

    PubMed

    Farzianpour, Fereshteh; Mohamadi, Efat; Najafpour, Zhila; Yousefinezhadi, Taraneh; Forootan, Sara; Foroushani, Abbas Rahimi

    2016-09-01

The existence of doctors with high performance is one of the necessary conditions for providing high-quality services. There are different motivations which could affect their performance, so recognizing the factors that affect the performance of doctors, an effective force in health care centers, is necessary. The aim of this article was to evaluate the factors that influence the clinical performance of general surgeons at Tehran University of Medical Sciences in 2015. This is a cross-sectional qualitative-quantitative study. The research was conducted in three phases. Phase I: library studies and database searches to collect data; Phase II: localization of the factors detected in Phase I using the Delphi technique; Phase III: prioritization of the factors affecting the performance of doctors using qualitative interviews. Twelve articles were analyzed from 300 abstracts during the evaluation process. The assessment identified 23 factors, which were sent to surgeons and their assistants to obtain their opinions. Quantitative analysis of the findings showed that "work qualification" (86.1%) and "managers and supervisors style" (50%) had, respectively, the most and the least impact on the performance of doctors. Finally, 18 effective factors in the performance of general surgeons were identified and prioritized. The results showed that motivation and performance do not depend on a single parameter but on several factors shaped by cultural background. It is therefore necessary to design, implement, and monitor interventions based on the key determinants identified, taking cultural background into account.

  9. Qualitative and quantitative evaluation of human dental enamel after bracket debonding: a noncontact three-dimensional optical profilometry analysis.

    PubMed

    Ferreira, Fabiano G; Nouer, Darcy F; Silva, Nelson P; Garbui, Ivana U; Correr-Sobrinho, Lourenço; Nouer, Paulo R A

    2014-09-01

    The aim of this study was to undertake a qualitative and quantitative evaluation of changes on enamel surfaces after debonding of brackets followed by finishing procedures, using a high-resolution three-dimensional optical profiler and to investigate the accuracy of the technique. The labial surfaces of 36 extracted upper central incisors were examined. Before bonding, the enamel surfaces were subjected to profilometry, recording four amplitude parameters. Brackets were then bonded using two types of light-cured orthodontic adhesive: composite resin and resin-modified glass ionomer cement. Finishing was performed by three different methods: pumice on a rubber cup, fine and ultrafine aluminum oxide discs, and microfine diamond cups followed by silicon carbide brushes. The samples were subsequently re-analyzed by profilometry. Wilcoxon signed-rank test, Kruskal-Wallis test (p < 0.05) and a posteriori Mann-Whitney U test with Bonferroni correction (p < 0.0167) revealed a significant reduction of enamel roughness when diamond cups followed by silicon carbide brushes were used to finish surfaces that had remnants of resin-modified glass ionomer adhesive and when pumice was used to finish surfaces that had traces of composite resin. Enamel loss was minimal. The 3D optical profilometry technique was able to provide accurate qualitative and quantitative assessment of changes on the enamel surface after debonding. Morphological changes in the topography of dental surfaces, especially if related to enamel loss and roughness, are of considerable clinical importance. The quantitative evaluation method used herein enables a more comprehensive understanding of the effects of orthodontic bonding on teeth.
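The threshold p < 0.0167 used for the post hoc Mann-Whitney tests above is exactly the Bonferroni-corrected level 0.05/3 for three pairwise comparisons. A short sketch of that correction:

```python
def bonferroni_alpha(alpha, n_comparisons):
    """Per-comparison significance threshold under Bonferroni correction."""
    return alpha / n_comparisons

def significant(p_values, alpha=0.05):
    """Flag p-values that survive correction for the whole family of comparisons."""
    cutoff = bonferroni_alpha(alpha, len(p_values))
    return [p < cutoff for p in p_values]
```

With three finishing methods compared pairwise, a raw p-value of 0.02 would pass an uncorrected 0.05 threshold but fails the corrected 0.0167 cutoff, which is why the study reports the stricter level.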

10. Quantitative skills as a graduate learning outcome of university science degree programmes: student performance explored through the planned-enacted-experienced curriculum model

    NASA Astrophysics Data System (ADS)

    Matthews, Kelly E.; Adams, Peter; Goos, Merrilyn

    2016-07-01

    Application of mathematical and statistical thinking and reasoning, typically referred to as quantitative skills, is essential for university bioscience students. First, this study developed an assessment task intended to gauge graduating students' quantitative skills. The Quantitative Skills Assessment of Science Students (QSASS) was the result, which examined 10 mathematical and statistical sub-topics. Second, the study established an evidential baseline of students' quantitative skills performance and confidence levels by piloting the QSASS with 187 final-year biosciences students at a research-intensive university. The study is framed within the planned-enacted-experienced curriculum model and contributes to science reform efforts focused on enhancing the quantitative skills of university graduates, particularly in the biosciences. The results found, on average, weak performance and low confidence on the QSASS, suggesting divergence between academics' intentions and students' experiences of learning quantitative skills. Implications for curriculum design and future studies are discussed.

  11. Quantitative evaluation of orbital hybridization in carbon nanotubes under radial deformation using π-orbital axis vector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ohnishi, Masato, E-mail: masato.ohnishi@rift.mech.tohoku.ac.jp; Suzuki, Ken; Miura, Hideo, E-mail: hmiura@rift.mech.tohoku.ac.jp

    2015-04-15

    When a radial strain is applied to a carbon nanotube (CNT), the increase in local curvature induces orbital hybridization. The effect of the curvature-induced orbital hybridization on the electronic properties of CNTs, however, has not been evaluated quantitatively. In this study, the strength of orbital hybridization in CNTs under homogeneous radial strain was evaluated quantitatively. Our analyses revealed the detailed procedure of the change in electronic structure of CNTs. In addition, the dihedral angle, the angle between π-orbital axis vectors of adjacent atoms, was found to effectively predict the strength of local orbital hybridization in deformed CNTs.

  12. 48 CFR 36.604 - Performance evaluation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 1 2011-10-01 2011-10-01 false Performance evaluation. 36.604 Section 36.604 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION SPECIAL... Performance evaluation. See 42.1502(f) for the requirements for preparing past performance evaluations for...

  13. 48 CFR 36.604 - Performance evaluation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 1 2012-10-01 2012-10-01 false Performance evaluation. 36.604 Section 36.604 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION SPECIAL... Performance evaluation. See 42.1502(f) for the requirements for preparing past performance evaluations for...

  14. 48 CFR 36.604 - Performance evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Performance evaluation. 36.604 Section 36.604 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION SPECIAL... Performance evaluation. See 42.1502(f) for the requirements for preparing past performance evaluations for...

  15. 48 CFR 36.604 - Performance evaluation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 1 2014-10-01 2014-10-01 false Performance evaluation. 36.604 Section 36.604 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION SPECIAL... Performance evaluation. See 42.1502(f) for the requirements for preparing past performance evaluations for...

  16. Quantitative evaluation for accumulative calibration error and video-CT registration errors in electromagnetic-tracked endoscopy.

    PubMed

    Liu, Sheena Xin; Gutiérrez, Luis F; Stanton, Doug

    2011-05-01

Electromagnetic (EM)-guided endoscopy has demonstrated its value in minimally invasive interventions. Accuracy evaluation of the system is of paramount importance to clinical applications. Previously, a number of researchers have reported the results of calibrating the EM-guided endoscope; however, the accumulated errors of an integrated system, which ultimately reflect intra-operative performance, have not been characterized. To fill this gap, we propose a novel system to perform this evaluation and use a 3D metric to reflect the intra-operative procedural accuracy. This paper first presents a portable design and a method for calibration of an electromagnetic (EM)-tracked endoscopy system. An evaluation scheme is then described that uses the calibration results and EM-CT registration to enable real-time data fusion between CT and endoscopic video images. We present quantitative evaluation results for estimating the accuracy of this system using eight internal fiducials as the targets on an anatomical phantom: the error is obtained by comparing the positions of these targets in the CT space, EM space and endoscopy image space. To obtain 3D error estimation, the 3D locations of the targets in the endoscopy image space are reconstructed from stereo views of the EM-tracked monocular endoscope. Thus, the accumulated errors are evaluated in a controlled environment, where the ground truth information is present and systematic performance (including the calibration error) can be assessed. We obtain the mean in-plane error to be on the order of 2 pixels. To evaluate the data integration performance for virtual navigation, target video-CT registration error (TRE) is measured as the 3D Euclidean distance between the 3D-reconstructed targets of endoscopy video images and the targets identified in CT. The 3D error (TRE) encapsulates EM-CT registration error, EM-tracking error, fiducial localization error, and optical-EM calibration error.
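The TRE defined above is simply the 3D Euclidean distance between each video-reconstructed target and its CT-identified position, averaged over the fiducial set. A minimal sketch:

```python
import math

def target_registration_error(p_video, p_ct):
    """3D Euclidean distance between a video-reconstructed target and its CT position."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p_video, p_ct)))

def mean_tre(video_targets, ct_targets):
    """Mean TRE over a set of paired fiducial targets."""
    dists = [target_registration_error(v, c)
             for v, c in zip(video_targets, ct_targets)]
    return sum(dists) / len(dists)
```

Because each distance lumps together every upstream error source (EM-CT registration, tracking, fiducial localization, calibration), the mean TRE characterizes the integrated system rather than any single component.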

  17. 13 CFR 304.4 - Performance evaluations.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 13 Business Credit and Assistance 1 2011-01-01 2011-01-01 false Performance evaluations. 304.4... ECONOMIC DEVELOPMENT DISTRICTS § 304.4 Performance evaluations. (a) EDA shall evaluate the management standards, financial accountability and program performance of each District Organization within three (3...

  18. 13 CFR 304.4 - Performance evaluations.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 13 Business Credit and Assistance 1 2014-01-01 2014-01-01 false Performance evaluations. 304.4... ECONOMIC DEVELOPMENT DISTRICTS § 304.4 Performance evaluations. (a) EDA shall evaluate the management standards, financial accountability and program performance of each District Organization within three (3...

  19. 13 CFR 304.4 - Performance evaluations.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 13 Business Credit and Assistance 1 2013-01-01 2013-01-01 false Performance evaluations. 304.4... ECONOMIC DEVELOPMENT DISTRICTS § 304.4 Performance evaluations. (a) EDA shall evaluate the management standards, financial accountability and program performance of each District Organization within three (3...

  20. Quantitative evaluation of software packages for single-molecule localization microscopy.

    PubMed

    Sage, Daniel; Kirshner, Hagai; Pengo, Thomas; Stuurman, Nico; Min, Junhong; Manley, Suliana; Unser, Michael

    2015-08-01

    The quality of super-resolution images obtained by single-molecule localization microscopy (SMLM) depends largely on the software used to detect and accurately localize point sources. In this work, we focus on the computational aspects of super-resolution microscopy and present a comprehensive evaluation of localization software packages. Our philosophy is to evaluate each package as a whole, thus maintaining the integrity of the software. We prepared synthetic data that represent three-dimensional structures modeled after biological components, taking excitation parameters, noise sources, point-spread functions and pixelation into account. We then asked developers to run their software on our data; most responded favorably, allowing us to present a broad picture of the methods available. We evaluated their results using quantitative and user-interpretable criteria: detection rate, accuracy, quality of image reconstruction, resolution, software usability and computational resources. These metrics reflect the various tradeoffs of SMLM software packages and help users to choose the software that fits their needs.
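One of the benchmark's quantitative criteria, detection rate, amounts to pairing each ground-truth emitter with a sufficiently close localization. The greedy nearest-neighbour pairing below is a simplification of whatever matching the benchmark actually used, shown only to make the metric concrete:

```python
import math

def detection_rate(ground_truth, detections, tol):
    """Fraction of true emitters matched to a detection within `tol`
    (greedy one-to-one nearest-neighbour pairing)."""
    free = list(detections)
    matched = 0
    for g in ground_truth:
        if not free:
            break
        best = min(free, key=lambda d: math.dist(g, d))
        if math.dist(g, best) <= tol:
            free.remove(best)
            matched += 1
    return matched / len(ground_truth)
```

The one-to-one constraint matters: without it, a single spurious localization near a cluster of emitters would inflate the rate.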

  1. Evaluating the Effectiveness of Remedial Reading Courses at Community Colleges: A Quantitative Study

    ERIC Educational Resources Information Center

    Lavonier, Nicole

    2014-01-01

    The present study evaluated the effectiveness of two instructional approaches for remedial reading courses at a community college. The instructional approaches were strategic reading and traditional, textbook-based instruction. The two research questions that guided the quantitative, quasi-experimental study were: (a) what is the effect of…

  2. Simulation-based evaluation of the resolution and quantitative accuracy of temperature-modulated fluorescence tomography.

    PubMed

    Lin, Yuting; Nouizi, Farouk; Kwong, Tiffany C; Gulsen, Gultekin

    2015-09-01

Conventional fluorescence tomography (FT) can recover the distribution of fluorescent agents within a highly scattering medium. However, poor spatial resolution remains its foremost limitation. Previously, we introduced a new fluorescence imaging technique termed "temperature-modulated fluorescence tomography" (TM-FT), which provides high-resolution images of fluorophore distribution. TM-FT is a multimodality technique that combines fluorescence imaging with focused ultrasound to locate thermo-sensitive fluorescence probes using a priori spatial information to drastically improve the resolution of conventional FT. In this paper, we present an extensive simulation study to evaluate the performance of the TM-FT technique on complex phantoms with multiple fluorescent targets of various sizes located at different depths. In addition, the performance of the TM-FT is tested in the presence of background fluorescence. The results obtained using our new method are systematically compared with those obtained with the conventional FT. Overall, TM-FT provides higher resolution and superior quantitative accuracy, making it an ideal candidate for in vivo preclinical and clinical imaging. For example, a 4 mm diameter inclusion positioned in the middle of a synthetic slab geometry phantom (D: 40 mm × W: 100 mm) is recovered as an elongated object in the conventional FT (x = 4.5 mm; y = 10.4 mm), while TM-FT recovers it successfully in both directions (x = 3.8 mm; y = 4.6 mm). As a result, the quantitative accuracy of the TM-FT is superior because it recovers the concentration of the agent with a 22% error, which is in contrast with the 83% error of the conventional FT.

  3. Simulation-based evaluation of the resolution and quantitative accuracy of temperature-modulated fluorescence tomography

    PubMed Central

    Lin, Yuting; Nouizi, Farouk; Kwong, Tiffany C.; Gulsen, Gultekin

    2016-01-01

Conventional fluorescence tomography (FT) can recover the distribution of fluorescent agents within a highly scattering medium. However, poor spatial resolution remains its foremost limitation. Previously, we introduced a new fluorescence imaging technique termed "temperature-modulated fluorescence tomography" (TM-FT), which provides high-resolution images of fluorophore distribution. TM-FT is a multimodality technique that combines fluorescence imaging with focused ultrasound to locate thermo-sensitive fluorescence probes using a priori spatial information to drastically improve the resolution of conventional FT. In this paper, we present an extensive simulation study to evaluate the performance of the TM-FT technique on complex phantoms with multiple fluorescent targets of various sizes located at different depths. In addition, the performance of the TM-FT is tested in the presence of background fluorescence. The results obtained using our new method are systematically compared with those obtained with the conventional FT. Overall, TM-FT provides higher resolution and superior quantitative accuracy, making it an ideal candidate for in vivo preclinical and clinical imaging. For example, a 4 mm diameter inclusion positioned in the middle of a synthetic slab geometry phantom (D: 40 mm × W: 100 mm) is recovered as an elongated object in the conventional FT (x = 4.5 mm; y = 10.4 mm), while TM-FT recovers it successfully in both directions (x = 3.8 mm; y = 4.6 mm). As a result, the quantitative accuracy of the TM-FT is superior because it recovers the concentration of the agent with a 22% error, which is in contrast with the 83% error of the conventional FT. PMID:26368884
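The two headline comparisons in this record, concentration recovery error and shape elongation, are simple ratios. The sketch below uses hypothetical recovered values chosen to mirror the reported 22% error and the 4.5 mm × 10.4 mm elongated reconstruction; it is illustrative, not the paper's evaluation code:

```python
def concentration_error_pct(recovered, true_value):
    """Absolute percent error of a recovered fluorophore concentration."""
    return abs(recovered - true_value) / true_value * 100.0

def elongation(x_size, y_size):
    """Aspect ratio of the recovered inclusion (1.0 = no elongation)."""
    return max(x_size, y_size) / min(x_size, y_size)

# Hypothetical recovered concentration of 0.78 against a true value of 1.0
err = concentration_error_pct(0.78, 1.0)   # ~22% error, as TM-FT reports
stretch_ft = elongation(4.5, 10.4)         # conventional FT: > 2x elongated
stretch_tmft = elongation(3.8, 4.6)        # TM-FT: nearly isotropic
```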

  4. Clinical performance of the LCx HCV RNA quantitative assay.

    PubMed

    Bertuzis, Rasa; Hardie, Alison; Hottentraeger, Barbara; Izopet, Jacques; Jilg, Wolfgang; Kaesdorf, Barbara; Leckie, Gregor; Leete, Jean; Perrin, Luc; Qiu, Chunfu; Ran, Iris; Schneider, George; Simmonds, Peter; Robinson, John

    2005-02-01

This study was conducted to assess the performance of the Abbott Laboratories LCx HCV RNA Quantitative Assay (LCx assay) in the clinical setting. Four clinical laboratories measured LCx assay precision, specificity, and linearity. In addition, a method comparison was conducted between the LCx assay and the Roche HCV Amplicor Monitor, version 2.0 (Roche Monitor 2.0) and the Bayer VERSANT HCV RNA 3.0 Assay (Bayer bDNA 3.0) quantitative assays. For precision, the observed LCx assay intra-assay standard deviation (S.D.) was 0.060-0.117 log IU/ml, the inter-assay S.D. was 0.083-0.133 log IU/ml, the inter-lot S.D. was 0.105-0.177 log IU/ml, the inter-site S.D. was 0.099-0.190 log IU/ml, and the total S.D. was 0.113-0.190 log IU/ml. The specificity of the LCx assay was 99.4% (542/545; 95% CI, 98.4-99.9%). For linearity, the mean pooled LCx assay results were linear (r=0.994) over the range of the panel (2.54-5.15 log IU/ml). A method comparison demonstrated a correlation coefficient of 0.881 between the LCx assay and Roche Monitor 2.0, 0.872 between the LCx assay and Bayer bDNA 3.0, and 0.870 between Roche Monitor 2.0 and Bayer bDNA 3.0. The mean LCx assay result was 0.04 log IU/ml (95% CI, -0.08, 0.01) lower than the mean Roche Monitor 2.0 result, but 0.57 log IU/ml (95% CI, 0.53, 0.61) higher than the mean Bayer bDNA 3.0 result. The mean Roche Monitor 2.0 result was 0.60 log IU/ml (95% CI, 0.56, 0.65) higher than the mean Bayer bDNA 3.0 result. The LCx assay quantitated genotypes 1-4 with statistical equivalency. The vast majority (98.9%, 278/281) of paired LCx assay-Roche Monitor 2.0 specimen results were within 1 log IU/ml. Similarly, 86.6% (240/277) of paired LCx assay and Bayer bDNA 3.0 specimen results were within 1 log, as were 85.6% (237/277) of paired Roche Monitor 2.0 and Bayer specimen results. These data demonstrate that the LCx assay may be used for quantitation of HCV RNA in HCV-infected individuals.
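All of the assay comparisons above work on the log10 IU/ml scale, and agreement is summarized as the fraction of paired results within 1 log. A minimal sketch of both conventions (the example pairs are hypothetical):

```python
import math

def to_log_iu(iu_per_ml):
    """Convert a titer in IU/ml to log10 IU/ml, the scale used for assay comparison."""
    return math.log10(iu_per_ml)

def within_one_log(pairs):
    """Fraction of paired results (in log10 IU/ml) that agree within 1 log."""
    hits = sum(1 for a, b in pairs if abs(a - b) <= 1.0)
    return hits / len(pairs)

# Hypothetical paired measurements from two assays, already in log10 IU/ml
paired = [(3.0, 3.5), (4.2, 4.1), (4.0, 5.2)]
agreement = within_one_log(paired)  # two of three pairs agree within 1 log
```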

  5. Quantitative Evaluation of Electrodes for External Urethral Sphincter Electromyography during Bladder-to-Urethral Guarding Reflex

    PubMed Central

    Steward, James E.; Clemons, Jessica D.; Zaszczurynski, Paul J.; Butler, Robert S.; Damaser, Margot S.; Jiang, Hai-Hong

    2009-01-01

Purpose Accuracy in the recording of external urethral sphincter (EUS) electromyography (EMG) is an important goal in the quantitative evaluation of urethral function. The aim of this study was to quantitatively compare electrode recordings taken during tonic activity and leak point pressure (LPP) testing. Methods Several electrodes, including the surface electrode (SE), concentric electrode (CE), and wire electrode (WE), were placed on the EUS singly and simultaneously in six female Sprague-Dawley rats under urethane anesthesia. The bladder was filled via a retropubic catheter while LPP testing and EUS EMG recording were done. Quantitative baseline correction of the EUS EMG signal was performed to reduce baseline variation. Amplitude and frequency of one-second samples of the EUS EMG signal were measured before LPP (tonic activity) and during peak LPP activity. Results The SE, CE, and WE signals demonstrated tonic activity before LPP and an increase in activity during LPP, suggesting that the electrodes accurately recorded EUS activity during tonic activity and during the bladder-to-EUS guarding reflex, regardless of the size or location of detection areas. SE recordings required significantly less baseline correction than both CE and WE recordings. The activity in CE-recorded EMG was significantly higher than that of the SE and WE both in single and simultaneous recordings. Conclusions These electrodes may be suitable for testing EUS EMG activity. The SE signal had significantly less baseline variation and the CE detected local activity more sensitively than the other electrodes, which may provide insight into choosing an appropriate electrode for EUS EMG recording. PMID:19680661
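The study's quantitative baseline correction is not specified in the abstract; as a hedged stand-in, the sketch below removes slow baseline drift with a running median and measures the mean rectified amplitude of a sample, the kind of amplitude statistic reported above:

```python
def baseline_correct(emg, window):
    """Subtract a running-median baseline from an EMG trace
    (a simple stand-in for the study's baseline correction)."""
    half = window // 2
    out = []
    for i in range(len(emg)):
        seg = sorted(emg[max(0, i - half):i + half + 1])
        m = len(seg)
        median = seg[m // 2] if m % 2 else (seg[m // 2 - 1] + seg[m // 2]) / 2
        out.append(emg[i] - median)
    return out

def mean_amplitude(emg):
    """Mean rectified amplitude of an EMG sample."""
    return sum(abs(x) for x in emg) / len(emg)
```

A slow linear drift is flattened by the running median, so fast EMG bursts remain as deviations around zero and their amplitude can be compared before and during LPP.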

  6. Analytical insight into "breathing" crack-induced acoustic nonlinearity with an application to quantitative evaluation of contact cracks.

    PubMed

    Wang, Kai; Liu, Menglong; Su, Zhongqing; Yuan, Shenfang; Fan, Zheng

    2018-08-01

    To characterize fatigue cracks, in the undersized stage in particular, preferably in a quantitative and precise manner, a two-dimensional (2D) analytical model is developed for interpreting the modulation mechanism of a "breathing" crack on guided ultrasonic waves (GUWs). In conjunction with a modal decomposition method and a variational principle-based algorithm, the model is capable of analytically depicting the propagating and evanescent waves induced owing to the interaction of probing GUWs with a "breathing" crack, and further extracting linear and nonlinear wave features (e.g., reflection, transmission, mode conversion and contact acoustic nonlinearity (CAN)). With the model, a quantitative correlation between CAN embodied in acquired GUWs and crack parameters (e.g., location and severity) is obtained, whereby a set of damage indices is proposed via which the severity of the crack can be evaluated quantitatively. The evaluation, in principle, does not entail a benchmarking process against baseline signals. As validation, the results obtained from the analytical model are compared with those from finite element simulation, showing good consistency. This has demonstrated accuracy of the developed analytical model in interpreting contact crack-induced CAN, and spotlighted its application to quantitative evaluation of fatigue damage. Copyright © 2018 Elsevier B.V. All rights reserved.
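The contact acoustic nonlinearity (CAN) central to this model shows up as even harmonics: a "breathing" crack transmits the wave asymmetrically as it opens and closes, which half-wave clipping mimics crudely. The paper's damage indices are not reproduced here; this toy sketch only illustrates the harmonic signature, using the common second-to-first harmonic ratio as a relative nonlinearity measure:

```python
import cmath
import math

def harmonic_amplitude(samples, k):
    """Amplitude of the k-th harmonic of a signal sampled over one full period."""
    n = len(samples)
    coeff = sum(x * cmath.exp(-2j * math.pi * k * i / n)
                for i, x in enumerate(samples))
    return 2.0 * abs(coeff) / n

n = 1024
t = [2 * math.pi * i / n for i in range(n)]
intact = [math.sin(ti) for ti in t]                 # linear response: no harmonics
breathing = [max(math.sin(ti), 0.0) for ti in t]    # clipping mimics crack breathing

beta_intact = harmonic_amplitude(intact, 2) / harmonic_amplitude(intact, 1)
beta_crack = harmonic_amplitude(breathing, 2) / harmonic_amplitude(breathing, 1)
```

For the clipped wave the ratio approaches 4/(3π) ≈ 0.42, while the intact (linear) response yields essentially zero, which is why harmonic content can index crack severity without a baseline signal.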

  7. Integrating quantitative and qualitative evaluation methods to compare two teacher inservice training programs

    NASA Astrophysics Data System (ADS)

    Lawrenz, Frances; McCreath, Heather

    Qualitative and quantitative evaluation procedures were used to compare two physical-science teacher inservice training programs. The two programs followed the master teacher training model espoused by NSF but used different types of master teachers and types of activities. The two evaluation procedures produced different results and together they provided a much clearer picture of the strengths and weaknesses of the two programs. Using only one approach or the other would have substantially altered the conclusions.

  8. Proteus mirabilis biofilm - qualitative and quantitative colorimetric methods-based evaluation.

    PubMed

    Kwiecinska-Piróg, Joanna; Bogiel, Tomasz; Skowron, Krzysztof; Wieckowska, Ewa; Gospodarek, Eugenia

    2014-01-01

    The ability of Proteus mirabilis strains to form biofilm is a current topic of research worldwide. In this study, biofilm formation by P. mirabilis strains derived from the urine of catheterized and non-catheterized patients was investigated. A total of 39 P. mirabilis strains isolated from urine samples of patients of the dr Antoni Jurasz University Hospital No. 1 in Bydgoszcz clinics between 2011 and 2012 were used. Biofilm formation was evaluated using two independent quantitative and qualitative methods with TTC (2,3,5-triphenyl-tetrazolium chloride) and CV (crystal violet) application. The obtained results confirmed biofilm formation by all the examined strains, except in the quantitative method with TTC, in which 7.7% of the strains did not show this ability. It was shown that P. mirabilis rods have the ability to form biofilm on the surfaces of both biomaterials applied, polystyrene and polyvinyl chloride (Nelaton catheters). The differences in ability to form biofilm observed between P. mirabilis strains derived from the urine of the catheterized and non-catheterized patients were not statistically significant.

  9. Preparing systems engineering and computing science students in disciplined methods, quantitative, and advanced statistical techniques to improve process performance

    NASA Astrophysics Data System (ADS)

    McCray, Wilmon Wil L., Jr.

    The research was prompted by a need to assess the process improvement, quality management, and analytical techniques taught to undergraduate and graduate students in U.S. systems engineering and computing science degree programs (e.g., software engineering, computer science, and information technology) that can be applied to quantitatively manage processes for performance. Everyone involved in executing repeatable processes in the software and systems development lifecycle needs to become familiar with the concepts of quantitative management, statistical thinking, process improvement methods, and how they relate to process performance. Organizations are starting to embrace the de facto Software Engineering Institute (SEI) Capability Maturity Model Integration (CMMI) models as process improvement frameworks to improve business process performance. High maturity process areas in the CMMI model imply the use of analytical, statistical, and quantitative management techniques and process performance modeling to identify and eliminate sources of variation, continually improve process performance, reduce cost, and predict future outcomes. The research study identifies and discusses in detail the gaps in process improvement and quantitative analysis techniques taught in U.S. university systems engineering and computing science degree programs, gaps that exist in the literature, and a comparison analysis identifying the gaps between the SEI's "healthy ingredients" of a process performance model and the courses taught in U.S. university degree programs. The research also heightens awareness that academicians have conducted little research on applicable statistics and quantitative techniques that can be used to demonstrate high maturity as implied in the CMMI models. 
The research also includes a Monte Carlo simulation optimization
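
    Monte Carlo simulation is a standard high-maturity process-performance technique of the kind the abstract refers to. A minimal sketch, with hypothetical task estimates and a hypothetical schedule commitment (none of this comes from the study itself), might look like:

    ```python
    import random

    def simulate_schedule(task_means, n_trials=20000, seed=1):
        """Monte Carlo process-performance model: each task's duration is
        sampled from a triangular distribution around its point estimate
        (80% of the estimate at best, 150% at worst)."""
        rng = random.Random(seed)
        totals = []
        for _ in range(n_trials):
            total = sum(rng.triangular(0.8 * m, 1.5 * m, m) for m in task_means)
            totals.append(total)
        return totals

    tasks = [10, 15, 8, 12]       # hypothetical task estimates (days)
    totals = simulate_schedule(tasks)
    target = 50.0                 # hypothetical schedule commitment (days)
    p_meet = sum(t <= target for t in totals) / len(totals)
    ```

    Instead of a single deterministic total, the model yields a distribution of outcomes, so the commitment can be stated with an explicit probability of being met, which is the essence of quantitative process management.
    
    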

  10. Quantitative image analysis for evaluating the abrasion resistance of nanoporous silica films on glass

    PubMed Central

    Nielsen, Karsten H.; Karlsson, Stefan; Limbach, Rene; Wondraczek, Lothar

    2015-01-01

    The abrasion resistance of coated glass surfaces is an important parameter for judging lifetime performance, but practical testing procedures remain overly simplistic and often do not allow for direct conclusions on real-world degradation. Here, we combine quantitative two-dimensional image analysis and mechanical abrasion into a facile tool for probing the abrasion resistance of anti-reflective (AR) coatings. We determine variations in the average coated area during and after controlled abrasion. Through comparison with other experimental techniques, we show that this method provides a practical, rapid and versatile tool for the evaluation of the abrasion resistance of sol-gel-derived thin films on glass. The method yields informative data, which correlates with measurements of diffuse reflectance and is further supported by qualitative investigations through scanning electron microscopy. In particular, the method directly addresses degradation of coating performance, i.e., the gradual areal loss of antireflective functionality. As an exemplary subject, we studied the abrasion resistance of state-of-the-art nanoporous SiO2 thin films which were derived from 5–6 wt% aqueous solutions of potassium silicates, or from colloidal suspensions of SiO2 nanoparticles. It is shown how abrasion resistance is governed by coating density and film adhesion, defining the trade-off between optimal AR performance and acceptable mechanical performance. PMID:26656260
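
    The core measurement, the fraction of the surface still coated, obtained from a segmented two-dimensional image, can be sketched as below. The threshold and the synthetic micrograph are illustrative assumptions, not the authors' processing pipeline:

    ```python
    import numpy as np

    def coated_fraction(image, threshold):
        """Fraction of pixels counted as coated after binary segmentation:
        pixels at or above the intensity threshold are treated as coating."""
        return (image >= threshold).mean()

    # synthetic 100x100 "micrograph": intact coating at intensity 200,
    # one abraded (uncoated) patch at intensity 50
    img = np.full((100, 100), 200, dtype=np.uint8)
    img[20:40, 30:60] = 50
    frac = coated_fraction(img, threshold=128)   # 600 of 10000 pixels abraded
    ```

    Tracking this fraction over successive abrasion cycles gives the gradual areal loss of functionality that the paper uses as its degradation measure.
    
    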

  11. Quantitative Evaluation Method of Each Generation Margin for Power System Planning

    NASA Astrophysics Data System (ADS)

    Su, Su; Tanaka, Kazuyuki

    As power system deregulation advances, competition among power companies intensifies, and they seek more efficient system planning using existing facilities. Therefore, an efficient system planning method is needed. This paper proposes a quantitative evaluation method for the (N-1) generation margin considering the overload and voltage stability restrictions. Concerning the generation margin related to overload, a fast solution method without recalculation of the (N-1) Y-matrix is proposed. Regarding voltage stability, this paper proposes an efficient method to search for the stability limit. The IEEE30 model system, which is composed of 6 generators and 14 load nodes, is employed to validate the proposed method. According to the results, the proposed method can reduce the computational cost for the generation margin related to overload under the (N-1) condition, and specify the value quantitatively.

  12. Quantitative evaluation of intensive remedial action using long-term monitoring and tracer data at a DNAPL contaminated site, Wonju, Korea

    NASA Astrophysics Data System (ADS)

    Lee, S. S.; Kim, H. J.; Kim, M. O.; Lee, K.; Lee, K. K.

    2016-12-01

    A study seeking evidence of remediation in monitoring data collected before and after intensive in-situ remedial action was performed using various quantitative evaluation methods, such as mass discharge analysis, tracer data, statistical trend analysis, and analytical solutions, at a DNAPL-contaminated site in Wonju, Korea. Remediation technologies such as soil vapor extraction, soil flushing, biostimulation, and pump-and-treat have been applied to eliminate the contaminant sources of trichloroethylene (TCE) and to prevent migration of the TCE plume from remediation target zones. Prior to the remedial action, the concentration and mass discharges of TCE at all transects were affected by seasonal recharge variation and residual DNAPL sources. After the remediation, the effect of remediation was clearly evident at the main source zone and the industrial complex. By tracing a time-series of plume evolution, greater variation in TCE concentrations was detected in the plumes near the source zones compared to the relatively stable plumes downstream. The amount of residual source mass removed during the intensive remedial action was estimated using an analytical solution in order to evaluate the action's efficiency. The quantitative evaluation indicated that the intensive remedial action was effective, with a removal efficiency of 70% for the residual source mass during the remediation period. Analytical solutions that can consider and quantify the impacts of partial mass reduction have proven to be useful tools for quantifying unknown contaminant source mass, verifying dissolved concentrations at the DNAPL-contaminated site, and evaluating remediation efficiency using long-term monitoring data. 
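
    Mass discharge analysis of the kind cited above is conventionally a transect sum of concentration times flux times area, with removal efficiency taken as the relative drop in discharge. A sketch with hypothetical transect values (not the site data):

    ```python
    def mass_discharge(concs, fluxes, areas):
        """Transect mass discharge: sum of C_i * q_i * A_i over monitoring
        cells (concentration x Darcy flux x cell area)."""
        return sum(c * q * a for c, q, a in zip(concs, fluxes, areas))

    def removal_efficiency(md_before, md_after):
        """Relative reduction in mass discharge across the remediation."""
        return 1.0 - md_after / md_before

    # two hypothetical transect cells; units are illustrative only
    md0 = mass_discharge([120.0, 80.0], [0.05, 0.04], [2.0, 2.0])  # before
    md1 = mass_discharge([30.0, 28.0], [0.05, 0.04], [2.0, 2.0])   # after
    eff = removal_efficiency(md0, md1)   # roughly 0.7, i.e. ~70% removal
    ```
    
    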
    Acknowledgement: This subject was supported by the Korea Ministry of Environment under "GAIA project (173-092-009) and (201400540010)", R&D Project on Environmental Management of Geologic CO

  13. Evacuation performance evaluation tool.

    PubMed

    Farra, Sharon; Miller, Elaine T; Gneuhs, Matthew; Timm, Nathan; Li, Gengxin; Simon, Ashley; Brady, Whittney

    2016-01-01

    Hospitals conduct evacuation exercises to improve performance during emergency events. An essential aspect in this process is the creation of reliable and valid evaluation tools. The objective of this article is to describe the development and implications of a disaster evacuation performance tool that measures one portion of the very complex process of evacuation. Through the application of the Delphi technique and DeVellis's framework, disaster and neonatal experts provided input in developing this performance evaluation tool. Following development, the content validity and reliability of this tool were assessed. The setting was a large pediatric hospital and medical center in the Midwest. The tool was pilot tested with an administrative, medical, and nursing leadership group and then implemented with a group of 68 healthcare workers during a disaster exercise of a neonatal intensive care unit (NICU). The tool has demonstrated high content validity with a scale validity index of 0.979 and inter-rater reliability G coefficient (0.984, 95% CI: 0.948-0.9952). The Delphi process based on the conceptual framework of DeVellis yielded a psychometrically sound evacuation performance evaluation tool for a NICU.
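
    A common way to compute a content validity index of this kind is Lynn's method: the proportion of experts rating each item 3 or 4 on a 4-point relevance scale (I-CVI), averaged over items for the scale-level index (S-CVI/Ave). A minimal sketch with hypothetical ratings, offered as illustration rather than the study's actual computation:

    ```python
    def item_cvi(ratings):
        """I-CVI: proportion of experts rating the item 3 or 4 on a
        4-point relevance scale (Lynn's method)."""
        return sum(r >= 3 for r in ratings) / len(ratings)

    def scale_cvi(matrix):
        """S-CVI/Ave: mean of the item-level CVIs."""
        return sum(item_cvi(row) for row in matrix) / len(matrix)

    # hypothetical ratings: 3 items rated by 5 experts
    ratings = [
        [4, 4, 3, 4, 4],   # I-CVI = 1.0
        [4, 3, 3, 4, 2],   # I-CVI = 0.8
        [4, 4, 4, 4, 4],   # I-CVI = 1.0
    ]
    s_cvi = scale_cvi(ratings)
    ```
    
    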

  14. A framework to assess management performance in district health systems: a qualitative and quantitative case study in Iran.

    PubMed

    Tabrizi, Jafar Sadegh; Gholipour, Kamal; Iezadi, Shabnam; Farahbakhsh, Mostafa; Ghiasi, Akbar

    2018-01-01

    The aim was to design a district health management performance framework for Iran's healthcare system. The mixed-method study was conducted between September 2015 and May 2016 in Tabriz, Iran. In this study, the indicators of district health management performance were obtained by analyzing 45 semi-structured surveys of experts in the public health system. Content validity of the performance indicators generated in the qualitative part was reviewed and confirmed based on the content validity index (CVI). The content validity ratio (CVR) was also calculated in the quantitative part using data acquired from a survey of 21 experts. The results of this study indicated that, initially, 81 indicators were considered in the framework of district health management performance and, in the end, 53 indicators were validated and confirmed. These indicators were classified into 11 categories: human resources and organizational creativity, management and leadership, rules and ethics, planning and evaluation, district managing, health resources management and economics, community participation, quality improvement, research in health system, health information management, and epidemiology and situation analysis. The designed framework can be used to assess district health management and facilitates performance improvement at the district level.
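
    The CVR referred to here is conventionally Lawshe's content validity ratio. A minimal sketch, assuming that definition and hypothetical panel numbers (not the study's figures):

    ```python
    def content_validity_ratio(n_essential, n_experts):
        """Lawshe's CVR = (n_e - N/2) / (N/2), where n_e of N panelists
        rate the item 'essential'; ranges from -1 (none) to +1 (all)."""
        half = n_experts / 2
        return (n_essential - half) / half

    # hypothetical item: 18 of 21 experts rate it essential
    cvr = content_validity_ratio(18, 21)   # positive, item retained
    ```

    Items whose CVR falls below the critical value for the panel size are typically dropped, which is how an initial pool of indicators gets reduced to the validated set.
    
    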

  15. Introduction of a method for quantitative evaluation of spontaneous motor activity development with age in infants.

    PubMed

    Disselhorst-Klug, Catherine; Heinze, Franziska; Breitbach-Faller, Nico; Schmitz-Rode, Thomas; Rau, Günter

    2012-04-01

    Coordination between perception and action is required to interact with the environment successfully. This is already trained by very young infants, who perform spontaneous movements to learn how their body interacts with the environment. The strategies used by the infants for this purpose change with age. Therefore, very early progress in action control made by infants can be investigated by monitoring the development of spontaneous motor activity. In this paper, an objective method is introduced which allows the quantitative evaluation of the development of spontaneous motor activity in newborns. The introduced methodology is based on the acquisition of spontaneous movement trajectories of the feet by 3D movement analysis and subsequent calculation of specific movement parameters from them. With these movement-based parameters, it was possible to provide an objective description of age-dependent developmental steps in healthy newborns younger than 6 months. Furthermore, it has been shown that pathologies like infantile cerebral palsy significantly influence the development of motor activity. Since the introduced methodology is objective and quantitative, it is suitable to monitor how newborns train their cognitive processes, which will enable them to cope with their environment by motor interaction.

  16. Mechanism of variable structural colour in the neon tetra: quantitative evaluation of the Venetian blind model.

    PubMed

    Yoshioka, S; Matsuhana, B; Tanaka, S; Inouye, Y; Oshima, N; Kinoshita, S

    2011-01-06

    The structural colour of the neon tetra is distinguishable from those of, e.g., butterfly wings and bird feathers, because it can change in response to the light intensity of the surrounding environment. This fact clearly indicates the variability of the colour-producing microstructures. It has been known that an iridophore of the neon tetra contains a few stacks of periodically arranged light-reflecting platelets, which can cause multilayer optical interference phenomena. As a mechanism of the colour variability, the Venetian blind model has been proposed, in which the light-reflecting platelets are assumed to be tilted during colour change, resulting in a variation in the spacing between the platelets. In order to quantitatively evaluate the validity of this model, we have performed a detailed optical study of a single stack of platelets inside an iridophore. In particular, we have prepared a new optical system that can simultaneously measure both the spectrum and direction of the reflected light, which are expected to be closely related to each other in the Venetian blind model. The experimental results and detailed analysis are found to quantitatively verify the model.
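
    In an ideal two-component multilayer at normal incidence, the first-order reflection peak sits at lambda = 2(n_p d_p + n_c d_c)/m, so reducing the cytoplasm spacing between platelets blue-shifts the colour; that spacing change under tilt is the essence of the Venetian blind model. A sketch with hypothetical refractive indices and thicknesses (not the paper's measured values):

    ```python
    def peak_wavelength(n_p, d_p, n_c, d_c, m=1):
        """m-th order reflection peak of an ideal two-layer stack at normal
        incidence: lambda = 2 * (n_p*d_p + n_c*d_c) / m (same units as d)."""
        return 2 * (n_p * d_p + n_c * d_c) / m

    # hypothetical values: guanine platelets (n ~ 1.83) in cytoplasm (n ~ 1.33),
    # thicknesses in nm; tilting the platelets reduces the effective spacing
    lam_relaxed = peak_wavelength(1.83, 10, 1.33, 160)
    lam_tilted = peak_wavelength(1.83, 10, 1.33, 130)   # blue-shifted
    ```
    
    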

  17. A front-of-pack nutrition logo: a quantitative and qualitative process evaluation in the Netherlands.

    PubMed

    Vyth, Ellis L; Steenhuis, Ingrid H M; Mallant, Sanne F; Mol, Zinzi L; Brug, Johannes; Temminghoff, Marcel; Feunekes, Gerda I; Jansen, Leon; Verhagen, Hans; Seidell, Jacob C

    2009-01-01

    This study aimed to perform a quantitative and qualitative process evaluation of the introduction of the Choices logo, a front-of-pack nutrition logo on products with a favorable product composition, adopted by many food producers, retail and food service organizations, conditionally endorsed by the Dutch government, validated by scientists, and in the process of international dissemination. An online questionnaire was sent to adult consumers 4 months after the introduction of the logo (n = 1,032) and 1 year later (n = 1,127). Additionally, seven consumer focus groups (n = 41) were conducted to provide more insight into the questionnaire responses. Quantitative analyses showed that exposure to the logo had significantly increased. Elderly and obese respondents reported being more in need of a logo than younger and normal-weight individuals. Women perceived the logo as more attractive and credible than men did. Further qualitative analyses indicated that the logo's credibility would improve if it became known that governmental and scientific authorities support it. Elderly respondents indicated that they needed a logo due to health concerns. Consumers interested in health reported that they used the logo. Further research focusing on specific target groups, the formation of healthful diets, and health outcomes is needed to investigate the effectiveness of the Choices logo.

  18. [Evaluation on methodological problems in reports concerning quantitative analysis of syndrome differentiation of diabetes mellitus].

    PubMed

    Chen, Bi-Cang; Wu, Qiu-Ying; Xiang, Cheng-Bin; Zhou, Yi; Guo, Ling-Xiang; Zhao, Neng-Jiang; Yang, Shu-Yu

    2006-01-01

    To evaluate the quality of reports published in China in the past 10 years about quantitative analysis of syndrome differentiation for diabetes mellitus (DM), in order to explore the methodological problems in these reports and find possible solutions. The main medical literature databases in China were searched. Thirty-one articles were included and evaluated by the principles of clinical epidemiology. There were many mistakes and deficiencies in these articles, such as clinical trial design, diagnostic criteria for DM, standards of syndrome differentiation of DM, case inclusion and exclusion criteria, sample size estimation, data comparability, and statistical methods. It is necessary and important to improve the quality of reports concerning quantitative analysis of syndrome differentiation of DM in light of the principles of clinical epidemiology.

  19. 48 CFR 8.606 - Evaluating FPI performance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    8.606 Evaluating FPI performance. Agencies shall evaluate FPI contract performance in accordance with subpart 42.15. Performance evaluations do not negate the requirements of 8.602 and 8.604, but they...

  20. Performance Evaluation: A Deadly Disease?

    ERIC Educational Resources Information Center

    Aluri, Rao; Reichel, Mary

    1994-01-01

    W. Edwards Deming condemned performance evaluations as a deadly disease afflicting American management. He argued that performance evaluations nourish fear, encourage short-term thinking, stifle teamwork, and are no better than lotteries. This article examines library literature from Deming's perspective. Although that literature accepts…

  1. The supervisor's performance appraisal: evaluating the evaluator.

    PubMed

    McConnell, C R

    1993-04-01

    The focus of much performance appraisal in the coming decade or so will likely be on the level of customer satisfaction achieved through performance. Ultimately, evaluating the evaluator--that is, appraising the supervisor--will likely become a matter of assessing how well the supervisor's department meets the needs of its customers. Since meeting the needs of one's customers can well become the strongest determinant of organizational success or failure, it follows that relative success in ensuring these needs are met can become the primary indicator of one's relative success as a supervisor. This has the effect of placing the emphasis on supervisory performance exactly at the point it belongs, right on the bottom-line results of the supervisor's efforts.

  2. Advancing the use of performance evaluation in health care.

    PubMed

    Traberg, Andreas; Jacobsen, Peter; Duthiers, Nadia Monique

    2014-01-01

    The purpose of this paper is to develop a framework for health care performance evaluation that enables decision makers to identify areas indicative of corrective actions. The framework should provide information on strategic progress or regress in an operational context that justifies the need for organizational adjustments. The study adopts qualitative methods for constructing the framework, subsequently implementing the framework in a Danish magnetic resonance imaging (MRI) unit. Workshops and interviews form the basis of the qualitative construction phase, and two internal and five external databases are used for quantitative data collection. By aggregating performance outcomes, collective measures of performance are achieved. This enables easy and intuitive identification of areas not strategically aligned. In general, the framework has proven helpful in an MRI unit, where operational decision makers have been struggling with extensive amounts of performance information. The implementation of the framework in a single case in a public and highly political environment restricts the generalizing potential. The authors acknowledge that there may be more suitable approaches in organizations with different settings. The strength of the framework lies in the identification of performance problems prior to decision making. The quality of decisions is directly related to the individual decision maker. The only function of the framework is to support these decisions. The study demonstrates a more refined and transparent use of performance reporting by combining strategic weight assignment and performance aggregation in hierarchies. In this way, the framework accentuates performance as a function of strategic progress or regress, thus assisting decision makers in exerting operational effort in pursuit of strategic alignment.
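
    The "strategic weight assignment and performance aggregation" step can be sketched as a weighted average of normalized KPI scores. The KPIs and weights below are hypothetical illustrations, not the framework's actual measures:

    ```python
    def aggregate(scores, weights):
        """Weighted aggregation of normalized KPI scores (0-1 scale);
        weights encode strategic importance and need not sum to 1."""
        total_w = sum(weights)
        return sum(s * w for s, w in zip(scores, weights)) / total_w

    # hypothetical MRI-unit KPIs: utilization, report turnaround, rescan rate,
    # each already normalized to 0-1, with strategic weights 0.5/0.3/0.2
    unit_score = aggregate([0.82, 0.64, 0.91], [0.5, 0.3, 0.2])
    ```

    Aggregating leaf-level KPIs upward through such weighted averages is what lets a hierarchy of measures collapse into a single, strategically weighted score per unit.
    
    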

  3. Evaluation of acute ischemic stroke using quantitative EEG: a comparison with conventional EEG and CT scan.

    PubMed

    Murri, L; Gori, S; Massetani, R; Bonanni, E; Marcella, F; Milani, S

    1998-06-01

    The sensitivity of quantitative electroencephalogram (EEG) was compared with that of conventional EEG in patients with acute ischemic stroke. In addition, a correlation between quantitative EEG data and computerized tomography (CT) scan findings was carried out for all the areas of lesion in order to reassess the actual role of EEG in the evaluation of stroke. Sixty-five patients were tested with conventional and quantitative EEG within 24 h from the onset of neurological symptoms, whereas CT scan was performed within 4 days from the onset of stroke. EEG was recorded from 19 electrodes placed upon the scalp according to the International 10-20 System. Spectral analysis was carried out on 30 artefact-free 4-sec epochs. For each channel, absolute and relative power were calculated for the delta, theta, alpha and beta frequency bands, and these data were then represented in colour-coded maps. Ten patients with extensive lesions documented by CT scan were excluded. The results indicated that conventional EEG revealed abnormalities in 40 of 55 cases, while EEG mapping showed abnormalities in 46 of 55 cases: it showed focal abnormalities in five cases and nonfocal abnormalities in one of six cases which had appeared to be normal according to visual inspection of EEG. In a further 11 cases, where the conventional EEG revealed abnormalities in one hemisphere, the quantitative EEG and maps allowed abnormal activity to be localized more precisely. The sensitivity of both methods was higher for frontocentral, temporal and parieto-occipital cortical-subcortical infarctions than for basal ganglia and internal capsule lesions; however, quantitative EEG was more efficient for all areas of lesion in detecting cases that had appeared normal by visual inspection and was clearly superior in revealing focal abnormalities. 
When we considered the electrode at which the maximum power of the delta frequency band was recorded, a fairly close correlation was found
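
    The spectral analysis described above, absolute and relative power in the delta, theta, alpha and beta bands per channel, can be sketched with a plain periodogram. The band edges and the synthetic test signal are illustrative assumptions, not the study's exact pipeline:

    ```python
    import numpy as np

    BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

    def band_powers(x, fs):
        """Absolute and relative power per EEG band from a periodogram."""
        freqs = np.fft.rfftfreq(len(x), 1 / fs)
        psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
        absolute = {}
        for name, (lo, hi) in BANDS.items():
            sel = (freqs >= lo) & (freqs < hi)
            absolute[name] = psd[sel].sum()
        total = sum(absolute.values())
        relative = {k: v / total for k, v in absolute.items()}
        return absolute, relative

    # synthetic 4-s epoch: strong 2.5 Hz (delta) plus weak 10 Hz (alpha)
    fs = 256
    t = np.arange(0, 4, 1 / fs)
    x = np.sin(2 * np.pi * 2.5 * t) + 0.3 * np.sin(2 * np.pi * 10 * t)
    abs_p, rel_p = band_powers(x, fs)   # delta dominates the relative power
    ```

    In a stroke workup, an elevated relative delta power over the affected hemisphere is exactly the kind of focal abnormality the maps make visible.
    
    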

  4. 40 CFR Table 6 to Subpart Wwww of... - Basic Requirements for Performance Tests, Performance Evaluations, and Design Evaluations for New...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    Basic Requirements for Performance Tests, Performance Evaluations, and Design Evaluations for New and Existing Sources Using Add-On Control Devices. As required in § 63.5850 you must conduct performance tests, performance evaluations, and...

  5. 40 CFR Table 6 to Subpart Wwww of... - Basic Requirements for Performance Tests, Performance Evaluations, and Design Evaluations for New...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    Basic Requirements for Performance Tests, Performance Evaluations, and Design Evaluations for New and Existing Sources Using Add-On Control Devices. As required in § 63.5850 you must conduct performance tests, performance evaluations, and...

  6. 40 CFR Table 6 to Subpart Wwww of... - Basic Requirements for Performance Tests, Performance Evaluations, and Design Evaluations for New...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    Basic Requirements for Performance Tests, Performance Evaluations, and Design Evaluations for New and Existing Sources Using Add-On Control Devices. As required in § 63.5850 you must conduct performance tests, performance evaluations, and...

  7. Quantitative Evaluation of the Use of Actigraphy for Neurological and Psychiatric Disorders

    PubMed Central

    Song, Yu; Kwak, Shin; Yoshida, Sohei; Yamamoto, Yoshiharu

    2014-01-01

    Quantitative and objective evaluation of disease severity and/or drug effect is necessary in clinical practice. Wearable accelerometers such as an actigraph enable long-term recording of a patient's movement during activities and can be used for quantitative assessment of symptoms due to various diseases. We reviewed applications of actigraphy with analytical methods that are sufficiently sensitive and reliable to determine the severity of diseases and disorders such as Parkinson's disease (motor and nonmotor symptoms), sleep disorders, depression, behavioral and psychological symptoms of dementia (BPSD) in vascular dementia (VD), seasonal affective disorder (SAD), and stroke, as well as the effects of drugs used to treat them. We believe it is possible to develop analytical methods to assess more neurological or psychiatric disorders using actigraphy records. PMID:25214709

  8. Performance evaluation of DAAF as a booster material using the onionskin test

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morris, John S; Francois, Elizabeth G; Hooks, Daniel E

    Initiation of insensitive high explosive (IHE) formulations requires the use of a booster explosive in the initiation train. Booster material selection is crucial, as the initiation must reliably function across some spectrum of physical parameters. The interest in diaminoazoxyfurazan (DAAF) for this application stems from the fact that it possesses many traits of an IHE but is shock sensitive enough to serve as an explosive booster. A hemispherical wave breakout test, termed the onionskin test, is one of the methods used to evaluate the performance of a booster material. The wave breakout time-position history at the surface of a hemispherical IHE charge is recorded, and the relative uniformity of the breakout can be quantitatively compared between booster materials. A series of onionskin tests was performed to investigate breakout and propagation in DAAF at low temperatures, to evaluate ignition and detonation spreading in comparison to other explosives commonly used in booster applications. Some wave perturbation was observed with the DAAF booster in the onionskin tests presented. The results of these tests will be presented and discussed.

  9. Comparative analysis of quantitative efficiency evaluation methods for transportation networks.

    PubMed

    He, Yuxin; Qin, Jin; Hong, Jian

    2017-01-01

    An effective evaluation of transportation network efficiency can offer guidance for the optimal control of urban traffic. Based on the introduction and mathematical analysis of three quantitative evaluation methods for transportation network efficiency, this paper compares the information they measure, including network structure, traffic demand, travel choice behavior, and other factors that affect network efficiency. Accordingly, the applicability of the various evaluation methods is discussed. Analysis of different transportation network examples shows that the Q-H method reflects well the influence of network structure, traffic demand, and user route choice behavior on transportation network efficiency. In addition, the transportation network efficiency measured by this method and Braess's paradox can be explained in terms of each other, indicating a better evaluation of the real operating condition of a transportation network. The analysis of network efficiency calculated by the Q-H method also shows that a given transportation network has a specific appropriate demand level. Meanwhile, under fixed demand, both the critical network structure that guarantees the stability and basic operation of the network and the specific network structure yielding the largest transportation network efficiency can be identified.
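
    Assuming the Q-H method refers to the Nagurney-Qiang network efficiency measure E(G, d) = (1/n_W) * sum over O-D pairs of d_w / lambda_w (an assumption on our part; the abstract does not define it), its mutual explanation with Braess's paradox can be illustrated on the classic Braess network, where adding a shortcut link raises the equilibrium travel cost from 83 to 92 and therefore lowers the measured efficiency:

    ```python
    def nq_efficiency(demands, equil_costs):
        """Nagurney-Qiang efficiency: E = (1/n_W) * sum(d_w / lambda_w),
        where lambda_w is the equilibrium travel cost of O-D pair w."""
        return sum(d / c for d, c in zip(demands, equil_costs)) / len(demands)

    def braess_after_costs(f1, f2, f3):
        """Path costs in the classic Braess network after adding the shortcut.
        Link costs: 10x on the two congested links, x+50 on the two mild
        links, x+10 on the shortcut; paths are 1-2-4, 1-3-4, 1-2-3-4."""
        x12, x34 = f1 + f3, f2 + f3          # flows on the congested links
        return (10 * x12 + (f1 + 50),               # path 1-2-4
                (f2 + 50) + 10 * x34,               # path 1-3-4
                10 * x12 + (f3 + 10) + 10 * x34)    # path 1-2-3-4

    # Demand 6 on the single O-D pair.  Before the shortcut, the equilibrium
    # splits 3/3 and both paths cost 10*3 + (3+50) = 83; after the shortcut,
    # the equilibrium is 2/2/2 and every path costs 92 (the paradox).
    e_before = nq_efficiency([6], [83])
    e_after = nq_efficiency([6], [92])   # lower, despite the added capacity
    ```

    The added link lowers E, so the efficiency measure and the paradox explain each other, which is the relationship the abstract points to.
    
    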

  10. Quantitative aspects of inductively coupled plasma mass spectrometry

    NASA Astrophysics Data System (ADS)

    Bulska, Ewa; Wagner, Barbara

    2016-10-01

    Accurate determination of elements in various kinds of samples is essential for many areas, including environmental science, medicine, and industry. Inductively coupled plasma mass spectrometry (ICP-MS) is a powerful tool enabling multi-elemental analysis of numerous matrices with high sensitivity and good precision. Various calibration approaches can be used to perform accurate quantitative measurements by ICP-MS. They include the use of pure standards, matrix-matched standards, or relevant certified reference materials, assuring traceability of the reported results. This review critically evaluates the advantages and limitations of the different calibration approaches used in quantitative analyses by ICP-MS. Examples of such analyses are provided. This article is part of the themed issue 'Quantitative mass spectrometry'.
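
    One calibration approach in this family, the method of standard additions, quantifies an analyte from the x-intercept of a signal-versus-spike regression, which compensates for matrix effects. A minimal sketch with hypothetical spike levels and intensities (not data from the review):

    ```python
    import numpy as np

    def standard_addition(added, signal):
        """Fit signal = a*added + b; the sample concentration is the
        magnitude of the x-intercept, b/a (same units as 'added')."""
        a, b = np.polyfit(added, signal, 1)
        return b / a

    added = np.array([0.0, 5.0, 10.0, 20.0])            # spike, ng/mL (hypothetical)
    signal = np.array([1200.0, 1800.0, 2400.0, 3600.0])  # instrument counts
    conc = standard_addition(added, signal)              # ~10 ng/mL in the sample
    ```

    Because the calibration is built inside the sample's own matrix, suppression or enhancement of the signal affects standards and analyte alike, which is the method's main advantage over external calibration.
    
    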

  11. Left ventricular performance in various heart diseases with or without heart failure:--an appraisal by quantitative one-plane cineangiocardiography.

    PubMed

    Lien, W P; Lee, Y S; Chang, F Z; Chen, J J; Shieh, W B

    1978-01-01

    Quantitative one-plane cineangiocardiography in the right anterior oblique position for evaluation of LV performance was carried out in 62 patients with various heart diseases and in 13 subjects with normal LV. Parameters for evaluating both pump and muscle performance were derived from volume and pressure measurements. Of 31 patients with either systolic hypertension or LV myocardial diseases (coronary artery disease or idiopathic cardiomyopathy), 14 had clinical evidence of LV failure before the study. It was found that mean VCF and EF were the most sensitive indicators of impaired LV performance among the various parameters. There was a close correlation between mean VCF and EF, yet discordant changes of both parameters were noted in some patients. Furthermore, wall motion abnormalities were not infrequently observed in patients with coronary artery disease or primary cardiomyopathy. Therefore, assessment of at least three ejection properties (EF, mean VCF and wall motion abnormalities) is considered essential for a full understanding of the derangement of LV function in heart disease. This is especially true of patients with coronary artery disease. LV behavior in relation to different pathological stresses or lesions, such as chronic pressure or volume load, myocardial disease and mitral stenosis, was also studied, and the possible cause of impaired LV myocardial function in mitral stenosis was discussed.
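
    The two key indices follow from standard definitions: EF = (EDV - ESV)/EDV from ventricular volumes, and mean VCF = (EDD - ESD)/(EDD * ET) from internal dimensions and ejection time. A sketch with hypothetical values (the units and numbers are illustrative, not the study's data):

    ```python
    def ejection_fraction(edv, esv):
        """EF = stroke volume / end-diastolic volume (dimensionless)."""
        return (edv - esv) / edv

    def mean_vcf(edd, esd, et):
        """Mean velocity of circumferential fiber shortening (circ/s):
        (EDD - ESD) / (EDD * ET), with ET the ejection time in seconds."""
        return (edd - esd) / (edd * et)

    ef = ejection_fraction(edv=120.0, esv=50.0)   # volumes in mL
    vcf = mean_vcf(edd=5.0, esd=3.4, et=0.30)     # dimensions in cm, ET in s
    ```

    EF indexes pump performance while mean VCF is rate-sensitive and indexes muscle performance, which is why the two can change discordantly in the same patient.
    
    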

  12. Quantitative performance measurements of bent crystal Laue analyzers for X-ray fluorescence spectroscopy.

    PubMed

    Karanfil, C; Bunker, G; Newville, M; Segre, C U; Chapman, D

    2012-05-01

    Third-generation synchrotron radiation sources pose difficult challenges for energy-dispersive detectors for XAFS because of their count rate limitations. One solution to this problem is the bent crystal Laue analyzer (BCLA), which removes most of the undesired scatter and fluorescence before it reaches the detector, effectively eliminating detector saturation due to background. In this paper, experimental measurements of BCLA performance in conjunction with a 13-element germanium detector and a quantitative analysis of the signal-to-noise improvement of BCLAs are presented. The performance of BCLAs is compared with that of filters and slits.

  13. Quantitative evaluation of the requirements for the promotion as associate professor at German medical faculties.

    PubMed

    Sorg, Heiko; Knobloch, Karsten

    2012-01-01

    This is the first quantitative evaluation of the requirements for promotion as associate professor (AP) at German medical faculties. The AP regulations of German medical faculties were analysed according to a validated scoring system adapted to this study. The overall score for the AP requirements at 35 German medical faculties was 13.5±0.6 of 20 possible points (95% confidence interval 12.2-14.7). More than 88% of the AP regulations demand sufficient performance in teaching and research with adequate scientific publication. Furthermore, 83% of the faculties expect an expert review of the candidate's performance. Conference presentations required as an assistant professor, as well as reduction of the minimum time as an assistant professor, play only minor roles. The requirements for assistant professors to be nominated as associate professor at German medical faculties are high, with only a small range. In detail, however, considerable heterogeneity remains, which hinders equal opportunities and career prospects. These data might serve as a basis for an objective discussion.

  14. Quantitative evaluation of pairs and RS steganalysis

    NASA Astrophysics Data System (ADS)

    Ker, Andrew D.

    2004-06-01

    We give initial results from a new project which performs statistically accurate evaluation of the reliability of image steganalysis algorithms. The focus here is on the Pairs and RS methods, for detection of simple LSB steganography in grayscale bitmaps, due to Fridrich et al. Using libraries totalling around 30,000 images we have measured the performance of these methods and suggest changes which lead to significant improvements. Particular results from the project presented here include notes on the distribution of the RS statistic, the relative merits of different "masks" used in the RS algorithm, the effect on reliability when previously compressed cover images are used, and the effect of repeating steganalysis on the transposed image. We also discuss improvements to the Pairs algorithm, restricting it to spatially close pairs of pixels, which leads to a substantial performance improvement, even to the extent of surpassing the RS statistic which was previously thought superior for grayscale images. We also describe some of the questions for a general methodology of evaluation of steganalysis, and potential pitfalls caused by the differences between uncompressed, compressed, and resampled cover images.

  15. Quantitative PET/CT scanner performance characterization based upon the society of nuclear medicine and molecular imaging clinical trials network oncology clinical simulator phantom.

    PubMed

    Sunderland, John J; Christian, Paul E

    2015-01-01

    The Clinical Trials Network (CTN) of the Society of Nuclear Medicine and Molecular Imaging (SNMMI) operates a PET/CT phantom imaging program using the CTN's oncology clinical simulator phantom, designed to validate scanners at sites that wish to participate in oncology clinical trials. Since its inception in 2008, the CTN has collected 406 well-characterized phantom datasets from 237 scanners at 170 imaging sites covering the spectrum of commercially available PET/CT systems. The combined and collated phantom data describe a global profile of quantitative performance and variability of PET/CT data used in both clinical practice and clinical trials. Individual sites filled and imaged the CTN oncology PET phantom according to detailed instructions. Standard clinical reconstructions were requested and submitted. The phantom itself contains uniform regions suitable for scanner calibration assessment, lung fields, and 6 hot spheric lesions with diameters ranging from 7 to 20 mm at a 4:1 contrast ratio with primary background. The CTN Phantom Imaging Core evaluated the quality of the phantom fill and imaging and measured background standardized uptake values to assess scanner calibration and maximum standardized uptake values of all 6 lesions to review quantitative performance. Scanner make-and-model-specific measurements were pooled and then subdivided by reconstruction to create scanner-specific quantitative profiles. Different makes and models of scanners predictably demonstrated different quantitative performance profiles including, in some cases, small calibration bias. Differences in site-specific reconstruction parameters increased the quantitative variability among similar scanners, with postreconstruction smoothing filters being the most influential parameter. Quantitative assessment of this intrascanner variability over this large collection of phantom data gives, for the first time, estimates of reconstruction variance introduced into trials from allowing

  16. Comparison of semi-quantitative and quantitative dynamic contrast-enhanced MRI evaluations of vertebral marrow perfusion in a rat osteoporosis model.

    PubMed

    Zhu, Jingqi; Xiong, Zuogang; Zhang, Jiulong; Qiu, Yuyou; Hua, Ting; Tang, Guangyu

    2017-11-14

    This study aims to investigate the technical feasibility of semi-quantitative and quantitative dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) in the assessment of longitudinal changes of marrow perfusion in a rat osteoporosis model, using bone mineral density (BMD) measured by micro-computed tomography (micro-CT) and histopathology as the gold standards. Fifty rats were randomly assigned to the control group (n=25) and ovariectomy (OVX) group whose bilateral ovaries were excised (n=25). Semi-quantitative and quantitative DCE-MRI, micro-CT, and histopathological examinations were performed on lumbar vertebrae at baseline and 3, 6, 9, and 12 weeks after operation. The differences between the two groups in terms of semi-quantitative DCE-MRI parameter (maximum enhancement, E max ), quantitative DCE-MRI parameters (volume transfer constant, K trans ; interstitial volume, V e ; and efflux rate constant, K ep ), micro-CT parameter (BMD), and histopathological parameter (microvessel density, MVD) were compared at each of the time points using an independent-sample t test. The differences in these parameters between baseline and other time points in each group were assessed via Bonferroni's multiple comparison test. A Pearson correlation analysis was applied to assess the relationships between DCE-MRI, micro-CT, and histopathological parameters. In the OVX group, the E max values decreased significantly compared with those of the control group at weeks 6 and 9 (p=0.003 and 0.004, respectively). The K trans values decreased significantly compared with those of the control group from week 3 (p<0.05). However, the V e values decreased significantly only at week 9 (p=0.032), and no difference in the K ep was found between two groups. The BMD values of the OVX group decreased significantly compared with those of the control group from week 3 (p<0.05). 
    Transmission electron microscopy showed tighter gaps between vascular endothelial cells with swollen mitochondria.

  17. Quantitative oxygen concentration imaging in toluene atmospheres using Dual Imaging with Modeling Evaluation

    NASA Astrophysics Data System (ADS)

    Ehn, Andreas; Jonsson, Malin; Johansson, Olof; Aldén, Marcus; Bood, Joakim

    2013-01-01

    Fluorescence lifetimes of toluene as a function of oxygen concentration in toluene/nitrogen/oxygen mixtures have been measured at room temperature using picosecond-laser excitation of the S1-S0 transition at 266 nm. The data satisfy the Stern-Volmer relation with high accuracy, providing an updated value of the Stern-Volmer slope. A newly developed fluorescence lifetime imaging scheme, called Dual Imaging with Modeling Evaluation (DIME), is evaluated and successfully demonstrated for quantitative oxygen concentration imaging in toluene-seeded O2/N2 gas mixtures.
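    The measurement principle in this record rests on the Stern-Volmer relation, tau0/tau = 1 + K_SV*[O2], which maps a measured fluorescence lifetime to an oxygen concentration. A minimal round-trip sketch; the calibration constants below are assumed for illustration and are not the paper's values:

```python
def lifetime(tau0_ns, k_sv, o2_percent):
    """Stern-Volmer quenching: tau = tau0 / (1 + K_SV * [O2])."""
    return tau0_ns / (1.0 + k_sv * o2_percent)

def o2_from_lifetime(tau_ns, tau0_ns, k_sv):
    """Invert the Stern-Volmer relation to recover oxygen concentration
    from a measured fluorescence lifetime."""
    return (tau0_ns / tau_ns - 1.0) / k_sv

# Illustrative calibration constants (assumed, not the paper's values)
tau0 = 68.0   # ns, unquenched toluene lifetime
k_sv = 0.55   # per % O2, Stern-Volmer slope

tau = lifetime(tau0, k_sv, o2_percent=4.0)
recovered = o2_from_lifetime(tau, tau0, k_sv)   # round-trips to 4.0
```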

  18. Quantitative oxygen concentration imaging in toluene atmospheres using Dual Imaging with Modeling Evaluation

    NASA Astrophysics Data System (ADS)

    Ehn, Andreas; Jonsson, Malin; Johansson, Olof; Aldén, Marcus; Bood, Joakim

    2012-12-01

    Fluorescence lifetimes of toluene as a function of oxygen concentration in toluene/nitrogen/oxygen mixtures have been measured at room temperature using picosecond-laser excitation of the S1-S0 transition at 266 nm. The data satisfy the Stern-Volmer relation with high accuracy, providing an updated value of the Stern-Volmer slope. A newly developed fluorescence lifetime imaging scheme, called Dual Imaging with Modeling Evaluation (DIME), is evaluated and successfully demonstrated for quantitative oxygen concentration imaging in toluene-seeded O2/N2 gas mixtures.

  19. Quantitative evaluation of phase processing approaches in susceptibility weighted imaging

    NASA Astrophysics Data System (ADS)

    Li, Ningzhi; Wang, Wen-Tung; Sati, Pascal; Pham, Dzung L.; Butman, John A.

    2012-03-01

    Susceptibility weighted imaging (SWI) takes advantage of local variations in susceptibility between different tissues to enable highly detailed visualization of the cerebral venous system and sensitive detection of intracranial hemorrhages. It has therefore been increasingly used in magnetic resonance imaging studies of traumatic brain injury as well as other intracranial pathologies. In SWI, magnitude information is combined with phase information to enhance the susceptibility-induced image contrast. Because of global susceptibility variations, the rate of phase accumulation varies widely across the image, resulting in phase-wrapping artifacts that interfere with the local assessment of phase variation. Homodyne filtering is a common approach to eliminating this global phase variation; however, the filter size requires careful selection in order to preserve image contrast and avoid errors resulting from residual phase wraps. An alternative approach is to apply phase unwrapping prior to high-pass filtering. A suitable phase unwrapping algorithm guarantees no residual phase wraps but requires additional computational steps. In this work, we quantitatively evaluate these two phase processing approaches on both simulated and real data using different filters and cutoff frequencies. Our analysis leads to an improved understanding of the relationship between phase wraps, susceptibility effects, and acquisition parameters. Although homodyne filtering approaches are faster and more straightforward, phase unwrapping approaches perform more accurately in a wider variety of acquisition scenarios.
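    The homodyne step described in this abstract amounts to estimating the slowly varying background phase with a complex low-pass filter and removing it by complex conjugate multiplication. A minimal 1-D sketch of the idea on synthetic data (illustrative only; actual SWI processing uses a 2-D k-space filter):

```python
import cmath

def homodyne_highpass_phase(signal, window=9):
    """Estimate the slowly varying background phase with a complex
    moving-average (low-pass) filter, then remove it by complex
    conjugate multiplication; the result is the local (high-pass) phase."""
    n, half, out = len(signal), window // 2, []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        background = sum(signal[lo:hi]) / (hi - lo)
        out.append(cmath.phase(signal[i] * background.conjugate()))
    return out

# Synthetic 1-D "image": slow global phase ramp plus one local phase bump
sig = [cmath.exp(1j * (0.02 * i + (0.8 if i == 50 else 0.0)))
       for i in range(100)]
hp = homodyne_highpass_phase(sig)
# The ramp is suppressed; the local feature at index 50 survives.
```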

  20. Evaluation of MRI sequences for quantitative T1 brain mapping

    NASA Astrophysics Data System (ADS)

    Tsialios, P.; Thrippleton, M.; Glatz, A.; Pernet, C.

    2017-11-01

    T1 mapping is a quantitative MRI technique with significant applications in brain imaging. It allows evaluation of contrast uptake, blood perfusion, and volume, providing a more specific biomarker of disease progression than conventional T1-weighted images. While there are many techniques for T1 mapping, there is a wide range of reported T1 values in tissues, raising the issue of protocol reproducibility and standardization. The gold standard for obtaining T1 maps is the IR-SE sequence. Widely used alternative sequences are IR-SE-EPI, VFA (DESPOT), DESPOT-HIFI, and MP2RAGE, which speed up scanning and fitting procedures. A custom MRI phantom was used to assess the reproducibility and accuracy of the different methods. All scans were performed using a 3T Siemens Prisma scanner, and the acquired data were processed using two different codes. The main difference was observed for VFA (DESPOT), which grossly overestimated the T1 relaxation time, by 214 ms [126, 270], compared to the IR-SE sequence. The MP2RAGE and DESPOT-HIFI sequences gave slightly shorter times than IR-SE (~20 to 30 ms) and can be considered alternative, time-efficient methods for acquiring accurate T1 maps of the human brain, while IR-SE-EPI gave identical results at the cost of lower image quality.
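    The VFA (DESPOT1) method compared in this record fits SPGR signals acquired at two or more flip angles to a line whose slope is E1 = exp(-TR/T1). A noiseless round-trip sketch; the tissue and protocol values are illustrative assumptions, not the study's:

```python
import math

def spgr_signal(m0, t1_ms, tr_ms, alpha_deg):
    """Steady-state SPGR (Ernst) signal underlying the VFA/DESPOT1 method."""
    e1 = math.exp(-tr_ms / t1_ms)
    a = math.radians(alpha_deg)
    return m0 * math.sin(a) * (1.0 - e1) / (1.0 - e1 * math.cos(a))

def vfa_t1(signals, alphas_deg, tr_ms):
    """Two-point DESPOT1 fit: plotting S/sin(a) against S/tan(a) gives a
    line with slope E1 = exp(-TR/T1), so T1 = -TR / ln(slope)."""
    x = [s / math.tan(math.radians(a)) for s, a in zip(signals, alphas_deg)]
    y = [s / math.sin(math.radians(a)) for s, a in zip(signals, alphas_deg)]
    slope = (y[1] - y[0]) / (x[1] - x[0])
    return -tr_ms / math.log(slope)

# Noiseless round trip with assumed values (T1 = 900 ms, TR = 15 ms)
alphas = [4.0, 18.0]
signals = [spgr_signal(1000.0, 900.0, 15.0, a) for a in alphas]
t1_est = vfa_t1(signals, alphas, 15.0)   # recovers 900 ms
```

    In the noiseless two-angle case the linearization is exact, which is why the round trip recovers T1; with real data, noise propagation through this fit is what drives the bias reported above.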

  1. The Interpretation of Student Performance on Evaluative Tests.

    ERIC Educational Resources Information Center

    Aikenhead, Glen S.

    Reported is a study on the use of quantitative data in evaluating a science course for the purpose of introducing an alternative form of information presentation capable of supplying qualitative feedback valuable to students, teachers, and curriculum developers. Fifty-five teachers, randomly selected during the 1967-68 Project Physics (PP)…

  2. Polymer on Top: Current Limits and Future Perspectives of Quantitatively Evaluating Surface Grafting.

    PubMed

    Michalek, Lukas; Barner, Leonie; Barner-Kowollik, Christopher

    2018-03-07

    Well-defined polymer strands covalently tethered onto solid substrates determine the properties of the resulting functional interface. Herein, the current approaches to determining quantitative grafting densities are assessed. Based on a brief introduction to the key theories describing polymer brush regimes, a user's guide is provided for estimating maximum chain coverage and, importantly, for examining the most frequently employed approaches for determining grafting densities, i.e., dry thickness measurements, gravimetric assessment, and swelling experiments. An estimation of the reliability of these determination methods is provided by carefully evaluating their assumptions and assessing the stability of the underpinning equations. A practical guide for comparatively and quantitatively evaluating the reliability of a given approach is thus provided, enabling the field to critically judge experimentally determined grafting densities and to avoid the reporting of grafting densities that fall outside the physically realistic parameter space. The assessment is concluded with a perspective on the development of advanced approaches for the determination of grafting density, in particular single-chain methodologies. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
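    The dry-thickness route mentioned in this record converts a measured dry layer thickness into a grafting density via sigma = h * rho * N_A / M_n. A minimal sketch with assumed, hypothetical brush parameters (not values from the paper):

```python
AVOGADRO = 6.02214076e23

def grafting_density(dry_thickness_nm, density_g_cm3, mn_g_mol):
    """Grafting density (chains/nm^2) from a dry-thickness measurement:
    sigma = h * rho * N_A / M_n.  The factor 1e-21 converts
    (nm * g/cm^3) / (g/mol) into chains per nm^2."""
    return dry_thickness_nm * density_g_cm3 * AVOGADRO / mn_g_mol * 1e-21

# Hypothetical polystyrene brush: 10 nm dry layer, rho = 1.05 g/cm^3,
# Mn = 50 kg/mol (illustrative numbers only)
sigma = grafting_density(10.0, 1.05, 50_000.0)   # ~0.13 chains/nm^2
```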

  3. 40 CFR Table 6 to Subpart Wwww of... - Basic Requirements for Performance Tests, Performance Evaluations, and Design Evaluations for New...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Tests, Performance Evaluations, and Design Evaluations for New and Existing Sources Using Add-On Control... Tests, Performance Evaluations, and Design Evaluations for New and Existing Sources Using Add-On Control Devices As required in § 63.5850 you must conduct performance tests, performance evaluations, and design...

  4. 40 CFR Table 6 to Subpart Wwww of... - Basic Requirements for Performance Tests, Performance Evaluations, and Design Evaluations for New...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Tests, Performance Evaluations, and Design Evaluations for New and Existing Sources Using Add-On Control... Tests, Performance Evaluations, and Design Evaluations for New and Existing Sources Using Add-On Control Devices As required in § 63.5850 you must conduct performance tests, performance evaluations, and design...

  5. Comparison of qualitative and quantitative evaluation of diffusion-weighted MRI and chemical-shift imaging in the differentiation of benign and malignant vertebral body fractures.

    PubMed

    Geith, Tobias; Schmidt, Gerwin; Biffar, Andreas; Dietrich, Olaf; Dürr, Hans Roland; Reiser, Maximilian; Baur-Melnyk, Andrea

    2012-11-01

    The objective of our study was to compare the diagnostic value of qualitative diffusion-weighted imaging (DWI), quantitative DWI, and chemical-shift imaging in a single prospective cohort of patients with acute osteoporotic and malignant vertebral fractures. The study group was composed of patients with 26 osteoporotic vertebral fractures (18 women, eight men; mean age, 69 years; age range, 31 years 6 months to 86 years 2 months) and 20 malignant vertebral fractures (nine women, 11 men; mean age, 63.4 years; age range, 24 years 8 months to 86 years 4 months). T1-weighted, STIR, and T2-weighted sequences were acquired at 1.5 T. A DW reverse fast imaging with steady-state free precession (PSIF) sequence at different delta values was evaluated qualitatively. A DW echo-planar imaging (EPI) sequence and a DW single-shot turbo spin-echo (TSE) sequence at different b values were evaluated qualitatively and quantitatively using the apparent diffusion coefficient. Opposed-phase sequences were used to assess signal intensity qualitatively. The signal loss between in- and opposed-phase images was determined quantitatively. Two-tailed Fisher exact test, Mann-Whitney test, and receiver operating characteristic analysis were performed. Sensitivities, specificities, and accuracies were determined. Qualitative DW-PSIF imaging (delta = 3 ms) showed the best performance for distinguishing between benign and malignant fractures (sensitivity, 100%; specificity, 88.5%; accuracy, 93.5%). Qualitative DW-EPI (b = 50 s/mm(2) [p = 1.00]; b = 250 s/mm(2) [p = 0.50]) and DW single-shot TSE imaging (b = 100 s/mm(2) [p = 1.00]; b = 250 s/mm(2) [p = 0.18]; b = 400 s/mm(2) [p = 0.18]; b = 600 s/mm(2) [p = 0.39]) did not indicate significant differences between benign and malignant fractures. DW-EPI using a b value of 500 s/mm(2) (p = 0.01) indicated significant differences between benign and malignant vertebral fractures. 
Quantitative DW-EPI (p = 0.09) and qualitative opposed-phase imaging (p = 0

  6. Probe colorimeter for quantitating enzyme-linked immunosorbent assays and other colorimetric assays performed with microplates.

    PubMed Central

    Ackerman, S B; Kelley, E A

    1983-01-01

    The performance of a fiberoptic probe colorimeter (model PC800; Brinkmann Instruments, Inc., Westbury, N.Y.) for quantitating enzymatic or colorimetric assays in 96-well microtiter plates was compared with the performances of a spectrophotometer (model 240; Gilford Instrument Laboratories, Inc., Oberlin, Ohio) and a commercially available enzyme immunoassay reader (model MR590; Dynatech Laboratories, Inc., Alexandria, Va.). Alkaline phosphatase-p-nitrophenyl phosphate in 3 M NaOH was used as the chromophore source. Six types of plates were evaluated for use with the probe colorimeter; they generated reproducibility values (100% - coefficient of variation) ranging from 91 to 98% when one individual made 24 independent measurements on the same dilution of chromophore on each plate. Eleven individuals each performed 24 measurements with the colorimeter on either a visually light (absorbance of 0.10 at 420 nm) or a dark (absorbance of 0.80 at 420 nm) dilution of chromophore; reproducibilities averaged 87% for the light dilution and 97% for the dark dilution. When one individual measured the same chromophore sample at least 20 times in the colorimeter, in the spectrophotometer, or in the enzyme immunoassay reader, reproducibility for each instrument was greater than 99%. Measurements of a dilution series of chromophore in a fixed volume indicated that the optical responses of each instrument were linear in a range of 0.05 to 1.10 absorbance units. PMID:6341399

  7. Probe colorimeter for quantitating enzyme-linked immunosorbent assays and other colorimetric assays performed with microplates.

    PubMed

    Ackerman, S B; Kelley, E A

    1983-03-01

    The performance of a fiberoptic probe colorimeter (model PC800; Brinkmann Instruments, Inc., Westbury, N.Y.) for quantitating enzymatic or colorimetric assays in 96-well microtiter plates was compared with the performances of a spectrophotometer (model 240; Gilford Instrument Laboratories, Inc., Oberlin, Ohio) and a commercially available enzyme immunoassay reader (model MR590; Dynatech Laboratories, Inc., Alexandria, Va.). Alkaline phosphatase-p-nitrophenyl phosphate in 3 M NaOH was used as the chromophore source. Six types of plates were evaluated for use with the probe colorimeter; they generated reproducibility values (100% - coefficient of variation) ranging from 91 to 98% when one individual made 24 independent measurements on the same dilution of chromophore on each plate. Eleven individuals each performed 24 measurements with the colorimeter on either a visually light (absorbance of 0.10 at 420 nm) or a dark (absorbance of 0.80 at 420 nm) dilution of chromophore; reproducibilities averaged 87% for the light dilution and 97% for the dark dilution. When one individual measured the same chromophore sample at least 20 times in the colorimeter, in the spectrophotometer, or in the enzyme immunoassay reader, reproducibility for each instrument was greater than 99%. Measurements of a dilution series of chromophore in a fixed volume indicated that the optical responses of each instrument were linear in a range of 0.05 to 1.10 absorbance units.

  8. Influence of sulphur-fumigation on the quality of white ginseng: a quantitative evaluation of major ginsenosides by high performance liquid chromatography.

    PubMed

    Jin, Xin; Zhu, Ling-Ying; Shen, Hong; Xu, Jun; Li, Song-Lin; Jia, Xiao-Bin; Cai, Hao; Cai, Bao-Chang; Yan, Ru

    2012-12-01

    White ginseng has been reported to be sulphur-fumigated during post-harvest handling. In the present study, the influence of sulphur-fumigation on the quality of white ginseng and its decoction was quantitatively evaluated through simultaneous quantification of 14 major ginsenosides by a validated high performance liquid chromatography method. A Poroshell 120 EC-C18 (100 mm × 3.0 mm, 2.7 μm) column was chosen for the separation of the major ginsenosides, which were eluted with a water-acetonitrile gradient as the mobile phase. The analytes were monitored by UV at 203 nm. The method was validated in terms of linearity, sensitivity, precision, accuracy and stability. The sulphur-fumigated and non-fumigated white ginseng samples, as well as their respective decoctions, were comparatively analysed with the newly validated method. It was found that the contents of the nine ginsenosides detected in raw materials decreased by about 3-85%, and their total content decreased by almost 54% after sulphur-fumigation. On the other hand, the contents of the 10 ginsenosides detected in decoctions of sulphur-fumigated white ginseng decreased by about 33-83%, and their total content decreased by up to 64% when compared with that of non-fumigated white ginseng. In addition, ginsenosides Rh(2) and Rg(5) could be detected in the decoction of sulphur-fumigated white ginseng but not in that of non-fumigated white ginseng. It is suggested that sulphur-fumigation can significantly influence not only the contents of the original ginsenosides, but also the decoction-induced chemical transformation of ginsenosides in white ginseng. Copyright © 2012 Elsevier Ltd. All rights reserved.

  9. Quantitative Acoustic Model for Adhesion Evaluation of Pmma/silicon Film Structures

    NASA Astrophysics Data System (ADS)

    Ju, H. S.; Tittmann, B. R.

    2010-02-01

    A poly(methyl methacrylate) (PMMA) film on a silicon substrate is a main structure for photolithography in semiconductor manufacturing processes. This paper presents the potential of scanning acoustic microscopy (SAM) for nondestructive evaluation of the PMMA/Si film structure, whose adhesion failure is commonly encountered during fabrication and post-fabrication processes. A physical model employing a partial discontinuity in displacement is developed for rigorous quantitative evaluation of interfacial weakness. The model is implemented in the matrix method for surface acoustic wave (SAW) propagation in anisotropic media. Our results predict that variations in SAW velocity and reflectance are sensitive to the adhesion condition. Experimental results obtained by the v(z) technique and SAW velocity reconstruction verify the prediction.

  10. 13 CFR 304.4 - Performance evaluations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Performance evaluations. 304.4 Section 304.4 Business Credit and Assistance ECONOMIC DEVELOPMENT ADMINISTRATION, DEPARTMENT OF COMMERCE ECONOMIC DEVELOPMENT DISTRICTS § 304.4 Performance evaluations. (a) EDA shall evaluate the management...

  11. Non cardiopatic and cardiopatic beta thalassemic patients: quantitative and qualitative cardiac iron deposition evaluation with MRI.

    PubMed

    Macarini, L; Marini, S; Pietrapertosa, A; Scardapane, A; Ettorre, G C

    2005-01-01

    Cardiomyopathy is one of the major complications of β-thalassaemia major, a result of transfusional iron overload. The aim of our study was to evaluate with MR whether there is any difference in iron deposition signal intensity (SI) or distribution between non-cardiopathic and cardiopathic thalassaemic patients, in order to establish whether there is a relationship between cardiopathy and iron deposition. We studied 20 patients affected by β-thalassaemia major, of whom 10 were cardiopathic and 10 non-cardiopathic, and 10 healthy volunteers as a control group. Serum ferritin and left ventricular ejection fraction were measured in the thalassaemic patients. All patients were examined using a 1.5 T MR unit with ECG-gated GE cine-MR T2*-weighted, SE T1-weighted and GE T2*-weighted sequences. In all cases, using an adequate ROI, the myocardial and skeletal muscle signal intensity (SI), the myocardial/skeletal muscle signal intensity ratio (SIR) and the average SI of the myocardium and skeletal muscle were calculated for every study group. The qualitative evaluation of iron deposition distribution was independently performed by three radiologists, who analyzed the extension, site and morphology of iron deposition on the MR images and reported their observations on a four-level rating scale: 0 (absent), 1 (limited), 2 (partial), 3 (widespread deposition). The results of the quantitative and qualitative evaluations were analysed with statistical tests. Cardiac iron deposition was found in 8/10 non-cardiopathic thalassaemic patients and in all cardiopathic thalassaemic patients. We noticed a significant SI difference (p<0.05) between the healthy volunteer control group and the thalassaemic patients with iron deposition, but no significant SI difference in iron deposition between non-cardiopathic and cardiopathic thalassaemic patients in the areas evaluated. The qualitative evaluation revealed a different distribution of iron deposition between the two thalassaemic groups, with

  12. An Elephant in the Room: Bias in Evaluating a Required Quantitative Methods Course

    ERIC Educational Resources Information Center

    Fletcher, Joseph F.; Painter-Main, Michael A.

    2014-01-01

    Undergraduate Political Science programs often require students to take a quantitative research methods course. Such courses are typically among the most poorly rated. This can be due, in part, to the way in which courses are evaluated. Students are generally asked to provide an overall rating, which, in turn, is widely used by students, faculty,…

  13. Practical no-gold-standard evaluation framework for quantitative imaging methods: application to lesion segmentation in positron emission tomography

    PubMed Central

    Jha, Abhinav K.; Mena, Esther; Caffo, Brian; Ashrafinia, Saeed; Rahmim, Arman; Frey, Eric; Subramaniam, Rathan M.

    2017-01-01

    Recently, a class of no-gold-standard (NGS) techniques has been proposed to evaluate quantitative imaging methods using patient data. These techniques provide figures of merit (FoMs) quantifying the precision of the estimated quantitative value without requiring repeated measurements and without requiring a gold standard. However, applying these techniques to patient data presents several practical difficulties, including assessing the underlying assumptions, accounting for patient-sampling-related uncertainty, and assessing the reliability of the estimated FoMs. To address these issues, we propose statistical tests that provide confidence in the underlying assumptions and in the reliability of the estimated FoMs. Furthermore, the NGS technique is integrated within a bootstrap-based methodology to account for patient-sampling-related uncertainty. The developed NGS framework was applied to evaluate four methods for segmenting lesions from (18)F-fluoro-2-deoxyglucose positron emission tomography images of patients with head-and-neck cancer on the task of precisely measuring the metabolic tumor volume. The NGS technique consistently predicted the same segmentation method as the most precise method. The proposed framework provided confidence in these results, even when gold-standard data were not available. The bootstrap-based methodology indicated improved performance of the NGS technique with larger numbers of patient studies, as was expected, and yielded consistent results as long as data from more than 80 lesions were available for the analysis. PMID:28331883
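    The bootstrap-based methodology referred to in this record resamples patients with replacement to quantify sampling-related uncertainty in a figure of merit. A generic percentile-bootstrap sketch (the per-lesion values and the use of a simple mean are illustrative assumptions; the paper's FoM comes from the NGS estimation itself):

```python
import random
import statistics

def bootstrap_ci(values, fom=statistics.mean, n_boot=2000, alpha=0.05, seed=7):
    """Percentile bootstrap over patient-level values: resample patients
    with replacement, recompute the figure of merit each time, and report
    a (1 - alpha) confidence interval for it."""
    rng = random.Random(seed)
    resampled = sorted(
        fom([rng.choice(values) for _ in values]) for _ in range(n_boot)
    )
    lo = resampled[int(alpha / 2 * n_boot)]
    hi = resampled[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Toy per-lesion volume errors (mL); a real analysis would plug in the
# NGS-estimated figure of merit instead of a simple mean.
errors = [0.8, 1.1, 0.9, 1.4, 0.7, 1.0, 1.2, 0.95, 1.05, 1.3]
lo, hi = bootstrap_ci(errors)
```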

  14. Development and evaluation of a quantitative PCR assay for detection of Hepatozoon sp.

    PubMed

    Criado-Fornelio, A; Buling, A; Cunha-Filho, N A; Ruas, J L; Farias, N A R; Rey-Valeiron, C; Pingret, J L; Etievant, M; Barba-Carretero, J C

    2007-12-25

    With the aim of improving current molecular diagnostic techniques for Hepatozoon sp. in carnivorous mammals, we developed a quantitative PCR (qPCR) assay with SYBR Green I(R). The method, consisting of amplification of a 235 bp fragment of the 18S rRNA gene, is able to detect at least 0.1 fg of parasite DNA. Reproducible quantitative results were obtained over a range of 0.1 ng-0.1 fg of Hepatozoon sp. DNA. To assess the performance of the qPCR assay, DNA samples from dogs (140) and cats (50) were tested with either standard PCR or qPCR. Positive samples were always confirmed by partial sequencing of the 18S rRNA gene. Quantitative PCR was 15.8% more sensitive than standard PCR for detecting H. canis in dogs. In cats, no infections were detected by standard PCR, compared to two positives by qPCR (both infected by H. canis, as shown by sequencing).
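    Quantitative results over a dilution range, as in this record, are normally read off a standard curve of Cq against log10 template quantity. A minimal sketch with hypothetical Cq values (not data from the paper):

```python
def fit_standard_curve(log10_qty, cq):
    """Least-squares fit of Cq = slope * log10(quantity) + intercept;
    a slope near -3.32 corresponds to ~100% amplification efficiency."""
    n = len(cq)
    mx, my = sum(log10_qty) / n, sum(cq) / n
    sxx = sum((x - mx) ** 2 for x in log10_qty)
    sxy = sum((x - mx) * (y - my) for x, y in zip(log10_qty, cq))
    slope = sxy / sxx
    return slope, my - slope * mx

def quantify(cq, slope, intercept):
    """Invert the standard curve to estimate starting template quantity."""
    return 10.0 ** ((cq - intercept) / slope)

# Hypothetical decade dilution series from 0.1 ng down to 0.1 fg
log10_ng = [-1.0, -2.0, -3.0, -4.0, -5.0, -6.0, -7.0]
cq_obs = [18.1, 21.5, 24.8, 28.2, 31.5, 34.9, 38.2]
slope, intercept = fit_standard_curve(log10_ng, cq_obs)
efficiency = 10.0 ** (-1.0 / slope) - 1.0   # ~0.99, i.e. ~99%
unknown_ng = quantify(25.0, slope, intercept)
```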

  15. Non-destructive evaluation of UV pulse laser-induced damage performance of fused silica optics.

    PubMed

    Huang, Jin; Wang, Fengrui; Liu, Hongjie; Geng, Feng; Jiang, Xiaodong; Sun, Laixi; Ye, Xin; Li, Qingzhi; Wu, Weidong; Zheng, Wanguo; Sun, Dunlu

    2017-11-24

    The surface laser damage performance of fused silica optics is related to the distribution of surface defects. In this study, we used chemical etching assisted by ultrasound and magnetorheological finishing to modify defect distribution in a fused silica surface, resulting in fused silica samples with different laser damage performance. Non-destructive test methods such as UV laser-induced fluorescence imaging and photo-thermal deflection were used to characterize the surface defects that contribute to the absorption of UV laser radiation. Our results indicate that the two methods can quantitatively distinguish differences in the distribution of absorptive defects in fused silica samples subjected to different post-processing steps. The percentage of fluorescence defects and the weak absorption coefficient were strongly related to the damage threshold and damage density of fused silica optics, as confirmed by the correlation curves built from statistical analysis of experimental data. The results show that non-destructive evaluation methods such as laser-induced fluorescence and photo-thermal absorption can be effectively applied to estimate the damage performance of fused silica optics at 351 nm pulse laser radiation. This indirect evaluation method is effective for laser damage performance assessment of fused silica optics prior to utilization.

  16. Quantitative analysis of three chiral pesticide enantiomers by high-performance column liquid chromatography.

    PubMed

    Wang, Peng; Liu, Donghui; Gu, Xu; Jiang, Shuren; Zhou, Zhiqiang

    2008-01-01

    Methods for the enantiomeric quantitative determination of 3 chiral pesticides, paclobutrazol, myclobutanil, and uniconazole, and their residues in soil and water are reported. An effective chiral high-performance liquid chromatographic (HPLC)-UV method using an amylose tris(3,5-dimethylphenylcarbamate) (AD) column was developed for resolving the enantiomers and quantitative determination. The enantiomers were identified by a circular dichroism detector. Validation involved complete resolution of each pair of enantiomers, plus determination of linearity, precision, and limit of detection (LOD). The pesticide enantiomers were isolated by solvent extraction from soil and C18 solid-phase extraction from water. The 2 enantiomers of each of the 3 pesticides could be completely separated on the AD column using an n-hexane/isopropanol mobile phase. The linearity and precision results indicated that the method was reliable for the quantitative analysis of the enantiomers. LODs were 0.025, 0.05, and 0.05 mg/kg for each enantiomer of paclobutrazol, myclobutanil, and uniconazole, respectively. Recovery and precision data showed that the pretreatment procedures were satisfactory for enantiomer extraction and cleanup. This method can be used for optical purity determination of technical material and analysis of environmental residues.

  17. A longitudinal evaluation of performance of automated BCR-ABL1 quantitation using cartridge-based detection system.

    PubMed

    Enjeti, Anoop; Granter, Neil; Ashraf, Asma; Fletcher, Linda; Branford, Susan; Rowlings, Philip; Dooley, Susan

    2015-10-01

    An automated cartridge-based detection system (GeneXpert; Cepheid) is being widely adopted in low-throughput laboratories for monitoring BCR-ABL1 transcript in chronic myelogenous leukaemia. This Australian study evaluated the longitudinal performance-specific characteristics of the automated system. The automated cartridge-based system was compared prospectively with the manual qRT-PCR-based reference method at SA Pathology, Adelaide, over a period of 2.5 years. A conversion factor determination was followed by four re-validations. Peripheral blood samples (n = 129) with international scale (IS) values within detectable range were selected for assessment. The mean bias, proportion of results within specified fold difference (2-, 3- and 5-fold), the concordance rate of major molecular remission (MMR) and concordance across a range of IS values on paired samples were evaluated. The initial conversion factor for the automated system was determined as 0.43. Except for the second re-validation, where a negative bias of 1.9-fold was detected, all other biases fell within desirable limits. A cartridge-specific conversion factor and efficiency value was introduced and the conversion factor was confirmed to be stable in subsequent re-validation cycles. Concordance with the reference method/laboratory at >0.1-≤10 IS was 78.2% and at ≤0.001 was 80%, compared to 86.8% in the >0.01-≤0.1 IS range. The overall and MMR concordance were 85.7% and 94% respectively, for samples that fell within ± 5-fold of the reference laboratory value over the entire period of study. The conversion factor and performance-specific characteristics of the automated system were longitudinally stable in the clinically relevant range, following the manufacturer's introduction of lot-specific efficiency values.
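    The arithmetic behind conversion-factor harmonization and fold-difference agreement can be sketched briefly. The 0.43 conversion factor comes from the abstract; the raw ratio and reference value below are hypothetical:

```python
import math

def to_international_scale(raw_ratio, conversion_factor):
    """Convert a raw BCR-ABL1/control ratio (%) to the International Scale (IS)."""
    return raw_ratio * conversion_factor

def fold_difference(is_a, is_b):
    """Symmetric fold difference between two IS results (always >= 1)."""
    return 10 ** abs(math.log10(is_a / is_b))

cf = 0.43                                     # initial conversion factor from the study
test_is = to_international_scale(0.25, cf)    # hypothetical raw ratio of 0.25%
ref_is = 0.10                                 # hypothetical reference-method IS result
fd = fold_difference(test_is, ref_is)
within_2fold = fd <= 2.0                      # one of the agreement bands (2-, 3-, 5-fold)
```

    Taking the absolute log-ratio makes the fold difference symmetric, so a result 2-fold above the reference and one 2-fold below are treated identically when tallying the proportion of samples within each band.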

  18. Comparative Evaluation of Four Real-Time PCR Methods for the Quantitative Detection of Epstein-Barr Virus from Whole Blood Specimens.

    PubMed

    Buelow, Daelynn; Sun, Yilun; Tang, Li; Gu, Zhengming; Pounds, Stanley; Hayden, Randall

    2016-07-01

    Monitoring of Epstein-Barr virus (EBV) load in immunocompromised patients has become integral to their care. An increasing number of reagents are available for quantitative detection of EBV; however, few comparative data have been published. Four real-time PCR systems (one using laboratory-developed reagents and three using analyte-specific reagents) were compared with one another for detection of EBV from whole blood. Whole blood specimens seeded with EBV were used to determine quantitative linearity, analytical measurement range, lower limit of detection, and coefficient of variation (CV) for each assay. Retrospective testing of 198 clinical samples was performed in parallel with all methods; results were compared to determine relative quantitative and qualitative performance. All assays showed similar performance. No significant difference was found in limit of detection (3.12-3.49 log10 copies/mL; P = 0.37). A strong qualitative correlation was seen with all assays that used clinical samples (positive detection rates of 89.5%-95.8%). Quantitative correlation of clinical samples across assays was also seen in pairwise regression analysis, with R² ranging from 0.83 to 0.95. Normalizing clinical sample results to IU/mL did not alter the quantitative correlation between assays. Quantitative EBV detection by real-time PCR can be performed over a wide linear dynamic range, using three different commercially available reagents and laboratory-developed methods. EBV was detected with comparable sensitivity and quantitative correlation for all assays. Copyright © 2016 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.
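    The pairwise quantitative comparison reduces to ordinary linear regression on paired log10 viral loads. A minimal sketch, using hypothetical paired results rather than the study's data:

```python
import numpy as np

# Hypothetical paired log10 EBV loads (copies/mL) from two assays on the
# same clinical samples.
assay_a = np.array([2.8, 3.4, 4.1, 4.9, 5.6, 6.2])
assay_b = np.array([2.6, 3.5, 4.0, 5.1, 5.5, 6.4])

# Pairwise linear regression, as used to assess quantitative correlation.
slope, intercept = np.polyfit(assay_a, assay_b, 1)
r = np.corrcoef(assay_a, assay_b)[0, 1]
r_squared = r ** 2   # the R^2 statistic reported per assay pair
```

    A slope near 1 with high R² indicates the two assays track each other proportionally across the dynamic range, even if their absolute calibrations differ.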

  19. Quantitative evaluation of optically induced disorientation.

    DOT National Transportation Integrated Search

    1970-01-01

    The purpose of this study was to establish quantitatively and systematically the association between the speed of movement of an optical environment and the extent of disorientation experienced by an individual viewing this environment. The degree of...

  20. A TEM quantitative evaluation of strengthening in an Mg-RE alloy reinforced with SiC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cabibbo, Marcello, E-mail: m.cabibbo@univpm.it; Spigarelli, Stefano

    2011-10-15

    Magnesium alloys containing rare earth elements are known to have high specific strength, good creep and corrosion resistance up to 523 K. The addition of SiC ceramic particles strengthens the metal matrix composite resulting in better wear and creep resistance while maintaining good machinability. The role of the reinforcement particles in enhancing strength can be quantitatively evaluated using transmission electron microscopy (TEM). This paper presents a quantitative evaluation of the different strengthening contributions, determined through TEM inspections, in an SiC Mg-RE composite alloy containing yttrium, neodymium, gadolinium and dysprosium. Compression tests at temperatures ranging between 290 and 573 K weremore » carried out. The microstructure strengthening mechanism was studied for all the compression conditions. Strengthening was compared to the mechanical results and the way the different contributions were combined is also discussed and justified. - Research Highlights: {yields} TEM yield strengthening terms evaluation on a Mg-RE SiC alloy. {yields} The evaluation has been extended to different compression temperature conditions. {yields} Linear and Quadratic sum has been proposed and validated. {yields} Hall-Petch was found to be the most prominent strengthening contributions.« less

  1. Performance Evaluation of Nano-JASMINE

    NASA Astrophysics Data System (ADS)

    Hatsutori, Y.; Kobayashi, Y.; Gouda, N.; Yano, T.; Murooka, J.; Niwa, Y.; Yamada, Y.

    We report the results of performance evaluation of the first Japanese astrometry satellite, Nano-JASMINE. It is a very small satellite and weighs only 35 kg. It aims to carry out astrometry measurement of nearby bright stars (z ≤ 7.5 mag) with an accuracy of 3 milli-arcseconds. Nano-JASMINE will be launched by Cyclone-4 rocket in August 2011 from Brazil. The current status is in the process of evaluating the performances. A series of performance tests and numerical analysis were conducted. As a result, the engineering model (EM) of the telescope was measured to be achieving a diffraction-limited performance and confirmed that it has enough performance for scientific astrometry.

  2. EVALUATION OF QUANTITATIVE REAL TIME PCR FOR THE MEASUREMENT OF HELICOBATER PYLORI AT LOW CONCENTRATIONS IN DRINKING WATER

    EPA Science Inventory

    Aims: To determine the performance of a rapid, real-time polymerase chain reaction (PCR) method for the detection and quantitative analysis of Helicobacter pylori at low concentrations in drinking water.

    Methods and Results: A rapid DNA extraction and quantitative PCR (QPCR)...

  3. Multicenter Evaluation of a Commercial Cytomegalovirus Quantitative Standard: Effects of Commutability on Interlaboratory Concordance

    PubMed Central

    Shahbazian, M. D.; Valsamakis, A.; Boonyaratanakornkit, J.; Cook, L.; Pang, X. L.; Preiksaitis, J. K.; Schönbrunner, E. R.; Caliendo, A. M.

    2013-01-01

    Commutability of quantitative reference materials has proven important for reliable and accurate results in clinical chemistry. As international reference standards and commercially produced calibration material have become available to address the variability of viral load assays, the degree to which such materials are commutable and the effect of commutability on assay concordance have been questioned. To investigate this, 60 archived clinical plasma samples, which previously tested positive for cytomegalovirus (CMV), were retested by five different laboratories, each using a different quantitative CMV PCR assay. Results from each laboratory were calibrated both with lab-specific quantitative CMV standards (“lab standards”) and with common, commercially available standards (“CMV panel”). Pairwise analyses among laboratories were performed using mean results from each clinical sample, calibrated first with lab standards and then with the CMV panel. Commutability of the CMV panel was determined based on difference plots for each laboratory pair showing plotted values of standards that were within the 95% prediction intervals for the clinical specimens. Commutability was demonstrated for 6 of 10 laboratory pairs using the CMV panel. In half of these pairs, use of the CMV panel improved quantitative agreement compared to use of lab standards. Two of four laboratory pairs for which the CMV panel was noncommutable showed reduced quantitative agreement when that panel was used as a common calibrator. Commutability of calibration material varies across different quantitative PCR methods. Use of a common, commutable quantitative standard can improve agreement across different assays; use of a noncommutable calibrator can reduce agreement among laboratories. PMID:24025907
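    The commutability assessment in this study uses difference plots with 95% prediction intervals from the clinical specimens. A simplified sketch of that idea, approximating the interval as mean ± 1.96 SD of the paired differences (all values hypothetical):

```python
import numpy as np

# Hypothetical between-assay differences (log10 IU/mL) for paired results
# on clinical samples from one laboratory pair.
clin_diff = np.array([0.10, -0.05, 0.20, 0.00, 0.15, -0.10, 0.05, 0.12,
                      -0.02, 0.08])

mean_d = clin_diff.mean()
sd_d = clin_diff.std(ddof=1)
lower, upper = mean_d - 1.96 * sd_d, mean_d + 1.96 * sd_d

def commutable(standard_diff):
    """A calibration standard behaves like clinical samples (is commutable
    for this assay pair) if its between-assay difference falls inside the
    interval derived from the clinical specimens."""
    return lower <= standard_diff <= upper
```

    A proper prediction interval also accounts for concentration dependence and the uncertainty of the fitted difference line, which this constant-interval simplification ignores.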

  4. Quantitative aspects of inductively coupled plasma mass spectrometry

    PubMed Central

    Wagner, Barbara

    2016-01-01

    Accurate determination of elements in various kinds of samples is essential for many areas, including environmental science, medicine, as well as industry. Inductively coupled plasma mass spectrometry (ICP-MS) is a powerful tool enabling multi-elemental analysis of numerous matrices with high sensitivity and good precision. Various calibration approaches can be used to perform accurate quantitative measurements by ICP-MS. They include the use of pure standards, matrix-matched standards, or relevant certified reference materials, assuring traceability of the reported results. This review critically evaluates the advantages and limitations of different calibration approaches, which are used in quantitative analyses by ICP-MS. Examples of such analyses are provided. This article is part of the themed issue ‘Quantitative mass spectrometry’. PMID:27644971

  5. Evaluation of EIT system performance.

    PubMed

    Yasin, Mamatjan; Böhm, Stephan; Gaggero, Pascal O; Adler, Andy

    2011-07-01

    An electrical impedance tomography (EIT) system images internal conductivity from surface electrical stimulation and measurement. Such systems necessarily comprise multiple design choices, from cables and hardware design to calibration and image reconstruction. In order to compare EIT systems and study the consequences of changes in system performance, this paper describes a systematic approach to evaluating the performance of EIT systems. The system to be tested is connected to a saline phantom in which calibrated contrasting test objects are systematically positioned using a position controller. A set of evaluation parameters are proposed which characterize (i) data and image noise, (ii) data accuracy, (iii) detectability of single contrasts and distinguishability of multiple contrasts, and (iv) accuracy of the reconstructed image (amplitude, resolution, position and ringing). Using this approach, we evaluate three different EIT systems and illustrate the use of these tools to evaluate and compare performance. To facilitate the use of this approach, all details of the phantom, test objects and position controller design are made publicly available, including the source code of the evaluation and reporting software.

  6. Diagnostic performance of different measurement methods for lung nodule enhancement at quantitative contrast-enhanced computed tomography

    NASA Astrophysics Data System (ADS)

    Wormanns, Dag; Klotz, Ernst; Dregger, Uwe; Beyer, Florian; Heindel, Walter

    2004-05-01

    Lack of angiogenesis virtually excludes malignancy of a pulmonary nodule; assessment with quantitative contrast-enhanced CT (QECT) requires a reliable enhancement measurement technique. The diagnostic performance of different measurement methods in the distinction between malignant and benign nodules was evaluated. QECT (an unenhanced scan and 4 post-contrast scans) was performed in 48 pulmonary nodules (12 malignant, 12 benign, 24 indeterminate). Nodule enhancement was the difference between the highest nodule density at any post-contrast scan and the unenhanced scan. Enhancement was determined with: A) the standard 2D method; B) a 3D method consisting of segmentation, removal of peripheral structures and density averaging. Enhancement curves were evaluated for their plausibility using a predefined set of criteria. Using a threshold of 20 HU, sensitivity and specificity were 100% and 33% for the 2D method, and 92% and 55% for the 3D method, respectively. One malignant nodule did not show significant enhancement with method B due to adjacent atelectasis, which disappeared within the few minutes of the QECT examination. Better discrimination between benign and malignant lesions was achieved with a slightly higher threshold than proposed in the literature. Application of the plausibility criteria to the enhancement curves revealed fewer plausibility faults with the 3D method. A new 3D method for analysis of QECT scans yielded fewer artefacts and better specificity in the discrimination between benign and malignant pulmonary nodules when an appropriate enhancement threshold was used. Nevertheless, QECT results must be interpreted with care.
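    The threshold trade-off described above (a 20 HU cutoff versus a slightly higher one) is a plain sensitivity/specificity calculation. A sketch with invented enhancement values and labels, not the study's measurements:

```python
# Hypothetical QECT nodule enhancements (HU) with known benign/malignant labels.
enhancements = [45, 38, 52, 25, 12, 18, 30, 8, 15, 41]
malignant =    [True, True, True, True, False, False, False, False, False, True]

def sens_spec(threshold):
    """Classify a nodule as malignant when its enhancement exceeds the threshold,
    then tally sensitivity (TP rate) and specificity (TN rate)."""
    tp = sum(e > threshold and m for e, m in zip(enhancements, malignant))
    fn = sum(e <= threshold and m for e, m in zip(enhancements, malignant))
    tn = sum(e <= threshold and not m for e, m in zip(enhancements, malignant))
    fp = sum(e > threshold and not m for e, m in zip(enhancements, malignant))
    return tp / (tp + fn), tn / (tn + fp)

sens20, spec20 = sens_spec(20)   # the 20 HU threshold proposed in the literature
sens30, spec30 = sens_spec(30)   # a slightly higher threshold
```

    In this toy data, raising the threshold trades a little sensitivity for better specificity, which mirrors the discrimination behaviour the abstract reports.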

  7. Comparative analysis of quantitative efficiency evaluation methods for transportation networks

    PubMed Central

    He, Yuxin; Hong, Jian

    2017-01-01

    An effective evaluation of transportation network efficiency can offer guidance for the optimal control of urban traffic. Based on the introduction and mathematical analysis of three quantitative evaluation methods for transportation network efficiency, this paper compares the information they measure, including network structure, traffic demand, travel choice behavior and other factors that affect network efficiency. Accordingly, the applicability of the various evaluation methods is discussed. Analysis of different transportation network examples shows that the Q-H method reflects well the influence of network structure, traffic demand and user route choice behavior on transportation network efficiency. In addition, the transportation network efficiency measured by this method and Braess's Paradox can be explained in terms of each other, indicating a better evaluation of the real operating condition of a transportation network. From the analysis of the network efficiency calculated by the Q-H method, it can also be concluded that a specific appropriate demand exists for a given transportation network. Meanwhile, under fixed demand, it is possible to identify both the critical network structure that guarantees the stability and basic operation of the network and the specific network structure that yields the largest value of transportation network efficiency.

  8. Evaluating 'good governance': The development of a quantitative tool in the Greater Serengeti Ecosystem.

    PubMed

    Kisingo, Alex; Rollins, Rick; Murray, Grant; Dearden, Phil; Clarke, Marlea

    2016-10-01

    Protected areas (PAs) can provide important benefits to conservation and to communities. A key factor in the effective delivery of these benefits is the role of governance. There has been a growth in research developing frameworks to evaluate 'good' PA governance, usually drawing on a set of principles that are associated with groups of indicators. In contrast to dominant qualitative approaches, this paper describes the development of a quantitative method for measuring effectiveness of protected area governance, as perceived by stakeholders in the Greater Serengeti Ecosystem in Tanzania. The research developed a quantitative method for developing effectiveness measures of PA governance, using a set of 65 statements related to governance principles developed from a literature review. The instrument was administered to 389 individuals from communities located near PAs in the Greater Serengeti Ecosystem. The results of a factor analysis suggest that statements load onto 10 factors that demonstrate high psychometric validity as measured by factor loadings, explained variance, and Cronbach's alpha reliability. The ten common factors that were extracted were: 1) legitimacy, 2) transparency and accountability, 3) responsiveness, 4) fairness, 5) participation, 6) ecosystem based management (EBM) and connectivity, 7) resilience, 8) achievements, 9) consensus orientation, and 10) power. The paper concludes that quantitative surveys can be used to evaluate governance of protected areas from a community-level perspective. Copyright © 2016 Elsevier Ltd. All rights reserved.
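    The psychometric validity reported above rests on standard statistics; Cronbach's alpha for a group of statements loading on one factor can be computed directly from the item responses. The Likert data below are invented for illustration:

```python
import numpy as np

# Hypothetical 5-point Likert responses: rows = respondents, columns = the
# statements loading on one governance factor (e.g. transparency).
items = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
])

k = items.shape[1]                              # number of items in the factor
item_vars = items.var(axis=0, ddof=1).sum()     # sum of per-item variances
total_var = items.sum(axis=1).var(ddof=1)       # variance of respondents' totals

# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances)/total variance)
alpha = (k / (k - 1)) * (1 - item_vars / total_var)
```

    Values above roughly 0.7 are conventionally read as acceptable internal consistency, which is the kind of reliability evidence the factor analysis in the study reports.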

  9. In vivo quantitative evaluation of tooth color with hand-held colorimeter and custom template.

    PubMed

    Shimada, Kazuki; Kakehashi, Yoshiyuki; Matsumura, Hideo; Tanoue, Naomi

    2004-04-01

    This article presents a technique for quantitatively evaluating the color of teeth, as well as color change in restorations and tooth surfaces. Through use of a custom template made of a thermoplastic polymer and a dental colorimeter, tooth surface color can be recorded periodically at the same location intraorally.

  10. Taking stock of four decades of quantitative research on stakeholder participation and evaluation use: a systematic map.

    PubMed

    Daigneault, Pierre-Marc

    2014-08-01

    Stakeholder participation and evaluation use have attracted a lot of attention from practitioners, theorists and researchers. A common hypothesis is that participation is positively associated with evaluation use. Whereas the number of empirical studies conducted on this topic is impressive, quantitative research has held a minority position within this scientific production. This study mobilizes systematic review methods to 'map' the empirical literature that has quantitatively studied participation and use. The goal is to take stock and assess the strength of evidence of this literature (but not to synthesize the findings) and, based on this assessment, to provide directions for future research. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. Quantitative evaluation of manufacturability and performance for ILT produced mask shapes using a single-objective function

    NASA Astrophysics Data System (ADS)

    Choi, Heon; Wang, Wei-long; Kallingal, Chidam

    2015-03-01

    The continuous scaling of semiconductor devices is quickly outpacing the resolution improvements of lithographic exposure tools and processes. This one-sided progression has pushed optical lithography to its limits, resulting in the use of well-known techniques such as Sub-Resolution Assist Features (SRAFs), Source-Mask Optimization (SMO), and double-patterning, to name a few. These techniques, belonging to a larger category of Resolution Enhancement Techniques (RET), have extended the resolution capabilities of optical lithography at the cost of increasing mask complexity, and therefore cost. One such technique, called Inverse Lithography Technique (ILT), has attracted much attention for its ability to produce the best possible theoretical mask design. ILT treats the mask design process as an inverse problem, where the known transformation from mask to wafer is carried out backwards using a rigorous mathematical approach. One practical problem in the application of ILT is the resulting contour-like mask shapes that must be "Manhattanized" (composed of straight edges and 90-deg corners) in order to produce a manufacturable mask. This conversion process inherently degrades the mask quality as it is a departure from the "optimal mask" represented by the continuously curved shapes produced by ILT. However, simpler masks composed of longer straight edges reduce the mask cost as they lower the shot count and save mask writing time during mask fabrication, resulting in a conflict between manufacturability and performance for ILT-produced masks [1,2]. In this study, various commonly used metrics will be combined into an objective function to produce a single number to quantitatively measure a particular ILT solution's ability to balance mask manufacturability and RET performance. Several metrics that relate to mask manufacturing costs (i.e. mask vertex count, ILT computation runtime) are appropriately weighted against metrics that represent RET capability (i.e. process

  12. Methods for quantitative and qualitative evaluation of vaginal microflora during menstruation.

    PubMed Central

    Onderdonk, A B; Zamarchi, G R; Walsh, J A; Mellor, R D; Muñoz, A; Kass, E H

    1986-01-01

    The quantitative and qualitative changes in the bacterial flora of the vagina during menstruation have received inadequate study. Similarly, the effect of vaginal tampons on the microbial flora as well as the relationship between the microbial flora of the vagina and that of the tampon has not been adequately evaluated. The purposes of the present study were (i) to develop quantitative methods for studying the vaginal flora and the flora of tampons obtained during menstruation and (ii) to determine whether there were differences between the microflora of the tampon and that of the vaginal vault. Tampon and swab samples were obtained at various times from eight young healthy volunteers for 8 to 10 menstrual cycles. Samples consisted of swabs from women wearing menstrual pads compared with swab and tampon samples taken at various times during the menstrual cycle. Samples were analyzed for total facultative and anaerobic bacterial counts, and the six dominant bacterial species in each culture were identified. Statistical evaluation of the results indicates that total bacterial counts decreased during menstruation and that swab and tampon samples yielded similar total counts per unit weight of sample. The numbers of bacteria in tampons tended to be lower than in swabs taken at the same time. Overall, during menstruation, the concentrations of lactobacilli declined, but otherwise there was little difference among the species found during menstruation compared with those found in intermenstrual samples. Cotton tampons had little discernible effect on the microbial flora. PMID:3954346

  13. Detection of Prostate Cancer: Quantitative Multiparametric MR Imaging Models Developed Using Registered Correlative Histopathology.

    PubMed

    Metzger, Gregory J; Kalavagunta, Chaitanya; Spilseth, Benjamin; Bolan, Patrick J; Li, Xiufeng; Hutter, Diane; Nam, Jung W; Johnson, Andrew D; Henriksen, Jonathan C; Moench, Laura; Konety, Badrinath; Warlick, Christopher A; Schmechel, Stephen C; Koopmeiners, Joseph S

    2016-06-01

    Purpose To develop multiparametric magnetic resonance (MR) imaging models to generate a quantitative, user-independent, voxel-wise composite biomarker score (CBS) for detection of prostate cancer by using coregistered correlative histopathologic results, and to compare performance of CBS-based detection with that of single quantitative MR imaging parameters. Materials and Methods Institutional review board approval and informed consent were obtained. Patients with a diagnosis of prostate cancer underwent multiparametric MR imaging before surgery for treatment. All MR imaging voxels in the prostate were classified as cancer or noncancer on the basis of coregistered histopathologic data. Predictive models were developed by using more than one quantitative MR imaging parameter to generate CBS maps. Model development and evaluation of quantitative MR imaging parameters and CBS were performed separately for the peripheral zone and the whole gland. Model accuracy was evaluated by using the area under the receiver operating characteristic curve (AUC), and confidence intervals were calculated with the bootstrap procedure. The improvement in classification accuracy was evaluated by comparing the AUC for the multiparametric model and the single best-performing quantitative MR imaging parameter at the individual level and in aggregate. Results Quantitative T2, apparent diffusion coefficient (ADC), volume transfer constant (K(trans)), reflux rate constant (kep), and area under the gadolinium concentration curve at 90 seconds (AUGC90) were significantly different between cancer and noncancer voxels (P < .001), with ADC showing the best accuracy (peripheral zone AUC, 0.82; whole gland AUC, 0.74). Four-parameter models demonstrated the best performance in both the peripheral zone (AUC, 0.85; P = .010 vs ADC alone) and whole gland (AUC, 0.77; P = .043 vs ADC alone). Individual-level analysis showed statistically significant improvement in AUC in 82% (23 of 28) and 71% (24 of 34
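    The AUC comparisons above (single-parameter ADC versus a multiparametric composite score) can be reproduced with the rank-based Mann-Whitney formulation of AUC. The voxel values and labels below are hypothetical, chosen only so that lower ADC and higher composite score suggest cancer:

```python
# Hypothetical voxel-wise values with ground-truth labels (1 = cancer voxel).
labels = [1, 1, 1, 1, 0, 0, 0, 0, 0]
adc    = [0.9, 1.0, 1.1, 1.3, 1.2, 1.5, 1.6, 1.4, 1.7]   # lower in cancer
cbs    = [0.80, 0.75, 0.70, 0.65, 0.60, 0.30, 0.20, 0.40, 0.25]  # higher in cancer

def auc(scores, labels):
    """Mann-Whitney AUC: probability a random positive outranks a random negative,
    counting ties as half a win."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

auc_adc = auc([-a for a in adc], labels)  # negate ADC so higher score = cancer
auc_cbs = auc(cbs, labels)
```

    Comparing the two AUCs voxel-wise is the same kind of question the study answers with bootstrap confidence intervals: does combining parameters into a composite score beat the best single parameter?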

  14. Bridging the qualitative-quantitative divide: Experiences from conducting a mixed methods evaluation in the RUCAS programme.

    PubMed

    Makrakis, Vassilios; Kostoulas-Makrakis, Nelly

    2016-02-01

    Quantitative and qualitative approaches to planning and evaluation in education for sustainable development have often been treated by practitioners from a single research paradigm. This paper discusses the utility of mixed method evaluation designs which integrate qualitative and quantitative data through a sequential transformative process. Sequential mixed method data collection strategies involve collecting data in an iterative process whereby data collected in one phase contribute to data collected in the next. This is done through examples from a programme addressing the 'Reorientation of University Curricula to Address Sustainability (RUCAS): A European Commission Tempus-funded Programme'. It is argued that the two approaches are complementary and that there are significant gains from combining both. Using methods from both research paradigms does not, however, mean that the inherent differences among epistemologies and methodologies should be neglected. Based on this experience, it is recommended that using a sequential transformative mixed method evaluation can produce more robust results than could be accomplished using a single approach in programme planning and evaluation focussed on education for sustainable development. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Quantitative performance of a polarization diffraction grating polarimeter encoded onto two liquid-crystal-on-silicon displays

    NASA Astrophysics Data System (ADS)

    Cofré, Aarón; Vargas, Asticio; Torres-Ruiz, Fabián A.; Campos, Juan; Lizana, Angel; del Mar Sánchez-López, María; Moreno, Ignacio

    2017-11-01

    We present a quantitative analysis of the performance of a complete snapshot polarimeter based on a polarization diffraction grating (PDGr). The PDGr is generated in a common-path polarization interferometer with a Z optical architecture that uses two liquid-crystal-on-silicon (LCoS) displays to imprint two different phase-only diffraction gratings onto two orthogonal linear states of polarization. As a result, we obtain a programmable PDGr capable of acting as a simultaneous polarization state generator (PSG), yielding diffraction orders with different states of polarization. The same system is also shown to operate as a polarization state analyzer (PSA), and is therefore useful for the realization of a snapshot polarimeter. We analyze its performance using quantitative metrics such as the condition number, and verify its reliability for the detection of states of polarization.
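    The condition-number metric mentioned above applies to the polarimeter's measurement matrix, whose rows are the Stokes vectors of the analyzer states. A sketch with an assumed six-state analyzer set (H, V, ±45°, R, L), not the PDGr's actual diffraction orders:

```python
import numpy as np

# Hypothetical polarization state analyzer: each row is 0.5 * (1, s1, s2, s3)
# for one analyzer state.
W = 0.5 * np.array([
    [1,  1,  0,  0],   # horizontal
    [1, -1,  0,  0],   # vertical
    [1,  0,  1,  0],   # diagonal +45
    [1,  0, -1,  0],   # diagonal -45
    [1,  0,  0,  1],   # right circular
    [1,  0,  0, -1],   # left circular
])

# Condition number of the measurement matrix: the closer it is to the
# theoretical minimum (sqrt(3) for a full Stokes polarimeter), the less
# noise is amplified when inverting intensities to recover the Stokes vector.
cond = np.linalg.cond(W)

# Recover a Stokes vector from simulated intensities via the pseudoinverse.
s_true = np.array([1.0, 0.6, 0.0, 0.8])
intensities = W @ s_true
s_est = np.linalg.pinv(W) @ intensities
```

    This six-state set actually attains the sqrt(3) bound, which is why balanced analyzer geometries are favoured in polarimeter design.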

  16. Quantitative ultrasonic evaluation of engineering properties in metals, composites and ceramics

    NASA Technical Reports Server (NTRS)

    Vary, A.

    1980-01-01

    Ultrasonic technology from the perspective of nondestructive evaluation approaches to material strength prediction and property verification is reviewed. Emergent advanced technology involving quantitative ultrasonic techniques for materials characterization is described. Ultrasonic methods are particularly useful in this area because they involve mechanical elastic waves that are strongly modulated by the same morphological factors that govern mechanical strength and dynamic failure processes. It is emphasized that the technology is in its infancy and that much effort is still required before all the available techniques can be transferred from laboratory to industrial environments.

  17. Comprehensive evaluation of direct injection mass spectrometry for the quantitative profiling of volatiles in food samples

    PubMed Central

    2016-01-01

    Although qualitative strategies based on direct injection mass spectrometry (DIMS) have recently emerged as an alternative for the rapid classification of food samples, the potential of these approaches in quantitative tasks has scarcely been addressed to date. In this paper, the applicability of different multivariate regression procedures to data collected by DIMS from simulated mixtures has been evaluated. The most relevant factors affecting quantitation, such as random noise, the number of calibration samples, type of validation, mixture complexity and similarity of mass spectra, were also considered and comprehensively discussed. Based on the conclusions drawn from simulated data, and as an example of application, experimental mass spectral fingerprints collected by direct thermal desorption coupled to mass spectrometry were used for the quantitation of major volatiles in Thymus zygis subsp. zygis chemotypes. The results obtained, validated with the direct thermal desorption coupled to gas chromatography–mass spectrometry method here used as a reference, show the potential of DIMS approaches for the fast and precise quantitative profiling of volatiles in foods. This article is part of the themed issue ‘Quantitative mass spectrometry’. PMID:27644978

  18. Virginia Transit Performance Evaluation Package (VATPEP).

    DOT National Transportation Integrated Search

    1987-01-01

    The Virginia Transit Performance Evaluation Package (VATPEP), a computer software package, is documented. This is the computerized version of the methodology used by the Virginia Department of Transportation to evaluate the performance of public tran...

  19. Quantitative evaluation of the disintegration of orally rapid disintegrating tablets by X-ray computed tomography.

    PubMed

    Otsuka, Makoto; Yamanaka, Azusa; Uchino, Tomohiro; Otsuka, Kuniko; Sadamoto, Kiyomi; Ohshima, Hiroyuki

    2012-01-01

    To measure the rapid disintegration of Oral Disintegrating Tablets (ODT), a new test (XCT) was developed using X-ray computed tomography (X-ray CT). Placebo ODT, a rapid disintegration candy (RDC) and Gaster®-D-Tablets (GAS) were used as model samples. All these ODTs were used to measure oral disintegration time (DT) in distilled water at 37±2°C by XCT. DTs were affected by the width of the mesh screen and by the degree to which the tablet holder vibrated because of air bubbles. An in-vivo tablet disintegration test was performed for RDC using 11 volunteers. DT by the in-vivo method was significantly longer than that obtained using the conventional tester. The experimental conditions for XCT, such as the width of the mesh screen and the degree of vibration, were adjusted to be consistent with human DT values. Since DTs by the XCT method were almost the same as the human data, this method was able to quantitatively evaluate the rapid disintegration of ODT under the same conditions as inside the oral cavity. The DTs of four commercially available ODTs were then comparatively evaluated by the XCT method, the conventional tablet disintegration test, and the in-vivo method.

  20. How Students Process Equations in Solving Quantitative Synthesis Problems? Role of Mathematical Complexity in Students' Mathematical Performance

    ERIC Educational Resources Information Center

    Ibrahim, Bashirah; Ding, Lin; Heckler, Andrew F.; White, Daniel R.; Badeau, Ryan

    2017-01-01

    We examine students' mathematical performance on quantitative "synthesis problems" with varying mathematical complexity. Synthesis problems are tasks comprising multiple concepts typically taught in different chapters. Mathematical performance refers to the formulation, combination, and simplification of equations. Generally speaking,…

  1. A quantitative evaluation of the public response to climate engineering

    NASA Astrophysics Data System (ADS)

    Wright, Malcolm J.; Teagle, Damon A. H.; Feetham, Pamela M.

    2014-02-01

    Atmospheric greenhouse gas concentrations continue to increase, with CO2 passing 400 parts per million in May 2013. To avoid severe climate change and the attendant economic and social dislocation, existing energy efficiency and emissions control initiatives may need support from some form of climate engineering. As climate engineering will be controversial, there is a pressing need to inform the public and understand their concerns before policy decisions are taken. So far, engagement has been exploratory, small-scale or technique-specific. We depart from past research to draw on the associative methods used by corporations to evaluate brands. A systematic, quantitative and comparative approach for evaluating public reaction to climate engineering is developed. Its application reveals that the overall public evaluation of climate engineering is negative. Where there are positive associations they favour carbon dioxide removal (CDR) over solar radiation management (SRM) techniques. Therefore, as SRM techniques become more widely known they are more likely to elicit negative reactions. Two climate engineering techniques, enhanced weathering and cloud brightening, have indistinct concept images and so are less likely to draw public attention than other CDR or SRM techniques.

  2. A methodology to quantitatively evaluate the safety of a glazing robot.

    PubMed

    Lee, Seungyeol; Yu, Seungnam; Choi, Junho; Han, Changsoo

    2011-03-01

    A new construction method using robots is spreading widely among construction sites in order to overcome labour shortages and frequent construction accidents. Along with economic efficiency, safety is a very important factor when evaluating the use of construction robots on construction sites; however, safety is difficult to evaluate quantitatively compared with economic efficiency. In this study, we proposed a safety evaluation methodology based on two risk factors: the 'worker', characterized by posture load, and the 'work conditions', characterized by the work environment and the risk exposure time. The posture load evaluation reflects the risk of musculoskeletal disorders that can be caused by work posture and the risk of accidents that can be caused by reduced concentration. We evaluated the risk factors that may cause accidents such as falling, colliding, capsizing, and squeezing in work environments, and evaluated the operational risk by considering worker exposure time to risky work environments. With the results of the evaluations for each factor, we calculated the overall operational risk and deduced the improvement ratio in operational safety obtained by introducing a construction robot. To verify these results, we compared the safety of the existing manual method and the proposed robotic method for manipulating large glass panels. Copyright © 2010 Elsevier Ltd and The Ergonomics Society. All rights reserved.
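
    The composition of the two risk factors described above can be sketched as follows; the function name, the 0-1 scales, and all numeric values are illustrative assumptions, not the authors' actual weighting scheme:

```python
# Hedged sketch of a two-factor operational-risk composition. All scales
# and numbers are illustrative assumptions, not values from the study.

def operational_risk(posture_load, env_risk, exposure_time, total_time):
    """Worker factor (posture load, 0-1) combined with work-condition
    factors: environment risk (0-1) weighted by the fraction of the
    task time spent exposed to it."""
    return posture_load * env_risk * (exposure_time / total_time)

manual = operational_risk(posture_load=0.8, env_risk=0.7,
                          exposure_time=6, total_time=8)
robotic = operational_risk(posture_load=0.3, env_risk=0.7,
                           exposure_time=2, total_time=8)
improvement = (manual - robotic) / manual  # improvement ratio in safety
print(f"manual={manual:.3f}, robotic={robotic:.3f}, improvement={improvement:.0%}")
```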

  3. 48 CFR 436.201 - Evaluation of contractor performance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Construction 436.201 Evaluation of contractor performance. Preparation of performance evaluation reports. In addition to the requirements of FAR 36.201, performance evaluation reports shall be prepared for indefinite... of services to be ordered exceeds $500,000.00. For these contracts, performance evaluation reports...

  4. Synthetic Vision Systems in GA Cockpit-Evaluation of Basic Maneuvers Performed by Low Time GA Pilots During Transition from VMC to IMC

    NASA Technical Reports Server (NTRS)

    Takallu, M. A.; Wong, D. T.; Uenking, M. D.

    2002-01-01

    An experimental investigation was conducted to study the effectiveness of modern flight displays in general aviation cockpits for mitigating Low Visibility Loss of Control and Controlled Flight Into Terrain accidents. A total of 18 General Aviation (GA) pilots holding a private pilot certificate with single-engine land rating, and with no instrument training beyond private pilot license requirements, were recruited to evaluate three different display concepts in a fixed-base flight simulator at the NASA Langley Research Center's General Aviation Work Station. Evaluation pilots were asked to continue flight from Visual Meteorological Conditions (VMC) into Instrument Meteorological Conditions (IMC) while performing a series of 4 basic precision maneuvers. During the experiment, relevant pilot/vehicle performance variables, pilot control inputs and physiological data were recorded. Human factors questionnaires and interviews were administered after each scenario. Qualitative and quantitative data have been analyzed and the results are presented here. Pilot performance deviations from the established target values (errors) were computed and compared with the FAA Practical Test Standards. Results of the quantitative data indicate that evaluation pilots committed substantially fewer errors when using the Synthetic Vision Systems (SVS) displays than when they were using conventional instruments. Results of the qualitative data indicate that evaluation pilots perceived themselves to have a much higher level of situation awareness while using the SVS display concept.

  5. Evaluation of reference genes for quantitative RT-PCR in Lolium temulentum under abiotic stress

    USDA-ARS?s Scientific Manuscript database

    Lolium temulentum is a valuable model grass species for the study of stress in forage and turf grasses. Gene expression analysis by quantitative real time RT-PCR relies on the use of proper internal standards. The aim of this study was to identify and evaluate reference genes for use in real-time q...

  6. Quantitative Evaluation of a First Year Seminar Program: Relationships to Persistence and Academic Success

    ERIC Educational Resources Information Center

    Jenkins-Guarnieri, Michael A.; Horne, Melissa M.; Wallis, Aaron L.; Rings, Jeffrey A.; Vaughan, Angela L.

    2015-01-01

    In the present study, we conducted a quantitative evaluation of a novel First Year Seminar (FYS) program with a coordinated curriculum implemented at a public, four-year university to assess its potential role in undergraduate student persistence decisions and academic success. Participants were 2,188 first-year students, 342 of whom completed the…

  7. 48 CFR 1252.216-72 - Performance evaluation plan.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Performance evaluation....216-72 Performance evaluation plan. As prescribed in (TAR) 48 CFR 1216.406(b), insert the following clause: Performance Evaluation Plan (OCT 1994) (a) A Performance Evaluation Plan shall be unilaterally...

  8. Quantitative evaluation improves specificity of myocardial perfusion SPECT in the assessment of functionally significant intermediate coronary artery stenoses: a comparative study with fractional flow reserve measurements.

    PubMed

    Sahiner, Ilgin; Akdemir, Umit O; Kocaman, Sinan A; Sahinarslan, Asife; Timurkaynak, Timur; Unlu, Mustafa

    2013-02-01

    Myocardial perfusion SPECT (MPS) is a noninvasive method commonly used for assessment of the hemodynamic significance of intermediate coronary stenoses. Fractional flow reserve (FFR) measurement is a well-validated invasive method used for the evaluation of intermediate stenoses. We aimed to determine the association between MPS and FFR findings in intermediate degree stenoses and evaluate the added value of quantification in MPS. Fifty-eight patients who underwent intracoronary pressure measurement in the catheterization laboratory to assess the physiological significance of intermediate (40-70%) left anterior descending (LAD) artery lesions, and who also underwent stress myocardial perfusion SPECT either for the assessment of an intermediate stenosis or for suspected coronary artery disease were analyzed retrospectively in the study. Quantitative analysis was performed using the 4DMSPECT program, with visual assessment performed by two experienced nuclear medicine physicians blinded to the angiographic findings. Summed stress scores (SSS) and summed difference scores (SDS) in the LAD artery territory according to the 20 segment model were calculated. A summed stress score of ≥ 3 and an SDS of ≥ 2 were assumed as pathologic, indicating significance of the lesion; a cutoff value of 0.75 was used to define abnormal FFR. Both visual and quantitative assessment results were compared with FFR using Chi-square (χ²) test. The mean time interval between two studies was 13 ± 11 days. FFR was normal in 45 and abnormal in 13 patients. Considering the FFR results as the gold standard method for assessing the significance of the lesion, the sensitivity and specificity of quantitative analysis determining the abnormal flow reserve were 85 and 84%, respectively, while visual analysis had a sensitivity of 77% and a specificity of 51%. There was a good agreement between the observers (κ = 0.856). Summed stress and difference scores demonstrated moderate inverse
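
    As a sketch of how such accuracy figures are derived, sensitivity and specificity follow from a 2×2 table against the FFR gold standard; the counts below are assumptions chosen to be consistent with the reported 85%/84% on 13 abnormal and 45 normal FFR results, not the study's published table:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Assumed counts: 13 FFR-abnormal patients (11 detected by quantitative MPS)
# and 45 FFR-normal patients (38 correctly negative) -- consistent with,
# but not taken from, the abstract's figures.
sens, spec = sensitivity_specificity(tp=11, fn=2, tn=38, fp=7)
print(f"sensitivity={sens:.0%}, specificity={spec:.0%}")  # → sensitivity=85%, specificity=84%
```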

  9. Designing automation for human use: empirical studies and quantitative models.

    PubMed

    Parasuraman, R

    2000-07-01

    An emerging knowledge base of human performance research can provide guidelines for designing automation that can be used effectively by human operators of complex systems. Which functions should be automated and to what extent in a given system? A model for types and levels of automation that provides a framework and an objective basis for making such choices is described. The human performance consequences of particular types and levels of automation constitute primary evaluative criteria for automation design when using the model. Four human performance areas are considered--mental workload, situation awareness, complacency and skill degradation. Secondary evaluative criteria include such factors as automation reliability, the risks of decision/action consequences and the ease of systems integration. In addition to this qualitative approach, quantitative models can inform design. Several computational and formal models of human interaction with automation that have been proposed by various researchers are reviewed. An important future research need is the integration of qualitative and quantitative approaches. Application of these models provides an objective basis for designing automation for effective human use.

  10. TECHNOLOGICAL INNOVATION IN NEUROSURGERY: A QUANTITATIVE STUDY

    PubMed Central

    Marcus, Hani J; Hughes-Hallett, Archie; Kwasnicki, Richard M; Darzi, Ara; Yang, Guang-Zhong; Nandi, Dipankar

    2015-01-01

    Object: Technological innovation within healthcare may be defined as the introduction of a new technology that initiates a change in clinical practice. Neurosurgery is a particularly technologically intensive surgical discipline, and new technologies have preceded many of the major advances in operative neurosurgical technique. The aim of the present study was to quantitatively evaluate technological innovation in neurosurgery using patents and peer-reviewed publications as metrics of technology development and clinical translation respectively. Methods: A patent database was searched between 1960 and 2010 using the search terms "neurosurgeon" OR "neurosurgical" OR "neurosurgery". The top 50 performing patent codes were then grouped into technology clusters. Patent and publication growth curves were then generated for these technology clusters. A top performing technology cluster was then selected as an exemplar for more detailed analysis of individual patents. Results: In all, 11,672 patents and 208,203 publications relating to neurosurgery were identified. The top performing technology clusters over the 50 years were: image guidance devices, clinical neurophysiology devices, neuromodulation devices, operating microscopes and endoscopes. Image guidance and neuromodulation devices demonstrated a highly correlated rapid rise in patents and publications, suggesting they are areas of technology expansion. In-depth analysis of neuromodulation patents revealed that the majority of high performing patents were related to Deep Brain Stimulation (DBS). Conclusions: Patent and publication data may be used to quantitatively evaluate technological innovation in neurosurgery. PMID:25699414

  11. Technological innovation in neurosurgery: a quantitative study.

    PubMed

    Marcus, Hani J; Hughes-Hallett, Archie; Kwasnicki, Richard M; Darzi, Ara; Yang, Guang-Zhong; Nandi, Dipankar

    2015-07-01

    Technological innovation within health care may be defined as the introduction of a new technology that initiates a change in clinical practice. Neurosurgery is a particularly technology-intensive surgical discipline, and new technologies have preceded many of the major advances in operative neurosurgical techniques. The aim of the present study was to quantitatively evaluate technological innovation in neurosurgery using patents and peer-reviewed publications as metrics of technology development and clinical translation, respectively. The authors searched a patent database for articles published between 1960 and 2010 using the Boolean search term "neurosurgeon OR neurosurgical OR neurosurgery." The top 50 performing patent codes were then grouped into technology clusters. Patent and publication growth curves were then generated for these technology clusters. A top-performing technology cluster was then selected as an exemplar for a more detailed analysis of individual patents. In all, 11,672 patents and 208,203 publications related to neurosurgery were identified. The top-performing technology clusters during these 50 years were image-guidance devices, clinical neurophysiology devices, neuromodulation devices, operating microscopes, and endoscopes. In relation to image-guidance and neuromodulation devices, the authors found a highly correlated rapid rise in the numbers of patents and publications, which suggests that these are areas of technology expansion. An in-depth analysis of neuromodulation-device patents revealed that the majority of well-performing patents were related to deep brain stimulation. Patent and publication data may be used to quantitatively evaluate technological innovation in neurosurgery.

  12. Quantitative contrast-enhanced ultrasound evaluation of pathological complete response in patients with locally advanced breast cancer receiving neoadjuvant chemotherapy.

    PubMed

    Wan, Cai-Feng; Liu, Xue-Song; Wang, Lin; Zhang, Jie; Lu, Jin-Song; Li, Feng-Hua

    2018-06-01

    To clarify whether the quantitative parameters of contrast-enhanced ultrasound (CEUS) can be used to predict pathological complete response (pCR) in patients with locally advanced breast cancer receiving neoadjuvant chemotherapy (NAC). Fifty-one patients with histologically proved locally advanced breast cancer scheduled for NAC were enrolled. The quantitative data for CEUS and the tumor diameter were collected at baseline and before surgery, and compared with the pathological response. Multiple logistic regression analysis was performed to examine quantitative parameters at CEUS and the tumor diameter to predict the pCR, and receiver operating characteristic (ROC) curve analysis was used as a summary statistic. Multiple logistic regression analysis revealed that PEAK (the maximum intensity of the time-intensity curve during bolus transit), PEAK%, TTP% (time to peak), and diameter% were significant independent predictors of pCR, and the area under the ROC curve was 0.932 (Az1), and the sensitivity and specificity to predict pCR were 93.7% and 80.0%. The area under the ROC curve for the quantitative parameters was 0.927 (Az2), and the sensitivity and specificity to predict pCR were 81.2% and 94.3%. For diameter%, the area under the ROC curve was 0.786 (Az3), and the sensitivity and specificity to predict pCR were 93.8% and 54.3%. The values of Az1 and Az2 were significantly higher than that of Az3 (P = 0.027 and P = 0.034, respectively). However, there was no significant difference between the values of Az1 and Az2 (P = 0.825). Quantitative analysis of tumor blood perfusion with CEUS is superior to diameter% to predict pCR, and can be used as a functional technique to evaluate tumor response to NAC. Copyright © 2018. Published by Elsevier B.V.
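
    The Az values are areas under ROC curves. As an illustrative sketch (not the study's analysis software), an empirical AUC can be computed from predictor scores and pCR labels via the rank-sum formulation, where AUC is the probability that a randomly chosen responder outranks a randomly chosen non-responder; the scores and labels below are hypothetical:

```python
def auc(scores, labels):
    """Empirical ROC area: probability that a random positive case
    receives a higher score than a random negative case (ties = 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical predictor scores and pCR outcomes (1 = pCR achieved)
scores = [0.9, 0.3, 0.35, 0.6, 0.2, 0.4]
labels = [1, 1, 0, 1, 0, 0]
print(round(auc(scores, labels), 3))  # → 0.778
```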

  13. Sequence stratigraphy and high-frequency cycles: New aspects for a quantitative evaluation of the Gulf of Suez basin, Egypt

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nio, S.D.; Yang, C.S.; Tewfik, N.

    1993-09-01

    A new development in the application of sequence stratigraphic concepts in marine as well as continental basins is the recognition of high-frequency cyclic patterns in rock successions in the subsurface. Studies of six wells from the northern, central, and southern parts of the Gulf of Suez show the presence of well-preserved, high-frequency cycles with periodicities similar to the orbitally forced Milankovitch parameters. Subsurface rock successions, third-order sequences, and high-frequency cycles were compared with outcrops. After establishing the biostratigraphic framework for the above-mentioned wells, a sequence analysis was performed. Sequence boundaries and maximum flooding positions in each well were calibrated with the occurrences and evaluation of the high-frequency cycles. It became obvious that there is an intimate relationship between these high-frequency Milankovitch cycles and sequence organization. In addition, a close relationship can be observed in the subsurface as well as in outcrops between high-frequency climatic changes (connected to the Milankovitch cycles) and (litho)facies variability. Quantitative evaluations of each sequence and/or systems tract can be computed with the International Geoservices' cyclicity analysis tool (MILABAR). The results are summarized in a well composite chart, with net accumulation rate (NAR) and the ratio of preserved time. In correlations between the wells, an accuracy of 100-500 Ka can be obtained. The quantitative evaluation of the sequence and high-frequency cycle analysis gave some new aspects concerning the (litho)facies and geodynamic development during the pre- as well as the synrift stages of the Gulf of Suez Basin.

  14. Quantitative Evaluation of Musical Scale Tunings

    ERIC Educational Resources Information Center

    Hall, Donald E.

    1974-01-01

    The acoustical and mathematical basis of the problem of tuning the twelve-tone chromatic scale is reviewed. A quantitative measurement showing how well any tuning succeeds in providing just intonation for any specific piece of music is explained and applied to musical examples using a simple computer program. (DT)
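
    The measurement compares a tuning's intervals with just intonation. A minimal sketch of the underlying arithmetic (not Hall's actual program) expresses each deviation in cents, 1200·log2(frequency ratio), here for equal temperament against assumed standard just ratios:

```python
import math

# Assumed standard just-intonation ratios for three intervals
JUST_RATIO = {"major third": 5 / 4, "perfect fifth": 3 / 2, "major sixth": 5 / 3}
SEMITONES = {"major third": 4, "perfect fifth": 7, "major sixth": 9}

def cents(ratio):
    """Interval size in cents: 1200 * log2(frequency ratio)."""
    return 1200 * math.log2(ratio)

for name, just in JUST_RATIO.items():
    equal = 2 ** (SEMITONES[name] / 12)       # equal-tempered ratio
    deviation = cents(equal) - cents(just)    # ET minus just, in cents
    print(f"{name}: {deviation:+.1f} cents")
```

The familiar results (the equal-tempered fifth is about 2 cents flat of just, the major third about 14 cents sharp) fall out directly.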

  15. Evaluation of Calibration Laboratories Performance

    NASA Astrophysics Data System (ADS)

    Filipe, Eduarda

    2011-12-01

    One of the main goals of interlaboratory comparisons (ILCs) is the evaluation of the laboratories' performance for the routine calibrations they perform for clients. In the frame of Accreditation of Laboratories, the national accreditation boards (NABs), in collaboration with the national metrology institutes (NMIs), organize the ILCs needed to comply with the requirements of the international accreditation organizations. For an ILC to be a reliable tool for a laboratory to validate its best measurement capability (BMC), the NMI (reference laboratory) must provide a traveling standard that is better, in terms of accuracy class or uncertainty, than the laboratories' BMCs. Although this is the general situation, there are cases where the NABs ask the NMIs to evaluate the performance of the accredited laboratories when calibrating industrial measuring instruments. The aim of this article is to discuss the existing approaches for the evaluation of ILCs and propose a basis for the validation of the laboratories' measurement capabilities. An example is presented with the evaluation of the results of a mercury-in-glass thermometer ILC with 12 participating laboratories.
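
    A common statistic for evaluating laboratory performance in such ILCs is the normalized error E_n = (x_lab − x_ref) / √(U_lab² + U_ref²), with |E_n| ≤ 1 taken as satisfactory; the use of this particular score and the thermometer readings below are assumptions for illustration, since the article's exact scoring approach is not given here:

```python
import math

def en_score(x_lab, u_lab, x_ref, u_ref):
    """Normalized error E_n; the U terms are expanded (k=2) uncertainties."""
    return (x_lab - x_ref) / math.sqrt(u_lab ** 2 + u_ref ** 2)

# Hypothetical mercury-in-glass thermometer comparison at the 50 °C point
en = en_score(x_lab=50.12, u_lab=0.10, x_ref=50.05, u_ref=0.02)
verdict = "satisfactory" if abs(en) <= 1 else "unsatisfactory"
print(f"E_n = {en:.2f} ({verdict})")  # → E_n = 0.69 (satisfactory)
```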

  16. Evaluation of Quantitative Literacy Series: Exploring Data and Exploring Probability. Program Report 87-5.

    ERIC Educational Resources Information Center

    Day, Roger P.; And Others

    A quasi-experimental design with two experimental groups and one control group was used to evaluate the use of two books in the Quantitative Literacy Series, "Exploring Data" and "Exploring Probability." Group X teachers were those who had attended a workshop on the use of the materials and were using the materials during the…

  17. Qualitative and quantitative evaluation of some vocal function parameters following fitting of a prosthesis.

    PubMed

    Cavalot, A L; Palonta, F; Preti, G; Nazionale, G; Ricci, E; Vione, N; Albera, R; Cortesina, G

    2001-12-01

    The insertion of a prosthesis and restoration with pectoralis major myocutaneous flaps for patients subjected to total pharyngolaryngectomy is a technique now universally accepted; however, the literature on the subject is sparse. Our study considers 10 patients subjected to total pharyngolaryngectomy and restoration with pectoralis major myocutaneous flaps who were fitted with vocal function prostheses, and a control group of 50 subjects treated with a total laryngectomy without pectoralis major myocutaneous flaps who were fitted with vocal function prostheses. Specific qualitative and quantitative parameters were compared. The differences in the quantitative measurements of voice intensity and in the harmonics-to-noise ratio were not statistically significant (p > 0.05) between the two study groups at either high- or low-volume speech. On the contrary, statistically significant differences were found (p < 0.05) for the fundamental frequency of both the low- and the high-volume voice. For the qualitative analysis, seven parameters were established for evaluation by trained and untrained listeners; on the basis of these parameters, the control group had statistically better voices.

  18. Quantitative maps of genetic interactions in yeast - comparative evaluation and integrative analysis.

    PubMed

    Lindén, Rolf O; Eronen, Ville-Pekka; Aittokallio, Tero

    2011-03-24

    High-throughput genetic screening approaches have enabled systematic means to study how interactions among gene mutations contribute to quantitative fitness phenotypes, with the aim of providing insights into the functional wiring diagrams of genetic interaction networks on a global scale. However, it is poorly known how well these quantitative interaction measurements agree across the screening approaches, which hinders their integrated use toward improving the coverage and quality of the genetic interaction maps in yeast and other organisms. Using large-scale data matrices from epistatic miniarray profiling (E-MAP), genetic interaction mapping (GIM), and synthetic genetic array (SGA) approaches, we carried out here a systematic comparative evaluation among these quantitative maps of genetic interactions in yeast. The relatively low association between the original interaction measurements or their customized scores could be improved using a matrix-based modelling framework, which enables the use of single- and double-mutant fitness estimates and measurements, respectively, when scoring genetic interactions. Toward an integrative analysis, we show how the detections from the different screening approaches can be combined to suggest novel positive and negative interactions which are complementary to those obtained using any single screening approach alone. The matrix approximation procedure has been made available to support the design and analysis of the future screening studies. We have shown here that even if the correlation between the currently available quantitative genetic interaction maps in yeast is relatively low, their comparability can be improved by means of our computational matrix approximation procedure, which will enable integrative analysis and detection of a wider spectrum of genetic interactions using data from the complementary screening approaches.

  19. Co-Teaching in Middle School Classrooms: Quantitative Comparative Study of Special Education Student Assessment Performance

    ERIC Educational Resources Information Center

    Reese, De'borah Reese

    2017-01-01

    The purpose of this quantitative comparative study was to determine the existence or nonexistence of performance pass rate differences of special education middle school students on standardized assessments between pre and post co-teaching eras disaggregated by subject area and school. Co-teaching has altered classroom environments in many ways.…

  20. MathPatch - Raising Retention and Performance in an Intro-geoscience Class by Raising Students' Quantitative Skills

    NASA Astrophysics Data System (ADS)

    Baer, E. M.; Whittington, C.; Burn, H.

    2008-12-01

    The geological sciences are fundamentally quantitative. However, the diversity of students' mathematical preparation and skills makes the successful use of quantitative concepts difficult in introductory level classes. At Highline Community College, we have implemented a one-credit co-requisite course to give students supplemental instruction for quantitative skills used in the course. The course, formally titled "Quantitative Geology," nicknamed "MathPatch," runs parallel to our introductory Physical Geology course. MathPatch teaches the quantitative skills required for the geology class right before they are needed. Thus, students learn only the skills they need and are given opportunities to apply them immediately. Topics include complex-graph reading, unit conversions, large numbers, scientific notation, scale and measurement, estimation, powers of 10, and other fundamental mathematical concepts used in basic geological concepts. Use of this course over the past 8 years has successfully accomplished the goals of increasing students' quantitative skills, success and retention. Students master the quantitative skills to a greater extent than before the course was implemented, and less time is spent covering basic quantitative skills in the classroom. Because the course supports the use of quantitative skills, the large number of faculty that teach Geology 101 are more comfortable in using quantitative analysis, and indeed see it as an expectation of the course at Highline. Also significant, retention in the geology course has increased substantially, from 75% to 85%. Although successful, challenges persist with requiring MathPatch as a supplementary course. One, we have seen enrollments decrease in Geology 101, which may be the result of adding this co-requisite. Students resist mandatory enrollment in the course, although they are not good at evaluating their own need for the course. The logistics utilizing MathPatch in an evening class with fewer and longer

  1. 48 CFR 236.201 - Evaluation of contractor performance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... CONTRACTS Special Aspects of Contracting for Construction 236.201 Evaluation of contractor performance. (a) Preparation of performance evaluation reports. Use DD Form 2626, Performance Evaluation (Construction... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Evaluation of contractor...

  2. Technical Note: Evaluation of a 160-mm/256-row CT scanner for whole-heart quantitative myocardial perfusion imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    So, Aaron, E-mail: aso@robarts.ca

    Purpose: The authors investigated the performance of a recently introduced 160-mm/256-row CT system for low dose quantitative myocardial perfusion (MP) imaging of the whole heart. This platform is equipped with a gantry capable of rotating at 280 ms per full cycle, a second generation of adaptive statistical iterative reconstruction (ASiR-V) to correct for image noise arising from low tube voltage potential/tube current dynamic scanning, and image reconstruction algorithms to tackle beam-hardening, cone-beam, and partial-scan effects. Methods: Phantom studies were performed to investigate the effectiveness of image noise and artifact reduction with a GE Healthcare Revolution CT system for three acquisition protocols used in quantitative CT MP imaging: 100, 120, and 140 kVp/25 mAs. The heart chambers of an anthropomorphic chest phantom were filled with iodinated contrast solution at different concentrations (contrast levels) to simulate the circulation of contrast through the heart in quantitative CT MP imaging. To evaluate beam-hardening correction, the phantom was scanned at each contrast level to measure the changes in CT number (in Hounsfield unit or HU) in the water-filled region surrounding the heart chambers with respect to baseline. To evaluate cone-beam artifact correction, differences in mean water HU between the central and peripheral slices were compared. Partial-scan artifact correction was evaluated from the fluctuation of mean water HU in successive partial scans. To evaluate image noise reduction, a small hollow region adjacent to the heart chambers was filled with diluted contrast, and contrast-to-noise ratio in the region before and after noise correction with ASiR-V was compared. The quality of MP maps acquired with the CT system was also evaluated in porcine CT MP studies. Myocardial infarct was induced in a farm pig from a transient occlusion of the distal left anterior descending (LAD) artery with a catheter-based interventional

  3. Technical Note: Evaluation of a 160-mm/256-row CT scanner for whole-heart quantitative myocardial perfusion imaging.

    PubMed

    So, Aaron; Imai, Yasuhiro; Nett, Brian; Jackson, John; Nett, Liz; Hsieh, Jiang; Wisenberg, Gerald; Teefy, Patrick; Yadegari, Andrew; Islam, Ali; Lee, Ting-Yim

    2016-08-01

    The authors investigated the performance of a recently introduced 160-mm/256-row CT system for low dose quantitative myocardial perfusion (MP) imaging of the whole heart. This platform is equipped with a gantry capable of rotating at 280 ms per full cycle, a second generation of adaptive statistical iterative reconstruction (ASiR-V) to correct for image noise arising from low tube voltage potential/tube current dynamic scanning, and image reconstruction algorithms to tackle beam-hardening, cone-beam, and partial-scan effects. Phantom studies were performed to investigate the effectiveness of image noise and artifact reduction with a GE Healthcare Revolution CT system for three acquisition protocols used in quantitative CT MP imaging: 100, 120, and 140 kVp/25 mAs. The heart chambers of an anthropomorphic chest phantom were filled with iodinated contrast solution at different concentrations (contrast levels) to simulate the circulation of contrast through the heart in quantitative CT MP imaging. To evaluate beam-hardening correction, the phantom was scanned at each contrast level to measure the changes in CT number (in Hounsfield unit or HU) in the water-filled region surrounding the heart chambers with respect to baseline. To evaluate cone-beam artifact correction, differences in mean water HU between the central and peripheral slices were compared. Partial-scan artifact correction was evaluated from the fluctuation of mean water HU in successive partial scans. To evaluate image noise reduction, a small hollow region adjacent to the heart chambers was filled with diluted contrast, and contrast-to-noise ratio in the region before and after noise correction with ASiR-V was compared. The quality of MP maps acquired with the CT system was also evaluated in porcine CT MP studies. Myocardial infarct was induced in a farm pig from a transient occlusion of the distal left anterior descending (LAD) artery with a catheter-based interventional procedure. 
MP maps were generated
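
    The contrast-to-noise ratio used above to judge noise reduction is the difference between ROI and background means divided by the background standard deviation. A minimal sketch, with illustrative voxel values rather than data from the study:

```python
from statistics import mean, pstdev

def contrast_to_noise_ratio(roi_hu, background_hu):
    """CNR between a contrast-filled region and its background.

    CNR = (mean(ROI) - mean(background)) / std(background),
    with voxel values in Hounsfield units (HU).
    """
    return (mean(roi_hu) - mean(background_hu)) / pstdev(background_hu)

# Illustrative voxel values: iterative noise reduction (e.g. ASiR-V)
# lowers the background standard deviation, which raises the CNR
# even though the mean contrast is unchanged.
cnr_before = contrast_to_noise_ratio([80, 85, 90], [0, 10, -10, 5, -5])
cnr_after = contrast_to_noise_ratio([80, 85, 90], [0, 4, -4, 2, -2])
```

    Because only the background noise term changes between the two calls, the noise-corrected case yields the higher CNR.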

  4. Conductor gestures influence evaluations of ensemble performance

    PubMed Central

    Morrison, Steven J.; Price, Harry E.; Smedley, Eric M.; Meals, Cory D.

    2014-01-01

    Previous research has found that listener evaluations of ensemble performances vary depending on the expressivity of the conductor’s gestures, even when performances are otherwise identical. It was the purpose of the present study to test whether this effect of visual information was evident in the evaluation of specific aspects of ensemble performance: articulation and dynamics. We constructed a set of 32 music performances that combined auditory and visual information and were designed to feature a high degree of contrast along one of two target characteristics: articulation and dynamics. We paired each of four music excerpts recorded by a chamber ensemble in both a high- and low-contrast condition with video of four conductors demonstrating high- and low-contrast gesture specifically appropriate to either articulation or dynamics. Using one of two equivalent test forms, college music majors and non-majors (N = 285) viewed sixteen 30 s performances and evaluated the quality of the ensemble’s articulation, dynamics, technique, and tempo along with overall expressivity. Results showed significantly higher evaluations for performances featuring high rather than low conducting expressivity regardless of the ensemble’s performance quality. Evaluations for both articulation and dynamics were strongly and positively correlated with evaluations of overall ensemble expressivity. PMID:25104944

  5. 77 FR 41985 - Use of Influenza Disease Models To Quantitatively Evaluate the Benefits and Risks of Vaccines: A...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-17

    ... Evaluation and Research (CBER) and suggestions for further development. The public workshop will include... Evaluation and Research (HFM-210), Food and Drug Administration, 1401 Rockville Pike, suite 200N, Rockville... models to generate quantitative estimates of the benefits and risks of influenza vaccination. The public...

  6. Evaluating the Performance of the IEEE Standard 1366 Method for Identifying Major Event Days

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eto, Joseph H.; LaCommare, Kristina Hamachi; Sohn, Michael D.

    IEEE Standard 1366 offers a method for segmenting reliability performance data to isolate the effects of major events from the underlying year-to-year trends in reliability. Recent analysis by the IEEE Distribution Reliability Working Group (DRWG) has found that reliability performance of some utilities differs from the expectations that helped guide the development of the Standard 1366 method. This paper proposes quantitative metrics to evaluate the performance of the Standard 1366 method in identifying major events and in reducing year-to-year variability in utility reliability. The metrics are applied to a large sample of utility-reported reliability data to assess performance of the method with alternative specifications that have been considered by the DRWG. We find that none of the alternatives perform uniformly 'better' than the current Standard 1366 method. That is, none of the modifications uniformly lowers the year-to-year variability in System Average Interruption Duration Index without major events. Instead, for any given alternative, while it may lower the value of this metric for some utilities, it also increases it for other utilities (sometimes dramatically). Thus, we illustrate some of the trade-offs that must be considered in using the Standard 1366 method and highlight the usefulness of the metrics we have proposed in conducting these evaluations.
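
    For context, the Standard 1366 segmentation referenced above is the '2.5 beta' method: a day counts as a major event day when its daily SAIDI exceeds T_MED = exp(alpha + 2.5*beta), where alpha and beta are the mean and standard deviation of the natural logarithms of the daily SAIDI values. A minimal sketch with synthetic daily values:

```python
import math
from statistics import mean, stdev

def major_event_days(daily_saidi):
    """Flag major event days (MEDs) with the IEEE 1366 '2.5 beta' method.

    A day is an MED when its SAIDI exceeds T_MED = exp(alpha + 2.5*beta),
    where alpha and beta are the mean and sample standard deviation of
    ln(SAIDI) over the assessment period; zero-SAIDI days are excluded
    from the log statistics.
    """
    logs = [math.log(s) for s in daily_saidi if s > 0]
    alpha, beta = mean(logs), stdev(logs)
    t_med = math.exp(alpha + 2.5 * beta)
    return [s > t_med for s in daily_saidi], t_med

# Synthetic year: mostly quiet days plus five storm days.
flags, t_med = major_event_days([1.0] * 300 + [2.0] * 60 + [50.0] * 5)
```

    In this synthetic year only the five storm days exceed the threshold; they are segmented out as major events before the year-to-year variability of the remaining SAIDI is assessed.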

  7. 48 CFR 3052.216-72 - Performance evaluation plan.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 7 2010-10-01 2010-10-01 false Performance evaluation... CONTRACT CLAUSES Text of Provisions and Clauses 3052.216-72 Performance evaluation plan. As prescribed in... Evaluation Plan (DEC 2003) (a) A Performance Evaluation Plan shall be unilaterally established by the...

  8. Quantitative evaluation of learning and memory trace in studies of mnemotropic effects of immunotropic drugs.

    PubMed

    Kiseleva, N M; Novoseletskaya, A V; Voevodina, Ye B; Kozlov, I G; Inozemtsev, A N

    2012-12-01

    Apart from restoration of disordered immunological parameters, tactivin and derinat exhibit a pronounced effect on the higher integrative functions of the brain. Experiments on Wistar rats have shown that these drugs accelerated conditioning of food and defense responses. New methods for quantitative evaluation of memory trace consolidation are proposed.

  9. 3.0T MR imaging of the ankle: Axial traction for morphological cartilage evaluation, quantitative T2 mapping and cartilage diffusion imaging-A preliminary study.

    PubMed

    Jungmann, Pia M; Baum, Thomas; Schaeffeler, Christoph; Sauerschnig, Martin; Brucker, Peter U; Mann, Alexander; Ganter, Carl; Bieri, Oliver; Rummeny, Ernst J; Woertler, Klaus; Bauer, Jan S

    2015-08-01

    To determine the impact of axial traction during high-resolution 3.0T MR imaging of the ankle on morphological assessment of articular cartilage and quantitative cartilage imaging parameters. MR images of n=25 asymptomatic ankles were acquired with and without axial traction (6 kg). Coronal and sagittal T1-weighted (w) turbo spin echo (TSE) sequences with a driven equilibrium pulse and sagittal fat-saturated intermediate-w (IMfs) TSE sequences were acquired for morphological evaluation on a four-point scale (1=best, 4=worst). For quantitative assessment of cartilage degradation, segmentation was performed on 2D multislice-multiecho (MSME) SE T2, steady-state free-precession (SSFP; n=8) T2 and SSFP diffusion-weighted imaging (DWI; n=8) images. Wilcoxon tests and paired t-tests were used for statistical analysis. With axial traction, joint space width increased significantly and delineation of cartilage surfaces was rated superior (P<0.05). Cartilage surfaces were best visualized on coronal T1-w images (P<0.05). Differences for cartilage matrix evaluation were smaller. Subchondral bone evaluation, motion artifacts and image quality were not significantly different between the acquisition methods (P>0.05). T2 values were lower at the tibia than at the talus (P<0.001). Reproducibility was better for images with axial traction. Axial traction increased the joint space width, allowed better visualization of cartilage surfaces, and improved compartment discrimination and the reproducibility of quantitative cartilage parameters. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
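
    The T2 values compared above are typically estimated per voxel by fitting a mono-exponential decay S(TE) = S0*exp(-TE/T2) to the multi-echo signal; taking logarithms reduces the fit to linear least squares. A minimal sketch on synthetic, noiseless data (the echo times and signals are invented for illustration):

```python
import math

def fit_t2(echo_times_ms, signals):
    """Estimate T2 by a log-linear least-squares fit to multi-echo data.

    Model: S(TE) = S0 * exp(-TE / T2), so ln S = ln S0 - TE / T2.
    Returns (S0, T2), with T2 in the units of echo_times_ms.
    """
    n = len(echo_times_ms)
    xs, ys = echo_times_ms, [math.log(s) for s in signals]
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return math.exp(my - slope * mx), -1.0 / slope

# Synthetic noiseless decay with S0 = 100 and T2 = 40 ms.
tes = [10, 20, 30, 40, 50]
sig = [100 * math.exp(-te / 40) for te in tes]
s0, t2 = fit_t2(tes, sig)
```

    On noiseless data the fit recovers S0 and T2 exactly; with real multi-echo data the same log-linear fit gives the per-voxel T2 map.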

  10. A quantitative risk-assessment system (QR-AS) evaluating operation safety of Organic Rankine Cycle using flammable mixture working fluid.

    PubMed

    Tian, Hua; Wang, Xueying; Shu, Gequn; Wu, Mingqiang; Yan, Nanhua; Ma, Xiaonan

    2017-09-15

    Mixtures of hydrocarbons and carbon dioxide show excellent cycle performance in the Organic Rankine Cycle (ORC) used for engine waste heat recovery, but unavoidable leakage in practical application is a safety threat due to their flammability. In this work, a quantitative risk-assessment system (QR-AS) is established with the aim of providing a general method of risk assessment for flammable working-fluid leakage. The QR-AS covers three main aspects: analysis of concentration distribution based on CFD simulations, explosive risk assessment based on the TNT equivalent method, and risk mitigation based on the evaluation results. A typical case of a propane/carbon dioxide mixture leaking from an ORC is investigated to illustrate the application of the QR-AS. Based on the assessment results, a proper ventilation speed, a safe mixture ratio and locations for gas-detecting devices are proposed to guarantee safety in case of leakage. The results revealed that the presented QR-AS is reliable for practical application and that the evaluation results can provide valuable guidance for the design of mitigation measures to improve the safety performance of an ORC system. Copyright © 2017 Elsevier B.V. All rights reserved.
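
    The TNT equivalent method named above converts the mass of leaked fuel into an equivalent mass of TNT through the ratio of heats of combustion, scaled by an empirical explosion yield factor. A minimal sketch; the propane heat of combustion, the 4% yield factor and the TNT blast energy below are typical handbook values, not figures taken from the paper:

```python
def tnt_equivalent_mass(fuel_mass_kg, heat_of_combustion_kj_per_kg,
                        yield_factor=0.04, e_tnt_kj_per_kg=4500.0):
    """Equivalent TNT mass for a vapour-cloud explosion.

    W_TNT = eta * m_fuel * dH_c / E_TNT, where eta is an empirical
    explosion yield factor (often taken between 0.01 and 0.1) and
    E_TNT is the blast energy of TNT (commonly ~4500 kJ/kg).
    """
    return (yield_factor * fuel_mass_kg
            * heat_of_combustion_kj_per_kg / e_tnt_kj_per_kg)

# Example: 10 kg of leaked propane (dH_c ~ 46,350 kJ/kg) at a 4% yield.
w_tnt = tnt_equivalent_mass(10, 46350)
```

    The resulting TNT mass is then fed into standard blast-scaling relations to estimate overpressure at a given distance.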

  11. Performance Evaluation Process.

    ERIC Educational Resources Information Center

    1998

    This document contains four papers from a symposium on the performance evaluation process and human resource development (HRD). "Assessing the Effectiveness of OJT (On the Job Training): A Case Study Approach" (Julie Furst-Bowe, Debra Gates) is a case study of the effectiveness of OJT in one of a high-tech manufacturing company's product…

  12. The Context and Process for Performance Evaluations: Necessary Preconditions for the Use of Performance Evaluations as a Measure of Performance--A Critique of Perry

    ERIC Educational Resources Information Center

    McCarthy, Mary L.

    2006-01-01

    This article challenges Perry's research using performance evaluations to determine whether the educational background of child welfare workers is predictive of performance. Institutional theory, an understanding of street-level bureaucracies, and evaluations of field education performance measures are offered as necessary frameworks for Perry's…

  13. A Comparative Assessment of Greek Universities' Efficiency Using Quantitative Analysis

    ERIC Educational Resources Information Center

    Katharaki, Maria; Katharakis, George

    2010-01-01

    In part due to the increased demand for higher education, typical evaluation frameworks for universities often address the key issue of available resource utilisation. This study seeks to estimate the efficiency of 20 public universities in Greece through quantitative analysis (including performance indicators, data envelopment analysis (DEA) and…

  14. Three-phase bone scintigraphy for diagnosis of Charcot neuropathic osteoarthropathy in the diabetic foot - does quantitative data improve diagnostic value?

    PubMed

    Fosbøl, M; Reving, S; Petersen, E H; Rossing, P; Lajer, M; Zerahn, B

    2017-01-01

    To investigate whether inclusion of quantitative data on blood flow distribution, compared with visual qualitative evaluation, improves the reliability and diagnostic performance of 99mTc-hydroxymethylene diphosphonate three-phase bone scintigraphy (TPBS) in patients suspected of Charcot neuropathic osteoarthropathy (CNO) of the foot. A retrospective cohort study of TPBS performed on 148 patients with suspected acute CNO referred from a single specialized diabetes care centre. The quantitative blood flow distribution was calculated based on the method described by Deutsch et al. All scintigraphies were re-evaluated twice by independent, blinded observers, with and without quantitative data on blood flow distribution at ankle and focus level, respectively. The diagnostic validity of TPBS was determined by subsequent review of clinical data and radiological examinations. A total of 90 patients (61%) had a confirmed diagnosis of CNO. The sensitivity, specificity and accuracy of three-phase bone scintigraphy without/with quantitative data were 89%/88%, 58%/62% and 77%/78%, respectively. The intra-observer agreement improved significantly when quantitative data were added to the evaluation (kappa value 0.79/0.94). The interobserver agreement was not significantly improved. Adding quantitative data on blood flow distribution to the interpretation of TPBS improves intra-observer variation, whereas no difference in interobserver variation was observed. The sensitivity of TPBS in the diagnosis of CNO is high, but its specificity is limited. Diagnostic performance does not improve using quantitative data in the evaluation. This may be due to the reference intervals applied in the study or the absence of a proper gold standard diagnostic procedure for comparison. © 2015 Scandinavian Society of Clinical Physiology and Nuclear Medicine. Published by John Wiley & Sons Ltd.
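
    The sensitivity, specificity and accuracy figures reported above follow directly from a 2x2 confusion matrix against the confirmed diagnosis. A minimal sketch; the counts below are illustrative, not reconstructed from the study's data:

```python
def diagnostic_performance(tp, fp, tn, fn):
    """Sensitivity, specificity and accuracy from a 2x2 confusion matrix.

    tp/fn: positives correctly/incorrectly classified;
    tn/fp: negatives correctly/incorrectly classified.
    """
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Illustrative counts for a cohort of 148 patients, 90 of them with
# a confirmed diagnosis (hypothetical numbers, not the study's).
perf = diagnostic_performance(tp=80, fp=24, tn=34, fn=10)
```

    Sensitivity is computed over the confirmed-positive column only, which is why a test can be highly sensitive yet weakly specific, as observed here.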

  15. Evaluation of fluorophores for optimal performance in localization-based super-resolution imaging

    PubMed Central

    Dempsey, Graham T.; Vaughan, Joshua C.; Chen, Kok Hao; Bates, Mark; Zhuang, Xiaowei

    2011-01-01

    One approach to super-resolution fluorescence imaging uses sequential activation and localization of individual fluorophores to achieve high spatial resolution. Essential to this technique is the choice of fluorescent probes — the properties of the probes, including photons per switching event, on/off duty cycle, photostability, and number of switching cycles, largely dictate the quality of super-resolution images. While many probes have been reported, a systematic characterization of the properties of these probes and their impact on super-resolution image quality has been described in only a few cases. Here, we quantitatively characterized the switching properties of 26 organic dyes and directly related these properties to the quality of super-resolution images. This analysis provides a set of guidelines for characterization of super-resolution probes and a resource for selecting probes based on performance. Our evaluation identified several photoswitchable dyes with good to excellent performance in four independent spectral ranges, with which we demonstrated low crosstalk, four-color super-resolution imaging. PMID:22056676

  16. A combined pulmonary-radiology workshop for visual evaluation of COPD: study design, chest CT findings and concordance with quantitative evaluation.

    PubMed

    Barr, R Graham; Berkowitz, Eugene A; Bigazzi, Francesca; Bode, Frederick; Bon, Jessica; Bowler, Russell P; Chiles, Caroline; Crapo, James D; Criner, Gerard J; Curtis, Jeffrey L; Dass, Chandra; Dirksen, Asger; Dransfield, Mark T; Edula, Goutham; Erikkson, Leif; Friedlander, Adam; Galperin-Aizenberg, Maya; Gefter, Warren B; Gierada, David S; Grenier, Philippe A; Goldin, Jonathan; Han, MeiLan K; Hanania, Nicola A; Hansel, Nadia N; Jacobson, Francine L; Kauczor, Hans-Ulrich; Kinnula, Vuokko L; Lipson, David A; Lynch, David A; MacNee, William; Make, Barry J; Mamary, A James; Mann, Howard; Marchetti, Nathaniel; Mascalchi, Mario; McLennan, Geoffrey; Murphy, James R; Naidich, David; Nath, Hrudaya; Newell, John D; Pistolesi, Massimo; Regan, Elizabeth A; Reilly, John J; Sandhaus, Robert; Schroeder, Joyce D; Sciurba, Frank; Shaker, Saher; Sharafkhaneh, Amir; Silverman, Edwin K; Steiner, Robert M; Strange, Charlton; Sverzellati, Nicola; Tashjian, Joseph H; van Beek, Edwin J R; Washington, Lacey; Washko, George R; Westney, Gloria; Wood, Susan A; Woodruff, Prescott G

    2012-04-01

    The purposes of this study were: to describe chest CT findings in normal non-smoking controls and cigarette smokers with and without COPD; to compare the prevalence of CT abnormalities with severity of COPD; and to evaluate concordance between visual and quantitative chest CT (QCT) scoring. Volumetric inspiratory and expiratory CT scans of 294 subjects, including normal non-smokers, smokers without COPD, and smokers with GOLD Stage I-IV COPD, were scored at a multi-reader workshop using a standardized worksheet. There were 58 observers (33 pulmonologists, 25 radiologists); each scan was scored by 9-11 observers. Interobserver agreement was calculated using kappa statistic. Median score of visual observations was compared with QCT measurements. Interobserver agreement was moderate for the presence or absence of emphysema and for the presence of panlobular emphysema; fair for the presence of centrilobular, paraseptal, and bullous emphysema subtypes and for the presence of bronchial wall thickening; and poor for gas trapping, centrilobular nodularity, mosaic attenuation, and bronchial dilation. Agreement was similar for radiologists and pulmonologists. The prevalence on CT readings of most abnormalities (e.g. emphysema, bronchial wall thickening, mosaic attenuation, expiratory gas trapping) increased significantly with greater COPD severity, while the prevalence of centrilobular nodularity decreased. Concordances between visual scoring and quantitative scoring of emphysema, gas trapping and airway wall thickening were 75%, 87% and 65%, respectively. Despite substantial inter-observer variation, visual assessment of chest CT scans in cigarette smokers provides information regarding lung disease severity; visual scoring may be complementary to quantitative evaluation.
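
    The interobserver agreement above is summarized with the kappa statistic, which corrects raw percent agreement for the agreement expected by chance from each observer's marginal frequencies. A minimal two-rater sketch with invented scores:

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters scoring the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement and p_e the chance agreement implied by each rater's
    marginal category frequencies.
    """
    n = len(ratings_a)
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n)
              for c in set(freq_a) | set(freq_b))
    return (p_o - p_e) / (1 - p_e)

# Two readers scoring presence (Y) or absence (N) of a finding on
# ten scans: 8/10 raw agreement shrinks once chance is removed.
kappa = cohens_kappa(list("YYYNNYYNNY"), list("YYNNNYYNYY"))
```

    With many readers per scan, as in the workshop above, the same idea generalizes to Fleiss' kappa over all reader pairs.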

  17. Quantitative evaluation of his-tag purification and immunoprecipitation of tristetraprolin and its mutant proteins from transfected human cells

    USDA-ARS?s Scientific Manuscript database

    Histidine (His)-tag is widely used for affinity purification of recombinant proteins, but the yield and purity of expressed proteins are quite different. Little information is available about quantitative evaluation of this procedure. The objective of the current study was to evaluate the His-tag pr...

  18. Generalized PSF modeling for optimized quantitation in PET imaging.

    PubMed

    Ashrafinia, Saeed; Mohy-Ud-Din, Hassan; Karakatsanis, Nicolas A; Jha, Abhinav K; Casey, Michael E; Kadrmas, Dan J; Rahmim, Arman

    2017-06-21

    Point-spread function (PSF) modeling offers the ability to account for resolution-degrading phenomena within the PET image generation framework. PSF modeling improves resolution and enhances contrast, but at the same time significantly alters image noise properties and induces an edge overshoot effect. Thus, studying the effect of PSF modeling on quantitation task performance can be very important. Frameworks explored in the past involved a dichotomy of PSF versus no-PSF modeling. By contrast, the present work focuses on quantitative performance evaluation of standard uptake value (SUV) PET images, while incorporating a wide spectrum of PSF models, including those that under- and over-estimate the true PSF, for the potential of enhanced quantitation of SUVs. The developed framework first analytically models the true PSF, considering a range of resolution degradation phenomena (including photon non-collinearity, inter-crystal penetration and scattering) as present in data acquisitions with modern commercial PET systems. In the context of oncologic liver FDG PET imaging, we generated 200 noisy datasets per image-set (with clinically realistic noise levels) using an XCAT anthropomorphic phantom with liver tumours of varying sizes. These were subsequently reconstructed using the OS-EM algorithm with varying PSF-modeled kernels. We focused on quantitation of both SUVmean and SUVmax, including assessment of contrast recovery coefficients, as well as noise-bias characteristics (including both image roughness and coefficient of variability), for different tumours/iterations/PSF kernels. It was observed that overestimated PSF yielded more accurate contrast recovery for a range of tumours, and typically improved quantitative performance. For a clinically reasonable number of iterations, edge enhancement due to PSF modeling (especially due to over-estimated PSF) was in fact seen to lower SUVmean bias in small tumours.
Overall, the results indicate that exactly matched PSF
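
    The contrast recovery coefficients assessed above are the ratio of measured to true tumour-to-background contrast. A minimal sketch with illustrative uptake values:

```python
def contrast_recovery_coefficient(measured_tumour, measured_bg,
                                  true_tumour, true_bg):
    """CRC = measured contrast / true contrast, where
    contrast = (tumour uptake / background uptake) - 1.

    A CRC of 1 means the reconstruction fully recovers the true
    tumour-to-background contrast; resolution loss pushes it below 1.
    """
    measured = measured_tumour / measured_bg - 1.0
    true = true_tumour / true_bg - 1.0
    return measured / true

# True 4:1 tumour-to-background contrast; partial-volume blurring
# lowers the measured tumour uptake, so the CRC falls below 1.
crc = contrast_recovery_coefficient(3.1, 1.0, 4.0, 1.0)
```

    PSF modeling (and especially edge overshoot) raises the measured tumour value, which is why stronger PSF kernels tend to push the CRC back toward, or past, 1.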

  19. Handbook for Improving Superintendent Performance Evaluation.

    ERIC Educational Resources Information Center

    Candoli, Carl; And Others

    This handbook for superintendent performance evaluation contains information for boards of education as they institute or improve their evaluation system. The handbook answers questions involved in operationalizing, implementing, and evaluating a superintendent-evaluation system. The information was developed from research on superintendent…

  20. Global scaling for semi-quantitative analysis in FP-CIT SPECT.

    PubMed

    Kupitz, D; Apostolova, I; Lange, C; Ulrich, G; Amthauer, H; Brenner, W; Buchert, R

    2014-01-01

    Semi-quantitative characterization of dopamine transporter availability from single photon emission computed tomography (SPECT) with 123I-ioflupane (FP-CIT) is based on uptake ratios relative to a reference region. The aim of this study was to evaluate the whole brain as reference region for semi-quantitative analysis of FP-CIT SPECT. The rationale was that this might reduce statistical noise associated with the estimation of non-displaceable FP-CIT uptake. 150 FP-CIT SPECTs were categorized as neurodegenerative or non-neurodegenerative by an expert. Semi-quantitative analysis of specific binding ratios (SBR) was performed with a custom-made tool based on the Statistical Parametric Mapping software package using predefined regions of interest (ROIs) in the anatomical space of the Montreal Neurological Institute. The following reference regions were compared: predefined ROIs for frontal and occipital lobe and whole brain (without striata, thalamus and brainstem). Tracer uptake in the reference region was characterized by the mean, median or 75th percentile of its voxel intensities. The area (AUC) under the receiver operating characteristic curve was used as performance measure. The highest AUC of 0.973 was achieved by the SBR of the putamen with the 75th percentile in the whole brain as reference. The lowest AUC for the putamen SBR of 0.937 was obtained with the mean in the frontal lobe as reference. We recommend the 75th percentile in the whole brain as reference for semi-quantitative analysis in FP-CIT SPECT. This combination provided the best agreement of the semi-quantitative analysis with visual evaluation of the SPECT images by an expert and, therefore, is appropriate to support less experienced physicians.
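
    The specific binding ratio used above relates mean striatal uptake to non-displaceable uptake in the reference region, SBR = (mean_striatum - ref) / ref, with ref taken here as the 75th percentile of whole-brain voxel intensities as the study recommends. A minimal sketch with synthetic voxel values (the percentile helper is a simple linear-interpolation implementation):

```python
def percentile(values, p):
    """p-th percentile by linear interpolation between order statistics."""
    s = sorted(values)
    k = (len(s) - 1) * p / 100.0
    lo = int(k)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (k - lo)

def specific_binding_ratio(striatal_voxels, reference_voxels, p=75):
    """SBR = (mean striatal uptake - ref) / ref, with the reference
    taken as the p-th percentile of the reference-region voxel
    intensities (here, the whole brain without striata)."""
    ref = percentile(reference_voxels, p)
    mean_striatum = sum(striatal_voxels) / len(striatal_voxels)
    return (mean_striatum - ref) / ref

# Synthetic voxel intensities for one putamen and a reference region.
sbr = specific_binding_ratio([30.0, 32.0, 28.0], [8.0, 10.0, 12.0, 14.0])
```

    Using a high percentile of a large reference region, rather than the mean of a small one, is what reduces the statistical noise in the non-displaceable uptake estimate.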

  1. Model Performance Evaluation and Scenario Analysis (MPESA) Tutorial

    EPA Science Inventory

    This tool consists of two parts: model performance evaluation and scenario analysis (MPESA). The model performance evaluation consists of two components: model performance evaluation metrics and model diagnostics. These metrics provides modelers with statistical goodness-of-fit m...

  2. Chronic obstructive pulmonary disease: quantitative and visual ventilation pattern analysis at xenon ventilation CT performed by using a dual-energy technique.

    PubMed

    Park, Eun-Ah; Goo, Jin Mo; Park, Sang Joon; Lee, Hyun Ju; Lee, Chang Hyun; Park, Chang Min; Yoo, Chul-Gyu; Kim, Jong Hyo

    2010-09-01

    To evaluate the potential of xenon ventilation computed tomography (CT) in the quantitative and visual analysis of chronic obstructive pulmonary disease (COPD). This study was approved by the institutional review board. After informed consent was obtained, 32 patients with COPD underwent CT performed before the administration of xenon, two-phase xenon ventilation CT with wash-in (WI) and wash-out (WO) periods, and pulmonary function testing (PFT). For quantitative analysis, results of PFT were compared with attenuation parameters from prexenon images and xenon parameters from xenon-enhanced images in the following three areas at each phase: whole lung, lung with normal attenuation, and low-attenuating lung (LAL). For visual analysis, ventilation patterns were categorized according to the pattern of xenon attenuation in the area of structural abnormalities compared with that in the normal-looking background on a per-lobe basis: pattern A consisted of isoattenuation or high attenuation in the WI period and isoattenuation in the WO period; pattern B, isoattenuation or high attenuation in the WI period and high attenuation in the WO period; pattern C, low attenuation in both the WI and WO periods; and pattern D, low attenuation in the WI period and isoattenuation or high attenuation in the WO period. Among various attenuation and xenon parameters, xenon parameters of the LAL in the WO period showed the best inverse correlation with results of PFT (P < .0001). At visual analysis, while emphysema (which affected 99 lobes) commonly showed pattern A or B, airway diseases such as obstructive bronchiolitis (n = 5) and bronchiectasis (n = 2) and areas with a mucus plug (n = 1) or centrilobular nodules (n = 5) showed pattern D or C. WI and WO xenon ventilation CT is feasible for the simultaneous regional evaluation of structural and ventilation abnormalities both quantitatively and qualitatively in patients with COPD. (c) RSNA, 2010.

  3. Survey of Quantitative Research Metrics to Assess Pilot Performance in Upset Recovery

    NASA Technical Reports Server (NTRS)

    Le Vie, Lisa R.

    2016-01-01

    Accidents attributable to in-flight loss of control are the primary cause of fatal commercial jet accidents worldwide. The National Aeronautics and Space Administration (NASA) conducted a literature review to determine and identify the quantitative standards for assessing upset recovery performance. This review contains current recovery procedures for both military and commercial aviation and includes the metrics researchers use to assess aircraft recovery performance. Metrics include time to first input, recognition time, and recovery time, as well as whether that input was correct or incorrect. Other metrics include the state of the autopilot and autothrottle, control wheel/sidestick movement resulting in pitch and roll, and inputs to the throttle and rudder. In addition, airplane state measures, such as roll reversals, altitude loss/gain, maximum vertical speed, maximum/minimum air speed, maximum bank angle and maximum g loading, are reviewed as well.

  4. Quantitative metrics for evaluating the phased roll-out of clinical information systems.

    PubMed

    Wong, David; Wu, Nicolas; Watkinson, Peter

    2017-09-01

    We introduce a novel quantitative approach for evaluating the order of roll-out during phased introduction of clinical information systems. Such roll-outs are associated with unavoidable risk due to patients transferring between clinical areas using both the old and new systems. We proposed a simple graphical model of patient flow through a hospital. Using a simple instance of the model, we showed how a roll-out order can be generated by minimising the flow of patients from the new system to the old system. The model was applied to admission and discharge data acquired from 37,080 patient journeys at the Churchill Hospital, Oxford between April 2013 and April 2014. The resulting order was evaluated empirically and produced acceptable orders. The development of data-driven approaches to clinical information system roll-out provides insights that may not necessarily be ascertained through clinical judgment alone. Such methods could make a significant contribution to the smooth running of an organisation during the roll-out of a potentially disruptive technology. Unlike previous approaches, which are based on clinical opinion, the approach described here quantitatively assesses the appropriateness of competing roll-out strategies. The data-driven approach was shown to produce strategies that matched clinical intuition and provides a flexible framework that may be used to plan and monitor clinical information system roll-out. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
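
    The roll-out idea above can be sketched as a greedy heuristic over a patient-flow graph: convert next the clinical area that sends the least flow into areas still running the old system. This is an assumption-laden illustration of the principle, not the authors' exact optimisation:

```python
def rollout_order(areas, flow):
    """Greedy roll-out order for a phased system introduction.

    flow[(a, b)] is the number of patient transfers from area a to
    area b. At each step, convert the area that sends the least flow
    into areas still running the old system, approximating the goal
    of minimising new-system-to-old-system transfers.
    """
    remaining, order = set(areas), []
    while remaining:
        nxt = min(remaining,
                  key=lambda a: sum(flow.get((a, b), 0)
                                    for b in remaining if b != a))
        order.append(nxt)
        remaining.remove(nxt)
    return order

# Toy hospital where flow runs admissions -> ward -> discharge lounge:
# converting the most-downstream area first avoids new-to-old transfers.
flow = {("admissions", "ward"): 100, ("ward", "lounge"): 90,
        ("admissions", "lounge"): 5}
order = rollout_order(["admissions", "ward", "lounge"], flow)
```

    In this toy network the most-downstream area converts first, so no converted area ever discharges a patient into an old-system area.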

  5. Quantitative image quality evaluation of MR images using perceptual difference models

    PubMed Central

    Miao, Jun; Huo, Donglai; Wilson, David L.

    2008-01-01

    The authors are using a perceptual difference model (Case-PDM) to quantitatively evaluate the image quality of the thousands of test images which can be created when optimizing fast magnetic resonance (MR) imaging strategies and reconstruction techniques. In this validation study, they compared human evaluation of MR images from multiple organs and from multiple image reconstruction algorithms to Case-PDM and similar models. The authors found that Case-PDM compared very favorably to human observers in double-stimulus continuous-quality scale and functional measurement theory studies over a large range of image quality. The Case-PDM threshold for nonperceptible differences in a 2-alternative forced choice study varied with the type of image under study, but was ≈1.1 for diffuse image effects, providing a rule of thumb. Ordering the image quality evaluation models, we found overall: Case-PDM ≈ IDM (Sarnoff Corporation) ≈ SSIM [Wang et al. IEEE Trans. Image Process. 13, 600–612 (2004)] > mean squared error ≈ NR [Wang et al. (2004) (unpublished)] > DCTune (NASA) > IQM (MITRE Corporation). The authors conclude that Case-PDM is very useful in MR image evaluation but that one should probably restrict studies to similar images and similar processing, normally not a limitation in image reconstruction studies. PMID:18649487

  6. Extensions and evaluations of a general quantitative theory of forest structure and dynamics

    PubMed Central

    Enquist, Brian J.; West, Geoffrey B.; Brown, James H.

    2009-01-01

    Here, we present the second part of a quantitative theory for the structure and dynamics of forests under demographic and resource steady state. The theory is based on individual-level allometric scaling relations for how trees use resources, fill space, and grow. These scale up to determine emergent properties of diverse forests, including size–frequency distributions, spacing relations, canopy configurations, mortality rates, population dynamics, successional dynamics, and resource flux rates. The theory uniquely makes quantitative predictions for both stand-level scaling exponents and normalizations. We evaluate these predictions by compiling and analyzing macroecological datasets from several tropical forests. The close match between theoretical predictions and data suggests that forests are organized by a set of very general scaling rules. Our mechanistic theory is based on allometric scaling relations, is complementary to “demographic theory,” but is fundamentally different in approach. It provides a quantitative baseline for understanding deviations from predictions due to other factors, including disturbance, variation in branching architecture, asymmetric competition, resource limitation, and other sources of mortality, which are not included in the deliberately simplified theory. The theory should apply to a wide range of forests despite large differences in abiotic environment, species diversity, and taxonomic and functional composition. PMID:19363161

  7. Evaluation of background parenchymal enhancement on breast MRI: a systematic review

    PubMed Central

    Signori, Alessio; Valdora, Francesca; Rossi, Federica; Calabrese, Massimo; Durando, Manuela; Mariscotto, Giovanna; Tagliafico, Alberto

    2017-01-01

    Objective: To perform a systematic review of the methods used for background parenchymal enhancement (BPE) evaluation on breast MRI. Methods: Studies dealing with BPE assessment on breast MRI were retrieved from major medical libraries independently by four reviewers up to 6 October 2015. The keywords used for database searching were “background parenchymal enhancement”, “parenchymal enhancement”, “MRI” and “breast”. The studies were included if qualitative and/or quantitative methods for BPE assessment were described. Results: Of the 420 studies identified, a total of 52 articles were included in the systematic review. 28 studies performed only a qualitative assessment of BPE, 13 studies performed only a quantitative assessment and 11 studies performed both qualitative and quantitative assessments. A wide heterogeneity was found in the MRI sequences and in the quantitative methods used for BPE assessment. Conclusion: A wide variability exists in the quantitative evaluation of BPE on breast MRI. More studies focused on a reliable and comparable method for quantitative BPE assessment are needed. Advances in knowledge: More studies focused on a quantitative BPE assessment are needed. PMID:27925480

  8. The effects of non-evaluative feedback on drivers' self-evaluation and performance.

    PubMed

    Dogan, Ebru; Steg, Linda; Delhomme, Patricia; Rothengatter, Talib

    2012-03-01

    Drivers tend to overestimate their competences, which may result in risk-taking behavior. Providing drivers with feedback has been suggested as one of the solutions to overcome drivers' inaccurate self-evaluations. In practice, many tests and driving simulators provide drivers with non-evaluative feedback, which conveys information on the level of performance but not on what caused the performance. Is this type of feedback indeed effective in reducing self-enhancement biases? The current study aimed to investigate the effect of non-evaluative performance feedback on drivers' self-evaluations using a computerized hazard perception test. A between-subjects design was used, with one group receiving feedback on performance in the hazard perception test and the other group receiving no feedback. The results indicated that drivers had a robust self-enhancement bias in their self-evaluations regardless of the presence of performance feedback and that they systematically estimated their performance to be higher than they actually achieved in the test. Furthermore, they devalued the credibility of the test instead of adjusting their self-evaluations in order to cope with the negative feelings following the failure feedback. We discuss the theoretical and practical implications of these counterproductive effects of non-evaluative feedback. Copyright © 2011 Elsevier Ltd. All rights reserved.

  9. Quantitative evaluation of brain development using anatomical MRI and diffusion tensor imaging☆

    PubMed Central

    Oishi, Kenichi; Faria, Andreia V.; Yoshida, Shoko; Chang, Linda; Mori, Susumu

    2013-01-01

    The development of the brain is structure-specific, and the growth rate of each structure differs depending on the age of the subject. Magnetic resonance imaging (MRI) is often used to evaluate brain development because of the high spatial resolution and contrast that enable the observation of structure-specific developmental status. Currently, most clinical MRIs are evaluated qualitatively to assist in clinical decision-making and diagnosis. The clinical MRI report usually does not provide quantitative values that can be used to monitor developmental status. Recently, the importance of image quantification to detect and evaluate mild-to-moderate anatomical abnormalities has been emphasized because these alterations are possibly related to several psychiatric disorders and learning disabilities. In the research arena, structural MRI and diffusion tensor imaging (DTI) have been widely applied to quantify brain development of the pediatric population. To interpret the values from these MR modalities, a “growth percentile chart,” which describes the mean and standard deviation of the normal developmental curve for each anatomical structure, is required. Although efforts have been made to create such a growth percentile chart based on MRI and DTI, one of the greatest challenges is to standardize the anatomical boundaries of the measured anatomical structures. To avoid inter- and intra-reader variability in anatomical boundary definition, and hence to increase the precision of quantitative measurements, an automated structure parcellation method, customized for the neonatal and pediatric population, has been developed. This method enables quantification of multiple MR modalities using a common analytic framework. In this paper, the attempt to create an MRI- and a DTI-based growth percentile chart, followed by an application to investigate developmental abnormalities related to cerebral palsy, Williams syndrome, and Rett syndrome, is introduced.
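A growth percentile chart reduces, for each structure and age, to a normative mean and standard deviation; a measured value is then placed on the chart via a z-score. A minimal sketch assuming a normal developmental distribution (the normality assumption and function name are illustrative):

```python
import math

def growth_percentile(value, mean, sd):
    """Percentile of a measurement against a normal normative distribution.

    Uses the standard normal CDF, Phi(z) = (1 + erf(z / sqrt(2))) / 2,
    expressed on a 0-100 scale.
    """
    z = (value - mean) / sd
    return 50.0 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

A value at the normative mean falls on the 50th percentile; one standard deviation above falls near the 84th.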

  10. Assisting allied health in performance evaluation: a systematic review.

    PubMed

    Lizarondo, Lucylynn; Grimmer, Karen; Kumar, Saravana

    2014-11-14

    Performance evaluation raises several challenges to allied health practitioners and there is no agreed approach to measuring or monitoring allied health service performance. The aim of this review was to examine the literature on performance evaluation in healthcare to assist in the establishment of a framework that can guide the measurement and evaluation of allied health clinical service performance. This review determined the core elements of a performance evaluation system, tools for evaluating performance, and barriers to the implementation of performance evaluation. A systematic review of the literature was undertaken. Five electronic databases were used to search for relevant articles: MEDLINE, Embase, CINAHL, PsychInfo, and Academic Search Premier. Articles which focussed on any allied health performance evaluation or those which examined performance in health care in general were considered in the review. Content analysis was used to synthesise the findings from individual articles. A total of 37 articles were included in the review. The literature suggests there are core elements involved in performance evaluation which include prioritising clinical areas for measurement, setting goals, selecting performance measures, identifying sources of feedback, undertaking performance measurement, and reporting the results to relevant stakeholders. The literature describes performance evaluation as multi-dimensional, requiring information or data from more than one perspective to provide a rich assessment of performance. A range of tools or instruments are available to capture various perspectives and gather a comprehensive picture of health care quality. Every allied health care delivery system has different performance needs and will therefore require different approaches. However, there are core processes that can be used as a framework to evaluate allied health performance. 
A careful examination of barriers to performance evaluation and subsequent tailoring of

  11. 48 CFR 2936.201 - Evaluation of contractor performance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 7 2010-10-01 2010-10-01 false Evaluation of contractor... Construction 2936.201 Evaluation of contractor performance. The HCA must establish procedures to evaluate construction contractor performance and prepare performance reports as required by FAR 36.201. ...

  12. Optimizing experimental procedures for quantitative evaluation of crop plant performance in high throughput phenotyping systems

    PubMed Central

    Junker, Astrid; Muraya, Moses M.; Weigelt-Fischer, Kathleen; Arana-Ceballos, Fernando; Klukas, Christian; Melchinger, Albrecht E.; Meyer, Rhonda C.; Riewe, David; Altmann, Thomas

    2015-01-01

    Detailed and standardized protocols for plant cultivation in environmentally controlled conditions are an essential prerequisite to conduct reproducible experiments with precisely defined treatments. Setting up appropriate and well defined experimental procedures is thus crucial for the generation of solid evidence and indispensable for successful plant research. Non-invasive and high throughput (HT) phenotyping technologies offer the opportunity to monitor and quantify performance dynamics of several hundreds of plants at a time. Compared to small scale plant cultivations, HT systems have much higher demands, from a conceptual and a logistic point of view, on experimental design, as well as the actual plant cultivation conditions, and the image analysis and statistical methods for data evaluation. Furthermore, cultivation conditions need to be designed to elicit plant performance characteristics corresponding to those under natural conditions. This manuscript describes critical steps in the optimization of procedures for HT plant phenotyping systems. Starting with the model plant Arabidopsis, HT-compatible methods were tested and optimized with regard to growth substrate, soil coverage, watering regime, and experimental design (considering environmental inhomogeneities) in automated plant cultivation and imaging systems. As revealed by metabolite profiling, plant movement did not affect the plants' physiological status. Based on these results, procedures for maize HT cultivation and monitoring were established. Variation of maize vegetative growth in the HT phenotyping system did match well with that observed in the field. The presented results outline important issues to be considered in the design of HT phenotyping experiments for model and crop plants. It thereby provides guidelines for the setup of HT experimental procedures, which are required for the generation of reliable and reproducible data of phenotypic variation for a broad range of applications. 
PMID

  13. Quantitative trait loci for maternal performance for offspring survival in mice.

    PubMed Central

    Peripato, Andréa C; De Brito, Reinaldo A; Vaughn, Ty T; Pletscher, L Susan; Matioli, Sergio R; Cheverud, James M

    2002-01-01

    Maternal performance refers to the effect that the environment provided by mothers has on their offspring's phenotypes, such as offspring survival and growth. Variations in maternal behavior and physiology are responsible for variations in maternal performance, which in turn affects offspring survival. In our study we found females that failed to nurture their offspring and showed abnormal maternal behaviors. The genetic architecture of maternal performance for offspring survival was investigated in 241 females of an F(2) intercross of the SM/J and LG/J inbred mouse strains. Using interval-mapping methods we found two quantitative trait loci (QTL) affecting maternal performance at D2Mit17 + 6 cM and D7Mit21 + 2 cM on chromosomes 2 and 7, respectively. In a two-way genome-wide epistasis scan we found 15 epistatic interactions involving 23 QTL distributed across all chromosomes except 12, 16, and 17. These loci form several small sets of interacting QTL, suggesting a complex set of mechanisms operating to determine maternal performance for offspring survival. Taken all together and correcting for the large number of significant factors, QTL and their interactions explain almost 35% of the phenotypic variation for maternal performance for offspring survival in this cross. This study allowed the identification of many possible candidate genes, as well as the relative size of gene effects and patterns of gene action affecting maternal performance in mice. Detailed behavior observation of mothers from later generations suggests that offspring survival in the first week is related to maternal success in building nests, grooming their pups, providing milk, and/or manifesting aggressive behavior against intruders. PMID:12454078

  14. Determination of a quantitative parameter to evaluate swimming technique based on the maximal tethered swimming test.

    PubMed

    Soncin, Rafael; Mezêncio, Bruno; Ferreira, Jacielle Carolina; Rodrigues, Sara Andrade; Huebner, Rudolf; Serrão, Julio Cerca; Szmuchrowski, Leszek

    2017-06-01

    The aim of this study was to propose a new force parameter, associated with swimmers' technique and performance. Twelve swimmers performed five repetitions of 25 m sprint crawl and a tethered swimming test with maximal effort. The parameters calculated were: the mean swimming velocity for crawl sprint, the mean propulsive force of the tethered swimming test as well as an oscillation parameter calculated from force fluctuation. The oscillation parameter evaluates the force variation around the mean force during the tethered test as a measure of swimming technique. Two parameters showed significant correlations with swimming velocity: the mean force during the tethered swimming (r = 0.85) and the product of the mean force square root and the oscillation (r = 0.86). However, the intercept coefficient was significantly different from zero only for the mean force, suggesting that although the correlation coefficient of the parameters was similar, part of the mean velocity magnitude that was not associated with the mean force was associated with the product of the mean force square root and the oscillation. Thus, force fluctuation during tethered swimming can be used as a quantitative index of swimmers' technique.
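The abstract does not give the exact formula for the oscillation parameter, so the sketch below assumes a coefficient-of-variation-style index (SD of force divided by mean force) purely for illustration, and forms the composite term as the product of the square root of the mean force and the oscillation:

```python
import math
import statistics

def swim_parameters(forces):
    """Mean propulsive force, oscillation index, and composite parameter.

    The oscillation definition (pstdev / mean) is an assumption for
    illustration; the paper's exact formula is not given in the abstract.
    """
    f_mean = statistics.fmean(forces)
    oscillation = statistics.pstdev(forces) / f_mean
    composite = math.sqrt(f_mean) * oscillation
    return f_mean, oscillation, composite
```

A perfectly steady force trace yields zero oscillation, so the composite parameter isolates the fluctuation component that the authors associate with technique.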

  15. Comparison of Diagnostic Performance of Semi-Quantitative Knee Ultrasound and Knee Radiography with MRI: Oulu Knee Osteoarthritis Study.

    PubMed

    Podlipská, Jana; Guermazi, Ali; Lehenkari, Petri; Niinimäki, Jaakko; Roemer, Frank W; Arokoski, Jari P; Kaukinen, Päivi; Liukkonen, Esa; Lammentausta, Eveliina; Nieminen, Miika T; Tervonen, Osmo; Koski, Juhani M; Saarakkala, Simo

    2016-03-01

    Osteoarthritis (OA) is a common degenerative musculoskeletal disease highly prevalent in aging societies worldwide. Traditionally, knee OA is diagnosed using conventional radiography. However, structural changes of articular cartilage or menisci cannot be directly evaluated using this method. On the other hand, ultrasound is a promising tool able to provide direct information on soft tissue degeneration. The aim of our study was to systematically determine the site-specific diagnostic performance of semi-quantitative ultrasound grading of knee femoral articular cartilage, osteophytes and meniscal extrusion, and of radiographic assessment of joint space narrowing and osteophytes, using MRI as a reference standard. Eighty asymptomatic and 79 symptomatic subjects with mean age of 57.7 years were included in the study. Ultrasound performed best in the assessment of femoral medial and lateral osteophytes, and medial meniscal extrusion. In comparison to radiography, ultrasound performed better or at least equally well in identification of tibio-femoral osteophytes, medial meniscal extrusion and medial femoral cartilage morphological degeneration. Ultrasound provides relevant additional diagnostic information on tissue-specific morphological changes not depicted by conventional radiography. Consequently, the use of ultrasound as a complementary imaging tool along with radiography may enable more accurate and cost-effective diagnostics of knee osteoarthritis at the primary healthcare level.

  16. 48 CFR 2452.216-73 - Performance evaluation plan.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Performance evaluation plan... 2452.216-73 Performance evaluation plan. As prescribed in 2416.406(e)(3), insert the following clause in all award fee contracts: Performance Evaluation Plan (AUG 1987) (a) The Government shall...

  17. Defining an Analytic Framework to Evaluate Quantitative MRI Markers of Traumatic Axonal Injury: Preliminary Results in a Mouse Closed Head Injury Model

    PubMed Central

    Sadeghi, N.; Namjoshi, D.; Irfanoglu, M. O.; Wellington, C.; Diaz-Arrastia, R.

    2017-01-01

    Diffuse axonal injury (DAI) is a hallmark of traumatic brain injury (TBI) pathology. Recently, the Closed Head Injury Model of Engineered Rotational Acceleration (CHIMERA) was developed to generate an experimental model of DAI in a mouse. The characterization of DAI using diffusion tensor magnetic resonance imaging (MRI; diffusion tensor imaging, DTI) may provide a useful set of outcome measures for preclinical and clinical studies. The objective of this study was to identify the complex neurobiological underpinnings of DTI features following DAI using a comprehensive and quantitative evaluation of DTI and histopathology in the CHIMERA mouse model. A consistent neuroanatomical pattern of pathology in specific white matter tracts was identified across ex vivo DTI maps and photomicrographs of histology. These observations were confirmed by voxelwise and regional analysis of DTI maps, demonstrating reduced fractional anisotropy (FA) in distinct regions such as the optic tract. Similar regions were identified by quantitative histology and exhibited axonal damage as well as robust gliosis. Additional analysis using a machine-learning algorithm was performed to identify regions and metrics important for injury classification in a manner free from potential user bias. This analysis found that diffusion metrics were able to identify injured brains almost with the same degree of accuracy as the histology metrics. Good agreement between regions detected as abnormal by histology and MRI was also found. The findings of this work elucidate the complexity of cellular changes that give rise to imaging abnormalities and provide a comprehensive and quantitative evaluation of the relative importance of DTI and histological measures to detect brain injury. PMID:28966972

  18. Performance of human fecal anaerobe-associated PCR-based assays in a multi-laboratory method evaluation study

    USGS Publications Warehouse

    Layton, Blythe A.; Cao, Yiping; Ebentier, Darcy L.; Hanley, Kaitlyn; Ballesté, Elisenda; Brandão, João; Byappanahalli, Muruleedhara N.; Converse, Reagan; Farnleitner, Andreas H.; Gentry-Shields, Jennifer; Gourmelon, Michèle; Lee, Chang Soo; Lee, Jiyoung; Lozach, Solen; Madi, Tania; Meijer, Wim G.; Noble, Rachel; Peed, Lindsay; Reischer, Georg H.; Rodrigues, Raquel; Rose, Joan B.; Schriewer, Alexander; Sinigalliano, Chris; Srinivasan, Sangeetha; Stewart, Jill; ,; Laurie, C.; Wang, Dan; Whitman, Richard; Wuertz, Stefan; Jay, Jenny; Holden, Patricia A.; Boehm, Alexandria B.; Shanks, Orin; Griffith, John F.

    2013-01-01

    A number of PCR-based methods for detecting human fecal material in environmental waters have been developed over the past decade, but these methods have rarely received independent comparative testing in large multi-laboratory studies. Here, we evaluated ten of these methods (BacH, BacHum-UCD, Bacteroides thetaiotaomicron (BtH), BsteriF1, gyrB, HF183 endpoint, HF183 SYBR, HF183 Taqman®, HumM2, and Methanobrevibacter smithii nifH (Mnif)) using 64 blind samples prepared in one laboratory. The blind samples contained either one or two fecal sources from human, wastewater or non-human sources. The assay results were assessed for presence/absence of the human markers and also quantitatively while varying the following: 1) classification of samples that were detected but not quantifiable (DNQ) as positive or negative; 2) reference fecal sample concentration unit of measure (such as culturable indicator bacteria, wet mass, total DNA, etc); and 3) human fecal source type (stool, sewage or septage). Assay performance using presence/absence metrics was found to depend on the classification of DNQ samples. The assays that performed best quantitatively varied based on the fecal concentration unit of measure and laboratory protocol. All methods were consistently more sensitive to human stools compared to sewage or septage in both the presence/absence and quantitative analysis. Overall, HF183 Taqman® was found to be the most effective marker of human fecal contamination in this California-based study.
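The sensitivity of the presence/absence results to how DNQ samples are classified can be made concrete with a small confusion-matrix helper; the string labels below are illustrative, not the study's coding scheme:

```python
def presence_absence_metrics(results, truth, dnq_positive=True):
    """Sensitivity and specificity when detected-not-quantifiable ('DNQ')
    samples are scored either as positive or as negative."""
    def as_call(r):
        if r == "DNQ":
            return dnq_positive
        return r == "POS"

    tp = fp = tn = fn = 0
    for r, human_present in zip(results, truth):
        call = as_call(r)
        if call and human_present:
            tp += 1
        elif call and not human_present:
            fp += 1
        elif not call and human_present:
            fn += 1
        else:
            tn += 1
    sens = tp / (tp + fn) if tp + fn else float("nan")
    spec = tn / (tn + fp) if tn + fp else float("nan")
    return sens, spec
```

Scoring DNQ as positive trades specificity for sensitivity, which is why the study's performance rankings shift with that choice.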

  19. An electronic portfolio for quantitative assessment of surgical skills in undergraduate medical education.

    PubMed

    Sánchez Gómez, Serafín; Ostos, Elisa María Cabot; Solano, Juan Manuel Maza; Salado, Tomás Francisco Herrero

    2013-05-06

    We evaluated a newly designed electronic portfolio (e-Portfolio) that provided quantitative evaluation of surgical skills. Medical students at the University of Seville used the e-Portfolio on a voluntary basis for evaluation of their performance in undergraduate surgical subjects. Our new web-based e-Portfolio was designed to evaluate surgical practical knowledge and skills targets. Students recorded each activity on a form, attached evidence, and added their reflections. Students self-assessed their practical knowledge using qualitative criteria (yes/no), and graded their skills according to complexity (basic/advanced) and participation (observer/assistant/independent). A numerical value was assigned to each activity, and the values of all activities were summed to obtain the total score. The application automatically displayed quantitative feedback. We performed qualitative evaluation of the perceived usefulness of the e-Portfolio and quantitative evaluation of the targets achieved. Thirty-seven of 112 students (33%) used the e-Portfolio, of whom 87% reported that they understood the methodology of the portfolio. All students reported an improved understanding of their learning objectives resulting from the numerical visualization of progress, all students reported that the quantitative feedback encouraged their learning, and 79% of students felt that their teachers were more available because they were using the e-Portfolio. Only 51.3% of students reported that the reflective aspects of learning were useful. Individual students achieved a maximum of 65% of the total targets and 87% of the skills targets. The mean total score was 345 ± 38 points. For basic skills, 92% of students achieved the maximum score for participation as an independent operator, and all achieved the maximum scores for participation as an observer and assistant. 
For complex skills, 62% of students achieved the maximum score for participation as an independent operator, and 98% achieved
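The scoring scheme, a numeric value per activity summed into a total, can be sketched as follows; the specific point weights for complexity and participation are hypothetical, since the abstract does not publish the actual table:

```python
# Hypothetical weights -- the abstract does not publish the real point table.
COMPLEXITY = {"basic": 1, "advanced": 2}
PARTICIPATION = {"observer": 1, "assistant": 2, "independent": 3}

def activity_score(complexity, participation):
    """Numeric value of one recorded activity."""
    return COMPLEXITY[complexity] * PARTICIPATION[participation]

def portfolio_total(activities):
    """Total score: sum over (complexity, participation) activity records."""
    return sum(activity_score(c, p) for c, p in activities)
```

With these weights, a basic activity observed scores 1 while an advanced activity performed independently scores 6, so the running total visualizes progress toward skills targets.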

  20. Performance Evaluation Model for Application Layer Firewalls.

    PubMed

    Xuan, Shichang; Yang, Wu; Dong, Hui; Zhang, Jiangchuan

    2016-01-01

    Application layer firewalls protect the trusted area network against information security risks. However, firewall performance may affect user experience. Therefore, performance analysis plays a significant role in the evaluation of application layer firewalls. This paper presents an analytic model of the application layer firewall, based on a system analysis to evaluate the capability of the firewall. In order to enable users to improve the performance of the application layer firewall with limited resources, resource allocation was evaluated to obtain the optimal resource allocation scheme in terms of throughput, delay, and packet loss rate. The proposed model employs the Erlangian queuing model to analyze the performance parameters of the system with regard to the three layers (network, transport, and application layers). Then, the analysis results of all the layers are combined to obtain the overall system performance indicators. A discrete event simulation method was used to evaluate the proposed model. Finally, limited service desk resources were allocated to obtain the values of the performance indicators under different resource allocation scenarios in order to determine the optimal allocation scheme. Under limited resource allocation, this scheme enables users to maximize the performance of the application layer firewall.
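Erlangian queuing analysis of the kind used here rests on standard results such as the Erlang C formula, which gives the probability that an arriving request must queue in an M/M/c system. A self-contained sketch (parameter names are illustrative):

```python
import math

def erlang_c(servers, offered_load):
    """Erlang C probability that an arriving request must wait.

    M/M/c queue with `servers` service desks and offered_load = lambda/mu
    in erlangs; requires offered_load < servers for a stable queue.
    """
    a, c = offered_load, servers
    if a >= c:
        return 1.0  # unstable regime: every request waits
    top = (a ** c / math.factorial(c)) * (c / (c - a))
    bottom = sum(a ** k / math.factorial(k) for k in range(c)) + top
    return top / bottom
```

For a single server the waiting probability reduces to the utilization (the M/M/1 result), and adding a second desk at the same load drops it sharply, which is the kind of resource-allocation trade-off the model explores.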

  1. Modeling logistic performance in quantitative microbial risk assessment.

    PubMed

    Rijgersberg, Hajo; Tromp, Seth; Jacxsens, Liesbeth; Uyttendaele, Mieke

    2010-01-01

    In quantitative microbial risk assessment (QMRA), food safety in the food chain is modeled and simulated. In general, prevalences, concentrations, and numbers of microorganisms in media are investigated in the different steps from farm to fork. The underlying rates and conditions (such as storage times, temperatures, gas conditions, and their distributions) are determined. However, the logistic chain, with its queues (storages, shelves) and mechanisms for ordering products, is usually not taken into account. As a consequence, storage times, which are mutually dependent in successive steps of the chain, cannot be described adequately. This may have a great impact on the tails of risk distributions. Because food safety risks are generally very small, it is crucial to model the tails of (underlying) distributions as accurately as possible. Logistic performance can be modeled by describing the underlying planning and scheduling mechanisms in discrete-event modeling. This is common practice in operations research, specifically in supply chain management. In this article, we present the application of discrete-event modeling in the context of a QMRA for Listeria monocytogenes in fresh-cut iceberg lettuce. We show the potential value of discrete-event modeling in QMRA by calculating logistic interventions (modifications in the logistic chain) and determining their significance with respect to food safety.
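The point about storage-time dependence can be illustrated with a toy discrete-event shelf model: items are replenished in batches, consumed first-in-first-out, and each item's storage time emerges from the interaction of ordering policy and random demand rather than from an independent distribution. All names and parameter values below are illustrative, not the article's model of the lettuce chain:

```python
import random

def simulate_storage_times(days=1000, delivery_size=10,
                           daily_demand_mean=8, seed=1):
    """Toy FIFO shelf: a batch of `delivery_size` items arrives whenever
    stock runs low; random demand consumes the oldest items first.
    Returns the storage time (days on shelf) of each item sold."""
    rng = random.Random(seed)
    shelf = []           # arrival day of each item, oldest first
    storage_times = []
    for day in range(days):
        if len(shelf) < daily_demand_mean:
            shelf.extend([day] * delivery_size)      # replenishment event
        demand = rng.randint(0, 2 * daily_demand_mean)
        for _ in range(min(demand, len(shelf))):
            storage_times.append(day - shelf.pop(0)) # FIFO consumption
    return storage_times
```

The resulting storage-time distribution has a tail shaped by the ordering policy, which is exactly the part of the risk distribution the authors argue cannot be captured by sampling storage times independently.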

  2. 24 CFR 570.491 - Performance and evaluation report.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 3 2010-04-01 2010-04-01 false Performance and evaluation report... Development Block Grant Program § 570.491 Performance and evaluation report. The annual performance and evaluation report shall be submitted in accordance with 24 CFR part 91. (Approved by the Office of Management...

  3. Quantitative evaluation of malignant gliomas damage induced by photoactivation of IR700 dye

    NASA Astrophysics Data System (ADS)

    Sakuma, Morito; Kita, Sayaka; Higuchi, Hideo

    2016-01-01

    The processes involved in malignant gliomas damage were quantitatively evaluated by microscopy. The near-infrared fluorescent dye IR700 that is conjugated to an anti-CD133 antibody (IR700-CD133) specifically targets malignant gliomas (U87MG) and stem cells (BT142) and is endocytosed into the cells. The gliomas are then photodamaged by the release of reactive oxygen species (ROS) and the heat induced by illumination of IR700 by a red laser, and the motility of the vesicles within these cells is altered as a result of cellular damage. To investigate these changes in motility, we developed a new method that measures fluctuations in the intensity of phase-contrast images obtained from small areas within cells. The intensity fluctuation in U87MG cells gradually decreased as cell damage progressed, whereas the fluctuation in BT142 cells increased. The endocytosed IR700 dye was co-localized in acidic organelles such as endosomes and lysosomes. The pH in U87MG cells, as monitored by a pH indicator, was decreased and then gradually increased by the illumination of IR700, while the pH in BT142 cells increased monotonically. In these experiments, the processes of cell damage were quantitatively evaluated according to the motility of vesicles and changes in pH.

  4. Performability evaluation of the SIFT computer

    NASA Technical Reports Server (NTRS)

    Meyer, J. F.; Furchtgott, D. G.; Wu, L. T.

    1979-01-01

    Performability modeling and evaluation techniques are applied to the SIFT computer as it might operate in the computational environment of an air transport mission. User-visible performance of the total system (SIFT plus its environment) is modeled as a random variable taking values in a set of levels of accomplishment. These levels are defined in terms of four attributes of total system behavior: safety, no change in mission profile, no operational penalties, and no economic penalties. The base model is a stochastic process whose states describe the internal structure of SIFT as well as relevant conditions of the environment. Base model state trajectories are related to accomplishment levels via a capability function which is formulated in terms of a 3-level model hierarchy. Performability evaluation algorithms are then applied to determine the performability of the total system for various choices of computer and environment parameter values. Numerical results of those evaluations are presented and, in conclusion, some implications of this effort are discussed.

  5. Quantitative Evaluation of 2 Scatter-Correction Techniques for 18F-FDG Brain PET/MRI in Regard to MR-Based Attenuation Correction.

    PubMed

    Teuho, Jarmo; Saunavaara, Virva; Tolvanen, Tuula; Tuokkola, Terhi; Karlsson, Antti; Tuisku, Jouni; Teräs, Mika

    2017-10-01

    In PET, corrections for photon scatter and attenuation are essential for visual and quantitative consistency. MR attenuation correction (MRAC) is generally conducted by image segmentation and assignment of discrete attenuation coefficients, which offer limited accuracy compared with CT attenuation correction. Potential inaccuracies in MRAC may affect scatter correction, because the attenuation image (μ-map) is used in single scatter simulation (SSS) to calculate the scatter estimate. We assessed the impact of MRAC on scatter correction using 2 scatter-correction techniques and 3 μ-maps for MRAC. Methods: Implementations of the tail-fitted SSS (TF-SSS) and a Monte Carlo-based single scatter simulation (MC-SSS) algorithm on the Philips Ingenuity TF PET/MR were used with 1 CT-based and 2 MR-based μ-maps. Data from 7 subjects were used in the clinical evaluation, and a phantom study using an anatomic brain phantom was conducted. Scatter-correction sinograms were evaluated for each scatter correction method and μ-map. Absolute image quantification was investigated with the phantom data. Quantitative assessment of PET images was performed by volume-of-interest and ratio image analysis. Results: MRAC did not result in large differences in scatter algorithm performance, especially with TF-SSS. Scatter sinograms and scatter fractions did not reveal large differences regardless of the μ-map used. TF-SSS showed slightly higher absolute quantification. The differences in volume-of-interest analysis between TF-SSS and MC-SSS were 3% at maximum in the phantom and 4% in the patient study. Both algorithms showed excellent correlation with each other with no visual differences between PET images. MC-SSS showed a slight dependency on the μ-map used, with a difference of 2% on average and 4% at maximum when a μ-map without bone was used. Conclusion: The effect of different MR-based μ-maps on the performance of scatter correction was minimal in non-time-of-flight 18F-FDG PET

  6. Performance evaluation of cryogenic counter-flow heat exchangers with longitudinal conduction, heat in-leak and property variations

    NASA Astrophysics Data System (ADS)

    Jiang, Q. F.; Zhuang, M.; Zhu, Z. G.; Zhang, Q. Y.; Sheng, L. H.

    2017-12-01

    Counter-flow plate-fin heat exchangers are commonly utilized in cryogenic applications due to their high effectiveness and compact size. For cryogenic heat exchangers in helium liquefaction/refrigeration systems, conventional design theory is no longer applicable and they are usually sensitive to longitudinal heat conduction, heat in-leak from surroundings and variable fluid properties. Governing equations based on distributed parameter method are developed to evaluate performance deterioration caused by these effects. The numerical model could also be applied in many other recuperators with different structures and, hence, available experimental data are used to validate it. For a specific case of the multi-stream heat exchanger in the EAST helium refrigerator, quantitative effects of these heat losses are further discussed, in comparison with design results obtained by the common commercial software. The numerical model could be useful to evaluate and rate the heat exchanger performance under the actual cryogenic environment.
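The baseline against which such deterioration is measured is the ideal effectiveness-NTU relation for a counter-flow exchanger, which assumes no axial conduction, no heat in-leak, and constant properties; the distributed-parameter model quantifies departures from it. A sketch of the ideal relation:

```python
import math

def counterflow_effectiveness(ntu, cr):
    """Ideal counter-flow effectiveness from the effectiveness-NTU method.

    ntu: number of transfer units; cr: heat-capacity-rate ratio
    Cmin/Cmax in [0, 1]. Neglects longitudinal conduction, heat in-leak,
    and property variation, the effects the article's model accounts for.
    """
    if abs(cr - 1.0) < 1e-12:
        return ntu / (1.0 + ntu)          # balanced-flow limit
    e = math.exp(-ntu * (1.0 - cr))
    return (1.0 - e) / (1.0 - cr * e)
```

Cryogenic recuperators operate at very high NTU, where this ideal curve approaches 1; that is precisely the regime in which even small conduction or in-leak losses cause the large effectiveness penalties the article evaluates.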

  7. Evaluation of urine for Leishmania infantum DNA detection by real-time quantitative PCR.

    PubMed

    Pessoa-E-Silva, Rômulo; Mendonça Trajano-Silva, Lays Adrianne; Lopes da Silva, Maria Almerice; da Cunha Gonçalves-de-Albuquerque, Suênia; de Goes, Tayná Correia; Silva de Morais, Rayana Carla; Lopes de Melo, Fábio; de Paiva-Cavalcanti, Milena

    2016-12-01

    The availability of biological samples that can be collected noninvasively has led to even greater interest in applying molecular biology to the diagnosis of visceral leishmaniasis (VL), since these samples increase the safety and comfort of both patients and health professionals. In this context, this work aimed to evaluate the suitability of urine as a specimen for Leishmania infantum kinetoplast DNA detection by real-time quantitative PCR (qPCR). Following the reproducibility analysis, the detection limit of the qPCR assay was set at 5 fg (~0.025 parasites) per μL of urine. From the comparative analysis performed with a set of diagnostic criteria (serological and molecular reference tests), a concordance of 96.08% was obtained (VL-suspected and HIV/AIDS patients, n=51) (P>0.05). The kappa coefficient (95% CI) indicated good agreement between the test and the set of diagnostic criteria (k=0.778±0.151). The detection of Leishmania DNA in urine by qPCR was possible in untreated individuals, both with and without suggestive renal impairment. Rapid depletion of the parasite's DNA in urine after treatment (from one dose of meglumine antimoniate) was suggested by negative qPCR results, indicating urine as a potential alternative specimen for following up the efficacy of therapeutic approaches. Even when evaluated in a clinically heterogeneous set of patients, urine showed good promise as a sample for VL diagnosis by qPCR, also indicating a good negative predictive value for untreated suspected patients. Copyright © 2016 Elsevier B.V. All rights reserved.
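
Quantification in qPCR assays such as this one is conventionally read off a standard curve relating quantification cycle (Cq) to log10 target quantity. A minimal sketch of that back-calculation, with hypothetical slope and intercept values (not taken from this study):

```python
def quantity_from_cq(cq: float, slope: float, intercept: float) -> float:
    """Back-calculate target quantity from a qPCR standard curve,
    where Cq = slope * log10(quantity) + intercept."""
    return 10 ** ((cq - intercept) / slope)

# Hypothetical curve: slope -3.32 (≈100% amplification efficiency), intercept 38.0.
# A Cq of 31.36 then corresponds to ~100 units of target per reaction.
print(round(quantity_from_cq(31.36, -3.32, 38.0), 2))  # → 100.0
```

A slope near -3.32 corresponds to a doubling per cycle; deviations from it indicate sub- or super-optimal amplification efficiency.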

  8. Evaluating resective surgery targets in epilepsy patients: A comparison of quantitative EEG methods.

    PubMed

    Müller, Michael; Schindler, Kaspar; Goodfellow, Marc; Pollo, Claudio; Rummel, Christian; Steimer, Andreas

    2018-07-15

    Quantitative analysis of intracranial EEG is a promising tool to assist clinicians in the planning of resective brain surgery in patients suffering from pharmacoresistant epilepsies. Quantifying the accuracy of such tools, however, is nontrivial, as a ground truth to verify predictions about hypothetical resections is missing. As one possibility to address this, we use customized hypothesis tests to examine the agreement of the methods on a common set of patients. One method uses machine learning techniques to enable the predictive modeling of EEG time series. The other estimates nonlinear interrelation between EEG channels. Both methods were independently shown to distinguish patients with excellent post-surgical outcome (Engel class I) from those without improvement (Engel class IV) when assessing the electrodes associated with the tissue that was actually resected during brain surgery. Using the AND and OR conjunctions of both methods, we evaluate the performance gain that can be expected when combining them. Both methods' assessments correlate strongly positively with the similarity between a hypothetical resection and the corresponding actual resection in class I patients. Moreover, the Spearman rank correlation between the methods' patient rankings is significantly positive. To the best of our knowledge, this is the first study comparing surgery target assessments from fundamentally differing techniques. Although conceptually completely independent, there is a relation between the predictions obtained from both methods. Their broad consensus supports their application in clinical practice to provide physicians with additional information in the process of presurgical evaluation. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.

  9. Qualitative and quantitative evaluation of a local lymph node assay based on ex vivo interleukin-2 production.

    PubMed

    Azam, Philippe; Peiffer, Jean-Luc; Ourlin, Jean-Claude; Bonnet, Pierre-Antoine; Tissier, Marie-Hélène; Vian, Laurence; Fabre, Isabelle

    2005-01-15

    The local lymph node assay (LLNA) is a standard method for the detection of sensitizing chemicals in mice that measures the incorporation of tritiated thymidine in lymph node cells. We have evaluated an alternative to this method based on the interleukin-2 (IL-2) production of lymph node cells. At the mRNA level, no change in the IL-2 gene expression level was detected by real-time PCR analysis. At the protein level, various experimental conditions were tested in order to improve the discrimination of irritants from sensitizers with a restricted set of prototypic compounds. In particular, the use of phytohemagglutinin A (PHA) in an ex vivo cell culture step improved both the signal and the discrimination. Under these optimised conditions, a panel of irritants and potency-graded sensitizers was used to assess the performance of the modified method. IFN-gamma production was used as a positive control. For each compound, a dose-response was performed and stimulation indexes (SI) were determined. Effective concentrations (EC) for each sensitizer were then extracted and compared with published data for the regular LLNA. The IL-2-based LLNA showed performance similar to the regular LLNA at both the qualitative and quantitative levels.

  10. The use of digital PCR to improve the application of quantitative molecular diagnostic methods for tuberculosis.

    PubMed

    Devonshire, Alison S; O'Sullivan, Denise M; Honeyborne, Isobella; Jones, Gerwyn; Karczmarczyk, Maria; Pavšič, Jernej; Gutteridge, Alice; Milavec, Mojca; Mendoza, Pablo; Schimmel, Heinz; Van Heuverswyn, Fran; Gorton, Rebecca; Cirillo, Daniela Maria; Borroni, Emanuele; Harris, Kathryn; Barnard, Marinus; Heydenrych, Anthenette; Ndusilo, Norah; Wallis, Carole L; Pillay, Keshree; Barry, Thomas; Reddington, Kate; Richter, Elvira; Mozioğlu, Erkan; Akyürek, Sema; Yalçınkaya, Burhanettin; Akgoz, Muslum; Žel, Jana; Foy, Carole A; McHugh, Timothy D; Huggett, Jim F

    2016-08-03

    Real-time PCR (qPCR)-based methods, such as the Xpert MTB/RIF, are increasingly being used to diagnose tuberculosis (TB). While qualitative methods are adequate for diagnosis, the therapeutic monitoring of TB patients requires quantitative methods, currently performed using smear microscopy. The potential use of quantitative molecular measurements for therapeutic monitoring has been investigated, but findings have been variable and inconclusive. The lack of an adequate reference method and reference materials is a barrier to understanding the source of such disagreement. Digital PCR (dPCR) offers the potential for an accurate method for quantification of specific DNA sequences in reference materials, which can be used to evaluate quantitative molecular methods for TB treatment monitoring. To assess a novel approach for the development of quality assurance materials, we used dPCR to quantify specific DNA sequences in a range of prototype reference materials and evaluated accuracy between different laboratories and instruments. The materials were then also used to evaluate the quantitative performance of qPCR and Xpert MTB/RIF in eight clinical testing laboratories. dPCR was found to provide results in good agreement with the other methods tested and to be highly reproducible between laboratories without calibration, even when using different instruments. When the reference materials were analysed with qPCR and Xpert MTB/RIF by clinical laboratories, all laboratories were able to correctly rank the reference materials according to concentration; however, there was a marked difference in the measured magnitude. TB is a disease where quantification of the pathogen could lead to better patient management, and qPCR methods offer the potential to rapidly perform such analysis. 
However, our findings suggest that when precisely characterised materials are used to evaluate qPCR methods, the measurement result variation is too high to determine whether molecular quantification
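
dPCR achieves calibration-free quantification by partitioning the reaction and counting positive partitions; the copy concentration then follows from Poisson statistics. A minimal sketch of that calculation (the partition count and volume below are illustrative, not specific to any instrument):

```python
import math

def dpcr_concentration(n_positive: int, n_total: int, partition_vol_ul: float) -> float:
    """Copies per microlitre from digital PCR partition counts.

    Assumes random (Poisson) distribution of template across partitions:
    lambda = -ln(fraction of negative partitions).
    """
    p_negative = (n_total - n_positive) / n_total
    lam = -math.log(p_negative)        # mean copies per partition
    return lam / partition_vol_ul

# e.g. 10,000 of 20,000 partitions positive, at a hypothetical 0.85 nl each
print(round(dpcr_concentration(10_000, 20_000, 0.85e-3), 1))  # → 815.5
```

Because the result depends only on counting and known partition volume, not on a standard curve, dPCR is well suited to value-assigning reference materials like those described above.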

  11. Evaluation of the Abbott RealTime HCV assay for quantitative detection of hepatitis C virus RNA.

    PubMed

    Michelin, Birgit D A; Muller, Zsofia; Stelzl, Evelyn; Marth, Egon; Kessler, Harald H

    2007-02-01

    The Abbott RealTime HCV assay for quantitative detection of HCV RNA has recently been introduced. In this study, the performance of the Abbott RealTime HCV assay was evaluated and compared to the COBAS AmpliPrep/COBAS TaqMan HCV test. Accuracy, linearity, and interassay and intra-assay variations were determined, and a total of 243 routine clinical samples were investigated. When the accuracy of the new assay was tested, the majority of results were found to be within ±0.5 log10 unit of the results obtained by reference laboratories. Determination of linearity resulted in a quasilinear curve up to 1.0 × 10^6 IU/ml. The interassay variation ranged from 15% to 32%, and the intra-assay variation ranged from 5% to 8%. When clinical samples were tested by the Abbott RealTime HCV assay and the results were compared with those obtained by the COBAS AmpliPrep/COBAS TaqMan HCV test, the results for 93% of all samples with positive results by both tests were found to be within ±1.0 log10 unit. The viral loads for all patients measured by the Abbott and Roche assays showed a high correlation (R² = 0.93); quantitative results obtained by the Abbott assay were found to be lower than those obtained by the Roche assay. The Abbott RealTime HCV assay proved to be suitable for use in the routine diagnostic laboratory. The time to results was similar for both assays.

  12. Application of Organosilane Monolayer Template to Quantitative Evaluation of Cancer Cell Adhesive Ability

    NASA Astrophysics Data System (ADS)

    Tanii, Takashi; Sasaki, Kosuke; Ichisawa, Kota; Demura, Takanori; Beppu, Yuichi; Vu, Hoan Anh; Thanh Chi, Hoan; Yamamoto, Hideaki; Sato, Yuko

    2011-06-01

    The adhesive ability of two human pancreatic cancer cell lines was evaluated using organosilane monolayer templates (OMTs). Using the OMT, the spreading area of adhered cells can be limited, and this enables us to focus on the initial attachment process of adhesion. Moreover, it becomes possible to arrange the cells in an array and to quantitatively evaluate the number of attached cells. The adhesive ability of the cancer cells cultured on the OMT was controlled by adding (-)-epigallocatechin-3-gallate (EGCG), which blocks a receptor that mediates cell adhesion and is overexpressed in cancer cells. Measurement of the relative ability of the cancer cells to attach to the OMT revealed that the ability for attachment decreased with increasing EGCG concentration. The results agreed well with the western blot analysis, indicating that the OMT can potentially be employed to evaluate the adhesive ability of various cancer cells.

  13. Assessing the Expected Impact of Global Health Treaties: Evidence From 90 Quantitative Evaluations

    PubMed Central

    Røttingen, John-Arne

    2015-01-01

    We assessed what impact can be expected from global health treaties on the basis of 90 quantitative evaluations of existing treaties on trade, finance, human rights, conflict, and the environment. It appears treaties consistently succeed in shaping economic matters and consistently fail in achieving social progress. There are at least 3 differences between these domains that point to design characteristics that new global health treaties can incorporate to achieve positive impact: (1) incentives for those with power to act on them; (2) institutions designed to bring edicts into effect; and (3) interests advocating their negotiation, adoption, ratification, and domestic implementation. Experimental and quasiexperimental evaluations of treaties would provide more information about what can be expected from this type of global intervention. PMID:25393196

  14. Evaluation of Model Performance over the Maritime Continent

    NASA Astrophysics Data System (ADS)

    Reynolds, C. A.; Barton, N. P.; Chen, S.; Flatau, M. K.; Ridout, J. A.; Janiga, M.; Jensen, T.; Richman, J. G.; Metzger, E. J.; Baranowski, D.

    2017-12-01

    The introduction of high-resolution global coupled models holds promise for extended-range (subseasonal to seasonal) prediction of high-impact weather. While forecast models have shown considerable improvement in the prediction of tropical phenomena on these timescales, specifically in the simulation and prediction of the Madden-Julian Oscillation (MJO), obstacles remain. In particular, many models still have difficulty accurately simulating the propagation of the MJO over the maritime continent. This has been hypothesized, at least in part, to be related to deficiencies in simulating the diurnal cycle over this region, which in turn is dependent on accurate representation of fine-scale atmosphere-ocean-land interactions, orography, and atmospheric convection. These issues have motivated the international Year of Maritime Continent (YMC) effort and the Office of Naval Research Propagation of Intra-Seasonal Tropical Oscillations (PISTON) initiative. In preparation for YMC and PISTON, we closely evaluate the performance of the Navy Earth System Model (NESM), a coupled global forecast model, in representing the diurnal cycle and other prominent phenomena in the maritime continent region. NESM performance is compared with stand-alone atmospheric simulations with prescribed fixed and analyzed sea surface temperatures (SSTs). Initial results from the Dynamics of the Madden-Julian Oscillation field phase (Fall 2011) period indicate that NESM is able to capture the precipitation day-time maximum over land and night-time maximum over ocean, but day-time precipitation over Borneo, Sumatra and the Malay Peninsula is too strong as compared to TRMM observations. The simulation of low-level winds qualitatively captures sea and land breeze patterns as compared with ERA-Interim analysis, with quantitative biases varying by island. The fully-coupled system and the stand-alone atmospheric model simulations are more similar to each other than to the observations, indicating that

  15. Quantitative measurement of feline colonic transit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krevsky, B.; Somers, M.B.; Maurer, A.H.

    1988-10-01

    Colonic transit scintigraphy, a method for quantitatively evaluating the movement of the fecal stream in vivo, was employed to evaluate colonic transit in the cat. Scintigraphy was performed in duplicate in five cats and repeated four times in one cat. After instillation of a ¹¹¹In marker into the cecum through a surgically implanted silicone cecostomy tube, colonic movement of the instillate was quantitated for 24 h using gamma scintigraphy. Antegrade and retrograde motion of radionuclide was observed. The cecum and ascending colon emptied rapidly, with a half-emptying time of 1.68 ± 0.56 h (mean ± SE). After 24 h, 25.1 ± 5.2% of the activity remained in the transverse colon. The progression of the geometric center was initially rapid, followed later by a delayed phase. Geometric center reproducibility was found to be high when analyzed using simple linear regression (slope = 0.92; r = 0.73; P < 0.01). Atropine (0.1 mg/kg im) was found to delay cecum and ascending colon emptying and to delay progression of the geometric center. These results demonstrate both 1) the ability of colonic transit scintigraphy to detect changes in transit induced by pharmacological manipulation and 2) the fact that muscarinic blockade inhibits antegrade transit of the fecal stream. We conclude that feline colonic transit may be studied in a quantitative and reproducible manner with colonic transit scintigraphy.
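
The geometric center tracked in this study is conventionally computed as the activity-weighted mean region number along the colon. A minimal sketch (the five-region scheme and activity fractions below are hypothetical):

```python
def geometric_center(region_counts):
    """Activity-weighted mean region index.

    Regions are numbered proximal (1) to distal (n) along the colon;
    higher values indicate more distal progression of the fecal stream.
    """
    total = sum(region_counts)
    return sum(i * c for i, c in enumerate(region_counts, start=1)) / total

# hypothetical fractional activity in 5 regions, cecum/ascending to excreted stool
print(round(geometric_center([0.10, 0.25, 0.40, 0.20, 0.05]), 2))  # → 2.85
```

Because it collapses a whole activity distribution into one number per time point, the geometric center lends itself to the simple linear-regression reproducibility analysis reported above.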

  16. Evaluation of virtual monoenergetic imaging algorithms for dual-energy carotid and intracerebral CT angiography: Effects on image quality, artefacts and diagnostic performance for the detection of stenosis.

    PubMed

    Leithner, Doris; Mahmoudi, Scherwin; Wichmann, Julian L; Martin, Simon S; Lenga, Lukas; Albrecht, Moritz H; Booz, Christian; Arendt, Christophe T; Beeres, Martin; D'Angelo, Tommaso; Bodelle, Boris; Vogl, Thomas J; Scholtz, Jan-Erik

    2018-02-01

    To investigate the impact of traditional (VMI) and noise-optimized virtual monoenergetic imaging (VMI+) algorithms on quantitative and qualitative image quality, and the assessment of stenosis in carotid and intracranial dual-energy CTA (DE-CTA). DE-CTA studies of 40 patients performed on a third-generation 192-slice dual-source CT scanner were included in this retrospective study. 120-kVp image-equivalent linearly-blended, VMI and VMI+ series were reconstructed. Quantitative analysis included evaluation of contrast-to-noise ratios (CNR) of the aorta, common carotid artery, internal carotid artery, middle cerebral artery, and basilar artery. VMI and VMI+ with highest CNR, and linearly-blended series were rated qualitatively. Three radiologists assessed artefacts and suitability for evaluation at shoulder height, carotid bifurcation, siphon, and intracranial using 5-point Likert scales. Detection and grading of stenosis were performed at carotid bifurcation and siphon. Highest CNR values were observed for 40-keV VMI+ compared to 65-keV VMI and linearly-blended images (P < 0.001). Artefacts were low in all qualitatively assessed series with excellent suitability for supraaortic artery evaluation at shoulder and bifurcation height. Suitability was significantly higher in VMI+ and VMI compared to linearly-blended images for intracranial and ICA assessment (P < 0.002). VMI and VMI+ showed excellent accordance for detection and grading of stenosis at carotid bifurcation and siphon with no differences in diagnostic performance. 40-keV VMI+ showed improved quantitative image quality compared to 65-keV VMI and linearly-blended series in supraaortic DE-CTA. VMI and VMI+ provided increased suitability for carotid and intracranial artery evaluation with excellent assessment of stenosis, but did not translate into increased diagnostic performance. Copyright © 2017 Elsevier B.V. All rights reserved.
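
The CNR figures compared across reconstructions above follow the usual definition: the attenuation difference between vessel and background divided by image noise. A minimal sketch with hypothetical HU values (not taken from the study):

```python
def contrast_to_noise(roi_mean: float, background_mean: float, background_sd: float) -> float:
    """CNR as commonly defined for CT angiography:
    attenuation difference (HU) over image noise (SD of a background ROI)."""
    return (roi_mean - background_mean) / background_sd

# hypothetical HU values for a contrast-filled vessel vs. soft-tissue background
print(contrast_to_noise(900.0, 60.0, 20.0))  # → 42.0
```

Low-keV virtual monoenergetic images raise iodine attenuation (the numerator) but also noise (the denominator), which is why noise-optimized VMI+ can achieve higher CNR at 40 keV than traditional VMI at 65 keV.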

  17. [Study on the quantitative evaluation on the degree of TCM basic syndromes often encountered in patients with primary liver cancer].

    PubMed

    Li, Dong-tao; Ling, Chang-quan; Zhu, De-zeng

    2007-07-01

    To establish a quantitative model for evaluating the degree of the TCM basic syndromes often encountered in patients with primary liver cancer (PLC). Medical literature concerning clinical investigations and TCM syndromes of PLC was collected and analyzed using an expert symposium method, and 100-millimeter scaling was applied, in combination with symptom-degree scoring, to establish a quantitative criterion for classifying the degree of symptoms and signs in patients with PLC. Two models, i.e. the additive model and the additive-multiplicative model, were established using the analytic hierarchy process (AHP) as the mathematical tool to estimate the weights assigned by specialists to the criteria for evaluating basic syndromes in the various layers. The two models were then verified in clinical practice and the outcomes were compared with the fuzzy evaluations made by specialists. Verification on 459 case-times of PLC showed that the coincidence rate between the outcomes derived from specialists and those from the additive model was 84.53%, versus 62.75% for the additive-multiplicative model; the difference between the two was statistically significant (P<0.01). The additive model is thus the principal model suitable for quantitative evaluation of the degree of TCM basic syndromes in patients with PLC.
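
The additive model favored here is, in essence, a weighted sum of symptom-degree scores, with weights derived from AHP pairwise comparisons. A minimal sketch (the scores and weights below are hypothetical, not from the study):

```python
def additive_syndrome_score(symptom_scores, weights):
    """Additive model: weighted sum of symptom-degree scores (0-100 scale).

    Weights (e.g. from AHP pairwise-comparison matrices) are assumed
    to be normalized so they sum to 1.
    """
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(s * w for s, w in zip(symptom_scores, weights))

# hypothetical 100-mm-scale scores for three symptoms and their AHP weights
print(round(additive_syndrome_score([70, 40, 55], [0.5, 0.3, 0.2]), 1))  # → 58.0
```

An additive-multiplicative variant would instead combine sub-scores by products at some layers; the study found the purely additive form agreed better with specialists' judgments.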

  18. A quantitative swab is a good non-invasive alternative to a quantitative biopsy for quantifying bacterial load in wounds healing by second intention in horses.

    PubMed

    Van Hecke, L L; Hermans, K; Haspeslagh, M; Chiers, K; Pint, E; Boyen, F; Martens, A M

    2017-07-01

    The aim of this study was to evaluate different techniques for diagnosing wound infection in wounds healing by second intention in horses and to assess the effect of a vortex and sonication protocol on quantitative bacteriology in specimens with a histologically confirmed biofilm. In 50 wounds healing by second intention, a clinical assessment, a quantitative swab, a semi-quantitative swab, and a swab for cytology were compared to a quantitative tissue biopsy (reference standard). Part of the biopsy specimen was examined histologically for evidence of a biofilm. There was a significant, high correlation (P<0.001; r=0.747) between the outcome of the quantitative swabs and the quantitative biopsies. The semi-quantitative swabs showed a significant, moderate correlation with the quantitative biopsies (P<0.001; ρ=0.524). Higher white blood cell counts for cytology were significantly associated with lower log10 colony-forming units (CFU) in the wounds (P=0.02). Wounds with black granulation tissue showed significantly higher log10 CFU (P=0.003). Specimens with biofilms did not yield higher bacteriological counts after a vortex and sonication protocol was performed to release bacteria from the biofilm. Based on these findings, a quantitative swab is an acceptable non-invasive alternative to a quantitative biopsy for quantifying bacterial load in equine wounds healing by second intention. Copyright © 2017 Elsevier Ltd. All rights reserved.
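
The agreement statistics reported above (Pearson r for quantitative swab vs. biopsy, Spearman ρ for the semi-quantitative swab) can be sketched in pure Python; the paired log10 CFU values below are hypothetical illustrations, not study data:

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson product-moment correlation of two equal-length sequences."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson r of the ranks (no ties assumed)."""
    rank = lambda v: [sorted(v).index(e) + 1 for e in v]
    return pearson_r(rank(x), rank(y))

# hypothetical paired log10 CFU from swabs and biopsies of the same wounds
swab   = [2.1, 3.4, 4.8, 5.2, 6.0, 3.9]
biopsy = [2.4, 3.1, 4.5, 5.6, 5.8, 4.2]
print(round(pearson_r(swab, biopsy), 3), round(spearman_rho(swab, biopsy), 3))
```

Spearman's ρ is the natural choice for the semi-quantitative (ordinal) swab grades, while Pearson's r suits the continuous log10 CFU comparison.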

  19. Phase-contrast CT: Qualitative and Quantitative Evaluation of Capillarized Sinusoids and Trabecular Structure in Human Hepatocellular Carcinoma Tissues.

    PubMed

    Jian, Jianbo; Zhang, Wenxue; Yang, Hao; Zhao, Xinyan; Xuan, Ruijiao; Li, Dongyue; Hu, Chunhong

    2017-01-01

    Capillarization of sinusoids and change of trabecular thickness are the main histologic features in hepatocellular carcinoma (HCC). Of particular interest are the three-dimensional (3D) visualization and quantitative evaluation of such alterations in HCC progression. X-ray phase-contrast computed tomography (PCCT) is an emerging imaging method that provides excellent image contrast for soft tissues. This study aimed to explore the potential of in-line PCCT in microstructure imaging of capillarized sinusoids and trabecular structure in human HCC tissues and to quantitatively evaluate the alterations of those fine structures during the development of HCC. This project was designed as an ex vivo experimental study. The study was approved by the institutional review board, and informed consent was obtained from the patients. Eight human resected HCC tissue samples were imaged using in-line PCCT. After histologic processing, PCCT images and histopathologic data were matched. Fine structures in HCC tissues were revealed. Quantitative analyses of capillarized sinusoids (ie, percentage of sinusoidal area [PSA], sinusoidal volume) and trabecular structure (ie, trabecular thickness, surface-area-to-volume ratio [SA/V]) in low-grade (well or moderately differentiated) and high-grade (poorly differentiated) HCC groups were performed. Using PCCT, the alterations of capillarized sinusoids and trabecular structure were clearly observed in 3D geometry, which was confirmed by the corresponding histologic sections. The 3D quantitative analyses of sinusoids in the high-grade HCC group were significantly different (P < 0.05) in PSA (7.8 ± 2.5%) and sinusoidal volume (2.9 ± 0.6 × 10⁷ µm³) from those in the low-grade HCC group (PSA, 12.9 ± 2.2%; sinusoidal volume, 2.4 ± 0.3 × 10⁷ µm³). Moreover, the 3D quantitative evaluation of the trabecular structure in the high-grade HCC group showed a significant change (P < 0.05) in the

  20. Method and platform standardization in MRM-based quantitative plasma proteomics.

    PubMed

    Percy, Andrew J; Chambers, Andrew G; Yang, Juncong; Jackson, Angela M; Domanski, Dominik; Burkhart, Julia; Sickmann, Albert; Borchers, Christoph H

    2013-12-16

    There exists a growing demand in the proteomics community to standardize experimental methods and liquid chromatography-mass spectrometry (LC/MS) platforms in order to enable the acquisition of more precise and accurate quantitative data. This necessity is heightened by the evolving trend of verifying and validating candidate disease biomarkers in complex biofluids, such as blood plasma, through targeted multiple reaction monitoring (MRM)-based approaches with stable isotope-labeled standards (SIS). Considering the lack of performance standards for quantitative plasma proteomics, we previously developed two reference kits to evaluate the MRM with SIS peptide approach using undepleted and non-enriched human plasma. The first kit tests the effectiveness of the LC/MRM-MS platform (kit #1), while the second evaluates the performance of an entire analytical workflow (kit #2). Here, these kits have been refined for practical use and then evaluated through intra- and inter-laboratory testing on 6 common LC/MS platforms. For an identical panel of 22 plasma proteins, similar concentrations were determined, regardless of the kit, instrument platform, and laboratory of analysis. These results demonstrate the value of the kit and reinforce the utility of standardized methods and protocols. The proteomics community needs standardized experimental protocols and quality control methods in order to improve the reproducibility of MS-based quantitative data. This need is heightened by the evolving trend for MRM-based validation of proposed disease biomarkers in complex biofluids such as blood plasma. We have developed two kits to assist in the inter- and intra-laboratory quality control of MRM experiments: the first kit tests the effectiveness of the LC/MRM-MS platform (kit #1), while the second evaluates the performance of an entire analytical workflow (kit #2). In this paper, we report the use of these kits in intra- and inter-laboratory testing on 6 common LC/MS platforms. 
This

  1. Prognostic Value of Quantitative Stress Perfusion Cardiac Magnetic Resonance.

    PubMed

    Sammut, Eva C; Villa, Adriana D M; Di Giovine, Gabriella; Dancy, Luke; Bosio, Filippo; Gibbs, Thomas; Jeyabraba, Swarna; Schwenke, Susanne; Williams, Steven E; Marber, Michael; Alfakih, Khaled; Ismail, Tevfik F; Razavi, Reza; Chiribiri, Amedeo

    2018-05-01

    This study sought to evaluate the prognostic usefulness of visual and quantitative perfusion cardiac magnetic resonance (CMR) ischemic burden in an unselected group of patients and to assess the validity of consensus-based ischemic burden thresholds extrapolated from nuclear studies. There are limited data on the prognostic value of assessing myocardial ischemic burden by CMR, and there are none using quantitative perfusion analysis. Patients with suspected coronary artery disease referred for adenosine-stress perfusion CMR were included (n = 395; 70% male; age 58 ± 13 years). The primary endpoint was a composite of cardiovascular death, nonfatal myocardial infarction, aborted sudden death, and revascularization after 90 days. Perfusion scans were assessed visually and with quantitative analysis. Cross-validated Cox regression analysis and net reclassification improvement were used to assess the incremental prognostic value of visual or quantitative perfusion analysis over a baseline clinical model, initially as continuous covariates, then using accepted thresholds of ≥2 segments or ≥10% myocardium. After a median 460 days (interquartile range: 190 to 869 days) follow-up, 52 patients reached the primary endpoint. At 2 years, the addition of ischemic burden was found to increase prognostic value over a baseline model of age, sex, and late gadolinium enhancement (baseline model area under the curve [AUC]: 0.75; visual AUC: 0.84; quantitative AUC: 0.85). Dichotomized quantitative ischemic burden performed better than visual assessment (net reclassification improvement 0.043 vs. 0.003 against baseline model). This study was the first to address the prognostic benefit of quantitative analysis of perfusion CMR and to support the use of consensus-based ischemic burden thresholds by perfusion CMR for prognostic evaluation of patients with suspected coronary artery disease. Quantitative analysis provided incremental prognostic value to visual assessment and
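
The net reclassification improvement used above measures how often adding a marker moves events up and nonevents down in risk category. A minimal categorical-NRI sketch; the reclassification pairs below are hypothetical, not from the study:

```python
def net_reclassification_improvement(events, nonevents):
    """Categorical NRI from (old_class, new_class) pairs.

    events / nonevents: lists of ordinal risk-category pairs for subjects
    with and without the outcome. Upward moves are correct for events,
    downward moves are correct for nonevents.
    """
    def net_fraction(pairs, sign):
        up = sum(new > old for old, new in pairs)
        down = sum(new < old for old, new in pairs)
        return sign * (up - down) / len(pairs)
    return net_fraction(events, +1) + net_fraction(nonevents, -1)

# hypothetical reclassification after adding quantitative ischemic burden
events    = [(0, 1), (1, 1), (0, 1), (1, 0)]   # net 1 of 4 correctly moved up
nonevents = [(1, 0), (0, 0), (1, 1), (0, 1)]   # one up, one down: net zero
print(net_reclassification_improvement(events, nonevents))  # → 0.25
```

A positive NRI (such as the 0.043 reported for dichotomized quantitative ischemic burden) indicates that the new model classifies risk more appropriately than the baseline model.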

  2. Performance Theories for Sentence Coding: Some Quantitative Models

    ERIC Educational Resources Information Center

    Aaronson, Doris; And Others

    1977-01-01

    This study deals with the patterns of word-by-word reading times over a sentence when the subject must code the linguistic information sufficiently for immediate verbatim recall. A class of quantitative models is considered that would account for reading times at phrase breaks. (Author/RM)

  3. Formative and Summative Evaluation: Related Issues in Performance Measurement.

    ERIC Educational Resources Information Center

    Wholey, Joseph S.

    1996-01-01

    Performance measurement can serve both formative and summative evaluation functions. Formative evaluation is typically more useful for government purposes whereas performance measurement is more useful than one-shot evaluations of either formative or summative nature. Evaluators should study performance measurement through case studies and…

  4. Comparison of the quantitative analysis performance between pulsed voltage atom probe and pulsed laser atom probe.

    PubMed

    Takahashi, J; Kawakami, K; Raabe, D

    2017-04-01

    The difference in quantitative analysis performance between the voltage-mode and laser-mode of a local electrode atom probe (LEAP3000X HR) was investigated using a Fe-Cu binary model alloy. Solute copper atoms in ferritic iron preferentially field evaporate because of their significantly lower evaporation field than the matrix iron, and thus, the apparent concentration of solute copper tends to be lower than the actual concentration. However, in voltage-mode, the apparent concentration was higher than the actual concentration at 40 K or less due to a detection loss of matrix iron, and the concentration decreased with increasing specimen temperature due to the preferential evaporation of solute copper. On the other hand, in laser-mode, the apparent concentration never exceeded the actual concentration, even at lower temperatures (20 K), and this mode showed better quantitative performance over a wide range of specimen temperatures. These results indicate that the pulsed laser atom probe prevents both detection loss and preferential evaporation under a wide range of measurement conditions. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Evaluating a nursing communication skills training course: The relationships between self-rated ability, satisfaction, and actual performance.

    PubMed

    Mullan, Barbara A; Kothe, Emily J

    2010-11-01

    Effective communication is a vital component of nursing care; however, nurses often lack the skills to communicate with patients, carers and other health care professionals. Communication skills training programs are frequently used to develop these skills. However, there is a paucity of data on how best to evaluate such courses. The aim of the current study was to evaluate the relationship between students' self-rating of their own ability and their satisfaction with a nurse training course, as compared with an objective measure of communication skills. A total of 209 first-year nursing students completed a communication skills program. Both qualitative and quantitative data were collected and associations between measures were investigated. Paired samples t-tests showed significant improvement in self-rated ability over the course of the program. Students generally were very satisfied with the course, which was reflected in both qualitative and quantitative measures. However, neither self-rated ability nor satisfaction was significantly correlated with the objective measure of performance, but self-rated ability and satisfaction were highly correlated with one another. The importance of these findings is discussed and implications for nurse education are proposed. Copyright © 2010 Elsevier Ltd. All rights reserved.

  6. Quantitative electromyography in ambulatory boys with Duchenne muscular dystrophy.

    PubMed

    Verma, Sumit; Lin, Jenny; Travers, Curtis; McCracken, Courtney; Shah, Durga

    2017-12-01

    This study's objective was to evaluate quantitative electromyography (QEMG) using multiple-motor-unit (multi-MUP) analysis in Duchenne muscular dystrophy (DMD). Ambulatory DMD boys, aged 5-15 years, were evaluated with QEMG at 6-month intervals over 14 months. EMG was performed in the right biceps brachii (BB) and tibialis anterior (TA) muscles. Normative QEMG data were obtained from age-matched healthy boys. Wilcoxon signed-rank tests were performed. Eighteen DMD subjects were enrolled, with a median age of 7 (interquartile range 7-10) years. Six-month evaluations were performed on 14 subjects. QEMG showed significantly abnormal mean MUP duration in BB and TA muscles, with no significant change over 6 months. QEMG is a sensitive electrophysiological marker of myopathy in DMD. Preliminary data do not reflect a significant change in MUP parameters over a 6-month interval; long-term follow-up QEMG studies are needed to understand its role as a biomarker for disease progression. Muscle Nerve 56: 1361-1364, 2017. © 2017 Wiley Periodicals, Inc.

  7. Performance evaluation of Louisiana superpave mixtures.

    DOT National Transportation Integrated Search

    2008-12-01

    This report documents the performance of Louisiana Superpave mixtures through laboratory mechanistic tests, mixture volumetric properties, gradation analysis, and early field performance. Thirty Superpave mixtures were evaluated in this study. Fo...

  8. Diagnostic performance and color overlay pattern in shear wave elastography (SWE) for palpable breast mass.

    PubMed

    Park, Jiyoon; Woo, Ok Hee; Shin, Hye Seon; Cho, Kyu Ran; Seo, Bo Kyoung; Kang, Eun Young

    2015-10-01

    The purpose of this study was to evaluate the diagnostic performance of SWE for palpable breast masses and to compare the color overlay pattern in SWE with conventional US and quantitative SWE. SWE and conventional breast US were performed in 133 women with 156 palpable breast lesions (81 benign, 75 malignant) between August 2013 and June 2014. Either pathology or periodic imaging surveillance of more than 2 years served as the reference standard. The performing radiologists were blinded to any previous imaging. US BI-RADS final assessments and qualitative and quantitative SWE measurements were evaluated. The diagnostic performances of grayscale US, SWE, and US combined with SWE were calculated and compared. The correlation between pattern classification and quantitative SWE was evaluated. Both the color overlay pattern and quantitative SWE improved the specificity of conventional US from 81.48% to 96.30% (p=0.0005), without improvement in sensitivity. Color overlay pattern was significantly related to all quantitative SWE parameters and to malignancy rate (p<0.0001). The optimal cutoff of the color overlay pattern was between patterns 2 and 3. Emax with an optimal cutoff at 45.1 kPa showed the highest Az value, sensitivity, specificity and accuracy among the quantitative SWE parameters (p<0.0001). An echogenic halo on grayscale US showed significant correlation with color overlay pattern and pathology (p<0.0001). In the evaluation of palpable breast masses, conventional US combined with SWE improves specificity and reduces the number of biopsies that ultimately yield a benign result. Color overlay pattern classification is quicker and easier and may represent quantitative SWE measurements with similar diagnostic performance. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
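The cutoff-based performance figures above reduce to a sensitivity/specificity calculation over a binary classification. A minimal sketch (the 45.1 kPa Emax cutoff comes from the abstract; the stiffness values and pathology labels below are invented):

```python
def sens_spec(y_true, y_pred):
    """Sensitivity and specificity for binary labels (1 = malignant)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

# Classify by the Emax cutoff of 45.1 kPa, then score against pathology
emax = [30.2, 80.5, 44.0, 120.3, 52.8, 12.9]       # hypothetical Emax values (kPa)
truth = [0, 1, 0, 1, 1, 0]                          # 1 = malignant on pathology
pred = [1 if e >= 45.1 else 0 for e in emax]
sensitivity, specificity = sens_spec(truth, pred)
```

Sweeping the cutoff and recomputing these two rates at each value is what produces the ROC curve whose area is the reported Az.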

  9. Model Performance Evaluation and Scenario Analysis ...

    EPA Pesticide Factsheets

    This tool consists of two parts: model performance evaluation and scenario analysis (MPESA). The model performance evaluation consists of two components: model performance evaluation metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit measures that capture magnitude only, sequence only, and combined magnitude and sequence errors. The performance measures include error analysis, coefficient of determination, Nash-Sutcliffe efficiency, and a new weighted rank method. These performance metrics only provide useful information about the overall model performance. Note that MPESA is based on the separation of observed and simulated time series into magnitude and sequence components. The separation of time series into magnitude and sequence components and the reconstruction back to time series provides diagnostic insights to modelers. For example, traditional approaches lack the capability to identify whether the source of uncertainty in the simulated data is due to the quality of the input data or the way the analyst adjusted the model parameters. This report presents a suite of model diagnostics that identify whether mismatches between observed and simulated data result from magnitude- or sequence-related errors. MPESA offers graphical and statistical options that allow HSPF users to compare observed and simulated time series and identify the parameter values to adjust or the input data to modify. The scenario analysis part of the tool
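The separation idea above, scoring magnitude errors independently of timing (sequence) errors, can be sketched with the Nash-Sutcliffe efficiency plus a sorted-series comparison. This is an illustrative reading of the decomposition, not MPESA's exact algorithm:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance sum of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def magnitude_rmse(obs, sim):
    """Magnitude-only error: RMSE between the sorted series. Sorting discards
    sequencing, so this is zero when the simulation reproduces the right
    values at the wrong times."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((np.sort(obs) - np.sort(sim)) ** 2)))
```

A simulation that is merely a permutation of the observations has magnitude_rmse of zero but a poor NSE, flagging a pure sequence error.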

  10. EXTRACTION AND QUANTITATIVE ANALYSIS OF ELEMENTAL SULFUR FROM SULFIDE MINERAL SURFACES BY HIGH-PERFORMANCE LIQUID CHROMATOGRAPHY. (R826189)

    EPA Science Inventory

    A simple method for the quantitative determination of elemental sulfur on oxidized sulfide minerals is described. Extraction of elemental sulfur in perchloroethylene and subsequent analysis with high-performance liquid chromatography were used to ascertain the total elemental ...

  11. Classification of cassava genotypes based on qualitative and quantitative data.

    PubMed

    Oliveira, E J; Oliveira Filho, O S; Santos, V S

    2015-02-02

    We evaluated the genetic variation of cassava accessions based on qualitative (binomial and multicategorical) and quantitative traits (continuous). We characterized 95 accessions obtained from the Cassava Germplasm Bank of Embrapa Mandioca e Fruticultura; we evaluated these accessions for 13 continuous, 10 binary, and 25 multicategorical traits. First, we analyzed the accessions based only on quantitative traits; next, we conducted joint analysis (qualitative and quantitative traits) based on the Ward-MLM method, which performs clustering in two stages. According to the pseudo-F, pseudo-t2, and maximum likelihood criteria, we identified five and four groups based on quantitative trait and joint analysis, respectively. The smaller number of groups identified based on joint analysis may be related to the nature of the data. On the other hand, quantitative data are more subject to environmental effects in the phenotype expression; this results in the absence of genetic differences, thereby contributing to greater differentiation among accessions. For most of the accessions, the maximum probability of classification was >0.90, independent of the trait analyzed, indicating a good fit of the clustering method. Differences in clustering according to the type of data implied that analysis of quantitative and qualitative traits in cassava germplasm might explore different genomic regions. On the other hand, when joint analysis was used, the means and ranges of genetic distances were high, indicating that the Ward-MLM method is very useful for clustering genotypes when there are several phenotypic traits, such as in the case of genetic resources and breeding programs.

  12. Evaluation of species richness estimators based on quantitative performance measures and sensitivity to patchiness and sample grain size

    NASA Astrophysics Data System (ADS)

    Willie, Jacob; Petre, Charles-Albert; Tagg, Nikki; Lens, Luc

    2012-11-01

    Data from forest herbaceous plants in a site of known species richness in Cameroon were used to test the performance of rarefaction and eight species richness estimators (ACE, ICE, Chao1, Chao2, Jack1, Jack2, Bootstrap and MM). Bias, accuracy, precision and sensitivity to patchiness and sample grain size were the evaluation criteria. An evaluation of the effects of sampling effort and patchiness on diversity estimation is also provided. Stems were identified and counted in linear series of 1-m2 contiguous square plots distributed in six habitat types. Initially, 500 plots were sampled in each habitat type. The sampling process was monitored using rarefaction and a set of richness estimator curves. Curves from the first dataset suggested adequate sampling in riparian forest only. Additional plots ranging from 523 to 2143 were subsequently added in the undersampled habitats until most of the curves stabilized. Jack1 and ICE, the non-parametric richness estimators, performed better, being more accurate and less sensitive to patchiness and sample grain size, and significantly reducing biases that could not be detected by rarefaction and other estimators. This study confirms the usefulness of non-parametric incidence-based estimators, and recommends Jack1 or ICE alongside rarefaction while describing taxon richness and comparing results across areas sampled using similar or different grain sizes. As patchiness varied across habitat types, accurate estimations of diversity did not require the same number of plots. The number of samples needed to fully capture diversity is not necessarily the same across habitats, and can only be known when taxon sampling curves have indicated adequate sampling. Differences in observed species richness between habitats were generally due to differences in patchiness, except between two habitats where they resulted from differences in abundance. We suggest that communities should first be sampled thoroughly using appropriate taxon sampling
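Two of the estimators compared above have closed forms simple enough to sketch: abundance-based Chao1 and the first-order jackknife (Jack1) computed over a plots × species incidence matrix. A minimal implementation under the standard formulas:

```python
import numpy as np

def chao1(abundances):
    """Abundance-based Chao1: S_obs + F1^2 / (2*F2), using the usual
    bias-corrected form F1*(F1-1)/2 when no doubletons are present."""
    a = np.asarray(abundances)
    a = a[a > 0]
    s_obs = a.size
    f1 = np.sum(a == 1)  # singletons
    f2 = np.sum(a == 2)  # doubletons
    if f2 > 0:
        return s_obs + f1 * f1 / (2.0 * f2)
    return s_obs + f1 * (f1 - 1) / 2.0

def jack1(incidence):
    """First-order jackknife from a plots x species presence matrix:
    S_obs + Q1 * (m - 1) / m, where Q1 counts species found in exactly
    one of the m plots ("uniques")."""
    x = np.asarray(incidence) > 0
    m = x.shape[0]                 # number of plots
    counts = x.sum(axis=0)         # plots occupied per species
    s_obs = np.sum(counts > 0)
    q1 = np.sum(counts == 1)
    return s_obs + q1 * (m - 1) / m
```

Plotting either estimate against the number of accumulated plots gives the estimator curves whose stabilization the study used as its stopping criterion.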

  13. Quantitative Motor Performance and Sleep Benefit in Parkinson Disease.

    PubMed

    van Gilst, Merel M; van Mierlo, Petra; Bloem, Bastiaan R; Overeem, Sebastiaan

    2015-10-01

    Many people with Parkinson disease experience "sleep benefit": temporarily improved mobility upon awakening. Here we used quantitative motor tasks to assess the influence of sleep on motor functioning in Parkinson disease. Eighteen Parkinson patients with and 20 without subjective sleep benefit and 20 healthy controls participated. Before and directly after a regular night's sleep and an afternoon nap, subjects performed the timed pegboard dexterity task and quantified finger tapping task. Subjective ratings of motor functioning and mood/vigilance were included. Sleep was monitored using polysomnography. On both tasks, patients were overall slower than healthy controls (night: F2,55 = 16.938, P < 0.001; nap: F2,55 = 15.331, P < 0.001). On the pegboard task, there was a small overall effect of night sleep (F1,55 = 9.695, P = 0.003); both patients and controls were on average slightly slower in the morning. However, in both tasks there was no sleep*group interaction for either nighttime sleep or the afternoon nap. There was a modest correlation between the score on the pegboard task and self-rated motor symptoms among patients (rho = 0.233, P = 0.004). No correlations between task performance and mood/vigilance or sleep time/efficiency were found. A positive effect of sleep on motor function is commonly reported by Parkinson patients. Here we show that the subjective experience of sleep benefit is not paralleled by an actual improvement in motor functioning. Sleep benefit therefore appears to be a subjective phenomenon and not a Parkinson-specific reduction in symptoms. © 2015 Associated Professional Sleep Societies, LLC.

  14. Quantitative evaluation of protocorm growth and fungal colonization in Bletilla striata (Orchidaceae) reveals less-productive symbiosis with a non-native symbiotic fungus.

    PubMed

    Yamamoto, Tatsuki; Miura, Chihiro; Fuji, Masako; Nagata, Shotaro; Otani, Yuria; Yagame, Takahiro; Yamato, Masahide; Kaminaka, Hironori

    2017-02-21

    In nature, orchid plants depend completely on symbiotic fungi for their nutrition at the germination and the subsequent seedling (protocorm) stages. However, only limited quantitative methods for evaluating the orchid-fungus interactions at the protocorm stage are currently available, which greatly constrains our understanding of the symbiosis. Here, we aimed to improve and integrate quantitative evaluations of the growth and fungal colonization in the protocorms of a terrestrial orchid, Bletilla striata, growing on a plate medium. We achieved both symbiotic and asymbiotic germinations for the terrestrial orchid B. striata. The protocorms produced by the two germination methods grew almost synchronously for the first three weeks. At week four, however, the length was significantly lower in the symbiotic protocorms. Interestingly, the dry weight of symbiotic protocorms did not significantly change during the growth period, which implies that there was only limited transfer of carbon compounds from the fungus to the protocorms in this relationship. Next, to evaluate the orchid-fungus interactions, we developed an ink-staining method to observe the hyphal coils in protocorms without preparing thin sections. Crushing the protocorm under the coverglass enables us to observe all hyphal coils in the protocorms with high resolution. For this observation, we established a criterion to categorize the stages of hyphal coils, depending on development and degradation. By counting the symbiotic cells within each stage, it was possible to quantitatively evaluate the orchid-fungus symbiosis. We describe a method for quantitative evaluation of orchid-fungus symbiosis by integrating the measurements of plant growth and fungal colonization. The current study revealed that although fungal colonization was observed in the symbiotic protocorms, the weight of the protocorm did not significantly increase, which is probably due to the incompatibility of the fungus in this symbiosis. These

  15. The Impact of Self-Evaluation Instruction on Student Self-Evaluation, Music Performance, and Self-Evaluation Accuracy

    ERIC Educational Resources Information Center

    Hewitt, Michael P.

    2011-01-01

    The author sought to determine whether self-evaluation instruction had an impact on student self-evaluation, music performance, and self-evaluation accuracy of music performance among middle school instrumentalists. Participants (N = 211) were students at a private middle school located in a metropolitan area of a mid-Atlantic state. Students in…

  16. 48 CFR 1552.209-76 - Contractor performance evaluations.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Contractor performance... 1552.209-76 Contractor performance evaluations. As prescribed in section 1509.170-1, insert the following clause in all applicable solicitations and contracts. Contractor Performance Evaluations (OCT 2002...

  17. 48 CFR 8.406-7 - Contractor Performance Evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Contractor Performance... ACQUISITION PLANNING REQUIRED SOURCES OF SUPPLIES AND SERVICES Federal Supply Schedules 8.406-7 Contractor Performance Evaluation. Ordering activities must prepare an evaluation of contractor performance for each...

  18. 48 CFR 1536.201 - Evaluation of contracting performance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... performance. 1536.201 Section 1536.201 Federal Acquisition Regulations System ENVIRONMENTAL PROTECTION AGENCY... Contracting for Construction 1536.201 Evaluation of contracting performance. (a) The Contracting Officer will... will file the form in the contractor performance evaluation files which it maintains. (e) The Quality...

  19. Quantitative evaluation of haze formation of koji and progression of internal haze by drying of koji during koji making.

    PubMed

    Ito, Kazunari; Gomi, Katsuya; Kariyama, Masahiro; Miyake, Tsuyoshi

    2017-07-01

    The construction of an experimental system that can mimic koji making in the manufacturing setting of a sake brewery is initially required for the quantitative evaluation of mycelia grown on/in koji pellets (haze formation). Koji making with rice was investigated with a solid-state fermentation (SSF) system using a non-airflow box (NAB), which produced uniform conditions in the culture substrate with high reproducibility and allowed for the control of favorable conditions in the substrate during culture. The SSF system using NAB accurately reproduced koji making in a manufacturing setting. To evaluate haze formation during koji making, surfaces and cross sections of koji pellets obtained from koji making tests were observed using a digital microscope. Image analysis was used to distinguish between haze and non-haze sections of koji pellets, enabling the evaluation of haze formation in a batch by measuring the haze rate of a specific number of koji pellets. This method allowed us to obtain continuous and quantitative data on the time course of haze formation. Moreover, drying koji during the late stage of koji making was revealed to cause further penetration of mycelia into koji pellets (internal haze). The koji making test with the SSF system using NAB and quantitative evaluation of haze formation in a batch by image analysis is a useful method for understanding the relations between haze formation and koji making conditions. Copyright © 2017 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.
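The batch haze rate described above reduces, per pellet image, to a threshold-and-count step. A minimal sketch, assuming an already-segmented grayscale image with background pixels set to NaN; the threshold value is hypothetical, not the paper's:

```python
import numpy as np

def haze_rate(gray, threshold=180):
    """Fraction of pellet pixels brighter than `threshold`, i.e. classified
    as haze (mycelium-covered) rather than non-haze rice surface."""
    vals = gray[~np.isnan(gray)]          # keep pellet pixels only
    return float(np.mean(vals > threshold))
```

Averaging haze_rate over a fixed number of pellets sampled from a batch then gives the kind of continuous time course of haze formation reported in the study.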

  20. Qualitative evaluation and quantitative determination of 10 major active components in Carthamus tinctorius L. by high-performance liquid chromatography coupled with diode array detector.

    PubMed

    Fan, Li; Zhao, Hai-Yu; Xu, Man; Zhou, Lei; Guo, Hui; Han, Jian; Wang, Bao-Rong; Guo, De-An

    2009-03-13

    Flavonoids in the water extract of Carthamus tinctorius L. exhibit potent biological activities, including anti-coagulant, vasodilatory, anti-oxidant, neuroprotective and immunosuppressant effects. A high-performance liquid chromatographic method was established to evaluate the quality of Carthamus tinctorius through a simultaneous quantitation of eight flavonoids, hydroxysafflor yellow A (2), 6-hydroxykaempferol 3,6-di-O-beta-glucoside-7-O-beta-glucuronide (3), 6-hydroxykaempferol 3,6,7-tri-O-beta-glucoside (4), 6-hydroxykaempferol 3-O-beta-rutinoside-6-O-beta-glucoside (6), 6-hydroxykaempferol 3,6-di-O-beta-glucoside (7), 6-hydroxyapigenin 6-O-glucoside-7-O-glucuronide (8), anhydrosafflor yellow B (9), and kaempferol 3-O-beta-rutinoside (10), together with two other compounds named guanosine (1) and syringin (5). Among them, compound 8 was identified as a new compound. The compounds were separated on an Alltech Alltima C18 column with gradient elution of acetonitrile and 0.01% trifluoroacetic acid. The detection wavelength was 280 nm. All the compounds showed good linearity (r² ≥ 0.9989). The recoveries, measured at three concentration levels, varied from 94.9% to 105.2%. This method was also validated with respect to precision, repeatability and accuracy, and was successfully applied to quantify the 10 components in 46 batches of C. tinctorius samples from different areas. Significant variations were found in the contents of these compounds in these samples. Compared with the reported analytical methods of C. tinctorius, this simple and reliable method provided a new basis for overall assessment on quality of C. tinctorius and should be considered as a suitable quality control method.

  1. Nuclear medicine and imaging research (Instrumentation and quantitative methods of evaluation)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beck, R.N.; Cooper, M.D.

    1989-09-01

    This program addresses the problems involving the basic science and technology underlying the physical and conceptual tools of radioactive tracer methodology as they relate to the measurement of structural and functional parameters of physiologic importance in health and disease. The principal tool is quantitative radionuclide imaging. The overall objective of this program is to further the development and transfer of radiotracer methodology from basic theory to routine clinical practice in order that individual patients and society as a whole will receive the maximum net benefit from the new knowledge gained. The focus of the research is on the development of new instruments and radiopharmaceuticals, and the evaluation of these through the phase of clinical feasibility.

  2. Performance evaluation of new automated hepatitis B viral markers in the clinical laboratory: two quantitative hepatitis B surface antigen assays and an HBV core-related antigen assay.

    PubMed

    Park, Yongjung; Hong, Duck Jin; Shin, Saeam; Cho, Yonggeun; Kim, Hyon-Suk

    2012-05-01

    We evaluated quantitative hepatitis B surface antigen (qHBsAg) assays and a hepatitis B virus (HBV) core-related antigen (HBcrAg) assay. A total of 529 serum samples from patients with hepatitis B were tested. HBsAg levels were determined by using the Elecsys (Roche Diagnostics, Indianapolis, IN) and Architect (Abbott Laboratories, Abbott Park, IL) qHBsAg assays. HBcrAg was measured by using Lumipulse HBcrAg assay (Fujirebio, Tokyo, Japan). Serum aminotransferases and HBV DNA were respectively quantified by using the Hitachi 7600 analyzer (Hitachi High-Technologies, Tokyo, Japan) and the Cobas AmpliPrep/Cobas TaqMan test (Roche). Precision of the qHBsAg and HBcrAg assays was assessed, and linearity of the qHBsAg assays was verified. All assays showed good precision performance with coefficients of variation between 4.5% and 5.3% except for some levels. Both qHBsAg assays showed linearity from 0.1 to 12,000.0 IU/mL and correlated well (r = 0.9934). HBsAg levels correlated with HBV DNA (r = 0.3373) and with HBcrAg (r = 0.5164), and HBcrAg also correlated with HBV DNA (r = 0.5198; P < .0001). This observation could provide impetus for further research to elucidate the clinical usefulness of the qHBsAg and HBcrAg assays.
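The precision figures above (coefficients of variation between 4.5% and 5.3%) follow directly from replicate measurements at each control level. A minimal sketch (the readings are invented):

```python
from statistics import mean, stdev

def cv_percent(replicates):
    """Coefficient of variation (%): sample SD over the mean of replicate runs."""
    return 100.0 * stdev(replicates) / mean(replicates)

# Hypothetical replicate qHBsAg readings (IU/mL) at a single control level
cv = cv_percent([98.0, 100.0, 102.0])  # 2.0
```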

  3. Development and validation of stability indicating method for the quantitative determination of venlafaxine hydrochloride in extended release formulation using high performance liquid chromatography

    PubMed Central

    Kaur, Jaspreet; Srinivasan, K. K.; Joseph, Alex; Gupta, Abhishek; Singh, Yogendra; Srinivas, Kona S.; Jain, Garima

    2010-01-01

    Objective: Venlafaxine hydrochloride is a structurally novel phenethyl bicyclic antidepressant, and is usually categorized as a serotonin–norepinephrine reuptake inhibitor (SNRI) but it has been referred to as a serotonin–norepinephrine–dopamine reuptake inhibitor. It inhibits the reuptake of dopamine. Venlafaxine HCl is widely prescribed in the form of sustained release formulations. In the current article we report the development and validation of a fast and simple stability indicating, isocratic high performance liquid chromatographic (HPLC) method for the determination of venlafaxine hydrochloride in sustained release formulations. Materials and Methods: The quantitative determination of venlafaxine hydrochloride was performed on a Kromasil C18 analytical column (250 × 4.6 mm i.d., 5 μm particle size) with 0.01 M phosphate buffer (pH 4.5): methanol (40:60) as a mobile phase, at a flow rate of 1.0 ml/min. UV detection was performed at 225 nm. Results: During method validation, parameters such as precision, linearity, accuracy, stability, limit of quantification and detection and specificity were evaluated, and all remained within acceptable limits. Conclusions: The method has been successfully applied for the quantification and dissolution profiling of venlafaxine HCl in a sustained release formulation. The method presents a simple and reliable solution for the routine quantitative analysis of venlafaxine HCl. PMID:21814426

  4. Effects of Performers' External Characteristics on Performance Evaluations.

    ERIC Educational Resources Information Center

    Bermingham, Gudrun A.

    2000-01-01

    States that fairness has been a major concern in the field of music adjudication. Reviews the research literature to reveal information about three external characteristics (race, gender, and physical attractiveness) that may affect judges' performance evaluations and influence fairness of music adjudication. Includes references. (CMK)

  5. Ionization Electron Signal Processing in Single Phase LArTPCs I. Algorithm Description and Quantitative Evaluation with MicroBooNE Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, C.; et al.

    We describe the concept and procedure of drifted-charge extraction developed in the MicroBooNE experiment, a single-phase liquid argon time projection chamber (LArTPC). This technique converts the raw digitized TPC waveform to the number of ionization electrons passing through a wire plane at a given time. A robust recovery of the number of ionization electrons from both induction and collection anode wire planes will augment the 3D reconstruction, and is particularly important for tomographic reconstruction algorithms. A number of building blocks of the overall procedure are described. The performance of the signal processing is quantitatively evaluated by comparing extracted charge with the true charge through a detailed TPC detector simulation taking into account position-dependent induced current inside a single wire region and across multiple wires. Some areas for further improvement of the performance of the charge extraction procedure are also discussed.
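Drifted-charge extraction of this kind is, at its core, a deconvolution of the digitized waveform with the detector and electronics response (MicroBooNE's actual procedure is a considerably more elaborate 2-D, noise-filtered version). A toy 1-D frequency-domain sketch, with an illustrative regularization floor:

```python
import numpy as np

def deconvolve(waveform, response, floor=1e-3):
    """Recover an ionization-charge time series from a waveform by dividing
    out the response in the frequency domain (circular convolution model).
    `floor` guards against division by near-zero response bins; its value
    here is illustrative, not a tuned parameter."""
    n = len(waveform)
    W = np.fft.rfft(waveform)
    R = np.fft.rfft(response, n=n)
    R = np.where(np.abs(R) < floor, floor, R)
    return np.fft.irfft(W / R, n=n)
```

Under this forward model, convolving a sparse charge series with the response and then deconvolving recovers the original charges to floating-point precision.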

  6. Infusion of Quantitative and Statistical Concepts into Biology Courses Does Not Improve Quantitative Literacy

    ERIC Educational Resources Information Center

    Beck, Christopher W.

    2018-01-01

    Multiple national reports have pushed for the integration of quantitative concepts into the context of disciplinary science courses. The aim of this study was to evaluate the quantitative and statistical literacy of biology students and explore learning gains when those skills were taught implicitly in the context of biology. I examined gains in…

  7. Quantitative Muscle Ultrasonography in Carpal Tunnel Syndrome.

    PubMed

    Lee, Hyewon; Jee, Sungju; Park, Soo Ho; Ahn, Seung-Chan; Im, Juneho; Sohn, Min Kyun

    2016-12-01

    To assess the reliability of quantitative muscle ultrasonography (US) in healthy subjects and to evaluate the correlation between quantitative muscle US findings and electrodiagnostic study results in patients with carpal tunnel syndrome (CTS). The clinical significance of quantitative muscle US in CTS was also assessed. Twenty patients with CTS and 20 age-matched healthy volunteers were recruited. All control and CTS subjects underwent a bilateral median and ulnar nerve conduction study (NCS) and quantitative muscle US. Transverse US images of the abductor pollicis brevis (APB) and abductor digiti minimi (ADM) were obtained to measure muscle cross-sectional area (CSA), thickness, and echo intensity (EI). EI was determined using computer-assisted, grayscale analysis. Inter-rater and intra-rater reliability for quantitative muscle US in control subjects, and differences in muscle thickness, CSA, and EI between the CTS patient and control groups were analyzed. Relationships between quantitative US parameters and electrodiagnostic study results were evaluated. Quantitative muscle US had high inter-rater and intra-rater reliability in the control group. Muscle thickness and CSA were significantly decreased, and EI was significantly increased in the APB of the CTS group (all p<0.05). EI demonstrated a significant positive correlation with latency of the median motor and sensory NCS in CTS patients (p<0.05). These findings suggest that quantitative muscle US parameters may be useful for detecting muscle changes in CTS. Further study involving patients with other neuromuscular diseases is needed to evaluate peripheral muscle change using quantitative muscle US.

  8. Alternative performance measures for evaluating congestion.

    DOT National Transportation Integrated Search

    2004-04-01

    This report summarizes the results of the work performed under the project Alternative Performance Measures for Evaluating Congestion. The study first outlines existing approaches to looking at congestion. It then builds on the previous work in the...

  9. Thermal evaluation for exposed stone house with quantitative and qualitative approach in mountainous area, Wonosobo, Indonesia

    NASA Astrophysics Data System (ADS)

    Hermawan, Hermawan; Prianto, Eddy

    2017-12-01

    A building can be considered as having good thermal performance if it makes its occupants comfortable. Thermal comfort can be seen from the occupants' responses toward architectural elements and the environment, such as lighting, room crowding, air temperature, humidity, oxygen level, and occupants' behaviours. The objective of this research is to analyse the thermal performance of four houses with different orientations in a mountainous area. The research was conducted on four exposed stone houses with four different orientations on the slope of Sindoro Mountain, which has a relatively cool temperature of about 26°C. The measurement of the elements above was done quantitatively and qualitatively for 24 hours. The results are as follows. First, the most comfortable house is the west-oriented house. Second, between the quantitative and qualitative observations there is no significant difference (±5%). Third, the occupants' behaviours (caring and genen) are also factors influencing occupant comfort.

  10. Quantitative evaluation of potential irradiation geometries for carbon-ion beam grid therapy.

    PubMed

    Tsubouchi, Toshiro; Henry, Thomas; Ureba, Ana; Valdman, Alexander; Bassler, Niels; Siegbahn, Albert

    2018-03-01

    Radiotherapy using grids containing cm-wide beam elements has been carried out sporadically for more than a century. During the past two decades, preclinical research on radiotherapy with grids containing small beam elements, 25 μm-0.7 mm wide, has been performed. Grid therapy with larger beam elements is technically easier to implement, but the normal tissue tolerance to the treatment is decreasing. In this work, a new approach in grid therapy, based on irradiations with grids containing narrow carbon-ion beam elements was evaluated dosimetrically. The aim formulated for the suggested treatment was to obtain a uniform target dose combined with well-defined grids in the irradiated normal tissue. The gain, obtained by crossfiring the carbon-ion beam grids over a simulated target volume, was quantitatively evaluated. The dose distributions produced by narrow rectangular carbon-ion beams in a water phantom were simulated with the PHITS Monte Carlo code. The beam-element height was set to 2.0 cm in the simulations, while the widths varied from 0.5 to 10.0 mm. A spread-out Bragg peak (SOBP) was then created for each beam element in the grid, to cover the target volume with dose in the depth direction. The dose distributions produced by the beam-grid irradiations were thereafter constructed by adding the dose profiles simulated for single beam elements. The variation of the valley-to-peak dose ratio (VPDR) with depth in water was thereafter evaluated. The separation of the beam elements inside the grids were determined for different irradiation geometries with a selection criterion. The simulated carbon-ion beams remained narrow down to the depths of the Bragg peaks. With the formulated selection criterion, a beam-element separation which was close to the beam-element width was found optimal for grids containing 3.0-mm-wide beam elements, while a separation which was considerably larger than the beam-element width was found advantageous for grids containing 0.5-mm

  11. Dynamic phase differences based on quantitative phase imaging for the objective evaluation of cell behavior.

    PubMed

    Krizova, Aneta; Collakova, Jana; Dostal, Zbynek; Kvasnica, Lukas; Uhlirova, Hana; Zikmund, Tomas; Vesely, Pavel; Chmelik, Radim

    2015-01-01

    Quantitative phase imaging (QPI) brought innovation to noninvasive observation of live cell dynamics seen as cell behavior. Unlike Zernike phase contrast or differential interference contrast, QPI provides quantitative information about cell dry mass distribution. We used such data for objective evaluation of live cell behavioral dynamics by the advanced method of dynamic phase differences (DPDs). The DPDs method is considered a rational instrument offered by QPI. By subtracting the antecedent from the subsequent image in a time-lapse series, only the changes in mass distribution in the cell are detected. The result is either visualized as a two-dimensional color-coded projection of these two states of the cell or as a time dependence of changes quantified in picograms. Then, in a series of time-lapse recordings, the chain of cell mass distribution changes that would otherwise escape attention is revealed. Consequently, new salient features of live cell behavior should emerge. The construction of the DPDs method and results illustrating the approach are presented. The advantage of the DPDs application is demonstrated on cells exposed to an osmotic challenge. For time-lapse acquisition of quantitative phase images, the recently developed coherence-controlled holographic microscope was employed.
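    The core of the DPDs method, subtracting the antecedent from the subsequent phase image and quantifying the mass change in picograms, can be sketched as follows. The calibration constant and the toy 5×5 frames are hypothetical; the real picogram conversion depends on wavelength, pixel area and the specific refraction increment:

```python
import numpy as np

def dynamic_phase_difference(antecedent, subsequent, pg_per_unit=1.0):
    """Subtract the antecedent from the subsequent quantitative phase
    image of a time-lapse series, detecting only the changes in cell
    dry-mass distribution. pg_per_unit is a hypothetical calibration
    constant (picograms per phase unit per pixel)."""
    diff = subsequent - antecedent            # 2-D map of mass redistribution
    gained = diff[diff > 0].sum() * pg_per_unit   # mass arriving in regions
    lost = -diff[diff < 0].sum() * pg_per_unit    # mass leaving regions
    return diff, gained, lost

# Toy pair of 5x5 phase frames: a parcel of dry mass shifts one pixel right
t0 = np.zeros((5, 5)); t0[2, 1] = 4.0
t1 = np.zeros((5, 5)); t1[2, 2] = 4.0
diff, gained, lost = dynamic_phase_difference(t0, t1)
print(gained, lost)   # 4.0 4.0 -> mass relocated, total dry mass unchanged
```

    The signed difference map is what would be rendered as the two-dimensional color-coded projection; the gained/lost sums give the time dependence in picograms.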

  12. Dynamic phase differences based on quantitative phase imaging for the objective evaluation of cell behavior

    NASA Astrophysics Data System (ADS)

    Krizova, Aneta; Collakova, Jana; Dostal, Zbynek; Kvasnica, Lukas; Uhlirova, Hana; Zikmund, Tomas; Vesely, Pavel; Chmelik, Radim

    2015-11-01

    Quantitative phase imaging (QPI) brought innovation to noninvasive observation of live cell dynamics seen as cell behavior. Unlike Zernike phase contrast or differential interference contrast, QPI provides quantitative information about cell dry mass distribution. We used such data for objective evaluation of live cell behavioral dynamics by the advanced method of dynamic phase differences (DPDs). The DPDs method is considered a rational instrument offered by QPI. By subtracting the antecedent from the subsequent image in a time-lapse series, only the changes in mass distribution in the cell are detected. The result is either visualized as a two-dimensional color-coded projection of these two states of the cell or as a time dependence of changes quantified in picograms. Then, in a series of time-lapse recordings, the chain of cell mass distribution changes that would otherwise escape attention is revealed. Consequently, new salient features of live cell behavior should emerge. The construction of the DPDs method and results illustrating the approach are presented. The advantage of the DPDs application is demonstrated on cells exposed to an osmotic challenge. For time-lapse acquisition of quantitative phase images, the recently developed coherence-controlled holographic microscope was employed.

  13. Quantitative MR imaging in fracture dating--Initial results.

    PubMed

    Baron, Katharina; Neumayer, Bernhard; Widek, Thomas; Schick, Fritz; Scheicher, Sylvia; Hassler, Eva; Scheurer, Eva

    2016-04-01

    For exact age determination of bone fractures in a forensic context (e.g. in cases of child abuse), improved knowledge of the time course of the healing process and the use of non-invasive modern imaging technology are of high importance. To date, fracture dating has been based on radiographic methods, determining the callus status and thereby relying on an expert's experience. As a novel approach, this study aims to investigate the applicability of magnetic resonance imaging (MRI) for bone fracture dating by systematically investigating time-resolved changes in quantitative MR characteristics after a fracture event. Prior to investigating fracture healing in children, adults were examined in order to test the methodology for this application. Altogether, 31 MR examinations in 17 subjects (♀: 11, ♂: 6; median age 34 ± 15 y, scanned 1-5 times over a period of up to 200 days after the fracture event) were performed on a clinical 3T MR scanner (TimTrio, Siemens AG, Germany). All subjects were treated conservatively for a fracture in either a long bone or the collar bone. Both qualitative and quantitative MR measurements were performed in all subjects. MR sequences for quantitative measurement of the relaxation times T1 and T2 in the fracture gap and musculature were applied. Maps of the quantitative MR parameters T1, T2, and magnetisation transfer ratio (MTR) were calculated and evaluated by investigating changes over time in the fractured area using defined ROIs. Additionally, muscle areas were examined as reference regions to validate this approach. Quantitative evaluation of 23 MR data sets (12 test subjects, ♀: 7, ♂: 5) showed an initial peak in T1 values in the fractured area (T1=1895 ± 607 ms), which decreased over time to a value of 1094 ± 182 ms (200 days after the fracture event). T2 values also peaked for early-stage fractures (T2=115 ± 80 ms) and decreased to 73 ± 33 ms within 21 days after the fracture event. After that time point, no

  14. A Quantitative Evaluation of Dissolved Oxygen Instrumentation

    NASA Technical Reports Server (NTRS)

    Pijanowski, Barbara S.

    1971-01-01

    The implications of the presence of dissolved oxygen in water are discussed in terms of its deleterious or beneficial effects, depending on the functional consequences to those affected, e.g., the industrialist, the oceanographer, and the ecologist. The paper is devoted primarily to an examination of the performance of five commercially available dissolved oxygen meters. The design of each is briefly reviewed and ease or difficulty of use in the field described. Specifically, the evaluation program treated a number of parameters and user considerations including an initial check and trial calibration for each instrument and a discussion of the measurement methodology employed. Detailed test results are given relating to the effects of primary power variation, water-flow sensitivity, response time, relative accuracy of dissolved-oxygen readout, temperature accuracy (for those instruments which included this feature), error and repeatability, stability, pressure and other environmental effects, and test results obtained in the field. Overall instrument performance is summarized comparatively by chart.

  15. A workload model and measures for computer performance evaluation

    NASA Technical Reports Server (NTRS)

    Kerner, H.; Kuemmerle, K.

    1972-01-01

    A generalized workload definition is presented which constructs measurable workloads of unit size from workload elements, called elementary processes. An elementary process makes almost exclusive use of one of the processors, CPU, I/O processor, etc., and is measured by the cost of its execution. Various kinds of user programs can be simulated by quantitative composition of elementary processes into a type. The character of the type is defined by the weights of its elementary processes and its structure by the amount and sequence of transitions between its elementary processes. A set of types is batched to a mix. Mixes of identical cost are considered as equivalent amounts of workload. These formalized descriptions of workloads allow investigators to compare the results of different studies quantitatively. Since workloads of different composition are assigned a unit of cost, these descriptions enable determination of cost effectiveness of different workloads on a machine. Subsequently performance parameters such as throughput rate, gain factor, internal and external delay factors are defined and used to demonstrate the effects of various workload attributes on the performance of a selected large scale computer system.
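    The composition of elementary processes into types, and of types into mixes of equal cost, can be sketched as follows. The elementary-process costs and weights are illustrative placeholders, not values from the paper:

```python
from dataclasses import dataclass

# Cost of executing one elementary process; illustrative values, since an
# elementary process is measured by the cost of its execution
ELEMENTARY_COST = {"cpu": 1.0, "io": 2.5}

@dataclass
class ProgramType:
    """A user-program type: the weights of its elementary processes."""
    weights: dict   # elementary process -> weight (executions per instance)

    def cost(self):
        return sum(w * ELEMENTARY_COST[p] for p, w in self.weights.items())

def mix_cost(mix):
    """A mix batches types; mixes of identical cost are treated as
    equivalent amounts of workload."""
    return sum(t.cost() * n for t, n in mix)

cpu_bound = ProgramType({"cpu": 8, "io": 1})   # character set by CPU weight
io_bound = ProgramType({"cpu": 2, "io": 4})    # character set by I/O weight
mix = [(cpu_bound, 2), (io_bound, 3)]
print(mix_cost(mix))   # 57.0 cost units of workload
```

    Assigning every mix a cost in the same units is what lets different studies, and different workload compositions on one machine, be compared quantitatively.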

  16. The Use of Mouse Models of Breast Cancer and Quantitative Image Analysis to Evaluate Hormone Receptor Antigenicity after Microwave-assisted Formalin Fixation

    PubMed Central

    Engelberg, Jesse A.; Giberson, Richard T.; Young, Lawrence J.T.; Hubbard, Neil E.

    2014-01-01

    Microwave methods of fixation can dramatically shorten fixation times while preserving tissue structure; however, it remains unclear if adequate tissue antigenicity is preserved. To assess and validate antigenicity, robust quantitative methods and animal disease models are needed. We used two mouse mammary models of human breast cancer to evaluate microwave-assisted and standard 24-hr formalin fixation. The mouse models expressed four antigens prognostic for breast cancer outcome: estrogen receptor, progesterone receptor, Ki67, and human epidermal growth factor receptor 2. Using pathologist evaluation and novel methods of quantitative image analysis, we measured and compared the quality of antigen preservation, percentage of positive cells, and line plots of cell intensity. Visual evaluations by pathologists established that the amounts and patterns of staining were similar in tissues fixed by the different methods. The results of the quantitative image analysis provided a fine-grained evaluation, demonstrating that tissue antigenicity is preserved in tissues fixed using microwave methods. Evaluation of the results demonstrated that a 1-hr, 150-W fixation is better than a 45-min, 150-W fixation followed by a 15-min, 650-W fixation. The results demonstrated that microwave-assisted formalin fixation can standardize fixation times to 1 hr and produce immunohistochemistry that is in every way commensurate with longer conventional fixation methods. PMID:24682322

  17. Qualitative, semi-quantitative, and quantitative simulation of the osmoregulation system in yeast

    PubMed Central

    Pang, Wei; Coghill, George M.

    2015-01-01

    In this paper we demonstrate how Morven, a computational framework which can perform qualitative, semi-quantitative, and quantitative simulation of dynamical systems using the same model formalism, is applied to study the osmotic stress response pathway in yeast. First, the Morven framework itself is briefly introduced in terms of the model formalism employed and the output format. We then built a qualitative model for the biophysical process of osmoregulation in yeast, and a global qualitative-level picture was obtained through qualitative simulation of this model. Furthermore, we constructed a Morven model based on an existing quantitative model of the osmoregulation system. This model was then simulated qualitatively, semi-quantitatively, and quantitatively. The obtained simulation results are presented with an analysis. Finally, the future development of the Morven framework for modelling dynamic biological systems is discussed. PMID:25864377

  18. Building Leadership Talent through Performance Evaluation

    ERIC Educational Resources Information Center

    Clifford, Matthew

    2015-01-01

    Most states and districts scramble to provide professional development to support principals, but "principal evaluation" is often lost amid competing priorities. Evaluation is an important method for supporting principal growth, communicating performance expectations to principals, and improving leadership practice. It provides leaders…

  19. Quantitation of aortic and mitral regurgitation in the pediatric population: evaluation by radionuclide angiocardiography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hurwitz, R.A.; Treves, S.; Freed, M.

    The ability to quantitate aortic (AR) or mitral regurgitation (MR), or both, by radionuclide angiocardiography was evaluated in children and young adults at rest and during isometric exercise. Regurgitation was estimated by determining the ratio of left ventricular stroke volume to right ventricular stroke volume obtained during equilibrium ventriculography. The radionuclide measurement was compared with results of cineangiography, with good correlation between both studies in 47 of 48 patients. Radionuclide stroke volume ratio was used to classify severity: the group with equivocal regurgitation differed from the group with mild regurgitation (p less than 0.02); patients with mild regurgitation differed from those with moderate regurgitation (p less than 0.001); and those with moderate regurgitation differed from those with severe regurgitation (p less than 0.01). The stroke volume ratio was responsive to isometric exercise, remaining constant or increasing in 16 of 18 patients. After surgery to correct regurgitation, the stroke volume ratio significantly decreased from preoperative measurements in all 7 patients evaluated. Results from the present study demonstrate that a stroke volume ratio greater than 2.0 is compatible with moderately severe regurgitation and that a ratio greater than 3.0 suggests the presence of severe regurgitation. Thus, radionuclide angiocardiography should be useful for noninvasive quantitation of AR or MR, or both, helping define the course of young patients with left-side valvular regurgitation.
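    The stroke-volume-ratio computation and severity grading can be sketched as follows. Only the 2.0 and 3.0 cut-offs come from the study; the input volumes and the label for lower ratios are illustrative:

```python
def stroke_volume_ratio(lv_stroke_volume, rv_stroke_volume):
    """LV/RV stroke-volume ratio from equilibrium ventriculography."""
    return lv_stroke_volume / rv_stroke_volume

def classify_regurgitation(ratio):
    """Severity implied by the study's thresholds: > 2.0 is compatible
    with moderately severe and > 3.0 suggests severe regurgitation.
    The label for lower ratios is illustrative, not from the study."""
    if ratio > 3.0:
        return "severe"
    if ratio > 2.0:
        return "moderately severe"
    return "mild to moderate"

# Hypothetical stroke volumes (arbitrary units) giving a ratio of 3.4
print(classify_regurgitation(stroke_volume_ratio(170.0, 50.0)))  # severe
```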

  20. Thrust stand evaluation of engine performance improvement algorithms in an F-15 airplane

    NASA Technical Reports Server (NTRS)

    Conners, Timothy R.

    1992-01-01

    An investigation is underway to determine the benefits of a new propulsion system optimization algorithm in an F-15 airplane. The performance seeking control (PSC) algorithm optimizes the quasi-steady-state performance of an F100 derivative turbofan engine for several modes of operation. The PSC algorithm uses an onboard software engine model that calculates thrust, stall margin, and other unmeasured variables for use in the optimization. As part of the PSC test program, the F-15 aircraft was operated on a horizontal thrust stand. Thrust was measured with highly accurate load cells. The measured thrust was compared to onboard model estimates and to results from posttest performance programs. Thrust changes using the various PSC modes were recorded. Those results were compared to benefits using the less complex highly integrated digital electronic control (HIDEC) algorithm. The PSC maximum thrust mode increased intermediate power thrust by 10 percent. The PSC engine model did very well at estimating measured thrust and closely followed the transients during optimization. Quantitative results from the evaluation of the algorithms and performance calculation models are included with emphasis on measured thrust results. The report presents a description of the PSC system and a discussion of factors affecting the accuracy of the thrust stand load measurements.

  1. Quantitative Determination of Bioactive Constituents in Noni Juice by High-performance Liquid Chromatography with Electrospray Ionization Triple Quadrupole Mass Spectrometry.

    PubMed

    Yan, Yongqiu; Lu, Yu; Jiang, Shiping; Jiang, Yu; Tong, Yingpeng; Zuo, Limin; Yang, Jun; Gong, Feng; Zhang, Ling; Wang, Ping

    2018-01-01

    Noni juice has been extensively used as a folk medicine by Polynesians for many years for the treatment of arthritis, infections, colds, cancers, and diabetes, and as an analgesic. Due to the lack of standard scientific evaluation methods, various kinds of commercial Noni juice of different quality and price are available on the market. To establish a sensitive, reliable, and accurate high-performance liquid chromatography with electrospray ionization triple quadrupole mass spectrometry (HPLC-ESI-MS/MS) method for the separation, identification, and simultaneous quantitative analysis of bioactive constituents in Noni juice. The analytes and eight batches of commercially available samples from different origins were separated and analyzed by the HPLC-ESI-MS/MS method on an Agilent ZORBAX SB-C18 (150 mm × 4.6 mm i.d., 5 μm) column using a gradient elution of acetonitrile-methanol-0.05% glacial acetic acid in water (v/v) at a constant flow rate of 0.5 mL/min. Seven components were identified, and all of the assay parameters were within the required limits. All components showed good linearity (correlation coefficients R2 ≥ 0.9993) over the concentration ranges tested. The precision of the assay method was <0.91% and the repeatability was between 1.36% and 3.31%. The accuracy varied from 96.40% to 103.02% and the relative standard deviations of stability were <3.91%. Samples from the same origin showed similar content, while samples from different origins showed significantly different results. The developed methods would provide a reliable basis and be useful in the establishment of a rational quality control standard for Noni juice. A separation, identification, and simultaneous quantitative analysis method for seven bioactive constituents in Noni juice is originally developed by high-performance liquid chromatography with electrospray ionization triple quadrupole mass spectrometry. The presented method was successfully applied to the quality control of eight batches of commercially available samples

  2. Quantitative Determination of Bioactive Constituents in Noni Juice by High-performance Liquid Chromatography with Electrospray Ionization Triple Quadrupole Mass Spectrometry

    PubMed Central

    Yan, Yongqiu; Lu, Yu; Jiang, Shiping; Jiang, Yu; Tong, Yingpeng; Zuo, Limin; Yang, Jun; Gong, Feng; Zhang, Ling; Wang, Ping

    2018-01-01

    Background: Noni juice has been extensively used as a folk medicine by Polynesians for many years for the treatment of arthritis, infections, colds, cancers, and diabetes, and as an analgesic. Due to the lack of standard scientific evaluation methods, various kinds of commercial Noni juice of different quality and price are available on the market. Objective: To establish a sensitive, reliable, and accurate high-performance liquid chromatography with electrospray ionization triple quadrupole mass spectrometry (HPLC-ESI-MS/MS) method for the separation, identification, and simultaneous quantitative analysis of bioactive constituents in Noni juice. Materials and Methods: The analytes and eight batches of commercially available samples from different origins were separated and analyzed by the HPLC-ESI-MS/MS method on an Agilent ZORBAX SB-C18 (150 mm × 4.6 mm i.d., 5 μm) column using a gradient elution of acetonitrile-methanol-0.05% glacial acetic acid in water (v/v) at a constant flow rate of 0.5 mL/min. Results: Seven components were identified, and all of the assay parameters were within the required limits. All components showed good linearity (correlation coefficients R2 ≥ 0.9993) over the concentration ranges tested. The precision of the assay method was <0.91% and the repeatability was between 1.36% and 3.31%. The accuracy varied from 96.40% to 103.02% and the relative standard deviations of stability were <3.91%. Samples from the same origin showed similar content, while samples from different origins showed significantly different results. Conclusions: The developed methods would provide a reliable basis and be useful in the establishment of a rational quality control standard for Noni juice. SUMMARY A separation, identification, and simultaneous quantitative analysis method for seven bioactive constituents in Noni juice is originally developed by high-performance liquid chromatography with electrospray ionization triple quadrupole mass spectrometry. The presented method was successfully applied to

  3. Benchmarking quantitative label-free LC-MS data processing workflows using a complex spiked proteomic standard dataset.

    PubMed

    Ramus, Claire; Hovasse, Agnès; Marcellin, Marlène; Hesse, Anne-Marie; Mouton-Barbosa, Emmanuelle; Bouyssié, David; Vaca, Sebastian; Carapito, Christine; Chaoui, Karima; Bruley, Christophe; Garin, Jérôme; Cianférani, Sarah; Ferro, Myriam; Van Dorssaeler, Alain; Burlet-Schiltz, Odile; Schaeffer, Christine; Couté, Yohann; Gonzalez de Peredo, Anne

    2016-01-30

    Proteomic workflows based on nanoLC-MS/MS data-dependent-acquisition analysis have progressed tremendously in recent years. High-resolution and fast sequencing instruments have enabled the use of label-free quantitative methods, based either on spectral counting or on MS signal analysis, which appear to be an attractive way to analyze differential protein expression in complex biological samples. However, the computational processing of the data for label-free quantification still remains a challenge. Here, we used a proteomic standard composed of an equimolar mixture of 48 human proteins (Sigma UPS1) spiked at different concentrations into a background of yeast cell lysate to benchmark several label-free quantitative workflows involving different software packages developed in recent years. This experimental design made it possible to assess their performances finely in terms of sensitivity and false discovery rate, by measuring the numbers of true and false positives (UPS1 and yeast background proteins, respectively, found as differential). The spiked standard dataset has been deposited in the ProteomeXchange repository with the identifier PXD001819 and can be used to benchmark other label-free workflows, adjust software parameter settings, improve algorithms for extraction of quantitative metrics from raw MS data, or evaluate downstream statistical methods. Bioinformatic pipelines for label-free quantitative analysis must be objectively evaluated in their ability to detect variant proteins with good sensitivity and low false discovery rate in large-scale proteomic studies. This can be done through the use of complex spiked samples, for which the "ground truth" of variant proteins is known, allowing a statistical evaluation of the performances of the data processing workflow. We provide here such a controlled standard dataset and used it to evaluate the performances of several label-free bioinformatics tools (including MaxQuant, Skyline, MFPaQ, IRMa-hEIDI and Scaffold) in
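    The true/false-positive bookkeeping for such a spiked standard can be sketched as follows. Only the 48-protein UPS1 size comes from the text; the identifier naming scheme and the result list are hypothetical:

```python
N_SPIKED = 48  # size of the equimolar UPS1 standard

def benchmark(differential_ids, spiked_prefix="UPS"):
    """Sensitivity and false discovery rate of a label-free workflow,
    given the protein IDs it called differential. IDs starting with
    spiked_prefix count as true positives (spiked UPS1 proteins); the
    rest are yeast-background false positives. The naming scheme is a
    hypothetical convention for this sketch."""
    tp = sum(1 for p in differential_ids if p.startswith(spiked_prefix))
    fp = len(differential_ids) - tp
    sensitivity = tp / N_SPIKED
    fdr = fp / len(differential_ids) if differential_ids else 0.0
    return sensitivity, fdr

# A hypothetical result list: 36 spiked proteins found, 2 yeast proteins
called = ["UPS_%02d" % i for i in range(1, 37)] + ["YEAST_A", "YEAST_B"]
sens, fdr = benchmark(called)
print(round(sens, 3), round(fdr, 3))   # 0.75 0.053
```

    Because the ground truth is known by construction, any workflow's differential-protein list can be scored this way with no manual curation.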

  4. SEPARATION AND QUANTITATION OF NITROBENZENES AND THEIR REDUCTION PRODUCTS NITROANILINES AND PHENYLENEDIAMINES BY REVERSED-PHASE HIGH-PERFORMANCE LIQUID CHROMATOGRAPHY

    EPA Science Inventory

    A reversed-phase high-performance liquid chromatographic method for the separation and quantitation of a mixture consisting of nitrobenzene, dinitrobenzene isomers, 1,3,5-trinitrobenzene and their reduction products, aniline, nitroanilines and phenylenediamines, has been developed...

  5. Advanced Video Analysis Needs for Human Performance Evaluation

    NASA Technical Reports Server (NTRS)

    Campbell, Paul D.

    1994-01-01

    Evaluators of human task performance in space missions make use of video as a primary source of data. Extraction of relevant human performance information from video is often a labor-intensive process requiring a large amount of time on the part of the evaluator. Based on the experiences of several human performance evaluators, needs were defined for advanced tools which could aid in the analysis of video data from space missions. Such tools should increase the efficiency with which useful information is retrieved from large quantities of raw video. They should also provide the evaluator with new analytical functions which are not present in currently used methods. Video analysis tools based on the needs defined by this study would also have uses in U.S. industry and education. Evaluation of human performance from video data can be a valuable technique in many industrial and institutional settings where humans are involved in operational systems and processes.

  6. Effects of normalization on quantitative traits in association test

    PubMed Central

    2009-01-01

    Background: Quantitative trait loci analysis assumes that the trait is normally distributed. In reality, this is often not observed and one strategy is to transform the trait. However, it is not clear how much normality is required and which transformation works best in association studies. Results: We performed simulations on four types of common quantitative traits to evaluate the effects of normalization using the logarithm, Box-Cox, and rank-based transformations. The impact of sample size and genetic effects on normalization is also investigated. Our results show that rank-based transformation gives generally the best and most consistent performance in identifying the causal polymorphism and ranking it highly in association tests, with a slight increase in false positive rate. Conclusion: For small sample sizes or genetic effects, the improvement in sensitivity for rank transformation outweighs the slight increase in false positive rate. However, for large sample sizes and genetic effects, normalization may not be necessary since the increase in sensitivity is relatively modest. PMID:20003414
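    A rank-based inverse normal (Blom) transformation, one common variant of the rank-based transformations evaluated in such studies, can be sketched as follows; the abstract does not specify this exact variant, and the toy trait values are illustrative:

```python
import numpy as np
from statistics import NormalDist

def rank_inverse_normal(x, c=3.0 / 8):
    """Rank-based inverse normal (Blom) transformation: replace each
    trait value by the standard-normal quantile of its rank. Ties are
    not handled specially in this sketch."""
    x = np.asarray(x, dtype=float)
    ranks = np.argsort(np.argsort(x)) + 1            # ranks 1..n
    probs = (ranks - c) / (len(x) + 1 - 2 * c)       # rank -> probability
    inv = NormalDist().inv_cdf
    return np.array([inv(p) for p in probs])

# A right-skewed trait becomes symmetric and monotone after transformation
trait = [0.1, 0.2, 0.3, 0.5, 1.0, 5.0, 20.0]
z = rank_inverse_normal(trait)
print(np.round(z, 2))
```

    The transformed values depend only on ranks, which is why this family of transformations is robust to the trait's original distribution.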

  7. Evaluating supplier quality performance using analytical hierarchy process

    NASA Astrophysics Data System (ADS)

    Kalimuthu Rajoo, Shanmugam Sundram; Kasim, Maznah Mat; Ahmad, Nazihah

    2013-09-01

    This paper elaborates the importance of evaluating supplier quality performance to an organization. Supplier quality performance evaluation reflects the actual performance of the supplier exhibited at the customer's end. It is critical in enabling the organization to determine areas for improvement and thereafter work with the supplier to close the gaps. The success of the customer partly depends on the supplier's quality performance. Key criteria such as quality, cost, delivery, technology support and customer service are categorized as the main factors contributing to a supplier's quality performance. Eighteen suppliers who were manufacturing automotive application parts were evaluated in 2010 using a weighted-point system. Several suppliers received identical ratings, which led to tied rankings. The Analytical Hierarchy Process (AHP), a user-friendly decision-making tool for complex and multi-criteria problems, was used to evaluate the suppliers' quality performance against the weighted-point system that had been used for the 18 suppliers. The consistency ratio was checked for the criteria and sub-criteria. The final AHP results contained no overlapping ratings and therefore yielded a better decision-making methodology than the weighted-point rating system.
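    The AHP weight derivation and consistency-ratio check can be sketched as follows. The 3×3 comparison matrix is illustrative, not the study's 18-supplier data:

```python
import numpy as np

# Saaty's random-index values for matrices of order 1..5
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}

def ahp_weights_and_cr(A):
    """Principal-eigenvector criterion weights and consistency ratio
    CR = CI / RI, with CI = (lambda_max - n) / (n - 1), per Saaty's AHP."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)            # principal eigenvalue lambda_max
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                           # normalized priority weights
    ci = (eigvals[k].real - n) / (n - 1)   # consistency index
    return w, ci / RI[n]

# Illustrative pairwise comparison of quality vs cost vs delivery
A = [[1.0, 3.0, 5.0],
     [1/3, 1.0, 3.0],
     [1/5, 1/3, 1.0]]
w, cr = ahp_weights_and_cr(A)
print(np.round(w, 3), round(cr, 3))   # CR < 0.1 -> acceptably consistent
```

    A CR below the conventional 0.1 threshold indicates the pairwise judgments are consistent enough for the resulting weights to be trusted.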

  8. Proportioning and performance evaluation of self-consolidating concrete

    NASA Astrophysics Data System (ADS)

    Wang, Xuhao

    significance of influence factors on concrete performance. In the third paper, the proposed DIP method and MATLAB algorithm can be successfully used to derive inter-particle spacing and MTI and to quantitatively evaluate static stability in hardened SCC samples. These parameters can be applied to overcome the limitations and challenges of existing theoretical frameworks and to construct statistical models associated with rheological parameters to predict the flowability of SCC mixtures. The outcome of this study can be of practical value, providing an efficient and useful tool for designing mixture proportions of SCC. The last paper compared several concrete performance measurement techniques; the P-wave test and calorimetric measurements can be used efficiently to monitor the stiffening and setting of SCC mixtures.

  9. Evaluation of pavement marking performance.

    DOT National Transportation Integrated Search

    2008-06-01

    The objective of the investigation was to evaluate the useful life of pavement markings. The Manual on Uniform Traffic Control Devices (MUTCD) provides general guidelines for the application and installation of pavement markings. However, performance...

  10. Qualitative, semi-quantitative, and quantitative simulation of the osmoregulation system in yeast.

    PubMed

    Pang, Wei; Coghill, George M

    2015-05-01

    In this paper we demonstrate how Morven, a computational framework which can perform qualitative, semi-quantitative, and quantitative simulation of dynamical systems using the same model formalism, is applied to study the osmotic stress response pathway in yeast. First, the Morven framework itself is briefly introduced in terms of the model formalism employed and the output format. We then built a qualitative model for the biophysical process of osmoregulation in yeast, and a global qualitative-level picture was obtained through qualitative simulation of this model. Furthermore, we constructed a Morven model based on an existing quantitative model of the osmoregulation system. This model was then simulated qualitatively, semi-quantitatively, and quantitatively. The obtained simulation results are presented with an analysis. Finally, the future development of the Morven framework for modelling dynamic biological systems is discussed. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.

  11. Development and evaluation of event-specific quantitative PCR method for genetically modified soybean A2704-12.

    PubMed

    Takabatake, Reona; Akiyama, Hiroshi; Sakata, Kozue; Onishi, Mari; Koiwa, Tomohiro; Futo, Satoshi; Minegishi, Yasutaka; Teshima, Reiko; Mano, Junichi; Furui, Satoshi; Kitta, Kazumi

    2011-01-01

    A novel real-time PCR-based analytical method was developed for the event-specific quantification of a genetically modified (GM) soybean event, A2704-12. During the plant transformation, DNA fragments derived from the pUC19 plasmid were integrated into A2704-12, and the region was found to be A2704-12 specific. The pUC19-derived DNA sequences were used as primers for the specific detection of A2704-12. We first tried to construct a standard plasmid for A2704-12 quantification using pUC19. However, non-specific signals appeared in both qualitative and quantitative PCR analyses using the specific primers with pUC19 as a template, and we then constructed a plasmid using pBR322. The conversion factor (C(f)), which is required to calculate the amount of the genetically modified organism (GMO), was experimentally determined with two real-time PCR instruments, the Applied Biosystems 7900HT and the Applied Biosystems 7500. The determined C(f) values were both 0.98. The quantitative method was evaluated by means of blind tests in multi-laboratory trials using the two real-time PCR instruments. The limit of quantitation for the method was estimated to be 0.1%. The trueness and precision were evaluated as the bias and the reproducibility expressed as relative standard deviation (RSD(R)), and the determined bias and RSD(R) values for the method were each less than 20%. These results suggest that the developed method would be suitable for practical analyses for the detection and quantification of A2704-12.
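    The role of the conversion factor C(f) in turning qPCR copy-number ratios into a GMO percentage can be sketched as follows. Cf = 0.98 is the value the study determined for A2704-12; the copy-number inputs are hypothetical:

```python
def gmo_content_percent(event_copies, endogenous_copies, cf=0.98):
    """GMO content (%) from event-specific and endogenous (taxon-specific)
    real-time PCR copy numbers, corrected by the conversion factor C(f).
    Cf = 0.98 is the study's determined value for A2704-12; the copy
    numbers passed in below are hypothetical."""
    return (event_copies / endogenous_copies) / cf * 100.0

print(round(gmo_content_percent(49.0, 10000.0), 2))   # 0.5 (% GMO)
```

    A result of this size sits well above the method's estimated 0.1% limit of quantitation.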

  12. In vivo MRS and MRSI: Performance analysis, measurement considerations and evaluation of metabolite concentration images

    NASA Astrophysics Data System (ADS)

    Vikhoff-Baaz, Barbro

    2000-10-01

    The doctoral thesis concerns the development, evaluation and performance of quality assessment methods for volume-selection methods in 31P and 1H MR spectroscopy (MRS). It also covers different aspects of the measurement procedure for 1H MR spectroscopic imaging (MRSI) with application to the human brain, image reconstruction of MRSI images and evaluation methods for lateralization of temporal lobe epilepsy (TLE). Two complementary two-compartment phantoms and evaluation methods for quality assessment of 31P MRS in small-bore MR systems were presented. The first phantom consisted of an inner cube inside a sphere phantom, where measurements with and without volume selection were compared for various VOI sizes. The multi-centre study showed that the evaluated parameters provide useful information on the performance of volume-selective MRS at the MR system. The second phantom consisted of two compartments divided by a very thin wall and was found useful for measurements of the appearance and position of the VOI profile in specific gradient directions. The second part concerned 1H MRS and MRSI on whole-body MR systems. Different factors that may degrade or complicate the MRSI measurement procedure were evaluated, e.g., volume selection performance, contamination, susceptibility and motion. Two interpolation methods for the reconstruction of MRSI images were compared. Measurements and computer simulations showed that Fourier interpolation correctly visualizes the information inherent in the data set, while with cubic spline interpolation the results depended on the position of the object relative to the original matrix. Application of spatial filtering may improve the image representation of the data. Finally, 1H MRSI was performed on healthy volunteers and patients with temporal lobe epilepsy (TLE). Metabolite concentration images were used for lateralization of TLE, where the signal intensity in the two hemispheres was compared. Visual analysis of the
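    Fourier interpolation of a coarse MRSI matrix amounts to zero-filling its k-space before the inverse transform; unlike spline interpolation, the result does not depend on where the object sits relative to the original grid. The 8×8 toy image and upsampling factor below are illustrative:

```python
import numpy as np

def fourier_interpolate(img, factor=2):
    """Upsample an image by zero-padding its centred 2-D spectrum
    (k-space zero-filling), then inverse-transforming. The factor**2
    rescaling preserves the image amplitude."""
    k = np.fft.fftshift(np.fft.fft2(img))          # centre the spectrum
    n = img.shape[0]
    pad = (n * factor - n) // 2
    kp = np.pad(k, pad)                            # zero-fill k-space
    return np.fft.ifft2(np.fft.ifftshift(kp)).real * factor ** 2

# Toy 8x8 "metabolite image" with a bright 2x2 region
img = np.zeros((8, 8)); img[3:5, 3:5] = 1.0
up = fourier_interpolate(img, 2)
print(up.shape)   # (16, 16)
```

    The mean intensity (the DC term) is preserved exactly, which is one way to check the amplitude scaling convention.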

  13. Quantiprot - a Python package for quantitative analysis of protein sequences.

    PubMed

    Konopka, Bogumił M; Marciniak, Marta; Dyrka, Witold

    2017-07-17

The field of protein sequence analysis is dominated by tools rooted in substitution matrices and alignments. A complementary approach is provided by methods of quantitative characterization. A major advantage of this approach is that quantitative properties define a multidimensional solution space, in which sequences can be related to each other and differences can be meaningfully interpreted. Quantiprot is a software package in Python which provides a simple and consistent interface to multiple methods for quantitative characterization of protein sequences. The package can be used to calculate dozens of characteristics directly from sequences or using physico-chemical properties of amino acids. Besides basic measures, Quantiprot performs quantitative analysis of recurrence and determinism in the sequence, calculates the distribution of n-grams and computes the Zipf's law coefficient. We propose three main fields of application of the Quantiprot package. First, quantitative characteristics can be used in alignment-free similarity searches and in clustering of large and/or divergent sequence sets. Second, a feature space defined by quantitative properties can be used in comparative studies of protein families and organisms. Third, the feature space can be used for evaluating generative models, where a large number of sequences generated by a model can be compared to actually observed sequences.
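As a rough illustration of two of the measures mentioned (n-gram distributions and the Zipf's law coefficient), the following self-contained sketch computes them with the standard library only; it deliberately does not use the Quantiprot API, whose actual interface may differ:

```python
from collections import Counter
import math

def ngram_counts(seq, n=2):
    """Count overlapping n-grams in a protein sequence string."""
    return Counter(seq[i:i + n] for i in range(len(seq) - n + 1))

def zipf_coefficient(counts):
    """Least-squares slope s of log(frequency) vs. log(rank).

    Under Zipf's law, frequency ~ rank**(-s); s is returned positive
    for frequency distributions that decay with rank.
    """
    freqs = sorted(counts.values(), reverse=True)
    xs = [math.log(rank) for rank in range(1, len(freqs) + 1)]
    ys = [math.log(freq) for freq in freqs]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    return -cov / var

# A short (hypothetical) protein fragment.
counts = ngram_counts("MKTAYIAKQRQISFVKSHFSRQLEERLGLIE", n=2)
s = zipf_coefficient(counts)
```

Feature vectors built from such characteristics can then feed alignment-free clustering or comparative studies, as the abstract proposes.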

  14. An hierarchical approach to performance evaluation of expert systems

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Kavi, Srinu

    1985-01-01

The number and size of expert systems is growing rapidly. Formal evaluation of these systems, which is not performed for many of them, increases their acceptability in the user community and hence their success. Hierarchical evaluation, as previously conducted for computer systems, is applied here to expert system performance evaluation. Expert systems are also evaluated by treating them as software systems (or programs). This paper reports many of the basic concepts and ideas in the Performance Evaluation of Expert Systems Study being conducted at the University of Southwestern Louisiana.

  15. SU-E-T-624: Quantitative Evaluation of 2D Versus 3D Dosimetry for Stereotactic Volumetric Modulated Arc Delivery Using COMPASS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vikraman, S; Karrthick, K; Rajesh, T

    2014-06-15

Purpose: The purpose of this study was to quantitatively evaluate 2D versus 3D dosimetry for stereotactic volumetric modulated arc delivery using COMPASS with a 2D array. Methods: CT images and RT structures of twenty-five patients with lesions at different sites (brain, head and neck, thorax, abdomen and spine) were taken from the Multiplan planning system for this study. All these patients underwent radical stereotactic treatment on Cyberknife. For each patient, linac-based VMAT stereotactic plans were generated in Monaco TPS v3.1 using the Elekta Beam Modulator MLC. Dose prescription was in the range of 5-20 Gy/fraction. TPS-calculated VMAT plan delivery accuracy was quantitatively evaluated against COMPASS measured dose and calculated dose based on DVH metrics. In order to ascertain the potential of COMPASS 3D dosimetry for stereotactic plan delivery, 2D fluence verification was performed with MatriXX using Multicube. Results: For each site, D95 was achieved with 100% of the prescription dose with a maximum SD of 0.05. The conformity index (CI) was close to 1.15 in all cases. A maximum deviation of 2.62% in D95 was observed between TPS and COMPASS measurements. Considerable deviations were observed in head and neck cases compared with other sites. The maximum mean and standard deviation for D95, average target dose and average gamma were -0.78±1.72, -1.10±1.373 and 0.39±0.086, respectively. The fraction of pixels passing 2D fluence verification averaged 99.36%±0.455 (SD) with 3% dose difference and 3 mm DTA criteria. For critical organs in head and neck cases, significant dose differences were observed in 3D dosimetry, while the target doses matched well within limits in both 2D and 3D dosimetry. Conclusion: The quantitative evaluation of 2D versus 3D dosimetry for stereotactic volumetric modulated plans showed the potential of highlighting delivery errors. This study reveals that COMPASS 3D dosimetry is an effective tool for

  16. Identification and Quantitation of Asparagine and Citrulline Using High-Performance Liquid Chromatography (HPLC)

    PubMed Central

    Bai, Cheng; Reilly, Charles C.; Wood, Bruce W.

    2007-01-01

High-performance liquid chromatography (HPLC) analysis was used for identification of two problematic ureides, asparagine and citrulline. We report here a technique that takes advantage of the predictable delay in retention time of the co-eluting asparagine/citrulline peak to enable both qualitative and quantitative analysis of asparagine and citrulline using the Platinum EPS reverse-phase C18 column (Alltech Associates). Asparagine alone elutes earlier than citrulline alone, but when both are present in biological samples they may co-elute. HPLC retention times for asparagine and citrulline were influenced by other ureides in the mixture. We found that at various asparagine-to-citrulline ratios [3:1, 1:1, and 1:3; corresponding to 75:25, 50:50, and 25:75 (μmol ml−1/μmol ml−1)], the resulting peak exhibited different retention times. Adjustment of ureide ratios as internal standards enables peak identification and quantification. Both chemicals were quantified in xylem sap samples of pecan [Carya illinoinensis (Wangenh.) K. Koch] trees. Analysis revealed that tree nickel nutrition status affects the relative concentrations of the Urea Cycle intermediates asparagine and citrulline present in sap. Consequently, the HPLC methods presented enable qualitative and quantitative analysis of these metabolically important ureides. PMID:19662174

  17. Identification and quantitation of asparagine and citrulline using high-performance liquid chromatography (HPLC).

    PubMed

    Bai, Cheng; Reilly, Charles C; Wood, Bruce W

    2007-03-28

High-performance liquid chromatography (HPLC) analysis was used for identification of two problematic ureides, asparagine and citrulline. We report here a technique that takes advantage of the predictable delay in retention time of the co-eluting asparagine/citrulline peak to enable both qualitative and quantitative analysis of asparagine and citrulline using the Platinum EPS reverse-phase C18 column (Alltech Associates). Asparagine alone elutes earlier than citrulline alone, but when both are present in biological samples they may co-elute. HPLC retention times for asparagine and citrulline were influenced by other ureides in the mixture. We found that at various asparagine-to-citrulline ratios [3:1, 1:1, and 1:3; corresponding to 75:25, 50:50, and 25:75 (microMol ml(-1)/microMol ml(-1))], the resulting peak exhibited different retention times. Adjustment of ureide ratios as internal standards enables peak identification and quantification. Both chemicals were quantified in xylem sap samples of pecan [Carya illinoinensis (Wangenh.) K. Koch] trees. Analysis revealed that tree nickel nutrition status affects the relative concentrations of the Urea Cycle intermediates asparagine and citrulline present in sap. Consequently, the HPLC methods presented enable qualitative and quantitative analysis of these metabolically important ureides.

  18. TU-AB-202-06: Quantitative Evaluation of Deformable Image Registration in MRI-Guided Adaptive Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mooney, K; Zhao, T; Green, O

Purpose: To assess the performance of the deformable image registration algorithm used for MRI-guided adaptive radiation therapy using image feature analysis. Methods: MR images were collected from five patients treated on the MRIdian (ViewRay, Inc., Oakwood Village, OH), a three-head Cobalt-60 therapy machine with a 0.35 T MR system. The images were acquired immediately prior to treatment with a uniform 1.5 mm resolution. Treatment sites were as follows: head/neck, lung, breast, stomach, and bladder. Deformable image registration was performed using the ViewRay software between the first-fraction MRI and the final-fraction MRI, and the DICE similarity coefficient (DSC) for the skin contours was reported. The SIFT and Harris feature detection and matching algorithms identified point features in each image separately, then found matching features in the other image. The target registration error (TRE) was defined as the vector distance between matched features on the two image sets. Each deformation was evaluated by comparing average TRE and DSC. Results: Image feature analysis produced between 2000 and 9500 points for evaluation on the patient images. The average (± standard deviation) TRE for all patients was 3.3 mm (±3.1 mm), and the passing rate of TRE<3 mm was 60%. The head/neck patient had the best average TRE (1.9 mm±2.3 mm) and the best passing rate (80%). The lung patient had the worst average TRE (4.8 mm±3.3 mm) and the worst passing rate (37.2%). DSC was not significantly correlated with either TRE (p=0.63) or passing rate (p=0.55). Conclusions: Feature matching provides a quantitative assessment of deformable image registration, with a large number of data points for analysis. The TRE of matched features can be used to evaluate the registration of many objects throughout the volume, whereas DSC mainly provides a measure of gross overlap. We have a research agreement with ViewRay Inc.
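The two metrics compared in this record reduce to a few lines each; the following is an illustrative reimplementation with assumed array inputs, not the ViewRay or study code:

```python
import numpy as np

def tre_stats(points_a, points_b, pass_mm=3.0):
    """Target registration error: Euclidean distance between matched
    feature points; returns (mean TRE, passing rate at pass_mm)."""
    d = np.linalg.norm(np.asarray(points_a, float) - np.asarray(points_b, float),
                       axis=1)
    return d.mean(), (d < pass_mm).mean()

def dice_coefficient(mask_a, mask_b):
    """DSC: gross overlap of two binary contour masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

# Two matched feature pairs, 1 mm and 4 mm apart.
mean_tre, pass_rate = tre_stats([[0, 0, 0], [3, 0, 0]],
                                [[1, 0, 0], [3, 4, 0]])
```

Because TRE is computed per matched feature, thousands of point pairs probe the deformation throughout the volume, whereas DSC summarizes one contour pair, which is the study's central contrast.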

  19. Operator performance evaluation using multi criteria decision making methods

    NASA Astrophysics Data System (ADS)

    Rani, Ruzanita Mat; Ismail, Wan Rosmanira; Razali, Siti Fatihah

    2014-06-01

Operator performance evaluation is a very important operation in labor-intensive manufacturing industry because the company's productivity depends on the performance of its operators. The aims of operator performance evaluation are to give feedback to operators on their performance, to increase the company's productivity, and to identify the strengths and weaknesses of each operator. In this paper, six multi-criteria decision making methods, Analytical Hierarchy Process (AHP), fuzzy AHP (FAHP), ELECTRE, PROMETHEE II, Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) and VlseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR), are used to evaluate the operators' performance and to rank the operators. The performance evaluation is based on six main criteria: competency, experience and skill, teamwork and time punctuality, personal characteristics, capability, and outcome. The study was conducted at one of the SME food manufacturing companies in Selangor. From the study, it is found that both AHP and FAHP yielded "outcome" as the most important criterion. The results of operator performance evaluation showed that the same operator is ranked first by all six methods.

  20. Quantitative susceptibility mapping as a biomarker for evaluating white matter alterations in Parkinson's disease.

    PubMed

    Guan, Xiaojun; Huang, Peiyu; Zeng, Qiaoling; Liu, Chunlei; Wei, Hongjiang; Xuan, Min; Gu, Quanquan; Xu, Xiaojun; Wang, Nian; Yu, Xinfeng; Luo, Xiao; Zhang, Minming

    2018-02-07

Myelinated white matter, which shows diamagnetic susceptibility, is important for information transfer in the brain. In Parkinson's disease (PD), the white matter also undergoes degenerative alterations. Quantitative susceptibility mapping (QSM) is a novel technique for noninvasive assessment of regional white matter ultrastructure, and provides information about white matter complementary to standard diffusion tensor imaging (DTI). In this study, we used QSM to detect spatial white matter alterations in PD patients (n = 65) and age- and sex-matched normal controls (n = 46). Voxel-wise tract-based spatial statistics were performed to analyze QSM and DTI data. QSM showed extensive white matter involvement in PD patients, including regions adjacent to the frontal, parietal, and temporal lobes, that was more widespread than that observed using DTI. Both QSM and DTI showed similar alterations in the left inferior longitudinal fasciculus and right cerebellar hemisphere. Further, alterations in the white matter were correlated with motor impairment and global disease severity in PD patients. We suggest that QSM may provide a novel approach for detecting white matter alterations and underlying network disruptions in PD. Further, the combination of QSM and DTI would provide a more complete evaluation of the diseased brain by analyzing different biological tissue properties.

  1. The quantitative evaluation of intracranial pressure by optic nerve sheath diameter/eye diameter CT measurement.

    PubMed

    Bekerman, Inessa; Sigal, Tal; Kimiagar, Itzhak; Ben Ely, Anna; Vaiman, Michael

    2016-12-01

Changes of the optic nerve sheath diameter (ONSD) have been used to assess changes of the intracranial pressure for 20 years. The aim of this research was to further quantify the technique of measuring the ONSD for this purpose. In a retrospective study, computed tomographic (CT) data of 1766 adult patients with intracranial hypotension (n=134) or hypertension (n=1632) were analyzed. The eyeball transverse diameter (ETD) and ONSD were obtained bilaterally, and the ONSD/ETD ratio was calculated. The ratio was used to calculate the normal ONSD for patients and to estimate the intracranial pressure of the patients before and after the onset of the pathology. Correlation analysis was performed with invasively measured intracranial pressure, the presence or absence of papilledema, sex, and age. In hypotension cases, the ONSD by CT was 3.4±0.7 mm (P=.03 against the normative 4.4±0.8 mm). In cases with hypertension, the diameter was 6.9±1.3 mm (P=.02, with a cutoff value >5.5 mm). The ONSD/ETD ratio was 0.29±0.04 against 0.19±0.02 in healthy adults (P=.01). The ONSD and the ONSD/ETD ratio can indicate low intracranial pressure, but quantification is impossible at intracranial pressures less than 13 mm Hg. In elevated intracranial pressure, the ONSD and the ratio provide readings that correspond to readings in millimeters of mercury. The ONSD method, reinforced with additional calculations, may help to indicate raised intracranial pressure, evaluate its severity quantitatively, and establish quantitative goals for treatment of intracranial hypertension, but the limitations of the method are to be taken into account. Copyright © 2016 Elsevier Inc. All rights reserved.
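The ratio arithmetic described above is simple enough to sketch. The cutoffs below are taken from the abstract, the ETD value in the example is hypothetical, and this is an illustration only, not a clinical decision rule:

```python
def onsd_etd_ratio(onsd_mm, etd_mm):
    """Optic nerve sheath diameter normalized by eyeball transverse diameter."""
    return onsd_mm / etd_mm

def suggests_elevated_icp(onsd_mm, etd_mm):
    """Illustrative screen: the abstract reports ONSD > 5.5 mm and an
    ONSD/ETD ratio of 0.29±0.04 (vs 0.19±0.02 in healthy adults) in
    intracranial hypertension.  Not a validated clinical rule."""
    return onsd_mm > 5.5 and onsd_etd_ratio(onsd_mm, etd_mm) >= 0.25

# Hypothetical measurements: ONSD 6.9 mm, ETD 23.8 mm.
flag = suggests_elevated_icp(6.9, 23.8)
```

Normalizing by ETD is the paper's key refinement: it corrects the raw ONSD reading for individual eyeball size before comparing against population cutoffs.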

  2. Legal Aspects of Evaluating Teacher Performance.

    ERIC Educational Resources Information Center

    Beckham, Joseph C.

Chapter 14 in a book on school law concerns the legal aspects of evaluating teacher performance. Careful analysis of recent decisions makes it clear that the courts will compel uniform standards and unprecedented rigor in teacher evaluation practices. Particularly in the consideration of equitable standards, state and federal courts are relying on…

  3. 40 CFR 35.115 - Evaluation of performance.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

Evaluation of performance. (a) Joint evaluation process. The applicant and the Regional Administrator will... requirements for progress reporting under 40 CFR 31.40(b). (b) Elements of the evaluation process. ... The recipient may request review of the Regional Administrator's decision under the dispute processes...

  4. A quality assurance phantom for the performance evaluation of volumetric micro-CT systems

    NASA Astrophysics Data System (ADS)

    Du, Louise Y.; Umoh, Joseph; Nikolov, Hristo N.; Pollmann, Steven I.; Lee, Ting-Yim; Holdsworth, David W.

    2007-12-01

Small-animal imaging has recently become an area of increased interest because more human diseases can be modeled in transgenic and knockout rodents. As a result, micro-computed tomography (micro-CT) systems are becoming more common in research laboratories, due to their ability to achieve spatial resolution as high as 10 µm, giving highly detailed anatomical information. Most recently, a volumetric cone-beam micro-CT system using a flat-panel detector (eXplore Ultra, GE Healthcare, London, ON) has been developed that combines the high resolution of micro-CT and the fast scanning speed of clinical CT, so that dynamic perfusion imaging can be performed in mice and rats, providing functional physiological information in addition to anatomical information. This and other commercially available micro-CT systems all promise to deliver precise and accurate high-resolution measurements in small animals. However, no comprehensive quality assurance phantom has been developed to evaluate the performance of these micro-CT systems on a routine basis. We have designed and fabricated a single comprehensive device for the purpose of performance evaluation of micro-CT systems. This quality assurance phantom was applied to assess multiple image-quality parameters of a current flat-panel cone-beam micro-CT system accurately and quantitatively, in terms of spatial resolution, geometric accuracy, CT number accuracy, linearity, noise and image uniformity. Our investigations show that 3D images can be obtained with a limiting spatial resolution of 2.5 mm⁻¹ and noise of ±35 HU, using an acquisition interval of 8 s at an entrance dose of 6.4 cGy.
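Several of the listed image-quality parameters (CT number accuracy, noise, uniformity) reduce to region-of-interest statistics on reconstructed slices; a minimal sketch under the assumption of circular ROIs on a 2-D slice in Hounsfield units:

```python
import numpy as np

def roi_stats(slice_hu, center, radius):
    """Mean and standard deviation of CT numbers (HU) in a circular ROI."""
    yy, xx = np.ogrid[:slice_hu.shape[0], :slice_hu.shape[1]]
    mask = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
    values = slice_hu[mask]
    return values.mean(), values.std()

def uniformity_hu(slice_hu, center_roi, edge_rois, radius):
    """Largest |edge-ROI mean - center-ROI mean|; smaller is more uniform."""
    center_mean, _ = roi_stats(slice_hu, center_roi, radius)
    return max(abs(roi_stats(slice_hu, edge, radius)[0] - center_mean)
               for edge in edge_rois)

# A perfectly uniform water-equivalent slice: zero noise, zero non-uniformity.
slice_hu = np.zeros((32, 32))
noise = roi_stats(slice_hu, (16, 16), 6)[1]
```

In routine QA the same ROI measurements repeated over time reveal drift in CT number accuracy, noise, and uniformity.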

  5. Quantitative analysis of peel-off degree for printed electronics

    NASA Astrophysics Data System (ADS)

    Park, Janghoon; Lee, Jongsu; Sung, Ki-Hak; Shin, Kee-Hyun; Kang, Hyunkyoo

    2018-02-01

We suggest a facile methodology for evaluating the peel-off degree of printed electronics by image processing. The quantification of peeled and printed areas was performed using open-source programs. To verify the accuracy of the method, we manually removed areas from a printed circuit and measured them, obtaining 96.3% accuracy. The printed area of the sintered patterns decreased as the energy density of the infrared lamp increased, while the peel-off degree increased; the comparison between both results is presented. Finally, the correlation between performance characteristics was determined by quantitative analysis.
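The area quantification step can be sketched as a simple threshold-and-count over a grayscale image of the printed pattern; the threshold value and the bright-means-peeled polarity are assumptions for illustration:

```python
import numpy as np

def peel_off_degree(gray, threshold=128):
    """Fraction of pixels classified as peeled-off substrate.

    gray: 2-D grayscale array of the printed region; pixels at or above
    `threshold` are counted as peeled (exposed substrate), darker pixels
    as intact printed ink.  Threshold choice is application-specific.
    """
    gray = np.asarray(gray)
    return float((gray >= threshold).sum()) / gray.size

# One bright (peeled) pixel out of four -> degree 0.25.
degree = peel_off_degree([[0, 255], [0, 0]])
```

Comparing this ratio against a manually measured removed area is the same kind of verification the abstract reports (96.3% agreement).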

  6. Quantitative evaluation of analyte transport on microfluidic paper-based analytical devices (μPADs).

    PubMed

    Ota, Riki; Yamada, Kentaro; Suzuki, Koji; Citterio, Daniel

    2018-02-07

The transport efficiency during capillary flow-driven sample transport on microfluidic paper-based analytical devices (μPADs) made from filter paper has been investigated for a selection of model analytes (Ni²⁺, Zn²⁺, Cu²⁺, PO₄³⁻, bovine serum albumin, sulforhodamine B, amaranth) representing metal cations, complex anions, proteins and anionic molecules. For the first time, the transport of the analytical target compounds, rather than the sample liquid, has been quantitatively evaluated by means of colorimetry and absorption spectrometry-based methods. The experiments have revealed that small paperfluidic channel dimensions, additional user operation steps (e.g. control of sample volume, sample dilution, washing step) as well as the introduction of sample liquid wicking areas make it possible to increase analyte transport efficiency. It is also shown that the interaction of analytes with the negatively charged cellulosic paper substrate surface is strongly influenced by the physico-chemical properties of the model analyte and can in some cases (Cu²⁺) result in nearly complete analyte depletion during sample transport. The quantitative information gained through these experiments is expected to contribute to the development of more sensitive μPADs.

  7. A GATE evaluation of the sources of error in quantitative {sup 90}Y PET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strydhorst, Jared, E-mail: jared.strydhorst@gmail.

Purpose: Accurate reconstruction of the dose delivered by ⁹⁰Y microspheres using a postembolization PET scan would permit the establishment of more accurate dose–response relationships for treatment of hepatocellular carcinoma with ⁹⁰Y. However, the quality of the PET data obtained is compromised by several factors, including poor count statistics and a very high random fraction. This work uses Monte Carlo simulations to investigate what impact factors other than low count statistics have on the quantification of ⁹⁰Y PET. Methods: PET acquisitions of two phantoms (a NEMA PET phantom and the NEMA IEC PET body phantom) containing either ⁹⁰Y or ¹⁸F were simulated using GATE. Simulated projections were created with subsets of the simulation data allowing the contributions of random, scatter, and LSO background to be independently evaluated. The simulated projections were reconstructed using the commercial software for the simulated scanner, and the quantitative accuracy of the reconstruction and the contrast recovery of the reconstructed images were evaluated. Results: The quantitative accuracy of the ⁹⁰Y reconstructions were not strongly influenced by the high random fraction present in the projection data, and the activity concentration was recovered to within 5% of the known value. The contrast recovery measured for simulated ⁹⁰Y data was slightly poorer than that for simulated ¹⁸F data with similar count statistics. However, the degradation was not strongly linked to any particular factor. Using a more restricted energy range to reduce the random fraction in the projections had no significant effect. Conclusions: Simulations of ⁹⁰Y PET confirm that quantitative ⁹⁰Y PET is achievable with the same approach as that used for ¹⁸F, and that there is likely very little margin for improvement by attempting to model aspects unique to ⁹⁰Y, such as the much higher random fraction or the presence of

  8. Accurate single-shot quantitative phase imaging of biological specimens with telecentric digital holographic microscopy.

    PubMed

    Doblas, Ana; Sánchez-Ortiga, Emilio; Martínez-Corral, Manuel; Saavedra, Genaro; Garcia-Sucerquia, Jorge

    2014-04-01

The advantages of using a telecentric imaging system in digital holographic microscopy (DHM) to study biological specimens are highlighted. To this end, the performances of nontelecentric DHM and telecentric DHM are evaluated from the quantitative phase imaging (QPI) point of view. The evaluated stability of the microscope allows single-shot QPI in DHM by using telecentric imaging systems. Quantitative phase maps of a section of the head of the Drosophila melanogaster fly and of red blood cells are obtained via single-shot DHM with no numerical postprocessing. With these maps we show that the use of telecentric DHM provides a larger field of view for a given magnification and permits more accurate QPI measurements with fewer computational operations.

  9. Quantitative Evaluation of CART-Containing Cells in Urinary Bladder of Rats with Renovascular Hypertension

    PubMed Central

    Janiuk, I.; Kasacka, I.

    2015-01-01

Recent biological advances make it possible to discover new peptides associated with hypertension. The cocaine- and amphetamine-regulated transcript (CART) is a known factor in appetite and feeding behaviour. Various lines of evidence suggest that this peptide participates not only in control of feeding behaviour but also in the regulation of the cardiovascular and sympathetic systems and blood pressure. The role of CART in blood pressure regulation led us to undertake a study aimed at analysing quantitative changes in CART-containing cells in urinary bladders (UB) of rats with renovascular hypertension. We used the Goldblatt model of arterial hypertension (two-kidney, one clip) to evaluate quantitative changes. This model provides researchers with a commonly used tool to analyse the renin-angiotensin system of blood pressure control and, eventually, to develop drugs for the treatment of chronic hypertension. The study was performed on sections of urinary bladders of rats 3, 14, 28, 42 and 91 days after hypertension induction. Immunohistochemical identification of CART cells was performed on paraffin sections of the UBs of all study animals. CART was detected in endocrine cells, especially numerous in the submucosa and muscularis layers, with a few found in the transitional epithelium and only occasionally in the serosa. Hypertension significantly increased the number of CART-positive cells in the rat UBs. At 3 and 42 days after the procedure, statistically significantly higher numbers of CART-positive cells were identified in comparison with the control animals. The differences between the hypertensive rats and the control animals concerned not only the number density of CART-immunoreactive cells but also their localization. After a 6-week period, each of the rats subjected to the renal artery clipping procedure developed stable hypertension. CART appeared in numerous transitional epithelium cells. As this study provides novel findings, the question

  10. Quantitative evaluation of CART-containing cells in urinary bladder of rats with renovascular hypertension.

    PubMed

    Janiuk, I; Kasacka, I

    2015-04-13

Recent biological advances make it possible to discover new peptides associated with hypertension. The cocaine- and amphetamine-regulated transcript (CART) is a known factor in appetite and feeding behaviour. Various lines of evidence suggest that this peptide participates not only in control of feeding behaviour but also in the regulation of the cardiovascular and sympathetic systems and blood pressure. The role of CART in blood pressure regulation led us to undertake a study aimed at analysing quantitative changes in CART-containing cells in urinary bladders (UB) of rats with renovascular hypertension. We used the Goldblatt model of arterial hypertension (two-kidney, one clip) to evaluate quantitative changes. This model provides researchers with a commonly used tool to analyse the renin-angiotensin system of blood pressure control and, eventually, to develop drugs for the treatment of chronic hypertension. The study was performed on sections of urinary bladders of rats 3, 14, 28, 42 and 91 days after hypertension induction. Immunohistochemical identification of CART cells was performed on paraffin sections of the UBs of all study animals. CART was detected in endocrine cells, especially numerous in the submucosa and muscularis layers, with a few found in the transitional epithelium and only occasionally in the serosa. Hypertension significantly increased the number of CART-positive cells in the rat UBs. At 3 and 42 days after the procedure, statistically significantly higher numbers of CART-positive cells were identified in comparison with the control animals. The differences between the hypertensive rats and the control animals concerned not only the number density of CART-immunoreactive cells but also their localization. After a 6-week period, each of the rats subjected to the renal artery clipping procedure developed stable hypertension. CART appeared in numerous transitional epithelium cells. As this study provides novel findings, the question

  11. [Data fusion and multi-components quantitative analysis for identification and quality evaluation of Gentiana rigescens from different geographical origins].

    PubMed

    Wang, Qin-Qin; Shen, Tao; Zuo, Zhi-Tian; Huang, Heng-Yu; Wang, Yuan-Zhong

    2018-03-01

The accumulation of secondary metabolites in traditional Chinese medicine (TCM) is closely related to its geographical origin. The identification of origins and multi-component quantitative evaluation are of great significance for ensuring the quality of medicinal materials. In this study, the identification of Gentiana rigescens from different geographical origins was conducted by data fusion of Fourier transform infrared (FTIR) spectroscopy and high performance liquid chromatography (HPLC) in combination with partial least squares discriminant analysis; meanwhile, quantitative analysis of index components was conducted to provide an accurate and comprehensive identification and quality evaluation strategy for selecting the best production areas of G. rigescens. FTIR and HPLC data of 169 G. rigescens samples from Yunnan, Sichuan, Guangxi and Guizhou Provinces were collected. The raw infrared spectra were pre-treated by multiplicative scatter correction, standard normal variate (SNV) and Savitzky-Golay (SG) derivative. The performances of FTIR, HPLC, low-level data fusion and mid-level data fusion for identification were then compared, and the contents of gentiopicroside, swertiamarin, loganic acid and sweroside were determined by HPLC. The results showed that the FTIR spectra of G. rigescens from different geographical origins were different, and the best pre-treatment method was SNV + SG derivative (second derivative, window size 15, polynomial order 2). The accuracy of low- and mid-level data fusion in the prediction set (96.43%) was higher than that of FTIR or HPLC alone (94.64%), and the accuracy of low-level data fusion in the training set (100%) was higher than that of mid-level data fusion (99.12%). The contents of the iridoid glycosides in Yunnan were the highest among the provinces. The average content of gentiopicroside, as a bioactive marker in Chinese
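Low-level data fusion as used in this record amounts to scaling each instrumental block and concatenating the feature matrices before classification; a minimal sketch with assumed toy matrices (the fused matrix would then feed PLS-DA, which is not shown):

```python
import numpy as np

def autoscale(block):
    """Column-wise mean-centering and unit-variance scaling."""
    block = np.asarray(block, dtype=float)
    return (block - block.mean(axis=0)) / block.std(axis=0)

def low_level_fusion(ftir_block, hplc_block):
    """Autoscale each block so neither instrument dominates by magnitude,
    then concatenate features sample-wise (rows = samples)."""
    return np.hstack([autoscale(ftir_block), autoscale(hplc_block)])

# Four samples: three hypothetical FTIR features fused with two HPLC features.
ftir = [[1, 2, 3], [2, 3, 5], [3, 5, 4], [4, 6, 8]]
hplc = [[0.5, 1.0], [0.9, 1.4], [1.3, 2.2], [1.8, 3.1]]
fused = low_level_fusion(ftir, hplc)
```

Mid-level fusion differs only in that features extracted from each block (e.g. PLS scores) are concatenated instead of the raw variables.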

  12. An urban energy performance evaluation system and its computer implementation.

    PubMed

    Wang, Lei; Yuan, Guan; Long, Ruyin; Chen, Hong

    2017-12-15

To improve the urban environment and effectively reflect and promote urban energy performance, an urban energy performance evaluation system was constructed, thereby strengthening urban environmental management capabilities. From the perspectives of internalization and externalization, a framework of evaluation indicators and key factors that determine urban energy performance and explain differences in performance was proposed according to established theory and previous studies. Using an improved stochastic frontier analysis method, an urban energy performance evaluation and factor analysis model was built that brings performance evaluation and factor analysis into the same stage of study. Using data for the Chinese provincial capitals from 2004 to 2013, the coefficients of the evaluation indicators and key factors were calculated with this model. These coefficients were then used to compile the program file. The urban energy performance evaluation system developed in this study was designed in three parts: a database, a distributed component server, and a human-machine interface. Its functions include login, addition, editing, input, calculation, analysis, comparison, inquiry, and export. On the basis of these contents, the urban energy performance evaluation system was developed using Microsoft Visual Studio .NET 2015. The system can effectively reflect the status of and any changes in urban energy performance. Beijing was used as an example for an empirical study, which further verified the applicability and convenience of the evaluation system. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Crowdsourcing Assessment of Surgeon Dissection of Renal Artery and Vein During Robotic Partial Nephrectomy: A Novel Approach for Quantitative Assessment of Surgical Performance.

    PubMed

    Powers, Mary K; Boonjindasup, Aaron; Pinsky, Michael; Dorsey, Philip; Maddox, Michael; Su, Li-Ming; Gettman, Matthew; Sundaram, Chandru P; Castle, Erik P; Lee, Jason Y; Lee, Benjamin R

    2016-04-01

    We sought to describe a methodology of crowdsourcing for obtaining quantitative performance ratings of surgeons performing renal artery and vein dissection during robotic partial nephrectomy (RPN), and to compare the assessments of technical performance obtained from crowdworkers with those of surgical content experts (CE). Our hypothesis was that the crowd can score performances of renal hilar dissection comparably to surgical CE using the Global Evaluative Assessment of Robotic Skills (GEARS). A group of resident and attending robotic surgeons submitted a total of 14 video clips of RPN during hilar dissection. These videos were rated by both crowdworkers and CE for technical skills performance using GEARS. A minimum of 3 CE and 30 Amazon Mechanical Turk crowdworkers evaluated each video with the GEARS scale. Within 13 days, we received ratings of all videos from all CE, and within 11.5 hours, we received 548 GEARS ratings from crowdworkers. Even though the CE were exposed to a training module, internal consistency across videos of CE GEARS ratings remained low (ICC = 0.38). Despite this, we found that crowdworker GEARS ratings of videos were highly correlated with CE ratings at both the video level (R = 0.82, p < 0.001) and the surgeon level (R = 0.84, p < 0.001). Similarly, crowdworker ratings of the renal artery dissection were highly correlated with expert assessments (R = 0.83, p < 0.001) for the unique surgery-specific assessment question. We conclude that crowdsourced performance ratings may be an alternative and/or adjunct to surgical experts' ratings, providing a rapid, scalable way to triage technical skills.
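
    The video- and surgeon-level agreement reported above is a Pearson correlation between mean crowd and mean expert GEARS scores. A minimal sketch of that computation, using invented per-video scores rather than the study's data:

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-video mean GEARS totals (crowd vs. content experts).
crowd  = [18.2, 14.5, 20.1, 16.8, 12.9, 19.4]
expert = [17.5, 13.8, 21.0, 16.0, 13.5, 18.9]

r = pearson_r(crowd, expert)
print(f"video-level correlation r = {r:.2f}")
```

    With real data, each entry would be the mean GEARS total for one video, averaged over that video's raters.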

  14. Quantitative evaluation of the CEEM soil sampling intercomparison.

    PubMed

    Wagner, G; Lischer, P; Theocharopoulos, S; Muntau, H; Desaules, A; Quevauviller, P

    2001-01-08

    The aim of the CEEM soil project was to compare and to test the soil sampling and sample preparation guidelines used in the member states of the European Union and Switzerland for investigations of background and large-scale contamination of soils, soil monitoring, and environmental risk assessments. The results of the comparative evaluation of the sampling guidelines demonstrated that, in soil contamination studies carried out with different sampling strategies and methods, comparable results can hardly be expected. Therefore, a reference database (RDB) was established by the organisers, which acted as a basis for the quantitative comparison of the participants' results. The detected deviations were related to the methodological details of the individual strategies. The comparative evaluation concept consisted of three steps. The first step was a comparison of the participants' samples (which were both centrally and individually analysed) between each other, as well as with the RDB and some given soil quality standards, at the level of the concentrations present. The comparison was made using the example of the metals cadmium, copper, lead and zinc. As a second step, the absolute and relative deviations between the RDB and the participants' results (both centrally analysed under repeatability conditions) were calculated. The comparability of the samples with the RDB was categorised on four levels. Methods of exploratory statistical analysis were applied to estimate the differential method bias among the participants. The levels of error caused by sampling and sample preparation were compared with those caused by the analytical procedures. As a third step, the methodological profiles of the participants were compiled to concisely describe the different procedures used. They were related to the results to find out the main factors leading to their incomparability. The outcome of this evaluation process was a list of strategies and

  15. A Quantitative Human Spacecraft Design Evaluation Model for Assessing Crew Accommodation and Utilization

    NASA Astrophysics Data System (ADS)

    Fanchiang, Christine

    Crew performance, including both accommodation and utilization factors, is an integral part of every human spaceflight mission, from commercial space tourism to the demanding journey to Mars and beyond. Spacecraft were historically built by engineers and technologists trying to adapt the vehicle into cutting-edge rocketry, with the assumption that the astronauts could be trained and would adapt to the design. By and large, that is still the current state of the art. It is recognized, however, that poor human-machine design integration can lead to catastrophic and deadly mishaps. The premise of this work relies on the idea that if an accurate predictive model exists to forecast crew performance issues as a result of spacecraft design and operations, it can help designers and managers make better decisions throughout the design process and ensure that the crewmembers are well-integrated with the system from the very start. The result should be a high-quality, user-friendly spacecraft that optimizes the utilization of the crew while keeping them alive, healthy, and happy during the course of the mission. Therefore, the goal of this work was to develop an integrative framework to quantitatively evaluate a spacecraft design from the crew performance perspective. The approach presented here is done at a very fundamental level, starting with identifying and defining basic terminology, and then builds up important axioms of human spaceflight that lay the foundation for how such a framework can be developed. With the framework established, a methodology for characterizing the outcome using a mathematical model was developed by pulling from existing metrics and data collected on human performance in space. Representative test scenarios were run to show what information could be garnered and how it could be applied as a useful, understandable metric for future spacecraft design.
While the model is the primary tangible product from this research, the more interesting outcome of

  16. Performance-Based Evaluation and School Librarians

    ERIC Educational Resources Information Center

    Church, Audrey P.

    2015-01-01

    Evaluation of instructional personnel is standard procedure in our Pre-K-12 public schools, and its purpose is to document educator effectiveness. With Race to the Top and No Child Left Behind waivers, states are required to implement performance-based evaluations that demonstrate student academic progress. This three-year study describes the…

  17. Theory and Practice on Teacher Performance Evaluation

    ERIC Educational Resources Information Center

    Yonghong, Cai; Chongde, Lin

    2006-01-01

    Teacher performance evaluation plays a key role in educational personnel reform, so it has been an important yet difficult issue in educational reform. Previous evaluations on teachers failed to make strict distinction among the three dominant types of evaluation, namely, capability, achievement, and effectiveness. Moreover, teacher performance…

  18. Qualitative and quantitative evaluation of solvent systems for countercurrent separation.

    PubMed

    Friesen, J Brent; Ahmed, Sana; Pauli, Guido F

    2015-01-16

    Rational solvent system selection for countercurrent chromatography and centrifugal partition chromatography technology (collectively known as countercurrent separation) studies continues to be a scientific challenge, as the fundamental questions of comparing polarity range and selectivity within a solvent system family and between putative orthogonal solvent systems remain unanswered. The current emphasis on metabolomic investigations and analysis of complex mixtures necessitates the use of successive orthogonal countercurrent separation (CS) steps as part of complex fractionation protocols. Addressing the broad range of metabolite polarities demands development of new CS solvent systems with appropriate composition, polarity (π), selectivity (σ), and suitability. In this study, a mixture of twenty commercially available natural products, called the GUESSmix, was utilized to evaluate both solvent system polarity and selectivity characteristics. Comparisons of GUESSmix analyte partition coefficient (K) values give rise to a measure of solvent system polarity range called the GUESSmix polarity index (GUPI). Solvatochromic dye and electrical permittivity measurements were also evaluated as means of quantitatively assessing solvent system polarity. The relative selectivity of solvent systems was evaluated with the GUESSmix by calculating the pairwise resolution (αip), the number of analytes found in the sweet spot (Nsw), and the pairwise resolution of those sweet spot analytes (αsw). The combination of these parameters allowed for both intra- and inter-family comparison of solvent system selectivity. Finally, 2-dimensional reciprocal shifted symmetry plots (ReSS(2)) were created to visually compare both the polarities and selectivities of solvent system pairs. This study helps to pave the way to the development of new solvent systems that are amenable to successive orthogonal CS protocols employed in metabolomic studies. Copyright © 2014 Elsevier B.V. All rights reserved.
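
    As an illustration of the kind of selectivity bookkeeping the abstract describes, the sketch below computes pairwise separation factors and counts sweet-spot analytes from partition coefficients. The separation-factor definition (α = K_high/K_low) and the sweet-spot window are assumptions chosen for illustration, not taken from the paper, and all K values are invented:

```python
# Assumed K window for the "sweet spot"; not taken from the paper.
SWEET_SPOT = (0.25, 16.0)

def separation_factor(k1, k2):
    """Assumed definition: ratio of the larger to the smaller K."""
    lo, hi = sorted((k1, k2))
    return hi / lo

def sweet_spot_analytes(k_values, window=SWEET_SPOT):
    """Analytes whose partition coefficient falls inside the K window."""
    lo, hi = window
    return [k for k in k_values if lo <= k <= hi]

# Hypothetical partition coefficients for five GUESSmix-like analytes,
# sorted from most polar-partitioning to least.
ks = [0.05, 0.4, 1.0, 2.6, 40.0]

n_sweet = len(sweet_spot_analytes(ks))
alpha_pairs = [separation_factor(a, b) for a, b in zip(ks, ks[1:])]
print(n_sweet, [round(a, 1) for a in alpha_pairs])
```

    A solvent system with many analytes in the sweet spot and large adjacent α values would score well on both coverage and selectivity under these assumed definitions.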

  19. 48 CFR 36.201 - Evaluation of contractor performance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Evaluation of contractor performance. 36.201 Section 36.201 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION... Contracting for Construction 36.201 Evaluation of contractor performance. See 42.1502(e) for the requirements...

  20. Performance evaluation of knowledge management among hospital employees.

    PubMed

    Chang, Ying-Ying; Hsu, Pi-Fang; Li, Min-Hua; Chang, Ching-Ching

    2011-01-01

    The purpose of this study is to investigate the cognition of knowledge management (KM) among hospital employees and the relationship between KM and the KM enabler activities (financial, customer, internal business processes, learning and growth) in a regional hospital in Taiwan. Both qualitative and quantitative research methods were used in this study. The qualitative data were collected through in-depth interviews with three policy-makers as participants. The quantitative data were collected from a regional hospital in the northern part of Taiwan with a 77 percent effective response rate (n = 154). The findings in this paper indicate that the cognition of and demand for KM among subordinates are close to the expectations of policy-makers. The policy-makers expect subordinates working in the hospital to be brave in taking on new responsibilities and to comply with hospital operation norms. KM is emphasized as a powerful and positive asset. Moreover, understanding KM predicts good performance in an organization. The findings in this paper can be generalized to other regional hospitals and may be applied to a wider population. This study can provide insights into the perceptions and cognitions of workers in a hospital about KM and the activities of KM enablers. The responses and perceptions observed in the interviews, as well as the quantitative research results, could be useful to other hospitals and individuals who engage in KM as a new management trend. This study suggests KM guidelines for policy-makers who are experienced managers.

  1. Assessing deep and shallow learning methods for quantitative prediction of acute chemical toxicity.

    PubMed

    Liu, Ruifeng; Madore, Michael; Glover, Kyle P; Feasel, Michael G; Wallqvist, Anders

    2018-05-02

    Animal-based methods for assessing chemical toxicity are struggling to meet testing demands. In silico approaches, including machine-learning methods, are promising alternatives. Recently, deep neural networks (DNNs) were evaluated and reported to outperform other machine-learning methods for quantitative structure-activity relationship modeling of molecular properties. However, most of the reported performance evaluations relied on global performance metrics, such as the root mean squared error (RMSE) between the predicted and experimental values of all samples, without considering the impact of sample distribution across the activity spectrum. Here, we carried out an in-depth analysis of DNN performance for quantitative prediction of acute chemical toxicity using several datasets. We found that the overall performance of DNN models on datasets of up to 30,000 compounds was similar to that of random forest (RF) models, as measured by the RMSE and correlation coefficients between the predicted and experimental results. However, our detailed analyses demonstrated that global performance metrics are inappropriate for datasets with a highly uneven sample distribution, because they show a strong bias for the most populous compounds along the toxicity spectrum. For highly toxic compounds, DNN and RF models trained on all samples performed much worse than the global performance metrics indicated. Surprisingly, our variable nearest neighbor method, which utilizes only structurally similar compounds to make predictions, performed reasonably well, suggesting that information of close near neighbors in the training sets is a key determinant of acute toxicity predictions.
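
    The abstract's central caution, that a global RMSE can hide poor performance on a sparse, highly toxic tail of the activity distribution, is easy to demonstrate with a toy stratified comparison (all values invented):

```python
import math

def rmse(y_true, y_pred):
    """Root mean squared error over paired observations."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

# Hypothetical toxicity data: many low-toxicity compounds predicted well,
# a few highly toxic compounds predicted poorly.
y_true = [1.0] * 90 + [5.0] * 10   # e.g. -log(LD50)-style values
y_pred = [1.1] * 90 + [3.0] * 10   # large errors only on the toxic tail

overall = rmse(y_true, y_pred)
toxic = rmse(y_true[90:], y_pred[90:])
print(f"overall RMSE = {overall:.2f}, highly-toxic RMSE = {toxic:.2f}")
```

    The global metric is dominated by the 90 well-predicted compounds, while error on the 10 most toxic compounds is roughly three times larger, which is exactly the bias the authors describe.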

  2. 40 CFR 35.515 - Evaluation of performance.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ....515 Evaluation of performance. (a) Joint evaluation process. The applicant and the Regional... work plan (see section 35.507(b)(2)(iv)). A description of the evaluation process and reporting... annually and must satisfy the requirements for progress reporting under 40 CFR 31.40(b). (b) Elements of...

  3. Establishment of a new method to quantitatively evaluate hyphal fusion ability in Aspergillus oryzae.

    PubMed

    Tsukasaki, Wakako; Maruyama, Jun-Ichi; Kitamoto, Katsuhiko

    2014-01-01

    Hyphal fusion is involved in the formation of an interconnected colony in filamentous fungi, and it is the first process in sexual/parasexual reproduction. However, it has been difficult to evaluate hyphal fusion efficiency in Aspergillus oryzae, in spite of its industrial significance, because fusion occurs at low frequency in this species. Here, we established a method to quantitatively evaluate the hyphal fusion ability of A. oryzae using mixed culture of two different auxotrophic strains, where the ratio of heterokaryotic conidia growing without the auxotrophic requirements reflects the hyphal fusion efficiency. By employing this method, it was demonstrated that AoSO and AoFus3 are required for hyphal fusion, and that the hyphal fusion efficiency of A. oryzae was increased by depleting the nitrogen source, including large amounts of carbon source, and adjusting the pH to 7.0.

  4. Quantitative cultures of bronchoscopically obtained specimens should be performed for optimal management of ventilator-associated pneumonia.

    PubMed

    Baselski, Vickie; Klutts, J Stacey

    2013-03-01

    Ventilator-associated pneumonia (VAP) is a leading cause of health care-associated infection. It has a high rate of attributed mortality, and this mortality is increased in patients who do not receive appropriate empirical antimicrobial therapy. As a result of the overuse of broad-spectrum antimicrobials such as the carbapenems, strains of Acinetobacter, Enterobacteriaceae, and Pseudomonas aeruginosa susceptible only to polymyxins and tigecycline have emerged as important causes of VAP. The need to accurately diagnose VAP so that appropriate discontinuation or de-escalation of antimicrobial therapy can be initiated to reduce this antimicrobial pressure is essential. Practice guidelines for the diagnosis of VAP advocate the use of bronchoalveolar lavage (BAL) fluid obtained either bronchoscopically or by the use of a catheter passed through the endotracheal tube. The CDC recommends that quantitative cultures be performed on these specimens, using ≥ 10(4) CFU/ml to designate a positive culture (http://www.cdc.gov/nhsn/TOC_PSCManual.html, accessed 30 October 2012). However, there is no consensus in the clinical microbiology community as to whether these specimens should be cultured quantitatively, using the aforementioned bacterial cell count to designate infection, or by a semiquantitative approach. We have asked Vickie Baselski, University of Tennessee Health Science Center, who was the lead author on one of the seminal papers on quantitative BAL fluid culture, to explain why she believes that quantitative BAL fluid cultures are the optimal strategy for VAP diagnosis, and Stacey Klutts, University of Iowa, to advocate the semiquantitative approach.

  5. TU-H-CAMPUS-IeP3-04: Evaluation of Changes in Quantitative Ultrasound Parameters During Prostate Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Najafi, M; El Kaffas, A; Han, B

    Purpose: The Clarity Autoscan ultrasound monitoring system allows acquisition of raw radiofrequency (RF) ultrasound data prior to and during radiotherapy. This enables the computation of 3D Quantitative Ultrasound (QUS) tissue parametric maps from these data. We aim to evaluate whether QUS parameters undergo changes with radiotherapy and could thus potentially be used as early predictors and/or markers of treatment response in prostate cancer patients. Methods: In-vivo evaluation was performed under an IRB protocol to allow data collection in prostate patients treated with VMAT, whereby the prostate was imaged through the acoustic window of the perineum. QUS spectroscopy analysis was carried out by computing a tissue power spectrum normalized to the power spectrum obtained from a quartz reference to remove system transfer function effects. A ROI was selected within the 3D image volume of the prostate. Because longitudinal registration was optimal, the same features could be used to select ROIs at roughly the same location in images acquired on different days. Parametric maps were generated within the rectangular ROIs with window sizes that were approximately 8 times the wavelength of the ultrasound. The mid-band fit (MBF), spectral slope (SS) and spectral intercept (SI) QUS parameters were computed for each window within the ROI and displayed as parametric maps. Quantitative parameters were obtained by averaging each of the spectral parameters over the whole ROI. Results: Data were acquired for over 21 treatment fractions. Preliminary results show changes in the parametric maps. MBF values decreased from −33.9 dB to −38.7 dB from pre-treatment to the last day of treatment. The spectral slope increased from −1.1 a.u. to −0.5 a.u., and the spectral intercept decreased from −28.2 dB to −36.3 dB over the 21-fraction treatment regimen. Conclusion: QUS parametric maps change over the course of treatment, which warrants further investigation in their potential use for treatment planning and predicting
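
    A common way to obtain the three spectral parameters named above is a straight-line fit to the calibrated power spectrum (in dB) over the usable bandwidth: SS is the slope, SI the 0-MHz intercept, and MBF the fitted value at the band centre. The sketch below assumes this standard definition; the frequencies and spectrum values are invented:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares line fit; returns (slope, intercept)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return slope, intercept

# Invented calibrated (normalized) power spectrum over a usable bandwidth.
freqs_mhz = [3.0, 4.0, 5.0, 6.0, 7.0]
spectrum_db = [-30.0, -31.2, -32.1, -33.0, -34.2]

ss, si = linear_fit(freqs_mhz, spectrum_db)          # SS (dB/MHz), SI (dB)
mbf = si + ss * (freqs_mhz[0] + freqs_mhz[-1]) / 2   # fitted value at band centre
print(f"SS = {ss:.2f} dB/MHz, SI = {si:.2f} dB, MBF = {mbf:.2f} dB")
```

    In the study, this fit would be repeated per sliding window inside the ROI to build the parametric maps, then averaged over the ROI.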

  6. Error Reduction Program. [combustor performance evaluation codes

    NASA Technical Reports Server (NTRS)

    Syed, S. A.; Chiappetta, L. M.; Gosman, A. D.

    1985-01-01

    The details of a study to select, incorporate, and evaluate the best available finite difference scheme to reduce numerical error in combustor performance evaluation codes are described. The combustor performance computer programs chosen were the two-dimensional and three-dimensional versions of Pratt & Whitney's TEACH code. The criteria used to select schemes required that the difference equations mirror the properties of the governing differential equation, be more accurate than the current hybrid difference scheme, be stable and economical, be compatible with TEACH codes, use only modest amounts of additional storage, and be relatively simple. The methods of assessment used in the selection process consisted of examination of the difference equation, evaluation of the properties of the coefficient matrix, Taylor series analysis, and performance on model problems. Five schemes from the literature and three schemes developed during the course of the study were evaluated. This effort resulted in the incorporation into 3D-TEACH of a scheme which is usually more accurate than the hybrid differencing method and never less accurate.

  7. Evaluation of Propranolol Effect on Experimental Acute and Chronic Toxoplasmosis Using Quantitative PCR

    PubMed Central

    Montazeri, Mahbobeh; Ebrahimzadeh, Mohammad Ali; Ahmadpour, Ehsan; Sharif, Mehdi; Sarvi, Shahabeddin

    2016-01-01

    Current therapies against toxoplasmosis are limited, and drugs have significant side effects and low efficacies. We evaluated the potential anti-Toxoplasma activity of propranolol at a dose of 2 or 3 mg/kg of body weight/day in vivo in the acute and chronic phases. Propranolol, as a cell membrane-stabilizing agent, is a suitable drug for inhibiting the entrance of Toxoplasma gondii tachyzoites into cells. The acute-phase assay was performed using propranolol, pyrimethamine, and propranolol plus pyrimethamine before (pretreatment) and after (posttreatment) intraperitoneal challenge with 1 × 10(3) tachyzoites of the virulent T. gondii strain RH in BALB/c mice. Also, in the chronic phase, treatment was performed 12 h before intraperitoneal challenge with 1 × 10(6) tachyzoites of the virulent strain RH of T. gondii in rats. One week (in the acute phase) and 2 months (in the chronic phase) postinfection, tissues were isolated and DNA was extracted. Subsequently, parasite load was calculated using quantitative PCR (qPCR). In the acute phase, in both groups, significant anti-Toxoplasma activity was observed using propranolol (P < 0.001). Propranolol in the pretreatment group showed higher anti-Toxoplasma activity in brain tissues than propranolol in posttreatment, indicating therapeutic efficacy against toxoplasmosis. Also, propranolol combined with pyrimethamine reduced the parasite load and significantly increased survival of mice in the pretreatment group. In the chronic phase, anti-Toxoplasma activity and decreased parasite load in tissues were observed with propranolol. In conclusion, the presented results demonstrate that propranolol, as an orally available drug, is effective at low doses against acute and latent murine toxoplasmosis, and the efficiency of the drug is increased when it is used in combination therapy with pyrimethamine. PMID:27645234

  8. Comparison of quantitative evaluation between cutaneous and transosseous inertial sensors in anterior cruciate ligament deficient knee: A cadaveric study.

    PubMed

    Murase, Atsunori; Nozaki, Masahiro; Kobayashi, Masaaki; Goto, Hideyuki; Yoshida, Masahito; Yasuma, Sanshiro; Takenaga, Tetsuya; Nagaya, Yuko; Mizutani, Jun; Okamoto, Hideki; Iguchi, Hirotaka; Otsuka, Takanobu

    2017-09-01

    Recently, several authors have reported on the quantitative evaluation of the pivot-shift test using cutaneous fixation of inertial sensors. Before utilizing such sensors for clinical studies, it is necessary to evaluate the accuracy of cutaneous sensors in assessing rotational knee instability. To do so, we compared cutaneous and transosseous sensors in the quantitative assessment of rotational knee instability in a cadaveric setting, in order to demonstrate their clinical applicability. Eight freshly frozen human cadaveric knees were used in this study. Inertial sensors were fixed on the tibial tuberosity and directly fixed to the distal tibia bone. A single examiner performed the pivot shift test from flexion to extension on the intact knees and ACL deficient knees. The peak overall magnitude of acceleration and the maximum rotational angular velocity about the tibial superoinferior axis were repeatedly measured with the inertial sensors during the pivot shift test. Correlations between cutaneous and transosseous inertial sensors were evaluated, along with statistical analysis for differences between ACL intact and ACL deficient knees. Acceleration and angular velocity measured with the cutaneous sensor demonstrated a strong positive correlation with the transosseous sensor (r = 0.86 and r = 0.83). Comparison between cutaneous and transosseous sensors indicated a significant difference for the peak overall magnitude of acceleration (cutaneous: 10.3 ± 5.2 m/s(2), transosseous: 14.3 ± 7.6 m/s(2), P < 0.01) and for the maximum internal rotation angular velocity (cutaneous: 189.5 ± 99.6 deg/s, transosseous: 225.1 ± 103.3 deg/s, P < 0.05), but no significant difference for the maximum external rotation angular velocity (cutaneous: 176.1 ± 87.3 deg/s, transosseous: 195.9 ± 106.2 deg/s, n.s.). There is a positive correlation between cutaneous and transosseous inertial sensors. Therefore, this study indicated that

  9. Meta-analysis of the technical performance of an imaging procedure: guidelines and statistical methodology.

    PubMed

    Huang, Erich P; Wang, Xiao-Feng; Choudhury, Kingshuk Roy; McShane, Lisa M; Gönen, Mithat; Ye, Jingjing; Buckler, Andrew J; Kinahan, Paul E; Reeves, Anthony P; Jackson, Edward F; Guimaraes, Alexander R; Zahlmann, Gudrun

    2015-02-01

    Medical imaging serves many roles in patient care and the drug approval process, including assessing treatment response and guiding treatment decisions. These roles often involve a quantitative imaging biomarker, an objectively measured characteristic of the underlying anatomic structure or biochemical process derived from medical images. Before a quantitative imaging biomarker is accepted for use in such roles, the imaging procedure to acquire it must undergo evaluation of its technical performance, which entails assessment of performance metrics such as repeatability and reproducibility of the quantitative imaging biomarker. Ideally, this evaluation will involve quantitative summaries of results from multiple studies to overcome limitations due to the typically small sample sizes of technical performance studies and/or to include a broader range of clinical settings and patient populations. This paper is a review of meta-analysis procedures for such an evaluation, including identification of suitable studies, statistical methodology to evaluate and summarize the performance metrics, and complete and transparent reporting of the results. This review addresses challenges typical of meta-analyses of technical performance, particularly small study sizes, which often causes violations of assumptions underlying standard meta-analysis techniques. Alternative approaches to address these difficulties are also presented; simulation studies indicate that they outperform standard techniques when some studies are small. The meta-analysis procedures presented are also applied to actual [18F]-fluorodeoxyglucose positron emission tomography (FDG-PET) test-retest repeatability data for illustrative purposes. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
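
    One standard way to pool a performance metric such as a repeatability coefficient across studies is a DerSimonian-Laird random-effects summary. Whether this matches the paper's exact methodology is not stated here, so treat the following as a generic sketch with invented per-study estimates and variances:

```python
def dersimonian_laird(estimates, variances):
    """DerSimonian-Laird random-effects pooling of per-study estimates."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, estimates)) / sum(w)
    # Cochran's Q heterogeneity statistic and between-study variance tau^2.
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, estimates))
    df = len(estimates) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    # Re-weight with tau^2 added to each study's variance.
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, estimates)) / sum(w_star)
    se = (1.0 / sum(w_star)) ** 0.5
    return pooled, se, tau2

# Hypothetical per-study repeatability estimates and within-study variances.
est = [0.12, 0.18, 0.10, 0.25]
var = [0.002, 0.004, 0.001, 0.006]

pooled, se, tau2 = dersimonian_laird(est, var)
print(f"pooled = {pooled:.3f} (SE {se:.3f}), tau^2 = {tau2:.4f}")
```

    The small-study problem the review highlights shows up here as unreliable within-study variances, which is why the authors examine alternatives to this standard approach.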

  10. Meta-analysis of the technical performance of an imaging procedure: Guidelines and statistical methodology

    PubMed Central

    Huang, Erich P; Wang, Xiao-Feng; Choudhury, Kingshuk Roy; McShane, Lisa M; Gönen, Mithat; Ye, Jingjing; Buckler, Andrew J; Kinahan, Paul E; Reeves, Anthony P; Jackson, Edward F; Guimaraes, Alexander R; Zahlmann, Gudrun

    2017-01-01

    Medical imaging serves many roles in patient care and the drug approval process, including assessing treatment response and guiding treatment decisions. These roles often involve a quantitative imaging biomarker, an objectively measured characteristic of the underlying anatomic structure or biochemical process derived from medical images. Before a quantitative imaging biomarker is accepted for use in such roles, the imaging procedure to acquire it must undergo evaluation of its technical performance, which entails assessment of performance metrics such as repeatability and reproducibility of the quantitative imaging biomarker. Ideally, this evaluation will involve quantitative summaries of results from multiple studies to overcome limitations due to the typically small sample sizes of technical performance studies and/or to include a broader range of clinical settings and patient populations. This paper is a review of meta-analysis procedures for such an evaluation, including identification of suitable studies, statistical methodology to evaluate and summarize the performance metrics, and complete and transparent reporting of the results. This review addresses challenges typical of meta-analyses of technical performance, particularly small study sizes, which often causes violations of assumptions underlying standard meta-analysis techniques. Alternative approaches to address these difficulties are also presented; simulation studies indicate that they outperform standard techniques when some studies are small. The meta-analysis procedures presented are also applied to actual [18F]-fluorodeoxyglucose positron emission tomography (FDG-PET) test–retest repeatability data for illustrative purposes. PMID:24872353

  11. Differences between genders in colorectal morphology on CT colonography using a quantitative approach: a pilot study.

    PubMed

    Weber, Charles N; Poff, Jason A; Lev-Toaff, Anna S; Levine, Marc S; Zafar, Hanna M

    To explore quantitative differences between genders in morphologic colonic metrics and to determine metric reproducibility. Quantitative colonic metrics from 20 male and 20 female CTC datasets were evaluated twice by two readers; all exams were performed after incomplete optical colonoscopy. Intra-/inter-reader reliability was measured with the intraclass correlation coefficient (ICC) and the concordance correlation coefficient (CCC). Women had overall decreased colonic volume, increased tortuosity and compactness, and lower sigmoid apex height on CTC compared to men (p < 0.0001 for all). Quantitative measurements of colonic metrics were highly reproducible (ICC = 0.9989 and 0.9970; CCC = 0.9945). Quantitative morphologic differences between genders can be reproducibly measured. Copyright © 2017 Elsevier Inc. All rights reserved.
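
    The CCC reported above is typically Lin's concordance correlation coefficient, which penalizes both imprecision and systematic offset between two measurement series. A minimal sketch under that assumption, with invented reader measurements:

```python
import statistics

def lin_ccc(xs, ys):
    """Lin's concordance correlation coefficient between two readers."""
    n = len(xs)
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Hypothetical repeated measurements of one colonic metric by two readers.
reader1 = [101.2, 95.4, 110.8, 99.7, 105.3]
reader2 = [100.9, 95.9, 110.1, 100.2, 104.8]

print(f"CCC = {lin_ccc(reader1, reader2):.4f}")
```

    Unlike Pearson's r, the CCC drops below 1 when one reader is systematically offset from the other, which is why it is preferred for agreement studies like this one.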

  12. A simple hemostasis model for the quantitative evaluation of hydrogel-based local hemostatic biomaterials on tissue surface.

    PubMed

    Murakami, Yoshihiko; Yokoyama, Masayuki; Nishida, Hiroshi; Tomizawa, Yasuko; Kurosawa, Hiromi

    2008-09-01

    Several hemostat hydrogels are clinically used, and some other agents are being studied for safer, more facile, and more efficient hemostasis. In the present paper, we propose a novel method to evaluate local hemostat hydrogels on tissue surfaces. The procedure consisted of the following steps: (step 1) a mouse was fixed on a cork board, and its abdomen was incised; (step 2) serous fluid was carefully removed because it affected the estimation of the weight gained by the filter paper, and parafilm and preweighed filter paper were placed beneath the liver (the parafilm prevented the filter paper from absorbing gradually oozing serous fluid); (step 3) the cork board was tilted and maintained at an angle of about 45 degrees so that the blood would more easily flow from the liver toward the filter paper; and (step 4) the bleeding lasted for 3 min. In this step, a hemostat was applied to the liver wound immediately after the liver was pricked with a needle. We found that (1) a careful removal of serous fluid prior to bleeding and (2) a quantitative determination of the amount of excess aqueous solution that oozed out from a hemostat were important to a rigorous evaluation of hemostat efficacy. We successfully evaluated the efficacy of a fibrin-based hemostat hydrogel using our method. The method proposed in the present study enabled the quantitative, accurate, and easy evaluation of the efficacy of a local hemostatic hydrogel which acts as a tissue-adhesive agent on biointerfaces.

  13. Experimental Evaluation of High Performance Integrated Heat Pump

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, William A; Berry, Robert; Durfee, Neal

    2016-01-01

    Integrated heat pump (IHP) technology provides significant potential for energy savings and comfort improvement for residential buildings. In this study, we evaluate the performance of a high performance IHP that provides space heating, cooling, and water heating services. Experiments were conducted according to ASHRAE Standard 206-2013, where 24 test conditions were identified in order to evaluate the IHP performance indices based on the airside performance. Empirical curve fits of the unit's compressor maps are used in conjunction with saturated condensing and evaporating refrigerant conditions to deduce the refrigerant mass flowrate, which, in turn, was used to evaluate the refrigerant-side performance as a check on the airside performance. Heat pump (compressor, fans, and controls) and water pump power were measured separately per requirements of Standard 206. The system was charged per the system manufacturer's specifications. System test results are presented for each operating mode. The overall IHP performance metrics are determined from the test results per the Standard 206 calculation procedures.
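    The compressor-map step above can be sketched as evaluating an AHRI 540-style 10-coefficient polynomial in saturated evaporating (S) and condensing (D) temperatures to deduce refrigerant mass flow, then converting it to a refrigerant-side capacity as a check on the airside measurement. The coefficients and enthalpy value below are hypothetical placeholders, not the tested unit's map.

    ```python
    def map_poly(coeffs, s, d):
        """AHRI 540 form: c1 + c2*S + c3*D + c4*S^2 + c5*S*D + c6*D^2
                          + c7*S^3 + c8*S^2*D + c9*S*D^2 + c10*D^3"""
        c = coeffs
        return (c[0] + c[1] * s + c[2] * d + c[3] * s * s + c[4] * s * d
                + c[5] * d * d + c[6] * s ** 3 + c[7] * s * s * d
                + c[8] * s * d * d + c[9] * d ** 3)

    # Hypothetical mass-flow map coefficients (result in kg/h):
    MDOT_COEFFS = [180.0, 4.2, -1.1, 0.05, -0.02, 0.008, 0, 0, 0, 0]

    s_evap, d_cond = 7.0, 45.0        # deg C, from measured saturation pressures
    mdot = map_poly(MDOT_COEFFS, s_evap, d_cond)   # kg/h
    dh = 165.0                        # kJ/kg, hypothetical enthalpy rise across evaporator
    capacity_kw = mdot / 3600.0 * dh  # refrigerant-side capacity, kW
    print(round(mdot, 1), round(capacity_kw, 2))
    ```

    In practice, the saturation temperatures come from pressure transducers via the refrigerant's property tables, and the refrigerant-side capacity is compared against the airside enthalpy balance.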

  14. Quantitative and Qualitative Evaluation of Iranian Researchers' Scientific Production in Dentistry Subfields.

    PubMed

    Yaminfirooz, Mousa; Motallebnejad, Mina; Gholinia, Hemmat; Esbakian, Somayeh

    2015-10-01

    As in other fields of medicine, scientific production in the field of dentistry occupies a significant place. This study aimed at quantitatively and qualitatively evaluating Iranian researchers' scientific output in the field of dentistry and determining their contribution in each of its subfields and branches. This research was a scientometric study that applied quantitative and qualitative indices of Web of Science (WoS). The research population consisted of 927 indexed documents published under the name of Iran in the time span of 1993-2012, extracted from WoS on 10 March 2013. The Mann-Whitney test and Pearson correlation coefficient were used for data analysis in SPSS 19. 777 (83.73%) of all indexed items of scientific output in WoS were scientific articles. The highest growth rate of scientific production, 90%, belonged to the endodontics subfield. The correlation coefficient test showed a significant positive relationship between the number of documents and their publication age (P < 0.0001). There was a significant difference between the mean number of published articles in the first ten-year period (1993-2003) and that of the second one (2004-2013), in favor of the latter (P = 0.001). The distribution frequencies of scientific production in the various subfields of dentistry were very different. The infrastructure needs to be reinforced for more balanced scientific production in the field and its related subfields.
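    The Pearson correlation used above (document counts vs. publication age) can be sketched in a few lines. The yearly counts below are hypothetical illustration data, not the study's.

    ```python
    import math

    def pearson_r(x, y):
        """Pearson correlation coefficient between two equal-length sequences."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = math.sqrt(sum((a - mx) ** 2 for a in x))
        sy = math.sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy)

    age = [1, 2, 3, 4, 5]          # years since publication (hypothetical)
    docs = [12, 25, 31, 48, 60]    # indexed documents per cohort (hypothetical)
    print(round(pearson_r(age, docs), 3))  # positive r: older cohorts, more documents
    ```

    A significance test on r (or the Mann-Whitney comparison of the two decades) would normally be done with a statistics package such as SPSS, as in the abstract.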

  15. A new approach for the quantitative evaluation of drawings in children with learning disabilities.

    PubMed

    Galli, Manuela; Vimercati, Sara Laura; Stella, Giacomo; Caiazzo, Giorgia; Norveti, Federica; Onnis, Francesca; Rigoldi, Chiara; Albertini, Giorgio

    2011-01-01

    A new method for a quantitative and objective description of drawing, and for the quantification of drawing ability in children with learning disabilities (LD), is hereby presented. Twenty-four normally developing children (N) (age 10.6 ± 0.5) and 18 children with learning disabilities (LD) (age 10.3 ± 2.4) took part in the study. The drawing tasks were chosen among those already used in daily clinical experience (Denver Developmental Screening Test). Several parameters were defined in order to quantitatively describe the features of the children's drawings, introducing new objective measurements besides the subjective standard clinical evaluation. The experimental set-up proved to be valid for clinical application with LD children. The parameters highlighted differences in the drawing features of N and LD children. This paper suggests the applicability of this protocol to other fields of motor and cognitive evaluation, as well as the possibility of studying upper limb position and muscle activation during drawing. Copyright © 2011 Elsevier Ltd. All rights reserved.

  16. Implementing a Quantitative Analysis Design Tool for Future Generation Interfaces

    DTIC Science & Technology

    2012-03-01

    with Remotely Piloted Aircraft (RPA) has resulted in the need for a platform to evaluate interface design. The Vigilant Spirit Control Station (VSCS) ...Spirit interface. A modified version of the HCI Index was successfully applied to perform a quantitative analysis of the baseline VSCS interface and ...time of the original VSCS interface. These results revealed the effectiveness of the tool and demonstrated in the design of future generation

  17. Quantitative contrast enhanced magnetic resonance imaging for the evaluation of peripheral arterial disease: a comparative study versus standard digital angiography.

    PubMed

    Pavlovic, Chris; Futamatsu, Hideki; Angiolillo, Dominick J; Guzman, Luis A; Wilke, Norbert; Siragusa, Daniel; Wludyka, Peter; Percy, Robert; Northrup, Martin; Bass, Theodore A; Costa, Marco A

    2007-04-01

    The purpose of this study is to evaluate the accuracy of semiautomated analysis of contrast enhanced magnetic resonance angiography (MRA) in patients who have undergone standard angiographic evaluation for peripheral vascular disease (PVD). Magnetic resonance angiography is an important tool for evaluating PVD. Although this technique is both safe and noninvasive, the accuracy and reproducibility of quantitative measurements of disease severity using MRA in the clinical setting have not been fully investigated. Forty-three lesions in 13 patients who underwent both MRA and digital subtraction angiography (DSA) of iliac and common femoral arteries within 6 months were analyzed using quantitative magnetic resonance angiography (QMRA) and quantitative vascular analysis (QVA). Analysis was repeated by a second operator, and by the same operator approximately 1 month later. QMRA underestimated percent diameter stenosis (%DS) compared to measurements made with QVA by 2.47%. Limits of agreement between the two methods were +/- 9.14%. Interobserver variability in measurements of %DS was +/- 12.58% for QMRA and +/- 10.04% for QVA. Intraobserver variability of %DS was +/- 4.6% for QMRA and +/- 8.46% for QVA. QMRA displays a high level of agreement with QVA when used to determine stenosis severity in iliac and common femoral arteries. Similar levels of interobserver and intraobserver variability are present with each method. Overall, QMRA represents a useful method to quantify severity of PVD.
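    The bias and limits of agreement reported above follow the usual Bland-Altman pattern: mean paired difference ± 1.96 times the standard deviation of the differences. A minimal sketch, using hypothetical paired %DS values rather than the study's lesions:

    ```python
    import statistics

    def limits_of_agreement(a, b):
        """Bias and 95% limits of agreement between paired measurements."""
        diffs = [x - y for x, y in zip(a, b)]
        bias = statistics.mean(diffs)
        spread = 1.96 * statistics.stdev(diffs)   # sample standard deviation
        return bias, (bias - spread, bias + spread)

    qmra = [42.0, 55.5, 61.0, 30.5, 70.0, 48.5]   # %DS by QMRA (hypothetical)
    qva  = [45.0, 57.0, 64.5, 33.0, 73.5, 50.0]   # %DS by QVA (hypothetical)
    bias, (lo, hi) = limits_of_agreement(qmra, qva)
    print(round(bias, 2), round(lo, 2), round(hi, 2))
    ```

    A negative bias here corresponds to the abstract's finding that QMRA underestimates %DS relative to QVA; the same calculation on repeated readings gives the inter- and intraobserver variabilities.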

  18. Building China's municipal healthcare performance evaluation system: a Tuscan perspective.

    PubMed

    Li, Hao; Barsanti, Sara; Bonini, Anna

    2012-08-01

    Regional healthcare performance evaluation systems can help optimize healthcare resources on a regional basis and improve the performance of the healthcare services provided. The Tuscany region in Italy is a good example of an institution that meets these requirements. China has yet to build such a system based on international experience. In this paper, based on comparative studies between Tuscany and China, we propose that the managing institutions in China's experimental cities select and commission a third-party agency to evaluate the performance of their affiliated hospitals and community health service centers, respectively. Following the Tuscan experience, the Chinese municipal healthcare performance evaluation system can be built by focusing on the selection of an appropriate performance evaluation agency, the design of an adequate performance evaluation mechanism, and the formulation of a complete set of laws, rules and regulations. Once a performance evaluation system at the city level is formed, the provincial government can extend the successful experience to other cities.

  19. Model Performance Evaluation and Scenario Analysis (MPESA)

    EPA Pesticide Factsheets

    Model Performance Evaluation and Scenario Analysis (MPESA) assesses the performance with which models predict time series data. The tool was developed for the Hydrological Simulation Program-Fortran (HSPF) and the Stormwater Management Model (SWMM).

  20. Quantitative analysis of phylloquinone (vitamin K1) in soy bean oils by high-performance liquid chromatography.

    PubMed

    Zonta, F; Stancher, B

    1985-07-19

    A high-performance liquid chromatographic method for determining phylloquinone (vitamin K1) in soy bean oils is described. Resolution of vitamin K1 from interfering peaks of the matrix was obtained after enzymatic digestion, extraction and liquid-solid chromatography on alumina. Isocratic reversed-phase chromatography with UV detection was used in the final stage. Quantitation was carried out by the standard addition method, and the recovery of the whole procedure was 88.2%.
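    The standard addition method above quantifies an analyte in a complex matrix by spiking the sample with known amounts of the standard, fitting signal against added amount, and reading the original content off the x-intercept. A minimal sketch with hypothetical peak areas, not the paper's data:

    ```python
    def fit_line(x, y):
        """Ordinary least-squares slope and intercept."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
                 / sum((a - mx) ** 2 for a in x))
        return slope, my - slope * mx

    added = [0.0, 1.0, 2.0, 3.0]     # µg vitamin K1 added (hypothetical)
    signal = [2.0, 3.0, 4.0, 5.0]    # peak area, arbitrary units (hypothetical)
    slope, intercept = fit_line(added, signal)
    conc = intercept / slope         # magnitude of x-intercept = original amount
    print(conc)  # → 2.0
    ```

    Because the calibration is built inside the sample matrix itself, matrix effects cancel; the measured value would then be corrected for the procedure's 88.2% recovery.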